Rocketbase illustration

Documentation

Introduction

Welcome to the RocketBase documentation. Here you will find information on how to use the RocketBase library and our pre-trained Rockets.

Landing a Rocket

A Rocket can be loaded using its Rocket Slug:

model = Rocket.land('username/modelName')

To use a Rocket you need to land it first. This can be done easily using the PIP package and the desired Rocket Slug, the unique identifier of each Rocket. Think of it as the GitHub repository URL.

The land method takes two parameters. The first is the Rocket Slug of the specific Rocket to land. The second is optional and selects the device on which the Rocket should run. Rockets can run on three different devices: CPU, GPU and API. CPU is the default device.

Launching a Rocket

At some point you might want to modify the landed Rocket or even create your own Rocket. You can push your new Rocket to the RocketBase.ai platform by using the launch functionality of the PIP package. This feature is in private beta at the moment. We will notify you when you can start using it. If you would like to join the private beta please send us an email at rocketbase@mirage.id.

Features coming soon:

Installation

The RocketBase python package works with Python 3.6+ and is built on the PyTorch deep learning framework.

To install RocketBase you must first install PIP.

After that, you can install the RocketBase PIP package using the following command in your terminal:

pip install rocketbase

Test your RocketBase installation

Run the following code as a python script to check your installation.

from rocketbase import Rocket
model = Rocket.land('igor/esrgan')
print('Success!')

To test whether your installation was successful, simply run the code above. The script should print Success! if everything worked.

What are Rockets?

A simple Rocket for image superresolution

import torch
from rocketbase import Rocket
from PIL import Image

img = Image.open('cat.png')

model = Rocket.land('igor/esrgan').eval()
with torch.no_grad():
  img_tensor = model.preprocess(img)
  out = model(img_tensor)
  img_out = model.postprocess(out, img)

A Rocket is a Deep Learning container packaging a standardized and independent deep learning model. On RocketBase.ai you will find already built and pretrained Rockets for the most common use cases in computer vision.

Each Rocket belongs to a Rocket family such as object detection, image classification, style transfer etc.

What is part of a Rocket?

Usually, working with Rockets consists of four parts. First, we need to land the Rocket:

model = Rocket.land('igor/esrgan')

Now, we need to prepare the input data such that we can use PyTorch:

input_tensor = model.preprocess(input_data)

After preprocessing the input we can run the Rocket:

out_tensor = model(input_tensor)

Finally, we want to convert the raw output into a standardized output format:

output_data = model.postprocess(out_tensor)

In order to simplify deep learning workflows we separate the model from the code. To make reusing models as simple as possible, they are packaged into so-called Rockets.

As you can see in the following illustration, a Rocket contains more than just the model. We also include functionality for data pre- and postprocessing. Some Rockets, marked as retrainable, also include the components necessary for retraining (e.g. a loss function).

Rocket parts illustration

A Rocket always contains:

Every Rocket is identified by its unique identifier, the so-called Rocket Slug.

Swapping a Model example

Rockets can easily be swapped out by changing the Rocket Slug.

# this code uses yolov3 for object detection
model = Rocket.land('lucas/yolov3')

# and this code uses retinanet for object detection
model = Rocket.land('igor/retinanet')

For example, we can swap out Rockets with a single line of code. This might sound trivial, but it is unusual in deep learning, where a code base is typically tightly coupled to specific models.

This also allows us to work on the deep learning model separately from the code base. The code you write can also be used for all Rockets from the same Rocket family without any change.

Code once, use everywhere

Using a Rocket on different devices

# automatically run on GPU if available, otherwise CPU
model = Rocket.land('lucas/yolov3')

# run on CPU
model = Rocket.land('lucas/yolov3', device='CPU')

# run on GPU
model = Rocket.land('lucas/yolov3', device='GPU')

# run via API (you get the api url from the RocketBase.ai platform)
model = Rocket.land('lucas/yolov3', device='API')

The standardization of a Rocket allows you to run any Rocket on a CPU or GPU. The only requirement is that PyTorch can run on the device.

But how about running a Rocket on a less powerful device without a GPU, like a Raspberry Pi?

Just add API as the device parameter of the Rocket.land(..) method. Your code will then automatically use the Rocket via an API provided by RocketBase.ai.

The input and output of the same Rocket should always match, no matter on which device you run it. This gives you a lot of additional freedom while prototyping.

Building a deep learning prototype on a Macbook and later sharing it with a friend who has a powerful GPU without any extra work suddenly becomes very easy.

Benchmarking Rockets example

Code to benchmark 3 different object detection Rockets

import time

import torch
from PIL import Image
from rocketbase import Rocket

# load a test image (the filename is a placeholder; any image works)
img = Image.open('image.jpg').convert('RGB')

# create our list of Rockets
models = []
models.append([Rocket.land('lucas/yolov3').eval(), 'yolov3'])
models.append([Rocket.land('igor/retinanet-resnet101-800px').eval(), 'retinanet101'])
models.append([Rocket.land('igor/retinanet').eval(), 'retinanet'])

for model, name in models:

  # do a `dry` run to make sure we have proper benchmarking results
  with torch.no_grad():
    img_tensor = model.preprocess(img)
    out_tensor = model(img_tensor)
    list_of_bboxes = model.postprocess(out_tensor, img)

  # track the processing time
  start_time = time.time()

  # now we can process some images with the same code
  with torch.no_grad():
    img_tensor = model.preprocess(img)
    out_tensor = model(img_tensor)
    list_of_bboxes = model.postprocess(out_tensor, img)

  # calculate the processing time
  processing_time = time.time() - start_time

  print(f'Running Rocket {name} on CPU took {processing_time:.2f} seconds')

Rockets allow for simpler workflows for benchmarking or replacing existing deep learning models. A Rocket can always be replaced by any other Rocket from the same Rocket family without any extra work.

For the example above we obtain the following inference times:

Rocket Name          CPU      GPU
lucas/yolov3         1.40s    0.14s
igor/retinanet101    13.73s   0.69s
igor/retinanet       43.24s   0.31s

Note that we are only benchmarking the runtime and not the accuracy of the Rockets.

A similar approach can be taken to find the optimal model based on your data. You can train different Rockets on the same dataset without rewriting the whole code.

Rocket Families

To further support the standardization process we group Rockets into so-called Rocket families. A Rocket family always shares the same input/output interface. In this section we introduce the different Rocket families and provide some additional information on how to use the Rockets.

Image Object Detection

Detecting objects in images or videos is a very common task in computer vision. To help you with object detection problems, we provide state-of-the-art models packaged as Rockets.

Input and Output

Rockets in this family work on images. Input images are resized automatically, so you don't have to worry about that. The postprocess method also requires the original input image as an additional argument; it is used to scale the bounding boxes back to the original image size and for the optional visualization.

input of preprocess    output of postprocess
PIL.Image              list of dictionaries with elements
                       [{topLeft_x, topLeft_y, width, height, bbox_confidence, class_name, class_confidence}, ...]
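Because the output format is standardized plain Python data, it can be consumed without any deep learning code. A minimal sketch of working with this format, with made-up detection values for illustration:

```python
# sample detections in the standardized object detection output format;
# the numbers are made up for illustration
detections = [
    {'topLeft_x': 10, 'topLeft_y': 20, 'width': 50, 'height': 80,
     'bbox_confidence': 0.91, 'class_name': 'cup', 'class_confidence': 0.88},
    {'topLeft_x': 120, 'topLeft_y': 40, 'width': 60, 'height': 90,
     'bbox_confidence': 0.35, 'class_name': 'cup', 'class_confidence': 0.30},
    {'topLeft_x': 200, 'topLeft_y': 15, 'width': 40, 'height': 70,
     'bbox_confidence': 0.87, 'class_name': 'person', 'class_confidence': 0.84},
]

def filter_detections(bboxes, min_confidence=0.5):
    """Keep only detections with a bounding-box confidence above a threshold."""
    return [b for b in bboxes if b['bbox_confidence'] >= min_confidence]

confident = filter_detections(detections)
print([b['class_name'] for b in confident])  # → ['cup', 'person']
```

The same filtering code works with the output of any Rocket in this family, since they all share this interface.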

Object detection illustration

Additionally, we provide a simple way to visualize the output detections by setting the visualize flag in postprocess to True.

Object detection visualize illustration

Counting Cups in an Image using Rockets

Count cups in an image using a Rocket

import torch
from PIL import Image
from rocketbase import Rocket

# load an image using PIL
image_path = 'image_with_cups.jpg'
img = Image.open(image_path).convert('RGB')

# land the Rocket
rocket = 'lucas/yolov3'
model = Rocket.land(rocket).eval()

# process the image with our Rocket
with torch.no_grad():
    img_tensor = model.preprocess(img)
    out = model(img_tensor)
    bboxes_out = model.postprocess(out, img)

# create a counter to keep track of the cups
cup_counter = 0

# loop through all objects detected and count cups
for bbox in bboxes_out:
    class_name = bbox['class_name']
    if class_name == 'cup':
        cup_counter += 1

print(f'The image contains {cup_counter} cups.')

To show how an object detection Rocket can be used, we will build a Python script that counts the number of cups in an image. Imagine a shared kitchen where we want to keep track of how many cups are in the kitchen sink instead of the dishwasher.

Here are the parts we need:

Image Superresolution

Turning a low-resolution image into HD sounds like science fiction. But with a Rocket, it can be done in just a few lines of code.

Input and Output

Rockets in this family work with images.

input of preprocess    output of postprocess
PIL.Image              PIL.Image

Superresolution illustration

Selfie Enhancer using Rockets

Improve picture quality using a Rocket

import torch
from PIL import Image
from rocketbase import Rocket

# load an image using PIL
image_path = 'selfie.png'
img = Image.open(image_path).convert('RGB')

# land the Rocket
rocket = 'igor/esrgan'
model = Rocket.land(rocket).eval()

# process the image with our Rocket
with torch.no_grad():
  img_tensor = model.preprocess(img)
  out = model(img_tensor)
  img_out = model.postprocess(out, img)

# save high definition image
img_out.save('selfie_HD.png')

In this example we will create a selfie enhancer: a simple Python script that improves the quality of an image.

Here are the parts we need:

You will notice that superresolution Rockets have images as both input and output.

Image Style Transfer

Image Style Transfer Rockets can be used to transfer the style of an input image A to match the style of an image B. The type of style varies depending on the specific Rocket; a style transformation can range from a simple color transformation to replacing complete faces.

Input and Output

Image Style Transfer Rockets have a quite simple interface: both input and output are images.

input of preprocess    output of postprocess
PIL.Image              PIL.Image

Turning Summer into Winter

Turn summer into winter using Rockets

import torch
from PIL import Image
from rocketbase import Rocket

# load an image using PIL
image_path = 'summer-image.png'
img = Image.open(image_path).convert('RGB')

# land the Rocket
rocket = 'igor/cycle_gan_summer2winter_yosemite'
model = Rocket.land(rocket).eval()

# process the image with our Rocket
with torch.no_grad():
    img_tensor = model.preprocess(img)
    out = model(img_tensor)
    img_out = model.postprocess(out, img)

# save the winter image
img_out.save('winter.png')

Here we show you the result of using igor/cycle_gan_summer2winter_yosemite to transfer summer images so that they look like winter images.

Style transfer illustration