Image Labeling
Label Images
Image Labeling Tool
Labeling Tips
To draw a polygon, press Enter or New object (shortcut N). The following click will start a new object. Add or move a vertex by selecting a polygon and clicking Edit (shortcut E). To move a vertex, click on it and drag. You can add a new vertex by clicking the projection of a new vertex when close to the border of the image.
Create a new, spatially separate polygon by pressing New polygon in group (shortcut G); it will be part of the same object. This is handy for labeling multipart objects.
To create a hole, click New hole (shortcut H); holes can be edited just like polygons. Parts of the image covered by a hole are not part of the object the hole belongs to.
Polylines are a selection of points connected by lines, useful for labeling parts that are defined by structure and not the exact shape.
To draw a polyline, press Enter or New object (shortcut N). The following click will start a new object. Points can be moved by selecting the polyline and clicking Edit (shortcut E). To add points, click Add points (shortcut K); each new point will be connected to the previous one. To remove a point, click on it and press Remove points (shortcut R). The polyline will redraw itself, remaining connected.
A bitmap is a freeform hand-drawn mask used to label objects that are complex in shape.
Bitmaps are drawn with a paint-like brush and do not have to be connected in any way to form an object. To draw a bitmap, press Enter or New object (shortcut N). The following click will start a new object. To add to an existing bitmap, select it and press Draw (shortcut E). To change from the brush to the eraser, select a bitmap and press Erase (shortcut R). To fill the mask so that it becomes solid, press Fill (shortcut G) and click inside the closed bitmap shape.
Shared Features
Smart labeling tool
- Select a rectangular area of work for the tool
- Mark the foreground. A few lines are enough for clear, contrasting backgrounds in the image. For regions with complex colors that may blend with the background it might be better…
- Mark background.
- Click Extract so that the tool marks the foreground. Ensure all desired parts of the image are covered.
- Repeat steps 2-4 until you are happy with the outcome.
- Press Done once you are finished. The bitmap tool can be used to make additional touch-ups.
Classification labeling
- Select images that have the desired label, and label these again using ‘+’ with that label. The default label will then change for all of those images.
Project sharing & user management
Filtering tools
Image Similarity Search
- 1vN that finds similar images to a single query image
- NvN that finds most similar image pairs in your data set
In all types of image similarity search, you can optionally specify the maximum number of results to be displayed, as well as the similarity score threshold. Additionally, to reduce the search space, you can specify one or more labels to filter the images in your data set before the search runs.
You can choose between “and” and “or” operators when filtering images by labels. All of these options can be specified as GET parameters when formatting the URL for the request.
- API token
- Project ID
To use the 1vN similarity search you will need an image file, either from an existing project or stored in your computer.
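The GET parameters described above can be assembled into a request URL. The sketch below is illustrative only: the endpoint path and the parameter names (`limit`, `threshold`, `labels`, `operator`) are assumptions, not the documented names, so check the user guide for the exact API.

```python
from urllib.parse import urlencode

def build_similarity_query(project_id, max_results=None, threshold=None,
                           labels=None, label_operator="and"):
    """Build a query string for a 1vN similarity search request.

    Parameter names here are illustrative assumptions; consult the
    user guide for the names the API actually expects.
    """
    params = {}
    if max_results is not None:
        params["limit"] = max_results        # max number of results shown
    if threshold is not None:
        params["threshold"] = threshold      # similarity score cut-off
    if labels:
        params["labels"] = ",".join(labels)  # restrict the search space
        params["operator"] = label_operator  # "and" / "or" label filtering
    base = f"https://platform.sentisight.ai/api/similarity/{project_id}"
    return f"{base}?{urlencode(params)}" if params else base
```

The resulting URL would then be used in a request authenticated with your API token.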
For detailed information on using image similarity search via the REST API, with code examples in multiple programming languages, please refer to our user guide.
To use image similarity search offline, you will have to download an offline version of the image similarity search model. To do so, click “Pre-trained models -> Image Similarity Search -> Download Model”. Once the model has downloaded, follow the instructions in the readme.md to set up your local REST API server.
Note that the REST API server must be run on a Linux system, but the client devices can run on any operating system. For more information regarding offline image similarity search, please visit our user guide.
Image Classification
- Upload your images
- Label your images with objects or concepts you want the network to learn to recognize
- Train your model using the SentiSight.ai platform
- Use the trained model to make predictions on new images.
Uploading and Labeling Images
- Click on the label of interest on the image, using your mouse
- Select some images that already have the label of interest, and label these again using the ‘+’ button using the same label. This will change the default label for all of those images.
- CSV: The first field in each row should be the filename of the image, and the remaining fields should be the image labels.
- JSON: The file should have these fields:
- name - image name
- mainClassificationLabel - single label that acts as the image's default label for the purpose of single-label model training
- classificationLabels - array of assigned classification labels
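For illustration, a JSON labels file with these fields might look like the following (the file names and labels are placeholders):

```json
[
  {
    "name": "dog_01.jpg",
    "mainClassificationLabel": "dog",
    "classificationLabels": ["dog", "animal"]
  },
  {
    "name": "cat_01.jpg",
    "mainClassificationLabel": "cat",
    "classificationLabels": ["cat", "animal"]
  }
]
```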
Training your classification model
- Single-label classification: best-suited when each image contains a single object/concept. For instance, differentiating between multiple dog breeds (Bulldog, German Shepherd, Poodle etc). In this case, the image should only contain one dog.
- Multi-label classification: best-suited when an image contains multiple objects or concepts. For instance, identifying several different animals within the same image (e.g. dog, cat, chicken, pig etc.). Multi-label is also ideal for recognizing several abstract attributes of the same image (e.g. the expression, skin color and gender of a person).
- Validation set size (%): the percentage of uploaded images set aside for validation rather than training.
- Use user-defined validation set: instead of an automatic percentage split, the model uses the images you have marked for validation. Images can be marked via the ‘Add to validation set’ option in the right-click menu.
- Learning rate: modifies the rate at which the model weights are updated throughout the training.
- Upload images containing a label which requires classification.
- Upload a group of ‘background’ images which do not contain the chosen object. Aim for a diversity of background images similar to what you expect in the production usage of this model.
- You can then begin training, selecting single-label classification.
- Once the model has finished training, click ‘View training statistics’, then ‘Show predictions’. You will see images classified as ‘background’ or your label.
- Single-label prediction percentages of all labels add up to 100%.
- Multi-label predictions have a minimum score threshold. If this threshold is exceeded then the prediction is known as ‘positive’, and ‘negative’ if the prediction falls below the threshold.
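The two scoring schemes can be sketched in Python. The softmax normalization and the example numbers are illustrative, not the platform's actual implementation:

```python
import math

def softmax_percentages(scores):
    """Single-label: raw scores are normalized so all labels sum to 100%."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [100 * e / total for e in exps]

def multi_label_decisions(scores, threshold=50.0):
    """Multi-label: each label is judged independently against a score
    threshold; above it the prediction is 'positive', below it 'negative'."""
    return {label: ("positive" if s >= threshold else "negative")
            for label, s in scores.items()}

single = softmax_percentages([2.0, 1.0, 0.1])   # percentages summing to 100
multi = multi_label_decisions({"dog": 87.5, "cat": 12.3, "pig": 55.0})
```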
Making Predictions
- Using the web interface. This is the easiest and fastest way to test your model but it’s not suitable if you want to automate the process.
- Using our online REST API. The idea is that you train the model using our web interface and then send requests with your images to our online REST API server to get back the predictions for those images. You can send the requests from any operating system, even from mobile devices, as long as they are connected to the internet.
- Downloading the offline model and setting up your own REST API server. In this case, the REST API server has to be set up on a Linux operating system and you will need an NVIDIA GPU card to reach the maximum speed. The client devices can still run on any operating system, including mobile devices. If everything is correctly set up, this option has the potential to reach a faster speed than the online web interface or the online REST API.
- For new images: open predictions window by either clicking ‘Make a new prediction’ button in the Trained Models dropdown, or clicking ‘Make a new prediction’ in the Model statistics window.
- For existing images: right-click on an image in the project and select Predict, then choose your preferred model.
- API token (under ‘User profile’ menu tab)
- Project ID (under ‘User profile’ menu tab)
- Model name (under ‘Trained models’ menu)
Use this endpoint:
https://platform.sentisight.ai/api/predict/{your_project_id}/{your_model_name}/
If you prefer to assign the ‘last model’ checkpoint for making your predictions use this:
https://platform.sentisight.ai/api/predict/{your_project_id}/{your_model_name}/last
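A minimal Python sketch of calling this endpoint is shown below. The `X-Auth-token` header name and sending raw image bytes in the request body are assumptions; check the REST API user guide for the exact request format.

```python
def predict_request(project_id, model_name, api_token, use_last=False):
    """Return the endpoint URL and headers for a prediction request.

    The 'X-Auth-token' header name is an assumption; consult the REST API
    user guide for the exact authentication header.
    """
    url = f"https://platform.sentisight.ai/api/predict/{project_id}/{model_name}/"
    if use_last:
        url += "last"  # use the 'last model' checkpoint instead of the best one
    headers = {"X-Auth-token": api_token,
               "Content-Type": "application/octet-stream"}
    return url, headers

# Sending the actual request (requires the third-party 'requests' package):
# import requests
# url, headers = predict_request("12345", "my-model", "MY_TOKEN")
# with open("image.jpg", "rb") as f:
#     response = requests.post(url, headers=headers, data=f.read())
# print(response.json())
```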
- Download an offline version of the model: click Download model on the ‘View training statistics’ page.
- Follow the instructions in README.html to set up your own REST API server (note: the server runs only on Linux operating systems). Client requests can be made either from the same PC, so the model can run fully offline, or from any other device (e.g. mobile). Client devices can run any operating system.
Object Detection
Basics of Bounding Box labeling
It is a good idea to label the images for classification as you upload them, because the classification labels will be suggested as names for the bounding boxes. This speeds up bounding box labeling, provided that the classification labels and object detection labels match.
Additionally, you can use hotkeys 1-9 to quickly change the label of the selected object to one of the existing labels. You can see which hotkey corresponds to which label in the labeling settings (see the "Labeling tool setting" section below), where you can also assign label names to hotkeys.
Note that if you don't want to label all the images in the project, you can either select a subset of them or use filters; the labeling tool will ignore images that are not selected or are filtered out. By default, the labeling tool iterates through all of the images.
Selecting Parameters
Basic users can set two parameters:
- The model training time
- The time after which the model would stop training if there is no improvement in the performance
The above parameters are usually enough to train a good model. However, if you are an advanced user, you might want to set some additional parameters. To access them, turn on the advanced view. The parameters that you will be able to select and customize include:
- Use User-defined validation set
- Change the validation set size percentage
- Learning rate
- Model size (small, medium or large)
Training Object Detection Model
The standard training time for an object detection model is significantly longer than that for a classification model.
The default training time for object detection models depends on the number of different classes in the training set (1-2 classes: 2 hours, 3-5 classes: 3 hours, 6-10 classes: 6 hours, 11+ classes: 12 hours).
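This schedule can be written as a simple lookup (a sketch of the stated defaults, not platform code):

```python
def default_training_hours(num_classes):
    """Default object detection training time by class count, per the
    schedule above: 1-2 classes: 2 h, 3-5: 3 h, 6-10: 6 h, 11+: 12 h."""
    if num_classes <= 2:
        return 2
    if num_classes <= 5:
        return 3
    if num_classes <= 10:
        return 6
    return 12
```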
Analysing Learning Curve
In object detection model training you can check the learning curves at any time, to see how the training is going. You can also decide to stop the training early if you do not see any improvement in the learning curves.
After the model is trained, you can find the final learning curves in the model info window.
For more information on learning curves, please visit analyzing the learning curve and early stopping the training.
Analysing statistics and predictions
After the model has been trained, you can view the model’s performance by clicking on View training statistics from the “Trained models” menu. You can also click Show predictions to see the actual predictions for specific images, for either the train or validation set.
For more information on model statistics and predictions, please visit analyzing the model’s performance.
Analysing precision-recall curve
Changing score thresholds
The score threshold determines when a prediction is considered positive and a bounding box is drawn. For example, if the score threshold is 50%, all bounding boxes whose score is above 50% are drawn. When you increase the score threshold, fewer bounding boxes will be drawn, but they will be more likely to be correct, thus increasing the precision. Conversely, when you decrease the score threshold, more bounding boxes will be drawn, each of which will be less likely to be correct, but they will cover a larger share of the ground truth bounding boxes, thus increasing the recall.
By default, the score threshold is optimized to maximize the F1 value, and it is visualized by the red dashed line on the precision-recall curve. You can enter your own threshold by unchecking ‘Use optimized thresholds’ and clicking anywhere on the precision-recall curve or entering a value into the text box.
Note: score thresholds change simultaneously both for training and validation sets.
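The thresholding step itself is simple to sketch in Python; the prediction data below is made up for illustration:

```python
def filter_boxes(predictions, score_threshold):
    """Keep only bounding boxes whose score is above the threshold."""
    return [p for p in predictions if p["score"] > score_threshold]

boxes = [
    {"label": "dog", "score": 0.92, "box": (10, 10, 80, 60)},
    {"label": "dog", "score": 0.55, "box": (100, 20, 160, 90)},
    {"label": "cat", "score": 0.35, "box": (30, 120, 70, 170)},
]
high_precision = filter_boxes(boxes, 0.50)  # fewer, more reliable boxes
high_recall = filter_boxes(boxes, 0.30)     # more boxes, more ground truth covered
```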
Making Predictions
- Using the web interface. This is the easiest and fastest way to test your model but it’s not suitable if you want to automate the process.
- Using our online REST API. The idea is that you train the model using our web interface and then send requests with your images to our online REST API server to get back the predictions for those images. You can send the requests from any operating system, even from mobile devices, as long as they are connected to the internet.
- Downloading the offline model and setting up your own REST API server. In this case, the REST API server has to be set up on a Linux operating system and you will need an NVIDIA GPU card to reach the maximum speed. The client devices can still run on any operating system, including mobile devices. If everything is correctly set up, this option has the potential to reach a faster speed than the online web interface or the online REST API.
Downloading model or using it online
- API token (available under "User profile" menu tab)
- Project ID (available under "User profile" menu tab)
- Model name (shown in many places, for example, under "Trained models" menu)
- Download an offline version of the model: click Download model on the ‘View training statistics’ page.
- Follow the instructions in README.html to set up your own REST API server (note: the server runs only on Linux operating systems). Client requests can be made either from the same PC, so the model can run fully offline, or from any other device (e.g. mobile). Client devices can run any operating system.
- You will be able to run the offline model version for 30 days; afterwards you will need to buy a license.