Welcome to LOST’s documentation!

_images/LOSTFeaturesIn40seconds.gif

LOST features in a nutshell.

About LOST

LOST (Label Object and Save Time) is a flexible web-based framework for semi-automatic image annotation. It provides multiple annotation interfaces for fast image annotation.

LOST is flexible since it lets you run user-defined annotation pipelines in which different annotation interfaces/tools and algorithms can be combined in one process.

It is web-based since the whole annotation process is visualized in your browser. You can quickly set up LOST with Docker on your local machine or run it on a web server to make an annotation process available to your annotators around the world. LOST lets you organize label trees, monitor the state of an annotation process and do annotations inside the browser.

LOST was especially designed to model semi-automatic annotation pipelines that speed up the annotation process. Such semi-automation can be achieved by presenting AI-generated annotation proposals to an annotator inside the annotation tool.

Getting Started

Setup LOST

LOST releases are hosted on DockerHub and shipped as Docker containers. See quick-setup for more information.

Getting Data into LOST

Image Data

In the current version there is no GUI to load images into LOST, so we will use the command line or a file explorer instead. An image dataset in LOST is just a folder with images. LOST will recognize every folder located at path_to_lost/data/data/media in your filesystem as a dataset. To add your dataset, just copy it to that path, e.g.:

# Copy your dataset into the LOST media folder
cp -r path/to/my/dataset path_to_lost/data/data/media

# You may need to copy as a superuser, since the Docker container
# that runs LOST operates as root and owns the media folder.
sudo cp -r path/to/my/dataset path_to_lost/data/data/media

LabelTrees

Labels are organized in LabelTrees. Each LabelLeaf needs at least a name. Optional information for a LabelLeaf is a description, an abbreviation and an external ID (e.g. from another system). LOST provides a GUI to create and edit LabelTrees, as well as an import of LabelTrees defined in a CSV file via the command line; a sketch of such a file is shown below. To edit LabelTrees in LOST you need to log in as a user with the Designer role. After login, click on the Labels button in the left navigation bar.
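As a sketch of what such a CSV could look like, the columns below mirror the LabelTree DataFrame format used by the pyapi (see import_df() and possible_label_df in the reference); the exact column set may differ between LOST versions and all label names are illustrative:

idx,name,abbreviation,description,external_id,parent_leaf_id,is_root
1,animals,,root of the tree,,,True
2,cat,,,,1,False
3,dog,,,,1,False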

Users, Groups and Roles

There are two main user roles in LOST: a Designer and an Annotator role. Both roles have different views and access to information. An Annotator's job is to work on the annotation tasks assigned to them, while a Designer may do more advanced things in addition to everything an Annotator can do. For example, a Designer will start annotation pipelines and choose or edit LabelTrees for the annotation tasks.

Independent of their role, a user can be part of one or multiple user Groups. In this way annotation tasks can be assigned to Groups of users that work collaboratively on the same task.

In order to manage users and groups, click on the Users icon on the left menu bar. Please note that only users with the role Designer are allowed to manage users.

Starting an Annotation Pipeline

All annotation processes in LOST are modeled as pipelines. Such a pipeline defines the order in which specific pipeline elements will be executed. Possible elements are Datasources, Scripts, AnnotationTasks, DataExports and VisualOutputs.

Each version of LOST is equipped with a selection of standard pipelines that can be used as a quick start to annotate your data. In order to start an annotation pipeline you need to be logged in as a user with the Designer role and click on the Start Pipeline button in the left navigation bar. You will then see a table of pipelines that can be started.

After selecting a pipeline by clicking on its row in the table, you need to configure it. A visualization of the selected pipeline will be displayed. In most cases a Datasource is the first element of a pipeline. Click on it and select an available dataset. After a click on the OK button the pipeline element will turn green to indicate that the configuration was successful.

The next element you need to look for is an AnnotationTask. After clicking on it, a wizard will pop up and guide you through the configuration of this AnnotationTask. In the first step a name and instructions for the AnnotationTask can be defined. Click on the next button and select a user or group of users that should perform this AnnotationTask. Next, a LabelTree needs to be selected by clicking on a specific tree in the table; a visualization of the LabelTree will then be displayed. Here you can select a subset of labels to be used for the AnnotationTask. The idea is that each parent leaf represents a category that can be selected to use all of its direct child leafs as labels. So if you click on a leaf, all direct child leafs will be used as possible labels for the AnnotationTask. It is possible to select multiple leafs as label categories. After selecting the label subset, click on OK and the configuration of this AnnotationTask is done.

Now visit all other elements that have not been configured yet (indicated by a yellow color) and move on to the next step in the wizard. Here you can enter a name and a description for your pipeline. After entering this information you can click on the checkmark symbol to get to the Start Pipe button. With a click on this button your annotation pipeline will be started :-)

You can monitor the state of all running pipelines on your Designer dashboard. To get to a specific pipeline click on the Dashboard button in the left navigation bar and select a pipeline in the table.

Out of the box pipelines

#TODO GR

Annotate Your Images

Once your pipeline has requested all annotations for an AnnotationTask, the selected annotators will be able to work on it. If you are logged in as a user with the Designer role you can switch to the annotator view by clicking on the Annotator button in the upper right corner of your browser. You will be redirected to the annotator dashboard. If you are logged in as a user with the Annotator role you will see this dashboard directly after login.

Here you can see a table with all AnnotationTasks available to you. Click on a task you want to work on and you will be redirected to one of the annotation tools (see also the For Annotators chapter). Instructions will pop up and you are ready to annotate.

Download Your Annotation Results

Instant Annotation Export

#TODO GR

Out Of The Box pack_dataset Pipeline

#TODO GR

Data Export Pipeline Element

All example pipelines in LOST have a Script element that will export your annotations to a CSV file when the annotation process has finished. To download this file, go to the Designer dashboard in the Designer view and select a pipeline. A visualization of the annotation process will be displayed. Look for a DataExport element and click on it. A popup will appear that shows all files available for download. Click on a file and the download will start.

Managing Annotation Pipelines

Datasources

#TODO Datasources

Label Trees

Start Pipeline

Choose Datasource

#TODO

Configure AnnotationTask

Configure Script arguments

Running Pipelines

Instant Annotation Export

Review Annotations

Designer Statistics

For Annotators

#TODO GR Review

Your Dashboard

_images/annotator-dashboard.png

Figure 1: The annotator dashboard.

In Figure 1 you can see an example of the annotator dashboard. At the top, the progress and some statistics of the currently selected AnnotationTask are shown.

The table at the bottom lists all AnnotationTasks available to you. A click on a specific row will take you to the annotation tool required to accomplish the selected AnnotationTask. Rows with a grey background mark finished tasks and cannot be selected to work on.

Getting To Know SIA - A Single Image Annotation Tool

SIA was designed to annotate single images with Points, Lines, Polygons and Boxes. A class label can also be assigned to each of these annotations.

Figure 2 shows an example of the SIA tool. At the top you can see a progress bar with some information about the current AnnotationTask. Below this bar the actual annotation tool is presented. SIA consists of three main components: the canvas, the image bar and the tool bar.

_images/sia-example.png

Figure 2: An example of SIA.

_images/sia-canvas.png

Figure 3: An example of the SIA canvas component. It presents the image to the annotator. By right click, you can draw annotations on the image.

_images/sia-image-bar.png

Figure 4: The image bar component provides information about the image: the filename of the image and the id of this image in the database, followed by the number of the image in the current annotation session and the overall number of images to annotate, and finally the label that was assigned to the whole image, if provided.

_images/sia-toolbar.png

Figure 5: The toolbar provides a control to assign a label to the whole image, navigation between images, buttons to select the annotation tool, a button to toggle SIA's fullscreen mode, a junk button to mark the whole image as junk that should not be considered, a control to delete all annotations in the image, a settings button and a help button.

Warning

There may also be tasks where you cannot assign a label to an annotation. The designer of a pipeline can decide that no class labels should be assigned.

Warning

Please note that there may also be tasks where no new annotations can be drawn and where you can only delete or adjust existing annotations.

Note

Please note that not all tools may be available for all tasks. The designer of a pipeline can decide to allow only specific tools.

Meet MIA - A Multi Image Annotation Tool

MIA was designed to annotate clusters of similar objects or images. The idea is to speed up the annotation process by assigning a class label to a whole cluster of images. The annotator's task is to remove images that do not belong to the cluster by clicking on them. When all wrong images are removed, the annotator assigns the same label to the remaining images.

As an example, in Figure 7 the annotator clicked on the car since it does not belong to the cluster of aeroplanes; the car is therefore grayed out. The annotator then moved on to the label input field and selected Aeroplane as the label for the remaining images. Finally, the annotator needs to click on the Submit button to complete this annotation step.

_images/mia-example.png

Figure 7: An example of MIA.

Figure 8 shows the left part of the MIA control panel. You can see the label input field and the currently selected label in a red box.

_images/mia-controls1.png

Figure 8: Left part of the MIA control panel.

In Figure 9 the right part of the MIA control panel is presented. The blue submit button on the left can be used to submit the annotations.

On the right part of the figure there is a reverse button to invert your selection: if it were clicked in the example of Figure 7, the car would be selected for annotation again and all aeroplanes would be grayed out. Next to the reverse button there are two zoom buttons that can be used to scale all presented images simultaneously. Next to the zoom buttons there is a dropdown named amount, where the annotator can select the maximum number of images presented at the same time within the cluster view.

_images/mia-controls2.png

Figure 9: Right part of the MIA control panel.

In some cases the annotator may want to have a closer look at a specific image of the cluster. To zoom a single image, double-click on it. Figure 10 shows an example of a single image zoom. To scale the image back to its original size, double-click again.

_images/mia-example-zoom.png

Figure 10: Zoomed view of a specific image of the cluster.

Admin Area

User & Groups

#TODO GR

Management

Users and groups can be added via the "Users" section. Each created user gets a default group with the same name as the username. No additional users can be added to this default group. Groups that are added manually can be assigned to any number of users.

Visibility

Pipeline

Pipelines can be assigned to a group or to the starting user's own account when they are started. Only groups to which the user is assigned can be selected. These pipelines will later only be visible to the selected group or user.

Label Trees

Label Trees are visible system-wide across all applications.

AnnoTasks

AnnoTasks can be assigned either to your own user or to a group when starting a pipeline. Only groups to which the user is assigned can be selected.

Pipeline Templates

Pipeline Templates are visible system-wide across all applications.

Import & Export Pipelines

#TODO GR

Global LabelTrees

#TODO GR

Global Datasources

#TODO GR

JupyterLab

#TODO GR

Developing Pipelines

LOST Ecosystem

All about Pipelines

#TODO JJ Review

Pipeline Projects

A pipeline project in LOST is defined as a folder or git repository that contains pipeline definition files in JSON format and related Python 3 scripts. Additionally, other files that the scripts of a pipeline need to access can be placed in this folder.

Pipeline Project Examples

Pipeline project examples can be found here: LOST Out of the Box Pipelines

Repository Structure
Example repo structure of a lost pipeline project.
lost_ootb_pipes/
├── found
│   ├── cluster_using_prev_stage.py
│   ├── __init__.py
│   ├── mia.json
│   ├── mia_request_again.json
│   ├── mia_sia.json
│   ├── request_annos_again.py
│   ├── request_annos.py
│   ├── request_images_by_lbl.py
│   ├── sia.json
│   ├── sia_request_again.json
│   └── two_stage.json
├── LICENSE
└── README.md

1 directory, 13 files

The listing above shows an example of a pipeline directory structure, where the root folder lost_ootb_pipes is the repo name and found is the name of the pipeline project. found contains all files required for the pipelines of the pipeline project. Within the project there are JSON files, each of which represents a pipeline definition. A pipeline is composed of different scripts (request_annos.py, request_annos_again.py, request_images_by_lbl.py) and other pipeline elements. Some of the scripts may require a special Python package you have written. If you want to use such a package (e.g. my_special_python_lib), just place it inside the pipeline project folder as well. Sometimes it is also useful to place other files into the project folder, for example a pretrained AI model that should be loaded inside a script.

Importing a Pipeline Project into LOST

After creating a pipeline it needs to be imported into LOST. In order to do that, you need to perform the following steps:

  1. Log into LOST as Admin
  2. Go to Admin Area
  3. Click on the Pipeline Projects tab
  4. Click on Import pipeline project button
  5. Click on Import/ Update pipeline project from a public git repository
  6. Add the url of the pipeline project you like to import
  7. Click on Import/ Update
_images/pipe_import.png

Pipeline import GUI

Updating a LOST Pipeline

If there was an update for one of your pipelines you need to update your pipeline project in LOST. The procedure is the same as for importing a pipeline.

Namespacing

When importing or updating a pipeline project in LOST the following namespacing will be applied to pipelines: <name of pipeline project folder>.<name of pipeline json file>. In the same way scripts will be namespaced internally by LOST: <name of pipeline project folder>.<name of python script file>.

So in our example the pipelines would be named found.mia, found.mia_request_again, and so on.

Pipeline Definition Files

Within the pipeline definition file you define your annotation process. Such a pipeline is composed of different standard elements that are supported by LOST like datasource, script, annotTask, dataExport, visualOutput and loop. Each pipeline element is represented by a json object inside the pipeline definition.

As you can see in the example, the pipeline itself is also defined by a JSON object. This object has a description, an author, a pipe-schema-version and a list of pipeline elements. Each element object has a peN (pipeline element number), which is the identifier of the element itself. An element also needs an attribute called peOut that contains a list of the elements to which the current element is connected.

An Example
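The following minimal sketch chains a Datasource, a Script, an AnnoTask and a DataExport. It is reconstructed from the element definitions below; the name of the element list key (elements), the value of pipe-schema-version, the script paths and all names are illustrative assumptions:

{
  "description": "Request MIA annotations for a datasource and export the results",
  "author": "your name",
  "pipe-schema-version": 1.0,
  "elements": [
    {
      "peN": 0,
      "peOut": [1],
      "datasource": {"type": "rawFile"}
    },
    {
      "peN": 1,
      "peOut": [2],
      "script": {
        "path": "request_annos.py",
        "description": "Request annotations for all images of the datasource"
      }
    },
    {
      "peN": 2,
      "peOut": [3],
      "annoTask": {
        "type": "mia",
        "name": "Cluster annotation",
        "instructions": "Remove all images that do not belong to the cluster.",
        "configuration": {
          "type": "imageBased",
          "showProposedLabel": false,
          "drawAnno": false,
          "addContext": 0.0
        }
      }
    },
    {
      "peN": 3,
      "peOut": [4],
      "script": {
        "path": "export_annos.py",
        "description": "Write all annotations to a CSV file"
      }
    },
    {
      "peN": 4,
      "peOut": null,
      "dataExport": {}
    }
  ]
}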

Possible Pipeline Elements

Below you will find the definition of all possible pipeline elements in LOST.

 {
   "peN" : "[int]",
   "peOut" : "[list of int]|[null]",
   "datasource" : {
     "type" : "rawFile"
   }
 }

Datasource elements are intended to provide datasets to Script elements. More specifically, a Datasource provides a path inside the LOST system; in most cases this is a path to a folder with images that should be annotated. The listing above shows the definition of a Datasource element. Currently only the type rawFile is supported, which provides a path.

 {
   "peN" : "[int]",
   "peOut" : "[list of int]|[null]",
   "script" : {
     "path": "[string]",
     "description" : "[string]"
   }
 }

Script elements represent python3 scripts that are executed as part of your pipeline. In order to define a Script you need to specify a path to the script file relative to the pipeline project folder and a short description of your script.

 {
   "peN" : "[int]",
   "peOut" : "[list of int]|[null]",
   "annoTask" : {
     "type" : "mia|sia",
     "name" : "[string]",
     "instructions" : "[string]",
     "configuration":{"..."}
   }
 }

An AnnoTask represents an annotation task for a human-in-the-loop. Scripts can request annotations for specific images that will be presented in one of the annotation tools in the web gui.

Right now two types of annotation tools are available. If you set type to sia, the single image annotation tool will be used for annotation. When choosing mia, the images will be presented in the multi image annotation tool.

An AnnoTask also requires a name and instructions for the annotator. Depending on the type, a specific configuration is required.

If “type” is “mia” the configuration will be the following:

 {
   "type": "annoBased|imageBased",
   "showProposedLabel": "[boolean]",
   "drawAnno": "[boolean]",
   "addContext": "[float]"
 }
MIA configuration:
  • type
    • If imageBased a whole image will be presented in the clustered view.
    • If annoBased all lost.db.model.TwoDAnno objects related to an image will be cropped and presented in the clustered view.
  • showProposedLabel
    • If true, the assigned sim_class will be interpreted as label and be used as pre-selection of the label in the MIA tool.
  • drawAnno
    • If true and type : annoBased the specific annotation will be drawn inside the cropped image.
  • addContext
    • If type : annoBased and addContext > 0.0, a margin of pixels will be added around the annotation when it is cropped. The number of added pixels is calculated relative to the image size, so if you set addContext to 0.1, 10 percent of the image size will be added to the crop. This setting is useful to give the annotator more visual context during the annotation step.
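For example, a MIA configuration that crops annotations, draws each annotation inside its crop and adds 10 percent context could look like this (a sketch based on the schema above):

 {
   "type": "annoBased",
   "showProposedLabel": false,
   "drawAnno": true,
   "addContext": 0.1
 }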

If “type” is “sia” the configuration will be the following:

 {
   "tools": {
           "point": "[boolean]",
           "line": "[boolean]",
           "polygon": "[boolean]",
           "bbox": "[boolean]",
           "junk": "[boolean]"
   },
   "annos":{
       "multilabels": "[boolean]",
       "actions": {
           "draw": "[boolean]",
           "label": "[boolean]",
           "edit": "[boolean]",
       },
       "minArea": "[int]",
       "maxAnnos": "[int or null]"
   },
   "img": {
       "multilabels": "[boolean]",
       "actions": {
           "label": "[boolean]",
       }
   }
 }
SIA configuration:
  • tools
    • Inside the tools object you can select which drawing tools are available and whether the junk button is present in the SIA GUI. You may choose either true or false for each of the tools (point, line, polygon, bbox, junk).
  • annos (configuration for annotations on the image)
    • actions
      • If draw is set to false, a user may not draw any new annotations. This is useful if a script sent annotation proposals to SIA and the user should only correct the proposed annotations.
      • label allows disabling the possibility to assign labels to annotations. This option is useful if you wish your annotators to only draw annotations.
      • edit indicates whether an annotator may edit an annotation that is already present.
    • multilabels allows assigning multiple labels per annotation.
    • minArea The minimum area in pixels that an annotation may have. This constraint is only applied to annotations for which an area can be defined (e.g. bboxes, polygons).
    • maxAnnos Maximum number of annotations allowed per image. If null, an infinite number of annotations is allowed per image.
  • img (configuration for the image)
    • actions
      • label allows to disable the possibility to assign labels to the image.
    • multilabels allows to assign multiple labels to the image.
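For example, a SIA configuration that allows drawing, labeling and editing of bounding boxes only, with junk marking enabled and image labeling disabled, could look like this (a sketch based on the schema above):

 {
   "tools": {
       "point": false,
       "line": false,
       "polygon": false,
       "bbox": true,
       "junk": true
   },
   "annos": {
       "multilabels": false,
       "actions": {
           "draw": true,
           "label": true,
           "edit": true
       },
       "minArea": 25,
       "maxAnnos": null
   },
   "img": {
       "multilabels": false,
       "actions": {
           "label": false
       }
   }
 }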
 {
   "peN" : "[int]",
   "peOut" : "[list of int]|[null]",
   "dataExport" : {}
 }

A DataExport is used to serve a file generated by a script in the web GUI. No special configuration is required for this pipeline element. The file to download will be provided by a Script that is connected to the input of the DataExport element. The Script calls the lost.pyapi.inout.ScriptOutput.add_data_export() method to do that.
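A minimal sketch of such a Script, assuming its input is connected to a finished AnnoTask and using the pyapi methods documented below (Input.to_df(), get_fs(), get_path(), add_data_export()); the file name annos.csv is illustrative:

# inside a Script's main() method
df = self.inp.to_df()  # all annotations from the connected elements
fs = self.get_fs()     # default LOST filesystem
csv_path = self.get_path('annos.csv', context='pipe')  # 'annos.csv' is illustrative
with fs.open(csv_path, 'w') as f:
    df.to_csv(f, index=False)  # write the pandas DataFrame as CSV
self.outp.add_data_export(file_path=csv_path, fs=fs)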

 {
   "peN" : "[int]",
   "peOut" : "[list of int]|[null]",
   "visualOutput" : {}
 }

A VisualOutput element can display images and HTML text inside the LOST web GUI. A connected Script element provides the content to a VisualOutput by calling lost.pyapi.inout.ScriptOutput.add_visual_output().
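For example, a connected Script could provide a short HTML report like this (a sketch using the documented add_visual_output() signature; the HTML content is illustrative):

# inside a Script's main() method
self.outp.add_visual_output(html='<h1>Annotation finished</h1><p>All images have been processed.</p>')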

 {
   "peN": "[int]",
   "peOut": "[list of int]|[null]",
   "loop": {
     "maxIteration": "[int]|[null]",
     "peJumpId": "[int]"
   }
 }

A Loop element can be used to build learning loops inside a pipeline. Such a Loop models behaviour similar to a while loop in a programming language.

The peJumpId defines the peN of another element in the pipeline to which this Loop should jump while looping. The maxIteration setting inside a loop definition can be set to the maximum number of iterations that should be performed, or to null for an infinite loop.

A Script element inside a loop cycle may break a loop by calling lost.pyapi.script.Script.break_loop(). Scripts inside a loop cycle may check if a loop was broken by calling lost.pyapi.script.Script.loop_is_broken().
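A sketch of how a Script inside a loop cycle could stop the loop; evaluate_model() and the 0.95 threshold are hypothetical stand-ins for your own stop criterion:

# inside a Script's main() method that runs within a loop cycle
accuracy = evaluate_model()  # hypothetical evaluation step
if accuracy > 0.95:
    self.break_loop()  # the Loop element will stop iterating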

All about Scripts

pyapi

#TODO JJ Review

class lost.pyapi.script.Script(pe_id=None)[source]

Superclass for a user defined Script.

Custom scripts need to inherit from Script and implement the main method.
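A minimal sketch of a custom script, assuming it is connected to a Datasource element and followed by an AnnoTask; fs.ls() is fsspec's standard directory listing, and the instantiation at the bottom follows the pattern used in LOST's out-of-the-box pipeline scripts:

from lost.pyapi import script

class RequestAnnos(script.Script):
    '''Request annotations for all images of the connected Datasource.'''

    def main(self):
        for ds in self.inp.datasources:
            fs = ds.get_fs()  # filesystem of this datasource
            for img_path in fs.ls(ds.path):
                # request an annotation for each image of the dataset folder
                self.outp.request_annos(img_path, fs=fs)

if __name__ == '__main__':
    my_script = RequestAnnos()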

pe_id

Pipe element id. Assign the pe_id of a pipeline script in order to emulate this script, for example in a Jupyter notebook.

Type:int
break_loop()[source]

Break next loop in pipeline.

create_label_tree(name, external_id=None)[source]

Create a new LabelTree

Parameters:
  • name (str) – Name of the tree / name of the root leaf.
  • external_id (str) – An external id for the root leaf.
Returns:

The created LabelTree.

Return type:

lost.logic.label.LabelTree

get_alien_element(pe_id)[source]

Get a pipeline element by id from somewhere in the LOST system.

It is an alien element since it is most likely not part of the pipeline instance this script belongs to.

Parameters:pe_id (int) – PipeElementID of the alien element.
Returns:
get_arg(arg_name)[source]

Get argument value by name for this script.

Parameters:arg_name (str) – Name of the argument.
Returns:Value of the given argument.
get_fs(name=None)[source]

Get default lost filesystem or a specific filesystem by name.

Returns:See https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem
Return type:fsspec.spec.AbstractFileSystem
get_label_tree(name)[source]

Get a LabelTree by name.

Parameters:name (str) – Name of the desired LabelTree.
Returns:
lost.logic.label.LabelTree or None:
If a label tree with the given name exists it will be returned. Otherwise None will be returned.
get_path(file_name, context='instance')[source]

Get path for the filename in a specific context in filesystem.

Parameters:
  • file_name (str) – Name or relative path for a file.
  • context (str) – Options: instance, pipe
Returns:

Absolute path to the file in the specified context.

Return type:

str

inp

lost.pyapi.inout.Input

iteration

Get the current iteration.

Number of times this script has been executed.

Type:int
logger

A standard python logger for this script.

It will log to the pipeline log file.

Type:logging.Logger
loop_is_broken()[source]

Check if the current loop is broken

outp

lost.pyapi.inout.ScriptOutput

pipe_info

An object with pipeline informations

Type:lost.pyapi.pipeline.PipeInfo
progress

Get current progress that is displayed in the progress bar of this script.

Current progress in percent 0…100

Type:float
reject_execution()[source]

Reject execution of this script and set it to PENDING again.

Note

This method is useful if you want to execute this script only when some condition based on previous pipeline elements is met.

report_err(msg)[source]

Report an error for this user script to the portal.

Parameters:msg – The error message that should be reported.

Note

You can call this method multiple times if you like. All messages will be concatenated and sent to the portal.

update_progress(value)[source]

Update the progress for this script.

Parameters:value (float) – Progress in percent 0…100

inout

ScriptOutput

class lost.pyapi.inout.ScriptOutput(script)[source]

Special Output class since lost.pyapi.script.Script objects may manipulate and request annotations.

add_data_export(file_path, fs)[source]

Serve a file for download inside the web gui via a DataExport element.

Parameters:
  • file_path (str) – Path to the file that should be provided for download.
  • fs (filesystem) – Filesystem, where file_path is valid.
add_visual_output(img_path=None, html=None)[source]

Display an image and html in the web gui via a VisualOutput element.

Parameters:
  • img_path (str) – Path in the lost filesystem to the image to display.
  • html (str) – HTML text to display.
anno_tasks

list of lost.pyapi.pipe_elements.AnnoTask objects

bbox_annos

Iterate over all bbox annotations.

Returns:Iterator of lost.db.model.TwoDAnno.
data_exports

list of lost.pyapi.pipe_elements.DataExport objects.

datasources

list of lost.pyapi.pipe_elements.Datasource objects

img_annos

Iterate over all lost.db.model.ImageAnno objects in this Resultset.

Returns:Iterator of lost.db.model.ImageAnno objects.
line_annos

Iterate over all line annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
mia_tasks

list of lost.pyapi.pipe_elements.MIATask objects

point_annos

Iterate over all point annotations.

Returns:Iterator of lost.db.model.TwoDAnno.
polygon_annos

Iterate over all polygon annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
request_annos(img, img_labels=None, img_sim_class=None, annos=[], anno_types=[], anno_labels=[], anno_sim_classes=[], frame_n=None, video_path=None, fs=None, img_meta=None, anno_meta=None, img_comment=None)[source]

Request annotations for a subsequent annotation task.

Parameters:
  • img (str or ImageAnno) – Path to the image, or the database image, for which annotations will be requested.
  • img_labels (list of int) – Labels that will be assigned to the image. Each label is represented by a label_leaf_id. An image may have multiple labels.
  • img_sim_class (int) – A cluster id that will be used to cluster this image in the MIA annotation tool.
  • annos (list of list) – A list of POINTs: [x,y], BBOXes: [x,y,w,h], LINEs or POLYGONs: [[x,y], [x,y], …]
  • anno_types (list of str) – Can be 'point', 'bbox', 'line', 'polygon'
  • anno_labels (list of int) – Labels for the 2D annos. Each label in the list is represented by a label_leaf_id (see also LabelLeaf).
  • anno_sim_classes (list of int) – List of arbitrary cluster ids that are used to cluster annotations in the MIA annotation tool.
  • frame_n (int) – If img_path belongs to a video, frame_n indicates the frame number.
  • video_path (str) – If img_path belongs to a video, this is the path to that video.
  • fs (fsspec.spec.AbstractFileSystem) – The filesystem where the image is located. The LOST standard filesystem is used if no filesystem is given. You can get this filesystem object from a Datasource element by calling its get_fs method.
  • img_meta (dict) – Dictionary with meta information that should be added to the image annotation. Each meta key will be added as a column during annotation export; the dict value will be the row content.
  • anno_meta (list of dict) – List of dictionaries with meta information that should be added to a specific annotation. Each meta key will be added as a column during annotation export; the dict value will be the row content.
  • img_comment (str) – A comment that will be added to this image.

Example

Request human annotations for an image with annotation proposals:

 >>> self.outp.request_annos('path/to/img.jpg',
...     annos = [
...         [0.1, 0.1, 0.2, 0.2],
...         [0.1, 0.2],
...         [[0.1, 0.3], [0.2, 0.3], [0.15, 0.1]]
...     ],
...     anno_types=['bbox', 'point', 'polygon'],
...     anno_labels=[
...         [1],
...         [1],
...         [4]
...     ],
...     anno_sim_classes=[10, 10, 15]
... )

Request human annotations for an image without proposals:

>>> self.outp.request_annos('path/to/img.jpg')
request_lds_annos(lds, fs=None, anno_meta_keys=[], img_meta_keys=[], img_path_key='img_path')[source]

Request annos from LOSTDataset.

Parameters:
  • lds (LOSTDataset) – A LOST dataset object. All annotations in this dataset will be requested again.
  • fs (fsspec.spec.AbstractFileSystem) – The filesystem where the image is located. The LOST standard filesystem is used if no filesystem is given. You can get this filesystem object from a Datasource element by calling its get_fs method.
  • img_meta_keys (list) – Keys that should be used for img_anno meta information
  • anno_meta_keys (list) – Keys that should be used for two_d_anno meta information
sia_tasks

list of lost.pyapi.pipe_elements.SIATask objects

to_df()

Get a pandas DataFrame of all annotations related to this object.

Returns:
Column names are:
'img_uid', 'img_timestamp', 'img_state', 'img_sim_class', 'img_frame_n', 'img_path', 'img_iteration', 'img_user_id', 'img_anno_time', 'img_lbl', 'img_lbl_id', 'img_user', 'img_is_junk', 'img_fs_name', 'anno_uid', 'anno_timestamp', 'anno_state', 'anno_dtype', 'anno_sim_class', 'anno_iteration', 'anno_user_id', 'anno_user', 'anno_confidence', 'anno_time', 'anno_lbl', 'anno_lbl_id', 'anno_style', 'anno_format', 'anno_comment', 'anno_data'
Return type:pandas.DataFrame
to_vec(columns='all')

Get a vector of all Annotations related to this object.

Parameters:columns (str or list of str) – 'all' OR 'img_uid', 'img_timestamp', 'img_state', 'img_sim_class', 'img_frame_n', 'img_path', 'img_iteration', 'img_user_id', 'img_anno_time', 'img_lbl', 'img_lbl_id', 'img_user', 'img_is_junk', 'img_fs_name', 'anno_uid', 'anno_timestamp', 'anno_state', 'anno_dtype', 'anno_sim_class', 'anno_iteration', 'anno_user_id', 'anno_user', 'anno_confidence', 'anno_time', 'anno_lbl', 'anno_lbl_id', 'anno_style', 'anno_format', 'anno_comment', 'anno_data'
Returns:
list OR list of lists: Desired columns

Example

Return just a list of 2d anno labels:

>>> self.outp.to_vec('anno_lbl')
[['Person'],[],['Cat'],[],['Car'],['Person'],[],['Bird'],['Bird']]

Return a list of lists:

>>> self.outp.to_vec(['img_path', 'anno_lbl',
...     'anno_data', 'anno_dtype'])
[
    ['path/to/img1.jpg', ['Aeroplane'], [[0.1, 0.1, 0.2, 0.2]], 'bbox'],
    ['path/to/img1.jpg', ['Bicycle'], [[0.1, 0.1]], 'point'],
    ['path/to/img2.jpg', ['Bottle'], [[0.1, 0.1], [0.2, 0.2]], 'line'],
    ['path/to/img3.jpg', ['Horse'], [[0.2, 0.15, 0.3, 0.18]], 'bbox']
]
twod_annos

Iterate over 2D-annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
Return type:Iterator
visual_outputs

list of lost.pyapi.pipe_elements.VisualOutput objects.

Output

class lost.pyapi.inout.Output(element)[source]
anno_tasks

list of lost.pyapi.pipe_elements.AnnoTask objects

bbox_annos

Iterate over all bbox annotations.

Returns:Iterator of lost.db.model.TwoDAnno.
data_exports

list of lost.pyapi.pipe_elements.DataExport objects.

datasources

list of lost.pyapi.pipe_elements.Datasource objects

img_annos

Iterate over all lost.db.model.ImageAnno objects in this Resultset.

Returns:Iterator of lost.db.model.ImageAnno objects.
line_annos

Iterate over all line annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
mia_tasks

list of lost.pyapi.pipe_elements.MIATask objects

point_annos

Iterate over all point annotations.

Returns:Iterator of lost.db.model.TwoDAnno.
polygon_annos

Iterate over all polygon annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
sia_tasks

list of lost.pyapi.pipe_elements.SIATask objects

to_df()

Get a pandas DataFrame of all annotations related to this object.

Returns:
Column names are:
'img_uid', 'img_timestamp', 'img_state', 'img_sim_class', 'img_frame_n', 'img_path', 'img_iteration', 'img_user_id', 'img_anno_time', 'img_lbl', 'img_lbl_id', 'img_user', 'img_is_junk', 'img_fs_name', 'anno_uid', 'anno_timestamp', 'anno_state', 'anno_dtype', 'anno_sim_class', 'anno_iteration', 'anno_user_id', 'anno_user', 'anno_confidence', 'anno_time', 'anno_lbl', 'anno_lbl_id', 'anno_style', 'anno_format', 'anno_comment', 'anno_data'
Return type:pandas.DataFrame
to_vec(columns='all')

Get a vector of all Annotations related to this object.

Parameters:columns (str or list of str) – 'all' OR 'img_uid', 'img_timestamp', 'img_state', 'img_sim_class', 'img_frame_n', 'img_path', 'img_iteration', 'img_user_id', 'img_anno_time', 'img_lbl', 'img_lbl_id', 'img_user', 'img_is_junk', 'img_fs_name', 'anno_uid', 'anno_timestamp', 'anno_state', 'anno_dtype', 'anno_sim_class', 'anno_iteration', 'anno_user_id', 'anno_user', 'anno_confidence', 'anno_time', 'anno_lbl', 'anno_lbl_id', 'anno_style', 'anno_format', 'anno_comment', 'anno_data'
Returns:
list OR list of lists: Desired columns

Example

Return just a list of 2d anno labels:

>>> self.outp.to_vec('anno_lbl')
[['Person'],[],['Cat'],[],['Car'],['Person'],[],['Bird'],['Bird']]

Return a list of lists:

>>> self.outp.to_vec(['img_path', 'anno_lbl',
...     'anno_data', 'anno_dtype'])
[
    ['path/to/img1.jpg', ['Aeroplane'], [[0.1, 0.1, 0.2, 0.2]], 'bbox'],
    ['path/to/img1.jpg', ['Bicycle'], [[0.1, 0.1]], 'point'],
    ['path/to/img2.jpg', ['Bottle'], [[0.1, 0.1], [0.2, 0.2]], 'line'],
    ['path/to/img3.jpg', ['Horse'], [[0.2, 0.15, 0.3, 0.18]], 'bbox']
]
twod_annos

Iterate over 2D-annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
Return type:Iterator
visual_outputs

list of lost.pyapi.pipe_elements.VisualOutput objects.

Input

class lost.pyapi.inout.Input(element)[source]

Class that represents an input of a pipeline element.

Parameters:element (object) – Related lost.db.model.PipeElement object.
anno_tasks

list of lost.pyapi.pipe_elements.AnnoTask objects

bbox_annos

Iterate over all bbox annotations.

Returns:Iterator of lost.db.model.TwoDAnno.
data_exports

list of lost.pyapi.pipe_elements.DataExport objects.

datasources

list of lost.pyapi.pipe_elements.Datasource objects

img_annos

Iterate over all lost.db.model.ImageAnno objects in this Resultset.

Returns:Iterator of lost.db.model.ImageAnno objects.
line_annos

Iterate over all line annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
mia_tasks

list of lost.pyapi.pipe_elements.MIATask objects

point_annos

Iterate over all point annotations.

Returns:Iterator of lost.db.model.TwoDAnno.
polygon_annos

Iterate over all polygon annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
sia_tasks

list of lost.pyapi.pipe_elements.SIATask objects

to_df()[source]

Get a pandas DataFrame of all annotations related to this object.

Returns:
Column names are:
'img_uid', 'img_timestamp', 'img_state', 'img_sim_class', 'img_frame_n', 'img_path', 'img_iteration', 'img_user_id', 'img_anno_time', 'img_lbl', 'img_lbl_id', 'img_user', 'img_is_junk', 'img_fs_name', 'anno_uid', 'anno_timestamp', 'anno_state', 'anno_dtype', 'anno_sim_class', 'anno_iteration', 'anno_user_id', 'anno_user', 'anno_confidence', 'anno_time', 'anno_lbl', 'anno_lbl_id', 'anno_style', 'anno_format', 'anno_comment', 'anno_data'
Return type:pandas.DataFrame
to_vec(columns='all')[source]

Get a vector of all Annotations related to this object.

Parameters:columns (str or list of str) – 'all' OR 'img_uid', 'img_timestamp', 'img_state', 'img_sim_class', 'img_frame_n', 'img_path', 'img_iteration', 'img_user_id', 'img_anno_time', 'img_lbl', 'img_lbl_id', 'img_user', 'img_is_junk', 'img_fs_name', 'anno_uid', 'anno_timestamp', 'anno_state', 'anno_dtype', 'anno_sim_class', 'anno_iteration', 'anno_user_id', 'anno_user', 'anno_confidence', 'anno_time', 'anno_lbl', 'anno_lbl_id', 'anno_style', 'anno_format', 'anno_comment', 'anno_data'
Returns:
list OR list of lists: Desired columns

Example

Return just a list of 2d anno labels:

>>> self.outp.to_vec('anno_lbl')
[['Person'],[],['Cat'],[],['Car'],['Person'],[],['Bird'],['Bird']]

Return a list of lists:

>>> self.outp.to_vec(['img_path', 'anno_lbl',
...     'anno_data', 'anno_dtype'])
[
    ['path/to/img1.jpg', ['Aeroplane'], [[0.1, 0.1, 0.2, 0.2]], 'bbox'],
    ['path/to/img1.jpg', ['Bicycle'], [[0.1, 0.1]], 'point'],
    ['path/to/img2.jpg', ['Bottle'], [[0.1, 0.1], [0.2, 0.2]], 'line'],
    ['path/to/img3.jpg', ['Horse'], [[0.2, 0.15, 0.3, 0.18]], 'bbox']
]
twod_annos

Iterate over 2D-annotations.

Returns:Iterator of lost.db.model.TwoDAnno objects.
Return type:Iterator
visual_outputs

list of lost.pyapi.pipe_elements.VisualOutput objects.

pipeline

PipeInfo

class lost.pyapi.pipeline.PipeInfo(pipe, dbm)[source]
description

Description that was defined when pipeline was started.

Type:str
logfile_path

Path to pipeline log file.

Type:str
name

Name of this pipeline

Type:str
timestamp

Timestamp when pipeline was started.

Type:str
timestamp_finished

Timestamp when pipeline was finished.

Type:str
user

User who started this pipe

Type:User object

pipe_elements

Datasource

class lost.pyapi.pipe_elements.Datasource(pe, dbm)[source]
get_fs()[source]

Get filesystem for this datasource

inp

Input of this pipeline element

Type:lost.pyapi.inout.Input
outp

Output of this pipeline element

Type:lost.pyapi.inout.Output
path

Relative path to file or folder

Type:str
pipe_info

An object with pipeline informations

Type:lost.pyapi.pipeline.PipeInfo

AnnoTask

class lost.pyapi.pipe_elements.AnnoTask(pe, dbm)[source]
configuration

Configuration of this annotask.

Type:str
inp

Input of this pipeline element

Type:lost.pyapi.inout.Input
instructions

Instructions for the annotator of this AnnoTask.

Type:str
lbl_map

Map lbl_name to idx

Note

All label names will be mapped to lower case!

Type:dict
name

A name for this annotask.

Type:str
outp

Output of this pipeline element

Type:lost.pyapi.inout.Output
pipe_info

An object with pipeline informations

Type:lost.pyapi.pipeline.PipeInfo
possible_label_df

Get all possible labels for this annotation task in DataFrame format

pd.DataFrame: Column names are:
‘idx’, ‘name’, ‘abbreviation’, ‘description’, ‘timestamp’, ‘external_id’, ‘is_deleted’, ‘parent_leaf_id’ ,’is_root’
Type:pd.DataFrame
progress

Progress in percent.

Value range 0…100.

Type:float

MIATask

class lost.pyapi.pipe_elements.MIATask(pe, dbm)[source]
configuration

Configuration of this annotask.

Type:str
inp

Input of this pipeline element

Type:lost.pyapi.inout.Input
instructions

Instructions for the annotator of this AnnoTask.

Type:str
lbl_map

Map lbl_name to idx

Note

All label names will be mapped to lower case!

Type:dict
name

A name for this annotask.

Type:str
outp

Output of this pipeline element

Type:lost.pyapi.inout.Output
pipe_info

An object with pipeline informations

Type:lost.pyapi.pipeline.PipeInfo
possible_label_df

Get all possible labels for this annotation task in DataFrame format

pd.DataFrame: Column names are:
‘idx’, ‘name’, ‘abbreviation’, ‘description’, ‘timestamp’, ‘external_id’, ‘is_deleted’, ‘parent_leaf_id’ ,’is_root’
Type:pd.DataFrame
progress

Progress in percent.

Value range 0…100.

Type:float

SIATask

class lost.pyapi.pipe_elements.SIATask(pe, dbm)[source]
configuration

Configuration of this annotask.

Type:str
inp

Input of this pipeline element

Type:lost.pyapi.inout.Input
instructions

Instructions for the annotator of this AnnoTask.

Type:str
lbl_map

Map lbl_name to idx

Note

All label names will be mapped to lower case!

Type:dict
name

A name for this annotask.

Type:str
outp

Output of this pipeline element

Type:lost.pyapi.inout.Output
pipe_info

An object with pipeline informations

Type:lost.pyapi.pipeline.PipeInfo
possible_label_df

Get all possible labels for this annotation task in DataFrame format

pd.DataFrame: Column names are:
‘idx’, ‘name’, ‘abbreviation’, ‘description’, ‘timestamp’, ‘external_id’, ‘is_deleted’, ‘parent_leaf_id’ ,’is_root’
Type:pd.DataFrame
progress

Progress in percent.

Value range 0…100.

Type:float

DataExport

class lost.pyapi.pipe_elements.DataExport(pe, dbm)[source]
file_path

A list of absolute paths to exported files.

Type:list of str
inp

Input of this pipeline element

Type:lost.pyapi.inout.Input
outp

Output of this pipeline element

Type:lost.pyapi.inout.Output
pipe_info

An object with pipeline informations

Type:lost.pyapi.pipeline.PipeInfo
to_dict()[source]

Transform the list of exports into a list of dictionaries.

Returns:[{‘iteration’:int, ‘file_path’:str},…]
Return type:list of dict

VisualOutput

class lost.pyapi.pipe_elements.VisualOutput(pe, dbm)[source]
html_strings

list of html strings.

Type:list of str
img_paths

List of absolute paths to images.

Type:list of str
inp

Input of this pipeline element

Type:lost.pyapi.inout.Input
outp

Output of this pipeline element

Type:lost.pyapi.inout.Output
pipe_info

An object with pipeline informations

Type:lost.pyapi.pipeline.PipeInfo
to_dict()[source]

Transforms a list of visualization information into a list of dicts.

Returns:[{‘iteration’:int, ‘img_path’:str, ‘html_string’:str},…]
Return type:list of dicts

Loop

class lost.pyapi.pipe_elements.Loop(pe, dbm)[source]
inp

Input of this pipeline element

Type:lost.pyapi.inout.Input
is_broken

True if loop is broken

Type:bool
iteration

Current iteration of this loop.

Type:int
max_iteration

Maximum number of iteration.

Type:int
outp

Output of this pipeline element

Type:lost.pyapi.inout.Output
pe_jump

PipelineElement where this loop will jump to when looping.

Can be of type:
pipe_info

An object with pipeline informations

Type:lost.pyapi.pipeline.PipeInfo

model

ImageAnno

class lost.db.model.ImageAnno(anno_task_id=None, user_id=None, timestamp=None, state=None, sim_class=None, result_id=None, img_path=None, frame_n=None, video_path=None, iteration=0, anno_time=None, is_junk=None, description=None, fs_id=None, meta=None, meta_blob=None, img_actions=None)[source]

An ImageAnno represents an image annotation.

Multiple labels as well as 2d annotations (e.g. points, lines, boxes, polygons) can be assigned to an image.

labels

A list of related Label objects.

Type:list
twod_annos

A list of TwoDAnno objects.

Type:list
img_path

Absolute path to the image in the file system.

Type:str
frame_n

If this image is part of a video, frame_n indicates the frame number.

Type:int
video_path

If this image is part of a video, this should be the path to that video in the file system.

Type:str
sim_class

The similarity class this anno belongs to. It is used to cluster similar annos in MIA.

Type:int
anno_time

Overall annotation time in seconds.

timestamp

Timestamp of ImageAnno

Type:DateTime
iteration

The iteration of a loop when this anno was created.

Type:int
idx

ID of this ImageAnno in database

Type:int
anno_task_id

ID of the anno_task this ImageAnno belongs to.

Type:int
state

See lost.db.state.Anno

Type:enum
result_id

Id of the related result.

user_id

Id of the annotator.

Type:int
is_junk

This image was marked as Junk.

Type:bool
description

Description for this annotation. Assigned by an annotator or algorithm.

Type:str
fs_id

Id of the filesystem where image is located

Type:int
meta

A field for meta information added by a script

Type:str
img_actions

Actions performed by users for this image

Type:str
get_anno_vec(anno_type='bbox')[source]

Get related 2d annotations in list style.

Parameters:anno_type (str) – Can be ‘bbox’, ‘point’, ‘line’, ‘polygon’
Returns:
For POINTs:
[[x, y], [x, y], …]
For BBOXs:
[[x, y, w, h], [x, y, w, h], …]
For LINEs and POLYGONs:
[[[x, y], [x, y],…], [[x, y], [x, y],…]]
Return type:list of list of floats

Example

In the following example all bounding boxes of the image annotation will be returned in list style:

>>> img_anno.get_anno_vec()
[[0.1 , 0.2 , 0.3 , 0.18],
 [0.25, 0.25, 0.2, 0.4]]
iter_annos(anno_type='bbox')[source]

Iterator for all related 2D annotations of this image.

Parameters:anno_type (str) – Can be 'bbox', 'point', 'line', 'polygon', 'all'
Returns:
iterator of TwoDAnno objects

Example

>>> for bb in img_anno.iter_annos('bbox'):
...     do_something(bb)
to_df()[source]

Transform this ImageAnnotation and all related TwoDAnnotations into a pandas DataFrame.

Returns:
Column names are:
’img_uid’, ‘img_timestamp’, ‘img_state’, ‘img_sim_class’, ‘img_frame_n’, ‘img_path’, ‘img_iteration’, ‘img_user_id’, ‘img_anno_time’, ‘img_lbl’, ‘img_lbl_id’, ‘img_user’, ‘img_is_junk’, ‘img_fs_name’, ‘anno_uid’, ‘anno_timestamp’, ‘anno_state’, ‘anno_dtype’, ‘anno_sim_class’, ‘anno_iteration’, ‘anno_user_id’, ‘anno_user’, ‘anno_confidence’, ‘anno_time’, ‘anno_lbl’, ‘anno_lbl_id’, ‘anno_style’, ‘anno_format’, ‘anno_comment’, ‘anno_data’
Return type:pandas.DataFrame
to_dict(style='flat')[source]

Transform this ImageAnno and all related TwoDAnnos into a dict.

Parameters:style (str) – ‘flat’ or ‘hierarchical’. Return a dict in flat or nested style.
Returns:In ‘flat’ style return a list of dicts with one dict per annotation. In ‘hierarchical’ style, return a nested dictionary.
Return type:list of dict OR dict

Example

HowTo iterate through all TwoDAnnotations of this ImageAnno dictionary in flat style:

>>> for d in img_anno.to_dict():
...     print(d['img_path'], d['anno_lbl'], d['anno_dtype'])
path/to/img1.jpg [] None
path/to/img1.jpg ['Aeroplane'] bbox
path/to/img1.jpg ['Bicycle'] point

Possible keys in flat style:

>>> img_anno.to_dict()[0].keys()
dict_keys([
    'img_uid', 'img_timestamp', 'img_state', 'img_sim_class',
    'img_frame_n', 'img_path', 'img_iteration', 'img_user_id',
    'img_anno_time', 'img_lbl', 'img_lbl_id', 'img_user',
    'img_is_junk', 'img_fs_name', 'anno_uid', 'anno_timestamp',
    'anno_state', 'anno_dtype', 'anno_sim_class', 'anno_iteration',
    'anno_user_id', 'anno_user', 'anno_confidence', 'anno_time',
    'anno_lbl', 'anno_lbl_id', 'anno_style', 'anno_format',
    'anno_comment', 'anno_data'
])

HowTo iterate through all TwoDAnnotations of this ImageAnno dictionary in hierarchical style:

>>> h_dict = img_anno.to_dict(style='hierarchical')
>>> for d in h_dict['img_2d_annos']:
...     print(h_dict['img_path'], d['anno_lbl'], d['anno_dtype'])
path/to/img1.jpg [Aeroplane] bbox
path/to/img1.jpg [Bicycle] point

Possible keys in hierarchical style:

>>> h_dict = img_anno.to_dict(style='hierarchical')
>>> h_dict.keys()
dict_keys([
    'img_uid', 'img_timestamp', 'img_state', 'img_sim_class',
    'img_frame_n', 'img_path', 'img_iteration', 'img_user_id',
    'img_anno_time', 'img_lbl', 'img_lbl_id', 'img_user',
    'img_is_junk', 'img_fs_name', 'img_2d_annos'
])
>>> h_dict['img_2d_annos'][0].keys()
dict_keys([
    'anno_uid', 'anno_timestamp', 'anno_state', 'anno_dtype',
    'anno_sim_class', 'anno_iteration', 'anno_user_id',
    'anno_user', 'anno_confidence', 'anno_time', 'anno_lbl',
    'anno_lbl_id', 'anno_style', 'anno_format', 'anno_comment',
    'anno_data'
])
to_vec(columns='all')[source]

Transform this ImageAnnotation and all related TwoDAnnotations in list style.

Parameters:columns (str or list of str) – ‘all’ OR ‘img_uid’, ‘img_timestamp’, ‘img_state’, ‘img_sim_class’, ‘img_frame_n’, ‘img_path’, ‘img_iteration’, ‘img_user_id’, ‘img_anno_time’, ‘img_lbl’, ‘img_lbl_id’, ‘img_user’, ‘img_is_junk’, ‘img_fs_name’, ‘anno_uid’, ‘anno_timestamp’, ‘anno_state’, ‘anno_dtype’, ‘anno_sim_class’, ‘anno_iteration’, ‘anno_user_id’, ‘anno_user’, ‘anno_confidence’, ‘anno_time’, ‘anno_lbl’, ‘anno_lbl_id’, ‘anno_style’, ‘anno_format’, ‘anno_comment’, ‘anno_data’
Returns:
list OR list of lists: Desired columns

Example

Return just a list of 2d anno labels:

>>> img_anno.to_vec('anno_lbl')
[['Aeroplane'], ['Bicycle']]

Return a list of lists:

>>> img_anno.to_vec(['img_path', 'anno_lbl'])
[
    ['path/to/img1.jpg', ['Aeroplane']],
    ['path/to/img1.jpg', ['Bicycle']]
]

TwoDAnno

class lost.db.model.TwoDAnno(anno_task_id=None, user_id=None, timestamp=None, state=None, track_id=None, sim_class=None, img_anno_id=None, timestamp_lock=None, iteration=0, data=None, dtype=None, confidence=None, anno_time=None, description=None, meta=None, is_example=False, meta_blob=None)[source]

A TwoDAnno represents a 2D annotation/ drawing for an image.

A TwoDAnno can be of type point, line, bbox or polygon.

idx

ID of this TwoDAnno in database

Type:int
anno_task_id

ID of the anno_task this TwoDAnno belongs to.

Type:int
timestamp

Timestamp created of TwoDAnno

Type:DateTime
timestamp_lock

Timestamp when this annotation was locked in a view.

Type:DateTime
state

can be unlocked, locked, locked_priority or labeled (see lost.db.state.Anno)

Type:enum
track_id

The track id this TwoDAnno belongs to.

Type:int
sim_class

The similarity class this anno belongs to. It is used to cluster similar annos in MIA.

Type:int
iteration

The iteration of a loop when this anno was created.

Type:int
user_id

Id of the annotator.

Type:int
img_anno_id

ID of ImageAnno this TwoDAnno is appended to

Type:int
data

Drawing data of the annotation (e.g. x, y, width, height); depends on dtype.

Type:Text
dtype

Type of the TwoDAnno (e.g. bbox, polygon); see lost.db.dtype.TwoDAnno.

Type:int
labels

A list of Label objects related to the TwoDAnno.

Type:list
confidence

Confidence of Annotation.

Type:float
anno_time

Overall Annotation Time in ms.

description

Description for this annotation. Assigned by an annotator or algorithm.

Type:str
meta

A field for meta information added by a script

Type:str
is_example

Indicates whether this annotation is an example for the selected label.

Type:bool
add_label(label_leaf_id)[source]

Add a label to this 2D annotation.

Parameters:label_leaf_id (int) – Id of the label_leaf that should be added.
bbox

BBOX annotation in list style [x, y, w, h]

Example

>>> anno = TwoDAnno()
>>> anno.bbox = [0.1, 0.1, 0.2, 0.2]
>>> anno.bbox
[0.1, 0.1, 0.2, 0.2]
Type:list
get_anno_serialization_format()[source]

Get annotation data in list style for parquet serialization.

Returns:
For a POINT:
[[ x, y ]]
For a BBOX:
[[ x, y, w, h ]]
For a LINE and POLYGONS:
[[x, y], [x, y],…]
Return type:list of floats

Example

HowTo get a numpy array? In the following example a bounding box is returned:

>>> np.array(twod_anno.get_anno_serialization_format())
array([[0.1 , 0.2 , 0.3 , 0.18]])
get_anno_vec()[source]

Get annotation data in list style.

Returns:
For a POINT:
[x, y]
For a BBOX:
[x, y, w, h]
For a LINE and POLYGONS:
[[x, y], [x, y],…]
Return type:list of floats

Example

HowTo get a numpy array? In the following example a bounding box is returned:

>>> np.array(twod_anno.get_anno_vec())
array([0.1 , 0.2 , 0.3 , 0.18])
line

LINE annotation in list style [[x, y], [x, y], …]

Example

>>> anno = TwoDAnno()
>>> anno.line = [[0.1, 0.1], [0.2, 0.2]]
>>> anno.line
[[0.1, 0.1], [0.2, 0.2]]
Type:list of list
point

POINT annotation in list style [x, y]

Example

>>> anno = TwoDAnno()
>>> anno.point = [0.1, 0.1]
>>> anno.point
[0.1, 0.1]
Type:list
polygon

polygon annotation in list style [[x, y], [x, y], …]

Example

>>> anno = TwoDAnno()
>>> anno.polygon = [[0.1, 0.1], [0.2, 0.1], [0.15, 0.2]]
>>> anno.polygon
[[0.1, 0.1], [0.2, 0.1], [0.15, 0.2]]
Type:list of list
to_df()[source]

Transform this annotation into a pandas DataFrame

Returns:A DataFrame where column names correspond to the keys of the dictionary returned from to_dict() method.
Return type:pandas.DataFrame

Note

Column names are:
‘anno_uid’, ‘anno_timestamp’, ‘anno_state’, ‘anno_dtype’, ‘anno_sim_class’, ‘anno_iteration’, ‘anno_user_id’, ‘anno_user’, ‘anno_confidence’, ‘anno_time’, ‘anno_lbl’, ‘anno_lbl_id’, ‘anno_style’, ‘anno_format’, ‘anno_comment’, ‘anno_data’
to_dict(style='flat')[source]

Transform this object into a dict.

Parameters:style (str) – 'flat' or 'hierarchical'. 'flat': return a dictionary in table style. 'hierarchical': return a nested dictionary.
Returns:
dict: In flat or hierarchical style.

Example

Get a dict in flat style. Note that ‘anno.data’, ‘anno.lbl.idx’, ‘anno.lbl.name’ and ‘anno.lbl.external_id’ are json strings in contrast to the hierarchical style.

>>> bbox.to_dict(style='flat')
{
    'anno_uid': 1,
    'anno_timestamp': datetime.datetime(2022, 10, 27, 11, 27, 31),
    'anno_state': 4,
    'anno_dtype': 'point',
    'anno_sim_class': None,
    'anno_iteration': 0,
    'anno_user_id': 1,
    'anno_user': 'admin',
    'anno_confidence': None,
    'anno_time': 2.5548,
    'anno_lbl': ['Person'],
    'anno_lbl_id': [16],
    'anno_style': 'xy',
    'anno_format': 'rel',
    'anno_comment': None,
    'anno_data': [[0.5683337459767269, 0.3378842004739504]]
}
to_vec(columns='all')[source]

Transform this annotation into list style.

Parameters:columns (list of str OR str) – Possible column names are: ‘all’ OR ‘anno_uid’, ‘anno_timestamp’, ‘anno_state’, ‘anno_dtype’, ‘anno_sim_class’, ‘anno_iteration’, ‘anno_user_id’, ‘anno_user’, ‘anno_confidence’, ‘anno_time’, ‘anno_lbl’, ‘anno_lbl_id’, ‘anno_style’, ‘anno_format’, ‘anno_comment’, ‘anno_data’
Returns:A list of the desired columns.
Return type:list of objects

Example

If you want to get only the annotation in list style e.g. [xc, yc, w, h] (if this TwoDAnnotation is a bbox).

>>> anno.to_vec('anno_data')
[0.1, 0.1, 0.2, 0.2]

LabelLeaf

class lost.db.model.LabelLeaf(idx=None, name=None, abbreviation=None, description=None, timestamp=None, external_id=None, label_tree_id=None, is_deleted=None, parent_leaf_id=None, is_root=None, group_id=None, color=None)[source]

A LabelLeaf

idx

ID in database.

Type:int
name

Name of the LabelLeaf.

Type:str
abbreviation
Type:str
description
Type:str
timestamp
Type:DateTime
external_id

Id in an external semantic label system (e.g. a synset id of WordNet).

Type:str
is_deleted
Type:Boolean
is_root

Indicates if this leaf is the root of a tree.

Type:Boolean
parent_leaf_id

Reference to parent LabelLeaf.

Type:Integer
group_id

Group this Label Leaf belongs to

Type:int
color

Color of the label in Hex format.

Type:str
label_leafs
Type:list of LabelLeaf

Note

group_id is None if this label leaf is available for all users!

to_df()[source]

Transform this LabelLeaf to a pandas DataFrame.

Returns:
Return type:pd.DataFrame
to_dict()[source]

Transform this object to a dict.

Returns:
Return type:dict

Label

class lost.db.model.Label(idx=None, dtype=None, label_leaf_id=None, img_anno_id=None, two_d_anno_id=None, annotator_id=None, timestamp_lock=None, timestamp=None, confidence=None, anno_time=None)[source]

Represents a Label that is related to an annotation.

idx

ID in database.

Type:int
dtype

lost.db.dtype.Result type of this attribute.

Type:enum
label_leaf_id

ID of related model.LabelLeaf.

img_anno_id
Type:int
two_d_anno_id
Type:int
timestamp
Type:DateTime
timestamp_lock
Type:DateTime
label_leaf

related model.LabelLeaf object.

Type:model.LabelLeaf
annotator_id

Group id of the annotator who has assigned this Label.

Type:Integer
confidence

Confidence of Annotation.

Type:float
anno_time

Duration of the annotation.

Type:float

logic.label

LabelTree

class lost.logic.label.LabelTree(dbm, root_id=None, root_leaf=None, name=None, logger=None, group_id=None)[source]

A class that represents a LabelTree.

Parameters:
  • dbm (lost.db.access.DBMan) – Database manager object.
  • root_id (int) – label_leaf_id of the root Leaf.
  • root_leaf (lost.db.model.LabelLeaf) – Root leaf of the tree.
  • name (str) – Name of a label tree.
  • logger (logger) – A logger.
  • group_id (int) – Id of the group where the LabelTree belongs to.
create_child(parent_id, name, external_id=None)[source]

Create a new leaf in label tree.

Parameters:
  • parent_id (int) – Id of the parent leaf.
  • name (str) – Name of the leaf e.g the class name.
  • external_id (str) – Some id of an external label system.
Returns:
lost.db.model.LabelLeaf: The created child leaf.
create_root(name, external_id=None)[source]

Create the root of a label tree.

Parameters:
  • name (str) – Name of the root leaf.
  • external_id (str) – Some id of an external label system.
Returns:
lost.db.model.LabelLeaf or None:
The created root leaf or None if a root leaf with same name is already present in database.
delete_subtree(leaf)[source]

Recursively delete all leafs in the subtree, starting at leaf.

Parameters:leaf (lost.db.model.LabelLeaf) – Delete all children of this leaf. The leaf itself stays.
delete_tree()[source]

Delete the whole tree from the system.

get_child_vec(parent_id, columns='idx')[source]

Get a vector of child labels.

Parameters:
  • parent_id (int) – Id of the parent leaf.
  • columns (str or list of str) – Can be any attribute of lost.db.model.LabelLeaf, for example ‘idx’, ‘external_id’, ‘name’, or a list of these e.g. [‘name’, ‘idx’]

Example

>>> label_tree.get_child_vec(1, columns='idx')
[2, 3, 4]
>>> label_tree.get_child_vec(1, columns=['idx', 'name'])
[
    [2, 'cow'],
    [3, 'horse'],
    [4, 'person']
]
Returns:
Return type:list of the requested columns
import_df(df)[source]

Import LabelTree from DataFrame

Parameters:df (pandas.DataFrame) – LabelTree in DataFrame style.
Returns:
lost.db.model.LabelLeaf or None:
The created root leaf or None if a root leaf with same name is already present in database.
to_df()[source]

Transform this LabelTree to a pandas DataFrame.

Returns:pandas.DataFrame
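
Example

Putting the methods above together, a small tree could be built and exported as follows. This is a sketch that assumes a database manager instance dbm is already available (see the constructor parameters above); the leaf names and other_tree are purely illustrative:

>>> tree = LabelTree(dbm, name='animals')
>>> root = tree.create_root('animals')
>>> tree.create_child(root.idx, 'cow')
>>> tree.create_child(root.idx, 'horse')
>>> df = tree.to_df()          # export the tree as a pandas DataFrame
>>> other_tree.import_df(df)   # re-import it into another LabelTree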

dtype

TwoDAnno

class lost.db.dtype.TwoDAnno[source]

Type of a TwoDAnno

BBOX

A BBox.

Type:1
POLYGON

A Polygon.

Type:2
POINT

A Point.

Type:3
LINE

A Line.

Type:4
CIRCLE

A Circle.

Type:5
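
Example

Based on the values listed above, these constants can be used to check the type of an annotation. The comparison below is a sketch; anno stands for any object with a dtype attribute:

>>> from lost.db import dtype
>>> dtype.TwoDAnno.BBOX
1
>>> anno.dtype == dtype.TwoDAnno.BBOX   # e.g. filter annotations by type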

util methods

anno_helper

A module with helper methods to transform annotations into different formats and to crop annotations from an image.

lost.pyapi.utils.anno_helper.calc_box_for_anno(annos, types, point_padding=0.05)[source]

Calculate a bounding box for an arbitrary 2DAnnotation.

Parameters:
  • annos (list) – List of annotations.
  • types (list) – List of types.
  • point_padding (float, optional) – In case of a point we need to add some padding to get a box.
Returns:

A list of bounding boxes in format [[xc,yc,w,h],…]

Return type:

list

lost.pyapi.utils.anno_helper.crop_boxes(annos, types, img, context=0.0, draw_annotations=False)[source]

Crop bounding boxes for TwoDAnnos from an image.

Parameters:
  • annos (list) – List of annotations.
  • types (list) – List of types.
  • img (numpy.array) – The image where boxes should be cropped from.
  • context (float) – The context that should be added to the box.
  • draw_annotations (bool) – If true, annotations will be painted inside the crops.
Returns:

A tuple that contains a list of image crops and a list of bboxes [[xc,yc,w,h],…]

Return type:

(list of numpy.array, list of list of float)

lost.pyapi.utils.anno_helper.divide_into_patches(img, x_splits=2, y_splits=2)[source]

Divide image into x_splits*y_splits patches.

Parameters:
  • img (array) – RGB image (skimage.io.imread).
  • x_splits (int) – Number of elements on x axis.
  • y_splits (int) – Number of elements on y axis.
Returns:

img_patches, box_coordinates

Image patches and the box coordinates of these patches in the image.

Return type:

list, list

Note

img_patches are in following order:
[[x0,y0], [x0,y1],…[x0,yn],…,[xn,y0], [xn, y1]…[xn,yn]]
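
Example

A minimal sketch of a call, assuming an image loaded with skimage; with two splits on each axis you get four patches in the order described above:

>>> from skimage import io
>>> from lost.pyapi.utils import anno_helper
>>> img = io.imread('path/to/img0.jpg')
>>> patches, boxes = anno_helper.divide_into_patches(img, x_splits=2, y_splits=2)
>>> len(patches)
4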
lost.pyapi.utils.anno_helper.draw_annos(annos, types, img, color=(255, 0, 0), point_r=2)[source]

Draw annotations inside an image.

Parameters:
  • annos (list) – List of annotations.
  • types (list) – List of types.
  • img (numpy.array) – The image to draw annotations in.
  • color (tuple) – (R,G,B) color that is used for drawing.

Note

The given image will be directly edited!

Returns:Image with drawn annotations
Return type:numpy.array
lost.pyapi.utils.anno_helper.to_abs(annos, types, img_size)[source]

Convert relative annotation coordinates to absolute ones

Parameters:
  • annos (list of list) –
  • types (list of str) –
  • img_size (tuple) – (width, height) of the image in pixels.
Returns:

Annotations in absolute format.

Return type:

list of list

lost.pyapi.utils.anno_helper.trans_boxes_to(boxes, convert_to='minmax')[source]

Transform a box from standard lost format into a different format

Parameters:
  • boxes (list of list) – Boxes in standard lost format [[xc,yc,w,h],…]
  • convert_to (str) – minmax -> [[xmin,ymin,xmax,ymax]…]
Returns:

Converted boxes.

Return type:

list of list
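
Example

Interpreting xc, yc in the standard lost format as box center coordinates, the minmax conversion amounts to xmin = xc - w/2 and xmax = xc + w/2 (analogously for y). A sketch with illustrative values:

>>> anno_helper.trans_boxes_to([[0.5, 0.5, 0.2, 0.2]], convert_to='minmax')
[[0.4, 0.4, 0.6, 0.6]]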

blacklist

A helper module to deal with blacklists.

class lost.pyapi.utils.blacklist.ImgBlacklist(my_script, name='img-blacklist.json', context='pipe')[source]

A class to deal with image blacklists.

Such blacklists are often used for annotation loops, in order to prevent annotating the same image multiple times.

my_script

The script instance that creates this blacklist.

Type:lost.pyapi.script.Script
name

The name of the blacklist file.

Type:str
context

Options: instance, pipe, static

Type:str

Example

Add images to blacklist.

>>> blacklist = ImgBlacklist(self, name='blacklist.json')
>>> blacklist.add(['path/to/img0.jpg'])
>>> blacklist.save()

Load a blacklist and check if a certain image is already in list.

>>> blacklist = ImgBlacklist(self, name='blacklist.json')
>>> blacklist.contains('path/to/img0.jpg')
True
>>> blacklist.contains('path/to/img1.jpg')
False

Get list of images that are not part of the blacklist

>>> blacklist.get_whitelist(['path/to/img0.jpg', 'path/to/img1.jpg', 'path/to/img2.jpg'])
['path/to/img1.jpg', 'path/to/img2.jpg']

Add images to the blacklist

>>> blacklist.add(['path/to/img1.jpg', 'path/to/img2.jpg'])
add(imgs)[source]

Add a list of images to blacklist.

Parameters:imgs (list) – A list of image identifiers that should be added to the blacklist.
contains(img)[source]

Check if the blacklist contains a specific image.

Parameters:img (str) – The image identifier
Returns:True if img in blacklist, False if not.
Return type:bool
delete_blacklist()[source]

Remove blacklist from filesystem

get_whitelist(img_list, n='all')[source]

Get a list of images that are not part of the blacklist.

Parameters:
  • img_list (list of str) – A list of images that should be checked against the blacklist.
  • n ('all' or int) – The maximum number of images that should be returned.
Returns:

A list of images that are not in the blacklist.

Return type:

list of str

remove_item(item)[source]

Remove item from blacklist

Parameters:item (str) – The item/ image to remove from blacklist.
save()[source]

Write blacklist to filesystem
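
Example

Put together, a typical annotation loop inside a script could use the blacklist as follows. This is a sketch: self is assumed to be the Script instance (as in the examples above) and img_list a list of candidate image paths:

>>> blacklist = ImgBlacklist(self, name='blacklist.json')
>>> todo = blacklist.get_whitelist(img_list, n=100)
>>> # ... request annotations for the images in todo ...
>>> blacklist.add(todo)
>>> blacklist.save()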

vis

Importing / Updating Pipelines

Import via GUI

LOST cli

Configuration

E-Mail Notifications

In order to activate E-Mail notifications you have to provide an outgoing E-Mail account. In your .env file you have to add the following environment variables. If you have set up LOST with the quick_setup script, these variables only need to be uncommented and adjusted:

LOST_MAIL_ACTIVE=True
LOST_MAIL_SERVER=mailserver.com
LOST_MAIL_PORT=465
LOST_MAIL_USE_SSL=True
LOST_MAIL_USE_TLS=True
LOST_MAIL_USERNAME=email@email.com
LOST_MAIL_PASSWORD=password
LOST_MAIL_DEFAULT_SENDER=LOST Notification System <email@email.com>
LOST_MAIL_LOST_URL=http://mylostinstance.url/

LDAP

LDAP can be configured using the following environment variables in your .env file:

LOST_LDAP_ACTIVE=True
LOST_LDAP_HOST=192.168.0.100
LOST_LDAP_PORT=389
LOST_LDAP_BASE_DN=dc=example,dc=com
LOST_LDAP_USER_DN=ou=myOrganizationUnit
LOST_LDAP_BIND_USER_DN=cn=binduser,dc=example,dc=com
LOST_LDAP_BIND_USER_PASSWORD=ldap_bind_password

For more LDAP configuration options, see the Flask LDAP Documentation.

It is important that all LDAP environment variables are prefixed with LOST so that the settings are applied:
LOST_LDAP_GROUP_OBJECT_FILTER=(objectclass=posixGroup)
LOST_LDAP_GROUP_DN=
LOST_LDAP_USER_RDN_ATTR=cn
LOST_LDAP_USER_LOGIN_ATTR=uid
LOST_LDAP_USE_SSL=False
LOST_LDAP_ADD_SERVER=True

Note

Users logging into LOST for the first time using LDAP are automatically assigned the Annotator role. If you want to assign another role to the user, you have to do so in the user management in the Admin Area.

Note

The resolution of groups via LDAP is not yet supported. If a LOST group should be assigned to an LDAP user, this must be done via the user management in the Admin Area.

Warning

If a local user with the same user name as a new LDAP user already exists, the local user settings will be overwritten by those of the LDAP user.

JupyterLab

The JupyterLab integration is primarily intended for pipeline developers and quick experiments in LOST. It makes it easy to access all pipelines and their elements at any time and to manipulate them through a web interface. Via the LOST pyAPI you can interactively explore the same operations that are executed in the scripts of the annotation pipelines.

In order to activate the JupyterLab Integration you have to add the following environment variables in your .env file:

LOST_JUPYTER_LAB_ACTIVE=True
LOST_JUPYTER_LAB_ROOT_PATH=/code/src
LOST_JUPYTER_LAB_TOKEN=mysecrettoken
LOST_JUPYTER_LAB_PORT=8888

In addition, the port for the JupyterLab must be enabled in the lost service of your docker-compose.yml file:

ports:
    - "${LOST_FRONTEND_PORT}:8080"
    - "${LOST_JUPYTER_LAB_PORT:-8888}:8888"

Once the JupyterLab integration has been activated, the started JupyterLab can be accessed via the GUI in the Admin Area. Within the Admin Area, a tab (far right) now appears that contains the link to the JupyterLab.
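
As a quick sanity check that the integration works, a notebook cell could use the classes documented above. The snippet below is only a sketch; how you obtain a database manager instance depends on your setup:

# Hypothetical notebook cell: inspect a label tree via the pyAPI.
# `dbm` is assumed to be a lost.db.access.DBMan instance.
from lost.logic.label import LabelTree

tree = LabelTree(dbm, root_id=1)   # root_id=1 is illustrative
print(tree.to_df())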

Warning

The environment variable LOST_JUPYTER_LAB_ROOT_PATH defines from which path JupyterLab is started inside the docker container. If this path is not in a location mounted into the docker container, notebooks and other data will not be stored persistently.

Danger

Using JupyterLab gives full access to the database and connected file systems. The JupyterLab integration should therefore only be used in development environments and in no case in production systems.

Git Access Token

With the help of the Git configuration, you can have your Git access data (Personal Access Token) stored in the container. This means that, for example, private Git repositories can be used within the JupyterLab environment without having to enter a password. Furthermore, the configuration of the Git settings is necessary so that private Git repositories can be imported via the GUI.

In order to configure your Git authentication you have to add the following environment variables in your .env file:

LOST_GIT_USER=Git User
LOST_GIT_EMAIL=myemail
LOST_GIT_ACCESS_TOKEN=https://mygitusername:mygitaccesstoken@github.com

Nginx Configuration

Configuration File

When the lost container is started, the matching nginx configuration file (depending on the debug mode) is copied from the repository to

/etc/nginx/conf.d/default.conf

by the entrypoint.sh script.

Both nginx configuration files (debug mode and production) can be found at: lost/docker/lost/nginx in our GitHub repository.

Custom Configuration File

If a custom configuration file is desired, this file must be mounted from the host machine into the lost container.

volumes:
    - /host/path/to/nginx/conf:/etc/nginx/conf.d/default.conf

Note

By default, files with a maximum size of 1GB can be uploaded in LOST. To change the maximum size you have to change the value client_max_body_size 1024M; inside the nginx configuration file. In addition, the environment variable LOST_MAX_FILE_UPLOAD_SIZE must also be adjusted in the LOST configuration.
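
For example, to allow uploads of up to 2GB, the directive in your (custom) nginx configuration file would be changed along these lines (the value is illustrative); remember to mirror the new limit in LOST_MAX_FILE_UPLOAD_SIZE:

client_max_body_size 2048M;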

LOST Setup

Default Setup with Docker

LOST provides a quick_setup script that will configure LOST and instruct you how to start it. We designed this script for Linux environments, but it will also work on Windows host machines.

LOST releases are hosted on DockerHub and shipped in Containers. For a quick setup perform the following steps (these steps have been tested for Ubuntu):

  1. Install docker on your machine or server:

    https://docs.docker.com/install/

  2. Install docker-compose:

    https://docs.docker.com/compose/install/

  3. Clone LOST:
    git clone https://github.com/l3p-cv/lost.git
    
  4. Install the cryptography package in your python environment:
    pip install cryptography
    
  5. Run quick_setup script:
    cd lost/docker/quick_setup/
    python3 quick_setup.py /path/to/install/lost --release 2.0.0
    

If you want to use phpmyadmin, you can enable it via an argument:

    python3 quick_setup.py /path/to/install/lost --release 2.0.0 --phpmyadmin
    
  6. Run LOST:

    Follow instructions of the quick_setup script, printed in the command line.

Note

The quick_setup script has now created the docker configuration files docker-compose.yml and .env. The additional configurations described in the following sections usually refer to these two files.

Setup On Linux (without docker)

#TODO: JG

Contribution Guide

#TODO JJ Review

How to contribute new features or bug fixes?

  1. Select a feature you want to implement / a bug you want to fix from the lost issue list
    • If you have a new feature, create a new feature request
  2. State in the issue comments that you are willing to implement the feature/ fix the bug
  3. We will respond to your comment
  4. Implement the feature
  5. Create a pull request

How to do backend development?

The backend is written in python. We use flask as web server and celery to execute time-consuming tasks.

If you want to adjust backend code and test your changes please perform the following steps:

  1. Install LOST as described in LOST QuickSetup.
  2. Adjust the DEBUG variable in the .env config file. This file should be located at lost_install_dir/docker/.env.
The following change needs to be performed in the .env file; it will cause the LOST flask server to start in debug mode:
    DEBUG=True
  3. In order to run your code, you need to mount it into the docker container. You can do this by adding docker volumes in the docker-compose.yml file. The file should be located at lost_install_dir/docker/docker-compose.yml. Do this for all containers in the compose file that contain lost source code (lost, lost-cv, lost-cv-gpu).
Adjustments to docker-compose.yml: mount your backend code into the docker container.
  version: '2'
  services:
      lost:
          image: l3pcv/lost:${LOST_VERSION}
          container_name: lost
          command: bash /entrypoint.sh
          env_file:
            - .env
          volumes:
            - ${LOST_DATA}:/home/lost
            - </path/to/lost_clone>/backend/lost:/code/src/backend/lost

Note

Because flask is running in debug mode, code changes are applied immediately. An exception to this behaviour are changes to code related to celery tasks; after such changes, lost needs to be restarted manually for the changes to take effect.
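
Assuming the service name lost from the compose file above, such a manual restart could look like this:

docker-compose restart lost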

How to do frontend development?

The frontend is developed with React, Redux, CoreUI and reactstrap.

  1. To start developing the frontend, follow the LOST QuickSetup instructions.
  2. Change directory to the frontend folder and install the npm packages:
    cd lost/frontend/lost/
    npm i
  3. [Optional] Set the backend port in the package.json start script with the REACT_APP_PORT variable.
  4. Start the development server with:
    npm start
Frontend Applications

Application                      Directory
Dashboard                        src/components/Dashboard
SIA (Single Image Annotation)    src/components/SIA
MIA (Multi Image Annotation)     src/components/MIA
Running Pipeline                 src/components/pipeline/src/running
Start Pipeline                   src/components/pipeline/src/start
Labels                           src/components/Labels
Workers                          src/components/Workers
Users                            src/components/Users

Building lost containers locally

  • The whole build process is described in .gitlab-ci.yml.
  • All required docker files are provided in lost/docker within the lost repo.
  • There are 3 lost containers that execute scripts and the webserver:
    • lost: Runs the webserver and provides the basic environment where scripts can be executed.
    • lost-cv: Provides a computer vision environment in order to execute scripts that require special libraries like opencv.
    • lost-cv-gpu: Provides gpu support for scripts that use libraries that need gpu support, like tensorflow.
  • Building the lost container
    • The lost container inherits from lost-base.
    • As a first step, build lost-base. The Dockerfile is located at lost/docker/lost-base.
    • After that you can build the lost container using your local version of lost-base. The Dockerfile can be found at: lost/docker/lost
