Machine Learning Assisted Data Labeling

Machine Learning Assisted Data Labeling uses machine learning models from IBM Watson to prepare training data at scale. The models can also be used to automate business processes that require data categorization. Once a model has made predictions on an unlabeled dataset, low confidence predictions can be automatically routed to a Figure Eight job for human labeling.

Contact your Customer Success Manager for access to this feature.

Train a custom image categorization model

In this guide, we’ll be training a custom multiclass model to categorize images that contain different types of footwear.

  1. Visit the Models tab, then click Create Custom Model and input a name.


Note: We currently offer models from IBM Watson.

  2. Upload a .csv containing training data. Once the upload is complete, click Begin Training to start training the model.
    • The CSV must meet the following requirements:
      • UTF-8 encoded, with column headers image_url and label
      • Images must be JPG or PNG, 20 MB limit per image
    • We recommend providing at least 50 examples per class.
      • IBM recommends training with a minimum of 150–200 images to balance training time and accuracy. After 5,000 images there is likely to be little improvement.
    • Include negative examples to improve the accuracy of the model:
      • For binary models (e.g. Bird or Not Bird), label all negative examples (i.e. Not Bird) as a "negative_examples" class.
      • For multiclass models (e.g. Boots, Sneakers, Other), label all other examples (i.e. Not Boots, Not Sneakers) as a "negative_examples" class.
    • See IBM's documentation on its model services for more best practices.
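The requirements above can be checked before uploading. Below is a minimal sketch, assuming a local CSV file; the function name and the printed warning are illustrative, not part of the platform.

```python
import csv
from collections import Counter

def validate_training_csv(path):
    """Check a training CSV against the upload requirements:
    image_url and label headers, JPG/PNG images, and at least
    50 examples per class (warning only)."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames is None or not {"image_url", "label"} <= set(reader.fieldnames):
            raise ValueError("CSV must contain image_url and label headers")
        for row in reader:
            if not row["image_url"].lower().endswith((".jpg", ".jpeg", ".png")):
                raise ValueError(f"unsupported image type: {row['image_url']}")
            counts[row["label"]] += 1
    for label, n in counts.items():
        if n < 50:
            print(f"warning: class {label!r} has only {n} examples (50+ recommended)")
    return counts
```

The returned per-class counts also make it easy to confirm a "negative_examples" class is present before training.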

Example training data CSV for a multiclass model

Model training can take up to 30 minutes to complete. You’ll be able to review the total number of rows trained once training is complete.

Note: 20% of your training data is automatically set aside to calculate the accuracy reporting on the predictions snapshot. 
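The holdout behavior described in the note can be pictured as a simple random 80/20 split. This sketch is purely illustrative of the idea, not the platform's actual mechanism.

```python
import random

def holdout_split(rows, holdout_fraction=0.2, seed=0):
    """Illustrative 80/20 split: reserve ~20% of the rows to
    estimate accuracy, train on the remaining ~80%.
    Returns (train_rows, holdout_rows)."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * holdout_fraction)
    return rows[cut:], rows[:cut]
```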

Evaluate a custom model

It is best practice to ensure a model is making acceptable predictions before using it for a production workflow. Here is an outline of how to evaluate a model before adding it to a workflow.

  1. Upload an unlabeled dataset to begin evaluating the model.
    • The CSV should contain an image_url header.
    • A maximum of 100 rows can be evaluated at a time.
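Because evaluation accepts at most 100 rows at a time, a larger unlabeled dataset can be split into batch files first. A minimal sketch, with illustrative file naming:

```python
import csv

def write_eval_batches(urls, prefix="eval_batch", batch_size=100):
    """Split image URLs into CSV files of at most `batch_size` rows,
    each with the required image_url header. Returns the file paths."""
    paths = []
    urls = list(urls)
    for i in range(0, len(urls), batch_size):
        path = f"{prefix}_{i // batch_size + 1}.csv"
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.writer(f)
            writer.writerow(["image_url"])
            writer.writerows([url] for url in urls[i:i + batch_size])
        paths.append(path)
    return paths
```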
  2. Review model predictions using the predictions snapshot. Filter the predictions snapshot by class and minimum confidence to view model predictions.


Note: You can always add more training data to the model to improve its predictions by returning to the Train tab.

Use a custom model in a workflow

Now that you’ve trained a model, use it to make predictions on unlabeled data in a production workflow. To route low confidence predictions for human labeling, create a job using the Image Categorization for Model Routing template for this model before you start creating a workflow. 

Important Notes

  • Finalize your job design and add test questions before adding a job to a workflow. Make sure to update the following job-level conditions and settings:
    • CML values exactly match the model's class labels.
      • Make sure to include a CML value for "negative_examples", labeled Other, N/A, Does Not Apply, etc.
    • Data is displayed using {{image_url}} tags.
    • Job instructions are updated to reflect your use case.
    • "Automatic launching of rows" (Job > Settings > API) is enabled.
  • Do not change the job design or routing logic after a workflow has been launched.
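As a rough illustration of those job-design requirements, a multiclass job body might look like the CML below. The question wording and labels are placeholders based on this guide's footwear example; what matters is that each value attribute exactly matches a model class label, including "negative_examples", and that the row data is displayed with an {{image_url}} tag.

```xml
<!-- Illustrative CML sketch; adjust names and labels to your job. -->
<img src="{{image_url}}" />
<cml:radios label="What type of footwear is shown?" validates="required" name="category">
  <cml:radio label="Boots" value="boots" />
  <cml:radio label="Sneakers" value="sneakers" />
  <cml:radio label="Other / N/A" value="negative_examples" />
</cml:radios>
```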
  1. Visit the Workflows tab, then click Create a Workflow. You can select from one of the following workflow types:
    • Model Only - receive predictions from a model
    • Model then Job - receive predictions from a model, then route low confidence predictions to a Figure Eight job for human labeling


For this example, we’ve selected Model then Job.

  2. Select a model to add to the workflow.


Note: A model cannot be used in more than one workflow. However, multiple workflows can be configured to route rows to the same job.

  3. Add a job, then set routing rules.
    • Enter a confidence threshold that will be used to determine which low confidence rows should be routed to your job. For example, a threshold of 0.8 means rows with confidence between 0.2 and 0.8 will be sent to the job, since the model does not have at least the required confidence in either class.
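The routing band in that example can be expressed as a small predicate. This is an illustrative sketch of the rule as described, not the platform's internal routing code, and the boundary handling at exactly 0.2 or 0.8 is an assumption.

```python
def should_route(confidence, threshold=0.8):
    """Return True when a prediction's confidence falls inside the
    low-confidence band and the row should go to human labeling.

    With threshold 0.8, confidences strictly between 0.2 and 0.8
    qualify: below 0.2 the model is effectively confident in the
    opposite class, above 0.8 in the predicted one."""
    return (1 - threshold) < confidence < threshold
```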


  4. Click Save to create the workflow.

  5. Upload unlabeled data.
    • The CSV should contain an image_url header.


  6. Proceed to the Launch page and review the model and job ID before you launch the workflow.


Note: Launched workflows will remain in Running status unless they are manually turned off.

  7. As the model completes its predictions, rows will be routed to the job based on your rules. We recommend monitoring your job once it has been automatically launched.

 

Workflow Reports

Workflow report column header definitions:

  • error
    • returns a boolean
    • if 'error = true', the model could not make a prediction on the row and the row could not be routed
  • completed
    • returns a boolean
    • references the row status
    • if 'completed = false', the row is still in progress
    • if 'completed = true', the row has been completed by a human labeler or the model
  • machine_label:example_class
    • returns a confidence value between 0.0 and 1.0
    • expresses the confidence value associated with each model prediction
  • human_label:example_class
    • defaults to blank
    • expresses aggregated human judgments as a confidence value between 0.0 and 1.0

Example row statuses:

  • awaiting prediction: error = false, completed = false, machine_label = null, human_label = null
  • received prediction, did not qualify for routing: error = false, completed = true, machine_label = confidence value (0.0 - 1.0), human_label = null
  • received prediction, qualified for routing and awaiting human label: error = false, completed = false, machine_label = confidence value (0.0 - 1.0), human_label = null
  • received prediction, qualified for routing and received human label: error = false, completed = true, machine_label = confidence value (0.0 - 1.0), human_label = confidence value (0.0 - 1.0)
  • error - model could not make a prediction: error = true, completed = true, machine_label = null, human_label = null

 

Note: Rows will not contain prediction values if they are in an error = true or completed = false state.
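The status definitions above can be applied when post-processing a downloaded workflow report. A minimal sketch, assuming the column names from the definitions list; the `example_class` suffix is a placeholder for your actual class name, and the returned status strings are illustrative.

```python
def row_status(row):
    """Classify a workflow-report row (a dict of column -> string)
    using the error/completed flags and the label columns."""
    if row["error"] == "true":
        return "error - model could not make a prediction"
    has_machine = bool(row.get("machine_label:example_class"))
    has_human = bool(row.get("human_label:example_class"))
    if row["completed"] == "false":
        # In-progress rows: either no prediction yet, or routed and
        # waiting on a human label.
        return "awaiting human label" if has_machine else "awaiting prediction"
    return "completed by human labeler" if has_human else "completed by model"
```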

