Who’s afraid of Machine Learning? Part 5 : Running ML-Kit On Device

Intro to ML & ML-Kit for mobile developers

Britt Barak
Google Developer Experts

--

The last posts gave an intro to ML and to MLKit, and discussed why we need a mobile-specific solution for ML capabilities.

Now… Time to write some code!

Before getting started:

  1. Clone this project, which contains the starter code and the implementation for each step: https://github.com/brittBarak/MLKitDemo
  2. Add Firebase to your app:
  • In the Firebase console, under the General tab → Your Apps section, choose “Add app”.
  • Follow the steps in the Firebase tutorial to add Firebase to your app.

3. Add the firebase-ml-vision library to your app: in your app-level build.gradle file, add:

dependencies {
    // ...
    implementation 'com.google.firebase:firebase-ml-vision:17.0.0'
}

The firebase-ml-vision library supports all the logic needed for all of the MLKit out-of-the-box use cases that have to do with vision (which are all of those currently available, as outlined in a previous post).

As mentioned, we’ll use a local (on-device) detector, a cloud-based detector and a custom one. Each has 4 steps:

0. Setting up (it’s not cheating :) it doesn’t really count as a step…)

  1. Set up the classifier
  2. Process the input
  3. Run the model
  4. Process the output
  • Note: If you prefer to follow along with the final code, you can find it on branch 1.run_local_model of the demo’s repo.

Running a local (on-device) model

Choosing a local model is the lightweight, offline-supported option. In return, its accuracy is limited, which we must take into account.

The UI takes the bitmap → calls ImageClassifier.executeLocal(bitmap) → ImageClassifier calls LocalClassifier.execute().
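To make that flow concrete, here’s a minimal sketch of the UI side. The Activity and its handler method are hypothetical, and ClassifierCallback is the small interface defined at the end of this post:

public class MainActivity extends AppCompatActivity {

    private final ImageClassifier imageClassifier = new ImageClassifier();

    // Hypothetical handler, e.g. wired to a button or to picking an image
    private void classify(Bitmap bitmap) {
        imageClassifier.executeLocal(bitmap, new ClassifierCallback() {
            @Override
            public void onClassified(String modelTitle, List<String> topLabels) {
                // Update the UI with the returned labels
            }
        });
    }
}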

Step 0: Setting up

  1. Add the local detector, provided by Firebase MLKit, to your app:

In your app-level build.gradle file, add:

dependencies {
    // ...
    implementation 'com.google.firebase:firebase-ml-vision-image-label-model:15.0.0'
}

Optional, but recommended: by default, the ML model itself is downloaded only once you execute the detector for the first time. This means there will be some latency at the first execution, and network access is required. To bypass that, and have the ML model downloaded as the app is installed from the Play Store, simply add the following declaration to your app’s AndroidManifest.xml file:

<application ...>
    ...
    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="label" />
    <!-- To use multiple models: android:value="label,barcode,face..." -->
</application>

Step 1: Set Up the Classifier

Create a LocalClassifier class that holds the detector object:

public class LocalClassifier {

    private FirebaseVisionLabelDetector detector =
            FirebaseVision.getInstance().getVisionLabelDetector();
}

This is the basic detector instance. You can be pickier about the output returned and add a confidence threshold, which is between 0 and 1, with 0.5 as the default.

public class LocalClassifier {

    FirebaseVisionLabelDetectorOptions localDetectorOptions =
            new FirebaseVisionLabelDetectorOptions.Builder()
                    .setConfidenceThreshold(ImageClassifier.CONFIDENCE_THRESHOLD)
                    .build();

    private FirebaseVisionLabelDetector detector =
            FirebaseVision.getInstance().getVisionLabelDetector(localDetectorOptions);
}

Step 2: Process The Input

FirebaseVisionLabelDetector knows how to work with an input of type FirebaseVisionImage. You can obtain a FirebaseVisionImage instance from any of the following (a sketch of the less common variants follows the list):

  • Bitmap — which is what we’ll do in this demo app. For simplicity, I saved the images as static files in the assets folder.
  • Image Uri — if the image is stored on the device, for example, in the user’s Gallery.
  • Media Image — if we get the input image from media, for example, from the device camera.
  • ByteArray
  • ByteBuffer
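For reference, here’s roughly what the other factory methods look like. Note the extra arguments: a rotation for a media image, and a metadata object describing the frame for raw bytes. Treat this as a sketch of the API surface (context, uri, mediaImage, rotation and byteArray are assumed to exist in your code, and the metadata values are placeholders):

// From a file Uri (may throw IOException):
FirebaseVisionImage fromUri = FirebaseVisionImage.fromFilePath(context, uri);

// From a media Image (e.g. the device camera), with the frame's rotation:
FirebaseVisionImage fromMedia = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);

// From raw bytes, described by a metadata object:
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setWidth(480)
        .setHeight(360)
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setRotation(FirebaseVisionImageMetadata.ROTATION_0)
        .build();
FirebaseVisionImage fromBytes = FirebaseVisionImage.fromByteArray(byteArray, metadata);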

Since we work with a Bitmap, the input processing is done simply, like this:

public class LocalClassifier {
    //...

    FirebaseVisionImage image;

    public void execute(Bitmap bitmap) {
        image = FirebaseVisionImage.fromBitmap(bitmap);
    }
}
  • Tip: one of the reasons we’d want to use a local model is that its execution is quicker. Still, executing any model takes some time. If you use the model in a real-time application, you might need the results even faster. Reducing the bitmap size before moving to the next step can improve the model’s processing time.
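For example, a simple downscale inside execute() could look like this (the 224×224 target size here is just an assumption; pick whatever preserves enough detail for your use case):

public void execute(Bitmap bitmap) {
    // Hypothetical pre-scaling step: smaller bitmaps are faster to process
    Bitmap scaled = Bitmap.createScaledBitmap(bitmap, 224, 224, true);
    image = FirebaseVisionImage.fromBitmap(scaled);
}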

Step 3: Run The Model

This is where the magic happens! 🔮 Since the model does take some computation time, we should have the model run asynchronously and return the success or failure result using listeners.

public class LocalClassifier {
    //...

    public void execute(Bitmap bitmap,
                        OnSuccessListener<List<FirebaseVisionLabel>> successListener,
                        OnFailureListener failureListener) {
        //...
        detector.detectInImage(image)
                .addOnSuccessListener(successListener)
                .addOnFailureListener(failureListener);
    }
}

Step 4: Process The Output

The detection output is provided to the OnSuccessListener. I prefer to have the OnSuccessListener passed to LocalClassifier from ImageClassifier, which handles the communication between the UI and LocalClassifier.

The UI calls ImageClassifier.executeLocal(), which should look something like this:

In ImageClassifier.java:

LocalClassifier localClassifier = new LocalClassifier();

public void executeLocal(Bitmap bitmap, ClassifierCallback callback) {
    successListener = new OnSuccessListener<List<FirebaseVisionLabel>>() {
        public void onSuccess(List<FirebaseVisionLabel> labels) {
            processLocalResult(labels, callback);
        }
    };
    localClassifier.execute(bitmap, successListener, failureListener);
}
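The failureListener isn’t shown above; a minimal version might just log the error and let the UI know. For example (the tag and message here are mine):

failureListener = new OnFailureListener() {
    @Override
    public void onFailure(@NonNull Exception e) {
        Log.e("ImageClassifier", "Local model failed to classify image", e);
        // Optionally report the failure back to the UI as well
    }
};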

processLocalResult() just prepares the output labels to display in the UI.

In my specific case, I chose to display the 3 results with the highest probability. You may choose any other format. To complete the picture, this is my implementation:

In ImageClassifier.java:

void processLocalResult(List<FirebaseVisionLabel> labels, ClassifierCallback callback) {
    labels.sort(localLabelComparator);
    resultLabels.clear();
    FirebaseVisionLabel label;

    for (int i = 0; i < Math.min(3, labels.size()); ++i) {
        label = labels.get(i);
        resultLabels.add(label.getLabel() + ":" + label.getConfidence());
    }
    callback.onClassified("Local Model", resultLabels);
}
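localLabelComparator isn’t shown in the snippet above; here’s one way it could be defined, sorting the labels by confidence in descending order so the most probable ones come first:

Comparator<FirebaseVisionLabel> localLabelComparator = new Comparator<FirebaseVisionLabel>() {
    @Override
    public int compare(FirebaseVisionLabel label1, FirebaseVisionLabel label2) {
        // Descending by confidence
        return Float.compare(label2.getConfidence(), label1.getConfidence());
    }
};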

ClassifierCallback is a simple interface I created in order to communicate the results back to the UI for display. We could, of course, use any other approach.

interface ClassifierCallback {
    void onClassified(String modelTitle, List<String> topLabels);
}
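On the UI side, the Activity (or any presenter class) can implement this interface and render the labels, as an alternative to the anonymous listener sketched earlier. A minimal version, assuming a hypothetical resultTextView somewhere in the layout:

public class MainActivity extends AppCompatActivity implements ClassifierCallback {
    //...

    @Override
    public void onClassified(String modelTitle, List<String> topLabels) {
        resultTextView.setText(modelTitle + ": " + TextUtils.join(", ", topLabels));
    }
}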

That’s it!

You used your first ML model to classify an image! 🎉 How simple was that?!

Let’s run the app and see some results!

Pretty good!!! We got some general labels like “food” or “fruit” that definitely fit the image, but I’d expect the model to be able to tell me which fruit it is…

Get the final code for this part on the demo’s repo, on branch 1.run_local_model.

Next up: let’s try to get some more indicative and accurate labels by using the cloud-based detector… in the next post!
