How to Use Deeplearning4J in Android Apps

Generally speaking, training a neural network is a task best suited for powerful computers with multiple GPUs. But what if you want to do it on your humble Android phone or tablet? Well, it’s definitely possible. Considering an average Android device’s specifications, however, it will most likely be quite slow. If that’s not a problem for you, keep reading.

In this tutorial, I’ll show you how to use Deeplearning4J, a popular Java-based deep learning library, to create and train a neural network on an Android device.

Prerequisites

For best results, you’ll need the following:

  • An Android device or emulator that runs API level 21 or higher, and has about 200 MB of internal storage space free. I strongly suggest you use an emulator first because you can quickly tweak it in case you run out of memory or storage space.
  • Android Studio 2.2 or newer

Configuring Your Android Studio Project

To be able to use Deeplearning4J in your project, add the following compile dependencies to your app module’s build.gradle file:

compile 'org.deeplearning4j:deeplearning4j-core:0.8.0'
compile 'org.nd4j:nd4j-native:0.8.0'
compile 'org.nd4j:nd4j-native:0.8.0:android-x86'
compile 'org.nd4j:nd4j-native:0.8.0:android-arm'
compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-x86'
compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-arm'

Android Studio 3.0 introduced a new Gradle plugin that requires annotation processors to be declared explicitly. If you are using it, add the following line to your Gradle dependencies:

annotationProcessor 'org.projectlombok:lombok:1.16.16'

If you get an error like Error:Error converting bytecode to dex: Cause: com.android.dex.DexException: Multiple dex files define Ledu/umd/cs/findbugs/annotations/Nullable;, add the following dependency:

compile 'com.google.code.findbugs:annotations:3.0.1', { 
  exclude module: 'jsr305'
  exclude module: 'jcip-annotations'
}

If you want to include a snapshot version of DL4J/ND4J, specify the platform as part of the version string, because the classifier notation doesn't work with snapshots:

compile group: 'org.nd4j', name: 'nd4j-native', version: '0.8.1-SNAPSHOT-android-arm'

As you can see, DL4J depends on ND4J, short for N-Dimensional Arrays for Java, a library that offers fast n-dimensional arrays. ND4J in turn depends on a library called OpenBLAS, which contains platform-specific native code. Therefore, you must load versions of OpenBLAS and ND4J that match the architecture of your Android device. Because I own an x86 device, I'm using android-x86 as the platform.

The dependencies of DL4J and ND4J include several files with identical names. To avoid build errors, add the following exclude parameters to your packagingOptions block:

packagingOptions {
    exclude 'META-INF/DEPENDENCIES'
    exclude 'META-INF/DEPENDENCIES.txt'
    exclude 'META-INF/LICENSE'
    exclude 'META-INF/LICENSE.txt'
    exclude 'META-INF/license.txt'
    exclude 'META-INF/NOTICE'
    exclude 'META-INF/NOTICE.txt'
    exclude 'META-INF/notice.txt'
    exclude 'META-INF/INDEX.LIST'
}

It is quite likely that you will also see errors like this one: Error:Execution failed for task ':app:transformResourcesWithMergeJavaResForDebug'. > More than one file was found with OS independent path 'org/bytedeco/javacpp/windows-x86/msvcp120.dll'. These files must be excluded too. In my case:

exclude 'org/bytedeco/javacpp/windows-x86/msvcp120.dll'
exclude 'org/bytedeco/javacpp/windows-x86_64/msvcp120.dll'
exclude 'org/bytedeco/javacpp/windows-x86/msvcr120.dll'
exclude 'org/bytedeco/javacpp/windows-x86_64/msvcr120.dll'

Furthermore, your compiled code will have well over 65,536 methods. To be able to handle this condition, add the following option in the defaultConfig:

multiDexEnabled true
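Putting the pieces together, a minimal defaultConfig block might look like the sketch below. The applicationId and SDK version numbers here are placeholders — keep your project's own values; only the multiDexEnabled line is the addition this step requires.

```groovy
android {
    defaultConfig {
        // Placeholder values — use your project's own settings.
        applicationId "com.example.dl4jdemo"
        minSdkVersion 21
        targetSdkVersion 25

        // Required because DL4J pushes the method count past 65,536.
        multiDexEnabled true
    }
}
```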

And now, press Sync Now to update the project. Finally, make sure that your APK doesn't contain both lib/armeabi and lib/armeabi-v7a subdirectories. If it does, move all files to one or the other as some Android devices will have problems with both present.

Starting an Asynchronous Task

Training a neural network is CPU-intensive, which is why you wouldn't want to do it in your application's UI thread. As far as I know, DL4J doesn't train its networks asynchronously by default. Just to be safe, I'll spawn a separate thread using the AsyncTask class.

AsyncTask.execute(new Runnable() {
    @Override
    public void run() {
        createAndUseNetwork();
    }
});

Because the method createAndUseNetwork() doesn’t exist yet, create it.

private void createAndUseNetwork() {

}

Creating a Neural Network

DL4J has a very intuitive API. Let us now use it to create a simple multi-layer perceptron with hidden layers. It will take two input values, and spit out one output value. To create the layers, we’ll use the DenseLayer and OutputLayer classes. Accordingly, add the following code to the createAndUseNetwork() method you created in the previous step:

DenseLayer inputLayer = new DenseLayer.Builder()
        .nIn(2)
        .nOut(3)
        .name("Input")
        .build();

DenseLayer hiddenLayer = new DenseLayer.Builder()
        .nIn(3)
        .nOut(2)
        .name("Hidden")
        .build();

OutputLayer outputLayer = new OutputLayer.Builder()
        .nIn(2)
        .nOut(1)
        .name("Output")
        .build();

Now that our layers are ready, let’s create a NeuralNetConfiguration.Builder object to configure our neural network.

NeuralNetConfiguration.Builder nncBuilder = new NeuralNetConfiguration.Builder();
nncBuilder.iterations(10000);
nncBuilder.learningRate(0.01);

In the above code, I’ve set the values of two important parameters: learning rate and number of iterations. Feel free to change those values.

We must now create a NeuralNetConfiguration.ListBuilder object to actually connect our layers and specify their order.

NeuralNetConfiguration.ListBuilder listBuilder = nncBuilder.list();
listBuilder.layer(0, inputLayer);
listBuilder.layer(1, hiddenLayer);
listBuilder.layer(2, outputLayer);

Additionally, enable backpropagation by adding the following code:

listBuilder.backprop(true);

At this point, we can generate and initialize our neural network as an instance of the MultiLayerNetwork class.

MultiLayerNetwork myNetwork = new MultiLayerNetwork(listBuilder.build());
myNetwork.init();

Creating Training Data

To create our training data, we’ll be using the INDArray class, which is provided by ND4J. Here’s what our training data will look like:

INPUTS      EXPECTED OUTPUTS
------      ----------------
0,0         0
0,1         1
1,0         1
1,1         0

As you might have guessed, our neural network will behave like an XOR gate. The training data has four samples, and your code must specify that number explicitly.

final int NUM_SAMPLES = 4;

And now, create two INDArray objects for the inputs and expected outputs, and initialize them with zeroes.

INDArray trainingInputs = Nd4j.zeros(NUM_SAMPLES, inputLayer.getNIn());
INDArray trainingOutputs = Nd4j.zeros(NUM_SAMPLES, outputLayer.getNOut());

Note that the number of columns in the inputs array is equal to the number of neurons in the input layer. Similarly, the number of columns in the outputs array is equal to the number of neurons in the output layer.

Filling those arrays with the training data is easy. Just use the putScalar() method:

// If 0,0 show 0
trainingInputs.putScalar(new int[]{0,0}, 0);
trainingInputs.putScalar(new int[]{0,1}, 0);
trainingOutputs.putScalar(new int[]{0,0}, 0);

// If 0,1 show 1
trainingInputs.putScalar(new int[]{1,0}, 0);
trainingInputs.putScalar(new int[]{1,1}, 1);
trainingOutputs.putScalar(new int[]{1,0}, 1);

// If 1,0 show 1
trainingInputs.putScalar(new int[]{2,0}, 1);
trainingInputs.putScalar(new int[]{2,1}, 0);
trainingOutputs.putScalar(new int[]{2,0}, 1);

// If 1,1 show 0
trainingInputs.putScalar(new int[]{3,0}, 1);
trainingInputs.putScalar(new int[]{3,1}, 1);
trainingOutputs.putScalar(new int[]{3,0}, 0);

We won’t be using the INDArray objects directly. Instead, we’ll convert them into a DataSet.

DataSet myData = new DataSet(trainingInputs, trainingOutputs);

At this point, we can start the training by calling the fit() method of the neural network and passing the data set to it.

myNetwork.fit(myData);

And that’s all there is to it. Your neural network is ready to be used.
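To actually query the trained network, you can pass an input array to its output() method. Here's a quick sketch, appended to the end of createAndUseNetwork(); the input values are just an example, and the Log tag is an arbitrary name I've chosen:

```java
// Sketch: query the trained network with one input sample, (1, 0).
// Because the network was trained to behave like an XOR gate, the
// output should be close to 1. Assumes the myNetwork variable from above.
INDArray actualInput = Nd4j.zeros(1, 2);
actualInput.putScalar(new int[]{0, 0}, 1);
actualInput.putScalar(new int[]{0, 1}, 0);

INDArray actualOutput = myNetwork.output(actualInput);
Log.d("myNetwork Output", actualOutput.toString());
```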

Conclusion

In this tutorial, you saw how easy it is to create and train a neural network using the Deeplearning4J library in an Android Studio project. I’d like to warn you, however, that training a neural network on a low-powered, battery operated device might not always be a good idea.

This was originally posted at Progur by Ashraff Hathibelagal.
