How to Use Deeplearning4J in Android Apps
Generally speaking, training a neural network is a task best suited for powerful computers with multiple GPUs. But what if you want to do it on your humble Android phone or tablet? Well, it’s definitely possible. Considering an average Android device’s specifications, however, it will most likely be quite slow. If that’s not a problem for you, keep reading.
In this tutorial, I’ll show you how to use Deeplearning4J, a popular Java-based deep learning library, to create and train a neural network on an Android device.
Prerequisites
For best results, you’ll need the following:
- An Android device or emulator that runs API level 21 or higher, and has about 200 MB of internal storage space free. I strongly suggest you use an emulator first because you can quickly tweak it in case you run out of memory or storage space.
- Android Studio 2.2 or newer
Configuring Your Android Studio Project
To be able to use Deeplearning4J in your project, add the following compile dependencies to your app module's build.gradle file:
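Here's a sketch of what those dependencies can look like for an x86 target. Treat the version numbers (0.9.1 for DL4J/ND4J and 0.2.19-1.3 for the OpenBLAS preset) as examples only, and use the latest stable releases that match your setup:

```groovy
dependencies {
    compile 'org.deeplearning4j:deeplearning4j-core:0.9.1'
    compile 'org.nd4j:nd4j-native:0.9.1'
    compile 'org.nd4j:nd4j-native:0.9.1:android-x86'
    compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3'
    compile 'org.bytedeco.javacpp-presets:openblas:0.2.19-1.3:android-x86'
}
```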
Android Studio 3.0 introduced a new version of the Gradle plugin, which also requires annotation processors to be declared explicitly. If you are using it, add the following to your Gradle dependencies:
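As far as I can tell, the processor that needs declaring is Project Lombok's, which DL4J uses internally. The version below is only an example, and if your build complains about a different processor, declare that one instead:

```groovy
dependencies {
    annotationProcessor 'org.projectlombok:lombok:1.16.16'
}
```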
If you run into an error like the following:
Error:Error converting bytecode to dex:
Cause: com.android.dex.DexException: Multiple dex files define Ledu/umd/cs/findbugs/annotations/Nullable;
Add the following dependency:
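In my case, the clash came from the FindBugs annotations, and pulling in that artifact while excluding its duplicated modules fixed it. Roughly (the version is illustrative):

```groovy
dependencies {
    compile('com.google.code.findbugs:annotations:3.0.1') {
        exclude module: 'jsr305'
        exclude module: 'jcip-annotations'
    }
}
```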
If you want to include a snapshot version of DL4J/ND4J, you should declare the dependencies as shown below, because the compact classifier notation doesn't work for snapshots.
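Here's roughly what that looks like. The map-style notation lets you spell out the classifier explicitly; the snapshot version string is only an example, and you'll also need the Sonatype snapshot repository in your repositories block:

```groovy
repositories {
    maven { url 'https://oss.sonatype.org/content/repositories/snapshots' }
}

dependencies {
    compile group: 'org.deeplearning4j', name: 'deeplearning4j-core', version: '0.9.2-SNAPSHOT'
    compile group: 'org.nd4j', name: 'nd4j-native', version: '0.9.2-SNAPSHOT'
    compile group: 'org.nd4j', name: 'nd4j-native', version: '0.9.2-SNAPSHOT', classifier: 'android-x86'
}
```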
As you can see, DL4J depends on ND4J, short for N-Dimensions for Java, which is a library that offers fast n-dimensional arrays. ND4J internally depends on a library called OpenBLAS, which contains platform-specific native code. Therefore, you must load a version of OpenBLAS and ND4J that matches the architecture of your Android device. Because I own an x86 device, I'm using android-x86 as the platform.
Dependencies of DL4J and ND4J have several files with identical names. In order to avoid build errors, add the following exclude parameters to your packagingOptions block.
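The exact list depends on which versions you pull in, but a typical set of excludes looks something like this:

```groovy
android {
    packagingOptions {
        exclude 'META-INF/DEPENDENCIES'
        exclude 'META-INF/DEPENDENCIES.txt'
        exclude 'META-INF/LICENSE'
        exclude 'META-INF/LICENSE.txt'
        exclude 'META-INF/license.txt'
        exclude 'META-INF/NOTICE'
        exclude 'META-INF/NOTICE.txt'
        exclude 'META-INF/notice.txt'
        exclude 'META-INF/INDEX.LIST'
    }
}
```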
Even so, it is quite likely that you will still see errors like:
Error:Execution failed for task ':app:transformResourcesWithMergeJavaResForDebug'.
> More than one file was found with OS independent path 'org/bytedeco/javacpp/windows-x86/msvcp120.dll'
These files need to be excluded as well. In my case:
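The paths to exclude are whatever your own error messages name. For me they were the JavaCPP Windows runtime DLLs, so the additional entries looked roughly like this:

```groovy
packagingOptions {
    exclude 'org/bytedeco/javacpp/windows-x86/msvcp120.dll'
    exclude 'org/bytedeco/javacpp/windows-x86_64/msvcp120.dll'
    exclude 'org/bytedeco/javacpp/windows-x86/msvcr120.dll'
    exclude 'org/bytedeco/javacpp/windows-x86_64/msvcr120.dll'
}
```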
Furthermore, your compiled code will have well over 65,536 methods. To be able to handle this condition, add the following option in the defaultConfig block:
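That option is Android's multidex support:

```groovy
defaultConfig {
    multiDexEnabled true
}
```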
And now, press Sync Now to update the project. Finally, make sure that your APK doesn't contain both lib/armeabi and lib/armeabi-v7a subdirectories. If it does, move all files to one or the other, as some Android devices have problems when both are present.
Starting an Asynchronous Task
Training a neural network is CPU-intensive, which is why you wouldn't want to do it in your application's UI thread. I'm not too sure if DL4J trains its networks asynchronously by default. Just to be safe, I'll spawn a separate thread now using the AsyncTask class.
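I'm putting the call in the activity's onCreate() method; that placement is my own choice, and any point after the UI is ready works just as well:

```java
// Kick off network creation and training away from the UI thread
AsyncTask.execute(new Runnable() {
    @Override
    public void run() {
        createAndUseNetwork();
    }
});
```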
Because the method createAndUseNetwork() doesn't exist yet, create it.
Creating a Neural Network
DL4J has a very intuitive API. Let us now use it to create a simple multi-layer perceptron with hidden layers. It will take two input values, and spit out one output value. To create the layers, we'll use the DenseLayer and OutputLayer classes. Accordingly, add the following code to the createAndUseNetwork() method you created in the previous step:
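The layer sizes below are simply the ones I picked for this example: two input neurons, a small hidden layer, and a single output neuron. Assuming a 0.9.x-era DL4J API (and the matching imports from org.deeplearning4j.nn.conf.layers), the builders look like this:

```java
DenseLayer inputLayer = new DenseLayer.Builder()
        .nIn(2)      // two input values
        .nOut(3)
        .name("Input")
        .build();

DenseLayer hiddenLayer = new DenseLayer.Builder()
        .nIn(3)
        .nOut(2)
        .name("Hidden")
        .build();

OutputLayer outputLayer = new OutputLayer.Builder()
        .nIn(2)
        .nOut(1)     // one output value
        .name("Output")
        .build();
```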
Now that our layers are ready, let's create a NeuralNetConfiguration.Builder object to configure our neural network.
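A minimal configuration, again assuming the 0.9.x API where the learning rate and iteration count are set directly on the builder, might be:

```java
NeuralNetConfiguration.Builder nncBuilder = new NeuralNetConfiguration.Builder();
nncBuilder.iterations(10000);
nncBuilder.learningRate(0.01);
```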
In the above code, I’ve set the values of two important parameters: learning rate and number of iterations. Feel free to change those values.
We must now create a NeuralNetConfiguration.ListBuilder object to actually connect our layers and specify their order.
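The list builder is obtained from the configuration builder we just created, and the indices determine the order of the layers:

```java
NeuralNetConfiguration.ListBuilder listBuilder = nncBuilder.list();
listBuilder.layer(0, inputLayer);
listBuilder.layer(1, hiddenLayer);
listBuilder.layer(2, outputLayer);
```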
Additionally, enable backpropagation by adding the following code:
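In the 0.9.x API this is a single flag on the list builder:

```java
listBuilder.backprop(true);
```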
At this point, we can generate and initialize our neural network as an instance of the MultiLayerNetwork class.
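The variable name myNetwork is my own; the pattern is simply build the configuration, construct the network, and initialize it:

```java
MultiLayerNetwork myNetwork = new MultiLayerNetwork(listBuilder.build());
myNetwork.init();
```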
Creating Training Data
To create our training data, we'll be using the INDArray class, which is provided by ND4J. Here's what our training data will look like:
INPUTS EXPECTED OUTPUTS
------ ----------------
0,0 0
0,1 1
1,0 1
1,1 0
As you might have guessed, our neural network will behave like an XOR gate. The training data has four samples, and you must specify that number in your code.
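A simple constant (the name is my own) will do:

```java
final int NUM_SAMPLES = 4;
```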
And now, create two INDArray objects for the inputs and expected outputs, and initialize them with zeroes.
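Nd4j.zeros() does exactly that; the column counts match the layer sizes chosen earlier:

```java
// 4 samples x 2 columns, one column per input neuron
INDArray trainingInputs = Nd4j.zeros(NUM_SAMPLES, 2);

// 4 samples x 1 column, one column per output neuron
INDArray trainingOutputs = Nd4j.zeros(NUM_SAMPLES, 1);
```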
Note that the number of columns in the inputs array is equal to the number of neurons in the input layer. Similarly, the number of columns in the outputs array is equal to the number of neurons in the output layer.
Filling those arrays with the training data is easy. Just use the putScalar() method:
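Each call addresses a single cell by its row and column indices; the rows below follow the table above:

```java
// Row 0: inputs 0,0 -> expected output 0
trainingInputs.putScalar(new int[]{0, 0}, 0);
trainingInputs.putScalar(new int[]{0, 1}, 0);
trainingOutputs.putScalar(new int[]{0, 0}, 0);

// Row 1: inputs 0,1 -> expected output 1
trainingInputs.putScalar(new int[]{1, 0}, 0);
trainingInputs.putScalar(new int[]{1, 1}, 1);
trainingOutputs.putScalar(new int[]{1, 0}, 1);

// Row 2: inputs 1,0 -> expected output 1
trainingInputs.putScalar(new int[]{2, 0}, 1);
trainingInputs.putScalar(new int[]{2, 1}, 0);
trainingOutputs.putScalar(new int[]{2, 0}, 1);

// Row 3: inputs 1,1 -> expected output 0
trainingInputs.putScalar(new int[]{3, 0}, 1);
trainingInputs.putScalar(new int[]{3, 1}, 1);
trainingOutputs.putScalar(new int[]{3, 0}, 0);
```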
We won’t be using the INDArray
objects directly. Instead, we’ll convert them into a DataSet
.
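This is the DataSet class from org.nd4j.linalg.dataset; its constructor takes the features first and the labels second:

```java
DataSet myData = new DataSet(trainingInputs, trainingOutputs);
```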
At this point, we can start the training by calling the fit() method of the neural network and passing the data set to it.
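With everything wired up, training is a single call (expect it to take a while on a phone):

```java
myNetwork.fit(myData);
```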
And that’s all there is to it. Your neural network is ready to be used.
Conclusion
In this tutorial, you saw how easy it is to create and train a neural network using the Deeplearning4J library in an Android Studio project. I’d like to warn you, however, that training a neural network on a low-powered, battery operated device might not always be a good idea.
This was originally posted at Progur by Ashraff Hathibelagal.