Keras Model Import: Supported Features
Little-known fact: Deeplearning4j’s creator, Skymind, has two of the top six Keras contributors on its team, making it the largest contributor to Keras after Keras creator Francois Chollet, who’s at Google.
While not every concept in DL4J has an equivalent in Keras and vice versa, many of the key concepts can be matched. Importing Keras models into DL4J is done in our deeplearning4j-modelimport module. Below is a comprehensive list of currently supported features.
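As a minimal sketch of what using the modelimport module looks like: the entry point is the KerasModelImport class, which loads a Keras model saved to HDF5. The file names below are placeholders, not files shipped with DL4J.

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class KerasImportExample {
    public static void main(String[] args) throws Exception {
        // A Keras Sequential model (saved in Python with model.save("my_model.h5"))
        // maps to DL4J's MultiLayerNetwork. "my_model.h5" is a placeholder path.
        MultiLayerNetwork model =
                KerasModelImport.importKerasSequentialModelAndWeights("my_model.h5");

        // A Keras functional-API model maps to DL4J's ComputationGraph.
        ComputationGraph graph =
                KerasModelImport.importKerasModelAndWeights("my_functional_model.h5");
    }
}
```

The imported network can then be used like any other DL4J model, e.g. for inference with output() or for further training with fit().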
Layers
Mapping Keras layers to DL4J layers is done in the layers sub-module of model import. The structure of this sub-module loosely reflects the structure of Keras.
Core Layers
- Dense
- Activation
- Dropout
- Flatten
- Reshape
- Merge
- Permute
- RepeatVector
- Lambda
- ActivityRegularization
- Masking
Convolutional Layers
- Conv1D
- Conv2D
- Conv3D
- AtrousConvolution1D
- AtrousConvolution2D
- SeparableConv2D
- Conv2DTranspose
- Cropping1D
- Cropping2D
- Cropping3D
- UpSampling1D
- UpSampling2D
- UpSampling3D
- ZeroPadding1D
- ZeroPadding2D
- ZeroPadding3D
Pooling Layers
- MaxPooling1D
- MaxPooling2D
- MaxPooling3D
- AveragePooling1D
- AveragePooling2D
- AveragePooling3D
- GlobalMaxPooling1D
- GlobalMaxPooling2D
- GlobalAveragePooling1D
- GlobalAveragePooling2D
Locally-connected Layers
DL4J currently does not support locally-connected layers, so the following Keras layers cannot be imported:
- LocallyConnected1D
- LocallyConnected2D
Recurrent Layers
Embedding Layers
Merge Layers
- Add / add
- Multiply / multiply
- Subtract / subtract
- Average / average
- Maximum / maximum
- Concatenate / concatenate
- Dot / dot
- Cos / cos
Advanced Activation Layers
- LeakyReLU
- PReLU
- ELU
- ThresholdedReLU
Normalization Layers
Noise Layers
Currently, DL4J does not support noise layers.
Layer Wrappers
DL4J does not have the concept of layer wrappers, but there is an implementation of bi-directional LSTMs available here.
- TimeDistributed
- Bidirectional
Losses
- mean_squared_error
- mean_absolute_error
- mean_absolute_percentage_error
- mean_squared_logarithmic_error
- squared_hinge
- hinge
- categorical_hinge
- logcosh
- categorical_crossentropy
- sparse_categorical_crossentropy
- binary_crossentropy
- kullback_leibler_divergence
- poisson
- cosine_proximity
Activations
- softmax
- elu
- selu
- softplus
- softsign
- relu
- tanh
- sigmoid
- hard_sigmoid
- linear
Initializers
- Zeros
- Ones
- Constant
- RandomNormal
- RandomUniform
- TruncatedNormal
- VarianceScaling
- Orthogonal
- Identity
- lecun_uniform
- lecun_normal
- glorot_normal
- glorot_uniform
- he_normal
- he_uniform
Regularizers
- l1
- l2
- l1_l2
Constraints
- max_norm
- non_neg
- unit_norm
- min_max_norm
Metrics
- binary_accuracy
- categorical_accuracy
- sparse_categorical_accuracy
- top_k_categorical_accuracy
- sparse_top_k_categorical_accuracy
Optimizers
- SGD
- RMSprop
- Adagrad
- Adadelta
- Adam
- Adamax
- Nadam
- TFOptimizer