public interface Layer extends java.io.Serializable, java.lang.Cloneable, Model
Modifier and Type | Class and Description |
---|---|
static class | Layer.TrainingMode |
static class | Layer.Type |
Modifier and Type | Method and Description |
---|---|
org.nd4j.linalg.api.ndarray.INDArray | activate() - Trigger an activation with the last specified input |
org.nd4j.linalg.api.ndarray.INDArray | activate(boolean training) - Trigger an activation with the last specified input |
org.nd4j.linalg.api.ndarray.INDArray | activate(org.nd4j.linalg.api.ndarray.INDArray input) - Initialize the layer with the given input and return this layer's activation for it |
org.nd4j.linalg.api.ndarray.INDArray | activate(org.nd4j.linalg.api.ndarray.INDArray input, boolean training) - Initialize the layer with the given input and return this layer's activation for it |
org.nd4j.linalg.api.ndarray.INDArray | activate(org.nd4j.linalg.api.ndarray.INDArray input, Layer.TrainingMode training) - Initialize the layer with the given input and return this layer's activation for it |
org.nd4j.linalg.api.ndarray.INDArray | activate(Layer.TrainingMode training) - Trigger an activation with the last specified input |
org.nd4j.linalg.api.ndarray.INDArray | activationMean() - Deprecated. As of 0.7.3 - Feb 2017. No longer used. |
Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> | backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon) - Calculate the gradient relative to the error in the next layer |
Gradient | calcGradient(Gradient layerError, org.nd4j.linalg.api.ndarray.INDArray indArray) - Deprecated. As of 0.7.3 - Feb 2017. No longer used. |
double | calcL1(boolean backpropOnlyParams) - Calculate the L1 regularization term; 0.0 if regularization is not used |
double | calcL2(boolean backpropOnlyParams) - Calculate the L2 regularization term; 0.0 if regularization is not used |
Layer | clone() - Clone the layer |
org.nd4j.linalg.api.ndarray.INDArray | derivativeActivation(org.nd4j.linalg.api.ndarray.INDArray input) - Deprecated. As of 0.7.3 - Feb 2017. No longer used. |
Gradient | error(org.nd4j.linalg.api.ndarray.INDArray input) - Deprecated. As of 0.7.3 - Feb 2017. No longer used. |
Pair<org.nd4j.linalg.api.ndarray.INDArray,MaskState> | feedForwardMaskArray(org.nd4j.linalg.api.ndarray.INDArray maskArray, MaskState currentMaskState, int minibatchSize) - Feed forward the input mask array, setting it in the layer as appropriate |
int | getIndex() - Get the layer index |
int | getInputMiniBatchSize() - Get the current/last input mini-batch size, as set by setInputMiniBatchSize(int) |
java.util.Collection<IterationListener> | getListeners() - Get the iteration listeners for this layer |
org.nd4j.linalg.api.ndarray.INDArray | getMaskArray() |
boolean | isPretrainLayer() - Returns true if the layer can be trained in an unsupervised/pretrain manner (VAE, RBMs etc) |
void | merge(Layer layer, int batchSize) - Deprecated. As of 0.7.3 - Feb 2017. No longer used; merging (for parameter averaging) is done via alternative means |
org.nd4j.linalg.api.ndarray.INDArray | preOutput(org.nd4j.linalg.api.ndarray.INDArray x) - Raw activations |
org.nd4j.linalg.api.ndarray.INDArray | preOutput(org.nd4j.linalg.api.ndarray.INDArray x, boolean training) - Raw activations |
org.nd4j.linalg.api.ndarray.INDArray | preOutput(org.nd4j.linalg.api.ndarray.INDArray x, Layer.TrainingMode training) - Raw activations |
void | setIndex(int index) - Set the layer index |
void | setInput(org.nd4j.linalg.api.ndarray.INDArray input) - Set the layer input |
void | setInputMiniBatchSize(int size) - Set the current/last input mini-batch size; used for score and gradient calculations |
void | setListeners(java.util.Collection<IterationListener> listeners) - Set the iteration listeners for this layer |
void | setListeners(IterationListener... listeners) - Set the iteration listeners for this layer |
void | setMaskArray(org.nd4j.linalg.api.ndarray.INDArray maskArray) - Set the mask array |
Layer | transpose() - Return a transposed copy of the weights/bias (this means reverse the number of inputs and outputs on the weights) |
Layer.Type | type() - Returns the layer type |
Methods inherited from interface Model: accumulateScore, applyLearningRateScoreDecay, batchSize, clear, computeGradientAndScore, conf, fit, fit, getOptimizer, getParam, gradient, gradientAndScore, init, initParams, input, iterate, numParams, numParams, params, paramTable, paramTable, score, setBackpropGradientsViewArray, setConf, setParam, setParams, setParamsViewArray, setParamTable, update, update, validateInput
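Before the per-method details below, here is a minimal, hedged sketch of how a Layer is typically obtained and inspected. Layer instances are normally retrieved from a configured network (e.g. via MultiLayerNetwork.getLayer(int)) rather than constructed directly; the network itself is assumed to exist and its construction is out of scope.

```java
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

// Sketch (not from this Javadoc): inspect the first layer of a configured network.
public class LayerInspection {
    static void describe(MultiLayerNetwork net) {
        Layer layer = net.getLayer(0);                          // first layer of the network
        System.out.println("index:        " + layer.getIndex());
        System.out.println("type:         " + layer.type());          // a Layer.Type value
        System.out.println("numParams:    " + layer.numParams());     // inherited from Model
        System.out.println("pretrainable: " + layer.isPretrainLayer());
    }
}
```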
double calcL2(boolean backpropOnlyParams)
Calculate the L2 regularization term. Returns 0.0 if regularization is not used.
Parameters:
backpropOnlyParams - If true: calculate L2 based on backprop params only. If false: calculate based on all params (including pretrain params, if any)

double calcL1(boolean backpropOnlyParams)
Calculate the L1 regularization term. Returns 0.0 if regularization is not used.
Parameters:
backpropOnlyParams - If true: calculate L1 based on backprop params only. If false: calculate based on all params (including pretrain params, if any)

Layer.Type type()
Returns the layer type.
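A brief sketch of querying the two regularization getters above; the helper name regularizationScore and the standalone-method framing are illustrative, not part of this API.

```java
import org.deeplearning4j.nn.api.Layer;

// Sketch: query the regularization penalties for a layer's current parameters.
// Both methods return 0.0 when L1/L2 regularization is not configured.
static double regularizationScore(Layer layer) {
    double l1 = layer.calcL1(true);   // true: backprop params only
    double l2 = layer.calcL2(true);
    return l1 + l2;                   // contribution added to the network score
}
```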
@Deprecated
Gradient error(org.nd4j.linalg.api.ndarray.INDArray input)
Deprecated. As of 0.7.3 - Feb 2017. No longer used.
Parameters:
input - the gradient for the forward layer. If this is the final layer, it will start with the error from the output. This is on the user to initialize.

@Deprecated
org.nd4j.linalg.api.ndarray.INDArray derivativeActivation(org.nd4j.linalg.api.ndarray.INDArray input)
Deprecated. As of 0.7.3 - Feb 2017. No longer used.
Parameters:
input - the input to take the derivative of

@Deprecated
Gradient calcGradient(Gradient layerError, org.nd4j.linalg.api.ndarray.INDArray indArray)
Deprecated. As of 0.7.3 - Feb 2017. No longer used.
Parameters:
layerError - the layer error
indArray -

Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon)
Calculate the gradient relative to the error in the next layer.
Parameters:
epsilon - w^(L+1)*delta^(L+1). Or, equivalently, dC/da, i.e., (dC/dz)*(dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.

@Deprecated
void merge(Layer layer, int batchSize)
Deprecated. As of 0.7.3 - Feb 2017. No longer used. Merging (for parameter averaging) is done via alternative means.
Parameters:
layer - the layer to merge
batchSize - the batch size to merge on

@Deprecated
org.nd4j.linalg.api.ndarray.INDArray activationMean()
Deprecated. As of 0.7.3 - Feb 2017. No longer used.
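To make the backpropGradient contract above concrete, here is a hedged sketch of one backprop step through a single layer. The Pair import reflects the 0.7.x-era APIs and is an assumption; the helper name backpropStep is illustrative.

```java
import org.deeplearning4j.berkeley.Pair;         // Pair type in the 0.7.x APIs (assumption)
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.gradient.Gradient;
import org.nd4j.linalg.api.ndarray.INDArray;

// Sketch: one backprop step. "epsilon" is dC/da for this layer, as delivered
// by the layer above (see the epsilon parameter note above).
static INDArray backpropStep(Layer layer, INDArray epsilon) {
    Pair<Gradient, INDArray> result = layer.backpropGradient(epsilon);
    Gradient paramGradients = result.getFirst();  // gradients w.r.t. this layer's params
    // In a real network the param gradients are consumed by the updater; here
    // we simply return the new epsilon to pass down to the previous layer.
    return result.getSecond();
}
```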
org.nd4j.linalg.api.ndarray.INDArray preOutput(org.nd4j.linalg.api.ndarray.INDArray x)
Raw activations.
Parameters:
x - the input to transform

org.nd4j.linalg.api.ndarray.INDArray preOutput(org.nd4j.linalg.api.ndarray.INDArray x, Layer.TrainingMode training)
Raw activations.
Parameters:
x - the input to transform

org.nd4j.linalg.api.ndarray.INDArray activate(Layer.TrainingMode training)
Trigger an activation with the last specified input.
Parameters:
training - training or test mode

org.nd4j.linalg.api.ndarray.INDArray activate(org.nd4j.linalg.api.ndarray.INDArray input, Layer.TrainingMode training)
Initialize the layer with the given input and return this layer's activation for it.
Parameters:
input - the input to use
training - train or test mode

org.nd4j.linalg.api.ndarray.INDArray preOutput(org.nd4j.linalg.api.ndarray.INDArray x, boolean training)
Raw activations.
Parameters:
x - the input to transform

org.nd4j.linalg.api.ndarray.INDArray activate(boolean training)
Trigger an activation with the last specified input.
Parameters:
training - training or test mode

org.nd4j.linalg.api.ndarray.INDArray activate(org.nd4j.linalg.api.ndarray.INDArray input, boolean training)
Initialize the layer with the given input and return this layer's activation for it.
Parameters:
input - the input to use
training - train or test mode

org.nd4j.linalg.api.ndarray.INDArray activate()
Trigger an activation with the last specified input.

org.nd4j.linalg.api.ndarray.INDArray activate(org.nd4j.linalg.api.ndarray.INDArray input)
Initialize the layer with the given input and return this layer's activation for it.
Parameters:
input - the input to use

Layer transpose()
Return a transposed copy of the weights/bias (this means reverse the number of inputs and outputs on the weights).

Layer clone()
Clone the layer.
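The activate/preOutput overloads above are the forward-pass entry points. A minimal sketch, assuming a layer whose configured input size matches the illustrative width of 10; the helper name forward is not part of this API.

```java
import org.deeplearning4j.nn.api.Layer;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Sketch: forward-pass a random minibatch through one layer.
static void forward(Layer layer) {
    INDArray input = Nd4j.rand(32, 10);           // 32 examples, 10 features (illustrative)
    INDArray z = layer.preOutput(input, false);   // raw pre-activations, test mode
    INDArray a = layer.activate(input, false);    // activations, i.e. activationFn(z)
    System.out.println("z shape: " + java.util.Arrays.toString(z.shape()));
    System.out.println("a shape: " + java.util.Arrays.toString(a.shape()));
}
```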
java.util.Collection<IterationListener> getListeners()
Get the iteration listeners for this layer.

void setListeners(IterationListener... listeners)
Set the iteration listeners for this layer.
Specified by: setListeners in interface Model

void setListeners(java.util.Collection<IterationListener> listeners)
Set the iteration listeners for this layer.
Specified by: setListeners in interface Model

void setIndex(int index)
Set the layer index.

int getIndex()
Get the layer index.

void setInput(org.nd4j.linalg.api.ndarray.INDArray input)
Set the layer input.

void setInputMiniBatchSize(int size)
Set the current/last input mini-batch size. Used for score and gradient calculations.

int getInputMiniBatchSize()
Get the current/last input mini-batch size, as set by setInputMiniBatchSize(int).
See also: setInputMiniBatchSize(int)
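As a short illustration of the listener setters above, the following hedged sketch attaches the stock ScoreIterationListener (shipped with deeplearning4j) so the score is logged as training iterates; the helper name attachListeners is illustrative.

```java
import java.util.Collection;
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.optimize.api.IterationListener;
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;

// Sketch: attach a listener and read the attached collection back.
static void attachListeners(Layer layer) {
    layer.setListeners(new ScoreIterationListener(10));    // log score every 10 iterations
    Collection<IterationListener> attached = layer.getListeners();
    System.out.println(attached.size() + " listener(s) attached");
}
```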
void setMaskArray(org.nd4j.linalg.api.ndarray.INDArray maskArray)
Set the mask array. Note that feedForwardMaskArray(INDArray, MaskState, int) should be used in preference to this.
Parameters:
maskArray - Mask array to set

org.nd4j.linalg.api.ndarray.INDArray getMaskArray()

boolean isPretrainLayer()
Returns true if the layer can be trained in an unsupervised/pretrain manner (VAE, RBMs etc).

Pair<org.nd4j.linalg.api.ndarray.INDArray,MaskState> feedForwardMaskArray(org.nd4j.linalg.api.ndarray.INDArray maskArray, MaskState currentMaskState, int minibatchSize)
Feed forward the input mask array, setting it in the layer as appropriate.
Parameters:
maskArray - Mask array to set
currentMaskState - Current state of the mask - see MaskState
minibatchSize - Current minibatch size. Needs to be known as it cannot always be inferred from the activations array due to reshaping (such as a DenseLayer within a recurrent neural network)
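Finally, a hedged sketch of propagating a per-timestep mask with feedForwardMaskArray, as used when variable-length sequences are padded. The shapes (32 examples by 50 time steps), the helper name propagateMask, and the Pair import (0.7.x-era APIs) are assumptions for illustration.

```java
import org.deeplearning4j.berkeley.Pair;     // Pair type in the 0.7.x APIs (assumption)
import org.deeplearning4j.nn.api.Layer;
import org.deeplearning4j.nn.api.MaskState;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// Sketch: feed a mask through one layer and return the mask for the next layer.
static INDArray propagateMask(Layer layer) {
    INDArray mask = Nd4j.ones(32, 50);       // 1.0 = real data, 0.0 = padding
    Pair<INDArray, MaskState> out =
            layer.feedForwardMaskArray(mask, MaskState.Active, 32);
    return out.getFirst();                   // mask to feed the next layer
}
```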