public class RBM extends BasePretrainNetwork<RBM>
Nested classes/interfaces inherited from interface Layer: Layer.TrainingMode, Layer.Type
Modifier and Type | Field and Description |
---|---|
protected org.nd4j.linalg.api.ndarray.INDArray | hiddenSigma (Deprecated) |
protected org.nd4j.linalg.api.ndarray.INDArray | sigma (Deprecated) |

Fields inherited from class BasePretrainNetwork: trainingListeners

Fields inherited from class BaseLayer: conf, dropoutApplied, dropoutMask, gradient, gradientsFlattened, gradientViews, index, input, iterationListeners, maskArray, maskState, optimizer, params, paramsFlattened, score, solver
Constructor and Description |
---|
RBM(NeuralNetConfiguration conf) |
RBM(NeuralNetConfiguration conf, org.nd4j.linalg.api.ndarray.INDArray input) |
Modifier and Type | Method and Description |
---|---|
org.nd4j.linalg.api.ndarray.INDArray | activate(boolean training): Reconstructs the visible input. |
Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> | backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon): Calculate the gradient relative to the error in the next layer. |
void | computeGradientAndScore(): Update the score. |
void | contrastiveDivergence(): Deprecated. No longer used; use the fit methods in MultiLayerNetwork. |
Pair<Pair<org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray>,Pair<org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray>> | gibbhVh(org.nd4j.linalg.api.ndarray.INDArray h): Gibbs sampling step: hidden -> visible -> hidden. |
boolean | isPretrainLayer(): Returns true if the layer can be trained in an unsupervised/pretrain manner (VAE, RBMs etc.). |
void | iterate(org.nd4j.linalg.api.ndarray.INDArray input): Deprecated. No longer used. |
org.nd4j.linalg.api.ndarray.INDArray | preOutput(org.nd4j.linalg.api.ndarray.INDArray v, boolean training): Raw activations. |
org.nd4j.linalg.api.ndarray.INDArray | propDown(org.nd4j.linalg.api.ndarray.INDArray h): Calculates the activation of the visible units given the hidden: activation(h * W^T + vbias). |
org.nd4j.linalg.api.ndarray.INDArray | propUp(org.nd4j.linalg.api.ndarray.INDArray v): Calculates the activation of the hidden units given the visible: sigmoid(v * W + hbias). |
org.nd4j.linalg.api.ndarray.INDArray | propUp(org.nd4j.linalg.api.ndarray.INDArray v, boolean training): Calculates the activation of the hidden units given the visible: sigmoid(v * W + hbias). |
org.nd4j.linalg.api.ndarray.INDArray | propUpDerivative(org.nd4j.linalg.api.ndarray.INDArray z) |
Pair<org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray> | sampleHiddenGivenVisible(org.nd4j.linalg.api.ndarray.INDArray v): Binomial sampling of the hidden values given the visible. |
Pair<org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray> | sampleVisibleGivenHidden(org.nd4j.linalg.api.ndarray.INDArray h): Guess the visible values given the hidden. |
Layer | transpose(): Deprecated. |
Methods inherited from class BasePretrainNetwork: calcL1, calcL2, createGradient, getCorruptedInput, numParams, numParams, params, paramTable, setListeners, setListeners, setParams, setScoreWithZ

Methods inherited from class BaseLayer: accumulateScore, activate, activate, activate, activate, activate, activationMean, applyDropOutIfNecessary, applyLearningRateScoreDecay, applyMask, batchSize, calcGradient, clear, clone, conf, createGradient, derivativeActivation, error, feedForwardMaskArray, fit, fit, getIndex, getInput, getInputMiniBatchSize, getListeners, getMaskArray, getOptimizer, getParam, gradient, gradientAndScore, init, initParams, input, layerConf, layerNameAndIndex, merge, paramTable, preOutput, preOutput, preOutput, score, setBackpropGradientsViewArray, setConf, setIndex, setInput, setInputMiniBatchSize, setMaskArray, setParam, setParams, setParamsViewArray, setParamTable, toString, type, update, update, validateInput
@Deprecated protected org.nd4j.linalg.api.ndarray.INDArray sigma
@Deprecated protected org.nd4j.linalg.api.ndarray.INDArray hiddenSigma
public RBM(NeuralNetConfiguration conf)
public RBM(NeuralNetConfiguration conf, org.nd4j.linalg.api.ndarray.INDArray input)
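These constructors are usually invoked by the configuration machinery rather than called directly. Below is a minimal sketch of the usual route, assuming the 0.9.x-era builder API this page documents (org.deeplearning4j.nn.conf.layers.RBM is the configuration counterpart of this layer class; the seed and layer sizes are arbitrary placeholders):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class RbmConfigSketch {
    public static void main(String[] args) {
        // The RBM layer is normally declared through its configuration class and trained
        // with the network-level fit methods, since the in-layer contrastiveDivergence()
        // and iterate() trainers documented below are deprecated.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123)
                .list()
                .layer(0, new org.deeplearning4j.nn.conf.layers.RBM.Builder()
                        .nIn(784)
                        .nOut(256)
                        .build())
                .pretrain(true).backprop(false)
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        // net.fit(dataSetIterator) would then run layerwise pretraining over the data.
    }
}
```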
@Deprecated public void contrastiveDivergence()
Deprecated. No longer used; use the fit methods in MultiLayerNetwork.
Contrastive divergence approximates the log-likelihood gradient of this energy-based model by repeated Gibbs sampling: the more steps are sampled, the more the update lowers the energy (increases the likelihood) of the training data and lowers the likelihood (increases the energy) of the hidden samples. CD-k involves keeping the first k samples of a Gibbs sampling of the model.
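The in-layer trainer is deprecated, but the CD-k update it refers to is compact enough to sketch directly in ND4J. The block below is an illustrative CD-1 step on standalone matrices (w, hBias, vBias and the sizes are made-up local values, not this class's internals), not the library's actual implementation; a full CD step would also sample the hidden states binomially instead of using the probabilities throughout.

```java
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

public class Cd1Sketch {
    public static void main(String[] args) {
        int nVisible = 6, nHidden = 4;
        double lr = 0.1;

        INDArray v0 = Nd4j.rand(1, nVisible);              // a single visible example
        INDArray w = Nd4j.randn(nVisible, nHidden).mul(0.01);
        INDArray hBias = Nd4j.zeros(1, nHidden);
        INDArray vBias = Nd4j.zeros(1, nVisible);

        // Positive phase: hidden probabilities given the data, sigmoid(v * W + hBias)
        INDArray h0 = Transforms.sigmoid(v0.mmul(w).addRowVector(hBias));

        // Negative phase (one Gibbs step): reconstruct the visible, then re-infer the hidden
        INDArray v1 = Transforms.sigmoid(h0.mmul(w.transpose()).addRowVector(vBias));
        INDArray h1 = Transforms.sigmoid(v1.mmul(w).addRowVector(hBias));

        // CD-1 gradient estimate: positive phase statistics minus negative phase statistics
        INDArray wGrad = v0.transpose().mmul(h0).sub(v1.transpose().mmul(h1));
        w.addi(wGrad.mul(lr));
        hBias.addi(h0.sub(h1).mul(lr));
        vBias.addi(v0.sub(v1).mul(lr));

        System.out.println("Updated weights:\n" + w);
    }
}
```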
public void computeGradientAndScore()
Update the score.
Specified by: computeGradientAndScore in interface Model
Overrides: computeGradientAndScore in class BaseLayer<RBM>
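As with any Model, the score and gradient are populated by this call once an input has been set. A sketch of the calling pattern, assuming an already initialized RBM instance named layer and a features batch of shape [miniBatchSize, nIn] (both placeholder names):

```java
// Placeholder names: 'layer' is an initialized RBM, 'features' an INDArray batch.
layer.setInput(features);
layer.computeGradientAndScore();

double score = layer.score();                                        // score computed by the call above
org.deeplearning4j.nn.gradient.Gradient gradient = layer.gradient(); // parameter gradients, keyed by name
```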
public Pair<Pair<org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray>,Pair<org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray>> gibbhVh(org.nd4j.linalg.api.ndarray.INDArray h)
Gibbs sampling step: hidden -> visible -> hidden.
Parameters: h - the hidden input

public Pair<org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray> sampleHiddenGivenVisible(org.nd4j.linalg.api.ndarray.INDArray v)
Binomial sampling of the hidden values given the visible.
Specified by: sampleHiddenGivenVisible in class BasePretrainNetwork<RBM>
Parameters: v - the visible values
public Pair<org.nd4j.linalg.api.ndarray.INDArray,org.nd4j.linalg.api.ndarray.INDArray> sampleVisibleGivenHidden(org.nd4j.linalg.api.ndarray.INDArray h)
Guess the visible values given the hidden.
Specified by: sampleVisibleGivenHidden in class BasePretrainNetwork<RBM>
Parameters: h - the hidden units
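The two sampling methods above are the building blocks of the hidden -> visible -> hidden round trip that gibbhVh bundles into nested Pairs. A hedged sketch of that round trip follows; the Pair import path varies by DL4J version, and getFirst()/getSecond() are assumed here to hold the mean and the sample respectively:

```java
// Placeholder names: 'rbm' is an initialized RBM, 'visible' an INDArray batch of visible values.
Pair<INDArray, INDArray> hGivenV = rbm.sampleHiddenGivenVisible(visible);
INDArray hSample = hGivenV.getSecond();                 // assumed ordering: (mean, sample)

// hidden -> visible -> hidden, the same step gibbhVh(hSample) performs
Pair<INDArray, INDArray> vGivenH = rbm.sampleVisibleGivenHidden(hSample);
Pair<INDArray, INDArray> hGivenV1 = rbm.sampleHiddenGivenVisible(vGivenH.getSecond());
```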
public org.nd4j.linalg.api.ndarray.INDArray preOutput(org.nd4j.linalg.api.ndarray.INDArray v, boolean training)
Description copied from interface: Layer
Raw activations.
public org.nd4j.linalg.api.ndarray.INDArray propUp(org.nd4j.linalg.api.ndarray.INDArray v)
Calculates the activation of the hidden units given the visible: sigmoid(v * W + hbias)
Parameters: v - the visible layer

public org.nd4j.linalg.api.ndarray.INDArray propUp(org.nd4j.linalg.api.ndarray.INDArray v, boolean training)
Calculates the activation of the hidden units given the visible: sigmoid(v * W + hbias)
Parameters: v - the visible layer

public org.nd4j.linalg.api.ndarray.INDArray propUpDerivative(org.nd4j.linalg.api.ndarray.INDArray z)
public org.nd4j.linalg.api.ndarray.INDArray propDown(org.nd4j.linalg.api.ndarray.INDArray h)
Calculates the activation of the visible units given the hidden: activation(h * W^T + vbias)
Parameters: h - the hidden layer

public org.nd4j.linalg.api.ndarray.INDArray activate(boolean training)
Reconstructs the visible input.
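Put together, propUp and propDown give the reconstruction that activate(boolean) describes above. A short sketch of that chain, assuming an initialized RBM named rbm and a visible batch v (placeholder names):

```java
// Hidden activations from the visible input: sigmoid(v * W + hbias)
INDArray hidden = rbm.propUp(v, false);

// Visible activations back from the hidden layer: activation(h * W^T + vbias)
INDArray reconstruction = rbm.propDown(hidden);

// activate(false) likewise returns a reconstruction of the input currently set on the layer
rbm.setInput(v);
INDArray reconstructed = rbm.activate(false);
```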
public Pair<Gradient,org.nd4j.linalg.api.ndarray.INDArray> backpropGradient(org.nd4j.linalg.api.ndarray.INDArray epsilon)
Description copied from interface: Layer
Calculate the gradient relative to the error in the next layer.
Specified by: backpropGradient in interface Layer
Overrides: backpropGradient in class BasePretrainNetwork<RBM>
Parameters: epsilon - w^(L+1) * delta^(L+1), or equivalently dC/da, i.e. (dC/dz) * (dz/da) = dC/da, where C is the cost function and a = sigma(z) is the activation.
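In a backward pass the returned Pair is conventionally unpacked into this layer's parameter gradients and the error signal handed to the layer below; a sketch with placeholder names (rbm, epsilonFromAbove):

```java
// 'epsilonFromAbove' is the dC/da signal from the layer above, same shape as this layer's activations.
Pair<Gradient, INDArray> backprop = rbm.backpropGradient(epsilonFromAbove);

Gradient layerGradient = backprop.getFirst();   // gradients for this layer's parameters
INDArray epsilonBelow = backprop.getSecond();   // error signal to pass to the layer below
```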
@Deprecated public void iterate(org.nd4j.linalg.api.ndarray.INDArray input)
Deprecated. No longer used.
Overrides: iterate in class BaseLayer<RBM>
@Deprecated public Layer transpose()
Deprecated.
Specified by: transpose in interface Layer
public boolean isPretrainLayer()
Returns true if the layer can be trained in an unsupervised/pretrain manner (VAE, RBMs etc.)
Specified by: isPretrainLayer in interface Layer
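A small sketch of how this flag is typically consulted when walking a network's layers (network is an assumed, already initialized MultiLayerNetwork; Layer is org.deeplearning4j.nn.api.Layer):

```java
// Report which layers (RBMs, VAEs, autoencoders) participate in unsupervised/layerwise pretraining.
for (Layer l : network.getLayers()) {
    System.out.println(l.getClass().getSimpleName() + " pretrainable: " + l.isPretrainLayer());
}
```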