public static class VariationalAutoencoder.Builder extends BasePretrainNetwork.Builder<VariationalAutoencoder.Builder>

Fields inherited from class BasePretrainNetwork.Builder:
lossFunction, preTrainIterations, visibleBiasInit

Fields inherited from class FeedForwardLayer.Builder:
nIn, nOut

Fields inherited from class Layer.Builder:
activationFn, adamMeanDecay, adamVarDecay, biasInit, biasLearningRate, dist, dropOut, epsilon, gradientNormalization, gradientNormalizationThreshold, l1, l1Bias, l2, l2Bias, layerName, learningRate, learningRatePolicy, learningRateSchedule, momentum, momentumAfter, rho, rmsDecay, updater, weightInit
| Constructor and Description |
|---|
| `Builder()` |
| Modifier and Type | Method and Description |
|---|---|
| `VariationalAutoencoder` | `build()` |
| `VariationalAutoencoder.Builder` | `decoderLayerSizes(int... decoderLayerSizes)` Size of the decoder layers, in units. |
| `VariationalAutoencoder.Builder` | `encoderLayerSizes(int... encoderLayerSizes)` Size of the encoder layers, in units. |
| `VariationalAutoencoder.Builder` | `lossFunction(org.nd4j.linalg.activations.Activation outputActivationFn, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)` Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. |
| `VariationalAutoencoder.Builder` | `lossFunction(org.nd4j.linalg.activations.IActivation outputActivationFn, org.nd4j.linalg.lossfunctions.ILossFunction lossFunction)` Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. |
| `VariationalAutoencoder.Builder` | `lossFunction(org.nd4j.linalg.activations.IActivation outputActivationFn, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)` Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution. |
| `VariationalAutoencoder.Builder` | `nOut(int nOut)` Set the size of the VAE state Z. |
| `VariationalAutoencoder.Builder` | `numSamples(int numSamples)` Set the number of samples per data point (from VAE state Z) used when doing pretraining. |
| `VariationalAutoencoder.Builder` | `pzxActivationFn(org.nd4j.linalg.activations.IActivation activationFunction)` Activation function for the input to P(z\|data). Care should be taken with this, as some activation functions (relu, etc.) are not suitable due to being bounded in range [0,infinity). |
| `VariationalAutoencoder.Builder` | `pzxActivationFunction(org.nd4j.linalg.activations.Activation activation)` Activation function for the input to P(z\|data). Care should be taken with this, as some activation functions (relu, etc.) are not suitable due to being bounded in range [0,infinity). |
| `VariationalAutoencoder.Builder` | `pzxActivationFunction(java.lang.String activationFunction)` Deprecated. |
| `VariationalAutoencoder.Builder` | `reconstructionDistribution(ReconstructionDistribution distribution)` The reconstruction distribution for the data given the hidden state, i.e., P(data\|Z). This should be selected carefully based on the type of data being modelled. |
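As a rough illustration of how the builder methods above fit together, the following sketch configures a VAE layer for binary-valued data. This is a hedged example, not taken from the original page: the package paths and the `BernoulliReconstructionDistribution(Activation)` constructor are assumptions based on the 0.x-era DL4J API this page documents, so verify them against your version.

```java
import org.deeplearning4j.nn.conf.layers.variational.BernoulliReconstructionDistribution;
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.nd4j.linalg.activations.Activation;

public class VaeBuilderSketch {

    // Sketch only: builds a VAE layer configuration for binary data.
    public static VariationalAutoencoder binaryDataVae() {
        return new VariationalAutoencoder.Builder()
                .nIn(784)                    // input size (e.g. flattened 28x28 images)
                .nOut(32)                    // size of the latent state Z
                .encoderLayerSizes(256, 256) // two dense encoder layers
                .decoderLayerSizes(256, 256) // decoder typically mirrors the encoder
                // identity is unbounded, so it is safe for the input to P(z|data)
                .pzxActivationFunction(Activation.IDENTITY)
                // Bernoulli + sigmoid for binary-valued (0 or 1) data, per the table above
                .reconstructionDistribution(
                        new BernoulliReconstructionDistribution(Activation.SIGMOID))
                .numSamples(1)               // L = 1, as suggested by Kingma and Welling
                .build();
    }
}
```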
Methods inherited from class BasePretrainNetwork.Builder:
lossFunction, preTrainIterations, visibleBiasInit

Methods inherited from class FeedForwardLayer.Builder:
nIn

Methods inherited from class Layer.Builder:
activation, activation, activation, adamMeanDecay, adamVarDecay, biasInit, biasLearningRate, dist, dropOut, epsilon, gradientNormalization, gradientNormalizationThreshold, l1, l1Bias, l2, l2Bias, learningRate, learningRateDecayPolicy, learningRateSchedule, momentum, momentumAfter, name, rho, rmsDecay, updater, weightInit
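The `lossFunction(...)` overloads offer an alternative to a ReconstructionDistribution: a plain loss on the reconstruction. A hedged sketch for real-valued data follows; class and enum names are assumptions based on the same 0.x-era API and should be checked against your DL4J/ND4J version.

```java
import org.deeplearning4j.nn.conf.layers.variational.VariationalAutoencoder;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class VaeLossFunctionSketch {

    // Sketch only: mean squared error with an identity output activation,
    // used in place of a ReconstructionDistribution for real-valued data.
    public static VariationalAutoencoder realValuedVae() {
        return new VariationalAutoencoder.Builder()
                .nIn(100)
                .nOut(16)
                .encoderLayerSizes(64)
                .decoderLayerSizes(64)
                .lossFunction(Activation.IDENTITY, LossFunction.MSE)
                .build();
    }
}
```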
public VariationalAutoencoder.Builder encoderLayerSizes(int... encoderLayerSizes)

Size of the encoder layers, in units. Each encoder layer is functionally equivalent to a DenseLayer. Typically the number and size of the decoder layers (set via decoderLayerSizes(int...)) is similar to the encoder layers.

Parameters:
encoderLayerSizes - Size of each encoder layer in the variational autoencoder

public VariationalAutoencoder.Builder decoderLayerSizes(int... decoderLayerSizes)

Size of the decoder layers, in units. Each decoder layer is functionally equivalent to a DenseLayer. Typically the number and size of the decoder layers is similar to the encoder layers (set via encoderLayerSizes(int...)).

Parameters:
decoderLayerSizes - Size of each decoder layer in the variational autoencoder

public VariationalAutoencoder.Builder reconstructionDistribution(ReconstructionDistribution distribution)

The reconstruction distribution for the data given the hidden state, i.e., P(data|Z). This should be selected carefully based on the type of data being modelled. For example:
- GaussianReconstructionDistribution + {identity or tanh} for real-valued (Gaussian) data
- BernoulliReconstructionDistribution + sigmoid for binary-valued (0 or 1) data

Parameters:
distribution - Reconstruction distribution

public VariationalAutoencoder.Builder lossFunction(org.nd4j.linalg.activations.IActivation outputActivationFn, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)

Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution.

Parameters:
outputActivationFn - Activation function for the output/reconstruction
lossFunction - Loss function to use

public VariationalAutoencoder.Builder lossFunction(org.nd4j.linalg.activations.Activation outputActivationFn, org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction lossFunction)

Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution.

Parameters:
outputActivationFn - Activation function for the output/reconstruction
lossFunction - Loss function to use

public VariationalAutoencoder.Builder lossFunction(org.nd4j.linalg.activations.IActivation outputActivationFn, org.nd4j.linalg.lossfunctions.ILossFunction lossFunction)

Configure the VAE to use the specified loss function for the reconstruction, instead of a ReconstructionDistribution.

Parameters:
outputActivationFn - Activation function for the output/reconstruction
lossFunction - Loss function to use

public VariationalAutoencoder.Builder pzxActivationFn(org.nd4j.linalg.activations.IActivation activationFunction)

Activation function for the input to P(z|data). Care should be taken with this, as some activation functions (relu, etc.) are not suitable due to being bounded in range [0,infinity).

Parameters:
activationFunction - Activation function for p(z|x)

@Deprecated
public VariationalAutoencoder.Builder pzxActivationFunction(java.lang.String activationFunction)

Deprecated. Use pzxActivationFunction(Activation) instead.

public VariationalAutoencoder.Builder pzxActivationFunction(org.nd4j.linalg.activations.Activation activation)

Activation function for the input to P(z|data). Care should be taken with this, as some activation functions (relu, etc.) are not suitable due to being bounded in range [0,infinity).

Parameters:
activation - Activation function for p(z|x)

public VariationalAutoencoder.Builder nOut(int nOut)

Set the size of the VAE state Z.

Overrides:
nOut in class FeedForwardLayer.Builder<VariationalAutoencoder.Builder>

Parameters:
nOut - Size of P(Z|data) and output size

public VariationalAutoencoder.Builder numSamples(int numSamples)

Set the number of samples per data point (from VAE state Z) used when doing pretraining. This is parameter L from Kingma and Welling: "In our experiments we found that the number of samples L per datapoint can be set to 1 as long as the minibatch size M was large enough, e.g. M = 100."

Parameters:
numSamples - Number of samples per data point for pretraining

public VariationalAutoencoder build()

Overrides:
build in class Layer.Builder<VariationalAutoencoder.Builder>
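The role of numSamples (parameter L from Kingma and Welling, quoted above) can be illustrated with a small self-contained sketch. This is plain Java, not DL4J code: the expectation over P(z|data) that pretraining needs is estimated by averaging the reconstruction term over L sampled z values, and a larger L gives a lower-variance estimate per data point.

```java
import java.util.Random;

public class NumSamplesSketch {

    // Monte Carlo estimate over L samples of z ~ N(0, 1), via the
    // reparameterization z = mean + std * epsilon (here mean = 0, std = 1).
    // The term -0.5 * z * z is a stand-in for log p(x|z), purely illustrative.
    static double estimate(int numSamples, long seed) {
        Random rng = new Random(seed);
        double sum = 0.0;
        for (int i = 0; i < numSamples; i++) {
            double z = rng.nextGaussian();
            sum += -0.5 * z * z;
        }
        return sum / numSamples;
    }

    public static void main(String[] args) {
        // With more samples per data point the estimate stabilizes; the quote
        // above notes that L = 1 suffices when the minibatch is large enough,
        // because averaging then happens across the minibatch instead.
        for (int L : new int[]{1, 10, 10_000}) {
            System.out.println("L=" + L + " -> estimate=" + estimate(L, 42L));
        }
    }
}
```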