sknn.ae — Auto-Encoders

In this module, a neural network is made up of stacked layers of weights that encode the input data (upward pass) and then attempt to decode it again (downward pass). This is implemented in layers:

  • sknn.ae.Layer: Used to specify an upward and downward layer with non-linear activations.

In practice, you need to create a list of these specifications and provide them as the layers parameter to the sknn.ae.AutoEncoder constructor.
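
For example, the following sketch constructs a two-layer auto-encoder and pre-trains it layer-wise on unlabeled data. The input array X and the values chosen for units, learning_rate and n_iter are illustrative only:

    import numpy as np
    from sknn import ae

    # Unlabeled training data; the shape here is illustrative only.
    X = np.random.uniform(size=(1000, 256))

    # Stack two auto-encoding layers and pre-train them layer-wise
    # using only the input data (no labels required).
    my_ae = ae.AutoEncoder(
        layers=[
            ae.Layer("Tanh", units=128),
            ae.Layer("Sigmoid", units=64)],
        learning_rate=0.002,
        n_iter=10)
    my_ae.fit(X)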

Layer Specifications

class sknn.ae.Layer(activation, warning=None, type=u'autoencoder', name=None, units=None, cost=u'msre', tied_weights=True, corruption_level=0.5)

Specification for a layer to be passed to the auto-encoder during construction. This includes a variety of parameters to configure each layer based on its activation type.

Parameters:

activation: str

Select which activation function this layer should use, as a string. For auto-encoders, the only supported options are Sigmoid and Tanh.

type: str, optional

The type of encoding and decoding layer to use: denoising for randomly corrupting the input data, or the more traditional autoencoder, which is used by default.

name: str, optional

You can optionally specify a name for this layer, and its parameters will then be accessible to scikit-learn via a nested sub-object. For example, if name is set to layer1, then the parameter layer1__units from the network is bound to this layer’s units variable.

The name defaults to hiddenN where N is the integer index of that layer, and the final layer is always output without an index.

units: int

The number of units (also known as neurons) in this layer. This applies to all layer types except for convolution.

cost: str, optional

What type of cost function to use during layerwise pre-training. This can be either msre for mean-squared reconstruction error (the default) or mbce for mean binary cross entropy.

tied_weights: bool, optional

Whether to use the same weights for the encoding and decoding phases of the simulation and training. Default is True.

corruption_level: float, optional

The ratio of inputs to corrupt in this layer; 0.25 means that 25% of the inputs will be corrupted during training. This only takes effect for layers of type denoising. The default is 0.5. A denoising layer using this parameter is sketched after this list.

warning: None

You should use keyword arguments after the first argument (activation) when initializing this object; if not, the code will raise an AssertionError.
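
Putting these parameters together, the sketch below specifies a named denoising layer that corrupts a quarter of its inputs (the units value is illustrative):

    from sknn import ae

    # A named denoising layer: 25% of the inputs are randomly
    # corrupted during layer-wise pre-training, and its parameters
    # are addressable from the network as e.g. layer1__units.
    noisy = ae.Layer(
        "Sigmoid",
        type="denoising",
        name="layer1",
        units=64,
        corruption_level=0.25)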

Methods

set_params
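
As a minimal sketch of this method, assuming it follows scikit-learn conventions and accepts the layer's own parameters as keyword arguments:

    from sknn import ae

    # Construct a layer, then update one of its settings afterwards
    # in the scikit-learn style.
    layer = ae.Layer("Sigmoid", units=64)
    layer.set_params(units=32)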

Auto-Encoding Transformers

This class serves two high-level purposes:

  1. Unsupervised Learning — Provide a form of unsupervised learning to train the weights in each layer. These weights can then be reused in a sknn.mlp.MultiLayerPerceptron as a form of pre-training (see the sketch after this list).
  2. Pipeline Transformation — Encode inputs into an intermediate representation for use in a pipeline, for example to reduce the dimensionality of an input vector using stochastic gradient descent (see the transform example at the end of this section).
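
The following sketch illustrates the first use case; the arrays X and y, the layer sizes and the iteration count are illustrative only:

    import numpy as np
    from sknn import ae, mlp

    X = np.random.uniform(size=(1000, 256))  # illustrative inputs
    y = np.random.uniform(size=(1000, 1))    # illustrative targets

    # Layer-wise pre-training using only the input data.
    my_ae = ae.AutoEncoder(
        layers=[
            ae.Layer("Tanh", units=128),
            ae.Layer("Sigmoid", units=64)],
        n_iter=10)
    my_ae.fit(X)

    # Build a supervised network with matching base layers, transfer
    # the pre-trained weights into it, then train on labels as usual.
    my_mlp = mlp.Regressor(
        layers=[
            mlp.Layer("Tanh", units=128),
            mlp.Layer("Sigmoid", units=64),
            mlp.Layer("Linear")])
    my_ae.transfer(my_mlp)
    my_mlp.fit(X, y)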

class sknn.ae.AutoEncoder(layers, warning=None, parameters=None, random_state=None, learning_rule=u'sgd', learning_rate=0.01, learning_momentum=0.9, normalize=None, regularize=None, weight_decay=None, dropout_rate=None, batch_size=1, n_iter=None, n_stable=10, f_stable=0.001, valid_set=None, valid_size=0.0, loss_type=None, callback=None, debug=False, verbose=None, **params)

Attributes

is_classifier
is_initialized

Methods

fit
get_parameters
is_convolution
set_parameters
transfer
transform
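
For the second, pipeline-style use case, a fitted auto-encoder can be used as a transformer to reduce the dimensionality of its inputs. A minimal sketch, with illustrative sizes:

    import numpy as np
    from sknn import ae

    X = np.random.uniform(size=(1000, 256))  # illustrative inputs

    # Fit a single encoding layer, then encode the inputs into a
    # lower-dimensional representation, one row per sample.
    my_ae = ae.AutoEncoder(layers=[ae.Layer("Sigmoid", units=32)], n_iter=10)
    my_ae.fit(X)
    Z = my_ae.transform(X)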