scikit-learn Features

The examples in this section help you get more out of scikit-neuralnetwork, in particular via its integration with scikit-learn.

sklearn Pipeline

Typically, neural networks perform better when their inputs have been normalized or standardized. scikit-learn’s pipeline support is a natural way to do this.

Here’s how to set up such a pipeline with a multi-layer perceptron as a classifier:

from sknn.mlp import Classifier, Layer

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Scale inputs to [0, 1] before they reach the network.
pipeline = Pipeline([
        ('min/max scaler', MinMaxScaler(feature_range=(0.0, 1.0))),
        ('neural network', Classifier(layers=[Layer("Softmax")], n_iter=25))])
pipeline.fit(X_train, y_train)

You can then use the pipeline as you would the neural network, or any other standard scikit-learn estimator.
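
For example, prediction and scoring work through the whole pipeline, and the hyper-parameters of each step remain addressable by step name. A minimal sketch, assuming held-out X_test and y_test arrays:

# Predict and score through the pipeline; the scaler is applied
# automatically before the network sees the data.
y_pred = pipeline.predict(X_test)
print(pipeline.score(X_test, y_test))

# Step parameters are addressed as '<step name>__<parameter>'; the
# dict unpacking is needed because the step name contains a space.
pipeline.set_params(**{'neural network__n_iter': 50})
pipeline.fit(X_train, y_train)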

Unsupervised Pre-Training

(NOTE: This is currently not supported with the Lasagne backend.)

If you have large quantities of unlabeled data, you may benefit from unsupervised pre-training using an auto-encoder style architecture.

from sknn import ae, mlp

# Initialize auto-encoder for unsupervised learning.
myae = ae.AutoEncoder(
            layers=[
                ae.Layer("Tanh", units=128),
                ae.Layer("Sigmoid", units=64)],
            learning_rate=0.002,
            n_iter=10)

# Layerwise pre-training using only the input data.
myae.fit(X)

# Initialize the multi-layer perceptron with same base layers.
mymlp = mlp.Regressor(
            layers=[
                mlp.Layer("Tanh", units=128),
                mlp.Layer("Sigmoid", units=64),
                mlp.Layer("Linear")])

# Transfer the weights from the auto-encoder.
myae.transfer(mymlp)
# Now perform supervised-learning as usual.
mymlp.fit(X, y)

The downside of this approach is that auto-encoders currently only support the Tanh and Sigmoid activation functions, which rules out the benefits of more modern activation functions like Rectifier.
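
If those activations matter more to you than pre-training, one option is to skip the auto-encoder and fit a network with Rectifier layers directly on the labeled data. A minimal sketch, reusing the X and y arrays from above:

# No pre-training; modern activations throughout the hidden layers.
mymlp = mlp.Regressor(
            layers=[
                mlp.Layer("Rectifier", units=128),
                mlp.Layer("Rectifier", units=64),
                mlp.Layer("Linear")],
            n_iter=10)
mymlp.fit(X, y)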