Dense layers
class lasagne.layers.DenseLayer(incoming, num_units, W=lasagne.init.GlorotUniform(), b=lasagne.init.Constant(0.), nonlinearity=lasagne.nonlinearities.rectify, **kwargs)

A fully connected layer.
Parameters:

incoming : a Layer instance or a tuple
    The layer feeding into this layer, or the expected input shape.
num_units : int
    The number of units of the layer.
W : Theano shared variable, expression, numpy array or callable
    Initial value, expression or initializer for the weights. This should be a matrix with shape (num_inputs, num_units). See lasagne.utils.create_param() for more information.
b : Theano shared variable, expression, numpy array, callable or None
    Initial value, expression or initializer for the biases. If set to None, the layer will have no biases. Otherwise, the biases should be a 1D array with shape (num_units,). See lasagne.utils.create_param() for more information (a short sketch of these initializer options follows this list).
nonlinearity : callable or None
    The nonlinearity that is applied to the layer activations. If None is provided, the layer will be linear.
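As a sketch of the initializer options for W and b (the values below are illustrative choices, not defaults), a callable initializer from lasagne.init, a numpy array of the right shape, or None for b can all be passed:

>>> import lasagne.init
>>> from lasagne.layers import InputLayer, DenseLayer
>>> l_in = InputLayer((100, 20))
>>> l = DenseLayer(l_in, num_units=50, W=lasagne.init.Normal(0.01), b=None)  # Gaussian-initialized weights, no biases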
Notes

If the input to this layer has more than two axes, it will flatten the trailing axes. This is useful, for example, when a dense layer follows a convolutional layer; it is not necessary to insert a FlattenLayer in this case.

Examples
>>> from lasagne.layers import InputLayer, DenseLayer
>>> l_in = InputLayer((100, 20))
>>> l1 = DenseLayer(l_in, num_units=50)
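As a sketch of the flattening behavior from the notes (the 4D input shape here is an illustrative assumption, e.g. a stack of convolutional feature maps), the resulting shape can be checked with lasagne.layers.get_output_shape():

>>> from lasagne.layers import get_output_shape
>>> l_maps = InputLayer((100, 32, 7, 7))   # batch of 32 feature maps of size 7x7
>>> l2 = DenseLayer(l_maps, num_units=50)  # trailing axes are flattened to 32*7*7 inputs
>>> get_output_shape(l2)
(100, 50)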
class lasagne.layers.NINLayer(incoming, num_units, untie_biases=False, W=lasagne.init.GlorotUniform(), b=lasagne.init.Constant(0.), nonlinearity=lasagne.nonlinearities.rectify, **kwargs)

Network-in-network layer. Like DenseLayer, but broadcasting across all trailing dimensions beyond the 2nd. This results in a convolution operation with filter size 1 on all trailing dimensions. Any number of trailing dimensions is supported, so NINLayer can be used to implement 1D, 2D, 3D, … convolutions.
Parameters:

incoming : a Layer instance or a tuple
    The layer feeding into this layer, or the expected input shape.
num_units : int
    The number of units of the layer.
untie_biases : bool
    If False, the layer has a single bias vector, similar to a dense layer. If True, a separate bias vector is used for each trailing dimension beyond the 2nd.
W : Theano shared variable, expression, numpy array or callable
    Initial value, expression or initializer for the weights. This should be a matrix with shape (num_inputs, num_units), where num_inputs is the size of the second dimension of the input. See lasagne.utils.create_param() for more information.
b : Theano shared variable, expression, numpy array, callable or None
    Initial value, expression or initializer for the biases. If set to None, the layer will have no biases. Otherwise, the biases should be a 1D array with shape (num_units,) for untie_biases=False, and a tensor of shape (num_units, input_shape[2], ..., input_shape[-1]) for untie_biases=True. See lasagne.utils.create_param() for more information.
nonlinearity : callable or None
    The nonlinearity that is applied to the layer activations. If None is provided, the layer will be linear.
References

[1] Lin, Min, Qiang Chen, and Shuicheng Yan (2013): Network in network. arXiv preprint arXiv:1312.4400.

Examples
>>> from lasagne.layers import InputLayer, NINLayer
>>> l_in = InputLayer((100, 20, 10, 3))
>>> l1 = NINLayer(l_in, num_units=5)
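As a sketch of the broadcasting behavior (continuing the shapes above), the second axis is mapped to num_units while the trailing dimensions are preserved; with untie_biases=True, the bias parameter picks up the trailing dimensions as described in the parameter list:

>>> from lasagne.layers import get_output_shape
>>> get_output_shape(l1)  # trailing dimensions (10, 3) are preserved
(100, 5, 10, 3)
>>> l2 = NINLayer(l_in, num_units=5, untie_biases=True)
>>> l2.b.get_value().shape  # one bias per unit and trailing position
(5, 10, 3)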