Multi-Layer Perceptron (MLP)

Finger Biometric -- Neural networks

Neural networks
Neural networks are made up of many artificial neurons. Each input into a neuron has its own weight associated with it (illustrated on the slide by a red circle). A weight is simply a floating-point number, and it is these weights we adjust when we eventually come to train the network.

A neuron can have any number of inputs, from one up to n, where n is the total number of inputs. The inputs may therefore be represented as x1, x2, x3, ..., xn, and the corresponding weights as w1, w2, w3, ..., wn.

Output: a = x1w1 + x2w2 + x3w3 + ... + xnwn
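As a quick sketch in Python (the three-input example and its weight values are purely illustrative):

def neuron_output(inputs, weights):
    # Output a = x1*w1 + x2*w2 + ... + xn*wn
    return sum(x * w for x, w in zip(inputs, weights))

# Illustrative 3-input neuron (weights chosen arbitrarily)
a = neuron_output([1.0, 0.0, 1.0], [0.5, -0.2, 0.8])
print(a)  # 1.3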

How do we actually use an artificial neuron? In a feedforward network, the neurons in each layer feed their output forward to the next layer until we get the final output from the neural network. There can be any number of hidden layers within a feedforward network, and the number of neurons in each layer can be completely arbitrary.

Neural Networks by an Example
Let's design a neural network that will detect the number '4'. Given a panel made up of a grid of lights which can be either on or off, we want our neural net to let us know whenever it thinks it sees the character '4'. The panel is eight cells square, so the neural net will have 64 inputs, each one representing a particular cell in the panel, and a hidden layer consisting of a number of neurons (more on this later), all feeding their output into just one neuron in the output layer.

To train it: initialise the neural net with random weights, then feed it a series of inputs which represent, in this example, the different panel configurations. For each configuration we check what its output is and adjust the weights accordingly, so that whenever it sees something looking like a number 4 it outputs a 1, and for everything else it outputs a zero. More: http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html
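As a rough sketch of that architecture in Python (the hidden-layer size of 16, the sigmoid squashing function, and the random-weight range are illustrative assumptions; the slides leave the number of hidden neurons open):

import math
import random

random.seed(0)
N_INPUTS, N_HIDDEN, N_OUTPUTS = 64, 16, 1   # 8x8 panel -> hidden layer -> one output neuron

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "initialise the neural net with random weights"
w_hidden = [[random.uniform(-1, 1) for _ in range(N_INPUTS)] for _ in range(N_HIDDEN)]
w_output = [[random.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(N_OUTPUTS)]

def forward(panel):
    # panel: 64 values, 1.0 for a lit cell and 0.0 for an unlit one
    hidden = [sigmoid(sum(w * x for w, x in zip(row, panel))) for row in w_hidden]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_output][0]

print(forward([0.0] * 64))   # untrained output is arbitrary; training should push it towards 1 only for '4'-like panels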

Multi-Layer Perceptron (MLP)

We will introduce the MLP and the backpropagation algorithm which is used to train it. 'MLP' is used to describe any general feedforward network (one with no recurrent connections); however, we will concentrate on nets with units arranged in layers.

Different books refer to such a net as either 4-layer (counting the layers of neurons) or 3-layer (counting the layers of adaptive weights). We will follow the latter convention.

1st question: what do the extra layers gain you? Start by looking at what a single layer can't do.

Perceptron Learning Theorem
Recap: a perceptron (threshold unit) can learn anything that it can represent (i.e. anything separable with a hyperplane).
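For instance, a single two-input threshold unit can only carve the plane with one straight line; a quick sketch in Python (the AND-style weights and threshold below are an illustrative choice):

def perceptron(x1, x2, w1=1.0, w2=1.0, threshold=1.5):
    # fires exactly when w1*x1 + w2*x2 >= threshold, i.e. on one side of a line (hyperplane)
    return 1 if w1 * x1 + w2 * x2 >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', perceptron(a, b))   # these weights represent AND: only 1 1 -> 1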

The Exclusive OR problem
A perceptron cannot represent Exclusive OR since it is not linearly separable.

Minsky & Papert (1969) offered a solution to the XOR problem by combining perceptron unit responses using a second layer of units: piecewise linear classification using an MLP with threshold (perceptron) units. [Figure: an MLP of threshold units (labelled 1, 2, 3) with +1 bias inputs.]
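A small sketch of that idea in Python: each first-layer threshold unit draws one line, and a second-layer threshold unit combines their responses to realise XOR (the particular weights and thresholds below are illustrative choices, not taken from the slide):

def step(z):
    # threshold (perceptron) unit
    return 1 if z >= 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # fires if at least one input is on (OR-like boundary)
    h2 = step(x1 + x2 - 1.5)    # fires only if both inputs are on (AND-like boundary)
    return step(h1 - h2 - 0.5)  # second layer: on when h1 fires but h2 does not

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', xor_mlp(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0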

Three-layer networks
[Figure: a three-layer network with inputs x1, x2, ..., xn, hidden layers, and output.]

Properties of architecture
- No connections within a layer
- No direct connections between input and output layers
- Fully connected between layers
- Often more than 3 layers
- Number of output units need not equal number of input units
- Number of hidden units per layer can be more or less than the number of input or output units
- Each unit is a perceptron: $y_i = f\left(\sum_{j=1}^{m} w_{ij} x_j + b_i\right)$
- Often include the bias as an extra weight
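A sketch of one such layer of perceptron units in Python (the sigmoid choice for f and the example numbers are illustrative assumptions):

import math

def f(z):
    # activation function; a sigmoid is assumed here
    return 1.0 / (1.0 + math.exp(-z))

def layer_output(x, W, b):
    # y_i = f( sum_j w_ij * x_j + b_i ) for each unit i in the layer
    return [f(sum(w_ij * x_j for w_ij, x_j in zip(W_i, x)) + b_i)
            for W_i, b_i in zip(W, b)]

# Illustrative layer: 2 units, 3 inputs (weights and biases chosen arbitrarily)
x = [0.5, -1.0, 2.0]
W = [[0.1, 0.4, -0.3],
     [0.8, -0.2, 0.5]]
b = [0.0, -0.1]
print(layer_output(x, W, b))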

What do each of the layers do?
- 1st layer draws linear boundaries
- 2nd layer combines the boundaries
- 3rd layer can generate arbitrarily complex boundaries

Backpropagation learning algorithm (BP)
BP is the solution to the credit assignment problem in the MLP: Rumelhart, Hinton and Williams (1986) (though it was actually invented earlier, in a PhD thesis relating to economics). BP has two phases:

- Forward pass phase: computes the functional signal; feedforward propagation of the input pattern signals through the network.
- Backward pass phase: computes the error signal; propagates the error backwards through the network, starting at the output units (where the error is the difference between the actual and desired output values).

Conceptually: activity flows forward, error flows backward.

Forward Propagation of Activity
Step 1: Initialise the weights at random and choose a learning rate.
Then, until the network is trained, for each training example (i.e. each input pattern and its target output(s)):

Step 2: Do a forward pass through the net (with fixed weights) to produce the output(s), i.e. in the forward direction, layer by layer:
- inputs applied
- multiplied by weights
- summed
- squashed by the sigmoid activation function
- output passed to each neuron in the next layer
Repeat the above until the network output(s) are produced.

Step 3: Back-propagation of error
Compute the error (delta, or local gradient) for each output unit k; then, layer by layer, compute the error (delta, or local gradient) for each hidden unit j by backpropagating the errors (as shown previously).

Step 4: Update all the weights wij by gradient descent, and go back to Step 2.

The overall MLP learning algorithm, involving the forward pass and backpropagation of error (repeated until network training is complete), is known as the Generalised Delta Rule (GDR) or, more commonly, the Back-Propagation (BP) algorithm; one training step is sketched below.
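A minimal sketch of Steps 1-4 in Python, for one hidden layer of sigmoid units (the layer sizes, the learning rate and the squared-error-style deltas are illustrative assumptions):

import math, random

random.seed(1)
ETA = 0.5                      # Step 1: learning rate (illustrative value)
N_IN, N_HID, N_OUT = 2, 2, 1   # illustrative layer sizes

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Step 1: initialise weights at random (bias folded in as an extra weight on a constant +1 input)
W1 = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
W2 = [[random.uniform(-1, 1) for _ in range(N_HID + 1)] for _ in range(N_OUT)]

def forward(x):
    # Step 2: forward pass, layer by layer (inputs applied, weighted, summed, squashed)
    h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in W1]
    y = [sigmoid(sum(w * v for w, v in zip(row, h + [1.0]))) for row in W2]
    return h, y

def train_step(x, target):
    h, y = forward(x)
    # Step 3: delta (local gradient) for each output unit k, then for each hidden unit j,
    # using f'(net) = y*(1 - y) for the sigmoid
    d_out = [(t - yk) * yk * (1 - yk) for t, yk in zip(target, y)]
    d_hid = [hj * (1 - hj) * sum(dk * W2[k][j] for k, dk in enumerate(d_out))
             for j, hj in enumerate(h)]
    # Step 4: update all the weights w_ij by gradient descent, then return to Step 2
    for k, dk in enumerate(d_out):
        W2[k] = [w + ETA * dk * v for w, v in zip(W2[k], h + [1.0])]
    for j, dj in enumerate(d_hid):
        W1[j] = [w + ETA * dj * v for w, v in zip(W1[j], x + [1.0])]

train_step([0.0, 1.0], [1.0])    # a single back-prop iteration on one training example
print(forward([0.0, 1.0])[1])    # network output after that one weight update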

Back-prop algorithm summary (with Maths!) (Not Examinable)
Back-prop algorithm summary (with NO Maths!)

MLP/BP: A worked example
- Worked example: Forward Pass
- Worked example: Backward Pass
- Worked example: Update Weights Using the Generalised Delta Rule (BP), and similarly for all the weights wij
- Verification that it works

Training
This was a single iteration of back-prop. Training requires many iterations with many training examples, or epochs (one epoch is one entire presentation of the complete training set). It can be slow! Note that computation in an MLP is local (with respect to each neuron), so a parallel implementation is also possible.

Training and testing data
How many examples? The more the merrier! Use disjoint training and testing data sets: learn from the training data, but evaluate performance (generalization ability) on unseen test data. Aim: minimize error on the test data. A minimal training/evaluation loop is sketched below.
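Continuing the back-prop sketch above (it reuses the forward and train_step functions defined there; the XOR-style patterns, the single held-out example and the 2000 epochs are illustrative choices):

# Disjoint data sets: learn from the training set, evaluate on unseen test data.
# With only four possible XOR patterns this split just illustrates the mechanics;
# real problems need many more examples ("the more the merrier").
train_set = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]), ([1.0, 0.0], [1.0])]
test_set  = [([1.0, 1.0], [0.0])]          # held out; never used for weight updates

for epoch in range(2000):                  # one epoch = one full presentation of the training set
    for x, target in train_set:
        train_step(x, target)              # forward pass + back-prop + weight update

# Generalization: measure (squared) error on the unseen test examples only
test_error = sum((t - y) ** 2
                 for x, target in test_set
                 for t, y in zip(target, forward(x)[1]))
print(test_error / len(test_set))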
