Learning in Neural Networks
- The perceptron, invented by Rosenblatt (1962), was one of the earliest neural network models.
- It models a neuron by taking a weighted sum of its inputs and outputting 1 if the sum exceeds some adjustable threshold value (otherwise it outputs 0).
Figure: A neuron & a Perceptron
Figure: Perceptron with adjustable threshold
- With two inputs, the decision boundary is the set of points where g(x) = w0 + w1x1 + w2x2 = 0
- Rearranging gives x2 = -(w1/w2)x1 - (w0/w2) → the equation of a line
- the location of the line is determined by the weights w0, w1, and w2
- if an input vector lies on one side of the line, the perceptron will output 1
- if it lies on the other side, the perceptron will output 0
- Decision surface: a line that correctly separates the training instances corresponds to a perfectly functioning perceptron.
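The two-input decision rule above can be sketched as follows; the weights here are illustrative values chosen for the example, not taken from the text.

```python
# A minimal sketch of a two-input perceptron's decision rule.
def perceptron_output(w0, w1, w2, x1, x2):
    """Fire (return 1) when g(x) = w0 + w1*x1 + w2*x2 > 0, else return 0."""
    g = w0 + w1 * x1 + w2 * x2
    return 1 if g > 0 else 0

# Weights w0=-1, w1=1, w2=1 place the boundary on the line x2 = -x1 + 1.
print(perceptron_output(-1, 1, 1, 2, 2))  # point on one side of the line -> 1
print(perceptron_output(-1, 1, 1, 0, 0))  # point on the other side -> 0
```

Moving any of the three weights moves or rotates the boundary line, which is why learning amounts to adjusting (w0, w1, w2).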
Perceptron Learning Algorithm
Given: A classification problem with n input features (x1, x2, …, xn) and two output classes.
Compute: A set of weights (w0, w1, w2, …, wn) that will cause a perceptron to fire whenever the input falls into the first output class.
1. Create a perceptron with n+1 inputs and n+1 weights, where the extra input x0 is always set to 1.
2. Initialize the weights (w0, w1, …, wn) to random real values.
3. Iterate through the training set, collecting all examples misclassified by the current set of weights.
4. If all examples are classified correctly, output the weights and quit.
5. Otherwise, compute the vector sum S of the misclassified input vectors, where each vector has the form (x0, x1, …, xn). In forming the sum, add x to S if x is an input for which the perceptron incorrectly fails to fire, but add -x if x is an input for which the perceptron incorrectly fires. Multiply the sum by a scale factor η.
6. Modify the weights (w0, w1, …, wn) by adding the elements of S to them, and go to step 3.
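The steps above can be sketched as a batch perceptron trainer. The function name, data layout, and stopping cap are assumptions made for this sketch.

```python
import random

def train_perceptron(examples, eta=1.0, max_iters=1000):
    """Batch perceptron learning as outlined above.

    examples: list of (x, label) pairs, x a feature tuple, label 0 or 1.
    A constant x0 = 1 is prepended to each input so w[0] plays the role
    of the adjustable threshold.
    """
    n = len(examples[0][0])
    w = [random.uniform(-1, 1) for _ in range(n + 1)]  # step 2: random init
    for _ in range(max_iters):
        S = [0.0] * (n + 1)
        mistakes = 0
        for x, label in examples:              # step 3: scan training set
            xv = (1,) + tuple(x)
            fired = 1 if sum(wi * xi for wi, xi in zip(w, xv)) > 0 else 0
            if fired != label:
                mistakes += 1
                # step 5: +x if it failed to fire, -x if it fired wrongly
                sign = 1 if label == 1 else -1
                S = [s + sign * xi for s, xi in zip(S, xv)]
        if mistakes == 0:
            return w                           # step 4: all correct, quit
        w = [wi + eta * si for wi, si in zip(w, S)]  # step 6: w <- w + eta*S
    return None  # gave up: no solution found within max_iters

# Logical AND is linearly separable, so training converges.
random.seed(0)
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_perceptron(and_data)
print(weights is not None)  # -> True
```

Note that this is the batch form described in the text (one weight update per pass, using the sum of all misclassified vectors); an online variant that updates after each misclassified example is also common.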
- The perceptron learning algorithm is a search algorithm. It begins with a random initial state and finds a solution state. The search space is simply all possible assignments of real values to the perceptron's weights, and the search strategy is gradient descent.
- The perceptron learning rule is guaranteed to converge to a solution in a finite number of steps, so long as a solution exists.
- This brings us to an important question: what problems can a perceptron solve? Recall that a single-neuron perceptron divides the input space into two regions.
- The perceptron can be used to classify input vectors that can be separated by a linear boundary; we call such vectors linearly separable.
- Unfortunately, many problems are not linearly separable. The classic example is the XOR gate: no single line separates its two output classes, and it was the basic perceptron's inability to solve even such simple non-linearly-separable problems that exposed its limits.
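A quick check makes the XOR limitation concrete. The sketch below applies the batch update described earlier to the XOR training set; the iteration cap and seed are arbitrary choices for the example.

```python
import random

random.seed(1)
# XOR: output 1 exactly when the two inputs differ.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = [random.uniform(-1, 1) for _ in range(3)]  # (w0, w1, w2)

converged = False
for _ in range(1000):
    S = [0.0, 0.0, 0.0]
    mistakes = 0
    for (x1, x2), label in xor_data:
        fired = 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0
        if fired != label:
            mistakes += 1
            sign = 1 if label == 1 else -1  # +x on missed fire, -x on false fire
            for i, xi in enumerate((1, x1, x2)):
                S[i] += sign * xi
    if mistakes == 0:
        converged = True
        break
    w = [wi + si for wi, si in zip(w, S)]

print(converged)  # -> False: no weight setting classifies all four points
```

However long it runs, some XOR example is always misclassified, because no assignment of (w0, w1, w2) draws a line with (0,1) and (1,0) on one side and (0,0) and (1,1) on the other.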