· The Perceptron Learning Algorithm is a search algorithm.
· It begins in a random initial state and finds a solution state.
· The search space is simply all possible assignments of real values to the weights of the perceptron.
· The search strategy is gradient descent.
· The perceptron learning rule is guaranteed to converge to a solution in a finite number of steps, as long as a solution exists.
· The perceptron can be used to classify input vectors that can be separated by a linear boundary. Such vectors are called linearly separable. For example, the AND function is linearly separable, while XOR is not.
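As a minimal illustration of what "a linear boundary" means here (the weight values below are an assumed example, not from the notes), the perceptron fires exactly when the weighted sum of its inputs is positive:

def fires(weights, x):
    """Perceptron decision rule: fire iff w0*1 + w1*x1 + ... + wn*xn > 0.
    weights is (w0, ..., wn); x is (x1, ..., xn), with x0 fixed at 1."""
    total = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
    return total > 0

# AND is linearly separable: one weight vector classifies all four inputs.
w = (-1.5, 1.0, 1.0)
print([fires(w, x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [False, False, False, True]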
Algorithm:
Given: A classification problem with n input features (x1, x2, …, xn) and two output classes.
Compute: A set of weights (w0, w1, w2, …, wn) that will cause a perceptron to fire whenever the input falls into the first output class.
1. Create a perceptron with n+1 inputs and n+1 weights, where the extra input x0 is always set to 1.
2. Initialize weights (w0, w1, w2, …, wn) to random real values.
3. Iterate through the training set, collecting all examples misclassified by the current set of weights.
4. If all examples are classified correctly, output the weights and quit.
5. Otherwise, compute the vector sum S of the misclassified input vectors: add each input vector on which the perceptron failed to fire but should have, and subtract each input vector on which it fired but should not have. Multiply the sum by a scale factor η.
6. Modify the weights (w0, w1, w2, …, wn) by adding the elements of the vector S to them. Go to step 3.
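A runnable Python sketch of the six steps above follows. The training data, the default scale factor η = 0.1, the iteration cap, and the function name are illustrative assumptions, not part of the original notes:

import random

def train_perceptron(examples, eta=0.1, max_iters=10_000):
    """Batch perceptron learning (steps 1-6 above).
    examples is a list of ((x1, ..., xn), label) pairs, label 1 meaning
    the perceptron should fire and 0 meaning it should not.
    Returns the learned weights (w0, ..., wn)."""
    n = len(examples[0][0])
    # Steps 1-2: n+1 weights (w0 pairs with the constant input x0 = 1),
    # initialized to random real values.
    w = [random.uniform(-1, 1) for _ in range(n + 1)]
    for _ in range(max_iters):
        # Step 3: collect all examples misclassified by the current weights.
        S = [0.0] * (n + 1)
        mistakes = 0
        for x, label in examples:
            xv = (1,) + tuple(x)  # prepend x0 = 1
            fired = sum(wi * xi for wi, xi in zip(w, xv)) > 0
            if fired != (label == 1):
                mistakes += 1
                # Step 5: sum the misclassified input vectors, adding x when
                # the perceptron failed to fire and -x when it fired wrongly.
                sign = 1 if label == 1 else -1
                for i, xi in enumerate(xv):
                    S[i] += sign * xi
        # Step 4: all examples correct -> output the weights and quit.
        if mistakes == 0:
            return w
        # Steps 5-6: scale the sum by eta, add it to the weights, go to step 3.
        w = [wi + eta * si for wi, si in zip(w, S)]
    raise RuntimeError("no solution found; data may not be linearly separable")

# Usage: learn the AND function, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(data))

Because AND is linearly separable, the convergence guarantee stated earlier applies and the loop terminates with a correct weight vector in a finite number of steps.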