Assignment 6 Solution

1. [5 points] Consider the following training set:

\[
\mathbf{x}_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ t_1 = 0, \quad
\mathbf{x}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\ t_2 = 1, \quad
\mathbf{x}_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\ t_3 = 1, \quad
\mathbf{x}_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ t_4 = 1
\]
a) Plot the training samples in the feature space.
[Plot: the four training samples in the (x1, x2) plane; (0, 0) belongs to class t = 0, the other three points to class t = 1.]
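A minimal Python sketch for part a), as an alternative to plotting by hand (the use of matplotlib here is an illustration, not part of the assignment):

    import matplotlib.pyplot as plt

    # Problem 1 training samples: (x1, x2) with target t
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    t = [0, 1, 1, 1]

    for (x1, x2), ti in zip(X, t):
        # class t = 0 as a red circle, class t = 1 as a blue square
        plt.plot(x1, x2, 'ro' if ti == 0 else 'bs', markersize=10)
    plt.xlabel('x1')
    plt.ylabel('x2')
    plt.title('Problem 1 training samples')
    plt.show()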
b) Apply the perceptron learning rule to the training samples one at a time to obtain weights w1, w2 and bias w0 that separate the training samples. Use w = [w0, w1, w2] = [0, 0, 0] as initial values (take bias input x0 = 1 and learning rate η = 1). Write the expression for the resulting decision boundary and draw it in the graph. [Hint: You can use Excel / OO Calc to implement the perceptron learning rule, e.g., the InClass_09 spreadsheet posted on eClass.]
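For reference, the per-sample rule applied in the table below is the standard perceptron update (the output at a net input of exactly 0 is a convention; here it is assumed to be 1):

\[
y = \begin{cases} 1, & \sum_{i=0}^{2} w_i x_i \ge 0 \\ 0, & \text{otherwise} \end{cases}
\qquad
w_i \leftarrow w_i + \eta\,(t - y)\,x_i, \quad i = 0, 1, 2,\ \ x_0 = 1,\ \eta = 1.
\]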



Epoch | Inputs x1, x2 | Desired output t | Initial weights w0, w1, w2 | Actual output y | Error | Updated weights w0, w1, w2
------|---------------|------------------|----------------------------|-----------------|-------|---------------------------
1     | 0, 0          | 0                | 0, 0, 0                    |                 |       |
      | 0, 1          | 1                |                            |                 |       |
      | 1, 0          | 1                |                            |                 |       |
      | 1, 1          | 1                |                            |                 |       |
2     | 0, 0          | 0                |                            |                 |       |
      | 0, 1          | 1                |                            |                 |       |
      | 1, 0          | 1                |                            |                 |       |
      | 1, 1          | 1                |                            |                 |       |
3     | 0, 0          | 0                |                            |                 |       |
      | 0, 1          | 1                |                            |                 |       |
      | 1, 0          | 1                |                            |                 |       |
      | 1, 1          | 1                |                            |                 |       |
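The table can be filled mechanically. Below is a minimal Python sketch of the one-at-a-time rule (the function name train_perceptron and the tie-break "net >= 0 gives y = 1" are assumptions for illustration; this is not the posted InClass_09 spreadsheet):

    def train_perceptron(X, t, epochs=3, eta=1):
        """Perceptron learning rule; prints one table row per sample."""
        w = [0, 0, 0]  # [w0, w1, w2]; bias input x0 = 1
        for epoch in range(1, epochs + 1):
            for (x1, x2), target in zip(X, t):
                net = w[0] + w[1] * x1 + w[2] * x2
                y = 1 if net >= 0 else 0   # step activation (tie-break assumed)
                err = target - y
                w_before = list(w)
                w[0] += eta * err          # bias input x0 = 1
                w[1] += eta * err * x1
                w[2] += eta * err * x2
                print(epoch, (x1, x2), target, w_before, y, err, list(w))
        return w

    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    w = train_perceptron(X, t=[0, 1, 1, 1])  # Problem 1 targets
    print('final weights:', w)

Under this assumed tie-break the run settles at w = [w0, w1, w2] = [-1, 1, 1], so the decision boundary is -1 + x1 + x2 = 0, i.e. x1 + x2 = 1, which puts (0, 0) on one side and the other three samples on the other.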

2. [5 points] Consider the following training set:

\[
\mathbf{x}_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ t_1 = 0, \quad
\mathbf{x}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\ t_2 = 1, \quad
\mathbf{x}_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\ t_3 = 1, \quad
\mathbf{x}_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix},\ t_4 = 0
\]

which describes the exclusive OR (XOR) problem.




Establish a mathematical (not graphical) proof that this problem is not linearly separable. [Hint: Start with the assumption that these patterns are linearly separable, write down the equations/inequalities corresponding to this assumption, and examine them for a conflict; the first such inequality is provided below as an example.]



Suppose that the problem is linearly separable. The decision boundary can be represented as:




\[
\sum_{i=0}^{2} w_i x_i = 0, \qquad \text{or expanded:} \qquad w_0 x_0 + w_1 x_1 + w_2 x_2 = 0.
\]

This assumption means that either

\[
w_0 x_0 + w_1 x_1 + w_2 x_2 < 0 \ \text{for}\ (x_1, x_2) = (0,1) \wedge (x_1, x_2) = (1,0)
\quad\text{and}\quad
w_0 x_0 + w_1 x_1 + w_2 x_2 \ge 0 \ \text{for}\ (x_1, x_2) = (0,0) \wedge (x_1, x_2) = (1,1),
\]

or

\[
w_0 x_0 + w_1 x_1 + w_2 x_2 > 0 \ \text{for}\ (x_1, x_2) = (0,1) \wedge (x_1, x_2) = (1,0)
\quad\text{and}\quad
w_0 x_0 + w_1 x_1 + w_2 x_2 \le 0 \ \text{for}\ (x_1, x_2) = (0,0) \wedge (x_1, x_2) = (1,1)
\]

must be satisfied. Following the first case and substituting the values of $(x_1, x_2)$ (with $x_0 = 1$), one obtains

\[
\begin{aligned}
w_0 + w_2 &< 0 && (1) \quad \text{from } (x_1, x_2) = (0, 1), \\
w_0 + w_1 &< 0 && (2) \quad \text{from } (x_1, x_2) = (1, 0), \\
w_0 &\ge 0 && (3) \quad \text{from } (x_1, x_2) = (0, 0), \\
w_0 + w_1 + w_2 &\ge 0 && (4) \quad \text{from } (x_1, x_2) = (1, 1).
\end{aligned}
\]

Adding (1) and (2) gives $2w_0 + w_1 + w_2 < 0$, while adding (3) and (4) gives $2w_0 + w_1 + w_2 \ge 0$, a contradiction. The second case leads to a symmetric conflict, so the assumption of linear separability must be false: the XOR problem is not linearly separable.
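The conflict can also be checked mechanically. Since the system is homogeneous in w, any solution of the strict inequalities (1)-(2) could be rescaled so they hold with slack 1; a small scipy sketch (an illustrative check, not part of the required proof) then reports the system infeasible:

    from scipy.optimize import linprog

    # Inequalities (1)-(4) in A_ub @ w <= b_ub form, w = [w0, w1, w2]:
    # (1) w0 + w2 <= -1          (strict '< 0', rescaled by homogeneity)
    # (2) w0 + w1 <= -1
    # (3) -w0 <= 0               (i.e., w0 >= 0)
    # (4) -w0 - w1 - w2 <= 0     (i.e., w0 + w1 + w2 >= 0)
    A_ub = [[1, 0, 1], [1, 1, 0], [-1, 0, 0], [-1, -1, -1]]
    b_ub = [-1, -1, 0, 0]
    res = linprog(c=[0, 0, 0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
    print(res.status)  # 2 means infeasible: no separating line exists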

Apply the perceptron learning rule following the same procedure as in Problem 1. Describe your observation.
Epoch | Inputs x1, x2 | Desired output t | Initial weights w0, w1, w2 | Actual output y | Error | Updated weights w0, w1, w2
------|---------------|------------------|----------------------------|-----------------|-------|---------------------------
1     | 0, 0          | 0                | 0, 0, 0                    |                 |       |
      | 0, 1          | 1                |                            |                 |       |
      | 1, 0          | 1                |                            |                 |       |
      | 1, 1          | 0                |                            |                 |       |
2     | 0, 0          | 0                |                            |                 |       |
      | 0, 1          | 1                |                            |                 |       |
      | 1, 0          | 1                |                            |                 |       |
      | 1, 1          | 0                |                            |                 |       |
3     | 0, 0          | 0                |                            |                 |       |
      | 0, 1          | 1                |                            |                 |       |
      | 1, 0          | 1                |                            |                 |       |
      | 1, 1          | 0                |                            |                 |       |
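Reusing the train_perceptron sketch from Problem 1 with the XOR targets illustrates the expected observation (under the same assumed tie-break): the error never stays at zero and the weights fall into a repeating cycle, so the rule does not converge, consistent with the non-separability proof above.

    # Same sketch as in Problem 1, now with the XOR targets
    w = train_perceptron(X, t=[0, 1, 1, 0], epochs=5)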