# Perceptron Exercise

Hi there,
I am doing the perceptron exercise in the perceptron module and I am getting stuck on step 9. In the walkthrough video, the demonstrator gets

```
[-4. 1. -1.5]
```

as the result. For some reason I am not getting this, even though my code matches the video's exactly.

Instead I get

```
[-2. 2. 0.]
```

My code is below:

```python
import matplotlib.pyplot as plt
from sklearn.linear_model import Perceptron

data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]

plt.scatter([point[0] for point in data],
            [point[1] for point in data],
            c=labels)

classifier = Perceptron(max_iter=40)
classifier.fit(data, labels)
print(classifier.score(data, labels))
print(classifier.decision_function([[0, 0], [1, 1], [0.5, 0.5]]))

plt.show()
```

Any help here would be great.

That is interesting (and confirmed different from the video, even though all the code seems to be exactly the same). Maybe the video is using an older version of scikit-learn. I'll see if this discrepancy means anything.


Great, will be good to know. Not only are the numbers different, but so are the relative sizes of the values in the list I get:
`[-2. 2. 0.]`

This means the first two points come out the same distance from the line, when in fact point 2 should be closer to it than point 1.
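For what it's worth, `decision_function` just reports w·x + b for each point, so its magnitude is proportional to the distance from the boundary. Each of the two outputs in this thread pins down a consistent weight vector; the weights below are my own inference from those outputs (not anything confirmed by the lesson), but they reproduce both sets of numbers exactly:

```python
# decision_function(x) is just w . x + b; |score| grows with the
# point's distance from the separating line.
def scores(w, b, points):
    return [w[0] * x1 + w[1] * x2 + b for x1, x2 in points]

validation = [[0, 0], [1, 1], [0.5, 0.5]]

# Hypothetical weights inferred from the two outputs in this thread:
print(scores([2, 2], -2, validation))  # [-2, 2, 0]    -> matches my output
print(scores([3, 2], -4, validation))  # [-4, 1, -1.5] -> matches the video
```

Both weight vectors classify AND correctly; they just place the boundary differently, which is why one model puts (0.5, 0.5) exactly on the line (score 0) and the other doesn't.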

Any further update here?

Let me tag someone else who is more into ML at the moment; I got swamped with other work. Thanks for checking up!


Could you try downloading the two different versions of this package (whatever you're using on your own PC and the one in Codecademy's learning environment)? If you stick them each in their own environment, then I think that'd be the easiest way to check whether the exact same script produces different values due to versioning alone.
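To compare the two environments quickly, a small script like this (plain standard-library introspection, nothing Codecademy-specific) prints the Python and scikit-learn versions wherever it runs:

```python
import sys

# Interpreter version first, then the library version if it is installed.
print("Python", sys.version.split()[0])

try:
    import sklearn
    print("scikit-learn", sklearn.__version__)
except ImportError:
    print("scikit-learn is not installed in this environment")
```

Running it both locally and in the lesson's workspace would confirm or rule out the versioning hypothesis in one step.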

It seems to be the order of the input data:

```python
import matplotlib.pyplot as plt
from sklearn.linear_model import Perceptron

#labels = [0, 1, 0,0]
data, labels = [[0,0], [0,1], [1,0], [1,1]], [0, 0, 0, 1]
#data = [ [0,0], [0,1], [1,0], [1,1] ]
#labels = [0,0,0,1]
validation = [[0, 0], [1, 1], [0.5, 0.5]]

plt.scatter([x for x, y in data], [y for x, y in data], c=labels)
plt.show()

classifier = Perceptron(max_iter=40)
classifier.fit(data, labels)

print(classifier.score(data, labels))
print(classifier.decision_function(validation))

plt.scatter([point[0] for point in data],
            [point[1] for point in data],
            c=labels)

#classifier = Perceptron(max_iter=40)
classifier.fit(data, labels)
print(classifier.score(data, labels))
print(classifier.decision_function(validation))

plt.show()
```

I get your results with the order you posted and their results with the order I used (1st set of commented data/labels).

Now why that's the case I'm not sure. I'll try to look into it.


I'm sure we must be getting close. I tried sklearn in a Python 2 environment and got `[-4. 1. -1.5]`. The same script in Python 3 yielded `[-2. 2. 0.]`. A bit blunt perhaps, but quicker than testing multiple different package versions.

Perhaps a change of argument order in the function (hence why two slightly different data orderings also alter the output that much). That is a complete guess; I've not used the function before, so I can't really add much else. It wouldn't surprise me if the Python version was updated for these lessons at some point (between when the video was made and now), which necessitated a few package changes.

Anyone have the link to the video walkthrough?


Hey, thanks for looking into this. Yes, the order does seem to change the output. But when I use the data and labels that are commented out:

```python
#labels = [0, 1, 0, 0]
#data = [ [0,0], [0,1], [1,0], [1,1] ]
```

I get

```
[-1. -2. -1.5]
```

That pair of labels and data doesn't correspond to an AND gate.

Looks like I didn't copy the top line of my data:

```python
# my data order
data = [[0,0], [1,1], [0,1], [1,0]]
labels = [0, 1, 0, 0]

#data = [ [0,0], [0,1], [1,0], [1,1] ]
#labels = [0,0,0,1]
validation = [[0, 0], [1, 1], [0.5, 0.5]]
```

It also looks like you had the data in the same order as the video.

I ran a "test suite" on the exercise locally and got different answers for each data permutation:

```
Data: [[0, 0], [0, 1], [1, 0], [1, 1]]
Labels: [0, 0, 0, 1]
Score: 1.0
DV: [-2.  2.  0.]

Data: [[1, 1], [0, 0], [0, 1], [1, 0]]
Labels: [1, 0, 0, 0]
Score: 1.0
DV: [-2.   1.  -0.5]

Data: [[0, 0], [0, 1], [1, 1], [1, 0]]
Labels: [0, 0, 1, 0]
Score: 1.0
DV: [-3.  1. -1.]

Data: [[0, 0], [1, 1], [0, 1], [1, 0]]
Labels: [0, 1, 0, 0]
Score: 1.0
DV: [-4.   1.  -1.5]
```

I've tried searching to see if order matters but can't find anything.
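Order genuinely matters for the classic perceptron because it updates sequentially, one sample at a time, and so do small implementation details such as whether a point sitting exactly on the boundary (score 0) triggers an update. As a sanity check, here is a from-scratch toy perceptron (my own sketch, not sklearn's actual code) that counts ties as mistakes; with the lesson's data order it happens to land on exactly the video's numbers, which suggests the discrepancy is an implementation/version detail rather than a bug in anyone's script:

```python
# Toy perceptron: labels in {0, 1} are mapped to {-1, +1}; an update
# fires whenever y_signed * (w . x + b) <= 0, i.e. a point exactly on
# the boundary counts as a mistake.
def fit_perceptron(data, labels, max_iter=40):
    w, b = [0.0, 0.0], 0.0
    for _ in range(max_iter):
        mistakes = 0
        for (x1, x2), y in zip(data, labels):
            y_signed = 1 if y == 1 else -1
            if y_signed * (w[0] * x1 + w[1] * x2 + b) <= 0:
                w[0] += y_signed * x1
                w[1] += y_signed * x2
                b += y_signed
                mistakes += 1
        if mistakes == 0:  # converged: a full pass with no updates
            break
    return w, b

def decision_function(w, b, points):
    return [w[0] * x1 + w[1] * x2 + b for x1, x2 in points]

data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
w, b = fit_perceptron(data, labels)
print(decision_function(w, b, [[0, 0], [1, 1], [0.5, 0.5]]))
# With this update rule and data order: [-4.0, 1.0, -1.5]
```

Change the tie rule to a strict `< 0`, or feed the samples in a different order, and the loop converges to a different (but still perfectly separating) set of weights, which is exactly the pattern in the "test suite" above: score 1.0 every time, but different decision values.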


It looks like the order of the inputs does end up generating different models: Perceptron Project: AND Heatmap not Displaying Correctly

I ran a 1000-iteration simulation where I generated a certain number of data points (4 or 10000) and fit the Perceptron. The results are below:
