Perceptron Logic Gates

Hi, everyone!

So I’m doing the Perceptron Logic Gates project in the TensorFlow course. I have the code below, but the results it prints are incorrect and differ from the ones in the walkthrough video (which make more sense). As far as I can tell, the code is EXACTLY the same as in the video. Here are my results, followed by the walkthrough’s results and the code I used:


[-2. 2. 0.]

Walkthrough video:

[-4., 1., -1.5]

import codecademylib3_seaborn
from sklearn.linear_model import Perceptron
import matplotlib.pyplot as plt
import numpy as np
from itertools import product

data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]

plt.scatter([point[0] for point in data], [point[1] for point in data], c=labels)

classifier = Perceptron(max_iter=40)
classifier.fit(data, labels)
print(classifier.score(data, labels))
print(classifier.decision_function([[0, 0], [1, 1], [0.5, 0.5]]))

I’m afraid I can’t give you an exact answer, but the following links might be of interest to you, where @van19bois provided some useful testing:

I got the same results while working through them myself and am also unsure.

Following up on the links @tgrtim provided, I did some testing on this, and the order of the data points matters for the final model. Your results are correct for your code. The video walkthrough uses

Data: [[0, 0], [1, 1], [0, 1], [1, 0]]
Labels: [0, 1, 0, 0]

which will produce different results. I ran a larger test suite over different orderings of the data points, and the perceptron’s results were quite variable.
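For what it’s worth, part of the variability is easy to reproduce: scikit-learn’s Perceptron defaults to shuffle=True with random_state=None, so each fit can visit the points in a different order. A minimal sketch (the parameter choices here are mine, not from the lesson) that pins the seed so repeated fits come out identical:

```python
from sklearn.linear_model import Perceptron

data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]  # AND gate

# Pinning random_state (and disabling tol-based early stopping) makes
# repeated fits identical; without it, shuffle=True can change the order
# the points are visited in, and therefore the learned weights.
clf_a = Perceptron(max_iter=40, tol=None, random_state=42).fit(data, labels)
clf_b = Perceptron(max_iter=40, tol=None, random_state=42).fit(data, labels)

test_points = [[0, 0], [1, 1], [0.5, 0.5]]
print(clf_a.decision_function(test_points))
print((clf_a.decision_function(test_points) ==
       clf_b.decision_function(test_points)).all())  # identical fits
```

Since AND is linearly separable, the perceptron convergence theorem guarantees a perfect fit within the 40 epochs once early stopping is off; only the exact weights (and hence the decision_function values) depend on the visit order.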

CC: @uwusfdfggh
