FAQ: Perceptron - The Perceptron Algorithm

This community-built FAQ covers the “The Perceptron Algorithm” exercise from the lesson “Perceptron”.

Paths and Courses
This exercise can be found in the following Codecademy content:

FAQs on the exercise The Perceptron Algorithm

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head here.

Looking for motivation to keep learning? Join our wider discussions.

Learn more about how to use this guide.

Found a bug? Report it!

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

This is not a question, it’s a bug report…

The code does not run. It just gets stuck.

I ran the script on my own computer and it runs just fine. I cannot continue with the lesson.


I’m having the same problem.
Got stuck on question 3 and can’t run the code.
Had to get the solution from the “Get Help” link.

I had the same problem.

After we change total_error to total_error + abs(error):
I don’t understand how it will ever reach zero, since the error count only keeps adding up.

      for inputs in training_set:
        prediction = self.activation(self.weighted_sum(inputs))  # the perceptron's guess for this point
        actual = training_set[inputs]                            # true label from the training set
        error = actual - prediction
        total_error += abs(error)                                # accumulate the error for this pass
        for i in range(self.num_inputs):
          self.weights[i] += error * inputs[i]                   # nudge each weight toward the correct label

The key is the little bit before the for loop:

    while not foundLine:
      total_error = 0

At the start of every pass through the while loop, total_error is reset to 0.

That is what we want: total_error should only reach 0 once every individual error becomes 0. If any individual error is non-zero, the code re-enters the while loop to keep nudging the weights.
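For anyone who still finds the loop confusing, here is a minimal sketch of how the whole training method fits together. It assumes the Codecademy-style Perceptron class from this lesson; the __init__, weighted_sum(), and activation() bodies below are reconstructions for illustration, not the exercise’s official solution.

    class Perceptron:
      def __init__(self, num_inputs=2, weights=None):
        self.num_inputs = num_inputs
        self.weights = weights if weights is not None else [1] * num_inputs

      def weighted_sum(self, inputs):
        # dot product of the current weights and one input point
        return sum(w * x for w, x in zip(self.weights, inputs))

      def activation(self, weighted_sum):
        # step function: +1 for non-negative sums, -1 otherwise
        return 1 if weighted_sum >= 0 else -1

      def training(self, training_set):
        foundLine = False
        while not foundLine:
          total_error = 0                      # reset at the start of every pass
          for inputs in training_set:
            prediction = self.activation(self.weighted_sum(inputs))
            actual = training_set[inputs]
            error = actual - prediction
            total_error += abs(error)          # only accumulates within this pass
            for i in range(self.num_inputs):
              self.weights[i] += error * inputs[i]
          # after one full pass with no mistakes, total_error stays 0 and the loop ends
          foundLine = (total_error == 0)

On a small linearly separable set such as {(0, 3): 1, (3, 0): -1, (0, -3): -1, (-3, 0): 1} this converges after a couple of passes; the loop terminates precisely because total_error is rebuilt from 0 on every pass.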


Can anyone explain why

foundLine = (total_error == 0)

returns an error? In any other IDE it works.

Does anyone know how one would go about making this code compatible with list inputs instead of the older dictionary inputs?

Feel free to DM me about it on Discord.
Or if you just want to talk about artificial neurons running on minimal code with no plugins installed.
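On the list-inputs question above: one possible approach (just a sketch, with a made-up method name train_from_list) is to store the data as a list of (inputs, label) pairs and unpack each pair in the loop instead of looking the label up in a dictionary:

    # hypothetical drop-in replacement for the training method inside the
    # Perceptron class, taking a list of (inputs, label) pairs
    def train_from_list(self, training_examples):
      foundLine = False
      while not foundLine:
        total_error = 0
        for inputs, actual in training_examples:   # each element is an (inputs, label) pair
          prediction = self.activation(self.weighted_sum(inputs))
          error = actual - prediction
          total_error += abs(error)
          for i in range(self.num_inputs):
            self.weights[i] += error * inputs[i]
        foundLine = (total_error == 0)

    # the same example data in list form:
    # list_training_set = [((0, 3), 1), ((3, 0), -1), ((0, -3), -1), ((-3, 0), 1)]

The only real change is that the label travels with the inputs instead of being looked up by key, so the weight-update logic stays identical.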