FAQ: Deep Learning Math - Backpropagation

This community-built FAQ covers the “Backpropagation” exercise from the lesson “Deep Learning Math”.

Paths and Courses
This exercise can be found in the following Codecademy content:

Data Scientist

FAQs on the exercise Backpropagation

There are currently no frequently asked questions associated with this exercise – that’s where you come in! You can contribute to this section by offering your own questions, answers, or clarifications on this exercise. Ask or answer a question by clicking reply below.

If you’ve had an “aha” moment about the concepts, formatting, syntax, or anything else with this exercise, consider sharing those insights! Teaching others and answering their questions is one of the best ways to learn and stay sharp.

Join the Discussion. Help a fellow learner on their journey.

Ask or answer a question about this exercise by clicking reply below!
You can also find further discussion and get answers to your questions over in Language Help.

Agree with a comment or answer? Like it to up-vote the contribution!

Need broader help or resources? Head to Language Help and Tips and Resources. If you want feedback or inspiration for a project, check out Projects.

Looking for motivation to keep learning? Join our wider discussions in Community.

Learn more about how to use this guide.

Found a bug? Report it online, or post in Bug Reporting.

Have a question about your account or billing? Reach out to our customer support team!

None of the above? Find out where to ask other questions here!

Why are we using gradient descent, or some other numerical method that may have difficulty arriving at the true global minimum of the loss function, instead of using calculus to minimize the loss function directly?
Finding Maxima and Minima using Calculus
Wikipedia warning about limitations of gradient descent
Especially since, with libraries like sympy, we can solve derivatives and integrals symbolically?
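For context, here is a minimal sketch of what the question is asking about: using sympy to minimize a loss analytically by setting the derivative to zero. The loss function here is a made-up one-parameter example, not anything from the exercise.

```python
import sympy as sp

# Hypothetical one-parameter loss: L(w) = (w - 3)**2 + 1
w = sp.symbols('w')
loss = (w - 3)**2 + 1

# Solve dL/dw = 0 symbolically for the critical point
critical_points = sp.solve(sp.diff(loss, w), w)
print(critical_points)  # [3]
```

This works nicely in one variable, but the answer below explains why it stops being practical for the high-dimensional, non-convex losses of deep networks.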

I’m replying to myself here, but I found this video by 3Blue1Brown.

To find the minimum using calculus, you would have to do multivariable calculus on a surface that exists in an n-dimensional space, where n is the number of features in your dataset (the number of columns). Such a surface typically has many local minima that are not the global minimum, and solving for all of its critical points analytically is intractable, which is why we resort to numerical methods like gradient descent, stochastic gradient descent, etc.