Is gradient descent only applicable to two dimensions?


In the context of this exercise introducing gradient descent, is gradient descent only applicable to two dimensions?


No, gradient descent is not limited to two dimensions.

For the models we use in the lesson, we apply it in two dimensions, but it can also apply to three, four, or even an infinite number of dimensions. This is because, in general, gradient descent is used to find the minimum of a function, regardless of how many dimensions it is in.

For an example of gradient descent in more than two dimensions, picture it in three dimensions: instead of a curve, we have a surface with hills and valleys. Gradient descent starts at some point on this surface and tries to find a minimum, going down the hill into a valley or low point.
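To make that concrete, here's a minimal sketch (my own toy example, not from the lesson) of gradient descent on the bowl-shaped function f(x, y) = x² + 3y², whose minimum sits at the origin. Note that the loop never assumes a particular number of dimensions; the same code works for a point with 2, 3, or 100 coordinates, as long as `gradient` returns one partial derivative per coordinate:

```python
# Gradient descent sketch on f(x, y) = x**2 + 3 * y**2.
# Its gradient is (2x, 6y); stepping against it moves downhill.

def gradient(point):
    # Partial derivatives of f with respect to each coordinate.
    x, y = point
    return [2 * x, 6 * y]

def gradient_descent(start, learning_rate=0.1, steps=200):
    point = list(start)
    for _ in range(steps):
        grad = gradient(point)
        # Move each coordinate a small step against the gradient.
        point = [p - learning_rate * g for p, g in zip(point, grad)]
    return point

# Starting from (3, 4) on the "hill", we slide into the valley at (0, 0).
minimum = gradient_descent([3.0, 4.0])
print(minimum)  # both coordinates end up very close to 0
```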


But what would the equation for that be? Or does the equation differ each time we increase the number of dimensions?


I don't have an explanation for the -2/N term.


The N normalizes (it averages the error over the N data points), and the 2 appears when we differentiate the squared error term. If you go through the derivation, you can see where both come from.
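Assuming the loss in question is the mean squared error of a line $y = mx + b$ (the usual setup in these lessons), the derivation is short. Differentiating the loss with respect to $b$ pulls the 2 down from the square and a $-1$ out of the inner term by the chain rule:

```latex
L(m, b) = \frac{1}{N} \sum_{i=1}^{N} \bigl(y_i - (m x_i + b)\bigr)^2

\frac{\partial L}{\partial b}
  = \frac{1}{N} \sum_{i=1}^{N} 2\bigl(y_i - (m x_i + b)\bigr)\cdot(-1)
  = -\frac{2}{N} \sum_{i=1}^{N} \bigl(y_i - (m x_i + b)\bigr)

\frac{\partial L}{\partial m}
  = -\frac{2}{N} \sum_{i=1}^{N} x_i \bigl(y_i - (m x_i + b)\bigr)
```

The $-\frac{2}{N}$ in front is exactly the factor being asked about: $\frac{1}{N}$ from the averaging, $2$ from the power rule on the square, and the minus sign from differentiating the inner $-(mx_i + b)$.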


You can find some info in this video:
In calculus we have the gradient vector. The gradient gives us the direction in which your function (a function of n variables) grows fastest. If you multiply it by -1, you get the opposite direction (the fastest decrease).

Hope that helps you.
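As a quick sanity check of that claim (again a toy example of mine, not from the video), stepping a small amount against the gradient lowers the function's value, while stepping along the gradient raises it:

```python
# f(x, y) = x**2 + 3 * y**2, a bowl-shaped function of two variables.
def f(x, y):
    return x**2 + 3 * y**2

def grad_f(x, y):
    # Gradient vector (df/dx, df/dy).
    return (2 * x, 6 * y)

x, y = 1.0, 2.0
gx, gy = grad_f(x, y)
step = 0.01

downhill = f(x - step * gx, y - step * gy)  # against the gradient
uphill = f(x + step * gx, y + step * gy)    # along the gradient

print(downhill, f(x, y), uphill)  # downhill < f(x, y) < uphill
```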


Do you have any book title that covers these topics with math at an engineering level?
