I recently started a project in which I'm implementing most of the commonly used gradient-based optimizers. I'm doing this for two reasons:
- for practice: I've been using TensorFlow's pre-implemented optimizers for a long time now, so this is a way to refresh my knowledge
- to create an overview for a few people I know who have just started out with the math behind deep learning
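For context, here is a minimal sketch of the kind of update rule I mean, using SGD with momentum in NumPy (the function name and hyperparameters are illustrative, not taken from the repository):

```python
import numpy as np

def sgd_momentum_step(params, grads, velocity, lr=0.01, beta=0.9):
    """One SGD-with-momentum update: v <- beta*v + grad; p <- p - lr*v."""
    new_velocity = beta * velocity + grads
    new_params = params - lr * new_velocity
    return new_params, new_velocity

# Toy usage: minimize f(x) = x^2 (gradient 2x) starting from x = 5.0.
x = np.array([5.0])
v = np.zeros_like(x)
for _ in range(100):
    x, v = sgd_momentum_step(x, 2 * x, v, lr=0.05)
# x has moved close to the minimum at 0
```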
So here’s why I’m sharing this: as you can see on GitHub, I’ve only implemented a few optimizers so far and haven’t written the explanations for them yet. But I plan to do a lot more, so before I continue I wanted to ask for feedback on the existing code, so that I can incorporate ideas for improvement as I go.
The project can be found here: GitHub - VincentBrunner/neural-network-and-optimizers-with-numpy: In this project I implemented a basic neural network together with some of the most common optimizers.
I would really appreciate it if anyone could point out potential mistakes in the implementation, and especially give feedback on the way/style it was implemented, because that’s usually where I have the most room for improvement.
Thanks in advance!