I am really struggling with this question and it's driving me insane! The italicised words are the ones I selected.

1. Nodes between each layer in a neural network are connected by *weights*, which are the learning parameters of our neural network. They determine the strength of the connection between each pair of linked nodes.
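To make the idea in 1 concrete, here is a minimal sketch (my own toy numbers, not from your course material) of a single neuron combining its inputs with learned weights:

```python
import numpy as np

# Hypothetical toy values: one neuron receiving three inputs.
inputs = np.array([0.5, -1.0, 2.0])   # outputs from the previous layer
weights = np.array([0.8, 0.2, -0.5])  # learned connection strengths
bias = 0.1                            # learned offset term

# The weighted sum that gets passed on to the activation function.
weighted_sum = np.dot(inputs, weights) + bias
print(weighted_sum)
```

Each weight scales how strongly the corresponding input influences this neuron, which is exactly the "strength of the connection" idea.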

2. *Activation Functions* are used in each layer of a neural network and determine whether neurons should be "fired" or not based on the output of a weighted sum.

3. Various types of activation functions can be applied at each layer. The most commonly used activation functions are *ReLU*, softmax and sigmoid.
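For reference, here is a quick sketch of the three activations named in 3, written with NumPy (illustrative definitions, not part of the question):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)        # zeroes out negative inputs

def sigmoid(x):
    return 1 / (1 + np.exp(-x))    # squashes any real number into (0, 1)

def softmax(x):
    e = np.exp(x - np.max(x))      # subtract the max for numerical stability
    return e / e.sum()             # outputs are probabilities summing to 1

z = np.array([-1.0, 0.0, 2.0])
print(relu(z))
print(sigmoid(0.0))
print(softmax(z))
```

Sigmoid and softmax are typically used at the output layer (binary vs. multi-class), while ReLU is the usual default for hidden layers.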

4. When a value is output by a neural network, we calculate its error using a *loss function*. Some commonly used examples of these are MSE and cross-entropy loss.
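The two losses mentioned in 4 look like this on a toy example (made-up target and prediction, just to show the formulas):

```python
import numpy as np

y_true = np.array([1.0, 0.0, 0.0])   # one-hot target class
y_pred = np.array([0.7, 0.2, 0.1])   # model's predicted probabilities

# Mean squared error: average of the squared differences.
mse = np.mean((y_true - y_pred) ** 2)

# Cross-entropy loss: penalises low probability on the true class.
cross_entropy = -np.sum(y_true * np.log(y_pred))

print(mse, cross_entropy)
```

MSE is the usual choice for regression, cross-entropy for classification.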

5. *Backpropagation* refers to the calculation of the gradient of the loss function with respect to the weight parameters in a neural network. The *Machine Learning Model's* algorithm updates our weight parameters by iteratively minimising our loss function to increase our model's accuracy.
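The iterative weight-update process described in 5 can be sketched in a few lines. This is a minimal toy setup I made up (fitting y = w·x to one example with an MSE loss), not your course's exact model; the gradient here is computed by hand via the chain rule, which is what backpropagation automates in a full network:

```python
x, y_true = 2.0, 8.0   # one training example; the true relationship is y = 4x
w = 1.0                # initial weight guess
lr = 0.05              # learning rate

for _ in range(100):
    y_pred = w * x
    # dLoss/dw for loss = (y_pred - y_true)^2, by the chain rule
    grad = 2 * (y_pred - y_true) * x
    # The update step: move the weight against the gradient of the loss
    w -= lr * grad

print(w)
```

Each pass nudges `w` toward the value that minimises the loss, so the weight converges to 4.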

*unused words: residual, layers, auto-grad, weights, gradient descent, bias.*

I suspect I'm getting 5 wrong, but I'm not sure; I've gone through the material but must be messing up somewhere.

Thank you if you can clear this up for me!