diff --git a/content/10_nn.html b/content/10_nn.html
index 5906f0d..62924b9 100644
--- a/content/10_nn.html
+++ b/content/10_nn.html
@@ -274,9 +274,13 @@ let guess = perceptron.feedForward(inputs);
Provide the perceptron with inputs for which there is a known answer.
Ask the perceptron to guess an answer.
Compute the error. (Did it get the answer right or wrong?)
-Adjust all the weights according to the error.
-Return to step 1 and repeat!
+
+
+Adjust all the weights according to the error.
+Return to step 1 and repeat!
+
+
This process can be packaged into a method on the Perceptron class, but before I can write it, I need to examine steps 3 and 4 in more detail. How do I define the perceptron’s error? And how should I adjust the weights according to this error?
The perceptron’s error can be defined as the difference between the desired answer and its guess:
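error = desired output - guess output
In code, that difference is a single subtraction. As for adjusting the weights, the classic perceptron learning rule nudges each weight by the error multiplied by its corresponding input, scaled by a small learning constant. Here’s a minimal sketch of how steps 2 through 4 might be packaged into the train() method mentioned above, assuming the class stores its weights in a weights array and its learning rate in a learningConstant property (the property names are assumptions for illustration):
// A sketch of a training method on the Perceptron class, assuming
// a weights array and a learningConstant property.
train(inputs, desired) {
  // Step 2: Guess an answer for the given inputs.
  let guess = this.feedForward(inputs);
  // Step 3: Compute the error (the desired answer minus the guess).
  let error = desired - guess;
  // Step 4: Nudge every weight in proportion to the error and its
  // corresponding input, scaled down by the learning constant.
  for (let i = 0; i < this.weights.length; i++) {
    this.weights[i] += error * inputs[i] * this.learningConstant;
  }
}
Calling train() over and over with inputs that have known answers is exactly the loop described in the steps above: each call performs steps 1 through 4, and step 5 is the repetition itself.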
Collect the data. Data forms the foundation of any machine learning task. This stage might involve running experiments, manually inputting values, sourcing public data, or a myriad of other methods (like generating synthetic data).
Prepare the data. Raw data often isn’t in a format suitable for machine learning algorithms. It might also have duplicate or missing values, or contain outliers that skew the data. Such inconsistencies may need to be manually adjusted. Additionally, as I mentioned earlier, neural networks work best with normalized data, which has values scaled to fit within a standard range. Another key part of preparing data is separating it into distinct sets: training, validation, and testing. The training data is used to teach the model (step 4), while the validation and testing data (the distinction is subtle—more on this later) are set aside and reserved for evaluating the model’s performance (step 5). A code sketch of this step, along with step 5, follows the list.
Choose a model. Design the architecture of the neural network. Different models are more suitable for certain types of data and outputs.
-Train the model. Feed the training portion of the data through the model and allow the model to adjust the weights of the neural network based on its errors. This process is known as optimization: the model tunes the weights so they result in the fewest errors.
-Evaluate the model. Remember the testing data that was set aside in step 2? Since that data wasn’t used in training, it provides a means to evaluate how well the model performs on new, unseen data.
+Train the model. Feed the training portion of the data through the model and allow the model to adjust the weights of the neural network based on its errors. This process is known as optimization: the model tunes the weights so they result in the fewest errors.
+Evaluate the model. Remember the testing data that was set aside in step 2? Since that data wasn’t used in training, it provides a means to evaluate how well the model performs on new, unseen data.
Tune the parameters. The training process is influenced by a set of parameters (often called hyperparameters) such as the learning rate, which dictates how much the model should adjust its weights based on errors in prediction. I called this the learningConstant in the perceptron example. By fine-tuning these parameters and revisiting steps 4 (training), 3 (model selection), and even 2 (data preparation), you can often improve the model’s performance.
Deploy the model. Once the model is trained and its performance is evaluated satisfactorily, it’s time to use the model out in the real world with new data!
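Since steps 2 and 5 are the most mechanical parts of this lifecycle, here’s a brief sketch of what they might look like in code. The {value, label} data shape, the 80/20 split ratio, and the model’s predict() method are all assumptions made for illustration, not part of any particular library.
// Step 2 (prepare the data): normalize values to the range 0-1
// and divide the dataset into training and testing portions.
// Assumes the values aren't all identical, so max - min isn't 0.
function prepareData(data) {
  let values = data.map((d) => d.value);
  let min = Math.min(...values);
  let max = Math.max(...values);
  let normalized = data.map((d) => ({
    value: (d.value - min) / (max - min),
    label: d.label,
  }));
  // Set aside 20% of the examples for evaluation in step 5.
  let splitIndex = Math.floor(normalized.length * 0.8);
  return {
    training: normalized.slice(0, splitIndex),
    testing: normalized.slice(splitIndex),
  };
}
// Step 5 (evaluate the model): count how often the model's guesses
// match the known labels in the held-out testing set, and report
// accuracy as a fraction between 0 and 1.
function evaluate(model, testing) {
  let correct = 0;
  for (let example of testing) {
    if (model.predict(example.value) === example.label) {
      correct++;
    }
  }
  return correct / testing.length;
}
A stricter version of prepareData() would compute min and max from the training portion alone, so that no information about the testing set leaks into the preparation step. Step 6 then becomes a loop around steps 4 and 5: train several candidate models with different values of the learningConstant, and keep whichever one scores best on the validation set.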