Notion - Update docs

shiffman 2024-02-25 03:03:22 +00:00 committed by GitHub
parent 9736ef195a
commit 568fab9fdf
2 changed files with 14 additions and 6 deletions

View file

@@ -274,9 +274,13 @@ let guess = perceptron.feedForward(inputs);</pre>
<li>Provide the perceptron with inputs for which there is a known answer.</li>
<li>Ask the perceptron to guess an answer.</li>
<li>Compute the error. (Did it get the answer right or wrong?)</li>
<li>Adjust all the weights according to the error.</li>
<li>Return to step 1 and repeat!</li>
</ol>
<div class="avoid-break">
<ol>
<li value="4">Adjust all the weights according to the error.</li>
<li value="5">Return to step 1 and repeat!</li>
</ol>
</div>
<p>This process can be packaged into a method on the <code>Perceptron</code> class, but before I can write it, I need to examine steps 3 and 4 in more detail. How do I define the perceptron's error? And how should I adjust the weights according to this error?</p>
<p>The perceptron's error can be defined as the difference between the desired answer and its guess:</p>
<div data-type="equation">\text{error} = \text{desired output} - \text{guess output}</div>
@@ -552,14 +556,14 @@ function draw() {
<li><strong>Collect the data.</strong> Data forms the foundation of any machine learning task. This stage might involve running experiments, manually inputting values, sourcing public data, or a myriad of other methods (like generating synthetic data).</li>
<li><strong>Prepare the data.</strong> Raw data often isn't in a format suitable for machine learning algorithms. It might also have duplicate or missing values, or contain outliers that skew the data. Such inconsistencies may need to be manually adjusted. Additionally, as I mentioned earlier, neural networks work best with normalized data, which has values scaled to fit within a standard range. Another key part of preparing data is separating it into distinct sets: training, validation, and testing (see the sketch after this list). The training data is used to teach the model (step 4), while the validation and testing data (the distinction is subtle—more on this later) are set aside and reserved for evaluating the model's performance (step 5).</li>
<li><strong>Choose a model.</strong> Design the architecture of the neural network. Different models are more suitable for certain types of data and outputs.</li>
<li><strong>Train the model.</strong> Feed the training portion of the data through the model and allow the model to adjust the weights of the neural network based on its errors. This process is known as <strong>optimization</strong>: the model tunes the weights so they result in the fewest number of errors.</li>
</ol>
<div class="avoid-break">
<ol>
<li value="5"><strong>Evaluate the model.</strong> Remember the testing data that was set aside in step 2? Since that data wasnt used in training, it provides a means to evaluate how well the model performs on new, unseen data.</li>
<li value="4"><strong>Train the model.</strong> Feed the training portion of the data through the model and allow the model to adjust the weights of the neural network based on its errors. This process is known as <strong>optimization</strong>: the model tunes the weights so they result in the fewest number of errors.</li>
</ol>
</div>
<ol>
<li value="5"><strong>Evaluate the model.</strong> Remember the testing data that was set aside in step 2? Since that data wasnt used in training, it provides a means to evaluate how well the model performs on new, unseen data.</li>
<li value="6"><strong>Tune the parameters.</strong> The training process is influenced by a set of parameters (often called <strong>hyperparameters</strong>) such as the learning rate, which dictates how much the model should adjust its weights based on errors in prediction. I called this the <code>learningConstant</code> in the perceptron example. By fine-tuning these parameters and revisiting steps 4 (training), 3 (model selection), and even 2 (data preparation), you can often improve the models performance.</li>
<li value="7"><strong>Deploy the model. </strong>Once the model is trained and its performance is evaluated satisfactorily, its time to use the model out in the real world with new data!</li>
</ol>
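<p>To make step 2 (and its connection to step 5) more concrete, here's a minimal sketch of preparing a dataset in JavaScript: scaling values into a 0–1 range and holding back a portion for evaluation. The <code>rawData</code> records and their 0–640 and 0–240 ranges are hypothetical, purely to illustrate the idea.</p>
<pre class="codesplit" data-code-language="javascript">
// Hypothetical raw dataset: canvas positions with a known label.
let rawData = [
  { x: 120, y: 40, label: "left" },
  { x: 500, y: 200, label: "right" },
  { x: 310, y: 90, label: "left" },
  // ...and many more records
];

// Scale a value from its original range into the range 0 to 1.
function normalize(value, min, max) {
  return (value - min) / (max - min);
}

// Step 2: normalize every value (assumed ranges: x in 0–640, y in 0–240).
let prepared = rawData.map((record) => ({
  inputs: [normalize(record.x, 0, 640), normalize(record.y, 0, 240)],
  label: record.label,
}));

// Step 2 (continued): shuffle, then set aside 20% of the data,
// reserved for evaluating the trained model later (step 5).
prepared.sort(() => Math.random() - 0.5);
let splitIndex = Math.floor(prepared.length * 0.8);
let trainingData = prepared.slice(0, splitIndex);
let testingData = prepared.slice(splitIndex);
</pre>
<p>A real project would likely lean on a library's own utilities for shuffling and splitting, but the principle is the same: some data must stay unseen by the model until evaluation.</p>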

View file

@@ -45,9 +45,13 @@
<li>The y-position of the bird</li>
<li>The y-velocity of the bird</li>
<li>The y-position of the next top pipe's opening</li>
<li>The y-position of the next bottom pipe's opening</li>
<li>The x-distance to the next pipe</li>
</ol>
<div class="avoid-break">
<ol>
<li value="4">The y-position of the next bottom pipes opening</li>
<li value="5">The x-distance to the next pipe</li>
</ol>
</div>
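<p>As a rough sketch of how these five values might be packaged for the network, assume a <code>bird</code> object with <code>x</code>, <code>y</code>, and <code>velocity</code> properties and a <code>nextPipe</code> object with <code>top</code>, <code>bottom</code>, and <code>x</code> properties (hypothetical names). Dividing each value by the p5.js canvas dimensions keeps the inputs in roughly the 0–1 range:</p>
<pre class="codesplit" data-code-language="javascript">
// Assemble the five inputs from hypothetical bird and nextPipe objects,
// scaled by the canvas width and height so each lands roughly between 0 and 1.
let inputs = [
  bird.y / height,                // 1. y-position of the bird
  bird.velocity / height,         // 2. y-velocity of the bird
  nextPipe.top / height,          // 3. y-position of the next top pipe's opening
  nextPipe.bottom / height,       // 4. y-position of the next bottom pipe's opening
  (nextPipe.x - bird.x) / width,  // 5. x-distance to the next pipe
];
</pre>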
<p>These features are illustrated in Figure 11.2.</p>
<figure>
<img src="images/11_nn_ga/11_nn_ga_4.png" alt="Figure 11.2: The Flappy Bird input features for a neural network">