Notion - Update docs
|
@ -15,7 +15,7 @@
|
|||
<h3 id="example-11-bouncing-ball-with-no-vectors">Example 1.1: Bouncing Ball with No Vectors</h3>
|
||||
<figure>
|
||||
<div data-type="embed" data-p5-editor="https://editor.p5js.org/natureofcode/sketches/oadKdOndU" data-example-path="examples/01_vectors/example_1_1_bouncing_ball_with_no_vectors"><img src="examples/01_vectors/example_1_1_bouncing_ball_with_no_vectors/screenshot.png"></div>
|
||||
<figcaption>If you’re reading this book as a PDF or in print, this is the first example where the screenshot includes a trail to give a sense of the motion in the sketch. For more about how to draw trails, see the code examples linked from the website.</figcaption>
|
||||
</figure>
|
||||
</div>
|
||||
<pre class="codesplit" data-code-language="javascript">// Variables for position and speed of ball.
|
||||
|
|
|
@ -4,18 +4,19 @@
|
|||
<p>“This is an exercise in fictional science, or science fiction, if you like that better.”</p>
|
||||
<p>— Valentino Braitenberg</p>
|
||||
</blockquote>
|
||||
<p>So far I’ve been demonstrating inanimate objects, lifeless shapes sitting in the canvas that flop around when affected by forces in their environment. But this is <em>The </em><strong><em>Nature</em></strong><em> of Code</em>. What if I could breathe life into those shapes? What if those shapes could live by their own rules? Can shapes have hopes and dreams and fears? These sorts of questions are the domain of this chapter. They’re what separate unthinking objects from something much more interesting: autonomous agents.</p>
|
||||
<h2 id="forces-from-within">Forces from Within</h2>
|
||||
<p>An <strong><em>autonomous agent</em></strong> is an entity that makes its own choices about how to act in its environment without any influence from a leader or global plan. In this book, “acting” will mean moving. For example, instead of designing a box that sits on a boundary waiting to be pushed by another falling box, I’d now like to design a box that has the ability—or even the “desire”—to leap out of the way of that other falling box, if it so chooses.</p>
|
||||
<p>The switch from inanimate objects to autonomous agents is a significant conceptual leap, but the code base itself will barely change. The “desire” for an autonomous agent to move is just another force, like the force of gravity or the force of the wind. It’s just that now the force is coming <em>from within</em>.</p>
|
||||
<p>Here are three key components of autonomous agents to keep in mind as I build this chapter’s examples:</p>
|
||||
<ul>
|
||||
<li><strong>An autonomous agent has a <em>limited</em> ability to perceive its environment.</strong> It makes sense that a living, breathing being should have an awareness of its environment. What does this mean, however? Throughout the chapter, I’ll point out programming techniques for objects to store references to other objects and therefore “perceive” their environment. It’s also crucial to consider the word <em>limited</em> here. Are you designing an all-knowing rectangle that flies around a p5.js canvas, aware of everything else in that canvas? Or are you creating a shape that can only examine other objects within 15 pixels of itself? Of course, there’s no right answer to this question; it all depends on what you want. I’ll explore several possibilities throughout this chapter, but in general, for a simulation to feel more “natural,” limitations are a good thing. An insect, for example, may only be aware of the sights and smells that immediately surround it. To model a real-world creature, you could study the exact science of these limitations. Luckily, I can just make stuff up and try it out.</li>
|
||||
<li><strong>An autonomous agent processes the information from its environment and calculates an action.</strong> This will be the easy part, as the action is a force. The environment might tell the agent that there’s a big, scary-looking shark swimming right at it, and the action will be a powerful force in the opposite direction.</li>
|
||||
<li><strong>An autonomous agent has no leader.</strong> This third principle is something I care a little less about depending on the context. For example, if you’re designing a system where it makes sense to have a leader barking commands at various entities, then that’s what you’ll want to implement. Nevertheless, many of the chapter’s examples will have no leader for an important reason: toward the end of this chapter, I’ll examine group behaviors and look at designing collections of autonomous agents that exhibit the properties of <strong><em>complex systems</em></strong>. These are intelligent and structured group dynamics that emerge not from a leader, but from the local interactions of the elements themselves.</li>
|
||||
</ul>
|
||||
<h2 id="vehicles-and-steering">Vehicles and Steering</h2>
|
||||
<p>Now that I’ve discussed the core concepts behind autonomous agents, it’s time to begin writing the code. There are many places where I could start. Artificial simulations of ant and termite colonies are fantastic demonstrations of systems of autonomous agents. (For more on this topic, I encourage you to read <em>Turtles, Termites, and Traffic Jams</em> by Mitchel Resnick.) However, I want to begin by examining agent behaviors that build on the work in the first four chapters of this book: modeling motion with vectors and forces. And so it’s time to once again rename the class that describes an entity moving about a canvas. What was once <code>Walker</code> became <code>Mover</code>, which became <code>Particle</code>.</p>
|
||||
<p>In the late 1980s, computer scientist <a href="http://www.red3d.com/cwr/">Craig Reynolds</a> developed algorithmic <strong><em>steering</em></strong> behaviors for animated characters. These behaviors allowed individual elements to navigate their digital environments in a “lifelike” manner, with strategies for fleeing, wandering, arriving, pursuing, evading, and more. Later, in his 1999 paper “Steering Behaviors for Autonomous Characters,” Reynolds uses the word “vehicle” to describe his autonomous agents. I’ll follow suit, calling my autonomous agent class <code>Vehicle</code>.</p>
|
||||
<pre class="codesplit" data-code-language="javascript">class Vehicle {
|
||||
|
||||
constructor(){
|
||||
|
@ -25,39 +26,45 @@
|
|||
}
|
||||
|
||||
//${inline} What else do I need to add?</pre>
|
||||
<p>Like the <code>Mover</code> and <code>Particle</code> classes before it, the <code>Vehicle</code> class’s motion is controlled through its position, velocity, and acceleration vectors. This will make the steering behaviors of a single autonomous agent quite straightforward to implement, and yet by building a system of multiple vehicles that steer themselves according to simple, locally based rules, surprising levels of complexity emerge. The most famous example is Reynolds’s “boids” model for flocking or swarming behavior, which I’ll demonstrate later in the chapter.</p>
|
||||
<div data-type="note">
|
||||
<h3 id="why-vehicles">Why “Vehicles”?</h3>
|
||||
<p>In his 1986 book <em>Vehicles: Experiments in Synthetic Psychology</em>, Italian neuroscientist and cyberneticist Valentino Braitenberg described a series of hypothetical vehicles with simple internal structures. Braitenberg argues that his extraordinarily simple mechanical vehicles manifest behaviors such as fear, aggression, love, foresight, and optimism. Reynolds took his inspiration from Braitenberg, and I’ll take mine from Reynolds.</p>
|
||||
</div>
|
||||
<p>Reynolds describes the motion of <em>idealized</em> vehicles (idealized because he wasn’t concerned with the actual engineering of such vehicles, but rather started with the assumption that they work and respond to the rules defined) as a series of three layers—<strong>Action Selection</strong>, <strong>Steering</strong>, and <strong>Locomotion</strong>.</p>
|
||||
<ol>
|
||||
<li><strong><em>Action Selection.</em></strong> A vehicle has a goal (or goals) and can select an action (or a combination of actions) based on that goal. This is essentially where I left off the discussion of autonomous agents. The vehicle takes a look at its environment and calculates an action based on a desire: “I see a zombie marching toward me. Since I don’t want my brains to be eaten, I’m going to flee from the zombie.” The goal is to keep one’s brains, and the action is to flee. Reynolds’s paper describes many goals and associated actions, such as seeking a target, avoiding an obstacle, and following a path. In a moment, I’ll start building these examples out with p5.js code.</li>
|
||||
<li><strong><em>Steering.</em></strong> Once an action has been selected, the vehicle has to calculate its next move. That next move will be a force; more specifically, a steering force. Luckily, Reynolds has developed a simple steering force formula that I’ll use throughout the examples in this chapter: <strong><em>steering force = desired velocity – current velocity</em></strong>. I’ll get into the details of this formula and why it works so effectively in the next section.</li>
|
||||
<li><strong><em>Locomotion.</em></strong> For the most part, I’m going to ignore this third layer. In the case of fleeing from zombies, the locomotion could be described as “left foot, right foot, left foot, right foot, as fast as you can.” In a canvas, however, a rectangle, circle, or triangle’s actual movement across a window is irrelevant, given that it’s all an illusion in the first place. This isn’t to say that you should ignore locomotion entirely, however. You’ll find great value in thinking about the locomotive design of your vehicle and how you choose to animate it. The examples in this chapter will remain visually bare; a good exercise would be to elaborate on the animation style. For example, could you add spinning wheels, oscillating paddles, or shuffling legs?</li>
|
||||
</ol>
|
||||
<p>Ultimately, the most important layer for you to consider is the first one, action selection. What are the elements of your system, and what are their goals? In this chapter, I’m going to cover a series of steering behaviors (that is, actions): seeking, fleeing, following a path, following a flow field, flocking with your neighbors, and so on. As I’ve said in other chapters, however, the point isn’t that you should use these exact behaviors in all of your projects. Rather, the point is to show you <em>how</em> to model a steering behavior—<em>any</em> steering behavior—in code, and to provide a foundation from which you can design and develop your own vehicles with new and exciting goals and behaviors.</p>
|
||||
<p>What’s more, even though the examples in this chapter will be highly literal (follow that pixel!), you should allow yourself to think more abstractly (like Braitenberg). What would it mean for your vehicle to have “love” as its goal or “fear” as its driving force? Finally (and I’ll address this later in the chapter), you won’t get very far by developing simulations with only one action. Yes, the first example will be “seek a target.” But for you to be creative—to make these steering behaviors <em>your own</em>—it will all come down to mixing and matching multiple actions within the same vehicle. View the coming examples not as singular behaviors to be emulated, but as pieces of a larger puzzle that you’ll eventually assemble.</p>
|
||||
<h3 id="the-steering-force">The Steering Force</h3>
|
||||
<p>What exactly is a steering force? To answer, consider the following scenario: a vehicle with a current velocity is seeking a target. For fun, let’s think of the vehicle as a bug-like creature who desires to savor a delicious strawberry, as in Figure 5.1.</p>
|
||||
<figure>
|
||||
<img src="images/05_steering/05_steering_1.png" alt="Figure 5.1 A vehicle with a velocity and a target">
|
||||
<figcaption>Figure 5.1 A vehicle with a velocity and a target</figcaption>
|
||||
</figure>
|
||||
<p>The vehicle’s goal and subsequent action is to seek the target. Thinking back to Chapter 2, you might begin by making the target an attractor and applying a gravitational force that pulls the vehicle to the target. This would be a perfectly reasonable solution, but conceptually it’s not what I’m looking for here. I don’t want to simply calculate a force that pushes the vehicle toward its target; rather, I want to ask the vehicle to make an intelligent decision to steer toward the target based on its perception of its state (how fast and in what direction it’s currently moving) and its environment (the location of the target). The vehicle should look at how it desires to move (a vector pointing to the target), compare that goal with how it’s currently moving (its velocity), and apply a force accordingly. That’s exactly what Reynolds’s steering force formula says.</p>
|
||||
<div data-type="equation">\text{steering force} = \text{desired velocity} - \text{current velocity}</div>
|
||||
<p>Or, as you might write in p5.js:</p>
|
||||
<pre class="codesplit" data-code-language="javascript">let steer = p5.Vector.sub(desired, velocity);</pre>
|
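<p>As a quick numeric illustration (not from the book’s sketches; plain <code>{x, y}</code> objects stand in for <code>p5.Vector</code>, and the values are made up), the steering formula is just a component-wise subtraction:</p>

```javascript
// A minimal stand-in for p5.Vector.sub(), for illustration only:
// vectors are plain { x, y } objects.
function sub(a, b) {
  return { x: a.x - b.x, y: a.y - b.y };
}

// Hypothetical values: a vehicle currently moving to the right...
let velocity = { x: 3, y: 0 };
// ...that desires to move up and to the right.
let desired = { x: 3, y: 4 };

// steering force = desired velocity - current velocity
let steer = sub(desired, velocity);
// steer is { x: 0, y: 4 }: only the correction, not the full desired motion
```

<p>Note how the resulting force contains only the <em>difference</em> between where the vehicle wants to go and where it’s already headed.</p>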
||||
<p>The <em>current</em> velocity isn’t a problem: the <code>Vehicle</code> class already has a variable for that. However, the <em>desired</em> velocity has to be calculated. Take a look at Figure 5.2. If the vehicle’s goal is defined as “seeking the target,” then its desired velocity is a vector that points from its current position to the target position.</p>
|
||||
<figure>
|
||||
<img src="images/05_steering/05_steering_2.png" alt="Figure 5.2 The vehicle’s desired velocity points from its position to the target. (The desired vector should point from the vehicle’s center to the vehicle’s target but is shortened for illustration purposes.)">
|
||||
<figcaption>Figure 5.2 The vehicle’s desired velocity points from its position to the target. (The desired vector should point from the vehicle’s center to the vehicle’s target but is shortened for illustration purposes.)</figcaption>
|
||||
</figure>
|
||||
<p>Assuming a <code>p5.Vector</code> called <code>target</code> defining the target’s position, I then have:</p>
|
||||
<pre class="codesplit" data-code-language="javascript">let desired = p5.Vector.sub(target, position);</pre>
|
||||
<p>There’s more to the story, however. What if the canvas is high-resolution and the target is thousands of pixels away? Sure, the vehicle might desire to teleport itself instantly to the target position with a massive velocity, but this won’t make for an effective animation. I’ll restate the desire as follows:</p>
|
||||
<p><span class="highlight"><em>The vehicle desires to move toward the target at maximum speed.</em></span></p>
|
||||
<p>In other words, the <code>desired</code> vector should point from the vehicle’s current position to the target position, with a magnitude equal to the maximum speed of the vehicle, as shown in Figure 5.3.</p>
|
||||
<figure>
|
||||
<img src="images/05_steering/05_steering_3.png" alt="Figure 5.3: The magnitude of the vehicle’s desired velocity is “max speed.”">
|
||||
<figcaption>Figure 5.3: The magnitude of the vehicle’s desired velocity is “max speed.”</figcaption>
|
||||
</figure>
|
||||
<p>The concept of maximum speed was introduced in Chapter 1 to ensure that a mover’s speed remained within a reasonable range. However, I didn’t always use it in the subsequent chapters. In Chapter 2, other forces such as friction and drag kept the speed in check, while in Chapter 3, oscillation was caused by opposing forces that kept the speed limited. In this chapter, maximum speed is a key parameter for controlling the behavior of a steering agent, so I’ll include it in all the examples.</p>
|
||||
<p>While I encourage you to consider how other forces such as friction and drag could be combined with steering behaviors, I’m going to focus only on steering forces for the time being. As such, I can include the concept of maximum speed as a limiting factor in the force calculation. First, I need to add a property to the <code>Vehicle</code> class setting the maximum speed.</p>
|
||||
<pre class="codesplit" data-code-language="javascript">class Vehicle {
|
||||
|
||||
constructor(){
|
||||
|
@ -72,11 +79,7 @@
|
|||
<pre class="codesplit" data-code-language="javascript">let desired = p5.Vector.sub(target, this.position);
|
||||
desired.normalize();
|
||||
desired.mult(this.maxspeed);</pre>
|
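<p>Stepping through those two lines with made-up numbers (a hedged sketch using a plain <code>{x, y}</code> object rather than <code>p5.Vector</code>): normalizing divides each component by the vector’s magnitude, and multiplying by the maximum speed then sets the length exactly.</p>

```javascript
// Mirror desired.normalize() followed by desired.mult(maxspeed)
// on a plain { x, y } object (illustrative values only).
function withMagnitude(v, len) {
  let m = Math.sqrt(v.x * v.x + v.y * v.y);
  return { x: (v.x / m) * len, y: (v.y / m) * len };
}

let desired = { x: 30, y: 40 }; // points at a target 50 pixels away
let maxspeed = 5;
let scaled = withMagnitude(desired, maxspeed);
// scaled is { x: 3, y: 4 }, whose magnitude is exactly 5
```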
||||
<p>Putting this all together, I can now write a method called <code>seek()</code> that receives a <code>p5.Vector</code> target and calculates a steering force toward that target.</p>
|
||||
<pre class="codesplit" data-code-language="javascript"> seek(target) {
|
||||
    let desired = p5.Vector.sub(target, this.position);
|
||||
desired.normalize();
|
||||
|
@ -90,14 +93,14 @@ desired.mult(this.maxspeed);</pre>
|
|||
// to the object’s acceleration
|
||||
this.applyForce(steer);
|
||||
}</pre>
|
||||
<p>Notice how I finish the method by passing the steering force into <code>applyForce()</code>. This assumes that the code is built on top of the foundation I developed in <a href="/force#">Chapter 2</a>. However, you could just as easily use the steering force with Box2D’s <code>applyForce()</code> function or toxiclibs’ <code>addForce()</code> function.</p>
|
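<p>To check the logic end to end, here’s a hedged, framework-free sketch of the same seek calculation, with small helper functions standing in for the <code>p5.Vector</code> methods (the positions and the <code>maxspeed</code> and <code>maxforce</code> values are arbitrary):</p>

```javascript
// Plain-object stand-ins for the p5.Vector operations used by seek().
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });
const mag = (v) => Math.sqrt(v.x * v.x + v.y * v.y);
const withMag = (v, len) => {
  const m = mag(v);
  return { x: (v.x / m) * len, y: (v.y / m) * len };
};
// Cap a vector's magnitude, like p5.Vector's limit().
const limit = (v, max) => (mag(v) > max ? withMag(v, max) : v);

function seek(position, velocity, target, maxspeed, maxforce) {
  // Desired velocity: from the position toward the target, at maximum speed.
  const desired = withMag(sub(target, position), maxspeed);
  // Steering force = desired velocity - current velocity, capped at maxforce.
  return limit(sub(desired, velocity), maxforce);
}

// A vehicle already moving toward the target at maxspeed needs no correction.
const noCorrection = seek({ x: 0, y: 0 }, { x: 4, y: 0 }, { x: 100, y: 0 }, 4, 0.1);
// noCorrection is { x: 0, y: 0 }

// A vehicle moving the wrong way gets a force, but never more than maxforce.
const turn = seek({ x: 0, y: 0 }, { x: 0, y: 4 }, { x: 100, y: 0 }, 4, 0.1);
// mag(turn) is 0.1
```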
||||
<p>To see why Reynolds’s steering formula works so well, take a look at Figure 5.4. It shows what the steering force looks like relative to the vehicle and target positions.</p>
|
||||
<figure>
<img src="images/05_steering/05_steering_4.png" alt="Figure 5.4: The vehicle applies a steering force equal to its desired velocity minus its current velocity.">
<figcaption>Figure 5.4: The vehicle applies a steering force equal to its desired velocity minus its current velocity.</figcaption>
</figure>
<p>This force looks quite different from gravitational attraction. Remember one of the principles of autonomous agents: an autonomous agent has a <em>limited</em> ability to perceive its environment. Here’s that ability, subtly embedded into Reynolds’s steering formula. If the vehicle weren’t moving at all (zero velocity), desired minus velocity would equal desired. But in this example, that isn’t the case. The vehicle is aware of its own velocity, and its steering force compensates accordingly. This creates a more active simulation, as the way in which the vehicle moves toward the target depends on the way it was moving in the first place.</p>
<p>In all of this excitement, I’ve missed one last step. What sort of vehicle is this? Is it a super sleek race car with amazing handling? Or a large city bus that needs a lot of advance notice to turn? A graceful panda, or a lumbering elephant? The example code, as it stands, has no feature to account for this variation in steering ability. For that, I need to limit the magnitude of the steering force. I’ll call this limit the “maximum force” (or <code>maxforce</code> for short).</p>
<pre class="codesplit" data-code-language="javascript">class Vehicle {

  constructor(){
    this.acceleration = createVector();
    // Maximum speed
    this.maxspeed = ????;
    // Now I also have a maximum force.
    this.maxforce = ????;
  }
</pre>
<p>Now I just need to impose that limit before applying the steering force.</p>
<pre class="codesplit" data-code-language="javascript">  seek(target) {
    let desired = p5.Vector.sub(target, this.position);
    desired.normalize();
    desired.mult(this.maxspeed);
    let steer = p5.Vector.sub(desired, this.velocity);
    //{!1} Limit the magnitude of the steering force.
    steer.limit(this.maxforce);
    this.applyForce(steer);
  }</pre>
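<p>One way to impose such a cap is p5’s <code>limit()</code> method, which scales a vector down whenever its magnitude exceeds a maximum, without changing its direction. Here’s a minimal, plain-JavaScript sketch of the idea, using a standalone <code>limit()</code> function written for illustration (p5’s own version is a method on a <code>p5.Vector</code> object).</p>

```javascript
// A minimal sketch of what p5.Vector's limit() does: cap a vector's
// magnitude while preserving its direction.
function limit(v, max) {
  const mag = Math.sqrt(v.x * v.x + v.y * v.y);
  if (mag > max) {
    // Scale both components so the magnitude equals max.
    const scale = max / mag;
    return { x: v.x * scale, y: v.y * scale };
  }
  return { x: v.x, y: v.y };
}

// A steering force of magnitude 5 capped at 2.5:
const steer = limit({ x: 3, y: 4 }, 2.5);
console.log(steer); // { x: 1.5, y: 2 }
```

<p>The vector (3, 4) has magnitude 5, so it’s scaled by 0.5; a vector already shorter than the maximum passes through unchanged.</p>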
<p>Limiting the steering force brings up an important point: the goal isn’t to get the vehicle to the target as fast as possible. If it were, I would just say “set position equal to target” and the vehicle would instantly teleport there! Instead, as Reynolds puts it, the goal is to move the vehicle in a “lifelike and improvisational manner.”</p>
<p>I’m trying to make it appear as if the vehicle is steering its way to the target, and so it’s up to me to play with the forces and variables of the system to simulate a given behavior. For example, a large maximum steering force would result in a very different path than a small one (see Figure 5.5). One isn’t inherently better or worse than the other; it depends on the desired effect. (And of course, these values need not be fixed and could change based on other conditions. Perhaps a vehicle has health: the higher the health, the better it can steer.)</p>
<figure>
<img src="images/05_steering/05_steering_5.png" alt="Figure 5.5: The path for a stronger maximum force (left) versus a weaker one (right)">
<figcaption>Figure 5.5: The path for a stronger maximum force (left) versus a weaker one (right)</figcaption>
</figure>
<p>Here’s the full <code>Vehicle</code> class, incorporating the rest of the elements from the Chapter 2 <code>Mover</code> class.</p>
<div data-type="example">
<h3 id="example-51-seeking-a-target">Example 5.1: Seeking a Target</h3>
<figure>
<div data-type="embed" data-p5-editor="https://editor.p5js.org/natureofcode/sketches/Y74O77yxy" data-example-path="examples/05_steering/noc_5_01_seek"></div>
<figcaption></figcaption>
</figure>
</div>
<div data-type="exercise">
<h3 id="exercise-51">Exercise 5.1</h3>
<p>Implement a “fleeing” steering behavior (the desired velocity is the same as “seek,” but pointed in the opposite direction).</p>
</div>
<div data-type="exercise">
<h3 id="exercise-52">Exercise 5.2</h3>
<p>Implement a seeking behavior with a moving target, often referred to as “pursuit.” In this case, your desired vector won’t point toward the object’s current position, but rather its “future” position as extrapolated from its current velocity. You’ll see this ability for a vehicle to “predict the future” in later examples.</p>
</div>
<div data-type="exercise">
<h3 id="exercise-53">Exercise 5.3</h3>
<p>Create a sketch where a vehicle’s maximum force and maximum speed don’t remain constant, but vary according to environmental factors.</p>
</div>
<h3 id="arriving-behavior">Arriving Behavior</h3>
<p>After working for a bit with the seeking behavior, you’re probably asking yourself, “What if I want the vehicle to slow down as it approaches the target?” Before I can even begin to answer this question, I should explain why the seek behavior causes the vehicle to fly past the target in the first place, forcing it to turn around and go back. Consider the brain of a seeking vehicle. What is it thinking at each frame of the animation?</p>
<ul>
<li>I want to go as fast as possible toward the target.</li>
<li>I want to go as fast as possible toward the target.</li>
<li>I want to go as fast as possible toward the target.</li>
<li>I want to go as fast as possible toward the target.</li>
<li>I want to go as fast as possible toward the target.</li>
<li>and so on . . .</li>
</ul>
<p>The vehicle is so gosh darn excited about getting to the target that it doesn’t bother to make any intelligent decisions about its speed. No matter the distance to the target, it always wants to go as fast as possible. When it’s very close, that means the vehicle will end up overshooting the target (see Figure 5.6).</p>
<figure class="half-width-right">
<img src="images/05_steering/05_steering_6.png" alt="Figure 5.6: A vehicle with a desired velocity always at maximum speed will overshoot the target. (Note that while I encourage you to continue thinking about the vehicle as a cute, bug-like creature, to keep things simple it will now be drawn as a triangle.)">
<figcaption>Figure 5.6: A vehicle with a desired velocity always at maximum speed will overshoot the target. (Note that while I encourage you to continue thinking about the vehicle as a cute, bug-like creature, to keep things simple it will now be drawn as a triangle.)</figcaption>
</figure>
<p>In some cases, this is the desired behavior. (Consider a puppy going after its favorite toy: it’s not slowing down no matter how close it gets!) However, in many other cases (a car pulling into a parking spot, a bee landing on a flower), the vehicle’s thought process needs to consider its speed relative to the distance from its target. For example:</p>
<ul>
<li>I’m very far away. I want to go as fast as possible toward the target.</li>
<li>I’m very far away. I want to go as fast as possible toward the target.</li>
</ul>
<p>These activities have yielded a set of motion simulation examples, allowing you to creatively define the physics of the worlds you build (whether realistic or fantastical). Of course, we’re not the first to do this. The world of computer graphics and programming is full of source code dedicated to physics simulations. Just try searching “open-source physics engine” and you could spend the rest of your day poring over rich and complex code. This raises the question: If a code library takes care of physics simulation, why should you bother learning how to write any of the algorithms yourself?</p>
<p>Here is where the philosophy behind this book comes into play. While many of the libraries out there provide “out of the box” physics (and super awesome, sophisticated, and robust physics at that), there are significant reasons for learning the fundamentals before diving into libraries. First, without an understanding of vectors, forces, and trigonometry, you’d likely be lost just reading the documentation of a library. Second, even though a library may take care of the math behind the scenes, it won’t necessarily simplify your code. There can be a great deal of overhead in understanding how a library works and what it expects from you code-wise. Finally, as wonderful as a physics engine might be, if you look deep down into your heart, it’s likely that you seek to create worlds and visualizations that stretch the limits of imagination. A library is great, but it provides a limited set of features. It’s important to know both when to live within those limitations in the pursuit of a creative coding project and when those limits prove to be confining.</p>
<p>This chapter is dedicated to examining two open-source physics libraries for JavaScript: <a href="https://brm.io/matter-js/">matter.js</a> and toxiclibs.js. This isn’t to say that these are the only libraries I recommend for creative coding projects that merit the use of a physics engine. Both, however, integrate nicely with p5.js and will allow me to demonstrate the fundamental concepts behind physics engines and how they relate to and build upon the material from the first five chapters of this book.</p>
<p>There are a multitude of other physics libraries worth exploring alongside these two case studies. One that I would highly recommend is <a href="https://p5play.org/">p5play</a>, a project that was initiated by Paolo Pedercini and currently led by Quinton Ashley. p5play was specifically designed for game development and simplifies the creation of visual objects—known as “sprites”—and manages their interactions, namely “collisions” and “overlaps.” As you may have guessed from the name, it’s tailored to work seamlessly with p5.js. It uses Box2D for physics simulation, which I’ll discuss in the next section.</p>
<p>Each physics library has its own strengths and may offer unique advantages for specific projects. The aim of this chapter isn’t to limit you to matter.js and toxiclibs.js, but to provide you with a foundation in working with physics libraries. The skills you acquire here will enable you to navigate and understand documentation, opening the door to expanding your abilities with any library you choose. Check the book’s website for ports of the examples in this chapter to other libraries.</p>
<h2 id="what-is-matterjs">What is Matter.js?</h2>
<p>When I first began writing this book, matter.js did not exist! The physics engine I used to demonstrate the examples at the time was (and likely still is) the most well known of them all: Box2D. Box2D began as a set of physics tutorials written in C++ by Erin Catto for the Game Developers Conference in 2006. Since then it has evolved into a rich and elaborate open-source physics engine. It’s been used for countless projects, most notably highly successful games such as the award-winning Crayon Physics and the runaway hit Angry Birds.</p>
<p>One of the key things about Box2D is that it is a true physics engine. Box2D knows nothing about computer graphics and the world of pixels. All of Box2D’s measurements and calculations are real-world measurements (meters, kilograms, seconds)—only its “world” is a two-dimensional plane with top, bottom, left, and right edges. You tell it things like: “The gravity of the world is 9.81 newtons per kilogram, and a circle with a radius of four meters and a mass of fifty kilograms is located ten meters above the world’s bottom.” Box2D will then tell you things like: “One second later, the circle is five meters from the bottom; two seconds later, it is ten meters below,” and so on. While this makes for an amazing and realistic physics engine, it also necessitates lots of complicated code to translate back and forth between the physics “world” (a key term in Box2D) and the world you want to draw in—the “pixel” world of the graphics canvas.</p>
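<p>To make that translation concrete, here’s a minimal sketch of the kind of conversion helpers such a project might define. The scale factor and canvas height are hypothetical values chosen for illustration; they aren’t part of Box2D itself, and every project picks its own.</p>

```javascript
// A sketch of the translation code a Box2D-based sketch needs.
// Both constants are assumptions for illustration only.
const SCALE = 10;          // hypothetical: 10 pixels per meter
const CANVAS_HEIGHT = 240; // hypothetical canvas height in pixels

// Box2D's y-axis points up; the canvas's y-axis points down,
// so the vertical coordinate has to be flipped as well as scaled.
function worldToPixels(wx, wy) {
  return { x: wx * SCALE, y: CANVAS_HEIGHT - wy * SCALE };
}

function pixelsToWorld(px, py) {
  return { x: px / SCALE, y: (CANVAS_HEIGHT - py) / SCALE };
}

// A body ten meters above the world's bottom, two meters from the left:
console.log(worldToPixels(2, 10)); // { x: 20, y: 140 }
```

<p>Every shape drawn to the canvas has to pass through helpers like these in one direction, and every mouse interaction in the other.</p>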
<p>What should go in the <code>DNA</code> class? For a typing monkey, its DNA would be the random phrase it types, a string of characters. However, using an array of characters (rather than a string object) provides a more generic template that can extend easily to other data types. For example, the DNA of a creature in a physics system could be an array of vectors—or for an image, an array of numbers (RGB pixel values). Any set of properties can be listed in an array, and even though a string is convenient for this particular scenario, an array will serve as a better foundation for future evolutionary examples.</p>
<p>The genetic algorithm specifies that I create a population of <span data-type="equation">N</span> elements, each with <em>randomly generated genes</em>. The DNA constructor therefore includes a loop to fill in each element of the <code>genes</code> array.</p>
<pre class="codesplit" data-code-language="javascript">class DNA {
  constructor(length) {
    //{!1} The individual "genes" are stored in an array
    this.genes = [];
    // There are "length" genes
    for (let i = 0; i < length; i++) {
      // Each gene is a random character
      this.genes[i] = randomCharacter();
    }
  }
}</pre>
<p>To randomly generate each individual gene, I’ll write a helper function called <code>randomCharacter()</code>.</p>
<pre class="codesplit" data-code-language="javascript">// Return a random character (letter, number, symbol, space, etc.)
function randomCharacter() {
  let c = floor(random(32, 127));
  return String.fromCharCode(c);
}</pre>
<p>The random numbers picked correspond to a specific character according to a standard known as ASCII (American Standard Code for Information Interchange). <code>String.fromCharCode(c)</code> is a native JavaScript function that converts the number into its corresponding character based on that standard. Note that this function will also return numbers, punctuation marks, and special characters. A more modern approach might involve the “Unicode” standard, which includes emojis and characters from a wide variety of world languages.</p>
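<p>A few sample codes from that printable range (32 through 126) show the mapping in action:</p>

```javascript
// The printable ASCII range used by randomCharacter():
// 32 is a space, 65 is the letter "A", 126 is a tilde.
console.log(String.fromCharCode(32));  // " "
console.log(String.fromCharCode(65));  // "A"
console.log(String.fromCharCode(126)); // "~"
```
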
<p>Now that I have the constructor, I can return to <code>setup()</code> and initialize each <code>DNA</code> object in the population array.</p>
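<p>Here’s a minimal, self-contained sketch of what that initialization might look like. The population size (150) and gene length (18) are placeholder values, and <code>Math.random()</code> stands in for p5’s <code>random()</code> so the snippet runs on its own; the real sketch would use the <code>DNA</code> class and <code>randomCharacter()</code> helper shown above.</p>

```javascript
// A stand-in DNA class so this snippet is self-contained.
class DNA {
  constructor(length) {
    this.genes = [];
    for (let i = 0; i < length; i++) {
      // Math.random() stands in for p5's random(32, 127) here:
      // codes 32 through 126 are the printable ASCII characters.
      this.genes[i] = String.fromCharCode(Math.floor(Math.random() * 95) + 32);
    }
  }
}

let population = [];

function setup() {
  // Fill the population with randomly generated DNA.
  for (let i = 0; i < 150; i++) {
    population[i] = new DNA(18);
  }
}

setup();
console.log(population.length);          // 150
console.log(population[0].genes.length); // 18
```
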
<section data-type="chapter">
<h1 id="chapter-10-neural-networks">Chapter 10. Neural Networks</h1>
<blockquote data-type="epigraph">
<p>“The human brain has 100 billion neurons, each neuron connected to 10 thousand other neurons. Sitting on your shoulders is the most complicated object in the known universe.”</p>
<p>— Michio Kaku</p>
</blockquote>
<p>I began with inanimate objects living in a world of forces, and gave them desires, autonomy, and the ability to take action according to a system of rules. Next, I allowed those objects, now called creatures, to live in a population and evolve over time. Now I’d like to ask: What is each creature’s decision-making process? How can it adjust its choices by learning over time? Can a computational entity process its environment and generate a decision?</p>
<p>The human brain can be described as a biological neural network—an interconnected web of neurons transmitting elaborate patterns of electrical signals. Dendrites receive input signals and, based on those inputs, fire an output signal via an axon. Or something like that. How the human brain actually works is an elaborate and complex mystery, one that I certainly am not going to attempt to tackle in rigorous detail in this chapter.</p>
<figure>
<img src="images/10_nn/10_nn_1.png" alt="Figure 10.1: An illustration of a neuron with dendrites and an axon connected to another neuron.">
<figcaption>Figure 10.1: An illustration of a neuron with dendrites and an axon connected to another neuron.</figcaption>
</figure>
<p>The good news is that developing engaging animated systems with code doesn’t require scientific rigor or accuracy, as you’ve learned throughout this book. You can simply be inspired by the idea of brain function.</p>
<p>In this chapter, I’ll begin with a conceptual overview of the properties and features of neural networks and build the simplest possible example of one (a network that consists of a single neuron). I’ll then introduce you to more complex neural networks using the ml5.js library. Finally, I’ll cover “neuroevolution,” a technique that combines genetic algorithms with neural networks to create a “Brain” object that can be inserted into the <code>Vehicle</code> class and used to calculate steering.</p>
<h2 id="artificial-neural-networks-introduction-and-application">Artificial Neural Networks: Introduction and Application</h2>
<p>Computer scientists have long been inspired by the human brain. In 1943, Warren S. McCulloch, a neuroscientist, and Walter Pitts, a logician, developed the first conceptual model of an artificial neural network. In their paper, “A logical calculus of the ideas immanent in nervous activity,” they describe the concept of a neuron, a single cell living in a network of cells that receives inputs, processes those inputs, and generates an output.</p>
<p>Their work, and the work of many scientists and researchers that followed, was not meant to accurately describe how the biological brain works. Rather, an <em>artificial</em> neural network (hereafter referred to as a “neural network”) was designed as a computational model based on the brain to solve certain kinds of problems.</p>
<p>It’s probably pretty obvious to you that there are problems that are incredibly simple for a computer to solve, but difficult for you. Take the square root of 964,324, for example. A quick line of code produces the value 982, a number your computer computed in less than a millisecond. There are, on the other hand, problems that are incredibly simple for you or me to solve, but not so easy for a computer. Show any toddler a picture of a kitten or puppy and they’ll be able to tell you very quickly which one is which. Say “hello” and shake my hand one morning and you should be able to pick me out of a crowd of people the next day. But need a machine to perform one of these tasks? Scientists have already spent entire careers researching and implementing complex solutions.</p>
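<p>In JavaScript, that quick line of code is just a call to <code>Math.sqrt()</code>:</p>

```javascript
// One line does it:
console.log(Math.sqrt(964324)); // 982
```
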
<p>The most prevalent use of neural networks in computing today involves these “easy for a human, difficult for a machine” tasks, known as pattern recognition. These encompass a wide variety of problem areas where the aim is to detect, interpret, and classify data. This includes everything from identifying objects in images, recognizing spoken words, and understanding and generating human-like text to more complex tasks such as predicting your next favorite song or movie, teaching a machine to win at complex games, and detecting unusual cyber activities.</p>
<figure class="half-width-right">
<img src="images/10_nn/10_nn_2.png" alt="Figure 10.2">
<figcaption>Figure 10.2</figcaption>
</figure>
<p>One of the key elements of a neural network is its ability to <em>learn</em>. A neural network is not just a complex system, but a complex <strong><em>adaptive</em></strong> system, meaning it can change its internal structure based on the information flowing through it. Typically, this is achieved by adjusting <em>weights</em>. In the diagram above, each line represents a connection between two neurons and indicates the pathway for the flow of information. Each connection has a <strong><em>weight</em></strong>, a number that controls the signal between the two neurons. If the network generates a “good” output (which I’ll define later), there’s no need to adjust the weights. However, if the network generates a “poor” output—an error, so to speak—then the system adapts, altering the weights in order to improve subsequent results.</p>
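<p>At its core, each neuron computes a weighted sum of its inputs. The following is an illustrative sketch, not code from this chapter: the particular inputs, weights, and simple step activation are made-up values for demonstration.</p>

```javascript
// A sketch of the core computation in a single neuron: each input is
// multiplied by its connection's weight, and the results are summed.
function neuron(inputs, weights) {
  let sum = 0;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  // A simple step "activation function": fire (1) or don't (0).
  return sum > 0 ? 1 : 0;
}

console.log(neuron([12, 4], [0.5, -1])); // 1  (0.5*12 + -1*4 = 2, which is > 0)
```

<p>Adjusting the weights changes when the neuron fires; that adjustment is exactly what learning means here.</p>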
<p>There are several strategies for learning, and I’ll examine two of them in this chapter.</p>
<ul>
<li><strong><em>Supervised Learning</em></strong>—Essentially, a strategy that involves a teacher that is smarter than the network itself. Take facial recognition, for example. The teacher shows the network a bunch of faces, and the teacher already knows the name associated with each face. The network makes its guesses, then the teacher provides the network with the answers. The network can then compare its answers to the known “correct” ones and make adjustments according to its errors. Our first neural network in the next section will follow this model.</li>
<li><strong><em>Unsupervised Learning</em></strong>—Required when there isn’t an example data set with known answers. Imagine searching for a hidden pattern in a data set. One application of this is clustering: dividing a set of elements into groups according to some unknown pattern. I won’t be showing any examples of unsupervised learning in this chapter, as this strategy is less relevant for the examples in this book.</li>
<li><strong><em>Reinforcement Learning</em></strong>—A strategy built on observation. Think of a little mouse running through a maze. If it turns left, it gets a piece of cheese; if it turns right, it receives a little shock. (Don’t worry, this is just a pretend mouse.) Presumably, the mouse will learn over time to turn left. Its neural network makes a decision with an outcome (turn left or right) and observes its environment (yum or ouch). If the observation is negative, the network can adjust its weights in order to make a different decision the next time. Reinforcement learning is common in robotics. At time <code>t</code>, the robot performs a task and observes the results. Did it crash into a wall or fall off a table? Or is it unharmed? I’ll showcase how reinforcement learning works in the context of our simulated steering vehicles.</li>
</ul>
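<p>As a preview of the supervised strategy, here’s a minimal, self-contained sketch of a single neuron (a perceptron) learning the logical AND function from a teacher’s known answers. This isn’t the code developed later in the chapter; the learning rate, epoch count, and the AND task itself are illustrative choices.</p>

```javascript
// A minimal sketch of supervised learning with a single neuron
// (a perceptron), learning the logical AND function.
let weights = [0, 0, 0]; // two input weights plus a bias weight

function predict(a, b) {
  // Weighted sum of the inputs plus a constant bias input of 1.
  const sum = a * weights[0] + b * weights[1] + 1 * weights[2];
  return sum > 0 ? 1 : 0;
}

// Training data: inputs and the teacher's known correct answers.
const data = [
  { inputs: [0, 0], target: 0 },
  { inputs: [0, 1], target: 0 },
  { inputs: [1, 0], target: 0 },
  { inputs: [1, 1], target: 1 },
];

// A learning rate of 1 keeps the arithmetic exact; 50 epochs is
// far more than this tiny task needs.
const learningRate = 1;
for (let epoch = 0; epoch < 50; epoch++) {
  for (const { inputs: [a, b], target } of data) {
    // The error is the known answer minus the network's guess.
    const error = target - predict(a, b);
    // Adjust each weight in proportion to its input and the error.
    weights[0] += learningRate * error * a;
    weights[1] += learningRate * error * b;
    weights[2] += learningRate * error * 1;
  }
}

console.log(predict(1, 1)); // 1
console.log(predict(1, 0)); // 0
```

<p>After training, the network answers correctly for all four input pairs: the teacher’s corrections have pushed the weights to a configuration that separates the one “true” case from the other three.</p>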
<p>Reinforcement learning comes in many variants and styles. In this chapter, while I will lay the groundwork of neural networks using supervised learning, my primary focus will be a technique related to reinforcement learning known as <em>neuroevolution</em>. This method builds upon the code from Chapter 9 and “evolves” the weights (and in some cases, the structure itself) of a neural network over generations of “trial and error” learning. It’s especially effective in environments where the learning rules aren’t precisely defined or the task is complex, with numerous potential solutions. And yes, it can indeed be applied to simulated steering vehicles!</p>
<p>A neural network itself is a “connectionist” computational system. The computational systems I’ve been writing in this book are procedural: a program starts at the first line of code, executes it, and goes on to the next, following instructions in a linear fashion. A true neural network doesn’t follow a linear path. Rather, information is processed collectively, in parallel, throughout a network of nodes (the nodes, in this case, being neurons).</p>
<p>Here I’m showing yet another example of a complex system, much like the ones seen throughout this book. Remember how the individual boids in a flocking system, following only three rules (separation, alignment, and cohesion), create complex behaviors? The individual elements of a neural network are equally simple to understand. They read an input, a number, process it, and generate an output, another number. A network of many neurons, however, can exhibit incredibly rich and intelligent behaviors, echoing the complex dynamics seen in a flock of boids.</p>
<p>This ability of a neural network to learn, to make adjustments to its structure over time, is what makes it so useful in the field of artificial intelligence. Here are some standard uses of neural networks in software today.</p>
<ul>
<li><strong><em>Pattern Recognition</em></strong>—As I’ve discussed, this is one of the most common applications, with examples that range from facial recognition and optical character recognition to more complex tasks like gesture recognition.</li>
<li><strong><em>Time Series Prediction and Anomaly Detection</em></strong>—Neural networks are utilized both in forecasting, such as predicting stock market trends or weather patterns, and in recognizing anomalies, which can be applied to areas like cyberattack detection and fraud prevention.</li>
<li><strong><em>Natural Language Processing (or “NLP” for short)</em></strong>—One of the biggest developments in recent years has been the use of neural networks for processing and understanding human language. They’re used in a variety of tasks including machine translation, sentiment analysis, and text summarization, and they’re the underlying technology behind many digital assistants and chatbots.</li>
<li><strong><em>Signal Processing and Soft Sensors</em></strong>—Neural networks play a crucial role in devices like cochlear implants and hearing aids by filtering noise and amplifying essential sounds. They’re also involved in “soft sensor” scenarios, where they process data from multiple sources to give a comprehensive analysis of the environment.</li>
<li><strong><em>Control and Adaptive Decision-Making Systems</em></strong>—These applications range from autonomous systems like self-driving cars and drones to adaptive decision-making used in game playing, pricing models, and recommendation systems on media platforms.</li>
<li><strong><em>Generative Models</em></strong>—The rise of novel neural network architectures has made it possible to generate new content. These models are used for synthesizing images, enhancing image resolution, transferring style between images, and even generating music and video.</li>
</ul>
|
||||
<p>This is by no means a comprehensive list of applications of neural networks. But hopefully it gives you an overall sense of the features and possibilities. Today, leveraging machine learning in creative coding and interactive media is not only feasible, but increasingly common. Two libraries that you may want to consider exploring further for working with neural networks are tensorflow.js and ml5.js. TensorFlow.js<strong> </strong>is an open-source library that lets you define, train, and run machine learning models in JavaScript. It's part of the TensorFlow ecosystem, which is maintained and developed by by Google. ml5.js is a library built on top of tensorflow.js designed specifically for use with p5.js. It’s goal is to be beginner friendly and make machine learning approachable for a braod audience of artists, creative coders, and students.</p>
|
||||
<p>One of the more common things to do with tensorflow.js and ml5.js is to use something known as a “pre-trained model.” A “model” in machine learning is a specific setup of neurons and connections and a “pre-trained” model is one that has already been trained on a dataset for a particular task. It can be used “as is” or as a starting point for additional learning (commonly referred to as “transfer learning”).</p>
|
||||
<p>Examples of popular pretrained models are ones that can classify images, identify body poses, recognize facial landmarks or hand positions, or even analyze the sentiment expressed in a text. Covering the full gamit of possibilities in this rapidly expanding and evolving space probably merits an entire additional book, maybe a series of books. And by the time that book was printed it would probably be out of date.</p>
|
||||
<p>So instead, for me, as I embark on this last hurrah in the nature of code, I’ll stick to just two things. First, I’ll look at how to build the simplest of all neural networks from scratch using only p5.js. The goal is to gain an understanding of how the concepts of neural networks and machine learning are implemented in code. Second, I’ll explore one library, specifically ml5.js, which offers the ability to create more sophisticated neural network models and use them to drive simulated vehicles.</p>
<h2 id="the-perceptron">The Perceptron</h2>
<p>Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, a perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output.</p>
<figure>
<img src="images/10_nn/10_nn_3.png" alt="Figure 10.3: The perceptron ">
<figcaption>Figure 10.3: The perceptron </figcaption>
</figure>
<p>A perceptron follows the “feed-forward” model, meaning inputs are sent into the neuron, are processed, and result in an output. In the diagram above, this means the network (one neuron) reads from left to right: inputs come in, output goes out.</p>
<p>Let’s follow each of these steps in more detail.</p>
<p><span class="highlight">Step 1: Receive inputs.</span></p>
<p>Say I have a perceptron with two inputs—let’s call them <span data-type="equation">x_0</span> and <span data-type="equation">x_1</span>.</p>
<table>
<thead>
<tr>
<th>Input</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><span data-type="equation">x_0</span></td>
<td>12</td>
</tr>
<tr>
<td><span data-type="equation">x_1</span></td>
<td>4</td>
</tr>
</tbody>
</table>
<p><span class="highlight">Step 2: Weight inputs.</span></p>
<p>Each input sent into the neuron must first be weighted, meaning it is multiplied by some value, often a number between -1 and 1. When creating a perceptron, the inputs are typically assigned random weights. Let’s give the example inputs the following weights:</p>
<table>
<thead>
<tr>
<th>Weight</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><span data-type="equation">w_0</span></td>
<td>0.5</td>
</tr>
<tr>
<td><span data-type="equation">w_1</span></td>
<td>-1</td>
</tr>
</tbody>
</table>
<p>The next step is to take each input and multiply it by its weight.</p>
<table>
<thead>
<tr>
<th>Input</th>
<th>Weight</th>
<th>Input <span data-type="equation">\times</span> Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>12</td>
<td>0.5</td>
<td>6</td>
</tr>
<tr>
<td>4</td>
<td>-1</td>
<td>-4</td>
</tr>
</tbody>
</table>
<p><span class="highlight">Step 3: Sum inputs.</span></p>
<p>The weighted inputs are then summed.</p>
<p><span data-type="equation">6 + -4 = 2</span></p>
<p><span class="highlight">Step 4: Generate output.</span></p>
<p>The output of a perceptron is produced by passing the sum through an activation function. Think about a “binary” output, one that is only “off” or “on” like an LED. In this case, the activation function determines whether the perceptron should “fire” or not. If it fires, the light turns on; otherwise, it remains off.</p>
<p>Activation functions can get a little bit hairy. If you start reading about activation functions in artificial intelligence textbooks, you may find yourself reaching for a calculus textbook. However, with your new friend the simple perceptron, there’s an easy option that demonstrates the concept. Let’s make the activation function the sign of the sum. In other words, if the sum is a positive number, the output is 1; if it is negative, the output is -1.</p>
<p><span data-type="equation">\text{sign}(2) = +1</span></p>
<p>Let’s review and condense these steps and translate them into code.</p>
<p><strong><em>The Perceptron Algorithm:</em></strong></p>
<ol>
<li>For every input, multiply that input by its weight.</li>
<li>Sum all of the weighted inputs.</li>
<li>Compute the output of the perceptron based on that sum passed through an activation function (the sign of the sum).</li>
</ol>
<p>I can start writing this algorithm in code using two arrays of values, one for the inputs and one for the weights.</p>
<pre class="codesplit" data-code-language="javascript">let inputs = [12, 4];
let weights = [0.5, -1];</pre>
<p>Step #1 “for every input” implies a loop that multiplies each input by its corresponding weight. To obtain the sum, the results can be added up in that same loop.</p>
<pre class="codesplit" data-code-language="javascript">// Steps 1 and 2: Add up all the weighted inputs.
let sum = 0;
for (let i = 0; i < inputs.length; i++) {
  sum += inputs[i] * weights[i];
}</pre>
<p>With the sum, I can then compute the output.</p>
<pre class="codesplit" data-code-language="javascript">// Step 3: Passing the sum through an activation function
let output = activate(sum);

// The activation function
function activate(sum) {
  //{!5} Return a 1 if positive, -1 if negative.
  if (sum > 0) {
    return 1;
  } else {
    return -1;
  }
}</pre>
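<p>Putting these pieces together, here’s the full computation as one standalone snippet, using the example input and weight values from the tables above (plain JavaScript; no p5.js functions needed):</p>

```javascript
// Inputs and weights from the worked example
let inputs = [12, 4];
let weights = [0.5, -1];

// Steps 1 and 2: add up all the weighted inputs.
let sum = 0;
for (let i = 0; i < inputs.length; i++) {
  sum += inputs[i] * weights[i];
}

// Step 3: pass the sum through the sign activation function.
function activate(sum) {
  return sum > 0 ? 1 : -1;
}

let output = activate(sum);
// sum is 6 + -4 = 2, so the perceptron outputs +1
```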
<h2 id="simple-pattern-recognition-using-a-perceptron">Simple Pattern Recognition Using a Perceptron</h2>
<p>Now that I have explained the computational process of a perceptron, let’s take a look at an example of one in action. As I mentioned earlier, neural networks are commonly used for pattern recognition applications, such as facial recognition. Even simple perceptrons can demonstrate the fundamentals of classification. Consider the following scenario.</p>
<figure class="half-width-right">
<img src="images/10_nn/10_nn_4.png" alt="Figure 10.4">
<figcaption>Figure 10.4</figcaption>
</figure>
<p>Consider a line in two-dimensional space. Points in that space can be classified as living on either one side of the line or the other. While this is a somewhat silly example (since there is clearly no need for a neural network; on which side a point lies can be determined with some simple algebra), it shows how a perceptron can be trained to recognize points on one side versus another.</p>
<p>Let’s say a perceptron has two inputs: the <span data-type="equation">x,y</span> coordinates of a point. When using a sign activation function, the output will be either -1 or 1. The input data are classified according to the sign of the weighted sum of the inputs. In the above diagram, you can see how each point is either below the line (-1) or above (+1).</p>
<p>The perceptron itself can be diagrammed as follows. In machine learning, <span data-type="equation">x</span>’s are typically the notation for inputs and <span data-type="equation">y</span> is typically the notation for an output. To keep with this convention, I’ll label the inputs in the diagram as <span data-type="equation">x_0</span> and <span data-type="equation">x_1</span>. <span data-type="equation">x_0</span> will correspond to the x coordinate and <span data-type="equation">x_1</span> to the y. I’ll name the output simply “<span data-type="equation">\text{output}</span>”.</p>
<figure>
<img src="images/10_nn/10_nn_5.png" alt="Figure 10.5 Two inputs (x_0 and x_1), a weight for each input (\text{weight}_0 and \text{weight}_1) as well as a processing neuron that generates the output.">
<figcaption>Figure 10.5 Two inputs (<span data-type="equation">x_0</span> and <span data-type="equation">x_1</span>), a weight for each input (<span data-type="equation">\text{weight}_0</span> and <span data-type="equation">\text{weight}_1</span>) as well as a processing neuron that generates the output.</figcaption>
</figure>
<p>There is a pretty significant problem in Figure 10.5, however. Let’s consider the point <span data-type="equation">(0,0)</span>. What if I send this point into the perceptron as its input: <span data-type="equation">x_0 = 0</span> and <span data-type="equation">x_1=0</span>? What will the sum of its weighted inputs be? No matter what the weights are, the sum will always be 0! But this can’t be right—after all, the point <span data-type="equation">(0,0)</span> could certainly be above or below various lines in this two-dimensional world.</p>
<p>To avoid this dilemma, the perceptron requires a third input, typically referred to as a <strong><em>bias</em></strong> input. A bias input always has the value of 1 and is also weighted. Here is the perceptron with the addition of the bias:</p>
<figure>
<img src="images/10_nn/10_nn_6.png" alt="Figure 10.6: Diagram of a perceptron with the added “bias” input.">
<figcaption>Figure 10.6: Diagram of a perceptron with the added “bias” input.</figcaption>
</figure>
<p>Let’s go back to the point <span data-type="equation">(0,0)</span>.</p>
<table>
<thead>
<tr>
<th>input value</th>
<th>weight</th>
<th>result</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td><span data-type="equation">w_0</span></td>
<td>0</td>
</tr>
<tr>
<td>0</td>
<td><span data-type="equation">w_1</span></td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td><span data-type="equation">w_\text{bias}</span></td>
<td><span data-type="equation">w_\text{bias}</span></td>
</tr>
</tbody>
</table>
<p>The output is then the sum of the above three results: <span data-type="equation">0 + 0 + w_\text{bias}</span>. Therefore, the bias, by itself, answers the question of where <span data-type="equation">(0,0)</span> is in relation to the line. If the bias's weight is positive, then <span data-type="equation">(0,0)</span> is above the line; if negative, it is below. Its weight <strong><em>biases</em></strong> the perceptron's understanding of the line's position relative to <span data-type="equation">(0,0)</span>!</p>
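<p>A quick sketch of my own (not one of the book’s examples) makes the role of the bias concrete: for the point (0,0), the weighted sum without a bias is always 0 no matter what the weights are, while adding a third bias input of 1 makes the sum equal to the bias weight alone.</p>

```javascript
// Weighted sum of inputs, as in the perceptron's first two steps
function weightedSum(inputs, weights) {
  let sum = 0;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return sum;
}

// Any weights at all (these particular values are arbitrary)
let w0 = 0.42;
let w1 = -0.77;
let wBias = 0.3;

// Without a bias, the point (0, 0) always sums to 0.
let noBias = weightedSum([0, 0], [w0, w1]);

// With a third bias input fixed at 1, the sum is the bias weight.
let withBias = weightedSum([0, 0, 1], [w0, w1, wBias]);
```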
<h2 id="coding-the-perceptron">Coding the Perceptron</h2>
<p>I am now ready to assemble the code for a <code>Perceptron</code> class. The perceptron only needs to track the input weights, which I can store using an array.</p>
<pre class="codesplit" data-code-language="javascript">class Perceptron {
  constructor() {
    this.weights = [];
  }</pre>
<p>The constructor could receive an argument indicating the number of inputs (in this case three: <span data-type="equation">x_0</span>, <span data-type="equation">x_1</span>, and a bias) and size the array accordingly.</p>
<pre class="codesplit" data-code-language="javascript">  // The argument "n" determines the number of inputs (including the bias)
  constructor(n) {
    this.weights = [];
    for (let i = 0; i < n; i++) {
      //{!1} The weights are picked randomly to start.
      this.weights[i] = random(-1, 1);
    }
  }</pre>
<p>A perceptron’s job is to receive inputs and produce an output. These requirements can be packaged together in a <code>feedForward()</code> function. In this example, the perceptron's inputs are an array (which should be the same length as the array of weights), and the output is a number, <span data-type="equation">+1</span> or <span data-type="equation">-1</span>, depending on the sign returned by the activation function.</p>
<pre class="codesplit" data-code-language="javascript">  feedForward(inputs) {
    let sum = 0;
    for (let i = 0; i < this.weights.length; i++) {
      sum += inputs[i] * this.weights[i];
    }
    //{!1} Result is the sign of the sum, -1 or +1.
    // Here the perceptron is making a guess.
    // Is it on one side of the line or the other?
    return this.activate(sum);
  }</pre>
<p>I’ll note that the name of the function, “feed forward,” comes from a commonly used term in neural networks that describes the process of data passing through the network. The name relates to the way the data <em>feeds</em> directly <em>forward</em> through the network, read from left to right in a neural network diagram.</p>
<p>Presumably, I could now create a <code>Perceptron</code> object and ask it to make a guess for any given point.</p>
<figure>
<img src="images/10_nn/10_nn_7.png" alt="Figure 10.7">
<figcaption>Figure 10.7</figcaption>
</figure>
<pre class="codesplit" data-code-language="javascript">// Create the Perceptron.
let perceptron = new Perceptron(3);
// The input is 3 values: x, y, and bias.
let inputs = [50, -12, 1];
// The answer!
let guess = perceptron.feedForward(inputs);</pre>
<p>Did the perceptron get it right? At this point, the perceptron has no better than a 50/50 chance of arriving at the right answer. Remember, when I created it, I gave each weight a random value. A neural network is not a magic tool that can guess things correctly on its own. I need to teach it how to do so!</p>
<p>To train a neural network to answer correctly, I will use the method of <em>supervised learning</em>, which I described in section 10.1. In this method, the network is provided with inputs for which there is a known answer. This enables the network to determine if it has made a correct guess. If it is incorrect, the network can learn from its mistake and adjust its weights. The process is as follows:</p>
<ol>
<li>Provide the perceptron with inputs for which there is a known answer.</li>
<li>Ask the perceptron to guess an answer.</li>
<li>Compute the error. (Did it get the answer right or wrong?)</li>
<li>Adjust all the weights according to the error.</li>
<li>Return to Step 1 and repeat!</li>
</ol>
<p>Steps 1 through 4 can be packaged into a function. Before I can write the entire function, however, I need to examine Steps 3 and 4 in more detail. How do I define the perceptron’s error? And how should I adjust the weights according to this error?</p>
<p>The perceptron’s error can be defined as the difference between the desired answer and its guess.</p>
<div data-type="equation">\text{error} = \text{desired output} - \text{guess output}</div>
<p>Does the above formula look familiar to you? Maybe you are thinking what I’m thinking? What was that formula for a steering force again?</p>
<div data-type="equation">\text{steering} = \text{desired velocity} - \text{current velocity}</div>
<p>This is also a calculation of an error! The current velocity serves as a guess, and the error (the steering force) indicates how to adjust the velocity in the correct direction. In a moment, you will see how adjusting a vehicle's velocity to follow a target is similar to adjusting the weights of a neural network to arrive at the correct answer.</p>
<p>In the case of the perceptron, the output has only two possible values: <span data-type="equation">+1</span> or <span data-type="equation">-1</span>. This means there are only three possible errors.</p>
<p>If the perceptron guesses the correct answer, then the guess matches the desired output and the error is 0. If the correct answer is -1 and it guessed +1, then the error is -2. If the correct answer is +1 and it guessed -1, then the error is +2.</p>
<table>
<thead>
<tr>
<th>Desired</th>
<th>Guess</th>
<th>Error</th>
</tr>
</thead>
<tbody>
<tr>
<td><span data-type="equation">-1</span></td>
<td><span data-type="equation">-1</span></td>
<td><span data-type="equation">0</span></td>
</tr>
<tr>
<td><span data-type="equation">-1</span></td>
<td><span data-type="equation">+1</span></td>
<td><span data-type="equation">-2</span></td>
</tr>
<tr>
<td><span data-type="equation">+1</span></td>
<td><span data-type="equation">-1</span></td>
<td><span data-type="equation">+2</span></td>
</tr>
<tr>
<td><span data-type="equation">+1</span></td>
<td><span data-type="equation">+1</span></td>
<td><span data-type="equation">0</span></td>
</tr>
</tbody>
</table>
<p>The error is the determining factor in how the perceptron’s weights should be adjusted. For any given weight, what I am looking to calculate is the change in weight, often called <span data-type="equation">\Delta\text{weight}</span> (or “delta” weight, delta being the Greek letter <span data-type="equation">\Delta</span>).</p>
<div data-type="equation">\text{new weight} = \text{weight} + \Delta\text{weight}</div>
<p><span data-type="equation">\Delta\text{weight}</span> is calculated as the error multiplied by the input.</p>
<div data-type="equation">\Delta\text{weight} = \text{error} \times \text{input}</div>
<p>Therefore:</p>
<div data-type="equation">\text{new weight} = \text{weight} + \text{error} \times \text{input}</div>
<p>To understand why this works, I will again return to steering. A steering force is essentially an error in velocity. By applying a steering force as an acceleration (or <span data-type="equation">\Delta\text{velocity}</span>), then the velocity is adjusted to move in the correct direction. This is what I want to do with the neural network’s weights. I want to adjust them in the right direction, as defined by the error.</p>
<p>With steering, however, I had an additional variable that controlled the vehicle’s ability to steer: the <em>maximum force</em>. A high maximum force allowed the vehicle to accelerate and turn quickly, while a lower force resulted in a slower velocity adjustment. The neural network will use a similar strategy with a variable called the "learning constant."</p>
<div data-type="equation">\text{new weight} = \text{weight} + (\text{error} \times \text{input}) \times \text{learning constant}</div>
<p>Note that a high learning constant causes the weight to change more drastically. This may help the perceptron arrive at a solution more quickly, but it also increases the risk of overshooting the optimal weights. A small learning constant, however, will adjust the weights slowly and require more training time, but allow the network to make small adjustments that could improve overall accuracy.</p>
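<p>Here’s a rough way to see this tradeoff in code (a sketch of my own, not one of the book’s examples, with a tiny deterministic pseudo-random generator standing in for p5.js’s <code>random()</code>): train on the line-classification task with two different learning constants and compare accuracy after the same number of training steps.</p>

```javascript
// Minimal perceptron trainer: sign activation plus the update rule above.
// Returns classification accuracy on 1,000 fresh points after training.
function trainAndTest(learningConstant, steps) {
  let weights = [0.3, -0.4, 0.1]; // fixed starting weights for repeatability
  const f = (x) => 2 * x + 1;     // the known line

  // A deterministic pseudo-random generator so runs are comparable
  let seed = 1;
  const rand = () => (seed = (seed * 16807) % 2147483647) / 2147483647;
  const point = () => [rand() * 200 - 100, rand() * 200 - 100];

  for (let n = 0; n < steps; n++) {
    let [x, y] = point();
    let desired = y < f(x) ? -1 : 1;
    let inputs = [x, y, 1]; // third input is the bias
    let sum = 0;
    for (let i = 0; i < inputs.length; i++) sum += inputs[i] * weights[i];
    let guess = sum > 0 ? 1 : -1;
    let error = desired - guess;
    for (let i = 0; i < inputs.length; i++) {
      weights[i] += error * inputs[i] * learningConstant;
    }
  }

  // Measure accuracy on a fresh batch of points.
  let correct = 0;
  for (let n = 0; n < 1000; n++) {
    let [x, y] = point();
    let desired = y < f(x) ? -1 : 1;
    let sum = x * weights[0] + y * weights[1] + weights[2];
    if ((sum > 0 ? 1 : -1) === desired) correct++;
  }
  return correct / 1000;
}

console.log("learning constant 0.01:  ", trainAndTest(0.01, 2000));
console.log("learning constant 0.0001:", trainAndTest(0.0001, 2000));
```

With the larger learning constant the classifier settles in well within the 2,000 steps; with the much smaller one, the weights have barely moved from their starting values in the same amount of training.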
<p>Assuming the addition of a <code>this.learningConstant</code> property to the <code>Perceptron</code> class, I can now write a training function for the perceptron following the above steps.</p>
<pre class="codesplit" data-code-language="javascript">  // Step 1: Provide the inputs and known answer.
  // These are passed in as arguments to train().
  train(inputs, desired) {

    // Step 2: Guess according to those inputs.
    let guess = this.feedForward(inputs);

    // Step 3: Compute the error (difference between desired and guess).
    let error = desired - guess;

    //{!3} Step 4: Adjust all the weights according to the error and learning constant.
    for (let i = 0; i < this.weights.length; i++) {
      this.weights[i] += error * inputs[i] * this.learningConstant;
    }
  }</pre>
<p>Here’s the <code>Perceptron</code> class as a whole.</p>
<pre class="codesplit" data-code-language="javascript">class Perceptron {
  constructor(n, learningConstant) {
    //{!2} The Perceptron stores its weights and learning constant.
    this.weights = [];
    this.learningConstant = learningConstant;
    //{!3} Weights start off random.
    for (let i = 0; i < n; i++) {
      this.weights[i] = random(-1, 1);
    }
  }

  //{!7} Return an output based on inputs.
  feedForward(inputs) {
    let sum = 0;
    for (let i = 0; i < this.weights.length; i++) {
      sum += inputs[i] * this.weights[i];
    }
    return this.activate(sum);
  }

  // Output is a +1 or -1.
  activate(sum) {
    if (sum > 0) {
      return 1;
    } else {
      return -1;
    }
  }

  //{!7} Train the network against known data.
  train(inputs, desired) {
    let guess = this.feedForward(inputs);
    let error = desired - guess;
    for (let i = 0; i < this.weights.length; i++) {
      this.weights[i] += error * inputs[i] * this.learningConstant;
    }
  }
}</pre>
<p>To train the perceptron, I need a set of inputs with a known answer. Now the question becomes, how do I pick a point and know whether it is above or below a line? Let’s start with the formula for a line, where <span data-type="equation">y</span> is calculated as a function of <span data-type="equation">x</span>:</p>
<div data-type="equation">y = f(x)</div>
<p>In generic terms, a line can be described as:</p>
<div data-type="equation">y = ax + b</div>
<p>Here’s a specific example:</p>
<div data-type="equation">y = 2x + 1</div>
<p>I can then write a function with this in mind.</p>
<pre class="codesplit" data-code-language="javascript">// A function to calculate y based on x along a line
function f(x) {
  return 2 * x + 1;
}</pre>
<p>So, if I make up a point:</p>
<pre class="codesplit" data-code-language="javascript">let x = random(width);
let y = random(height);</pre>
<p>How do I know if this point is above or below the line? The line function <span data-type="equation">f(x)</span> returns the <span data-type="equation">y</span> value on the line for that <span data-type="equation">x</span> position. Let’s call that <span data-type="equation">y_\text{line}</span>.</p>
<figure>
<img src="images/10_nn/10_nn_8.png" alt="Figure 10.8: If y is less than y_\text{line} then it is above the line. Note this is only true for a p5.js canvas where the y axis points down in the positive direction.">
<figcaption>Figure 10.8: If <span data-type="equation">y</span> is less than <span data-type="equation">y_\text{line}</span> then it is above the line. Note this is only true for a p5.js canvas where the y axis points down in the positive direction.</figcaption>
</figure>
<pre class="codesplit" data-code-language="javascript">// Start with the value of +1
let desired = 1;
if (y < yline) {
  //{!1} The answer is -1 if y is above the line.
  desired = -1;
}</pre>
<p>I can then make an inputs array to go with the <code>desired</code> output.</p>
<pre class="codesplit" data-code-language="javascript">// Don't forget to include the bias!
let trainingInputs = [x, y, 1];</pre>
<p>Assuming that I have a <code>perceptron</code> variable, I can train it by providing the inputs along with the desired answer.</p>
<pre class="codesplit" data-code-language="javascript">perceptron.train(trainingInputs, desired);</pre>
<p>Now, it’s important to remember that this is just a demonstration. Remember the Shakespeare-typing monkeys? I asked the genetic algorithm to solve for “to be or not to be”—an answer I already knew. I did this to make sure the genetic algorithm worked properly. The same reasoning applies to this example. I don’t need a perceptron to tell me whether a point is above or below a line; I can do that with simple math. By using an example that I can easily solve without a perceptron, I can both demonstrate the algorithm of the perceptron and verify that it is working properly.</p>
<p>Let’s look at the perceptron trained with an array of many points.</p>
<div data-type="example">
<h3 id="example-101-the-perceptron">Example 10.1: The Perceptron</h3>
<figure>
<div data-type="embed" data-p5-editor="https://editor.p5js.org/natureofcode/sketches/sMozIaMCW" data-example-path="examples/10_nn/10_1_perceptron_with_normalization"></div>
<figcaption></figcaption>
</figure>
</div>
<pre class="codesplit" data-code-language="javascript">// The Perceptron
let perceptron;
//{!1} 2,000 training points
let training = [];
// A counter to track training points one by one
let count = 0;

//{!3} The formula for a line
function f(x) {
  return 2 * x + 1;
}

function setup() {
  createCanvas(640, 240);

  // Perceptron has 3 inputs (including bias) and learning rate of 0.01
  perceptron = new Perceptron(3, 0.01);

  //{!1} Make 2,000 training points.
  for (let i = 0; i < 2000; i++) {
    let x = random(-width / 2, width / 2);
    let y = random(-height / 2, height / 2);
    //{!2} Is the correct answer 1 or -1?
    let desired = 1;
    if (y < f(x)) {
      desired = -1;
    }
    training[i] = {
      input: [x, y, 1],
      output: desired
    };
  }
}

function draw() {
  background(255);
  translate(width / 2, height / 2);

  perceptron.train(training[count].input, training[count].output);
  //{!1} For animation, we are training one point at a time.
  count = (count + 1) % training.length;

  for (let i = 0; i < count; i++) {
    stroke(0);
    let guess = perceptron.feedForward(training[i].input);
    //{!2} Show the classification—no fill for -1, black for +1.
    if (guess > 0) noFill();
    else fill(0);
    ellipse(training[i].input[0], training[i].input[1], 8, 8);
  }
}</pre>
<div data-type="exercise">
<h3 id="exercise-101">Exercise 10.1</h3>
<p>Instead of using the supervised learning model above, can you train the neural network to find the right weights by using a genetic algorithm?</p>
</div>
<div data-type="exercise">
<h3 id="exercise-102">Exercise 10.2</h3>
<p>Visualize the perceptron itself. Draw the inputs, the processing node, and the output.</p>
</div>
<h2 id="its-a-network-remember">It’s a “Network,” Remember?</h2>
<p>Yes, a perceptron can have multiple inputs, but it is still a lonely neuron. The power of neural networks comes in the networking itself. Perceptrons are, sadly, incredibly limited in their abilities. If you read an AI textbook, it will say that a perceptron can only solve <strong><em>linearly separable</em></strong> problems. What’s a linearly separable problem? Let’s take a look at the first example, which determined whether points were on one side of a line or the other.</p>
<figure>
<img src="images/10_nn/10_nn_9.png" alt="Figure 10.11">
<figcaption>Figure 10.11</figcaption>
</figure>
<p>On the left of Figure 10.11 is an example of classic linearly separable data. Graph all of the possibilities; if you can classify the data with a straight line, then it is linearly separable. On the right, however, is non-linearly separable data. You can’t draw a straight line to separate the black dots from the gray ones.</p>
<p>One of the simplest examples of a non-linearly separable problem is <em>XOR</em>, or “exclusive or.” By now you should be familiar with <em>AND</em>. For <em>A</em> <em>AND</em> <em>B</em> to be true, both <em>A</em> and <em>B</em> must be true. With <em>OR</em>, either <em>A</em> or <em>B</em> can be true for <em>A</em> <em>OR</em> <em>B</em> to evaluate as true. These are both linearly separable problems. Let’s look at the solution space, a “truth table.”</p>
<figure>
<img src="images/10_nn/10_nn_10.png" alt="Figure 10.12">
<figcaption>Figure 10.12</figcaption>
</figure>
<p>See how you can draw a line to separate the true outputs from the false ones?</p>
<p><em>XOR</em> is the equivalent of <em>OR</em> and <em>NOT AND</em>. In other words, <em>A</em> <em>XOR</em> <em>B</em> only evaluates to true if one of them is true. If both are false or both are true, then we get false. Take a look at the following truth table.</p>
<figure>
<img src="images/10_nn/10_nn_11.png" alt="Figure 10.13">
<figcaption>Figure 10.13</figcaption>
</figure>
<p>This is not linearly separable. Try to draw a straight line to separate the true outputs from the false ones—you can’t!</p>
|
||||
<p>So perceptrons can’t even solve something as simple as <em>XOR</em>. But what if we made a network out of two perceptrons? If one perceptron can solve <em>OR</em> and one perceptron can solve <em>NOT AND</em>, then two perceptrons combined can solve <em>XOR</em>.</p>
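<p>Here’s a minimal sketch of that idea with hand-picked weights (illustrative values of my own choosing, not weights the network would learn, and using a step activation that outputs 0 or 1 rather than this chapter’s -1 or 1):</p>

```javascript
// A single perceptron: weighted sum plus a bias, then a step activation.
// Output is 1 if the sum is positive, 0 otherwise.
function perceptron(inputs, weights, bias) {
  let sum = bias;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i];
  }
  return sum > 0 ? 1 : 0;
}

// Hand-picked weights: one perceptron computes OR, another computes
// NOT AND, and a third combines their outputs with AND.
// Together, the little network computes XOR.
function xorNetwork(a, b) {
  const or = perceptron([a, b], [1, 1], -0.5);
  const nand = perceptron([a, b], [-1, -1], 1.5);
  return perceptron([or, nand], [1, 1], -1.5); // AND of the two
}
```

<p>No single perceptron here solves <em>XOR</em>; it’s the layering that does the work.</p>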
|
||||
<figure>
|
||||
<img src="images/10_nn/10_nn_12.png" alt="Figure 10.14">
|
||||
<figcaption>Figure 10.14</figcaption>
|
||||
</figure>
|
||||
<p>The above diagram is known as a <em>multi-layered perceptron</em>, a network of many neurons. Some are input neurons and receive the inputs, some are part of what’s called a “hidden” layer (as they are connected to neither the inputs nor the outputs of the network directly), and then there are the output neurons, from which the results are read.</p>
|
||||
<p>Training these networks is much more complicated. With the simple perceptron, you could easily evaluate how to change the weights according to the error. But here there are so many different connections, each in a different layer of the network. How does one know how much each neuron or connection contributed to the overall error of the network?</p>
|
||||
<p>The solution to optimizing weights of a multi-layered network is known as <strong><em>backpropagation</em></strong>. The output of the network is generated in the same manner as a perceptron. The inputs multiplied by the weights are summed and fed forward through the network. The difference here is that they pass through additional layers of neurons before reaching the output. Training the network (i.e. adjusting the weights) also involves taking the error (desired result - guess). The error, however, must be fed backwards through the network. The final error ultimately adjusts the weights of all the connections.</p>
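<p>To make the idea concrete, here’s a minimal sketch of one backpropagation step for a 2-2-1 sigmoid network. The starting weights and learning rate are arbitrary choices for illustration, not values from the book:</p>

```javascript
// Sigmoid activation and its derivative (written in terms of the output s)
const sigmoid = (x) => 1 / (1 + Math.exp(-x));
const dsigmoid = (s) => s * (1 - s);

// A tiny 2-2-1 network with fixed (arbitrary) starting weights.
// wh[j]: weights from the two inputs into hidden neuron j; bh: hidden biases
// wo[j]: weight from hidden neuron j to the output; bo: output bias
let wh = [[0.5, -0.4], [0.3, 0.8]];
let bh = [0.1, -0.2];
let wo = [0.6, -0.7];
let bo = 0.05;

// Feed the inputs forward through the hidden layer to the output
function forward(inputs) {
  const hidden = wh.map((w, j) =>
    sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + bh[j])
  );
  const output = sigmoid(wo[0] * hidden[0] + wo[1] * hidden[1] + bo);
  return { hidden, output };
}

// One step of backpropagation for a single (inputs, target) pair
function trainStep(inputs, target, lr) {
  const { hidden, output } = forward(inputs);
  // Error at the output (desired - guess), scaled by the sigmoid's derivative
  const dOut = (target - output) * dsigmoid(output);
  // Each hidden neuron's share of the error, fed backwards through wo
  const dHidden = hidden.map((h, j) => dOut * wo[j] * dsigmoid(h));
  // Adjust the output weights, then the hidden weights
  for (let j = 0; j < 2; j++) {
    wo[j] += lr * dOut * hidden[j];
    wh[j][0] += lr * dHidden[j] * inputs[0];
    wh[j][1] += lr * dHidden[j] * inputs[1];
    bh[j] += lr * dHidden[j];
  }
  bo += lr * dOut;
}
```

<p>Repeatedly calling <code>trainStep()</code> on the four <em>XOR</em> patterns shrinks the network’s error, something no single perceptron can manage.</p>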
|
||||
<p>Backpropagation is a bit beyond the scope of this book and involves a fancier activation function (called the sigmoid function) as well as some basic calculus. If you are interested in how backpropagation works, check the book website (and GitHub repository) for an example that solves <em>XOR</em> using a multi-layered feedforward network with backpropagation.</p>
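<p>For reference, here’s the sigmoid function mentioned above, along with its conveniently simple derivative:</p>

```javascript
// The sigmoid squashes any input into the range (0, 1). Unlike the
// perceptron's step function, it has a smooth derivative, which is
// what backpropagation needs in order to assign error to each weight.
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// The derivative, written in terms of the sigmoid's own output
function sigmoidDerivative(x) {
  const s = sigmoid(x);
  return s * (1 - s);
}
```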
|
||||
<p>Instead, I’ll now shift the focus to using neural networks in ml5.js.</p>
|
||||
<h2 id="create-a-train-a-neural-network-with-ml5js">Create and train a neural network with ml5.js</h2>
|
||||
<p>simple example with colors?</p>
|
||||
<p>reference teachable machine, transfer learning and image classification?</p>
|
||||
<h2 id="classification-and-regression">Classification and Regression</h2>
|
||||
<p>Explain regression</p>
|
||||
<h2 id="what-is-neat-neuroevolution-augmented-topologies">What is NEAT (NeuroEvolution of Augmenting Topologies)?</h2>
|
||||
<p>flappy bird scenario (classification) vs. steering force (regression)?</p>
|
||||
<p>features?</p>
|
||||
<h2 id="neuroevolution-steering">NeuroEvolution Steering</h2>
|
||||
<p>obstacle avoidance example</p>
|
||||
<h2 id="other-possibilities">Other possibilities?</h2>
|
||||
<p></p>
|
||||
<div data-type="project">
|
||||
<h3 id="the-ecosystem-project-9">The Ecosystem Project</h3>
|
||||
<p>Step 10 Exercise:</p>
|
||||
<p>Try incorporating the concept of a “brain” into your creatures.</p>
|
||||
<ul>
|
||||
<li>Use reinforcement learning in the creatures’ decision-making process.</li>
|
||||
<li>Create a creature that features a visualization of its brain as part of its design (even if the brain itself is not functional).</li>
|
||||
<li>Can the ecosystem as a whole emulate the brain? Can elements of the environment be neurons and the creatures act as inputs and outputs?</li>
|
||||
</ul>
|
||||
</div>
|
||||
<h3 id="the-end">The end</h3>
|
||||
<p>If you’re still reading, thank you! You’ve reached the end of the book. But for as much material as this book contains, we’ve barely scratched the surface of the world we inhabit and of techniques for simulating it. It’s my intention for this book to live as an ongoing project, and I hope to continue adding new tutorials and examples to the book’s website as well as expand and update the printed material. Your feedback is truly appreciated, so please get in touch via email at <code>daniel@shiffman.net</code> or by contributing to the GitHub repository, in keeping with the open-source spirit of the project. Share your work. Keep in touch. Let’s be two with nature.</p>
|
||||
</section>
|
|
@ -1 +1 @@
|
|||
[{"title":"Introduction","src":"./00_7_introduction.html","slug":"introduction"},{"title":"1. Vectors","src":"./01_vectors.html","slug":"vectors"},{"title":"2. Forces","src":"./02_forces.html","slug":"force"},{"title":"3. Oscillation","src":"./03_oscillation.html","slug":"oscillation"},{"title":"4. Particle Systems","src":"./04_particles.html","slug":"particles"},{"title":"5. Autonomous Agents","src":"./05_steering.html","slug":"autonomous-agents"},{"title":"6. Physics Libraries","src":"./06_libraries.html","slug":"physics-libraries"},{"title":"7. Cellular Automata","src":"./07_ca.html","slug":"cellular-automata"},{"title":"8. Fractals","src":"./08_fractals.html","slug":"fractals"},{"title":"9. Evolutionary Computing","src":"./09_ga.html","slug":"genetic-algorithms"}]
|
||||
[{"title":"Introduction","src":"./00_7_introduction.html","slug":"introduction"},{"title":"1. Vectors","src":"./01_vectors.html","slug":"vectors"},{"title":"2. Forces","src":"./02_forces.html","slug":"force"},{"title":"3. Oscillation","src":"./03_oscillation.html","slug":"oscillation"},{"title":"4. Particle Systems","src":"./04_particles.html","slug":"particles"},{"title":"5. Autonomous Agents","src":"./05_steering.html","slug":"autonomous-agents"},{"title":"6. Physics Libraries","src":"./06_libraries.html","slug":"physics-libraries"},{"title":"7. Cellular Automata","src":"./07_ca.html","slug":"cellular-automata"},{"title":"8. Fractals","src":"./08_fractals.html","slug":"fractals"},{"title":"9. Evolutionary Computing","src":"./09_ga.html","slug":"genetic-algorithms"},{"title":"10. Neural Networks","src":"./10_nn.html","slug":"neural-networks"}]
|
|
@ -48,4 +48,10 @@ class World {
|
|||
}
|
||||
}
|
||||
}
|
||||
|
||||
born(x, y) {
|
||||
let position = createVector(x, y);
|
||||
let dna = new DNA();
|
||||
this.bloops.push(new Bloop(position, dna));
|
||||
}
|
||||
}
|
||||
|
|
|
@ -0,0 +1,12 @@
|
|||
<!doctype html>
|
||||
<html>
|
||||
<head>
|
||||
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.1/p5.min.js"></script>
|
||||
<meta charset="utf-8">
|
||||
<link rel="stylesheet" type="text/css" href="style.css">
|
||||
</head>
|
||||
<body>
|
||||
<script src="perceptron.js"></script>
|
||||
<script src="sketch.js"></script>
|
||||
</body>
|
||||
</html>
|
|
@ -0,0 +1,53 @@
|
|||
// Daniel Shiffman
|
||||
// The Nature of Code
|
||||
// http://natureofcode.com
|
||||
|
||||
// Simple Perceptron Example
|
||||
// See: http://en.wikipedia.org/wiki/Perceptron
|
||||
|
||||
// Perceptron Class
|
||||
|
||||
// Perceptron is created with n weights and learning constant
|
||||
class Perceptron {
|
||||
constructor(n, c) {
|
||||
// Array of weights for inputs
|
||||
this.weights = new Array(n);
|
||||
// Start with random weights
|
||||
for (let i = 0; i < this.weights.length; i++) {
|
||||
this.weights[i] = random(-1, 1);
|
||||
}
|
||||
this.c = c; // learning rate/constant
|
||||
}
|
||||
|
||||
// Function to train the Perceptron
|
||||
// Weights are adjusted based on "desired" answer
|
||||
train(inputs, desired) {
|
||||
// Guess the result
|
||||
let guess = this.feedforward(inputs);
|
||||
// Compute the factor for changing the weight based on the error
|
||||
// Error = desired output - guessed output
|
||||
// Note this can only be 0, -2, or 2
|
||||
// Multiply by learning constant
|
||||
let error = desired - guess;
|
||||
// Adjust each weight by learning constant * error * input
|
||||
for (let i = 0; i < this.weights.length; i++) {
|
||||
this.weights[i] += this.c * error * inputs[i];
|
||||
}
|
||||
}
|
||||
|
||||
// Guess -1 or 1 based on input values
|
||||
feedforward(inputs) {
|
||||
// Sum all values
|
||||
let sum = 0;
|
||||
for (let i = 0; i < this.weights.length; i++) {
|
||||
sum += inputs[i] * this.weights[i];
|
||||
}
|
||||
// Result is sign of the sum, -1 or 1
|
||||
return this.activate(sum);
|
||||
}
|
||||
|
||||
activate(sum) {
|
||||
if (sum > 0) return 1;
|
||||
else return -1;
|
||||
}
|
||||
}
|
|
@ -0,0 +1,97 @@
|
|||
// The Nature of Code
|
||||
// Daniel Shiffman
|
||||
// http://natureofcode.com
|
||||
|
||||
// Simple Perceptron Example
|
||||
// See: http://en.wikipedia.org/wiki/Perceptron
|
||||
|
||||
// Code based on text "Artificial Intelligence", George Luger
|
||||
|
||||
// A list of points we will use to "train" the perceptron
|
||||
let training = [];
|
||||
// A Perceptron object
|
||||
let perceptron;
|
||||
|
||||
// We will train the perceptron with one "Point" object at a time
|
||||
let count = 0;
|
||||
|
||||
// Coordinate space
|
||||
let xmin = -1;
|
||||
let ymin = -1;
|
||||
let xmax = 1;
|
||||
let ymax = 1;
|
||||
|
||||
// The function to describe a line
|
||||
function f(x) {
|
||||
let y = 0.3 * x + 0.4;
|
||||
return y;
|
||||
}
|
||||
|
||||
function setup() {
|
||||
createCanvas(640, 240);
|
||||
|
||||
// The perceptron has 3 inputs -- x, y, and bias
|
||||
// Second value is "Learning Constant"
|
||||
// The learning constant is low just because it's fun to watch;
// this is not necessarily an optimal value.
perceptron = new Perceptron(3, 0.01);
|
||||
|
||||
// Create a random set of training points and calculate the "known" answer
|
||||
for (let i = 0; i < 1000; i++) {
|
||||
let x = random(xmin, xmax);
|
||||
let y = random(ymin, ymax);
|
||||
let answer = 1;
|
||||
if (y < f(x)) answer = -1;
|
||||
training[i] = {
|
||||
input: [x, y, 1],
|
||||
output: answer
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
function draw() {
|
||||
background(255);
|
||||
|
||||
// Draw the line
|
||||
strokeWeight(1);
|
||||
stroke(0);
|
||||
let x1 = map(xmin, xmin, xmax, 0, width);
|
||||
let y1 = map(f(xmin), ymin, ymax, height, 0);
|
||||
let x2 = map(xmax, xmin, xmax, 0, width);
|
||||
let y2 = map(f(xmax), ymin, ymax, height, 0);
|
||||
line(x1, y1, x2, y2);
|
||||
|
||||
// Draw the line based on the current weights
|
||||
// Formula is weights[0]*x + weights[1]*y + weights[2] = 0
|
||||
stroke(0);
|
||||
strokeWeight(2);
|
||||
let weights = perceptron.weights;
|
||||
x1 = xmin;
|
||||
y1 = (-weights[2] - weights[0] * x1) / weights[1];
|
||||
x2 = xmax;
|
||||
y2 = (-weights[2] - weights[0] * x2) / weights[1];
|
||||
|
||||
x1 = map(x1, xmin, xmax, 0, width);
|
||||
y1 = map(y1, ymin, ymax, height, 0);
|
||||
x2 = map(x2, xmin, xmax, 0, width);
|
||||
y2 = map(y2, ymin, ymax, height, 0);
|
||||
line(x1, y1, x2, y2);
|
||||
|
||||
|
||||
// Train the Perceptron with one "training" point at a time
|
||||
perceptron.train(training[count].input, training[count].output);
|
||||
count = (count + 1) % training.length;
|
||||
|
||||
// Draw all the points based on what the Perceptron would "guess"
|
||||
// Does not use the "known" correct answer
|
||||
for (let i = 0; i < count; i++) {
|
||||
stroke(0);
|
||||
strokeWeight(1);
|
||||
fill(127);
|
||||
let guess = perceptron.feedforward(training[i].input);
|
||||
if (guess > 0) noFill();
|
||||
|
||||
let x = map(training[i].input[0], xmin, xmax, 0, width);
|
||||
let y = map(training[i].input[1], ymin, ymax, height, 0);
|
||||
circle(x, y, 8);
|
||||
}
|
||||
}
|
|
@ -0,0 +1,7 @@
|
|||
html, body {
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
canvas {
|
||||
display: block;
|
||||
}
|
BIN
content/images/10_nn/10_nn_1.png
Normal file
After Width: | Height: | Size: 125 KiB |
BIN
content/images/10_nn/10_nn_10.png
Normal file
After Width: | Height: | Size: 68 KiB |
BIN
content/images/10_nn/10_nn_11.png
Normal file
After Width: | Height: | Size: 32 KiB |
BIN
content/images/10_nn/10_nn_12.png
Normal file
After Width: | Height: | Size: 65 KiB |
BIN
content/images/10_nn/10_nn_2.png
Normal file
After Width: | Height: | Size: 101 KiB |
BIN
content/images/10_nn/10_nn_3.png
Normal file
After Width: | Height: | Size: 29 KiB |
BIN
content/images/10_nn/10_nn_4.png
Normal file
After Width: | Height: | Size: 24 KiB |
BIN
content/images/10_nn/10_nn_5.png
Normal file
After Width: | Height: | Size: 31 KiB |
BIN
content/images/10_nn/10_nn_6.png
Normal file
After Width: | Height: | Size: 43 KiB |
BIN
content/images/10_nn/10_nn_7.png
Normal file
After Width: | Height: | Size: 100 KiB |
BIN
content/images/10_nn/10_nn_8.png
Normal file
After Width: | Height: | Size: 60 KiB |
BIN
content/images/10_nn/10_nn_9.png
Normal file
After Width: | Height: | Size: 105 KiB |