Formatter (#51)

Enforce consistent formatting using `dprint`
Luca Palmieri 2024-05-24 17:00:03 +02:00 committed by GitHub
parent 537118574b
commit 99591a715e
157 changed files with 1057 additions and 1044 deletions


@@ -9,6 +9,12 @@ on:
      - main
jobs:
  formatter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: dprint/check@v2.2
  check-links:
    runs-on: ubuntu-latest
    steps:


@@ -1,15 +1,15 @@
# Learn Rust, one exercise at a time

You've heard about Rust, but you never had the chance to try it out?\
This course is for you!

You'll learn Rust by solving 100 exercises.\
You'll go from knowing nothing about Rust to being able to start
writing your own programs, one exercise at a time.

> [!NOTE]
> This course has been written by [Mainmatter](https://mainmatter.com/rust-consulting/).\
> It's one of the trainings in [our portfolio of Rust workshops](https://mainmatter.com/services/workshops/rust/).\
> Check out our [landing page](https://mainmatter.com/rust-consulting/) if you're looking for Rust consulting or
> training!

@@ -20,7 +20,7 @@ to get started with the course.
## Requirements

- **Rust** (follow instructions [here](https://www.rust-lang.org/tools/install)).\
  If `rustup` is already installed on your system, run `rustup update` (or another appropriate command depending on how
  you installed Rust on your system)
  to make sure you're running on the latest stable version.


@@ -2,28 +2,28 @@
Welcome to **"100 Exercises To Learn Rust"**!

This course will teach you Rust's core concepts, one exercise at a time.\
You'll learn about Rust's syntax, its type system, its standard library, and its ecosystem.

We don't assume any prior knowledge of Rust, but we assume you know at least
another programming language.\
We also don't assume any prior knowledge of systems programming or memory management. Those
topics will be covered in the course.

In other words, we'll be starting from scratch!\
You'll build up your Rust knowledge in small, manageable steps.
By the end of the course, you will have solved ~100 exercises, enough to
feel comfortable working on small to medium-sized Rust projects.

## Methodology

This course is based on the "learn by doing" principle.\
It has been designed to be interactive and hands-on.

[Mainmatter](https://mainmatter.com/rust-consulting/) developed this course
to be delivered in a classroom setting, over 4 days: each attendee advances
through the lessons at their own pace, with an experienced instructor providing
guidance, answering questions and diving deeper into the topics as needed.\
If you're interested in attending one of our training sessions, or if you'd like to
bring this course to your company, please [get in touch](https://mainmatter.com/contact/).

@@ -35,11 +35,11 @@ also find solutions to all exercises in the
## Structure

On the left side of the screen, you can see that the course is divided into sections.
Each section introduces a new concept or feature of the Rust language.\
To verify your understanding, each section is paired with an exercise that you need to solve.

You can find the exercises in the
[companion GitHub repository](https://github.com/mainmatter/100-exercises-to-learn-rust).\
Before starting the course, make sure to clone the repository to your local machine:

```bash
@@ -80,7 +80,7 @@ Run the `wr` command to start the course:
wr
```

`wr` will verify the solution to the current exercise.\
Don't move on to the next section until you've solved the exercise for the current one.

> We recommend committing your solutions to Git as you progress through the course,
@@ -95,10 +95,10 @@ Enjoy the course!
## Author

This course was written by [Luca Palmieri](https://www.lpalmieri.com/), Principal Engineering
Consultant at [Mainmatter](https://mainmatter.com/rust-consulting/).\
Luca has been working with Rust since 2018, initially at TrueLayer and then at AWS.\
Luca is the author of ["Zero to Production in Rust"](https://zero2prod.com),
the go-to resource for learning how to build backend applications in Rust.\
He is also the author and maintainer of a variety of open-source Rust projects, including
[`cargo-chef`](https://github.com/LukeMathWalker/cargo-chef),
[Pavex](https://pavex.dev) and [`wiremock`](https://github.com/LukeMathWalker/wiremock-rs).


@@ -2,16 +2,16 @@
<div class="warning">

Don't jump ahead!\
Complete the exercise for the previous section before you start this one.\
It's located in `exercises/01_intro/00_welcome`, in the [course GitHub's repository](https://github.com/mainmatter/100-exercises-to-learn-rust).\
Use [`wr`](00_welcome.md#wr-the-workshop-runner) to start the course and verify your solutions.

</div>

The previous task doesn't even qualify as an exercise, but it already exposed you to quite a bit of Rust **syntax**.
We won't cover every single detail of Rust's syntax used in the previous exercise.
Instead, we'll cover _just enough_ to keep going without getting stuck in the details.\
One step at a time!

## Comments

@@ -88,7 +88,7 @@ It is considered idiomatic to omit the `return` keyword when possible.
### Input parameters

Input parameters are declared inside the parentheses `()` that follow the function's name.\
Each parameter is declared with its name, followed by a colon `:`, followed by its type.
For example, the `greet` function below takes a `name` parameter of type `&str` (a "string slice"):
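The `greet` function itself falls outside this hunk; here's a minimal sketch of what such a function could look like (the body is an assumption, not the book's exact code):

```rust
// A hypothetical `greet`: takes a string slice and returns an owned `String`.
fn greet(name: &str) -> String {
    format!("Hello, {name}!")
}

fn main() {
    println!("{}", greet("Rust"));
}
```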
@@ -105,10 +105,10 @@ If there are multiple input parameters, they must be separated with commas.
### Type annotations

Since we've mentioned "types" a few times, let's state it clearly: Rust is a **statically typed language**.\
Every single value in Rust has a type and that type must be known to the compiler at compile-time.

Types are a form of **static analysis**.\
You can think of a type as a **tag** that the compiler attaches to every value in your program. Depending on the
tag, the compiler can enforce different rules—e.g. you can't add a string to a number, but you can add two numbers
together.


@@ -1,6 +1,6 @@
# A Basic Calculator

In this chapter we'll learn how to use Rust as a **calculator**.\
It might not sound like much, but it'll give us a chance to cover a lot of Rust's basics, such as:

- How to define and call functions


@@ -1,6 +1,6 @@
# Types, part 1

In the ["Syntax" section](../01_intro/01_syntax.md) `compute`'s input parameters were of type `u32`.\
Let's unpack what that _means_.

## Primitive types

@@ -18,25 +18,25 @@ An integer is a number that can be written without a fractional component. E.g.
### Signed vs. unsigned

An integer can be **signed** or **unsigned**.\
An unsigned integer can only represent non-negative numbers (i.e. `0` or greater).
A signed integer can represent both positive and negative numbers (e.g. `-1`, `12`, etc.).

The `u` in `u32` stands for **unsigned**.\
The equivalent type for signed integers is `i32`, where the `i` stands for integer (i.e. any integer, positive or
negative).

### Bit width

The `32` in `u32` refers to the **number of bits[^bit]** used to represent the number in memory.\
The more bits, the larger the range of numbers that can be represented.

Rust supports multiple bit widths for integers: `8`, `16`, `32`, `64`, `128`.

With 32 bits, `u32` can represent numbers from `0` to `2^32 - 1` (a.k.a. [`u32::MAX`](https://doc.rust-lang.org/std/primitive.u32.html#associatedconstant.MAX)).\
With the same number of bits, a signed integer (`i32`) can represent numbers from `-2^31` to `2^31 - 1`
(i.e. from [`i32::MIN`](https://doc.rust-lang.org/std/primitive.i32.html#associatedconstant.MIN)
to [`i32::MAX`](https://doc.rust-lang.org/std/primitive.i32.html#associatedconstant.MAX)).\
The maximum value for `i32` is smaller than the maximum value for `u32` because one bit is used to represent
the sign of the number. Check out the [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement)
representation for more details on how signed integers are represented in memory.
@@ -46,7 +46,7 @@ representation for more details on how signed integers are represented in memory
Combining the two variables (signed/unsigned and bit width), we get the following integer types:

| Bit width | Signed | Unsigned |
| --------- | ------ | -------- |
| 8-bit     | `i8`   | `u8`     |
| 16-bit    | `i16`  | `u16`    |
| 32-bit    | `i32`  | `u32`    |
@@ -55,21 +55,21 @@ Combining the two variables (signed/unsigned and bit width), we get the following
## Literals

A **literal** is a notation for representing a fixed value in source code.\
For example, `42` is a Rust literal for the number forty-two.

### Type annotations for literals

But all values in Rust have a type, so... what's the type of `42`?

The Rust compiler will try to infer the type of a literal based on how it's used.\
If you don't provide any context, the compiler will default to `i32` for integer literals.\
If you want to use a different type, you can add the desired integer type as a suffix—e.g. `2u64` is a 2 that's
explicitly typed as a `u64`.

### Underscores in literals

You can use underscores `_` to improve the readability of large numbers.\
For example, `1_000_000` is the same as `1000000`.
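To see literal suffixes and underscores in action, here's a minimal sketch (the asserted values follow from the ranges discussed above):

```rust
fn main() {
    // Integer literals default to `i32` unless the context says otherwise.
    let default_int = 42; // inferred as `i32`
    // A suffix pins the literal to a specific type.
    let big = 2u64; // explicitly a `u64`
    // Underscores are purely cosmetic: both literals denote the same number.
    assert_eq!(1_000_000, 1000000);
    // The ranges from the "Bit width" section:
    assert_eq!(u32::MAX, 4_294_967_295); // 2^32 - 1
    assert_eq!(i32::MAX, 2_147_483_647); // 2^31 - 1
    println!("{default_int} {big}");
}
```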
## Arithmetic operators

@@ -82,7 +82,7 @@ Rust supports the following arithmetic operators[^traits] for integers:
- `/` for division
- `%` for remainder

Precedence and associativity rules for these operators are the same as in mathematics.\
You can use parentheses to override the default precedence. E.g. `2 * (3 + 4)`.

> ⚠️ **Warning**
@@ -92,7 +92,7 @@ You can use parentheses to override the default precedence. E.g. `2 * (3 + 4)`.
## No automatic type coercion

As we discussed in the previous exercise, Rust is a statically typed language.\
In particular, Rust is quite strict about type coercion. It won't automatically convert a value from one type to
another[^coercion],
even if the conversion is lossless. You have to do it explicitly.
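To make the "do it explicitly" point concrete, here's a minimal sketch; `u64::from` is one of the standard library's infallible conversions (conversions are covered in more depth later in the course):

```rust
fn main() {
    let small: u32 = 42;
    // This would not compile: `let big: u64 = small;`
    // The conversion is lossless, but it still has to be spelled out:
    let big: u64 = u64::from(small);
    assert_eq!(big, 42);
}
```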


@@ -1,6 +1,6 @@
# Variables

In Rust, you can use the `let` keyword to declare **variables**.\
For example:

```rust
@@ -35,20 +35,20 @@ let x = 42;
let y: u32 = x;
```

In the example above, we didn't specify the type of `x`.\
`x` is later assigned to `y`, which is explicitly typed as `u32`. Since Rust doesn't perform automatic type coercion,
the compiler infers the type of `x` to be `u32`—the same as `y` and the only type that will allow the program to compile
without errors.

### Inference limitations

The compiler sometimes needs a little help to infer the correct variable type based on its usage.\
In those cases you'll get a compilation error and the compiler will ask you to provide an explicit type hint to
disambiguate the situation.
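A common case where the compiler needs a hint is parsing a string into a number; this is a minimal sketch (`parse` and `unwrap` are standard library methods covered later in the course):

```rust
fn main() {
    // Without the `: u32` annotation, the compiler can't tell which numeric
    // type we want to parse into and reports an inference error.
    let n: u32 = "42".parse().unwrap();
    assert_eq!(n, 42);
}
```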
## Function arguments are variables

Not all heroes wear capes, not all variables are declared with `let`.\
Function arguments are variables too!

```rust
@@ -57,22 +57,22 @@ fn add_one(x: u32) -> u32 {
}
```

In the example above, `x` is a variable of type `u32`.\
The only difference between `x` and a variable declared with `let` is that function arguments **must** have their type
explicitly declared. The compiler won't infer it for you.\
This constraint allows the Rust compiler (and us humans!) to understand the function's signature without having to look
at its implementation. That's a big boost for compilation speed[^speed]!

## Initialization

You don't have to initialize a variable when you declare it.\
For example

```rust
let x: u32;
```

is a valid variable declaration.\
However, you must initialize the variable before using it. The compiler will throw an error if you don't:

```rust


@@ -1,6 +1,6 @@
# Control flow, part 1

All our programs so far have been pretty straightforward.\
A sequence of instructions is executed from top to bottom, and that's it.

It's time to introduce some **branching**.

@@ -23,7 +23,7 @@ This program will print `number is smaller than 5` because the condition `number
### `else` clauses

Like most programming languages, Rust supports an optional `else` branch to execute a block of code when the condition in an
`if` expression is false.\
For example:

```rust
@@ -38,7 +38,7 @@ if number < 5 {
## Booleans

The condition in an `if` expression must be of type `bool`, a **boolean**.\
Booleans, just like integers, are a primitive type in Rust.

A boolean can have one of two values: `true` or `false`.

@@ -67,12 +67,12 @@ error[E0308]: mismatched types
```

This follows from Rust's philosophy around type coercion: there's no automatic conversion from non-boolean types to booleans.
Rust doesn't have the concept of **truthy** or **falsy** values, like JavaScript or Python.\
You have to be explicit about the condition you want to check.

### Comparison operators

It's quite common to use comparison operators to build conditions for `if` expressions.\
Here are the comparison operators available in Rust when working with integers:

- `==`: equal to

@@ -84,7 +84,7 @@ Here are the comparison operators available in Rust when working with integers:
## `if/else` is an expression

In Rust, `if` expressions are **expressions**, not statements: they return a value.\
That value can be assigned to a variable or used in other expressions. For example:

```rust
@@ -97,10 +97,9 @@ let message = if number < 5 {
```

In the example above, each branch of the `if` evaluates to a string literal,
which is then assigned to the `message` variable.\
The only requirement is that both `if` branches return the same type.
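Since the book's example is only partially visible in this hunk, here's a minimal, self-contained sketch of `if/else` used as an expression (the variable names are illustrative):

```rust
fn main() {
    let number = 3;
    // Both branches evaluate to a `&str`, so `message` is a `&str`.
    let message = if number < 5 {
        "smaller than 5"
    } else {
        "greater than or equal to 5"
    };
    println!("{message}");
}
```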
## References

- The exercise for this section is located in `exercises/02_basic_calculator/03_if_else`


@@ -13,7 +13,7 @@ fn speed(start: u32, end: u32, time_elapsed: u32) -> u32 {
If you have a keen eye, you might have spotted one issue[^one]: what happens if `time_elapsed` is zero?

You can try it
out [on the Rust playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=36e5ddbe3b3f741dfa9f74c956622bac)!\
The program will exit with the following error message:

```text
@@ -21,7 +21,7 @@ thread 'main' panicked at src/main.rs:3:5:
attempt to divide by zero
```

This is known as a **panic**.\
A panic is Rust's way to signal that something went so wrong that
the program can't continue executing: it's an **unrecoverable error**[^catching]. Division by zero classifies as such an
error.
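One way to make the failure explicit is to check the divisor yourself and panic with a clearer message. This is a hedged sketch: the body of `speed` isn't shown in the hunk above, so the formula here is an assumption, not the exercise's actual code.

```rust
// A hypothetical variant of `speed` that refuses to divide by zero.
fn speed(start: u32, end: u32, time_elapsed: u32) -> u32 {
    if time_elapsed == 0 {
        // Still a panic, but with a message that explains the actual problem.
        panic!("time_elapsed must be greater than zero");
    }
    (end - start) / time_elapsed
}

fn main() {
    assert_eq!(speed(0, 10, 2), 5);
}
```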


@@ -1,6 +1,6 @@
# Loops, part 1: `while`

Your implementation of `factorial` has been forced to use recursion.\
This may feel natural to you, especially if you're coming from a functional programming background.
Or it may feel strange, if you're used to more imperative languages like C or Python.

@@ -8,7 +8,7 @@ Let's see how you can implement the same functionality using a **loop** instead.
## The `while` loop

A `while` loop is a way to execute a block of code as long as a **condition** is true.\
Here's the general syntax:

```rust
@@ -62,7 +62,7 @@ error[E0384]: cannot assign twice to immutable variable `i`
  | ^^^^^^ cannot assign twice to immutable variable
```

This is because variables in Rust are **immutable** by default.\
You can't change their value once it has been assigned.

If you want to allow modifications, you have to declare the variable as **mutable** using the `mut` keyword:
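Putting `while` and `mut` together, here's a minimal sketch of an iterative `factorial` (the `u32` signature is an assumption based on the exercise description, not the exercise's exact code):

```rust
// A hypothetical iterative `factorial`.
fn factorial(n: u32) -> u32 {
    let mut result = 1;
    let mut i = n;
    // Keep multiplying while `i` is greater than 1.
    while i > 1 {
        result *= i;
        i -= 1;
    }
    result
}

fn main() {
    assert_eq!(factorial(5), 120);
}
```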


@@ -1,6 +1,6 @@
# Loops, part 2: `for`

Having to manually increment a counter variable is somewhat tedious. The pattern is also extremely common!\
To make this easier, Rust provides a more concise way to iterate over a range of values: the `for` loop.

## The `for` loop

@@ -63,6 +63,6 @@ for i in 1..(end + 1) {
- [`for` loop documentation](https://doc.rust-lang.org/std/keyword.for.html)

[^iterator]: Later in the course we'll give a precise definition of what counts as an "iterator".
For now, think of it as a sequence of values that you can loop over.

[^weird-ranges]: You can use ranges with other types too (e.g. characters and IP addresses),
but integers are definitely the most common case in day-to-day Rust programming.
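For reference, here's a minimal sketch of the counter pattern rewritten with a `for` loop over a range (the `1..=5` syntax denotes an inclusive range):

```rust
fn main() {
    let mut sum = 0;
    // `1..=5` is an inclusive range: 1, 2, 3, 4, 5.
    for i in 1..=5 {
        sum += i;
    }
    assert_eq!(sum, 15);
}
```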


@@ -1,18 +1,18 @@
# Overflow

The factorial of a number grows quite fast.\
For example, the factorial of 20 is 2,432,902,008,176,640,000. That's already bigger than the maximum value for a
32-bit integer, 2,147,483,647.

When the result of an arithmetic operation is bigger than the maximum value for a given integer type,
we are talking about **an integer overflow**.

Integer overflows are an issue because they violate the contract for arithmetic operations.\
The result of an arithmetic operation between two integers of a given type should be another integer of the same type.
But the _mathematically correct result_ doesn't fit into that integer type!

> If the result is smaller than the minimum value for a given integer type, we refer to the event as **an integer
> underflow**.\
> For brevity, we'll only talk about integer overflows for the rest of this section, but keep in mind that
> everything we say applies to integer underflows as well.
>
@@ -32,7 +32,7 @@ is not Rust's solution to the integer overflow problem.
## Alternatives

Since we ruled out automatic promotion, what can we do when an integer overflow occurs?\
It boils down to two different approaches:

- Reject the operation

@@ -40,13 +40,13 @@ It boils down to two different approaches:
### Reject the operation

This is the most conservative approach: we stop the program when an integer overflow occurs.\
That's done via a panic, the mechanism we've already seen in the ["Panics" section](04_panics.md).

### Come up with a "sensible" result

When the result of an arithmetic operation is bigger than the maximum value for a given integer type, you can
choose to **wrap around**.\
If you think of all the possible values for a given integer type as a circle, wrapping around means that when you
reach the maximum value, you start again from the minimum value.

@@ -69,14 +69,14 @@ You may be wondering—what is a profile setting? Let's get into that!
A [**profile**](https://doc.rust-lang.org/cargo/reference/profiles.html) is a set of configuration options that can be
used to customize the way Rust code is compiled.

Cargo provides two built-in profiles: `dev` and `release`.\
The `dev` profile is used every time you run `cargo build`, `cargo run` or `cargo test`. It's aimed at local
development,
therefore it sacrifices runtime performance in favor of faster compilation times and a better debugging experience.\
The `release` profile, instead, is optimized for runtime performance but incurs longer compilation times. You need
to explicitly request it via the `--release` flag—e.g. `cargo build --release` or `cargo run --release`.

> "Have you built your project in release mode?" is almost a meme in the Rust community.\
> It refers to developers who are not familiar with Rust and complain about its performance on
> social media (e.g. Reddit, Twitter, etc.) before realizing they haven't built their project in
> release mode.

@@ -90,12 +90,12 @@ By default, `overflow-checks` is set to:
- `true` for the `dev` profile
- `false` for the `release` profile

This is in line with the goals of the two profiles.\
`dev` is aimed at local development, so it panics in order to highlight potential issues as early as possible.\
`release`, instead, is tuned for runtime performance: checking for overflows would slow down the program, so it
prefers to wrap around.

At the same time, having different behaviours for the two profiles can lead to subtle bugs.\
Our recommendation is to enable `overflow-checks` for both profiles: it's better to crash than to silently produce
incorrect results. The runtime performance hit is negligible in most cases; if you're working on a performance-critical
application, you can run benchmarks to decide if it's something you can afford.

@@ -107,4 +107,4 @@ application, you can run benchmarks to decide if it's something you can afford.
## Further reading

- Check out ["Myths and legends about integer overflow in Rust"](https://huonw.github.io/blog/2016/04/myths-and-legends-about-integer-overflow-in-rust/)
  for an in-depth discussion about integer overflow in Rust.


@@ -1,12 +1,12 @@
# Case-by-case behavior

`overflow-checks` is a blunt tool: it's a global setting that affects the whole program.\
It often happens that you want to handle integer overflows differently depending on the context: sometimes
wrapping is the right choice, other times panicking is preferable.

## `wrapping_` methods

You can opt into wrapping arithmetic on a per-operation basis by using the `wrapping_` methods[^method].\
For example, you can use `wrapping_add` to add two integers with wrapping:

```rust
@@ -18,7 +18,7 @@ assert_eq!(sum, 0);
## `saturating_` methods

Alternatively, you can opt into **saturating arithmetic** by using the `saturating_` methods.\
Instead of wrapping around, saturating arithmetic will return the maximum or minimum value for the integer type.
For example:

@@ -29,7 +29,7 @@ let sum = x.saturating_add(y);
assert_eq!(sum, 255);
```

Since `255 + 1` is `256`, which is bigger than `u8::MAX`, the result is `u8::MAX` (255).\
The opposite happens for underflows: `0 - 1` is `-1`, which is smaller than `u8::MIN`, so the result is `u8::MIN` (0).
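Here's a minimal sketch contrasting the two behaviours on `u8` (the variable names are illustrative):

```rust
fn main() {
    let x: u8 = 255;
    // Wrapping: 255 + 1 goes past `u8::MAX` and comes back around to 0.
    assert_eq!(x.wrapping_add(1), 0);
    // Saturating: 255 + 1 is clamped to `u8::MAX`.
    assert_eq!(x.saturating_add(1), 255);
    // Saturating on the other end: 0 - 1 is clamped to `u8::MIN`.
    assert_eq!(0u8.saturating_sub(1), 0);
}
```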
You can't get saturating arithmetic via the `overflow-checks` profile setting—you have to explicitly opt into it
@@ -40,4 +40,4 @@ when performing the arithmetic operation.
- The exercise for this section is located in `exercises/02_basic_calculator/09_saturating`

[^method]: You can think of methods as functions that are "attached" to a specific type.
We'll cover methods (and how to define them) in the next chapter.


@@ -1,12 +1,12 @@
# Conversions, pt. 1

We've repeated over and over again that Rust won't perform
implicit type conversions for integers.\
How do you perform _explicit_ conversions then?

## `as`

You can use the `as` operator to convert between integer types.\
`as` conversions are **infallible**.

For example:

@@ -62,7 +62,7 @@ memory representation:
Last 8 bits
```

Hence `256 as u8` is equal to `0`. That's... not ideal, in most scenarios.\
In fact, the Rust compiler will actively try to stop you if it sees you trying
to cast a literal value which will result in a truncation:

@@ -79,17 +79,17 @@ error: literal out of range for `i8`
### Recommendation

As a rule of thumb, be quite careful with `as` casting.\
Use it _exclusively_ for going from a smaller type to a larger type.
To convert from a larger to a smaller integer type, rely on the
[_fallible_ conversion machinery](../05_ticket_v2/13_try_from.md) that we'll
explore later in the course.
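As a quick illustration of the "smaller to larger" recommendation, here's a minimal sketch (the values are illustrative):

```rust
fn main() {
    let x: u8 = 42;
    // Widening with `as` can never lose information: every `u8` value fits in a `u64`.
    let y = x as u64;
    assert_eq!(y, 42);

    // Narrowing with `as` silently truncates to the last 8 bits—best avoided.
    let big: u16 = 256;
    assert_eq!(big as u8, 0);
}
```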
### Limitations

Surprising behaviour is not the only downside of `as` casting.
It is also fairly limited: you can only rely on `as` casting
for primitive types and a few other special cases.\
When working with composite types, you'll have to rely on
different conversion mechanisms ([fallible](../05_ticket_v2/13_try_from.md)
and [infallible](../04_traits/09_from.md)), which we'll explore later on.


@@ -1,14 +1,14 @@
# Modelling A Ticket

The first chapter should have given you a good grasp over some of Rust's primitive types, operators and
basic control flow constructs.\
In this chapter we'll go one step further and cover what makes Rust truly unique: **ownership**.\
Ownership is what enables Rust to be both memory-safe and performant, with no garbage collector.

As our running example, we'll use a (JIRA-like) ticket, the kind you'd use to track bugs, features, or tasks in
a software project.\
We'll take a stab at modeling it in Rust. It'll be the first iteration—it won't be perfect nor very idiomatic
by the end of the chapter. It'll be enough of a challenge though!\
To move forward you'll have to pick up several new Rust concepts, such as:

- `struct`s, one of Rust's ways to define custom types


@@ -28,7 +28,7 @@ A struct is quite similar to what you would call a class or an object in other p
## Defining fields

The new type is built by combining other types as **fields**.\
Each field must have a name and a type, separated by a colon, `:`. If there are multiple fields, they are separated by a comma, `,`.
Fields don't have to be of the same type, as you can see in the `Configuration` struct below:

@@ -64,7 +64,7 @@ let x = ticket.description;
## Methods

We can attach behaviour to our structs by defining **methods**.\
Using the `Ticket` struct as an example:

```rust
@@ -140,4 +140,3 @@ but it's definitely more verbose. Prefer the method call syntax when possible.
## References

- The exercise for this section is located in `exercises/03_ticket_v1/01_struct`
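Since the struct and method definitions referenced above fall outside these hunks, here's a small self-contained sketch of the same ideas (the field and method names are illustrative, not the book's exact `Ticket` definition):

```rust
// An illustrative struct: named fields, each with its own type.
struct Ticket {
    title: String,
    description: String,
}

// Methods are attached to the struct inside an `impl` block.
impl Ticket {
    fn summary(&self) -> String {
        format!("{}: {}", self.title, self.description)
    }
}

fn main() {
    let ticket = Ticket {
        title: "Fix the login page".to_string(),
        description: "It returns a 500 on submit".to_string(),
    };
    println!("{}", ticket.summary());
}
```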


@@ -12,7 +12,7 @@ struct Ticket {
We are using "raw" types for the fields of our `Ticket` struct.
This means that users can create a ticket with an empty title, a suuuuuuuper long description or
a nonsensical status (e.g. "Funny").\
We can do better than that!

## References


@@ -9,7 +9,7 @@ Let's start with modules.
## What is a module?

In Rust a **module** is a way to group related code together, under a common namespace (i.e. the module's name).\
You've already seen modules in action: the unit tests that verify the correctness of your code are defined in a
different module, named `tests`.

@@ -27,7 +27,7 @@ contents (the stuff inside `{ ... }`) are next to each other.
## Module tree

Modules can be nested, forming a **tree** structure.\
The root of the tree is the **crate** itself, which is the top-level module that contains all the other modules.
For a library crate, the root module is usually `src/lib.rs` (unless its location has been customized).
The root module is also known as the **crate root**.

@@ -44,7 +44,7 @@ mod dog;
```

`cargo`, Rust's build tool, is then in charge of finding the file that contains
the module implementation.\
If your module is declared in the root of your crate (e.g. `src/lib.rs` or `src/main.rs`),
`cargo` expects the file to be named either:

@@ -76,7 +76,7 @@ fn mark_ticket_as_done(ticket: Ticket) {
}
```

That's not the case if you want to access an entity from a different module.\
You have to use a **path** pointing to the entity you want to access.

You can compose the path in various ways:

@@ -106,9 +106,9 @@ You can also import all the items from a module with a single `use` statement.
use crate::module_1::module_2::*;
```

This is known as a **star import**.\
It is generally discouraged because it can pollute the current namespace, making it hard to understand
where each name comes from and potentially introducing name conflicts.\
Nonetheless, it can be useful in some cases, like when writing unit tests. You might have noticed
that most of our test modules start with a `use super::*;` statement to bring all the items from the parent module
(the one being tested) into scope.
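To make the `use super::*;` pattern concrete, here's a minimal sketch of the contents of a hypothetical `src/lib.rs` with an inline `tests` submodule (the function and test names are illustrative):

```rust
pub fn double(x: u32) -> u32 {
    x * 2
}

#[cfg(test)]
mod tests {
    // Star import: bring everything from the parent module (the one being tested) into scope.
    use super::*;

    #[test]
    fn doubles_the_input() {
        assert_eq!(double(21), 42);
    }
}
```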


@@ -6,7 +6,7 @@ be it a struct, a function, a field, etc.
## Private by default

By default, everything in Rust is **private**.\
A private entity can only be accessed:

1. within the same module where it's defined, or

@@ -22,7 +22,7 @@ We've used this extensively in the previous exercises:
## Visibility modifiers

You can modify the default visibility of an entity using a **visibility modifier**.\
Some common visibility modifiers are:

- `pub`: makes the entity **public**, i.e. accessible from outside the module where it's defined, potentially from


@@ -1,6 +1,6 @@
# Encapsulation

Now that we have a basic understanding of modules and visibility, let's circle back to **encapsulation**.\
Encapsulation is the practice of hiding the internal representation of an object. It is most commonly
used to enforce some **invariants** on the object's state.

@@ -14,7 +14,7 @@ struct Ticket {
}
```

If all fields are made public, there is no encapsulation.\
You must assume that the fields can be modified at any time, set to any value that's allowed by
their type. You can't rule out that a ticket might have an empty title or a status
that doesn't make sense.

@@ -35,9 +35,9 @@ let ticket = Ticket {
};
```

You've seen this in action in the previous exercise on visibility.\
We now need to provide one or more public **constructors**—i.e. static methods or functions that can be used
from outside the module to create a new instance of the struct.\
Luckily enough we already have one: `Ticket::new`, as implemented in [a previous exercise](02_validation.md).

## Accessor methods

@@ -50,7 +50,7 @@ In summary:
That's a good start, but it's not enough: apart from creating a `Ticket`, we also need to interact with it.
But how can we access the fields if they're private?

We need to provide **accessor methods**.\
Accessor methods are public methods that allow you to read the value of a private field (or fields) of a struct.

Rust doesn't have a built-in way to generate accessor methods for you, like some other languages do.
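As a minimal sketch of an accessor method (the field shown is an assumption about the `Ticket` struct's private fields, not the book's exact definition):

```rust
pub struct Ticket {
    title: String, // private: not marked `pub`
}

impl Ticket {
    // A public accessor: read-only access to the private `title` field.
    pub fn title(&self) -> &str {
        &self.title
    }
}

fn main() {
    // Inside the defining module we can still use the struct literal directly.
    let ticket = Ticket {
        title: "Fix the login page".to_string(),
    };
    assert_eq!(ticket.title(), "Fix the login page");
}
```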


@@ -74,11 +74,11 @@ All these things are true at the same time for Rust:
2. As a developer, you rarely have to manage memory directly
3. You can't cause dangling pointers, double frees, and other memory-related bugs

Languages like Python, JavaScript, and Java give you 2. and 3., but not 1.\
Languages like C or C++ give you 1., but neither 2. nor 3.

Depending on your background, 3. might sound a bit arcane: what is a "dangling pointer"?
What is a "double free"? Why are they dangerous?\
Don't worry: we'll cover these concepts in more detail during the rest of the course.
For now, though, let's focus on learning how to work within Rust's ownership system.

@@ -113,7 +113,7 @@ impl Ticket {
}
```

`Ticket::description` takes ownership of the `Ticket` instance it's called on.\
This is known as **move semantics**: ownership of the value (`self`) is **moved** from the caller to
the callee, and the caller can't use it anymore.

@@ -152,10 +152,10 @@ To build _useful_ accessor methods we need to start working with **references**.
## Borrowing

It is desirable to have methods that can read the value of a variable without taking ownership of it.\
Programming would be quite limited otherwise. In Rust, that's done via **borrowing**.

Whenever you borrow a value, you get a **reference** to it.\
References are tagged with their privileges[^refine]:

- Immutable references (`&`) allow you to read the value, but not to mutate it

@@ -180,7 +180,7 @@ All these restrictions are enforced at compile-time by the borrow checker.
### Syntax

How do you borrow a value, in practice?\
By adding `&` or `&mut` **in front of a variable**, you're borrowing its value.
Careful though! The same symbols (`&` and `&mut`) in **front of a type** have a different meaning:
they denote a different type, a reference to the original type.
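A minimal sketch of both kinds of borrow (the variable names are illustrative):

```rust
fn main() {
    let mut number: u32 = 42;

    // `&number` borrows immutably: we can read through `reference`, not write.
    let reference: &u32 = &number;
    assert_eq!(*reference, 42);

    // `&mut number` borrows mutably: we can modify the value through it.
    let mutable_reference: &mut u32 = &mut number;
    *mutable_reference += 1;

    assert_eq!(number, 43);
}
```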
@@ -220,18 +220,18 @@ fn f(number: &mut u32) -> &u32 {
## Breathe in, breathe out

Rust's ownership system can be a bit overwhelming at first.\
But don't worry: it'll become second nature with practice.\
And you're going to get a lot of practice over the rest of this chapter, as well as the rest of the course!
We'll revisit each concept multiple times to make sure you get familiar with them
and truly understand how they work.

Towards the end of this chapter we'll explain _why_ Rust's ownership system is designed the way it is.
For the time being, focus on understanding the _how_. Take each compiler error as a learning opportunity!

## References

- The exercise for this section is located in `exercises/03_ticket_v1/06_ownership`

[^refine]: This is a great mental model to start out, but it doesn't capture the _full_ picture.
We'll refine our understanding of references [later in the course](../07_threads/06_interior_mutability.md).

View file

@ -18,7 +18,7 @@ impl Ticket {
} }
``` ```
A sprinkle of `&` here and there did the trick! A sprinkle of `&` here and there did the trick!\
We now have a way to access the fields of a `Ticket` instance without consuming it in the process. We now have a way to access the fields of a `Ticket` instance without consuming it in the process.
Let's see how we can enhance our `Ticket` struct with **setter methods** next. Let's see how we can enhance our `Ticket` struct with **setter methods** next.
@ -46,7 +46,7 @@ impl Ticket {
} }
``` ```
It takes ownership of `self`, changes the title, and returns the modified `Ticket` instance. It takes ownership of `self`, changes the title, and returns the modified `Ticket` instance.\
This is how you'd use it: This is how you'd use it:
```rust ```rust

View file

@ -5,16 +5,16 @@ Now it's a good time to take a look under the hood: let's talk about **memory**.
## Stack and heap ## Stack and heap
When discussing memory, you'll often hear people talk about the **stack** and the **heap**. When discussing memory, you'll often hear people talk about the **stack** and the **heap**.\
These are two different memory regions used by programs to store data. These are two different memory regions used by programs to store data.
Let's start with the stack. Let's start with the stack.
## Stack ## Stack
The **stack** is a **LIFO** (Last In, First Out) data structure. The **stack** is a **LIFO** (Last In, First Out) data structure.\
When you call a function, a new **stack frame** is added on top of the stack. That stack frame stores When you call a function, a new **stack frame** is added on top of the stack. That stack frame stores
the function's arguments, local variables and a few "bookkeeping" values. the function's arguments, local variables and a few "bookkeeping" values.\
When the function returns, the stack frame is popped off the stack[^stack-overflow]. When the function returns, the stack frame is popped off the stack[^stack-overflow].
```text ```text
@ -25,15 +25,15 @@ When the function returns, the stack frame is popped off the stack[^stack-overfl
+-----------------+ +-----------------+ +-----------------+ +-----------------+ +-----------------+ +-----------------+
``` ```
From an operational point of view, stack allocation/de-allocation is **very fast**. From an operational point of view, stack allocation/de-allocation is **very fast**.\
We are always pushing and popping data from the top of the stack, so we don't need to search for free memory. We are always pushing and popping data from the top of the stack, so we don't need to search for free memory.
We also don't have to worry about fragmentation: the stack is a single contiguous block of memory. We also don't have to worry about fragmentation: the stack is a single contiguous block of memory.
### Rust ### Rust
Rust will often allocate data on the stack. Rust will often allocate data on the stack.\
You have a `u32` input argument in a function? Those 32 bits will be on the stack. You have a `u32` input argument in a function? Those 32 bits will be on the stack.\
You define a local variable of type `i64`? Those 64 bits will be on the stack. You define a local variable of type `i64`? Those 64 bits will be on the stack.\
It all works quite nicely because the size of those integers is known at compile time, therefore It all works quite nicely because the size of those integers is known at compile time, therefore
the compiled program knows how much space it needs to reserve on the stack for them. the compiled program knows how much space it needs to reserve on the stack for them.
@ -57,6 +57,6 @@ assert_eq!(std::mem::size_of::<u8>(), 1);
- The exercise for this section is located in `exercises/03_ticket_v1/08_stack` - The exercise for this section is located in `exercises/03_ticket_v1/08_stack`
[^stack-overflow]: If you have nested function calls, each function pushes its data onto the stack when it's called but [^stack-overflow]: If you have nested function calls, each function pushes its data onto the stack when it's called but
it doesn't pop it off until the innermost function returns. it doesn't pop it off until the innermost function returns.
If you have too many nested function calls, you can run out of stack space—the stack is not infinite! If you have too many nested function calls, you can run out of stack space—the stack is not infinite!
That's called a [**stack overflow**](https://en.wikipedia.org/wiki/Stack_overflow). That's called a [**stack overflow**](https://en.wikipedia.org/wiki/Stack_overflow).

View file

@ -6,14 +6,14 @@ That's where the **heap** comes in.
## Heap allocations ## Heap allocations
You can visualize the heap as a big chunk of memory—a huge array, if you will. You can visualize the heap as a big chunk of memory—a huge array, if you will.\
Whenever you need to store data on the heap, you ask a special program, the **allocator**, to reserve for you Whenever you need to store data on the heap, you ask a special program, the **allocator**, to reserve for you
a subset of the heap. We call this interaction (and the memory you reserved) a **heap allocation**. a subset of the heap. We call this interaction (and the memory you reserved) a **heap allocation**.
If the allocation succeeds, the allocator will give you a **pointer** to the start of the reserved block. If the allocation succeeds, the allocator will give you a **pointer** to the start of the reserved block.
## No automatic de-allocation ## No automatic de-allocation
The heap is structured quite differently from the stack. The heap is structured quite differently from the stack.\
Heap allocations are not contiguous, they can be located anywhere inside the heap. Heap allocations are not contiguous, they can be located anywhere inside the heap.
``` ```
@ -29,7 +29,7 @@ calling the allocator again to **free** the memory you no longer need.
## Performance ## Performance
The heap's flexibility comes at a cost: heap allocations are **slower** than stack allocations. The heap's flexibility comes at a cost: heap allocations are **slower** than stack allocations.
There's a lot more bookkeeping involved! There's a lot more bookkeeping involved!\
If you read articles about performance optimization you'll often be advised to minimize heap allocations If you read articles about performance optimization you'll often be advised to minimize heap allocations
and prefer stack-allocated data whenever possible. and prefer stack-allocated data whenever possible.
@ -37,7 +37,7 @@ and prefer stack-allocated data whenever possible.
When you create a local variable of type `String`, When you create a local variable of type `String`,
Rust is forced to allocate on the heap[^empty]: it doesn't know in advance how much text you're going to put in it, Rust is forced to allocate on the heap[^empty]: it doesn't know in advance how much text you're going to put in it,
so it can't reserve the right amount of space on the stack. so it can't reserve the right amount of space on the stack.\
But a `String` is not _entirely_ heap-allocated, it also keeps some data on the stack. In particular: But a `String` is not _entirely_ heap-allocated, it also keeps some data on the stack. In particular:
- The **pointer** to the heap region you reserved. - The **pointer** to the heap region you reserved.
@ -65,9 +65,9 @@ Heap: | ? | ? | ? | ? | ? |
+---+---+---+---+---+ +---+---+---+---+---+
``` ```
We asked for a `String` that can hold up to 5 bytes of text. We asked for a `String` that can hold up to 5 bytes of text.\
`String::with_capacity` goes to the allocator and asks for 5 bytes of heap memory. The allocator returns `String::with_capacity` goes to the allocator and asks for 5 bytes of heap memory. The allocator returns
a pointer to the start of that memory block. a pointer to the start of that memory block.\
The `String` is empty, though. On the stack, we keep track of this information by distinguishing between The `String` is empty, though. On the stack, we keep track of this information by distinguishing between
the length and the capacity: this `String` can hold up to 5 bytes, but it currently holds 0 bytes of the length and the capacity: this `String` can hold up to 5 bytes, but it currently holds 0 bytes of
actual text. actual text.
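You can observe the length/capacity split directly. A quick sketch (the allocator is only required to reserve _at least_ the requested capacity):

```rust
fn main() {
    let mut s = String::with_capacity(5);
    assert_eq!(s.len(), 0);     // no text stored yet
    assert!(s.capacity() >= 5); // at least 5 bytes reserved on the heap

    s.push_str("Hey");
    assert_eq!(s.len(), 3);     // three bytes of actual text
}
```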
@ -96,7 +96,7 @@ Three of the five bytes on the heap are used to store the characters `H`, `e`, a
### `usize` ### `usize`
How much space do we need to store pointer, length and capacity on the stack? How much space do we need to store pointer, length and capacity on the stack?\
It depends on the **architecture** of the machine you're running on. It depends on the **architecture** of the machine you're running on.
Every memory location on your machine has an [**address**](https://en.wikipedia.org/wiki/Memory_address), commonly Every memory location on your machine has an [**address**](https://en.wikipedia.org/wiki/Memory_address), commonly
@ -118,7 +118,7 @@ which is also known as the **size of the type**.
> What about the memory buffer that `String` is managing on the heap? Isn't that > What about the memory buffer that `String` is managing on the heap? Isn't that
> part of the size of `String`? > part of the size of `String`?
No! No!\
That heap allocation is a **resource** that `String` is managing. That heap allocation is a **resource** that `String` is managing.
It's not considered to be part of the `String` type by the compiler. It's not considered to be part of the `String` type by the compiler.
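Putting it together, the stack portion of a `String` is just those three `usize`-sized fields. A quick check (the `24` assumes a 64-bit target):

```rust
fn main() {
    // Pointer, length and capacity: three usize-sized fields on the stack.
    assert_eq!(
        std::mem::size_of::<String>(),
        3 * std::mem::size_of::<usize>()
    );
    // On a 64-bit machine that's 24 bytes, no matter how much text is on the heap.
    assert_eq!(std::mem::size_of::<String>(), 24);
}
```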
@ -129,7 +129,7 @@ therefore it doesn't track its size.
Unfortunately there is no equivalent of `std::mem::size_of` to measure the amount of Unfortunately there is no equivalent of `std::mem::size_of` to measure the amount of
heap memory that a certain value is allocating at runtime. Some types might heap memory that a certain value is allocating at runtime. Some types might
provide methods to inspect their heap usage (e.g. `String`'s `capacity` method), provide methods to inspect their heap usage (e.g. `String`'s `capacity` method),
but there is no general-purpose "API" to retrieve runtime heap usage in Rust. but there is no general-purpose "API" to retrieve runtime heap usage in Rust.\
You can, however, use a memory profiler tool (e.g. [DHAT](https://valgrind.org/docs/manual/dh-manual.html) You can, however, use a memory profiler tool (e.g. [DHAT](https://valgrind.org/docs/manual/dh-manual.html)
or [a custom allocator](https://docs.rs/dhat/latest/dhat/)) to inspect the heap usage of your program. or [a custom allocator](https://docs.rs/dhat/latest/dhat/)) to inspect the heap usage of your program.
@ -138,9 +138,9 @@ or [a custom allocator](https://docs.rs/dhat/latest/dhat/)) to inspect the heap
- The exercise for this section is located in `exercises/03_ticket_v1/09_heap` - The exercise for this section is located in `exercises/03_ticket_v1/09_heap`
[^empty]: `std` doesn't allocate if you create an **empty** `String` (i.e. `String::new()`). [^empty]: `std` doesn't allocate if you create an **empty** `String` (i.e. `String::new()`).
Heap memory will be reserved when you push data into it for the first time. Heap memory will be reserved when you push data into it for the first time.
[^equivalence]: The size of a pointer depends on the operating system too. [^equivalence]: The size of a pointer depends on the operating system too.
In certain environments, a pointer is **larger** than a memory address (e.g. [CHERI](https://blog.acolyer.org/2019/05/28/cheri-abi/)). In certain environments, a pointer is **larger** than a memory address (e.g. [CHERI](https://blog.acolyer.org/2019/05/28/cheri-abi/)).
Rust makes the simplifying assumption that pointers are the same size as memory addresses, Rust makes the simplifying assumption that pointers are the same size as memory addresses,
which is true for most modern systems you're likely to encounter. which is true for most modern systems you're likely to encounter.

View file

@ -2,7 +2,7 @@
What about references, like `&String` or `&mut String`? How are they represented in memory? What about references, like `&String` or `&mut String`? How are they represented in memory?
Most references[^fat] in Rust are represented, in memory, as a pointer to a memory location. Most references[^fat] in Rust are represented, in memory, as a pointer to a memory location.\
It follows that their size is the same as the size of a pointer, a `usize`. It follows that their size is the same as the size of a pointer, a `usize`.
You can verify this using `std::mem::size_of`: You can verify this using `std::mem::size_of`:
@ -12,7 +12,7 @@ assert_eq!(std::mem::size_of::<&String>(), 8);
assert_eq!(std::mem::size_of::<&mut String>(), 8); assert_eq!(std::mem::size_of::<&mut String>(), 8);
``` ```
A `&String`, in particular, is a pointer to the memory location where the `String`'s metadata is stored. A `&String`, in particular, is a pointer to the memory location where the `String`'s metadata is stored.\
If you run this snippet: If you run this snippet:
```rust ```rust
@ -42,7 +42,7 @@ The same goes for `&mut String`.
## Not all pointers point to the heap ## Not all pointers point to the heap
The example above should clarify one thing: not all pointers point to the heap. The example above should clarify one thing: not all pointers point to the heap.\
They just point to a memory location, which _may_ be on the heap, but doesn't have to be. They just point to a memory location, which _may_ be on the heap, but doesn't have to be.
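A short sketch to make this concrete (the variables are hypothetical):

```rust
fn main() {
    let x: u32 = 42;  // `x` lives on the stack
    let r: &u32 = &x; // `r` points to a stack location; no heap involved
    println!("{r}");

    let s = String::from("Hello");
    let q: &String = &s; // `q` points to the `String`'s metadata, also on the stack;
                         // only the text itself lives on the heap
    println!("{q}");
}
```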
## References ## References

View file

@ -1,6 +1,6 @@
# Destructors # Destructors
When introducing the heap, we mentioned that you're responsible for freeing the memory you allocate. When introducing the heap, we mentioned that you're responsible for freeing the memory you allocate.\
When introducing the borrow-checker, we also stated that you rarely have to manage memory directly in Rust. When introducing the borrow-checker, we also stated that you rarely have to manage memory directly in Rust.
These two statements might seem contradictory at first. These two statements might seem contradictory at first.
@ -38,10 +38,10 @@ It ends when one of the following happens:
## Destructors ## Destructors
When the owner of a value goes out of scope, Rust invokes its **destructor**. When the owner of a value goes out of scope, Rust invokes its **destructor**.\
The destructor tries to clean up the resources used by that value—in particular, whatever memory it allocated. The destructor tries to clean up the resources used by that value—in particular, whatever memory it allocated.
You can manually invoke the destructor of a value by passing it to `std::mem::drop`. You can manually invoke the destructor of a value by passing it to `std::mem::drop`.\
That's why you'll often hear Rust developers saying "that value has been **dropped**" as a way to state that a value That's why you'll often hear Rust developers saying "that value has been **dropped**" as a way to state that a value
has gone out of scope and its destructor has been invoked. has gone out of scope and its destructor has been invoked.
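As a minimal sketch, dropping a value before the end of its scope looks like this:

```rust
fn main() {
    let s = String::from("Hello");
    // Explicitly invoke the destructor: the heap buffer owned by `s` is freed here.
    drop(s);
    // `s` has been moved into `drop`, so it can no longer be used below this point.
}
```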
@ -129,12 +129,12 @@ error[E0382]: use of moved value: `x`
| ^ value used here after move | ^ value used here after move
``` ```
Drop **consumes** the value it's called on, meaning that the value is no longer valid after the call. Drop **consumes** the value it's called on, meaning that the value is no longer valid after the call.\
The compiler will therefore prevent you from using it, avoiding [use-after-free bugs](https://owasp.org/www-community/vulnerabilities/Using_freed_memory). The compiler will therefore prevent you from using it, avoiding [use-after-free bugs](https://owasp.org/www-community/vulnerabilities/Using_freed_memory).
### Dropping references ### Dropping references
What if a variable contains a reference? What if a variable contains a reference?\
For example: For example:
```rust ```rust
@ -143,7 +143,7 @@ let y = &x;
drop(y); drop(y);
``` ```
When you call `drop(y)`... nothing happens. When you call `drop(y)`... nothing happens.\
If you actually try to compile this code, you'll get a warning: If you actually try to compile this code, you'll get a warning:
```text ```text
@ -158,7 +158,7 @@ warning: calls to `std::mem::drop` with a reference
| |
``` ```
It goes back to what we said earlier: we only want to call the destructor once. It goes back to what we said earlier: we only want to call the destructor once.\
You can have multiple references to the same value—if we called the destructor for the value they point at You can have multiple references to the same value—if we called the destructor for the value they point at
when one of them goes out of scope, what would happen to the others? when one of them goes out of scope, what would happen to the others?
They would refer to a memory location that's no longer valid: a so-called [**dangling pointer**](https://en.wikipedia.org/wiki/Dangling_pointer), They would refer to a memory location that's no longer valid: a so-called [**dangling pointer**](https://en.wikipedia.org/wiki/Dangling_pointer),
@ -170,4 +170,4 @@ Rust's ownership system rules out these kinds of bugs by design.
- The exercise for this section is located in `exercises/03_ticket_v1/11_destructor` - The exercise for this section is located in `exercises/03_ticket_v1/11_destructor`
[^leak]: Rust doesn't guarantee that destructors will run. They won't, for example, if [^leak]: Rust doesn't guarantee that destructors will run. They won't, for example, if
you explicitly choose to [leak memory](../07_threads/03_leak.md). you explicitly choose to [leak memory](../07_threads/03_leak.md).

View file

@ -1,6 +1,6 @@
# Wrapping up # Wrapping up
We've covered a lot of foundational Rust concepts in this chapter. We've covered a lot of foundational Rust concepts in this chapter.\
Before moving on, let's go through one last exercise to consolidate what we've learned. Before moving on, let's go through one last exercise to consolidate what we've learned.
You'll have minimal guidance this time—just the exercise description and the tests to guide you. You'll have minimal guidance this time—just the exercise description and the tests to guide you.

View file

@ -1,9 +1,9 @@
# Traits # Traits
In the previous chapter we covered the basics of Rust's type and ownership system. In the previous chapter we covered the basics of Rust's type and ownership system.\
It's time to dig deeper: we'll explore **traits**, Rust's take on interfaces. It's time to dig deeper: we'll explore **traits**, Rust's take on interfaces.
Once you learn about traits, you'll start seeing their fingerprints all over the place. Once you learn about traits, you'll start seeing their fingerprints all over the place.\
In fact, you've already seen traits in action throughout the previous chapter, e.g. `.into()` invocations as well In fact, you've already seen traits in action throughout the previous chapter, e.g. `.into()` invocations as well
as operators like `==` and `+`. as operators like `==` and `+`.

View file

@ -38,7 +38,7 @@ error[E0369]: binary operation `==` cannot be applied to type `Ticket`
note: an implementation of `PartialEq` might be missing for `Ticket` note: an implementation of `PartialEq` might be missing for `Ticket`
``` ```
`Ticket` is a new type. Out of the box, there is **no behavior attached to it**. `Ticket` is a new type. Out of the box, there is **no behavior attached to it**.\
Rust doesn't magically infer how to compare two `Ticket` instances just because they contain `String`s. Rust doesn't magically infer how to compare two `Ticket` instances just because they contain `String`s.
The Rust compiler is nudging us in the right direction though: it's suggesting that we might be missing an implementation The Rust compiler is nudging us in the right direction though: it's suggesting that we might be missing an implementation
@ -46,7 +46,7 @@ of `PartialEq`. `PartialEq` is a **trait**!
## What are traits? ## What are traits?
Traits are Rust's way of defining **interfaces**. Traits are Rust's way of defining **interfaces**.\
A trait defines a set of methods that a type must implement to satisfy the trait's contract. A trait defines a set of methods that a type must implement to satisfy the trait's contract.
### Defining a trait ### Defining a trait

View file

@ -44,7 +44,7 @@ fn main() {
## One implementation ## One implementation
There are limitations to the trait implementations you can write. There are limitations to the trait implementations you can write.\
The simplest and most straightforward one: you can't implement the same trait twice, The simplest and most straightforward one: you can't implement the same trait twice,
in a crate, for the same type. in a crate, for the same type.
@ -101,7 +101,7 @@ Imagine the following situation:
- Crate `C` provides a (different) implementation of the `IsEven` trait for `u32` - Crate `C` provides a (different) implementation of the `IsEven` trait for `u32`
- Crate `D` depends on both `B` and `C` and calls `1.is_even()` - Crate `D` depends on both `B` and `C` and calls `1.is_even()`
Which implementation should be used? The one defined in `B`? Or the one defined in `C`? Which implementation should be used? The one defined in `B`? Or the one defined in `C`?\
There's no good answer, therefore the orphan rule was defined to prevent this scenario. There's no good answer, therefore the orphan rule was defined to prevent this scenario.
Thanks to the orphan rule, neither crate `B` nor crate `C` would compile. Thanks to the orphan rule, neither crate `B` nor crate `C` would compile.

View file

@ -5,7 +5,7 @@ Operator overloading is the ability to define custom behavior for operators like
## Operators are traits ## Operators are traits
In Rust, operators are traits. In Rust, operators are traits.\
For each operator, there is a corresponding trait that defines the behavior of that operator. For each operator, there is a corresponding trait that defines the behavior of that operator.
By implementing that trait for your type, you **unlock** the usage of the corresponding operators. By implementing that trait for your type, you **unlock** the usage of the corresponding operators.
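For instance, here is a minimal sketch that unlocks `+` for a hypothetical `Meters` wrapper by implementing the `Add` trait:

```rust
use std::ops::Add;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(u32);

impl Add for Meters {
    type Output = Meters;

    fn add(self, other: Meters) -> Meters {
        Meters(self.0 + other.0)
    }
}

fn main() {
    // `a + b` is syntactic sugar for `a.add(b)`.
    assert_eq!(Meters(2) + Meters(3), Meters(5));
}
```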
@ -33,7 +33,7 @@ and replace `x == y` with `x.eq(y)`. It's syntactic sugar!
This is the correspondence for the main operators: This is the correspondence for the main operators:
| Operator | Trait | | Operator | Trait |
|--------------------------|-------------------------------------------------------------------------| | ------------------------ | ----------------------------------------------------------------------- |
| `+` | [`Add`](https://doc.rust-lang.org/std/ops/trait.Add.html) | | `+` | [`Add`](https://doc.rust-lang.org/std/ops/trait.Add.html) |
| `-` | [`Sub`](https://doc.rust-lang.org/std/ops/trait.Sub.html) | | `-` | [`Sub`](https://doc.rust-lang.org/std/ops/trait.Sub.html) |
| `*` | [`Mul`](https://doc.rust-lang.org/std/ops/trait.Mul.html) | | `*` | [`Mul`](https://doc.rust-lang.org/std/ops/trait.Mul.html) |
@ -47,9 +47,9 @@ while comparison ones live in the [`std::cmp`](https://doc.rust-lang.org/std/cmp
## Default implementations ## Default implementations
The comment on `PartialEq::ne` states that "`ne` is a provided method". The comment on `PartialEq::ne` states that "`ne` is a provided method".\
It means that `PartialEq` provides a **default implementation** for `ne` in the trait definition—the `{ ... }` elided It means that `PartialEq` provides a **default implementation** for `ne` in the trait definition—the `{ ... }` elided
block in the definition snippet. block in the definition snippet.\
If we expand the elided block, it looks like this: If we expand the elided block, it looks like this:
```rust ```rust
@ -62,7 +62,7 @@ pub trait PartialEq {
} }
``` ```
It's what you expect: `ne` is the negation of `eq`. It's what you expect: `ne` is the negation of `eq`.\
Since a default implementation is provided, you can skip implementing `ne` when you implement `PartialEq` for your type. Since a default implementation is provided, you can skip implementing `ne` when you implement `PartialEq` for your type.
It's enough to implement `eq`: It's enough to implement `eq`:
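As a sketch, using a hypothetical `Point` struct rather than the course's `Ticket`:

```rust
struct Point {
    x: i32,
    y: i32,
}

impl PartialEq for Point {
    fn eq(&self, other: &Self) -> bool {
        self.x == other.x && self.y == other.y
    }
    // No `ne` here: the provided default (`!self.eq(other)`) kicks in.
}
```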

View file

@ -24,7 +24,7 @@ impl PartialEq for Ticket {
``` ```
If the definition of `Ticket` changes, the compiler will error out, complaining that your If the definition of `Ticket` changes, the compiler will error out, complaining that your
destructuring is no longer exhaustive. destructuring is no longer exhaustive.\
You can also rename struct fields, to avoid variable shadowing: You can also rename struct fields, to avoid variable shadowing:
```rust ```rust
@ -55,7 +55,7 @@ You've already encountered a few macros in past exercises:
- `assert_eq!` and `assert!`, in the test cases - `assert_eq!` and `assert!`, in the test cases
- `println!`, to print to the console - `println!`, to print to the console
Rust macros are **code generators**. Rust macros are **code generators**.\
They generate new Rust code based on the input you provide, and that generated code is then compiled alongside They generate new Rust code based on the input you provide, and that generated code is then compiled alongside
the rest of your program. Some macros are built into Rust's standard library, but you can also the rest of your program. Some macros are built into Rust's standard library, but you can also
write your own. We won't be creating our own macros in this course, but you can find some useful write your own. We won't be creating our own macros in this course, but you can find some useful

View file

@ -9,9 +9,9 @@ There's a third use case: **generic programming**.
## The problem ## The problem
All our functions and methods, so far, have been working with **concrete types**. All our functions and methods, so far, have been working with **concrete types**.\
Code that operates on concrete types is usually straightforward to write and understand. But it's also Code that operates on concrete types is usually straightforward to write and understand. But it's also
limited in its reusability. limited in its reusability.\
Let's imagine, for example, that we want to write a function that returns `true` if an integer is even. Let's imagine, for example, that we want to write a function that returns `true` if an integer is even.
Working with concrete types, we'd have to write a separate function for each integer type we want to Working with concrete types, we'd have to write a separate function for each integer type we want to
support: support:
@ -54,7 +54,7 @@ The duplication remains.
## Generic programming ## Generic programming
We can do better using **generics**. We can do better using **generics**.\
Generics allow us to write code that works with a **type parameter** instead of a concrete type: Generics allow us to write code that works with a **type parameter** instead of a concrete type:
```rust ```rust
@ -68,19 +68,19 @@ where
} }
``` ```
`print_if_even` is a **generic function**. `print_if_even` is a **generic function**.\
It isn't tied to a specific input type. Instead, it works with any type `T` that: It isn't tied to a specific input type. Instead, it works with any type `T` that:
- Implements the `IsEven` trait. - Implements the `IsEven` trait.
- Implements the `Debug` trait. - Implements the `Debug` trait.
This contract is expressed with a **trait bound**: `T: IsEven + Debug`. This contract is expressed with a **trait bound**: `T: IsEven + Debug`.\
The `+` symbol is used to require that `T` implements multiple traits. `T: IsEven + Debug` is equivalent to The `+` symbol is used to require that `T` implements multiple traits. `T: IsEven + Debug` is equivalent to
"where `T` implements `IsEven` **and** `Debug`". "where `T` implements `IsEven` **and** `Debug`".
## Trait bounds ## Trait bounds
What purpose do trait bounds serve in `print_if_even`? What purpose do trait bounds serve in `print_if_even`?\
To find out, let's try to remove them: To find out, let's try to remove them:
```rust ```rust
@ -114,9 +114,9 @@ help: consider restricting type parameter `T`
| +++++++++++++++++ | +++++++++++++++++
``` ```
Without trait bounds, the compiler doesn't know what `T` **can do**. Without trait bounds, the compiler doesn't know what `T` **can do**.\
It doesn't know that `T` has an `is_even` method, and it doesn't know how to format `T` for printing. It doesn't know that `T` has an `is_even` method, and it doesn't know how to format `T` for printing.
From the compiler's point of view, a bare `T` has no behaviour at all. From the compiler's point of view, a bare `T` has no behaviour at all.\
Trait bounds restrict the set of types that can be used by ensuring that the behaviour required by the function Trait bounds restrict the set of types that can be used by ensuring that the behaviour required by the function
body is present. body is present.
@ -148,7 +148,7 @@ fn print_if_even<T: IsEven + Debug>(n: T) {
## Syntax: meaningful names ## Syntax: meaningful names
In the examples above, we used `T` as the type parameter name. This is a common convention when a function has In the examples above, we used `T` as the type parameter name. This is a common convention when a function has
only one type parameter. only one type parameter.\
Nothing stops you from using a more meaningful name, though: Nothing stops you from using a more meaningful name, though:
```rust ```rust
@ -164,8 +164,8 @@ Follow Rust's conventions though: use camel case for type parameter names.
## The function signature is king ## The function signature is king
You may wonder why we need trait bounds at all. Can't the compiler infer the required traits from the function's body? You may wonder why we need trait bounds at all. Can't the compiler infer the required traits from the function's body?\
It could, but it won't. It could, but it won't.\
The rationale is the same as for [explicit type annotations on function parameters](../02_basic_calculator/02_variables.md#function-arguments-are-variables): The rationale is the same as for [explicit type annotations on function parameters](../02_basic_calculator/02_variables.md#function-arguments-are-variables):
each function signature is a contract between the caller and the callee, and the terms must be explicitly stated. each function signature is a contract between the caller and the callee, and the terms must be explicitly stated.
This allows for better error messages, better documentation, less unintentional breakages across versions, This allows for better error messages, better documentation, less unintentional breakages across versions,

View file

@ -16,7 +16,7 @@ The type of `s` is `&str`, a **reference to a string slice**.
## Memory layout ## Memory layout
`&str` and `String` are different types—they're not interchangeable. `&str` and `String` are different types—they're not interchangeable.\
Let's recall the memory layout of a `String` from our Let's recall the memory layout of a `String` from our
[previous exploration](../03_ticket_v1/09_heap.md). [previous exploration](../03_ticket_v1/09_heap.md).
If we run: If we run:
@ -45,21 +45,21 @@ If you remember, we've [also examined](../03_ticket_v1/10_references_in_memory.m
how a `&String` is laid out in memory: how a `&String` is laid out in memory:
```text ```text
-------------------------------------- --------------------------------------
| | | |
+----v----+--------+----------+ +----|----+ +----v----+--------+----------+ +----|----+
| pointer | length | capacity | | pointer | | pointer | length | capacity | | pointer |
| | | 5 | 5 | | | | | | 5 | 5 | | |
+----|----+--------+----------+ +---------+ +----|----+--------+----------+ +---------+
| s &s | s &s
| |
v v
+---+---+---+---+---+ +---+---+---+---+---+
| H | e | l | l | o | | H | e | l | l | o |
+---+---+---+---+---+ +---+---+---+---+---+
``` ```
`&String` points to the memory location where the `String`'s metadata is stored. `&String` points to the memory location where the `String`'s metadata is stored.\
If we follow the pointer, we get to the heap-allocated data. In particular, we get to the first byte of the string, `H`. If we follow the pointer, we get to the heap-allocated data. In particular, we get to the first byte of the string, `H`.
What if we wanted a type that represents a **substring** of `s`? E.g. `ello` in `Hello`? What if we wanted a type that represents a **substring** of `s`? E.g. `ello` in `Hello`?
@ -100,19 +100,19 @@ Heap: | H | e | l | l | o | |
- A pointer to the first byte of the slice. - A pointer to the first byte of the slice.
- The length of the slice. - The length of the slice.
`slice` doesn't own the data, it just points to it: it's a **reference** to the `String`'s heap-allocated data. `slice` doesn't own the data, it just points to it: it's a **reference** to the `String`'s heap-allocated data.\
When `slice` is dropped, the heap-allocated data won't be deallocated, because it's still owned by `s`. When `slice` is dropped, the heap-allocated data won't be deallocated, because it's still owned by `s`.
That's why `slice` doesn't have a `capacity` field: it doesn't own the data, so it doesn't need to know how much That's why `slice` doesn't have a `capacity` field: it doesn't own the data, so it doesn't need to know how much
space was allocated for it; it only cares about the data it references. space was allocated for it; it only cares about the data it references.
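A short sketch of taking such a slice (the range is arbitrary):

```rust
fn main() {
    let s = String::from("Hello");
    // `slice` borrows a sub-range of `s`'s heap data: pointer + length, no capacity.
    let slice: &str = &s[1..];
    assert_eq!(slice, "ello");
    assert_eq!(slice.len(), 4);
}
```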
## `&str` vs `&String` ## `&str` vs `&String`
As a rule of thumb, use `&str` rather than `&String` whenever you need a reference to textual data. As a rule of thumb, use `&str` rather than `&String` whenever you need a reference to textual data.\
`&str` is more flexible and generally considered more idiomatic in Rust code. `&str` is more flexible and generally considered more idiomatic in Rust code.
If a method returns a `&String`, you're promising that there is heap-allocated UTF-8 text somewhere that If a method returns a `&String`, you're promising that there is heap-allocated UTF-8 text somewhere that
**matches exactly** the one you're returning a reference to. **matches exactly** the one you're returning a reference to.\
If a method returns a `&str`, instead, you have a lot more freedom: you're just saying that *somewhere* there's a If a method returns a `&str`, instead, you have a lot more freedom: you're just saying that _somewhere_ there's a
bunch of text data and that a subset of it matches what you need, therefore you're returning a reference to it. bunch of text data and that a subset of it matches what you need, therefore you're returning a reference to it.
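In practice, preferring `&str` in a signature looks like this (the function below is a hypothetical example):

```rust
// Works with string literals, slices and `&String` values alike.
fn first_word(text: &str) -> &str {
    text.split_whitespace().next().unwrap_or("")
}

fn main() {
    let owned = String::from("Hello world");
    assert_eq!(first_word(&owned), "Hello");  // a `&String` is accepted too
    assert_eq!(first_word("Hi there"), "Hi"); // a literal is already a `&str`
}
```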
## References ## References

View file

@ -38,7 +38,7 @@ Instead, it just works. **Why**?
## `Deref` to the rescue ## `Deref` to the rescue
The `Deref` trait is the mechanism behind the language feature known as [**deref coercion**](https://doc.rust-lang.org/std/ops/trait.Deref.html#deref-coercion). The `Deref` trait is the mechanism behind the language feature known as [**deref coercion**](https://doc.rust-lang.org/std/ops/trait.Deref.html#deref-coercion).\
The trait is defined in the standard library, in the `std::ops` module: The trait is defined in the standard library, in the `std::ops` module:
```rust ```rust
@ -51,13 +51,13 @@ pub trait Deref {
} }
``` ```
`type Target` is an **associated type**. `type Target` is an **associated type**.\
It's a placeholder for a concrete type that must be specified when the trait is implemented. It's a placeholder for a concrete type that must be specified when the trait is implemented.
## Deref coercion ## Deref coercion
By implementing `Deref<Target = U>` for a type `T` you're telling the compiler that `&T` and `&U` are By implementing `Deref<Target = U>` for a type `T` you're telling the compiler that `&T` and `&U` are
somewhat interchangeable. somewhat interchangeable.\
In particular, you get the following behavior: In particular, you get the following behavior:
- References to `T` are implicitly converted into references to `U` (i.e. `&T` becomes `&U`) - References to `T` are implicitly converted into references to `U` (i.e. `&T` becomes `&U`)
@ -84,7 +84,7 @@ Thanks to this implementation and deref coercion, a `&String` is automatically c
## Don't abuse deref coercion ## Don't abuse deref coercion
Deref coercion is a powerful feature, but it can lead to confusion. Deref coercion is a powerful feature, but it can lead to confusion.\
Automatically converting types can make the code harder to read and understand. If a method with the same name Automatically converting types can make the code harder to read and understand. If a method with the same name
is defined on both `T` and `U`, which one will be called? is defined on both `T` and `U`, which one will be called?

View file

@ -1,7 +1,7 @@
# `Sized` # `Sized`
There's more to `&str` than meets the eye, even after having There's more to `&str` than meets the eye, even after having
investigated deref coercion. investigated deref coercion.\
From our previous [discussion on memory layouts](../03_ticket_v1/10_references_in_memory.md), From our previous [discussion on memory layouts](../03_ticket_v1/10_references_in_memory.md),
it would have been reasonable to expect `&str` to be represented as a single `usize` on it would have been reasonable to expect `&str` to be represented as a single `usize` on
the stack, a pointer. That's not the case though. `&str` stores some **metadata** next the stack, a pointer. That's not the case though. `&str` stores some **metadata** next
@ -38,10 +38,10 @@ What's going on?
## Dynamically sized types ## Dynamically sized types
`str` is a **dynamically sized type** (DST). `str` is a **dynamically sized type** (DST).\
A DST is a type whose size is not known at compile time. Whenever you have a A DST is a type whose size is not known at compile time. Whenever you have a
reference to a DST, like `&str`, it has to include additional reference to a DST, like `&str`, it has to include additional
information about the data it points to. It is a **fat pointer**. information about the data it points to. It is a **fat pointer**.\
In the case of `&str`, it stores the length of the slice it points to. In the case of `&str`, it stores the length of the slice it points to.
We'll see more examples of DSTs in the rest of the course. We'll see more examples of DSTs in the rest of the course.
@ -59,14 +59,14 @@ A type is `Sized` if its size is known at compile time. In other words, it's not
### Marker traits ### Marker traits
`Sized` is your first example of a **marker trait**. `Sized` is your first example of a **marker trait**.\
A marker trait is a trait that doesn't require any methods to be implemented. It doesn't define any behavior. A marker trait is a trait that doesn't require any methods to be implemented. It doesn't define any behavior.
It only serves to **mark** a type as having certain properties. It only serves to **mark** a type as having certain properties.
The mark is then leveraged by the compiler to enable certain behaviors or optimizations. The mark is then leveraged by the compiler to enable certain behaviors or optimizations.
### Auto traits ### Auto traits
In particular, `Sized` is also an **auto trait**. In particular, `Sized` is also an **auto trait**.\
You don't need to implement it explicitly; the compiler implements it automatically for you You don't need to implement it explicitly; the compiler implements it automatically for you
based on the type's definition. based on the type's definition.
@ -74,7 +74,7 @@ based on the type's definition.
All the types we've seen so far are `Sized`: `u32`, `String`, `bool`, etc. All the types we've seen so far are `Sized`: `u32`, `String`, `bool`, etc.
`str`, as we just saw, is not `Sized`. `str`, as we just saw, is not `Sized`.\
`&str` is `Sized` though! We know its size at compile time: two `usize`s, one for the pointer `&str` is `Sized` though! We know its size at compile time: two `usize`s, one for the pointer
and one for the length. and one for the length.
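You can verify the fat-pointer layout with `std::mem::size_of` (a quick sketch):

```rust
fn main() {
    // `&str` = pointer + length: two usize-sized fields.
    assert_eq!(
        std::mem::size_of::<&str>(),
        2 * std::mem::size_of::<usize>()
    );
    // A `&String`, by contrast, is a single thin pointer.
    assert_eq!(std::mem::size_of::<&String>(), std::mem::size_of::<usize>());
}
```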

View file

@ -20,7 +20,7 @@ impl Ticket {
} }
``` ```
We've also seen that string literals (such as `"A title"`) are of type `&str`. We've also seen that string literals (such as `"A title"`) are of type `&str`.\
We have a type mismatch here: a `String` is expected, but we have a `&str`. We have a type mismatch here: a `String` is expected, but we have a `&str`.
No magical coercion will come to save us this time; we need **to perform a conversion**. No magical coercion will come to save us this time; we need **to perform a conversion**.
@ -99,18 +99,18 @@ you can't use them with other traits.
## `&str` to `String` ## `&str` to `String`
In [`std`'s documentation](https://doc.rust-lang.org/std/convert/trait.From.html#implementors) In [`std`'s documentation](https://doc.rust-lang.org/std/convert/trait.From.html#implementors)
you can see which `std` types implement the `From` trait. you can see which `std` types implement the `From` trait.\
You'll find that `String` implements `From<&str> for String`. Thus, we can write: You'll find that `String` implements `From<&str> for String`. Thus, we can write:
```rust ```rust
let title = String::from("A title"); let title = String::from("A title");
``` ```
We've been primarily using `.into()`, though. We've been primarily using `.into()`, though.\
If you check out the [implementors of `Into`](https://doc.rust-lang.org/std/convert/trait.Into.html#implementors) If you check out the [implementors of `Into`](https://doc.rust-lang.org/std/convert/trait.Into.html#implementors)
you won't find `Into<&str> for String`. What's going on? you won't find `Into<&str> for String`. What's going on?
`From` and `Into` are **dual traits**. `From` and `Into` are **dual traits**.\
In particular, `Into` is implemented for any type that implements `From` using a **blanket implementation**: In particular, `Into` is implemented for any type that implements `From` using a **blanket implementation**:
```rust ```rust
@ -129,7 +129,7 @@ we can write `let title = "A title".into();`.
## `.into()` ## `.into()`
Every time you see `.into()`, you're witnessing a conversion between types. Every time you see `.into()`, you're witnessing a conversion between types.\
What's the target type, though? What's the target type, though?
In most cases, the target type is either: In most cases, the target type is either:

View file

@ -14,8 +14,8 @@ pub trait Deref {
} }
``` ```
They both feature type parameters. They both feature type parameters.\
In the case of `From`, it's a generic parameter, `T`. In the case of `From`, it's a generic parameter, `T`.\
In the case of `Deref`, it's an associated type, `Target`. In the case of `Deref`, it's an associated type, `Target`.
What's the difference? Why use one over the other? What's the difference? Why use one over the other?
@ -27,7 +27,7 @@ only deref to `str`.
It's about avoiding ambiguity: if you could implement `Deref` multiple times for a type, It's about avoiding ambiguity: if you could implement `Deref` multiple times for a type,
which `Target` type should the compiler choose when you call a `&self` method? which `Target` type should the compiler choose when you call a `&self` method?
That's why `Deref` uses an associated type, `Target`. That's why `Deref` uses an associated type, `Target`.\
An associated type is uniquely determined **by the trait implementation**. An associated type is uniquely determined **by the trait implementation**.
Since you can't implement `Deref` more than once, you'll only be able to specify one `Target` for a given type Since you can't implement `Deref` more than once, you'll only be able to specify one `Target` for a given type
and there won't be any ambiguity. and there won't be any ambiguity.
@ -51,7 +51,7 @@ impl From<u16> for WrappingU32 {
} }
``` ```
This works because `From<u16>` and `From<u32>` are considered **different traits**. This works because `From<u16>` and `From<u32>` are considered **different traits**.\
There is no ambiguity: the compiler can determine which implementation to use based on the type of the value being converted. There is no ambiguity: the compiler can determine which implementation to use based on the type of the value being converted.
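A compact sketch with both implementations side by side (the `value` field name is illustrative):

```rust
struct WrappingU32 {
    value: u32,
}

impl From<u32> for WrappingU32 {
    fn from(value: u32) -> Self {
        WrappingU32 { value }
    }
}

impl From<u16> for WrappingU32 {
    fn from(value: u16) -> Self {
        WrappingU32 { value: value as u32 }
    }
}

fn main() {
    // The compiler picks the right `from` based on the argument's type.
    let a: WrappingU32 = 42u32.into();
    let b: WrappingU32 = 7u16.into();
    assert_eq!(a.value + b.value, 49);
}
```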
## Case study: `Add` ## Case study: `Add`
@ -73,7 +73,7 @@ It uses both mechanisms:
### `RHS` ### `RHS`
`RHS` is a generic parameter to allow for different types to be added together. `RHS` is a generic parameter to allow for different types to be added together.\
For example, you'll find these two implementations in the standard library: For example, you'll find these two implementations in the standard library:
```rust ```rust
@ -125,7 +125,7 @@ impl Add<&u32> for &u32 {
} }
``` ```
The type they're implementing the trait for is `&u32`, but the result of the addition is `u32`. The type they're implementing the trait for is `&u32`, but the result of the addition is `u32`.\
It would be impossible[^flexible] to provide this implementation if `add` had to return `Self`, i.e. `&u32` in this case. It would be impossible[^flexible] to provide this implementation if `add` had to return `Self`, i.e. `&u32` in this case.
`Output` lets `std` decouple the implementor from the return type, thus supporting this case. `Output` lets `std` decouple the implementor from the return type, thus supporting this case.
@ -146,6 +146,5 @@ To recap:
- The exercise for this section is located in `exercises/04_traits/10_assoc_vs_generic` - The exercise for this section is located in `exercises/04_traits/10_assoc_vs_generic`
[^flexible]: Flexibility is rarely free: the trait definition is more complex due to `Output`, and implementors have to reason about [^flexible]: Flexibility is rarely free: the trait definition is more complex due to `Output`, and implementors have to reason about
what they want to return. The trade-off is only justified if that flexibility is actually needed. Keep that in mind what they want to return. The trade-off is only justified if that flexibility is actually needed. Keep that in mind
when designing your own traits. when designing your own traits.

View file

@ -1,12 +1,12 @@
# Copying values, pt. 1 # Copying values, pt. 1
In the previous chapter we introduced ownership and borrowing. In the previous chapter we introduced ownership and borrowing.\
We stated, in particular, that: We stated, in particular, that:
- Every value in Rust has a single owner at any given time. - Every value in Rust has a single owner at any given time.
- When a function takes ownership of a value ("it consumes it"), the caller can't use that value anymore. - When a function takes ownership of a value ("it consumes it"), the caller can't use that value anymore.
These restrictions can be somewhat limiting. These restrictions can be somewhat limiting.\
Sometimes we might have to call a function that takes ownership of a value, but we still need to use Sometimes we might have to call a function that takes ownership of a value, but we still need to use
that value afterward. that value afterward.
@ -50,7 +50,7 @@ fn example() {
``` ```
Instead of giving ownership of `s` to `consumer`, we create a new `String` (by cloning `s`) and give Instead of giving ownership of `s` to `consumer`, we create a new `String` (by cloning `s`) and give
that to `consumer` instead. that to `consumer` instead.\
`s` remains valid and usable after the call to `consumer`. `s` remains valid and usable after the call to `consumer`.
## In memory ## In memory
@ -92,7 +92,7 @@ If you're coming from a language like Java, you can think of `clone` as a way to
## Implementing `Clone` ## Implementing `Clone`
To make a type `Clone`-able, we have to implement the `Clone` trait for it. To make a type `Clone`-able, we have to implement the `Clone` trait for it.\
You almost always implement `Clone` by deriving it: You almost always implement `Clone` by deriving it:
```rust ```rust
@ -103,7 +103,7 @@ struct MyType {
``` ```
The compiler implements `Clone` for `MyType` as you would expect: it clones each field of `MyType` individually and The compiler implements `Clone` for `MyType` as you would expect: it clones each field of `MyType` individually and
then constructs a new `MyType` instance using the cloned fields. then constructs a new `MyType` instance using the cloned fields.\
Remember that you can use `cargo expand` (or your IDE) to explore the code generated by `derive` macros. Remember that you can use `cargo expand` (or your IDE) to explore the code generated by `derive` macros.
## References ## References

View file

@ -26,14 +26,14 @@ pub trait Copy: Clone { }
It is a marker trait, just like `Sized`. It is a marker trait, just like `Sized`.
If a type implements `Copy`, there's no need to call `.clone()` to create a new instance of the type: If a type implements `Copy`, there's no need to call `.clone()` to create a new instance of the type:
Rust does it **implicitly** for you. Rust does it **implicitly** for you.\
`u32` is an example of a type that implements `Copy`, which is why the example above compiles without errors: `u32` is an example of a type that implements `Copy`, which is why the example above compiles without errors:
when `consumer(s)` is called, Rust creates a new `u32` instance by performing a **bitwise copy** of `s`, when `consumer(s)` is called, Rust creates a new `u32` instance by performing a **bitwise copy** of `s`,
and then passes that new instance to `consumer`. It all happens behind the scenes, without you having to do anything. and then passes that new instance to `consumer`. It all happens behind the scenes, without you having to do anything.
## What can be `Copy`? ## What can be `Copy`?
`Copy` is not equivalent to "automatic cloning", although it implies it. `Copy` is not equivalent to "automatic cloning", although it implies it.\
Types must meet a few requirements in order to be allowed to implement `Copy`. Types must meet a few requirements in order to be allowed to implement `Copy`.
First of all, it must implement `Clone`, since `Copy` is a subtrait of `Clone`. First of all, it must implement `Clone`, since `Copy` is a subtrait of `Clone`.
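A minimal sketch of a type that meets those requirements (the struct is hypothetical):

```rust
#[derive(Clone, Copy)]
struct Point {
    x: u32,
    y: u32,
}

fn consumer(p: Point) {
    println!("({}, {})", p.x, p.y);
}

fn main() {
    let p = Point { x: 1, y: 2 };
    consumer(p); // a bitwise copy of `p` is passed in...
    consumer(p); // ...so `p` is still usable afterwards, no `.clone()` required
}
```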
@ -52,28 +52,28 @@ that performs the bitwise copy.
### Case study 1: `String` ### Case study 1: `String`
`String` is a type that doesn't implement `Copy`. `String` is a type that doesn't implement `Copy`.\
Why? Because it manages an additional resource: the heap-allocated memory buffer that stores the string's data. Why? Because it manages an additional resource: the heap-allocated memory buffer that stores the string's data.
Let's imagine that Rust allowed `String` to implement `Copy`. Let's imagine that Rust allowed `String` to implement `Copy`.\
Then, when a new `String` instance is created by performing a bitwise copy of the original instance, both the original Then, when a new `String` instance is created by performing a bitwise copy of the original instance, both the original
and the new instance would point to the same memory buffer: and the new instance would point to the same memory buffer:
```text ```text
s copied_s s copied_s
+---------+--------+----------+ +---------+--------+----------+ +---------+--------+----------+ +---------+--------+----------+
| pointer | length | capacity | | pointer | length | capacity | | pointer | length | capacity | | pointer | length | capacity |
| | | 5 | 5 | | | | 5 | 5 | | | | 5 | 5 | | | | 5 | 5 |
+--|------+--------+----------+ +--|------+--------+----------+ +--|------+--------+----------+ +--|------+--------+----------+
| | | |
| | | |
v | v |
+---+---+---+---+---+ | +---+---+---+---+---+ |
| H | e | l | l | o | | | H | e | l | l | o | |
+---+---+---+---+---+ | +---+---+---+---+---+ |
^ | ^ |
| | | |
+------------------------------------+ +------------------------------------+
``` ```
This is bad! This is bad!
@ -84,7 +84,7 @@ violating Rust's borrowing rules.
### Case study 2: `u32` ### Case study 2: `u32`
`u32` implements `Copy`. All integer types do, in fact. `u32` implements `Copy`. All integer types do, in fact.\
An integer is "just" the bytes that represent the number in memory. There's nothing more! An integer is "just" the bytes that represent the number in memory. There's nothing more!
If you copy those bytes, you get another perfectly valid integer instance. If you copy those bytes, you get another perfectly valid integer instance.
Nothing bad can happen, so Rust allows it. Nothing bad can happen, so Rust allows it.
@ -92,7 +92,7 @@ Nothing bad can happen, so Rust allows it.
### Case study 3: `&mut u32` ### Case study 3: `&mut u32`
When we introduced ownership and mutable borrows, we stated one rule quite clearly: there When we introduced ownership and mutable borrows, we stated one rule quite clearly: there
can only ever be *one* mutable borrow of a value at any given time. can only ever be _one_ mutable borrow of a value at any given time.\
That's why `&mut u32` doesn't implement `Copy`, even though `u32` does. That's why `&mut u32` doesn't implement `Copy`, even though `u32` does.
If `&mut u32` implemented `Copy`, you could create multiple mutable references to If `&mut u32` implemented `Copy`, you could create multiple mutable references to

View file

@ -15,7 +15,7 @@ pub trait Drop {
``` ```
The `Drop` trait is a mechanism for you to define _additional_ cleanup logic for your types, The `Drop` trait is a mechanism for you to define _additional_ cleanup logic for your types,
beyond what the compiler does for you automatically. beyond what the compiler does for you automatically.\
Whatever you put in the `drop` method will be executed when the value goes out of scope. Whatever you put in the `drop` method will be executed when the value goes out of scope.
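A sketch of a custom `Drop` implementation (the type and cleanup message are illustrative):

```rust
struct TempFile {
    path: String,
}

impl Drop for TempFile {
    fn drop(&mut self) {
        // Additional cleanup logic, executed when the value goes out of scope.
        println!("cleaning up {}", self.path);
    }
}

fn main() {
    let _file = TempFile { path: "/tmp/scratch".into() };
    // `_file` goes out of scope here: `drop` runs, then its memory is released.
}
```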
## `Drop` and `Copy` ## `Drop` and `Copy`
@ -24,7 +24,7 @@ When talking about the `Copy` trait, we said that a type can't implement `Copy`
manages additional resources beyond the `std::mem::size_of` bytes that it occupies in memory. manages additional resources beyond the `std::mem::size_of` bytes that it occupies in memory.
You might wonder: how does the compiler know if a type manages additional resources? You might wonder: how does the compiler know if a type manages additional resources?
That's right: `Drop` trait implementations! That's right: `Drop` trait implementations!\
If your type has an explicit `Drop` implementation, the compiler will assume If your type has an explicit `Drop` implementation, the compiler will assume
that your type has additional resources attached to it and won't allow you to implement `Copy`. that your type has additional resources attached to it and won't allow you to implement `Copy`.

View file

@ -6,7 +6,7 @@ so often when writing Rust code that they'll soon become second nature.
## Closing thoughts ## Closing thoughts
Traits are powerful, but don't overuse them. Traits are powerful, but don't overuse them.\
A few guidelines to keep in mind: A few guidelines to keep in mind:
- Don't make a function generic if it is always invoked with a single type. It introduces indirection in your - Don't make a function generic if it is always invoked with a single type. It introduces indirection in your

View file

@ -1,7 +1,7 @@
# Enumerations # Enumerations
Based on the validation logic you wrote [in a previous chapter](../03_ticket_v1/02_validation.md), Based on the validation logic you wrote [in a previous chapter](../03_ticket_v1/02_validation.md),
there are only a few valid statuses for a ticket: `To-Do`, `InProgress` and `Done`. there are only a few valid statuses for a ticket: `To-Do`, `InProgress` and `Done`.\
This is not obvious if we look at the `status` field in the `Ticket` struct or at the type of the `status` This is not obvious if we look at the `status` field in the `Ticket` struct or at the type of the `status`
parameter in the `new` method: parameter in the `new` method:
@ -29,7 +29,7 @@ We can do better than that with **enumerations**.
## `enum` ## `enum`
An enumeration is a type that can have a fixed set of values, called **variants**. An enumeration is a type that can have a fixed set of values, called **variants**.\
In Rust, you define an enumeration using the `enum` keyword: In Rust, you define an enumeration using the `enum` keyword:
```rust ```rust

View file

@ -1,6 +1,6 @@
# `match` # `match`
You may be wondering—what can you actually **do** with an enum? You may be wondering—what can you actually **do** with an enum?\
The most common operation is to **match** on it. The most common operation is to **match** on it.
```rust ```rust
@ -22,13 +22,13 @@ impl Status {
} }
``` ```
A `match` statement lets you compare a Rust value against a series of **patterns**. A `match` statement lets you compare a Rust value against a series of **patterns**.\
You can think of it as a type-level `if`. If `status` is a `Done` variant, execute the first block; You can think of it as a type-level `if`. If `status` is a `Done` variant, execute the first block;
if it's an `InProgress` or `ToDo` variant, execute the second block. if it's an `InProgress` or `ToDo` variant, execute the second block.
## Exhaustiveness ## Exhaustiveness
There's one key detail here: `match` is **exhaustive**. You must handle all enum variants. There's one key detail here: `match` is **exhaustive**. You must handle all enum variants.\
If you forget to handle a variant, Rust will stop you **at compile-time** with an error. If you forget to handle a variant, Rust will stop you **at compile-time** with an error.
E.g. if we forget to handle the `ToDo` variant: E.g. if we forget to handle the `ToDo` variant:
@ -50,7 +50,7 @@ error[E0004]: non-exhaustive patterns: `ToDo` not covered
| ^^^^^^^^^^^^ pattern `ToDo` not covered | ^^^^^^^^^^^^ pattern `ToDo` not covered
``` ```
This is a big deal! This is a big deal!\
Codebases evolve over time—you might add a new status down the line, e.g. `Blocked`. The Rust compiler Codebases evolve over time—you might add a new status down the line, e.g. `Blocked`. The Rust compiler
will emit an error for every single `match` statement that's missing logic for the new variant. will emit an error for every single `match` statement that's missing logic for the new variant.
That's why Rust developers often sing the praises of "compiler-driven refactoring"—the compiler tells you That's why Rust developers often sing the praises of "compiler-driven refactoring"—the compiler tells you

View file

@ -8,7 +8,7 @@ enum Status {
} }
``` ```
Our `Status` enum is what's usually called a **C-style enum**. Our `Status` enum is what's usually called a **C-style enum**.\
Each variant is a simple label, a bit like a named constant. You can find this kind of enum in many programming Each variant is a simple label, a bit like a named constant. You can find this kind of enum in many programming
languages, like C, C++, Java, C#, Python, etc. languages, like C, C++, Java, C#, Python, etc.
@ -16,7 +16,7 @@ Rust enums can go further though. We can **attach data to each variant**.
## Variants ## Variants
Let's say that we want to store the name of the person who's currently working on a ticket. Let's say that we want to store the name of the person who's currently working on a ticket.\
We would only have this information if the ticket is in progress. It wouldn't be there for a to-do ticket or We would only have this information if the ticket is in progress. It wouldn't be there for a to-do ticket or
a done ticket. a done ticket.
We can model this by attaching a `String` field to the `InProgress` variant: We can model this by attaching a `String` field to the `InProgress` variant:
@ -31,7 +31,7 @@ enum Status {
} }
``` ```
`InProgress` is now a **struct-like variant**. `InProgress` is now a **struct-like variant**.\
The syntax mirrors, in fact, the one we used to define a struct—it's just "inlined" inside the enum, as a variant. The syntax mirrors, in fact, the one we used to define a struct—it's just "inlined" inside the enum, as a variant.
## Accessing variant data ## Accessing variant data
@ -55,7 +55,7 @@ error[E0609]: no field `assigned_to` on type `Status`
| ^^^^^^^^^^^ unknown field | ^^^^^^^^^^^ unknown field
``` ```
`assigned_to` is **variant-specific**: it's not available on all `Status` instances. `assigned_to` is **variant-specific**: it's not available on all `Status` instances.\
To access `assigned_to`, we need to use **pattern matching**: To access `assigned_to`, we need to use **pattern matching**:
```rust ```rust
@ -71,9 +71,9 @@ match status {
## Bindings ## Bindings
In the match pattern `Status::InProgress { assigned_to }`, `assigned_to` is a **binding**. In the match pattern `Status::InProgress { assigned_to }`, `assigned_to` is a **binding**.\
We're **destructuring** the `Status::InProgress` variant and binding the `assigned_to` field to We're **destructuring** the `Status::InProgress` variant and binding the `assigned_to` field to
a new variable, also named `assigned_to`. a new variable, also named `assigned_to`.\
If we wanted, we could bind the field to a different variable name: If we wanted, we could bind the field to a different variable name:
```rust ```rust

View file

@ -61,7 +61,7 @@ as the code that precedes it.
## Style ## Style
Both `if let` and `let/else` are idiomatic Rust constructs. Both `if let` and `let/else` are idiomatic Rust constructs.\
Use them as you see fit to improve the readability of your code, Use them as you see fit to improve the readability of your code,
but don't overdo it: `match` is always there when you need it. but don't overdo it: `match` is always there when you need it.
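For illustration only (the enum is a simplified stand-in for the one used in the exercises), here's how the two constructs compare when you only care about a single variant:

```rust
enum Status {
    ToDo,
    InProgress { assigned_to: String },
    Done,
}

// `if let` handles the one pattern we care about and ignores the rest.
fn assignee(status: &Status) -> Option<&str> {
    if let Status::InProgress { assigned_to } = status {
        Some(assigned_to.as_str())
    } else {
        None
    }
}

// `let/else` extracts the binding or bails out early.
fn assignee_or_default(status: &Status) -> &str {
    let Status::InProgress { assigned_to } = status else {
        return "nobody";
    };
    assigned_to.as_str()
}
```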

View file

@ -1,11 +1,11 @@
# Nullability # Nullability
Our implementation of the `assigned` method is fairly blunt: panicking for to-do and done tickets is far from ideal. Our implementation of the `assigned` method is fairly blunt: panicking for to-do and done tickets is far from ideal.\
We can do better using **Rust's `Option` type**. We can do better using **Rust's `Option` type**.
## `Option` ## `Option`
`Option` is a Rust type that represents **nullable values**. `Option` is a Rust type that represents **nullable values**.\
It is an enum, defined in Rust's standard library: It is an enum, defined in Rust's standard library:
```rust ```rust
@ -15,9 +15,9 @@ enum Option<T> {
} }
``` ```
`Option` encodes the idea that a value might be present (`Some(T)`) or absent (`None`). `Option` encodes the idea that a value might be present (`Some(T)`) or absent (`None`).\
It also forces you to **explicitly handle both cases**. You'll get a compiler error if you are working with It also forces you to **explicitly handle both cases**. You'll get a compiler error if you are working with
a nullable value and you forget to handle the `None` case. a nullable value and you forget to handle the `None` case.\
This is a significant improvement over "implicit" nullability in other languages, where you can forget to check This is a significant improvement over "implicit" nullability in other languages, where you can forget to check
for `null` and thus trigger a runtime error. for `null` and thus trigger a runtime error.
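As a quick illustrative sketch (the function is invented for this example, it's not part of the exercise):

```rust
fn label(assignee: Option<&str>) -> String {
    match assignee {
        Some(name) => format!("Assigned to {name}"),
        None => "Unassigned".to_string(),
    }
}

assert_eq!(label(Some("Alice")), "Assigned to Alice");
assert_eq!(label(None), "Unassigned");
```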
@ -27,7 +27,7 @@ for `null` and thus trigger a runtime error.
### Tuple-like variants ### Tuple-like variants
`Option` has two variants: `Some(T)` and `None`. `Option` has two variants: `Some(T)` and `None`.\
`Some` is a **tuple-like variant**: it's a variant that holds **unnamed fields**. `Some` is a **tuple-like variant**: it's a variant that holds **unnamed fields**.
Tuple-like variants are often used when there is a single field to store, especially when we're looking at a Tuple-like variants are often used when there is a single field to store, especially when we're looking at a
@ -51,7 +51,7 @@ let y = point.1;
### Tuples ### Tuples
It's weird to say that something is tuple-like when we haven't seen tuples yet! It's weird to say that something is tuple-like when we haven't seen tuples yet!\
Tuples are another example of a primitive Rust type. Tuples are another example of a primitive Rust type.
They group together a fixed number of values with (potentially different) types: They group together a fixed number of values with (potentially different) types:

View file

@ -52,13 +52,13 @@ Both `Ok` and `Err` are generic, allowing you to specify your own types for the
## No exceptions ## No exceptions
Recoverable errors in Rust are **represented as values**. Recoverable errors in Rust are **represented as values**.\
They're just an instance of a type, being passed around and manipulated like any other value. They're just an instance of a type, being passed around and manipulated like any other value.
This is a significant difference from other languages, such as Python or C#, where **exceptions** are used to signal errors. This is a significant difference from other languages, such as Python or C#, where **exceptions** are used to signal errors.
Exceptions create a separate control flow path that can be hard to reason about. Exceptions create a separate control flow path that can be hard to reason about.\
You don't know, just by looking at a function's signature, if it can throw an exception or not. You don't know, just by looking at a function's signature, if it can throw an exception or not.
You don't know, just by looking at a function's signature, **which** exception types it can throw. You don't know, just by looking at a function's signature, **which** exception types it can throw.\
You must either read the function's documentation or look at its implementation to find out. You must either read the function's documentation or look at its implementation to find out.
Exception handling logic has very poor locality: the code that throws the exception is far removed from the code Exception handling logic has very poor locality: the code that throws the exception is far removed from the code
@ -66,7 +66,7 @@ that catches it, and there's no direct link between the two.
## Fallibility is encoded in the type system ## Fallibility is encoded in the type system
Rust, with `Result`, forces you to **encode fallibility in the function's signature**. Rust, with `Result`, forces you to **encode fallibility in the function's signature**.\
If a function can fail (and you want the caller to have a shot at handling the error), it must return a `Result`. If a function can fail (and you want the caller to have a shot at handling the error), it must return a `Result`.
```rust ```rust

View file

@ -1,11 +1,11 @@
# Unwrapping # Unwrapping
`Ticket::new` now returns a `Result` instead of panicking on invalid inputs. `Ticket::new` now returns a `Result` instead of panicking on invalid inputs.\
What does this mean for the caller? What does this mean for the caller?
## Failures can't be (implicitly) ignored ## Failures can't be (implicitly) ignored
Unlike exceptions, Rust's `Result` forces you to **handle errors at the call site**. Unlike exceptions, Rust's `Result` forces you to **handle errors at the call site**.\
If you call a function that returns a `Result`, Rust won't allow you to implicitly ignore the error case. If you call a function that returns a `Result`, Rust won't allow you to implicitly ignore the error case.
```rust ```rust

View file

@ -1,6 +1,6 @@
# Error enums # Error enums
Your solution to the previous exercise may have felt awkward: matching on strings is not ideal! Your solution to the previous exercise may have felt awkward: matching on strings is not ideal!\
A colleague might rework the error messages returned by `Ticket::new` (e.g. to improve readability) and, A colleague might rework the error messages returned by `Ticket::new` (e.g. to improve readability) and,
all of a sudden, your calling code would break. all of a sudden, your calling code would break.
@ -22,7 +22,7 @@ enum U32ParseError {
``` ```
Using an error enum, you're encoding the different error cases in the type system—they become part of the Using an error enum, you're encoding the different error cases in the type system—they become part of the
signature of the fallible function. signature of the fallible function.\
This simplifies error handling for the caller, as they can use a `match` expression to react to the different This simplifies error handling for the caller, as they can use a `match` expression to react to the different
error cases: error cases:

View file

@ -3,7 +3,7 @@
## Error reporting ## Error reporting
In the previous exercise you had to destructure the `InvalidTitle` variant to extract the error message and In the previous exercise you had to destructure the `InvalidTitle` variant to extract the error message and
pass it to the `panic!` macro. pass it to the `panic!` macro.\
This is a (rudimentary) example of **error reporting**: transforming an error type into a representation that can be This is a (rudimentary) example of **error reporting**: transforming an error type into a representation that can be
shown to a user, a service operator, or a developer. shown to a user, a service operator, or a developer.
@ -46,8 +46,8 @@ pub trait Display {
} }
``` ```
The difference is in their *purpose*: `Display` returns a representation that's meant for "end-users", The difference is in their _purpose_: `Display` returns a representation that's meant for "end-users",
while `Debug` provides a low-level representation that's more suitable to developers and service operators. while `Debug` provides a low-level representation that's more suitable to developers and service operators.\
That's why `Debug` can be automatically implemented using the `#[derive(Debug)]` attribute, while `Display` That's why `Debug` can be automatically implemented using the `#[derive(Debug)]` attribute, while `Display`
**requires** a manual implementation. **requires** a manual implementation.
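To give a feel for the difference, here's a rough sketch of a manual `Display` implementation for an error type. The variant and the message are invented for this example; they don't mirror the exercise exactly.

```rust
use std::fmt;

// `Debug` can be derived...
#[derive(Debug)]
enum TicketNewError {
    TitleCannotBeEmpty,
}

// ...while `Display` must be written by hand.
impl fmt::Display for TicketNewError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            TicketNewError::TitleCannotBeEmpty => write!(f, "The title cannot be empty"),
        }
    }
}
```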

View file

@ -1,10 +1,10 @@
# Libraries and binaries # Libraries and binaries
It took a bit of code to implement the `Error` trait for `TicketNewError`, didn't it? It took a bit of code to implement the `Error` trait for `TicketNewError`, didn't it?\
A manual `Display` implementation, plus an `Error` impl block. A manual `Display` implementation, plus an `Error` impl block.
We can remove some of the boilerplate by using [`thiserror`](https://docs.rs/thiserror/latest/thiserror/), We can remove some of the boilerplate by using [`thiserror`](https://docs.rs/thiserror/latest/thiserror/),
a Rust crate that provides a **procedural macro** to simplify the creation of custom error types. a Rust crate that provides a **procedural macro** to simplify the creation of custom error types.\
But we're getting ahead of ourselves: `thiserror` is a third-party crate, it'd be our first dependency! But we're getting ahead of ourselves: `thiserror` is a third-party crate, it'd be our first dependency!
Let's take a step back to talk about Rust's packaging system before we dive into dependencies. Let's take a step back to talk about Rust's packaging system before we dive into dependencies.
@ -18,18 +18,18 @@ Go check the `Cargo.toml` file in the directory of this section's exercise!
## What is a crate? ## What is a crate?
Inside a package, you can have one or more **crates**, also known as **targets**. Inside a package, you can have one or more **crates**, also known as **targets**.\
The two most common crate types are **binary crates** and **library crates**. The two most common crate types are **binary crates** and **library crates**.
### Binaries ### Binaries
A binary is a program that can be compiled to an **executable file**. A binary is a program that can be compiled to an **executable file**.\
It must include a function named `main`—the program's entry point. `main` is invoked when the program is executed. It must include a function named `main`—the program's entry point. `main` is invoked when the program is executed.
### Libraries ### Libraries
Libraries, on the other hand, are not executable on their own. You can't _run_ a library, Libraries, on the other hand, are not executable on their own. You can't _run_ a library,
but you can _import its code_ from another package that depends on it. but you can _import its code_ from another package that depends on it.\
A library groups together code (i.e. functions, types, etc.) that can be leveraged by other packages as a **dependency**. A library groups together code (i.e. functions, types, etc.) that can be leveraged by other packages as a **dependency**.
All the exercises you've solved so far have been structured as libraries, with a test suite attached to them. All the exercises you've solved so far have been structured as libraries, with a test suite attached to them.

View file

@ -1,6 +1,6 @@
# Dependencies # Dependencies
A package can depend on other packages by listing them in the `[dependencies]` section of its `Cargo.toml` file. A package can depend on other packages by listing them in the `[dependencies]` section of its `Cargo.toml` file.\
The most common way to specify a dependency is by providing its name and version: The most common way to specify a dependency is by providing its name and version:
```toml ```toml
@ -43,7 +43,7 @@ details on where you can get dependencies from and how to specify them in your `
## Dev dependencies ## Dev dependencies
You can also specify dependencies that are only needed for development—i.e. they only get pulled in when you're You can also specify dependencies that are only needed for development—i.e. they only get pulled in when you're
running `cargo test`. running `cargo test`.\
They go in the `[dev-dependencies]` section of your `Cargo.toml` file: They go in the `[dev-dependencies]` section of your `Cargo.toml` file:
```toml ```toml

View file

@ -1,11 +1,11 @@
# `thiserror` # `thiserror`
That was a bit of a detour, wasn't it? But a necessary one! That was a bit of a detour, wasn't it? But a necessary one!\
Let's get back on track now: custom error types and `thiserror`. Let's get back on track now: custom error types and `thiserror`.
## Custom error types ## Custom error types
We've seen how to implement the `Error` trait "manually" for a custom error type. We've seen how to implement the `Error` trait "manually" for a custom error type.\
Imagine that you have to do this for most error types in your codebase. That's a lot of boilerplate, isn't it? Imagine that you have to do this for most error types in your codebase. That's a lot of boilerplate, isn't it?
We can remove some of the boilerplate by using [`thiserror`](https://docs.rs/thiserror/latest/thiserror/), We can remove some of the boilerplate by using [`thiserror`](https://docs.rs/thiserror/latest/thiserror/),
@ -23,12 +23,12 @@ enum TicketNewError {
## You can write your own macros ## You can write your own macros
All the `derive` macros we've seen so far were provided by the Rust standard library. All the `derive` macros we've seen so far were provided by the Rust standard library.\
`thiserror::Error` is the first example of a **third-party** `derive` macro. `thiserror::Error` is the first example of a **third-party** `derive` macro.
`derive` macros are a subset of **procedural macros**, a way to generate Rust code at compile time. `derive` macros are a subset of **procedural macros**, a way to generate Rust code at compile time.
We won't get into the details of how to write a procedural macro in this course, but it's important We won't get into the details of how to write a procedural macro in this course, but it's important
to know that you can write your own! to know that you can write your own!\
A topic to approach in a more advanced Rust course. A topic to approach in a more advanced Rust course.
## Custom syntax ## Custom syntax

View file

@ -1,7 +1,7 @@
# `TryFrom` and `TryInto` # `TryFrom` and `TryInto`
In the previous chapter we looked at the [`From` and `Into` traits](../04_traits/09_from.md), In the previous chapter we looked at the [`From` and `Into` traits](../04_traits/09_from.md),
Rust's idiomatic interfaces for **infallible** type conversions. Rust's idiomatic interfaces for **infallible** type conversions.\
But what if the conversion is not guaranteed to succeed? But what if the conversion is not guaranteed to succeed?
We now know enough about errors to discuss the **fallible** counterparts of `From` and `Into`: We now know enough about errors to discuss the **fallible** counterparts of `From` and `Into`:
@ -23,7 +23,7 @@ pub trait TryInto<T>: Sized {
} }
``` ```
The main difference between `From`/`Into` and `TryFrom`/`TryInto` is that the latter return a `Result` type. The main difference between `From`/`Into` and `TryFrom`/`TryInto` is that the latter return a `Result` type.\
This allows the conversion to fail, returning an error instead of panicking. This allows the conversion to fail, returning an error instead of panicking.
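Here's a minimal sketch of what a fallible conversion might look like. The enum and the error message are simplified placeholders rather than the exercise's exact code:

```rust
#[derive(Debug, PartialEq)]
enum Status {
    ToDo,
    InProgress,
    Done,
}

impl TryFrom<&str> for Status {
    // The conversion can fail, so we must name an error type.
    type Error = String;

    fn try_from(value: &str) -> Result<Self, Self::Error> {
        match value {
            "ToDo" => Ok(Status::ToDo),
            "InProgress" => Ok(Status::InProgress),
            "Done" => Ok(Status::Done),
            other => Err(format!("`{other}` is not a valid status")),
        }
    }
}
```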
## `Self::Error` ## `Self::Error`
@ -36,7 +36,7 @@ being attempted.
## Duality ## Duality
Just like `From` and `Into`, `TryFrom` and `TryInto` are dual traits. Just like `From` and `Into`, `TryFrom` and `TryInto` are dual traits.\
If you implement `TryFrom` for a type, you get `TryInto` for free. If you implement `TryFrom` for a type, you get `TryInto` for free.
## References ## References

View file

@ -11,7 +11,7 @@ pub trait Error: Debug + Display {
} }
``` ```
The `source` method is a way to access the **error cause**, if any. The `source` method is a way to access the **error cause**, if any.\
Errors are often chained, meaning that one error is the cause of another: you have a high-level error (e.g. Errors are often chained, meaning that one error is the cause of another: you have a high-level error (e.g.
cannot connect to the database) that is caused by a lower-level error (e.g. can't resolve the database hostname). cannot connect to the database) that is caused by a lower-level error (e.g. can't resolve the database hostname).
The `source` method allows you to "walk" the full chain of errors, often used when capturing error context in logs. The `source` method allows you to "walk" the full chain of errors, often used when capturing error context in logs.
@ -19,7 +19,7 @@ The `source` method allows you to "walk" the full chain of errors, often used wh
## Implementing `source` ## Implementing `source`
The `Error` trait provides a default implementation that always returns `None` (i.e. no underlying cause). That's why The `Error` trait provides a default implementation that always returns `None` (i.e. no underlying cause). That's why
you didn't have to care about `source` in the previous exercises. you didn't have to care about `source` in the previous exercises.\
You can override this default implementation to provide a cause for your error type. You can override this default implementation to provide a cause for your error type.
```rust ```rust
@ -48,7 +48,7 @@ We then override the `source` method to return this source when called.
## `&(dyn Error + 'static)` ## `&(dyn Error + 'static)`
What's this `&(dyn Error + 'static)` type? What's this `&(dyn Error + 'static)` type?\
Let's unpack it: Let's unpack it:
- `dyn Error` is a **trait object**. It's a way to refer to any type that implements the `Error` trait. - `dyn Error` is a **trait object**. It's a way to refer to any type that implements the `Error` trait.
@ -89,7 +89,7 @@ Don't worry too much about either of these concepts for now. We'll cover them in
} }
} }
``` ```
- A field annotated with the `#[from]` attribute will automatically be used as the source of the error **and** - A field annotated with the `#[from]` attribute will automatically be used as the source of the error **and**
`thiserror` will automatically generate a `From` implementation to convert the annotated type into your error type. `thiserror` will automatically generate a `From` implementation to convert the annotated type into your error type.
```rust ```rust
use thiserror::Error; use thiserror::Error;
@ -106,7 +106,7 @@ Don't worry too much about either of these concepts for now. We'll cover them in
## The `?` operator ## The `?` operator
The `?` operator is a shorthand for propagating errors. The `?` operator is a shorthand for propagating errors.\
When used in a function that returns a `Result`, it will return early with an error if the `Result` is `Err`. When used in a function that returns a `Result`, it will return early with an error if the `Result` is `Err`.
For example: For example:
@ -145,7 +145,7 @@ fn read_file() -> Result<String, std::io::Error> {
} }
``` ```
You can use the `?` operator to shorten your error handling code significantly. You can use the `?` operator to shorten your error handling code significantly.\
In particular, the `?` operator will automatically convert the error type of the fallible operation into the error type In particular, the `?` operator will automatically convert the error type of the fallible operation into the error type
of the function, if a conversion is possible (i.e. if there is a suitable `From` implementation) of the function, if a conversion is possible (i.e. if there is a suitable `From` implementation)
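Here's a compact sketch of that conversion in action (both error types are invented for the example):

```rust
#[derive(Debug)]
struct ParseError;

#[derive(Debug)]
enum AppError {
    InvalidInput(ParseError),
}

impl From<ParseError> for AppError {
    fn from(e: ParseError) -> Self {
        AppError::InvalidInput(e)
    }
}

fn parse_number(s: &str) -> Result<u32, ParseError> {
    s.parse::<u32>().map_err(|_| ParseError)
}

fn double(s: &str) -> Result<u32, AppError> {
    // On `Err`, `?` returns early and converts `ParseError` into `AppError`
    // using the `From` implementation above.
    let n = parse_number(s)?;
    Ok(n * 2)
}
```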

View file

@ -1,11 +1,11 @@
# Wrapping up # Wrapping up
When it comes to domain modelling, the devil is in the details. When it comes to domain modelling, the devil is in the details.\
Rust offers a wide range of tools to help you represent the constraints of your domain directly in the type system, Rust offers a wide range of tools to help you represent the constraints of your domain directly in the type system,
but it takes some practice to get it right and write code that looks idiomatic. but it takes some practice to get it right and write code that looks idiomatic.
Let's close the chapter with one final refinement of our `Ticket` model. Let's close the chapter with one final refinement of our `Ticket` model.\
We'll introduce a new type for each of the fields in `Ticket` to encapsulate the respective constraints. We'll introduce a new type for each of the fields in `Ticket` to encapsulate the respective constraints.\
Every time someone accesses a `Ticket` field, they'll get back a value that's guaranteed to be valid—i.e. a Every time someone accesses a `Ticket` field, they'll get back a value that's guaranteed to be valid—i.e. a
`TicketTitle` instead of a `String`. They won't have to worry about the title being empty elsewhere in the code: `TicketTitle` instead of a `String`. They won't have to worry about the title being empty elsewhere in the code:
as long as they have a `TicketTitle`, they know it's valid **by construction**. as long as they have a `TicketTitle`, they know it's valid **by construction**.
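A rough sketch of the idea (the validation rule shown here is a simplification of what the exercise asks for):

```rust
pub struct TicketTitle(String);

impl TicketTitle {
    pub fn new(title: String) -> Result<Self, String> {
        if title.is_empty() {
            return Err("The title cannot be empty".to_string());
        }
        Ok(Self(title))
    }
}
```

The only way to obtain a `TicketTitle` is through the validating constructor, so holding one is proof that the invariant holds.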

View file

@ -8,7 +8,7 @@ What does Rust have to offer in this regard?
## Arrays ## Arrays
A first attempt could be to use an **array**. A first attempt could be to use an **array**.\
Arrays in Rust are fixed-size collections of elements of the same type. Arrays in Rust are fixed-size collections of elements of the same type.
Here's how you can define an array: Here's how you can define an array:
@ -18,7 +18,7 @@ Here's how you can define an array:
let numbers: [u32; 3] = [1, 2, 3]; let numbers: [u32; 3] = [1, 2, 3];
``` ```
This creates an array of 3 integers, initialized with the values `1`, `2`, and `3`. This creates an array of 3 integers, initialized with the values `1`, `2`, and `3`.\
The type of the array is `[u32; 3]`, which reads as "an array of `u32`s with a length of 3". The type of the array is `[u32; 3]`, which reads as "an array of `u32`s with a length of 3".
### Accessing elements ### Accessing elements
@ -31,7 +31,7 @@ let second = numbers[1];
let third = numbers[2]; let third = numbers[2];
``` ```
The index must be of type `usize`. The index must be of type `usize`.\
Arrays are **zero-indexed**, like everything in Rust. You've seen this before with string slices and field indexing in Arrays are **zero-indexed**, like everything in Rust. You've seen this before with string slices and field indexing in
tuples/tuple-like variants. tuples/tuple-like variants.
@ -45,7 +45,7 @@ let fourth = numbers[3]; // This will panic
``` ```
This is enforced at runtime using **bounds checking**. It comes with a small performance overhead, but it's how This is enforced at runtime using **bounds checking**. It comes with a small performance overhead, but it's how
Rust prevents buffer overflows. Rust prevents buffer overflows.\
In some scenarios the Rust compiler can optimize away bounds checks, especially if iterators are involved—we'll speak In some scenarios the Rust compiler can optimize away bounds checks, especially if iterators are involved—we'll speak
more about this later on. more about this later on.
@ -77,5 +77,5 @@ Stack: | 1 | 2 | 3 |
``` ```
In other words, the size of an array is `std::mem::size_of::<T>() * N`, where `T` is the type of the elements and `N` is In other words, the size of an array is `std::mem::size_of::<T>() * N`, where `T` is the type of the elements and `N` is
the number of elements. the number of elements.\
You can access and replace each element in `O(1)` time. You can access and replace each element in `O(1)` time.
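You can verify this with `std::mem::size_of`. A tiny illustrative snippet:

```rust
// 3 elements × 4 bytes per `u32` = 12 bytes, all on the stack.
assert_eq!(std::mem::size_of::<[u32; 3]>(), 12);

let numbers: [u32; 3] = [1, 2, 3];
assert_eq!(std::mem::size_of_val(&numbers), 12);
```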

View file

@ -22,7 +22,7 @@ This is where `Vec` comes in.
## `Vec` ## `Vec`
`Vec` is a growable array type, provided by the standard library. `Vec` is a growable array type, provided by the standard library.\
You can create an empty array using the `Vec::new` function: You can create an empty array using the `Vec::new` function:
```rust ```rust
@ -37,7 +37,7 @@ numbers.push(2);
numbers.push(3); numbers.push(3);
``` ```
New values are added to the end of the vector. New values are added to the end of the vector.\
You can also create an initialized vector using the `vec!` macro, if you know the values at creation time: You can also create an initialized vector using the `vec!` macro, if you know the values at creation time:
```rust ```rust
@ -55,7 +55,7 @@ let second = numbers[1];
let third = numbers[2]; let third = numbers[2];
``` ```
The index must be of type `usize`. The index must be of type `usize`.\
You can also use the `get` method, which returns an `Option<&T>`: You can also use the `get` method, which returns an `Option<&T>`:
```rust ```rust
@ -70,7 +70,7 @@ Access is bounds-checked, just like element access with arrays. It has O(1) complexit
## Memory layout ## Memory layout
`Vec` is a heap-allocated data structure. `Vec` is a heap-allocated data structure.\
When you create a `Vec`, it allocates memory on the heap to store the elements. When you create a `Vec`, it allocates memory on the heap to store the elements.
If you run the following code: If you run the following code:
@ -102,7 +102,7 @@ Heap: | 1 | 2 | ? |
- The **length** of the vector, i.e. how many elements are in the vector. - The **length** of the vector, i.e. how many elements are in the vector.
- The **capacity** of the vector, i.e. the number of elements that can fit in the space reserved on the heap. - The **capacity** of the vector, i.e. the number of elements that can fit in the space reserved on the heap.
This layout should look familiar: it's exactly the same as `String`! This layout should look familiar: it's exactly the same as `String`!\
That's not a coincidence: `String` is defined as a vector of bytes, `Vec<u8>`, under the hood: That's not a coincidence: `String` is defined as a vector of bytes, `Vec<u8>`, under the hood:
```rust ```rust

View file

@ -11,7 +11,7 @@ numbers.push(3); // Max capacity reached
numbers.push(4); // What happens here? numbers.push(4); // What happens here?
``` ```
The `Vec` will **resize** itself. The `Vec` will **resize** itself.\
It will ask the allocator for a new (larger) chunk of heap memory, copy the elements over, and deallocate the old memory. It will ask the allocator for a new (larger) chunk of heap memory, copy the elements over, and deallocate the old memory.
This operation can be expensive, as it involves a new memory allocation and copying all existing elements. This operation can be expensive, as it involves a new memory allocation and copying all existing elements.
@ -19,7 +19,7 @@ This operation can be expensive, as it involves a new memory allocation and copy
## `Vec::with_capacity` ## `Vec::with_capacity`
If you have a rough idea of how many elements you'll store in a `Vec`, you can use the `Vec::with_capacity` If you have a rough idea of how many elements you'll store in a `Vec`, you can use the `Vec::with_capacity`
method to pre-allocate enough memory upfront. method to pre-allocate enough memory upfront.\
This can avoid a new allocation when the `Vec` grows, but it may waste memory if you overestimate actual usage. This can avoid a new allocation when the `Vec` grows, but it may waste memory if you overestimate actual usage.
Evaluate on a case-by-case basis. Evaluate on a case-by-case basis.
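A small illustrative snippet of the difference:

```rust
let mut numbers: Vec<u32> = Vec::with_capacity(3);
let initial_capacity = numbers.capacity();
assert!(initial_capacity >= 3);

numbers.push(1);
numbers.push(2);
numbers.push(3);

// No reallocation was needed: the capacity is unchanged.
assert_eq!(numbers.capacity(), initial_capacity);
assert_eq!(numbers.len(), 3);
```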

View file

@ -35,7 +35,7 @@ loop {
} }
``` ```
`loop` is another looping construct, on top of `for` and `while`. `loop` is another looping construct, on top of `for` and `while`.\
A `loop` block will run forever, unless you explicitly `break` out of it. A `loop` block will run forever, unless you explicitly `break` out of it.
## `Iterator` trait ## `Iterator` trait
@ -53,7 +53,7 @@ trait Iterator {
The `Item` associated type specifies the type of the values produced by the iterator. The `Item` associated type specifies the type of the values produced by the iterator.
`next` returns the next value in the sequence. `next` returns the next value in the sequence.\
It returns `Some(value)` if there's a value to return, and `None` when there isn't. It returns `Some(value)` if there's a value to return, and `None` when there isn't.
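You can call `next` by hand to see the mechanics. A minimal sketch:

```rust
let numbers = vec![1, 2];
let mut iterator = numbers.into_iter();

assert_eq!(iterator.next(), Some(1));
assert_eq!(iterator.next(), Some(2));
// The sequence is over: `next` returns `None`.
assert_eq!(iterator.next(), None);
```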
Be careful: there is no guarantee that an iterator is exhausted when it returns `None`. That's only Be careful: there is no guarantee that an iterator is exhausted when it returns `None`. That's only
@ -62,7 +62,7 @@ guaranteed if the iterator implements the (more restrictive)
## `IntoIterator` trait ## `IntoIterator` trait
Not all types implement `Iterator`, but many can be converted into a type that does. Not all types implement `Iterator`, but many can be converted into a type that does.\
That's where the `IntoIterator` trait comes in: That's where the `IntoIterator` trait comes in:
```rust ```rust
@ -73,7 +73,7 @@ trait IntoIterator {
} }
``` ```
The `into_iter` method consumes the original value and returns an iterator over its elements. The `into_iter` method consumes the original value and returns an iterator over its elements.\
A type can only have one implementation of `IntoIterator`: there can be no ambiguity as to what `for` should desugar to. A type can only have one implementation of `IntoIterator`: there can be no ambiguity as to what `for` should desugar to.
One detail: every type that implements `Iterator` automatically implements `IntoIterator` as well. One detail: every type that implements `Iterator` automatically implements `IntoIterator` as well.
@ -81,7 +81,7 @@ They just return themselves from `into_iter`!
## Bounds checks ## Bounds checks
Iterating over iterators has a nice side effect: you can't go out of bounds, by design. Iterating over iterators has a nice side effect: you can't go out of bounds, by design.\
This allows Rust to remove bounds checks from the generated machine code, making iteration faster. This allows Rust to remove bounds checks from the generated machine code, making iteration faster.
In other words, In other words,

View file

@ -21,7 +21,7 @@ for n in numbers.iter() {
``` ```
This pattern can be simplified by implementing `IntoIterator` for a **reference to the collection**. This pattern can be simplified by implementing `IntoIterator` for a **reference to the collection**.
In our example above, that would be `&Vec<Ticket>`. In our example above, that would be `&Vec<Ticket>`.\
The standard library does this; that's why the following code works: The standard library does this; that's why the following code works:
```rust ```rust

View file

@ -16,8 +16,8 @@ impl IntoIterator for &TicketStore {
} }
``` ```
What should `type IntoIter` be set to? What should `type IntoIter` be set to?\
Intuitively, it should be the type returned by `self.tickets.iter()`, i.e. the type returned by `Vec::iter()`. Intuitively, it should be the type returned by `self.tickets.iter()`, i.e. the type returned by `Vec::iter()`.\
If you check the standard library documentation, you'll find that `Vec::iter()` returns an `std::slice::Iter`. If you check the standard library documentation, you'll find that `Vec::iter()` returns an `std::slice::Iter`.
The definition of `Iter` is: The definition of `Iter` is:
@ -30,7 +30,7 @@ pub struct Iter<'a, T> { /* fields omitted */ }
## Lifetime parameters ## Lifetime parameters
Lifetimes are **labels** used by the Rust compiler to keep track of how long a reference (either mutable or Lifetimes are **labels** used by the Rust compiler to keep track of how long a reference (either mutable or
immutable) is valid. immutable) is valid.\
The lifetime of a reference is constrained by the scope of the value it refers to. Rust always makes sure, at compile-time, The lifetime of a reference is constrained by the scope of the value it refers to. Rust always makes sure, at compile-time,
that references are not used after the value they refer to has been dropped, to avoid dangling pointers and use-after-free bugs. that references are not used after the value they refer to has been dropped, to avoid dangling pointers and use-after-free bugs.
@ -49,7 +49,7 @@ impl <T> Vec<T> {
} }
``` ```
`Vec::iter()` is generic over a lifetime parameter, named `'a`. `Vec::iter()` is generic over a lifetime parameter, named `'a`.\
`'a` is used to **tie together** the lifetime of the `Vec` and the lifetime of the `Iter` returned by `iter()`. `'a` is used to **tie together** the lifetime of the `Vec` and the lifetime of the `Iter` returned by `iter()`.
In plain English: the `Iter` returned by `iter()` cannot outlive the `Vec` reference (`&self`) it was created from. In plain English: the `Iter` returned by `iter()` cannot outlive the `Vec` reference (`&self`) it was created from.
@ -74,7 +74,7 @@ No explicit lifetime parameter is present in the signature of `Vec::iter()`.
Elision rules imply that the lifetime of the `Iter` returned by `iter()` is tied to the lifetime of the `&self` reference. Elision rules imply that the lifetime of the `Iter` returned by `iter()` is tied to the lifetime of the `&self` reference.
You can think of `'_` as a **placeholder** for the lifetime of the `&self` reference. You can think of `'_` as a **placeholder** for the lifetime of the `&self` reference.
See the [References](#references) section for a link to the official documentation on lifetime elision. See the [References](#references) section for a link to the official documentation on lifetime elision.\
In most cases, you can rely on the compiler telling you when you need to add explicit lifetime annotations. In most cases, you can rely on the compiler telling you when you need to add explicit lifetime annotations.
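To make the elision rules concrete, here's a small sketch. The two signatures below mean exactly the same thing:

```rust
// Explicit: the returned reference cannot outlive the input slice.
fn first_explicit<'a>(titles: &'a [String]) -> Option<&'a String> {
    titles.first()
}

// Elided: the compiler fills in the same lifetime relationship for us.
fn first_elided(titles: &[String]) -> Option<&String> {
    titles.first()
}
```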
## References ## References

View file

@ -1,6 +1,6 @@
# Combinators # Combinators
Iterators can do so much more than `for` loops! Iterators can do so much more than `for` loops!\
If you look at the documentation for the `Iterator` trait, you'll find a **vast** collection of If you look at the documentation for the `Iterator` trait, you'll find a **vast** collection of
methods that you can leverage to transform, filter, and combine iterators in various ways. methods that you can leverage to transform, filter, and combine iterators in various ways.
@ -15,7 +15,7 @@ Let's mention the most common ones:
- `take` stops the iterator after `n` elements. - `take` stops the iterator after `n` elements.
- `chain` combines two iterators into one. - `chain` combines two iterators into one.
These methods are called **combinators**. These methods are called **combinators**.\
They are usually **chained** together to create complex transformations in a concise and readable way: They are usually **chained** together to create complex transformations in a concise and readable way:
```rust ```rust
@ -29,10 +29,10 @@ let outcome: u32 = numbers.iter()
## Closures ## Closures
What's going on with the `filter` and `map` methods above? What's going on with the `filter` and `map` methods above?\
They take **closures** as arguments. They take **closures** as arguments.
Closures are **anonymous functions**, i.e. functions that are not defined using the `fn` syntax we are used to. Closures are **anonymous functions**, i.e. functions that are not defined using the `fn` syntax we are used to.\
They are defined using the `|args| body` syntax, where `args` are the arguments and `body` is the function body. They are defined using the `|args| body` syntax, where `args` are the arguments and `body` is the function body.
`body` can be a block of code or a single expression. `body` can be a block of code or a single expression.
For example: For example:
@ -70,10 +70,10 @@ let add_one: fn(i32) -> i32 = |x| x + 1;
## `collect` ## `collect`
What happens when you're done transforming an iterator using combinators? What happens when you're done transforming an iterator using combinators?\
You either iterate over the transformed values using a `for` loop, or you collect them into a collection. You either iterate over the transformed values using a `for` loop, or you collect them into a collection.
The latter is done using the `collect` method. The latter is done using the `collect` method.\
`collect` consumes the iterator and collects its elements into a collection of your choice. `collect` consumes the iterator and collects its elements into a collection of your choice.
For example, you can collect the squares of the even numbers into a `Vec`: For example, you can collect the squares of the even numbers into a `Vec`:
@ -86,7 +86,7 @@ let squares_of_evens: Vec<u32> = numbers.iter()
.collect(); .collect();
``` ```
`collect` is generic over its **return type**. `collect` is generic over its **return type**.\
Therefore you usually need to provide a type hint to help the compiler infer the correct type. Therefore you usually need to provide a type hint to help the compiler infer the correct type.
In the example above, we annotated the type of `squares_of_evens` to be `Vec<u32>`. In the example above, we annotated the type of `squares_of_evens` to be `Vec<u32>`.
Alternatively, you can use the **turbofish syntax** to specify the type: Alternatively, you can use the **turbofish syntax** to specify the type:

View file

@ -1,6 +1,6 @@
# `impl Trait` # `impl Trait`
`TicketStore::to_dos` returns a `Vec<&Ticket>`. `TicketStore::to_dos` returns a `Vec<&Ticket>`.\
That signature introduces a new heap allocation every time `to_dos` is called, which may be unnecessary depending That signature introduces a new heap allocation every time `to_dos` is called, which may be unnecessary depending
on what the caller needs to do with the result. on what the caller needs to do with the result.
It'd be better if `to_dos` returned an iterator instead of a `Vec`, thus empowering the caller to decide whether to It'd be better if `to_dos` returned an iterator instead of a `Vec`, thus empowering the caller to decide whether to
@ -25,8 +25,8 @@ The `filter` method returns an instance of `std::iter::Filter`, which has the fo
pub struct Filter<I, P> { /* fields omitted */ } pub struct Filter<I, P> { /* fields omitted */ }
``` ```
where `I` is the type of the iterator being filtered on and `P` is the predicate used to filter the elements. where `I` is the type of the iterator being filtered on and `P` is the predicate used to filter the elements.\
We know that `I` is `std::slice::Iter<'_, Ticket>` in this case, but what about `P`? We know that `I` is `std::slice::Iter<'_, Ticket>` in this case, but what about `P`?\
`P` is a closure, an **anonymous function**. As the name suggests, closures don't have a name, `P` is a closure, an **anonymous function**. As the name suggests, closures don't have a name,
so we can't write them down in our code. so we can't write them down in our code.
@ -65,5 +65,5 @@ only that it implements the specified trait(s). But the compiler knows the exact
## RPIT ## RPIT
If you read RFCs or deep-dives about Rust, you might come across the acronym **RPIT**. If you read RFCs or deep-dives about Rust, you might come across the acronym **RPIT**.\
It stands for **"Return Position Impl Trait"** and refers to the use of `impl Trait` in return position. It stands for **"Return Position Impl Trait"** and refers to the use of `impl Trait` in return position.

View file

@ -1,6 +1,6 @@
# `impl Trait` in argument position # `impl Trait` in argument position
In the previous section, we saw how `impl Trait` can be used to return a type without specifying its name. In the previous section, we saw how `impl Trait` can be used to return a type without specifying its name.\
The same syntax can also be used in **argument position**: The same syntax can also be used in **argument position**:
```rust ```rust
@ -11,7 +11,7 @@ fn print_iter(iter: impl Iterator<Item = i32>) {
} }
``` ```
`print_iter` takes an iterator of `i32`s and prints each element. `print_iter` takes an iterator of `i32`s and prints each element.\
When used in **argument position**, `impl Trait` is equivalent to a generic parameter with a trait bound: When used in **argument position**, `impl Trait` is equivalent to a generic parameter with a trait bound:
```rust ```rust
@ -27,6 +27,6 @@ where
## Downsides ## Downsides
As a rule of thumb, prefer generics over `impl Trait` in argument position. As a rule of thumb, prefer generics over `impl Trait` in argument position.\
Generics allow the caller to explicitly specify the type of the argument, using the turbofish syntax (`::<>`), Generics allow the caller to explicitly specify the type of the argument, using the turbofish syntax (`::<>`),
which can be useful for disambiguation. That's not the case with `impl Trait`. which can be useful for disambiguation. That's not the case with `impl Trait`.
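A sketch of what that disambiguation looks like (the function name is made up for this example):

```rust
fn print_generic<I>(iter: I)
where
    I: Iterator<Item = i32>,
{
    for n in iter {
        println!("{n}");
    }
}

let numbers = vec![1, 2, 3];
// With a generic parameter, the caller can spell the type out explicitly.
print_generic::<std::vec::IntoIter<i32>>(numbers.into_iter());
// With `impl Trait` in argument position, there's no equivalent syntax.
```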

View file

@ -21,12 +21,12 @@ Heap: | 1 | 2 | ? |
+---+---+---+ +---+---+---+
``` ```
We already remarked how `String` is just a `Vec<u8>` in disguise. We already remarked how `String` is just a `Vec<u8>` in disguise.\
The similarity should prompt you to ask: "What's the equivalent of `&str` for `Vec`?" The similarity should prompt you to ask: "What's the equivalent of `&str` for `Vec`?"
## `&[T]` ## `&[T]`
`[T]` is a **slice** of a contiguous sequence of elements of type `T`. `[T]` is a **slice** of a contiguous sequence of elements of type `T`.\
It's most commonly used in its borrowed form, `&[T]`. It's most commonly used in its borrowed form, `&[T]`.
There are various ways to create a slice reference from a `Vec`: There are various ways to create a slice reference from a `Vec`:
@ -54,7 +54,7 @@ let sum: i32 = numbers.iter().sum();
### Memory layout ### Memory layout
A `&[T]` is a **fat pointer**, just like `&str`. A `&[T]` is a **fat pointer**, just like `&str`.\
It consists of a pointer to the first element of the slice and the length of the slice. It consists of a pointer to the first element of the slice and the length of the slice.
If you have a `Vec` with three elements: If you have a `Vec` with three elements:
@ -90,7 +90,7 @@ Heap: | 1 | 2 | 3 | ? | |
### `&Vec<T>` vs `&[T]` ### `&Vec<T>` vs `&[T]`
When you need to pass an immutable reference to a `Vec` to a function, prefer `&[T]` over `&Vec<T>`. When you need to pass an immutable reference to a `Vec` to a function, prefer `&[T]` over `&Vec<T>`.\
This allows the function to accept any kind of slice, not necessarily one backed by a `Vec`. This allows the function to accept any kind of slice, not necessarily one backed by a `Vec`.
For example, you can then pass a subset of the elements in a `Vec`. For example, you can then pass a subset of the elements in a `Vec`.
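A short sketch of why the slice version is more flexible:

```rust
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

let numbers = vec![1, 2, 3, 4];
// A `&Vec<i32>` coerces to `&[i32]`...
assert_eq!(sum(&numbers), 10);
// ...and so does a sub-slice, which `&Vec<i32>` couldn't express.
assert_eq!(sum(&numbers[..2]), 3);
```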

View file

@ -1,6 +1,6 @@
# Mutable slices # Mutable slices
Every time we've talked about slice types (like `str` and `[T]`), we've used their immutable borrow form (`&str` and `&[T]`). Every time we've talked about slice types (like `str` and `[T]`), we've used their immutable borrow form (`&str` and `&[T]`).\
But slices can also be mutable! But slices can also be mutable!
Here's how you create a mutable slice: Here's how you create a mutable slice:
@ -21,7 +21,7 @@ This will change the first element of the `Vec` to `42`.
## Limitations ## Limitations
When working with immutable borrows, the recommendation was clear: prefer slice references over references to When working with immutable borrows, the recommendation was clear: prefer slice references over references to
the owned type (e.g. `&[T]` over `&Vec<T>`). the owned type (e.g. `&[T]` over `&Vec<T>`).\
That's **not** the case with mutable borrows. That's **not** the case with mutable borrows.
Consider this scenario: Consider this scenario:
@ -32,10 +32,10 @@ let mut slice: &mut [i32] = &mut numbers;
slice.push(1); slice.push(1);
``` ```
It won't compile! It won't compile!\
`push` is a method on `Vec`, not on slices. This is the manifestation of a more general principle: Rust won't `push` is a method on `Vec`, not on slices. This is the manifestation of a more general principle: Rust won't
allow you to add or remove elements from a slice. You will only be able to modify/replace the elements that are allow you to add or remove elements from a slice. You will only be able to modify/replace the elements that are
already there. already there.
In this regard, a `&mut Vec` or a `&mut String` are strictly more powerful than a `&mut [T]` or a `&mut str`. In this regard, a `&mut Vec` or a `&mut String` are strictly more powerful than a `&mut [T]` or a `&mut str`.\
Choose the type that best fits based on the operations you need to perform. Choose the type that best fits based on the operations you need to perform.
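A small sketch of what a mutable slice _can_ do: modify elements in place, without changing the length.

```rust
fn double_in_place(values: &mut [i32]) {
    for value in values.iter_mut() {
        *value *= 2;
    }
}

let mut numbers = vec![1, 2, 3];
double_in_place(&mut numbers);
assert_eq!(numbers, vec![2, 4, 6]);
```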

View file

@ -1,6 +1,6 @@
# Ticket ids # Ticket ids
Let's think again about our ticket management system. Let's think again about our ticket management system.\
Our ticket model right now looks like this: Our ticket model right now looks like this:
```rust ```rust
@ -11,13 +11,13 @@ pub struct Ticket {
} }
``` ```
One thing is missing here: an **identifier** to uniquely identify a ticket. One thing is missing here: an **identifier** to uniquely identify a ticket.\
That identifier should be unique for each ticket. That can be guaranteed by generating it automatically when That identifier should be unique for each ticket. That can be guaranteed by generating it automatically when
a new ticket is created. a new ticket is created.
## Refining the model ## Refining the model
Where should the id be stored? Where should the id be stored?\
We could add a new field to the `Ticket` struct: We could add a new field to the `Ticket` struct:
```rust ```rust
@ -29,7 +29,7 @@ pub struct Ticket {
} }
``` ```
But we don't know the id before creating the ticket. So it can't be there from the get-go. But we don't know the id before creating the ticket. So it can't be there from the get-go.\
It'd have to be optional: It'd have to be optional:
```rust ```rust
@ -61,7 +61,7 @@ pub struct Ticket {
} }
``` ```
A `TicketDraft` is a ticket that hasn't been created yet. It doesn't have an id, and it doesn't have a status. A `TicketDraft` is a ticket that hasn't been created yet. It doesn't have an id, and it doesn't have a status.\
A `Ticket` is a ticket that has been created. It has an id and a status. A `Ticket` is a ticket that has been created. It has an id and a status.\
Since each field in `TicketDraft` and `Ticket` embeds its own constraints, we don't have to duplicate logic Since each field in `TicketDraft` and `Ticket` embeds its own constraints, we don't have to duplicate logic
across the two types. across the two types.

View file

@ -1,6 +1,6 @@
# Indexing # Indexing
`TicketStore::get` returns an `Option<&Ticket>` for a given `TicketId`. `TicketStore::get` returns an `Option<&Ticket>` for a given `TicketId`.\
We've seen before how to access elements of arrays and vectors using Rust's We've seen before how to access elements of arrays and vectors using Rust's
indexing syntax: indexing syntax:
@ -9,7 +9,7 @@ let v = vec![0, 1, 2];
assert_eq!(v[0], 0); assert_eq!(v[0], 0);
``` ```
How can we provide the same experience for `TicketStore`? How can we provide the same experience for `TicketStore`?\
You guessed right: we need to implement a trait, `Index`! You guessed right: we need to implement a trait, `Index`!
## `Index` ## `Index`

View file

@ -42,13 +42,13 @@ where
} }
``` ```
The key type must implement the `Eq` and `Hash` traits. The key type must implement the `Eq` and `Hash` traits.\
Let's dig into those two. Let's dig into those two.
## `Hash` ## `Hash`
A hashing function (or hasher) maps a potentially infinite set of values (e.g. A hashing function (or hasher) maps a potentially infinite set of values (e.g.
all possible strings) to a bounded range (e.g. a `u64` value). all possible strings) to a bounded range (e.g. a `u64` value).\
There are many different hashing functions around, each with different properties There are many different hashing functions around, each with different properties
(speed, collision risk, reversibility, etc.). (speed, collision risk, reversibility, etc.).
@ -81,10 +81,10 @@ struct Person {
`HashMap` must be able to compare keys for equality. This is particularly important `HashMap` must be able to compare keys for equality. This is particularly important
when dealing with hash collisions—i.e. when two different keys hash to the same value. when dealing with hash collisions—i.e. when two different keys hash to the same value.
You may wonder: isn't that what the `PartialEq` trait is for? Almost! You may wonder: isn't that what the `PartialEq` trait is for? Almost!\
`PartialEq` is not enough for `HashMap` because it doesn't guarantee reflexivity, i.e. that `a == a` is always `true`. `PartialEq` is not enough for `HashMap` because it doesn't guarantee reflexivity, i.e. that `a == a` is always `true`.\
For example, floating point numbers (`f32` and `f64`) implement `PartialEq`, For example, floating point numbers (`f32` and `f64`) implement `PartialEq`,
but they don't satisfy the reflexivity property: `f32::NAN == f32::NAN` is `false`. but they don't satisfy the reflexivity property: `f32::NAN == f32::NAN` is `false`.\
Reflexivity is crucial for `HashMap` to work correctly: without it, you wouldn't be able to retrieve a value Reflexivity is crucial for `HashMap` to work correctly: without it, you wouldn't be able to retrieve a value
from the map using the same key you used to insert it. from the map using the same key you used to insert it.
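In practice you'll usually derive both traits on your key type. A minimal sketch (the id type is a stand-in for the one used in the exercises):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct TicketId(u64);

let mut store: HashMap<TicketId, String> = HashMap::new();
store.insert(TicketId(1), "Fix the login page".into());

assert_eq!(store.get(&TicketId(1)), Some(&"Fix the login page".to_string()));
```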

View file

@@ -1,16 +1,16 @@

# Ordering

By moving from a `Vec` to a `HashMap` we have improved the performance of our ticket management system,
and simplified our code in the process.\
It's not all roses, though. When iterating over a `Vec`-backed store, we could be sure that the tickets
would be returned in the order they were added.\
That's not the case with a `HashMap`: you can iterate over the tickets, but the order is random.

We can recover a consistent ordering by switching from a `HashMap` to a `BTreeMap`.

## `BTreeMap`

A `BTreeMap` guarantees that entries are sorted by their keys.\
This is useful when you need to iterate over the entries in a specific order, or if you need to
perform range queries (e.g. "give me all tickets with an id between 10 and 20").
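
A small sketch of what that buys us (plain `u64` ids here, not the exercise's types): entries come back sorted, and `range` answers exactly that kind of query:

```rust
use std::collections::BTreeMap;

fn main() {
    let mut tickets: BTreeMap<u64, String> = BTreeMap::new();
    tickets.insert(42, "Forty-second".into());
    tickets.insert(1, "First".into());
    tickets.insert(15, "Fifteenth".into());

    // Iteration is sorted by key, regardless of insertion order,
    // and `range` restricts it to a key interval.
    for (id, title) in tickets.range(10..=20) {
        println!("#{id}: {title}"); // prints only "#15: Fifteenth"
    }
}
```
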
@@ -34,7 +34,7 @@ impl<K, V> BTreeMap<K, V> {

## `Ord`

The `Ord` trait is used to compare values.\
While `PartialEq` is used to compare for equality, `Ord` is used to compare for ordering.

It's defined in `std::cmp`:

@@ -46,7 +46,7 @@ pub trait Ord: Eq + PartialOrd {
```

The `cmp` method returns an `Ordering` enum, which can be one
of `Less`, `Equal`, or `Greater`.\
`Ord` requires that two other traits are implemented: `Eq` and `PartialOrd`.

## `PartialOrd`

@@ -61,7 +61,7 @@ pub trait PartialOrd: PartialEq {
```

`PartialOrd::partial_cmp` returns an `Option`—it is not guaranteed that two values can
be compared.\
For example, `f32` doesn't implement `Ord` because `NaN` values are not comparable,
the same reason why `f32` doesn't implement `Eq`.
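
A quick sketch of the difference: `cmp` always yields an `Ordering`, while `partial_cmp` may return `None`:

```rust
use std::cmp::Ordering;

fn main() {
    // Integers implement `Ord`: a total order, so `cmp` always succeeds.
    assert_eq!(1_u64.cmp(&2), Ordering::Less);

    // Floats only implement `PartialOrd`: any comparison involving `NaN`
    // has no defined ordering, so `partial_cmp` returns `None`.
    assert_eq!(f32::NAN.partial_cmp(&1.0), None);
    assert_eq!(1.0_f32.partial_cmp(&2.0), Some(Ordering::Less));
}
```
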


@@ -1,10 +1,10 @@

# Intro

One of Rust's big promises is _fearless concurrency_: making it easier to write safe, concurrent programs.
We haven't seen much of that yet. All the work we've done so far has been single-threaded.
Time to change that!

In this chapter we'll make our ticket store multithreaded.\
We'll have the opportunity to touch most of Rust's core concurrency features, including:

- Threads, using the `std::thread` module


@@ -5,21 +5,21 @@ and why we might want to use them.

## What is a thread?

A **thread** is an execution context managed by the underlying operating system.\
Each thread has its own stack and instruction pointer (also known as program counter).
A single **process** can manage multiple threads.
These threads share the same memory space, which means they can access the same data.

Threads are a **logical** construct. In the end, you can only run one set of instructions
at a time on a CPU core, the **physical** execution unit.\
Since there can be many more threads than there are CPU cores, the operating system's
**scheduler** is in charge of deciding which thread to run at any given time,
partitioning CPU time among them to maximize throughput and responsiveness.

## `main`

When a Rust program starts, it runs on a single thread, the **main thread**.\
This thread is created by the operating system and is responsible for running the `main`
function.

@@ -66,12 +66,12 @@ fn main() {
```

If you execute this program on the [Rust playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=afedf7062298ca8f5a248bc551062eaa)
you'll see that the main thread and the spawned thread run concurrently.\
Each thread makes progress independently of the other.

### Process termination

When the main thread finishes, the overall process will exit.\
A spawned thread will continue running until it finishes or the main thread finishes.

```rust
@@ -90,7 +90,7 @@ fn main() {
}
```

In the example above, you can expect to see the message "Hello from a thread!" printed roughly five times.\
Then the main thread will finish (when the `sleep` call returns), and the spawned thread will be terminated
since the overall process exits.

@@ -109,7 +109,7 @@ fn main() {
}
```

In this example, the main thread will wait for the spawned thread to finish before exiting.\
This introduces a form of **synchronization** between the two threads: you're guaranteed to see the message
"Hello from a thread!" printed before the program exits, because the main thread won't exit
until the spawned thread has finished.
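
To make the synchronization point concrete, here's a minimal sketch (not taken from the exercises) of spawning a thread, joining it, and collecting its return value:

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        // This closure runs on the spawned thread; its return value
        // is handed back through the `JoinHandle`.
        (1..=10).sum::<u64>()
    });

    // `join` blocks the main thread until the spawned thread finishes.
    // It returns an `Err` only if the spawned thread panicked.
    let sum = handle.join().expect("the spawned thread panicked");
    println!("Sum computed on the spawned thread: {sum}");
}
```
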


@@ -20,12 +20,12 @@ error[E0597]: `v` does not live long enough

`argument requires that v is borrowed for 'static`, what does that mean?

The `'static` lifetime is a special lifetime in Rust.\
It means that the value will be valid for the entire duration of the program.

## Detached threads

A thread launched via `thread::spawn` can **outlive** the thread that spawned it.\
For example:

```rust
@@ -44,10 +44,10 @@ fn f() {
```

In this example, the first spawned thread will in turn spawn
a child thread that prints a message every second.\
The first thread will then finish and exit. When that happens,
its child thread will **continue running** for as long as the
overall process is running.\
In Rust's lingo, we say that the child thread has **outlived**
its parent.

@@ -59,7 +59,7 @@ Since a spawned thread can:

- run until the program exits

it must not borrow any values that might be dropped before the program exits;
violating this constraint would expose us to a use-after-free bug.\
That's why `std::thread::spawn`'s signature requires that the closure passed to it
has the `'static` lifetime:

@@ -104,7 +104,7 @@ The most common case is a reference to **static data**, such as string literals:

let s: &'static str = "Hello world!";
```

Since string literals are known at compile-time, Rust stores them _inside_ your executable,
in a region known as **read-only data segment**.
All references pointing to that region will therefore be valid for as long as
the program runs; they satisfy the `'static` contract.
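
A short sketch of the two usual ways to satisfy the `'static` requirement: move owned data into the closure, or borrow data that lives in the read-only data segment:

```rust
use std::thread;

fn main() {
    let v = vec![1, 2, 3];

    // Option 1: `move` transfers ownership of `v` to the spawned thread,
    // so no borrow of the parent's stack is involved.
    let handle = thread::spawn(move || v.into_iter().sum::<i32>());

    // Option 2: string literals are `&'static str`, valid for the whole
    // program, so even a detached thread may hold on to them.
    let greeting: &'static str = "Hello world!";
    thread::spawn(move || println!("{greeting}"));

    println!("Sum: {}", handle.join().unwrap());
}
```
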


@@ -1,7 +1,7 @@

# Leaking data

The main concern around passing references to spawned threads is use-after-free bugs:
accessing data using a pointer to a memory region that's already been freed/de-allocated.\
If you're working with heap-allocated data, you can avoid the issue by
telling Rust that you'll never reclaim that memory: you choose to **leak memory**,
intentionally.

@@ -32,7 +32,7 @@ fn oom_trigger() {
}
```

At the same time, memory leaked via `Box::leak` is not truly forgotten.\
The operating system can map each memory region to the process responsible for it.
When the process exits, the operating system will reclaim that memory.
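
A minimal sketch of the technique: `Box::leak` trades a heap allocation for a `'static` reference that a detached thread can freely capture:

```rust
use std::thread;

fn main() {
    // The allocation is never reclaimed by our program: we trade memory
    // for a `&'static` reference that any thread may capture.
    let leaked: &'static str = Box::leak(String::from("Hello world!").into_boxed_str());

    let handle = thread::spawn(move || {
        println!("{leaked}");
    });
    handle.join().unwrap();
    // The operating system reclaims the leaked memory when the process exits.
}
```
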


@@ -1,7 +1,7 @@

# Scoped threads

All the lifetime issues we discussed so far have a common source:
the spawned thread can outlive its parent.\
We can sidestep this issue by using **scoped threads**.

```rust
@@ -26,12 +26,12 @@ Let's unpack what's happening.

## `scope`

The `std::thread::scope` function creates a new **scope**.\
`std::thread::scope` takes as input a closure, with a single argument: a `Scope` instance.

## Scoped spawns

`Scope` exposes a `spawn` method.\
Unlike `std::thread::spawn`, all threads spawned using a `Scope` will be
**automatically joined** when the scope ends.
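
A sketch of the pattern (not the exercise's exact code): two scoped threads borrowing a local vector, which is fine because both are joined before the scope returns:

```rust
use std::thread;

fn main() {
    let v = vec![1, 2, 3, 4];

    let (evens, odds) = thread::scope(|scope| {
        // Both closures borrow `v` from the enclosing stack frame:
        // no `move`, no `'static` requirement.
        let evens = scope.spawn(|| v.iter().filter(|&&n| n % 2 == 0).count());
        let odds = scope.spawn(|| v.iter().filter(|&&n| n % 2 != 0).count());
        // Every scoped thread is joined before the scope returns;
        // here we also collect their results explicitly.
        (evens.join().unwrap(), odds.join().unwrap())
    });

    println!("{evens} even numbers, {odds} odd numbers");
}
```
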


@@ -1,6 +1,6 @@

# Channels

All our spawned threads have been fairly short-lived so far.\
Get some input, run a computation, return the result, shut down.

For our ticket management system, we want to do something different:

@@ -9,10 +9,10 @@ a client-server architecture.

We will have **one long-running server thread**, responsible for managing
our state, the stored tickets.

We will then have **multiple client threads**.\
Each client will be able to send **commands** and **queries** to
the stateful thread, in order to change its state (e.g. add a new ticket)
or retrieve information (e.g. get the status of a ticket).\
Client threads will run concurrently.

## Communication

@@ -22,7 +22,7 @@ So far we've only had very limited parent-child communication:

- The spawned thread borrowed/consumed data from the parent context
- The spawned thread returned data to the parent when joined

This isn't enough for a client-server design.\
Clients need to be able to send and receive data from the server thread
_after_ it has been launched.

@@ -31,7 +31,7 @@ We can solve the issue using **channels**.

## Channels

Rust's standard library provides **multi-producer, single-consumer** (mpsc) channels
in its `std::sync::mpsc` module.\
There are two channel flavours: bounded and unbounded. We'll stick to the unbounded
version for now, but we'll discuss the pros and cons later on.

@@ -43,8 +43,8 @@ use std::sync::mpsc::channel;

let (sender, receiver) = channel();
```

You get a sender and a receiver.\
You call `send` on the sender to push data into the channel.\
You call `recv` on the receiver to pull data from the channel.

### Multiple senders

@@ -59,15 +59,15 @@ That's what **mpsc** (multi-producer single-consumer) stands for!

### Message type

Both `Sender` and `Receiver` are generic over a type parameter `T`.\
That's the type of the _messages_ that can travel on our channel.
It could be a `u64`, a struct, an enum, etc.

### Errors

Both `send` and `recv` can fail.\
`send` returns an error if the receiver has been dropped.\
`recv` returns an error if all senders have been dropped and the channel is empty.

In other words, `send` and `recv` error when the channel is effectively closed.
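
Tying these pieces together, a compact sketch (names are illustrative) with two producer threads and a single consumer:

```rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (sender, receiver) = channel::<String>();

    for id in 0..2 {
        // `Sender` is cloneable: each producer thread gets its own handle.
        let sender = sender.clone();
        thread::spawn(move || {
            sender.send(format!("hello from producer {id}")).unwrap();
        });
    }
    // Drop the original sender so that `recv` reports a closed channel
    // once both clones are gone and the queue is empty.
    drop(sender);

    // `recv` blocks until a message arrives or every sender has been dropped.
    while let Ok(message) = receiver.recv() {
        println!("{message}");
    }
}
```
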


@@ -10,7 +10,7 @@ impl<T> Sender<T> {
}
```

`send` takes `&self` as its argument.\
But it's clearly causing a mutation: it's adding a new message to the channel.
What's even more interesting is that `Sender` is cloneable: we can have multiple instances of `Sender`
trying to modify the channel state **at the same time**, from different threads.

@@ -32,7 +32,7 @@ It would have been more accurate to name them:

Immutable/mutable is a mental model that works for the vast majority of cases, and it's a great one to get started
with Rust. But it's not the whole story, as you've just seen: `&T` doesn't actually guarantee that the data it
points to is immutable.\
Don't worry, though: Rust is still keeping its promises.
It's just that the terms are a bit more nuanced than they might seem at first.

@@ -40,15 +40,15 @@ It's just that the terms are a bit more nuanced than they might seem at first.

Whenever a type allows you to mutate data through a shared reference, you're dealing with **interior mutability**.

By default, the Rust compiler assumes that shared references are immutable. It **optimises your code** based on that assumption.\
The compiler can reorder operations, cache values, and do all sorts of magic to make your code faster.

You can tell the compiler "No, this shared reference is actually mutable" by wrapping the data in an `UnsafeCell`.\
Every time you see a type that allows interior mutability, you can be certain that `UnsafeCell` is involved,
either directly or indirectly.\
Using `UnsafeCell`, raw pointers and `unsafe` code, you can mutate data through shared references.
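
As a tiny illustration of the concept, `Cell` (one of the safe wrappers built on top of `UnsafeCell`) lets us mutate through a shared reference:

```rust
use std::cell::Cell;

// `bump` only has a shared reference, yet it mutates the counter:
// `Cell` provides interior mutability, built on top of `UnsafeCell`.
fn bump(counter: &Cell<u32>) {
    counter.set(counter.get() + 1);
}

fn main() {
    let counter = Cell::new(0);
    let a = &counter;
    let b = &counter; // two shared references to the same value
    bump(a);
    bump(b);
    assert_eq!(counter.get(), 2);
}
```
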
Let's be clear, though: `UnsafeCell` isn't a magic wand that allows you to ignore the borrow-checker!\
`unsafe` code is still subject to Rust's rules about borrowing and aliasing.
It's an (advanced) tool that you can leverage to build **safe abstractions** whose safety can't be directly expressed
in Rust's type system. Whenever you use the `unsafe` keyword you're telling the compiler:
@@ -64,15 +64,15 @@ every day in Rust.

## Key examples

Let's go through a couple of important `std` types that leverage interior mutability.\
These are types that you'll encounter somewhat often in Rust code, especially if you peek under the hood of
some of the libraries you use.

### Reference counting

`Rc` is a reference-counted pointer.\
It wraps around a value and keeps track of how many references to the value exist.
When the last reference is dropped, the value is deallocated.\
The value wrapped in an `Rc` is immutable: you can only get shared references to it.

```rust


@@ -1,6 +1,6 @@

# Two-way communication

In our current client-server implementation, communication flows in one direction: from the client to the server.\
The client has no way of knowing if the server received the message, executed it successfully, or failed.
That's not ideal.

@@ -8,7 +8,7 @@ To solve this issue, we can introduce a two-way communication system.

## Response channel

We need a way for the server to send a response back to the client.\
There are various ways to do this, but the simplest option is to include a `Sender` channel in
the message that the client sends to the server. After processing the message, the server can use
this channel to send a response back to the client.
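
A rough sketch of the idea (the `Command` type and its fields are invented for illustration): each request carries its own response `Sender`:

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Hypothetical message type: the client bundles a response channel
// into the command it sends to the server.
struct Command {
    question: String,
    respond_to: Sender<String>,
}

fn main() {
    let (server_tx, server_rx) = channel::<Command>();

    thread::spawn(move || {
        while let Ok(command) = server_rx.recv() {
            let answer = format!("You asked: {}", command.question);
            // The server replies on the channel provided by the client.
            let _ = command.respond_to.send(answer);
        }
    });

    let (response_tx, response_rx) = channel();
    server_tx
        .send(Command { question: "ping?".into(), respond_to: response_tx })
        .unwrap();
    println!("{}", response_rx.recv().unwrap());
}
```
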


@@ -1,18 +1,18 @@

# Bounded vs unbounded channels

So far we've been using unbounded channels.\
You can send as many messages as you want, and the channel will grow to accommodate them.\
In a multi-producer single-consumer scenario, this can be problematic: if the producers
enqueue messages at a faster rate than the consumer can process them, the channel will
keep growing, potentially consuming all available memory.

Our recommendation is to **never** use an unbounded channel in a production system.\
You should always enforce an upper limit on the number of messages that can be enqueued using a
**bounded channel**.

## Bounded channels

A bounded channel has a fixed capacity.\
You can create one by calling `sync_channel` with a capacity greater than zero:

```rust
@@ -21,23 +21,23 @@ use std::sync::mpsc::sync_channel;

let (sender, receiver) = sync_channel(10);
```

`receiver` has the same type as before, `Receiver<T>`.\
`sender`, instead, is an instance of `SyncSender<T>`.

### Sending messages

You have two different methods to send messages through a `SyncSender`:

- `send`: if there is space in the channel, it will enqueue the message and return `Ok(())`.\
  If the channel is full, it will block and wait until there is space available.
- `try_send`: if there is space in the channel, it will enqueue the message and return `Ok(())`.\
  If the channel is full, it will return `Err(TrySendError::Full(value))`, where `value` is the message that couldn't be sent.

Depending on your use case, you might want to use one or the other.

### Backpressure

The main advantage of using bounded channels is that they provide a form of **backpressure**.\
They force the producers to slow down if the consumer can't keep up.
The backpressure can then propagate through the system, potentially affecting the whole architecture and
preventing end users from overwhelming the system with requests.
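
A short sketch contrasting `send` and `try_send` on a bounded channel:

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Capacity of 1: a second message won't fit until the first is consumed.
    let (sender, receiver) = sync_channel(1);

    sender.send("first").unwrap(); // fits, returns immediately

    // `try_send` refuses to block: it hands the message back when the channel is full.
    if let Err(TrySendError::Full(value)) = sender.try_send("second") {
        println!("channel is full, {value:?} was handed back");
    }

    assert_eq!(receiver.recv().unwrap(), "first");
}
```
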


@@ -1,6 +1,6 @@

# Update operations

So far we've implemented only insertion and retrieval operations.\
Let's see how we can expand the system to provide an update operation.

## Legacy updates

@@ -18,7 +18,7 @@ There are a few ways to work around this limitation. We'll explore a few of them

### Patching

We can't send a `&mut Ticket` over a channel, therefore we can't mutate on the client-side.\
Can we mutate on the server-side?

We can, if we tell the server what needs to be changed. In other words, if we send a **patch** to the server:

@@ -32,7 +32,7 @@ struct TicketPatch {
}
```

The `id` field is mandatory, since it's required to identify the ticket that needs to be updated.\
All other fields are optional:

- If a field is `None`, it means that the field should not be changed.
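
As a sketch of how the server side might apply such a patch (the field names are illustrative and don't match the exercise exactly):

```rust
// Illustrative types: the exercise's real `Ticket` and `TicketPatch` differ in the details.
struct Ticket {
    id: u64,
    title: String,
    description: String,
}

struct TicketPatch {
    id: u64,
    title: Option<String>,
    description: Option<String>,
}

fn apply(ticket: &mut Ticket, patch: TicketPatch) {
    assert_eq!(ticket.id, patch.id, "patch addressed to a different ticket");
    // `None` means "leave this field untouched".
    if let Some(title) = patch.title {
        ticket.title = title;
    }
    if let Some(description) = patch.description {
        ticket.description = description;
    }
}

fn main() {
    let mut ticket = Ticket {
        id: 1,
        title: "Old title".into(),
        description: "Unchanged".into(),
    };
    apply(
        &mut ticket,
        TicketPatch {
            id: 1,
            title: Some("New title".into()),
            description: None,
        },
    );
    assert_eq!(ticket.title, "New title");
    assert_eq!(ticket.description, "Unchanged");
}
```
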


@@ -1,13 +1,13 @@

# Locks, `Send` and `Arc`

The patching strategy you just implemented has a major drawback: it's racy.\
If two clients send patches for the same ticket at roughly the same time, the server will apply them in an arbitrary order.
Whoever enqueues their patch last will overwrite the changes made by the other client.

## Version numbers

We could try to fix this by using a **version number**.\
Each ticket gets assigned a version number upon creation, set to `0`.\
Whenever a client sends a patch, they must include the current version number of the ticket alongside the
desired changes. The server will only apply the patch if the version number matches the one it has stored.

@@ -15,24 +15,24 @@ In the scenario described above, the server would reject the second patch, becau
have been incremented by the first patch and thus wouldn't match the one sent by the second client.

This approach is fairly common in distributed systems (e.g. when clients and servers don't share memory),
and it is known as **optimistic concurrency control**.\
The idea is that most of the time, conflicts won't happen, so we can optimize for the common case.
You know enough about Rust by now to implement this strategy on your own as a bonus exercise, if you want to.

## Locking

We can also fix the race condition by introducing a **lock**.\
Whenever a client wants to update a ticket, they must first acquire a lock on it. While the lock is active,
no other client can modify the ticket.

Rust's standard library provides two different locking primitives: `Mutex<T>` and `RwLock<T>`.\
Let's start with `Mutex<T>`. It stands for **mut**ual **ex**clusion, and it's the simplest kind of lock:
it allows only one thread to access the data, no matter if it's for reading or writing.

`Mutex<T>` wraps the data it protects, and it's therefore generic over the type of the data.\
You can't access the data directly: the type system forces you to acquire a lock first using either `Mutex::lock` or
`Mutex::try_lock`. The former blocks until the lock is acquired, the latter returns immediately with an error if the lock
can't be acquired.\
Both methods return a guard object that dereferences to the data, allowing you to modify it. The lock is released when
the guard is dropped.
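
A minimal sketch of that locking API:

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0_u32);

    {
        // `lock` blocks until the lock is acquired and returns a guard.
        let mut guard = counter.lock().unwrap();
        // The guard dereferences to the protected data.
        *guard += 1;
    } // the guard is dropped here, releasing the lock

    // `try_lock` returns an error instead of blocking if the lock is taken.
    if let Ok(guard) = counter.try_lock() {
        println!("current value: {}", *guard);
    }
}
```
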
@@ -57,10 +57,10 @@ drop(guard)

## Locking granularity

What should our `Mutex` wrap?\
The simplest option would be to wrap the entire `TicketStore` in a single `Mutex`.\
This would work, but it would severely limit the system's performance: you wouldn't be able to read tickets in parallel,
because every read would have to wait for the lock to be released.\
This is known as **coarse-grained locking**.

It would be better to use **fine-grained locking**, where each ticket is protected by its own lock.

@@ -74,15 +74,15 @@ struct TicketStore {
```

This approach is more efficient, but it has a downside: `TicketStore` has to become **aware** of the multithreaded
nature of the system; up until now, `TicketStore` has been blissfully unaware of the existence of threads.\
Let's go for it anyway.

## Who holds the lock?

For the whole scheme to work, the lock must be passed to the client that wants to modify the ticket.\
The client can then directly modify the ticket (as if they had a `&mut Ticket`) and release the lock when they're done.

This is a bit tricky.\
We can't send a `Mutex<Ticket>` over a channel, because `Mutex` is not `Clone` and
we can't move it out of the `TicketStore`. Could we send the `MutexGuard` instead?
@@ -131,22 +131,22 @@ note: required because it's used within this closure

## `Send`

`Send` is a marker trait that indicates that a type can be safely transferred from one thread to another.\
`Send` is also an auto-trait, just like `Sized`; it's automatically implemented (or not implemented) for your type
by the compiler, based on its definition.\
You can also implement `Send` manually for your types, but it requires `unsafe` since you have to guarantee that the
type is indeed safe to send between threads for reasons that the compiler can't automatically verify.

### Channel requirements

`Sender<T>`, `SyncSender<T>` and `Receiver<T>` are `Send` if and only if `T` is `Send`.\
That's because they are used to send values between threads, and if the value itself is not `Send`, it would be
unsafe to send it between threads.

### `MutexGuard`

`MutexGuard` is not `Send` because the underlying operating system primitives that `Mutex` uses to implement
the lock require (on some platforms) that the lock must be released by the same thread that acquired it.\
If we were to send a `MutexGuard` to another thread, the lock would be released by a different thread, which would
lead to undefined behavior.

@@ -160,7 +160,7 @@ Summing it up:

case for `Ticket`.

At the same time, we can't move the `Mutex` out of the `TicketStore` nor clone it.

How can we solve this conundrum?\
We need to look at the problem from a different angle.

To lock a `Mutex`, we don't need an owned value. A shared reference is enough, since `Mutex` uses interior mutability:

@@ -173,15 +173,15 @@ impl<T> Mutex<T> {
}
```

It is therefore enough to send a shared reference to the client.\
We can't do that directly, though, because the reference would have to be `'static` and that's not the case.\
In a way, we need an "owned shared reference". It turns out that Rust has a type that fits the bill: `Arc`.

## `Arc` to the rescue

`Arc` stands for **atomic reference counting**.\
`Arc` wraps around a value and keeps track of how many references to the value exist.
When the last reference is dropped, the value is deallocated.\
The value wrapped in an `Arc` is immutable: you can only get shared references to it.
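
As a rough sketch of where this is heading (not the chapter's own example), `Arc<Mutex<T>>` combines cheap, cloneable, owned handles with the interior mutability needed to lock from any thread:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // One shared counter; every clone of the `Arc` is an owned handle to it.
    let counter = Arc::new(Mutex::new(0_u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            // The clone is moved into the thread: it's `'static` and `Send`,
            // so `thread::spawn` is happy. No borrows are involved.
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```
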
```rust


@@ -3,11 +3,11 @@

Our new `TicketStore` works, but its read performance is not great: there can only be one client at a time
reading a specific ticket, because `Mutex<T>` doesn't distinguish between readers and writers.

We can solve the issue by using a different locking primitive: `RwLock<T>`.\
`RwLock<T>` stands for **read-write lock**. It allows **multiple readers** to access the data simultaneously,
but only one writer at a time.

`RwLock<T>` has two methods to acquire a lock: `read` and `write`.\
`read` returns a guard that allows you to read the data, while `write` returns a guard that allows you to modify it.

```rust
@@ -31,13 +31,13 @@ Why would you ever use `Mutex<T>` if you can use `RwLock<T>` instead?

There are two key reasons:

- Locking a `RwLock<T>` is more expensive than locking a `Mutex<T>`.\
  This is because `RwLock<T>` has to keep track of the number of active readers and writers, while `Mutex<T>`
  only has to keep track of whether the lock is held or not.
  This performance overhead is not an issue if there are more readers than writers, but if the workload
  is write-heavy `Mutex<T>` might be a better choice.
- `RwLock<T>` can cause **writer starvation**.\
  If there are always readers waiting to acquire the lock, writers might never get a chance to run.\
  `RwLock<T>` doesn't provide any guarantees about the order in which readers and writers are granted access to the lock.
  It depends on the policy implemented by the underlying OS, which might not be fair to writers.
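
A small sketch of the two guards in action:

```rust
use std::sync::RwLock;

fn main() {
    let ticket_title = RwLock::new(String::from("Fix the bug"));

    {
        // Any number of read guards can coexist.
        let r1 = ticket_title.read().unwrap();
        let r2 = ticket_title.read().unwrap();
        println!("{r1} / {r2}");
    } // read guards dropped here

    {
        // A write guard is exclusive: it waits until all readers are done.
        let mut w = ticket_title.write().unwrap();
        w.push_str(" (urgent)");
    }

    assert_eq!(ticket_title.read().unwrap().as_str(), "Fix the bug (urgent)");
}
```
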


@@ -10,7 +10,7 @@ Our first implementation of a multithreaded ticket store used:

- multiple clients sending requests to it via channels from their own threads.

No locking of the state was necessary, since the server was the only one modifying the state. That's because
the "inbox" channel naturally **serialized** incoming requests: the server would process them one by one.\
We've already discussed the limitations of this approach when it comes to patching behaviour, but we didn't
discuss the performance implications of the original design: the server could only process one request at a time,
including reads.

@@ -37,17 +37,17 @@ We have two problems to solve:

### Sharing `TicketStore` across threads

We want all threads to refer to the same state, otherwise we don't really have a multithreaded system—we're just
running multiple single-threaded systems in parallel.\
We've already encountered this problem when we tried to share a lock across threads: we can use an `Arc`.

### Synchronizing access to the store

There is one interaction that's still lockless thanks to the serialization provided by the channels: inserting
(or removing) a ticket from the store.\
If we remove the channels, we need to introduce (another) lock to synchronize access to the `TicketStore` itself.

If we use a `Mutex`, then it makes no sense to use an additional `RwLock` for each ticket: the `Mutex` will
already serialize access to the entire store, so we wouldn't be able to read tickets in parallel anyway.\
If we use a `RwLock`, instead, we can read tickets in parallel. We just need to pause all reads while inserting
or removing a ticket.


@@ -2,27 +2,27 @@

Before we wrap up this chapter, let's talk about another key trait in Rust's standard library: `Sync`.

`Sync` is an auto trait, just like `Send`.\
It is automatically implemented by all types that can be safely **shared** between threads.

In other words: `T: Sync` means that `&T` is `Send`.

## `Sync` doesn't imply `Send`

It's important to note that `Sync` doesn't imply `Send`.\
For example: `MutexGuard` is not `Send`, but it is `Sync`.

It isn't `Send` because the lock must be released on the same thread that acquired it, therefore we don't
want `MutexGuard` to be dropped on a different thread.\
But it is `Sync`, because giving a `&MutexGuard` to another thread has no impact on where the lock is released.

## `Send` doesn't imply `Sync`

The opposite is also true: `Send` doesn't imply `Sync`.\
For example: `RefCell<T>` is `Send` (if `T` is `Send`), but it is not `Sync`.

`RefCell<T>` performs runtime borrow checking, but the counters it uses to track borrows are not thread-safe.
Therefore, having multiple threads holding a `&RefCell` would lead to a data race, with potentially
multiple threads obtaining mutable references to the same data. Hence `RefCell` is not `Sync`.\
`Send` is fine, instead, because when we send a `RefCell` to another thread we're not
leaving behind any references to the data it contains, hence no risk of concurrent mutable access.
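
To make the distinction concrete, here's a small sketch: moving a `RefCell` into another thread compiles, sharing a reference to it across threads does not:

```rust
use std::cell::RefCell;
use std::thread;

fn main() {
    // `RefCell<T>` is `Send` (for `T: Send`): we can move ownership to another thread.
    let cell = RefCell::new(0);
    let handle = thread::spawn(move || {
        *cell.borrow_mut() += 1;
        cell.into_inner()
    });
    assert_eq!(handle.join().unwrap(), 1);

    // It is *not* `Sync`: a shared `&RefCell` can't cross a thread boundary.
    // Something like this is rejected at compile time:
    //
    //     let cell = RefCell::new(0);
    //     thread::scope(|s| {
    //         s.spawn(|| *cell.borrow_mut() += 1);
    //         // error: `RefCell<i32>` cannot be shared between threads safely
    //     });
}
```
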


@@ -1,6 +1,6 @@

# Async Rust

Threads are not the only way to write concurrent programs in Rust.\
In this chapter we'll explore another approach: **asynchronous programming**.

In particular, you'll get an introduction to:


@@ -1,16 +1,16 @@

# Asynchronous functions

All the functions and methods you've written so far were eager.\
Nothing happened until you invoked them. But once you did, they ran to
completion: they did **all** their work, and then returned their output.

Sometimes that's undesirable.\
For example, if you're writing an HTTP server, there might be a lot of
**waiting**: waiting for the request body to arrive, waiting for the
database to respond, waiting for a downstream service to reply, etc.

What if you could do something else while you're waiting?\
What if you could choose to give up midway through a computation?\
What if you could choose to prioritise another task over the current one?

That's where **asynchronous functions** come in.

@@ -38,7 +38,7 @@ fn run() {
}
```

Nothing happens!\
Rust doesn't start executing `bind_random` when you call it,
not even as a background task (as you might expect based on your experience
with other languages).
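
A tiny sketch of this laziness, assuming a `tokio` runtime (with the `macros` and `rt-multi-thread` features) as used throughout the course:

```rust
async fn greet() {
    println!("Hello from the future!");
}

#[tokio::main]
async fn main() {
    // Calling the async function only builds a future: nothing is printed yet.
    let future = greet();
    println!("The future has been created, but not executed.");

    // Awaiting it is what actually drives the body to completion.
    future.await;
}
```
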
@@ -73,13 +73,13 @@ has run to completion—e.g. until the `TcpListener` has been created in the exa

## Runtimes

If you're puzzled, you're right to be!\
We've just said that the perk of asynchronous functions
is that they don't do **all** their work at once. We then introduced `.await`, which
doesn't return until the asynchronous function has run to completion. Haven't we
just re-introduced the problem we were trying to solve? What's the point?

Not quite! A lot happens behind the scenes when you call `.await`!\
You're yielding control to an **async runtime**, also known as an **async executor**.
Executors are where the magic happens: they are in charge of managing all your
ongoing asynchronous **tasks**. In particular, they balance two different goals:

@@ -130,10 +130,10 @@ fn main() {

### `#[tokio::test]`

The same goes for tests: they must be synchronous functions.\
Each test function is run in its own thread, and you're responsible for
setting up and launching an async runtime if you need to run async code
in your tests.\
`tokio` provides a `#[tokio::test]` macro to make this easier:

```rust


@@ -12,12 +12,12 @@ pub async fn echo(listener: TcpListener) -> Result<(), anyhow::Error> {
}
```

This is not bad!\
If a long time passes between two incoming connections, the `echo` function will be idle
(since `TcpListener::accept` is an asynchronous function), thus allowing the executor
to run other tasks in the meantime.

But how can we actually have multiple tasks running concurrently?\
If we always run our asynchronous functions until completion (by using `.await`), we'll never
have more than one task running at a time.

@@ -25,7 +25,7 @@ This is where the `tokio::spawn` function comes in.

## `tokio::spawn`

`tokio::spawn` allows you to hand off a task to the executor, **without waiting for it to complete**.\
Whenever you invoke `tokio::spawn`, you're telling `tokio` to continue running
the spawned task, in the background, **concurrently** with the task that spawned it.

@@ -56,7 +56,7 @@ to define a separate async function.

### `JoinHandle`

`tokio::spawn` returns a `JoinHandle`.\
You can use `JoinHandle` to `.await` the background task, in the same way
we used `join` for spawned threads.

@@ -83,7 +83,7 @@ pub async fn do_work() {

### Panic boundary

If a task spawned with `tokio::spawn` panics, the panic will be caught by the executor.\
If you don't `.await` the corresponding `JoinHandle`, the panic won't be propagated to the spawner.
Even if you do `.await` the `JoinHandle`, the panic won't be propagated automatically.
Awaiting a `JoinHandle` returns a `Result`, with [`JoinError`](https://docs.rs/tokio/latest/tokio/task/struct.JoinError.html)
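
A compact sketch (again assuming a `tokio` runtime) of spawning two tasks, awaiting their handles, and inspecting a panicked one:

```rust
#[tokio::main]
async fn main() {
    // Both tasks start running in the background as soon as they're spawned.
    let ok_task = tokio::spawn(async { 21 * 2 });
    let panicking_task = tokio::spawn(async {
        panic!("boom");
    });

    // Awaiting a `JoinHandle` yields a `Result`: `Ok` with the task's output,
    // or `Err(JoinError)` if the task panicked (or was cancelled).
    assert_eq!(ok_task.await.unwrap(), 42);

    let error = panicking_task.await.unwrap_err();
    assert!(error.is_panic());
}
```
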


@@ -19,7 +19,7 @@ You can configure your runtime via `tokio::runtime::Builder`:

### Current thread runtime

The current-thread runtime, as the name implies, relies exclusively on the OS thread
it was launched on to schedule and execute tasks.\
When using the current-thread runtime, you have **concurrency** but no **parallelism**:
asynchronous tasks will be interleaved, but there will always be at most one task running
at any given time.

@@ -30,10 +30,10 @@ When using the multithreaded runtime, instead, there can be up to `N` tasks running
_in parallel_ at any given time, where `N` is the number of threads used by the
runtime. By default, `N` matches the number of available CPU cores.

There's more: `tokio` performs **work-stealing**.\
If a thread is idle, it won't wait around: it'll try to find a new task that's ready for
execution, either from a global queue or by stealing it from the local queue of another
thread.\
Work-stealing can have significant performance benefits, especially on tail latencies,
whenever your application is dealing with workloads that are not perfectly balanced
across threads.
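
As a sketch of the configuration surface (assuming the `tokio` crate with its runtime features enabled), both flavours can be built explicitly via `Builder`:

```rust
use tokio::runtime::Builder;

fn main() {
    // A current-thread runtime: concurrency, but no parallelism.
    let current_thread = Builder::new_current_thread()
        .enable_all()
        .build()
        .expect("failed to build the runtime");
    current_thread.block_on(async {
        println!("running on the current-thread runtime");
    });

    // The multithreaded flavour, with an explicit number of worker threads.
    let multi_thread = Builder::new_multi_thread()
        .worker_threads(4)
        .enable_all()
        .build()
        .expect("failed to build the runtime");
    multi_thread.block_on(async {
        println!("running on the multithreaded runtime");
    });
}
```
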
@@ -52,7 +52,7 @@ where
{ /* */ }
```

Let's ignore the `Future` trait for now to focus on the rest.\
`spawn` is asking all its inputs to be `Send` and have a `'static` lifetime.

The `'static` constraint follows the same rationale as the `'static` constraint


@@ -12,11 +12,11 @@ pub fn spawn<F>(future: F) -> JoinHandle<F::Output>
{ /* */ }
```
What does it _actually_ mean for `F` to be `Send`?\
It implies, as we saw in the previous section, that whatever value it captures from the
spawning environment has to be `Send`. But it goes further than that.
Any value that's _held across a .await point_ has to be `Send`.\
Let's look at an example:
```rust
@@ -90,8 +90,8 @@ trait Future {
### `poll`
The `poll` method is the heart of the `Future` trait.\
A future on its own doesn't do anything. It needs to be **polled** to make progress.\
When you call `poll`, you're asking the future to do some work.
`poll` tries to make progress, and then returns one of the following:
@@ -104,13 +104,13 @@ completed, there's nothing left to do.
### The role of the runtime
You'll rarely, if ever, be calling `poll` directly.\
That's the job of your async runtime: it has all the required information (the `Context`
in `poll`'s signature) to ensure that your futures are making progress whenever they can.
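To make `poll` less abstract, here's a hedged, hand-written future (not taken from the course material) that returns `Poll::Pending` on its first poll and `Poll::Ready` on the second, using the waker from `Context` to ask to be polled again:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Yields control back to the runtime exactly once.
struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            // Second poll: we're done.
            Poll::Ready(())
        } else {
            self.yielded = true;
            // Tell the runtime we want to be polled again, then yield.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

#[tokio::main]
async fn main() {
    YieldOnce { yielded: false }.await;
    println!("polled twice, now complete");
}
```

In practice you'll rarely write this by hand: `async fn` generates an equivalent state machine for you, which is what the next section covers.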
## `async fn` and futures
We've worked with the high-level interface, asynchronous functions.\
We've now looked at the low-level primitive, the `Future` trait.
How are they related?
@@ -143,10 +143,10 @@ pub enum ExampleFuture {
```
When `example` is called, it returns `ExampleFuture::NotStarted`. The future has never
been polled yet, so nothing has happened.\
When the runtime polls it the first time, `ExampleFuture` will advance until the next
`.await` point: it'll stop at the `ExampleFuture::YieldNow(Rc<i32>)` stage of the state
machine, returning `Poll::Pending`.\
When it's polled again, it'll execute the remaining code (`println!`) and
return `Poll::Ready(())`.
@@ -157,7 +157,7 @@ it cannot be `Send`.
## Yield points
As you've just seen with `example`, every `.await` point creates a new intermediate
state in the lifecycle of a future.\
That's why `.await` points are also known as **yield points**: your future _yields control_
back to the runtime that was polling it, allowing the runtime to pause it and (if necessary)
schedule another task for execution, thus making progress on multiple fronts concurrently.
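If you want a yield point without awaiting any actual resource, `tokio` exposes one directly; a small sketch (the loop is just a stand-in for a longer-running task that wants to give other tasks a chance to run):

```rust
#[tokio::main]
async fn main() {
    let handle = tokio::spawn(async {
        for i in 0..3 {
            println!("step {i}");
            // Explicit yield point: hand control back to the scheduler so
            // other ready tasks get a chance to run before we continue.
            tokio::task::yield_now().await;
        }
    });
    handle.await.unwrap();
}
```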


@@ -1,6 +1,6 @@
# Don't block the runtime
Let's circle back to yield points.\
Unlike threads, **Rust tasks cannot be preempted**.
`tokio` cannot, on its own, decide to pause a task and run another one in its place.
@@ -46,7 +46,7 @@ of entries.
## How to avoid blocking
OK, so how do you avoid blocking the runtime assuming you _must_ perform an operation
that qualifies or risks qualifying as blocking?\
You need to move the work to a different thread. You don't want to use the so-called
runtime threads, the ones used by `tokio` to run tasks.
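`tokio` maintains a separate pool of threads specifically for blocking work; a minimal sketch using `tokio::task::spawn_blocking` (the summing closure is just a placeholder for whatever CPU-bound or synchronous work you actually need to run):

```rust
#[tokio::main]
async fn main() {
    // Runs the closure on tokio's blocking thread pool, so the
    // runtime's worker threads stay free to drive other tasks.
    let digest = tokio::task::spawn_blocking(|| {
        // Pretend this is CPU-heavy or performs synchronous I/O.
        (0..10_000_000u64).fold(0u64, |acc, x| acc.wrapping_add(x))
    })
    .await
    .expect("the blocking task panicked");

    println!("result: {digest}");
}
```

`spawn_blocking` returns a `JoinHandle`, so the async caller can still `.await` the result without tying up a runtime worker thread.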


@@ -42,14 +42,14 @@ Let's imagine that there are two tasks executing `run`, concurrently, on a singl
runtime. We observe the following sequence of scheduling events:
```text
Task A                Task B
  |
Acquire lock
Yields to runtime
  |
  +--------------+
                 |
       Tries to acquire lock
```
We have a deadlock. Task B will never manage to acquire the lock, because the lock
@@ -73,32 +73,32 @@ async fn run(m: Arc<Mutex<Vec<u64>>>) {
```
Acquiring the lock is now an asynchronous operation, which yields back to the runtime
if it can't make progress.\
Going back to the previous scenario, the following would happen:
```text
Task A                Task B
  |
Acquires the lock
Starts `http_call`
Yields to runtime
  |
  +--------------+
                 |
       Tries to acquire the lock
       Cannot acquire the lock
       Yields to runtime
                 |
  +--------------+
  |
`http_call` completes
Releases the lock
Yields to runtime
  |
  +--------------+
                 |
       Acquires the lock
       [...]
```
All good!
@@ -107,14 +107,14 @@ All good!
We've used a single-threaded runtime as the execution context in our
previous example, but the same risk persists even when using a multithreaded
runtime.\
The only difference is in the number of concurrent tasks required to create the deadlock:
in a single-threaded runtime, 2 are enough; in a multithreaded runtime, we
would need `N+1` tasks, where `N` is the number of runtime threads.
### Downsides
Having an async-aware `Mutex` comes with a performance penalty.\
If you're confident that the lock isn't under significant contention
_and_ you're careful to never hold it across a yield point, you can
still use `std::sync::Mutex` in an asynchronous context.
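A hedged sketch of what "never hold it across a yield point" looks like in practice (the `http_call` helper is invented for the example): keep the guard in its own scope so it's dropped before the next `.await`:

```rust
use std::sync::{Arc, Mutex};

// Placeholder for some asynchronous work.
async fn http_call() {}

async fn run(m: Arc<Mutex<Vec<u64>>>) {
    {
        // The guard lives only inside this block...
        let mut guard = m.lock().unwrap();
        guard.push(42);
    } // ...and is dropped here, before any yield point.

    // Safe: no lock is held across this `.await`.
    http_call().await;
}

#[tokio::main]
async fn main() {
    let data = Arc::new(Mutex::new(Vec::new()));
    run(data).await;
}
```

If the guard were still alive at the `.await`, the future would hold a `MutexGuard` across a yield point, recreating the problem described above (and, on a multithreaded runtime, the future wouldn't be `Send`).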
@@ -124,6 +124,6 @@ will incur.
## Other primitives
We used `Mutex` as an example, but the same applies to `RwLock`, semaphores, etc.\
Prefer async-aware versions when working in an asynchronous context to minimise
the risk of issues.


@@ -1,6 +1,6 @@
# Cancellation
What happens when a pending future is dropped?\
The runtime will no longer poll it, therefore it won't make any further progress.
In other words, its execution has been **cancelled**.
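One of the most common ways a future ends up being dropped is a timeout; a minimal sketch using `tokio::time::timeout` (the `slow_operation` helper is invented for illustration):

```rust
use tokio::time::{sleep, timeout, Duration};

// Invented placeholder: pretends to be a slow remote call.
async fn slow_operation() -> u64 {
    sleep(Duration::from_secs(5)).await;
    42
}

#[tokio::main]
async fn main() {
    // If `slow_operation` doesn't finish within 100ms, its future is
    // dropped: it stops at the next yield point and never resumes.
    match timeout(Duration::from_millis(100), slow_operation()).await {
        Ok(value) => println!("completed: {value}"),
        Err(_) => println!("timed out, the operation was cancelled"),
    }
}
```

When the timeout fires, `timeout` simply drops the inner future; nothing inside `slow_operation` runs past its last `.await`.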
@@ -38,7 +38,7 @@ async fn http_call() {
}
```
Each yield point becomes a **cancellation point**.\
`http_call` can't be preempted by the runtime, so it can only be discarded after
it has yielded control back to the executor via `.await`.
This applies recursively—e.g. `stream.write_all(&request)` is likely to have multiple
@@ -49,7 +49,7 @@ finishing transmitting the body.
## Clean up
Rust's cancellation mechanism is quite powerful—it allows the caller to cancel an ongoing task
without needing any form of cooperation from the task itself.\
At the same time, this can be quite dangerous. It may be desirable to perform a
**graceful cancellation**, to ensure that some clean-up tasks are performed
before aborting the operation.
@@ -87,7 +87,7 @@ The optimal choice is contextual.
## Cancelling spawned tasks
When you spawn a task using `tokio::spawn`, you can no longer drop it;
it belongs to the runtime.\
Nonetheless, you can use its `JoinHandle` to cancel it if needed:
```rust
@@ -102,7 +102,7 @@ async fn run() {
- Be extremely careful when using `tokio`'s `select!` macro to "race" two different futures.
Retrying the same task in a loop is dangerous unless you can ensure **cancellation safety**.
Check out [`select!`'s documentation](https://tokio.rs/tokio/tutorial/select) for more details.\
If you need to interleave two asynchronous streams of data (e.g. a socket and a channel), prefer using
[`StreamExt::merge`](https://docs.rs/tokio-stream/latest/tokio_stream/trait.StreamExt.html#method.merge) instead.
- Rather than "abrupt" cancellation, it can be preferable to rely


@@ -10,25 +10,25 @@ rough edges in your day-to-day work due to some of these missing pieces.
A few recommendations for a mostly-pain-free async experience:
- **Pick a runtime and stick to it.**\
Some primitives (e.g. timers, I/O) are not portable across runtimes. Trying to
mix runtimes is likely to cause you pain. Trying to write code that's runtime
agnostic can significantly increase the complexity of your codebase. Avoid it
if you can.
- **There is no stable `Stream`/`AsyncIterator` interface yet.**\
An `AsyncIterator` is, conceptually, an iterator that yields new items
asynchronously. There is ongoing design work, but no consensus (yet).
If you're using `tokio`, refer to [`tokio_stream`](https://docs.rs/tokio-stream/latest/tokio_stream/)
as your go-to interface (a short sketch follows this list).
- **Be careful with buffering.**\
It is often the cause of subtle bugs. Check out
["Barbara battles buffered streams"](https://rust-lang.github.io/wg-async/vision/submitted_stories/status_quo/barbara_battles_buffered_streams.html)
for more details.
- **There is no equivalent of scoped threads for asynchronous tasks**.\
Check out ["The scoped task trilemma"](https://without.boats/blog/the-scoped-task-trilemma/)
for more details.
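As promised above, a hedged sketch of what working with `tokio_stream` looks like (assuming the `tokio-stream` crate is added as a dependency):

```rust
use tokio_stream::StreamExt;

#[tokio::main]
async fn main() {
    // Turn an ordinary iterator into an asynchronous stream.
    let mut stream = tokio_stream::iter(vec![1, 2, 3]);

    // `next()` comes from `tokio_stream::StreamExt` and yields items
    // asynchronously, one `.await` at a time.
    while let Some(n) = stream.next().await {
        println!("got {n}");
    }
}
```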
Don't let these caveats scare you: asynchronous Rust is being used effectively
at _massive_ scale (e.g. AWS, Meta) to power foundational services.\
You will have to master it if you're planning on building networked applications
in Rust.


@@ -1,8 +1,8 @@
# Epilogue
Our tour of Rust ends here.\
It has been quite extensive, but by no means exhaustive: Rust is a language with
a large surface area, and an even larger ecosystem!\
Don't let this scare you, though: there's **no need to learn everything**.
You'll pick up whatever is necessary to be effective in the domain
(backend, embedded, CLIs, GUIs, etc.) **while working on your projects**.
@@ -34,21 +34,20 @@ way.
### Advanced material
If you want to dive deeper into the language, refer to the [Rustonomicon](https://doc.rust-lang.org/nomicon/)
and ["Rust for Rustaceans"](https://nostarch.com/rust-rustaceans).\
The ["Decrusted" series](https://www.youtube.com/playlist?list=PLqbS7AVVErFirH9armw8yXlE6dacF-A6z) is another excellent
resource to learn more about the internals of many of the most popular Rust libraries.
### Domain-specific material
If you want to use Rust for backend development,
check out ["Zero to Production in Rust"](https://zero2prod.com).\
If you want to use Rust for embedded development,
check out the [Embedded Rust book](https://docs.rust-embedded.org/book/).
### Masterclasses
You can then find resources on key topics that cut across domains.\
For testing, check out
["Advanced testing, going beyond the basics"](https://github.com/mainmatter/rust-advanced-testing-workshop).\
For telemetry, check out ["You can't fix what you can't see"](https://github.com/mainmatter/rust-telemetry-workshop).

dprint.json Normal file

@@ -0,0 +1,11 @@
{
  "markdown": {
  },
  "toml": {
  },
  "excludes": [],
  "plugins": [
    "https://plugins.dprint.dev/markdown-0.17.0.wasm",
    "https://plugins.dprint.dev/toml-0.6.1.wasm"
  ]
}


@@ -4,5 +4,5 @@ version = "0.1.0"
edition = "2021"

[dev-dependencies]
common = { path = "../../../helpers/common" }
static_assertions = "1.1.0"


@@ -4,5 +4,5 @@ version = "0.1.0"
edition = "2021"

[dependencies]
thiserror = "1.0.59"
ticket_fields = { path = "../../../helpers/ticket_fields" }


@@ -4,5 +4,5 @@ version = "0.1.0"
edition = "2021"

[dependencies]
thiserror = "1.0.60"
ticket_fields = { path = "../../../helpers/ticket_fields" }

Some files were not shown because too many files have changed in this diff.