"The code is more what you'd call 'guidelines' than actual rules." – Hector Barbossa

Scott Meyers' original Effective C++ book was phenomenally successful because it introduced a new style of programming book, focused on a collection of guidelines that had been learned from real world experience of creating software in C++. Significantly, those guidelines were explained in the context of the reasons why they were necessary – allowing the reader to decide for themselves whether their particular scenario warranted breaking the rules.

The first edition of Effective C++ was published in 1992, and at that time C++, although young, was already a subtle language that included many footguns; having a guide to the interactions of its different features was essential.

Rust is also a young language, but in contrast to C++ it is remarkably free of footguns. The strength and consistency of its type system means that if a Rust program compiles, there is already a decent chance that it will work – a phenomenon previously only observed with more academic, less accessible languages such as Haskell.

This safety – both type safety and memory safety – does come with a cost, though. Rust has a reputation for having a steep on-ramp, where newcomers have to go through the initiation rituals of fighting the borrow checker, redesigning their data structures and being befuddled by lifetimes. A Rust program that compiles may have a good chance of just working, but the struggle to get it to compile is real – even with the Rust compiler's remarkably helpful error diagnostics.

As a result, this book is aimed at a slightly different level than other Effective <Language> books; there are more Items that cover the concepts that are new with Rust, even though the official documentation already includes good introductions to these topics. These Items have titles like "Understand…" and "Familiarize yourself with…".

Rust's safety also leads to a complete absence of Items titled "Never…". If you really should never do something, the compiler will generally prevent you from doing it.

That said, the text still assumes an understanding of the basics of the language. It also assumes the 2018 edition of Rust, using the stable toolchain.

The specific rustc version used for code fragments and error messages is 1.49. Rust is now stable enough (and has sufficient back-compatibility guarantees) that the code fragments are unlikely to need changes for later versions, but the error messages may vary with your particular compiler version.

The text also has a number of references to and comparisons with C++, as this is probably the closest equivalent language (particularly with C++11's move semantics), and the most likely previous language that newcomers to Rust will have encountered.

The Items that make up the book are divided into six sections:

  • Types: Suggestions that revolve around Rust's core type system.
  • Concepts: Core ideas that form the design of Rust.
  • Dependencies: Advice for working with Rust's package ecosystem.
  • Tooling: Suggestions on how to improve your codebase by going beyond just the Rust compiler.
  • Asynchronous Rust: Advice for working with Rust's async mechanisms.
  • Beyond Standard Rust: Suggestions for when you have to work beyond Rust's standard, safe environment.

Although the "Concepts" section is arguably more fundamental than the "Types" section, it is deliberately placed second so that readers who are reading from beginning to end can build up some confidence first.

The following markers, borrowing Ferris from the Rust Book, are used to identify code that isn't right in some way:

  • This code does not compile!
  • This code panics!
  • This code block contains unsafe code.
  • This code does not produce the desired behaviour.


The first section of this book covers advice that revolves around Rust's type system. This type system is more expressive than that of other mainstream languages; it has more in common with "academic" languages such as OCaml or Haskell.

One core part of this is Rust's enum type, which is considerably more expressive than the enumeration types in other languages, and which allows for algebraic data types.

The other core pillar of Rust's type system is the trait type. Traits are roughly equivalent to interface types in other languages, but they are also tied to Rust's generics (Item 11), to allow interface re-use without runtime overhead.

Item 1: Use the type system to express your data structures

"who called them programers and not type writers" – thingskatedid@

The basics of Rust's type system are pretty familiar to anyone coming from another statically typed programming language (such as C++, Go or Java). There's a collection of integer types with specific sizes, both signed (i8, i16, i32, i64, i128) and unsigned (u8, u16, u32, u64, u128).

There are also signed (isize) and unsigned (usize) integers whose size matches the pointer size on the target system. Rust isn't a language where you're going to be doing much in the way of converting between pointers and integers, so that characterization is rarely relevant in practice. However, standard collections return their size as a usize (from .len()), so usize values are quite common when indexing collections – which is obviously fine from a capacity perspective, as an in-memory collection can't hold more items than there are memory addresses on the system.
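As a small illustrative sketch of how usize naturally appears when working with collections:

```rust
fn main() {
    let values = vec![10, 20, 30];
    // `len()` returns `usize`, and indexing also takes `usize`.
    let count: usize = values.len();
    assert_eq!(values[count - 1], 30);

    // An index held in a different integer type needs an
    // explicit conversion before it can be used.
    let idx: u8 = 1;
    assert_eq!(values[idx as usize], 20);
}
```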

The integral types do give us the first hint that Rust is a stricter world than C++ – attempting to put a quart (i32) into a pint pot (i16) generates a compile-time error.

        let x: i32 = 42;
        let y: i16 = x;
error[E0308]: mismatched types
  --> use-types/src/
14 |         let y: i16 = x;
   |                ---   ^ expected `i16`, found `i32`
   |                |
   |                expected due to this
help: you can convert an `i32` to an `i16` and panic if the converted value doesn't fit
14 |         let y: i16 = x.try_into().unwrap();
   |                      ^^^^^^^^^^^^^^^^^^^^^

This is reassuring: Rust is not going to sit there quietly while the programmer does things that are risky. It also gives an early indication that while Rust has stronger rules, it also has helpful compiler messages that point the way to how to comply with the rules. The suggested solution raises the question of how to handle situations where the conversion would alter the value, and we'll have more to say on both error handling (Item 4) and using panic! (Item 17) later.
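As a sketch of what the compiler's suggestion looks like in practice – try_into() (from the std::convert::TryInto trait, which needs to be imported in the 2018 edition) returns a Result, so the caller chooses between panicking and handling the out-of-range case:

```rust
use std::convert::TryInto;

fn main() {
    let x: i32 = 42;
    // Panic if the value doesn't fit, as the compiler suggested.
    let y: i16 = x.try_into().unwrap();
    assert_eq!(y, 42);

    // Or handle the out-of-range case explicitly.
    let big: i32 = 70_000;
    let narrow: Result<i16, _> = big.try_into();
    assert!(narrow.is_err());
}
```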

Rust also doesn't allow some things that might appear "safe":

        let x = 42i32; // Integer literal with type suffix
        let y: i64 = x;
error[E0308]: mismatched types
  --> use-types/src/
23 |         let y: i64 = x;
   |                ---   ^
   |                |     |
   |                |     expected `i64`, found `i32`
   |                |     help: you can convert an `i32` to an `i64`: `x.into()`
   |                expected due to this

Here, the suggested solution doesn't raise the spectre of error handling, but the conversion does still need to be explicit. We'll discuss type conversions in more detail later (Item 6).

Continuing with the unsurprising primitive types, Rust has a bool type, floating point types (f32, f64) and a unit type () (like C's void).

More interesting is the char character type, which holds a Unicode value (similar to Go's rune type). Although this is stored as 4 bytes internally, there are again no silent conversions to or from a 32-bit integer.

This precision in the type system forces you to be explicit about what you're trying to express – a u32 value is different from a char, which in turn is different from a sequence of UTF-8 bytes, which in turn is different from a sequence of arbitrary bytes, and it's up to you to specify exactly which you mean1. Joel Spolsky's famous blog post can help you understand which you need.

Of course, there are helper methods that allow you to convert between these different types, but their signatures force you to handle (or explicitly ignore) the possibility of failure. For example, a Unicode code point2 can always be represented in 32 bits, so 'a' as u32 is allowed, but the other direction is trickier (as there are u32 values that are not valid Unicode code points):

  • char::from_u32 returns an Option<char> forcing the caller to handle the failure case
  • char::from_u32_unchecked makes the assumption of validity, but is marked unsafe as a result, forcing the caller to use unsafe too (Item 15).
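A minimal sketch of both directions of conversion (0xD800 is a UTF-16 surrogate value, and so is not a valid char):

```rust
fn main() {
    // Widening a `char` to `u32` always succeeds.
    assert_eq!('a' as u32, 0x61);

    // The reverse direction is fallible, so it returns `Option<char>`.
    assert_eq!(std::char::from_u32(0x61), Some('a'));
    // 0xD800 is a surrogate value, not a valid Unicode scalar value.
    assert_eq!(std::char::from_u32(0xD800), None);
}
```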

Moving on to aggregate types, Rust has:

  • Arrays, which hold multiple instances of a single type, where the number of instances is known at compile time. For example [u32; 4] is four 4-byte integers in a row.
  • Tuples, which hold instances of multiple heterogeneous types, where the number of elements and their types are known at compile time, for example (WidgetOffset, WidgetSize, WidgetColour). If the types in the tuple aren't distinctive – for example (i32, i32, &'static str, bool) – it's better to give each element a name and use…
  • Structs, which also hold instances of heterogeneous types known at compile time, but which allows both the overall type and the individual fields to be referred to by name.

The tuple struct is a cross-breed of a struct with a tuple: there's a name for the overall type, but no names for the individual fields – they are referred to by number instead: s.0, s.1, etc.

    struct TextMatch(usize, String);
    let m = TextMatch(12, "needle".to_owned());
    assert_eq!(m.0, 12);

This brings us to the jewel in the crown of Rust's type system, the enum.

In its basic form, it's hard to see what there is to get excited about. As with other languages, the enum allows you to specify a set of mutually exclusive values, possibly with a numeric or string value attached.

    enum HttpResultCode {
        Ok = 200,
        NotFound = 404,
        Teapot = 418,
    }
    let code = HttpResultCode::NotFound;
    assert_eq!(code as i32, 404);

Because each enum definition creates a distinct type, this can be used to improve readability and maintainability of functions that take bool arguments. Instead of:

    print(/* both_sides= */ true, /* colour= */ false);

a version that uses a pair of enums:

    enum Sides {
        Both,
        Single,
    }

    enum Output {
        BlackAndWhite,
        Colour,
    }

    fn safe_print(sides: Sides, colour: Output) {
        // ...
    }
is more type-safe and easier to read at the point of invocation:

    safe_print(Sides::Both, Output::BlackAndWhite);

Unlike the bool version, if a library user were to accidentally flip the order of the arguments, the compiler would immediately complain:

error[E0308]: mismatched types
  --> use-types/src/
84 |     safe_print(Output::BlackAndWhite, Sides::Single);
   |                ^^^^^^^^^^^^^^^^^^^^^ expected enum `Sides`, found enum `Output`
error[E0308]: mismatched types
  --> use-types/src/
84 |     safe_print(Output::BlackAndWhite, Sides::Single);
   |                                       ^^^^^^^^^^^^^ expected enum `Output`, found enum `Sides`

The type safety of Rust's enums continues with the match expression:

        let msg = match code {
            HttpResultCode::Ok => "Ok",
            HttpResultCode::NotFound => "Not found",
            // forgot to deal with the all-important "I'm a teapot" code
        };
error[E0004]: non-exhaustive patterns: `Teapot` not covered
  --> use-types/src/
51 | /     enum HttpResultCode {
52 | |         Ok = 200,
53 | |         NotFound = 404,
54 | |         Teapot = 418,
   | |         ------ not covered
55 | |     }
   | |_____- `HttpResultCode` defined here
65 |           let msg = match code {
   |                           ^^^^ pattern `Teapot` not covered
   = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms
   = note: the matched value is of type `HttpResultCode`

The compiler forces the programmer to consider all of the possibilities3 that are represented by the enum, even if the result is just to add a default arm _ => {}. (Note that modern C++ compilers can and do warn about missing switch arms for enums as well.)

The true power of Rust's enum feature comes from the fact that each variant can have data that comes along with it, making it into an algebraic data type (ADT). This is less familiar to programmers of mainstream languages; in C/C++ terms it's like a combination of an enum with a union – only type-safe.

This means that the invariants of the program's data structures can be encoded into Rust's type system; states that don't comply with those invariants won't even compile. A well-designed enum makes the creator's intent clear to humans as well as to the compiler:

pub enum SchedulerState {
    Inactive,
    Pending(Vec<Job>),
    Running(HashMap<CpuId, Vec<Job>>),
}

Just from the type definition, it's reasonable to guess that Jobs get queued up in the Pending state until the scheduler is fully active, at which point they're assigned to some per-CPU pool.
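A match over such a state enum then forces each state's payload to be dealt with. The following sketch uses simplified stand-ins for the CpuId and Job types, and assumes Inactive and Pending variants alongside the Running variant – all assumptions for illustration:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real types (assumptions for illustration).
type CpuId = u32;
type Job = String;

pub enum SchedulerState {
    Inactive,
    Pending(Vec<Job>),
    Running(HashMap<CpuId, Vec<Job>>),
}

// Each state carries exactly the data that is valid for that state.
fn describe(state: &SchedulerState) -> String {
    match state {
        SchedulerState::Inactive => "inactive".to_string(),
        SchedulerState::Pending(queue) => format!("{} job(s) queued", queue.len()),
        SchedulerState::Running(pools) => format!("running on {} CPU(s)", pools.len()),
    }
}

fn main() {
    let state = SchedulerState::Pending(vec!["compile".to_string()]);
    assert_eq!(describe(&state), "1 job(s) queued");
}
```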

This highlights the central theme of this Item, which is to use Rust's type system to express the concepts that are associated with the design of your software.

Returning to the power of the enum, there are two concepts that are so common that Rust includes built-in enum types to express them.

The first is the concept of an Option: either there's a value of a particular type (Some(T)), or there isn't (None). Always use Option for values that can be absent; never fall back to using sentinel values (-1, nullptr, …) to try to express the same concept in-band.

There is one subtle point to consider though. If you're dealing with a collection of things, you need to decide whether having zero things in the collection is the same as not having a collection. For most situations, the distinction doesn't arise and you can go ahead and use Vec<Thing>: a count of zero things implies an absence of things.

However, there are rarer scenarios where the two cases do need to be distinguished with Option<Vec<Thing>> – for example, a cryptographic system might need to distinguish between "payload transported separately" and "empty payload provided". (This is related to the debates around NULL column markers in SQL.)

One common edge case that's in the middle is a String which might be absent – does "" or None make more sense to indicate the absence of a value? Either way works, but Option<String> clearly communicates the possibility that this value may be absent.
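A small sketch of the distinction between an absent value and an empty one:

```rust
fn main() {
    // No payload at all versus an empty payload.
    let no_payload: Option<Vec<u8>> = None;
    let empty_payload: Option<Vec<u8>> = Some(Vec::new());
    assert!(no_payload.is_none());
    assert_eq!(empty_payload.map(|v| v.len()), Some(0));

    // Similarly for strings: `None` makes the absence explicit,
    // rather than overloading the meaning of "".
    let nickname: Option<String> = None;
    assert_eq!(nickname.unwrap_or_default(), "");
}
```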

The second common concept arises from error processing: if a function fails, how should that failure be reported? Historically, special sentinel values (e.g. -errno return values from Linux system calls) or global variables (errno for POSIX systems) were used. More recently, languages that support multiple or tuple return values (such as Go) from functions may have a convention of returning a (result, error) pair, assuming the existence of some suitable "zero" value for the result when the error is non-"zero".

In Rust, always encode the result of an operation that might fail as a Result<T, E>. The T type holds the successful result, and the E type holds error details on failure. Using the standard type makes the intent of the design clear, and allows the use of standard transformations (Item 3) and error processing (Item 4); it also makes it possible to streamline error processing with the ? operator.

1: The situation gets muddier still if the filesystem is involved, since filenames on popular platforms are somewhere in between arbitrary bytes and UTF-8 sequences: see the std::ffi::OsString documentation.

2: Technically, a Unicode scalar value rather than a code point

3: This also means that adding a new variant to an existing enum in a library is a breaking change (Item 20): clients of the library will need to change their code to cope with the new variant. If an enum is really just an old-style list of values, this behaviour can be avoided by marking it as a non-exhaustive enum; see Item 20

Item 2: Use the type system to express common behaviour

Item 1 discussed how to express data structures in the type system; this Item moves on to discuss the encoding of behaviour in Rust's type system.

The first stage of this is to add methods to data structures: functions that act on an item of that type, identified by self. Methods can be added to struct types, but can also be added to enum types, in keeping with the pervasive nature of Rust's enum (Item 1). The name of a method gives a label for the behaviour that it encodes, and the method signature gives type information for how to invoke it.

Code that needs to make use of behaviour associated with a type can accept an item of that type (or a reference to it), and invoke the methods needed. However, this tightly couples the two parts of the code; the code that invokes the method only accepts exactly one input type.

If greater flexibility is needed, the desired behaviour can be abstracted into the type system. The simplest such abstraction is the function pointer: a pointer to (just) some code, with a type that reflects the signature of the function. The type is checked at compile time, so by the time the program runs the value is just the size of a pointer.

    fn sum(x: i32, y: i32) -> i32 {
        x + y
    }
    // Explicit coercion to `fn` type is required...
    let op: fn(i32, i32) -> i32 = sum;

Function pointers have no other data associated with them, so they can be treated as values in various ways:

    // `fn` types implement `Copy`
    let op1 = op;
    let op2 = op;
    // `fn` types implement `Eq`
    assert!(op1 == op2);
    // `fn` implements `std::fmt::Pointer`, used by the {:p} format specifier.
    println!("op = {:p}", op);
    // Example output: "op = 0x101e9aeb0"

One technical detail to watch out for: the explicit coercion to a fn type is needed, because just using the name of a function doesn't give you something of fn type;

        let op1 = sum;
        let op2 = sum;
        // Both op1 and op2 are of a type that cannot be named in user code,
        // and this internal type does not implement `Eq`.
        assert!(op1 == op2);
error[E0369]: binary operation `==` cannot be applied to type `fn(i32, i32) -> i32 {main::sum}`
  --> use-types-behaviour/src/
53 |         assert!(op1 == op2);
   |                 --- ^^ --- fn(i32, i32) -> i32 {main::sum}
   |                 |
   |                 fn(i32, i32) -> i32 {main::sum}
help: you might have forgotten to call this function
53 |         assert!(op1( /* arguments */ ) == op2);
   |                 ^^^^^^^^^^^^^^^^^^^^^^
help: you might have forgotten to call this function
53 |         assert!(op1 == op2( /* arguments */ ));
   |                        ^^^^^^^^^^^^^^^^^^^^^^

Instead, the compiler error indicates that the type is something like fn(i32, i32) -> i32 {main::sum}, a type that's entirely internal to the compiler (i.e. could not be written in user code), and which identifies the specific function as well as its signature. To put it another way, the type of sum encodes both the function's signature and its location (for optimization reasons); this type can be automatically coerced (Item 6) to a fn type.

Bare function pointers are very limiting, in two ways:

  • The data provided when invoking a function pointer is limited to just what's held in its arguments (along with any global data).
  • The only information encoded in the function pointer type is the signature of this particular function.

For the first of these, Rust supports closures: chunks of code defined by lambda expressions which can capture parts of their environment. At runtime, Rust automatically converts a lambda together with its captured environment into a closure that implements one of Rust's Fn* traits, and this closure can in turn be invoked.

    let amount_to_add = 2;
    let closure = |y| y + amount_to_add;
    assert_eq!(closure(5), 7);

The three different Fn* traits express some nice distinctions that are needed because of this environment capturing behaviour; the compiler automatically implements the appropriate subset of these Fn* traits for any lambda expression in the code (and it's not possible to manually implement any of these traits1, unlike C++'s operator() overload).

  • FnOnce describes a closure that can only be called once. If some part of its environment is moved into the closure, then that move can only happen once – there's no other copy of the source item to move from – and so the closure can only be invoked once.
  • FnMut describes a closure that can be called repeatedly, and which can make changes to its environment because it mutably borrows from the environment.
  • Fn describes a closure that can be called repeatedly, and which only borrows values from the environment immutably.

Each of the latter two traits in this list has a trait bound of the preceding trait: a closure that can be repeatedly called with immutable references (Fn) is also safe to call with mutable references (FnMut), and a closure that can be called repeatedly with mutable references (FnMut) is also safe to call once, with moved items rather than mutable references (FnOnce). The bare function pointer type fn also notionally belongs at the end of this list; any (not-unsafe) fn type automatically implements all of the Fn* traits, because it borrows nothing from the environment.

As a result, when writing code that accepts closures, use the most general Fn* trait that works, to allow the greatest flexibility for callers – for example, accept FnOnce for closures that are only used once. The same reasoning also leads to advice to prefer Fn* trait bounds to bare function pointers (fn).
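A sketch of this advice, using hypothetical helpers call_once and call_twice whose bounds match how each closure is actually used:

```rust
// Accept the most general `Fn*` trait that matches the call pattern:
// called at most once => `FnOnce`; called repeatedly => `Fn` or `FnMut`.
fn call_once<F: FnOnce() -> String>(f: F) -> String {
    f()
}

fn call_twice<F: Fn() -> i32>(f: F) -> i32 {
    f() + f()
}

fn main() {
    let name = "world".to_string();
    // This closure moves `name` out when invoked, so it only
    // implements `FnOnce`.
    let once = move || name;
    assert_eq!(call_once(once), "world");

    let x = 20;
    // This closure only borrows `x` immutably, so it implements `Fn`.
    assert_eq!(call_twice(|| x + 1), 42);

    // A bare `fn` implements all of the `Fn*` traits too.
    fn two() -> i32 {
        2
    }
    assert_eq!(call_twice(two), 4);
}
```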

The Fn* traits are more flexible than a bare function pointer, but they can still only describe the behaviour of a single function, and that only in terms of the function's signature. Continuing to generalize, collections of related operations are described in the type system by a trait: a collection of related methods that some underlying item makes publicly available. Each method in a trait also has a name, providing a label which allows the compiler to disambiguate methods with the same signature, and more importantly which allows programmers to deduce the intent of the method.

A Rust trait is roughly analogous to an "interface" in Go and Java, or to an "abstract class" (all virtual methods, no data members) in C++. Implementations of the trait must provide all the methods (but note that the trait definition can include a default implementation, Item 12), and can also have associated data that those implementations make use of. This means that code and data gets encapsulated together, in a somewhat object-oriented manner.

Returning to the original situation, code that accepts a struct and calls methods on it is more flexible if instead the struct implements some trait, so that the calling code invokes trait methods rather than struct methods. This leads to the same kind of advice that turns up for other OO-influenced languages2: prefer accepting trait types to concrete types if future flexibility is anticipated.

Sometimes, there is some behaviour that you want to distinguish in the type system, but which cannot be expressed as some specific method signature in a trait definition. For example, consider a trait for sorting collections; an implementation might be stable (elements that compare the same will appear in the same order before and after the sort) but there's no way to express this in the sort method arguments.

In this case, it's still worth using the type system to track this requirement, using a marker trait.

pub trait Sort {
    /// Re-arrange contents into sorted order.
    fn sort(&mut self);
}

/// Marker trait to indicate that a [`Sort`] implementation sorts stably.
pub trait StableSort: Sort {}

A marker trait has no methods, but an implementation still has to declare that it is implementing the trait – which acts as a promise from the implementer: "I solemnly swear that my implementation sorts stably". Code that relies on a stable sort can then specify the StableSort trait bound, relying on the honour system to preserve its invariants. Use marker traits to distinguish behaviours that cannot be expressed in the trait method signatures.
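Putting the pieces together, a sketch of how an implementation might opt in to the marker trait and how consuming code can demand it (the Vec<i32> implementation here, which delegates to the standard library's stable slice sort, is an illustrative assumption):

```rust
pub trait Sort {
    /// Re-arrange contents into sorted order.
    fn sort(&mut self);
}

/// Marker trait: implementations promise that they sort stably.
pub trait StableSort: Sort {}

// Illustrative implementation: delegate to the standard library's
// slice sort, which is documented to be stable.
impl Sort for Vec<i32> {
    fn sort(&mut self) {
        self.as_mut_slice().sort();
    }
}
impl StableSort for Vec<i32> {}

// Code that relies on stability asks for the marker trait bound.
fn sort_stably<T: StableSort>(collection: &mut T) {
    collection.sort();
}

fn main() {
    let mut v = vec![3, 1, 2];
    sort_stably(&mut v);
    assert_eq!(v, vec![1, 2, 3]);
}
```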

Once behaviour has been encapsulated into Rust's type system as a trait, there are two ways it can be used:

  • as a trait bound, which constrains what types are acceptable for a generic data type or method at compile-time, or
  • as a trait object, which constrains what types can be stored or passed to a method at run-time.

(Item 11 discusses which of the two you should prefer, where possible3.)

A trait bound indicates that generic code which is parameterized by some type T can only be used when that type T implements some specific trait. The presence of the trait bound means that the implementation of the generic can use the methods from that trait, secure in the knowledge that the compiler will ensure that any T that compiles does indeed have those methods. This check happens at compile-time, when the generic is monomorphized (Rust's term for what C++ would call "template instantiation").

This restriction on the target type T is explicit, encoded in the trait bounds: the generic can only be instantiated with types that satisfy the trait bounds. This is in contrast to the equivalent situation in C++, where the constraints on the type T used in a template<typename T> are implicit4: C++ template code still only compiles if all of the referenced methods are available at compile-time, but the checks are purely based on methods and signatures. (This "duck typing" leads to the chance of confusion; a C++ template that uses t.pop() might compile for a T type parameter of either Stack or Balloon – which is unlikely to be desired behaviour.)

The need for explicit trait bounds also means that a large fraction of generics use trait bounds. To see why this is, turn the observation around and consider what can be done with a struct Thing<T> where there are no trait bounds on T. Without a trait bound, the Thing can only perform operations that apply to any type T; this allows for containers, collections and smart pointers, but not much else. Anything that uses the type T is going to need a trait bound.

pub fn dump_sorted<T>(mut collection: T)
where
    T: Sort + IntoIterator,
    T::Item: Debug,
{
    // Next line requires `T: Sort` trait bound.
    collection.sort();
    // Next line requires `T: IntoIterator` trait bound.
    for item in collection {
        // Next line requires `T::Item: Debug` trait bound.
        println!("{:?}", item);
    }
}

So the advice here is to use trait bounds to express requirements on the types used in generics, but it's easy advice to follow – the compiler will force you to comply with it regardless.

A trait object is the other way of making use of the encapsulation defined by a trait, but here different possible implementations of the trait are chosen at run-time rather than compile-time. This dynamic dispatch is analogous to the use of virtual functions in C++, and under the covers Rust has 'vtable' objects that are roughly analogous to those in C++.

This dynamic aspect of trait objects also means that they always have to be handled indirectly, via a reference (&dyn Trait) or a pointer (Box<dyn Trait>). This is because the size of the object implementing the trait isn't known at compile time – it could be a giant struct or a tiny enum – so there's no way to allocate the right amount of space for a bare trait object.

A similar concern means that traits used as trait objects cannot have methods that return the Self type, because the compiled-in-advance code that uses the trait object would have no idea how big that Self might be.

A trait that has a generic method fn method<T>(t: T) allows for the possibility of an infinite number of implemented methods, for all the different types T that might exist. This is fine for a trait used as a trait bound, because the infinite set of possibly invoked generic methods becomes a finite set of actually invoked generic methods at compile time. The same is not true for a trait object: the code available at compile time has to cope with all possible Ts that might arrive at run-time.

These two restrictions – no returning Self and no generic methods – are combined into the concept of object safety. Only object safe traits can be used as trait objects.
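A sketch of an object safe trait used as a trait object (the Shape trait here is a hypothetical example): its method takes &self and returns a concrete type, so differently-sized implementations can sit behind Box<dyn Shape> and be dispatched at run-time:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Shape for Square {
    fn area(&self) -> f64 {
        self.0 * self.0
    }
}
impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.0 * self.0
    }
}

fn main() {
    // The implementations have different sizes, so trait objects are
    // held behind a pointer; each call goes through the vtable.
    let shapes: Vec<Box<dyn Shape>> = vec![Box::new(Square(3.0)), Box::new(Circle(1.0))];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    assert!((total - (9.0 + std::f64::consts::PI)).abs() < 1e-9);
}
```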

1: At least, not in stable Rust at the time of writing. The unboxed_closures and fn_traits experimental features may change this in future.

2: For example, Effective Java Item 64: Refer to objects by their interfaces

3: Spoiler: trait bounds.

4: The addition of concepts in C++20 allows explicit specification of constraints on template types, but the checks are still only performed when the template is instantiated, not when it is declared.

Item 3: Avoid matching Option and Result

Item 1 expounded the virtues of enum and showed how match expressions force the programmer to take all possibilities into account; this Item explores situations where you should prefer to avoid match expressions – explicitly at least.

Item 1 also introduced the two ubiquitous enums that are provided by the Rust standard library:

  • Option<T> to express that a value (of type T) may not be present
  • Result<T, E>, for when an operation to return a value (of type T) may not succeed, and may instead return an error (of type E).

For these particular enums, explicitly using match often leads to code that is less compact than it needs to be, and which isn't idiomatic Rust.

The first situation where a match is unnecessary is when only the value is relevant, and the absence of value (and any associated error) can just be ignored.

    struct S {
        field: Option<i32>,
    }

    match &s.field {
        Some(i) => println!("field is {}", i),
        None => {}
    }

For this situation, an if let expression is one line shorter and, more importantly, clearer:

    if let Some(i) = &s.field {
        println!("field is {}", i);
    }

However, most of the time the absence of a value, and an associated error, is going to be something that the programmer has to deal with. Designing software to cope with failure paths is hard, and most of that is essential complexity that no amount of syntactic support can help with – deciding what should happen if an operation fails.

In some situations, the right decision is to perform an ostrich manoeuvre and explicitly not cope with failure. Doing this with an explicit match would be needlessly verbose:

    let result = std::fs::File::open("/etc/passwd");
    let f = match result {
        Ok(f) => f,
        Err(_e) => panic!("Failed to open /etc/passwd!"),
    };

Both Option and Result provide a pair of methods that extract their inner value and panic! if it's absent: unwrap and expect. The latter allows the error message on failure to be personalized, but in either case the resulting code is shorter and simpler – error handling is delegated to the .unwrap() suffix (but is still present).

    let f = std::fs::File::open("/etc/passwd").unwrap();

Be clear, though: these helper functions still panic!, so choosing to use them is the same as choosing to panic! (Item 17).

However, in many situations, the right decision for error handling is to defer the decision to somebody else. This is particularly true when writing a library, where the code may be used in all sorts of different environments that can't be foreseen by the library author. To make that somebody else's job easier, prefer Result to Option, even though this may involve conversions between different error types (Item 4); Result also has a #[must_use] attribute to nudge library users in the right direction.

Explicitly using a match allows an error to propagate, but at the cost of some visible boilerplate (reminiscent of Go):

    pub fn find_user(username: &str) -> Result<UserId, std::io::Error> {
        let f = match std::fs::File::open("/etc/passwd") {
            Ok(f) => f,
            Err(e) => return Err(e),
        };
        // ...
    }

The key ingredient for reducing boilerplate is Rust's question mark operator ?. This piece of syntactic sugar takes care of matching the Err arm and the return Err(...) expression in a single character:

    pub fn find_user(username: &str) -> Result<UserId, std::io::Error> {
        let f = std::fs::File::open("/etc/passwd")?;
        // ...
    }

Newcomers to Rust sometimes find this disconcerting: the question mark can be hard to spot on first glance, leading to disquiet as to how the code can possibly work. However, even with a single character, the type system is still at work, ensuring that all of the possibilities expressed in the relevant types (Item 1) are covered – leaving the programmer to focus on the mainline code path without distractions.

What's more, there's generally no cost to these apparent method invocations: they are all generic functions marked as #[inline], so the generated code will typically compile to machine code that's identical to the manual version.

These two factors taken together mean that you should prefer Option and Result transforms to explicit match expressions.
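As an illustration (with invented values), a chain of transformation methods often replaces several layers of nested match expressions:

```rust
fn main() {
    let input: Option<&str> = Some("42");

    // Explicit match version: verbose nesting for a simple operation.
    let doubled_match = match input {
        Some(s) => match s.parse::<i32>() {
            Ok(n) => Some(n * 2),
            Err(_) => None,
        },
        None => None,
    };

    // Transform version: `and_then` + `ok()` + `map` express the same logic.
    let doubled = input.and_then(|s| s.parse::<i32>().ok()).map(|n| n * 2);

    assert_eq!(doubled_match, Some(84));
    assert_eq!(doubled, Some(84));
}
```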

In the previous example, the error types lined up: both the inner and outer methods expressed errors as std::io::Error. That's often not the case; one function may accumulate errors from a variety of different sub-libraries, each of which uses different error types. Error mapping in general is discussed in Item 4; for now, just be aware that a manual mapping:

    pub fn find_user(username: &str) -> Result<UserId, String> {
        let f = match std::fs::File::open("/etc/passwd") {
            Ok(f) => f,
            Err(e) => {
                return Err(format!("Failed to open password file: {:?}", e))
            }
        };
        // ...
    }

can be more succinctly and idiomatically expressed with the .map_err() transformation:

    pub fn find_user(username: &str) -> Result<UserId, String> {
        let f = std::fs::File::open("/etc/passwd")
            .map_err(|e| format!("Failed to open password file: {:?}", e))?;
        // ...
    }

This approach generalizes more widely. The question mark operator is a big hammer; use transformation methods on Option and Result types to manoeuvre them into a position where they can be a nail.

The standard library provides a wide variety of these transformation methods to make this possible, as shown in the following map. In line with Item 17, methods that can panic are highlighted in red.

Option/Result transformations

(The online version of this diagram is clickable: each box links to the relevant documentation.)

One common situation that isn't covered by the diagram is dealing with references. For example, consider a structure that optionally holds some data:

    struct InputData {
        payload: Option<Vec<u8>>,
    }

A method on this struct which tries to pass the payload to an encryption function with signature (&[u8]) -> Vec<u8> fails if there's a naive attempt to take a reference:

    impl InputData {
        pub fn encrypted(&self) -> Vec<u8> {
            encrypt(&self.payload.unwrap_or(vec![]))
        }
    }

error[E0507]: cannot move out of `self.payload` which is behind a shared reference
  --> transform/src/
57 |             encrypt(&self.payload.unwrap_or(vec![]))
   |                      ^^^^^^^^^^^^
   |                      |
   |                      move occurs because `self.payload` has type `Option<Vec<u8>>`, which does not implement the `Copy` trait
   |                      help: consider borrowing the `Option`'s content: `self.payload.as_ref()`

The error message describes exactly what's needed to make the code work: the as_ref() method1 on Option. This method converts a reference-to-an-Option to be an Option-of-a-reference:

    impl InputData {
        pub fn encrypted(&self) -> Vec<u8> {
            encrypt(self.payload.as_ref().unwrap_or(&vec![]))
        }
    }

To sum up:

  • Get used to the transformations of Option and Result, and prefer Result to Option.
    • Use .as_ref() as needed when transformations involve references.
  • Use them in preference to explicit match operations.
  • In particular, use them to transform result types into a form where the ? operator applies.

1: Note that this method is separate from the AsRef trait, even though the method name is the same.

Item 4: Prefer idiomatic Error variants

Item 3 described how to use the transformations that the standard library provides for the Option and Result types to allow concise, idiomatic handling of result types using the ? operator. It stopped short of discussing how best to handle the variety of different error types E that arise as the second type argument of a Result<T, E>; that's the subject of this Item.

This is only really relevant when there are a variety of different error types in play; if all of the different errors that a function encounters are already of the same type, it can just return that type. When there are errors of different types, there's a decision to be made about whether the sub-error type information should be preserved.

The Error Trait

It's always good to understand what the standard traits (Item 5) involve, and the relevant trait here is std::error::Error. The E type parameter for a Result doesn't have to be a type that implements Error, but it's a common convention that allows wrappers to express appropriate trait bounds – so prefer to implement Error for your error types.

The first thing to notice is that the only hard requirement for Error types is the trait bounds: any type that implements Error also has to implement both:

  • the Display trait, meaning that it can be format!ed with {}, and
  • the Debug trait, meaning that it can be format!ed with {:?}.

In other words, it should be possible to display Error types to both the user and the programmer.

The only1 method in the trait is source(), which allows an Error type to expose an inner, nested error. This method is optional – it comes with a default implementation (Item 12) returning None, indicating that inner error information isn't available.
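Because source() returns a plain dyn Error reference, a caller can walk an entire chain of nested errors without knowing their concrete types. A sketch, using an invented two-level error and a hypothetical chain() helper:

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct Inner;
impl fmt::Display for Inner {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "inner failure")
    }
}
impl Error for Inner {} // default source() returns None

#[derive(Debug)]
struct Outer(Inner);
impl fmt::Display for Outer {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "outer failure")
    }
}
impl Error for Outer {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.0) // expose the nested error
    }
}

// Collect the messages of an error and all of its sources.
fn chain(err: &dyn Error) -> Vec<String> {
    let mut msgs = vec![err.to_string()];
    let mut cur = err.source();
    while let Some(e) = cur {
        msgs.push(e.to_string());
        cur = e.source();
    }
    msgs
}

fn main() {
    let msgs = chain(&Outer(Inner));
    assert_eq!(msgs, vec!["outer failure".to_string(), "inner failure".to_string()]);
}
```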

Minimal Errors

If nested error information isn't needed, then an error type need not be much more than a String – one rare occasion where a "stringly-typed" variable might be appropriate. It does need to be a little more than a String, though; while it's possible to use String as the E type parameter:

    pub fn find_user(username: &str) -> Result<UserId, String> {
        let f = std::fs::File::open("/etc/passwd")
            .map_err(|e| format!("Failed to open password file: {:?}", e))?;
        // ...
    }

a String doesn't implement Error, which we'd prefer so that other areas of code can deal in Errors. It's not possible to impl Error for String, because neither the trait nor the type belong to us (the so-called orphan rule):

    impl std::error::Error for String {}
error[E0117]: only traits defined in the current crate can be implemented for arbitrary types
  --> errors/src/
18 |     impl std::error::Error for String {}
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^------
   |     |                          |
   |     |                          `String` is not defined in the current crate
   |     impl doesn't use only types from inside the current crate
   = note: define and implement a trait or new type instead

A type alias doesn't help either, because it doesn't create a new type and so doesn't change the error message.

    pub type MyError = String;

    impl std::error::Error for MyError {}

    pub fn find_user(username: &str) -> Result<UserId, MyError> {
error[E0117]: only traits defined in the current crate can be implemented for arbitrary types
  --> errors/src/
38 |     impl std::error::Error for MyError {}
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^-------
   |     |                          |
   |     |                          `String` is not defined in the current crate
   |     impl doesn't use only types from inside the current crate
   = note: define and implement a trait or new type instead

As usual, the compiler error message gives a hint of how to solve the problem. Defining a tuple struct that wraps the String type (the "newtype" pattern) allows the Error trait to be implemented, provided that Debug and Display are implemented too:

    #[derive(Debug)]
    pub struct MyError(String);

    impl std::fmt::Display for MyError {
        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
            write!(f, "{}", self.0)
        }
    }

    impl std::error::Error for MyError {}

    pub fn find_user(username: &str) -> Result<UserId, MyError> {
        let f = std::fs::File::open("/etc/passwd").map_err(|e| {
            MyError(format!("Failed to open password file: {:?}", e))
        })?;
        // ...
    }

For convenience, it may make sense to implement the From<String> trait to allow string values to be easily converted into MyError instances (Item 6):

    impl std::convert::From<String> for MyError {
        fn from(msg: String) -> Self {
            Self(msg)
        }
    }

Sadly, the compiler doesn't quite have enough type information to figure out which type is needed in a map_err() invocation. As a result, the previous example (with MyError(format!(...))) can't be minimized further:

    pub fn find_user(username: &str) -> Result<UserId, MyError> {
        let f = std::fs::File::open("/etc/passwd").map_err(|e| {
            format!("Failed to open password file: {:?}", e).into()
        })?;
        // ...
    }
error[E0282]: type annotations needed
  --> errors/src/
95 |         let f = std::fs::File::open("/etc/passwd").map_err(|e| {
   |                                                    ^^^^^^^ cannot infer type for type parameter `F` declared on the associated function `map_err`
96 |             format!("Failed to open password file: {:?}", e).into()
   |             ------------------------------------------------------- this method call resolves to `T`

Nested Errors

The alternative scenario is where the content of nested errors is important enough that it should be preserved and made available to the caller.

Consider a library function that attempts to return the first line of a file as a string, as long as it is not too long. A moment's thought reveals (at least) three distinct types of failure that could occur:

  • The file might not exist, or might be inaccessible for reading.
  • The file might contain data that isn't valid UTF-8, and so can't be converted into a String.
  • The file might have a first line that is too long.

In line with Item 1, you can and should use the type system to express and encompass all of these possibilities as an enum:

#[derive(Debug)]
pub enum MyError {
    Io(std::io::Error),
    Utf8(std::string::FromUtf8Error),
    General(String),
}

The enum definition includes a derive(Debug), but to satisfy the Error trait a Display implementation is also needed.

impl std::fmt::Display for MyError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            MyError::Io(e) => write!(f, "IO error: {}", e),
            MyError::Utf8(e) => write!(f, "UTF-8 error: {}", e),
            MyError::General(s) => write!(f, "General error: {}", s),
        }
    }
}

It also makes sense to override the default source() implementation for easy access to nested errors.

use std::error::Error;

impl Error for MyError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            MyError::Io(e) => Some(e),
            MyError::Utf8(e) => Some(e),
            MyError::General(_) => None,
        }
    }
}

This allows the error handling to be concise while still preserving all of the type information across different classes of error:

use std::io::BufRead;

/// Return the first line of the given file.
pub fn first_line(filename: &str) -> Result<String, MyError> {
    let file = std::fs::File::open(filename).map_err(MyError::Io)?;
    let mut reader = std::io::BufReader::new(file);

    // (A real implementation could just use `reader.read_line()`)
    let mut buf = vec![];
    let len = reader.read_until(b'\n', &mut buf).map_err(MyError::Io)?;
    let result = String::from_utf8(buf).map_err(MyError::Utf8)?;
    if result.len() > MAX_LEN {
        return Err(MyError::General(format!("Line too long: {}", len)));
    }
    Ok(result)
}

It's also a good idea to implement the From trait for all of the sub-error types (Item 6):

impl From<std::io::Error> for MyError {
    fn from(e: std::io::Error) -> Self {
        Self::Io(e)
    }
}

impl From<std::string::FromUtf8Error> for MyError {
    fn from(e: std::string::FromUtf8Error) -> Self {
        Self::Utf8(e)
    }
}

This prevents library users from suffering under the orphan rules themselves: they aren't allowed to implement From on MyError, because both the trait and the struct are external to them. Better still, this allows for even more concision:

/// Return the first line of the given file.
pub fn first_line(filename: &str) -> Result<String, MyError> {
    let file = std::fs::File::open(filename)?;
    let mut reader = std::io::BufReader::new(file);
    let mut buf = vec![];
    let len = reader.read_until(b'\n', &mut buf)?;
    let result = String::from_utf8(buf)?;
    if result.len() > MAX_LEN {
        return Err(MyError::General(format!("Line too long: {}", len)));
    }
    Ok(result)
}

Trait Objects

The first approach to nested errors threw away all of the sub-error detail, just preserving some string output (format!("{:?}", err)). The second approach preserved the full type information for all possible sub-errors, but required a full enumeration of all possible types of sub-error.

This raises the question: is there a half-way house between these two approaches, preserving sub-error information without needing to manually include every possible error type?

Encoding the sub-error information as a trait object avoids the need for an enum variant for every possibility, but erases the details of the specific underlying error types. The receiver of such an object would have access to the methods of the Error trait and its trait bounds – source(), plus the Display and Debug implementations – but wouldn't know the original static type of the sub-error.

It turns out that this is possible, but it's surprisingly subtle. Part of the difficulty comes from the constraints on trait objects (Item 11), but Rust's coherence rules also come into play, which (roughly) say that there can be at most one implementation of a trait for a type.

A putative WrappedError would naively be expected to both implement the Error trait, and also to implement the From<Error> trait to allow sub-errors to be easily wrapped. That means that a WrappedError could be created from an inner WrappedError, as WrappedError itself implements Error, and that clashes with the blanket reflexive implementation of From:

error[E0119]: conflicting implementations of trait `std::convert::From<WrappedError>` for type `WrappedError`:
   --> errors/src/
241 | impl<E: 'static + Error> From<E> for WrappedError {
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    = note: conflicting implementation in crate `core`:
            - impl<T> From<T> for T;

David Tolnay's anyhow is a crate that has already solved these problems, and which adds other helpful features (such as stack traces) besides. As a result, it is rapidly becoming the standard recommendation for error handling – a recommendation seconded here: consider using the anyhow crate for error handling.


This item has covered a lot of ground, so a summary is in order:

  • The standard Error trait requires little of you, so prefer to implement it for your error types.
  • When dealing with heterogeneous underlying error types, decide whether preserving those types is needed.
    • If not, use anyhow to wrap sub-errors.
    • If they are needed, encode them in an enum and provide conversions.
  • Consider using the anyhow crate for convenient, idiomatic error handling.

It's your decision, but whatever you decide, encode it in the type system (Item 1).

1: Or at least the only non-deprecated, stable method.

Item 5: Familiarize yourself with standard traits

Rust encodes key behavioural aspects of its type system in the type system itself, through a collection of fine-grained standard traits that describe those behaviours.

Many of these traits will seem familiar to programmers coming from C++, corresponding to concepts such as copy-constructors, destructors, equality and assignment operators, etc.

As in C++, it's usually a good idea to implement many of these traits for your own types; the Rust compiler will give you helpful error messages if some operation needs one of these traits for your type, and it isn't present.

Implementing such a large collection of traits may seem daunting, but most of the common ones can be automatically applied to user-defined types, through use of derive macros. This leads to type definitions like:

    #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
    enum MyBooleanOption {
        Off,
        On,
    }

This fine-grained specification of behaviour can be disconcerting at first, but it's important to be familiar with the most common of these standard traits so that the available behaviours of a type definition can be immediately understood.

A rough one-sentence summary each of the standard traits that this Item covers is:

  • Clone: Items of this type can make a copy of themselves when asked.
  • Copy: If the compiler makes a bit-for-bit copy of this item's memory representation, the result is a valid new item.
  • Default: It's possible to make a new instance of this type with sensible default values.
  • PartialEq: There's a partial equivalence relation for items of this type – any two items can be definitively compared, but it's not always true that x==x.
  • Eq: There's an equivalence relation for items of this type: any two items can be definitively compared.
  • PartialOrd: Some items of this type can be compared and ordered.
  • Ord: All items of this type can be compared and ordered.
  • Hash: Items of this type can produce a stable hash of their contents when asked.
  • Debug: Items of this type can be displayed to programmers.
  • Display: Items of this type can be displayed to users.

These traits can all be derived for user-defined types, with the exception of Display (included here because of its overlap with Debug). However, there are occasions when a manual implementation – or no implementation – is preferable.

Rust also allows various built-in unary and binary operators to be overloaded for user-defined types, by implementing various traits from the std::ops module. These traits are not derivable, and are typically only needed for types that represent "algebraic" objects.

Other (non-deriveable) standard traits are covered in other Items, and so are not included here. These include:

  • Fn, FnOnce and FnMut: Items of this type represent closures that can be invoked. See Item 2.
  • Error: Items of this type represent error information that can be displayed to users or programmers, and which may hold nested sub-error information. See Item 4.
  • Drop: Items of this type perform processing when they are destroyed, which is essential for RAII patterns. See Item 10.
  • From and TryFrom: Items of this type can be automatically created from items of some other type, but with a possibility of failure in the latter case. See Item 6.
  • Deref and DerefMut: Items of this type are pointer-like objects that can be dereferenced to get access to an inner item. See Item 8.
  • Iterator and friends: Items of this type can be iterated over. See Item 9.
  • Send and Sync: Items of this type are safe to transfer between, or be referenced by, multiple threads. See Item 16.


The Clone trait indicates that it's possible to make a new copy of an item, by calling the clone() method. This is roughly equivalent to C++'s copy-constructor, but more explicit: the compiler will never silently invoke this method on its own (read on to the next section for that).

Clone can be derived; the macro implementation clones an aggregate type by cloning each of its members in turn, again, roughly equivalent to a default copy-constructor in C++. This makes the trait opt-in (by adding #[derive(Clone)]), in contrast to the opt-out behaviour in C++ (MyType(const MyType&) = delete;).

This is such a common and useful operation that it's more interesting to investigate the situations where you shouldn't or can't implement Clone, or where the default derive implementation isn't appropriate.

  • You shouldn't implement Clone if the item embodies unique access to some resource (such as an RAII type, Item 10), or when there's another reason to restrict copies (e.g. if the item holds cryptographic key material).
  • You can't implement Clone if some component of your type is un-Cloneable in turn. Examples include:
    • Fields that are mutable references (&mut T), because the borrow checker (Item 13) only allows a single mutable reference at a time.
    • Standard library types that fall into the previous category, such as Mutex or MutexGuard.
  • You should manually implement Clone if there is anything about your item that won't be captured by a (recursive) field-by-field copy, or if there is additional book-keeping associated with item lifetimes (for example: consider a type that tracks the number of extant items at runtime for metrics purposes)


The Copy trait has a trivial declaration:

pub trait Copy: Clone { }

There are no methods in this trait, meaning that it is a marker trait – it's used to indicate some constraint on a type that's not directly expressed in the type system.

In the case of Copy, the meaning of this marker is that not only can items of this type be copied (hence the Clone trait bound), but also a bit-for-bit copy of the memory holding an item gives a correct new item. Effectively, this trait is a marker that says that a type is a "plain old data" (POD) type.

In contrast to user-defined marker traits (Item 1), Copy has a special significance to the compiler1 over and above being available for trait bounds – it shifts the compiler from move semantics to copy semantics.

With move semantics for the assignment operator, what the right hand giveth, the left hand taketh away:

        #[derive(Debug, Clone)]
        struct KeyId(u32);
        let k = KeyId(42);
        let k2 = k; // value moves out of k into k2
        println!("k={:?}", k);
error[E0382]: borrow of moved value: `k`
  --> std-traits/src/
50 |         let k = KeyId(42);
   |             - move occurs because `k` has type `main::KeyId`, which does not implement the `Copy` trait
51 |         let k2 = k; // value moves out of k in to k2
   |                  - value moved here
52 |         println!("k={:?}", k);
   |                            ^ value borrowed here after move

With copy semantics, the original item lives on:

        #[derive(Debug, Clone, Copy)]
        struct KeyId(u32);
        let k = KeyId(42);
        let k2 = k; // value bitwise copied from k to k2
        println!("k={:?}", k);

This makes Copy one of the most important traits to watch out for: it fundamentally changes the behaviour of assignments – and this includes parameters for method invocations.

In this respect, there are again overlaps with C++'s copy-constructors, but it's worth emphasizing a key distinction: in Rust there is no way to get the compiler to silently invoke user-defined code – it's either explicit (a call to .clone()), or it's not user-defined (a bitwise copy).

To finish this section, observe that because Copy has a Clone trait bound, it's possible to .clone() any Copy-able item. However, it's not a good idea: a bitwise copy will always be faster than invoking a trait method. Clippy (Item 28) will warn you about this:

        let k3 = k.clone();
warning: using `clone` on type `main::KeyId` which implements the `Copy` trait
  --> std-traits/src/
68 |         let k3 = k.clone();
   |                  ^^^^^^^^^ help: try removing the `clone` call: `k`
   = note: `#[warn(clippy::clone_on_copy)]` on by default
   = help: for further information visit


The Default trait defines a default constructor, via a default() method. This trait can be derived for user-defined types, provided that all of the sub-types involved have a Default implementation of their own; if they don't, you'll have to implement the trait manually. Continuing the comparison with C++, notice that a default constructor has to be explicitly triggered; the compiler does not create one automatically.

The most useful aspect of the Default trait is its combination with struct update syntax. This syntax allows struct fields to be initialized by copying or moving their contents from an existing instance of the same struct, for any fields that aren't explicitly initialized. The template to copy from is given at the end of the initialization, after .., and the Default trait provides an ideal template to use:

    #[derive(Default)]
    struct Colour {
        red: u8,
        green: u8,
        blue: u8,
        alpha: u8,
    }

    let c = Colour {
        red: 128,
        ..Default::default()
    };

This makes it much easier to initialize structures with lots of fields, only some of which have non-default values. (The builder pattern, Item 7, may also be appropriate for these situations.)

PartialEq and Eq

The PartialEq and Eq traits allow you to define equality for user-defined types. These traits have special significance because if they're present, the compiler will automatically use them for equality (==) checks, similarly to operator== in C++. The default derive implementation does this with a recursive field-by-field comparison.

The Eq version is just a marker trait extension of PartialEq which adds the assumption of reflexivity: any type T that claims to support Eq should ensure that x == x is true for any x: T.

This is sufficiently odd to immediately raise the question: when wouldn't x == x? The primary rationale behind this split relates to floating point numbers2, and specifically to the special "not a number" value NaN (f32::NAN / f64::NAN in Rust). The floating point specifications require that nothing compares equal to NaN, including NaN itself; the PartialEq trait is the knock-on effect of this.

For user-defined types that don't have any float-related peculiarities, you should implement Eq whenever you implement PartialEq. The full Eq trait is also required if you want to use the type as the key in a HashMap (as well as the Hash trait).
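For example, deriving the relevant traits – Hash and Eq, plus PartialEq as Eq's bound – is all that's needed to use a type as a HashMap key; the UserId type here is invented for illustration:

```rust
use std::collections::HashMap;

// Illustrative type: deriving PartialEq, Eq and Hash makes it usable as a key.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct UserId(u32);

fn main() {
    let mut names: HashMap<UserId, &str> = HashMap::new();
    names.insert(UserId(1), "alice");
    names.insert(UserId(2), "bob");
    assert_eq!(names.get(&UserId(1)), Some(&"alice"));
    assert_eq!(names.get(&UserId(3)), None);
}
```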

You should implement PartialEq manually if your type contains any fields that do not affect the item's identity, such as internal caches and other performance optimizations.

PartialOrd and Ord

The ordering traits PartialOrd and Ord allow comparisons between two items of a type, returning Less, Greater, or Equal. The traits require equivalent equality traits to be implemented (PartialOrd requires PartialEq, Ord requires Eq), and the two have to agree with each other (watch out for this with manual implementations in particular).

As with the equality traits, the comparison traits have special significance because the compiler will automatically use them for comparison operations (<, >, <=, >=).

The default implementation produced by derive compares fields (or enum variants) lexicographically in the order they're defined, so if this isn't correct you'll need to implement the traits manually (or re-order the fields).
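A sketch of the lexicographic behaviour, with an invented version-number type: fields are compared in declaration order, so listing major before minor before patch gives the expected semantics.

```rust
// Fields are compared in declaration order: major, then minor, then patch.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Version {
    major: u32,
    minor: u32,
    patch: u32,
}

fn main() {
    let a = Version { major: 1, minor: 9, patch: 0 };
    let b = Version { major: 2, minor: 0, patch: 0 };
    // `major` differs, so the later fields are never consulted.
    assert!(a < b);

    let c = Version { major: 1, minor: 9, patch: 1 };
    // The first two fields tie, so `patch` decides.
    assert!(a < c);
}
```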

Unlike PartialEq, the PartialOrd trait does correspond to a variety of real situations. For example, it could be used to express a subset relationship3 among collections: {1, 2} is a subset of {1, 2, 4}, but {1, 3} is not a subset of {2, 4} nor vice versa.

However, even if a partial order does accurately model the behaviour of your type, be wary of implementing just PartialOrd (a rare occasion that contradicts the advice of Item 1) – it can lead to surprising results:

    let x = Oddity(1);
    if x <= x {
        println!("Never hit this!");
    }

    let y = Oddity(2);
    if x <= y {
        println!("y is bigger"); // Not hit
    } else if y <= x {
        // Programmers are likely to omit this arm
        println!("x is bigger"); // Not hit
    } else {
        println!("neither is bigger"); // This one
    }


The Hash trait is used to produce a single value that has a high probability of being different for different items; this value is used as the basis for hash-bucket based data structures like HashMap and HashSet.

Flipping this around, it's essential that the "same" items (as per Eq) always produce the same hash; if x == y (via Eq) then it must always be true that hash(x) == hash(y). If you have a manual Eq implementation, check whether you also need a manual implementation of Hash to comply with this requirement.
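A sketch of keeping the two in step, using an invented type with a cached value that should not affect identity: the manual PartialEq ignores the cache, so the manual Hash must ignore it too.

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// `cached_len` is a derived quantity, so it's excluded from both
// equality and hashing; the two implementations must agree.
#[derive(Debug, Clone)]
struct Label {
    text: String,
    cached_len: usize, // performance cache, not part of identity
}

impl PartialEq for Label {
    fn eq(&self, other: &Self) -> bool {
        self.text == other.text // ignore the cache
    }
}
impl Eq for Label {}

impl Hash for Label {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.text.hash(state); // hash only the fields used by `eq`
    }
}

fn main() {
    let a = Label { text: "hi".to_string(), cached_len: 2 };
    let b = Label { text: "hi".to_string(), cached_len: 999 }; // stale cache
    assert_eq!(a, b); // equal despite differing caches

    let mut set = HashSet::new();
    set.insert(a);
    assert!(set.contains(&b)); // consistent Eq/Hash: lookup succeeds
}
```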

Debug and Display

The Debug and Display traits allow a type to specify how it should be included in output, for either normal ({} format argument) or debugging purposes ({:?} format argument), roughly analogous to an iostream operator<< overload in C++.

The differences between the intents of the two traits go beyond which format specifier is needed, though:

  • Debug can be automatically derived, Display can only be manually implemented. This is related to…
  • The layout of Debug output may change between different Rust versions. If the output will ever be parsed by other code, use Display.
  • Debug is programmer-oriented, Display is user-oriented. A thought experiment that helps with this is to consider what would happen if the program was localized to a language that the authors don't speak; Display is appropriate if the content should be translated, Debug if not.

As a general rule, add an automatically generated Debug implementation for your types unless they contain sensitive information (personal details, cryptographic material etc.). A manual implementation of Debug can be appropriate when the automatically generated version would emit voluminous amounts of detail.

Implement Display if your types are designed to be shown to end users in textual output.

Operator Overloads

Similarly to C++, Rust allows various arithmetic and bitwise operators to be overloaded for user-defined types. This is useful for "algebraic" or bit-manipulation types (respectively) where there is a natural interpretation of these operators. However, experience from C++ has shown that it's best to avoid overloading operators for unrelated types, as it often leads to code that is hard to maintain and has unexpected performance properties (e.g. x + y silently invokes an expensive O(N) method).

Continuing with the principle of least surprise, if you implement any operator overloads you should implement a coherent set of operator overloads. For example, if x + y has an overload (Add), and -y (Neg), then you should also implement x - y (Sub) and make sure it gives the same answer as x + (-y).

The items passed to the operator overload traits are moved, which means that non-Copy types will be consumed by default. Adding implementations for &'a MyType can help with this, but requires more boilerplate to cover all of the possibilities (e.g. 4 = 2 × 2 possibilities for combining reference/non-reference arguments to a binary operator).
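A sketch of a coherent set, with an invented 2-D vector type: Sub is defined via Add and Neg so that x - y matches x + (-y) by construction, and deriving Copy sidesteps the consumed-operand issue for this small type.

```rust
use std::ops::{Add, Neg, Sub};

// Small POD type: Copy means operands aren't consumed by the operators.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Vec2 {
    x: f64,
    y: f64,
}

impl Add for Vec2 {
    type Output = Vec2;
    fn add(self, rhs: Vec2) -> Vec2 {
        Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
    }
}

impl Neg for Vec2 {
    type Output = Vec2;
    fn neg(self) -> Vec2 {
        Vec2 { x: -self.x, y: -self.y }
    }
}

impl Sub for Vec2 {
    type Output = Vec2;
    fn sub(self, rhs: Vec2) -> Vec2 {
        // Defined via Add and Neg to guarantee x - y == x + (-y).
        self + (-rhs)
    }
}

fn main() {
    let a = Vec2 { x: 3.0, y: 4.0 };
    let b = Vec2 { x: 1.0, y: 2.0 };
    assert_eq!(a - b, a + (-b));
    assert_eq!(a - b, Vec2 { x: 2.0, y: 2.0 });
}
```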


This item has covered a lot of ground, so some tables that summarize the standard traits that have been touched on are in order. First, the traits of this Item, all of which can be automatically derived except Display.

Trait      | Compiler Use                 | Bound           | Methods
-----------|------------------------------|-----------------|-------------
Copy       | let y = x;                   | Clone           | Marker trait
PartialEq  | x == y                       |                 | eq
Eq         | x == y                       | PartialEq       | Marker trait
PartialOrd | x < y, x <= y, x > y, x >= y | PartialEq       | partial_cmp
Ord        | x < y, x <= y, x > y, x >= y | Eq + PartialOrd | cmp
Debug      | format!("{:?}", x)           |                 | fmt
Display    | format!("{}", x)             |                 | fmt

The operator overloads are in the next table. None of these can be derived.

Trait        | Compiler Use | Bound | Methods
-------------|--------------|-------|---------------
Add          | x + y        |       | add
AddAssign    | x += y       |       | add_assign
BitAnd       | x & y        |       | bitand
BitAndAssign | x &= y       |       | bitand_assign
BitOr        | x | y        |       | bitor
BitOrAssign  | x |= y       |       | bitor_assign
BitXor       | x ^ y        |       | bitxor
BitXorAssign | x ^= y       |       | bitxor_assign
Div          | x / y        |       | div
DivAssign    | x /= y       |       | div_assign
Mul          | x * y        |       | mul
MulAssign    | x *= y       |       | mul_assign
Rem          | x % y        |       | rem
RemAssign    | x %= y       |       | rem_assign
Shl          | x << y       |       | shl
ShlAssign    | x <<= y      |       | shl_assign
Shr          | x >> y       |       | shr
ShrAssign    | x >>= y      |       | shr_assign
Sub          | x - y        |       | sub
SubAssign    | x -= y       |       | sub_assign

For completeness, the standard traits that are covered in other items are included in the following table; none of these traits are deriveable (but Send and Sync may be automatically implemented by the compiler).

Trait               | Item    | Compiler Use          | Bound           | Methods
--------------------|---------|-----------------------|-----------------|-------------
Fn                  | Item 2  | x(a)                  | FnMut           | call
FnMut               | Item 2  | x(a)                  | FnOnce          | call_mut
FnOnce              | Item 2  | x(a)                  |                 | call_once
Error               | Item 4  |                       | Display + Debug | [source]
From                | Item 6  |                       |                 | from
TryFrom             | Item 6  |                       |                 | try_from
Into                | Item 6  |                       |                 | into
TryInto             | Item 6  |                       |                 | try_into
AsRef               | Item 8  |                       |                 | as_ref
AsMut               | Item 8  |                       |                 | as_mut
Borrow              | Item 8  |                       |                 | borrow
BorrowMut           | Item 8  |                       | Borrow          | borrow_mut
ToOwned             | Item 8  |                       |                 | to_owned
Deref               | Item 8  | *x, &x                |                 | deref
DerefMut            | Item 8  | *x, &mut x            | Deref           | deref_mut
Index               | Item 8  | x[idx]                |                 | index
IndexMut            | Item 8  | x[idx] = ...          | Index           | index_mut
Pointer             | Item 8  | format("{:p}", x)     |                 | fmt
Iterator            | Item 9  |                       |                 | next
IntoIterator        | Item 9  | for y in x            |                 | into_iter
FromIterator        | Item 9  |                       |                 | from_iter
ExactSizeIterator   | Item 9  |                       | Iterator        | (size_hint)
DoubleEndedIterator | Item 9  |                       | Iterator        | next_back
Drop                | Item 10 | } (end of scope)      |                 | drop
Sized               | Item 13 |                       |                 | Marker trait
Send                | Item 16 | cross-thread transfer |                 | Marker trait
Sync                | Item 16 | cross-thread use      |                 | Marker trait

1: As do several of the other marker traits in std::marker.

2: Of course, comparing floats for equality is always a dangerous game, as there is typically no guarantee that rounded calculations will produce a result that is bit-for-bit identical to the number you first thought of.

3: More generally, any lattice structure also has a partial order.

Item 6: Understand type conversions

In general, Rust does not perform automatic conversion between types. This includes integral types, even when the transformation is "safe":

        let x: i32 = 42;
        let y: i16 = x;
error[E0308]: mismatched types
  --> use-types/src/
14 |         let y: i16 = x;
   |                ---   ^ expected `i16`, found `i32`
   |                |
   |                expected due to this
help: you can convert an `i32` to an `i16` and panic if the converted value doesn't fit
14 |         let y: i16 = x.try_into().unwrap();
   |                      ^^^^^^^^^^^^^^^^^^^^^

Rust type conversions fall into three categories:

  • manual: user-defined type conversions provided by implementing the From and Into traits
  • semi-automatic: explicit casts between values using the as keyword
  • automatic: implicit coercion into a new type

The latter two don't apply to conversions of user-defined types (with a couple of exceptions), so the majority of this Item will focus on manual conversion. However, sections at the end will discuss casting and coercion – including the exceptions where they can apply to a user-defined type.

User-Defined Type Conversions

As with other features of the language (Item 5), the ability to perform conversions between values of different user-defined types is encapsulated as a trait – or rather, as a set of related generic traits.

The four relevant traits that express the ability to convert values of a type are:

  • From<T>: Items of this type can be built from items of type T.
  • TryFrom<T>: Items of this type can sometimes be built from items of type T.
  • Into<T>: Items of this type can be converted into items of type T.
  • TryInto<T>: Items of this type can sometimes be converted into items of type T.

Given the discussion in Item 1 about expressing things in the type system, it's no surprise to discover that the difference with the Try... variants is that the sole trait method returns a Result rather than a guaranteed new item; the trait definition also requires an associated type that provides the type of the error E for failure situations. You can choose to ignore the possibility of error (e.g. with .unwrap()), but as usual it needs to be a deliberate choice.
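As an illustration (using a hypothetical EvenNumber type that only holds even values), a TryFrom implementation and its associated error type might look like:

```rust
use std::convert::TryFrom;

/// Hypothetical type that can only hold even numbers.
#[derive(Debug, PartialEq)]
struct EvenNumber(i64);

impl TryFrom<i64> for EvenNumber {
    // Associated type describing the error for failure situations.
    type Error = String;

    fn try_from(value: i64) -> Result<Self, Self::Error> {
        if value % 2 == 0 {
            Ok(EvenNumber(value))
        } else {
            Err(format!("{} is odd", value))
        }
    }
}

fn main() {
    assert_eq!(EvenNumber::try_from(8), Ok(EvenNumber(8)));
    assert!(EvenNumber::try_from(7).is_err());
    // Ignoring the possibility of error has to be a deliberate choice:
    let six = EvenNumber::try_from(6).unwrap();
    assert_eq!(six, EvenNumber(6));
}
```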

There's also some symmetry here: if a type T can be converted into a type U, isn't that the same as it being possible to create an item of type U from an item of type T?

This is indeed the case, and it leads to the first piece of advice: implement the From trait for conversions. The Rust standard library had to pick just one of the two possibilities (to prevent the system from spiralling around in dizzy circles1), and came down on the side of automatically providing Into from a From implementation.

If you're consuming one of these two traits, as a trait bound on a new trait of your own, then the advice is reversed: use the Into trait for trait bounds. That way, the bound will be satisfied both by things that directly implement Into, and by things that only directly implement From.

This automatic conversion is highlighted by the documentation for From and Into, but it's worth reading the code too:

impl<T, U> Into<U> for T
where
    U: From<T>,
{
    fn into(self) -> U {
        U::from(self)
    }
}

Translating a trait specification into words can help with understanding more complex trait bounds; in this case, it's fairly simple: "I can implement Into<U> for a type T whenever U already implements From<T>".

It's also useful in general to look over the trait implementations for a standard library type. As you'd expect, there are From implementations for safe integral conversions (From<u32> for u64) and TryFrom implementations when the conversion isn't safe (TryFrom<u64> for u32).

There are also various blanket trait implementations. Into just has the one shown above, but the From trait has many impl<T> From<T> for ... clauses. These are almost all for smart pointer types, allowing the smart pointer to be automatically constructed from an instance of the type that it holds, so that methods that accept smart pointer parameters can also be called with plain old items; more on this below and in Item 8.

The TryFrom trait also has a blanket implementation for any type that already implements the Into trait in the opposite direction – which automatically includes (as above) any type that implements From in the same direction. This conversion will always succeed, so the associated error type is2 the helpfully named Infallible.
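This means infallible conversions can also be invoked through the fallible trait, with the blanket implementation supplying Infallible as the error type; for example:

```rust
use std::convert::TryFrom;

fn main() {
    // `From<u32> for u64` exists in the standard library, so the blanket
    // implementation provides `TryFrom<u32> for u64` – which can never fail.
    let x: Result<u64, std::convert::Infallible> = u64::try_from(42u32);
    assert_eq!(x, Ok(42u64));
}
```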

There's also one very specific generic implementation of From that sticks out, the reflexive implementation:

impl<T> From<T> for T {
    fn from(t: T) -> T {
        t
    }
}

Translating into words, this just says that "given a T I can get a T". That's such an obvious "well, doh" that it's worth stopping to understand why this is useful.

Consider a simple struct and a function that operates on it (ignoring that this function would be better expressed as a method):

/// Integer value from an IANA-controlled range.
#[derive(Clone, Copy, Debug)]
pub struct IanaAllocated(pub u64);

/// Indicate whether value is reserved.
pub fn is_iana_reserved(s: IanaAllocated) -> bool {
    s.0 == 0 || s.0 == 65535
}

This function can be invoked with instances of the struct

    let s = IanaAllocated(1);
    println!("{:?} reserved? {}", s, is_iana_reserved(s));

but even if From<u64> is implemented

impl From<u64> for IanaAllocated {
    fn from(v: u64) -> Self {
        Self(v)
    }
}

it can't be directly invoked for u64 values

error[E0308]: mismatched types
  --> casts/src/
74 |     if is_iana_reserved(42) {
   |                         ^^ expected struct `IanaAllocated`, found integer

However, a generic version of the function that accepts (and explicitly converts) anything satisfying Into<IanaAllocated>

pub fn is_iana_reserved_anything<T>(s: T) -> bool
where
    T: Into<IanaAllocated>,
{
    let s = s.into();
    s.0 == 0 || s.0 == 65535
}

allows this use:

    if is_iana_reserved_anything(42) {

The reflexive trait implementation of From<T> means that this generic function copes with items which are already IanaAllocated instances, no conversion needed.

This pattern also explains why (and how) Rust code sometimes appears to be doing implicit casts between types: the combination of From<T> implementations and Into<T> trait bounds leads to code that appears to magically convert at the call site (but which is still doing safe, explicit conversions under the covers). This pattern becomes even more powerful when combined with reference types and their related conversion traits; more in Item 8.


Casts

Rust includes the as keyword to perform explicit casts between some pairs of types.

The pairs of types that can be converted in this way form a fairly limited set, and the only user-defined types it includes are "C-like" enums (those that have an associated integer value). General integral conversions are included though, giving an alternative to into():

    let x: u32 = 9;
    let y = x as u64;
    let z: u64 = x.into();

The as version also allows lossy conversions:

    let x: u32 = 9;
    let y = x as u16;

which would be rejected by the from / into versions:

error[E0277]: the trait bound `u16: From<u32>` is not satisfied
   --> casts/src/
112 |     let y: u16 = x.into();
    |                    ^^^^ the trait `From<u32>` is not implemented for `u16`
    = help: the following implementations were found:
              <u16 as From<NonZeroU16>>
              <u16 as From<bool>>
              <u16 as From<u8>>
    = note: required because of the requirements on the impl of `Into<u16>` for `u32`

For consistency and safety you should prefer from / into conversions to as casts, unless you understand and need the precise casting semantics (e.g. for C interoperability).
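To illustrate the enum case (the variant names and values here are assumptions for illustration), an as cast goes from a C-like enum to an integer, but not the other way around:

```rust
// "C-like" enum with explicit discriminant values (illustrative names).
#[derive(Clone, Copy, Debug, PartialEq)]
enum HttpResultCode {
    Ok = 200,
    NotFound = 404,
    Teapot = 418,
}

fn main() {
    let code = HttpResultCode::NotFound;
    // An enum-to-integer cast is allowed with `as`...
    assert_eq!(code as u16, 404);
    assert_eq!(HttpResultCode::Teapot as u16, 418);
    // ...but the reverse direction is not, because not every
    // u16 value corresponds to a valid enum variant.
}
```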


Coercion

The explicit as casts described in the previous section are a superset of the implicit coercions that the compiler will silently perform: any coercion can be forced with an explicit as, but the converse is not true. (In particular, the integral conversions performed in the previous section are not coercions, and so will always require as.)

Most of the coercions involve silent conversions of pointer and reference types in ways that are sensible and convenient for the programmer, such as:

  • converting a mutable reference to a non-mutable reference (so you can use a &mut T as the argument to a function that takes a &T)
  • converting a reference to a raw pointer (this isn't unsafe – the unsafety happens at the point where you're foolish enough to use a raw pointer)
  • converting a closure that happens not to capture any variables into a bare function pointer (Item 2)
  • converting an array to a slice
  • converting a concrete item to a trait object, for a trait that the concrete item implements
  • converting3 an item lifetime to a "shorter" one (Item 14).
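A few of these coercions can be sketched in one small program (the Greet trait and Person type here are made-up names for illustration):

```rust
trait Greet {
    fn hello(&self) -> String;
}

struct Person;

impl Greet for Person {
    fn hello(&self) -> String {
        "hello".to_string()
    }
}

fn take_ref(_: &i32) {}

fn main() {
    let mut x = 42;
    let mr: &mut i32 = &mut x;
    take_ref(mr); // &mut i32 silently coerced to &i32

    let arr = [1u32, 2, 3];
    let slice: &[u32] = &arr; // array reference coerced to slice

    let p = Person;
    let t: &dyn Greet = &p; // concrete item coerced to trait object
    assert_eq!(t.hello(), "hello");
    assert_eq!(slice.len(), 3);
}
```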

There are only two coercions whose behaviour can be affected by user-defined types. The first of these is when a user-defined type implements the Deref or the DerefMut trait. These traits indicate that the user-defined type is acting as a smart pointer of some sort (Item 8), and in this case the compiler will coerce a reference to the smart pointer item into being a reference to an item of the type that the smart pointer contains (indicated by its Target).
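A minimal sketch of such a smart pointer (the MyBox name is an assumption, not a standard library type) shows the coercion in action:

```rust
use std::ops::Deref;

/// Minimal smart pointer type, for illustration only.
struct MyBox<T>(T);

impl<T> Deref for MyBox<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

fn greet(name: &str) {
    println!("Hello, {}!", name);
}

fn main() {
    let b = MyBox(String::from("Rust"));
    // &MyBox<String> is coerced to &String (via MyBox's Deref),
    // and then to &str (via String's Deref).
    greet(&b);
    assert_eq!(b.len(), 4); // method calls also auto-dereference
}
```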

The second coercion of a user-defined type happens when a concrete item is converted to a trait object. This operation builds a fat pointer to the item; this pointer is fat because it also includes a pointer to the vtable for the concrete type's implementation of the trait – see Item 8.

1: More properly known as the trait coherence rules.

2: For now – this is likely to be replaced with the ! "never" type in a future version of Rust.

3: Rust refers to these conversions as "subtyping", but it's quite different from the definition of "subtyping" used in object-oriented languages.

Item 7: Use builders for complex types

Rust insists that all fields in a struct must be filled in when a new instance of that struct is created. This keeps the code safe, but does lead to more verbose boilerplate code than is ideal.

#[derive(Debug, Default)]
struct BaseDetails {
    given_name: String,
    preferred_name: Option<String>,
    middle_name: Option<String>,
    family_name: String,
    mobile_phone_e164: Option<String>,
}

    let dizzy = BaseDetails {
        given_name: "Dizzy".to_owned(),
        preferred_name: None,
        middle_name: None,
        family_name: "Mixer".to_owned(),
        mobile_phone_e164: None,
    };

This boilerplate code is also brittle: a future change that adds a new field to the struct would require an update to every place that builds the structure.

The boilerplate can be significantly reduced by implementing and using the Default trait, as described in Item 5:

    let dizzy = BaseDetails {
        given_name: "Dizzy".to_owned(),
        family_name: "Mixer".to_owned(),
        ..Default::default()
    };

This also helps when a new field is added, provided that the new field is itself of a type that implements Default.

However, this is only straightforward if all of the field types also implement the Default trait. If there's a field that doesn't play along, an automatically derived implementation of Default isn't possible:

    #[derive(Debug, Default)]
    struct Details {
        given_name: String,
        preferred_name: Option<String>,
        middle_name: Option<String>,
        family_name: String,
        mobile_phone_e164: Option<String>,
        dob: chrono::Date<chrono::Utc>,
        last_seen: Option<chrono::DateTime<chrono::Utc>>,
    }
error[E0277]: the trait bound `Date<Utc>: Default` is not satisfied
   --> builders/src/
171 |         dob: chrono::Date<chrono::Utc>,
    |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Default` is not implemented for `Date<Utc>`
    = note: required by `std::default::Default::default`
    = note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)

As a result, all of the fields have to be filled out manually:

    use chrono::TimeZone;

    let bob = Details {
        given_name: "Robert".to_owned(),
        preferred_name: Some("Bob".to_owned()),
        middle_name: Some("the".to_owned()),
        family_name: "Builder".to_owned(),
        mobile_phone_e164: None,
        dob: chrono::Utc.ymd(1998, 11, 28),
        last_seen: None,
    };

These ergonomics can be improved if you implement the builder pattern for complex data structures.

The simplest variant of the builder pattern is a separate struct that holds the information needed to construct the item. For simplicity, the example will hold an instance of the item itself.

struct DetailsBuilder(Details);

impl DetailsBuilder {
    /// Start building a new [`Details`] object.
    fn new(
        given_name: &str,
        family_name: &str,
        dob: chrono::Date<chrono::Utc>,
    ) -> Self {
        DetailsBuilder(Details {
            given_name: given_name.to_owned(),
            preferred_name: None,
            middle_name: None,
            family_name: family_name.to_owned(),
            mobile_phone_e164: None,
            dob,
            last_seen: None,
        })
    }

The builder type can then be equipped with helper methods that fill out the nascent item's fields. Each such method consumes self but emits a new Self, allowing different construction methods to be chained.

    /// Set the preferred name.
    fn preferred_name(mut self, preferred_name: &str) -> Self {
        self.0.preferred_name = Some(preferred_name.to_owned());
        self
    }

These helper methods can be more helpful than just simple setters:

    /// Update the `last_seen` field to the current date/time.
    fn just_seen(mut self) -> Self {
        self.0.last_seen = Some(chrono::Utc::now());
        self
    }

The final method to be invoked for the builder consumes the builder and emits the built item.

    /// Consume the builder object and return a fully built [`Details`] object.
    fn build(self) -> Details {
        self.0
    }

Overall, this allows clients of the builder to have a more ergonomic building experience:

    let also_bob =
        DetailsBuilder::new("Robert", "Builder", chrono::Utc.ymd(1998, 11, 28))
            .middle_name("the")
            .just_seen()
            .build();

The all-consuming nature of this style of builder leads to a couple of wrinkles. The first is that separating out stages of the build process can't be done on its own:

        let builder = DetailsBuilder::new(
            "Robert",
            "Builder",
            chrono::Utc.ymd(1998, 11, 28),
        );
        if informal {
            builder.preferred_name("Bob");
        }
        let bob =;
error[E0382]: use of moved value: `builder`
   --> builders/src/
230 |         let builder = DetailsBuilder::new(
    |             ------- move occurs because `builder` has type `DetailsBuilder`, which does not implement the `Copy` trait
236 |             builder.preferred_name("Bob");
    |                     --------------------- `builder` moved due to this method call
237 |         }
238 |         let bob =;
    |                   ^^^^^^^ value used here after move
note: this function takes ownership of the receiver `self`, which moves `builder`
   --> builders/src/
49  |     fn preferred_name(mut self, preferred_name: &str) -> Self {
    |                           ^^^^

This can be worked around by assigning the consumed builder back to the same variable:

    let mut builder =
        DetailsBuilder::new("Robert", "Builder", chrono::Utc.ymd(1998, 11, 28));
    if informal {
        builder = builder.preferred_name("Bob");
    let bob =;

The other downside to the all-consuming nature of this builder is that only one item can be built; trying to repeatedly build()

        let smithy =
            DetailsBuilder::new("Agent", "Smith", chrono::Utc.ymd(1999, 6, 11));
        let clones = vec![,,];

falls foul of the borrow checker, as you'd expect:

error[E0382]: use of moved value: `smithy`
   --> builders/src/
256 |         let smithy =
    |             ------ move occurs because `smithy` has type `DetailsBuilder`, which does not implement the `Copy` trait
257 |             DetailsBuilder::new("Agent", "Smith", chrono::Utc.ymd(1999, 6, 11));
258 |         let clones = vec![,,];
    |                                  -------  ^^^^^^ value used here after move
    |                                  |
    |                                  `smithy` moved due to this method call

An alternative approach is for the builder's methods to take a &mut self and emit a &mut Self:

    /// Update the `last_seen` field to the current date/time.
    fn just_seen(&mut self) -> &mut Self {
        self.0.last_seen = Some(chrono::Utc::now());
        self
    }

which removes the need for this self-assignment in separate build stages:

    let mut builder = DetailsRefBuilder::new(
        "Robert",
        "Builder",
        chrono::Utc.ymd(1998, 11, 28),
    );
    if informal {
        builder.preferred_name("Bob"); // no `builder = ...`
    }
    let bob =;

However, this version makes it impossible to chain the construction of the builder together with invocation of its setter methods:

        let builder = DetailsRefBuilder::new(
            "Robert",
            "Builder",
            chrono::Utc.ymd(1998, 11, 28),
        )
        .middle_name("the")
        .just_seen();
error[E0716]: temporary value dropped while borrowed
   --> builders/src/
278 |           let builder = DetailsRefBuilder::new(
    |  _______________________^
279 | |             "Robert",
280 | |             "Builder",
281 | |             chrono::Utc.ymd(1998, 11, 28),
282 | |         )
    | |_________^ creates a temporary which is freed while still in use
283 |           .middle_name("the")
284 |           .just_seen();
    |                       - temporary value is freed at the end of this statement
286 |           let bob =;
    |                     ------- borrow later used here
    = note: consider using a `let` binding to create a longer lived value

As indicated by the compiler error, this can be worked around by letting the builder item have a name:

    let mut builder = DetailsRefBuilder::new(
        "Robert",
        "Builder",
        chrono::Utc.ymd(1998, 11, 28),
    );
    if informal {
        builder.preferred_name("Bob");
    }
    let bob =;

This builder variant also allows for building multiple items. The signature of the build() method must not consume self, and so must be:

    /// Construct a fully built [`Details`] object.
    fn build(&self) -> Details {
        // ...
    }

The implementation of this repeatable build() method then has to construct a fresh item on each invocation. If the underlying item implements Clone, this is easy – the builder can hold a template and clone() it for each build. If the underlying item doesn't implement Clone, then the builder needs to have enough state to be able to manually construct an instance of the underlying item on each call to build().

With any style of builder pattern, the boilerplate code is now confined to one place – the builder – rather than being needed at every place that uses the underlying type.

The boilerplate that remains can potentially be reduced still further by use of a macro (Item 27), but if you go down this road you should also check whether there's an existing crate (such as the derive_builder crate in particular) that provides what's needed – assuming that you're happy to take a dependency on it (Item 24).

Item 8: Familiarize yourself with reference and pointer types

A pointer is just a number, whose value is the address in memory of some other object. In source code, the type of the pointer encodes information about the type of the object being pointed to, so a program knows how to interpret the contents of memory at that address. It's possible to play fast and loose with these constraints with raw pointers, but they are very unsafe (Item 15) and beyond the scope of this book.

Simple Pointer Types

The most ubiquitous pointer type in Rust is the reference &T. Although this is a pointer value, the compiler ensures that various rules are observed: it must always point to a valid, correctly-aligned instance of the relevant type, and the borrow checking rules must be followed (Item 13). These additional constraints are roughly similar to the constraints that C++ has when dealing with references rather than pointers; however, C++ allows footguns1 with dangling references:

// C++
const int& dangle() {
  int x = 32; // on the stack, overwritten later
  return x; // return reference to stack variable!
}

Rust's borrowing and lifetime checks mean that the equivalent code fails at compile time:

fn dangle() -> &'static i64 {
    let x: i64 = 32; // on the stack
    &x
}
error[E0515]: cannot return reference to local variable `x`
   --> references/src/
386 |     &x
    |     ^^ returns a reference to data owned by the current function

A Rust reference is a simple pointer, 8 bytes in size on a 64-bit platform (which this Item assumes throughout):

    struct Point {
        x: u32,
        y: u32,
    }

    let pt = Point { x: 1, y: 2 };
    let x = 0u64;
    let ref_x = &x;
    let ref_pt = &pt;
Stack layout

Rust allocates items on the stack by default; the Box<T> pointer type (roughly equivalent to C++'s std::unique_ptr<T>) forces allocation to occur on the heap, which in turn means that the allocated item can outlive the scope of the current block. Under the covers, Box<T> is also a simple 8-byte pointer value.

    let box_pt = Box::new(Point { x: 10, y: 20 });
Stack Box pointer to struct on heap

Pointer Traits

A method that expects a reference argument like &Point can also be fed a &Box<Point>:

    fn show(pt: &Point) {
        println!("({}, {})", pt.x, pt.y);
(1, 2)
(10, 20)

This is possible because Box<T> implements the Deref trait, with Target = T. The Rust compiler looks for and uses implementations of this trait when it's dealing with dereferences (*x), allowing coercion of types (Item 6). There's also an equivalent DerefMut for when a mutable reference is involved.

The compiler has to deduce a unique type for an expression like *x, which means that the Deref traits can't be generic (Deref<T>): that would open up the possibility that a user-defined type could implement both Deref<TypeA> and Deref<TypeB>, leaving the compiler with a choice of TypeA or TypeB. Instead, the underlying type is given by an associated type named Target.

In contrast, the AsRef and AsMut traits encode their destination type as a type parameter, such as AsRef<Point>, allowing a single container type to support multiple destinations. For example, the String type implements

  • Deref with Target = str, meaning that an expression like &my_string can be coerced to type &str.
  • AsRef<[u8]>, allowing conversion to a byte slice &[u8].
  • AsRef<OsStr>, allowing conversion to an OS string.
  • AsRef<Path>, allowing conversion to a filesystem path.
  • AsRef<str>, as for Deref.

A function that takes a reference can therefore be made even more general, by making the function generic over one of these traits. This means it accepts the widest range of reference-like types:

    fn show_as_ref<T: AsRef<Point>>(pt: T) {
        let pt = pt.as_ref();
        println!("({}, {})", pt.x, pt.y);
    }

Fat Pointer Types

Rust has two built-in fat pointer types: types that act as pointers, but which hold additional information about the thing they are pointing to.

The first such type is the slice: a reference to a subset of some contiguous collection of values. It's built from a (non-owning) simple pointer, together with a length field, making it twice the size of a simple pointer (16 bytes on a 64-bit platform). The type of a slice is written as &[T] – a reference to [T], which is the notional type for a contiguous collection of values of type T.

The notional type [T] can't be instantiated, but there are two common containers that embody it. The first is the array: a contiguous collection of values whose size is known at compile time. A slice can therefore refer to a subset of an array:

    let array = [0u64; 5];
    let slice = &array[1..3];
Stack slice into stack array

The other common container for contiguous values is a Vec<T>. This holds a contiguous collection of values whose size can vary, and whose contents are held on the heap. A slice can therefore refer to a subset of a vector:

    let mut vec = Vec::<u64>::with_capacity(8);
    for i in 0..5 {
        vec.push(i);
    }
    let slice = &vec[1..3];
Stack slice into vector contents on heap

There's quite a lot going on under the covers for the expression &vec[1..3]:

  • The 1..3 part is a range expression; the compiler converts this into an instance of the Range<usize> type.
    • The Range type implements the SliceIndex<T> trait, which describes indexing operations on slices of an arbitrary type T (so the Output type is [T]).
  • The vec[ ] part is an indexing expression; the compiler converts this into an invocation of the Index trait's index method on vec, together with a dereference (i.e. *vec.index( )). (The equivalent trait for mutable expressions is IndexMut).
  • vec[1..3] therefore invokes Vec<T>'s implementation of Index<I>, which requires I to be an instance of SliceIndex<[u64]>. This works because Range<usize> implements SliceIndex<[T]> for any T, including u64.
  • &vec[1..3] un-does the dereference, resulting in a final expression type of &[u64].
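The steps above can be written out by hand as a roughly equivalent desugared form (an illustration, not something you'd normally do):

```rust
use std::ops::Index;

fn main() {
    let vec = vec![10u64, 11, 12, 13, 14];

    let slice = &vec[1..3];

    // Roughly equivalent desugared form: invoke `Index::index` directly,
    // which returns `&[u64]` for a `Range<usize>` argument.
    let desugared: &[u64] = vec.index(1..3);

    assert_eq!(slice, &[11, 12]);
    assert_eq!(desugared, slice);
}
```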

The second built-in fat pointer type is a trait object: a reference to some item that implements a particular trait. It's built from a simple pointer to the item, together with an internal pointer to the type's vtable, giving a size of 16 bytes (on a 64-bit platform). The vtable for a type's implementation of a trait holds function pointers for each of the method implementations, allowing dynamic dispatch at runtime (Item 11).

So a simple trait:

    trait Calculate {
        fn add(&self, l: u64, r: u64) -> u64;
        fn mul(&self, l: u64, r: u64) -> u64;
    }

with a struct that implements it:

    struct Modulo(pub u64);

    impl Calculate for Modulo {
        fn add(&self, l: u64, r: u64) -> u64 {
            (l + r) % self.0
        }
        fn mul(&self, l: u64, r: u64) -> u64 {
            (l * r) % self.0
        }
    }

    let mod3 = Modulo(3);

can be converted to a trait object of type &dyn Trait (where the dyn keyword highlights the fact that dynamic dispatch is involved):

    // Need an explicit type to force dynamic dispatch.
    let tobj: &dyn Calculate = &mod3;
    let result = tobj.add(2, 2);
    assert_eq!(result, 1);
Trait object

Code that holds a trait object can invoke the methods of the trait via the function pointers in the vtable, passing in the item pointer as the &self parameter; see Item 11 for more information and advice.

Other Pointer Traits

A previous section described several traits (Deref[Mut], AsRef[Mut] and Index) that are used when dealing with reference and slice types. There are a few more that can also come into play when working with various pointer types, whether from the standard library or user defined.

The simplest is the Pointer trait, which formats a pointer value for output. This can be helpful for low-level debugging, and the compiler will reach for this trait automatically when it encounters the {:p} format specifier.

The Borrow and BorrowMut traits each have a single method (borrow and borrow_mut respectively) that has the same signature as the equivalent AsRef / AsMut trait methods.

However, the difference between them is still visible in the type system, because they have different blanket implementations for references to arbitrary types:

  • For &T:
    • impl<'_, T, U> AsRef<U> for &'_ T
    • impl<'_, T> Borrow<T> for &'_ T
  • For &mut T:
    • impl<'_, T, U> AsRef<U> for &'_ mut T
    • impl<'_, T> Borrow<T> for &'_ mut T

but Borrow also has a blanket implementation for (non-reference) types:

  • impl<T> Borrow<T> for T

This means that a method accepting the Borrow trait can cope equally with instances of T as well as references-to-T:

    fn add_four<T: Borrow<i32>>(v: T) -> i32 {
        v.borrow() + 4
    }

    assert_eq!(add_four(&2), 6);
    assert_eq!(add_four(2), 6);

The standard library's container types have more realistic uses of Borrow; for example, HashMap::get uses Borrow to allow convenient retrieval of entries whether keyed by value or by reference.
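For example, a map with owned String keys can be queried with a plain &str, because String implements Borrow<str>:

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<String, u32> = HashMap::new();
    map.insert("alice".to_string(), 1);

    // `HashMap::get` takes `&Q` where the key type implements `Borrow<Q>`,
    // so a plain `&str` works – no need to allocate a `String` for lookup.
    assert_eq!(map.get("alice"), Some(&1));
    assert_eq!(map.get("bob"), None);
}
```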

Finally, the ToOwned trait builds on the Borrow trait, adding a to_owned() method that produces a new owned item of the underlying type, like Clone. This means that:

  • A function that accepts Borrow can receive either items or references-to-items, and can work with references in either case.
  • A function that accepts ToOwned can receive either items or references-to-items, and can build its own personal copies of those items in either case.

Smart Pointer Types

The Rust standard library includes a variety of types that act like pointers to some degree or another, mediated (as usual, Item 5) by the standard traits described above. These smart pointer types each come with some particular semantics and guarantees, which has the advantage that the right combination of them can give fine-grained control over the pointer's behaviour, but has the disadvantage that the resulting types can seem overwhelming at first (Rc<RefCell<Vec<T>>> anyone?).

The first smart pointer type is Rc<T>, which is a reference-counted pointer to an item (roughly analogous to C++'s std::shared_ptr<T>). It implements all of the pointer-related traits, so acts like a Box<T> in many ways.

This is useful for data structures where the same item can be reached in different ways, but it removes one of Rust's core rules around ownership – that each item has only one owner. Relaxing this rule means that it is now possible to leak data: if item A has an Rc pointer to item B, and item B has an Rc pointer to A, then the pair will never be dropped. To put it another way: you need Rc to support cyclical data structures, but the downside is that there are now cycles in your data structures.

The risk of leaks can be ameliorated in some cases by the related Weak<T> type, which holds a non-owning reference to the underlying item (roughly analogous to C++'s std::weak_ptr<T>). Holding a weak reference doesn't prevent the underlying item being dropped (when all strong references are removed), so making use of the Weak<T> involves an upgrade to an Rc<T> – which can fail.
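A small sketch of the upgrade-and-fail behaviour:

```rust
use std::rc::{Rc, Weak};

fn main() {
    let strong: Rc<u64> = Rc::new(42);
    let weak: Weak<u64> = Rc::downgrade(&strong);

    // Upgrading the weak reference succeeds while the item is alive...
    assert_eq!(weak.upgrade().as_deref(), Some(&42));

    drop(strong);

    // ...but fails once all strong references are gone.
    assert!(weak.upgrade().is_none());
}
```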

Under the hood, Rc is (currently) implemented as a pair of reference counts together with the referenced item, all stored on the heap.

    let rc1: Rc<u64> = Rc::new(42);
    let rc2 = rc1.clone();
    let wk = Rc::downgrade(&rc1);
Rc and Weak pointers

The next smart pointer type RefCell<T> relaxes the rule (Item 13) that an item can only be mutated by its owner or by code that holds the (only) mutable reference to the item. This interior mutability allows for greater flexibility – for example, allowing trait implementations that mutate internals even when the method signature only allows &self. However, it also incurs costs: as well as the extra storage overhead (an extra isize to track current borrows), the normal borrow checks are moved from compile-time to run-time.

    let rc: RefCell<u64> = RefCell::new(42);
    let b1 = rc.borrow();
    let b2 = rc.borrow();
RefCell container

The run-time nature of these checks means that the RefCell user has to choose between two options, neither pleasant:

  • Accept that borrowing is an operation that might fail, and cope with Result values from try_borrow[_mut]
  • Use the allegedly-infallible borrowing methods borrow[_mut], and accept the risk of a panic! at runtime if the borrow rules have not been complied with.
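The two options look roughly like this (a minimal sketch using a standalone RefCell):

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(42u64);
    let b1 = cell.borrow(); // outstanding immutable borrow

    // Option 1: `try_borrow_mut` reports the conflict as a `Result`.
    assert!(cell.try_borrow_mut().is_err());

    // Option 2: `borrow_mut` here would `panic!` at run-time instead.
    drop(b1); // release the immutable borrow first...
    *cell.borrow_mut() += 1; // ...and now the mutable borrow is safe
    assert_eq!(*cell.borrow(), 43);
}
```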

In either case, this run-time checking means that RefCell itself implements none of the standard pointer traits; instead, its access operations return a Ref<T> or RefMut<T> smart pointer type that does implement those traits.

If the underlying type T implements the Copy trait (indicating that a fast bit-for-bit copy produces a valid item, see Item 5), then the Cell<T> type allows interior mutation with less overhead – the get(&self) method copies out the current value, and the set(&self, val) method copies in a new value. The Cell type is used internally by both the Rc and RefCell implementations, for shared tracking of counters that can be mutated without a &mut self.
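For example, a sketch of get and set working through a shared reference:

```rust
use std::cell::Cell;

fn main() {
    let counter: Cell<u64> = Cell::new(0);

    // Both `get(&self)` and `set(&self, val)` work through a shared
    // reference: no run-time borrow tracking is needed because the
    // `Copy` value is moved in and out wholesale.
    counter.set(counter.get() + 1);
    counter.set(counter.get() + 1);
    assert_eq!(counter.get(), 2);
}
```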

The smart pointer types described so far are only suitable for single-threaded use; their implementations assume that there is no concurrent access to their internals. If this is not the case, then different smart pointers are needed, which include the additional synchronization overhead.

The thread-safe equivalent of Rc<T> is Arc<T>, which uses atomic counters to ensure that the reference counts remain accurate. Like Rc, Arc implements all of the various pointer-related traits.

However, Arc on its own does not allow any kind of mutable access to the underlying item. This is covered by the Mutex type, which ensures that only one thread has access – whether mutably or immutably – to the underlying item. As with RefCell, Mutex itself does not implement any pointer traits, but its lock() operation returns a value that does (MutexGuard, which implements Deref[Mut]).

If there are likely to be more readers than writers, the RwLock type is preferable, as it allows multiple readers access to the underlying item in parallel, provided that there isn't currently a (single) writer.

In either case, Rust's borrowing and threading rules force the use of one of these synchronization containers in multi-threaded code (but this only guards against some of the problems of shared-state concurrency; see Item 16).
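A sketch of the Arc plus Mutex combination in action, assuming a simple shared counter:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared ownership (`Arc`) of mutable state (`Mutex`) across threads.
    let value = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let value = Arc::clone(&value);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // Lock, mutate, and unlock (when the guard drops).
                    *value.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*value.lock().unwrap(), 4000);
}
```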

The same strategy – see what the compiler rejects, and what it suggests instead – can sometimes be applied with the other smart pointer types; however, it's faster and less frustrating to understand what the behaviour of the different smart pointers implies. To borrow2 an example from the first edition of the Rust book,

  • Rc<RefCell<Vec<T>>> holds a vector (Vec) with shared ownership (Rc), where the vector can be mutated – but only as a whole vector.
  • Rc<Vec<RefCell<T>>> also holds a vector with shared ownership, but here each individual entry in the vector can be mutated independently of the others.

The types involved precisely describe these behaviours.
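A sketch of the difference, with u64 standing in for T:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared ownership of a vector that can only be mutated as a whole.
    let whole: Rc<RefCell<Vec<u64>>> = Rc::new(RefCell::new(vec![1, 2, 3]));
    whole.borrow_mut().push(4); // borrows the entire vector mutably
    assert_eq!(*whole.borrow(), vec![1, 2, 3, 4]);

    // Shared ownership of a vector whose entries are independently
    // mutable; the vector's structure itself cannot change.
    let entries: Rc<Vec<RefCell<u64>>> =
        Rc::new(vec![RefCell::new(1), RefCell::new(2)]);
    *entries[0].borrow_mut() += 10; // only entry 0 is borrowed
    assert_eq!(*entries[0].borrow(), 11);
    assert_eq!(*entries[1].borrow(), 2);
}
```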

1: Albeit with a warning from modern compilers.

2: Pun intended.

Item 9: Consider using iterator transforms instead of explicit loops

The humble loop has had a long journey of increasing convenience and increasing abstraction. The B language (the precursor to C) just had while (condition) { }, but with C the common scenario of iterating through indexes of an array was made more convenient with the addition of the for loop:

  // C code
  int i;
  for (i = 0; i < len; i++) {
    Item item = collection[i];
    // body
  }

The early versions of C++ improved convenience and scoping further by allowing the loop variable declaration to be embedded in the for statement (and this was also adopted by C in C99):

  // C++98 code
  for (int i = 0; i < len; i++) {
    Item item = collection[i];
    // ...
  }

Most modern languages abstract the idea of the loop further: the core function of a loop is often to move to the next item of some container, and tracking the logistics that are required to reach that item (index++ or ++it) is mostly an irrelevant detail. This realization produced two core concepts:

  • Iterators: a type whose purpose is to repeatedly emit the next item of a container1, until exhausted.
  • For-Each Loops: a compact loop expression for iterating over all of the items in a container, binding a loop variable to the item rather than to the details of reaching that item.

These concepts allow for loop code that's shorter, and (more importantly) clearer about what's intended:

  // C++11 code
  for (Item& item : collection) {
    // ...
  }

Once these concepts were available, they were so obviously powerful that they were quickly retrofitted to those languages that didn't already have them (e.g. for-each loops were added to Java 1.5 and C++11).

Rust includes iterators and for-each style loops, but it also includes the next step in abstraction: allowing the whole loop to be expressed as an iterator transform. As with Item 3's discussion of Option and Result, this Item will attempt to show how these iterator transforms can be used instead of explicit loops, and to give guidance as to when it's a good idea.

By the end of this Item, a C-like explicit loop to sum the squares of the first five even items of a vector:

    let mut even_sum_squares = 0;
    let mut even_count = 0;
    for i in 0..values.len() {
        if values[i] % 2 != 0 {
            continue;
        }
        even_sum_squares += values[i] * values[i];
        even_count += 1;
        if even_count == 5 {
            break;
        }
    }

should start to feel more natural expressed as a functional style expression:

    let even_sum_squares: u64 = values
        .iter()
        .filter(|x| *x % 2 == 0)
        .take(5)
        .map(|x| x * x)
        .sum();

Iterator transformation expressions like this can roughly be broken down into three parts:

  • An initial source iterator, from one of Rust's iterator traits.
  • A sequence of iterator transforms.
  • A final consumer method to combine the results of the iteration into a final value.

The first two of these effectively move functionality out of the loop body and into the for expression; the last removes the need for the for statement altogether.

Iterator Traits

The core Iterator trait has a very simple interface: a single method next that yields Some items until it doesn't (None).
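To make that contract concrete, here is a minimal hand-rolled implementation; Countdown is an invented type for illustration:

```rust
// A made-up iterator that counts down from an initial value.
struct Countdown(u64);

impl Iterator for Countdown {
    type Item = u64;
    // Yield `Some(value)` until exhausted, then `None` forever after.
    fn next(&mut self) -> Option<u64> {
        if self.0 == 0 {
            None
        } else {
            self.0 -= 1;
            Some(self.0 + 1)
        }
    }
}

fn main() {
    let mut it = Countdown(3);
    assert_eq!(it.next(), Some(3));
    assert_eq!(it.next(), Some(2));
    assert_eq!(it.next(), Some(1));
    assert_eq!(it.next(), None);
}
```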

Collections that allow iteration over their contents – called iterables – implement the IntoIterator trait; the into_iter method of this trait consumes Self and emits an Iterator in its stead. The compiler will automatically use this trait for expressions of the form

    for item in collection {
        // body
    }

effectively converting them to code roughly like:

    let mut iter = collection.into_iter();
    loop {
        let item: Thing = match iter.next() {
            Some(item) => item,
            None => break,
        };
        // body
    }

(To keep things running smoothly, there's also an implementation of IntoIterator for any Iterator, which just returns self; after all, it's easy to convert an Iterator into an Iterator!)

This initial form is a consuming iterator, using up the collection as it's created:

    for item in collection {
        println!("Consumed item {:?}", item);
    }

Any attempt to use the collection after it's been iterated over fails:

    println!("Collection = {:?}", collection);
error[E0382]: borrow of moved value: `collection`
   --> iterators/src/
104 |     let collection = vec![Thing(0), Thing(1), Thing(2), Thing(3)];
    |         ---------- move occurs because `collection` has type `Vec<Thing>`, which does not implement the `Copy` trait
108 |     for item in collection {
    |                 ----------
    |                 |
    |                 `collection` moved due to this implicit call to `.into_iter()`
    |                 help: consider borrowing to avoid moving into the for loop: `&collection`
115 |     println!("Collection = {:?}", collection);
    |                                   ^^^^^^^^^^ value borrowed here after move
note: this function takes ownership of the receiver `self`, which moves `collection`
   --> /Users/dmd/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/library/core/src/iter/traits/
232 |     fn into_iter(self) -> Self::IntoIter;
    |                  ^^^^

While simple to understand, this all-consuming behaviour is often undesired; some kind of borrow of the iterated items is needed.

To ensure that behaviour is clear, the examples from here onwards use a Thing type that is not Copy (Item 5), as Copy would hide questions of ownership (Item 13) – the compiler would silently make copies everywhere.

    // Deliberately not `Copy`
    #[derive(Clone, Debug, Eq, PartialEq)]
    struct Thing(u64);

    let collection = vec![Thing(0), Thing(1), Thing(2), Thing(3)];

If the collection being iterated over is prefixed with &:

    for item in &collection {
        println!("{}", item.0);
    }
    println!("collection still around {:?}", collection);

then the Rust compiler will look for an implementation of IntoIterator for the type &'a Collection. Properly designed collection types will provide such an implementation; this implementation will still consume Self, but now Self is &Collection rather than Collection, and the associated Item type will be a reference &'a Thing.

This leaves the collection intact after iteration, and the equivalent expanded code is:

    let mut iter = (&collection).into_iter();
    loop {
        let item: &Thing = match iter.next() {
            Some(item) => item,
            None => break,
        };
        println!("{}", item.0);
    }

If it makes sense to provide iteration over mutable references2, then a similar pattern applies for for item in &mut collection: provide an implementation of IntoIterator for &'a mut Collection, with Item of &'a mut Thing.

By convention, standard containers also provide an iter() method that returns an iterator over references to the underlying items, and an equivalent iter_mut() method if appropriate, with the same behaviour as just described. These methods can be used in for loops, but have a more obvious benefit when used as the start of an iterator transformation:

    let result: u64 = (&collection).into_iter().map(|thing| thing.0).sum();

or the equivalent:

    let result: u64 = collection.iter().map(|thing| thing.0).sum();

Iterator Transforms

The Iterator trait has a single required method (next), but also provides default implementations (Item 12) of a large number of other methods that perform transformations on an iterator.

Some of these transformations affect the overall iteration process:

  • take(n) restricts an iterator to emitting at most n items.
  • skip(n) skips over the first n elements of the iterator.
  • step_by(n) converts an iterator so it only emits every n-th item.
  • chain(other) glues together two iterators, to build a combined iterator that moves through one then the other.
  • cycle() converts an iterator that terminates into one that repeats forever, starting at the beginning again whenever it reaches the end. (The iterator must support Clone to allow this.)
  • rev() reverses the direction of an iterator. (The iterator must implement the DoubleEndedIterator trait, which has an additional next_back required method.)
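A few of these in combination (the specific ranges here are arbitrary):

```rust
fn main() {
    // `skip`, `step_by` and `take` reshape the overall iteration:
    // drop the first 2 items, keep every 3rd, stop after 4.
    let some: Vec<u64> = (1..=20).skip(2).step_by(3).take(4).collect();
    assert_eq!(some, vec![3, 6, 9, 12]);

    // `chain` glues two iterators together; `rev` works because
    // the combined iterator is still `DoubleEndedIterator`.
    let combined: Vec<u64> = (1..=2).chain(5..=6).rev().collect();
    assert_eq!(combined, vec![6, 5, 2, 1]);
}
```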

Other transformations affect the nature of the Item that's the subject of the Iterator:

  • map(|item| {...}) is the most general version, repeatedly applying a closure to transform each item in turn. Several of the following entries in this list are convenience variants that could be equivalently implemented as a map.
  • cloned() produces a clone of all of the items in the original iterator; this is particularly useful with iterators over &Item references. (This obviously requires the underlying Item type to implement Clone).
  • copied() produces a copy of all of the items in the original iterator; this is particularly useful with iterators over &Item references. (This obviously requires the underlying Item type to implement Copy).
  • enumerate() converts an iterator over items to be an iterator over (usize, Item) pairs, providing an index to the items in the iterator.
  • zip(it) joins an iterator with a second iterator, to produce a combined iterator that emits pairs of items, one from each of the original iterators, until the shorter of the two iterators is finished.
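For example, with some invented sample data:

```rust
fn main() {
    let names = ["zero", "one", "two"];

    // `enumerate` converts an iterator over items into an iterator
    // over (index, item) pairs.
    let indexed: Vec<(usize, &&str)> = names.iter().enumerate().collect();
    assert_eq!(indexed[1], (1, &"one"));

    // `zip` pairs up two iterators, stopping when the shorter one ends.
    let numbers = [0, 1, 2, 3];
    let pairs: Vec<_> = numbers.iter().zip(names.iter()).collect();
    assert_eq!(pairs.len(), 3); // limited by the 3-element `names`
    assert_eq!(pairs[0], (&0, &"zero"));
}
```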

Yet other transformations perform filtering on the Items being emitted by the Iterator:

  • filter(|item| {...}) is the most general version, applying a bool-returning closure to each item reference to determine whether it should be passed through.
  • take_while() and skip_while() are mirror images of each other, emitting either an initial subrange or a final subrange of the iterator, based on a predicate.

The flatten() method deals with an iterator whose items are themselves iterators, flattening the result. On its own, this doesn't seem that helpful, but it becomes much more useful when combined with the observation that both Option and Result act as iterators: they produce either zero (for None, Err(e)) or one (for Some(v), Ok(v)) items. This means that flattening a stream of Option / Result values is a simple way to extract just the valid values, ignoring the rest.
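A sketch of this filtering-by-flattening:

```rust
fn main() {
    let results: Vec<Result<u64, String>> =
        vec![Ok(1), Err("bad".to_string()), Ok(3)];

    // `Result` acts as an iterator of zero or one items, so flattening
    // a collection of `Result`s keeps just the `Ok` values.
    let valid: Vec<u64> = results.into_iter().flatten().collect();
    assert_eq!(valid, vec![1, 3]);
}
```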

Taken as a whole, these methods allow iterators to be transformed so that they produce exactly the sequence of elements that are needed for most situations.

Iterator Consumers

The previous two sections described how to obtain an iterator, and how to transform it into exactly the right form for precise iteration. This precisely-targeted iteration could happen as an explicit for-each loop:

    let mut even_sum_squares = 0;
    for value in values.iter().filter(|x| *x % 2 == 0).take(5) {
        even_sum_squares += value * value;
    }

However, the Iterator documentation includes many additional methods that allow an iteration to be consumed in a single method call, removing the need for an explicit for loop.

The most general of these methods is for_each(|item| {...}), which runs a closure for each item produced by the Iterator. This can do most of the things that an explicit for loop could do (the exceptions are described below), but for this general form an explicit loop is still preferable.

However, if the body of the for loop matches one of a number of common patterns, the iterator-consuming method is clearer, shorter, and more idiomatic. These patterns include shortcuts for building a single value out of the collection:

  • sum(), for summing a collection of "algebraic" values (i.e. those that have the relevant operator overloads, Item 5).
  • product(), for multiplying together a collection of algebraic values.
  • min() and max(), for finding the extreme values of a collection, relative to the Item's PartialOrd implementation (Item 5).
  • min_by(f) and max_by(f), for finding the extreme values of a collection, relative to a user-specified comparison function.
  • reduce(f) is a more general operation that encompasses the previous methods, building an accumulated value of the Item type by running a closure at each step that takes the value accumulated so far and the current item.
  • fold(init, f) is a generalization of reduce, allowing the "accumulated value" to be of an arbitrary type (not just the Iterator::Item type), with an explicitly provided initial value.
  • scan(init, f) generalizes in a slightly different way, giving the closure a mutable reference to some internal state at each step.
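As an illustration: the same total computed with sum and with an explicit fold, plus a reduce tracking a running maximum:

```rust
fn main() {
    let values = [1u64, 2, 3, 4];

    // `sum` is shorthand for a common accumulation...
    let total: u64 = values.iter().sum();
    assert_eq!(total, 10);

    // ...which could also be written as a `fold`, with the accumulator
    // (here a `u64` starting at 0) made explicit.
    let total2 = values.iter().fold(0u64, |acc, &x| acc + x);
    assert_eq!(total2, 10);

    // `reduce` uses the first item as the starting accumulator, so it
    // returns an `Option` (an empty iterator gives `None`).
    let max = values.iter().copied().reduce(|a, b| a.max(b));
    assert_eq!(max, Some(4));
}
```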

There are also methods for selecting a single value out of the collection:

  • find(p) finds the first item that satisfies a predicate.
  • position(p) also finds the first item satisfying a predicate, but this time it returns the index of the item.
  • nth(n) returns the n-th element of the iterator, if available.

There are methods for testing against every item in the collection:

  • any(p) indicates whether a predicate is true for any item in the collection.
  • all(p) indicates whether a predicate is true for all items in the collection.

There are also methods that allow for the possibility of failure in the closures used with each item (e.g. try_fold, try_for_each); in each case, if a closure returns a failure for an item, the iteration is terminated and the operation as a whole returns the first failure.

Finally, there are methods that accumulate all of the iterated items into a new collection. The most important of these is collect, which can be used to build a new collection for any collection type that implements the FromIterator trait. This trait defines a from_iter(it) method which consumes the iterator and constructs an instance of the collection.
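A few sketches of collect in action; note that it's the target type annotation that selects which FromIterator implementation is used:

```rust
use std::collections::HashSet;

fn main() {
    let values = vec![1u64, 2, 2, 3];

    // Collect into a `Vec`...
    let doubled: Vec<u64> = values.iter().map(|x| x * 2).collect();
    assert_eq!(doubled, vec![2, 4, 4, 6]);

    // ...or into a `HashSet`, deduplicating along the way.
    let unique: HashSet<u64> = values.iter().copied().collect();
    assert_eq!(unique.len(), 3);

    // Collecting into `Result<Vec<_>, _>` stops at the first error.
    let parsed: Result<Vec<u64>, _> = vec!["1", "2", "three"]
        .into_iter()
        .map(|s| s.parse::<u64>())
        .collect();
    assert!(parsed.is_err());
}
```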

Other (more obscure) collection-producing methods include:

  • unzip(), which divides an iterator of pairs into two collections.
  • partition(p), which splits an iterator into two collections based on a predicate that is applied to each item.
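For example:

```rust
fn main() {
    // `partition` splits items into two collections by a predicate...
    let (even, odd): (Vec<u64>, Vec<u64>) = (1..=6).partition(|x| x % 2 == 0);
    assert_eq!(even, vec![2, 4, 6]);
    assert_eq!(odd, vec![1, 3, 5]);

    // ...while `unzip` splits an iterator of pairs into two collections.
    let pairs = vec![(1, 'a'), (2, 'b')];
    let (nums, chars): (Vec<i32>, Vec<char>) = pairs.into_iter().unzip();
    assert_eq!(nums, vec![1, 2]);
    assert_eq!(chars, vec!['a', 'b']);
}
```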

This Item has touched on a wide selection of Iterator methods, but this is only a subset of the methods available; for more information, consult the iterator documentation or read Chapter 15 of Programming Rust (2nd edition), which has extensive coverage of the possibilities.

This rich collection of iterator transformations is meant to be used, to produce code that is more idiomatic, more compact, and with clearer intent.

Loop Transformation

The aim of this Item is to convince you that many explicit loops can be regarded as something to be converted to iterator transformations. This can feel somewhat unnatural for programmers who aren't used to it, so let's walk through a transformation step by step.

Starting with a very C-like explicit loop to sum the squares of the first five even items of a vector:

    let mut even_sum_squares = 0;
    let mut even_count = 0;
    for i in 0..values.len() {
        if values[i] % 2 != 0 {
            continue;
        }
        even_sum_squares += values[i] * values[i];
        even_count += 1;
        if even_count == 5 {
            break;
        }
    }

The first step is to replace vector indexing with direct use of an iterator in a for-each loop:

    let mut even_sum_squares = 0;
    let mut even_count = 0;
    for value in values.iter() {
        if value % 2 != 0 {
            continue;
        }
        even_sum_squares += value * value;
        even_count += 1;
        if even_count == 5 {
            break;
        }
    }

An initial arm of the loop that uses continue to skip over some items is naturally expressed as a filter():

    let mut even_sum_squares = 0;
    let mut even_count = 0;
    for value in values.iter().filter(|x| *x % 2 == 0) {
        even_sum_squares += value * value;
        even_count += 1;
        if even_count == 5 {
            break;
        }
    }

Next, the early exit from the loop once 5 even items have been spotted maps to a take(5):

    let mut even_sum_squares = 0;
    for value in values.iter().filter(|x| *x % 2 == 0).take(5) {
        even_sum_squares += value * value;
    }

The value of the item is never used directly, only in the value * value combination, which makes it an ideal target for a map():

    let mut even_sum_squares = 0;
    for val_sqr in values.iter().filter(|x| *x % 2 == 0).take(5).map(|x| x * x) {
        even_sum_squares += val_sqr;
    }

These refactorings of the original loop have resulted in a loop body that's the perfect nail to fit under the hammer of the sum() method:

    let even_sum_squares: u64 = values
        .iter()
        .filter(|x| *x % 2 == 0)
        .take(5)
        .map(|x| x * x)
        .sum();

When Explicit is Better

This Item has highlighted the advantages of iterator transformations, particularly with respect to concision and clarity. So when are iterator transformations not appropriate or idiomatic?

  • If the loop body is large and/or multi-functional, it makes sense to keep it as an explicit body rather than squeezing it into a closure.
  • If the loop body involves error conditions that result in early termination of the surrounding function, these are often best kept explicit – the try_..() methods only help a little.
  • If performance is important, an iterator transform that involves a closure may be slower than the equivalent explicit code. As ever with performance issues: make the code correct first, then measure, then tune.

Most importantly, don't convert a loop into an iteration transformation if the conversion is forced or awkward. This is a matter of taste to be sure – but be aware that your taste is likely to change as you become more familiar with the functional style.

1: In fact, the iterator can be more general – the idea of emitting next items until done need not be associated with a container.

2: This method can't be provided if a mutation to the item might invalidate the container's internal guarantees, for example the hash value used in a HashMap.

Item 10: Implement the Drop trait for RAII patterns

"Never send a human to do a machine's job" – Agent Smith

RAII stands for "Resource Acquisition Is Initialization"; this is a programming pattern where the lifetime of a value is exactly tied to the lifecycle of some additional resource. The RAII pattern was popularized by the C++ programming language, and is one of C++'s biggest contributions to programming.

With an RAII type,

  • the type's constructor acquires access to some resource, and
  • the type's destructor releases access to that resource.

The result of this is that the RAII type has an invariant: access to the underlying resource is available if and only if the item exists. Because the compiler ensures that local variables are destroyed at scope exit, this in turn means that the underlying resources are also released at scope exit1.

This is particularly helpful for maintainability: if a subsequent change to the code alters the control flow, item and resource lifetimes are still correct. To see this, consider some code that manually locks and unlocks a mutex; this code is in C++, because Rust's Mutex doesn't allow this kind of error-prone usage!

// C++ code
class ThreadSafeInt {
 public:
  ThreadSafeInt(int v) : value_(v) {}

  void add(int delta) {
    mu_.lock();
    // ... more code here
    value_ += delta;
    // ... more code here
    mu_.unlock();
  }

 private:
  Mutex mu_;
  int value_;
};

A modification to catch an error condition with an early exit leaves the mutex locked:

  // C++ code
  void add_with_modification(int delta) {
    mu_.lock();
    // ... more code here
    value_ += delta;
    // Check for overflow.
    if (value_ > MAX_INT) {
      return; // Oops, forgot to unlock() before exit
    }
    // ... more code here
    mu_.unlock();
  }

However, encapsulating the locking behaviour into an RAII class:

// C++ code
class MutexLock {
 public:
  MutexLock(Mutex* mu) : mu_(mu) { mu_->lock(); }
  ~MutexLock()                   { mu_->unlock(); }

 private:
  Mutex* mu_;
};

means the equivalent code is safe for this kind of modification:

  // C++ code
  void add_with_modification(int delta) {
    MutexLock with_lock(&mu_);
    // ... more code here
    value_ += delta;
    // Check for overflow.
    if (value_ > MAX_INT) {
      return; // Safe, with_lock unlocks on the way out
    }
    // ... more code here
  }

In C++, RAII patterns were often originally used for memory management, to ensure that manual allocation (new, malloc()) and deallocation (delete, free()) operations were kept in sync. A general version of this was added to the C++ standard library in C++11: the std::unique_ptr<T> type ensures that a single place has "ownership" of memory, but allows a pointer to the memory to be "borrowed" for ephemeral use (ptr.get()).

In Rust, this behaviour for memory pointers is built into the language (Item 13), but the general principle of RAII is still useful for other kinds of resources2. Implement Drop for any types that hold resources that must be released, such as:

  • Access to operating system resources. For UNIX-derived systems, this usually means something that holds a file descriptor; failing to release these correctly will hold on to system resources (and will also eventually lead to the program hitting the per-process file descriptor limit).
  • Access to synchronization resources. The standard library already includes memory synchronization primitives, but other resources (e.g. file locks, database locks, …) may need similar encapsulation.
  • Access to raw memory, for unsafe types that deal with low-level memory management (e.g. for FFI).

The most obvious instance of RAII in the Rust standard library is the MutexGuard item returned by Mutex::lock() operations (should you choose to ignore the advice of Item 16 and use shared-state parallelism). This is roughly analogous to the final C++ example above, but here the MutexGuard item acts as a proxy to the mutex-protected data in addition to being an RAII item for the held lock:

use std::sync::Mutex;

struct ThreadSafeInt {
    value: Mutex<i32>,
}

impl ThreadSafeInt {
    fn new(val: i32) -> Self {
        Self {
            value: Mutex::new(val),
        }
    }

    fn add(&self, delta: i32) {
        let mut v = self.value.lock().unwrap();
        *v += delta;
    }
}

Item 16 advises against holding locks for large sections of code; to ensure this, use blocks to restrict the scope of RAII items. This leads to slightly odd indentation, but it's worth it for the added safety and lifetime precision.

    fn add_with_extras(&self, delta: i32) {
        // ... more code here that doesn't need the lock
        {
            let mut v = self.value.lock().unwrap();
            *v += delta;
        }
        // ... more code here that doesn't need the lock
    }

Having proselytized the uses of the RAII pattern, an explanation of how to implement it is in order. The Drop trait allows you to add user-defined behaviour to the destruction of an item. This trait has a single method, drop, which the compiler runs just before the memory holding the item is released.

#[derive(Debug)]
struct MyStruct(i32);

impl Drop for MyStruct {
    fn drop(&mut self) {
        println!("Dropping {:?}", self);
    }
}

The drop method is specially reserved to the compiler and can't be manually invoked, because the item would be left in a potentially messed-up state afterwards:

error[E0040]: explicit use of destructor method
  --> raii/src/
63 |     x.drop();
   |     --^^^^--
   |     | |
   |     | explicit destructor calls not allowed
   |     help: consider using `drop` function: `drop(x)`

(As suggested by the compiler, just call drop(obj) instead to manually drop an item.)

The drop method is therefore the key place for implementing RAII patterns, by ensuring that resources are released on item destruction.
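Putting the pieces together, here is a minimal sketch of an RAII type; the Resource type and its global counter are invented stand-ins for a real OS resource such as a file descriptor:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical count of currently-held resources, standing in for
// file descriptors, locks, or similar.
static ACTIVE: AtomicUsize = AtomicUsize::new(0);

struct Resource;

impl Resource {
    fn acquire() -> Self {
        ACTIVE.fetch_add(1, Ordering::SeqCst); // constructor acquires
        Resource
    }
}

impl Drop for Resource {
    fn drop(&mut self) {
        ACTIVE.fetch_sub(1, Ordering::SeqCst); // destructor releases
    }
}

fn main() {
    {
        let _r = Resource::acquire();
        assert_eq!(ACTIVE.load(Ordering::SeqCst), 1);
        // An early return or panic here would still run `drop`.
    }
    // Resource released at scope exit.
    assert_eq!(ACTIVE.load(Ordering::SeqCst), 0);
}
```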

1: This also means that RAII as a technique is mostly only available in languages that have a predictable time of destruction, which rules out most garbage collected languages (although Go's defer statement achieves some of the same ends.)

2: RAII is also still useful for memory management in low-level unsafe code, but that is (mostly) beyond the scope of this book.

Item 11: Prefer generics to trait objects

Item 2 described the use of traits to encapsulate behaviour in the type system, as a collection of related methods, and observed that there are two ways to make use of traits: as trait bounds for generics, or in trait objects. This Item explores the trade-offs between these two possibilities.

Rust's generics are roughly equivalent to C++'s templates: they allow the programmer to write code that works for some arbitrary type T, and specific uses of the generic code are generated at compile time – a process known as monomorphization in Rust, and template instantiation in C++. Unlike C++, Rust explicitly encodes the expectations for the type T in the type system, in the form of trait bounds for the generic.

In comparison, trait objects are fat pointers (Item 8) that combine a pointer to the underlying concrete item with a pointer to a vtable that in turn holds function pointers for all of the trait implementation's methods.

    let square = Square::new(1, 2, 2);
    let draw: &dyn Drawable = &square;
Trait object

These basic facts already allow some immediate comparisons between the two possibilities:

  • Generics are likely to lead to bigger code sizes, because the compiler generates a fresh copy of the code generic::<T>(t: &T) for every type T that gets used; a traitobj(t: &dyn T) method only needs a single instance.
  • Invoking a trait method from a generic will generally be slightly faster than from code that uses a trait object, because the latter needs to perform two dereferences to find the location of the code (trait object to vtable, vtable to implementation location).
  • Compile times for generics may be longer, as the compiler is building more code and the linker has more work to do to fold duplicates.
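The two flavours look like this side by side, using an invented Greet trait for illustration:

```rust
trait Greet {
    fn hello(&self) -> String;
}

struct English;
impl Greet for English {
    fn hello(&self) -> String {
        "hello".to_string()
    }
}

// Generic version: monomorphized into a separate copy of the code
// for every concrete `T` that gets used.
fn generic_greet<T: Greet>(g: &T) -> String {
    g.hello()
}

// Trait-object version: a single copy of the code, dispatching
// through the vtable at run-time.
fn dyn_greet(g: &dyn Greet) -> String {
    g.hello()
}

fn main() {
    let e = English;
    assert_eq!(generic_greet(&e), "hello");
    assert_eq!(dyn_greet(&e), "hello");
}
```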

In most situations, these aren't significant differences; you should only use optimization-related concerns as a primary decision driver if you've measured the impact and found that it has a genuine effect (a speed bottleneck or a problematic occupancy increase).

A more significant difference is that generic trait bounds can be used to conditionally make methods available, depending on whether the type parameter implements multiple traits.

trait Drawable {
    fn bounds(&self) -> Bounds;
}

    struct Container<T>(T);

    impl<T: Drawable> Container<T> {
        // The `area` method is available for all `Drawable` containers.
        fn area(&self) -> i64 {
            let bounds = self.0.bounds();
            (bounds.bottom_right.x - bounds.top_left.x)
                * (bounds.bottom_right.y - bounds.top_left.y)
        }
    }

    impl<T: Drawable + Debug> Container<T> {
        // The `show` method is only available if `Debug` is also implemented.
        fn show(&self) {
            println!("{:?} has bounds {:?}", self.0, self.0.bounds());
        }
    }

    let square = Container(Square::new(1, 2, 2)); // Square is not Debug
    let circle = Container(Circle::new(3, 4, 1)); // Circle is Debug

    println!("area(square) = {}", square.area());
    println!("area(circle) = {}", circle.area());
    circle.show();
    // The following line would not compile:
    // square.show();

A trait object only encodes the implementation vtable for a single trait, so doing something equivalent is much more awkward. For example, a combination DebugDrawable trait could be defined for the show() case, together with some conversion operations (Item 6) to make life easier. However, if there are multiple different combinations of distinct traits, it's clear that the combinatorics of this approach rapidly become unwieldy.

Item 2 described the use of trait bounds to restrict what type parameters are acceptable for a generic function. Trait bounds can also be applied to trait definitions themselves:

trait Shape: Drawable {
    fn render_in(&self, bounds: Bounds);

    fn render(&self) {
        self.render_in(overlap(SCREEN_BOUNDS, self.bounds()));
    }
}

In this example, the render() method's default implementation (Item 12) makes use of the trait bound, relying on the availability of the bounds() method from Drawable.

Programmers coming from object-oriented languages often confuse trait bounds with inheritance, under the mistaken impression that a trait bound like this means that a Shape is-a Drawable. That's not the case: the relationship between the two types is better expressed as Shape also-implements Drawable.

Under the covers, trait objects for traits that have trait bounds

    let square = Square::new(1, 2, 2);
    let draw: &dyn Drawable = &square;
    let shape: &dyn Shape = &square;

have a single combined vtable that includes the methods of the top-level trait, plus the methods of all of the trait bounds:

Trait objects for trait bounds

This means that there is no way to "upcast" from Shape to Drawable, because the (pure) Drawable vtable can't be recovered at runtime (see Item 18 for more on this). There is no way to convert between related trait objects, which in turn means there is no Liskov substitution.

Repeating the same point in different words, a method that accepts a Shape trait object

  • can make use of methods from Drawable (because Shape also-implements Drawable, and because the relevant function pointers are present in the Shape vtable)
  • cannot pass the trait object on to another method that expects a Drawable trait object (because Shape is-not Drawable, and because the Drawable vtable isn't available).

In contrast, a generic method that accepts items that implement Shape

  • can use methods from Drawable
  • can pass the item on to another generic method that has a Drawable trait bound, because the trait bound is monomorphized at compile time to use the Drawable methods of the concrete type.

Another restriction on trait objects is the requirement for object safety: only traits that comply with the following two rules can be used as trait objects.

  • The trait's methods must not be generic.
  • The trait's methods must not return a type that includes Self.

The first restriction is easy to understand; a generic method f is really an infinite set of methods, potentially encompassing f::<i16>, f::<i32>, f::<i64>, f::<u8>, … The trait object's vtable, on the other hand, is very much a finite collection of function pointers, and so it's not possible to fit an infinite quart into a finite pint pot.

The second restriction is a little bit more subtle, but tends to be the restriction that's hit more often in practice – traits that impose Copy or Clone trait bounds (Item 5) immediately fall under this rule. To see why it's disallowed, consider code that has a trait object in its hands; what happens if that code calls (say) let y = x.clone()? The calling code needs to reserve enough space for y on the stack, but it has no idea of the size of y because Self is an arbitrary type. As a result, return types that mention1 Self lead to a trait that is not object safe.
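One common workaround (shown here as a sketch, with invented names) is to add a where Self: Sized bound to the offending method. That excludes the method from the vtable, so the rest of the trait can still be used as a trait object – at the cost of the excluded method being unavailable via dyn.

```rust
trait Stamp {
    fn name(&self) -> String;

    // Returning `Self` would normally make the trait not object safe; the
    // `where Self: Sized` bound excludes this method from the vtable, so
    // `dyn Stamp` is still allowed (but can't call `duplicate`).
    fn duplicate(&self) -> Self
    where
        Self: Sized;
}

#[derive(Clone)]
struct Letter(String);

impl Stamp for Letter {
    fn name(&self) -> String {
        self.0.clone()
    }
    fn duplicate(&self) -> Self {
        self.clone()
    }
}

fn main() {
    let letter = Letter("A".to_string());
    // Trait object creation is allowed despite the `duplicate` method.
    let stamp: &dyn Stamp = &letter;
    println!("{}", stamp.name());
}
```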

The balance of factors so far leads to the advice to prefer generics to trait objects, but there are situations where trait objects are the right tool for the job.

Trait objects fundamentally involve type erasure: information about the concrete type is lost in the conversion to a trait object (see also Item 18). One place where this is useful is collections of heterogeneous objects – code that just relies on the methods of the trait can invoke and combine the methods of differently typed items. The traditional OO example of rendering a list of shapes would be one example of this: the same render() method could be used for squares, circles, ellipses and stars in the same loop.

    let shapes: Vec<&dyn Shape> = vec![&square, &circle];
    for shape in shapes {
        shape.render();
    }

A much more obscure example is when the available types are not known at compile-time; if new code is dynamically loaded at run-time (e.g via dlopen(3)), then items that implement traits in the new code can only be invoked via a trait object.

1: At present, the restriction on methods that return Self includes types like Box<Self> that could be safely stored on the stack; this restriction might be relaxed in future.

Item 12: Use default implementations to minimize required trait methods

The designer of a trait has two different audiences to consider: the programmers who will be implementing the trait, and those who will be using the trait. These two audiences lead to a degree of tension in the trait design:

  • To make the implementor's life easier, it's better for a trait to have the absolute minimum number of methods to achieve its purpose.
  • To make the user's life more convenient, it's helpful to provide a range of variant methods that cover all of the common ways that the trait might be used.

This tension can be balanced by including the wider range of methods that makes the user's life easier, but with default implementations provided for any methods that can be built from other, more primitive, operations on the interface.

A simple example of this is the is_empty() method for an ExactSizeIterator; it has a default implementation that relies on the len() trait method:

    fn is_empty(&self) -> bool {
        self.len() == 0
    }

The existence of a default implementation is just that: a default. If an implementation of the trait has a more optimal way of determining whether the iterator is empty, it can replace the default is_empty() with its own.

This approach leads to trait definitions that have a small number of required methods, plus a much larger number of default-implemented methods. An implementor for the trait only has to implement the former, and gets all of the latter for free.
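A minimal sketch of the pattern (all names here are invented for illustration): one required method, with convenience methods layered on top of it as defaults that any implementor can override.

```rust
// A trait with one required method and default-implemented convenience
// methods built on top of it.
trait Distance {
    // Required: implementors must provide this.
    fn millimetres(&self) -> i64;

    // Defaults: implementors get these for free, but can override them
    // (e.g. if a type stores metres natively and can avoid the division).
    fn metres(&self) -> i64 {
        self.millimetres() / 1000
    }
    fn is_zero(&self) -> bool {
        self.millimetres() == 0
    }
}

struct Walk {
    mm: i64,
}

impl Distance for Walk {
    fn millimetres(&self) -> i64 {
        self.mm
    }
    // `metres` and `is_zero` come from the default implementations.
}

fn main() {
    let w = Walk { mm: 2500 };
    println!("{} m, zero: {}", w.metres(), w.is_zero());
}
```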

It's also an approach that is widely followed by the Rust standard library; perhaps the best example there is the Iterator trait, which has a single required method (next) but which includes a panoply of pre-provided methods (Item 9), over 50 at the time of writing.

Trait methods can impose trait bounds, indicating that a method is only available if the types involved implement particular traits. The Iterator trait also shows that this is useful in combination with default method implementations. For example, the cloned() iterator method has a trait bound and a default implementation:

    fn cloned<'a, T: 'a>(self) -> Cloned<Self>
    where
        Self: Sized + Iterator<Item = &'a T>,
        T: Clone,

In other words, the cloned() method is only available if the underlying Item type implements Clone; when it does, the implementation is automatically available.
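For example, iterating over a Vec<i32> by reference yields &i32 items; because i32 implements Clone, the default-implemented cloned() method is available:

```rust
fn main() {
    let values = vec![1, 2, 3];

    // `iter()` yields `&i32` items; because `i32: Clone`, the trait-bounded
    // default method `cloned()` is available and converts the iterator of
    // references into an iterator of owned values.
    let owned: Vec<i32> = values.iter().cloned().collect();

    assert_eq!(owned, vec![1, 2, 3]);
    println!("{:?}", owned);
}
```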

The final observation about trait methods with default implementations is that new ones can be safely added to a trait even after an initial version of the trait is released. An addition like this preserves backwards compatibility (see Item 20) for both users and implementors1 of the trait.

So follow the example of the standard library and provide a minimal API surface for implementors, but a convenient and comprehensive API for users, by adding methods with default implementations (and trait bounds as appropriate).

1: This is true even if an implementor was unlucky enough to have already added a method of the same name in the concrete type, as the concrete method – known as an inherent implementation – will be used ahead of the trait method. The trait method can be explicitly selected instead by casting: <Concrete as Trait>::method().


The first section of this book covered Rust's type system, which helps provide the vocabulary needed to work with some of the concepts involved in writing Rust code.

The borrow checker and lifetime checks are central to what makes Rust unique; they are also a common stumbling block for newcomers to Rust.

Rather than fighting these checks, it's a good idea to align your code with the consequences of these concepts. It's possible to re-create (some of) the behaviour of C/C++ in Rust, but why bother to use Rust if you do?

Item 13: Understand the borrow checker

Item 14: Understand lifetimes

Item 15: Avoid writing unsafe code

The memory safety guarantees of Rust are its unique selling point; it is the Rust language feature that is not found in any other mainstream language. These guarantees come at a cost; writing Rust requires you to re-organize your code to mollify the borrow checker (Item 13), and to precisely specify the pointer types that you use (Item 8).

Unsafe Rust weakens some of those guarantees, in particular by allowing the use of raw pointers that work more like old-style C pointers. These pointers are not subject to the borrowing rules, and the programmer is responsible for ensuring that they still point to valid memory whenever they're used.

So at a superficial level, the advice of this Item is trivial: why move to Rust if you're just going to write C code in Rust? However, there are occasions where unsafe code is absolutely required – for low-level library code, or for when your Rust code has to interface with code in other languages (Item 36).

The wording of this Item is quite precise, though: avoid writing unsafe code. The emphasis is on the "writing", because much of the time the unsafe code you're likely to need has already been written for you.

The Rust standard libraries contain a lot of unsafe code; a quick search finds around 1000 uses of unsafe in the alloc library, 1500 in core and a further 2000 in std. This code has been written by experts and is battle-hardened by use in many thousands of Rust codebases.

Item 16: Be wary of shared-state parallelism

Item 17: Don't panic

"It looked insanely complicated, and this was one of the reasons why the snug plastic cover it fitted into had the words DON’T PANIC printed on it in large friendly letters." – Douglas Adams

The title of this Item would be more accurately described as: prefer returning a Result to using panic! (but don't panic is much catchier).

The first thing to understand about Rust's panic system is that it is not equivalent to an exception system (like the ones in Java or C++), even though there appears to be a mechanism for catching panics at a point further up the call stack.

Consider a function that panics on an invalid input:

fn divide(a: i64, b: i64) -> i64 {
    if b == 0 {
        panic!("Cowardly refusing to divide by zero!");
    }
    a / b
}

Trying to invoke this with an invalid input fails as expected:

    // Attempt to discover what 0/0 is...
    let result = divide(0, 0);
thread 'main' panicked at 'Cowardly refusing to divide by zero!', panic/src/
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

A wrapper that uses std::panic::catch_unwind to catch the panic

fn divide_recover(a: i64, b: i64, default: i64) -> i64 {
    let result = std::panic::catch_unwind(|| divide(a, b));
    match result {
        Ok(x) => x,
        Err(_) => default,
    }
}

appears to work:

    let result = divide_recover(0, 0, 42);
    println!("result = {}", result);
result = 42

Appearances can be deceptive, however. The first problem with this approach is that panics don't always unwind; there is a compiler option (which is also accessible via a Cargo.toml profile setting) that shifts panic behaviour so that it immediately aborts the process.

thread 'main' panicked at 'Cowardly refusing to divide by zero!', panic/src/
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Abort trap: 6
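For reference, the Cargo.toml profile setting in question looks like this (the same effect is available via the rustc flag -C panic=abort):

```toml
# In Cargo.toml: make release builds abort immediately on panic,
# rather than unwinding the stack (so catch_unwind recovers nothing).
[profile.release]
panic = "abort"
```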

This leaves any attempt to simulate exceptions entirely at the mercy of the wider project settings. It's also the case that some target platforms (for example WebAssembly) always abort on panic, regardless of any compiler or project settings.

A more subtle problem that's surfaced by panic handling is exception safety: if a panic occurs midway through an operation on a data structure, it removes any guarantees that the data structure has been left in a self-consistent state. Preserving internal invariants in the presence of exceptions has been known to be extremely difficult since the 1990s1; this is one of the main reasons why Google (famously) bans the use of exceptions in its C++ code.

Finally, panic propagation also interacts poorly with FFI (foreign function interface) boundaries (Item 36); use catch_unwind to prevent panics in Rust code from propagating to non-Rust calling code across an FFI boundary.

So what's the alternative to panic! for dealing with error conditions? For library code, the best alternative is to make the error someone else's problem, by returning a Result with an appropriate error type (Item 4). This allows the library user to make their own decisions about what to do next – which may involve passing the problem on to the next caller in line, via the ? operator.
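A minimal sketch of this approach (the function names here are invented): the library function returns a Result, and the ? operator passes any failure straight back to the caller.

```rust
use std::num::ParseIntError;

// Library-style function: return a `Result` rather than panicking, making
// the error the caller's problem.
fn parse_pair(a: &str, b: &str) -> Result<(i64, i64), ParseIntError> {
    // The `?` operator returns early with the error on any parse failure.
    Ok((a.parse()?, b.parse()?))
}

fn main() {
    // The caller decides what to do with the error.
    match parse_pair("10", "20") {
        Ok((x, y)) => println!("sum = {}", x + y),
        Err(e) => println!("invalid input: {}", e),
    }
}
```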

The buck has to stop somewhere, and a useful rule of thumb is that it's OK to panic! (or to unwrap(), expect() etc.) if you have control of main; at that point, there's no further caller that the buck could be passed to.

Another sensible use of panic!, even in library code, is in situations where it's very rare to encounter errors, and you don't want users to have to litter their code with .unwrap() calls.

If an error situation should only occur because (say) internal data is corrupted, rather than as a result of invalid inputs, then triggering a panic! is legitimate.

It can even be occasionally useful to allow panics that can be triggered by invalid input, but where such invalid inputs are out of the ordinary. This works best when the relevant entrypoints come in pairs:

  • an "infallible" version whose signature implies it always succeeds (and which panics if it can't succeed),
  • a "fallible" version that returns a Result.

For the former, Rust's API guidelines suggest that the panic! should be documented in a specific section of the inline documentation (Item 26).

The String::from_utf8_unchecked / String::from_utf8 entrypoints in the standard library are an example of the latter (although in this case, the panics are actually deferred to the point where a String constructed from invalid input gets used…).
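A sketch of such a pairing, with invented names: a fallible version that returns a Result, and an "infallible" counterpart that panics on failure (and whose documentation should call out that panic).

```rust
// Fallible entrypoint: invalid input is reported via the `Result`.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

/// "Infallible" entrypoint: panics if `s` is not a valid port number.
/// The panic should be documented in a "Panics" section of the docs.
fn parse_port_or_die(s: &str) -> u16 {
    parse_port(s).expect("invalid port number")
}

fn main() {
    println!("{}", parse_port_or_die("8080"));
    assert!(parse_port("not-a-port").is_err());
}
```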

Assuming that you are trying to comply with the advice of this Item, there are a few things to bear in mind. The first is that panics can appear in different guises; avoiding panic! also involves avoiding:

  • unwrap() and expect() on Option and Result values
  • unreachable!()

Harder to spot are things like:

  • slice[index] when the index is out of range
  • x / y when y is zero.
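Both of these have non-panicking counterparts in the standard library that return an Option instead:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // `v[3]` would panic; `get` returns an `Option` instead.
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(3), None);

    // `x / y` panics when `y` is zero; `checked_div` returns an `Option`.
    let x: i64 = 7;
    assert_eq!(x.checked_div(2), Some(3));
    assert_eq!(x.checked_div(0), None);

    println!("no panics here");
}
```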

The second observation around avoiding panics is that a plan that involves constant vigilance of humans is never a good idea.

However, constant vigilance of machines is another matter: adding a check to your continuous integration (Item 31) system that spots new panics is much more reliable. A basic version could be a simple grep for the most common panicking entrypoints (as above); a more thorough check could involve additional tooling from the Rust ecosystem (Item 30), such as setting up a build variant that pulls in the no_panic crate.

1: Tom Cargill's 1994 article in the C++ Report explored just how difficult exception safety is for C++ template code, as did Herb Sutter's Guru of the Week #8 column.

Item 18: Avoid reflection

Programmers coming to Rust from other languages are often used to reaching for reflection as a tool in their toolbox. They can waste a lot of time trying to implement reflection-based designs in Rust, only to discover that what they're attempting can only be done poorly, if at all. This Item hopes to save that time wasted exploring dead-ends, by describing what Rust does and doesn't have in the way of reflection, and what can be used instead.

Reflection is the ability of a program to examine itself at run-time. Given an item at run-time, it covers:

  • What information can be determined about the item's type?
  • What can be done with that information?

Programming languages with full reflection support have extensive answers to these questions – as well as determining an item's type at run-time, its contents can be explored, its fields modified and its methods invoked. Languages that have this level of reflection support tend to be dynamically typed (e.g. Python, Ruby), but there are also some notable statically typed languages that support it, particularly Java and Go.

Rust does not support this type of reflection, which makes the advice to avoid reflection easy to follow at this level – it's just not possible. For programmers coming from languages with support for full reflection, this absence may seem like a significant gap at first, but Rust's other features provide alternative ways of solving many of the same problems.

C++ has a more limited form of reflection, known as run-time type identification (RTTI). The typeid operator returns a unique identifier for every type, for objects of polymorphic type (roughly: classes with virtual functions):

  • typeid can recover the concrete class of an object referred to via a base class reference
  • dynamic_cast<T> allows base class references to be converted to derived classes, when it is safe and correct to do so.

Rust does not support this RTTI style of reflection either, continuing the theme that the advice of this Item is easy to follow.

Rust does support some features that provide similar functionality (in the std::any module), but they're limited (in ways explored below) and so best avoided unless no other alternatives are possible.

The first reflection-like feature looks magic at first – a way of determining the name of an item's type:

    let x = 42u32;
    let y = Square::new(3, 4, 2);
    println!("x: {} = {}", tname(&x), x);
    println!("y: {} = {:?}", tname(&y), y);
x: u32 = 42
y: reflection::Square = Square { top_left: Point { x: 3, y: 4 }, size: 2 }

The implementation of tname() reveals what's up the compiler's sleeve; the function is generic (as per Item 11) and so each invocation of it is actually a different function (tname::<u32> or tname::<Square>):

fn tname<T: ?Sized>(_v: &T) -> &'static str {
    std::any::type_name::<T>()
}

The std::any::type_name<T> library function only has access to compile-time information; nothing clever is happening at run-time.

The string returned by type_name is only suitable for diagnostics – it's explicitly a "best-effort" helper whose contents may change, and may not be unique – so don't attempt to parse type_name results. If you need a globally unique type identifier, use TypeId instead:

use std::any::TypeId;

fn type_id<T: 'static + ?Sized>(_v: &T) -> TypeId {
    TypeId::of::<T>()
}

    println!("x has {:?}", type_id(&x));
    println!("y has {:?}", type_id(&y));
x has TypeId { t: 12849923012446332737 }
y has TypeId { t: 7635675208524022980 }

The output is less helpful for humans, but the guarantee of uniqueness means that the result can be used in code. However, it's usually best not to do so directly, but to use the std::any::Any trait1 instead.

This trait has a single method type_id(), which returns the TypeId value for the type that implements the trait. You can't implement this trait yourself though, because Any already comes with a blanket implementation for every type T:

impl<T: 'static + ?Sized> Any for T {
    fn type_id(&self) -> TypeId {
        TypeId::of::<T>()
    }
}

Recall from Item 8 that a trait object is a fat pointer that holds a pointer to the underlying item, together with a pointer to the trait implementation's vtable. For Any, the vtable has a single entry, for a method that returns the item's type.

    let x_any: Box<dyn Any> = Box::new(42u64);
    let y_any: Box<dyn Any> = Box::new(Square::new(3, 4, 3));
Any trait objects

Modulo a couple of indirections, a dyn Any trait object is effectively a combination of a raw pointer and a type identifier. This means that Any can offer some additional generic methods:

  • is<T> to indicate whether the trait object's type is equal to some specific other type T.
  • downcast_ref<T> which returns a reference to the concrete type T, provided that the type matches.
  • downcast_mut<T> for the mutable variant of downcast_ref.
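These methods can be exercised in a small self-contained example:

```rust
use std::any::Any;

fn main() {
    let x: Box<dyn Any> = Box::new(42u64);

    // `is::<T>` compares the stored `TypeId` against `T`'s.
    assert!(x.is::<u64>());
    assert!(!x.is::<u32>());

    // `downcast_ref::<T>` recovers a typed reference when the types match,
    // and returns `None` when they don't.
    match x.downcast_ref::<u64>() {
        Some(v) => println!("got a u64: {}", v),
        None => println!("not a u64"),
    }
    assert_eq!(x.downcast_ref::<u64>(), Some(&42u64));
    assert_eq!(x.downcast_ref::<u32>(), None);
}
```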

Observe that the Any trait is only approximating reflection functionality: the programmer chooses (at compile-time) to explicitly build something (&dyn Any) that keeps track of an item's compile-time type as well as its location. The ability to (say) downcast back to the original type is only possible if the overhead of building an Any trait object has happened.

There are comparatively few scenarios where Rust has different compile-time and run-time types associated with an item. Chief among these is trait objects: an item of a concrete type Square can be coerced into a trait object dyn Shape for a trait that the type implements. This coercion builds a fat pointer (object+vtable) from a simple pointer (object/item).

Recall also from Item 11 that Rust's trait objects are not really object-oriented. It's not the case that a Square is-a Shape, it's just that a Square implements Shape's interface. The same is true for trait bounds: a trait bound Shape: Drawable does not mean is-a, it just means also-implements; the vtable for Shape includes the entries for the methods of Drawable.

For some simple trait bounds:

trait Drawable: Debug {
    fn bounds(&self) -> Bounds;
}

trait Shape: Drawable {
    fn render_in(&self, bounds: Bounds);
    fn render(&self) {
        self.render_in(overlap(SCREEN_BOUNDS, self.bounds()));
    }
}

the equivalent trait objects:

    let square = Square::new(1, 2, 2);
    let draw: &dyn Drawable = &square;
    let shape: &dyn Shape = &square;

have a layout whose arrows make the problem clear: given a dyn Shape object, there's no way to build a dyn Drawable trait object, because there's no way to get back to the vtable for impl Drawable for Square – even though the relevant parts of its contents (the address of the Square::bounds method) are theoretically recoverable.

Trait objects for trait bounds

Comparing with the previous diagram, it's also clear that an explicitly constructed &dyn Any trait object doesn't help. Any allows recovery of the original concrete type of the underlying item, but there is no run-time way to see what traits it implements, nor to get access to the relevant vtable that might allow creation of a trait object.

So what's available instead?

The primary tool to reach for is trait definitions, and this is in line with advice for other languages – Effective Java Item 65 recommends "Prefer interfaces to reflection". If code needs to rely on certain behaviour being available for an item, encode that behaviour as a trait (Item 2). Even if the desired behaviour can't be expressed as a set of method signatures, use marker traits to indicate compliance with the desired behaviour – it's safer and more efficient than (say) introspecting the name of a class to check for a particular prefix.

Code that expects trait objects can also be used with objects whose backing code was not available at program link time, because it has been dynamically loaded at run-time (via dlopen(3) or equivalent) – which means that monomorphization of a generic (Item 11) isn't possible.

Relatedly, reflection is sometimes also used in other languages to allow multiple incompatible versions of the same dependency library to be loaded into the program at once, bypassing linkage constraints that There Can Be Only One. This is not needed in Rust, where cargo already copes with multiple versions of the same library (Item 24).

Finally, macros – especially derive macros – can be used to auto-generate ancillary code that understands an item's type at compile-time, as a more efficient and more type-safe equivalent to code that parses an item's contents at run-time.
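The standard derive(Debug) macro is the simplest example of this: the formatting code is generated from the struct's fields at compile time, with no run-time introspection of the item's contents.

```rust
// The derive macro generates an implementation of `Debug` for the type at
// compile time, based on the struct's fields; no run-time reflection needed.
#[derive(Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let p = Point { x: 3, y: 4 };
    let text = format!("{:?}", p);
    assert_eq!(text, "Point { x: 3, y: 4 }");
    println!("{}", text);
}
```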

1: The C++ equivalent of Any is std::any, and the advice there is to avoid it too.

Item 19: Avoid the temptation to over-optimize

"Just because Rust allows you to write super cool non-allocating zero-copy algorithms safely, doesn’t mean every algorithm you write should be super cool, zero-copy and non-allocating." – trentj


"When the Gods with to punish us, they answer our prayers." – Oscar Wilde

For decades, the idea of code reuse was merely a dream. The idea that code could be written once, packaged into a library and re-used across many different applications was an ideal, only realized for a few standard libraries and for corporate in-house tools.

The growth of the Internet, and the rise of open-source software finally changed that. The first openly accessible repository that held a wide collection of useful libraries, tools and helpers, all packaged up for easy re-use, was CPAN: the Comprehensive Perl Archive Network, online since 1995. By the present day, almost every modern language1 has a comprehensive collection of open-source libraries available, housed in a package repository that makes the process of adding a new dependency easy and quick.

However, new problems come along with that ease, convenience and speed. It's usually still easier to re-use existing code than to write it yourself, but there are potential pitfalls and risks that come along with dependencies on someone else's code. This part of the book will help you be aware of these.

The focus is specifically on Rust, and with it use of the cargo tool, but many of the concerns, topics and issues covered apply equally well to other languages.

1: With the notable exception of C and C++, where package management remains somewhat fragmented.

Item 20: Understand what semantic versioning promises

"If we acknowledge that SemVer is a lossy estimate and represents only a subset of the possible scope of changes, we can begin to see it as a blunt instrument" – Titus Winters, "Software Engineering at Google"

Cargo, Rust's package manager, allows automatic selection of dependencies (Item 24) for Rust code according to semantic versioning (semver). A Cargo.toml stanza like

serde = "1.0.*"

indicates to cargo what ranges of semver versions are acceptable for this dependency (see the official docs for more detail on specifying precise ranges of acceptable versions).

Because semantic versioning is at the heart of cargo's dependency resolution process, this Item explores more details about what that means.

The essentials of semantic versioning are given by its summary

Given a version number MAJOR.MINOR.PATCH, increment the:

  • MAJOR version when you make incompatible API changes,
  • MINOR version when you add functionality in a backwards compatible manner, and
  • PATCH version when you make backwards compatible bug fixes.

An important point lurks in the details:

  1. Once a versioned package has been released, the contents of that version MUST NOT be modified. Any modifications MUST be released as a new version.

Putting this in different words:

  • Changing anything requires a new PATCH version.
  • Adding things to the API in a way that means existing users of the crate still compile and work requires a MINOR version upgrade.
  • Removing or changing things in the API requires a MAJOR version upgrade.

There is one more important codicil to the semver rules:

  1. Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

Cargo adapts these rules slightly, "left-shifting" the rules so that changes in the left-most non-zero component indicate incompatible changes. This means that 0.2.3 to 0.3.0 can include an incompatible API change, as can 0.0.4 to 0.0.5.

Semver for Crate Authors

"In theory, theory is the same as practice. In practice, it's not."

As a crate author, the first of these rules is easy to comply with, in theory: if you touch anything, you need a new release. Using Git tags to match releases can help with this – by default, a tag is fixed to a particular commit and can only be moved with a manual --force option. Crates published to crates.io also get automatic policing of this, as the registry will reject a second attempt to publish the same crate version. The main danger for non-compliance is when you notice a mistake just after a release has gone out, and you have to resist the temptation to just nip in a fix.

However, if your crate is widely depended on, then in practice you may need to be aware of Hyrum's Law: regardless of how minor a change you make to the code, someone out there is likely to depend on the old behaviour.

The difficult part for crate authors is the later rules, which require an accurate determination of whether a change is backwards compatible or not. Some changes are obviously incompatible – removing public entrypoints or types, changing method signatures – and some changes are obviously backwards compatible (e.g. adding a new method to a struct, or adding a new constant), but there's a lot of gray area left in between.

To help with this, the Cargo book goes into considerable detail as to what is and is not backwards compatible. Most of these details are unsurprising, but there are a few areas worth highlighting.

  • Adding new items is usually safe, but may cause clashes if code using the crate already makes use of something that happens to have the same name as the new item.
  • Rust's insistence on covering all possibilities means that changing the set of available possibilities can be a breaking change.
    • Performing a match on an enum must cover all possibilities, so if a crate adds a new enum variant, that's a breaking change (unless the enum is marked as non_exhaustive).
    • Explicitly creating an instance of a struct requires an initial value for all fields, so adding a field to a structure that can be publicly instantiated is a breaking change. Structures that have private fields are OK, because crate users can't explicitly construct them anyway; a struct can also be marked as non_exhaustive to prevent external users performing explicit construction.
  • Changing a trait so it is no longer object safe (Item 2) is a breaking change; any users that build trait objects for the trait will stop being able to compile their code.
  • Changing library code so that it uses a new feature of Rust is an incompatible change: users of your crate who have not yet upgraded their compiler to a version that includes the feature will be broken by the change. Consider the minimum supported Rust version (MSRV) to be part of your API.
  • Changing the license of an open-source crate is an incompatible change: users of your crate who have strict restrictions on what licenses are acceptable may be broken by the change. Consider the license to be part of your API.
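The enum case above can be sketched as follows (a toy example; the names are invented). Marking the enum #[non_exhaustive] forces downstream crates to include a wildcard arm in every match, so a later variant addition is not a breaking change for them:

```rust
// `#[non_exhaustive]` makes matches in *other* crates require a wildcard
// arm, so adding a new variant later only needs a MINOR version bump.
#[non_exhaustive]
#[derive(Debug)]
enum Transport {
    Tcp,
    Udp,
}

fn describe(t: &Transport) -> &'static str {
    match t {
        Transport::Tcp => "reliable",
        Transport::Udp => "best-effort",
        // Within the defining crate this arm is technically unreachable,
        // but code in other crates must include one like it.
        #[allow(unreachable_patterns)]
        _ => "unknown",
    }
}

fn main() {
    println!("{}", describe(&Transport::Tcp));
}
```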

An obvious corollary of the rules is this: the fewer public items a crate has, the fewer things there are that can induce an incompatible change (Item 21).

However, there's no escaping the fact that comparing all public API items for compatibility from one release to the next is a time-consuming process that is only likely to yield an approximate (major/minor/patch) assessment of the level of change, at best. Given that this comparison is a somewhat mechanical process, hopefully tooling (Item 30) will arrive to make the process easier1.

If you do need to make an incompatible MAJOR version change, it's nice to make life easier for your users by ensuring that the same overall functionality is available after the change, even if the API has radically changed. If possible, the most helpful sequence for your crate users is to:

  • Release a MINOR version update that includes the new version of the API, and which marks the older variant as deprecated, including an indication of how to migrate.
  • Subsequently release a MAJOR version update that removes the deprecated parts of the API.

A more subtle point is: make breaking changes breaking. If your crate is changing its behaviour in a way that's actually incompatible for existing users, but which could re-use the same API: don't. Force a change in types (and a MAJOR version bump) to ensure that users can't inadvertently use the new version incorrectly.

For the less tangible parts of your API – such as the MSRV or the license – consider setting up a continuous integration check (Item 31) that detects changes, using tooling (e.g. cargo deny, see Item 30) as needed.

Finally, don't be afraid of releasing version 1.0.0 just because it's a commitment that your API is now fixed. Lots of crates fall into the trap of staying at version 0.x forever, but that reduces the already-limited expressivity of semver from three categories (major/minor/patch) to two (effective-major/effective-minor).

Semver for Crate Users

As a user of a crate, the theoretical expectations for a new version of a dependency are:

  • A new PATCH version of a dependency crate Should Just Work™.
  • A new MINOR version of a dependency crate Should Just Work™, but the new parts of the API might be worth exploring to see if there are cleaner/better ways of using the crate now. However, if you do use the new parts you won't be able to revert the dependency back to the old version.
  • All bets are off for a new MAJOR version of a dependency; chances are that your code will no longer compile and you'll need to re-write parts of your code to comply with the new API. Even if your code does still compile, you should check that your use of the API is still valid after a MAJOR version change, because the constraints and preconditions of the library may have changed.

In practice, even the first two types of change may cause unexpected behaviour changes, even in code that still compiles fine, due to Hyrum's Law.

As a consequence of these expectations, your dependency specifications will commonly take a form like "1.4.*" or "0.7.*"; avoid specifying a completely wildcard dependency like "*" or "0.*". A completely wildcard dependency says that any version of the dependency, with any API, can be used by your crate – which is unlikely to be what you really want.

However, in the longer term it's not safe to just ignore major version changes in dependencies. Once a library has had a major version change, the chances are that no further bug fixes – and more importantly, security updates – will be made to the previous major version. A version specification like "1.4.*" will then fall further and further behind, with any security problems left unaddressed.

As a result, you either need to accept the risks of being stuck on an old version, or you need to eventually follow major version upgrades to your dependencies. Tools such as cargo update or Dependabot (Item 30) can let you know when updates are available; you can then schedule the upgrade for a time that's convenient for you.


Semantic versioning has a cost: every change to a crate has to be assessed against its criteria, to decide the appropriate type of version bump. Semantic versioning is also a blunt tool: at best, it reflects a crate owner's guess as to which of three categories the current release falls into. Not everyone gets it right, not everything is clear-cut about exactly what "right" means, and even if you get it right, there's always a chance you may fall foul of Hyrum's Law.

However, semver is the only game in town for anyone who doesn't have the luxury of working in a highly-tested monorepo that contains all the code in the world. As such, understanding its concepts and limitations is necessary for managing dependencies.

1: rust-semverver is a tool that attempts to do something along these lines.

Item 21: Minimize visibility

Rust's basic unit of visibility is the module; by default, a module's items (types, functions, constants) are private and only accessible to code in the same module and its submodules.

Code that needs to be more widely available is marked with the pub keyword, making it public to some other scope. A bare pub is the most common version, which makes the item visible to anything that's able to see the module it's in. That last detail is important; if a somecrate::somemodule module isn't visible to other code in the first place, anything that's pub inside it is still not visible.

The more-specific variants of pub are as follows, in descending order of usefulness:

  • pub(crate) is accessible anywhere within the owning crate. Another way of achieving the same effect is to have a pub item in a non-pub module of the crate, but pub(crate) allows the item to live near the code it is relevant for.
  • pub(super) is accessible to the parent module of the current module, which is occasionally useful for selectively increasing visibility in a crate that has a deep module structure.
  • pub(in <path>) is accessible to code in <path>, which has to be a description of some ancestor module of the current module. This is even more occasionally useful for selectively increasing visibility in a crate that has an even deeper module structure.
  • pub(self) is equivalent to pub(in self) which is equivalent to not being pub. Uses for this are very obscure, such as reducing the number of special cases needed in code generation macros.

The Rust compiler will warn you if you have a code item that is private to the module, but which is not used within that module (and its submodules):

    // Private function that's been written but which is not yet used.
    fn not_used_yet(x: i32) -> i32 {
        x + 3
    }
warning: function is never used: `not_used_yet`
  --> visibility/src/
46 |     fn not_used_yet(x: i32) -> i32 {
   |        ^^^^^^^^^^^^
   = note: `#[warn(dead_code)]` on by default
warning: 1 warning emitted

Although the warning mentions code that is "never used", it's often really a warning that the code can't be used from outside the module.

Separately from the question of how to increase visibility is the question of when to do so. The answer: as little as possible, at least for code that's intended to be re-used as a crate (i.e. not a standalone or experimental project that will never be re-used).

Once a crate item is public, it can't be made private again without breaking any code that uses the crate, thus necessitating a major version bump (Item 20). The converse is not true: moving a private item to be public generally only needs a minor version bump, and leaves crate users unaffected. (Read through the API compatibility guidelines and notice how many are only relevant if there are pub items in play.)

This advice is by no means unique to this Item, nor unique to Rust:

  • The Rust API guidelines include the advice that structs should have private fields.
  • Effective Java (3rd edition) has:
    • Item 15: Minimize the accessibility of classes and members
    • Item 16: In public classes, use accessor methods, not public fields
  • Effective C++ (2nd edition) has:
    • Item 18: Strive for class interfaces that are complete and minimal (my italics)
    • Item 20: Avoid data members in the public interface

Item 22: Avoid wildcard imports

Rust's use statement pulls in a named item from another crate or module, and makes that name available for use in the local module's code without qualification. A wildcard import (or glob import) of the form use somecrate::module::* says that every public symbol from that module should be added to the local namespace.

As described in Item 20, an external crate may add new items to its API as part of a minor version upgrade; this is considered a backwards compatible change.

The combination of these two observations means that you should avoid wildcard imports from crates that you don't control. If you ignore this advice, you run the risk that a (backwards compatible) upgrade to your dependencies will cause your code to stop compiling, because the new symbol in the dependency happens to clash with a name that your code is already using. If your code is a crate that others depend on, then those users in turn can also have their builds broken by a dependency upgrade.

If there's some reason why you can't follow this advice, then you should mitigate the risk: pin dependencies that you wildcard import to a precise version, so that minor version upgrades of the dependency aren't automatically allowed.

Finally, if you do control the source of the wildcard import, then the concerns given above disappear. For example, it's common for a test module to include use super::*;. It's also possible for crates that use modules primarily as a way of dividing up code to have:

mod thing;
pub use thing::*;

Item 23: Re-export dependencies whose types appear in your API

The title of this Item is a little convoluted, but working through an example will make things clearer.

Item 24 described how cargo supports different versions of the same library crate being linked into a single binary, in a transparent manner. Consider a binary that uses the rand crate; more specifically, one which uses some 0.8 version of the crate:

# Top-level binary crate
dep-lib = "0.1.0"
rand = "0.8.*"

    let mut rng = rand::thread_rng(); // rand 0.8
    let max: usize = rng.gen_range(5..10);
    let choice = dep_lib::pick_number(max);

The final line of code also uses a notional dep-lib crate, and this crate internally uses1 a 0.7 version of the rand crate:

# dep-lib library crate
rand = "0.7.3"

    use rand::Rng;

    /// Pick a number between 0 and n (exclusive).
    pub fn pick_number(n: usize) -> usize {
        rand::thread_rng().gen_range(0, n)
    }

An eagle-eyed reader might notice a difference between the two code examples:

  • In version 0.7.x of rand (as used by the dep-lib library crate), the Rng::gen_range() method takes two non-self parameters, low and high.
  • In version 0.8.x of rand (as used by the binary crate), the Rng::gen_range() method takes a single non-self parameter, range.

This is a non-backwards-compatible change, and so rand has increased its effective-major version component (0.7.x to 0.8.x) accordingly, as required by semantic versioning (Item 20). Nevertheless, the binary that combines the two incompatible versions works just fine; cargo sorts everything out.2

However, things get a lot more awkward if the crate's API exposes a type from its dependency. In the example, this involves an Rng item – but specifically a version-0.7 Rng item:

    pub fn pick_number_with<R: Rng>(rng: &mut R, n: usize) -> usize {
        rng.gen_range(0, n) // Method from the 0.7.x version of Rng
    }

As an aside, think carefully before using another crate's types in your API: it intimately ties your crate to that of the dependency. For example, a major version bump for the dependency (Item 20) will automatically require a major version bump for your crate too.

In this case, rand is a semi-standard crate that is high quality and widely used, and which only pulls in a small number of dependencies of its own (Item 24), so including its types in the crate API is probably fine on balance.

Returning to the example, an attempt to use this entrypoint from the top-level binary fails:

        let mut rng = rand::thread_rng();
        let max: usize = rng.gen_range(5..10);
        let choice = dep_lib::pick_number_with(&mut rng, max);

Unusually for Rust, the compiler error message isn't very helpful:

error[E0277]: the trait bound `ThreadRng: rand_core::RngCore` is not satisfied
  --> re-export/src/
17 |         let choice = dep_lib::pick_number_with(&mut rng, max);
   |                                                ^^^^^^^^ the trait `rand_core::RngCore` is not implemented for `ThreadRng`
  ::: /Users/dmd/src/effective-rust/examples/dep-lib/src/
17 | pub fn pick_number_with<R: Rng>(rng: &mut R, n: usize) -> usize {
   |                            --- required by this bound in `pick_number_with`
   = note: required because of the requirements on the impl of `rand::Rng` for `ThreadRng`

Investigating the types involved leads to confusion, because the relevant traits do appear to be implemented – but the caller actually implements the RngCore trait from rand 0.8, while the library is expecting an implementation of the RngCore trait from rand 0.7.

Once you've finally deciphered the error message and realized that the version clash is the underlying cause, how can you fix it? The key observation is that while the binary can't explicitly use two different versions of the same crate, it can do so indirectly (as in the original example above).

From the perspective of the binary author, the problem can be worked around by adding an intermediate wrapper crate that hides the naked use of rand v0.7 types.

This is awkward, and a much better approach is available to the author of the library crate. It can make life easier for its users by explicitly re-exporting either:

  • the types involved in the API
  • the entire dependency crate.

For the example, the latter approach works best: as well as making the version 0.7 Rng and RngCore types available, it also makes available the methods (like thread_rng) that construct instances of the type:

// Re-export the version of `rand` used in this crate's API.
pub use rand;

The calling code now has a different way to directly refer to version 0.7 of rand, as dep_lib::rand:

        let mut prev_rng = dep_lib::rand::thread_rng(); // v0.7 Rng instance
        let choice = dep_lib::pick_number_with(&mut prev_rng, max);

With this example in mind, the advice of the title should now be a little less obscure: re-export dependencies whose types appear in your API.

1: This example (and indeed Item) is inspired by the approach used in the RustCrypto crates.

2: This is possible because the Rust toolchain handles linking, and does not have the constraint that C++ inherits from C of needing to support separate compilation.

Item 24: Manage your dependency graph

Item 25: Add crate features judiciously


Item 26: Document public interfaces

Item 27: Use macros judiciously

"In some cases it's easy to decide to write a macro instead of a function, because only a macro can do what's needed" – Paul Graham, "On Lisp"

Item 28: Listen to Clippy

"It looks like you're writing a letter. Would you like help?" – Microsoft Clippit

Item 29: Write more than unit tests

Item 30: Take advantage of the tooling ecosystem

Set up a continuous integration (CI) system

Asynchronous Rust

Item 32: Distinguish between the async and non-async worlds

Item 33: Familiarize yourself with executors

Item 34: Use the type system to decipher problems

Item 35: Understand how Futures are combined

Beyond Standard Rust

Item 36: Control what crosses FFI boundaries

Item 37: Prefer bindgen to manual FFI mappings

Item 38: Understand the limitations of no_std