11 - Writing Automated Tests

11.1 - How to Write Tests

Let's create a new library:

$ cargo new adder --lib
$ cd adder

You may notice that the default library project created by cargo has this block in src/lib.rs:

src/lib.rs
#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        let result = 2 + 2;
        assert_eq!(result, 4);
    }
}

This little snippet is generated by cargo for you, so you don't need to remember all the boilerplate for writing a test (and perhaps as a gentle nudge to get you to write tests in the first place). It does come with some new syntax we haven't seen before, though. First, there's #[cfg(test)]. This is called an attribute; attributes are somewhat like annotations in languages such as Java, or decorators in JavaScript.

In this case, this is a configuration attribute which tells the compiler that mod tests should only be included in the compiled output if the test configuration is active. This prevents our test code from being shipped as part of our release binary. The #[test] attribute marks the it_works function as a test case.

The assert_eq! macro asserts that the two parameters passed to it are equal. If they are not, assert_eq! will panic, causing our test to fail.

We can run all tests in this project with cargo test. Cargo will run all our tests for us with the built-in test runner and report on any failures. You may notice if you look carefully at the output that there's a section about Doc-tests. Rust can actually compile examples in our documentation and run them as tests - we'll learn more about this in chapter 14.
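
For a hedged taste of what that looks like, here's a minimal sketch: cargo test compiles the example inside the doc comment below and runs it as a test. The add_two function is the same illustrative function we'll use again later in this chapter.

src/lib.rs
/// Adds two to the given number.
///
/// ```
/// assert_eq!(4, adder::add_two(2));
/// ```
pub fn add_two(a: i32) -> i32 {
    a + 2
}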

We can add a second test to this block that always fails:

src/lib.rs
#[cfg(test)]
mod tests {
    #[test]
    fn exploration() {
        assert_eq!(2 + 2, 4);
    }

    #[test]
    fn another() {
        panic!("Sad trombone");
    }
}

Any panic! in a test causes that test to be marked as a failure - which is exactly what assert_eq! does when the two values aren't equal.

Checking Results with the assert! Macro

The assert! macro is provided by the standard library and is much like assert in Python, Node.js, C, and friends. You pass it a condition: if the condition is true, nothing happens; if it is false, assert! panics.

Way back in chapter 5 when we were learning about how to write methods, we came up with this example:

#[derive(Debug)]
struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width > other.width && self.height > other.height
    }
}

Here's an example of tests that use assert! to verify that a larger rectangle can_hold a smaller one, and that a smaller one cannot hold a larger one:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn larger_can_hold_smaller() {
        let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        assert!(larger.can_hold(&smaller));
    }

    #[test]
    fn smaller_cannot_hold_larger() {
        let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        assert!(!smaller.can_hold(&larger));
    }
}

Note that we added use super::* at the top of mod tests. The tests module is a regular module, so names from the parent module are not automatically in scope. This use statement brings everything from the parent module into scope, letting us write Rectangle rather than super::Rectangle or crate::Rectangle.

We can add a custom error message to an assert! macro:

    #[test]
    fn larger_can_hold_smaller() {
        let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        assert!(
            larger.can_hold(&smaller),
            "Rectangle {:?} should fit inside {:?}",
            smaller,
            larger
        );
    }

This works just like the println! macro: a format string, followed by any parameters to interpolate into it.

Testing Equality with the assert_eq! and assert_ne! Macros

assert_eq! takes two parameters and asserts that they are equal; if they are, it does nothing, and if they are not, it panics. assert_ne! asserts that two values are not equal. Some languages and test frameworks encourage you to pass the expected value as the left-hand parameter and the actual value as the right; in others it's the reverse. Rust doesn't give any special meaning to either position - it just calls them left and right.

If either of these fails, the resulting error message prints the left and right values for you. In order to print them, the values you pass in must implement the Debug trait. These macros compare their arguments with the == and != operators, so the values also need to implement the PartialEq trait. Both traits can be derived (see appendix C):

src/lib.rs
#[derive(PartialEq, Debug)]
struct Rectangle {
    width: u32,
    height: u32,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn larger_can_hold_smaller() {
        let larger = Rectangle { width: 8, height: 7 };
        let smaller = Rectangle { width: 2, height: 2 };

        // These rectangles are not "equal".
        assert_ne!(larger, smaller);
    }
}

Just like assert!, we can provide an optional custom message at the end.
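
For example, a minimal sketch of assert_eq! with a custom message (assuming the Rectangle struct from the listing above, and a hypothetical test name) might look like this:

    #[test]
    fn rectangles_with_same_dimensions_are_equal() {
        let a = Rectangle { width: 3, height: 4 };
        let b = Rectangle { width: 3, height: 4 };

        // The custom message is only formatted and printed if the assertion fails.
        assert_eq!(a, b, "expected {:?} to equal {:?}", a, b);
    }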

Checking for Panics with should_panic

If we have some code that we know should panic in certain conditions, we can verify that it does so with the should_panic attribute:

src/lib.rs
pub struct Guess {
    value: i32,
}

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!(
                "Guess value must be greater than or equal to 1, got {}.",
                value
            );
        } else if value > 100 {
            panic!(
                "Guess value must be less than or equal to 100, got {}.",
                value
            );
        }

        Guess { value }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic]
    fn greater_than_100() {
        Guess::new(200);
    }
}

This test isn't very robust, because if the test panics for some reason other than the one we're expecting, it will still pass. We can fix that by passing an expected value to should_panic; the test will only pass if the panic message contains the given text.

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic(expected = "less than or equal to 100")]
    fn greater_than_100() {
        Guess::new(200);
    }
}

Using Result<T, E> in Tests

We've been writing tests that panic when they fail, but we can also write tests that return an error when they fail. This is very handy for testing functions that return a Result already, and it also allows the use of the ? operator in the test.

#[cfg(test)]
mod tests {
    #[test]
    fn it_works() -> Result<(), String> {
        if 2 + 2 == 4 {
            Ok(())
        } else {
            Err(String::from("two plus two does not equal four"))
        }
    }
}

If we want to do some "negative testing" and verify that a Result is an Err variant, we can use assert!(value.is_err()).
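
A minimal sketch of that kind of negative test (parsing an invalid integer is just an arbitrary way to produce an Err here):

#[cfg(test)]
mod tests {
    #[test]
    fn parsing_garbage_fails() {
        // "not a number" can't be parsed as an i32, so this returns an Err.
        let value: Result<i32, _> = "not a number".parse();
        assert!(value.is_err());
    }
}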

11.2 - Controlling How Tests Are Run

By default, cargo test runs tests in parallel and captures all of their output, preventing it from being displayed, since logging from the code under test would be distracting when you're reading the list of tests that passed and failed. All of this behavior can be changed, though.

When you run cargo test, you can pass arguments to cargo test itself or to the generated test binary. If you try running these two commands:

$ cargo test --help
$ cargo test -- --help

You'll see very different output. Anything before the -- goes to cargo test, and anything after it goes to the test binary.

Running Tests in Parallel or Consecutively

By default, tests run in parallel across multiple threads. If two different tests write to the same file or modify the same database, though, you can run into problems because both tests change shared state at the same time. Ideally, well-written tests shouldn't do this sort of thing, but you can force tests to run on a single thread with cargo test -- --test-threads=1. Your tests will take longer, but they won't interfere with each other if they share state.
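
For example:

# Force the whole suite onto a single thread
$ cargo test -- --test-threads=1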

Showing Function Output

When running tests, all output is captured by default and only shown for tests that fail. You can run cargo test -- --show-output to see output from passing tests as well.
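
For example:

# Show stdout from passing tests as well as failing ones
$ cargo test -- --show-output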

Running a Subset of Tests by Name

In a large project, running the full test suite can take a while. If you're trying to track down a problem in a specific area, sometimes you just want to run a single test or a small group of tests. If you pass a string to cargo test, it will run only tests that include that string in the name of their test function. For example, given these tests:

src/lib.rs
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_two_and_two() {
        assert_eq!(4, add_two(2));
    }

    #[test]
    fn add_three_and_two() {
        assert_eq!(5, add_two(3));
    }

    #[test]
    fn one_hundred() {
        assert_eq!(102, add_two(100));
    }
}

We can run:

# Run all tests
$ cargo test

# Run the "one_hundred" test only
$ cargo test one_hundred

# Run any test with "add" in the name
$ cargo test add

Ignoring Some Tests Unless Specifically Requested

Sometimes we have a test that is expensive to run, or one that is failing in some particularly obscure way that we don't have time to fix right now. We can skip such tests with the ignore attribute:

#[test]
fn it_works() {
    assert_eq!(2 + 2, 4);
}

#[test]
#[ignore]
fn expensive_test() {
    // code that takes an hour to run
}

We can run only ignored tests with cargo test -- --ignored, and we can run all tests (ignored and not-ignored alike) with cargo test -- --include-ignored.
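
As commands:

# Run only the ignored tests
$ cargo test -- --ignored

# Run every test, ignored or not
$ cargo test -- --include-ignored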

11.3 - Test Organization

In Rust we like to think of unit tests as focused tests that exercise a single module at a time, and integration tests as tests that exercise the public-facing API of your library exactly as external code would, potentially touching multiple modules and even libraries you depend on.

Unit Tests

The convention for unit tests is to add a tests module with a #[cfg(test)] attribute in each source file, which tests functions and methods found in that file (just as we saw above). Putting the test code immediately alongside the code that it is testing has many advantages.

Some people in the testing community passionately believe that you should only test the public parts of any module. Others advocate the opposite: testing private functions and methods directly when they are difficult to exercise through the public interface. What constitutes "good practice" is well beyond the scope of this book, but note that a child module can see the private members of its parent module, so the tests module is free to test private functionality.
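
As a hedged sketch (internal_adder and this variant of add_two are purely illustrative), a test in the child tests module can call a private function directly:

src/lib.rs
pub fn add_two(a: i32) -> i32 {
    internal_adder(a, 2)
}

// Private: not visible outside this crate, but visible to the tests module below.
fn internal_adder(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn internal() {
        // The child module can see its parent's private items.
        assert_eq!(4, internal_adder(2, 2));
    }
}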

Integration Tests

To write integration tests, we create a tests directory at the top level of our package, next to src. Cargo treats this as a special folder, compiling each file in tests and running it as an integration test. Integration tests are completely external to your library and can only access its public API, exactly like any other consumer. For the "adder" crate we've been using as an example in this chapter, we might have a directory structure like:

adder
├── Cargo.lock
├── Cargo.toml
├── src
│   └── lib.rs
└── tests
    └── integration_test.rs

tests/integration_test.rs
use adder;

#[test]
fn it_adds_two() {
    assert_eq!(4, adder::add_two(2));
}

Much like the src/bin folder, each file in the tests folder is compiled as a separate crate, so we need to use our library by name in each one. There's also no need for a #[cfg(test)] attribute here, since files in tests are only ever compiled when running tests.

Integration tests are run alongside unit tests, so to run these we just need to run cargo test (although note that if unit tests are failing, integration tests will not be run). We can still limit which integration tests run by passing a function name to cargo test. We can also run tests in a single file (for example, in "integration_test.rs") with cargo test --test integration_test.
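
For example:

# Run only the tests in tests/integration_test.rs
$ cargo test --test integration_test

# Run a single test function by name
$ cargo test it_adds_two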

Submodules in Integration Tests

Let's suppose we're working on a large project with several integration test files. It might be helpful to have some common helper functions for setting up tests, or perhaps some mock data, that we want to share across multiple files. You might try putting such code into a common.rs file and then using mod common. The problem is that cargo test will treat common.rs as a test file and try to run it.

To avoid this, we can use the older module naming style we mentioned in chapter 7 and put our common code in tests/common/mod.rs. cargo test will not recurse into subdirectories, so it won't run these files as tests.
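
A minimal sketch of that layout, assuming a hypothetical setup helper we want to share:

adder
├── Cargo.toml
├── src
│   └── lib.rs
└── tests
    ├── common
    │   └── mod.rs
    └── integration_test.rs

tests/common/mod.rs
pub fn setup() {
    // Shared setup code for the integration tests would go here.
}

tests/integration_test.rs
use adder;

mod common;

#[test]
fn it_adds_two() {
    common::setup();
    assert_eq!(4, adder::add_two(2));
}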

One problem you'll quickly run into if you go down this path is that, with multiple integration test files, you'll probably end up with shared submodules whose helpers are used by some test crates but not others. This triggers dead-code warnings, because when a test crate that doesn't use a particular helper is compiled, that helper appears unused. One solution is to combine all your integration tests into a single crate, with the tests in submodules of that crate.

Integration Tests for Binary Crates

If your project has only a binary crate and no library crate, you can't use integration tests to exercise anything in your project, because a binary crate doesn't expose functions that other crates can use. This is another good reason to put as much of your logic as you can into a library crate, and then create a thin application wrapper around it in your binary crate.
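
A minimal sketch of that structure, reusing the illustrative add_two function from earlier in the chapter:

src/main.rs
use adder::add_two;

fn main() {
    // The binary is a thin wrapper; the logic lives in the library crate,
    // where unit and integration tests can reach it.
    println!("2 + 2 = {}", add_two(2));
}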

Continue to chapter 12.