
Ray Tracing in One Weekend

Bradley Gannon

2025-12-19

TL;DR: I read the book Ray Tracing in One Weekend and implemented its contents in Rust. You can find my code here. The book is fun and approachable, and it produces some lovely images with readable code and no external dependencies.

A few dozen spheres on a gray surface with a white sky. Many smaller spheres surround three larger ones. The spheres are random solid colors, and most are either matte or metallic, with a few glass ones mixed in. There is a clear depth-of-field effect.
The final test scene given in the book with 300 rays per pixel. My code rendered this in about 25 minutes on a single core. Note that the smaller spheres are randomized, so they don’t match the book’s version.

Overview

A raytracer is a program that creates realistic images by simulating optics. The simulation sends many rays into the scene and traces them as they bounce around according to a set of rules. Each pixel in the resulting image is assigned the average color of all the rays that passed through it; each ray has a slightly different direction and so takes a slightly different path through the scene. The color of each ray accumulates the colors of the objects it reflects off of or refracts through. Sending out more rays per pixel improves image quality, but with diminishing returns, and computation time grows linearly with the ray count. In the test scene shown in this post, around 300 rays per pixel seems to be the sweet spot.
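To make the averaging concrete, here's a minimal sketch of the per-pixel loop, assuming some trace closure that sends one jittered ray through the pixel and returns the color it picks up (this is just the shape of the idea, not the book's or my repo's exact API):

/// Average `samples_per_pixel` ray colors into a single pixel color.
fn pixel_color(trace: impl Fn() -> [f64; 3], samples_per_pixel: u32) -> [f64; 3] {
    let mut sum = [0.0f64; 3];
    for _ in 0..samples_per_pixel {
        // Each sample is one ray with a slightly different direction.
        let sample = trace();
        for i in 0..3 {
            sum[i] += sample[i];
        }
    }
    // The pixel's final color is the average over all samples.
    sum.map(|channel| channel / samples_per_pixel as f64)
}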

The book Ray Tracing in One Weekend guides the reader through the implementation of a simple but minimally useful raytracer. I had a lot of fun working through it, and it’s clear that the book is the result of a lot of hard work and iterative refinement. The example code is given in C++ but doesn’t use any substantially advanced features, so it was relatively easy to translate it into Rust. The book doesn’t use any dependencies outside the C++ standard library, which conflicts with the typical package-driven development style that Rust’s design encourages, but it’s a good choice in this case because it allows the reader to learn by building without foreign abstractions.

I don’t think I’ve learned enough in one week to fully recreate the raytracer without the help of the book, but I definitely understand more about raytracing than when I started. Maybe that’s a better metric, since in practice nobody builds things in complete isolation except in artificial circumstances.

Points of Interest

I won’t go into much detail about the implementation itself, since I’d only be doing a worse job than what the book already does. Instead, I’ll just mention a few areas of particular interest to me and then some ideas for future work.

Random Number Generation

There are several places in the code where random numbers are needed. The book's code uses a random number generator (RNG) from the C++ standard library that gives values on the interval [0, 1), but no such generator exists in the Rust standard library. Just for fun (and to avoid bringing in a dependency), I decided to implement an RNG myself. I don't know much about this kind of algorithm, but I was able to port the sample code from the Permuted Congruential Generator website, which claims that PCG is better than most other existing RNG algorithms. I wanted to implement a version with 128 bits of internal state and 64 bits of output, but I couldn't figure out how to do that without going deeper into the paper and other resources, so I settled for the 32-bit output from the sample code, calling it twice to get 64 bits.
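For reference, here's roughly what the minimal C sample looks like ported to Rust. This is a sketch: the struct and method names are mine, while the constant and seeding steps follow the pcg-random.org sample code.

// Minimal PCG32 (XSH-RR): 64 bits of LCG state, 32 bits of output.
struct Pcg32 {
    state: u64,
    inc: u64,
}

impl Pcg32 {
    fn new(seed: u64, stream: u64) -> Self {
        // Seeding procedure from the sample code: pick the stream, step
        // once, add the seed, and step again.
        let mut rng = Pcg32 { state: 0, inc: (stream << 1) | 1 };
        rng.next_u32();
        rng.state = rng.state.wrapping_add(seed);
        rng.next_u32();
        rng
    }

    fn next_u32(&mut self) -> u32 {
        // Advance the LCG state, then permute the *old* state down to 32 bits.
        let old = self.state;
        self.state = old.wrapping_mul(6364136223846793005).wrapping_add(self.inc);
        let xorshifted = (((old >> 18) ^ old) >> 27) as u32;
        let rot = (old >> 59) as u32;
        xorshifted.rotate_right(rot)
    }

    // Two 32-bit outputs glued together, as described above.
    fn next_u64(&mut self) -> u64 {
        ((self.next_u32() as u64) << 32) | self.next_u32() as u64
    }
}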

This wasn’t enough, though, because I still needed a way to convert these random 64 bits into a double-precision floating point value on the desired interval. It’s not sufficient to simply cast the 64 bits to an f64 because that won’t put the value on the proper interval. It also won’t work to use the random value to somehow select among the representable f64s on the desired interval because this won’t result in a uniform distribution. (Recall that floating-point values are distributed non-linearly by definition.) A solution to this problem is—apparently—to take the bottom 53 bits, cast them to an f64, and then divide that value by 2532^{53}. This gives a value on the desired interval while making maximum use of the available precision.

It took me a long time to understand how this works, and I'm still not sure that I do, but here's how I think about it. Floating-point precision decreases the further you get from zero. That means that, on the desired interval, the largest difference between any two representable values is equal to the difference between the largest number and the second-largest number. Let's call this value ε. Unlike for real numbers, we can name these particular values by looking at the underlying bits. Here's a truncated f64 number line:

...
0.9999999999999997779553951 = 0x3feffffffffffffe
0.9999999999999998889776975 = 0x3fefffffffffffff
1.0000000000000000000000000 = 0x3ff0000000000000
1.0000000000000002220446050 = 0x3ff0000000000001
...

The difference between the first two numbers is the value ε that we're looking for. This means that there's no point in selecting random values with any more precision than this because we can't necessarily represent them and they'll just get rounded. So, if we count by ε, how many values do we get on the desired interval? It turns out to be exactly 2^53:

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 64
>>> epsilon = Decimal(0.9999999999999998889776975) - Decimal(0.9999999999999997779553951)
>>> num_representable = Decimal(1) / epsilon
>>> num_representable == Decimal(2**53)
True

Translating C++ to Rust

My implementation took slightly more effort because I wanted to write it in Rust, while the book's code is given in C++. This wasn't particularly difficult in most cases because the authors have taken care to avoid fancy language features, but there are a few concepts and patterns that don't translate directly. One example is the use of out parameters. Rust can represent these with &mut parameters, but it's not idiomatic, and I found returning an Option to be much better. Rust doesn't have classes, but the example code uses them regularly, so mapping them onto Rust's traits and trait objects required some care. And, of course, Rust's strict lifetime rules proved incompatible with certain parts of the book's design, since it uses lots of shared pointers. I struggled with this for a while when it came time to link materials and their spheres because I couldn't convince the compiler that everything would stay in scope (and fair enough, seeing as it's not exactly obvious from the design). In the end I decided to use Rcs for the materials, which technically incurs a small reference-counting overhead, but it didn't appear in any of the cargo flamegraph outputs that I generated.
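To give a sense of the shape this ends up taking, here's a sketch with stand-in types (the names loosely follow the book's, but the exact signatures are mine, not necessarily what's in my repo):

use std::rc::Rc;

// Minimal stand-in types so the sketch compiles on its own.
#[derive(Clone, Copy)]
struct Vec3(f64, f64, f64);
type Point3 = Vec3;
struct Ray { origin: Point3, direction: Vec3 }

// The book's classes become traits, its out parameters become Option
// returns, and its shared_ptrs become Rcs.
trait Material {
    fn scatter(&self, ray_in: &Ray, hit: &HitRecord) -> Option<(Ray, Vec3)>;
}

struct HitRecord {
    t: f64,
    point: Point3,
    normal: Vec3,
    material: Rc<dyn Material>,
}

trait Hittable {
    // Instead of `bool hit(..., hit_record& rec)`, return the record itself.
    fn hit(&self, ray: &Ray, t_min: f64, t_max: f64) -> Option<HitRecord>;
}

struct Sphere {
    center: Point3,
    radius: f64,
    material: Rc<dyn Material>, // shared with every HitRecord the sphere produces
}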

It’s worth noting that while the Rust compiler did help me a lot—as usual—this kind of program seems especially prone to subtle logic bugs. This is a lot different from gluing APIs together, and there are many more opportunities to do the wrong thing as you push f64s around. Maybe this could be mitigated with more expressive use of the type system.

Some Ideas for the Future

This project was a fun taste of a whole new computing niche (for me), and I’d like to continue on in the future. The natural next step is to move on to the second book in the series, and I expect to do that eventually, but first I’d at least like to try to parallelize the existing code across multiple cores. This problem is embarrassingly parallel, so it would be pretty silly not to try. GPU support using wgpu or similar is a stretch goal. I’m also interested in reducing the number of rays per pixel for parts of the image where additional sampling isn’t needed. I have to imagine this is a well-studied area, but my first guess is that you can define some kind of stopping condition as rays come in and the pixel color stabilizes.