linked to by This Week in Rust!
Here’s a new version of my clue solver with the engine rewritten in Rust and more accurate simulations!
I wrote the original Python engine in 2007 and haven’t really touched it since then (other than to upgrade it to Python 3), so rewriting it in Rust was an interesting experience. Here are some categorized thoughts about Rust, software development, and simulations/math:
Rust is my new favorite language. (you can read my initial thoughts about it here, and all my Rust posts here) Rewriting the engine in Rust wasn’t particularly tricky, with a few exceptions I’ll talk about below, and it sped up simulations by over 4x without any real effort! (on my desktop machine a simulation went from taking ~2.2 seconds to ~0.5 seconds, although I made some changes after that I’ll talk about in the Simulations section)
- I’m really digging the feel of Rust; it’s kind of a low-level language (you have very fine control of where copies happen) with the convenience of lots of helpful libraries and nice language features.
- Having to declare which variables/references are mutable versus not is not only helpful for performance and safety, it also makes for nice documentation on methods. I noticed this because in more than one place in the Python code I had written comments saying I had to make a copy to avoid modifying the original. This is similar to const-correctness in C++, which is also nice but less enforced by the language.
- I declared an enum for all the possible cards, which is safer than the strings I used in Python; I’m eternally afraid of typos! But in Rust, an enum is more powerful than a C/C#/Java enum; it can have arbitrary data associated with it. This is a handy feature, but it means the language is careful about the difference between an enum value and a reference to an enum value, even when, as in my case, it doesn’t matter because they’re the same size. I was constantly using `card` when I really wanted `*card`. Luckily the compiler error messages are incredibly helpful, so it was always clear what I needed to use, but it was still annoying.
- The `?` operator makes handling errors so easy! Part of this project was also rewriting the CGI script in Rust, which meant handling all sorts of errors if the right query parameters aren’t specified and whatnot. But `?` makes it easy to return early if a suboperation fails, and it made the CGI code much nicer!
- Using Visual Studio Code for Rust development with the rust-analyzer extension was pretty nice – it annotates variables with types and method parameters with their names! The only downside was that sometimes when I pasted some code in, it would freeze for a few seconds, although this didn’t seem to happen all the time.
- One note – while porting the simulation algorithm to Rust increased its speed by over 4x, this is for the release version. The debug version in Rust was actually 3x slower than the Python version! So just a reminder to always benchmark in release mode 🙂
- Rust also does backwards type inference, which is pretty cool. So in a method you can declare a variable with `let mut some_variable;`, then do some stuff with it, then return it, and Rust will figure out it must be the same type as the return type of the method. (and with rust-analyzer it will show the type in the IDE!)
- I did run into some Rust-specific hurdles when I was declaring my data structures. In Python the `ClueEngine` class has a list of `CluePlayer`s, and each player has a reference back to the engine. In Rust this is tricky to do because of lifetime management and…stuff. I read some articles about how graphs are usually declared in Rust (a similar problem) and tried to copy that, but it never quite worked. At one point I had a `RefCell<Vec<Rc<RefCell<PlayerData>>>>`, which I’m guessing is wrong 🙂 Anyway, even if it had worked, using `RefCell` means that you lose the safety of compile-time checking and get run-time checking instead, which is scary. It turned out I just had to restructure things a little bit to avoid needing a reference back to the `ClueEngine`, and it was well worth it.
- After I fixed the simulation problem I was able to parallelize it using Rayon (like I did for the population centers project) pretty easily! It resulted in about a 5x speedup from the non-parallel version.
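The mutability-as-documentation point is easier to see in code. Here's a minimal sketch with hypothetical types (the real engine's are different):

```rust
#[derive(Clone, Debug, PartialEq)]
struct Hand {
    cards: Vec<String>,
}

// `&Hand` promises the caller we only read the hand...
fn count_cards(hand: &Hand) -> usize {
    hand.cards.len()
}

// ...while `&mut Hand` documents right in the signature that we modify it.
// In Python this needed a comment like "make a copy to avoid modifying the original".
fn remove_card(hand: &mut Hand, card: &str) {
    hand.cards.retain(|c| c != card);
}

fn main() {
    let mut hand = Hand {
        cards: vec!["Knife".to_string(), "Hall".to_string()],
    };
    assert_eq!(count_cards(&hand), 2);
    remove_card(&mut hand, "Knife");
    assert_eq!(count_cards(&hand), 1);
}
```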
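Here's a small sketch of the enum dereference annoyance, using a hypothetical three-card subset of the real enum:

```rust
// A simplified subset of the card enum for illustration; the real
// engine declares all 21 cards.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Card {
    ProfessorPlum,
    Knife,
    Hall,
}

// Iterating a slice by reference yields `&Card`, so the closure's
// argument has to be dereferenced to compare against a plain `Card`.
fn count_knives(cards: &[Card]) -> usize {
    cards.iter().filter(|card| **card == Card::Knife).count()
}

fn main() {
    let cards = [Card::ProfessorPlum, Card::Knife, Card::Hall];
    assert_eq!(count_knives(&cards), 1);
    // Since Card is small and Copy, `*card` is just a cheap copy of the value.
    let first: Card = *cards.first().unwrap();
    assert_eq!(first, Card::ProfessorPlum);
}
```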
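Here's a rough sketch of what `?`-based query-parameter handling can look like; the function and parameter names are made up for illustration, not taken from the actual CGI code:

```rust
use std::collections::HashMap;

// Hypothetical helper: look up a query parameter or build an error message.
fn get_param(params: &HashMap<String, String>, name: &str) -> Result<String, String> {
    params
        .get(name)
        .cloned()
        .ok_or_else(|| format!("missing query parameter: {}", name))
}

fn parse_player_count(params: &HashMap<String, String>) -> Result<u8, String> {
    // Each `?` returns early with the error if the suboperation fails,
    // so the happy path reads straight through.
    let raw = get_param(params, "players")?;
    let count: u8 = raw.parse().map_err(|e| format!("bad players value: {}", e))?;
    Ok(count)
}

fn main() {
    let mut params = HashMap::new();
    params.insert("players".to_string(), "6".to_string());
    assert_eq!(parse_player_count(&params), Ok(6));
    assert!(parse_player_count(&HashMap::new()).is_err());
}
```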
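The backwards type inference trick looks like this (a toy function, not from the engine):

```rust
fn simulations_to_run() -> u64 {
    // No type annotation here...
    let mut count;
    // ...and `0` on its own would default to i32, but because `count` is
    // returned below, Rust infers u64 from the function's return type.
    count = 0;
    count += 20_000;
    count
}

fn main() {
    assert_eq!(simulations_to_run(), 20_000u64);
}
```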
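And here's a sketch of the shape of the data-structure fix, with hypothetical field and method names: the engine owns its players outright, and operations that would have needed a pointer back to the engine become engine methods that take a player index, so the borrow checker stays happy without `Rc`/`RefCell`.

```rust
struct PlayerData {
    num_cards: usize,
}

// The engine owns the players directly; no back-references needed.
struct ClueEngine {
    players: Vec<PlayerData>,
}

impl ClueEngine {
    fn new(num_players: usize) -> Self {
        ClueEngine {
            players: (0..num_players).map(|_| PlayerData { num_cards: 0 }).collect(),
        }
    }

    // Instead of a player reaching "up" to the engine, the engine
    // operates on a player identified by index.
    fn deal_card_to(&mut self, player_index: usize) {
        self.players[player_index].num_cards += 1;
    }
}

fn main() {
    let mut engine = ClueEngine::new(6);
    engine.deal_card_to(5);
    assert_eq!(engine.players[5].num_cards, 1);
}
```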
One big difference I noticed from the Python code I wrote in 2007 was that I was much more eager to add unit-type tests. I had an OK set of integration tests covering some bugs I had had to fix in Python, but I expanded that a bunch. Test counts aren’t everything, but I went from 20 tests in Python to 65 engine tests in Rust, plus 40 tests for the CGI server in Rust (I didn’t have any CGI tests before). These tests gave me a lot of confidence to make performance improvements in Rust, as well as to tweak the simulation algorithm.
I had suspected before that the simulations were wrong, but I wasn’t sure what exactly the problem was. Let me walk through a simple example of what the old algorithm said and how I fixed it.
Let’s say you’re at the very beginning of a 6-player game; you’re Player 2. You don’t have any suspect cards, so as far as you’re concerned each suspect has a 16% chance of being the solution. Player 1 makes a suggestion (ProfessorPlum, Knife, Hall) and it goes by all the way to Player 6, who shows a card, but you don’t see what it is.
The key question here is: according to you, has the chance that ProfessorPlum is the solution gone up or stayed the same?
This sounds a lot like the Monty Hall Problem, but it’s not the same because there’s no host manipulating things. After talking to David, the answer we’re pretty convinced about is that the chance that Player 6 has ProfessorPlum goes up significantly, and the chance that ProfessorPlum is the solution goes up, but not as much.
The old algorithm did not show the solution chance going up at all. Here’s roughly how the old algorithm worked:
- Look at what cards might still be the solution (so omit cards that we know a player has).
- For each solution combination (Suspect, Weapon, Room):
  - Update the engine’s state to say this combination is the solution.
  - Run N simulations. For each simulation:
    - For each player, randomly assign her cards from the cards we don’t know who has, **but skip ones we know she doesn’t have**.
    - After this is all done, see if the engine is consistent. If so, increment the count for the current configuration.
The bold part was wrong. I added that to try to reduce the number of inconsistent simulations, but it biases the results. In the situation we constructed above, it means that every simulation will be consistent, and since we weight all solutions equally, the solutions will all have equal probability. Perhaps there’s some way to do a Markov Chain Monte Carlo-like thing where we could ensure the solution is valid but weight it appropriately, but that would take more research.
The fix was pretty simple – just take the bold part out and assign the cards blindly. And that’s pretty much it! Because we’re throwing out more inconsistent simulations now, I increased the number we run from 2,000 to 20,000, but then got most of that performance back by parallelizing the algorithm.
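As a sanity check on the example above, here's a self-contained toy version of the corrected algorithm (not the engine's real code): it enumerates every candidate solution, deals the remaining 18 cards completely blindly, throws out deals that contradict what we observed (Players 2–5 passed on the suggestion, Player 6 showed a card), and counts the survivors. It uses a tiny xorshift RNG to stay dependency-free, and it ignores some knowledge the real engine tracks, like your own hand.

```rust
// Minimal xorshift64 RNG so the sketch has no dependencies; the slight
// modulo bias in `below` is fine for an illustration.
struct Rng(u64);

impl Rng {
    fn next(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
    fn below(&mut self, n: usize) -> usize {
        (self.next() % n as u64) as usize
    }
}

fn simulate(sims_per_solution: usize) -> f64 {
    let mut rng = Rng(0x243F6A8885A308D3);
    // Card ids: suspects 0-5 (ProfessorPlum = 0), weapons 6-11 (Knife = 6),
    // rooms 12-20 (Hall = 12).
    let trio = [0u8, 6, 12]; // the suggested cards: ProfessorPlum, Knife, Hall
    let holds_trio = |hand: &[u8]| hand.iter().any(|c| trio.contains(c));
    let mut plum_consistent = 0u64;
    let mut total_consistent = 0u64;
    for s in 0u8..6 {
        for w in 6u8..12 {
            for r in 12u8..21 {
                let solution = [s, w, r];
                // The 18 cards actually dealt out, 3 per player.
                let mut deck: Vec<u8> = (0u8..21).filter(|c| !solution.contains(c)).collect();
                for _ in 0..sims_per_solution {
                    // Assign the cards blindly: Fisher-Yates shuffle, then players
                    // take chunks of 3 (index 0 = Player 1, ..., index 5 = Player 6).
                    for i in (1..deck.len()).rev() {
                        let j = rng.below(i + 1);
                        deck.swap(i, j);
                    }
                    // Consistency with what we observed: Players 2-5 showed
                    // nothing, and Player 6 showed some card from the trio.
                    let passed = (1..=4).all(|p: usize| !holds_trio(&deck[p * 3..p * 3 + 3]));
                    let showed = holds_trio(&deck[15..18]);
                    if passed && showed {
                        total_consistent += 1;
                        if s == 0 {
                            plum_consistent += 1;
                        }
                    }
                }
            }
        }
    }
    plum_consistent as f64 / total_consistent as f64
}

fn main() {
    let p = simulate(1000);
    println!("P(ProfessorPlum is the solution) = {:.3}", p);
    // The prior was 1/6 (about 0.17); this evidence should push it noticeably higher.
    assert!(p > 0.25 && p < 0.45);
}
```

Running this, ProfessorPlum's solution probability comes out well above the 1/6 prior, matching the intuition above, and solutions containing the whole trio correctly get zero consistent deals (Player 6 would have nothing to show).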
There was another small fix when I realized that we have to extend the “don’t try to be too clever and avoid inconsistent states” principle back to when we assign the solution. Otherwise, if we know that either Player 6 or the solution has ProfessorPlum, then once we pick a non-ProfessorPlum solution we would know Player 6 has ProfessorPlum, and we shouldn’t start with that knowledge when assigning cards.
I am a little worried that even 20,000 simulations won’t be enough for more complicated game states, so I might play around with that some more.
Anyway, probability is hard!