I’ve been giving this book, Algorithms to Live By: The Computer Science of Human Decisions, to my students for the last couple of years, but I hadn’t made it all the way through myself until this year. I’m so glad I did. There’s nothing quite so exciting to me as having actual scientific data to guide my choices and improve how I do things. Each chapter takes a different CS algorithm, explains the theory behind it in mostly layman’s terms, and then shows how that algorithm can work in real life for non-computer-related activities.
Some chapters were better than others at making the connection between an algorithm and human decisions or activities. I found the first five chapters particularly compelling and relevant; it was hit or miss with the last six. I still enjoyed the theory, but I didn’t always see the application, or it just wasn’t as salient for me as it might be for others.
I took the most notes in the scheduling chapter, which covered a lot of ground related to to-do lists, prioritizing tasks, etc. The key to the scheduling chapter had to do with goals. Depending on your goal, you would use a different strategy for deciding what to do when. Goals include things like getting everything done on time, getting the most important things done on time, or getting everything done within a certain amount of time. Each of these three asks you to consider different factors. In the first, you’re just looking at due dates: which thing needs to get done first. In the second, you’re looking at due dates and “weightiness”: what needs to get done first and which of these is most important. In the last, you’re adding up how long everything will take, and since the total is the same no matter the order, you can simply work through the list in any order at all.
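The book stays in prose, but the first two strategies are easy to sketch in code. This is just an illustration with made-up task names and numbers: the first goal maps to sorting by due date (the Earliest Due Date rule), and for the second, one common heuristic is to sort by estimated time divided by importance, so that short, important tasks float to the top.

```python
# Hypothetical tasks: (name, due_day, importance_weight, estimated_days).
tasks = [
    ("taxes", 10, 5, 2),
    ("email", 3, 1, 1),
    ("grading", 7, 3, 4),
]

# Goal 1: get everything done on time -> Earliest Due Date.
# Sort purely by when each task is due.
by_due_date = sorted(tasks, key=lambda t: t[1])

# Goal 2: weigh lateness by importance -> sort by time-per-unit-of-weight,
# so dense, important work comes first.
by_density = sorted(tasks, key=lambda t: t[3] / t[2])

print([t[0] for t in by_due_date])  # ['email', 'grading', 'taxes']
print([t[0] for t in by_density])   # ['taxes', 'email', 'grading']
```

Note how the two goals produce different orderings from the same list, which is the chapter’s point: pick the strategy only after you’ve picked the goal.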
The chapter also addresses the cost of context switching and how to decide whether to switch or keep working on what you’re working on. And it addresses human thrashing, when you’ve stalled and don’t seem to be able to work on anything or can’t decide where to start. Randomness is your friend in this case. Facing a pile of email? Just respond in random order. It may not be optimal, but it’s better than not doing anything.
My other favorite and most applicable chapter was about caching, which is really just about organizing information so that it’s quickly retrievable. The short-term memory on your computer is a way of organizing information so that, for example, typing an email isn’t a slow, one-letter-at-a-time process. Computers have to figure out how accessible to make information and how to clear memory in order to store more information. Computers have to predict what you might use next. There are many ways of organizing a cache, but one way that’s pretty efficient is called the Least Recently Used (LRU) algorithm. The idea is in the name. Whatever you most recently used, you’re likely to use again. And the opposite is also true: the thing that you haven’t touched in a long time can probably be bumped to make room for other things. Applying this idea to humans and the physical world is kind of cool.
The authors use a closet and paper filing as examples. Some people have their closets organized by clothing type and then by color, or by color and then clothing type. Either way, the idea is that you have a system of organization that in theory helps you find something to wear. However, it turns out, mathematically, this system is likely not any faster than if you put the items you just wore at the front of the closet (for me, this is on the left side) and sometimes had to search through everything to find that one pair of pants. Ditto for piles of paper. Yes, occasionally you have to go through the whole pile to find something, but the things you need and use most are likely on top.
Another lesson from this chapter has to do with forgetting things, or seeming to forget things, as we age. As anyone who’s prone to opening 50 tabs in their browser knows, the more information you’re trying to hold in memory, the slower the retrieval of that information gets. Going back to a tab you haven’t looked at in an hour may take a while. A similar process seems to be happening as you age. It’s not that your brain is starting to deteriorate (though there is some of that); the main reason it’s sometimes hard to remember names or even words is that you have to sift through a lot of information to find those things, especially things that are not on top of the pile. As you accumulate more information and knowledge (and store it somewhere), it becomes computationally harder to retrieve it. Kind of cool.
There are lots of other good tidbits in here about sorting, randomness, game theory, and more. They’re well worth reading about. The authors end the book writing about computational kindness, a concept I can get behind. The basic idea, which is really the central idea of the book, is that algorithms are created to reduce computational load. Computing, whether on a machine or in a brain, is work and takes energy. We would do well to reduce that workload when we can. An example they give is about making your preferences for, say, which restaurant you want to go to with your friends, explicit rather than doing the polite thing of saying, “Oh, wherever we go is fine with me.” Usually it’s not, and your companions know that it’s not, so they have to do the computational work of considering what you’ve left unsaid. In the end, we think of computers as completely rational in the sense that they can grind through every possible option and come out with the “right” decision by doing so. But that’s only for easy problems. Most things we confront as humans, and frankly computers, too, are really hard problems. And there’s only so much time in a day, so computers, and humans, use algorithms that
make assumptions, show a bias toward simpler solutions, trade off the costs of error against the costs of delay, and take chances. These aren’t the concessions we make when we can’t be rational.
They’re what being rational means.
So go be more computer-like. You’ll be doing yourself and the rest of humanity a pretty big favor.