/dev/joe's Experience Constructing MIT Mystery Hunt 2024

It's no secret that for the last year I've been busy helping create the next MIT Mystery Hunt. But now it's over, and I can talk about it. Anything that spoils specific puzzles will be in spoiler blocks. There will be large parts of the article in those spoiler blocks. Sorry.

This was my fourth time constructing an MIT Mystery Hunt, though I've also constructed two BAPHLs and a Caltech puzzle hunt, and a number of one-off puzzles for various reasons. I do feel like I outdid some of my previous efforts this year, but this article is not only going to be about my puzzles but about all aspects of the Hunt I experienced. I also test-solved more than 100 puzzles (129 test-solve sessions, but this includes several canceled sessions or cases where my session got merged with another). I'm not going to write about every one of those, because this would never get done.

Constructing a Mystery Hunt is a massive endeavor, and it takes many, many people to pull it off. The wrapup video gives credit to several of those people but there are many, many more. I can't thank all those people enough. It is a team effort, with a massive team; I really don't know how we did it as Beginners' Luck for the 2010 hunt with so few people. But at a minimum you can consider this thanks going out to everyone credited in the wrapup video or in the solution document for any puzzle.

Also, I should point out that any opinions expressed here are my own, not my team's. Speaking of which, I've collected a bunch of social media posts from other people (I had no hand in these, except where I responded in the AMA). Their opinions are their own, and the posts may contain spoilers:

If you wrote something and it isn't listed here, it just means I didn't see it; I'm not excluding anybody on purpose.

TOC

This is more like a novella than a blog post. It's twice as long as it looks because of the content hidden in spoiler blocks, over 20,000 words! Since you probably won't read it all in one go, here are links to each main heading and each round of puzzles:

Construction

This section includes my comments on the construction effort in general.

But let's start at the beginning. We picked up a coin in January 2023, after some tough discussions about whether we were ready to write a hunt. This team (or some semblance of that team, anyway, since there was a lot of turnover in the years since) wrote a pretty successful hunt, run at Caltech in 2018, but our followup was delayed for various reasons and then canceled completely when the hunt we were writing about an epidemic disease that turns people into zombies was interrupted by a real epidemic disease. But we decided we were ready to give it another try, and picked up that coin.

First, we had theme selection. There were dozens of theme submissions, which were whittled down into a short list and then an even shorter list for a final vote. The vote was contentious and very close, but we ended up with a mythology theme that developed into the one you saw in the opening video (skip to 16:35 for the actual start, which was delayed because the MIT AV person running lights and special effects only showed up sharp at noon and needed time to set up), about the god Pluto disappearing from among the community of Greco-Roman gods after the planet Pluto is demoted even further from its dwarf-planet status. I am not going to mention the other theme ideas we didn't select, since we might go with one of them if we happen to write another hunt in the future. I will say that I didn't really have a horse in this race, as the idea I had been proposing (for the previous two Mystery Hunts I had been involved with constructing), The Phantom Tollbooth, was actually selected for the 2018 Caltech hunt. (I was also pleased to see that book earn a spot in Bookspace!)

The construction started off slow, but we got back on track using a three-region puzzle-writing retreat, a series of weekend puzzle-writing "virtual retreats," and some weekend test-solving virtual retreats to get the slug of new puzzles through testing. We were still behind schedule, but were basically done by the New Year, and used the last two weeks before Hunt to construct replacement puzzles for a few puzzles we thought were too long or problematic. We had a full-Hunt test-solve going in December (mostly full, anyway, since some puzzles were still being constructed) but too many of the solvers dropped out, so a few of us veterans read through the puzzles and solutions those testers weren't going to reach, pointing out specific puzzles to retest.

In parallel with this we had various creative projects. These included the art for all the rounds and a few individual puzzles, and the overworld map. Old maps often drew sea monsters in areas that weren't well charted, so the one on our map was traditional. Ours was dubbed Elise the Kraken, and everybody fell in love with her. If anybody was betting on our mascot (a minor Mystery Hunt tradition since Zappy the Rat in Bookspace), this is the one:

Elise the Kraken

Other creative activities included the opening skit and interactions during the hunt, recorded videos, the coin, staff shirts, and merchandise (a shirt, pin, and poster). The artist for the shirt we were selling posted a time-lapse video of the process of designing the shirt. I was not significantly involved with creative tasks, but I didn't completely skip out on this part of the hunt. I have a cameo in the safety video about wearing your nametag (the video starts at about 31:50 of the opening video above; my cameo is at 33:13) and another as Duncan the Swordmaster in the Las Vegas Argonauts video you could get from the Oracle.

Team Names

It's traditional for teams to have some pretty weird names, but it felt to me like the names were weirder than usual. Maybe it's just because of participating in the hint queue and other behind-the-scenes aspects where I saw many teams' names, repeatedly throughout hunt. Of course, ever since COVID we've had many more teams than we used to because there are lots of all-remote teams who participated for the first time in 2021.

One of the teams had an apparently blank name. One of my teammates first noted this one and, after copying their name into something that could read it, reported that the name was seven zero-width spaces (U+200B). People on the team told us that the name was supposed to be one zero-width space, but we definitely have seven on our server. Python tells me the same thing. When I put in Galactic Trendsetters' name, there are U+FEFF zero-width no-break spaces between the airplanes - but just one between each two adjacent airplanes. I assume that's to keep them from combining or something.
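If you want to reproduce this kind of check yourself, it's only a couple of lines of Python. The strings below are stand-ins for the actual team names, not copies of them:

```python
# A name that renders as blank: seven zero-width spaces (U+200B).
name = "\u200b" * 7
print(len(name))                          # 7, even though it looks empty
print([hex(ord(c)) for c in name])        # ['0x200b', ..., '0x200b']

# A stand-in for the airplane name: U+FEFF (zero-width no-break space)
# between each pair of adjacent airplane characters.
gt = "\u2708\ufeff\u2708\ufeff\u2708"
print(gt.count("\ufeff"))                 # 2 separators for 3 airplanes
```

Iterating over a Python string yields code points, so `ord()` on each character exposes exactly which invisible characters are hiding in a name.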

There was also a team named canadian geese with three goose emojis on each side, in imitation of Galactic Trendsetters. And Team to be named at a later date seemed to be an imitation of our own name.

Another team's name was a Unicode snowman character, apparently pronounced "unicode snowman." Another all-emoji name was 🪸🐢🪻. That's coral, turtle, hyacinth. One team's name was just ! and another's was Interrobang. Yet another team was called [title of team].

Another team was called now 👇 let's 💢🙏 try this 🅿 garlic naan. it's paper 🤓📝 thin. 💃💃 [BITE]. 👄 that 😅☝ seasoning is 👌 OUT 😥 OF 🤤 THIS WORLD. 👌. That whole thing was their name. I mean, it's not as long as Atlas Shrugged, or even the longest version of our name, but there is a lot going on there. Another really long name was DAVID ZENG 251 TA BASTION MAIN 🎉NOT A NEET🎉 🚎RB.GY/6QV6Q🚎 🐟RB.GY/55Q9P🐟 42POTATOES CRUCIFIXPANDA ERASMUS ERAMOOSE TIMEBUFFALO EPOCHBISON YEARDEER 🕓🐂⏳🐃🕰️🦬 TSUKIHI ARARAGI #1 FAN + friends. I don't know if that is supposed to encode a puzzle, but the two URLs redirect to puzzles from a CMU hunt that this David Zeng wrote.

One team was called meow meow. Another team was called 喵喵喵, which is meow meow meow in Chinese.

A few teams had names that made me think they'd really appreciate our hunt or specific puzzles within it, including The DONUT within the DONUT, ℙoNDeterministic, ET Phone In Answer, Destroying All Nebulae, Resistant to Nixing Pluto, and I'm not a planet either. Some of those space-themed names might have been chosen after looking at the space theme on our registration site. However, I'm not a planet either wasn't new, and for the question where we asked them to draw an original constellation for us among some made-up stars, they drew Pluto, with the heart. Not every team's constellation was going to get in, since we had more of them than puzzles in the hole in the ceiling, but theirs was accepted immediately. In a similar vein, The Mathemagicians probably enjoyed our 2018 Caltech hunt, if they were there for it.

Team Visits

I went out on team visits in costume at one point, as Dionysus. (Not the Dionysus you all saw on stage during the opening skit! I was a shorter Dionysus, still with the tie-as-headband, but no flask.) I visited a bunch of teams. The MIT VISITOR card I saved from last year was still linked to my account and was reactivated with a new Tim Ticket this year. Unlike last year, when I barely used the card, I had plenty of opportunities to try it this year. A few times it seemed not to work on doors where it should have, even though it gave me a green light, but my experience at Building 46 made me think some of those doors were just broken. The entrance there facing Vassar Street features a normal door, a revolving door, and another normal door. Only the normal door farthest from the card reader opened for the card.

One team we visited was The Girlz and otherz and GPTz. This team seemed to consist of six Chinese "girlz," perhaps college age, with an older Chinese man sitting at the back of the room, not participating. Their supervisor, guardian, or something. Their English wasn't perfect, and was strongly accented, but they were able to hold a conversation with us with only a few instances of one of us not understanding the other. There were a few of these all-Chinese teams, some of them playing only from China, who were among our biggest hint-requesters, often citing language and cultural issues they had getting through the puzzles. I applaud them for trying, though they are far outside our target audience and the Mystery Hunt is not written for teams like them. (I was in a similar position about 15 years ago when I was among the pioneers in trying to do Australian puzzle hunts from the other side of the world. At least we spoke mostly the same language, but we struggled with the cultural references at times. One puzzle was about the card game 500, which, despite being developed in the United States, is now almost completely unknown outside Australia.) We heard from one of these Chinese teams (I didn't get the comment directly and I don't know which team) that they weren't intending to abuse the hint system, but they had one member they couldn't control who was just constantly asking for hints any time the button was available. Team behavior management is another issue entirely, and one that fully English-speaking teams also encounter.

I also visited Control Group, who had a full classroom, about half of them wearing headbands with cat ears. I'm not sure I've encountered them before in person, so I don't know if it's usual for half of them to pretend to have taken an experimental drug that turns people into cats.

Finally, I went out on the inaugural run of the interaction after solving the Hydra. It was supposed to be the result of clearing a server of a computer virus which generated popups of a snake named snek, spawning new copies the way the Hydra spawns heads. (Incidentally, after I saw a mockup of the Hydra round page full of snek popups, I posted a link to the Vi Hart snake snake snake snake snake video, which one of my other teammates was coincidentally looking for.) For this interaction, I wasn't portraying one of the gods. Instead, I was the grateful server admin. In the interaction, we had several people get in a circle, put both their hands in the middle, and clasp hands with two other people. Then we asked them to untangle themselves without letting go of any hands.

The Server Troubles

At the start of the hunt, we experienced severe server troubles. I don't understand the full technical details, but the setup involved some middleware between the web server and the database which was apparently supposed to detect when either of them was in need of a restart. My impression is that there were probably memory leaks which restarts cured. But in this case, the middleware itself was getting bogged down, so it didn't do its job. Post-hunt discussions seemed to lay the blame on websockets, but I don't know what it is about websockets that we were supposed to do differently.

What we eventually did was strip out this layer, so the server talked directly to the database. This made it "fail quickly" in the words of one of our tech gurus. Instead of gradually slowing down to uselessness, either the server responded quickly, or it generated a 500 error quickly, at which point we ran a command to restart it. That worked, but we had essentially replaced a broken piece of software with humans, to restart another broken piece of software, and we added more humans later to keep it going 24 hours a day. Once hints were going, we had people in HQ hitting the server pretty frequently to grab new hint requests, and hinters either included somebody given the rights to "kick" the server or they'd call out for someone to do so. Ironically, the hint queue page didn't "fail quickly" but it usually either responded instantaneously or hung, so we were still able to use that as an indicator.
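The human loop we ended up running can be sketched as a tiny watchdog. This is only an illustration of the "fail quickly, then kick it" approach; the health URL and restart command are made-up placeholders, not our real setup:

```python
import subprocess
import time
import urllib.request
import urllib.error

def healthy(url: str, timeout: float = 5.0) -> bool:
    """Probe the server. A slow or erroring response counts as down."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        # 500, connection refused, or a hang past the timeout:
        # exactly the cases where a human would "kick" the server.
        return False

def watch(url: str, restart_cmd: list, interval: float = 10.0) -> None:
    """Loop forever: restart the server whenever the probe fails."""
    while True:
        if not healthy(url):
            subprocess.run(restart_cmd, check=False)
        time.sleep(interval)

# Hypothetical usage:
# watch("http://localhost:8000/healthz", ["systemctl", "restart", "huntserver"])
```

The key property is that the probe itself has a short timeout, so the watchdog can't get bogged down the way the original middleware did.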

While my teammates were trying to figure that out, we worked on a plan B. The basic idea was to open a Google Form to submit answers and do answer callbacks, after giving every team Google Doc copies of some puzzles. But this didn't work for the first round, mainly because we'd have had to find and email each team the meta-related images they were supposed to get when solving the feeder puzzles. The second round contained an interactive puzzle that there was no easy way to send out, so we sent out the third round.

We had Google Doc versions of most puzzles because of autopostprod. That's a script that automatically postprods simple puzzles made of standard content if they are provided in Google Docs, turning them into the web pages that you see. But we had to quickly find and check the Google Docs for all these puzzles, to make sure there weren't last-minute changes. Our factchecking crew handled this job in about an hour. You can thank them for having these puzzles to work on while we worked out the server issues.

Ironically, it seemed like we got the server to the form it stayed in throughout the hunt about half an hour after you guys got these puzzles. However, because teams didn't actually have this round unlocked on the server, we kept the Google Form and answer callbacks open for several hours, until we opened the round on the server for all teams. I was on the last shift of responders, and it was fun to hear that some teams still remembered their funny ways of answering the phone, such as ET Phone In Answer answering with the answer they'd last submitted in place of the "Answer" in their name, and Wafflehaüs asking to take our order.

Opening up more puzzles also had the side effect of speeding up the hunt slightly, but since everybody had lost a couple hours at the start, we felt that was fine.

The Puzzles

In this section, I describe individual puzzles roughly in the order you might have seen them during Hunt, because reading them in the order things were constructed would be too confusing. Also, this lets me make sure I don't talk about any puzzles that weren't actually released, which would be even more confusing.

For all puzzle links below, first go to https://mythstoryhunt.world/ and click Public Access to log in. After that, you should be able to navigate directly to the puzzles via their links below.

Family Tree (The Throne Room)

I knew that...

Queen Elizabeth II had corgis, but it was new to me to learn they were all related to one another. But once you made that discovery, this was easy to work out; a good first-round puzzle.

Three Really Good Boys (Throne Room Meta)

This was the most contentious meta during meta development, and it almost didn't make it. The version I tested was rather different.

At that time, the diagrams you got when solving the feeder puzzles had colored lines, matching the coloration of the members of the trios, with color mixing where the same line was used for two of them. It had the same aspect of having to match letters to locations in the diagram. Rather than the judge scores, there were missing lines, and it seemed obvious to me that we should join up the missing lines of each color to spell a three-word answer. Except that wasn't it; we were actually supposed to keep the puzzles in the order given (reinforced in the final version by forcing the first letters of the puzzles to be A-F) and read the missing letters in RGB order of the lines missing from each puzzle's diagram.

There were several other versions, both before and after that version, which used different mechanisms, some different feeder answers, and several different final answers as well. I think there was a serious too-many-cooks thing going on here, with too many different concepts being proposed for what was supposed to be an easier, early-hunt meta, and combinations of them getting put together which didn't work well or made for unintuitive steps as in the one I faced. If you were wondering why there was a color theme among the trios for the answers which wasn't used later, it was because using it made solvers want to use it more, in a deeper way that wasn't intended.

Asphodel (Rivers of the Dead)

This was another of my failed test solves. Really, the failed ones were a tiny portion of the testing sessions I was in, but they included two of the first three I found interesting enough to mention, in the order they were presented during Hunt.

We didn't have the flavor text clue that was in the final version, and my co-solver and I had never heard of the game and didn't recognize the bridge image. With the same sort of grid (a different but still irregular grid shape, with differently arranged flowers, and some different flowers) as the final puzzle, we ended up down a deep rabbit hole trying to make this into a cryptic Nurikabe which didn't lead to any quick contradictions, but also didn't lead to enough certainties to make it solvable. This isn't even the biggest BE NOISY that I encountered during this year's testing (See Obelisks of Sorrel Mountain in Oahu for that). But I bet the reason you have flowers adjacent to each other is to close that rabbit hole.

IV GUYS (Rivers of the Dead)

Somehow my group testsolved this one without ever paying any attention to the name, which we would just have noticed as

a sign with letters knocked out,

not being familiar with the meme. This is the problem with meme-based puzzles: They are usually not as widely known as authors wanting to use them in puzzles think they are. I admit that giving the meme explicitly as the title of the puzzle does make it a lot easier to find. And in this case it just didn't matter, as the puzzle was solvable without understanding there was a meme in the first place.

Why the Romans Never Invented Logic Puzzles (Rivers of the Dead)

This is the first puzzle in this review which I didn't test solve, but saw during the hunt and decided I had to try it myself afterward. The title alone is worth it. The logic is a difficult slog, though.

A King's Best Friend (The Underworld Court)

I was assigned to discussion-edit four different chess ideas various authors submitted early in hunt construction. The editors thought it was too many for the hunt, and were hoping I could pare them down. The one I liked most ran into an unexpected technical snag, so got booted out. The other three all needed some level of guidance and ultimately all got in.

This was my second choice, the only one I thought could be easy enough to be an underworld puzzle.

It was a simple fairy chess puzzle, where solvers get only indirect clues (and plenty of examples) to figure out how a Cerberus piece moves.

The puzzle stayed true to its goal, but took a long time to develop once the authors latched onto the idea of cluing a metapuzzle using FEN notation, one row per puzzle; they had difficulty coming up with a position that did what they needed, with FEN that could be constructed using characters from algebraic notation for chess moves.

Badges Badges Badges Badges Badges Badges Badges Badges Badges (The Underworld Court)

I didn't actually see this one in testing, but because it involves the badges we were handing out to all the teams, it was talked about a lot in the lead-up to hunt, in part because we were trying to figure out how many of those badges we needed to print. This was more than just adding up all the on-site registration numbers because we were giving small teams a set of 8 badges to ensure they could complete the puzzle, and wanted some spares in case some badges were lost or ruined. We had people constantly making references to badgers, mushrooms, and snakes and not needing stinking badges.

Cubo (The Underworld Court)

This was one of the last puzzles I test-solved, and it's cute.

Like some solving teams did, I was trying to make it into a Siamese Twins crossword at first, before I figured out it was a bigram crossword.

Football Team on the Marching Field (The Underworld Court)

I thought this puzzle was too long, but it was funny, and evoked memories of the "the band is on the field" game.

Marathon (The Underworld Court)

This was a case where I was called in to get a test-solve group unstuck.

They had already figured out the movies, and even the mechanism, but they had some bad matching of the items to the categories indicated by the movies. Even after I sorted them out, a couple of the matches felt weak, but I didn't really have any better ideas. My feedback included capitalization fixes so that the two-word phrases consistently had lower case on the second word, except for the proper name. I gave out some hints during hunt that amounted to data checks of this puzzle to help similarly stuck teams.

Second Helpings (The Underworld Court)

A pretty quick solve, but it took a bit to get the idea

that they were consonantcies. I think we got started with Rex Stout matching part of Our Ox Stoat with two letters changed, and as we made other near-matches, we figured out the consonantcy aspect.

Transformations (The Underworld Court)

This was a fun variation on the mangled clues puzzle, which by its nature supports many, many variations.

🤞📝🧩 (The Hole in the Ceiling of Hades)

This was one of the puzzles I found the most fun in test-solving (keeping in mind that I missed many easy-but-fun ones that just didn't need my help to get them tested) and I highly recommend you try this one before spoiling yourself. 2 or 3 people with voice chat to brainstorm what's going on is recommended.

We got started in the middle, with Deep Blue Sea a solid starting point to figure out that the 7 extra squares beyond the emoji bank needed to be filled in with letters. We had HIC, and I found a lot of 7-letter words with those letters. Then we finally figured out the one about ET was supposed to be "ET phone home," and then I didn't need a word list and called in the answer. 

Bringing Down the House (The Hole in the Ceiling of Hades)

I had the idea from the flavor text that this one was about

the MIT Blackjack Team even before we looked up the title and found it was the title of a book about the team. Reading about the team led us to the knowledge that they used a set of numerical signal words, and once we found those it was practically done.

Cards (The Hole in the Ceiling of Hades)

This was a puzzle that came out of brainstorming at the New York retreat which I implemented. You probably missed it, since it was an easy one that whoever opened it first probably figured out quickly, so go take a look.

Green Logic (The Hole in the Ceiling of Hades)

This was my only other puzzle in this round. I apologize if it ended up too long and difficult for the round. The idea was really meant for an early overworld round, but the editors didn't have any good answers for me. On top of this, the answer was hard to locate; the editors and I both missed that the answer isn't in a lot of word lists and the anagram I tried to clue is nontrivial. The puzzle was still nerfed from what it might have been in a later round. An outline for such a puzzle included six of each variable instead of five, and made it a puzzle not just to match the total emissions scores to the companies, but to determine them from even more clues.

Proof It! (The Hole in the Ceiling of Hades)

This one starts out with an ordinary crossword puzzle, so if that's your thing, it's a good one to try.

I enjoyed it during test-solving. During hunt I hinted a number of teams who were not looking at it the right way and were thinking of words like SLANT and UNION; I told them to find longer words within the math theme.

Poor Seth, though, had to redo the whole puzzle because he had misspelled one of the subjects TRIGOMETRY in the version I tested (and somehow the fact-checker missed this).

The Best IT (The Hole in the Ceiling of Hades)

A straightforward enough puzzle, but (without spoiling it) I like how it works.

Colorful Connections (Minneapolis-Saint Paul, MN)

I enjoyed this one. (An apology for some solvers is in the spoiler.)

It was called Making Connections when I started testing it, and while I got the Connect Four connection pretty quickly, we didn't figure out the mechanism to change letters in the clue answers to red or black until the title got changed to what you saw.

A lot of people seemed to like this one, but we got feedback from a couple people saying they got the color replacements, but didn't think it could be Connect Four because that game has been played with red and yellow pieces for a long time. Some research shows the first Milton Bradley versions with red and yellow pieces debuted in some countries as early as 2007 and in the US in 2009. Oops! Definite generation gap problem. I played it with red and black and the authors are in that generation as well. Same for the hint writers. Sorry, Gen Z red-and-yellow Connect Four players.

Retro Chess Puzzle (Minneapolis-Saint Paul, MN)

This was another chess puzzle I discussion-edited.

Andrey Gorlin linked to a set of chess rebus puzzles he had seen elsewhere (basically cryptograms where the identities of chess pieces were hidden and you had to do retrograde analysis to figure out which piece was which, as makes up part of this puzzle). After skimming through the solutions, I saw that a massive variance in difficulty was possible. You could make one of these puzzles solvable in a few minutes, and you could make one that took hours and hours to solve and several pages just to explain why the solution was what it was. So my main advice was to keep the puzzles on the easier side, because a Mystery Hunt puzzle would likely involve solving several chess puzzles to give solvers enough fodder to extract an answer from.

He took the puzzle into a domain I was completely unfamiliar with, so I mostly just observed once the actual construction began. He needed to nerf things at times, but delivered a solvable puzzle in the end.

Triangles (Minneapolis-Saint Paul, MN)

I was in a long, difficult test-solve session for this one. Of course, almost everything I can say about it is spoily.

Understanding the rule roll frequencies was simple enough, once you collected the data. We had a lot of trouble figuring out certain groups for assembling the die, and a few of the numbered spaces. More of a problem was matching the D&D rules. Several clues were changed as a result of trivia we unearthed, and we assumed we'd use AD&D 1, 2, 3, 4, 5 before the list of editions above the rule roller was added.

Finally, we had trouble seeing some of the letters we were supposed to be forming. At least one of those was changed to be a clearer letter as well.

A Routine Cryptic (Yellowstone, WY)

Haha, it is pretty routine, in that it has a mild gimmick that is easy to figure out. But it's also huge. If you love cryptics, this is a good one to keep you busy for a while, and if you hate cryptics, go find another puzzle. This was the second puzzle (after the Romans one) that I went back to solve after hunt was over - not an entirely clean solve, since I'd seen the answers once, but it was months earlier, and I didn't solve most of the clues myself in testing and didn't remember much.

I joined a test-solve team that had already solved all but a few of the cryptic clues, so I missed that particular joy.

They had the messages from the extra letters too, and knew that The Naughty Nineties had Who's on First in it. They were struggling to figure out what to do with that. I was the one who found and pointed out to them the giant diamond filled with Who's on First characters.

Arts and Crafts (Yellowstone, WY)

This was one of the earliest non-meta puzzles test-solved, and I was among the early testers.

The nonogram wasn't too difficult. But then what? Knowing there were Minecraft symbols in it, but not actually having played Minecraft, my first thought was that the ^ and v symbols meant to go up or down a level. Later, with more insight as to what Minecraft patterns looked like and in particular the placement of the lamp cells, we figured out they were origami fold lines, and we got the real 3D structure.

Then we had to try to figure out how the redstone logic worked. This was complicated by the fact that the version we started with had peashooters from Plants vs. Zombies instead of the redstone repeaters. I am not sure why that happened; maybe the author was learning more Minecraft along with us.

Eventually we did figure out how the redstone circuit was supposed to work and solved the puzzle properly, with some hinting that included linking us to a redstone simulator.

Greek Girl Squad (Yellowstone, WY)

As a Strongbad fan from back in the day, this video brought back memories. You don't actually need any knowledge from anything on homestarrunner.com to solve it, though, and it's a fun little puzzle, if your idea of fun allows a little comical cartoon death.

The 10,000 Commit Git Repository (Yellowstone, WY)

It looks like there's a story I have told people but failed to blog about after the 2016 hunt, so first, a digression for that story. There was a puzzle called World's Longest Diagramless. The gist of the story is that as a test-solver for this puzzle, they made me write a program to cheat at crossword puzzles. Details in the spoiler block.

The puzzle is posed as a giant diagramless crossword for which even the clue numbers are withheld. At the top and bottom it behaves like a normal diagramless. After a bit, though, it turns into a list of alphabetical words. Not knowing what we were looking for, I downloaded a copy of Matt Ginsberg's crossword clue database, wrote a little Python to pull the clues and answers out of the simple null-delimited database format, and then wrote more code to look up the answer for every clue that appears with exactly one answer in the database as an assumed correct answer. Of course, this didn't catch all the words, but it got enough. And we had enough data to figure out what we were looking for.
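The lookup step can be sketched roughly like this. The alternating NUL-delimited clue/answer layout below is my simplification of the real database's format, and the sample data is invented:

```python
from collections import defaultdict

def unique_answers(raw: bytes) -> dict:
    """From NUL-delimited clue/answer pairs, keep only the clues
    that map to exactly one answer across the whole database."""
    fields = raw.rstrip(b"\0").split(b"\0")
    answers_by_clue = defaultdict(set)
    for clue, answer in zip(fields[0::2], fields[1::2]):
        answers_by_clue[clue.decode()].add(answer.decode())
    # A clue with a single known answer is treated as solved.
    return {c: a.pop() for c, a in answers_by_clue.items() if len(a) == 1}

sample = b"Feline pet\0CAT\0Feline pet\0TABBY\0Toward the stern\0AFT\0"
print(unique_answers(sample))  # {'Toward the stern': 'AFT'}
```

Ambiguous clues ("Feline pet" above) simply drop out, which is fine when you only need enough confirmed entries to see the pattern.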

It turns out it's related to "Yakko Warner Sings All the Words in the English Language," one of several Yakko Warner music videos from the Animaniacs cartoon. The first and last alphabetical segments are the words he sings. In between, it just had all the otherwise unused words from some word list that there were existing crossword clues for. Of course, the video skips over most of the words, but there is a segment in the Ls that he also sings, and those words are also given together near where they belong alphabetically, and the answer was hidden as an acrostic in the clues at that point.

Now, back to the Git puzzle. This was this year's puzzle that made me learn to cheat in a new way. The puzzle claims to be a Git repository in which the commits have gotten into the wrong order. Now before we proceed, I should explain my background. I know a little about Git, but I'm no expert. I've done basic operations. Updated my local repo and sent pull requests. That sort of thing. Now on to the spoilery content.

At first glance, I thought it wanted me to learn how to use git rebase to patch the commits back into the right order. But when I sat down to solve the puzzle for real, I said to myself, why not just make a new repo and shove all the commits into it in the right order? I just needed to understand what needed to go in.

git log -p showed me the history of the repo in a readable form, including the commit hash. And they gave us the hash of the first commit so I could verify I was doing it right. Setting the file contents was easy. I figured out how to set the user name and email, and how to set the date within the git commit command. After it didn't work, I found a document (much like the one the solution links to) which explains there is a second committer date that also goes into the hash. This was more difficult to set, and the page I found describing it suggested it couldn't be done with a command line argument and had to go via an environment variable. In other words, probably something people don't ordinarily do.
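The reason the committer date matters is that a commit's hash is just the SHA-1 of the full commit object, which carries both an author line and a committer line. A minimal sketch of that computation, using git's well-known empty-tree hash and made-up identities and dates:

```python
import hashlib

def commit_hash(tree, parents, author, committer, message):
    """SHA-1 of a git commit object: header 'commit <size>\\0' plus body."""
    lines = [f"tree {tree}"]
    lines += [f"parent {p}" for p in parents]
    lines.append(f"author {author}")
    lines.append(f"committer {committer}")
    body = "\n".join(lines) + "\n\n" + message
    data = body.encode()
    return hashlib.sha1(b"commit %d\x00" % len(data) + data).hexdigest()

# git's hash for the empty tree; identities/dates below are made up.
tree = "4b825dc642cb6eb9a060e54bf8d69288fbee4904"
a = commit_hash(tree, [], "A U Thor <a@example.com> 1700000000 +0000",
                "A U Thor <a@example.com> 1700000000 +0000", "initial")
b = commit_hash(tree, [], "A U Thor <a@example.com> 1700000000 +0000",
                "A U Thor <a@example.com> 1700000001 +0000", "initial")
print(a != b)  # True: changing only the committer date changes the hash
```

This is why forging a history requires controlling both dates; git exposes the second one through the GIT_COMMITTER_DATE environment variable rather than a command-line flag.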

Once I confirmed I had the first hash right, I programmatically did all the rest. I'm sure this was great confirmation to the author and editors that somebody who only knows a little about git and looks up the rest can figure it out. And I hope those of you who tried this puzzle during hunt figured it out too. And that's how I learned a new skill: How to forge a completely fake git history. I hope I never have to do that again.

This Space Intentionally Left... Well, You Know (Yellowstone, WY)

Once upon a time there was a team called This Space Intentionally Left Blank. For one of the BAPHLs my team ran, we actually had two teams with this name, the traditional one and a new team who didn't know their name was already taken. And we didn't have a system that forced the names to be unique. The new team had entered in the beginner division that gets extra hints, so we used the division names to distinguish them.

I'm sure they (the first Blank team) are still around somewhere, though I'm not sure which team they are playing on now. This year we had a team whose name was seven zero-width spaces. Or just one. Not that it makes much difference.

Oh yeah, this puzzle. The puzzle tells you there's no puzzle content in the page, and it's all in a Google doc... which also looks blank. It's a fun concept which I didn't get to test-solve.

Transcendental Algebra (Yellowstone, WY)

I didn't test-solve this puzzle but experienced it during hunt through the interaction (which reveals the answer you get from solving the given puzzle, so is in a spoiler block).

The puzzle solves to the answer HOLD BILINGUAL DIALOG. When they submitted their request, we called teams and spoke to them in some other language, which was commonly Spanish, Chinese, or Japanese. Galactic Trendsetters got a call in Puflantu!

Turing Tar-pit (Yellowstone, WY)

This was a fun one. The puzzle changed a bit from when I test-solved it; there was an additional slide, some slides were different, and the final answer was different.

My solving partner got that this was about esolangs before I even had a chance to look at it.

We didn't have the punctuation in our output, and we wanted to apply what were originally triples of numbers as epigram, word, and letter indices, as some book codes do, but it didn't work. It took us longer to figure out that we wanted to extract more than one letter per epigram.

The version we got, rather than just looking like slides from a Powerpoint document, was actually in a Powerpoint document, and since we got stuck, we were considering whether there was another esolang to find. In wondering whether Powerpoint itself could host an esolang, we found this, which we were easily able to rule out, since our Powerpoint did not have these features, but it was funny nevertheless.

Another weird bit is that when you look for the "Epigrams on Programming," Wikipedia links to two archived copies of it: A PDF from the original publication, and a copy of the author's web page in the Wayback Machine. The web page version misspells the word Optimization as Optimiziation in epigram 21, and of course the version of the puzzle we received used that one and we had to guess whether to treat the misspelling in the source material as canon. (It was easier to decide to ignore it when we saw the other copy without it, but we also advised just not using this epigram, which they did.)

The entire Hell, MI Round

I proposed this meta a bit later on, after several metas had already passed testing and were having feeders written, but it ended up being one of the first rounds solvers saw once they arrived in the overworld. Team leadership put out a call for metas with unorthodox structure, worried that a feature which has made a lot of Mystery Hunts (especially recent ones) memorable was missing from our hunt. This was my response. It's clear that a whole lot of you loved it. The Text Adventure seems to have been the most loved, and that wasn't mine, but maybe I get some partial credit for commissioning a teammate to write a text adventure for me with certain details and letting him go wild on the rest!

At least a few people were stunned by the possibility that this could be constructed, so I've made a lengthy writeup about how it was done. I wanted to say this at wrapup, but they only gave me two minutes to speak, so I couldn't do it justice. (Those two minutes start at about 14:50.)

The inspiration was the Szilassi Polyhedron. This bizarre shape is a polyhedron with 7 faces, each bordering all the other faces. The seeming impossibility is made possible by the shape having a hole through it, and most faces being concave.

I used the shape for a puzzle in the 2018 Caltech hunt, but didn't really figure out how to make good use of it then, and ended up just making a lame maze. Once you cut out and piece together the shape, and then solve the maze, it forms a letter on each face.

This time I knew how I wanted to do it: Seven puzzles, each with six answers. None of the puzzles are independently solvable, and the answers are shared by pairs of puzzles which contribute information toward each answer, so there are actually 21 different answers. The scale was reasonable, on par with other multi-answer rounds like the Zelda round (9 puzzles with 3 answers each = 27) and the Orbital Nexus round (8 puzzles, each with 4 versions that switch with the phases of the moon = 32). It also was a bit like the Reverse Dimension (18 puzzles which had to be paired up to make 9 answers). And yet, it was significantly different from all of those.

Since I was putting a puzzle on each face, and an answer on each edge, the natural extraction was to put a letter on each vertex, and I chose the mechanism that the three answers meeting there would have only one letter that appeared in all three. This gave me a good flexibility to come up with answers, and it let me impose the second constraint that every two answers meeting at a vertex should have at least two letters in common, so there is still a reason for solvers to keep solving even when they have many answers already.

Another issue that came up at this point was how to read those letters. The problem with Szilassi is there is no obvious order to read the letters. There is more than one Hamiltonian cycle; some site told me there are 24 Hamiltonian loops (each with 14 starting points and two directions in which to try to read the answer), and more orders if you don't force it to be a loop. But the seven-colored torus has equivalent topology, and has the advantage that a common representation of it has all the vertices in a circle. Googling seven-colored torus gives you multiple images like this, including somebody selling 7-colored torus pillows in this pattern. By doing this, there was one obvious loop to read the letters in. Solvers had to figure out the starting point and direction still, but that's a reasonable puzzle. And I decided I was going to give solvers the plain side of the torus (with just seven diagonal bands) and suggest they turn the torus over and imagine what it must look like on the other side, if we could get it through testing that way.
But there was a gap to fill still, a Grand Canyon-sized gap:
The gap was the difference between saying every pair of puzzles is going to combine to generate an answer and writing seven puzzles that work together so intricately. So in addition to proposing this mechanism and a set of answers, I proposed a full set of puzzles for the round and the way the interactions between them were going to work.

First was the Blanks puzzle. The point of this puzzle was to help solvers understand these weren't regular puzzles right up front by giving a puzzle which clearly didn't have enough information. There was just a set of blanks with 5 of them marked. Each puzzle generates or has hidden within it one phrase with words of the right lengths to fit on the blanks and extract a five-letter answer.

Next up, I chose three grid puzzles of the same size, the common 15x15 size for crossword grids. One of these would in fact be a crossword, one a word search, and one an Akari. The lights of the Akari overlay on the other two puzzles to extract clue phrases, and leftover letters in the word search would tell how to get an answer when overlaying it on the crossword. At this point, I thought the Akari wouldn't have any letters, so the Blanks phrase was going to be given in its flavor text. The crossword would have a clue, and the word search could have more of the unused letters.

The fifth puzzle I added was going to be a Matchmaker. I adopted this name from one Mike Selinker used in Puzzlecraft to describe this puzzle when I put it into my index. It's the one with clues with dots next to them which you have to pair up, draw lines between them, and look at what the lines cross, or what they don't cross. By making a really overloaded Matchmaker, I could make it extract six different phrases, one to clue an answer with each other puzzle. It could use one of the crossword answers, one of the word search words, and provide a phrase to put on the blanks. I wasn't quite sure how it was going to work with the Akari.

Nathan Fung suggested a technical puzzle, for which I chose a chemistry puzzle, and an interactive one, for which I chose one of his suggestions, a text adventure. I had never written one myself, but I knew we had people who had written text adventures in Inform 7, and figured I'd farm it out to one of them to construct. The chemistry puzzle could have clues pointing to specific chemicals, and a phrase from marked blanks in the names of missing chemicals to point to another puzzle, and the text adventure could have clues coming in that give easter-egg-like hidden commands and give out phrases for other puzzles when you complete whatever activities the adventure involves.
As I started looking for a metapuzzle answer, I noticed some other constraints the mechanism I had proposed put on me:
Certain combinations of positions could not have the same letter, because making the answers leading to those letters all have one letter in common would force another letter to also have that letter in all its answers, even if I didn't want it there. I developed a list of such combinations and wrote a program to help me test answers for problems of this sort. Eventually I decided on an answer and got editorial approval for it.
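My actual checker is lost to time, but the two vertex constraints described above can be sketched roughly like this, with hypothetical answer words (not ones from the round):

```python
# Sketch of the per-vertex constraints: the three answers meeting at a
# vertex share exactly one letter, and every pair of them shares at
# least two letters (so partial solves still extract ambiguously).
from itertools import combinations

def vertex_ok(a1, a2, a3):
    """True if the three answers satisfy both vertex constraints."""
    common = set(a1) & set(a2) & set(a3)
    pairs_ok = all(len(set(x) & set(y)) >= 2
                   for x, y in combinations((a1, a2, a3), 2))
    return len(common) == 1 and pairs_ok

# Hypothetical answers meeting at a vertex:
print(vertex_ok("STEAM", "SEVEN", "TENOR"))  # True; all three share only E
```

A real version of the checker would also have to test the global condition mentioned above: that forcing one letter into all three answers at one vertex doesn't accidentally force a second all-three letter at a neighboring vertex.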

I actually wrote the Blanks puzzle first, before the meta was even tested, to ensure I had a set of answer words for Blanks which could be extracted from some sort of plausible phrases, with one crossword clue, while the others would be extracted from the solutions or flavor text of puzzles and could be somewhat contrived. I placed the puzzles in the grid as I chose answers, putting a bunch of chemistry-related answers around one region for that puzzle, and then working to meet the constraints I had set for myself in the other answers while choosing sensible and interesting answers.

Testing was a bit awkward, because we had to figure out what information to give testers. It was during this phase that we decided on the idea that solving all answers for a puzzle was going to give solvers the color in the torus that represented that puzzle. Finally, we got two clean test-solves... And then an editor wanted to check that the answer wasn't too brute-forceable.

The answer I had at this point was DISHARMONIZING, a pun on DIS, another name of Pluto, saying that he harmonized the groups and kept them from fighting. But it was also a real word and in word lists, and brute force tests on 11 answers (everything from two puzzles) and 10 answers (some other combination) showed just two answers for the 11, and a handful of answers from which the real one could reasonably be picked for the 10. My editors were too worried that someone was going to plug an expression into Nutrimatic, or more likely 28 of them for all possible starting points and directions, and get the meta answer with only 6 or 7 answers.

One of my teammates came up with another answer that was deemed both sufficiently punny and sufficiently non-nutrimatic-able, and I redid the whole thing, including a completely new set of phrases for the Blanks puzzle, and more clean solves.

Finally, we were ready to start constructing. I had planned the order for construction as well: Blanks was already done, Akari was next (which I got onto immediately and had done in a couple of days), Crossword next (which Craig handled), and Word Search next (me again). Text Adventure and Chemistry were next, and this happened to be when we had scheduled the in-person retreat. Craig found me Linus Hamilton, who had a personal goal of making sure there was a text adventure in this hunt. He said writing it himself was an acceptable way of meeting the goal. I had planned to write the chemistry myself, but I met Alina Khankin during the retreat, who was eager to write a chemistry puzzle, so I gave her this one and put her in touch with Linus to coordinate the message linking those two puzzles.

And finally, I wrote the matchmaker to give all the messages left over needing to point into all the other puzzles. The erratum for this puzzle came about because Linus gave me the message using UP, but implemented the command without that word. I am not sure how it got through two full-round regular test solves, fact checking, postprod checking, etc. without anybody picking up on the problem.

Casino Royale (Las Vegas, NV)

This was Charles' idea, but he asked me to build the map, knowing that getting the scale and detail right was going to be a problem. He designed the concept and the extraction cluephrase, which was revised due to testing, so I had to do the map twice. Writing a good cluephrase was a bit of a challenge, because this data set and mechanism rather limited the possible phrases.

Circus Circus (Las Vegas, NV)

This puzzle was not my idea, but I necessarily got involved.

It's pretty much in the same form as I first saw it, with only minor refinements. They just needed me to not make any updates to my index before hunt, save for adding this puzzle to the index to make the first diagram true. So I had the update staged and ready to post weeks in advance, and posted it Thursday night. If some of you saw it in advance, good for you! It probably didn't help much, since when you got the puzzle content it linked back to the index.

This puzzle involved information that will change over time. An archived version of the information at the time of the hunt can be found in the next spoiler box.

The keyword list and the authors list. Specific pages you can reach from links from those pages are also preserved; everything else will error out.

Encore (Las Vegas, NV)

This puzzle got nerfed significantly.

In the original puzzle, there were only a handful of picots. We were supposed to recognize letters like in the final puzzle, but we were supposed to shift those letters by the number of picots in each one, a step which was entirely unclued. Because nobody figured this out, and testers also had trouble recognizing the letters, the design was updated to draw the answer directly, with letters that held together better since there were no longer any restrictions on picots.

Flamingo (Las Vegas, NV)

Highly recommended for logic puzzle fans. I test-solved this in pretty much this exact form.

Harrah's (Las Vegas, NV)

I didn't see this puzzle in testing.

But I'm sure I would have gotten the song!

Shade of Wealth (Las Vegas Meta)

The first meta I proposed, and one of the first ones to get through testing, was one based on...

the Mexican bingo-like game Lotería. Though I am not Hispanic, I grew up in south Texas and experienced many elements of Hispanic and especially Mexican culture, and this was a subject I had long considered a candidate for a puzzle but hadn't ever really developed.

It wasn't a smooth process; my first proposal had problems, and somehow I ended up with three different editors making contradictory suggestions about how to fix the meta. This was a pain; it meant I wasn't going to please everybody, but I just picked ideas one at a time and tried them. We went through a bunch of different ideas on how you could "play Lotería" on the board (which was going to have pictures of the 16 casinos laid out like a board on the round page, filling in the answers at the bottom of the images like the names on a Lotería card when the puzzles were solved) but we just couldn't come up with anything that worked. Just picking a few lines to play in meant some answers wouldn't get used at all, which was unsatisfying, and trying to play in all the lines (which is more than you think; an animation here shows all the traditional ways to win) left it so overconstrained we couldn't make anything work. Eventually I picked one of the other ideas another editor had suggested and went with that once we got something decent out of it.

A second round of editorial upheaval this puzzle went through was related to a few images in the 19th-century Lotería card set that has become standard, images some people might consider culturally insensitive by today's standards. I was aware of those (they include a drunkard and a black man) and I had already marked them as not to be used in any way in the puzzle, but teammates were worried about even potentially subjecting solvers to these images by making them search for the game. Someone suggested alternate, modern Lotería games which have been published, but there were too many different ones, some without good sources to identify all the cards. Instead, we identified what Google had done when they had used the game in a Google Doodle, which was to keep the traditional cards but replace all potentially objectionable cards with safe images, including some Mexican cultural references they thought should have been in the game, such as the Mexican hairless dog. We clued the game that way, to at least provide a way for solvers to get in which wouldn't show those images. The final version of the meta used neither the cards Google removed nor their replacements, letting it work with whichever list you used.

The Other Scottish Play (Everglades, FL)

It's hard to say much about this puzzle that isn't spoily, but I'll say it's a very deep puzzle and not the sort of thing to attempt over your coffee break.

My testing group identified it was about Sleep No More pretty quickly. There are lots of hints for that, and we found even more later on, such as the fact that the image with the icons above the door at the start of the puzzle depicts the actual door to the New York location for Sleep No More, to the extent of locating it on Street View and showing that the star shape and pineapple are there.

Before I go any further, I want to point out that solving this puzzle may subject you to spoilers for Sleep No More. When we tested this we thought that wasn't much of a concern because its run was supposed to be ending shortly after the hunt. Now I see it's been extended to the end of March. But I'm not spoiling it in any major way here.

The first level of solving this puzzle was to identify the scenes. There's kind of a hush-hush attitude toward talking about Sleep No More on the Internet, but there are a couple sites with a lot of details. This both funneled us into those sites to get the info and let us know the info we need from the play isn't too in-depth because what's available is limited. In any case, we figured out the scenes and the main character identified as SOMEONE in each description.

It took longer for us to figure out what to do next, and we explored every possible aspect of the play we could get from the Internet. Eventually we got updates to the flavor text that made the idea of looking for hidden items clearer to get us to the answer.

Monsters (Everglades, FL)

This was a puzzle I came up with during the NY retreat.

Daniel Kramarsky was trying to develop some other kind of puzzle based on 3x3 grids, but his puzzle concept was too loose, with each grid working differently, and I worked with him a while before abandoning the idea. But the 3x3 concept made me think of the D&D-style alignment grids for all sorts of things not related to D&D which were a popular meme in 2023, including one for crossword grid symmetry. This led to the puzzle you got.

Realize It (Everglades, FL)

I test-solved this puzzle, but...

Before I ever managed to get the grid filled, I figured out the rules were equivalent to an allowable sequence and did the right searches to find that term. So I was confused for a bit when the grid extracted to ALLOWABLE SEQUENCE. But not for long, as we were given the diagram of the square and I realized this meant to fill it in.

The Champion (Everglades, FL)

This was a grueling long test-solve due to the number of false matches we kept finding. A bunch of those clues got rewritten and rewritten again. The casting costs weren't originally given.

There were false matches to irrelevant Magic cards, and false matches to unintended Iliad scenes. The casting costs helped us eliminate some of the false matches to Magic cards which are hard to avoid simply because of how many cards there are today. A bunch of cards with one- or two-word names showed up by accident. Different cluing, and getting enough of them correct to understand that we were supposed to have all the clues in the first round of the tournament reference Iliad scenes before any of the ones in the next round, and so on, helped us nail down the Iliad references.

Finally, there was a discrepancy between playing the cards to help that tournament participant (possibly on his opponent) and playing them on that participant. They hadn't intended to have Disenchant in there at all, but another enchantment-killing card was accidentally mentioned on the opponent of the one enchantment among the Theriad cards (for one of the ways we were wrongly lining up tournament contenders in one version of the puzzle), and this made me think "sure, that's how we deal with a non-creature being in the tournament" when it was supposed to just be an automatic loss for that card. In the various revisions we ended up with Disenchant being in that card's clue to help point out that you are supposed to cast it on the competitor whose clue had the card.

Reflections (Everglades, FL)

Originally this puzzle was supposed to have some Chinese writing on the right side, but somebody thought using the Chinese language that way was culturally insensitive, so when I was called in to rescue this puzzle, I drew the thing that actually appears on the right.

Climbfinder (Mississippi River)

My testing group never figured out one key aspect of this puzzle.

The part where the routes are ones from famous bicycle races. We found most of them the hard way, by searching for routes on the site of the right length and in the right country, and anagrammed extracted letters within each section to figure out the answer.

I Know Who I Want to Take Me Home (Mississippi River)

I didn't see this one during test-solving, but I think it's a good one.

It's Not a Well Written Quizbowl Packet, But... (Mississippi River)

I joined a test-solve already in progress to get them unstuck. This spoiler comes with a bit of an ICK warning too.

They had figured out the answers that fit the full quiz bowl questions, and some bonus answers, but that's it. At some point after I joined, one of them noted the three-sentence format of each question, and we started considering them separately. It was when I got down to the sentence that reads merely "This figure fathered Aphrodite" that I got my aha. Aphrodite has two origin stories! A quick lookup confirmed: Zeus fathered Aphrodite by Dione per The Iliad, and the other more gruesome origin is that, when Cronus overthrew Uranus, he castrated him and threw his genitals into the sea. Aphrodite sprang forth from them and by some accounts rode a shell to the shore, as depicted in Botticelli's The Birth of Venus. A little further checking confirmed that Uranus is considered her father in this myth. Good thing we didn't need to know her mother!

We went on, gradually beginning to find non-Greek myths fitting the clues, and realized the flags gave us the locations the myths are from. I also contributed that the answers to each bonus should be sorted alphabetically, confirmed by getting one selected bonus answer to match each tossup answer when we did it this way. Finding some of the more obscure myths was the hardest part, but the flags guided us into at least looking at the correct cultures. It was after I found Teshub, the Hurrian storm god who was tricked into eating a rock, that we had enough letters to read out the clue phrase.

Long Strange Tour (Mississippi River)

I got called into a test-solve on this to add my expertise.

They had figured out the art was Grateful Dead-related, and wanted somebody who knew their music. I am not the biggest Deadhead by any means, but I was who they found. After listening to all the music samples, I told them these aren't actually specific Grateful Dead songs; it's when they are jamming, for example at the bridge of a song in concert. It sounds like Dead music because it uses musical phrases from their songs, but sometimes they are mixed together. Where I was able to identify them, I pointed out examples. And I likewise told them I had no idea where I'd find them, and even if I had the recordings of those concerts, I didn't have the time to go listen through them all for matches. And I was on so many other test-solves that I really meant that.

What I said was exactly right - both that they weren't the usual songs and that it was going to be nearly impossible to find them. They eventually did find the song clips, after being given the song titles they occurred in and the concert links. But they failed to find the posters, and so couldn't do anything with that, and eventually gave up. Much more explicit clues to Relisten and PosterDrop were added in later versions to help teams have a chance, along with clips that have some lyrics to make it easier both to identify which songs they are supposed to be and to give a chance of finding where they occur.

Model Scientists (Mississippi River)

In this test solve...

We figured out the scientists clued were all women, and simultaneously thought "Hey, that's cool to have a women in science puzzle" and "Geez, the fact that women have often gotten short shrift for their achievements in science makes these people harder to look up." We did find some web sites devoted to women in science that helped find some of our answers.

The Hermit Crab (Mississippi River)

This was one of the puzzles I discussion edited.

I helped this one out by using my knowledge of past hunts (and my index) to find a bunch of shells for the hermit crab. And then I was pretty much hands off after that.

This Puzzle Keeps Turning Upside Down And We Can't Figure Out Why (Mississippi River)

Oh, Barney...  I didn't see this one in testing but the joke is hilarious.

I don't know how we found Barney Rafter, but he's a real Australian who came to MIT all the way from his home country for the Hunt. He played Charon in the opening skit and also was the obviously Australian voice in the safety video. He turned out to be a fun guy and also fully willing to make jokes about his country being Down Under, both by representing it as the Underworld in the opening, and by making this puzzle joking about it being upside-down, or Australians viewing the world with south up, or however you choose to interpret it.

Befouled Spellings (Newport, RI)

This was another puzzle I discussion-edited.

Originally, this was going to be an app that let you enter words like NYT Spelling Bee, with the space for the key word in each puzzle highlighted and all the words listed alphabetically to help you narrow in on it. There were two different issues. First, the words we could use were pretty limited because of the need to have them in real NYT Spelling Bee puzzles, which never use S and can only have 7 different letters. At one point I helped the first author eliminate a word on his list because it had 8 different letters, and it was because of this we realized the puzzle couldn't make the answer it had been assigned and we had to ask for something else.

No other suitable answer showed up, so editors wanted to give this puzzle the almost-thematic answer that was available and find a different way to make the puzzle work which could get there. (The original extraction was, after finding the winning words in the interactive puzzles, to find NYT Spelling Bee puzzles which could spell the runner-up words, and use the gold letters from those puzzles, with given months of the NYT Spelling Bee puzzles to provide the ordering.) And this is when the second issue showed up; the author had become unavailable, and we didn't have the tech resources to go in and write the app.

We called in Jeremy Conner to rescue the puzzle, turning it into a non-interactive version with a different extraction mechanism which was true to the original concept but still quite limited. He was forced to use a weird bee from 1928, which didn't follow modern rules and is misreported in places, as the only way to get the letter B; we couldn't come up with a viable clue phrase that didn't involve the letter B.

One team complained that the clue phrase was ambiguous, but E.W. Scripps' Wikipedia page reports only one half-brother. They must not have made the connection that E.W. is the person who the spelling bee you've been researching is named for.

Cedar Gardens (Newport, RI)

This puzzle was improved significantly due to my test-solving: Several tree images had details added to make them more identifiable, the flavor text got more cluing, and the tree collage at the bottom was originally so badly pasted together that it was impossible to determine what some of the trees were supposed to be.

Julia and Friends (Newport, RI)

This puzzle's very close to the way it was when I test-solved it. The flavor text was changed, and several captions were changed, including one duplicated photo caption which was just plain wrong...

I recognized several of the celebrities, but there were a lot I didn't, so I fed them all into Google. When I had the entire list, I noticed that they all seemed to be from a certain era. Roughly speaking, it was my parents' generation (and recall that this was my 25th Mystery Hunt, so I'm in the parents' generation of today's MIT students).

But it was likely something more specific than that. I decided to do the Mystery Hunter's equivalent of the old "I'm Feeling Lucky" button on Google: I pasted the entire list of names into a single Google search. And that gets you to The Muppet Show. So we were able to get on to the part about names of muppets.

The version I tested used muppets from all sources, including Fraggle Rock and some more obscure sources. Our difficulty in finding them led to it being redone using only Muppet Show and Sesame Street muppets.

Marathon Block Pushing Game (Newport, RI)

I joined a testing session in progress for this one. My partner had already figured out the basic idea, but I pointed him to the logical conclusion.

He knew how to solve each room, and how the rooms connected in a binary, exponential structure. So he knew the puzzle took more steps to solve than you could ever hope to enter into it programmatically. I had experienced some puzzles like this, and said, "count the steps." And he's like, "really?" But I helped him count the steps for each part and we got to the answer.

Najaf to... (Newport, RI)

This puzzle is largely the way it was when I tested it, but several clues got minor tweaks. The two just-plain-wrong ones: the "most populous city of some country" was wrongly called its capital in the version I first saw, and the one that specified a number of degrees was tweaked to be more accurate.

T (Counts) for Two (Newport, RI)

This puzzle was pretty similar when I tested it. Some numbers got tweaked as a result of things we found. Even more numbers got tweaked that we missed. It was that kind of puzzle.

Of course the fox and dog one got us started. I was reminded of the "it finally happened" video. Eventually we did the right search to find one of the other pangrams and its English translation. Some of the numbers were wrong because of math errors, some due to mistaking letters with diacritical marks that make them count as different letters in their languages' Scrabble sets.

Persephone (Newport, RI Meta)

The big circle of arrows around the grid in this puzzle was added as a direct result of my feedback. There were some other minor tweaks as well. It wasn't too hard once we figured out what to do, but figuring that out was too difficult without a couple fixes.

Machines (Nashville, TN)

I enjoyed this puzzle, which was pretty much the same when I tested it as when you got it. Just the fun exercise in spreadsheet hacking it looks like.

Split the Difference (Nashville, TN)

Yet another case where I was called in to get test-solving teammates unstuck. They had already mostly filled the grid, and...

Identified VERLAN and L'ENVERS as answers, and found some of the English words that could be verlanned. They needed help finding more of these and throwing out candidates which worked poorly.

We were trying to apply "difference" to the difference between the sums of the clue halves for the original and verlanned words, but we had enough wrong ones in there that we didn't immediately identify the shared half-clue in each of these pairs. Weeding out the wrong ones (which usually didn't work phonetically, didn't fit the supposed clue match, or both) and finding new matches eventually made this apparent.

But in addition, we had one good but unintended match (the word wasn't supposed to have a verlanned version), but it worked phonetically and had a potential clue, though that clue didn't share the numerical qualities of the others.

I also saw my teammates had had trouble with a numbering issue which led them to reject a correct clue answer. One of the numbers appeared twice in the grid, but since this wasn't a case where the across and down entries started at the same number, they had rejected the match earlier. Revisions to the puzzle changed several answers to fix the red herring verlan and also fix the numbering so no number appeared twice in the grid.

Subplutonean IHTFP Blues (Nashville, TN)

I didn't see this puzzle in testing. But I love it.

The song should be readily identifiable to American solvers, at least, even though it's almost 60 years old. Some may get it from the title, and some from the posterboard photos, which make an homage to the iconic music video that is well known in its own right. It is, in fact, one of the very first music videos, and perhaps the first that didn't just show the band performing or people dancing to the music of the song. Some may recognize it from Weird Al's spoof "Bob" in which the lyrics sung and shown on the cards are palindromes.

But the puzzle uses an element of this classic video that has lain in wait all these years from before the start of puzzle hunts: The words on the cards include PIG PEN and WRITE BRAILLE, two alternative alphabets puzzle writers love to use as ciphers. Bob Dylan was yet again ahead of his time by including these words in his lyrics!

Under Pressure (Nashville, TN)

I test-solved this puzzle consecutively with A More 6 ∪ 28 ∪ 496 ∪ … (which you'll see below in NYC), and so the test admin says to me "More math!"

This was a pretty long solve, but we were never stuck; there were sufficient guideposts all along the way. The puzzle was pretty close to its finished form; there were a couple minor details to fix (two integration variables were missing).

We Are the Champions (Nashville, TN)

During a final check of this puzzle the morning before Hunt started, I discovered that Wikipedia had two different versions of...

The medal counts for Belgium in the 1920 Olympics on different pages. And there were citations for both versions!

We edited Wikipedia to make it match the data already printed on the physical artifacts we were distributing, but it's probably wrong; what should be the most authoritative source has the other set of counts. Now that hunt's over, I'm staying out of it, even though I was the one who discovered it. We have a Wikipedia admin who can try to straighten this out and handle any resulting revert wars.

Duet (Nashville, TN Meta 2)

I thought this video was going to get redone so you didn't have to listen to Hermes's horrible "singing" but it sounds the same as the one I tested. Sorry.

Fren Amis (Oahu, HI)

Remember, you're in a Mercury car from the god Mercury? Of course you can drive to Hawaii!

This puzzle was a huge challenge in testing, but even worse than it was for you because we didn't have that flavor text. Even knowing what you are supposed to do, it's a huge challenge. If you missed this one during hunt, and you can find a group of multi-lingual cryptic lovers (or are good at interpreting Google Translate results), give it a try, but allocate a good amount of time for it.

Happy Tunes (Oahu, HI)

The problems with this puzzle started before we even got going on our test-solve. The playlist was originally shared as a real Spotify playlist. I clicked the link, and Spotify gave me a 404. They asked me if I was logged in, and I said no, I don't usually use Spotify. But Spotify thought I had an account (no doubt I did something like this before) and I reset the password and logged in. Clicked the link, 404.

I confirmed I could listen to songs on their suggested playlists, and I could make my own playlist (I searched for and put one of the songs from the puzzle into it, which they had sent me as screenshots by this point). But I could never see the list the author provided, even though some other solvers on our team could. We never figured it out, but I assume they found other people who couldn't view the playlist either, leading you to get the image in the puzzle instead of a real playlist.

For actually solving the puzzle, I floundered a bit at first. The puzzle was called Show Tunes at that time, and since the first three clues mention reality shows and I assumed the masked competition one was also one, I looked for these songs performed on reality singing shows. This gave me plenty of time to fully read through the list of songs, and I thought the distribution of songs seemed about right - some new ones, a lot of greatest hits from across generations.

At some point, I got down to "I Dreamed a Dream" and I didn't have to google that one. I remembered this song was sung in one of the most famous performances on any reality singing show ever, when Susan Boyle sang it on Britain's Got Talent, and sang so well the performance got repeated around the world. Convinced that was supposed to be the right performance of that song, I checked her against all the clues. The only one that fit was that she wrote an autobiography.

The next match I found that seemed really solid was a song performed by the winner of one season on The Masked Singer. It was easy then to look up all the songs Masked Singer winners performed on the show, and there were a total of four songs performed by three different winners. I expected this: We had 55 songs and only 17 clues. I wasn't sure how it was supposed to work after that step, but I was making slow progress. I assumed flavor text (which was later changed) saying the playlist was "pretty much perfect" meant the songs got the highest ratings from judges on the various shows, and that was to help me in cases where I found different performances of the same song.

My absent solving teammate showed up overnight, and figured out the puzzle was supposed to be about songs sung on Glee. Every song in the puzzle had been sung on Glee. This was disappointing compared with what I already found, but it was reassuring in that I had a much tighter set of data to work with. There was a bunch of data collection and checking and rechecking here. One song in the puzzle was replaced to fix an error. Also, the clue about dating, which originally just said "I dated somebody else here," was changed after I sent feedback with a list of Glee actors who had dated other Glee actors; there were a lot more than the author realized.

We eventually solved it the way it was meant to be solved. There was one other thing... that answer. Some of the solvers during the actual hunt also failed to recognize that was a real thing. We were missing the last letter, because of not understanding how that clue worked, and I googled something with a different letter there and Google corrected it to the right answer.

Make a Winning Hand (Oahu, HI)

This puzzle was mostly the same as you saw in Hunt when we tested it. The interactive bit was a Python script running through some Python-web gateway which might have been the same as what's in the puzzle now too, except we had to type the names of the cards rather than use buttons to pick them.

At some point, my teammate found a way to crash the Python script. The script was fixed and we continued.

Obelisks of Sorrel Mountain (Oahu, HI)

This was the biggest BE NOISY moment I encountered in testing this year's Mystery Hunt. (For the uninitiated, this happened during the endgame of the 2002 MIT Mystery Hunt.)

I joined two solvers who had already figured out that the grid had references to Parks and Recreation episodes, and the images on the tiles referred to SubPar Parks, and the text on the tiles was mashed up from both sources. They were stuck on how to use the symbols on the edges of the grid hexes.

They thought it was going to be something related to Catan because of the board game reference in the flavor text, and the 19 hexagonal tiles. In the version we were testing, the tiles on the map at the end were arranged in a hexagon, exactly like Catan. But they had never played Catan and I was added to the session to give them more board game knowledge... when that wasn't what the session needed at all.

But anyway, they thought that the 4 different markers might indicate possessions of four different players. I analyzed it from a Catan perspective. One of the markers pointed to a vertex and the rest to edges, so I took that as a settlement and some road segments. But I realized the Catan rules only allow each player to build up to a maximum of 15 road segments, and since they can only build out from the settlements and other roads, and can only add more settlements on roads they have built, each player can have at most 2 disjoint road networks. One player had 8 of the marked segments, and they were too far apart to join into only two networks using 7 more segments.

Then I came up with the BE NOISY. The marked segments were well distributed, only in two cases with pairs adjoining at the same vertex. I came up with a set of rules for a logic puzzle: The marked vertex is an endpoint for the path. Each marked edge is on the path. The path never touches itself, so at most two of the three edges at a vertex can be used. The markers were all in different hexes, so the order those edges and one vertex are used is the order of extraction, and the different symbols indicate extraction indices somehow. Here is the map (spreadsheeted) and here is the solution (black lines form the path) to that puzzle, which I proved unique.

The only problem is that none of that was intended, and after our extraction attempts for that path all failed, we got an update which changed the flavor text to "blaze a trail" (originally "retrace a path"). After learning what those little symbols actually were, and figuring out how to use them to make the intended path through the hexes, rather than along the roads, we solved the puzzle.

Since there was no actual board game involved in the puzzle, I was upset at the massive red herring we'd been fed. My feedback included "I guarantee about 90% of Mystery Hunters who look at the current form of this puzzle are going to think of Catan, and the other 10% simply don't know Catan and their teammates will get them headed that way. The first two solvers ... hadn't actually played the game and they still recognized it as Catan. And in fact, this puzzle has zero to do with Catan."

I advised them to dump the board game references entirely, advice which was not taken, but at least they rearranged the hexes so they weren't in the pattern of a Catan board. The actual board game reference was supposed to be Cones of Dunshire, a fictional board game that appears in two of the Parks & Recreation episodes not otherwise used in the puzzle. Apart from having a map made of hexes, it has no resemblance to the puzzle and no influence on the solution to the puzzle. But it looks like we got no Catan-related answer submissions and only two teams who mentioned it in their hint requests, so I guess the changes worked, mostly.

A More 6 ∪ 28 ∪ 496 ∪ … (NYC, NY)

I found this puzzle fun but long. I broke into it in a manner similar to but not exactly like what is described in the solution, and once I did, I worked mostly from top to bottom as described in the solution. And I had a long nitpick session with the author afterward to iron out lots of little details that were not quite right in the version I solved.

One of the teams registered for this hunt was named ℙoNDeterministic, using a blackboard-bold P as is used in this puzzle to represent a set solvers have to figure out. I wonder what theirs represents.

Game to Be Themed Later (NYC, NY)

This was one of the most impressive puzzles from our hunt. Elaine Lowinger built a completely custom pinball machine for solvers to play after they solved the first part of the puzzle. Until now, I had assumed I was the second-biggest pinball nut on our team (second to Bowen Kerins, multiple-time World Pinball Champion, so second by a big margin, but still second). This makes me think I'm third.

Elaine lives in California, so she had to build the machine and get it working, then take it apart to ship it to MIT and put it back together in our headquarters just before hunt.

And Bowen discussion-edited the puzzle, which had to be incredibly cool for Elaine. Imagine you decide you want to write a chess puzzle and your team says Garry Kasparov is going to be your advisor on the project. It was like that.

Pinball High Scores:

Super Team Awesome: 33700
Setec Astronomy: 33400
Literally Animal Farm: 27900
Mister Rager's Neighborhood: 24700
Death and Mayhem: 21600
Teammate: 19400
Cardinality: 18800
Providence Planetary Stations: 18700
TSBI Swarm: 18400
Galactic Trendsetters: 18300
Frumious Bandersnatch: 17600
NES: 17300
Mathemagicians: 15800
Unicode Equivalence: 15000
Ethereal Constellations: 14700
(zero width spaces): 12700
Hunches in Bunches: 11700

Puzzle with a Twist (NYC, NY)

This was a puzzle I discussion-edited, and it was one where my discussion-editing included writing a program to prove uniqueness of a part of the solution.

My other guidance for the author was that solvers were going to find the parity errors confusing and difficult, so they should keep the other aspects from getting too complicated, lest solvers abandon the puzzle in frustration or confusion. This led to simplifications like using the original Rubik's Cube colors, arranged in the standard Rubik's Cube way, instead of also making solvers figure out the arrangement of colors on the cube as part of the cube assembly (even though my program had proved uniqueness of that).

Queen Marchesa to g4 (NYC, NY)

This was also a puzzle I discussion-edited.

With this puzzle, I mostly helped guide the early stages of the work, pointing out that if the Magic-chess interaction was too loosey-goosey then it would be too hard for solvers to confirm how the puzzles were supposed to work. I also poked holes in some proposed puzzles by finding alternate answers. Later on, editors, factcheckers, and test-solvers were filling that role and I was mostly hands off.

The Chromosome of a Highly Colored Fish's Eye (NYC, NY)

I encountered this puzzle as a test-solver, and it was a nightmare.

The version I test-solved was over twice as long, and there were a lot of issues. The clue for Astana, about its world record (most renamings of a capital city in modern times), was a good break-in establishing that the puzzle was about city renamings, but some of the renamings were dubious. Some could only be found on one Wikipedia page, while other pages solvers might consult glossed over those changes. Another issue was that some clues referred to the events that led to a renaming but not the actual name, while others referred only to the name, not when it was adopted, sometimes making the dates seem out of order.

After I finally got through it, the author let me know he was carrying on the work of another author who hadn't had enough time to keep writing the puzzle, and a lot of the issues were due to trying to preserve the original author's work. I told him that if he was in charge of the puzzle now, he needed to do what it took to make a good puzzle. Much like the "kill your darlings" advice, he should verify all the data the puzzle was based on, cut out dubious and just plain wrong renamings, and double-check dates (in a number of places I'd pointed out). Also, his clue phrase was awful; that was another holdover from the original author. The editors helped with that: by assigning him a different answer, they forced him to choose a new set of cities to extract from, and he was able to implement my suggestions.

Time for a Drink! (NYC, NY)

A fun one. I especially enjoyed the teleporting express trains (just look at the timetables).

I solved the subway routing puzzle by hand. After thinking about how I'd write a program to solve it (track the quickest known arrival time at each station and, whenever a better time is found, update every station you could now reach sooner from there), I just ran that algorithm by hand.
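That relaxation idea can be sketched in a few lines. This is a minimal illustration, not the actual puzzle's network; the stations and travel times below are made up:

```python
import heapq

def quickest_arrivals(edges, start):
    """Track the quickest known arrival time at each station; whenever a
    better time is found, push it so the stations reachable from there
    get updated in turn (the classic Dijkstra relaxation)."""
    best = {start: 0}
    heap = [(0, start)]
    while heap:
        t, station = heapq.heappop(heap)
        if t > best.get(station, float("inf")):
            continue  # stale entry; a quicker time was found already
        for nxt, travel in edges.get(station, []):
            arrival = t + travel
            if arrival < best.get(nxt, float("inf")):
                best[nxt] = arrival
                heapq.heappush(heap, (arrival, nxt))
    return best

# Made-up network: station -> [(neighbor, minutes)]
edges = {
    "A": [("B", 4), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
print(quickest_arrivals(edges, "A"))  # D is reached at minute 6 via A-C-B-D
```

Doing the same bookkeeping on paper works fine when the map is small enough, which it was here.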

My teammate found the Clefs d'Or link and then we were able to solve the puzzle on the message board.

Toy Chest (NYC, NY)

My only comments here are spoily.

I test-solved this puzzle as reinforcement, added to help the original solvers clean up their data. After we got most of it cleaned up, one of my teammates found the final clue phrase ROO/SEVE/LT/ORB/EAR among unused images, and we short-circuited the puzzle. We were supposed to get a first message from the unmatched images near the start of the puzzle, which was less clear for us because we had not connected some of those to their sets. Since the part we skipped was solving number links, it was just as well they left the puzzle like this where it could be short-circuited.

ENNEAGRAM (Olympic Park, WA)

One of those should-not-be-missed puzzles. Many will recognize the obvious homage to the late Jack Lance's OCTOGRAM game. Jack died suddenly during the construction of our hunt, and while I wasn't as close to him as some Mystery Hunters were, I had experienced some of his creations and he was one of the few puzzlers I could say always kept me in awe. We will never see what Jack Lance might have done as a Mystery Hunt author, which I am sure would have been astounding. Right after his death, my teammates were saying they wanted to come up with a fitting tribute to him, and I think Jim Hays made a fine one in this puzzle.

I noticed (as did Alex Irpan) a flyer (completely coincidental) posted on one of MIT's numerous bulletin boards about the personality styles enneagram, which was used in this puzzle.

Jigsaw Sudoku (Olympic Park, WA)

This was another puzzle I discussion-edited, and another where I wrote a program to help confirm uniqueness of part of the puzzle's logic.

In this case, I confirmed that even when you repeated pieces in different grids, considering only the shapes and immediate contradictions due to placements of the same number, only the sudoku grids that were intended could be made.

An earlier version of the puzzle which had more, smaller pieces and didn't yet have the every-digit-at-least-once-per-piece rule had trouble with extra grids that were hard to rule out until you tried fully solving the sudokus.

Oil Paintings (Olympic Park, WA)

I joined an existing test-solve session of this physical puzzle. My teammates had the pieces and had assembled all but two of the images when I started. I should point out that instead of getting 20 strips per painting for a total of 160 strips, they had the strips cut into squares, so they had over 2000 little squares to assemble... and that was too much, so it was toned down to the strips you got. Jigsaw Puzzle from the 2013 hunt gave you even more pieces than this... and it was too much then, too.

The photo sides of these jigsaws showed famous oil paintings. The back sides had a lot of letters. There were small letters that didn't spell anything, in any direction, and larger letters that spelled words reading horizontally - the same word over and over horizontally, not necessarily starting with the first letter in a row, and they had written down all these words, and did so for the last two as they got them assembled.

They had two things: Given the oil theme, they noticed a number of the words (but only about 1/8 of them) were types of oil or words that made a phrase "___ oil." And they noticed one painting had an image of a dragon added to it that was not in the original.

I looked at the pattern of where the oil-related words were, and while they tended to be spaced apart, the spacing was irregular and different for each one. It wasn't obvious what it meant.

So then I got my team's attention focused on the dragon. Clearly that was not there in the original painting. So we should look for things added to the other paintings. This was indeed what we were supposed to do, and we started finding the hidden animals in the other paintings. I spent a while trying to find the exact Monet Water Lilies painting used in the puzzle; he painted over 250 paintings in that series, including 18 featuring the bridge seen in the one in the puzzle, and it still took some effort to identify the animal. I have since seen this picture life-size from the strips we gave solvers, and the wolf now pretty obviously isn't part of the original. I think it was redone to make the animal clearer, and it came out so obvious that it would likely be identified at the start along with the dragon. Comparison of the version solvers got (left) with what I had to work from during testing.

But at the end of this stage we had a list of eight animals including a dragon.

Our work paid off, because with the list of animals and "oil" my teammates found... [continued next spoiler box]

The other important thing in this puzzle is that we got Franced. Those of you with good memories may remember that the last time my team ran the Mystery Hunt, we discovered days before Hunt that France changed its regions on that January 1st, breaking a puzzle we had written that involved them, and forcing us to find another suitable country to put in their place. The details of who Franced us are inside the spoiler block, but since this puzzle was already professionally printed on light card stock, there was no fixing it. We just told solvers to use the 2023 data. And now the part of the solution that got Franced:

... the list of oil patterns used by the Professional Bowlers Association. And we learned a lot about bowling that us non-bowlers/occasional casual bowlers never knew. Bowling lanes are oiled to ensure the balls glide down them; otherwise there would be too much friction for the ball to get all the way down the lane. But they aren't oiled uniformly. Different parts of the lane get different amounts of oil. This affects how the ball breaks when you apply spin, and it is how professional bowlers bowl those curve balls that hit just behind the head pin on either side in order to make strikes. And not all lanes are oiled the same way. At your local bowling alley they probably use a "house" oiling pattern which is friendly to bowlers who have a basic understanding of how oiling works, and you can ask to see a chart of this pattern.

In PBA events, they use one of a number of named patterns, which are named for the animals in our puzzle, and which are considerably meaner. Consider how a newspaper crossword compares to a crossword you get during Mystery Hunt. These patterns change the amount of oil and distribution of it across the width of the lane and certain distances down the lane, with these breakpoints measured in feet. The lane is 60 feet long and the grid on the letter side of our jigsaws is 60 squares tall, with the large letters spelling words placed on these foot-marks. Sure enough, there are oil-related words on most of the breakpoints in the patterns named for the animal added to the painting on the other side of each jigsaw. One on each jigsaw isn't, and that word is the extraction to reach the next step.

Or, rather, they matched when we test-solved it. For 2024 the PBA changed the patterns so they no longer matched our puzzle.

Transylvanian Math (Olympic Park, WA)

Loved this puzzle.

I saw quickly that it was about The Count, but I didn't get that it was from the specific album. Searching for the largest number 2478693 and "the count" gave as the first hit a Sesame Street fanfic script for an appearance of Britney and Jamie Spears as guest stars; not a real script, but based on the lyrics of "Beep." The actual song lyrics don't show up, but I found "2,478,693 beeps" in another very random list packed with Sesame Street references, so I figured it must be from the show. Searching for a sequence of numbers near the two 15000s gave a page which quoted "The Count's Weather Report." So I ended up listening to a bunch of Sesame Street skits with the Count and getting one or two more of the right ones. My testing teammate stumbled upon a playlist from the album and so suddenly we went from a couple to all of them as quickly as we could match them up.

Gaia (Olympic Park, WA Meta)

This was a long, difficult test-solve because the puzzle was explained poorly.

The first part, assigning values to letters based on the sums given when you solve each puzzle, wasn't hard. But the original puzzle text wasn't clear about wanting us to ignore the actual motion of the stars, and not knowing how to use the numbers from the strings, we thought we were supposed to figure out where the stars would be in 10 million years, compare the coordinates to the numbers from the strings, and derive something that way. (The fact that we didn't have the interactive component yet hurt as well. We were just told it would exist.)

And using the real data didn't work at all. Compared to the stars we were really supposed to be looking at, the real stars in the constellation are much closer to us, and have greater apparent motion in the sky (for comparable actual motion) as a result. The apparent motion of the real stars is so great that it doesn't make sense to assume that the annual movement, measured in terms of changes in two angles from spherical coordinates, would remain constant for so long. Some of them would have revolved completely around the sky in that time!

We finally figured out what it was supposed to be after we got the flavor text that was used in the end, that Gaea didn't like the way they're changing... what if they changed (some other way)? And we still didn't get it immediately, but we figured it out. What slowed us on this part was that even though we had retrieved some of the real star data from the Gaia project, we had recorded every identifier of those records but the one the puzzle was trying to match by. The numbers they gave us are searchable, but we didn't realize the database had identifiers in that format.

15 Questions (Sedona, AZ)

It was immediately obvious upon opening this puzzle that it was based on...

Who Wants to Be a Millionaire? It wasn't until my teammate did the right search that we figured out it was also based on the song "If I Had $1,000,000" by Barenaked Ladies.

A little research showed that since I stopped watching the game show, they had changed the dollar values a few more times (I remembered the first change) and most recently they were back to the original values. And only those original values included $32,000, so those had to be the right numbers. But how to use them?

We had an acrostic from the questions when we put them into the order of the song: USE RIGHT ANSWERS. This made me think I was supposed to use the A, B, C, or D of the correct answers to extract one of the first four letters. Seeing this wasn't working, they asked us to consider if the acrostic had instead said KEEP PRIZE ON AYES. That still didn't help, so they tried DIGITS AS INDICES. Finally it made sense. Without that clue, it was "guess what the constructor is thinking" to figure out that the $32,000 level meant to extract the 3rd and 2nd letters of the answer.
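A minimal sketch of that digits-as-indices mechanic; the answer word here is hypothetical, and treating the zeros in the dollar amount as ignorable is my assumption:

```python
def extract(amount, answer):
    """Use the nonzero digits of the prize amount, in order, as 1-based
    letter indices into the answer ($32,000 takes the 3rd then 2nd letter)."""
    digits = [int(d) for d in str(amount) if d != "0"]
    return "".join(answer[d - 1] for d in digits)

# Hypothetical $32,000 question whose answer is HANNIBAL
print(extract(32000, "HANNIBAL"))  # -> "NA"
```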

We also had a couple nitpicks about the questions. The one about an empire spanning continents included Dutch and Portuguese options in the version we saw, and we provided evidence that both of those empires had also spanned six continents. The Hannibal question implied Hannibal rode an elephant across the Alps; while he did bring elephants with him, they were for use as war machines, and historical evidence suggests he rode a horse during the crossing. And they added more BNL references to the questions while they were at it.

Composing Compositions (Sedona, AZ)

You could probably guess this was one of the puzzles added late in construction...

Since it was based on a video game expansion that was released in full on November 21st before hunt, and in public beta just two weeks before that. So test-solving it was complicated by not having good resources for the content that had just been released; it actually got easier a week or so later, when we found better-summarized guides.

Flower Power (Sedona, AZ)

This was a challenging crossword-type puzzle. Recommended for those who like hard crossword variants.

IO (Texas)

I didn't see this puzzle in testing, or at all until hint requests came in for it late in the hunt. And those hint requests tended to stick around because even the hinters had trouble understanding the puzzle. The solution page is hard to follow. But I took a good long look at this one after hunt. I think it all works, but it's hard to follow because it doesn't work how you'd expect. It's superficially a programming puzzle, but it doesn't really work like one. Here is what you were supposed to figure out (inside a huge spoiler block).

There are three code snippets labeled test1, test2, and test3. You're supposed to figure out from red-highlighted keywords in each test program that they are written in Python, Ruby, and Go, respectively. The file extensions missing from the filenames are py, rb, and go. The files are supposed to fit into the logic gate diagram at the spaces marked 1, 2, and 3. You see that the output of 3 is already labeled with the letter O. You're supposed to figure out from this that you assign the two letters of each extension to the input and output of the numbered gates, so for example the maroon trace heading into gate 3 is assigned G. That said, Python's io module doesn't have io.bind() and much of the rest of this code can best be treated as pseudocode. But you can interpret each piece in conjunction with the colored logic gate diagram.

The function in each code snippet begins with a comment referring to documentation from a particular date in the 20th century, ending with "Comments needed here!" which literally is a "request for comments." In addition, each of these comment sections has the acronym RFC. Internet standards were published as RFCs, requests for comments, in that era. The first section of each test function binds to a well-known network port: 21 is FTP, 25 is SMTP, and 80 is HTTP. The dates help you disambiguate which RFC for each of these services you're looking for: RFC 114 for FTP from April 1971, RFC 788 for SMTP from November 1981, and RFC 1945 for HTTP from May 1996.

Each test function then reads (io.recv) a value from a three-byte hex string. Hopefully the various colors and the author name roy g. biv made you curious enough to extract the exact RGB color codes from the logic diagram. If you did then you'd have found that the hex code arguments to recv are the colors of the traces connected to the three network ports in the diagram, the green, brown, and pink traces (in that order).

Finally, each test function asserts that a particular value was received. These numbers are close to the corresponding RFCs, but each is too large by a two-digit difference. The comment refers to codes on page 38; this is a reference to "code page 38." The DOS/Windows code pages have larger numbers than this, but the code page system started with IBM mainframes, in which code page 38 is US ASCII. If we interpret the difference as a character in ASCII, this gives us A for the green trace, I for the brown one, and T for the pink one.
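As a sketch of that arithmetic: the RFC numbers and letters are the ones described above, but the asserted values below are reconstructed from the stated relationship (asserted value minus RFC number equals an ASCII code), not copied from the puzzle.

```python
# Map each service to its RFC number and its (hypothetical, reconstructed)
# asserted value; the difference decodes as a US ASCII character, since
# IBM code page 38 is US ASCII.
rfcs = {"FTP": 114, "SMTP": 788, "HTTP": 1945}
asserted = {"FTP": 179, "SMTP": 861, "HTTP": 2029}  # reconstructed values

for service, rfc in rfcs.items():
    print(service, chr(asserted[service] - rfc))
# FTP A
# SMTP I
# HTTP T
```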

The next part of each test function sends data and then reads more data (test2 and test3 each do this twice). As with recv, the groups of three hex bytes should be interpreted as colors, and each of these sections corresponds to the traces entering and exiting one of the AND or OR gates in the diagram (the two gates at the left are coded first, then the three on the right, both sets top to bottom). The two-byte hex strings are the data sent. This time, code page 37 in the same nomenclature is EBCDIC, the more common character encoding used on those mainframes, and this is confirmed by the first three bytes of the asserted value.

When you interpret each set of four bytes of sent data as EBCDIC, you get four capital letters spelling a word, which is meant to be a clue. For example, the section in test1 sends 0xD3D6, 0xE5C5 and this spells LOVE. The answer is formed from the letters of the two traces feeding into the gate (starting with the top one), then the name of the gate, and finally the letter corresponding to the output. Each asserted value ends with some null bytes; the total length of the asserted string, in bytes, tells us the length of the answer we are supposed to get. By working back and forth from the traces we already know, we can figure out the words as ADORE, ADORN, BRANDY, GRANDE, and IVORY. Now we have the letters for all traces.
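The EBCDIC step is easy to check in Python, whose cp037 codec is the common IBM EBCDIC code page. The answer assembly at the end (A and D feeding an OR gate with output E) is my illustrative guess at which traces produced ADORE; the puzzle's actual trace assignments may differ.

```python
# Decode the four bytes sent in test1 (0xD3D6, 0xE5C5) as EBCDIC.
clue = bytes.fromhex("D3D6E5C5").decode("cp037")
print(clue)  # LOVE

# Assemble an answer: input-trace letters, then gate name, then output letter.
# (A/D/OR/E is my guess at this gate's traces; the pattern is what matters.)
inputs, gate, output = ("A", "D"), "OR", "E"
answer = "".join(inputs) + gate + output
print(answer)  # ADORE, a synonym of the clue LOVE
```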

The last code snippet has its file extension, kt, identifying it as a Kotlin program, though its comments tell us it's incomplete. It calls a function exorbitantHash, which is supposed to clue that it's using XOR. The first three bytes sent as arguments to this function make color codes again, the brown and red colors from the diagram, the ones assigned I and O in the original puzzle diagram. The fourth argument in each call is an ASCII letter (I or O). And the last bit indexes into the discriminator array; this is a 0-based index, so [8] and [14] grab the 9th and 15th elements of the 26-byte array, corresponding to the positions of I and O in the alphabet. Putting all this together, we XOR the three bytes of the color, the ASCII value of the letter assigned to that colored trace, and the corresponding byte from the array. The resulting byte is mostly 0s in binary, with zero to three 1s, and across all the letters there are only one or two 1s in each bit position, cluing that the set bits indicate which positions of our answer each letter can go in. For instance, the operation on I XORs 0x9a, 0x63, 0x24, 0x49, and finally 0x1c from the array to give binary 10001000, indicating I can go in the first or fifth position. Only two eight-letter words fit the indicated pattern, forming the answer: INTERNET PROVIDER.

"Line, Please" (Texas)

The "modernized" Romeo & Juliet prologue was funny right from the start, with dead-ass props, people dissing people's Pops, and being advised to turn your cell phones off and watch the show.

But the Shakespeare just kept going deeper and deeper. There are a bunch of other Shakespeare characters from other plays listed to the right of the prologue. We figured out they are the speakers of the "interruptions" in the prologue (which was originally a sonnet but is now more than twice as long) and matched them up. Where we couldn't get them just from the paraphrased text, we looked up all lines from each character. But some of the characters were ambiguous. "Ham." actually did turn out to be Hamlet, but "Jul." wasn't Julius Caesar, but Julia from Two Gentlemen of Verona.

Appease the Minotaur (Texas Meta)

You should appreciate the fact that we test-solve metas with one or more answers missing.

In the first version of the puzzle we got, Theseus and the Minotaur were swapped, which led us to build the path in the wrong direction; we could still think about possible paths with the ship sections we had, but the extraction was backward and hence hopeless. In addition, we didn't have the beef grades yet, and were trying orderings based on pulling good letters out of the answers, so we were using the wrong indices to extract from the answers in the wrong order. An update swapped Theseus and the Minotaur on the maze, and even after we figured out the beef grades, we still had trouble. With two answers, and hence one pair of maze pieces, being withheld, we backsolved the missing answer corresponding to Canner to get those missing pieces. (There were other testers, so don't think this was the only way to solve it.)