The 2017 MIT Mystery Hunt happened last weekend

1) Before addressing the main topic of the piece, a quick heads up to say that one of the lead organisers of DASH 8 in London has enquired whether anyone is willing to run DASH 9 in London this year. Interpret this as you see fit, but there must be a reason why that question is being asked in that fashion.

2) If today’s current affairs have got you down, Dan Katz (see below) points to Puzzles for Progress; donate to your choice of ten US causes that might need your attention more today than they did yesterday and receive a bundle of puzzles by a collection of highly-regarded authors. It’s something concrete that anyone can do wherever in the world they are.

3) The annual MIT Mystery Hunt took place at said Institute of Technology in the greater Boston area last weekend. A quick summary is that it’s, arguably, the world’s most extreme open-participation puzzle hunt; a couple of thousand or so players form several dozen teams, each of perhaps as few as five players or as many as 150. These solvers spend up to two-and-a-bit days solving puzzles non-stop, taking as little sleep as they dare. There is no limit to the difficulty of the puzzles; many of the world’s very best solvers take part, and some of the puzzles are written with this in mind. It’s a practical assumption that most teams will be able, directly or indirectly, to contact the equivalent of a postdoctoral academic in virtually every subject under the sun, high-brow or low-brow, whether in person or online. For a longer description of the hunt, see my 2015 article on the topic, complete with links to write-ups of what it feels like to participate and to some of the most spectacular puzzles.

This year’s seems to have been extremely well-received. It’s also distinctive in that the winning team found the coin in under fifteen and a half hours. This is definitely on the short side as MIT Mystery Hunts go, possibly even the shortest in recent memory. In recent years, the trend has been for the hunt organisers to accept answers from the start of the hunt, shortly after midday on Friday, until typically early Sunday evening; this way, more than one team can have the fun of seeing everything that there is to see and finding the coin. It’s on the record that this year’s hunt was designed to be relatively accessible in this regard; a record seventeen teams each got the fun of finding the coin, for many of which it was their first ever complete solution. Congratulations to all the teams who found the coin, but most of all to Death and Mayhem, who found it first!

One of the team of organisers, Dan Katz, who it’s fair to say is better known (or, at least, more notorious) than most hunt participants, has started an exciting hunt-themed blog with reflections on the hunt-writing process and what this year’s event felt like from the organisational side. Discussion of the hunt and related topics has become somewhat more scattered than in previous years (though, to some extent, a subreddit fulfils some of the role that dear old LiveJournal did five or more years ago), but Jennifer Berk has kindly been collating links relating to this year’s event. Four-time World Puzzle Champion Wei-Hwa Huang’s Facebook post on the duration of the hunt is also worth a look.

You can see the puzzles from this year’s hunt, along with their solutions, and they’re well worth reading as amazing pieces of craftsmanship, even if you don’t try to solve them yourself. You’ll see that some puzzles are associated with fictional characters, introduced in the context of the hunt, and others with the quests in which those characters participate. The character puzzles are intended to be less challenging than the quest puzzles, and it’s a delightful development that there are deliberately more accessible puzzles in hunts these days – indeed, it’s on the record that the hunt organisers deliberately intended to make this hunt more accessible than many in the past. On the other hand, even these relatively accessible puzzles are intended to take an entire team half an hour, or an hour, to solve – so they remain daunting challenges.

To get a further flavour of this year’s hunt, the kick-off pastiche and the wrap-up meeting have both been posted to YouTube. These make fascinating viewing. I particularly enjoyed learning the stats quantifying the success that the organisers had in their attempt to make the hunt relatively accessible. Not far off a hundred teams registered in the first place, but some of these registrations may have been less than serious, small teams might have merged before the event began, and some teams might have been registered more than once. Of the teams that took the event seriously:

  • 83 teams submitted at least one answer
  • 82 teams submitted at least one correct answer
  • 70 teams solved at least five puzzles
  • 58 teams solved at least one quest puzzle
  • 55 teams rescued the linguist in person
  • 49 teams solved at least one character meta-puzzle
  • 29 teams completed the character endgame
  • 28 teams solved at least one quest meta-puzzle
  • 17 teams completed the hunt and found the coin

Some past hunts are more forthcoming with their stats than others, and of course every hunt has a different structure, but these figures compare very favourably to what I remember from previous years and reflect the degree of success that the hunt team achieved in its aim of relative accessibility.

I would be inclined to believe that if the most famous attribute of the MIT Mystery Hunt is the very considerable difficulty of its puzzles, its second most famous attribute is the traditionally considerable size of its teams. Another part of the wrap-up video addresses this fact. It’s true that there are half a dozen teams around the 100-150 solver mark, many of whose members are surely not present in person at the venue. It’s also true that some of the seventeen teams to find the coin are just (“just”!) fifty strong, with a notable outlier around the 35 mark. It’s also true that some teams of 25 or so, or even down to around a dozen, can solve around a hundred of the just over 150 puzzles on offer – but those must be power-packed teams indeed. Dan Katz touches on the topic, and it’s something that comes up every now and again in Mystery Hunt discussions. It is MIT’s event, after all, and some people like the “you bring your puzzle-solving army, we’ll bring ours, no quarter asked or given” arms race of it – or, if there’s any event in the world with that spirit, the MIT Mystery Hunt seems to be the one on which people have settled.

Very occasionally, write-ups will mention that some team or other has remote cells of solvers working together on puzzles from afar, and some teams mention that they have remote cells in the UK. A couple of times, I’ve spent a weekend here in the UK with two or three other solvers working very hard on a small number of puzzles. It’s fun, though I suspect it can only be a fraction as much fun as solving on-site, and there’s so much that you miss by solving remotely – but it may be much more practical, as well as tens of degrees warmer some years. As interest in puzzle hunts and puzzle hunt puzzles increases in the UK – see the last post as evidence! – it would be fascinating to know just which teams have remote cells in the UK, and whether any of those cells are actually open to potential new participants.

Thanks to the setters and congratulations to the winners. The rest of us can just follow the countdown until next year!

4 Comments

  1. I find the Aussie events, rather than MITMH, to be the pinnacle of hardness, in that ESP is often required to figure out what the puzzle is all about, and they’re also occasionally fuzzy in their solving procedure. The MIT hunt has an MIT-student target audience, so it occasionally assumes their education, skillset, culture and curiosity. There are some puzzles that are only reasonable for teams of this size. And it has its own quirks, rewarding familiarity with NPL styles and ISIS and meta puzzle structures. But on the other hand, it has the deepest pool of experienced editors in the world, so puzzles that are broken, underclued or designed to beat the solver rarely make it through. Also, you’re spoiled for choice, so the subset of the hunt you experience will better match your idea of tractability and fun.

    I find the team size discussions often misleading. A whole lot of people, possibly the majority of hunters, *choose* to experience the hunt this way. And it makes sense. Large teams provide a lower stress, lower commitment, more social way to introduce participants to the joys, the required skills and the people of the hunt. They provide the infrastructure that makes remote solving not just possible but pleasant, complex structures legible, and your work accessible to your team-mates. Physical and judged puzzles, events and runarounds, call-ins and server accounts scale on a team rather than a per-hunter basis, so team size’s effect on administration is more of a mixed bag than commonly presented. And simply put, a 25-strong winner leading to a 10-strong organiser would almost certainly mean reverting the current cornucopia of available puzzles to the number and quality of the era when organising teams were of that size, and I can’t see 10 people supporting the current number of participants even given tech advances. Also, given the massive advantages experience confers, I can see such organisers risking both ageing out until this makes no sense as an MIT IA, as well as overfitting the event to a truly 1337 irrelevance and abandonment. It wouldn’t be the first puzzle event to suffer the latter fate.

    Codex is a friendly, fun-first team with superlative remote solver support, hunters in the UK and an open door member policy. It always has remote cells, and has had UK cells in the past; I don’t know if one formed this year. Cells are organised from the bottom up, typically by somebody offering some space arrangement late in December.

    • Mmm, yes. I solved in exactly such a Codex UK cell in… er… (looks year up from the /devjoe archive based on the one puzzle I remember) 2004 and it was indeed fine fun, though rather overwhelming. I don’t know whether the guy who organised it is still associated with Codex, but he does turn up at London Puzzled Pint from time to time, so I should probably ask. (And yet I know people on other teams better, so would probably hunt with them in practice.)

      Your points about the Aussie hunts are well noted, too. I’m a little worried about the puzzles in this year’s Cambridge hunt in that there may not be the tradition of puzzle editing and testing for difficulty that can only develop with years of practice. (The first hunt I wrote, in a completely different context, was all over the place up and down the dial…) Nevertheless, you don’t get practice without practising.

      A sidenote (one that I’ve observed before, but it never stops amusing me) is that one of the principals of the first MUMS puzzle hunt was one Julian Assange. Which of the first Cambridge Puzzle Hunt’s constructors will go on to earn such global notoriety? Place your bets now!

      • 2004 is a terrifying entry point! And it looked so much more polished than anything that came before, that you felt that not gaining much traction was your own fault. I remember a lot of frustration that year. We hadn’t even figured out the theme until a couple of days in! Then the solutions became available, and I felt better. Also, team infrastructure is a whole lot shinier and smoother these days.

        • I took part remotely on a different team – a search through very old e-mail suggests it was apparently “Austerity Nymph Boston Perl Mongers” – in 2002. That year I recall that I might have managed to solve either one or two of the easier puzzles, which raised my expectations that I might have something to offer in this regard after all. Every year since where I’ve been able to pay attention has repeatedly disabused me of this notion.

