Selir

Everyone at Legitimate Business Syndicate is heartbroken over the passing of our teammate and friend Selir.

Selir was one of the most dedicated members of our group. Whether it was lengthy work sessions or late nights babysitting servers in a surprisingly cold CTF room, Selir was always committed to making sure things worked well.

His frightening competence was also beyond valuable. All of the technical parts of running our games had Selir’s hands in them. Selir made sure quals challenges were easy and reliable to deploy and run, did basically all the networking, made sure cLEMENCy had ROP chains available, and wrote some of the most beloved CTF challenges ever fielded.

Selir made sure everyone felt welcome and part of the team, and was a joy to be with. When he was the voice of reason, it was from the heart. When troubleshooting or explaining a mistake, he was kind and thoughtful.

We’re thankful for the time we got to spend with you, Selir, and we only wish there was more of it. We are less legit without you.

Doodle Grid

Hello! I had a lot of fun at my first DEF CON in several years with free time and no responsibilities. Turns out running a CTF is hard work, and while it's a wonderful experience, I'm also happy that our friends at Order of the Overflow get that experience now :)

I've enjoyed a lot of the visual art on display at DEF CON and other events around the world for over a decade now. In particular, interactive pieces fascinate me, and ones at hacker events with some kind of network connectivity seem especially fitting. At SHA-2017, there was a massive LED grid behind a well-staffed and well-stocked bar, all controllable over TCP to a public IP. This worked well, with lots of little animated characters slipping and sliding around the screen, until my friend Shadghost ransomware'd it with rented servers, crashing the machine hosting it and requiring the staff to reboot it.

LED displays showing images and a cryptolocker ransom screen behind a bar

So I started thinking about how I'd do it better. Do I want IPv6, where addresses have enough bits to put the whole message in the destination address of a near-empty packet? Do I want to use a projector so the setup isn't a huge wall? A Raspberry Pi? I dithered on this for most of a year, until piesocks and I started a quick project for Toorcamp, a portable battery-powered Doom setup. Piesocks's portable WiFi AP used a little WiFi SoC, which basically made the whole thing super-easy. Also at the event were a bunch of microcontroller-powered LED displays, both in semi-official installs and just taped together at various campsites.

fifteen LED grids taped into a rectangle, held up by someone wearing pyjama pants, in front of a cluttered table, resting on grass

On the journey home from Toorcamp, I started narrowing down to something I could actually build in the month before DEF CON. Without any contests I had to be at, I could just roam around with a backpack, loaded with drinking water and other important supplies, with the LED grid stuck on the back.

My first order was an ESP-8266 prototype board, a 16x16 LED grid, and a small OLED to show the device status. Once it arrived, I used a breadboard I had lying around to wire it all together, and, over the course of a few evenings, started playing with some small ESP-8266 programs: one to connect to WiFi, one to control the LED matrix, and finally one that would accept UDP and change LEDs.

an LED grid showing an octothorpe logo connected to a breadboard

The third program didn't work: driving all 256 LEDs required a ton of CPU time, enough that the microcontroller couldn't do WiFi stuff at the same time. I tried a few different libraries, but the most promising fix was to switch to an ESP-32 microcontroller; that one's got two cores and an additional peripheral that can handle the LED grid on its own. In addition, while the LED grid appeared to work fine on 3.3V for both signalling and power, I decided to run it off 5V, which required a level shifter. A third problem was that ramping the LED matrix up to full brightness would cause a voltage drop that browned out the microcontroller. I watched a few videos, talked to a few people, learned about bypass capacitors, and decided I should run with 'em.

I ordered an ESP-32, a handful of level shifters, and a big ol' box of ceramic capacitors. They showed up, and I sat down to rewire things. The ESP-32 had almost the same pinout as the ESP-8266, and the level shifter provided a convenient place to wire in all the power pins for everything, and to jam in a capacitor. I also had to rewrite some of the software to use a special LED library for the ESP-32.

an ESP32 prototype board and a small red level-shifter board on a breadboard, with jumper wires running all over the place

Changing parts and updating software to match got almost everything working, but the capacitor didn't really fix the brownout problem: my big capacitor box only had small ones. Since DEF CON was getting closer and I didn't feel like ordering more stuff online, I tried to rip a few out of an old Xbox 360 I had lying around. This was a pain, since Xboxes use RoHS-compliant lead-free solder that doesn't want to melt at the temperature my cheap soldering iron tops out at.

an Xbox 360 main board, with a soldering iron on top of it and several capacitors next to it

Once I recovered the caps, I didn't feel like janking them into the breadboard, so I just disregarded them, right-shifted the brightness down a bunch, and also tested the setup off the battery pack I'd use at DEF CON. It worked fine.

The next task was to set up the microcontroller as a WiFi access point. This was remarkably easy: I changed the function call from connecting to an existing network to run an AP, commented out the loop that waits for the connection to be established, and that was basically it.

Once the circuits were all in order, I decided to put them all together in a box that would survive backpack life. I started with an amenity kit from a flight in a tough plastic box, and did a bit of customizing. I cut out the fabric lining, Naomi Klein'd the logo off the outside, cut a hole for the OLED status display, and vowed to leave the zipper open a bit for cables.

an airline amenity kit box with a poorly-cut hole cut in the front for an OLED display

Mounting the OLED was tricky. I couldn't find screws that fit the OLED, so instead I just kind of taped it in. The tape didn't work, so I cut up an old hotel key card and hot-glued it together to bolster the display. By this point, the display didn't really do anything, so honestly I should've skipped it, but whatever. It's not gonna move now, heh.

cut up pieces of plastic hot glued down, holding something covered in electrical tape down

My first attempt at mounting the LED involved using Velcro to stick it on the outside of the backpack. It worked really well!

The night before I left for DEF CON, I realized that instead of the Ruby scripts I was using on my desktop to draw to the display, I should be able to use my phone. Building an iOS client for the doodle grid used the sum total of my iOS experience (half a Mastodon client), which wasn't nearly enough, so I learned a lot more. I got a lot of experience with graphics-oriented parts of the iOS environment, including the CG* family of classes. Additionally, I wrote a page of instructions and printed out some Ruby code.

a piece of paper, with a message in calligraphy and normal print handwriting
Vito Leds
SSID: twitter @vito_lbs
Security: none lmao go nuts
DHCP: ya
192.168.4.1
UDP 27420
Message: <PixelCount:u8> [<Pixel>]
PixelCount: 8-bit unsigned integer
Pixel: [x,y,r,g,b]: all 8-bit unsigned
it's 16x16 and iirc it bounds-checks
[1,0,0,255,0,0] make some corner red
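The printed wire format above can be sketched as a tiny Python sender. The host, port, and packet layout come straight from the instruction page; the helper names are mine:

```python
import socket
import struct

def encode_doodle(pixels):
    """Pack pixels into the doodle grid's UDP message format:
    one PixelCount byte, then x, y, r, g, b (all u8) per pixel."""
    msg = struct.pack("B", len(pixels))
    for x, y, r, g, b in pixels:
        msg += struct.pack("5B", x, y, r, g, b)
    return msg

def send_doodle(pixels, host="192.168.4.1", port=27420):
    """Fire one doodle message at the backpack's AP address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(encode_doodle(pixels), (host, port))
    sock.close()

# Make the corner pixel red, like the example on the instruction page:
# send_doodle([(0, 0, 255, 0, 0)])
```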

I threw the LED grid in its original box, put it in the backpack, and put it in my checked luggage. The big battery pack and circuitry went in my briefcase. TSA didn't even flinch (in either direction), so that's cool. During my flights to Vegas, I spent some time on the iOS app, and then once I got to Vegas, I put the setup together and spent a bit of time fixing the app.

On Thursday, I talked with ffe4 over breakfast, and he was interested in messing with the backpack.

Later that day, while in the badge/registration line, I was wearing the backpack, showing it off, and it was cool, until I got hacked. A literal baby pulled on the wires for the LED grid until they disconnected, which was hilarious and educational. I got a few pieces of gaff tape to control the bare wires, and ran with that for the rest of DEF CON. I also ran into sigtrap and counterflow, who were really hyped about the project! They both took a couple pictures of the instruction page, since none of us were toting computers.

a twitter post 'dude, your backpack's lit'

Over the weekend, I wore the backpack and had it showing my name most of the time. It barely dented the big USB battery pack I brought, and the backpack was also useful for keeping beverages and stickers handy.

CTF ended Sunday afternoon, shortly before closing ceremonies, and I went into the CTF room to see the final countdown. Without coördination, ffe4 and sigtrap were also there, and showed off some of the progress they'd made in the chaos of the last few minutes of CTF.

For 2019, I'll probably work on the software a bit more (it got crashy when misused) and figure out how to put it on the DEF CON network instead of its own thing.

source code

The microcontroller source is at https://github.com/vito-lbs/doodle_grid . I used the Arduino IDE for it, so the interesting stuff is in the doodle_grid.ino file.

The iOS client is at https://github.com/vito-lbs/doodle-grid-client . The drawing mechanics are mostly in DoodleView.swift, the network stuff is in ViewController.swift, and of course there's a storyboard that's optimized for iPhone X (deal with it).

greets and shout-outs

Thanks pronto and shadghost for fun times and reference backfills for SHA-2017. Huge thanks to piesocks for getting me thinking about low-power hardware for toorcamp. Thank you rager & quails for just messing around with the taped together display at toorcamp and also for the rum-mune. Thanks for the picture, crowell. And thanks for messing with my display, ffe4, sigtrap, and counterflow :)

Finals | Building DEF CON CTF

This is part 4 of a series of posts about Building DEF CON Capture the Flag.

Finals is the hardest part of CTF to run. A lot of it is timeboxed (we generally didn’t have access to the CTF room until Wednesday or Thursday), it’s a pain to get stuff in and out of the conference area, and there’s a definite feeling of do-or-die. Why? It’s where your reputation among the CTF community comes from, it’s an important show for DEF CON attendees, and it’s a justification for CTF’s continued existence at DEF CON and in the hacker community.

For us, preparation for finals started before the previous year’s game wrapped. The 2015 announcement of the CGC architecture for 2016 was planned well in advance, and the 2016 announcement of the custom architecture for 2017 was already a year in the making.

Preparation and Infrastructure

Designing the game itself is something that also has to get started early. We tended to spend a lot of meeting time on the scoring algorithms, plenty of both meeting and individual time on infrastructure, and a ridiculous amount of individual time on building challenges.

Our scoring algorithm had gotten really good by 2015 and 2017. Each team’s service instance would start with a pile of virtual flags, which could be stolen by other teams, or lost to other teams on an availability check failure. In addition to competitor teams, we had a secret team that got remainder flags after uneven divisions, and had a private instance of each challenge to verify that availability checks worked as intended. Novelty was rewarded by not sharing stolen flags with other teams, which, especially with the public patches of 2017, worked amazingly well.

Infrastructure is where I spent most of my time, along with Selir. The scoreboard (“scorebot”) for every year but 2016 was a Rails & Postgres app, because that’s what I’m most familiar with running. The most important things to me were making the database (mostly) append-only, creating all the necessary database constraints and foreign keys, and realizing that you only have like 200 users, tops. After 2013, we realized we needed admin screens to answer team questions; there will be lots of those.

Why append-only for the database? We found a lot of value in being able to re-score the game offline. Teams were affected by hardware faults and scoring problems, because the game has to run in the real world. Since teams can't work around these issues, it's unfair to penalize them. We'd identify these issues during game time, and let teams know we'd reprocess them. Reprocessing is easiest to reason about when you have a separation between player-driven events (successful token redemptions, unsuccessful availability checks) and score change events (capture or penalty flag movements).
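As a hedged sketch of what that buys you (the event shape and names here are made up; our real schema was richer), re-scoring from an append-only log looks something like this:

```python
from collections import defaultdict

def replay(events, voided_ids=frozenset()):
    """Derive per-team capture counts by replaying player-driven events.
    Reprocessing a scoring problem is just voiding the affected events
    and replaying the rest; nothing in the log is ever mutated."""
    captures = defaultdict(int)
    for ev in events:
        if ev["id"] in voided_ids:
            continue  # e.g. redemptions during a known hardware fault
        if ev["type"] == "token_redemption":
            captures[ev["team"]] += 1
    return dict(captures)
```

Because score changes are derived rather than stored, fixing a fault offline and re-deriving the standings is a pure function of the log.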

Our network was fantastic. Selir (who is an absolute networking genius) had it rigged up so team tables got untagged game traffic, and DEF CON network traffic (including public internet) on a VLAN. Player game network access was rate-limited to about the speed we could capture and store traffic, which also reduced their ability to flood the network. Since network dumps were captured and made available for teams to download in five-minute intervals, this made the dumps less noisy too.

We ran vulnerable machines for teams. This meant we could surprise players with ARM, didn’t expect teams to bring more hardware than strictly necessary, and most importantly, allowed us to give teams unprivileged or no access to the machine to prevent unstoppable “superman” defenses.

What do we mean by this? We’ve had teams wrap challenges to forbid them from reading the flag from the filesystem, among other things. It’s possible to design challenges to make this intractable, but it’s way easier to just deny teams that kind of access to “their” machine.

We used "consensus evaluation" for 2016 and 2017. Consensus evaluation comes from the DARPA Cyber Grand Challenge: teams (or in CGC, autonomous computers) upload replacement service files to the scoring system, which is then responsible for placing the replacements on filesystems to be evaluated, and for sharing the replacements with other teams. One comment I heard after our 2017 game was that consensus evaluation makes it feel more like a battle with another person.

Writing challenges for finals is very different from qualifiers, because they have some features that get availability-tested, and ideally have multiple unrelated vulnerabilities. They need to be both difficult to attack, and difficult to defend. Like quals challenges, that’s another discussion entirely.

Live Operations and Game Management

Game operations and competitor management are a big part of finals: getting everything set up in the room, making sure it works, getting teams in and connected, actually kicking off the scoring system at game start, and stopping the game at end of day. We do a test setup the Thursday before the game starts, making sure the wires to tables work, the firewall rules are all configured, and that there’s enough power on the floor to run the game. Everything that can be scripted is something that you probably won’t mess up the next morning.

Competitors need to know when and where the competition is. Some competitors will need invitations to the US in order to get a visa to travel to Las Vegas. Once competitors are in Vegas, they will need help getting DEF CON badges from you, which is difficult because casinos are intentionally confusing and not every team has proficient English speakers.

Getting teams in the room and on the network is something that can take some coördination. We prefer to have emailed and printed information on when to come in, and what the network will and won’t provide. Emailed and printed documents mean players not fluent in English can get them translated by a teammate ahead of time, and makes network setup less unfair to teams that have never competed at finals before.

We allow an hour or so for setup time in the mornings once players are allowed in, with a few game elements network-reachable, but no scoring allowed. In the meantime, we’re coming up with a rough plan of what services we want to release that day. When it’s a minute or so before game start, things get really tense and quiet as we count down on the microphone: I’d be armed and ready to fire the polling/flag redistributing service, and Selir would be armed to change the firewall rules from setup to gameplay.

Scheduling services for release during the day is a complex topic, especially since it involves dropping binaries and pollers in the right places, activating them in the scoring service, and, most of all, having every team start looking for any kind of weakness in the service, intentional or not.

We show full scores and rankings on Friday, rankings on Saturday, and nothing on Sunday. This means there can still be a surprise upset for closing ceremonies, and keeps teams more invested, since they won't know if they're on the verge of moving up or down in the rankings. We've had a few surprise upsets in the past because of this!

Make sure you capture backups, and practice restoring them. This isn't just disaster preparedness, it's a powerful enabler for during-game dev work. Being able to drop a backup, load it on your dev machine, and test score fixup scripts, admin screen changes, and other scoring system changes is extremely valuable in its own right.

Wrapping Things Up

Introducing CTF finals during DEF CON closing ceremonies is absolutely thrilling. You're on stage in front of a massive crowd that (ideally) respects and supports you, and you get to introduce the top three teams from a very intense weekend. How do you get there?

By the end of the game Sunday, you should have a pretty good idea of the top three teams, know of any last-minute scoring fixes that need to be run, and have your scoring database backed up elsewhere.

Downtime on Sunday is also a great time to write your speech. This isn't as hard as it sounds; the rules are pretty easy to follow:

  • Make the game sound hard and imposing
  • All the teams are wonderful competitors
  • Thank everyone involved: DT and the rest of DEF CON staff, especially the goons, competitors, DEF CON attendees, and the global CTF community
  • Third place, second place, and finally, first place

Before closing ceremonies, know who'll be talking on stage. Don't let them drink too much (there's plenty of time after they're done being in front of a mic). We put speeches on notepads (for reliability), and each speaker transcribed their own part (for legibility).

After closing ceremonies, get trashed. You survived DEF CON CTF!

Thanks Matthew Pancia for proofreading and reviewing.

Qualifiers | Building DEF CON CTF

This is part 3 of a series of posts about Building DEF CON Capture the Flag.

Quals, to me, is the most important part of DEF CON CTF: it’s the only game we make that most teams will have any interaction with, and for the teams that do qualify, it’s the best way to prepare them and give them an idea about what the finals are going to be like. We come to a consensus about dates in December, and try to have date announcements out on January 1. Yes, we opened quals registration on April 1 each year on purpose.

Picking April or May for qualifiers is important, for several reasons. In 2013, our first year running it, we didn’t know we were hosting until March, so we picked mid-June like previous organizers. That only gave us a month and a half for finals prep, which felt like a panic. More importantly, it only gave players traveling from some countries six weeks to go through the US visa process.

Actually building qualifiers is a lot of work! Challenges are a whole post or series of posts on their own, but the important parts are brainstorming ideas for them, making sure they’re solvable by teams, and getting them running stably on infrastructure that will survive the game. Challenges need to be tested by someone coming in with a blank slate, and the challenge author needs a solver that works reliably in production.

Estimating difficulty is hard work, so what we found works best is just guessing at an unlock order a few hours before the game starts, and scoring challenges based on how many times they’re solved.
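The post doesn't give the exact formula, but solve-count-based scoring is usually some decay curve; here's one hypothetical shape (names and constants are mine, not LegitBS's):

```python
def challenge_points(solve_count, max_points=500, min_points=100):
    """Hypothetical decay curve: the more teams solve a challenge,
    the fewer points it's worth, with a floor so it never reaches zero.
    Every solver of a challenge is awarded its current value."""
    if solve_count <= 1:
        return max_points
    return max(min_points, max_points // solve_count)
```

The nice property of any scheme like this is that organizers never have to guess difficulty up front: the field's behavior sets the price.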

Production operations for challenges are worth thinking about at development time. Challenges as stdio binaries that don’t save any state to disk between connections are good. We built runc images that would be launched by xinetd, and that consistently worked great.
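A minimal xinetd service entry in that style might look like this (the service name, port, and wrapper path are all made up; the wrapper would start the runc container with stdin/stdout wired to the socket):

```
# hypothetical /etc/xinetd.d/some_challenge
service some_challenge
{
    type        = UNLISTED
    port        = 10001
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    # made-up wrapper that execs the challenge inside its runc container
    server      = /usr/local/bin/run-challenge-container
}
```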

The scoreboard isn’t terribly difficult. There’re lots of Jeopardy-style CTF scoreboards available, running a web application is a turnkey thing, and you should make sure the database gets backed up (we backed up hourly and before deploys). Let teams register during the competition, and make a public scoreboard (for non-logged-in visitors) available and obvious. Lots of players don’t think to register until the game is actually on, and many players want a link to share how they’re doing with friends, family, or coworkers.
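The hourly half of that backup cadence fits in a single crontab entry like this hypothetical one (database name and paths are made up; the before-deploy dump would live in your deploy script):

```
# hourly, timestamped pg_dump of the scoreboard database
# (% must be escaped as \% inside a crontab command field)
0 * * * * pg_dump -Fc scorebot > /backups/scorebot-$(date +\%Y\%m\%d\%H).dump
```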

We found a lot of value in being in the same place for qualifiers. In 2013 and 2014, we used an office a bunch of the team worked from. 2015 and 2016 we used somebody’s house (and didn’t even trash it). 2017 we rented a party house for the weekend, and it was mostly good. We spent a lot of time in the pool, nobody had to drive home on a heavy drinking night, and it was a great time! Unfortunately, the internet was slow and LTE was jank, but we worked around that.

Qualifiers isn’t an easy game to run per se, but it’s very rewarding, and the lack of constraints for preparation meant that it was usually a pretty smooth experience for us.

Coming Soon:

  • Building Finals

Thanks Matthew Pancia for proofreading and reviewing.

Hype, Meetings, and Workflows | Building DEF CON CTF

This is part 2 of a series of posts about Building DEF CON Capture the Flag.

Basic Hype Game

Many parties want to see you succeed running DEF CON CTF: the DEF CON organization, past organizers, past competitors, and everyone in the CTF community. We all have a vested interest in seeing DEF CON CTF as a popular game with lots of players, which means you need to bring your hype game.

I consider our marketing/hype efforts under way once we launch the website, usually on Jan. 1. For us, this meant agreeing on quals dates in December. Dates are mostly arbitrary, but Jan. 1 was a convenient deadline to target, and gets the rest of the team in a CTF frame of mind.

Besides the website, letting people know about the upcoming game is useful. We kept a Twitter account active, posting announcements leading up to and during the game. Mentioning DEF CON’s official account is easy to do incidentally, and they’ll retweet CTF stuff to their zillion followers. We also had public Google Plus and Facebook pages, but I never felt like they got the traffic that Twitter did.

A CTF game is inherently fun and easy to advertise to CTF enthusiasts. What we’ve always struggled with is non-CTF enthusiast/professional types, especially after establishing a reputation as being very binary-heavy. We leaned in to it a bit, hyping up a challenge or two in 2017 as being web-based when they were just binary reversing that happened to speak HTTP.

One thing I enjoyed and think helped us was having the “#” (octothorpe) brand. The vine-covered computer was a good recognizable image in 2014, and the spraypaint-style version from 2015 has stuck around since then.

Meetings

In 2017, we met every Wednesday from January until Vegas. We didn’t go all Robert’s Rules of Order but I did maintain meeting notes each week, and persisted any ongoing stuff that needed/expected work from week to week to make sure it was getting done.

Keeping meetings on track is hard! Start with an agenda and know which items are likely to become an open-ended discussion (for us it was challenge infrastructure and challenge difficulty). Make sure that when open-ended discussions come up, you interrupt and defer them until the end of the call. You’ll either forget the less-exciting stuff, or not give it the attention it needs.

Keep the meetings friendly and fun! You’re all relying on each other, and if meetings go badly or turn personal, you’ll find tasks will slow down or never get done.

Workflows

We didn’t use Scrum™ or any other documented workflow. I kept a personal board on Trello of stuff to do, but didn’t expect anyone else to use it.

Be ruthless about things that don’t need to get done right away or ever. A full month after 2017 qualifiers, when I hadn’t finished the quals stats dump, Gyno told me to let it slide until after finals (I swear I’ll get it done one of these weeks). Ambition can be good: that’s why you’re running a CTF in the first place, and it’s where legendary challenges come from. But it’s risky, and when you have a deadline, sometimes you just want something you know you can do reliably.

Coming Soon:

  • Building Qualifiers
  • Building Finals

Thanks Matthew Pancia for proofreading and reviewing.

cLEMENCy - Showing Mercy

With the closing of DEF CON 25 and our last year of running Capture The Flag, I figured a post about what it took to create cLEMENCy was in order. This is a very long write-up detailing what I went through while developing cLEMENCy, and it ignores all the effort that occurred on top of it to create other challenges for DEF CON CTF 24 and 25. Hopefully this post shows the amount of dedication it can take to run DEF CON CTF.

When I joined Legitimate Business Syndicate in January 2014, I made it known to the team that for our last year I wanted to do a fully custom architecture. As luck would have it, the Cyber Grand Challenge (CGC) happened in 2016, which allowed me to take a backseat to most of the CTF challenges and focus my evenings on the emulator and tool development.

My first processor document was created on August 11, 2014. Highlights were:

  • Stack growing upwards
  • Little Endian
  • 25 instruction groups
  • 32 registers that could be used as integer or floating point
  • 8 interrupts
  • DMA transfer involving the ID of a device you want to talk to in a register along with all areas that were potentially relevant
  • Memory protections

There were questions about whether or not I should add in logic to allow threading, and whether the firmware was static or a custom format that allowed modules. A month later, Gyno dropped in notes about the Hexagon DSP, Mill, and Cell architectures. I had tossed around other ideas in my head in relation to Harvard architectures and the idea of swapping out the opcode lookup table as opcodes execute, tying a unique table to each opcode, resulting in code obfuscation.

2015

I began laying out the basics of the opcodes and started writing actual emulator code at the end of November 2015, almost a full two years after joining the team, having pondered design ideas during that time. The development started shortly after my break from CTF finals that year, and after a number of rapid updates and changes to the architecture idea document. Although the original architecture document called it “Lightning CPU”, I officially named the project “DMC” (Defcon Middle-endian Computer) in December: I own a DeLorean and wanted to tie in a reference to the DeLorean Motor Company.

A large amount of development on the emulator occurred at the end of December, thanks to a two-week vacation. By the end of December, 23 files, 1,867 lines of code, and 344 lines of comments had been put together for the emulator. This obviously does not count refactoring and various reworking that occurred in the early stages; still, a total of 2,211 lines in around 30 days was not bad (almost 74 lines a day). The specification was adjusted and refined during this time while Thing2 and I chatted about middle-endian. Ideas were still floating around about replicating the SPARC sliding register window and allowing registers to combine to expand the number of bits used for math.

I created a personal goal of keeping things simplistic enough to allow teams to learn the architecture in less than a week with the idea of just handing the teams a manual a week before the competition. The end of 2015 was spent with random ideas and tweaks being done to align it to be RISC-like and refining the running document to remove complexities to help tailor the architecture to a setup that should be easy and quick to learn.

In order for the team to be able to develop challenges, I needed to have a full toolset for them to work with. Work started in January 2016 on an LLVM configuration for the clang compiler, after giving up on adapting GCC to middle-endian. The complexity of modifying LLVM resulted in looking at a number of simple C compilers, including TCC, before stumbling across NeatCC (NCC), developed by Ali Gholami Rudi. Its benefits were that the per-architecture core file was simple, and that its author also provided a lightweight libc that compiled with it and a linker, while supporting the ELF object format to link together multiple C files.

2016

The first few months of 2016 involved minor modifications to NeatCC to get it ready for creating firmwares for DMC, along with creating a Python script that would auto-generate an SQLite database with information about opcode layouts. The database had been planned because it would allow auto-generating a header file of instruction data for a C disassembler that the emulator would eventually have. The database would also drive the planned Python assembler and disassembler, to avoid massive parsing code, and be used for creating the documentation.

By mid-April 2016, NCC, the Neat linker (NLD), and the emulator were usable, and I started testing between the tools and the emulator. During this time period, the Python script to parse the SQLite database and generate the initial HTML documentation was created, along with the addition of more instructions. NCC did not support embedded assembly, so an external assembler was created: Lightning Assembler (LAS). June is when the architecture was renamed “cLEMENCy” (LEgitbs Middle ENdian Computer) and all code updated to reflect it. I wasn’t completely happy with the DMC name, and enjoyed that clemency means mercy, which I was showing by not going all-out in complexity.

It was during DEF CON 24 CTF Finals that things changed. Thing2 and I were talking that Friday about how to break tools with the architecture when I had an epiphany! I asked him if making all bytes 9 bits would break everything, and he could only smile at the idea. I ran it past the rest of the LegitBS team and the consensus was that, if I could prove it would work, then why not? While all the competing teams worked on finals that year, I was creating a proof-of-concept. By the end of the competition, I was able to prove that I could not only convert between 8- and 9-bit formats easily, but that I could make it work with the tooling and setup I had previously developed.

By the end of September, the instruction format, assembler, disassembler, NLD, and NCC were converted over to the new 9-bit byte layout. In the process, I came across a number of parsing issues in NCC that were not showing up in the latest release. I made a new pull of NeatCC, NeatLD, and NeatLibc at the beginning of October, spending two weeks dealing with a massive merge and rewrite of the code to make it compatible with the new setup.

Mid-October 2016 was the beginning of modifications to the emulator to adapt it to the 27-bit, 9-bits-per-byte setup: another two-week process. While doing the rewrite, debugger and disassembler functionality started to be added to the emulator, allowing for simple, expected functionality: dumping registers, breakpoints, and single stepping, to name a few.

The emulator was a bit slow, though, so instead of adding floating-point logic to NCC, I chose to remove the floating-point code. This eliminated the masking, compare, and branch logic on most of the opcodes, and only required adding a flag indicating whether the processor supported floating point so the documentation didn’t have to change. This brought the emulator up to around 8 million instructions/sec on my old laptop, which I deemed good enough since the infrastructure would be a bit beefier. The biggest speed hit appeared to be the constant shifting and masking required for 9-bit byte access.

November 2016 saw modifications and assembly file additions to NeatLibc specific to cLEMENCy, and the first full firmware images built across the tools for the emulator to run, beyond the simple assembly tests that had been done after the 9-bit conversion. I had to create a custom memory allocator in NeatLibc because the original one relied on mmap() and the firmware had a fixed amount of memory to work with. LAS was modified to write compatible ELF objects so the assembly could be linked in.

The rest of November and December were spent squashing bugs, adding enhancements, and writing assembly files for things like millisleep (similar to nanosleep). During this time I also added code for inverting the stack by recompiling ncc as ncc-inv. The fun of tracking bugs at this point was determining whether the emulator, disassembler, assembler, linker, or compiler was the culprit. Some bugs were improper masking in the emulator, some were the C compiler emitting wrong values due to improper bit combining, while others were the linker writing offsets incorrectly or miscalculating where to modify data. While trying to validate issues, bugs would turn up in edge cases in the assembler and disassembler.

2017

By mid-January 2017, enough bugs had been squashed that Perplexity, my Finals challenge that I’m sad to say was never finished, was being used to test functionality fairly ruggedly. Perplexity’s goal was to make people question whether I had created a C++ compiler for the architecture on top of everything else that was developed. It was a C binary with structures set up to allow vtable configurations, and NCC was modified to allow two colons side by side in a function name so that everything looked like C++ during development. In January 2017 the plan was to release just the architecture manual to the teams, although there was debate about how early the teams should have it. There was also a question of whether all challenges would be written for the architecture, since the rest of the LegitBS team had not used any of the tools yet.

By the end of January 2017, the emulator, assembler, backend file for the C compiler, and custom files for NeatLibc totaled 54 files and 13k lines of pure code, not counting comments. Sirgoon had begun working on a physical version of the processor on an FPGA, but due to personal matters this was scrapped. If anyone ever turns this into hardware, I would love a copy.

The tools and emulator were stable enough in my testing that I had nothing to fix; I just needed to write my Finals challenge until others began using them and reporting bugs. Work progressed on Perplexity with occasional additions to cLEMENCy’s debugging abilities. At this point there was still no plan to release any of this to the teams beyond the manual, which had not been modified since the end of December 2016.

Come April, the DBRK instruction was added to help pinpoint parts of the code after recompiling. Although the map file existed, I did not have line-specific information, so identifying specific areas after a recompile was faster with DBRK. It also let me quickly test theories about ways to land some of the bugs I had already added to Perplexity.

We decided to rent a house for our last time running quals. During this time I continued working on Perplexity, and showing the team the tools in person was very helpful: since we were face to face, questions and issues got answered faster. It was also requested during this time that shared memory be added, and as a bonus I added the NVRAM memory. There were plans for a challenge to use the shared memory, but I’m not aware of it actually happening. During quals we determined that one connection per team per service would limit the load on the boxes, and no planned challenges required multiple connections. This limitation was implemented and tested during quals.

After quals, Vito put in effort to start getting a physical manual created, Selir worked on porting old challenges so we could benchmark how much processing power was required with all services under attack, and the rest of the team began using the tools to create and port the challenges they had been working on. This became a pressure point for me: requests came in from a bunch of random directions on the documentation, tools, and architecture, and of course bugs began to show up.

I created a separate Slack channel just for tracking emulator changes, where the team could stay aware of them and report issues. It was not uncommon for a single day to see 10+ small changes and bug fixes to the tools and emulator, with a weekend bringing 20+. As we moved closer to Finals, the rate of updates and bug fixes only increased.

Just to list a few random bugs fixed in one weekend as a taste of things that were fixed:

  • Some of the branch compares had improper checks, resulting in invalid if statements
  • Memory protections on the flag area needed to be enforced
  • Millisleep and strcpy had edge cases to correct
  • Exiting debug in certain situations resulted in an unusable terminal
  • Timers were not firing properly
  • Signed immediates were handled incorrectly

Normally when challenges for Finals were created, a number of bugs would be added to the service, with proofs of concept showing at least control of PC and the ability to continue execution. Due to the new architecture, I made a request that the team accepted: any bug in a challenge must be proven able to return the flag. This resulted in the creation of a rop-search tool to help prove that bugs could be landed in the challenges, and it surfaced an issue: our limited code size and lack of threading made a stack pivot nearly impossible if you only controlled a register and PC.

Gyno came up with the idea of a page of memory that would have gadgets in it. Vito and I were just about ready to get the physical manuals printed, but I had not figured out the exact details for this new memory area. I decided to leave it out of the official documentation, reasoning that a physical processor wouldn’t have this memory page: it came purely from the emulator, and being in the DMA area it could be seen as a separate device. I created a script to auto-generate a random-character text version of the LegitBS logo combined with text, made to look like an NFO from a warez leak. The concept was to leave a few hints: the help menu saying to enjoy the NFO section, and an ELF section named NFO containing the normal ASCII text. Reading it would tell teams where the NFO was loaded in memory, and the random-character setup masked embedded ROP gadgets for pivoting from any register to the stack.

Near the end of June 2017 we decided that the teams would be given a copy of the emulator, along with its built-in debugger and disassembler, and the architecture manual 24 hours in advance. We decided this after watching lunixbochs (on the usercorn team) implement the NDH CPU in an emulator in a few hours. Releasing the emulator and built-in tools guaranteed that everyone had at minimum the tools required to compete, and avoided time being wasted at the start of the game getting tools created. I had not planned on teams having the emulator, so Perplexity development was shelved. At this point Perplexity was 43 files and 2,808 lines of code, not counting comments. I was sad to shelve it, but I needed to make sure the emulator was ready for teams. Each compile now produced 3 versions of the emulator: the production version with seccomp and debugging stripped out, our debug build, and the team debug build with the instruction and register state history stripped out.

Near the end of the last week before Finals, things appeared to be going smoothly. The emulator was chugging away at 7M instructions/sec on my laptop, and we were not overloading our infrastructure, which was being tested with multiple sample binaries. Selir had configured random connections and data for the sample binaries between all the fake teams to help stress test and watch for CPU spikes. Then I made a painful discovery: a mistake in a select statement.

Can you spot the mistype? A simple 1 changed to a 0 resulted in my poor laptop cranking out 40M instructions/sec easily, an almost 6x speed-up. The boost was nice, but I had to keep from kicking myself: I had tossed floating point early in the process over such a simple mistype. We were close enough to game day that adding floating point back in was risky, and it wasn’t needed, as challenges had already been written to avoid its usage.

Game Day

I arrived in Vegas on Tuesday, July 25. In previous years I always had a challenge in Finals and was making last-minute changes right up to game start: adding bugs, testing functionality, and enhancing the poller script. This year was different; I had no challenge, just the architecture. The team was spread out between multiple hotel rooms, and a lot of the time I was just sitting around, so I decided to toss my laptop in a backpack and wander Caesars. The team knew I had my phone, and a ping on Hangouts or Slack with a room number would result in me showing up. I had only ever gone to DEF CON for CTF - 6 years straight, always busy, competing for 3 years and helping run it for 3 - and having my 7th year to just wander was odd and surreal. I wandered while the team was busy finalizing their challenges; I had nothing to accomplish until an issue was discovered or someone needed help. On Thursday we released everything and were alerted to a couple of mistypes in the manual. The delayed answer and fix came because I was off in the swag line and needed to get back to one of the rooms, as I refused to use WiFi. Thursday is also when a clang bug was discovered: the -O3 optimization was triggering an edge case in the networking code. After a couple hours of testing and watching the bug appear and disappear purely based on adding debug logic, I recompiled with -O2 and everything worked as it should have.

Friday: I hated this day with a passion. From everyone else’s perspective Friday kicked off well: challenges appeared to go off without issue, HITCON landed 3 different first bloods, and everything appeared to run like clockwork. Behind the scenes, I couldn’t stop shaking before game start and refused to drink anything, being a lightweight; I needed to be able to think straight if a last-minute fix was needed because things went to hell. We had done our testing, and we ran a version of the emulator that no teams had, to help contain any breakout if one existed, but if something broke it would likely be on my head. I was stressed.

Mid-day Friday a bug was discovered in the custom malloc: it wasn’t re-handing out freed blocks, and once that was fixed, 2 high-memory-usage, unreleased services would randomly fault. Sirgoon and I spent all of Friday afternoon going through the allocator. By the end of Friday I was done in and actually feeling sick, probably from the stress I put on myself. We had fixed one of the services, but the other continued to act up for unknown reasons. Sirgoon and Selir saved me: Saturday morning I found out they had stayed up after Friday’s competition and hand-verified the allocator together, finding no faults beyond what was fixed Friday afternoon. They then started looking at the challenge itself and found a null deref. Because of the offset, it would wrap around to high memory to fetch the allocated block information. That high memory was read/write, although writes were ignored, resulting in no crashes but invalid information leaking into the memory block chain. The rest of the weekend was far more relaxing and a huge weight off my shoulders.

Closing

Back in 2014, before game start at Finals, two teams approached me swearing up and down that we had a custom architecture that year, as they both knew my background. I pointed out the difficulty in creating something custom - the tooling and testing required - and tried to convey just how hard it was. When it was announced at the 2016 closing ceremonies that we were doing a custom architecture, the room went quiet for a moment; it was eerie. A number of the top teams knew me as the one who created the hardest challenges, and also knew that if a custom architecture was to be done, I would be involved. Although teams did not know I was the sole creator of the architecture until shortly before the contest, I heard rumors that teams were afraid of what would be created due to my involvement. I’m actually proud that I struck fear into teams. Balancing ease of learning against complexity and novelty is not easy; I spent a lot of time tweaking ideas and judging whether I thought the teams would adapt well enough before writing any actual code. I could have done far worse, and I’m glad to see my years of effort paid off and that it was thoroughly enjoyed.

I have created a number of custom architectures during my years of development, and I hope that my challenges and cLEMENCy will leave a mark on the CTF scene. I thoroughly enjoyed creating my masterpieces, and also learned that some people on my team and on the competing teams think I am insane. Perplexity, with all of my notes, will be pushed to the LegitBS repo when the Finals challenges are pushed. It was never finished, but it shows what could have been. Thank you to Legitimate Business Syndicate, not only for asking me to join the team but for supporting my ideas and plans. I also have to thank each and every player that took on my challenges over the years, even if they didn’t solve them. I hope I helped others strive to learn new things and push the limits of their knowledge.

-Lightning

Team Building and Proposing | Building DEF CON CTF

It’s been a great honor and pleasure to be part of Legitimate Business Syndicate while hosting DEF CON Capture the Flag for the last five years. Now that DEF CON has announced the selection process for the next DEF CON CTF organizers, it’s time for the next organizers to step up to the plate, take the reins, and mix their metaphors on the way to becoming the new DEF CON Capture the Flag organizers. I hope you’ll find this series of posts useful when building your next Capture the Flag game, or writing a proposal for the big one.

Team Building

The most important part of running DEF CON CTF is the team you run it with. You have to trust in your other teammates’ skills, because running a complex, multi-challenge CTF alone simply isn’t doable. Before you can organize a CTF, you first must organize a team. This team should have skills with network operations, application operations (devops), forward and reverse engineering of complex networked services, full-stack database-backed web application development, real-time computer graphics, visual design, and more. More than the skills though, team members should be able to explain and share their knowledge. If you’re not cross-training skills among your team, you’re actively harming yourself.

Make sure team members know what they’re getting into. Learn what DEF CON CTF means to them, why DEF CON CTF is special to them, and why they should dedicate years of their life to it.

Naming your group is important! We got a lot of mileage out of all the permutations of “Legitimate Business Syndicate.”

You don’t have a team without a way of communicating with the rest of your team. When we started in 2012-2013, we used a private Google Plus group. Since the end of the 2013 CTF season, we’ve used Slack, which you’re almost certainly familiar with. We also have a ton of stuff on Google Drive: meeting notes, material for publication, expenses, etc.

We’ve shared the same gitolite install the whole time. Infrastructure projects keep to one repo per project, like the ideal “Twelve-Factor App.” Challenges tend to live in per-challenge-author repos, because challenges are pretty fluid and git merges can be hard. Lightning and I had plenty of ordinary merge conflicts while I was working on the typesetting system for the cLEMENCy manual, and that was enough to knock both of us out of our workflows.

Writing the Proposal

We wrote our proposal basically as soon as we knew what questions we had to answer. Get everyone together in the same room for a whole weekend, a month before the response is due. Seriously. Submit that shit early. DEF CON will forgive some weirdness and inconsistency more than they'll forgive lateness.

Get everyone who’ll help you run the game your first year together and just hash it out over a weekend. Even if there’s a three-hour drive involved. Even if there’s a flight involved. Not even joking. The proposal will be what guides and advises you through the hardest CTF you’ve ever participated in.

We put 95% of ours together over a single weekend, a month before it was due.

Seriously, Just Fucking Do It. The only thing you have to lose is not hosting DEF CON CTF.

Coming Soon:

  • Basic Hype Game
  • Meetings
  • Building Qualifiers
  • Building Finals

Thanks to Lightning, Murmus, and Zap for proofreading and reviewing.