April 26, 2026 Tagged: musings
ZFG has spoken: The Twilight Princess PC Port Is Incredible. And indeed, it is. It brings so much joy to my heart seeing the games I grew up playing given new life through these ports, in which mods run the gamut from simple frame interpolators for smoother animations to full-on online co-op randomizers for Ocarina of Time with more added mechanics than you can shake a stick at; all in all, mighty impressive stuff. Over the last year or so, these ports have been popping up like mushrooms after heavy rain, and if you follow any online personality in this space, you probably have at least a vague idea that the clouds that produced this particular bout go by the name of static recompilation. And while I absolutely am not here to shit-talk all the – genuinely – incredible things this technique has facilitated, from the way I hear it talked about, I worry that we are collectively misunderstanding what it actually is, in a way that might come back to bite us in the ass. And I wanna talk about that.
Let me get a little more specific. When I hear people describe things as “static recompilations” in this context, what they are usually referring to are projects that have been run through a tool like N64: Recompiled. At their core, these tools take the machine code in a ROM file and spit out semantically equivalent C code, which can then be compiled to whatever platform one desires, or even back to the original platform the game runs on. What this means in practice, as far as PC ports are concerned, is that the instructions that make up the game get turned into whatever machine code is native to the PC – usually x86, but every now and then also ARM – and run at full speed, as if it were any other program.
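To make that a bit more concrete, here's a hedged sketch of what that translation step might produce. The function name, register layout, and the MIPS fragment are all invented for illustration – this is not actual N64: Recompiled output – but the shape is the same: each guest instruction becomes a C statement operating on a struct that stands in for the console's CPU state.

```c
#include <stdint.h>

/* Illustrative CPU state: the console's general-purpose registers,
 * carried around explicitly by every recompiled function. */
typedef struct {
    uint32_t gpr[32];
} CpuState;

/* Hypothetical output for this tiny MIPS fragment:
 *   addiu $t0, $zero, 5     ; t0 = 5
 *   sll   $t1, $t0, 2       ; t1 = t0 << 2
 *   addu  $v0, $t1, $t0     ; v0 = t1 + t0
 * ($t0 = gpr[8], $t1 = gpr[9], $v0 = gpr[2] in the MIPS convention) */
void recompiled_func_80001000(CpuState *s) {
    s->gpr[8] = 0 + 5;                   /* addiu $t0, $zero, 5 */
    s->gpr[9] = s->gpr[8] << 2;          /* sll   $t1, $t0, 2   */
    s->gpr[2] = s->gpr[9] + s->gpr[8];   /* addu  $v0, $t1, $t0 */
}
```

Compile that for x86 or ARM and the game's logic now runs as native code – which is exactly the "semantically equivalent C" step described above.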
That is all well and good, but of course, by itself it isn’t enough. After all, say, the Nintendo 64 didn’t run Windows – or Linux, or macOS, or Android, etc. – and, as such, simply moving the instructions over won’t magically teach a game made in the ’90s what Wayland is – even though it wouldn’t feel particularly out of place in that time period. To handle this particular aspect, we need supporting code that can understand the way the game talked to whatever environment it was originally designed to run on and that can translate those interactions to something modern systems can understand. In effect, what we need is something that can emulate the original environment.
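As a minimal sketch of what that supporting code does – with every name here invented, since the real interfaces are far messier – imagine the recompiled game writing pixels to what it believes is console framebuffer memory, while the runtime quietly redirects those writes into a host-side buffer it can later hand to a modern graphics API:

```c
#include <stdint.h>

#define FB_WIDTH  320
#define FB_HEIGHT 240

/* Host-side stand-in for the console's framebuffer memory. */
static uint16_t host_framebuffer[FB_WIDTH * FB_HEIGHT];

/* The recompiled game "thinks" this pokes console hardware... */
void console_fb_write(int x, int y, uint16_t rgba5551) {
    /* ...but the runtime lands it in host memory instead. A real
     * runtime would present this buffer via SDL, Vulkan, etc. */
    host_framebuffer[y * FB_WIDTH + x] = rgba5551;
}

uint16_t console_fb_read(int x, int y) {
    return host_framebuffer[y * FB_WIDTH + x];
}
```

The translated instructions never learn what Wayland is; the runtime sits in the middle and speaks both languages.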
And this is where it gets interesting. Everything up to this point is being reported on fairly accurately. The projects themselves clearly explain the need for a runtime, and the mechanics of recompilation are likewise well explained and understood. Things start going wrong with the conclusions I’ve seen drawn from these facts, especially when it comes to the relationships between static recompilation and both emulation and reverse engineering. If you noticed the conspicuous emphasis in the last two paragraphs, you might be able to guess where this is going, but let’s dive a little deeper into how this technique relates to the two, starting with its relationship to emulation.
Let me ask you something. What exactly is the difference between static recompilation and what you’d get with a plain old emulator? If you were to go off the comment section in ZFG’s video, or even Nerrel’s excellent overview of the topic, I’d be willing to bet you would guess they’re very different, perhaps even fundamentally so. Hell, Nerrel’s video opens with a quip about how much Project64 and other Nintendo 64 emulators suck, setting up an explicit point of contrast between emulators and the programs produced by this new technique. And while I think the criticisms toward the likes of Mupen are more than warranted, I think framing static recompilation as in some way opposite to emulators paints the wrong picture of what it actually is.
Recall the description I gave of what static recompilation is. While it is true that it works that way, the description by itself doesn’t paint the whole picture. The reality is, almost everything I said also applies to modern, garden-variety emulators, with just two points of divergence accounting for most of the difference between the two: the time at which the process happens and the choice of using C as an intermediary. Enough to make the end products fairly distinct from the perspective of the end user, but not enough to put them in opposition to each other.
In fact, to help me drive home how similar they actually are, let’s start by talking about recompilation. If you open the Dolphin emulator, go to Settings, and then Advanced, you’ll see a drop-down menu labeled CPU Emulation Engine, with an option for a JIT Recompiler. This engine being a “recompiler” means that – just as in static recompilation – it takes the PowerPC instructions that make up the game and translates them to semantically equivalent native instructions, which it then writes to memory before having the CPU in the host machine execute them directly. Meanwhile, it being of a Just-in-Time – or JIT, for short – variety means that this translation happens as the game runs, typically over reasonably large groups of instructions at a time. In addition, the translations made by the JIT are typically kept in memory, so that, after a while, the majority of the game’s code exists as native instructions ready to be reused.
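The translate-once-then-reuse loop at the heart of a JIT can be sketched in a few lines. This is a toy simulation, not Dolphin's actual design: where a real JIT emits machine code into executable memory, here "translation" just hands back an ordinary C function, which is enough to show the caching behavior.

```c
#include <stdint.h>
#include <stddef.h>

typedef void (*TranslatedBlock)(uint32_t *regs);

#define CACHE_SIZE 256
/* Maps (hashed) guest program counter -> translated native block. */
static TranslatedBlock block_cache[CACHE_SIZE];
static int translations_done = 0;   /* counts the expensive work */

/* Stand-in for the native code a JIT would emit for one guest block. */
static void block_at_0x1000(uint32_t *regs) { regs[0] += 1; }

static TranslatedBlock translate(uint32_t guest_pc) {
    translations_done++;     /* in a real JIT, this is the slow part */
    (void)guest_pc;
    return block_at_0x1000;  /* pretend we just emitted machine code */
}

/* The dispatch path: translate on a cache miss, reuse on a hit. */
TranslatedBlock lookup_or_translate(uint32_t guest_pc) {
    size_t idx = (guest_pc >> 4) % CACHE_SIZE;
    if (!block_cache[idx])
        block_cache[idx] = translate(guest_pc);
    return block_cache[idx];
}
```

Run the same guest address twice and the translation only happens once – after enough frames, nearly everything the game executes is a cache hit.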
Now, there is a little weirdness around the terminology used in the emulation scene, so let me add that – in this context – saying something is “Just-in-Time” is the same as saying it is “dynamic”, which contrasts with the “static” in “static recompilation”. Indeed, the thing that separates the default engine used by most modern emulators from what static recompilation does is not which of the two produces native versions of the game’s code – they both do – but, instead, when the translation happens. For dynamic engines, as I’ve already mentioned, this process happens as the game runs, while for static recompilation, it happens before the game runs. And while it might seem at first that front-loading all this work is obviously better, the truth is that both techniques come with advantages and disadvantages¹. Ultimately, both emulators and ports produced through static recompilation are vastly more alike in this regard than they are different.
Not only that, but there’s still the matter of environment to consider. Remember when I mentioned that, even if you translate the instructions from the game into code that can natively run on your CPU, that native code by itself can’t talk to the rest of your system? At the end of the day, even after translation, if the game needs to communicate with the outside world to get anything done – e.g., draw to the screen, play audio, do DMA, configure and receive interrupts, among other things – it will still expect that the console it was originally written for will be on the other end of the line. It doesn’t know how to talk to a modern Windows install or Linux desktop. Static recompilations and regular emulators both need to emulate the behavior of the underlying hardware well enough that the translated code can work like it should. Emulators have many names for the components that do this hardware emulation, but often they will call them “plugins”. N64: Recompiled, on the other hand, calls them “runtimes”, but, again, just like with dynamic vs. static recompilation, they are way more alike than they are different.
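One common shape this hardware emulation takes – whether you call it a plugin or a runtime – is routing memory accesses: loads and stores that hit a hardware address range get intercepted and handled by emulation code instead of landing in plain RAM. The sketch below is illustrative only; the addresses and the audio register are made up and don't correspond to real N64 hardware.

```c
#include <stdint.h>
#include <string.h>

#define MMIO_BASE      0xA4000000u            /* invented MMIO window */
#define MMIO_SIZE      0x1000u
#define REG_AUDIO_FREQ (MMIO_BASE + 0x10u)    /* invented register */

static uint32_t audio_freq_hz;     /* state the "plugin" maintains */
static uint8_t  ram[64 * 1024];    /* a sliver of emulated RAM */

/* Every 32-bit store from the translated game code goes through here. */
void guest_write32(uint32_t addr, uint32_t value) {
    if (addr >= MMIO_BASE && addr < MMIO_BASE + MMIO_SIZE) {
        if (addr == REG_AUDIO_FREQ)
            audio_freq_hz = value;  /* forward to a host audio backend */
        return;                     /* other registers ignored in this toy */
    }
    /* Ordinary RAM access (wrapped into our small buffer). */
    memcpy(&ram[addr % (sizeof(ram) - 4)], &value, sizeof(value));
}

uint32_t guest_read32(uint32_t addr) {
    if (addr >= MMIO_BASE && addr < MMIO_BASE + MMIO_SIZE)
        return addr == REG_AUDIO_FREQ ? audio_freq_hz : 0;
    uint32_t value;
    memcpy(&value, &ram[addr % (sizeof(ram) - 4)], sizeof(value));
    return value;
}
```

The game pokes what it believes is hardware; the runtime catches the poke and does something a modern OS understands. That's the job, whatever the component ends up being named.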
But, of course, if you’ve ever played these ports, or used an emulator, or even so little as listened to people talk about this whole phenomenon, you know the final products do look and behave very differently. This is where the two differences I mentioned before come into play. The choice of using C and compiling ahead of time has the major consequence of making the original game way easier to mod than it would be under an emulator. If you’re used to modding a game in the context of an emulator and then move to a recompilation, you suddenly don’t have to pretend you’re talking to a console anymore. You can use modern APIs and programming languages, talk to the operating system directly, the works. From that one simple change in perspective, improving the game becomes far easier than it ever could have been while modders were expected to preserve compatibility with the original console, and I think that’s genuinely a great thing. At the end of the day, though, these improvements come much more from the shift in the way these games are perceived than from any fundamental change to the underlying model of emulation being used.
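To make that freedom tangible, here's a hedged sketch of what a mod can do once it no longer has to pretend it's on a console. Everything here is hypothetical – the function and file name are invented – but the point stands: settings that would once have lived in emulated controller-pak or cartridge reads can just be a file on the host filesystem, read with plain stdio.

```c
#include <stdio.h>

/* A mod reading a field-of-view override straight from the host
 * filesystem – something the original console code could never do.
 * Falls back to the game's default when no config exists. */
int mod_load_fov(const char *path, int fallback_fov) {
    FILE *f = fopen(path, "r");
    if (!f)
        return fallback_fov;   /* no config file: keep the default */
    int fov = fallback_fov;
    if (fscanf(f, "%d", &fov) != 1)
        fov = fallback_fov;    /* unparseable file: keep the default */
    fclose(f);
    return fov;
}
```

No emulated hardware in sight – the recompiled game is just a native program, and mods get to treat it as one.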
While we shouldn’t downplay the impact of these differences, I still think it’s not great to treat these PC ports as being in any way fundamentally different from what emulators do. In fact, I think treating them as such can blind us to a few of the risks that I believe can come with this technique, which ultimately drove me to write this post, and that we’ll get into now.
Or “why openly boasting about these PC ports is probably a horrible, terrible, no good idea”.
What I see repeated a lot is the misconception that, because these ports are supposedly “reverse engineered”, it is safe to publish the code for them. I honestly have a really hard time believing this is the case, for two main reasons. First, the nature of these ports differs from the kind of reverse engineering that other projects have done and successfully defended in court before. Second, because of the way these ports relate to emulation, publishing the decompiled source code is much less like reverse engineering and much more like publishing a precomputed intermediate step in emulating a game.
When we speak about reverse engineering not being copyright infringement, we’re usually talking about a clean-room approach. That approach is what Accolade used in Sega v. Accolade, as well as what projects like Asahi Linux use. In fact, the latter goes so far as to explicitly ban contributors who have disassembled closed-source driver binaries as part of reverse engineering the hardware in MacBooks from writing drivers for that hardware, so as to avoid concerns around copyright infringement as much as possible. In the case of the ports, what’s happening really couldn’t be further from that. What we’re actually seeing with them is people decompiling code and, rather than using the results of the decompilation to document the behavior of interfaces without publishing any of the original code, directly publishing those results to the internet.
To further illustrate my point, let’s think of compressing the ROM file of a Nintendo game as a .zip file and then making it public on the internet, whereupon people download it for their own consumption. From experience, we all know this is considered piracy. But at no point during this process was the exact code under copyright ever published; the only thing that was published was data derived from it, which could be used to reconstruct the original copyrighted material.
Now, the ZIP case isn’t very interesting. We all know how it works. Things get interesting, though, when we consider what role the decompilation to C is playing in these PC ports. We know it’s a representation derived from the code in the ROM and that it can be used both to reconstruct the original and as part of the process of emulating the original, mods and improvements notwithstanding. In a very real sense, one can argue that – as far as piracy is concerned – the C source code for these ports that’s being put up on GitHub is nothing more than a text analogue to the ZIP file case.
In other words, not only are these projects much more like emulators than people make them out to be, in a sense, they are bundling emulator bits and original game code – even if it has had a transformation applied to it – as part of the final published code. Even if you don’t publish any assets – and to their credit, they don’t – the derived code being put up in these repositories is, I feel, just as susceptible to getting copyright-nuked as the assets would be.
I fear these projects will be taken down for copyright infringement. The way I see people talk about this, it feels like everyone is way too comfortable in the assumption that not only do these projects fall into the category of reverse engineering, but also that merely falling under that umbrella is enough for clean-room-like legal protections. While I can’t necessarily speak to the first half of the assumption, the second half seems extremely unlikely to be true. Maybe we should all face the possibility that if Nintendo were to try to go after these ports, they could make a good case to not only get them taken down, but maybe even to get some poor soul made into another Gary Bowser.
1. We will not get into why this is, because this post is getting long enough as is, and the amount of ink that has been spilled pitting just-in-time approaches against ahead-of-time ones would put Alexandre Dumas to shame. If you’re still curious and want to look more into it, you should probably start with the reason both Apple’s Rosetta and Android’s ART sometimes use one technique, and sometimes use the other. ↩