Star Trek: Deep Space Nine – Crossroads of Time Gameplay (SNES)
The 1995 SNES game.
-
Cartridges as Property
When Games Were Objects Instead of Services
The NES cartridge locked into place with a decisive click—a sound that signaled completion, not just of the insertion process but of something more fundamental. You slid the gray plastic rectangle into the console’s slot, pressed the tray down, and that was it. The transaction was finished. What happened next depended entirely on whether the thing worked, which it usually did, barring dust or a bent pin or the mysterious failures that eventually afflict all electronics. But those were material failures, the kind you could see or clean or sometimes fix with a careful hand. The cartridge itself was complete. It contained everything it would ever contain. It asked nothing of you except electricity and compatible hardware.
I still have cartridges from childhood that work. Not because I’ve maintained them carefully—I haven’t—but because they require nothing to keep working except to continue existing. They don’t need to phone home. They don’t need authentication servers or accounts or updates. They don’t need permission. Plug them into a working console and they boot exactly as they did in 1987 or 1991 or whenever they were manufactured. This seems remarkable now, but only because we’ve been trained to expect the opposite. We’ve been trained to expect that the things we buy will stop working when someone else decides they should stop working.
The cartridge represents something we’ve lost: true ownership. Not metaphorical ownership, not licensed access dressed up in the language of purchase, but actual property. An object that existed independently of networks, permissions, ongoing relationships, or corporate goodwill. You bought it. You owned it. It was yours in the same way a book or a hammer or a chair was yours. This was not a special privilege of games—it was the default assumption of property. But somewhere between the gray plastic of the NES and the cloud-dependent launchers of today, games stopped being things you could own and became services you could rent.
What did ownership mean in the NES era? It meant something specific and uncomplicated. When you bought a cartridge, you acquired a physical object that functioned without reference to anything beyond itself and the hardware it was designed for. There were no accounts because accounts were unnecessary. There was no authentication because there was nothing to authenticate—you either had the cartridge or you didn’t. There were no servers because the game didn’t need servers. It didn’t need to connect to anything. It was self-contained in the most literal sense. The ROM chip inside held the complete game, the entirety of what the developers had created. Once that chip was manufactured, the game was fixed in place. It couldn’t be altered, couldn’t be updated, couldn’t be revoked.
This wasn’t a design philosophy so much as a technical reality. Distribution was physical. Manufacturing was expensive. Changes were impossible after production. These constraints produced a particular kind of object: one that had to be complete before it could be sold, because there was no mechanism for incompleteness. If a game shipped with bugs, those bugs were permanent. If a game had balance problems, players learned to work around them. The relationship between developer and player ended at the point of sale. After that, the cartridge existed in the world as an independent thing.
This model aligned with centuries of property law. When you bought a hammer, the hammer company didn’t retain any claim on how you used it. When you bought a book, the publisher didn’t monitor your reading habits or require you to check in periodically. When you bought a cartridge, it was yours in the same complete sense. You could play it, you could sell it, you could lend it to a friend, you could destroy it if you wanted. These were rights inherent to ownership, not privileges granted by license agreements.
The finality of cartridges created design discipline. Developers had one chance to get things right, which meant they usually did. This wasn’t about craftsmanship in the romantic sense—it was about necessity. You couldn’t patch out a game-breaking bug after manufacturing. You couldn’t rebalance difficulty based on player data. You couldn’t add features later to justify a higher price. Everything had to work at release because release was final. The game that shipped was the game that existed, permanently and completely.
This constraint shaped how games were made. Testing was more rigorous because it had to be. Design was more conservative because radical experiments that failed couldn’t be fixed. But it also meant games existed outside time once they were manufactured. They didn’t evolve. They didn’t change. They weren’t adjusted based on metrics or player feedback or competitive balance concerns. They simply were what they were. This created a strange kind of permanence. Every copy of Super Mario Bros. 3 contained exactly the same game, and that game was identical in 1990 and 2000 and 2025. The bits didn’t change. The experience didn’t drift. If you remembered something about the game, you could verify that memory by playing it again and finding it exactly as you left it.
Compare this to modern games, which exist in a state of perpetual becoming. They launch incomplete, expecting updates. They ship with roadmaps, promises of content to come. They offer early access, charging money for the privilege of testing unfinished software. They rebalance constantly, adjusting numbers based on usage data. The game you play today might not be the game you play next month, not because you’ve changed but because the developers have changed it. This isn’t necessarily worse—some games benefit from iteration, from community feedback, from the ability to fix mistakes. But it represents a fundamentally different relationship between object and owner. The cartridge was finished. The live service game is never finished. It exists only in the present tense, only while it’s actively maintained.
The shift from property to license happened gradually, then suddenly. It started with end-user license agreements, those walls of text nobody read that redefined “purchase” as “license.” You weren’t buying the game, you were buying permission to use the game, and that permission came with conditions. The conditions multiplied. You needed an account. You needed to be online. You needed to authenticate. You needed to accept updates. Each requirement seemed minor in isolation, but together they transformed the nature of the transaction. You weren’t acquiring property anymore. You were entering into an ongoing relationship with a vendor who retained control over the thing you’d paid for.
Digital rights management made this explicit. DRM systems exist to prevent you from doing things that would be legal and normal with property. You can’t make copies. You can’t transfer ownership freely. You can’t use the software without permission from an authentication server. This isn’t protection against piracy—it’s the technical enforcement of a new ownership regime. The DRM doesn’t protect the product from copying. It protects the vendor’s control over how you use what you’ve bought.
This became unavoidable with digital distribution. Steam libraries contain hundreds of games, but you don’t own any of them. You own licenses to access them, licenses that can be revoked if Steam decides to revoke them, licenses that become worthless if Steam disappears. The games exist on Valve’s servers, not on your hardware. You have permission to download them, permission to run them, permission that depends on Valve’s continued existence and goodwill. When a game is delisted from Steam, you might retain access if you bought it before delisting, but new buyers cannot acquire it at all. The game effectively ceases to exist as an available object.
Server shutdowns make this worse. When a multiplayer game’s servers shut down, the game stops working, completely and permanently. It doesn’t matter that you paid for it. It doesn’t matter that the software still exists on your hard drive. Without the servers, the software is inert. The game you bought becomes unplayable by design. This isn’t a technical failure. It’s a designed dependency. The game was built to need something the vendor controls, and when the vendor withdraws that something, your purchase becomes worthless.
This pattern extends beyond gaming. We’ve watched it happen with music, with movies, with books, with software generally. Ownership is being replaced with access. Possession is being replaced with permission. The logic is consistent: vendors prefer ongoing relationships to one-time transactions because ongoing relationships generate ongoing revenue. Rent is more profitable than sale. Subscriptions are more predictable than purchases. Services create dependency in ways that products don’t.
Games perfected this transition. They became services gradually, layering mechanisms of control until the old model of ownership became impossible. It happened through small steps: online activation, digital distribution, day-one patches, live service models, battle passes, microtransactions. Each step seemed reasonable given what came before. Each step made sense in isolation. But the cumulative effect was revolutionary. Games stopped being things you could own and became things you could access, and access depends on permission, and permission can be withdrawn.
Subscriptions made the logic explicit. Xbox Game Pass and PlayStation Plus don’t pretend to offer ownership. They offer access to a rotating library of games for a monthly fee. Stop paying and the access disappears. The games don’t become yours over time. You’re not building a collection. You’re renting temporary permission to play whatever’s currently available. This is honest about what it is, which makes it less troubling than the fake purchases that look like ownership but aren’t. But it’s also the endpoint of a long transition. Games have become utilities, services delivered on demand for ongoing fees.
Battle passes extended this logic into individual games. Pay for the privilege of working toward rewards that expire at season’s end. The battle pass creates urgency, a fear of missing out if you don’t play enough before the deadline. It transforms play into labor, with clear metrics and reward schedules. It’s psychological retention engineering, designed to keep players engaged and paying. Again, this isn’t moral condemnation. It’s observation of incentive structures. Companies want stable, predictable revenue. They want players who return daily. They want ongoing relationships rather than one-time transactions. The systems they build reflect these incentives.
Microtransactions completed the transformation. Games became storefronts wrapped in gameplay. The game itself became free or cheap, with the real revenue coming from cosmetics, convenience items, randomized loot boxes. This model doesn’t require updates or maintenance for generosity’s sake—it requires them to maintain the storefront. The game stays alive as long as it’s generating revenue. When revenue falls below maintenance costs, support ends and the game becomes unplayable. The players who invested hundreds or thousands of dollars find their purchases worthless, erased when the servers shut down.
These aren’t aberrations. They’re the natural result of optimizing for ongoing revenue. The question isn’t whether companies should do this—the question is whether we should structure economies so that this is the optimal strategy. Systems produce behavior. If the most profitable approach is to never sell anything outright, to always maintain control, to always retain the ability to shut things down or adjust terms, then that’s what will happen. Complaining about individual companies misses the pattern. The pattern is structural.
What was lost when ownership disappeared? Preservation became impossible. Games that depend on servers can’t be preserved when those servers shut down. Games that require authentication can’t be archived. Games that exist only as licenses can’t be transferred or inherited. We’re creating a cultural history that will have gaps, periods where popular games simply vanished because the companies that controlled them decided to stop supporting them. Historians will struggle to document this era because the artifacts won’t exist. They’ll exist in memory and description, but not as playable objects.
There’s something quietly devastating about childhood artifacts becoming inaccessible not through loss or damage but through deliberate revocation. The games you played at twelve might not exist anymore, not because the software was destroyed but because the servers were shut down or the storefront was closed or the authentication system was discontinued. This creates a strange discontinuity. The cartridge on my shelf still works because it needs nothing except itself. The digital games I bought fifteen years ago might not work tomorrow if the vendor decides they shouldn’t. My relationship to those games is not ownership. It’s permission, and permission is always temporary.
Culture depends increasingly on corporate goodwill. Our access to art and entertainment and creative work depends on companies choosing to maintain that access. They have no obligation to do so. When it becomes unprofitable, support ends. This makes sense from a business perspective, but it’s a terrible foundation for cultural preservation. We’re trusting profit-seeking entities to maintain access to cultural artifacts, and that trust is frequently betrayed because maintaining access conflicts with profit maximization.
Cartridges side-stepped all of this by being complete at manufacture. They exist as objects independent of their creators. They don’t require maintenance, updates, or ongoing support. They don’t depend on servers or accounts or authentication. They work because they contain everything they need to work. Forty years later, they still boot. They’ll keep working until the hardware physically fails. That’s not nostalgia—it’s a material fact about how the objects were constructed.
Buying an NES cartridge was a clean transaction. You handed over money. The clerk handed over a cartridge. You walked out of the store with property. Nothing else was owed, by either party. You didn’t need to create an account. You didn’t need to agree to terms of service. You didn’t need permission to use what you’d bought. The transaction was complete at the moment of exchange. That kind of transaction is increasingly rare. We’ve become accustomed to purchases that aren’t purchases, to ownership that isn’t ownership, to paying money for permission instead of property. The shift happened slowly enough that it seemed normal, inevitable, maybe even preferable. But it represents a fundamental restructuring of economic relationships, a move from independence to dependency, from property to access, from ownership to rent. The cartridge sits on the shelf, still working, a relic of a different arrangement.
-
The 2KB Imagination
How the NES Forced Better Designers
There’s a photograph somewhere on the internet of a disassembled NES motherboard, all brown PCB traces and blocky chips from 1983, and what strikes me about it isn’t the antiquity but the austerity. The Ricoh 2A03 processor sitting there like a small gray tombstone. Two kilobytes of RAM. Not two thousand megabytes. Two thousand bytes. You could fit the entire working memory of that machine into a few paragraphs of plain text. I remember the cartridge click—that satisfying snap when the game seated properly in the slot—and the way the screen would flicker to life with its limited palette, fifty-four colors total, and somehow that was enough to build worlds. But the NES didn’t produce great games in spite of those constraints. It produced them because of those constraints. The hardware was so unforgiving that every design decision became existential. There was no room for waste, no space for indulgence, no possibility of hiding weak ideas behind spectacle. If your game wasn’t sharp and focused, the machine would expose you. The NES forced discipline because it offered no alternative.
Consider what two kilobytes actually meant for a designer in 1985. The entire state of the game—player position, enemy positions, score, timers, power-up states, level progression—all of it had to fit inside that microscopic envelope of memory. That’s not a technical specification. That’s a moral framework. Every byte allocated to one system was a byte stolen from another. Want more enemy types? You’ll need to sacrifice complexity in level design. Want deeper player stats? Your animation frames just got simpler. The hardware didn’t care about your vision. It cared about mathematics. And mathematics, unlike marketing departments or focus groups, cannot be negotiated with. The NES had a sprite limit of sixty-four objects on screen, with only eight allowed per horizontal line. Exceed that and sprites would flicker or disappear entirely. Cartridge sizes started at forty kilobytes for early games, though later titles pushed into the megabyte range through clever banking schemes. The color palette was brutally restricted: each sprite could only use three colors plus transparency, and background tiles were similarly constrained. There was no hard drive, no save states initially, no patching, no updates. What shipped was what players got. These weren’t suggestions. They were laws.
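To make that calculus concrete, here is a toy Python sketch. The subsystem names and most byte counts are invented for illustration, though the 2 KB ceiling and the 256-byte sprite table (64 sprites × 4 bytes) are real NES figures. Every feature is a row in a budget that must sum to the machine’s RAM:

```python
# Toy illustration of the NES-era byte budget: every subsystem's working
# state has to fit, together, inside 2 KB of RAM. Rows are invented.
RAM_BUDGET = 2048  # bytes of work RAM on the NES

state_budget = {
    "player (position, velocity, power-up, lives)": 16,
    "enemies (8 slots x 8 bytes of position/type/state)": 64,
    "projectiles (8 slots x 4 bytes)": 32,
    "score, timer, level counters": 12,
    "scroll position and camera": 8,
    "sprite shadow table (64 sprites x 4 bytes, a real figure)": 256,
    "stack and scratch space": 256,
    "level/tile working data": 1024,
}

used = sum(state_budget.values())
print(f"allocated: {used} / {RAM_BUDGET} bytes ({RAM_BUDGET - used} free)")
assert used <= RAM_BUDGET, "over budget: something has to be cut"

# Want more enemy slots? Each one costs bytes that must come out of some
# other row. That trade-off is the whole design calculus.
```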
This created something I think of as technological morality. When resources are genuinely scarce, allocation becomes an ethical act. You cannot include something just because it might be cool or because the marketing team wants it. Every mechanic, every sprite, every line of code has to justify its existence. The question wasn’t “wouldn’t it be neat if…” but rather “is this worth the bytes it costs?” This calculus shaped everything. Super Mario Bros. has exactly one jump button because the designers couldn’t afford the memory overhead of multiple jump types and didn’t want to complicate the controls anyway. The game teaches you its entire vocabulary in the first hundred seconds: run right, jump on things, avoid other things, grab the mushroom. That economy of design wasn’t a stylistic choice. It was compulsory. But compulsion, applied to creative work, can produce unexpected clarity. The developers at Nintendo couldn’t patch their way out of bad decisions. They couldn’t download additional assets. They couldn’t add microtransactions to fund ongoing development. They had to make the game work with what they had, which meant every element needed to pull its weight. This created games that were dense with intent. Nothing was there by accident. Nothing was there because someone thought it looked cool in a pitch meeting. If it existed in the game, it existed because it served the core experience and because someone fought for the bytes to include it.
And here’s what happens when every decision is that expensive: design becomes legible. The player can read the game clearly because the game itself is forced to be clear about its own priorities. Look at Mega Man. Every robot master has a distinct silhouette, a unique color scheme, and immediately recognizable attack patterns. That’s partly because sprite reuse was essential—the same basic enemy chassis appears across multiple stages with palette swaps and minor alterations—but it’s also because visual clarity was mandatory. The player needed to understand the threat profile instantly because there wasn’t memory for complex AI trees or adaptive behavior systems. Enemies had patterns, simple and consistent, and you learned those patterns through repetition. This wasn’t a failure of imagination. This was design necessity creating pedagogical elegance. The Legend of Zelda presents a world built from repeating tile sets where every screen is an eight-by-eight grid. The dungeons are spatial puzzles constructed from a limited vocabulary of rooms and obstacles. Modern players sometimes mistake this for primitive design, but that misreads what’s happening. The constraint created a grammar of play. You learned how to read the world because the world was built from consistent elements. The alphabet was small but the sentences could be intricate.
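The palette-swap economy described above is easy to make concrete. A sketch, assuming nothing about Capcom’s actual data: the sprite stores palette indices rather than colors (three colors plus transparency, as on the NES), so re-skinning an enemy costs a handful of palette bytes instead of new pixel data:

```python
# A sprite the way the NES stored it: palette indices, not colors.
# Index 0 is transparent; 1-3 select entries from a 3-color palette.
SPRITE = [
    [0, 1, 1, 0],
    [1, 2, 2, 1],
    [1, 2, 2, 1],
    [0, 3, 3, 0],
]

# Two palettes re-skin the same pixel data into two "different" enemies.
PALETTES = {
    "stage 1 grunt": {1: "blue", 2: "white", 3: "gray"},
    "stage 4 grunt": {1: "red", 2: "yellow", 3: "gray"},
}

def render(sprite, palette):
    for row in sprite:
        print(" ".join(palette.get(i, ".").ljust(6) for i in row))

for name, palette in PALETTES.items():
    print(f"-- {name} --")
    render(SPRITE, palette)
```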
Consider the first stage of Super Mario Bros., the one everyone knows: World 1-1. That level is a masterclass in constraint-driven design. It teaches you everything you need to know about the game without a single word of tutorial text. You see a Goomba, and the spacing gives you time to either jump over it or jump on it. The blocks above suggest jumping is possible. The first pit comes after you’ve practiced jumping several times. The first mushroom emerges from a block after you’ve been trained to investigate blocks. Each element introduces itself in isolation before combinations appear. This sequencing wasn’t accidental, but it also wasn’t the product of unlimited iteration budgets. It emerged from designers who understood they had one chance to teach the player and limited tools to do it. The level is roughly ninety seconds long for a competent player, but it contains the entire game’s pedagogical arc. And it does this because every element is essential. There are no decorative enemies. There are no gratuitous obstacles. Every Goomba, every pipe, every block exists because it teaches something or tests something you’ve already learned. The level is so efficient it borders on mathematical. But it doesn’t feel cold or mechanical. It feels playful and intuitive precisely because there’s nothing extraneous to confuse the core experience.
Now consider the modern landscape. A typical AAA game in 2024 might require a hundred gigabytes of storage. That’s roughly two and a half million times the forty kilobytes of an early NES cartridge. And what do we get for that expansion? Photorealistic textures, certainly. Orchestral soundtracks. Vast open worlds where you can see for miles. Motion-captured performances. Cinematic cutscenes. All of this is technically impressive, but somewhere in that abundance, discipline evaporated. When you have a hundred gigabytes to work with, you don’t have to choose. You can include everything. You can add a crafting system because other games have crafting systems. You can stuff the map with collectibles because engagement metrics reward time spent. You can build a skill tree with two hundred nodes because complexity signals depth to certain players. None of this requires justification anymore. You’re not sacrificing anything to include it because the technical ceiling is effectively gone. But without sacrifice, without trade-offs, design becomes baggy. It loses shape. I’ve played recent games where I reached the forty-hour mark and realized I couldn’t articulate what the core experience was supposed to be. The game had stealth systems and combat systems and dialogue trees and base building and romance options and crafting and skill progression and puzzle dungeons and vehicle sections. It had everything. But it wasn’t about anything in particular. The abundance made it possible to never commit.
This isn’t just a problem of excess content. It’s a problem of economic incentives meeting technological possibility. Modern game development is expensive enough that studios need games to retain players for months, ideally years. So you get live service models, battle passes, seasonal content drops, daily login rewards. The design goal shifts from creating a focused experience to maximizing engagement time. And when you have near-infinite storage and processing power, you can keep piling on systems indefinitely. Nothing forces you to decide what matters most. The NES couldn’t sustain that kind of bloat even if developers wanted it. The hardware was the check against excess. It said: you get two kilobytes and sixty-four sprites and fifty-four colors, make something beautiful or make nothing at all. Contemporary development has no such governor. You can keep adding until the game collapses under its own weight, and sometimes they do. I’m not suggesting this is universal. Plenty of modern developers still work with discipline and focus. But they do so by choice, by cultivating limitations deliberately, because the medium itself no longer enforces them. The default state of modern development is abundance, and fighting against defaults requires active resistance.
There’s something here that extends beyond games into how we think about creative work more broadly. The 2KB lesson isn’t really about hardware specs. It’s about how limitations shape thinking. When you write with a strict word count, every sentence has to earn its place. When you compose music with limited instruments, arrangement becomes critical. When you code with minimal memory, architecture becomes essential. The constraint doesn’t inhibit creativity—it directs it. It forces you to understand your priorities because you literally cannot include everything you want. This creates work that’s denser, more intentional, more itself. I think about this with writing. The temptation with digital publishing is to keep going, to write until you’ve exhausted every tangent and covered every angle. No one stops you. There’s no printing cost, no page limit. But essays that work best are usually the ones that know what they’re about and cut everything else. The same discipline applies to software development, where feature bloat is a constant danger, or product design, where adding just one more button seems harmless until you have forty buttons and no one can find anything. The lesson from the NES era is that constraints aren’t obstacles to overcome. They’re tools for clarification. They force the essential question: what is this really for?
I don’t want to romanticize limitation. Working within harsh constraints is often frustrating, and the NES had plenty of games that were merely adequate or outright bad. The hardware didn’t guarantee quality. But it did guarantee that quality, when it appeared, was the result of disciplined choices made under pressure. The great NES games weren’t great because they were retro or because they trigger nostalgia. They’re great because they knew exactly what they were and refused to be anything else. Super Mario Bros. is a game about momentum and timing. The Legend of Zelda is a game about spatial exploration and pattern recognition. Mega Man is about learning through failure and strategic tool selection. Each one has a thesis statement executed with relentless focus. There’s nothing accidental about them. There’s nothing there because someone thought it might be neat. Every element serves the central experience because there was no room for anything that didn’t. And that discipline, forced by technological necessity, created a design language we’re still learning from. The clarity didn’t come from simplicity. It came from constraint making every decision count.
The NES didn’t give developers freedom. It gave them boundaries. And boundaries, it turns out, are what make things real. You don’t become a better designer by having infinite resources. You become better by learning to work within limits severe enough that every choice matters. The machine with two kilobytes of RAM taught an entire generation of designers that discipline isn’t a restriction on creativity—it’s the price of coherence. Maybe that’s the lesson we keep forgetting as our tools grow more powerful and our storage more vast. The technology improves, the constraints dissolve, and we assume that liberation equals better work. But sometimes, perhaps more often than we’d like to admit, limitation was doing more work than we realized. It was the thing that kept us honest.
-
The Saturn Paradox: The Quadrilateral Dead End
In the early 1990s, the fundamental vocabulary of 3D graphics hadn’t been codified. While Sony’s PlayStation engineers chose triangles—the mathematically perfect polygon, where three points always define a flat plane regardless of orientation—Sega made a different bet. They chose the quadrilateral. This wasn’t just a technical preference; it was a strategic miscalculation that would define the Sega Saturn’s legacy as gaming’s most beautiful failure.
The Saturn wasn’t a “3D console” in the way we understand the term today. It was a 2D sprite powerhouse forced into an awkward pantomime of three-dimensional rendering. Its VDP1 graphics processor didn’t actually draw polygons in the conventional sense—it drew warped rectangular sprites, texture-mapped distortions of flat images that could be manipulated to simulate depth and perspective. This was quadrilateral logic: the assumption that the future of 3D would be built on the foundation of 2D’s proven dominance.
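A minimal sketch of that idea—not the VDP1’s actual pipeline, just the geometry: a “distorted sprite” forward-maps each texel of a rectangular texture onto four arbitrary corners by interpolating along the quad’s edges. Forward mapping is also why stretched quads on real hardware could leave gaps or draw some pixels twice:

```python
def lerp(a, b, t):
    """Linear interpolation between 2D points a and b."""
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def warp_texture_to_quad(tex_w, tex_h, corners):
    """Forward-map every texel of a tex_w x tex_h texture onto an
    arbitrary quad (corners clockwise: a, b, c, d). The texture is
    pushed onto the quad; the quad never samples the texture."""
    a, b, c, d = corners
    for v in range(tex_h):
        t = v / (tex_h - 1)
        left, right = lerp(a, d, t), lerp(b, c, t)   # interpolate side edges
        for u in range(tex_w):
            s = u / (tex_w - 1)
            x, y = lerp(left, right, s)              # interpolate across the span
            yield (u, v), (round(x), round(y))       # texel -> screen pixel

# A 4x4 texture squeezed onto a non-rectangular quad:
quad = [(0, 0), (12, 2), (10, 9), (1, 7)]
for texel, pixel in warp_texture_to_quad(4, 4, quad):
    print(texel, "->", pixel)
```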
Sega’s reasoning made sense within their own ecosystem. They owned the arcade market. Their Model 1 and Model 2 arcade boards had already proven that quad-based rendering could produce stunning results in Virtua Fighter and Daytona USA. The Saturn was designed to leverage this existing expertise, to translate arcade supremacy into home market dominance. But markets don’t care about your internal logic. They care about shared standards, development efficiency, and where the momentum is building.
While Sega was optimizing for warped sprites, the rest of the industry was converging on triangles. The reason was simple: triangles are computationally elegant. Three points always lie on the same plane. There’s no ambiguity, no mathematical edge cases to handle. When you build a 3D engine around triangles, you’re working with a primitive that the hardware can process predictably and efficiently. Quads, by contrast, can be non-planar. Four points in 3D space don’t necessarily form a flat surface—they can twist, creating rendering artifacts, texture warping, and computational overhead.
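That difference isn’t rhetorical; it is one scalar triple product. A minimal check, in Python:

```python
def sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def is_planar(a, b, c, d, eps=1e-9):
    """Four points are coplanar iff the fourth lies in the plane of the
    first three: the scalar triple product (AB x AC) . AD is zero."""
    return abs(dot(cross(sub(b, a), sub(c, a)), sub(d, a))) < eps

# Any three points define a plane, so a triangle is always flat.
flat_quad    = ((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0))
twisted_quad = ((0, 0, 0), (1, 0, 0), (1, 1, 1), (0, 1, 0))  # corner lifted

print(is_planar(*flat_quad))     # True
print(is_planar(*twisted_quad))  # False: no flat surface contains it
```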
The Saturn’s quad-based approach produced a distinctive aesthetic. Games like Panzer Dragoon and NiGHTS into Dreams had a visual quality that felt different from the PlayStation’s crisp, triangular geometry. Textures shimmered and warped in organic ways. Surfaces had a liquid quality, a kind of analog imperfection that made the Saturn’s 3D feel more tactile, less mathematically pure. It was beautiful in the way a steam-powered supercar is beautiful—impressive engineering in service of an approach that the market had already decided to abandon.
The Eight-Headed Beast
But the Saturn’s quadrilateral logic was only the beginning of its complexity problem. Under the hood, the Saturn was less a coherent platform than a pile of silicon held together by aspiration. It featured eight processors working in uneasy coordination: two Hitachi SH-2 CPUs, the VDP1 and VDP2 graphics chips, a Motorola 68EC000 sound CPU paired with a Yamaha sound DSP, a System Control Unit with its own DSP, and a Hitachi SH-1 managing the CD-ROM drive.
The dual SH-2 setup was particularly emblematic of the Saturn’s orchestrated complexity. On paper, symmetric multiprocessing sounded impressive—two 28.6 MHz RISC CPUs working in parallel. In practice, they shared the same memory bus, meaning they often collided trying to access the same resources. Getting both CPUs to work efficiently required manual synchronization, careful choreography of which processor handled physics, which managed AI, which interfaced with the graphics chips.
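A toy analogy of that choreography—Python threads standing in for SH-2s, nothing like real Saturn code: the master kicks off the slave, does its own half of the frame, and must explicitly wait before touching shared state again. On real hardware this handshake was hand-rolled with flags in shared memory:

```python
import threading

# Invented example: split one frame's work between a "master" and a
# "slave" CPU that share state, coordinating entirely by hand.
work_ready = threading.Event()
work_done = threading.Event()
shared_frame = {"physics": None, "ai": None}

def slave_cpu():
    work_ready.wait()                    # idle until the master signals
    shared_frame["ai"] = "ai updated"    # the slave's half of the frame
    work_done.set()                      # signal completion

def master_cpu():
    work_ready.set()                     # kick off the slave...
    shared_frame["physics"] = "physics updated"  # ...do our half meanwhile
    work_done.wait()                     # never touch shared state early

slave = threading.Thread(target=slave_cpu)
slave.start()
master_cpu()
slave.join()
print(shared_frame)  # both halves present, with no OS to arbitrate
```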
This wasn’t programming; it was orchestration. You didn’t just write code for the Saturn—you composed it, balancing eight different instruments that each had their own timing, their own quirks, their own demands. The platform had no operating system to abstract away this complexity, no middleware to smooth the rough edges. Every game was a bespoke engineering challenge.
For first-party developers like Sega’s AM2 division, this was manageable. Yu Suzuki and his team had years of experience with similar architectures in arcade development. They knew how to make the eight-headed beast sing. Virtua Fighter 2 on Saturn was a technical marvel, a demonstration that when properly orchestrated, the Saturn could match or exceed PlayStation’s capabilities. Panzer Dragoon Saga showed what the hardware could do in the hands of developers who understood its esoteric architecture.
But third-party developers took one look at the Saturn’s technical documentation and made a rational business decision: they would treat it like a weaker PlayStation. They ignored the second SH-2 entirely. They used quads because the hardware forced them to, but they didn’t optimize for the quad-based rendering pipeline. They shipped PlayStation ports that ran worse and looked rougher, reinforcing the market’s perception that the Saturn was technically inferior.
The complexity tax was too high. In an industry where development costs were rising and multiplatform releases were becoming standard, the Saturn demanded specialization that publishers couldn’t justify. Why dedicate a team to mastering an arcane architecture when you could develop for PlayStation’s more straightforward, triangle-based pipeline and reach a larger install base?
The Last Exotic Console
The Saturn represented something that no longer exists in gaming hardware: genuine architectural exoticism. It wasn’t a repurposed PC. It wasn’t a streamlined, developer-friendly platform designed around industry-standard APIs. It was a bespoke arcade machine, shrunk down and forced into the living room, still carrying the assumptions and engineering priorities of the coin-op world.
This exoticism produced moments of genuine magic. The Saturn’s dual-plane background system could handle parallax scrolling and pseudo-3D effects that the PlayStation struggled with. Games like Radiant Silvergun and Guardian Heroes showcased sprite-based artistry that felt like the medium’s final evolution. The Saturn was the last console where 2D felt like the primary concern rather than a legacy feature.
But exotic architectures are expensive. They require deep documentation, extensive developer support, and a large enough install base to justify the investment. Sega provided none of these consistently. The Saturn launched with minimal developer tools. Its technical documentation was notoriously incomplete. Its surprise North American launch—four months ahead of schedule, announced at E3 1995—burned retail and publisher relationships that Sega would never fully repair.
The quadrilateral dead end wasn’t just about rendering primitives. It was about Sega’s broader refusal to standardize, to speak the shared language that the industry was rapidly coalescing around. While Sony was courting third-party developers with comprehensive SDKs and middleware partnerships, Sega was assuming that technical superiority and arcade credibility would carry the day.
The Analog Disaster
In retrospect, the Saturn feels like a last gasp of analog thinking in an industry that was rapidly digitizing. Its complexity wasn’t elegant; it was baroque. Its technical choices weren’t forward-looking; they were attempts to preserve and extend the assumptions of a previous era. The quadrilateral logic, the multi-processor architecture, the assumption that arcade expertise would translate directly to home consoles—all of it was Steam Age engineering in a world that had already committed to the internal combustion engine.
The Saturn proved a principle that would become ironclad in subsequent console generations: shared standards win. Developers don’t want exotic architectures they have to master. They want familiar tools, predictable performance, and large addressable markets. The PlayStation spoke the language the industry was already learning—triangles, straightforward memory management, a single CPU to optimize for. The Saturn demanded that developers learn a new dialect, and most simply refused.
Today, every major console uses x86 architecture, AMD graphics chips, and development environments that deliberately minimize the learning curve. The exotic, bespoke console died with the Saturn. We gained efficiency, cross-platform development, and lower barriers to entry. What we lost was that distinctive aesthetic, that sense that different hardware could produce genuinely different artistic possibilities.
The Saturn was a beautiful, expensive footnote—a reminder that in platform wars, technical sophistication matters less than ecosystem alignment, developer support, and speaking the shared language of the market. Sometimes the future doesn’t belong to the most innovative architecture, but to the one that enough people agree to build on.
The quadrilaterals are gone now. The industry speaks in triangles.
-
Super Mario Bros.
The original blockbuster, longplay.
-
Longplay: Zero Tolerance
The classic Sega Genesis FPS (impressive for its time): Zero Tolerance.
-
The Simulation Will Be Monetized
Project Genie and the Rent-Seeking Future of Reality
On January 29, 2026, Google DeepMind quietly crossed a line that technologists have been pacing around for years. Project Genie went live—not as a game engine, not as a creative suite, but as something more unsettling: a general-purpose world model capable of generating interactive environments from prompts, photographs, and sketches.
This isn’t software in the classical sense. There are no levels to load, no assets to own, no binaries to archive. Genie is a prediction engine, running in real time, hallucinating coherent spaces frame by frame for as long as you are allowed to remain connected.
We are no longer playing games.
We are renting hallucinations.
From Artifact to Session
For most of digital history, creative media has been organized around the artifact. A game shipped as a cartridge, a disc, a download. A film existed as a reel, a file, a thing that could be copied, stored, revisited. Even streaming—despite its licensing tricks—still delivered a stable object on demand. The work existed independently of the moment you consumed it.
Genie abolishes this relationship.
Nothing persists. Nothing ships. Nothing exists before you arrive or after you leave.
The “world” you enter is not a place so much as a continuous act of inference—an unfolding probability field stabilized temporarily by an uninterrupted flow of compute. When you move forward, the system predicts what should appear next. When you jump, it predicts an arc. When you push an object, it predicts resistance, mass, and friction—not because it understands physics, but because it has seen enough videos where similar motions occurred.
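Genie’s internals aren’t public, so the following is only the generic shape of any autoregressive world model—invented names, a string standing in for a frame. What matters is the loop: each frame is sampled conditioned on the trajectory so far, and nothing outlives the loop:

```python
import random

def predict_next_frame(history, action):
    """Stand-in for model inference: produce something plausible given
    the trajectory so far. A real world model conditions on pixels."""
    return f"frame {len(history)}: world after '{action}' ({random.randint(0, 999)})"

def session(actions):
    history = []                     # a context window, not a save file
    for action in actions:
        frame = predict_next_frame(history, action)
        history.append(frame)
        print(frame)
    # Session over: no artifact, no file, only a spent context window.

session(["walk forward", "jump", "push the crate"])
```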
This is not simulation in the old sense. It is statistical improvisation.
And improvisation, by definition, leaves no artifact behind.
When the session ends, the world collapses. There is nothing to save, nothing to export, nothing to own. Whatever you “made” never existed outside the moment of execution. It was a lease on infrastructure masquerading as creation.
World Sketching and the Illusion of Authorship
Google’s preferred phrase for this is “world sketching.” The term is doing a lot of rhetorical work.
You are invited to upload a photograph, draw a few lines, or type a sentence—a medieval courtyard at dusk, a forest path, a child’s crayon spaceship—and the system obligingly generates a navigable environment. You can walk through it. Interact with it. Test its edges.
But a sketch, traditionally, is an object. It can be revised, stored, shared, inherited. Genie’s worlds are not sketches; they are performances. They exist only while the servers are actively hallucinating them into coherence. Disconnect, and they evaporate.
This distinction matters.
Because authorship without persistence is not authorship at all. It is participation in a controlled process whose outputs you are not permitted to retain. The system encourages you to feel like a creator while ensuring that nothing you touch ever leaves the enclosure.
Even the physics reinforce this instability. Genie does not calculate motion using equations. It predicts motion using precedent. That is why things occasionally pass through walls, distort, or behave strangely at the edges. The model is not constrained by law—only by plausibility. When it encounters a scenario it hasn’t seen often enough, reality frays.
These failures are often described as “alpha issues.” They are not. They are structural. You cannot debug a hallucination into permanence. You can only buy more compute and hope the predictions get better.
Which leads, inevitably, to the price tag.
Compute Rent and the New Enclosure
Access to Project Genie currently requires a $249.99-per-month subscription. This is not novelty pricing. It reflects the underlying economics of the system. Each user session demands dedicated hardware, sustained power, cooling, and bandwidth. The hallucination is expensive, and it must be metered.
This is the enclosure, updated for the cloud age.
Not land. Not labor. Latent space.
You do not own the worlds you generate. You do not own the experiences you inhabit. You rent the compute required to keep them coherent, minute by minute, session by session. When payment stops, the worlds cease to exist.
This is rent-seeking distilled to its cleanest algorithmic form. The infrastructure is centralized. The model is proprietary. The output is ephemeral. Dependency is total.
Even enforcement is baked into the hallucination itself. Genie reportedly refuses to render protected IP—recognizable characters, copyrighted designs, familiar franchises. This is not post-hoc moderation. It is preemptive control embedded directly into the generative act. The system is trained not to see certain things.
That should give us pause.
Because if a system can refuse to hallucinate a cartoon plumber, it can refuse to hallucinate anything else it is instructed to avoid. The boundary between copyright compliance and ideological sanitation is thinner than companies like to admit.
The Experience Economy Completes Its Arc
Project Genie is an extraordinary technical achievement. It is also a warning.
The long transition from ownership to access—from product to service, from artifact to subscription—has finally reached experience itself. You no longer buy worlds. You rent the ability to briefly occupy them. You do not create; you prompt. You do not keep; you remember.
And memory, conveniently, does not threaten platform control.
The session timer expires. The courtyard dissolves. The servers spin down.
What remains is the afterimage of a place that never existed, sustained by infrastructure you will never own, governed by rules you did not write, and revoked the moment the rent goes unpaid.
Imagination has been productized.
Reality has been metered.
There is a deeper asymmetry hiding here, one older than AI and more familiar than Google would like to admit. Every historical enclosure has followed the same pattern: what was once ambient becomes scarce, what was once shared becomes licensed, what was once navigable becomes gated. The commons disappears not through prohibition, but through convenience. Genie doesn’t forbid imagination—it hosts it. And hosting is power. When the only way to think spatially, play experimentally, or prototype worlds is through rented inference, the act of imagining itself becomes subordinate to platform uptime and billing cycles. This is not creativity liberated by machines; it is creativity tethered to infrastructure. The illusion is freedom. The reality is dependency.
Welcome to Imagination-as-a-Service.
The landlords are ready.
-
Panzer Dragoon Saga
Part of the Legendary 1998 in Gaming
In Japan on this day back in 1998, Panzer Dragoon Saga was released for the Sega Saturn.
Longplay of Panzer Dragoon Saga
-
The Virtual Console: Monetizing the Ghost
I. The Re-monetization of Nostalgia
In 1998, your NES sat in a cardboard box in the attic. Dust gathered on the gray plastic shell. The cartridges—Super Mario Bros., The Legend of Zelda, Metroid—still worked when you plugged them in, more than a decade after purchase. You owned them in the most literal sense: physical artifacts that required no permission, no account, no network connection to function. They were yours until entropy claimed them.
By 2006, Nintendo had rewritten that contract.
The Wii Shop Channel opened with a promise: access to gaming history at your fingertips. No need to dig through attic boxes or hunt through used game stores. For $5 to $10, you could “own” Super Mario Bros. again—this time as a digital file tethered to your Wii console, your Nintendo account, and Nintendo’s server infrastructure. The Virtual Console wasn’t marketed as rental or subscription. It was sold with the language of ownership, the aesthetics of a museum collection, the emotional register of preservation.
But the thesis is simpler: The Virtual Console was Nintendo’s masterstroke in Digital Rent-Seeking. It wasn’t about preserving history. It was about rewriting the terms of that history—from “ownership” to “licensing,” from artifact to access, from permanence to permission.
The realpolitik was elegant in its cynicism: Nintendo realized their back-catalog was a dormant asset. Millions of players had purchased these games in the 1980s and 1990s. Most still existed as physical cartridges, traded in secondary markets Nintendo couldn’t touch. By creating a digital wrapper for 8-bit and 16-bit ROMs—software Nintendo already owned, already developed, already profitable—they could charge you again. And again. And again across every new hardware generation.
The NES cartridge in your attic cost Nintendo nothing to maintain. The Virtual Console game cost you $5 every time the platform changed.
II. The Death of the Artifact (Again)
The strategy was surgical: replace the physical secondary market with a digital primary market under permanent corporate control.
Before the Virtual Console, retro gaming existed in a space Nintendo couldn’t monetize. Used game stores, collector markets, emulation communities—these were ecosystems where Super Mario Bros. 3 changed hands without Nintendo seeing a cent. The cartridge was an artifact. Once sold, it entered the commons of physical exchange. You could lend it to a friend. Sell it when you needed cash. Pass it to your children. The transaction was complete. Nintendo’s claim ended at the point of sale.
The Virtual Console enclosed that commons.
Now the game was account-bound and hardware-tethered. You couldn’t lend your Virtual Console copy of Zelda to a friend—there was no cartridge to hand over, no physical object to transfer. You couldn’t sell it to a used game shop when you tired of it. You couldn’t even guarantee you’d keep it. If your Wii died and you bought a Wii U, you had to pay again (or pay a reduced “upgrade” fee, a mercy that still required payment for software you’d already licensed). When Nintendo shut down the Wii Shop Channel in 2019, the entire infrastructure vanished. Games you’d “purchased” existed only as long as your specific hardware survived, only as long as Nintendo’s servers allowed re-downloads.
This is what I call the “Ghost” mechanic. These weren’t games in the traditional sense—mechanical systems you possessed. They were emulated states delivered via a digital umbilical cord. Spectral presences that appeared when summoned by the correct account credentials and network handshake. You were paying for the privilege of access, not the object itself. The language of “buying” masked the reality of leasing. You purchased a ghost. Nintendo retained the exorcist’s license.
The cartridge in your attic required nothing from Nintendo to function. The Virtual Console game required their permission to exist.
III. The Permanent Rental (The 2026 Bridge)
The Virtual Console represented the final frontier of Market Enclosure in gaming’s pre-AI era. It proved a business model could be built not just on new production, but on re-monetizing memory itself.
Consider the progression: The arcade cabinet charged you per session. The NES cartridge you purchased once, owned permanently. The Virtual Console game you purchased repeatedly across platforms—Wii, Wii U, Switch—with each purchase granting only temporary, conditional access. The model wasn’t preservation; it was perpetual rental dressed in the rhetoric of ownership.
This matters because the Virtual Console established the cognitive infrastructure for what came after. Before the industry could charge you “Biological Rent”—before AI systems could harvest your attention patterns, before “memory-as-a-service” models could monetize your cognitive history—the ground had to be prepared. Players had to accept that the past itself could be enclosed, that memory had become a renewable resource extraction industry.
The Virtual Console taught an entire generation that nostalgia wasn’t something you owned. It was something you licensed. Your childhood wasn’t yours—it was Nintendo’s IP, available for lease at their discretion, on their terms, through their infrastructure. The games that shaped your formation as a player? You could visit them, for a fee, if the platform still existed, if the servers were still live, if your account remained in good standing.
This is the spiritual ancestor of 2026’s “Memory-as-a-Service” models. The AI systems that now charge subscription fees to access your own browsing history, your own conversation logs, your own cognitive externalization—these are the Virtual Console’s children. They’ve merely extended the same logic from your digital past to your biological present. If Nintendo could charge you to access Super Mario Bros. after you’d already purchased it on NES, why couldn’t Google charge you to search your own email? Why couldn’t Meta charge you to access old photos? Why couldn’t the AI that processed your thoughts charge you to remember what you told it last year?
The Virtual Console normalized the idea that memory is not possession but permission. That history is not artifact but access. That the past is not yours—it belongs to whoever controls the server.
In 2019, when Nintendo shut down the Wii Shop Channel, thousands of Virtual Console games became inaccessible for new purchase. Players who had “bought” these games could still download them to existing hardware, but only until their Wii eventually failed. There was no cartridge in the attic to fall back on. There was only the ghost, and the ghost required permission to manifest.
The NES in your attic asked nothing of you. It simply waited.
The Virtual Console required everything: your account, your network, your platform, your continued good standing in a system designed to expire. You didn’t own the ghost. You rented the haunting.
And the industry learned that we would pay, again and again, for the privilege of remembering what we used to own.