Defining the Next Generation

by [name redacted]

This article was originally intended as a conclusion to NextGen’s 2006 TGS coverage. Then it got held back for two months as an event piece. By the time it saw publication its window had sort of expired, so a significantly edited version went up under the title “What The New Consoles Really Mean”.

So we’re practically there. TGS is well over, the pre-orders have begun; Microsoft’s system has already been out for a year (and is now graced with a few excellent or important games). The generation is right on the verge of turning, and all those expensive electronics you’ve been monitoring for the last few years, half dreading out of thriftiness and secret knowledge that there won’t be anything good on them for a year anyway, will become the new status quo. Immediately the needle will jump and point at a new horizon, set around 2011, and everyone will start twiddling his thumbs again. By the time the drama and dreams resume, I’ll be in my early thirties, another American president will have served nearly a full term – and for the first time in my life I really can’t predict what videogames will be like.

There are, of course, a number of common factors in all three “next-gen” systems; together they hint at a sort of mass consensus about what all the major companies consider important in videogames – or, at least, what they feel the biggest sticking points are with the industry and medium, as they stand in 2006. What makes prediction difficult is that all the players have their own ideas as to how to address the issues at hand – and humans being humans and money being money, it’s hard to say for sure which approach will actually win out, to dictate the next ten years of code and conquest.

The Road Behind

Thus far, we’ve been on a pretty straightforward path – if more by chance than any supreme plan. The history of home game consoles, Odyssey to Xbox, falls into three rough yet distinct eras, each split into two clear hardware generations: one of establishment (or evolution), and a further one of refinement (or growth). Thus:

ERA1:
a) Odyssey/Pong;
b) Atari VCS/Odyssey2/Intellivision/ColecoVision
ERA2:
a) NES/Master System/Atari 7800;
b) TG-16/Genesis/Neo-Geo/SNES
ERA3:
a) Jaguar/Saturn/PlayStation/N64;
b) Dreamcast/PS2/GameCube/Xbox

Again, the route seems pretty obvious when you look back on it: better resolution, more colors, faster processing, more things on screen. What retrospect doesn’t get you is the existential crises that carved out this path, or what those dilemmas collectively add up to – and maybe hint toward, in the future.

Take the first and most obvious crisis: the Great Crash, which capped the first era and cleared the field for Nintendo’s first revolution. Market glut aside, the circumstances should feel familiar enough: as far as nearly everyone was concerned, videogames had hit a brick wall; there was simply nowhere further to take them. Then Nintendo came along with its new, intuitive control interface and emotionally intuitive software, and the market blossomed again, burning with new inspiration and direction. Then, following a decade of elaboration on Nintendo’s ideas, the industry hit another problem: it had brought 2D character-based games, in the mold presented by the NES, about as far as they would go.

Indeed, at that time we could have moved on to put more sprites on screen, and drawn them in a higher resolution – still, aside from making the games prettier, the impact would have been minimal. At a certain point the law of diminishing returns kicks into effect, and each incremental adjustment becomes that much less perceptible or relevant as a measure of potential. That the Neo-Geo was sufficient to last for over fifteen years in a competitive arcade environment certainly seems illustrative. (That most people who buy HDTVs simply watch standard-definition programs stretched to 16:9, and don’t seem to either know or care about the difference, should also get a person thinking.)

Apparently sensing this, and in turn straining to find a new future, press and analysts puttered for a few years with “virtual reality” and “multimedia” and optical media and voxels and polygons, before finally finding a combination that sort of worked, settling on an ugly compromise of 3D polygonal space interspersed with CG movies, mostly stored on optical discs. Since these were still videogames, and everyone’s primary concern was how games might look different in comparison to Sonic the Hedgehog (or even Donkey Kong Country), Nintendo’s old controller and design bible were retained – however awkwardly they fit this new hybrid gamespace that had been birthed, with its new needs and potentials. In the short term, analog control smoothed things over a bit – though it wasn’t really until the Dreamcast and PS2 that this space became intricate enough to do much of merit with it.

Now, as of this waning generation and within these weird guidelines we’ve drawn, we are able to represent pretty much anything we want (including high-res 2D sprites) – which leaves us in a bit of a pickle. It means we’re at another standstill; basically the same one we suffered ten years ago, and ten before that. Save some sprinkles, what we’ve got is about as perfect as it’ll get – and yet there are still so many inherent problems and limitations to address: 2D controls and display, operating on a 3D space; the inadequacies of polygons as a construction material; issues like whether cutscenes even belong in a videogame, which seem more a facet of how we broadly interpret and act upon our role as audience than simply a software issue. Do we try to face these issues, or do we go off in another direction altogether to escape the current muddle – or do we keep plodding ahead with our heads down, pretending that the medium and the industry as they are now are effectively immutable?

This is the awkward legacy we’re left to scratch our heads over – a clear trot back, yet an unsure and inelegant journey forward, forged more through improvisation and the odd shrug than by any grand map or mission. The question naturally arises: for all the power of intuition, by what measure are these solutions that we keep finding reasoned the “best”? When we identify or solve a problem, however obvious it might seem, how do we know that it is a problem – and what does it mean for that problem to be solved?

The Business of Change

On the most practical, business-oriented level, what’s been going on is that the industry has thus far moved in cyclical phases of growth and refinement, propelled by the introduction of new design fads, themselves usually enabled by new technology. One hardware generation is spent familiarizing the market with that technology and the design framework that it facilitates or suggests; a second generation cashes in on that familiarity, by delivering more and more sophisticated iterations of the technology and thereby more nuanced software to take advantage of it. Once the given hardware and software trends have reached their pinnacle of differentiation (after which the law of diminishing returns kicks in), their innate novelty begins to wane. The fads grow stale, the natives grow restless, and either a new fad kicks in to make the product line seem interesting again or… well, 1984. More or less.

The big issue here is the industry’s current reliance on limited gimmicks – be they new control devices or 3D polygonal space or appealing cartoon characters – to drive sales and spur financial, creative, and audience growth. Fads fade; gimmicks wear off – and so far, the medium and the industry have yet to transcend them. When each reaches the point where creative expression and consumer interest can be reliably assumed without a forced schedule of proprietary maintenance, each will have reached a mature state. So long as the games that a person can make or play rely on the collective quirks of technology and business, videogames and the system guiding their development and delivery will remain in constant flux as they strive toward an as-yet-undetermined ideal.

So what is it we’re trying to accomplish, with all of these generational shifts – driven by gimmickry and dazzle yet necessary to keep the industry from stagnating? Maybe a good start is to look at what decisions we’ve collectively made in the past, and to try to pick apart the reasoning behind those choices.

The Mushroom Man

When the NES came along, it introduced a slate of reforms broad and subtle enough that it’s odd they aren’t analyzed more closely more often. There’s the price fixing, of course, and the “quality control” system that ensured – more than anything – supreme brand control on Nintendo’s part, aiding in their marketing spin that portrayed the NES as a lifestyle unto itself. There’s the decision to make the NES look like a random hunk of home video equipment, rather than a “game console” as such. There’s the NES controller, which worked so well at what it did that for twenty years no hardware manufacturer has dared diverge from its essential template – so well that almost nobody questions its ultimate efficacy as an input device. There’s Miyamoto’s storybook-influenced approach to the medium, which emphasized videogames as a storytelling platform. What’s missing is how all of this junk adds up.

As with Pac-Man before it, the NES and its carefully managed stable of software were meticulously engineered to attract people who didn’t care about “videogames” as such – if by “videogames” you meant faceless contests of skill like Space Invaders and Centipede, the sort of thing you’d find in dingy back rooms that mid-’80s adults were convinced were all drug dens (and hey, maybe a few were). By comparison, Super Mario Bros. was apparently full of redeeming value: an iconic, kid-friendly protagonist, bright colors, a long journey through a defined world that you could revisit as often as you wanted – a world with its own rules that it gently taught you if you simply followed your own curiosity. It was an easy game to identify with on a human level: kids and adults, boys and girls, all could enjoy it equally (so long as they were right-handed). Likewise, the controller made the game extremely simple to play: simply clutch the cross-pad with your off hand, and hit the action buttons – when required – with your primary hand. The game made sure you got used to it soon enough.

The game didn’t play like a “video game”, as people broadly understood the term, and its playing equipment – the NES – wasn’t even sold as a traditional game console; Nintendo euphemistically called it a “control deck”, the controller a “control pad” rather than a joystick, and cartridges “game paks”. The quality control – such as it was – ensured that, in those simple days before a mainstream gaming press, a parent could easily pick up any game on the shelf that looked interesting, and it would confirm most of the reasons the family chose the NES to start with: reasonably well-made, vaguely culturally redeeming interactive tales that the whole family could (at least in theory) get behind. The NES pulled videogames out of the geek ghetto and made them acceptable (and maybe halfway interesting) dinner table discussion.

All Tomorrow’s Parties

Flash forward to the mid-’90s. The Internet is starting to creep into the public consciousness, as pundits everywhere put William Gibson on a pedestal as a modern-day H.G. Wells for predicting the whole concept of cyberspace. Problem was, cyberspace was still just a media myth! Didn’t stop Hollywood and TV Land from going tech ga-ga, though. Feed in recent advances in CGI special effects – the morphing in Terminator 2 and that Michael Jackson video, the creature animation in Jurassic Park, animation of such sophistication that full-length features like Toy Story were just around the corner – as well as mass storage media like CD-ROMs, huge enough even to store full-motion video, and the public imagination goes nuts. Oh yeah, and then there’s Wolfenstein and Doom, blowing the cultural mind and proving once and for all that we’re all only five years away from being Max Headroom.

So hey – videogames! Now is the time to put away our toys, as the future is before us, and it’s 3D – just like real life! And it’s sometimes got real people in it, just like movies! And look, we can afford SGI workstations and render cutscenes that look just like Hollywood! Even if the game can’t possibly look this good, it’ll give you a good idea what that blob is that you’re steering your stick figure toward. And hey – now that we’ve got all this storage space, we need to fill it with something!

As it happened, all this nonsense fell on our heads at about the time that Nintendo and Sega were struggling to figure out what to do next – how to up the ante. The wind was so strange, and things were happening so quickly, that when Rare chose to use an SGI workstation to render a few sprites and background elements (instead of merely drawing them by hand), people didn’t understand that this wasn’t some magical new 3D technique that the Super NES was able to achieve. People actually bought Night Trap, and thought Mortal Kombat was amazing because it had digitized actors in it. “VR parlors” opened up here and there around the country, allowing people to wander around in an inane 3D space and… not really do much. And everywhere, from all angles, each with their own agendas, the game industry was under pressure to break out of the cartoon world and do something adult; something that reflected the real world, in all its three-dimensional, gritty, realistic detail. People wanted the future and they wanted it now, and videogames were the medium to deliver it.

Step in one pissed-off Ken Kutaragi, burning with rejection and ambition, to deliver where Sega and Nintendo were busy behaving like the regular cast of ABC’s Lost. “Here’s your friggin’ future”, Sony said, and flopped down an extremely generic, kind of cruddy piece of electronics meant to do nothing except render 3D and stream cutscenes from a CD-ROM. It was expensive, yet not prohibitively so – and significantly, it wasn’t by either of those other bozos who represent the old face of videogames: the old order, who you can’t rely on to deliver the future. Sony delivered what the culture wanted of videogames; it gave people ostensibly “adult”, realistic entertainment that wasn’t like anything anyone had associated with videogames before. The PlayStation was cool; it spoke to the Nintendo generation, now in its teens, and to all the MTV hipsters out there. It took the public’s imagination by storm by promising a more familiar emotional experience – one that fit into people’s social lives in a way that Super Mario World could not. If the result is ten years of confusion, well. What you pull out of the crucible depends on what you put in.

Hearth and Soul

The important point is that Sony succeeded for mostly the same reasons that Nintendo did a decade earlier – if reflected through the funhouse mirror of mid-’90s tech hysteria. People were interested in the basic principle of videogames; it’s just that what they knew as videogames no longer impressed them. Whereas the first era of videogames was perceived as the province of back-room weirdos and autistics, the second era was associated with childhood; with cartoon colors and mascots and Captain N and Fred Savage wearing a Power Glove. Videogames simply didn’t seem as relevant or interesting as they had ten years earlier. I know I, for one, stopped playing videogames for five years; they just didn’t seem to be going anywhere – and hey, I had important teenagery things to worry about.

The issue is pretty simple, really: what people want from videogames – as from anything they allow into their lives – is for the medium to constructively contribute to their lives. At the same time, that is plausibly the ultimate goal for the medium: to express something important and useful to its audience – on at least some level. That level might be trivial as hell – say, a sense of accomplishment at having achieved the top score in Space Harrier. Still, even the simplest or most fleeting emotion – if honestly earned – has its intrinsic value. It’s just that the more trivial and narrow the contribution, and the harder it is to earn, the narrower – once you get past the novelty hump – the potential absolute appeal of the means of delivery.

Identifiable characters, involving gameworlds, goals that involve more than simple reflexes, and a more casual method of input all result in a broader appeal than does abstract twitch gameplay with intangible goals. An identifiable sense of space and scale, more realistic scenarios, and familiar touchstones like voice acting and movie sequences result in a broader appeal (if not necessarily always for the right reasons) than a flat cartoon world with more abstract mechanics. Each new era has added something more tangible, more familiar to grab onto – albeit largely to only one side of the scale.

See, videogames are a study of cause and effect; stimulus and response. The game provides a framework for action and stimuli to react to; the player reacts, affecting the gameworld. Each action on the player’s part in some way changes the circumstances for each subsequent decision, resulting in a conversation between the player and game – ideally a fluid, engaging, and inspiring one. If our ultimate goal is to expand the expressive potential of the medium – the ability both for the games to communicate ideas and for their audience to be moved by their experiences with the medium – then that means expanding the range and nuance of causal interaction between the game and the player.

Curiously, the vast majority of the additions since 1985 have been merely to the stimulus end; they’ve been concerned with presentation, adding new chunks to the visual and audio surfaces to lure people in. By comparison, there’s been relatively little consideration for what people will do once they’ve been attracted. I’m not just talking about controllers – NES pad or analog control or Wii remote – though certainly they’re a part of it. Just as significant is the way that people, and thereby culture, react to – engage with – videogames in the course of their normal lives.

There have been some superficial attempts at extending the utility of game consoles, generally through the Aiwa compact stereo approach: allowing users to play normal compact discs or DVDs on the machines. The famous side effect of this strategy, of course, is that zillions of people bought the PS2 simply to serve as a cheap DVD player. The result: for a long time, the best-selling piece of PS2 software in Japan was The Matrix. A bunch of people bought the razor; nobody was interested in buying blades. Although this did extend Sony’s user base, the rationale has nothing to do with videogames, much less videogames adding to people’s lives. Sony might just as well have packed a hundred million yen into each box so everyone who bought a PS2 would be rich. That would have been interesting, yet it wouldn’t have said much about the PS2’s merits as a game console. (It’s telling that both Nintendo and Microsoft balked at making their last-gen consoles DVD-compatible out of the box; unlike Sony, the electronics giant, that simply wasn’t the business they were in.)

When I’m speaking about people’s normal lives, I mean the part of their lives spent waking up, eating breakfast, talking with their families, going to work, sitting on the train or bus, hanging out, coming home and helping the kids with their homework, exercising, making dinner, having sex, and going to sleep. You know – life. If you want to know how to infiltrate people’s lives, look at handheld games. Look at cell phones. If you think it’s amazing that the Game Boy and the DS have done so well, you’ve never visited Japan – where a person might literally spend half his waking day on the train. Commuting aside, both the Game Boy and DS are simple in their ways: durable and intuitive – and convenient, the DS more so. Furthermore, the most significant DS software tends to focus on topics relevant to normal people (as compared to insane videogame fans): keeping your brain fit, learning to cook, learning to be less awkward around people. There are still your Castlevanias and Mario games, of course – just as 2D fighters and scrolling shooters will always be made so long as there’s an audience. They’re just not as important as they used to be. Or, to be more precise, their importance has been put into perspective. As far as cell phones are concerned – hey, who cares if the games are almost universally garbage. The role the platform plays in people’s lives should be self-evident.

Likewise, it should be self-evident why a person should bother to put in the (actually rather precious) energy to play a videogame. The lower the learning curve, the more flexible the player’s input, and the more appropriate the game’s response, the greater the obvious reward. As I often say, the worst thing a videogame can do is assume I’ve got nothing better to do with my life than play videogames.

When videogames fail to strike a balance – when they fail to offer players a level of response roughly equal to the sophistication of their stimulus, and so fail to fill an important role in people’s lives (or, conversely, when a game has relatively little to say despite all of the player’s input – which in practical terms is more a software issue) – then the medium is, in effect, missing its target. The result is a narrowing and a limit in appeal (again with the shooters and fighting games), leading to a steeper “novelty hump” – and once people are over that hump, the fad is over.

The Road Ahead

So now, as we again shuffle awkwardly from one era to the next, what are Microsoft and Sony and Nintendo doing to combat the hump? Or, more precisely, what do they see as the current shape of the hump? Everyone seems to perceive some big problems with the medium as it stands. What I find telling is that, despite the often dramatically different approaches taken by the three companies, the reasoning behind their decisions seems extremely similar across the board – suggesting that there’s a broad, if implicit and vague, agreement about what needs changing.

Paramount seems to be an acknowledgment of the social stigma currently attached to videogames. Brushing aside the hyperbole about games in Hollywood, about videogames making more zillions of dollars than motion pictures, and about how many game consoles are out there today – since we know that the actual percentage of American homes with a game console has not increased since the NES era – videogames are about as much of a public joke as they’ve ever been. If anything, we’ve backpedaled over the last ten years or so.

Back when Nintendo and Sega were in charge, at least videogames had an inoffensive, marginally family-friendly, socially redeeming overtone to them. Now the relationship between videogames and culture is nearly back to the early ’80s. We’re back in the “geek ghetto”; back then it was arcades as drug dens, and videogames themselves as a weird addictive force; today it’s Columbine and murder simulators and Jack Thompson. Of course it’s all nonsense – though that doesn’t mean there aren’t reasons for the misunderstanding. If videogames can be vilified and picked apart, that means that, in the public eye, they have essentially failed to make their case. And if the value of videogames is only evident to the kind of freakos who hang out in videogame forums, then clearly they have failed in at least some aspect of their social mission.

The point, of course, isn’t to make videogames immune to criticism; it’s simply to account for the existing criticisms (or rather whatever might be buried between the lines), as concerns what social or personal human demands and needs might not be adequately addressed. To wit: Sony has decided to explore distributed computing, using online-connected PS3s as a loose network, with the goal of investigating important scientific and medical issues. (Very nice – brilliant, even – if beside the point.) Microsoft has chosen to engage new developers with its XNA system, hinting at the ideal that anyone should have the option to express himself or herself through game development. (Extremely noble – and indeed significant – if extremely narrow.) Then there’s Nintendo, with its Wii Channels. (Much less grandiose, though more broadly applicable on a personal level.)

All three systems are intended as “set-top boxes” of a sort, meant to serve a variety of functions within the household – the implication being that you’re buying something far more significant to your life than just a game console. (As for how well – and by what means – they intend to fulfill that promise, you may come to your own conclusions.) All may be (and are indeed focused around being) connected to the Internet, resulting in at least some sort of broader social connection.

In general, the key concern of the new generation seems to be to refute the suggestion that videogames are an inward-turned, alienating influence on a person’s life. The assumption seems to be that it is unwise to expect people to accept that videogames are important in their own right – though if you can convince them that videogames serve a role in their real lives, then perhaps they’ll take the plunge. Microsoft has taken this idea of adapting to the user, rather than asking the user to adapt to the machine, to a curiously literal level, with its faceplates and custom soundtrack options for every Xbox 360 game. It’s like Burger King’s “your way, every day” theory – the assumption being that people know what they want, and that what they want comes out of a fashion magazine. Still, interesting.

Just as interesting is the focus of all three platforms on the downloading of “classic” and small-scale videogame content – implying at least some intention of creating a sort of “universal present” for the medium, whereby past works are considered just as viable and significant, for their part, as current fare – and indeed where modern games can be directly cross-referenced with their predecessors, such that any advances (or lack thereof) are made readily apparent. Likewise, games that exhibit fascinating concepts yet could never be trussed up enough to be considered major retail releases – like Geometry Wars or Jenova Chen’s Flow – are given broad exposure, further facilitating the exchange of ideas.

This exchange of ideas – between East and West; big and small developers; developers and the buying public; between players and other players; between players and their families – seems to be another major guiding principle. Tie it into the first point, about broader social perception of the medium, and the second, about making videogames more broadly applicable to people’s lives, and start to think about issues like the Wii remote (along with Sony’s half-response, and – to some extent – the 360’s wireless pad), and it seems clear that the zeitgeist this time around is communication.

Though their approaches could hardly seem more different, in an interview with gamebrink.com Shigeru Miyamoto appears to speak for all three manufacturers: in order to truly draw people into a videogame, “it will be necessary to create a console that gives you the feeling that it’s part of your daily life.” Hardcore gamers will scoff, of course; their resistance to change, or even to broadening the market, is notorious – as is their paranoia that their favorite hobby is slipping away from them. On wii.com, Satoru Iwata addresses these fears – again, with the voice of a generation:

It does seem that there is a level of misunderstanding among some people. I am concerned about this. It’s true that Nintendo is reaching out to non-gamers, but this does not mean that we are ignoring game fans. I believe that if we don’t make moves to get people who don’t play games to understand them, then the position of video games in society will never improve.

Consensus in a Sense

So what does it mean today, to be a next-gen console? Same as it’s ever meant, at a change of eras: taking another sharp turn to redress a perceived lack of balance, correcting any perceived inadequacies in the status quo. Bringing the medium one step closer to what it ideally can be, someday, when we’re all bright and organized enough to know what needs to be done, and skilled enough to pull it off. We’re not there yet – we’re nowhere near there, and no weight of new-fangled controllers or interfaces or display technologies will speed the way. We just have to take our time, and see what works, and keep our eyes open for hidden genius.

Still, looking at the current clash of opinions – the startling progressiveness of Nintendo, next to the conservatism of Sony and the eagerness to please of Microsoft – is a little frustrating. Imagine, if you will, where we’d be today if everyone were to communicate and compare notes, rather than play this dumb game of one-upmanship. Imagine a videogame standards commission, headed by a board of dignitaries drawn from all areas of the industry – with focus, of course, on actual game designers – formed with a mission, and a detailed yet flexible long-term plan, to refine the medium for the good of all. It’s not impossible!

In fact, better than imagine it – let’s do it! The main difficulty will be convincing the big hardware manufacturers to relinquish some creative input over their own product. If measures are taken to ensure that everyone – including small, quirky dissenting voices – has significant input, and that all decisions are made following extensive discussion, and according to a certain irrefutable set of primary standards and ideals, I think it’s plausible to convince reasonable figures like Satoru Iwata and Ken Kutaragi – both visionaries in their own right – of their own vested interest in such a proposition. As for American companies like Microsoft, hey, consensus is their middle name! Just look at the Xbox controller!

This idea isn’t even a new one, in principle. If you recall, Trip Hawkins had the same thought at the last change of eras (with admittedly disastrous results). And look at Sony’s attempt to rebrand videogames in its own name and image: when you boot up a PS3, the only time you see a PlayStation logo is when you choose to play a videogame (as compared to a Blu-ray movie or DVD or compact disc or what-have-you). The implication is clear, as is Kutaragi’s intent. He’s a videogame fan; he’s a die-hard gamer. He just wants what he loves. Though its effect is annoying, his desire is understandable enough.

The problem, of course, with the 3DO and with Kutaragi’s draconian unification attempt, is that the focus is all wrong: it’s all on establishing a universal format, or a universal brand – essentially on putting videogames into a single box, with an arrow pointing at it, when they’ve got so, so, so freaking far to go before the medium is even close to mature. What we need now isn’t to limit ideas; it’s to promote discussion. To that end, there’s no good in an overseeing board like the one I’m proposing if the result would be to stifle radical ideas like the Wii or the DS – or heck, even the Virtual Boy, at least in principle (if certainly not in execution). If such a board is to exist, it needs a built-in way to account for these kinds of ideas in accordance with agreed-upon goals and principles for the medium.

Likewise, the end result of such a board would not necessarily be a universal format; especially in the short term, the board may well decide that the industry’s best interest is in pursuing split paths that suggest equal potential. The goal in this case would be to ensure that each idea receives as much support as required to refine it to such a point that it stands as an acceptable representative of some key aspect of the board’s guiding principles. Likewise, with everyone’s ideas out in the open, it will be that much easier for third parties to understand and develop to the strengths of the hardware in question. There may even be loosely imposed guidelines to ensure equal and appropriate support for each given platform.

I think we’re getting old enough that we can handle some cooperation. There have already been plenty of hints in that direction, what with Capcom and SNK’s kinship, and Nintendo’s farming out its franchises to companies as diverse as Sega and Namco and Capcom. Honestly, I don’t think the industry can get much further unless we do band together and start watching out for all of our best interests. If we get ourselves in gear, then maybe in ten years we’ll be in a place to make some deliberate, educated decisions about our collective future. Cliché as it might be, a united front is a strong front – and we’ve got a hell of a lot to offer the world.