Industry Vets Never Metagame They Didn’t Like

by [name redacted]

Two teams, split up amongst Eric Zimmerman of Gamelab, Warren Spector, Marc LeBlanc of Mind Control, video game theorist Jesper Juul, Ubisoft’s Clint Hocking, Jonathan Blow, and USC Professor Tracy Fullerton, moved their virtual quarters around the board to make thematic comparisons between often highly-contrasting games.

Has World of Warcraft created a more intense subculture than Asteroids? Is Guitar Hero more culturally sophisticated than Parappa the Rapper? Is Wipeout more realistic than Nethack? Is Oregon Trail more emotionally rich than Virtua Fighter? (See below for answers.)

( Continue reading at GamaSutra )

Fable is Love; Love is Puppies

by [name redacted]

This article had a strict deadline; I rushed to finish it so it could go live before the whole Internet had reported on the demonstration. And then… I guess it slipped through the cracks. Oh well! Here it is.

As another note, I think this was the meeting where Molyneux creepily offered his audience cookies. I was the only one to perk up. Hey, cookies.

Peter Molyneux was in loopy spirits, discussing his new game. Who knows how many times he had been over the same territory that day, though he seemed to enjoy spinning his tale, finding the right notes to highlight, the right places to pause for dramatic tension.

“Sequels are tricky things,” Molyneux started off. “Not my specialty. The sensible approach is to give you more things you like, better.” More weapons, new monsters, twenty times the land, guns! When Molyneux was asked to provide a sequel, he set off doing demographic research to see just what people wanted of him anyway. Then he opened “the doors of hell” – the online communities – only to quickly, in his words, slam them shut again. There were so many demands, so shrilly phrased – “so many people mortally offended by the design choices in Fable 1” – that the best Molyneux could do was sift out the most common complaints.

Mobile Session Itself Goes Massively Multiplayer

by [name redacted]

Going by the online schedule, Gamevil, Inc. general manager Kyu C. Lee was to spend an hour chatting about mobile MMO games and communities, by way of his mobile MMO game, Path of a Warrior – in North Hall room 124, at 2:00.

In practice, the session was held in a completely different building and Mr. Lee was absent. With the blessing of the sound and technical staff, however, the small turnout soon took command, converting the session into an informal comparison of notes.

( Continue reading at GamaSutra )

Ubisoft’s Adam Thiery Talks Camera Theory

by [name redacted]

Adam Thiery, a designer for Ubisoft Montreal, gave a short talk today on interactive cinematography. His basic point was that game cinematography is player-driven. Simple as it may sound, real application is always trickier. One of the big sticking points is that camerawork, being player-driven, is limited by current understanding of game design and player psychology.

A modern camera knows when to change state, explained Thiery. In Ghost Recon Advanced Warfighter, when the player is pressed against a wall, the standard tracking camera shifts from a behind-the-character perspective to show the player character left-of-center, focusing the player’s attention to the right, around the given barrier.
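
This isn’t Ubisoft’s implementation, just a minimal sketch of the kind of state change Thiery describes: a follow camera that eases into an offset “cover” framing when the player presses against a wall. All names, offsets, and blend values below are invented for illustration.

```python
# Hypothetical sketch only: a camera that switches framing state when the
# player takes cover, in the spirit of the GRAW example above.
from dataclasses import dataclass

@dataclass
class CameraState:
    offset_x: float   # horizontal framing offset; negative pushes the character left-of-center
    distance: float   # how far behind the character the camera sits

FOLLOW = CameraState(offset_x=0.0, distance=4.0)    # standard behind-the-back tracking
COVER = CameraState(offset_x=-1.2, distance=2.5)    # character left-of-center, view opens to the right

def target_state(player_in_cover: bool) -> CameraState:
    """One condition decides which framing the camera should move toward this frame."""
    return COVER if player_in_cover else FOLLOW

def blend_toward(current: CameraState, target: CameraState, amount: float = 0.1) -> CameraState:
    """Ease between states rather than cutting, so the shift reads as a single camera."""
    lerp = lambda a, b: a + (b - a) * amount
    return CameraState(lerp(current.offset_x, target.offset_x),
                       lerp(current.distance, target.distance))
```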

Thiery said that a good game camera is a matter of functionality, rather than cinematography – yet given that, it pays to consider the visual composition within each camera state. The reason is that any action a player takes is generally guided by what he has been shown to do.

The original Half-Life takes place in a disorienting sci-fi setting; to drive the player forward, it uses huge stripes painted on the walls, like a trail of breadcrumbs or an arrow. Though this is an artificial and somewhat clumsy application, that same principle applies to any 2D screen composition.

( Continue reading at GamaSutra )

Swash awash

“Piracy”, in the intellectual property sense, tends to have a regulating effect. If people enjoy a work enough, they tend to want to own their own copy — preferably a copy presented in the best possible quality. That’s why people buy DVDs after seeing a movie or watching a TV show, and that’s why “free samples” exist. You’d have a much harder sell by avoiding the initial courtship and just asking people to buy an entertainment product, sight unseen. To an extent, the idea is kind of ludicrous.

The downside from a marketing perspective is that if people don’t enjoy a work, and they know that ahead of time, they tend not to buy it. As a result you get these interesting effects where, say, Britney Spears loses sales to “piracy” because people don’t consider her music worth paying for, and smaller, more interesting bands are often “made” by “piracy” because people who might like the band’s music are able to hear it, hang onto it, and judge it over an extended period.

From what I can see, the only way this could be a bad thing is if you’ve invested a ton of money in something culturally vapid, and you expect your dividends nonetheless. I’m sure the natural balance of the universe would feel powerfully unfair in that instance, and you’d be tempted to lobby the government to draft all kinds of arbitrary regulations to ensure you get the result you wanted. If you’re offering something of real merit, however, there doesn’t seem much danger of losing profit. Indeed, the more people who know, the better.

To boil it down, people generally:

1) want to own things that they enjoy
2) are willing to pay for things they consider of value
3) tend to value “legitimacy”, to a point.

If you place before a person a dirty CD-R burn of an album and a full, legit copy, both for free, I think it’s fair to say that most people will choose the legit one. If instead you charge a reasonable price for the legit copy, a fair number may still buy it if they already know that they adore the music on the disc, they know they won’t find the CD for a better price, and they can afford it at the moment. If you charge an unreasonable price, or don’t allow people to test the album out first to see if they like it, the free, dirty copy will look all the more appealing by comparison.

All that things like Youtube and P2P networking and tape-trading do is bring a level of honesty and clarity to the exchange. So long as you offer something of merit, at a reasonable price, and your customers know that they like it, you’ve got nothing to fear. If anything, this kind of distribution serves as free advertising; all it should do is increase your potential customer base, to a certain threshold. (After all, every work has only a certain natural breadth of appeal.)

So yeah, whatever. Go ahead and watch what you will, how you will. Make up your own mind what you value, and to what extent. Then go ahead and purchase the things you like, provided you’ve the money after you buy what you need. There’s nothing dishonest or unethical about any of this. Anyone who tries to tell you differently has an agenda to protect.

Further Refining the Definition

Art is a means of communication through implicit, rather than explicit, symbolism, meant to appeal to the subconscious and intuition rather than to the conscious and reason.

This communication can be conducted through any medium. The fun part about it: intent needn’t even be a factor; merely communication. So if the recipient of a manmade work reads in it something that was not consciously intended by its creator, that reading (presuming it’s genuine) is as legitimate an interpretation as any based upon a deliberate message. Perhaps more so.

As regards non-manmade works — a sunset, a rainbow, an orangutan; whatever natural beauty you might appreciate — that’s somewhat different in the sense that, provided you aren’t subscribing to a supernatural interpretation of the artist (say, God), it truly is a one-sided conversation.

I suppose the act of receiving an artistic message would best be described as inspiration. One may be inspired by anything, of course; art is simply a manufactured way of appealing to that impulse.

The Birth of Excellence

So the broad consensus is that television has finally reached its golden age. Somehow, magically, it doesn’t necessarily suck anymore. People have figured out how to use the medium to do something substantial and engaging, and while not every show follows through on this potential, or does it well, the artistry is loose — and some damned excellent things have been coming of it: The Sopranos, Lost, Battlestar Galactica. Most people seem to trace this evolution down to the mid-’90s, in particular to The X-Files. A few nerds throw around Babylon 5. I recently saw a proposition that it was a three-step process begun with Twin Peaks (showing that something substantial could be done with the medium), developed in The X-Files (showing that an involving long-form narrative was possible), and refined in Buffy (moving that narrative focus from plot to character development).

What strikes me as just as important, though, is the development of DVD. Again we can thank The X-Files for establishing the precedent of DVD compilations; now with shows like Lost, and shows developed straight for pay channels like The Sopranos, that otherwise have no direct commercial value, television is produced with the end user — and an end product — in mind. Whereas the ’90s shows demonstrated the artistry, DVD provides a framework; a structure. Shows are designed to be cohesive, coherent long-form narrative units that people can pull off their shelves and watch, enjoy, as a single work, with the actual broadcast little more than a taster for the eventual consumer product. I’ve even heard cases of networks developing and showing series at a loss or near the break-even point (though I’m scraping my mind to remember which ones, and where I read this), with the long-term expectation of DVD revenue, once the ratings and word of mouth have made their rounds, to make up the balance. As a result, TV shows are more and more made as long-form works that can be watched over and over, rather than for serialization.

I’ve said before that television is, in theory, the novel to film’s short story or novella. Whereas films are self-encapsulated, short narratives with a single premise, meant to be taken in at a sitting, both novels and television are serial formats. Many novels even start off as a series of short stories (Catch-22), or as newspaper or magazine serials (Musashi, anything Dickens). It’s only when they’re compiled into a single, tangible volume that they are assessed and evaluated as complete, legitimate works. And though there is a certain elegance to the short story or novella, revolving as they do around a single conceit, there is a reason why the novel is considered the true test of literary skill: as a serial, it has the scope and structure to explore plot, character, and theme with a nuance impossible in the shorter works. Of course, most novels still suck; that’s what happens, though.

What DVD has done is allow television that objective, tangible distance. Long-form works now can be compiled and assessed as a whole, in the same sense that they provide a target structure for the narrative. It’s just a strange coincidence that it happens to have come around immediately after the artistry. I think it’s the final critical step for the medium, in that previously that objective distance was impossible to attain. Even with the occasional VHS release, television was transitory. There’s a reason why the BBC archives (among others) were systematically wiped; just as life doesn’t become a story until it has an ending, a serial doesn’t become a novel until it’s bound. You have to be reminded to value the fleeting because it is fleeting, rather than ignore it because you can’t grab hold of it and place it on an altar.

Film, it got its act together years ago. Decades ago. Before sound, even — though it wasn’t until the New Wave that it got all self-aware and critical. Reason? It’s already self-encapsulated. You don’t need it bound; you don’t need it on your shelf; you don’t need to have it compiled for you, because it’s brief and simple enough to be instantly comprehensible, and easily exploited. (Relatively speaking, that is.) I think there’s a reason why in film the main artistic force is perceived as the director (Charlie Kaufman aside), whereas with television it tends to be the writer. Each dictates the essential narrative structure of the work. Since film is comparably simple and short, each shot, each visual juxtaposition is of greater narrative importance. Since television sprawls, the basic narrative block becomes the chapter rather than the scene — meaning an increased reliance on script as a source of content and momentum, rather than rote imagery.

Funny thing is, soap opera was way ahead of its time. All it really lacks is sophistication and an end structure — neither of which were even developed until a few years ago.

Losing the plot

I think the thing I enjoy about the black and white period of Doctor Who is that it’s so much more ambitious than the later eras. Ambition returns in a form during the ’80s, though for different reasons and to different results. There’s a distinct difference, though, between the day-to-day approach to stories in the ’60s and that of the ’70s.

During the Hartnell era, nearly every story was high-concept speculative fiction of some sort. Here’s the story where everything is as alien as it possibly can be; here’s the one where the TARDIS and its occupants shrink; here’s the one where we revisit a location hundreds of years later, to see the consequences of the Doctor’s actions. Even when they’re not speculative, they’re still high-concept: here’s the one introducing a meddling counterpart to the Doctor; here’s the musical; here’s the ridiculously long and serious epic.

Troughton curtailed that trend a bit, with a bigger focus on pulp “monster of the week” storytelling. There was still room for a few speculative stories, like an acid trip to the land of fiction or to a place where all the wars in history are being fought at once. In general, though, energy was devoted to creating new creatures to frighten the kids — preferably recurring ones. Repetitious, yet fun.

Pertwee turned the show into a spy show with aliens and mad scientists, with the Doctor as the hero with the cape and the gadgets; every week there was a new evil scheme to foil. The one story that’s really stood out to me during this era is Inferno — a story where the Doctor visits a parallel Earth in order to witness what happens when he fails, then has to relive the nightmare back in his own reality. It’s almost like a Twilight Zone episode. Though the costumes and set design kill it for me, Carnival of Monsters also is pretty imaginative; it deals with the TARDIS materializing inside a miniaturized habitat trucked around the galaxy by a couple of carnies. Likewise, The Green Death is basically an allegory for the environmental movement.

Tom Baker is kind of where the series loses itself. The early, Hinchcliffe era is dominated by pastiches of whatever Hammer horror movie happened to be in theaters at the moment: travel from Sherlock Holmes Land to The Mummy’s Tomb Land to Frankenstein Land. Slap onto that a deliberate attempt to arbitrarily rewrite series continuity for short-term dramatic ends, and you’ve got a horrible mess — one which, to note, the hardcore fans generally consider the “golden era” of the show. It’s horribly dull; instead of putting creative energy into new concepts to explore, or even into creating new and original monsters to play with, or even-even new threats to London every week, this era funnels its energy into tearing down and rebuilding the series itself — whereas the stories framed by this new and hypothetically improved series are both unoriginal and told in the driest, most self-serious manner possible. The arrogance and ill handling of this era, more than anything, are what bother me about the ’70s stories.

After the BBC dumped Hinchcliffe and Holmes, with the suggestion that the show pull its head out of its ass and do something positive for a change, Graham Williams took over and generally lightened the tone of the show, turning it into a campy romp. He introduced K-9 and Romana, and hired Douglas Adams first to write for then to manage the writing of the show. The series became loopy and irreverent, and although the production values began to go down the toilet, at least the series was original and vibrant again.

Baker’s final season coincided with a complete change of direction for the show, with the oft-reviled, usually misunderstood John Nathan-Turner taking over the show. Granted, JNT had a lot of weird ideas about the show; he was a master at getting the show made, not at managing the creative side. As far as he was concerned, that was the script editor’s job. Whenever he was graced with a script editor with a strong plan for the series, the show was nearly as strong as it ever was. When the script editor was an uninspired douchebag who was more interested in squabbling with the producer than in drawing out a plan (or even managing the scripts), the show was about as awful as it ever was. Season eighteen was Bidmead’s turn, and his idea was an entire season dealing with the concept of entropy. The result: an uncannily consistent and well-conceived string of episodes, in some ways harking back to the Hartnell era.

When Davison came on board, the show was still coasting from JNT and Bidmead’s smash debut: full of intriguing experiments, carried by a continuing storyline, and even graced by a historical or two — for the first time since season four. It only took about a season, however, for new script editor Eric Saward to exert his own entropy on the series. Don’t draw out a solid plan, don’t seek out new and talented writers, don’t commission enough scripts, don’t edit what you do have — then see where the show winds up. It’s not that the rest of Davison’s and the start of Colin Baker’s eras don’t present some interesting ideas; it’s just that they’re isolated within a series that doesn’t know what it’s doing or why, and within individual scripts that haven’t received the care they require. By season twenty-two, there’s not a good story in the bunch. It’s this, more than anything, that gives Colin Baker the poor reputation he has — and it’s this that nearly got the series cancelled, for the first time.

After Michael Grade gave the production team a year and a half to get its act together (I believe those precise words were used, somewhere or other), they returned with Trial of a Time Lord. As it happens, Saward had spent that time doing… almost exactly nothing. He and JNT came up with a grand concept for the season; I guess that’s one thing. When production began, however, lo — no scripts! Last minute scrabbling and angst and anger. Result? Colin Baker got fired, and the show received one last warning.

What they did then — besides hire McCoy, who was at least very well-received at the time (even if current fans consider him the antichrist, for some reason) — was install a new script editor. As it turns out, Andrew Cartmel had almost no experience even with professional writing, much less managing the narrative direction of a TV series. What he had was a sense of perspective. His first season was a period of postmodern weirdness that fans couldn’t and can’t tolerate. Still, it was one of the most imaginative and downright intelligent periods in the show — the first breath of fresh air since Bidmead, and probably the most ballsy thing the show had done since the ’60s. Then when Cartmel settled in, watched all the old (surviving) episodes, and got the hang of what had been done before, he made a deliberate effort to bring back the ineffable qualities that he perceived had been lost over the intervening years (read: during the Hinchcliffe era). He put more of a focus on characters and long-term story, and went out of his way to find and nurture the brightest new talent available. Result: if you ignore the production and occasional casting problems, the series ended on a high nearly equal to its inception.

Now, the integrity and vision that Davies brings to the show should be self-evident. With his deliberate focus on “big ideas” (“Queen Victoria, a werewolf, and kung-fu monks!”) as a framework for character development and long-term continuity, it’s like a blend of the best from the ’60s and the late ’80s — albeit lacking a bit on the speculative end.

It’s this, here, that leads me to constantly compare the New Series to the ’60s series — before color, before the watering down and tearing down and budget and ego and focus problems. I seriously think you could watch the first six seasons, then the final three, then skip right to the new series, and not miss much of anything important.

Apply the above discussion to the Big Finish audio range and you’ll also be able to weed out the essential problems there. Whenever they do go for the big, brave, simple ideas — Scherzo or Natural History of Fear or Omega or Davros — they hit gold. Most of the time they’re content just to waddle forward with cookie-cutter plots involving the Doctor and Companion arriving on X world with X political or social problem, that they need to solve. That Big Finish outwardly requests new writers not to specify which Doctor and Companion they’re writing for should give an idea where they’re going wrong. It’s not about ideas or characters; it’s all about plots. Commodity. Words and actions taking up two hours of space, and leaving no one fictional or actual the better.

That’s not to say that Doctor Who has ever been particularly deep or substantial; it’s a children’s TV show. That, however, is all the more reason to be childlike. It’s a series about wonder and fear and finding new perspectives from which to view the world — all presented in the simplest, most digestible form. It’s basically a trainer for how to feel awe and respect for the world around us — and then to subvert it. When it (or anything else) doesn’t hit those goals, the world is deprived a little bit more.

Note for the balconies

I’m just going to say this here for posterity, so I can link back to it in a few years.

Both HD-DVD and Blu-Ray are going to bomb, people. Not as badly as UMD, though that should give you an idea what we’re dealing with. One or both will hobble on for a while as a high-end videophile format; there’s a hole to fill, now that laserdisc’s gone away. As a mass format, though, DVD’s not budging. Not so long as most people don’t even know if they’re watching a TV show in the right aspect ratio, and not so long as there’s nothing wrong with DVD.

People change their ways when they’ve damn good reason to, and not a moment before. Plain old DVD is going to stick around until it’s too unwieldy to maintain any longer — if for no other reason than that there’s too much personal and architectural investment in the format to arbitrarily pick up and switch to something that’s exactly the same except that guy you know who will scream at you for not hooking up your stereo correctly insists it’s somehow better.

For there to be a successor to a format as established and, for its part, as perfect as DVD, it will have to offer something so significantly different and so obviously better in just about every aspect of convenience, simplicity, and quality that there is no comparison between the two. You create something that’s meant to be compared, and you’ve lost before you’ve begun — however nice your product in its own right. Nobody cares! At least, nobody outside the geek ghetto — and that’s the whole issue, in a nutshell.

In conclusion, Sony is fucked.

Gestures and Measures

Yes, I think that’s a decent way of looking at it. All these new, supposedly more “friendly” control schemes aren’t really acting as such. They are still forcing new players to remove their preconceived attachment to, say, swinging a tennis racket, and replacing it with a more standard video game approach in order to get anywhere. They’re essentially just pushing buttons, in the end.

That’s not an issue with the Wii as such, I don’t think, as much as it is with the dumb, overly abstract way things are being designed. What I’ve noticed is that few Wii games either detect the Wiimote in realspace and realtime (as Boxing and Baseball do) or simply use the Wiimote for what it’s worth in added nuance (like an analog stick or trigger, only way more so). Instead, they’re just replacing buttons with gestures and canned animations. It’s frustrating to see — not so much in the end product as in what that product shows about how unable game designers currently are, en masse, to wrap their heads around the bleedin’ obvious.

Red Steel is a pretty good example. Instead of giving the player a sword and a gun, and letting him gradually learn how to use them properly — teaching new techniques and whatnot as the game progresses, staggering out “assignments” of sorts (not literal ones) over the game’s story, to allow players to get accustomed to some key concepts of swordfighting or shooting or mixing the two — you tell him to move the controller like this to make this animation happen, and maybe earn new gestures as the game progresses. What the hell? How could you possibly screw this up?

Though this is one of the more obvious examples, you’ll see this problem in pretty much all Wii games currently available — and indeed, in gamer and press discussion about the system. You can see people straining their imaginations to figure out something to do with the system, and it doesn’t work. Either you get gimmicks or you get phantom buttons. Digital do-or-don’t.

It’s… really not that hard! The Wii really suggests two things: added nuance to traditional games (instead of just doing X, you can do X in any number of ways; the way the game plays changes dynamically to match your body language) and giving the player true first-person control, for all the subtlety that implies, with a minimum of abstraction, over a certain range of motions. The advantage here is the ability to explore concepts with an organicity impossible with merely digital player involvement — again, making people really learn how to use a sword (more or less) rather than simply pressing buttons or making gestures to cause an on-screen character to do something.

Instead of the player’s avatar developing and learning new things as an abstraction of progress, and instead of learning complex arbitrary and abstract gestures (like moves in a fighting game), the player himself or herself physically learns how to produce difficult, subtle actions that have a tangible result in the gameworld to whatever degree of skill the player possesses.

Imagine a fighting trainer. The Wiimote is exchanged for four sensor bands, strapped to each of the player’s wrists and shins, as well as perhaps a belt to provide a center reference point (and perhaps force feedback for when the player receives a blow). The game gradually metes out concepts to the player — not just to improve mechanical technique and to teach new maneuvers; also to improve the way the player mentally contextualizes all of this. It could to some extent teach the art of fighting as well as the science — or at least a reasonable enough facsimile for verisimilitude. Likewise, completely new skill sets with no real-world parallel could be devised for the player — so long as they were produced and could be reproduced in a believable and nuanced way.

Games that involve physical concepts would use the Wiimote physically, as above; games that involve more abstract or intellectual ones would use it more abstractly — closer to how we normally think about playing videogames, except with an added layer of capability. Press forward to walk; tilt the controller subtly forward to jog or run forward; tilt it subtly back to creep; tilt it left or right (while still holding forward) to sway or dodge in those directions. The way this should be balanced, the player shouldn’t be expected to physically, consciously tilt the controller so much as the game should respond to slight changes in the player’s posture — those little subvoluntary movements that we make when we want the avatar to behave in a certain way — go faster, hold back, watch out! Excite Truck sort of tries to do this, though it doesn’t seem to be executed as well as it could be.
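
As a rough illustration of that idea, here is a minimal sketch of how continuous pitch and roll readings might shade into gait and sway rather than toggling phantom buttons; the thresholds, value ranges, and function names are all invented.

```python
# Illustrative only: turn small, continuous changes in controller tilt into
# movement, instead of mapping the remote onto discrete button states.
# Pitch and roll are assumed pre-normalized to roughly -1.0 .. 1.0.

def movement_from_tilt(forward_held: bool, pitch: float, roll: float) -> dict:
    """Return a gait plus a proportional sway for one polling tick."""
    if not forward_held:
        return {"gait": "stand", "sway": 0.0}

    # Gait follows how far the player is leaning the controller, so posture
    # shades smoothly from creeping through walking to running.
    if pitch < -0.15:
        gait = "creep"
    elif pitch < 0.2:
        gait = "walk"
    elif pitch < 0.6:
        gait = "jog"
    else:
        gait = "run"

    # Roll becomes an analog sway or dodge rather than an on/off sidestep.
    sway = max(-1.0, min(1.0, roll))
    return {"gait": gait, "sway": sway}

# e.g. movement_from_tilt(True, 0.35, -0.1) -> {"gait": "jog", "sway": -0.1}
```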

Likewise, a whole range of related motions could easily be mapped to a single button — much like the state-shifting afforded by shoulder buttons, except intrinsically analog. Press the button to execute a punch; when pressing the button, move or position the Wiimote this way or the other to punch in different ways for a subtly different effect. Flick the tip up for an uppercut, say. Imagine the way a Silent Hill 2 or a Metal Gear Solid could take advantage of this subtlety and flexibility — the way it could read into the player’s body language and movement patterns and extrapolate a certain level of psychology from them, to make unseen behind-the-scenes decisions.

This is a pretty damned important breach we’re crossing, here — and we’ve been given a decent, if somewhat rickety, bridge. Yet so far people are just laying the bridge on the ground and using it as a replacement for a sidewalk or a new kind of a bed, or trying to figure out really clever pieces of playground equipment they could turn it into. I kind of hope people get more smart, before the novelty wears off.

Horii Himself, Out.

Yeah. This doesn’t completely surprise me, except in the sense that it actually happened.

Handhelds are a better place for introverted, focused experiences. (See Metroid II.) In terms of the mindset involved, playing a handheld is like reading a book, whereas playing a console is like watching TV. Again, look how perfect Dragon Warrior is on the Game Boy — how much better it is than on the NES. Also: having a lengthy “novel” game makes more sense if you can pick it up and put it down at leisure, rather than being forced to sit in one place and stare at a screen for hundreds of hours. Leave the consoles for flash and fun; visceral stuff. Like the Wii, say.

Also to consider: as great as DQ8 is, there are two major abstractions left that seem kind of contrary to what Horii wants to do with the series. For one, the player controls more than one character. That’s a little weird. For another, it’s got random turn-based battles. Honestly, that doesn’t seem like part of Horii’s great plan for the series. It never has; it’s just been something he’s settled with until now.

So yeah. The DS seems like an ideal place to put the game. What’s really interesting is the multiplayer aspect — which I didn’t expect at all, yet which again sort of makes sense, depending on how it’s implemented. If players can come and go at will — join each other or set off on their own tasks, each with his or her own agenda — it’ll work. If there are too many constraints to the framework, keeping people from just playing the damned game whether their friends are around or not, it’ll be a bit of a downer.

I’m kind of undecided what this game means in the end. On the one hand it seems likely it’s meant as an intermediary step while Horii works on Dragon Quest X for the Wii. Considering how far along this game seems to be (implying it’s been in the works for at least months, maybe a year), it seems like it’s part of a long-term plan. Also considering that the sword game seems basically like a testing bed for a new battle system… well, do the math. And yet, there’s this issue about the DS actually being the most suitable system out there right now (in terms of market saturation, the nature of the format, and the qualities it has to offer).

Maybe it’s just the most suitable platform for Dragon Quest IX in particular, for everything he wants to do with the game. If X is going to work the way I think it might, it’s going to be pretty visceral and showy — demanding a home system. One in particular (that being the most visceral available).

Basically, every game Horii makes appears to be just another approach to the same game he’s been trying to make for twenty years. He never quite winds up with what he wants — though lately he’s getting a little closer. From what I can see, this is just one more angle, allowing him to capture a certain aspect of his vision that he hadn’t been able to before (perhaps at the expense of some other elements, that he’s already explored). So, you know, right on. These details seem worth exploring.

The next game… maybe it’s time to assemble? See how all the pieces fit?

The thing that I dig about Dragon Quest is that, whatever the surface problems, the games are visionary. It’s a strong, uncluttered vision that all the games reflect even if they don’t always embody it. As “retro” as they seem, they’re not just crapped out according to a formula; they’re each trying to achieve something that’s way beyond them — meaning an endless pile of compromises.

I find that pretty encouraging. Not the placeholders; the way Horii isn’t afraid to use them, while he roughs out everything else. And that he doesn’t let them distract him; he just devises them, then discards them when they’re no longer of use. He keeps chugging along, going through draft after draft until he gets it exactly right. It’s a very classical disposition. Very honest, at least to my eye.

He’s a lot like Miyamoto, except Miyamoto sort of gave up a long time ago. And Miyamoto’s vision isn’t quite as focused (though in turn, it is broader than Horii’s).

The one problem I can see with going from turn-based to real-time battles is that the battles in Dragon Quest — I don’t think they’re really always meant to stand in for actual fighting, as much as they’re a stand-in for any number of hardships and growth experiences that a person like the player might encounter in a situation like the quest at hand. Some of that might be actual battle; some of it might be much subtler and harder to depict in a game like this.

Keeping the battles turn-based and separated from the wandering-around makes the metaphor a lot clearer as a compromise, rather than as something special or important in its own right. Changing to a system that makes the game actually about fighting loads of monsters… I’m not sure if this is precisely the point he’s looking for. Still, it’s a trade off. Get more specific somewhere, you have to lose a nuance somewhere else.

I wonder what other sorts of difficulties or experiences could be devised, besides semipermeable monster walls holding you back. Ones that would add to (or rather further clarify), rather than detract from (or muddy), the experience. And preferably that wouldn’t be too scripted.

I’m thinking a little of Lost in Blue, though I don’t know how appropriate its ideas would be, chopped out and inserted whole. Still, general survival issues seem relevant: having certain bodily needs (and maybe psychological ones — though who the hell knows how to address that) that, though not difficult to attend to, cause problems if you don’t. So in the occasions you do run into real immediate difficulty (battles, whatever), you’ll be in far greater danger if you’ve been pressing yourself too far; if you haven’t sufficiently prepared. Likewise, injury might be a real problem — so the player would have to think carefully, weigh cost and benefit, before charging into dangerous situations.

Not pressing out would mean you’d never learn more, get better, stretch your boundaries. Being foolhardy would get you killed. Same deal we’ve got now; just more nuanced.

I’m sure there are other ways to do it. Maybe more interesting ones.

It could be I’m reading in some things that aren’t overtly intended. Still, I’ve never felt the battles were as important as what they stood for. They’re too straightforward. They’re used too cannily, as a barrier. The trick, again, is whether there’s an interesting and functional way of more literally representing what they might stand for. I dunno. Maybe not! At least, not right now. So all right, violence. Fair enough.

Matsuno Ball

Final Fantasy XII tries so hard not to be dumb — indeed, to actively address almost everything wrong with Japanese RPGs. The result of this effort (and of the general inspiration behind the package) is one of the most engrossing, sincere “big” games I’ve played in a while. I mean, I really, really enjoy this thing. Seriously! It’s a damned ballsy game, one I’d recommend to anyone. On the surface the only significant problems are thus:

  • The license board
  • That the gambit system isn’t more advanced
  • That the game still has these weird “turns” grafted in

The license board isn’t a bad idea in principle; it’s just in execution that it comes off as one more bizarre affectation. The idea is that any character can, in theory, learn to do anything so long as he or she has the training or experience to do so. Learning how to do one thing (say, to cast some simple white magic) makes it possible to learn similar skills, with just a little more investment. Learning to properly use a mace, on the other hand, won’t do much for your ability to cast Fire.

The way it’s implemented, though — urgh. Why can’t I wear a hat that I just picked off the ground, without first “purchasing” the ability to do so? If I know how to use one kind of sword, why am I wholly unable to use another unless I purchase the ability? And in typical RPG style, why am I all at once magically able to do these things, once I buy the ability? The way this should have been done is as follows:

  1. Call the damned things “proficiencies” instead of “licenses”. That makes it clearer what we’re getting into.
  2. For practical abilities (weapons, armor, use of items and accessories), allow anyone to equip and use those items to some percentage of skill. Those with no training in a bladed weapon would barely be able to do anything useful with a bastard sword, though they’d be able to swing it around and maybe, by chance, hit something for some amount of damage. Those with some training in swords would have a higher chance of using the thing well. Those with specific training in that type of sword would be able to use it perfectly. Likewise, there are some items (like a freakin’ hat) that anyone could wear to full, or almost full, ability — though maybe mastering the use would provide a subtle nuance. If there were any special bonus or benefit, maybe you’d only get that if you had the proficiency. For more intangible abilities — spells, techniques — allow anyone to at least attempt those to which your party has access, though there’s an extremely low chance of success unless they’ve mastered those categories. Anyone who has put in the effort to learn the abilities can do them flawlessly, every time.
  3. Choose the direction in which you’re going to study, rather than the licenses on which to spend your accrued points. If you want to learn how to cast “Cure”, peg it as your current goal; all points would go toward learning “Cure”. Once you’ve learned the ability, an unobtrusive message pops up (much like a “level up” message) informing you of your success and reminding you to pick a new goal. (You can turn off the reminders in the option menu.)

There’s no real problem with gambits; this system is the main stroke of genius here. I just wish they were more nuanced. For instance, I’d like to be able to say “if [any enemy] is [within striking range], then [equip] [X melee weapon].” Then attack. Otherwise if they’re not in striking range, equip your range weapon and attack. Also, I don’t know why it’s not giving me the option to target enemies equal to or lower than X health; only greater than. You always want to beat the weakest enemies first, so you clear them away! Again, not a big problem; it’s just that I’m frustrated that I can’t always program my companions to act as I would act — which in theory is the point to the gambit system; to keep me from having to choose the same options over and over from a menu.
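
To make the complaint concrete, a gambit list can be read as an ordered set of condition-and-action rules checked top to bottom each tick, with the first match firing. The sketch below is a toy model, not FFXII’s system; the “equip by range” and “target enemies at or below X HP” rules are exactly the wished-for ones, and every name and number is invented.

```python
# Toy model of gambit-style rules: conditions checked in priority order, first
# match acts. Includes the wished-for "equip by range" and "HP at or below X"
# behaviors, which the shipped game doesn't offer. All values are illustrative.

def decide(actor, enemies):
    """Return (action, target) for one party member on one tick."""
    in_range = [e for e in enemies if e["dist"] <= actor["melee_range"]]
    weak = [e for e in enemies if e["hp"] <= 300]

    # Rule 1: any enemy within striking range -> switch to the melee weapon and attack it.
    if in_range:
        actor["weapon"] = "melee"
        return "attack", min(in_range, key=lambda e: e["hp"])

    # Rule 2: clear away the weakest enemies first (HP at or below a threshold).
    if weak:
        actor["weapon"] = "ranged"
        return "attack", min(weak, key=lambda e: e["hp"])

    # Rule 3: otherwise fall back to the ranged weapon against whoever is left.
    if enemies:
        actor["weapon"] = "ranged"
        return "attack", enemies[0]

    return "wait", None

# e.g. decide({"melee_range": 2.0, "weapon": "ranged"},
#             [{"hp": 250, "dist": 8.0}, {"hp": 900, "dist": 1.5}])
# -> ("attack", {"hp": 900, "dist": 1.5})  # the in-range enemy wins priority
```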

Finally, it’s a little strange that the game basically takes place in real time, yet everyone waits his turn to act. There’s no reason for this; it should instead be based on a sort of initiative system (while retaining the ability to “pause” and issue new orders). Characters and monsters would act the moment they have the opening, and those actions would take a certain amount of time to execute. (Likewise, placement would matter a lot more; you can only hit someone if you’re rudimentarily within range.) The effect would be real-time battles to match the real-time maneuvering.
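
A rough sketch of what such an initiative system could look like: each combatant carries the time at which their next opening arrives, acts the moment the clock reaches it, and the duration of the chosen action (scaled by speed) sets the next opening. Everything here, names, durations, and speeds alike, is invented for illustration.

```python
# Illustrative initiative loop: combatants act whenever they have an opening,
# and each action's duration determines when their next opening arrives.
import heapq

ACTION_TIME = {"attack": 1.8, "cast": 3.2, "item": 1.0}  # invented durations in seconds

def run_battle(combatants, until=30.0):
    """combatants: dicts with 'name', 'speed', and a choose(time) callable (player or gambits)."""
    openings = [(1.0 / c["speed"], i) for i, c in enumerate(combatants)]  # first openings
    heapq.heapify(openings)

    while openings:
        ready_at, i = heapq.heappop(openings)
        if ready_at > until:
            break
        actor = combatants[i]
        action = actor["choose"](ready_at)  # could pause here for player orders
        print(f"{ready_at:5.1f}s  {actor['name']} -> {action}")
        # The action occupies the actor; faster characters cycle back sooner.
        duration = ACTION_TIME.get(action, 1.0) / actor["speed"]
        heapq.heappush(openings, (ready_at + duration, i))

# e.g. run_battle([{"name": "leader", "speed": 1.2, "choose": lambda t: "attack"},
#                  {"name": "wolf",   "speed": 0.8, "choose": lambda t: "attack"}])
```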

And on that note, I’d like direct control over my party leader. I want to be able to assign actions to my face buttons, and only have to call up a menu for my less common actions (or to send a command to my companions). This again can be an option — much as there is an option now to leave time running (instead of pausing) when you’ve the menu open. It would not significantly change the way the game played (at least, with the above initiative system), it would make me feel far more involved, and it would simply make more sense.

While we’re here, I wish the overworld would seamlessly stream instead of being broken into hunks of map. I realize this is due to the PS2’s famous memory limitations. Still, hey. Crystal Dynamics figured it out. Also: if it’s going to be forty-five minutes between save points, I’d like a quicksave option. That sounds reasonable to me.

I’d say that all of these alterations would be natural for any sequel to FFXII (especially now that Square is hot on sequels to individual FF games) — except Squenix (and millions of Square fans, and Penny-Arcade) seem to consider this game a failure best forgotten. Ah well. Grace wouldn’t be grace if it were self-evident.

It’s fun that the game pretty much sidelines the Nomura-chic protagonist (who I call Corey) and his “girl chum”, in favor of the more interesting supporting cast and their political drama. This might just be the first game I’ve ever admired for its spoken dialog.

“I feel like I’m in a John Hughes rite du passage movie”

Something curious about Wayne’s World is that, whereas most movies expanded from TV shows or skits throw the main characters into a situation where the goofy yet courageous heroes have to preserve [x] from the sleazy [corporate/bureaucratic/criminal something], in this case most of Wayne’s problems are entirely his own fault. They come out of the same character traits that put him in an endless string of food service jobs, living out of his parents’ house, wishing he could make something out of his life. These in turn are simply the downside of the same traits that make him so charming and fun to be around in the short term.

Which, come to think of it, is a similar situation to the one in The Big Lebowski. And collectively (both as a unit and within that unit), to the main characters in The Good, the Bad, and the Ugly. And even, yes, to Charlie Kaufman’s protagonists (despite the existential crisis in Adaptation). The qualities that make the characters distinctive and interesting to watch are also those that make them vulnerable; a strong character-based plot (and every plot is to some extent character-based) explores the positive and negative qualities of those traits, first by ingratiating the characters then by showing how those qualities we admire allow them to screw up, then showing how, when applied correctly, those traits can in some way redeem the characters. It’s pretty much scriptwriting 101, of course; the nature of a character arc. Still, there you go.

On an essential level, that’s what we’re there to experience: people who are redeemable fuckups, whose power for redemption comes from the same quality that makes them weak. The question, of course, is where to draw the line: how fatal, exactly, is that fatal flaw? It all depends on the character, and the traits in question — which is basically the point. As all stories are character-based (even if that character is nonliving or even nonphysical), a satisfying story comes entirely out of those characters’ characters. And there’s very little contrived about Wayne’s World; it’s a solid, honest, well-told story. For the movie’s origin and premise, this is pretty unusual! It comes through allowing the characters to indeed be fuckups, rather than putting them on a pedestal where they can do no wrong and all the world’s ills befall them in spite of their best efforts.

Then Wayne’s World 2 finds the main cast again in a rut, basically relying on the same shortcuts that got them through life last time we saw them — only now they’re a little older, and the world is a little bigger, and none of their tricks are working anymore. If anything, they’re backfiring on a basic level. Taking the whole plot into account, they’re backfiring on a scale grander and deeper than is immediately obvious — which is sort of the whole point to the movie, and the reason for most of its awkward humor. Part of the reason the movie maybe isn’t so easy to like as the first one is that it portrays its characters as even less effectual than before. None of the character traits we’re there to see are doing the protagonists much good. The movie is basically chiding them for not learning their lesson last time, and giving them one last lesson by showing them the results of their lack of development. (Sort of an Ebenezer Scrooge thing.) It’s a really good coda, though — and an appropriate one, given the characters.

Defining the Next Generation

by [name redacted]

This article was originally intended as a conclusion to NextGen’s 2006 TGS coverage. Then it got held back for two months as an event piece. By the time it saw publication its window had sort of expired, so a significantly edited version went up under the title “What The New Consoles Really Mean”.

So we’re practically there. TGS is well over, the pre-orders have begun; Microsoft’s system has already been out for a year (and is now graced with a few excellent or important games). The generation is right on the verge of turning, and all those expensive electronics you’ve been monitoring for the last few years, half dreading out of thriftiness and secret knowledge that there won’t be anything good on them for a year anyway, will become the new status quo. Immediately the needle will jump and point at a new horizon, set around 2011, and everyone will start twiddling his thumbs again. By the time the drama and dreams resume, I’ll be in my early thirties, another American president will have served nearly a full term – and for the first time in my life I really can’t predict what videogames will be like.

Cultivating Fear

by [name redacted]

Originally published by Next Generation, under the title “How to Make Fear”.

With Halloween at hand, surely there must be some way to warp the festive energy to our own analytical ends. Just see what happens when you invite us to a party! Don’t fret, though – although full of long words, our museum of terror takes the well-oiled form of a top ten list. We know how you like your information: in bite-sized, individually wrapped treats. Please… be our guest.