Can you tell us when you started work on the current Insomniac engine and what your ambitions were? Was it a brand new project or did you take elements from previous technologies?
The principal ambition of both our engine and tools is to empower our gameplay and content teams. The goal of the engine, in particular, is to leverage as much of the available hardware as we can toward the things that are most valuable to the game and, ultimately, the player. The goal of the tools team, though, is minimizing the iteration time required to make additions or changes to the games while letting the content teams get the most out of the engine's features. In other words: The engine is about making better, faster stuff while the tools are about making better stuff faster. Along with the engine, the tools we're now using are radically different from what we used for the previous generation.
Insomniac's PlayStation 3 engine was a completely new effort from the start. The team understood that the techniques that worked on previous systems weren't going to continue to be as effective on this generation of hardware. So everyone took a step back and tried to create something much better suited to the PS3 (and the Cell), specifically. Sure, there were missteps and bumps along the way, but ultimately we were able to make a great game that looked good and ran at a rock-solid framerate for the PS3's launch.
In what ways has the engine evolved since Resistance 1? There was a lot of talk about streaming textures at the time... What elements have you added since that game and what have you learned?
The engine is constantly changing. It's continually being upgraded and simplified, while we add new features and remove less useful ones. A sign of any maturing technology is that it becomes simpler rather than more complex. And as we work on our third-generation PS3 title, this is what we're starting to see. We've tried several approaches for different features and we're now definitely seeing a convergence of the ideas that have worked out well. For example, the physics, animation, glass, inverse kinematics, effects, and geometry database systems (just to start with) are now less complicated, offer more features, and run significantly faster than the versions found in Resistance 1.
We've also solidified some design patterns that are simplifying things. Take SPU Shaders, for example, which we discuss in detail on our newly established R&D site. SPU Shaders helped to make the big systems and all the little changes that come along during development a lot more practical to implement. They've also helped shed some light on programming the SPUs. Just having the ability to start putting high-level logic and AI on the SPUs was a major milestone that validated a lot of our ideas on how to distribute that type of work. This isn't to say that we have fewer challenges with each new generation of game--we just have all-new, even better and more streamlined challenges!
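To give a rough idea of the pattern, here's a minimal sketch in ordinary C++ rather than real SPU code; on the PS3 the "shader" would be a small fragment of SPU code uploaded alongside the job's data, and all the names below are hypothetical, not Insomniac's:

```cpp
#include <cstdio>
#include <cstddef>

// Sketch of the SPU Shader idea: a big, fixed system exposes a
// well-defined hook point where a small, swappable, game-specific
// fragment runs over the system's data. Here a plain function
// pointer stands in for the uploaded SPU code fragment.
struct Particle {
    float x, y, z;
    float life;
};

// The signature every shader fragment must match.
typedef void (*ParticleShader)(Particle& p, float dt);

// The big system: it owns the data and the loop, and calls the
// fragment at a fixed hook point. The system itself never changes;
// only the fragments do.
void update_particles(Particle* particles, size_t count,
                      float dt, ParticleShader shader)
{
    for (size_t i = 0; i < count; ++i) {
        particles[i].life -= dt;    // bookkeeping owned by the system
        shader(particles[i], dt);   // game-specific behavior
    }
}

// One small fragment: simple gravity. A new behavior means writing a
// new fragment, not touching the system.
void gravity_shader(Particle& p, float dt)
{
    p.y -= 9.8f * dt;
}

int main()
{
    Particle ps[2] = { {0.f, 10.f, 0.f, 1.f}, {1.f, 20.f, 0.f, 2.f} };
    update_particles(ps, 2, 0.016f, gravity_shader);
    std::printf("p0.y after one step: %f\n", ps[0].y);
    return 0;
}
```

The appeal is that the big system's data flow (and on the SPUs, its DMA schedule) is written once and stays stable, while the little pieces of gameplay-specific logic that change every week stay small and cheap to swap.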
This may sound like a daft question, but in what fundamental ways do engines differ? What are the intrinsic differences between your engine and, say, Unreal Engine 3? Are there different programming philosophies at work?
Engines can differ in a multitude of ways: performance, supported features, specific techniques, algorithms used, etc. But I think what you're asking is more about the under-the-hood stuff; how engines are put together.
One topic we discussed at this year's Game Developers Conference was what we called the "Three Big Lies of Software Development". How much programmers buy into these "lies" has a pretty profound effect on the design and performance of an engine, or any high-performance embedded system for that matter.
Engine programmers can take two approaches when it comes to console hardware: hide it or highlight it. We definitely prefer to highlight the hardware, as it's much better in the long run to understand any of its issues or quirks. A good understanding of the hardware influences your data design decisions and coding choices, and it's also good practice. An understanding of one architecture will improve your ability to understand the next. It's a virtuous cycle of learning and improvement.
Some developers choose the dark side: they hide it, keeping the details away from other programmers by trying to "abstract" it. There's certainly some value in this when used in moderation, but most of the time it's overdone. Too much is hidden. This forces programmers to spend as much time learning the abstraction as they would have spent just learning the hardware in the first place. Then, because they don't have a good understanding of the hardware, their data design and algorithm choices are poorly informed. And that makes the next generation of hardware even harder to understand, since they haven't thought about those kinds of details in years! It becomes a nasty cycle of poorly informed choices and missed learning opportunities.
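To make that concrete, here's a hypothetical sketch (not Insomniac's code) of how knowing the hardware changes a data design decision. Bundling hot and cold data into one "object" is the comfortable abstraction; splitting the hot fields into contiguous arrays is what caches, SIMD units, and the SPUs' DMA engines actually favor:

```cpp
#include <cstdio>
#include <cstddef>
#include <vector>

// The "abstracted" layout: everything about an object in one struct.
// A loop that only touches positions still drags the cold fields
// through the cache line by line.
struct GameObjectAoS {
    float pos[3];
    float vel[3];
    char  name[64];   // cold data, pulled into cache anyway
    int   model_id;
};

void integrate_aos(GameObjectAoS* objs, size_t n, float dt)
{
    for (size_t i = 0; i < n; ++i)
        for (int k = 0; k < 3; ++k)
            objs[i].pos[k] += objs[i].vel[k] * dt;
}

// The hardware-informed layout: hot data split into dense, parallel
// streams that are easy to prefetch, vectorize, or DMA in fixed-size
// blocks to an SPU's local store.
struct GameObjectsSoA {
    float* pos_x; float* pos_y; float* pos_z;
    float* vel_x; float* vel_y; float* vel_z;
    size_t count;
};

void integrate_soa(GameObjectsSoA& o, float dt)
{
    for (size_t i = 0; i < o.count; ++i) {
        o.pos_x[i] += o.vel_x[i] * dt;
        o.pos_y[i] += o.vel_y[i] * dt;
        o.pos_z[i] += o.vel_z[i] * dt;
    }
}

int main()
{
    std::vector<float> px(4, 0.f), py(4, 0.f), pz(4, 0.f);
    std::vector<float> vx(4, 1.f), vy(4, 0.f), vz(4, 0.f);
    GameObjectsSoA o = { px.data(), py.data(), pz.data(),
                         vx.data(), vy.data(), vz.data(), 4 };
    integrate_soa(o, 0.016f);
    std::printf("x[0] after one step: %f\n", o.pos_x[0]);
    return 0;
}
```

The specific layout matters less than the habit: if you understand how memory actually moves, choices like this come up naturally; if the hardware has been abstracted away from you, they never come up at all.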
Most engines also have signature features. The "long view" has always been one for Insomniac's engines. For example, in the first Spyro the Dragon for PlayStation 1, the player could see huge distances into the level, which at the time contrasted with the fog soup so commonly seen in other games. That's something we've continued to focus on. Take Ratchet & Clank Future: Tools of Destruction for the PS3: The game had some amazing vistas that I think players appreciated and that helped set the game apart.
You can't forget about your audience, either. Who is the engine designed for? Is it more for the programmers who will use it or for the players who will want the most out of it? This requires a delicate balance. We certainly don't want to make the engine more difficult for the programmers to use for no reason, but there's often a compelling reason to make things different because it means more (or better) stuff for the player.
I'm sure this decision is complicated if you license your engine, since licensees will almost certainly want it to be easier. But even when you don't, there are time and resource constraints to weigh (especially if you're releasing a game once a year)... and it's always a consideration. I don't think there's a definitive answer for this one - you just have to communicate with everyone involved and try to make the best decision you can.
When I've spoken to programmers in the past, they've tended to be most excited about the platforms that have allowed them to program 'to the metal' - i.e. to get access to the fundaments of the hardware, rather than rely on libraries and APIs. Where does PS3 rank in this respect?
Let me start with some background on why programmers want to get 'to the metal' when it comes to console engines. A console is a fixed platform; that's the fundamental distinguishing factor between it and a PC. A console has strictly fixed resources (whether there's a hard disk, how much memory is available, etc.).
But - and this is the big one - a player's expectations are not fixed. Each year and with each new game, players want more. More details. More effects. Better graphics. Better sound. Better AI. What this means is that with each generation of game, there's a lot of pressure for developers to raise the bar and do more. To do this, those developers need to know that - with a little more time and effort - there's still power waiting for them to take advantage of.
That's definitely the case with the PlayStation 3. On the CPU side, you've got the SPUs and no real software "roadblocks" that inhibit a developer from squeezing out extra work. They're very open and well-documented, and we have access to pretty much everything they can do at any level. So I'd rate it very high in that regard.
Do you think that old programming practices have caused people to fall into bad habits that make working on modern architectures harder?
It's interesting, because I think the oldest programming methods are probably the most relevant today. It's the habits formed over the last five or eight years that are the problem, and interestingly it's the people more recently out of school who are going to have the most trouble, because the education system really hasn't caught up with how the real world is, how hardware is changing, and how development is changing.
What they're teaching treats software as its own platform: people learn to abstract things and make them more generic, as if software were the platform, when hardware is the real platform. Meanwhile, performance and the low-level aspects of hardware aren't part of the education system. People come in with a wrong-headed view of how to develop software. And that's the reason why Office 2007 locks up my machine for two minutes when I get an e-mail.
Are you able to 'cheat' the system at all, perhaps by using memory allocations that you're not supposed to, storing data away in areas of the CPU meant for other stuff, jamming the SPEs with lots of tasks, etc?
I'm inclined to say that there's no such thing as "cheating" when you're talking about developing on a console. It's all a cheat. You want believable images, sound and AI without developing a "real-life simulator". That is not only impossible, but I'm quite sure it wouldn't run at an interactive framerate.
There's no "supposed to" to compare against. We have a fixed platform, and except for a few rules laid out in Sony's Technical Requirements Checklist (TRC), it's all fair.