Best Of GDC – Tools & Technology


Recently I’ve been trying to learn a bit more about game design theory and technical best practices. While there are lots of great books and articles to read, what I found most interesting were the talks and presentations from the annual Game Developers Conference. I’ve never been to GDC myself, but the wealth of content available in their archives is pretty amazing, covering all aspects of game development, from programming to art, design, business and historical retrospectives. This short series of posts aims to highlight some of my favourite talks from the GDC Vault, focussing on the areas of game technology, design and history.

For the first part of this series I decided to focus on games technology, but I quickly realised that it’s actually quite difficult to find great technical presentations that really stand the test of time. While focussing on the details of a cutting-edge technical implementation or technique can be useful at the time, within a few years that same content can date considerably. That being the case, my choices for the best technical talks include sessions about more general concerns: unique approaches to problem solving, best practices and tools development.

Each of these talks has something valuable to say about the technical process of game development, and each provides a great deal of inspiration to explore more creative uses of games technology.


Ink: The Narrative Scripting Language Behind ‘80 Days’ and ‘Sorcery!’

By Joseph Humfrey from inkle – GDC 2016
Watch this talk on the GDC Vault

This presentation by inkle’s Joseph Humfrey really impressed me: with Ink, the studio has clearly thought a great deal about the disconnect between traditional scripting languages and the needs of a branching interactive narrative. That friction between content and logic has always been problematic, making narrative projects difficult to manage and placing limits on the potential for interactive fiction. Now inkle seem to have arrived at a possible solution that meshes narrative and logic within a single interpreted language.

As with many of the best technical talks from GDC, this one really is about the power of approaching existing problems from a fresh angle. Humfrey outlines the design decisions behind the Ink language, stressing that by putting the writer at the centre of the game’s development process, Ink enables a level of expressiveness that other more traditional approaches cannot match. The talk goes on to give examples of common functionality used in many branching narratives, and shows how Ink can be used to solve those problems more elegantly than before.

As mentioned in the video, the Ink language is now open source (and compatible with Unity), so if you’re interested in creating interactive narratives, it’s definitely worth checking out.


Galak-Z: Forever: Building Space-Dungeons Organically

By Zach Aikman from 17-Bit – GDC 2015
Watch this talk on the GDC Vault

It’s tempting to assume that the recent popularity of procedural level generation means the area has been well explored, with easy catch-all solutions to what were once complex problems, but Zach Aikman’s presentation on the approach used for Galak-Z shows just how wide open the topic remains. In Aikman’s talk we learn how, starting from a single hand-crafted level created as a quality target, the team made clever use of cellular automata, Hilbert curves and custom Unity editor tools to procedurally generate environments of a similar quality for the final game.

Once built and running, the system was able to quickly generate highly varied levels that were unique and unpredictable, yet based on a small set of shared components. We also see how the system was easily extended to allow for three very different kinds of environment: the cave-like Standard Dungeons, the more industrial Space Hulks, and the Pirate Bases, which combine elements from the other two themes.
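To give a feel for the cellular automata half of that approach, here’s a minimal Python sketch of the classic cave-smoothing technique. This isn’t Galak-Z’s actual implementation (which layers Hilbert curves and hand-tuned rules on top of ideas like this), just the standard noise-then-smooth automaton that such systems tend to build on, with parameter names of my own choosing:

```python
import random

def generate_cave(width, height, fill_prob=0.45, smoothing_steps=4, seed=None):
    """Generate a cave-like grid of booleans: True = solid rock, False = open.

    Random noise is smoothed by a cellular automaton: a cell becomes solid
    when five or more of its eight neighbours are solid, which gradually
    clumps the noise into organic, connected cave shapes.
    """
    rng = random.Random(seed)
    grid = [[rng.random() < fill_prob for _ in range(width)]
            for _ in range(height)]

    def solid_neighbours(g, x, y):
        count = 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                # Treat out-of-bounds cells as solid so caves stay enclosed.
                if not (0 <= nx < width and 0 <= ny < height) or g[ny][nx]:
                    count += 1
        return count

    for _ in range(smoothing_steps):
        grid = [[solid_neighbours(grid, x, y) >= 5 for x in range(width)]
                for y in range(height)]
    return grid
```

Running a few smoothing passes over seeded noise like this is what gives each generated level that unpredictable yet natural-looking quality, and the seed makes any interesting result reproducible.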

I found this overview of Galak-Z’s procedural level system to be really interesting, not just for the details of the actual implementation, but also because of its unique approach, which hints at just how many other methods have yet to be explored.

If you’d like to find out more about procedural generation, this Game Maker’s Toolkit video about the system used to create Spelunky’s levels is another great starting point.


50 Camera Mistakes

By John Nesky from thatgamecompany – GDC 2014
Watch this talk on the GDC Vault

While it’s easy to appreciate the fantastic art of thatgamecompany’s Journey, a lot of the game’s emotional impact can also be attributed to the way those visuals are framed. John Nesky was responsible for the realtime camera design and programming on Journey, and in this talk he shares the range of design decisions that combined to create the sweeping cinematic camera found in the final game.

Using examples from some of Journey’s key sequences – plus lessons learned from many other games, including Super Mario 64, Papo & Yo, Uncharted, Shadow of the Colossus and Arkham Asylum – Nesky explores the dos and don’ts of realtime 3D cameras from every angle. In just under an hour, this talk provides a fantastic overview of what the games industry has learned about 3D cameras over the last 20 years. It’s a fast-paced presentation, with a lot covered in that short time, but Nesky keeps things easy to follow, and in doing so presents a real goldmine of creative advice for developers who are new to 3rd-person action games.

This talk makes a great companion piece to John Edwards’ presentation about Journey’s sand rendering, a technical look at the game’s visuals that also highlights the studio’s creative processes and commitment to experimentation during development.

There’s also a similar talk on best practices for 2D camera systems available on the GDC Vault. Presented by Itay Keren, this talk on the theory and practice of 2D cameras makes for an excellent overview of many common 2D camera techniques, and is also available as a written article.


The Cameras of Uncharted 3

By Travis McIntosh from Naughty Dog – GDC 2012
Watch this talk on the GDC Vault

Another great talk on 3D camera systems, this time from Naughty Dog’s Travis McIntosh, who gives us a tour of the tools used to create Uncharted 3’s dynamic cameras. Whereas Nesky’s talk (above) focussed on theoretical best practices and their concrete implementation in Journey, McIntosh’s talk is a more detailed look at Naughty Dog’s actual tools, and how they enabled Uncharted’s artists and designers to achieve such a polished and cinematic action game.

As with all of Naughty Dog’s technical talks, the fact that the company uses its own bespoke engine makes for a more interesting presentation. Most developers will never use these tools, but just seeing how they work can highlight alternative ways to approach common game development tasks. When you’re used to working with an off-the-shelf engine like Unity or Unreal, it can sometimes feel like most of the problems have been solved for you, and that you’re just designing within a pre-defined box, so seeing things done differently by some of the industry’s most talented developers can provide some much-needed creative inspiration for improving your own tools and workflows.

There are a lot of other great presentations about the technology behind the Uncharted series available on the GDC Vault. The most recent is Andrew Maximov’s excellent discussion of the technical art of Uncharted 4, which showcases many of that game’s high-end effects while also highlighting the studio’s unique cross-disciplinary approach to creative problem solving.


Math for Game Programmers: Building A Better Jump

By Kyle Pittman from Minor Key Games – GDC 2016
Watch this talk on the GDC Vault

When making a platform game, it’s best practice to finalise your character’s move set first – especially the jump – long before you start building any real levels around your character’s abilities. It’s a proven process that many classic platformers were founded upon, but it’s also a stage that requires a lot of time spent tweaking different parameters until the jump just feels ‘right’. What if there was a more structured way of doing this? If you already knew how high and far you wanted your character to jump, wouldn’t it be great to simply draw an arc that describes the jump and know how to translate it into code? This is exactly what Kyle Pittman addresses in his excellent talk on the maths of jumping in platform games.

In just over 20 minutes, Pittman outlines all of the maths and physics needed to enable programmers to convert a pre-defined jump arc into code that simulates that parabola in realtime gameplay. He also covers how to implement common platform game ‘physics-hacks’ such as double jumping and fast falling, explaining how to think of each element as a separate self-contained parabola, which you can then splice together at runtime. This talk does an excellent job of combining the technical details with a solid understanding of how and why the implementation works, and is one of the few cases where a GDC session provides a concrete technical implementation that you can easily add into your next game.
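As a rough illustration of the core idea (a sketch in the spirit of the talk, not Pittman’s exact slides): given a designed jump height, the horizontal distance to the arc’s apex and the character’s run speed, you can solve directly for the gravity and take-off velocity that produce that arc, then integrate them each frame.

```python
def jump_params(height, half_distance, move_speed):
    """Derive gravity and take-off velocity from a designed jump arc.

    height        -- peak jump height (world units)
    half_distance -- horizontal distance from take-off to the arc's apex
    move_speed    -- constant horizontal run speed
    """
    time_to_apex = half_distance / move_speed
    gravity = 2.0 * height / time_to_apex ** 2       # downward acceleration
    launch_velocity = 2.0 * height / time_to_apex    # initial upward speed
    return gravity, launch_velocity


def simulate_jump(gravity, launch_velocity, dt, steps):
    """Integrate the jump (semi-implicit Euler) and return the y positions."""
    y, vy, ys = 0.0, launch_velocity, []
    for _ in range(steps):
        vy -= gravity * dt
        y += vy * dt
        ys.append(y)
    return ys
```

The nice property is that tuning now happens in design terms (“jump 4 units high over 6 units of ground at run speed 3”) rather than by blindly fiddling with gravity and velocity constants until the arc feels right.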

A great follow-up to Pittman’s talk would be this excellent article on the four main types of 2D platform game engines, by Bossa Studios’ Rodrigo Monteiro.


Rayman Legends: The Design Process Within the UbiArt Framework

By Chris McEntee from Ubisoft Montpellier – GDC 2014
Watch this talk on the GDC Vault

The modern Rayman games are a great achievement, offering fantastic 2D graphics alongside inventive platform game level design of a quality rarely seen outside of Nintendo. In his presentation on the UbiArt tools framework, Chris McEntee explains how these high-quality tools, built for a very specific purpose, enabled a small team to rapidly iterate and innovate throughout development. We learn how the UbiArt tools treat both artist and designer with equal importance, enabling a workflow that allows each discipline to work in parallel, yet easily collaborate when required.

During the talk, McEntee demos many of UbiArt’s key features, such as the ease of editing the spline-based environments, the fact that editing takes place within live levels, and the way designers can sketch directly onto white-box designs to more easily communicate their intent to artists. McEntee explains that because the game’s artwork automatically adjusts and responds to changes in level geometry, the tools gave the team the freedom to iterate rapidly and test their ideas with near-final artwork at any stage of development. He also explains how many of the game’s most innovative level designs were made possible by UbiArt’s openness, which allowed designers to create all kinds of emergent behaviour by combining existing components in unique and unexpected ways. Examples include unique uses for destructible terrain, and a realtime shadow system that emerged from experiments with stealth gameplay. Overall, it’s an inspiring talk, showing just how much great tools can help a small team to create something really special.

As mentioned in the video, Chris McEntee has published some interesting blog posts that go into more detail on the topics covered in his presentation.


Ellie: Buddy AI in The Last of Us

By Max Dyckhoff from Naughty Dog – GDC 2014
Watch this talk on the GDC Vault

One of the biggest strengths of The Last of Us is the believability of its characters, and while it’s easy to assume that this is just down to fantastic writers, talented actors and skilled artists, there is in fact much more going on behind the scenes. In this excellent presentation, Max Dyckhoff begins with the surprising revelation that, with only half a year until the game was due to ship, the character AI still lacked the sense of co-operation and attachment to ‘buddy’ characters like Ellie and Tess that the final game has been so celebrated for. From there, Dyckhoff covers how these core aspects of the game’s buddy AI were added in those last few months of development. With easy-to-follow examples, he shows how the team built upon the basics of Uncharted to make something much more capable of creating that crucial emotional connection with the player.

We learn that NPCs always stay close, so that they feel supportive rather than a risk that needs to be micro-managed. We also see how NPCs use the player’s position to procedurally select appropriate cover points in realtime, and discover that many incidental animations, lines of dialog and actions such as gifting items to the player were added at the last minute to help boost the characters’ emotional believability.
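To make that player-relative cover idea concrete, here’s a hypothetical Python sketch. The function name, scoring weights and `blocks_threat` flag are all my own inventions for illustration; the real system would involve pathfinding, animation constraints and far more criteria than a simple distance score.

```python
import math

def choose_cover(cover_points, buddy_pos, player_pos, max_player_dist=8.0):
    """Pick a cover point that keeps the buddy close to the player.

    cover_points is a list of ((x, y), blocks_threat) pairs, where
    blocks_threat would be precomputed elsewhere via line-of-sight tests.
    Returns the best point, or None if nothing qualifies.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_score = None, float("inf")
    for point, blocks_threat in cover_points:
        # Reject cover that leaves the buddy exposed or strands them far away.
        if not blocks_threat or dist(point, player_pos) > max_player_dist:
            continue
        # Lower score is better: easy for the buddy to reach, near the player.
        score = dist(point, buddy_pos) + 0.5 * dist(point, player_pos)
        if score < best_score:
            best, best_score = point, score
    return best
```

The key design point the talk stresses is baked into the scoring: proximity to the player is always part of the decision, so the buddy reads as supportive rather than as a stray unit to be managed.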

Dyckhoff also talks us through his choices concerning many AI cheats commonly seen in 3rd-person action games. He mentions wanting to avoid teleporting NPCs at all costs, only using the technique when there is a greater emotional payoff – for example, Ellie saving you when you’re trapped in combat. He also explains how the team decided to stop fighting some difficult AI bugs and simply allowed Ellie to break cover, while ensuring that she doesn’t give away the player’s location. This was an intentional trade-off to avoid the player getting annoyed with Ellie, which would have broken their emotional connection to the character.

There are plenty of other great GDC talks about the making of The Last of Us available online too. In particular, the overview of the game’s contextual character dialog system by Jason Gregory (author of Game Engine Architecture) is well worth watching. His talk outlines how the engine’s dialog scripting language manages the structure of conversations between characters, while also allowing for dynamic, on-the-spot dialog changes, thanks to a system that keeps each character aware of what’s going on around them.


Bringing BioShock Infinite’s Elizabeth to Life: An AI Development Postmortem

By John Abercrombie from Irrational Games – GDC 2014
Watch this talk on the GDC Vault

This excellent talk by BioShock Infinite’s lead programmer, John Abercrombie, offers some interesting contrasts to Max Dyckhoff’s discussion of AI in The Last of Us (above). Both games are essentially traditional action games, enhanced with a strong focus on narrative and an important central relationship between the player and an AI character that helps them throughout the story. However, the differences in how these two games achieve their goals really highlight the extent to which the storytelling aspects of this medium are still wide open for exploration.

Throughout the talk, Abercrombie explains the many systems that were created to manage different aspects of Elizabeth’s behaviour, sharing before and after footage of the many approaches they tried throughout development. In doing so, he highlights several ways that Irrational’s creative vision differs from Naughty Dog’s. For example, in The Last of Us, Ellie tends to walk behind the player to avoid getting in the way, whereas BioShock’s Elizabeth is required to lead the player through the environment, helping them find their way, while also being responsive to the player’s own freedom to explore their surroundings.

Another area where the two games (and their AI programmers) differ is the concept of teleporting NPCs. Dyckhoff argues that teleporting breaks immersion and should be avoided, but Abercrombie states that while it may break immersion, teleporting Elizabeth also allowed for a much more player-centric approach to the game’s design, enabling more complex environments, faster exploration, and smoother, more believable combat scenarios.

Overall, both talks are incredibly interesting and informative, showing the range of approaches being explored in modern AAA development. I’d definitely recommend watching them both if you have the time.


Assassin’s Creed Identity: Create a Benchmark Mobile Game!

By Tobias Tost from Blue Byte – GDC EU 2015
Watch this talk on the GDC Vault

There are a fair few talks on the GDC Vault from studios and individuals who use Unity, but this one stands out because it’s about recreating a AAA console franchise on mobile using the engine. The mainline Assassin’s Creed games on consoles and PC were all built using Ubisoft’s in-house Anvil engine, but for this mobile spin-off Blue Byte chose Unity, as Anvil didn’t scale down to mobile platforms.

Blue Byte’s Tobias Tost highlights the main technical challenges of the project, explaining how the team attempted to scale down the series’ trademark open-world gameplay to work on mobile, while also trying to bring a wealth of existing assets and game systems into a totally different engine. Tost shows how the AI and rendering systems were optimised for mobile, with some interesting in-game examples of how the Unity version manages its limited resources by turning NPCs on and off based on where the player is looking. He also takes us on a guided tour of the game’s crowd management systems, built using Unity’s custom editor scripting tools. These tools allow designers to paint crowd pathways and density maps directly into the game’s environments, which looks really impressive and provides great inspiration for what’s possible with Unity’s editor scripting tools.
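The “turn NPCs on and off based on where the player is looking” idea boils down to a view-cone test. Here’s a rough 2D Python sketch of the concept – names and thresholds are mine, and a shipping game would use proper frustum tests plus some hysteresis so NPCs don’t pop on and off at the cone’s edges:

```python
import math

def npc_active(npc_pos, cam_pos, cam_forward, fov_degrees=90.0, max_dist=50.0):
    """Decide whether an NPC should be fully simulated.

    Returns True only if the NPC is within max_dist of the camera and
    inside its horizontal field of view; everything else can be put to
    sleep (or dropped to a cheap low-detail update) to save resources.
    """
    dx, dy = npc_pos[0] - cam_pos[0], npc_pos[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_dist:
        return False
    if dist == 0.0:
        return True  # NPC at the camera position is trivially visible
    # Compare the angle to the NPC against half the field of view.
    fx, fy = cam_forward
    cos_angle = (dx * fx + dy * fy) / (dist * math.hypot(fx, fy))
    return cos_angle >= math.cos(math.radians(fov_degrees / 2.0))
```

On mobile, running a check like this per NPC each frame is far cheaper than simulating an entire crowd, which is presumably what makes the dense street scenes feasible at all.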

It would be very interesting to see how the Unity version compares to Anvil’s equivalent tools, and to know whether any of this work was ported back into AnvilNext when the series moved over to PS4 and Xbox One.


The Implementation of Rewind in Braid

By Jonathan Blow from Number None – GDC 2010
Watch this talk on the GDC Vault

It’s a real shame that Jonathan Blow doesn’t quite get to finish this talk on Braid’s rewind feature, as it offers a lot of insight into his unique approach to problem solving, from the dual perspectives of programmer and game designer. Blow starts by outlining the most common approaches to implementing recording and playback functionality in a game, weighing the pros and cons of each, before surprising us by choosing what seems the least optimised approach for Braid (record all data every frame), and then reframing the problem: how do you compress the data enough to fit into just 40MB of RAM?

What follows is a very interesting look at the clever techniques he used to compress and optimise the vast amounts of data being recorded every frame, so that up to 30 minutes of gameplay (that’s data from 108,000 frames) can be stored and quickly accessed at any time. The talk really showcases Blow’s creative talent, both as a game designer and as an engineer, demonstrating how the most innovative solutions can sometimes be found by simply reframing the problem and seeing where that leads you.
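Stripped of the compression that makes Braid’s version practical, the record-everything core is just a bounded history of state snapshots. Here’s a minimal Python sketch of that skeleton – the class and its limits are my own illustration, and the talk’s real subject is the aggressive per-field compression layered on top of an idea like this:

```python
import collections
import copy

class RewindBuffer:
    """Record a snapshot of game state every frame; step backwards on demand.

    A deque with maxlen automatically discards the oldest frames once the
    history is full, bounding memory use. Braid keeps ~30 minutes of
    history (about 108,000 frames at 60fps) by heavily compressing each
    snapshot; this sketch just stores them raw.
    """

    def __init__(self, max_frames=108_000):
        self.frames = collections.deque(maxlen=max_frames)

    def record(self, state):
        # Deep-copy so later mutations of live state don't corrupt history.
        self.frames.append(copy.deepcopy(state))

    def rewind(self):
        """Pop and return the most recent frame, or None if history is empty."""
        return self.frames.pop() if self.frames else None
```

Holding rewind then just means popping frames and restoring them at (or above) frame rate, which is why the naive approach is so attractive once the memory problem is solved.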

For a really different approach to rewind in games, I’d definitely recommend reading this excellent overview of the implementation of Planetary Annihilation’s ChronoCam system by Uber Entertainment’s Forrest Smith.


Automated Testing and Instant Replays in Retro City Rampage

By Brian Provinciano from Vblank Entertainment – GDC 2015
Watch this talk on the GDC Vault

I kind of stumbled across deterministic game logic while working on a turn-based strategy game a few years back, where the need for an efficient online multiplayer mode led to the decision to make the code deterministic. Before that, I’d never really considered how to build a game that runs that way, or what the benefits of doing so even were. In this talk, Brian Provinciano makes an excellent case for building games that run deterministically, showing how instant replays can be captured by saving just the player’s input, how this makes rare bugs easy to catch, and how a similar process can even be used to automatically test your entire game at super-high speed, every time you compile a new build.

These are things I discovered too while making that strategy game, and Provinciano really highlights these benefits with clear examples from Retro City Rampage that illustrate best practices and common traps to watch out for. He notes that while the approach can add extra complexity to a project, deterministic code also provides many time-saving benefits in areas such as testing and debugging. He also explains how that extra complexity can be managed by keeping your core game logic separate from the parts of the codebase that don’t need to be deterministic – it’s possible, but it does require a careful, disciplined approach.
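The core trick is easy to show in miniature. In this toy Python example (my own illustration, not code from the talk), the game’s only sources of change are the input stream and a single seeded random generator, so a “replay” is just the seed plus the inputs – tiny compared to recording full game state, and replayable at uncapped speed for automated testing:

```python
import random

class Game:
    """A toy deterministic game: same seed + same inputs => same final state."""

    def __init__(self, seed):
        # All randomness flows from one seeded RNG; no wall-clock time,
        # no global random state, no frame-rate dependence.
        self.rng = random.Random(seed)
        self.x = 0

    def step(self, button):
        # button is the player's input for this frame: -1, 0 or +1.
        self.x += button + self.rng.randint(0, 1)


def run(seed, inputs):
    """Replay an input stream from scratch and return the final state."""
    game = Game(seed)
    for button in inputs:
        game.step(button)
    return game.x
```

Because `run` always produces the same result for the same seed and inputs, a crash report only needs those two things attached to it, and a test suite can replay a whole playthrough every build and simply diff the final state.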

If you enjoyed that talk, then be sure to check out some of Brian Provinciano’s other presentations on the GDC Vault. I’d recommend the story of porting Retro City Rampage to DOS, where he covers everything from clever colour palette tricks to dealing with limited sound channels, the difficulties of DOS graphics programming and, of course, having to cope with only 4MB of RAM! But if porting the game to DOS just isn’t retro enough, then it’s also worth watching his short video about the making of ROM City Rampage, the original version of the game, made to run on an NES console.


The Challenge of Bringing FEZ to PlayStation Platforms

By Miguel Angel Horna from BlitWorks – GDC EU 2014
Watch this talk on the GDC Vault

This presentation is an insightful and inspiring look at the challenges of porting a game between two vastly different platforms. Miguel Angel Horna guides us through the process of porting FEZ from Microsoft’s C#-based XNA framework to native C++ code on Sony’s PlayStation platforms. The talk begins with Horna outlining the main choice that would determine the course of the whole project: should they port the C# runtime to PlayStation platforms, leaving the core game code largely untouched and plugging the gaps with a new API layer, or should they rewrite all 500+ C# classes in C++?

As crazy as it sounds, they chose to rewrite all of the code. Starting with an automated translation system to do a rough first pass, they then spent months working through hundreds of hand-coded changes to actually get the game running again. The rest of the talk discusses these changes and the challenges they represented, detailing just how much effort was required to recreate the language-specific features of the original C# game code in C++. The original FEZ code made heavy use of modern C# language features – lambda expressions, extension methods, reflection and events – along with a range of .NET functionality, and Horna runs through many interesting examples of how BlitWorks plugged the gaps left by the transition to C++, which lacks direct equivalents for many of those features.

Reflecting on the talk, I think it’s interesting to note the parallels between the challenges that BlitWorks faced and those of Axiom Verge developer Tom Happ, who chose to port his game’s XNA runtime to PS Vita rather than rewrite it in C++. That approach also took a huge amount of time and effort, showing that there really is no quick solution to such a complex problem. It really makes you appreciate multi-platform engines like Unity and Unreal, which use incredibly complex multi-step build pipelines to get from the code you write to the native code that runs on each target platform.

Aside from the complexities of porting it, FEZ is simply an interesting game in its own right, both in terms of its design and its technology. If you’d like to find out more about its development, there’s also an in-depth GDC talk on the making of FEZ, which focusses on the tools and technology used to create the original Xbox version.
