GDC 2019 – Talks, Intel Student Showcase, and Magic with WotC

I just got back from GDC… Woo what a trip!


There were so many cool experiences it would be impossible to go through them all, so I’ll just hit the highlights.


This was my first year going to talks, and I’ll definitely be doing that again in future years. I’ve been mulling over the idea of writing some curriculum to expose middle and high school students to technology through game dev, and there was an amazing talk by AP Thomson of the NYU Game Center about teaching flexible system design with procedurally mixed student projects. If you’re interested in computer science education, I would highly recommend checking that out in the vault (it’s titled “Stone Soup: Procedurally Mixing Student Projects to Teach Flexible System Design”). I really think that the concept of orchestrator-designer schemes for teaching coding makes a ton of sense, and it can be scaled to any class size or skill level. Modern tools are so unbelievably advanced that with some setup you could abstract as much or as little as you’d like away from your students. There were so many good talks I can’t mention them all, which is too bad. To anyone looking to go to GDC in the future: spring for the summits, you won’t regret it.


On Thursday, my thesis team presented our game Sky Shepherd at the Intel student showcase. I had the opportunity to act as the face of our team and present the game to browsing devs. It was amazing to see how far our game had grown over the five months of development up to that point. We got tons of positive feedback, and I met a ton of interesting devs from all over the country. We were even stationed next to UT Austin (Austin being the city I grew up in). Public speaking is really difficult for me, but I really enjoy putting myself out there and practicing those skills.


I also had the opportunity to play Magic: The Gathering with a bunch of Wizards of the Coast employees and Magic content creators. I learned how to read with the help of Magic cards, so I’ve been playing for a very long time. It was awesome to get to see what those personalities are like and share the love of that game with people who make it happen.


On the note of being social, last year (my first GDC) I was totally overwhelmed and lost. I barely spoke to anyone. This year I was determined to put myself out there and meet as many people as possible. I smashed my own expectations and am really proud of myself. All in all, I had a blast.


AI Programming Assignment 2 – Pathfinding


In this assignment, we were tasked with building the framework to generate graphs (which I will refer to as navmeshes from here on), localize and quantize them, and pass the pathfinding functionality to our boid agents. I used this assignment as an excuse to learn more about template classes by writing a flexible priority queue, and to optimize the graph structure using a map with Node IDs (ints) serving as indices, where each entry holds a vector of the edges leaving that node. During this assignment, I spent a lot of time doing things in what I considered an ‘ideal’ way. I’ve found that this isn’t always an effective strategy, since you can only accomplish so much in a finite amount of time. Getting something to work is sometimes more important than designing it to be maximally robust. Because each of these assignments builds on the last, I thought that remaining flexible was key. This assignment has helped me realize that—as a result of the way this class is structured—I can skip certain hoops in favor of the assignment portions that offer a greater opportunity for learning. I’ll be updating my code (and this post) as I complete features for this system.
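The map-backed graph described above might look something like this minimal sketch. The names (`NodeId`, `Edge`, `Graph`) are illustrative, not the actual class names from my project:

```cpp
#include <map>
#include <vector>

using NodeId = int;

struct Edge {
    NodeId to;
    float weight;
};

// Node IDs index into a map, and each entry holds the edges leaving that node.
struct Graph {
    std::map<NodeId, std::vector<Edge>> adjacency;

    void AddEdge(NodeId from, NodeId to, float weight) {
        adjacency[from].push_back({to, weight});
    }

    const std::vector<Edge>& EdgesFrom(NodeId id) {
        return adjacency[id];  // creates an empty entry for unseen nodes
    }
};
```

This keeps lookups by Node ID cheap while only storing edge lists for nodes that actually have them.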

The backbone of this entire assignment is the graphs and their peripheral data structures. If these are constructed thoughtfully, they can be flexible and still have a small memory footprint. That footprint was important to me, since I knew I wanted my navmesh to perform well even when the resolution of the mesh is really high.


Drawing the graph structures would help me debug behavior later and give me something neat to show, so I liberally used the #ifdef macros shown to make sure that the size of a Node and a DirectedWeightedEdge would only be as big as strictly necessary. This matters even more when you consider that a DirectedWeightedGraph holds vectors of both Nodes and DirectedWeightedEdges, which will contain a very large number of objects (especially DirectedWeightedEdges). Some quick math shows that with debug draw on, a DirectedWeightedEdge takes at least 52 bytes, and without debug draw only 16. For a navmesh with approximately 10,000 nodes and approximately 80,000 edges, that saves ~2.9 MB of memory. That may seem insignificant, but a roughly 70% smaller memory footprint becomes really significant as map geometry gets more complex and maps get larger. Another small note: when a DirectedWeightedGraph is instantiated, the desired number of Nodes is taken as an argument and used to reserve the vectors of Nodes and Edges. Because vectors grow dynamically by a factor of two, it’s conceivable that they end up much larger than we need; reserving at initialization can save a lot of memory later.
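A sketch of the #ifdef pattern and the up-front reservation described above. The exact member lists and the eight-edges-per-node estimate are assumptions for illustration, not my actual fields:

```cpp
#include <cstddef>
#include <vector>

// #define DEBUG_DRAW  // uncomment to compile in draw-only data

struct DirectedWeightedEdge {
    int   source;   // 4 bytes
    int   sink;     // 4 bytes
    float weight;   // 4 bytes
    float cost;     // 4 bytes -> 16 bytes total without debug data
#ifdef DEBUG_DRAW
    float srcX, srcY, sinkX, sinkY;  // endpoints for drawing
    float r, g, b, a;                // draw color
    float thickness;                 // 9 extra floats = 36 extra bytes
#endif
};

struct DirectedWeightedGraph {
    std::vector<int>                  nodes;
    std::vector<DirectedWeightedEdge> edges;

    // Reserving up front avoids the over-allocation that comes with the
    // vector's doubling growth strategy.
    explicit DirectedWeightedGraph(std::size_t nodeCount) {
        nodes.reserve(nodeCount);
        edges.reserve(nodeCount * 8);  // ~8 edges per node, per the figures above
    }
};
```

With DEBUG_DRAW undefined, the edge struct compiles down to the 16-byte version; with it defined, the draw data comes back.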
Actually constructing the map, given that it was never a 2D array, had some fun moments. Here are a few images of incorrect versions…



After some stumbling I ended up with two graph versions: one with randomly assigned weights and edges to neighbors, and one that is ‘flat,’ with weights scaled by distance. Each of these methods uses Euclidean distance as its heuristic, but Manhattan distance is also available. They behave the same way on a flat navmesh.
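The two heuristics mentioned above are simple to sketch. Assuming node positions are stored as 2D coordinates (an assumption about my representation):

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Straight-line distance: admissible on any navmesh.
float EuclideanHeuristic(Vec2 a, Vec2 b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// Grid-style distance: cheaper to compute, can overestimate on
// diagonal-capable meshes.
float ManhattanHeuristic(Vec2 a, Vec2 b) {
    return std::fabs(a.x - b.x) + std::fabs(a.y - b.y);
}
```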



You might think that with these nicely thought-out data structures I would have a beautiful implementation of A* and Dijkstra’s. I’m not happy with them, however. There are a few problems, most of which can be attributed to time. The first is a problem with my priority queue. For it to be able to use functionality like “find” and “sort,” the < and == operators need to be defined. The < operator is easy, since each element has a priority. The == operator isn’t particularly difficult either, if you thought to store pointers in the priority queue instead of the actual objects. This isn’t a huge change, but I didn’t have the time to correct that design decision. As it is now, I need to overload the == operator for every potential type that could go in the priority queue, which really defeats the purpose of a templated priority queue. I should instead be comparing the data’s memory address. I’d also like to go through and implement this using smart pointers, so that we never have to worry about dangling references or access violations.
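The pointer-based fix described above might look like this sketch: the queue stores pointers, so equality is just an address comparison and the template stays generic. `PriorityQueue` here is an illustrative toy, not my actual class:

```cpp
#include <algorithm>
#include <vector>

template <typename T>
class PriorityQueue {
public:
    void Push(T* item, float priority) {
        entries.push_back({item, priority});
        // Keep the vector sorted by priority (fine for a sketch; a heap
        // would be the efficient choice).
        std::sort(entries.begin(), entries.end(),
                  [](const Entry& a, const Entry& b) { return a.priority < b.priority; });
    }

    // Pop the lowest-priority element.
    T* Pop() {
        T* item = entries.front().item;
        entries.erase(entries.begin());
        return item;
    }

    // Equality by address: no per-type operator== needed.
    bool Contains(const T* item) const {
        for (const Entry& e : entries)
            if (e.item == item) return true;
        return false;
    }

    bool Empty() const { return entries.empty(); }

private:
    struct Entry { T* item; float priority; };
    std::vector<Entry> entries;
};
```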

I was thoroughly rushed on this assignment, and will continue to update it in the coming weeks. I’m excited to construct a dynamic environment. I plan to add AABB collision obstacles that are randomly generated when you play. Stay tuned!

AI Programming Part 1

In this assignment, we started by building the framework for our movement algorithms and displaying them using the openFrameworks toolkit in C++. Part of why I chose to take this class was to continue mastering the C++ programming language, and to try to apply all of the things that I have learned over the last two years in my own personal work and through Joe Barn’s C++ game engine classes. In this write-up, I’ll break down the design choices I made, their implications, and some of the details of my implementation.

In class we defined a few data structures, the most important of which are the Kinematic, the DynamicSteering, and something I like to call a Crumb. Each of the Boids we’re drawing is represented by a Kinematic, whose movement data is updated by requesting DynamicSteering information from the various movement algorithms we’ll get to later. The implementation of Kinematic::ProcessSteering() is extremely important to consistent behavior, since no matter how the steering algorithms calculate their output, it should always be processed the same way once it’s received by the Boid.
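A minimal sketch of those data structures. The fields and the exact integration order are assumptions based on this write-up, not my verbatim code:

```cpp
struct DynamicSteering {
    float linearX = 0, linearY = 0;  // linear acceleration
    float angular = 0;               // rotational change
};

struct Kinematic {
    float x = 0, y = 0;
    float velX = 0, velY = 0;
    float orientation = 0;

    // Every behavior's output funnels through here, so all steering
    // results are integrated identically.
    void ProcessSteering(const DynamicSteering& s, float dt) {
        x += velX * dt;              // move with current velocity
        y += velY * dt;
        velX += s.linearX * dt;      // then apply the requested acceleration
        velY += s.linearY * dt;
        orientation += s.angular * dt;
    }
};
```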

Now that these data structures have been detailed, we can talk about the steering behaviors that were implemented, and why they were implemented the way they are. Deciding how to implement these behaviors was a really fun exercise, because there are so many different ways to do it. The design constraints I had to think about were scalability, readability, and flexibility. The first option I considered was creating a different class for each steering behavior, each implementing its own static version of GetSteering(<Params>). This was appealing because there was no need to instantiate a “steering” object to access the various GetSteering() behaviors, and it led to a straightforward function call: for example, Seek::GetSteering(<Params>) makes it very clear which steering function we’re calling. One of the major downsides of this method, however, is that the parameter lists of the different same-class GetSteering() overloads lead to really hard-to-discern function calls. An example can be seen in the image below:


These GetSteering() functions are differentiated by a single additional parameter for the second call. Not very clear.

The next potential design pattern I considered was a single MonolithSteering class that contains all of the different GetSteering() implementations for all of the various behaviors. This could be done with static methods or with an instantiated object of that class. When I considered this option, I decided that the amalgamation of these methods made the most sense with each of the relevant parameters being set at construction of the MonolithSteering class. This would ideally reduce the potential for user error, since all of the important variables are set only once. An added bonus of this pattern is that you could dynamically change a single variable like MaxAcceleration and see that impact every type of steering behavior in a consistent way, greatly reducing the number of “magic numbers.” Plus, tuning would be simpler (at the expense of precision). The major downside of this pattern is that each Boid’s behavior is going to be identical given the same steering function call. If you wanted to introduce randomness, or give any Boid individuality, you’d have to design that into the Kinematic data type. It’s also worth mentioning the “Single Responsibility Principle,” which advocates that a class should have a single responsibility; any more than that gets difficult to manage and scale. I don’t always agree with this principle, but in this case I feel strongly that it makes sense. This first assignment represents boilerplate we’ll be expanding on throughout the semester, so keeping the project organized in a logical way is really important.

The final design pattern I considered is a combination of the two described above. In this third pattern, a Steering object for each behavior is initialized with the necessary parameters, and GetSteering() calls of that type go through their respective objects. This maintains the Single Responsibility Principle and, with a little care, clears up the confusion shown in the first option. I think this balanced option is ideal. I wish I had thought of it sooner! I plan to refactor my code to match this pattern, but it’ll have to wait for the next assignment.
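The third pattern might be sketched like this: one object per behavior, its parameters fixed at construction, all sharing a virtual GetSteering(). The class hierarchy and field names here are illustrative assumptions:

```cpp
#include <cmath>

struct Kinematic { float x = 0, y = 0; };
struct SteeringOutput { float linearX = 0, linearY = 0; };

class SteeringBehavior {
public:
    explicit SteeringBehavior(float maxAccel) : maxAccel(maxAccel) {}
    virtual ~SteeringBehavior() = default;
    virtual SteeringOutput GetSteering(const Kinematic& boid,
                                       const Kinematic& target) const = 0;
protected:
    float maxAccel;  // set once at construction, per behavior
};

class Seek : public SteeringBehavior {
public:
    using SteeringBehavior::SteeringBehavior;
    SteeringOutput GetSteering(const Kinematic& boid,
                               const Kinematic& target) const override {
        float dx = target.x - boid.x, dy = target.y - boid.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0f) return {};
        // Full acceleration straight at the target.
        return {maxAccel * dx / len, maxAccel * dy / len};
    }
};
```

The call site stays unambiguous (`seek.GetSteering(boid, target)`), and each behavior owns exactly its own parameters.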


Before we dive into the behaviors, we need some context for the simulation. First, if a Boid leaves the screen it is deleted. Second, the maximum speed of the Boids is controlled by the “Max Speed” slider, which can be changed dynamically. Finally, the Crumb type represents the trail that Boids leave: Crumbs come from an object pool, are colored like their parent Boid, placed, and have their radius scaled with delta time. Once they reach a certain size they’re returned to the pool. Now on to the behaviors!
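As an aside, the Crumb pool might look something like this sketch; the fields and growth threshold are illustrative assumptions:

```cpp
#include <cstddef>
#include <vector>

struct Crumb {
    float x = 0, y = 0;
    float radius = 0;
    bool  active = false;
};

class CrumbPool {
public:
    explicit CrumbPool(std::size_t size) : crumbs(size) {}

    // Take an inactive crumb, place it, and activate it.
    Crumb* Spawn(float x, float y) {
        for (Crumb& c : crumbs) {
            if (!c.active) {
                c = {x, y, 1.0f, true};
                return &c;
            }
        }
        return nullptr;  // pool exhausted; caller can skip this frame
    }

    // Grow each active crumb with delta time; return it to the pool
    // once it passes the size threshold.
    void Update(float dt) {
        for (Crumb& c : crumbs) {
            if (!c.active) continue;
            c.radius += dt;
            if (c.radius > maxRadius) c.active = false;
        }
    }

private:
    std::vector<Crumb> crumbs;
    float maxRadius = 5.0f;
};
```

Pooling avoids allocating and freeing a crumb every frame for every Boid.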

I started by implementing Kinematic Seek. Unlike the Dynamic Steering algorithms, Kinematic Seek hard-sets the Boid’s velocity and orientation, causing it to always face and move towards its target at the prescribed maximum speed. This algorithm is incredibly simple, and to be frank it’s the least interesting in this assignment.

Next was Dynamic Seek, but before we dive into it, we should enumerate the differences between a Kinematic Steering function and a Dynamic Steering one. As I mentioned, Kinematic Steering methods change the active Boid’s Kinematic properties directly, but Dynamic Steering methods do no such thing. Instead, they return a DynamicSteering output containing a linear acceleration and a new orientation target. These outputs are handled in a consistent way through the Kinematic::ProcessSteering method. Back to Dynamic Seek. This algorithm has two ‘modes,’ differentiated by their behavior approaching the target. The difference is shown below.

In both, the active Boid has an acceleration applied towards the target each frame. It’s important to note that before accelerations are applied, the Boid experiences drag based on a drag property on its Kinematic. This way the Boid moves like a realistic object. Below are two gifs showing the difference drag makes.


As you can see, without drag the Boid moves like it’s in a vacuum.
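The two seek modes above can be sketched as a single function with an arrive-style flag; the slow-radius value and names are illustrative, not my actual parameters:

```cpp
#include <cmath>

struct Vec2 { float x = 0, y = 0; };
struct DynamicSteering { Vec2 linear; };

DynamicSteering DynamicSeek(Vec2 boid, Vec2 target, float maxAccel,
                            bool arrive, float slowRadius = 50.0f) {
    Vec2 dir{target.x - boid.x, target.y - boid.y};
    float dist = std::sqrt(dir.x * dir.x + dir.y * dir.y);
    if (dist == 0.0f) return {};

    float accel = maxAccel;
    if (arrive && dist < slowRadius)
        accel *= dist / slowRadius;  // ease off close to the target

    // Acceleration only; the Kinematic's own drag slows the Boid down.
    return {{accel * dir.x / dist, accel * dir.y / dist}};
}
```

Plain seek overshoots and orbits the target; the arrive-style mode scales acceleration down inside the slow radius so the Boid can settle.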

The next movement algorithm is Dynamic Wander. It works by applying a rotational acceleration to the active Boid’s orientation, with direction and magnitude scaled by (random[0,1) – random[0,1)). This random expression gives a triangular distribution centered at 0, with limits at +/- 1, so small turns are the most likely. This behavior looks a little bit twitchy, but behaves as you would expect. Below are two examples of Dynamic Wander with different orientation variance values.
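That random expression is small enough to sketch directly; the function names here are my own labels for it:

```cpp
#include <cstdlib>

// Difference of two uniform [0,1) samples: a triangular distribution
// centered at 0 with limits at +/-1, values near zero most likely.
float RandomBinomial() {
    float a = static_cast<float>(std::rand()) / static_cast<float>(RAND_MAX);
    float b = static_cast<float>(std::rand()) / static_cast<float>(RAND_MAX);
    return a - b;
}

// Wander nudges the orientation by this value scaled by a max rotation,
// so the Boid mostly drifts with occasional sharper turns.
float WanderRotation(float maxRotation) {
    return RandomBinomial() * maxRotation;
}
```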


The final (and most interesting!) behavior is Flocking. It works by linearly combining three different DynamicSteering outputs, scaled to produce the most appealing result. The three behaviors are: each flocking Boid must avoid all other members of the flock (Dynamic Flee), match the velocity of the flock leader (Dynamic Velocity Match), and continuously seek the leader (Dynamic Seek). I noticed that to get the best behavior, the leader-seeking DynamicSteering needed to be scaled up, and the avoidance scaled down. I also found that this simulation continuously seeks an equilibrium condition, and to avoid this it was best to introduce randomness. See snippet below.
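This isn’t the original snippet from the project, but a sketch of the same idea: a weighted linear blend of the three outputs plus a little random jitter to break up the equilibrium. The weights and jitter scale are illustrative tuning values:

```cpp
#include <cstdlib>

struct Steering { float x = 0, y = 0; };

float Jitter(float scale) {
    // Difference of two uniform [0,1) samples, scaled; keeps the flock
    // from settling into a static equilibrium.
    float a = static_cast<float>(std::rand()) / static_cast<float>(RAND_MAX);
    float b = static_cast<float>(std::rand()) / static_cast<float>(RAND_MAX);
    return (a - b) * scale;
}

Steering Blend(Steering separation, Steering match, Steering seek) {
    // Seek scaled up, separation scaled down, per the tuning above.
    const float wSep = 0.5f, wMatch = 1.0f, wSeek = 2.0f;
    return {wSep * separation.x + wMatch * match.x + wSeek * seek.x + Jitter(0.1f),
            wSep * separation.y + wMatch * match.y + wSeek * seek.y + Jitter(0.1f)};
}
```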


This significantly improved the believability of the flocking behavior. Flocking can be set up to follow a wandering Boid, or the flock leader can dynamically seek the cursor. The latter is shown below.


I’m not very happy with this flocking behavior, and it really needs more tuning. It’s still settling into an equilibrium condition. I think I’ll limit its seek to update only every second or so, but we’ll see. Check back in the future for updates.

There were a lot of really interesting elements of this assignment, and while it took a long time I really enjoyed doing it. I think that it’s really interesting that in a setup like this you can simply combine different steering behaviors to create emergent behavior. I can definitely see myself playing around with different combinations to see how they affect each other.

Start of Fall 2018


Wow, it’s been a while since I’ve made a post. Time sure flies.


The spring semester became frantic by the end, but I finished all of the tasks I had. I’m really satisfied with how the first year has gone. Zoom Zoom Newton (the name of our semester project) was a really great learning experience, and turned out to be something I’m proud of.


Now that the new semester has started, though, it’s become really apparent how far we’ve come. We spent the first week and a half prototyping an idea that we voted on after brainstorming for less than an hour. Unfortunately none of the games I voted for got selected, so it was a lesson in humility working on someone else’s passion project. The day we were supposed to pitch the first round of games, the pitches were cancelled morning of. Instead we did another round of ideation and prototyping, just like the first. Utkarsh Rao and I had been brainstorming for the entire week and a half before that day, since we weren’t super happy with how the first round of prototypes went, so we were really prepared. Our game pitch got selected for prototyping, and as I sit here in lab typing this, we’ve just finished pitching all of the games developed over the last month. Of the 19 pitches, about 5 will continue into development. I’ve become really passionate about our idea, inspired by Nausicaä of the Valley of the Wind and the concept of nomadic shepherding. Many people have told me it will be a shoo-in, but that’s only true if people vote for it. I’m nervous it won’t get picked, but aside from inviting people to play the prototype there’s little more I can do.


I’ve gotten ahead of myself. I’ve been working at Warner Bros. Avalanche for a few months now. It took some time to find where I fit in there, but I’m really enjoying the experience. Everyone I work with is super nice, and I’m learning a ton. There’s little I can say about the project there (NDA), but I’m very excited for the future. Balancing work and school has been a real challenge, especially since I’ve also taken a TA position. But money is a thing, and a good life doesn’t always come cheap. Still, in a general sense, this is the happiest I’ve been in a long time. I’m very fortunate.


Below are some of the concept pieces for the game we pitched today. We the devs, and the game itself, are called “The Herd.” (Join the herd, get it?)


Miguel Angel Espinosa Calderon designed the creature concepts; we told him to go wild, with no restrictions on what the creatures might look like. His work can be found on his ArtStation:



Rita Kaczmarska designed the character and glider. The character is supposed to be androgynous, solitary, and windswept. She drew inspiration from Jewish tribal artwork and the steampunk universe.


The glider is designed to mimic the silhouette of the herd creatures. These are rough and will be refined, but we really liked the top left one. Rita’s work can be found here:


Pre-GDC Update – Avalanche and Box Castle

This program has really been amazing, and opened a ton of doors for me. It’s hard to even know where to start.


I’ve been at the Genetic Science Learning Center for about a month now, and it’s been a really great experience. I’m getting to work on a small team, all working towards a similar goal. We’ve been tackling learning objectives, and I’m getting to design based on what I’ve learned about user experience. I’m really hoping I can convince my bosses there to start implementing data collection through their interactive learning aids. I think that would help inform some of their projects going forward, and might even help secure grant funding.


Around the time of my last post, I interviewed at Avalanche in Salt Lake (Warner Bros.). I took the interview mostly for practice, and I thought it went okay but not great, so I didn’t post about it. Yesterday I found out that they want to hire me for an internship position, though. That’s really a dream come true: it’s an excellent opportunity and I’m really looking forward to starting this summer. I still don’t have details on what exactly I’ll be doing, but I’m sure it’ll be great. Since I didn’t expect to get that position, I had been lining up interviews with Rockwell Collins here in Salt Lake. I intend to still go through with those, at least for the practice. If they go well it may open doors for me later, even if it’s unlikely I take a position there now.


Our semester project is coming along well. I’m really happy with our process, it feels like we’ve really taken to heart all the things we’ve learned from prototyping. Rob Baer’s animations look awesome, and the whole game reads really well on a phone. Jonathan has put together a level editor, and I’ll be working on getting our minimum viable product ready in time for GDC. Speaking of GDC, I found out that I’m the only one driving, so the EAE program is renting me a car to drive their supplies. It’s coming up quick, and I’m not sure how I’ll get everything done in time, but I believe! Oh, and we settled on the studio name “Box Castle” and our team really liked my concepts for a logo, which we’ll get printed on shirts. Super stoked.


Ashley’s GUR class is also going really well. She’s brought in a local studio run by previous EAE graduates, and we’re going to get some consulting experience testing their game. I’ve realized that I’m going to have to make a more flexible resume soon, as my experience is quickly outpacing my layout.


Again, this program is amazing. I can’t believe how much I’ve learned and gained.

Spring Semester – Explosive Start

This semester (Spring 2018) is already off to an amazing start…

First, we got the bombshell that we weren’t starting a three-semester project, but were instead set to develop a mobile game in time for the Game Developers Conference over a single semester. This also means that next year we’ll be undertaking a year-long development on some other type of game. I’m really happy with this change, and really looking forward to this semester’s project.

We also got to choose our groups, which went quite literally perfectly. Every member of my eight-person team is both highly skilled and enjoyable to be around. We discussed everyone’s goals for this semester, and have settled on a really strong concept. From our time in the prototyping class we’ve really learned the value of developing a core mechanic through iteration and testing. Within the first week we had a white-box playable and over the next week I’ll start using what I’m learning in Game’s User Research to keep driving our development with data.

I was also very fortunate to be offered a position at the Genetic Science Learning Center as a Unity developer. I’ve only been there for a week, but I really like it so far, and it’s a great opportunity to continue developing my skills in both Unity and C# more generally.

This last weekend was completely occupied with the Global Game Jam, my first jam ever. I cannot express enough how much I loved it. The 12-person team, the breakneck pace, the pure chaos; it was heaven. I was completely blown away by how much my team accomplished, and was floored by the quality and inventiveness of all of the teams. I’ll definitely be doing more jams, and am already looking forward to next year’s GGJ.

Looking forward, I’m really excited about applying all the things I’m learning in Ashley’s Games User Research class to the development of our mobile game. We’ve got a bunch of new mechanics to test over the next few weeks, and I’ll be sure to post updates as it progresses.


First Post! Winter Break 2017

This is the first post in what will be a series I try to make every two weeks. So a bit of an introduction is in order.

Hi! My name is Dorian. I really like drawing, painting, and ceramics. I’m more than a little bit obsessed with cats (yes, I’m definitely a crazy cat person). I ride longboards, play lots of League of Legends, and live with my two best friends. I’m about to begin my second semester in the University of Utah’s Entertainment Arts and Engineering program (EAE) as an engineer.


A good friend in my program, Utkarsh, and I have been spending lots of time brushing up on our C++ and improving the assignments we started this last semester. We’ve also spent some time (over a couple of drinks) talking about capstone games. The way our program is structured, in the first semester we do rapid prototypes, 2–3 weeks from start to finish. The two games in my portfolio (Destructo-spin and (BASS)teroids) were both made under those time constraints. For the following three semesters, we separate into larger teams and work on the same game until we graduate and hypothetically publish our comparatively enormous capstone games. Through the prototyping class I’ve got a pretty big list of people I’d be happy working with, so Utkarsh and I are trying to have a pitch ready for the start of the semester to win those people over. We don’t have one concrete idea we like yet, but we’re imagining something cooperative and multiplayer. Updates to come.