Capstone:
Puzzle Box Palace
Project Snapshot
Puzzle Box Palace is a single-player, first-person puzzle game set inside a magical puzzle box. The player character receives a mysterious puzzle box from their grandfather and is shrunk down and transported inside. To find all of the letters their grandfather left behind and escape, the player must solve their way through rooms that rotate and transform as they interact with them.
Engine: Unreal Engine 4
Team Size: 13 people
Development Time: 4 months
Roles and Responsibilities
The 13-person team for Puzzle Box Palace comprised 4 level designers (including 1 lead level designer and 1 game designer), 3 artists (including 1 lead artist and 1 external lighting artist), 2 producers (including 1 QA producer), and 4 programmers (including 1 lead programmer).
My Title: Lead Sound Designer
My Team Size: Just me :)
Responsibilities:
- Create, mix, and master all sound assets for the game, including gameplay sounds, environmental sounds, UI sounds, and music
- Balance sound levels across gameplay and menus
- Collaborate with artists, leads, and the game designer on the theming and auditory tone of the game
- All sound programming (via Blueprint)
- All sound bugfixing
- Devise and implement intelligent sound systems, such as randomization and floor-material detection
- Compose, mix, and master the soundtrack
- Iterate on music and sound effects per lead and game designer feedback
- Integrate audio spatialization, occlusion, reverb, and room tone to increase player immersion
Work Samples
Music:
Sound Effects:
The Randomization System
The randomization system I designed and programmed for this game was my most complex programming task of the project. At runtime, when a level is loaded, each audio marker (a no-collision, player-invisible editor asset I created) immediately runs a series of checks on public variables to determine which lengths of sound effect it will play later (0.5-, 1-, 2.5-, 5-, and 10-second options are each checked on or off). For each enabled length, it then draws a random member from an array of all sound effects of that length and saves it as a private variable. All unneeded audio components (i.e., the sound-effect lengths that are not in use) are then deleted.
Identical code is run on all possible lengths of sound effects
Excess audio components are deleted at the end of EventBeginPlay
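To make that flow concrete, here is a minimal standalone C++ sketch of the BeginPlay logic. The real system is Unreal Blueprint, so every name here (AudioMarker, LengthEnabled, SoundPools, ChosenSounds) is a hypothetical stand-in, and "deleting unused audio components" is modeled by simply dropping the unused pools.

```cpp
#include <cstdio>
#include <map>
#include <random>
#include <string>
#include <vector>

struct AudioMarker {
    // Public checkboxes: which clip lengths (in seconds) this marker uses.
    std::map<double, bool> LengthEnabled = {
        {0.5, false}, {1.0, true}, {2.5, false}, {5.0, true}, {10.0, false}};
    // Editor-filled pools of sound effects, grouped by clip length.
    std::map<double, std::vector<std::string>> SoundPools;
    // Private: one randomly drawn clip per enabled length.
    std::map<double, std::string> ChosenSounds;

    void BeginPlay(std::mt19937& Rng) {
        for (const auto& [Length, Enabled] : LengthEnabled) {
            auto Pool = SoundPools.find(Length);
            if (Enabled && Pool != SoundPools.end() && !Pool->second.empty()) {
                // Draw a random member of that length's array and save it.
                std::uniform_int_distribution<size_t> Pick(0, Pool->second.size() - 1);
                ChosenSounds[Length] = Pool->second[Pick(Rng)];
            } else if (Pool != SoundPools.end()) {
                // Stand-in for deleting the unneeded audio components.
                SoundPools.erase(Pool);
            }
        }
    }
};

int main() {
    std::mt19937 Rng(std::random_device{}());
    AudioMarker Marker;
    Marker.SoundPools[1.0] = {"Click_01", "Click_02", "Click_03"};
    Marker.SoundPools[5.0] = {"Rumble_01", "Rumble_02"};
    Marker.BeginPlay(Rng);
    for (const auto& [Length, Clip] : Marker.ChosenSounds)
        std::printf("%.1fs slot -> %s\n", Length, Clip.c_str());
}
```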
When an animation event begins, an array of these audio markers is activated. In order to increase flexibility and account for multi-phase animations, each audio marker includes public variables for delaying each length of sound effect independently. This allows, for example, a 2.5-second sound effect to begin playing right away as part of the first phase of an animation, and a 5-second sound effect to only begin playing 2.5 seconds later to create audio for an animation that’s 7.5 seconds long. This allowed designers maximum flexibility with their animations while still ensuring unique audio didn’t need to be created for every single animation.
Public bools control which sounds play. Public float controls delay of various sounds.
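The per-length delays can be illustrated with a small hypothetical sketch; the numbers below reproduce the 7.5-second example above (a 2.5-second clip at t=0 followed by a 5-second clip at t=2.5). In the actual Blueprint this is a delay before the sound plays; the names and clip titles here are illustrative only.

```cpp
#include <cstdio>
#include <map>
#include <string>

struct TimedSound {
    std::string Clip;     // the sound effect chosen at BeginPlay
    double DelaySeconds;  // public float set per length in the editor
};

int main() {
    // Key = clip length in seconds; the delays stagger the two animation phases.
    std::map<double, TimedSound> Schedule = {
        {2.5, {"Gears_Short_03", 0.0}},  // phase one starts immediately
        {5.0, {"Gears_Long_01", 2.5}},   // phase two starts 2.5s later
    };
    for (const auto& [Length, Entry] : Schedule)
        std::printf("t=%.1fs: play '%s' (%.1fs clip)\n",
                    Entry.DelaySeconds, Entry.Clip.c_str(), Length);
}
```

Together the two clips cover 7.5 seconds of animation from only two pre-made sound effects, which is what let designers retime animations without requesting new audio.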
In some instances, such as levers, the system needed to recall the sounds it had used for a particular animation but flip their playback order as the animation plays in reverse. To account for this, each audio marker also has a checkbox that marks it as part of initial playback or reverse playback. For each reverse-playback audio marker, the designer simply uses a public variable to associate it with the audio marker it reverses. This lets the reverse marker look up the specific sound effects to play, rather than drawing new random array members, guaranteeing that reverse playback uses the same sound effects as forward playback.
Reverse audio markers directly draw from their partner's sound effect variables.
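A sketch of that reverse-playback link, again with hypothetical names: a reverse marker holds a reference to its forward partner and reads that marker's already-chosen clips, in flipped order, instead of rolling its own.

```cpp
#include <cstdio>
#include <map>
#include <string>

struct AudioMarker {
    bool bIsReverse = false;               // public checkbox on the marker
    const AudioMarker* Partner = nullptr;  // public reference (reverse markers only)
    std::map<double, std::string> ChosenSounds;  // filled at BeginPlay on forward markers

    void Activate() const {
        if (bIsReverse && Partner) {
            // Read the partner's picks in flipped order, so reverse playback
            // uses the exact clips the forward animation used.
            for (auto It = Partner->ChosenSounds.rbegin();
                 It != Partner->ChosenSounds.rend(); ++It)
                std::printf("play '%s' (%.1fs, reversed order)\n",
                            It->second.c_str(), It->first);
        } else {
            for (const auto& [Length, Clip] : ChosenSounds)
                std::printf("play '%s' (%.1fs)\n", Clip.c_str(), Length);
        }
    }
};

int main() {
    AudioMarker Forward;
    Forward.ChosenSounds = {{2.5, "Lever_Crank_02"}, {5.0, "Lever_Settle_01"}};

    AudioMarker Reverse;
    Reverse.bIsReverse = true;
    Reverse.Partner = &Forward;  // designer-set link to the marker being reversed

    Forward.Activate();  // lever pulled
    Reverse.Activate();  // lever reset: same clips, flipped order
}
```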
Post-Mortem
What went well?
- Working as the sole audio designer/programmer on the project, I communicated clearly with designers and programmers to establish metrics for animation lengths and to ensure I wasn't overwhelmed with too many unique sounds to create and implement.
- By exposing so many audio marker variables, I was able to use them efficiently and time them precisely against animations.
- Communicating with the game designer early about soundtrack theming ensured even early versions of the soundtrack were on-target.
- Taking ownership of the sound early guaranteed it was considered at every stage of development and helped the team end up with a cohesively polished game.
What went wrong?
- It was at times difficult to communicate across disciplines and get designers to understand that changing an animation meant telling me so I could update the audio to match.
- Due to the nature of the animation system, audio integration often got very little time with the levels and was frequently pressed for time near milestone deliveries. This was particularly exacerbated when animation changes were made after the audio pass.
- Despite having realistic time expectations for tasks, the waterfall-like development of the levels kept audio tasks blocked for as much as a week.
What I learned
- Best practices for communicating as a remote developer with in-person developer teams.
- Techniques for increasing the flexibility of an Unreal system, and overall improved literacy in Unreal Blueprint.
- Communication methods for establishing game metrics, and how to ensure those metrics are followed by other team members.
- Bugfixing strategies for audio problems that are difficult to isolate.
- Increased understanding of Actor Components in Unreal and how to use and attach them to others' work.