
Wwise Part 1: Navigating the Interface

I struggled heavily with Wwise while I was a video game minor at Berklee during the pandemic. The frustrations I had with the program back then kept me away from really learning it for years. Then I woke up one day recently fascinated with it, and I have begun properly getting familiar with it again.


I’ve been working my way through the Wwise certification courses provided by Audiokinetic, and I highly recommend them. That said, I would like to write about some things I found confusing back then that I have clarity on now, as well as things I am taking notes on so I remember them in the future. I hope other people find this helpful, and if anyone has tips or would like to correct me, by all means please reach out!


Wwise is a very deep application. There is so much you can do in this program – even the official course notes that one person rarely uses all of it, and the work is often divided among a team. Navigating the interface and understanding the different panels is a great place to start, regardless of whether you are on a team or riding solo.





At the top (like many other programs) we have the toolbar. The “Layouts” button is very important to note. Like the DAWs you may be familiar with, Wwise can be organized into separate panels and windows. The main window in Wwise is a consolidated window with panel presets that get arranged depending on which layout you choose. These layouts are designed to be optimized for specific workflows, but you may often find yourself switching between them or activating additional panels (be they floating or in the consolidated window) in order to achieve the workflow you like. You may start in the “Designer” layout, then manually open and close enough panels that you end up with something much more similar to “SoundBank,” for example. These presets are just visual layouts, and any of the available panels can be opened, rearranged, and closed at will.



In many of the layouts, there is a “Project Explorer” panel on the left-hand side. This is where we can navigate the various assets (or objects, in Wwise terminology) and hierarchies within the software. Here you will find all the sound files, game syncs, events, parameters, mixing sessions, etc. that you have set up, configured, or imported, so it is important to be familiar with each section here.


The Game Syncs tab is where you will find your switches, states, game parameters, and triggers. These are all conditions controlled and manipulated by the game that can correspond to audio occurrences in your Wwise session. For example, a state could be whether the player is alive or dead. States tend to be global. Switches, on the other hand, tend to be more localized, such as what material the player’s feet are on.
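To make that global-versus-local distinction concrete, here is a toy Python sketch – not the actual Wwise SDK (which is C++, where the game calls functions like AK::SoundEngine::SetState and SetSwitch) – modeling states as one value for the whole game and switches as a value per game object. All names here are illustrative:

```python
# Toy model, NOT the Wwise SDK: states are global, switches are per game object.
class GameSyncs:
    def __init__(self):
        self.states = {}    # state group -> current state (one value, game-wide)
        self.switches = {}  # (switch group, game object id) -> current switch

    def set_state(self, group, value):
        # A state like "PlayerLife" applies to the whole game at once.
        self.states[group] = value

    def set_switch(self, group, game_object, value):
        # A switch like "Surface" can differ per character or object.
        self.switches[(group, game_object)] = value

    def get_switch(self, group, game_object, default=None):
        return self.switches.get((group, game_object), default)

syncs = GameSyncs()
syncs.set_state("PlayerLife", "Alive")                      # global
syncs.set_switch("Surface", game_object=1, value="Gravel")  # hero on gravel
syncs.set_switch("Surface", game_object=2, value="Wood")    # NPC on wood
```

The point of the sketch is only the shape of the data: one table keyed by group for states, one keyed by group *and* game object for switches.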


Events are similar to game syncs in that they respond to calls sent by the game engine. Event names need to correspond exactly to the game’s call so Wwise knows what to activate when requested. An event can be a simple one-shot playback of an audio file, or it could be a whole script of actions.
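As a rough illustration of “a whole script of actions,” here is a hedged Python sketch – again not the Wwise SDK, just a toy dispatcher with made-up event and sound names – where one event name maps to a list of actions the engine triggers by posting that exact name:

```python
# Toy sketch, NOT the Wwise SDK: an event name maps to a list of actions,
# from a single "play" to a whole script (stop, set volume, play).
events = {
    "Play_Footstep": [("play", "footstep_gravel")],
    "Enter_Water": [
        ("stop", "footsteps"),
        ("set_volume", "ambience", -6.0),
        ("play", "splash"),
    ],
}

def post_event(name, log):
    # The game engine posts the event by name; the name must match exactly,
    # or nothing fires.
    for action in events.get(name, []):
        log.append(action)

log = []
post_event("Enter_Water", log)  # runs all three actions in order
```

This is why the exact-name requirement matters: a misspelled event name simply looks up nothing, and no audio plays.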


ShareSets are reusable collections of attributes you can apply to many objects at once. These can include conversion settings for various platforms, effects, attenuations, modulators, etc.

The SoundBanks tab is where we can set up and export various SoundBanks to be utilized within the game engine. We can use multiple SoundBanks to optimize the audio held in memory and only load what is necessary for specific parts of the game.
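The load-only-what-you-need idea can be sketched in a few lines of Python. This is a toy manager, not Wwise’s bank API, and the bank names are invented for the example:

```python
# Toy sketch: keep only the banks the current level needs loaded,
# unloading the rest to free memory.
class BankManager:
    def __init__(self):
        self.loaded = set()

    def load_for_level(self, banks_needed):
        banks_needed = set(banks_needed)
        for bank in self.loaded - banks_needed:
            self.loaded.discard(bank)   # unload banks the level no longer uses
        self.loaded |= banks_needed     # load the banks it does use

mgr = BankManager()
mgr.load_for_level({"Common", "Forest"})   # forest level: Common + Forest in memory
mgr.load_for_level({"Common", "Dungeon"})  # dungeon: Forest unloads, Dungeon loads
```

A shared bank like the hypothetical "Common" here stays resident across levels, while level-specific banks swap in and out – that swap is where the memory savings come from.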


Inside the Sessions tab is where we set up control surfaces (such as MIDI faders); mixing sessions, which include mix busses, sends, etc.; and Soundcaster sessions, which we can use to simulate gameplay inside of Wwise and to help us tweak and polish the audio further.


I have yet to get to queries in my studies, so last (for now) we have the Audio tab. This is where all of the audio work is done. Here we have our Actor-Mixer, Master-Mixer, Audio Devices, and Interactive Music hierarchies. The Actor-Mixer hierarchy is where we can import all of our non-musical audio and “connect” it with events and game syncs, as well as set up offsets for different collections of audio in a hierarchical structure; everything is then applied, mixed, and summed later on in the Master-Mixer. There is no actual mixing occurring in the Actor-Mixer hierarchy itself. The Interactive Music hierarchy is where we set up the game’s music files.


These separate workspaces help keep the project organized, but also allow us to see everything occurring in the program at a glance and manipulate large numbers of files at once.


Another panel to be aware of is the “Property Editor.” This is where the large majority of audio tweaking occurs. Here we can manipulate parameters such as volume, pitch, filters, bussing, etc. And as before, it is hierarchical: child objects stack their parent objects’ attributes. So a container with a pitch of -1000 cents will affect all of the audio inside of it. If a child inside that container has its pitch set to +1200, the resulting pitch will be 200 cents higher than the original, unaffected audio file.
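That stacking behavior is just addition down the hierarchy, which a couple of lines of Python can verify (this is only the arithmetic from the example above, not how Wwise computes it internally):

```python
# Pitch offsets (in cents) accumulate from parent to child down the hierarchy.
def effective_pitch(offsets):
    """offsets: pitch values from the root container down to the child object."""
    return sum(offsets)

# Container at -1000 cents, child object inside it at +1200 cents:
print(effective_pitch([-1000, 1200]))  # net result: +200 cents vs. the raw file
```

The same additive logic applies at any depth: a grandchild's offset is stacked on top of both its parent's and grandparent's.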


I will be getting deeper into Wwise and am excited to share more as I learn. But for now, I highly recommend the free courses provided by Audiokinetic to cover the basics.
