The bulk of my interests tend to lie in the cloudy and vague area of audio.
When I use the term audio, I am generally referring to anything aural which the average Joe wouldn't call music. That incorporates sound effects and sound design, strange experimental audio, interesting aural ideas, etc.
More specifically, I love procedural audio. Procedural audio is generally taken to mean audio that is created on-the-fly, so that just before creation (and even during it) the rules governing the creation can change. There's a better explanation from Andy Farnell, author of the awesome textbook Designing Sound, here.
What I particularly like about procedural audio is its ability, when implemented within a video game, to react completely to what the player is doing. Many games already have adaptive audio. For instance, in Battlefield: Bad Company 2, when you walk into a house, the gunshots and explosions have a lot more reverb added to them. But that is just a small change to the DSP affecting a pre-recorded piece of audio. With procedural audio, you can change the very components of the sound itself.
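The distinction can be sketched in a few lines of Python. This is purely illustrative (the function names, the crude one-tap "reverb" and the noise-burst "gunshot" are my own toy inventions, not anything from an actual game): adaptive audio post-processes a fixed sample, while procedural audio rebuilds the sound from parameters every time.

```python
import math
import random

SR = 8000  # toy sample rate (Hz), kept low for illustration

def adaptive_gunshot(sample, indoors):
    """Adaptive audio: the same pre-recorded sample every time,
    with more 'reverb' (here just one crude feedback tap) indoors."""
    wet = 0.6 if indoors else 0.1
    delay = SR // 20  # 50 ms delay tap
    out = list(sample) + [0.0] * (delay * 4)
    for i in range(len(sample)):
        out[i + delay] += wet * out[i]
    return out

def procedural_gunshot(loudness, sharpness, duration=0.25, seed=0):
    """Procedural audio: the sound itself is synthesised from
    parameters, so every component can respond to game state."""
    rng = random.Random(seed)
    n = int(SR * duration)
    # exponentially decaying noise burst; 'sharpness' sets the decay rate
    return [loudness * rng.uniform(-1, 1) * math.exp(-sharpness * i / SR)
            for i in range(n)]
```

In the adaptive case only `wet` can ever change; in the procedural case every parameter of the synthesis is up for grabs.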
During my second year at university I created a Max/MSP (the closed-source predecessor to my preferred tool, Pure Data) program which was linked, via OpenSoundControl, to a Quartz Composer patch running the game Asteroids (programmed by Gary C. Martin). Obviously Asteroids is a very simple game with only a few things going on, but I found I had plenty to work with when creating control signals from what was happening on-screen.
The result was a large mixture of strange sounds and sound effects, combined with some artsy droning things, all controlled by the player's score, how many lives they had left, how long they had been playing, what wave number they were on, how many asteroids were on the screen at once, and so on. And the control signals weren't driving simple DSP effects like the reverb mentioned above; they would alter things like the durations of sounds, the synthesis settings, pitches and modulations.
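The heart of that patch was the mapping from game state to synthesis settings. A minimal sketch of the idea in Python, assuming an entirely hypothetical parameter set (the actual Max/MSP patch used different, and far messier, mappings):

```python
def clamp(x, lo, hi):
    """Keep a control value inside a sensible synthesis range."""
    return max(lo, min(hi, x))

def drone_params(score, lives, wave, asteroid_count):
    """Map live game state to synthesis settings for one drone voice.
    The specific mappings here are invented for illustration."""
    return {
        # pitch climbs with the wave number
        "pitch_hz": 55.0 * (1 + clamp(wave, 0, 12)),
        # modulation thickens as the screen fills with asteroids
        "mod_depth": clamp(asteroid_count / 10.0, 0.0, 1.0),
        # sounds stretch out as the score grows
        "duration_s": clamp(2.0 + score / 5000.0, 2.0, 10.0),
        # the drone thins out as the player runs out of lives
        "amplitude": clamp(lives / 3.0, 0.2, 1.0),
    }
```

Each frame of game state produces a fresh parameter set, which is why the piece can never sound the same twice.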
I ended up doing a performance of sorts by getting 20 people to play the game at once, with the sound effects from each machine feeding into a master Max/MSP patch. As they played, I mixed the different sounds from all the machines into a composition which by its very nature would never sound the same twice (my lecturer insisted I record the performance, which I felt rather missed the point). But of course it wasn't a random composition: every single sound was the result of one of the players' games, be it their score or the fact that they had just crashed spectacularly into an asteroid.
Earlier in my second year, before the Asteroids project above, we were set a programming project. I chose to create a Pure Data patch which could procedurally generate soundscapes for different audio environments. Using Andy Farnell's excellent book Designing Sound, I was able to simulate a variety of sounds (fire, crickets, rain and wind) which would fit well with many game environments, with the advantage that the sounds are highly configurable, never simply loop, and don't require large pre-rendered audio samples. Here is a large pre-rendered audio sample of the project. Below is an image of the Pure Data GUI.
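To give a flavour of why such textures never loop: many of them boil down to noise pushed through filters whose settings drift continuously. Here is a very loose Python sketch of a wind-like texture in that spirit (this is my own toy approximation of the general idea, not Farnell's actual patch): white noise through a one-pole low-pass whose cutoff sweeps slowly, so no two stretches of output are identical.

```python
import math
import random

def wind(duration_s=2.0, sr=8000, seed=1):
    """Wind-ish texture: white noise through a one-pole low-pass
    whose cutoff drifts slowly. A toy sketch of filtered-noise
    sound design, not a faithful simulation."""
    rng = random.Random(seed)
    n = int(duration_s * sr)
    y = 0.0       # filter state
    out = []
    for i in range(n):
        # cutoff sweeps slowly between ~100 Hz and ~900 Hz
        cutoff = 500.0 + 400.0 * math.sin(2 * math.pi * 0.25 * i / sr)
        # one-pole low-pass coefficient for this cutoff
        a = 1.0 - math.exp(-2 * math.pi * cutoff / sr)
        y += a * (rng.uniform(-1.0, 1.0) - y)
        out.append(y)
    return out
```

Replace the sine sweep with its own band-limited noise source and the gusting becomes aperiodic too, which is essentially how these patches avoid any audible repetition.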