Push the boundaries of 3D audio with the unrivalled EPOS Surround binaural rendering engine
The majority of PC and console games today offer not only stereo output but also multichannel audio, most commonly as a 7.1 mix. To experience this as the game creators intended, you need a surround-capable audio system – either a traditional multi-loudspeaker room set-up or a headset. Our engineers have designed EPOS Surround to deliver a best-in-class experience for gamers.
EPOS Surround is created in-house from our extensive research and development in psychoacoustics, the science of sound perception. The brain converts the soundwaves arriving at each ear into a spatial representation that gives the listener a sense of where audio is coming from. We use our understanding of how human anatomy captures sound, and how the brain perceives it, to create a realistic sense of audio immersion.
EPOS Surround uses custom spatial filters that have been compiled from a large proprietary internal research database. We especially focused on improving the perception of sounds that come from the back or side of the soundstage, which existing surround sound technologies struggle to deliver as effectively.
Our ears and brain work together to interpret the direction sounds come from. For example, sounds to the side of us will arrive at one ear before the other, and our brain processes this information to tell us which side the sound is coming from. Our brain also interprets the way sound interacts with the structure of our ears (the pinna), and the way it reflects off our shoulders and torso. In this way, humans can perceive the direction sounds come from remarkably accurately.
Traditionally, surround sound, as you might experience it in a cinema, is delivered by multiple loudspeakers, and the audio signal is mixed into each speaker channel depending on the location of the sound on the soundstage – so as a jet flies overhead on screen, the sound moves from the speakers behind you to the speakers in front.
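To make the mixing step above concrete, here is a minimal sketch of the classic constant-power panning law used to place a sound between a pair of speakers. This is a generic textbook technique for illustration, not a description of EPOS's own processing; the function name and pan range are our own choices.

```python
import math

def constant_power_pan(pan: float) -> tuple[float, float]:
    """Constant-power gains for a pair of speaker channels.

    pan: -1.0 (fully left) .. +1.0 (fully right).
    Returns (left_gain, right_gain); the summed power
    left**2 + right**2 stays at 1.0 for every position,
    so the sound keeps a constant perceived loudness as it moves.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map -1..1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)

# As a sound "flies" from one side to the other, sweep the pan value:
for pan in (-1.0, 0.0, 1.0):
    left, right = constant_power_pan(pan)
    print(f"pan={pan:+.1f}  L={left:.3f}  R={right:.3f}")
```

A real multichannel mixer interpolates gains like these across all speaker channels as the on-screen position changes, but the principle is the same pair by pair.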
For our ears to perceive the sound coming from a headset as being from a particular direction, the audio must sound as if it has interacted with our anatomy – our ears, head and shoulders especially, just as it does in real life.
Audio engineers create models of the human head and torso, complete with pinnae (the external parts of the ear) and sensitive microphones embedded where the eardrums would be in a real human head. By playing sounds from different locations around the model and recording what arrives at the microphone in each ear canal, we can build a database of what are known as Head Related Impulse Responses (HRIRs). From these, we can construct a mathematical model (a filter) that adjusts a sound and its timing in such a way that the listener perceives it as coming from a particular direction – even though it has travelled straight into the ear canal from a headset. These filter algorithms are known as Head Related Transfer Functions (HRTFs), and they form the basis of surround sound perception from a stereo headset.
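The rendering step described above can be sketched in a few lines: a mono source is convolved with the measured left-ear and right-ear impulse responses for one direction, producing a two-channel binaural signal. The toy HRIR values below are invented purely for illustration – real measured HRIRs are hundreds of samples long and come from databases like those described in the text.

```python
import numpy as np

def binaural_render(mono: np.ndarray,
                    hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono source with the left/right HRIRs measured
    for one direction, yielding a 2-channel binaural signal.
    Both HRIRs must have the same length so the channels align."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy HRIR pair (made up): the right ear hears the sound ~3 samples
# later and quieter, as if the source sat to the listener's left.
hrir_l = np.array([1.0, 0.3, 0.1, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.0, 0.6, 0.2])

click = np.zeros(16)
click[0] = 1.0                          # an impulse "click" test source
out = binaural_render(click, hrir_l, hrir_r)   # shape (20, 2)
```

Because the source here is a single impulse, each output channel is simply the HRIR itself: the click reaches the left channel immediately and the right channel three samples later at lower level, which the brain would read as "sound on the left".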
Illustration: Head Related Transfer Function (HRTF)
How we perceive the direction a sound comes from depends on the frequencies in that sound. For low frequencies, our brain determines direction simply by comparing the difference in arrival time at each ear – a fairly crude measurement. For midrange frequencies, the brain also interprets the level at each ear – the sound will be louder on one side than the other.
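The arrival-time cue mentioned above, the interaural time difference (ITD), can be estimated with the classic Woodworth spherical-head approximation. This is a standard textbook formula, not the filter EPOS actually uses, and the head radius below is a conventional average value assumed for illustration.

```python
import math

def woodworth_itd(azimuth_deg: float,
                  head_radius_m: float = 0.0875,
                  speed_of_sound: float = 343.0) -> float:
    """Woodworth approximation of the interaural time difference
    for a spherical head: ITD = (r / c) * (theta + sin(theta)).

    azimuth_deg: 0 = straight ahead, 90 = directly to one side.
    Returns the delay in seconds between the two ears.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A sound directly to one side arrives roughly two-thirds of a
# millisecond earlier at the near ear than at the far ear.
itd_us = woodworth_itd(90.0) * 1e6
print(f"ITD at 90 degrees: {itd_us:.0f} microseconds")
```

Tiny as that delay is, it is one of the strongest localisation cues the brain has for low-frequency content.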
In fact, it is for higher frequencies, above 5 or 6 kHz, that our brain extracts the most directional information, and it generally works this out from the interaction of these sounds with our pinnae. It's for this reason that professional esports players tend to EQ lower frequencies down in level and higher frequencies up – it sharpens their perception of where sounds are coming from.
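The kind of "tilt" EQ described above can be sketched crudely in the frequency domain: attenuate everything below a split frequency and boost everything above it. Real EQs use smooth filter curves rather than this hard split, and the gain values and 5 kHz split point here are assumptions chosen to match the passage, not a recommended setting.

```python
import numpy as np

def tilt_eq(signal: np.ndarray, sample_rate: int,
            split_hz: float = 5000.0,
            low_gain_db: float = -6.0,
            high_gain_db: float = 6.0) -> np.ndarray:
    """Crude FFT-domain tilt EQ: cut content below split_hz and
    boost content above it, mimicking the curve competitive
    players dial in to emphasise pinna-based localisation cues."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    gains = np.where(freqs < split_hz,
                     10.0 ** (low_gain_db / 20.0),
                     10.0 ** (high_gain_db / 20.0))
    return np.fft.irfft(spectrum * gains, n=len(signal))
```

Applied to game audio, a curve like this makes footsteps and other high-frequency cues stand out at the cost of a thinner overall sound, which is exactly the trade-off competitive players accept.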