Friday, September 9, 2011


Here's an article in the NYT discussing the latest in simulating the acoustics of a great concert hall in your living room, as well as making hearing aids that do more than simply amplify sound. The word "psychoacoustics" is used to cover not just what the ear hears, but also how the brain interprets that information.

Anyone who has ever tried to mix and master audio for a CD will immediately appreciate this quote:

. . . One factor that slows the pace of innovation, Dr. Hartmann suggested, is that the human auditory system is “highly nonlinear.” It is difficult to isolate or change a single variable — like loudness — without affecting several others in unanticipated ways. “Things don’t follow an intuitive pattern,” he said. . . 

. . .“Often our changes were worse than doing nothing at all,” Dr. Kyriakakis recalled. “The mic liked the sound, but the human ear wasn’t liking it at all. We needed to find out what we had to do. We had to learn about psychoacoustics.”

Like music therapy and music pedagogy, this is another field where the new neuroscience looks to bring a much deeper understanding to what works and what doesn't. 


  1. There's another aspect of psychoacoustics with which, in a previous career, I had a passing involvement - the digitisation of voice for telephones.

    For mobile phone networks, there is a limited radio spectrum available, and so there is a great incentive to pack the sound of a human speaking voice into the smallest possible number of bits, while keeping the words intelligible and the individual voice recognisable.

    And this had to work in all major human languages - including Chinese, where the inflection of your voice changes the meaning of the word.

    It was impossible to produce undistorted voice in the available digitised radio channel, so it became a matter of working out what forms of distortion were acceptable and how to create the impression of an adequately clear human voice in the telephone earpiece.

    A lot of mathematics and a lot of expensive subjective testing went into the process. It is still going on, because advances in computing power for mobile phones enable ever more complex and computationally intensive algorithms to encode and decode voice in ever more intricate ways.
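    The trade-off described above - spending scarce bits where the ear is most sensitive - can be sketched in a few lines. This is a toy illustration of μ-law companding, the logarithmic curve behind the classic ITU-T G.711 telephone codec, not any particular mobile-phone codec; the function names and the 8-bit budget are illustrative assumptions, not anything from the comment itself:

    ```python
    import math

    MU = 255  # standard mu-law parameter for 8-bit telephony

    def mu_law_encode(x):
        # Compress a sample in [-1.0, 1.0] along a logarithmic curve:
        # quiet sounds get more of the output range than loud ones,
        # roughly matching the ear's nonlinear sense of loudness.
        return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

    def mu_law_decode(y):
        # Exact inverse of the companding curve.
        return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

    def quantize(y, bits=8):
        # Round to one of 2**bits uniformly spaced levels, as a codec
        # would when packing each sample into a fixed bit budget.
        levels = 2 ** (bits - 1)
        return round(y * levels) / levels

    # A quiet sample survives 8-bit quantisation far better when it is
    # companded first than when it is quantised linearly.
    quiet = 0.01
    companded_err = abs(mu_law_decode(quantize(mu_law_encode(quiet))) - quiet)
    linear_err = abs(quantize(quiet) - quiet)
    assert companded_err < linear_err
    ```

    The distortion has not been eliminated - the loud parts of the signal are now coarsely represented - but it has been moved to where the ear notices it least, which is exactly the "acceptable distortion" calculation the comment describes.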

  2. Jonathan -

    What a great comment. Thanks.

    Just last night I got through on the phone to my lama friend in the hinterlands of northeast India. I noticed that since the last time we'd spoken the phone sound was different, and your comment perfectly explains what the difference was.

    Also, someone in the pop field whose work I really like is Daniel Lanois. He produced U2's The Joshua Tree and Dylan's Oh Mercy. Rather than trying to make music sound as perfect as possible, he uses the distortions inherent in recording and compression as ways of enhancing the soundscape he gets. Letting an obstacle become a stepping stone. He's a Cajun, and all of his stuff somehow reminds me of New Orleans, where there's a blend of grit and glitter.
