Sound research at Acoustical Society meeting

Published: Monday, May 9, 2011 - 10:05 in Mathematics & Economics

The latest news and discoveries from the science of sound will be featured at the 161st meeting of the Acoustical Society of America (ASA) held May 23-27, 2011, at the Sheraton Seattle Hotel in Seattle, Wash. During the meeting, the world's foremost experts in acoustics will present research spanning a diverse array of disciplines, including medicine, music, psychology, engineering, speech communication, noise control, and marine biology. Journalists are invited to attend the meeting free of charge. Registration information can be found at the end of this release. Lay language versions of nearly 50 presentations will be available at ASA's World Wide Press Room approximately one week before the meeting. The following summaries are highlights of the meeting's many interesting talks.


Highlights: Monday, May 23

Noisy Classrooms Most Challenging to Youngest Students

Noisy classrooms aren't just bad for harried teachers' nerves; they can significantly affect the ability of students to listen and learn. Researchers at the Boys Town National Research Hospital in Omaha, Nebraska, have built a unique simulated classroom to help measure the scope of those effects and how they can be avoided. The model classroom – consisting of a desk at which test subjects are seated surrounded by an array of five LCD monitors and loudspeakers – was devised by architectural acoustician Daniel Valente and audiology researcher Dawna Lewis of the Boys Town Listening and Learning Lab. In a recent study, the researchers tested young and older elementary students as well as adults in the classroom. Although increasing levels of classroom noise and reverberation reduced the comprehension of all subjects, the youngest students – 8-year-olds – were the most adversely affected. "The combination of the difficult task as well as increased background noise and reverberation led to the younger children having a harder time following the story," Valente said. The results, he added, illustrate the importance of designing classrooms that reduce reverberation and ambient noise, and suggest that the standard practice of testing children in a sound booth with a single loudspeaker "may not be sufficient to identify problems students may be having in real classrooms with multiple talker locations, quick-changing talkers, and the interaction between background noise and the acoustical environment." The presentation 1pAAs9, "Effects of excessive noise and reverberation on listening and learning in a simulated classroom," is in the afternoon session on Monday, May 23. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa128.html

Attention to Speech in Deaf Infants with Cochlear Implants

Cochlear implants can allow profoundly deaf infants to hear speech – giving them the chance to eventually learn spoken language. However, a new study shows that the children receiving the implants don't automatically know how to listen when people speak to them. In the study, cognitive psychologist Derek M. Houston of Indiana University measured attention to speech in infants with cochlear implants and in normal-hearing babies by tracking how much time the babies looked at a checkerboard pattern on a TV monitor. "It has been well-established that infants will look longer at a simple display – the checkerboard pattern – when hearing something they are interested in," he explains, "so I measured their looking time at the pattern when it was paired with a repeating speech sound, and compared that to the looking time at the same pattern with no sound." Although there was large variation in the attentiveness of individual deaf babies to the sound, in general, these babies "did not attend to speech as much as their normal-hearing counterparts," says Houston. Furthermore, two years after implantation, children who were less attentive to speech early on performed more poorly on a word recognition task. This insight should help guide speech-language pathologists working with children who have cochlear implants. The presentation 1pPP5, "Deaf infants' attention to speech after cochlear implantation," will be Monday afternoon, May 23, in Grand Ballroom C. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa220.html

Homosexual/Heterosexual Speech: The Power of (Less than) a Single Word

It is not uncommon for us to draw knee-jerk conclusions about people based on how they speak. "This is a phenomenon that occurs every day," says study leader Erik C. Tracy, a cognitive psychologist at Ohio State University. In a series of experiments, Tracy and colleague Nicholas P. Satariano had seven gay and seven heterosexual males record a list of monosyllabic words, such as "mass," "food," and "sell." Listeners were then asked to identify the sexual orientation of the speakers when played those entire words, the first two letter sounds (for example, "ma"), or just the first letter sound ("m"). Although they couldn't accurately guess the sexual orientation of the speaker with just the first letter sound, "when presented with the first two letter sounds, listeners were 75 percent accurate," says Tracy. "We believe that listeners are using the acoustic information contained in vowels to make this sexual orientation decision," he says. "Other researchers have done various acoustic analyses to understand why gay and heterosexual men produce vowels differently. Whatever this difference is, it seems that listeners are using it to make this sexual orientation decision." The presentation 1pSC19, "Differentiating between gay and heterosexual male speech," will be Monday afternoon, May 23 in Metropolitan room B. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa257.html

Thunder and Lightning on Titan

The clouds in Titan's atmosphere produce rain (liquid methane, not water). One lingering question, though, is whether those rain clouds also produce lightning. One of lightning's direct and unambiguous signatures, thunder, "may help corroborate the existence of electrical discharges on Titan, in tandem with the usual electromagnetic sensors," says physicist Andi Petculescu of the University of Louisiana at Lafayette. In two new studies, Petculescu and his undergraduate physics students Peter Achi and Christopher Hill discuss computer simulations of the physical mechanisms through which a Titanian lightning discharge generates a shock wave – that is, thunder. Among other effects, the models predict that Titanian thunder would have frequencies that range from a high of roughly 100 Hz down to inaudible frequencies below 20 Hz, known as infrasound. The detection of thunder, Petculescu says, "will help corroborate and quantify lightning on Titan beyond the shadow of a doubt, which will be a very important step in inferring Titan's atmospheric electrochemistry." In addition, he adds, the discovery of lightning could inform hypotheses suggesting that complex pre-biotic and biotic molecules – that is, the precursors to life – can emerge out of chemical reactions "induced by the strong deposition of charge in a 'primordial pond.'" The presentations, 1pPA8, "Modeling thunder propagation and detectability on Titan," by Peter Achi and Andi Petculescu, and 1pPA9, "Gas-dynamic modeling of strong shock wave generation from lightning in Titan's troposphere," by Christopher S. Hill and Andi Petculescu, will be on Monday afternoon, May 23 in Willow room A. Abstracts: http://asa.aip.org/web2/asa/abstracts/search.may11/asa212.html http://asa.aip.org/web2/asa/abstracts/search.may11/asa213.html

The Effects of Long-Term Noise Increases on Fish

There are two different kinds of man-made noises in aquatic environments. One is very loud and transient – the product of activities such as pile driving or seismic exploration. The other is less intense but relentless, say, from shipping or offshore wind farms. "These sounds increase the general noise level over a very large area – for example, over a whole harbor – and the animals there cannot escape," explains neuroscientist Arthur Popper, a professor of biology at the University of Maryland, College Park. Most studies have focused on the effects of the first type of sound – the intermittent booms and blasts that can reach levels as high as 180 decibels and cause hearing loss and other physiological problems in animals. But even the less intense noises, which can increase background noise by 10 decibels or less, may have profound effects on the health and well-being of animals, Popper says. The stress of constant noise, for example, can cause changes in the hormone levels of fish. "Moreover," he says, "the increase in background noise will 'mask' the ability of animals to hear sounds in their environment that may be important." For instance, higher levels of background noise may prevent fish from hearing the sounds of their prey – or their predators – or hamper their ability to communicate with their own kind. The presentation 1pABa6, "The implications of long-term increases of anthropogenic noise on fish," is in the afternoon session on Monday, May 23 in the Issaquah room. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa141.html
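The decibel figures above follow the standard logarithmic relation between level and acoustic intensity; the short sketch below (an illustration of that textbook relation, not part of the presentation) shows why even a "modest" 10-decibel rise is a large physical change.

```python
def intensity_ratio(delta_db: float) -> float:
    """Intensity ratio corresponding to a level difference in decibels.

    Sound levels are logarithmic: a difference of delta_db decibels
    corresponds to an intensity ratio of 10^(delta_db / 10).
    """
    return 10 ** (delta_db / 10)

# A chronic 10 dB rise in background noise means 10x the acoustic intensity.
print(intensity_ratio(10))  # 10.0

# A transient 180 dB source is a million times more intense than a
# 120 dB background (hypothetical comparison level for illustration).
print(intensity_ratio(180 - 120))  # 1000000.0
```

The same relation explains why a 10-decibel increase is far from negligible even though it sounds like a small number.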


Highlights: Tuesday, May 24

Weather Report: Cloudy with a Chance of Loud

According to a team of acoustics researchers led by Nick Ovenden at University College London, U.K., changes in weather significantly alter the way sound travels. The effect can become so pronounced that areas with normally modest highway noise can suddenly be engulfed in a din of traffic sounds, potentially exceeding the U.S. Federal Highway Administration's noise abatement criteria maximum of 67 decibels. The reason is the well-known mechanism of refraction, in which sound traveling upwind or downwind behaves very much like light passing through a prism. Sound traveling upwind tends to be bent, or refracted, toward the sky; sound traveling downwind is refracted toward the ground. Similar effects happen as sound passes between zones of different temperatures, where colder air lying between the ground and a layer of warmer air traps the sound close to the ground; such temperature conditions often occur around dawn and dusk. Even the presence of sound barriers may not compensate for this effect. The team's research suggests that highway sound may, under certain weather conditions, travel upward steeply before being directed back toward the ground, overcoming barriers even eight feet high and producing high noise exposure at distances nearly a half mile from the freeway. Talk 2pNS3, "Investigations of environmental and terrain effects on the propagation of freeway noise," is in the afternoon session on Tuesday, May 24 in Willow room B. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa500.html
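The refraction mechanism described above can be sketched with the usual effective-sound-speed picture: rays bend toward regions of lower effective speed, so when the wind component along the ray (or the temperature) increases with height, downwind sound is bent back toward the ground. A minimal illustration with made-up wind and temperature values (not the UCL team's model):

```python
def sound_speed(temp_c: float) -> float:
    """Thermodynamic speed of sound in air (m/s) at a given temperature (C)."""
    return 331.3 * (1 + temp_c / 273.15) ** 0.5

def effective_speed(temp_c: float, wind_along_ray: float) -> float:
    """Effective speed seen by a sound ray: thermodynamic speed plus the
    wind component along the propagation direction (positive = downwind)."""
    return sound_speed(temp_c) + wind_along_ray

# Downwind case: wind is typically stronger aloft, so effective speed grows
# with height and rays refract back toward the ground (enhanced noise).
ground = effective_speed(15.0, 2.0)  # light wind near the ground
aloft = effective_speed(15.0, 8.0)   # stronger wind higher up
print("downwind sound refracts toward the ground:", aloft > ground)

# Temperature inversion at dawn/dusk: warm air above cold air has the same
# effect even with no wind at all.
print("inversion traps sound near the ground:",
      effective_speed(20.0, 0.0) > effective_speed(10.0, 0.0))
```

In both cases the higher layer is "faster," which is exactly the condition that bends rays downward and lets highway noise overtop barriers.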

Intoxicated Speech Reveals Some Surprises

After a few drinks, you may find it easier to strike up a conversation, but that doesn't necessarily mean intoxicated speech is sloppy or even easy to detect. Researcher Abby Kaplan in the linguistics department at the University of Utah made those unexpected findings while trying to determine if folks who had a few too many tended to slip in easier-to-pronounce sounds for those that were harder to say. Her results, however, revealed that intoxicated speakers could, up to a point, produce proper pronunciation with aplomb. Test subjects were given servings of vodka and orange juice until their blood alcohol content reached between 0.1 and 0.12 percent; this is slightly above the legal driving limit of 0.08 percent. She then had them read words that contained the consonants "p" and "b" between vowels, as in the words "epic" and "cabin." Favored linguistic models predict that this produces a sort of oral gymnastics, forcing the speaker to break stride in mid word and interject an abrupt sound in the midst of a more flowing sound. Kaplan's assumption was that this and other easy-for-hard substitutions would become more and more prevalent when the speaker was intoxicated. Not so, according to her study. In fact, some assumed "easy" sounds morphed into sounds that linguists assumed required more effort to pronounce accurately. Presentation 2aSC33, "Compression of the acoustic space in intoxicated speech" is in the morning session on Tuesday, May 24 in Metropolitan room B. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa414.html

Bone Conduction May Herald Ultrasonic Hearing

Human hearing is remarkable in its subtle ability to distinguish a wide range of sounds across a wide range of frequencies. Recent research suggests, however, that humans naturally tap into the body's skeletal structure to extend hearing well into the ultrasonic range. Experiments conducted by Dr. Michael Qin and his colleagues at the Naval Submarine Medical Research Laboratory (NSMRL) systematically documented that human divers can detect underwater sounds up to 100 kHz, well above the accepted range of normal hearing (approximately 20 Hz to 20 kHz). According to Qin, "Although human bone-conducted hearing at ultra-high frequencies has been documented, as yet there is no agreement on the underlying mechanism or mechanisms that make it audible." The NSMRL researchers investigated potential underlying mechanisms for this phenomenon. One theory is that human beings don't hear the ultra-high-frequency sound itself directly: according to the transmission-path demodulation theory, the various tissues that the sound passes through in the head shift the ultra-high frequencies down into the audible frequency range. An alternate theory is that, although it is not its normal response, the human auditory system is able to respond to ultra-high-frequency sound via direct inner-hair-cell stimulation. This work is intended to link underwater hearing with bone conduction hearing to determine if they share the same underlying mechanism. Presentation 2pPP3, "Human underwater bone conduction hearing in the sonic and ultrasonic range," is in the afternoon session on Tuesday, May 24 in Metropolitan room B. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa527.html
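The transmission-path demodulation idea rests on a generic property of nonlinear media: passing high-frequency tones through even a simple quadratic nonlinearity creates a new component at their difference frequency, which can fall in the audible range. The sketch below is a toy numerical demonstration of that general effect, with arbitrary tone frequencies, not the NSMRL model itself:

```python
import math

F1, F2 = 40_000.0, 41_000.0  # two ultrasonic tones (Hz); arbitrary example values
FS = 200_000.0               # sampling rate, well above every component of interest
N = 2000                     # 10 ms of signal -> 100 Hz frequency resolution

def component_magnitude(signal, freq):
    """Magnitude of the frequency component at `freq`, via direct correlation."""
    re = sum(s * math.cos(2 * math.pi * freq * n / FS) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / FS) for n, s in enumerate(signal))
    return math.hypot(re, im) / N

# A purely linear path carries only the two inaudible ultrasonic tones...
linear = [math.cos(2 * math.pi * F1 * n / FS) + math.cos(2 * math.pi * F2 * n / FS)
          for n in range(N)]
# ...but a quadratic nonlinearity (the simplest distortion) demodulates them.
distorted = [s * s for s in linear]

# Only the distorted path contains an audible tone at F2 - F1 = 1 kHz.
print(component_magnitude(distorted, F2 - F1) >
      100 * component_magnitude(linear, F2 - F1))  # True
```

Whether head tissues actually act as such a nonlinear path is exactly the open question the NSMRL work addresses.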


Highlights: Wednesday, May 25

Exploring Caves with Sound

Soldiers or emergency rescuers entering a cave will want to know the geometry of that cave. Farther along, does the cave widen into a larger chamber or narrow into nothingness? Are there obstructions or holes along the way? David L. Bowen of Acentech Inc. describes a detector his firm has developed that sends planar sound waves into a cave and monitors the reflected return waves with a pair of sensors. By untangling the return times and patterns of the reflected waves, the cave's geometry can be computed. The sound source can consist of a subwoofer-type loudspeaker emitting a steady stream of waves or, as used by soldiers in the field, the impulsive sound from a fired gun. A rugged laptop can quickly perform the necessary analysis. A rudimentary prototype model has been tested, but Bowen believes that two more years will be needed before the system can be fully deployed. Presentation 3aEAa2, "Development of a portable system for acoustical reconstruction of tunnel and cave geometries," is in the morning session on Wednesday, May 25 in the Diamond room. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa647.html
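Untangling return times rests on the basic echo-ranging relation: a reflection arriving t seconds after the outgoing pulse implies a surface half the round-trip distance away. A minimal sketch of that principle (an illustration only, not Acentech's reconstruction algorithm, with made-up delay values):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def reflector_distance(echo_delay_s: float) -> float:
    """One-way distance to a reflecting surface, from the round-trip echo delay.

    The pulse travels out and back, so the one-way distance is
    speed * delay / 2.
    """
    return SPEED_OF_SOUND * echo_delay_s / 2

# Echoes arriving 0.05 s and 0.20 s after the impulse imply surfaces at:
print(reflector_distance(0.05))  # 8.575 (meters)
print(reflector_distance(0.20))  # 34.3 (meters)
```

A real system must also disentangle overlapping echoes and multiple reflections, which is where the two-sensor array and the pattern analysis described above come in.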

Evolution of Piano Wire

Today's pianos are much different from those of Mozart's time. One of the things that changed along the way, together with piano design, was the availability of stronger piano wire. Purdue scientist Nick Giordano does not make piano wire, but he has measured the musical tones coming from four pianos built during the period from 1815 to 1912. Generally, piano designers compromise "between increasing the tension of the wires (which requires music wire of greater tensile strength or a larger diameter) and achieving a good tone quality (which requires a small string diameter)," said Giordano. Music wire improved during the period 1750-1820 with the advent of iron-wire technology and then again during 1840-1850 with the coming of steel wire, much stronger than iron wire. Earlier still, in Bach's time, harpsichords were the main wired keyboard instruments, Giordano said. "Bach only played a piano a few times, very late in his life, and never composed for the instrument." But harpsichords play at only a single level of volume. That's why pianos were invented – to give performers more dynamic range. Pianos in Mozart's time provided this range. But even Mozart's pianos had only 61 notes (five octaves); Beethoven's piano had six octaves. By Brahms's time the modern piano had seven octaves, with 88 notes. Piano design hasn't changed much during the past century. But Giordano believes there is room for improvement. He has been experimenting with piano wire made of carbon fiber in an effort to find wires of even greater strength but with a narrower diameter. Presentation 3aMU6, "Evolution of music wire and its impact on the development of the piano," is in the morning session on Wednesday, May 25 in the Aspen room. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa670.html
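The tension/diameter compromise Giordano describes follows from the ideal-string relation f = (1/2L)·sqrt(T/μ): for a fixed pitch and string length, the required tension grows with the square of the wire diameter, which is why thicker, louder strings demanded ever-stronger wire. A quick numerical check (the steel density is an approximation and the string dimensions are illustrative, not taken from the talk):

```python
import math

RHO_STEEL = 7850.0  # kg/m^3, approximate density of steel

def required_tension(freq_hz: float, length_m: float, diameter_m: float) -> float:
    """Tension (N) needed for an ideal string to sound at freq_hz.

    From f = (1 / 2L) * sqrt(T / mu), with linear mass density
    mu = rho * pi * d^2 / 4, solved for T.
    """
    mu = RHO_STEEL * math.pi * diameter_m ** 2 / 4  # mass per unit length
    return (2 * length_m * freq_hz) ** 2 * mu

# Same pitch (A4 = 440 Hz) on a 0.4 m string: doubling the diameter
# quadruples the required tension.
t_thin = required_tension(440.0, 0.4, 1.0e-3)
t_thick = required_tension(440.0, 0.4, 2.0e-3)
print(round(t_thick / t_thin, 6))  # 4.0
```

This quadratic scaling is what makes high-tensile steel (and, in Giordano's experiments, carbon fiber) attractive: strength lets designers keep the diameter small while still raising the tension.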


Highlights: Thursday, May 26

The Sounds of San Francisco

Taking a city tour is a popular tourist activity. Some tours show off a city's architecture, or its history, or literary sites. Dennis A. Paoletti has devised an acoustic tour of San Francisco, one that concentrates on sounds. During a meeting of architects, he led a tour around town that concentrated on experiencing famous San Francisco sounds, such as cable cars, fog horns, electrified street cars, and even the famous U.S. Air Force precision flying team, the Blue Angels, streaking along the San Francisco Bay. The tour covered some of the major parts of the city that a tourist would normally see, but also visited some buildings for which Paoletti had acted as an acoustic consultant. Neutralizing unwanted noise, measuring the effectiveness of double-hung windows, and listening for the absorbent or reflective acoustic qualities of various building materials were some of the things discussed along the way. Other topics on the tour included: potential hearing damage caused by industrial and entertainment noise and by personal listening devices, the hum of hybrid vehicles and personal transport vehicles, concert hall design, restaurant noise, and sound masking systems in open plan offices. Presentation 4pAAa1, "Soundscape walking tour," is in the first afternoon session on Thursday, May 26 in Grand Ballroom B. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa958.html

Rude Noises

New low-flow toilets save a lot of water but are noisier than the older models. Noral D. Stewart of Stewart Acoustical Consultants reports on a recent problem in an office, where the installation of the new toilets increased nearby sound levels from 30 decibels to about 40 decibels. Stewart reduced the noise by improving the surrounding walls. For comparison, the sound level in an open-plan cubicle office is about 48 decibels and the level for a normal conversation is about 60 decibels. Presentation 4pAAb4, "An experience reducing toilet flushing noise reaching adjacent offices," is in the second afternoon session on Thursday, May 26 in Grand Ballroom B. Other papers in this session address other plumbing issues, such as restrooms in hospitals, the routing of pipes through wooden structures, and the sound of water pumps in high-rise condominiums. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa965.html


Highlights: Friday, May 27

New Cochlear Implant Approach to Improve Melody Perception

For people with hearing problems, a cochlear implant can transform their world. The surgical implants perform very well rendering spoken language. Melody perception, however, remains a challenge. But a new system that adapts cellphone sound processing appears to bring cochlear implant technology closer to offering the best of both acoustical worlds: speech and music. Compared with the conventional cochlear-implant sound sampling strategy, the new scheme significantly improved melody perception. In a test of nine subjects wearing cochlear implants who were asked to identify 10 melodies, results showed improvements of 10 percent or more. Notes lead researcher Fan-Gang Zeng, Ph.D., research director of the Hearing and Speech Lab at the University of California, Irvine: "One potential application of this scheme is to one day integrate cochlear implants with smartphones so that future users can not only get better performance, but also seamless communication. Imagine one device that helps you hear and connects all." In the current cochlear implant pitch-encoding schemes for rendering melody, the original sound signals are significantly altered. This produces potentially detrimental effects on speech perception, which means improvement in hearing music comes at a cost to hearing speech. To overcome this, the new approach takes advantage of spectral constancy. It is achieved by preserving the spatial position voiced sounds occupy in a given timeframe, while altering the timeframe of pitch cycles. This minimizes distortion of the sound signals of both speech and music. Presentation 5aPP13, "Using spectral constancy to encode temporal pitch and improve cochlear implant melody perception," is on Friday morning, May 27 in Grand Ballroom C. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa1190.html

Video Game and Mock Alien Language for Learning

How do babies decode all the spoken sounds they hear to learn words and their meanings? "Alien" language may provide a clue. A team from Carnegie Mellon University in Pittsburgh and Sweden's Stockholm University designed a video game narrated in what amounts to alien language due to deliberately distorted acoustics. The soundtrack is unintelligible in any language and in the study was the only source of instruction for 49 adult players. Yet with just two hours of play, they could reliably extract word-length sound categories from continuous alien sounds and apply that learning to advance through the game. Results suggest this approach is a promising new way to explore language learning. Notes Francisco Lacerda, Ph.D., of Stockholm University, a specialist in language acquisition: "This is a wonderful opportunity to approximate the task facing infants by creating a setting where adults are forced to infer what the meaning of different sound elements might be, and to do it in a functional way." Their results have broad implications. For example, identifying functional sound units in language is a problem in dyslexia, so this work may one day have clinical applications. Intriguingly, results suggest the video game and its alien soundtrack may engage different areas of the brain in rapid and robust learning. The next step is to investigate this by observing players with functional magnetic resonance imaging (fMRI) to view their real-time brain reactions to the video game. Presentation 5aSC26, "Learning acoustically complex word-like units within a video game training paradigm," is on Friday morning, May 27 in Metropolitan room B. Abstract: http://asa.aip.org/web2/asa/abstracts/search.may11/asa1218.html

Code Loud: Noise in Emergency Rooms and Nursing Homes

Noise annoys; we all know it. But the impact of acoustics on caregivers in emergency rooms (ERs) and nursing homes has been little studied. New findings from several investigations into noise effects on healthcare now demonstrate that ERs have background noise levels loud enough to impair listeners' ability to understand speech, according to work by a Georgia Institute of Technology team at two Atlanta hospitals. ERs have about twice the background noise level – approximately 60 decibels – of a well-designed university lecture hall, which is engineered acoustically so the lecturer's voice projects over background noise to be understood by students. Nursing homes also suffer from elevated noise levels that contribute to the stress of staff and residents, according to a Vancouver, British Columbia, pilot study, the first to perform physiological monitoring of stress by measuring cortisol levels, and the only one that relates this stress biomarker to acoustical descriptors. Notes Murray Hodgson, Ph.D., of the University of British Columbia, whose report focuses on nursing homes: "Acoustics deserves more attention in design because it has a lot of impact on our lives, and on our perceptions of places where we live, work, or socialize." Presentations 5aAA6, "Measuring the effects of acoustical environments on nurses in healthcare facilities," and 5aAA5, "Effects of noise on emergency department staff," will be on Friday morning, May 27 in Grand Ballroom B. Abstracts: http://asa.aip.org/web2/asa/abstracts/search.may11/asa1096.html http://asa.aip.org/web2/asa/abstracts/search.may11/asa1095.html

Source: American Institute of Physics
