Exploring Sonification: Representing Data with Sound

Cheryl Fogle-Hatch, independent professional


The Georgia Tech Sonification Lab defines sonification as representing data with nonspeech audio. Studies show that sonification, combined with visual data displays, increases accuracy for people with normal vision. Additionally, sonification facilitates access for people who are blind. There is a growing community of researchers, scientists, and educators developing software to create sonification. With the exception of the Harvard-Smithsonian Center for Astrophysics, sonification has not been widely explored in museum contexts. This paper gives examples of sonification. Software can be developed using programming languages such as Python, and sonification can be produced on websites using the SAS Graphics Accelerator or the IMAGE browser extension from McGill University. The aim is to encourage people to include data sonification in addition to visual displays of data. This would provide multisensory opportunities, and it would increase access for people who are blind.

Examples of sonification projects include Accessible Oceans, hosted by the Woods Hole Oceanographic Institution; the Chandra Photo Album Sonification Collection, hosted by the Harvard-Smithsonian Center for Astrophysics; the Data Sonification Archive, a searchable database of sonification projects that present data from many scientific fields; Explore – From Space to Sound, a collection of images sonified by NASA; and the Georgia Tech Sonification Lab, an interdisciplinary research group based in the School of Psychology and the School of Interactive Computing at Georgia Tech. Methods to produce sonification include StarSound, a standalone software tool for the sonification of multidimensional datasets (© 2014 by Jeffrey Hannam, sound designer).

The Georgia Tech Sonification Lab defines sonification as representing data with nonspeech audio. This communication technique creates an alternate way to engage with complex visual data for people who may not feel confident in their ability to interpret complex charts and graphs (Sawe et al., 2020). Sonification also enables access for people who are blind or have low vision (Levy and Lahav, 2011; Noel-Storr and Willebrands, 2022; Zanella et al., 2022). When the Museum of Science added sonification to prototypes of digital interactives to make them accessible for visitors who are blind, its evaluators reported that half of the sighted visitors said the audio helped them understand the exhibits (Malandain et al., 2020).

There is a growing community of scientists and educators using sonification for research and science communication. Lenzi et al. (2020) curate the Data Sonification Archive with links to data sonification on a diverse range of topics: Covid-19 statistics and other medical data; climate change and extreme weather events; astronomical data; and a quantum-driven robotic swarm! Another notable sonification project is Accessible Oceans (https://accessibleoceans.whoi.edu/) hosted by the Woods Hole Oceanographic Institution.

Many sonification projects are in the field of astronomy because telescope observations produce large datasets that are multidimensional in time and space. Sonification has “the potential to enhance scientific discovery within complex datasets, by utilizing the inherent multidimensionality of sound and the ability of our hearing to filter signals from noise” (Zanella et al., 2022:1). The United Nations Office for Outer Space Affairs (UNOOSA, 2022) recorded video presentations by leaders of several sonification projects in astronomy.

This paper begins by describing the advantages of sonification: the human sense of hearing is fine-tuned for pattern recognition. The second section discusses the potential of sonification as a storytelling technique, including design considerations for creating sonification in the form of musical compositions or soundscapes. The third section provides a short list of tools that automate the production of sonification, making it more widely available to people who are not musicians or sound designers. Finally, the paper concludes with an all-too-brief discussion of the rare instances where sonification has been used in museum contexts. The intent is to encourage the wider adoption of sonification in museum exhibits. One possibility is combining sonification with text-based audio description, just as visual graphs incorporate text-based labels.

Advantages of Sonification

Sonification is an effective communication method because the auditory system is well suited to pattern detection and trend identification (Walker and Nees, 2011). For example, Loui et al. (2014) found that auditory perception is a more intuitive way to recognize brain waves associated with seizures than looking at visual displays of electroencephalography (EEG) data. Other studies [references] show that sonification, combined with visual data displays, increases accuracy for people with normal vision.

Walker and Nees (2011:12) categorize the use-cases for sonification as: (1) alarms, alerts, and warnings; (2) status, process, and monitoring messages; (3) data exploration; and (4) art and entertainment. A blaring fire alarm is effective because loudness and repetition convey urgency. Medical equipment is an example of sonifying status messages because it produces continuous sound while in operation. The use-cases of sonification classified as data exploration or arts and entertainment demonstrate that the technique can express concepts and feelings that are more complex than the basic pattern recognition required to identify alarms or status messages.

Sonification as Pattern Recognition

Alarms and status messages are examples of sonification concerned with pattern recognition. Advertising jingles are also based on pattern recognition; through repetition, they become associated with a specific product. A specific sound may also be paired with a visual logo: the MGM lion, for example, roars when it appears on screen (Eschner, 2017).

The inventions of radio and recorded sound allowed two broadcasting companies, the BBC and NBC, to create signature chimes that they played on air. The BBC chimes heard at the start of each hour were recorded from famous clocks in London (Parliamentary Archives, 2020): Big Ben rings the note E, and the Westminster quarter bells ring G sharp, F sharp, E, and B. In the United States, three musical notes (G, E, C) were played on NBC radio to conclude news broadcasts from the 1920s until the late 1980s (Twenty Thousand Hertz, 2016).

Sonification as a Storytelling Technique

Sonification can enhance storytelling. Generations of children have learned about the individual instruments of an orchestra by listening to “Peter and the Wolf,” composed by Sergei Prokofiev in 1936 (Fort Collins Symphony, 2021). Peter is represented by strings; other characters are highlighted with woodwinds, brass, and percussion.

Moving from the classical to the contemporary, scientists working with skilled musicians and sound designers use sonification as a storytelling technique. A sonification of the center of the Milky Way galaxy combines data from NASA’s Chandra X-ray Observatory, Hubble Space Telescope, and Spitzer Space Telescope (Chandra Photo Album, 2020). Each telescope reveals different phenomena and is represented by a different instrument: piano for Spitzer, strings for Hubble, and the chimes of a glockenspiel for Chandra. Stars are converted to individual notes, while the intensity of the light controls the volume.

Beyond expressing the wonder of a complicated image, the sonification of the center of the Milky Way demonstrates that sound can be used to explain scientific concepts. This sonification includes changes in pitch that correspond to different wavelength bands in the electromagnetic spectrum: infrared, optical, and X-ray. Low notes represent infrared light. Middle notes are in the optical range of the spectrum that is visible to the human eye. High notes represent X-ray wavelengths. The team of musicians and scientists used conventions of Western music theory to compose a piece that is both aesthetically pleasing and informative once the listener is shown how to interpret the composition.

Design considerations for sonification using music composition or soundscapes

Design choices for sonification used to communicate information should be considered carefully because they affect the listener’s understanding of the data being sonified. In musical contexts, data can be mapped to sonic dimensions such as volume, pitch, tempo, timbre, and location in the stereo field (Sawe et al., 2020; Walker and Nees, 2011). A sound designer may choose acoustic or synthesized sounds and create a sonification in a variety of musical styles.
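The mapping itself can be stated concretely. The sketch below is a minimal Python illustration of parameter mapping (the helper names `map_to_pitch` and `map_to_volume` are hypothetical, not taken from any toolkit cited here): each data value is mapped linearly onto two of those sonic dimensions, pitch and volume.

```python
# Minimal parameter-mapping sketch: a data value becomes a pitch (Hz)
# and a volume (0.0-1.0). Helper names are invented for illustration.

def map_to_pitch(value, vmin, vmax, low_hz=220.0, high_hz=880.0):
    """Linearly map a data value onto a frequency range in Hz."""
    frac = (value - vmin) / (vmax - vmin)
    return low_hz + frac * (high_hz - low_hz)

def map_to_volume(value, vmin, vmax):
    """Linearly map a data value onto an amplitude between 0.0 and 1.0."""
    return (value - vmin) / (vmax - vmin)

data = [3.0, 7.5, 12.0]
lo, hi = min(data), max(data)
events = [(map_to_pitch(v, lo, hi), map_to_volume(v, lo, hi)) for v in data]
# Smallest value: 220 Hz at volume 0.0; largest: 880 Hz at volume 1.0.
```

A real sound designer would layer further choices on top of such a mapping, such as quantizing the pitches to a musical scale or assigning each data series its own timbre.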

Choosing a major or a minor key alters the mood of the musical piece, influencing the way the listener interprets the data being sonified. For example, Sawe et al. (2020) created a sonification of changes in the frequency of tree species in the Alaskan forest over time. They used a D minor scale to express the falling numbers of the yellow cedar, evoking sadness about climate change. An alternative choice that they did not make would have been to represent the rising number of western hemlock trees in a major key, evoking a mood of happiness. However, this would have been an ineffective message because the western hemlock is adapted to warmer temperatures and is moving northward due to global warming, the same phenomenon that is causing yellow cedar numbers to decline.

In one study (Zhang et al., 2022), sound designers composed sonifications using Pro Tools, Logic Pro, or GarageBand. Ambient sounds specific to a project can be added to these compositions. Public domain audio files are available in online databases such as Freesound (https://freesound.org) and BBC Sound Effects (https://sound-effects.bbcrewind.co.uk).

Tools that automate production of sonification

Researchers need to analyze large datasets, and they may not have the time to create the custom-made sonifications that are effective for public outreach. Automated tools are suitable for sonifying large datasets and for running the many queries necessary for data analysis.

Sonification is an effective tool for analyzing scientific data. Audio graphs can be made with rising pitch indicating higher numbers. For example, Astronify is a software package developed by the Space Telescope Science Institute that uses the Python programming language to turn telescope observations into sound. After installing Astronify, users download astronomical data and execute a command to render it as a sonification; they can also display the data visually as a graph (Space Telescope Science Institute, n.d.).
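The audio-graph principle itself needs nothing beyond the Python standard library. The sketch below (the `audio_graph` helper is hypothetical and not part of Astronify) renders a list of numbers as a WAV file, one short sine tone per data point, with higher values mapped to higher pitches.

```python
import math
import struct
import wave

def audio_graph(values, path, rate=22050, note_s=0.25,
                low_hz=220.0, high_hz=880.0):
    """Render a list of numbers as a mono WAV file, one short sine tone
    per data point; higher values get higher pitches."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0              # avoid dividing by zero
    frames = bytearray()
    for v in values:
        freq = low_hz + (v - vmin) / span * (high_hz - low_hz)
        for i in range(int(rate * note_s)):
            sample = int(12000 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)  # 16-bit little-endian PCM
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

audio_graph([1, 4, 2, 9, 5], "graph.wav")  # five tones, peak on the 9
```

Playing the resulting file, a listener hears the shape of the data as a melody: the pitch rises and falls with the values, peaking at the largest number.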

Astronify can be used to detect exoplanets as they orbit stars outside of our solar system. Visually, the light from a star dims when an exoplanet moves between its star and the telescope observing it; astronomers call this a transit. Sonically, Astronify represents time as a constant drone, observations as changes in pitch, and the transit as an interruption to those sounds.
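Astronify's own pipeline is more sophisticated, but the core idea can be shown with a toy light curve (the flux values and the `flux_to_pitch` helper below are invented for illustration and are not Astronify's actual mapping): the steady brightness of the star holds a steady pitch, and the transit dip is heard as a momentary drop.

```python
# Toy transit light curve: relative flux stays at 1.0 while the star is
# unobstructed, then dips as the planet crosses in front of it.
flux = [1.0, 1.0, 1.0, 0.97, 0.94, 0.97, 1.0, 1.0]  # simulated observations

def flux_to_pitch(f, base_hz=440.0, depth_hz=2000.0):
    """Map relative flux to pitch; flux below 1.0 lowers the pitch."""
    return base_hz + (f - 1.0) * depth_hz

pitches = [flux_to_pitch(f) for f in flux]
# The deepest point of the dip marks the middle of the transit.
transit_midpoint = min(range(len(flux)), key=flux.__getitem__)
```

Listening to the pitch sequence, the transit stands out as a brief sag below the drone, which is exactly the cue an astronomer listens for.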

Sonification can be rendered on websites using the SAS Graphics Accelerator or the IMAGE browser extension from McGill University, or data can be loaded into a web-based tool called Highcharts Sonification Studio.

The SAS Graphics Accelerator (https://support.sas.com/software/products/graphics-accelerator/) adds features to data visualizations including text descriptions, tabular data, and interactive sonification. The SAS Graphics Accelerator is an extension for Chrome and works with various data analysis tools developed by SAS (Statistical Analysis System). Using the SAS Graphics Accelerator is not intuitive because it requires prior knowledge of SAS products.

IMAGE is a browser extension that sends a selected graphic to the IMAGE server, where it is rendered as a sonification with spatial audio (McGill University, 2022). The user can navigate the sonification to explore objects in the graphic. IMAGE is a project of the Shared Reality Lab at McGill University, Montreal, Canada.

Highcharts Sonification Studio (https://sonification.highcharts.com/#/) is a partnership between Highcharts and the Sonification Lab at the Georgia Institute of Technology. It is a web-based charting and sonification tool that can be used without having to write code and without prior sonification expertise (Highsoft Association, 2023).

Sonification in Museum Contexts

The examples of sonification described in this paper are part of a “broader research endeavor in which data, sonification and design converge to explore the potential of sound in complementing other modes of representation and broadening the publics of data. With visualization still being one of the prominent forms of data transformation, we believe that sound can both enrich the experience of data and build new publics” (Lenzi et al., 2020). One audience for sonification of data is people who may lack “disciplinary expertise” in a particular scientific field (Woods Hole Oceanographic Institution, n.d.). Another audience is comprised of people who are blind or have low vision. Sonification is a technique that can increase visitor engagement with museum exhibits.

Using Sonification with Audio Description

Sonification can be combined with text-based audio description just as visual graphs incorporate text-based labels. For example, Siu et al. (2022) automatically generated sonifications and audio descriptions for time-series data typically displayed as line graphs on the websites of news media outlets or in other sources of online information. They created an audio data narrative using sonification for trend lines and a synthesized voice for common text labels on graphs, such as time periods (month and year) and rates (10%, 50%, etc.). Sonification gives the listener first-hand knowledge of the data trends, and the text labels provided by audio description are equivalent to the print labels accompanying visual graphs.
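The structure of such an audio data narrative can be sketched as an ordered playback plan that alternates spoken labels with data tones. The sketch below is illustrative only: `build_narrative` and its event format are invented for this paper and are not Siu et al.'s actual system.

```python
# Hypothetical audio-data-narrative builder: interleave synthesized-speech
# labels with tones whose pitch rises linearly with the data value.

def build_narrative(points):
    """points: list of (label, value) pairs, e.g. ("Jan 2020", 10).
    Returns a playback plan alternating spoken labels and data tones."""
    values = [v for _, v in points]
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0
    plan = []
    for label, value in points:
        plan.append(("speak", label))               # text label, spoken aloud
        hz = 220.0 + (value - vmin) / span * 660.0  # data value, sonified
        plan.append(("tone", hz))
    return plan

plan = build_narrative([("Jan 2020", 10), ("Feb 2020", 50)])
```

A playback layer would then hand each "speak" event to a text-to-speech engine and each "tone" event to a synthesizer, giving the listener both the trend and its labels in sequence.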

The Museum of Science designed a prototype of an exhibit using sonification and audio description. The Wind Lab was a computer-based, multisensory interactive that let visitors explore data about the wind energy generated by the turbines on the roof of the building (Malandain et al., 2020). Tracing the line of a graph on a touch screen produced a tone that rose to indicate increased wind speed; pausing activated a verbal announcement of the number displayed at that point (O’Hara, 2014).

Sonification and audio description can be combined when automated tools are used to create the sonification. It may be more difficult to edit musical compositions or soundscapes to include audio description. The Chandra photo album sonification collection includes text-based descriptions immediately below the embedded web player for each sonification.

Most of the sonification examples in this paper were designed by researchers or educators. With the exception of the Harvard-Smithsonian Center for Astrophysics and the Museum of Science, sonification is not common in museum contexts. By default, the sense of vision is the primary mode for presenting information in most museum exhibits because they are designed by people who are sighted. Explaining concepts in an auditory way is a learned skill that many people who are sighted do not have the opportunity to develop. The examples of sonification described in this paper are engaging, and it is hoped that they will encourage exhibit designers to include sonification in their future work. Ideally, data sonification would be integrated into exhibits that rely on visual displays of data. This would provide multisensory opportunities for everyone, and it would increase access for people who are blind.


BBC Sound Effects. Consulted February 7, 2023. https://sound-effects.bbcrewind.co.uk

Chandra Photo Album, (2020). “Sounds From Around The Milky Way”. Last updated November 17, 2020. Consulted February 2, 2023. https://chandra.si.edu/photo/2020/sonify/animations.html

Eschner, K. (2017). “The Story of Hollywood’s Most Famous Lion.” Smithsonian Magazine. Consulted January 18, 2023. https://www.smithsonianmag.com/smart-news/mgms-first-lion-didnt-roar-180962852/

Fort Collins Symphony, (2021). “Peter and the Wolf, a Virtual Education Program – Musical Zoo”. Consulted January 18, 2023. https://fcsymphony.org/fos/peter-and-the-wolf/

Freesound. Consulted February 3, 2023. https://freesound.org/

Highsoft Association (2023) “Highcharts Sonification Studio”. Consulted February 7, 2023. https://sonification.highcharts.com/#/

Lenzi, S., P. Ciuccarelli, H. Liu, and Y. Hua. (2020). Data Sonification Archive. Consulted October 13, 2022. http://www.sonification.design

Levy, S., and O. Lahav (2011). Enabling people who are blind to experience science inquiry learning through sound‐based mediation. Journal of Computer Assisted Learning, 28(6), 499-513.

Loui, P., M. Koplin-Green, M. Frick, and M. Massone. (2014). Rapidly learned identification of epileptic seizures from sonified EEG. Frontiers in Human Neuroscience 8:820. Consulted January 14, 2023. https://www.frontiersin.org/articles/10.3389/fnhum.2014.00820/full

Malandain, B., C. Reich, J. Ghelichi, L. A. Mesiti Caulfield, and J. Tate. (2020). “An Ongoing Experiment: Developing Inclusive Digital Interactives at a Science Museum.” In B. Ziebarth, J. Majewski, R. Marquis, and N. Proctor (eds). Inclusive Digital Interactives: Best Practices + Research. Washington, D.C.: Smithsonian Institution, 57-92. Consulted February 6, 2023. https://access.si.edu/sites/default/files/inclusive-digital-interactives-best-practices-research.pdf

McGill University, (2022). “Image Project”. Consulted February 7, 2023. https://image.a11y.mcgill.ca/

Noel-Storr, J., and M. Willebrands. (2022). Accessibility in astronomy for the visually impaired. Nature Astronomy, 1-3.

O’Hara, E. (2014) “CMME Exhibit at Museum of Science, Boston: Accessible Digital Interactive”, Open Exhibits. Last updated September 30, 2014, Consulted February 5, 2023. https://openexhibits.org/museumofscienceexhibit/

Parliamentary Archives, (2020). “Broadcasting Big Ben”. Consulted January 18, 2023. https://archives.blog.parliament.uk/2020/11/16/broadcasting-big-ben/.

SAS Institute Inc. “SAS Graphics Accelerator”. Consulted February 3, 2023. https://support.sas.com/software/products/graphics-accelerator/

Sawe, N., C. Chafe, and J. Treviño. (2020). “Using Data Sonification to Overcome Science Literacy, Numeracy, and Visualization Barriers in Science Communication”. Frontiers in Communication. Consulted February 6, 2023. https://www.frontiersin.org/articles/10.3389/fcomm.2020.00046/full

Siu, A., G. Kim, S. O’Modhrain, and S. Follmer. (2022). “Supporting Accessible Data Visualization Through Audio Data Narratives.” In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA. 1-19.

Space Telescope Science Institute (n.d.). “Astronify”. Consulted February 2, 2023. https://astronify.readthedocs.io/en/latest/

Twenty Thousand Hertz, (2016). “NBC Chimes”. Consulted January 18, 2023. https://www.20k.org/episodes/nbc

United Nations Office For Outer Space Affairs (UNOOSA), (2022). “Sonification: A Tool For Research, Outreach and Inclusion In Space Sciences”. Consulted November 17, 2022. https://www.unoosa.org/oosa/en/ourwork/space4personswithdisabilites/sonification2022.html

Walker, B., and M. Nees. (2011). “Theory of sonification,” in The Sonification Handbook, T. Hermann, A. Hunt, J. G. Neuhoff, F. Dombois, and G. Eckel (eds). Berlin: Logos Publishing House, 9-39.

Woods Hole Oceanographic Institution (n.d.). “Accessible Oceans”. Consulted July 15, 2022. https://accessibleoceans.whoi.edu/

Zhang, L., J. Shao, A. Liu, L. Jiang, A. Stangl, A. Fourney, M. Ringel Morris, and L. Findlater. (2022). “Exploring Interactive Sound Design for Auditory Websites.” In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA. Consulted February 3, 2023. https://dl.acm.org/doi/fullHtml/10.1145/3491102.3517695

Zanella, A., C. Harrison, S. Lenzi, J. Cooke, P. Damsma, and S. Fleming. (2022). Sonification and sound design for astronomy research, education and public engagement. Nature Astronomy, 1-8.
