For centuries, composers have sought to capture the essence of the natural world through music. From Antonio Vivaldi’s “The Four Seasons” and Gustav Holst’s “The Planets” to Sergei Prokofiev’s “Peter and the Wolf,” music has long served as a bridge between nature and human expression. However, a groundbreaking approach by researchers at the University of Kentucky is taking this concept to a cellular level, translating the intricate workings of the human body into sound through a process known as data sonification.
Data sonification involves converting precise data points, such as gene sequences or barometric pressure readings, into musical elements like melody, pitch, rhythm, and timbre. This innovative method is being explored by three researchers at the University of Kentucky, with funding from the Office of the Vice President for Research (OVPR) Igniting Research Collaborations program and the UK International Center’s UKinSpire program.
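To make the idea concrete, here is a minimal sketch of what such a data-to-sound mapping can look like, in Python. Everything in it is illustrative: the toy protein fragment, the pentatonic scale, and the fixed note length are assumptions made for demonstration, not the UK team’s actual pipeline.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100          # samples per second (CD quality)
NOTE_SECONDS = 0.25          # duration of each tone
PENTATONIC = [261.63, 293.66, 329.63, 392.00, 440.00]  # C D E G A, in Hz

def sequence_to_frequencies(sequence: str) -> list[float]:
    """Map each character of a data sequence onto a note of the scale."""
    return [PENTATONIC[ord(ch) % len(PENTATONIC)] for ch in sequence]

def render_wav(freqs: list[float], path: str) -> None:
    """Render the tone sequence as a 16-bit mono WAV file."""
    frames = bytearray()
    for freq in freqs:
        for n in range(int(SAMPLE_RATE * NOTE_SECONDS)):
            amp = 0.5 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(amp * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)      # 2 bytes = 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

if __name__ == "__main__":
    # Hypothetical protein fragment (one-letter amino-acid codes).
    render_wav(sequence_to_frequencies("MKTAYIAKQR"), "sonified.wav")
```

Running the script writes a short WAV file in which each letter of the sequence becomes one tone, so a repeated motif in the data becomes a repeated melodic figure the ear can pick out.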
From Protein Patterns to Musical Notes
The idea of sonifying cellular information originated in the lab of Luke Bradley, Ph.D., acting chair of the College of Medicine’s Department of Neuroscience. Bradley’s curiosity was piqued while working on protein sequences and observing patterns in amino acids. He wondered if these patterns could be translated into sound.
During a faculty mixer, Bradley shared his thoughts with Michael Baker, Ph.D., a professor of music theory and composition in the College of Fine Arts’ School of Music. Baker, intrigued by the concept, enlisted the help of his colleague Timothy Moyers, Ph.D., an associate professor of music theory and composition and an electronic music composer.
The Symphony of Cells
Bradley likens the cell to a symphony, where harmony signifies health, and dissonance indicates disease. “If you have one group that might have something off in tuning, it doesn’t sound good,” he explained. This dissonance serves as a data point indicating potential cellular anomalies.
Moyers emphasized the value of data sonification, noting that auditory perception can reveal subtle variations that visual observation might miss. “Our ears are much more sensitive to subtle changes than our eyes,” he said, citing his own colorblindness as an example. “In a sample, like recorded audio, there’s 44,000 samples per second. If one of those is out of place, we’re going to hear that.”
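Moyers’ point can be checked numerically. The sketch below is a constructed example rather than anything from the researchers: it generates one second of a 440 Hz sine wave at the CD-standard rate of 44,100 samples per second, then forces a single sample out of place. The resulting sample-to-sample jump is more than an order of magnitude larger than anything in the clean signal, which is the kind of discontinuity the ear hears as a click.

```python
import math

SAMPLE_RATE = 44100  # the CD-standard rate Moyers is rounding to
signal = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)
          for n in range(SAMPLE_RATE)]  # one second of a pure 440 Hz tone

def largest_jump(samples: list[float]) -> float:
    """Largest absolute difference between neighboring samples."""
    return max(abs(a - b) for a, b in zip(samples, samples[1:]))

clean = largest_jump(signal)
signal[5000] = 1.0  # force a single sample out of place
glitched = largest_jump(signal)

print(f"largest jump, clean signal:   {clean:.4f}")     # ~0.063
print(f"largest jump, one bad sample: {glitched:.4f}")  # ~1.7, an audible click
```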
Collaborating with Cambridge and Beyond
The researchers have extended their collaboration to include Cambridge University, where visual models of cancer genomics are being developed. These models allow cancer types to be identified from visual cues. The University of Kentucky team has created a parallel research engine that sonifies different types of cancer, producing distinct sounds for each.
Baker provided an analogy: “Imagine a group of trumpet players all playing the same note. If some use mutes, which alter the tone, it represents mutations. Our research engine can distinguish not only how many muted trumpets there are but also what kind of mutes are being used.”
“We’re quickly able to take an existing pure sine wave sound, hear the wrinkles that get added to this based on the distortions in the sound, correlated to the data points, and be able to tell exactly what kind of cancer is involved,” Baker said.
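Although the team’s research engine itself is not public, the kind of timbral encoding Baker describes can be sketched in a few lines. Every detail below is assumed for illustration: a 220 Hz “pure” fundamental stands in for the clean signal, and made-up mutation counts drive the strength of individual overtones, the “wrinkles” that make one profile sound different from another.

```python
import math

SAMPLE_RATE = 44100
BASE_FREQ = 220.0  # the "pure sine wave" every profile starts from

def sonify_profile(mutation_counts: list[int], seconds: float = 1.0) -> list[float]:
    """Render a tone whose overtone mix encodes a (made-up) mutation profile.

    Count k drives the loudness of harmonic k + 2, so two different
    profiles produce audibly different timbres over the same fundamental.
    """
    total = sum(mutation_counts) or 1
    samples = []
    for n in range(int(SAMPLE_RATE * seconds)):
        t = n / SAMPLE_RATE
        value = math.sin(2 * math.pi * BASE_FREQ * t)  # the pure fundamental
        for k, count in enumerate(mutation_counts):
            value += (count / total) * math.sin(2 * math.pi * BASE_FREQ * (k + 2) * t)
        samples.append(value)
    return samples

# Two hypothetical mutation profiles: same pitch, clearly different timbre.
bright_profile = sonify_profile([8, 1, 0])  # energy mostly in the 2nd harmonic
reedy_profile = sonify_profile([0, 2, 9])   # energy mostly in the 4th harmonic
```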
Educational and Artistic Applications
Beyond diagnostics, data sonification offers educational benefits. Bradley noted that it began as a tool to teach students about mutations. “We were using this to illustrate to students—high school and middle school students—what a mutation is. How does it affect the symphony of the cell?”
Baker believes that data sonification enhances teaching by encouraging inquiry. “Good teaching gives ways to ask much more interesting questions,” he said, suggesting that students could use sonification to explore hypothetical scenarios.
Moyers sees artistic potential in data sonification. “Music and art are all about patterns and how we react to those patterns,” he said. He is exploring ways to incorporate data-derived audio into his performances and compositions.
Beyond Earth: Astronomical Applications
Data sonification is not limited to biology. Astronomers have applied it to study planetary orbits, leading to remarkable discoveries. In 2017, the TRAPPIST-1 system, a red dwarf star with seven orbiting planets, was found to fit into an Earth-like harmony based on the planets’ orbital periods.
“This one fellow that studies sonification of exoplanetary systems found that the planets in TRAPPIST-1 fit into an exact Earth-like harmony in terms of their rotational speeds,” Baker said.
The tones produced by these data sets create an overtone series, akin to the way combined colors form white light. Through data sonification, the TRAPPIST-1 system “vibrated” at a specific pitch similar to that of our own solar system.
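The arithmetic behind that consonance is easy to reproduce. The sketch below uses approximate published orbital periods for the seven TRAPPIST-1 planets and shifts every orbital frequency up by the same number of octaves, which preserves the ratios between them: neighboring planets land close to simple musical intervals such as the perfect fifth (3:2) and perfect fourth (4:3), which is why the sonified system sounds harmonious.

```python
import math

# Approximate orbital periods of the TRAPPIST-1 planets, in Earth days
# (values rounded from published measurements).
PERIOD_DAYS = [("b", 1.51), ("c", 2.42), ("d", 4.05), ("e", 6.10),
               ("f", 9.21), ("g", 12.35), ("h", 18.77)]

# One cycle per orbit, expressed in hertz.
raw_hz = [(name, 1.0 / (days * 86400)) for name, days in PERIOD_DAYS]

# Raise every frequency by the SAME number of octaves so the ratios
# between planets are preserved while the tones become audible.
octaves = math.ceil(math.log2(110.0 / min(f for _, f in raw_hz)))
pitches = [(name, f * 2 ** octaves) for name, f in raw_hz]

for (inner, f1), (outer, f2) in zip(pitches, pitches[1:]):
    print(f"{inner}:{outer} frequency ratio = {f1 / f2:.3f}")
# d:e, e:f, and g:h come out near 1.5 (a perfect fifth);
# f:g comes out near 1.333 (a perfect fourth).
```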
What began as a speculative question at a faculty mixer has evolved into a promising diagnostic and educational tool. As data sonification continues to develop, its future applications and discoveries are eagerly anticipated.
“There’s still a lot of work to be done and a lot of different pathways to pave in that way,” Moyers said.