Professor Eduardo Reck Miranda has gone on a global journey which has seen him work across five countries.
But it is here in Plymouth that Eduardo has settled and, over a number of years, developed his truly visionary computer music in our Interdisciplinary Centre for Computer Music Research (ICCMR) – combining his passions for sound and composition with science and linguistics in a number of pioneering works. In 2019, Eduardo's opera Lampedusa converted particle collision data from the Large Hadron Collider into music for the University's Peninsula Arts Contemporary Music Festival.
Having originally studied for a degree in computing, Eduardo gave up a good job as a systems analyst for a large corporation in Brazil to kickstart a new career in computer music – then a relatively new field of research – and became one of the world's most enthralling and progressive talents in the field.
In a wide-ranging conversation, Eduardo talks to us about how computer music has developed since Charles Babbage originated the concept of a programmable machine, what motivates his research which links technology to humankind, and why studying at Plymouth is the perfect place to compose the music of the future and create the technology it is played on.
- Composer and Professor in Computer Music – leader of MA Music and ResM Computer Music
- Head of the Interdisciplinary Centre for Computer Music Research
- Expertise includes: composition, Artificial Intelligence and music technology
- Shared the stage with the BBC Concert Orchestra, the BBC Singers, Jarvis Cocker and American beatboxer Butterscotch
In conversation with Professor Eduardo Miranda
Getting the computer bug
Having worked in universities in five different countries, what were the factors which brought you to settle in Plymouth to develop your visionary computer music?
After a stint as a lecturer in music technology at the University of Glasgow, I worked as a research scientist for almost a decade at Sony Computer Science Laboratory in Paris.
I also taught Artificial Intelligence at the American University of Paris and music at Centre de Création Musicale Iannis Xenakis. And I worked in Barcelona and Berlin as well.
I seldom felt at ease working for institutions with well-established track records in areas of my interest. They tend to compartmentalise knowledge with rigid borders, which obstruct creative thinking. It became increasingly clear to me that, in order to develop the work that I wanted to develop, I had to create my own setup from the ground up.
As a fortunate coincidence, in 2002 I learned of a job position here at Plymouth. It looked promising and so I applied.
Luckily I was called for a job interview, where I was given the opportunity to explain my ambition to develop research combining neuroscience, computing and music that nobody else was developing at the time. The rest is history.
Back in Paris, people thought that I was going mad leaving everything behind to start from ground zero in Plymouth. Well, I think I am having the last laugh now.
I found here a fertile and supportive environment to develop my career. Moreover, Plymouth is a pleasant city to live in.
You originally studied a degree in computing and gave up a good job as a systems analyst for a large corporation. What made you realise that music was your passion? And that you wanted to specifically create it with computers?
Although music was prominent in my upbringing, when the time was ripe to enter higher education I still had not made up my mind as to what profession to pursue. I ended up opting for a degree in informatics.
Shortly after I graduated, I took a job as a systems analyst for an electronics company based in Rio de Janeiro. But as the story goes, I soon felt uncomfortable with the road I had taken. I decided to change horses in midstream: I moved to an evening teaching job at a vocational school of informatics, which enabled me to go back to university to study music.
I soon became disheartened with the music degree. I found myself spending more time in the library searching for interesting things to read than attending lectures. On one of those days in the library, I came across a book entitled Musiques Formelles, authored by a composer I had scarcely heard of before: Iannis Xenakis.
I could barely read French at the time, but I immediately spotted Venn diagrams, set theory, logic formalisms and probability formulas; things that looked rather familiar from my informatics degree. The penny dropped! I could not believe my eyes.
It was a self-revealing moment: I realised that I could combine my knowledge of computing with music and forge an exciting path for my career, which was rather rare at the time.
Unwittingly I was one of the pioneers of using computers to make music in Latin America.
Which musicians inspired you while you were studying?
I have a high regard for those composers who are willing to share the thinking behind their music, reveal their compositional processes, and articulate their compositional methods.
In addition to Iannis Xenakis’ book, the writings of composers such as Lejaren Hiller, Karlheinz Stockhausen, Pierre Boulez and, to some extent, Pierre Schaeffer were instrumental in my formation. They adopted a somewhat technical, even scientific, approach to talking about music, which resonated with me.
But I should mention that John Cage was also influential in my education, although his writings were not technical.
The birth and rise of computer music
From God Save The Queen played on a Ferranti Mark 1 in 1951 and Max Mathews’ MUSIC program, through to the launch of the Commodore 64 and Atari, the birth of MIDI, the creation of Cubase and plug-ins, to the rise of Ableton and other digital audio workstations (DAWs), technology has quickly revolutionised how we make and listen to computer music.
Are you able to give us a short history of computer music and demonstrate its evolution from when you first discovered an interest in it, to becoming a pioneer in the field?
People hardly ever realise that musicians started experimenting with computing well before the emergence of the vast majority of scientific, industrial and commercial computing applications in existence today.
For instance, in the 1940s, researchers at Australia’s Council for Scientific and Industrial Research (CSIR) installed a loudspeaker on their Mk1 computer to track the progress of a program using sound.
Subsequently, Geoff Hill, a mathematician with a musical background, programmed this machine to play back a tune.
And in the 1950s, composer and Professor of Chemistry Lejaren Hiller collaborated with mathematician Leonard Isaacson at the University of Illinois at Urbana-Champaign to program the ILLIAC computer to compose a string quartet entitled Illiac Suite.
The ILLIAC, short for Illinois Automatic Computer, was one of the first mainframe computers built in the USA, comprising thousands of vacuum tubes. They programmed this machine with rules of harmony and counterpoint. And the output was transcribed manually into a musical score for a string quartet.
This composition is often cited as a pioneering piece of computer-composed music. That is, whereas the Mk1 merely played back an encoded tune, the ILLIAC was programmed with algorithms to compose music.
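The generate-and-test flavour of such rule-based composition can be illustrated with a toy sketch. This is my own simplified illustration of the general idea, not the Illiac Suite's actual program: candidate notes are drawn at random and rejected if they break a simple counterpoint-style rule, here a ban on wide melodic leaps.

```python
import random

# C major scale as MIDI pitch numbers (illustrative choice).
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

def compose(length=8, max_leap=7, rng=None):
    """Generate a melody by random choice, rejecting notes that
    leap more than max_leap semitones from the previous note."""
    rng = rng or random.Random()
    melody = [rng.choice(SCALE)]
    while len(melody) < length:
        candidate = rng.choice(SCALE)
        if abs(candidate - melody[-1]) <= max_leap:  # rule: no wide leaps
            melody.append(candidate)
    return melody
```

Real systems of this kind encode many more rules (of harmony, voice leading, rhythm), but the accept/reject structure is the same.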
Charles Babbage: the father of the programmable computer
How does the use of computers to create the music we are familiar with today link back to when Charles Babbage – who the University named their computer laboratories after – originated the concept of a digital programmable computer?
Charles Babbage was born in 1791 in London. The Babbage family moved to Devon when he was a teenager. Charles attended King Edward VI Grammar School for Boys in Totnes. However, he was educated mostly at home by private tutors because of ill health. Then, he went to study mathematics at the University of Cambridge.
Charles Babbage was a very clever person. He is often considered the father of the programmable computer.
Babbage invented a machine, known as the Analytical Engine, that could be programmed to execute different arithmetical calculations. It even had a mechanical system of wheels on axles to store numbers.
The machine was never physically built, but its design certainly inspired the development of modern computers.
How then did Ada Lovelace’s involvement in Babbage’s Analytical Engine expand on its original possibilities?
Interestingly, Lady Ada Lovelace, a brilliant mathematician and friend of Babbage, developed an algorithm for the Analytical Engine to generate a specific sequence of rational numbers, the so-called Bernoulli numbers. Of course, as the Analytical Engine had not been built, her ideas were all theoretical. Having said that, Lady Lovelace was perhaps the first person ever to program a computer.
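The sequence Lovelace targeted can be generated today in a few lines. The sketch below is my own illustration using the standard recurrence for Bernoulli numbers, not a transcription of her program, and it follows the modern convention in which B₁ = −1/2.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return exact Bernoulli numbers B_0..B_n via the recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0, solved for B_m."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B[m] = -acc / (m + 1)
    return B
```

For example, `bernoulli(4)` yields 1, -1/2, 1/6, 0, -1/30 as exact fractions.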
What is extraordinary is that Lady Lovelace stated that the Analytical Engine could be programmed to compose music. On a note about Charles Babbage’s Analytical Engine, she wrote:
“Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the Engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
If Babbage is the father of the programmable computer, by the same token Lady Lovelace is the mother of computer music!
Like Lovelace, who did not see life as simply science versus art but as a way for the two to complement and interact with one another, you are a composer working at the crossroads of music and science. What comes first: the idea of taking a scientific concept or set of data and transposing it into music, or the desire to use music to demonstrate science in more creative ways?
This depends on the purpose of the activity. If I am composing, then aesthetics comes first. If I am developing an experiment to study the brain, then science comes first.
Babbage and Lovelace were polymaths. Their knowledge seemed to span a range of different subjects. Polymaths can easily make connections between topics with no obvious relationships and harness these connections to make new things or see the world differently.
From brain wave compositions to space symphonies
You have created such a varied body of work in chamber, orchestral and electroacoustic music.
From compositions by an autonomous robot, and interfacing the brain with musical systems – in doing so coining the term Brain-Computer Music Interfacing (BCMI) – to your opera Lampedusa, whose libretto uses a new language created specially by Game of Thrones' David J. Peterson alongside musical renditions of particle collision data.
Where do you get your ideas from? How do you choose your next project and begin to realise your dreams?
Inspiration is difficult to explain. I can say that my mind wanders a lot, in particular when I am listening to symphonic music or reading a book. It takes me ages to read a novel because my mind tends to drift to peripheral ideas that the story brings up.
My eureka moments often take place during these wanderings. The hard thing is to pick an idea to develop from the many that come to mind.
Do you have a favourite composition you are most proud of?
I recently composed an opera, Lampedusa, which is a prequel to Shakespeare’s The Tempest. It is sung entirely in a non-existent language, invented specifically for the opera.
Lampedusa could not have been composed if it were not for the exciting research that we develop at ICCMR.
This is possibly one of my most significant pieces, and it shone a spotlight on Plymouth as an important place for contemporary classical music.
But the piece I am most proud of is Sound to Sea. This is a choral symphony, which embodies a number of computer-aided compositional methods that I have been developing at ICCMR. It was commissioned by the University to celebrate its 150th anniversary.
Sound to Sea received its premiere on 22 September 2012 at Plymouth's Minster Church of St. Andrew, by Ten Tors Orchestra, Peninsula Arts Chorale and fabulous mezzo-soprano Juliette Pochin, under the baton of Simon Ible.
The composition alludes to voyages: voyages of discovery and exploration, and also voyages of mind, imaginary voyages to the past and to the future.
The symphony has four movements and three intermezzi. I am very fond of the fourth movement, A Fine Rattling Breeze. It contains extracts from Charles Darwin’s diary written during the journey he took round the world on board HMS Beagle, which set sail from Plymouth Sound on 27 December 1831.
It was during this journey, which lasted for almost five years, that Darwin laid the foundations of his theory of biological evolution, which transformed the way in which we study the origins of species.
The extracts in A Fine Rattling Breeze are from entries written during the period when he sailed the coast of Brazil and visited Rio de Janeiro, between March and June 1832.
A significant proportion of this movement was composed with software of my own design, which generates music from mathematical models of Darwin’s theory of evolution.
Inspired by the notion that biological evolution involves changes in the genetic constitution of species, I programmed the computer to mutate excerpts from Bach’s and Mozart’s music to produce variants for A Fine Rattling Breeze. It was as if the notes of Bach’s and Mozart’s music constituted some sort of genetic code that I modified to create my own music.
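The "genetic code" analogy can be sketched in a few lines. This toy function is my own illustration of the mutation idea, not Miranda's actual software: a melody is treated as a list of MIDI pitches, and each pitch has some probability of being nudged up or down, yielding a mutated variant.

```python
import random

# A toy excerpt as MIDI pitch numbers (illustrative, not an actual Bach quote).
EXCERPT = [60, 62, 64, 65, 67, 65, 64, 62]

def mutate(notes, rate=0.25, max_step=2, rng=None):
    """Return a variant of `notes` in which each pitch is nudged
    by up to max_step semitones with probability `rate`."""
    rng = rng or random.Random()
    out = []
    for pitch in notes:
        if rng.random() < rate:  # this note 'mutates'
            pitch += rng.choice([-max_step, -1, 1, max_step])
        out.append(pitch)
    return out

variant = mutate(EXCERPT, rng=random.Random(0))
```

Repeated application of such a mutation operator, with selection of the musically promising variants, is the core loop of evolutionary approaches to composition.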
Computer music at Plymouth
Studying computer audio and music at Plymouth is much more than just learning to record sound. What other skills will students learn from studying BSc (Hons) Computing, Audio and Music Technology and ResM Computer Music?
What makes the University of Plymouth special for studying music technology are the teaching staff and their cutting-edge knowledge of the field. A great proportion of our teaching staff studied for their PhD at ICCMR. They amassed valuable skills in harvesting and creating new knowledge.
This means our staff are able to teach our students the latest trends in the field, plus knowledge that they won’t find anywhere else, simply because it was developed at Plymouth.
Our BSc (Hons) Computing, Audio and Music Technology degree is truly unique. Whereas other undergraduate courses on music technology teach students to use existing equipment and off-the-shelf software, our degree teaches students how to actually create them.
Effectively, our students study IT and programming while having fun with music! Graduates will be able to compete in the IT job market with the advantage that they will also have the creative profile that most high-tech companies look for in their workforce.
The ResM Computer Music degree provides an opportunity to develop a negotiated project of interest and learn transferable research skills that are most valued in the job market.
The course is also popular with practising musicians wishing to learn how technology can enhance their profession. The ResM is the natural pathway for those wishing to study for a PhD.
From helping people with dementia (your RadioMe collaboration with Dr Alexis Kirke and Professor Sube Banerjee), to helping Rosemary Johnson, a violinist who was destined to become a world-class musician before an accident 27 years ago, play music again through the power of her mind (Music of the Mind), your music has undoubtedly changed lives.
Could you elaborate on how music technology can really change and continue to enrich lives?
I believe that interaction between music and science should go both ways. That is, in addition to drawing from science and engineering to develop technology for music, we ought to draw from music to contribute to science and engineering as well.
Interdisciplinary research should enlighten as many of the disciplines involved as possible, not only one.
For instance, it is thanks to our interest in developing new musical instruments that we started developing music systems operated with brain signals. Then, we moved on to develop brain-computer music interface systems for people with severe motor impairments. This enabled us to gain a better understanding of what happens in the brain when we listen to music, in particular in people with brain injury.
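At its simplest, the core of such a system is a mapping from a measured brain feature to a musical parameter. The sketch below is an assumption-laden toy illustration, not ICCMR's actual system: it maps a normalised EEG feature, here labelled alpha-band power, linearly onto a MIDI pitch range (the signal filtering and power estimation are not shown).

```python
def brain_to_pitch(alpha_power, low=48, high=84):
    """Map a normalised EEG alpha-band power in [0, 1] to a MIDI pitch.

    Toy linear mapping: stronger alpha -> higher pitch. Inputs outside
    [0, 1] are clamped so the output always stays in [low, high].
    """
    alpha_power = max(0.0, min(1.0, alpha_power))
    return low + round(alpha_power * (high - low))
```

Practical BCMI systems use far richer mappings and machine-learning classifiers, but each one ultimately turns a stream of brain measurements into control values like this.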
The technologies that we developed to analyse brain signals and manipulate musical information, plus the experience that we amassed observing the effect that music has on people, gave us the confidence to embark on our most recent project: RadioMe.
We are developing ways to remix radio broadcasts in real time to provide bespoke services for audiences living with dementia. This is a project with massive potential to improve the quality of life of dementia patients, their families and carers.
What excites you about the future of computer music?
Computing technology is developing incredibly fast. Artificial Intelligence and 5G wireless communication are bound to make a significant impact on our world, both positively and negatively.
ICCMR is committed to harnessing the positive side of these technologies to continue contributing to the health sector. We are also alert to opportunities to combat climate change.
However, the pressing questions that I am addressing these days are: What will the computers of the future be like? What will these machines be able to do that current ones can’t? And how?
Future electronic devices will most certainly be built with radically different types of processors, such as processors made of living matter (e.g., genetically engineered biological neurons) and processors that harness quantum mechanics to perform computations in the subatomic realm.
In fact, these technologies are already being developed, some of which are accessible to universities and research labs around the globe to experiment with. ICCMR is at the forefront of these developments. We have developed our own proof-of-concept bio-computer. And we are pioneers of research into quantum computing for music.
I am proud that ICCMR is essentially a group of musicians developing research at the forefront of computer science and engineering.
You would not need all ten fingers to count the computer science labs in the world doing what we do.
How do you believe computer music can continue to change our world for the better?
Computer music research is exciting because the technologies that we develop for creating music in the first place can be used for purposes that may impact on other aspects of our lives too.
For instance, we are now employing the Artificial Intelligence methods that we developed to analyse the sounds of music to automatically identify whether the sound of a person's cough might indicate COVID-19 infection.
We are teaming up with researchers in the Faculty of Health to develop a system for preliminary self-assessment on a mobile phone.
All of these ideas and trends will eventually be channelled toward our efforts to make the world a better place for all.
Today, computers are absolutely essential for music.
And future developments in computing and Artificial Intelligence will have an impact on all professions that involve music. Whether you come from the arts and humanities or from science and technology, we'll help you harness computing technology to develop your passion for music.
Immersed in a thriving research centre, our future-facing programme offers the opportunity to study what it takes to make an impact on the future of the music professions.