Computers in Hindustani Classical Music

 

Hari V Sahasrabuddhe

C2/7 Kumar Classics

Aundh, Pune 411007 India

 

Enhanced version of paper presented at the

National Symposium & Annual Music & Dance Festival 2006

Indiranagar Sangeetha Sabha, Bangalore

January 26-28-29, 2006

 

Abstract

 

I firmly believe that humans shall forever retain all key roles in the creation and enjoyment of music – musician (composer, interpreter, performer), musicologist, and listener.  What roles, then, can computers play?  By having computers play each of these roles in experiments, we can improve our understanding.  Music may be presented to a computer either as some kind of notation or as a sound signal; the two approaches are compared.  Computer programs can also synthesize interesting sounds, provide ear training, and record sound and/or video.

 

Introduction

 

I must thank the Organizing Committee of this symposium, and Indiranagar Sangeetha Sabha, for giving me this opportunity to speak in front of you today. 

 

What are the aims of computational work in music?  For musicologists, computers provide a tool for better understanding.  For performers, they offer many instruments worth exploring.  For the recording engineer, they are a flexible platform.

 

We review a number of these ideas and list useful references.

 

Goals of computational work

 

I have a friend who imagines that some day a computer will present a musical performance “all by itself” and humans will only need to sit and appreciate.  I do not share his dream.  To me, humans shall forever retain all key roles in the creation and enjoyment of music – musician (composer, interpreter, performer), musicologist as well as listener. The computer, as a tool in the hands of humans, can enhance our musical experiences in a number of ways.

 

Researchers do have valid reasons, nevertheless, for having computers perform all of those roles in the course of research.  I myself wrote programs that compose music in order to validate my ideas about the structure of compositions [Sahasrabuddhe 1992, 1994].  Sengupta, Parui, et al. cast a computer in the role of a judge of the quality of a tanpura [Sengupta 2004].  Such attempts can improve our understanding of how humans perform the same tasks.

 

In using computers for music-related research, we repeatedly face the need to present music to a computer in some form.  That is the next topic we must address.

 

Notation and audio signal

 

A number of symbolic notations have been devised for representing music.  Even in Hindustani classical music, notations have been in use since the 19th century; for an overview see Kobayashi [Kobayashi 2003].  Notations may differ in their degree of detail and in exactly which features of the music they choose to encode, but all of them offer a representation far more concise than any actual recording of the music being represented.  Therein lies their main strength, as well as their primary limitation.

 

One particular notation deserves special attention because of its ubiquity.  Although one may argue that it is not the most suitable for Indian classical music, MIDI (Musical Instrument Digital Interface) serves both as a notation one can analyze like any other and as input to a number of software and hardware devices that can directly play the music it represents.  A number of paper and web documents are available on MIDI; for example, see Borg.com [Borg.com 2005], Huber [Huber 1998], and Menard [Menard 2005].
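As a small illustration of MIDI as machine-readable notation, the following Python sketch writes a one-note MIDI file.  It assumes the third-party mido library (an assumption on my part; any MIDI library would serve equally well):

    # A minimal sketch using the (assumed) third-party mido library:
    # write a MIDI file containing a single middle C.
    from mido import Message, MidiFile, MidiTrack

    mid = MidiFile()
    track = MidiTrack()
    mid.tracks.append(track)

    # Middle C (MIDI note 60), held for one beat (480 ticks by default).
    track.append(Message('note_on', note=60, velocity=64, time=0))
    track.append(Message('note_off', note=60, velocity=64, time=480))

    mid.save('example.mid')

The resulting file can then be played by any MIDI-capable device or analyzed like any other notation.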

 

Sometimes it is not enough to work with notation, and one has to turn to recorded sound.  Analog recordings (especially on tape) are subject to wow and flutter (for definitions, see [Zen 2005]).  Researchers are therefore cautioned against drawing conclusions about microtonal issues from an analysis of a recording that has passed through an analog medium.  Similarly, lossy compression of a digital recording may discard vital information [AudioBox 2005].

 

Summary of my work with notation

 

During the 1990s I experimented with concrete computational models of performance in Hindustani (North Indian) classical music.  All my models and experiments drew upon notation found in textbooks.  As a result, I could make faster progress than if I had used sound recordings instead, because the latter course would present nontrivial problems in the interpretation of the input.  (Bapat discusses the difficulty of interpreting sound files [Bapat 2005].)  On the other hand, I exposed myself to the criticism that my input did not represent “real performances”.

 

At the heart of my experiments were:

 

· A program for building a finite automaton model of a raga from given notation of swar-vistar, bandish, etc. in that raga (a simplified sketch follows this list).

 

· An analysis of the 2-, 3- and 4-note phrases that occurred in given notation, both within a raga and across many ragas.
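To make the first item concrete, here is a minimal Python sketch (not my original program; the swara spellings and input phrases are hypothetical).  The automaton simply records, for each note, the set of notes observed to follow it in the given notation:

    # A minimal sketch of a first-order raga automaton: states are
    # swaras, transitions are the note successions seen in notation.
    from collections import defaultdict

    def build_automaton(phrases):
        """Map each note to the set of notes that may follow it."""
        transitions = defaultdict(set)
        for phrase in phrases:
            for cur, nxt in zip(phrase, phrase[1:]):
                transitions[cur].add(nxt)
        return transitions

    # Hypothetical input: one ascending and one descending line.
    notation = [
        ['S', 'R', 'g', 'M', 'P', 'd', 'N', "S'"],
        ["S'", 'N', 'd', 'P', 'M', 'g', 'R', 'S'],
    ]
    automaton = build_automaton(notation)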

 

The first led to several demonstrations of automatic and semi-automatic performance, which showed the strengths and limitations of the finite automaton model.  Alankaras (especially meend and gamaka) and exact tonal relations (shruti) were left out of the model by design.  Even so, ragas could be modeled well enough to be recognized by a familiar listener.

 

Some examples

 

These examples illustrate what could be done with the simple raga model.  The first pair are bandishes manually composed by two graduate students in Computer Science as an assignment in my Computational Musicology course.  The class was given the notation of a common existing bandish, and different students were given the finite automaton models of different ragas.  The assignment was to compose a bandish mimicking the up/down movements of the given bandish while following the rules of the assigned raga.  Here are two of the results:

 

 

The next is an example of automatic composition in which the computer creates a progression of alaps.  This is done through a three-stage process.  In the first stage, random traversals through the raga automaton, starting from and ending in madhya Sa, are saved in a list.  In the second stage these are sorted by the highest note reached, to simulate the badhat order.  In the last stage, the duration and intensity of each note are decided by simple rules, and attacks, fade-outs and meends are added.  The first two stages might be sketched as follows (continuing the earlier automaton example; an illustration, not the original code):
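    # Sketch of stages 1 and 2, reusing `automaton` from the earlier
    # sketch.  Stage 3 (durations, intensities, attacks, meends) is
    # omitted here.
    import random

    def random_traversal(automaton, start='S', end='S', max_len=16):
        """Random walk that tries to return to `end` within max_len notes."""
        walk = [start]
        while len(walk) < max_len:
            nxt = random.choice(sorted(automaton[walk[-1]]))
            walk.append(nxt)
            if nxt == end and len(walk) > 2:
                break
        return walk

    # Stage 1: collect traversals starting and ending on madhya Sa.
    alaps = [random_traversal(automaton) for _ in range(10)]

    # Stage 2: sort by the highest note reached, simulating badhat.
    ORDER = {n: i for i, n in enumerate(['S', 'R', 'g', 'M', 'P', 'd', 'N', "S'"])}
    alaps.sort(key=lambda a: max(ORDER[n] for n in a))

Listen to the resulting music: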

 

 

The next example illustrates what a human-computer collaboration can produce.  Mr. Vasant Oka created this example for his M.Sc. project.  The well-known bandish in Todi “saanch-saanch keeje” was supplied to the computer as data.  The computer then interpolates taans, using the raga model to generate note sequences that fill the space available in the avartan, taking care to join their ends with notes of the supplied bandish.
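One way to implement such interpolation (a sketch of mine, not Mr. Oka's code) is a depth-first search through the raga automaton for a run of exactly the required length that joins two notes of the bandish:

    def find_taan(automaton, start, end, length):
        """Find a `length`-note sequence from start to end, if one exists."""
        def dfs(path):
            if len(path) == length:
                return path if path[-1] == end else None
            for nxt in sorted(automaton[path[-1]]):
                result = dfs(path + [nxt])
                if result:
                    return result
            return None
        return dfs([start])

    # e.g. a nine-note run from Pa down to Sa, reusing the automaton above:
    taan = find_taan(automaton, 'P', 'S', 9)

A real implementation would also randomize the search and respect tala boundaries; the sketch only shows the core constraint, namely joining the ends.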

 

 

Towards an architecture of musical meaning

 

When I presented the results of my experiments to experienced listeners, one common observation they made was that beauty was somehow missing from the results.  Before concluding that this is because of the exclusion of details such as alankaras, one must attempt to model musical meaning in some way (analogous to modeling semantics in natural language).  One may, of course, question the very hypothesis that there is any meaning inherent in melody.  In Mary Doria Russell’s novel The Sparrow [Russell 1996], music is shown to have very different meanings in different civilizations.  However, there appears to be some universality of the meaning of melody as far as humans are concerned.  For example, Martin Clayton found broad agreement among listeners about the mood of an Indian raga [Clayton 2001].  The following clip is from S. Prokofiev’s Peter and the Wolf.  It is a theme which represents an animal in the story.  Most listeners imagine that it represents a clever, sneaky animal, which tallies with the character it represents, the cat.

 

 

With the help of substring frequency and succession data, we searched the phrases occurring in many ragas for a set of atoms of meaning.  It appears that 3-note sequences are the most likely candidates for the atomic units we are looking for.  This work is still incomplete, because it can only be undertaken by researchers who are competent in both the musical and the computational areas.  I firmly believe that considerable progress is waiting to be made there with input in the form of notation, before one has to look to sound recordings for further development.
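The substring analysis itself is straightforward to sketch.  Counting the 3-note sequences (trigrams) over a corpus of notation might look like this in Python, reusing the hypothetical notation from the earlier sketch:

    # Count all 3-note sequences occurring in a corpus of phrases.
    from collections import Counter

    def trigram_counts(phrases):
        counts = Counter()
        for phrase in phrases:
            for i in range(len(phrase) - 2):
                counts[tuple(phrase[i:i + 3])] += 1
        return counts

    print(trigram_counts(notation).most_common(5))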

 

Computer audio output in research

 

Producing sound output from a computer is significantly simpler than “recognizing” sound input (for example, converting the melody or other musical elements in the input to notation).  Researchers were synthesizing sound on computers as early as 1957 (see [Obsolete 2005]).

 

Excellent software packages are readily available today for converting specifications into sound waves.  See, for example, MIT Press [MITP 2005] and Vercoe [Vercoe 2005] on Csound, a powerful general-purpose synthesis package.  Mitra [Mitra 2003] used a more specialized package, one producing veena-like sounds, in his research.
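To illustrate how simple the output direction can be, the following Python sketch (standard library only; Csound and its kin are of course far more capable) turns a bare specification, a frequency and a duration, into a sound wave on disk:

    # Write one second of a 440 Hz sine tone to a 16-bit mono WAV file.
    import math
    import struct
    import wave

    RATE = 44100   # samples per second
    FREQ = 440.0   # Hz

    with wave.open('tone.wav', 'wb') as f:
        f.setnchannels(1)    # mono
        f.setsampwidth(2)    # 16-bit samples
        f.setframerate(RATE)
        for i in range(RATE):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * i / RATE))
            f.writeframes(struct.pack('<h', sample))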

 

Computer in other roles        

 

In concert, computers embedded within electronic keyboards are used to create powerful and versatile instruments.  If a general-purpose computer is used in this role, it becomes possible to experiment with alternative musician interfaces and create even better instruments.  This is an area where more research would be useful.

 

Music students and listeners can train their ears using software created for that purpose.  See for example [Earmaster 2005], [Spangler 1999]. 

 

A PC (in a studio) or laptop (for fieldwork), armed with appropriate interfaces, is a good audio and/or video recorder.  See, for example, [Gonzalez 2003], [Salvator 2003].

 

Conclusion

 

We have reviewed computational work in music, covering both musicology and the creation of music.  It is hoped that this review and the references at the end will provide future workers with useful resources.

 

References

 

Audio Box, Inc. 2005 Compression on a Windows PC:  http://www.audioboxinc.com/compression.html

 

Bapat, Ashutosh 2005 “Pitch Detection of Singing Voice in Tabla Accompaniment” M Tech Thesis, Department of Electrical Engineering, IIT Bombay, 2005. See http://www.ee.iitb.ac.in/uma/~daplab/index.htm

 

Borg.com 2005 Midi tutorials on the web: http://www.borg.com/~jglatt/tutr/miditutr.htm

 

Clayton, Martin 2001:  “Introduction: towards a theory of musical meaning (in India and elsewhere)” Music and Meaning Special issue of British Journal of Ethnomusicology, vol. 10 part 1 (2001).

 

Earmaster 2005: What is Ear Training? http://www.earmaster.com/eartraining.htm

 

Gonzalez, Ron 2003: The Musician's Guide to Home Recording http://home.earthlink.net/~rongonz/home_rec/home.html

 

Huber, David Miles 1998 The MIDI Manual, Second Edition (Paperback) Newburyport, MA, 1998.

 

Kobayashi, Eriko 2003 “Hindustani Classical Music Reform Movement and the Writing of History, 1900s to 1940s” Ph D thesis, University of Texas at Austin, 2003

 

Menard, Jim 2005 Midi reference on the web: http://www.io.com/~jimm/midi_ref.html

 

MITP 2005: The Csound Front Page http://mitpress.mit.edu/e-books/csound/frontpage.html

 

Mitra, Amit 2003: “In Search of 22 Shrutis” Ph D Thesis, Department of Computer Science, University of Pune, 2003.

 

Obsolete 2005: Computer Music: Music1-V & GROOVE http://www.obsolete.com/120_years/machines/software/

 

Russell, Mary Doria 1996: The Sparrow New York: Villard, 1996

 

Sahasrabuddhe, H V 1992 “Analysis and Synthesis of Hindustani Classical Music” Department of Computer Science, University of Poona, November 1992

 

Sahasrabuddhe, H V 1994 “Searching for a Common Language of Ragas” read at seminar: Indian Music and Computers: Can 'Mindware' and Software meet?, New Delhi, August 1994.

 

Salvator, Dave 2003: Build It: Extreme Personal Video Recorder http://www.extremetech.com/article2/0,1697,1626398,00.asp

 

Sengupta, R., S K Parui, et al. 2004 “Objective evaluation of Tanpura from the sound signals using spectral features” J. ITC Sangeet Research Academy, Vol. 18, 2004.

 

Spangler, Douglas 1999: Music Software for Eartraining http://www.msu.edu/user/spangle9/

 

Vercoe, Barry 2005: The Alternative Csound Reference Manual http://kevindumpscore.com/docs/csound-manual/

 

Zen Audio Project, Simon Fraser University 2005 “wow and flutter” http://www.sfu.ca/sca/Manuals/ZAAPf/w/wow_and_flutter.html