This article was written in the early 1990s for a seminar on aesthetics in EAM held at the Swedish Royal Academy of Music. Some of the conclusions have been outdated by the actual development of new powerful computer systems, but the article still contains some valid observations.
The development of electroacoustic art during the last decade has given rise to a number of questions concerning the aesthetic credibility of an art form often regarded as narrow, highly specialized and, in the worst of cases, elitist. To put the discussion in perspective, I believe it is relevant to view the technologically groundbreaking and musical tendencies of the eighties and nineties in a wider historical perspective. Before doing so I need to define some terms that I will use throughout the discussion. These terms are referred to in slightly different manners depending on who is using them, and could be misinterpreted as lacking a strict definition in this context.

The three basic terms are "sound", "event" and "object", frequently used to describe a variety of properties of anything from a synthesizer to a programming environment. Although at first glance one might argue that "sound" really ought to be a quite unambiguous term (if it sounds, it's a sound!), this is not really the case. In synthesizer terminology a "sound" need not represent the actual sound as it is produced, but rather a set of parameters in the algorithm that will (at some point in time) produce and output a sound. To make life a little easier I will stick to the everyday sense of the word: when I use the word "sound" I mean something that sounds, not a "patch", "voice" or whatever synth-industry term may be used. "Event" is a somewhat more abstract term, normally used to represent any physical or non-physical action taking place at some point in time. The crucial part of the definition is, of course, time, since an event (of any kind) always relates to some sort of timeline. Events simply don't apply outside time. Especially not musical events. The term "object" is even more abstract, and I won't try to sort out all possible underlying interpretations.
When I use the word in this context it means a representation of a process of any degree of complexity that makes up a well-defined and recognizable structure, easily distinguished from other "objects". If an object can be altered in some manner, all the structural parts that make up the object will respond in various ways to that alteration. Since objects often (but not always) exist in time, one might regard them, in a sense, as complex hierarchical events. Let me give you a simple musical example: one bar of an Alberti bass line. It is a fairly simple musical structure. It can be easily recognized among other structures, and it can be altered in some common fashions, like transposition or augmentation, and all the structural parts (i.e. the notes) will be affected by the alteration.
It's not really more complicated than this. However, since the definition applies to a vast number of possible structures (not always of a musical type), it can still be tricky to grasp at times. Yet it is important to understand the concept of "objects", since the term is extremely well suited to describing the way many composers of electroacoustic music tend to think as they work with sonic art.
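The Alberti bass example can be sketched in code. This is a minimal illustration under my own assumptions (the class and method names are hypothetical, not terminology from the article): an "object" is a container whose structural parts all respond to an alteration applied to the whole.

```python
# A sketch of an "object" in the sense above: a recognizable structure
# whose parts (the notes) all respond to an alteration of the whole.

class EventObject:
    """A container of timed note events, e.g. one bar of an Alberti bass."""

    def __init__(self, notes):
        # each note is (onset_time, midi_pitch, duration)
        self.notes = list(notes)

    def transposed(self, semitones):
        # the alteration reaches every structural part (every note)
        return EventObject(
            (t, pitch + semitones, d) for (t, pitch, d) in self.notes
        )

    def augmented(self, factor):
        # augmentation stretches every onset and duration by the same factor
        return EventObject(
            (t * factor, pitch, d * factor) for (t, pitch, d) in self.notes
        )

# One bar of a C major Alberti bass (C-G-E-G) in eighth notes:
alberti = EventObject([(0.0, 60, 0.5), (0.5, 67, 0.5),
                       (1.0, 64, 0.5), (1.5, 67, 0.5)])
up_a_fourth = alberti.transposed(5)  # every note moves; the shape survives
```

The point of the sketch is the propagation: transposing or augmenting the object alters all of its parts at once, yet the structure remains recognizably "the same" object.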
Having sorted out the basics, it's time to round things off by combining these terms and adding some complementary ones. A "soundevent" simply implies a sound of some sort that occurs on a timeline. A "soundobject" could mean anything from a single soundevent to a large complex of sounds, as long as it adheres to the general definition of an object stated earlier. A soundobject in this sense should not be mistaken for Schaeffer's "objet sonore", which rather refers to the physical object that creates a sound. Instead I use "soundstructure" to point to the underlying physical or algorithmic structure that causes a sound (a sound being, as we stated earlier, what we actually hear). An "eventstructure" is similarly the underlying structure of events on a timeline. Finally, "eventobjects" could be thought of as containers of eventstructures, provided that the general conditions of an object are met by the underlying structures. The Alberti bass example can be used once again to demonstrate the difference between soundobjects and eventobjects: taken in its symbolic form (i.e. as notation) it is an eventobject; if we were to make a recording of the same example it would transform into a soundobject. One interesting aspect of computer music structures is that they can appear as both soundobjects and eventobjects simultaneously.
Ever since the art of instrumentation entered music history, composers have been, to a certain extent, preoccupied with sound in its own right. But in no music was the soundstructure a major compositional means until electronic and concrete music appeared in the late forties. There was, however, a fundamental difference in the ideological background of the Köln studio and the GRM in Paris. A strong force behind the concept of the Köln studio was the serial thinking of Darmstadt. In Paris, the closest link to older ideas could rather be traced to Russolo's Futurist manifesto on "noise-art". Accordingly there was a mainly theoretical orientation in Germany, while the French were more biased towards an experimental exploration of the new sonic field of art. These distinct viewpoints have lingered on in various ways through the short history of electroacoustic music. This became particularly clear as computers began to be used as tools for composing in the early sixties. The computer could at this time act in two fundamental ways. It could handle and manipulate soundstructures and use these structures to generate sound. It could just as easily handle and manipulate eventstructures on a symbolic level and use these to control soundobjects, or transform the data into a score that could be played by musicians. Since generating sound with the computer at this time was a tedious non-realtime process, and the algorithms rather primitive, analog equipment was far more attractive to those composers who wanted to explore the possibilities of a truly sonic art. The analog systems, however, lacked a great deal in terms of structural abilities. Parameters like pitch and time could only be handled in a rather raw fashion, and building complex layers of sound required a tremendous effort before the introduction of the multi-track tape recorder in the mid-seventies. Consequently, two aesthetic lines developed.
In the late seventies, "analog composers" regarded computer music with suspicion and dismissed it as anemic, academic or plain boring (which was often true), while "computer composers" argued that analog music was narcissistic, structurally poor and predictable (which was also often true).
Things changed drastically as digital synthesizers became commercially available in the beginning of the eighties. Even for those composers who had tried to work in a "mixed" mode before, going back and forth between computers and analog equipment, the possibility of having direct access to the sound of digital synthesis in the analog studio completely changed the image of computer music. Even more revolutionary was the new MIDI protocol, which marked the beginning of the fastest expansion of the musical instrument market ever. By the end of the eighties there wasn't any point in talking about "computer music", since practically all electroacoustic music was made using computers in one way or another. Still, the two aesthetic lines seemed to persist: one type of composer would concentrate his energy on soundstructures, while the other would devote the major part of his attention to eventstructures. Although this is a coarse generalization, it is true to the extent that it is worth taking as a point of departure for a deeper discussion.
Although there are ways of working with symbiotic forms of soundstructures and eventstructures, this is far less common than separating the two. When working with MIDI, eventstructure and sound are always separated. As a consequence, one may regard sound as an exchangeable part of musical identity, while the eventstructure is irreplaceable. This way of thinking stems from instrumental music, where a melody written for flute but played on a violin is still imagined to be the same melody. To put it incisively, one might say that sound in this sense is degraded to a means of confirming the presence of an eventstructure. Furthermore, electronic sounds in general do not possess the strong timbral identity of acoustic instruments. If, for instance, an electronic sound shares some structural properties with a violin sound, it simply becomes a "violin-like sound". Consequently, a lot of other violin-like sounds will meet the possible requirements of the same eventstructure just as well.
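The split described above is visible at the level of the MIDI messages themselves. In the sketch below (the helper function is my own illustration, not a real library API), the eventstructure is a list of Note On messages, while the "sound" is nothing but a Program Change number that can be swapped without touching a single event:

```python
# MIDI's event/sound split: the eventstructure (note messages) is fixed,
# the "sound" is just a program number prepended to the same byte stream.

NOTE_ON, PROGRAM_CHANGE = 0x90, 0xC0  # channel voice status bytes

def render(events, program, channel=0):
    """Prefix the same eventstructure with a different Program Change."""
    messages = [bytes([PROGRAM_CHANGE | channel, program])]
    for pitch, velocity in events:
        messages.append(bytes([NOTE_ON | channel, pitch, velocity]))
    return messages

melody = [(60, 100), (64, 90), (67, 80)]  # the "irreplaceable" part
as_violin = render(melody, program=40)    # General MIDI program 41: Violin
as_flute  = render(melody, program=73)    # General MIDI program 74: Flute

# The note messages are byte-for-byte identical; only the first message differs.
```

The note bytes never change: in MIDI terms, exchanging the violin for the flute is a one-message operation, which is exactly why sound comes to feel like an exchangeable accessory to the eventstructure.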
In this sense we end up caring less about sound than we would if we were writing instrumental music. An instrumental composer at least always works with particular sounds in mind, even if it is potentially possible to reorchestrate or transcribe the music later. This is more than one can say about some composers of electroacoustic music. I sometimes get the impression that some composers really don't want to be responsible for the sound of the piece at all. If they had the money, they would simply compose a structure and leave it to some skilled programmer to put sounding flesh on the bare bones. I do not believe that it is truly possible to separate eventstructure and soundstructure in electroacoustic music. They have to be worked on in parallel, tuned and retuned until the optimal relation has been achieved. Without this relation the music will not be artistically convincing, and of little or no interest to the audience. At the very least it could not be regarded as sonic art, but rather as an alien form of instrumental music. I do believe that a composer of electroacoustic music must take full responsibility for creating a complete musical structure in which sound and eventstructures are mutually dependent and not exchangeable. If he can't handle that, he is most likely better off composing some other kind of music.
MIDI is a great thing for composers and musicians, simply because it has opened up a new world of powerful musical tools and made these tools available almost everywhere in the world. It is obvious that the more standardized our instruments get, the more the flow of experience, ideas and creative development becomes accessible to a lot of people. Think of what the MAX programming language, unthinkable without the standards of MIDI and the Mac computers, has meant to composers and musicians around the world. MIDI does, however, contain some severe limitations. Low resolution and speed are what is commonly argued to be the weak spot of the protocol. This, however, is true only under certain conditions, and there are a number of alternative solutions which relieve some of the bottlenecks, so that the end result need not suffer that much. A bigger problem, in my opinion, is that MIDI affects the way one tends to think about music and the compositional process. Treating musical processes as a split world, with events on one side and sound on the other, is one of the major shortcomings of MIDI.
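The resolution limit, and the kind of workaround that makes it tolerable "under certain conditions", can be shown concretely. Standard MIDI control change messages carry 7-bit values (128 steps); pairing a coarse and a fine controller yields 14 bits (16384 steps). The helper functions below are my own sketch of that arithmetic, not part of any MIDI library:

```python
# The 7-bit bottleneck and the paired-controller (MSB/LSB) workaround.

def to_7bit(x):
    """Map a normalized value in [0, 1] to a single 7-bit controller value."""
    return min(127, int(x * 128))

def to_14bit(x):
    """The MSB/LSB workaround: 16384 steps instead of 128."""
    v = min(16383, int(x * 16384))
    return v >> 7, v & 0x7F  # coarse (MSB) and fine (LSB) data bytes

# A very slow crescendo from 0.500 to 0.505 is invisible at 7 bits...
assert to_7bit(0.500) == to_7bit(0.505)
# ...but resolvable with the paired-controller scheme.
assert to_14bit(0.500) != to_14bit(0.505)
```

This is why the low-resolution complaint only bites in certain situations: where a device supports the paired scheme, fine gestures survive, at the cost of doubling the message traffic.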
We would need a communication protocol that gives us access to soundstructure as well as eventstructure in a transparent way. As things are now, it is possible to work with true event and sound integration, but to manage all the intermingled datastructures involved one has to work against the basic concept of the MIDI protocol. This calls for a lot of effort and time, and also a great deal of technical control over the sometimes rather complex interactions between a variety of software and hardware. It is therefore understandable if some composers simply give up halfway through the data jungle.
There are of course other platforms than MIDI. In institutions ranging from eam studios to universities one finds remnants of what electroacoustic music used to be before MIDI: powerful computers running specialized software, used by a few composers and programmers. The problem with these systems is that they often have a steep learning curve, cost too much and have very limited support and documentation. One of the really interesting things about MIDI is that it suddenly brought a common "language" to people from the commercial, experimental and art music worlds. In order to develop a credible artistic alternative to industrialized popular music, I believe that we need a common instrumental, technical and linguistic ground shared by many composers, developers and musicians from different musical communities. Specialized systems will complement each other as sources for artistic development projects, realizations of visionary concepts and research platforms, but they will not be able to meet the needs of a generalized specification common to a large international music community.
Since the introduction of MIDI and of ever more powerful (and cheaper) computer systems, there has also been a rapidly growing interest in various types of live computer and interactive music, performed with or without traditional instruments, synthesizers, sensor systems, alternative controllers and lots and lots of cables, interfaces, computers and panic when nothing works five minutes before the concert is supposed to begin.
Again, it seems that this branch of electroacoustic music has mainly attracted composers with a prime interest in eventstructures and, in this special case, interactive processes in general. So far the artistic and musical results have (with a few exceptions) not been very convincing.
It's the same old story. People at conferences, seminars and festivals will talk for hours about intricate control structures, grammars, fractals, sensors and God knows what, but surprisingly little about sound, art or music! There is obviously a great deal of potential in this type of eam, but unfortunately too few good composers have put in sufficient time to develop artistic and musical solutions to some of the many problems involved in interactive music. Most likely this is because interactive music systems, although much more powerful and versatile than they were only five years ago, still need to improve substantially in terms of power, control and integration before they become a really attractive alternative to tape-based composing. In my opinion, the kind of live-electronic music that has provided the world with the best music in this genre is still the soloist or ensemble playing with a pre-recorded tape.
It is fair to say that the majority of composers who consider eventstructures the most significant part of composition are profoundly influenced by an instrumental approach to music. This should not necessarily disqualify them as eam composers, but obviously some of the basic qualities and important identity of sonic art are easily lost along the way. I still believe that a good eam composition always starts with the composer reflecting on the structural and poetic resources of the sounds from which the piece is to be built. The day composers stop wrestling and experimenting with soundstructures, the term sonic art will have no meaning anymore. The second aesthetic branch, developing soundstructures rather than eventstructures, is not without its problems either. Unfortunately, sonic art seems to attract some composers who have very rudimentary ideas about composing. All too often a piece may demonstrate some interesting sound material but at the same time be startlingly void of anything even remotely related to concept, structure, dialectics or poetry. These works simply present one attractive sound after the other or, if things get a bit more dense, on top of each other, until they eventually disappear into the silence from which they so mysteriously transmogrified themselves earlier.
This may be an unfair and mean-spirited way of describing a musical genre that has given rise to some indisputably good music, but I am not really referring to the few masterpieces of eam, rather to the pathetic mass of weak music that unfortunately colors the overall impression of eam. I am afraid that eam is in some sense a little too much like popular music in that it seems to be very formula-sensitive. Lacking the basic possibility of utilizing some vital structural elements like melody, harmonic progression and so on, eam composers are constantly on the lookout for elements that can fill this structural void. Consequently, someone who executes an efficient gesture (like the characteristic accelerando-ritardando or the reversed reverb sound) will soon find this "invention" popping up in the compositions of his colleagues. Since formulas like these are often heavily exploited, they soon become hollow gestures with little musical value. The only way to make sure that one's innovations aren't mimicked is simply to make things too difficult to copy. Unfortunately, this easily degenerates into exaggerated virtuosity. Today the majority of soundstructure-oriented music rarely exposes more than one line of musical development at any given time. There is, of course, more than one soundobject present every now and then, but that doesn't change things, since only one soundobject at a time carries the musical initiative. It's like a ball being kicked from one player to another in a football team. Apart from simple foreground-background effects, one rarely hears more than one structural line at any given time, which naturally renders the music somewhat flat. And apart from the ever popular mechanical pulsation, the rhythmic structures, often rudimentary or indistinct, seem in general to attract very little attention from most composers.
Eventstructure is normally limited to some simple time structuring of soundobjects, typically applied to the ordering of their onset times and possibly affecting their durations. Normally one doesn't notice the presence of these structures, but they make the composer feel more comfortable when he has to answer questions about how the piece was made. Furthermore, in ninety percent of all cases the soundobjects have no external reference, and thus it is possible to understand why some people find eam rather boring.
To move forward from here we must learn from the past. I believe that the difference in music within the same cultural region, from one time to another, is far smaller than that between two contemporary but different cultural regions. We have to be more self-critical. It simply is not enough to be skilled at producing sweet-sounding music if it turns out that the only message it carries to the audience is emptiness. The field of eam has to be conquered over and over again. We must dare to be more personal, more specific, more unpredictable. We need to build up a new set of sounding symbols and signs that will allow us to communicate with the audience. A new kind of spectral harmonic system, adapted to sonic art, is yet to be invented and investigated. The rhythm of electroacoustic music has to be redefined and reinstated as a major musical attribute, which will enable us to develop more complex time relations between an increasing number of layers of soundobjects. It is time to leave the mono-linear state and learn how to expand into a multi-dimensional musical space, where poly-linear composing will be the eam equivalent of traditional polyphony. I hope that we will be able to inspire each other to smash the electroacoustic conventions we have created and move to a new state of musical consciousness. I do believe in a future for electroacoustic music, but we will have to work hard for it.