ABSTRACTS
A
Mathew Adkins - The Application of Memetic Analysis to Electroacoustic Music
Mathew Adkins
Music Department, University of Huddersfield (UK)
Although Richard Dawkins's concept of the “meme” was first formulated in his work The Selfish Gene (Oxford, 1976), it took over three decades for the first substantial text applying this concept to music to appear. Steven Jan, in The Memetics of Music: A Neo-Darwinian View of Musical Structure and Culture (Ashgate, 2007), proposes an analytical method based around memes: a multitude of musical “units” or “replicators” that are transmitted by imitation both within and across genres of music. Jan's study focuses primarily on the application of memetics to the analysis of classical music. This paper will be the first to extend the application of memetic analysis to electroacoustic music. The first part of the paper considers how the author's formulation of “acoustic chains” in electroacoustic music (Adkins 1999) can be extended through the application of memetic thinking. The paper moves on to examine what constitutes a meme in electroacoustic music by considering timbre, spectromorphology, rhythm and the perception of concrète sonic material. The concept of “genre” is discussed in relation to the transference of memes within contemporary music culture from one genre of electronic music to another. Finally, to demonstrate the practical application of memetic thinking in electroacoustic music, the last part of the paper provides a brief analysis of the author's work Recombinant (2008).
Isabel Antunes-Pires
Groupe Confluences cinématographiques, audiovisuelles, musicales et arts numériques, Université Paris-Est Marne-la-Vallée
Xenakis developed compositional procedures of his own and applied them both in works of electroacoustic music and in works of instrumental music. The use of these procedures creates undeniable links between electroacoustic and instrumental music. Through a study of the relationship between the electroacoustic work La légende de Eer and Jonchaies for orchestra, we will examine several elements that attest to this correlation.
Regarding Jonchaies, Xenakis stated that the piece “draws on the results obtained and used in 'La légende de Eer' (…) stemming from [his] theoretical work on the synthesis of sounds and of music by computer (…)”. In this paper we will therefore seek to show, through analytical methods, the close relationship the composer wove between these two pieces. In particular, we will show that the use of stochastic processes and Brownian motion was essential in La légende de Eer, and that the resulting sonorities markedly influenced the search for the orchestral sonorities of Jonchaies.
B
Natasha Barrett - Listening Thresholds and Contextual Frameworks – An example from Barely-part-1
Natasha Barrett
Freelance, Oslo (Norway)
Rather than being expected to surprise the listener with endless novelty, a challenge now facing electroacoustic composers is to delve with creative energy into our established techniques and musical languages. From this perspective I present the first version of Barely, Barely-part-1: a sound and visual installation designed in collaboration with Birger Sevaldson and the experimental design and architecture group Ocean North. Barely is a paradigm of composition, sound-art and multi-media addressing the relation of the composed work to perceptual and social frameworks, and how this external connection may subsequently find its way into the intrinsic structure of the work as a self-contained entity. Barely evolved from a reaction to the increasing noise of our everyday sound-world and to some trends in electroacoustic composition and “sound art”. In broader terms it is a reaction to information overload and information redundancy, both of which deny the detailed listening that can be regarded as an ingredient of everyday well-being. In concrete terms Barely presents a highly detailed, just-perceptible layer above the “experienced threshold” of our senses. In an anechoic room, the experienced threshold is the threshold of hearing. At a busy train station the experienced threshold is the sound to which we consciously listen as information-gatherers, rather than as passive receivers hearing the sound with no intention to listen or interpret meaning. Barely-part-1 addresses compositional structure and the experienced threshold in terms of space, symbolic syntax, acoustics, perceptual and social frameworks, and the way the architectural aspect interacts with the temporal domain. It aims to create a “sound-sanctuary”, enticing the individual into deep concentration and a sensual experience by offering great detail and structural complexity at a level that is only just perceptible yet latent in our everyday experience.
Marc Battier
Université Paris-Sorbonne, MINT-OMF et EMSAN
The concerts organized in 1952 as part of the Festival de l'Œuvre du XXe siècle, conceived by Nicolas Nabokov, were the forum chosen by Pierre Schaeffer to present works marked by a great diversity of approaches to musique concrète. They were held on the 21st and the 25th of May. An additional concert on May 23rd presented a selection of works advertised as a “Concert reserved for the youth movements and commented on by Bernard Gavoty”. The repertoire of the young musique concrète comprised works realized by P. Schaeffer and P. Henry as well as by guests and participants in the first training session of musique concrète. The concerts provided an opportunity to present an overall view of musique concrète and to assert its strength in research, along with an innovation in performance: sound projection “in relief, static or cinematic”.
Programmed in these concerts were Pierre Boulez's two studies, Pierre Henry's Antiphonie, André Hodeir's Masquerage (film version with singer and pianist), a fragment of Pierre Schaeffer's Orphée and the work that is the object of this communication, Olivier Messiaen's Timbres-Durées. The concerts also offered a retrospective of musique concrète works as well as a hearing of the studies of the first students of the GRMC.
In a preliminary program, the work Timbres-Durées is presented as an “Étude pour percussions”. The composers are listed as O. Messiaen and P. Henry. Further study of the documents, however, allows comparison with other works: thus we find a notice for a study by André Jolivet with which the name of Jean Barraqué is associated. As with Timbres-Durées, the insertion of Barraqué's name in the notice of the work in question, “Programme de Fin d'Année”, leads us to believe that he had assisted Jolivet; in reality, this study does not seem ever to have existed and ceases to appear in later programs. On the other hand, another document announcing two concerts describes “a work realized by Pierre Henry in collaboration with Olivier Messiaen”. It is therefore interesting to examine the working relationship between these two composers, Henry and Messiaen.
The work was presented again on June 16th, 1953 at the Centres d’études radiophoniques in Paris in a concert entitled “Son et espace”. Again, the operators of the spatialization, P. Schaeffer and P. Henry, were in command of the “relief desk” (pupitre de relief).
This communication will discuss the work Timbres-Durées using paradigmatic analysis and formal segmentation. It will also attempt to situate the work in the immediate context of other works of musique concrète, as well as the role played by P. Henry. Finally, it will raise the philological question of the authenticity of the sources, since the version presented at the premiere has not yet been faithfully reproduced. Musicologists are left with later realizations assembled from the original tapes, which nevertheless give rise to doubts about the available sonic “texts”. It appears that what we sometimes call music “of fixed sounds” must, like other musicological sources, be submitted to genetic research, the only method capable of discerning the version closest to what the composer chose to compose more than fifty years ago.
marc.battier@paris-sorbonne.fr
Olivier Baudouin - Un exemple d’analyse à partir d’une partition codée : Stria de John Chowning
Olivier Baudouin
MINT-OMF, Université Paris Sorbonne
Starting from the principle that analysis should lead first to an understanding and then to a practice of compositional techniques, we reconstructed the source code of Stria, an emblematic work of digitally synthesized music composed by John Chowning between 1972 and 1977, in order to allow a better understanding of the mechanisms the composer set in play. We thus transcribed the obsolete original code into current languages and designed a graphical interface that makes it possible to reproduce the piece in sound, or even to create variants of it. Beyond the question of preserving this particular heritage, we want to show that the analysis of music composed by means of computer programs cannot ignore the intimate details of its technical elaboration without missing compositional ideas generated by those very programs.
As a preamble, we will review the different types of analysis currently practised in the field of electroacoustic music in response to the absence of a score. In particular, we will address the construction of graphic representations from sonograms (using tools such as the Acousmographe developed at the GRM, for example) and the study of the traces left by the composer: sketches, diffusion diagrams, commentaries. We will then present Stria and its sources, supplied by Chowning, and the reconstruction work that allows us to restore the piece today. Finally, we will discuss the concept of faktura, or meta-facture, as reformulated by Marc Battier for electroacoustic music, illustrating our point with examples drawn directly from the “score-program” of Stria.
This investigation opens up a number of questions. It shows the need to train specialists in the analysis of works that make use of digital sound synthesis, organized around appropriate methods. It raises anew the problem of the relationship between the “composition of the sound itself” (the micro-structure), the form (the macro-structure) and the musical intention (the meta-structure). Finally, it underlines the influence of extra-musical techniques and languages on musical ideas, and hence the debate over the supposed subjection of music to science.
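Stria is known for its use of frequency-modulation synthesis with frequency relations derived from the golden mean. By way of illustration only, the following minimal Python sketch renders a single FM event with a golden-ratio carrier:modulator relation; it is a toy under stated assumptions, not Chowning's reconstructed code, and all names, durations and envelope values are invented for this example.
# Minimal FM-synthesis sketch in the spirit of Stria's sound world (golden-ratio
# frequency relations). Illustrative toy only, NOT Chowning's reconstructed source.
import math
import struct
import wave

SR = 44100
PHI = (1 + 5 ** 0.5) / 2          # golden mean, used here as carrier:modulator ratio

def fm_event(f_carrier=261.6, ratio=PHI, index=3.0, dur=2.0):
    """Return one FM tone: y(t) = env(t) * sin(2*pi*fc*t + I*sin(2*pi*fm*t))."""
    f_mod = f_carrier * ratio
    samples = []
    for i in range(int(SR * dur)):
        t = i / SR
        env = min(t / 0.2, 1.0) * (1.0 - t / dur)      # simple attack/decay envelope
        samples.append(env * math.sin(2 * math.pi * f_carrier * t
                                      + index * math.sin(2 * math.pi * f_mod * t)))
    return samples

with wave.open("fm_event.wav", "w") as out:
    out.setparams((1, 2, SR, 0, "NONE", "not compressed"))
    out.writeframes(b"".join(struct.pack("<h", int(32767 * s)) for s in fm_event()))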
Alexander Bennett
School of Music, University of Auckland (New Zealand)
Since its inception, electroacoustic music has remained in the realm of the avant-garde, failing to appeal to a wide audience. A contributing factor to the restricted appreciation of the medium could be its focus on predominantly abstract sonic material, which has the potential to alienate listeners. It is theorised that, amongst abstract sonic material, objects with natural, referential and/or human characteristics act as a “way in”, or, as Landy (2006) suggests, “something to hold onto” whilst appreciating electroacoustic music. The theory, aptly named the “key sound phenomenon”, concerns those sounds that are pivotal symbols and structural signifiers within a work. The purpose of the research is to develop a construct that may be of use in the process of composing electroacoustic music with more effective structural elements and greater communicability.
To investigate the phenomenon and provide valid data on which a useful construct could be based, an empirical study was undertaken using Leigh Landy's listening test methodology from the Intention/Reception project (2006). Designed to compare the accessibility of electroacoustic music impartially, the study used three test groups of “inexperienced”, “experienced” and “highly experienced” listeners. Two works with contrasting musical discourse were played to the groups: Denis Smalley's Pentes, consisting of highly abstract material, and John Cousins's The Quarter, displaying a mixture of highly referential and abstract sonic objects. The directed questionnaires were designed to gather information regarding communicability, in particular identifying the sounds that were structural or symbolic “keys” to the overall meaning of each work. After collating results from the listening tests it was observed that The Quarter communicated more successfully than Pentes. This is not to say that Pentes is any less of a work than The Quarter; rather, the language used in the latter piece was simply easier to digest for the majority of the listeners tested. One hundred percent of listeners from all three groups were able to identify at least two referential sounds in The Quarter: the human voice and the inclusion of a short excerpt of popular music. It was found during group discussions that these sounds carried the data that contributed irrefutably to the narrative and overall meaning of the work. By contrast, neither the inexperienced group nor the experienced group was able to recognise any of the sounds within Pentes, with only half of the highly experienced group identifying some possibilities for the sounding objects.
In further response to The Quarter, sixty-six percent of the inexperienced listeners noted the more abstract material as having an emotive effect. In particular, the “bass boom” suggested a “serious” or “brooding” subject matter. All of the experienced and highly experienced listeners not only felt this sound had a menacing effect but also recognised its “episodic” occurrence and contribution to structure. By contrast, sixty-six percent of all listeners felt that Pentes conveyed no emotion, with one experienced listener responding: “I felt no emotional connection with the work at all”.
These results could suggest that the smaller number of identifiable sounds within the Smalley work was the reason for its apparent lack of structure and communication. However, the existence of key sounds was not discarded. From subsequent group discussions it was found that thirty-three percent of the inexperienced listeners were able to describe the sonic images they experienced as well as some possible meaning or emotion that these portrayed. Hence, key sounds were present, but on a more subtle or subconscious level than those of the Cousins work. The outcome of this study has been the development of the “key sound gradient”. This construct attempts to illustrate the communicative power and accessibility of different types of sounds along the abstract/referential continuum. The model is simply intended to assist electroacoustic composers in developing their sonic palette and to aid in the identification and inclusion of “key sounds” that may improve communication with a wider audience.
Sébastien Béranger - Du maniement de l’outil à la virtuosité de son utilisation...
Sébastien Béranger
La Muse en Circuit, Centre national de création musicale
The notion of experimentation is an unavoidable given of electroacoustic creation. The emergence of unprecedented musical forms involves a double stake: aesthetic, of course, but also technological, which betrays a kind of latent complex with respect to instrumental writing. It must be acknowledged that the composer “of the written score” (unlike the acoustician) takes into account the possibilities and restrictions of his instrumentarium without this entailing any particular technological development. This constant search for novelty may indeed be one of the main reasons for the lack of methodology and analysis that characterizes our musical practices. And these difficulties in defining an analytical approach to our music resurface in the composer's creative processes through the dialectic established between methodology and experimentation. Here arises the question of the composer's virtuosity in handling his tools. Is the tool merely an “interface” between the composer and his music, or is it, on the contrary, a device to be taken into account during the creative process, which implies an indispensable virtuosity? What of the role of the musical assistant? Does he take part in the creative process, or is he merely a mediator?
This type of questioning is recurrent and cannot be reduced to such an exclusive dialectic, but the consequences may be numerous: is the need for experimentation and for the unheard-of that characterizes our music a function of the composer's choices, or of the technical possibilities of the equipment and of the assistant? Does the composer take part in a work we might describe as collective? Is the context of experimentation and technological transgression not being overtaken by the need for a new compositional virtuosity?
sebastien.beranger@alamuse.com
Marie-Hélène Bernard - Le développement de la musique électroacoustique en Chine continentale
Marie-Hélène Bernard
Université Paris Sorbonne, PLM
Electroacoustic music took root in mainland China so recently that we lack the distance needed to write its history. Recall that the first studio was founded in 1986 by Chen Yuanlin after a study trip to the United States. In 1993, Zhang Xiaofu, back from a stay in France, founded a new structure, the CEMC, with an aesthetic orientation closer to the tradition of French musique concrète. In its wake, studios were created in other Chinese conservatories, often run by composers trained in France (such as Xu Shuya or An Chengbi). Foreign countries, France in particular, thus played a decisive role in the development of this music.
The future may nevertheless see the emergence, as in the global economic sphere, of struggles for influence that could have aesthetic repercussions. But the main questions concerning the future revolve around what a properly Chinese electroacoustic music might be: what cultural specificities are likely to shape the elaboration of works, and what degree of hybridization can be reached?
These questions, which Chinese composers working in the instrumental sphere had to confront in the 1980s, are already arising with some acuteness. The use of traditional instruments in certain mixed pieces is one of the paths being explored. More fundamentally, one may wonder whether composers will take up a specific approach to sound inspired by nature, as developed in the Chinese classical tradition, or whether this hypothesis is merely a Western cliché possibly taken up, as if by a boomerang effect, in China itself. But as with the instrumental composers, it may also be through a certain claim to spontaneity, or through different conceptions of musical time, that Chinese electroacoustic composers will come to distinguish themselves from their Western counterparts.
Leslie Blasius - Persepolis Revisited
Leslie Blasius
University of Wisconsin-Madison (USA)
Iannis Xenakis's Persepolis (1972) evokes the ruins of the Achaemenian capital, but also the late Shah's dreams of a modernity mobilized from these ruins. It dreams us into a past that overwhelms us with its presence. It becomes present, freezing the listener in a claustrophobic space. As a revivification of that which is lost, it denies the possibility of itself as something in the past.
As Benjamin noted, the work of art is constructed of ruins and is itself destined to aesthetic ruination. It has, in effect, a “natural history”. But what sort of natural history is the destiny of the electronic work, one which is frozen, unchanging? Such questions come into play in a new release of Persepolis, one accompanied by nine remixes. These have in common what we might think of as an “aesthetic of ruination”. In each, we hear Persepolis from the standpoint of an unspecified future in which the original has been distorted, degraded, damaged.
Such an aesthetic attempts, in fact, to force an experiential history on electroacoustic music. Yet it can only do so through an evocation of technological obsolescence. Each remix comes across as a failed recovery of Xenakis's original. That this mimes attempts to write a proper history of electroacoustic music (should such a history be one of works or of technologies?) is telling. But a different angle comes to mind. Persepolis ostensibly recaptures the sonic texture lost in the millennia. We are there, in a present which is in fact a different time. Yet Persepolis could as easily be experienced as prophecy (if only ironically, as a prophecy of the events of 1978). I would argue that it is this sense of the work as prophecy that the aesthetic of ruination attempts to forestall.
Tatjana Böhme-Mehner
Leipzig (Germany)
In 1996, François Delalande showed that the history of electroacoustic music could be seen both as continuous with the Baroque period and as a rupture, with the appearance of recording as the new privileged tool. We therefore propose a history of electroacoustic music viewed from the angle of rupture, with its sociological and theoretical implications.
The heart of the discussion rests on the “misunderstandings” of this history, through concrete observations tied to the environment and through more theoretical remarks. We will notably discuss the mutual views between Schaeffer's musique concrète and the German elektronische Musik. Then, after examining the elements of continuity between Baroque music and electroacoustic music, we will draw a renewed portrait of the composer and his work by means of the rupture/continuity pairing.
Hannah Bosma
Muziek Centrum Nederland, Amsterdam
Electroacoustic music has always had a difficult relationship with concert performance. In a “loudspeaker concert” of acousmatic or “tape” music, the lack of live musicians is problematic for many, especially in a classical concert situation. Or, as the Dutch composer Ton Bruynèl (1934-1998) once said: “It's nice but still you sit there for two hours looking at your shoelaces.” As a consequence, Bruynèl decided to compose almost exclusively for the combination of sound tracks (tape) and acoustic instruments/voices.
In general, however, compositions for live instruments/voice and tape are considered problematic too: as a mismatched combination of electronic sound coming from static loudspeakers with acoustic sound from three-dimensional instrumental/human sources, or as a mechanical timeline of the prerecorded medium that constrains the live musical timing of the performer. Several forms of “live electronics” have been proposed to solve these problems, but compositions for acoustic instruments/voice and tape/CD/prerecorded medium are still being performed, nowadays more than ever. Some musicians even prefer to perform with tape instead of live electronics; for this reason, composers as diverse as Ton Bruynèl and Anne La Berge made prerecorded versions of compositions for instrument(s) and live electronics. On the one hand, with computer technology the “tape” (prerecorded) part can now be much more flexible, allowing the musician more freedom of timing; on the other hand, some musicians ask for clicktracks, the ultimate mechanical “timing machine”. Bruynèl's notation of timing points towards a different notion of the performer: the musician as listener. Not only the instrumental part but also the prerecorded part has to be “rehearsed” by the performer. In Bruynèl's compositions, the tape part is not an accompaniment to the instrument(s): the tape part is the main part, which has to be “filled in” by the instruments. Thus, on stage, the musician functions as an exemplary performer of active listening.
For NEAR/Donemus, I was extensively involved with the publication and digitisation of Ton Bruynèl's oeuvre, and with the release of a 6-CD box of Bruynèl's music (with extensive information, texts and scores on a CD-ROM track). In May 2008, a festival around the work of Ton Bruynèl will take place in the Netherlands, and a DVD with his video opera Non sono un uccello and a documentary will be released. With this paper, I want to reflect on the most important aspect of Ton Bruynèl's work: the combination of “tape” and instruments.
I discuss some of the issues and problems related to this mixed electroacoustic music, with examples from compositions by Ton Bruynèl and other composers, and from the performance practices of contemporary musicians. I argue that these problems bring to the surface some implicit issues of Western classical concert music and of electroacoustic music in general, such as: the role of the musician (as co-creator, as machine, as listener, as creative manager); the role of the audio technician; the liveness or mechanical quality of live performance; the theatrical and visual quality of live performance; the activity of the listener (ears only? eyes open or closed? hearing a composition just once, or comparing different instances/interpretations?); and the musical importance of sound quality.
We could say that in this “age of reproduction” all music is in fact electroacoustic music, since recorded music is so pervasive in all aspects of musical life: it functions as a normative standard and is the main form of distribution. Indeed, in Ton Bruynèl's oeuvre there is a sliding scale between works for tape only and works for instruments and tape, with several ambiguous compositions, such as Study for piano and sound tracks (1959), Elegy (1972) for soundtracks with or without voice, Chicharras (1985) and Non sono un uccello (1998), among others. Ton Bruynèl's music is also hybrid sonically, as an extension of instrumental sound and of environmental sound, and artistically, inspired by and combined with literature and the visual arts. Such an apparently “conservative” subgenre of electroacoustic music, the combination of instruments and tape, turns out to be “cyborg music” avant la lettre, unsettling musical identities.
Bruno Bossis
MINT/OMF, Université Paris-Sorbonne Paris-IV / Université de Haute-Bretagne Rennes 2
Traditionally, Western art music classifies instruments into three families: strings, winds and percussion. In 1914, Sachs and von Hornbostel defined more universal categories, proposing a classification that divides instruments into four families according to the element whose vibration produces the sound (chordophones, membranophones, aerophones, idiophones).
Both types of classification rest on material and acoustic considerations. Yet these broad principles are fundamentally called into question by electronic instrument building: gesture and sound disconnected from any generating acoustic process, human-machine interfaces, augmented virtuosity, memory capacity, a generalized continuum, modularity, programmable behaviour...
Among these characteristics, one of the most remarkable is certainly malleability, a dynamic plasticity. The responses to the performer's actions can be programmed by the composer and can evolve over the course of the work. For a long time composers chose an instrument and a playing technique whose more or less standardized material and sonic characteristics they knew in advance. They now have modifiable tools at their disposal, with no predetermined acoustic classification.
The notion of “composable behaviour”, which gives rise to a dynamically adaptive instrument building, first needs to be defined precisely. The consequences of such an instrumental apparatus can then be approached by considering the positions of the composer, the performer and the listener. The composer's craft evolves and is transformed by the new possibility of “writing the instrument” and no longer only “writing for the instrument”. For his part, the performer is confronted with an instrument whose behaviour is no longer fixed solely by its construction but depends on the act of composition. Finally, seeing instruments on stage for which the relationship between gesture and sound emission is shifting unsettles listening habits. Faced with such changes, musicology must reflect on concepts suited to this instrumentarium and consider instrumental and aesthetic classifications that take these characteristics into account.
Romain Bricout - Les incarnations du sampler au XXe siècle : l’avènement du musicien-luthier
Romain Bricout
Centre d’Étude des Arts Contemporains, Université de Lille 3
1948: the first journal of musique concrète. Pierre Schaeffer discovers musique concrète through the experiment of the cut bell. Three days later he imagines himself “surrounded by twelve dozen turntables, each with one note”, then the next day he imagines “an organ whose keys would each correspond to a turntable whose platter could be loaded at will with the appropriate discs”. Schaeffer's vision is that of a primitive form of sampler, a vision he would realize with the analogue technologies of his time by creating, with the help of Jacques Poullin, the Phonogène.
Throughout the twentieth century, one and the same idea inhabited the minds of musicians and creators from quite different eras. Between Luigi Russolo (the publication of The Art of Noises and then the creation of the Russolophone), Pierre Schaeffer (the discovery of musique concrète and the construction of the various models of the Phonogène), the invention of the Mellotron and the rise of progressive rock in the 1970s, and finally the democratization of the “modern” sampler and the explosion of popular electronic music, only the technical conditions of realization differ.
Through the material incarnations this same idea has taken on, and through the many aesthetic destinies they have engendered, we propose to reformulate the question of the tool's influence by asking instead to what extent the creation of a specific instrument should become one of the principal activities of musical invention. More than any other medium, the sampler seems to have been one of the most powerful vectors for the diffusion of Pierre Schaeffer's thought. For if the tool that is the musical instrument has “making” rather than “transmitting” as its primary vocation, the instrument turns out to be a medium of the very mode of thought that conceived it, a medium whose invisibility attests to its extreme effectiveness.
Michael T. Bullock
Rensselaer Polytechnic Institute - Department of the Arts, Troy (USA)
The growth of a consumer-electronics culture in the home – especially audio electronics since World War II, and Internet technology since circa 1990 – has led to a radical reorientation of the means and site of music production. Broadcast and recording technology gave musicians and non-musicians alike the means to create and embody sound when and where they desired; it also raised awareness of noise as idiomatic to audio technology and to recordings. Audio technologies became instrumentalized when they came to be recognized as sounding bodies rather than simply archives of previous sound. Eventually, this elevation of noise and instrumentalization of electronics were reapplied to extended techniques on traditional instruments, and developed into a new form of musical engagement: self-idiomatic improvised music.
I make a distinction among four general categories of extended instrumental use in modern music and sound. The first three are: extended technique on “traditional” instruments; the instrumentalization of audio electronics; and the creation of entirely new musical instruments (for this paper we will focus on electronic and electro-acoustic instruments). The fourth category cuts across the other three and addresses a radical realignment of the site of music and sound: the creation of sound environments.
Ian Burleigh and Friedemann Sallis
University of Calgary (Canada)
This paper reports on the philological preparation and the acoustic/theoretical basis on which a digital recording of Luigi Nono’s A Pierre Dell’azzurro silenzio, inquietum. A più cori (1985) for bass flute in G, contrabass clarinet in B-flat and live electronics will be made. This research project has two primary goals:
1. The production of a multi-track digital recording, capable of reliably reproducing the spatial reality of the listening experience. The purpose of this recording is to provide data for the second stage of this project.
2. The study and interpretation of the acquired datasets through which we hope to better understand the sound field created by the performance of the work. The data will be analysed using traditional musicological methods, as well as with newer computational techniques available through digital technology.
The report will be divided into two parts, presented by two speakers. The first will focus on a concise presentation of the source material pertaining to the composition of A Pierre that is conserved at the Archivio Luigi Nono (Venice). A survey of the sources will be presented, and information gleaned from this material will be used to develop a more precise idea of the work than can be obtained from the published score alone, especially with regard to the sound field that a performance of the work generates. Of particular interest for this project are Nono's concepts of sound mobility (il suono mobile) and dynamic space (spazi interdinamizzati). The second part of the paper will present the numerous and interrelated sound sources involved in a performance of A Pierre, and the recording methods that are best suited to the musical concepts underlying this work: i.e. pragmatic strategies that capture the diffusion of the sound in space and time.
C
Joel Chadabe - Schaeffer in the United States
Joel Chadabe
Electronic Music Foundation
What has been the influence of Pierre Schaeffer's ideas in the United States? First, more generally, how do we understand the connections, or lack of them, from one instance of an idea to a later, further evolved instance of a similar idea? We understand the lineage from Schoenberg's so-called 12-tone system to Boulez's serialism of the 1950s, for example, because both Schoenberg's and Boulez's methods were based on clearly related procedures, because Boulez related his work to Schoenberg's in “Schoenberg is Dead”, and because the works of both composers were based on the same paradigm of items-and-arrangements. Is there an equivalent lineage between Pierre Schaeffer's concept of sound, i.e. as an object characterized by its internal morphological evolution, and concepts of sound in today's music in the United States? This author thinks not. Even in John Cage's Williams Mix, composed in 1952, contemporaneous with Schaeffer and based on a similar paradigm of items-and-arrangements, the focus was on arrangement by random choice, or in other words on underlying process rather than analysis; and the early compositions with tape, computers, and synthesizers in the United States were based on yet different paradigms. In short, Schaeffer's influence in the United States was limited, not because of a specific disagreement with his ideas, but because the basic musical paradigms were different.
Hui-Mei Chen - La musique électroacoustique dans l’enseignement supérieur de Taiwan : son évolution et sa réception
Hui-Mei Chen
MINT/OMF, Université Paris-Sorbonne Paris-IV et Taiwan
Even though the implantation of Western music in Taiwan is recent, dating from the end of the nineteenth century, the country's musicians have nevertheless sought to follow the latest musical trends of the Western world. After the pioneers who experimented with electroacoustic effects as early as the 1960s, it was not until 1988 that the first studio for this music was established, at a scientific university specializing in communication: Chiao Tung University.
In recent years, more optional courses in electroacoustic music have been opened in Taiwanese universities, but they are often limited to learning software. While it is not unusual to see young people handling computer tools for music production with ease, the number of computer-assisted creations remains very small. Some, often not music professionals, are passionate about the intervention of computer technology in music production; others force themselves to learn these techniques in order to “modernize” their working tools. We thus observe the existence of differing conceptions of “electroacoustic music” in Taiwanese higher education.
After a brief historical overview of the introduction of electroacoustic music in Taiwan, this paper will concentrate on twenty years of the music's evolution, taking the case of National Chiao Tung University, where the first electroacoustic music studio was created. We will then address the divergences concerning the conception and reception of electroacoustic music in higher education in order to show its current evolution in Taiwan.
Kyong Mee Choi - Spatial Relationship in Electro-Acoustic Music and Painting
Kyong Mee Choi
College of Performing Arts, Roosevelt University (USA)
This paper compares two systems, Renaissance perspective and the two-channel electro-acoustic music system, in order to explore how the illusion of depth is created in each medium. The two systems are compared through individual parameters and the results of techniques that provide composers or artists with an intuitive mapping scheme. This study does not intend to copy an entire piece of music into a painting, or vice versa; instead, it aims to supply a cohesive explanation of how the two systems create the illusion of depth.
After reviewing the historical background, the major components of Renaissance perspective (linear perspective, separation of planes, and aerial perspective) are discussed. Then the two-channel electro-acoustic music system is examined in conjunction with Renaissance perspective. The Inverse Size/Distance Law in linear perspective and the Inverse Square Law in sound show a strong correlation between the size of an object in a painting and the intensity of a sound object in the stereophonic system. The technique of reverberation is specifically discussed in terms of creating vertical sound planes in space. Filtering is the major technique examined for creating atmospheric perspective in the two-channel sound system. Color perspective is examined through timbre space, a conceptual space in which each axis parameter is measurable. Through these studies an intuitive mapping scheme, comprising individual parameters and their values, is applied to actual works in order to convert spatial information from painting to music and vice versa. Converting spatial information requires three steps: 1) number the order of the objects; 2) analyze the spatial information of the objects based on the intuitive mapping scheme; 3) arrange the objects with their spatial information in the new medium. In addition to this application, the different temporalities of the two media are discussed to see how a specific mapping scheme can be applied to each particular medium.
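As a concrete illustration of the kind of mapping such a scheme implies, the following Python sketch converts a painted object's apparent size and horizontal position into a relative distance, an amplitude that follows the inverse square law, and a constant-power stereo pan. The function name and all values are hypothetical, invented for this example; this is not the paper's exact scheme.
import math

def object_to_stereo(apparent_size, reference_size, x_position):
    """Hypothetical mapping sketch (not the paper's exact scheme).
    - linear perspective: apparent size is proportional to 1/distance,
      so relative distance = reference_size / apparent_size
    - inverse square law: intensity ~ 1/distance**2, hence amplitude ~ 1/distance
    - x_position in [0, 1] (left..right) mapped to a constant-power stereo pan
    """
    distance = reference_size / apparent_size
    amplitude = 1.0 / distance                 # relative to an object at distance 1
    theta = x_position * math.pi / 2           # 0 = hard left, pi/2 = hard right
    return {
        "distance": round(distance, 3),
        "gain_left": round(amplitude * math.cos(theta), 3),
        "gain_right": round(amplitude * math.sin(theta), 3),
    }

# An object painted at half the reference size, slightly right of centre:
print(object_to_stereo(apparent_size=0.5, reference_size=1.0, x_position=0.6))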
Andrea Cohen - Musique concrète et art radiophonique : allers-retours
Andrea Cohen
Institute of Creative Technologies, De Montfort University, Leicester (UK)
Musique concrète owes its birth to the research that Pierre Schaeffer undertook within the Studio d'essai for the development of radiophonic art. This historical proximity may explain why composers of musique concrète, of electroacoustic or of instrumental music have engaged with radiophonic art, practising it either regularly or episodically.
My inquiry into the composer's singular relationship with radiophonic art takes up, in its own way, the central idea of this encounter, namely the relationship between sound and music, but it shifts the question to that of the relationship between meaning and abstraction. When a composer decides to create a radiophonic work, he knows that his work falls within a domain that has its own laws, even if these can be shaken up by the act of creation. The medium imposes its rules, both internal and external: on the one hand, rules of a formal order concerning the use of specific materials (speech, noise) and their organization within a particular dramaturgy; on the other, rules linked to the constraints of the medium itself, resting mainly on production, broadcasting and listening.
An essential point is whether the composer defines radiophonic art as an autonomous acoustic art or, on the contrary, as a musical “genre”. Some composers indeed regard radiophonic language as self-referential, music constituting only one of the sound materials used, on the same footing as speech and noise. The radiophonic work then appears as a sound work that the composer distinguishes from his musical output. To delimit this concept of the “sound work”, we can set out three approaches here:
1. Radio is seen as a space for meditation on one's musical production. While the experimental time proper to the composer's work is solitary, the radiophonic work is conceived as a moment of shared reflection.
2. Radio is experienced as a space for broadening musical activity towards other fields such as literature or theatre.
3. Radio can, finally, be regarded as a means of communication through which the creator can express intellectual curiosity or civic engagement.
For other composers, by contrast, radiophonic art constitutes a musical genre in its own right; the medium truly becomes a site of creation, one that presupposes questioning the very conception of the musical. The discourse adopted includes any and all material, and the composer takes charge of elaborating the different sound elements of his work (composing the music, writing the text) as well as of all the stages of its realization. He may call on collaborators (writers, poets, actors, sound-effects artists, technicians) while reserving for himself the musical composition proper and the construction of the whole. Finally, he uses radio, which thereby recovers its original role, as a space for broadcasting. In the end, the radiophonic work presents itself either as a stage in the compositional work or as one of the possible versions of a work that could be described as “open”, at least in Umberto Eco's sense.
In any case, composers develop fruitful exchanges and a singular approach to radiophonic art, in which their relationship to sound plays a leading role. These fields thus nourish one another. If radiophonic art is indebted to musique concrète today, it is thanks to the originality and richness of the radiophonic works created by composers, and to their inventiveness.
Arshia Cont and Marco Stroppa - The Writing of Interactive Electronic Music
Arshia Cont and Marco Stroppa
Music Department, UCSD, La Jolla (USA) / IMTR, IRCAM / Hochschule für Musik und darstellende Kunst, Stuttgart
Interest in interaction between live musician(s) and electronic music dates back to early experiments in the mixed instrumental and electronic repertoire. Whether the electronic part is fixed or generated live, the composer and the musician are involved with the interactive aspects of the piece both during writing and in performance. Since the 1980s, score following techniques have been proposed that enable real-time alignment of a musician's performance to a pre-written score in order to synchronize the electronic score with the performance or to serve as an accompaniment agent. Existing score following systems are mostly limited to simple symbolic representations of the score in Western classical music notation, limiting their application to classically notated pieces. In addition, existing systems are separated from the interactive components, serving only as real-time audio-to-score synchronizers. In this presentation, we introduce Antescofo, an anticipatory score following system. In its basic use, Antescofo serves as a classical score follower. In addition, Antescofo has been designed to address the following extensions, handling both flexible score scripting and live interaction: (1) to enable concurrent, flexible and user-defined representations of the audio stream in the score; (2) to concurrently represent and handle different time-scales, both in scoring and in recognition, and to enable a flexible writing of time; (3) to provide a score language that handles interaction between live performance and electronics via the score. Using Antescofo's score language, the instrumental and electronic scores exist together, and the user can easily switch between various notions of time and representation within the score, which will then be used during live performance. We present the rationale behind the design and concept of the anticipatory follower and will demonstrate the system both in its scoring aspect (using both symbolic and continuous (audio/gesture) representations) and in live interaction.
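To illustrate the basic score-following idea described above, the following toy Python sketch aligns a stream of detected pitches to a list of score events and fires the electronic actions attached to them. This is an illustrative assumption-based sketch only, not Antescofo's anticipatory algorithm (which models tempo and performs probabilistic real-time alignment); all events and action names are invented.
# Toy sketch of score following: align incoming performance events to a pre-written
# score and trigger the electronic actions attached to each event. NOT Antescofo.

score = [                      # (expected MIDI pitch, electronic action)
    (60, "start granular layer"),
    (64, "open filter sweep"),
    (67, "trigger sample A"),
]

def follow(detected_pitches):
    """Advance through the score whenever a detected pitch matches the next event."""
    position = 0
    for pitch in detected_pitches:
        if position < len(score) and pitch == score[position][0]:
            print(f"score event {position}: {score[position][1]}")
            position += 1
        # unmatched pitches (ornaments, errors) are simply ignored in this sketch

# A performance containing one extra note between the first two score events:
follow([60, 62, 64, 67])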
cont@ircam.fr
stroppa@mh-stuttgart.de
John Coulter - Electroacoustic Composition as Research
John Coulter
National Institute of Creative Arts and Industries, University of Auckland (New Zealand)
A common problem facing composer-researchers working in the field of electroacoustic music today is the lack of domain-specific procedural guidelines for undertaking practice-led research. Important questions concerning sonic phenomena are too often overshadowed or transformed by incongruous research paradigms: theoretical methods that have little relevance to the composing-listening process. It is the innate conviction of every composer that the creative process yields the greatest of discoveries (we have evidence of this from the pioneers of the domain), yet the research possibilities have been slow to present themselves outside the realm of the subjective. The paper's title, “Electroacoustic Composition as Research”, outlines the initial research problem by implying that the creative process itself can act as a research methodology. The title does not mean to imply that electroacoustic music should replace literature-based research outputs; on the contrary, the research postulates that dual outputs are preferable. The practice-as-research paradigm is the central focus of the study, and it is examined through a series of investigations including analysis of the creative process, comparison with action research models, elaboration using composition process diaries, and group evaluation. The primary research outcome is the proposal of a staged process model that allows certain aspects of composition and research to be undertaken concurrently: a model that does not lay claim to rigour through a prescribed methodology, but one that draws its authenticity from reflective practice.
Pierre Couprie - Utiliser le logiciel iAnalyse pour analyser la musique électroacoustique
Pierre Couprie
MINT/OMF, Université Paris-Sorbonne Paris-IV
iAnalyse is a software tool for assisting musical analysis, available free of charge for the Macintosh platform at http://web.mac.com/pierre.couprie/Logiciels/iAnalyse.html. It offers various tools to help the researcher or the musician in musical analyses or in presentations of works.
iAnalyse was designed around two ideas: to offer a genuine software aid for musical analysis, and to decouple the analysis data from their representation. This second idea is essential because it makes it possible to create several types of graphics from a single analysis. Its architecture is organized on four levels:
1. An audio and video player offering the usual playback controls. iAnalyse can work with audio or video files, allowing the analysis of film music.
2. A graphic plane on which slides or a sonogram are displayed. The sequence of slides is synchronized with the temporal unfolding of the sound file. Each slide can contain a background image, such as the page of a score.
3. Parts of the slides and of the sonogram can be highlighted with annotations (transparent graphic shapes) and musical functions (automatic annotations using elements typically employed in musical analysis). The appearance and disappearance of these annotations and musical functions are likewise synchronized with the temporal unfolding of the sound file. It is also possible to create a cursor that moves across the score in time with the music.
4. All the analysis data (cursor, annotations, musical functions) can give rise to graphics in order to follow one or more parameters, to create a formal diagram or a synoptic representation, and so on. During the presentation, I will show the usefulness of this software for the analysis of electroacoustic works.
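By way of illustration of this decoupling of analysis data from their graphic representation, the following Python sketch models a time-synchronised annotation layer. It is a hypothetical data model invented for this example, not iAnalyse's actual code or file format.
# Hypothetical data model for a time-synchronised annotation layer, illustrating the
# idea of keeping analysis data separate from any particular rendering. NOT iAnalyse.
from dataclasses import dataclass

@dataclass
class Annotation:
    label: str      # e.g. "iterative texture", "formal section B"
    start: float    # appearance time, in seconds of the audio file
    end: float      # disappearance time

timeline = [
    Annotation("slide 1 (score page 1)", 0.0, 45.0),
    Annotation("iterative texture", 12.5, 31.0),
    Annotation("formal section B", 45.0, 90.0),
]

def visible_at(t):
    """Return the annotations a renderer should draw when playback reaches time t."""
    return [a.label for a in timeline if a.start <= t < a.end]

print(visible_at(20.0))   # -> ['slide 1 (score page 1)', 'iterative texture']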
D
John Dack - Sons Excentriques, Sons Équilibrés and the “Sublime”
John Dack
Lansdown Centre for Electronic Art, Middlesex University (UK)
In Pierre Schaeffer's diagram of his typological classes, the “Tableau récapitulatif de la typologie” (TARTYP), the sons excentriques (as their name suggests) occupy the edges of the sound universe. Sons excentriques are the least “instrumental” in the typology. This both challenges the listener's perception and provides a reason for composers to use them. Unlike the sons équilibrés they are long in duration, and their spectral constitution is often unpredictable and complex. Despite (or because of) their lack of resemblance to sons équilibrés, they form a major part of many electroacoustic composers' vocabulary of sounds. It is often difficult to combine sons excentriques and sons équilibrés in compositions – a distinction made by Schaeffer in his formulation of a “musical” and a “plastic” music. Through an analysis of sections of selected acousmatic works I will investigate this relationship and suggest how structures in one language might corroborate or subvert those in the other.
In addition to the task of description outlined above, I will also suggest a relationship between sons excentriques and the concept of the “sublime” in Immanuel Kant's Critique of the Power of Judgment. Three connections seem of immediate relevance. Firstly, in the Kantian sublime we are “overwhelmed” or “awestruck” by an object's size and “power”. Secondly, Kant's sublime is related to the body: size and power relate to our bodily dimensions, and it is through bodily perception that we experience the world. Thirdly, in the “Copernican revolution” claimed by Kant, appreciation shifts from the object to the subjective reaction of the perceiver. Thus, he claimed, the “sublime” is not in a (sound) object as such, but in its perception. This is also reflected in Schaeffer's insistence on subjective experience.
Ricardo Dal Farra
Electronic Arts Experimenting and Research Centre, National University of Tres de Febrero, Buenos Aires
Paraphrasing Attali, art is a tool for the creation or consolidation of a community. Science is the approved western way to look for knowledge and understanding, and technology is the practical application of that knowledge to solve problems or to accomplish our desires.
Electroacoustic music merges art, science and new technologies in a way that was impossible to imagine before, except for a few visionaries. Strongly connected to the development of a musical practice using electronic media are the technologies available to support it. The compositional techniques and styles that the pioneer composers applied or developed with the equipment available in each place were influential models received by later generations arriving at the first studios (in France, Germany, the United States).
The possibility of electronically generating and modifying sounds pushed the boundaries of composition and resulted in a closer collaboration between art and technology, producing a positive feedback loop. As an example: the Analog Graphic Converter, invented by Fernando von Reichenbach in the mid-1960s, was used to convert graphic scores from a paper roll into electronic control signals adapted for musical use with analog instruments; this device made it possible to realize Analogías Paraboloides, a tape piece with special design characteristics by Pedro Caryevschi.
Creative loops between technology and art. Creative loops between sound generation/processing and music composition.
The presentation at EMS08 includes sound, music, graphic and audiovisual samples of these rich associations, full of mutual influences at multiple levels. Music, as a sound-based art form, is taking part in the definition of our future.
Francis Dhomont - Vers un classicisme de l’écriture acousmatique
Francis Dhomont
Composer
The opinion that a work's novelty guarantees its quality is often expressed in electroacoustic music circles and sometimes constitutes the sole criterion of value: invention is branded academicism as soon as it no longer surprises.
Yet does the implementation of stable models not rather contribute to the foundations of a new classicism? Classical eras constitute the successful culmination of periods of research, the maturity of an art, and they are not generally renowned for their mediocrity. So music has gone since its origins, alternating between the break with established practices (abandonments and research) and the affirmation of new theories (stability and classicism). Only the length of the periods of stability varies. In our age of overconsumption, it is no surprise that novelties are short-lived and that “classicism” is often wrongly confused with “archaic thinking”. Hence the endlessly renewed debate over academicism and the avant-garde.
Yet the coherence of classicism does not imply the negation of research; rather, it ignores the compulsive denial of earlier discoveries, the repetitive tabula rasa. It offers a permanence of syntax that allows the musical signified to express itself fully through a signifier that ceases to be constantly called into question. It is no doubt time for the electroacoustic composer to stop regarding experimentation as an end in itself: there is a time for replacing old models and a time for proving, through strong works, the relevance of the new ones. The acousmatic modality seems ready to begin this process.
I am aware of the rejection that a critical approach to technological prowess can provoke today. But perhaps this is the necessary condition for returning to the essence of the musical and thus reaching a wider audience by offering it original works expressed in a language that is no longer vernacular but common, familiar, intelligible.
Jean-Louis Di Santo - La perception de l’objet sonore : objective ou subjective ?
Jean-Louis Di Santo
SCRIME
Pierre Schaeffer revolutionized the conception of music by relying on the perception and description of the sound object, his approach having been guided by the phenomenology developed notably by Edmund Husserl and Maurice Merleau-Ponty. In order to structure electroacoustic creation, he drew up the famous “TARSOM” and “TARTYP”, which are supposed to emanate directly from the sound world itself.
Since then, in the line of this founding act, composer-musicologists have proposed other approaches to the sound object, sometimes of a very different nature. Some, like Schaeffer, hear only sounds (morphological perceptions), while others also hear meaning (morpho-semantic perceptions). All, however, have composition as their goal, and their approaches seem to condition their styles. This brings us to the following questions: how do we perceive? Is there an objective perception, that is, one turned towards the object, which accounts for all its properties, or are we condemned to a subjective perception, that is, one oriented towards the concerns of the perceiving subject, which takes into account only certain properties and therefore varies from one subject to another? Can we then establish relationships between perception and composition?
Jean-Louis.Di-Santo@wanadoo.fr
Frédéric Dufeu - Real Time and Deferred Time in the Digital Musical Instrument
Frédéric Dufeu
Laboratoire Musique et Image : Analyse et Création, Université Rennes 2
The distinction between real time and deferred time is commonly used to characterise electroacoustic works. While the two expressions are convenient for designating a piece as a whole, a piece is often governed by a more complex relationship between the two temporalities. The aim of this paper is to show how, in works for instrumentalist and real-time electroacoustic production, the programming of the computer environment can lead to the creation of a performance tool within which real time and deferred time coexist.
Since the composer is responsible for developing the computer environment that constitutes the tool intended for the performance of the work, it falls to the composer to determine the balance between real time and deferred time in the digital instrument. Different types of relationship can be defined, from the total absence of deferred time, when the program is an elementary signal process, to the sole presence of samples prepared in the studio, for which real-time execution has only a triggering function. At the scale of a work, the digital instrument is rarely reduced to just one of these configurations. In order to organise its sonic behaviour according to the performer’s gestures on the one hand and the position in the time of the piece on the other, another element belonging to deferred time, of a non-sonic but symbolic order, can be introduced into the environment: the score that allows the performer’s playing to be followed. Finally, the instrument can be configured so as to allow the recording, processing and reiteration of sonic or symbolic information during performance. The elements of deferred time are then created by the interpretation itself.
Thus, depending on the relationship between real time and deferred time that the composer establishes for the performance tool, he or she determines the role of the performer not only in the temporal unfolding of the work but also in the elaboration and actualisation of its sonic and musical materials.
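As a minimal illustration of the coexistence described above (and not of Dufeu’s own environment), the sketch below assumes a toy instrument in which an elementary real-time process, studio-prepared samples triggered live, a symbolic score used for following, and buffers recorded during performance all cohabit; every name and the trivial signal processing are hypothetical.

```python
# Hypothetical sketch: a performance tool mixing real-time and deferred-time
# elements. Names (DigitalInstrument, "trigger"/"record"/"replay") are
# illustrative only and do not reproduce any existing environment.

class DigitalInstrument:
    def __init__(self, score, studio_samples):
        self.score = score                    # deferred time: symbolic score for following
        self.studio_samples = studio_samples  # deferred time: sounds prepared in the studio
        self.live_buffer = []                 # deferred time created during performance
        self.position = 0                     # current index in the score

    def process(self, live_input):
        """Called for each performer gesture (here, one block of input samples)."""
        output = [x * 0.5 for x in live_input]       # real time: elementary signal processing

        event = self.score[self.position]            # score following (symbolic deferred time)
        if event == "trigger" and self.studio_samples:
            output += self.studio_samples.pop(0)     # playback of a prepared sample
        elif event == "record":
            self.live_buffer.append(live_input)      # capture material for later reiteration
        elif event == "replay" and self.live_buffer:
            output += self.live_buffer.pop(0)        # reiterate material recorded live

        self.position = (self.position + 1) % len(self.score)
        return output

# Toy usage: three gestures against a three-event score.
instrument = DigitalInstrument(["trigger", "record", "replay"], [[0.3, 0.2, 0.1]])
for gesture in ([0.5, 0.4], [0.9, 0.8], [0.1, 0.0]):
    print(instrument.process(gesture))
```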
E
Simon Emmerson - Pulse, Meter, Rhythm in Electro-Acoustic Music
Simon Emmerson
Faculty of Humanities, De Montfort University (UK)
At various times in the last sixty years (and increasingly in the last twenty) the introduction of rhythmic (or metrical) identities into electroacoustic music has caused disquiet – sometimes stated in words, sometimes in music.
Part 1: An introductory review of concepts of pulse, rhythm and metre
The paper will start with a summary of some basic definitions of pulse, meter and rhythm. How might memory (short, medium, long) relate to rhythmic time? What kind of sound objects have been used to delineate rhythmic structures?
Part 2: The Sound Object and Time
The presence and absence of pulse/meter within the musical rhetoric of electroacoustic music (broadly defined) will be reviewed. What are the consequences of including or overtly excluding rhythmic motifs?
- Periodicity in early musique concrète (the sillon fermé), the loss of source/cause.
- Periodicity as indicative of machine synchronisation: Trevor Wishart (in the “prison metaphor” in Red Bird) and Karlheinz Stockhausen (in rejecting it altogether pre-Mantra).
- Pulse and meter as indicative of dance: Latin American electroacoustic music of the 1980s; post-dance experimenters within electronica (and later); quotations and references in plunderphonics.
- The ambiguous position of early (tape loop) minimalism (Steve Reich, Terry Riley) and its influence on later dance.
All these contrasting aesthetic views concern “embodiment”. All have a hedonistic (Dionysiac) pull. But in the alternative, more Apollonian depths of “art modernism”, traditional rhythm – indeed any memory of performative action (for example, “midi keyboard gestures” in an acousmatic work) – is banned. Here lies the clash.
Part 3: Unraveling the Clash of Aesthetics
Let us try to relate the aesthetic to the technical:
- How do the sound objects in a rhythmic motif relate? How do the objects group?
- Does a rhythm/metre structure have a “separate” identity – a Gestalt (pattern-structure) abstracted from the time relations?
- Does this distract from the ideal perception and contemplation of the sound object’s attributes?
- Does this detract from – or even mask – the ideal perception of the relation of the sonic objects (in Schaeffer’s terms)?
- Or, on the contrary, might a new idea of sonic object (or group of objects) be the proper attribute to engage?
Examples will be drawn from: Pierre Schaeffer/Pierre Henry, Bernard Parmegiani, Denis Smalley, Javier Alvarez, Trevor Wishart, Robert Normandeau, Barry Truax, Autechre, Squarepusher, Aphex Twin, Scanner, Kaffe Mathews and others.
Julio d’Escriván and Paul Jackson
Dept. of Music and Performing Arts, Anglia Ruskin University, Cambridge (UK)
Plunderphonics, a technique in which the composer borrows music freely from any available musical source, is arguably a direct descendant of Schaeffer’s work in À la recherche d’une musique concrète. John Oswald, who coined the term, said in his presentation at the Wired Society Electro-Acoustic Conference in Toronto in 1985 that “A sampler, in essence a recording, transforming instrument, is simultaneously a documenting device and a creative device”.
In defining the sampler as a documenting device, Oswald introduces the possibility that by juxtaposing plundered material, new knowledge may be derived from the relationship between the chosen musical extracts; one which is “documentary” in nature and which further illuminates the music it accompanies. Plundered sound objects can thus shed light on the original musical context in which they are quoted, simply by being heard. They may provide a catalyst for an intuitive understanding of the original piece, and create new aural meanings.
By virtue of being superimposed on or juxtaposed with a piece to which they do not originally belong, they function in a way reminiscent of hyperlinks or even tags on web documents, as they are placed arbitrarily with the intention of furthering an idea, extending a concept or signifying a degree of implicit categorisation. This is what we understand by the generation of “documentary” knowledge.
This paper aims to propose ways in which music can be “marked up”, at the time of composition or as an analytic activity, with samples from original or borrowed sources. The end result shows how the association of extraneous sampled material with the created or analysed music provides a deeper understanding of the work as well as extending its semantics.
Ioanna Etmektsoglou, Andreas Mniestris and Theodore Lotis
Department of Music, Ionian University, Corfu (Greece)
In 2007, the Electroacoustic Music Research and Applications Laboratory of the Ionian University brought to completion a research program on acoustic ecology and soundscapes entitled “Research and Analysis of Greek Soundscapes”. One of the fruits of the program was to acquaint young children with the sonic environment and to improve their understanding of it.
In an effort to discover educationally valid and motivating ways to teach young children sound sensitivity, creativity, and a critical approach to sonic environments, we developed an acoustic ecology program which incorporated concepts and tools introduced by R. Murray Schafer and Pierre Schaeffer. In the context of this program, Schafer’s and Schaeffer’s often seemingly contrasting approaches were found to be complementary. Schaeffer, with his four ways of hearing (ouïr, entendre, écouter, comprendre), offered us four different listening perspectives, while his seven sound criteria (mass, harmonic timbre, grain, allure, dynamics, melodic profile, and mass profile) seemed useful for the development of children’s sound perception. The analytic approach to sound adopted by Schaeffer was complemented in this program by Schafer’s active sound-exploration approach. Schafer’s sound games, which encourage sound making, were introduced to the children, and similar games were developed and pilot-tested. From Schaeffer’s sound-dissecting lab, students would move to Schafer’s school yard, where they listened silently, recorded and then discussed the sounds they heard in terms of sources, meanings, or balances. They also composed collective soundscape pieces which they evaluated on the basis of aesthetic and ecological criteria.
This presentation will include audiovisual examples of activities as realized by a group of 15 third grade children at a village school in Corfu, Greece. Findings and implications from this pilot program will be discussed in the light of an ecologically minded education.
e_ioanna@yahoo.gr
andreas@ionio.gr
lotis@ionio.gr
F
Ken Fields - Cooperative Research and Performance on E-Art Grids
Ken Fields
Central Conservatory, Beijing / University of Calgary
Network music performance is an eventuality manifested by the establishment of high-speed electronic art (E-Art) grids, tending toward a sustained practice like that of its big brother E-Science and encompassing such issues as cluster computing and transcontinental light-paths. While our previous papers have discussed issues of terminology, ontology and categorization (EMS 2006/7 and Organised Sound 12.2), this paper extends the discussion to the physical and pragmatic substrate through which language runs, organizes itself and emerges in the modern sense: networks. Urging a step beyond the semantic web, I argue that, in combination, the practices of discourse on networks and music on networks can be engineered to advance a semiosis of symbol and sound object as embedded in the collaborative production environment. In other words, we are preparing for a much more fluid scenario of musicians interacting with sound in metadata-saturated, high-speed network environments.
An emerging telematic practice characterized by highly mediated production models has overtaken the music profession, as it has the arts in general, and what is not performed on networks is still influenced by networks. While our tools/instruments can be seen as extensions of the intentional sphere, intention is decidedly more mutable than the rigid economy of technical (non-neutral) conditions which it induces. Thus, the occasion of this evident paradigm shift (from fast CPUs to fast networks) calls for the consideration of new scenarios. The design of network music systems can be accomplished in such a way as to make apparent their substantive language games, affording the organisation of musical objects as fluently as discourse objects into various orders of (semiotic) structure. Given the current confluence of the electroacoustic music research community, whose practices are now both so widely distributed and yet so interdependent, we are signaling this major thrust toward cooperative research and performance on E-Art grids.
Elsa Filipe
MINT-OMF, Université Paris-Sorbonne
L’accroissement des puissances de calcul des systèmes informatiques observé pendant la deuxième moitié du 20e siècle a permis l’élaboration et le perfectionnement de systèmes de traitement sonore en temps réel. Conséquemment, l’application de ceux-ci au domaine de la musique est à l’origine de ce qu’on appelle musique mixte temps réel.
Le plus souvent, l’association de systèmes informatiques à la musique exige du compositeur d’adapter sa pensée compositionnelle aux moyens techniques disponibles. Ainsi, on peut se poser des questions telles que la musique est-elle le résultat de l’invention, ou le système choisi joue-t-il un rôle déterminant dans le résultat final ? Quelles sont les stratégies compositionnelles adoptées par les compositeurs par rapport aux systèmes choisis ? Quels sont les procédés compositionnels utilisés par les compositeurs afin de transmettre leurs idées d’interprétation ? Partant de l’analyse des œuvres Xatys (1988) pour saxophones de Daniel Teruggi et Lituus (1991) pour cuivres de José Manuel López-López, notre communication essaiera de répondre à quelques-unes de ces questions.
Dans notre brève présentation sera discutée la question de l’influence des systèmes informatiques sur l’écriture tant compositionnelle qu’électronique ainsi que sur le processus d’interaction instrument-machine. Les deux œuvres choisies représentent deux esthétiques différentes, cependant il est intéressant de remarquer que les deux compositeurs ont eu un même objectif en utilisant ces types d’outils : fusionner la partie électronique à l’instrument de façon à que ces deux couches ne puissent pas être différenciées.
L’analyse des différents procédés techniques utilisés dans ces œuvres s’avère importante tant pour la musicologie que pour la composition, en permettant de mieux comprendre ce nouveau domaine musical qu’est la musique mixte temps réel.
Martin Flasar
Institute of Musicology, Faculty of Arts, Masaryk University Brno (Czech Republic)
1948 is considered to be the year in which musique concrète, and consequently electroacoustic music as such, first emerged. Unlike the situation in West European countries, it was not until halfway through the 1950s that electroacoustic music was first mentioned in the Czech musical press, and it took almost ten more years before this movement diffused into Czechoslovakia on a practical level.
The historical period I have chosen, 1948-1992, may be interesting for several reasons. 1948 coincided with the communist putsch in Czechoslovakia. The official doctrine of Socialist Realism, adopted from the Soviet Union, marked a substantial setback in postwar developments. From its point of view, electroacoustic music was very suspect, representing an autonomous movement in opposition to the official doctrine. And as such it was persecuted.
This detrimental change not only restricted and delayed the reception of current trends in European music, including electroacoustic music, but also obstructed their application in practice. The political isolation of Czechoslovakia produced a certain scepticism concerning even the mere acceptance of novelties coming from Darmstadt, Paris or Köln. And influences were not derived directly from these places but at second hand (for example through the Warsaw Autumn festival (Warszawska Jesien), founded in 1956, which was the only festival of New Music in the Eastern bloc).
The most important impulses for founding electronic music studios in Czechoslovakia came halfway through the 1960s, with a seminar of Electronic Music held at the Czechoslovak Radio Studios in Plzeň in 1964, and especially a performance by the Groupe de Recherches Musicales in Prague in 1966. The activity of Czech and Slovak composers of electroacoustic music culminated about 1970, shortly before it was politically suppressed. And a second significant wave of interest by composers in electroacoustic music came surprisingly in 1992, at the time when Czechoslovakia split into two separate states. On account of these facts, we must consider politics to have been crucial in the development of electroacoustic music in Czechoslovakia.
Robert J. Frank - Analyzing Structure in Musique Concrète via Temporal Elements
Robert J. Frank
Division of Music, Meadows School of the Arts, Southern Methodist University, Dallas
Analysis of musique concrète is often avoided by theorists in part due to a lack of meter, pitch, and the written score. As a result, authors often dismiss structural aspects and assert that “It is more important to listen to the specific qualities of each sound than the relationships between events.” (Schrader, 1982) Although spectral and timbral analysis, extra-musical context, and audio processing techniques certainly influence flow and musicality in a work, every composer of electro-acoustic music also knows that the same techniques can produce unsuccessful works just as well as “good” ones. This led to the development of a cognitive system of analysis that identifies “temporal elements” and their function (Frank, 1996). Subsequent papers have refined and successfully applied these principles to musique concrète and led to the establishment of terminology to analyze and discuss non-pitched or non-meter works (Frank, 2000, 2001, 2008).
Based upon established research in the field of music cognition (Dowling, 1986), this system of analysis focuses not upon process or technique but rather upon the cognitive relationships between aural events within a work. The system identifies and classifies aural events into five general categories of temporal elements: sustained, repeating/aligned, non-repeating/aligned, repeating/non-aligned, and non-repeating/non-aligned. Hybrid elements comprising traits of two adjacent categories, as well as transformational processes, are also defined. By mapping the use of these stable and transformational temporal elements in a work through aural and visual measurements (via simple sound-editing software), elegant structural traits can be found in many of the pioneering and enduring works in the realm of musique concrète. Analytical results are easily presented in graphic form and are demonstrated in this paper. Specifically, analytical summaries of Pierre Schaeffer’s Étude aux chemins de fer (1948) and Toru Takemitsu’s Water Music (1960) are presented, uncovering clear forms, structures, and cadences.
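As a minimal sketch of the taxonomy quoted above, assuming that each aural event can be tagged as sustained or not, repeating or not, and aligned or not, the following toy code labels events with the five general categories; the attribute names and example data are invented for illustration and are not Frank’s software.

```python
# Hypothetical illustration of the five temporal-element categories named in
# the abstract; the boolean attributes and the sample event list are assumptions.

def classify_temporal_element(is_sustained, repeating, aligned):
    """Return one of the five general categories of temporal elements."""
    if is_sustained:
        return "sustained"
    kind = "repeating" if repeating else "non-repeating"
    grid = "aligned" if aligned else "non-aligned"
    return f"{kind}/{grid}"

# A toy "map" of a work: (onset in seconds, category), of the kind that could
# be compiled aurally with simple sound-editing software.
events = [
    (0.0,  classify_temporal_element(False, True,  True)),   # repeating/aligned
    (12.5, classify_temporal_element(False, False, True)),   # non-repeating/aligned
    (30.0, classify_temporal_element(True,  False, False)),  # sustained
    (47.2, classify_temporal_element(False, True,  False)),  # repeating/non-aligned
]
for onset, category in events:
    print(f"{onset:6.1f} s  {category}")
```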
Koichi Fujii
Keio University, Tokyo
In studying Japanese contemporary music, film music is certainly a genre to be reckoned with, not to be neglected. Many composers of “serious” music in Japan have composed music for films quite actively. This genre, particularly in the nineteen fifties and sixties, was important because it provided them not only with a source of income but also with opportunities for experiment, including experiments with music technology. From an aesthetic viewpoint, this is related to the Japanese interdisciplinary avant-garde movement of the same period. This presentation illustrates an aspect of Japanese film music in connection with music technology, particularly musique concrète, focusing on Toshiro Mayuzumi (1929-97) and Toru Takemitsu (1930-96).
Mayuzumi had composed tape music for films just before his x, y, z (1953), which is officially recognized as the first musique concrète by Japanese composers, and Shichi no Variation (Variations on the Numerical Principle of 7) (1956), composed in collaboration with Makoto Moroi (1930-), which is considered the first full-scale elektronische Musik by Japanese composers. Takemitsu experimented with tape collages of the sounds of Japanese instruments in Seppuku [Hara-kiri] (1962) and Kwaidan [Ghost Stories] (1965), directed by Masaki Kobayashi (1916-96), and in the TV drama series Minamoto Yoshitsune [Yoshitsune Minamoto] (1966), directed by Naoya Yoshida (1931-). These experiments led fruitfully to one of his masterpieces, November Steps for biwa, shakuhachi, and orchestra (1967), commissioned and premiered by the New York Philharmonic. Additionally, an excerpt from Suna no onna (Woman in the Dunes) (1964), directed by Hiroshi Teshigawara (1927-2001), was revised as Chiheisen no doria (Dorian Horizon) (1966).
Thus, film-music production and the music technology used in it are quite important even when investigating the geneses or origins of instrumental pieces and examining the rudiments of their original creative ideas.
G
Luciana Galliano - Musique concrète in Japan
Luciana Galliano
Venice University Ca’ Foscari
The first move toward using environmental sound in music occurred in Japan through the efforts of Mayuzumi Toshiro, who returned at the end of 1952 after becoming acquainted with the research in Paris. It also occurred among the ideas and works of the avant-garde group Jikken Kōbō, wholly independently of the ongoing research in Europe. Compared with the European research, the work on concrete sounds done by Jikken Kōbō’s members was driven by similar yet different intentions. The aural-visual realizations of the group explored the possibility of building an imaginary artistic landscape full of symbols and references, as in Takemitsu’s AI or in Yuasa’s Aoi no ue, but with an allegedly minimal intervention on the concrete sound itself. This is completely in line with Pierre Schaeffer’s definition of musique concrète as “pieces of time torn from the cosmos”. The different weight of empiricism between the two musical cultures in using sounds as “belonging to the world”, however, was very probably due to the specific role sound and noise play in Japanese music and, more obviously, to the role sound/noise plays as a cultural reference in the construction of identity, whose word in Japan might be more “whole” than “beyond”. In Japanese contemporary music the role of sounds and noise has nevertheless been somewhat doomed, since it has completely lost that character of disrupting Western music and its genres – as may be seen in the works of Yoshimatsu Takashi and even in the early Kosugi Takehisa. The continuation of these processes generated the emergence of Japanese noise music.
luciana.galliano@fastwebnet.it
Evelyne Gayou - Compositional Procedures of Yesterday and Today at the GRM
Evelyne Gayou
Groupe de Recherches Musicales
The dialectic of “making and hearing” (faire et entendre), the watchword of the electroacoustic composer’s way of working, rests on a few invariants specific to the studio and its tools. Tools: a tool becomes an instrument from the moment one plays it. The microphone, the disc, the tape recorder, the Acousmonium and the computer are all tools that musicians have turned into instruments. Gestures: whatever the instruments used, the sounds result from a few effector gestures: rubbing, striking, scraping, making resonate. These gestures have been transposed into each new generation of equipment, from the analogue studio to the electronic studio to the computer. This continuity is found again in the ergonomics of the GRM Tools sound-processing software. Studio ergonomics: this imposed itself from the very beginning, in the mid-1940s. Reduced listening: in electroacoustic music, fixing sounds on a medium made it possible to systematise the notion of listening. There are several types of listening, four according to Schaeffer, which can combine and lead from sound to music. Notation: it is not necessary to know how to write or read notes in order to make electroacoustic music. That does not prevent composers from using written memory aids: diagrams, words and codes that are sometimes shared. On the edge of the visual: each new technology is the source of a new aesthetic. Sound recording gave birth to fixed-medium musics, cousins of the other medium-based arts: painting, cinema... With computing, now that we have moved beyond the stage of mere calculators, the arrival of the screen has once again changed everything: the intrusion of the image draws us to the limits of the sonic, to the edge of the visual.
Yann Geslin and Noémie Sprenger-Ohana
Groupe de Recherches Musicales - Université de Paris-Sorbonne
Numerous graphic representations of electroacoustic music have been produced since the beginnings of its history, in order to provide a descriptive support for the analysis of works. These transcriptions, now produced with modern computer tools, do not however seem so far to have led to a preferred system of representation or notation.
Schaeffer stressed the importance of making explicit the categories and levels of description of sound. We propose to explore different levels and complexities of representation, such as the symbolic, the abstract, the semantic, the causal, etc., as well as their combinations, using the Acousmographe software.
In 2007 this work was applied in depth to a representative work of the repertoire, L’Oiseau moqueur by François Bayle. Organised as an arrangement of distinct, clearly identifiable sound objects, this work lends itself well to a classification of its constituents with a view to analysis. Using a grid of fixed reference points, the Acousmographe’s capacity to synchronise and superimpose graphic layers made it possible to explore the combinations of seven levels of representation and to identify the most relevant associations.
The result suggests ways of prioritising, and perhaps of standardising, the representations used. We wish to show that simplified representations, which do not require detailed explanations of the chosen coding, are perfectly workable in many situations; while they do not entirely replace more complex customary graphics such as spectromorphological representation, they offer an interesting alternative and notable possibilities of combination at the cost of minimal adjustments.
ygeslin@ina.fr
noemie.sprenger_ohana@yahoo.fr
Robert Gluck - Live Electronic Music and Jazz: First encounters
Robert Gluck
University at Albany
The first part of a more extensive study, this paper explores interrelationships between live electronic musicians and exploratory jazz improvisers during the 1960s and early 1970s. Among the jazz musicians considered, most of them African-American, are Sun Ra, in the mid-1950s the first jazz performer to play electric instruments, and members of the Chicago-based Association for the Advancement of Creative Musicians (AACM), especially pianist and multi-instrumentalist Muhal Richard Abrams, saxophonist Anthony Braxton, violinist Leroy Jenkins, and trombonist George Lewis. Some worked alone and others in collaboration with electronic music pioneers, most of them white, including Richard Teitelbaum, Alvin Curran, Elliott Schwartz, Jon Appleton and Patrick Gleeson. A special focus here will be pianist Herbie Hancock’s Mwandishi Sextet (1970-1973). Hancock became cognizant of electronic music as early as 1964, when he was a member of the Miles Davis Quintet. His Mwandishi band incorporated electronics as an integral element, both in post-production and in live performance. This began with Hancock’s expansion of electric piano technique using devices such as the Echoplex that abstracted its sounds. With the addition of synthesizer player and sound designer Patrick Gleeson, who drew upon his experience at the San Francisco Tape Music Center in the early 1960s, the band fully integrated abstract electronic sounds into its highly rhythmic and timbral improvisatory approach, beginning with the 1972 recording Crossings. This paper will draw upon musical examples and musician reminiscences to consider highlights from this body of work and offer observations about its nature and place within the history of electroacoustic music. For the field of electroacoustic music, such a discussion furthers dialog about the nature and scope of the field and opens valuable conversation about issues relating to race and culture.
H
James Harley - From Trains to Plains: An historical consideration of soundscape composition
James Harley
University of Guelph, School of Fine Art and Music (Canada)
The incorporation of environmental sounds into electroacoustic music composition was underscored by Schaeffer’s Étude aux chemins de fer (1948), recorded train sounds shaped into a formal musical structure. There were precedents: filmmaker Walter Ruttmann produced Wochenende in 1930, an “experimental film with sound only, no image,” made of aural recordings of life in Berlin; recorded birdsong appears as an atmospheric component in Respighi’s Pini di Roma (1924); mechanical sounds from beyond the concert hall were added to works by Satie (Parade, 1917), Antheil (Ballet mécanique, 1925), and Varèse (Ionisation, 1931). Russolo’s “The Art of Noises” (1913) exerted influence on many of these composers. Earlier still, the natural world was evoked through notated birdsong and other natural elements, for example by Beethoven (“Pastoral” Symphony), Vivaldi (“The Seasons”), and Janequin (“Le chant des oiseaux”).
In the early years of electroacoustic music, the focus was on developing both the technology and generalized techniques for working with recorded or synthesized sound. Xenakis incorporated jet engine and other non-instrumental sounds into his first GRM work, Diamorphoses (1957). In Japan, Takemitsu drew on a number of soundscape sonorities in his Sky, Horse, and Death (1954). Cage systematically recorded and spliced together something like 600 recorded sounds for Williams Mix (1953), including City Sounds and Country Sounds.
It was Ferrari’s Presque Rien, No. 1 (1970) that first attracted attention as a “soundscape composition.” It was intended to sound “natural,” with no studio manipulation, although in fact a great deal of editing and treatment was applied. The piece generated some controversy for its deliberate “naivety.” At the same time, the World Soundscape Project was established at SFU in Canada to draw attention to the sonic environment and to study acoustic design. The aim was not to support electroacoustic composition, but such creative work was a natural extension of the research, by Barry Truax and later Hildegard Westerkamp. Publications such as Schafer’s The Tuning of the World (1977) and Truax’s Handbook for Acoustic Ecology (1978) and Acoustic Communication (1984) have provided a theoretical-aesthetic framework for soundscape work, including composition. The creative use of environmental sounds has tended to follow two paths. In one, such sounds are used as sonic material within a wider compositional framework, where the “mimetic” aspects (after Emmerson) of the materials are downplayed. Dhomont’s work could be cited here, as could certain pieces by Denis Smalley. Other composers work exclusively with environmental sounds. Claude Schryer identifies what he calls the “Sharawadji effect”, defined as “an aesthetic effect characterized by a sensation of plenitude sometimes created by the contemplation of a complex soundscape whose beauty is inexplicable.”
Anna-Marie Higgins
Faculty of Education, University of Cambridge (UK)
Key words associated with the career of the Irish composer Roger Doyle (b. 1949) include “Pierre Henry”, “tape recorder”, “pop”, “theatre”, “godfather of electronic music”, “Bourges” and “Babel”. The diversity of his musical output over the past forty years provides teachers with a rich listening resource. Although electroacoustic music is not on the Irish secondary school music programme, I organised a “Doyle Week”. My two-fold aim was to familiarise my students with the music of a major Irish composer and to introduce them to concepts associated with electroacoustic music. While it was tempting not to probe it with an analytical tool but to allow the music to speak for itself, I was obliged to measure the educational content of each session and to register learning outcomes. I now report on what I found achievable in five forty-minute classes with a group of fifteen- and sixteen-year olds. I focused on sounds, structures, spaces and “societies” as these areas embrace Doyle’s timbral choices, unifying devices, humorous juxtapositions and programmatic titles. Ten extracts from his oeuvre were chosen as the basis for exploration, including Under the Green Time, The Idea and its Shadow, Beautiful Day, Yunnus and Surface du Monde. More accustomed to discussing conventional musical features, I found it challenging to locate a context for the music, to adopt a vocabulary with which to interpret it and to gain an insight into Doyle’s motivation. Responses to the music were noted and related activities described. Students went on to compose musique concrète, to research extracts by Lansky, Reich and Stockhausen and to investigate how Irish traditional music can fuse with other musical styles. I concluded that the music of a local, living composer can act as a springboard for the study of electroacoustic music and music in general.
I
Jonathan Impett - Being Itself: Improvised electronic music as simulation and interface
Jonathan Impett
School of Music, University of East Anglia (UK)
In this paper I consider improvised electronic music as a paradigm for contemporary musical activity – in terms of its imagining, production, performance and reception. I suggest that the essence of that paradigm is simulation and propose an understanding of the particular work based on modes of distributedness in two dimensions: physical / technological / environmental / cultural, and temporal.
As the conventional work concept recedes and computing becomes conceptually and materially transparent, both formal compositional and technological descriptions become less useful modes of characterising musical artefacts. The fact that they may generate a quite different musical surface in each instantiation makes analysis of that surface more problematic in terms of its relation to the work. As a result there is an absence of discourse that impedes the development of common cause and critical conversation.
Perniola’s concept of the shadow of the work is here inverted, as the work now presents itself as such a shadow. From Badiou I adapt the idea of the work as a configuration, its instantiation as a particular truth triggered by an event. I consider Herbert Simon’s natural laws for the design of computational artefacts and suggest that we think of the work itself as an interface. Complexity of interest, Simon states, is a dynamical product of the interaction of behaviours with environment. The distribution of and relationships between components are discussed in terms of Deleuze’s body without organs. They may be more or less structured or contingent on environment – personal, physical, cultural, technological, acoustic. The work can be characterised by the extent to which these are explicit or constructed, the modes in which they contribute and the mechanisms for mediation. This parameter is balanced by a view of the distribution of the decision-making process through time and the ways in which this relates to the temporal properties of the musical surface itself. Together these constitute the thickness and dynamics of the work-as-interface.
In this new case, the work performs an adaptive simulation of itself. Crucial to this are the ways in which it can identify new structure as it evolves. I therefore look at various understandings of emergence – cognitive and computational – and investigate their utility in the context of electroacoustic music.
Finally, I consider the extent to which these concepts might usefully apply to the contemporary (and invariably technologically mediated) understanding of other musics.
Hiromi Ishii
Freelance, Germany
Since electronic music was introduced to Japan in the 1950s, the Japanese term denshi-ongaku has been used as its translation. Nowadays it also covers commercially oriented computer music in which computers are used as sequencers to play pitch-oriented and metric music.
In 1980 The New Grove Dictionary of Music and Musicians (New Grove) was published, with the entries “electronic music” and “computer and music”. Following the English version, the Japanese version included the entries “denshi-ongaku” and “konpyuta to ongaku” (computer and music); the Japanese version, however, was first published between 1993 and 1995. In 2001 the second edition of the New Grove appeared and also became accessible through the Internet. In this edition the term electronic music is no longer a headword and is redirected to “electro-acoustic music”.
The Japanese online version of New Grove was launched in 2002. However, it is based on the first edition and still has the entries “denshi-ongaku” and “konpyuta to ongaku”. There is no entry “denshi-onkyo-ongaku”, which is the translation of the term electroacoustic music; the term denshi-onkyo-ongaku appears only in the text of the addenda.
Nowadays composers of electroacoustic music use the term “electroacoustic music” to designate their sound-based music. Denshi-onkyo-ongaku is also gradually spreading among composers in this field in Japan, but generally it has barely taken root yet. Furthermore, another term, denshi-onkyo, which means “electro-acoustic” or “electronic sound”, is often used; although it does not denote organised sound or autonomous music, some composers prefer it.
This paper focuses on the terms related to denshi-ongaku and denshi-onkyo-ongaku, examining the nature of the music they designate, their definitions and their usage.
K
Brian Kane
Columbia University, New York
The story of the Pythagorean veil functions as a foundational myth within the Schaefferian tradition, a primal scene of self-appropriation that retroactively founds an arche and projects a telos. In particular, the myth is used to authorize three central claims:
1. It underwrites a theoretical split between the eye and the ear, where the suppression of the eye encourages the ear to direct its attention away from everyday modes of listening towards pure auditory experience.
2. It encourages a division between causal sources and perceptual effects, which in turn justifies the establishment of an ontology comprised of intentional sound objects detached from causal contexts.
3. It promotes a conceptualization of the acousmatic as a horizon within which the practices of electroacoustic music are retrospectively given a tradition and meaning.
However, the Schaefferian tradition has relied solely on the literalist reading of the Pythagorean veil originally found in Iamblichus, while ignoring an earlier account by Clement of Alexandria. In Clement, there is no literal veil; rather, Clement states that the sayings of Pythagoras were spoken “under the veil of allegory” (Clement, Stromata, book V, § 58). By contrasting the literal and figural readings of the veil, I will argue:
1. That the historical and interpretative evidence used by many theorists in the Schaefferian tradition cannot unequivocally support their claims about the acousmatic reduction, ultimately revealing its mythic purpose.
2. That the Iamblichian reading has encouraged a marked emphasis on morphological, terminological and epistemological approaches to the theory of electroacoustic music, while underemphasizing questions of figurality, tropology or symbolic meaning.
3. That the Clementine account undermines the construction of the acousmatic as a horizon or tradition which dates back to antiquity, rather, it promotes a concept of the acousmatic capable of acknowledging its own technological historicity.
Gary Kendall
Sonic Arts Research Centre, Queen's University Belfast (UK)
The concept of “event” is fundamental to most discussions of electroacoustic music and yet has not been carefully studied as a topic in itself. We often treat “events” as self-evident, but this cursory impression masks a world of complexity. We might have the impression that the recognition of an “event” is automatic, but how did we come to that decision and what do we mean by that designation? While the concept of “event” has been treated in many fields from physics to philosophy, the treatment of “event” in linguistics is particularly rich with potential significance for electroacoustic music. The ways in which we communicate about “events” in language reveal universal features of how we think about “events.” For the field of electroacoustic music, an EVENT schema is proposed and examined. This EVENT schema is a generic model that captures the essential temporal structure by which electroacoustic “events” are conceptualized. In the realtime process of listening the listener binds the EVENT schema with the particular “circumstances” of the moment. The specific way in which the listener binds the schema with “circumstances” is an act of understanding.
Clearly one of the common challenges in listening to electroacoustic music is the experience of a situation that is difficult to assimilate in realtime. When “events” cannot be grasped in full detail, an alternative strategy is to grab onto the “gist” of “events”. “Gist” enables the listener to construct a working hypothesis and to keep up with the realtime flow of “events”. In acousmatic music the “circumstances” of an “event” are often intentionally impoverished. In this case, the lack of information is intentional and part of the artistic content of the work. Clearly, an essential aspect of artistic expression is the intentional abbreviation, the situation in which space is left open for the imagination.
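Purely as an illustrative reading of the EVENT schema described above, one might picture it as a generic structure whose slots the listener binds with the “circumstances” of the moment, falling back on “gist” where detail cannot be grasped; the slot names and binding rule below are assumptions, not Kendall’s formalisation.

```python
# Hypothetical sketch: a generic EVENT schema whose slots are bound, where
# possible, with the "circumstances" of the listening moment; unbound slots
# are retained only as a coarse "gist". Slot names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EventSchema:
    onset: Optional[str] = None         # how the event begins
    continuation: Optional[str] = None  # how it unfolds
    closure: Optional[str] = None       # how it ends
    agent: Optional[str] = None         # inferred source or cause, if any

    def bind(self, circumstances: dict) -> dict:
        """Bind schema slots with what the moment offers; keep the rest as gist."""
        bound, gist = {}, []
        for slot in ("onset", "continuation", "closure", "agent"):
            value = circumstances.get(slot)
            if value is not None:
                bound[slot] = value
            else:
                gist.append(slot)  # unresolved slot: held only as gist
        bound["gist"] = gist
        return bound

# An intentionally impoverished acousmatic "circumstance": no identifiable agent.
print(EventSchema().bind({"onset": "sharp attack", "continuation": "granular decay"}))
```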
Peiman Khosravi - Exploring the Boundaries of Spectral Space and Tonal Pitch Space
Peiman Khosravi
City University, London
In attempting to devise an analytical framework for observing and evaluating formal implications of spectromorphological properties in acousmatic music, the significance of our listening habits, developed within the context of western music tradition, cannot be justifiably overlooked. After all, it is highly improbable that electroacoustic music has been, somehow, fashioned independently of its cultural surroundings - an undeniable conditioning force for composers and listeners alike. Therefore, in order to grasp the nature of listening expectations and formal implications regarding spectromorphological behaviour in electroacoustic music, it is necessary to explore possible correspondences between aspects of conventional instrumental music and prerequisite constituents of spectromorphologically-orientated listening experience.
Following the above hypothesis, this study intends to investigate the relationship between spectral space in electroacoustic music and the more culturally prominent notion of tonal pitch-space as an aspect of conventional Western Art Music and a subcategory of spectral space. The question posed here is whether there are any parallels between listening expectations developed through the use of pitch-space in instrumental music, and those formed in conjunction with exploitation of spectral space in acousmatic music.
Aspects of motion continuity, and the manner in which morphological archetypes in acousmatic music inhabit spectral space, are viewed and investigated in the light of listening expectations formed in connection with tonal pitch-space in pre-acousmatic music. Furthermore, the concept of the frequency/pitch continuum as a spatial metaphor, or a dimension of musically represented space, is explored in order to reveal possible listening expectations that influence the perception and conception of textural design in music. Finally, the utilisation of a unified vocabulary is proposed that aims to enable detailed description and analysis of the implicative attributes of motion within spectral space as experienced in acousmatic music.
Yuriko Hase Kojima - Listening to the Sound: Meanings in making music
Yuriko Hase Kojima
Shobi University, Saitama (Japan)
The perception of sound and music has long been a major concern for many composers. Psychoacoustic questions such as what the difference is between sound and music, and when sound becomes music, have been among their deepest interests when making music, whether instrumental or electroacoustic. Sixty years ago, when musique concrète was officially introduced to the world, many audiences must have had some difficulty listening to real sounds as a new kind of music. In Japan, people have a long tradition of finding pleasure in listening to the sounds of everyday life produced by traditional sounding devices such as furin, shishiodoshi, and suikinkutsu. Such a unique sound culture may have originated in the Japanese people’s attitude toward nature and in the religious life of Japan, and it has also influenced the development of the country’s traditional musical culture, which is fundamentally different from Western music in every respect. Despite the domination of Western music, many works by Japanese contemporary composers maintain distinctive characteristics different from those of the rest of the world. This paper will discuss our approach to sound and how it can contribute to the creation of musical arts, particularly in the field of electroacoustic music.
Phivos-Angelos Kollias
City University, London / Université Paris VIII
“Systems thinking” includes a number of interdisciplinary theories based on an organizational approach to problems, in other words on considering everything as systems. The paper discusses the connection of Iannis Xenakis and Agostino Di Scipio with “systems thinking” and proposes an experimental compositional model related to this line of thinking.
Xenakis, in order to formulate and explain what he called “stochastic music”, used the methodology of “cybernetics”, one of the most important theories of “systems thinking”. Based on the same approach, he also formulated the hypothesis of “second order sonorities”. From this historical point, Di Scipio adds his objection to Xenakis’s approach: he doubts that stochastic laws are capable of determining the emergence of “second order sonorities”. Starting from this problematic in Xenakis, from his criticism of the conventional model of interactive music and from the application of notions found in “systems thinking”, Di Scipio suggests a “self-organized” interactive model. According to this model, the sound system is able to observe itself and regulate its own processes. It can be considered a self-organized system, an organism, placed in its environment, the space of the concert hall.
Based on this line of musical evolution connected with “systems thinking”, we have attempted to develop a systemic model of symbolic music, an experimental compositional model mainly used for instrumental writing. The term “symbolic” refers to the focus on the flow of information through symbolic means, i.e. through music notation. In addition, the approach treats the compositional work “systemically”, applying notions found in “systems thinking” through the cognitive sciences. We have abstracted, from a systemic viewpoint, the “live interactive music model” used in live electronics, using it as the basis of what we call the “Creative System of Symbolic Music”. Using simple examples, the structural design and the functional performance of the model will be presented.
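As a toy gloss on the self-organised model attributed here to Di Scipio, a system that observes its own output and regulates its own process can be sketched as a simple feedback loop in which the measured level of each synthesised block adjusts the gain of the next; the sine source, target level and adaptation rate are invented for the example and do not reproduce any actual work.

```python
# Toy sketch of a self-regulating ("self-organised") audio process: the system
# observes its own output level and steers its gain towards a target, loosely
# in the spirit of coupling analysis of the sounding result back into synthesis.
# All parameter values are illustrative assumptions.
import math

def rms(block):
    return math.sqrt(sum(x * x for x in block) / len(block))

gain, target_level, rate = 0.1, 0.2, 0.5
phase, sr, freq = 0.0, 44100, 220.0

for iteration in range(10):
    # "Synthesis": one block of a sine tone scaled by the current gain.
    block = []
    for _ in range(1024):
        block.append(gain * math.sin(phase))
        phase += 2 * math.pi * freq / sr
    # "Self-observation": measure the block, then regulate the next gain.
    level = rms(block)
    gain += rate * (target_level - level)
    print(f"iteration {iteration}: level={level:.3f} gain={gain:.3f}")
```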
Sanne Krogh Groth - The Stockholm Studio EMS during its Early Years
Sanne Krogh Groth
Musicology, Department of Arts and Cultural Studies, University of Copenhagen
I will present my Ph.D. project EMS: Two music cultures – one institution. Swedish electro-acoustic music from 1965 to the late 1970s. EMS, an institution in Stockholm with studios for producing electro-acoustic music and sound art, was first established in 1965 under the Swedish Radio, where an old radio-theatre studio was opened up to composers and sound artists. The first studio (the “Sound Workshop”) was intended solely for contemporary work. Very large investments were allocated to a prestigious and, for its time, highly advanced computer music studio, which opened in 1970. Even before its opening this studio was famous, but it was also difficult to work in. The Sound Workshop was available to the artists 24 hours a day and was much easier to handle, and therefore most pieces were produced there. In the early 1970s a conflict emerged between the composers and the studio director Knut Wiggen. In his eager and idealistic search for “the music of the future” he believed in continuing the earlier experiments within musique concrète and elektronische Musik through research into sound and sound perception. Most of the composers, mainly from the Swedish “text-sound” milieu, wanted to produce pieces that could be performed “here and now”. They were too impatient to wait for Wiggen’s results and wanted investments to update the Sound Workshop. Wiggen did not agree, so the composers boycotted the studio and Wiggen was dismissed (1975). The music produced at the institution during these years ranges from so-called abstract electronic music to political, performance-related text-sound pieces. So far there has been very little academic writing about this. Going into the conflict and this very heterogeneous material, questions appear which I believe have not been raised properly in earlier writings on electro-acoustic music. To the extent that they have been dealt with, it has been within separate traditions of historiography and analysis.
L
Leigh Landy - The Sound-based Music Paradigm
Leigh Landy
Music, Technology and Innovation Research Centre, De Montfort University (UK)
Following François Delalande’s description of “an electroacoustic music paradigm” (Le son des musiques, 2001), an alternative based on sound-based music (music based on sounds as opposed to notes) will be presented. It will be demonstrated that Delalande’s notion focuses solely on production, whereas the latter is related to both creative production and the listening experience. Furthermore, using a play on words, it will also be shown that sound-based music offers greater “co-hear-ence” (co-ouïr-ence in French) than electroacoustic music does. Brief aural examples will be included to support the sound-based paradigm. Acknowledging sound-based music as a “supergenre” would be beneficial to this broad musical corpus. Recognition would influence both questions of access related to this body of work and its field of studies. One of the most interesting results of the recognition of paradigmatic behaviour is that certain established means of classifying music turn out to be largely irrelevant, such as the German E- vs. U-Musik separation (art vs. popular music, musique savante vs. musique pop). This paper presents a summary of the research that led to my recent book, La musique des sons/The Music of Sounds (2007, Sorbonne MINT/OMF).
Hsien-Sheng Lien
National Taiwan Normal University
Since the 1960s, some pioneers in Taiwan’s community of composers have been creating works with new electro-acoustic techniques. However, in the universities around the island, classes and studios of electro-acoustic music did not really exist until the late 1980s. Before 1980, very few compositions created with electro-acoustic techniques could be heard in concerts, inside or outside the universities of Taiwan. In this report, we will first review the history of the development of electro-acoustic music in Taiwan. Our goal is to find out whether or not Taiwanese composers were influenced by the ideas or thought of the schools from the United States or France, which have established a solid tradition of electro-acoustic music since the mid-twentieth century. Then we will make some comments, from a musicological point of view, on problems that accompanied the development of this new kind of music in Taiwan. Finally, we will present the interdisciplinary forum “International Workshop on Computer Music and Audio Technology” (WOCMAT), founded by some young Taiwanese composers, engineers and scientists in 2005. We will sum up our discourse with a discussion of the works of Tzeng Shing-Kwei, Wu Ting-Lien, Tseng Yu-Chung and Chao Ching-Wen, four essential figures in the history of electroacoustic music in Taiwan, describing how they enliven the tunes and sounds around them by using new compositional techniques, electro-acoustic or not, and how they evoke the cultural memories of their listeners in Taiwan.
Eric Lyon
Sonic Arts Research Centre, School of Music and Sonic Arts, Queen's University, Belfast
The preservation of electroacoustic music has long been a concern for practitioners of the art. An early problem was the preservation of electronic signals on the decaying medium of magnetic tape. A more recent challenge is the preservation of electroacoustic music stored digitally, given rapidly changing digital audio formats and standards. Additionally, there is the need to preserve electroacoustic performance practices, often specific to individual works, in the face of reliance on obsolete hardware and perhaps soon-to-be-obsolete software. In addition to reviewing extant approaches to these issues, we will consider how the practice of archiving itself might adapt in response to technological and sociological developments, in some cases merging with evolving compositional and performance practices.
This paper addresses some existing issues of archiving within the above-mentioned concerns, while suggesting that the nature of archiving electroacoustic music is itself being transformed, perhaps radically, by the rapidly decreasing cost of mass storage, the rapidly increasing accessibility of broadband Internet including WiFi, and the fusion of social networks (Myspace), social media networks (YouTube), search engines (Google), widely distributed Internet self-publishing and netlabel publishing of electroacoustic music, and Internet radio into a kind of social-networking, intelligently searchable Internet radio station (Last.fm) that favors dynamic archiving, in which public visibility is no longer a zero-sum game, as it is for the traditional music distribution models employed in physical CD stores.
M
Andra McCartney - Reception and Reflexivity in Electroacoustic Creation
Andra McCartney
Concordia University, Montreal
Since the mid 1990s, I have been developing an approach to the analysis and production of electroacoustic sound art which features reception and reflexivity studies as important aspects of the work. My reception research is integrated with other approaches such as repeated listening, analysis of public reviews of work, gestalt analysis inspired by the work of James Tenney, and cultural studies. In the session at EMS ’08, I will present examples of this work, indicating its potential as a tool for analysis and reflection. The examples I intend to include are:
1. An analysis of works by Canadian electroacoustic composers Paul Dutton, Wende Bartley, Hildegard Westerkamp, Pascale Trudel, Diane McIntosh and John Oswald, focusing on the themes of controversy and silence in both public reviews and private responses to their works.
2. A sound art installation called Soundwalk to Home which was presented at the School of the Art Institute of Chicago, in which responses by gallery visitors to soundscape pieces were solicited, in reaction to questions about home, nostalgia and everyday sounds. Responses to the work developed into a conversation among the visitors about home, recording, the public and private spheres, questions of representation, and the work of other artists such as Vito Acconci, Adrian Piper and Martin Arnold.
3. A recent installation created collaboratively with Professor Don Sinclair of York University (Toronto) that incorporates a webcam tracking system to allow visitors to mix soundscape files and create an ideal soundscape, as well as a microphone and recording controls so that audience responses can be immediately integrated into the sounds presented in the space. Artifacts from these projects such as written responses, video documentation, and short excerpts of sound works will be employed to discuss the concepts and challenges associated with the projects as well as to suggest possibilities for future research. While information from the projects will be summarized in the session because of time constraints, I will submit a longer paper to the proceedings which will discuss these projects in more detail, for those researchers who are interested in a deeper consideration of the relevant ideas.
Bertrand Merlier
Département de Musique / Faculté LESLA, Université Lumière Lyon 2
The objective of our research is to specify or develop a vocabulary (a set of specialised words) capable of describing the perception of space in electroacoustic music. The problem is a delicate one, since it touches on psychoacoustics... and on the imaginations of creators and listeners. The result must be representative of the community. The approach was as follows:
1. A battery of listening tests was given to a dozen or so listeners following a precise procedure:
- listening to an excerpt of about one minute of five-channel music on a 5.1 system;
- individual (uninfluenced) reflection, the results of which are written down on paper;
- collective reading of what has been written;
- oral debate, attempts at clarification and the search for a possible consensus (not compulsory: divergences may remain);
- sometimes re-listening to the work to consolidate the definitions (on request);
- etc. (da capo);
- all writings and debates are recorded.
These tests were carried out on several occasions (Université Lyon 2 / music and musicology department, CNSM de Lyon / SONUS, Conservatoire Fédéral de Genève / electroacoustic class, ENMD de Villeurbanne…) with a musically informed audience (not specialists in space): instrumentalists, composers, sound engineers…
2. A classification of the terms (in French) was carried out on the basis of all the words collected, as follows (a toy sketch of this grouping step appears after this abstract):
- “words” showing similarities are grouped into “families”;
- the “families” are then given a heading that best represents their content;
- duplicates are filtered out;
- antonyms are added to the list.
Conclusion: the results are extremely interesting! The study of our corpus reveals:
- 5 types of spatiality: the sound bath, the image of space, the sound plane, the point, and de-mixing;
- 2 types of mobility or movement: internal or external;
- as well as a whole panoply of adjectives for describing or characterising spatiality or mobility.
This research is being carried out within the framework of GETEME (Groupe de Travail sur l’Espace en Musiques électroacoustiques / Research Group on Space in Electroacoustic Music: http://geteme.free.fr) and is supported by Thélème Contemporain / France: http://tc2.free.fr.
Bertrand.Merlier@univ-lyon2.fr
Raúl Minsburg and Fabián Beltramino - The Quotation in Electroacoustic Music
Raúl Minsburg and Fabián Beltramino
Universidad Nacional de Lanús, Universidad Nacional de Tres de Febrero, Buenos Aires
This paper focuses on the analysis of an increasingly recurrent phenomenon in the electroacoustic music of the last two decades: quotation, a resource which implies the more or less explicit, and more or less extensive, presence of other works – contemporary or classical – or of certain genres – popular or folk – within a new composition.
Our aim is, first, to determine the main modalities and procedures of quotation in a corpus of recent electroacoustic works. Allowing for the necessary and unavoidable distance, we will take as analytical categories certain notions developed in linguistics for addressing intertextuality in spoken language. Second, we will try to differentiate – in its way of working and especially in its consequences for the construction of the works’ meaning – the quotation of particular compositions from cases in which the references are less precise and point to a given style or genre. Third, with regard to the distinction between more literal and more evocative modes of quotation, we will assess the degree of intervention of the quoter in the quoted discourse, that is, the degree of transformation of the original discourse within the quoting work. This will entail attending to modes of composing by deconstruction or reconstruction, techniques which were, until recently, the almost exclusive preserve of compositional schools devoted to traditional acoustic means: the instruments.
For each of the items proposed, we will try, where possible, to consider the differences and nuances that may appear when such questions are approached from the listener’s point of reception as opposed to what happens in the strictly compositional domain.
raulminsburg@gmail.com
fabianbeltramino@arnet.com.ar
Mikako Mizuno
Nagoya City University (Japan)
New technologies have changed the conditions not only of sound treatment in the laboratory but also of the time structure of live performance, yet we have not acquired any terminology or model with which to discuss electroacoustic music. This presentation considers the structure of electroacoustic music both from the point of view of hearing the sounds and from the point of view of interactivity. The former relates to creation and the latter to reception, which includes sound projection/performing/presentation, or multimedia collaboration with visual elements. This discussion leads to radical questions such as what music is, or how we can appreciate sound together with visual elements.
First, the cultural tradition and today’s situation of “hearing sound” in Japan are discussed in comparison with that of Pierre Schaeffer. In the history of sound in Japan, people had a large glossary for describing noises such as insect voices and other elements of the soundscape. This traditional mode of hearing is symbolised in a poem by Basho, describing a situation in which the cicada’s voice is a symbol of quietness. Another example of a uniquely Japanese type of hearing can be found in the films directed by Yasujiro Ozu, the music of which was composed by Kinji Fukasaku.
Second, I discuss interactivity in on-stage pieces, which has also been affected by culture-based contexts. In on-stage live performance or live projection, the performer makes an action and the sound arrives within a causality that the audience sees and hears. Interactivity during musical performance has been technologically realised through various precise connections, but this technical causality is oriented by the multimedia situation of daily life.
Several Japanese composers have unique methods of establishing causality between visual structure and sound, such as the Formant Brothers or Hoho-machine, including Taro Yasuno. A discussion of interactivity in on-stage audiovisual musical pieces should examine not only gesture-sound, gesture-image and sound-image relationships but also composition-performance and creation-reception relationships, each of which lies in a different phase of cultural communication.
Rosemary Mountain - Sorting Sounds: Testing tools & strategies
Rosemary Mountain
Music Faculty of Fine Arts, Concordia University, Montreal
The paper reports on the author’s first attempts to classify an array of electroacoustic works and fragments in ways which reflect her appreciation of their particular qualities. Many of the sample works are musique concrète and most were produced at the GRM. The exercise has three main objectives: (1) to be a first step in determining what analytical method(s) would be (most) appropriate to reveal more about the identified salient characteristics of each work; (2) to test out various tools and strategies for analysis being developed by the author; and (3) to present a range of different aspects which can draw someone to a particular work.
Portions of the chosen examples are being incorporated into the database for the research project Interactive Multimedia Playroom, to enable an investigation of which axis labels could be useful for a multi-dimensional scaling – reminiscent of Grey’s attempts to determine qualities of timbre. For this part of the exercise, some acoustic works are incorporated as well, in order to reveal potentially “missing areas” in the electroacoustic set.
Even with this relatively limited set, it is clear that different pieces and fragments can attract us for very different reasons: musical qualities of timbre, texture, tuning, timing; evocative elements (e.g. nostalgia) arising from reference to preferred genres, activities, or soundscapes; reactions to the meaning of spoken words; appreciation of the composer’s intention to express humour, present clever designs, etc. The author argues that each of these aspects suggests a different type of analysis: parametric, semiotic, socio-cultural, etc.
Although this initial report reflects the author’s personal aesthetics and experiences, these biases are presented as illustrating typical and therefore relevant factors involved in our reactions to artworks and choice of analytical methods.
Christopher Murray - The “Timbres” of Timbres-Durées: Caught between note and objet musical
Christopher Murray
Université Lumière, Lyon 2
Timbres-durées, Olivier Messiaen and Pierre Henry’s 1952 collaboration in musique concrète, is generally considered an exception in the former composer’s output of traditional acoustic music, and an early footnote in the latter’s dazzlingly long career in musique concrète. The work is essentially a rhythmic étude, composed of a monophonic chain of musical objects with which Messiaen intended to tint his rhythmic combinations and permutations. Confronted with the semi-abstracted sounds he had chosen for his work, Messiaen was faced with a taxonomical dilemma. Documents from the time of Timbres-durées’ creation betray a certain tension between tradition (the source of the sounds, their notation according to pitch and duration) and a new nomenclature based on the sounds’ morphology. Already notorious by 1952 for the original vocabulary he used to describe his langage musical, Messiaen also used his own vocabulary to describe the sounds employed in Timbres-durées. Additional terminology attributable to Schaeffer and Henry complicates the argot of Messiaen and Henry’s drafts and working documents from 1952.
Built upon the findings of my Master’s thesis Olivier Messiaen et la musique concrète (2006) and working with new documents from the archives of Pierre Henry, my article moves beyond hearing Timbres-durées as a quirky failure, and examines the work as the testament of a historical moment in the early development of electroacoustic music. When heard as a coming-to-terms with the limited common space of acoustic and electronic musics, Timbres-durées can be considered a turning point for both composers. Messiaen returned to the world of acoustic music with the metaphorical objet musical still in mind, continuing to use ideas he experimented with in Timbres-durées, while Henry went on to grapple directly with the implications of a nouvelle écoute.
N
Robert Normandeau - The Onomatopées Cycle: Electroacoustic Orchestration
Robert Normandeau
Université de Montréal
Between 1970 and today, the electroacoustic composition studio has moved from the analogue to the digital domain through several intermediate stages, notably that of MIDI. We will see here how the composer changed his way of working, in particular at the turn of the 1990s, when the first digital devices that were relatively affordable for an individual appeared, and how this new instrumentarium made possible a cycle of works entitled Onomatopées, unique and unthinkable in previous decades. The cycle – which comprises a generative work, Bédé, and four main works, Éclats de voix, Spleen, Le renard et la rose and Palimpseste – also required the development of a sound classification system contained in two databases which to date hold no fewer than 2,000 entries for the first and 4,700 for the second, and which constitute, in a sense, the first stage of the compositional process.
This cycle of works is also built around the use of the sampler, and we will see how the sampler came to determine original compositional approaches. Finally, we will see how the cycle corresponds to a work of orchestration, a practice unprecedented in electroacoustics, through the use of a temporal and morphological structure common to the four works of the cycle, whose exclusively vocal materials were changed for each work, passing from the voice of a child to those of four adolescents, then from a group of adults to the voices of elderly people, thus retracing the four ages of life.
robert.normandeau@umontreal.ca
O
Naotoshi Osaka - Timbre Symbol-based Music Notation
Naotoshi Osaka
Department of Information Systems and Multimedia design, School of Science and Technology for Future Life, Tokyo Denki University
Music systems and theory, as well as linguistic systems, consist of discrete structures that can be represented as symbols, such as interval, scale, chord and tonality – all of which are based on a discrete pitch system. Common music notation is well-established in terms of these discrete symbols, and indispensable for the expression, recording, transmission and preservation of music. Ever since the establishment of musique concrète, timbre has become an increasingly important factor in academic computer music and sound art. On the other hand, there is no descriptive system for timbres, and so timbre-based music lacks those advantages which music based on common music notation has. In this study, we try to define symbols for any timbres of interest for a new timbre-based music notation. We have set up three layers of symbols: microtimbre, onomatopoeia, and macrotimbre. Onomatopoeia is the most popular one, and it is language-dependent. Microtimbres roughly correspond to the phonemes used in languages, though they are shorter in duration than most phonemes. As a result, a microtimbre describes a smaller and perceptually richer unit. A macrotimbre symbol covers a larger class of sound, such as all water drop sounds. They are provided as a convenience for users who are not concerned with specifying timbre to the precision of the smaller units. In order to define and use these symbols efficiently, we are constructing an electronic timbre dictionary. This is a server-client system which will be available to any user via the internet. The system has two functions: registration mode and search/synthesis mode. In registration mode, a user can define timbre symbols for a sound file. In search/synthesis mode, the user can either search for a sound or synthesize a sound from timbre-symbol input. Considering the great success of Wikipedia, this system is expected to become a useful tool for timbre notation. The authors are optimistic about the convergence of timbre symbol definitions. Presently we are working on symbols for water-related sounds such as water drops and water stirring.
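As a rough illustration of how such a three-layer dictionary entry might be organised – the field names, example sound and search function below are hypothetical and are not taken from the authors’ system – a minimal sketch in Python could look like this:

# Hypothetical sketch of a three-layer timbre-dictionary entry (not the authors' implementation).
from dataclasses import dataclass, field

@dataclass
class TimbreEntry:
    sound_file: str                      # path or URL of the registered sound
    macrotimbre: str                     # broad class, e.g. "water drop"
    onomatopoeia: str                    # language-dependent label, e.g. "pochan" (Japanese)
    microtimbres: list = field(default_factory=list)  # short phoneme-like units, in order

# Registration mode: a user attaches symbols to a sound file.
dictionary = []
dictionary.append(TimbreEntry("drop01.wav", "water drop", "pochan", ["p", "o", "tSa", "N"]))

# Search mode: retrieve all sounds matching a macrotimbre symbol.
def search(entries, macrotimbre):
    return [e.sound_file for e in entries if e.macrotimbre == macrotimbre]

print(search(dictionary, "water drop"))   # -> ['drop01.wav']

The point of the sketch is simply that the macrotimbre acts as a coarse retrieval key while the microtimbre sequence carries the fine perceptual detail.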
P
Abril Padilla and Benny Sluchin
Freelance, France
IRCAM
Our aim is to present one of Stockhausen’s works, Mikrophonie I (1964), from the two complementary points of view of acoustics and interpretation. This pioneering work of live electronics grew out of a recording experiment carried out beforehand by the composer, which made him aware of the sonic phenomena present on the surface of the tam-tam. The capture and transformation of sound thus become the point of departure of the work, in which filtering and the projection of sound in space are put on stage.
The score of Mikrophonie I is prescriptive: it states what must be done without describing the sonic result. Six musicians are called upon in three clearly defined roles: two tam-tam players, two microphonists and two filter operators. A phenomenon of fusion occurs, from the acoustic production of the sound to the diffusion of the filtered sound, first probed by the microphones. Before playing, the musicians must carry out several complex tasks: choosing the order of the moments, setting a metronomic unit for performance and establishing the form of the piece in a particular version. The computer can assist in realising the piece by becoming a virtual conductor that coordinates and synchronises the most diverse actions, indicated very precisely by the composer. It can also model suitable filters and, if need be, perform the prescribed filter movements for analogue filters that are now obsolete.
The list of 53 words describing the timbres to be produced with the tam-tam represents another innovative aspect introduced by Stockhausen: adverbs drawn from everyday language define the timbre of the 33 moments of the piece. The interpretation of these words in music is left to the performers, and leads to very rich results, which we will highlight through a comparative study.
info@abrilpadilla.net
benny.sluchin@ircam.fr
Kevin Patton
Brown University (USA)
When using real-time data to generate representations, issues of time-scale become immediately apparent. Furthermore, the appearance of gestural aspects of sound, an important aspect of the morphological characterization of the aural, varies depending on the time scale represented. Music theorists are now using the idea of gesture as an analytical tool and applying it as a biologically grounded, inter-modal synthesis that shapes motion (pitch motion, rhythmic motion) in time to create expressive force. Models of force and inertia are applied to musical models of harmony and expression. This paper addresses issues of time-scale and gesture when using spectral data to trace particular aspects of harmonicity, and the use of signal-processing data to develop representations of sound morphologies. The estimation of harmonicity is most often discussed in the signal processing literature as a method of determining fundamental tones (f0) in monophonic or polyphonic, note-oriented music. Frequently this is in an effort to create applications for automatic transcription or characterization of recorded music. Most commonly, noise reduction techniques are used in order to extract “musical” sound and differentiate it from background. In electroacoustic and experimental approaches to music, such concepts function distinctly. How is time scale constructed when using real-time data? How does the aspect of time relate to the geometry of the visualizations? How are gestural representations changed at different time resolutions? This paper evaluates harmonicity as a timbral measure with real-time data to generate 3-dimensional representations of sound. In order to illustrate these approaches I am analyzing the electronic portion of Luigi Nono’s A Pierre. Dell’Azzurro Silenzio, Inquietum. Because the score gives such specific indications for electronic synthesis, Max/MSP can be used to provide excellent control over all aspects of sound generation and analysis. Furthermore, this expands on my morphological notation system, and continues my research into the representation of electroacoustic sound. Examples will be given through real-time animation as well as graphics for notation to be compared with Nono’s acclaimed original score.
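The kind of harmonicity measure invoked here can be illustrated with a simple frame-wise calculation: the proportion of magnitude-spectrum energy lying near integer multiples of an estimated fundamental. The sketch below is a generic illustration only, not the author’s Max/MSP implementation; the window, tolerance and test signals are assumptions.

# Generic sketch of a frame-wise harmonicity measure (not the author's Max/MSP patch).
import numpy as np

def harmonicity(frame, sr, f0, tol_hz=30.0):
    """Fraction of spectral energy lying within +/- tol_hz of integer multiples of f0."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    mask = np.zeros(len(freqs), dtype=bool)
    k = 1
    while k * f0 < sr / 2:
        mask |= np.abs(freqs - k * f0) <= tol_hz   # mark bins near the k-th harmonic
        k += 1
    return float(np.sum(spectrum[mask] ** 2) / (np.sum(spectrum ** 2) + 1e-12))

sr = 44100
t = np.arange(8192) / sr
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).standard_normal(8192)
print(harmonicity(tone, sr, 220.0))    # close to 1.0 for a harmonic tone
print(harmonicity(noise, sr, 220.0))   # substantially lower for broadband noise

Run over successive frames, a measure of this kind yields the time-varying harmonicity stream that could then drive a three-dimensional representation.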
Blas Payri and José Luis Miralles Bono - Perception of Spatial Trajectories as Musical Patterns
Blas Payri and José Luis Miralles Bono
Universidad Politécnica de Valencia (Spain)
We present the results of an experimental study on the perception of spatial trajectories as an element of musical language. We used different rhythmic patterns that defined spatial left-right movements, varying abruptly or progressively and with two different tempi. These patterns were applied to 7 timbres: 4 were entirely synthesized (sine and square waveforms, pure noise, and a mixture) and 3 were created from a human female sung voice, a whisper and an orchestral sound. 56 combinations were used as samples. Subjects were trained musicians and were asked to recognize the rhythmic pattern and the abrupt or progressive evolution by choosing from a closed list. The listening took place in a room with two loudspeakers and the position of each subject was recorded. The statistical analysis shows that the elements that significantly influence the perception of trajectories are harmonicity (noise lowers recognition), jitter or allure, and timbre. The shape of spatial changes was very influential: a continuous shape prompted a much lower recognition of the spatial pattern and of the shape itself. Discrete changes were, on the contrary, a positive factor for perceiving the pattern. Listener position was also influential: peripheral positions had lower recognition rates. Our results show that spatial trajectories – which are widely used in electroacoustic music – can be recognized and used as a musical element, but perception is fragile and depends on the sounds used, the position of listeners and the nature of spatial movement. Many spatial trajectories in real electroacoustic works may not be recognized. Classical experiments on spatial position perception have a limited utility in the study of musical uses of space, as they are not concerned with complex patterns and the sounds tend to be artificially calibrated instead of using the complex and evolving sounds found in musical works.
bpayri@har.upv.es
josmibo@posgrado.upv.es
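In signal terms, the left-right trajectories studied above can be thought of as time-varying stereo panning envelopes: a “discrete” pattern jumps between positions at each rhythmic onset, while a “progressive” one glides between them. The sketch below is a hypothetical reconstruction of such stimuli (constant-power panning; the pattern, tempo and test tone are illustrative assumptions, not the authors’ actual material).

# Hypothetical reconstruction of abrupt vs. progressive left-right panning patterns
# (illustrative only; not the stimuli actually used in the study).
import numpy as np

def pan_envelope(pattern, beat_dur, sr, progressive):
    """pattern: pan positions in [-1 (left), +1 (right)], one per beat."""
    n = int(beat_dur * sr)
    if progressive:
        # Linear glide from each position to the next (continuous trajectory).
        segs = [np.linspace(a, b, n, endpoint=False)
                for a, b in zip(pattern, pattern[1:] + pattern[:1])]
    else:
        # Hold each position for a whole beat (abrupt changes).
        segs = [np.full(n, float(a)) for a in pattern]
    return np.concatenate(segs)

def spatialize(mono, pan):
    """Constant-power stereo panning of a mono signal."""
    theta = (pan + 1) * np.pi / 4                 # map [-1, 1] to [0, pi/2]
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)], axis=1)

sr = 44100
pattern = [-1, 1, -1, -1, 1]                      # a simple rhythmic left-right pattern
env = pan_envelope(pattern, beat_dur=0.5, sr=sr, progressive=False)
tone = np.sin(2 * np.pi * 440 * np.arange(len(env)) / sr)
stereo = spatialize(tone, env)                    # shape: (samples, 2)

Switching the progressive flag turns the same rhythmic pattern into a continuous trajectory, which is the contrast the listening test exploits.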
R
Laurie Radford - “I am sitting on a fence” – Negotiating sound and image in audiovisual composition
Laurie Radford
Department of Music, City University, London
Tremendous effort has been exerted since the inception of musique concrète to avoid, repress or stave off the influences and workings of the visual and to privilege the purely aural in the perception, conception, production and reception of the art form generally referred to as electroacoustic music. Electroacoustic music, in moving from its birthplace of the radio to the conventions of the concert space, has in the broadest sense cultivated a vital if somewhat ambivalent relationship with the visual domain. The integration of common technical procedures in the tools and operations of composing with sound and image, the maturing practice of live performance of sound and image, and the evolving concept of aurality shaped by the increasingly mobile mediation of audiovisual experience, redefine artistic practice and cultural engagement with the intertwined relationship of the aural and the visual. Practitioners have begun to employ strategies and concepts that are unique to audiovisual composition and have developed an art form that departs from hybridity and the formative “audiovisual contract” and instead proposes a singular artistic practice that dismantles the perceptual fence between aural and visual experience. A theoretical foundation is proposed that considers the negotiation of perceptual, technological, linguistic, historical and social issues characteristic of the practice of contemporary audiovisual composition. Current activities, issues and practices arising from the growing inclination of composers employing mediating technologies to integrate sound and image in studio-based and live performance work are discussed in the context of recent sound, image, media and cultural studies.
Anna Rubin - Structure and Sonic Metaphor in the Music of Francis Dhomont
Anna Rubin
University of Maryland/Baltimore
In his epic work Forêt profonde, Francis Dhomont creates a series of linked sonic metaphors which in turn serve to build up a complex structure. Dhomont uses the thirteen sections of Schumann’s Kinderszenen to inspire his own thirteen sections; he also references both the music and the psyche of Schumann. Other important references are Dante’s Inferno and a variety of folk tales, as well as the Holocaust and the life and work of Bruno Bettelheim, psychoanalytic scholar of fairy tales. Dhomont not only quotes from and references all of these sources but manipulates the material in complex ways to create startling and subtle sonic metaphors. I will focus on five features illuminating the temporal, spectral and narrative levels of the work, each of which projects a singular or multivalent sonic metaphor:
- the spectral content of the opening noise/pitch-based sound mass and its import for the rest of the work,
- Dhomont’s sonic model of the “forest” and its transmutation through the work,
- the omniscient male narrator uneasily governing the multifarious other child/adult voices and presences of the work,
- a meta-narrative overall presence explained by M. Chion’s concept of êtricule, a kind of anthropomorphization of sound character,
- cadential and anti-cadential closings of the different sections and their effect on the flow of time and on how one makes sense of the work as a totality.
The resulting form is a complex and at times contradictory product of these features.
Nathalie Ruget
MINT-OMF, Paris Sorbonne
From 1980 onwards, Nono’s work takes an important turn, linked to new explorations of sound at the Freiburg studio. His primary motivation, in 1980-81, is to use new techniques that will allow him to integrate resonance, delay, persistence and memory concretely into his work.
Through a play of mirrors between a new tool (electronics) and the construction of a semantic-musical network linked to the notion of resonance, the pieces of this final period are a demonstration of the transformation of sound by the word, and of the transformation of the word by sound: each sung or spoken syllable transforms the sound in real time in a slow movement, akin to the unfolding of a thought, within a vast network of resonances.
Electronics allows Nono to develop a shifting system in which the sound triggers the word and vice versa. The text evolves through overlapping from one voice to another, in a succession of very slow, conjunct cross-fades. Syllables, like intervals, pass from voice to voice; there is no longer any horizontal or vertical perception. The voices are by turns explorers and memories. Over a vocal chord that is already “in motion” (organised in quarter-tones), one sound will leave the chord as a “scout” and settle on another pitch, thereby triggering an advance of the text: the movement of the sound triggers the movement of the text; conversely, the new syllable causes the other voices of the chord to react, shifting and “reconfiguring” themselves around this progression of the text.
In our contribution we will compare two instructive demonstrations of this mirror conception: Das atmende Klarsein on the one hand and Omaggio a György Kurtág on the other.
Zhang Ruibo (Mungo) - CHEARS 2008: EARS full text translation plan
Zhang Ruibo (Mungo)
China’s Electroacoustic Music Center, Central Conservatory of Music
Translation is never a straightforward job; rather it is a creative process, especially for terminology translations between languages from different families, for instance from English to Chinese. After having learned that the Italian translation is almost ready to go online, and that the Greek and Portuguese translations of EARS have been started, it seems an obvious next step that a Chinese version should be added. This is an important step for the adoption of EARS in China. It is also an essential task for CHEARS (China Electroacoustic Resources Survey), which was proposed last year as my master’s thesis. Hopefully, Chinese will be the first Asian language into which EARS is fully translated, turning CHEARS from a simple EA glossary for bibliography-collection purposes into a professional project with a wider impact in China.
Since Chinese is one of the oldest languages in the world, its complicated rules make it difficult to optimize the meaning of the characters along with good pronunciation; this, however, is precisely what makes the issue interesting and hotly debated. One third of the EARS glossary was translated last year. Some of the terms already have a precise Chinese translation; some have to be given a new translation; for the rest it is almost impossible to find proper Chinese words, and many translation gaps still exist. A progress report on the EARS translation will therefore be the main task of this article. Furthermore, as one of the translators of the Chinese version of the Computer Music Tutorial by Curtis Roads, I will coordinate the subject index of the CMT book with terminology from EARS.
S
Simonetta Sargenti - Technology and Instrumental Writing in the Works of Luigi Nono
Simonetta Sargenti
Freelance, Milano
Virtual space and sound in motion: the interaction between technology and instrumental writing in the works of Luigi Nono.
It is clear today that technology has influenced the work of many twentieth-century composers, to the point of transforming their poetic approach and their style of writing. Among them is Luigi Nono who, from 1980 onwards, pursued in his works a poetics of sound which led him, in our view, to a new vocal and instrumental writing that would have been unrealisable before the use of electronics.
This evolution is evident from the string quartet Fragmente, Stille... an Diotima (1980) to La Lontananza nostalgica utopica futura (1988). Through the works of this period, one can examine, on the one hand, the characteristics of the vocal and instrumental writing that changed as a result of his electroacoustic experiences and, on the other, how electronic technologies modified that writing.
Using examples drawn from A Pierre dell’azzurro silenzio inquietum (1985) and La Lontananza nostalgica, utopica, futura (1988), typical of his use of wind and string instruments, we will show the specific features of his writing and the influence of electronics on the conception of his work, in particular on the production of sound and its relation to space. We will also address two techniques used by the composer: the recorded tape, which makes it possible to create different acoustic spaces through amplification and the superposition of sound spaces, and live electronics, which allows sound events to be modified in real time.
We will thus offer a new perspective on Nono’s œuvre, which is still unfolding. Technology appears as a point of departure for extending the possibilities of music, opening the door to a new creativity and to an interaction between composer and performer.
Margaret Schedel - Ensuring the Sustainability of Art with Technology
Margaret Schedel
Stony Brook University, New York
In 1966, Billy Klüver organized Nine Evenings of Theatre and Engineering, in which ten artists collaborated with nearly thirty engineers. Nine Evenings left a permanent impression on the artists who participated and inspired many younger artists who were in the audience. It has become a classic event in the history of art and technology, yet few people could experience it first hand. Two of the nine works from this pioneering event have been released on video with original documentary footage and interviews, but is this an accurate representation of the artwork? Ultimately, time-based art is nothing more or less than an engineered experience, a temporal environment with a beginning and an end between which exists the performance; performance is essential to the practice of time-based art as a living form, but has been complicated by the unique challenges in interpretation and re-creation posed by works which incorporate technology. Is a document of an event as moving as the event itself? Do we have a duty not only to maintain and conserve work that incorporates new technology, but also to ensure repeated performances? The responsibility for this decision lies not only with curators, but also with the artists themselves, who should document their pieces consistently and thoroughly if they wish their work to remain performable. The greatest driving force for sustainability is the demand for works to be performed and heard, but recreating the technological component of pieces in order to program them in a concert is a time-consuming task. Are we going to lose great works because of technical obsolescence? Many artists are not thinking about the ability to perform their works 100 or even 1,000 years into the future; should they? Great art rests on the tradition of what comes before; if we can only experience this art through documentation, are we cheating future generations? Using Nine Evenings as a basis for exploration, this paper will examine the rewards and challenges of creating work with a technological component that can be viable well into the future.
Paul Scriver
Mills College, Montreal
In his book Acoustic Communication, Barry Truax states that in the broader, supra-musical, acoustical realm of the soundscape, “meaning depends more and more on the relationship between elements, and between the elements as a whole”. Without intending to do so, Truax may have written another sentence in a lengthening manifesto that champions “holistic” listening; listening that is inclusive of all there is to hear that is musical and supra-musical. The task for sound artists, electroacoustic composers, and media artists is to bring attention to the relationships between those elements and to find the musical meaning in the whole. They have, since Pierre Schaeffer first introduced the idea that recorded sound can be used as musical material, used supra-musical sound to create compositions that challenge the traditional separation between the “organized” soundfield of the musical experience and the supposedly chaotic soundfield of all sound which is extra-musical.
As our culture stretches to listen beyond music for the musical qualities of all sounds, we are increasingly required to draw on that part of our evolution that benefited from holistic listening. The search for coherency in all that is presented to the modern human ear is built on the listening abilities of our hunter-gatherer ancestors. Their ability to cognitively parse the soundfield for threat or the promise of food is the same ability that we now use to parse the elements of music – to separate foreground from background, theme from variation…
I contend that musical practice serves a unique and indispensable function as a cultural storage cell for latent listening skills once essential to our ancestors. I will discuss how the role of the artist, in the context of a rapidly deteriorating natural environment, can be re-tooled to tackle the crucially important role of paying attention to the messages that our environment is communicating to us through sound – that perhaps these sounds are encoded in a form of music yet to be understood. I will examine the possibility that there are prosodic qualities in the sounds of the human and natural environments – inflection in tone, intensity and pitch – that could hold the key to a greater understanding of our world. Just as we may discern supra-lexical meaning in common speech, we may yet be able – with the aid of musical practice and artistic thinking – to perceive meaning in the supra-musical.
By discussing my own and other artists’ work, I will explore how, at the juncture of scientific method and artistic practice, there is a place for sound artists to apply their heightened auditory sensitivity to acquiring that understanding.
Karen Sunabacka
Providence College, Otterburne (Canada)
The human voice has been used as a sound source in electronically produced music since the invention of musique concrète. As we enter the twenty-first century we find ourselves in the midst of a culture addicted to electronically mediated sounds and images. Women’s images are cut up and manipulated to sell products and advertise popular culture. In a similar way, women’s voices are cut, manipulated, and spliced when used in electroacoustic works. Within this cultural context of mediated, and often explicit, cut-up images and sounds of women and girls, how do women composers use the female voice and music technology in their electroacoustic compositions?
Through the analysis of three recent pieces I will show how women composers reclaim the female voice. The Handless Maiden (2004), composed by Wende Bartley, addresses issues of women’s betrayal and healing through a mythological tale that is told using live and recorded voices. Family Stories: Sophie, Sally (2000) by Anna Rubin and Laurie Hollander uses recorded female voices to tell a true story that portrays issues surrounding relationships, race and loss. Diana McIntosh’s Doubletalk (2003) includes both live and recorded female voices that explore the space between women and technology. Although all the composers engage with diverse issues, incorporate different aspects of technology and use the female voice in various ways, all three pieces are about women, created by women, and told using the recorded voices of women.
Peter V. Swendsen
Oberlin Conservatory of Music (USA)
Electroacoustic music and modern dance are often partnered in concert, a pas de deux of disciplines vibrant in experimental practice but lacking a mature theoretical framework. Both fields are in need of more refined and far-reaching terminology, a fact particularly highlighted in collaborative situations that include participants from both. My current research uses such situations to examine ways in which these complementary disciplines can assist each other in developing a more comprehensive language for understanding compositional processes, transmission in performance, and audience reception.
The two principal points of departure for this discussion – the terms that respectively represent the greatest divergence and commonality between electroacoustic music and modern dance – are physicality and abstraction. Electroacoustic music often remains estranged from physicality in its production, performance, and consumption. Conversely, dance transmits through corporeal agents in space, providing a literal embodiment of content and structure. Despite this divergent reliance on physical links to the real world, both electroacoustic music and modern dance rely heavily on a process of abstraction to attain their desired results.
In addition, a middle ground is quickly emerging as practitioners of electroacoustic music increasingly rely on physical interfaces to create “composed instruments” and “modulated objects”, while movement artists grapple with the augmentation and even the disappearance of the physical body in virtual space. The opportunity therefore exists to develop a terminology that will frame this middle ground where movement and music now meet, particularly in regard to interactive performance technologies and real-time relationships between physical presence and electroacoustic sound.
T
Elisa Teglia
Università degli studi di Bologna
For the composers of the 1960s, 1970s and 1980s, the Studio di Fonologia Musicale in Milan represented an extraordinary opportunity for experimentation and work: the presence of tools at once diverse and new stimulated many renowned artists to compose works that would become, in the history of music, fundamental milestones of the electroacoustic repertoire.
One of the interesting aspects of this research centre is the way in which the musicians invented new sounds for their music: were their methods specifically tied to the resources of the Studio? What relationships can be observed between those resources and the sonic result of their compositions? Finally, did the available tools influence the invention and creativity of the artists who used them, and in what way?
Given the enormous quantity of compositions, it is difficult to answer these questions by examining the entire corpus produced in Milan. For this reason, I have chosen to focus my research on the production of Luciano Berio, Luigi Nono and Bruno Maderna. These composers, at once very different and closely linked, left many accounts of their aesthetics and ideals. Exploring these questions in depth can help today’s researchers and musicians to better know and understand the early electroacoustic experiments that belong to the history of twentieth-century music.
Lasse Thoresen - Sound Characters, Sound Values and Form in Åke Parmerud‘s Les Objets Obscurs
Lasse Thoresen
The Norwegian Academy of Music, Oslo
During EMS06 in Beijing a graphic approach to spectromorphological analysis based on Pierre Schaeffer’s “Typomorphologie” was demonstrated, and later published in Organised Sound. A spectromorphological analysis of the above-mentioned piece by Parmerud was shown as a movie. The subject of the present paper is to organise the previous analysis into genres/caractères and espèces/valeurs, a pair of concepts that Pierre Schaeffer posits as fundamental to any meaningful musical discourse. The analysis will show that, while Parmerud’s piece certainly appears to be musically meaningful, the values in the piece are not pertinent in the sense that they do not support the perception of a “structure”; they function solely as a kind of temporal prolongation of a genre of sounds. Accordingly, the question arises of whether the piece contains a possible “super-character” – a common denominator capable of unifying the different sound characters used. In fact such a denominator can be found, but on a higher level of abstraction.
The paper will briefly present two approaches to the aural analysis of emergent musical forms, analysing “time fields” (temporal segments) and “dynamic form” in Parmerud’s piece. The analysis leads to a condensed gestural formula, which can be seen as a tertium comparationis between the aural form of the music, perceived as an iconic or metaphorical reality, and the literary riddle that is presented by a voice at the opening of the piece.
Vincent Tiffon
Centre d’Etude des Arts Contemporains, Université de Lille
After the invention of paper, which led to the “artifice of writing” (H. Dufourt) from the Ars Nova onwards, and the invention of the phonograph, which led to the new “electroacoustic paradigm” (F. Delalande) from musique concrète onwards, the decisive influence of tools (paper/printing, recording/radio) on music no longer needs to be demonstrated. Nevertheless, and in line with the work of the anthropologist André Leroi-Gourhan and the mediologist Régis Debray, a fine-grained study of these influences – and more particularly of the interactions between the technical innovations of “sonofixation” (M. Chion), the tools or “mediums” of music, and the musical inventions of the twentieth century – should be systematised in order to better understand their explicit or implicit logics.
The technical apparatus (mediums of storage, diffusion and symbolisation) and the institutional apparatus (mediums of training, organisational frameworks) change in appearance and in nature with technological developments, which brings about profound mutations in music. Conversely and simultaneously, composers prompt technical innovations which in turn transform our listening to the music of the past or produce new forms of sonic expression.
The mediums of the audiosphere alter the balance between the semiotic modes of the sound image: the index takes precedence over the symbol (with its potential corollary, symbolic impoverishment), inducing a relationship to emotion (and its corollary, emotionalism), to immediacy and the ephemeral (and their corollary, hyperconsumption), and a stronger technological dependence (and its corollary, technological obsolescence)...
The influence of the tool reflects a conscious or unconscious dilemma: to make a music, a musical thought, a genre or a musical situation endure or, on the contrary, to turn music into mere entertainment subject to the constraints of our era. We posit that music is above all knowledge to be transmitted, not an object to be communicated. If we wish to resist the hyper-industrialisation (Stiegler) of today’s music, a fine-grained study of the mechanisms of musical transmission in the nineteenth, twentieth and twenty-first centuries proves essential.
Gaël Tissot
Équipe PARNASSE, Musicologie Recherches, Nouvelles Approches Scientifiques, Sociales et Esthétiques en Musicologie, Université de Toulouse
A first reading of the titles or subtitles of François Bayle’s electroacoustic works immediately reveals a strong connection with the visual domain. One finds, for instance, Espaces inhabitables (1967), Camera oscura (1976), Les couleurs de la nuit (1982), Son Vitesse-Lumière (1983)... What might be no more than a purely anecdotal, inconsequential choice of names turns out, on the contrary, to reveal the composer’s substantial metaphorical work. Thus the concept of the “sound image” (“image de son”) that Bayle proposed in 1976 establishes a strong analogy between music and photography or cinema. Just as a photograph is only a representation of the real world, from which it is detached, so the “sound image” is only a reflection of reality. It can then be transformed by procedures drawn from the graphic world: enlargement, fragmentation, magnification effects, and so on. This “translation” of graphic principles into music proves particularly rich, in that it places the notion of figure, or form, at the centre of François Bayle’s aesthetics, thereby converging with some of the preoccupations of Paul Klee, for example.
What ultimately emerges from listening to this music is the notion of space. Space not only in the physical sense of the term – recall that François Bayle is the creator of the acousmonium, a device that makes it possible, in concert, to distribute electronic sounds across loudspeakers dispersed throughout the hall – but also in the broader sense of a “place” in which forms, and the movements created by the succession of those forms, are inscribed. A network of links is thus established across the piece, one space recalling another, one figure offering an analogy with another. Listening is no longer wholly discursive; the work remains in the memory as a single entity that can be traversed at will, in the manner of a visual artwork.
V
Andrea Valle - Tableaux et Gravures: A graph modelization of Schaeffer’s theory of listening
Andrea Valle
Centro Interdipartimentale di Ricerca sulla Multimedialità e l’Audiovisivo (CIRMA) / Department of Fine Arts, Music and Performing Arts (DAMS), Università di Torino
In his Traité des objets musicaux, Pierre Schaeffer proposed a phenomenological approach which can be considered authentic not because of his eventual philosophical claims but because it led to the articulation of the “audible domain” – simply the domain of all that can be heard – by providing and binding together a theory of sound objects and a theory of listening practices. As concerns the latter, in the famous Tableau des fonctions de l’écoute, listening is notoriously redefined in terms of four different modalities: écouter, ouïr, entendre, comprendre. For Schaeffer, such an organization, which replaced the monolithism of listening with a multiplicity of determinations, was not an end in itself, but the first step towards a syntax of listening strategies. These strategies have been widely described by Schaeffer, but they have not received a specific formalization. On the contrary, a more explicit and formalized approach can result in a better description of the complexities of the listening practices involved in electroacoustic music (and not only there). In order to pursue such an aim, Schaeffer’s table can be rewritten in the form of a graph, made of vertices connected by edges: the vertices represent the four modes of listening, while the edges – connecting each pair of vertices – define possible sequencing relations between them. Thus, each path on the graph represents a sequence of listening modes. But Schaeffer has also described the dynamic transformations of listening practices. In particular, a key point in Schaeffer’s argumentation is his discussion of the process of transforming one practice into another, a process which typically happens in the modus audiendi of the specialiste. While a sectorial drawing cannot show this process, a graph-based model makes it possible to represent this mechanism by expanding and modifying the original graph. Analogously, other relations between listening activities can be discussed and modeled through graphs.
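The proposed formalisation can be made tangible by encoding the four listening modes as the vertices of a small directed graph and enumerating paths over it, each path standing for one possible sequence of listening modes. The sketch below is a minimal, assumed encoding for illustration, not Valle’s own model; expanding or reweighting the edge set would correspond to the graph modifications mentioned above.

# Minimal sketch of Schaeffer's four listening modes as a directed graph
# (an assumed encoding for illustration, not Valle's model).
MODES = ["ecouter", "ouir", "entendre", "comprendre"]

# Fully connected edge set: any mode may follow any other.
EDGES = {m: [n for n in MODES if n != m] for m in MODES}

def paths(start, length, graph=EDGES):
    """Enumerate all sequences of listening modes of a given length starting at `start`."""
    if length == 1:
        return [[start]]
    result = []
    for nxt in graph[start]:
        for tail in paths(nxt, length - 1, graph):
            result.append([start] + tail)
    return result

# Example: all three-step listening sequences that begin with 'ouir'.
for p in paths("ouir", 3):
    print(" -> ".join(p))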
Nicolas Viel - Vis Terribilis Sonorum: Musicological Problems of Algorithmic Music
Nicolas Viel
MINT-OMF, Université Paris Sorbonne
This presentation aims to show how the problems raised by the study of algorithmic works on tape can be approached from a musicological point of view. Algorithmic music takes as its principle the construction of musical structures within which the sounds are fabricated, chosen on the basis of a probabilistic axiology. In the particular case of music on tape, each sound (pitch, duration, intensity, timbre) is the object of a selection procedure before being fabricated directly in the form of a sequence of integers.
One of the problems raised concerns the nature of what remains of the work’s elaboration. How, then, can the composer’s approach be reconstructed from tapes, paper documents and accounts when these remain incomplete? The particularities of algorithmic music on tape call for working methods different from those used to study a work composed in the classical manner. Combining the particularities of electroacoustic music with those of aleatoric music, the production of algorithmic music must be retraced by taking into account the determinism it puts into play and then evaluated according to its own project. What is the genealogy of the works, what are their technical and compositional particularities, and what might be the nature of the algorithms envisaged?
Within this framework, we will examine the case of Pierre Barbaud’s music through works such as Vis Terribilis Sonorum, Saturnia Tellus and Terra Ignota Ubi Sunt Leones, for which the examination of what the composer left behind makes it possible to retrace his aesthetic trajectory. These works, representative of the initial project formulated ten years earlier by the composer, exist in different versions, some of them resulting from montages. The question of aesthetic elaboration is therefore central here.
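To make the idea of a probabilistic selection procedure more concrete, the following sketch draws each sound’s pitch, duration, intensity and timbre from weighted integer alphabets, producing the kind of integer sequences from which such a tape work would then be fabricated. It is a generic illustration under assumed alphabets and weights, not a reconstruction of Barbaud’s actual algorithms.

# Generic sketch of probabilistic parameter selection for algorithmic tape music
# (assumed alphabets and weights; not Barbaud's actual procedures).
import random

random.seed(1)

# Each parameter is an alphabet of integers with associated probabilities.
PARAMETERS = {
    "pitch":     (list(range(36, 84)), None),                # MIDI-like integers, uniform
    "duration":  ([1, 2, 3, 4, 6, 8], [4, 6, 4, 3, 2, 1]),   # weighted duration classes
    "intensity": ([1, 2, 3, 4, 5], [1, 2, 4, 2, 1]),         # dynamic levels
    "timbre":    ([0, 1, 2, 3], [3, 3, 2, 1]),               # timbre classes
}

def draw_event():
    """Choose one integer per parameter according to its probability distribution."""
    return {name: random.choices(values, weights=weights, k=1)[0]
            for name, (values, weights) in PARAMETERS.items()}

score = [draw_event() for _ in range(8)]
for event in score:
    print(event)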
W
Hasnizam Abdul Wahid - A Review of Electroacoustic and “Experimental” Works from Malaysia
Hasnizam Abdul Wahid
Faculty of Applied and Creative Arts, Universiti Malaysia Sarawak
The last sixty years have witnessed enormous development with regard to electroacoustic music in the West. Rapid changes in technology have made it easy to gain access to things that we could never approach before. As a result, “technology” has always seemed to be the “tool” of the trade, especially in composing electroacoustic music. There is a progressively growing interest in electroacoustic music in Malaysia, and among the factors that influence this trend is the accessibility of these “tools”. In this paper, I will discuss some examples of electroacoustic or “experimental” works from Malaysia and the motivation behind the increasing appreciation of electroacoustic music among Malaysians.
Tim Ward - The Technological Crisis in the Electroacoustic Music Studio
Tim Ward
Freelance, Athens
It is a well-worn truism to state that the technology available in the electroacoustic music studio is continually getting better. Such technology is certainly getting faster, smaller, cheaper, easier to use and more widely available than ever before, with clear benefits for the daily practicalities of electroacoustic composition and performance. However, is this a complete picture – what does the situation look like when viewed from a compositional point of view?
Composers of electroacoustic music have always been closely involved with the theoretical work surrounding their field, from the development of taxonomies and theories of musical discourse through to the technical development of innovative sound manipulation and performance systems. When they return to the studio to compose – to select from the pool of processes and techniques supplied by current studio technology in order to discover or invent new sounds and shape them into music – are the tools they find a good match to the compositional ideas and theories that they might wish to explore?
This paper aims to examine the condition of the composer’s technological tool kit in the light of current theoretical and creative thinking, seeking to establish criteria by which we can judge just how well matched the tools are to the creative requirements they aim to serve. These criteria are explored within a range of typical compositional situations within electroacoustic music as well as in wider musical and creative fields. Here we can see that a comparison between the tools that invent and shape sounds and those comparative newcomers that invent and shape moving images quickly leads to important questions that may cause us to consider replacing the truism of “continually getting better” with something much closer to the “technological crisis” mentioned in the title.
Simon Waters
School of Music, University of East Anglia (UK)
EMS 08 has as its main theme “the relationship between sound and music”. This paper takes as its starting point John Blacking’s insistence that the latter term inevitably involves human beings. Against the framework provided by thirty-five years of innovative practice, shifting discourse, infrastructural change and technical development in a small but productive and influential UK university studio the paper attempts to draw connections between social, technical and aesthetic change throughout that period. It aims to demonstrate the relationship of changing compositional choices and strategies to changing social behaviours and technical affordances – to indicate the manner in which sound (acoustic phenomenon) and music (social practice) interpenetrate.
The author has previously written on the period in question, at some theoretical distance, as characterised by a gradual change from an acousmatic sensibility towards an increased concern with context and process, manifesting itself initially in sampling, and recently most obviously evident in improvised/real-time practices. This paper draws on accumulated small detail – oral accounts, studio purchase records, notions of storage and archiving, details of completed works, processes of composition, contacts with other studios, descriptions of the concert environment, habitual uses of the studios by associated students, staff and visiting composers, and the changing economic and educational context – to illustrate through specific instances the connections between aesthetic, social and technical structures.
Issues investigated include (but are not limited to): the instrument/electroacoustic sound relationship; the change from solo authorship to collaborative practices; the codification of spectro-morphology as a defining discourse of electroacoustic music; changing attitudes to the audibility of compositional strategy and composerly “presence”; the introduction and institutionalisation of sound diffusion; the sampling of other identities; the lap-top, portability and distributedness; lo-fidelity, noise and glitch; legacy equipment, hacking and e-bay electronica; time, storage and access.
It is one of the paradoxes of a practice in which the primary objects of production apparently document the process of their own production that so many aspects of that practice are left unrecorded, or discarded as unimportant. This paper uses the specific instance of one studio, viewed at various key points over a thirty-five year period, to fill out some of the gaps in our record of the practice of studio-based composition.
Rob Weale
MTI/CEPA, De Montfort University (UK)
In this paper, which is to some extent provocative (intended to stimulate discussion in the conference setting), I will be examining electroacoustic (E/A) music in terms of its “function(s)”. (For this paper I am using the term “E/A music” in reference to works that can be considered part of the musique concrète tradition, wherein recorded sound is the principal unit of composition and manipulation of the recording medium is, for the most part, a compositional necessity.)
In the late 1940s Pierre Schaeffer paved the way for the creation of a new musical paradigm, out of which has evolved a myriad of approaches that utilise “sound” as the fundamental compositional unit. This is the historical precedent, the “big bang” that led to E/A music “being”. What I am interested in are the reasons for it continuing to be. What purpose(s) does it serve? What is (are) its role(s)? I will be considering such functions of the E/A work from the perspectives of the maker (composer) and the taker (listener), as this art form may well serve a different function for both; indeed might it be these two groups who “mutually” legitimise its function? To this end I will be addressing questions such as, why do we make it? And why do we choose to listen to it? In dealing with these questions I will be presenting anecdotal responses solicited from E/A practitioners and listeners.
My aim is, in significant part, to reveal some of the humanistic traits that are at work in E/A music as a socio-cultural phenomenon, and to tease out a deeper understanding of the function of E/A music as something that is perhaps rooted in some of the fundamental aspects of human communication.
Lonce Wyse - The Emergence of Electroacoustic Music in Singapore
Lonce Wyse
Communications and New Media Programme, National University of Singapore
The Singapore media arts and music scene is one of many boats rising with an actively managed economic wave. The recent rap video produced by the champions of the new initiatives at the Media Development Authority (2007) generated hundreds of thousands of hits on YouTube.com, and despite a few snide remarks from some corners, clearly demonstrates to the world that things are not what they used to be. One can regularly overhear discussions about the alternative musical “scene” in Singapore, and no matter what the assessment, its very existence is remarkable and now firmly established.
Z
Ivan Zavada
The University of Sydney / Sydney Conservatorium of Music
This presentation investigates the role of technology during the creative process. The author describes a system for handling the transformation of musical structures in three-dimensional space. A software application called 3D-Composer was designed as a visualisation tool to assist composers in creating works within a new methodological and conceptual realm. A definition of micro-composition is proposed, and principles of projective geometry, group composition, local composition and graph theory are implemented to model a short melodic motif, which becomes the basis for a larger composition. The author puts forward the notion of musical and symbolic notation as an anchor point for modelling a system that represents musical information using conventional visualisation techniques. The discussion focuses on the ability to establish a direct conceptual link between visual elements and their correlated musical output, integrated into a musical composition based on geometric properties. Some aspects of the compositional elements derived from the use of geometry are explained, along with an overview of the software and its graphical user interface.
Many composers have used “series” as a tool to generate musical structures, which has helped them through the decision-making process in music composition. Examples include the notion of “sieves”, defined as sequences of integer values which can be translated into musical or sound events (Xenakis). Similarly, “group composition” is a way of exploring unknown artistic territory with specific objectives (Stockhausen). Along other lines, “local compositions” have been introduced as elementary objects of music topology (Mazzola). More recent developments include the application of string theory to define how chord structures evolve in multi-dimensional spaces called orbifolds (Tymoczko). The essence of 3D-Composer is to remain purely within Euclidean space and to integrate fundamental concepts in the realm of computer-assisted composition, with a particular focus on the analysis and modelling of the structural and generative processes of micro-composition.
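Of the generative devices mentioned, the Xenakis “sieve” is the most readily formalised: a sieve is built from residue classes – the integers congruent to a residue r modulo m – combined by union, intersection and complement, and the resulting integer set can then be mapped onto pitches or onsets. The sketch below is a minimal illustration of that construction; the particular moduli, range and reference pitch are arbitrary examples and are not drawn from 3D-Composer.

# Minimal sketch of a Xenakis-style sieve as set operations on residue classes
# (arbitrary example moduli; unrelated to the 3D-Composer software).
def residue_class(modulus, residue, limit):
    """All integers 0..limit-1 congruent to `residue` modulo `modulus`."""
    return {n for n in range(limit) if n % modulus == residue}

LIMIT = 24  # e.g. two octaves of semitone steps

# A sieve combining residue classes with union (|) and intersection (&).
sieve = (residue_class(3, 0, LIMIT) | residue_class(4, 1, LIMIT)) & residue_class(2, 1, LIMIT)

# Map the sieve onto pitches above a reference note (here MIDI 48 = C3, chosen arbitrarily).
pitches = sorted(48 + n for n in sieve)
print(sorted(sieve))   # the integer sequence defined by the sieve
print(pitches)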