20130616

Sound Within Topography

The environment in which music is played or performed can be as vital a component in the structure of the sound itself, both in context and in physical shape, as any element of instrumentation or composition. This has been true as long as humanity has been consciously creating music, a process at least as old as, and in some ways even older than, language itself. The earliest music, composed of vocalizations and percussion, was created by necessity either in open spaces or in naturally occurring geographical features such as caves, cliffs, and outcroppings. The different sonic characteristics of even these simple settings would have been as evident to early humans as they are to us. A basic plane such as a cliff wall will reflect sound to produce echoes and phase cancellation, and the added third dimension of an enclosed space such as a cave creates an almost infinitely complex sonic signature of reverberant frequencies. These reverberations generally lend themselves well to the performance of vocal and percussive sounds, provided that neither is so rapid or complex in its tonality and rhythm that its nuance is obscured by the reverberation itself. Simple, repetitive chants and drumming take on a richer and more rounded character in any reverberant space, and this has inevitably impressed and influenced both musicians and listeners throughout human history; even today, a universal of recorded music is the recurring inclusion of some sort of reverberation, be it natural or artificial, on percussion and voice, generally more so than on other instruments. This effect will almost universally be described as sounding “better” and more “natural” to the listener, but it is interesting to remember that this is a matter of taste and conditioning, shaped by many generations of associating music, often in a sacred or a ceremonial sense, with a specified system of emotional and intellectual response, and not an intrinsic part of sound on a scientific or artistic level. 
As these tastes and preferences of performance spaces and conditions were shaped, human groups who tended to spend more time performing and sharing music in the open air, often due to a warmer climate or less topographical variation, usually developed their musical aesthetic accordingly to include more rhythmic variation, shorter and more percussive use of voices, and often a broad palette of percussion instruments. Meanwhile, those groups who tended to convene in naturally enclosed stone spaces developed simpler rhythms that would not become lost in reverberation and could match the pace of natural echoes, and longer vocal phrasing in which single notes have time to build upon themselves. The percussion instruments used in these situations were generally pitched lower, as well, to take advantage of the amplification of low frequencies afforded by natural reverberation.

These spaces in which music was being performed and explored were also the loci of ceremonial and sacred experiences of the groups, and these aspects were in turn shaped by the interchange between the music and its surroundings. The basic function of music in ceremony and ritual, to transpose the listener from a physical, earthly state to a removed, spiritual state, is seen across the globe, but, interestingly, the music used for such a purpose varies greatly in its structure, and the specific music and setting which may deepen a spiritual experience or even induce a state of trance in the listener are culture-specific. Those peoples with an open-air sonic sensibility tend to use repetitive, complex rhythms at moderate to high speed to induce trances, while those with enclosed, reverberant sensibilities tend to use slower rhythms, lower frequencies, and complex harmonies or overtones. The music of the open-air traditions is by its nature suited to a very specific and unchanging sonic space, and has thus undergone little change in terms of structure and instrumentation since the sonic sensibilities of these traditions were developed. The music of reverberant traditions, however, was intrinsically linked to the characteristics of the performance spaces themselves, and as people began to move away from natural performance spaces and shape their own sacred spaces, their music and architecture guided each other closely. Simple churches provided a clear setting for a single voice, speaking or singing simple arrangements, and as churches grew and eventually became cathedrals, the number of singers grew with them, inspiring architecture which amplified and reflected the chants back towards those in attendance. 
Organs became the preferred instrument, and secured their long association with European spiritual spaces, by virtue of their increased volume and full range of sustained notes, which could play music slow enough to carry through its own echoes, yet ornate enough in harmony to take advantage of the rich reverberant spaces of these artificial caverns.

Cathedrals, with their massive choirs and huge organs (instruments which were often an aspect of both music and architecture), became the high-water mark in the intentional creation of cavernous spaces for purposes of reverberation as well as visual grandeur. Music, while remaining deeply entwined with spirituality and ceremony throughout the human world, began to be increasingly developed for recreational and purely artistic means. Stringed instruments developed from their original role within the broad percussive palette of music developed by those with open-air sensibilities, and became gradually more elaborate and resonant, finding their place as more portable instruments in the hands of musicians who sought an outlet beyond the choirs and organs which were inexorably bound, both spiritually and physically, to the holy spaces. And gradually, as music began to be played in less reverberant spaces, private and secular places where the development of more artistic and elaborate musical ideas was encouraged and sponsored, they evolved into instruments such as the many permutations of the piano. Brass instruments became more articulate, no longer required to produce the volume necessary to carry a sound in the open air. Assemblages of greater numbers and types of instruments were able to play together and increase the potential for both volume and harmony as more precise systems of music, both physically and theoretically, were developed. Importantly, dynamics became more valued as dedicated spaces were constructed to cater specifically to music performance. Reverberation was still important for its ability to strengthen sound and blend harmonies, but a balance was now sought between it and ever-more rapid and elaborate melodic passages, while, invariably also influenced by cultural and aesthetic trends towards mechanization as a whole, rhythm became more streamlined and mathematical.

This pattern continued until the introduction of electronic amplification and recorded sound, at which point aesthetic paradigms began to shift and diverge relatively quickly. While the segment of humanity which initially pursued a reverberant sonic sensibility was geographically limited to Europe, social and technological developments quickly spread their musical paradigm (even as musicians and artists within that cultural context continued to borrow and experiment with the sonic and aesthetic sensibilities of traditions from around the globe) anywhere the needed electricity and equipment travelled, and “western” popular music became global popular music. The phonograph and the radio broadcast shaped their own performance space, limited by the frequency spectrum of early recording equipment and the quality of the listener’s speakers (as well as often being colored by some degree of static in the case of the radio, or physical damage to the recording itself in the case of the phonograph). The live performance of music at the dawn of electronic amplification was still generally for the purpose of dancing, as it had been for centuries, but this amplification increasingly allowed for faster and more active dances without the dancers overwhelming the sound of the music, leading to the birth of rock n’ roll music and its great diaspora. The incorporation of speakers into cars led to another technological bottleneck in sound quality and frequency range, encouraging music with a strong, simple rhythm and few dynamics, as well as popularizing the synthesizer as an instrument, with its pure sound that traveled well through radio broadcasts. 
Digital music media such as the compact disc and the computer file gave an even further advantage to non-organic sound sources and recording techniques, and, finally, the technological trend of miniaturization, in the form of ear buds and personal music players, led to a focus on recordings with few dynamics that favor the middle and upper frequencies which can be accurately reproduced by a tiny speaker.

It should be noted that beyond the sonic characteristics of a space, any performance (and here I include the playing of a recording) is also affected by the sonic texture of that space: what one might think of as background noise, although it is very much an active part of the listener’s experience, and can be part of the performer’s experience as well. Historically, reverberant music was performed in an enclosed space which also isolated the performance from external sonic textures, but various artists and composers have created music with the intention that it be allowed to blend with the ambient sounds of the performance space (John Cage’s composition 4′33″, for instance, is a piece written for piano in which the sonic texture of the performance space famously comprises the entirety of the piece). Modern recorded music is often played back in spaces relatively dense with sonic textures, however, and this has influenced the music itself both consciously and unconsciously. Music played back in an automobile, as previously mentioned, needs to contend with the textures of the engine as well as mechanical and road noise, and where this has been knowingly addressed while designing speakers for automotive use, it has led in many cases to an emphasis on more powerful bass speakers, which has in turn led musicians to produce music which takes advantage of these increased low-frequency capabilities. Also, the aforementioned ear buds are not simply a response to the trend of miniaturization in consumer technology nor indicative only of an increased social swing away from communal music or even social interaction, but are in many cases a response to increasingly noisy urban environments. Thus, while actively creating music to incorporate and reflect increasingly dense and complex sonic textures is a relatively new possibility, it is being explored, and may yet be developed more directly and universally.

This history is invaluable to bear in mind when considering the ways in which sonic spaces can, have, and will influence music, but, knowing where these traditions arose and how they have been shaped, what I find most interesting is the ways in which sonic space and music may consciously shape each other in the future. This is especially true once sound is approached and created free from the constraints of culture or tradition. Music can be developed to interact with any environment or sonic space (or, to approach it from an architectural perspective, a sonic space can be created to emphasize, contrast, or even create any sound), especially when sound is utilized which does not rely on pre-existent paradigms of musical experience and is meant to be actively explored on its own terms intellectually and emotionally. Indeed, history offers several interesting anomalies in which the sound design of sacred or ceremonial spaces produced difficult or even physically painful effects. Within one chambered tomb in the Orkney Islands dating from the Neolithic period, for instance, such are the sonic characteristics of the stone-lined space that an additional subsonic property manifests in the form of the natural resonance of the air in the space itself, a phenomenon known as Helmholtz resonance. A microcosm of this phenomenon can be seen in the space within a bottle or jug when air is blown across the opening to excite the natural resonance of the air inside. In the case of this specific tomb, the resonance is roughly 2 Hz, and while this is well below the threshold of human hearing, even the basic stimulus of movement in the tomb itself is enough to produce this frequency strongly enough to induce physical effects which may include an increase in heart rate and blood pressure, headaches, and feelings of disorientation, restlessness, and weariness. 
While these effects are by no means pleasant (and were likely almost wholly accidental on the part of the builders), the connection between the sound in a sacred space and a powerful physical effect could not have gone unnoticed or unexplored. Could music have even been composed and performed to enhance these effects? What would the ceremonial or social implications have been? In any case, intentional or not, there is evidence that the tomb was actively used ceremonially for one or more long periods of time, rather than sealed or abandoned, and the effects of this Helmholtz resonance must almost certainly have been incorporated in some way. In instances like this in the past and the potential for others like it in the future, it is equally challenging and fascinating to consider the implications, not only of music designed to elicit an intellectual response outside of a community-unifying emotion of joy, sadness, or spiritual depth, but also the architecture and even the ritual that might be designed to accompany it. The scope of sound created thus far by humans is staggering, but once the factors in music dictated thus far by culture and tradition are consciously controlled and manipulated apart from these limitations, we will have only just begun to explore what is possible.
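For the curious, the bottle "microcosm" above follows a simple relation: a Helmholtz resonator vibrates at f = (c/2π)·√(A/(V·L)), where c is the speed of sound, A the area of the opening, V the cavity volume, and L the effective neck length. Here is a minimal Python sketch of that formula; the bottle dimensions are illustrative guesses of mine, not measurements of the Orkney tomb:

```python
import math

def helmholtz_hz(area_m2, volume_m3, neck_m, c=343.0):
    """Resonant frequency of an air cavity with a single opening."""
    return (c / (2 * math.pi)) * math.sqrt(area_m2 / (volume_m3 * neck_m))

# A roughly wine-bottle-sized cavity: 0.75 L volume, 1.9 cm neck bore,
# 8 cm effective neck length -- comes out near 119 Hz for these numbers.
area = math.pi * (0.0095 ** 2)
print(round(helmholtz_hz(area, 0.00075, 0.08), 1))
```

The formula makes the tomb's behavior intuitive: a very large volume V with a comparatively small opening drives the frequency down toward the infrasonic range.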

20130609

The Sacred Song Of Entropy

In the liner notes of the first null/point album I discussed how, although the music of contemporary western civilization is filled with endless variations on rhythm, timbre, and instrumentation, the similar systems of recording and playback technology used to listen to that music form a certain stratum of "sacred instruments" all their own. The humming signal of an AM radio or the warm crackle of a turntable stylus unite generational swaths of our technological society much more closely than the actual music that may be intertwined with those sounds; almost any two turntables or any two radios sound exponentially more similar in their own sonic idiosyncrasies than the disparate musical styles also coming from their respective speakers.

I have always been interested in exploring, particularly in the context of the null/point project, the additional element of music which takes place between the performer and the listener. Even from several feet directly in front of a musician playing an acoustic instrument, differences in temperature and humidity and the nuances of the reverberant surroundings will have an effect on what sonically passes between the musician and the listener. And while these factors may be inconsequential, the deeper subtleties of the context of the performance, and even the thoughts and emotions of the listener, can almost completely alter what is heard. And this is simply the case for direct listening, with the listener and the musician (or source of sound in general) in the same physical space at the same time. With recorded music, however, the spectrum of possibilities becomes infinite. And this is where fidelity becomes an issue. At the dawn of recorded music (Edison's first phonograph cylinder was invented in 1877; thus a relatively recent milestone made incredibly ancient by the seeming compression of technological time in the 20th century and since), while the express purpose of recording was to capture sound in as lifelike a manner as possible, the limitations of the available technology made only a rough facsimile of the original acoustics possible. But as each advance in equipment for recording and playing back music as accurately as possible became available, it was quickly adopted as the new industry standard. Higher fidelity, it seemed, was always preferable, and the willingness of the listener to invest in higher-quality equipment would directly correlate to their listening enjoyment.

The advent of the digital age has corroded this paradigm. The sound quality of the first commercially available compact discs was a poor substitute for that of vinyl records, but neither were the discs meant to be a serious competitor in the eyes of an audiophile. Instead, they were designed to replace the cassette tape as a higher-fidelity means of portable storage, since vinyl is relatively heavy and fragile to wield anywhere except in the close proximity of a home stereo system. This was the first major step in the direction of a new paradigm: listeners, faced with a new choice between convenience and audio quality, preferred convenience. And musicians and listeners quickly adapted, making and mixing music to fit the new medium. Listen to a recording from the 60's or 70's on both vinyl and an early CD re-release. It's not even that the digital recording sounds bad (although, chances are, it does); it just sounds wrong. On the other hand, listen to a pop or electronic album newly released the same year that the older album was re-released, and chances are it will sound much more listenable and almost comfortable. Obviously, CD technology has continued to progress since its introduction in the early 80's, and the possibilities and limitations of digital recording now and then are a much deeper topic for discussion on an acoustic level. But the cultural (or even, to some degree, pan-cultural) connotations we attach to these media themselves remain relatively constant between listeners. Music of the digital age sounds better in a digital format to almost any listener who can apply some historical context. Granted, while the sum of human history is ever-more-rapidly being digitized and consumed by the carrion-feeding digital zeitgeist, it is less and less jarring to hear recordings from the 30's and 40's (and, on occasion, even earlier) emanating from a car stereo or a pair of errant headphones. 
But imagine a pop album from last year being played through an Edison phonograph, and you'll have some gauge of how closely we associate a musical style with its contemporary medium.

In the modern age, the scope of the music-listening experience continues to extend, to purely digital files played back on devices which are simply too small to offer any semblance of audio quality (players built primarily as telephones and computers, playing through miniaturized speakers and ear buds; the ultimate in convenience over fidelity), and to a resurgence of appreciation for the highest possible fidelity (lubricated by the renewed interest in vinyl, something which can actually still be physically sold by a music industry grasping for physical objects to sell). But above the absolute floor of simply not being able to hear anything (and even then, a more philosophical argument could be made for the fidelity of silence itself in an increasingly noisy world), audio quality is completely relative, and every method of storing and delivering music is to some degree imperfect. While a degree of cultural memory (or even nostalgia) like that which we place on, for instance, the sound of a cassette tape rewinding, or of the needle dropping on a 45 rpm record, may take some time to crystallize, we can still see the context of its own storage and playback media emerging around modern music. In some ways, its extremes mirror those of music itself, which has increasingly become both disparately omnipresent and meaningless, like a sonic fog to be ignored, and an almost fetish-like symbol of individuality and tribal identity in an increasingly homogenized world. But while neither of these assumes the original sacred role music once held for humanity, there is still an emerging cultural sacred music. It sounds like the over-compression necessary to make music audible on ear buds with an extremely limited frequency response. It sounds like the distortion of a low sample rate coming through a cellular phone speaker. 
And it sounds like the chirp of a digital glitch, replacing the skip of a CD, replacing the screech of cassette tape being ingested by the playback head, replacing the repetitive pop of a needle stuck in a scratched vinyl groove, and becoming simply the latest in a long and proud lineage of things we've been listening to while trying to listen to music.

(Originally uploaded 11/16/12 on Digital Death Rattle)

Binaural Beats [Part 1]

While the human ear can detect a range of audible frequencies between roughly 20 and 20,000 Hz, there are several ways we can also perceive signals outside of this range, through means other than actual auditory sound. One of these techniques is the use of binaural beats, tones or audio artifacts which are perceived by the brain within the difference between two separate audible signals. For instance, if one ear hears a steady tone of 440 Hz, while the other ear hears a steady tone of 424 Hz, the brain can perceive the difference in vibration of 16 Hz, and will “hear” that tone, despite it being below the range of human hearing. Note that the key factor in this effect is the separation between the two signals being heard by the individual ears; the use of headphones is absolutely essential in the perception of binaural beats, since even in a stereo recording played back on speakers, both ears hear essentially all of both stereo signals. Find a pair of headphones of reasonable quality and listen to the following sound:


The right channel plays a steady tone of 220 Hz, while the left channel plays a steady tone of 212.17 Hz. The brain will perceive the difference of 7.83 Hz. This frequency, incidentally, is the fundamental of what are known as the Schumann resonances, the frequencies at which the cavity between the Earth’s surface and the ionosphere resonates electromagnetically. Obviously, it cannot be heard with the unaided ear, but using binaural beat frequencies, it can be approximated and perceived.
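A tone like the one above can be synthesized with nothing but the Python standard library. This is a minimal sketch, not the actual recording: the file name, ten-second duration, and half-scale amplitude are arbitrary choices of mine.

```python
# Generate a stereo WAV carrying a 7.83 Hz binaural beat:
# 212.17 Hz in the left ear, 220 Hz in the right. The difference
# tone exists only in the brain, so headphones are required.
import math
import struct
import wave

RATE = 44100                      # samples per second
DURATION = 10                     # seconds
LEFT_HZ, RIGHT_HZ = 212.17, 220.0

frames = bytearray()
for n in range(RATE * DURATION):
    t = n / RATE
    left = int(32767 * 0.5 * math.sin(2 * math.pi * LEFT_HZ * t))
    right = int(32767 * 0.5 * math.sin(2 * math.pi * RIGHT_HZ * t))
    frames += struct.pack("<hh", left, right)   # interleaved 16-bit L/R

with wave.open("binaural_7_83hz.wav", "wb") as wav:
    wav.setnchannels(2)           # stereo: channel separation is the point
    wav.setsampwidth(2)           # 16-bit samples
    wav.setframerate(RATE)
    wav.writeframes(bytes(frames))
```

Played over loudspeakers instead of headphones, the same file simply produces an acoustic beating effect in the air, which is a different (and audible) phenomenon.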

Using binaural beat frequencies in this way is interesting, but has little value beyond its simple novelty. The phenomenon was discovered in 1839 by the Prussian physicist and meteorologist Heinrich Wilhelm Dove, but it was not until the work of Gerald Oster in the 1970s that binaural beats were revisited and their potential fully explored. Oster himself made several advances in the treatment of multiple neurological ailments, using binaural beats as both a therapeutic and diagnostic tool (for instance, an inability to perceive binaural beats can be indicative of Parkinson’s disease), and began applying his research to biological studies as well, including noting a correlation between an ability to perceive certain low frequencies and the menstrual cycle in women. In the last forty years, the study of binaural beats has been seized upon by the fringe and pseudo-sciences, to varying degrees of credibility. The basic theories involving the use of binaural beats to mentally and physically influence the listener, however, are scientifically sound and intriguing. The phenomenon called the ‘frequency following response’ causes the human brain to synchronize its brainwave activity (to varying degrees) with any other frequency close to its own. Human brainwaves travel on very low frequencies, ranging from the 40 Hz or slightly higher of the Gamma wave (the brain at its maximum functionality), to the 7 to 13 Hz of the Alpha wave (the usual range of the brain while awake and alert), to the below 4 Hz of Delta waves (the brain in the deepest, dreamless sleep). Since we are not generally aware of these frequencies, and cannot replicate them easily using traditional musical instruments and sound equipment, the frequency following response rarely becomes a factor in our daily lives. 
Using binaural beats, however, these frequencies can be easily attained, and using the frequency following response we can increase or decrease the activity of various frequencies within the brain of the listener, guiding the activity of the brain from one state to another.

I may later discuss the further possibilities of binaural beat frequencies, but for now you have a basic understanding of the science behind “Obdormiscere” (found here). Underlying the ambient elements is a tone containing binaural beats which, over the course of 30 minutes, decrease from 15 Hz (a normal, waking Beta state) to 3 Hz (the threshold between Theta and Delta waves, comparable to sleep or deep meditation). Obviously, many factors will influence the degree to which this may or may not cause a specific listener's brain to follow the same pattern, but give it a try if you have an extra half hour, and let me know how it goes.
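A descending sweep of this kind can be sketched in the same way as a fixed beat, with one channel's offset falling over time. The 200 Hz carrier, one-minute duration, sample rate, and file name below are illustrative assumptions of mine; they are not taken from “Obdormiscere” itself, which runs a full 30 minutes.

```python
# Generate a binaural beat whose difference frequency falls linearly
# from 15 Hz (alert Beta) to 3 Hz (Theta/Delta threshold).
import math
import struct
import wave

RATE = 8000                       # a low rate keeps this sketch small
DURATION = 60                     # seconds (the original runs 30 minutes)
CARRIER = 200.0                   # fixed tone in the left ear
BEAT_START, BEAT_END = 15.0, 3.0

phase_l = phase_r = 0.0
frames = bytearray()
total = RATE * DURATION
for n in range(total):
    frac = n / total                               # 0 -> 1 over the sweep
    beat = BEAT_START + (BEAT_END - BEAT_START) * frac
    # Accumulate phase per sample so the gliding frequency stays click-free.
    phase_l += 2 * math.pi * CARRIER / RATE
    phase_r += 2 * math.pi * (CARRIER + beat) / RATE
    left = int(32767 * 0.5 * math.sin(phase_l))
    right = int(32767 * 0.5 * math.sin(phase_r))
    frames += struct.pack("<hh", left, right)

with wave.open("beat_sweep.wav", "wb") as wav:
    wav.setnchannels(2)
    wav.setsampwidth(2)
    wav.setframerate(RATE)
    wav.writeframes(bytes(frames))
```

Accumulating phase, rather than computing sin(2πft) from a changing f, is the important detail: it prevents the discontinuities that a naive sweep would introduce at every change in frequency.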

(Originally uploaded 11/15/12 on Digital Death Rattle)

20130606

Manifesto of Nullified Topography



Front cover


Back cover