Thursday, 17 October 2019

Sound Design/Composition: An Introduction (Part 1)

Abandon all hope, ye who enter here.


This article is the first in a series focusing on sound design.

But before learning how to make sounds, we'll take a look at the bigger picture.


No musical knowledge is required for understanding the content herein. 


Future articles will look at the specifics of sound design (e.g how to create bird calls, gunshots, telephone sounds, particular instruments etc), and will require a basic understanding of synthesis - but of most importance in sound design is understanding and establishing:


What is it we are actually trying to express with the sounds we are creating and using? 



In other words, the psychology of sound design is of fundamental importance.

We'll look at the basics of the psychology/philosophy of sound design, and learn how this influences our use of sound, from instruments through to the very way in which we write music.

Studying the psychology of sound design can (and does) alter our perspective/outlook on both sound and music - often monumentally so.



Key point #1

Non-musicians often grasp the underlying principles & psychology of sound design with more fluidity than the trained musician. 

Later in this article we'll consider why this is often the case.



Key Point #2

Understanding how to make a sound doesn't give us insight into how a listener perceives a sound, nor into how to use a sound most effectively. 


Having oscillator/routing settings for a sound is only a very small part of the process.
We'll look at how to build a sound from first principles - and also how to use a sound effectively.



The psychology of sound design will be explored in relation to real-world examples.

In this article we'll look at the following subjects:

  • Fundamental principles of sound design (+ mention of sector-specific software)
  • Definition(s) of sound design
  • Sound and language
  • Why bother with synth sound design?
  • Sound design checklist (suggested equipment/software etc)
  • Do trained musicians make good sound designers?
  • Trained musicians & compartmentalization
  • Children & sound design
  • The essence of successful sound design

And we'll finish with a short homework exercise.


There are 15 key points in this article; these are highlighted in green.  If read in isolation, these points may not make much sense.  But if read in context, they should make sense.


----------------------------------------


Sound Design: Fundamentals

If there were a single commandment of sound design, it would most likely be the following:


Key Point #3:

Use the tools that are most effective (and efficient) for achieving the desired outcome.



  • If a hardware synth is most effective, use a hardware synth
  • If software/softsynth, use software/softsynth
  • If an acoustic/field recording, use/create an acoustic/field recording
  • If a sound from a sample pack, use a sound from a sample pack
etc etc.


As said, we will be learning how to create specific sounds on a synth.  But if your aim is to work as an industry professional, always keep the above point in mind.


Also: synths may at times play no part whatsoever in a sound design project.  Many great sounds are the product of slicing & dicing acoustic sounds within a DAW.



Key Point #4

Synths, whilst a fundamental element of sound design, are not always a necessary element of sound design.  




Also note that many great sounds are multi-layered (especially in the world of sound fx), with the sound elements often drawn from varying sources (e.g synths, field recordings, instrument studio recordings etc).


Key Point #5

Many sounds are multi-layered, and multi-sourced.  In the world of sound design, tying ourselves to a single means of sound creation can be an inhibiting artificial restriction.  
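To make the layering idea concrete, here is a minimal sketch in plain Python (no audio libraries; the sample rate, function names, and gain values are all illustrative assumptions, not a prescribed workflow).  It mixes a synth-style tone with a noise burst standing in for a field recording - two sources, one sound:

```python
import math
import random

SR = 44100  # sample rate in Hz (an assumed, typical value)

def sine_layer(freq, dur, amp):
    """One 'source': a pure synth-style tone."""
    n = int(SR * dur)
    return [amp * math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def noise_layer(dur, amp, seed=0):
    """Another 'source': white noise standing in for a field recording."""
    rng = random.Random(seed)
    n = int(SR * dur)
    return [amp * (2 * rng.random() - 1) for _ in range(n)]

def mix(*layers):
    """Sum the layers sample by sample, then normalise to avoid clipping."""
    length = max(len(layer) for layer in layers)
    out = [0.0] * length
    for layer in layers:
        for i, s in enumerate(layer):
            out[i] += s
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]

# A multi-layered, multi-sourced sound: sustained tone + short noise transient
sound = mix(sine_layer(110.0, 0.5, 0.8), noise_layer(0.2, 0.3))
```

In practice the layers would come from a DAW rather than hand-written loops, but the principle is the same: independent sources, summed and balanced.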




However, make the most of what you have - and learn how to effectively use what you have.  There isn't a list of 'must-have' tools that are absolutely essential prior to your becoming a sound designer.  There are certainly recommended tools - but only a few that could be considered 'must-have'.

A great sound designer could achieve more with a simple mic and a DAW than many could with an entire modular wall.





A few caveats can be placed on the above - specifically in relation to sound design for games:

If planning on working as a sound designer within the games industry, you'll need to be familiar with software modular synths, and with the game-audio middleware used to implement sound in-engine.

I'd strongly suggest looking into the following if your interest is specifically in sound design for games:

  • Reaktor (software modular synthesis)
  • Wwise (game-audio middleware)
  • FMOD (game-audio middleware).

You should also be familiar with general production skills within a DAW environment (however, this particular point applies to almost all sound design).

If you aren't familiar with the above platforms (or similar platforms), entering the games industry as a sound designer will likely prove very difficult indeed.  Of course, if you have no interest in the games industry, ignore the above!  Sound design is a big field, and isn't restricted to one specific industry.


----------------------------------------


Defining 'Sound Design'

The Rosetta Stone.  It is rumoured that a succinct definition of sound design is to be found somewhere on the Rosetta.


The term 'sound design' has many uses/definitions.  Within the synth world it is frequently used to express two quite distinct activities: designing specific sounds that are not necessarily 'musical' in the traditional sense (e.g gunshots, bird calls etc), and designing a musically satisfying timbre on a synth (i.e a 'patch').

Of course, designing a bird call is also the designing of a patch - hence the frequent confusion.

The varied definitions of the term can lead to vagueness/confusion about what is actually being discussed.  In relation to synths, I generally use the term 'sound design' to refer to the former category, and prefer the term 'patch design' when referring to the creation of musically playable/useful synth patches.

Due to the varied use of the term, it is worth establishing, when speaking with others, exactly what is being referred to.

Establishing an understanding can (and often does) avoid confusion/disagreement.  It isn't necessarily a case of establishing 'right' vs. 'wrong', but rather, coming to an understanding for the purpose of effective and meaningful communication.


As well as the varied usage of the term within the synth world, there is also how the term is used within the world of film/theatre (we'll look at this in detail in the next article).  In the movie industry, 'sound design' can involve elements of Foley, sound effects, and sound design in the synth sense.

Many musical compositions (for film and otherwise) feature elements of sound design as part of the musical framework; the distinction between composition and sound design is often blurred.  Hence these articles will focus on the special relationship between sound design and composition.


----------------------------------------


Sound and Language


Ancient words.  Or an early form of MIDI?  Requires further investigation.



Think of the difference between 'sounds' and 'music'.

...What is the difference?  Is there any?


A useful analogy can be made with language, and the difference between words and sentences.

  • Sound design is about words
  • Music is about sentences.

Sentences are formed from individual words; music is formed from individual sounds.

Just as sentences mean something beyond individual words, music means something beyond individual sounds.



Music can deal with sentences, paragraphs - or entire novels.


It makes sense to learn how to read, write, and form words before writing sentences.
It makes sense to learn how to write sentences before writing novels. 




In sound design, we often learn how to make words that already exist.  This is a good way of learning how to use language and the alphabet.

But we can also create entirely made-up words.  We can make up our own alphabet and language.


Good sound designers not only make up their own words, but they also take established words, break them up into individual letters, and then make different or new words from those letters.



We could say that sound design deals not only with using words, but also with making words.


Waves/filters/envelopes/noise etc are to sound what the alphabet (and punctuation) is to words.

  • Letters are the building blocks of words
  • Words are the building blocks of sentences
  • Sentences are the building blocks of stories.

Hence the lines between sound design and composition are often blurred.
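As a rough illustration of those 'letters' being assembled into a 'word', here is a minimal sketch in plain Python (every name and parameter value is my own illustrative choice, not a standard method): a raw oscillator, a filter, and an envelope chained into a single playable sound.

```python
import math

SR = 44100  # assumed sample rate in Hz

def saw(freq, dur):
    """'Letter' #1: a raw sawtooth oscillator."""
    n = int(SR * dur)
    return [2.0 * ((freq * i / SR) % 1.0) - 1.0 for i in range(n)]

def lowpass(samples, alpha=0.05):
    """'Letter' #2: a one-pole low-pass filter to tame the buzz."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

def envelope(samples, attack=0.01, release=0.3):
    """'Letter' #3: a linear attack/release amplitude envelope."""
    n = len(samples)
    a, r = int(SR * attack), int(SR * release)
    out = []
    for i, s in enumerate(samples):
        g = 1.0
        if i < a:
            g = i / a                     # fade in
        if i > n - r:
            g = min(g, (n - i) / r)       # fade out
        out.append(s * g)
    return out

# A 'word' built from the letters: oscillator -> filter -> envelope
word = envelope(lowpass(saw(220.0, 0.5)))
```

Swap the letters around (a different waveform, a different filter, a different envelope shape) and you spell a different word - which is the whole game.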



Traditional study of composition deals with scales/chords/harmony etc.  But it doesn't deal with making sounds.
In this sense, it could be argued that composers learn to write novels before learning the alphabet.



Every sound is a word from the dictionary.  Some sounds are words that haven't yet been added to the dictionary.

The key is in learning which words will give our sentences most meaning.


Sometimes sentences are most effective when lots of words are used.
Sometimes sentences are most effective when few words are used.
Sometimes sentences are most effective when familiar words are used.
Sometimes sentences are most effective when unusual words are used.


Good sentences begin and end exactly when they should.



----------------------------------------


Synth Sound Design: Why Bother?


Don Buchla, seen here with a small portion of the required equipment to make the sound of a duck quack.


It is perhaps worth taking a step back at this point and asking 'why bother?' with regard to sound design - specifically synth sound design.

'Why bother using a synth to build sounds from scratch?' is a good question - and certainly worth asking.



Consider the following:

  • Instruments can be expensive - sometimes very expensive
  • Instruments require storage
  • Instruments require service/maintenance etc
  • Even if we did purchase an instrument/instruments, we are only going to have the sound of that particular instrument/those particular instruments
  • Creating acoustic recordings of instruments requires specialized equipment (preamps, various microphones, a plethora of cables, studio desk, monitor speakers etc).  As with instruments, this equipment can be very expensive, and also requires storage/maintenance etc
  • We may only need a sound for a single moment of a single project
  • Acoustic recordings require an appropriate acoustically-treated recording environment
  • Recorded audio can have many associated problems (artifacts, localization issues etc)
  • Field recordings can be problematic (background noise, aircraft overhead etc).  Field recordings of animals: especially problematic 
  • Using pre-recorded sounds from sound libraries could potentially create copyright/licensing infringement issues
  • We could potentially be working on a project in a location where we have access to nothing other than a laptop (another good reason why being 'tied in' to one particular instrument could be seen as disadvantageous)
  • In creating sounds, we learn a lot about sound, music, and perception
  • Once we have the knowledge of how to create certain sounds/effects, the above potential issues no longer apply.


Sounds built from the ground up avoid many (if not all) of the above potential issues.


----------------------------------------

Sound Design: Suggested Equipment


Sound design starter pack.  Man not included.


If pushed into committing to a list of what resources a composer/sound designer should have at their disposal, I'd strongly encourage a primarily software-based approach.  There is a time/place for hardware, but from the perspective of sound design, hardware instruments etc are rarely (if ever) essential.

Included below is a list for consideration.


Hardware

  • Laptop running your DAW
  • Portable handheld recorder for field recording (e.g Tascam DR-40, Zoom H4N etc)
  • MIDI controller keyboard (not essential, but certainly useful)


Software

  • Software subtractive modular synth (e.g Reaktor)
  • Software linear FM synth (e.g Dexed)
  • Software wavetable synth (e.g Serum)
  • Orchestral + Choral sound libraries (e.g various Spitfire/EastWest software packs)
  • Instrument-specific sound libraries (e.g Pianoteq)


Of course, if you are presently running a desktop PC, use your desktop PC (especially if running your own in-house office/studio).  But if looking to build from scratch, laptops have the added benefit of portability.

I haven't included any post/mastering software on the above list (i.e compressors, reverbs, EQs, various plugins etc), as the stock plugins within most DAWs are, for all but the most exacting of professional situations, more than sufficient (...once we know how to use them properly!).  But if you have your own post/mastering plugins, by all means use them.



My absolute bare-bones sound design setup would be:

  • Laptop running your DAW
  • Portable handheld recorder.

Purchasing a synth is, at the bare-bones level, unnecessary: many sound design projects don't require synths, and many synths can be downloaded for free.



Key Point #6

The fundamental tools of sound design are having a means of recording sound and a means of editing sound.


If you can record and edit, you can do sound design.



On a single project you may only use one or two of the above.  But if you have one item for each category, you'll have more than enough to cover almost any situation.

My software/hardware suggestions are suggestions only.  If your preferred software modular option is (for example) VCV, use VCV.  The point is to have an option within each category.



This in itself leads to another key point:


Key Point #7

The end result is the most important part.




Musicians can argue all day RE instruments/means.  And they often do!

If you can use (for example) an old sound module to create sounds that achieve the desired outcome, use your old sound module.  The listener is indifferent to the means - but they do hear whether or not your sounds succeed in communicating the intended message.

Example: if you are working on a project that requires the sound of an old, beat-up piano, what use is the 100k Steinway?  In this situation, the 100k Steinway is completely inappropriate.  Conversely, the beat-up piano is completely inappropriate if you require the sound of a concert grand.




I'd strongly suggest using your music laptop/PC for music and music only.  Or rather: register your products online, then disable the internet connection.  Nothing quite kills a laptop like a permanent link to Skynet.

Treat your laptop/PC as an instrument - not only any instrument, but your finest instrument.


You could probably purchase all of the above for somewhere in the region of £2500-£3000.  At the bare-bones level, you could spend less than £400 (a laptop and a small recording device); at the other extreme, you could spend well over £50k (e.g £4000 on a Sound Devices 833, £9000 on a Brauner mic etc).  £2500-£3000 is a large investment - but consider what you can achieve with it once you know how to use it successfully: full orchestral soundtracks, huge synth/sound design potential, freedom to make music anywhere etc.


*However*
I'd recommend purchasing what you need only when you need it.  If you are never likely to use software modular, don't purchase software modular.  Or rather, purchase it when you need it.


Key Point #8

Purchase what you need when you need it.



The above point could be argued, as there's certainly nothing wrong with expanding our knowledge/skillset.  Developing our skills on a specific platform can not only help our creativity, but may also make the difference on our CV.  Plus there are worse things we can do than spend our spare money on sound equipment.

But don't get bogged down working on learning a specific piece of software/hardware if you don't have to - especially if doing so is having a negative impact on your creativity.



----------------------------------------



Do Trained Musicians Make Good Sound Designers?


String players, and a man with a stick.  Not the best choice if you need a zappy laser gun sound for your latest movie.


Yes and no.  When no: often disastrously so.


Being a trained musician is often a hindrance in the world of sound design, as trained musicians often think from the perspective of the self rather than of the audience.  

Trained musicians can also have a very fixed view of what music 'is', and can have a very fixed idea of what 'musical sounds' are.  Many great sounds are dismissed as they are considered 'unmusical' (which, in itself, is simply a matter of context).




It is good to say sounds are good.  It is also good to say sounds are great.  Some sounds are even better than this; some sounds are brilliant.

But it probably isn't good to say sounds are bad.

Every sound is either a good or a bad sound, depending on context.


A good sound is a sound that works well in a particular context, or for a particular purpose.

Bad sounds are sounds we haven't yet managed to think of a good use for.  But we shouldn't blame the sounds for this.

Sounds, like people, often make more sense when we view them in relation to those around them.




Returning to the idea of the trained musician: spending 5 years at conservatoire studying violin won't teach you how to make monster sounds.

The label of 'musician' is broad and varied.  Embrace what you love - not what you think others expect of you. 



I remember one time talking to an orchestral violinist about a music project I was working on.  I described to them one of the sounds I was making (climbing equipment used on piano strings); their reply was, to put it mildly, patronising.

Not all highly-trained musicians have this mindset - but many do.

The highly ingrained mindset of 'what constitutes good music' is often very difficult to break in trained musicians (and sometimes impossible to break; in my own experience, almost always impossible).



As an aside, and for those interested, included below is a recording I made of the aforementioned climbing equipment on piano strings -


    


(PS I have a large body of 'inside the piano' sounds.  I'll upload some in the next sound design article, or the article thereafter.  Readers can use the sounds in their own projects if they wish).


Have a listen to the following piece by Radulescu as a great example of out-there piano sounds (PS for some reason this video is beginning halfway through the piece.  Be sure to skip back to/listen from the beginning, as the initial sounds are great):





Those are some good piano sounds.  I'd probably even call them brilliant piano sounds.


We develop lots of assumptions in learning an instrument.  Pushing sticks to make felt hammers strike strings is an assumption.   



Which leads to another important point:


Key Point #9

Technical ability often limits creative mindset.


  • Great players often write 'finger-wiggle' music i.e music that prioritizes demonstration of technique over effective communication of idea.
  • Technically poor players often write music within their own technical limitations; this often limits creative breadth.   


Sometimes the best position is not being a player at all.  Sometimes it is the worst position - but sometimes it is the best position.

No baggage can mean no preconceptions - but it can also mean no insight.  No insight can be seen as both a positive and a negative.


Knowledge of chords/harmony etc is sometimes required - but the trained musician often struggles to hear sound simply as 'sound'.

Thinking is often of technique, chords, scales, sawtooth waveforms etc - but this is not what the audience hears (...unless they are also musicians, of course!).

Considering the above points, it is probably easier to now see why many sound designers are not from a traditional musical background.



Key Point #10

The audience hears the effect and intention of musical material.  The audience are experts in interpreting how something feels.





Creative Challenge

Think of an acoustic instrument.

Actually, abandon that thought.  Think of an object: any object (e.g a bread bin, a toilet brush etc).

Make a list of how many possible ways you could make the object make a sound.


As a bonus challenge: try to imagine what kind of sound each method would produce.  Write down what you think the sound would be.


----------------------------------------

Trained Musicians & Compartmentalization


Putting items into the correct box.


There are many paths we can take with a piece of music.  Consider which of the following paths is more likely to be taken by a trained musician/composer:

  • I would like the next section of my music to be based around a Gm7 chord.
  • I would like the next section of my music to be based around the sound of a large piece of metal dragged across a concrete floor within an abandoned building. 

Hence those from a traditional musical background are often less effective within the world of sound design.


The idea is the part that connects with the audience/listener.


Key Point #11

Make sure your ideas are strong.




What is the point of spending hours refining settings on a compressor if our musical idea is weak?



A sound designer may wish to, for example, create the sense of a vast, cavernous space.  Technical realisation of this is relatively simple (i.e lots of reverb).

But the genesis of the idea isn't necessarily something that is the product of technical training; it is the product of creative thought.

It is often easier to learn how to achieve something at the technical level (i.e our example of making something sound like it is in a cave) than it is to have a good idea in the first place.
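As an illustration of how simple the technical side can be, here is a toy sketch in plain Python of a single feedback comb filter - the most basic reverb building block (the delay and feedback values are arbitrary illustrative choices; a real cavernous reverb would layer several of these plus allpass filters):

```python
def comb_reverb(dry, delay_samples=4410, feedback=0.7, wet_mix=0.5):
    """A single feedback comb filter: the simplest reverb building block.
    A long delay plus high feedback reads as a large, cavernous space."""
    buf = [0.0] * delay_samples   # circular delay line
    out = []
    for i, s in enumerate(dry):
        echoed = buf[i % delay_samples]          # sound from delay_samples ago
        buf[i % delay_samples] = s + feedback * echoed  # feed it back in
        out.append((1 - wet_mix) * s + wet_mix * echoed)
    return out

# An impulse (a 'hand clap') fed through the comb: decaying, repeating echoes
clap = [1.0] + [0.0] * 44100
wet = comb_reverb(clap)
```

The point stands: the hard part was deciding we wanted 'cavernous' in the first place, not producing it.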



Of course, the best position is technical knowledge combined with creative freedom of thought.  But as we said earlier, this is a relatively rare combination to find.




Consider your own musical output: have you given thought to sound in the broadest sense i.e without self-imposed boundaries and labels such as 'music'?

Why wouldn't the sound of, say, footsteps, be effective in your latest work?



There's an interesting Japanese artist called World's End Girlfriend.  Do you know the name?  Some of his albums are good examples of works blurring the lines of composition and sound design.


Have a listen to the following example as a good track where sound design subtly forms part of the tapestry of the composition:






Here is another piece, but in this piece the sound design acts as the foundational element rather than as only a few threads of the tapestry.  This piece may be more difficult to accept as 'music' in the traditional sense:






Key Point #12

Labels & compartmentalization can be useful.  Labels & compartmentalization can be problematic.



During the course of our sound design study, we'll use labels.  We'll also sometimes not use labels.


If the label of 'music' is a problem with the previous piece of music we listened to, we could revise our label.  Or we could bin it.  Or we could choose to not put our items in that box.

We could put them in a different box, or not put them in a box at all.


Sometimes having items not in a box can be freeing.  But it is also often easier to lose or forget about items that are not in a box.




We could even make up our own label.  But if no-one else knows what our new label means, they may struggle with knowing which box we mean when we talk about our new label.




Things can be constructive until they become destructive.  If I love cakes, this can be constructive i.e cakes make me happy.  But if my love of cakes is such that I eat them until I make myself sick, my love of cakes is destructive.

We change as people.  At certain points in our life, certain quantities of things can be constructive.  At other times in our life, the very same quantities can be destructive.


We can think the same way with our music & creative processes.  If our focus on, say, harmony, is such that it is becoming destructive to our musical output, it is maybe worth thinking about how focused we are on harmony.



If we are asking ourselves something along the lines of 'is my focus on writing synth-only music negatively affecting my musical output?', the asking of the question in the first place would suggest the answer is 'yes', or 'certainly possibly'.  At the very least, we would do well in thinking to ourselves 'that is a good question'.



If something is perceived as a problem, it probably is a problem.  If writing synth-only music isn't a problem, then it isn't a problem.  If writing traditional acoustic music isn't a problem, writing traditional acoustic music isn't a problem.

Things that aren't problems aren't problems.  But things that are problems are problems.  Until they aren't.




If a sound is effective in achieving a desired outcome, it is a useful sound.  Whether a sound is 'musical' or not is perhaps a restrictive way of viewing sound.

Of course, don't use sounds simply for the sake of it.  Conversely, don't place boundaries on yourself simply for the sake of it.



Key Point #13

In music, there is no rulebook restricting us.  The only boundaries that exist are those we place upon ourselves.



----------------------------------------



Children & Sound Design


Expert engaged in sound design.


Children - especially young children - are natural sound designers.

Children are also natural sound discoverers.

Think of how often children find interesting sounds.  Think of how often adults find interesting sounds.

Think of the last time someone at work told you about a great sound they discovered.



Children couldn't care less about how difficult or complicated something is, or about what others think of the sounds they make.  They enjoy and make sounds for the sake of it.

If they lose interest in something, they are more than happy to abandon it.  Children abandon sounds all the time.

Children also abandon adults all the time, and are happy to walk away mid-conversation.  This is very liberating.


Being able to abandon something is an important aspect of working in the creative realm.  As is being able to stick with something.




Children ask fundamental questions that reveal many assumptions adults make with regard to music/sound, e.g 'does it have to be played that way?', 'what does it mean to be in tune?' etc.

Many adults struggle to answer questions like this, and become annoyed with children for asking questions in the first place.




We'll look at this mindset in more detail in forthcoming articles, as approaching sound in the manner of a young child is of key importance in sound design (the relevance of this will be illustrated in full with the first sound design brief we'll be working on).



Life can be simple, or complex.  Author/poet James Richardson says 'Perhaps our lives get complicated because complexity is so much simpler than simplicity'.

I think this is true.


We often view our lives as a complex mass.  We often make our lives a complex mass.  And a complex mess.

But our lives aren't really that complicated.  Or as messy as we'd like ourselves & others to believe.



Perhaps complexity is a mask.


We can add another important point:


Key Point #14

Sound design can be as complex as a maths PhD, or as simple as bashing two objects together.



In both instances, the important part is having something to say; having a story to tell.

Plunging deep into wave analysis is as pointless as recording the sound of a boiled egg thrown against a brick wall if we have nothing to say with the end result of either.



If someone is having fun making noises by randomly turning dials on their synth, who are we to tell them otherwise?

If someone is deep in wave research with the intent of pushing sound design forward, who are we to comment?



Expert engaged in sound design.


----------------------------------------


The Essence of Successful Sound Design


The essence of successful sound design (and successful composition) can, at the fundamental level, be reduced to two simple questions:
  1. What are we trying to express?
  2. Who are we addressing?
(...perhaps the above questions are deceptively simple rather than 'simple' in actuality!)

Point 2 can initially appear to be an afterthought/of little importance, but consider a conversation on the topic of death with an adult compared to the same conversation with a 7-year-old child.



Real-World Example of the Importance of Intended Audience

Imagine you are contacted by someone and asked to create a lullaby for a scene in a movie.

What kind of sounds/musical language are you imagining?


Your ideas of a lullaby will probably be similar to my ideas of a lullaby:

  • Soft, gentle sounds
  • Instruments associated with childhood
  • Quite a high melody
  • Quite slow
etc

...but all of the above are ideas of what a lullaby is based on our own cultural background.



Here is a popular Arab lullaby:





 

...Not quite what you were expecting a lullaby to sound like?!


To Western ears, the track probably sounds quite harsh and alien.  And certainly nothing close to what we imagine when we think of the term 'lullaby'.

But is it harsh and alien - or is this more a case of us seeing through our own cultural lens?



We all see through the lens of the culture(s) we are raised in; this isn't to be seen as a negative (we all require an identity of some sort).  But there can be a problem if we are attempting to communicate with a culture that isn't our own and we approach that culture with our own perception of what something is.

This works both ways; consider a Middle-Eastern composer writing a lullaby for a Western production.  If their idea of 'lullaby' matches the above example, the end product will be a piece of music that is alien to the Western conception of 'lullaby'; their music will fail to communicate with the intended audience.




The above should illustrate the point RE the importance of intended audience.  Imagine if you spent months creating your beautiful lullaby, only to discover that the music was destined for a Middle-Eastern audience!  Hence communication at the early stages of a project is very important.




Age, as said, is an important factor RE intended audience.  Consider the difference in response when asking a 9-year-old what 'fun' is compared to asking a 60-year-old.

Consider the difference between designing spaceship sounds for a children's cartoon and designing spaceship sounds for an alien horror movie.



In the commercial music industry, the term 'Target Market' is often used when discussing the intended audience.


Revisiting the above lullaby example: if I were contacted and asked to create a lullaby for a movie, before writing a single note my first question would be 'who is the target market?'.  The answer would determine the direction I would take with the music.


With our own personal projects, there isn't necessarily a target market in mind.  I often write pieces of music simply because I like the sound of what I'm working on!  Don't feel like you always have to write for an intended audience.  But if you are working on a commercial product (i.e a product created to connect with a specific audience), understanding the target market is vital.



Key Point #15

Understanding how to effectively communicate with the intended audience ('Target Market') is vital.  Without this understanding, communication often collapses. 


----------------------------------------


Homework Task

This homework exercise may prove very difficult, as it may be asking you to think in a way in which you don't normally think.

The task is:

Make a list of some of your favourite sounds.



For the purposes of illustration, I'll list some of my own favourite sounds:  

  • A jigsaw piece clicked into place when it is surrounded on all other sides by jigsaw pieces
  • A measuring tape from a sewing kit pulled tight
  • A measuring tape from a sewing kit held up and left to unfurl
  • A hand-sized rock tapped against the bark of a silver birch tree
  • A creaky floorboard when you didn't expect to stand on a creaky floorboard
  • Pinging a spring door stopper
  • Rain on the window from in bed/under the covers
  • Soft rain on a tent
  • A pencil being sharpened with a metal sharpener
  • People's voices when they become distant and muted when you drift into a daydream
  • Oystercatchers playing on the sand when the tide is out
  • The sea from a distance
  • The clicky button on a lamp
  • A chess piece with felt on the bottom when it is placed on a new square
  • The high zippy scratch sound when you scratch the sheet on your mattress
  • The combination of squelch and crunch when you stand in a puddle in autumn with leaves in it
  • The tight creaking sound of a tree branch when you bend it and just before it snaps
  • The stillness when you wake up in the morning and know it has been snowing outside even before you look out the window
  • Tiny little streams on hillsides
  • Big zips on backpacks
  • The odd skidding sound an apple makes when you've finished eating it and throw the core on the ground for the birds to eat
  • The gurgling sound the bath makes when the last of the water is running away
  • The gulp when you throw a large rock into a deep still part of the river
  • The sound a coin makes just before it stops when you spin it on a wooden table
  • The sound an ice cream cone makes when you accidentally drop it on the ground
  • The whooshing sound when you jump down from something high
etc.

If readers would like their own favourite sounds added, just let me know and I'll add a 'reader's list of favourite sounds' to this article.


Part 2 of the introduction will be posted soon.

All best
Kris


Tuesday, 12 February 2019

Skyline Forty Nine P Synth/Organ Teardown

Forty Nine P.  Rare piece of gear.

I recently picked up a Skyline Forty-Nine P.  More of an electric organ than a synth, but worth a look nonetheless. 

Presently it isn't working, but it should be back to full health soon.  Once working, I'll upload a selection of pieces/performances to YouTube.

Numerous photos below of the innards (plus the service manual). 

A stand was also included, but I haven't taken photos of it.  

The instrument is quite big - I haven't measured it yet, but it looks almost exactly the same size as the Matrixbrute.  Being 49-key, the size is going to be similar - but the main panel (metal) is bigger than I imagined it would be. 


External

Rear panel.  Note the broken/missing power switch.

Stand attachment point; 5 possible angles

Speaker on underside.  The sound of these is usually pretty strong.

Chord tonality selector.  Note the faux leather finish.

Voice switches

Mixer

Percussion section



Under the Hood/Service Manual

The service manual was stapled inside, to the R of the speaker.  




Almost 40 years old!  Vintage gear.








Internal

A few of the boards/components etc.

The tech spec/PDF's for most of the chips can be found online.  

Power supply unit




Percussion section

Voice board

Front of voice board

Main board, underneath the keys

Chips are also used on... Farfisa.  Good chips!  Circuit benders - note the Texas Instruments logo off to the R.  

More TI.  Once the machine is up & running, I'll maybe look into some circuit bending on the Skyline.
The main chips are possibly worth more than the instrument itself.


All best
Kris


Saturday, 26 January 2019

'Aviary': Polytemporal, Polymetric, Prime-Palindromic Music

Yes, the title is quite a mouthful!  

The material included here refers to my work Aviary.  Link below:




The work plays with time.  Time slows down and speeds up - we observe.

My idea with the work was floating through space amongst a flock of birds - and as with a flock of birds, some speed up, others slow down, and others maintain a steady pace.  ('Aviary' isn't the most appropriate title, as it implies 'caged' - but I like the sound of the word, and it is good enough.)  The idea with the sound was to create the effect of birds vanishing into a wormhole (hence the very deep 'gulps'), only to reappear in a different temporality.  

I'm not a big fan of movies, but a while ago I saw Interstellar at a friend's house.  It had an interesting scene where some of the crew descended to a planet - for a few minutes at most - and upon returning to their ship, the crew member who had chosen to remain aboard was many years older due to the time displacement.  With this piece I'm imagining a similar kind of perceptual experience - but we are the multi-dimensional beings observing the fluctuations of time; we are 'outside of time'.

PS If you don't like 'the cake unbaked', it is probably best simply to enjoy the music without reading the information below, as it plunges into compositional structure.  Decoding can spoil.  

It can also enlighten, so I'll leave it for my readers to decide.

Ultimately, the function of all this structure is to create something very expressive.  The structures themselves may be relatively 'hard', but I see these as leading to freedom - much like the chicken breaking out of the egg into a new world.

So - despite the technical focus below, it is worth keeping in mind the primary function is to create a new kind of beauty.


--------------------  


Poly-Palindromic Melodic Phrasing

This is something you'll likely have to be actively listening for to notice.  Otherwise, it can slip by with the listener completely unaware.  

If you listen to the melody, you'll hear there is a phrasing pattern.  Think of each phrase ending when I remove my hand from the keys (it should be obvious in the video).  

The number of notes in each phrase is always prime - with a palindromic prime pattern.  

For example:  

Phrase 1 = 2 notes long
Phrase 2 = 3 notes long
Phrase 3 = 5 notes long

etc etc

See the image below for the full phrasing pattern/sequence:  


Melodic phrasing sequence


The entire melody is a 31-step palindrome (31 is, of course, also prime), consisting of three (also prime) stacked palindromes.  

The central prime of the work (indicated in yellow) is also the 'master' time signature (explained in more detail further on in this article).

Have a look/listen again to the piece and you should be able to spot the pattern in the melody.  
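For illustration, here is a small Python sketch of the idea - climbing the consecutive primes to a centre, then mirroring back down.  (The centre of 13 and the exact shape are my own illustrative simplification; the actual melody stacks three such palindromes into the full 31-step sequence.)

```python
def is_prime(n):
    """Trial-division primality check (fine for small phrase lengths)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def palindromic_prime_run(centre):
    """Consecutive primes climbing up to `centre`, then mirrored back down."""
    up = [n for n in range(2, centre + 1) if is_prime(n)]
    return up + up[-2::-1]

run = palindromic_prime_run(13)
print(run)               # [2, 3, 5, 7, 11, 13, 11, 7, 5, 3, 2]
print(run == run[::-1])  # True - the phrase lengths form a palindrome
```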



What is the point of the above?  Many reasons.  The exploration of something new, of course.  

The palindromic sequencing also gives a very natural, organic feel.  The melody seems to act as a form of 'bellows' - the phrases grow/recede in a very natural manner.  

What could be more natural than prime numbers?


It is very interesting to give a listener a piece of music which is, on one level, a piece of music, but is also at a deeper level something else; something more.  I find a great beauty in this.  


--------------------


Polymetricism

Yes, another mouthful!  The easiest way to imagine 'polymetric' is to think of two sequencer parts of different lengths running together (at the same tempo).  

Imagine a sequence 5 notes long playing whilst a sequence 4 notes long is also playing.  

The result would be as of the number sequence below (consider the vertical alignment as beat alignment):


1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5
1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4
  



The next step of the above sequence would be a return to the beginning of the sequence (i.e '1-1' alignment).  

At the simplest level, multiply the sequence lengths to generate the Latch Point (i.e how long before the repeat/sync). 

This, however, isn't always true.  The multiplication shortcut only holds when the lengths share no common factors; in general, the Latch Point is the lowest common multiple of the sequence lengths (e.g. sequences of 4 and 6 latch after 12 beats, not 24).  

If one sequence length is a factor of the other, the Latch Point is simply the longest sequence.  


Assume we have a sequence of 8 and a sequence of 2.  The Latch Point isn't 16 (i.e we aren't multiplying), but rather, 8:

1 2 3 4 5 6 7 8 1 2 3 etc.
1 2 1 2 1 2 1 2 1 2 1 etc. 

Latch Point indicated in yellow.  
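The latch rule can be captured in a couple of lines of Python - the Latch Point of two looping sequences is their lowest common multiple, computed here via the standard-library gcd:

```python
from math import gcd

def latch_point(a, b):
    """Beats until two looping sequences of lengths a and b realign."""
    return a * b // gcd(a, b)

print(latch_point(5, 4))  # 20 - no common factor, so the lengths multiply
print(latch_point(8, 2))  # 8  - 2 divides 8, so the longer sequence wins
```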




With Aviary, three sequences are running.  The sequences are polymetric.  

They are:

1)  11 notes long
2)  13 notes long
3)  17 notes long 

i.e all sequence lengths are prime - consecutive primes, in fact - and there is also a prime number of sequences.  

Therefore, we have three options when ascribing a time signature to the piece.  The piece could be described as 11/16, 13/16, or 17/16.  All are correct.

Given the above are prime, they have no common factors.  The Latch Point for the triple polymetric sequence above is 11 x 13 x 17.  

It would take 2431 beats for the pattern to latch (!).  Yes, a long sequence!  
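The same calculation extends to any number of sequences by folding the pairwise lowest common multiple - a quick Python check of Aviary's three prime lengths:

```python
from math import gcd
from functools import reduce

def latch_point(lengths):
    """Beats until several looping sequences all realign (lowest common multiple)."""
    return reduce(lambda a, b: a * b // gcd(a, b), lengths)

print(latch_point([11, 13, 17]))  # 2431 - all prime, so the lengths simply multiply
```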


All possible sequences can very easily be generated - below is the generative algorithm for polymetric sequencing (shown for triple-polymetric sequences):




The above algorithm is, in one sense, the foundation of Buchla's 252e module (I plan to write an article on this module soon).  Unfortunately the Buchla module isn't as all-inclusive as the above - a shame, as it is potentially a stunning module.

We can infer from the above that there are 4096 possible triple-polymetric sequences for a 16-step sequencer (i.e 16 x 16 x 16).  

The above statement isn't fully accurate, as we can also add rests/empty steps.  And ties.  

The plot thickens...!!  
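To put a number on it: treating each step as simply note-or-rest (my own rough model - it ignores ties), a quick Python count shows how fast the possibilities grow beyond the basic 4096:

```python
# 4096 = the number of (length, length, length) triples for a 16-step sequencer
triples = 16 ** 3
print(triples)  # 4096

# Treating each step as note-or-rest multiplies each length-n sequence
# into 2**n variants (a rough model; ties are ignored):
per_voice = sum(2 ** n for n in range(1, 17))
print(per_voice)       # 131070 variants for a single voice
print(per_voice ** 3)  # total triple-polymetric patterns under this model
```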


--------------------

Polytemporal Music

In simple terms, polytemporal music is a piece of music where two or more tempi occur simultaneously i.e a piece where one player performs at, say, 112 BPM, whilst another player performs a second part at, say, 115 BPM.

Simultaneous tempi of 120 BPM/60 BPM wouldn't in the strictest sense be polytemporal, as the listener would perceive one player simply playing double/half the speed of the other (i.e quavers against crotchets).  Both players are still 'locked in' to a beat.
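One way to see the difference numerically (my own illustration, not part of the piece): for whole-number BPM values, two steady pulses only land on a shared beat every 60/gcd seconds - so 112/115 drift apart for a full minute, while 120/60 coincide constantly:

```python
from math import gcd

def realign_seconds(bpm_a, bpm_b):
    """Seconds until two steady pulses at different (integer) BPM share a beat."""
    # Beats fall at multiples of 60/bpm; the first common multiple is 60/gcd.
    return 60 / gcd(bpm_a, bpm_b)

print(realign_seconds(112, 115))  # 60.0 - a shared downbeat only once a minute
print(realign_seconds(120, 60))   # 1.0  - locked on every beat of the slower pulse
```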


In Aviary, the 13/16 sequence line is, on average, running at 173 BPM.  There are slight fluctuations: this is compensation to allow the three sequencer parts to sync when there is a change of mode.  The fluctuations aren't a sudden change of tempo, but rather, a very gradual drift to the temporal latch point.


The tempo of the other two sequences is in a state of flux, ranging from 137 BPM through to 191 BPM (again, both prime.  The upper bound, being palindromic, is a structural reference to the palindromic nature of the melody).  

I'm not manually altering the tempo (as the beat-matching would fall apart) - I pre-programmed multiple LFO's (summed) to create a waveform that acts as a modulator for the tempo/clock.  

In 'normal' music, tempo is as in image 1 below.  When using LFO's, we can create a tempo wave such as image 2:






I latched multiple triangle-wave LFOs to create the tempo oscillations for the piece (certain summing values can create a 'flat' wave i.e even tempo).  The low 'gulp' is also LFO-tempo controlled. 

This is essentially Harmonic Analysis in reverse (i.e additive synthesis): a very simple way to think of it is 'summing waves to make patterns'.   
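As a small illustration of the 'summing waves to make patterns' idea (generic values of my own, not the actual patch settings): two identical triangle LFOs summed half a cycle out of phase cancel into a flat line - i.e a steady tempo - while other rate/phase combinations give an undulating tempo curve:

```python
def tri(phase):
    """Unipolar triangle wave: 0 at phase 0, peaking at 1 when phase = 0.5."""
    x = phase % 1.0
    return 1.0 - 2.0 * abs(x - 0.5)

def tempo_at(t, base=160.0, depth=30.0, rates=(0.05, 0.08), phases=(0.0, 0.0)):
    """Base tempo modulated by a sum of triangle LFOs (illustrative values)."""
    mod = sum(tri(t * r + p) for r, p in zip(rates, phases)) / len(rates)
    return base + depth * (mod - 0.5)

# Two LFOs at the same rate, half a cycle apart: the triangles cancel,
# leaving a 'flat' (constant) tempo.
flat = [tempo_at(t, rates=(0.05, 0.05), phases=(0.0, 0.5)) for t in range(100)]
print(max(flat) - min(flat) < 1e-9)  # True
```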

Basic algorithm below outlining the process for Aviary.  It only shows the basics (and no values), but should give an idea of the scale of the task:






VCO's 4 & 5 are producing the 'gulp', with the other three producing the polytemporal sequencer lines.


--------------------

Aviary Deconstructed

Below is the audio for Aviary, but only the sequencer parts.  I've also included a basic visual map of tempo/time domain.

Modal transitions are also indicated on the temporal map (as changes of colour).

In terms of listening: I've removed the delays and hard-panned the voices.  Following the parts should be easier:

17/16 sequencer line = Left ear
11/16 sequencer line = Right ear
13/16 sequencer line = Centre  

PS I'd personally think of the work as 13/16, as this is the (most) constant throughout.  But I'd be happy for others to argue the case against.

Take note that all three sequencer parts are playing simultaneously (i.e don't read the music in the manner of a normal score, from L-R).
  


      

Hopefully the above sheds some light on the structure of this work.  There is no discussion of harmony here, as the focus of this short article is structure - but I'm sure some listeners will spot certain tools of the trade, i.e pitch-axis modal transitions etc.  

Note also the axis-scale degree transition in the final mode (i.e altering enharmonic degrees - hence the 'freshness' of the sound).  The mode count (5 modes) also keeps to the prime theme.

PS the melody being performed on the Vermona: very difficult to keep in time, given there are multiple tempi running!  Key to the structure of the melody is having the central 13-note phrase (highlighted in yellow on the melody image early in the article) at the centre of the piece, temporally.  This is the 13-note run that occurs during the F# overtone section.    

All best
Kris