micro.swarm


This piece is an early exploration of my ideas for interactive audio-visual computer art based on dynamic systems with their own emergent complexity. In such work the player is not a player in the sense of an instrumentalist who has absolute control of their tool; they are collaborating with the machine, at times allowing it to flow autonomously, at others exerting influence, coaxing and guiding it in a certain direction.

Occasionally, some sound that I make does manage to take a form that I’m actually willing to let other people listen to.  Here is one such example – a sample of ‘micro.swarm’, produced while I was studying Sonic Arts at Middlesex University.  Micro.swarm is a system for synthesising audio (and graphics) from the emergent behaviour of a swarm simulation, à la Craig Reynolds’ flocking boids.

[Audio excerpt: micro.swarm (MP3)]

To date, that is about the best it has ever sounded; I made the fatal error of ‘cleaning up’ the software without keeping any backup of that specific version, and have yet to recapture some of the original musicality, although I hope to revisit the work and build on it in various ways.

The audio can be controlled by exerting influence on the swarm – by manipulating parameters of the simulation such as the distance individuals try to maintain between themselves and their neighbours, how strongly they are attracted to a certain point, or the rate at which they can move.  In the audio excerpt above, a strong point of attraction is introduced briefly after a few seconds, causing a flurry of activity toward the middle of the space.  There is, at that point, a clear audible response to the action of the player… however, it is an essential property of such a system that the response is never totally direct: the player collaborates with the machine rather than commanding it.
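
For a sense of what that influence looks like in code, here is a stripped-down sketch of a boids-style update using just the parameters mentioned above (separation distance, attraction to a point, maximum speed). All names and constants are illustrative assumptions of mine, not the original micro.swarm code, and the usual cohesion and alignment rules are omitted for brevity.

import numpy as np

class Swarm:
    def __init__(self, n=50, separation_dist=0.5, attraction=0.1,
                 max_speed=2.0, attractor=None):
        self.pos = np.random.uniform(-1, 1, (n, 3))
        self.vel = np.zeros((n, 3))
        self.separation_dist = separation_dist  # preferred gap between neighbours
        self.attraction = attraction            # strength of pull toward the attractor
        self.max_speed = max_speed              # cap on how fast agents can move
        self.attractor = attractor              # e.g. a point the player introduces

    def step(self, dt=0.05):
        for i in range(len(self.pos)):
            steer = np.zeros(3)
            # Separation: push away from neighbours that are too close.
            offsets = self.pos[i] - self.pos
            dists = np.linalg.norm(offsets, axis=1)
            close = (dists > 0) & (dists < self.separation_dist)
            if close.any():
                steer += (offsets[close] / dists[close, None] ** 2).sum(axis=0)
            # Attraction: pull toward the player's point of influence.
            if self.attractor is not None:
                steer += self.attraction * (self.attractor - self.pos[i])
            self.vel[i] += steer * dt
            # Clamp speed so the rate of movement stays a controllable parameter.
            speed = np.linalg.norm(self.vel[i])
            if speed > self.max_speed:
                self.vel[i] *= self.max_speed / speed
        self.pos += self.vel * dt

Introducing a strong attractor, as in the excerpt, would amount to briefly setting self.attractor to a point near the centre of the space.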

Similar work has been done by others, most notably Tim Blackwell.  At the time I was working on this, I was aware of his ‘Swarm Music’, in which the motion of a swarm (itself influenced by realtime analysis of a human instrumentalist) is translated into sequences of notes played by a fairly conventional synthesiser.  In that approach, though, I felt that much of the richness of the patterns of motion in the swarm was lost.  On the other hand, I was also very interested in granular synthesis, and was well aware that common approaches to it use stochastic processes to generate the richness of data required to fuel hundreds of microsound events per second.  Granular synthesis and swarm simulation therefore seemed to offer the potential for a conceptually and technically beautiful symbiosis.  Such was the basis of micro.swarm.

As it turns out, Blackwell was working along very similar lines at much the same time.  It was pretty gratifying to find out later that he had produced ‘Swarm Granulator’ – which, while in many ways more sophisticated and more thoroughly developed than my own work, was at root much the same idea.  His paper on it was well received, winning best paper at EvoMUSART 2004.  More recently, I’ve encountered Daniel Jones’ ‘AutoSwarm’, a very nicely executed realisation of similar ideas, coincidentally also developed at Middlesex.  Seeing that work is what prompted me to post this here.

Technical notes

The positions and velocities of each agent in the system determined the parameters of a corresponding synthesiser.  Position in ‘x’ determined the panning, position in ‘z’ (depth) the amount of reverb to be applied, and ‘y’ the pitch.  As each event was triggered, its length and the time until the given agent’s next event were computed from the total velocity at the moment of onset, as well as from the pitch (lower-frequency sounds lasting longer than higher-frequency ones).  The basic idea was for the synthesis to be granular; I read most of Curtis Roads’ Microsound while working on it.  In practice, depending on the parameters used to control the system, the time scales of the sound events ranged from granular to much longer.
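
To make that mapping concrete, here is a rough sketch of how one agent’s state might be translated into the parameters of a single sound event. The scaling constants, pitch range, and function names are my own assumptions for illustration; the original mapping functions are not preserved here.

import numpy as np

def agent_to_grain(pos, vel, space=1.0):
    """Map one agent's position/velocity to parameters for one sound event."""
    x, y, z = pos
    pan = np.clip(x / space, -1.0, 1.0)                 # 'x' -> stereo panning
    reverb = np.clip((z + space) / (2 * space), 0, 1)   # 'z' (depth) -> reverb amount
    # 'y' -> pitch, mapped exponentially across a few octaves (assumed range).
    pitch_hz = 110.0 * 2 ** (3.0 * (y + space) / (2 * space))
    speed = np.linalg.norm(vel)
    # Faster agents and higher pitches give shorter events;
    # lower frequencies last longer, as described above.
    duration = 0.5 / (1.0 + speed) * (440.0 / pitch_hz)
    inter_onset = duration * 1.5    # time until this agent's next event
    return dict(pan=pan, reverb=reverb, pitch_hz=pitch_hz,
                duration=duration, inter_onset=inter_onset)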

In addition, the velocity in ‘z’ influenced the shape of the amplitude envelope: sounds from agents flying towards the viewer had a sharp attack and a longer decay, while those from agents flying away had a slower attack.
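
A corresponding sketch of that envelope asymmetry, again with an assumed sign convention for ‘z’ (negative velocity meaning motion toward the viewer) and made-up constants:

import numpy as np

def grain_envelope(n_samples, vz):
    """Build an amplitude envelope whose attack/decay split follows z velocity."""
    # Map z velocity to an attack fraction between 5% and 95% of the grain:
    # toward the viewer -> short attack, long decay; away -> the reverse.
    toward_viewer = np.tanh(-vz)            # +1 toward, -1 away (assumed sign)
    attack_frac = 0.5 - 0.45 * toward_viewer
    a = max(1, int(n_samples * attack_frac))
    attack = np.linspace(0.0, 1.0, a)
    decay = np.linspace(1.0, 0.0, n_samples - a)
    return np.concatenate([attack, decay])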