Showing posts with label processing. Show all posts

Tuesday, February 07, 2012

Local Colour: Smaller World Network

Back in September I showed a little work called Local Colour at ISEA 2011. This project continues my thinking about generative systems, materiality and fabrication. It's a work in two parts: the first is a group of laser-cut cardboard bowls, made from reclaimed produce boxes - you can see more on Flickr, and read the theoretical back-story in the ISEA paper. Here I want to briefly document the second element, a sort of network diagram realised as a vinyl-cut transfer. The diagram was created using a simple generative system, initially coded in Processing - it's embedded below in Processing.js form (reload the page to generate a new diagram).

Local Colour at ISEA 2011
Network diagrams are one of the most powerful visual tropes in contemporary digital culture. Drawing on the credibility of network science, they promise a paradigm that can be used to visualise everything from social networks to transport and biological systems. I love how they oscillate between expansive significance and diagrammatic emptiness. In this work I was curious to play with some of the conventions of small world or scale-free networks. A leading theory about how these networks form involves preferential attachment: put simply, it states that nodes entering a network will prefer to connect to those nodes that already have the most connections. In visualising the resulting networks, graph layout processes (such as force direction) use the connectivity between nodes to reposition the nodes themselves; location is determined by the network topology.



This process takes the standard small-world-network model and changes a few basic things. First, it assigns nodes a fixed position in space. Second, it uses that position to shape the connection process: here, as in the standard model, nodes prefer to connect to those with lots of existing connections. But distance also matters: connecting to a close node is "cheaper" than connecting to a distant one. And nodes have a "budget" - an upper limit on how far their connection can reach. These hacks result in a network which has some small world attributes - "hubs" and "clusters" of high connectivity - but where connectivity is moderated by proximity. Finally, this diagram visualises a change in one parameter of the model, as the distance budget decreases steadily from left to right. It could be a utopian progression towards a relocalised future, or the breakdown or dissolution of the networks we inhabit (networks in which distance remains, for the time being, cheap enough to neglect).

The process running here generates the diagram through a gradual process of optimisation. Beginning with 600 nodes placed randomly (but not too close to any other node), each node is initially assigned a random partner to link to. Then they begin randomly choosing new partners, looking for one with a lower cost - and cost is a function of both distance and connectivity. The Processing source code is here.
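The rewiring rule described above can be sketched in plain Java. The exact cost formula and its weights here are my assumptions for illustration - the original Processing source will differ in detail:

```java
// Sketch of the distance-budget rewiring logic. Cost rises with
// distance and falls with the partner's existing connectivity
// (preferential attachment); a candidate link is accepted only if
// it is cheaper AND within this node's distance budget.
class BudgetNetwork {
    // Cost of linking across offset (dx, dy) to a partner with
    // `partnerDegree` existing connections (formula assumed).
    static double cost(double dx, double dy, int partnerDegree) {
        double dist = Math.sqrt(dx * dx + dy * dy);
        return dist / (1 + partnerDegree);
    }

    // One optimisation step: swap to the candidate partner only if
    // it is both affordable and an improvement.
    static boolean accept(double currentCost, double candidateCost,
                          double candidateDist, double budget) {
        return candidateDist <= budget && candidateCost < currentCost;
    }
}
```

Decreasing `budget` from left to right across the canvas would then produce the gradient visible in the diagram.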


Sunday, June 06, 2010

Measuring Cup

Measuring Cup is a little dataform project I've been working on this year. It's currently showing in Inside Out, an exhibition of rapid-prototyped miniatures at Object gallery, Sydney.

This form presents 150 years of Sydney temperature data in a little cup-shaped object about 6cm high. The data comes from the UK Met Office's HadCRUT subset, released earlier this year; for Sydney it contains monthly average temperatures back to 1859.


The structure of the form is pretty straightforward. Each horizontal layer of the form is a single year of data; these layers are stacked chronologically bottom to top - so 1859 is at the base, 2009 at the lip. The profile of each layer is basically a radial line graph of the monthly data for that year. Months are ordered clockwise around a full circle, and the data controls the radius of the form at each month. The result is a sort of squashed ovoid, with a flat spot where winter is (July, here in the South).
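The layer mapping can be sketched like this (the method names, the clockwise sign convention and the linear radius mapping are illustrative assumptions, not the original code):

```java
// One layer of the cup: each month maps to an angle around a full
// circle, and that month's temperature maps to the radius there.
class CupLayer {
    // Month m (0-11), ordered clockwise around the circle.
    static double angle(int month) {
        return -2 * Math.PI * month / 12.0;  // negative = clockwise
    }

    // Linear map from temperature to radius (bounds assumed).
    static double radius(double temp, double tMin, double tMax,
                         double rMin, double rMax) {
        return rMin + (temp - tMin) / (tMax - tMin) * (rMax - rMin);
    }

    // x,y of the profile point for one month of one year-layer.
    static double[] point(int month, double r) {
        return new double[] { r * Math.cos(angle(month)),
                              r * Math.sin(angle(month)) };
    }
}
```

Stacking 150 such layers bottom to top, with z as the year, gives the cup form.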


The data is smoothed using a moving average - each data point is the average of the past five years' data for that month. I did this mainly for aesthetic reasons, because the raw year-to-year variations made the form angular and jittery. While I was reluctant to do anything to the raw values, moving average smoothing is often applied to this sort of data (though as always the devil is in the detail).
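A minimal version of that trailing moving average (the five-year window is from the text; averaging over a shorter window at the start of the series is my assumption about boundary handling):

```java
// Trailing moving average over one month's year-by-year series.
class Smoothing {
    static double[] movingAverage(double[] raw, int window) {
        double[] out = new double[raw.length];
        for (int i = 0; i < raw.length; i++) {
            // Early years: average over however many samples exist.
            int start = Math.max(0, i - window + 1);
            double sum = 0;
            for (int j = start; j <= i; j++) sum += raw[j];
            out[i] = sum / (i - start + 1);
        }
        return out;
    }
}
```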


The punchline really only works when you hold it in your hand. The cup has a lip - like any good cup, it expands slightly towards the rim. It fits nicely in the hand. But this lip is, of course, the product of the warming trend of recent decades. So there's a moment of haptic tension there, between ergonomic (human centred) pleasure and the evidence of how our human-centredness is playing out for the planet as a whole.


The form was generated using Processing, exported to STL via superCAD, then cleaned up in Meshlab. The render above was done in Blender - it shows the shallow tick marks on the inside surface that mark out 25-year intervals. Overall the process was pretty similar to that for the Weather Bracelet. One interesting difference in this case is that consistently formatted global data is readily available, so it should be relatively easy to make a configurator that will let you print a Cup from your local data.


Saturday, March 27, 2010

commonsExplorer

A quick bit of cross-promotion. The commonsExplorer is an experimental "big picture" browser for Flickr Commons collections - Sam Hinton and I started working on it for MashupAustralia months ago, and it's finally ready. Read some background over on the Visible Archive blog, or download the app and try it out.

commonsExplorer 1.0


Wednesday, October 07, 2009

Weather Bracelet - 3D Printed Data-Jewelry

Given my rantings about digital materiality and transduction, fabrication is a fairly obvious topic of interest. I posted earlier about an experiment with laser-cut generative forms and Ponoko - more recently I've been playing with 3d-printing via Shapeways, as well as trying out data-driven (or "transduced") forms. This post covers technical documentation as well as some more abstract reflections on this project - creating a wearable data-object, based on 365 days of local (Canberra) weather data.


Shapeways has good documentation on how to generate models using 3d-modelling software. Here I'll focus more on creating models using code-based approaches, and Processing specifically. The first challenge is simply building a 3d mesh. I began with this code from Marius Watz, which introduces a useful process: first, we create a set of 3d points which define the form; then we draw those points using beginShape() and vertex().

The radial form of the Weather Bracelet model shows how this works. The form consists of a single house-shaped slice, where the shape of each slice is based on temperature data from a single day. The width is static, the height of the peak is mapped to the daily maximum, and the height of the shoulder (or "eave") is mapped to the daily minimum. To create the radial form, we simply make one slice per day of data, rotating each slice around a central point. As the diagram below shows, this gets us a ring of slices, but not a 3d-printable form. As in Watz's sketch, I store each of the vertices in the mesh in an array - in this case I use an array of PVectors, since each PVector conveniently stores x, y and z coordinates. The array has 365 rows (one per day, for each slice) and 5 columns (one for each point in the slice). To make a 3d surface, we just work our way through the array, using beginShape(QUADS) to draw rectangular faces between the corresponding points on each of the slices.
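The quad-stitching loop can be sketched in plain Java, with flat index arithmetic standing in for the PVector array. The wrap-around at both the last slice and the last profile point follows from the closed ring form; the helper name is mine:

```java
import java.util.ArrayList;
import java.util.List;

// Emits the index quads that the nested beginShape(QUADS) loop
// would draw: one rectangular face between corresponding points
// of neighbouring slices, wrapping around the ring.
class QuadStitcher {
    // For `slices` slices of `points` points each, return faces as
    // four flat vertex indices (sliceIndex * points + pointIndex).
    static List<int[]> quads(int slices, int points) {
        List<int[]> faces = new ArrayList<>();
        for (int s = 0; s < slices; s++) {
            int s2 = (s + 1) % slices;      // wrap the ring of slices
            for (int p = 0; p < points; p++) {
                int p2 = (p + 1) % points;  // wrap the closed profile
                faces.add(new int[] {
                    s * points + p,   s2 * points + p,
                    s2 * points + p2, s * points + p2 });
            }
        }
        return faces;
    }
}
```

With 365 slices of 5 points this yields 1825 quads - every face of the ring.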


To save the geometry, I used Guillaume LaBelle's wonderful SuperCAD library to write an .obj file. I then opened this in MeshLab, another excellent open source tool for mesh cleaning and analysis. Because of the way we draw the mesh, it contains lots of duplicate vertex information; in MeshLab we can easily remove duplicate vertices and cut the file size by 50%. MeshLab is also great for showing things like problems with normals - faces that are oriented the wrong way. When generating a mesh with Processing, the order in which vertices are drawn determines which way the face is ... er, facing... according to the right hand rule. Curl the fingers of your right hand, and stick up your thumb: if you order the vertices in the direction that your fingers are curling, the face normal will follow the direction of your thumb. Although Processing has a normal() function that is supposed to set the face normal, it doesn't seem to work with exported geometry. Anyhow, the right hand rule works, though it is guaranteed to make you look like a fool as you contort your arm to debug your mesh-building code.
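The right-hand rule is just a cross product; a quick helper like this (not from the original code) lets you check winding order numerically, without the arm contortions:

```java
// Face normal of triangle (a, b, c) via the cross product
// (b - a) x (c - a). Counter-clockwise vertex order, seen from the
// viewer's side, yields a normal pointing towards the viewer.
class FaceNormal {
    static double[] normal(double[] a, double[] b, double[] c) {
        double[] u = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        double[] v = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        return new double[] {
            u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0] };
    }
}
```

A counter-clockwise triangle in the xy-plane gives a normal along +z; reverse the vertex order and the sign flips - exactly the inverted-face problem MeshLab highlights.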

The next step in this process was integrating rainfall into the form. I experimented with presenting rainfall day-by-day, but the results were difficult to read; I eventually decided to use negative spaces - holes - to present rainfall aggregated into weeks. Because Shapeways charges by printed volume, this had the added attraction of making the model cheaper to print! The process here was to first generate the holes in Processing as cylindrical forms. Unlike the base mesh, each data point (cylinder) is a separate, simple form: this meant I could take a simpler approach to drawing the geometry. I wrote a function that would just generate a single cylinder, then using rotate() and scale() transformations made instances of that cylinder at the appropriate spots. Because I wanted the volume of each cylinder to map to rainfall, the radius of each cylinder is proportional to the square root of the aggregated weekly rainfall. As you can see in the grab below, the base mesh and the cylinders are drawn separately, but overlaid; they were also saved out as separate .obj files. The final step in the process was to bring both cleaned-up .obj files into Blender (more open source goodness) and run a Boolean operation to literally subtract the cylinders from the mesh. This took a while - Blender was completely unresponsive for a good few minutes - but worked flawlessly.
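The square-root mapping follows directly from the cylinder volume formula: with a fixed height, volume grows with radius squared, so radius must follow the square root of rainfall for volume to track it linearly. A tiny sketch (the scale constant k is purely illustrative):

```java
// Rainfall-to-radius mapping: volume = pi * r^2 * h, so with fixed
// height, r proportional to sqrt(rain) makes volume proportional
// to rain.
class RainHole {
    static double radius(double weeklyRainMm, double k) {
        return k * Math.sqrt(weeklyRainMm);
    }
    static double volume(double radius, double height) {
        return Math.PI * radius * radius * height;
    }
}
```

Four times the rainfall yields four times the (subtracted) volume, not four times the radius.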





Finally, after checking the dimensions, exporting an STL file from MeshLab, and uploading to Shapeways, the waiting; then, the printed form. I ordered two prints, one in Shapeways' White, Strong and Flexible material, and the other in Transparent Detail. You can clearly see the difference between the materials in these photos. The very small holes tested the printing process in both materials; in the SWF print the smallest holes are completely closed; in the TD material they are open, but sometimes gummed up with residue from the printing process (which comes out readily enough). Overall I think the TD print is much more successful - I like the detail and the translucency of the material, as well as the cross-hatched "grain" that the printing process generates.






So, a year of weather data, on your wrist - as a proof of concept the object works, but as a wearable and as a data-form it needs some refinement. As a bracelet it's just functional - the sizing is about right, but the sharp corners of the profile are scratchy against the skin. As a data-form, it could do with some simple reference points to make the data more readable - I'm thinking of small tick-marks on the inner edge to indicate months, and perhaps some embossed text indicating the year and location. More post-processing work in Blender, I think.

Another line of development is to do versions with other datasets - and hey, if you'd like one for your city, get in touch. But that also raises some tricky questions of scaling and comparability. The data scaling in this form has been adjusted for this dataset; with another year's data, the same scaling might break the form - rain holes might eat into the temperature peaks, or overlap each other, for example. A single one-size-fits-all scaling would allow comparisons between datasets, but might make for less satisfying individual objects - and finding that scaling requires more research.


What has been most enjoyable with this project, though, is the immediate reaction the object evokes in people. The significance and scale of the data it embodies seem to give it a sense of value - even preciousness - that has nothing to do with the cost of its production or the human effort involved. The bracelet makes weather data tangible, but also invites an intimate, tactile familiarity. People interpret the form with their fingers, recalling as they do the wet Spring, or that cold snap after the extreme heat of February; it mediates between memory and experience, and between public and private - weather data becomes a sort of shared platform on which the personal is overlaid. The form also shows how the generalising infrastructures of computing and fabrication can be brought back to a highly specific, localised point. This for me is the most exciting aspect of digital fabrication and "mass customisation" - not more choice or user-driven design (which are all fine, but essentially more of the same, in terms of the consumer economy) - but the potential for objects that are intensely and specifically local.


Sunday, August 23, 2009

Tiny Sketching

As a kind of test pattern to fill the current break in transmission, here are my contributions to Tiny Sketch, an OpenProcessing / Rhizome competition (open until mid September) for Processing sketches under 200 characters.

In Bit Sunset I just load the pixels[] array, pick a random block of pixels, and add a large number to their value. This process throws up some surprising results as the colour values gradually increase, then start pushing into the alpha bits of the ARGB integer; eventually, as it fills the alpha bits, it settles into a palette of pinks and greens that are gradually smashed into pixel-dust.
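The mechanics of that overflow are easy to demonstrate with packed ARGB ints. This is a stand-alone illustration, not the sketch's actual code (which, being a Tiny Sketch, fits in 200 characters):

```java
// Processing colours are packed ARGB ints: alpha in the top byte,
// then red, green, blue. Repeatedly adding a constant eventually
// carries past the blue/green/red bytes and into the alpha byte.
class ArgbDrift {
    static int alpha(int argb) { return (argb >>> 24) & 0xFF; }
    static int red(int argb)   { return (argb >>> 16) & 0xFF; }

    static int drift(int argb, int delta, int steps) {
        for (int i = 0; i < steps; i++) argb += delta;
        return argb;
    }
}
```

Adding 1 to a pixel whose low three bytes are already 0xFFFFFF carries straight into the alpha byte - the moment the sunset starts dissolving.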


Albers Clock was an attempt to slow the pace of TinySketch even further; it visualises the current time in the form of an Albers square, with three colours, one each for hour, minute and second. I also like that it creates an image that is synchronous (within timezones, at least), unlike the asynchronous, individualised runtimes of most sketches.


There are dozens of amazing sketches in this collection - it's a fascinating microcosm (in every sense) of the current Processing / generative / code art scene. Given the tight constraints it's not surprising to see some demoscene virtuosity in the code - like Martin Schneider's Sandbox, a physical simulation painting app in 200 characters. There is also some classic software art conceptualism and reflexivity - like Jerome St Clair's Joy Division cover and Kyle MacDonald's Except. Great to see projects like this - and OpenProcessing itself - reviving applet culture in an open source, web2.0-flavoured way.


Sunday, March 15, 2009

Watching the Street (Navigator) / citySCENE

Vague Terrain 13: citySCENE has just launched. As editor Greg J. Smith writes:

This issue of Vague Terrain is founded on two notions - that the city is a stage set for intervention and an engine for representation.
The collection expands out from this premise in multiple directions: carto-mashups, projection-bombing, sound walks, psychogeographic imaging and ubicomp experiments. Early highlights for me included Crisis Fronts' Cognitive Maps and Database Urbanisms, which presents some impressive work on data visualisation and generative models as urban mapping strategies (below: Case Study: Los Angeles). Overall, on a first look, this collection is incredibly rich. It shows that a creative, wired-up, critical urbanism is not just a wistful aspiration of the technorati, but a real practice.


Having said all that, it's a privilege to be a part of this collection. My contribution is Watching the Street (Navigator), a browsable visualisation of a single day of images from the Watching the Street dataset. It tests out the hunch that these time-lapse slit-scans can be used to read real patterns in the urban environment - that they are (or can be) more than just suggestive abstractions. It uses a simple interface to display both a single source frame, and a correlated slit-scan visualisation, with image-space and time-space sharing an axis, a bit like a slide rule. Greg Smith called it an "urban viewfinder", which sums the intention up nicely.


Playing with the navigator for a while seems to confirm that hunch. The composites reveal temporal patterns in the environment, but not the spatial context that allows us to identify their causes; the source frames show that spatial context, but not the change over time. Reading the two against each other involves chains and cycles of discovery, analysis and inference. These might be open-ended (spatiotemporal browsing) or more directed. What time do the sandwich-boards go out? How long does the delivery truck stay?

Building the navigator presented some interesting technical challenges: mainly, how to make a web-friendly interface to 1440 source frames (240 x 320) and 480 slit-scan composites (720 x 320). That adds up to about 75Mb of jpegs. Processing 1.0 came to the rescue, with its new built-in dynamic image loader. requestImage() pulls in an image from a given URL, on cue, without bringing the whole applet to a grinding halt; it provides some basic feedback on the state of that image - whether it's loading, loaded, or un-loadable. I also blundered into two other useful lessons: how to use the applet "base" parameter, and how to manage Java's local cache, which kept throwing up earlier versions of the applet during testing.

Having made a lean, mean, browser-friendly version, I'm now thinking of adapting the navigator into a full-screen, offline app, with the whole eight-day dataset, and perhaps some tools for annotation and intra-day comparison. Best of all would be a long term installation; a sort of urban space-time observatory, watching the street but also opening it up to ongoing interpretation. If you'd like it running in your foyer, let me know.


Friday, January 16, 2009

JCSMR Curls

This post is (belated) documentation of a project I worked on in 2007-8, creating an audio-responsive generative system for a permanent installation for the Jackie Chan Science Centre (yes, that Jackie Chan) at the John Curtin School of Medical Research, on the ANU campus. Along with some Processing-related nitty gritty, you'll find some broader reflections on generative systems and the design process. For less process and more product, skip straight to the generative applets (and turn on your sound input).

In mid 2007 my colleague Stephen Barrass and I were approached by Thylacine, a Canberra company specialising in urban art, industrial and exhibition design. Caolan Mitchell and Alexandra Gillespie were designing a new permanent exhibition, the first stage of the new Jackie Chan Science Centre, housed in a new building - a razor-sharp piece of contemporary architecture (below) by Melbourne firm Lyons. Instead of just bolting a display case and a few plaques to the wall, Mitchell and Gillespie (wonderfully) proposed a design that hinged on a dynamic generative motif - a system that would ebb and flow with its own life cycles, and echo the spiral / helix DNA structures central to the School's work, and already embedded in the building's architecture.


My initial sketches (below) took the spiral motif fairly literally, drawing vertical helices and varying their width with a combination of mouse movement and a simple sin function - the results reminded me of the beautiful spiral egg cases of the Port Jackson Shark. At that stage we were talking about the possibility of projecting back onto the facade of the building, which has big vertical glass panels; this structure informed the vertical format. I made a quick video mockup of the form on the facade - which was incredibly easy, thanks to the robust, adaptable, extendable goodness of Processing (a recurring theme in the process to come).


These sketches meet the simplest criteria of the brief (spiral forms) but do nothing about the more interesting (and difficult) ones: cycles of birth, growth and death, and dynamics over multiple time scales. Over the next couple of months I developed two or three different approaches to this goal.

The phyllotaxis model blogged earlier was one attempt. Spurred on by the hardcore a-life skills of Jon McCormack and co. at CEMA, I built a system in which phyllotactic spirals self-organised spontaneously. Because, in Jon's words, anyone can draw a spiral, what you really want is a system out of which spirals emerge! The model worked, but I had trouble figuring out how phyllotactic spiral forms might meaningfully die or reproduce. Also, by that stage I had two other systems that seemed more promising.

From the early stages I wanted to make the system respond to environmental audio. The installation would be in a public foyer with plenty of pedestrian traffic, so audio promised a way to tap in to the building's rhythms of activity at long time scales, as well as convey an instantaneous sense of live interaction. In the two most developed sketches audio plays a key role in the life cycle of the system.

One sketch moved into 2d, and started with a pre-existing model for growth, by way of the Eden growth algorithm (this system would later be adapted again into Limits to Growth). I had already been playing with an "off-lattice" Eden-like system where circular cells could grow at any angle to their parent (rather than the square grid of the original Eden model). This system also made it easy to vary the radius of those cells individually. The next step was to couple live audio to the system; following a physical metaphor, frequency is mapped to cell size, so that larger cells respond to low frequency bands, and smaller cells to high frequencies. Incoming sound adds to the cell's energy parameter; this energy gradually decays over time in the absence of sound. Cell reproduction, logically enough, is conditional on energy.


The result is that cells which are best "tuned" for the current audio spectrum will accumulate more energy, and so are more likely to reproduce, spawning a neighbour whose size (and thus tuning) is similar to, but not the same as, their own; so over time the system generates a range of different cell sizes, but only the well-tuned survive. The rest die, which in the best artificial life tradition, means they just go away - no mess, no fuss. In the image below cells are rendered with stroke thickness mapped to energy level. The curves and branches pop out of rules sprinkled lightly with random(), resulting in a loose take on the spiral motif, which is probably the weak point in this sketch. I still think it has potential - nightclub videowall, anyone? Try the live applet over here (adjust your audio input levels to control the growth / death balance).
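The tuning-and-energy rule might look something like this (the triangular response curve, the decay model and all the constants are my guesses at one possible implementation, not the original code):

```java
// One audio-driven Eden cell: its size sets which frequency band
// it responds to, matching audio charges it, time decays it, and
// reproduction is gated on an energy threshold.
class EdenCell {
    double energy = 0;
    final double size;              // big cell -> low band

    EdenCell(double size) { this.size = size; }

    // How strongly this cell responds to a band (simple triangular
    // match between the band and the cell's tuning, both assumed).
    double response(double band) {
        double tuning = 1.0 / size;
        return Math.max(0, 1 - Math.abs(band - tuning));
    }

    // Per-frame update: decay old energy, add matched audio energy.
    void step(double band, double level, double decay) {
        energy = energy * decay + response(band) * level;
    }

    boolean canReproduce(double threshold) {
        return energy > threshold;
    }
}
```

A child would then be spawned with a size similar to, but not equal to, its parent's - which is where the selection for "well-tuned" cells comes from.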


The third model takes this approach to energy and reproduction - about the simplest possible a-life simulation - and folds it back into the helical structures of the first sketches. In this world an individual is a 3d helix, built from simple line segments. Again each individual is tuned to a frequency band, which supplies energy for growth; but here "growth" means adding segments to the helix, extending its length. Individuals can "reproduce", given enough energy, but here reproducing means spawning a whole new helix, with a slightly mutated frequency band. All the helixes grow from the same origin point - they form a colony, something like a clump of grass.


This sketch went through many variants and iterations over the next month or so; in retrospect the process of working to a brief, within a design team, pushed this system further than I ever would have taken it myself. At the same time I was testing the system against my own critical position; I've argued earlier that the generative model matters, not just for its generativity but the entities and relations it involves.


From that perspective this system was full of holes. Death was arbitrary: just a timer measuring a fixed life-span. "Growth" was a misnomer: the number of segments was simply a rolling average of the energy in the curl's frequency band, so the curls were really no more than slow-motion level meters. Taking the organic / metabolic analogy more seriously, I worked out a better solution. An organism needs a certain amount of energy just to function; and the bigger the organism, the more energy it needs. If it gets more than it needs, then it can grow; if it gets less than it needs, for long enough, it will die. So this is a simple metabolic logic that can link growth, energy and death. Translated into the world of the curls: for each time step, every curl has an energy threshold, which is proportional to its size (in line segments); if the spectral energy in its band is far enough over that threshold, it adds a segment - like adding a new cell to its body; if the energy is under that threshold, it doesn't grow; and if it remains in stasis for too long, it dies. Funnily enough, the behaviour that results is only subtly different to the simple windowed average. Does the model really matter, in that case? It does for me at least; if and how it matters for others is another question.
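That metabolic logic is compact enough to sketch directly (costPerSegment, surplusMargin and stasisLimit are invented parameter names, not the original's):

```java
// Metabolic rule for one curl: maintenance cost scales with body
// size; surplus energy grows a segment; prolonged stasis kills it.
class CurlMetabolism {
    int segments = 1;
    int stasisSteps = 0;
    boolean alive = true;

    void step(double bandEnergy, double costPerSegment,
              double surplusMargin, int stasisLimit) {
        if (!alive) return;
        double threshold = segments * costPerSegment;
        if (bandEnergy > threshold + surplusMargin) {
            segments++;          // enough surplus: grow a segment
            stasisSteps = 0;
        } else if (++stasisSteps > stasisLimit) {
            alive = false;       // starved for too long: die
        }
    }
}
```

Because the threshold rises with every segment, growth is self-limiting: a curl eventually outgrows its frequency band's energy supply and enters stasis.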


Next, the curls developed a more complex life-cycle - credit to Alex Gillespie for urging me in this direction. In line with the grass analogy, curls grow a "seed" at their tip when they are in stasis; when they die, that seed is released into the world. Like real seeds, these can lie dormant indefinitely before being revived - here, by a burst of energy in their specific frequency band. After several iterations, the seed form settled on a circle that gradually grows spikes, all the while being blown back "down" the world (against the direction of growth) by audio energy (below). As well as adding graphic variety, seeds change the system's overall dynamics. Unlike spawned curls, seeds are genetically identical to their "parent" - attributes such as frequency band are passed on unaltered. Because each individual can make only one seed, that seed is a way for the curl to go dormant in lean times; if it gets another burst of energy, it can be reborn. The curls demo applet demonstrates this best (again, adjust your audio input and make some noise).


A few technical notes. One big lesson here was the power of transform-based geometry. Each curl is a sequence of line segments whose length relates to frequency band (lower tuned curls have longer segments); each segment is tilted (rotateZ), then translated along the x axis to the correct spot. A sine function is used to modulate the radius of each curl along its length; frequency band factors in here too; this radius is expressed as a y axis translation. Then the segment is rotated around the x axis, to give depth. I iterate this a few hundred times to get one curl, and repeat this process up to twenty times to draw the whole world - each curl has its own parameters for tilt, x rotation increment, and frequency band.
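The transform chain above collapses into explicit coordinates like so (parameter names are mine; the real sketch composes rotateZ / translate / rotateX calls rather than computing points directly):

```java
// One point of a curl, written as explicit maths: step along x,
// offset by a sine-modulated radius, then spin around the x axis
// to give the curl depth.
class CurlGeometry {
    // Point i of a curl with segment length L, radius amplitude A,
    // radial frequency f, and x-rotation increment dTheta.
    static double[] point(int i, double L, double A, double f,
                          double dTheta) {
        double x = i * L;                  // translate along x
        double r = A * Math.sin(f * i);    // sine-modulated radius
        double theta = i * dTheta;         // rotation around x axis
        return new double[] { x, r * Math.cos(theta),
                                 r * Math.sin(theta) };
    }
}
```

Iterating i a few hundred times traces one curl; varying L, f and dTheta per individual gives each curl its own character.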

In the live applet audio energy ripples up the curls, from base to tip. This was added to reinforce the liveness of the system and add some rapid, moment-by-moment change. It was implemented very simply. I used a (Java) ArrayList to create a stack of audio level values; at each time step, the current audio level value is added at the head of the list, and the ArrayList politely shuffles all the other values along. So each segment's length is a combination of three values; the base segment length, a function to taper the curl towards the tip, and the buffered audio level.
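A stand-alone version of that level buffer (the capacity cap is my addition; an unbounded list works just as well if only the first N entries are ever read):

```java
import java.util.ArrayList;
import java.util.List;

// Stack of recent audio levels: the newest value goes in at the
// head, so index i holds the level from i steps ago - which is
// what makes the ripple travel from base to tip along the curl.
class LevelBuffer {
    final List<Double> levels = new ArrayList<>();
    final int capacity;

    LevelBuffer(int capacity) { this.capacity = capacity; }

    void push(double level) {
        levels.add(0, level);            // newest at the head
        if (levels.size() > capacity)
            levels.remove(levels.size() - 1);  // drop the oldest
    }

    double at(int stepsAgo) { return levels.get(stepsAgo); }
}
```

Segment i of a curl then reads `at(i)` as the third factor in its length, alongside the base length and the tip taper.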


The graphics are all drawn with OpenGL - following flight404 I dabbled with GL blend modes, specifically additive blending, to get that luminous quality. The other key visual device here is the smearing caused by redrawing with a translucent rect(); instead of erasing the previous frame completely this fades it before overlaying the new frame. It's an easy trick that I've used before. But as Tom Carden explains, in OpenGL it leaves traces of previous frames. I discovered this firsthand when Alex and Caolan asked whether we could lose the "ghosts." I was baffled: on my dim old Powerbook screen, I simply hadn't seen them. Eventually, juggling alpha values I could reduce the "ghosts" to almost black (1) against the completely black (0) background - but no lower. Finally I just set the initial background to (1) instead of (0), and the ghosts were gone.


The adaptability of Processing came through again when it came to realising the installation. The final spec was a single long custom-made display case, with three small, inset LCD panels. These screens would run slide shows expanding on the exhibition content, but also feature the generative graphics when idle; the case itself would also integrate the curls as a graphic motif. For the case graphics, I sent Thylacine an applet that output a PDF snapshot on a key press; they could generate the graphics as required, then import the files directly into their layout.

The screens posed some extra challenges. The initial idea was to have the screens switch between a Powerpoint slideshow, and the curls applet; but making this happen without window frames and other visual clutter was impossible. In the end it was easier to build a simple slide player into the applet: it reads in images from an external folder, allowing JCSMR to author and update the slideshow content independently.

So to wrap up the Processing rave: it provided a single integrated development and delivery tool for a project spanning print, screen, audio, interaction, animation and even content management. Being able to burrow straight through to Java is powerful. Development was seamlessly cross-platform; the whole thing was developed on a Mac, and now runs happily on a single Windows PC with three (modest) OpenGL video cards. The installation has run daily for over six months, without a hitch (touch wood).

Some installation shots below, though it's hard to photograph, being a glass fronted cabinet in a bright foyer - reflection city. I'll add some better shots when I can get them. If you're in Canberra, drop in to the JCSMR - worth it for the building alone - and see it in person.





And very finally, photographic proof of the Jackie Chan connection - image from The Age.


Thursday, November 27, 2008

Watching the Street

wts_out_1112
The recent Dorkbot show seemed to go off nicely - it was great to be part of such a strong show of local work (some documentation). I showed some prints from Limits to Growth, as well as a more experimental process piece, Watching the Street - a (sub)urban remake of Watching the Sky.


Credit to Nathan McGinness for the suggestion: use the same time-lapse / slit-scan technique to image change in an urban environment. Technically, the setup was fairly straightforward. Instead of a digital stills camera I used a webcam (in portrait orientation), and wrote a simple Processing script to save stills at one-minute intervals, while extracting and compiling one-pixel slices into 24-hour composites. The webcam was installed in a window box on the gallery street front, with a view across the road, under a street tree, to one of Manuka's low-rise shopping arcades (above). I also attached a printer to the installed rig, so that a new composite could be produced and pinned to the wall each day. So here, some of the resulting images, and a bit of commentary.
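The core slit-scan operation is a single column copy per captured frame; a minimal sketch with row-major ARGB arrays, as in Processing's pixels[] (function and parameter names are mine):

```java
// Slit-scan compositing: each captured frame contributes one
// pixel-wide column (at a fixed slice x) to the growing composite.
class SlitScan {
    // Copy column `sliceX` of a w-by-h frame into column `minute`
    // of a composite image that is `minutes` columns wide.
    static void addSlice(int[] frame, int w, int h, int sliceX,
                         int[] composite, int minutes, int minute) {
        for (int y = 0; y < h; y++) {
            composite[y * minutes + minute] = frame[y * w + sliceX];
        }
    }
}
```

At one frame per minute, 1440 calls fill a 1440-column, 24-hour composite.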

The image-gathering process got off to a rocky start. After a few hours, the webcam came unstuck from the side of the window-box, and lay forlornly on its side for the next 48 hours (here's what that looks like). I gaffed it back in place just before the opening, and restarted the capture in time to catch some gallery-goers loitering around out the front.

wts_out_1107
wts_out_1108
These two are Friday the 7th and Saturday the 8th of November, the first two full-day composites. Those striped rectangular chunks around mid-frame are cars, parked in the 30-minute loading zone across the road. Some stay for a few minutes, a couple for what looks like an hour. Of course on the Saturday the loading zone doesn't operate, and there's a single car parked in it from mid-morning to mid-afternoon. The single-pixel vertical shards give an indication of passing car and pedestrian traffic.

wts_out_1109
wts_out_1114
A quiet, sunny Sunday the 9th; the form hinted at on the 8th reveals itself as the shadow of the big plane tree, creeping across the footpath. Then the following Friday the 14th. It's all happening: lots of car and pedestrian traffic, changes in sunlight, and what looks like an afternoon breeze in the foliage as well. The dominant bluish horizontal stripe in all these images is the neon sign on the shopping centre - which runs all night. The orange rectangle that extends into the evening is the interior light of a shop - which, you'll notice, switches off at slightly different times each night.

So you'll notice that as in Watching the Sky, I'm persisting in reading these as visualisations of the environment, as well as digital images in themselves. I'm struck by how this simple, indiscriminate process reveals both expected and unexpected patterns, and continues to provoke new questions. This despite, or I would argue because of, its openness to multiple material / temporal systems. In an interesting bit of synchronicity, I was teaching in the UTS Street as Platform masterclass with Dan Hill (more on that soon) while this piece was running. Could a simple visualisation process like this function "informationally", as it were, helping to answer real questions about a very specific slice of urban environment, in near-real time? More interesting for me, could it function in that way without prescribing the question in advance - that is, could it support an open-ended process of exploration and interpretation? I'm planning to build an interactive version of this piece, to try out these ideas. In these static visualisations there's a huge amount of data missing: I set the slice point more-or-less arbitrarily, so there are 479 other potentially interesting slices to browse. It would be nice to be able to change the slice point dynamically, as well as to navigate through the source images. I notice that Processing 1.0 (yay!) now supports threaded loading of images: could come in handy. Meanwhile, the full set of composite images is up on Flickr.

Read More...

Monday, September 22, 2008

Limits to Growth

A new generative work that has just fallen into place; I'll be showing prints at the upcoming Dorkbot CBR show (CCAS Manuka, in November). Made with Processing. More will accumulate here.

ltg_lateral
Economic growth is a central tenet of contemporary capitalism; but the logic of endless growth seems increasingly difficult to sustain. Limits to Growth, published in 1972 (the year I was born), was commissioned by the Club of Rome to report on the economic implications of exponential growth, and used an abstract "world model" to predict the behaviour of the global economic system. This artwork experiments with growth in another model world: a simple generative system in the form of a computer program. In this two-dimensional system, growth has the ability to constrain itself, creating boundaries that define a formal and graphical whole. These forms are utopian diagrams of self-limiting growth.

Read More...

Tuesday, May 20, 2008

Draw a Straight Line...

Instruction Set is an embryonic open software project with a simple process, gathering different code implementations of a given "instruction." The format reminds me of two Whitney Artport projects - CODeDOC (2002), and Casey Reas' {Software} Structures (2004). It's good to see this approach being updated and opened out for a wider community.


The initial instruction was La Monte Young's wonderful Composition 1960 #10: "Draw a straight line and follow it." Implementations range from the abstract and conceptual to the more performative, in languages from Python and JavaScript to SuperCollider and Processing; web2.0 nerds like me will appreciate markluffel's Twitter version. Anyhow, I've just posted a belated implementation of "Draw a straight line..." (screengrab above). Nothing amazing, more just filling in a gap and solving a pragmatic problem - how to wring some generative juice out of the instructions - by manipulating the space, rather than the line.
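One way to read "manipulating the space, rather than the line" - this is an illustrative Python sketch, not the posted implementation, and the transform is entirely made up - is to keep the line perfectly straight in parameter space and warp the coordinate space it is drawn into:

```python
import math

# The line itself stays straight (a linear function of t); the generative
# variation comes from warping the space the line's points pass through.

def straight_line(t, x0=0.0, y0=0.0, x1=1.0, y1=0.0):
    """A point on the straight line, parameterised by t in [0, 1]."""
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

def warp(x, y, amplitude=0.2, frequency=3.0):
    """A toy space-warp: displace y by a sinusoid in x."""
    return (x, y + amplitude * math.sin(frequency * math.pi * x))

# Sample the straight line, then push every sample through the warp.
points = [warp(*straight_line(i / 100)) for i in range(101)]
```

Swapping in a different `warp` changes the drawing without ever touching the line.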

That follow up post on transmateriality and hardware practice is coming soon, really. Off-task productivity is an amazing thing.

Read More...

Thursday, March 27, 2008

Self-Organised Phyllotaxis

Like Mr Smith, I'm being a bad host, but trust me, there's some good stuff in the works. Meantime, like Smith, here's something else entirely. In this case it's a little generative sketch I recently dusted off, some source code, and a side observation about Processing culture on the web.

While at CEMA last year I was working on a project with spirals as a kind of required element. I was talking to Jon McCormack about this, when he said something like "Oh, anyone can code up a spiral. What you want to do is make a system where spirals emerge." This is a classic a-life approach, of course, but it also seemed technically daunting to me. Jon pointed me to Ball's The Self-Made Tapestry as well as to the literature on spiral phyllotaxis, a fundamental structure in plant morphogenesis. Douady and Couder published a brilliant paper on this topic in 1995 [pdf], so I set about implementing their model.

lotus_phyllotaxis479773
It's a beautiful thing - buds, or "primordia", are spawned by a central ring of "base" points. Douady and Couder show that you can create phyllotactic spirals with a model where primordia inhibit the budding process in their neighbourhood; the result is that when a primordium forms, the next one to emerge will pop out some distance away. By simply changing the growth rate and the inhibition threshold, you get a variety of self-organised spirals, but also other less predictable complex systems traits.
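The mechanism can be sketched in a few lines - this is a much-simplified Python version of the inhibition idea described above, not a faithful reimplementation of Douady and Couder's model, and the parameter names and values are my own:

```python
import math

# Each step: all existing primordia drift radially outward, then a new
# primordium appears on the unit ring at the angle where the inhibition
# from existing primordia (falling off as 1/d^2) is weakest.

def spawn_primordia(steps=60, growth=1.1, candidates=360):
    primordia = []  # list of (angle, radius) pairs, oldest first
    for _ in range(steps):
        # Radial advection: everything already spawned moves outward.
        primordia = [(pa, pr * growth) for pa, pr in primordia]
        best_angle, best_energy = 0.0, float("inf")
        for k in range(candidates):
            a = 2 * math.pi * k / candidates
            x, y = math.cos(a), math.sin(a)  # candidate point on the ring
            energy = 0.0
            for pa, pr in primordia:
                dx = x - pr * math.cos(pa)
                dy = y - pr * math.sin(pa)
                energy += 1.0 / (dx * dx + dy * dy)  # inhibition ~ 1/d^2
            if energy < best_energy:
                best_angle, best_energy = a, energy
        primordia.append((best_angle, 1.0))
    return primordia
```

With the right growth rates, the divergence angle between successive primordia settles toward the familiar golden-angle spirals; push the parameters and you get the stranger regimes.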

As it turned out I didn't use this for the "spiral" project - more on that soon - but rediscovered it recently when I was asked to reproduce an old drawing of a sort of abstract lotus-flower structure. In the image above the bases are invisible, and the primordia are drawn as circles that expand over time - instant lotus generator (more images).

Have a play with the applet, or just grab the Processing source. Let me know if you use it, too.

Which leads me to a side point. What's become of the applets-on-the-web side of the Processing community? Maybe it's just me, but it seems to be diminishing; instead there's tons of (web-compressed) video, with relatively sparse documentation and source. Is it because of the increased interest in using Processing for generative motion graphics (and other exotic, large scale, non-applet-friendly things)? Maybe I'm over-reliant on ProcessingBlogs, which now seems to be all Vimeo, all the time. Any thoughts?

Read More...

Monday, October 29, 2007

More is More: Multiplicity and Generative Art

Douglas Edric Stanley wrote a nice post recently on complexity and gestalts in code and generative graphics. In it he wonders about "all those lovely spindly lines we see populating so many Processing sketches, and how they relate with code structures." I've been wondering about the same thing for a while, and Stanley's post has prodded me to chase up a few of these ideas.

Stanley makes some astute observations about the aesthetic economics of generative art; the fact that it costs almost exactly the same, for the programmer, to draw one, a hundred or a million lines. Stanley pursues the machinic-perceptual implications - how simple code structures contribute to the formation of gestalts; but he only hints at what seems like a more interesting question, of how these generative aesthetics relate to their cultural environment: "all of these questions of abstraction and gestalt are in fact questions about our relationship to complexity and the role algorithmic machines (will inevitably) play in negotiating our increasing complexity malaise."

I actually don't think complexity is the right concept here. For me complexity refers to causal relations that are networked, looped and intermeshed (as in "complex systems"). These "lovely spindly lines", and Stanley's gestalt-clouds, show us multiplicity but not (necessarily) complexity. Simple, linear processes are just as good at creating multiplicity. There's certainly a relationship here - complex systems often produce multiplicitous forms and structures; and causal complexities embedded in "real" datasets seem to be a reliable source of rich multiplicities - but complexity and multiplicity aren't the same thing. For the moment I want to focus on the aesthetics of multiplicity.


Multiplicity is the uber-motif of current digital generative art - especially the scene around Processing. Look through the Flickr Processing pool and try to find an image that isn't some kind of swarm, cloud, cluster, bunch, array or aggregate (this one is by illogico). The fact that it's easy to do is a partial and not-very-interesting explanation; to go one step further, it's easy and it feels good. Multiplicity offers a certain kind of aesthetic pleasure. There's probably a neuro-aesthetics of multiplicity, if you're into that, which would show how and where it feels good. Ramachandran and Hirstein have suggested that perceptual "binding" - our tendency to join perceptual elements into coherent wholes - is wired into our limbic system, because it's an ecologically useful thing to do. Finding coherence in complex perceptual fields just feels good. The perceptual fields in generative art are almost always playing at the edges of coherence, buzzing between swarm and gestalt - just the "sweet spot" that Ramachandran and Hirstein propose for art in general.

I don't find this explanation very satisfying either, because it doesn't seem to tell us anything much about the processes involved - it's a "just because," and a fairly deterministic one. Another way in is to think formally about the varieties of multiplicity in generative art. I rediscovered Jared Tarbell's wonderful Invader Fractal (below) in the Reas/Fry Processing book recently. It shows a kind of multiplicity that's the same but different to the "spindly lines" aesthetic. Each invader is the product of a simple algorithm; the whole mass is a visualisation of a space of potential - a sample (but not an exhaustive display) of the space of all-possible-25-pixel-invaders. Multiplicity here is a way to get a perceptual grasp on something quite abstract - that space of possibility. We get a visual "feel" for that space, but also a sense of its vastness, a sense of what lies beyond the visualisation. John F. Simon's Every Icon points in the same direction; towards the vastness of even a highly constrained space of possibility (32x32 1-bit pixels).
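You can get a quick numerical feel for these possibility spaces. Assuming the commonly described scheme for Tarbell's invaders - a 5x5 grid mirrored left to right, so only fifteen cells are free - the invader space is small enough to enumerate exhaustively, while Every Icon's 32x32 1-bit grid is emphatically not:

```python
# Build the 5x5 invader for a 15-bit index: three free columns per row,
# with columns 0 and 1 mirrored onto columns 4 and 3.

def invader(index):
    rows = []
    for y in range(5):
        left = [(index >> (y * 3 + x)) & 1 for x in range(3)]
        rows.append(left + left[1::-1])  # mirror: [c0, c1, c2, c1, c0]
    return rows

invader_space = 2 ** 15            # 32,768 possible invaders - browsable
every_icon_space = 2 ** (32 * 32)  # a 309-digit number of possible icons
```

The asymmetry is the point: one space can be displayed almost in full, the other can only ever be gestured at.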


Perhaps current aesthetics of multiplicity are actually doing something similar. The technical differences are fairly minor; basically a switch in spatial organisation from array to overlay; a compression of instances into a single picture plane. The shortest (and my personal favourite) path to multiplicity in Processing is aggregation: turn off background() and let the sketch redraw. Reduce the opacity of the drawing for an accumulating visualisation of the space of possibility that your sketch is traversing. Multiplicity here isn't an effect or aesthetic for its own sake; it's intrinsically linked to one of the defining qualities of generative systems - their creation of large but distinctive spaces of potential. Multiplicity is again a way to literally sense that space; but also, since it almost never exhausts or saturates that space, it points to an open, ongoing multiplicity; it actualises a subset of a virtual multiplicity, and shows us (as in Every Icon) how traversing that space is only a question of specifics and contingencies. Multiplicity says "and so on"; an actual gesture towards the virtual.
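The aggregation trick can be sketched numerically - this is a simplified Python model of the effect, not Processing itself, treating each redraw as a whole-frame blend rather than per-shape alpha drawing:

```python
# Never clear the canvas; blend each new frame in at low opacity, so the
# persistent canvas accumulates a picture of the space the sketch traverses.

def blend(canvas, frame, alpha=0.05):
    """Composite one frame onto the canvas at low opacity (standard over-blend)."""
    return [[(1 - alpha) * c + alpha * f for c, f in zip(crow, frow)]
            for crow, frow in zip(canvas, frame)]

def accumulate(frames, width, height, alpha=0.05):
    canvas = [[0.0] * width for _ in range(height)]
    for frame in frames:  # analogous to draw() with background() turned off
        canvas = blend(canvas, frame, alpha)
    return canvas
```

Each pass nudges the canvas a little toward the current frame, so regions the sketch visits often accumulate density while the rest stays faint - exactly the accumulating visualisation of the traversed space.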

Multiplicity refers to the specific space of potential in any single system, by actualising a subset of points within it; but it also metonymically refers to an even wider space of potential, which is the one that all computational generative art - and in fact all digital culture - traverses. Because of course any system can be tweaked and changed, and no chunk of code is immutable or absolute; the machines of the Processing pool are ever-changing things that collectively sample the space of all possible (generative) computation. Just as it refers directly to the space of potential of its own (local) system, generative multiplicity alludes to the unthinkable space-of-spaces that contains that system - a space the system gradually traverses with every change in its code.

This, for me, explains the aesthetic and cultural charge that multiplicity carries. It's a gesture towards an abstract, unthinkable figure; an aesthetics of the virtual, in the Bergson / Deleuze sense of the word. What's more, this particular form of virtuality, or possibility - the one accessible through code and computation - is at the core of digital culture and our contemporary situation. Generative multiplicity is, quite literally, a visualisation of that figure.

Read More...