I spent three months this year on sabbatical at Culture Lab, Newcastle University (UK). It was a privilege to spend time in such a vibrant research lab, as well as to get to know the city of Newcastle. One of the projects to come out of my visit is Succession, an experiment in generative digital heritage that uses Newcastle and its history to think about industrialisation, global capital, our shared pasts and potential futures. Personally, it brings together two strands of my work that have been separate until now - on generative systems and digital cultural collections. Hence you'll also find this cross-posted over on The Visible Archive. Here are some notes and documentation on the work, and some musings on generative and computational heritage.
Much of my recent work with digital cultural collections has worked to create rich representations of these ever-expanding datasets. A key thread has been an interest in the complexity of these collections; the multitudes they contain, their wealth of potential meaning as complex, interrelated wholes, rather than simply repositories of individual resources. Visualisation can provide a macroscopic view of this complexity, but it can be just as vivid when sampled at a micro scale. Tim Sherratt's Trove News Bot tweets digitised newspaper articles in response to the day's news headlines, creating little juxtapositions, timely sparks of meaning that can be pithy, funny, or provocative. Trove News Bot appropriates the Twitter bot - the joking-but-deadly-serious computational voice of our age - and adapts it to work with the digital archive. We could call this generative heritage: using computational processes to create new artefacts (and meanings) from historical material.
Succession applies this generative approach to the digital heritage of Newcastle upon Tyne. Newcastle has a rich industrial heritage; it played a major role in the Industrial Revolution that began in Britain and went on to remake global civilisation. Today Newcastle is a post-industrial or de-industrialised city: coal, steel and shipbuilding have given way to service industries - education, retail, entertainment and tourism. As an outsider exploring the city I was struck by the mixture of pre-modern, industrial and post-industrial eras in the fabric of the city. Different (often inconsistent) patterns of life, work and economy are accreted in layers as the city continues the everyday process of adaptation, experimentation with the possible; working out what comes next.
The city, like the digital archive, is a multitude; an unthinkably complex matrix of people, things, systems, narratives. Newcastle - more than many other cities - also speaks to the expansive dynamics of industrialisation, globalisation, extractive industry, fossil fuels; the whole modern trajectory that has brought us to our current predicament. This seems to be both urgent and unthinkable - or perhaps, unsayable. How can we speak back to this complexity; how can we make in a way that responds to this tangled, expansive mess? Here generative techniques offer a way to synthesise complexity and create multitudes, formations that might portray the city as it was, or hint at what it could be. Automatic juxtaposition and remix create nonsense but also, occasionally, glimmers of a new sense, or at least a texture or sensation that emerges from a random constellation of images, sources and contexts. Succession requires us to piece together fragments of history; and this is a work of imagination, as Ross Gibson writes, framing his own work of generative heritage (with Kate Richards), Life After Wartime:
Our parlous states need imagination. We need to propose “what if” scenarios that help us account for what has happened in our habitat so that we can then better envisage what might happen. We need to apprehend the past. Otherwise, we won’t be able to align ourselves to historical momentum. Without doing this we won’t be able to divine the continuous tendencies that are making us as they persist out of the past into the present.
In practical terms, the work is based on a corpus of around two thousand images sourced from the Flickr Commons. Most come from the (wonderful) Tyne and Wear Archives and Museums collection; many more from the Internet Archive Books collection, with a smattering of others from UK and international institutions. Succession uses these ingredients to generate new digital "fossils": composite images assembled in the browser using HTML Canvas. This generative process is extremely simple: pick five sources at random, and place them in the frame using some semi-random rules for positioning, compositing and repetition. Opacity is kept low, so that the sources blend and merge. The visual process often obscures the source images - they end up buried, cropped or indistinguishable, squashed like fossil strata. But at the same time the source items are preserved and presented in context, so each composite retains references to its sources and their attendant contexts. Composites can be saved, acquiring an ID and permalink; the images in this post show some of my favourites, but there are over a hundred to sift through already.
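For the curious, here's roughly what that process looks like in code - a minimal Processing paraphrase of the browser-based Canvas compositor, where the local folder of source images, the opacity range and the repetition rule are all my assumptions rather than the work's actual parameters:

```processing
// A minimal sketch of the compositing process, assuming a local folder of
// corpus images ("sources/0.jpg" etc.); the real work runs on HTML Canvas
// and its rules differ in detail.
PImage[] corpus;

void setup() {
  size(800, 600);
  int n = 20;  // a small stand-in corpus; Succession draws on ~2000 images
  corpus = new PImage[n];
  for (int i = 0; i < n; i++) {
    corpus[i] = loadImage("sources/" + i + ".jpg");
  }
  noLoop();  // one composite per run; re-run the sketch for another
}

void draw() {
  background(240);
  imageMode(CENTER);
  // pick five sources at random and layer them at low opacity
  for (int i = 0; i < 5; i++) {
    PImage src = corpus[int(random(corpus.length))];
    tint(255, random(40, 90));           // low opacity, so sources blend
    float s = random(0.5, 2.0);          // semi-random scale
    float x = random(width), y = random(height);
    int repeats = (random(1) < 0.3) ? 3 : 1;  // occasional repetition
    for (int r = 0; r < repeats; r++) {
      image(src, x + r * 60, y, src.width * s, src.height * s);
    }
  }
}
```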
As a generative system this is, in formal terms, incredibly simple. It's essentially a combinatorial process, in that each composite consists of five elements from a set of around two thousand. Yet already this adds up to some 2.6 x 10^14 unique combinations - it would take over eight million years to see them all, at one per second. Compositing and layout parameters are random within constraints - so this simple machine can produce an immense variety of unique results; I'm still surprised and delighted by the fossils people discover (or generate). But this computational variety is also strongly shaped by the human creative choices involved in making the work. This is what Bill Seaman (combinatorial media artist par excellence) calls "authored space" - a domain of potential that is expansive but never arbitrary. The corpus reflects a handful of coherent themes, seasoned with generous sprinklings of the lateral and miscellaneous; the aim is, in Seaman's words, a kind of "resonant unfixity." The corpus and the compositing process also work in tandem; for example the compositor treats the largely monochrome line-art and engravings of the Internet Archive material differently to other (largely photographic) sources. The generative machine is programmed in part by the textures and qualities of its material.
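For the record, the combination count above is a plain binomial coefficient (assuming an unordered pick of five distinct sources from exactly two thousand):

$$\binom{2000}{5} = \frac{2000 \cdot 1999 \cdot 1998 \cdot 1997 \cdot 1996}{5!} \approx 2.6 \times 10^{14},$$

which at one composite per second comes to roughly 8.4 million years.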
The Internet Archive book images are interesting on several fronts; for one, they are an amazing demonstration of the power of computational processes for generating and describing large collections (like 2.6 million items large). Given the right kind of source material, this computational leverage changes the logic of collections completely. When adding and describing items is expensive, it makes sense to be selective, and publish only what is most "significant". Automation makes it possible to simply publish everything - for who's to say (really) what is significant, or how it might one day be significant? In Succession the Internet Archive material plays a crucial role. The line art and diagrams - many from obscure publications like the Transactions of the North of England Institute of Mining and Mechanical Engineers - offer evocative fragments of the machinery of mid-nineteenth century industrialisation.
As for generative digital heritage, it's a fairly open-ended proposal. What happens when we turn algorithms loose on our digital culture with makerly, synthetic, speculative or poetic intention? There are some pretty solid precedents in the digital humanities for these approaches; Schnapp and Presner call for a "generative" DH in their 2009 manifesto. Before that Drucker and Nowviskie outlined a "speculative computing" with a strongly generative flavour. Gibson and Richards' Life After Wartime is an early exemplar of generative heritage in the digital arts. More recently we've seen the rise of massive online collections, web-scale computing, and a proliferation of cultural, critical and creative bots, not to mention projects like #NaNoGenMo. If there is such a thing as generative digital heritage, then now's the time.
At CODE2012 I presented a paper on "programmable matter" and the proto-computational work of Ralf Baecker and Martin Howse - part of a long-running project on digital materiality. My sources included interviews with the artists, which I will be publishing here. Ralf Baecker's The Conversation (2009) is a complex physical network, woven from solenoids - electro-mechanical "bits" or binary switches. It was one of the works that started me thinking about this notion of the proto-computational - where artists seem to be stripping digital computing down to its raw materials, only to rebuild it as something weirder. Irrational Computing (2012) - which crafts a "computer" more like a modular synth, made from crystals and wires - takes this approach further. Here Baecker begins by responding to this notion of proto-computing.
MW: In your work, especially Irrational Computing, we seem to see some of the primal, material elements of digital computing. But this "proto" computing is also quite unfamiliar - it is chaotic, complex and emergent, we can't control or "program" it, and it is hard to identify familiar elements such as memory vs processor. So it seems that your work is not only deconstructing computing - revealing its components - but also reconstructing it in a strange new form. Would you agree?
RB: It took me a long time to adopt the term "proto-computing". I don't mean proto in a historical or chronological sense; it is more about its state of development. I imagine a device that refers to the raw material dimension of our everyday digital machinery. Something that suddenly appears due to the interaction of matter. What I had in mind was for instance the natural nuclear fission reactor in Oklo, Gabon that was discovered in 1972. A conglomerate of minerals in a rock formation formed the conditions for a functioning nuclear reactor, all by chance.
Computation is a cultural and not a natural phenomenon; it includes several hundred years of knowledge and cultural technics, these days all compressed into a microscopic form (the CPU). In the 18th century the mechanical tradition of automata and symbolic/mathematical thinking merged into the first calculating and astronomical devices. Also the combinatoric/hermeneutic tradition (e.g. Athanasius Kircher and Ramon Llull) is very influential to me. These automatons/concepts were philosophical and epistemological. They were dialogic devices that let us think further, much against our current utilitarian use of technology. Generative utopia.
Schematic of Irrational Computing courtesy of the artist - click for PDF
MW: Your work stages a fusion of sound, light and material. In Irrational Computing for example we both see and hear the activity of the crystals in the SiC module. Similarly in The Conversation, the solenoids act as both mechanical / symbolic components and sound generators. So there is a strong sense of the unity of the audible and the visual - their shared material origins. (This is unlike conventional audiovisual media for example where the relation between sound and image is highly constructed). It seems that there is a sense of a kind of material continuum or spectrum here, binding electricity, light, sound, and matter together?
RB: My first contact with art or media art came through net art, software art and generative art. I was totally fascinated by it. I started programming generative systems for installations and audiovisual performances. I like a lot of the early screen-based computer graphics/animation stuff. The pure reduction to wireframes, simple geometric shapes. I had the feeling that in this case concept and representation almost touch each other. But I got lost working with universal machines (Turing machines). With Rechnender Raum I started to do some kind of subjective reappropriation of the digital. So I started to build my very own non-universal devices. Rechnender Raum could also be read as a kinetic interpretation of a cellular automaton algorithm. Even if the Turing machine is a theoretical machine it feels very plastic to me. It's a metaphorical machine that shows the conceptual relation of space and time. Computers are basically transposers between space and time, even without seeing the actual outcome of a simulation. I like to expose the hidden structures. They are more appealing to me than the image on the screen.
MW: There is a theme of complex but insular networks in your work. In The Conversation this is very clear - a network of internal relationships, seeking a dynamic equilibrium. Similarly in Irrational Computing, modules like the phase-locked loop have this insular complexity. Can you discuss this a little bit? This tendency reminds me of notions of self-referentiality, for example in the writing of Hofstadter, where recursion and self-reference are both logical paradoxes (as in Gödel's theorem) and key attributes of consciousness. Your introverted networks have a strong generative character - where complex dynamics emerge from a tightly constrained set of elements and relationships.
RB: Sure, I'm fascinated by these kinds of emergent processes, and how they appear on different scales. But I always find it difficult to use the attribute consciousness. I think these kinds of chaotic attractors have a beauty of their own. Regardless of how closed these systems look, they are always influenced by their environment. The perfect example for me is the flame of a candle: a very dynamic, complex process communicating with the environment that generates its dynamics.
MW: You describe The Conversation as "pataphysical", and mention the "mystic" and "magic" aspects of Irrational Computing. Can you say some more about this aspect of your work? Is there a sort of romantic or poetic idea here, about what is beyond the rational, or is this about a more systematic alternative to how we understand the world?
RB: Yes, it refers to another kind of thinking. A thinking that is anti "cause and reaction". A thinking of hidden relations, connections and uncertainty. I like Claude Lévi-Strauss' term "The Savage Mind".
This essay was commissioned for the exhibition Datascape, at the Cube Gallery, QUT in April 2013. I should mention that since writing it I've discovered that Jer Thorp was way ahead of me on the "new oil" thing.
“Data is the new oil” - Ann Winblad, Hummer Winblad Venture Partners (source)
In the swirling chaos of twenty-first century capitalism, everybody wants to know what’s next. “Data is the new oil” is a pithy little announcement. It reminds us how we got here, powered by the long energetic boom of fossil fuels, now entering its closing stages. It announces a successor, a new wealth (and just in time). But in drawing the analogy, it also constructs data in a certain way: as an amorphous but precious stuff, a resource for exploitation, a promising abundance. Similarly The Economist trumpeted the “Data Deluge” on their February 2010 cover: a businessman catches falling data in an upside-down umbrella, funnelling it to water a growing flower whose leaves are hundred dollar bills.
We need not (and should not) accept this analogy; but it demonstrates how data is
figured, or constructed, in our culture. Our everyday life and culture is
traced, tangled and enabled by digital flows. We produce and consume data as
never before. But what exactly is this data? What can it do, and what
can we do with it? Who owns or controls it? How can we understand,
appreciate, or even sense it? The construction of data as a cultural
actor is vital because data itself is so abstract, so hard to pin down. We
ought not leave it to the captains of industry, and their upside-down
umbrellas. In Datascape we see artists working with data, applying and
diverting it for their own ends, as well as offering their own figurations of
its potentials and limits. In a culture increasingly built on data, these works
provide moments of cultural introspection, reflections on this abstract stuff
that is our new social medium.
Google, Facebook, Twitter and the rest make us - their users - into data. This makes us anxious about privacy and surveillance, but perhaps a more interesting question is what it’s like to be data. If we are all data subjects now, then what is data subjectivity? Jordan Lane’s Digital Native Archive imagines a new bureaucratic archive for the data subject, and immediately comes to the question of mortality. If we are data, and data can be faithfully preserved, are we now immortal? Or are we, instead, dead forever, entombed in a rationalised hierarchy of metadata, request protocols and archival record formats? Christopher Baker’s My Map (below) shows us what it might be to take charge of a personal archive, with a tool that reveals the patterns and relationships in email correspondence. This self-portrait suggests that one of the challenges of data subjectivity is simply knowing oneself: the scale of our personal data exceeds our grasp.
In two of the most prominent data art works from the mid 2000s, we mine these personal archives en masse. Golan Levin’s The Dumpster and Sep Kamvar and Jonathan Harris’ We Feel Fine scour the internet for “feelings” that are compiled into datasets, and in turn staged as dynamic visualisations. In turning our digital selves into swarming dots and bouncing balls, the artists animate us as members of a teeming throng. Data here is in part a new form of social realism, a way to represent the complex texture of life in the crowd; but these works also ask us to reflect on the limits of data-subjectivity. Can the intensity of our inner lives really be represented in cool, abstract data? Are we all so much alike? Aaron Koblin’s Sheep Market answers both yes and no; for we can see here both the comical diversity of the crowd (and its sheep avatars), and the uniformity that digital systems encourage.
The pathos of this contrast, between the coolness of the digital and the warm, messy intensity of humankind, emerges again in Luke DuBois’ Hard Data, where the tolls of war unfold as stark lists and map references. DuBois’ soundtrack, generated from the same source data, acts as an emotional mediator, trying to return some of the tragic importance that the data fails to convey. DuBois’ work pivots between the data-subject and what we might call the data-world. For if the world, too, is now data, then what might that feel like? How do we approach such a world?
In many works here the weather - a complex (and increasingly uncooperative) material flux - is a sort of proxy for the data-world: a field that is both easy to measure, and difficult to grasp. In Miebach’s Weather Scores, Viegas and Wattenberg’s Wind Map (above), and my own Measuring Cup, weather data is a source of aesthetic richness, as well as a pointer to the world beyond, the world that data traces. The weather - so much part of our everyday sensations - is abstracted here into numbers and symbols, only to be remade in new sensual forms. What if we could see the wind across an entire continent? Or hold a hundred years of temperature? Or hear the tides as music?
Here we get a glimpse of an alternative figuration of data itself. Rather than some kind of precious (but immaterial) stuff, or fuel for market speculation, data here is a relationship, a link between one part of the world and another, and a trace that can be endlessly reshaped. Of course, that trace is imperfect; a mediated pointer, not a pure reproduction. So Viegas and Wattenberg issue a disclaimer for their Wind Map: this is just an “art project”, they say; we “can’t make any guarantees about the correctness of the data or our software.” Yet that connection remains; and art here plays the role that it always has. It transforms our understanding of the world, by representing it anew.
Back in September I showed a little work called Local Colour at ISEA 2011. This project continues my thinking about generative systems, materiality and fabrication. It's a work in two parts: the first is a group of laser-cut cardboard bowls, made from reclaimed produce boxes - you can see more on Flickr, and read the theoretical back-story in the ISEA paper. Here I want to briefly document the second element, a sort of network diagram realised as a vinyl-cut transfer. The diagram was created using a simple generative system, initially coded in Processing - it's embedded below in Processing.js form (reload the page to generate a new diagram).
Network diagrams are one of the most powerful visual tropes in contemporary digital culture. Drawing on the credibility of network science, they promise a paradigm that can be used to visualise everything from social networks to transport and biological systems. I love how they oscillate between expansive significance and diagrammatic emptiness. In this work I was curious to play with some of the conventions of small-world or scale-free networks. A leading theory about how these networks form involves preferential attachment: put simply, it states that nodes entering a network will prefer to connect to those nodes that already have the most connections. In visualising the resulting networks, graph layout processes (such as force-directed layout) use the connectivity between nodes to reposition the nodes themselves; location is determined by the network topology.
This process takes the standard small-world network model and changes a few basic things. First, it assigns nodes a fixed position in space. Second, it uses that position to shape the connection process: here, as in the standard model, nodes prefer to connect to those with lots of existing connections. But distance also matters: connecting to a close node is "cheaper" than connecting to a distant one. And nodes have a "budget" - an upper limit on how far their connection can reach. These hacks result in a network which has some small-world attributes - "hubs" and "clusters" of high connectivity - but where connectivity is moderated by proximity. Finally, this diagram visualises a change in one parameter of the model, as the distance budget decreases steadily from left to right. It could be a utopian progression towards a relocalised future, or the breakdown or dissolution of the networks we inhabit (networks in which distance remains, for the time being, cheap enough to neglect).
The process running here generates the diagram through a gradual process of optimisation. Beginning with 600 nodes placed randomly (but not too close to any other), each node is initially assigned a random partner to link to. Then they begin randomly choosing new partners, looking for one with a lower cost - and cost is a factor of both distance and connectivity. The Processing source code is here.
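In outline, the optimisation loop looks something like the sketch below - hedged accordingly: the linked source is authoritative, and the cost function and constants here are my guesses at its spirit, not the published code.

```processing
// A compact re-sketch of the process described above. Each node holds one
// outgoing link, and repeatedly auditions cheaper partners; cost favours
// nearby, well-connected nodes, within a distance budget that shrinks
// from left to right across the frame.
int N = 600;
PVector[] pos = new PVector[N];
int[] partner = new int[N];  // each node's single outgoing link
int[] degree = new int[N];   // current connection counts

void setup() {
  size(900, 300);
  // scatter nodes, rejecting positions too close to existing ones
  for (int i = 0; i < N; i++) {
    PVector p;
    do {
      p = new PVector(random(width), random(height));
    } while (tooClose(p, i));
    pos[i] = p;
  }
  // start from a random partner for every node
  for (int i = 0; i < N; i++) {
    partner[i] = (i + 1 + int(random(N - 1))) % N;  // never itself
    degree[partner[i]]++;
  }
}

boolean tooClose(PVector p, int count) {
  for (int j = 0; j < count; j++) {
    if (PVector.dist(p, pos[j]) < 10) return true;
  }
  return false;
}

float cost(int a, int b) {
  float budget = map(pos[a].x, 0, width, 300, 40);  // shrinks left to right
  float d = PVector.dist(pos[a], pos[b]);
  if (d > budget) return Float.MAX_VALUE;  // out of reach
  return d / (1 + degree[b]);  // near, popular nodes are cheap
}

void draw() {
  // each frame, every node tries one random alternative partner
  for (int i = 0; i < N; i++) {
    int cand = int(random(N));
    if (cand != i && cost(i, cand) < cost(i, partner[i])) {
      degree[partner[i]]--;
      partner[i] = cand;
      degree[cand]++;
    }
  }
  background(255);
  stroke(0, 60);
  for (int i = 0; i < N; i++) {
    line(pos[i].x, pos[i].y, pos[partner[i]].x, pos[partner[i]].y);
  }
  noStroke();
  fill(0);
  for (int i = 0; i < N; i++) ellipse(pos[i].x, pos[i].y, 3, 3);
}
```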
Yet another blog "paste" to provide some semblance of life around here. Paul Prudence recently interviewed me for Neural 40: The Generative Unexpected, ranging over generative art (utopian and otherwise), cross modal AV and data visualisation among other things. Thanks to Paul for some thoughtful questions.
PP: It might be argued that some of the main themes infused in generative art are those to do with a kind of techno-utopianism and futurism. Have you come across any generative artworks that deal with dystopian themes or have a sense of anachronism about them? More importantly, are the technologies and software used in creating these artworks inherently defining their aesthetics?
It's true that there's a flavour of the techno-utopian to a lot of digital generative art, especially in the online digital scene. The founding principle of generative art is, inescapably, the generative capacity of its own system, so perhaps it is optimistic by definition? Online culture - or the realtime social media flow of projects, memes and links that we tend to bathe in - is also techno-utopian at its core, still strongly influenced by the West-Coast startup culture of the companies involved. But with a bit of digging some more diversity emerges; the work of my friend Jon McCormack, for example, is highly reflective about the nature / technology relationship - though it sometimes conceals its ambivalence under a very beautiful surface. Another Australian artist - Murray McKeich - makes work that is both anachronistic and dystopian, like his pZombies, gruesome avatars for generative agency composited from scanned rubbish.
On the other hand the flipside of techno-utopia is real richness and generative excess - the ability of formal systems to reveal terrains of sublime complexity. At best this "maximalist" strand of generative practice can induce a state of wonder, little chinks of access to the unthinkable complexity of the real material world.
Do the technologies define aesthetics? They certainly shape the aesthetics powerfully - but at least now the field of technology is more open and malleable for artists than ever before. It might be that the most important new works in this field are coding platforms or communities, rather than art or design projects. Processing won a Golden Nica, after all. But in this field monolithic "technologies" are increasingly breaking down - Processing for example is very influential, and there is certainly a Processing "look", but with a new framework or library appearing every other week, we can't blame technology for limited diversity in the field.
PP: Much generative art is concerned with certain kinds of abstraction and systematised multiplicity of form without a framework of proposition, resolution and conclusion. Do you think there is any room for a sense of narrative in generative art? Could you give me examples of generative artworks that deal with narrative successfully?
I would argue that every generative artwork involves a framework of proposition, resolution and conclusion. It is the formal and procedural structure of the generative system that creates the work: a set of entities, attributes, relationships, processes, rules, constraints, and visualisations (more here). The problem, for the way generative art is both made and received, is that that system is often hard to get at - it's an abstract thing, which the artist may or may not describe or publish. A lot of work in the digital generative scene operates in an image culture where "look" is valued over process or concept. So although it's sometimes hard to access, I would argue that there is often a narrative inside even the most "retinal" generative art - it's the narrative of the system. Sometimes it's fairly clear - for example Brandon Morse's wonderful procedural animations of collapsing structures (another dystopian work!). For me Morse's work is wonderfully poignant because it works by resemblance - it reminds us of real things collapsing - but it also works by metonymy, referring to the idealised world of computer graphics and simulation; so it seems like the simulation itself is collapsing as well (below: Achilles (2009) - photo by Paul Prudence).
PP: Each year we see different algorithms come into fashion as tools for the generative artist. Perlin noise, circle packing, Voronoi, reaction-diffusion and subdivision algorithms are good examples. How important is it for an artwork to hide traces of the software and algorithm that was used to generate it? Can you predict what the next big algorithm might be? Or do you see any new potential in an old or overlooked algorithm?
If you need to hide the traces of your algorithm, change your algorithm. I too am fascinated by the algo-memetic fashion parade that moves through digital design and generative art. This relates to the question of look vs system; these systems seem to reproduce using their appearance as a sort of lure - it's a bit like sexual selection in a memetic ecology, survival of the prettiest. As a result people seem to apply them without any understanding of, or interest in, the system or process. I wrote last year about the Voronoi algorithm along these lines. So algo-fashions will come and go, but for me the most rewarding work is always a result of deep engagement with the generative system - taking a system and hacking it into something else entirely, or deriving new systems. Erwin Driessens and Maria Verstappen for example have a long track record of inventing algorithms that you can't just grab off the shelf - their Breed and Ima Traveller works are sort of mutant cellular automata, but really they don't fit any clear template. Nervous System also implement new systems: they go to the scientific literature in biology, or even run their own physical trials, and implement models from scratch. There aren't many designers currently with the ability to do that. Jonathan McCabe is another good example of this; his multi-scale Turing patterns (below) are a genius hack of a very old algorithm. Jonathan's Origami Butterfly process is completely new (and equally distinctive).
So there isn't a Platonic shelf somewhere stocked with generative algorithms for designers to select from. The space of potential generative systems is unimaginably massive. Make one up, or at least hack an existing one into something else. Even very simple changes to existing systems can be very productive. For years I have been playing with systems based on Murray Eden's growth model - perhaps the simplest (and first) ever model of biological growth. There's much more to explore.
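As a taste of how simple that model is, here's a minimal Processing sketch of the classic lattice version of Eden growth (the canonical algorithm, not the hacked variants mentioned above): seed one cell, then repeatedly occupy a random empty site on the cluster's perimeter.

```processing
// Minimal Eden growth model: one seed cell, then occupy random
// empty sites adjacent to the cluster, one at a time.
int cols = 200, rows = 200, cell = 2;
boolean[][] occupied;
ArrayList<int[]> frontier;  // empty sites adjacent to the cluster

void setup() {
  size(400, 400);
  background(255);
  noStroke();
  fill(0);
  occupied = new boolean[cols][rows];
  frontier = new ArrayList<int[]>();
  occupy(cols / 2, rows / 2);
}

void draw() {
  for (int i = 0; i < 50 && !frontier.isEmpty(); i++) {
    int[] site = frontier.remove(int(random(frontier.size())));
    // the frontier can hold duplicates; skip already-occupied sites
    if (!occupied[site[0]][site[1]]) occupy(site[0], site[1]);
  }
}

void occupy(int x, int y) {
  occupied[x][y] = true;
  rect(x * cell, y * cell, cell, cell);
  int[][] nbrs = { {x+1, y}, {x-1, y}, {x, y+1}, {x, y-1} };
  for (int[] n : nbrs) {
    if (n[0] >= 0 && n[1] >= 0 && n[0] < cols && n[1] < rows
        && !occupied[n[0]][n[1]]) {
      frontier.add(n);
    }
  }
}
```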
PP: What is the role of serendipity and non-determinism in the formulation of a successful generative artwork?
When teaching generative art my colleague Tim Brook initially bans his students from using randomness. I don't do the same, but I can see the logic of it: randomness adds meaningless variation. Used directly, it's just that - meaningless variation that can give a false impression of richness. But it can be very handy - for example when exploring the range of outcomes of a complex system, randomising its parameters can throw up useful samples of the generative space of that system. Again it's about understanding the system. Serendipity is another thing; I think most generative artists work hard to cultivate serendipity, to entice systems into a state where pleasant surprises emerge. Many artists hand-pick "candidates" from large populations of generated works - seeking out those serendipitous moments. Although variation is fundamental to generative work, it's interesting to observe reactions to Written Images, where each volume is a unique variant of the collected works, with no opportunity for artists to pick favourites. Not having final control over each artefact is still a bit scary (for me at least).
PP: In your Watching The Sky piece there is almost a tendency to study the image in a forensic manner, to try and decode the work, and to find environmental patterns in relation to patterns in the work. This method of analysis is in almost direct contrast to the usual manner in which a data visualisation might be constructed, where an artist decides on a specific representational system beforehand to create clarity and make a point. Perhaps you could comment a bit more on how data visualisation might move forward in this respect.
I am drawing on other work here - especially the early work of Lisa Jevbratt, like her classic 1:1. Jevbratt outlines a sort of data-mysticism, a view of data as a reservoir of unknown potential, and shows fine-grained patterns without concern for "readability". In Watching the Sky (and related work) I just use images as a data source; this is a simple ploy to introduce richness by working with rich, unstructured data - and data with a complex (but legible) relationship to the world. That work has certainly shaped my thinking on visualisation. Maintaining the "unstructured" complexity of the image as a data source - rather than reducing it to statistical features - is a great way to provide contextual cues. The commonsExplorer project I did with Sam Hinton - a visual explorer for Flickr Commons streams - uses tiny cropped "core samples" that offer telltale clues about the source images.
The other idea at work here (and in Jevbratt's work) is a sense of data as (a) material; as something with texture or grain that can be felt as much as analysed. I have experimented with making these ideas literal in data-form projects like Weather Bracelet and Measuring Cup.
PP: In one of your papers you discuss synaesthesia and cross-modality in contemporary audio visuals. It seems that an important criterion for a successful synaesthetic artwork is a meaningful, metaphorical or conceptual cross-wiring of sound and video - and not just a mechanical translation between the two. What other criteria are important in a successful cross-modal artwork?
Cross-modal or "coupled" audiovisuals exemplify one of the key
questions of digital media - we could call it the mapping problem. If
the basic materials of the work are digital - that is, abstract patterns
that can travel through any number of different
substrates - then how do we make them perceivable? Or, how do we choose a
mapping, a way of making data available to perception? Manovich calls this
the
"built-in existential angst" of digital media. So of course there are
an infinity of possible ways to connect sound and image - either mapping
one into the other, or generating both from some common data source. I
actually like mechanical or automatic mappings.
Because they are stable and consistent they let us soak in the
relationship, the map itself; and these automatic maps are often quite
subtle and fine-grained, compared to more composed or intentional
relationships. In Robin Fox's work for example a simple (polar) oscilloscope display creates
images from audio signals - but Fox explores the mapping in depth,
working out how to "play" it, reverse-engineering the audio signal to
create images and revealing surprising correspondences (above: image via Not-Quite-Critics).
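To make the idea concrete, here's a toy polar oscilloscope in Processing - a generic illustration of this kind of audio-to-image mapping, not Fox's actual signal chain, with a synthesised two-tone wave standing in for a live audio input:

```processing
// A toy polar oscilloscope: each sample of a signal becomes a point at
// (angle, radius), so the waveform deflects a circular trace.
void setup() {
  size(500, 500);
}

void draw() {
  background(0);
  translate(width / 2, height / 2);
  stroke(0, 255, 120);
  noFill();
  float base = 120;   // radius of the undeflected circle
  float depth = 80;   // how far the signal deflects the trace
  int samples = 2048;
  beginShape();
  for (int i = 0; i < samples; i++) {
    float t = i / float(samples);
    // stand-in "audio": two sine tones, one drifting over time
    float sig = 0.6 * sin(TWO_PI * 7 * t)
              + 0.4 * sin(TWO_PI * 13 * t + frameCount * 0.02);
    float a = TWO_PI * t;          // one full sweep per frame
    float r = base + depth * sig;  // the signal modulates the radius
    vertex(r * cos(a), r * sin(a));
  }
  endShape(CLOSE);
}
```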
Of course automatic mappings can be incredibly boring - how many modified graphic equaliser visualisations do we need to see? - but I think this is often because the mapping is filtered through too many abstractions and interventions; it becomes a set of parameters.
PP: There has been a huge influence of generative art in recent years on traditional media such as painting and sculpture. In the reverse direction, in what ways, if any, can generative artists learn from the traditional plastic arts?
The link there for me is a sense of "procedurality" or "processuality". In Casey Reas' work we can see a strong relationship between computational and non-computational procedures such as those of Sol LeWitt. In teaching programming to designers, I have students write and execute a LeWitt-style procedure, with pencil and paper. Digital generative systems are just formal procedures, executed by machines. Treating processes as human-executable helps unpack the black boxes of generative systems mentioned earlier, and hopefully reveals them as contingent and hackable. Otherwise: the joy of materiality. Generative art and design covets the lush tangibility of traditional media; and with the wave of interest in fabrication we are seeing ever more generative work realised in "off-screen" forms. The challenge then, for pasty code-artist types, is to match the craft skills of hands-on makers in realising the work.
PP: What early interests did you have that might have led you to your current path as an artist and academic in this field?
Music - which I don't do much of any more, but it was a big part of my world for a long time. Music (or Western music anyway) is systematised and symbolic, but also immediate and affective. That combination has always interested me. Reading Gödel, Escher, Bach - as well as lots of popular science stuff on complex systems - was influential. I was playing around with computers from around the time of the Apple II; later I convinced my father to buy an Amiga 1000, ostensibly to be used in his architecture business. It didn't ever do much architecture but I used it to make lots of bad graphics and music. Also I grew up in an outer suburb, surrounded by wild bushland; I'm a romantic nature boy at heart.
PP: Can you tell me a bit about how the dual role of essayist/writer and artist works in your situation? The dialectical relationship must create a certain amount of self-reflexivity on both sides?
Writing is fundamentally another kind of making - when it works, text and ideas are a pretty heady medium. So to some extent it's all practice, or at least speculation, experimentation, thinking of various sorts. When it works best, the practical work can trial or extend the writing, and the writing can contextualise, interpret and unpack the art work. "Practice-led research" works for me as an approach - especially if you don't split art-making and writing along neat practice / theory lines.
PP: Can you tell me about any projects you have planned for the future, any new books in the pipeline or art projects in progress?
Since 2008 I've been researching and developing interactive visualisations of cultural collections datasets, working with partners including the National Archives of Australia and most recently the National Gallery of Australia. The work is challenging and rewarding; I enjoy the way data vis can span the poetic and the prosaic, and the immersive richness of large data sets. That line of work has been pulling me away from "art", which is fine with me - I generally find the edges and interfaces around creative digital culture and practice more interesting than the portion of it inside gallery walls. But the writing is also ticking over, mostly on digital materiality (or transmateriality) and the aesthetics of computational art and design. There's a new book in there somewhere, I hope.
At the risk of some sort of blog-will-eat-itself situation, I'm posting this paper, presented at TIIC last November, which includes several threads developed here previously - arrays, transmateriality, and the work of HC Gilje. There are some new bits too, however, on screens, projection mapping, and lots of tasty examples of a putative "post-screen" practice.
1. Glowing Rectangles
For all the diversity of the contemporary media ecology - network, broadcast, games, mobile - one technical form is entirely dominant. Screens are everywhere, at every scale, in every context. As well as the archetypal "big" and "small" screens of cinema and television we are now familiar with pocket- and book-sized screens, public screens as advertising or signage, urban screens at architectural scales. As satirical news site The Onion observes, we "spend the vast majority of each day staring at, interacting with, and deriving satisfaction from glowing rectangles."
Formally and technically these screens vary - in size and aspect ratio, display technology, spatiotemporal limits, and so on. They are united however in two basic attributes, which are something like the contract of the screen. First, the screen operates as a mediating substrate for its content - the screen itself recedes in favor of its hosted image. The screen is self-effacing (though never of course absent or invisible). This tendency is clearly evident in screen design and technology; we prize screens that are slight and bright - those that best make themselves disappear. Apple's "Retina" display technology claims to have passed an important perceptual threshold of self-effacement, attaining a spatial density so high that individual pixels are indistinguishable to the naked eye (below - image Bryan Jones).
The second key attribute of contemporary digital screens is their tendency to generality. The self-effacing substrate of the screen is increasingly a general-purpose substrate - unlinked to any specific content type; equally capable of displaying anything - text, image, web site, video, or word-processor. This attribute is coupled of course to the generality of networked computing; since the era of multimedia the computer screen has led the way in modeling itself as a container for anything (just as the computer models itself as a "machine for anything"). The past decade has simply seen this general-purpose container proliferate across scales and contexts, ushering us into the era of glowing rectangles.
However over the past decade in design and the media arts, a wave of practice has appeared which, as this paper will argue, resists the dominance of the glowing rectangle. This is unsurprising given the near-total cultural saturation of the screen, and the ongoing cultural dance of fringe and mainstream in which this practice participates. This is not simply a story of resistance however. In proposing and describing two particular strains of "post-screen" practice, this paper aims firstly to outline the shared terms of their relationship with the screen, and in the process develop a more detailed sense of the conceptual device of generality, outlined above, and its opposite, specificity. Secondly, and more briefly, it outlines a theorisation of this practice, invoking transmateriality, an account of the paradoxical materiality of (especially digital) media, and Gumbrecht's notion of presence.
2. Arrays
During the opening ceremony of the 2008 Beijing Olympics, a huge grid of drummers assembled in the stadium, each standing before a large square fou drum, a traditional Chinese instrument. Each drum was augmented with white LEDs mounted on its surface, triggered with each drum stroke. The drummers formed a vast array of discrete audiovisual elements, precisely choreographed in the style of these spectaculars. Human pixels, but coarse and resolutely human; at one point the drummers desynchronised entirely, forming a thunderous grid of flickering light. In a ceremony created for the (broadcast) screen - to the infamous extent of splicing computer-animated fireworks into its telecast in place of real ones - the drummers were a moment of involution. Their array echoed all the other, more conventionally self-effacing screens threaded through the event; but it also inverted some of their key attributes. Firstly its substrate, instead of receding behind "content", came forward; if anything substrate and content were one and the same. Secondly, while this array nods towards the generality of the screen in its choreographed patterns - which like the patterns on a screen could be "anything at all" - it veers strongly in the opposite direction, towards the here and now, what I will call specificity. As I argued at the time, the poetics of this array rely on the specificity of its elements - the drummers, drums, and their solid-state illumination - rather than the patterns that play across it.
The drummers are one popular example of a formal trope we can find throughout media arts and design practice over the past decade. Daniel Rozin's Wooden Mirror (1999) is one of the earlier examples. Wooden Mirror is an array of square wooden tiles embedded in a large octagonal frame, along with a bundle of custom electronics. The tiles are fitted with servomotors, so that each one can tilt up and down on its horizontal axis. As its angle to the light changes, each tile appears brighter or darker. Rozin wires up the array to a video camera, to complete the mirror circuit: the brightness of pixels in the incoming image drives the angle of the tiles. Given the overtly visual logic of the work, it's interesting that its sound is equally striking: the wooden tiles clatter like mechanical rainfall, sonifying the rate of change of the image; as the image becomes still, the clatter dies off to a low twitching. Again, this array emphasises the material presence of its substrate. The tonal "generality" of the wooden mirror is functional enough to be familiar, but the coarse mechanical clattering of these pixels makes them inescapably specific.
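The underlying mapping is simple enough to paraphrase in a few lines of Processing - this is my reconstruction of the logic, not Rozin's code, with a drifting noise field standing in for the camera input:

```processing
// A rough sketch of the Wooden Mirror's logic: each coarse "tile" tilts
// in proportion to the brightness of its input pixel, and its apparent
// tone comes from how the tilted surface catches the light.
int grid = 30;

void setup() {
  size(600, 600, P3D);
  noStroke();
}

void draw() {
  background(30);
  directionalLight(255, 255, 240, 0.3, -0.4, -1);
  float tile = width / float(grid);
  for (int i = 0; i < grid; i++) {
    for (int j = 0; j < grid; j++) {
      // stand-in "camera" brightness, 0..1 (a live feed would go here)
      float b = noise(i * 0.15, j * 0.15, frameCount * 0.01);
      pushMatrix();
      translate((i + 0.5) * tile, (j + 0.5) * tile, 0);
      rotateX(map(b, 0, 1, -QUARTER_PI, QUARTER_PI));  // brightness -> tilt
      fill(180, 140, 90);  // a wooden tone; shading comes from the light
      rect(-tile * 0.45, -tile * 0.45, tile * 0.9, tile * 0.9);
      popMatrix();
    }
  }
}
```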
Rozin has made many similar mirrors; notable is Trash Mirror (2001), where the individual elements - irregularly shaped pieces of rubbish - are packed into a freeform mosaic. This array moves one more step away from the homogeneous generality of the digital screen. Here the elements are irregular in size and shape, but also carry their own specific textures and colours. In Mirrors Mirror (2008) the regular grid returns, but the array elements are themselves replaced by mirrors; as these tilt they reflect different parts of the environment. Here the location of the tonal "content" in the array is, like the image source, deferred to the environment. In a familiar digital screen, image elements are luminous modules whose colour value is independent and absolute. In Rozin's Wooden Mirror that value becomes relative - tonality is based on self-shading, which depends on the lighting of the work. In Mirrors Mirror this relativity is multiplied; each element will reflect a different portion of the environment, depending on both its angle and the viewpoint of the observer.
In many cases these media art arrays depart from the two-dimensional grid entirely. Robert Henke and Christopher Bauder's ATOM (2007-8) (above) is an eight-by-eight grid of white helium balloons, each one fitted with LED illumination and tethered to a computer-controlled winch. The grid becomes a mobile, configurable light-form, tightly coupled with Henke's electronic soundtrack in live performance. This array lowers its resolution drastically, and limits its generality in one dimension (monochrome elements), but extends its reach (literally) into a third axis. ART+COM's 2008 kinetic sculpture at the BMW museum uses a similar configuration, but a higher "resolution" - in this case 714 metal spheres are suspended from motorised cables, forming a smoothly undulating matrix - a sort of programmed corporate ballet. Cloud (2008), a sculpture in Heathrow airport by London art and design firm Troika, illustrates another permutation: here a 2d array forms the skin of a large three-dimensional sculptural form. In this case the elements are electromagnetic flip-dots - components often used in airport signage before it was overtaken by glowing rectangles. As in Rozin's Mirrors, Troika consciously exploit the materiality, gestural character and the sound of these retro-pixels. rAndom International's 2010 Swarm Light demonstrates a "saturated" 3d array. The work consists of three cubic arrays of white LED lights, each ten elements per side; these cubic volumes host a flowing, flickering "swarm" of sound-responsive agents which traverse the space, brightening or dimming the array as they move.
The work of British designers United Visual Artists offers a useful longitudinal study in post-screen imaging; in particular their work addresses one of the central technical players in this field, LED lighting. UVA's first project involved a huge LED array that formed the stage set of Massive Attack's 100th Window tour. Unlike a more conventional screenful video backdrop, this low-res grid had an inescapable presence, hung directly behind the band and looming over the stage. Rather than an image machine, UVA treat the grid as a luminous dot-matrix for the twitching alphanumeric characters of real-time data. In subsequent work UVA develop this approach in a number of directions, but digitally articulated light - enabled by the LED - is a recurring theme. In Monolith (2006) UVA use a pair of large, full-colour LED screens, but treat them as a dynamic light source rather than a substrate for images; subtle gradients and washes of colour spill over the audience and into the installation environment, coupled with generated sound. In Volume (2006), another installation piece, the array elements are long vertical LED strips, again treated as generators of pattern, colour and sound; the work forms an interactive field as each element responds to nearby activity. In the context of this steady dismemberment of the screen, UVA's later work The Speed of Light is notable in that it leaves LED arrays aside entirely. Instead it uses installed lasers manipulated into dynamic, walk-in calligraphy, as if light had been finally prised away from its digital substrate, and turned loose in the environment.
Beyond their formal similarities, these arrays share some core approaches and contexts which provide a coherent portrait of a sort of post-screen practice. These works adopt one key feature of the screen - the "generality" of an articulated substrate - but trade it off to varying extents for more "specificity" - exploiting the local, particular materiality of the work and its environment. This specificity is also technological, reflecting a practice that crafts hard- and software into idiosyncratic configurations, rather than using off-the-shelf infrastructure. Light is a strong theme, in particular the solid-state, digitally addressable light of the LED (essentially a free-floating pixel). However the optical in these arrays is always tightly coupled with other modalities, especially sound, which is either a cherished byproduct of the array mechanism (as in Rozin's Mirrors and Troika's Cloud) or generated by the array elements themselves (as in the drummers and UVA's Volume). A quality of liveness is linked with the turn to specificity and being-in-the-environment; from the "live data" of UVA's Massive Attack show, to the live interaction and generation of their later installations, to the live video driving Rozin's Mirrors. Performance and temporary installation are the dominant forms here - emphasising the intensified moment, rather than the any-time of static content.
3. Projection Mapping and Extruded Light
In one sense these arrays present a disintegration of the screen - they pull its elements apart and embed them in the environment. In another strain of media arts practice, something like the converse occurs, though with what I will argue are similar interests and agendas. In this approach screen-like technologies are used intact, rather than decomposed; but their function and their relationship to the environment is transformed. These works reverse-engineer the digital image, exploiting its digital (general) malleability in order to fit it to a specific environment.
The work of Norwegian artist HC Gilje illustrates one trajectory of this second post-screen approach. Gilje's work from the late 90s was in live digital video, with his ensemble 242.pilots. This practice was linked to the burgeoning activity in experimental electronic music at the time; here again, performance, improvisation and the intensified moment - what Gilje calls an "extended now" - are central concerns, though the work is strongly screen-focused in its results. In his work over the following decade, Gilje demonstrates another path towards the post-screen. Gilje's nodio (2005-) is a custom software system for distributing video content across collections of linked "nodes". In drifter (2006) these nodes are manifest as a ring of twelve screens which form a linked audiovisual interspace. With dense (2007) these nodes take on a more sculptural presence - hanging strips of fabric illuminated from both sides with a tailored video-projection. Here Gilje adapts the screen technology of the video projector to a sculptural environment, pushing it one step away from image and towards illumination. The work also depends on a specific material surface - the translucent weave of the fabric enables the double-sided layering of pattern.
shift (2008) (above) develops this approach using a technique known as projection mapping, in which the projected image is reverse-engineered to fit a specific surface. In shift Gilje's nodes are simple rectangular boxes, constructed from plywood. Using more custom software, the artist illuminates a cluster of these boxes with precisely mapped projected images. The coupled sound emanates from speakers housed in each box, so the objects are again audiovisual (and acoustically distinct) nodes; Gilje composes material for this environment in search of what he terms "audiovisual powerchords" - moments of intense juxtaposition and interplay. In blink (2009) Gilje dispenses with the boxes, instead treating the bare installation space. Simple, geometric elements - angular lines and bands of tone and colour - are reflected and modulated by the space itself, diffusing from irregular polished floorboards and painted walls. The work plays the room with articulated light, carefully matched to its geometry in a way that heightens our awareness of the interplay of space, light and materials.
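The basic move behind projection mapping is easy to sketch in Processing: render your content to an offscreen buffer, then pin its corners to an arbitrary quad. This is a toy corner-pin, not Gilje's nodio software; in a real installation the corner positions would be calibrated against the physical object:

```processing
// A minimal corner-pin: generated "content" is warped onto a skewed quad,
// standing in for the face of a physical surface.
PGraphics tex;

void setup() {
  size(600, 400, P3D);
  tex = createGraphics(256, 256);
}

void draw() {
  // generate some moving content for the texture
  tex.beginDraw();
  tex.background(0);
  tex.noStroke();
  tex.fill(255);
  for (int i = 0; i < 8; i++) {
    float y = (i / 8.0) * tex.height + sin(frameCount * 0.05 + i) * 10;
    tex.rect(0, y, tex.width, 6);
  }
  tex.endDraw();

  // pin the texture's four corners to an arbitrary quad:
  // vertex(x, y, u, v) ties texture coords (u, v) to screen points
  background(20);
  noStroke();
  textureMode(NORMAL);
  beginShape();
  texture(tex);
  vertex(150, 80, 0, 0);    // top-left
  vertex(480, 130, 1, 0);   // top-right
  vertex(430, 350, 1, 1);   // bottom-right
  vertex(100, 300, 0, 1);   // bottom-left
  endShape(CLOSE);
}
```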
Projection mapping has recently flourished in "visualist" practice across art, design and performance contexts; trompe-l'oeil architectural facades are one popular genre, manipulating the built environment by rendering it with a tailored skin of articulated light (see for example Urbanscreen's 555 Kubik). German designers Grosse 8 and Lichtfront demonstrate a logical extension of the technique, using multiple projectors to create an "augmented sculpture" in the round.
Another notable example is Scintillation (2009) (above) by Xavier Chassaing, a digital stop-motion film in which projection mapping is used to layer a domestic environment with luminous swirls of particles, igniting the petals of an orchid and tracing the curves of a moulded plaster cornice. As in Gilje's blink, Scintillation emphasises the ambience of the projected light - reflections and diffusions are heightened by hand-held macro cinematography, artfully producing an impression of material texture. But in the process it raises some interesting problems for our analytical premise - a shift from the screenful image to something more live and specific. For Scintillation is absolutely a work of filmmaking; here projection mapping - the tailored materialisation of the image - is deployed as a technique for producing generalisable, substrate-independent image content.
The final example in this survey addresses the same tension. In their recent short film Making Future Magic (above), London design agency Berg give an ingenious demonstration of both the material turn of post-screen imaging, and its recuperation as image content. Berg developed an animation technique combining multiple-exposure stop-motion with a hand-held source of articulated light - specifically the glowing rectangle of the moment, Apple's iPad. 3d forms are digitally modelled and animated, then decomposed into sequences of 2d slices. These slices are then replayed into the environment, and thus recomposed into 3d forms, by moving an iPad screen over successive still frame exposures. As Berg term it, this is "extruded light" - as in UVA's latest work, it's as if light itself has been unpinned from its substrate. The results are a beguiling combination of loose, organic light painting with simple 3d geometry and DSLR imaging. As Berg frame the work, it fits entirely within the post-screen turn proposed here. Responding to a brief around "a magical version of future media", Berg are "exploring how surfaces and screens look and work in the world ... finding playful uses for the increasingly ubiquitous ‘glowing rectangles’ ...". Again the material embeddedness of this articulated light is emphasised - the way it reflects from puddles and diffuses through foliage. Screen as object in the world, rather than window to somewhere else. As in Scintillation however the inescapable irony is that the outcomes of this work are entirely bound up with screenful images - with the generalising infrastructures and distribution pipelines of social image sharing, print-on-demand and networked video.
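The slicing step is easy to reconstruct in outline - a hypothetical Processing sketch rather than Berg's actual toolchain: step through a 3d form, emitting one 2d cross-section per frame, ready to be replayed on a moving screen under a long exposure. Here the form is a sphere, whose slices have a closed form:

```processing
// Slicing a 3d form into a sequence of 2d frames - the general idea
// behind "extruded light". Each frame is one cross-section; uncomment
// saveFrame() to export the sequence for playback.
int slices = 60;

void setup() {
  size(400, 400);
  noStroke();
}

void draw() {
  background(0);
  int i = frameCount % slices;
  float z = map(i, 0, slices - 1, -1, 1);          // position along the axis
  float r = sqrt(max(0, 1 - z * z)) * width * 0.4; // circular cross-section
  fill(255);
  ellipse(width / 2, height / 2, r * 2, r * 2);
  // saveFrame("slice-####.png");
}
```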
4. Transmateriality and Presence Culture
To recap briefly: the ubiquitous digital screen is characterised by both generality - an ability to display any content at all - and self-effacing slightness - it tries to make itself disappear as a neutral substrate for content. In contrast to these tendencies this paper describes two distinct but parallel strains of "post-screen" practice in the media arts and design. Arrays mimic the grid configuration of the screen, but lower its resolution and emphasise the material presence of the array elements - their local and individual specificity is balanced with their malleable generality (their ability to carry anything-at-all). Projection mapping and "extruded light" practices also emphasise specificity, materiality and a local, performative being-in-the-world, but they do so by different means - exploiting the malleability of the digital screen (and the computational representations it hosts) in order to make it intensely site-specific. To the extent that they both adapt and resist the attributes of our familiar glowing rectangles, we could describe these practices as post-screen, but this "post" is nothing like a conscious critique, let alone a revolutionary break. However hard they may pull towards specificity and local materiality, they are readily - by design or necessity - recaptured as screen fodder.
Both these post-screen tendencies and their screenful recuperation can be usefully framed through the notion of transmateriality, a concept that attempts to capture a fundamental duality in digital (and other) media: they are everywhere and always material, yet often function as if they are immaterial. In a transmaterial view media always operate as local material instances (this is their aspect of specificity) yet retain the ability to hold specificity at bay - resisting the contingencies of flux - to create a functional generalisation in which this pixel is the same as that one, the email I send is the same as the one you receive, and one node on the network is much the same as any other.
In the glowing rectangle paradigm functional generality is entirely dominant. The work considered here, on the other hand, revels more in the pleasures and practices of specificity - the clatter of servo-actuated wood or the play of light on this particular wall. In their push towards liveness (of interaction or data), performativity, their integration of sound, and their emphasis on evanescent materiality, these works evoke what Hans Ulrich Gumbrecht would call "presence culture" - that mode of apprehending the world which is characterised by fleeting but intense moments of being, and a sense of being part of the world of things, rather than outside it, looking in. Gumbrecht constructs presence in opposition to a dominant "meaning culture", in which the essence of material things can be obtained only through interpretation. Gumbrecht describes the relationship between these poles as one of dynamic oscillation. "Presence phenomena" become "effects of" presence, "because we can only encounter them within a culture that is predominantly a meaning culture. ... [T]hey are necessarily surrounded by, wrapped into, and perhaps even mediated by clouds and cushions of meaning".
In exactly the same way we find an inevitable oscillation here between screen and post-screen. We can align the screen with generality and meaning culture, and the post-screen with specificity and presence culture; but here too the post-screen is evanescent and elusive, existing largely within a dominant screen culture. However this is not to discount the utopian aspirations of a post-screen practice, which might instead be located through the perspective of transmateriality. For in echoing the screen, or in literally bending it to the local, present and specific, these works operate as reminders of the ubiquitous and everyday materiality of our media; of the fact that despite appearances, every glowing rectangle is already local and specific. If that specificity is latent, then these works demonstrate practical strategies for making it explicit; from hardware hacking to modular LEDs and custom software, they participate in what might be called "expanded computing", using the malleability of digital media to reactivate its presence - and thus our presence, too - in the world of things.
After far too long, some loosely formed thoughts on dynamic design, prompted by converging links and conversations over the last few days. One of these is the new MIT Media Lab identity from The Green Eyl. It's nice work, but it also seems like a new high-water mark for generative or dynamic graphic design.
In this approach graphic design goes "meta": from controlling a set of visual relationships, to controlling a system for generating visual relationships. As in other generative forms, there's a payoff in the multiplicity of the results - one logo? try 40,000 variants! But more interesting I think is a change in the locus of design, where design happens. To see one of these new logos is to appreciate its colour, form and typography; to see a dozen is to begin to appreciate the variety and coherence of relationships the designers have created. But to engage with the work fully - for example, if you're a Media Lab person, to generate your own personal variant - is to understand that it's not a logo, or even a family of logos, but a dynamic "identity system". And because this is a logo, any instance of it comes to signify not only the client, but the dynamic system, or to be more specific, a quality of "dynamic systemness." What better brand value for the Media Lab?
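To make the idea concrete, here is a toy sketch of such an identity system - not The Green Eyl's algorithm, and with invented parameters throughout - in which a person's name deterministically seeds their own variant:

```python
import hashlib
import random

def identity_variant(name: str) -> dict:
    """Derive a personal logo variant from a name (toy sketch).

    The design system is the code; each instance is one run of it.
    The palette, angles and weights here are invented for
    illustration, not taken from the Media Lab identity.
    """
    seed = int(hashlib.sha256(name.encode()).hexdigest(), 16)
    rng = random.Random(seed)  # same name always yields same variant
    palette = ["#e6007e", "#00a0e3", "#ffd500", "#111111"]
    return {
        "colours": rng.sample(palette, 3),
        "angles": [rng.randrange(0, 360, 15) for _ in range(3)],
        "weight": rng.choice([1, 2, 4]),
    }

print(identity_variant("Ada Lovelace"))
```

The point of the sketch is the locus of design: nothing here specifies a logo, only the space of possible logos.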
There is also an aspect of something like performance here. Instead of an imprint or copy, the logo becomes a performance of its system (signifying that system in the process). When I discussed this with my friend Geoff Hinchcliffe the other day, he pointed out that this is really nothing new for graphic design. Any book jacket design is inevitably a performance of the genre (or system) that is "book jacket". Graphic forms like book covers are often highly constrained and rule-driven, just like this new-fangled dynamic design. Geoff's own Twitter Modern Classics demonstrates this beautifully, rendering tweets through the design templates of Penguin's iconic paperbacks. If cover design is a set of rules, it's no surprise a computer can execute them so effectively. Here dynamic design is a poetic strategy, a way to strike sparks of joy and surprise from the collision of form and content.
The final example comes by way of Daniel Neville, another designer with an interest in dynamic identity systems (or relational design). In fact the Melbourne Restaurant Name Generator is not really design at all. If anything it's something like generative satire, in the same genre that can turn out band names or even whole computer science papers. The Melbourne Restaurant thing works for me because it is such acute satire: from the recycled decor to the uber-limited menu and the obsession with bicycles, it just nails a whole urban scene. As a piece of generative satire it works by both portraying its target as formulaic - as nothing but a system - while also milking the absurd juxtapositions that its own system generates. It seems to cleave a complex thing at its joints, revealing underlying elements and relationships. Maybe there's something here for dynamic graphic design?
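The underlying pattern is simple enough to sketch. This toy version - with invented word lists, not those of the actual Melbourne Restaurant Name Generator - treats the scene as a formula and milks the juxtapositions:

```python
import random

# Toy sketch of the generative-satire pattern: treat a scene as a
# formula and recombine its clichés. All word lists are invented.
FIRST = ["The Rusted", "Little", "Brother", "Saint", "The Borrowed"]
SECOND = ["Sparrow", "Anchor", "Typewriter", "Tram", "Pickle"]
SERVES = ["single-origin espresso", "one kind of toast",
          "whatever arrived by bicycle this morning"]

def restaurant() -> str:
    """Generate one satirical restaurant, name plus menu."""
    return (f"{random.choice(FIRST)} {random.choice(SECOND)}: "
            f"serves {random.choice(SERVES)}.")

for _ in range(3):
    print(restaurant())
```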
This post is a short excerpt from a paper recently published in Architectural Theory Review 15(2) - a special issue on architecture and geometry with lots of good (Australian) stuff. My paper (pdf) is a critical look at space-filling geometry in generative design. It touches on several things already blogged - the Water Cube and ideal foams, and some generative projects that use self-limiting growth. This excerpt looks at the Voronoi diagram as a space-filling process.
The Voronoi diagram has become a ubiquitous motif in recent generative architecture and design. It, too, can be usefully read as a space-filling model. In formal terms, a Voronoi diagram is a way of dividing space into regions: for a given set of sites within that space, each region contains all the points that are closer to its site than to any other. The result is also foam-like, but as a model the Voronoi diagram has attributes quite different to the ideal Kelvin or Weaire-Phelan foams.
Firstly, while the formal model is again based on a strict set of conditions (in this case proximity), it works with an arbitrary input - the given sites - rather than defining a regular structure. The Voronoi is thus a procedural geometric structure in a way that the ideal foams are not: its structure emerges through the application of a specific process or algorithm to a given set of inputs. In this way, the specific spatial relations between neighbouring cells depend on, and emerge locally from, the given spatial relations of the specified sites. This trait also gives the Voronoi model a kind of malleability: sites can be added, removed, or moved, and the spatial structure readily adapts.
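For the computationally inclined, a minimal sketch using scipy's Qhull bindings shows both the procedural character and the malleability - change the sites, recompute, and the structure adapts. The sites here are random, purely for illustration:

```python
import numpy as np
from scipy.spatial import Voronoi

# The diagram is computed from an arbitrary set of sites; its
# structure emerges from the algorithm applied to those inputs.
rng = np.random.default_rng(1)
sites = rng.random((12, 2))        # 12 arbitrary sites in the unit square
vor = Voronoi(sites)
print(len(vor.regions), "regions from", len(sites), "sites")

# Malleability: move one site and the structure readily adapts -
# in practice, this simply means recomputing the diagram.
sites[0] += 0.2
vor2 = Voronoi(sites)
```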
Again we can read off the attributes of the Voronoi as a model. It is multiplicitous, but in a different way to the grid-like uniformity of the foam models. In this case the multiplicity can, in fact, be irregular: the sites can be positioned anywhere within a given space. However, this does not amount to much in terms of heterogeneity: while the sites can be positioned arbitrarily, the procedure, and the relation between sites that it encodes, is entirely uniform. Each site, taken as a formal entity, is identical to every other; this is a kind of uniform diversity. Like the foam models, the Voronoi diagram treats space as indefinite and extensive: it can go on forever, its only practical limit being the computational resources required to calculate the diagram. The model itself has no way of defining an edge or bound. Finally, the variability of the Voronoi can be phrased another way, as arbitrariness: there is no inherent reason for a given site to be where it is, and nothing internal to the model can generate that differentiation.
In Marc Newson's Voronoi Shelf, for example (above), we see a characteristically organic variety: a range of cell sizes and shapes, different wall thicknesses, all in an agreeable state of harmony. The form gives an impression of inherent logic. It is as if the harmony of the relationships between the cell sites assures us that there must be a reason for them to be as they are. This is unsurprising, given our familiarity with, and aesthetic attunement to, naturally occurring structures that resemble these cells. The visual signature carries an association of organic logic: but in formal fact the cell sites are arbitrary, that is to say, designed. There is no necessary relation of one to another, only (we can but assume) a designer's choice, which is concealed by an appearance, much as the surface of the Water Cube conceals the regularity of its foam model.
Conversely, some designers directly address the arbitrary input to the Voronoi diagram, treating it as an opportunity and exploiting the malleability of the model. As Dimitris Gourdoukis writes, "the problem of deciding on the initial set of points is, I think, one of the most interesting in relation to voronoi diagrams." In Gourdoukis' Algorithmic Body project (above), the locations of the Voronoi sites are specified by a second generative system, a cellular automaton; here the Voronoi acts as a geometric filter, interpreting and interpolating one set of spatial data into another. In Marc Fornes' POLYTOP, the designer proposes a mass-customised product in which customers can design the point cloud that drives the Voronoi geometry; here a problem of arbitrary choice is turned into a feature, towards uniqueness and specificity.
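As an illustrative analogue of that strategy (not Gourdoukis' actual code), here one generative system - a simple cellular automaton - supplies the sites for another, the Voronoi:

```python
import numpy as np
from scipy.spatial import Voronoi

# One generative system feeds another: live cells of a cellular
# automaton (Conway's Life, as a stand-in) become Voronoi sites.
rng = np.random.default_rng(7)
grid = rng.random((20, 20)) < 0.3   # random initial automaton state

def life_step(g):
    """One step of Conway's Game of Life on a toroidal grid."""
    n = sum(np.roll(np.roll(g, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return (n == 3) | (g & (n == 2))

for _ in range(5):
    grid = life_step(grid)

sites = np.argwhere(grid)           # live cell coordinates become sites
if len(sites) >= 4:                 # Qhull needs a handful of points
    vor = Voronoi(sites)
    print(len(sites), "automaton cells drive the diagram")
```

The Voronoi here does exactly what the passage above describes: it interprets and interpolates one set of spatial data into another.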
Measuring Cup is a little dataform project I've been working on this year. It's currently showing in Inside Out, an exhibition of rapid-prototyped miniatures at Object gallery, Sydney.
This form presents 150 years of Sydney temperature data in a little cup-shaped object about 6cm high. The data comes from the UK Met Office's HadCRUT subset, released earlier this year; for Sydney it contains monthly average temperatures back to 1859.
The structure of the form is pretty straightforward. Each horizontal layer of the form is a single year of data; these layers are stacked chronologically bottom to top - so 1859 is at the base, 2009 at the lip. The profile of each layer is basically a radial line graph of the monthly data for that year. Months are ordered clockwise around a full circle, and the data controls the radius of the form at each month. The result is a sort of squashed ovoid, with a flat spot where winter is (July, here in the South).
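For the curious, here is a sketch of that layer construction; the temperature-to-radius mapping (`base_radius`, `scale`) is a placeholder, not the actual parameters of the piece:

```python
import math

def layer_points(monthly_temps, z, base_radius=20.0, scale=1.5):
    """One horizontal layer of the cup: a radial line graph of a year.

    Months run clockwise around a full circle; each month's
    temperature sets the radius at that angle. The linear mapping
    here is an assumption for illustration.
    """
    points = []
    for month, temp in enumerate(monthly_temps):
        angle = -2 * math.pi * month / 12       # clockwise ordering
        r = base_radius + scale * temp
        points.append((r * math.cos(angle), r * math.sin(angle), z))
    return points

# Stack years bottom to top: 1859 at the base, 2009 at the lip.
# Assuming years_of_data is a list of 12-value monthly lists:
# layers = [layer_points(y, z=i * 0.4) for i, y in enumerate(years_of_data)]
```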
The data is smoothed using a moving average - each data point is the average of the past five years' data for that month. I did this mainly for aesthetic reasons, because the raw year-to-year variations made the form angular and jittery. While I was reluctant to do anything to the raw values, moving average smoothing is often applied to this sort of data (though as always the devil is in the detail).
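In code, the smoothing looks something like this - a sketch of the approach described above, not the project's exact Processing code:

```python
import numpy as np

def smooth(monthly, window=5):
    """Per-month moving average over the preceding `window` years.

    `monthly` is assumed to be an array of shape (years, 12);
    each value becomes the mean of that month's values over the
    current and previous years in the window.
    """
    data = np.asarray(monthly, dtype=float)
    out = np.empty_like(data)
    for y in range(data.shape[0]):
        lo = max(0, y - window + 1)
        out[y] = data[lo:y + 1].mean(axis=0)  # per-month mean over past years
    return out
```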
The punchline really only works when you hold it in your hand. The cup has a lip - like any good cup, it expands slightly towards the rim. It fits nicely in the hand. But this lip is, of course, the product of the warming trend of recent decades. So there's a moment of haptic tension there, between ergonomic (human centred) pleasure and the evidence of how our human-centredness is playing out for the planet as a whole.
The form was generated using Processing, exported to STL via superCAD, then cleaned up in Meshlab. The render above was done in Blender - it shows the shallow tick marks on the inside surface that mark out 25-year intervals. Overall the process was pretty similar to that for the Weather Bracelet. One interesting difference in this case is that consistently formatted global data is readily available, so it should be relatively easy to make a configurator that will let you print a Cup from your local data.
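For reference, the STL step is simple enough to sketch generically. This is not superCAD's code; normals are written as zero, which Meshlab and most mesh tools will happily recompute:

```python
def write_ascii_stl(path, triangles, name="measuring_cup"):
    """Write triangles (tuples of three (x, y, z) vertices) as ASCII STL.

    A generic sketch of the export step, not the actual pipeline code.
    """
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v1, v2, v3 in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in (v1, v2, v3):
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```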
These IBM commercials are gorgeous, lavish examples of modern motion graphics from Motion Theory. Like some of the agency's earlier work, and a handful of other examples noted here, these ads show how code-literate design (could we call it the P factor?) is transforming this field. For all those reasons, I love this work; but it also really bothers me. I'll try to explain.
The opening line of this voiceover says it all, really. This is data. Making that call - defining what data is - is a powerful cultural gesture right now, because as I've argued before data as an idea or a figure is both highly charged and strangely abstract. It makes a lot of sense for a corporation like IBM to stake a claim on data; this stuff is somehow both blessing and curse, precious and ubiquitous, immaterial and material. IBM promises here to help with the wrangling, but also, most powerfully, to show us what data is.
So, what is data here? In these commercials data is first and foremost material. It is a physical stuff. In Data Baby it wraps a little infant like some kind of luminescent placenta, drifting away into the air, thrown off in shimmering waves as the child breathes. In Data Energy it trails like a cloud behind a tram, and spins with the blades of a wind turbine. A lot of the (beautiful) animation work here has been devoted to simulating behaviour, making this colourful, abstract stuff seem to be tightly embedded in the world with us. What that means is both coupling it tightly to real objects, and supplying it with immanent dynamics - making it drift, disperse or twirl.
The second interesting property of data here - related to the first - is that it just exists. Look again at Data Baby, and note that there is no visible sign of this data being gathered (or rather, made). No oxygen saturation meter, no wires, no tubes, no electrodes. Not a transducer in sight. Not until the closing wide shot do we even see a computer. (This is fascinating in itself; IBM (or their ad agency) gets it that the computer is no longer the right image, or metaphor, for "information technology". Neither is the network; now it's immanent, abundant data.) In other words data here is not gathered, measured, stored or transmitted - or not that we can see. It just is, and it seems to be inherent in the objects it refers to; Data Baby is "generating" data as easily as breathing.
Completing this visual data-portrait are some other related themes: data is multiplicitous and plentiful, it's diverse (many colours and shapes) but ultimately harmonious and beautiful - in Data Transportation it looks like an urban-scale 3d Kandinsky painting.
Several things bother me about this portrayal. The first is the same as the reason I love it: it's powerfully, seductively beautiful, and this amplifies all my other reservations. The vision of data as material, in the world, is incredibly seductive; my concern is that we get such pleasure from seeing these rich dynamics play out - that the motes wafting from Data Baby's skin seem so right - that we overlook the gaps in the narrative. This vision of material data is also frustrating because it has all the ingredients of a far more interesting idea: data is material, or at least it depends on material substrates, but the relationship between data and matter is just that, a relationship, not an identity. Data depends on stuff; it is always in stuff, moving transmaterially through it, but it is precisely not stuff in itself.
You could say that I'm quibbling about metaphors here, and you'd be right; but metaphors are crucially important because they shape what we think data is, and what it does. Related to data-as-stuff is this second attribute: data that just is, in the same way that matter is neither created nor destroyed, but simply exists. This is crucially, maybe dangerously, wrong. Data does not just happen; it is created in specific and deliberate ways. It is generated by sensors, not babies; and those sensors are designed to measure specific parameters for specific reasons, at certain rates, with certain resolutions. Or more correctly: it is gathered by people, for specific reasons, with a certain view of the world in mind, a certain concept of what the problem or the subject is. People use sensors to gather data, to measure a certain chosen aspect of the world.
If we come to accept that data just is, it's too easy to forget that it reflects a specific set of contexts, contingencies and choices, and that crucially, these could be (and maybe should be) different. Accepting data shaped by someone else's choices is a tacit acceptance of their view of the world, their notion of what is interesting or important or valid. Data is not inherent or intrinsic in anything: it is constructed, and if we are going to work intelligently with data we must remember that it can always be constructed some other way.
Collapsing the real, complex, human / social / technological processes around data into a cloud of wafting particles is a brilliant piece of visual rhetoric; it's a powerful and beautiful story, but it's full of holes. If IBM is right - and I think they probably are - about the dawning age of data everywhere, then we need more than a sort of corporate-sponsored data mythology. We need real, broad-based, practical and critical data skills and literacies, an understanding of how to make data and do things with it.