Showing posts with label visualisation. Show all posts

Tuesday, February 07, 2012

Local Colour: Smaller World Network

Back in September I showed a little work called Local Colour at ISEA 2011. This project continues my thinking about generative systems, materiality and fabrication. It's a work in two parts: the first is a group of laser-cut cardboard bowls, made from reclaimed produce boxes - you can see more on Flickr, and read the theoretical back-story in the ISEA paper. Here I want to briefly document the second element, a sort of network diagram realised as a vinyl-cut transfer. The diagram was created using a simple generative system, initially coded in Processing - it's embedded below in Processing.js form (reload the page to generate a new diagram).

Local Colour at ISEA 2011
Network diagrams are one of the most powerful visual tropes in contemporary digital culture. Drawing on the credibility of network science they promise a paradigm that can be used to visualise everything from social networks to transport and biological systems. I love how they oscillate between expansive significance and diagrammatic emptiness. In this work I was curious to play with some of the conventions of small world or scale-free networks. A leading theory about how these networks forms involves preferential attachment: put simply it states that nodes entering a network will prefer to connect to those nodes that already have the most connections. In visualising the resulting networks, graph layout processes (such as force direction) use the connectivity between nodes to reposition the nodes themselves; location is determined by the network topology.



This process takes the standard small-world-network model and changes a few basic things. First, it assigns nodes a fixed position in space. Second, it uses that position to shape the connection process: here, as in the standard model, nodes prefer to connect to those with lots of existing connections. But distance also matters: connecting to a close node is "cheaper" than connecting to a distant one. And nodes have a "budget" - an upper limit on how far their connection can reach. These hacks result in a network which has some small world attributes - "hubs" and "clusters" of high connectivity - but where connectivity is moderated by proximity. Finally, this diagram visualises a change in one parameter of the model, as the distance budget decreases steadily from left to right. It could be a utopian progression towards a relocalised future, or the breakdown or dissolution of the networks we inhabit (networks in which distance remains, for the time being, cheap enough to neglect).

The process running here generates the diagram through a gradual process of optimisation. Beginning with 600 nodes placed randomly (but not too close to any other), each node is initially assigned a random partner to link to. Then they begin randomly choosing new partners, looking for one with a lower cost - and cost is a factor of both distance and connectivity. The Processing source code is here.
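The model and its optimisation loop can be sketched in a few lines. Below is an illustrative Python reimplementation (the original is a Processing sketch); the node count follows the text, but the budget range, cost formula and iteration count are my own assumptions about how such a process might be tuned:

```python
import math
import random

def build_network(n=600, width=1.0, height=0.3, iterations=20000, seed=1):
    """Fixed-position nodes, one outgoing link each; links are repeatedly
    swapped for cheaper partners, where cost favours both proximity and
    high connectivity, and each node's reach is capped by a distance
    budget that shrinks from left to right."""
    random.seed(seed)
    # Place nodes at fixed random positions (the real piece also rejects
    # points that fall too close to an existing node; omitted here).
    pos = [(random.uniform(0, width), random.uniform(0, height))
           for _ in range(n)]
    partner = []
    for i in range(n):  # assign each node an initial random partner
        j = random.randrange(n)
        while j == i:
            j = random.randrange(n)
        partner.append(j)
    degree = [0] * n
    for p in partner:
        degree[p] += 1

    def cost(i, j):
        # Cost is a factor of both distance and connectivity: distant
        # partners are expensive, well-connected partners are cheap.
        # (This exact formula is an assumption.)
        return math.dist(pos[i], pos[j]) / (1 + degree[j])

    def budget(i):
        # The distance budget decreases steadily from left to right
        # (range chosen arbitrarily for illustration).
        return 0.02 + 0.25 * (1 - pos[i][0] / width)

    for _ in range(iterations):
        i = random.randrange(n)
        j = random.randrange(n)
        if j == i or math.dist(pos[i], pos[j]) > budget(i):
            continue  # candidate is the node itself, or beyond i's budget
        if cost(i, j) < cost(i, partner[i]):
            degree[partner[i]] -= 1  # rewire i's link to the cheaper j
            partner[i] = j
            degree[j] += 1
    return pos, partner
```

Because well-connected nodes are cheap to link to, hubs emerge as in the standard preferential-attachment story, but the budget keeps them local.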


Saturday, March 27, 2010

commonsExplorer

A quick bit of cross-promotion. The commonsExplorer is an experimental "big picture" browser for Flickr Commons collections - Sam Hinton and I started working on it for MashupAustralia months ago, and it's finally ready. Read some background over on the Visible Archive blog, or download the app and try it out.

commonsExplorer 1.0


Sunday, May 03, 2009

Transduction, Transmateriality, and Expanded Computing

In common usage a transducer is a device that converts one kind of energy to another. Wikipedia lists a fantastic variety of transducers, mapping out links between thermal, electrical, magnetic, electrochemical, kinetic, optical and acoustic energy. In this form transducers are everywhere: a light bulb transduces electrical energy into visible light (and some heat). A loudspeaker transduces fluctuations in voltage into physical vibrations that we perceive as sound.

In analog media, transduction is overt (put the needle on the record...). But digital media are riddled with it too. Input and output devices all contain transducers: the keyboard transduces motion into voltage; the screen transforms voltage into light; the hard drive mediates between voltage and electromagnetic fields. A printer takes in patterns of voltage and emits patterns of ink on a page. Strictly, transduction refers only to transformations between different energy types; here I want to extend it to talk about all the propagating matter and energy within something like a computer, as well as those between that system and the rest of the world. From this transmaterial perspective a computer is a cluster of linked mechanisms and substrates; a machine for shifting patterns through time and space.


If this sounds unfamiliar, it's only by historical accident. Mechanical computers, where these patterns are physically perceptible, predate electrical (let alone digital) ones by centuries (above: a replica of Konrad Zuse's Z1, a mechanical computer from 1936. Image by rreis). Materially, our current computers are more or less black box systems. Their transductions come as a sort of preconfigured bundle or network, a set of familiar relations constructed again by mixtures of hard- and software, protocols, standards: generalising frameworks. I press a key, a letter appears; this is all I need to know. Click "OK". No user-serviceable parts inside.

Except that currently, across the media arts and a whole slew of other fields, the computer is undergoing a rich and productive decomposition. It's composting, to borrow a Sterlingism. This goes under all kinds of different names: hardware hacking, device art, homebrew electronics, physical computing. Such practices mount a direct assault on the computer as a material black box, literally and figuratively cracking it open, hooking it up to new inputs and outputs, extending and expanding its connections with the environment. Microcontrollers like the Arduino present us with nothing but a row of bare I/O pins. Finally we can tackle the question of what should go in, and what should come out: of transduction. A whole generation of artists, designers, nerds and tinkerers are taking up soldering irons and doing just that. Below: the Spoke-o-dometer from Rory Hyde and Scott Mitchell's Open Source Urbanism project.


One side-effect of this decomposition of computing is that the ontological status of the digital starts to break down with it. As Kirschenbaum shows brilliantly, the digital is just the analog operating within certain tolerances or thresholds. Thomas Traxler's The Idea of a Tree (below) is a solar-powered system that fabricates objects from epoxy, dye and string, by turning a spindle. Solar energy generates electrical energy, which drives the motor, which draws the string through the dye and onto the spindle: a chain of analog transductions produces an object that manifests specific changes in its local environment. The work is a beautiful demonstration that variability doesn't have to be worked up with generative code: if the system is open to it, it's already there in the flux of the material field.


This is not to dismiss computing, only to recast it: an incredibly dynamic, pliable set of techniques for manipulating the material environment. Paradoxically the very generalities of computing - the abstractions and protocols that insulate it from local, material conditions - make it a powerful tool for transduction, that is, the propagation of specificities. Usman Haque's Pachube is a generalised infrastructure, a set of protocols and standards that rest in turn on wider standards like XML, and which assume a whole stack of functional layers: IP, HTTP, and so on. All in order to propagate material patterns and flows from here to there: this is an architecture of transduction whose utopian aim is to "patch the planet" into a translocal ecology of linked environments.

Digital fabrication is part of the same shift: an expansion and extension of the computer's range of material transductions. Digital pattern, to lasercutter instructions, to physical form. Fabbing shows how material matters. It's unsurprising that a piece of laser-cut ply is aesthetically different to a luminous pattern of pixels; more interesting is the way computation reaches out into the substrate's material properties, and the range of potential applications and domains it opens up. Fabbing has often presented itself with a narrative of materialisation, making the virtual real, translating bits into atoms - Generator.x 2.0 was subtitled "Beyond the Screen." Not so: because of course, the "virtual" never was, and the screen is material too. Fabbing does get us beyond the screen, but only because its processes and materials have different properties, different specificities, and they hook us up to new contexts, as well as new sensations. (Below: Andreas Nicolas Fischer & Benjamin Maus: Reflection - from 5 Days Off: Frozen)


Transduction suggests a way to link practices like physical computing, fabrication, networked environments, and many more. Data visualisation - in the broadest sense, from poetic to functionalist - is about creating customised transductions, sourcing new inputs and/or manifesting new outputs (even if they don't reach "beyond the screen"). We could add tangible interfaces, augmented reality, and locative systems. What does all this amount to? In 1970 Gene Youngblood observed a similar moment as the dominant cultural form diversified into a networked, participatory, interdisciplinary field of practices. He called it expanded cinema. So perhaps we can call this expanded computing: digital media and computation as material flows, turned outwards, transducing anything to anything else.


Sunday, March 15, 2009

Watching the Street (Navigator) / citySCENE

Vague Terrain 13: citySCENE has just launched. As editor Greg J. Smith writes:

This issue of Vague Terrain is founded on two notions - that the city is a stage set for intervention and an engine for representation.

The collection expands out from this premise in multiple directions: carto-mashups, projection-bombing, sound walks, psychogeographic imaging and ubicomp experiments. Early highlights for me included Crisis Fronts' Cognitive Maps and Database Urbanisms, which presents some impressive work on data visualisation and generative models as urban mapping strategies (below: Case Study: Los Angeles). Overall, on a first look, this collection is incredibly rich. It shows that a creative, wired-up, critical urbanism is not just a wistful aspiration of the technorati, but a real practice.


Having said all that, it's a privilege to be a part of this collection. My contribution is Watching the Street (Navigator), a browsable visualisation of a single day of images from the Watching the Street dataset. It tests out the hunch that these time-lapse slit-scans can be used to read real patterns in the urban environment - that they are (or can be) more than just suggestive abstractions. It uses a simple interface to display both a single source frame, and a correlated slit-scan visualisation, with image-space and time-space sharing an axis, a bit like a slide rule. Greg Smith called it an "urban viewfinder", which sums the intention up nicely.


Playing with the navigator for a while seems to confirm that hunch. The composites reveal temporal patterns in the environment, but not the spatial context that allows us to identify their causes; the source frames show that spatial context, but not the change over time. Reading the two against each other involves chains and cycles of discovery, analysis and inference. These might be open-ended (spatiotemporal browsing) or more directed. What time do the sandwich-boards go out? How long does the delivery truck stay?

Building the navigator presented some interesting technical challenges: mainly, how to make a web-friendly interface to 1440 source frames (240 x 320) and 480 slit-scan composites (720 x 320). That adds up to about 75Mb of jpegs. Processing 1.0 came to the rescue, with its new built-in dynamic image loader. requestImage() pulls in an image from a given URL, on cue, without bringing the whole applet to a grinding halt; it provides some basic feedback on the state of that image - whether it's loading, loaded, or un-loadable. I also blundered into two other useful lessons: how to use the applet "base" parameter, and how to manage Java's local cache, which kept throwing up earlier versions of the applet during testing.

Having made a lean, mean, browser-friendly version, I'm now thinking of adapting the navigator into a full-screen, offline app, with the whole eight-day dataset, and perhaps some tools for annotation and intra-day comparison. Best of all would be a long term installation; a sort of urban space-time observatory, watching the street but also opening it up to ongoing interpretation. If you'd like it running in your foyer, let me know.


Thursday, November 27, 2008

Watching the Street

wts_out_1112
The recent Dorkbot show seemed to go off nicely - it was great to be part of such a strong show of local work (some documentation). I showed some prints from Limits to Growth, as well as a more experimental process piece, Watching the Street - a (sub)urban remake of Watching the Sky.


Credit to Nathan McGinness for the suggestion: use the same time-lapse / slit-scan technique to image change in an urban environment. Technically, the setup was fairly straightforward. Instead of a digital stills camera I used a webcam (in portrait orientation), and wrote a simple Processing script to save stills at one-minute intervals, while extracting and compiling one-pixel slices into 24-hour composites. The webcam was installed in a window box on the gallery street front, with a view across the road, under a street tree, to one of Manuka's low-rise shopping arcades (above). I also attached a printer to the installed rig, so that a new composite could be produced and pinned to the wall each day. So here, some of the resulting images, and a bit of commentary.
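The slice-and-compile step can be sketched as a small function. This is an illustrative Python version (the installed script was written in Processing), working on frames represented as row-major lists of pixel values; the representation is my assumption, chosen to keep the sketch self-contained:

```python
def composite_slices(frames, slice_x):
    """Build a slit-scan composite: take a one-pixel-wide vertical slice
    from each frame (at column slice_x) and tile the slices left to
    right, so the composite's x-axis becomes time."""
    columns = []
    for frame in frames:  # frames in capture order, one per interval
        columns.append([row[slice_x] for row in frame])
    # Transpose so the result is row-major like the source frames:
    # width = number of frames, height = frame height.
    return [list(row) for row in zip(*columns)]
```

With one frame a minute, a 24-hour run yields a composite 1440 pixels wide, each column a minute of the street.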

The image-gathering process got off to a rocky start. After a few hours, the webcam came unstuck from the side of the window-box, and lay forlornly on its side for the next 48 hours (here's what that looks like). I gaffed it back in place just before the opening, and restarted the capture in time to catch some gallery-goers loitering around out the front.

wts_out_1107
wts_out_1108
These two are Friday the 7th and Saturday the 8th of November, the first two full day composites. Those striped rectangular chunks around mid-frame are cars, parked in the 30 minute loading zone across the road. Some stay for a few minutes, a couple for what looks like an hour. Of course on the Saturday, the loading zone doesn't operate, and there's a single car parked in it from mid-morning to mid-afternoon. The single-pixel vertical shards give an indication of passing car and pedestrian traffic.

wts_out_1109
wts_out_1114
A quiet, sunny Sunday the 9th; the form hinted at on the 8th reveals itself as the shadow of the big plane tree, creeping across the footpath. Then the following Friday the 14th. It's all happening; lots of car and pedestrian traffic, changes in sunlight, looks like an afternoon breeze in the foliage as well. The dominant, bluish horizontal stripe in all these images is the neon sign on the shopping centre - which runs all night. The orange rectangle that extends into the evening is the interior light of a shop - which you'll notice switches off at slightly different times each night.

So you'll notice that as in Watching the Sky, I'm persisting in reading these as visualisations of the environment, as well as digital images in themselves. I'm struck by how this simple, indiscriminate process reveals both expected and unexpected patterns, and continues to provoke new questions. This despite, or I would argue because of, its openness to multiple material / temporal systems. In an interesting bit of synchronicity, I was teaching in the UTS Street as Platform masterclass with Dan Hill (more on that soon) while this piece was running. Could a simple visualisation process like this function "informationally", as it were; to help answer real questions about a very specific slice of urban environment, in near-real time? More interesting for me, could it function in that way without prescribing the question in advance - that is, could it support an open-ended process of exploration and interpretation? I'm planning to build an interactive version of this piece, to try out these ideas. In these static visualisations there's a huge amount of data missing: I set the slice point more-or-less arbitrarily, so there are 479 other potentially interesting slices to browse. It would be nice to be able to change the slice point dynamically, as well as navigating through the source images. I notice that Processing 1.0 (yay!) now supports threaded loading of images: could come in handy. Meanwhile, the full set of composite images are up on Flickr.


Wednesday, July 30, 2008

The Visible Archive

I signed the contracts this morning on a research project that I'm really excited about: a grant from the National Archives of Australia to develop interactive visualisations of their collection. That collection has over nine million items, grouped into some thirty thousand series (or sets); it's basically all of the Federal government's paperwork, but also includes photographs, AV material and other stuff. You can search the collection via the Archives site - and access digital copies of the original records in some cases.

The Visible Archive aims to do what the search interface doesn't: provide a sense of context and orientation, revealing structures and relations within the collection. The visualisations should be useful for both archivists and archive users; and the techniques developed should also be useful for other archives and collections.


The idea seems to have some currency - you may have seen Lev Manovich recently announce a project on Visualizing Cultural Patterns, working with collaborators including Noah Wardrip-Fruin.

Read more and follow the project at its own, freshly minted blog. And if you have any pointers to other related work in the visualisation of cultural datasets, especially archives, please send them along.


Wednesday, July 16, 2008

Radiohead's Data Melancholy

In case you missed it, Radiohead have gone all data-aesthetic with their latest video, House of Cards. What's more, it's fully zeitgeist-compliant, with open access and a call for re-visualisations of a quite massive dataset: hundreds of megabytes of spatial data gathered with various 3d laser-scanning rigs. If the download stats and early signs are anything to go on, we will be seeing much more of this dataset.


As well as being technically cool, the project is yet another sign of the increasing cultural prominence of data as both material and idea - in that sense, after Design and the Elastic Mind and Wired's "Petabyte Age", this is more of the same. But it's also something different, it seems to me. Like any other visualisation, House of Cards doesn't only use data, it presents a certain sense of what data is, means, and (crucially) feels like; and this is where it's different. The dominant narrative of data visualisation at the moment is informed by the networked optimism of web 2.0, where the social sphere, and increasingly the world as a whole, is unproblematically digitised; where more is more and truth, beauty, and commercial success are all immanent in the teeming datacloud.

House of Cards, by contrast, is a manifestation of data melancholy. Data here is low res, with a sketchy looseness of detail that evokes the gaps, the un-sampled points. This data is also abject or corrupt, the scanner intentionally jammed with reflective material, a bit like the metallic chaff used to confuse missile guidance systems. These glitches are familiar devices in electronic music and video, including Kid A-era Radiohead. However here the errors are very much in the data; they have migrated out of the music, which is human, organic and more or less intact here. This disjunction between failed data and the emotional, human domain is what characterises the data melancholy; it's illustrated beautifully at the end of House of Cards, with the "party scene" (one of Thom Yorke's ideas for the clip), a social scene decimated into abstract clouds of points. This theme also resonates across In Rainbows, especially in the closing track, Videotape: "this is one for the good days / and I have it all here, in red blue green." Here image data is again a sort of failed trace of an emotional reality, all that remains of "the most perfect day I've ever seen."


Yorke's other motif for House of Cards was "vaporisation," which is clear enough in the clip; I think it's most effective in the final shots of the house; the earlier clips of Yorke disintegrating seem a bit languorous, with that undulating look of Perlin noise (is it, anyone?). The house shot in particular reminded me of Brandon Morse's Preparing for the Inevitable; Morse's work in general has a related feel about it, though the models seem to be synthesised rather than sampled. Again the poetics is one of cool, digital melancholy, where tragedy is stripped down to a set of vectors and forces (above: Collapse, from Flickr). Here though, rather than a failure of data (sampled representation) it's a failure of the procedural model, or perhaps failure with, or in, the model.


Thursday, July 03, 2008

Image, Data and Environment: Notes on Watching the Sky

Watching the Sky is a data visualisation project I've been working on for the past six months or so. The work is almost ridiculously simple: slit-scan type visualisations of large image time-series, shot from the window of my Canberra office. All the images from this process are up on Flickr. Recently UK journal Photographies invited me to write an "image led" piece on the work for their forthcoming second issue. Here's the essay, which looks at how we interpret, and literally image, pattern and change in the environment, and the role of data in that process. The themes (data, materiality, aesthetics) and some of the examples will be familiar to regular visitors. New things include spatiotemporal imaging (and even photography) as data visualisation, weather vs climate, black cockatoos, a quick look at art using environmental data-sources, and an equally quick dig at Tufte's Wavefields. It's also the most autobiographical bit of writing I've done in years - make of that what you will.

A few related projects that I discovered in the course of things: Miska Knapek's 24 hour visualisations, Michael Surtees' 36 Days of New York Sky, William Gaver's Video Window (pdf) - thanks Karl for the link - and yesyesnono's Travelling Around images - beautiful radial time-slices at a smaller time scale.

05.07_540


My childhood home was near an air force base on the outskirts of Sydney, where the sky was host to a wonderful array of aircraft. Mostly big, droning transports; Caribou and Hercules, each with their signature profiles and engine notes. Jets and helicopters were rarer and more prized: Mackie trainers, F-111s, Iroquois, Sikorskys and Chinooks. Once, miraculously, a visiting Starlifter transport, an immense silver thing apparently suspended over the hobby farms and horse paddocks. Unasked-for, revelatory, literally out of the blue, the planes were also metonymic signs of a wider world, and an idealised high-tech future I could barely wait for. Living signs flew over us too; we loved to think that black cockatoos were harbingers of rain, and would count them to predict the number of wet days ahead (image: Beppie K). I discovered the UFO lore of the early 80s; in dreams I was visited by terrifying lights, and saw archaic aircraft disintegrating above the eucalypt gully behind our house.


I came to Canberra from Sydney in early 2001, and the sky changed, opening out into a brilliant dome bounded by hills. Soon after it changed again as the nostalgic motif of the gliding passenger jet was overlain with catastrophe. This was echoed by strange weather, a long drought. Safe in suburbia, I installed a water tank and began watching the sky more hopefully, tracking rain bands and storm cells on the weather bureau's website: running out to clear the downpipes, then back to the laptop, downloading the latest. Sky data, almost real-time, a new and better harbinger, and with more at stake this time - water in the tank, a four thousand litre buffer against the next dry stretch. Never far away, the question of when weather becomes climate; is this a "blip" or a trend, short term variability or long term change? Temperature and rainfall statistics become common currency, and every month brings new data, but the more we know the less certain we become; in fact the only consensus seems to suggest more uncertainty. Ocean temperature measurements feed supercomputer models whose simulations are distilled into enticing, oracular suggestions, indications, projections. We occupy an increasingly detailed graph of accumulated data, but remain trapped inevitably in the present, at its right hand edge.

Watching the clouds approaching and cross-checking the weather radar, it's impossible not to sense the gaps and disjunctions between the data - an authorised, centralised and objective account of what is - and the situation "on the ground." This patch of rain that should be on us now, and somehow is not. It seems to have eluded the radar's view, slipped between the pixels or time-steps, or vanished in the lag, the aporia of almost-real-time which is the time data itself takes: to gather, check, validate, compile, visualise, distribute. The weather stubbornly continues to occur in the present, and at full resolution. The rainfall figures always come (as any weather watcher will know) from elsewhere, a single, notionally representative monitoring point. We're always cheated, as a result; overstated or undermeasured. Rain carries such social charge, where I live, that locals call the radio station, reporting from their backyard rain gauges in pyjamas and gumboots. This is the only way of closing that gap, to measure the world locally and create data instead of just siphoning it down from the web. I make my own measurements, tapping on the side of the tank slowly, bottom to top, listening for the hollow ring of the air cavity, homing in on the water level: data sonification.


In contemporary networked culture we are constantly reminded of the scale, ubiquity and significance of data. Every search, message, document, image, social exchange is a data transaction. We seem to be couched in data; it is our new environment. We accept this much-heralded "information overload" with more or less equanimity, as our inboxes and hard drives steadily fill. It's not surprising that in recent years artists and designers working in this domain have begun to grapple with data as a material. As I've argued elsewhere this inevitably involves the construction of an idea of what data is, what it's for, and what it contains. This practice also confronts the pragmatic question of what to do with data, what to make from it and (if we accept the value of the term) a data aesthetics.

One of the dominant creative strategies in this field, and its main aesthetic trope, is multiplicity: displays in which the points and lines of simple graphs burgeon into clouds, fields or flows. The datasets, and their visual figures, reflect our overloaded data-environment. This aesthetics of scale has been theorised through the notion of the sublime, a figure historically associated with nature's beautiful and/or terrible expanses; once again data takes the place of environment (see for example Manovich 2002 (doc) and Jevbratt 2004 (13Mb pdf)).

The data sublime is aesthetically expedient, as well as culturally resonant. Sheer scale generates visual richness as well as revealing patterns within datasets; yet the data points we see here are meagre and unmysterious in themselves. Each is a small cluster of symbols and parameters generated through a (social, cultural) process of selection, filtering, quantification and categorisation, in order to grasp some specific slice of the world in a certain way. When data swarms and flows with apparently inherent dynamics, it's easy to forget how data is created, or even that it is created. This is especially true when the data source is the network itself; self-referentiality gives an impression of self-sufficiency, again a world in which data is given, rather than made.


Countering this tendency a number of works draw in data from the physical environment "outside", and direct our attention back towards a space that is more familiar and more uncomfortable than the digital realm. For example Andrea Polli's work brings data from large spatial and temporal scales into the realm of experience, often in close collaboration with scientists; her Atmospherics project (2004) renders meteorological data gathered from a severe storm as a complex spatial soundscape. Heat and the Heartbeat of the City (2004) sonifies temperature data for New York City, beginning with data gathered during the 1990s, and presenting projections for future decades based on climate change modeling. More recently Bonding Energy, by Douglas Repetto and LoVid (2007, above), gathers data from custom-made sculptural devices measuring solar energy levels, and displays changing levels from multiple measuring sites in an animated visualisation. These works use data reflectively, and show a commitment to the "outside" that is their ultimate data source. However they are also limited by the structure of their material, which measures the world through a single value — temperature or solar radiation level. This single point, as telling as it is, seems somehow overdetermined: too much what it is, too tightly bound to an existing set of meanings and stories.

Photographic imaging, by comparison, gathers large amounts of complex data from the environment: many millions of numerical values with a rich set of spatial interrelations. The notion that the camera reveals the otherwise invisible, as in the work of Muybridge for example, mirrors the aims of data visualisation; yet this also reveals an important difference in these two practices. The reduced data of measurements such as temperature go to great lengths to exclude the extraneous. On the other hand photography, if we regard it as a form of data visualisation, often seems to welcome the extraneous, to embrace incursions, unexpected interactions or extra layers. This is not to claim the photographic image has some kind of special relation to reality, or that it isn't just as selective, intentional, and conditional as a temperature measurement; it's more a slight opening out of the field of view. The photograph can operate something like a geological core sample, selective but inclusive, a piece of whatever happens to be within the frame.

In the emergent field of (what I will call) space-time imaging, artists exploit the digital photographic image to reconfigure representations of the world. This work has a pre-digital ancestry in slit-scan photography and cinematic effects, but with the digital image it has expanded and proliferated (see Levin). Artists have begun to approach the image as a two-dimensional data field; they treat time by extension as a third conceptual axis, forming a three dimensional volume. This abstract structure is literalised in projects such as Alvaro Cassinelli's Khronos Projector (2005), where we can "push" parts of the image back in time. While the experience of the work hinges on the fleshing out of a spatial metaphor, its operation can be understood as interactive data visualisation: a technique for selecting and presenting data points from the image series. Other work, such as that of Australian artist Daniel Crooks, can also be understood laterally, I would argue, as data visualisation or re-visualisation. Crooks works with digital video source material and explores the de- and re-composition of the image in ways that deform space and time, but also, like other data practices, reveal their subjects anew. In "time slice" work such as Train 6 (2004) Crooks samples small segments of the time/image stack, revealing their raw edges, rather than trying to smoothly reconstitute the image. The discourse around this work tends to emphasise its (broadly familiar) agenda: reconfiguring perception, breaking down conventions of representation, and so on (see for example Doropoulos). Like Dziga Vertov before him, Crooks chooses deliberately everyday subject matter (public transport, urban spaces), drawing attention back to this reflexive project. Yet at its most poignant, Crooks' work also reveals real patterns of movement and change in the world that it samples. It re-visualises reality, and in doing so it demonstrates the richness of the photographic time-series as data set.

Watching the Sky is a deliberately simple-minded experiment. It uses the most basic techniques of slit-scan photography and related digital space-time work. Using a static digital camera tethered to a computer, I take images at three-minute intervals; four hundred and eighty per day. The camera is in my office, pointing out the window with an unremarkable view of the neighbouring building, some trees, power lines, and the sky over west Belconnen. A simple script extracts a narrow vertical slice from each image, at the same location in the frame; then compiles those slices into a new image. In the rectangular visualisations the slices are tiled from left to right. In the radial visualisations the slices are gradually rotated so that a twenty-four-hour period spans one complete revolution (the "seam" is at midnight).
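The slice-and-tile step is almost trivial in code. Here's a minimal Python sketch of the idea (the original script isn't reproduced here; the function names and the list-of-lists image model are my own):

```python
# Minimal model of the slice-and-tile process described above.
# A "frame" is a 2D list of pixel values (rows x columns); the real
# script works on camera images, but the geometry is the same.

def extract_slice(frame, x):
    """Take the narrow vertical slice at column x: one pixel per row."""
    return [row[x] for row in frame]

def compile_slices(frames, x):
    """Tile one slice per frame, left to right, into a new image.
    Column t of the result holds the slice taken from frame t."""
    slices = [extract_slice(f, x) for f in frames]
    height = len(slices[0])
    return [[slices[t][y] for t in range(len(slices))]
            for y in range(height)]
```

At three-minute intervals a day yields 480 slices, so a day-long rectangular visualisation is 480 pixels wide; in the radial variant, slice t would instead be drawn rotated by t/480 of a full turn, putting the midnight "seam" where the revolution closes.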

Of course any number of other visualisation processes are possible. The digital space-time field illustrates many of the options, though this work often plays with the reconstitution of a transformed image, which was not my interest here. Slices are used as a simple way to compress days' worth of data into a single visual field, while preserving as much as possible the spatial relations within each frame. They also make for visualisations with a simple logic, readable as high density graphs.

In a strange inversion of this project, Edward Tufte, prominent theorist of information visualisation and design, recently called for a new generation of information graphics - "wavefields" - that match the data rate of high-definition video, showing "high-resolution, complex, multiple, animated statistical data-flows." Yet the video exemplars that Tufte uses to make this proposal are not "statistical data flows" but abstract shots from the physical environment: rippling reflections on water and undulating meadows. It's striking that Tufte turns to these environmental sources of visual pattern to mock up a more "intense" genre of abstract, statistical visualisation. Among other things, Watching the Sky attempts to demonstrate that this kind of informational density (and aesthetic intensity) is already immanent (it's just out the window).

I'm influenced here by the work of Lisa Jevbratt, an artist whose data visualisations have focused on digital networks, but whose approach works against any simple notion of information. Here too density is increased to the point of saturation: with a large and multilayered dataset, Jevbratt's 1:1 (1999/2002) visualises the attributes of some 180,000 internet (IP) addresses sampled by the artist. The resulting images are startling and completely abstract, but not at all unstructured. Jevbratt describes the visualisations as "abstract reals", and "objects for interpretation, not interpretations." Instead of demonstrating the already known, or the answer to a preconceived question (information), Jevbratt's data works provoke, and perhaps answer, new questions; in the artist's words "hints, suggestions, and openings."


Although the data source in Watching the Sky is as tangible and unmysterious as possible, surprising hints and suggestions continue to appear. In one of the earliest sketches I found small but distinct variations in the "horizon" over the course of a day, and recurring on successive days. I eventually realised this was caused by the afternoon breeze, shifting foliage by a few pixels within the frame. The dataset here is a trace of a complex material field that in a sense visualises its own internal structure: the passage of a shadow across the ground appears as a recurring pattern, an enfolded or multiplexed representation of another set of material interactions. As a data source, the photographic image also cuts easily across categories and domains. In the rectangular visualisations presented here, stripes of colour are visible towards the bottom of the frame. These are caused by cars, parked illegally under the trees; they form another ad-hoc graph that reflects human (cultural, institutional) calendars and cycles, though again they are intermingled with other scales and structures.

Time, and the perception of change, are central here. Like Jevbratt my hope is that these visualisations will be platforms for interpretation that can somehow augment our local, subjective, everyday practice of reading the environment. There's a yawning gap in our culture at the moment, between this experiential scale, and the long, slow-motion catastrophe we seem to be in. Weather watchers comment on the isobars, track the low-pressure systems as they pass, speculate on ocean surface temperatures and the Southern Oscillation; like the black cockatoos, each data point is an ambiguous sign that refers to a wider material system. This project is a straightforward response that proposes another way to image, and think, pattern and change in the environment.

This is a preprint of an article submitted for consideration in Photographies © 2008 Taylor and Francis; Photographies is available online here.


Monday, October 29, 2007

More is More: Multiplicity and Generative Art

Douglas Edric Stanley wrote a nice post recently on complexity and gestalts in code and generative graphics. In it he wonders about "all those lovely spindly lines we see populating so many Processing sketches, and how they relate with code structures." I've been wondering about the same thing for a while, and Stanley's post has prodded me to chase up a few of these ideas.

Stanley makes some astute observations about the aesthetic economics of generative art; the fact that it costs almost exactly the same, for the programmer, to draw one, a hundred or a million lines. Stanley pursues the machinic-perceptual implications - how simple code structures contribute to the formation of gestalts; but he only hints at what seems like a more interesting question, of how these generative aesthetics relate to their cultural environment: "all of these questions of abstraction and gestalt are in fact questions about our relationship to complexity and the role algorithmic machines (will inevitably) play in negotiating our increasing complexity malaise."

I actually don't think complexity is the right concept here. For me complexity refers to causal relations that are networked, looped and intermeshed (as in "complex systems"). These "lovely spindly lines", and Stanley's gestalt-clouds, show us multiplicity but not (necessarily) complexity. Simple, linear processes are just as good at creating multiplicity. There's certainly a relationship here - complex systems often produce multiplicitous forms and structures; and causal complexities embedded in "real" datasets seem to be a reliable source of rich multiplicities - but complexity and multiplicity aren't the same thing. For the moment I want to focus on the aesthetics of multiplicity.


Multiplicity is the uber-motif of current digital generative art - especially the scene around Processing. Look through the Flickr Processing pool and try to find an image that isn't some kind of swarm, cloud, cluster, bunch, array or aggregate (this one is by illogico). The fact that it's easy to do is a partial and not-very-interesting explanation; to go one step further, it's easy and it feels good. Multiplicity offers a certain kind of aesthetic pleasure. There's probably a neuro-aesthetics of multiplicity, if you're into that, which would show how and where it feels good. Ramachandran and Hirstein have suggested that perceptual "binding" - our tendency to join perceptual elements into coherent wholes - is wired into our limbic system, because it's an ecologically useful thing to do. Finding coherence in complex perceptual fields just feels good. The perceptual fields in generative art are almost always playing at the edges of coherence, buzzing between swarm and gestalt - just the "sweet spot" that Ramachandran and Hirstein propose for art in general.

I don't find this explanation very satisfying either, because it doesn't seem to tell us anything much about the processes involved - it's a "just because," and a fairly deterministic one. Another way in is to think formally about the varieties of multiplicity in generative art. I rediscovered Jared Tarbell's wonderful Invader Fractal (below) in the Reas/Fry Processing book recently. It shows a kind of multiplicity that's the same but different to the "spindly lines" aesthetic. Each invader is the product of a simple algorithm; the whole mass is a visualisation of a space of potential - a sample (but not an exhaustive display) of the space of all-possible-25-pixel-invaders. Multiplicity here is a way to get a perceptual grasp on something quite abstract - that space of possibility. We get a visual "feel" for that space, but also a sense of its vastness, a sense of what lies beyond the visualisation. John F. Simon's Every Icon points in the same direction; towards the vastness of even a highly constrained space of possibility (32x32 1-bit pixels).
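The size of that space is easy to pin down if we assume, as Tarbell's invaders appear to have, a 5x5 grid with left-right mirror symmetry: three free columns of five rows give 15 bits, i.e. 32,768 possible invaders. A rough Python sketch of sampling the space (the symmetry assumption and these names are mine, not Tarbell's code):

```python
# Each 15-bit integer decodes to one invader; the whole space is 2**15.

def invader(n):
    """Decode the 15-bit integer n into a 5x5 grid of 0/1 cells."""
    bits = [(n >> i) & 1 for i in range(15)]      # 3 free columns x 5 rows
    grid = []
    for y in range(5):
        left = bits[y * 3 : y * 3 + 3]            # the free half-row
        grid.append(left + [left[1], left[0]])    # mirror to width 5
    return grid

SPACE = 2 ** 15  # every possible invader, 0 .. 32767

def sample(k, seed=1):
    """Draw k invaders - a sample, not an exhaustive display."""
    import random
    rng = random.Random(seed)
    return [invader(rng.randrange(SPACE)) for _ in range(k)]
```

Laying a few hundred of these out in a grid gives exactly the "visual feel" for the space described above, while the integer encoding keeps its vastness in view.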


Perhaps current aesthetics of multiplicity are actually doing something similar. The technical differences are fairly minor; basically a switch in spatial organisation from array to overlay; a compression of instances into a single picture plane. The shortest (and my personal favourite) path to multiplicity in Processing is aggregation: turn off background() and let the sketch redraw. Reduce the opacity of the drawing for an accumulating visualisation of the space of possibility that your sketch is traversing. Multiplicity here isn't an effect or aesthetic for its own sake; it's intrinsically linked to one of the defining qualities of generative systems - their creation of large but distinctive spaces of potential. Multiplicity is again a way to literally sense that space; but also, since it almost never exhausts or saturates that space, it points to an open, ongoing multiplicity; it actualises a subset of a virtual multiplicity, and shows us (as in Every Icon) how traversing that space is only a question of specifics and contingencies. Multiplicity says "and so on"; an actual gesture towards the virtual.
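That aggregation recipe can be modelled numerically: with no background() call, each redraw composites its low-opacity colour over the accumulated canvas with the standard "over" rule. The toy one-pixel model below is my own simplification, not the Processing renderer itself:

```python
# One pixel's worth of the aggregation recipe: repeated low-opacity
# redraws, each blended over the accumulated canvas value.

def composite(acc, colour, alpha):
    """One redraw: blend colour over the accumulated value ("over" rule)."""
    return acc * (1 - alpha) + colour * alpha

def accumulate(colours, alpha, start=0.0):
    """Replay a sequence of redraws; the canvas becomes a weighted trace
    of everywhere the sketch has been, recent frames weighted most."""
    acc = start
    for c in colours:
        acc = composite(acc, c, alpha)
    return acc
```

With a small alpha each redraw leaves only a faint mark, so the canvas slowly becomes a layered record of everywhere the sketch has been - the accumulating visualisation of its space of possibility.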

Multiplicity refers to the specific space of potential in any single system, by actualising a subset of points within it; but it also metonymically refers to an even wider space of potential, which is the one that all computational generative art - and in fact all digital culture - traverses. Because of course any system can be tweaked and changed, no chunk of code is immutable or absolute, the machines of the Processing pool are ever-changing things that collectively sample the space of all possible (generative) computation. Just as it refers directly to the space of potential of its own (local) system, generative multiplicity alludes to the unthinkable space-of-spaces that contains that system - a space the system gradually traverses with every change in its code.

This, for me, explains the aesthetic and cultural charge that multiplicity carries. It's a gesture towards an abstract, unthinkable figure; an aesthetics of the virtual, in the Bergson / Deleuze sense of the word. What's more this particular form of virtuality, or possibility - the one accessible through code and computation - is at the core of digital culture and our contemporary situation. Generative multiplicity is, quite literally, a visualisation of that figure.


Friday, September 21, 2007

Langheinrich & Khut - Embodied Media at BEAP

One of the strong points of PerthDAC was its overlap with BEAP, Perth's premier media arts festival; even better, the conference built in gallery visits to several of the BEAP shows. I'll blog the conference soon - meantime see for example Axel Bruns' comprehensive blogumentation. For now here are some thoughts on two of my favourite works from BEAP, both of which use abstract digital forms to create profoundly embodied experiences.


In Ulf Langheinrich's Waveform B, video projection and strobe lights play over a long, pool-like screen on the floor of the installation space. As you enter the darkened central space of the building, the screen flashes and vibrates under ultraviolet strobes, seeming initially to come loose from the floor, hover and drift. The strobe banks mark out audiovisual intervals of time, but always accelerating or slowing, coming together, intensifying or dissipating: temporal waves meet, reinforce and neutralise each other. When these waves are most intense the work's visual field becomes overwhelming; bursts of ultraviolet seem to outpace vision, inducing refractions, afterimages, phenomenal artefacts that drive perception inwards. In calmer moments video-projected noise textures blend with the strobes, and again occupy a perceptual threshold where time and space interfold; the noise seems to eddy and flow; differentiations in space rise out of this horizontal field and quickly sink back into it. The ripples are derived from video of Ghanaian ocean waves - there's a trace or imprint of fluid dynamics here; the overlay of oceanic ripples and video static recalls Michel Serres' Genesis, where he figures noise itself as a kind of material and informational sea.

Strangely, Hannah Mathews' catalog statement describes the work as "a temple to technology, enabling audiences to meditate upon the inherent stillness of a contemplative digital void." Slightly better than another PICA account - "a multi-level, immersive audiovisual experience of the colour blue." Happily neither description does the work any kind of justice. Waveform B evokes phenomenality; material, sensual experience; though unlike some other works with this aim, Langheinrich eschews (conventional) pleasure in favour of overload, disorientation and the edges of perceptual experience. Augmented with strobes the ubiquitous video projector is stripped back to its technical core, a kind of hyper-articulated source of visual energy, rather than a cinematic window on the wall. A 2005 interview fleshes out some of Langheinrich's background; I was struck especially by his mention of music as an aesthetic model. On that thread, the soundtrack at the PICA installation created an effective atmosphere, but lacked impact - maybe the sub had been turned down?


In George Khut's Cardiomorphologies v.2 participants are gently rigged with breathing and pulse sensors that drive an abstract visualisation. Overlaid concentric rings and discs grow and shrink in patterns that suggest both modernist geometric abstraction and mystical diagrams or mandalas. As you use the system, the visualisation takes on another inflection, as a kind of avatar, a (data) projection of the self imagined through the language of meditative practice as a point of energy. Biofeedback - at the core of Khut's project - occurs as bodily process drives image, which in turn inflects mind and body. I enjoyed that state, but it's not a guaranteed ticket to nirvana; I saw others getting quite uncomfortable as their heightened awareness of breath led into anxiety.

Khut's approach is an interesting combination of techno-pragmatism and an ethical commitment to, and knowledge of, bodily subjectivity. Engaging visitors to the work, he's very open about the mechanics of sensors, data gathering, analysis and interpretation; if you're interested he can explain in detail the theoretical correlations between spectral analysis of heart-rate fluctuation frequency and the parasympathetic/sympathetic nervous system balance. Khut makes it clear this isn't some mystical strain of data-mapping "magic," but a concrete, physio-psychological process. In fact the conversations around the work are part of the process, drawing out participants' experiences and sensations and informing the ongoing development of the system. Khut's work shows how data practice can engage intelligently with, and reflect on, the extraction or creation of datasets as well as their aesthetic and affective manifestations.


Monday, September 10, 2007

Against Information - a Data Art Critique

Next week I'm off to Perth for DAC, where I'll be presenting a paper focusing on data art. It looks at a good handful of works from the last few years, including The Dumpster by Golan Levin with Kamal Nigam and Jonathan Feinberg, We Feel Fine by Jonathan Harris and Sepandar Kamvar, Alex Dragulescu's spam visualisations, Lisa Jevbratt's 1:1 and Infome Imager Lite, Brad Borevitz's State of the Union and some of Jason Salavon's abstraction and amalgamation works.

The paper develops the questions that I posted here a while ago, focusing on how artists construct a notion of data while they use it as a creative material. It especially considers the distinction between data and information, arguing that data art often works to defer, abstract or undermine information - in the sense of a formed or contextualised message - and instead offers us a more open or underdetermined experience of the data as abstract pattern and relation. The problem here is that we can't have unmediated access to the abstract data - it's always mapped to something, structured in ways extraneous to the dataset. And data itself is always extracted, made or constructed, not some kind of autonomous digital object.

The case studies are clumped around four data-figures: indexical data - data as a sign of something real - as in The Dumpster and We Feel Fine; abject data - data as empty and malleable, as in Dragulescu's work; Lisa Jevbratt's data material or Infome; and data as anti-content or "artist's squint" in Salavon's work and Borevitz's State of the Union.

Anyhow, here's the full paper (3.3Mb pdf). Feedback very welcome, of course.

(update: the pdf file was corrupt, sorry - fixed now)


Thursday, June 28, 2007

Dataesthetics - Close to Home

The data from the 2006 Australian census has just been released. In the last day or two the media have run the usual kind of headline stories - in which specific bits of data or comparisons are extracted, spun and narrativised; nationally, there's been some focus on increasing debt (and income); locally Canberrans have been portrayed as richer, more wired and more generous with their time than everyone else. This process of top-down public storytelling dominates our understanding of this kind of data - but perhaps that will change, because now the whole dataset is available online, for free. It's buried a few steps in, and yes it's in a proprietary (Excel) format, but it's all there for the munging.

I started browsing some data from my suburb, and focused on numbers of kids per mother per age group. It's coarse-grained data but evocative - birth rates suggest a lot about a society. Comparisons suburb by suburb also hint at distinct demographic patterns. I put together a quick visualisation, a stacked area graph (inspired in part by Lee Byron's beautiful last.fm vis). Another reference was the Japanese tradition of Koinobori, the carp pennants that celebrate Boys' (now Children's) Day. So, here are some statistical pennants - suburban emblems that encode demographic data. Maybe we could fly them at the shops, or individuals could annotate them by marking their own place in the local profile. It's fun to play amateur demographer (read on) but the point here is really proof of concept; if I can do this, so can lots and lots of others, and that's interesting in itself.


Each form shows the number of children per woman; the wide end is zero, the narrow end is six or more. So in all the pennants the initial dip shows the difference between the number of women without children, and women with one child; then more women with two kids, fewer with three and so on. The thicker tail visible in the second pennant shows a larger number of women with lots of kids. The bands in each pennant show age groups, with youngest at the top. Most young women have no kids - not a great surprise - but the forms also show older women with larger families, and the relative distribution of children by mother's age group, and how this varies with suburb. The bottom-most pennant comes from an old, wealthy suburb: lots of older women with two and three kids. Pennant two is from a semi-rural town, with a more even distribution of children through the age bands; pennant three is from a new suburb, with wide bands of small, relatively young families. Colours are arbitrary, for the moment.
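Structurally the pennants are just stacked running sums: at each number-of-children position, each age band sits on the cumulative total of the bands drawn before it. A sketch with invented counts (the real numbers came from the ABS census tables; the function name is mine):

```python
# Stacked-band layout for one pennant. Each band is a sequence of counts
# indexed by number of children (0 .. 6+); bands run youngest-first.

def stack(bands):
    """Return (baseline, top) curves for each band at each x position;
    drawing each band between its two curves gives the stacked graph."""
    width = len(bands[0])
    layers = []
    base = [0] * width
    for counts in bands:
        top = [b + c for b, c in zip(base, counts)]
        layers.append((base, top))
        base = top
    return layers
```

Mirroring each (baseline, top) pair around a central horizontal axis gives the symmetrical pennant silhouette, with the wide end at zero children and the narrow end at six or more.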

For more demographic data art see also Jason Salavon's American Varietal project, commissioned by the US Census Bureau.


Monday, March 12, 2007

Lisa Jevbratt - Infome Imaging

Lisa Jevbratt has been doing data art for some time now. Her 1:1 (1999) was one of the first data-vis works to gain critical attention in new media art circles. I re-read some of Jevbratt's writing recently, and the artist pointed me to this 2005 paper, which in part sets out the concept of the Infome. Jevbratt seems to be changing direction - towards bio/eco practices - but her work remains significant, especially while data is the new code/black/whatever, for the Processing generation. The Infome idea is particularly interesting, because it creates a distinctive sense of just what data is.

Jevbratt's Infome is a kind of data cosmology - the Infome is an "all-encompassing network environment/organism that consists of all computers and code." Once you get past the biological analogy, the Infome offers a way to treat data as a kind of material that is concrete and self-sufficient, but also shaped by the (social, political, technological) forces outside it. Data is indexical, but not in the empirical sense of measurement or simple correspondence. Instead Jevbratt uses another material (geological) analogy; the Infome is a kind of landscape in which external forces and structures are overlaid and condensed. Another nice twist is that visualisation becomes recursive: "Images can now simultaneously be reality, since they are part of the Infome, and an imprint of that reality, as if the image produced by a potato stamp were also a potato."

Jevbratt's images of the Infome in 1:1 and Infome Imager Lite aspire to this kind of material directness, making a "slice" or "imprint" of the data. She describes the images as "real, objects for interpretation, not interpretations." This desire to present the data "in itself" closely resembles the "pure data" aesthetics of the audiovisual databenders I mentioned in "Hearing Pure Data" (2004). We can make the same critiques of Jevbratt's work - that we can't see the data in itself, only its specific mapping. Jevbratt does take great care to explain the mappings used, and best of all in IIL she encourages the user to experiment with changing mappings and datasets - an artistic precursor of the public data literacy now mentioned in relation to social data-vis services Swivel and ManyEyes. The critique still stands though; it's clearest in the way Jevbratt wraps all these visualisations around the rectangular picture plane - a structure that has no inherent relation to the data, but a significant relation to the art-world context that these works function in.

For Jevbratt these data-impressions allow us to "use our vision to think" - information and pattern arise from a perceptual process, rather than a computational analysis. Like much other data art, Jevbratt resists providing information, in the sense of meaning or message; instead she offers a substrate for information, a field of potential meaning. She writes of seeking "something unexpected," "hints, suggestions, and openings" that lead us into the Infome itself, its immanent, collective dynamics, even its emergent, distributed agency. It's a kind of data mysticism, but also an attempt to sense the real but otherwise imperceptible shapes of digital culture.


Monday, November 13, 2006

The Transcendental Data Pour - Alan Liu

Recently read this paper, by writer and online cult studs pioneer Alan Liu, which raises lots of ideas and yet more questions around data aesthetics and practices. It's a few years old, but with a few exceptions remains quite relevant. What's more it's a great read, one of the wittiest and most enjoyable things I've seen in a long while.

Approaching from writing and textuality, Liu tackles the XMLification of everything, the Taylorist dogma of the separation of content from presentation (hello, Blogger) and the subsequent waning of "cool" web design: "non-standard, proprietary, hand-coded, and other clearly infidel (or ... artisanal) practices of embodying content inextricably in presentation." Instead, Liu sees the web becoming a modular, minimalist set of containers for what he calls "data pours" that "throw transcendental information onto the page from database or XML sources reposed far in the background." Literature-wise, Liu regards these data-driven incursions as "blind spots" for both readers and writers, where authorship surrenders to parameterisation or a database query.

Liu gathers a set of artworks around the concept of the data sublime (later tackled by Manovich and Warren Sack, among others) and the question of "what can still be cool" in the post-industrial, database age. Through Kittler, he recalls the Modernist literary interest in the noise in the channel - "like tuning your radio to a Pynchonesque channel of revelation indistinguishable from utter static" - and points out that contemporary data aesthetics seem interested in the same immanent revelation, but here the sources of that data-plenitude are highly rational and structured. Data practices s/mash them up, as if in an attempt to feel their inner consistency; Liu identifies this drive with the "ethos of the unknown" - a search for "an experience of the structurally unknowable."

Liu's ideas map onto current data art well enough, though rather than data pours threatening authorship, a new group of authors deploys parameterisation, mapping, munging and filtering as its main techniques. The "mother tongue" now, the source of plenitude, seems also to be increasingly social, rather than natural or linguistic (eg Linkology, The Dumpster, Listening Post). And pulling against the sublime and the ethos of the unknown is its empirical opposite, the seeking of pattern and information. The surface between these seems to be the place to be; as in the beautiful Neuromancer quote in Liu's paper, it's a liminal state where forms emerge and disintegrate. Familiar territory for the arts, as Liu points out.


Thursday, September 21, 2006

Data, Code & Performance

A few more thoughts and questions following on from the previous post, and responses to it. If data art isn't necessarily concerned with the (apparent) meaning of its datasets, or their empirical basis, then what is it concerned with? Perhaps one answer has something to do with performance. Whatever else it does, this work performs a process that is meaningful in itself. Whatever else it says, it also says, "watch what I do with this data." It displays a data literacy, an ability to acquire, munge, filter, process, map and render. Since it's primarily operating as art, rather than functional visualisation / sonification, it also demonstrates a process of translating or mediating between those domains. This isn't a criticism (necessarily), just trying to think through a few basics, and taking on those points from toxi and infosthetics re. the tension between art and visualisation here. If data art is partly self-referential performance, then what kind of cultural values exist / are constructed around that? Manovich refers to "data-subjectivity" - are data artists exploring / performing this "super-modern" state of being?


I'm sure there's a connection here somewhere with literal acts of data-performance. I saw some live coding performances at the Medi(t)ations conference in Adelaide (blogged earlier). Brisbane duo aa-cell (Andrew Sorensen and Andrew Brown) played a great set - two laptops, both running Sorensen's own Impromptu environment, with screens projected to show the accumulating code. Here too there was a kind of mediation between computational and cultural domains - a performance of (largely obscure) code structures that generated a sonic structure dense with musical references. It was partly the pulse of a synth kick drum (hand coded, of course) but I came away thinking of Kraftwerk - laptop live coding as the new "man machine."

Live coding has a transparency that a lot of data art lacks - the code structure is gradually constructed, giving an (expert) observer some chance of following the formal, generative structure. Most data art conceals its mapping and munging, offering only an artefact and a promise that yes, this is "the data." Live coding's transparency is itself pretty opaque, though. At least one audience member at the Adelaide performance had no idea that the displayed text bore any relation to the sound. Live coding looks like great fun for the performers (like most improv), but what about the audience? Is data-subjectivity a prerequisite?


Tuesday, August 22, 2006

Data Art - Some Questions

I'm working up a paper on data aesthetics and creative practice, looking especially at visualisation (a kind of companion to "Hearing Pure Data," a paper written a while ago focusing on sonification / audification). At this stage all I have is a collection of questions and semi-formed hunches - so make of it what you will, etc.


  • Are we talking about data or information? Lev Manovich uses the term "info-aesthetics" and connects these practices to the notion of an "information society". What if we move back a step, and look at the relationship between data and information? Data is the raw material, the individual measurements or data points; information is the message or meaning constructed from those data. Both terms get used (more or less interchangeably) around artworks doing visualisation, but I think we should maintain the distinction. Is this work concerned with rendering information - a known, formed message? On the surface at least it seems to be more interested in visual interfaces to data, downplaying or leaving open the interpretation of that data - its transformation into information.

  • As I argued in "Hearing Pure Data," presenting the data "in itself" is an impossible ideal; it is inevitably shaped, interpreted, formed, framed, etc., in any manifestation; in which case how does visual data art negotiate its own construction of information from the datasets it works with? Does it pass off its own interpretation and framing as "raw data"?

  • What about the constitution of the data itself? Data art seems to take a pragmatic and concrete approach - "the data is the data" - but any meaning constructed from that data must be inflected by the way the data itself was formed or gathered. This is stating the obvious to anyone working in the empirical sciences... how do data artists respond? In the wake of the AOL reSearch dataset affair, the issue of constructing information from data comes into sharp focus. It will be interesting to see how artists use this dataset (which as Marius Watz recently observed, they no doubt will). The ethics of data art?

  • Data art treats its datasets as generative resources: sources of rich structure, pattern and complexity. It seems that often the appreciation of these formal qualities of the datasets (or their visualisations) exists in tension with the content or referentiality of the data. There's a continuum: TheyRule leans towards referentiality and meaning; Ben Fry's Valence is more concerned with pattern (it's an exploration of a visualisation technique after all); The Dumpster sits somewhere in the middle.

  • Toxi blogged a while ago on the issue of access to quality datasets for creative visualisation. As the comments on his post show, this raises a kind of cart/horse question. Tom Carden writes: "once you've got the info vis bug, you feel like a guy with a big shiny hammer, but nobody will give you a nail." This brings us back to the same question: is this work about data as an indexical link to the world, or data as a generative device? Or both?

  • On a related point, there's a clear crossover between generative and data-driven art; the artists are often one and the same; the same tools are used. How can we think about the relationship between these practices? They seem to be complementary approaches to similar goals (visual and aesthetic complexity, the joy of the unexpected, etc): one builds a generative system from scratch, the other latches onto the most complex existing generative system (the world) and visualises that.


Responses to all this very welcome of course... stay tuned for more chunks of undersupported and undigested theorisation.
