
Home Computing – a retro perspective

Whenever I think of the term ‘home computing’ I inevitably end up with a picture of this in my mind:

This is a ZX Spectrum, circa 1982. It's a computer for home use that boasted a massive 16KB of RAM and an amazingly cumbersome rubber keyboard, and in the days way before the internet, CDs, flash drives or even floppy disks, we loaded and saved our programs via audio cassette…and it sounded like this.


We became bedroom coders, meticulously copying lines and lines of code from computer hobbyist magazines…code that seldom worked. Although, saying that, there is the story of the mysterious Matthew Smith and Manic Miner – a game that changed how we think about gaming and a source of joy for anyone of my generation.

Looking back on this machine and the hype that surrounded it more than 30 years later doesn't just make me feel extremely old; it also resonates with some interesting concepts and developments that predate the 1980s boom of home computing in the UK and can still be seen in the hyper-connected internet world of 2014.

Let's take a whistle-stop tour of the history of computing and some of its pioneers and innovators so we can see how all this fits together. Fortunately for us, we can take this tour via the internet, as opposed to me writing about 15 million lines of code that doesn't run properly:

Let's start at the end of WW2. During the war, computers were being used and developed to help deal with complex calculations around code making and breaking, designing ballistic systems, RADAR and so on, but during the post-war period the focus began to shift to how these machines might be used for other purposes.

It's amazing to think that as far back as 1945 Vannevar Bush was proposing futuristic uses for the calculating machine, including the MEMEX (the computer as MEMory EXtender), and that J. C. R. Licklider would later envision an 'Intergalactic Computer Network' – along with ideas of how these machines might be used in education and even become an essential part of home life.

'A computer in every home', then, isn't such a new idea. As time rolled on there was Thomas J. Watson (Senior and Junior) at IBM stirring technical revolutions in the business world, the likes of Ted Nelson and Douglas Engelbart with ideas about the social potential of computer networks and the invention of the mouse, and then Bill Gates, Steve Jobs and the Homebrew Computer Club that started the home computer revolution in 1975 in California…

That's a very potted history of the computer, and it brings us to somewhere around 1982 and a small me on Christmas morning in the Welsh valleys, confronted with a ZX Spectrum and a few cassette tapes purporting to be games.

So how did this come to pass? California to Maesteg? High level futuristic technologies and academic dialogue to Hungry Horace and JetPac?


The main reason that home computing was even possible in the late 1970s and 80s was the decreasing cost of microprocessors (silicon chips), as developments allowed more 'power' to be put inside these processors in accordance with a 1965 observation by Gordon E. Moore that gave rise to 'Moore's Law' – the observation that the number of transistors that can be crammed onto a microchip doubles roughly every two years.
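As a back-of-the-envelope sketch of what that doubling implies (assuming a clean doubling every two years, which real chips only approximate, and using the 1971 Intel 4004's roughly 2,300 transistors as an illustrative baseline):

```python
# Back-of-the-envelope Moore's Law: transistor counts doubling every two years.
# The baseline figures here are illustrative, not a precise industry history.
def projected_transistors(base_count, base_year, target_year, doubling_period=2):
    """Project a transistor count forward, assuming one doubling per period."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

# Ten years at one doubling every two years multiplies the count by 2**5 = 32.
print(projected_transistors(2300, 1971, 1981))  # 2300 * 32 = 73600.0
```

The same compounding, run in reverse, is what drove chip prices down fast enough for hobbyists to afford them.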

This exponential growth of computing power coupled with decreasing manufacturing costs meant that hobbyists and electronics enthusiasts could now afford to start playing with these microchips and start building small computers for themselves.

This DIY approach, pioneered by the Homebrew Computer Club in California in the mid-1970s, became the catalyst for the global computer revolution that has shaped our 21st-century lives: from the devices we carry in our pockets, to how we access information, how we interact with one another, how global economics are shaped and everything else powered by those little bits of silicon…

In response to a 1979 ITV documentary series called 'The Mighty Micro', which foretold the potential of the emerging home computer revolution, the UK Government started to ask questions about what this would mean for the economy, education and the future of the country.


As a response, the BBC's 'Computer Literacy Project' was developed, and its own computer for home and education was released a few years later – the BBC Micro. This computer (built by Chris Curry's Acorn Computers) was the cornerstone of the BBC's new educational drive around computing, which in 1982 led to 'The Computer Programme' on BBC 2 as a means of showing the viewing public what these small, low-cost computers were capable of. The series was successful enough for two more to follow it: Making the Most of the Micro in 1983 and Micro Live from 1984 until 1987. You can watch (and probably giggle at) some clips of these series online if you fancy a bit of 80s BBC nostalgia…

This machine was set to become part of our memories of school through the 1980s until the arrival of the now ubiquitous Microsoft Windows computers – but I think most people of this generation will have fond memories of when, every now and again, your teacher would let you have a go on 'Chuckie Egg' – THE BBC Micro game:

If you have fond memories of this game (like me) then you can play it online. Go Chuckie Egg!!!

That's not the whole story though. Far from it. As well as the BBC Micro, which helped realise 1940s aspirations of computers as educational tools, there were numerous other machines around that helped bring the computer revolution into our living rooms, so we could all witness and participate in the nail-biting ten-minute wait to see whether our cassette-based games would load on our old TVs or crash at the last minute, making us jiggle the cables, check our tape player's volume level and generally mutter under our breath.

Ah, technology hasn't changed that much… it's just that these days we shout at a different, flatter screen with millions of HD colours instead of a black-and-white 8-bit display that bleeps at us.

Anyway, there were a lot of other computers to choose from, and one of the key issues was price. In the 1980s a BBC Micro would cost you a whopping £400, but as early as 1980 Science of Cambridge Ltd. (later better known as Sinclair Research) were selling their black-and-white ZX80 for under £100 (or, for the geek enthusiast, in kit form for as little as £79.95).


Sir Clive Sinclair's DIY computer kit became a market leader thanks to its affordability and the way its user community took to its 8-bit ability to write and play games. Fast forward to 1982 and Sinclair released the all-new ZX Spectrum (with the entry-level Spectrum 16 still coming in at less than £130). To date there have been some 24,000 software releases for the Spectrum – interestingly enough, that includes 100 new titles in 2012, plus a new Bluetooth version of the Spectrum announced in 2014.

An incredible legacy, and testament indeed to the influence of a machine that earned Sinclair a knighthood for services to British industry and that many view as the kickstarter for the whole IT industry in the UK.

Of course, if we're talking about Sinclair it's only fair to mention Acorn, and in particular its co-founder Chris Curry. The rivalry between the two men and the technologies they helped develop has become the stuff of legend, and would eventually lead to the business demise of perhaps one of the most enigmatic people in British industry of the last 100 years.

Acorn Computers was founded in 1979 by Curry and his long-term collaborator Hermann Hauser. One of its best-remembered computers was the much-maligned (and much-missed) Acorn Electron, which for many here in the UK represented their first taste of the home computing revolution.

The legendary rivalry between Acorn and Sinclair was not just about the growing competitiveness of the home computer market; it also lay in the production of the BBC Micro. Acorn won the contract from the BBC apparently due to Auntie's feeling that the Sinclair machines were more hobbyist devices than serious computers capable of realising the home and educational potential wrapped up in the home computer revolution.

This rivalry had a long-standing basis – Curry and Hauser had both worked for Sinclair during the 1960s and 70s, helping to develop the ill-fated Sinclair Black Watch. Curry eventually resigned from Sinclair (then Science of Cambridge) after Sir Clive refused to pursue Hauser and Curry's interest in the microcomputer kit – a development that led to the ZX series of computers.

The BBC Micro was, however, to be the final straw, and led to one of the most legendary confrontations between the hot-headed Sinclair and the ambitious Curry in a pub bar. Despite the BBC's rejection of Sinclair as a contender for its Micro contract, Acorn seems to have only just passed muster. The story goes that the BBC called Acorn on a Monday wanting to see a working prototype the following Friday, but the Acorn 'Proton' was not even switched on until a few hours before the BBC arrived in Cambridge, as the team struggled to build a functioning version.

This and many other historic developments that shaped the modern IT world, especially here in the UK, were dramatised in the great BBC drama 'Micro Men' in 2009 – a warm and sometimes comic depiction of the birth of home computing, and well worth a watch: https://www.youtube.com/watch?v=sIcAyFVK0gE.

The battle between the BBC Micro and the Spectrum went on through the 1980s, although it was the Spectrum that triumphed in economic (and perhaps social) terms as games developers began to capitalise on the popularity and low cost of the unit. In turn, bedroom developers began writing and releasing their own games, fostering a boom in entrepreneurship that captured the spirit of the time. The print industry also began to capitalise on the success of these machines, and teenage gamers became reviewers, writers and sharers of code.

By 1990, however, the UK's 'garden shed' tinkering and making sensibilities that had been a catalyst for the revolution were beginning to show signs of wear as the 16-bit era appeared on the horizon. Sinclair sold the Spectrum to Amstrad, but even with the release of new Spectrum models, the global growth of the US giants Microsoft, Apple and Commodore spelled the end of this peculiarly British computer revolution.


Sir Clive pedalled into the distance in his (final) ill-fated C5: a three-wheeled, battery-powered bicycle/car hybrid with a handlebar steering system under the seat.

At the same time the BBC Micro was retired, after total sales of some 1.5 million units (the Spectrum sold a massive 5 million).

So by the early 1990s, the playground arguments – Commodore 64 vs ZX Spectrum vs the BBC's groundbreaking 3D space exploration game 'Elite' – began to be replaced by the Apple vs Microsoft debate, and computers perhaps became machines that were 'used' rather than 'programmed', with the inexorable rise of the closed operating systems Mac OS and Windows.

At this point let's jump back into our internet time machine and fast forward through more pioneers, hopefully arriving in 2014:

So there was Tim Berners-Lee and Linus Torvalds with new thinking around networks, the internet and the software to run it; Larry Page and Sergey Brin started Google-ing; and Mark Zuckerberg brought us the blue world of social networking and the funny cat picture revolution.

That brings us bang up to date, but if we just take a minute to look around and see what's happening at the moment in terms of home computing, we can make out some of the legacy of the early pioneers and some reflections of the 'home-brew' attitude that helped kick-start the home computer revolution in the first place. Today there seems to be a growing fascination with, and uptake of, DIY approaches not dissimilar to the early days of Spectrum and Acorn, as people take on the hardware and software yet again: programming in bedrooms, tinkering with robots in sheds, automating their homes and building machines and systems from scratch…

…and somewhere among all this history and development is a 10-year-old me, waiting for Hungry Horace to load on a Christmas morning in Maesteg.

Do you have memories of your first computer? Maybe it was an early Windows machine? Were you one of those people who enjoyed the graphics of the Commodore 64? Perhaps you had a Dragon 32? A VIC-20? An Amstrad? Whatever your story, we'd love to hear from you.

Further Links and Reading:

The 10 greatest flops in computer history: They were way ahead of their time and could have advanced the power of mass home computing by years. But these revolutionary concepts became the biggest failures in digital history. http://www.telegraph.co.uk/technology/5132085/The-10-greatest-flops-in-computer-history.html

The history of the computing industry is a fascinating subject. In a short space of time, it has created the world’s wealthiest man, witnessed some of the worst business decisions on record and generated the largest first year profits for any company in modern history! http://www.computinghistory.org.uk/

The history of home computing: 1982 – 2012 http://www.pcadvisor.co.uk/features/desktop-pc/3358626/history-of-home-computing-1982-2012/?pn=1

An exploration of New Media and Interface Design (2012)

The Role of the Dice Roll: An exploration into how dice can inform new media design and interaction.

“The dice fell: a one and a two – three. He was to leave his wife and children forever.”
The Dice Man

“God does not play dice.”
Albert Einstein

“Not only does God play dice, but… he sometimes throws them where they cannot be seen.”
Stephen Hawking



The following pages explore my recent research into new media art, with a focus on the nature of the new media interface as a system that can re-contextualise, reflect and immerse users in order to facilitate the critical space associated with art as experience (Lee, 2009).

Over a number of months I have become fascinated with the differences and similarities of experience that can be achieved through both real world and simulated (new media based) interactions.

The key to developing this research lay with some dice which, as I thought about them, began to suggest a question that related fundamentally to my practice:

If a die is a hardware based random number generator, how much layering of experience in a digital simulation would be needed before users considered the outcomes of any interaction (any roll of the die) to be representative, in terms of experience, of a “real world” die?

I began to research the implications of this question with people whom I asked first to hold two dice in their hands, to feel the texture, the weight and so on, and then whether there would be any similarity between throwing the dice and pressing a button that generated and displayed two random numbers between one and six.
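The 'button press' half of that comparison is, in software terms, almost nothing at all, which is part of what makes the question interesting. A minimal Python sketch (purely illustrative) of the digital counterpart to throwing two dice:

```python
import random

def press_button():
    """The entire digital 'dice throw': two uniform integers from 1 to 6."""
    return random.randint(1, 6), random.randint(1, 6)

# One press of the button stands in for an entire physical ritual.
print(press_button())  # e.g. (3, 5)
```

Statistically the output is indistinguishable from two fair dice; everything my respondents missed lay outside the code.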

The overwhelming response was that there would be no similarity at all. A variety of reasons were offered including references to control, luck and the physicality, the “actual-ness” of the dice and while all understood the premise and agreed that a die is a random number generator, they were still unable to reconcile the two interfaces as being of the same nature.

Consequently I started to add layers of complexity and simulation to my question such as a 3D representation of the dice in motion, real world physics and a physical interface to “shake and throw”. At this point, those to whom I was talking suggested that I was “getting closer” to what they felt was the essential experience of throwing dice.

Interestingly, as the complexity of the design grew, the notion of the new media experience I was describing seemed to become more transparent (representative of “real world”) to the user.

Thus it became increasingly clear that a great number of influences were governing people's imagined experience of digitally simulated (new media) interaction with dice.

The central theme that seemed to be emerging was that the dice roll has three distinct phases that can be summarised as: shake, roll and outcome.

Or in new media terms as: interaction, action, outcome.

These three stages of the dice interaction also have very broad needs in terms of their usability:

1. The context is important as there would seem to be a need for a physical interface, a period of the roll actually happening (being acted on by physics) and the notion of the outcome being “fair” in terms of the user being part of a physical system from shake to roll to outcome.

2. The nature of the interface seems to change in terms of how it reflects, immerses and again reflects the user within each stage:

The shake phase is reflective as the user may follow habitual patterns that “give luck” (physical interaction based on culture and experience).

The roll phase is immersive in that the user is now witnessing the effects of his ritual on the dice in movement (processing phase).

The outcome phase is again reflective as the user reacts to his influence on the system (or call for luck) and can be reactive in terms of how he will throw the dice next time (reflection on experience) (Bolter and Gromala 2005).

3. The experiential reaction to the idea of dice would seem to be laden with historic and cultural references and personal narrative (specifically game playing), evident in users' comments regarding feel, technique and a sense of some control over what they acknowledged to be a random number generator.

These findings when applied to new media can form the basis of interaction design that can encourage a re-imagining of experience but also contain fundamental questions and references in relation to context, liminality, narrative and the nature of the new media interface in terms of how feedback and rhythm within that interface reflect and mirror the user.

Rolling Through History

Dice are one of the oldest gaming instruments in the world. From the Aztecs to Ancient Egypt, from the Sumerians to the Vikings, dice (in various guises from fruit stones to animal knucklebones) have been used to play games, to wrestle with chance, to divine the will of the Gods, to see the future, to make decisions, to cheat and to try and control Fate and of course to gamble.

Today, dice are still an integral part of game playing, from snakes and ladders to role-playing games (such as Dungeons and Dragons), and their design as random number generators has been adapted to the online gaming world within massively multiplayer online role-playing games (MMORPGs) such as World of Warcraft.

Johan Huizinga suggests that these systems of game, ritual and play are fundamental to the development, reinvention and re-creation of human culture, and thus the die itself can be viewed as an instrument or interface that promotes change, re-imagining and re-thinking of culture and experience (Huizinga, 1955).

Thus, the die’s fundamental function as a hardware random number generator interface has been an integral part of our culture, our mythology and our society for thousands of years, but it is the die’s physical properties as a system in itself, the die as a physically designed unit that still provides a corporeal experience for the user.

To "roll the dice" or to "shake the bones" provides a wide variety of experience, from joy to rage and from ecstasy to depression, as the user "wins" and "loses" in random rhythm.

Along with the familiar context of the die within gaming systems and gambling that are prevalent in our culture, the die has also been present at, and sometimes a driver of, significant cultural change.

For example, the development of mathematical, rationalist probability theory can be traced to an exchange of letters between the 17th-century French mathematicians Pascal and de Fermat – building on the earlier work of Cardano and Galilei – that revolved around a question about a dice game of the time.

Thus, the die can be viewed as playing a role in the development and formation of modernism and Enlightenment culture that has been a pervasive driver of scientific, rational endeavour and objective reduction for the past 400 years (Liszkiewicz, 2011).

So it might be said that “rolling the dice”, in this context, has contributed to a revision of cultural systems including the rational empiricism of physics and mathematics.

However, there would seem to be a problem with the rational application of the die as “hardware probability generator interface”, in that the die itself is still imbued with the corporeal and thus subjective properties of premodernist philosophies including fate, luck and divinity that appear to stand at odds with the rational modernist philosophies of logic and reason.

The postmodern philosopher Bruno Latour suggests that we have never been modern: despite the dominance of modernist philosophy and culture, we still hold to older philosophies of magic and myth, fate and luck (Latour, 1993).

Perhaps then, it may be said that the age-old die, with its cultural and epistemological properties, is in rhythmic flux between being representative of a premodernist, a modernist and a postmodernist system, depending on how it is examined.

Or to put it another way, it depends on which face of the die we choose to look at:

  1. The die is representative of premodernist epistemology, whereby the “control” of the outcome rests with the mystical, the magical, fate and luck.
  2. The die is representative of modernist epistemology, whereby the “control” of the outcome is informed by empirical probability.
  3. The die is representative of postmodernist epistemology, whereby we choose to lose (or relinquish a degree of, dependent on context) control. The die as a physical system, in this context, is an integral part of a human system that is defined by action and thought which in turn are influenced through culture and experience.

Thus in designing an effective interface that allows for reflection on and due consideration of this multitude of cultural and historical experience, it would seem imperative that care is taken to develop a starting point in the new media feedback/interaction system that is culturally and experientially familiar (at least initially) as well as conscious of the initial context of the user.

Randomness, Glitch and Liminality

It is also important to consider these issues in relation to the design of the system itself: the randomness of the die is difficult to represent effectively in digital terms, as computer-generated data cannot be truly random – it is produced by deterministic algorithms whose output, however unpredictable it appears, follows an inherent pattern.

This issue is usually addressed through digital pseudo-random generation (PRG), or seeded randomness.

Although this method of generation often passes statistical randomness tests (themselves born out of modernist mathematical probability), true randomness is more often described through physical models including, for example, coin flipping, card shuffling and, of course, dice rolling.
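The determinism behind seeded pseudo-random generation is easy to demonstrate: two generators given the same seed produce identical "random" sequences, which is exactly the inherent pattern referred to above. A minimal Python illustration:

```python
import random

# Two independent generators, seeded identically...
a = random.Random(42)
b = random.Random(42)

rolls_a = [a.randint(1, 6) for _ in range(10)]
rolls_b = [b.randint(1, 6) for _ in range(10)]

# ...produce exactly the same "random" dice rolls: the sequence is
# fully determined by the seed, however unpredictable it looks.
print(rolls_a == rolls_b)  # True
```

A physical die has no seed to replay, which is one way of stating the gap between digital and "true" randomness.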

It would seem that true randomness possesses a quality of unpredictability that is difficult to replicate digitally unless it is possible to "randomise the randomness at random" – or, to put it another way, to introduce unexpected results, themselves generated at random, by intentionally introducing what may be considered a "digital error" or glitch into the system (Krapp, 2011).

Glitch art has become a form viewed by many digital artists as being an aesthetic of the digital age, from its inception as a music genre in the 1990s through to use and incorporation in VJing and other image based practices.

The glitch has its basis in corrupting digital data (at random or intentionally), which has the effect of reconfiguring the data and creating new forms and representations, as opposed to those initially described by the "ordered" data.

Owing much of its philosophy and approach to circuit bending (repurposing digital hardware), glitch art can offer an unexpected re-imagining of form that can produce works that challenge digital art as rational, somehow predestined and un-human.

Glitch art (perhaps like digital and new media art generally) is in many ways self-referential (in that it reflects the creator's position within the technology that creates the work), and this can create its own issues of accessibility in terms of the initial data source: a glitch artwork can perhaps only make sense when that initial data source is transparent to the viewer/user, and thus the process itself becomes the focus of the work.

While glitch art in and of itself may present issues around elitism (albeit a “hacker-elite”), the concept of introducing error and glitch as a part of an interactive system may help in presenting a more real world “feel” to and experience of new media art (Downey).

Whilst intentionally causing errors in the data output may initially (in the case of the die simulator) allow a more tangible experience for the user – in terms of the cultural weight of the die as a hardware interface – by producing a "truer" randomness, the introduction of a glitch also offers a means of exploring and re-imagining the nature of that experience.

If the glitch creates a “truer” randomness, the pattern of probability is disrupted and at a conceptual level it may be the case that another particular cultural facet of the die can be explored: cheating.

The notion of cheating, or in some way exerting control over what is a random event, is closely linked with play, especially within games that involve dice. From tappers (dice constructed with a central well of mercury that, when tapped, adds weight to a particular corner) to loaded dice (dice specifically constructed to favour a particular face), and even the introduction by casinos of walled tables for dice throwing, all seem to indicate our predilection for trying to control premodernist concepts of chance, fate and luck (Liszkiewicz, 2011).
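In digital terms a loaded die translates directly into a weighted random choice. A minimal sketch (the favoured face and the size of the bias are arbitrary assumptions for illustration):

```python
import random

def loaded_die(favoured_face=6, bias=3.0):
    """Roll a die whose favoured face is `bias` times more likely than the rest."""
    faces = [1, 2, 3, 4, 5, 6]
    weights = [bias if face == favoured_face else 1.0 for face in faces]
    return random.choices(faces, weights=weights, k=1)[0]

# With bias=3 the weights sum to 8, so the favoured face lands roughly
# 3/8 of the time (about 0.375) instead of the fair 1/6.
rolls = [loaded_die() for _ in range(10000)]
print(rolls.count(6) / len(rolls))
```

The mercury-tapper and the weighted choice are the same cheat in different materials: both quietly reshape the probability distribution while leaving the shake/roll/outcome ritual intact.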

From Greek stories of cheating the Gods (who in many stories and writings use dice themselves to decide men’s fates) to breaking the bank in Las Vegas casinos there seems to be an inherent cultural “excitement” over how chance, fate and apparent true randomness can be “loaded” in favour of the user/player.

Within these cultural references, the stakes are often high; life or death, win big or lose big and while the concept of cheating aims to shorten the odds in a more favourable way for the user/player, the human dice interaction system is still dependent on the three stages of the die roll: shake, roll, outcome.

In terms of the glitch in the system, the effect is perhaps most applicable during the roll phase whereby the user is now pure spectator at a window awaiting the outcome of his physical technique and interaction (or to put it another way his attempt to control the outcome) during the shake phase.

This roll (processing) phase of the experience, when coupled with the stakes applied to the outcome, can be said to be representative of a liminal state that in itself poses questions about the nature of experience and how the glitch and error, needed to be more representative of true randomness, play their part in the development of the narrative of the system as a whole.

This liminality; this state of becoming; this limbo of existence that is in flux; that exists during the roll phase can represent the more corporeal element of the experience as the die takes the user/player from winner to loser; from life to death in an analogue period that can be representative of every state in between and yet neither one of the binary opposites that are eventually revealed.

The glitch in the system, an approximation of true randomness, also has its liminal properties: it is often referred to as "error", "corruption" or "bug", which has the effect of suggesting that the process is flawed. However, when applied in a new media sense, this flaw has the potential to allow us to explore, re-imagine and revise our socio-cultural relationships with historical experience, identity and narrative.


Aristotle’s Poetics, complete with its prescriptive forms and character archetypes, describes a concept of narrative with three distinct phases: beginning, middle and end.

While this model seems to fit well with the phases of the die roll (as outlined previously), within a new media system, with its inherent ability to re-contextualise, re-represent and challenge perception, there may be an opportunity to explore and re-examine this concept of narrative.

Central to this proposition is the notion that the outcome (Aristotle’s narrative ending) is not set and is dependent on the liminal phase of the die roll.

Thus the “die narrative” challenges the notions of fatality with its reliance on random chance as being the driver of the outcome.

This notion seems to fit well with the socio-cultural aspects of the hardware die and when set within a new media context invites system design that can take advantage of this break with traditional narrative and perhaps present a re-contextualised, re-imagined and revised approach to personal narrative and identity.

The looped nature of this tripartite experience, whereby the user can be encouraged to keep rolling and re-rolling the die, offers the opportunity, through the inclusion of glitch, to change the context of the experience as the system can, for example, randomly freeze the action during the liminal roll phase, play out the action in a different order and “load” the die dependent on not only the nature of the interaction but also via the hard coded error within the window interface.
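The looped shake/roll/outcome experience with an injected glitch can be sketched as a simple sequence of events. Everything here (the glitch probability, the "frozen" behaviour during the liminal roll phase) is an assumed design choice for illustration, not a prescription:

```python
import random

GLITCH_PROBABILITY = 0.1  # assumed: roughly one roll in ten misbehaves

def glitched_roll(rng=random):
    """One pass through shake -> roll -> outcome, with a chance of a glitch."""
    events = ["shake"]                      # interaction: the user's ritual
    if rng.random() < GLITCH_PROBABILITY:
        events.append("roll:frozen")        # glitch: the liminal phase hangs
    events.append("roll")                   # action: the physics plays out
    outcome = rng.randint(1, 6)             # outcome: the face revealed
    events.append(f"outcome:{outcome}")
    return events

print(glitched_roll())  # e.g. ['shake', 'roll', 'outcome:4']
```

Because the loop is re-entered on every re-roll, the glitch re-contextualises each pass through the tripartite experience rather than breaking it once.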

Thus within the ordered, culturally specific experience of the (new media) die (shake, roll, outcome) there is potential to reshape and re-organise the narrative as it is playing out.

While the nature of the glitch itself can play a role in challenging Aristotle’s Poetics (which in many ways the random, fate-challenging die has always contributed to) the concept of the die itself has its own story to contribute when thinking about narrative.

From "choose your own adventure" literature, through the text-based adventure games of early-1980s personal computers (leading to the development of MMORPGs) and the exploration of hypertext as literature (as in Heath Bunting's readme.htm), to Luke Rhinehart's flâneuring "The Dice Man" (with its dérive that is inherently unreliable and challenges notions of fatality), the die, as representative of a random number generator hardware interface, has had its role to play in challenging the prescriptive nature of Aristotelian narrative.

Thus, whether resulting in Manovich's spatial narrative of "digital cinema" (as a product of re-mediation) or a more drastic, postmodern rejection of narrative and "the end of history", this attitude to the changing narrative of the self, society and culture means the die and its new media counterpart, the glitch, can be seen as central to explorations of what narrative means in a non-linear, object-oriented digital world that is, by its nature, in its own constant state of liminal rhythm and flux.


In terms of my original question, it would seem there are a great number of considerations to be made regarding both the interface design and its context, and also the potential of that interface to offer a critical space for reflection, re-imagining and re-conceptualisation of the die-throwing experience for the user.

While notions of the glitch can be seen to offer the potential for investigations into narrative, identity and culture within the new media interface, it would seem imperative that initial conditions for the experience are reliant on the cultural, social and historical faces of the die.

This initial transparency is crucial in order to promote exploration and re-contextualisation of the experience and all its implications (social, cultural, historical plus narrative and identity) for the user to revise through play.

The rhythm between transparency and opacity (in the case of the dice roll as mirror, window, mirror) of the interface can also be central in encouraging the critical space needed for this re-contextualisation.

Finally, while this document has concentrated on the re-mediation of a seemingly fundamental and familiar instrument, recognisable for its socio-cultural, historical and game-play facets, the act of that re-mediation involves a depth of research and, perhaps more importantly, an appreciation of the role of the roll of the die, and of how that tripartite experience is a feedback loop between user and interface that carries substantial power in terms of its meaning and implications.


Bachelard, G. (1994) The Poetics of Space, trans. M. Jolas. Boston: Beacon Press.

Bignell, J. (2000) Postmodern Media Culture. Edinburgh: Edinburgh University Press.

Bird, J. (1989) The Changing World of Geography. Oxford: Clarendon Press.

Bolter, J.D. and Grusin, R. (2000) Remediation: Understanding New Media. Cambridge, Mass: MIT Press.

Bolter, J.D. and Gromala, D. (2005) Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge, Mass: MIT Press.

Bunting, H. (1998) Readme.htm. http://www.irational.org/heath/_readme.html (accessed January 2012).

Caldwell, J.T. (Editor) (2000) Theories of the New Media: A Historical Perspective. London: Athlone Press.

Carroll, N. (1996) Theorizing the Moving Image. Cambridge, Mass: Press Syndicate of the University of Cambridge.

Cubitt, S. (2004) The Cinema Effect. Cambridge, Mass: MIT Press.

Deleuze, G. (2005) Cinema 2: The Time Image, trans. H. Tomlinson and R. Galeta. London: Continuum.

Downey, J. (no date) Glitch Art. http://jonasdowney.com/workspace/uploads/writing/glitch-art-jonasdowney.pdf (accessed January 2012).

Freud, S. (1919) The Uncanny. Available at http://people.emich.edu/acoykenda/uncanny1.htm (accessed July 2011).

Frohlich, D.M. (2004) Audiophotography: Bringing Photos to Life with Sounds. Netherlands: Kluwer Academic Publishers.

Gleeson, B. (1996) A Geography for Disabled People. Transactions of the Institute of British Geographers.

Hubbard, P. and Kitchin, R. (2001) Key Thinkers on Space and Place (2nd Edition). London: SAGE Publications Ltd.

Huizinga, J. (1955) Homo Ludens: A Study of the Play-Element in Culture. London: Routledge and Kegan Paul.

Krapp, P. (2011) Noise Channels: Glitch and Error in Digital Culture. Minneapolis: University of Minnesota Press.

Latour, B. (1993) We Have Never Been Modern. Cambridge, Mass: Harvard University Press.

Lee, H.J. (2009) The Screen as Boundary Object in the Realm of Imagination. Georgia: Georgia Institute of Technology.

Lefebvre, H. (1991) The Production of Space. Oxford: Blackwell.

Liszkiewicz, A.J.P. (2011) On Dice. http://interactive.usc.edu/2011/08/29/on-dice/ (accessed January 2012).

Lunenfeld, P. (Editor) (1999) The Digital Dialectic: New Essays on New Media. Cambridge, Mass: MIT Press.

Manovich, L. (2001) The Language of New Media. Cambridge, Mass: MIT Press.

Massey, D. (2005) For Space. London: SAGE Publications Ltd.

Mulvey, L. (2006) Death 24x a Second. London: Reaktion.

Rhinehart, L. (1972) The Dice Man. London: Grafton Books.

Rieser, M. and Zapp, A. (Editors) (2004) New Screen Media: Cinema/Art/Narrative. London: BFI Publishing.

Sinha, S. (no date) Remembrance of Images Past: Cinema, Memory and the Social Construction of the Concept of Time. Available at http://silhouette-mag.wikidot.com/article-cat:vol3-cover-pg3 (accessed May 2011).

Vaneigem, R. (1983) The Revolution of Everyday Life. London: Rebel Press.

Wollen, P. (1998) Signs and Meaning in the Cinema. Bury: St Edmundsbury Press.

Zylinska, J. (Editor) (2002) The Cyborg Experiments: The Extensions of the Body in the Media Age. London: Continuum.

Where am I?


A Thought Experiment into the nature and implications of new media art.

I’m sitting here which will become there. I’m sitting here now which will become there, then.

Through the actual utterance of these concepts around where and when I am doing what I am doing, I aim to evoke a sense of mental disconnectedness from linear ideas of space, place, time and perhaps narrative by calling a mental subroutine of imagination.

This imaginative sub-routine call or process, this re-imagining and re-construction of time, space and linearity as, initially, a purely mental process, will hopefully allow me to investigate new media art as a means of understanding ourselves and the world around us.

The term “new media” is problematic at best.

Here is a definition from Webopedia (http://www.webopedia.com/TERM/N/new_media.html):

“A generic term for the many different forms of electronic communication that are made possible through the use of computer technology.

The term is in relation to “old” media forms, such as print newspapers and magazines, that are static representations of text and graphics. New media includes:

• Web sites
• streaming audio and video
• chat rooms
• e-mail
• online communities
• Web advertising
• DVD and CD-ROM media
• virtual reality environments
• integration of digital data with the telephone, such as Internet telephony
• digital cameras
• mobile computing”

This definition helps to contextualise some of the issues involved in trying to find a starting point from which we can investigate new media. Firstly, this definition is itself a new media “product” or artifact, in that it is an (HTML) web page that links to other web pages and social sites via hyperlinks. It is a subjective artifact of that which it tries to define.

Secondly, this HTML page is transient in nature – although this definition may hold today (and even that is not certain), it does not follow that it will hold true tomorrow. This subjective artifact of the medium it purports to define is time specific.

Thus there is a peculiar (perhaps even peculiarly postmodern) aspect to the very definition of new media, whereby it can be said that new media does not necessarily know what it is and a definition will always be hard to nail down because of the word “new”.

In this way, new media can be seen not only as transient, fluid and ever changing in terms of both its form and content, but also as being in a liminal state; a state of “becoming”, of evolving, of metamorphosing from what it is “now” to what it “will be”.

At this point I am going to halt my own linear narrative and take a little time to reflect on what I have just written in order that I can get some critical distance that is necessary for a “reflective experience” (Lee, 2009). What I will do however is place a mental hyperlink from this point to a point in the future where I can re-open this particular window of investigation.

So, Jumping back to our fleeting and fluid definition of new media, it suggests that “electronic communication and computing technologies” are at the heart of new media.

So let us open that window and investigate… (@Schrodingers cat has just tweeted me saying he’s been sick in his box!)

Bill Nichols in his essay “The Work of Culture in the Age of Cybernetic Systems” describes the computer as “..more than an object: it is also an icon and a metaphor that suggests new ways of thinking about ourselves and our environment, new ways of constructing images of what it means to be human and to live in a humanoid world.” (from “Theories of New Media”).

The iconic computer – from the Xerox PARC machines through the Amigas and Spectrums to the iPhone and iPad – represents a scenario unique in cultural and social history.

For the first time, the means of production (the tools), the routes of dissemination (the network) and the means of consumption are all mediated through the same technologies, based on the computer interface.

From work to leisure we are now near-ubiquitous computer technology users, and while these developments can promote the democratisation and participatory nature of personalised media and art production, there is also the ever-present corporate, capitalist, “traditional” re-mediated media that perhaps seeks to control and redefine the new in order to develop “TV adverts, only better”. However, before this story moves toward an exploration of what Bob Stein refers to as the “M word” (for both Marketing and Marxism) in new media, I feel it important to browse more laterally towards an examination of the computer as interface (the GUI).

Page loading…

So I’m still sitting here/there (actually I’m somewhere and some-when different) and trying to articulate some of the plethora of thinking around new media art. As ever I find myself in front of a computer and more precisely in front of a screen.

The display – the screen itself – is often cited as the critical area of investigation in new media, one that can perhaps offer the widest potential in terms of understanding ourselves and the world around us.

As Hyun Jean Lee states in “The screen as boundary object in the realm of imagination”: “As an object at the boundary between virtual and physical reality, the screen exists as both a displayer and as a thing displayed, thus functioning as a mediator. The screen’s virtual imagery produces a sense of immersion in its viewer, yet at the same time the materiality of the screen produces a sense of rejection from the viewer’s complete involvement in the virtual world. The experience of the screen is thus an oscillation between these two states of immersion and rejection.”

The screen and the frame, the viewed and the viewer are all familiar areas of investigation in the history of art but the key difference between new media art and what has gone before is this sense of “immersion” and interactivity.

Whereas in traditional media forms there is a sense of broadcast, a one way mediated “lecture” between producers and audience via the frame or screen, the new media screen offers a dialogue between the window object of perception/presentation and the viewer/user.

This shift from viewer to user forms a central investigation point in new media criticism – the concept of a realisation and representation in real time of a window that can also mirror and reflect a user’s actions offers up new forms of experimentation and investigation. The key here is the digital nature of the new media forms.

At the heart of any digital representation is digital data that, due to its construction (made up of 0s and 1s at its most fundamental), can free form from content and can therefore be manipulated and re-represented. Thus, in the digital world of the window mirror, we can re-represent data feeds from weather stations as digital graphs, visualise audio data and even “listen” to digital images.
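As a minimal sketch of this freeing of form from content (an illustration of my own, with arbitrary values, not drawn from any cited work): the very same bytes can be rendered as greyscale pixels or as an audio waveform simply by re-mapping them:

```python
# The same raw digital data, re-represented in two different media.
data = bytes(range(0, 256, 16))              # 16 arbitrary bytes of 'content'

# Form 1: read each byte as a greyscale pixel intensity (0-255).
pixels = list(data)

# Form 2: re-map the identical bytes to audio samples in [-1.0, 1.0],
# effectively 'listening' to the image.
samples = [(b - 128) / 128.0 for b in data]

print(pixels[:4])    # [0, 16, 32, 48]
print(samples[0])    # -1.0
```

The content (the bytes) never changes; only the form (the mapping chosen to display it) does.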

As experimental art practice has taken advantage of digital data’s particular ability to free form from content, the folding of both space and time through digital manipulation and re-mapping has led to interesting insights and a re-imagining of our perceptions of the world around us.

For example the work of Tamás Waliczky where he describes his “time crystal” works as aiming “..to preserve in frozen form brief moments in an individual’s life. These crystals exist simultaneously alongside each other in space, and a virtual camera (whose viewing angle is to some extent the lofty vantage point of God) can observe them from any desired location. By travelling through the time crystals, the camera can re-produce the original movement, but from a diverse range of perspectives and at varying speeds.” (http://www.waliczky.net/pages/waliczky_sculptures1-frame.htm)

Other works of note that involve the folding and re-representation of time and space include Zbigniew Rybczyński’s The 4th Dimension, where the artist “uses images like geological layers. He does not play with the image as a background/form, but as a geological mound. For him, each line becomes a system that can be isolated, the way a geologist approaches and analyzes each single stratum” (http://www.zbigvision.com/The4Dim.html)

And Steina’s “Bent Scans” (2002) “The installation uses four computers resulting in four different image projections. Though all four computers have the same camera input, a different program on each creates a very different video image on each projection. By stepping into the camera view, the visitor will experience a different view of him or herself in an immediate past time.” (http://www.vasulka.org/Steina/Steina_BentScans/MOV_Bent_Scans.html)

This small sample of works that emphasise our relationship with time and space perhaps has its origins in experimental moving image work that introduced specific time- and space-based interventions in real time: for example, Peter Campus’ “Origins” (1972), Peter Weibel’s 1973 work “Observation of the Observation: Uncertainty” and Dan Graham’s “Present Continuous Past(s)” (1974).

The commonality between these works often involves the shifting perspective of the participant/viewer when engaging in the interactive feedback loop offered through the screen/frame that represents both window and mirror.

The user participant uses the screen/frame both as a window through which to look into a parallel space-time and as a mirror that reflects his own interaction and participation in the feedback loop.

In this way the screen becomes the mediator of the experience as well as existing as a central object in the space of the work.

This notion of feedback and interaction is critical in investigating how the re-visualisation of digital data can impact on the viewer/user.

Interaction in art is not a new media phenomenon; there has always been a dialogue between artist/producer and viewer that exists as a feedback loop whereby the viewer (of a traditional, static artwork) can reflect his own interpretation onto the work.

This act of interaction as interpretation has traditionally allowed a critical space for reflection on, and reading of, an artwork, but within the screen of new media, that room for reflection, a space and time to evaluate the experience is often lacking, as the screen/frame itself, as part of the experience, begins to dissolve and the critical distance between artwork and viewer diminishes.

Therefore, in order to encourage that critical reflective space, the new media window/mirror finds itself in a shifting state of presence: sometimes window, sometimes mirror and sometimes traditional frame, in order that the viewer/user can interact, be immersed and reflect on the experience.

Thus in new media the frame of the screen becomes mirror and window, and as the complexity in terms of spatio-temporal layers of interaction increase, so the frame becomes a frame within a frame; windows that look at mirrors and mirrors that reflect disparate times in that same windowed space. The window I am looking at now as I type this, contains a representation of the time I have spent doing this typing. Simultaneously, in the same space where I type this there are several other windows that offer me a view into other “times” and “places”.

As these windows literally open up in terms of the GUI of the screen, we are faced with interacting with a mirror world that is a multi-temporal space. This spatio-centric world of my desktop thus offers me windows into (or onto) digital imagery and film (as I wait for my film to upload to Vimeo), the digital content of my computer, a program I am developing in Processing, my website etc. whilst in the same space I am engaged with and interacting with what is now and what will be as I continue to type.

This multi-temporal spatio-centric world of the computer with which we all now seem to be familiar, is suggested by Manovich as being an underlying condition of the “digital cinema”, whereby we may come to expect an increasing emphasis on spatial elements of arrangement, of montage and of experience as opposed to the “show and replace” temporal approach of traditional moving image practice. This is but one example of thought around the implications for narrative in the new media frame.

In Manovich’s digital cinema, temporal narrative is seen to be being replaced by a rediscovered spatial narrative akin to the art of Giotto and Courbet.

A second concern around the nature of narrative in a new media concept loops and links back to my earlier exposition of the interactive new media loop in the ever-morphing window/mirror.

Manovich insists that the re-discovery of the loop as a narrative driver for new media has its basis in the mechanical, as in the cranking, circular loop of early movie cameras and the Zoetrope.

So the loop has a rightful place in the digital cinema as a “re-found” means of communication and representation.

However, the loop of the new media window/mirror can be seen to break with traditional “Aristotelian” narrative (which has a beginning, middle and end, with its archetypes and prescriptive form). The conflict between Aristotle’s Poetics and new media can be seen to centre around “..his idea of ‘mimesis’ as a truthful reflection of ‘reality’…(which) cannot hold since today it would make more sense to talk of ‘multiple realities’ for different readers(users).

The reader activity and background therefore becomes much more important in thinking about what happens when hypertext (new media) narratives are ‘read’ ” (http://www.cyberartsweb.org/cpace/ht/hoofd3/)

The nature of this narrative break may lie in the digital nature of contemporary new media.

Aristotle’s Poetics relies on a production-line system where one process follows another in a linear fashion. In the digital world this can be compared to early DOS programming, where commands were executed one after the other, sequentially, so as to produce a linear system.

With the advent of Object-Oriented Programming (OOP) (popularised, notably, through the development of Apple’s operating system), systems became co-existent and able to communicate with each other at any time depending on the route of the data flow.

For example, the word processor I am using now contains the text objects themselves as I type, but simultaneously there exist a number of tools that can affect the form of this content if I choose to activate them (for example, changing the font, colour, size etc.).

Once again we witness windows in windows, form and content separated and a multitudinous narrative spreading out before us that is dependent on interaction via a GUI, and both dependent on and as a result of the digital nature of the truly multi-media machine (as in literally meaning many ways or forms of communicating).
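The contrast above can be sketched in (hypothetical) Python rather than DOS-era commands: a linear script runs step after step, whereas co-existing objects keep form separate from content and respond to messages in whatever order the user sends them. The `Document` class and its tools here are my own invention for illustration:

```python
# Linear, 'production line' style: each step strictly follows the last.
text = "hello"
text = text.upper()
text = text + "!"
print(text)  # HELLO!

# Object-oriented style: the document and its tools co-exist; form
# (font, colour) is separate from content and can be changed in any
# order, at any time, by sending messages to the object.
class Document:
    def __init__(self, content):
        self.content = content     # the content: what is said
        self.font = "serif"        # the form: how it is shown
        self.colour = "black"

    def set_font(self, font):      # a 'tool' acting on form, not content
        self.font = font

    def set_colour(self, colour):
        self.colour = colour

doc = Document("hello")
doc.set_colour("red")   # the order of these calls is up to the user,
doc.set_font("mono")    # not dictated by a linear program flow
print(doc.content, doc.font, doc.colour)
```

Whichever order the tool calls arrive in, the content object persists unchanged while its form is re-mediated around it.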

So this narrative break, whether heading towards a more spatially based narrative and/or away from Aristotle’s Poetics, poses an issue in new media art.

Within the digital context as investigated here, narrative seems to be in a constant process of creation through interaction that leaves the idea of an “ending” somewhat redundant. So, because of the very nature of the user/viewer interaction and the doubling multi-path options provided by that interaction in the non-linear window/mirror interface, there is a sense of new media art being un-finished, in flux or becoming.

This notion of “un-finishedness” fits well with the fleeting temporal nature of new media, although it can create a sense of “failure” and even death. The term “unfinished” is tainted with failure as well as the romanticism of imagination it stirs in us. Consider, for example, how Benjamin would have formulated and presented a final version of his “Arcades Project”.

How would Schubert’s unfinished symphony have turned out? And what impact on the world for the continued work of Keith Haring and Paul Monette? Thus the unfinished inspires imagination but also conjures up concepts of failure and the incomplete.

But perhaps, as Peter Lunenfeld expresses in his essay “Unfinished Business”, it is the process of becoming, rather than the end, that is in need of celebration.

A constant flux, movement and dialogue between what is and what may be, can perhaps shine a light on that which “..is not a resolution, but rather a state of suspension” within which there are constantly emerging new opportunities and developments whereby we are encouraged to re-imagine, reflect and re-invent ideas of our own perceptions.

New media with its constant re-invention of itself through its own mediation perhaps has an affinity with the unfinished – to quote Ted Nelson “Everything changes every six weeks now”.

New platforms, new programs, new approaches and new tools that develop, obsolesce and are replaced perhaps exemplify the position of new media at the forefront of investigation into this philosophy of the unfinished narrative of the self.

Lunenfeld goes on to explore how new media activities and engagement can be compared to the flaneur and the mid-20th-century avant-garde movement the Situationist International (SI). The flaneur, with his altered, aloof and observational vantage point, and the SI’s notion of the “derive” encourage a re-examination of the urban cityscape in order to “engage with the city as an open-ended place of play and investigation”.

The derive can be described as “a technique of rapid passage through varied ambiences. Dérives involve playful-constructive behavior and awareness of psycho-geographical effects, and are thus quite different from the classic notions of journey or stroll.” (http://www.bopsecrets.org/SI/2.derive.htm)

This notion of the meander through the post-modern isolation of the city-scape can be paralleled with the new media experience in terms of both experience and language: we often “browse” the internet and create complex cyber-psycho-geographies of the online world. We follow where mood and links (hyper and physical, HTML, feet and trains) take us in order to discover something new, or reflect on something known or to gain a new perspective.

This flaneuring typifies my meandering through this particular presentation of new media. My derive seems to drive or link me to a playful idea of the analogy between my imagination as a system for the re-mapping of space, place and time and the world I gaze into, the windows I use and the mirrors in which I see my thoughts reflected. If therefore, new media possesses aspects of the “un-finished” and elements of the “derive”, this helps me to hyperlink back to the anchor icon I left earlier when discussing liminality. As indicated through the constant re-invention and the very aspect of the “new-ness” of new media any art/media produced within this all consuming re-mediating space possesses an element of “becoming” and thus an aspect of being liminal; between states, neither this nor that, somehow intermediate.

The consequences of this line of thought lead to a number of different scenarios that can offer unique insights and discourse with regard to an examination of ourselves and our environment. The cybernetic world of the “Post Human”, exemplified through the work of Orlan and Stelarc, offers new ways and platforms to discuss issues around gender, feminism and politics as our physically augmented selves (through cosmetic surgery, pacemakers and dialysis, to name but three current cybernetic enhancements) “becomes other than (themselves) which is mediated through the new technology which determines it” (Clarke, in Zylinska, 2002).

However, with this liminality comes an opposing view of the liminal as “monstrous, diseased, queer, black, female, insane” and “polluted” (Clarke).

As much of the literature draws on concepts behind moving image works such as Terminator (1984), Robocop (1987), Johnny Mnemonic (1995), Blade Runner (1982) and the multitude of “Frankenstein” adaptations, the cybernetic investigation calls us to question our nature in terms of how we react to “the other”, how we can come to terms with the “unheimlich” of the cybernetically “altered” and what this represents in terms of our understanding of our own natures. If we apply these concerns to new media, the sense of the “unfinished” and the “derive” of non-linear narrative brings into question the supposed “rationality” of the machine. We are often halted in our everyday pursuit of operational procedure by claims of “illegal operations” and “fatal exceptions” as the temporal logic of the digital production-line command clashes with the “illogic”; the “irrational”.

This error; the polluted, the illogical, the irrational, the corrupt, the bug, the glitch, can often stop a process dead – “finished”. But the glitch itself, the ghost in the machine, the “irrational” is what may help us glean a deeper understanding of ourselves as irrational systems; it can force a new perspective.

Thus, the glitch, the error, the corrupt, may inform a digital derive that we had not thought of before and hyperlink us into a new loop that helps lay out a new, new media narrative that is closer to our irrational selves and may help us to understand, interact with and perceive the world around us in new ways.

So, as I hyper-jump via an irrational glitch to a new window/mirror, I will not finish this but instead loop back to my imagined starting point, wherever and whenever that may now exist, and restart in the middle of the narrative with a quick “Where was I?” and “Where am I going?”

Kinect Investigations


Microsoft Kinect Sensor and OpenNI Framework

Originally known as Project Natal, the Kinect is a motion-sensing device used as a hands-free controller with Microsoft’s Xbox games console. The Kinect captures motion-sensing data, including depth data (via infrared), in order to allow users to control gameplay through gestures, motion and speech.
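The principle behind gesture detection from depth data can be illustrated without the OpenNI API itself; the tiny frame and threshold below are invented for this sketch. Pixels measured nearer to the sensor than a chosen threshold are treated as the user’s hand breaking an interaction plane:

```python
# A toy depth frame: each value is a distance in millimetres from the
# sensor (real Kinect depth maps are 640x480; this is a 4x4 stand-in).
depth_frame = [
    [2000, 2000, 2000, 2000],
    [2000,  600,  650, 2000],
    [2000,  620, 2000, 2000],
    [2000, 2000, 2000, 2000],
]

NEAR_THRESHOLD_MM = 800  # anything nearer than this counts as a gesture

def hand_pixels(frame, threshold):
    """Return (row, col) coordinates of pixels breaking the plane."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, d in enumerate(row)
            if d < threshold]

hits = hand_pixels(depth_frame, NEAR_THRESHOLD_MM)
print(len(hits), "pixels within gesture range")  # 3 pixels
```

A real application would run this per frame on the sensor’s depth stream and track the cluster of near pixels over time to recognise motion.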


“Playing” the Image


Initial Ideas:

Falling out of this was the idea of “memory triggers”: could we “play” our memories, emotions and ideas just as we’d play a piano, interacting in real time with the visual forms triggered, as we might do with a musical instrument? By “playing” the pre-constructed memories as shots we can create a montage of musical memory that uses sound to affect colour, time, motion and movement of the images presented.

Developing the concept:

In a more detailed analysis here, I’m thinking about the nature of the content as well as the interactive element. Here’s my thinking around the nature of the content and how the initial treatment addresses key concepts of the moving image.

1. Memories created using images (found or filmed/ still and moving) and voice over

-exploration of the relationship between still and moving image

-exploration of colour and memory

2. Animated memories created in a virtual environment using particle systems, emitters, boids etc.

-how does the integration of still and moving image in a virtual (“un-natural”) environment affect our relationship with the image?

-opportunity for generative, artificial intelligence (via boids systems) to introduce an organic development of shots

Set up

In this set-up I’m using 3 MIDI triggers for both visual and audio, which are composited and re-introduced into the system. Sound (generative audio and voice tracks) is layered both in the A/V mixer and the synth modules, allowing for interactive generation of A/V montage based on the user’s (me!) interpretation of, and reaction to, both sound and image.

The A/V mixer can be set up in a number of different ways, with different triggers affecting different parameters.

In this way, the 5 shots that can be generated would be an exploration using the audio itself to trigger parameters in the A/V mixer that relate to:

1. Colour – volume, velocity (how hard a key is pressed) and pitch can be used to directly affect the colour balance of a clip or A/V mix.

2. Time – similarly, triggers can be used to affect the start and end points of a clip, or the speed and direction of the playhead.

3. Movement – created directly in terms of the order of playing the clips, in order to create an A/V montage.

4. Sound – this shot would relate directly to how a piece of music can create a specific shot.

5. Interaction – the key to this is the relationship between sound, image and user, in no particular order. Having run a few tests (see below) with some clips, it is really interesting to explore how the image encourages a certain key press and how the sound itself leads the user on to more experimentation: what I mean by that is, it’s a very moreish toy! 🙂
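The colour and time mappings above can be sketched as simple functions from MIDI note data to mixer parameters; the scaling choices here are my own illustration, not the actual patch:

```python
def velocity_to_colour_balance(velocity):
    """Map MIDI velocity (0-127) to a colour-balance factor in [0, 1]:
    harder key presses push the mix further toward the second colour."""
    return velocity / 127.0

def pitch_to_playhead_speed(note, centre=60):
    """Map a MIDI note number to playhead speed: middle C (60) plays at
    normal speed, higher notes play faster, lower notes slower."""
    return 1.0 + (note - centre) / 24.0

# A hard middle-C press: strong colour shift, normal playback speed.
print(velocity_to_colour_balance(100))  # ~0.787
print(pitch_to_playhead_speed(60))      # 1.0
print(pitch_to_playhead_speed(36))      # 0.0 -> paused at the threshold
```

In a live patch these values would be written to the A/V mixer on each note-on event, so the same gesture simultaneously shapes sound and image.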