Tuesday 24 July 2007

NEW BLOG

I've started a new blog to document the thinking, testing and development for both the practical and written content of my MA project.

New Blog

Sunday 22 July 2007

Experiment one

This short video shows me testing my system so far.

I have networked two computers so they can pass OSC signals to each other.
One takes input from a webcam using ‘Reactivision’ (this is what is on the left-hand side of the video). I have then used Max/MSP to send this data out as OSC signals, which get sent both to ‘Isadora’, which creates the visuals, and to the other computer running ‘Reaktor 5’, which creates the sound you hear.
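As a side note, the OSC messages themselves are just UDP packets with a very simple layout. Here is a rough Python sketch of that idea (my actual setup does this inside Max/MSP, not Python); the address pattern, values and port number are made up for illustration, and the real patch sends to the other machine's LAN address rather than localhost.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as the OSC 1.0 spec requires."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def build_osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message whose arguments are all float32s."""
    packet = osc_pad(address.encode("ascii"))             # address pattern
    packet += osc_pad(("," + "f" * len(args)).encode())   # type tag string, e.g. ",ff"
    for value in args:
        packet += struct.pack(">f", value)                # big-endian float32
    return packet

# A made-up fiducial position, as Reactivision might report it
msg = build_osc_message("/fiducial/1/xy", 0.5, 0.25)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9001))  # in the real setup: the other machine's IP
```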

The first two clips are a test of a simple type of ‘colour organ’. The third clip uses a desk lamp to control the amount of light reaching the webcam. I have set Isadora to analyse the amount of incoming light and then output a value to control the sound and the imagery.
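The light-to-value mapping Isadora is doing here can be sketched in a few lines of Python. This is only an illustration of the idea; the function name, the toy 2×2 greyscale ‘frame’ and the output ranges are all made up, not anything Isadora exposes.

```python
def brightness_to_control(frame, out_min=0.0, out_max=1.0):
    """Mean luminance of an 8-bit greyscale frame, rescaled to a control range."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)               # 0..255
    return out_min + (mean / 255.0) * (out_max - out_min)

# Desk lamp half on: two bright pixels, two dark ones
frame = [[0, 255],
         [255, 0]]
print(brightness_to_control(frame))            # 0.5
print(brightness_to_control(frame, 0, 127))    # 63.5, e.g. for a MIDI-style range
```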

Monday 16 July 2007

MA Project

Dissertation

At this stage I am still unable to formulate a specific question.
I know the general area I want to explore.
This being the combination of music and imagery and its effect on a person.

Each form can trigger direct responses from our senses and minds, and the combination of both delivers something that neither can on its own.

Does the combination trigger different responses in our brains?

‘Visual Music is an art form with a primary objective of intentionally activating the mind’s ability to fuse visuals and music into a “synaesthetic experience”.’

Is this an interaction in itself? Do our minds fuse the elements in the same way, or do we each create our own experiences?

Does the combination of music and imagery allow more or less room for our minds to include our own imaginations?

I would like to consider two examples where I believe music and imagery have the strongest impact: film and television scores, and VJ culture.
Within these examples, one form is supportive of the other, although I would like to explore how each changes an audience’s perception of the other.

Something else I have come across in my research is the combination of music and imagery as a form of psychotherapy, used to bring about a therapeutic change.
One organisation’s main approach is to play patients certain types of music while they are in a relaxed state and ask them to describe any images that they experience in their minds.

I found this particularly interesting, as it suggests that everyone has the ability to create images when listening to music. I would like to look into this further.

Also, I would like to look into possibilities for the future. Will there be a time when we will be able to see images created by a person’s mind? Are the images we see in our heads actually translatable images?

Practical project.

I am in a similar position with my practical project: I have yet to come up with a definitive idea.

I think so far I have been trying to combine too many ideas or functions into one project.

User-created visual music, emotive responses and the interaction between body and system are all things I am trying to include, and I am not sure if they will all work together. On reflection, I think my ideas so far are too complicated and not instantly usable by an audience.

I have been experimenting with various technologies in the hope of giving myself some idea of possible interactions, spaces, sounds and visuals.
Most of these experiments have been successful but, in a way, haven’t helped, because they have given me many more options and opened new avenues to explore.

Experimentation.
Using the ‘pod’ screens as back-projection screens, I used a webcam and ‘Isadora’. I played around with many different effects, images and user-created visuals. All were fun and visually appealing but didn’t really have any effect or create anything particularly immersive.

Over the weekend I managed to get more control and interactivity into the Isadora visuals. I used Reactivision to input data from the webcam, then created a Max/MSP patch that splits the data up into separate objects for multiple controls, scales the numbers, then outputs the new numbers to a port on my network. Finally, I set Isadora to input data from this port.
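The split-and-scale step of the patch boils down to something like the Python sketch below. It is only an illustration: the control names, input ranges (normalised 0–1 positions, loosely following Reactivision's TUIO output) and output ranges are my own choices for the example, not the actual patch.

```python
def scale(x, in_min, in_max, out_min, out_max):
    """Linear rescale, like Max/MSP's [scale] object."""
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

def split_controls(fiducial_id, x, y, angle):
    """Split one incoming marker update into separately addressed, rescaled controls."""
    return {
        f"/fid/{fiducial_id}/x": scale(x, 0.0, 1.0, 0.0, 100.0),
        f"/fid/{fiducial_id}/y": scale(y, 0.0, 1.0, 0.0, 100.0),
        f"/fid/{fiducial_id}/angle": scale(angle, 0.0, 6.2832, 0.0, 127.0),
    }

# One marker at the centre-top of the camera image, unrotated
print(split_controls(1, 0.5, 0.25, 0.0))
```

Each key/value pair would then go out as its own OSC message for Isadora (or Reaktor) to pick up.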

I now have the problem of using all of these elements to create something simple but effective.

Playing with the pods (low quality phone photo)





MAX/MSP Patch

Reactivision

Tuesday 10 July 2007

MA Project ideas and testing

My original idea was to create an instrument that would be controlled by users’ movements in a space. The instrument would create sounds and corresponding ambient imagery.

In the discussion last week it became clear that it would be difficult to do something that hadn’t been done before, and also that the installations of this nature with the biggest effect are those that engage the user on an emotional level as well as just a cerebral one. This is something I have found from both personal experience and previous research, so I fully agreed. I still want to incorporate some of my original ideas: to incorporate the body within a space, using both sound and imagery to create a more immersive environment, and hopefully to explore the relationship between user perception and the two forms.

Since the discussion last week I have been thinking about how to gain a more emotional engagement with users. I have been researching previous works on the net, most of which can be found in my delicious links below.

After lots of brainstorming, sketching and daydreaming, I have developed two ideas.

The first:

A single-screen projection that places moving images onto the on-screen presence of the user. The closer the user gets, or the more surface the user creates, the more of the image will be shown, allowing them to explore the image using their body.

The images will be quite ambiguous, and the main idea is that the sound content will react to the user’s movements. The more of the picture they reveal, the more layers of the soundscape will be heard. As more layers come in, the soundtrack will change in mood and in turn, hopefully, change the user’s perception of the image.
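The reveal-to-layers mapping could be as simple as this Python sketch, where a binary mask stands in for the camera's view of the user's silhouette (the function name and layer count are made up for illustration):

```python
def audible_layers(mask, total_layers=4):
    """How many soundscape layers to fade in, from the fraction of pixels covered."""
    flat = [p for row in mask for p in row]
    coverage = sum(flat) / len(flat)      # 1 = pixel covered by the user's body
    return round(coverage * total_layers)

# User covers half the frame -> half the layers play
mask = [[1, 1, 0, 0],
        [1, 1, 0, 0]]
print(audible_layers(mask))  # 2
```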

The images will change over time so I can experiment with different moods and relationships between the sound and imagery. I think the more ambiguous the imagery, the more the user can ‘project’ onto it and bring their own imagination to it, hopefully influenced by the soundtrack.

I have done some preliminary testing using a bed sheet, an iSight camera and the real-time video manipulation software ‘Isadora’. I found it very playful, and with the right images it could be a nice experience in itself.

Here are some pictures.





The Second:

A more personal approach, based on my own emotional responses to certain recent events.

A dual-screen projection, on opposite sides of a small dark room.
The basic idea is to use the same image on both screens and use music to create a different perception of the moving image.
As the user turns around to face a screen, the music changes accordingly.
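One way the turning mechanic could work, sketched in Python: a cosine crossfade between the two soundtracks, driven by the facing angle (0° = facing screen A, 180° = facing screen B). The formula is just an illustration, not a decided implementation.

```python
import math

def crossfade(facing_deg):
    """Volumes for the two opposing screens' soundtracks, from the facing angle."""
    vol_a = (1 + math.cos(math.radians(facing_deg))) / 2
    return vol_a, 1 - vol_a

print(crossfade(0))    # facing screen A: (1.0, 0.0)
print(crossfade(90))   # side-on: both soundtracks at roughly half volume
```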

Some of the things I want to reflect are the feeling of isolation, the inability to change a developing course, and a sudden change of everything you thought you knew.

An image I have in mind is a tunnel: a looping clip going further and further in, with a distant light at the end.
The music will hopefully convey whether you are going further in or getting further out.




I am then thinking that if the user stays looking at one screen for long enough, an ending will play out. A rough idea is this.
The bad ending: the tunnel fills up with water or starts to crumble; the screen behind them will also change, hopefully making the user feel trapped, with the music all the while getting more chilling and louder towards a climax where the room suddenly goes dark.
The happy ending: the tunnel reaches its end and both screens fill with brightly coloured, euphoric imagery, while the music gets louder and becomes as euphoric and colourful as the imagery.

I like both ideas, or at least think they have potential, but I need some guidance in the tutorial tomorrow. I think the first idea has a more playful nature and would enable me to explore the relationship between sound, imagery and user perception. The second idea, if done well, might gain more of an emotional response. I understand that they are both probably single-user experiences, but I think that in itself works with each concept.

Any thoughts or comments, please let me know.

Delicious page:

http://del.icio.us/tom.newell

Wednesday 13 June 2007

Usability Group Project

I've just finished editing the film demonstrator for our Interactive Map. The film shows how users can use the interface to navigate the city.
I've uploaded it to YouTube.


Also just uploaded the completed interface. HERE

Saturday 2 June 2007

Other Stuff - Visit to Liverpool

Last weekend I went to Liverpool to visit family. I had the chance to visit some of the city’s galleries and museums. Next year the city will be the European Capital of Culture, and so has recently had big investment put into the arts.
By chance, some of the exhibitions were very relevant to what I have been learning on the course, so I've posted what I saw so you can have a look too.

The first was at the Liverpool World Museum. The exhibition was entitled Animated Adventures, and it aimed to take you behind the scenes of animated film-making, right from the ideas stage, through set design, cel painting, stop-frame and CGI, to sound design. Its main feature was Aardman and the making of their latest film – Curse of the Were-Rabbit.

I thought the exhibition was fantastic, not only because I love animation but also because the exhibition itself was so good. It was a brightly coloured, easily accessible, fun multimedia exhibition. It mainly consisted of walls covered with snippets of information, pictures, video footage and actual sets and puppets from the film, all arranged in the order of the film-making process.

The best bit came at the end, when you were allowed to ‘play’ with the interactive parts of the exhibition. These ‘booths’ aimed to demonstrate how animation is created by allowing users to create their own.
Using a very simple interface and a stage with props such as toys and letters, you simply placed or moved items on the stage and pressed a big red ‘record’ button to record a frame. When you had finished, you pressed play and your animation was shown on a large screen above the booth for you and other visitors to see. The other booths included cel painting, in which you colour your own animated characters, and sound design, where you press buttons to trigger sound over some animated footage. The interfaces were very easy to use, and while I was there I saw people of all ages using and enjoying them. These were highly interactive, multi-user, informative and very immersive interfaces.

Link to the website HERE

The next exhibition I saw was entitled ‘Centre of the Creative Universe: Liverpool and the Avant-Garde’.

It aimed to exhibit a unique account of Liverpool’s art scene and how the city has inspired a diverse range of nationally and internationally renowned artists.
For me the exhibition did exactly what it aimed to do: I saw many diverse works and learned how the city had inspired each one of them.
The best thing about the exhibition, though, was the design of the map, which showed all the links between the avant-garde and the city itself. The map was used as promotional material for the exhibition, as well as covering the entire wall facing the doors as you enter the exhibition. I thought it was a good example of well-designed information architecture.


Website HERE

The final thing I got to see was at the Walker Art Gallery. The gallery has recently built a new facility called ‘Big Art’. Its objective is to get young children, up to age 8, to engage with and understand more about the art on display in the museum. The designers constructed things like trails and quizzes for the children to take part in on their visit. Within the facility itself the children take part in different activities, ranging from dressing up as people in the paintings and taking photos of each other, to using touch-screen interfaces to play different games related to specific works. Among the other interactive areas was one where the children had to pair up and work as a team. They communicated by phones on different sides of the room: one had to describe objects in boxes on the wall next to them, each of which could be found in a painting in the gallery; the other had to draw the object on an LCD screen, based on the description from the first child. When they drew it correctly, they could open another box.

The website for ‘BIG ART’ can be found HERE.

I thought there were some really good examples of interaction design. They would get young children excited about going to the gallery and, in turn, help them learn more about why the paintings are important, how they were created, and the people in them.

Thursday 31 May 2007

Interactive Narrative - Storyworld

Just finished A.C.'s police interview.
I recorded both voices and used Logic Pro's Vocal Transformer to make each sound different.
I have uploaded it for your listening pleasure HERE