Tuesday 24 July 2007

NEW BLOG

I've started a new blog to document the thinking, testing and development for both the practical and written content of my MA project.


Sunday 22 July 2007

Experiment one

This short video shows me testing my system so far.

I have networked two computers so they can pass OSC messages to each other.
One takes input from a webcam using ‘Reactivision’ (this is what is on the left-hand side of the video). I have then used Max/MSP to send this data out as OSC messages, which are sent both to ‘Isadora’, which creates the visuals, and to the other computer running ‘Reaktor 5’, which creates the sound you hear.
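For anyone curious what is actually travelling over the network: an OSC message is just a padded address string, a type-tag string and the arguments. Here is a rough Python sketch of one such message being built and fired over UDP (this is not my actual Max/MSP patch; the `/fiducial/x` address, host and port are made up for the example):

```python
import socket
import struct

def osc_string(s: str) -> bytes:
    """Encode an OSC string: UTF-8, NUL-terminated, padded to a multiple of 4 bytes."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Build an OSC message carrying a single float32 argument."""
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

def send_osc(host: str, port: int, address: str, value: float) -> None:
    """Send the message over UDP, OSC's usual transport."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message(address, value), (host, port))

# e.g. send_osc("192.168.0.2", 8000, "/fiducial/x", 0.42)
```

In the real setup Max/MSP's networking objects do all of this for me; the sketch is just to show how little is actually in each packet.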

The first two clips are a test of a simple type of ‘colour organ’. The third clip uses a desk lamp to control the amount of light reaching the webcam. I have set Isadora to analyse the amount of incoming light and then output a value that controls the sound and the imagery.
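The light analysis itself is conceptually simple: average the frame, then rescale the average into a control value. A minimal Python sketch of that idea (the 0–255 pixel range and the 0.0–1.0 output range are my assumptions here, not Isadora's actual internals):

```python
def brightness(pixels):
    """Mean luminance of a greyscale frame, one 0-255 value per pixel."""
    return sum(pixels) / len(pixels)

def to_control(level, lo=0.0, hi=255.0):
    """Map a brightness level into a clamped 0.0-1.0 control value."""
    v = (level - lo) / (hi - lo)
    return max(0.0, min(1.0, v))
```

Turning the desk lamp up raises the mean, which raises the control value, which drives the sound and imagery.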

Monday 16 July 2007

MA Project

Dissertation

At this stage I am still unable to formulate a specific question.
I know the general area I want to explore: the combination of music and imagery and its effect on a person.

Each form can trigger direct responses from our senses and minds and the combination of both delivers something that neither can on their own.

Does the combination trigger different responses in our brain?

‘Visual Music is an art form with a primary objective of intentionally activating the mind’s ability to fuse visuals and music into a “synaesthetic experience”.’

Is this an interaction in itself? Do our minds fuse the elements in the same way, or do we each create our own experience?

Does the combination of music and imagery allow more or less room for our minds to include our own imaginations?

I would like to consider two of what I believe to be the examples of music and imagery having the strongest impact: film and television scores, and VJ culture.
Within these examples, one form is supportive of the other, though I would like to explore how each changes an audience's perception of the other.

Something else I have come across in my research is the combination of music and imagery as a form of psychotherapy, used to bring about a therapeutic change.
One organisation's main approach is to play patients certain types of music while they are in a relaxed state and ask them to describe any images they experience in their mind.

I found this particularly interesting, as it suggests that everyone has the ability to create images when listening to music. I would like to look into this further.

I would also like to look into possibilities for the future. Will there be a time when we are able to see the images created by a person's mind? Are the images we see in our heads actually translatable?

Practical project.

I am in a similar position with my practical project: I have yet to come up with a definitive idea.

I think that so far I have been trying to combine too many ideas or functions into one project.

User-created visual music, emotive responses and the interaction between body and system are all things I am trying to include, and I am not sure they will all work together. On reflection, I think my ideas so far are too complicated and not instantly usable by an audience.

I have been experimenting with various technologies in the hope of giving myself some idea of possible interactions, spaces, sounds and visuals.
Most of these experiments have been successful, but in a way they haven't helped, because they have given me many more options and opened new avenues to explore.

Experimentation.
Using the ‘pod’ screens as back-projection screens, I used a webcam and ‘Isadora’. I played around with many different effects, images and user-created visuals. All were fun and visually appealing, but didn't really have any effect or create anything particularly immersive.

Over the weekend I managed to get more control and interactivity into the Isadora visuals. I used Reactivision to input data from the webcam, then created a Max/MSP patch that splits the data up into separate objects for multiple controls, scales the numbers, then outputs the new values to a port on my network. Finally, I set Isadora to input data from this port.
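The heart of that patch is really just an unpack followed by Max's [scale] object on each value. A rough Python equivalent of what it does per fiducial update (the OSC addresses and the output ranges here are invented for illustration):

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear rescale from one range to another, like Max/MSP's [scale]
    object (no clamping)."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def split_fiducial(fid_id, x, y, angle):
    """Break one Reactivision fiducial update into separate, rescaled
    control messages, one address per parameter (addresses hypothetical)."""
    return [
        (f"/fiducial/{fid_id}/x", scale(x, 0.0, 1.0, 0.0, 127.0)),
        (f"/fiducial/{fid_id}/y", scale(y, 0.0, 1.0, 0.0, 127.0)),
        (f"/fiducial/{fid_id}/angle", scale(angle, 0.0, 6.2832, 0.0, 360.0)),
    ]
```

Each of those (address, value) pairs then goes out to the network port that Isadora listens on.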

I now have the problem of using all of these elements to create something simple but effective.

Playing with the pods (low quality phone photo)





Max/MSP Patch

Reactivision

Tuesday 10 July 2007

MA Project ideas and testing

My original idea was to create an instrument that would be controlled by users' movements in a space. The instrument would create sounds and corresponding ambient imagery.

In the discussion last week it became clear that it would be difficult to do something that hadn't been done before, and also that the installations of this nature with the biggest effect are those that engage the user on an emotional level as well as a cerebral one. This is something I have found from both personal experience and previous research, so I fully agreed. I still want to incorporate some of my original ideas: incorporating the body within a space, using both sound and imagery to create a more immersive environment, and hopefully exploring the relationship between user perception and the two forms.

Since the discussion last week I have been thinking about how to gain a more emotional engagement with users. I have been researching previous works on the net, most of which can be found in my delicious links below.

After lots of brainstorming, sketching and daydreaming, I have developed two ideas.

The first:

A single-screen projection that places moving images onto the on-screen presence of the user. The closer the user gets, or the more surface the user creates, the more of the image will be shown, allowing them to explore the image using their body.

The images will be quite ambiguous, the main idea being that the sound content will react to the user's movements. The more of the picture they reveal, the more layers of the soundscape will be heard. As more layers come in, the soundtrack will change in mood and, in turn, hopefully change the user's perception of the image.
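As a sketch of the reveal-to-sound mapping I have in mind (four layers and a linear mapping are just assumptions at this point, not anything I have built):

```python
def audible_layers(coverage: float, total: int = 4) -> int:
    """Map the fraction of the image revealed (0.0-1.0) to how many
    soundscape layers should be playing."""
    coverage = max(0.0, min(1.0, coverage))
    return round(coverage * total)

def layer_gains(coverage: float, total: int = 4) -> list:
    """Per-layer gains, so each layer fades in over its own slice of the
    coverage range rather than snapping on abruptly."""
    coverage = max(0.0, min(1.0, coverage))
    return [max(0.0, min(1.0, coverage * total - i)) for i in range(total)]
```

The gain version is probably closer to what I want, since layers cross-fading in should feel smoother than layers switching on.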

The images will change over time, so I can experiment with different moods and relationships between the sound and imagery. I think the more ambiguous the imagery, the more the user can ‘project’ onto it and bring their own imagination to it, hopefully influenced by the soundtrack.

I have done some preliminary testing using a bed sheet, an iSight camera and the real-time video manipulation software ‘Isadora’. I found it very playful, and with the right images it could be a nice experience in itself.

Here are some pictures.





The Second:

A more personal approach, based on my own emotional responses to certain recent events.

A dual-screen projection, on opposite sides of a small dark room.
The basic idea is to use the same image on both screens and use music to create a different perception of the moving image.
As the user turns around to face a screen, the music changes accordingly.

Some of the things I want to reflect are the feeling of isolation, the inability to change a developing course, and a sudden change of everything you thought you knew.

An image I have in mind is a tunnel: a looping clip going further and further in, with a distant light at the end.
The music will hopefully convey whether you are going further in or coming back out.




I am then thinking that if the user stays looking at one screen for long enough, an ending will play out. A rough idea:
The bad ending: the tunnel fills up with water or starts to crumble; the screen behind them also changes, hopefully making the user feel trapped, while the music gets more chilling and louder towards a climax, at which point the room suddenly goes dark.
The happy ending: the tunnel reaches its end, both screens fill with brightly coloured, euphoric imagery, and the music gets louder, as euphoric and colourful as the imagery.
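To make the dwell idea concrete, here is a minimal Python sketch of the logic (the threshold and which screen triggers which ending are placeholder guesses; I haven't decided either yet):

```python
class DwellTrigger:
    """Track how long the user has faced one screen and fire an ending
    once they dwell past a threshold."""

    def __init__(self, threshold: float = 30.0):
        self.threshold = threshold  # seconds; an assumption
        self.screen = None          # which screen is currently faced
        self.elapsed = 0.0          # seconds spent facing it

    def update(self, facing: str, dt: float):
        """Call once per frame with the screen being faced ('in' or 'out')
        and the frame time in seconds. Returns the ending to play, or None."""
        if facing != self.screen:
            # Turning to the other screen resets the dwell timer.
            self.screen, self.elapsed = facing, 0.0
        else:
            self.elapsed += dt
        if self.elapsed >= self.threshold:
            # Placeholder mapping: facing deeper into the tunnel goes bad.
            return "bad" if facing == "in" else "happy"
        return None
```

Something like this would sit between the head/orientation tracking and whatever plays the ending clips.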

I like both ideas, or at least think they have potential, but I need some guidance in the tutorial tomorrow. I think the first idea has a more playful nature and would enable me to explore the relationship between sound, imagery and user perception. The second idea, if done well, might gain more of an emotional response. I understand that they are both probably single-user experiences, but I think that in itself works with each concept.

Any thoughts or comments, please let me know.

Delicious page:

http://del.icio.us/tom.newell