Common Table, A Nothing Final

For our Nothing final we (myself, Chris Hall, and Akmyrat Tuyliyev) wanted to explore the idea of communication without the power of speech. When we erase the language barrier by making it a non-factor entirely, what are the other ways in which people communicate? Furthermore, how do those methods, along with the things we may do unconsciously, impact the experience of others in ways we don’t realize?

We wanted to frame this idea by seating people at separate portions of a split table, capturing a live feed of them, and projecting them into each other’s space.

In this completed space they are free to interact with each other however they please, but because we’ve removed the audio, their awareness that certain actions trigger sounds and changes in the other room is limited.

One of the first things we wanted to accomplish was figuring out if the table portion of our illusion was even possible. We taped off quarter circles in two different places and played with camera placement:

We found that it was doable with a slight skew on each feed. We’d also need to compensate for using a quarter-circle table to visually represent a third of the full table, but our later tables were cut to different sizes to help with this.

We also initially wanted to use both gesture and face tracking. We started in Processing using KinectPV2 and had some success with that, effectively using facial expressions and hand tracking as triggers.

But we ended up switching over to Kinectron for usability reasons and had to sideline our face tracking. Instead we tracked different hand states (building on the “clapping” detection from an example and adding “face touching” as well) and placed them in different zones that we created relative to different points on the body.
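The zone idea can be sketched as a small function. This is an illustration, not our exact code: it assumes Kinectron-style joint positions (x/y in screen coordinates, y increasing downward), and the zone names and distance thresholds are made up for the example.

```javascript
// Classify a hand into a zone based on its distance from body joints.
// "hand", "head", and "spineMid" are joint positions with {x, y} fields,
// as you'd pull from a Kinectron skeleton frame. Thresholds are illustrative.
function classifyHand(hand, head, spineMid) {
  // "face touching": hand within a small radius of the head joint
  const dHead = Math.hypot(hand.x - head.x, hand.y - head.y);
  if (dHead < 0.1) return 'faceTouch';

  // raised: hand above the head (smaller y = higher, in screen coordinates)
  if (hand.y < head.y) return 'raised';

  // center: hand hovering near the middle of the torso
  const dSpine = Math.hypot(hand.x - spineMid.x, hand.y - spineMid.y);
  if (dSpine < 0.15) return 'center';

  return 'neutral';
}
```

Because everything is relative to the body rather than to fixed screen regions, the zones follow a visitor as they shift around in their seat.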

One or both hands in a specific zone would trigger a different reaction in Isadora and stop files that had been previously started.
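The start/stop behavior amounts to a tiny state machine: fire a cue when a hand enters a new zone, and stop whatever was previously playing. A minimal sketch, with hypothetical `onStart`/`onStop` callbacks standing in for the messages we actually sent to Isadora:

```javascript
// Remembers the active zone so a cue fires once on entry, not every frame,
// and the previously started cue gets stopped when the zone changes.
function makeTriggerManager(onStart, onStop) {
  let activeZone = null;
  return function update(zone) {
    if (zone === activeZone) return;             // same zone: no re-trigger
    if (activeZone !== null) onStop(activeZone); // stop the previous cue
    if (zone !== null) onStart(zone);            // start the new one
    activeZone = zone;
  };
}
```

Feeding it the classified zone every frame keeps repeated frames in the same zone from restarting a file over and over.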

Getting all of these triggers to move around, however, is where we started to encounter problems. None of us had worked with OSC before (though another library and a room full of residents helped with that), and we were also using Syphon to move our video streams around. We got everything working on one computer, then set up a router to get all four machines (two Mac/PC pairs) on the same network. And everything moved in both directions! For all of five seconds, and then everything broke…
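For anyone else new to OSC: the messages themselves are simple. We leaned on a library rather than rolling our own, but a hand-built encoder shows what actually travels over UDP between the machines. The `/isadora/1` address here is an example, and the format follows the OSC 1.0 spec: a NUL-terminated address padded to a 4-byte boundary, a type-tag string starting with a comma, then big-endian arguments.

```javascript
// Pad a string to a 4-byte boundary with NULs, per the OSC spec.
function padString(s) {
  const len = Math.ceil((s.length + 1) / 4) * 4; // +1 for the terminating NUL
  const buf = Buffer.alloc(len);                 // zero-filled
  buf.write(s, 'ascii');
  return buf;
}

// Encode an OSC message carrying one 32-bit integer argument.
function encodeOscInt(address, value) {
  const arg = Buffer.alloc(4);
  arg.writeInt32BE(value);
  return Buffer.concat([padString(address), padString(',i'), arg]);
}
```

Sending the resulting buffer as a UDP datagram to the port Isadora is listening on is all a trigger message is; the hard part for us was the network, not the protocol.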

But that’s OK! We put a plant on one side to trigger reactions in the other space and to interact with actual visitors. And we definitely have some other kinks to work out as well. We have some lag issues, and getting the communication-through-gesture aspect across is a little more difficult. We also wanted to trigger light changes and were considering using actual sensed objects. Things to work out in the future…