Cinematic VR Challenge #5: Adventures in 360 Video, Motion Capture, and Teleportation



SIXR (Simulated Immersive Experimental Realities) and many wonderful sponsors organize quarterly opportunities for diverse creators to network, receive mentorship, and gain experience with 360 video and virtual reality through hands-on collaboration and learning.

In late September, virtual reality and 360 video creators flocked to Capitol Hill for SIXR’s fifth Cinematic VR Challenge. Attendees split into teams and created 360/VR video experiences in response to the theme “Free To Be Yourself” in just 48 hours.


Motion Capture and Teleportation in Cinematic VR

Team Beautiful built Dance of Eva using Unity’s 360 Panorama Capture and MocapNow’s motion capture tools. In the virtual world, viewers watch the movement of a dancer, captured at MocapNow. The figure dances inside a 360° space decorated with Eva Armisen’s art while Dawna Markova’s poem I Will Not Die an Unlived Life plays. Markova’s poetry foreshadows the continued success of future VR creators: she vows “to live so that which came to me as seed / goes to the next as blossom / and that which came to me as blossom, / goes on as fruit.”

We talked to Team Beautiful about their project to learn more about what it was like to work at MocapNow, the challenges that the team ran into, and the tools they used to get the job done.


The team builds Dance of Eva, 2017.

Pixvana: Who was on your team and what roles did you each play?

Team Beautiful: Debra Bouchegnies provided creative input, worked on the environment, did sound editing and design, and built the credits. Kelly Campelia directed art and choreography, provided environment and sound design support, and project managed. Michael Gourlay did the motion capture as well as model integration and special effects in Unity. Ishita Kapur developed the concept and narrated the piece. She also worked on visual effects. Carolyn Seiden choreographed and danced: her motion was captured for the project. Brad Cerenzia did environmental and asset design—he built the sky, pictures, ground, lights, and painting in Unity.

What was the concept behind the project?

Brad Cerenzia: We pitched over a dozen ideas on Friday night and then networked to see which ideas could work well together. Debra and I had talked previously about doing a project that integrates an artist's work as a 3D model, Carolyn pitched an idea of 3D models dancing and interacting with their environment, and the poetry came from Kelly.

Debra Bouchegnies: As synchronicity would have it, we quickly discovered that many of the artist’s images depicted lines from the poem! We had also intended to wrap the art as a texture onto the 3D dancing model, but due to some bugs with the motion capture file we were unable to deliver the model in time and ran out of runway. But the resulting model, enhanced with visual effects, is sweet!

What tools did you use?

Debra Bouchegnies: We used motion capture at MocapNow studios, Unity, and Final Cut Pro X for audio editing, and we turned to the Unity Asset Store and Pond5 for asset search and acquisition.

What was it like to work at MocapNow?

Brad Cerenzia: The team there was amazing. They held a pre-cap meeting a few days before the hackathon so that we could see the space and learn what we needed to bring on the day of capture to be most successful. From beginning to end, they were great to work with. Watching choreography translated in real time on a computer screen was a highlight of my weekend!

Debra Bouchegnies: Although I come from a film and video background, this was my first experience with VR production, and I was thrilled to have the opportunity to take part in a motion capture session. CJ and Ander at MocapNow were extremely kind and generous with their time, expertise, and facility, which normally goes for at least $8K per day. I also attended their best-practices session earlier in the week, which was awesome. When we hit a snag with corrupted files during production, they patiently and diligently worked through the problem until we had the deliverable file. It’s incredible to see how supportive folks in this community are of each other, and I’m excited to become a part of it.

What were some challenges that your team ran into while working on this project?

Brad Cerenzia: Not knowing enough about what kind of model to bring for the MocapNow team to be most efficient. I owned that part of the project, and some of the decisions I made were educated guesses that only partly paid off. I didn’t know enough about the right kind of rigging for model selection, for example; the first model was over 500MB and another was around 1MB, so there was a lot to learn, quickly. We also tried to do character design with an awesome artist, Jeff Lewis, but that didn’t work out because we changed models and didn’t have the expertise to re-rig the motion capture data onto a new model. The mocap data also became corrupted, so we had to re-run the choreography a couple of times, and our shoot took a lot longer than we anticipated.

Back in development mode, we collaborated in Unity using its built-in versioning tool. For the most part that worked well, at least better than passing around a thumb drive or trying to share a folder on Dropbox. We ran into some conflicts when people edited the same scene, though, and Unity’s conflict-resolution process is not transparent, so that was a challenge! But our willingness to work together to solve problems, and our team spirit, kept us motivated to deliver a piece that celebrates the artist, the dancer, the poet, and love.

Kelly Campelia: As these types of hackathons go, at least half the battle is racing against the clock to create something in less than two days with people we’ve only recently met. We had so many great ideas and contributions, and we were also learning to use unfamiliar tools along the way. Our team worked very well together under pressure and helped each other deliver something we could be proud of, even if in the end it was missing half the amazing things we dreamed up.

Team Feels’ project, The Book of True Feelings, used 360 video to explore how varying visual cues shape the viewer’s perception of an interpersonal encounter. In the experience, an actor describes his relationship to the viewer in emotionally ambiguous terms and in a flat tone (e.g., “I wrote about your impact on my life.”). The same actor appears at 90-degree intervals around the viewer; at each position, he lip-syncs to the track with a different emotional inflection: angry, fearful, joyful, or sorrowful.


Team Feels demos their project, 2017.

Team Fluidity transformed footage from Fluidity 2017, a community arts and wellness retreat, into a virtual reality quest. Guided by a graceful, eloquent avatar, users teleport between events at the gathering, including outside yoga and fire-dancing performances.


Team Fluidity maps out their project, 2017.

Team Life Could Be Better created an infomercial-style advertisement for “The Duplicator.” The viewer experiences a tense and awkward family dinner until the scene freezes and a onesie-clad actor introduces the handy tool. As the 360 video explains, the Duplicator handles all the stressful, anxiety-inducing moments in your life for you so that you’re able to maintain your mental health.


Team Life Could Be Better demos their video to an excited viewer, 2017.

Shout out to organizers Diana Fairbank and Budi Mulyo for facilitating such a lively and impressive weekend of collaboration and learning!


Did you attend the challenge and have a blast working with 360 video? Sign up for our beta to begin managing, encoding, previewing, and distributing immersive 360 video and virtual reality experiences. Learn more about getting started with SPIN Studio.


Posted by Team Pixvana