You haven't seen anything yet.

Our Flavor of VR

Except for games, the sad truth is that there are very few applications for which VR is more than marginally better than it was back in the mid-'90s. Every one of the VR clichés used in demos from Oculus, HTC, Microsoft, Magic Leap and the rest has been recycled over and over since the heyday of Jaron Lanier. There is simply no vision, no original thought, in this seemingly frantic rush for investor money.

Developers, many of whom have squandered their youth cloistered away playing computer games, don't have the life experience to imagine an application that would be meaningful to people who aren't enamored with technology for technology's sake. OK, so maybe we've all spent a bit too much time with a game controller in our hands, but the point still stands: much like riding a roller coaster, the thrill of VR is ephemeral.

Don't get us wrong, we love VR; but like so many trendy things it has been forced out of context and co-opted into a hype-shrouded bubble. Unfortunately, the future of VR has come and gone, and it isn't going to get much better until the visual field gets a whole lot bigger and CG characters get a whole lot smarter. Then, and only then, will you see the full potential of VR - and we'll be the ones behind the curtain, bringing it to life.


Thanks to your tax dollars and the military-industrial complex's need for more accurate real-time targeting, IMU technology took a major leap forward. Packed onto a tiny chip that sits inside nearly everything from the Predator drone's AGM-114 Hellfire missiles to your smartphone, the Inertial Measurement Unit (IMU) combines a gyroscope, an accelerometer and a magnetometer - often fused with GPS - to figure out where it is, which way it is pointing and where it is going. And this is precisely why Spatial Media will work this time.
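The core trick inside these trackers is sensor fusion: the gyroscope is smooth but drifts over time, while the accelerometer reads absolute tilt (from gravity) but is noisy. A minimal sketch of that idea, using a simple complementary filter for a single pitch axis (the function name, axis convention and blend factor here are our own illustration, not any vendor's actual algorithm):

```python
import math

def fuse_pitch(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend a gyro integration (smooth, but drifts) with an
    accelerometer tilt reading (drift-free, but noisy) into one
    pitch estimate, in radians."""
    gyro_pitch = pitch_prev + gyro_rate * dt       # integrate angular rate
    accel_pitch = math.atan2(accel_x, accel_z)     # gravity gives absolute tilt
    # Trust the gyro in the short term, the accelerometer in the long term.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

Real headset trackers use full 3D orientation filters (quaternion-based Kalman or Madgwick-style filters), but the drift-correction principle is the same.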

The Oculus Rift head-mounted display isn't really about Virtual Reality - that's been around since Jaron Lanier coined the phrase back in the early '80s. Oculus simply figured out how to clock the IMU faster, and then Facebook stepped in and acquired them for $2 billion because they understand where this is all going. Motion-to-photon latency is the delay between moving your head and seeing the result of that movement on the display. For Oculus, the magic number is somewhere under 20 milliseconds, and even though you can't consciously perceive the difference, for the very first time VR didn't make you puke.
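To see why that number is hard to hit, it helps to treat motion-to-photon latency as a budget summed across the pipeline. The stage names and millisecond figures below are illustrative assumptions, not Oculus's actual measurements:

```python
# Rough motion-to-photon budget (hypothetical numbers for illustration).
budget_ms = {
    "imu_sample": 1.0,   # a 1000 Hz IMU adds ~1 ms of sensing latency
    "fusion":     0.5,   # sensor fusion and head-pose prediction
    "render":     11.1,  # one frame of rendering at 90 Hz
    "scanout":    5.0,   # display scanout and pixel switching
}
total = sum(budget_ms.values())
print(f"motion-to-photon ~ {total:.1f} ms")
```

With numbers like these the whole pipeline lands just under the ~20 ms comfort threshold, which is why shaving even a millisecond off any single stage matters.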


A very large part of linear narrative is wrapped up in the art of framing. The director, cinematographer, actor and editor all conspire to convey the emotional arc of the character and drive it along the central theme of the narrative. The problem with working in a virtual medium is that there is no framing. The viewer is free to look around and will most likely be looking the other way at all the major plot points: you've lost the narrative. This holds true for both VR and panoramic video. The narrative must be re-imagined.


In an attempt to create a virtual environment for pre-visualizing and developing content, we invited the CTO of UNITY (Joachim Ante), the head of the Autodesk MAYA dev team (Bruno Sargent) and the smartest AI guy in the Gaming industry (Dr. Bill Klein) to hang out at the Culver City offices for a few days and help us work on a wild idea. Amazingly they all showed up and the result was a quasi-real-time, VR content generator.

A few more months of coding and tweaking and we finally had an easy-to-use system that combined the power of MAYA (the motion picture industry's most powerful 3D application) with the real-time performance of UNITY (the game industry's most powerful game engine), populated with the smartest CG characters in the world. Cool!

The first thing we did (after getting it patented) was use it on a couple of movie projects around town as a prototyping tool. It was a big hit, and everyone could see that the potential was really unlimited. (see video below)

The basic concept is that you pick some generic characters, dress them, select their AI personas and drop them into a set. You can build a set if you like, but since the system is game-engine-based, just about any environment you can think of is already built. Then all you do is drag and drop the dialog (either slugs from the script or pre-recorded audio clips), give your characters some rudimentary blocking instructions and hit play.

And here's the really cool bit: the cameras all have AI scripts too, so they know when and where to look. The system automatically sets master-shot sequences according to basic movie-industry conventions (Wide Establishing > Master > Tight Reaction > Two Shot > Reverse, etc.). You can watch ("lurk") conventionally on any 2D monitor, watch in VR mode on a headset - or take the POV of any of the characters in the scene.
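The coverage cycle described above can be sketched in a few lines: the camera AI simply steps through the standard shot types in order, wrapping around once the pattern is exhausted. This is our own toy illustration of the convention, not the system's actual camera logic:

```python
from itertools import cycle

# The standard coverage order named in the text above.
COVERAGE = ["wide establishing", "master", "tight reaction",
            "two shot", "reverse"]

def shot_plan(n_beats):
    """Assign a shot type to each story beat by cycling the coverage order."""
    shots = cycle(COVERAGE)
    return [next(shots) for _ in range(n_beats)]

print(shot_plan(7))
```

A real camera AI would of course weight these choices by who is speaking and where the action is, rather than cycling blindly.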

It's plug-and-play with both Oculus and UNITY, so you can have numerous people, all with different vantage points, viewing - and even participating in - the same scene.

You can "possess" a character that is driven by an AI protocol and actually become that character (we all agree we need a better term for this, but it was the first thing that came to mind and it seems to have stuck). If the character has dialogue or action in a scene, you can even override the script and "improvise". Since the major characters are AI-driven, they will adapt and gently guide you back to the central theme of the scene.

Directors, DPs, Producers, Actors, Artists - everyone that has used it has fallen in love with it.

So here, then, is a small taste of our VCG (Virtual Content Generator), or as our friends in production like to call it, "The Prototyper".

visit the site