The idea sounds very simple in principle: you strap on a headset, say the magic words (“Abracadabra,” “Open sesame,” or whatever) and instead of being in that pokey, dingy, stark or crowded office or that untidy home office you were in a minute ago, you are now inside a luxury office with a panoramic view of New York City.
Yes, just a minute ago, you were Joe Public, crammed into a small office with three colleagues, or looking out onto a garbage-strewn backyard. Now you are Gordon Gekko looking down on Wall Street or Harrison Ford (in Working Girl) looking out across a panoramic view of Manhattan.
Once the dream of science fiction, we now have headsets with such high resolution that they can show you the office, the panoramic view beyond the window and one or more computer screens on which you can work. Or if you prefer, you can take your office with you to that sun-drenched beach. What is more, compared to the cost of downtown office real estate, it’s actually quite cheap. Even the most expensive VR headset costs less than a month’s rent per worker in Manhattan or London or Tokyo.
And with live streaming video feeds, you can plug an up-to-date view of the cityscape or the ocean into the background from pairs of 3D cameras mounted at appropriate locations the world over. Not a still picture, you understand. Not even a pre-recorded video looping through over and over again like in Trevanian’s Loo Sanction. No, a real time view of what you would see if you were actually there.
So why aren’t we all doing it already?
Some people would say that the reason is that the background would be too distracting. But that can’t be the answer. After all, some people do work in offices in big cities with panoramic views. And some of those offices are comfortable – even luxurious. And there are people who take their laptops to the beach and actually work. I’ve done it myself. I once wrote a thriller (under a pen-name) while sitting on a promontory sticking out fifty yards into the Dead Sea, listening to quiet music.
So it’s not the distractions that are the problem. Rather, the problem is best expressed in a recent article in the Guardian by Alex Hern (“I tried to work all day in a VR headset and it was horrible”):
It’s surprisingly hard to find and use the mouse and keyboard. You probably think you can touch-type. I certainly did… But it turns out that there’s a difference between being able to type without looking at the keyboard and being able to type without being able to see your own hands, even in your peripheral vision.
He goes on to describe the experience of “banging around the desk trying to find where I’d left my mouse without knocking over my coffee.”
And this is really the only problem. Hern also complained about wearing the headset for too long, but I think that was very much his personal experience and not representative of others. Many gamers wear their headsets for hours. And Aaron Frank, who wrote an article for Motherboard (“I Worked in a VR Office, and It Was Actually Awesome”), reported the opposite:
I wondered about the visual endurance required to stay in VR for such long periods of time, but Bob Berry, CEO of Envelop VR, said that some people in his company code in VR for hours a day without reporting any issues.
So the real problem is the practical one of finding the keyboard and mouse. The obvious solution would be to display a virtual keyboard and mouse in front of the user and to track their hands and fingers as they type. The trouble is that tracking technology is not that accurate… yet!
One solution might be to attach the keyboard to a tracker device and overlay the VR image with a positionally-matched virtual keyboard and a virtual image of the user’s hands (as tracked by the camera). This technology already exists. But it is at best a temporary, stop-gap solution.
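The overlay idea reduces to one operation per frame: read the tracker’s pose and apply it to the virtual keyboard model, so the rendered keyboard sits exactly where the physical one does. Here is a minimal sketch of that transform; the function names and coordinate conventions are invented for illustration, not taken from any shipping tracker API:

```python
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 world transform from a tracker's position (x, y, z)
    and 3x3 rotation matrix."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = position
    return m

def keyboard_world_vertices(model_vertices, tracker_position, tracker_rotation):
    """Place the virtual keyboard mesh at the physical keyboard's tracked pose."""
    transform = pose_to_matrix(tracker_position, tracker_rotation)
    # Homogeneous coordinates: append 1 to each vertex, transform, drop w.
    homogeneous = np.hstack([model_vertices, np.ones((len(model_vertices), 1))])
    return (homogeneous @ transform.T)[:, :3]

# Example: keyboard tracked 0.4 m in front of and 0.2 m below the user's eyes.
verts = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0]])  # two corner points of the mesh
placed = keyboard_world_vertices(verts, [0.0, -0.2, -0.4], np.eye(3))
```

As long as the tracker pose is re-read every frame, the virtual keyboard follows the physical one, which is the whole trick: your eyes see a keyboard where your hands feel one.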
However, two new technologies are making their way from the lab to the market and should be with us very shortly.
The first is accurate finger tracking. Qualcomm’s Spectra Module program has produced a Computer Vision kit and a Premium Vision kit. These can be used to carry out passive and active depth sensing. Active depth sensing involves firing pulses of infrared light and capturing their reflection off a surface with an IR camera. The module projects over 10,000 depth points and can discern positions as little as 0.125 mm apart. Qualcomm has used this system to accurately track a pianist’s hands as he played the piano.
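The underlying principle of active depth sensing is easy to sketch: the distance to a surface is half the round-trip time of the IR pulse multiplied by the speed of light. The toy function below is an illustration of that arithmetic only, not Qualcomm’s actual pipeline:

```python
# Speed of light expressed in millimetres per nanosecond.
SPEED_OF_LIGHT_MM_PER_NS = 299.792458

def depth_from_round_trip(delay_ns: float) -> float:
    """Return the distance in millimetres implied by an IR pulse's
    round-trip delay (there and back, hence the division by two)."""
    return delay_ns * SPEED_OF_LIGHT_MM_PER_NS / 2

# A surface at roughly arm's length (~1 m) returns the pulse in ~6.67 ns.
arm_length = depth_from_round_trip(6.67)  # ≈ 999.8 mm
```

The arithmetic also shows why sub-millimetre resolution is hard: 0.125 mm corresponds to a timing difference well under a picosecond, which is one reason dense projected dot patterns are used rather than raw pulse timing alone.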
And if they can do it with 0.125 mm accuracy, they can surely do it for a computer keyboard too. Moreover, as human fingers emit infrared radiation, they can presumably also achieve accurate results with passive IR.
But there is an even more potent – not to say esoteric – technology just around the corner. An article in Wired under the headline “Brain-machine Interface Isn’t Sci-fi Anymore” described a demonstration given by Thomas Reardon of CTRL-Labs, in which he placed a “terrycloth stretch band with microchips and electrodes woven into the fabric” on each forearm and proceeded to type into a computer without touching the keyboard. He actually surprised the interviewer by starting with the keyboard and then pushing it away, while carrying on with microscopic finger movements. Yet his typing continued to appear on the screen.
Reardon explained that the electrodes in the armband were capturing the electrical signals travelling down his nerves, and the software was interpreting them accurately enough to determine which key he would have been pressing.
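CTRL-Labs has not published its decoder, but the core idea – mapping a window of electrode activity to the key it predicts – can be caricatured with a toy nearest-template classifier. Everything below, from the four-electrode feature vectors to the key labels, is invented purely to show the shape of the problem:

```python
import numpy as np

# Toy "decoder": each key is associated with a template of average electrode
# activations (four electrodes here). A new signal window is matched to the
# nearest template. Real EMG decoding uses many more channels and learned models.
KEY_TEMPLATES = {
    "a": np.array([0.9, 0.1, 0.0, 0.0]),
    "s": np.array([0.1, 0.9, 0.1, 0.0]),
    "d": np.array([0.0, 0.1, 0.9, 0.1]),
}

def decode_key(window: np.ndarray) -> str:
    """Return the key whose template is closest (Euclidean) to the window."""
    return min(KEY_TEMPLATES, key=lambda k: np.linalg.norm(window - KEY_TEMPLATES[k]))

# A noisy window close to the "a" template still decodes as "a".
key = decode_key(np.array([0.8, 0.2, 0.0, 0.1]))
```

Note that the keyboard itself is irrelevant to such a decoder, which is exactly why Reardon could push his away mid-demonstration: the signal is read from the forearm, not from the keys.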
Understandably, Reardon himself described the demonstration as a “mind fuck,” and the interviewer could hardly disagree. The software was so accurate that it even picked up twitches of the fingers.
This technology is not yet on the market, but it is more than looming on the horizon. And with pixel density and resolution getting better, and Microsoft working with hardware partners on its Windows Mixed Reality headsets, we at bestvr.tech confidently predict that the virtual reality office will arrive sometime in 2018.