By now you’ve probably seen virtual reality at least once. You might even have seen it a thousand times.
But what if you could put it inside your head and let it do its thing?
That’s exactly what researchers at Carnegie Mellon University are trying to do.
A team of researchers is working on what is essentially a VR headset that would immerse you in the virtual world as you watch it. The result is a virtual version of the world in which your brain actually works.
“It is like having a virtual reality camera in your head that you can actually look at,” says Paul E. Kocher, an associate professor of computer science at CMU and one of the researchers involved in the project.
“You can actually move your head around.”
The project is called EyeScope, and it’s being developed by a team led by Kocher.
It uses a pair of 3D-printed eyes that connect to the head-mounted displays inside the headset.
“We’re using a pair of optical fibers that have the same curvature as the eyes,” Kocher tells me.
The fiber is a thin layer of material that’s sandwiched between the glass lens and the headset’s front side.
“If you put a piece of fiber between the lens and the headset, the light emitted from the fiber travels through it and reaches the headset,” Kocher tells me, describing how the fibers work.
“So that fiber will have an electric field that is perpendicular to the fiber.”
The researchers also use a laser to project the image onto the fiber and, when the fiber is about 100 nanometers thick, they can see the virtual image with a resolution of about 300 centimeters.
EyeScope would also allow users to look down into the headset, something the team says it hopes to do soon.
They have yet to figure out how to make the virtual reality headset smaller.
They also need to figure out how to connect the headset to the computer.
“At the moment, the way the computer can do that is by using an optical fiber to create a mirror image of the image,” Kocher tells me when I ask him about it.
“That’s a bit of a technical problem because we’re not sure how we can solve that.”
But it’s a problem that could be solved, he says.
“Right now, you can only see the image through the headset, because the fiber has to be as thin as possible.
So the fiber needs to be a bit thicker to actually let the computer see the images.”
The optical fibers have to be so thin that they can’t block the light that hits them, Kocher says.
The researchers are also working on software that will allow the headset and the computer to communicate.
That’s where the real magic happens.
The computer’s vision system uses a bunch of different technologies.
Some of these include infrared, which makes the world look red; a depth camera that lets you see inside objects; and a depth map that shows where you’re looking from inside the headset while you’re using it.
“When you see something through the eyes, you have this sense of distance that you’re actually seeing in the world,” Kocher tells me.
“And that’s because there’s a lot of infrared light bouncing off the world and it hits the camera.
It’s a reflection of the light.”
Kocher and his team developed software to detect infrared light and map it onto the pixels of the virtual-reality headset.
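The article doesn’t show the team’s code, but the general idea of detecting infrared intensity and projecting it onto display pixels can be sketched roughly as follows. The function name, the nearest-neighbor resampling, and the detection threshold are all hypothetical, not taken from the EyeScope project:

```python
import numpy as np

def map_ir_to_display(ir_frame, display_shape, threshold=0.2):
    """Map a normalized infrared intensity frame onto display pixels.

    ir_frame: 2D array of IR intensities in [0, 1] from the camera.
    display_shape: (height, width) of the headset display.
    threshold: intensities below this are treated as background.
    """
    h, w = display_shape
    # Nearest-neighbor resample the IR frame to the display resolution.
    ys = np.arange(h) * ir_frame.shape[0] // h
    xs = np.arange(w) * ir_frame.shape[1] // w
    resampled = ir_frame[np.ix_(ys, xs)]
    # Suppress weak background reflections below the threshold.
    return np.where(resampled >= threshold, resampled, 0.0)

# Toy 2x2 IR frame mapped onto a 4x4 "display".
ir = np.array([[0.1, 0.9], [0.8, 0.05]])
out = map_ir_to_display(ir, (4, 4))
```

In a real pipeline the resampling would be calibrated to the camera-to-display geometry; here it is just a uniform upscale.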
But it takes a lot more effort than that to make sure that you have a good picture of where you are in the real world.
“The system is not very accurate.
So it takes some time,” Kocher says.
There’s another reason why it takes so long.
“Some of the pixels on the display have different color intensities.
And we have to figure out whether those different pixel colors correspond to the kinds of colors present in the scene; if that’s the case, we can determine how well the display can reproduce the real-world scene.”
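One simple way to quantify that kind of pixel-versus-scene comparison is a normalized correlation between displayed and reference intensities. This is a hypothetical metric chosen for illustration; the article does not say what the team actually computes:

```python
import numpy as np

def reproduction_score(display_pixels, scene_pixels):
    """Hypothetical fidelity metric: Pearson correlation between the
    intensities the display shows and the intensities of the reference
    scene. 1.0 means the display tracks the scene colors exactly,
    -1.0 means they are inverted.

    Both inputs are (N, 3) arrays of RGB intensities in [0, 1].
    """
    d = np.asarray(display_pixels, dtype=float).ravel()
    s = np.asarray(scene_pixels, dtype=float).ravel()
    d = d - d.mean()
    s = s - s.mean()
    denom = np.sqrt((d * d).sum() * (s * s).sum())
    return float((d * s).sum() / denom) if denom else 0.0

scene = np.array([[0.2, 0.4, 0.6], [0.8, 0.1, 0.3]])
score = reproduction_score(scene, scene)  # identical pixels correlate perfectly
```

Because the metric is scale-invariant, a uniformly dimmed display still scores 1.0; a practical system would likely also compare absolute brightness.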
That’s why, when Kocher’s group developed software that could detect infrared light from a distance of about 20 meters, they ran into problems.
The software they developed was based on “cubic-area” (CA) algorithms.
These algorithms are used in computer vision to determine whether an object is a part of a scene.
The CMU researchers used these algorithms to create software that was able to tell if the images they were seeing were of objects in a 3D scene or whether they were looking at a black-and-white picture.
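The article doesn’t define these “cubic-area” algorithms, so as a loose illustration only, here is a hypothetical discriminator in the same spirit: it decides whether a frame looks like part of a 3D scene rather than a flat black-and-white picture, using color saturation and depth relief. Every name and threshold below is invented:

```python
import numpy as np

def looks_like_3d_scene(rgb, depth, sat_thresh=0.05, depth_thresh=0.1):
    """Hypothetical scene-vs-flat-picture check.

    rgb:   (H, W, 3) floats in [0, 1] from the color camera.
    depth: (H, W) floats, depth-camera readings in meters.

    A flat monochrome picture has near-zero color saturation and
    near-uniform depth; a real 3D scene usually has one or both.
    """
    # Per-pixel chroma spread: 0 for grayscale, up to 1 for pure hues.
    saturation = rgb.max(axis=2) - rgb.min(axis=2)
    colorful = saturation.mean() > sat_thresh
    # Depth variation across the frame indicates real 3D relief.
    has_relief = depth.std() > depth_thresh
    return bool(colorful or has_relief)

# A uniform gray patch at constant depth reads as a flat picture.
rgb_flat = np.full((4, 4, 3), 0.5)
depth_flat = np.full((4, 4), 2.0)
```

Real scene classification in computer vision typically uses learned features rather than two thresholds; this sketch only mirrors the binary decision the article describes.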
But when the images were displayed, the software would tell them the color intensity of the white objects in the image.