In-Depth Analysis: What's Inside HoloLens?

NikolausD
Before we can even start speculating about what kinds of neat things HoloLens will enable us to do, we need to know how exactly the device works and what its technical capabilities and limitations are. After doing a bit of research, gathering information from Microsoft's official statements, from hands-on reports and from images of the hardware, I realized that we already know quite a lot about it, so here's my summary. If you have any additional information, please comment!

1. HOW ARE THE "HOLOGRAPHIC" IMAGES PRODUCED?

This is probably the most fascinating, and also the most mysterious, part of the HoloLens hardware.

Microsoft has very vividly, but also only very vaguely, described how the virtual images are produced: a so-called "light engine", in which light particles bounce around millions of times, emits light towards two separate lenses (one for each eye), which consist of three layers of glass in three different primary colors (blue, green, red) and have "micro-thin" corrugated grooves on them. The light hits those layers and finally enters the eye at specific angles, intensities and colors, producing an image on the eye's retina.

Some journalists guess that this is basically a form of technology that uses interference to reconstruct light fields, meaning the light enters the eye at the same angles it would if it originated from an actual object in space (in that sense the term "holography" is not even that inappropriate for this technology, since actual holography is based on a similar physical principle).

If that's how HoloLens works, it's significantly different from most 3D systems - truly "beyond screens and pixels", as Microsoft put it. Most 3D head-mounted displays, such as the Oculus Rift, send 2D images from 2D screens into the eyes at an angle that simulates an infinite distance between the image and the eye, with the 3D effect relying solely on the different perspectives portrayed on the 2D images for the left and the right eye, a.k.a. stereoscopy. If HoloLens actually recreates light fields, that would elevate the 3D experience to a whole new level by recreating not only the stereoscopic effect of real-life 3D vision but also the different optical focuses of light originating from different distances. That would make for a much more natural visual experience, and it would explain why those who got to try HoloLens described the virtual objects as almost indistinguishable from real objects.

Try this: hold your finger a few inches from your eyes and look at it. Then look at the wall behind it. You will notice that, as you focus on the wall in the distance, your finger not only doubles but also becomes blurry - that's because the focal point for the finger is different from the one for the wall. If you were viewing the same scene through an Oculus Rift or a 3D TV, the finger would double, but it would stay just as sharp as the rest of the image. That unnatural, ubiquitous sharpness, combined with the unusual sensation of performing convergence without accommodation (adjusting the position of the eyes without adjusting the focus of the eyes' lenses), results in an unnatural visual experience - one that would appear even more jarring when combined with real-world vision in an augmented reality system such as HoloLens: real objects would be out of focus when you focus on virtual objects, and vice versa.
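
To put rough numbers on that convergence/accommodation mismatch, here is a quick back-of-the-envelope calculation; the 64 mm interpupillary distance is just a typical adult value, not a HoloLens spec:

```latex
% Vergence angle \theta and accommodation demand A for an object at distance d:
\theta = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right), \qquad A = \frac{1}{d}\ \text{(diopters)}
% With IPD = 64 mm: a finger at d = 0.1 m gives \theta \approx 35^\circ and A = 10 D,
% while a wall at d = 3 m gives \theta \approx 1.2^\circ and A \approx 0.33 D.
% A stereoscopic display reproduces the change in \theta but keeps A fixed.
```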

Regarding image quality, reports say that the image is clear but not "4K-like", a little grainier than portrayed in the promotional videos. Microsoft mentioned that the lenses are "HD", which is interesting because with other devices such as the Oculus Rift it's not the lenses that determine the resolution but the display. Again, this seems to indicate that HoloLens does not use conventional screen technology.

What do those lenses look like? If you look at images of the device, you can see two somewhat rectangular lenses behind the big dark visor, one for each eye (they basically look like normal frameless glasses). I assume that those are the lenses that produce the image; their size matches the reported field of view. The "light engine" must be located somewhere above those lenses. The big dark visor covering those lenses and almost the whole field of view probably makes the virtual objects appear more opaque by simply reducing the brightness of the real-world image, preventing bright real objects from shining through virtual images (according to testers, the virtual image is not perfectly opaque - I don't think the prototypes used in those demos had a shaded visor, though). The special camera rig that was used during the on-stage demo had some sort of glasses mounted on its front; I guess those served the same purpose as the visor on the headset, though I could be wrong. There are buttons on the side of the device for manual "contrast" adjustment; I think those actually adjust the intensity of the virtual images to different lighting conditions, so the image doesn't look transparent in daylight or too bright in low light.

There are a bunch of other companies besides Microsoft that seem to be working on similar display technologies, one of which (Osterhout Design Group) Microsoft bought patents from last year. I couldn't find any information on how those other systems work, so if you know more about this, please comment!

2. WHAT SENSORS DOES THE DEVICE USE?

The device has 18 sensors built in.

It's safe to assume that some of those sensors are similar to those used in VR headsets and phones (things like accelerometer, gyroscope, magnetometer).

The device uses depth cameras that are based on Kinect technology, but with a wider field of sensing (120x120 degrees) and using "a fraction of the energy used by Kinect". I read somewhere that there are multiple cameras at the front and sides of the device, but on images of the device I can only see two pairs of cameras on the front side (probably two visual cameras and two depth cameras).

Speaking of visual cameras, they can be used to record and share the user's visual experience from a first person perspective (as demonstrated in the Skype application). Since there seem to be two cameras, roughly at interpupillary (eye-to-eye) distance, I wonder if HoloLens is capable of recording stereoscopic photos/videos.

Other sensors that we know of are: at least one microphone, and some sort of sensor that automatically measures the interpupillary distance (and maybe also the position of the eyes relative to the lenses) so the system can render the images appropriately. I wonder how that sensor works, and if it also allows for eye tracking functionality so the system knows where the user is looking.

3. HOW DOES IT TRACK THE USER'S POSITION?

HoloLens provides full positional and rotational tracking without using any external markers or external cameras. I assume that the depth cameras (and/or visual cameras) detect movement relative to static objects surrounding the user, while other sensors such as the accelerometer, gyroscope and magnetometer provide additional information to improve the tracking. Any slight imperfection or lag in the tracking would ruin the merging of the real world and virtual objects, and according to those who used the prototypes, tracking works flawlessly.
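
To illustrate the kind of sensor fusion I mean, here's a minimal complementary-filter sketch - purely a toy example of the general technique, not anything Microsoft has described:

```python
def fuse_yaw(prev_yaw, gyro_rate, dt, camera_yaw, alpha=0.98):
    """Complementary filter: trust the fast-but-drifting gyro short-term,
    and pull the estimate toward the slower, drift-free camera fix."""
    gyro_yaw = prev_yaw + gyro_rate * dt          # dead-reckoned from the gyro
    return alpha * gyro_yaw + (1 - alpha) * camera_yaw

# Toy usage: head turning at 10 deg/s, sampled at 100 Hz.
yaw = 0.0
for step in range(1, 6):
    yaw = fuse_yaw(yaw, gyro_rate=10.0, dt=0.01, camera_yaw=0.1 * step)
```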

4. HOW DOES IT RECOGNIZE THE SPATIAL ENVIRONMENT?

To me this is the second most fascinating thing about this device, besides the visual system - how does it scan the environment so accurately and intelligently that virtual objects fit so perfectly into the real environment? According to hands-on reports, virtual objects can sit perfectly flat on furniture, and the device perfectly recognizes the position and shape of real-world objects that sit "in front of" virtual objects, even as the user moves around, changing the visual perspective rapidly.

I think this is only possible with a combination of very high definition, very low latency depth cameras and software that is intelligent enough to translate the depth information into a complete spatial map and to recognize shapes and objects.
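
The first step of such a pipeline is well understood from Kinect-style devices: back-projecting every depth pixel into a 3D point using the pinhole camera model. Here's a minimal sketch (the intrinsics fx, fy, cx, cy are placeholder values, not HoloLens specs):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)          # shape (h, w, 3)

# A flat wall 2 m away, seen by a hypothetical 640x480 depth camera:
points = depth_to_points(np.full((480, 640), 2.0), fx=500, fy=500, cx=320, cy=240)
```

The surface reconstruction and object recognition would then run on top of point clouds like this.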

5. WHAT KIND OF AUDIO SYSTEM DOES IT HAVE?

There are speakers sitting above both ears. There's a good reason why the device doesn't use earphones: earphones would block out real-world sounds, which is not how augmented reality is supposed to work. It's not clear whether the speakers are traditional ones or ones that transmit sound through the skull (which would prevent other people from hearing what the user is hearing). There's an adjustable red element on each side of the device; I wonder if those might be for the speakers or for other ergonomic adjustments.

The device has "spatial sound", which means that sound is adjusted for the left and the right ear so that sounds appear to originate from certain directions. There are different kinds of 3D sound solutions out there - simple ones only allow for horizontal localization, while more advanced systems can make things sound as if they are above or below you. There's no information yet on what kind of system is used in HoloLens.
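
For horizontal localization, the basic cue is the interaural time difference (ITD): the sound reaches the nearer ear slightly earlier. Here's a toy calculation using the textbook spherical-head approximation - nothing HoloLens-specific:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
HEAD_RADIUS = 0.0875     # m, average adult head (textbook value)

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural
    time difference for a distant source at the given azimuth."""
    az = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))

print(itd_seconds(90))   # ~0.00066 s for a source hard to one side
```

Delaying and attenuating each ear's signal by amounts like these is what makes a sound appear to sit in a particular direction; vertical localization needs the more complex spectral filtering of full HRTFs.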

There are volume buttons on the side of the device.

6. WHAT KIND OF COMPUTER IS DRIVING THE DEVICE?

The device is basically a wearable Windows 10 computer with three "high-end" processors built in - a CPU, a GPU, and a separate processor which, according to Microsoft, handles all the information coming from the various sensors and uses that data to generate information about position, the spatial environment, voice inputs, gestures, and so on. (Even just the fact that this device has three powerful processors inside - besides the futuristic visual system - makes pricing in the Oculus Rift range very unlikely. This device is definitely going to be quite expensive.)

7. WHAT DO WE KNOW ABOUT ERGONOMICS?

According to Microsoft, what was demoed on stage is the actual industrial design and will weigh about 400g.

Heat is vented through the sides of the headset so the device doesn't feel hot on the head.

8. WHAT KINDS OF COOL THINGS ARE PROVEN TO BE POSSIBLE WITH HOLOLENS?

• Putting virtual objects onto real objects (on tables, walls, …)
• Changing the texture, appearance and lighting of real objects
• Making virtual holes in real objects or making real objects disappear
• Putting virtual screens into the real world
• Moving a cursor or virtual objects by head movement
• Initiating actions with hand gestures or voice commands
• Replacing the complete environment with a virtual one
• Integrating real objects into a virtual environment
• Moving a cursor from a real screen into the virtual world using a mouse
• Seeing avatars of other users within a real or virtual environment (body tracking presumably requires external hardware such as Kinect)
• Letting other people annotate your real environment

Sources (I can't post links yet, so please google them):
Microsoft's presentation and on-stage demo from January 21st;
Hands-on reports from Wired, The Verge, Neowin, Ars Technica and Windows Central;
Wired article titled "Microsoft in the age of Satya Nadella";
TechCrunch article on Microsoft's acquisition of patents from the Osterhout Design Group.
 

sl#WP
Well, this is probably the most detailed and comprehensive summary of all. You seem to understand it very well. I want to add to #1, how the image is created. That's pretty much how it works. I say this with certainty because I have been following this for about 10 years. This is totally different from any current 3D system. Actually the objective, I believe, is not the 3D effect - the aim is to eliminate the traditional display. An image made of pixels is never generated. What is generated is the light that your eyes are supposed to see when a real object is in the proper place. The resolution does not depend on any physical rendering device; it only depends on the source of the information (the original image's resolution - the Mars imagery, for example) and the light processing speed. If you have a 4K image, HoloLens should be able to show it at 4K; for a computer-generated image, the resolution is unlimited, provided there is enough computing power. So, in the end, the HPU's power is the limitation.
 

NikolausD
I want to add to #1, how the image is created. That's pretty much how it works. I say this with certainty because I have been following this for about 10 years.
Great! Do you know any online sources that explain this technology in detail, or examples of other implementations of it? I tried to find them but I don't even know what keywords to search for.
 

NikolausD
It truly sounds like they've thought of everything. My only worry is the price - and how many minutes the battery will last.

There's no information on battery life yet, but we can roughly estimate it.

The battery has to power three powerful processors, a bunch of sensors (all of which presumably don't consume much power), speakers, and the "light engine", which is very different from a screen, so we don't really know how power-hungry it is. My rough guess is that the device as a whole uses about as much power as a powerful laptop. The whole device weighs about 400g, so the battery will probably weigh somewhere between 150 and 250g. So unless they're using some novel kind of battery technology, battery life should be significantly shorter than that of an ultrabook during heavy usage.
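
To put a (very rough) number on it - every figure below is an assumption, not a spec:

```python
# Every number here is an assumption, not a HoloLens spec.
ENERGY_DENSITY_WH_PER_KG = 150.0   # typical Li-ion energy density
battery_mass_kg = 0.2              # midpoint of the 150-250 g guess above
power_draw_w = 20.0                # laptop-like load under heavy usage

capacity_wh = ENERGY_DENSITY_WH_PER_KG * battery_mass_kg      # 30 Wh
print(f"~{capacity_wh / power_draw_w:.1f} h of heavy usage")  # ~1.5 h
```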

Having said that, I'm pretty sure that the device can optionally be connected to a power source during usage, so it can be used for extended periods of time sitting/standing near a power outlet or PC (for example at the desk at work, on the couch for gaming and skyping, alone in bed for... well, you know what I mean).
 

JamesDax
I think I read somewhere that the device would support wireless charging. So since you'd be using it mostly around the house/office, 2-3 hours of usage probably won't be a big deal.

Can't copy-paste a link for some reason. Anyway, there is an article on PC World stating that the HoloLens will use the Intel Cherry Trail SoC, which supports wireless charging.
 

sl#WP
Great! Do you know any online sources that explain this technology in detail, or examples of other implementations of it? I tried to find them but I don't even know what keywords to search for.

Not really. I could if I had time to dig for it. There have been articles over the years; most of the time people just let them pass and didn't pay much attention. I paid attention because I knew it was the way to the future. I remember this same thing being demoed at a Microsoft TechFest - the demoer was carrying a laptop in a backpack instead of wearing the device, and the UI was projected directly into the eyes. Four years ago I heard Craig Mundie (MS CSO) say they were working on beaming light directly into people's eyes. I was very excited to hear that, because I knew someone had finally figured it out. There is no way in physics to project an object into thin air. HoloLens operates directly in front of the eyes - forget about the source of the image or the real object; in the end it's the light striking your retinas that makes you see things. As far as I know no one else has done that before; probably no one ever even thought about it. People have mentioned other companies doing things like this or that, but to be honest, those can't even hold a candle to HoloLens.
 

NikolausD
I think I read somewhere that the device would support wireless charging. So since you'd be using it mostly around the house/office, 2-3 hours of usage probably won't be a big deal.
Can't copy-paste a link for some reason. Anyway, there is an article on PC World stating that the HoloLens will use the Intel Cherry Trail SoC, which supports wireless charging.

Thanks for the tip, I found the article - its title is "microsoft-hololens-uses-unreleased-intel-atom-chip".

So it seems like the CPU and GPU in HoloLens are based on Intel Cherry Trail, the Bay Trail successor for tablets and low-end PCs. It's smaller, faster, and has wireless charging and wireless video streaming capabilities.

That could mean that battery life might be better than expected, but we still don't know much about the "HPU". Some are speculating that it generates the image and drives the "light engine", but Microsoft has said specifically that it manages sensor input (position, gestures, voice, etc.), which is certainly a complex process but probably doesn't require that much power either.
 

JoeCogan
I don't know for sure but I think it may be using Virtual Retinal Display (VRD) technology: See en[dot]wikipedia[dot]org[slash]wiki[slash]Virtual_retinal_display

It's being used in the Avegant Glyph glasses: See avegant[dot]com
Here's a review of it (based on the CES 2015 showing): See gizmag[dot]com[slash]eyes-on-avegant-glyph[slash]30660

Sorry I can't post links (yet).
 

NikolausD
I don't know for sure but I think it may be using Virtual Retinal Display (VRD) technology: See en[dot]wikipedia[dot]org[slash]wiki[slash]Virtual_retinal_display

VERY interesting, thanks for the link! Although I'm not sure whether the technical implementation in HoloLens (the whole thing with the lenses, the corrugated grooves and interference) is the same as described in the Wikipedia article, it's very possible that HoloLens is based on the same basic principle as VRD.

This is how I envision the technology behind HoloLens (purely speculative): the "light engine" generates a single beam (or probably multiple beams) of light that runs across the lenses at high speed, covering the whole surface of the lens during every "frame". During that run, the light engine constantly modulates two aspects of the beam: brightness (by modulating the intensity of the beam) and color (probably by somehow hitting the three colored layers of the lens specifically or separately). The angle at which the beam hits the eye is determined by the spot on the lens that the beam is passing through. So during each "frame", light of varying intensity, color and direction enters the pupil, creating an image on the eye's retina.

Does that sound legit? ;)
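
In code, my speculation would look something like this - every name here is hypothetical, it's just to make the idea concrete:

```python
# Purely speculative sketch of the scanned-beam idea; nothing here
# is a real HoloLens interface.

def spot_to_angle(r, c):
    """Hypothetical mapping from a spot on the lens to an exit angle:
    the lens geometry, not the engine, fixes the direction into the eye."""
    return (r * 0.01, c * 0.01)

def emit(angle, color):
    """Stand-in for the physical beam modulator."""
    pass

def render_frame(image):
    """One 'frame': sweep the beam across the whole lens surface,
    modulating color/intensity at each spot it passes through."""
    for r, row in enumerate(image):
        for c, color in enumerate(row):
            emit(spot_to_angle(r, c), color)

render_frame([[(255, 0, 0)] * 4] * 3)   # tiny 3x4 all-red test pattern
```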

The advantages of this sort of system are obvious: no more "pixel" resolution (image quality is determined by the resolution at which the GPU renders the signal and by the speed at which the light beams are modulated); natural focus, just like real-life vision; probably low power consumption; and, obviously, no display that blocks the view.

Disadvantages: Full opacity of the virtual image is hard to achieve, because the light from the real world still hits the eye - the system just adds additional light beams to it. Black colors are basically impossible to depict (there are no "black" light beams).
 

sl#WP
Disadvantages: Full opacity of the virtual image is hard to achieve, because the light from the real world still hits the eye - the system just adds additional light beams to it. Black colors are basically impossible to depict (there are no "black" light beams).

Don't know for sure, but I think the background light can be entirely blacked out. It doesn't make sense otherwise.
 

NikolausD
Don't know for sure, but I think the background light can be entirely blacked out. It doesn't make sense otherwise.
The "background" is the real world, which of course you can't black out. There's no "background light" in the device (there's no screen). If you turn the light source (the "light engine") off, you don't see black - you see the real world behind the transparent lenses.
EDIT: Oh, I get it - you probably mean there's some way the lens can block light from the real world selectively? Yeah, that would be the only solution, but I wonder if it would be technically possible. The lenses certainly can't do that actively, so there would need to be something that MAKES the lens block light. Interesting to speculate about.
 

rhapdog
EDIT: Oh, I get it - you probably mean there's some way the lens can block light from the real world selectively? Yeah, that would be the only solution, but I wonder if it would be technically possible. The lenses certainly can't do that actively, so there would need to be something that MAKES the lens block light. Interesting to speculate about.

Not sure exactly how that would work. I'm no expert on the subject. However, I do know that with a "polarized" lens, if you introduce a second polarization at a 90 degree angle, it will cause that portion to be blacked out. This is why commercial pilots are forbidden from wearing polarized sunglasses in the cockpit, because the windshields are polarized, and turning their head can cause the windshield in the cockpit to go black. If they have figured out how to simulate the second polarization, then it would be possible. Just not sure how, or if it is possible to simulate.
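
For reference, crossed polarizers follow Malus's law (a standard optics result, not anything HoloLens-specific):

```latex
% Intensity transmitted through two polarizers whose axes differ by \theta:
I = I_0 \cos^2\theta
% At \theta = 90^\circ (crossed polarizers) I = 0: the overlap goes black,
% which is exactly the cockpit effect described above.
```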
 

NikolausD
Not sure exactly how that would work. I'm no expert on the subject. However, I do know that with a "polarized" lens, if you introduce a second polarization at a 90 degree angle, it will cause that portion to be blacked out. This is why commercial pilots are forbidden from wearing polarized sunglasses in the cockpit, because the windshields are polarized, and turning their head can cause the windshield in the cockpit to go black. If they have figured out how to simulate the second polarization, then it would be possible. Just not sure how, or if it is possible to simulate.
I think the most elegant solution would have to be some kind of mechanism that allows the optical output of the light engine to trigger the blacking out of parts of the lens, because having a separate system for that would make the whole thing so much more complicated (synching it with the light engine in both time and space, separate GPU-output, ...)

My guess is that HoloLens doesn't have that kind of functionality - otherwise there would be no need for the shaded visor. On the other hand, none of the hands-on reports complained about opacity issues - Paul Thurrott said the room just "disappeared" and was replaced by Mars. So maybe HoloLens does have the ability to selectively block external light somehow.
 

bilzkh
I think I read somewhere that the device would support wireless charging. So since you'd be using it mostly around the house/office, 2-3 hours of usage probably won't be a big deal.

Can't copy-paste a link for some reason. Anyway, there is an article on PC World stating that the HoloLens will use the Intel Cherry Trail SoC, which supports wireless charging.
If Cherry Trail is indeed the SoC powering the HoloLens, then I expect the initial price point to be a lot lower than anticipated. I imagine the major cost drivers will be the HPU, the lens technology and the various cameras. If they can offer the initial version for $499, it would be a big hit.
 

sl#WP
My guess is that HoloLens doesn't have that kind of functionality - otherwise there would be no need for the shaded visor. On the other hand, none of the hands-on reports complained about opacity issues - Paul Thurrott said the room just "disappeared" and was replaced by Mars. So maybe HoloLens does have the ability to selectively block external light somehow.

I never thought about this much, but people who were in the demo say there's no see-through light where you're not supposed to see it (real objects behind the image). See, the magic happens after the first layer of glass: both the real objects' light and the generated light are inputs for the light engine, and they can be modulated separately, which means the final result can be controlled and customized. If they want the real objects' light at 0%, they can do that. Or merged, or overlapped (both ways).
 

NikolausD
I never thought about this much, but people who were in the demo say there's no see-through light where you're not supposed to see it (real objects behind the image). See, the magic happens after the first layer of glass: both the real objects' light and the generated light are inputs for the light engine, and they can be modulated separately, which means the final result can be controlled and customized. If they want the real objects' light at 0%, they can do that. Or merged, or overlapped (both ways).
I never thought of it this way - you're saying the lenses somehow redirect the light coming from outside to the light engine, which then manipulates the light (by adding or subtracting light) and sends it back to the lenses, which project it into the eye? I don't know; there are a few things we know about HoloLens that make this unlikely. The way Microsoft described it, the pathway is: light engine -> lenses -> eyes (not external light sources -> lenses -> light engine -> lenses -> eyes). Also, I don't think those lenses fulfil two separate, very different purposes (redirecting light into the light engine, AND modulating and redirecting light from the light engine to the eyes). Also, with the system you described, the whole image would be artificial, not just the virtual parts of it. If that were the case, I think people who tried HoloLens would have noticed that real-life vision looks somehow different than it normally does (for example, lower-res).
 

JamesDax
I think the most elegant solution would have to be some kind of mechanism that allows the optical output of the light engine to trigger the blacking out of parts of the lens, because having a separate system for that would make the whole thing so much more complicated (synching it with the light engine in both time and space, separate GPU-output, ...)

My guess is that HoloLens doesn't have that kind of functionality - otherwise there would be no need for the shaded visor. On the other hand, none of the hands-on reports complained about opacity issues - Paul Thurrott said the room just "disappeared" and was replaced by Mars. So maybe HoloLens does have the ability to selectively block external light somehow.

In another review/preview I read, the guy said that in the Minecraft demo, where the castle was on the table, part of the table had a hole in it and you could see objects that were under the table. So it looks like they can erase real objects in the room, so to speak.
 

NikolausD
So it looks like they can erase real objects in the room, so to speak.

Well, yeah, HoloLens can definitely make things appear like that, but my theory is that HoloLens simply projects the virtual image (in this case, the image of what's underneath the table) at a brightness higher than the real image (of the table's surface), so that the resulting image on your retina consists mostly of the virtual image and you don't notice the real one. I think that's why HoloLens has that huge dark glass covering the whole field of view - by making the real image darker, it's easier for the virtual image to be significantly brighter than the real image.

I read somewhere (I think it was Paul Thurrott) that you could in fact see the real image shine through the virtual image at some points, especially things like lamps or bright reflections in the real world.
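
Here's a toy numerical version of that brightness-override theory - all luminance values are invented for illustration:

```python
# All luminance values are made up for illustration.
visor_transmission = 0.3    # dark visor lets 30% of real-world light through
real_luminance = 100.0      # a real tabletop, arbitrary units
virtual_luminance = 200.0   # light added by the "light engine"

perceived_real = real_luminance * visor_transmission        # 30
blend = perceived_real + virtual_luminance                  # 230
print(f"virtual share: {virtual_luminance / blend:.0%}")    # ~87%

# A bright lamp at 1000 units still leaks through (300 vs. 200),
# matching the reports of bright real objects shining through.
```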
 
