
Mixed Reality Showdown: Meta 2 vs. HoloLens

Nestor Köhler • Software Developer

It has now been two years since Microsoft released the HoloLens, their Mixed Reality headset. Though there has been both valid criticism and praise (e.g. small field of view and spatial mapping, respectively) for the HoloLens, the discussion has always felt a bit constrained by the fact that there hasn’t been any comparable device to benchmark the HoloLens against.

During these two years there have been two potential competitors on the horizon: the Meta 2 and the Magic Leap.

While the Magic Leap still remains in development, the Meta 2 finally shipped at the end of last year. We received ours here at Futurice at the end of January, and I immediately set out to explore its capabilities and compare it with the HoloLens.

Developing with the Meta 2 has highlighted some very fundamental aspects of the MR user experience that the HoloLens gets right and that are lacking on the side of the Meta 2. This blog post is part one of a two part comparison – the second part being this video comparison that demonstrates the capabilities of each device in practice. The video showcases aspects such as the field of view and spatial mapping capabilities of each device. This post is aimed at providing insight into the experiential side of using each device, and the experience of developing for them. Together these two viewpoints should provide a complete picture of why I see the HoloLens as the device worthy of more interest at this time.

The Form Factor

Let’s begin with one of the fundamental aspects of a headset: How it is worn on the head. The HoloLens and the Meta 2 have two quite different approaches to supporting the weight of the headset. Let’s begin by looking at how the HoloLens does it.

[Image: The HoloLens worn on the head]

As can be seen in the image above, the weight of the HoloLens is supported in two ways: by an adjustable headband and a nose rest. Also shown is how the headband can be rotated according to preference, changing how the weight is distributed.

Of these two, the nose rest is the one with the biggest tradeoffs. On the positive side, the nose rest makes the HoloLens very stable on the head, helps keep the viewable area in a spot that feels very natural to look at, and makes it easy to place it in the same position each time. On the other hand, the nose rest is, both in my own experience and the experience of many others who have tested it, the biggest source of discomfort – more on that a bit later in this post. It can also make it difficult for people with glasses to comfortably use the HoloLens.

[Image: The Meta 2 worn on the head]

The above image illustrates how the Meta 2 is supported by a longitudinal strap on top of the head, an adjustable band that goes around the head, and a pad that rests on the forehead. How comfortable this setup is to the user seems to vary depending on the shape of the person’s head. I have a relatively large head and personally feel like the device wants to slide down my forehead, forcing me to tighten the headband quite a bit. On the other hand, others with smaller heads than mine have commented that it feels quite stable.

I will discuss the display itself in more detail in the next section, but I will mention here that the image quality of the Meta 2 is heavily dependent on the placement of the headset. For a clear image, you are required to place the headset quite far down on your forehead – forcing you to adjust the headset based on image quality first and comfort second. Also, this causes the main viewable area to be placed in a spot where I feel like I’m constantly looking down at things.

A note also has to be made of the Meta 2’s cable. Unlike the HoloLens, the Meta 2 isn’t self-contained and has to be attached to a separate computer. Personally, I find it extremely frustrating. The 9 foot (~2.7 meter) cable is in practice quite short and severely restricts the area within which you can operate. It is also really easy to step on. Together these mean that at any given time, part of your attention has to be on the whereabouts of the cable and on not straying too far from the computer.

I can show you the world

[Image: The HoloLens display optics]

The HoloLens and the Meta 2 use very different technologies for displaying their holographic images. Starting with the HoloLens, it uses two holographic lenses, which are actually waveguides that transport light through the lenses. These are made up of 3 layers, one for each color component (red, green, and blue). The light is then extracted out of the lenses using so-called Surface Relief Gratings, sub-wavelength gratings that bend the light in the correct direction. The full details of how the system works can be found here.

[Image: The Meta 2 display and visor]

The technology used by the Meta 2 is much more straightforward. It has a single display above the visor that points down (see the image above). The image from the display is then reflected to the user’s eyes via the surface of the semi-spherical visor.

One of the most obvious aspects the choice of technology impacts is the field of view (FoV) of the device. This is also one of the major selling points of the Meta 2 compared to the HoloLens. The Meta 2 sports a 90° FoV, compared to the 30° of the HoloLens. For a demonstration of the FoV of each device, please refer to this post’s video companion.

In practice, I didn’t find the larger FoV of the Meta 2 to improve my experience too much, for two main reasons. First, as mentioned earlier the Meta 2 has to be placed quite low on your forehead. This means that the vertical FoV is almost entirely dedicated to seeing things at the same level as your eyes and below. If you want to look at something above eye-level, you will have to crane your neck to tilt your head up. Second, when using the Meta 2, objects are supposed to be within arm’s reach – literally – since you interact with objects directly by grabbing them. The HoloLens, on the other hand, expects objects to be roughly 2 meters away from the user. So the amount of the visible FoV that an object occupies is roughly equal on both devices.

As was mentioned earlier, the quality of the image when using the Meta 2 is heavily dependent on the positioning of the headset, and even small changes in position can cause the image to become blurry and/or distorted. This becomes frustrating after a while, especially as a developer since I have to take the headset off and put it back on several times over the course of a day. The HoloLens in comparison is very forgiving when it comes to placement and has the nose rest to help place the headset correctly.

[Image: Visor comparison between the HoloLens and the Meta 2]

The experience of looking through each device is also quite different. As seen above, the main discontinuity in the FoV of the HoloLens stems from the edges of the lenses. This is mitigated by the fact that the lenses sit so close to your eyes that their edges fall near your peripheral vision. The Meta 2, on the other hand, has a very fragmented feel to the field of view. The darkened area where the images are shown is very prominent, and at the edges, where the visor curves backwards, there is some very distracting refraction going on.

One thing that surprised me when comparing the devices was the difference in the “physicality” of the images they produce, for lack of a better word. To truly understand this difference you have to experience it by looking through one headset and then immediately switching to the other one, but I will do my best to explain it. When looking at a hologram produced by the HoloLens it really looks like it exists in the same space as the physical objects around it; my eyes focus on it just as effortlessly as they do on the real objects around it, and I feel very comfortable giving an estimate of its actual position in the world. With the Meta 2 it feels more like I’m just watching a 3D image on a transparent screen and I have to actively switch between looking at the image and the world around it. When using the Meta 2, I also experience eye strain similar to when I use VR glasses, which amplifies the feeling that I’m not looking at something that really exists.

Going long-term

At this point I have worn both of these devices for extended periods of time, up to 2 hours in a row, on several occasions. And while neither device is particularly pleasant to wear for that long, I still prefer the HoloLens, as the discomfort is only physical. The HoloLens weighs just over 0.5kg, and a decent amount of that weight is supported by the nose. This constant pressure on the bridge of the nose can at times cause headaches.

Physically, the main source of discomfort when wearing the Meta 2 is the constant pressure on your forehead. Though unpleasant, it alone is still better than the pressure on your nose from the HoloLens. The real discomfort is caused by a combination of eye strain, the fragmented FoV, as well as the instability of the spatial tracking, which is demonstrated in the companion video. After long development sessions I found myself feeling slightly disoriented, reminiscent of the feeling you get after spending a long time on a boat and getting back on land.

Influencing the virtual

Displaying virtual objects is all well and good, but more often than not, the point is also to interact with them. The Meta 2 and HoloLens have gone with very different ways of doing this. With the Meta 2 you directly grab objects with your hand(s), while with the HoloLens you use a combination of gaze (i.e. the direction your head is pointing) and gestures. For a demonstration of how these interactions work in practice, see the companion video. Here I will focus on one often overlooked aspect: reliability.

Whatever else you may want to say about the gesture-based system used by the HoloLens, it is extremely reliable. Because of this it never feels like you have to fight the device to make it do what you want. The interaction experience is therefore mainly reliant on your ability to make use of the data provided to you by the HoloLens.
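To make the gaze-and-gesture pattern concrete, here is a minimal sketch of tap-to-select as it looked in Unity at the time, using the `GestureRecognizer` API from `UnityEngine.XR.WSA.Input`. The class name and the logging are my own illustration, not part of any official sample:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class TapToSelect : MonoBehaviour
{
    private GestureRecognizer recognizer;

    void Start()
    {
        // Listen only for the air-tap gesture.
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.Tapped += OnTapped;
        recognizer.StartCapturingGestures();
    }

    private void OnTapped(TappedEventArgs args)
    {
        // Gaze is simply the direction the head (camera) is pointing.
        Transform head = Camera.main.transform;
        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit))
        {
            Debug.Log("Tapped on " + hit.collider.name);
        }
    }

    void OnDestroy()
    {
        recognizer.Tapped -= OnTapped;
        recognizer.Dispose();
    }
}
```

Note how little of the logic lives in the recognizer itself: the tap is a single reliable event, and everything else is an ordinary raycast you control, which is exactly why the interaction experience ends up depending mostly on your own code.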

The same cannot be said for the Meta 2. The base interaction is very simple and intuitive. Just stretch out your open hand and close it into a fist when you want to grab something. The problem is that the actual hand tracking is extremely temperamental. Sometimes it loses track of your hand as you make a fist. Sometimes it detects non-existent hands. Sometimes it registers grabs even though your hand is open. Overall it just feels like you have to fight the device for it to cooperate.

Development ecosystem

Having discussed the physical devices, let’s take a look at what it is actually like to develop for each device. To begin with, let’s assume you are interested in starting to develop for one of these devices. What resources are currently available to help you get started?

I want to begin by noting that I do not expect there to really be any third party resources for the Meta 2, simply based on how new it is. Therefore what I will be looking at is the official support provided by the manufacturers – Microsoft for the HoloLens, and Meta for the Meta 2 – and the resources they provide. On this front, too, I do not expect as many resources from the side of Meta, considering the difference in size of the companies and the maturity of the products.

Having said all that, I have to be honest and say I am still extremely disappointed by the resources offered by Meta. Or, more specifically, the imbalance between design and technical resources. To illustrate this, let’s first take a look at the resources offered for the HoloLens.

[Image: Microsoft’s Mixed Reality Academy tutorial listing]

The amount of resources provided by Microsoft is immense. As an example, just going through the Academy, each lesson contains several multi-minute videos guiding you through development, detailed step-by-step lists explaining what to do, and the complete code for the central scripts. And that’s just the basic tutorials that help you get up to speed. Now, let’s look on the side of Meta.

[Image: Meta’s design resource page]

Shown above is the page that contains all Meta’s documents related to designing for the Meta 2. This is actually along the lines of what I would expect – perhaps even exceeding my expectations a little – for a company this size and such a new product. For the most part, the content is actually really interesting, connecting the design guidelines with research done in neuroscience. But design is only one part of development. You also have to implement your designs. So what resources does Meta provide for teaching the actual development?

[Image: Meta’s developer tutorial page]

A single tutorial. And the tutorial page itself doesn’t even contain the actual tutorial, instead linking to this blog post. The tutorial itself is a 10-minute video that contains less subject matter than almost any single tutorial in the Mixed Reality Academy. Beyond that, all you have to go by is the SDK Features section – which doesn’t do much more than list how things work and some best practices – and/or the example scenes provided, which offer help just one step above “read the source code”.

Taking all of this together I get this weird impression that Meta doesn’t seem too interested in developers like me actually developing apps for the Meta 2. This is further compounded when I try to answer the question “Where can I publish my app and find apps done by other people?”

If I google “meta 2 app store” the first result I get is this article promising an upcoming, dedicated store. That was two years ago, and apps are still nowhere to be found. In comparison, googling “hololens app store” gives a direct link to Microsoft’s store that conveniently lists all HoloLens apps.

Moving on to the issue of actually developing applications, the main way of developing for both devices is by using Unity. Both are easy to get started with, though the HoloLens is a bit more streamlined.

The only requirement for starting development for the HoloLens is that Visual Studio is installed. This is required since VS is used to deploy applications to the HoloLens. Beyond that, all you need to do is to change a couple of settings and technically you have a working HoloLens app – albeit one without any functionality.
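For reference, that “couple of settings” amounts to ticking Virtual Reality Supported with the Windows Mixed Reality SDK in the Player Settings and adjusting the main camera. The camera part can be sketched as below (the class name is mine, and the 0.85 m near clip follows Microsoft’s comfort guidance; you can equally make these changes by hand in the editor):

```csharp
using UnityEngine;

public class HoloLensCameraSetup : MonoBehaviour
{
    void Awake()
    {
        Camera cam = Camera.main;

        // On a see-through display, solid black renders as "nothing",
        // so the real world shows through wherever nothing is drawn.
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = new Color(0f, 0f, 0f, 0f);

        // Clipping holograms very close to the eyes is uncomfortable;
        // Microsoft recommends a near clip of roughly 0.85 meters.
        cam.nearClipPlane = 0.85f;
    }
}
```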

For the Meta 2, you first have to download the SDK from Meta’s site. After installing the SDK you can import a Unity package containing all the needed assets. Once that’s done, you simply delete the default camera in the scene and drop in the camera prefab provided by the Meta SDK. At this point you have a working Meta 2 app, but with no functionality other than being able to scan the environment.

One definitive advantage of the Meta 2 is the fact that you can run it directly in the editor, without any special arrangements. This makes iterating much faster and easier. Although it is possible to use Remote to Device for similar debugging with the HoloLens, Remote to Device is not always entirely reliable. And since Remote to Device doesn’t run the application on the HoloLens you still have to deploy it to ensure everything works correctly.

When coding for each device, I far preferred the HoloLens. I found the whole Meta 2 SDK to be a bit messy, especially in comparison with the HoloLens APIs. This comes down to two main factors. First, as mentioned earlier, the lack of tutorials and general technical resources made it hard to form a clear picture of what development patterns to use. The result of this is that I felt like I didn’t have a good picture of all the parts of the system, which ones I should care about, and where to go if I want to customise functionality. Second, support for the HoloLens is built directly into Unity. Therefore functionality is exposed following the same kinds of patterns I’m already used to in Unity. The exposed functionality is also, on average, on a slightly lower level, making it easier for me to build my own system around that functionality. This kind of approach of course places a bit more burden on the developer when starting up a new project, but personally I find it worth it.

Conclusions

Let me now bring together all the things I have discussed to explain why I feel that the HoloLens is the more interesting device at the moment. I will also refer to some of the conclusions I’ve reached in the companion video to paint a complete picture. I strongly suggest watching the video if something seems unclear.

Let’s begin with a very straightforward issue: price. The HoloLens is currently priced at $3000. But that’s it: it contains everything you need. The Meta 2 clocks in at $1995, but that is assuming you already have a computer capable of supporting it. If you need to upgrade your computer then the price can quickly start climbing closer to the same level as the HoloLens.

Let’s move on to the viewing experience. The only real advantage the Meta 2 has over the HoloLens is an improved field of view and more rendering power. Over time, I find that these are of far less importance than hologram stability, convincing mixing with the real world, and the “physicality” discussed earlier in this post. These are what create the illusion of a truly Mixed Reality.

Though I think neither device’s interaction system is adequate, I would still choose the HoloLens over the Meta 2. The need for everything to be close enough to grab with your hand(s) simply limits the possible use cases too much – especially combined with the fact that the cable restricts the area in which you are able to operate. And this is not even mentioning how unreliable the Meta 2’s tracking is.

Speaking of being tethered – to me, this is an absolutely massive downside. Reading articles on the internet, I constantly see people saying they are happy to trade being tethered for having more processing power. I could not disagree more. Most really interesting use cases require you to move around a lot and/or to use the device somewhere other than your workstation.

When it comes to spatial mapping there really is no contest. The HoloLens allows you to map and produce a model of the real world – up to 3 meters away – while running in the background. With the Meta 2, you have to explicitly tell the device to do the same, and it only has a range of up to approximately a meter.
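On the HoloLens side, that background mapping is also nearly free to set up, since Unity ships built-in components for it. A minimal sketch, assuming the Unity version of the time (the component names are Unity’s, the wrapper class is mine):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;

// Attach to an empty GameObject: the components below then scan
// the surroundings continuously in the background.
public class SpatialMapSetup : MonoBehaviour
{
    void Start()
    {
        // Physics colliders that follow the scanned real-world mesh.
        gameObject.AddComponent<SpatialMappingCollider>();

        // Optional wireframe visualisation of the scanned mesh.
        var mappingRenderer = gameObject.AddComponent<SpatialMappingRenderer>();
        mappingRenderer.renderState = SpatialMappingRenderer.RenderState.Visualization;
    }
}
```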

It’s good to also consider how all of these factors reflect on potential future releases. The base for the HoloLens is really strong, with robust spatial tracking capabilities, reliable gesture recognition, and no tether. The main features that need improving are the field of view, the processing power, and the range of gestures and hand tracking.

The Meta 2 on the other hand… There are actually very few things I consider to be at an acceptable level. First of all, it needs to become tetherless. That means finding a way to pack all that hardware into the headset. Then it needs a display that is actually pleasant to look at. Spatial and head tracking need to be improved so that objects aren’t constantly moving around. Hand tracking needs to be reliable enough that it doesn’t feel like I have to fight to do what I want to. And so on and so forth.

So there you have it. The reasons why I think the HoloLens is still the device to beat. My biggest takeaway from this entire experience is a new-found appreciation for how great an achievement the HoloLens actually is. And now I will wait to see if the next hyped up contender, the Magic Leap One, will be able to dethrone the HoloLens. Or maybe the next version of the HoloLens is what I should actually be looking forward to.
