From the course: Unity: Building VR User Interfaces

VR UI overview

- [Instructor] Now, before we dive into the practical implementation of UI in VR, there are some topics we need to cover. UI in VR is still very new, and we're learning new things about it all the time. The first thing to understand about the challenges of UI in VR is that there is no screen. We're essentially cutting out the middleman and projecting information directly onto the user's eyeballs. That doesn't mean we're going to throw away everything we know about 2D UI. 2D UI is still everywhere in the real world and it's extremely familiar to us; in fact, we've been using it for thousands of years. 2D UI is still currently the best way to consume glyph-based data, that is, reading text. Think about all those 3D games you play: there's still always a 2D menu somewhere.

Now, the challenge with 2D UI is that it has an intended viewing distance. If you play 3D games on a computer monitor, the viewing distance is already sorted out for you. In VR, since we're inside the world, we now have to sort out this issue of viewing distance ourselves. Luckily for us, Google's VR Lab has spent a lot of time on this problem and has come up with the concept of distance-independent millimeters. We'll be using this model throughout the course, and it gives us a good handle on how to design for VR.
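Since we'll be leaning on the distance-independent millimeter (dmm) model throughout the course, here's a minimal sketch of the idea in Unity terms: 1 dmm is the size that 1 millimeter occupies at a distance of 1 meter, so a UI element keeps the same apparent (angular) size as long as its world size scales linearly with its distance from the viewer. The helper below is just an illustration, not code from the course project.

```csharp
using UnityEngine;

// Sketch of Google's distance-independent millimeter (dmm) model.
// 1 dmm = the size 1 mm occupies at a distance of 1 meter, so a UI
// element keeps the same angular size when its world size scales
// linearly with its distance from the viewer.
public static class Dmm
{
    // Convert a design size in dmm to world-space meters for an
    // element placed distanceMeters away from the camera.
    public static float ToMeters(float dmm, float distanceMeters)
    {
        return (dmm / 1000f) * distanceMeters;
    }
}
```

For example, a button designed at 300 dmm would be 0.3 meters wide when placed 1 meter away and 0.6 meters wide at 2 meters, and it would look the same size to the user in both cases.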
There are two types of UI that we're going to cover throughout the course. The first is internal UI. This is like a heads-up display: it's part of the user, it's visible only to the user, and it has user-dependent state. The other type is external UI. This is part of the scene; likewise, it's visible to everyone who uses that scene and has world-dependent state. External UI is also a little trickier in VR, because we can't tell exactly where the user will be viewing it from.

Now, before we move forward, we need to start thinking angularly. The eye has a curved surface, so all the light that flows into the eyeball gets interpreted by that curved lens. On top of that, we have two of these curved surfaces, giving us binocular vision, which compounds the complexity of the problem. Along with binocular vision we have IPD, or interpupillary distance. If you take a look at your Oculus Quest, at the bottom there's a little IPD slider that lets you move the lenses for the best focus for your eyes. We don't have to worry about IPD too much; the SDK and the lenses take care of it for us. But the thing we do have to worry about is the convergence point. This is like the focus point of your eyes. If you take a pen and move it farther from and closer to you, you'll notice that the closer the pen gets, the more work your eyes have to do to keep it in focus.

So, we've identified that it takes physical effort to focus your eyes and move your head, and this brings us to a principle of VR: a good VR experience is comfortable. We want to give maximum return for minimal effort.

Now, over on the Oculus developer portal, in the Learn section on vision, there's a beautiful diagram that shows how we view things and the best viewing angles. If we take a look at the image on the left, we'll see that the binocular field of view portrayed is a lot larger than the small light-blue area in the middle, which is the region for simple recognition. That basically means we're seeing a lot of stuff, we have a lot of information projected into our eyes, but only a small region, roughly a 60-degree cone, is something we can focus on and ingest data from easily.

If we move over to the image on the right, the first thing I want to highlight is that the lower visual field is bigger than the upper visual field, meaning we have more area to project UI information to the user below the standard line of sight than above it. The next point is eye rotation: there's an optimal eye rotation range, which is also roughly a 60-degree cone, similar to the image on the left. Any information displayed above or below this will take more effort from the user, who has to rotate their eyes or their head to consume it. The last point is that the standard line of sight is different from the normal line of sight; there's about a 15-degree difference. Google has also identified the same thing: the optimal place to put UI is not on the standard line of sight, but around 6 degrees below it, or below the horizon line. This gives us a good starting point for how to project our information.

So, from all of this data we've gathered, we're going to put the center of our UI, the information we want the user to consume, around 6 degrees below the horizon line and within about a 60-degree cone. Now that we've got some of that theory under our belt, it's time to make some practical examples.
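As a taste of how that guidance might translate into Unity, here's a rough sketch that positions a world-space panel about 6 degrees below the user's horizontal line of sight at a comfortable distance, facing back toward the user. The class name, field names, and default values are placeholders for illustration, not the course's project code.

```csharp
using UnityEngine;

// Sketch: place a world-space UI panel along the user's forward
// direction, pitched ~6 degrees below the horizon line, at a
// comfortable viewing distance.
public class ComfortablePanelPlacement : MonoBehaviour
{
    public Transform head;            // typically the VR camera
    public Transform panel;           // root of a world-space canvas
    public float distanceMeters = 2f; // comfortable viewing distance
    public float pitchBelowHorizonDegrees = 6f;

    void Start()
    {
        // Flatten the head's forward vector onto the horizon plane.
        Vector3 flatForward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;

        // Rotate that direction downward by ~6 degrees around the head's right axis.
        Vector3 right = Vector3.Cross(Vector3.up, flatForward);
        Vector3 direction = Quaternion.AngleAxis(pitchBelowHorizonDegrees, right) * flatForward;

        // Position the panel and orient its forward axis away from the user,
        // which is how a world-space canvas reads correctly when viewed.
        panel.position = head.position + direction * distanceMeters;
        panel.rotation = Quaternion.LookRotation(panel.position - head.position, Vector3.up);
    }
}
```

Keeping the panel's content within roughly a 60-degree cone around that direction then keeps it inside the comfortable viewing region described above.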
