VR Dizziness: Causes and Solutions

Note: Some of the information about VR in this article was collected from the Internet.

By Long Luo

Foreword


As the VR industry gradually enters the public eye, the VR boom is in the early stage of its outbreak. Looking at several cutting-edge companies in China’s VR field, domestic VR hardware is by no means inferior to international standards; measured against the “international benchmark” of Oculus VR hardware, it is basically on par. This is undoubtedly good news for ordinary domestic users: domestic VR teams offer high cost performance while keeping the hardware technically up to standard.

However, even the most cutting-edge VR products still have a fatal flaw: they cause a strong feeling of dizziness.

Many VR users report that after using VR products for a period of time, they feel discomfort and nausea, and some even vomit. This has become the biggest stumbling block for the advancement of VR, and solving the dizziness problem has become an urgent need.

It is therefore worth understanding VR dizziness more deeply. Below we explain in detail what causes VR dizziness and survey the techniques currently on the market for alleviating it.

1. Why does VR dizziness occur?


The root cause of VR dizziness is that the brain’s perception of movement is out of sync: the (virtual) picture seen by the eyes does not match the (real position) information received from the vestibular system in the inner ear. The burden on the brain increases, resulting in dizziness.

VR dizziness can be divided into dizziness caused by hardware and dizziness caused by software.

1. Hardware dizziness

The dizziness caused by VR hardware mainly comes from a few components: the GPU, the sensors, the display screen, the imaging lenses, and the mechanisms for adjusting interpupillary distance and focal distance.

Solving hardware dizziness is simple in principle: using the best available hardware reduces it as much as possible. Why, then, do consumers still find much VR hardware immature at this stage? The reason is cost: manufacturers trade off component quality to keep the price-performance ratio of VR equipment attractive.

If we try to eliminate dizziness at the hardware level, we need the hardware market to reduce costs on one hand, and a mature industrial supply chain on the other.
  
From the equipment point of view, dizziness is caused by unreasonable hardware and software design, so the solution must also come from both aspects. Major manufacturers have each put forward their own methods. Hardware can play a large role, but it must not be considered in isolation.
  

2. Software dizziness

Software-induced dizziness is a broader concept.

There are several reasons for VR dizziness:

<1>. Game content

The content of many VR games itself causes dizziness. For example, when you ride a “VR roller coaster,” your eyes see violent, high-speed motion, but your vestibular system reports that you are not moving at all. This conflict causes dizziness.

<2>. The difference between the picture and the real world

The VR display exhibits serious picture distortion and a small field of view. Both differ from the real world, and after extended use you will feel dizzy.

<3>. Picture lags behind the action

The latency of the VR hardware causes the picture to fall out of sync with your movements: when you turn your head or move, the picture cannot keep up. In a full-field-of-view display such as VR, this latency is the biggest cause of dizziness, and reducing it is currently the main method of alleviating VR dizziness.

<4>. Different pupillary distance

Because everyone’s interpupillary distance (IPD) is different, for some people the center of the pupil, the center of the lens, and the center of the screen do not lie on one line, which results in ghosting. After a long time, this easily makes people dizzy.

<5>. The depth of field is not synchronized

Unsynchronized depth of field is also one of the causes of dizziness. For example, suppose a table stands in front of you, with a cup placed near you and a doll placed farther away. If you focus on the nearby cup, the distant doll should logically be blurred; in VR, however, the distant doll remains perfectly sharp.
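As a rough illustration of the defocus blur the eye expects in the real world (and does not get from a fixed-focus VR display), the thin-lens model gives the size of the blur circle. All numbers below are illustrative assumptions, not figures from the article:

```python
# Illustrative thin-lens model: blur-circle diameter for an out-of-focus
# object. Focal length and aperture are rough example values.

def blur_diameter_mm(focus_dist_mm, object_dist_mm, focal_mm, aperture_mm):
    """Diameter of the blur circle for an object at object_dist_mm
    when the lens is focused at focus_dist_mm."""
    # image distances from the thin-lens equation 1/f = 1/s + 1/s'
    img_focus = 1.0 / (1.0 / focal_mm - 1.0 / focus_dist_mm)
    img_obj = 1.0 / (1.0 / focal_mm - 1.0 / object_dist_mm)
    # blur circle grows with defocus at the sensor (retina) plane
    return aperture_mm * abs(img_obj - img_focus) / img_obj

# Eye focused on a cup 500 mm away; a doll 3000 mm away should look blurred.
near = blur_diameter_mm(500, 500, focal_mm=17, aperture_mm=4)   # in focus
far = blur_diameter_mm(500, 3000, focal_mm=17, aperture_mm=4)   # defocused

print(f"cup blur:  {near:.3f} mm")   # zero: sharp
print(f"doll blur: {far:.3f} mm")    # nonzero: a real eye would see blur
```

A conventional VR headset renders both objects with zero blur, which is exactly the mismatch described above.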

In general, VR equipment does not simulate reality convincingly enough to truly deceive the brain; the confused brain is overwhelmed, and dizziness results.

2. How can VR dizziness be solved?


The previous section explained the causes of VR dizziness; this section describes in detail the current technologies for addressing it.

1. Low-latency technology

When purchasing virtual reality equipment, an important indicator is the latency from turning the head to the screen updating.

Picture latency depends to a large extent on the refresh rate of the display; the most advanced virtual reality devices at the time of writing offer a 75Hz refresh rate. Studies have shown that the delay between head movement and the corresponding change in the visual field cannot exceed 20ms, or dizziness will occur.

The 20ms budget is a very big challenge for VR headsets. First, the equipment needs a sufficiently accurate way to determine the speed, angle, and distance of head rotation. This can be achieved with an inertial gyroscope (responsive but less accurate) or with optical methods. Then the computer must render the picture in time and the display must present it in time, all within 20ms. Correspondingly, if each frame appears more than 20ms after the previous one, the human eye will also perceive the delay. The frame rate of VR headsets should therefore exceed 50FPS, with 60FPS the current benchmark, and for better results it should keep increasing: the Oculus Rift CV1 and HTC Vive use a 90Hz refresh rate, while Sony’s Project Morpheus uses 120Hz.
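The relationship between refresh rate and the 20ms budget is simple arithmetic; a quick sketch:

```python
# Display frame time at common VR refresh rates, compared against the
# ~20 ms motion-to-photon threshold cited above.

THRESHOLD_MS = 20.0
frame_times = {hz: 1000.0 / hz for hz in (60, 75, 90, 120)}

for hz, frame_ms in frame_times.items():
    # the panel alone consumes this share of the latency budget each frame
    print(f"{hz:3d} Hz -> {frame_ms:5.2f} ms/frame "
          f"({frame_ms / THRESHOLD_MS:.0%} of the 20 ms budget)")
```

At 75Hz the panel already eats about two thirds of the budget, which is why higher refresh rates matter so much.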

At 75Hz, each frame takes at least 1 second divided by 75, about 13.3 milliseconds; adding the safety margin for the rest of the pipeline gives a typical delay of 19.3ms. Therefore, any claim of a delay below 19.3ms on such hardware is false advertising.

So where does this 19.3ms of delay come from?

It accumulates as follows:

  1. First, from the head turning to the sensor reading the data takes about 1ms. A world-class sensor samples at 1kHz, i.e. one thousand readings per second, so each reading takes 1ms. This is the first 1ms of delay.

  2. The data must then be transmitted to the computer via a microcontroller. Because the interfaces differ (much as an air conditioner’s power plug cannot be inserted into a desk lamp’s socket), some conversion is required, and the microcontroller is responsible for it. Moving data from the sensor to the microcontroller takes about 1ms. Since generating each reading takes 1ms, if a reading is not transmitted to the microcontroller within 1ms, subsequent readings will be discarded.

  3. Next, the microcontroller transmits the data to the PC over USB. The USB link has a very high transmission rate, but the transfer is completely controlled by the Host side (that is, the PC). In other words, if the Host does not collect the data sent by the microcontroller, the data is discarded. With the HID protocol, the Host polls frequently for pending data and stores it in memory, so this step takes under 1ms. At this point the data has reached the PC’s memory and all the hardware steps are complete. Due to limits of data bandwidth, communication protocols, and so on, steps 1–3 together take 3~4ms, which is difficult to reduce.

  4. After the hardware transmission is complete, software algorithm processing begins.
    Because the analog signal itself contains noise and drift, the data still carries plenty of both after conversion to digital. Complex digital signal processing is therefore needed to filter them out, turning the sensor’s 9-axis data into the head-rotation quaternion required for rendering. Processing this data generally takes under 1ms. During rendering, multiplying this rotation quaternion by the camera’s coordinates yields the viewing direction used to render the scene. A special algorithm (such as Time-warp, currently the fastest) then produces the actually displayed picture from the image obtained in the previous step. Thanks to Time-warp, the latency of rendering the scene can basically be ignored.

  5. After the scene is rendered, it needs anti-distortion, anti-dispersion, and other post-processing. These generally take about 0.5ms of GPU time, but to be safe this is budgeted at 3ms, ensuring the GPU finishes before the next frame must be sent to the display, that is, before the next vertical sync signal arrives.

  6. Then the image is transferred to the display. As mentioned earlier, at 75Hz this takes 13.3ms. Is that the end? No: the display also needs time to show the image. Because an LCD works by a physical process in which liquid crystals are rotated by an electric field, traditional LCD panels need 15-28ms to respond. The latest OLED technology reduces this time to the microsecond level.
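The quaternion step in item 4 above ("multiply the rotation quaternion by the camera’s coordinates") can be sketched in a few lines of plain quaternion math. This is a minimal illustration, not the code of any actual headset SDK:

```python
# Minimal quaternion sketch: rotate the camera's forward vector by the
# head-orientation quaternion (w, x, y, z) produced by sensor fusion.
import math

def q_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    """Rotate vector v by unit quaternion q: v' = q * (0, v) * q_conj."""
    qconj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = q_mul(q_mul(q, (0.0, *v)), qconj)
    return (x, y, z)

# A 90-degree head turn about the vertical (y) axis
half = math.radians(90) / 2
q_head = (math.cos(half), 0.0, math.sin(half), 0.0)

forward = (0.0, 0.0, -1.0)      # camera initially looks down -z
print(rotate(q_head, forward))  # now looks down -x
```

Real engines use an optimized math library for this, but the operation is exactly this rotation applied to the camera each frame.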

Adding these times up: 3ms + 3ms + 13.3ms = 19.3ms. Of course, this is the most ideal case; CPU load, USB packet loss, and other issues can prevent the delay from being this low.
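The addition can be written out explicitly; the stage groupings follow the six steps described above:

```python
# Best-case latency pipeline from the six steps above (75 Hz panel).
stages_ms = {
    "sensor -> MCU -> USB -> PC memory (steps 1-3)": 3.0,
    "fusion + render + anti-distortion (steps 4-5)": 3.0,
    "scan-out at 75 Hz, 1000/75 ms (step 6)":        1000.0 / 75,
}
total = sum(stages_ms.values())

for name, ms in stages_ms.items():
    print(f"{name:47s} {ms:5.1f} ms")
print(f"{'total (best case)':47s} {total:5.1f} ms")
```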

Of course, when Oculus promises to push the delay even lower in the future, it is not boasting. The breakdown above shows that the main bottleneck is the 13.3ms display stage; with certain special techniques it can be cut by half or more. But this requires joint effort from hardware manufacturers, operating systems, and game developers.

At present, reducing the latency of VR hardware is the best way to alleviate dizziness: compressing the time needed at every stage reduces picture delay and attacks the problem at the hardware level first. Beyond that, just as with earlier 3D motion sickness, users can adapt over a period of time, letting VR become a truly usable virtual reality product.

2. Add a Virtual Reference Object

Researchers from Purdue University’s School of Computer Graphics Technology found that a virtual nose can relieve dizziness and related problems. They tested 41 participants in various virtual scenarios, some with a virtual nose and some without. It turned out that participants with a nose remained comfortable for longer.

The researchers said the effect may come from people’s need for a fixed visual reference, so adding, say, a car dashboard to the VR scene could produce the same effect.

So, does adding a nose hurt the experience of the virtual world? Don’t worry: in the test, none of the participants even noticed the nose was there! They were so absorbed in the games that when told afterwards they had been given a nose, they could hardly believe it. In fact, in real life we can see our own nose whenever we pay attention to it; we are simply used to it and no longer notice. Our sensory system, however, still registers its presence.
The researchers also shared their findings with John Carmack, chief technology officer of Oculus VR, who said he had never heard of anything like it and would have to study it carefully.

3. Galvanic Vestibular Stimulation

A company abroad called vMocion intends to solve this problem using technology that the Mayo Clinic’s aerospace medicine and vestibular research laboratory spent more than 10 years developing. The technique, called galvanic vestibular stimulation (GVS), places electrodes at strategic locations (two behind the ears, one on the forehead, and one on the nape) to track the motion perceived by the user’s inner ear, and converts the movement of the visual field into synchronized GVS commands that stimulate a sensation of three-dimensional motion. If it works, users could immerse themselves fully in the virtual environment and genuinely feel the spacecraft they are piloting dive or turn.

4. Adjust the distance between the lenses

The original “phone box” headsets did not really consider this point; designers thought more about how to accommodate users with different degrees of myopia (some boxes did not even consider that). Samsung was the first to use a pulley design for adjusting the lens separation, letting users freely adjust the distance between the two lenses. Other designs let the screen’s center point be adjusted through a Bluetooth controller or similar. The goal is to keep the center of the picture, the center of the lens, and the center of the eye on one line, avoiding ghosting and dizziness.
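The geometry behind the mismatch is simple: with a fixed lens separation, each eye of a user with a different IPD sits off the lens axis by half the difference. A small sketch with illustrative numbers (the 63mm figure is an assumption, not a quoted headset spec):

```python
# Illustrative: how far each eye sits from its lens axis when a headset's
# fixed lens separation does not match the user's IPD.

def per_eye_offset_mm(user_ipd_mm, lens_separation_mm):
    """Lateral distance between each pupil center and its lens center."""
    return abs(user_ipd_mm - lens_separation_mm) / 2.0

FIXED_LENSES_MM = 63.0   # assumed fixed lens separation
for ipd in (58.0, 63.0, 68.0):
    off = per_eye_offset_mm(ipd, FIXED_LENSES_MM)
    print(f"IPD {ipd} mm -> each eye {off:.1f} mm off the lens axis")
```

An adjustable lens separation (or a software-adjustable screen center) drives this offset to zero, which is exactly what the designs above aim for.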

5. Light Field Photography

A light-field snapshot can be refocused, re-exposed, and even have its depth of field adjusted after the picture is taken. A light-field camera records not only the total light falling on each photosensitive element but also the intensity and direction of each incoming ray. With this information you can generate not just one image, but every possible image within the camera’s field of view at that moment. For example, photographers often focus on a face and deliberately blur the background; someone else may want a blurred face with a sharp background. With light-field photography, you can obtain either effect from the same photo.
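The classic “shift-and-add” method conveys how refocusing after capture works: shift each sub-aperture view in proportion to its position in the aperture, then average. This is a toy sketch of the idea, not how any production light-field camera is implemented:

```python
# Toy "shift-and-add" refocus: average the sub-aperture views of a light
# field after shifting each view in proportion to its aperture position.
# A 1-D light field with two views is enough to show the idea.

def refocus(views, positions, slope):
    """views: equal-length 1-D images (lists of numbers);
    positions: aperture coordinate u of each view;
    slope: refocus parameter -- each view is shifted by slope * u."""
    width = len(views[0])
    out = [0.0] * width
    for img, u in zip(views, positions):
        shift = round(slope * u)
        for x in range(width):
            out[x] += img[(x - shift) % width]  # circular shift for simplicity
    return [v / len(views) for v in out]

# Two views of one bright point. Nearby objects show parallax between
# views: the point sits at index 2 in the left view, index 4 in the right.
left  = [0, 0, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 0]
views, positions = [left, right], [-1, +1]

print(refocus(views, positions, slope=0))   # "far" focus: the point stays split (blurred)
print(refocus(views, positions, slope=-1))  # shift matches the parallax: the peaks align (sharp)
```

Choosing the slope chooses the focal plane: objects whose parallax matches the shift come out sharp, everything else is averaged into blur.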

Magic Leap is currently the leader in this area. Rather than displaying a picture on a screen, Magic Leap projects an entire digital light field directly onto the user’s retina, so the user can freely choose where to focus according to the eye’s natural focusing habits, accurately blending the virtual with the real and reproducing the visual behavior of the human eye. Issues such as refresh rate and resolution no longer even arise: Magic Leap attacks the whole problem from a different direction with a different method.

3. The actions of mainstream vendors


Judging by the actions of mainstream vendors such as Oculus and HTC, whether they start from hardware, from software, or from a combination of the two, they are mainly trying to solve these five problems:

1. Increase the refresh rate
2. Increase the resolution
3. Reduce picture distortion
4. Optimize head-movement tracking
5. Optimize content

In terms of software, image algorithms need to be improved.

Let’s look at what the current mainstream manufacturers are doing, including VR headset makers and upstream and downstream industry-chain vendors:
  
Oculus uses aspheric lenses to reduce image distortion. The first-generation product’s resolution was 720p; the second generation upgraded to 1080p. Aspheric lenses make it hard to adapt the video image and eliminate visual errors, which in turn makes the design difficult for similar manufacturers to imitate.

Oculus has also taken a series of steps to reduce latency, trying to compress the entire pipeline from user input to completion of the new image to within about 30ms. Software algorithms are also used to correct distortion. Oculus’s research focus has now shifted from latency reduction to tracking accuracy, image quality, and resolution. With the Rift going on sale in March of this year, we will see its latest achievements in eliminating dizziness.
 
HTC Vive reduces image latency by raising the refresh rate to 90 frames per second (though still not as good as Oculus, by this account), and uses a more sophisticated sensor design to achieve more accurate head tracking.

Nvidia, as a chip maker, is also eager to help VR manufacturers: together with Epic Games, developer of the Infinity Blade series of iOS action games, it created the “GameWorks VR” technology. GameWorks VR uses “Multi-Resolution Shading (MRS)” to render the center of the player’s view at higher resolution and the peripheral areas at lower resolution. This technique can raise the game’s refresh rate substantially, by about 50%.
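The arithmetic behind such a speed-up can be sketched: shade the central region at full resolution and the periphery at a reduced scale, then count the shading work saved. The region size and scale factor below are illustrative assumptions, not Nvidia’s actual parameters:

```python
# Rough pixel-shading arithmetic behind multi-resolution shading.

def shaded_fraction(center_frac, edge_scale):
    """center_frac: fraction of screen area kept at full resolution;
    edge_scale: linear resolution scale on the rest (0.5 = half res,
    i.e. 0.25x the pixels)."""
    periphery = 1.0 - center_frac
    return center_frac + periphery * edge_scale ** 2

# e.g. keep the central 40% of the area at full res, periphery at half res
work = shaded_fraction(0.40, 0.5)
print(f"pixels shaded: {work:.0%} of full resolution")
print(f"throughput gain: {1 / work:.2f}x")
```

With these example numbers, only a little over half the pixels are shaded, which is in the same ballpark as the ~50% improvement claimed above. Because the lenses compress the periphery of the image anyway, the lost edge resolution is barely visible.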

Summary

The dizziness problem is the most important obstacle to the breakout of VR. Time, I believe, will be the master key that solves every problem. In the near future we may experience virtual reality without any dizziness: soaring into the sky, burrowing into the earth, diving into the sea, all while lying at home, truly living every wonderful experience. That would be great.
