Video: NVIDIA Pascal VRWorks Virtual Reality

Ali Güngör, Editor-in-Chief

Nvidia presented VRWorks Virtual Reality at Pascal Editor's Day. The talk opens with the computing challenges of simulating reality (graphics, audio, touch, and physical simulation) and Nvidia's solutions to each.

This video was recorded in Austin, Texas, USA, at Nvidia's global presentation of the GeForce GTX 1080 and GTX 1070.

Now that the agreed NDA date has passed, we are making the recording publicly available to all technology enthusiasts. This series covers a great many details of the new Nvidia Pascal architecture, the new 16-nanometer production process, new drivers, software features, and VR (Virtual Reality).

[Embedded video: NVIDIA Pascal VRWorks Virtual Reality presentation]

Nvidia GeForce GTX 1080 review (in Turkish): NVIDIA GeForce GTX 1080 İncelemesi - Technopat

VRWorks™ is a comprehensive suite of APIs, libraries, and engines that enable application and headset
developers to create amazing virtual reality experiences. VRWorks enables a new level of presence by
bringing physically realistic visuals, sound, touch interactions, and simulated environments to virtual
reality.

NOTE: To learn more about the other NVIDIA VRWorks technologies, visit the NVIDIA VRWorks™ page.


VRWorks Graphics

Lens Matched Shading uses the new Simultaneous Multi-Projection architecture of NVIDIA Pascal-based GPUs to provide substantial performance improvements in pixel shading. The feature improves upon Multi-Res Shading by rendering to a surface that more closely approximates the lens-corrected image that is output to the headset display. This avoids rendering many pixels that would otherwise be discarded before the image is output to the VR headset.
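
A minimal sketch of the idea behind the lens-matched "modified W" warp; this is not the NVAPI interface, and the coefficients below are illustrative, not values from any shipping headset. Pascal's Simultaneous Multi-Projection hardware lets a linear term be added to clip-space w per viewport quadrant, so that after the perspective divide the periphery of the image is shaded at lower density, roughly matching the lens distortion:

```cpp
#include <cstdio>

struct Clip { float x, y, w; };

// Quadrant-dependent signs make w' grow toward the image border:
//     w' = w + A*x + B*y
Clip lensMatchedWarp(Clip v, float a, float b) {
    float A = (v.x >= 0.0f) ? a : -a;   // +a in the right half, -a in the left
    float B = (v.y >= 0.0f) ? b : -b;   // +b in the top half, -b in the bottom
    return { v.x, v.y, v.w + A * v.x + B * v.y };
}

int main() {
    // A point near the right edge: dividing by the larger w' pulls it toward
    // the center, so the periphery is covered by fewer shaded pixels.
    Clip p = lensMatchedWarp({ 0.9f, 0.0f, 1.0f }, 0.5f, 0.5f);
    std::printf("x/w before: %.3f, after: %.3f\n", 0.9f / 1.0f, p.x / p.w);
}
```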

Traditionally, VR applications have had to draw the scene twice: once for the left eye, and once for the right eye. Single Pass Stereo uses the new Simultaneous Multi-Projection architecture of NVIDIA Pascal-based GPUs to draw the scene geometry only once, then simultaneously projects both right-eye and left-eye views of the geometry. The increased efficiency (or increased performance) allows developers to effectively double the geometric complexity of VR applications, increasing the richness and detail of their virtual world.
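
A rough sketch of the difference, using a hypothetical engine interface (Scene, Gpu, and all of their methods are stand-ins for illustration, not a real API):

```cpp
#include <cstdio>

// Hypothetical stand-ins for an engine's scene and GPU interface.
struct Mat4 { float m[16]; };
struct Scene {
    Mat4 leftEyeViewProj()  const { return {}; }
    Mat4 rightEyeViewProj() const { return {}; }
};
struct Gpu {
    void setViewProjection(const Mat4&) { std::puts("bind one view-proj"); }
    void setStereoViewProjections(const Mat4&, const Mat4&) {
        std::puts("bind both eye view-projs for SMP");
    }
    void drawScene(const Scene&) { std::puts("submit full scene geometry"); }
};

// Traditional stereo: the scene is traversed and submitted twice.
void renderTwoPass(Scene& s, Gpu& gpu) {
    gpu.setViewProjection(s.leftEyeViewProj());  gpu.drawScene(s);
    gpu.setViewProjection(s.rightEyeViewProj()); gpu.drawScene(s);
}

// Single Pass Stereo: geometry work (vertex fetch, culling, tessellation)
// runs once; Pascal's Simultaneous Multi-Projection unit replays each
// primitive against both eye projections.
void renderSinglePass(Scene& s, Gpu& gpu) {
    gpu.setStereoViewProjections(s.leftEyeViewProj(), s.rightEyeViewProj());
    gpu.drawScene(s);
}

int main() {
    Scene s; Gpu gpu;
    renderTwoPass(s, gpu);     // ~2x geometry cost
    renderSinglePass(s, gpu);  // ~1x geometry cost, same two eye images
}
```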

Multi-Res Shading is an innovative rendering technique for VR whereby each part of an image is rendered at a resolution that better matches the pixel density of the lens-corrected image. Multi-Res Shading uses Pascal architecture features to render multiple scaled viewports in a single pass, delivering substantial performance improvements in pixel shading.
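
As an illustration of the viewport split, here is a minimal sketch that divides a render target into a 3x3 grid with a full-resolution center and reduced-density borders. The split fraction, border scale, and the per-eye target size are illustrative values, not NVIDIA defaults:

```cpp
#include <cstdio>

struct Viewport { float x, y, w, h; float shadingScale; };

// Build the 3x3 Multi-Res grid: the center cell keeps full shading density,
// the eight border cells are shaded at a reduced scale.
void buildMultiResGrid(float width, float height,
                       float centerFrac,   // e.g. 0.6: center is 60% per axis
                       float borderScale,  // e.g. 0.5: borders at half density
                       Viewport out[9]) {
    float bw = width  * (1.0f - centerFrac) * 0.5f;  // border band width
    float bh = height * (1.0f - centerFrac) * 0.5f;  // border band height
    float xs[4] = { 0, bw, width  - bw, width  };
    float ys[4] = { 0, bh, height - bh, height };
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col) {
            bool center = (row == 1 && col == 1);
            out[row * 3 + col] = {
                xs[col], ys[row],
                xs[col + 1] - xs[col], ys[row + 1] - ys[row],
                center ? 1.0f : borderScale
            };
        }
}

int main() {
    Viewport grid[9];
    buildMultiResGrid(1512, 1680, 0.6f, 0.5f, grid);  // illustrative target
    for (const Viewport& v : grid)
        std::printf("%4.0f,%4.0f  %4.0fx%4.0f  scale %.2f\n",
                    v.x, v.y, v.w, v.h, v.shadingScale);
}
```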

VR SLI provides increased performance for virtual reality apps: multiple GPUs can each be assigned to a specific eye to dramatically accelerate stereo rendering. With the GPU affinity API, VR SLI allows scaling for systems with more than two GPUs. VR SLI is supported for DirectX and OpenGL.

VR Headset Developers

Context Priority provides headset developers with control over GPU scheduling to support advanced virtual reality features such as asynchronous time warp, which cuts latency and quickly adjusts images as gamers move their heads, without the need to re-render a new frame. Combined with the preemption support of NVIDIA's Pascal-based GPUs, Context Priority delivers faster, more responsive refreshes of the VR display.
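
A toy sketch of why the high-priority context matters for asynchronous time warp. The context handles and calls below are hypothetical; real engines obtain a priority context through the headset runtime or NVIDIA's driver interfaces:

```cpp
#include <cstdio>

// Hypothetical context handle; not a real driver API.
struct GpuContext { int priority; };

void submitTimewarp(GpuContext& ctx, float headPoseYaw) {
    // On Pascal, fine-grained (pixel-level) preemption lets this
    // high-priority submission interrupt an in-flight scene render quickly.
    std::printf("timewarp on priority-%d context, yaw %.3f rad\n",
                ctx.priority, headPoseYaw);
}

int main() {
    GpuContext renderCtx{0};  // normal priority: the long scene render
    GpuContext warpCtx{1};    // high priority: reserved for time warp
    (void)renderCtx;          // imagine a scene render still in flight here
    // Just before vsync: instead of waiting for a late frame, re-project the
    // last completed frame to the newest head pose and scan it out on time.
    submitTimewarp(warpCtx, 0.012f);
}
```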

With Direct Mode, the NVIDIA driver treats VR headsets as head-mounted displays accessible only to VR applications, rather than as a normal Windows monitor that your desktop shows up on, providing better plug-and-play support and compatibility for the VR headset.

Front Buffer Rendering enables the GPU to render directly to the front buffer to reduce latency.

VRWorks Audio

Traditional VR audio provides an accurate 3D position of the audio source within a virtual environment. However, sound in the real world reflects more than just the location of the source; sound is a function of your physical environment. For example, audio in a small room sounds different from the same audio outdoors because of the reflections caused by sound bouncing off the walls of the room. Leveraging NVIDIA's OptiX ray-tracing technology, VRWorks Audio traces the path of sound in real time, delivering physically accurate audio that reflects the size, shape, and material properties of the virtual environment you are in.
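
A minimal sketch of the idea, not the VRWorks Audio API: rays are cast from the source, each surface hit attenuates the energy by a material absorption factor, and every bounce contributes a delayed echo at the listener; together these approximate the room's acoustic impulse response. Scene geometry is reduced to a stub here:

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Hit  { Vec3 point; float absorption; };

float dist(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Stub: a real implementation traces actual scene geometry (VRWorks Audio
// does this on NVIDIA OptiX). Here every ray "hits a wall" 4 m ahead.
Hit traceRay(Vec3 origin, Vec3 /*dir*/) {
    return { { origin.x, origin.y, origin.z + 4.0f }, 0.3f };
}

// Follow one ray for a few bounces; each bounce adds a delayed, attenuated
// echo at the listener. (A real tracer would also reflect `dir` off the
// surface normal at each hit.)
void tracePath(Vec3 pos, Vec3 dir, Vec3 listener,
               std::vector<std::pair<float, float>>& echoes) {
    float energy = 1.0f, traveled = 0.0f;
    for (int bounce = 0; bounce < 4 && energy > 0.01f; ++bounce) {
        Hit h = traceRay(pos, dir);
        traveled += dist(pos, h.point);
        energy   *= (1.0f - h.absorption);             // material absorption
        float delay = (traveled + dist(h.point, listener)) / 343.0f;
        echoes.push_back({ delay, energy });           // seconds, relative gain
        pos = h.point;
    }
}

int main() {
    std::vector<std::pair<float, float>> echoes;
    tracePath({ 0, 1.7f, 0 }, { 0, 0, 1 }, { 0, 1.7f, 1 }, echoes);
    for (auto& e : echoes)
        std::printf("echo after %.1f ms, gain %.2f\n", e.first * 1000, e.second);
}
```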

VRWorks Touch & Physics

Realistically modelling touch interactions and environment behavior is critical for delivering full presence in VR. Today's VR experiences deliver touch interactivity through a combination of positional tracking, hand controllers, and haptics. NVIDIA's PhysX Constraint Solver detects when a hand controller interacts with a virtual object and enables the game engine to provide a physically accurate visual and haptic response. PhysX also models the physical behavior of the virtual world around you so that all interactions, whether it is an explosion or a hand splashing through water, are accurate and behave as in the real world.
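
A toy sketch of a constraint-based grab in this spirit; this is far simpler than what PhysX actually does, and none of it is the PhysX API. A spring-like constraint pulls the object toward the tracked hand each physics step, and the constraint force magnitude can drive haptic feedback:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float len(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

struct Body { Vec3 pos, vel; float mass; };

// One step of a damped-spring constraint toward the hand (illustrative gains).
// Returns the force magnitude, usable as a haptic pulse strength.
float solveGrabConstraint(Body& obj, Vec3 hand, float dt) {
    const float kStiff = 400.0f, kDamp = 40.0f;
    Vec3 err = sub(hand, obj.pos);
    Vec3 force = { kStiff * err.x - kDamp * obj.vel.x,
                   kStiff * err.y - kDamp * obj.vel.y,
                   kStiff * err.z - kDamp * obj.vel.z };
    obj.vel.x += force.x / obj.mass * dt;
    obj.vel.y += force.y / obj.mass * dt;
    obj.vel.z += force.z / obj.mass * dt;
    obj.pos.x += obj.vel.x * dt;
    obj.pos.y += obj.vel.y * dt;
    obj.pos.z += obj.vel.z * dt;
    return len(force);
}

int main() {
    Body cup{ {0, 1, 0}, {0, 0, 0}, 0.3f };  // grabbed object
    for (int step = 0; step < 5; ++step) {   // 90 Hz physics steps
        float f = solveGrabConstraint(cup, { 0.1f, 1.0f, 0.0f }, 1.0f / 90.0f);
        std::printf("step %d: |force| = %.1f N\n", step, f);
    }
}
```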

Professional VR

Warp & Blend APIs provide application-independent geometry corrections and intensity adjustments across entire desktops to create seamless VR CAVE environments. Warp & Blend APIs enable all the above adjustments for pristine image quality without introducing any latency.

NVIDIA provides various synchronization techniques to prevent tearing and image misalignment while creating one large desktop that is driven from multiple GPUs or clusters. Technologies such as Frame Lock, Stereo Lock, Swap Groups & Swap Barriers are available to help developers design seamless and expansive VR CAVE and cluster environments.
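
For example, swap groups and swap barriers are exposed to OpenGL applications through the WGL_NV_swap_group extension. A minimal sketch of joining a group and barrier follows; error handling and context creation are omitted, and the extension requires supported (Quadro/sync-board) hardware plus a current GL context when the entry points are queried:

```cpp
#include <windows.h>
#include <GL/gl.h>   // GLuint; link against opengl32

// Entry points of WGL_NV_swap_group (typedefs as in wglext.h).
typedef BOOL (WINAPI *PFNWGLJOINSWAPGROUPNVPROC)(HDC hDC, GLuint group);
typedef BOOL (WINAPI *PFNWGLBINDSWAPBARRIERNVPROC)(GLuint group, GLuint barrier);

bool joinSwapGroup(HDC hdc) {
    auto wglJoinSwapGroupNV = (PFNWGLJOINSWAPGROUPNVPROC)
        wglGetProcAddress("wglJoinSwapGroupNV");
    auto wglBindSwapBarrierNV = (PFNWGLBINDSWAPBARRIERNVPROC)
        wglGetProcAddress("wglBindSwapBarrierNV");
    if (!wglJoinSwapGroupNV || !wglBindSwapBarrierNV) return false;

    // Group 1: all windows on this machine flip together (frame lock).
    if (!wglJoinSwapGroupNV(hdc, 1)) return false;
    // Barrier 1: group 1 also synchronizes with the other machines in the
    // cluster, so every projector of the CAVE presents at the same instant.
    return wglBindSwapBarrierNV(1, 1) != FALSE;
}
```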

GPU Affinity provides dramatic performance improvements by managing the placement of graphics and rendering workloads across multiple GPUs. This gives developers fine-grained control to pin OpenGL contexts to specific GPUs.
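
In OpenGL this pinning is exposed through the WGL_NV_gpu_affinity extension. A minimal sketch, with typedefs matching wglext.h and error handling omitted (a current GL context is required when querying the entry points):

```cpp
#include <windows.h>

// From WGL_NV_gpu_affinity (as declared in wglext.h).
DECLARE_HANDLE(HGPUNV);
typedef BOOL (WINAPI *PFNWGLENUMGPUSNVPROC)(UINT iGpuIndex, HGPUNV* phGpu);
typedef HDC  (WINAPI *PFNWGLCREATEAFFINITYDCNVPROC)(const HGPUNV* phGpuList);

// Create a device context whose GL contexts may only run on GPU `index`.
HDC createAffinityDCForGpu(UINT index) {
    auto wglEnumGpusNV = (PFNWGLENUMGPUSNVPROC)
        wglGetProcAddress("wglEnumGpusNV");
    auto wglCreateAffinityDCNV = (PFNWGLCREATEAFFINITYDCNVPROC)
        wglGetProcAddress("wglCreateAffinityDCNV");
    if (!wglEnumGpusNV || !wglCreateAffinityDCNV) return nullptr;

    HGPUNV gpu = nullptr;
    if (!wglEnumGpusNV(index, &gpu)) return nullptr;  // no such GPU

    HGPUNV gpuList[2] = { gpu, nullptr };             // NULL-terminated list
    // Any GL context made current on this DC is pinned to that GPU, so a
    // renderer can, for example, keep each eye's work on a separate board.
    return wglCreateAffinityDCNV(gpuList);
}
```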

NVIDIA's GPU Direct for Video technology enables low-latency video transfers to and from the GPU, enabling developers to seamlessly overlay video and graphics into VR environments.

VR audio, physics, and haptics let you hear and feel every moment.

It's true, VR is about immersion. But immersion isn't just one sense; it is actually three: sight, sound, and touch. So while the graphics portion of Pascal powers "sight," it is the physics capability of Pascal that is now being harnessed to power sound and touch. With the introduction of the GeForce GTX 1080, NVIDIA is announcing three important new additions to VRWorks, our comprehensive suite of SDKs and libraries that assist developers in designing cutting-edge content.

VR Audio


NVIDIA VRWorks Audio uses ray tracing, a technique used in generating images in computer graphics,
to trace the path of audio propagation through a virtual scene. VRWorks Audio simulates the
propagation of acoustic energy through the surrounding environment before reaching the user’s ear.


VR Touch & PhysX

NVIDIA's PhysX Constraint Solver detects when a hand controller interacts with a virtual object and enables the game engine to provide a physically accurate visual and haptic response.

PhysX also models the physical behavior of the virtual world around you so that all interactions,
whether it be an explosion or hand splashing through water, are accurate and behave as in the real
world.

VR SLI

With VR SLI, multiple GPUs can be assigned a specific eye to dramatically accelerate stereo rendering. VR SLI even allows scaling for PCs with more than two GPUs.

VR SLI provides increased performance for virtual reality applications where multiple GPUs can be assigned to a specific eye (the same number of GPUs is assigned to each eye) to dramatically accelerate stereo rendering. With the GPU affinity API, VR SLI allows scaling for systems with more than two GPUs. VR SLI is supported for DirectX and OpenGL.
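
A conceptual sketch of the submission model, as engine-side pseudocode with a hypothetical wrapper type rather than the actual vendor entry points: draw calls are broadcast to both GPUs, and only small per-GPU constants (essentially each eye's view-projection matrix) differ, so CPU cost stays close to single-GPU rendering:

```cpp
#include <cstdio>

struct Mat4 { float m[16]; };

// Hypothetical wrapper over a vendor API's broadcast/affinity masks.
struct VrSliDevice {
    void setConstantPerGpu(const Mat4&, const Mat4&) {
        std::puts("GPU0 gets left view-proj, GPU1 gets right view-proj");
    }
    void drawSceneBroadcast() {
        std::puts("one command stream, executed by both GPUs in parallel");
    }
    void blitRightEyeToGpu0() {
        std::puts("transfer GPU1's eye image for compositing/scan-out");
    }
};

void renderFrame(VrSliDevice& dev, const Mat4& left, const Mat4& right) {
    dev.setConstantPerGpu(left, right);  // the only per-eye divergence
    dev.drawSceneBroadcast();            // both eyes render simultaneously
    dev.blitRightEyeToGpu0();            // combine for the headset
}

int main() {
    VrSliDevice dev;
    renderFrame(dev, Mat4{}, Mat4{});
}
```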

AFR SLI — Not Appropriate for VR

Alternate-frame rendering (AFR) is the method used for SLI on traditional monitors. GPUs using AFR SLI trade off work on entire frames. In the case of two GPUs, the first GPU renders the even frames and the second GPU renders the odd frames. The GPU start times are staggered by half a frame to maintain regular frame delivery to the display.

AFR SLI works reasonably well to increase frame rates relative to a single-GPU system, but it does not help with latency: each frame still takes a full single-GPU render time from submission to display (illustratively, around 22 ms of rendering per frame even when two GPUs together deliver a frame every 11 ms at 90 Hz). So this method is not the best model for VR.

How VR SLI Works

A better way to use two GPUs for VR rendering is to split the work of drawing a single frame across both GPUs. With VR SLI, this means rendering each eye's frame on its own GPU.

So the frame for the left eye is rendered on the first GPU, and the frame for the right eye is rendered
on the second GPU at the same time.

Parallelizing the rendering of the left- and right-eye frames across two GPUs yields a massive improvement in performance, allowing VR SLI to improve both frame rate and latency relative to a single-GPU system.

Note that unlike traditional AFR SLI, which uses a profile in the NVIDIA driver, VR SLI requires application-side integration to enable performance scaling. VR SLI is now integrated into applications such as Valve's The Lab and ILMxLAB's Trials on Tatooine, with many more in progress, including UE4, Unity and Max Play engine integrations.
 