Video: Pascal Simultaneous Multi Projection Explained

Ali Güngör

Editor-in-Chief
Nvidia explained the new Pascal Simultaneous Multi Projection technology. It enables a better view angle for multi-monitor setups. But how? Here we are...

This video was recorded at Nvidia's global GeForce GTX 1080 and GTX 1070 presentation in Austin, Texas, U.S.A. Now that the agreed NDA date has passed, we are making the recording publicly available for all technology enthusiasts. This series covers a great many details about the new Nvidia Pascal architecture, the new 16-nanometer production process, new drivers, software features and VR (Virtual Reality).


Nvidia GeForce GTX 1080 Review: NVIDIA GeForce GTX 1080 İncelemesi - Technopat (Turkish language)

The Simultaneous Multi-Projection block is a new hardware unit, which is located inside the PolyMorph Engine at the end of the geometry pipeline and right in front of the Raster Unit. As its name implies, the Simultaneous Multi-Projection (SMP) unit is responsible for generating multiple projections of a single geometry stream, as it enters the SMP engine from upstream shader stages.



GeForce GTX 1080 Features a New PolyMorph Engine that Supports Simultaneous Multi-Projection.


The Simultaneous Multi-Projection Engine is capable of processing geometry through up to 16 pre-configured projections, sharing the center of projection (the viewpoint), and with up to 2 different projection centers, offset along the X axis. Projections can be independently tilted or rotated around an axis. Since each primitive may show up in multiple projections simultaneously, the SMP engine provides multi-cast functionality, allowing the application to instruct the GPU to replicate geometry up to 32 times (16 projections x 2 projection centers) without additional application overhead as the geometry flows through the pipe.

In all scenarios, the processing is hardware-accelerated, and the stream of data never leaves the chip. Since the multi-projection expansion happens after the geometry pipeline, the application saves all the work that would otherwise need to be performed in upstream shader stages. The savings are particularly important in geometry-heavy scenarios, such as tessellation, where running the geometry processing pipeline multiple times (once for each projection) would be prohibitively expensive. In extreme cases, the SMP engine can reduce the amount of required geometry work by up to 32x!

One example application of SMP is optimal support for surround displays. The correct way to render to a surround display is with a different projection for each of the three displays, matching the display angle. This is supported directly in a single pass by Pascal SMP, by specifying three separate projections, each corresponding to the appropriately tilted monitor. Now, the user has the flexibility to choose the desired tilt for their side displays and will see their graphics rendered with geometrically correct perspectives, at a much wider field of view (FOV). Note that an application using SMP to generate surround display images must support wide FOV settings, and also use SMP API calls to enable the wider FOV.
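As an illustration, the three tilted projections can be modelled as one shared perspective matrix combined with a per-monitor rotation about the vertical axis. This is only a conceptual NumPy sketch (the tilt angle, FOV, and clip planes are arbitrary values), not the actual SMP driver interface:

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Standard planar perspective projection matrix (OpenGL convention)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def rotation_y(deg):
    """Rotate the view direction about the vertical axis (monitor tilt)."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, 0.0, s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [-s, 0.0, c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Three projections sharing one viewpoint: centre monitor straight ahead,
# side monitors tilted inward by a user-chosen angle (here 30 degrees).
tilt = 30.0
proj = perspective(60.0, 16 / 9, 0.1, 100.0)
projections = [proj @ rotation_y(a) for a in (-tilt, 0.0, +tilt)]
```

On Pascal, SMP evaluates all three of these projections in a single geometry pass; on older GPUs each matrix would require its own rendering pass.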



Surround setups with SMP perspective correction (left) and without SMP perspective correction (right). Note how in the right image the rendering frustum used by the application is inconsistent with display placement, resulting in geometric distortions on the side displays.

In cases where the projection surface can't be exactly represented with a finite number of planar projections, the SMP engine can still provide substantial efficiency gains by generating a much closer approximation to the desired projection surface.

SMP: Designed for the New Display Revolution

Since the early days of 3D rendering, the graphics pipeline has been designed with a simple assumption that the render target is a single, flat display screen. However, in recent years advances in display technology have led to many new types of display scenarios that do not fit the classical assumption. Surround multi-monitor setups are an excellent solution to give a sense of immersive realism in 3D games, and curved single-display monitors are also becoming popular. VR display systems put a lens between the viewer and the screen, requiring a new type of projection that is different from the standard flat planar projection that traditional GPUs support. The figure below illustrates a variety of these technologies that are either here today or still in development.



Displays are rapidly evolving beyond the traditional single flat display.

Traditional GPUs can support these types of displays, but only with significant inefficiencies—either requiring multiple rendering passes, or rendering with overdraw and then warping the image to match the display, or both.
Maxwell had a limited "Multi-Resolution" capability that was a precursor to Pascal SMP. Maxwell was able to flip a projection by exactly 90 degrees (i.e., for cube mapping), or take a single projection direction and proportionally scale the resolution in subregions of the screen. While useful for applications such as VXGI, this capability was limited and did not map efficiently onto the needs of these new display scenarios.
With Simultaneous Multi-Projection and the ability to handle multiple tilted or rotated projections at once, Pascal GPUs can now support these new display use cases directly, with dramatically improved efficiency.

Projections in 3D Graphics

The notion of "projection" has been fundamental since the dawn of 3D computer graphics. Geometric objects in the scene are modeled in three dimensions. However, in order to display a view of the scene on a flat display, the scene needs to be projected onto the screen, a process referred to as perspective projection. Projection is the computer graphics equivalent of drawing a picture on a window that exactly matches the view of the real world that you saw when looking through the window. One of Albrecht Durer's prints, reproduced below, depicts the process of constructing a perspective projection of a scene and illustrates the analogy to viewing the world through a window.



Albrecht Durer’s Pictures for Geometry

Just as in the illustration above, perspective projection is performed by taking a line from each point in the scene to the location of the eye of the viewer, and finding the spot at which the line intersects the projection plane. The projection is defined by two pieces of information, (a) the location of the eye of the viewer, and (b) the direction that the viewer is looking.
The image below shows a basic projection performed on the GPU. The green box represents a projection that could be displayed on a computer screen, which would create a proper view of a cube, pyramid, and cylinder.



A 3-dimensional scene projected onto a flat plane
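The eye-line construction described above reduces to similar triangles: with the eye at the origin looking down -Z, a point's screen position is its X and Y scaled by the ratio of the plane distance to the point's depth. A minimal sketch:

```python
def project_point(p, d=1.0):
    """Project a 3-D point p = (x, y, z) onto the plane z = -d, with the
    eye at the origin: intersect the eye-to-point line with the plane."""
    x, y, z = p
    assert z < 0, "point must be in front of the viewer"
    s = -d / z                  # similar-triangles scale factor
    return (x * s, y * s)       # 2-D position on the projection plane

# A point twice as far away projects to half the on-screen offset:
print(project_point((1.0, 1.0, -2.0)))  # (0.5, 0.5)
print(project_point((1.0, 1.0, -4.0)))  # (0.25, 0.25)
```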

However, as discussed above, there are multiple display scenarios that do not match this simple model.

Perspective Surround

Let’s take an example of a typical surround display setup, comprising three separate monitors, placed side-by-side directly next to each other. In this setup, a game would assume a wider horizontal field of view, but it would still render assuming a single, flat plane projection. See the figure below:



Single Plane Projection in Surround Gaming

If the user arranges their monitors to form a flat plane, the final result will be geometrically correct. However, this setup requires a large amount of desk space and offers a limited field of view. It is preferable to rotate the left and right side monitors inwards, which dramatically increases the field of view. However, if the game is rendered assuming a single planar projection, the apparent perspective of the image will no longer be correct—it will appear excessively stretched and distorted on the sides.



With Tilted Monitors, a Single Flat Projection Will No Longer Appear Correct To The Viewer.


This occurs because although the displays are tilted inwards, the rendering assumes they are in a flat plane. Note that the lines of projection are unchanged although the displays have moved—so the blue lines on the edges no longer match up with the side displays. The projection no longer matches the display setup and is therefore incorrect.
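The stretching can be quantified: on a planar projection the screen coordinate grows as tan(θ) with the off-axis angle θ, so the local magnification is the derivative sec²(θ). A quick illustrative sketch (the 60-degree angle is an example, not a value from the source):

```python
import math

def planar_stretch(theta_deg):
    """Local horizontal magnification of a single planar projection at an
    angle theta off-axis: screen position is x = tan(theta), so the
    stretch factor is dx/dtheta = sec^2(theta), normalised to 1 at the
    centre of the screen."""
    t = math.radians(theta_deg)
    return 1.0 / math.cos(t) ** 2

print(planar_stretch(0.0))    # 1.0 at the centre monitor
print(planar_stretch(60.0))   # ~4: content this far off-axis is rendered
                              # about four times wider than at the centre
```

This is exactly the distortion visible on the side displays in the figure above; matching each monitor with its own projection keeps θ small per display and removes the stretch.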

In order to address this issue, one option is to render each monitor separately, appropriately adjusting projection parameters to match the tilt of each display. However, this approach results in a significant increase in the rendering workload, since the scene effectively has to be rendered three times, resulting in the game engine performing 3x the scene management work, 3x the OS runtime and driver work, and 3x the GPU front end and geometry work.

Instead, the Pascal SMP feature allows a single rendering pass. Perspective Surround is configured with three active projections, one for each monitor, and SMP applies each of the active projections to each primitive on the fly. The result is a correctly rendered surround view, with no loss of performance.

Single Pass Stereo

An important aspect of VR rendering is the requirement that at least two projections need to be generated, one for each eye. As in the surround case, apps today normally support this by rendering to each eye separately, which results in twice the amount of work for the entire pipeline, starting from the driver and the OS, and all the way down to the GPU's raster backend.



To generate a stereo pair, two projections of the scene need to be rendered, one for each eye.

With Pascal’s SMP engine, which supports two separate projection centers, the GPU can render the two stereo projections directly in a single rendering pass. This SMP capability is known as Single Pass Stereo. With Single Pass Stereo, all the pipeline work, including scene submission, driver and OS scheduling, and geometry processing on the GPU, can be performed only once, resulting in substantial performance gains—see NVIDIA’s “Barbarian” demo as an example.

In Single Pass Stereo mode, the application runs vertex processing only once, outputting two (rather than one) positions for each vertex being processed. The two positions represent locations of the vertex as viewed from the left and from the right eye. The SMP hardware takes care of picking the right version of the vertex and routing it to the appropriate eye. From that point on, the SMP hardware can further apply a number of projections to the current primitive, as explained in the SMP architecture section. This functionality is important for combining Single Pass Stereo with Lens Matched Shading, which we will review in the next section.
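Conceptually, the vertex stage can be modelled as one transform pass that emits two clip-space positions from two projection centres offset along X. The sketch below is an illustrative CPU-side model in NumPy, not the actual shader interface; the 64 mm eye separation is an assumed typical value:

```python
import numpy as np

def stereo_positions(vertex_ws, view_proj, eye_sep=0.064):
    """Illustrative model of Single Pass Stereo: run vertex work once and
    emit two clip-space positions, one per eye, with the projection
    centres offset along X by half the eye separation each way."""
    offsets = (-eye_sep / 2.0, +eye_sep / 2.0)        # left eye, right eye
    v = np.append(np.asarray(vertex_ws, float), 1.0)  # homogeneous coords
    out = []
    for dx in offsets:
        shift = np.eye(4)
        shift[0, 3] = -dx        # translate the world opposite the eye
        out.append(view_proj @ shift @ v)
    return out                   # [left_eye_position, right_eye_position]
```

A single call produces both per-eye positions, mirroring how SMP routes one vertex-shader output to both eyes instead of running the whole pipeline twice.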

Lens Matched Shading

The explosive growth of interest in VR applications has increased the importance of supporting displays which require rendering to non-planar projections. VR displays have a lens in between the viewer and the display, which bends and distorts the image. For the image to look correct to the user, it would have to be rendered with a special projection that inverts the distortion of the lens. Then when the image is viewed through the lens, it will look undistorted, because the two distortions cancel out.
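The cancellation can be illustrated with a toy radial distortion model: rendering with the numerical inverse of the lens function leaves the viewed image undistorted. The coefficient below is illustrative, not a real headset parameter:

```python
def lens_distort(r, k1=0.22):
    """Toy radial model of the HMD lens: magnification grows with the
    distance r from the lens centre. k1 is an illustrative coefficient."""
    return r * (1.0 + k1 * r * r)

def inverse_distort(r, k1=0.22, iters=20):
    """Numerically invert the lens model by fixed-point iteration: this is
    the pre-warp the renderer applies so that the lens cancels it."""
    x = r
    for _ in range(iters):
        x = r / (1.0 + k1 * x * x)
    return x

# Pre-warping and then viewing through the lens recovers the original
# radius, which is why the two distortions cancel for the user.
r = 0.8
print(abs(lens_distort(inverse_distort(r)) - r))  # ~0
```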

Traditional GPUs do not support this type of projection; instead they only support a standard “planar” projection with a uniform sampling rate. Producing a correct final image with a traditional GPU therefore requires two steps: first, the GPU renders with a standard projection, generating more pixels than needed; second, for each pixel location in the output display surface, a pixel value is looked up from the first-step rendering and applied to the display surface.

The images below provide an example of this traditional GPU two-step process. On the left is the first step rendering. On the right is an example of the final image as it would be shown on the VR display. The center of the image looks about the same as it would with a standard projection, but on the sides the image is squeezed. The final image in this example (based on Oculus Rift parameters) is 1.1 Mpixels per eye. If the source rendering was perfectly matched to the final projection, it should also be 1.1 Mpixels per eye. However, due to the mismatch in projections, the source image is 2.1 Mpixels per eye—86% more pixels than necessary.



First Pass Image and Final image Required For Correct Viewing Through the HMD Optics.

Leveraging SMP’s ability to use multiple projection planes for a single viewpoint, we can attempt to approximate the shape of the lens-distorted projection. This feature is known as Lens Matched Shading.
With Lens Matched Shading, the SMP engine subdivides the display region into four quadrants, with each quadrant applying its own projection plane. The parameters can be adjusted to approximate the shape of the lens distortion as closely as possible. The left image below is the new rendered image with Lens Matched Shading, compared to the final image on the right. The source image on the left is now 1.4 Mpixels per eye instead of 2.1 Mpixels, a significant reduction in shading rate that translates to a 50% increase in throughput available for pixel shading.



First Pass Image with Lens Matched Shading, and Final Image.
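One way to picture the quadrant mechanism described above: adding a per-quadrant linear term to the clip-space w coordinate (with signs flipped per quadrant so w grows toward the screen edges) compresses edge geometry after the perspective divide, approximating the lens-matched projection with four planar pieces. The sketch below is a simplified model with made-up coefficients, not the shipping API:

```python
def lens_matched(x, y, w, a=0.3, b=0.3):
    """Sketch of per-quadrant w modification: each quadrant adds a linear
    term to w, with signs chosen so the added term is positive toward the
    screen edges. Dividing by the larger w then squeezes edge geometry,
    which is where the lens needs fewer samples. a and b are illustrative
    coefficients, not real driver values."""
    a_q = a if x >= 0 else -a    # flip the sign per horizontal half
    b_q = b if y >= 0 else -b    # flip the sign per vertical half
    w2 = w + a_q * x + b_q * y
    return x / w2, y / w2        # post-divide (compressed) coordinates

# The centre is untouched, while geometry at the edge is pulled inward:
print(lens_matched(0.0, 0.0, 1.0))  # (0.0, 0.0)
print(lens_matched(1.0, 0.0, 1.0))  # x shrinks below 1.0
```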


One step in determining the Lens Matched Shading parameters is to check the sampling rate compared to the sampling rate required for the final image. The objective for the default / “conservative” setting of Lens Matched Shading is to always match or exceed the sampling rate of the final image. The image below shows an example comparison for the lens matched shading image above.



First pass image sampling rate compared to final image.

Blue indicates pixels that were sampled at a higher rate than required, gray indicates a matched rate, and any red pixels would indicate initial sampling that was below the rate in the final image. The absence of red pixels confirms that the setting matches the objective.

In addition, developers will have the option to use different settings; for example, one could use a setting that is higher resolution in the center and undersampled in the periphery, to maximize frame rate without significant visual quality degradation.

In summary, with the combination of Single Pass Stereo and Lens Matched Shading, Pascal can deliver up to 2x performance improvement for VR, compared to a GPU without support for Simultaneous Multi-Projection.

 