Chapter 39. Volume Rendering Techniques
After the slices are drawn in sorted order, the rendering state is restored so that the algorithm does not affect the display of other objects in the scene.
The following example is intended as a starting point for understanding the implementation details of texture-based volume rendering. In this example, the transfer function is fixed, the data set represents the opacity directly, and the emissive color is set to constant gray. In addition, the viewing direction points along the z axis of the data coordinate frame; therefore, the proxy geometry consists of rectangles in the x-y plane placed uniformly along the z axis.
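A minimal sketch of this fixed-view renderer in legacy OpenGL follows. The texture handle volumeTex, the slice count, and the assumption that the volume fills the unit square in x-y with z mapped to [-1, 1] are illustrative choices of this sketch, not requirements of the technique:

#include <GL/gl.h>

// Draw axis-aligned slices back to front with Over blending.
// Assumes opacity-weighted (premultiplied) colors in the 3D texture.
void DrawAxisAlignedSlices(GLuint volumeTex, int numSlices)
{
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);  // Over operator
    glDepthMask(GL_FALSE);                        // do not write depth

    for (int i = 0; i < numSlices; ++i) {
        float z = (i + 0.5f) / numSlices;         // texture-space depth
        glBegin(GL_QUADS);
        glTexCoord3f(0, 0, z); glVertex3f(-1, -1, 2 * z - 1);
        glTexCoord3f(1, 0, z); glVertex3f( 1, -1, 2 * z - 1);
        glTexCoord3f(1, 1, z); glVertex3f( 1,  1, 2 * z - 1);
        glTexCoord3f(0, 1, z); glVertex3f(-1,  1, 2 * z - 1);
        glEnd();
    }

    // Restore state so other objects in the scene draw normally.
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_3D);
}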
The preceding sketch outlines the steps of the algorithm for this simple case. The following sections demonstrate how to make each stage of the example more general and useful for a variety of tasks.

This section presents an overview of the components commonly used in texture-based volume rendering applications.
The goal is to provide enough details to make it easier to understand typical implementations of volume renderers that utilize current-generation consumer graphics hardware, such as the GeForce FX family of cards.
Volumetric data sets come in a variety of sizes and types. For volume rendering, data is stored in memory in a suitable format, so it can be easily downloaded to graphics memory as textures. Usually, the volume is stored in a single 3D array. Depending on the kind of proxy geometry used, either a single 3D texture object or one to three sets of 2D texture slices are created.
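For the single 3D texture case, creation and upload might look like the following sketch, which assumes an 8-bit scalar volume, an OpenGL 1.2+ header that declares glTexImage3D, and the legacy luminance/intensity formats available on GeForce FX-class hardware:

#include <GL/gl.h>

// Upload a width x height x depth block of 8-bit values as one 3D texture.
GLuint CreateVolumeTexture(const unsigned char* data,
                           int width, int height, int depth)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_INTENSITY8,   // internal format
                 width, height, depth, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
    return tex;
}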
The developer also has to choose which of the available texture formats to use for rendering. For example, power-of-two-size textures are typically used to maximize rendering performance. Frequently, the data set is neither in the right format nor the right size, and it may not fit into the available texture memory on the GPU. In simple implementations, data processing is performed in a separate step outside the renderer.
In more complex scenarios, data processing becomes an integral part of the application, such as when data values are generated on the fly or when images are created directly from raw data. To change the size of a data set, one can resample it into a coarser or a finer grid, or pad it at the boundaries. Padding is accomplished by placing the data into a larger volume and filling the empty regions with appropriate values, for example, zeros or repeated boundary values.
Resampling requires probing the volume, that is, computing interpolated data values at a location from its voxel neighbors. Although the commonly used trilinear interpolation technique is easy to implement, it is not always the best choice for resampling, because it can introduce visual artifacts.
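As an illustration of such probing, here is a minimal trilinear sketch; the flat array layout and the function name are illustrative, and the sample position is given in voxel coordinates:

#include <algorithm>

// Trilinearly interpolate the volume at a continuous position (x, y, z).
float Probe(const float* vol, int dimX, int dimY, int dimZ,
            float x, float y, float z)
{
    int i = (int)x, j = (int)y, k = (int)z;      // lower-corner voxel
    float fx = x - i, fy = y - j, fz = z - k;    // fractional offsets

    // Clamp so the 2x2x2 neighborhood stays inside the volume.
    auto at = [&](int a, int b, int c) {
        a = std::min(std::max(a, 0), dimX - 1);
        b = std::min(std::max(b, 0), dimY - 1);
        c = std::min(std::max(c, 0), dimZ - 1);
        return vol[(c * dimY + b) * dimX + a];
    };

    // Interpolate along x, then y, then z.
    float c00 = at(i, j,     k    ) * (1 - fx) + at(i + 1, j,     k    ) * fx;
    float c10 = at(i, j + 1, k    ) * (1 - fx) + at(i + 1, j + 1, k    ) * fx;
    float c01 = at(i, j,     k + 1) * (1 - fx) + at(i + 1, j,     k + 1) * fx;
    float c11 = at(i, j + 1, k + 1) * (1 - fx) + at(i + 1, j + 1, k + 1) * fx;
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}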
If the quality of resampling is crucial for the application, more complex interpolation functions are required, such as piecewise cubic polynomials. Fortunately, such operations are easily performed with the help of publicly available toolkits. For example, the Teem toolkit includes a great variety of data-processing tools accessible directly from the command line, exposing the functionality of the underlying libraries without having to write any code (Kindlmann). Examples of using Teem for volume data processing are included on the book's CD and Web site.
Advanced data processing can also be performed on the GPU, for example, to create high-quality images (Hadwiger et al.). Local illumination techniques and multidimensional transfer functions use gradient information during rendering. Most implementations use central differences to obtain the gradient vector at each voxel. The method of central differences approximates the gradient as the difference of the data values of two voxel neighbors along a coordinate axis, divided by the physical distance between them.
For example, the following formula computes the x component of the gradient vector at voxel location $(i, j, k)$:

$$\nabla f_x(i, j, k) \approx \frac{f(i+1, j, k) - f(i-1, j, k)}{2\,\Delta x}$$

To obtain the gradient at data boundaries, the volume is padded by repeating boundary values. Visual artifacts caused by central differences are similar to those resulting from resampling with trilinear interpolation. If visual quality is of concern, more complex derivative functions are needed, such as the ones that Teem provides.
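In code, the per-voxel gradient computation might be sketched as follows; the flat array layout and names are illustrative, and boundaries are handled by clamping indices, which matches the padding-by-repetition just described:

#include <algorithm>

// Compute one gradient vector (3 floats) per voxel with central differences.
// dx, dy, dz are the physical voxel spacings along each axis.
void ComputeGradients(const float* vol, float* grad,
                      int dimX, int dimY, int dimZ,
                      float dx, float dy, float dz)
{
    auto at = [&](int i, int j, int k) {
        i = std::min(std::max(i, 0), dimX - 1);   // repeat boundary values
        j = std::min(std::max(j, 0), dimY - 1);
        k = std::min(std::max(k, 0), dimZ - 1);
        return vol[(k * dimY + j) * dimX + i];
    };
    for (int k = 0; k < dimZ; ++k)
        for (int j = 0; j < dimY; ++j)
            for (int i = 0; i < dimX; ++i) {
                float* g = grad + 3 * ((k * dimY + j) * dimX + i);
                g[0] = (at(i + 1, j, k) - at(i - 1, j, k)) / (2.0f * dx);
                g[1] = (at(i, j + 1, k) - at(i, j - 1, k)) / (2.0f * dy);
                g[2] = (at(i, j, k + 1) - at(i, j, k - 1)) / (2.0f * dz);
            }
}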
Depending on the texture format used, the computed gradients may need to be quantized, scaled, and biased to fit into the available range of values.

Transfer functions emphasize regions in the volume by assigning color and opacity to data values. Histograms are useful for analyzing which ranges of values are important in the data. In general, histograms show the distribution of data values and other related data measures. A 1D histogram is created by dividing up the value range into a number of bins.
Each bin contains the number of voxels within the lower and upper bounds assigned to the bin. By examining the histogram, one can see which values are frequent in the data. Histograms, however, do not show the spatial distribution of the samples in the volume. The output of the data-processing step is a set of textures that are downloaded to the GPU in a later stage.
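A histogram of this kind takes only a few lines to build; the following sketch assumes an 8-bit data set and 256 bins:

#include <cstddef>
#include <vector>

// Count how many voxels fall into each bin of the value range.
std::vector<unsigned> BuildHistogram(const unsigned char* data,
                                     std::size_t numVoxels, int numBins = 256)
{
    std::vector<unsigned> bins(numBins, 0);
    for (std::size_t v = 0; v < numVoxels; ++v) {
        int b = data[v] * numBins / 256;   // map the value range to a bin
        ++bins[b];
    }
    return bins;
}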
It is sometimes more efficient to combine several textures into a single texture. For example, to reduce the cost of texture lookup and interpolation, the value and normalized gradient textures are usually stored and used together in a single RGBA texture.
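A sketch of such a packing step follows, assuming precomputed floating-point gradients and an 8-bit value array; the scale-and-bias of the normalized gradient to [0, 255] anticipates the quantization mentioned above:

#include <cmath>
#include <cstddef>

// Pack the normalized gradient into RGB and the data value into alpha.
void PackValueGradientRGBA(const float* grad, const unsigned char* value,
                           unsigned char* rgba, std::size_t numVoxels)
{
    for (std::size_t v = 0; v < numVoxels; ++v) {
        const float* g = grad + 3 * v;
        float len = std::sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]);
        float inv = (len > 0.0f) ? 1.0f / len : 0.0f;
        for (int c = 0; c < 3; ++c)       // [-1, 1] -> [0, 255]
            rgba[4 * v + c] =
                (unsigned char)((g[c] * inv * 0.5f + 0.5f) * 255.0f);
        rgba[4 * v + 3] = value[v];       // data value in the alpha channel
    }
}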
During the rendering stage, images of the volume are created by drawing the proxy geometry in sorted order. When the data set is stored in a 3D texture, view-aligned planes are used for slicing the bounding box, resulting in a set of polygons for sampling the volume. The slicing algorithm computes the proxy geometry in view space, using the modelview matrix to transform vertices between the object and view coordinate systems.
Proxy polygons are tessellated into triangles, and the resulting vertices are stored in a vertex array for more efficient rendering. As an illustration, consider two slice polygons: the first contains three vertices; the second is composed of six vertices and is tessellated into six triangles.
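The heart of the slicing step is intersecting each view-space plane with the twelve edges of the transformed bounding box and ordering the hits into a convex polygon. The following sketch shows one way to do this for a single plane z = zCut; the Vec3 helper and the corner ordering are illustrative, and the corners are assumed to be already transformed to view space by the modelview matrix:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Intersect the plane z = zCut with the box edges and return the hits
// sorted by angle around their centroid, ready for fan tessellation.
std::vector<Vec3> SlicePolygon(const Vec3 corners[8], float zCut)
{
    // The 12 box edges as pairs of corner indices (bit 0 = x, 1 = y, 2 = z).
    static const int e[12][2] = { {0,1},{1,3},{3,2},{2,0},
                                  {4,5},{5,7},{7,6},{6,4},
                                  {0,4},{1,5},{3,7},{2,6} };
    std::vector<Vec3> pts;
    for (int i = 0; i < 12; ++i) {
        const Vec3& a = corners[e[i][0]];
        const Vec3& b = corners[e[i][1]];
        float denom = b.z - a.z;
        if (std::fabs(denom) < 1e-6f) continue;   // edge parallel to plane
        float t = (zCut - a.z) / denom;
        if (t < 0.0f || t > 1.0f) continue;       // plane misses this edge
        pts.push_back({ a.x + t * (b.x - a.x),
                        a.y + t * (b.y - a.y), zCut });
    }
    if (pts.size() < 3) return {};

    // Order the hits into a convex polygon.
    Vec3 c = { 0.0f, 0.0f, zCut };
    for (const Vec3& p : pts) { c.x += p.x; c.y += p.y; }
    c.x /= pts.size(); c.y /= pts.size();
    std::sort(pts.begin(), pts.end(), [&](const Vec3& p, const Vec3& q) {
        return std::atan2(p.y - c.y, p.x - c.x)
             < std::atan2(q.y - c.y, q.x - c.x);
    });
    return pts;
}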
There are several ways to generate texture coordinates for the polygon vertices. For example, texture coordinates can be computed on the CPU in step 3(c) of the slicing algorithm from the computed vertex positions and the volume bounding box.
In this case the coordinates are sent down to GPU memory in a separate vertex array or interleaved with the vertex data. Texture coordinates can also be computed on the GPU, using automatic texture coordinate generation, the texture matrix, or a vertex program. Advanced algorithms, such as the half-angle slicing technique described later in this chapter, slice the volume along an axis other than the view direction; in that case, the algorithm works the same way, but the modelview matrix needs to be modified accordingly.
The role of the transfer function is to emphasize features in the data by mapping values and other data measures to optical properties.
The simplest and most widely used transfer functions are one-dimensional, and they map the range of data values to color and opacity. Typically, these transfer functions are implemented with 1D texture lookup tables. When the lookup table is built, color and opacity are usually assigned separately by the transfer function. For correct rendering, the color components need to be multiplied by the opacity, because the color approximates both the emission and the absorption within a ray segment (opacity-weighted color; Wittenbrink et al.).
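As an illustration, a 256-entry RGBA lookup table with opacity-weighted colors might be built as follows; the transfer function tf is a stand-in for whatever mapping the application provides:

#include <array>
#include <functional>

// Build a lookup table whose colors are premultiplied by opacity.
void BuildLookupTable(const std::function<std::array<float, 4>(float)>& tf,
                      unsigned char lut[256][4])
{
    for (int i = 0; i < 256; ++i) {
        std::array<float, 4> c = tf(i / 255.0f);            // (r, g, b, a)
        lut[i][0] = (unsigned char)(c[0] * c[3] * 255.0f);  // r * alpha
        lut[i][1] = (unsigned char)(c[1] * c[3] * 255.0f);  // g * alpha
        lut[i][2] = (unsigned char)(c[2] * c[3] * 255.0f);  // b * alpha
        lut[i][3] = (unsigned char)(c[3] * 255.0f);
    }
}

The resulting table can then be downloaded as a 1D texture and indexed with the interpolated data value during rendering.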
Using data value as the only measure for controlling the assignment of color and opacity may limit the effectiveness of classifying features in the data.
Incorporating other data measures into the transfer function, such as gradient magnitude, allows for finer control and more sophisticated visualization (Kindlmann and Durkin; Kindlmann). For example, the difference between one- and two-dimensional transfer functions based on the data value and the gradient magnitude can be substantial. Transfer function design is a difficult iterative procedure that requires significant insight into the underlying data set.
Some information is provided by the histogram of data values, indicating which ranges of values should be emphasized. The user interface is an important component of the interactive design procedure. Typically, the interface consists of a 1D curve editor for specifying transfer functions via a set of control points. Another approach is to use direct manipulation widgets for painting directly into the transfer function texture (Kniss et al.).
These direct manipulation widgets provide a view of the joint distribution of data values, represented by the horizontal axis, and gradient magnitudes, represented by the vertical axis. Arches within the value and gradient magnitude distribution indicate the presence of material boundaries. A set of brushes is provided for painting into the 2D transfer function dependent texture, which assigns the resulting color and opacity to voxels with the corresponding ranges of data values and gradient magnitudes.
The assigned opacity also depends on the sampling rate. For example, when using fewer slices, the opacity has to be scaled up so that the overall intensity of the image remains the same. Equation 3 is used for correcting the transfer function opacity whenever the user changes the sampling rate $s$ from the reference sampling rate $s_0$:

Equation 3. Opacity Correction

$$\alpha_{\text{corrected}} = 1 - (1 - \alpha_{\text{stored}})^{s_0 / s}$$
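In code, the correction is a one-liner applied to each table entry whenever the lookup table is rebuilt; this sketch assumes the stored opacities were authored at the reference sampling rate s0:

#include <cmath>

// Equation 3: adjust a stored opacity for the current sampling rate s.
float CorrectOpacity(float storedAlpha, float s0, float s)
{
    return 1.0f - std::pow(1.0f - storedAlpha, s0 / s);
}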
Illumination models are used for improving the visual appearance of objects. Simple models locally approximate the light intensity reflected from the surface of an object.
The most common approximation is the Blinn-Phong model, which computes the reflected intensity as a function of the local surface normal $\mathbf{N}$, the direction $\mathbf{L}$ and intensity $I_L$ of the point light source, and the ambient, diffuse, specular, and shininess coefficients $k_a$, $k_d$, $k_s$, and $n$:

Equation 4. The Blinn-Phong Illumination Model

$$I = k_a + I_L \left( k_d \max(\mathbf{N} \cdot \mathbf{L},\, 0) + k_s \max(\mathbf{N} \cdot \mathbf{H},\, 0)^n \right)$$

where $\mathbf{H}$ is the normalized half-angle vector between the light and view directions.
The computed intensity is used to modulate the color components from the transfer function. Typically, Equation 4 is evaluated in the fragment shader, requiring per-pixel normal information. In volume rendering applications, the normalized gradient vector is used as the surface normal. Unfortunately, the gradient is not well defined in homogeneous regions of the volume.
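For reference, here is a CPU-side sketch of evaluating Equation 4; in practice this runs in the fragment shader with the normal fetched from the gradient texture. The small vector helpers are illustrative, and all direction vectors are assumed normalized:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Normalize(Vec3 v)
{
    float l = std::sqrt(Dot(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

// Evaluate Blinn-Phong for normal N, light direction L, view direction V.
float BlinnPhong(Vec3 N, Vec3 L, Vec3 V, float I_L,
                 float ka, float kd, float ks, float n)
{
    Vec3 H = Normalize({ L.x + V.x, L.y + V.y, L.z + V.z }); // half-angle
    float diff = std::max(Dot(N, L), 0.0f);
    float spec = std::pow(std::max(Dot(N, H), 0.0f), n);
    return ka + I_L * (kd * diff + ks * spec);
}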
For volume rendering, the Blinn-Phong model is frequently modified so that only those regions with high gradient magnitudes are shaded (Kniss et al.). Local illumination ignores indirect light contributions, shadows, and other global effects.

To efficiently evaluate the volume rendering equation (Equation 1), samples are sorted in back-to-front order, and the accumulated color and opacity are computed iteratively. A single step of the compositing process is known as the Over operator:
Equation 5. Back-to-Front Compositing (Over)

$$C_{\text{dst}} \leftarrow C_{\text{src}} + (1 - \alpha_{\text{src}})\, C_{\text{dst}}$$

If samples are sorted in front-to-back order, the Under operator is used:

Equation 6. Front-to-Back Compositing (Under)

$$C_{\text{dst}} \leftarrow C_{\text{dst}} + (1 - \alpha_{\text{dst}})\, C_{\text{src}}$$
$$\alpha_{\text{dst}} \leftarrow \alpha_{\text{dst}} + (1 - \alpha_{\text{dst}})\, \alpha_{\text{src}}$$

Here $C$ denotes opacity-weighted color. The compositing equations (Equations 5 and 6) are easily implemented with hardware alpha blending. For the Over operator, the source blending factor is set to 1 and the destination blending factor is set to 1 − source alpha.
For the Under operator, the source blending factor is set to 1 − destination alpha and the destination factor is set to 1. Alternatively, if the hardware allows for reading and writing the same buffer, compositing can be performed in the fragment shading stage by projecting the proxy polygon vertices onto the viewport rectangle. To blend opaque geometry into the volume, the geometry needs to be drawn before the volume, because the depth test will cull proxy fragments that are inside objects.
The Under operator requires drawing the geometry and the volume into separate color buffers that are composited at the end. In this case, the depth values from the geometry pass are used for culling fragments in the volume rendering pass.
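The blend-state setup for the two operators can be captured in two small helpers; this sketch assumes opacity-weighted colors and, for the Under operator, a color buffer with an alpha channel:

#include <GL/gl.h>

// Over operator: slices sorted back to front.
void SetOverBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}

// Under operator: slices sorted front to back; the destination-alpha
// blend factor requires an alpha channel in the color buffer.
void SetUnderBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
}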
This section describes techniques for improving the quality of rendering and creating volumetric special effects. The local illumination model presented in the previous section adds important visual cues to the rendering. Such a simple model is unrealistic, however, because it assumes that light arrives at a sample without interacting with the rest of the volume. Furthermore, this kind of lighting assumes a surface-based model, which is inappropriate for volumetric materials. One way to incorporate complex lighting effects, such as volumetric shadows, is to precompute a shadow volume for storing the amount of light arriving at each sample after being attenuated by the intervening material.
During rendering, the interpolated values from this volumetric shadow map are multiplied by colors from the transfer function. But in addition to using extra memory, volumetric shadow maps result in visual artifacts such as blurry shadows and dark images. A better alternative is to use a pixel buffer to accumulate the amount of light attenuated from the light's point of view (Kniss et al.). To do this efficiently, the slicing axis is set halfway between the view and the light directions.
This allows the same slice to be rendered from both the eye's and the light's points of view. The amount of light arriving at a particular slice is equal to one minus the accumulated opacity of the previously rendered slices. Each slice is first rendered from the eye's point of view, using the result of the previous pass from the light's point of view to modulate the brightness of samples in the current slice.
The same slice is then rendered from the light's point of view to calculate the intensity of the light arriving at the next slice. The algorithm uses two buffers: one for the eye and one for the light.
Volumetric shadows greatly improve the realism of rendered scenes. Note that as the angle between the observer and the light directions changes, the slice distance needs to be adjusted to maintain a constant sampling rate.
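The overall loop structure might be sketched as follows; the two render passes are hypothetical application functions and are only declared here:

// Hypothetical passes: the eye pass reads the light buffer to modulate
// slice brightness; the light pass accumulates opacity into it.
void RenderSliceFromEye(int slice);
void RenderSliceFromLight(int slice);

// Two-pass half-angle slicing: slices are valid from both points of view
// because the slicing axis lies halfway between the view and light axes.
void RenderWithVolumetricShadows(int numSlices)
{
    for (int i = 0; i < numSlices; ++i) {
        RenderSliceFromEye(i);    // pass 1: shade slice i with available light
        RenderSliceFromLight(i);  // pass 2: attenuate light for slice i + 1
    }
}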