Effective Contour Creation with OpenGL Texture Mapping

In graphics applications, "let the hardware do it" is generally good advice. Today's graphics hardware keeps getting better and better, and OpenGL knows how to exploit its capabilities. Even producing contour plots can be a fairly painless operation.


Wenfei Wu

December 01, 2000

Figure 1: Mapping an 8-texel texture to a triangle

Figure 2: Generating smooth contours by setting the normal at each vertex

Figure 3: Higher-order polygons with 1-D texture mapping

Figure 4: 3-D contours with lighting effects added

Figure 5: Semi-transparent 3-D contours

Listing 1: Creating a 1-D texture object in OpenGL

void CContoursDoc::CreateTextureObject()
{
    // Define texture image
    unsigned char Texture8[8][3] =
    {
        { 0x00, 0x00, 0xa0 },   // Dark Blue 
        { 0x00, 0x00, 0xff },   // Blue 
        { 0x00, 0xa0, 0xff },   // Indigo 
        { 0x00, 0xa0, 0x40 },   // Dark Green 
        { 0x00, 0xff, 0x00 },   // Green 
        { 0xff, 0xff, 0x00 },   // Yellow 
        { 0xff, 0xcc, 0x00 },   // Orange 
        { 0xff, 0x00, 0x00 }    // Red 
    };

    // Set pixel storage mode 
    ::glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    // Generate a texture name
    ::glGenTextures(1, m_nTexName);

    // Create a texture object
    ::glBindTexture(GL_TEXTURE_1D, m_nTexName[0]);
    ::glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER,
        GL_NEAREST);
    ::glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER,
        GL_NEAREST);
    ::glTexImage1D(GL_TEXTURE_1D, 0, 3, 8, 0, GL_RGB, 
        GL_UNSIGNED_BYTE, Texture8);
}

Listing 2: Creating a texture-mapped triangle

void CContoursDoc::CreateTriangleList(UINT nList)
{
    ::glNewList(nList, GL_COMPILE);
    ::glNormal3f(0.0f, 0.0f, 1.0f);

    ::glBegin(GL_TRIANGLES); 
        ::glTexCoord1f(0.1f);
        ::glVertex3d(-1, -1, 0);
        ::glTexCoord1f(0.7f);
        ::glVertex3d(-1, 1, 0);
        ::glTexCoord1f(1.0f);
        ::glVertex3d( 1, 0.8, 0);
    ::glEnd();
    ::glEndList();
}


Introduction

Contour images are widely used for the visualization of information. This article presents an approach to creating contour plots using the one-dimensional texture mapping feature of OpenGL [1, 2]. In this approach, a 1-D texture image is first created based on the colors and number of bands required in the contour plot. The texture mapping operation then maps this image onto the geometry [3] in 3-D coordinates.

I present an example in this article that demonstrates the efficiency of this approach. Specifically, the example demonstrates how OpenGL enables you to shift work to hardware to achieve high run-time speed. OpenGL is an API that provides access to graphics hardware for the rendering of high-quality 3-D images. This powerful 3-D graphics library is well known for its performance and portability.

I'll start with a description of one-dimensional texture mapping. Then I'll guide you through the key points of this approach. You will learn how to create 1-D texture objects to represent data, and how to map the texture to triangles to create contours. Once you know how to create a contour plot, you will have a feel for how efficient the approach is. The example also shows how to apply lighting and alpha blending to contours to enhance the visual effects.

A side note: I used MFC in this example, but only to develop the GUI portion of the code. Texture mapping has nothing to do with MFC; you can use other GUI APIs, or you can implement texture mapping solely with OpenGL.

One-Dimensional Texture Mapping

The technology of texture mapping has been around for a long time. To obtain better performance, it has migrated from a software-based technique to a hardware-supported feature. It was not until low-cost hardware support and high-quality APIs became available that texture mapping began to be more widely used. The method discussed in this article takes advantage of the wide availability of texture mapping as a hardware capability.

A 1-D texture image is a one-dimensional array of pixels, also called texels. You can imagine a 1-D texture as a picture having width only, and a height of just one pixel — or vice versa. You can define the color of each pixel to represent the color of a contour band. Usually, the color of the first pixel denotes the minimum value of the data, and the last pixel represents the maximum value. The coordinate system of a texture image defines the first pixel's coordinate at 0.0 and the last pixel's at 1.0. In rendering routines, a coordinate between 0.0 and 1.0 is used to select a color from the texture image.

A texture can be mapped to primitive objects such as triangles and higher-order polygons. To perform a texture mapping, you specify a texture coordinate for each vertex in the object, in addition to the geometrical coordinates. This causes vertices to be rendered in the colors corresponding to their texture coordinates. Between vertices, the face of the object is rendered with the colors corresponding to a "stretched" texture image. More specifically, the result depends on a magnification filter, which defines how the texels in the texture occupy the positions between vertices. One of the commonly used magnification filters in contour plotting is the nearest-neighbor filter. In this mode, the pixels between vertices display the color of the closest texel in the texture image, so contours are rendered with distinct boundaries. Another useful filter is the linear filter, which linearly interpolates between the colors of the closest texels on either side of the pixel. The result is a continuous color ramp. All of this work is done before the geometry is rendered to the screen or buffer.
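
In OpenGL the filter is selected per texture object with glTexParameteri (Listing 1, presented later, uses the nearest-neighbor case). As a minimal sketch, assuming the 1-D texture object is already bound, choosing between distinct bands and a continuous ramp comes down to the following calls:

// Nearest-neighbor filtering yields distinct contour bands
::glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
::glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

// ... or use GL_LINEAR instead for a continuous color ramp:
// ::glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// ::glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);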

The procedure for creating contours using texture mapping can be described in four steps:

  1. Create a texture image.
  2. Enable texture mapping.
  3. Normalize the data to be represented.
  4. Render geometric objects with texture mapping.

The objective in this example is to map a 1-D texture image to triangles in 3-D space to create a contour plot with eight color bands.

Creating a Texture Image

There is nothing special about my approach to creating a texture image. I elected to use an OpenGL texture object. The routine that creates a texture object is shown in Listing 1.

The CreateTextureObject routine starts by defining a texture image. The color of each texel is composed of red, green, and blue components, known as RGB values. A texture image's width must be a power of two; I use eight texels to hold the eight band colors.

The glPixelStorei function sets the pixel storage mode. You can find information on how to use this and the following OpenGL commands in many references [1, 2, 4, 5]. I will not repeat the instructions in this article.

The glGenTextures function generates a texture object name.

CreateTextureObject then calls the glBindTexture function to create a 1-D texture object using the given name. This function also sets the newly created texture object as current — the one which will be applied in subsequent rendering operations. If you create more than one texture object, the last one created will be current. You may need to call this command again to set the current texture [6].
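
For example, if a second texture object existed (the name m_nLegendTexName here is purely hypothetical), you would rebind before rendering the geometry that uses it:

// Hypothetical second texture object, assumed to have been
// generated and defined elsewhere
::glBindTexture(GL_TEXTURE_1D, m_nLegendTexName);
// ... render geometry that uses the second texture ...
::glBindTexture(GL_TEXTURE_1D, m_nTexName[0]);  // back to the contour texture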

The glTexParameteri calls tell OpenGL how to map the texture to geometry. The first call selects a nearest-neighbor magnification filter, used when the texture image is smaller than the geometry in terms of pixels (each texel covers more than one screen pixel). The second selects the same filter for minification, used when the texture image is larger than the geometry.

The glTexImage1D call specifies the 1-D texture based on the defined texture image. This command hands the texture data to OpenGL, so all subsequent lookups into the texture image can take place in graphics hardware rather than in the application software. This capability significantly increases performance.

It is common to need to modify a texture after creating it. The glTexSubImage1D function lets you replace a contiguous portion of the texture image, down to a single texel. Modifying an existing texture is less computationally expensive than creating a new one. However, you can't specify texels outside the range of the existing texture; in other words, you can't increase the size of the texture.
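
As a small sketch (the offset and the replacement color are arbitrary, chosen only for illustration), replacing the fourth band color of the eight-texel texture from Listing 1 might look like this:

// Replace the fourth texel (offset 3, width 1) with white,
// leaving the rest of the 8-texel texture unchanged
unsigned char NewColor[3] = { 0xff, 0xff, 0xff };
::glBindTexture(GL_TEXTURE_1D, m_nTexName[0]);
::glTexSubImage1D(GL_TEXTURE_1D, 0, 3, 1, GL_RGB,
    GL_UNSIGNED_BYTE, NewColor);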

Enabling Texture Mapping

Enabling and disabling the 1-D texture mapping capability is easy. Just call the OpenGL state functions glEnable and glDisable with the argument GL_TEXTURE_1D. In addition, you can call the glIsEnabled function to query the current state. Once you disable texture mapping, you can still render the geometry, but it is displayed without the texture. These three state functions work the same way with the other OpenGL graphics capabilities, so you can toggle capabilities without disturbing the rest of the rendering code.
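
A minimal sketch of such a toggle follows; the display-list name m_nTriangleList is hypothetical and stands in for whatever geometry you render.

::glEnable(GL_TEXTURE_1D);          // turn 1-D texture mapping on
::glCallList(m_nTriangleList);      // render the textured geometry
::glDisable(GL_TEXTURE_1D);         // turn it off again

if (::glIsEnabled(GL_TEXTURE_1D))
{
    // 1-D texture mapping is currently active
}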

Normalizing Data to be Represented

The coordinates of a texture image range from 0.0 to 1.0, while the data to be represented can fall in any arbitrary range. Thus, the raw data must be normalized so that it can be mapped to the texture image. Normalization consists of scaling and offsetting the data values so that they all fall within the range 0.0 to 1.0. Once the data has been normalized, it can be used to look up the corresponding texture color.
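
A minimal helper for this step might look like the following; the member names m_dMinValue and m_dMaxValue are hypothetical stand-ins for the data range, and the arithmetic is just the usual linear rescale.

// Map a raw data value into the [0.0, 1.0] texture coordinate range
float CContoursDoc::NormalizeValue(double dValue) const
{
    double s = (dValue - m_dMinValue) / (m_dMaxValue - m_dMinValue);
    if (s < 0.0) s = 0.0;   // clamp values that fall outside
    if (s > 1.0) s = 1.0;   // the expected data range
    return static_cast<float>(s);
}

The returned value is what gets passed to glTexCoord1f for the corresponding vertex.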

Rendering Geometry with Texture Mapping

Mapping the texture image onto a triangle is straightforward. All you need to do is set the texture coordinate for each vertex. This process is shown in Listing 2. The code between the calls to glBegin and glEnd creates a single triangle. For each vertex of the triangle, the call to glTexCoord1f sets the current texture coordinate; then the call to glVertex3d specifies the geometrical coordinates of the vertex. In Listing 2, the texture coordinates at the triangle's vertices are 0.1, 0.7, and 1.0 respectively. The geometrical coordinates of the vertices are (-1.0, -1.0, 0.0), (-1.0, 1.0, 0.0), and (1.0, 0.8, 0.0) respectively. When an object is defined in this way, OpenGL applies the texture to the pixel information after the shading operations are completed. Note that the dimensionality of the texture has nothing to do with the dimensionality of the geometry.

The rendering result is a triangle with color contours as illustrated in Figure 1. The color at the vertex is modified with the corresponding color from the texture image. The colors between vertices depend on the specified magnification filter.

This approach provides two important benefits here: 1) the interpolation between vertices takes place in texture space and it is done in hardware, as opposed to the interpolation taking place in color space and being done in software; 2) only one value is used to specify the texture coordinate, in contrast to three values that would be required if working in color space. OpenGL handles the interpolation and all of the dirty work, and it works very fast.

However, if I render a simulated curved surface composed of a group of triangles, the surface may not be rendered smoothly. To render a curved surface smoothly, I must tell OpenGL the vertex normal at each vertex [7]. Recall that a normal to a planar surface is a vector that is at 90 degrees to the plane. A vertex normal is the mean value of the normals of the adjacent polygons. In the sample code (available at www.cuj.com/code) the normal at each vertex is set via a call to glNormal3fv. When the triangles are rendered, this produces a smooth surface contour plot as shown in Figure 2.
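
The essential pattern, simplified from what the sample code does, is to emit a normal, a texture coordinate, and a position for every vertex; the arrays vNormal, fTexCoord, and vPos below are hypothetical placeholders for the per-vertex surface data.

// For a smoothly shaded, contoured surface, supply an averaged
// vertex normal along with the texture coordinate at every vertex
::glBegin(GL_TRIANGLES);
for (int i = 0; i < nVertexCount; i++)
{
    ::glNormal3fv(vNormal[i]);      // averaged normal at this vertex
    ::glTexCoord1f(fTexCoord[i]);   // normalized data value
    ::glVertex3fv(vPos[i]);         // geometric position
}
::glEnd();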

OpenGL treats higher-order polygons as groups of triangles. It divides a polygon into a group of triangles based on vertex order, as shown in Figure 3. Thus, mapping a 1-D texture image to a high-order polygon is equivalent to mapping to a group of triangles.
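
For instance, a texture-mapped quadrilateral (the coordinates here are arbitrary, for illustration only) is specified exactly like the triangle in Listing 2, and OpenGL decomposes it into triangles internally:

::glBegin(GL_QUADS);
    ::glTexCoord1f(0.2f);  ::glVertex3d(-1, -1, 0);
    ::glTexCoord1f(0.4f);  ::glVertex3d( 1, -1, 0);
    ::glTexCoord1f(0.9f);  ::glVertex3d( 1,  1, 0);
    ::glTexCoord1f(0.6f);  ::glVertex3d(-1,  1, 0);
::glEnd();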

Adding Light and Semi-Transparency

The texture mapping approach to contour generation makes it easy to add other visual effects. In fact, in an OpenGL application, there is no difference in principle between texture mapping and adding lighting or alpha blending (for semi-transparency effects). The example source code demonstrates how to achieve this. Frequently, some geometric objects need to be rendered with lighting or alpha blending while others do not. For example, you may not want a graph legend to be affected by blending and lighting. You can call glDisable to temporarily turn off the blending and lighting capabilities before rendering the legend, and call glEnable to turn them back on afterwards. Figures 4 and 5 show three-dimensional contours with lighting and alpha blending.
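
A sketch of that pattern follows; the display-list names are hypothetical, and the blend function shown is the common choice for semi-transparency rather than necessarily the one the sample code uses.

// Render the contoured surface with lighting and alpha blending
::glEnable(GL_LIGHTING);
::glEnable(GL_BLEND);
::glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
::glCallList(m_nSurfaceList);

// The legend should keep its exact colors, so turn both
// capabilities off temporarily while it is drawn
::glDisable(GL_LIGHTING);
::glDisable(GL_BLEND);
::glCallList(m_nLegendList);

// Restore the capabilities for the rest of the scene
::glEnable(GL_LIGHTING);
::glEnable(GL_BLEND);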

Conclusion

Creating contours with texture mapping is efficient and flexible. Compared with explicit interpolation methods, the texture mapping method requires much less code and achieves higher run-time speed. It is also flexible when adding other features, such as lighting and semi-transparency. Once you master the concept and technique, you can easily modify the given example for your own requirements.

To take it a step further, a texture object can be defined as a special memory segment, which holds more information than just an image. Mapping this information to geometry can create more sophisticated applications. That is beyond the scope of this article.

Notes and References

[1] Mason Woo, Jackie Neider, Tom Davis, Dave Shreiner, and the OpenGL Architecture Review Board. OpenGL Programming Guide, Third Edition: The Official Guide to Learning OpenGL, Version 1.2 (Addison-Wesley, 1999).

[2] Dave Shreiner (Editor), and the OpenGL Architecture Review Board. OpenGL Reference Manual, Third Edition: The Official Reference Document to OpenGL, Version 1.2 (Addison-Wesley, 1999).

[3] "Geometry" refers to a set of renderable objects, such as triangles or polygons.

[4] Richard S. Wright, Jr., and Michael Sweet. OpenGL SuperBible, Second Edition (Waite Group, 1999).

[5] Ron Fosner. OpenGL Programming for Windows 95 and Windows NT (Addison-Wesley, 1996).

[6] Like many graphics APIs, OpenGL does not assign properties, such as texture, to individual objects; rather, when it renders an object it uses a set of current system properties, which are global in scope.

[7] This is not a texture mapping issue per se; it is a computer graphics issue.

Wenfei Wu is a software development professional with extensive experience in GUI and 3-D graphics. He is an R&D Engineer at ADINA R&D, Inc. He holds a Master's degree in engineering from McMaster University, Canada. He can be reached at [email protected].
