Chapter Overview

Background

Geometry is a common core like the others introduced in the last chapter. However, it is also the most important and most complicated one, and there are a number of things you should know about it. That is why a whole chapter is dedicated to this special core.

Before we start generating geometry like crazy, we need to know something about the underlying concepts. At first the implementation might look a bit uncomfortable, but please keep in mind that the geometry class was designed to provide a maximum of flexibility while still performing well.

One big advantage of the OpenSG geometry is its great flexibility. It is possible to store different primitive types (triangles, quads etc.) in one single core; there is no need to create separate cores for each primitive type. Even lines and points can be used in the same core along with triangles and others.

All data describing the geometry is stored in separate arrays. Positions, colors, normals as well as texture coordinates are each stored in their own osg::MField. OpenGL is capable of processing different data formats, because some perform better under certain circumstances, or because only a part of the data is needed. OpenSG supports the same data formats by providing a lot of different classes which are, luckily, very similar to use and all derived from osg::GeoProperty. Prominent examples of geometry properties are osg::GeoPositions3f or osg::GeoNormals3f. There are many other data types, of course; just have a look at the osg::GeoProperty description page. All these geometry property classes are essentially STL vectors, slightly enhanced with some features we already know, like multi-thread safety.

Often one vertex is used by more than one primitive. On a uniform grid, for example, most vertices are shared by four quads. OpenSG can take advantage of this by indexing the geometry. Instead of every primitive keeping a separate copy of its vertices, it stores integer indices that point to the vertices. In this way it is possible to reuse data with a minimum of additional effort. It is even possible to use more than one index for different properties. Jump to Indexing for a detailed overview.

First of all we are going to build a geometry node with the most important features, from the bottom up. A good example is the simulation of water, as this covers many problems you might encounter when creating your own geometry. This water tutorial will be developed throughout the whole chapter. Let us think about what we actually need and what we are going to do in detail:

We simulate water by using a uniform grid with N * N points, where N is some integer constant. As these points are equidistant we only need to store the height value (the value that is going to be changed during the simulation), one global width and length, as well as a global origin where the grid is going to be placed.

There are a lot of algorithms which simulate the movement of water more or less adequately or quickly, but as we are more concerned with how to do things in OpenSG, I propose that we just take a very simple 'formula' to calculate the height values. Of course, if you are interested, you may replace it with any other.

Now take our framework again as a starting point, then add some global variables and a new include file.

```
#include <OpenSG/OSGGeometry.h>

// this will specify the resolution of the mesh
#define N 100

// the two dimensional array that will store all height values
Real32 wMesh[N][N];

// the origin of the water mesh
Pnt3f wOrigin = Pnt3f(0,0,0);

// width and length of the mesh
UInt16 width  = 100;
UInt16 length = 100;
```

The include and the global variables go at the top of the file. The initialization code that follows goes right at the beginning of the createScenegraph() function, which should still be empty at this point.

Before we start creating the geometry we should first initialize the wMesh array to avoid corrupt data when building the scenegraph. For now, we simply set all height values to zero.

```
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        wMesh[i][j] = 0;
```

Now we can begin to build the geometry step by step. The first thing to do is to define the type of primitives we want to use. Quads are sufficient for our purposes. However, as mentioned before, it is possible to use more than one primitive type. That will be discussed in the section Primitive Types.

```
// GeoPTypes will define the types of primitives to be used
GeoPTypesPtr type = GeoPTypesUI8::create();
beginEditCP(type, GeoPTypesUI8::GeoPropDataFieldMask);
    // we want to use quads ONLY
    type->addValue(GL_QUADS);
endEditCP(type, GeoPTypesUI8::GeoPropDataFieldMask);
```

* We just told OpenSG that the geometry core we are about to create will consist of only one single type of primitive: quads. But of course this is not restricted to a single quad. Just watch the next step. *

Now we have to tell OpenSG how long (i.e. how many vertices) the primitives are going to be. The length of a single quad is naturally four, but we want more than one quad, of course, so we multiply four by the number of quads. With N*N vertices we have (N-1)*(N-1) quads.

```
// GeoPLength will define the number of vertices of
// the used primitives
GeoPLengthsPtr length = GeoPLengthsUI32::create();
beginEditCP(length, GeoPLengthsUI32::GeoPropDataFieldMask);
    // the length of a single quad is four ;-)
    length->addValue((N-1)*(N-1)*4);
endEditCP(length, GeoPLengthsUI32::GeoPropDataFieldMask);
```

* We have to provide as many length values as we provided types in the previous step. As we only added one quad type, we need to specify one single length. With N=100 the length will be 39204! Of course this does not mean we are creating one quad with that many vertices. OpenSG is smart enough to know that a quad needs four vertices, so with this single length value OpenSG stores 39204/4 = 9801 quads: it finishes a quad after four vertices have been passed and begins with the next one. *
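To double-check the arithmetic above, here is a tiny self-contained sketch (plain C++, independent of OpenSG; the helper names are made up for illustration):

```cpp
#include <assert.h>

// number of quads in an n x n grid of vertices
int quadCount(int n)
{
    return (n - 1) * (n - 1);
}

// the single value passed to the GeoPLengths property:
// four vertices per quad
int lengthValue(int n)
{
    return quadCount(n) * 4;
}
```

For N=100 this yields quadCount(100) == 9801 and lengthValue(100) == 39204, matching the numbers above.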

Now we will provide the positions of our vertices by using the data of the 'wMesh' array we initialized previously.

```
// GeoPositions3f stores the positions of all vertices used in
// this specific geometry core
GeoPositions3fPtr pos = GeoPositions3f::create();
beginEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
    // here they all come
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            pos->addValue(Pnt3f(x, wMesh[x][z], z));
endEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
```

* You might ask yourself whether what we are doing here is actually useful. It looks like the width and length of the mesh correspond to the resolution we choose, that is, the higher the resolution, the bigger the mesh. Well, that is correct. After creating the complete geometry core we are going to scale the whole thing to the correct size given by the global variables [The Author: I actually haven't found the time to do that - so this may follow in the near future]. Of course it would also be reasonable to store whole points, i.e. a two-dimensional array of osg::Pnt3f, instead of just height values. But storing whole points consumes more memory than one Real32 value per grid point. It is up to you and whether memory is a concern or not. As we want to play around a bit with scenegraph manipulation, I have chosen the first variant. *
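To make the memory argument concrete, here is a small back-of-the-envelope sketch (plain C++, not OpenSG code; it assumes Real32 and Pnt3f correspond to one and three 32-bit floats respectively):

```cpp
#include <assert.h>
#include <stddef.h>

// bytes for an n x n height field of 32-bit floats (the wMesh variant)
size_t heightFieldBytes(int n)
{
    return (size_t)n * n * sizeof(float);
}

// bytes for an n x n grid of full 3-component points instead
size_t pointFieldBytes(int n)
{
    return (size_t)n * n * 3 * sizeof(float);
}
```

Storing full points triples the memory: for N=100 that is 120000 bytes instead of 40000.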

Now we assign colors to the geometry. Every vertex needs a color, so we add one color value per vertex. In a simple case like ours this is not very efficient, but it is easy to implement. Multi-indexing, which I present later on, is an alternative.

```
// GeoColors3f stores all color values that will be used
GeoColors3fPtr colors = GeoColors3f::create();
beginEditCP(colors, GeoColors3f::GeoPropDataFieldMask);
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            colors->addValue(Color3f(0, 0, (x+1)/(z+1)));
endEditCP(colors, GeoColors3f::GeoPropDataFieldMask);
```

Normals are still missing. We add them in a similar way to the colors.

```
GeoNormals3fPtr norms = GeoNormals3f::create();
beginEditCP(norms, GeoNormals3f::GeoPropDataFieldMask);
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            // as initially all heights are set to zero, yielding a plane,
            // we set all normals to (0,1,0), parallel to the y-axis
            norms->addValue(Vec3f(0, 1, 0));
endEditCP(norms, GeoNormals3f::GeoPropDataFieldMask);
```

And some material...

```
SimpleMaterialPtr mat = SimpleMaterial::create();
```

* Well, this material is not doing anything interesting except for its existence. But if no material is assigned, the renderer stops doing its job, leaving you with a black screen. So we assign an "empty" material. *

Something still missing? Yes, of course! If you think about what we have done so far you might notice that something is not quite correct. We have not yet considered that a quad uses four vertices and that most vertices, except for those at the borders, are shared by four quads. However, we provided every vertex just a single time.

Of course we did, because anything else would be a waste of memory. That is what indices are for. The next block of code tells OpenSG which vertices are used by which quad. This way the vertices are only referenced, not copied.

**Vertices are used by multiple quads**

* Quad A uses vertices 1, 2, 3 and 4, whereas vertex 4 is shared by quads A, B, C and D. The index list which defines quad A would point to the vertices 1, 2, 3 and 4. Quad B would reuse vertices 2 and 4 as well as two others not shown here. *

```
// GeoIndicesUI32 points to all relevant data used by the
// provided primitives
GeoIndicesUI32Ptr indices = GeoIndicesUI32::create();
beginEditCP(indices, GeoIndicesUI32::GeoPropDataFieldMask);
    for (int x = 0; x < N-1; x++)
        for (int z = 0; z < N-1; z++)
        {
            // points to four vertices that will
            // define a single quad
            indices->addValue( z   *N + x  );
            indices->addValue((z+1)*N + x  );
            indices->addValue((z+1)*N + x+1);
            indices->addValue( z   *N + x+1);
        }
endEditCP(indices, GeoIndicesUI32::GeoPropDataFieldMask);
```

* There are different possibilities on how to index the data. That will be discussed in this section: Indexing. *
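If you want to convince yourself that the index layout above really shares vertices, the following standalone sketch (plain C++, helper names made up for illustration) reproduces the quad index pattern and counts how often a given vertex is referenced:

```cpp
#include <assert.h>

// indices of the four corners of quad (x, z) in an n x n vertex grid,
// matching the loop in the tutorial code
void quadIndices(int x, int z, int n, unsigned out[4])
{
    out[0] =  z      * n + x;
    out[1] = (z + 1) * n + x;
    out[2] = (z + 1) * n + x + 1;
    out[3] =  z      * n + x + 1;
}

// count how many quads of the grid reference the vertex with index v
int quadsUsingVertex(unsigned v, int n)
{
    int count = 0;
    for (int x = 0; x < n - 1; x++)
        for (int z = 0; z < n - 1; z++)
        {
            unsigned q[4];
            quadIndices(x, z, n, q);
            for (int i = 0; i < 4; i++)
                if (q[i] == v)
                    count++;
        }
    return count;
}
```

In a 4x4 grid, for example, the interior vertex with index 5 is referenced by four quads, while the corner vertex 0 belongs to only one.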

Now that we have created all data we need, we can create the geometry object that will hold all the pieces together.

```
GeometryPtr geo = Geometry::create();
beginEditCP(geo, Geometry::TypesFieldMask     |
                 Geometry::LengthsFieldMask   |
                 Geometry::IndicesFieldMask   |
                 Geometry::PositionsFieldMask |
                 Geometry::NormalsFieldMask   |
                 Geometry::ColorsFieldMask    |
                 Geometry::MaterialFieldMask  );

    geo->setTypes(type);
    geo->setLengths(length);
    geo->setIndices(indices);
    geo->setPositions(pos);
    geo->setNormals(norms);
    geo->setColors(colors);
    // don't forget to assign the material created above
    geo->setMaterial(mat);

endEditCP(geo, Geometry::TypesFieldMask     |
               Geometry::LengthsFieldMask   |
               Geometry::IndicesFieldMask   |
               Geometry::PositionsFieldMask |
               Geometry::NormalsFieldMask   |
               Geometry::ColorsFieldMask    |
               Geometry::MaterialFieldMask  );
```

* Some pages ago I told you that the field masks need not be specified, in which case the library assumes that all fields will be changed. I also told you that leaving them out will slow down your application. However, as start-up is not performance critical in most circumstances, I personally would leave the field masks out. To be honest: who cares if start-up takes 5 or 5.1 seconds ;-) Maybe you just give it a try and compare the time you wait for the system to start up. *

Finally we put the newly created core into a node and return it.

```
NodePtr root = Node::create();
beginEditCP(root);
    root->setCore(geo);
endEditCP(root);

return root;
```

Your first version of the water simulation is done. Compile and execute it and watch the beautiful result! Please note that you need to rotate the view in order to see anything. This is because the camera is initially located at y=0, the same height as the plane, so you can see the plane only as a line. We can fix this by adding some offset to the camera position during setup. You can add the code directly before glutMainLoop is called in the main function:

```
Navigator *nav = mgr->getNavigator();
nav->setFrom(nav->getFrom() + Vec3f(0,50,0));
```

* This will get the navigator helper object from the simple scene manager. The setFrom() method allows you to specify a point (osg::Pnt3f) where the camera shall be located. In this case we get the current position via getFrom() and add 50 units along the y-axis. This ensures that the camera is above our mesh and not at the same height. *

The code so far can be found in file progs/09geometry_water.cpp.

What? A plane? That whole effort for a simple plane?

Of course the result is a plane, as we previously set all height values to zero. We need to modify the values during the display function. But first let us have a deeper look at what we have done so far!

If you remember the beginning, when we started to create the water mesh geometry, you know that we told OpenSG to use just one single primitive type, quads, with a length of 39204 vertices. Now some words about the geometry's flexibility: if you want to use triangles, quads and some polygons you do not need to create separate geometry cores; you can use them all in one single core, even mixed with lines and points.

This is done by first telling OpenSG which primitives you are going to use. Let us imagine this little example: we want to use 8 quads, 16 triangles, two lines and another 8 quads. Sure, you could (and should) combine the two groups of quads into one batch of 16 quads, but we leave it this way for now. Data from modeling packages is not well structured most of the time, so better get used to it ;-)

Now, we simply tell OpenSG what is going to come:

```
// do not add this code to the tutorial source.
// It is just an example
GeoPTypesPtr type = GeoPTypesUI8::create();
beginEditCP(type, GeoPTypesUI8::GeoPropDataFieldMask);
    type->addValue(GL_QUADS);
    type->addValue(GL_TRIANGLES);
    type->addValue(GL_LINES);
    type->addValue(GL_QUADS);
endEditCP(type, GeoPTypesUI8::GeoPropDataFieldMask);
```

So far so good, but OpenSG also needs to know how many primitives of each type will come. The length values we provided earlier in our example specify the number of vertices, not the number of quads, triangles or whatever. So with a little math we find that we need 32 vertices for the 8 quads (8 quads * 4 vertices per quad = 32), 48 for the 16 triangles, and so on.

```
// do not add this code to the tutorial source.
// It is just an example
GeoPLengthsPtr length = GeoPLengthsUI32::create();
beginEditCP(length, GeoPLengthsUI32::GeoPropDataFieldMask);
    length->addValue(32); // 8 quads
    length->addValue(48); // 16 triangles
    length->addValue(4);  // 2 lines
    length->addValue(32); // 8 quads
endEditCP(length, GeoPLengthsUI32::GeoPropDataFieldMask);
```

Here is a list of all supported primitives:

| Type | Number of vertices per primitive |
| --- | --- |
| GL_POINTS | 1 |
| GL_LINES | 2 |
| GL_LINE_STRIP | any |
| GL_LINE_LOOP | any |
| GL_TRIANGLES | 3 |
| GL_TRIANGLE_STRIP | any |
| GL_TRIANGLE_FAN | any |
| GL_QUADS | 4 |
| GL_QUAD_STRIP | any |
| GL_POLYGON | any |

If you are using strip primitives, please make sure to provide a correct number of vertices!

Please note that concave polygons are supported by neither OpenGL nor OpenSG! So make sure your polygons with more than three vertices are convex.

The following image shows an example of primitive types and their corresponding lengths.

**Primitives and corresponding lengths**

OpenSG geometry is very flexible and powerful. It is easy to mix different primitive types in one core, to assign properties like normals or texture coordinates to them, and you can even reuse data with indexing (see the next section, Indexing). So far everything seems fine, but from another point of view things might become difficult. If you want to walk over all triangles, for example, you can easily run into problems, as the data might be a wild mix of different primitive types. You would have to take care of a lot of special cases, usually solved by some kind of big, ugly switch block.

This is where geometry iterators can help you out. They iterate primitive by primitive, face by face (a face being either a quad or a triangle), or triangle by triangle.

For example, if you are using the built-in ray intersection functionality you might have encountered the problem of finding the triangle you actually hit. You can easily get the hit point, but the promising method getHitTriangle() returns an Int32... so what to do? This integer is the position of the triangle within the geometry's index data. We will have a closer look at the ray intersection functions later in the section Intersect Action, but for now I only want to show a little code fragment of how to use a geometry iterator. Let's imagine we have sent a ray into the scene, hit a triangle, and now have an integer returned from that class. We want to retrieve the coordinates of the three vertices.

```
// the object 'ia' is of type osg::IntersectAction and
// stores the result of the intersection traversal

// retrieve the hit triangle as well as the node
Int32 pIndex = ia->getHitTriangle();
NodePtr n = ia->getHitObject();

// we make sure that the core of this node is
// actually a geometry core, just for safety
std::string coretype = n->getCore()->getTypeName();
if (coretype != "Geometry")
{
    std::cerr << "No geometry core! Nothing to do!" << std::endl;
    return;
}

// get the geometry
GeometryPtr geo = GeometryPtr::dcast(n->getCore());

// create the iterator object
TriangleIterator ti(geo);

// jump to the index we got from the
// IntersectAction class
ti.seek(pIndex);

// and now retrieve the coordinates
Pnt3f p1 = ti.getPosition(0);
Pnt3f p2 = ti.getPosition(1);
Pnt3f p3 = ti.getPosition(2);
```

The usage of these iterators is very easy, at least once you know how to do it. When I first started using OpenSG I had a similar problem to solve and I did it with five times the code and much more effort. It took me some time to cut it down to these few lines... :)

Indexing is a very important topic if you want to use geometry efficiently. In the example above we added each vertex only a single time and it was reused by all neighbouring primitives. On the one hand this is smarter than providing such a vertex four times; on the other hand we added the same color N*N times, although adding it once would have been sufficient. All these problems can be addressed by choosing the right kind of indexing.

First of all, there is the possibility to not use indexing at all. The following figure shows how the data would be organized in memory.

**Geometry data which is not indexed**

* I guess this figure needs some explanation. At the top you have three colored circles, each representing a vertex. The yellow vertex, for example, is used by both quads and the triangle, whereas the blue vertex is used by the right quad and the triangle. Below you find a sample data set. The first row contains the data of a GeoPTypes object. In this case we have two quads, a triangle, a polygon, another quad followed by two triangles, and finally another polygon. This row may continue with even more types. The next row defines the lengths of these types. Quads have a length of four and triangles have three, that's easy, but polygons can have any number of vertices. The last three rows represent the data that defines the geometry. In this case we have position, normal and color information. This could be extended by more data (e.g. texture coordinates). A column is one data set for one vertex. *

You can now see that the yellow vertex appears three times in our data. Without indexing the vertex data is copied every time it is used! Of course this is not very efficient. You will learn about more efficient methods next.
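A rough calculation illustrates the cost for our water grid. This is a hedged back-of-the-envelope sketch in plain C++ (not OpenSG code); it counts only position and index data, assuming 32-bit floats and 32-bit indices:

```cpp
#include <assert.h>
#include <stddef.h>

// position bytes if every quad carries its own copy of its four
// corner vertices (3 floats each)
size_t nonIndexedBytes(int n)
{
    size_t quads = (size_t)(n - 1) * (n - 1);
    return quads * 4 * 3 * sizeof(float);
}

// position bytes plus index bytes if each vertex is stored once and
// referenced through 32-bit indices
size_t indexedBytes(int n)
{
    size_t verts = (size_t)n * n;
    size_t quads = (size_t)(n - 1) * (n - 1);
    return verts * 3 * sizeof(float) + quads * 4 * sizeof(unsigned);
}
```

For N=100 the non-indexed variant needs 470448 bytes of position data, while the indexed variant needs 276816 bytes, indices included.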

Single-indexed geometry is the most often used type and also very close to OpenGL. Indexing is easy and efficient, but it does not handle all cases. This is the kind of geometry storage we used in the water mesh example above. The following figure shows how single indexing works.

**Indexed geometry data**

* As you can see, every vertex is stored exactly once. The data of the yellow vertex is referenced three times. *

Indexed geometry is in general a lot better than non-indexed geometry, but there are still some issues that are not solved optimally. In our water mesh example every vertex has essentially the same color. With single-indexed geometry the positions, normals and colors arrays all need the same number of entries, so we have to store the same color far more often than necessary. This issue is addressed by multi-indexed geometry.

Another issue is that some vertices need different additional data even though the position is the same. For example, a textured cube has different normals at a corner for each adjoining face, whereas the position is the same. To handle such cases with single indexing, the vertex data needs to be replicated. This, too, can be handled with multi-indexed geometry.

In order to solve the issues encountered with single indexing you need to use multiple indices. However, using a separate index field for every property would double the number of fields of the Geometry NodeCore, which is already pretty complex, and working with several different index fields would not be fun at all. That is why OpenSG uses another approach: interleaving indices.

The idea is quite simple. You define a mask of which indices you are going to use; let's say we want to use indices for positions, normals and colors, in this order. Now you have to provide every vertex with three indices, so a triangle would have nine indices assigned to it. The first index is used for the position, the second for the second property named in the mask, and so on.

Again, the following figure shows how it works.

**Multi indexed geometry**

When using multi-indexing the property arrays no longer need to be equally long. In our case this means that the color array could consist of a single entry, with every color index pointing to this one and only entry.
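The interleaved layout can be sketched in a few lines of plain C++ (not the OpenSG API; the helper and the example data are made up for illustration). With the mask (positions, normals, colors), vertex i owns entries 3*i to 3*i+2 of the index array:

```cpp
#include <assert.h>

// returns the index entry for property slot 'prop' (0 = position,
// 1 = normal, 2 = color) of vertex 'vert'
unsigned interleavedIndex(const unsigned *indices, int propsPerVertex,
                          int vert, int prop)
{
    return indices[vert * propsPerVertex + prop];
}

// example: one triangle whose three vertices all share color entry 0
const unsigned exampleIndices[9] = { 0,0,0,  1,1,0,  2,2,0 };
```

Here the color array could hold a single entry, referenced by all three vertices, while positions and normals still have three entries each.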

So now that you have non-indexed, single-indexed and multi-indexed geometry at hand, which should you use? In general, single-indexed geometry is the most efficient way for rendering. It can make sense to use non-indexed geometry if there are no shared vertices; in this case indices only cost memory and bring no benefit. Multi-indexed data can be more compact in terms of memory (if the data saved is bigger than the additional indices), but OpenGL does not natively support it. Therefore it has to be split up to be used with OpenGL, which can have a big impact on performance. Only use it if memory is really critical or you really need it.

Conclusion: use single-indexed geometry, if you can.

Often the geometry itself stays untouched during a simulation, except for rigid transformations applied to the whole geometry. However, if it is necessary to modify the geometry during a simulation (like in our water example), it is usually very important to do it fast. In this section we animate the water mesh, and along the way I will demonstrate some tricks on how to speed up this important task.

Before we start, we quickly implement a function which simulates the behaviour of water with respect to the time passed. As said earlier, I will only use a simple function, but feel free to replace it with a more complex one.

Add this block of code somewhere at the beginning of the file (at least before the display function).

```
void updateMesh(Real32 time)
{
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            wMesh[x][z] = 10 * cos(time/1000.f + (x+z)/10.f);
}
```

* Please note: it is important to use the float literals (1000.f and 10.f). With integer operands an integer division would be performed, truncating the result to discrete values, which is not correct here! *

And replace the display function with this code

```
void display(void)
{
    Real32 time = glutGet(GLUT_ELAPSED_TIME);
    updateMesh(time);

    mgr->redraw();
}
```

Well, of course we won't see anything different on screen yet, because we have updated our data structure, but not the scenegraph. So now comes the interesting part: we are going to modify the data stored in the graph. Of course we could generate a new geometry node and replace the old one with it, but this is obviously not very efficient due to the large amount of memory deallocation and allocation. What we are actually going to do is the following:

First of all we need a pointer to the geometry node we want to modify. Luckily this is no big deal this time, as we know that the root node itself contains the geometry core. Add this block of code in the display() function, right before mgr->redraw() is called:

```
// we extract the core out of the root node
// as we know this is a geometry node
GeometryPtr geo = GeometryPtr::dcast(scene->getCore());

// now modify its content:
// first we need a pointer to the position data field
GeoPositions3fPtr pos = GeoPositions3fPtr::dcast(geo->getPositions());

// this loop is similar to the one we used to generate
// the data during createScenegraph()
beginEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
    // here they all come
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            pos->setValue(Pnt3f(x, wMesh[x][z], z), N*x + z);
endEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
```

* Previously we used addValue() to add osg::Pnt3f objects to the osg::GeoPositions3f array. Now we use setValue() to overwrite existing values. If you have a look at the code where we first added the points to the array, you can see that they were added column major, i.e. the inner loop added all points along the z-axis with x being zero, then all points with x=1, and so on. setValue() takes a point as its first parameter and an integer as its second, which defines the index of the data that will be overwritten. With N*x+z we overwrite the values in the same order we generated them: column major. *
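The index arithmetic is easy to get wrong, so here is a minimal sketch (plain C++, helper names made up) of the mapping between grid coordinates and the flat array position, matching the column-major order used above:

```cpp
#include <assert.h>

// flat array position of grid point (x, z); x outer, z inner
int gridIndex(int x, int z, int n) { return n * x + z; }

// inverse mapping back to grid coordinates
int gridX(int index, int n) { return index / n; }
int gridZ(int index, int n) { return index % n; }
```

The mapping and its inverse round-trip cleanly, which is a quick way to check for mixed-up axes.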

Now you can again look forward to compiling and executing. The file 09geometry_water2.cpp contains the code so far. You will be rewarded with an animation of something that doesn't look like water at all, but is nice anyway. The problem is that the water is uniformly shaded, so the "waves" can only be spotted at the borders.

**Animated water without proper lighting**

The next chapter will be about lighting; that is where we will improve the appearance of the water.

Another issue is performance. With a resolution of 100*100 vertices (= 19602 polygons) the animation is no longer smooth when moving the camera with the mouse on my AMD 1400 MHz machine with an ATI Radeon 9700! So we are definitely in need of some optimizations.

So far we used the interface methods provided by the GeoPositions class. These are relatively slow compared to working directly on the underlying data. By accessing the multi field, where all data is ultimately stored, we can speed up the update quite a bit. In your display function, remove some of the code we added in the last step

```
// remove the following code

// this loop is similar to the one we used to generate
// the data during createScenegraph()
beginEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
    // here they all come
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            pos->setValue(Pnt3f(x, wMesh[x][z], z), N*x + z);
endEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
```

and replace with this

```
// get the data field the pointer uses to store the positions
GeoPositions3f::StoredFieldType *posfield = pos->getFieldPtr();

// get some iterators
GeoPositions3f::StoredFieldType::iterator last, it;

// set the iterator to the first data element
it = posfield->begin();

beginEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
    // now simply run over all entries in the array
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
        {
            (*it) = Pnt3f(x, wMesh[x][z], z);
            ++it;
        }
endEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
```

The result will be exactly the same, of course, but working directly on the field with an iterator is somewhat faster.

As you might know, OpenGL is capable of using "display lists". Such a list is defined by the programmer, and OpenGL compiles it so that its contents can be rendered faster. However, there is an overhead in compiling display lists, which makes them useless for objects that change constantly - like our water mesh. In OpenSG display lists are generated by default for every geometry, and they are generated again whenever the geometry data changes. You can turn this feature off by telling your geometry core:

```
// geo is of type osg::Geometry
geo->setDlistCache(false);
```

Add this line where the geometry core is created during createScenegraph(). Don't forget to extend the edit mask field with the following mask

```
Geometry::DlistCacheFieldMask
```

This can increase rendering performance a lot if used wisely. For static geometry you should not turn this feature off, as doing so would slow down rendering. Only disable display lists on geometry which is modified often or even every frame. * Please note: transformations do not affect the geometry in this way! Only direct manipulation of the geometry data is a performance problem with display lists! *

All hints and tricks that can be used with OpenGL can, in one way or another, be used with OpenSG, too. For instance, it is not a good idea to allocate new memory during rendering, and similar things. If you want to tweak your application to the maximum it might be useful to read a book on this specific topic.

I ran a little self-made benchmark on my machine to show you the results of the optimizations I suggested above. Please keep in mind that this is only one example and thus does not claim to be an objective benchmark! I simply let OpenSG render 500 frames and measured how long it took.

**Display Lists on Dynamic objects**

As you can easily see, using the multi field manipulation instead of the interface methods is not such a big win after all, but turning the display lists off is rewarded with a performance increase of about 170!

**Notice:** You might now think that display lists are stupid and should be turned off to increase performance. Of course that is not the case, as a display list's only purpose is to increase performance! They will **only** slow rendering down if the lists themselves are constantly recreated, as is the case with non-rigid transformations. With static geometry they perform very well. I ran some small tests on my machine with the Beethoven model (progs/data/beethoven.wrl), which has 60k polygons. For this benchmark I let OpenSG render 5000 frames and measured the time. The figure below shows the results.

**Display Lists on Static Objects**

OpenSG comes with some useful utility functions which can help you with some basic but important tasks. I remember when I first needed face normals while the model only had vertex normals. I spent one or two long nights with the geometry in general and the geometry iterators until I succeeded in developing an algorithm that worked the way I wanted. A few days later I realized there was a function called calcFaceNormals. Well, my variant of face normal calculation was as fast as the built-in function, but the annoying thing was that I did it with a few dozen lines of code where one single line would have done the job. Here is how it works.

* Note: you need to include the following include file for the utility functions: *

```
#include <OpenSG/OSGGeoFunctions.h>
```

If you have some geometry core for which you want to calculate face normals, you simply need to type

```
// geo again is of type osg::Geometry
calcFaceNormals(geo);
```

Of course, with vertex normals it is just the same

```
calcVertexNormals(geo);
```

You probably already know that face normals are constant across each triangle or quad. Objects rendered with face normals look faceted, which might not be what you want. For a smooth rendering, normals per vertex are required. Please keep in mind that vertex normals can only be computed correctly if the geometry data itself is correct. A vertex normal is the result of averaging the normals of all neighbouring faces, and if some vertices are stored multiple times the result will be incorrect.
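The averaging itself is simple. The following standalone sketch (plain C++, not the OpenSG implementation; all names are made up for illustration) shows the principle for a vertex shared by two faces:

```cpp
#include <math.h>

// average a set of unit face normals into one vertex normal
void averageNormals(const float faceNormals[][3], int count, float out[3])
{
    out[0] = out[1] = out[2] = 0.f;
    for (int i = 0; i < count; i++)
        for (int c = 0; c < 3; c++)
            out[c] += faceNormals[i][c];

    // renormalize the sum
    float len = sqrtf(out[0]*out[0] + out[1]*out[1] + out[2]*out[2]);
    for (int c = 0; c < 3; c++)
        out[c] /= len;
}

// example: a vertex shared by a face pointing up and one pointing right
const float exampleFaces[2][3] = { { 0.f, 1.f, 0.f }, { 1.f, 0.f, 0.f } };

// convenience accessor: component c of the averaged normal
float averagedNormalComponent(int c)
{
    float out[3];
    averageNormals(exampleFaces, 2, out);
    return out[c];
}
```

The averaged normal points diagonally between the two faces. If the vertex were stored twice, each copy would only "see" one face and keep that face's normal, which is exactly the broken result shown in the middle image below.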

Anyway, identical vertices which are defined multiple times can be unified automatically (i.e. they are "merged" into one vertex) by calling

```
createSharedIndex(geo);
```

on the geometry beforehand.
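What createSharedIndex() does can be approximated with a map from each position to the index of its first occurrence. The following is a deliberately simplified, library-free sketch of that idea; OpenSG's real implementation is more involved and handles all geometry properties, not just positions:

```cpp
#include <array>
#include <map>
#include <vector>

using Vec3 = std::array<float, 3>;

// Merge identical positions: rewrite 'indices' so that every duplicated
// position refers to its first occurrence, and compact 'positions'.
void shareIndices(std::vector<Vec3> &positions, std::vector<int> &indices)
{
    std::map<Vec3, int> firstSeen;           // position -> new index
    std::vector<int>    remap(positions.size());
    std::vector<Vec3>   unique;

    for (std::size_t i = 0; i < positions.size(); ++i)
    {
        auto it = firstSeen.find(positions[i]);
        if (it == firstSeen.end())
        {
            it = firstSeen.insert({positions[i], (int)unique.size()}).first;
            unique.push_back(positions[i]);  // first time we see this point
        }
        remap[i] = it->second;
    }
    for (int &idx : indices)
        idx = remap[idx];                    // redirect to the shared copy
    positions = unique;
}
```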

**Different normals used for rendering**

* The left image was rendered using face normals, resulting in a faceted look, as promised. The middle image shows what happens if you calculate vertex normals with multiple vertex definitions, while the right image shows correct vertex normal rendering with createSharedIndex() applied before the vertex normals were calculated. *

The faceted effect on a sphere is often not what you want, and calculating vertex normals does a fine job here. However, some other objects might not fare so well. The problem is that all normals at a vertex will be averaged, and thus edges you may want to keep will be averaged out, too.

**Box with bad vertex normals**

* See how bad the cube looks now. If you increase the mesh resolution the effect will shrink, but that is not a good solution anyway. *

There is another variant of calcVertexNormals which can be given an angle. All edges between faces that meet at an angle larger than the one specified will be preserved.

Replacing the old function call with

```
calcVertexNormals(geo, deg2rad(30));
```

would help us out with the cube.
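The decision that the angle parameter controls can be sketched as: two adjacent faces share a smoothed normal only if the angle between their face normals stays below the threshold (the "crease angle"). A minimal illustration in plain C++, with a degToRad helper standing in for OpenSG's deg2rad - this is not OpenSG code:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

// Convert degrees to radians, like OpenSG's deg2rad.
float degToRad(float deg) { return deg * 3.14159265358979f / 180.f; }

// True if the edge between two faces (given by their unit normals)
// should be smoothed, i.e. the faces meet at less than 'creaseAngle'.
bool smoothEdge(const Vec3 &n1, const Vec3 &n2, float creaseAngle)
{
    float dot = n1[0]*n2[0] + n1[1]*n2[1] + n1[2]*n2[2];
    // clamp for numerical safety before taking the arc cosine
    if (dot >  1.f) dot =  1.f;
    if (dot < -1.f) dot = -1.f;
    return std::acos(dot) <= creaseAngle;
}
```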

* deg2rad is a useful function that allows you to convert degree values into radians. As you might guess, there is also a rad2deg function. *

Calculating vertex normals is more complex than it sounds and requires a considerable amount of time to compute. Using one of these methods on a per-frame basis is not really recommended!

As OpenGL supports a lot of different geometry data formats, new problems arise: not all of these variants are equally efficient. Luckily, OpenSG also offers some methods to improve the data automatically. I mentioned createSharedIndex() before, which will look for identical elements and remove the copies, changing only the indices.

This step is necessary for osg::createOptimizedPrimitives to work, as this method needs to know which primitives are using the same vertex, which means they are neighbours. It tries to reduce the needed number of vertices to a minimum by combining primitives into strips or fans. No property values will be changed, only the indices are modified. The actual algorithm used here is very fast, but will not necessarily provide an optimal solution. Due to its pseudo-random nature you can run it several times and take the best result. If performance is critical you can of course run it only once, which will yield a non-optimal but definitely better solution than before.
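The payoff of stripping is easy to quantify: N independent triangles need 3*N vertices, while one strip of N triangles needs only N + 2, because each new triangle reuses the previous two vertices. Two trivial helpers to illustrate the saving (illustrative only, not part of the OpenSG API):

```cpp
// Vertices that must be sent for n triangles drawn as independent
// triangles versus as a single triangle strip.
int verticesAsTriangles(int n) { return 3 * n; }
int verticesAsStrip(int n)     { return n > 0 ? n + 2 : 0; }
```

For 100 triangles that is 300 vertices versus 102, roughly a third of the transfer cost.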

osg::createSingleIndex will reduce a multi-indexed geometry to a single-indexed one. Multi-indexing is very efficient in terms of storage, but when it comes to rendering performance single indexing is better. The reason is that OpenGL does not support multi-indexing directly, so OpenGL's more efficient specifiers like vertex arrays cannot be used with OpenSG's multi-indexed geometry. Finally, you have to decide for yourself which suits you better, but it is good to know that you can convert from multi- to single-indexing.
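Conceptually, the conversion replaces every distinct tuple of property indices with one new combined index, duplicating property values wherever the tuples differ. The following simplified sketch handles a two-property case (say, positions and normals); it only illustrates the idea and is not the OpenSG implementation:

```cpp
#include <map>
#include <utility>
#include <vector>

// Convert a multi-indexed mesh (one index stream per property) into a
// single-indexed one. Each unique (posIndex, normIndex) pair becomes one
// new combined vertex; 'vertices' receives the pairs in order, telling
// you which property values to duplicate.
std::vector<int> makeSingleIndex(const std::vector<int> &posIdx,
                                 const std::vector<int> &normIdx,
                                 std::vector<std::pair<int,int>> &vertices)
{
    std::map<std::pair<int,int>, int> seen;  // tuple -> combined index
    std::vector<int> combined;
    for (std::size_t i = 0; i < posIdx.size(); ++i)
    {
        std::pair<int,int> key(posIdx[i], normIdx[i]);
        auto it = seen.find(key);
        if (it == seen.end())
        {
            it = seen.insert({key, (int)vertices.size()}).first;
            vertices.push_back(key);         // new combined vertex
        }
        combined.push_back(it->second);
    }
    return combined;
}
```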

Last but not least, it is possible to let OpenSG render the normals for you. This may be useful for debugging purposes, so you can make sure the normals are actually pointing in the desired direction. There are two methods, one for vertex and one for face normals. You should make sure the respective normals exist before calling one of these methods.

**Rendered normals of a sphere**

* This nice picture shows the rendered normals of a sphere. However, it would be difficult to figure out if one were facing the wrong direction anyway ;-) *

This code fragment shows how to do it:

```
// create a node whose geometry visualizes the vertex normals
root = calcVertexNormalsGeo(some_geometry_core, 1.0);
SimpleMaterialPtr mat = SimpleMaterial::create();
GeometryPtr geo = GeometryPtr::dcast(root->getCore());
beginEditCP(geo);
geo->setMaterial(mat);
endEditCP(geo);
```

* Note that you have to add a material even if it is "empty" like this one, else you won't see anything but error messages! *

osg::calcVertexNormalsGeo needs a geometry core and a float value which defines the length of the rendered normals. Of course this does not change the real normals in any way.

Next Chapter : Light

Generated on 8 Feb 2010 for OpenSG by 1.6.1