Procedural geometry - pipes

The concept of generating geometry in Unity is not complicated. You have a list of vertices, and you tell Unity how those vertices should be connected to form triangles. Additionally, you can provide information about UV layout for texturing and lighting, and generate smooth normals. In the end you have three lists: vertices, triangles, and UVs. You assign these to a mesh and you get your geometry out of it. One thing to note is that the vertices for each triangle must be assigned clockwise.

This tells Unity that the triangle is facing outwards and you will see its surface. If you assign them counterclockwise, the triangle will face inwards and will be invisible from the outside. For a much more in-depth and eloquent explanation, see the tutorial on mesh generation from Catlike Coding.

Derpy clockwise vertex assignment illustration.

For my purposes I need to generate pipes to illustrate the backbone of a protein. A few things have to be done:

  1. Take a set of world space coordinates (the locations of the amino acids) and use them as points along the pipe.
  2. Create a circle of vertices around each point to define the profile of the pipe.
  3. Rotate these circles so they form nice corners at each turn.

Step one is easy; I already have those coordinates from my space filling model.

Step two is relatively simple as well. I can get the x and z (or y) coordinates using the parametric equation of an ellipse, with the radius equal on both axes.

x = r cos t
z = r sin t

x and z are the coordinates I'm trying to find. r is the radius, which could be different for each axis. t is my angle in radians. A full circle is 360° = 2π radians, so if I divide 2π by n (the number of sides I want), I get a step size which I can add to t, n times, to get my circle of points.
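
As a rough sketch of that step (the class and method names here are mine, not from the original project), stepping t by 2π / n gives the ring of points:

```csharp
using UnityEngine;

public static class RingBuilder
{
    // Build one circular cross section of the pipe in the local XZ plane.
    public static Vector3[] BuildRing(float radius, int sides)
    {
        var points = new Vector3[sides];
        float step = 2f * Mathf.PI / sides;   // 360° = 2π radians, divided into n sides

        for (int i = 0; i < sides; i++)
        {
            float t = i * step;
            // x = r cos t, z = r sin t; y stays 0 so the ring lies flat until it is rotated
            points[i] = new Vector3(radius * Mathf.Cos(t), 0f, radius * Mathf.Sin(t));
        }
        return points;
    }
}
```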

 

Three was the difficult step for me conceptually. My end solution was as follows:

Given three points A, B, and C, get the vectors AB, BC, and AC. Take the average of AB and BC, and take the cross product of AB and AC. The average vector tells me where the circle should be facing, and the cross product gives me the normal of triangle ABC. I use the cross product to define "Up" and the average vector to define the "Look" value in Quaternion.SetLookRotation(averageVector, crossVector) (the look direction is the first argument, the up vector the second).
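
In code the calculation looks roughly like this (a hedged sketch in Unity C#; the class and method names are mine):

```csharp
using UnityEngine;

public static class PipeOrientation
{
    // Orientation of the cross section at point b, given its neighbours a and c.
    public static Quaternion AtPoint(Vector3 a, Vector3 b, Vector3 c)
    {
        Vector3 ab = b - a;                       // vector AB
        Vector3 bc = c - b;                       // vector BC
        Vector3 ac = c - a;                       // vector AC

        Vector3 average = (ab + bc) * 0.5f;       // where the circle should face ("Look")
        Vector3 normal  = Vector3.Cross(ab, ac);  // normal of triangle ABC ("Up")

        Quaternion orientation = Quaternion.identity;
        orientation.SetLookRotation(average.normalized, normal.normalized);
        return orientation;
    }
}
```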

 
Figure: Pipe cross section. AB (red), BC (blue), average vector (green), cross product / normal (orange).

This gives me a quaternion that defines the plane and start point of each circle. However, I was getting weird twists in my pipes, where the start point for the circle would flip. When I connected my vertices, they would cross to the opposite side of the next joint, basically 180 degrees around the next circle. To keep everything lined up, I took the absolute value of each XYZ component of the cross product vector and made that my up direction. No more twists!
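
Continuing the sketch above, the twist fix is just a component-wise absolute value before the up vector is used:

```csharp
// Force the cross product into a consistent half-space so the ring's start
// point doesn't flip from one cross section to the next.
Vector3 up = Vector3.Cross(ab, ac);
up = new Vector3(Mathf.Abs(up.x), Mathf.Abs(up.y), Mathf.Abs(up.z));

Quaternion orientation = Quaternion.identity;
orientation.SetLookRotation(average.normalized, up.normalized);
```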

The end result is that for each point P in my backbone, I use the previous point (P-1) and the next point (P+1) to calculate the orientation of the cross section at P. I could have used more cross sections to create smoother corners, but as this is for VR, poly counts must be kept low.
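
Once each cross section is oriented and its vertices are added to the vertex list, connecting consecutive rings is a matter of adding two triangles per pair of neighbouring vertices. A hypothetical helper (not from the original post; if the faces come out inside-out, the clockwise winding from the start of the post needs flipping):

```csharp
using System.Collections.Generic;

public static class PipeStitcher
{
    // ringA and ringB are the indices where two consecutive rings of `sides`
    // vertices start in the mesh's vertex list.
    public static void ConnectRings(List<int> triangles, int ringA, int ringB, int sides)
    {
        for (int i = 0; i < sides; i++)
        {
            int next = (i + 1) % sides;   // wrap around so the tube closes

            // First triangle of the quad between vertex i and vertex i + 1.
            triangles.Add(ringA + i);
            triangles.Add(ringB + i);
            triangles.Add(ringB + next);

            // Second triangle of the quad.
            triangles.Add(ringA + i);
            triangles.Add(ringB + next);
            triangles.Add(ringA + next);
        }
    }
}
```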

Optimizing instantiated geometry for VR

I'm currently developing a VR molecular viewer in Unity for the HTC Vive. It's not meant to compete with any of the tools used for research. Instead, it is meant to explore new uses of VR and allow people to play with proteins, encourage curiosity, and give a different view of something scientists look at every day. It's pretty cool to walk around a microscopic object, toss it in the air, and even stick your head inside!

I was able to whip up a system to color, scale, and change resolution quickly enough. Unfortunately, VR requires frame rates of 90fps or higher so you don't get sick and disoriented. Creating a sphere, even one with just 20 tris, for every atom in a protein caused my Vive to drop to the compositor. I looked into optimizing geometry and learned about dynamic/static batching and mesh combining. Following are my findings, with links to the Unity documentation:

 

Mesh combine

  • Massive FPS improvements
  • More difficult and computationally intensive
  • As all meshes are combined into a single mesh, they share the same material. You can combine in stages, so your final mesh ends up with multiple submeshes and materials.
  • More materials = more meshes = smaller gains.
  • The whole combined mesh is always drawn, regardless of distance. If you combine objects that are distant from each other, you can actually lose efficiency, since they would normally be culled when out of view.
  • Meshes have a max of ~65k vertices.

Dynamic / Static Batching

  • Moderate FPS improvement.
  • Super easy to implement. Dynamic batching occurs automatically, but is not as efficient as static batching.
  • Static batching requires objects to be marked Static, meaning they can't be moved, can't have their materials updated, and so on (see the sketch after this list).
  • Once an object has been set to static, you can return it to 'dynamic', but you'll still have issues changing materials and the like. I had to destroy the objects and reinstantiate them.
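
For geometry instantiated at runtime, static batching can also be triggered through StaticBatchingUtility; a minimal sketch, assuming the instantiated objects all live under a single root GameObject called atomRoot (my name, not from the post):

```csharp
// Combine all children of atomRoot into static batches. After this call the
// children are drawn in batches but can no longer be moved individually.
StaticBatchingUtility.Combine(atomRoot);
```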


For my purposes, mesh combine was the way to go. I based my code off of a post found here, and created a more generic method that takes all the currently used materials, creates a dictionary with material names as keys and lists of meshes to combine as values, then processes each list into its own mesh. Meshes have a max of ~65k vertices, and I had WAY more geometry than that, so I had to batch them into smaller meshes. The end result went from 12-24fps to over 150fps, with over a million polygons. Colliders were a no-go at this point, so raycasting to individual atoms is out of the question. Luckily I have an array of coordinates, so I can still draw callouts to different points of interest.
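
A simplified sketch of that approach (my names, not the author's code; splitting groups that exceed the ~65k vertex limit into multiple batches is left out for brevity):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class MeshCombiner
{
    // Group every MeshFilter by its material, then merge each group into one mesh.
    public static void CombineByMaterial(MeshFilter[] filters, Transform parent)
    {
        var groups = new Dictionary<string, List<CombineInstance>>();
        var materials = new Dictionary<string, Material>();

        foreach (MeshFilter filter in filters)
        {
            Material mat = filter.GetComponent<MeshRenderer>().sharedMaterial;
            if (!groups.ContainsKey(mat.name))
            {
                groups[mat.name] = new List<CombineInstance>();
                materials[mat.name] = mat;
            }
            groups[mat.name].Add(new CombineInstance
            {
                mesh = filter.sharedMesh,
                transform = filter.transform.localToWorldMatrix
            });
        }

        foreach (var pair in groups)
        {
            // One combined mesh (and one draw call) per material.
            var combined = new Mesh();
            combined.CombineMeshes(pair.Value.ToArray(), true, true);

            var go = new GameObject("Combined_" + pair.Key, typeof(MeshFilter), typeof(MeshRenderer));
            go.transform.SetParent(parent, false);
            go.GetComponent<MeshFilter>().sharedMesh = combined;
            go.GetComponent<MeshRenderer>().sharedMaterial = materials[pair.Key];
        }
    }
}
```
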

This is a pretty specific use case, but I think the idea is easily transferable to situations where the view is limited (rooms, obscured by hills, etc) so that you don't run into the issue of drawing objects far in the distance. I think it can be really useful in situations where you have geometry that shares a lot of simple materials. With the trend towards low poly art in indie games, this can allow a much greater amount of geometry on the screen, leading to more interesting and higher definition environments.