Optimizing instantiated geometry for VR

I'm currently developing a VR molecular viewer in Unity for the HTC Vive. It's not meant to compete with any of the tools used for research. Instead, it's meant to explore new uses of VR and allow people to play with proteins, encourage curiosity, and give a different view of something scientists look at every day. It's pretty cool to walk around a microscopic object, toss it in the air, and even stick your head inside!

I was able to whip up a system to color, scale, and change resolution quickly enough. Unfortunately, VR requires frame rates of 90fps or higher so you don't get sick and disoriented. Creating a sphere, even one with just 20 tris, for every atom in a protein caused my Vive to drop to the compositor. I looked into optimizing geometry and learned about dynamic/static batching and mesh combining. Following are my findings, with links to Unity documentation:
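
For a sense of the starting point, the naive version looked roughly like the sketch below: instantiate one sphere per atom, each with its own renderer, so a large protein means tens of thousands of objects for Unity to cull and draw. (This is an illustrative sketch, not my actual viewer code; atomPositions and atomPrefab are placeholder names.)

```csharp
using UnityEngine;

// Naive baseline (illustrative sketch): one sphere per atom, one renderer per sphere.
public class NaiveAtomSpawner : MonoBehaviour
{
    public Vector3[] atomPositions;   // atom coordinates, parsed elsewhere
    public GameObject atomPrefab;     // e.g. a low-poly icosphere prefab

    void Start()
    {
        foreach (Vector3 pos in atomPositions)
        {
            // Each instantiated sphere is a separate GameObject and renderer,
            // which is what tanked the frame rate in VR.
            Instantiate(atomPrefab, pos, Quaternion.identity);
        }
    }
}
```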


Mesh Combine

  • Massive FPS improvements
  • More difficult and computationally intensive
  • As all meshes are combined into a single mesh, they share the same material. You can combine in stages, with your final mesh having multiple submeshes and materials (see the sketch after this list).
  • More materials = more meshes = smaller gains.
  • The whole mesh is always drawn, regardless of distance. If you combine objects that are distant from each other, you can actually lose efficiency as they might normally be culled.
  • Meshes have a max of ~65k vertices (the default 16-bit index buffer tops out at 65,535).
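
To make the submesh point concrete, here's a minimal sketch using Unity's Mesh.CombineMeshes with mergeSubMeshes set to false, which keeps one submesh per source mesh so each submesh can keep its own material. (Illustrative only; it assumes the root object sits at the origin with no mesh of its own, and that the combined total stays under the vertex limit.)

```csharp
using UnityEngine;

// Minimal sketch: combine all child meshes into one mesh on this (empty) root object,
// keeping one submesh per source so each submesh can keep its own material.
// Assumes the root sits at the origin and the total stays under ~65k vertices.
public class SubmeshCombineExample : MonoBehaviour
{
    void Start()
    {
        MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
        var combines = new CombineInstance[filters.Length];
        var materials = new Material[filters.Length];

        for (int i = 0; i < filters.Length; i++)
        {
            combines[i].mesh = filters[i].sharedMesh;
            combines[i].transform = filters[i].transform.localToWorldMatrix;
            materials[i] = filters[i].GetComponent<Renderer>().sharedMaterial;
            filters[i].gameObject.SetActive(false);   // hide the originals
        }

        var combined = new Mesh();
        // mergeSubMeshes: false -> one submesh per CombineInstance,
        // so the renderer below can take one material per submesh.
        combined.CombineMeshes(combines, false);

        gameObject.AddComponent<MeshFilter>().sharedMesh = combined;
        gameObject.AddComponent<MeshRenderer>().sharedMaterials = materials;
    }
}
```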

Dynamic / Static Batching

  • Moderate FPS improvement
  • Super easy to implement. Dynamic batching occurs automatically, but is not as efficient as static.
  • Static batching requires objects to be marked Static, meaning they can't be moved, their materials can't be updated, and so on (see the runtime sketch after this list).
  • Once an object has been set to static, you can return it to 'dynamic', but I still had issues changing materials and so on; I ended up destroying and reinstantiating those objects.
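
One thing worth knowing for runtime-instantiated objects: Unity's StaticBatchingUtility.Combine can statically batch everything under a root object without the editor Static flag, with the same "don't move it afterwards" restriction. A minimal sketch (the root field is just an illustrative placeholder, not from my viewer):

```csharp
using UnityEngine;

// Sketch: statically batch a hierarchy of objects spawned at runtime.
// After Combine() their transforms must not change (same rule as the Static flag).
public class RuntimeStaticBatchExample : MonoBehaviour
{
    public GameObject root;   // parent of the already-instantiated objects

    void Start()
    {
        StaticBatchingUtility.Combine(root);
    }
}
```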

For my purposes, mesh combine was the way to go. I based my code on a post found here, and created a more generic method that takes all the currently used materials, builds a dictionary with material names as keys and lists of meshes to combine as values, then processes each list into its own combined mesh. Meshes max out at ~65k vertices, and I had WAY more geometry than that, so I had to batch them into smaller meshes. The end result went from 12-24fps to over 150fps, with over a million polygons. Colliders were a no-go at this point, so raycasting to individual atoms is out of the question. Luckily I have an array of coordinates, so I can still draw callouts to different points of interest.
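
The sketch below is a simplified version of that idea rather than my exact code: group each atom's mesh by material name, then combine each group in chunks that stay under the vertex limit. The intent is to run something like this once, right after all the atom spheres have been instantiated, so only the combined batches get drawn.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified sketch of the per-material combine (not my exact viewer code):
// group source meshes by material name, then combine each group in chunks
// that stay under the 16-bit vertex limit of a single mesh.
public class MaterialBatchCombiner : MonoBehaviour
{
    const int MaxVerticesPerMesh = 65000;   // stay safely under the 65,535 cap

    public void CombineChildren()
    {
        var instancesByMaterial = new Dictionary<string, List<CombineInstance>>();
        var materialsByName = new Dictionary<string, Material>();

        foreach (MeshFilter filter in GetComponentsInChildren<MeshFilter>())
        {
            Renderer rend = filter.GetComponent<Renderer>();
            if (rend == null) continue;

            string key = rend.sharedMaterial.name;
            if (!instancesByMaterial.ContainsKey(key))
            {
                instancesByMaterial[key] = new List<CombineInstance>();
                materialsByName[key] = rend.sharedMaterial;
            }

            CombineInstance ci = new CombineInstance();
            ci.mesh = filter.sharedMesh;
            ci.transform = filter.transform.localToWorldMatrix;
            instancesByMaterial[key].Add(ci);

            filter.gameObject.SetActive(false);   // hide the original per-atom objects
        }

        foreach (KeyValuePair<string, List<CombineInstance>> group in instancesByMaterial)
        {
            List<CombineInstance> batch = new List<CombineInstance>();
            int vertexCount = 0;

            foreach (CombineInstance ci in group.Value)
            {
                // Flush the current batch before it would exceed the vertex limit.
                if (vertexCount + ci.mesh.vertexCount > MaxVerticesPerMesh && batch.Count > 0)
                {
                    CreateBatch(batch, materialsByName[group.Key]);
                    batch.Clear();
                    vertexCount = 0;
                }
                batch.Add(ci);
                vertexCount += ci.mesh.vertexCount;
            }
            if (batch.Count > 0) CreateBatch(batch, materialsByName[group.Key]);
        }
    }

    void CreateBatch(List<CombineInstance> batch, Material mat)
    {
        Mesh mesh = new Mesh();
        mesh.CombineMeshes(batch.ToArray(), true);   // merge into one submesh, one material

        GameObject go = new GameObject("Combined_" + mat.name);
        go.AddComponent<MeshFilter>().sharedMesh = mesh;
        go.AddComponent<MeshRenderer>().sharedMaterial = mat;
    }
}
```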

This is a pretty specific use case, but I think the idea is easily transferable to situations where the view is limited (rooms, areas obscured by hills, etc.) so that you don't run into the issue of drawing objects far in the distance. I think it can be really useful in situations where a lot of geometry shares a few simple materials. With the trend towards low-poly art in indie games, this can allow a much greater amount of geometry on screen, leading to more interesting and higher definition environments.