Needless to say, it's taking a bit longer than I thought it would [grin]
Thesis Topics

I was recently investigating a thesis topic that one of my professors had proposed, and was led to the Visualization Toolkit (VTK). I hadn't heard of it before, so I read through the documentation on their site for a while and was pleasantly surprised to find an excellent rendering resource. The library itself is open source, making it a great learning tool available for free. It's written in C++, with bindings for both Tcl and Java, so it should be fairly accessible to most levels of experience.
While reading some of the background information, I found this white paper about the design and rationale behind their architecture and implementation. I find it interesting how they have broken their design into two distinct parts: a rendering pipeline and a data source pipeline. This got me thinking...
My own renderer is more or less just that - a renderer. It's flexible enough to do different types of computations (i.e. GPGPU), but the data sources are somewhat limited to loading meshes or procedural geometry (and of course textures). A while back I was dabbling in software rendering, and a similar design began to appear in my library: each stage in the pipeline was basically an object, and the glue connecting those objects was a set of shared memory buffers used as input and output. I'm going to consider how this can be applied effectively to a wide variety of application areas and see if I can add this type of data pipeline to my engine. It's a huge topic, but I am really interested to see where it goes!