Friday 30 November 2012

Technical Development: Point Sprite Refinement, Basic Dirt Containers



This video shows the updated point sprite systems, as well as basic dirt containers.


Heat Vent

The only problem with the previous implementation of this system was that the distortion became more noticeable with distance.  The distance from the emitter to the camera is now calculated, and the approximate depth of these points is rendered to the depth layer, which is used to scale the final distortion when it is combined with the final scene.  Distortion is now controlled: it is most noticeable at close range, and softens to no distortion farther away.
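A minimal sketch of that distance-based scaling is below; the falloff shape and the maxDist parameter are illustrative assumptions, not the actual values used in the system.

#include <algorithm>

// Hypothetical helper: scale the sampled distortion by the approximate
// emitter-to-camera distance written to the depth layer, so close vents
// distort strongly and distant ones fade to nothing.
float DistortionScale(float emitterToCameraDist, float maxDist)
{
    // 1.0 at the camera, 0.0 at or beyond maxDist.
    float t = 1.0f - std::min(emitterToCameraDist / maxDist, 1.0f);
    return t * t;   // ease out so the distortion softens smoothly with distance (assumed curve)
}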


Flare System

Combining the simple point sprite Flare System directly with a Heat Vent System produces much more realistic fire.  The heat system is modified so that particles initially expand and then contract in line with the fire particles, preserving detailed distortion on the smaller particles.  If the heat point sprites only continued to expand, the smaller fire particles that trail from the flare would much more noticeably appear as circles.
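A rough sketch of that expand-then-contract size curve follows; the 0.3 split point and the parameter names are assumptions for illustration, not the real system's values.

#include <algorithm>

// Hypothetical size curve for a heat particle: expand during the first
// part of its life, then contract alongside the fire particles so the
// small trailing sprites keep detailed distortion instead of reading
// as large circles.
float HeatParticleSize(float age, float lifetime, float baseSize, float peakSize)
{
    float t = std::min(age / lifetime, 1.0f);
    if (t < 0.3f)                                                     // expansion phase (assumed split)
        return baseSize + (peakSize - baseSize) * (t / 0.3f);
    return peakSize - (peakSize - baseSize) * ((t - 0.3f) / 0.7f);    // contraction phase
}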

These processes are also combined with a simple exhaust system.  The final result from these three simple point sprite systems is quite convincing, and the effect is further enhanced by blooming the brightly colored pixels with the Light System.

Up close the detail is still lost, but the mid-range and far-distance views are quite successful.  Combining simple systems for fast, realistic effects is a major goal of this exploration.


Dust System

The harsh lines where dust particles intersect the ground and other objects are still quite noticeable and problematic.  The Dust System is also rendered to the Depth Layer, where the blue channel holds a highly dynamic range from 0 to 1, and distortion data is stored in the red and yellow channels.  This causes the final scene to soften these harsh depth lines, and also blurs the color lines that result.

Despite these efforts, the harsh lines and artifacts from this system are still apparent, and further time will be needed to smooth out this rendering.  So far the best solution is to prevent dust particles from intersecting with emitting objects as much as possible.



Conveyor System

I worked on enhancing the depth of this system by adding an additional row of vertices in the middle of the stream, raised by a given height value; a sketch of the idea is below.  There are noticeable artifacts at the end points of the stream, which seem like they will be difficult to fix.  Other than that, the stream does appear more rounded, but a similar result might also be produced by interpolating between two normals.  I am now using a generic bump map as a dynamic key for any ground or dirt texture.  Using a bump map renders more realistic and dynamic dirt, and is highly adaptable.
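As a hedged sketch of the raised middle row, assuming the stream is already described by matching left and right edge vertices (the function and field names here are hypothetical):

#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical sketch: for each cross-section of the conveyor stream,
// emit the left edge vertex, a center vertex raised by 'height', and
// the right edge vertex, giving the stream a more rounded profile.
std::vector<Vec3> BuildStreamVertices(const std::vector<Vec3>& left,
                                      const std::vector<Vec3>& right,
                                      float height)
{
    std::vector<Vec3> verts;
    for (size_t i = 0; i < left.size(); ++i)
    {
        Vec3 center { (left[i].x + right[i].x) * 0.5f,
                      (left[i].y + right[i].y) * 0.5f + height,   // raised middle row
                      (left[i].z + right[i].z) * 0.5f };
        verts.push_back(left[i]);
        verts.push_back(center);
        verts.push_back(right[i]);
    }
    return verts;
}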


Dirt Containers

For machine animation, I created a simple system to simulate dirt filling a container.  The process is quite simple, consisting of a set of vertices that define the dirt, with minimum and maximum values for each vertex.  The position and texture coordinates of each vertex are then interpolated between these two values depending on a fill value.  The implementations in the video are very simple, and can definitely be enhanced.
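A minimal sketch of that interpolation, with hypothetical structure and field names standing in for whatever the real system uses:

#include <vector>

struct DirtVertex
{
    float minPos[3], maxPos[3];   // empty and full positions
    float minUV[2],  maxUV[2];    // matching texture coordinates
    float pos[3],    uv[2];       // current values, written each update
};

// Interpolate every vertex between its minimum (empty) and maximum
// (full) values according to a 0-1 fill amount.
void UpdateContainer(std::vector<DirtVertex>& verts, float fill)
{
    for (DirtVertex& v : verts)
    {
        for (int i = 0; i < 3; ++i)
            v.pos[i] = v.minPos[i] + (v.maxPos[i] - v.minPos[i]) * fill;
        for (int i = 0; i < 2; ++i)
            v.uv[i] = v.minUV[i] + (v.maxUV[i] - v.minUV[i]) * fill;
    }
}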

I will be working on creating a simulation where the excavator will automatically dig and unload dirt into a dump truck, and the dump truck will also unload its bucket onto a conveyor system.  I will focus on linking the container, falling dirt, dust and conveyor systems smoothly between these processes.

Tuesday 20 November 2012

Artistic Development: Organic Camera and Enhanced Rendering

(For this phase of the project I will be posting artistic development sections, exploring different rendering styles and techniques for integrating organic input into final image creation.)


This quick video shows basic rendering stylization, as well as slightly enhanced use of the Organic Motion Camera.


Rendering:

For rendering enhancement, I have worked on basic anti-aliasing techniques, detail exaggeration, and a simple depth of field function.

Depth Key
For the depth function implementation, I quickly render the terrain and main objects to a separate layer, where the distance of each pixel from the camera is stored in the blue channel.  Though it is slower to re-render to a separate layer, rendering a separate key image also allows more channels for other effects, such as heat distortion.

From the depth image, I generate a line map with a simple Laplacian filter.  This image contains the edges around objects and how much they differ from the background.  Once the final scene and the depth layers have been rendered, I filter the main image through a Gaussian blur, with the final Depth Image and the Depth Line Image as keys.  The depth lines are used for primary edge softening, while the general depth data is used to implement a basic depth of field function.
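As an illustration of the line map step, here is a minimal CPU-side sketch of a 3x3 Laplacian over the stored depth values; the real filter presumably runs on the GPU, so the names and normalization here are assumptions.

#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical depth line map: a 3x3 Laplacian over the depth values,
// so pixels where depth changes sharply (object edges against the
// background) become bright lines.
std::vector<float> DepthLineMap(const std::vector<float>& depth, int w, int h)
{
    std::vector<float> lines(depth.size(), 0.0f);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
        {
            float c   = depth[y * w + x];
            float lap = 4.0f * c
                      - depth[y * w + (x - 1)] - depth[y * w + (x + 1)]
                      - depth[(y - 1) * w + x] - depth[(y + 1) * w + x];
            lines[y * w + x] = std::min(std::fabs(lap), 1.0f);
        }
    return lines;
}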

For the depth of field, a variable sets the threshold beyond which the level of blurring increases.  This implementation only simulates blurring toward the far plane, and does not quite represent realistic camera focus.  The effect could be further enhanced with multiple threshold values that define near and far planes and how quickly they fall off, simulating true depth of field and focusing operations.
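A small sketch of that single-threshold behavior (the parameter names and the linear ramp are assumptions):

#include <algorithm>

// Hypothetical far-plane-only depth of field: no blur until the depth
// threshold, then blur ramps up to its maximum. 'depth' and 'threshold'
// are assumed to share the 0-1 range of the depth layer.
float BlurAmount(float depth, float threshold, float maxBlur)
{
    if (depth <= threshold)
        return 0.0f;
    float t = (depth - threshold) / std::max(1.0f - threshold, 0.0001f);
    return std::min(t, 1.0f) * maxBlur;
}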


Color Key
After all scene components have been rendered, I generate a basic line map of the image with a simple Laplacian filter.  This map can be used simply as a key to blur harsh edges, or to sharpen image detail.

To sharpen lines on closer objects, I first process the Color Line Map with a simple Gaussian blur filter to soften and enlarge the lines.  The data from this image is then subtracted from the scene image to darken the extracted color lines.  This operation is scaled by the depth data, so that only close objects show enhanced detail.
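Per channel, the sharpening step could look roughly like the sketch below, where the strength parameter and the depth-to-closeness mapping are illustrative assumptions.

#include <algorithm>

// Hypothetical per-pixel sharpening step: subtract the blurred color
// line value from the scene color, weighted so that only close pixels
// (small depth values) receive the darkened detail lines.
float SharpenChannel(float sceneColor, float blurredLine, float depth, float strength)
{
    float closeness = 1.0f - std::min(depth, 1.0f);   // 1 near the camera, 0 far away
    return std::max(sceneColor - blurredLine * strength * closeness, 0.0f);
}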

I also use this line map to implement simple motion blur, similar to the technique used for the light map.  Once the initial line map is rendered, I render the previous line map with scaled intensity, and then pass the combined image through the Gaussian filter.  Depending on the intensity of the previous image, this motion blur can produce a soft and natural effect, or a more intense and abstracted image.
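A minimal sketch of that accumulation step before the Gaussian pass; the intensity values in the comment are only examples of the soft versus abstracted extremes described above.

#include <vector>

// Hypothetical accumulation for the line-map motion blur: the previous
// frame's line map is added back in at reduced intensity before the
// Gaussian pass, leaving a fading trail behind moving edges.
void AccumulateLineMap(std::vector<float>& current,
                       const std::vector<float>& previous,
                       float previousIntensity)   // e.g. 0.5 for soft, 0.9 for abstract
{
    for (size_t i = 0; i < current.size(); ++i)
    {
        float v = current[i] + previous[i] * previousIntensity;
        current[i] = v > 1.0f ? 1.0f : v;
    }
}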





Organic Motion Camera:

One major goal of this project, rooted in artistic representation, is to work on additional ways to translate organic motion from a Kinect Sensor into the presentation of the virtual scene.  The most obvious way to alter how a viewer looks at a scene is to change the camera.


The Organic Camera is meant to simulate the effect of "free hand" filming, rather than mechanical, linear, or steady shots.  For the Organic Motion Camera, one hand controls the 3D position of the camera, while the other controls panning and tilting, as well as zooming.  Both hands can be used, or only one at a time to simulate tripod or dolly shots.

If you would like to know more about this concept, this video is a pretty decent example of how I film: http://www.youtube.com/watch?v=Ahn7a8qeAYg&feature=plcp (Don't mind the tone or content; this video is pretty personal, but it is definitely a relevant basis for the "industrial" side of this project.)  Though it is pretty rough and disorienting at times, I think this style conveys more emotion and depth, and can be much more interesting and effective at presenting information if done well.


The 3D points for the camera control are taken directly from the generated skeleton data.  The Kinect depth and skeletal data both contain a lot of noise, and produce quite a lot of shaking and strange movement.  I have implemented a simple system that averages the last n points from each hand, and if needed, I'll work on other ways of smoothing out this data while still allowing for quick control.
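A simple sketch of that last-n averaging, assuming the hand positions arrive as plain 3D points each frame (the class and member names are hypothetical):

#include <deque>

struct Vec3 { float x, y, z; };

// Hypothetical smoothing of noisy Kinect hand positions: keep the last
// n samples and return their average each frame.
class HandSmoother
{
public:
    explicit HandSmoother(size_t n) : maxSamples(n) {}

    Vec3 AddSample(const Vec3& p)
    {
        samples.push_back(p);
        if (samples.size() > maxSamples)
            samples.pop_front();

        Vec3 avg { 0.0f, 0.0f, 0.0f };
        for (const Vec3& s : samples) { avg.x += s.x; avg.y += s.y; avg.z += s.z; }
        float inv = 1.0f / samples.size();
        avg.x *= inv; avg.y *= inv; avg.z *= inv;
        return avg;
    }

private:
    size_t maxSamples;
    std::deque<Vec3> samples;
};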


General Organic Input:

The section of the video where the color and contrast are shifting shows further use of organic input.  For this demo, one hand simply controls the scene's brightness, contrast, and saturation.  Dynamic variable control with organic input can create some pleasing results, and can push the final presentation of the image forward.
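As a hedged example of this kind of mapping, a single hand position could drive the three variables like the sketch below; the axis assignments and ranges are purely illustrative assumptions, not the demo's actual mapping.

struct ColorGrade { float brightness, contrast, saturation; };

// Hypothetical mapping from one hand position to image variables:
// height drives brightness, forward distance drives contrast, and
// sideways position drives saturation.
ColorGrade GradeFromHand(float handX, float handY, float handZ)
{
    ColorGrade g;
    g.brightness = 0.5f + handY;                    // raise the hand to brighten the scene
    g.contrast   = 1.0f + (1.5f - handZ) * 0.5f;    // pull the hand closer for more contrast
    g.saturation = 0.5f + handX;                    // move the hand sideways to saturate
    return g;
}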

The motion of the excavator in the video is also produced by organic input.  It is easy to see how effective this integration can be for animation control.

To implement these techniques in addition to the Organic Motion Camera, scenes would have to be rendered in layers, with object and camera movements stored and replayed.  Since the camera system relies on just two 3D vectors for control, it should be easy to read motion from an array of 3D point transformations.   While the entire scene replays, the user could then control the final coloring of the scene.

Integrating and layering multiple organic input systems will be explored in future development, along with refining the Organic Camera System.