Tuesday 18 December 2012

Composition: Initial Tarsands Model

I worked on composing the various systems I have developed so far to create a virtual working model of a basic tar sands mining operation.  This video explores the environment, and was captured while using the organic motion camera.



This scene was created mainly for purposes of user evaluation, to test the integration of these various systems, as well as the limits of complicated composition.  This is an initial implementation that is still incomplete and quite rough.  I will work on smoothing out the machine animation, as well as routing additional dump trucks and excavators with a more dynamic implementation.  There are also still some problems with the track systems, as well as some issues with intersecting steam systems.  I feel the dust, dirt and container systems could also be improved, both by smoothing the transitions between systems and by increasing their general complexity.

Creating such a model was challenging, but as I continue to adapt my land map editor and these systems, such a task will become increasingly manageable.  Though I feel this project is successful in displaying these systems and creating an engaging environment, it is somewhat limited by the detail of my 3D models.

Obviously I cannot hope to create a technically perfect model of the tar sands, nor would I wish to attempt such a task  (there is no way one could possibly represent the scale of the Alberta Tar Sands).  Such a model is intended as a more conceptual representation.  I plan to first use this or a similar model for animation projects on tar sands and pipeline expansion...

Creation of this model marks the end of this development cycle.  I feel satisfied with these results, and I would be content to end the project at this benchmark.  However, after evaluating this process, and realizing the current levels of public awareness of industrial development, I still feel very motivated to continue development on this project.  I feel I am nearing my limits of technical representation, so in the coming months I will largely focus on more artistic development.

Kinect System Development

As a component of this project, I have worked on developing an easy-to-use system that allows users to control various processes within the Industrial Abstraction System.  As previously mentioned, the motivating goal for this development is to integrate organic input with procedural systems, in order to make environments more interactive, and to create animations with the potential to present more depth and emotion.  The Organic Motion Camera is the most direct way to achieve these goals, and is a major focus of this stage of project development.

Here is a video showing the use of the organic motion camera system.  Initially the system must run through a simple set-up procedure in order to correctly interpret data from the current user.  The system still needs improvement, but it is coming along with some pleasing results.





General Function

The system uses built in skeletal tracking to process hand positions for input control.  Each hand has an offset point which the user can modify.  The difference between the processed hand position and its respective offset point is the control data that can be most effectively used for system control.  With the camera system, these offset points can be interpreted as rate of change values, where a constant distance away from the offset will continue to move the camera at a respective constant speed.
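As a rough sketch of this idea (the function names and signatures here are my own illustration, not the actual implementation), the control vector and the resulting rate-of-change camera update might look like:

```python
def control_vector(hand_pos, offset_point):
    """Difference between the tracked hand position and its offset point:
    this is the control data used for system control."""
    return tuple(h - o for h, o in zip(hand_pos, offset_point))

def step_camera(cam_pos, hand_pos, offset_point, gain=1.0, dt=1.0):
    """Holding the hand a constant distance from the offset moves the
    camera at a constant speed in that direction."""
    v = control_vector(hand_pos, offset_point)
    return tuple(c + gain * dt * vi for c, vi in zip(cam_pos, v))
```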

Initially these offset points were predetermined, which required the user to stand at a specific distance with their hands resting at certain points.  This created many problems: it was difficult to completely stop camera movement, and hand movements would often intersect and thus interfere with the data taken from the skeleton.

To solve these problems, I implemented a process in which each control point is only active when the user points their hand forward, and inactive when their hand points upwards.  When a hand is in the inactive position, its offset point is set to the current position of the hand, so the control point becomes the zero vector.  When the hand is in the active position, the offset remains at the last inactive point.  Additionally, if a user's hand motions intersect, they can simply reset the offset points to avoid collision.
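A minimal sketch of this offset-reset behaviour, assuming per-frame updates (class and method names are hypothetical):

```python
class HandControl:
    """The offset resets to the current hand position whenever the hand is
    inactive (pointing up), so the control vector returns to zero; while
    active, the offset stays at the last inactive point."""
    def __init__(self):
        self.offset = None

    def update(self, hand_pos, active):
        if not active or self.offset is None:
            self.offset = hand_pos  # reset: control becomes the zero vector
        return tuple(h - o for h, o in zip(hand_pos, self.offset))
```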

Both hands operate independently, so a user can choose to only pan or only move rather than both at once.  It is also easy to stop movement, as the user needs only to put their hands up, in a natural stopping pose.


The RGB video display is used as a reference guide for the user: red circles indicate that a hand is in the inactive position, white circles are the active offsets, and the green and blue circles are the active hand positions.  This video, along with the control sliders, is normally contained in a separate frame, in order to keep a clean program display for animation capture.


Data Processing

The technique used for gesture recognition is to count the samples from the depth map which are within a specified radius of the associated hand point.  These counts then determine whether the hand is pointed forward (fewer samples) or pointed up (more samples), based on a determined threshold.
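A sketch of this counting approach (my own reconstruction; in particular, the depth tolerance used to decide which samples "belong" to the hand is an assumption, not described in the actual system):

```python
def count_near_samples(depth_map, hand_px, hand_depth, radius, depth_tol=0.1):
    """Count depth-map samples within `radius` pixels of the hand point that
    are near the hand's depth.  Few samples -> hand pointed forward;
    many samples -> open hand pointed up toward the sensor."""
    cx, cy = hand_px
    count = 0
    for y in range(max(0, cy - radius), min(len(depth_map), cy + radius + 1)):
        for x in range(max(0, cx - radius), min(len(depth_map[0]), cx + radius + 1)):
            if ((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
                    and abs(depth_map[y][x] - hand_depth) <= depth_tol):
                count += 1
    return count

def hand_is_forward(sample_count, threshold):
    """Fewer samples than the threshold means the hand points forward."""
    return sample_count < threshold
```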

The set-up process is necessary to establish a linear function that determines this threshold based on the current distance of each hand.  First the close position is set, where the threshold value is placed between the sample counts taken from the active and inactive positions.  After the back position is set, these state thresholds, along with the average distances recorded at each point, are used to determine a linear function that calculates the threshold for each hand depending on its current depth.
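The calibration essentially reduces to linear interpolation between the two recorded (depth, threshold) pairs. A minimal sketch with assumed names:

```python
def make_threshold_fn(near_depth, near_threshold, far_depth, far_threshold):
    """Build the linear function mapping current hand depth to the state
    threshold, from the two calibration points recorded during set-up."""
    slope = (far_threshold - near_threshold) / (far_depth - near_depth)
    def threshold(depth):
        return near_threshold + slope * (depth - near_depth)
    return threshold
```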

There is a lot of noise from the Kinect sensor in both the depth and skeleton data.  To mitigate unwanted motion, the data taken from each hand is stored in a matrix of previous values, and the current control value is obtained by averaging the samples in this matrix.  Though the smoothing is adjustable, more samples per matrix will add latency to the response time.
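This kind of moving-average smoothing could be sketched as follows (illustrative names; the actual system stores the samples in a matrix per hand):

```python
from collections import deque

class Smoother:
    """Average the last n samples; larger n gives smoother output but
    adds latency to the response."""
    def __init__(self, n=5):
        self.samples = deque(maxlen=n)

    def update(self, point):
        self.samples.append(point)
        k = len(self.samples)
        return tuple(sum(p[i] for p in self.samples) / k
                     for i in range(len(point)))
```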

The amount of noise from the Kinect sensor can still be problematic, as the control points occasionally jump, affecting not only the current motion of the camera, but also the number of samples taken from the depth map to determine the state of each hand.  For example, when in motion, the system sometimes takes additional samples which exceed the given threshold, which resets the offset point and abruptly stops the camera motion.  Simple solutions to mitigate this false recognition of an inactive state are to increase the state threshold, or to store Boolean values of the current state in a sample array, where the state only changes if the array contains entirely identical (all true or all false) values.
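The second mitigation is a debounce over the recent raw states. A sketch of that idea (hypothetical names, window size assumed):

```python
from collections import deque

class StateDebouncer:
    """Only switch the reported hand state when the last n raw readings
    all agree, suppressing single-frame noise spikes."""
    def __init__(self, n=4, initial=False):
        self.history = deque(maxlen=n)
        self.state = initial

    def update(self, raw_active):
        self.history.append(raw_active)
        if (len(self.history) == self.history.maxlen
                and all(v == self.history[0] for v in self.history)):
            self.state = self.history[0]
        return self.state
```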


Evaluation

This system is likely to be the cornerstone of this project in the context of computer graphics development.  I plan to write a technical paper focusing on user interaction with this system through a virtual environment.

I ran a sample user evaluation to identify problems with the current system, as well as to receive feedback on added functionality and ease of use.  General feedback was positive, though I definitely need to work on ease of use, as well as on implementing features to assist navigation, and hand signals to reset the position of the camera if the user becomes lost.

For the tests, each user went through the set-up process, and was then allowed to use the system freely in order to become comfortable with the controls.  After they indicated they were ready, they ran through a sequence of objectives requiring them to move to various positions on a map, look at certain targets, and demonstrate control of the zoom in and out functionality.

Though this system takes much more concentration and is far more challenging to use than a mouse/keyboard camera, I found users to be more engaged with the virtual environment, as there is no physical hardware between them and their actions.  The camera motions are simple enough that users should eventually be able to use the system somewhat subconsciously, entering a mindset of engagement where their thoughts are more directly translated into action.

Mouse/keyboard control is more intuitive and comfortable to use, and users can complete the tasks more easily and efficiently than with the Kinect system.  Such immediate control seems to let users explore environments much more quickly, with less time to absorb the information presented.  The Organic Motion Camera system is more challenging and requires more careful concentration, but I believe its function creates a unique user experience, and engages the user more deeply with the given environment.  I believe that successful use of the Kinect system for control requires a more direct, willful suspension of disbelief, and thus engages the user more directly with the virtual environment.  This is essentially a major goal of this project: integrating the motivating components of technical development, artistic interaction, and environmental/industrial education.

This assumption will likely be the focus of the system paper I will be writing.  The next phase of testing will try to show that use of the Kinect system is more engaging, and that users absorb more information from the environment through its use.

Technical Development: Advanced Track Systems

I have been working for a few years to develop a simple and efficient track system that can be used to render and animate rail lines, roads, pipelines, power lines, and general object movement.  All of these systems contain vertices that describe a set of curves upon which object movement is interpolated.

Here is a video showing a basic implementation of the rail system: first where the line follows the terrain height, then where the line is smoothed, and finally where the terrain height is modified to prevent terrain from overlapping the rail line.  The video also shows a sample of pipeline animation.





General Track Lines 

Generation

One of the goals of this system is to have an implementation where you can connect the current line to any other point.  This is done most easily by connecting segments of circles: given an endpoint and an angle, a smooth arc can connect to any other point.
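In 2D, the connecting arc follows from the fact that the circle's centre must lie on the normal to the heading at the endpoint. A sketch of that derivation (my own working, not the actual code):

```python
import math

def arc_to_point(p0, heading, p1):
    """Radius and centre of the circular arc that leaves p0 with the given
    heading (radians) and passes through p1.  Returns (None, None) when the
    target lies straight ahead, i.e. a linear segment."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    nx, ny = -math.sin(heading), math.cos(heading)  # left-hand normal
    dot = nx * dx + ny * dy
    if abs(dot) < 1e-9:
        return None, None                           # straight line
    r = (dx * dx + dy * dy) / (2.0 * dot)           # signed: + = left turn
    return r, (p0[0] + r * nx, p0[1] + r * ny)
```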

One problem with this approach is that when connecting points, the end angle cannot be defined.  So for connecting new sections to objects or other arc segments, a more complicated cubic function would have to be used.  Creating smooth circuits and connecting to bridge objects will be necessary for a full implementation, but for now linear segments work very well.


These arc segments exist only to generate vertex arrays for the roads and rails, as well as a set of nodes that object movement is based upon.  The height of these vertices is set to the terrain height, so the lines do not cut underground.  Naturally, these lines are very rough depending on the terrain, whereas in reality train lines and roads are smooth, with a maximum gradient.  The track vertices therefore have a set amount they can change over a certain distance, effectively smoothing the line based on the system (roads can have more bumps and variation).  Lowering and raising vertices from the terrain creates the need to modify the terrain itself, which is done most simply by setting nearby vertices equal to the height of the track during a traversal.  This works well, but the resulting flatness is pretty rough and obvious.  I will probably try to allow a certain level of slope differentiation, most likely operating on triangles rather than squares, but it is hard to prevent terrain lines from cutting into the upper parts of the track or eroding from beneath.
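The gradient-limited smoothing could be sketched as a clamp over successive vertex heights (names and the two-pass structure are my own illustration):

```python
def smooth_gradient(heights, lengths, max_grade):
    """Clamp successive track vertex heights so the rise over each segment
    never exceeds max_grade * segment_length.  A forward then a backward
    pass lets cuts and fills propagate in both directions."""
    h = list(heights)
    for i in range(1, len(h)):
        limit = max_grade * lengths[i - 1]
        h[i] = min(max(h[i], h[i - 1] - limit), h[i - 1] + limit)
    for i in range(len(h) - 2, -1, -1):
        limit = max_grade * lengths[i]
        h[i] = min(max(h[i], h[i + 1] - limit), h[i + 1] + limit)
    return h
```

A looser `max_grade` would give the bumpier profile appropriate for roads.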


Movement

After the lines are generated, objects move based on a separate set of vertices that contain the position, length, and rates of change for direction and angles.  Positions of objects are essentially interpolated between vertices: each object stores the index of the segment it is currently on, and its position within that segment [0-1].  The distance moved on a segment is determined by the length of the segment and the speed of the object, so movement is consistent across the entire system.
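The index-plus-parameter movement could be sketched like this (hypothetical names; dividing the world-space distance by the segment length keeps speed consistent across segments of different lengths):

```python
class TrackObject:
    """Position stored as (segment index, parameter t in [0, 1])."""
    def __init__(self, seg_lengths):
        self.seg_lengths = seg_lengths
        self.index, self.t = 0, 0.0

    def advance(self, distance):
        """Move a world-space distance along the track, crossing segment
        boundaries as needed."""
        remaining = distance
        while remaining > 0 and self.index < len(self.seg_lengths):
            seg = self.seg_lengths[self.index]
            left = (1.0 - self.t) * seg
            if remaining < left:
                self.t += remaining / seg
                return
            remaining -= left
            self.index += 1
            self.t = 0.0
```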

These systems are fully scalable, and operate well in 3D.  Though there are a few issues with angle changes jumping between 0 and 360 degrees, these problems could definitely be smoothed out.  There are also a few missing features such as grading, where objects and lines would tilt into curves, but this is beyond the current scope of the project.


Rail Lines

Train motion is currently based on moving two separate wheel-truck objects, then determining the mean rotation and position of the train body from these two objects.  Each train car is moved in this way, and as long as each truck moves at the same speed, the train appears to move smoothly.  Currently the different train cars are initially offset from the starting position by relative distances dependent on the length of the curve.  Eventually I would also like to implement simple slack action between the train cars.
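In 2D, deriving the car body from its two trucks reduces to a midpoint and an `atan2` (a sketch with assumed names; the real system works in 3D):

```python
import math

def body_transform(front_truck, rear_truck):
    """Train car position = midpoint of its two wheel trucks; yaw comes
    from the vector between them, so the body rotates smoothly through
    curves as the trucks follow the track."""
    fx, fy = front_truck
    rx, ry = rear_truck
    position = ((fx + rx) / 2.0, (fy + ry) / 2.0)
    yaw = math.atan2(fy - ry, fx - rx)
    return position, yaw
```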

Pipelines
This system is simpler, as the only animation that will really take place is pipeline generation, similar to that shown in the video.  For this visual traversal, I added an alpha value to the vertices which is set by the current position on the pipeline.  The system is quite simple but effective.  I would also like to connect the lines to objects such as above-river crossings, which would require a separate implementation for similar animation.

Roads
Roads are similar to the rail lines, where each object contains data for its current index on the road line, as well as its current position on the line segment.  I also implemented a system to fade the beginning and end of the road segments.  Using an alpha value similar to the pipelines, the fade is also present in the movement data, which can be multiplied by the speed of the current object to slow it down as it approaches the end of the segment.
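That shared fade value might be sketched as follows (names and the ramp width are assumptions for illustration):

```python
def fade_factor(t, fade_len=0.1):
    """Alpha/speed multiplier along a segment: ramps 0 -> 1 over the first
    fade_len of the segment and back down over the last fade_len."""
    if t < fade_len:
        return t / fade_len
    if t > 1.0 - fade_len:
        return (1.0 - t) / fade_len
    return 1.0

def step(t, base_speed, dt=1.0, fade_len=0.1):
    """Scale the object's speed by the fade so it slows near the ends."""
    return t + base_speed * fade_factor(t, fade_len) * dt
```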


These systems are all quite simple, but difficult to explain without showing the math and implementation in detail.  I will try to write a comprehensive tutorial on these systems in the future.

Friday 30 November 2012

Technical Development: Point Sprite Refinement, Basic Dirt Containers



This video shows the updated point sprite systems, as well as the basic dirt containers.


Heat Vent

The only problem with the previous implementation of this system was that the distortion was more noticeable with distance.  By calculating the distance from the emitter to the camera, the approximate depth of these points is rendered to the depth layer, which is used to scale the final distortion when combined with the final scene.  Now the distortion is controlled: it is more noticeable at close distances, and softens to no distortion farther away.


Flare System

By combining the simple point sprite Flare System directly with a Heat Vent System, much more realistic fire is produced.  The heat system is modified so that particles initially expand, then contract in line with the fire particles, preserving detailed distortion on the smaller particles.  If the heat point sprites only continue to expand, the smaller fire particles that trail from the flare appear much more noticeably as circles.

These processes are also combined with a simple exhaust system.  The final result from these three simple point sprite systems is quite convincing, and the effect is further enhanced by blooming the bright pixels with the Light System.

Up close the detail is still lost, but the mid-range and far-distance views are quite successful.  Combining simple systems for fast, realistic effects is a major goal of this exploration.


Dust System

The harsh lines where dust particles intersect the ground and objects are still quite noticeable and problematic.  The Dust System is also rendered to the Depth Layer, where the blue channel contains a highly dynamic range from 0-1, and distortion data is contained in the red and green channels.  This causes the final scene to soften these harsh depth lines, and also blurs the resulting color lines.

Despite these efforts, the harsh lines and artifacts from this system are still apparent; further time will be needed to smooth out this rendering.  So far the best solution is to prevent dust particles from intersecting with emitting objects as much as possible.



Conveyor System

I worked on enhancing the depth of this system by adding an additional row of vertices in the middle of the stream, raised by a given height value.  There are noticeable artifacts at the end points of the stream, which seem difficult to fix.  Other than that, the stream does appear more rounded, though a similar result might also be produced by interpolating between two normals.  I am now using a generic bump map as a dynamic key for any ground or dirt texture.  Using a bump map renders more realistic and dynamic dirt, and is highly adaptable.


Dirt Containers

For machine animation, I created a simple system to simulate dirt filling a container.  The process is quite simple, consisting of a set of vertices that define the dirt, and minimum and maximum values for each vertex.  The locations and texture coordinates of the vertices are then interpolated between the two extremes depending on a fill value.  The implementations in the video are very simple, and can definitely be enhanced.
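The per-vertex interpolation is just a lerp by the fill value. A minimal sketch (names assumed):

```python
def fill_vertices(min_verts, max_verts, fill):
    """Interpolate each container vertex between its empty (min) and full
    (max) position by the current fill value in [0, 1]."""
    return [tuple(lo + fill * (hi - lo) for lo, hi in zip(vmin, vmax))
            for vmin, vmax in zip(min_verts, max_verts)]
```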

I will be working on creating a simulation where the excavator will automatically dig and unload dirt into a dump truck, and the dump truck will also unload its bucket onto a conveyor system.  I will focus on linking the container, falling dirt, dust and conveyor systems smoothly between these processes.

Tuesday 20 November 2012

Artistic Development: Organic Camera and Enhanced Rendering

(For this phase of the project I will be posting artistic development sections, exploring different rendering styles, and techniques for integrating organic input for final image creation)


This quick video shows basic rendering stylization, as well as slightly enhanced use of the Organic Motion Camera.


Rendering:

For rendering enhancement, I have worked on basic anti-aliasing techniques, detail exaggeration, as well as a simple depth of field function.

Depth Key
For the depth function implementation, I quickly render the terrain and main objects to a separate layer, where the distance of each pixel from the camera is stored in the blue channel.  Though it is slower to re-render to a separate layer, rendering a separate key image also allows more channels for other effects, such as heat distortion.

From the depth image, I generate a line map with a simple Laplacian filter.  This image contains edges around objects, weighted by how much they differ from the background.  Once the final scene and the depth layers have been rendered, I filter the main image through a Gaussian blurring filter, using the final depth image and the depth line image as keys.  The depth lines are used for primary edge softening, while the general depth data is used to implement a basic depth-of-field function.

For the depth of field, a threshold variable is set beyond which the level of blurring increases.  This implementation only simulates blurring from the far plane, and does not quite represent realistic camera focus.  The effect could be further enhanced with multiple threshold values that define near and far planes and how quickly they fade, simulating depth of field and focusing operations.
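The far-plane-only version reduces to a clamped linear ramp past the threshold (a sketch; the parameter names and the linear falloff are assumptions):

```python
def blur_amount(depth, focus_far, fade=1.0, max_blur=1.0):
    """Far-plane-only depth of field: no blur before the threshold, then a
    linear ramp up to max_blur over `fade` depth units."""
    if depth <= focus_far:
        return 0.0
    return min(max_blur, (depth - focus_far) / fade)
```

A near/far version would add a matching ramp below a second threshold.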


Color Key
After all scene components have been rendered, I generate a basic line map of the image with a simple Laplacian filter.  This map can be used simply as a key to blur harsh edges, or to sharpen image detail.

For sharpening lines on closer objects, I first process the color line map with a simple Gaussian blur filter, to soften and enlarge the lines.  The data from this image is subtracted from the scene image to darken the extracted color lines.  This function is scaled by the depth data, so that only close objects show enhanced detail.

I also use this line map to implement simple motion blur, similar to the technique used for the light map.  Once the initial line map is rendered, I render the previous line map with scaled intensity, and then pass the combined image through the Gaussian filter.  Depending on the intensity of the previous image, this motion blur can produce a soft and natural effect, or a more intense and abstracted image.
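The accumulation step could be sketched as blending the scaled previous line map into the current one (illustrative; the real version operates on GPU render layers, and the exact blend mode is an assumption):

```python
def accumulate(current, previous, persistence=0.6):
    """Blend the previous frame's line map into the current one; higher
    persistence leaves longer, more abstracted trails."""
    return [[max(c, persistence * p) for c, p in zip(crow, prow)]
            for crow, prow in zip(current, previous)]
```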





Organic Motion Camera:

One major goal of this project, based in artistic representation, is to work on additional ways to translate organic motion from a Kinect sensor into the presentation of the virtual scene.  The most obvious way to alter how a viewer looks at a scene is to change the camera.


The Organic Camera is meant to simulate the effect of "free hand" filming, rather than mechanical, linear, or steady shots.  For the Organic Motion Camera, one hand controls the 3D position of the camera, while the other controls panning and tilting, as well as zooming.  Both hands can be used, or only one at a time to simulate tripod or dolly shots.

If you would like to know more about this concept, this video is a pretty decent example of how I film: http://www.youtube.com/watch?v=Ahn7a8qeAYg&feature=plcp (Don't mind the tone or content; this video is pretty personal, but definitely a relevant basis for the "industrial" side of this project...)  Though it is pretty rough and disorienting at times, I think this style conveys more emotion and depth, and can be much more interesting and effective at presenting information if done well.


The 3D points for camera control are taken directly from the generated skeleton data.  The Kinect depth and skeletal data both contain a lot of noise, and produce quite a lot of shaking and strange movement.  I have implemented a simple system to average the last n points from each hand, and if needed, I'll work on other ways of smoothing out this data while still allowing for quick control.


General Organic Input:

The section of the video where the color and contrast shift shows further use of organic input.  For this demo, one hand simply controls the scene brightness, contrast, and saturation.  Dynamic variable control with organic input can create some pleasing results, and can push the final presentation of the image forward.

Additionally, the motion of the excavator in the video is also produced by organic input.  It is easy to see how effective this integration can be for animation control.

To implement these techniques in addition to the Organic Motion Camera, scenes would have to be rendered in layers, where object and camera movements are stored and replayed.  Since the camera system relies on just two 3D vectors for control, it should be easy to read motion from an array of 3D point transformations.  While the entire scene replays, the user could then control the final coloring of the scene.

Integrating and layering multiple organic input systems will be explored in future development, as well as refining the Organic Camera System.


Friday 19 October 2012

Technical Development: Heat Vent, Flares, Conveyor Belt, Dust


This video shows the progress made on the additional effect systems so far.  Each system is explained below:




Heat Vent (Distortion)

For the effect of heat distortion from vents, I am currently using a simple "fade in then out" point sprite system, which uses a simple cloud texture as a mask for sampling from a bump map.

For this process I am rendering a second scene which contains data used for 2D effects processing.  I am planning to create a simple depth map for anti-alias blurring, as well as implementing some kind of depth-of-field technique.  The heat system uses two channels of the effects render layer, one each for x and y distortion.

Once the final scene is rendered, I apply the depth/distortion layer to the scene, which simply offsets each source pixel by the x and y amounts contained in the red and green channels of the depth map.
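On the CPU, that offsetting step could be sketched like this (illustrative only; the real version is a shader pass, and the clamping at image edges is my assumption):

```python
def apply_distortion(scene, distortion, scale=1.0):
    """Offset each source pixel by the per-pixel (dx, dy) stored in the
    distortion layer (red -> x, green -> y), clamping at the image edge."""
    h, w = len(scene), len(scene[0])
    out = [[scene[y][x] for x in range(w)] for y in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = distortion[y][x]
            sx = min(w - 1, max(0, int(round(x + scale * dx))))
            sy = min(h - 1, max(0, int(round(y + scale * dy))))
            out[y][x] = scene[sy][sx]
    return out
```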


This effect works pretty well up close, but the main ripples are caused by the point sprite textures rather than the inner bump map texture.  I could solve this by using larger, less noisy point sprites, or by having a single sprite through which the bump map flows more visibly...   I want small ripples up close, and a less noticeable effect from farther away.



Flare System (Fire)

This effect is achieved by a simple point sprite system and post-process blooming with the existing Light System.  The point sprites fade in and grow in size until they reach a set maximum threshold, at which point they can randomly spawn a new sprite.  Once sprites are past this maximum, they fade out and decrease in size more quickly.  Like all these systems, this is a very quick and rough approximation, but it gives a pretty decent looking effect.

The flare is bloomed simply through the light layer, which renders all pixels with alpha values (written by the model shaders), as well as any pixel above a certain brightness (standard blooming).  These pixels are oversaturated, passed through a separable Gaussian blur filter, and then combined with the final scene.  This gives a nice effect, making the flare appear to emit light.

Combining this system with the heat particle stream produces a pretty amazing and realistic effect.  In the sample video the emitters are still separate, to show the difference.  The point sprite implementation of the heat effect is best for this, as it is more easily controlled.  There is still a problem with distance, but now the effect looks better farther away, and is less noticeable close up.  This problem will likely be solved by scaling the effect by the distances read from the depth layer.



Conveyor System

I am basing this system off of my first implementation of the falling dirt system.  This system is fairly straightforward, but is tedious to work with and variable-heavy at the moment.

I use a texture strip composed of 3 rectangles.  The start of the stream, section A (a texture fade from alpha to dirt), cannot wrap and mirror upon itself, so once the strip has reached its maximum length, the middle section B starts to grow.  Once there is no more input flowing into the system, section B ends, and the final segment, section C, begins to flow.  This section is the same length as section A, so if there is only a small amount of dirt, the resulting strip is symmetric and contains matching lines.

This system could probably be optimized, and as of now only one section of dirt can move along the belt at a time (though two separate stream objects could be used).

I plan to add depth to the stream, most easily by sampling from a bump map, as well as having a set of raised vertices in the middle of the strip.  For open-dump conveyor belts I will add falling dirt at the end, and try to connect the two systems smoothly.



Dust System

The dust system is very simple, basically a recreation of the exhaust system with fewer particles, based on the movement of an object.  One pretty major problem with this system is the obvious straight lines created by rendering point sprites against the Z-buffer.  If I sorted every particle with each object I could simply not read from the Z-buffer, but since that is a large operation, I will try to solve this issue with anti-alias blending, most likely read from the Depth/Distortion layer.  In this layer, solid objects and the ground (which are the problems) will not render to the alpha channel, leaving it open for conflicting point sprite systems to use independently.  Hopefully there will be a way to distinguish these hard lines, and blur them out smoothly.


Rail System

The train is currently running on a simple implementation of the track system, which is created by successive segments of circles.  This system is great on a 2D plane, but becomes very complicated with the addition of variable height.  I will be converting this system so nodes are generated from mathematical curves, while the movement of objects will be an interpolation between nodes and their associated direction and position vectors.  I will write a separate post, and likely a tutorial, on this system once the 3D implementation is worked out.  This will also be the basis for the roads, pipelines, and power lines... trains are just more fun to start out with!

Friday 21 September 2012

Development Cycle II

This semester I will be working on another Directed Studies Project, focusing on refining the existing systems, developing a few new ones, but most importantly focusing on the Organic Motion Camera.

My plan is to create a functional industrial environment utilizing many different systems, allowing users to fully explore the virtual world through the Organic Motion Camera.  I will conduct a user-based evaluation of the effectiveness of the camera system, as well as the visual systems.  I will create two sample animations, one using a standard linear camera, and the other using the Organic Motion Camera.  The comparison of these two animations will be another standard of the technical evaluation.

Using development techniques and the evaluation data, I will write a condensed technical paper explaining my project.  Hopefully I will be able to publish this paper through a graphics conference, in order to gain more experience and exposure within the academic field.  Once the paper is submitted, I will post my programs for shared use and development...  Hopefully my systems and tools will be easy enough to use for communal use in games and animations.

After this cycle, I should also be able to quickly create artistic, representative industrial models and environments.  I have a lot of projects in mind, including a few video games, many educational animations, as well as some visual art installations...  I am taking a break from the normal course calendar to focus on personal interests and project development...  I hope to get as far as I can on these projects now, for future development through my university education.

To start off, I worked on integrating the previous systems into one working environment. 

Pretty simple, nothing really new, but putting things together really helped to lay a future framework for additional systems.



This summer I was also lucky enough (...with a few detours) to gather a significant amount of documentation and reference footage: from the industrial forest harvesting operations in British Columbia, to the Alberta Tar Sands, Rocky Mountain copper and coal mines, and gas and oil infrastructure in the southern United States.  I plan to develop basic systems for these processes in the future.

Here is a short video showing real life examples of some of the additional systems I plan to develop:









Monday 23 April 2012

First Development Closing

As I have now finished my Directed Studies class, the initial development cycle for this project is complete.   I definitely plan to continue working on this project, including further refinement of these systems, as well as developing many more.

I plan to use these tools for Animations focused on Education and Awareness, for elements in Video Game Environments, and even for Visual Art Installations.  Though this project definitely has a long way to go before any applications can really be taken seriously. 

I put together a compilation video, briefly showing each of the systems I have developed during this cycle. 








I also created a sample oil tanker spill animation, using a Kinect sensor for camera control.  This implementation is the Organic Motion Camera, which I plan to use extensively and refine for ease of use.


Wednesday 4 April 2012

Advanced Land Modification and Dirt Movement

I have done significant work on refining the dirt systems.  I have worked on smoothing out the basic land alteration, making changes in texture, height and saturation smooth between vertices, and have ensured that the height of the land cannot be less than the height of the bucket.



I have also changed the dirt system to implement point sprites.  These sprites are simply a set of masks that use the ground textures for color.  When a sprite is created, its life span is set by the current height of the bucket; as it falls closer to the ground it fades out to ensure a smooth blend into the land.  This system is heavily reliant on how much the bucket is "open".  Given as a percentage, this value is used to calculate the spawn rate, the amount of side-to-side variation (width of the stream), and the depth of each point sprite.



If the bucket is barely open, you should see a very thin stream from the side, but a thicker stream from the front.  If the bucket is fully open, the stream from the side will appear much thicker.  This is implemented by storing the direction vector of the bucket and the bucket's open value when the dirt particles are spawned.  In the rendering process, the width of each sprite is determined by taking the dot product between the spawn direction vector and the vector between the current sprite and the camera.  This is scaled by the complement of the bucket value and added to the bucket value multiplied by the sprite width.
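That width calculation can be sketched as follows.  This is a rough Python stand-in for the actual C#/HLSL implementation; the vector handling and the `base_width` parameter are my own illustrative assumptions:

```python
def sprite_width(spawn_dir, to_camera, bucket_open, base_width):
    """View-dependent width of one falling-dirt point sprite.

    spawn_dir   -- unit vector the bucket faced when the dirt spawned
    to_camera   -- unit vector from the sprite to the camera
    bucket_open -- how open the bucket is, in [0, 1]
    base_width  -- full sprite width (a stand-in parameter)
    """
    # Dot product between the spawn direction and the sprite-to-camera vector:
    # 1 when viewed head-on, 0 when viewed from the side.
    alignment = abs(sum(a * b for a, b in zip(spawn_dir, to_camera)))
    # Scaled by the complement of the bucket value, plus the bucket value
    # multiplied by the sprite width, as described above.
    return (alignment * (1.0 - bucket_open) + bucket_open) * base_width
```

With the bucket barely open, a side-on view gives a near-zero width while a head-on view gives the full width; a fully open bucket gives the full width from every angle.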


As you can see I have also integrated the exhaust and light systems into this model.  The visuals are quite strong, and this is definitely the aesthetic I am looking for.  For model shading, I have implemented basic "Desaturate Highs" and "Desaturate Lows", which are basic Final Cut filters.  This coloring gives a much more dissonant feel than normal shading.

These systems could still use some refinement, but are pretty much finalized.

Land Formations and Water Pollution

A simple system I have worked on is creating an animation of undisturbed land changing into a mine site. Using a simple map editor, I can save XML files that contain vertex data for the land grid, which holds values for terrain height, texture weights, and saturation. In the editor I created an additional array to hold the "change" values for the terrain map, which is the height at the end of the animation.

At the start of the program, I simply divide the difference between the height and the change values by the running time for the animation.  Once the animation has started, I subtract the divided difference value from the height, and also change the saturation and the texture by this same value.
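The update amounts to a per-vertex linear interpolation.  Here is a minimal Python sketch of the idea (the real implementation is C# and reads the land grid from XML; plain lists are used here for illustration):

```python
def make_morph(heights, targets, duration):
    """Precompute per-vertex deltas, then advance the morph by dt each frame.

    heights  -- current vertex heights (mutated in place)
    targets  -- the "change" values: heights at the end of the animation
    duration -- total running time of the animation
    """
    deltas = [(t - h) / duration for h, t in zip(heights, targets)]

    def step(dt):
        # The same fraction would also drive saturation and texture weights.
        for i, d in enumerate(deltas):
            heights[i] += d * dt

    return step
```

After `duration` seconds of accumulated steps, every vertex has reached its "change" value exactly.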







The process is simple, but the results are pretty strong.


Another supplementary system is general water pollution.  This system only discolors water, but could be easily used to represent sedimentary contamination, acid mine drainage, and thermal pollution.  My goal is to have this system work with streams and lakes, which means the pollution needs to be able to flow in a specific direction.





So far I am using a diffusion algorithm, spreading the pollution weight only in the specified direction, but these results are not quite what I am looking for.  I want the pollution to dissipate more from the source, and flow downstream if the emitter is turned off.  I will continue to work on this process and define the algorithm in more detail.
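For illustration, the directional diffusion step described above could look something like this (a Python sketch over a 1-D row of weights; the actual grid, direction handling, and rate constant are my own stand-ins):

```python
def spread(weights, flow=1, rate=0.25):
    """One diffusion step that only pushes pollution in the flow direction.

    weights -- pollution weight per cell
    flow    -- +1 for downstream, -1 for upstream
    rate    -- fraction of each cell's weight that moves per step (stand-in)
    """
    out = weights[:]
    for i, w in enumerate(weights):
        j = i + flow
        if 0 <= j < len(weights):
            moved = w * rate
            out[i] -= moved  # weight leaves this cell...
            out[j] += moved  # ...and arrives one cell downstream
    return out
```

Because weight only ever moves one way, a slug of pollution keeps drifting downstream even after the emitter stops, which is the behaviour I am after.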

Wednesday 14 March 2012

Oil Pollution Refinement

Previously I was just using a simple linear texture for the oil pollution effect, which was an appropriation of the realistic color spectrum.  Since then I have talked to a Graduate Student from the UVic Graphics Department (credit will be given for the help) about the actual physics behind the appearance of oil slicks.  The color spectrum at the borders of large scale oil spills, or of small puddles on roads, comes from the interference of light traveling through the thin film of oil.  Basically, the change in color is a function of the viewing angle and the thickness of the oil on the water.



I have created a simple 2D color spectrum map; for each vertex I determine the sampling point from the viewing angle relative to the camera, and from an oil weight value which spreads to neighboring pixels.  This effect is a very basic approximation of the real physics, but gives a nice result very efficiently.
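The per-vertex lookup can be sketched like this (Python pseudocode for what would be shader work in the real C#/HLSL version; the coordinate mapping is my own guess at one reasonable parameterization):

```python
import math

def spectrum_uv(camera, vertex, normal, oil_weight):
    """Pick (u, v) lookup coordinates into the 2-D spectrum map for one vertex.

    u comes from the viewing angle (view direction vs. surface normal),
    v from the accumulated oil weight at the vertex.
    """
    view = [c - p for c, p in zip(camera, vertex)]
    length = math.sqrt(sum(v * v for v in view)) or 1.0
    # Cosine of the angle between the view direction and the water normal.
    cos_angle = abs(sum((v / length) * n for v, n in zip(view, normal)))
    return (min(1.0, cos_angle), max(0.0, min(1.0, oil_weight)))
```

Looking straight down gives u = 1, grazing angles give u near 0, and v slides from the transparent edge of the spectrum toward the solid oil fill as weight accumulates.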




My water is rendered as a solid color which transitions to a dulled reflection as the viewing angle decreases.  The water system has the potential for more detail: I will be adding reflections of the land, smoke, and models on the water, as well as a simple bump map that will give the effect of smaller sub-waves.  The water can also recognize nodes that are underground, which are culled and set to inactive; these can act as anchors for smooth wave transitions to the land.  (Pollution and waves will then not spread through land.)

The smoke in this scene is an adaptation of the steam system.

Steam System Finalization

In XNA 4.0 there is a common problem with rendering point sprites with Alpha Blending.  For the Steam System I am using Alpha Blending rather than Additive Blending because it is crucial to give the clouds the needed voluminous effect.  Additive Blending looks great for the darker exhaust, but for particles that are more detailed and fade out from the center, Alpha Blending is the right choice.

Because each pixel is written to the Z-Buffer, even those with full alpha, normal rendering produces box artifacts around point sprites at certain camera orientations.  These artifacts occur when point sprites are drawn out of order (relative to the camera position), but even if the list is sorted, the artifacts are extremely visible when two particles are the same distance away.  The solution to this problem is to disable writing to the Z-Buffer while maintaining the read function, so the point sprites do not show through other objects in the scene.

This simple solution still has problems with my steam system.  After particles spawn, they fade out linearly until their transparency value is 0.  The sprites are rendered correctly when viewed from the front of the chain, but appear hollow from behind, which means the list needs to be sorted relative to the camera.  From behind, the point sprites with the most opacity are drawn last, so the overall effect appears to be a smoke funnel, which is a nice effect, but not what I am looking for.

As the collection of Point Sprites is a linked list, I can simply choose to draw from the start or from the end depending on the position of the camera relative to the spawn point and the wind direction.  This looks fine except at the point where the order changes: there is a very obvious shift in the smoke column because of the details in the steam clouds, and the faded borders necessary to give the steam volume.

My solution to this transition point is to multiply the color of each smoke cloud by the dot product of the vector between the camera and the spawn point, and the wind vector.  This gives a smooth blending as you rotate around the steam column, but the steam only renders fully when viewed exactly from behind or in front.  Making the transition quadratic preserves the full volume effect longer and then quickly changes to the fade transition.
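One plausible reading of that quadratic shaping, as a Python sketch (the actual constants and color math in the C#/HLSL version were hand-tuned, so treat this as an assumption):

```python
def steam_fade(cam_dir, wind_dir):
    """Blend factor for the steam column near the draw-order flip.

    cam_dir  -- unit vector from the spawn point to the camera
    wind_dir -- unit wind direction
    Returns 1 when viewing along the wind axis (exactly in front or behind)
    and 0 when side-on, shaped quadratically so the full-volume look
    holds longer before falling off.
    """
    d = abs(sum(a * b for a, b in zip(cam_dir, wind_dir)))
    return 1.0 - (1.0 - d) ** 2
```

Compared with using the dot product directly, the quadratic curve stays closer to full strength over most of the rotation and only drops quickly near the side-on view.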

The following video shows these problems:


The transition ends up becoming very white, but I am able to specify the color of the transition to match the color of the smoke.  I am also slightly reducing the size of the point sprites at the transition; otherwise they appear to grow through it.  After balancing these different functions and constant scalars, I have been able to produce steam columns that have volume, texture, and transition smoothly for all orientations.

With multiple emitters, each emitter must be sorted by its position relative to the camera.  The number of emitters should be very small (unless the implementation is showing an overall view of the Tar Sands or the coal plants in China).  Unfortunately there is an obvious shift at the transition point, but most of the time this is not too noticeable, and for my purposes this artifact can be tolerated, as it can be avoided.



The second part of this video shows the factory integrated with a land map created with my terrain editor.  As particles spawn and dissipate, they affect the land by saturating nearby vertices. 

Saturday 10 March 2012

Light System Finalization

I have continued to work on refining the light system, and have made improvements on the initial implementation.  As this is the end of this part of the development, I will reiterate the details of the entire system.



Light Sources:
Light sources are defined in a texture map.  Areas with high alpha are light sources; the more transparent a pixel is, the more light it will "emit".  This allows me to specify very detailed and unique light sources.  Light colors can also be defined in the texture.  Light sources will be saturated in the bloom process, so I define the colors in the texture map to be very desaturated; when there is no light, the sources will be rendered gray.


Basic Method:
-Draw each model, preserving the alpha values of light sources to the final render
-On a new layer: extract pixels with alpha values less than 1 from the scene
(this layer is half the size, for efficiency)
-Perform a Gaussian Blur on the light map

Problems:
-As the light sources of the model are drawn with transparency, you can see through parts of the model.
-Each model with light must be drawn black before the light sources are drawn, so they will not appear to be transparent.



Distance Problem: (both methods)
-As the light sources get farther away, there are fewer pixels for the Gaussian Filter to sample from.  The light sources appear to dissipate very quickly, especially with a render layer that is half the screen size.
-Filters with larger kernels look much better close up, but dissipate more quickly
-Filters with small kernel sizes still give a good light effect close up, and remain longer, but still dissipate too quickly.
-No filter looks bad close up, but remains over long distances.

This video shows these different cases:


To solve this problem, I use a very small kernel size (which is also more efficient), draw the light source on the model at full resolution, and then perform the bloom process on the extracted light map.  At close distances only the bloomed light source is visible; at farther distances, when the bloom dissipates, you can still see the light source on the model.


Another goal of the light system is to allow for animations.  Each model will have a variable that accounts for the saturation and value of its light sources.  These variables can be set to be completely on or off, or to give a blinking effect.



Final Process
-Draw each Model normally.  (Draw light sources more grey, this is what will be seen when the lights are off)
-If model has light, draw light pass:
  • For every pixel in texture that has alpha = 1, draw full transparency black
  • If pixel has transparency, saturate and increase brightness by alpha weight, then set final alpha value to light value.  (If light source is off, the second pass will not appear, if light value is full, second pass will be rendered over the model)
  • The alpha values of light sources still have to be less than 1 to be extracted.
-Run Light Extract pass over the rendered Scene; for any pixel with alpha less than 1, saturate to preserve color.  All other pixels are black.
-Perform Light Bloom pass on the light map
-Combine bloomed light map with rendered Scene
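The extract and combine steps from this process can be sketched per-pixel as follows.  This is illustrative Python, not the actual C#/HLSL shaders; in particular, "saturate" is approximated here by brightening toward white by the light strength, which is my own simplification:

```python
def extract_lights(scene):
    """Light-extract pass: pixels with alpha below 1 are light sources.

    scene is a list of (r, g, b, a) tuples in [0, 1].
    """
    out = []
    for r, g, b, a in scene:
        if a < 1.0:
            s = 1.0 - a  # more transparent = stronger emitter
            out.append((min(1.0, r + s), min(1.0, g + s), min(1.0, b + s)))
        else:
            out.append((0.0, 0.0, 0.0))  # non-light pixels go black
    return out

def combine(scene, bloomed):
    """Additively combine the bloomed light map with the rendered scene."""
    return [tuple(min(1.0, c + l) for c, l in zip(p[:3], q))
            for p, q in zip(scene, bloomed)]
```

Fully opaque scene pixels contribute nothing to the light map, so only the defined emitters survive into the bloom pass.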

This video shows the results of the final process.


There is still room for improvement in aesthetics with the process variables, and some of these can be set per model for diverse effects.



Light Motion Blur
-A simple feature that gives a nice effect, but may not be desirable
-Before combining Blurred Light map with final scene, save render layer as texture
-On the next pass, draw the light map along with the previous bloomed light map at partial transparency, to ensure that the previous map dissipates.
-This effect is not very noticeable at farther distances.
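The feedback blend behind this trail effect can be sketched simply (Python stand-in; the `decay` constant is my own illustrative assumption for the "partial transparency"):

```python
def light_trail(current, previous, decay=0.6):
    """Blend this frame's bloomed light map with a fading copy of the last.

    Maps are flat lists of intensities in [0, 1]; `decay` is the partial
    transparency that makes the previous map dissipate over time.
    """
    return [min(1.0, c + p * decay) for c, p in zip(current, previous)]
```

Because each frame feeds its output back in as `previous`, a light that switches off leaves a geometrically fading afterimage rather than vanishing instantly.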

Initial Land Modification and Dirt Movement

One of my main inspirations for this project was to create a program where a user could control an Excavator and physically alter the land in the environment.  This process is one of the main foundations for this project.  Alteration includes height modification, texture deformation, and color desaturation.  (Shifting colors towards grey)

This feature still has some smoothing out to do.  For realistic implementation, i.e. a large land map with small machines, subdivisions will be required for accurate results.  Implementing efficient subdivision is a goal for this project, but it will be very tricky to keep the system efficient while alteration stays seamless. 




Another related system to the excavator simulation is falling dirt from the bucket of the machine.  So far I have a simple implementation using a texture band that has a beginning, looping middle, and end texture.  The falling dirt is also added back into the land map where it falls.  This system is quite simple to implement in conjunction with the excavation process.  For each update I am already checking which grid space the bucket is above, and the distance from the ground surface.  These are the only external parameters needed for the falling dirt.

This gives a nice effect, but is not quite realistic enough.  I am going to work on implementing this effect with more dynamic point sprites that are also partially animated.  This current system is still quite useful though, as it could easily be adapted for a conveyor belt system (part of any mining operation or processing facility).

Tuesday 28 February 2012

Initial Oil Pollution

The first form of water pollution I am working on is oil spills on large bodies of water.  I am using a texture that ranges from full alpha, through an oil color spectrum, into a solid oil fill.  The water grid has a variable for oil weight, and when this value passes a certain threshold, it spreads to its neighbors.  This creates slight variation in the size of the transition spectrum, but it does not stretch too much and stays at a good level.
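The threshold spread can be sketched like this (Python stand-in for the C# grid update; the threshold and share constants are my own placeholders, and this sketch does not conserve mass, matching an emitter that keeps feeding the slick):

```python
def spread_oil(grid, threshold=0.5, share=0.05):
    """One spreading step over the water grid's oil weights.

    Cells whose weight passes the threshold leak a share of it to their
    four neighbours.  grid is a list of lists of floats in [0, 1].
    """
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for y in range(rows):
        for x in range(cols):
            if grid[y][x] > threshold:
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        new[ny][nx] = min(1.0, new[ny][nx] + grid[y][x] * share)
    return new
```

Cells below the threshold hold a partial weight that picks a spot inside the transition spectrum, which is what keeps the slick's border varied without stretching it too far.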

Oil Texture

Right now I am using a simple water system that has simple, but efficient updating and movement.  The water grid has the potential to be interactive, and can be implemented so it can be updated with the GPU rather than using CPU cycles, which will be much more efficient and allow for a much more detailed water grid.




This video shows a simple oil spill spreading across the water grid.  The oil is spreading based on the UpSpeed of the waves to make it more varied and not completely linear.  The wind direction forces movement more in the specified direction, but still allows the spread in the other directions as well depending on the magnitude. 

Friday 3 February 2012

Smoke and Steam Systems Polishing

 

I have added a few changes to the exhaust system to make it look more realistic.  I am basically using two elements for each particle spawn: one that is larger, with more transparency in the center, that lasts longer... This gives a nice "wispy" effect as the smoke disappears.  The other particle is smaller but solid in the middle, to fill the centers of the larger particles.  The filler particles only last about half as long. 





I have done a lot more work with the Steam System, slowing the movement/spawn rate down, adding more variation and implementing a simple "fade in, fade out" life span for each particle.  At the peak of its existence, there is a chance that it will spawn a new particle that has a greater directional velocity, giving a nice "billowing" effect.  The particles are also chained to the element before them, so they do not bunch up too close and do not get too far away.
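The chaining constraint can be illustrated with a small sketch (Python stand-in; 1-D distances along the column and the gap limits are my own assumed parameters):

```python
def clamp_chain(positions, min_gap=0.5, max_gap=2.0):
    """Keep each particle within [min_gap, max_gap] of the one before it.

    positions are distances along the steam column, in spawn order.
    This stops clouds from bunching up or drifting too far apart.
    """
    out = [positions[0]]
    for p in positions[1:]:
        gap = min(max_gap, max(min_gap, p - out[-1]))
        out.append(out[-1] + gap)
    return out
```

Particles that lag behind get pushed out to the minimum gap, and runaways get pulled back to the maximum, so the column keeps an even texture.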

I still have more adjustments to make with this system, and I am still having trouble with alpha blending... I must either sort every particle relative to the position of the camera, or figure out a way to Alpha Blend particles correctly in XNA 4.0.  (Apparently this is a fairly common problem, with no real clear-cut solutions.)

Monday 23 January 2012

Advanced Lighting

I have been thinking about ways to implement more dynamic lighting.

My previous system required each model to be rendered a second time, where only the transparent areas of the texture would be saved.  (This Render Target is then passed to the Gaussian Blur shaders.)  The old system also arbitrarily set a specific color, and a fixed intensity for any alpha value.  Since only lights were saved, the old system was also quite buggy if there was a light behind an object.

I am using textures where you can specify the light color, and the parts of the "light" that are brighter (Lamps in head lights, LED truck lights, etc)  Here is a test for this implementation:




 Basic implementation of Advanced Lighting

Now the model is only rendered once.  The entire scene's alpha values are saved into the main Render Target, then a new shader isolates the transparent areas, and then saturates and brightens these light sources depending on the input values.  (Brightness and Saturation should be stored in each model for individual light animations and effects... this may be tricky to implement)  I have made a few different sample lights simply by increasing the opacity in certain areas.

When the initial scene is blended with the blurred light map, the transparent light sources are desaturated and become completely opaque.  This gives a good effect that the lights are "off" when the light intensity is set to 0.  
Further work is still needed for this system. 

Saturday 21 January 2012

Initial Steam and Land Alteration

I have worked further on the smoke (Exhaust) particle system, implementing size and alpha change over each particle's duration; all particles are initialized with a random velocity and initial size.

 Train Test, better exhaust smoke.


I have started the implementation for a basic steam system.  Similar to the smoke exhaust, but I would like it to be more constant and voluminous.  I still have a lot to work on with this system, but this is a decent start.

In this video I am also using a basic land alteration technique, simply desaturating the land underneath the smoke over time to simulate the effect of pollutants such as fly ash and soot.

Video captured with hyper cam, fraps not working.
Untextured Model rendered with basic XNA model drawing.

Sunday 15 January 2012

Initial Light Bloom and Smoke

 
Video from project test


Light Bloom
By adapting and simplifying the code of the Bloom Sample from Microsoft XNA resources,
http://create.msdn.com/en-US/education/catalog/sample/bloom

I have been able to achieve the desired Light Bloom effect.  I am using a small, 3-pixel kernel size to preserve the light effect from farther distances.  This still produces an adequate result and saves time versus the original 15-pixel kernel size.  The filtering process uses separable Gaussian filtering on the GPU.  The result is combined with the original scene.
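Separable filtering works by blurring rows and columns in two cheap passes instead of one expensive 2-D pass.  A hedged Python sketch of the idea (the real version runs as GPU shader passes; the 3-tap kernel here is a stand-in):

```python
def blur_1d(row, kernel):
    """Convolve one row with a small odd-length kernel (edges clamped)."""
    k = len(kernel) // 2
    n = len(row)
    return [sum(kernel[j + k] * row[min(max(i + j, 0), n - 1)]
                for j in range(-k, k + 1)) for i in range(n)]

def separable_blur(image, kernel=(0.25, 0.5, 0.25)):
    """Two-pass separable Gaussian: blur rows, then columns.

    For an n-by-n image and kernel width k this costs O(n^2 * k)
    instead of O(n^2 * k^2) for the equivalent single 2-D kernel,
    which is why the blur is split into two GPU passes.
    """
    rows = [blur_1d(r, kernel) for r in image]
    cols = [blur_1d(list(c), kernel) for c in zip(*rows)]  # transpose
    return [list(r) for r in zip(*cols)]                   # transpose back
```

Because the Gaussian kernel is separable, the two 1-D passes give exactly the same result as the full 2-D convolution.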

As of now light emitters are defined by transparency in the texture.  Models are rendered first normally, then a second pass renders only the light spots, which is then passed through the filtering process.



Smoke
Using the point sprite system in Riemer's XNA tutorial,
http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2/Point_sprites.php
I have been able to implement a simple exhaust system.  I will be adapting his code to suit my needs, and use a custom vertex system that will allow the smoke particles to increase in size, and to become more transparent.

Saturday 7 January 2012

Project Plan

Objective:
I plan to create and implement simple real time graphics systems for artistic visualizations of industrial aesthetics.  This engine will be used to push Computer Graphics further into the realms of Environmentalism and Visual Art.

Butte, Montana: source of inspiration.


Purpose:
The main purpose of this project is to produce an efficient and dynamic Graphics Engine that can be used for video games, simulations, or educational programs to raise awareness about the impacts large scale industry has on the environment.  Compiling various systems into a collaborative whole will also give me essential experience for serious projects, and will help me develop a unique and artistic style with Computer Graphics.

 
Goals:
With this project I plan to study and create my own graphics systems for various effects of industrial operations.  These mainly deal with methods of pollution, but also imply the implementation of unified general aesthetics.  All these systems should be as simple and efficient as possible; this will free up processing time for the rest of the program. 

I have compiled a video that shows examples of the following systems:

 Video footage is my own, except for clips from these Youtube sources:
http://www.youtube.com/watch?v=HkrH-KYJ-Q4
http://www.youtube.com/watch?v=ba7ZBplVcys
http://www.youtube.com/watch?v=0SCVD44ybTo


(In order of approximated complexity)
Light Bloom- Strong and piercing light sources on machines
            -Define Light Emitters, use separable bloom filters

Smoke-  Combustion Exhaust from machines and factories dissipating into the atmosphere
            -Particle System, emitters/point sprites

Steam- Thicker and longer lasting exhaust from factory smoke stacks
            -Particle System, emitters/point sprites

Land Alteration- Interactive land map that is physically modified when affected by machines or by exhaust
            -Land Grid, height modifications, hue/saturation transformations

Dirt Movement- Falling dirt from machines to the ground
            -Independent objects with semi transparent animated textures

Water Pollution- Dirt and pollutants mixing with clean water
            -Interactive water grid, simple fluid motion texture blending


Implementation:
All these systems will be present in a functional 3D environment that the user can move a camera through to explore an industrial wasteland.  This project will be coded in C# using XNA.  I will be using mostly custom HLSL shaders for my effects.


In this map there are 3 main components which implement the project's graphic systems.

Factory - Produces smoke and steam, and is dumping waste into the river
Coal Train - Runs across the map pumping exhaust into the air, leaving a trail of waste behind it
Excavators - Continuously dig away at the hillside and dump the dirt onto the ground.


This project is to be completed by the end of the 2012 Spring Semester 


Starting Point:
I have already made a 3D map editor; I will adapt this system to accommodate this project's needs, and add some new features.
 
 
I also have some raw models to work with... I will be finalizing these, and developing systems for movement and animation.