Monday, 10 August 2015

Refined Map Editor

Over the past year I have had the chance to work a few weeks here and there on refining and refactoring my terrain editor to allow for more realistic and detailed map creation.

A large part of this work was merging the effects, object, and point sprite systems created in the Industrial Abstraction Project into a single library, as well as creating a system and editor for Bezier curves to be used for features such as rail lines, power lines, and pipelines.
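As a rough sketch (not the exact editor code), a cubic Bezier segment of the kind used for these lines can be evaluated like this in C#/XNA:

```csharp
using Microsoft.Xna.Framework;

// Minimal sketch of cubic Bezier evaluation, as might back the curve editor.
// The helper and control point names (p0..p3) are illustrative only.
public static class Bezier
{
    // Returns the point at parameter t in [0,1] on the curve p0..p3.
    public static Vector3 Evaluate(Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
    {
        float u = 1f - t;
        return u * u * u * p0
             + 3f * u * u * t * p1
             + 3f * u * t * t * p2
             + t * t * t * p3;
    }
}
```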

Working on the Industrial Abstraction Library has also provided a chance to write documentation and process diagrams for the various systems I have built.

All this work is a step closer to opening and sharing this library on GitHub, though all of this development still takes place in the discontinued XNA. (I am aware that it may not be too hard to port this work to MonoGame, but I don't imagine I will get a chance to switch for a while.)


Anyway, below is a video showing the various layers and editors that I have implemented in the Industrial Abstraction Map Editor. The video is a bit long, but it shows how quickly representative maps can be created with these tools.

Sunday, 6 April 2014

Advanced Animation, integration with Forest Rendering System

It has been a while since I have posted to this blog, as my primary development this year has been on the Forest Rendering Project. I have, however, expanded the Industrial Abstraction project, particularly to integrate these two systems and to develop simple animations with them.

A sample of this work can be seen in the animation below. This is a first attempt at creating a short story-based animation using these systems:


The raw footage for this animation was composed in a program using both the Industrial Abstraction and Forest Rendering systems in a C# and XNA environment. The skinned animation for the deer was exported from Blender, and the final cut, including transitions and color effects, was assembled in a video editing program.



Integration with the Forest Rendering Project

The specifics of integrating the Industrial Abstraction systems are explained in the Forest Rendering Project blog, but they mainly involve a somewhat lengthy deferred rendering process, separating out opaque models, visible point sprite effects (such as steam and flares), and depth point sprite effects (such as heat vents). Combining these elements relies on maintaining a high resolution bitmap with a fairly high bit depth (64-bit rather than 32-bit color).
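For illustration, a layer setup along these lines might look as follows in XNA 4.0 (the layer names and exact formats are placeholders, not the actual project code):

```csharp
// Illustrative only: separate layers for the deferred process described above.
// SurfaceFormat.HalfVector4 gives a 64-bit target, which preserves precision
// when the layers are later combined.
RenderTarget2D opaqueLayer = new RenderTarget2D(GraphicsDevice,
    width, height, false, SurfaceFormat.HalfVector4, DepthFormat.Depth24);
RenderTarget2D spriteLayer = new RenderTarget2D(GraphicsDevice,
    width, height, false, SurfaceFormat.HalfVector4, DepthFormat.None);
RenderTarget2D depthEffectLayer = new RenderTarget2D(GraphicsDevice,
    width, height, false, SurfaceFormat.HalfVector4, DepthFormat.None);

// Each pass draws into its own layer before a final combine shader runs:
GraphicsDevice.SetRenderTarget(opaqueLayer);
// ... draw opaque models ...
GraphicsDevice.SetRenderTarget(spriteLayer);
// ... draw steam, flares ...
GraphicsDevice.SetRenderTarget(null);
```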

Probably of most interest is that concepts of land modification (such as pipeline construction, pollution darkening the ground, or machines tearing into the earth) can easily be applied to the Forest Rendering System by rendering interactions to a Modification Map. This render target can affect the height of the forest (no forest where the height value is 0), whether an area is a cut block, and the life value (which determines the color of the trees).
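As a loose sketch of the stamping idea (the texture, channel layout, and names here are placeholders, not the actual implementation):

```csharp
// Stamp an industrial interaction into the Modification Map render target.
// The forest system then reads this target back: a zeroed height channel
// means no forest, and a reduced life channel darkens the trees.
GraphicsDevice.SetRenderTarget(modificationMap);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
spriteBatch.Draw(impactTexture, impactPosition, null, Color.White,
    0f, Vector2.Zero, impactScale, SpriteEffects.None, 0f);
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);
```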

Ultimately this allows the interactions of industrial processes to be visualized through their impact on forests. Mixing an organic, lifelike rendering system with the industrial rendering systems presents a sharp contrast that amplifies the effectiveness of both. As shown in the animation above, the steam and flare systems can be applied directly to forests to model wildfires, an interaction that I plan to develop further on its own.


Animation System

To create high quality animations with these systems, I developed a simple animation system which records values for different elements separately in layers. For now, this recording process takes live input (from mouse, controller, or Kinect) and writes the specified values to a vector every half second. The system can thus record and play back live organic animation. As these recordings usually need to be smoothed out or adjusted, I also implemented a simple graph editor using Form controls, which allows me to edit specific values for each layer.
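A simplified sketch of such a recording layer (the names and details are illustrative, not the actual implementation):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Illustrative animation layer: samples a live input value every half second
// and plays it back by interpolating between the stored samples.
public class AnimationLayer
{
    const float SampleInterval = 0.5f;               // seconds between samples
    readonly List<Vector3> samples = new List<Vector3>();
    float timer;

    public void Record(Vector3 liveValue, float elapsedSeconds)
    {
        timer += elapsedSeconds;
        if (timer >= SampleInterval)
        {
            timer -= SampleInterval;
            samples.Add(liveValue);
        }
    }

    // Playback linearly interpolates between neighbouring samples.
    public Vector3 Play(float time)
    {
        if (samples.Count == 0) return Vector3.Zero;
        float f = time / SampleInterval;
        int i = (int)f;
        if (i >= samples.Count - 1) return samples[samples.Count - 1];
        return Vector3.Lerp(samples[i], samples[i + 1], f - i);
    }
}
```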

This animation system is a somewhat crude implementation, but it works fairly well. As of now it is built entirely around the concept of live animation recording, unlike a program such as After Effects, where editing is not done in real time. There are drawbacks to such an implementation: the entire animation must be played back to see the effect of value changes, and because values are recorded every half second, making large changes to an animation can become very tedious, as many value points may need to be modified. Certainly I can improve the editing functions of the control form, especially by allowing a range of value points to be selected, and by implementing functions such as Set To Value, Smooth, and Interpolate.
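For example, the planned Smooth function could be as simple as a moving average over the selected range (a sketch only, since this is not yet implemented):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

public static class GraphEditorOps
{
    // Hypothetical "Smooth" for a selected range of recorded samples: each
    // value becomes the average of itself and its two neighbours.
    public static void Smooth(List<Vector3> samples, int start, int end)
    {
        var result = new List<Vector3>();
        for (int i = start; i <= end; i++)
        {
            Vector3 prev = samples[Math.Max(i - 1, 0)];
            Vector3 next = samples[Math.Min(i + 1, samples.Count - 1)];
            result.Add((prev + samples[i] + next) / 3f);
        }
        for (int i = start; i <= end; i++)
            samples[i] = result[i - start];
    }
}
```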

Tuesday, 18 December 2012

Composition: Initial Tarsands Model

I worked on composing the various systems I have developed so far into a virtual working model of a basic Tarsands Mining Operation. This video explores the environment, and was captured while using the Organic Motion Camera.



This scene was created mainly for purposes of user evaluation, to test the integration of these various systems as well as the limits of complicated composition. This initial implementation is still incomplete and quite rough: I will work on smoothing out the machine animation, as well as routing additional dump trucks and excavators with a more dynamic implementation. There are also still some problems with the track systems, and some issues with intersecting steam systems. The dust, dirt, and container systems could also be improved, both by smoothing the transitions between systems and in their general complexity.

Creating such a model was challenging, but as I continue to adapt my land map editor and these systems, such a task will become increasingly manageable. Though I feel this project is successful in displaying these systems and creating an engaging environment, it is somewhat limited by the detail of my 3D models.

Obviously I cannot hope to create a technically perfect model of the tarsands, nor would I wish to attempt such a task (there is no way one could possibly represent the scale of the Alberta Tarsands). Such a model is intended as a more conceptual representation. I plan to first use this or a similar model for animation projects on tar sands and pipeline expansion...

Creation of this model marks the end of this development cycle. I feel satisfied with these results, and I would be content to end the project at this benchmark. Still, having evaluated this process and seen the levels of public awareness of industrial development, I feel very motivated to continue work on this project. As I am nearing my limits of technical representation, in the coming months I will largely focus on more artistic development.

Kinect System Development

As a component of this project, I have worked on developing an easy-to-use system that allows users to control various processes within the Industrial Abstraction System. As previously mentioned, the motivating goal of this development is to integrate organic input with procedural systems, in order to make environments more interactive and to create animations with the potential to present more depth and emotion. The Organic Motion Camera is the most direct way to achieve these goals, and is a major focus of this stage of development.

Here is a video showing the use of the Organic Motion Camera system. Initially the system must run through a simple set-up procedure in order to correctly interpret data from the current user. The system still needs improvement, but it is coming along with some pleasing results.





General Function

The system uses built-in skeletal tracking to process hand positions for input control. Each hand has an offset point which the user can modify. The difference between the processed hand position and its respective offset point is the control data used for system control. With the camera system, these offsets are interpreted as rate-of-change values: holding a hand a constant distance from its offset moves the camera at a corresponding constant speed.
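In rough code terms, the per-frame update reduces to something like this (variable names are placeholders):

```csharp
// The control vector is the hand's displacement from its offset point;
// the camera integrates it as a velocity, so a constant displacement
// produces a constant camera speed.
Vector3 control = handPosition - offsetPoint;
cameraPosition += control * speedScale * elapsedSeconds;
```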

Initially these offset points were predetermined, which required the user to stand at a specific distance and rest their hands at certain points. This created many problems, as it was difficult to completely stop camera movement, and hand movements would often intersect and thus interfere with the data taken from the skeleton.

To solve these problems, I implemented a process in which each control point is only active while the user points that hand forward, and inactive when the hand points upward. When a hand is in the inactive position, its offset point is set to the current position of the hand, so the control point is the zero vector. When the hand is in the active position, the offset remains at the last inactive point. Additionally, if a user's hand motions intersect, they can simply reset the offset points to avoid the collision.
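A minimal sketch of that state handling (again with placeholder names):

```csharp
// While a hand is inactive, its offset follows the hand, so the control
// vector stays at zero; once the hand activates, the offset freezes at
// its last inactive position.
if (!handActive)
    offsetPoint = handPosition;   // control = hand - offset = Vector3.Zero
Vector3 control = handPosition - offsetPoint;
```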

Both hands operate independently, so a user can choose to only pan or only move rather than both at once. It is also easy to stop movement, as the user only needs to put their hands up, in a natural stopping pose.


The RGB video display is used as a reference guide for the user: red circles indicate that a hand is in the inactive position, white circles are the active offsets, and the green and blue circles are the active hand positions. This video, along with the control sliders, is normally contained in a separate frame, in order to keep the program display clean for animation capturing.


Data Processing

The technique used for gesture recognition is to count the samples from the depth map that fall within a specified radius of the associated hand point. These counts are then used to determine whether the hand is pointed forward (fewer samples) or pointed up (more samples), based on a determined threshold.
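A rough CPU sketch of this test, with bounds checks omitted and placeholder names (depthMap as an int[,] of millimetre depths):

```csharp
// Count depth samples within the radius whose depth is near the hand's depth.
// A forward-pointing hand presents a smaller profile, so fewer samples pass.
int count = 0;
for (int dy = -radius; dy <= radius; dy++)
    for (int dx = -radius; dx <= radius; dx++)
    {
        if (dx * dx + dy * dy > radius * radius) continue;
        int sample = depthMap[handY + dy, handX + dx];
        if (Math.Abs(sample - handDepth) < depthTolerance)
            count++;
    }
bool pointedForward = count < threshold;
```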

The set-up process is necessary to establish a linear function that determines this threshold based on the current distance of each hand. First the close position is set, where the threshold value is placed between the sample counts taken from the active and inactive positions. After the back position is set, these state thresholds, along with the average distances at which the samples were taken, are used to determine a linear function that calculates the threshold for each hand depending on its current depth.
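The resulting function is just a line through the two calibration points, roughly:

```csharp
// Calibration pairs recorded during set-up (placeholder names): the close and
// back positions give two (distance, threshold) points, which define a line.
float closeDistance, closeThreshold;
float backDistance, backThreshold;

// Threshold to use for a hand at the given depth.
float ThresholdAt(float handDepth)
{
    float slope = (backThreshold - closeThreshold)
                / (backDistance - closeDistance);
    return closeThreshold + slope * (handDepth - closeDistance);
}
```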

There is a lot of noise in both the depth and skeleton data from the Kinect sensor. To mitigate unwanted motion, the data taken from each hand is stored in a matrix of previous values, and the current control value is obtained by averaging the samples in these arrays. Though this smoothing is adjustable, more samples per matrix means more latency in the response time.
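A sketch of this smoothing, using a simple queue in place of the matrix described (sizes and names are placeholders):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// The control value is the average of the last n raw hand positions.
// A larger capacity is smoother but adds latency.
public class HandSmoother
{
    readonly Queue<Vector3> history = new Queue<Vector3>();
    readonly int capacity;

    public HandSmoother(int capacity) { this.capacity = capacity; }

    public Vector3 Push(Vector3 rawPosition)
    {
        history.Enqueue(rawPosition);
        if (history.Count > capacity) history.Dequeue();

        Vector3 sum = Vector3.Zero;
        foreach (Vector3 p in history) sum += p;
        return sum / history.Count;
    }
}
```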

The amount of noise from the Kinect sensor can still be problematic, as the control points occasionally jump, affecting not only the current motion of the camera but also the number of samples taken from the depth map to determine the state of each hand. For example, when in motion, the system sometimes takes additional samples that exceed the given threshold, which resets the offset point and abruptly stops the camera motion. Simple solutions to mitigate this false recognition of an inactive state are to increase the state threshold, or to store Boolean values of the current state in a sample array, where the state only changes if the array contains entirely identical (all true or all false) values.
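The second of these might look roughly like this:

```csharp
// Sketch of the Boolean debounce: the hand state only flips when the last
// few raw readings all agree.
bool[] stateSamples = new bool[5];
int stateIndex = 0;

bool UpdateState(bool rawActive, bool currentState)
{
    stateSamples[stateIndex] = rawActive;
    stateIndex = (stateIndex + 1) % stateSamples.Length;

    foreach (bool s in stateSamples)
        if (s != stateSamples[0]) return currentState;  // readings disagree
    return stateSamples[0];                             // unanimous: commit
}
```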


Evaluation

This system is likely to be the cornerstone of this project in the context of computer graphics development. I plan to write a technical paper focusing on user interaction with this system through a virtual environment.

I ran a sample user evaluation to identify problems with the current system, as well as to receive feedback on added functionality and ease of use. General feedback was positive, though I definitely need to work on ease of use, as well as implementing features to assist navigation and hand signals to reset the position of the camera if the user becomes lost.

For the tests, each user went through the set-up process and was then allowed to use the system freely in order to get comfortable with the controls. After they indicated they were ready, they would run through a sequence of objectives requiring them to move to various positions on a map, look at certain targets, and demonstrate control of the zoom functionality.

Though this system takes much more concentration and is far more challenging to use than a mouse/keyboard camera, I found users to be more engaged with the virtual environment, as there is no physical hardware between them and their actions. The camera motions are simple enough that users should eventually be able to use the system somewhat subconsciously, reaching a state of engagement where their thoughts are more directly translated into action.

Mouse/keyboard control is more intuitive and comfortable to use, and users can complete the tasks more easily and efficiently than with the Kinect system. Having such immediate control seems to let users explore environments much more quickly, leaving less time to absorb the information presented. The Organic Motion Camera system is more challenging and requires more careful concentration, but I believe it creates a unique user experience and engages the user more deeply with the given environment. Successful use of the Kinect system for control requires a more direct, willful suspension of disbelief, and thus engages the user more directly with the virtual environment. This is essentially a major goal of this project: to successfully integrate the motivating components of technical development, artistic interaction, and environmental/industrial education.

This assumption will likely be the focus of the system paper I will be writing. The next phase of testing will try to show that use of the Kinect system is more engaging, and that users absorb more information from the environment through its use.

Technical Development: Advanced Track Systems

I have been working for a few years to develop a simple and efficient track system that can be used to render and animate rail lines, roads, pipelines, power lines, and general object movement. All these systems contain vertices that describe a set of curves upon which object movement is interpolated.

Here is a video showing a basic implementation of the rail system: first where the line follows the terrain height, then where the line is smoothed, and finally where the terrain height is modified to prevent terrain from overlapping the rail line. The video also shows a sample of pipeline animation.





General Track Lines 

Generation

One of the goals of this system is an implementation where the current line can be connected to any other point. This is done most easily with segments of circles: given an endpoint and an angle, a smooth arc can be made to reach any other point.
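For illustration, the arc construction in the horizontal plane reduces to solving for a signed radius (a sketch with placeholder names, using XNA's Vector2):

```csharp
// The circle's centre lies on the normal to the start heading, at a signed
// radius r chosen so that the circle also passes through the target point.
Vector2 dir = new Vector2((float)Math.Cos(heading), (float)Math.Sin(heading));
Vector2 normal = new Vector2(-dir.Y, dir.X);
Vector2 d = start - target;

// Solve |start + r*normal - target| = |r| for r. If Dot(normal, d) is zero,
// the target lies straight ahead and a straight segment is used instead.
float r = -d.LengthSquared() / (2f * Vector2.Dot(normal, d));
Vector2 center = start + r * normal;
```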

One problem with this approach is that the end angle cannot be defined when connecting points. So for connecting new sections to objects or other arc segments, a more complicated cubic function would have to be used. Creating smooth circuits and connecting to bridge objects will be necessary for a full implementation, but for now linear segments work very well.


These arc segments exist only to generate vertex arrays for the roads and rails, as well as a set of nodes that object movement is based upon. The heights of these vertices are set to the terrain height, so the lines do not cut underground. Naturally, this leaves the lines very rough depending on the terrain, whereas in reality train lines and roads are smooth, with a maximum gradient. The track vertices therefore have a set amount they can change over a certain distance, effectively smoothing the line per system (roads can have more bumps and variation).

Lowering and raising vertices away from the terrain creates the need to modify the terrain itself, which is done most simply by setting nearby terrain vertices equal to the height of the track during a traverse. This works well, but the resulting flatness is pretty rough and obvious. I will probably try to allow a certain level of slope differentiation, most likely operating on triangles rather than squares, but it is hard to prevent terrain lines from cutting into the upper parts of the track or eroding from beneath.
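As a rough sketch, the gradient limit amounts to clamping each vertex's height against its neighbour's (a single forward walk is shown here; the actual smoothing may involve more passes):

```csharp
// Each track vertex's height may differ from the previous one by at most
// maxGradient * vertexSpacing (roads would allow a larger gradient than rails).
float maxStep = maxGradient * vertexSpacing;
for (int i = 1; i < heights.Length; i++)
    heights[i] = MathHelper.Clamp(heights[i],
        heights[i - 1] - maxStep, heights[i - 1] + maxStep);
```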


Movement

After the lines are generated, objects move based on a separate set of vertices that contain the position, length, and rates of change for direction and angles. Positions of objects are essentially interpolated between vertices: each object stores the index of the segment it is currently on, and its position within that segment [0-1]. The distance moved on a segment is determined by the length of the segment and the speed of the object, so movement is consistent across the entire system.
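In sketch form, the per-object update looks roughly like this (assumed fields: int index; float t; a segment list with Start, End, and Length; and a Position vector):

```csharp
// Dividing by segment length keeps speed consistent however segments vary.
public void Advance(float speed, float elapsedSeconds)
{
    t += speed * elapsedSeconds / segments[index].Length;
    while (t >= 1f && index < segments.Count - 1)
    {
        // Carry the leftover distance into the next segment's parameter space.
        t = (t - 1f) * segments[index].Length / segments[index + 1].Length;
        index++;
    }
    Position = Vector3.Lerp(segments[index].Start, segments[index].End, t);
}
```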

These systems are fully scalable and operate well in 3D. There are a few issues with angle changes jumping between 0 and 360 degrees, but these problems could definitely be smoothed out. There are also features such as grading, where objects and lines would tilt into curves, but this is beyond the current scope of the project.


Rail Lines

Train motion is currently based on moving two separate wheel truck objects, and then determining the mean rotation and position of the train body from these two objects. Each train car is moved in this way, and as long as each truck moves at the same speed, the train appears to move smoothly. Currently the different train cars are initially offset from the starting position by relative distances dependent on the length of the curve. Eventually I would also like to implement simple slack action between the train cars.
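One possible composition of this, with assumed conventions and placeholder names:

```csharp
// The car body sits at the midpoint of its two wheel trucks, with yaw and
// pitch taken from the vector between them.
Vector3 mid = (frontTruck.Position + rearTruck.Position) * 0.5f;
Vector3 d = frontTruck.Position - rearTruck.Position;
float yaw = (float)Math.Atan2(d.X, d.Z);           // heading in the XZ plane
float pitch = (float)Math.Asin(d.Y / d.Length());  // grade of the track
Matrix bodyWorld = Matrix.CreateRotationX(pitch)
                 * Matrix.CreateRotationY(yaw)
                 * Matrix.CreateTranslation(mid);
```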

Pipelines
This system is simpler, as the only animation that really takes place is pipeline generation, similar to that shown in the video. For this visual traversal, I added an alpha value to the vertices which is set by the current position on the pipeline. The system is quite simple but effective. I would also like to connect the lines to objects such as above-river crossings, which would require a separate implementation for similar animation.
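A sketch of the build-out effect (names are placeholders):

```csharp
// Each vertex's alpha compares its normalised position along the line to a
// moving "head" value in [0,1]; vertices beyond the head are invisible.
for (int i = 0; i < vertices.Length; i++)
{
    float along = i / (float)(vertices.Length - 1);
    vertices[i].Color.A = (byte)(along <= buildHead ? 255 : 0);
}
```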

Roads
Roads are similar to the rail lines: each object contains data for its current index on the road line as well as its current position on the line segment. I also implemented a system to fade the beginning and end of the road segments. Using an alpha value similar to the pipelines, the fade is also present in the movement data, where it can be multiplied by the speed of the current object to slow it down as it approaches the end of the segment.


These systems are all quite simple, but difficult to explain without showing the math and implementation in detail. I will try to write a comprehensive tutorial on these systems in the future.

Friday, 30 November 2012

Technical Development: Point Sprite Refinement, Basic Dirt Containers



This video shows the updated point sprite systems, as well as the basic dirt containers.


Heat Vent

The only problem with the previous implementation of this system was that the distortion became more noticeable with distance. By calculating the distance from the emitter to the camera, the approximate depth of these points is rendered to the depth layer, which is used to scale the final distortion when combined with the final scene. Now the distortion is controlled: it is more noticeable at close distances, and softens to no distortion farther away.
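The scaling itself can be as simple as a linear falloff (a sketch, with maxDistance as an assumed tuning value):

```csharp
// Distortion strength fades linearly to zero as the emitter recedes.
float dist = Vector3.Distance(emitterPosition, cameraPosition);
float distortionScale = MathHelper.Clamp(1f - dist / maxDistance, 0f, 1f);
```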


Flare System

By combining the simple point sprite Flare System directly with a Heat Vent System, much more realistic fire is produced. The heat system is modified so that particles initially expand and then contract in line with the fire particles, preserving detailed distortion on the smaller particles. If the heat point sprites only continued to expand, the smaller fire particles trailing from the flare would much more noticeably appear as circles.

These processes are also combined with a simple exhaust system. The final result from these three simple point sprite systems is quite convincing, and the effect is further enhanced by blooming the bright-colored pixels with the Light System.

Up close the detail is still lost, but the mid range and far distance views are quite successful. Combining simple systems for fast, realistic effects is a major goal of this exploration.


Dust System

The harsh lines where dust particles intersect the ground and objects are still quite noticeable and problematic. The Dust System is also rendered to the depth layer, where the blue channel contains a highly dynamic range from 0-1, and distortion data is stored in the red and yellow channels. This causes the final scene to soften these harsh depth lines, and also blurs the resulting color lines.

Despite these efforts, the harsh lines and artifacts from this system are still apparent, and further time will be needed to smooth out this rendering. So far the best solution is to prevent dust particles from intersecting with emitting objects as much as possible.



Conveyor System

I worked on enhancing the depth of this system by adding an additional row of vertices in the middle of the stream, raised by a given height value. There are noticeable artifacts at the end points of the stream, which seem like they will be difficult to fix. Other than that, the stream does appear more rounded, though a similar result might also be produced by interpolating between two normals. I am now using a generic bump map as a dynamic key for any ground or dirt texture. Using a bump map renders more realistic and dynamic dirt, and is highly adaptable.


Dirt Containers

For machine animation, I created a simple system to simulate dirt filling a container. The process is quite simple, consisting of a set of vertices that define the dirt, with minimum and maximum values for each vertex. The locations and texture coordinates of the vertices are then interpolated between the two depending on a fill value. The implementations in the video are very simple, and can definitely be enhanced.
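In sketch form, with placeholder arrays:

```csharp
// Every dirt vertex lerps between its empty (minimum) and full (maximum)
// state by the container's fill value in [0,1].
for (int i = 0; i < dirt.Length; i++)
{
    dirt[i].Position = Vector3.Lerp(minPosition[i], maxPosition[i], fill);
    dirt[i].TextureCoordinate = Vector2.Lerp(minUV[i], maxUV[i], fill);
}
```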

I will be working on creating a simulation where the excavator automatically digs and unloads dirt into a dump truck, and the dump truck in turn unloads its bucket onto a conveyor system. I will focus on linking the container, falling dirt, dust, and conveyor systems smoothly between these processes.

Tuesday, 20 November 2012

Artistic Development: Organic Camera and Enhanced Rendering

(For this phase of the project I will be posting artistic development sections, exploring different rendering styles and techniques for integrating organic input into final image creation.)


This quick video shows basic rendering stylization, as well as slightly enhanced use of the Organic Motion Camera.


Rendering:

For rendering enhancement, I have worked on basic anti-aliasing techniques, detail exaggeration, and a simple depth of field function.

Depth Key
For the depth function implementation, I quickly render the terrain and main objects to a separate layer, where the distance of each pixel from the camera is stored in the blue channel. Though it is slower to re-render to a separate layer, rendering a separate key image also leaves more channels free for other effects, such as heat distortion.

From the depth image, I generate a line map with a simple Laplacian filter. This image contains the edges around objects and how much they differ from the background. Once the final scene and the depth layers have been rendered, I filter the main image through a Gaussian blurring filter, with the final depth image and the depth line image as keys. The depth lines are used for primary edge softening, while the general depth data is used to implement a basic depth of field function.
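For reference, a CPU sketch of the Laplacian (in practice this kind of filter would run in a pixel shader):

```csharp
// The filter responds where a pixel's depth differs from its four
// neighbours, i.e. along object edges.
float lap = 4f * depth[x, y]
          - depth[x - 1, y] - depth[x + 1, y]
          - depth[x, y - 1] - depth[x, y + 1];
lineMap[x, y] = Math.Abs(lap);
```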

For the depth of field, a variable sets a threshold beyond which the level of blurring increases. This implementation only simulates blurring toward the far plane, and does not quite represent realistic camera focus. The effect could be further enhanced with multiple threshold values defining the near and far planes and how quickly they fade, simulating true depth of field and focusing operations.
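In sketch form, the ramp is just:

```csharp
// No blur until the focus threshold, then a linear ramp to full blur at
// the far plane (names are placeholders).
float blurAmount = MathHelper.Clamp(
    (pixelDepth - focusThreshold) / (farPlane - focusThreshold), 0f, 1f);
```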


Color Key
After all scene components have been rendered, I generate a basic line map of the image with a simple Laplacian filter. This map can be used simply as a key to blur harsh edges, or to sharpen image detail.

For sharpening lines on closer objects, I first process the color line map with a simple Gaussian blur filter to soften and enlarge the lines. The data from this image is then subtracted from the scene image to darken the extracted color lines. This function is scaled by the depth data, so that only close objects show enhanced detail.

I also use this line map to implement simple motion blur, similar to the technique used for the light map.  Once the initial line map is rendered, I render the previous line map with scaled intensity, and then pass the combined image through the Gaussian filter.  Depending on the intensity of the previous image, this motion blur can produce a soft and natural effect, or a more intense and abstracted image.





Organic Motion Camera:

One major goal of this project, rooted in artistic representation, is to work on additional ways to translate organic motion from a Kinect sensor into the presentation of the virtual scene. The most obvious way to alter how a viewer looks at a scene is to change the camera.


The Organic Camera is meant to simulate the effect of "free hand" filming, rather than mechanical, linear, or steady shots. For the Organic Motion Camera, one hand controls the 3D position of the camera, while the other controls panning and tilting, as well as zooming. Both hands can be used, or only one at a time to simulate tripod or dolly shots.

If you would like to know more about this concept, this video is a pretty decent example of how I film: http://www.youtube.com/watch?v=Ahn7a8qeAYg&feature=plcp (Don't mind the tone or content; this video is pretty personal, but definitely a relevant basis for the "industrial" side of this project...) Though it is pretty rough and disorienting at times, I think this style conveys more emotion and depth, and can be much more interesting and effective at presenting information if done well.


The 3D points for the camera control are taken directly from the generated skeleton data. The Kinect depth and skeletal data both contain a lot of noise, and produce quite a lot of shaking and strange movement. I have implemented a simple system to average the last n points from each hand, and if needed, I'll work on other ways of smoothing out this data while still allowing for quick control.


General Organic Input:

The section of the video where the color and contrast are shifting shows further use of organic input. For this demo, one hand simply controls the scene brightness, contrast, and saturation. Dynamic variable control with organic input can create some pleasing results, and can push the final presentation of the image forward in interesting ways.

Additionally, the motion of the excavator in the video is also produced by organic input.  It is easy to see how effective this integration can be for animation control.

To implement these techniques in addition to the Organic Motion Camera, scenes would have to be rendered in layers, with object and camera movements stored and replayed. Since the camera system relies on just two 3D vectors for control, it should be easy to read motion from an array of 3D point transformations. While the entire scene replays, the user could then control the final coloring of the scene.

Integrating and layering multiple organic input systems will be explored in future development, as well as refining the Organic Camera System.