Tuesday 18 December 2012

Composition: Initial Tarsands Model

I worked on composing the various systems I have developed so far to create a virtual working model of a basic Tarsands Mining Operation.  This video explores this environment and was captured while using the organic motion camera.



This scene was created mainly for purposes of user evaluation, to test the integration of these various systems, as well as the limits of complicated composition.  This is an initial implementation that is still incomplete and quite rough-  I will work on smoothing out machine animation, as well as routing additional dump trucks and excavators with more dynamic implementation.  There are also still some problems with the track systems, as well as some issues with intersecting steam systems.  I feel that the dust, dirt and container systems could also be improved, smoothing the transitions between systems as well as their general complexity.

Creating such a model was challenging, but as I continue to adapt my land map editor as well as these systems, such a task will become increasingly manageable.  Though I feel this project is successful in displaying these systems as well as creating an engaging environment, it is somewhat limited by the detail of my 3D models. 

Obviously I cannot hope to create a technically perfect model of the tarsands, nor would I wish to attempt such a task  (there is no way one could possibly represent the scale of the Alberta Tarsands).  Such a model is intended as a more conceptual representation.  I plan to first use this or a similar model for animation projects on tar sands and pipeline expansion...

Creation of this model marks the end of this development cycle.  I feel satisfied with these results, and I would be content to end this project at this benchmark.  Though having evaluated this process, and realized the levels of public awareness of industrial development, I still feel very motivated to continue development on this project.  As I feel I am nearing my limits of technical representation, in the coming months I will largely focus on more artistic development. 

Kinect System Development

As a component of this project, I have worked on developing an easy-to-use system to allow users to control various processes within the Industrial Abstraction System.  As previously mentioned, the motivating goal for this development is to integrate organic input with procedural systems, in order to make environments more interactive, and to create animations with the potential to present more depth and emotion.  The Organic Motion Camera is the most direct way to achieve these goals, and is a major focus of this stage in project development.

Here is a video showing the use of the organic motion camera system.  Initially the system must run through a simple set-up procedure in order to correctly interpret data from the current user.  This system still needs improvement, but it is coming along with some pleasing results.





General Function

The system uses built in skeletal tracking to process hand positions for input control.  Each hand has an offset point which the user can modify.  The difference between the processed hand position and its respective offset point is the control data that can be most effectively used for system control.  With the camera system, these offset points can be interpreted as rate of change values, where a constant distance away from the offset will continue to move the camera at a respective constant speed.

Initially these offset points were predetermined, which required the user to stand at a specific distance with hands resting at certain points.  This created many problems, as it was difficult to completely stop camera movement, and hand movement would often intersect, and thus interfere with the data taken from the skeleton. 

To solve these problems, I implemented a process in which each control point is only active when the user points their hand forward, and inactive when their hand points upwards.  When a hand is in the inactive position, the offset point is set to the current position of the hand, so the control point will be the zero vector.  When a hand is in the active position, the offset remains at the last inactive point.  Additionally, if a user's hand motions intersect, they can simply reset the offset points to avoid collision. 

Both hands can operate independently, so a user can choose only to pan or move instead of both at once.  It is also easy to stop movement, as the user needs only to put their hands up, in a natural stopping pose.
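The offset-point behaviour described above can be sketched as a small per-hand state update (a sketch only; the `HandControl` class and its names are my own, not the actual implementation):

```python
import numpy as np

class HandControl:
    """One hand's offset-point control: the control vector is the
    current hand position minus the offset point."""

    def __init__(self):
        self.offset = np.zeros(3)

    def update(self, hand_pos, is_active):
        """Return the control vector for this frame.

        While the hand points up (inactive), the offset tracks the hand,
        so the control vector stays at zero; while the hand points
        forward (active), the offset stays frozen at its last inactive
        position, and the control vector grows with hand movement."""
        hand_pos = np.asarray(hand_pos, dtype=float)
        if not is_active:
            self.offset = hand_pos.copy()
        return hand_pos - self.offset
```

Interpreted as a rate of change, the camera would then be advanced by something like `camera_pos += gain * control * dt` each frame, so holding a constant distance from the offset gives a constant speed.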


The RGB video display is used as a reference guide for the user, where red circles indicate that the hand is in the inactive position, white circles are the active offsets, and the green and blue circles are the active hand positions.  This video, along with the control sliders, is normally contained in a separate frame, in order to keep a clean program display for animation capturing.


Data Processing

The technique used for gesture recognition is to count the samples from the depth map which are within a specified radius of the associated hand points.  These values are then used to determine whether the hand is pointed forward (fewer samples) or pointed up (more samples) based on a determined threshold. 
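The counting step might look something like the following (a sketch under my own assumptions: a 2D depth image, a pixel-space radius, and a depth tolerance for deciding which samples belong to the hand):

```python
import numpy as np

def count_near_samples(depth_map, hand_px, hand_depth, radius_px, depth_tol):
    """Count depth-map samples in a square window around the hand point
    that lie near the hand's depth.  A hand pointing forward presents
    less surface to the sensor, so fewer samples fall within tolerance."""
    h, w = depth_map.shape
    cx, cy = hand_px
    x0, x1 = max(0, cx - radius_px), min(w, cx + radius_px + 1)
    y0, y1 = max(0, cy - radius_px), min(h, cy + radius_px + 1)
    window = depth_map[y0:y1, x0:x1]
    return int(np.count_nonzero(np.abs(window - hand_depth) < depth_tol))

def hand_is_forward(sample_count, threshold):
    # Fewer samples than the threshold -> hand pointed forward (active).
    return sample_count < threshold
```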

The set-up process is necessary to establish a linear function that determines this threshold based on the current distance of each hand.  First the close position is set, where the threshold value is placed between the sample counts taken from the active and inactive positions.  After the back position is set, these state thresholds, along with the average distances taken, are used to determine a linear function that calculates the threshold for each hand depending on its current depth.
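Fitting that linear function from the two calibration poses is straightforward (a sketch; in practice the thresholds and depths would be averages over many set-up samples, as described above):

```python
def fit_threshold_line(near_depth, near_threshold, far_depth, far_threshold):
    """Fit threshold(depth) = slope * depth + intercept through the two
    calibration points taken at the close and back positions."""
    slope = (far_threshold - near_threshold) / (far_depth - near_depth)
    intercept = near_threshold - slope * near_depth
    return slope, intercept

def threshold_at(depth, slope, intercept):
    # Evaluate the calibrated threshold at the hand's current depth.
    return slope * depth + intercept
```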

There is a lot of noise from the Kinect sensor in both the depth and skeleton data.  To mitigate unwanted motion, the data taken from each hand is stored in a matrix of previous values, and the current control value is obtained by averaging the samples in these arrays.  Though this smoothing is adjustable, more samples per matrix generate more latency in response time.

The amount of noise from the Kinect sensor can still be problematic, as the control points occasionally jump, affecting not only the current motion of the camera, but also the current number of samples taken from the depth map to determine the state of each hand.  For example, when in motion, sometimes the system takes additional samples which exceed the given threshold, which resets the offset point and abruptly stops the camera motion.  Simple solutions to mitigate this false recognition of an inactive state are to increase the state threshold, or to store Boolean values of the current state in a sample array, where the state will only change if the array contains the same (entirely true or false) values. 
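The Boolean-array idea is essentially a debounce; a minimal sketch (names are assumptions) could look like:

```python
from collections import deque

class DebouncedState:
    """Only flip the reported state once the last n raw readings agree,
    so a single noisy frame cannot reset the offset point."""

    def __init__(self, n=5, initial=False):
        self.history = deque([initial] * n, maxlen=n)
        self.state = initial

    def update(self, raw):
        self.history.append(raw)
        # Commit the new state only when every stored reading matches.
        if all(v == self.history[0] for v in self.history):
            self.state = self.history[0]
        return self.state
```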


Evaluation

This system is likely to be the cornerstone of this project in the context of Computer Graphics development.  I plan to write a technical paper focusing on user interaction with this system through a virtual environment.

I ran a sample user evaluation to determine problems with the current system, as well as to receive feedback on added functionality and ease of use.  General feedback was positive, though I definitely need to work on ease of use, implementing features to assist navigation as well as hand signals to reset the position of the camera if the user becomes lost.

For the tests, each user went through the set-up process, and then was allowed to use the system freely in order to feel comfortable with the controls.  After they indicated they were ready, they would then run through a sequence of objectives requiring them to move to various positions on a map, look at certain targets, as well as demonstrate control of the zoom in and out functionality.

Though this system takes much more concentration and is far more challenging to use than a mouse/keyboard camera, I found users to be more engaged with the virtual environment, as there is no physical hardware interaction between them and their actions.  The camera motions are simple enough that users should eventually be able to use the system somewhat subconsciously, being in a mindset of engagement where their thoughts are more directly translated into action.

Mouse/keyboard control is more intuitive and comfortable to use, and users would be able to complete the tasks more easily and efficiently than with the Kinect system.  Having such immediate control seems to give users the ability to explore environments much more quickly, with less time to absorb the present information.  The Organic Motion Camera system is more challenging and requires more careful concentration, but I believe its function creates a unique user experience, as well as engaging the user more deeply with the given environment.  I believe that successful use of the Kinect system for control requires more direct willful suspension of disbelief, and thus engages the user more directly with the given virtual environment.   This is essentially a major goal of this project: successfully integrating the motivating components of technical development, artistic interaction, and environmental/industrial education.

This assumption will likely be the focus of the system paper that I will be writing-  The next phase of testing will try to show that use of the Kinect system is more engaging, and that users absorb more information from the current environment through its use. 

Technical Development: Advanced Track Systems

I have been working for a few years to develop a simple and efficient track system that can be used to render and animate rail lines, roads, pipelines, power lines, and general object movement.  All these systems contain vertices that describe a set of curves upon which object movement is interpolated.

Here is a video showing a basic implementation of the rail system, first where the line follows terrain height, then where the line is smoothed, and finally where the terrain height is modified to prevent terrain from overlapping the rail line.  The video also shows a sample for pipeline animation-





General Track Lines 

Generation

One of the goals of this system is to have an implementation where you can connect the current line to any other point-  This is done most easily with segments of circles: given an endpoint and an angle, a smooth arc can connect to any other point.

One problem with this approach is that when connecting points, the end angle cannot be defined.  So for connecting new sections to objects or other arc segments, a more complicated cubic function would have to be used.  Creating smooth circuits and connecting to bridge objects will be necessary for a full implementation, but for now linear segments work very well. 
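The arc construction itself reduces to finding the circle tangent to the current heading at the endpoint that also passes through the target (a 2D sketch of one way to do it; function and parameter names are my own):

```python
import math

def arc_to_point(px, py, heading, qx, qy):
    """Circle whose arc leaves (px, py) tangent to `heading` (radians)
    and passes through (qx, qy).  The centre lies along the normal to
    the heading; solving |centre - target| = |r| gives the radius.
    Returns (centre, signed radius), or None when the target lies
    straight ahead and a line segment should be used instead."""
    dx, dy = px - qx, py - qy
    # Unit normal to the heading direction.
    nx, ny = -math.sin(heading), math.cos(heading)
    denom = 2.0 * (dx * nx + dy * ny)
    if abs(denom) < 1e-12:
        return None  # degenerate: target is on the heading line
    r = -(dx * dx + dy * dy) / denom
    return (px + r * nx, py + r * ny), r
```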


These arc segments exist only to generate vertex arrays for the roads and rails, as well as a set of nodes that object movement is based upon.  The height of these vertices is set to the terrain height, so the lines do not cut underground. Naturally, these lines are very rough depending on the terrain, where in reality, train lines and roads are smooth with a maximum gradient.  The track vertices therefore have a set amount they can change over a certain distance, effectively smoothing the line based on the system (roads can have more bumps and variation).  Lowering and raising vertices from the terrain creates the need to modify the terrain itself, which is done most simply by setting nearby vertices to equal the height of the track during a traverse.  This works well, but the resulting flatness is pretty rough and obvious.  I will probably try to allow a certain level of slope differentiation, most likely operating on triangles rather than squares, but it is hard to prevent terrain lines from cutting into the upper parts of the track or eroding from beneath. 
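One workable form of the gradient limit (an assumption about the approach, not the exact implementation) is a pair of passes that cut any climb steeper than the maximum grade:

```python
def limit_gradient(heights, seg_lengths, max_grade):
    """Cut peaks so no segment climbs faster than max_grade per unit of
    length; forward then backward passes handle both travel directions.
    (Filling valleys would need a symmetric raising step.)"""
    h = list(heights)
    for i in range(1, len(h)):
        h[i] = min(h[i], h[i - 1] + max_grade * seg_lengths[i - 1])
    for i in range(len(h) - 2, -1, -1):
        h[i] = min(h[i], h[i + 1] + max_grade * seg_lengths[i])
    return h
```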


Movement

After the lines are generated, objects move based on a separate set of vertices that contain the position, length, and rates of change for direction and angles.  Positions of objects are essentially interpolated between vertices, as each object contains data for the index it is currently on, and its position in the segment [0-1].  The distance moved on a segment is determined by the length of the segment and the speed of the object, so movement is consistent across the entire system.
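That index-plus-parameter scheme can be sketched as follows (a sketch; segment lengths in world units, with the distance carried over into following segments so speed stays consistent regardless of segment length):

```python
def advance(index, t, speed, dt, seg_lengths):
    """Move an object a distance of speed * dt along the line.
    `index` is the current segment, `t` its position in [0, 1]."""
    remaining = speed * dt
    while remaining > 0 and index < len(seg_lengths):
        seg = seg_lengths[index]
        dist_left = (1.0 - t) * seg
        if remaining < dist_left:
            t += remaining / seg   # stay within this segment
            remaining = 0.0
        else:
            remaining -= dist_left  # spill into the next segment
            index += 1
            t = 0.0
    return index, t
```

The rendered position is then just a linear interpolation between the vertices at `index` and `index + 1` using `t`.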

These systems are fully scalable, and operate well in 3D.  Though there are a few issues with angle changes jumping between 0 and 360 degrees, these problems could definitely be smoothed out.  There are also a few features such as grading, where objects and lines would tilt into curves, but this is beyond the current scope of this project. 


Rail Lines

Train motion is currently based on moving two separate wheel-truck objects, and then determining the mean rotation and position of the train body from these two objects.  Each train car is moved in this way, and as long as each truck moves at the same speed, the train appears to move smoothly. Currently the different train cars are initially offset from the starting position by relative distances dependent on the length of the curve.  Eventually I would also like to implement simple slack action between the train cars.
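In 2D, deriving the body transform from the two trucks amounts to something like this (a sketch; one simple reading of "mean rotation" is the heading from rear truck to front truck):

```python
import math

def body_from_trucks(front, rear):
    """Place the car body at the midpoint of its two wheel trucks,
    facing from the rear truck toward the front one."""
    (fx, fy), (rx, ry) = front, rear
    cx, cy = (fx + rx) / 2.0, (fy + ry) / 2.0
    heading = math.atan2(fy - ry, fx - rx)
    return (cx, cy), heading
```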

Pipelines
This system is simpler, as the only animation that will really take place is pipeline generation similar to that shown in the video.  For this visual traversal, I added an alpha value to the vertices which is set by the current position on the pipeline.   This system is quite simple but effective.  I would also like to connect the lines to objects such as above-river crossings, which would require separate implementation for similar animation.

Roads
Roads are similar to the rail lines, where each object contains data for its current index on the road line as well as its current position on the line segment.  I also implemented a system to fade the beginning and end of the road segments.  Using an alpha value similar to the pipelines, the fade is also present in the movement data, where it can be multiplied by the speed of the current object to slow it down as it approaches the end of the segment.
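One possible shape for that fade (a sketch; the fade fraction is an assumed parameter) is a ramp over the first and last portions of the line:

```python
def edge_fade(t, fade_frac=0.1):
    """Alpha ramping 0 -> 1 over the first fade_frac of the line and
    back down over the last fade_frac, for a position t in [0, 1].
    The same value can scale vertex alpha and object speed, so
    vehicles ease out as they approach the ends of the segment."""
    if t < fade_frac:
        return t / fade_frac
    if t > 1.0 - fade_frac:
        return (1.0 - t) / fade_frac
    return 1.0
```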


These systems are all quite simple, but difficult to explain without showing the math and implementation in detail-  I will try to write a comprehensive tutorial on these systems in the future.