**Overview**

A car is a nonholonomic system, meaning that its motion constraints cannot be integrated into constraints on the configuration alone. Nonholonomy implies that paths between states are curves rather than straight lines, so the system must pass through intermediate states in order to reach a desired state. To correct error, the system must also follow a curved trajectory; because of nonholonomy, simply summing system inputs does not rectify the error.

A car's degrees of freedom are constrained by the rolling motion of the wheels, and a car has a limited range of steering angles imposed by the mechanical linkages that make up the steering system. A typical car has a steering range of roughly +/- 30 degrees. Additionally, a car cannot move unless a speed input drives it forward or backward. The turning radius of the car, *i.e.* the radius of the circle the rear axle rotates around, is subject to the speed the car carries through the turn: a car moving slowly through a given steering angle will turn more sharply than the same car moving through the same steering angle at high speed. This dependency of curvature on speed, on top of the rolling constraints, is characteristic of the car's nonholonomic behavior.

This project implemented dynamic control of a simple car following a prescribed trajectory. Modeling the system required kinematic and dynamic models, planning and control systems, and a dynamical simulation. The implementation was produced in the Gazebo simulator [1] using the Open Dynamics Engine (ODE) [3] to model dynamical behavior.

**Kinematic Model**

The car kinematic model is based on the simple car model [2]. For the sake of simplicity, the world in which this car operates is treated as a flat plane. In this case, the car's state can be described by (q = (x,y,\theta)). The center of the rear axle (r) acts as the center of the car, with the (x)-axis pointing along the longitudinal axis of the car (forward). Let (s) denote the signed speed of the car and let (\phi) denote the steering angle. Let the wheel base (L) denote the distance from the front axle (f) to (r). Let (u = (u_s, u_\phi)) define a command for the car. Given the above, the change in state can be defined as:

(\dot{x} = u_s \cos\theta), (\dot{y} = u_s \sin\theta), (\dot{\theta} = \frac{u_s}{L} \tan u_\phi)

**Dynamic Model**

The center of mass is located at the origin of the car local frame. The forward and backward directions are along the car local frame's negative and positive (x)-axis respectively and the left and right directions are along the car local frame's positive and negative (y)-axis respectively.

Each wheel is a cylinder attached to a bearing joint at the wheel's center that rotates around the local frame's (x)-axis. The rear bearings attach to the car body allowing the rear wheels to roll freely. The front bearings attach to spindle links that attach to kingpin joints. The kingpins attach the front wheel assembly to the car body and rotate around the car frame's (z)-axis. The front wheel assemblies allow the wheels to roll freely and to steer by applying torque to the kingpins.

The steering linkage is a virtual double four-bar design, which allows the innermost wheel to travel a shorter arc than the outer wheel and helps to minimize wheel sliding and skidding when steering. The steering linkage is virtual in that the geometric constraints are applied when a steering angle is provided, but the linkage is not physically modeled in the car geometry. The steering box joint and steering link allow steering angles to be maintained and dynamically controlled using a component subject to dynamic forces. Steering angles are computed and input into the steering box, and the state of the steering box (steering angle) is transmitted to the kingpins via the steering linkage.
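For illustration, the distinct inner and outer wheel angles the linkage must produce can be approximated with the ideal Ackermann geometry. The sketch below is an assumption of how such a computation might look and is not the project's actual double four-bar code; all parameter values are hypothetical.

```python
import math

def ackermann_angles(phi, wheelbase, track):
    """Inner/outer wheel angles for a commanded steering angle phi (radians).

    Approximates the double four-bar linkage with ideal Ackermann geometry:
    both front wheels aim at the turning center of the rear axle.
    """
    if abs(phi) < 1e-9:
        return 0.0, 0.0
    # Radius of the circle the rear-axle center follows.
    r = wheelbase / math.tan(abs(phi))
    inner = math.atan2(wheelbase, r - track / 2)
    outer = math.atan2(wheelbase, r + track / 2)
    if phi < 0:
        inner, outer = -inner, -outer
    return inner, outer
```

The inner wheel always receives a larger angle than the commanded angle and the outer a smaller one, which is what lets each wheel roll along its own arc without sliding.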

The center of the rear axle is modeled with a differential joint and driveshaft link. Due to the kinematic model, the position of the car is not measured relative to the center of mass (CoM) but is instead measured relative to the center of the rear axle (CoA). The state of the car is therefore a function of the center of axle, where the position of the car is the position of the rear differential/driveshaft in the world frame and the orientation of the car is the direction of the normalized vector drawn from the rear differential/driveshaft to the steering box/link in the world frame.

**Planning**

A plan for the car to follow is computed from a Lissajous curve using parameters that generate a figure eight ((a=1), (b=2), (\delta=\frac{\pi}{2})). The Lissajous function is used to compute the state of the car at discrete steps along the curve. From a given state and a previous state, a command to move from the previous state to the given state is computed. Each planned state and planned command is then added to a list for use by the controller. Computing the command requires inverting the differential equations of the kinematic model, which requires a minimization.
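A minimal sketch of this planning step, assuming a unit-amplitude figure eight and substituting simple finite differences for the minimization described above (the function and parameter names are illustrative, not the project's API):

```python
import math

def lissajous_plan(steps=64, dt=1.0, a=1, b=2, delta=math.pi / 2, L=1.0):
    """Sample a figure-eight Lissajous curve and derive (state, command) pairs."""
    states, commands = [], []
    for i in range(steps + 1):
        t = 2 * math.pi * i / steps
        x = math.sin(a * t + delta)
        y = math.sin(b * t)
        # Heading from the curve tangent.
        dx = a * math.cos(a * t + delta)
        dy = b * math.cos(b * t)
        states.append((x, y, math.atan2(dy, dx)))
    for (x0, y0, th0), (x1, y1, th1) in zip(states, states[1:]):
        # Invert the kinematic model by finite differences:
        # u_s ~ distance / dt, and theta-dot = (u_s / L) tan(u_phi).
        u_s = math.hypot(x1 - x0, y1 - y0) / dt
        dth = math.atan2(math.sin(th1 - th0), math.cos(th1 - th0)) / dt
        u_phi = math.atan2(dth * L, u_s) if u_s > 1e-9 else 0.0
        commands.append((u_s, u_phi))
    return states, commands
```

Note this sketch collapses the signed speed to forward-only motion; the project's minimization-based inversion would also recover reversing commands.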

**Control**

Although the plan is, in a sense, perfect for the kinematic model, running it in the dynamical simulator produces side effects, e.g. oversteer, understeer, skidding, and control lag, which ensure that the car cannot follow the plan exactly as prescribed simply by replaying the planned commands. These effects cause the car to drift off the plan or respond late to control input; the car can still reasonably track the plan, but error accumulates substantially, and the car cannot reach the end of the plan in the prescribed time and state. The planned commands are therefore handled as feed-forward commands and are modified by dynamically computed feed-back commands.

Feed-back commands are generated by computing the state error between the current state of the car and a linear interpolation of the planned states. The state error is fed into the kinematic differential equations to compute a command. The feed-back command is then scaled and capped to minimize extreme conditions, such that the feed-back command continually adjusts the system but no single command can make the system unstable.

The feed-back command is added to the feed-forward command to generate the current command. This combination constantly readjusts the system behavior. Although the current command will not fully realize the planned state, the constant input of the feed-back command allows the system to reasonably approximate the planned behavior.
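In sketch form, for a single scalar command channel (the gain and cap values here are illustrative, not the project's tuning):

```python
def combine(ff, fb, gain=0.5, cap=0.2):
    """Scale the feed-back term, cap its magnitude, and add it to feed-forward."""
    fb = gain * fb                  # scale down the raw feed-back command
    fb = max(-cap, min(cap, fb))    # cap so no single correction destabilizes
    return ff + fb
```

The cap is what keeps a large initial state error (e.g. starting from rest) from producing a destabilizing spike, while the scaled term still nudges the car back toward the plan every cycle.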

The current command is fed into the controller, which is composed of a speed controller and a steering controller. The speed controller is implemented as a proportional-only controller and the steering controller is implemented as a proportional-derivative (PD) controller. Speed is effected by applying a force directly to the CoM of the car, like a phantom hand pushing the car. Note that differential drive was considered but determined to be out of scope for this project. Steering is effected by PD control on the steering box; the state of the steering box is fed through the double four-bar steering computation, which sets the angle of each respective kingpin so that the correlating angle is applied to each steered wheel.
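The two controllers can be sketched as follows (gains are illustrative, not the values used in the project):

```python
class PControl:
    """Proportional-only controller, as used for speed."""
    def __init__(self, kp):
        self.kp = kp

    def update(self, error, dt):
        return self.kp * error

class PDControl:
    """Proportional-derivative controller, as used for the steering box."""
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def update(self, error, dt):
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative
```

The derivative term damps the steering response, which matters because the steering box drives a linkage that must not be slammed against its joint limits.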

**Results**

The experiment was run in Gazebo 1.8.6 and demonstrates that the kinematic, planning, control, and dynamic models can be combined to produce the desired trajectory and behavior. The combination of feed-forward and feed-back control enables the car to closely approximate the planned trajectory in the expected time. The Lissajous function was scaled such that the car maintains a moderate speed, so that it does not skid in corners and also does not stop in corners, which can occur when the steering angle is high and the speed is low. The overall period of the trajectory is approximately 64 seconds.

Figure[State] shows that the car follows the plan with reasonable accuracy. The O in the figure indicates the start position and the X indicates the end position; the overlap between the start and end positions shows that the car ends its route in close proximity to the expected position.

Figure[Planned (P) and Actual (A) Commands] indicates the difference between the planned commands, *i.e.* the feed-forward commands, and the current commands. The control lag is evident from the elongation in wavelength of the actual commands and shows that the car completes the trajectory more slowly than planned. Additionally, while the speed is a fairly smooth curve, the state error results in substantial steering correction.

Figure[Speed Feed-Forward, Feed-Back, and Combined Commands] and Figure[Steering Angle Feed-Forward, Feed-Back, and Combined Commands] give a more detailed view of the summation of commands. In the speed plot, the feed-back control peaks quickly to account for starting from rest and then settles into a smooth curve that has maximum peaks at the exits of turns to assist in acceleration and steadily diminishes along the straightaways into minimum troughs when decelerating into the turns. The cap on feed-back speed keeps the system from generating a large spike due to initial error and has no further impact afterward, so the car does not immediately become unstable due to speed. In the steering plot, the feed-back oscillates quickly between left and right extremes, which is evident in the animation of the car. The cap on feed-back steering keeps the steering from generating extreme inputs that would cause the steering linkage to exceed its constraints or cause singularities.

Figure[Feedback Error] shows the state error determined in the feed-back command calculation. The error spikes at the start due to starting from rest but quickly settles into oscillating about zero error. The final error state shows that the car completes the trajectory with relatively little error.

**References**

[1] Open Source Robotics Foundation. Gazebo simulator, 2013. http://www.gazebosim.org/

[2] Steven M. LaValle. Planning Algorithms. Cambridge University Press, 2006.

[3] Russell Smith. Open dynamics engine, 2007. http://www.ode.org/

This problem was a challenging development. Reynolds' paper is fairly easy to digest, but takes close reading to get a deep understanding of the proposed approach. I made several aborted developments in the process because of that typical overwhelming desire to code first without a true appreciation of the model proposed. In some respects, the earlier work was a better approach in creating smooth trajectories and consistency in changes in orientation; however, the complexity was overwhelming. Below is a video of the earlier approach with only the motivator behavior handled.

While the first video is a final product, an aesthetically pleasing and more realistic animation would combine the techniques explored in the aborted development with the successful implementation of the full model proposed by Reynolds.

References:

Reynolds, C.W., "Flocks, Herds, and Schools: A Distributed Behavioral Model", Computer Graphics (SIGGRAPH '87), 1987

Below is a demonstration of 10 balls dropped into a box.

Below is a demonstration of the initial unit test on linear dynamics:

Below is a demonstration of the initial unit test on rotational dynamics:

Below are subsequent unit tests of other aspects of the physical simulator:

References:

Baraff, D., "An Introduction to Physically Based Modeling: Rigid Body Simulation I - Unconstrained Rigid Body Dynamics"

Baraff, D., "An Introduction to Physically Based Modeling: Rigid Body Simulation II - Nonpenetration Constraints"

Ericson, C., "Real Time Collision Detection", Morgan Kaufmann, 2005

This first video demonstrates the final version of development and shows the running animation without any trajectory visualizations.

The following video demonstrates an interim unit test development of the humanoid hierarchy walking animation, *e.g.* straight legged, and includes visualizations of the body trajectory and the joint trajectories.

The following video demonstrates another interim unit test development of the bipedal hierarchy walking animation which proved the cumulative transformations between root link and constituent outboard links.

The following video demonstrates another interim unit test development of the bipedal hierarchy walking animation which proved the basic trajectories to be applied to both shoulders and hips.

The following video demonstrates an early unit test development of the pendulum trajectory which proved the sinusoidal trajectory for shoulders and hips as well as the reversing of trajectories thereby allowing infinite reversible animations.

The final video demonstrates an early unit test development of the cyclic trajectory which proved the closing of trajectories thereby allowing infinite forward animations.

The following videos are demonstrations of each of these requirements. The trajectory is that of a Double Immelmann flight maneuver. In each video and in the successive slides, the yellow arc traces the B-Spline trajectory and the red arc traces the Catmull-Rom trajectory.

The following video is a demonstration of the biplane actor following a Uniform Non-Rational B-Spline trajectory with rotations handled by Euler Angles:

This video is a demonstration of the biplane actor following a Catmull-Rom spline trajectory with rotations handled by Euler Angles:

This video is a demonstration of the biplane actor following a Uniform Non-Rational B-Spline trajectory with rotations handled by Quaternions:

This video is a demonstration of the biplane actor following a Catmull-Rom spline trajectory with rotations handled by Quaternions:

It is virtually impossible to distinguish between the Euler angle rotations and the quaternion rotations. In fact, YouTube kept rejecting the second video of each pair because the checksums reported the video as a duplicate.

In the process of developing the Immelmann trajectory, and for general debugging, several more trajectories were implemented and tested.

The first test was to simply draw a straight line composed of only four control points and to verify that the linearly interpolated arclength matched the expected overall arclength. In both cases, the arclength matched the expected and the line was drawn properly. In the below render, only the Catmull-Rom is visible because the B-Spline was drawn first and overdrawn by Catmull-Rom.

The second test was to evaluate a circle curve, where the arclength over the circle should be the circumference, 2πr. In this case, Catmull-Rom approximated the circle better than B-Spline: the arclength of Catmull-Rom was within 1% relative error of the expected value while B-Spline was closer to 13% relative error. In the visualization, it is clear that even though Catmull-Rom is the better approximation, it still has continuity problems at the ends and doesn't fully approximate the circle.
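The linearly interpolated arclength measurement used in these tests can be sketched as follows; the check below assumes a densely sampled unit circle, whose arclength should approach 2π:

```python
import math

def arclength(points):
    """Linearly interpolated arclength: sum of segment lengths between samples."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# A unit circle sampled at 256 segments; the chord sum should be near 2*pi.
pts = [(math.cos(2 * math.pi * i / 256), math.sin(2 * math.pi * i / 256))
       for i in range(257)]
```

The relative error of the chord approximation shrinks quadratically with the number of samples, which is why the linearly interpolated measurement is a fair yardstick for comparing the two splines.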

This example demonstrates a fundamental pattern that will be consistently demonstrated in all following tests: Catmull-Rom is a better approximation of the curve than Uniform Non-Rational B-Spline; however, Catmull-Rom can demonstrate continuity problems at control points because it is not as relaxed as Uniform Non-Rational B-Spline.
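For reference, a Catmull-Rom segment between control points p1 and p2 can be evaluated with the standard basis; this is a sketch of the textbook form, not the project's code:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom segment between p1 and p2 at t in [0, 1].

    The curve interpolates p1 and p2, with tangents derived from the
    neighboring control points p0 and p3 (which is why it hugs the data
    more tightly than a B-Spline, at the cost of continuity at joints).
    """
    return tuple(
        0.5 * ((2 * b)
               + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t ** 2
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3))
```

Because the tangent at each joint depends on the neighboring control points, changing one control point (as in the spliced-sine example below the circle test) can kink the curve at that joint.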

The third test was to evaluate a sine curve. The sine curve was composed of six control points, with four median points marking the curve and two end control points matching the coordinates of the medians at the ends of the curve. The arclength over the sine curve should be the integral of sine over the interval. Catmull-Rom approximates the curve to less than 1% relative error while B-Spline approximates to around 20% relative error.

To develop the Double Immelmann, first the Immelmann was defined...

Followed by developing the Double Immelmann...

The continuity problems of Catmull-Rom are better demonstrated in the following joined spline example. In this case, the frequency of the sine function changes at the joint between the spliced splines, and there is a clear and significant continuity problem at the joint.

In order to smooth out the spline, removing the control point in this case eliminates the continuity problem, but the spline is no longer approximating the same function. Conversely, the B-Spline was not significantly altered from the above example with or without the splice control point; however, on close inspection, it is clear that the function it is approximating is different once the splice control point is removed.

A final trajectory that was tested is the loop which is just an extension of the circle curve. It was expected that extending the trajectory beyond the bottom of the circle might allow Catmull-Rom to better approximate the circle; however, this expectation was not realized. It does however smooth the continuity problems demonstrated in the pure circle test above at the control points marking and .

The following images are renders of three dimensional textures generated with Perlin Noise.

The concrete textures are simply greyscale visualizations of the turbulence produced by the noise algorithm using different seeds.

Because a three dimensional texture is highly expensive to maintain in memory, memory can be conserved by preprocessing models with Perlin noise and then extracting slices of the texture. The slices can then be remapped as two dimensional textures to the run-time model. In the following example, slices from one of the concrete cubes have been rendered as surfaces of a cube map, which can then be treated as a single two dimensional image and subsequently mapped onto only the surface of the cube at run-time.

The memory savings in this approach are considerable and can amount to an order of magnitude reduction in memory requirements. The three dimensional image is 64 x 64 x 64 pixels with 3 channels per pixel at 4 bytes per channel, so the memory requirement for that texture is at least 3MB. As the size of the texture increases, the memory grows considerably, to the point where a 512 x 512 x 512 texture can bring a standard machine to a swapping/starving halt. The two dimensional extraction reduces the above case to 64 x 64 x 3 x 4 x 6 bytes, reducing the memory requirement to roughly 300KB.
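The arithmetic above can be checked directly:

```python
# Memory for the full 3D texture vs. its six extracted 2D cube-map slices.
tex3d = 64 ** 3 * 3 * 4       # 64^3 voxels x 3 channels x 4 bytes/channel
slices = 64 ** 2 * 3 * 4 * 6  # six 64x64 faces x 3 channels x 4 bytes/channel
print(tex3d, slices)          # 3145728 vs 294912 bytes, roughly a 10x reduction
```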

The turbulence can be used to perturb data and introduce randomness. The following cube is simply a sine function applied to the 3D texture.

Turbulence is then applied with a few additional parameters to modulate the resultant image. The rest of these images are various results generated by tweaking parameters applied to the perturbed sine texture.


The basis of the approach is to generate a signal lattice of random values. The lattice is processed at varying levels of resolution via interpolation to generate self-similar noise between resolutions of the lattice. At each noise level, the signal is compressed in amplitude and frequency. Once the granularity of a noise level reaches a satisfactory depth or is smaller than the resolution of a pixel, the noise generation is halted. The varying noise levels are then recombined into a single turbulence value for each pixel.

In the case of this implementation, bilinear interpolation was used to interpolate values between the lattice values, and noise was generated to a depth of five, meaning that noise generation consisted of Noise(x), Noise(2x), Noise(4x), Noise(8x), and Noise(16x). Noise was then added back together using varying factors to produce the different turbulence textures displayed below.
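A minimal two dimensional sketch of this scheme, assuming a wrapping value lattice with bilinear interpolation and amplitude halving at each level (the names and weighting factors are illustrative):

```python
import random

def value_noise_2d(size, seed=0):
    """Random value lattice: the base signal for the noise levels."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(size)] for _ in range(size)]

def sample(lattice, x, y):
    """Bilinearly interpolate the lattice, wrapping so textures tile in u and v."""
    n = len(lattice)
    x0, y0 = int(x) % n, int(y) % n
    x1, y1 = (x0 + 1) % n, (y0 + 1) % n
    fx, fy = x - int(x), y - int(y)
    top = lattice[y0][x0] * (1 - fx) + lattice[y0][x1] * fx
    bot = lattice[y1][x0] * (1 - fx) + lattice[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def turbulence(lattice, x, y, depth=5):
    """Sum Noise(x) .. Noise(16x): frequency doubles, amplitude halves."""
    total, freq, amp = 0.0, 1.0, 1.0
    for _ in range(depth):
        total += amp * sample(lattice, x * freq, y * freq)
        freq *= 2.0
        amp *= 0.5
    return total
```

Changing how the levels are weighted and combined (the `amp` schedule, absolute values, thresholds) is what produces the variety of turbulence results shown below.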

These first five images help visualize each of the Noise(cx) levels. Noise(16x) is actually the lattice in this case, but this is neither a requirement nor a limitation on the implementation.

The following image is the visualization of the turbulence that results from the addition of the above individual Noise levels.

Manipulation of the addition operation can produce a variety of turbulence results. Each of the following could be applied in a variety of applications. The labels simply reflect how I visualized the texture could be applied when I generated it.

The following image is one of those happy accidents that can occur. This image is a Noise(x) level that I generated while playing with using absolute value. The resulting visualization clearly demonstrates how the textures generated wrap well in u and v and also clearly shows how a fundamental height map can be produced through the process which can then be reprocessed through the Perlin Noise algorithm.

The following images are much larger textures generated by the same implementation. The lattice is still generated with the same resolution as the Noise(16x) level. Using a smaller lattice in these cases would produce much higher resolution, less pixelated textures with turbulence of similar complexity (*i.e.* less busy) to the small textures above.

The following images are a large texture generated using a small lattice and demonstrate the high-resolution, smooth-transition, low-complexity effect described above.

Finally the Perlin Texture is mapped to a sphere and demonstrates one application as clouds over a planet's surface.

UV coordinates are a normalized approach to addressing texture space. Because textures and images have varying resolutions and can be scaled, a normalized system allows a texture to be easily replaced with another, whether the replacement is a different resolution of the same texture because the distance to the object changes, as in MIP mapping, or an entirely different texture.

There are varying approaches to cylinder mapping, but the most straightforward is to simply map the x,y,z coordinates of a vertex onto the surface of a cylinder. The vertex x and y components can be transformed to u by projecting them onto the unit circle, measuring the angle from the x-axis to that point, and then dividing by 2π to produce a value normalized between zero and one. The vertex z component can be transformed to v by simply dividing z by the height of the mesh.
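A sketch of that mapping, assuming the mesh is centered on the cylinder axis with z ranging from zero to the mesh height:

```python
import math

def cylinder_uv(x, y, z, height):
    """Map a vertex to cylinder (u, v): angle around the axis, height along it."""
    # atan2 gives the angle from the x-axis; wrap into [0, 2*pi) then normalize.
    u = (math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi)
    v = z / height
    return u, v
```

Note the seam where u wraps from just under one back to zero; handling that seam (and degenerate vertices on the axis, where the angle is undefined) is where imperfect models like the better-ball exercise the mapping.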

The first developmental step required loading textures from file. Because of the overall requirement not to utilize any libraries, I chose to work with 24-bit bitmaps, as parsing them is straightforward and their header information is well documented. Once the bitmap loader was properly vetted through unit testing, I moved on to rendering the texture to a trivial surface.

The plane is straightforward, as each corner maps to an extent of the texture. This developmental step afforded me the ability to debug the trilinear interpolation of the uv coordinates and the mapping from screen space pixel to texture space uv coordinate. After proving the ability to load and render a basic texture to a basic surface, I validated that the system would render a more complex topographical texture, which would move me one step toward the overall goal that I had set.

After validating the rendering of the topographical texture, I began working on the cylinder map. In order to visualize what the system was calculating, I wrote a function to procedurally generate a cylinder mesh and then used the cylinder texture map function to see the results and to facilitate debugging of the map function.

Once the texture map was validated for the well-controlled cylinder, I applied it to a model from the wild, the better-ball. While the better-ball is a sphere, it contains imperfect data. Because of the floating point math used to generate it, values that approach zero are not actually zero and so some uncertainty was introduced that was not present in the cylinder mesh. Better-ball enabled the texture map function to be validated for these less than perfect models.

Below are renders of the composite scene of the better-ball sphere inside the procedurally generated cylinder. Both models are mapped with the same texture. The cylinder points map to the sphere so you should be able to visualize a line drawn from the central axis of the sphere along a perpendicular plane which intersects with the skin of the sphere. If you extend that line out to the cylinder, you will note that the same color appears on the surface of the cylinder at the point of intersection. The renders include the use of an Earth, Earth at Night, Moon, Sun and Io topographical texture borrowed from NASA.

Finally, the cylinder map is applied to an odd non-uniform mesh. In this case the Utah teapot has had the Earth texture applied to it via cylinder map where the cylinder's z axis is aligned in the first case along the x axis of the teapot, in the second case along the y axis of the teapot and in the third case along the z axis of the teapot.

In lab 1 we developed the basic renderer with wireframe capability, and this is the only case where other OpenGL facilities were allowed or used, e.g. line drawing from vertex to vertex. The goal of lab 1 was to implement and test the necessary mathematical functions, to perform perspective projection, to implement backface culling, and in general to develop the necessary system framework for continued development throughout the semester.

In lab 2 we implemented depth buffering and a coherence based scan line algorithm. The goal of this lab was to render solid polygons in an appropriate back to front ordering so that polygons that are in front of others properly occlude those that are behind. The fill for the polygons was a simple random color procedure.

In lab 3 we implemented shading, which necessitated the addition of an intensity Bi-directional Reflectance Distribution Function (BRDF) as well as lights to the scene and materials to the models. The following are renders from the system that I developed using the classic "Utah Teapot" model with a red material and the Phong BRDF. The teapot is positioned at the world origin with no rotation, and the camera is positioned above and beside the model and then tilted down to produce the isometric view. In the top light cases, the light is positioned directly above the model along the world y-axis, i.e. up/down, and in the side light cases, the light is positioned along the world z-axis, i.e. forward/back. The light is omni-directional, includes attenuation, and in both cases is very close to the surface of the model.
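The Phong intensity computation can be sketched as follows; the coefficients here are illustrative, not the lab's material values, and attenuation is omitted for brevity:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(n, l, v, ka=0.1, kd=0.7, ks=0.4, shininess=32):
    """Classic Phong BRDF: ambient + diffuse + specular terms.

    n, l, v are unit vectors: surface normal, direction to the light,
    and direction to the viewer.
    """
    diffuse = max(0.0, dot(n, l))
    # Reflect the light direction about the normal: r = 2(n.l)n - l.
    r = tuple(2 * dot(n, l) * ni - li for ni, li in zip(n, l))
    specular = max(0.0, dot(r, v)) ** shininess
    return ka + kd * diffuse + ks * specular
```

Evaluating this per pixel with interpolated normals gives the highlights visible in the top-light and side-light renders.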

Just for fun, I added a shading mode to render the depth buffer itself. The graphical representation came in very handy in debugging the data stored within it. Unfortunately, the teapot model and this particular perspective configuration don't have a significant amount of depth variation, so another model has also been rendered that much better demonstrates what this shows. Pixels that are very close to the near clipping plane shift toward pure white while pixels that are far from the viewer, and are therefore close to the far clipping plane, shift toward pure black.

My preliminary UML architectural sketch of the game engine framework project.
