"If con is the opposite of pro, is congress the opposite of progress?"
Phone-wire AA
Tuesday, June 26, 2012 | Permalink

Executable
Source code
PhoneWireAA.zip (2.2 MB)

Required:
Direct3D 10
Traditionally, aliasing within a surface has been addressed with mipmapping, and aliasing due to geometric edges has been handled by MSAA. While this setup usually works reasonably well, it does not fully solve the aliasing problem. Lighting computations based on mipmapped samples can still introduce aliasing, and certain kinds of geometry can cause MSAA to break down.

A common culprit of aliasing in games is phone wires. Due to their thin geometry they tend to break up into a bunch of seemingly random, disconnected dots at a distance. MSAA helps reduce the problem but doesn't fully solve it; it merely pushes the problem a factor further into the distance. Once past this larger distance, the same issues start appearing even with MSAA. This demo illustrates a simple but effective technique for solving the problem.



The idea is to adjust the radius of the wire so that it never gets thinner than a pixel. If the true radius would make the wire thinner than a pixel, the width is clamped to one pixel and the wire instead fades out with an alpha value corresponding to the radius reduction ratio. For example, if the wire would be half a pixel wide at the current distance, the width is clamped to a full pixel and coverage is set to 0.5 instead. While the technique solves the problem of aliasing due to thin geometry, it does not address the general problem of jaggies; however, regular MSAA will take care of that. With both enabled you get a very natural-looking wire at any distance.
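To make the idea concrete, here is a minimal vertex-shader sketch of the radius clamp. The names and constants are hypothetical, not taken from the demo source; PixelScale is assumed to be Proj[1][1] * 0.5 * ViewportHeight, i.e. the on-screen size in pixels of one world unit at w = 1.

```hlsl
cbuffer Constants
{
	float4x4 ViewProj;    // combined view-projection matrix
	float PixelScale;     // assumed: Proj[1][1] * 0.5 * ViewportHeight
	float WireRadius;     // true wire radius in world units
};

struct VsIn
{
	float3 AxisPos : Position;   // position on the wire's center line
	float3 Dir     : Normal;     // unit vector from the axis toward this vertex
};

struct VsOut
{
	float4 Position : SV_Position;
	float  Alpha    : TexCoord0;
};

VsOut main(VsIn In)
{
	VsOut Out;

	// World-space radius that projects to exactly one pixel at this depth
	float w = mul(ViewProj, float4(In.AxisPos, 1.0)).w;
	float onePixelRadius = w / PixelScale;

	// Never let the wire get thinner than one pixel; fade it out instead
	float radius = max(WireRadius, onePixelRadius);
	Out.Alpha = saturate(WireRadius / radius);   // e.g. half a pixel wide -> alpha 0.5

	Out.Position = mul(ViewProj, float4(In.AxisPos + In.Dir * radius, 1.0));
	return Out;
}
```

The pixel shader then multiplies this alpha into the wire's output color, which is why the technique combines naturally with alpha blending or alpha-to-coverage.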

A similar approach could likely be used on other aliasing-prone thin geometry, such as antenna towers, pipes and railings.

This demo should run on any DX10 capable GPU.

Second-Depth Anti-Aliasing
Sunday, August 28, 2011 | Permalink

Executable
Source code
SDAA.zip (1.3 MB)

Required:
Direct3D 10
Recommended:
Direct3D 10.1
This demo shows another anti-aliasing technique, which uses the depth buffer and a second-depth buffer to locate edges. First, edges are detected. Then, for each edge pixel, it is determined whether an edge cuts through that pixel. There are essentially two types of edges in a scene: creases and silhouette edges. At a crease, the distance to the edge can be found by sampling two neighbors to the left and two to the right and computing the intersection point where they meet, and similarly vertically of course. For silhouette edges there is a gap in the depth values, so this doesn't work. In that case a buffer of second-depth values is used instead, i.e. the depths of the backfaces in the scene, and the intersection point is computed with the same logic from those values. Once the edges are found, the pixel is blended with a neighbor in the same fashion as in GPAA/GBAA.
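As a rough illustration of the crease case, here is a hedged sketch of how the sub-pixel edge position could be found along one axis. The names are hypothetical and not taken from the demo source.

```hlsl
// dm2 and dm1 are depths two and one pixels to the left of the current pixel,
// dp1 and dp2 one and two pixels to the right. Each side of a crease is planar,
// so depth is linear in x; extrapolating both lines and intersecting them gives
// the signed sub-pixel distance from the pixel center to the edge.
float CreaseEdgeOffset(float dm2, float dm1, float dp1, float dp2)
{
	float slopeL = dm1 - dm2;   // left line:  d(x) = dm1 + slopeL * (x + 1)
	float slopeR = dp2 - dp1;   // right line: d(x) = dp1 + slopeR * (x - 1)

	// Solve dm1 + slopeL * (x + 1) == dp1 + slopeR * (x - 1) for x
	return (dp1 - dm1 - slopeL - slopeR) / (slopeL - slopeR);
}
```

For a silhouette edge the same expression would be used, but with second-depth (backface) values substituted on the side where the depth values jump.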

This technique is less intrusive to the overall rendering and should be easier to integrate into an existing engine than GPAA/GBAA. The only addition is the second-depth buffer. If the engine already has a pre-Z pass this can be altered into a second-depth pass by switching to front-face culling. Depending on the scene this may turn out faster or slower than a regular pre-Z pass. In this demo second-depth is in fact faster.

On the downside, this technique requires that there actually are backfaces behind edges. This means that thin objects, billboards/foliage etc., will not get anti-aliased unless extra backside geometry is added. Quality-wise it rivals GPAA/GBAA, but it requires good precision in the depth buffer to work.

This demo should run on any DX10 capable GPU.

Geometry Buffer Anti-Aliasing (GBAA)
Monday, July 18, 2011 | Permalink

Executable
Source code
GBAA.zip (2.3 MB)

Required:
Direct3D 10
GBAA is another anti-aliasing technique that works in the same spirit as GPAA, with the same quality, but with a substantially different method of accomplishing the result. Whereas GPAA operates entirely as a post-process step (plus optionally pre-processing edges), GBAA uses a mixed method. During main scene rendering the geometry information is stored to a separate buffer; alternatively, it can use available channels in an existing buffer. Anti-aliasing is then performed at the end, in a resolve pass if you will, as a fullscreen pass using the stored geometric information.

The geometry is represented as a two-channel edge distance. The distance in the closest direction (horizontal or vertical) is stored and the other channel is set to zero. It would be possible to pack it into a single channel plus a bit indicating the major direction, but this demo uses two channels for simplicity.
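As a bare-bones sketch of how the resolve pass might consume this buffer (hypothetical names, not the demo's actual code, and the real resolve handles more cases):

```hlsl
Texture2D BackBuffer;
Texture2D<float2> GeometryBuffer;   // signed pixel distance to the nearest edge, minor axis zero
SamplerState Linear;

float2 PixelSize;   // 1.0 / resolution

float4 main(float4 Position : SV_Position, float2 TexCoord : TexCoord0) : SV_Target
{
	float2 offset = GeometryBuffer.Load(int3(Position.xy, 0));

	// Only pixels whose nearest edge passes within half a pixel need any blending
	float2 blend = sign(offset) * saturate(0.5 - abs(offset));

	// Shift the coordinate toward the neighbor across the edge and let the
	// bilinear filter do the blend in a single lookup
	return BackBuffer.Sample(Linear, TexCoord + blend * PixelSize);
}
```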

A geometry shader is used during main rendering to compute the edge distances within a triangle. These are passed down in three components of an interpolator. The pixel shader simply selects the shortest distance to output to the buffer.

The advantage of GBAA over GPAA is that it gets rid of the second geometry pass. It also does not require any line drawing, something that consumer-level GPUs generally suck at. Additionally, having geometry information in a buffer allows us to anti-alias other edges than just geometric ones, most notably alpha-tested edges. The shader just needs to output the distance to the alpha edge, something that's easily approximated using gradient instructions. This is also demonstrated in the demo. On the downside, it requires additional memory for the geometry buffer, but memory use does not balloon in the same way as with MSAA.
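Here is a hedged sketch of how such an alpha-edge distance could be approximated with gradient instructions. The names are hypothetical and this is not the demo's actual shader.

```hlsl
// The alpha edge lies where the sampled alpha crosses the test threshold; gradient
// instructions give a first-order estimate of how far away that crossing is.
float2 AlphaEdgeOffset(float alpha, float threshold)
{
	// Signed pixel distance to the alpha edge along x and y (a real implementation
	// would guard against zero gradients)
	float dx = (threshold - alpha) / ddx(alpha);
	float dy = (threshold - alpha) / ddy(alpha);

	// Keep only the major direction, matching the two-channel edge-distance format
	return (abs(dx) < abs(dy)) ? float2(dx, 0) : float2(0, dy);
}
```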

This demo should run on any DX10 capable GPU.

Geometric Post-process Anti-Aliasing (GPAA)
Saturday, March 12, 2011 | Permalink

Executable
Source code
GPAA.zip (1.2 MB)

Required:
Direct3D 10
A number of techniques have recently been introduced for doing anti-aliasing as a post-processing step, such as MLAA and, more recently, SRAA. MLAA attempts to figure out the underlying geometric properties by analyzing the pixel colors in the final image. This can be complemented with depth buffer information, as in Jimenez's MLAA. SRAA uses super-resolution buffers to figure out the geometry. This demo shows a different approach which, instead of trying to deduce the geometry, passes down the actual geometry information and uses it to very accurately smooth geometric edges.

The technique is relatively simple. The scene is initially rendered normally without any anti-aliasing. Then the backbuffer is copied to a texture and the geometric edges in the scene are drawn in a post-step. The edges are drawn as lines, and for each shaded pixel the shader checks which side of the line the pixel center is on and blends the pixel with a neighbor up or down in the case of a mostly horizontal line, or left/right in the case of a mostly vertical line. The shader computes the coverage a neighboring primitive would have had on the pixel and uses that as the blend weight. This illustration shows the logic of the algorithm:



The wide line is the geometric edge. The arrows show how the neighbor pixel is chosen. The dashed lines show the value of the line equation for that pixel, which is used to compute the coverage value. The coverage and neighbor pixel are all that's needed to evaluate the anti-aliased color. The blending is done by simply shifting the texture coordinate so that the linear filter does all the work, and only a single texture lookup is needed.
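Here is a minimal sketch of what that per-pixel blend might look like. The names are hypothetical, how the line equation reaches the pixel shader is glossed over, and the exact sign convention depends on how the line equation is set up, so treat those as assumptions.

```hlsl
Texture2D BackBuffer;
SamplerState Linear;

float3 LineEq;           // screen-space line: dot(LineEq.xy, pos) + LineEq.z = signed distance
float2 PixelSize;        // 1.0 / resolution
bool MostlyVertical;     // which way to step to reach the blend neighbor

float4 main(float4 Position : SV_Position, float2 TexCoord : TexCoord0) : SV_Target
{
	float dist = dot(LineEq.xy, Position.xy) + LineEq.z;

	// Coverage the neighboring primitive would have had on this pixel
	float coverage = saturate(0.5 - abs(dist));

	// Step toward the neighbor on the other side of the edge
	float2 dir = MostlyVertical ? float2(sign(dist), 0) : float2(0, sign(dist));

	// Shift the coordinate and let the bilinear filter do the blend in one lookup
	return BackBuffer.Sample(Linear, TexCoord + dir * coverage * PixelSize);
}
```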

Unlike many other anti-aliasing techniques, this technique really shines for near-horizontal and near-vertical lines. The worst case is diagonal lines, but the quality is good at all angles. Here is a sample image with and without GPAA.



The main advantages of this technique are performance and quality. The actual smoothing of edges is quite cheap; the biggest cost is instead copying the backbuffer. On an HD 5870 this demo renders a frame in about 0.93ms (1078fps) at 1280x720, of which the backbuffer copy costs 0.08ms and edge smoothing 0.01ms. For games using post-effects the backbuffer will likely have to be copied anyway. In such a case an alternative approach is to render the smoothing offsets directly to a separate buffer and then sample the backbuffer with those offsets in the final pass. This way the extra backbuffer copy can be avoided.

The main disadvantage of this technique is that you need to have geometric information available. For game models it is fairly straightforward to extract the relevant edges in a pre-processing step. These need to be stored, however, which takes additional memory; alternatively, a geometry shader can be used. The extra geometric rendering may be a fair bit more costly in real game scenes than in this demo.

An artifact can sometimes occur when there are parallel lines less than a pixel away from each other. This could result in partly smooth and partly jagged edges which sometimes look worse than the original aliased edge.

A small issue is that, due to precision and differences in computation, the shader may for some pixels decide that the edge went on the other side of the pixel center than what the rasterizer concluded, when the edge passes almost exactly through the pixel center. This results in single-pixel artifacts in otherwise smooth edges. This is generally only noticeable if you look carefully at static images.

Future work:
It would be ideal to generate an offset buffer together with the main rendering to avoid the extra geometric pass. Ideally a hardware implementation could generate the values during rasterization of the scene into a separate buffer. Edge pixels would get assigned a blending neighbor and a coverage value. With say 8 bits per pixel, of which 2 bits pick the neighbor and 6 bits store coverage, it would likely look good enough. Theoretically it would represent coverage as accurately as 64x MSAA. Alternatively, a more sophisticated algorithm could use 3 bits to select among all eight neighbors and 5 bits for coverage. A hardware implementation could also avoid the single-pixel error mentioned above.

For software approaches one might consider using a geometry shader to pass down the lines to the pixel shader. The geometry shader could use adjacency information to only pass down the relevant silhouette edges and avoid internal edges. It might be possible to render out the offsets directly in the main pass with this information, although the buffer might have to be post-processed to recover missed pixels outside of the primitive.

In any case, there's a lot of potential and I'll surely be playing more with this.

1K-Mandelbrot
Sunday, February 13, 2011 | Permalink

Executable
Source code
1K-Mandelbrot.zip (73 KB)

Required:
OpenGL 2.0
This demo is a 1K demo, that is, the final binary is no more than 1024 bytes in size. The main secrets to creating such a tiny executable are, first, to throw out the C runtime, and secondly to link with a compressing linker such as Crinkler.

The first thing you learn when you learn to code C/C++ is that program execution starts at main(). This is of course a bunch of lies. The program typically starts in a startup function provided by the C runtime, which after doing a whole bunch of initialization eventually calls main(). When main() returns the program doesn't terminate; instead it just returns to the C runtime, which closes down its stuff and eventually asks the OS to terminate the process. If you create a simple project with an empty main() and compile as usual, you will likely get an executable that is something like 30KB or so. So you've already excluded yourself from the 4K demos, let alone 1K, even before writing any code. That overhead is mostly the C runtime. You can throw it out by specifying a custom entry point and handling process termination yourself. Throwing out the C runtime has the disadvantage of losing a whole bunch of features, including some of the most basic things like printf(), but you probably won't need much of that anyway. With this you can easily get down to only a couple of kilobytes, depending on the optimization flags used.

Using Crinkler you can get even smaller executables than with the standard linker. In addition to stripping unnecessary overhead from the executable it also compresses the code and data. It basically includes a small decompressor in the exe that unpacks everything before it runs. The compression ratio depends on your code and content, and writing compressible code is essential for a good ratio. For instance, in this demo the chosen coordinates have lots of zeros in their lower bits, which might not be immediately obvious when looking at the human-readable values in the code.

Then there are of course the obvious steps, such as compiling for size, setting all the compiler options to generate code that is as small as possible, and passing good flags to Crinkler.

In addition to this, it's just a bunch of hard work trimming bytes here and there until you get down to your target size. This demo employs a few tricks, but there are probably more that could be used to trim it even further.

There are two variants of the demo: one compiled to be as tiny as possible, which fits within the 1KB limit, and one that's somewhat more fully featured and interesting to watch but slightly larger than 1KB. The latter has "-Full-" in the filename. For the tiny version, run the executable that matches your desktop resolution, as it does not set the display mode for you. The full version sets the display mode, so you can run any resolution you prefer. The tiny version runs through a limited set of predefined coordinates to zoom to. The full version searches for random "interesting" coordinates and thus will appear different every time you run it.

Volume roads
Monday, October 4, 2010 | Permalink

Executable
Source code
VolumeRoads.zip (1.5 MB)

Required:
Direct3D 10
Rendering roads is usually done by first smoothing the terrain under the road and then rendering some simple 2D geometry on top of the terrain. This approach commonly causes issues with depth fighting, though. The liberal amount of depth bias usually needed to solve this unfortunately also tends to put the road in front of the terrain in some places, clip it through the wheels of vehicles, and cause other artifacts. Another solution is to put the road geometry somewhat above the terrain, but this unfortunately causes visible parallax against the terrain.

This demo solves these problems by applying a technique similar to the Volume Decals demo. A road segment is rendered as a box encapsulating the terrain. The depth buffer is sampled and back-projected into the local space of the box, and the .xz coordinates are used as texture coordinates. Unlike in the decal case, the road texture is tiled in one direction, so we can't just pad it with zero alpha or use the texture border color to handle the "outside the texture" case. This is instead solved by first cutting out the affected pixels with a stencil pass. When rendering the road, the stencil bits are then reset to zero.

Road textures are typically relatively high resolution and need mipmapping; however, when computing texture coordinates from the depth buffer we get discontinuities wherever there was a geometric edge between a triangle and the background. This causes artifacts in 2x2 pixel blocks as the gradients get messed up. In the Volume Decals demo this was solved by simply not using mipmaps, which works for the fairly low-resolution textures generally used in that technique. That's not going to work for roads, though. So for this demo I sample neighboring depths. For pixels belonging to the same primitive the depth deltas should be constant, so whenever a delta is larger in one direction, it means we crossed over to a background pixel there. So I check which of the left and right pixels has the smallest difference in depth, and compute the texture coordinate for that pixel as well. The same is done vertically. This gives me the two gradients, and I can do a SampleGrad() lookup and get the right mipmap even at border pixels. The benefit of computing correct gradients is visible in these screenshots:

With:


Without:


With a less uniformly colored texture the artifacts would be even more obvious.
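Here is a rough sketch of the neighbor-depth gradient selection described above. The names and the LocalUV helper are hypothetical, not the demo's actual code.

```hlsl
Texture2D<float> DepthBuffer;
Texture2D RoadTex;
SamplerState Aniso;

float4x4 ScreenToLocal;   // inverse view-projection followed by the box's inverse transform
float2 InvScreenSize;

// Back-project a screen position plus depth into the box's local space and
// return the .xz coordinates as the road texture coordinate
float2 LocalUV(float2 pixel, float depth)
{
	float2 ndc = pixel * InvScreenSize * float2(2, -2) + float2(-1, 1);
	float4 local = mul(ScreenToLocal, float4(ndc, depth, 1.0));
	return local.xz / local.w;
}

float4 SampleRoad(int2 pixel)
{
	float dC = DepthBuffer.Load(int3(pixel, 0));
	float dL = DepthBuffer.Load(int3(pixel + int2(-1, 0), 0));
	float dR = DepthBuffer.Load(int3(pixel + int2( 1, 0), 0));
	float dU = DepthBuffer.Load(int3(pixel + int2( 0,-1), 0));
	float dD = DepthBuffer.Load(int3(pixel + int2( 0, 1), 0));

	float2 uv = LocalUV(pixel, dC);

	// Use whichever horizontal neighbor has the smaller depth delta, i.e. the one
	// most likely to belong to the same primitive, to form the horizontal gradient;
	// do the same vertically.
	float2 gradX = (abs(dL - dC) < abs(dR - dC)) ? uv - LocalUV(pixel + int2(-1, 0), dL)
	                                             : LocalUV(pixel + int2(1, 0), dR) - uv;
	float2 gradY = (abs(dU - dC) < abs(dD - dC)) ? uv - LocalUV(pixel + int2(0,-1), dU)
	                                             : LocalUV(pixel + int2(0, 1), dD) - uv;

	return RoadTex.SampleGrad(Aniso, uv, gradX, gradY);
}
```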

Since this technique uses the depth buffer, the roads are smeared on top of the underlying geometry and no biases of any kind are necessary. This means it will work even with not-so-smooth geometry, as is evident in the demo, although in practice you obviously still normally want to smooth the geometry under the road.

The demo allows you to place roads dynamically over the terrain with the left mouse button, and clear them all with the right button.

This demo should work on Radeon HD 2000 series and GeForce 8000 series and up.

Volume decals
Sunday, October 4, 2009 | Permalink

Executable
Source code
VolumeDecals.zip (1.2 MB)

Required:
Direct3D 10
This demo illustrates a fairly simple technique for creating seamless decals on arbitrary geometry. This is done by rendering a convex volume (such as a sphere or a box) around the decal area. Using the screen position and the information in the depth buffer, the local position within the decal volume is computed. This is then used as a texture coordinate into a volume texture containing the decal. To get unique-looking decals a random rotation is applied to each individual decal volume.

The decal pass is rendered at the end of the g-buffer pass in a deferred renderer and simply alters the diffuse and specular intensity parts while leaving the normals unchanged. The lighting can then be done as usual.
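Here is a minimal sketch of the decal lookup under those assumptions. The names are hypothetical; ScreenToLocal is assumed to combine the inverse view-projection with the decal's inverse transform, including its random rotation.

```hlsl
Texture2D<float> DepthBuffer;
Texture3D DecalTex;
SamplerState Linear;

float4x4 ScreenToLocal;
float2 InvScreenSize;

float4 main(float4 Position : SV_Position) : SV_Target
{
	float depth = DepthBuffer.Load(int3(Position.xy, 0));

	// Reconstruct normalized device coordinates and back-project into decal space
	float2 ndc = Position.xy * InvScreenSize * float2(2, -2) + float2(-1, 1);
	float4 local = mul(ScreenToLocal, float4(ndc, depth, 1.0));
	float3 coord = local.xyz / local.w * 0.5 + 0.5;   // [-1, 1] box -> [0, 1] texture space

	// The decal texture fades to zero alpha toward its borders, so geometry outside
	// the volume is left untouched when this is blended into the g-buffer
	return DecalTex.Sample(Linear, coord);
}
```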

This demo should run on Radeon HD 2000 series and up, as well as GeForce 8000 series and up.

Recursion
Saturday, February 28, 2009 | Permalink

Executable
Source code
Recursion.zip (84 KB)

Required:
Direct3D 10
One of the least advertised features of shader model 4.0 is that temporaries are indexable. This means you can declare an array in your shader and arbitrarily read from and write to it. This opens up lots of interesting algorithms. What this demo does is use this feature to do recursion. An array of temporaries is used to emulate a stack, together with an index that's used as the stack pointer. The actual algorithm implemented recursively here is perhaps not the most exciting, it's just a simple tessellation of a sphere. More interesting is the fact that it's possible.

A couple of notes about the implementation. HLSL does not support recursive function calls. If it did, the code could have been written very straightforwardly. In fact, HLSL hardly ever generates function calls at all. The only way I've found to convince it to do so is to use a switch statement with the [call] directive, and even then it only works from main(), not from any subroutine. So what I had to do was emulate the whole function calling mechanism manually in a while-loop.
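Here is a rough sketch of what such a while-loop stack emulation could look like for the sphere tessellation. The names, output layout and limits are hypothetical, not the demo's actual geometry shader.

```hlsl
struct GsIn  { float3 Position : Position; };      // vertex on the unit sphere
struct GsOut { float4 Position : SV_Position; };

float4x4 ViewProj;

static const int MAX_LEVEL = 3;

void EmitTri(float3 a, float3 b, float3 c, inout TriangleStream<GsOut> stream)
{
	GsOut v;
	v.Position = mul(ViewProj, float4(a, 1.0)); stream.Append(v);
	v.Position = mul(ViewProj, float4(b, 1.0)); stream.Append(v);
	v.Position = mul(ViewProj, float4(c, 1.0)); stream.Append(v);
	stream.RestartStrip();
}

[maxvertexcount(192)]
void main(triangle GsIn In[3], inout TriangleStream<GsOut> stream)
{
	// Indexable temporaries acting as the call stack; 12 entries are enough
	// for three levels of subdivision with this push/pop pattern
	float3 v0[12], v1[12], v2[12];
	int level[12];

	// Push the root "call"
	v0[0] = In[0].Position; v1[0] = In[1].Position; v2[0] = In[2].Position; level[0] = 0;
	int sp = 1;

	while (sp > 0)
	{
		// Pop the current call
		sp--;
		float3 a = v0[sp], b = v1[sp], c = v2[sp];
		int lvl = level[sp];

		if (lvl == MAX_LEVEL)
		{
			EmitTri(a, b, c, stream);
		}
		else
		{
			// Subdivide: project edge midpoints onto the unit sphere and push
			// the four child triangles as new calls
			float3 ab = normalize(a + b), bc = normalize(b + c), ca = normalize(c + a);
			v0[sp] = a;  v1[sp] = ab; v2[sp] = ca; level[sp] = lvl + 1; sp++;
			v0[sp] = ab; v1[sp] = b;  v2[sp] = bc; level[sp] = lvl + 1; sp++;
			v0[sp] = ca; v1[sp] = bc; v2[sp] = c;  level[sp] = lvl + 1; sp++;
			v0[sp] = ab; v1[sp] = bc; v2[sp] = ca; level[sp] = lvl + 1; sp++;
		}
	}
}
```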

The end result is a shader that does tessellation up to three levels. It could've gone much higher if geometry shaders didn't have an output limitation of 1024 floats. The demo renders the base geometry with tessellation levels 0, 1, 2 and 3.

It should be noted that while indexing into temporaries is cool and sometimes also convenient it's a feature that should be used sparingly because the performance implications could be very nasty. One reason is that it's harder for the compiler to optimize since indexed reads and writes are hard to analyze. The other reason is that the number of temporaries your shader needs could go up dramatically, which reduces your GPU's ability to hide latencies. Even a small array could easily double or triple the temporary count your shader needs.

One cool thing about recursion, though, is that you don't need a very deep stack to get a lot of stuff done. In this demo I only need a stack of 12 registers. The shader as a whole needs 25 registers according to GPU Shader Analyzer. One could have implemented the same algorithm in a flat manner where it tessellates the results from the previous pass in a loop. However, this would require the array to be large enough to hold the entire result of a pass, which in this demo would've meant 192 registers.

This demo should run on Radeon HD 2000 series and above and GeForce 8000 series and above.
