Using Vertex Shaders to apply 2D deformation effects to a rendered 3D scene.

Background

   A number of articles have been published recently about doing funky effects by rendering to an off-screen texture map, plastering that texture onto a screen-space quad, and rendering the quad.

A number of effects can be achieved with this method:
    By varying the opacity and not clearing the screen buffer, a quick-and-dirty motion blur effect can be created.
    By using pixel shaders, a per-pixel transfer function can be created, allowing for a number of cool filter effects like brightness/contrast shifting or solarize.

   A recent Xbox title, Wreckless, used a number of different filters to create a few different looks for playback of in-game movies.

  There are a few problems that I'm aware of with rendering to a texture. It stalls the rendering pipeline, and there is a potentially large performance hit reading back from the texture map. The performance issues are reported here, along with a Slashdot discussion debating the results; I haven't played with this enough to independently verify them.

  A second issue is aesthetic: since the texture map isn't typically the exact same size as your screen, you can wind up squashing and stretching your image back to the original size. This introduces a bit of blurring and softening of the image that may be undesirable; I haven't seen any discussions of this.

 

2D Deformations

    So recently I have been on an Escher kick. I had seen a new paper discussing Escher's warped-space paintings, and looking at a study for one I was struck by how similar it was to Photoshop's Spherize distortion filter. Photoshop has a whole class of distortion filters that amount to putting your image on a rubber grid and twisting/stretching/compressing it. (Kind of scary that weeks of work can now be reduced to a menu click.)

  I am not really sure how Photoshop implements these filters, but this rubber grid looks suspiciously like a tessellated quad with various distortion functions applied to it.

  Instead of just rendering a single screen-space quad, why not tessellate the quad and warp it? This is a fairly easy change to make: just create a vertex array or display list where the UV coordinates vary over the surface from 0 to 1. The grid doesn't have to be extremely fine to get good results.
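As a sketch of the grid setup (Python pseudocode for the host side; the 32x32 resolution is just a hypothetical default, not a number from the demo):

```python
# Build an (nx+1) x (ny+1) grid of screen-space vertices with UVs from 0 to 1,
# plus two triangles per grid cell. Positions span [-1, 1] in normalized
# screen space so the quad covers the whole viewport.
def make_grid(nx=32, ny=32):
    vertices = []   # (x, y) positions in [-1, 1]
    uvs = []        # (u, v) texture coordinates in [0, 1]
    indices = []    # triangle index triples
    for j in range(ny + 1):
        for i in range(nx + 1):
            u, v = i / nx, j / ny
            uvs.append((u, v))
            vertices.append((u * 2 - 1, v * 2 - 1))
    for j in range(ny):
        for i in range(nx):
            a = j * (nx + 1) + i          # lower-left corner of the cell
            b = a + 1                     # lower-right
            c = a + (nx + 1)              # upper-left
            d = c + 1                     # upper-right
            indices += [(a, b, c), (b, d, c)]
    return vertices, uvs, indices
```

The deformation itself happens later in the vertex shader, so this buffer is built once and never touched again.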

   When rendering, use a vertex shader to distort the vertices in hardware. Unless I start doing sine or cosine operations, I've been able to keep a consistently high frame rate.

  Most of the details of the implementation are trivial, but keep in mind that if your distortion filters pull vertices toward the center of the screen, you can wind up with the edge of the quad becoming visible, ruining the effect. There are a number of ways to get around this issue:

  • Create a texture that is a bit larger than necessary and just have some additional padding around the edges. (Probably the easiest, but not the most computationally efficient.)

  • Encode a falloff grid into the vertex data itself so the edges have a weight of zero, and use this additional data as a multiplier on the deformation.

  • Write deformation code that distorts less the farther a vertex is from the center of the effect.

  There are pros and cons to each option. Option A over-renders but allows you to pull vertices off the edge of the screen; options B and C result in an effect that can never touch the edges of the screen. It really comes down to what kind of distortion you want to apply.
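Option B's falloff weight can be sketched like this (Python; the smoothstep-shaped curve is my assumption, since any curve that reaches zero at the border would work):

```python
# Per-vertex deformation weight for option B: 1 at the center of the grid,
# 0 along all four edges. Computed once when the grid is built and baked
# into the vertex data; the shader multiplies the displacement by it.
def edge_weight(u, v):
    # Distance to the nearest edge, remapped to [0, 1]
    w = min(u, 1 - u, v, 1 - v) * 2
    # Smooth the ramp so the falloff has no visible crease
    return w * w * (3 - 2 * w)
```

Because the weight hits exactly zero on the border, border vertices stay pinned in place no matter how strong the deformation is.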

   Additionally, vertex shaders are still rather limited in their instruction count, so don't expect to pack a dozen features into each filter.

   Once read-back speed increases, it would be an interesting next step to chain deformations together; the performance hit would probably be too severe at the moment.



original image (3ds max rendering )


Spherize

Sine wave applied to the verts

Twist filter

Compression Wave

Download Animated Demo 
(uses Cg version 1.5) 

 
Spherize sample Cg code:

#pragma bind appdata.position = ATTR0
#pragma bind appdata.normal = ATTR2
#pragma bind appdata.color = ATTR3
#pragma bind appdata.text1 = ATTR8

struct appdata : application2vertex 
 {
 float4 position;
 float4 normal;
 float4 color;
 float2 text1;
 };

struct vfconn : vertex2fragment 
 {
 float4 HPOS;
 float4 COL0;
 float2 TEX0;
 };

vfconn main(appdata IN, 
            uniform float4 Kd, 
            uniform float4x4 ModelViewProj, 
            uniform float4 consts) 
 { 
 vfconn OUT; 
 float4 pos = IN.position; 
 float3 norm; 
 // consts.xy holds the xy origin of the deformation
 norm.xy = pos.xy - consts.xy; 
 norm.z = 0; // displace in the screen plane only
 float dis = length(norm.xy); 
 // consts.z holds a scale factor for ramping the effect in and out
 pos.xyz = pos.xyz + consts.z * norm.xyz / pow(dis + 1, 10); 
 OUT.HPOS = mul(ModelViewProj, pos); 
 OUT.COL0 = IN.color; 
 OUT.TEX0 = IN.text1; 
 return OUT; 
 }
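For reference, the displacement the shader applies can be written out on the CPU as well (a Python translation of the math above, not part of the demo; the parameter names mirror the shader's consts):

```python
# CPU version of the spherize displacement applied per vertex in the shader.
# (cx, cy) corresponds to consts.xy, scale to consts.z.
def spherize(x, y, cx, cy, scale):
    nx, ny = x - cx, y - cy                 # offset from the effect center
    dis = (nx * nx + ny * ny) ** 0.5        # distance from the center
    f = scale / (dis + 1) ** 10             # falls off sharply with distance
    return x + f * nx, y + f * ny
```

The 10th-power denominator means the effect is strong near the center and effectively zero a short distance away, which is also why the edge-visibility problem above mostly matters when the effect center drifts toward a screen border.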

In case anyone is curious, the sample image is an homage to Cubic Space Division by M. C. Escher, with the advantages of a modern ray tracer; it's all really a bit of geometric irony... which I will leave as an exercise for the reader.

Gedalia Pasternak | Gedalia has been playing with computer graphics and computer animation long enough to say "I remember when." He's just finished working on the Asheron's Call 2 engine, and is currently freelancing.
