I prototyped these stylized dancers, targeting music videos, music visualization, and stylized human character visualization in MR and VR. I designed the experience so that the music, the VFX, and the characters' performance are synchronized and support each other, exciting the user's eyes and ears and enhancing the sense of liveliness.
The challenges I set up for this prototype were as follows.
Using as few pre-cached resources as possible.
Using as many run-time graphics resources as possible.
Adding as many different stylized dancers to the scene as possible.
Recorded from Unity
The dancer asset was downloaded from 3DPeople.
The animation data was procedurally analyzed to extract a weighted skinned mesh and bones. The skinning conversion process introduced a slight vertex position mismatch, but it was not noticeable. The final converted animation kept its fidelity and worked as a good resource for driving the associated visual effects.
Houdini's skinning converter was used to generate the FBX file.
I purchased Bruno Mars' "24K Magic". In Houdini, the input audio signal was filtered in CHOPs and encoded into a mesh. I also made a solver to calculate the "increment" of the beats.
At initialization, the data were decoded by reading the heights of the mesh's vertices. The intensity and the incremented beat were stored in a compute buffer, which was accessed by the shaders of each VFX asset at run-time. The intensity drove the intensity of the VFX components, such as the brightness and scale of the instanced splats and trails. The incremented beat drove the animation speed of the VFX components, such as the animation speed of the 3D volumetric ripples. The animation of the VFX components was therefore driven by non-linearly flowing time, which enhanced the emotional alignment among the music, the characters' performance, and the animated VFX.
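A minimal HLSL sketch of how such a music buffer could drive a VFX shader, assuming a simple two-element StructuredBuffer (intensity and accumulated beat) filled each frame; the buffer layout and names are illustrative, not the exact ones used here.

```hlsl
// Illustrative layout (an assumption): _MusicBuffer[0] = current intensity,
// _MusicBuffer[1] = accumulated ("incremented") beat value.
StructuredBuffer<float> _MusicBuffer;

float4 ShadeSplat(float4 baseColor)
{
    float intensity = _MusicBuffer[0];   // drives brightness / scale
    float beatTime  = _MusicBuffer[1];   // non-linear time that advances faster on beats

    // Brightness follows the music's intensity.
    float4 col = baseColor * (1.0 + intensity);

    // Periodic animation is sampled with beatTime instead of wall-clock time,
    // so its speed follows the beats of the track.
    col.rgb *= 0.5 + 0.5 * sin(beatTime * 6.2831853);
    return col;
}
```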
I used Unity's GPU instancing both for rendering the scattered points and for rendering the trails. The brightness and size of the instanced splats and trails were driven by the intensity and beats of the music. I made the trails enhance the dynamism of the dancer without distracting too much from the character's performance.
At initialization, points were scattered and relaxed so that they became evenly distributed. I built the point scattering and relaxation system myself, reproducing Houdini's scatter tool from scratch with compute shaders. These points are used for instancing splats and for determining the roots of the trails.
A compute shader calculated the areas of the mesh's triangles. The areas were accumulated from the first triangle to the last one, and the points were laid out in the same cumulative space, so that the point count per triangle was determined by each triangle's area.
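A sketch of one way to realize that mapping in a compute shader, assuming a prefix-sum pass has already written the running sum of triangle areas into a buffer; the buffer names and the binary search are illustrative.

```hlsl
// Sketch of mapping each point to a triangle in proportion to triangle area.
// Assumes _CumulativeArea[i] holds the running sum of triangle areas 0..i,
// computed by an earlier prefix-sum pass.
StructuredBuffer<float> _CumulativeArea;   // length = triangle count
RWStructuredBuffer<uint> _PointTriangle;   // output: triangle id per point
uint _TriangleCount;
uint _PointCount;

[numthreads(64, 1, 1)]
void AssignTriangles(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= _PointCount) return;

    // Place the point at an evenly spaced position along the total-area axis.
    float totalArea = _CumulativeArea[_TriangleCount - 1];
    float target    = (id.x + 0.5) / _PointCount * totalArea;

    // Binary search for the first triangle whose cumulative area reaches the target.
    uint lo = 0, hi = _TriangleCount - 1;
    while (lo < hi)
    {
        uint mid = (lo + hi) >> 1;
        if (_CumulativeArea[mid] < target) lo = mid + 1;
        else hi = mid;
    }
    _PointTriangle[id.x] = lo;
}
```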
Points were positioned on their triangles using randomly generated barycentric coordinates.
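A sketch of generating uniform random barycentric coordinates for a point; the hash function is a simple illustrative RNG, not necessarily the one used here.

```hlsl
float Hash(uint n)
{
    n = (n << 13u) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return float(n & 0x7fffffffu) / float(0x7fffffff);
}

float3 RandomBarycentric(uint seed)
{
    float u = Hash(seed * 2u);
    float v = Hash(seed * 2u + 1u);

    // Fold points that fall outside the triangle back inside,
    // keeping the distribution uniform over the triangle.
    if (u + v > 1.0) { u = 1.0 - u; v = 1.0 - v; }

    return float3(1.0 - u - v, u, v);   // weights for the triangle's three vertices
}
```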
Relaxation required neighbor searches, both for the points pushing each other apart and for finding the closest points on the mesh. I developed my own neighbor search algorithm, the linked visitor list algorithm.
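The linked visitor list algorithm itself is not detailed here; as a point of reference, this is a sketch of a conventional spatial-hash plus linked-list neighbor query of the kind commonly built in compute shaders, to show the sort of query the relaxation step needs. All names and the grid layout are illustrative.

```hlsl
StructuredBuffer<float3> _Points;
StructuredBuffer<int>    _CellHead;   // first point index in each grid cell, -1 if empty
StructuredBuffer<int>    _NextPoint;  // next point index in the same cell, -1 at the end
float _CellSize;
uint  _GridDim;                       // grid is _GridDim^3 cells

uint CellIndex(int3 c)
{
    c = clamp(c, 0, int(_GridDim) - 1);
    return (c.z * _GridDim + c.y) * _GridDim + c.x;
}

// Accumulates a push-apart offset from all neighbors within 'radius'.
float3 ComputeRepulsion(uint selfId, float radius)
{
    float3 p = _Points[selfId];
    int3 baseCell = int3(floor(p / _CellSize));
    float3 push = 0;

    // Visit the 27 cells around the point and walk each cell's linked list.
    for (int z = -1; z <= 1; z++)
    for (int y = -1; y <= 1; y++)
    for (int x = -1; x <= 1; x++)
    {
        int idx = _CellHead[CellIndex(baseCell + int3(x, y, z))];
        while (idx != -1)
        {
            if (uint(idx) != selfId)
            {
                float3 d = p - _Points[idx];
                float len = length(d);
                if (len < radius && len > 1e-6)
                    push += d / len * (radius - len);   // push away from close neighbors
            }
            idx = _NextPoint[idx];
        }
    }
    return push;
}
```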
Once the points were relaxed enough, the barycentric coordinates calculated from the last closest-point-on-mesh search were stored in their buffer.
The scattering and relaxation process distributed the points evenly on the mesh. At run-time, the points referred to the vertex buffer and the triangle buffer of the animated mesh. Because the barycentric coordinates and the triangle ids had been stored in a custom buffer, the points could be positioned on their target triangles of the animated mesh, so their animation stayed synchronized with it. The normal values were calculated by sampling and interpolating the normals of the target triangle's vertices. The vertex shader rotated the instanced splats to face outward, and they were revealed and scaled locally based on a fresnel term.
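A sketch of re-projecting a scattered point onto the animated mesh each frame, assuming the buffer names and layouts shown; the real buffers may differ.

```hlsl
StructuredBuffer<float3> _Vertices;      // skinned vertex positions, updated per frame
StructuredBuffer<float3> _Normals;       // skinned vertex normals
StructuredBuffer<uint>   _Indices;       // triangle index buffer (3 per triangle)
StructuredBuffer<uint>   _PointTriangle; // triangle id stored per point at init time
StructuredBuffer<float3> _PointBary;     // barycentric weights stored per point

void SamplePointOnMesh(uint pointId, out float3 pos, out float3 nrm)
{
    uint   tri = _PointTriangle[pointId];
    float3 b   = _PointBary[pointId];

    uint i0 = _Indices[tri * 3 + 0];
    uint i1 = _Indices[tri * 3 + 1];
    uint i2 = _Indices[tri * 3 + 2];

    // The point follows the animated triangle via its fixed barycentric weights.
    pos = b.x * _Vertices[i0] + b.y * _Vertices[i1] + b.z * _Vertices[i2];
    nrm = normalize(b.x * _Normals[i0] + b.y * _Normals[i1] + b.z * _Normals[i2]);
}

// Fresnel-style term used to reveal and scale the splats near grazing angles.
float Fresnel(float3 normal, float3 viewDir, float power)
{
    return pow(saturate(1.0 - dot(normalize(normal), normalize(viewDir))), power);
}
```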
The trail system checked the velocity of the roots, so that faster movement produced more trails. At run-time, the positions of four control points were periodically stored in the trail buffer. In the vertex shader, the instanced meshes were deformed to the interpolated positions of Catmull-Rom splines constructed from those control points. A longer control point sampling period generated longer trails.
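A sketch of the kind of Catmull-Rom evaluation that can deform a trail mesh along four stored control points, using the standard uniform Catmull-Rom form; the buffer layout is illustrative.

```hlsl
float3 CatmullRom(float3 p0, float3 p1, float3 p2, float3 p3, float t)
{
    float t2 = t * t;
    float t3 = t2 * t;
    return 0.5 * ((2.0 * p1) +
                  (-p0 + p2) * t +
                  (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2 +
                  (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3);
}

// In the vertex shader, a trail vertex's parameter along the strip (0..1) picks
// where on the spline it lands, stretching the instanced mesh into the trail.
StructuredBuffer<float3> _TrailControlPoints;   // 4 control points per trail instance

float3 DeformTrailVertex(uint trailId, float alongTrail)
{
    uint base = trailId * 4;
    return CatmullRom(_TrailControlPoints[base + 0],
                      _TrailControlPoints[base + 1],
                      _TrailControlPoints[base + 2],
                      _TrailControlPoints[base + 3],
                      alongTrail);
}
```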
I made a system for generating real-time animated 3D volumetric ripples for a dancer. The brightness and speed of the ripples were synchronized with the music's beats and tempo.
The animated ripples were rendered inside the dancer's body and deformed to follow the dancing animation.
Voronoi noise was used to determine the cells, and animated ripples were created inside them. The data - the normalized radius and the normalized age - were encoded in the RGB channels and used to determine the time and space for drawing the animated ripples. Each cell generated repeating animated ripples with a per-cell timing offset.
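A sketch of turning such per-cell data into a repeating ripple; the channel assignments and the exact wave shape are assumptions for illustration.

```hlsl
// data.r = normalized radius inside the cell (0 at the cell center, 1 at its edge)
// data.g = normalized age / per-cell timing offset
float RippleBrightness(float3 data, float beatTime)
{
    float radius    = data.r;
    float cellPhase = data.g;

    // Each cell restarts its ripple on its own offset; frac() makes it repeat.
    float age = frac(beatTime + cellPhase);

    // A ring expands outward: bright where the radius matches the current age.
    float ring = 1.0 - saturate(abs(radius - age) * 8.0);

    // Fade the ripple out as it ages.
    return ring * (1.0 - age);
}
```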
The data for drawing the volumetric ripples were stored in a 256 x 256 x 64 volume container in Houdini. The volumetric data were sliced along the z axis and stored in a single 2K texture atlas.
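A sketch of sampling a 256 x 256 x 64 volume stored as an 8 x 8 grid of z-slices in a 2048 x 2048 atlas; the tiling layout and texture names are assumptions.

```hlsl
Texture2D _RippleAtlas;
SamplerState sampler_RippleAtlas;

// Converts a slice index and an in-slice uv to an atlas uv (8 x 8 tile grid assumed).
float2 AtlasUV(float slice, float2 uv)
{
    float2 tile = float2(fmod(slice, 8.0), floor(slice / 8.0));
    return (tile + uv) / 8.0;
}

float4 SampleVolumeAtlas(float3 uvw)   // uvw in [0,1]^3
{
    float z      = uvw.z * 63.0;       // 64 slices along z
    float slice0 = floor(z);
    float slice1 = min(slice0 + 1.0, 63.0);

    // Bilinear sampling inside each slice, manual linear blend between slices.
    float4 a = _RippleAtlas.SampleLevel(sampler_RippleAtlas, AtlasUV(slice0, uvw.xy), 0);
    float4 b = _RippleAtlas.SampleLevel(sampler_RippleAtlas, AtlasUV(slice1, uvw.xy), 0);
    return lerp(a, b, z - slice0);
}
```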
I used 20 steps of ray-marching to sample the data from the volumetric texture atlas and then decode it. Density and brightness were accumulated during the ray-marching iterations in the fragment shader. Ray-marching started from each point on the animated mesh that was projected as a pixel on the screen. The ray-marching direction was rotated by comparing the normal direction of the T-posed reference mesh with that of the animated target mesh.
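A sketch of the 20-step accumulation loop in the fragment shader, reusing the SampleVolumeAtlas and RippleBrightness helpers sketched above; the accumulation model and the channel used for density are illustrative.

```hlsl
float4 MarchRipples(float3 startUVW, float3 rayDirUVW, float beatTime)
{
    const int steps = 20;
    float stepLen   = 1.0 / steps;

    float density = 0.0;
    float bright  = 0.0;

    [loop]
    for (int i = 0; i < steps; i++)
    {
        float3 uvw  = startUVW + rayDirUVW * (stepLen * i);
        float4 data = SampleVolumeAtlas(saturate(uvw));

        // Accumulate decoded density and the beat-driven ripple brightness.
        density += data.a * stepLen;
        bright  += RippleBrightness(data.rgb, beatTime) * stepLen;
    }
    return float4(bright, bright, bright, saturate(density));
}
```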
I implemented a volume deformation algorithm and made a volume deformer for the Disney short animation "Fetch". I applied the same logic in the shader of the blue ripple dancer.
The marching rays were bent by a rotation matrix that represented the directional difference between the static reference normal and the animated target normal.
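A sketch of building that rotation from the two normals and bending the march direction with it, using Rodrigues' rotation formula; the function names are illustrative.

```hlsl
float3x3 RotationBetween(float3 from, float3 to)
{
    from = normalize(from);
    to   = normalize(to);

    float3 axis = cross(from, to);
    float  c    = dot(from, to);    // cos(theta)
    float  s    = length(axis);     // sin(theta)

    if (s < 1e-6)                   // parallel normals: no rotation needed
        return float3x3(1,0,0, 0,1,0, 0,0,1);

    axis /= s;
    float3x3 K = float3x3(  0,      -axis.z,  axis.y,
                            axis.z,  0,      -axis.x,
                           -axis.y,  axis.x,  0);
    float3x3 I = float3x3(1,0,0, 0,1,0, 0,0,1);

    // Rodrigues: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return I + s * K + (1.0 - c) * mul(K, K);
}

float3 BendRayDirection(float3 rayDir, float3 referenceNormal, float3 animatedNormal)
{
    return mul(RotationBetween(referenceNormal, animatedNormal), rayDir);
}
```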
Volume deformation algorithm
@Disney "Fetch"
I used the Oculus Integration asset to simulate this experience in a VR environment.
I used ARKit to detect an anchor and place down-scaled dancers, running on an iPad Pro.