Pre-Integrated Skin Shader for Unity3D

Penner’s Work

I’ve been porting Eric Penner’s Pre-Integrated Skin Shading to Unity3D for a while, adjusting it a little to fit in with Unity3D and generally tweaking it to my liking. It’s probably strayed a fair bit from being optimised or physically accurate, but here’s the code for it and some sample shots.

In Action

(Clicky for bigger).
Example of skin shader in Unity3D.

The Textures

Download the BRDF Lookup texture it wants here.

I generated it using Jon Moore’s code, which he wrote after trying out the same shader technique. You can find the code here.

Set the texture’s Wrap Mode to Clamp inside Unity. If you’re using linear lighting, it’s important to also enable “Bypass sRGB Sampling” (set Texture Type to Advanced to expose the option), otherwise the skin will appear to be lit differently from other materials.
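
If you’d rather not rely on remembering those import settings, an editor script can enforce them. Here’s a minimal sketch in UnityScript, assuming the texture file is named brdf_lookup (a placeholder – match it to your own file) and that your Unity version exposes TextureImporter.linearTexture for the sRGB bypass; drop it in an Editor folder.

// Editor/BRDFLookupImporter.js
// Forces Clamp wrapping and linear (bypass sRGB) sampling on the BRDF lookup texture.
import UnityEditor;

class BRDFLookupImporter extends AssetPostprocessor {
	function OnPreprocessTexture () {
		// "brdf_lookup" is a placeholder filename - change it to match your texture.
		if ( assetPath.Contains ( "brdf_lookup" ) ) {
			var importer : TextureImporter = assetImporter as TextureImporter;
			importer.wrapMode = TextureWrapMode.Clamp;
			// Equivalent to ticking "Bypass sRGB Sampling" on an Advanced texture.
			importer.linearTexture = true;
		}
	}
}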

The Code

Shader "Custom/Skin Shader" {
	Properties {
		_Color ("Main Color", Color) = (1,1,1,1)
		_MainTex ("Diffuse (RGB)", 2D) = "white" {}
		_SpecularTex ("Specular (R) Gloss (G) SSS Mask (B)", 2D) = "yellow" {}
		_BumpMap ("Normal (Normal)", 2D) = "bump" {}
		// BRDF Lookup texture, light direction on x and curvature on y.
		_BRDFTex ("BRDF Lookup (RGB)", 2D) = "gray" {}
		// Curvature scale. Multiplier for the curvature - best to keep this very low - between 0.02 and 0.002.
		_CurvatureScale ("Curvature Scale", Float) = 0.005
		// Controller for fresnel specular mask. For skin, 0.028 if in linear mode, 0.2 for gamma mode.
		_Fresnel ("Fresnel Value", Float) = 0.2
		// Which mip-map to use when calculating curvature. Best to keep this between 1 and 2.
		_BumpBias ("Normal Map Blur Bias", Float) = 1.5
	}
 
	SubShader{
		Tags { "Queue" = "Geometry" "RenderType" = "Opaque" }
 
		CGPROGRAM
 
			#pragma surface surf SkinShader fullforwardshadows
			#pragma target 3.0
			// Bit complex for non-desktop.
			#pragma only_renderers d3d9 d3d11 opengl
			// Required for tex2Dlod function.
			#pragma glsl
 
			struct SurfaceOutputSkinShader {
				fixed3 Albedo;
				fixed3 Normal;
				fixed3 NormalBlur;
				fixed3 Emission;
				fixed3 Specular;
				fixed Alpha;
				float Curvature;
			};
 
			struct Input
			{
				float2 uv_MainTex;
				float3 worldPos;
				float3 worldNormal;
				INTERNAL_DATA
			};
 
			sampler2D _MainTex, _SpecularTex, _BumpMap, _BRDFTex;
			float _BumpBias, _CurvatureScale, _Fresnel;
 
			void surf (Input IN, inout SurfaceOutputSkinShader o)
			{
				float4 albedo = tex2D ( _MainTex, IN.uv_MainTex );
				o.Albedo = albedo.rgb;
 
				o.Normal = UnpackNormal ( tex2D ( _BumpMap, IN.uv_MainTex ) );
 
				o.Specular = tex2D ( _SpecularTex, IN.uv_MainTex ).rgb;
 
				// Calculate the curvature of the model dynamically.
				
				// Get a mip of the normal map to ignore any small details for regular shading.
				o.NormalBlur = UnpackNormal( tex2Dlod ( _BumpMap, float4 ( IN.uv_MainTex, 0.0, _BumpBias ) ) );
				// Transform it back into a world normal so we can get good derivatives from it.
				float3 worldNormal = WorldNormalVector( IN, o.NormalBlur );
				// Get the scale of the derivatives of the blurred world normal and the world position.
				// From these it's possible to work out the rate of change of the surface normal, i.e. its curvature.
				#if SHADER_API_D3D11
					// In DX11, ddx_fine should give nicer results.
					float deltaWorldNormal = length( abs(ddx_fine(worldNormal)) + abs(ddy_fine(worldNormal)) );
					float deltaWorldPosition = length( abs(ddx_fine(IN.worldPos)) + abs(ddy_fine(IN.worldPos)) );
				#else
					// Otherwise stick with ddx or dFdx, which can be replaced with fwidth.
					float deltaWorldNormal = length( fwidth( worldNormal ) );
					float deltaWorldPosition = length( fwidth ( IN.worldPos ) );
				#endif
				
				o.Curvature = ( deltaWorldNormal / deltaWorldPosition ) * _CurvatureScale;
			}
 
			inline fixed4 LightingSkinShader( SurfaceOutputSkinShader s, fixed3 lightDir, fixed3 viewDir, fixed atten )
			{
				viewDir = normalize( viewDir );
				lightDir = normalize( lightDir );
				s.Normal = normalize( s.Normal );
				s.NormalBlur = normalize( s.NormalBlur );
				
				float NdotL = dot( s.Normal, lightDir );
				float3 h = normalize( lightDir + viewDir );
 
				float specBase = saturate( dot( s.Normal, h ) );
 
				float fresnel = pow( 1.0 - dot( viewDir, h ), 5.0 );
				fresnel += _Fresnel * ( 1.0 - fresnel );
 
				float spec = pow( specBase, s.Specular.g * 128 ) * s.Specular.r * fresnel;
 
				float2 brdfUV;
				float NdotLBlur = dot( s.NormalBlur, lightDir );
				// Half-lambert lighting value based on blurred normals.
				brdfUV.x = NdotLBlur * 0.5 + 0.5;
				//Curvature amount. Multiplied by light's luminosity so brighter light = more scattering.
				brdfUV.y = s.Curvature * dot( _LightColor0.rgb, fixed3(0.22, 0.707, 0.071 ) );
				float3 brdf = tex2D( _BRDFTex, brdfUV ).rgb;
				
				float m = atten; // Multiplier for spec and brdf.
				#if !defined (SHADOWS_SCREEN) && !defined (SHADOWS_DEPTH) && !defined (SHADOWS_CUBE)
					// If shadows are off, we need to reduce the brightness
					// of the scattering on polys facing away from the light
					// as it won't get killed off by shadow value.
					// Same for the specular highlights.
					m *= saturate( ( (NdotLBlur * 0.5 + 0.5) * 2.0) * 2.0 - 1.0);
				#endif
 
				fixed4 c;
				c.rgb = (lerp(s.Albedo * saturate(NdotL) * atten, s.Albedo * brdf * m, s.Specular.b ) * _LightColor0.rgb + (spec * m * _LightColor0.rgb) ) * 2;
				c.a = s.Curvature; // Output the curvature to the frame alpha, just as a debug.
				return c;
			}
 
		ENDCG
	}
	FallBack "VertexLit"
}

Smoke Trails in Unity3D

Swirly smoke trails

I needed some swirly-ish smoke that lingered for a project, so I figured I’d stick the script up here.

It’s a little more advanced than this early video, but it should give you the idea of what it does.

It’s best to keep the line renderer’s colours fully opaque and use an alpha’d texture for the fade (base at the left, end at the right), otherwise the last segment of the smoke will snap-vanish when it gets dropped, which is a little jarring. I have code in here to handle that more gracefully when a texture is used. Be sure to set the texture’s wrap mode to Clamp, too.
Example of a smoke trail texture.

The Code

#pragma strict

@script RequireComponent ( LineRenderer)

private var line : LineRenderer;
private var tr : Transform;
private var positions : Vector3[];
private var directions : Vector3[];
private var i : int;
private var timeSinceUpdate : float = 0.0;
private var lineMaterial : Material;
private var lineSegment : float = 0.0;
private var currentNumberOfPoints : int = 2;
private var allPointsAdded : boolean = false;
var numberOfPoints : int = 10;
var updateSpeed : float = 0.25;
var riseSpeed : float = 0.25;
var spread : float = 0.2;

private var tempVec : Vector3;

function Start() {
	tr = this.transform;
	line = this.GetComponent ( LineRenderer );
	lineMaterial = line.material;

	lineSegment = 1.0 / numberOfPoints;

	positions = new Vector3[numberOfPoints];
	directions = new Vector3[numberOfPoints];

	line.SetVertexCount ( currentNumberOfPoints );

	for ( i = 0; i < currentNumberOfPoints; i++ ) {
		tempVec = getSmokeVec ();
		directions[i] = tempVec;
		positions[i] = tr.position;
		line.SetPosition ( i, positions[i] );
	}
}

function Update () {

	timeSinceUpdate += Time.deltaTime; // Update time.

	// If it's time to update the line...
	if ( timeSinceUpdate > updateSpeed ) {
		timeSinceUpdate -= updateSpeed;

		// Add points until the target number is reached.
		if ( !allPointsAdded ) {
			currentNumberOfPoints++;
			line.SetVertexCount ( currentNumberOfPoints );
			tempVec = getSmokeVec ();
			directions[0] = tempVec;
			positions[0] = tr.position;
			line.SetPosition ( 0, positions[0] );
		}

		if ( !allPointsAdded && ( currentNumberOfPoints == numberOfPoints ) ) {
			allPointsAdded = true;
		}

		// Make each point in the line take the position and direction of the one before it (effectively removing the last point from the line and adding a new one at transform position).
		for ( i = currentNumberOfPoints - 1; i > 0; i-- ) {
			tempVec = positions[i-1];
			positions[i] = tempVec;
			tempVec = directions[i-1];
			directions[i] = tempVec;
		}
		tempVec = getSmokeVec ();
		directions[0] = tempVec; // Remember and give 0th point a direction for when it gets pulled up the chain in the next line update.
	}

	// Update the line...
	for ( i = 1; i < currentNumberOfPoints; i++ ) {
		tempVec = positions[i];
		tempVec += directions[i] * Time.deltaTime;
		positions[i] = tempVec;

		line.SetPosition ( i, positions[i] );
	}
	positions[0] = tr.position; // 0th point is a special case, always follows the transform directly.
	line.SetPosition ( 0, tr.position );

	// If we're at the maximum number of points, tweak the offset so that the last line segment is "invisible" (i.e. off the top of the texture) when it disappears.
	// Makes the change less jarring and ensures the texture doesn't jump.
	if ( allPointsAdded ) {
		lineMaterial.mainTextureOffset.x = lineSegment * ( timeSinceUpdate / updateSpeed );
	}
}

// Give a random upwards vector.
function getSmokeVec () : Vector3 {
		var smokeVec : Vector3;
		smokeVec.x = Random.Range ( -1.0, 1.0 );
		smokeVec.y = Random.Range ( 0.0, 1.0 );
		smokeVec.z = Random.Range ( -1.0, 1.0 );
		smokeVec.Normalize ();
		smokeVec *= spread;
		smokeVec.y += riseSpeed;
		return smokeVec;
}

Translucent Shader for Unity3D

DICE Presentation

I’ve worked up a translucency shader for Unity3D based on DICE’s “Approximating Translucency for a Fast, Cheap and Convincing Subsurface Scattering Look” presentation.

In Action

Example of translucency shader in Unity3D.
Example of translucency shader in Unity3D.

The Code

Bear in mind that until I can figure out how to get light attenuation without the shadow value included in Surface Shaders, the translucency may look a little odd when used with shadow-casting lights.

This is essentially Unity’s regular Bumped Specular shader with translucency added in.

Shader "Custom/Translucent" {
	Properties {
		_MainTex ("Base (RGB)", 2D) = "white" {}
		_BumpMap ("Normal (Normal)", 2D) = "bump" {}
		_Color ("Main Color", Color) = (1,1,1,1)
		_SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
		_Shininess ("Shininess", Range (0.03, 1)) = 0.078125

		//_Thickness = Thickness texture (invert normals, bake AO).
		//_Power = "Sharpness" of translucent glow.
		//_Distortion = Subsurface distortion, shifts surface normal, effectively a refractive index.
		//_Scale = Multiplier for translucent glow - should be per-light, really.
		//_SubColor = Subsurface colour.
		_Thickness ("Thickness (R)", 2D) = "bump" {}
		_Power ("Subsurface Power", Float) = 1.0
		_Distortion ("Subsurface Distortion", Float) = 0.0
		_Scale ("Subsurface Scale", Float) = 0.5
		_SubColor ("Subsurface Color", Color) = (1.0, 1.0, 1.0, 1.0)
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 200

		CGPROGRAM
		#pragma surface surf Translucent
		#pragma exclude_renderers flash

		sampler2D _MainTex, _BumpMap, _Thickness;
		float _Scale, _Power, _Distortion;
		fixed4 _Color, _SubColor;
		half _Shininess;

		struct Input {
			float2 uv_MainTex;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
			o.Albedo = tex.rgb * _Color.rgb;
			o.Alpha = tex2D(_Thickness, IN.uv_MainTex).r;
			o.Gloss = tex.a;
			o.Specular = _Shininess;
			o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex));
		}

		inline fixed4 LightingTranslucent (SurfaceOutput s, fixed3 lightDir, fixed3 viewDir, fixed atten)
		{		
			// You can remove these two lines,
			// to save some instructions. They're just
			// here for visual fidelity.
			viewDir = normalize ( viewDir );
			lightDir = normalize ( lightDir );

			// Translucency.
			half3 transLightDir = lightDir + s.Normal * _Distortion;
			float transDot = pow ( max (0, dot ( viewDir, -transLightDir ) ), _Power ) * _Scale;
			fixed3 transLight = (atten * 2) * ( transDot ) * s.Alpha * _SubColor.rgb;
			fixed3 transAlbedo = s.Albedo * _LightColor0.rgb * transLight;

			// Regular BlinnPhong.
			half3 h = normalize (lightDir + viewDir);
			fixed diff = max (0, dot (s.Normal, lightDir));
			float nh = max (0, dot (s.Normal, h));
			float spec = pow (nh, s.Specular*128.0) * s.Gloss;
			fixed3 diffAlbedo = (s.Albedo * _LightColor0.rgb * diff + _LightColor0.rgb * _SpecColor.rgb * spec) * (atten * 2);

			// Add the two together.
			fixed4 c;
			c.rgb = diffAlbedo + transAlbedo;
			c.a = _LightColor0.a * _SpecColor.a * spec * atten;
			return c;
		}

		ENDCG
	}
	FallBack "Bumped Diffuse"
}

Unity3D xNormal Tangent Basis Plugin now x64 compatible

I’ve updated the original post with x64 and x86 links, so you can now use the plugin with x64 versions of xNormal. Yay!


modoToSMD Update

Now works with modo 601

Tim Grant at Puny Human Games got in touch to let me know my modo exporter script for Valve’s Source Engine was broken in newer versions of modo, where it didn’t pick up the textures applied to the model.

I’ve since updated it to work with modo 601.

Get it here!


Unity3D Tangent Basis Calculator Plugin for xNormal

Download for xNormal 3.19.2
Download for xNormal 3.18.10
Download for xNormal 3.18.9
Download for xNormal 3.18.8
Download for xNormal 3.18.3
Download for xNormal 3.18.1

What’s it do?

It ensures that the mesh tangents/binormals used to bake the normal map exactly match the ones that Unity generates for its meshes. This means less time spent adding support loops or other tricks to make normal maps render correctly, giving a no-fuss, higher-quality normal map for use in Unity3D.

Thanks to Aras Pranckevičius of the Unity3D dev team for releasing the source code to their tangent basis calculator.

Examples

Here are some examples of normal maps baked with the plugin, compared to various other methods of importing tangents and baking normal maps.
Normal map synced to Unity3D's tangent basis.

Issues

There is an issue with the way Unity imports its meshes that can break your tangents, related to the Smoothing Angle setting. If you see strange lighting seams on your mesh where there should be none, this is probably the cause.

To fix this automatically, try this:
SmoothingAngleFix.js (there’s also a rough sketch of the same idea after the manual steps below).

To fix this manually:

  • In the Inspector for the FBX file…
  • Set Normals to Calculate.
  • Set Smoothing Angle to 180.
  • Set Normals to Import.
  • Set Tangents to Calculate.
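
As mentioned above, here’s a rough sketch of what an automated fix might look like. This is not the linked SmoothingAngleFix.js – it’s an assumption-laden example that simply forces imported normals, calculated tangents and a 180° smoothing angle on every model it processes; drop it in an Editor folder and adapt as needed.

// Editor/SmoothingAngleFixSketch.js
// Rough sketch only - forces the import settings described in the manual steps above.
import UnityEditor;

class SmoothingAngleFixSketch extends AssetPostprocessor {
	function OnPreprocessModel () {
		var importer : ModelImporter = assetImporter as ModelImporter;
		importer.normalImportMode = ModelImporterTangentSpaceMode.Import;     // Use the normals from the FBX.
		importer.tangentImportMode = ModelImporterTangentSpaceMode.Calculate; // Let Unity build the tangents.
		importer.normalSmoothingAngle = 180.0;                                // Stop the smoothing angle splitting them.
	}
}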

Unity Terrain Tri-Planar Texturing

I’ve worked up some shaders for Unity’s Terrain system that allow for tri-planar texturing. They add normal/spec/gloss support, too.

Get it here!

Example

Here’s a shot of it in action with a procedurally generated terrain by Derek Traver.
Unity Terrain with Tri-Planar Texturing


Dynamic Ambient Lighting in Unity

Dynamic Ambient Lighting

I’ve been working on getting dynamic ambient lighting working within Unity. It’s based on Valve’s 6-colour pre-baked ambient lighting (detailed here, pg. 5, ch. 8.4.1), but it grabs the 6 colours dynamically.

There’s probably a lot more you could do to optimise it (e.g. use replacement shaders that do simpler lighting calculations when rendering the cubemap), but you can do that yourself as required.

I’d also advise against using it on anything other than your main character, as it’s likely too expensive to run on multiple objects.

Cubemap Camera Script

Create a new camera and turn off the GUI, Flare and Audio components.

Set up its Culling Mask so it doesn’t render non-essential things like particles or incidental detail. Also move your character to its own layer and set the camera not to render that layer (we don’t want bits of the character rendered into the cubemap).

Attach this JavaScript to it, set the target to be your character, and set the offset from your character’s position so that the camera sits at your character’s centre (e.g. a 2m tall character wants an offset of 0, 1, 0 so that the camera renders from the character’s centre).

@script ExecuteInEditMode

public var target : Transform;
public var cubemapSize : int = 128;
public var oneFacePerFrame : boolean = true;
public var offset : Vector3 = Vector3.zero;
private var cam : Camera;
private var rtex : RenderTexture;

function Start () {
	cam = camera;
	cam.enabled = false;
	// render all six faces at startup
	UpdateCubemap( 63 );
	transform.rotation = Quaternion.identity;
}

function LateUpdate () {
    if ( oneFacePerFrame ) {
        var faceToRender = Time.frameCount % 6;
        var faceMask = 1 << faceToRender;
        UpdateCubemap ( faceMask );
    } else {
        UpdateCubemap ( 63 ); // all six faces
    }
}

function UpdateCubemap ( faceMask : int ) {
	if ( !rtex ) {
		rtex = new RenderTexture ( cubemapSize, cubemapSize, 16 );
		rtex.isPowerOfTwo = true;
		rtex.isCubemap = true;
		rtex.useMipMap = true;
		rtex.hideFlags = HideFlags.HideAndDontSave;
		rtex.SetGlobalShaderProperty ( "_WorldCube" );
	}

	transform.position = target.position + offset;

	cam.RenderToCubemap ( rtex, faceMask );
}

function OnDisable () {
	DestroyImmediate ( rtex );
}
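
Following the optimisation note above, hooking up a replacement shader for the cubemap camera would look something like this. It’s a speculative sketch: “Hidden/AmbientCapture” is a placeholder name for a cut-down lighting shader you’d write yourself, and I haven’t verified that replacement shaders are honoured by RenderToCubemap, so treat it as a starting point rather than a drop-in.

// Hypothetical addition to the cubemap camera's Start() function above.
function Start () {
	cam = camera;
	cam.enabled = false;

	// "Hidden/AmbientCapture" is a placeholder - a simplified shader you provide.
	var captureShader : Shader = Shader.Find ( "Hidden/AmbientCapture" );
	if ( captureShader ) {
		// Swap in the simple shader for anything with a matching "RenderType" tag when this camera renders.
		cam.SetReplacementShader ( captureShader, "RenderType" );
	}

	// Render all six faces at startup, as before.
	UpdateCubemap ( 63 );
	transform.rotation = Quaternion.identity;
}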

Dynamic Ambient Shader

This is the shader that generates and applies the ambient lighting from the cubemap rendered by the camera above.

Create a new shader, paste this code into it and save it. We’ll integrate it into our shaders next.

Shader "DynamicAmbient" {
	Properties {
		_MainTex ("Diffuse (RGB) Alpha (A)", 2D) = "white" {}
		_BumpMap ("Normal (Normal)", 2D) = "bump" {}
	}

	SubShader{
		Pass {
			Name "DynamicAmbient"
			Tags {"LightMode" = "Always"}

			CGPROGRAM
				#pragma vertex vert
				#pragma fragment frag
				#pragma fragmentoption ARB_precision_hint_fastest

				#include "UnityCG.cginc"

				struct v2f
				{
					float4	pos : SV_POSITION;
					float2	uv : TEXCOORD0;
					float3	normal : TEXCOORD2;
					float3	tangent : TEXCOORD3;
					float3	binormal : TEXCOORD4;
				}; 

				v2f vert (appdata_tan v)
				{
					v2f o;
					o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
					o.uv = v.texcoord.xy;
					o.normal = mul(_Object2World, float4(v.normal, 0)).xyz;
					// Bring the tangent into world space too, so the whole basis is in one space.
					o.tangent = mul(_Object2World, float4(v.tangent.xyz, 0)).xyz;
					o.binormal = cross(o.normal, o.tangent) * v.tangent.w;
					return o;
				}

				sampler2D _MainTex;
				sampler2D _BumpMap;
				samplerCUBE _WorldCube;

				float4 frag(v2f i) : COLOR
				{
					fixed4 albedo = tex2D(_MainTex, i.uv);
					float3 normal = UnpackNormal(tex2D(_BumpMap, i.uv));

					float3 worldNormal = normalize((i.tangent * normal.x) + (i.binormal * normal.y) + (i.normal * normal.z));

					float3 nSquared = worldNormal * worldNormal;
					fixed3 linearColor;
					linearColor = nSquared.x * texCUBEbias(_WorldCube, float4(worldNormal.x, 0.00001, 0.00001, 999)).rgb; // For unknown reasons, giving an absolute vector ignores the mips....
					linearColor += nSquared.y * texCUBEbias(_WorldCube, float4(0.00001, worldNormal.y, 0.00001, 999)).rgb; // ...so unused components must have a tiny, non-zero value in.
					linearColor += nSquared.z * texCUBEbias(_WorldCube, float4(0.00001, 0.00001, worldNormal.z, 999)).rgb;

					float4 c;
					c.rgb = linearColor * albedo.rgb;
					c.a = albedo.a;
					return c;
				}
			ENDCG
		}
	}
	FallBack Off
}

Integrating the Ambient Shader into Surface Shaders

Now we can use the above shader wherever we want via the UsePass command, blending everything else on top.

The key here is to ensure your surface shader’s blend mode is set to additive (One One), otherwise it’ll just write straight over the lovely ambient light that’s already been applied.
So, before your surface shader’s CGPROGRAM block, add these lines:

UsePass "DynamicAmbient/DYNAMICAMBIENT"
Blend One One

We’ve also got to ensure that our surface shader doesn’t use the ambient light value that’s set in the editor, otherwise it’ll add the two together and defeat the purpose. So when you define the surface function to use, ensure you add the noambient argument, e.g.:

#pragma surface surf BlinnPhong noambient

Your new surface shader with dynamic ambient lighting should look something like this:

Shader "Bumped Specular" {
	Properties {
		_Color ("Main Color", Color) = (1,1,1,1)
		_SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
		_Shininess ("Shininess", Range (0.03, 1)) = 0.078125
		_MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
		_BumpMap ("Normalmap", 2D) = "bump" {}
	}
	SubShader {
		Tags { "RenderType"="Opaque" }
		LOD 400

	UsePass "DynamicAmbient/DYNAMICAMBIENT"
	Blend One One
	CGPROGRAM
		#pragma surface surf BlinnPhong noambient

		sampler2D _MainTex;
		sampler2D _BumpMap;
		fixed4 _Color;
		half _Shininess;

		struct Input {
			float2 uv_MainTex;
			float2 uv_BumpMap;
		};

		void surf (Input IN, inout SurfaceOutput o) {
			fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
			o.Albedo = tex.rgb * _Color.rgb;
			o.Gloss = tex.a;
			o.Alpha = tex.a * _Color.a;
			o.Specular = _Shininess;
			o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
		}
	ENDCG
	}
	FallBack "Specular"
}

Now apply your new shader to your character’s material and we’re done 🙂


Team Fortress 2 & Anisotropic Highlight Unity Shaders

Just a note that I’ve put up a couple of Unity Surface Shaders onto the Unify Community Wiki.

Team Fortress 2

This replicates the shading style that Valve’s Team Fortress 2 uses, with a toon ramp and rim lighting.

Code and usage instructions here.

Gordon Freeman using the Unity Team Fortress 2 shader.

Anisotropic Highlights

This replicates the anisotropic highlights you find on surfaces like brushed metal and long hair.

Code and usage instructions here.

Brushed metal effect using Unity Anisotropic Highlight shader.


Unity Skeletal Ragdoll / Jiggle Bones Tutorial

This is a tutorial on getting bones that are part of an animated skeleton to be controlled by Unity’s physics system rather than animation, i.e. ragdoll or jiggle bones. It took me a while to figure out the specifics of getting it working.

This method will also work with 3DS Max Biped Twist bones, which rely on the animation being fully baked.

Click on any of the images for a larger version.

Setting Up In 3DS Max

Set up and skin your rig as you usually would. Include any bones you would like to be jiggle bones and attach them into the skeleton’s hierarchy as normal.

In my case, I have some dangling belt attachments and my character’s ponytail intended to be jiggle bones.

The Key to Jiggle Bones

The important thing to note is that, while all of my regular bones are keyframed into the T-Pose, the jiggle bones do not have keyframes. They are positioned correctly but they are not keyframed into position.

It is imperative that the jiggle bones remain unkeyframed throughout your T-Pose and all of your animations. One keyframe anywhere in there and the animation system takes control over the physics and you’ve just got regular skeletal animation again.

Exporting the T-Pose

When exporting the t-pose, select your mesh(es) and your bones, including your twist bones and including the bones that you want to be jiggle bones.

Go to Export > Export Selected and choose FBX as the format.

Ensure that you do not export animation at this point – we only want the model, its bones and the skinning information. Choosing animation export here would give keyframed positions to our jiggle bones, which stops them being jiggle bones.

Exporting Animations

When using Biped Twist bones, they are not keyframed directly and so the animation has to be fully baked out.

The problem arises here, because baking the animation also means that any jiggle bones get keyframes assigned. Disaster!

What do we do? Simple – don’t export the jiggle bones at all when exporting animation.

This is where Export Selected comes into play again. We simply select everything we did in the T-Pose export except the jiggle bones. Unity will find that the jiggle bones aren’t in the exported animation and won’t attempt to apply any animation to them.

To this end, I find it easier to keep the jiggle bones in their own layer, which I keep frozen. That stops me accidentally moving, selecting or keyframing them.

So, once you have selected everything you want exported (except the jiggle bones, of course), go once again to Export > Export Selected and choose FBX as the format.

This time the filename follows the standard Unity animation naming convention of [modelName]@[animationName].fbx.

Ensure you tick the box marked Bake, or your twist bones won’t be exported.

Setting Up in Unity

Now that the jiggle bones are in and unkeyframed, hooking them up to the physics engine is the same process as setting up any ragdolled joint, but I’ll run through it anyway.

The Parent Bone

The parent bone should be whichever bone your jiggle bone is attached to. First up, drag your model into the scene then find and select the parent bone in the project hierarchy.

Apply a rigidbody to your parent bone. Set it to be Kinematic and uncheck Use Gravity.

Apply a collider to your parent bone and set up its scale and alignment so that it fits the mass of the vertices the bone has influence over.

The Jiggle Bone

Jump down the hierarchy in the project pane and select the bone you want to be a jiggle bone.

Again, apply a rigidbody, but this time ensure Kinematic is unchecked and Use Gravity is checked. Set the mass and drag to fit the material and size of whatever your jiggle bone is.

Apply a collider and set its scale and alignment to fit your jiggle bone.

Now apply a physics joint. Any kind will work, but in this instance I’m using a Character Joint. Set the Connected Body to be the Rigidbody component you applied to the parent bone earlier. Set up the constraints of your physics joint to be whatever you like (there’s a sketch of doing this setup from script below).
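
If you’d rather wire this up from code than through the Inspector (handy if you have a lot of jiggle bones), the component setup boils down to something like the following. This is a rough sketch rather than part of the tutorial – the collider types, mass and drag values are placeholders you’d tune for your own character.

// Rough sketch of the same setup done from script.
// Call SetUpJiggleBone once per jiggle bone, e.g. from Start() or an editor script.
function SetUpJiggleBone ( parentBone : Transform, jiggleBone : Transform ) {
	// Parent bone: kinematic rigidbody + collider, driven by the animation.
	var parentBody : Rigidbody = parentBone.gameObject.AddComponent.<Rigidbody> ();
	parentBody.isKinematic = true;
	parentBody.useGravity = false;
	parentBone.gameObject.AddComponent.<CapsuleCollider> ();

	// Jiggle bone: non-kinematic rigidbody + collider, driven by physics.
	var jiggleBody : Rigidbody = jiggleBone.gameObject.AddComponent.<Rigidbody> ();
	jiggleBody.isKinematic = false;
	jiggleBody.useGravity = true;
	jiggleBody.mass = 0.2; // Placeholder values - tune to your jiggle bone's material and size.
	jiggleBody.drag = 0.5;
	jiggleBone.gameObject.AddComponent.<SphereCollider> ();

	// Joint connecting the two; constraints are left at their defaults here.
	var joint : CharacterJoint = jiggleBone.gameObject.AddComponent.<CharacterJoint> ();
	joint.connectedBody = parentBody;
}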

Other Bones

In some cases, you will want other parts of your model to collide with your jiggle bone. In my case, I want both of my character’s thighs to be able to knock the belt attachments around as she runs.

For each of these bones, repeat the process we used for the parent bone: attach a kinematic rigidbody and a collider.

Final Touches

So, now we should have rigidbodied bones that succumb to gravity and collide with other bones as they move via skeletal animation.

But you’ll notice that as your character moves around the world, the jiggle bones aren’t affected by momentum; they just hang there.

There are two possible solutions to this:

1) In your model’s Animation component in the Inspector, there’s a tickbox for “Animate Physics”, which may work for you, but I found it gives very jittery, erratic results. So, as a solution…

2) Use a little script to get the movement of the parent bone and apply it as a force to the rigidbody. Create a new JavaScript, copy this code into it, then apply it to each of your jiggle bones:

#pragma strict

private var thisParent : Transform;
private var thisRigidbody : Rigidbody;

private var parentPosLastFrame : Vector3 = Vector3.zero;

function Awake () {
	// Cache the parent bone's transform and this bone's rigidbody.
	thisParent = transform.parent;
	thisRigidbody = transform.GetComponent.<Rigidbody> ();
}

function Update () {
	// Push the jiggle bone against the parent's movement since last frame, so it lags behind and fakes momentum.
	thisRigidbody.AddForce ( ( parentPosLastFrame - thisParent.position ) * 100 );
	parentPosLastFrame = thisParent.position;
}

Finished!

Thanks for reading, and I hope some of you found this helpful.
