Dan's Blog

Musings on code, and other assorted sundries

Reflections in Unity

Update 2017-08-11: It seems the built example in the post was no longer working, as the Unity Web Player isn’t really a thing anymore. It’s been recompiled as a WebGL build using Unity 2017.1.

A while ago, while working on (yet another) prototype, I had a need to project decals onto my terrain (and objects on that terrain) that were based on geometry, not on a static texture. My specific use case was that I wanted the player to be able to draw a road for placement on the terrain, but to show a placement guide instead of the actual road. The guide would exactly match the shape of the road to be placed.

This meant that I needed to be able to project, essentially, a triangle strip onto my terrain. I searched up and down trying to find out how to accomplish this, but like most things in game development, there was no definitive answer (or more correctly, I didn’t understand what to search for).

I found 3rd party Unity assets that achieved a similar effect (by creating a high vertex count model that hugs the terrain), but this method was flawed in that the resolution was restricted by the vertex count of the decal model. What I needed was a true projection, but of a piece of geometry.

As it turns out, the answer to this is to stop thinking of decals altogether, and start thinking of reflections. When we think of reflections in graphics, we tend to think of water, or metallic surfaces. Reflections have many more uses than just literal reflections though. For my particular use case, using reflections as decals was the answer to my problem.

When starting out, this can feel like a complicated topic. In a way, it is. But it is also rather simple once you get a grasp of what a reflection really is (so far as graphics are concerned). A reflection is nothing more than an image rendered from a camera in your scene, projected onto surfaces in your scene. Unity makes this fairly easy by providing (most of) the components necessary for this.

The Setup

We need the following things to accomplish a reflection:

  • A Projector component
  • A Camera component (that is not your main camera)
  • A RenderTexture to render into
  • A projection shader
  • Something to project
  • Something to project onto

The Projector component and the Camera component don’t have to be on the same GameObject, but in my particular use case, it helped that they were. Here are the settings for the components:


You’ll notice that the camera and projector share similar settings, both being orthographic. This is not necessary; it just happened to be the setting that I required. In reality, you can use any combination of orthographic and perspective. Which combination you use depends on the type of effect you are trying to achieve.

The mask for the Camera is set to a layer called “Projection”. Any object we wish to reflect should be on this layer. In turn, the Projector ignores the “Projection” and “IgnoreProjection” layers. This prevents us from projecting onto the very objects we are reflecting, and also allows us to mark objects that should never receive a reflection for any reason.

The camera is set to render to a RenderTexture. In case you are not familiar with them yet, this is nothing more than a texture that can be rendered into from a Camera. Think of it as a blank canvas that a Camera will paint on. The background is set to a solid color. While this is not completely necessary (due to the culling mask), I think it helps to make it clearer what is going to be reflected.

The Projector uses a Material creatively called “ProjectionMaterial”. This material is set up to use a custom projection shader.

The Shader

Unity comes with two projection shaders as part of the standard assets (Light and Multiply). These are fine for circumstances where you want to emulate shadows, or want to have a glowing effect on anything you project; but if you just want to reflect an image with no bells and whistles, there is nothing built-in. Fortunately, the shader to do this is fairly simple:

Shader "Projector/Texture" {
	Properties {
		_ProjectedTexture ("Cookie", 2D) = "" {}
	}
	Subshader {
		Tags {"Queue"="Transparent"}
		Pass {
			ZWrite Off
			ColorMask RGBA
			// NOTE: Edit this to adjust how the projection appears on the
			//       surface. This setting makes it appear (more or less)
			//       as the object being projected itself is rendered on
			//       screen. Blend One One would make it appear as if it is
			//       glowing on the surface. Play with this as you see fit.
			Blend SrcAlpha OneMinusSrcAlpha
			Offset -1, -1

			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#include "UnityCG.cginc"

			// Set automatically by the Projector component.
			float4x4  _Projector;
			sampler2D _ProjectedTexture;

			struct v2f {
				float4 uvProjected : TEXCOORD0;
				float4 pos : SV_POSITION;
			};

			v2f vert (float4 vertex : POSITION)
			{
				v2f o;
				o.pos = mul (UNITY_MATRIX_MVP, vertex);
				o.uvProjected = mul (_Projector, vertex);
				return o;
			}

			fixed4 frag (v2f i) : SV_Target
			{
				float xnormalized = i.uvProjected.x / i.uvProjected.w;
				float ynormalized = i.uvProjected.y / i.uvProjected.w;

				// Discard anything behind the projector, or outside
				// of the bounds of the texture.
				if (i.uvProjected.w < 0 || xnormalized < 0 || xnormalized > 1 || ynormalized < 0 || ynormalized > 1) {
					return float4(0,0,0,0);
				} else {
					fixed4 texS = tex2Dproj (_ProjectedTexture, UNITY_PROJ_COORD(i.uvProjected));
					return texS;
				}
			}
			ENDCG
		}
	}
}
Let’s break this down.

_Projector is a special matrix that is passed to our shader from the Projector component. It is used to scale the vertex passed to our vert function so that the texture is projected correctly onto surfaces. The result is a float4 containing UV coordinates for the texture sampling, as well as a special “W” component.

The W component is important, as it serves two purposes:

  1. It represents a dot product between the direction the projector is facing and a vector pointing from the position of the projector to the vertex passed to our vert() function. In other words, it allows us to determine whether the vertex is in front of, or behind, the projector.
  2. It represents a “scaling factor”.

The dot product concept can be a bit confusing to comprehend without an image:

Projection Dot Product Example

In this image, we have two surfaces; one in front of the projector and one behind it. We have three vectors; one is the direction the projector is facing, and the other two point to positions on the two surfaces. If you know your dot products, you’ll see how the dot product would be positive for any point on the surface in front of the projector, and negative for any point behind. If you do not know what a dot product is, here is an interactive example to explain it.
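The front-or-behind test can be sketched in a few lines of Python. This is just the dot product from the image above, not Unity code; the function and vector names are made up for illustration:

```python
# Minimal sketch of the "in front or behind" test the W component encodes.
# forward: the direction the projector is facing.
# to_point: a vector from the projector's position to the vertex.
def side(forward, to_point):
    dot = sum(f * p for f, p in zip(forward, to_point))
    return "in front" if dot > 0 else "behind"

forward = (0, 0, 1)               # projector looking down +z
print(side(forward, (2, 0, 5)))   # in front
print(side(forward, (2, 0, -5)))  # behind
```

Note that only the sign matters for this test; the off-axis components of the point cancel out of the result.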

The scaling factor when the projector is orthographic will always be 1 (or extremely close to it); the texture being projected will always be a 1:1 representation regardless of distance to the projector. When the projector is perspective, however, this value will change based on how far away the object being projected onto is from the projector (or more correctly, how far away the particular vertex we are processing is). The x and y components of the uv used for mapping the texture coordinate are divided by this number to scale them.
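A small numeric sketch makes the orthographic/perspective difference concrete. The matrices below are hypothetical stand-ins with the same structure as a projector matrix, not the exact ones Unity builds; each maps a point into projector space, where the final UV is (x/w, y/w):

```python
# Apply a 4x4 row-major matrix to a 3D point and perform the divide by w.
def project(m, point):
    x, y, z = point
    p = (x, y, z, 1.0)
    row = lambda r: sum(a * b for a, b in zip(r, p))
    return row(m[0]) / row(m[3]), row(m[1]) / row(m[3])

# Orthographic projector covering x and y in [-2, 2]: w is constant 1,
# so the UV does not depend on distance from the projector.
ortho = [
    (0.25, 0.0,  0.0, 0.5),  # u = x/4 + 0.5
    (0.0,  0.25, 0.0, 0.5),  # v = y/4 + 0.5
    (0.0,  0.0,  0.0, 0.0),
    (0.0,  0.0,  0.0, 1.0),  # w = 1
]

# Perspective projector (90-degree frustum looking down +z): w is the
# distance along the projector's forward axis, so the same x/y offset
# lands closer to the center of the texture as the surface gets farther away.
persp = [
    (0.5, 0.0, 0.5, 0.0),    # u = (x/2 + z/2) / z = x/(2z) + 0.5
    (0.0, 0.5, 0.5, 0.0),
    (0.0, 0.0, 0.0, 0.0),
    (0.0, 0.0, 1.0, 0.0),    # w = z
]

print(project(ortho, (1, 1, 1)))   # (0.75, 0.75)
print(project(ortho, (1, 1, 10)))  # (0.75, 0.75) -- unchanged with depth
print(project(persp, (1, 1, 1)))   # (1.0, 1.0)
print(project(persp, (1, 1, 10)))  # (0.55, 0.55) -- scaled toward center
```

With the orthographic matrix the same x/y offset always lands on the same UV; with the perspective matrix the divide by w shrinks it as depth grows, which is exactly the "scaling factor" behavior described above.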

In the frag() function, we make use of the W component to ensure that we only project the texture in the direction the projector is facing. If we did not do this, we would end up projecting onto objects that are behind us as well as in front of us.

We also use it to determine if we are sampling outside of the texture. What happens when we sample outside of the bounds of the texture is that the pixels at the very edges of the texture are repeated into infinity across the surface we are projecting onto. The built-in projection shaders have a requirement that the 1px border of the texture be black, white, or transparent (depending on the blend mode) to prevent this from occurring. This is not possible with a texture being captured from a camera though; so our shader is designed to prevent this by ensuring the scaled x and y values are in the 0..1 range. If they are outside of it, it means that we are trying to sample outside of the bounds of the texture.

With this information, we test to ensure that we actually want to continue sampling our texture. If we do not pass these tests, we will abort by returning a “blank pixel”; one whose alpha value is set to 0.

If we pass the above tests though, we then sample our texture using tex2Dproj(), with the UNITY_PROJ_COORD() macro. tex2Dproj() samples our texture by first dividing x and y by w, and then using the coordinates to sample the texture. It is functionally equivalent to doing tex2D(_ProjectedTexture, i.uvProjected.xy / i.uvProjected.w). It is the same thing we are doing to test that we are within the bounds of the texture.
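The whole frag() flow can be sketched on the CPU. This is a Python paraphrase of the shader above, not Unity code; sample_texture stands in for tex2Dproj()/tex2D() and just returns whatever color you give it:

```python
# CPU sketch of frag(): perspective-divide the projected coordinate,
# reject anything behind the projector or outside the 0..1 texture
# window, otherwise sample the texture.
def frag(uv_projected, sample_texture):
    x, y, _, w = uv_projected
    if w < 0:                      # vertex is behind the projector
        return (0, 0, 0, 0)
    xn, yn = x / w, y / w          # the divide tex2Dproj does internally
    if not (0 <= xn <= 1 and 0 <= yn <= 1):
        return (0, 0, 0, 0)        # outside the texture: blank pixel
    return sample_texture(xn, yn)

red = lambda u, v: (1, 0, 0, 1)
print(frag((0.5, 0.5, 0, 1), red))    # (1, 0, 0, 1) -- inside, sampled
print(frag((2.0, 0.5, 0, 1), red))    # (0, 0, 0, 0) -- outside bounds
print(frag((0.5, 0.5, 0, -1), red))   # (0, 0, 0, 0) -- behind projector
```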

UNITY_PROJ_COORD() is a special macro built into Unity. Depending on the platform being targeted, it is defined as one of the following:

#define UNITY_PROJ_COORD(a) a
#define UNITY_PROJ_COORD(a) (a).xyw

It exists solely as a workaround for platforms where tex2Dproj() doesn’t accept a float4 properly. I am not familiar with the details of this, so it is safest to just use the macro. But, it is not strictly necessary if you do not care about compatibility.

Finally, we return a float4 representing the color sampled from _ProjectedTexture.

The Loose Ends

You need one or more objects to reflect. This is simple; just point your render camera at them, and make sure they are on the Projection layer. This really can be anything from a single object to your entire scene. You also need something to project onto. This is the easiest part of all: just add one or more objects on any layer BUT the Projection or IgnoreProjection layers, and point your Projector at them!

The Result

Here is a demo showing the final result, which also provides a way to see how changing certain settings can change the reflection:

Click here to acquire the sources for this demo project.