Category: devlog

Pick Up and Throw – Materials, Scale and Selection Updates

After this and this recent post, the player PickUpandThrow mechanic has now been updated to address several issues, including:

  • Collisions when ‘holding’ an interactable object
  • Better throwing/firing mechanics
  • Selection outline for interactables
  • Swapping materials

There are a few other updates, especially related to destructible objects, that I’ll cover elsewhere.

Pick up and Throw updates


One of the main problems I wanted to address in this update was how to change the PBR material of the interactable object so that, when it is held, the player can ‘see through’ it. This was no problem when using just the base color option of the attached material, but when using an Albedo texture, animating the alpha was more problematic.

My first effort was to create a custom shader using the Unity ShaderGraph – a very neat tool that I’ll no doubt come back to at a later point. While creating a basic PBR setup with Albedo, Smoothness, Metallic, AO etc. channels was straightforward, there seems to be an issue with the way in which SG deals with normal map calculation (see this Unity forum post). So while I was able to create an alpha slider for the albedo channel and expose that parameter in the Inspector, the normal mapping was a bit weird.

That led me into Blender at first, experimenting with UV unwrapping and exporting a project-specific cube/sphere (which in turn led to this post), but I was still stuck with the normals issue.

The solution (at the mo) is Unity’s built-in URP Lit shader, which includes Albedo, Metallic, Normal and AO channels that render nicely – but accessing an alpha solution was a bit of a pain. In the end I opted for a second (Lit shader) material slot in the Object class that was a copy of the original, apart from the surface type being set to Transparent instead of Opaque. Transparent materials have easy access to the alpha channel (in the same way as using just the color channel), which in turn can be run through an Animator component in order to achieve the fade up/down effect I need.

In the Object class, the relevant lines look like this:


    //Set the 2 materials in the Inspector:
    public Material defaultMaterial;
    public Material alphaMaterial;

            //if the player is holding this object:
            if (pickUpAndThrow.isHoldingObject)
            {
                if (pickUpAndThrow.hitObjectIsAtHand)
                {
                    //switch off outline:
                    outline.enabled = false;

                    //change material surface type to transparent:
                    renderer.material = alphaMaterial;

                    //fade down alpha animation:
                    animator.SetBool("isHolding", true);
                }

                if (!pickUpAndThrow.hitObjectIsAtHand)
                {
                    //switch on outline:
                    outline.enabled = true;

                    //change surface back to opaque:
                    renderer.material = defaultMaterial;

                    //fade up alpha animation:
                    animator.SetBool("isHolding", false);
                }
            }

I’m not posting the whole code here as I’m only interested in what has changed compared to the script posted in Change Color On Raycast Hit.

The main changes are using the second (transparent) material and accessing the animator component that simply fades the alpha value of the material.

There may be issues in the future as transparent materials are expensive, and I’m not altogether sure whether I’m wasting resources by duplicating shaders here, but as this effect is applied to one object at a time, based on my current tests it does the job for now.
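
As an aside, the fade could also be driven directly from code rather than an Animator. This is just a sketch of that alternative – the project uses the Animator approach above, and the class and field names here are hypothetical:

```csharp
using System.Collections;
using UnityEngine;

public class MaterialFade : MonoBehaviour
{
    //hypothetical sketch – fades the alpha of the current (transparent)
    //material in place of the Animator-driven approach described above
    public float fadeTime = 0.5f;

    public IEnumerator FadeAlpha(Renderer renderer, float targetAlpha)
    {
        Color color = renderer.material.color;
        float startAlpha = color.a;

        for (float t = 0; t < fadeTime; t += Time.deltaTime)
        {
            color.a = Mathf.Lerp(startAlpha, targetAlpha, t / fadeTime);
            renderer.material.color = color;
            yield return null;
        }
    }
}
```

The Animator route keeps the timing editable in the editor, whereas a coroutine like this keeps everything in one script – either works for a single held object.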

Selection Highlighting

I wanted to add an outline to selectable objects to indicate that they can be picked up. Fortunately this solution was an easy (if lazy) one. After wrestling with the SG once again, and coming to the conclusion that while fresnel was a great option for spheres, it wasn’t so effective with cubes – what with it being fresnel!!!! – I found this FREE asset that was easy to integrate, access and control from the Object class.

While I do try my best not to rely on asset store resources this plug in is great…and I’m using it!!!!

One Thing At A Time!

Another easy fix (that should also save a bit of performance…until I need multi-tasking interactions!!!) was to limit the raycast call from the player so there are no other raycast trigger events while holding an object.

This fix is in the PlayerManager class, in Update():

        //Only perform a raycast if NOT
        //holding an interactable object
        if (!pickUpandThrow.hitObjectIsAtHand)
        { rayCastHit = rayCast.DoRaycast(cam, scanRange); }
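
For context, the DoRaycast() helper referenced here isn’t shown in this post. A minimal sketch of what such a helper might look like – the signature is assumed from the call site, so treat the details as illustrative:

```csharp
using UnityEngine;

public class RayCast : MonoBehaviour
{
    //assumed helper – returns the Transform hit by a ray fired from the
    //centre of the camera view, or null if nothing is within scanRange
    public Transform DoRaycast(Camera cam, float scanRange)
    {
        Ray ray = cam.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        if (Physics.Raycast(ray, out RaycastHit hit, scanRange))
            return hit.transform;

        return null;
    }
}
```

Note that Physics.Raycast ignores the built-in ‘Ignore Raycast’ layer by default, which is why setting the player hand to that layer (as described later) works without any extra layer mask.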

Collisions and Physics

In the final section of this post I want to highlight the physics interactions, which include collisions, throwing, pick up move/rotate and moving the player hand based on object scale.

As an additional note, this Unity Answers post was a help in determining the rotation/position maths. The main methods to get a grip of are the MovePosition() and MoveRotation() calculations that eliminate the isKinematic calls I was making before.

Fortunately for me I’ve been employing good practice, and the comments in the PickUpandThrow class (which manages all related interactions/mechanics) explain the whole process in a much more concise and demonstrable manner than I would manage using this text…

    private void FixedUpdate()
    {
        if (input.isInteracting && !isHoldingObject)
        {
            //we can pick up the object if we can see it:
            if (playerManager.rayCastHit != null
                && playerManager.rayCastHit.CompareTag("Interactable"))
            {
                //set the RB of target object received from the raycast
                //in PlayerManager:
                hitObjectRigidBody =
                    playerManager.rayCastHit.GetComponent<Rigidbody>(); //(assumed – line truncated in original)

                //move the player hand forward by size of
                //object * offset
                hitObjectScale = hitObjectRigidBody.transform.localScale;

                playerHand.transform.localPosition = new Vector3(
                    0, 0, hitObjectScale.z * playerHandOffset);

                //toggle interact button
                input.isInteracting = false;

                //we are now holding the object
                isHoldingObject = true;
            }
        }

        //Pick up the object
        if (isHoldingObject && !hitObjectIsAtHand)
        {
            //move towards player hand
            hitObjectRigidBody.MovePosition(Vector3.MoveTowards(
                hitObjectRigidBody.position, playerHand.position,
                pickUpSpeed * Time.fixedDeltaTime)); //(assumed – line truncated in original)

            //rotate towards zero:
            Vector3 targetDirection =
                (hitObjectRigidBody.position - playerHand.position); //(assumed target)

            Quaternion targetRotation = Quaternion.LookRotation(targetDirection);

            hitObjectRigidBody.MoveRotation(Quaternion.Slerp
                (hitObjectRigidBody.transform.rotation, targetRotation,
                pickUpSpeed * Time.fixedDeltaTime));

            //turn off gravity
            hitObjectRigidBody.useGravity = false;
        }

        if (isHoldingObject && hitObjectIsAtHand)
        {
            //if object has reached player hand disable its movement
            //by 'teleporting' it into position:
            hitObjectRigidBody.MovePosition(playerHand.position); //(assumed – line truncated in original)

            //if the object is != rotation zero
            //slow rotation by lerping the angular velocity
            //to zero:
            if (hitObjectRigidBody.rotation != Quaternion.Euler(Vector3.zero))
                hitObjectRigidBody.angularVelocity = Vector3.Lerp
                    (hitObjectRigidBody.angularVelocity, Vector3.zero,
                    pickUpSpeed * Time.fixedDeltaTime);
        }

        if (input.isInteracting && isHoldingObject)
        {
            //turn gravity back on:
            hitObjectRigidBody.useGravity = true;

            //stop objects rotation by setting
            //it to zero to make sure it fires forward
            hitObjectRigidBody.rotation = Quaternion.Euler(Vector3.zero);

            //set direction based on camera transform:
            Vector3 pushDirection = cam.transform.forward;

            //add upwards multiplier only to Y transform
            pushDirection.y *= upMultiplier;

            //Fire the object away
            hitObjectRigidBody.AddForce(
                pushDirection * throwSpeed, ForceMode.Impulse);

            //reset hand position
            playerHand.transform.localPosition = Vector3.zero; //(assumed default)

            //we are no longer holding the object
            isHoldingObject = false;

            //toggle interact button
            input.isInteracting = false;
        }
    }

Blender to Unity 01 – Scale, UVs and Import

This post forms part of a likely ongoing series of notes related to using Blender with Unity.

Scale and Dimensions

Any initial Blender scene will present with a unit-scale cube that has dimensions of 2x2x2. To get set with a unit cube that can later be exported and imported into Unity without scaling concerns, any scaling or rotation operations MUST BE APPLIED (Ctrl + A → Apply):

This process is important for maintaining consistency across platforms.

If 1 Blender Unit = 1 Unity Unit = 1 metre, issues of weird scaling and sizes when switching platforms can be minimised. This concept is also valuable when building assets in Blender that need to translate successfully to real-world values/dimensions in Unity.

UV Problems

When using Blender to unwrap objects – a better approach when building more complex assets – make sure to recalculate the mesh normals so that the Unity import has no ‘missing’ faces:

Recalculate normals on unwrapped mesh

Using .blend files in Unity:

.blend files can be imported directly by dragging and dropping into Unity.

In a blender scene with multiple meshes/objects, the whole collection can be dragged into the Unity scene or each individual mesh can be selected and dragged out – but will have a rotation on x of -90. 

This issue can be solved by using this approach from this forum post:

’In object mode, set the X rotation of your model to -90. Press Ctrl + A and apply rotation, X rotation appears to be 0 now in Blender, set it to 90 and save/export it. You’ll see that it will appear both correct and at 0 rotations now.’

This approach has the advantage of being able to use an ‘open’ blender file as opposed to a ‘closed’ FBX file:

Using .fbx Files

.fbx files can be easily exported from Blender and used in Unity. Once the Blender model is complete – unwrapped etc. – select the mesh/meshes/objects/collections needed in the model for Unity and use File–> Export–>FBX:

Use the Presets shown on the right – make sure to check Apply Unit and Apply Transform

The exported FBX file can now be imported to Unity. Select the mesh and look at the following settings in the Inspector:

  • In the ‘Model’ tab all ‘Scene’ checks can be turned off except ‘Convert Units’
  • Use ‘Swap UVs’ if the blender model has its own UV map(s).
  • Click ‘Apply’ to reimport the mesh with these settings.

At this time ‘Material Creation Mode’ drop down in the ‘Materials’ tab is set to ‘None’.

Materials and Textures will be exported from Blender as image files.

This process will be covered at a later date.

Change Color on Raycast Hit

In the process of developing the RB controller, one of the functions I wanted to incorporate was highlighting objects when hit by the raycast. This has been on the dev list for some time, so this basic prototype is well overdue.

The starting point for this functionality is this forum post:

This functionality has been highlighted in a previous post about the Pick Up and Throw functionality, but one bug in the setup was related to the colors ‘sticking’ and not reverting to their original state:

Tangled up in blue – previously selected objects not reverting to their original color

Turns out the reason for this unwanted behaviour was attaching the function to the main player manager and trying to read and store color values on a case by case basis.

By moving the script on to the object itself and allowing that to check for raycast hits, the ‘sticky color’ problem has been resolved while simultaneously streamlining my logic:

  • Each ‘interactable’ object can now have bespoke color values even though they all share the same class
  • No need to compare tags as the interaction is calculated by the object

Testing color change on hit using multiple objects and colors

Here’s the related code snippet in the object class – only called if the object is hit:

    public Color defaultColor;
    public Color highlightColor;
    public Transform previousObject = null;
    new Renderer renderer;

    //Get the renderer component on this object:
    renderer = GetComponent<Renderer>();

        //is raycast hitting THIS object?
        if (playerManager.rayCastHit == this.transform)
        {
            //Check if this hit is the same as the stored hit:
            if (previousObject != this.transform)
            {
                //store hit object:
                previousObject = this.transform;

                //change hit object color:
                renderer.material.color = highlightColor;
            }
        }
        //if no raycast hit:
        else
        {
            //Reset hit object material
            renderer.material.color = defaultColor;

            //Clear reference
            previousObject = null;
        }

This basic functionality can now be extended to use a better highlighting effect by utilising Unity’s Shader Graph to produce an edge glow effect. This effect will target the material (not just the color) and offer better representation of selected objects.

I will also post about changing the Interactable Object class to a Scriptable Object that should allow these objects to be instantiated with custom data including materials/textures and bespoke UI messages/information.

Rigidbody Based Pick Up and Throw

As the Rigidbody version of the Player continues to develop (full functionality breakdown to follow) the concept of being able to pick up and throw interactable objects now has some solutions.

(It’s also worth noting here that the prototype illustrated also has a change color on raycast hit function that I’ll cover elsewhere.)

This added functionality has been on the dev list for a while so it’s good to get a working prototype up and running.

Development and Code

The basic pseudo code looks something like this:

if isInteracting && !isHoldingObject
    check the raycast hit object is Interactable
    move the object to the player

    isHoldingObject = true
    toggle isInteracting

if isInteracting && isHoldingObject
    AddForce to rigidbody based on PlayerCamera transform

    isHoldingObject = false
    toggle isInteracting

That seems simple enough but I quickly ran into issues specifically around getting more used to handling rigidbodies in FixedUpdate().

This answer on Unity forum was a great help:

But before the specifics, one obvious issue with my pseudo code was that the move function would put the selected object on top of the player – so firstly I needed to set up an offset that would also act as a target to which any selected object could be moved.

The easiest way to do that was to create an Empty at +10u on the player’s z axis:

  • Player hand is the target for any selected objects
  • Its a child of the player camera in order to inherit position and rotation that equates to the ‘look’ direction/rotation of the player. Used to set throw direction.
  • Has a trigger collider that detects when the selected object has reached the ‘hold’ position
  • Layer is set to ‘Ignore Raycast’ (a built-in Layer in Unity) that allows the raycast from the player to ignore this object
  • Attached script is used for OnTriggerEnter monitoring

Once this was set up I was able to work on the PickAndThrow script attached directly to the Player Object. After a few days of some serious wrestling with this script, below is a working prototype with plenty of commentary that explains the process step by step:

public class PickUpandThrow_PlayerRB : MonoBehaviour
{
    //Get references.....

    private void Start()
    {
        input = GetComponent<InputManager>();
        playerManager = GetComponent<PlayerManager>();
        cam = GetComponentInChildren<Camera>();
        audioSource = GetComponentInChildren<AudioSource>();

        playerHand = GameObject.Find("PlayerHand").transform;
        playerHand_CollisionDetector =
            playerHand.GetComponent<PlayerHand_CollisionDetector>(); //(assumed – line truncated in original)
    }

    private void FixedUpdate()
    {
        //Check player hand object for collision with hit RB:
        hitObjectIsAtHand = playerHand_CollisionDetector.hitInteractable;

        if (input.isInteracting && !isHoldingObject)
        {
            //we can pick up the object if we can see it:
            if (playerManager.rayCastHit != null
                && playerManager.rayCastHit.CompareTag("Interactable"))
            {
                //set the RB of target object received from the raycast
                //in PlayerManager:
                hitObjectRigidBody =
                    playerManager.rayCastHit.GetComponent<Rigidbody>(); //(assumed – line truncated in original)

                //toggle interact button
                input.isInteracting = false;

                //we are now holding the object
                isHoldingObject = true;

                //play audio pick up FX:
                //This clip needs to be adjusted based on difference between
                //hand and object (* pitch)
                audioSource.Play(); //(assumed – line truncated in original)
            }
        }

        //Pick up the object
        if (isHoldingObject && !hitObjectIsAtHand)
        {
            //make isKinematic to cancel physics interactions:
            hitObjectRigidBody.isKinematic = true;

            //correct orientation ready for throwing
            hitObjectRigidBody.transform.rotation = Quaternion.identity; //(assumed – line truncated in original)

            //move towards player hand
            hitObjectRigidBody.MovePosition(Vector3.MoveTowards(
                hitObjectRigidBody.position, playerHand.position,
                pickUpSpeed * Time.fixedDeltaTime));
        }

        if (isHoldingObject && hitObjectIsAtHand)
        {
            //if object has reached player hand, disable its movement
            //by 'teleporting' it into position:
            hitObjectRigidBody.MovePosition(playerHand.position); //(assumed – line truncated in original)
        }

        if (input.isInteracting && isHoldingObject)
        {
            //at this point the object is being thrown so re-enable
            //all physics properties by setting isKinematic = false:
            hitObjectRigidBody.isKinematic = false;

            //Fire the object away (forward)
            //Add force relative to camera in order to take account
            //of rotation.x ('look up')
            hitObjectRigidBody.AddForce(
                (cam.transform.forward * throwSpeedForward)
                + (cam.transform.up * throwSpeedUp), ForceMode.Impulse);

            //we are no longer holding the object
            isHoldingObject = false;

            //toggle interact button
            input.isInteracting = false;
        }
    }
}

One of the main issues here was getting the selected object to move to the PlayerHand position. This came down to a lack of understanding of how MovePosition() works – if the rigidbody has isKinematic set to false, RB.MovePosition() works like transform.position = newPosition and ‘teleports’ the object to the new position, rather than performing a smooth transition over fixed time.
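
To make the distinction concrete, here’s a minimal sketch contrasting the two behaviours (the field names here are illustrative, not from the project):

```csharp
using UnityEngine;

public class MoveExample : MonoBehaviour
{
    //hypothetical sketch – shows how MovePosition() behaves differently
    //depending on the isKinematic flag
    public Rigidbody rb;
    public Transform target;
    public float speed = 5f;

    private void FixedUpdate()
    {
        if (rb.isKinematic)
        {
            //kinematic: interpolates smoothly between physics steps,
            //so stepping towards the target gives a gradual approach
            rb.MovePosition(Vector3.MoveTowards(
                rb.position, target.position, speed * Time.fixedDeltaTime));
        }
        else
        {
            //non-kinematic: behaves like transform.position – an
            //instant 'teleport' to the new position
            rb.MovePosition(target.position);
        }
    }
}
```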

The other referenced script of interest here is the PlayerHandCollisionDetector attached (quite logically) to the PlayerHand:

public class PlayerHand_CollisionDetector : MonoBehaviour
{
    PickUpandThrow_PlayerRB pickUpandThrow;
    public bool hitInteractable;

    public void Start()
    {
        //Get the RB from pickUpAndThrow NOT playerManager!!!!
        pickUpandThrow = FindObjectOfType<PickUpandThrow_PlayerRB>();
    }

    private void OnTriggerEnter(Collider collider)
    {
        if (pickUpandThrow.hitObjectRigidBody != null)
        {
            //Check if the incoming collider is the SAME as the stored
            //rigidbody from pickUpandThrow - the RB is constant as its
            //NOT being updated by the raycast.

            //This comparison will avoid false readings from the bool that arose
            //from using the rayCastHit object from playerManager
            if (collider.transform == pickUpandThrow.hitObjectRigidBody.transform)
            {
                //toggle the hit - BECAUSE THIS IS A TRIGGER!!!!!
                hitInteractable = !hitInteractable;
            }
        }
    }
}

It’s easy to see some of the issues I had here! Just making the point that the rigidbody we need to compare is derived from the PickUpandThrow class, as the reference there is stable – as opposed to getting it from the PlayerManager, where the reference is constantly being updated by the raycast.

Summing up

While this setup is functional enough there are some issues that need to be ironed out:

Kinematic Objects don’t care about your colliders!!!
  • selected object can move through other colliders when being ‘carried’ by the player. This due to rigidbody.isKinematic set to true on selection. A solution that comes to mind is to create another empty child of the Player with a box collider that dynamically scales to cover the area of Player + Object. This collider can be activated when hitObjectIsAtHand ?
  • thrown objects sometimes go in unexpected/inaccurate directions. This is due to the AddForce function of the rigiidbody being calculated using cam.transformDirection * a hard coded value, instead of calculating the up value based on camera.rotation.x
  • the standalone nature of this element of the player will be addressed by refactoring and using it as a called method from PlayerManger. In this way, PlayerManager makes conditional calls to other classes which ‘should’ help with efficiency.
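
As a rough sketch of the first idea above – entirely hypothetical, as none of this exists in the project yet – a BoxCollider on an empty child of the Player could be resized to cover player plus held object and toggled with hitObjectIsAtHand:

```csharp
using UnityEngine;

public class CarryCollider : MonoBehaviour
{
    //hypothetical – a BoxCollider on an empty child of the Player,
    //enabled only while an object is held at the hand position
    public BoxCollider carryCollider;
    public PickUpandThrow_PlayerRB pickUpandThrow;

    private void Update()
    {
        carryCollider.enabled = pickUpandThrow.hitObjectIsAtHand;

        if (pickUpandThrow.hitObjectIsAtHand)
        {
            //stretch the collider forward to cover Player + Object:
            Vector3 objectScale =
                pickUpandThrow.hitObjectRigidBody.transform.localScale;
            carryCollider.size = new Vector3(1f, 1f, 1f + objectScale.z);
            carryCollider.center = new Vector3(0f, 0f, carryCollider.size.z / 2f);
        }
    }
}
```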

Custom Colliders for RigidBodies

While making good progress developing a new RB version of the player (which I’ll post about elsewhere), I stumbled on the issue that the Daleks solved back in the 80s – stairs!!!

After some searching through the forums I found the simplest solution (these things usually end up being quite simple) that I wanted to record here.

The issue is that when importing/using custom objects, Unity will automatically generate a mesh collider for the object using the object’s base mesh. That’s great and makes for realistic interactions…if you’re using the character controller. Not so great if using a RB based controller, as the collision will stop the player dead.

At the start of this project that was one of my reasons for going with a CC based controller, but I wanted to experiment with a RB controller and I’ve found that I prefer it – especially for physics interactions that seem much more fluid, realistic and immersive.

At this point, my RB player is using about 150 lines of code in a player manager along with a few classes (of <60 lines of code each) handling things like movement, power updates, raycasts, and writing some outputs to the UI. This compares to my CC main class that was 880+ lines of code, plus extra classes for raycasting etc.

In fact the new RB controller does everything the old CC based controller did, but with a better ‘feel’, less code (therefore fewer bug tracking issues) and less (Ahem…..) ‘physics malfunctions’.

  • I use the term ‘less physics malfunctions’ there as opposed to ‘no physics malfunctions’…where the term ‘malfunction’ is best defined as shouting ‘What the hell is going on!!!!’ quite loudly…….

So that’s all good – apart from the stairs.

The solution is simply to replace the mesh collider generated by Unity with a (or series of) primitive collider(s) (in this case box colliders) as required to ‘approximate’ the overall shape of the mesh.

Colliders are added as Empty children of the mesh and adjusted using their transforms to create a RB friendly shape:

Hierarchy view of stairs and colliders
Side view: first collider is just a box (in green) that covers the last (topmost) stair
Side View: 2nd collider is a rotated box collider (in green) that changes the ‘surface’ into a ramp
Example of RB player navigating stairs. Left stairs have mesh collider, right stairs have simplified colliders that create a ramp.

So that’s it – the simple primitive based series of colliders transforms the stairs into a ramp that an RB can climb.

Some extra trickery can help achieve ‘stair-ness’ such as running a movement script/animation on the camera/player on entering the collider that takes into account current velocity (now easily accessible from the RB player!) and adds an appropriate bob/move/climb action.

Just to prove the usefulness of this approach (esp as a dev tool) here’s the custom collider for the arrow attached to the RB player that I’m using to debug and test the controller:

Hierarchy of colliders
View of colliders (in green) on arrow

The only issue with this that I immediately foresee is the depth to which the 2nd collider ‘juts’ through the ground plane. Not a problem in my sandbox scene but may become a problem when developing an environment that has multiple levels. In that case the colliders may start to act on objects that are below them and provoke some weird behaviour. One solution may be to add more, gradually smaller colliders to the stairs so the depth of ground penetration can be minimised.

Refactoring UI Elements

I started this post with just the reference pic above almost 6 weeks ago, so I’ve been trying to remember exactly what I was trying to tell myself here. Fortunately for me I did annotate the code so I’ve managed to backtrack and figure things out.

As with many posts during a period of review and testing this is again about extensibility, reusability and ease of use.

The issue was with the size of the UI prefab – it had become a bit of a goliath, with many interdependent elements (objects) being controlled via the centralised UI controller class. This meant that the UI was a pain to instantiate in a scene, as the references were difficult to ‘find’ and were so interwoven with one another that any bug tracking or recycling/repurposing was getting to be a bit of a chore. What I’m always after is rapid prototyping, and that means trying to make things that are, for the most part, drag and drop.

I’ve read so many times across the Unity fora about the importance of keeping things as simple and independent as possible – I think I’ve written a lot about it too – but as ever the best teacher is failure and thats exactly what my giant prefab brought to the fore.

So as of this last iteration of the UI, all child elements are separated out into their own classes and are saved as individual prefabs. This has simplified matters and allowed the whole thing to become more extensible and flexible in terms of functionality:

Overview of the UI elements

This new arrangement of prefabs can now be used in a modular way, and new elements should be able to be introduced with a minimum of fuss as everything is pretty much separate – each element of the UI exists as its own class containing its own functions that can be referenced and called from anywhere or anything else.

For example, the prefab element called BroadcastMessages contains:

  • its own class
  • a text (TMP) object
  • a background object
  • an animator component

The class itself becomes very simple to manage as its functionality applies only to its children. This functionality is accessed as a public method that displays (and animates) incoming text addressed to it from elsewhere:


public class BroadCastMessages : MonoBehaviour
{
    TMP_Text broadcastMessages_Text;
    Animator animator_broadcastMessages;

    void Start()
    {
        broadcastMessages_Text = GetComponentInChildren<TMP_Text>();
        animator_broadcastMessages = GetComponentInChildren<Animator>();
        broadcastMessages_Text.enabled = false;
    }

    //public method for receiving messages:
    public void IncomingBroadcastMessages(string message)
    {
        broadcastMessages_Text.text = message;

        if (broadcastMessages_Text.text != null)
        {
            broadcastMessages_Text.enabled = true;
            animator_broadcastMessages.SetBool("broadcastMessage_FadeUp", true);
        }

        if (broadcastMessages_Text.text == null)
        {
            animator_broadcastMessages.SetBool("broadcastMessage_FadeUp", false);
            broadcastMessages_Text.enabled = false;
        }
    }
}
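
For example, any other class holding a reference to this component can broadcast with a single call – the way the reference is acquired here is just one option, not necessarily what the project does:

```csharp
//somewhere else in the project – e.g. in a pickup event:
BroadCastMessages broadcast = FindObjectOfType<BroadCastMessages>();
broadcast.IncomingBroadcastMessages("Interactable object collected!");
```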

The upshot of this is that the UI is now a system of prefab elements, each responsible (in the main) for its own part of the display using no more than 30-40 lines of code:

Outline of UI structure using individual prefabs with individual classes

The only complication/extension in the display I am currently using is the ScreenToWorldPoint method (outlined in this post) that displays a line linking information boxes to specific objects identified and passed from the raycast function in PlayerInteraction(). This display method can be seen in the screen grab at the top of this post.

The information (text) is passed from the raycast hit object to either the RHS or LHS messages class (depending on the objects’ tag).

The information is then displayed using the ScreenToWorldPoint method which I have now simplified by moving into the class UI_Manager – the same function is called by both the RHS and LHS Messages classes so this saves duplication:

public class UI_Manager : MonoBehaviour
{
    public void FadeUpLineRenderer(LineRenderer lineRenderer, Transform rayHitTransform,
        TMP_Text text)
    {
        //Fade up and link text to the object in world space
    }

    public void FadeDownLineRenderer(LineRenderer lineRenderer, TMP_Text text)
    {
        if (alpha_lineRenderer > 0)
        {
            //Fade down
        }
    }
}
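
The fade bodies are elided above. As a rough sketch of what FadeUpLineRenderer() might contain (assumed – the actual implementation isn’t shown in this post, and it simplifies away the ScreenToWorldPoint step), the line runs between the text element and the hit object while the alpha ramps up:

```csharp
public void FadeUpLineRenderer(LineRenderer lineRenderer, Transform rayHitTransform,
    TMP_Text text)
{
    //link the two ends of the line: the UI text and the world object
    lineRenderer.SetPosition(0, text.transform.position);
    lineRenderer.SetPosition(1, rayHitTransform.position);

    //ramp the alpha up over time (alpha_lineRenderer is a class field):
    alpha_lineRenderer = Mathf.MoveTowards(alpha_lineRenderer, 1f, Time.deltaTime);
    Color color = lineRenderer.startColor;
    color.a = alpha_lineRenderer;
    lineRenderer.startColor = color;
    lineRenderer.endColor = color;
}
```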

So this class (UI_Manager) gets called by either LHS or RHS messages, which provide references to a lineRenderer, a Transform and the text. The LHS/RHS messages are themselves accessed using a unique function.

The snippet below is from the LHS_Messages class. It is a slightly more complex version of the BroadcastMessages class, but the basic functionality is the same – messages are passed to it using the IncomingMessages() call, and the appropriate elements are then passed on to the function in UI_Manager:
public class LHS_Messages : MonoBehaviour
{
    //Incoming messages public method
    public void IncomingMessages(string message)
    {
        LHS_messageText.text = message;
    }

    void Update()
    {
        //If no raycast hits turn off the displays
        if (playerInteraction.rayCastHitObject == null)
        {
            //fade down UI window
            LHS_messageAnimator.SetBool("LHS_Panel_FadeUp", false);

            //set the rayHitTransform to null
            rayHitTransform = null;

            //Fade Down LineRenderer:
            uI_Manager.FadeDownLineRenderer(lineRenderer, LHS_messageText);
        }

        //if raycast hit, display if tag is "Interactable"
        if (playerInteraction.rayCastHitObject != null
            && playerInteraction.rayCastHitObject.CompareTag("Interactable"))
        {
            //turn on this element
            LHS_messageText.enabled = true;
            lineRenderer.enabled = true;

            //Fade up via Animator
            LHS_messageAnimator.SetBool("LHS_Panel_FadeUp", true);

            // set up the generic Transform component:
            rayHitTransform = playerInteraction.rayCastHitObject;

            //ADD THE LINE
            uI_Manager.FadeUpLineRenderer(lineRenderer, rayHitTransform,
                LHS_messageText); //(assumed final argument – line truncated in original)
        }
    }
}
Summing up, what this approach and refactoring has helped achieve is a more extensible, easier to track/debug and generally much more flexible series of prefabs/classes that can be adapted, reused, repurposed and rewritten as circumstances require.

For example, the UI_Manager class is now nothing more complex than an extra function holder that can be logically extended to include and execute any repeated functions that occur within the UI as a whole. I do believe I’m not far away from understanding Properties here, so no doubt at some point down the line I’ll have an Aha! moment and write a new post referencing and ridiculing my clumsiness in this one….

That, plus perhaps a more important point for me – by trying to employ a more sensible, simplified approach to writing the code in the first place, I’m able to come back after a complete absence of some 6 weeks and take only 20 minutes or so to see what I’m doing.

Knowing what I’m trying to do is mostly a good thing…..leaving easy to read comments is definitely a good thing.

Better Referencing in Unity

After several months of development trying to keep things as simple as possible has become increasingly important.

With only a few elements in the game sandbox, it’s sometimes been a struggle to quickly iterate and change functionality without breaking relationships and spending hours trawling through the code trying to find bugs and/or having to reattach or redefine references.

In my recent elevator script development this took an extreme turn as the relationships between objects became overly complex and lacking centralised control, so I spent some time looking at ways to simplify the process.

Component references:

Rather than constantly dragging and dropping game object and component references into the script window, the following methods have proved to be useful:

void Start()
{
            //Cache references in Start() to avoid repeated lookups every frame:

            //get a component attached to this game object:
            component = GetComponent<Type>();

            lineRenderer = GetComponent<LineRenderer>();

            //GameObject.Find searches the whole scene and can lead to performance
            //issues if called repeatedly - so cache the reference once in Start():

            reference = GameObject.Find("Name");

            //this can be used in tandem with GetComponent:
            //example from the UI:
            //GET THE GO and derive elements from that:
            LHS_messageObject = GameObject.Find("LHS_Messages");

            //Attached components:
            LHS_messageRectTransform = LHS_messageObject.GetComponent<RectTransform>();
            LHS_messageText = LHS_messageObject.GetComponent<TMP_Text>();
            LHS_messageAnimator = LHS_messageObject.GetComponent<Animator>();

            //Find Object of Type:
            //This method is useful for finding specific components by type,
            //like other classes:

            reference = Object.FindObjectOfType<Type>();

            //example grabbing the player interaction class in the UI:
            playerInteraction = GameObject.FindObjectOfType<PlayerInteraction>();
}

These methods have been useful in referencing objects/classes, and when used in tandem with arrays they’ve made it quick to create and edit the core functionality of the controller class.

        //example using arrays of components:
        elevatorMechanics = GameObject.FindObjectOfType<ElevatorMechanics>();

        doors = new Transform[elevatorMechanics.numStoreysInThisBuilding];
        doorAnimators = new Animator[elevatorMechanics.numStoreysInThisBuilding];
        doorIsOpen = new bool[elevatorMechanics.numStoreysInThisBuilding];

        for (int i = 0; i < elevatorMechanics.numStoreysInThisBuilding; i++)
        {
            if (transform.GetChild(i).name.Contains("Placeholder"))
            {
                doors[i] = transform.GetChild(i);
                doorAnimators[i] = doors[i].GetComponentInChildren<Animator>();
                doorIsOpen[i] = false;
            }
        }

Saying that, it’s important to mention (again) that having a centralised point (object/class) within the hierarchy that deals with all the ‘thinking’ of the structure is massively important. Using this approach during the elevator development I was able to cut hundreds of lines of code across 5 classes down to less than 50 lines across 4 of the attached scripts (dealing with individual functions like the button control panel, call buttons, triggers and doors), plus a longer ‘controller’ class that deals with the core mechanics of moving, UI, calculating floors and triggering events.

In this hierarchical structure the controller class is ‘aware’ of its own immediate environment in terms of how it should be affected in the world plus a deeper awareness of its children – doors, platform, buttons etc.

In turn, the child elements are aware of nothing except their own functionality and pass that functionality on to the controller class.
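As a minimal sketch of that pattern (the class and method names here are hypothetical, not the project’s actual code), a child element might do nothing except report its own event up to the controller:

```csharp
using UnityEngine;

//Hypothetical child element: knows only its own function
//and forwards it to the controller.
public class CallButton : MonoBehaviour
{
    public int floorIndex;
    private ElevatorController controller;

    void Start()
    {
        //the controller sits on the parent object in the hierarchy
        controller = GetComponentInParent<ElevatorController>();
    }

    public void Press()
    {
        //no decision making here - the 'brain' decides what to do
        controller.OnFloorCalled(floorIndex);
    }
}

//Hypothetical controller: aware of its children,
//owns all the decision making.
public class ElevatorController : MonoBehaviour
{
    public void OnFloorCalled(int floor)
    {
        Debug.Log($"Elevator called to floor {floor}");
        //...move platform, open doors, update UI etc.
    }
}
```

The child never touches movement or UI – it just passes its event upward, which is what keeps the Update loops confined to the controller.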

As a final note on referencing now that I’ve veered off into architecture, this post is worth mentioning as an extension method for finding objects within a hierarchy structure:

GameObject GetChildWithName(GameObject obj, string name)
{
     Transform trans = obj.transform;
     Transform childTrans = trans.Find(name);
     if (childTrans != null) {
         return childTrans.gameObject;
     } else {
         return null;
     }
}
Elevator Prefab


After more than 2 weeks of wrestling with this idea, illness and the slaughter of more than 2500 orcs in Shadow of War (go go Batman in Mordor!!!) the elevator prefab is now functionally complete.

The idea was to create a functional prefab that would take input from the Inspector and set up arrays of elements allowing the player to interact via:

  • elevator call buttons: one on each floor to call the elevator to that floor if not present
  • a control panel of buttons that allow player to select desired floor.
  • an array of doors that open and close as the elevator reaches their floor

The Inspector takes arguments for the number of floors and the floor height in order to set up an array of floors, indexed by floor number and holding each floor’s height. Each floor is modular, modelled in and imported from Blender. At this point the structure is built in the Inspector to allow easy access to editing functions:
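A minimal sketch of that floor-array setup (the field names beyond `numStoreysInThisBuilding` are assumptions based on the description, not the project’s actual code):

```csharp
using UnityEngine;

public class ElevatorMechanics : MonoBehaviour
{
    //set in the Inspector
    public int numStoreysInThisBuilding = 2;
    public float floorHeight = 3f;

    //index = floor number, element = that floor's Y height
    public float[] floorHeights;

    void Start()
    {
        floorHeights = new float[numStoreysInThisBuilding];
        for (int i = 0; i < numStoreysInThisBuilding; i++)
        {
            floorHeights[i] = i * floorHeight;
        }
    }
}
```

With the array in place, moving the platform to floor `n` is just a matter of lerping its Y position towards `floorHeights[n]`.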

A note on riding platforms:

After finding some really nasty jitter issues with the character when riding rigidbody platforms I found a solution using this thread to derive the following script. This is attached to the rigidbody collider object in the scene (the elevator platform). The main lines are:

  • other.transform.SetParent(this.transform) – sets the GO OnTriggerEnter as a child of the RB object (the GO is the player, though not explicit in this example)
  • other.transform.SetParent(null) – removes the parent OnTriggerExit
    private void OnTriggerEnter(Collider other)
    {
        numObjsInTrigger += 1;

        //In order to stop the jittering caused by riding platforms
        //set the other as a child of the empty parent of the platform
        other.transform.SetParent(this.transform);

        //Catch trigger events caused by any of the attached GOs
        //whose names contain 'Elevator' and return (nullify)
        //the trigger action. Also remove the registered
        //numObjsInTrigger call
        if (other.name.Contains("Elevator"))
        {
            numObjsInTrigger -= 1;
            return;
        }

        if (numObjsInTrigger >= elevatorMechanics.numObjsToTrigger)
        {
            if (other.transform.CompareTag("Player"))
            {
                //Send the bool message
                playerIsInElevator = true;
            }
        }
    }

    private void OnTriggerExit(Collider other)
    {
        numObjsInTrigger -= 1;

        //remove the parenting
        other.transform.SetParent(null);

        //This stops activation when there is still more
        //than one object in the trigger
        if (numObjsInTrigger < elevatorMechanics.numObjsToTrigger)
        {
            if (other.transform.CompareTag("Player"))
            {
                //send the bool message
                playerIsInElevator = false;
            }
        }
    }


This process has served as a great learning experience especially in terms of organising hierarchy and how that relates to script functionality. In the current iteration of this prefab the top level element (Elevator) controls all movement/animation functions by taking references from children.

This has resulted in an efficient code structure that ensures control functions are accessed within the parent object – the ‘brain’ of this structure – by reaching out to the child elements for references to individual elements – doors, control panel, call buttons and trigger events.

In this way the only loops running at Update exist in the parent object – all children may run loops to set up their own arrays at Start but have no update function. This centralisation of functionality will make this prefab extensible in the future – e.g. while this moves a platform on Y, it would be straightforward (until Quaternions!) to make a platform move on X/Z.

Iteration #0: feature updates (and fiddling)


After managing to deliver on my project management priorities, iteration #0 of TGM47 is complete.

Iteration #0 is a series of prefabs that address basic interactions and movement.

The main purpose of this iteration was to create prefabs that would make future iterations a bit more efficient as I can draw and iterate from these general principles to create environment/level specific instances.

Of course, with the explicitly organic approach to development there have been more than a few bits of fiddling and going off on tangents that are perhaps fundamentally opposed to some of the original concepts – specifically elevators!

Iteration #0 – Features


  • Movement that is inertia based
  • Boost (accelerate on Y) – jump
  • Dash (accelerate on Z)
  • Power consumption


  • Raycasting to recognise and retrieve information from other GameObjects


  • RigidBody Interactions


  • Transform based audio for all movement elements


  1. Display and update power consumption
  2. Methods to allow messages from other objects to be displayed linked from CamSpace to WorldSpace
movement and interaction


  • A button controlled door opened/closed using the Player Raycast to distinguish between and activate individual doors


  • A door controlled via a collision trigger
  • Parameters include options for how many objects (and of which type) are required to activate the door
button controlled and pressure pad controlled doors


  • Based on the door prefab, this elevator is activated in the same way as the Door(PP) prefab but has other customisable options:
  • A button that can call the elevator to this floor
  • Elevator ‘knows’ which floor it is on and responds accordingly – going up/down/already on this floor

Some conclusions and thoughts

Why do I need elevators anyway???? Seeing as how the main player is the TGM camera complete with a boost (fly) function/hover it seems a bit unnecessary.

Having said that it was more the challenge of trying to develop a universal/catch all prefab that could solve what is (at least for me) a quite complicated logical problem. In this example there are only 2 floors so the logic isn’t particularly head scratching but (for the sake of it), I’ll try and develop a more universal script that I’ll then put on the Unity store…..

One of the fundamental lessons learned in this iteration is the need to have some idea of how the logic will work in a given scenario and what objects need access to what data. The rule of thumb (based on research) seems to be to allow parent objects to control and access data in their children, and to push scene/level-wide data into an overall manager that injects data into the objects in the scene as required.

That’s an ongoing learning process at the mo, but as scenes become more complex this approach will help keep the logic and the code clean, accessible, easy to debug and easy to read.
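A tiny sketch of that rule of thumb – the manager, interface and field names here are invented for illustration, not taken from the project:

```csharp
using UnityEngine;

//Hypothetical contract for anything that accepts injected level data
public interface IConfigurable
{
    void Configure(float levelTimeScale);
}

//Hypothetical scene-wide manager that pushes shared data down into
//objects, rather than each object hunting for it independently.
public class LevelManager : MonoBehaviour
{
    public float levelTimeScale = 1f;

    void Start()
    {
        //find every configurable MonoBehaviour in the scene and inject
        foreach (var mb in FindObjectsOfType<MonoBehaviour>())
        {
            if (mb is IConfigurable configurable)
            {
                configurable.Configure(levelTimeScale);
            }
        }
    }
}
```

The point is the direction of flow: data travels from the manager down to the objects, so no child ever needs a reference back up to scene-wide state.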

A future iteration of this needs to include some experimentation with the player itself. The next iteration is focussed on developing a way to ‘shoot’ video textures onto world objects, although after having started to experiment with animations in Unity and Blender I want to develop a TPS with Cinemachine option for this project.


Animation: Curves and Immersion


This vid highlights approaches to developing procedural animations that can increase feel and immersion using curves.

Many of these tips are useful when thinking about developing gtm47 using fluid animations and the power of the Unity FSM animator as opposed to hard coded transforms.

Iteration and testing is done mainly using a minimal amount of keyframes in order to focus on how the curve can affect overall feel and ‘realism’.

Can be implemented in Blender and imported to Unity.

Human Places, Human Spaces


This 1979 film (complete with awesomely future-retro funky intro tune and graphics) is based on William H. Whyte’s Street Life Project and explores how people use spaces in the real world.

A useful resource that can help design how AI elements might interact with game architecture.

Project Management


You will comply

Resistance has indeed proved futile!

When I first started developing the glitch machine B47 as a fully fledged project in April/May 2020, I was coming at it from the point of view of a bit of a hobbyist/hacker, without much regard for protocols and the ‘trappings’ of management approaches.

For me, the project HAD to be organic and I wanted the dev itself to inform the content, structure and narrative. After all, one of the main themes of this whole idea is about liminality, about becoming and being in a state of flux; so any kind of formal structure struck me as fundamentally anti-glitch.

Of course it didn’t take long to get completely out of my depth and lose control of what I was doing and while I appreciated the chaos it wasn’t conducive to making progress.

The number of scripts, the bugs they inevitably create (especially in my code), the concept ideas, artwork, 3D models, textures, materials, AV content, scenes, iterations, theory and on and on…. that are involved in getting anywhere with any kind of project like this need to be addressed…they need to be managed.

So I started trying out some tools and thinking about how best to manage this seemingly overwhelming workload. Notion is a really nice note taking app with lots of other lovely functionality that lends itself to a dev environment.

Comes complete with a database manager that makes task tracking really straightforward plus lots of other features that allowed me to make a functional, centralised space for all project areas.

With Notion you can easily publish your site to the web and add hosting if you want to. There’s plenty of support and help available online and good community engagement.

Easy to use, free for personal use, and comes with a desktop app too!

So thats all good!

I used this for about a month and was learning as I went – both about the platform and how best to organise and manage the project. The eventual breaking point for me though was API integration.

At the time of writing there’s no official public API, and while this situation looks set to change in the near future, I needed to move on and start connecting ideas and content to this site.

Trello, at its core, is a visual to do list…but with power ups!

Using this app (again free for personal use and a desktop app) is super easy and setting up a scrum style project flow is extremely straightforward.

Boards can be expanded and shared with teams to track project progress and neat power up options allow some automation of tasks, like moving ‘complete’ tasks to the ‘completed’ list. You only get one power up with a free account so choose wisely….

Another great feature was adding priority fields to tasks and being able to filter accordingly. I also used a chrome plugin to add some extra functionality without having to cough up any money. This was useful for tweaking some parameters and setting the overall look when using the app online – but that’s online only, doesn’t apply to the desktop app and won’t work in a team situation unless everyone has the same setup.

Anyway, all round thumbs up from me – I used this for a week or so – but again the breaking point was the lack of integration with other apps I wanted to use, and exporting CSVs all the time is just a pain…..

Google Sheets I don’t think needs much explanation – it’s part of the Google suite of online office apps…and it’s free…and it’s HUGELY functional…overwhelmingly so!

So after a month or so of playing with other apps like Notion and Trello, it turns out I had learned a fair amount about how project management works and, more importantly, what I wanted my project management to do!

That last statement should have been the first task in my project management saga I know, but this is an organic process……

So once I had the overarching process clear in my head I made my own system in Sheets. The resource here was useful in getting started, and then it’s just a matter of how complex/deep you want to go.

There’s a massive amount of resources, help and community support for pretty much any kind of functionality you need inside your sheets, so after about a week of development I now have a system that:

  • tracks dev area iterations based on tasks
  • includes a sprint page tracking focussed function dev
  • includes an overview that provides an ‘at a glance’ status breakdown
  • is a PM system that is centralised and always available
  • is flexible and extensible to suit project needs

With this tool I’m now able to prioritise tasks, plan functionality and implement iterative sprints while retaining the flexibility I need in order to expand and develop the project. In fact this post is a part of a web design sprint due today.

As far as deviating from the glitch aesthetic I’d say…..

…you can’t break it ’til you make it, right?

Player: UI Info Overlay


Solution to displaying interactable object information in the UI, based on a raycast from the camera:

    //send the ray from the **CAMERA** (not the player)
    rayDirection = camera.transform.TransformDirection(Vector3.forward);
    //point of origin is LOOK - from the camera
    ray_pointOfOrigin = camera.transform.position;
    if (Physics.Raycast(ray_pointOfOrigin, rayDirection,
        out interactableObjectHit, scanRange))
    {
        //Change the color based on object in range
        //Check if the hit obj has Interactable Tag
        if (interactableObjectHit.transform.CompareTag("Interactable"))
        {
            crosshair.color = Color.red; //highlight colour (value elided in the original)
            //Get the transform of the hit object
            rayCastHitObject_transform = interactableObjectHit.transform;
            //Get the name of the hit object
            rayCastHitObject_name = interactableObjectHit.transform.name;
        }
    }
    else
    {
        crosshair.color = Color.white;
        //change the name to null so we can differentiate a
        //no hit event in the UI controller
        rayCastHitObject_name = null;
    }
    /* Running in Update to ensure we are updating what we're looking at.
       We receive raycast info from the PlayerInteraction script that gives
       us the name of the object we've hit (are looking at) via
       rayCastHitObject_name (assignment elided in the original).
       We can use this info in the UI to display the name and info about
       the object we're currently looking at. */
    if (playerInteraction.rayCastHitObject_name != null)
    {
        //Turn on the components we need in UI
        canvasGroup_interactableObjUI.enabled = true;
        panel_interactableObjUI.enabled = true;
        text_interactableObjUI.enabled = true;
        lineRenderer_hitObjectToUI.enabled = true;

        //add the line
        lineRenderer_hitObjectToUI.startWidth = 0.001f;
        lineRenderer_hitObjectToUI.endWidth = 0.01f;

        //Set up a new V3 to hold the position of the UI element:
        //x and y attach to pivot points of the UI object set in the Inspector;
        //add 1.0f on z else we won't see the line (values elided in the original)
        Vector3 UI_element_transform = new Vector3(/* x, y, z + 1.0f */);

        //No. of points on the line (start/end = 2)
        lineRenderer_hitObjectToUI.positionCount = 2;
        //start at position of the UI obj and change it to World position
        lineRenderer_hitObjectToUI.SetPosition(0, cam.ScreenToWorldPoint(UI_element_transform));
        //second point is position of the object hit by the raycast in PlayerInteraction
        lineRenderer_hitObjectToUI.SetPosition(1, playerInteraction.rayCastHitObject_transform.position);

        //Add the text - this needs to pull in pre-stored text, like a database,
        //depending on the name of the hit object - playerInteraction.rayCastHitObject_name
    }
#glitch theory


An ongoing list of related theoretical resources

“..the Event is a situation of radical
interruption, a transformation incited by things that were
unseen until that very moment. The thing that was hidden reveals itself
and turns the tables: just as workers that leave the darkness of their
workplaces or a secret desire that explodes. The Event is always
unexpected, as you can’t expect the revolution to obey a timetable. It does
not mean that it’s a miracle or supernatural intervention. The Event is
a revelation of things that were there, but were never expected to be
revealed, something that could be expected (sometime, somewhere), but was virtually…”

Badiou, A. (2005) Being and Event, trans. O. Feltham. London: Continuum

The Glitch art is dead project was initiated by Aleksandra Pieńkosz and Zoe Stawska as an attempt to fill the gap in our understanding of new visual phenomena and to bridge the gap between the digital and the material. The album consists of works presented at the Glitch art is dead exhibition that took place in Teatr Barakah gallery in Kraków in fall 2015. 29 artists from 14 countries presented their 70 graphics and 12 videos. The works of art were chosen after an open call for members of the online Glitch Artist Collective. The number and the quality of collected material pushed the project forward.

Aleksandra Pieńkosz and Piotr Puldzian Płucienniczak (eds.) – Glitch art is dead

Peter Zumthor: Bruder Klaus Field Chapel


This architecturally intriguing piece is a prime candidate to explore the impact of architecture within tgmB47.

The design, look and feel of the chapel can be adapted to incorporate elements of AV content as texture/material within the structure itself.

The feeling of movement within the central chapel, a feeling of being surrounded by a vast space while being encouraged to ‘move’ towards the heavens produces a dynamic atmosphere that can be further enhanced through using AV textures to increase immersion.

The Bruder Klaus Field Chapel by Peter Zumthor, completed in 2007, is known for its beautiful respect for the materials which were used to construct the sensuous space. The interior of the chapel is a black cavity left behind by 112 tree trunks burnt out of the cast concrete walls. Twenty-four layers of concrete were poured into a frame surrounding the trunks, stacked in a curved conical form, forming a stark contrast to the comparatively smooth angular façade. After removing the frame, many small holes were left behind in the walls, creating an effect reminiscent of the night sky. The chapel’s “beautiful silence” and undeniable connection to its surrounding landscape make it an evocative and popular destination for many.

Architecture in Game Design (GDC 2016)


Some interesting resources on architecture (as in buildings and space) and game design.

This vid from GDC 2016 explores some interesting architecture themes:

Themes to explore:

  • +ve and -ve space
  • how does the local environment impact on materials and textures
  • how can space reflect use/use reflect space
  • can AV content as materials/texture influence the ‘feel’ of an architectural landscape?
  • Sharp/Hard geometry = Heavy/Industrial
  • Soft/Curved geometry = light, floating, ethereal
Conceptual Inspiration from Fantastic Voyage


I was thinking that, based on the game start idea, it might be an idea to source some content based on the color-mungous 1966 film Fantastic Voyage – look, font etc.

This might allow opportunity to change pace and disrupt expectations as I expect the experience to move into dark territory presenting a potentially interesting juxtaposition to this ’60s scifi feel.

Player: Audio Management


After plenty of headaches with audio sources it seems that this solution is best option:

It’s possible to play multiple sounds at once with ONE AudioSource. You can play up to 10-12 audio clips at once (only) by using PlayOneShot(). With it, Unity mixes the audio output from the audio clips into a single channel (which is why it’s limited to 10-12 clips at once).

The key here is ONLY USING PlayOneShot(). The problem comes when using Play() and PlayOneShot() from the same AudioSource.

So the player in the glitch machine has 2 audio sources – one for movement and one for SFX – that can play simultaneously. The AudioSources are assigned in the Inspector:

    public AudioSource audioSourceMovement;
    public AudioSource audioSourceFX;

        //MOVEMENT AUDIO (Audio Source 01)
        if (!audioSourceMovement.isPlaying)
        {
            audioSourceMovement.volume = 0.05f;
            audioSourceMovement.pitch = (float)(1.0f +
                (playerDistanceUp * 0.01) + (playerSpeed * 0.01f));
            audioSourceMovement.clip = moveAudio;
        }

        //FX AUDIO (Audio Source 02)
        if (!audioSourceFX.isPlaying)
        {
            audioSourceFX.pitch = 1f;
            audioSourceFX.volume = 0.2f;

            if (isBoosting)
            {
                //...boost SFX (body elided in the original)
            }

            if (playerMovement.current_Power <= 0)
            {
                //...out-of-power SFX (body elided in the original)
            }
        }
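For the SFX side, a hedged sketch of how a one-shot might be fired through the second source, following the PlayOneShot() advice above (the clip name and method are placeholders, not the project’s actual code):

```csharp
//Placeholder clip assigned in the Inspector
public AudioClip boostClip;

void PlayBoostSFX()
{
    //PlayOneShot mixes into the source's output, so multiple
    //overlapping calls on audioSourceFX won't cut each other off -
    //unlike Play(), which restarts the source's current clip.
    audioSourceFX.PlayOneShot(boostClip, 0.2f);
}
```

Keeping Play() (for the looping movement audio) and PlayOneShot() (for SFX) on separate AudioSources is exactly what avoids the conflict described above.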

Set up a toggle button using the Input Actions Manager


Set up a toggle button using the Input Actions Manager: derived from


if (controls.Gameplay.crouch.triggered)
{
    isCrouching = !isCrouching; //toggle
}
//This will toggle 'crouch' on and off. Also make sure that the
//controls.Gameplay.crouch.performed/cancelled IS NOT called in…