
Pick Up and Throw – Materials, Scale and Selection Updates


Following on from two recent posts, the player PickUpandThrow mechanic has been updated to address several issues, including:

  • Collisions when ‘holding’ an interactable object
  • Better throwing/firing mechanics
  • Selection outline for interactables
  • Swapping materials

There are a few other updates, especially related to destructible objects, that I’ll cover elsewhere.

Pick up and Throw updates

Materials:

One of the main problems I wanted to address in this update was how to change the PBR material of the interactable object so that when it is held, the player can ‘see through’ it. This was no problem when just using the base color of the attached material, but when using an Albedo texture, animating the alpha was more problematic.

My first effort was to create a custom shader using the Unity ShaderGraph – a very neat tool that I’ll no doubt come back to at a later point. While creating a basic PBR setup with Albedo, Smoothness, Metallic, AO etc. channels was straightforward, there seems to be an issue with the way ShaderGraph handles normal map calculation (see this Unity forum post). So while I was able to create an alpha slider for the albedo channel and expose that parameter in the Inspector, the normal mapping was a bit weird.

That led me into Blender at first, experimenting with UV unwrapping and exporting a project-specific cube/sphere (which in turn led to this post), but I was still stuck with the normals issue.

The solution (for the moment) is Unity’s built-in URP Lit shader, which includes Albedo, Metallic, Normal and AO channels that render nicely – but accessing an alpha solution was a bit of a pain. In the end I opted for a second (Lit shader) material slot in the Object class: a copy of the original, except with the surface type set to Transparent instead of Opaque. Transparent materials give easy access to the alpha channel (in the same way as using just the color channel), which in turn can be run through an Animator component to achieve the fade up/down effect I need.

In the Object class, the relevant lines look like this:

//Set the 2 materials in the Inspector:
[Header("Materials")]
public Material defaultMaterial;
public Material alphaMaterial;

void Update()
{
    //if the player is holding this object:
    if (pickUpAndThrow.isHoldingObject)
    {
        if (pickUpAndThrow.hitObjectIsAtHand)
        {
            //switch off outline:
            outline.enabled = false;

            //swap to the transparent material:
            renderer.material = alphaMaterial;

            //fade down alpha animation:
            animator.SetBool("isHolding", true);
        }

        if (!pickUpAndThrow.hitObjectIsAtHand)
        {
            //switch on outline:
            outline.enabled = true;

            //swap back to the opaque material:
            renderer.material = defaultMaterial;

            //fade up alpha animation:
            animator.SetBool("isHolding", false);
        }
    }
}

I’m not posting the whole code here as I’m only interested in what has changed compared to the script posted in Change Color On Raycast Hit.

The main changes are using the second (transparent) material and accessing the animator component that simply fades the alpha value of the material.

There may be issues in the future as transparent materials are expensive, plus I’m not altogether sure if I’m spending resources in duplicating shaders here, but as this effect is applied to one object at a time, and based on my current tests it does the job for now.
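On the duplication question, one alternative I may try is fading the alpha directly on the (transparent) material instance, with no Animator or second material asset. This is only a sketch of the idea, not project code – the class and field names here are illustrative, and it still relies on the surface type being Transparent for the alpha to have any effect:

```csharp
using UnityEngine;

public class AlphaFadeSketch : MonoBehaviour
{
    Renderer rend;
    float targetAlpha = 1f;
    public float fadeSpeed = 4f;

    void Start() { rend = GetComponent<Renderer>(); }

    //called by the pick up/throw logic instead of animator.SetBool:
    public void SetHeld(bool held) { targetAlpha = held ? 0.3f : 1f; }

    void Update()
    {
        //note: accessing .material instantiates a per-object copy,
        //so other objects sharing the material are unaffected
        Color c = rend.material.color;
        c.a = Mathf.MoveTowards(c.a, targetAlpha, fadeSpeed * Time.deltaTime);
        rend.material.color = c;
    }
}
```

Whether this is cheaper than the two-material-plus-Animator setup is something I’d need to profile.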

Selection Highlighting

I wanted to add an outline to selectable objects to indicate that they can be picked up. Fortunately this solution was an easy (if lazy) one. After wrestling with the SG once again, and coming to the conclusion that while fresnel was a great option for spheres, it wasn’t so effective with cubes – what with it being fresnel! – I found this FREE asset that was easy to integrate, access and control from the Object class.

While I do try my best not to rely on asset store resources, this plug-in is great… and I’m using it!

One Thing At A Time!

Another easy fix (that should also save a bit of performance… until I need multitasking interactions!) was to limit the raycast call from the player so that no other raycast trigger events fire while holding an object.

This fix is in the PlayerManager() class, in Update():

        //Only perform a raycast if NOT
        //holding an interactable object
        if (!pickUpandThrow.hitObjectIsAtHand)
        { rayCastHit = rayCast.DoRaycast(cam, scanRange); }
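The DoRaycast() helper itself isn’t shown in this post; a minimal version might look like the sketch below. This is a hypothetical reconstruction based on the call signature above – the real implementation in the raycast class may differ:

```csharp
using UnityEngine;

//Hypothetical sketch of the DoRaycast helper called above.
public class RayCastSketch
{
    public Transform DoRaycast(Camera cam, float scanRange)
    {
        //cast from the centre of the camera's view:
        Ray ray = cam.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        if (Physics.Raycast(ray, out RaycastHit hit, scanRange))
        {
            return hit.transform;
        }

        //nothing in range:
        return null;
    }
}
```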

Collisions and Physics

In this final section of the post I want to highlight the physics interactions, which include collisions, throwing, pick-up move/rotate, and moving the player hand based on object scale.

As an additional note, this Unity Answers post was a help in determining the rotation/position maths. The main methods to get a grip of are the MovePosition() and MoveRotation() calculations, which eliminate the isKinematic calls I was making before.

Fortunately I’ve been employing good practice, and the comments in the PickUpandThrow class (which manages all related interactions/mechanics) explain the whole process in a much more concise and demonstrable manner than I would manage in this text…

private void FixedUpdate()
    {
        if (input.isInteracting && !isHoldingObject)
        {
            //we can pick up the object if we can see it:
            if (playerManager.rayCastHit != null
                && playerManager.rayCastHit.CompareTag("Interactable"))
            {
                //set the RB of target object received from the raycast
                //in PlayerManager:
                hitObjectRigidBody =
                   playerManager.rayCastHit.GetComponent<Rigidbody>();

                //move the player hand forward by size of
                //object * offset
                hitObjectScale = hitObjectRigidBody.transform.localScale;

                playerHand.transform.localPosition = new Vector3(
                    0, 0, hitObjectScale.z * playerHandOffset);

                //toggle interact button
                input.isInteracting = false;

                //we are now holding the object
                isHoldingObject = true;
            }
        }

        //Pick up the object
        if (isHoldingObject && !hitObjectIsAtHand)
        {
            //move towards player hand
            hitObjectRigidBody.MovePosition(Vector3.MoveTowards(
                hitObjectRigidBody.position,
                playerHand.position,
                pickUpSpeed * Time.fixedDeltaTime));

            //rotate towards zero:
            Vector3 targetDirection =
                (hitObjectRigidBody.position - Vector3.zero).normalized;

            Quaternion targetRotation = Quaternion.LookRotation(targetDirection);

            hitObjectRigidBody.MoveRotation(Quaternion.RotateTowards
                (hitObjectRigidBody.transform.rotation, targetRotation,
                pickUpSpeed * Time.fixedDeltaTime));

            //turn off gravity
            hitObjectRigidBody.useGravity = false;
        }

        if (isHoldingObject && hitObjectIsAtHand)
        {
            //if object has reached player hand disable its movement
            //by 'teleporting' it into position.
            hitObjectRigidBody.MovePosition(playerHand.position);

            //if the object's rotation is not zero,
            //slow rotation by lerping the angular velocity
            //to zero:
            if (hitObjectRigidBody.rotation
                != Quaternion.Euler(Vector3.zero))
            {
                hitObjectRigidBody.angularVelocity = Vector3.Lerp
                    (hitObjectRigidBody.angularVelocity,
                    Vector3.zero,
                    pickUpSpeed * Time.fixedDeltaTime);
            }
        }

        if (input.isInteracting && isHoldingObject)
        {
            //turn gravity back on:
            hitObjectRigidBody.useGravity = true;

            //stop objects rotation by setting
            //it to zero to make sure it fires forward
            hitObjectRigidBody.rotation = Quaternion.Euler(Vector3.zero);

            //set direction based on camera transform:
            Vector3 pushDirection = cam.transform.forward;

            //add upwards multiplier only to Y transform
            pushDirection.y *= upMultiplier;

            //Fire the object away
            hitObjectRigidBody.AddRelativeForce(
                pushDirection * throwSpeed, ForceMode.Impulse);

            //reset hand position
            playerHand.transform.localPosition = Vector3.zero;

            //we are no longer holding the object
            isHoldingObject = false;

            //toggle interact button
            input.isInteracting = false;
        }
    }
Change Color on Raycast Hit


In the process of developing the RB controller, one of the functions I wanted to incorporate was highlighting objects when hit by the raycast. This has been on the dev list for some time, so this basic prototype is well overdue.

The starting point for this functionality is this forum post:

https://answers.unity.com/questions/1112741/how-do-i-change-the-raycasthit-material-back.html

This functionality has been highlighted in a previous post about the Pick Up and Throw functionality, but one bug in the setup was related to the colors ‘sticking’ and not reverting to their original state:

Tangled up in blue – previously selected objects not reverting to their original color

Turns out the reason for this unwanted behaviour was attaching the function to the main player manager and trying to read and store color values on a case-by-case basis.

By moving the script on to the object itself and allowing that to check for raycast hits, the ‘sticky color’ problem has been resolved while simultaneously streamlining my logic:

  • Each ‘interactable’ object can now have bespoke color values even though they all share the same class
  • No need to compare tags as the interaction is calculated by the object

Testing color change on hit using multiple objects and colors

Here’s the related code snippet in the object class – only called if the object is hit:

[Header("Colors")]
public Color defaultColor;
public Color highlightColor;
public Transform previousObject = null;
new Renderer renderer;

void Start()
{
    //Get the renderer component on this object:
    renderer = GetComponent<Renderer>();
}

void Update()
{
    //is the raycast hitting THIS object?
    if (playerManager.rayCastHit == this.transform)
    {
        //Check if this hit is the same as the stored hit:
        if (previousObject != this.transform)
        {
            //store hit object:
            previousObject = this.transform;

            //change hit object color:
            renderer.material.color = highlightColor;
        }
    }

    //if no raycast hit:
    else
    {
        //Reset hit object material
        renderer.material.color = defaultColor;

        //Clear reference
        previousObject = null;
    }
    ............
}

This basic functionality can now be extended to use a better highlighting effect by utilising Unity’s Shader Graph to produce an edge glow effect. This effect will target the material (not just the color) and offer better representation of selected objects.

I will also post about changing the Interactable Object class to a Scriptable Object that should allow these objects to be instantiated with custom data including materials/textures and bespoke UI messages/information.
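As a rough idea of the direction that ScriptableObject change might take, here’s a hedged sketch – the asset type and field names are illustrative, not the final design:

```csharp
using UnityEngine;

//Sketch of what a per-object data asset might hold: each interactable
//instance could reference one of these for its materials and UI text.
[CreateAssetMenu(menuName = "Interactables/InteractableData")]
public class InteractableData : ScriptableObject
{
    public Material defaultMaterial;
    public Material highlightMaterial;

    //bespoke UI message for this object:
    [TextArea] public string uiMessage;
}
```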

Rigidbody Based Pick Up and Throw


As the Rigidbody version of the Player continues to develop (full functionality breakdown to follow) the concept of being able to pick up and throw interactable objects now has some solutions.

(It’s also worth noting here that the prototype illustrated also has a change-color-on-raycast-hit function that I’ll cover elsewhere.)

This added functionality has been on the dev list for a while, so it’s good to get a working prototype up and running.

Development and Code

The basic pseudo code looks something like this:

if isInteracting && !isHoldingObject
check the raycast hit object is Interactable
move the object to the player

isHoldingObject = true
toggle isInteracting

if isInteracting && isHoldingObject
AddForce to rigidbody based on PlayerCamera transform

isHoldingObject = false
toggle isInteracting

That seems simple enough, but I quickly ran into issues, specifically around getting used to handling rigidbodies in FixedUpdate().

This answer on Unity forum was a great help:
https://answers.unity.com/questions/1650650/throwing-object-with-force.html

But before the specifics: one obvious issue with my pseudocode was that the move function would put the selected object on top of the player – so first I needed to set up an offset that would also act as a target to which any selected object could be moved.

The easiest way to do that was to create an Empty at +10u on player.z:

  • The player hand is the target for any selected objects
  • It’s a child of the player camera, in order to inherit the position and rotation that equate to the ‘look’ direction/rotation of the player. Used to set throw direction.
  • It has a trigger that detects when the selected object has reached the ‘hold’ position
  • Its layer is set to ‘Ignore Raycast’ (a built-in layer in Unity), which allows the raycast from the player to ignore this object
  • An attached script is used for OnTriggerEnter monitoring

Once this was set up I was able to work on the PickUpandThrow script attached directly to the Player object. After a few days of serious wrestling with this script, below is a working prototype with plenty of commentary that explains the process step by step:

public class PickUpandThrow_PlayerRB : MonoBehaviour
{
   //Get references.....

    private void Start()
    {
        input = GetComponent<InputManager>();
        playerManager = GetComponent<PlayerManager>();
        cam = GetComponentInChildren<Camera>();
        audioSource = GetComponentInChildren<AudioSource>();

        playerHand = GameObject.Find("PlayerHand").transform;
        playerHand_CollisionDetector =
            FindObjectOfType<PlayerHand_CollisionDetector>();
    }

    private void FixedUpdate()
    {
        //Check player hand object for collision with hit RB:
        hitObjectIsAtHand = playerHand_CollisionDetector.hitInteractable;

        if (input.isInteracting && !isHoldingObject)
        {
            //we can pick up the object if we can see it:
            if (playerManager.rayCastHit != null
                && playerManager.rayCastHit.CompareTag("Interactable"))
            {
                //set the RB of target object received from the raycast
                //in PlayerManager:
                hitObjectRigidBody =
                   playerManager.rayCastHit.GetComponent<Rigidbody>();

                //toggle interact button
                input.isInteracting = false;

                //we are now holding the object
                isHoldingObject = true;

                //play audio pick up FX:
                //This clip needs to be adjusted based on difference between
                //hand and object (* pitch)
                audioSource.PlayOneShot(audioClip[0]);
            }
        }

        //Pick up the object
        if (isHoldingObject && !hitObjectIsAtHand)
        {
            //make isKinematic to cancel physics interactions:
            hitObjectRigidBody.isKinematic = true;

            //correct orientation ready for throwing
            hitObjectRigidBody.MoveRotation(Quaternion.Euler(Vector3.zero));

            //move towards player hand
            hitObjectRigidBody.MovePosition(Vector3.MoveTowards(
                hitObjectRigidBody.position,
                playerHand.position,
                pickUpSpeed * Time.fixedDeltaTime));
        }


        if (isHoldingObject && hitObjectIsAtHand)
        {
            //if object has reached player hand, disable its movement
            //by 'teleporting' it into position. 
            hitObjectRigidBody.MovePosition(playerHand.position);
        }

        if (input.isInteracting && isHoldingObject)
        {
            //at this point the object is being thrown so re-enable
            //all physics properties by setting isKinematic = false:
            hitObjectRigidBody.isKinematic = false;

            //Fire the object away (forward)
            //Add force relative to camera in order to take account
            //of rotation.x ('look up')
            hitObjectRigidBody.AddRelativeForce(
                (cam.transform.forward * throwSpeedForward)
                + (cam.transform.up * throwSpeedUp), ForceMode.Impulse);

            //we are no longer holding the object
            isHoldingObject = false;

            //toggle interact button
            input.isInteracting = false;

            audioSource.PlayOneShot(audioClip[1]);
        }
    }
}

One of the main issues here was getting the selected object to move to the PlayerHand position. This came down to a lack of understanding of how MovePosition() works – if the rigidbody has isKinematic set to false, Rigidbody.MovePosition works like transform.position = newPosition and ‘teleports’ the object to the new position, rather than performing a smooth transition over fixed time.
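To make that distinction concrete, here’s a minimal sketch (field names are illustrative, not the project code). With isKinematic set to true the call below interpolates the body towards the hand over the physics step; with it set to false the same call snaps the body straight to the target:

```csharp
using UnityEngine;

public class MovePositionSketch : MonoBehaviour
{
    public Rigidbody rb;        //the held object's Rigidbody
    public Transform target;    //e.g. the PlayerHand
    public float speed = 5f;

    void FixedUpdate()
    {
        //Kinematic: smooth, physics-friendly movement towards the target.
        //Non-kinematic: acts like transform.position = newPosition (a teleport).
        rb.MovePosition(Vector3.MoveTowards(
            rb.position, target.position, speed * Time.fixedDeltaTime));
    }
}
```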

The other referenced script of interest here is the PlayerHandCollisionDetector attached (quite logically) to the PlayerHand:

public class PlayerHand_CollisionDetector : MonoBehaviour
{
    PickUpandThrow_PlayerRB pickUpandThrow;
    public bool hitInteractable;

    public void Start()
    {
        //Get the RB from pickUpAndThrow NOT playerManager!!!!
        pickUpandThrow = FindObjectOfType<PickUpandThrow_PlayerRB>();
    }

    private void OnTriggerEnter(Collider collider)
    {
        if (pickUpandThrow.hitObjectRigidBody != null)
        {
            //Check if the incoming collider is the SAME as the stored
            //rigidbody from pickUpandThrow - the RB is constant as its
            //NOT being updated by the raycast.

            //This comparison will avoid false reading from the bool that arose
            //from using the rayCastHit object from playerManager
            if (collider.transform == pickUpandThrow.hitObjectRigidBody.transform)
            {
                //toggle the hit  - BECAUSE THIS IS A TRIGGER!!!!!
                hitInteractable = !hitInteractable;
            }
        }
    }
}

It’s easy to see some of the issues I had here! The point is that the rigidbody we need to compare against is derived from the PickUpandThrow class, as the reference there is stable – as opposed to getting the reference from the PlayerManager, where it is constantly being updated by the raycast.

Summing up

While this setup is functional enough there are some issues that need to be ironed out:

Kinematic Objects don’t care about your colliders!!!
  • The selected object can move through other colliders when being ‘carried’ by the player. This is due to rigidbody.isKinematic being set to true on selection. A solution that comes to mind is to create another empty child of the Player with a box collider that dynamically scales to cover the area of Player + Object. This collider could be activated when hitObjectIsAtHand?
  • Thrown objects sometimes go in unexpected/inaccurate directions. This is due to the AddForce call on the rigidbody being calculated using the camera’s transform direction * a hard-coded value, instead of calculating the up value based on camera.rotation.x
  • The standalone nature of this element of the player will be addressed by refactoring it into a method called from PlayerManager. In this way, PlayerManager makes conditional calls to other classes, which ‘should’ help with efficiency.
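One possible fix for the throw direction issue (untested in this project – the names here are assumptions) is to derive the whole impulse from the camera’s forward vector, which already tilts upward as the player looks up, rather than scaling a hard-coded up value:

```csharp
using UnityEngine;

//Sketch: build the throw impulse entirely from the camera's forward
//vector so the 'look up' pitch is included automatically.
public class ThrowSketch : MonoBehaviour
{
    public Rigidbody heldBody;
    public Camera cam;
    public float throwSpeed = 10f;

    public void Throw()
    {
        //cam.transform.forward already includes camera.rotation.x,
        //so no separate up multiplier is needed:
        heldBody.AddForce(cam.transform.forward * throwSpeed, ForceMode.Impulse);
    }
}
```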
Custom Colliders for RigidBodies


While making good progress developing a new RB version of the player (which I’ll post about elsewhere), I stumbled on the issue that the Daleks solved back in the 80s – stairs!!!

After some searching through the forums I found the simplest solution (these things usually end up being quite simple) that I wanted to record here.

The issue is that when importing/using custom objects, Unity will automatically generate a mesh collider for the object using the object’s base mesh. That’s great and makes for realistic interactions… if you’re using the character controller. Not so great when using an RB-based controller, as the collision will stop the player dead.

At the start of this project that was one of my reasons for going with a CC based controller, but I wanted to experiment with a RB controller and I’ve found that I prefer it – especially for physics interactions that seem much more fluid, realistic and immersive.

At this point, my RB player is using about 150 lines of code in a player manager, along with a few classes (of fewer than 60 lines each) handling things like movement, power updates, raycasts, and writing some outputs to the UI. This compares to my CC main class, which was over 880 lines of code plus extra classes for raycasting etc.

In fact the new RB controller does everything the old CC-based controller did, but with a better ‘feel’, less code (and therefore fewer bug-tracking issues) and fewer (ahem…) ‘physics malfunctions’.

  • I use the term ‘fewer physics malfunctions’ as opposed to ‘no physics malfunctions’… where the term ‘malfunction’ is best defined as shouting ‘What the hell is going on?!’ quite loudly…

So that’s all good, apart from the stairs.

The solution is simply to replace the mesh collider generated by Unity with a primitive collider (or a series of them – in this case box colliders) as required to ‘approximate’ the overall shape of the mesh.

Colliders are added as Empty children of the mesh and adjusted using their transforms to create a RB friendly shape:

Hierarchy view of stairs and colliders
Side view: first collider is just a box (in green) that covers the last (topmost) stair
Side View: 2nd collider is a rotated box collider (in green) that changes the ‘surface’ into a ramp
Example of RB player navigating stairs. Left stairs have mesh collider, right stairs have simplified colliders that create a ramp.

So that’s it – the simple primitive-based series of colliders transforms the stairs into a ramp that an RB can climb.
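In the project the colliders are set up by hand in the editor, but the same arrangement could be sketched in code like this – sizes, rotation and names are purely illustrative:

```csharp
using UnityEngine;

//Sketch: an empty child with a rotated BoxCollider that turns the
//stair profile into a ramp the RB player can climb.
public class StairRampSketch : MonoBehaviour
{
    void Start()
    {
        var ramp = new GameObject("RampCollider");
        ramp.transform.SetParent(transform, false);

        //tilt the box so its top face forms the ramp surface:
        ramp.transform.localRotation = Quaternion.Euler(-30f, 0f, 0f);
        ramp.transform.localPosition = new Vector3(0f, 0.5f, 0f);

        var box = ramp.AddComponent<BoxCollider>();
        box.size = new Vector3(1f, 0.1f, 2f);   //thin, ramp-length box
    }
}
```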

Some extra trickery can help achieve ‘stair-ness’ such as running a movement script/animation on the camera/player on entering the collider that takes into account current velocity (now easily accessible from the RB player!) and adds an appropriate bob/move/climb action.
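That ‘stair-ness’ idea could be wired up with a trigger on the ramp collider – the sketch below is an assumption of how it might look, with hypothetical component and parameter names:

```csharp
using UnityEngine;

//Sketch: drive a camera bob/climb animation from the player's current
//velocity while the player is inside the ramp trigger.
public class StairBobSketch : MonoBehaviour
{
    public Rigidbody playerBody;
    public Animator cameraAnimator;

    void OnTriggerStay(Collider other)
    {
        if (other.attachedRigidbody == playerBody)
        {
            //velocity is easily accessible from the RB player:
            float speed = playerBody.velocity.magnitude;
            cameraAnimator.SetFloat("climbBob", speed);
        }
    }
}
```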

Just to prove the usefulness of this approach (especially as a dev tool), here’s the custom collider for the arrow attached to the RB player that I’m using to debug and test the controller:

Hierarchy of colliders
View of colliders (in green) on arrow

The only issue I immediately foresee is the depth to which the 2nd collider ‘juts’ through the ground plane. Not a problem in my sandbox scene, but it may become a problem when developing an environment with multiple levels. In that case the colliders may start to act on objects below them and provoke some weird behaviour. One solution may be to add more, gradually smaller colliders to the stairs so the depth of ground penetration can be minimised.

Refactoring UI Elements


I started this post with just the reference pic above almost 6 weeks ago, so I’ve been trying to remember exactly what I was trying to tell myself here. Fortunately for me I did annotate the code so I’ve managed to backtrack and figure things out.

As with many posts during a period of review and testing this is again about extensibility, reusability and ease of use.

The issue was the size of the UI prefab – it had become a bit of a goliath, with many interdependent elements (objects) controlled via the centralised UI controller class. This meant the UI was a pain to instantiate in a scene: the references were difficult to ‘find’ and so interwoven with one another that any bug tracking or recycling/repurposing was becoming a chore. What I’m always after is rapid prototyping, and that means trying to make things that are, for the most part, drag and drop.

I’ve read so many times across the Unity fora about the importance of keeping things as simple and independent as possible – I think I’ve written a lot about it too – but as ever the best teacher is failure, and that’s exactly what my giant prefab brought to the fore.

So as of this latest iteration of the UI, all child elements are separated out into their own classes and saved as individual prefabs. This has simplified matters and allowed the whole thing to become more extensible and flexible in terms of functionality:

Overview of the UI elements

This new arrangement of prefabs can now be used in a modular way, and new elements should be able to be introduced with a minimum of fuss, as everything is pretty much separate – each element of the UI exists as its own class containing its own functions, which can be referenced and called from anywhere.

For example, the prefab element called BroadcastMessages contains:

  • its own class
  • a text (TMP) object
  • a background object
  • an animator component

The class itself becomes very simple to manage, as its functionality applies only to its children. That functionality is accessed via a function that displays (and animates) incoming text, addressed to it from elsewhere by calling:

BroadCastMessages.IncomingBroadcastMessages(message);

public class BroadCastMessages : MonoBehaviour
{
    [Header("BROADCAST MESSAGES")]
    TMP_Text broadcastMessages_Text;
    Animator animator_broadcastMessages;

    void Start()
    {
        //GAME BROADCAST MESSAGE (LARGE CENTRAL)
        broadcastMessages_Text = GetComponentInChildren<TMP_Text>();
        animator_broadcastMessages = GetComponentInChildren<Animator>();
        broadcastMessages_Text.enabled = false;
    }

    //public method for receiving messages:
    public void IncomingBroadcastMessages(string message)
    {
        broadcastMessages_Text.text = message;

        //the text is never null after the assignment above, so test for
        //an empty/missing message instead:
        if (!string.IsNullOrEmpty(message))
        {
            broadcastMessages_Text.enabled = true;
            animator_broadcastMessages.SetBool("broadcastMessage_FadeUp", true);
        }
        else
        {
            animator_broadcastMessages.SetBool("broadcastMessage_FadeUp", false);
            broadcastMessages_Text.enabled = false;
        }
    }
}
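As a usage sketch, any other class can push a message to this element given a reference to the BroadCastMessages instance – the caller class and the trigger that fires it here are assumptions for illustration:

```csharp
using UnityEngine;

public class BroadcastCaller : MonoBehaviour
{
    BroadCastMessages broadCastMessages;

    void Start()
    {
        //grab the UI element's class - same referencing approach
        //used elsewhere in these posts:
        broadCastMessages = FindObjectOfType<BroadCastMessages>();
    }

    void OnTriggerEnter(Collider other)
    {
        //push a message to the UI:
        broadCastMessages.IncomingBroadcastMessages("Checkpoint reached");
    }
}
```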

The upshot is that the UI is now a system of prefab elements, each responsible (in the main) for its own part of the display using no more than 30–40 lines of code:

Outline of UI structure using individual prefabs with individual classes

The only complication/extension in the display I am currently using is the ScreenToWorldPoint method (outlined in this post) that displays a line linking information boxes to specific objects identified and passed from the raycast function in PlayerInteraction(). This display method can be seen in the screen grab at the top of this post.

The information (text) is passed from the raycast hit object to either the RHS or LHS messages class (depending on the objects’ tag).

The information is then displayed using the ScreenToWorldPoint method which I have now simplified by moving into the class UI_Manager – the same function is called by both the RHS and LHS Messages classes so this saves duplication:

public class UI_Manager : MonoBehaviour
{
   ........

    public void FadeUpLineRenderer(LineRenderer lineRenderer, Transform rayHitTransform,
        TMP_Text text)
    {
        //Fade up and link text to the object in world space
    }

    public void FadeDownLineRenderer(LineRenderer lineRenderer, TMP_Text text)
    {
        if (alpha_lineRenderer > 0)
        {
            //Fade down
        }
    }
}

So this class (UI_Manager) gets called by either the LHS or RHS messages class, which provides references to a LineRenderer, a Transform and the text. The LHS/RHS messages classes are themselves accessed using a unique function.

The snippet below is from the LHS_Messages class. It is a slightly more complex version of the BroadcastMessages class, but the basic functionality is the same – messages are passed to it using the call:

LHS_Messages.IncomingMessages(message)

Then the appropriate elements are passed to the function in UI_Manager:

UI_Manager.FadeUpLineRenderer

or

UI_Manager.FadeDownLineRenderer

public class LHS_Messages : MonoBehaviour
{
..................

//Incoming messages public method
    public void IncomingMessages(string message)
    {
        LHS_messageText.text = message;
    }

    void Update()
    {
        //If no raycast hits turn off the displays
        if (playerInteraction.rayCastHitObject == null)
        {
            //fade down UI window
            LHS_messageAnimator.SetBool("LHS_Panel_FadeUp", false);

            //set the rayHitTransform to null
            rayHitTransform = null;

            //Fade Down LineRenderer:
            uI_Manager.FadeDownLineRenderer(lineRenderer, LHS_messageText);
        }

        //if raycast hit display if tag is "Interactable"
        if (playerInteraction.rayCastHitObject != null
            && playerInteraction.rayCastHitObject.CompareTag("Interactable"))
        {
            //turn on this element
            LHS_messageText.enabled = true;
            lineRenderer.enabled = true;

            //Fade up via Animator
            LHS_messageAnimator.SetBool("LHS_Panel_FadeUp", true);

            // set up the generic Transform component:
            rayHitTransform = playerInteraction.rayCastHitObject;

            //ADD THE LINE
            uI_Manager.FadeUpLineRenderer(lineRenderer, rayHitTransform,
                LHS_messageText);
        }
    }
}

Summing up: this approach and refactoring has helped achieve a more extensible, easier-to-track/debug and generally much more flexible series of prefabs/classes that can be adapted, reused, repurposed and rewritten as circumstances require.

For example, the UI_Manager class is now nothing more complex than an extra function holder that can be logically extended to include and execute any repeated functions that occur within the UI as a whole. I do believe I’m not far away from understanding Properties here, so no doubt at some point down the line I’ll have an Aha! moment and write a new post referencing and ridiculing my clumsiness in this one…
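For what it’s worth, here’s a small sketch of where Properties could slot in – a lazily cached, read-only reference instead of a field assigned in Start(). This is a general C# pattern, not anything from the project, and the names are illustrative:

```csharp
using UnityEngine;

public class PropertySketch : MonoBehaviour
{
    LineRenderer lineRendererCache;

    //callers read Lr; the getter caches the component on first access,
    //so no Start() assignment is needed:
    public LineRenderer Lr
    {
        get
        {
            if (lineRendererCache == null)
                lineRendererCache = GetComponent<LineRenderer>();
            return lineRendererCache;
        }
    }
}
```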

That, plus perhaps a more important point for me: by trying to employ a more sensible, simplified approach to writing the code in the first place, I’m able to come back after a complete absence of some 6 weeks and take only 20 minutes or so to see what I’m doing.

Knowing what I’m trying to do is mostly a good thing…..leaving easy to read comments is definitely a good thing.

Better Referencing in Unity


After several months of development, keeping things as simple as possible has become increasingly important.

Even with only a few elements in the game sandbox, it’s sometimes been a struggle to quickly iterate and change functionality without breaking relationships and spending hours trawling through the code trying to find bugs, and/or having to reattach or redefine references.

In my recent elevator script development this took an extreme turn as the relationships between objects became overly complex and lacked centralised control, so I spent some time looking at ways to simplify the process.

Component references:

Rather than constantly dragging and dropping game object and component references into the Inspector, the following methods have proved to be useful:

void Start()
        {
            //Cache references in Start() to avoid repeated lookups every frame:

            //get component attached to this game object:
            component = GetComponent<Type>();

            //example:
            lineRenderer = GetComponent<LineRenderer>();

            //Using GameObject.Find is useful to search the scene but can lead
            //to performance issues, so cache the reference once in Start():

            reference = GameObject.Find("Name");

            //this can be used in tandem with GetComponent:
            //example from the UI:
            //GET THE GO and derive elements from that:
            LHS_messageObject = GameObject.Find("LHS_Messages");

            //Attached components:
            LHS_messageRectTransform = LHS_messageObject.GetComponent<RectTransform>();
            LHS_messageText = LHS_messageObject.GetComponent<TMP_Text>();
            LHS_messageAnimator = LHS_messageObject.GetComponent<Animator>();

            //FindObjectOfType:
            //This method is useful for finding a specific component by type,
            //like other classes:

            reference = Component.FindObjectOfType<Type>();

            //example grabbing the player interaction class in the UI:
            playerInteraction = GameObject.FindObjectOfType<PlayerInteraction>();
        }

These methods have been useful in referencing objects/classes, and when used in tandem with arrays they've made it quick to create and edit the core functionality of the controller class.

        //example using arrays of components:         
        elevatorMechanics = GameObject.FindObjectOfType<ElevatorMechanics>();

        doors = new Transform[elevatorMechanics.numStoreysInThisBuilding];
        doorAnimators = new Animator[elevatorMechanics.numStoreysInThisBuilding];
        doorIsOpen = new bool[elevatorMechanics.numStoreysInThisBuilding];

        for (int i = 0; i < elevatorMechanics.numStoreysInThisBuilding; i++)
        {
            if (transform.GetChild(i).name.Contains("Placeholder"))
            {
                doors[i] = transform.GetChild(i);
                doorAnimators[i] = doors[i].GetComponentInChildren<Animator>();
                doorIsOpen[i] = false;
            }
        }

Saying that, it's important to mention (again) that having a centralised point (object/class) within the hierarchy that deals with all the 'thinking' of the structure is massively important. Using this approach during the elevator development I was able to cut down hundreds of lines of code across 5 classes to fewer than 50 lines across 4 of the attached scripts (dealing with individual functions like the button control panel, call buttons, triggers and doors), plus a longer 'controller' class that deals with the core mechanics of moving, UI, calculating floors and triggering events.

In this hierarchical structure the controller class is ‘aware’ of its own immediate environment in terms of how it should be affected in the world plus a deeper awareness of its children – doors, platform, buttons etc.

In turn, the child elements are aware of nothing except their own functionality and pass that functionality on to the controller class.
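A rough sketch of that parent/child relationship (all class and method names below are hypothetical, not the actual project code): a child knows only how to report its own event, and the controller decides what to do with it.

```csharp
using UnityEngine;

// Child: aware of nothing except its own functionality.
public class ElevatorDoor : MonoBehaviour
{
    private ElevatorController controller;

    void Start()
    {
        // Reach up to the controller once, at startup.
        controller = GetComponentInParent<ElevatorController>();
    }

    private void OnTriggerEnter(Collider other)
    {
        // Pass the event upward; no decision-making here.
        controller.OnDoorTriggered(this, other);
    }
}

// Parent: the 'brain' that owns all the decision logic.
public class ElevatorController : MonoBehaviour
{
    public void OnDoorTriggered(ElevatorDoor door, Collider other)
    {
        // Central place to decide how the structure reacts.
    }
}
```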

As a final note on referencing (now that I've veered off into architecture), this post is worth mentioning for an extension method that finds objects within a hierarchy by name: https://answers.unity.com/questions/183649/how-to-find-a-child-gameobject-by-name.html

// As an extension method this can be called directly on any GameObject,
// e.g. GameObject door = elevator.GetChildWithName("Door_01");
public static class GameObjectExtensions
{
    public static GameObject GetChildWithName(this GameObject obj, string name)
    {
        Transform childTrans = obj.transform.Find(name);
        return childTrans != null ? childTrans.gameObject : null;
    }
}

Elevator Prefab


After more than 2 weeks of wrestling with this idea, illness and the slaughter of more than 2500 orcs in Shadow of War (go go Batman in Mordor!!!) the elevator prefab is now functionally complete.

The idea was to create a functional prefab that would take input from the Inspector and set up arrays of elements allowing the player to interact via:

  • elevator call buttons: one on each floor to call the elevator to that floor if not present
  • a control panel of buttons that allows the player to select the desired floor
  • an array of doors that open and close as the elevator reaches their floor

The Inspector takes arguments for the number of floors and the floor height in order to set up an array of floors, indexed by floor number and holding each floor's height. Each floor is modular, modelled in and imported from Blender. At this point the structure is built in the Inspector to allow easy access to editing functions:


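The floor array setup can be sketched roughly like this, reusing the numStoreysInThisBuilding field seen earlier (floorHeight and floorHeights are hypothetical names, not the actual project code):

```csharp
using UnityEngine;

public class ElevatorMechanics : MonoBehaviour
{
    // Set via the Inspector:
    public int numStoreysInThisBuilding = 4;
    public float floorHeight = 3f;

    // Index = floor number, element = that floor's height in world Y.
    private float[] floorHeights;

    void Start()
    {
        floorHeights = new float[numStoreysInThisBuilding];
        for (int i = 0; i < numStoreysInThisBuilding; i++)
        {
            floorHeights[i] = i * floorHeight;
        }
    }
}
```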
A note on riding platforms:

After finding some really nasty jitter issues with the character when riding rigidbody platforms, I found a solution via this thread and derived the following script. It is attached to the rigidbody collider object in the scene (the elevator platform). The main lines are:

  • other.transform.SetParent(this.transform) – sets the GO OnTriggerEnter as a child of the RB object (the GO is the player, though not explicit in this example)
  • other.transform.SetParent(null) – removes the parent OnTriggerExit
    private void OnTriggerEnter(Collider other)
    {
        numObjsInTrigger += 1;

        //In order to stop the jittering caused by riding platforms
        //set the other as a child of the empty parent of the platform
        other.transform.SetParent(this.transform);

        //Catch trigger events caused by any of the attached GOs
        //whose names contain 'Elevator' and return (nullify)
        //the trigger action. Also remove the registered
        //numObjsInTrigger call
        if (other.gameObject.name.Contains("Elevator"))
        {
            numObjsInTrigger -= 1;
            return;
        }

        if (numObjsInTrigger >= elevatorMechanics.numObjsToTrigger)
        {
            if (other.transform.CompareTag("Player"))
            {
                //Send the bool message
                playerIsInElevator = true;
            }
        }
    }

    private void OnTriggerExit(Collider other)
    {
        numObjsInTrigger -= 1;

        //remove the parenting
        other.transform.SetParent(null);

        //This stops activation when there is still more
        //than one object in the trigger
        if (numObjsInTrigger < elevatorMechanics.numObjsToTrigger)
        {
            if (other.transform.CompareTag("Player"))
            {
                //send the bool message
                playerIsInElevator = false;
            }
        }
    }

Outcomes:

This process has served as a great learning experience especially in terms of organising hierarchy and how that relates to script functionality. In the current iteration of this prefab the top level element (Elevator) controls all movement/animation functions by taking references from children.

This has resulted in an efficient code structure that ensures control functions are accessed within the parent object – the ‘brain’ of this structure – by reaching out to the child elements for references to individual elements – doors, control panel, call buttons and trigger events.

In this way the only loops running at Update exist in the parent object – all children may run loops to set up their own arrays at Start but have no update function. This centralisation of functionality will make this prefab extensible in the future – e.g. while this moves a platform on Y, it would be straightforward (until Quaternions!) to make a platform move on X/Z.
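As a hedged sketch of that extension (PlatformMover, moveAxis and speed are hypothetical names, not project code): exposing the movement axis as an Inspector field would let the same centralised logic drive an X/Z platform as easily as the Y elevator.

```csharp
using UnityEngine;

public class PlatformMover : MonoBehaviour
{
    // Set in the Inspector: Vector3.up for an elevator,
    // Vector3.right or Vector3.forward for an X/Z platform.
    public Vector3 moveAxis = Vector3.up;
    public float speed = 2f;

    void Update()
    {
        // Identical movement logic regardless of the chosen axis.
        transform.position += moveAxis * speed * Time.deltaTime;
    }
}
```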

Player: UI Info Overlay


Solution for displaying interactable object information on the UI, based on https://forum.unity.com/threads/get-ui-placed-right-over-gameobjects-head.489464/

    //send the ray from the **CAMERA** (not the player)
    rayDirection = camera.transform.TransformDirection(Vector3.forward);
    //point of origin is LOOK - from the camera
    ray_pointOfOrigin = camera.transform.position;
    if (Physics.Raycast(ray_pointOfOrigin, rayDirection,
        out interactableObjectHit, scanRange))
    {
        //Change the color based on object in range
        //Check if the hit obj has Interactable Tag
        if (interactableObjectHit.transform.CompareTag("Interactable"))
        {
            crosshair.color = Color.red;
            //Get the transform of the hit object
            rayCastHitObject_transform = interactableObjectHit.transform;
            //Get the name of the hit object
            rayCastHitObject_name = interactableObjectHit.transform.name;
            if (interactableObjectHit.transform == null)
            {
                rayCastHitObject_name = null;
            }
        }
    }
    else
    {
        crosshair.color = Color.white;
        //change the name to null so we can differentiate a
        //no hit event in the UI controller
        rayCastHitObject_name = null;
    }
/*********************************************************************
    RAYCAST INFO
    Running in Update to ensure we are updating what we're looking at.
    We are receiving raycast info from the PlayerInteraction script that
    gives us the name of the object we've hit (are looking at) using:
    rayCastHitInfo = hit.transform.name;
    We can use this info in the UI to display the name and info about
    the object we're currently looking at.
    *********************************************************************/
    if (playerInteraction.rayCastHitObject_name != null)
    {
        //Turn on the components we need in UI
        canvasGroup_interactableObjUI.enabled = true;
        panel_interactableObjUI.enabled = true;
        text_interactableObjUI.enabled = true;
        lineRenderer_hitObjectToUI.enabled = true;
        //add the line
        lineRenderer_hitObjectToUI.startWidth = 0.001f;
        lineRenderer_hitObjectToUI.endWidth = 0.01f;
        //Set up a new V3 to hold the position of the UI element:
        Vector3 UI_element_transform = new Vector3(
            //x and y will attach to pivot points of the UI object set in the Inspector
            text_interactableObjUI.rectTransform.position.x,
            text_interactableObjUI.rectTransform.position.y,
            //Add 1.0f on z else we won't see the line
            1f
            );
        //No. of points on the line (start/end = 2)
        lineRenderer_hitObjectToUI.positionCount = 2;
        //start at position of the UI obj (rayInfo) and change it to World position
        lineRenderer_hitObjectToUI.SetPosition(0, cam.ScreenToWorldPoint(UI_element_transform));
        //second point is position of the object hit by the raycast in playerInteraction
        lineRenderer_hitObjectToUI.SetPosition(1, playerInteraction.rayCastHitObject_transform.position);
        /*********************************************************************
        Add the text
        -this needs to pull in pre-stored text, like a database, that depends
        on the name of the hit object - playerInteraction.rayCastHitObject_name
        *********************************************************************/
Player: Audio Management


After plenty of headaches with audio sources, it seems that this solution is the best option:

It’s possible to play multiple sounds at once with ONE AudioSource. You can play up to 10-12 clips at once (only) by using PlayOneShot(). With it, Unity mixes the audio output from the audio clips into a single channel (which is why it’s limited to 10-12 clips at once).

The key here is ONLY USING PlayOneShot(). The problem comes when using Play() and PlayOneShot() from the same AudioSource.

So the player in the glitch machine has two AudioSources, one for movement and one for SFX, which can play simultaneously. The AudioSources are assigned in the Inspector:

    public AudioSource audioSourceMovement;
    public AudioSource audioSourceFX;

       /**********************************************
         MOVEMENT AUDIO (Audio Source 01)
        **********************************************/
        if (!audioSourceMovement.isPlaying)
        {
            audioSourceMovement.volume = 0.05f;
            audioSourceMovement.pitch = (float)(1.0f +
                (playerDistanceUp * 0.01) + (playerSpeed * 0.01f));
            audioSourceMovement.clip = moveAudio;
            audioSourceMovement.Play();
        }

	/***********************************
         FX AUDIO (Audio Source 02)
        ***********************************/
        if (!audioSourceFX.isPlaying)
        {
            audioSourceFX.pitch = 1f;
            audioSourceFX.volume = 0.2f;

            if (isBoosting)
            {
                audioSourceFX.PlayOneShot(boostAudio);
            }

            if (playerMovement.current_Power <= 0)
            {
                audioSourceFX.PlayOneShot(outOfPower);
            }
          
           //etc......etc.........

Set up a toggle button using the Input Actions Manager


Set up a toggle button using the Input Actions Manager: derived from https://forum.unity.com/threads/new-input-system-how-to-use-the-hold-interaction.605587/

/**********************************************************************
USE THIS AS A DEFAULT WAY OF SETTING UP A TOGGLE!!!
***********************************************************************/ 

if (controls.Gameplay.crouch.triggered)
{
    isCrouching = !isCrouching; //toggle
}
//This will toggle 'crouch' on and off. Also make sure that the
//controls.Gameplay.crouch.performed/cancelled callbacks ARE NOT
//subscribed in Awake()