Holiday Laziness & Employment

Happy New Year! Had I written a post when I was supposed to, a “Merry Christmas” would have been in order, but it is what it is.

I’ve got some great news. I’ve been hired! Woo-hoo! The position is an entry-level software engineer (programmer). I’m incredibly excited to get started, so that I can continue to expand my knowledge of programming. The paycheck doesn’t hurt either :). In the future, I would like to write a post on my experience with the interview process, and highlight some things I feel led to my success. Hopefully, it will help some of y’all still looking, or those of you seeking new opportunities outside of your current employer.

“But, what’s been going on with the game?”, you ask? I knew you were eager to find out ;). Well, progress has been pretty slow over the past few weeks. This is partially to blame on the holiday season, and partially (mostly) to blame on my addiction to Dragon Age: Inquisition (so good *drool*).

The Good

Both Nate (my teammate) and I decided to get together for a day and work on the project collaboratively. He decided to break ground on modeling our characters (planning on one male and one female), and I continued to hack away at the movement code. It cannot be stressed enough how effective this strategy is. We were able to cover more ground in one five-hour session than we ever had in a week working independently. I attribute the success to motivating one another, because we could both see the results of the other person’s work. It also kept each of us honest. If one of us hit a wall and needed a break, it was spent constructively scoping out what the other individual had accomplished, rather than skimming through Facebook for the next half hour.

I am most excited about the thing in which I have no involvement: beginning to model our characters! Nate broke ground on the female model first. Through some manner of wizardry, Nate imports his reference images into Blender and transforms them into this:

Torso of Female model.

As stated, this is just the beginning of the modeling process, which is why there is little to no definition to the model. I am still very excited to see this part of the game getting started, and cannot wait to update you all further on its progress. The code update is fairly minimal, and may even come off as boring to some. Nevertheless, the additions, which revolve around refining acceleration and deceleration, were greatly needed.


The two highlighted “if” statements, if you remember, were once a single statement. This, we found, caused the player avatar to freeze in place whenever Speed exceeded MaxSpeed, which happened often. It would remain frozen as long as input continued (the player kept swiping down), and resumed as normal once input stopped and the deceleration method executed. Thus, the offending check was moved into its own “if” statement, which, when true, sets Speed to MaxSpeed. One problem solved.

Another portion in the first statement worth mentioning is when we set the Acceleration variable equal to a set of values. This was to make the acceleration look more gradual, and therefore lifelike. There exists a similar set of logic within the “Decelerate” method as well, but it may be unnecessary in the end. The foundational logic is there, and it looks pretty good. Some more tweaking with the numbers wouldn’t hurt though.
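For those who can’t make out the screenshot, the logic boils down to something like this: a simplified, Unity-free sketch in plain C#. The method names, the idea of passing the frame delta in as dt, and the placeholder numbers are illustrative, not our actual script:

```csharp
using System;

// Simplified model of the accelerate/decelerate logic described above.
// Speed, MaxSpeed, Acceleration and Deceleration mirror our variables,
// but everything else here is a stand-in for the real Unity code.
static class SpeedModel
{
    // Called while downward swipe input is active.
    public static float Accelerate(float speed, float maxSpeed, float acceleration, float dt)
    {
        if (speed < maxSpeed)
            speed += acceleration * dt;   // gradual, lifelike ramp-up
        if (speed > maxSpeed)
            speed = maxSpeed;             // the second "if": clamp instead of freezing
        return speed;
    }

    // Called when no input is detected.
    public static float Decelerate(float speed, float deceleration, float dt)
    {
        speed -= deceleration * dt;
        if (speed < 0f) speed = 0f;       // never drift backwards
        return speed;
    }
}
```

The second check is the important one: clamping to MaxSpeed instead of bailing out of the whole statement is what stopped the freeze.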

The Bad

Yes, it was good that we sat down and broke some serious ground as a team. However, I think that is the only time we worked on the project between then and now. Matters are only likely to get worse, with me newly hired and him still occupied by his job, family and school work. We may have to hold more scheduled pow wows in the future in order to see this thing through.

The rest of “The Bad” involves running into more coding problems, of course. I had some extra time after finishing up the acceleration and deceleration bits, and decided to break ground on getting our avatar to move right and left of the path, depending on where the player’s fingers were on the screen. It did not go well, but it wasn’t a total disaster either.

Governance for moving left and right. Housed in the updateSpeed() method.

It should be stated that the logic depicted above actually works as intended. It will move the avatar depending on where the fingers are located. However, when playtesting, the avatar will jitter in the direction of the finger, and then reset along the path. Thinking that something within the MoveCharacter() method was interfering, I temporarily disabled the whole method. Sure enough, the player moves towards where the finger is located.

Okay, it is a good thing that the movement logic works, but we have two additional problems to deal with now.

1.) The avatar was free to move anywhere it wanted to. We only wanted the avatar to be able to move a set distance to the left and right of the path.

2.) The camera, since it is a child of the avatar, follows the avatar along that axis. We only want it to follow the avatar’s forward movement. This means we will have to remove it as a child of the avatar, and write an additional script specifically for the camera’s movement with regard to the avatar.
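A sketch of where I think both fixes will land, in plain C# without any Unity types. The MaxOffset band and the forward/lateral axis split are assumptions at this point, not settled code:

```csharp
using System;

// Problem 1: keep the avatar within a fixed band around the path's
// centerline. Problem 2: let the camera copy only the avatar's forward
// (path) axis, keeping its own lateral position.
static class LateralFix
{
    public static float ClampLateral(float offset, float maxOffset)
    {
        if (offset >  maxOffset) return  maxOffset;
        if (offset < -maxOffset) return -maxOffset;
        return offset;
    }

    // Returns the camera's new (forward, lateral) position: forward
    // tracks the avatar, lateral stays where the camera already was.
    public static (float forward, float lateral) CameraFollow(
        float avatarForward, float cameraLateral)
    {
        return (avatarForward, cameraLateral);
    }
}
```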

What’s Next

For me, the big focus will be resolving the two issues mentioned in “The Bad”. Nate will continue to chip away at the female model in his free time. Collectively, we need to come up with a course of action. Setting a time and day of the week where we can both sit down and hack at this seems like the best way to go for the foreseeable future.

– Andrew Medhurst


C# – I’m Getting Better At This

What I’ve Been Up To

My educational background indicates that I am technically a game designer, but through this project, you would have thought that I went to school for programming. I’ve become so proficient at leveraging C# that I feel confident in applying to a few entry-level developer positions. This doesn’t mean I’ll get them, but I feel like I could do them.

That doesn’t mean that I’ve stopped applying to Game Designer positions, because trust me, I have been…A LOT! As a matter of fact, I’ve been applying to anything closely related to video games. UI/UX designer, Game Tester, .Net Developer, nothing is off the table. When you are poor, you don’t really get to be picky.

On another note, I’m working on giving my portfolio a new home! This is what I have so far.


Pretty impressive, huh? Well, it isn’t much to look at right now, but there is certainly more to come. In the end, the idea is for the page to be a portfolio piece in and of itself. If you’d like to see my humble homepage in the flesh, feel free to hop on over to

The Game

My update for the game is going to be pretty short this week. I was able to get the avatar to run along our path. It also accelerates when input is identified, and decelerates when none is present. This basic functionality is as far as it goes right now. In the coming weeks, the feature will be altered in an effort to make it feel more natural and intuitive.

If you look at the inspector section on the right-hand side, you’ll see the global “Speed” variable gradually increases by the value of “Acceleration” as the user swipes downward on the mobile device. It then decreases by “Deceleration” when input is no longer received.

For now, we have a very primitive solution for getting the main camera to follow the avatar as it moves along the path. It is attached as a “child” object to the player avatar, and will follow it wherever it goes.


Main Camera object is housed under the Player (avatar), causing it to follow the Player.

The Code

Now for how it works:


Now with comments! Hopefully, this will make the code easier to digest.

As usual, all of this code resides in the updateSpeed() function (I really need to change that name), which is then called in the Update() function.

Let me just say that TouchPhase is my savior. Without it, this would probably be much more complicated, and take much longer to figure out. My goal is to have the comments in the image describe what is going on. In a nutshell, when finger 1 moves in a downward direction (a swipe), Speed is updated at a rate of (Speed times Acceleration) per second. When no fingers are detected, the code controlling the rate of Deceleration is executed.

Now, located underneath all of this is the formula that actually pushes the avatar forward. It invokes transform.Translate, which updates the position of the avatar along the X-axis by the current Speed value over Time.deltaTime (seconds), with Speed itself driven by our first finger’s deltaPosition. The zeroes represent no change to the Y or Z axes.
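In code terms, each frame boils down to roughly this. It’s a Unity-free model of the updateSpeed() plus Translate pairing; the sign convention for a downward swipe and the parameter names are assumptions for illustration:

```csharp
using System;

// One frame of the runner: a downward swipe's deltaPosition feeds Speed,
// and Speed times deltaTime feeds position (the Translate step).
static class RunnerFrame
{
    public static (float speed, float position) Step(
        float speed, float position,
        float swipeDeltaY, float acceleration, float deceleration, float dt)
    {
        if (swipeDeltaY < 0f)                     // finger moving down: accelerate
            speed += -swipeDeltaY * acceleration * dt;
        else                                      // no input: decelerate toward 0
            speed = Math.Max(0f, speed - deceleration * dt);

        position += speed * dt;                   // the transform.Translate step
        return (speed, position);
    }
}
```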

Coming Up…

As stated previously, this week will revolve around getting movement just right. This will include setting some sort of delay for when no input is detected, so that the avatar will not lose speed in between swipes. After no input is detected for a second or two, the deceleration code will execute. Both the Acceleration and Deceleration variables will likely be multiplied by another very small float value, in an effort to give both features a more gradual look and feel. Right now they seem rather rigid and sudden. MaxSpeed will also be revised, as the avatar seems to move pretty fast at a fraction of that value already.
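Roughly what I have in mind for that delay, as a Unity-free sketch. The one-second window and the timer bookkeeping are assumptions until it’s actually implemented:

```csharp
using System;

// Grace-period model: hold Speed steady for a short window after input
// stops, and only then start bleeding it off.
static class SwipeGrace
{
    public const float GraceSeconds = 1f;

    // Returns (speed, timeSinceInput) after one frame.
    public static (float speed, float idle) Step(
        float speed, float idle, bool inputThisFrame,
        float deceleration, float dt)
    {
        if (inputThisFrame)
            return (speed, 0f);                   // reset the timer, hold speed

        idle += dt;
        if (idle >= GraceSeconds)                 // grace period over: decelerate
            speed = Math.Max(0f, speed - deceleration * dt);
        return (speed, idle);
    }
}
```

With this in place, a gap between swipes shorter than the window costs the player no momentum at all.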

In the event that all of that gets wrapped up in a timely manner, I’ll begin exploring how to make the avatar strafe left and right, depending on how far away finger 1 is relative to the avatar’s location. This feature will be used to dodge obstacles, once those are implemented.

As always, thanks for your continued interest in this project, and thanks for reading!

– Andrew Medhurst

Cube-atar has ups!

First of all, I want to take a moment to celebrate that I am publishing another blog post only 7 days after my previous one. This may not seem like a big deal to most, but I like to celebrate small victories.

With that out of the way, I’m also pleased to say that the milestone for this week was achieved. Our avatar can jump! Hence, the title of this post. The mechanic works just as described in my previous post. When two fingers are pressed to the screen at the same time, the cube jumps and then lands on the ground.

In this post, I’ll elaborate on this mechanic, as well as delve into the fundamentals of our avatar’s forward movement.


Just as a reminder, this is a function native to Unity’s API, and is used to run the code within it at each frame. I have used it to run two other functions, updateSpeed() and MoveCharacter().


The lines of code within these two functions could technically reside directly in Update(). We decided to separate them into individual functions, then call them in Update(). This is intended to keep things looking clean and organized. The final line in Update() is designed to move the avatar forward along the avatar’s x-axis when the variable “Speed” is greater than 0. This will likely move into the updateSpeed() function in the future, as it looks a bit out of place in its current location.

updateSpeed() – Jumping

This is the hub for all of our touch functionality as of right now. The name of the function doesn’t accurately reflect that, but that is because its original purpose was solely to update the Speed variable. Obviously, things change.


Lines 60 through 65 give the avatar the conditions that need to be met in order to jump. Before I break down the code, we are going to need some background on Input.GetTouch(0) and  TouchPhase.Began in order for this to make sense.

Input.GetTouch(integer): When a finger is pressed to the screen, it is stored in an array, which assigns that touch a specific ID. When a finger is removed, it is deleted from the array. We can recall this touch with GetTouch(integer). Since arrays are zero-indexed, GetTouch(0) is equal to the first touch detected.

TouchPhase.Began is the phase reported on the first frame a touch is detected. It executes for that single frame alone, and will not continue running, regardless of how long that finger remains pressed to the screen.

Now, look at the associated “if” statement. In English, it says that, “If 2 fingers are detected against the screen AND if the Touch Phase Began for finger 1 while finger 2 is also touching the screen, execute the jump.”

This should indeed make the cube jump. However, there is one small problem…

There appears to be nothing in place to determine what constitutes a single jump. This means that the player can continuously tap the screen with two fingers and essentially fly through the entire game world, which is most certainly not the mechanic we’re aiming for.

Luckily, the issue can be resolved with the placement of a simple boolean called “canJump”. Its default value is set to true.


canJump, which must be true in order to jump, is set to false when a jump is executed. It is then set back to true when the avatar lands.

The floor checking code, which resides in the MoveCharacter()* function, essentially asks if the character’s Y position (up and down) is on par with the floor’s Y position. If it is, the jump is considered finished, and the player can execute a jump once again. Now the jump looks more like something we would want.
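Here is the whole gate modeled in plain C#, with touchCount and secondTouchBegan standing in for Input.touchCount and the TouchPhase.Began check, and a small epsilon for the “on par with the floor” test. All three names and the epsilon are my stand-ins, not the real script:

```csharp
using System;

// Model of the jump rules: two fingers down, second touch just beginning,
// and canJump true triggers a jump; landing re-arms canJump.
class JumpGate
{
    public bool CanJump = true;
    public bool Airborne;

    // Returns true if the jump executes this frame.
    public bool TryJump(int touchCount, bool secondTouchBegan)
    {
        if (touchCount == 2 && secondTouchBegan && CanJump)
        {
            CanJump = false;                      // no flying via repeated taps
            Airborne = true;
            return true;
        }
        return false;
    }

    // The floor check living in MoveCharacter(): landing re-arms the jump.
    public void CheckFloor(float avatarY, float floorY)
    {
        if (Airborne && Math.Abs(avatarY - floorY) < 0.01f)
        {
            Airborne = false;
            CanJump = true;
        }
    }
}
```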

* Unfortunately, that is all that will be shown of the MoveCharacter(). Much of the code consists of example code from an asset called iTween, which is not free. I have to respect the work of others.

I know you can’t see it, but you are just going to have to trust that I’m actually tapping a screen to get it to jump ;).

updateSpeed() – Movement

The next portion of updateSpeed() is intended to update the global Speed variable by the change in position of a finger that is swiped in a downward direction. Technically, this feature is broken right now. iTween’s pathing feature is somehow interfering with its execution.


Even still, the logic remains the same. A local variable called nbtouches is declared on line 70. This keeps track of how many fingers are currently being pressed to the screen. In all honesty, we could likely do without the for loop that follows, but it is there as a fail-safe. On line 75, if we have more than zero touches but fewer than 2, we call GetTouch(i).

Remember GetTouch? We are basically calling for the first finger that is touching the screen once again. This time, though, we are not listening for a second finger. Instead, we are listening for a change in the deltaPosition (location) of that finger over the course of Time.deltaTime (seconds). That formula updates Speed, and thus updates the transform.Translate formula we have running in Update(). Pretty cool, huh?
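Stripped of the Unity types, the fail-safe looks something like this, with deltas[] standing in for each touch’s deltaPosition.y. A sketch, not the real script:

```csharp
using System;

// Touch-count fail-safe: only when exactly one finger is down do we read
// its vertical delta into Speed.
static class TouchGate
{
    public static float UpdateSpeed(float speed, float[] deltas, float dt)
    {
        int nbtouches = deltas.Length;            // Input.touchCount stand-in
        for (int i = 0; i < nbtouches; i++)
        {
            if (nbtouches > 0 && nbtouches < 2)   // exactly one finger
                speed += -deltas[i] * dt;         // downward delta is negative
        }
        return speed;
    }
}
```

Note that a second finger on the screen makes the inner check fail, so a two-finger jump never accidentally feeds the speed.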

Up Next…

The biggest focus is going to be getting the forward movement and iTween path to intertwine and work in concert correctly. In my experience, this is harder than it sounds. Taking an external asset and code, and trying to marry it up with your native work is surprisingly mind-numbing. Getting the avatar to accelerate and decelerate in a believable, organic looking manner should take up a good chunk of time as well.

Additionally, we may encounter an issue with jumping while moving forward. We just want to ensure that the movement velocity carries over to jumping properly.

Thanks for tuning in and for your continued interest!

– Andrew Medhurst

The Hiatus

Obviously, my original intention of providing weekly updates on my current project didn’t hold up. It has been, oh, 4 months since my initial post announcing my bold undertaking? Calling it a hiatus would be an understatement. I wish a legitimate excuse existed, but there is none. Sometimes life just runs off on you.

My absence, however, should not be interpreted as abandonment of our small team’s cause. We have been at work. Most of our progress involves sifting through Unity’s API in an attempt to piece together our game’s mechanics via C# scripts. We have also been sifting through Unity’s asset store in an attempt to find resources that may provide us with preexisting tools that we could use. You know, working smarter, not harder and all that.

It goes without saying that an undertaking of this kind rarely runs smoothly, and our case is no different. In this post, I will go through (hopefully keeping it as brief as possible) the steps we’ve taken, the steps we’ve retraced, and some of the challenges we are immediately facing.

The First Prototype

For whatever reason, despite our platform of choice for the game’s final release being mobile devices, I pushed us in the direction of creating our first prototype using the keyboard as an input method. This did help us get something working on the screen, but it ended up being a waste of time, as the API commands for touch input are vastly different from those for simple key presses on a keyboard. Thankfully, we switched input methodologies prior to completing the prototype, which means the prototype is still in the works.

Despite all of this work now being technically archived, I want to go through it, and explain how some of the scripts will act as a sort of template for the work that now needs to be done for touch input.

Let’s start from ground zero. This is our player avatar:


Pretty sophisticated stuff, right? Okay, maybe not, but it gets the job done for the time being.

The floor started out with a simple white texture, which we stuck with for a while. Once we started implementing movement, however, it was difficult to determine if the avatar was behaving the way we wanted it to. Thus, we opted for a checkered texture so it was easier to see if the avatar was even moving at all.

Now that we had our avatar, what exactly did we want it to do? Well, we knew we wanted it to move forward. Since our game was of the “endless runner” type, we weren’t concerned with being able to move backwards. The simplest way to achieve this would be attaching a Rigidbody component to the avatar. Then, a script is created and attached to the avatar.


Now some code is needed within the script to actually get the cube moving.


This basic script starts out with declaring a public variable called speed. It is public so that it may be altered within the editor at run time. That way, if the avatar is moving too quickly or too slowly, this float value can be edited without having to revisit the script itself.

The rest of the code resides in Unity’s built-in “FixedUpdate()” function. The code within this function runs at each fixed frame, and is used in favor of the “Update()” function when dealing with Rigidbodies. Most objects in a game will need to be continuously refreshed and updated during the game. Thus, most will find their way into one of these two functions.

First up is an if statement saying that, “if either the W key or E key is pressed, do something”. That something is applying force in the forward direction (Z axis) of a Vector3. This force is applied by a rate of our Speed variable over the course of Time.deltaTime (change of time in seconds). The logic behind assigning two keys is an attempt to represent a sort of running movement with two fingers in the best way we could on a keyboard. We later added booleans that would enable one of the keys after the other was pressed, and disable the same key once it was pressed (i.e. E pressed = E disabled, W enabled).
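The boolean dance looks roughly like this as a standalone model. Whether both keys start enabled, and the exact member names, are assumptions on my part; the toggle-on-press behavior is the part from our script:

```csharp
using System;

// Alternating-key "running" scheme: pressing W disables W and enables E,
// and vice versa, so the player has to alternate fingers like footsteps.
class AlternatingKeys
{
    public bool WEnabled = true;
    public bool EEnabled = true;

    // Returns true if the press counts as a stride (and force is applied).
    public bool Press(char key)
    {
        if (key == 'W' && WEnabled)
        {
            WEnabled = false; EEnabled = true;    // W pressed: W off, E on
            return true;
        }
        if (key == 'E' && EEnabled)
        {
            EEnabled = false; WEnabled = true;    // E pressed: E off, W on
            return true;
        }
        return false;                             // out-of-sequence press ignored
    }
}
```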

Another if statement states that if the space key is pressed down, a separate function called “Jump()” should be executed. Jump() applies a force in the Vector3.up (Y axis) direction by another public float called jumpSpeed.

iTween and the PlayerController

That is about as far as we got with using a rigidbody. We came across an asset called iTween. The examples provided by this asset showed a way to move the player avatar without the help of a rigidbody. We found this favorable, as it was easier to customize further additions to the code. Rigidbody components come with a host of preexisting characteristics (gravity, drag, etc.) that can interfere with desired behavior in the long run.

The key feature that we were interested in from iTween was its ability to attach a game object to a path. Once this is done, the object will move along the path while rotating to follow it. This is to simulate the avatar facing “forward” as it meets turns in the path. The player themselves will not control any rotation. I would show you exactly how this is done, but while iTween is free to download, the associated examples are not. As I do not want to infringe on anyone’s intellectual property, I will refrain from displaying and going through the code (plus, it is kind of long).

If you want to take a closer look at iTween, please feel free to check out the webpage. It has a lot of functionality, and comes in handy for a host of different applications.

Acceleration & Deceleration

We had the basics down. The player avatar moves forward along a path, but not very well. When running in the real world, an individual would gradually gain speed and eventually reach a top speed. Speed would conversely gradually decrease and stop if they ever stopped running. To make the game more lifelike, we wanted to mimic this behavior within the game.

Believe it or not, the bit of code below does just that.


Whoa! What’s going on here!?

For this particular feature we are concerned primarily with the MaxSpeed, Acceleration, and Deceleration public float variables. Speed is also important, but I’ve already explained what that guy is all about. These variables are all assigned an initial value, but are kept public so that they can be changed in the editor if needed.

The function labeled DetectKeys(), which is ultimately called in Update(), will set the rules for our acceleration/deceleration. The highlighted region says that (deep breath) if the W key is pressed AND Speed is greater than negative MaxSpeed (essentially a safeguard) AND Speed is less than MaxSpeed AND boolean moveForwardW is equal to true, accelerate the avatar by a rate of Speed plus Acceleration over the span of Time.deltaTime. If these conditions are not met, decelerate the avatar by a rate of Speed plus Deceleration over the course of Time.deltaTime until Speed is equal to 0. Got all that? This same logic is repeated for the E key.
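Flattened into a single testable function, that branch reads roughly as follows. Only the W half is shown, and the numbers in the test are placeholders; the shape of the conditions is the part taken from DetectKeys():

```csharp
using System;

// One frame of the DetectKeys() W-branch: accelerate only while W is held,
// Speed sits inside the (-MaxSpeed, MaxSpeed) band, and moveForwardW is
// true; otherwise decelerate toward zero.
static class DetectKeysModel
{
    public static float Step(float speed, bool wPressed, bool moveForwardW,
                             float maxSpeed, float accel, float decel, float dt)
    {
        if (wPressed && speed > -maxSpeed && speed < maxSpeed && moveForwardW)
            speed += accel * dt;                  // conditions met: accelerate
        else
            speed = Math.Max(0f, speed - decel * dt); // otherwise bleed to 0
        return speed;
    }
}
```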

The logic works, but not efficiently. Key presses had to be timed perfectly in order for the momentum to carry over to the next key press. If timed improperly, the avatar would decelerate to a stop, and the process would then need to be repeated until another mistake in timing was made. This was clearly not the behavior we wanted. You would need to be a veteran of the game in order to perceive these controls as intuitive. It was at this point that we opted to abandon prototyping the game with a keyboard, and made the move to coding for mobile touch input.

Up Next…

We are in the midst of transcribing our keyboard code over to touch. As of right now, Unity is recognizing how many touch inputs there are at a time, and we are capable of moving the avatar forward when swiping in a downward direction.

Over the course of the next week, we will be working on getting the avatar to jump when two fingers are tapped against the screen simultaneously. I promise to give a timely update from now on!

Thank you for taking the time to read my rantings, and for your continued interest in our little project.


Andrew Medhurst

The Beginning of a New Chapter: Becoming An Indie Game Designer

Okay! I have officially graduated from Full Sail University! Now what? Well, seeing as how most of the job boards are looking for game designers with at least 2 years of experience, and seeing how I am still a novice at the whole networking game, I decided to become an “Indie Developer”. Woo-hoo!

So, what does that mean for me, exactly? Well, I can’t do everything on my own, so I asked a very good friend of mine if he’d like to go on this journey with me. Thankfully, he said yes, because I am no good at art and he is awesome at it. This makes him our sanctioned artist.

Like everything in life, there are pros and cons to working in a small (super small in this case) team.

The Pros

  • We have complete creative freedom over what we want to do and create, the tools we use, and the pace that we take.
  • Not much gets lost in translation, and there isn’t a heavy emphasis on documentation/ record keeping, as there are only two parties involved.

The Cons

  • We aren’t getting paid for this.
  • Self-motivation sometimes really sucks.
  • I have experience with UDK. However, we opted to use Unity3D, which involves a learning process.
  • A host of other organizational software and tools are needed. Finding the right one is half the battle.

Nut-shell Concept

The concept of our game was inspired by my buddy’s wife. She recommended creating a simple app where the user could swipe their index and middle fingers on the screen of a mobile device to run on a treadmill. We took that and thought it would make an interesting platform/obstacle type of game. Now, granted, there are a host of “endless running” games out there on the iTunes and Google Play app stores, but I never came across one with the mechanic we were looking to adopt. Hence, “Running Game” (working title) was born.

Our Tools (so far)

Considering I had such a rough experience with UDK while I was a student, I wanted to work with Unity3D, which is widely regarded in the small-time indie realm of game development. We spent the first couple of months running through the video tutorials they have on the website. During this time, I fell in love with the C# programming language, which meant one more thing needed to be learned. If you are interested in Unity and never got around to testing it out, I highly recommend it. You can find the tutorials here:

Since my partner and I will be working on this project at different times and in different locations, we knew we needed some kind of version control. GitHub caught our interest, mostly because it is free, and because many others within the community use it. We then utilized it to act as the repository for our project. And finally, considering a GUI is always preferable to typing in a command prompt, we use SourceTree to push and pull our data to said repository. We are sorting our way through documentation, tutorials, and other known best practices for the mentioned tools, as Git can be pretty particular about what can be pushed and when. Do it wrong, and it throws a hissy fit.

Coming Up…

We finally broke ground on our first prototype this week! So far, we have a cube that moves… Hey, it is better than nothing, okay! I hope to update this site with our progress at the end of every week. I will be posting problems and solutions for everything I come across during this project, so stay tuned!

Final Project video walkthrough

I did it! I finally graduated from Full Sail University’s Game Design program. The last four months of the degree revolved around a team of 4 other students and myself creating our very own game from concept to realization. I learned a great deal about UnrealScript, communication, cooperation, utilizing Adobe Flash to create menus, and so much more. It really was an amazing and trying experience.

I decided to condense the experience into two video walkthroughs of the game, simply because there is so much going on behind the scenes. It would take ages to go over everything! If you have questions with regard to a specific feature, please let me know, and I will do my best to shed some light on the topic of interest. Enjoy!

Part 1

Part 2

Fun With UDK: Another Project

Like most other projects, this one ran the course of three weeks. The team consisted of five individuals, myself included, but this time the game had to feature a minimum of three separate levels or locations.

Thankfully, I was able to locate all of the critical assets for this project, so all textures and meshes should be visible.


The School Maze. Consists of the majority of the game.

The original concept consisted of a nerdy schoolboy, named Little Russ, trying to get through the day without getting pulverized by a bully. This bully went by the name “Big Boy Bruce”. Primarily, the game consists of the player engaging in dialogue with Big Boy Bruce, and then selecting a response to the situation with the numerical keys. The chosen response will affect certain stats, such as health and courage.

Response options listed on screen for dialogue scenarios. Breaks cardinal rule of obstructing the player's vision of the main action area, but the team couldn't figure out how to reposition the text.

When the player entered a dynamic trigger volume (used to assign checkpoints in the previous project), all movement would be disabled and an AI representing Bruce would approach. We had to use default skeletal meshes, as no one on the team knew how to create custom meshes in other software.


Notice how selecting “2” has changed the stats. I don’t know why we included an “out of 10” display on the HUD. A singular number that changes would have been sufficient.

After a response was selected, the related stats would update, and the AI would move out of frame and be destroyed. At this point, movement capability was restored to the player so that they may progress.

Kismet was utilized heavily to govern what executed and when. Kismet is a visual scripting language native to UDK. It really helps people like me who don’t grasp traditional coding very well. Each dialogue sequence needs a new AI to represent Bruce, with subsequent guidance on how the dialogue pans out. Therefore, there are 8 sequences similar to the one in the image below.

I’m going to cover Kismet quickly with images, as the previous post covered visual scripting in great detail. Once again, this was my domain of the project, and tested my abilities greatly.


One Kismet sequence for governing the spawning and movement of Bruce. Tells AI when to approach player and begin dialogue, when to allow the player to select a response, and what to do after a selection is made.

Sequence begins with spawning an enemy actor. This sequence spawns one once the level is loaded, but all others spawn once a specific trigger is touched.


When the player actor touches a certain trigger volume, the “Set Physics” node disables the player’s movement capability, tells the enemy AI to move to a specific location, and then commences dialogue on-screen.


When touched, the trigger enables the options for selecting a response to the dialogue.

The spiderweb mess above consists of toggle nodes that enable specifically delegated keys. When one of these keys is pressed, all of the options are toggled back off so that they don’t function again. An announcement is made to confirm the selection. The announcement node is then linked to custom-made nodes that alter the player’s stats on the HUD. These are not native to UDK, and had to be made in Microsoft Visual Studio using UnrealScript. This was not done by me, but by another member who is more savvy at scripting than I am. The enemy actor was then set to move to a set destination. After reaching the destination, the actor is destroyed and movement capability is restored to the player.

Operations of the Dodgeball Mini-Game

So, affecting the outcome of a game through dialogue choices is fun and all, but it doesn’t make for a very fun game all on its own. Therefore, we came up with a few mini-games to offer up some variety. One of these is “Dodgeball”.


Dodgeball Mini-game Room. Positioned so that you can see the camera angle and Interpolating Actors that will be animated for the player to dodge.

To create the balls, we had to use the spherical brush, add the volume to the level, and then convert them to Interpolating Actors. Those apple-looking icons on the ground act as relays that designate where the player appears when teleported to the room. A series of Dynamic Trigger Volume boxes surrounds the red outline; these tell the game when the player has moved out of bounds and the game should stop.


When a specified trigger is touched, the player is teleported to the dodgeball room. The camera is reassigned to the one in the previous image, a brief announcement is displayed, and the dodgeballs begin their animation sequence. The red lines mark the boundaries, and the game ends if the player crosses them.

If the player is struck by a ball, 1 unit of health is lost. After all the animations have completed, the player is instructed to press the spacebar to return to the main area of the game (the school maze), and the game continues from where it was interrupted.
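Those rules can be sketched out in a few lines of pseudocode-ish Python. The event names here are assumptions for illustration, not anything from UDK: a ball hit costs 1 unit of health, and touching an out-of-bounds trigger volume stops the game early.

```python
# Made-up sketch of the dodgeball rules; event names are illustrative.
def run_dodgeball(events, health):
    for event in events:
        if event == "ball_hit":
            health -= 1            # an InterpActor touched the player
        elif event == "out_of_bounds":
            return health, True    # Dynamic Trigger Volume: stop the game
    return health, False           # all matinees done; spacebar to leave

health, stopped = run_dodgeball(["ball_hit", "ball_hit"], 5)
```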


All Kismet used to execute Dodgeball

The Kismet sequence for guiding this mini-game is as follows:


A touched trigger teleports the player to the dodgeball room. Announcements play and a series of matinees execute. Afterwards, a trigger is teleported into the room.


The teleported trigger allows the player to teleport back to the school when the spacebar is pressed.


If one of the cyan-colored Dynamic Trigger Volumes is touched by the player, player movement is disabled, an announcement plays, and the trigger that will send the player back to the school is teleported into the room for use.


If the player is touched by one of the InterpActors (balls), 1 unit of health is removed from the HUD.

The Food Fight Mini-Game

The food fight uses the same ball mesh as the dodgeball mini-game. As stated previously, no one had the skills to create a proper “food” mesh, so we worked with what we had.

When the player teleports to the cafeteria, the camera shifts to a 2-dimensional side-scrolling perspective. Rotation keys are disabled and collision volumes are placed so that the player cannot deviate from a linear path. The player must make their way from one side of the room to the other. If they are struck by the food, 1 unit of health is deducted from the HUD.
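The effect of those collision volumes is simple to sketch: with rotation disabled and walls on either side, movement collapses onto a single axis. The coordinates and axis choice below are assumptions purely for illustration.

```python
# Made-up sketch of the food-fight movement restriction: sideways input
# is cancelled by the collision volumes, leaving a linear path.
def constrain_to_path(position, move, path_y=0.0):
    x, _ = position
    dx, _ = move
    # collision volumes cancel any deviation from the linear path
    return (x + dx, path_y)

pos = constrain_to_path((0.0, 0.0), (1.0, 5.0))  # sideways component lost
```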


Food Fight Mini-game Room

Starting location of the player when teleported to the cafeteria. Teleportation initiates when a trigger is touched by the player.


The game in action! I had already been hit by some “food” at this point, hence the low health.

When the player reaches the end, there is a trigger that teleports them back to the school. I decided not to show the Kismet for this one, because it is really disorganized and I don’t think it is properly linked together. Plus, I’ll have more visual scripting to show off soon.

Like right now!

End-Game Conditions

So, what’s the deal with the Courage and Health meters on the HUD? When the Health runs out (equals zero), the game is set to end and shut down. If the Courage meter is above 5 when the player reaches the exit to the school maze, they have successfully made it through the school day and win the game. If it is below 5, Little Russ gets beat up by the bully and the game is lost.
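Boiled down, the win/lose logic is just a couple of comparisons. Here is a minimal sketch using the thresholds stated above; note the post doesn’t say what happens at exactly 5 Courage, so this sketch assumes it counts as a loss.

```python
# Made-up sketch of the end-game conditions described above.
def end_state(health, courage, at_exit):
    if health <= 0:
        return "game over"  # Health ran out: the game shuts down
    if at_exit:
        # above 5 Courage wins; otherwise the bully beats up Little Russ
        return "win" if courage > 5 else "lose"
    return "playing"

result = end_state(3, 6, True)
```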


Kismet sequence to visualize the process of fulfilling and executing end-game conditions.

What could have been done better

We neglected to implement a tutorial for moving, selecting dialogue responses, and playing the mini-games.

If I had more time, I would have switched dodgeball around so that the player is the one trying to hit AI targets, rather than the one dodging the balls.

Custom meshes to represent food, and better balancing of the food fight’s projectiles. Some of them come in too quickly and feel unavoidable.

More time playtesting the game. Not everything works as intended, and this hurt us in the final deliverable of the game.