After finally getting around to turning my “Sloppy Joe” project into actual code, I spent a few hours here and there and worked out a simple React/Redux/Typescript project with a few hundred lines of code that had a pretty good start on the things I wanted it to do. It was time to push it up to GitHub.
I created the repo on GitHub, set my remote, and ran git pull from my project directory.
To my horror, it replaced my local master with the remote master, deleting everything that wasn’t specified by .gitignore. I have git pull --rebase as my default setting, and the day finally came when that bit me hard. Knowing that any such problem has always been encountered by someone else, I searched around a bit on how to recover from a git pull --rebase disaster, and I found this little gem.
I’ve been around the block on backups, version control, and sharing of data for my game development projects. As a hobbyist developer, paying for storage is entirely impractical, so I’ve tried a lot of things to balance workflow with free-ness.
TL;DR I now use a Raspberry Pi as a dedicated Git server. Go to the bottom for links to learn how.
Homebrew Engine Runaround
Nothing — On my earliest projects, like most people, I used no VCS. When I screwed up code, I had to unravel all my mistakes. Code refactors were a hellscape of fear.
Google Drive (round 1) — When I first started developing games at home, I would periodically zip the entire project up and put it on Google Drive.
👍 Sufficient space for free.
👍 Reliable server.
👎 Terrible workflow.
Perforce Repo — My codebase was getting way too big not to have proper version control. I was familiar with Perforce, and it was free to use locally on my machine, so I went with that.
👍 Free for individuals.
👍 Decent for code, if a little clunky.
👎 Astronomical overhead for team projects.
Google Drive (round 2) + Perforce — I later onboarded a friend to help with art. He needed to share assets with me and run the game. The engine code lived in Perforce on my machine only. New engine builds were added periodically to Google Drive, and the asset folder structure was synced onto Google Drive.
👍 With one coder and one artist, this worked fairly well as long as we didn’t often dip into the same spaces.
👍 Reliable, free hosting.
👎 Backup, but no version control functionality, even for text-based assets, like level scripts or maps.
BitBucket + Mercurial — A friend tipped me off to free version control hosting on BitBucket. This is where that project still lives today.
👍 Free private repos with multiple users!
👍 Plenty of space for smaller projects.
👍 Distributed version control means we can share the repo without relying on internet connection.
👎 Command-line use of Mercurial prohibitively intimidating for the artist.
TortoiseHg made it usable enough, though that is also pretty clunky.
👎 The concept of merging is still pretty intimidating to artists. I’ve really only seen this handled well in Perforce, which avoids it altogether with file locking.
Unity Engine Runaround
I started getting serious with Unity projects around 2015, when I left my job at Activision. At the time, there was no Unity Collaborate, so I started where I had left off, but swapped out Mercurial for Git.
Last year, I decided I wanted to revisit the shmup (shoot-em-up) genre, this time through the use of Unity, which would handle many of the things that ate up my time on the previous game. This time, there would be no worrying about physics engines, editors, scripting language integrations, shaders, or audio APIs. And I would think more carefully this time about my use of physics.
There were, however, some problems that physics presented for use in a shmup. In particular, shmup gameplay is all about precision and predictability. When you press the button or interact with the world, it needs to react consistently, or you risk punishing the player at random.
In a shmup, the player typically immediately moves at full speed when the joystick is fully extended in a direction. If you ramp the speed, the feeling of precision input is lost. This can be accomplished by placing the player ship explicitly each frame, but if you’re using a physics engine, you’ll lose the ability to have the player collide with enemies, the environment, and camera boundaries, and need to come up with another solution. Typically, this is solved in shmups by limiting playable space in code and literally blowing up the player when they touch anything. For my game, however, this is not the intended design.
I accomplished both goals by giving the player a dynamic Rigidbody2D, which the physics engine stops on collision. Player input is transferred into motion by directly setting the velocity of the player’s ship. This means that collision with other objects works correctly, and the speed of motion reacts immediately and predictably.
I still wanted to impart “blowback” physics on the player. Since velocity is set directly each frame, forces imparted by, say, an explosion, would immediately be negated on the next frame when velocity was re-applied. To enable this, I created a MonoBehaviour component called “PushModule” that contains an additive velocity property which is added to the input-derived velocity each frame. This allows both the “pushing” effect, and the “predictable input” effect to work together harmoniously.
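The player-motion scheme above can be sketched roughly like this. It’s a minimal illustration, not the game’s actual code: the decay behavior, field names, and values are all my assumptions.

```csharp
using UnityEngine;

// Additive "blowback" velocity, per the article's "PushModule" concept.
// The decay-toward-zero behavior is an assumption on my part.
public class PushModule : MonoBehaviour
{
    public Vector2 PushVelocity { get; private set; }

    [SerializeField] float decayPerSecond = 4f; // how fast a push bleeds off

    public void AddPush(Vector2 impulse) => PushVelocity += impulse;

    void FixedUpdate()
    {
        // Decay toward zero so the ship settles back to pure input motion.
        PushVelocity = Vector2.MoveTowards(
            PushVelocity, Vector2.zero, decayPerSecond * Time.fixedDeltaTime);
    }
}

// Player motion: velocity is set directly each physics step, so input
// reacts instantly, while the push velocity is added on top.
[RequireComponent(typeof(Rigidbody2D), typeof(PushModule))]
public class PlayerMotion : MonoBehaviour
{
    [SerializeField] float maxSpeed = 10f;
    Rigidbody2D body;
    PushModule push;

    void Awake()
    {
        body = GetComponent<Rigidbody2D>(); // dynamic body: stops on collision
        push = GetComponent<PushModule>();
    }

    void FixedUpdate()
    {
        // Full speed the instant the stick is fully deflected; no ramp-up.
        Vector2 input = new Vector2(Input.GetAxisRaw("Horizontal"),
                                    Input.GetAxisRaw("Vertical"));
        body.velocity = Vector2.ClampMagnitude(input, 1f) * maxSpeed
                        + push.PushVelocity;
    }
}
```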
Bullet collision took a great deal of research and time to land on a consistent solution. First, I tried setting up bullets as triggers, but I found that at low framerates, the bullets could skip right over a target without dealing any damage. Next, I tried removing the Collider2D and Rigidbody2D; each frame in FixedUpdate, I would raycast between the previous position and the new position. This mostly worked, but when a bullet was fired at a fast-moving object, there was still a chance they could skip over each other.
Continuous collision means that any time a Rigidbody2D would collide with another, even at fast speed and low frame rate, the collision is detected and correctly reported. It’s more expensive than Discrete collision, but in the case of a shmup, accuracy is extremely important.
As with all things in a shmup, bullets need to behave predictably, which makes them a great candidate to be moved by game code, not by the physics engine. But without using the physics engine, we cannot achieve Continuous collision detection, so what do we do?
The concept of a Kinematic Rigidbody2D is a bit difficult to describe, but ultimately, it means that it can affect others, but will not be affected by others. In other physics engines, I’ve heard this called “keyframed” or “infinite mass” collision. This allows the object to be placed by code while still being visible to the physics engine.
The last piece of the bullet collision puzzle came in the form of Rigidbody2D.MovePosition. In order for Continuous collision detection to work correctly when setting position by code, you need to use this function to update the body’s position, rather than setting the transform’s position. This informs the physics engine that a position change was made, allowing it to calculate everything that would have happened between the old and new positions.
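Put together, a bullet along these lines might look like the following sketch. It is a hedged illustration, not the game’s code; the speed value and names are assumptions.

```csharp
using UnityEngine;

// A code-driven bullet that stays visible to the physics engine:
// kinematic body, Continuous collision, moved via MovePosition.
[RequireComponent(typeof(Rigidbody2D))]
public class Bullet : MonoBehaviour
{
    [SerializeField] float speed = 30f;
    Rigidbody2D body;

    void Awake()
    {
        body = GetComponent<Rigidbody2D>();
        body.bodyType = RigidbodyType2D.Kinematic; // affects others, unaffected by them
        body.collisionDetectionMode = CollisionDetectionMode2D.Continuous;
        body.useFullKinematicContacts = true; // report contacts with other kinematic bodies
    }

    void FixedUpdate()
    {
        // MovePosition (not transform.position) lets the engine sweep the
        // body between the old and new positions, so fast bullets can't
        // tunnel through targets even at low frame rates.
        body.MovePosition(body.position
            + (Vector2)transform.up * speed * Time.fixedDeltaTime);
    }
}
```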
If you’ve played any old-school shmups, you know that enemy movement is predictable and precise, allowing you to plan your attack well. To achieve this, I wanted to make sure enemy movement was done through updating the position in game code, not through physics forces.
From here, I decided to scale back the notion of what mattered in an NPC’s physics. Since their motion is tightly scripted, the notion of stopping when they hit another object is meaningless. Because of this, I can get away with NPCs having a Collider2D but no Rigidbody2D. Motion is achieved entirely through script (following splines, ‘homing’ toward a player, or simply rocketing forward), and no physical simulation is necessary.
The one thing I did want was the ability to “blowback” some NPCs from explosions. To achieve this, I simply used the same “PushModule” as I had used on the player, this time interacting with the Transform rather than the Rigidbody2D.
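Applied to a Rigidbody2D-less NPC, the same “PushModule” idea might look like this sketch; every name and the decay behavior here are my assumptions.

```csharp
using UnityEngine;

// "PushModule"-style additive blowback for an NPC that has a Collider2D
// but no Rigidbody2D, so the push is applied through the Transform.
public class NpcBlowback : MonoBehaviour
{
    [SerializeField] float decayPerSecond = 4f;
    Vector2 pushVelocity;

    public void AddPush(Vector2 impulse) => pushVelocity += impulse;

    void Update()
    {
        // No Rigidbody2D: move the Transform directly. Scripted motion
        // elsewhere on the NPC is unaffected by this additive offset.
        transform.position += (Vector3)(pushVelocity * Time.deltaTime);
        pushVelocity = Vector2.MoveTowards(
            pushVelocity, Vector2.zero, decayPerSecond * Time.deltaTime);
    }
}
```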
Playing with Perspective
In assembling a particular boss fight, I ran into a problem that arose from the perspective. The game is rendered in perspective 3D, but played in flat 2D. This boss would move to one side of the screen and fire a laser beam. Due to the perspective of the camera, simply flattening the z-values of the laser’s start and end positions was insufficient. Doing it that way resulted in the action of the laser not lining up correctly with the visuals of the laser.
To fix this, instead of raycasting from the start position of the laser to the end position of the laser, I take the start and end positions of the beam and project them from their respective camera-parallel planes (not always the same plane) into what I call the “gameplay plane”: the camera-parallel plane in which the player flies.
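That projection can be sketched as follows: cast a ray from the camera through each endpoint and intersect it with the gameplay plane, so the projected point lands exactly where the endpoint appears on screen. All names and the plane-distance parameter are assumptions for illustration.

```csharp
using UnityEngine;

// Project a world-space point into the "gameplay plane": the
// camera-parallel plane the player flies in.
public static class GameplayPlaneUtil
{
    public static Vector3 Project(Camera cam, Vector3 worldPoint,
                                  float gameplayPlaneDistance)
    {
        // The gameplay plane faces the camera at a fixed distance
        // along the camera's forward axis.
        var plane = new Plane(
            -cam.transform.forward,
            cam.transform.position + cam.transform.forward * gameplayPlaneDistance);

        // A ray from the camera through the point crosses the gameplay
        // plane where the point *appears* to be from the camera's view.
        var ray = new Ray(cam.transform.position,
                          worldPoint - cam.transform.position);
        plane.Raycast(ray, out float enter);
        return ray.GetPoint(enter);
    }
}
```

The laser’s hit test then raycasts between the two projected points instead of the raw 3D endpoints.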
Player Motion vs. Auto-Scrolling
I had always had auto-scrolling segments as part of this game. For a tightly-scripted shmup experience, autoscroll is vitally important to keeping the pace as intended.
Without a physics engine, this is fairly simple: all player motion happens relative to the camera rather than the world. With a physics engine, though, it becomes more complicated.
For my initial implementation, I simplified the player physics vastly by keeping the camera totally stationary and moving the world around it. What I found, however, was that I was losing a great deal of goodness that you get from working with a static environment.
Rendering of static meshes can be optimized better than moving meshes.
If the camera doesn’t move, the skybox doesn’t move.
Enemies that move independently are no trouble, but enemies that interact with the environment introduce strange parenting structures when the environment is constantly moving.
How do we handle cases where we want to allow the player to explore? Do we keep the player still and move everything else? What does this mean for camera drift?
I had so much trouble getting the physics to play nice with camera movement in my original shmup game back in 2011 that I was dreading making the transition back to a dynamic camera. Unity3D’s core systems, however, offered me a cheap hack (parenting the ship to the camera) that seems to be working quite well so far.
I believe my next step from here is to skip the parenting element and simply recalculate the new position and orientation from the relationship between the camera Transform and the ship’s Transform, then apply them using Rigidbody2D.MovePosition and Rigidbody2D.MoveRotation. As it stands today, though, the parenting hack does not seem to have any significant adverse effects.
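That proposed next step might look something like this sketch: cache the ship’s offset in the camera’s local space, then re-apply it through the Rigidbody2D each physics step so the physics engine stays informed. Names are assumed, and this is untested against the actual project.

```csharp
using UnityEngine;

// Keep the ship's pose relative to the camera without parenting, by
// recomputing and re-applying it through the Rigidbody2D each step.
[RequireComponent(typeof(Rigidbody2D))]
public class CameraRelativeMotion : MonoBehaviour
{
    [SerializeField] Transform cameraTransform;
    Rigidbody2D body;
    Vector3 localOffset; // ship position in the camera's local space

    void Awake()
    {
        body = GetComponent<Rigidbody2D>();
        localOffset = cameraTransform.InverseTransformPoint(transform.position);
    }

    void FixedUpdate()
    {
        // Recompute the world-space target from the camera's current pose,
        // then let the physics engine move the body there so collisions
        // along the way are still detected.
        Vector3 target = cameraTransform.TransformPoint(localOffset);
        body.MovePosition(target);
        body.MoveRotation(cameraTransform.eulerAngles.z);
    }
}
```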
Do you have any physics tricks or tips that you’ve used in side-scrolling games in Unity3D? Let me know in the Facebook post!
It’s been a while since I really shared what was going on with Slonersoft. I’ve had a shmup in development for a long time. Helios Warp made a short debut at the Northwest Pinball and Arcade Show. I’ve done a lot of sketching as well. Will post more soon.