A Tale of `git pull --rebase` Horror (and so can you!)

After finally getting around to turning my “Sloppy Joe” project into actual code, I spent a few hours here and there and worked out a simple React/Redux/TypeScript project with a few hundred lines of code that had a pretty good start on the things I wanted it to do. It was time to push it up to GitHub.

I created the repo on GitHub, set my remote, and ran `git pull` from my project directory.

To my horror, it replaced my local `master` with the remote `master`, deleting everything that wasn’t specified by .gitignore. I have `git pull --rebase` as my default setting, and the day finally came when that bit me hard. Knowing that any such problem has almost certainly been encountered by someone else before, I searched around a bit on how to recover from a `git pull --rebase` disaster, and I found this little gem.

`git reflog` (documentation)

This spits out raw commit references for everything you’ve been doing. From there, I was able to `git checkout [SHA]`, create a new branch `app-setup`, and merge my new master up into that branch.
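The recovery can be sketched end-to-end in a throwaway repo. Everything below (file names, commit messages, the temp-dir layout) is illustrative rather than taken from my actual project:

```shell
# Simulate losing work to a bad pull, then recover it via the reflog.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email dev@example.com
git config user.name dev
echo "readme" > README.md
git add README.md && git commit -qm "initial commit"
echo "my app code" > app.ts
git add app.ts && git commit -qm "project work"
git reset -q --hard HEAD~1        # the "disaster": the work vanishes from master
git reflog                        # ...but the lost commit is still listed here
git checkout -q -b app-setup 'HEAD@{1}'   # branch from the pre-disaster commit
```

In real life you’d read the SHA of your lost commit out of the `git reflog` output instead of relying on `HEAD@{1}`.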

Since these two branches were effectively from two unrelated repos, Git refused to merge them and gave the following error:

fatal: refusing to merge unrelated histories

And after some minimal-effort googling, I found that I just needed `git merge [branch] --allow-unrelated-histories`.
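Here’s a minimal reproduction of that refusal and the fix, in a throwaway repo. The contents are made up, and (unlike my situation) the two READMEs are named differently so the merge goes through conflict-free:

```shell
# Two branches with no shared ancestor: merge refuses until the flag is given.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b master
git config user.email dev@example.com
git config user.name dev
echo "local readme" > README.md
git add README.md && git commit -qm "local history"
git checkout -q --orphan remote-master    # start a second, unrelated history
git rm -r -q --cached .
rm README.md
echo "remote readme" > README-remote.md
git add README-remote.md && git commit -qm "remote history"
git checkout -q master
git merge remote-master || true           # fatal: refusing to merge unrelated histories
git merge -m "merge histories" remote-master --allow-unrelated-histories
```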

I had to manually merge the README files, which was no big deal, since I intended to throw one out, and now everything is fine and my heart rate has returned to normal.

Centralized GameDev Version Control for Cheapskates Like Me

Or: How I learned to love my Raspberry Pi.

I’ve been around the block on backups, version control, and sharing of data for my game development projects.  As a hobbyist developer, paying for storage is entirely impractical, so I’ve tried a lot of things to balance workflow with free-ness.

TL;DR I now use a Raspberry Pi as a dedicated Git server.  Go to the bottom for links to learn how.

Homebrew Engine Runaround

  1. Nothing — On my earliest projects, like most people, I used no VCS.  When I screwed up code, I had to unravel all my mistakes.  Code refactors were a hellscape of fear.
  2. Google Drive (round 1) — When I first started developing games at home, I would periodically zip the entire project up and put it on Google Drive.
    1. 👍 Sufficient space for free.
    2. 👍 Reliable server.
    3. 👎 Terrible workflow.
  3. Perforce Repo — My codebase was getting way too big not to have proper version control.  I was familiar with Perforce, and it was free to use locally on my machine, so I went with that.
    1. 👍 Free for individuals
    2. 👍 Decent for code, if a little clunky.
    3. 👎 Astronomical overhead for team projects.
  4. Google Drive (round 2) + Perforce — I later onboarded a friend to help with art.  He needed to share assets with me and run the game.  The engine code lived in Perforce on my machine only.  New engine builds were added periodically to Google Drive, and the asset folder structure was synced onto Google Drive.
    1. 👍 With one coder and one artist, this worked fairly well as long as we didn’t often dip into the same spaces.
    2. 👍 Reliable, free hosting.
    3. 👎 Backup, but no version control functionality, even for text-based assets, like level scripts or maps.
  5. BitBucket + Mercurial — A friend tipped me off to free version control hosting on BitBucket.  This is where that project still lives today.  
    1. 👍 Free private repos with multiple users!
    2. 👍 Plenty of space for smaller projects.
    3. 👍 Distributed version control means we can share the repo without relying on internet connection.
    4. 👎 Command-line use of Mercurial was prohibitively intimidating for the artist.
      1. TortoiseHg made it usable enough, though that is also pretty clunky.
    5. 👎 The concept of merging is still pretty intimidating to artists.  I’ve really only seen this handled well in Perforce, by avoiding it altogether with file locking.

Unity Engine Runaround

I started getting serious with Unity projects around 2015, when I left my job at Activision.  At the time, there was no Unity Collaborate, so I started where I had left off, but swapped out Mercurial for Git.

  1. Bitbucket + Git — Bitbucket had adopted Git, which I was more familiar with, so I switched to that, using a .gitignore file specific to Unity.
    1. 👍 Git has a more mature toolset compared to Mercurial.
    2. 👎 Working alone, this was fine, but conflicts in assets are effectively unfixable.
    3. 👎 CLI version control is a bear for non-coders.
    4. 👎 Unity projects get big, fast, and Bitbucket has a “soft limit” of 1 GB and a “hard limit” of 2 GB.  Bring in a few store assets, and you’re beyond that in a hurry.
  2. Unity Collaborate — In 2016, Unity announced the beta for Collaborate.  A friend had been using it professionally and recommended checking it out (at least, over Git).
    1. 👍 It’s integrated right into the editor.
    2. 👍 It can do some amount of conflict resolution on assets.
    3. 👍 Workflow was much easier to understand for artists.
    4. 👍 During beta, they offered 15 GB of storage.
    5. 👎 In 2017, they dropped the free tier to 1 GB, which meant buying pretty much any store asset bumped you over the limit.  This was the final straw for me.
    6. 👎 Workflow was awful for coders — I think it’s built on Git, but it locks you out of access to nearly everything useful about Git.  Diffing, merging, branching, offline commits?  Nah.
  3. Personal Raspberry Pi Server + Git — I knew nothing about hosting my own git server, but I figured someone must, so like a good programmer, I typed my problem into Google and figured it out.
    1. 👍 Access to central repo from anywhere.
    2. 👍 As many users as my little Pi can support in traffic.
    3. 👍 Effectively unlimited storage.
    4. 👍 State-of-the-art version control for code with loads of 3rd party and open-source tools support.
    5. 👍 Distributed system for offline work.
    6. 👎 Still clunky for asset conflicts.
      1. The Unity team themselves have provided a merge tool, but I haven’t tried it yet.
    7. 👎 SD card failure rates are high on the Raspberry Pi; USB drives fail less often, but still fail.  Reliability can’t match that of cloud services.

How I Set Up My Raspberry Pi Git Server

I’m not going to go into detail on this part.  Plenty of people have explained this better than I can.  I’ll link you to the resources I used.

  1. Set up a Raspberry Pi git repo (Instructables).
    1. Note: use a USB drive, not an SD card, as SD cards tend to burn out under high traffic in Raspberry Pis.  It also allows you to pull the drive out if something goes wrong with your Pi.
  2. Set up an SSH key on your Raspberry Pi.
    1. This is technically optional, but it’d be foolish to skip.  You’ll be exposing your Pi to outside threats, so every bit of security is worthwhile.
  3. Assign your Raspberry Pi a static IP from your router.
    1. Combined with port forwarding on your router, this gives you direct access to your Raspberry Pi from anywhere.
  4. Update the ‘origin’ URL of your git repo to point at the new static IP instead of the inside-your-house IP.
    1. Remember setting the remote repo url in Instructables step above?  Do that again, but change out the previous IP address for your new static one.
  5. (optional) Give your Static IP a friendly DNS name.
    1. If you pay for site hosting, odds are your hosting service will let you assign a subdomain to it.  This can make it a little easier to work with — something like gitpi.mysite.com
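Tying the steps together: the server side boils down to a bare repository that clients push to over SSH. The sketch below stands a local directory in for the Pi; the paths, the `pi` user, and the repo name are illustrative (on a real Pi, the remote URL would look like `pi@gitpi.mysite.com:/mnt/usb/repos/mygame.git`):

```shell
set -e
work=$(mktemp -d)
# "Server" side: a bare repository.  On the Pi this would live on the
# USB drive (e.g. /mnt/usb/repos/mygame.git) and be reached over SSH.
git init -q --bare "$work/mygame.git"
# Client side: a project pushed to that "remote".
mkdir "$work/project" && cd "$work/project"
git init -q -b master
git config user.email dev@example.com
git config user.name dev
echo "# My Game" > README.md
git add README.md && git commit -qm "initial commit"
git remote add origin "$work/mygame.git"
git push -q origin master
```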

Shmup Physics in Unity3D

My 2011 homebrewed shmup engine included some unique physics features.

Last year, I decided I wanted to revisit the shmup (shoot-em-up) genre, this time using Unity, which would handle many of the things that ate up my time on the previous game.  This time, there would be no worrying about physics engines, editors, scripting-language integrations, shaders, or audio APIs.  And I would think more carefully about my use of physics.

Physics Decisions

Though some would say that there’s no question that you should avoid using a physics engine for a shmup, I wanted to keep some of the things that it afforded me.

  1. Well-defined ship & environment collision shapes.
  2. Continuous bullet collision (more on this later).
  3. “Blowback” effects from weapon fire & explosions.
  4. Interactive props.
  5. Dynamic camera boundaries.

There were, however, some problems that physics presented for use in a shmup.  In particular, shmup gameplay is all about precision and predictability.  When you press the button or interact with the world, it needs to react consistently, or you risk punishing the player at random.

Player Movement

In a shmup, the player typically moves at full speed the instant the joystick is fully extended in a direction.  If you ramp the speed, the feeling of precision input is lost.  Instant response is easy if you place the player ship explicitly each frame, but if you’re using a physics engine, doing so loses you the ability to have the player collide with enemies, the environment, and camera boundaries, so you need another solution.  Typically, shmups solve this by limiting playable space in code and literally blowing up the player when they touch anything.  For my game, however, that is not the intended design.

I accomplished both goals by giving the player a dynamic Rigidbody2D, which stops when it collides with other objects.  Player input is transferred into motion by directly setting the velocity of the player’s ship.  This means that collision with other objects works correctly, and the speed of motion reacts immediately and predictably.

The PushModule creates a sense of weight and reaction where the physics are faked.

I still wanted to impart “blowback” physics on the player.  Since velocity is set directly each frame, a force imparted by, say, an explosion would be immediately negated on the next frame when the velocity was re-applied.  To get around this, I created a MonoBehaviour component called “PushModule” that contains an additive velocity property, which is added to the input-derived velocity each frame.  This allows both the “pushing” effect and the “predictable input” effect to work together harmoniously.
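A minimal sketch of how such a component might look. The post doesn’t show the real PushModule, so the member names and the decay behavior here are assumptions of mine:

```csharp
using UnityEngine;

// Sketch of a "PushModule": an additive velocity that decays over time,
// layered on top of the directly-set input velocity each physics step.
public class PushModule : MonoBehaviour
{
    public Vector2 pushVelocity;   // current blowback velocity
    public float decayRate = 5f;   // how quickly a push fades (assumed value)

    public void AddPush(Vector2 impulse)
    {
        pushVelocity += impulse;
    }

    void FixedUpdate()
    {
        // Fade the push back toward zero so explosions feel like impulses.
        pushVelocity = Vector2.MoveTowards(pushVelocity, Vector2.zero,
                                           decayRate * Time.fixedDeltaTime);
    }
}

// In the player controller, input and push combine each physics step:
//   body.velocity = inputDirection * moveSpeed + pushModule.pushVelocity;
```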

Bullet Collision

This part took a great deal of research and time to land on a consistent solution.  First, I tried setting up bullets as triggers, but I found that at low framerates, the bullets could skip right over a target without dealing any damage.  Next, I tried removing the Collider2D and Rigidbody2D.  Each frame in the FixedUpdate function, I would Raycast between the previous position and the new position.  This mostly worked, but when a bullet was fired at a fast-moving object, there was still a chance they could skip over each other.

After conducting some research on the matter, I found that the solution came in the form of CollisionDetectionMode2D.Continuous along with Rigidbody2D.isKinematic.

Continuous collision means that any time a Rigidbody2D would collide with another, even at fast speed and low frame rate, the collision is detected and correctly reported.  It’s more expensive than Discrete collision, but in the case of a shmup, accuracy is extremely important.

As with all things in a shmup, bullets need to behave predictably, which makes them a great candidate to be moved by game code, not by the physics engine.  But without using the physics engine, we cannot achieve Continuous collision detection, so what do we do?

The concept of a Kinematic Rigidbody2D is a bit difficult to describe, but ultimately it means that the body can affect others, but will not be affected by them.  In other physics engines, I’ve heard this called “keyframed” or “infinite mass” collision.  This allows the object to be placed by code while still being visible to the physics engine.

The last piece of the bullet collision puzzle came in the form of Rigidbody2D.MovePosition.  In order for Continuous collision detection to work correctly when setting position by code, you need to use this function to update the body’s position, rather than setting the transform’s position.  This informs the physics engine that a position change was made, allowing it to calculate everything that would have happened between the old and new positions.
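Putting those pieces together, a bullet ends up looking something like this sketch (the component name, fields, and setup are my own illustration, not code from the project):

```csharp
using UnityEngine;

// A code-driven bullet that the physics engine can still see.
[RequireComponent(typeof(Rigidbody2D))]
public class Bullet : MonoBehaviour
{
    public Vector2 velocity;
    Rigidbody2D body;

    void Awake()
    {
        body = GetComponent<Rigidbody2D>();
        body.isKinematic = true;  // placed by code; affects others, unaffected by them
        body.collisionDetectionMode = CollisionDetectionMode2D.Continuous;
    }

    void FixedUpdate()
    {
        // MovePosition (not transform.position) so the engine sweeps the whole
        // path between the old and new positions for collisions.
        body.MovePosition(body.position + velocity * Time.fixedDeltaTime);
    }
}
```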

NPC Movement

NPCs move along precisely-scripted paths.

If you’ve played any old-school shmups, you know that enemy movement is predictable and precise, allowing you to plan your attack well.  To achieve this, I wanted to make sure enemy movement was done through updating the position in game code, not through physics forces.

From here, I decided to scale back the notion of what mattered in an NPC’s physics.  Since their motion is tightly scripted, the notion of stopping when they hit another object is meaningless.  Because of this, I can get away with NPCs having a Collider2D on them, but no Rigidbody2D.  Motion is achieved entirely through script (following splines, ‘homing’ toward a player, or simply rocketing forward), and no physical simulation is necessary.

The one thing I did want was the ability to “blowback” some NPCs from explosions.  To achieve this, I simply used the same “PushModule” as I had used on the player, this time interacting with the Transform rather than the Rigidbody2D.

Playing with Perspective

In assembling a particular boss fight, I ran into a problem that arose from the perspective.  The game is rendered in perspective 3D, but played in flat 2D.  This boss would move to one side of the screen and fire a laser beam.  Due to the perspective of the camera, simply flattening the z-values of the laser’s start and end positions was insufficient.  Doing it that way resulted in the action of the laser not lining up correctly with the visuals of the laser.

This boss proved problematic when the lasers didn’t align nicely along the gameplay plane.

To fix this, instead of raycasting from the start position of the laser to the end position of the laser, I instead take the start and end positions of the beam and project them from their respective camera-parallel planes (not always the same plane), into what I call the “gameplay plane”, which is the camera-parallel plane in which the player flies.
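The projection itself is straightforward vector math: trace the line from the camera through the point, and find where that line crosses the gameplay plane. A sketch of such a helper (the names are mine; the post doesn’t show the actual code):

```csharp
using UnityEngine;

public static class GameplayPlane
{
    // Project a world-space point onto the camera-parallel plane the player
    // flies in, along the line from the camera through that point.
    public static Vector3 Project(Camera cam, Vector3 point, float planeDistance)
    {
        Vector3 origin = cam.transform.position;
        Vector3 dir = point - origin;
        // Depth of the point along the camera's forward axis.
        float depth = Vector3.Dot(dir, cam.transform.forward);
        // Scale the ray so its forward depth equals the gameplay plane's depth.
        return origin + dir * (planeDistance / depth);
    }
}
```

Project both laser endpoints with the same `planeDistance` (the player’s depth) and raycast between the results, and the action lines up with the visuals.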

Player Motion vs. Auto-Scrolling

I had always had auto-scrolling segments as part of this game.  For a tightly-scripted shmup experience, autoscroll is vitally important to keeping the pace as-intended.

If you ignore the physics engine, this is fairly simple: all player motion happens relative to the camera, rather than the world.  With a physics engine, though, it becomes more complicated.

For my initial implementation, I simplified the player physics vastly by keeping the camera totally stationary and moving the world around it.  What I found, however, was that I was losing a great deal of goodness that you get from working with a static environment.

  1. Static environments can bake in Global Illumination & Reflection Probes.
  2. Rendering of static meshes can be optimized better than moving meshes.
  3. If the camera doesn’t move, the skybox doesn’t move.
  4. Enemies that move independently are no trouble, but enemies that interact with the environment introduce strange parenting structures when the environment is constantly moving.
  5. How do we handle cases where we want to allow the player to explore?  Do we keep the player still and move everything else?  What does this mean for camera drift?

I had so much trouble getting the physics to play nice with camera movement in my original shmup game back in 2011 that I was dreading the transition back to a dynamic camera.  Unity3D’s core systems, however, offered me a cheap hack that seems to be working quite well so far.

```csharp
// Temporarily parent the physics objects to the camera so they ride along
// with the autoscroll, remembering their original parents.
List<Transform> originalParents = new List<Transform>();
foreach (Transform target in targets) {
    originalParents.Add(target.parent);
    target.SetParent(cameraTransform, true);
}

// When the autoscroll segment ends, restore the original hierarchy.
for (int i = 0; i < targets.Count; i++) {
    targets[i].SetParent(originalParents[i], true);
}
```
I believe my next step from here is to skip the parenting entirely and simply recalculate the new position and orientation from the relationship between the camera Transform and the ship Transform, then apply them using Rigidbody2D.MovePosition and Rigidbody2D.MoveRotation.  As it stands today, though, the parenting hack does not seem to have any significant adverse effects.

Do you have any physics tricks or tips that you’ve used in side-scrolling games in Unity3D?  Let me know in the Facebook post!

Restarting the Devlog

It’s been a while since I really shared what was going on with Slonersoft.  I’ve had a shmup in development for a long time.  Helios Warp made a short debut at the Northwest Pinball and Arcade Show.  I’ve done a lot of sketching as well.  I’ll post more soon.