67. Mouse Wrangling

Okay, so I’m a bit late on this devlog entry, but in one of my recent video devlogs (which I do weekly over on my YouTube channel, in case you haven’t been checking them out) I promised that I would write in depth about how mouse movement is handled, so I’m going to pay off on that promise.

Mouse movement in games is almost always trickier to get right than it seems when playing a finished product. For one, the mouse is a direct input method, so when you move the mouse you expect an equivalent motion in-game, either of the camera or of a cursor. This means that latency is much more noticeable here than it is with something indirect like a thumbstick.

Latency can be handled in any number of ways and is mostly outside the scope of this article, but I mention it because it’s a very common case where the technical details of how you handle input can have a large effect on how the game feels.

Mouse motion in Taiji is handled in a way that I haven’t seen any other games use. Most top-down or 2D games with mouse controls (i.e. Diablo, Civilization, StarCraft) just have you move the mouse around on the screen as if it were your desktop, and the camera moves very rigidly and only under direct player control. However, in Taiji, camera movement is much more dynamic and loosely follows the position of your player avatar. This causes a bit of an issue, in that the camera may still be moving while the player is trying to use the mouse cursor to target a tile on a puzzle panel. Trying to hit a moving target with the mouse is a bit of a challenge that I am not interested in introducing into the game.

There are a few ways that this problem could have been resolved. One possible solution is to just never move the camera when a puzzle is visible and interactable. However, this causes some discontinuities in the camera movement which can be, at best, irritating or, at worst, nauseating. It also doesn’t work for some panels that are always interactable whenever they are on screen. Another possibility is to just give up and lock the camera to the player, as many other games do (Diablo, Nuclear Throne). This approach didn’t appeal to me for aesthetic reasons. I want the game to have a relaxed feel, and the slower camera movement is part of that.

The approach I chose instead was to treat the mouse cursor as though it were a character existing in the world of the game, and allow the player to control that character directly with the mouse. Another way of thinking about this is that the mouse behaves as though the entire world of the game were your computer desktop, and we are just seeing a small view into that “desktop” at any one time. The view into this “desktop” can move around, but the mouse cursor stays in the same place relative to everything else on the “desktop”. Technically, this is to say that the cursor lives in world space rather than screen space. This can all seem a bit abstract though, so to help make this concept of “world-space” vs. “screen-space” mouse cursors a bit more clear, I recommend watching the video below, which I excerpted from one of my recent video devlogs.

Great! So this fixes our issue of having the mouse cursor drift around as the camera moves, and of the player sometimes trying to hit a moving target when clicking on puzzle panel tiles. However, we have now introduced another problem: since the fairy cursor character only moves when the player moves the mouse, the player might walk across the map, forget about the cursor, and leave it behind. Luckily, in this game in particular, the player seldom needs to move both the player avatar and the cursor at the same time, so if the player walks more than a certain distance without touching the mouse, we can just put the little fairy character into a separate mode where it follows the player’s avatar around automatically. This might seem like it would get annoying, but in practice, it never really gets in the way of what you’re doing.
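For the curious, that mode switch can be sketched roughly like this. This is only an illustrative sketch, not the actual code from the game; the names (cursor, player, followSpeed, cursorFollowsPlayer) and the distance threshold are all made up:

```csharp
// Illustrative sketch of the cursor's "follow" mode. If the mouse is idle
// and the avatar has walked far enough away, the fairy cursor tags along
// automatically; any mouse motion hands control straight back to the player.
const float followDistance = 3.0f; // world units; arbitrary for this sketch

void UpdateCursorMode(Vector3 mouseDelta)
{
    if (mouseDelta.sqrMagnitude > 0.0f)
    {
        cursorFollowsPlayer = false; // the player is driving again
    }
    else if (Vector3.Distance(cursor.position, player.position) > followDistance)
    {
        cursorFollowsPlayer = true; // idle mouse, distant avatar: tag along
    }

    if (cursorFollowsPlayer)
        cursor.position = Vector3.Lerp(cursor.position, player.position,
                                       followSpeed * Time.deltaTime);
}
```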

So how does this work?

So far, I’ve mostly been retreading ground that I covered in my recent devlog video about mouse movement, but in this written devlog I’ve got much more space to dive into the technical details of how this all functions.

Fundamentally, the work that we need to do here is to translate a certain amount of motion of a physical mouse into an equivalent motion in the game world. Taiji is being developed using Unity, so in this case, I use Unity’s input system (which Unity is currently in the process of replacing). In Unity’s input system, there are a few ways that I can access information about the actual mouse device.

One possibility is to simply look at the screen position of the mouse, and then project that into the game world in some way. However, we don’t use this method, as we want to make sure the OS mouse cursor is confined to the window while the game is running (you can free up the mouse to use other applications by pausing the game, of course). So we lock the OS mouse cursor to the center of the game window and just ask Unity to tell us how much it tried to move each frame.

mouseDelta = new Vector3(Input.GetAxisRaw("Mouse X"), Input.GetAxisRaw("Mouse Y"));

In the above code, mouseDelta represents the delta (amount of change) in the position of the mouse since the previous time we polled (the last frame). We get the horizontal (X) and vertical (Y) components of this delta using the GetAxisRaw function, which avoids any time smoothing that Unity might otherwise apply.

Now we can get an amount that the mouse has moved, but there’s one problem: if the OS mouse cursor moved 10 units horizontally, we don’t know how far that really is. We don’t have any idea what the units are. Unfortunately, Unity’s documentation is no real help here, but through experimentation, I have determined that these values are in “pixels at 96dpi”. This might seem the same as pixels on the screen; however, because of display scaling in Windows, these values may not correspond 1:1 with pixels on the screen.

In any case, correcting for this is fairly easy, as we can simply ask Unity for the DPI of the screen. We normalize that value to 96dpi and multiply the mouse movement by it:

float DPI_Scale = Screen.dpi / 96.0f;
mouseDelta *= DPI_Scale;

This now means our mouseDelta is in actual screen pixels.

So…at this point we can just take the movement and project it into the world of the game…right?

Well, unfortunately, no, and this is part of the reason that I had to go down an annoying rabbit hole that involved calling into the Win32 API. For now, let’s just continue onward and ignore that problem, as the explanation of how the mouse movement gets transformed into world space is already quite complicated on its own. Just make a mental note, and we’ll come back to it in a bit.

So, we have another issue that we have to resolve, which is that the game can be running in its own scaled resolution. This happens at two levels. The first is that the player can run the game at a sub-native resolution but in fullscreen mode. The game runs in a borderless fullscreen window, which means that if, for example, the game is running fullscreen at 1920×1080 on a 4k monitor, one pixel in the game’s output resolution will correspond to 4 screen pixels.

There are unfortunately some issues here, in that Unity does not always properly tell you the resolution information for the monitor that the game is running on, and instead shows you the resolution of the “main monitor” as it is set in the display settings in Windows. This will be the cause of the Win32 API madness later (interestingly enough, the DPI value is always correct, even when the screen resolution information is not). In any case, we will pretend for now that the Unity API call returns the correct value in all cases, so the code to resolve this possible mismatch in resolution is as follows:

// GameRes is the game’s output resolution (a Vector2 defined elsewhere).
Vector2 ScreenRes = new Vector2(Screen.width, Screen.height);
// Whichever axis of the game’s view fully spans the monitor determines the scale.
float renderScale = (ScreenRes.x / ScreenRes.y < GameRes.x / GameRes.y) ? ScreenRes.x / GameRes.x : ScreenRes.y / GameRes.y;
// In windowed mode, the game’s output is not scaled up to the monitor.
if (!Screen.fullScreen) renderScale = 1.0f;
mouseDelta *= renderScale;

You might notice that this is a bit more complicated than the code accounting for screen DPI scaling. This is because the game runs at a forced 16:9 aspect ratio in order to tightly control what the player can and cannot see at any given time. This means that if the player is running the game fullscreen on a monitor of a different aspect ratio, it will be either letterboxed or pillarboxed, depending on whether the monitor’s native aspect is wider or narrower than the game’s. The final rendering scale will, therefore, depend on which axis of the game’s view fully spans the monitor (horizontal in the case of letterboxing, and vertical in the case of pillarboxing).

Also, of course, if the game is not fullscreen, we don’t have to worry about this render scaling at all.

Omigosh, we’re almost done with this and then we can go home. So the next and final thing that we have to do to get the mouse movement into world space is to account for the degree to which the in-game camera is zoomed in. Luckily, Unity provides us with a fairly straightforward function to map a position from the game’s rendered viewport into a world position. We use that to figure out a scaling value between screen and world space:

float mouseScale = (mainCamera.ScreenToWorldPoint(Vector3.one).x - mainCamera.ScreenToWorldPoint(Vector3.zero).x) / 2.0f;
mouseDelta *= mouseScale;

ScreenToWorldPoint takes an input coordinate in pixels and returns a coordinate in “Unity units”, so we are taking the horizontal distance from one screen pixel to the pixel immediately up and to the right of it, then dividing that horizontal distance by 2 to find the zoom factor. The reason for the division by 2 is actually a bit of a mystery to me at the time of writing, which is why writing these deep tech dives can be more useful for development than they might otherwise seem. I initially thought that perhaps I was somehow getting the diagonal distance here; however, changing the code to use a pixel directly to the right does not produce a different result. So I guess it remains a mystery to me. Without the division, though, the scaling is wrong. Perhaps someone will comment on this devlog and tell me where I’ve obviously messed up the math somewhere earlier, or some other reason why the division needs to happen here.

At this point, other than multiplying the mouse movement by an additional sensitivity value, we are done, and we can now apply mouseDelta as a translation to the actual in-game object representing the mouse cursor.
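Putting all of the above together, the whole per-frame transformation looks roughly like this. Note that GameRes, mouseSensitivity, and cursorObject are stand-ins for fields defined elsewhere, not the actual names in my code:

```csharp
// Rough end-to-end sketch of the per-frame mouse pipeline described above.
Vector3 mouseDelta = new Vector3(Input.GetAxisRaw("Mouse X"), Input.GetAxisRaw("Mouse Y"));

// "Pixels at 96dpi" -> physical screen pixels.
mouseDelta *= Screen.dpi / 96.0f;

// Screen pixels -> game render pixels (letterboxed/pillarboxed fullscreen).
Vector2 ScreenRes = new Vector2(Screen.width, Screen.height);
float renderScale = (ScreenRes.x / ScreenRes.y < GameRes.x / GameRes.y) ? ScreenRes.x / GameRes.x : ScreenRes.y / GameRes.y;
if (!Screen.fullScreen) renderScale = 1.0f;
mouseDelta *= renderScale;

// Render pixels -> world units (accounts for camera zoom).
float mouseScale = (mainCamera.ScreenToWorldPoint(Vector3.one).x - mainCamera.ScreenToWorldPoint(Vector3.zero).x) / 2.0f;
mouseDelta *= mouseScale;

// Apply sensitivity and translate the in-world cursor object.
cursorObject.transform.position += mouseDelta * mouseSensitivity;
```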

To Be Continued

I know, I know, I said I would get into the API nonsense, but this piece has gone on long enough on its own, so I’m gonna leave this on a cliffhanger and get back to fill in the details later this week. Till then, thanks for reading, and remember to wishlist the game on Steam if you haven’t!

66. Burnout and Walking Animations

I plan on posting these video development logs on a weekly basis over at my YouTube channel. I may post some occasional reminders here going forward, but I’d rather keep this written devlog as its own separate thing rather than simply a cross-posting of the videos. So, if you don’t want to miss any of the video devlogs, I recommend you subscribe to my YouTube channel.

However, since you’re here instead of at YouTube, I’ll reward you with a few sneak peeks into what I left out of the video log.

This past week, I’ve been working on relatively small polish features, which is a bit of a continuation of the work I did last week with footstep sounds (still have some more of those to do actually). I think this is partly as a way to give myself a break from tearing up major chunks of the game to add artwork. But even if it feels like less progress, these small things propagate across the entire game and affect your interactions throughout.

One of these small improvements is to the walking animations. The previous animations were serviceable; however, when I added running animations, the running looked significantly better in comparison. So I added more frames to the walking animation and made some small tweaks. You can see the comparison between the old (left) and new (right) animations below:

I still want to add animations for when you move diagonally, and hope to get to that next. But I think even this goes some way towards giving the game a more polished feel.

I did a few other fun things, but I’ll save those for next week’s video. Hope to see you then. 🙂

65. Taiji Video Logs and Next Gen Thoughts

I’ve started a new video series in addition to the normal written development logs. This will hopefully provide a more exciting and fun avenue to show and talk about what’s involved in working on the game, and perhaps just some of my thoughts on game design in general. These for the most part should get posted here, but you can subscribe to the YouTube channel if you want to know as soon as they go up.

Bonus: Thoughts on Next Gen Consoles

(8 out of 10 gamers couldn’t tell which of the above was a next-gen game)

I was planning to talk about the tonally strange but visually impressive Unreal Engine 5 demo on my development stream yesterday, but since the stream ran into technical difficulties and didn’t happen, I’ll say it here instead.

Maybe I’m only noticing this for the first time now because I’m actively developing this game while new consoles are being announced, but it feels like there is a uniquely large gap between how developers and gamers feel about the new hardware. Gamers seem largely underwhelmed, whereas developers are excited by the prospects.

This can mostly be explained by the differences in what these two customers want out of a new gaming console. Developers want the hardware to make it easier for them to make games. Gamers just want to be sold on graphics or gameplay that pushes past the limits of what they’ve seen before.

On the first point, it’s easy to make a case that both Microsoft and Sony are providing a way forward for developers. Microsoft is selling a beefy PC for your living room, and beefy PCs are easier to get games running well on. Sony is selling a slightly less beefy PC, but with some serious storage tricks up its sleeve that can only really happen with bespoke game-playing hardware.

For gamers, well, it’s harder to make the case.

This is partly developers’ fault. We have gotten so good at working with limited hardware that it’s a challenge to show the difference between real-time global illumination and traditional baked lightmaps, or between dynamically decimated hero assets and manually authored LODs. There isn’t much difference as far as the result is concerned; however, one of the primary benefits of working with better hardware and technology is that developers can get to the same results much more easily and quickly.

Pushing the frontiers of gameplay or photorealism is only partly about the hardware. Hardware matters for sure–you can’t run everything on a potato–but innovation is increasingly the thing that pushes boundaries.

A good example of graphics innovation being more important than hardware is the introduction of Physically-Based Materials over the past decade. This precipitated a giant leap forward in average visual fidelity for games, not so much because the hardware was more powerful, but because the pipeline for authoring the artwork was much improved.

Although an argument could be made that additional processing power allowed for shaders that were complex enough to more accurately simulate physical phenomena, this innovation in material authoring and rendering didn’t occur any earlier in the film industry either. So it seems like more of a process innovation than having access to better hardware.

As another way of saying the same thing: It was possible before PBM to make games and films that had very realistic looks to them, but success required artists with tons of technical experience and skill. By changing the tools, it became much easier for even an inexperienced artist to produce an output that looks very realistic.

I think this is the type of progress on display in the Unreal demo and is also largely lost on the average gamer. For them, it’s simply about the results.

As for gameplay innovation, that is a much more challenging problem, and unless you are going specifically for a game design about verisimilitude (e.g. Grand Theft Auto), it’s a problem that is largely divorced from the visual one. Of the game designs that I feel were most impactful over the past decade, rather few were technical powerhouses. Some of them (Dark Souls) are downright technical messes. So it’s hard to say exactly what feels “next generation” in terms of game design until you see it, and it’s hard to draw a direct connection between these design leaps and console generations.

Well, I’m certainly excited about a new hardware generation. There’s still something about it that reminds me of the old days when you’d read in a magazine about something Nintendo was cooking up in Japan. But it remains to be seen whether or not the next console generation will convince as many people to pull out their wallets on launch day as this previous one did. It is challenging to convince gamers that they need new hardware simply because it makes things easier for developers.

64. The Graveyard

I spent the last month doing an art pass on another one of the major areas in the game. This area is called “The Graveyard” and consists of more or less what the name describes. Some of the earliest art tests for the game were done with this area, so I thought it would be fun to compare a very early version of the game’s aesthetic with its current look. (Some of the details in the old version have been hidden to avoid spoilers.)

I would say that the new version is much improved. The old version often did not read as a graveyard at all, probably because the only headstones featured are the strange puzzle ones.

This area exists in a snowy biome, which is different from anything I’ve built for the game so far, and required a lot of custom artwork. It was a bit of a challenge, but things went much more smoothly than I expected. It’s sometimes hard for me to believe that I’ve gotten competent enough at doing art for that to happen. I don’t really consider myself much of an artist.

One of the more fun technical bits to get right was the snowy grass, which uses a procedural technique to generate the patterns of snow and exposed grass. You can see a demonstration of its flexibility below:

One of the other things I did this month is much more secret, but is something that I’ve been planning for a long while. I finally figured out a way to implement the idea. I can’t tell you much more, but here’s a sneak peek:

I’m quite happy with the way this interior came out visually, but since I chose a different approach to lighting it, it has me thinking I need to rework some of the other interior areas to match this level of fidelity.

For the next month, I still have more areas that need artwork, but I also have some puzzle design stuff that I need to finish up. Mostly mix-in puzzles, but I also need to get started on prototyping some gameplay concepts for the ending of the game. Somehow after all this time, I still don’t have a proper ending on the game.

63. The Gardens

This last week I finished the art pass on the area I mentioned in the last devlog post. I settled on “The Gardens” as the name for the area, as it features flower gardens near the entrance, a vegetable garden to the south, and an abandoned grape trellis to the east.

This marks the end of a month for which the game was unplayable, since I have to break a lot of things when revising areas. Normally these types of revisions don’t take a whole month, but the metapuzzle concept here (mentioned in the last update) was quite technically complex, and I ended up having to pare back some parts of the design due to playability problems.

I still don’t want to spoil how the metapuzzle functions, but essentially, it was possible for the player to get the puzzle into an unsolvable state. I tried to resolve this in a very lightweight way that would (in theory) add some additional depth to the puzzle. However, once I implemented it, I realized that my solution didn’t actually work, and so I had to choose a more aggressive fix that eliminated some of the additional depth I had hoped to have.

I am still hopeful that I will come up with a way to add that depth back in, but I felt I had spent enough time on the problem for now, and the metapuzzle is in “good enough” shape. Certainly an improvement on what was there before.

So I’ve finished up six of the eleven major areas in the game, and the next obvious step would be to begin artwork on one of the others. However, I’m a little reluctant to immediately break the game again, so I plan on switching gears to puzzle design for a bit. I still need to take a look at improving the panel puzzles for the Gardens, so I may do that this week in addition to some of the other puzzle design tasks on the list.

62. Metamusings

Four of the ten major sub-areas of the game involve symbols embedded into the puzzle panels, whose meaning the player figures out over the course of the area. In each of those areas, there is also a metapuzzle used for navigating that area. These metapuzzles require the player to solve multiple puzzle panels which interact with each other in some way. Each area has a unique theme to its metapuzzle, connected in some way to the general theme of the area’s mechanic.

I’ve had three of these metapuzzles designed for a while now, but I’ve been spending the past week or so designing and implementing the fourth. I’m also in the middle of ripping up the area containing it and doing a proper art pass, so the following screenshot is an example of something super work-in-progress:

I don’t want to explain too much of how the puzzle works, so as not to spoil things, and because I am likely to change some details as I continue implementing it. But the basic idea is that you can move the bridges between each circular platform and you use that to get around the area.

It’s been a bit of a technical challenge to get this puzzle implemented, and as of this writing, there are still a few features that are not set up yet. Part of the challenge here is that so much state can be changed across the metapuzzle, and it’s important to keep all of the state in sync.

The overall theme for the area is a terraced garden surrounding a dry lakebed. The major features of the area are still work in progress as well, but I have begun some work on the artwork for the entry terraces, which you can see below:

These will require some more detailing work, but the overall structure is more or less correct. The design of the stepped gardens is loosely based on the Hyakudanen at Awaji Yumebutai. In the game, the player will start at the top of this stairwell as the entryway into the area. Once they get to the bottom, they will find some puzzle panels opening access into the metapuzzle section.

61. Testing my Patience

I’m mostly kidding about the title, but I’ve been putting the game out to some new testers these past few weeks (which is why I missed December’s devlog update).

Whenever I stop working on new things and take some time to reflect, I often get a bit depressed. I mentioned this feeling during the previous round of testing, and it’s a similar feeling during this round. I feel accomplished in that I’ve made enough progress for the game to be worth evaluating again. But there’s still so much to do that it’s overwhelming to think about.

Looking at things at a high level, I’ve done a “good enough” art pass on five of the eleven areas in the game (although some of them I’d still like to make major tweaks to).

So that means I’m almost halfway done with the art, which is pretty good progress. But it also means that I still have the majority of the game to finish up.

It’s hard to make estimates on how “close to done” the design is because progress there is much less straightforward. With the art, it’s probably good enough to have art that looks decent and isn’t overly confusing. But with the design, there’s no “right” way for anything to be. It’s down to my personal decisions about what types of puzzles to focus on and how much should be required to progress in each area.

I also still haven’t put in anything resembling an ending, and I’m not even sure what that might entail. I think it’s rather hard to make satisfying endings to puzzle games. If the puzzles are too hard, it can wreck the pacing and just make the ending feel like a chore. Alternatively, if the ending is too easy, it can feel anticlimactic. Usually what works best is something that feels like a large change of pace from what came before.

To that end, I have a few ideas, but they are underdeveloped at the moment.

I’ll close out this post with a short clip of one of the areas that I’ve recently redone the art for. It’s not entirely finished, but I’m happy with how it’s come along.

59. Shrines and Ancient Ruins

It’s hard to be sure exactly what to write about, since most of the work lately has been going into painting over each of the areas in the game. But this past month I’ve finished drafts of art for two major areas in the game, so I guess I’ll post up some screenshots!

(You can click any of the screenshots to view them full-size)

Shrine

This area is styled after a Japanese shrine and centered within a large lake.

Ruins

This area is an ancient ruin seated atop a narrow plateau. Some parts of the ruins have seen better days.

You may recognize this area from an earlier iteration of the art. Some parts of this area are still unfinished, art wise, and I need to add in the shadows. (I paint in all the shadows by hand!)

Bonus

Here’s a bonus screenshot of another area in the game.

58. Arts and Crafts

This past month, I’ve been both working on Taiji and crunching on promotional materials for Manifold Garden. (Which is out now on Apple Arcade and the Epic Games Store, by the way! You should check it out if you like puzzle games. I get no extra money if the game does well, so I am just recommending it personally.)

Seeing off Manifold Garden has been exciting. But turning back around to work on my own thing has been a bit depressing. It still has so much further to go before it will be done! I’ve been trying to keep my head on straight but it’s been a bit of a damper on my spirits.

The Breaking Point

Some technical aspects of the visuals in Taiji started to come unraveled earlier this month. One of the decisions I made early on was how to sort all of the individual graphical elements in the game. Whereas in 3D games sorting is mostly handled as part of the perspective (except for translucent objects), in 2D games you usually set up an explicit sorting order.

In Unity, there are actually two systems you can use to handle sorting. The first is a sorting axis, which is equivalent to the painter’s algorithm: objects that are further away from the camera are drawn first, and closer ones are drawn last.

The other system is Sorting Layers. These are just buckets you can put different objects into and you can set the order in which the buckets draw. My initial idea was to only use 3 sorting layers for the entire game: a layer below the player, the player layer, and a layer above the player. This seemed like it would work, because you are additionally allowed to specify a numerical sorting order for the objects within each layer.

The primary benefit of this approach is that it is player-centric. This means that I know that all objects in the “Below Player” layer will always be drawn below the player, and vice-versa for the “Above Player” layer.

But what happens if I want to have objects that are above the player at one point, and then below the player at another?

There are two types of scenarios where this problem might happen.

One is “vertical” objects that the player can walk around, such as trees. If we place them below the player, the player will walk above the branches, and if we place them above, then the trunk will float over the player’s head. This problem is easy to solve by simply placing those objects on the player layer. In this case, Unity falls back to the sorting axis and sorts by distance. However, we can tell Unity to sort using the Y-axis instead of the Z-axis. This means that objects that are higher on the screen than the player will draw behind them, and those that are lower will draw in front.
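Unity exposes this Y-axis sorting as a camera setting. A minimal sketch of switching it on (assuming the usual 2D orthographic camera setup) looks like:

```csharp
// Sort transparent objects by their Y position instead of Z distance:
// objects higher on screen draw behind objects lower on screen.
Camera cam = Camera.main;
cam.transparencySortMode = TransparencySortMode.CustomAxis;
cam.transparencySortAxis = new Vector3(0.0f, 1.0f, 0.0f);
```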

The other slippery sorting situation is when the player is underneath an area which they can climb up into. A basic example of this is a bridge over a canyon. The player might be in the canyon, walking underneath the bridge, but they can also climb out of the canyon and end up above the bridge, walking across it.

The player can go up those stairs at the top of the screen and then walk over the bridge.

This scenario is challenging to achieve under a simple 3 layer (Below Player, Player, Above Player) setup. The only real way to do this is to either shuffle all the objects between the above and below layers, or have copies of the objects on both layers, and only enable whichever is appropriate depending on where the player is.

I was using a mixture of both of these approaches up until recently. It worked, although it was quite cumbersome. You’re moving dozens of objects around from layer to layer all the time, and you can’t even see any of the visual issues until you run around in the game. But eventually you run into scenarios where there need to be more than two layers, and it all falls apart.

So I made the difficult decision to change the entire sorting system used by the game. Under the new setup, each area in the game has a sorting layer, and the player is moved from layer to layer as they walk around the world, always staying at order 0 in whichever layer they are in. Objects with negative sort values will be below the player in that sorting layer, and those with positive values will sort above the player.
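In code, the layer handoff can be as simple as updating the player’s SpriteRenderer whenever they cross into a new area. This is a hypothetical sketch; the trigger setup and the field name are illustrative, not the game’s actual implementation:

```csharp
// Hypothetical sketch: each area volume knows its own sorting layer. When
// the player's collider enters it, the player's renderer hops to that layer,
// staying at order 0 so negative orders sort below and positive orders above.
public string areaSortingLayerName; // set per-area in the inspector

void OnTriggerEnter2D(Collider2D other)
{
    if (other.CompareTag("Player"))
    {
        SpriteRenderer sr = other.GetComponent<SpriteRenderer>();
        sr.sortingLayerName = areaSortingLayerName;
        sr.sortingOrder = 0;
    }
}
```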

This setup makes so much more sense. Since only the player ever moves around, I never have to worry about the environment looking any different than it does in the editor.

In fact, I feel like I should have changed things over much sooner than I did.

I think this particular type of mistake was misguided optimization, which is even worse than premature optimization. Instead of optimizing for my sanity, and the simplicity of building the game over the long haul, I tried to optimize for the number of layers without being sure that it would ever be an issue. It wasn’t a performance concern, more just an aesthetic one.

I think it’s important to accept that your game is going to be a big icky mess at some point anyway, so you should just leave the cleanup until you can actually see what you’re dealing with.

In any case, things haven’t been perfectly rosy with the new setup, but I’ll leave that story for next month perhaps. See you soon.

Proof Of Work

Perhaps you’d like to see the work I did related to Manifold Garden? If so, you can check out the following links:

Mood Trailer

Manifold Garden Instagram (Daily Videos since July 18)

Edge Detection & Anti-Aliasing Comparison 2015 vs 2019

Architectural Inspirations

Now Available Trailer (Although about 80% of this was Derek Lieu, and I just polished it up and replaced a few shots)

I also did a ton of odds and ends stuff that I can’t really take the time to list here, but suffice to say it’s been a bit busy.