71. Trees in the Wind

I’ve wanted to get animation for the trees into the game for a while, but could never quite manage it. I’ve tried many different approaches, but the main issue has always been that the game renders to a fixed, low-resolution pixel grid.

Some other pixel art games choose to render the game at a higher internal resolution than that of the actual pixel art, which means that you can rotate sprites without much aliasing. You can see a comparison between a high-res rotation-based animation (left) and how the effect breaks down when you render at a low resolution (right):

I have never liked the look of pixel art games that mix different resolutions, so I chose to render Taiji in a way that forces every effect in the game to be rendered at the same resolution as the base pixel art. But as you can see above, this means that rotating pixel art tends to cause strange artifacts that look like edges sliding across the image. Obviously, this is very unaesthetic, and we need to try something else.

One possibility I tried was adding some noise to jitter the sampling and create a smoother look. This removes the “sliding edges” appearance, but it adds a lot of noise along the edges. The effect could perhaps work well in a game with a more forgiving art style that already has a lot of noise built into its graphics.

So, with a couple of failures under my belt, I decided to rule out large motions such as rotating the entire tree, and instead I focused my efforts on animating the leaves on their own. This type of effect can be done fairly easily in a shader by simply adding in a noise offset when you sample the texture for the leaves.
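The real version of this is a fragment shader, but the core logic amounts to a jittered texture lookup. Here is a minimal sketch of the idea in C# (the noise scales and strength here are illustrative, not the game’s actual values):

// Illustrative sketch: offset each texture lookup by animated noise.
Color SampleLeaves(Texture2D leaves, float u, float v, float time)
{
    // Two channels of scrolling Perlin noise, remapped from [0,1] to [-1,1].
    float nx = Mathf.PerlinNoise(u * 20f + time, v * 20f) * 2f - 1f;
    float ny = Mathf.PerlinNoise(u * 20f, v * 20f + time) * 2f - 1f;

    float strength = 0.01f; // how far (in UV space) a pixel may wander
    return leaves.GetPixelBilinear(u + nx * strength, v + ny * strength);
}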

This is certainly an improvement, but the effect is a bit too strong. Also, if you look at it closely, it feels more like the tree is underwater than being affected by the wind. We could tone down the strength of the distortion, but then the motion becomes so subtle that it’s almost not worth having.

Another possibility I attempted was to custom-author a secondary texture that would control how the distortion was applied. I tried using a noise texture with a leaf pattern built into it. I even did some tests pre-rendering leaves in Blender so that I could use the scene normals of the leaves to modulate the distortion.

I didn’t save this iteration of the shader, but suffice it to say that it did not work much better than the purely random noise I was using earlier.

However, I started to think that an approach similar to how I animated the grass would be effective. The grass is essentially just a flat texture on the inside, with all the distortion happening along the outside edges.

So what would it look like if I did the same for the trees?

We’re getting close! This effect is even more pleasing, with a better balance between preserving the details of the original pixel art and having enough motion to be worthwhile. However, the motion feels a bit unnatural because it is confined entirely to the outside edges.

What I chose to do to resolve this was to re-incorporate the idea of having a secondary texture control where the distortion effect can be applied. When used to highlight the internal edges, this forms the final effect. The wind map texture is below on the left. You can see that some interior pixels are colored red; those are the ones that are allowed to be distorted in the final effect on the right:
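In sketch form, the only change from the earlier jittered lookup is a gate in front of it (windMap here stands in for the texture shown above, and the other names carry over from the previous sketch):

// Only pixels flagged in the wind map (or on the outer edge) may move;
// everything else samples the leaf texture with no offset.
bool mayDistort = windMap.GetPixelBilinear(u, v).r > 0.5f;
Color result = mayDistort
    ? leaves.GetPixelBilinear(u + nx * strength, v + ny * strength)
    : leaves.GetPixelBilinear(u, v);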

Overall, I’m pretty happy with how this came out. It adds some much needed motion to the trees, giving those scenes a more dynamic feel, and it doesn’t distort the base pixel art so much that it feels unnatural.

For a fun bonus, remember when I said that the unconstrained effect looked like water? I ended up using the same effect for this reflection in the water:

70. It’s All Mine

The past month has been spent working on the art for the Mine area. At this point it’s complete apart from the indirect lighting pass. I’ve already started implementing the art into the game, as this is a time-consuming process in its own right. There are lots of details to figure out at that stage which are easy to overlook when just doing art in a paint program. Some of the obvious ones, though, are sorting-order issues and placing trigger volumes all over the place.

I’m overall pretty happy with how the art for this area has turned out. A pattern I’ve noticed is that each new area comes out a little closer to my original vision than the last. It’s been a big learning experience working on this game.

That brings me to another point which I’ve been ruminating on a bit lately. As I get closer to the finish line for the game (there are only two areas left), it becomes much clearer what the final form of the game is likely to be. Although this is exciting, it also ends up being a bit depressing, because it becomes clear that the game will not quite live up to all of my own hopes for it.

Don’t get me wrong, I think the game is in many ways much better than any of my initial ideas suggested, but there’s still this sense of wistful potential. I left many of my early ideas on the cutting room floor in order to make a more focused game, and I still feel a longing for a game that doesn’t exist. Perhaps I’m too close to the project and can only see its flaws. Perhaps this is just a general feeling that I will always have, one that will continue to drive me to make more games in the future.

Thinking about the future, I’m not really sure what I will end up doing after Taiji is finished. At the moment, I try not to think about it too much, because I still have a long road to finish walking down, and I don’t want to lose focus by finding distraction in some new shiny idea.

Part of me wants to quit games and never do this again. I can’t lie, this has become quite a grueling process. Working on a game solo has its upsides, but it’s very easy to lose motivation and feel isolated and lonely.

Oh well, we’ll see what the future holds when we get there. For now, I’m trying to decide how much I will be able to afford going back to areas that I’ve already done and improving the artwork there. I’m glad that many of you have had nice things to say about the look of the game, since I don’t consider myself a particularly great artist. Still, I have made most of the progress on the game by trying to keep a mantra of “good enough for now”, and part of that was consoling myself with the possibility that I could go back and revise stuff that I don’t like “later”.

The fact is though, I don’t have an infinite amount of time, money, or energy to work on this game, so at some point: “later” has to become “forever”.

This is standing out in my mind more because part of the Mine area connects with the Shrine area, and while working on the art for the Mine, I availed myself of the opportunity to improve that connecting part of the Shrine. This was something that had been on my list of “go back to later, hopefully”, and it made me really happy to bring that small area more in line with my original vision.

Again, we’ll see how much of that I can actually afford to do before I have to ship. At this point the focus has to be on making a mad dash towards having something “shippable”. I suppose that’s typically called a beta: all the features are in the game and you could theoretically put it out, but you’re taking time to polish and fix bugs. I honestly am looking forward to that point much more than I am looking forward to actually shipping the game.

69. Gallery!

For the last couple months (!), I’ve been working on doing the art for one of the areas in the game. This area is called the Gallery, and is a large manor which has been repurposed to house many works of art and puzzles. This area marks the largest and most complex single interior that I’ve made for the game so far. Below you’ll see the exterior on the left, as well as a diagonal cutaway to show all the different interior floor layers.

Needless to say, this was a time-consuming effort, and I’m glad to finally be finished with the area. I have also been working on tying it into the surrounding environment, which will take some more time. Below you can see the area near the entrance, which features a winding path down a cliff-side.

68. In The Mountains of Madness (Win32 Wrangling Across the API Boundary)

As I mentioned in the previous blog post, I ran into some issues with mapping the mouse movement from the screen into the game. One of those issues required venturing outside the bounds of the Unity API.

So, for the longest time, I had a certain number of playtesters who were reporting that the mouse movement in the game was very sluggish for them. Unfortunately, I was not able to reproduce this issue until I purchased another monitor. It’s possible that the issue still exists in other circumstances and for other reasons, but in this blog post I will be covering what I did to at least resolve the issue that I could reproduce.

The Issue

“…Unity does not always properly tell you the resolution information for the monitor that the game is running on, and instead shows you the resolution of the “main monitor” as it is set in the display settings in Windows.”

–Me, earlier

Although Unity provides us with a Screen structure that contains information about the resolution of the screen and the size of the game window, this information is not always correct for our purposes. I was wrong, however, in the previous post, in asserting that it always pulls info from the main monitor; what it actually does depends on whether the game is running in windowed mode or fullscreen mode.

If the game is running in windowed mode, then Screen.currentResolution is the native resolution of the monitor that the game is currently running on. However, if the game is running in fullscreen mode, then currentResolution will always be the same as the game’s internal rendering resolution, regardless of the final output resolution.

For example, if we are running the game fullscreen at 1920×1080 but the display is 3840×2160, even though Unity upscales the game to 3840×2160 in order to make it fullscreen, currentResolution will be 1920×1080.

This is a big problem, because in order to scale our mouse movement from screen space into world space, we need to know the ratio between the size of the screen pixels and game pixels. In fullscreen mode, if the player is running below their native monitor resolution, a single pixel in the game’s internal resolution will correspond to multiple pixels on the screen because of upscaling.
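Concretely, the number we are after is something like this (monitorHeight being a stand-in for the value we will eventually pull out of Win32):

// Physical screen pixels per game pixel when fullscreen-upscaled.
// e.g. running at 1920x1080 on a 3840x2160 monitor gives 2160 / 1080 = 2.
float pixelRatio = (float)monitorHeight / (float)Screen.height;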

Funnily enough, even though we can get this ratio in windowed mode, it is totally unnecessary there, as the game is rendered unscaled in windowed mode.

The Solution

Because I couldn’t reproduce the issue for the longest time, my initial hunch here was that solving this problem would involve coming up with some other way to handle mouse input appropriately.

I hoped that the new input system for Unity would allow me to solve this issue; however, my initial experiments showed some kind of cumulative performance issue which would cause the game to become unplayable after about 30 minutes or so. (I am not sure if this issue has been resolved at this point, but in any case, I decided to pursue other possibilities for fixing the mouse movement.)

Upon finally reproducing the issue, and coming to the diagnosis that I mentioned in the previous section of this article, I set about trying to get the information about the monitor in some other way.

There are other functions in the Unity API, but none of them were very helpful. For example, you can easily find information about all the monitors using the Display class, but there is no information about which monitor the game is currently running on. Display.main is simply the main system monitor according to Windows (this was the cause of my earlier confused reporting about Screen).

So I did what any confused programmer would do at this point; I googled the problem. This led me to this thread on the Unity forums, and some very confusing code.

There’s no good way around saying it: I just copied and pasted that code and tried to work from there. This was after locating the only thing I could find written officially by Unity about this type of functionality, which was really no help at all.

(I also found a Unity Answers thread with some similar code to the thread on the forums.)

So, in hopes of explaining what I have learned in a better way, and adding one more random thing to the internet about how to handle calls to the Win32 API from Unity, I will post the entire class I wrote, and we’ll go through the code as far as I can explain it.

First, the code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;
using System.Runtime.InteropServices;

public static class MonitorInfo 
{
    // Win32 functions imported from user32.dll at runtime.
    [DllImport("user32.dll")]
    private static extern IntPtr GetActiveWindow();
    [DllImport("user32.dll")]
    private static extern IntPtr MonitorFromWindow(IntPtr hwnd, int flags);
    
    // Mirrors the Win32 RECT structure.
    [StructLayout(LayoutKind.Sequential)]
    public struct RECT
    {
        public int left;
        public int top;
        public int right;
        public int bottom;
    }
    
    // Mirrors the Win32 MONITORINFO structure.
    [StructLayout(LayoutKind.Sequential)]
    public class MONITORINFO
    {
        // Win32 uses cbSize to tell which version of the structure it was handed.
        public int cbSize = Marshal.SizeOf(typeof(MONITORINFO));
        public RECT rcMonitor = new RECT();
        public RECT rcWork = new RECT();
        public int dwFlags = 0;
    }
    
    [DllImport("user32.dll", CharSet = CharSet.Auto)] [return: MarshalAs( UnmanagedType.Bool )]
    private static extern bool GetMonitorInfo(IntPtr hmonitor, [In, Out] MONITORINFO info);
    
    // The distilled result: the pixel dimensions of the current monitor.
    public class monitor
    {
        public int width, height;
    }
    
    public static monitor current;
    static MONITORINFO info;
    public static bool isValid;
    
    public static void Update()
    {
        if(info == null) info = new MONITORINFO();
        // Ask Windows which monitor the game window is on, then fetch its info.
        isValid = GetMonitorInfo(MonitorFromWindow(GetActiveWindow(), 0), info);
        if(isValid)
        {
            if(current == null) current = new monitor();
            // rcMonitor holds the monitor bounds in virtual-screen coordinates.
            current.width = info.rcMonitor.right - info.rcMonitor.left;
            current.height = info.rcMonitor.bottom - info.rcMonitor.top;
        }
    }
}

Looking at the boilerplate at the top of the file, it’s pretty standard fare; however, we need the following in order to interface between managed C# code and the Win32 API layer:

using System.Runtime.InteropServices;

Next, in our class, we need to declare prototypes for the external functions that we plan on calling. The [DllImport] attribute tells the runtime to load these functions from user32.dll.

[DllImport("user32.dll")]
private static extern IntPtr GetActiveWindow();
[DllImport("user32.dll")]
private static extern IntPtr MonitorFromWindow(IntPtr hwnd, int flags);

These function declarations are based on the interfaces specified for these functions on MSDN:

GetActiveWindow()

HWND GetActiveWindow();

MonitorFromWindow()

HMONITOR MonitorFromWindow(
  HWND  hwnd,
  DWORD dwFlags
);

A big part of the annoyance here is the types that Microsoft uses for their API calls. What the heck is an HWND? Well, unfortunately I do have some experience with the terrible Windows API, so I know that HWND is a Handle to a WiNDow. Similarly for HMONITOR, which is a handle to a monitor. And by handle, they just mean an opaque pointer-sized value.

God knows how you’re supposed to find this out, but the type that we’re supposed to use in C# to deal with these pointer-sized values is IntPtr.

Okay, so that just leaves DWORD to figure out.

Well, a DWORD, or Double WORD, is just a 32-bit unsigned integer. The name is based on the idea that the processor word size is 16 bits (which it no longer is, but it was back when the Windows API was designed).

Anyway, moving on, we can just use int in our C# code, as the C# int is 32 bits. That gives us the definitions below:

private static extern IntPtr GetActiveWindow();
private static extern IntPtr MonitorFromWindow(IntPtr hwnd, int flags);

private and static may not be necessary for you, but in my case this is part of a static class so that it can be easily accessed at global scope from all my other classes in Unity. Static classes in C# require all members to be declared static (and unfortunately the compiler doesn’t do this automatically; we actually have to type static a billion times). I also chose to make these members private because it makes a globally scoped class a bit less worry-prone.

So, next we have a couple of structure definitions:

[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
    public int left;
    public int top;
    public int right;
    public int bottom;
}
[StructLayout(LayoutKind.Sequential)]
public class MONITORINFO
{
    public int cbSize = Marshal.SizeOf(typeof(MONITORINFO));
    public RECT rcMonitor = new RECT();
    public RECT rcWork = new RECT();
    public int dwFlags = 0;
}

These are, again, based on the MSDN documentation for these structures.

RECT

typedef struct tagRECT {
  LONG left;
  LONG top;
  LONG right;
  LONG bottom;
} RECT, *PRECT, *NPRECT, *LPRECT;

MONITORINFO

typedef struct tagMONITORINFO {
  DWORD cbSize;
  RECT  rcMonitor;
  RECT  rcWork;
  DWORD dwFlags;
} MONITORINFO, *LPMONITORINFO;

You might be tempted to use long in the C# code for the RECT members to match the LONG in the C declaration, but Windows’ LONG is a 32-bit type (the name is a holdover from the 16-bit days), while the C# long is 64 bits. So int, at 32 bits, is actually the correct match here.

We also need the [StructLayout] attribute because, normally, the C# runtime is free to re-order the fields of a struct or class for memory-efficiency purposes; in our case, the fields of these structures need to be in the exact order that the Win32 API expects.

One strange detail is this line:

public int cbSize = Marshal.SizeOf(typeof(MONITORINFO));

This is essentially code that I copied from the forum post I mentioned earlier, but it is explainable. The Marshal class is part of the System.Runtime.InteropServices namespace that we included earlier; specifically, it is a class used to convert between managed and unmanaged memory representations.

What this line does is compute the size that our MONITORINFO class will occupy once it has been marshaled into unmanaged memory (40 bytes here: two 4-byte integers plus two 16-byte RECTs). The Win32 API wants that number in cbSize so that GetMonitorInfo can tell whether it was handed a plain MONITORINFO or the larger MONITORINFOEX. Note that the ordinary C# sizeof operator is no help here; if we change this line to….

public int cbSize = sizeof(typeof(MONITORINFO));

…the compiler will refuse: sizeof wants a type name rather than an expression, and in any case it only works on unmanaged types, so it cannot measure our managed MONITORINFO class.

Okay, moving on. Now we have this whopper of a function prototype definition.

[DllImport("user32.dll", CharSet = CharSet.Auto)] [return: MarshalAs( UnmanagedType.Bool )]
private static extern bool GetMonitorInfo(IntPtr hmonitor, [In, Out] MONITORINFO info);

This is, of course, still based on the definition on the MSDN page:

GetMonitorInfoA()

BOOL GetMonitorInfoA(
  HMONITOR      hMonitor,
  LPMONITORINFO lpmi
);

“Wait, hold on, why is this called GetMonitorInfoA?”

Answering that will also answer why we need the CharSet = CharSet.Auto setting as part of the DLL import.

There are two versions of GetMonitorInfo in the Win32 API: one for ANSI strings (GetMonitorInfoA, which is the legacy version) and one for UTF-16 strings (GetMonitorInfoW). CharSet.Auto lets the runtime pick the appropriate one.

“But wait, why on earth are we worried about text?”

We only have to care about this because the function can also be handed a MONITORINFOEX structure, which contains a string naming the monitor. In our case, we are just throwing that data away and using the smaller MONITORINFO struct, but we still have to account for it as part of our function prototype definition.

*sigh*

Another oddity in the attribute definition is this:

[return: MarshalAs( UnmanagedType.Bool )]

Why do we have to marshal bools? Don’t ask me, but the function returns a BOOL specifying whether or not a monitor could successfully be found, and if you actually want to know that, you’ll need to marshal the bool across the API boundary, because managed and unmanaged bools do not share the same representation.

The only detail that might be confusing about the actual definition of the function is the [In, Out] attribute. Since MONITORINFO is declared as a class, it is already passed by reference; [In, Out] tells the marshaler to copy the data in both directions across the API boundary. (Changing it to ref does not work.)

At this point, the rest of the code should be fairly understandable if you have any experience with Unity C# coding:

public class monitor
{
    public int width, height;
}
    
public static monitor current;
static MONITORINFO info;
public static bool isValid;
    
public static void Update()
{
    if(info == null) info = new MONITORINFO();
    isValid = GetMonitorInfo(MonitorFromWindow(GetActiveWindow(), 0), info);
    if(isValid)
    {
        if(current == null) current = new monitor();
        current.width = info.rcMonitor.right - info.rcMonitor.left;
        current.height = info.rcMonitor.bottom - info.rcMonitor.top;
    }
}

One thing that’s worth noting is that I publicly track an isValid bool, so that I can always check whether the calls to the Win32 API returned valid data before I go around using it.

Implementation

So with all that done, we can now change the code that handles the mouse scaling to the following:

Vector2 ScreenRes = new Vector2(Screen.width, Screen.height);
if(MonitorInfo.isValid) ScreenRes = new Vector2(MonitorInfo.current.width, MonitorInfo.current.height);

This means that, as long as we are able to get valid information back from the Win32 API, we will use that. If not, we fall back to the Screen structure that Unity provides for us, which may be wrong in some cases.
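The only other wiring needed is to keep the cached info fresh. In my case, something along these lines runs once per frame, before the mouse-scaling code reads MonitorInfo.current (the exact call site will depend on your project):

// Refresh the Win32 monitor info; cheap enough to just do every frame.
MonitorInfo.Update();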

Hope you learned something!

67. Mouse Wrangling

Okay, so I’m a bit late on this devlog entry, but in one of my recent video devlogs (which I do weekly over on my YouTube channel, if you haven’t been checking them out), I promised that I would write about how mouse movement is handled in-depth, and so I’m going to pay off on my promise.

Mouse movement in games is almost always trickier to get right than it seems when playing a finished product. For one, the mouse is a direct input method: when you move the mouse, you expect an equivalent motion in-game, either of the camera or a cursor. This means that latency is much more noticeable here than it is on something indirect like a thumbstick.

Latency can be handled in any number of ways and is mostly outside the scope of this article, but I mention it because it’s a very common case where the technical details of how you handle input can have a large effect on how the game feels.

Mouse motion in Taiji is handled in a way that I haven’t seen any other games use. Most top-down or 2D games with mouse controls (e.g. Diablo, Civilization, StarCraft) just have you move the mouse around the screen as if it were your desktop, and the camera moves very rigidly and only under direct player control. However, in Taiji, camera movement is much more dynamic and loosely follows the position of your player avatar. This causes a bit of an issue, in that the camera may still be moving while the player is trying to use the mouse cursor to target a tile on a puzzle panel. Trying to hit a moving target with the mouse is a bit of a challenge that I am not interested in introducing into the game.

There are a few ways this problem could have been resolved. One possible solution is to never move the camera while a puzzle is visible and interact-able. However, this causes discontinuities in the camera movement which can be, at best, irritating or, at worst, nauseating. It also doesn’t work for panels that are always interact-able whenever they are on screen. Another possibility is to just give up and lock the camera to the player, as many other games do (Diablo, Nuclear Throne). This approach didn’t appeal to me for aesthetic reasons: I want the game to have a relaxed feel, and the slower camera movement is part of that.

The approach I chose instead was to treat the mouse cursor as though it were a character existing in the world of the game, and to let the player control that character directly with the mouse. Another way of thinking about this is that the mouse behaves as though the entire world of the game were your computer desktop, and we are just seeing a small view into that “desktop” at any one time. The view can move around, but the mouse cursor stays in the same place relative to everything else on the “desktop”. Technically speaking, the cursor lives in world space rather than screen space. This can all seem a bit abstract, so to help make the concept of “world-space” vs. “screen-space” mouse cursors clearer, I recommend watching the video below, which I excerpted from one of my recent video devlogs.

Great! So this fixes our issue of the mouse cursor drifting around as the camera moves, and of the player sometimes having to hit a moving target when clicking on puzzle panel tiles. However, we have now introduced another problem: since the fairy cursor character only moves when the player moves the mouse, the player might walk across the map and forget about the cursor, leaving it behind. Luckily, in this game the player seldom needs to move both the avatar and the cursor at the same time, so if the player walks more than a certain distance without touching the mouse, we can just put the little fairy character into a separate mode where it follows the player’s avatar around automatically. This might seem like it would get annoying, but in practice, it never really gets in the way of what you’re doing.
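Sketched out, the mode switch is nothing fancy (all names and thresholds here are illustrative, not the actual game code):

// If the player has walked a while without touching the mouse, the cursor
// fairy enters follow mode and trails the avatar automatically.
if (mouseDelta.sqrMagnitude > 0f)
{
    followMode = false;     // the player is using the mouse again
    walkedDistance = 0f;
}
else
{
    walkedDistance += playerVelocity.magnitude * Time.deltaTime;
    if (walkedDistance > followThreshold) followMode = true;
}

if (followMode)
    cursorPosition = Vector3.Lerp(cursorPosition, playerPosition, followSpeed * Time.deltaTime);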

So how does this work?

So far, I’ve mostly been recapping the same ground that I covered in my recent devlog video about the mouse movement, but in this written devlog I’ve got much more space to dive into some of the technical details about how this functions.

Fundamentally, the work we need to do here is translate a certain amount of physical mouse motion into an equivalent motion in the game world. Taiji is being developed in Unity, so I use Unity’s input system (which Unity is currently in the process of replacing). In Unity’s input system, there are a few ways to access information about the actual mouse device.

One possibility is to simply look at the screen position of the mouse and then project that into the game world in some way. However, we don’t use this method, as we want to make sure the OS mouse cursor is confined to the window while the game is running (you can free up the mouse to use other applications by pausing the game, of course). So we lock the OS mouse cursor to the center of the game window and just ask Unity to tell us how much it tried to move each frame.
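The locking itself is handled through Unity’s cursor API (we also hide the OS cursor, since the game draws its own):

Cursor.lockState = CursorLockMode.Locked; // pin the OS cursor to the center of the view
Cursor.visible = false;                   // hide it; the game draws its own cursor

And the per-frame poll looks like this: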

mouseDelta = new Vector3(Input.GetAxisRaw("Mouse X"),Input.GetAxisRaw("Mouse Y"));

In the above code, mouseDelta represents the delta (amount of change) in the position of the mouse since the previous time we polled (the last frame). We get the horizontal (X) and vertical (Y) components of this delta using the GetAxisRaw functions, to avoid any time smoothing that Unity might otherwise apply.

Now we can get an amount that the mouse has moved, but there’s one problem: if the OS mouse cursor moved 10 units horizontally, we don’t know how far that really is, because we don’t know what the units are. Unfortunately, Unity’s documentation is no real help here, but through experimentation, I have determined that these values are in “pixels at 96dpi”. This might seem the same as pixels on the screen; however, because of display scaling in Windows, these values may not correspond 1:1 with pixels on the screen.

In any case, correcting for this is fairly easy, as we can simply ask Unity for the DPI of the screen. Then we normalize this value to 96dpi and multiply the mouse movement by it:

float DPI_Scale=(Screen.dpi/96.0f);
mouseDelta *= DPI_Scale;

This now means our mouseDelta is in actual screen pixels.
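For example, with Windows display scaling set to 150%, Screen.dpi reports 144, so DPI_Scale comes out to 1.5 and a raw delta of 10 becomes 15 actual screen pixels.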

So…at this point we can just take the movement and project it into the world of the game…right?

Well, unfortunately, no, and this is part of the reason that I had to go down an annoying rabbit-hole that involved calling into the Win32 API. For now, let’s just continue onward and ignore that problem, as the explanation of how the mouse gets transformed into world space is already quite complicated on its own. Just make a mental note, and we’ll come back to it in a bit.

So, we have another issue to resolve, which is that the game can be running at its own scaled resolution. This happens at two levels. The first is that the player can run the game at a sub-native resolution in fullscreen mode. The game runs in a borderless fullscreen window, which means that if, for example, the game is running fullscreen at 1920×1080 on a 4k monitor, one pixel in the game’s output resolution will correspond to four screen pixels.

There are unfortunately some issues here, in that Unity does not always properly tell you the resolution information for the monitor that the game is running on, and instead shows you the resolution of the “main monitor” as it is set in the display settings in Windows. This will be the cause of the Win32 API madness later (interestingly enough, the DPI value is always correct, even when the screen resolution information is not). In any case, we will pretend for now that the Unity API call returns the correct value in all cases, so the code to resolve this possible mismatch in resolution is as follows:

Vector2 ScreenRes = new Vector2(Screen.width, Screen.height);
float renderScale = (ScreenRes.x / ScreenRes.y < GameRes.x / GameRes.y) ? ScreenRes.x/GameRes.x : ScreenRes.y/GameRes.y;
if(!Screen.fullScreen) renderScale = 1.0f;
mouseDelta *= renderScale;

You might notice that this is a bit more complicated than the code accounting for screen DPI scaling. This is because the game runs at a forced 16:9 aspect ratio, in order to tightly control what the player can and cannot see at any given time. This means that if the player runs the game fullscreen on a monitor of a different aspect ratio, it will be either letterboxed or pillarboxed, depending on whether the monitor’s native aspect is wider or narrower than the game’s. The final rendering scale therefore depends on which axis of the game’s view fully spans the monitor (horizontal in the case of letterboxing, vertical in the case of pillarboxing). For example, the game at 1920×1080 on a 2560×1080 ultrawide gets pillarboxed; the vertical axis spans the monitor, so renderScale is 1080/1080 = 1.

Also of course, if the game is not fullscreen, we don’t have to worry about this render scaling at all.

Omigosh, we’re almost done with this, and then we can go home. The next and final thing we have to do to get the mouse movement into world space is account for how far the in-game camera is zoomed in. Luckily, Unity provides a fairly straightforward function to map a position from the game’s rendered viewport into a world position. We use that to figure out a scaling value between screen and world space:

float mouseScale = (mainCamera.ScreenToWorldPoint(Vector3.one).x - mainCamera.ScreenToWorldPoint(Vector3.zero).x)/2.0f;
mouseDelta *= mouseScale;

ScreenToWorldPoint takes an input coordinate in pixels and returns a coordinate in “Unity units”, so we are taking the horizontal distance from one screen pixel to a pixel immediately up and to the right of it, then dividing that horizontal distance by 2 to find the zoom factor. The reason for the division by 2 is actually a bit of a mystery to me at the time of writing, which is why writing these deep tech dives can be more useful for development than they might otherwise seem. I initially thought that perhaps I was somehow measuring a diagonal distance here. However, changing the code to use a pixel directly to the right does not produce a different result. So it remains a mystery to me; without the division, though, the scaling is wrong. Perhaps someone will comment on this devlog and tell me where I’ve obviously messed up the math somewhere earlier in a way that would require this division, or some other reason why it needs to happen here.

At this point, other than multiplying the mouse movement by an additional sensitivity value, we are done, and we can now apply mouseDelta as a translation to the actual in-game object representing the mouse cursor.
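In code form, the last step is just this (cursorObject and mouseSensitivity being whatever your game defines them to be):

// With everything scaled, the in-world cursor object simply translates by the delta.
mouseDelta *= mouseSensitivity;
cursorObject.transform.position += mouseDelta;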

To Be Continued

I know, I know, I said I would get into the API nonsense, but this piece has gone on long enough on its own, so I’m gonna leave this on a cliffhanger and get back to fill in the details later this week. Till then, thanks for reading, and remember to wishlist the game on Steam if you haven’t!

66. Burnout and Walking Animations

I plan on posting these video development logs on a weekly basis over at my YouTube channel. I may post some occasional reminders here going forward, but I’d rather keep this written devlog as its own separate thing rather than simply a cross-posting of the videos. So, if you don’t want to miss any of the video devlogs, I recommend you subscribe to my YouTube channel.

However, since you’re here instead of at YouTube, I’ll reward you with a few sneak peeks into what I left out of the video log.

This past week, I’ve been working on relatively small polish features, which is a bit of a continuation of last week’s work on footstep sounds (I still have some more of those to do, actually). I think this is partly a way to give myself a break from tearing up major chunks of the game to add artwork. But even if it feels like less progress, these small things propagate across the entire game and affect your interactions throughout.

One of these small improvements is to the walking animations. The previous animations were serviceable, but when I added running animations, the runs looked significantly better by comparison. So I added more frames to the walking animation and made some small tweaks. You can see the comparison between the old (left) and new (right) animations below:

I still want to add animations for when you move diagonally, and I hope to get to that next. But I think even this goes some way towards giving the game a more polished feel.

I did a few other fun things, but I’ll save those for next week’s video. Hope to see you then. 🙂

65. Taiji Video Logs and Next Gen Thoughts

I’ve started a new video series in addition to the normal written development logs. This will hopefully provide a more exciting and fun avenue to show and talk about what’s involved in working on the game, and perhaps some of my thoughts on game design in general. These should, for the most part, get posted here as well, but you can subscribe to the YouTube channel if you want to know as soon as they go up.

Bonus: Thoughts on Next Gen Consoles

(8 out of 10 gamers couldn’t tell which of the above was a next-gen game)

I was planning to talk about the tonally strange but visually impressive Unreal Engine 5 demo on my development stream yesterday, but since the stream ran into technical difficulties and didn’t happen, I’ll say it here instead.

Maybe I’m only noticing this for the first time now because I’m actively developing this game while new consoles are being announced, but it feels like there is a uniquely large gap between how developers and gamers feel about the new hardware. Gamers seem largely underwhelmed, whereas developers are excited by the prospects.

This can mostly be explained by the differences in what these two customers want out of a new gaming console. Developers want the hardware to make it easier for them to make games. Gamers just want to be sold on graphics or gameplay that pushes past the limits of what they’ve seen before.

On the first point, it’s easy to make a case that both Microsoft and Sony are providing a way forward for developers. Microsoft is selling a beefy PC for your living room, and beefy PCs are easier to get games running well on. Sony is selling a slightly less beefy PC, but with some serious storage tricks up its sleeve that can only really happen with bespoke game-playing hardware.

For gamers, well, it’s harder to make the case.

This is partly developers’ fault. We have gotten so good at working with limited hardware that it’s a challenge to show the difference between real-time global illumination and traditional baked lightmaps, or between dynamically decimated hero assets and manually authored LODs. There isn’t much difference as far as the result is concerned; the primary benefit of better hardware and technology is that developers can reach the same results much more easily and quickly.

Pushing the frontiers of gameplay or photorealism is only partly about the hardware. Hardware matters for sure (you can’t run everything on a potato), but innovation is increasingly the thing that pushes boundaries.

A good example of graphics innovation mattering more than hardware is the introduction of Physically-Based Materials over the past decade. This precipitated a giant leap forward in the average visual fidelity of games, not so much because the hardware was more powerful, but because the pipeline for authoring the artwork was much improved.

Although an argument could be made that additional processing power allowed for shaders complex enough to more accurately simulate physical phenomena, this innovation in material authoring and rendering didn’t occur any earlier in the film industry, where real-time performance was never the constraint. So it seems like more of a process innovation than a consequence of better hardware.

To say the same thing another way: before PBM it was possible to make games and films with very realistic looks, but success required artists with tons of technical experience and skill. Changing the tools made it much easier for even an inexperienced artist to produce output that looks very realistic.

I think this is the type of progress on display in the Unreal demo and is also largely lost on the average gamer. For them, it’s simply about the results.

As for gameplay innovation, that is a much more challenging problem, and unless you are going specifically for a game design about verisimilitude (e.g. Grand Theft Auto), it is a problem that is largely divorced from the visual one. Of the game designs that I feel were most impactful over the past decade, rather few were technical powerhouses. Some of them (Dark Souls) are downright technical messes. So it’s hard to say exactly what feels “next generation” in terms of game design until you see it, and it’s hard to draw a direct connection between these design leaps and console generations.

Well, I’m certainly excited about a new hardware generation. There’s still something about it that reminds me of the old days, when you’d read in a magazine about something Nintendo was cooking up in Japan. But it remains to be seen whether the next console generation will convince as many people to pull out their wallets on launch day as the previous one did. It is challenging to convince gamers that they need new hardware simply because it makes things easier for developers.

64. The Graveyard

I spent the last month doing an art pass on another one of the major areas in the game. This area is called “The Graveyard” and consists of more or less what the name describes. Some of the earliest art tests for the game were done with this area, so I thought it would be fun to compare a very early version of the game’s aesthetic with its current look. (Some of the details in the old version have been hidden to avoid spoilers.)

I would say that the new version is much improved. The old version often did not read as a graveyard at all, probably because the only headstones featured were the strange puzzle ones.

This area exists in a snowy biome, which is different from anything I had previously built for the game, and it required a lot of custom artwork. It was a bit of a challenge, but things went much more smoothly than I expected. It’s sometimes hard for me to believe that I’ve gotten competent enough at doing art for that to happen. I don’t really consider myself much of an artist.

One of the more fun technical bits to get right was the snowy grass, which uses a procedural technique to generate the patterns of snow and exposed grass. You can see a demonstration of its flexibility below:
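Procedural patterns like this are often just thresholded noise. A minimal sketch of that idea (not necessarily the exact in-game logic; all parameters illustrative):

// Decide snow vs. exposed grass per tile by thresholding smooth 2D noise.
// Raising 'coverage' toward 1 snows over more of the ground.
bool IsSnowy(int x, int y, float coverage)
{
    float n = Mathf.PerlinNoise(x * 0.1f, y * 0.1f);
    return n < coverage;
}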

One of the other things I did this month is much more secret, but is something that I’ve been planning for a long while. I finally figured out a way to implement the idea. I can’t tell you much more, but here’s a sneak peek:

I’m quite happy with the way this interior came out visually, but since I chose a different approach to lighting it, it has me thinking I need to rework some of the other interior areas to match this level of fidelity.

For the next month, I still have more areas that need artwork, but I also have some puzzle design work to finish up. Mostly mix-in puzzles, but I also need to get started on prototyping some gameplay concepts for the ending of the game. Somehow, after all this time, I still don’t have a proper ending for the game.

63. The Gardens

This last week I finished the art pass on the area I mentioned in the last devlog post. I settled on “The Gardens” as the name for the area, as it features flower gardens near the entrance, a vegetable garden to the south, and an abandoned grape trellis to the east.

This marks the end of a month during which the game was unplayable, since I have to break a lot of things when revising areas. Normally these types of revisions don’t take a whole month, but the metapuzzle concept here (mentioned in the last update) was quite technically complex, and I ended up having to pare back some parts of the design due to playability problems.

I still don’t want to spoil how the metapuzzle functions, but essentially, it was possible for the player to get the puzzle into an unsolvable state. I tried to resolve this in a very lightweight way that would (in theory) add some additional depth to the puzzle. However, once I implemented it, I realized that my solution didn’t actually work, and so I had to choose a more aggressive fix that eliminated some of the additional depth I had hoped to have.

I am still hopeful that I will come up with a way to add that depth back in, but I felt I had spent enough time on the problem for now, and the metapuzzle is in “good enough” shape. Certainly an improvement on what was there before.

So I’ve finished up six of the eleven major areas in the game, and the next obvious step would be to begin artwork on one of the others. However, I’m a little reluctant to immediately break the game again, so I plan on switching gears to puzzle design for a bit. I still need to take a look at improving the panel puzzles for the Gardens, so I may do that this week in addition to some of the other puzzle design tasks on the list.