Gallery!

For the last couple months (!), I’ve been working on doing the art for one of the areas in the game. This area is called the Gallery, and is a large manor which has been repurposed to house many works of art and puzzles. This area marks the largest and most complex single interior that I’ve made for the game so far. Below you’ll see the exterior on the left, as well as a diagonal cutaway to show all the different interior floor layers.

Needless to say, this was a time consuming effort, and I’m glad to finally be finished with the area. I have also been working on tying it into the surrounding environment, which will take some more time. Below you can see the area near the entrance, which features a winding path down a cliff-side.

68. In The Mountains of Madness (Win32 Wrangling Across the API Boundary)

As I mentioned in the previous blog post, I ran into some issues with mapping the mouse movement from the screen into the game. One of those issues required venturing outside the bounds of the Unity API.

So, for the longest time, I had a certain number of playtesters who were reporting that the mouse movement in the game was very sluggish for them. Unfortunately, I was not able to reproduce this issue until I purchased another monitor. It’s possible that the issue still exists in other circumstances and for other reasons, but in this blog post I will be covering what I did to at least resolve the issue that I could reproduce.

The Issue

“…Unity does not always properly tell you the resolution information for the monitor that the game is running on, and instead shows you the resolution of the “main monitor” as it is set in the display settings in Windows.”

–Me, earlier

Although Unity provides us with a Screen class that contains information about the resolution of the screen and the size of the game window, this information is not always correct for our purposes. I was wrong, however, in the previous post, when I asserted that it always pulls info from the main monitor; what it actually does depends on whether the game is running in windowed mode or fullscreen mode.

If the game is running in windowed mode, then Screen.currentResolution is the native resolution of the monitor that the game is currently running on. However, if the game is running in fullscreen mode, then currentResolution will always be the same as the game’s internal rendering resolution, regardless of the final output resolution.

For example, if we are running the game fullscreen at 1920×1080 but the display is 3840×2160, even though Unity upscales the game to 3840×2160 in order to make it fullscreen, currentResolution will be 1920×1080.

This is a big problem, because in order to scale our mouse movement from screen space into world space, we need to know the ratio between the size of the screen pixels and game pixels. In fullscreen mode, if the player is running below their native monitor resolution, a single pixel in the game’s internal resolution will correspond to multiple pixels on the screen because of upscaling.
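
To put some concrete (made-up) numbers on that ratio, here's a quick sketch using the example resolutions above; the variable name is just for illustration:

// Fullscreen output of 1920x1080 upscaled onto a 3840x2160 monitor:
float pixelRatio = 3840f / 1920f; // = 2, so each game pixel covers a 2x2 block of screen pixels
// Without knowing this ratio, mouse movement measured against one resolution
// can't be mapped correctly onto the other.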

Funnily enough, even though we can get this ratio in windowed mode, it is totally unnecessary there, as the game is rendered unscaled in windowed mode.

The Solution

Because I couldn’t reproduce the issue for the longest time, my initial hunch here was that solving this problem would involve coming up with some other way to handle mouse input appropriately.

I hoped that the new input system for Unity would allow me to solve this issue; however, my initial experiments showed some kind of cumulative performance issue that would cause the game to become unplayable after about 30 minutes. (I am not sure whether this issue has since been resolved, but in any case, I decided to pursue other possibilities for fixing the mouse movement.)

Upon finally reproducing the issue, and coming to the diagnosis that I mentioned in the previous section of this article, I set about trying to get the information about the monitor in some other way.

There are other functions in the Unity API, but none of them were very helpful. For example, you can easily find information about all the monitors using the Display class, but there is no information about which monitor the game is currently running on. Display.main is simply the main system monitor according to Windows (this was the cause of my earlier confused reporting about Screen).
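
For example, here's a quick sketch that lists every display Unity knows about; nothing in it tells us which display the game window actually lives on:

foreach (Display d in Display.displays)
{
    // systemWidth/systemHeight are the native resolution Windows reports for
    // each attached display, but there's no "this is the one we're on" flag.
    Debug.Log("Display: " + d.systemWidth + "x" + d.systemHeight);
}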

So I did what any confused programmer would do at this point; I googled the problem. This led me to this thread on the Unity forums, and some very confusing code.

There’s no good way around saying it: I just copied and pasted that code and tried to work from there. This was after locating the only thing I could find written officially by Unity about this type of functionality, which was really no help at all.

(I also found a Unity Answers thread with some similar code to the thread on the forums.)

So, in hopes of explaining what I have learned in a better way, and adding one more random thing to the internet about how to handle calls to the Win32 API from Unity, I will post the entire class I wrote, and we’ll go through the code as far as I can explain it.

First, the code:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System;
using System.Runtime.InteropServices;

public static class MonitorInfo 
{
    [DllImport("user32.dll")]
    private static extern IntPtr GetActiveWindow();
    [DllImport("user32.dll")]
    private static extern IntPtr MonitorFromWindow(IntPtr hwnd, int flags);
    
    [StructLayout(LayoutKind.Sequential)]
    public struct RECT
    {
        public int left;
        public int top;
        public int right;
        public int bottom;
    }
    
    [StructLayout(LayoutKind.Sequential)]
    public class MONITORINFO
    {
        public int cbSize = Marshal.SizeOf(typeof(MONITORINFO));
        public RECT rcMonitor = new RECT();
        public RECT rcWork = new RECT();
        public int dwFlags = 0;
    }
    
    [DllImport("user32.dll", CharSet = CharSet.Auto)] [return: MarshalAs( UnmanagedType.Bool )]
    private static extern bool GetMonitorInfo(IntPtr hmonitor, [In, Out] MONITORINFO info);
    
    
    public class monitor
    {
        public int width, height;
    }
    
    public static monitor current;
    static MONITORINFO info;
    public static bool isValid;
    
    public static void Update()
    {
        if(info == null) info = new MONITORINFO();
        isValid = GetMonitorInfo(MonitorFromWindow(GetActiveWindow(), 0), info);
        if(isValid)
        {
            if(current == null) current = new monitor();
            current.width = info.rcMonitor.right - info.rcMonitor.left;
            current.height = info.rcMonitor.bottom - info.rcMonitor.top;
        }
    }
    
    
}

Looking at the boilerplate at the top of the file, it’s pretty standard fare; however, we need the following in order to interface between Unity’s C# environment and the Win32 API layer:

using System.Runtime.InteropServices;

Next, in our class we need to declare prototypes for the external functions that we plan on calling. The [DllImport] attribute tells the runtime to import these functions from the user32.dll file when they are called.

[DllImport("user32.dll")]
private static extern IntPtr GetActiveWindow();
[DllImport("user32.dll")]
private static extern IntPtr MonitorFromWindow(IntPtr hwnd, int flags);

These function definitions are based off the interfaces specified for these functions on MSDN:

GetActiveWindow()

HWND GetActiveWindow();

MonitorFromWindow()

HMONITOR MonitorFromWindow(
  HWND  hwnd,
  DWORD dwFlags
);

A big part of the annoyance here is these types that Microsoft uses for their API calls. What the heck is an HWND? Well, unfortunately I do have some experience with the terrible Windows API, so I know that HWND is a Handle to a WiNDow. Similarly for HMONITOR, which is a handle to a monitor. And by handle, they just mean an opaque pointer-sized value.

God knows how you’re supposed to find this out, but the type that we’re supposed to use in C# for these pointer-sized handles is IntPtr.

Okay, so that just leaves DWORD to figure out.

Well, a DWORD, or Double WORD, is just a 32-bit unsigned integer. The name comes from the idea that the processor word size is 16 bits (which it no longer is, but it was around the time the Windows API was designed).

Anyway, moving on, we can just use int in our C# code, as the C# integer size is 32 bits. That gives us the definitions below:

private static extern IntPtr GetActiveWindow();
private static extern IntPtr MonitorFromWindow(IntPtr hwnd, int flags);

private and static may not be necessary for you, but in my case this is part of a static class so that it can be easily accessed at global scope from all my other classes in Unity. Static classes in C# require all members to be declared static (and unfortunately the compiler doesn’t do this for you; we actually have to type static a billion times). I also chose to make these members private, because that keeps the messy Win32 plumbing hidden from the rest of the codebase in a class that is accessible everywhere.

So, next we have a couple of structure definitions:

[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
    public int left;
    public int top;
    public int right;
    public int bottom;
}
[StructLayout(LayoutKind.Sequential)]
public class MONITORINFO
{
    public int cbSize = Marshal.SizeOf(typeof(MONITORINFO));
    public RECT rcMonitor = new RECT();
    public RECT rcWork = new RECT();
    public int dwFlags = 0;
}

These are, again, based on the MSDN documentation for these structures.

Rect

typedef struct tagRECT {
  LONG left;
  LONG top;
  LONG right;
  LONG bottom;
} RECT, *PRECT, *NPRECT, *LPRECT;

MonitorInfo

typedef struct tagMONITORINFO {
  DWORD cbSize;
  RECT  rcMonitor;
  RECT  rcWork;
  DWORD dwFlags;
} MONITORINFO, *LPMONITORINFO;

Although the RECT members are declared as LONG, that is something of a legacy quirk of the Windows headers: a Win32 LONG is a 32-bit type, so C# int is actually the correct match here. A C# long is 64 bits and would throw off the structure layout.

We also need to use the [StructLayout] attribute because, normally, the C# runtime is free to re-order the fields of a struct or class for memory-efficiency purposes; in our case, the fields of these structures need to be in exactly the order the Win32 API expects.

One strange detail is this line:

public int cbSize = Marshal.SizeOf(typeof(MONITORINFO));

This is essentially code that I copied from the forum post I mentioned earlier, and as such I can’t entirely explain why it works this way, but the Marshal class is part of the System.Runtime.InteropServices namespace that we included earlier. Specifically, it is a class used to convert between managed and unmanaged representations of data.

Based on that, my understanding is that Marshal.SizeOf gives us the size our MONITORINFO class will occupy in unmanaged memory once it is marshaled across the API boundary, which is exactly the value the Win32 side expects in cbSize (the API uses it to tell whether it has been handed a MONITORINFO or the larger MONITORINFOEX). A plain C# sizeof can’t give us that; for one thing, it can’t be applied to a managed class like this one at all, so if we change this line to….

public int cbSize = sizeof(typeof(MONITORINFO));

…then the code won’t even compile.
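
As a sanity check, the sizes add up the way you’d expect:

// cbSize:    4 bytes (int)
// rcMonitor: 16 bytes (four ints)
// rcWork:    16 bytes (four ints)
// dwFlags:   4 bytes (int)
// Total:     40 bytes, which matches sizeof(MONITORINFO) on the native side.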

Okay, moving on. Now we have this whopper of a function prototype definition.

[DllImport("user32.dll", CharSet = CharSet.Auto)] [return: MarshalAs( UnmanagedType.Bool )]
private static extern bool GetMonitorInfo(IntPtr hmonitor, [In, Out] MONITORINFO info);

This is, of course, still based on the definition on the MSDN page:

GetMonitorInfoA()

BOOL GetMonitorInfoA(
  HMONITOR      hMonitor,
  LPMONITORINFO lpmi
);

“Wait, hold on, why is this called GetMonitorInfoA?”

Answering that will also answer why we need to have this CharSet = CharSet.Auto attribute as part of the dll import.

There are two versions of GetMonitorInfo in the Win32 API: one for ANSI strings (GetMonitorInfoA, the legacy version) and one for wide, UTF-16 strings (GetMonitorInfoW). There is no plain GetMonitorInfo export at all, and the CharSet = CharSet.Auto setting is what lets the marshaler pick the right entry point for us.

“But wait, why on earth are we worried about text?”

We only have to care about this because the same function can also be called with a MONITORINFOEX structure, which contains a string holding the name of the monitor. In our case, we are just throwing that data away and using the smaller MONITORINFO struct, but the character set still has to be pinned down as part of our function prototype definition.

*sigh*
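
For reference, if we did want that monitor name, the MONITORINFOEX declaration would look roughly like this (a sketch that would sit next to the MONITORINFO class above; I haven’t needed or tested it in the game, so treat it accordingly):

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
public class MONITORINFOEX
{
    public int cbSize = Marshal.SizeOf(typeof(MONITORINFOEX));
    public RECT rcMonitor = new RECT();
    public RECT rcWork = new RECT();
    public int dwFlags = 0;
    // szDevice holds the monitor's device name; CCHDEVICENAME is 32 in the Windows headers.
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
    public string szDevice = "";
}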

Another oddity as part of the Attribute definition is this:

[return: MarshalAs( UnmanagedType.Bool )]

Why do we have to marshal bools? Don’t ask me, but the function returns a BOOL specifying whether or not the monitor info could be retrieved successfully, and if you actually want to know that, you need to marshal the value across the API boundary: a native Win32 BOOL is a 4-byte integer, while a C# bool isn’t, so the [MarshalAs] attribute spells out how the conversion should happen.

The only detail that might be confusing about the actual definition of the function is the [In, Out] attribute. Because MONITORINFO is declared as a class, it already crosses the API boundary as a pointer; [In, Out] tells the marshaler to copy the data in both directions, so the fields that GetMonitorInfo fills in actually make it back to us. Changing it to ref does not work (as far as I can tell, that would add an extra level of indirection on top of the class reference).

At this point, the rest of the code should be fairly understandable, if you have any experience with Unity C# coding:

public class monitor
{
    public int width, height;
}
    
public static monitor current;
static MONITORINFO info;
public static bool isValid;
    
public static void Update()
{
    if(info == null) info = new MONITORINFO();
    isValid = GetMonitorInfo(MonitorFromWindow(GetActiveWindow(), 0), info);
    if(isValid)
    {
        if(current == null) current = new monitor();
        current.width = info.rcMonitor.right - info.rcMonitor.left;
        current.height = info.rcMonitor.bottom - info.rcMonitor.top;
    }
}

One thing that’s worth noting is that I keep a publicly readable isValid bool, so that I can always check whether the calls to the Win32 API returned valid data before I go around using it.

Implementation

So with all that done, we can now change the code that handles the mouse scaling to the following:

Vector2 ScreenRes = new Vector2(Screen.width, Screen.height);
if(MonitorInfo.isValid) ScreenRes = new Vector2(MonitorInfo.current.width, MonitorInfo.current.height);

This means that, as long as we were able to get valid information back from the Win32 API, we will use that. If not, we fall back to the Screen class that Unity provides for us, which may be wrong in some cases.
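
One thing that snippet glosses over is that MonitorInfo.Update() has to be called before the values are read. A minimal sketch of how that might be wired up (the wrapper class name here is just illustrative, not necessarily how the game does it):

using UnityEngine;

public class MonitorInfoRefresher : MonoBehaviour
{
    void Update()
    {
        // Refresh the Win32 monitor info every frame so MonitorInfo.current
        // stays correct even if the window is dragged to a different monitor.
        MonitorInfo.Update();
    }
}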

Hope you learned something!

67. Mouse Wrangling

Okay, so I’m a bit late on this devlog entry, but in one of my recent video dev logs (which I do weekly over on my YouTube channel, in case you haven’t been checking them out) I promised that I would write about how mouse movement is handled in-depth, so I’m going to pay off on that promise.

Mouse movement in games is almost always trickier to get right than it seems when you’re playing a finished product. For one, the mouse is a direct input method: when you move the mouse, you expect an equivalent motion in-game, either of the camera or a cursor. This means that latency is much more noticeable here than it is with something indirect like a thumbstick.

Latency can be handled in any number of ways and is mostly outside the scope of this article, but I mention it because it’s a very common case where the technical details of how you handle input can have a large effect on how the game feels.

Mouse motion in Taiji is handled in a way that I haven’t seen any other games use. Most top-down or 2D games with mouse controls (e.g. Diablo, Civilization, StarCraft) just have you move the mouse around on the screen as if it were your desktop, and the camera moves very rigidly and only under direct player control. However, in Taiji, camera movement is much more dynamic and loosely follows the position of your player avatar. This causes a bit of an issue, in that the camera may still be moving when the player is trying to use the mouse cursor to target a tile on a puzzle panel. Trying to hit a moving target with the mouse is a bit of a challenge, and not one that I am interested in introducing into the game.

There are a few ways that this problem could have been resolved. One possible solution is to just never move the camera when a puzzle is visible and interactable. However, this causes some discontinuities in the camera movement which can be, at best, irritating and, at worst, nauseating. It also doesn’t work for some panels that are always interactable whenever they are on screen. Another possibility is to just give up and lock the camera to the player, as many other games do (Diablo, Nuclear Throne). This approach didn’t appeal to me for aesthetic reasons: I want the game to have a relaxed feel, and the slower camera movement is part of that.

The approach I chose instead was to treat the mouse cursor as though it were a character existing in the world of the game, and to let the player control that character directly with the mouse. Another way of thinking about this is that the mouse behaves as though the entire world of the game were your computer desktop, and we are just seeing a small view into that “desktop” at any one time. The view can move around, but the mouse cursor stays in the same place relative to everything else on the “desktop”. Technically, this is to say that the cursor lives in world space rather than screen space. This can all seem a bit abstract, though, so to help make this concept of “world-space” vs “screen-space” mouse cursors a bit more clear, I recommend watching the video below, which I excerpted from one of my recent video devlogs.

Great! So this fixes our issue of having the mouse cursor drift around as the camera moves, and of the player sometimes trying to hit a moving target when clicking on puzzle panel tiles. However, we have now introduced another problem: since the fairy cursor character only moves when the player moves the mouse, the player might walk across the map, forget about the cursor, and leave it behind. Luckily, in this game the player seldom needs to move both the player avatar and the cursor at the same time, so if the player walks more than a certain distance without touching the mouse, we can just put the little fairy character into a separate mode where it follows the player’s avatar around automatically. This might seem like it would get annoying, but in practice, it never really gets in the way of what you’re doing.

So how does this work?

So far, I’ve mostly been recapping the same ground that I covered in my recent devlog video about the mouse movement, but in this written devlog I’ve got much more space to dive into some of the technical details about how this functions.

Fundamentally, the work that we need to do here is to translate a certain amount of motion of a physical mouse into an equivalent motion in the game world. Taiji is being developed using Unity, so in this case, I use Unity’s input system (which Unity is currently in the process of replacing). In Unity’s input system, there are a few ways that I can access information about the actual mouse device.

One possibility is to simply look at the screen position of the mouse, and then project that into the game world in some way. However, we don’t use this method, as we want to make sure the OS mouse cursor is confined to the window while the game is running (you can free up the mouse to use other applications by pausing the game, of course). So we lock the OS mouse cursor to the center of the game window and just ask Unity to tell us how much it tried to move each frame.
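
I won’t swear this is exactly how the game’s code reads, but in Unity, confining the OS cursor like this generally looks something along these lines:

// Lock the OS cursor to the center of the game window and hide it;
// the in-game cursor is drawn separately.
Cursor.lockState = CursorLockMode.Locked;
Cursor.visible = false;

With the cursor locked, we then ask Unity each frame how far the mouse tried to move: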

mouseDelta = new Vector3(Input.GetAxisRaw("Mouse X"),Input.GetAxisRaw("Mouse Y"));

In the above code, mouseDelta represents the delta (amount of change) in the position of the mouse since the previous time we polled (the last frame). We get the horizontal (X) and vertical (Y) components of this delta using the GetAxisRaw functions, to avoid any time smoothing that Unity might otherwise apply.

Now we can get an amount that the mouse has moved, but there’s one problem: if the OS mouse cursor moved 10 units horizontally, we don’t know how far that really is, because we don’t know what the units are. Unfortunately, Unity’s documentation is no real help here, but through experimentation I have determined that these values are in “pixels at 96dpi“. This might seem the same as pixels on the screen; however, because of display scaling in Windows, these values may not correspond 1:1 with pixels on the screen.

In any case, correcting for this is fairly easy, as we can simply ask Unity for the DPI of the screen. Then we normalize this value to 96dpi and multiply the mouse movement by it:

float DPI_Scale = Screen.dpi / 96.0f;
mouseDelta *= DPI_Scale;

This now means our mouseDelta is in actual screen pixels.
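
One caveat if you’re adapting this: Screen.dpi can report 0 when Unity can’t determine the display’s DPI, so a fallback like the one below is worth having (this guard is an extra suggestion on my part, not necessarily something the game needed):

// Fall back to 96dpi if Unity can't determine the real DPI (Screen.dpi returns 0 in that case).
float dpi = (Screen.dpi > 0f) ? Screen.dpi : 96f;
float DPI_Scale = dpi / 96.0f;
mouseDelta *= DPI_Scale;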

So…at this point we can just take the movement and project it into the world of the game…right?

Well, unfortunately, no, and this is part of the reason that I had to go down an annoying rabbit-hole that involved calling into the Win32 API. For now let’s just continue onward and ignore that problem, as the explanation of how the mouse gets transformed into world space is already quite complicated on its own. But just make a mental note and we’ll come back here in a bit.

So, we have another issue to resolve, which is that the game can be running at its own scaled resolution. This happens at two levels. The first is that the player can run the game at a sub-native resolution but in fullscreen mode. The game runs in a borderless fullscreen window, which means that if, for example, the game is running fullscreen at 1920×1080 on a 4K monitor, one pixel in the game’s output resolution will correspond to four screen pixels (a 2×2 block).

There are unfortunately some issues here, in that Unity does not always properly tell you the resolution information for the monitor that the game is running on, and instead shows you the resolution of the “main monitor” as it is set in the display settings in Windows. This will be the cause of the Win32 API madness later (interestingly enough, the DPI value is always correct, even when the screen resolution information is not). In any case, we will pretend for now that the Unity API call returns the correct value in all cases, and so the code to resolve this possible mismatch in resolution is as follows:

Vector2 ScreenRes = new Vector2(Screen.width, Screen.height);
float renderScale = (ScreenRes.x / ScreenRes.y < GameRes.x / GameRes.y) ? ScreenRes.x/GameRes.x : ScreenRes.y/GameRes.y;
if(!Screen.fullScreen) renderScale = 1.0f;
mouseDelta *= renderScale;

You might notice that this is a bit more complicated than the code accounting for screen DPI scaling. This is because the game runs at a forced 16:9 aspect ratio in order to tightly control what the player can and cannot see at any given time. This means that if the player is running the game fullscreen on a monitor with a different aspect ratio, it will be either letterboxed or pillarboxed, depending on whether the monitor’s native aspect is wider or narrower than the game’s. The final rendering scale will, therefore, depend on which axis of the game’s view fully spans the monitor (horizontal in the case of letterboxing, and vertical in the case of pillarboxing).

Also of course, if the game is not fullscreen, we don’t have to worry about this render scaling at all.
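
To make that concrete, here is the same renderScale calculation worked through with a couple of made-up monitor resolutions:

// Game rendering at 1280x720, fullscreen on a 1920x1200 (16:10) monitor:
//   1920/1200 = 1.6, narrower than 1280/720 = 1.78, so the image is letterboxed
//   and the game's width spans the screen:
//   renderScale = 1920 / 1280 = 1.5
// On a matching 16:9 monitor (say 2560x1440), either branch gives:
//   renderScale = 1440 / 720 = 2.0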

Omigosh, we’re almost done with this and then we can go home. So the next and final thing that we have to do to get the mouse movement into world space is to account for the degree to which the in-game camera is zoomed in. Luckily, Unity provides us with a fairly straightforward function to map a position from the game’s rendered viewport into a world position. We use that to figure out a scaling value between screen and world space:

float mouseScale = (mainCamera.ScreenToWorldPoint(Vector3.one).x - mainCamera.ScreenToWorldPoint(Vector3.zero).x)/2.0f;
mouseDelta *= mouseScale;

ScreenToWorldPoint takes an input coordinate in pixels and returns a coordinate in “Unity units”, so we are taking the horizontal distance from one screen pixel to a pixel immediately up and to the right of it, then dividing that distance by 2 to find the zoom factor. The reason for the division by 2 is honestly a bit of a mystery to me at the time of writing, which is why writing these deep tech dives can be more useful for development than they might otherwise seem. I initially thought that perhaps I was somehow measuring a diagonal distance here. However, changing the code to use a pixel directly to the right does not produce a different result (and in hindsight, since I only take the .x component, it couldn’t anyway). So it remains a mystery. Without the division, though, the scaling is wrong. Perhaps someone will comment on this devlog and point out where I’ve messed up the math somewhere earlier that would require this division, or some other reason why it needs to happen here.
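
For whatever it’s worth, here is the bit of math I keep circling around (this assumes an orthographic camera and square pixels, which is an assumption on my part rather than something established above):

// With an orthographic camera, the visible height is 2 * orthographicSize world
// units spread over pixelHeight screen pixels, so:
float unitsPerPixel = (2f * mainCamera.orthographicSize) / mainCamera.pixelHeight;
// which would make the mouseScale computed above work out to
// orthographicSize / pixelHeight after the division by 2.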

At this point, other than multiplying the mouse movement by an additional sensitivity value, we are done, and we can now apply mouseDelta as a translation to the actual in-game object representing the mouse cursor.
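
Putting the whole chain together, the transformation ends up looking roughly like this (sensitivity and cursorObject are illustrative names for this sketch, not necessarily what the game’s code calls them):

Vector3 mouseDelta = new Vector3(Input.GetAxisRaw("Mouse X"), Input.GetAxisRaw("Mouse Y"));
mouseDelta *= DPI_Scale;    // account for Windows display scaling ("pixels at 96dpi")
mouseDelta *= renderScale;  // account for fullscreen upscaling / letterboxing
mouseDelta *= mouseScale;   // account for the camera's zoom (screen to world)
mouseDelta *= sensitivity;  // player-facing sensitivity setting
cursorObject.transform.position += mouseDelta;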

To Be Continued

I know, I know, I said I would get into the API nonsense, but this piece has gone on long enough on its own, so I’m gonna leave this on a cliffhanger and get back to fill in the details later this week. Till then, thanks for reading, and remember to wishlist the game on Steam if you haven’t!

66. Burnout and Walking Animations

I plan on posting these video development logs on a weekly basis over at my YouTube channel. I may post some occasional reminders here going forward, but I’d rather keep this written devlog as its own separate thing rather than simply a cross-posting of the videos. So, if you don’t want to miss any of the video dev logs, I recommend you subscribe to my YouTube channel.

However, since you’re here instead of at YouTube, I’ll reward you with a few sneak peeks into what I left out of the video log.

This past week, I’ve been working on relatively small polish features, which is a bit of a continuation of the work I did last week with footstep sounds (still have some more of those to do, actually). I think this is partly a way to give myself a break from tearing up major chunks of the game to add artwork. But even if it feels like less progress, these small things propagate across the entire game and affect your interactions throughout.

One of these small improvements is to the walking animations. The previous animations were serviceable; however, when I added running animations, those looked significantly better in comparison. So I added more frames to the walking animation and made some small tweaks. You can see the comparison between the old (left) and new (right) animations below:

I still want to add animations for when you move diagonally, and hope to get to that next. But I think even this goes some ways towards giving the game a more polished feeling.

I did a few other fun things, but I’ll save those for next week’s video. Hope to see you then. 🙂

65. Taiji Video Logs and Next Gen Thoughts

I’ve started a new video series in addition to the normal written development logs. This will hopefully provide a more exciting and fun avenue to show and talk about what’s involved in working on the game, and perhaps just some of my thoughts on game design in general. These for the most part should get posted here, but you can subscribe to the YouTube channel if you want to know as soon as they go up.

Bonus: Thoughts on Next Gen Consoles

(8 out of 10 gamers couldn’t tell which of the above was a next-gen game)

I was planning to talk about the tonally strange but visually impressive Unreal Engine 5 demo on my development stream yesterday, but since the stream ran into technical difficulties and didn’t happen, I’ll say it here instead.

Maybe I’m only noticing this for the first time now because I’m actively developing this game while new consoles are being announced, but it feels like there is a uniquely large gap between how developers and gamers feel about the new hardware. Gamers seem largely underwhelmed, whereas developers are excited by the prospects.

This can mostly be explained by the differences in what these two customers want out of a new gaming console. Developers want the hardware to make it easier for them to make games. Gamers just want to be sold on graphics or gameplay that pushes past the limits of what they’ve seen before.

On the first point, it’s easy to make a case that both Microsoft and Sony are providing a way forward for developers. Microsoft is selling a beefy PC for your living room, and beefy PCs are easier to get games running well on. Sony is selling a slightly less beefy PC, but with some serious storage tricks up its sleeve that can only really happen with bespoke game-playing hardware.

For gamers, well, it’s harder to make the case.

This is partly developers’ fault. We have gotten so good at working with limited hardware that it’s a challenge to show the difference between real-time global illumination and traditional baked lightmaps, or between dynamically decimated hero assets and manually authored LODs. There isn’t much difference as far as the end result is concerned; however, one of the primary benefits of working with better hardware and technology is that developers can get to the same results much more easily and quickly.

Pushing the frontiers of gameplay or photorealism is only partly about the hardware. Hardware matters for sure–you can’t run everything on a potato–but innovation is increasingly the thing that pushes boundaries.

A good example of graphics innovation being more important than hardware is the introduction of Physically-Based Materials over the past decade. This precipitated a giant leap forward in average visual fidelity for games, not so much because the hardware was more powerful, but because the pipeline for authoring the artwork was much improved.

Although an argument could be made that additional processing power allowed for shaders that were complex enough to more accurately simulate physical phenomena, this innovation in material authoring and rendering didn’t occur any earlier in the film industry either. So it seems like more of a process innovation than having access to better hardware.

As another way of saying the same thing: It was possible before PBM to make games and films that had very realistic looks to them, but success required artists with tons of technical experience and skill. By changing the tools, it became much easier for even an inexperienced artist to produce an output that looks very realistic.

I think this is the type of progress on display in the Unreal demo and is also largely lost on the average gamer. For them, it’s simply about the results.

As for gameplay innovation, that is a much more challenging problem, and unless you are going specifically for a game design about verisimilitude (i.e. Grand Theft Auto), it’s a problem that is largely divorced from the visual one. Of the game designs that I feel were most impactful over the past decade, I think rather few of them would be technical powerhouses. Some of them (Dark Souls) are downright technical messes. So it’s hard to say exactly what feels “next generation” in terms of game design until you see it, and it’s hard to draw a direct connection between these design leaps and console generations.

Well, I’m certainly excited about a new hardware generation. There’s still something about it that reminds me of the old days when you’d read in a magazine about something Nintendo was cooking up in Japan. But it remains to be seen whether or not the next console generation will convince as many people to pull out their wallets on launch day as this previous one did. It is challenging to convince gamers that they need new hardware simply because it makes things easier for developers.

64. The Graveyard

I spent the last month doing an art pass on another one of the major areas in the game. This area is called “The Graveyard” and consists of more or less what the name describes. Some of the earliest art tests for the game were done with this area, so I thought it would be fun to compare a very early version of the game’s aesthetic with its current look. (Some of the details in the old version have been hidden to avoid spoilers.)

I would say that the new version is much improved. The old version often did not read as a graveyard at all, probably because the only headstones featured are the strange puzzle ones.

This area exists in a snowy biome, which is different from anything I’ve built for the game so far, and required a lot of custom artwork. It was a bit of a challenge, but things went much more smoothly than I expected. It’s sometimes hard for me to believe that I’ve gotten competent enough at doing art for that to happen. I don’t really consider myself much of an artist.

One of the more fun technical bits to get right was the snowy grass, which uses a procedural technique to generate the patterns of snow and exposed grass. You can see a demonstration of its flexibility below:

One of the other things I did this month is much more secret, but is something that I’ve been planning for a long while. I finally figured out a way to implement the idea. I can’t tell you much more, but here’s a sneak peek:

I’m quite happy with the way this interior came out visually, but since I chose a different approach to lighting it, it has me thinking I need to rework some of the other interior areas to match this level of fidelity.

For the next month, I still have more areas that need artwork, but I also have some puzzle design stuff that I need to finish up. Mostly mix-in puzzles, but I also need to get started on prototyping some gameplay concepts for the ending of the game. Somehow after all this time, I still don’t have a proper ending on the game.

63. The Gardens

This last week I finished the art pass on the area I mentioned in the last devlog post. I settled on “The Gardens” as the name for the area, as it features flower gardens near the entrance, a vegetable garden to the south, and an abandoned grape trellis to the east.

This marks the end of a month during which the game was unplayable, since I have to break a lot of things when revising areas. Normally these types of revisions don’t take a whole month, but the metapuzzle concept here (mentioned in the last update) was quite technically complex, and I ended up having to pare back some parts of the design due to playability problems.

I still don’t want to spoil how the metapuzzle functions, but essentially, it was possible for the player to get the puzzle into an unsolvable state. I tried to resolve this in a very lightweight way that would (in theory) add some additional depth to the puzzle. However, once I implemented it, I realized that my solution didn’t actually work, and so I had to choose a more aggressive fix that eliminated some of the additional depth I had hoped to have.

I am still hopeful that I will come up with a way to add that depth back in, but I felt I had spent enough time on the problem for now, and the metapuzzle is in “good enough” shape. Certainly an improvement on what was there before.

So I’ve finished up six of the eleven major areas in the game, and the next obvious step would be to begin artwork on one of the others. However, I’m a little reluctant to immediately break the game again, so I plan on switching gears to puzzle design for a bit. I still need to take a look at improving the panel puzzles for the Gardens, so I may do that this week in addition to some of the other puzzle design tasks on the list.

62. Metamusings

Four of the ten major sub-areas of the game involve symbols embedded into the puzzle panels, whose meaning the player figures out over the course of the area. In each of those areas, there is also a metapuzzle used for navigating that area. These metapuzzles require the player to solve multiple puzzle panels which interact with each other in some way. Each area has a unique theme to its metapuzzle, connected in some way or another to the general theme of the area’s mechanic.

I’ve had three of these metapuzzles designed for a while now, but I’ve been spending the past week or so designing and implementing the fourth one. I’m also in the middle of ripping up the area containing it and doing a proper art pass on it, so the following screenshot is an example of something super work-in-progress:

I don’t want to explain too much of how the puzzle works, so as not to spoil things, and because I am likely to change some details as I continue implementing it. But the basic idea is that you can move the bridges between each circular platform and you use that to get around the area.

It’s been a bit of a technical challenge to get this puzzle implemented, and as of this writing, there are still a few features that are not set up yet. Part of the challenge here is that so much state can be changed across the metapuzzle, and it’s important to keep all of the state in sync.

The overall theme for the area is a terraced garden surrounding a dry lakebed. The major features of the area are still work in progress as well, but I have begun some work on the artwork for the entry terraces, which you can see below:

These will require some more detailing work, but the overall structure is more or less correct. The design of the stepped gardens is loosely based on the Hyakudanen at Awaji Yumebutai. In the game, the player will start at the top of this stairwell as the entryway into the area. Once they get to the bottom, they will find some puzzle panels opening access into the metapuzzle section.

61. Testing my Patience

I’m mostly kidding about the title, but I’ve been putting the game out to some new testers these past few weeks (which is why I missed December’s devlog update).

Whenever I stop working on new things and take some time to reflect, I often get a bit depressed. I mentioned this feeling during the previous round of testing, and it’s a similar feeling during this round. I feel accomplished in that I’ve made enough progress for the game to be worth evaluating again, but there’s still so much to do that it’s overwhelming to think about.

Looking at things at a high level, I’ve done a “good enough” art pass on five of the eleven areas in the game (although there are some I’d still like to make major tweaks to).

So that means I’m almost halfway done with the art, which is pretty good progress. But it also means that I still have the majority of the game to finish up.

It’s hard to make estimates on how “close to done” the design is because progress there is much less straightforward. With the art, it’s probably good enough to have art that looks decent and isn’t overly confusing. But with the design, there’s no “right” way for anything to be. It’s down to my personal decisions about what types of puzzles to focus on and how much should be required to progress in each area.

I also still haven’t put in anything resembling an ending, and I’m not even sure what that might entail. I think it’s rather hard to make satisfying endings to puzzle games. If the puzzles are too hard, it can wreck the pacing and just make the ending feel like a chore. Alternatively, if the ending is too easy, it can feel anticlimactic. Usually what works best is something that feels like a large change of pace from what came before.

To that end, I have a few ideas, but they are underdeveloped at the moment.

I’ll close out this post with a short clip of one of the areas that I recently redid the art for. It’s not entirely finished, but I’m happy with how it’s come along.