Interfaces

You know what’s really hard in game development? User interfaces. That doesn’t seem obvious: graphics, physics, advanced AI, procedural generation – these all require pretty advanced concepts, and user interfaces generally do not. But the fact is, I’ve seen good existing libraries for all of those things; I’ve never seen a good GUI library. And none of my fellow developers have, either. Every game GUI system out there is weird, quirky, slow, hard to use and/or badly coded. Sometimes all of the above. Here are the GUI systems commonly used with the Unity3D engine.

UnityGUI

Unity3D has a built-in system called UnityGUI. It’s pretty weird: the entire interface is created by code, and controls are function calls. Here’s what I mean: usually, GUI systems have a notion of a “control”: a button, a line of text, a checkbox, etc. These controls are created either in some kind of editor or directly in code, like this:

var button = new Button();       // create a persistent control object...
button.Width = 100;              // ...configure its size and label...
button.Height = 30;
button.Caption = "Click me";
button.Click += DoStuff;         // ...and subscribe a click handler

Then this “button” is placed somewhere, like in a window, and the GUI system takes care of drawing it in the right place, checking for clicks, and so on. You can still change the button after it’s created, e.g. resize it or change its color.
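
Since the button is a real object, any other piece of code can change it at any time, and the GUI system simply draws the new state on the next frame – continuing the hypothetical API from the snippet above:

button.Caption = "Clicked!";   // some unrelated game code updates the label later
button.Width = 200;            // ...or resizes it; no drawing code is involved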

In UnityGUI, though, there is no explicit button. Instead, the code calls a function that draws a button on screen immediately, and also checks for clicks. This makes the code dead simple:

// Inside OnGUI(): draws the button this frame and returns true if it was clicked
if (GUI.Button(new Rect(0, 0, 100, 30), "Click me"))
{
    DoStuff();
}

This is what UnityGUI looks like.

But while this is simple and easy to code, the approach has several problems. First, since there is no button object, there’s no easy way to change its size, text, or anything else dynamically. Second, it requires that the code that draws the button and the code that runs on a click are specified in the same place. This is called “tight coupling” in programmers’ parlance, and it’s a big no-no: it causes code to become tangled, intertwined, and ultimately unusable. And third, since all GUI elements are redrawn immediately, every frame, this approach tends to be quite slow.
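
Here’s a minimal sketch of what I mean by the coupling (DoStuff is just a stand-in for whatever the click should trigger): in UnityGUI, any “dynamic” state and the click handling have to sit right next to the drawing code:

string caption = "Click me";   // state the button needs lives beside the drawing code

void OnGUI()
{
    // Drawing, state changes and click handling are all coupled in one place.
    if (GUI.Button(new Rect(0, 0, 100, 30), caption))
    {
        caption = "Clicked!";
        DoStuff();
    }
}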

UnityGUI is so bad that even Unity Technologies employees advise against using it. It does have its uses, though: when you need to draw lots of mostly independent controls and don’t care much about visual polish or performance, UnityGUI really shines. That’s not the case in games, but it’s exactly what’s needed for game tools. My editors for procedural generation, game objects, settings, and lots of other stuff all use UnityGUI and I love it; but it can’t be used for the actual game.
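
To illustrate that tool use case – just a sketch of the pattern, not my actual editor code – a custom Unity editor window is little more than a pile of these immediate-mode calls (the script would live in an Editor folder):

using UnityEngine;
using UnityEditor;

// A minimal custom tool window: the whole UI is immediate-mode calls inside OnGUI.
public class SettingsEditorWindow : EditorWindow
{
    string settingName = "GravityScale";   // hypothetical setting, purely for illustration
    float value = 1.0f;

    [MenuItem("Window/Settings Editor")]
    static void Open()
    {
        GetWindow<SettingsEditorWindow>("Settings");
    }

    void OnGUI()
    {
        settingName = EditorGUILayout.TextField("Setting", settingName);
        value = EditorGUILayout.FloatField("Value", value);

        if (GUILayout.Button("Apply"))
            Debug.Log("Would apply " + settingName + " = " + value);
    }
}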

NGUI

This is what NGUI looks like.

Another GUI system commonly used with Unity is NGUI. It’s a relatively nice system, built on the common approach of creating controls in an editor (it basically reuses the Unity level editor). However, it doesn’t cut it for my needs. NGUI is based on the premise that all controls are created and laid out in advance, and the game might show or hide them, maybe with animations, but not create new ones. Not that dynamic creation is impossible in NGUI – it’s just not really thought through. In the same vein, NGUI offers only the most basic ways of laying out (positioning) controls on screen. That is enough for many interfaces, but Xenos is going to need more. I plan lots of different windows: inventory, character and NPC information, dialogs, crafting tables, etc.; NGUI’s simple layouts are not enough. Also, NGUI costs $95 – not much, but still an investment.

Other systems

There are some other GUI systems usable with Unity, but they’re less widely used and probably less functional. There are also two really advanced alternatives: Flash and HTML/JavaScript GUIs (basically, integrating a whole other renderer into Unity). These are nice, but they require the advanced capabilities of a Unity3D Pro license, which I don’t have (it costs $1500, and I’m not willing to spend that much just yet).

This leaves me no choice but to create my own GUI system. Now, I can’t really hope to best literally everyone out there; most probably, my GUI system will turn out to be bad too. What I do hope to achieve is a system that is good enough in the areas that really matter for this game, even if it’s shitty in some less important ones. Also, having written the GUI system myself means I’ll probably understand it really well and be able to fix things relatively easily. That’s not guaranteed, though.

XGUI

XGUI is the name of the system I ended up with. It’s not fully finished yet, but all the big stuff is in. I can create windows and controls in a visual editor. I can automatically generate “glue” code that makes using these windows easy and decouples control creation from use. I can create and change anything dynamically. I have a small but effective library of simple controls that can be combined into complex interfaces. And I have an automated layout system that adapts to the target resolution.
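
To give a feel for what I mean by “glue” code – the names below are hypothetical, not XGUI’s actual generated output – the generator could emit one typed wrapper per window, so game code never touches raw controls:

using System;

// Hypothetical sketch of a generated wrapper for an "Inventory" window built in the visual editor.
public class InventoryWindow
{
    public event Action<int> SlotClicked;     // game code subscribes here, far away from any drawing code

    public void Show() { /* ask the XGUI runtime to show this window */ }
    public void Hide() { /* ...and to hide it again */ }
    public void SetSlotCount(int count) { /* create or remove slot controls dynamically */ }

    // The generated part would forward events from the underlying controls:
    void OnSlotControlClicked(int index)
    {
        if (SlotClicked != null) SlotClicked(index);
    }
}

The point is that the generated class is the only place that knows which controls exist, so the rest of the game stays decoupled from the window’s internals.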

Basically, what’s missing is drag-and-drop support (it’s pretty easy to add) and a system for editing and playing GUI animations.
And, of course, there’s no nice artwork yet to actually show off the interface: for now, my whole GUI consists of differently colored boxes. I hope to enlist an actual artist’s help for this, so stay tuned. Meanwhile, here’s how the GUI looks now.

[Screenshots: the XGUI editor, and the GUI in-game]

Places to go, people to see

Activities

Last time, I was talking about the pathfinding algorithm in Xenos. But where would the NPCs go? For now, there aren’t many places for them to visit; just enough to demonstrate that they can do something and that the code actually works.

Here’s how it works. When the game generates a house, it marks some of its areas as activity zones. Right now there are three types of zones: farm plots outside are marked as farming zones, beds inside are marked as resting zones, and the whole house is marked as an idle zone. Then, when an NPC spawns, it checks all the zones around it and builds a list of all the activities it can do.
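
In code this can stay very small – a rough sketch with made-up names, not the actual Xenos classes:

using System.Collections.Generic;
using UnityEngine;

// Hypothetical types, for illustration only.
public enum ActivityType { Farming, Resting, Idle }

public class ActivityZone : MonoBehaviour
{
    public ActivityType Type;   // set by the house generator when it places the zone
}

public class NpcActivities : MonoBehaviour
{
    public float SearchRadius = 30f;
    readonly List<ActivityZone> available = new List<ActivityZone>();

    void Start()
    {
        // On spawn, collect every activity zone within reach.
        foreach (var zone in FindObjectsOfType<ActivityZone>())
        {
            if (Vector3.Distance(transform.position, zone.transform.position) < SearchRadius)
                available.Add(zone);
        }
    }
}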


An NPC has selected a farming zone and is “tending to crops”.

The idea is that during the game, NPCs would somehow select the most “preferable” activity, go to its zone, and play the required animations there. For example, at night resting in a bed is preferable; during the day, farming or some such. Right now, though, they just pick a random activity every few seconds… which is already surprisingly effective. Even three really simple activities make the village seem alive; adding more would probably be even better.
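
That placeholder behaviour really is about as small as it sounds – continuing the hypothetical NpcActivities sketch above, something along these lines (GoTo and PlayAnimationFor are stand-ins for the real movement and animation code):

// Continuing the NpcActivities sketch: pick a random known zone every few seconds.
IEnumerator PickActivitiesForever()
{
    while (available.Count > 0)
    {
        var zone = available[Random.Range(0, available.Count)];
        GoTo(zone.transform.position);     // stand-in for the pathfinding from the last post
        PlayAnimationFor(zone.Type);       // stand-in for the animation code
        yield return new WaitForSeconds(Random.Range(3f, 8f));
    }
}

Kicking this off with StartCoroutine(PickActivitiesForever()) from Start() would be all the “AI” a villager needs in this placeholder version.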

Locations

When I had a whole village of NPCs doing stuff (OK, pretending to do stuff), I wanted to add some other places. The village is far from done, of course, but I think the most interesting gameplay would happen outside, and I had no outside at this point.

For Xenos, I want to build a game world consisting of many discrete locations. There will be a short loading screen when moving between them: not because of any technical difficulty, but to keep generation simple, so that different locations don’t have to line up exactly. Also, having a clear line between “loaded” and “unloaded” locations simplifies the game mechanics: I don’t have to worry about some place suddenly being unloaded – that only happens as part of leaving the whole location behind.

Adding another location brought an unexpected difficulty: the whole procedural generation system is quite unwieldy when it comes to adding new content. For example, at first I created an asphalt tile and a concrete wall tile. Adding them to the game took maybe a couple of minutes, but to actually see them, I had to write a new map generation feature that builds something out of asphalt and concrete, and then use it in some bigger feature that creates a “town” location… that was unacceptably slow.

Location editor

Creating a small town map

So, in addition to procedural generation, I had to create an old-fashioned map editor, so that I can build locations and test new stuff quickly. Writing it took quite a long time, but it should save time in the long run.

Of course, I’m not dropping procedural generation from the game. Instead, I hope it will work well together with hand-made maps: the two systems are connected, so I can both use the procedural system to generate something in the editor and feed parts of hand-made maps into generation (e.g. the generator creates a town layout and fills it with hand-made buildings).
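
The mixing boils down to treating a hand-made map fragment as just another thing the generator can stamp into the world. Roughly like this – all the names (Map, MapFragment, BuildingPlot) are placeholders for the real classes, not the actual generation code:

using System.Collections.Generic;

// Hypothetical sketch: a procedurally produced town layout, filled with hand-made chunks.
public class TownGenerator
{
    // Map fragments authored in the location editor, e.g. individual buildings.
    public List<MapFragment> HandMadeBuildings;

    public void Generate(Map map, System.Random rng)
    {
        foreach (var plot in LayOutBuildingPlots(map, rng))              // procedural part
        {
            var building = HandMadeBuildings[rng.Next(HandMadeBuildings.Count)];
            map.Stamp(building, plot.Position);                          // hand-made part
        }
    }

    IEnumerable<BuildingPlot> LayOutBuildingPlots(Map map, System.Random rng)
    {
        // Decide where the buildings go: streets, plots, spacing... omitted here.
        yield break;
    }
}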

With the editor in place, I quickly added a few tiles and objects that can be found in a town, and built a little sample location. Next, I wanted to start adding actual gameplay, but there was still one system left to build: the user interface…