An experiment: writing code without feedback

Posted by Zeh Fernando on 1/October/2014 at 12:51

Computing has come a long way in the past few decades. Originally, programmers wrote their code by hand, on punch cards, and were only later allowed to run it on actual machines. The end result was a process where writing code and debugging were completely separate steps.

I never used punch cards (I’m not that old), but I had similar experiences. In high school, I didn’t own a PC, but I had about 2 hours of lab time per week. I normally stretched that to 4 hours, but for a nerdy teenager with lots of programming ideas, that wasn’t enough time. This made me maximize the quality of the time I actually spent with the school’s computers: I would spend the whole week writing code on paper, and when I got to the computer lab, I could spend the whole time actually typing the code rather than thinking about what code to write, or debugging. I’d then run the program, print a list of the error messages I got, print the source for the whole program, take that home with me, and spend the next week debugging the code by hand, doing corrections, and writing new code for the next time I had access to the computers.

That is obviously a very alien concept today. We’re constantly writing code and running it to see if it works, and we get immediate feedback from our code writing tools so simple mistakes can be quickly fixed. In fact, if something takes more than a few seconds to compile it frustrates me to no end, and a slow environment – for example, API calls that time out – can drag my productivity to the ground just out of sheer impatience.

Recently, I started an experiment in doing something a little bit different.

I have a project that I’m doing at home using Unity, C#, and Visual Studio. Those are great tools, and the project is evolving well.

However, from time to time, I find myself having just a few minutes of free time (e.g. just before meetings), or conjuring some idea that I want to quickly implement in this project.

In these cases, rather than firing up Visual Studio and Unity to write the code out, I’ve started editing the project’s source files directly, with minimal feedback. That means opening the files in a simple text editor (Notepad++) and writing code with no regard for how (or even whether) it is running. Of course, I try to write something that I believe is correct, and that hopefully will run at once, but the interesting thing about this approach is that I don’t care about errors that much. I am not checking them, either via the (helpful) IDE checks or by running the code. I am writing code, not fixing it.

The result of this approach is a deeper concentration on what I’m writing, rather than on how it’s running.

Of course, this approach requires a second step, which is getting home and actually testing the project. That means running my environment – Visual Studio and Unity – and fixing any errors that show up in what I tried to accomplish. These are typically plentiful; sometimes, really stupid syntax errors creep in too.

Still, it has been very interesting seeing how much can be accomplished with this system. By keeping my focus on writing code only, it’s easier to do it in bursts of a few minutes at a time, as if I’m writing the blueprint of something and delegating the responsibility of testing and getting it to actually run to someone else. It also allows me to spend time dedicated to testing and fixing errors later, so I don’t have to constantly switch between different frames of mind: writing new code and testing it.

I don’t have any valid data points here. This is something I just started. Still, it’s something I definitely recommend other people try, as it’s giving me a new perspective on code writing and debugging.

Creating a tweening library for Unity

Posted by Zeh Fernando on 29/September/2014 at 10:02

A week or so ago, I started toying with the idea of creating a tweening solution for Unity. Trying experimental tweening solutions is the kind of thing I always find myself spending time on: in the work I usually do, tweening is something I constantly have to deal with, and I strive to find the most efficient syntax that fits my coding style. That’s one of the reasons why I like reinventing the wheel even when mature solutions already exist, or at least the excuse I like giving myself for doing so.

As with most visual platforms, the concept of tweening is not new to Unity. There are several different solutions available for animating UI elements on the platform, from LeanTween to iTween to the (more popular) HOTween and its successor DOTween. Still, it is something worth exploring, especially as I’m still learning the platform and the language.

Tweening values in Unity presents two interesting challenges when compared to the syntax one would use for tweening in, say, JavaScript or ActionScript.

The first challenge is the language itself. While Unity also supports a JavaScript-based language normally called “UnityScript”, C# is the more popular choice among developers, and the design of that language makes some things a little bit trickier to write. To illustrate, consider this Tweener call:

Tweener.addTween(myMovie, {x:10, y:10, time:1, transition:"linear"});

In the above line, the second parameter is an Object (sometimes called a map) that can contain any number of properties and, if desired, even other objects. Tweener uses that for optional parameters and, indeed, for any number of properties that one wants to tween. The names of those properties are not known beforehand; they’re not hard-coded in the library, but instead read at runtime.

That is very easy to read and understand, but that syntax doesn’t work in C#. While the language has many advanced features over ActionScript 3 (or JavaScript), it forces a certain strictness on the way methods and functions are called. The closest equivalent of an AS3 Object in C# would be a Hashtable, and its inline (collection initializer) syntax would look something like this:

Tweener.addTween(myMovie, new Hashtable() { {"x", 10}, {"y", 10}, {"time", 1}, {"transition", "linear"} });

While close in structure, the syntax is a little bit more verbose, harder to write and read, and more error-prone. While not a total disaster, it doesn’t make the act of writing the code particularly swift.

The second challenge for tweening engines in Unity is the way the platform itself works. In Unity, many properties expose value-type copies through getters and setters, and cannot be changed directly. Consider the scale of a GameObject/Transform object, controlled by its localScale member, which is a Vector3 instance. Using C#, you can set the new scale like so:

transform.localScale = new Vector3(2, 2, 2);

This scales your object up to 200% of its original size, in all dimensions. Now, imagine you want to change the scale of your object in the X axis only. This may look like a possible solution:

transform.localScale.x = 2;

This code, however, is rejected by the C# compiler in Unity. The reason is that Vector3 is a struct: reading the localScale property returns a copy, and changes to that copy would never be applied back to the object. The correct solution would, then, look like this:

Vector3 newScale = transform.localScale;
newScale.x = 2;
transform.localScale = newScale;

Or if you want to be succinct, and can afford creating a new Vector3 instance:

transform.localScale = new Vector3(2, transform.localScale.y, transform.localScale.z);

What this means is that you cannot tween values of certain object properties directly. This is similar to the problem of tweening filters such as blur in ActionScript 3: you may change one of the filter values, but unless you re-apply it to the Sprite that holds it, you won’t see any change. That’s why tweening engines in AS3 had to work around that limitation with additional functionality. In Unity’s case, you will sometimes even run into compilation errors when trying to modify some of these properties directly.
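Tween engines typically automate the workaround with a read-modify-write cycle performed every frame: read the whole value, change one component, and assign the whole value back. Here is a minimal sketch of that idea, in Python for brevity (the struct-copy limitation itself is C#-specific, and all names here are illustrative):

```python
# Illustrative sketch of the read-modify-write pattern a tween engine
# must use for properties that return value-type copies.
class FakeTransform:
    """Stand-in for Unity's Transform; local_scale mimics a Vector3."""
    def __init__(self):
        self.local_scale = (1.0, 1.0, 1.0)

def tween_scale_x(transform, target_x, steps):
    """Interpolate only the X component, writing the whole value back."""
    start_x = transform.local_scale[0]
    for i in range(1, steps + 1):
        t = i / steps
        x = start_x + (target_x - start_x) * t
        _, y, z = transform.local_scale    # read the full current value
        transform.local_scale = (x, y, z)  # assign the full value back

t = FakeTransform()
tween_scale_x(t, 2.0, 10)  # scale X from 1.0 to 2.0 over 10 steps
```

A real engine would perform one such step per frame from its update loop rather than in a plain for loop.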

Put together, those two challenges mean that one cannot take the syntax solutions matured in JavaScript and ActionScript and apply them to Unity. The platform requires solutions of its own.

The aforementioned iTween adopts two different syntaxes that somewhat resemble the way classic ActionScript 3 libraries deal with tweening, with all the problems that entails:

// Scale to 2x in 1 second
iTween.ScaleTo(gameObject, new Vector3(2, 2, 2), 1);

// Scale to 2x in 1 second, with more options
iTween.ScaleTo(gameObject, new Hashtable() { {"scale", new Vector3(2, 2, 2)}, {"time", 1}, {"easetype", iTween.EaseType.easeOutBack}, {"delay", 1} });

// Same as above, using a helper function
iTween.ScaleTo(gameObject, iTween.Hash("scale", new Vector3(2, 2, 2), "time", 1, "easetype", iTween.EaseType.easeOutBack, "delay", 1));

LeanTween takes a somewhat different approach:

// Scale to 2x in 1 second
LeanTween.scale(gameObject, new Vector3(2, 2, 2), 1);

// Scale to 2x in 1 second, with more options
LeanTween.scale(gameObject, new Vector3(2, 2, 2), 1).setEase(LeanTweenType.easeOutBack).setDelay(1);

This is a very interesting syntax: it allows the library to use strongly-typed methods for all kinds of actions. While it demands a number of methods to be added to the library’s interface, errors due to mistyping are minimized and developers can rely on auto-completion for all their needs. In my opinion, it’s a much more elegant solution both for writing and reading code. Also, while it requires specialized method calls to tween some of the properties of a GameObject, this is an inevitable side effect given the way Unity works.

DOTween adopts a similar syntax with an interesting twist: it uses a C# feature called extension methods to add methods to the objects themselves, creating a syntax that reminds me of MC Tween:

// Scale to 2x in 1 second
transform.DOScale(new Vector3(2, 2, 2), 1);

// Scale to 2x in 1 second, with more options
transform.DOScale(new Vector3(2, 2, 2), 1).SetEase(Ease.OutBack).SetDelay(1);

Indeed, this “chaining” of method calls has been popular among libraries for other platforms for a while (jQuery was the first library I saw using it). Some tweening solutions use this approach not just to set optional parameters for animations, but to create complex sequences in a very elegant way. Take TweenJS’s approach:

// Wait 0.5s, change alpha to 0 in 1s, then call a function
Tween.get(target).wait(500).to({alpha:0, visible:false}, 1000).call(someFunction);

Which, to me, looks like a solution that is as elegant as it gets.

When deciding on a tweening solution for Unity, I wanted something that was easy to write and read, fully type-safe, and with easy auto-completion. The two latter points are a given based on the language of choice; the two former are a little bit subjective and may require some creativity. I also wanted to explore bigger tweening chains that operated as animation sequences, something that in a solution like Tweener would require several lines of code.

The solution I decided on looks like this:

// Scales a gameObject to 200% in the Z axis in 0.2 seconds using an EaseOutExpo equation
ZTween.use(gameObject).scaleTo(new Vector3(1, 1, 2), 0.2f, Easing.expoOut);

What the code does is generate an object instance (via the use method) that can then have commands issued to it. Each new method returns the same object, so calls can be chained and new methods can be added. For example:

// Scales a gameObject to 200% in the Z axis, then 200% in the Y axis, then back to 100%
ZTween.use(gameObject).scaleTo(new Vector3(1, 1, 2), 0.2f, Easing.expoOut).scaleTo(new Vector3(1, 2, 2), 0.2f, Easing.expoOut).scaleTo(new Vector3(1, 1, 1), 0.2f, Easing.expoOut);
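What makes this chaining possible is simply that every method appends a step to an internal queue and then returns the same instance. I’m not showing ZTween’s actual internals here; this is only a rough sketch of the pattern, in Python for brevity, with made-up names:

```python
class TweenQueueSketch:
    """Minimal fluent tween queue: every method returns self."""
    def __init__(self, target):
        self.target = target
        self.steps = []  # ordered list of queued commands

    def scale_to(self, scale, duration):
        self.steps.append(("scale", scale, duration))
        return self  # returning self is what enables chaining

    def wait(self, duration):
        self.steps.append(("wait", None, duration))
        return self

    def call(self, fn):
        self.steps.append(("call", fn, 0.0))
        return self

# Chained calls build an ordered sequence on a single instance
seq = TweenQueueSketch("cube").wait(1.0).scale_to((1, 1, 2), 0.2).call(print)
```

An update loop then consumes the queue in order, one step at a time.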

And additional methods can exist to enforce initial values, if necessary:

// Scales a gameObject from 100% to 200% in the Z axis
ZTween.use(gameObject).scaleFrom(new Vector3(1, 1, 1)).scaleTo(new Vector3(1, 1, 2), 0.2f, Easing.expoOut);

What’s interesting about all this chaining is that the concept of events is more or less ignored – you don’t have events for the beginning and ending of a particular tween, but instead you can add calls to the method chain itself. For example:

// Call functionA(), scales an object, then call functionB()
ZTween.use(gameObject).call(functionA).scaleTo(new Vector3(1, 1, 2), 0.2f, Easing.expoOut).call(functionB);

And parameters can be passed to those calls via lambdas, another C# feature:

// Scales an object, then writes to the console
ZTween.use(gameObject).scaleTo(new Vector3(1, 1, 2), 0.2f, Easing.expoOut).call(() => Debug.Log("Done animating"));

And delays are another method, rather than a parameter:

// Wait 1s then scale an object
ZTween.use(gameObject).wait(1).scaleTo(new Vector3(1, 1, 2), 0.2f, Easing.expoOut);

Custom numeric properties get a tweening object of their own, created via a reference (when the property is a plain field):

// Transition "something" from the current value to 1
ZTween.use(ref something).valueTo(1, 0.2f, Easing.quadOut);

Or via lambdas passed as get and set parameters (which works for plain fields, getter/setter properties, or get/set method pairs):

// Transition "something" (a numeric property or a getter/setter) from the current value to 1
ZTween.use(() => something, val => something = val).valueTo(1, 0.2f, Easing.quadOut);

// Transition using getSomething() and setSomething() from the current value to 1
ZTween.use(getSomething, setSomething).valueTo(1, 0.2f, Easing.quadOut);
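At its core, a value tween like this only needs two callables: one that reads the current value and one that writes the interpolated value. Here is a sketch of that loop with a quadratic ease-out, again in Python with illustrative names (ZTween’s actual internals may differ):

```python
def quad_out(t):
    """Quadratic ease-out: fast start, slow finish."""
    return 1.0 - (1.0 - t) * (1.0 - t)

def tween_value(get, set_, target, steps, easing=quad_out):
    """Drive any property through a getter/setter pair of callables."""
    start = get()
    for i in range(1, steps + 1):
        t = easing(i / steps)
        set_(start + (target - start) * t)

# Tween a plain variable through closures, as in the lambda example above
state = {"something": 0.0}
tween_value(lambda: state["something"],
            lambda v: state.update(something=v),
            1.0, 10)
```

This is also why the getter/setter overload is the most general one: fields, properties, and method pairs can all be wrapped in two callables.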

This is not a full solution by any stretch of the imagination; it currently only works for scaling, translating, and value transitions (for real world use, I’d suggest DOTween instead). However, this is a solution I’m enjoying building and what I’ll be using in future projects, including a little game test I’m building:

Again, this is just an experiment, but for the curious, source code is available on GitHub.

New usfxr release (1.3)

Posted by Zeh Fernando on 8/August/2014 at 17:02

I have just published release 1.3 of usfxr on its GitHub repository. This is a small update, but it provides some important fixes (especially when publishing on mobile platforms) and adds the option to export your audio as WAV files (similarly to other SFXR ports). I have to thank Michael Bailey and Owen Roberts for their help with bug detection and fixing in this version.

The Asset Store version, which I originally expected to go live some time next week, is also available now.

Presentation: Using Unit Testing

Posted by Zeh Fernando on 7/August/2014 at 14:35

Every Thursday morning at Firstborn is “Scrummy Thursday”. For one hour, Firstborn developers present random topics to each other. These are usually little, 15-minute presentations on projects we’ve worked on or things we discovered, meant as a way to share knowledge among the development team. It’s a nice idea, and it has been working well for a while.

Today I gave a little presentation on how I used Unit Testing to help me refactor a part of Pepsi Spire (my most recent project) with some confidence. I’ve made the slides available publicly, and you can check them below.

Click the slideshow and press “S” to show the presentation notes, which will help make sense of what’s being displayed.

The content of the presentation is not anything groundbreaking (there are better unit testing introductions out there), but I thought I’d share it nonetheless. For the longest time, I’ve looked at unit testing with some contempt; it normally wouldn’t work with the kind of UI-heavy, animation-heavy, short-lived code I had to create. Still, this was an instance where unit testing helped me avoid trouble and surprises, and save time in the long run. And while this particular project was made in ActionScript 3, this is a tale that can be repeated on any given platform.

usfxr now supports BFXR’s advanced audio synthesis features

Posted by Zeh Fernando on 14/July/2014 at 9:03

In preparation for the next Ludum Dare, I have finished adding all advanced sound synthesis features first introduced by BFXR to usfxr, my own Unity port of the SFXR game audio synthesis engine. The new version is 1.2 and is available as a zip download on the GitHub usfxr repository (the asset store version will be updated later this week).

(If you don’t know what usfxr or SFXR is, this post is less cryptic)

The slightly updated interface for usfxr 1.2

At first, I wasn’t so sure I’d like to add the BFXR features to my port; I have to confess I always saw BFXR as a rogue fork of SFXR, and the fact that parameter strings were incompatible between the two projects always rubbed me the wrong way. However, after testing BFXR for a while, I came to really like its original features, and decided to adopt them in usfxr. This is what this update is about.

The new features are as such (as described by BFXR’s interface):

  • New wave form types
    • Triangle: robust at all frequencies, stands out quite well in most situations, and has a clear, resonant quality
    • Breaker: a little bit more hi-fi wave type; like a smoother, slicker triangle wave
    • Tan: a potentially crazy wave, tends to produce plenty of distortion
    • Whistle: a sine wave with an additional sine wave overlayed at a lower amplitude and 20x the frequency; it can sound buzzy, hollow, resonant, or breathy.
    • Pink noise: random numbers with a filtered frequency spectrum to make it softer than white noise
  • New filters
    • Compression: pushes amplitudes together into a narrower range to make them stand out more; very good for sound effects when you want them to stick out against background music
    • Harmonics: overlays copies of the waveform with copies and multiples of its frequency; good for bulking out or otherwise enriching the texture of the sounds
    • Bit Crusher: resamples the audio at a lower frequency, for that extra retro feeling
  • Expanded pitch-jumping abilities; good for arpeggiation effects

On top of that, this new version is still compatible with previous versions, as well as SFXR itself; instead of starting anew and breaking compatibility, usfxr accepts both standard (SFXR/as3sfxr style) parameter strings, as well as the new BFXR parameter strings. This means old code will still work, but you can also copy & paste effect parameter strings directly between usfxr’s Unity interface and BFXR.
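I won’t dig into how usfxr actually tells the two formats apart here, but since both are comma-delimited strings and BFXR’s format carries more fields than classic SFXR’s, a simple field count is one plausible discriminator. The sketch below is purely illustrative, in Python (the 24-field threshold is my assumption, not the library’s actual rule):

```python
# Hypothetical sketch: distinguish parameter-string formats by field count.
SFXR_FIELDS = 24  # assumed size of a classic SFXR/as3sfxr settings string

def detect_format(settings):
    """Guess whether a settings string is SFXR-style or BFXR-style."""
    return "sfxr" if len(settings.split(",")) <= SFXR_FIELDS else "bfxr"

# A classic as3sfxr-style settings string has 24 comma-separated fields
fmt = detect_format("0,,0.032,0.4138,0.4365,0.834,,,,,,0.3117,0.6925,,,,,,1,,,,,0.5")
```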

There are a few additional non-core BFXR properties that I will have to add support for in the future, specifically property locking for mutation and the UI. This should be added soon along with other UI updates.

Round-up of usfxr uses from around the web

Posted by Zeh Fernando on 6/June/2014 at 17:09

Now that usfxr has an in-editor window for audio generation right inside Unity, I’m considering it stable. I’ll probably do a few benchmarks in the future to improve any performance bottlenecks I’m able to identify, and maybe add a few more examples to the repository, but for all intents and purposes it is production-ready.

Given that, I figured I’d do a search to see if anyone was using the library, and how. I found quite a few examples, so I’d like to share some picks here.

D-Lask has been posting a few interesting videos of Unity experiments on his Vine stream, including one using usfxr with the PlayStation Move (enable audio to hear it):

Love Connection is a Ludum Dare game by thecodermonkey using the “Minimalism” theme. It’s a great, great entry for Ludum Dare, and it’s not surprising it ranked very well (#8) in the overall Ludum Dare results.

1Fuel is an old Ludum Dare entry by unitycoder_com under the theme “You Only Get One”. It’s a simple game, but one with an interesting gameplay mechanic.

The same developer has a blog with a lot more small examples and tests that use usfxr.

Super Minimalistic Nuclear Space Potatoes! is a Unity game by Hatscat also for Ludum Dare under the “Minimalism” theme.

And finally, a while ago, Jorge Garcia tweeted a picture of the “SpaceGame” example running on a PS Vita, which makes me pretty giddy:

Awesome seeing other people trying it out!

Create 16-bit sound effects right inside Unity with usfxr

Posted by Zeh Fernando on 4/June/2014 at 15:49

It’s been a little more than one year since I introduced usfxr, a Unity/C# version of the well-known real-time procedural game audio generator sfxr. What it has been lacking for quite some time was the ability to generate sound effects right inside the Unity interface; developers were forced to visit an online source like as3sfxr to generate their audio parameters (as a string), and only then use usfxr to play the audio in Unity games.

Well, no more. I finally got around to improving the in-editor window first created by Tiaan Geldenhuys, and now you can use a Unity tab/window to generate audio parameters, and then copy the parameters so you can use them in code. The sound generating window looks like this:

usfxr Sound Generator

It also plays audio automatically every time one of the parameters is changed, so it’s easy to mess around with it until you get the sound effect you want. The generator window also works whether you’re in play mode or not, so it should make it easier to create sounds on-the-go.

As a reminder, this is how a usfxr sound is played in Unity by using a generated parameter string:

SfxrSynth synth = new SfxrSynth();
synth.parameters.SetSettingsString("0,,0.032,0.4138,0.4365,0.834,,,,,,0.3117,0.6925,,,,,,1,,,,,0.5");
synth.Play();

The new version of usfxr can be found on GitHub.

Implementing Kongregate’s statistics API in Unity using pure C#

Posted by Zeh Fernando on 7/May/2014 at 13:07

After creating an updated version of my latest Ludum Dare game, I decided to use it as an exercise in web publishing of games. To me, the interesting thing about publishing a game on a web portal – even one as simple and rough as one created in a little bit more than 48 hours – is that you can get a community of players to test, rate and give suggestions on your games, as well as get access to custom APIs for things like site-wide high-scores, achievements and statistics.

A website I decided to target for this experience was Kongregate, one of the biggest web game portals out there. A web gaming portal normally means “Flash games”, but like many others of its kind, Kongregate also accepts Unity games.

Kongregate also implements an interesting API that can be used by games published there. I believe the API is only officially supported for Flash and JavaScript, but Unity developers can still use it with the help of Unity’s external application interface (which allows for JavaScript calls from the Unity web plugin).

However, when reading on how to do this, I ran into two issues. First, most of the examples on how to use the statistics API from Unity are based on using JavaScript within your game (1, 2), while I was using C# instead; and second, and most importantly, all implementations depended on the oh-so-common pattern of creating an empty GameObject instance in a Unity scene and then attaching scripts to it (then getting a reference to it everywhere else via magic strings), rather than just a pure code-based solution. In fact, the API itself looks for a GameObject when returning the result of calls.

As much as I’ve been trying to give in to the proliferation of GameObjects that seem to be mandatory everywhere in Unity projects, to me the only solution was a clean C#-based implementation of the code – one that creates its own GameObject when necessary. The result is a single class that can be copied anywhere in your “Scripts” folder:

using System;
using UnityEngine;

public class KongregateAPI : MonoBehaviour {

	// Properties
	private bool _isConnected;
	private int _userId;
	private string _userName;
	private string _gameAuthToken;


	// ================================================================================================================
	// MAIN EVENT INTERFACE -------------------------------------------------------------------------------------------

	void Start() {
		_isConnected = false;
		_userId = 0;
		_userName = "Guest";
		_gameAuthToken = "";
	}

	void Awake() {
		// Instructs the game object to survive level changes
		DontDestroyOnLoad(this);

		// Begin the API loading process if available
		Application.ExternalEval(
			"if (typeof(kongregateUnitySupport) != 'undefined') {" +
			"    kongregateUnitySupport.initAPI('" + gameObject.name + "', 'OnKongregateAPILoaded');" +
			"}"
		);
	}


	// ================================================================================================================
	// PUBLIC INTERFACE -----------------------------------------------------------------------------------------------

	public static KongregateAPI Create() {
		// Create a game object with a reference to the API
		GameObject newGameObject = new GameObject("KongregateAPIObject-" + (Time.realtimeSinceStartup));
		KongregateAPI instance = newGameObject.AddComponent<KongregateAPI>();
		return instance;
	}

	public void OnKongregateAPILoaded(string __userInfoString) {
		// Is connected
		_isConnected = true;
 
		// Splits the user info parameter
		string[] userParams = __userInfoString.Split('|');
		_userId = int.Parse(userParams[0]);
		_userName = userParams[1];
		_gameAuthToken = userParams[2];
	}

	public void SubmitStats(string __name, int __value) {
		Application.ExternalCall("kongregate.stats.submit", __name, __value);
	}

	public bool isConnected {
		get { return _isConnected; }
	}

	public int userId {
		get { return _userId; }
	}

	public string userName {
		get { return _userName; }
	}

	public string gameAuthToken {
		get { return _gameAuthToken; }
	}
}

And it works like this:

// Create an instance of the API during setup in your game Main class
KongregateAPI kongregate = KongregateAPI.Create();

// Later, submit stats using it
kongregate.SubmitStats("high-score", 1000);
kongregate.SubmitStats("tanks-destroyed", 1);

I have also added this class and some instructions to a GitHub repository, just in case.

With this implementation, I was able to easily add statistics to the Kongregate version of the game, creating a “high score” table of sorts counting the number of moves performed by players prior to completing each level. I don’t really anticipate much competition for this. As an exercise in C# and Unity development, however, it was surprisingly pleasant and straightforward to get results.

50 years of BASIC

Posted by Zeh Fernando on 1/May/2014 at 11:59

The BASIC language is now 50 years old. TIME Magazine has a cool article on it and how it came to be.

When I started using computers, first with a ZX81 clone, then an Apple II clone, and then an MSX 1.0 clone, BASIC was the only thing I knew existed. It was synonymous with computers for me, and for the longest time it was the only programming language I knew, as I didn’t even know other programming languages existed. I have very fond memories of the Usborne books and their little robots teaching me about arrays. I still remember very distinctly the moment I discovered sprites in BASIC; you’d stop your program and the sprites would remain on screen, obscuring your code. I later fell in love with GWBasic, learned assembly through it, discovered how to better engineer programs with QBasic, and learned to create applications with QuickBasic. I’ve never owned an Amiga or a Commodore 64, but I remember finding old magazines dedicated to these platforms (with program sources) and dreaming of the possibilities. One of the reasons I learned English without consciously trying was spending so much time reading and re-reading those magazines. BASIC was the stepping stone to a dream world.

I used C and Turbo Pascal around the same time I was learning GWBasic, and while those had their advantages – I especially liked Turbo Pascal for its speed – I kept coming back to QBasic just because things were so much easier and quicker. It remains the only language or platform where I’d do development and debugging with breakpoints and on-the-fly changes. The QuickBasic IDE had an awesome help system too, something that made me realize a good reference is fundamental for development and that you can’t know it all off the top of your head. For the longest time, it was the platform I knew and used the most, and it’s what I used even for large database merging when I started working professionally, even though faster runtimes existed: it was just much easier to tinker with. And despite what other well-known names might think of the language, it was the gateway drug that took me to greener pastures. BASIC wasn’t perfect, but when used well (especially with QBasic/QuickBasic), it was beautiful. I think the same can be said of most programming languages or platforms.

I don’t think BASIC or any of its alternatives are the best language for first-time programmers anymore; I think Processing is a much better choice for a number of reasons. Still, the immediacy of BASIC is what got me interested in this whole game: I’m not sure I’d have felt the same way with some other platforms that required a massive bootstrap before I saw something on the screen.

So, thanks, BASIC.

Ludum Dare 29 Post Mortem

Posted by Zeh Fernando on 28/April/2014 at 18:14

Once again, a Ludum Dare game compo has ended. This means I spent this past weekend creating a game from scratch, and even though it’s a simple thing, I think this is the first time I can say I created a more well-rounded game experience for Ludum Dare.

The result is called Escape Enclosures Expeditiously (or Escape Drill in the improved, post-Ludum Dare version), and it’s a simple puzzle-like isometric game where the player has to reach the exit of each map without being touched by any of the enemies.

Escape Enclosures Expeditiously

It’s a short game – only 3 levels (plus an ending level). Here’s a video of a full playthrough with no deaths.

But of course, this being a post-mortem, what follows is some more information about the game development process.

First implementation of the tile terrain game object

I started development without much of an idea of what I wanted to create. My initial approach – which is fast becoming my common approach for Ludum Dare compos – is to just try something different for the sake of learning, rather than actually trying to create a full fledged game. This means that my emphasis is normally on the technical side of things – as long as it’s new to me – rather than on the fun. It sounds harsh, but that’s what makes it fun to me.

Getting tile height and color calculation working

This time around, I had my mind set on using Unity to create a custom level editor to make game creation easier. I was inspired by several different things: Hitman Go, a surprisingly simple but ingenious twist on the Hitman series; isometric turn-based games like Final Fantasy Tactics, Tactics Ogre, and Disgaea; and a series of tutorials by someone recreating Doom’s classic E1M1 level in Unity using a plugin of some sort, something that showed me how powerful custom editors can be in Unity.

Allowing tile heights to be edited in the 3d view

My goal, however, was not really to create a game in itself, but to learn how to extend Unity in a way that allowed that kind of map to be created more easily. I ended up with enough time to actually create some gameplay, but it’s almost a side effect.

Painting tiles with different surface types

What went right

Using Unity: in a previous Ludum Dare post mortem I mentioned Unity as one of the sore spots of my experience (although it doubled as one of the pros too). This time around, it shows exclusively as a positive point. I was able to leverage my past experience with the platform (as little of it as I have), work on top of what I learned (and side projects such as usfxr), and basically spend more time implementing things rather than learning how to implement them.

Design of the first level

That’s not to say there wasn’t much to learn. Quite the opposite. At any point in development, I had dozens of browser tabs open with the most random items from Unity’s references or random tutorials, blog posts, and question/answer pages out there.

Testing enemy models

What’s more, my conclusion is that Unity is pretty powerful for creating custom editors and elements. I can see how the ability to customize your working environment can be a big boost for developers working on a game, especially for members dealing with content creation such as level design. Even if my level design solution was pretty ghetto (I had the worst method for painting tiles with different terrain types, for example), it was still a pretty important time saver.

Implementing movement and enemy AI

I still feel a little weird with the platform. Its emphasis on what I can only describe as concrete elements, such as adding scripts to physical objects and using the 3d view for everything, is still what feels strange to me. That, and maybe the emphasis on global access to everything from everything and a lot of helper functions to query the level elements. However, I believe I’m slowly learning to ignore my impulse for abstractionism and just get stuff done as inelegant as I might think it is. My hope is that with time I’ll understand what’s the actual ethos of the platform for performance and correctness, but I know I’m still far from that point.

Making multiple levels possible

Using simple assets: this time I purposely used very simple assets. The only texture used in the game was a 256×256 noise texture quickly created in Photoshop, and all the 3d models were simple boxes spliced and reshaped into basic shapes using Blender. I get very easily distracted into doing mundane tasks such as making sure none of my vertices are duplicated and everything invisible has a proper name or something of the sort, so I’m happy I managed to not spend too much time doing complicated art. I only had to deal with broken normals twice!

The final game scene with all levels

Conclusion

Now, what went wrong? Nothing, I think. This was fun, and I learned a lot. Given more time for level design and testing, I’m sure I could come up with more, and better, puzzles. But that’s true of every Ludum Dare, so that’s something I can live with.