The Inner Product, November 2002
Jonathan Blow (jon@number-none.com)
Last updated 17 January 2004
 

Toward Better Scripting, Part 2


Related articles:

My Friend, the Covariance Body (September 2002)
Toward Better Scripting, Part 1 (October 2002)
 


Last month I proposed a scripting language that automatically maintains history about the behavior of its variables, so we can more easily write scripts that ask time-oriented questions. I implemented an array of frame-rate-independent filters for each variable. The filters converged at different speeds, and I used them to estimate the mean and variance of each value over time.
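
As a quick refresher, here is a minimal sketch of one such filter, assuming the exponential formulation from last month's column; the struct and its names are illustrative and may not match the sample code exactly.

#include <math.h>

// A minimal sketch of a frame-rate-independent history filter (illustrative;
// the sample code's actual implementation may differ).  The filter tracks a
// running mean and mean-of-squares, from which we can get the variance.
struct History_Filter {
    float time_constant;    // Roughly how many seconds of history we remember.
    float mean;             // Running estimate of the variable's mean.
    float mean_of_squares;  // Running estimate of the mean of the squared value.

    History_Filter(float seconds) : time_constant(seconds), mean(0), mean_of_squares(0) {}

    void update(float value, float dt) {
        // The blend factor depends only on elapsed time, so the result
        // does not change with frame rate.
        float k = 1.0f - expf(-dt / time_constant);
        mean            += k * (value - mean);
        mean_of_squares += k * (value * value - mean_of_squares);
    }

    float variance() { return mean_of_squares - mean * mean; }
};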


Target Applications

To experiment with the history feature, I chose three time-oriented scripting tasks. I'll run through them one at a time, discussing how they can be implemented with the history feature, versus how they would likely be done without the feature.

This month's sample code contains working implementations of all three scripts. However, the scripting language is still very simple, because I wanted to keep the sample code minimal and easy to understand. The scripts use a command/argument syntax with many restrictions; you might think of them as being written in a high-level assembly language. Keep in mind that their length is due to these syntactic constraints; the underlying semantics are much more powerful. In this article, I will use pseudocode to provide clearer illustrations.


Scenario 1: Dance Dance Revolution Commentator


Dance Dance Revolution (DDR) is a game where you have to press buttons at appropriate times. The game gives you a health bar, which we can conceptualize as a scalar between 0 and 1. When the health bar reaches 0, you die. A commentator says things about your performance, like "You're doing great!"

The problem with current implementations of DDR is that the commentator's words are chosen based on the instantaneous value of your health bar. So suppose you mess up and come close to losing, but then you start doing extremely well, and you're acing all the moves and you feel really good. But then the announcer says "You're not following the music!" because, even though you hit the last 20 targets perfectly, your health bar is still low. So the commentator bludgeons your feeling of satisfaction, and he just seems broken (because you were following the music perfectly when he said that!).

Really, we want the commentator to detect patterns in the player's performance, rather than relying on the instantaneous value of the health variable. Not only does this increase the accuracy of comments, but it expands the scope of available comments. So he can say "Great comeback! For a while I thought you were going to lose!", which would have been impossible before.

In the sample code, script1 implements such a commentator. The game engine provides an instantaneous "player_goodness" value, ranging from 0 to 1, based on how well you hit the last target. The scripting system automatically and invisibly filters this value over time. The script queries an estimate of how well you've been doing for the past 40 or so seconds, which it uses to output a Parappa-style "You're doing bad / okay / good / cool" message.

The script then asks for the derivative of the goodness, queried over a period of about 20 seconds. Combining this with the earlier goodness value, the script picks from a range of appropriate comments, using rules like "Was the player doing badly for a while, but is his goodness now increasing?"

The derivative is not directly maintained by the core system. When you ask for the derivative, the system approximates it by taking finite differences on the history slots for the variable in question.
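
For illustration, suppose (hypothetically) that slot i of a variable's history holds its mean over a sliding window of roughly slot_seconds[i] seconds; then the finite-difference approximation might look like this. The function name and slot layout are mine, not the sample code's.

// Hypothetical sketch of derivative-by-finite-differences over history slots.
// A mean taken over a window of length T lags the present by about T/2, so two
// means over windows of different lengths act like samples of the variable
// taken that far apart in time.
float approximate_derivative(const float slot_mean[], const float slot_seconds[],
                             int short_slot, int long_slot) {
    float lag_difference = 0.5f * (slot_seconds[long_slot] - slot_seconds[short_slot]);
    return (slot_mean[short_slot] - slot_mean[long_slot]) / lag_difference;
}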

In one interesting case, I wanted not only the mean value of the player's goodness, but the variance as well. I wanted the announcer to say "Great! You're unstoppable!" if you consistently perform well. One way to formulate this is to take many queries from the history, and ensure that they are all above some threshold. But it was more natural to formulate the question in terms of "Is the long-term goodness high, and has the goodness been stable?"

All of this was easy to perform due to the automatic history-keeping. Listing 1 shows a pseudocode fragment of the script.

How would a programmer typically implement this in the absence of a history feature? I think he would end up implementing an ad hoc averaging scheme somewhat like our history feature, but probably not so well-defined (not frame-rate-independent, or not providing estimates of variance). Sometimes programmers don't think of maintaining running averages at all; they resort to storing the player_goodness value for each frame in a circular array, which they then query. This gets extremely messy and almost never works properly, since a lot of weird code is needed to handle aliasing problems.
 

Listing 1:

recent_average = mean(player_goodness, 30); // Mean over 30 seconds
recent_change = mean(derivative(player_goodness), 20); // Mean of derivative over 20 seconds
if (recent_average < POOR && recent_change > IMPROVEMENT) {
    output("Keep going, you're starting to get it!");
}
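
The "unstoppable" check mentioned above might look like this, in the same pseudocode style; variance() is a hypothetical query analogous to mean(), and the window length and thresholds are made up.

long_term_goodness = mean(player_goodness, 40);     // Mean over 40 seconds
goodness_stability = variance(player_goodness, 40); // Variance over the same period (hypothetical query)
if (long_term_goodness > EXCELLENT && goodness_stability < STABLE_ENOUGH) {
    output("Great! You're unstoppable!");
}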


Scenario 2: Mortar Vehicle for RTS Game

A mortar vehicle fires ballistic projectiles over obstacles. Because the projectiles are slow, they are only effective when the target's position at projectile impact time can be predicted.

Game programmers usually implement a mortar firing heuristic in one of two ways. One way is to just fire at anything, and fire a lot. The mortar will miss much of the time, but at least there are a lot of cool-looking explosions. Recently, though, gamers have been clamoring for better AI; since the mortar's indiscriminate choice of targets is such an obvious example of dumbness, it would be nice to avoid it.

The second approach is to fire only at stationary or very slowly-moving targets, but this heuristic fails in many cases. Consider a soldier in a trench alternating between two gunning positions. Overall he stays in the trench, but his instantaneous velocity is high while he is switching from one position to another. If the mortar makes its firing decision while he is moving, it will not consider him a valid target. Furthermore, a player can exploit the heuristic, say, by placing many units on a tiny but fast patrol route. The units are staying within a small area, so the mortar could easily hit them; but their instantaneous velocities are large, so the mortar doesn't consider them.

Prior to this scripting language project, I (and, I think, most programmers) would try to solve this problem with some kind of bounding volume. I'd create a circle of a fixed size centered on the initial position of each unit in the game. Whenever the unit crosses the border of the circle, I'd reset the circle to the unit's current position. If a unit doesn't cross the border for some amount of time, it's considered "not moving much" and flagged as a mortar target.
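
For concreteness, a sketch of that pre-history approach might look something like this; the struct, names, and bookkeeping are mine, not from any particular engine.

// Hypothetical sketch of the reset-on-exit bounding circle described above.
struct Target_Circle {
    float center_x, center_y;  // Where the circle is currently anchored.
    float radius;              // Fixed size, tuned by hand.
    float seconds_inside;      // How long the unit has stayed inside the circle.
};

// Called every frame for each tracked unit.
void update_circle(Target_Circle *c, float unit_x, float unit_y, float dt) {
    float dx = unit_x - c->center_x;
    float dy = unit_y - c->center_y;

    if (dx*dx + dy*dy > c->radius * c->radius) {
        // The unit crossed the border: re-anchor the circle and start timing over.
        c->center_x = unit_x;
        c->center_y = unit_y;
        c->seconds_inside = 0;
    } else {
        c->seconds_inside += dt;
    }
}

// A unit that has stayed inside its circle long enough is "not moving much".
bool is_valid_mortar_target(const Target_Circle *c, float required_seconds) {
    return c->seconds_inside >= required_seconds;
}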

On a full-game scale, there are a few ways to organize this computation. The way most like the real world is for each mortar vehicle to compute its own bounding volumes around all targets in the world. Unfortunately, this is O(n²) in execution time and memory usage, so we would like something faster. The most natural solution from an optimization standpoint is for each unit in the game to manage its own bounding circle. But then the implementation of each basic game entity becomes more complicated, all for the sake of one specific entity type that most of the game has nothing to do with. If you have to add this kind of uncompartmentalized handling for even a third of the entity types in your game, things quickly become a mess. This also means that the game is, in a practical sense, hard to expand; if implementing a new unit typically requires changing the core entity movement code, the system is just not very modular.

A compromise between these two approaches is to implement a separate system, not associated with entities in the game world, that serves as a "mortar brain"; there's only one of these, or one per team. The mortar brain keeps track of all the bounding circles, and the individual mortar vehicles query the mortar brain to make their firing decisions. From a software engineering standpoint, this is cleaner than the uncompartmentalized version, but it's more work to implement than either of the previous approaches. If a substantial number of the unit types in your game require supplementary systems like this, your game's complexity (and implementation time and number of bugs) will rise substantially.

All of this is an example of a common software engineering ugliness. There's probably a name for it in some pattern book, but I will call it overextended responsibility. The requirement for efficient computation of bounding volumes creates complexity that pollutes the entire system.

In the end, none of these bounding-volume schemes are very good. If the projectile travel time differs substantially over the mortar's range, a fixed-size circle will not be good enough to segregate possible targets. In this case, to avoid the O(n²) solution, we must maintain several bounding circles of various sizes for each entity. The mess gets bigger, the firing heuristic is only accurate to the quantum between circle sizes, and it's hard to gauge the effect of that inaccuracy on gameplay and game tuning.

By altering the firing heuristic to use position history, many problems magically go away. We fire at a target if the ellipse representing its recent position variance is very small. Already we have a solution comparable to the bounding-circle version, but since the scripting system remembers histories for us, we get everything for free. We quantify the variance by looking at the ellipse's circumference and area, as discussed last month. Because our measurements are continuous, we don't have any quantization problems like the bounding circles had. We just ask whether the variance is small enough, based on the amount of time the projectile will take to get there.
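
In script form the decision might boil down to something like this (pseudocode in the style of Listing 1; variance_area(), flight_time_to(), acceptable_spread(), and fire_at() are hypothetical stand-ins for the queries and actions actually used in script2).

spread = variance_area(target_position, 10);  // Area of the position-variance ellipse over ~10 seconds
if (spread < acceptable_spread(flight_time_to(target_position))) {
    fire_at(mean(target_position, 10));       // Aim at the center of the area the target is staying in
}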

Given many targets, some of which are valid, we still need to choose one to fire at. So we still have to deal with an O(n²) problem, but it's much simpler since there's no bounding data to manage. A "mortar brain" that sorts targets by variance is so simple it's almost not worth thinking of as a separate system.

Interestingly, using position history solves some problems that the bounding volumes don't help with. Suppose we want to fire at targets that are moving at reasonable speeds, but in predictable fashions. In other words, we want to do dead reckoning. Most games simply extrapolate the instantaneous velocity of the target. Suppose you have a vehicle that is generally traveling west, but is constantly zig-zagging in an attempt to be evasive. If you sample the instantaneous velocity, the mortar shells will land to the northwest or southwest of the vehicle, whereas any dummy can see that they should be targeted to the vehicle's west. With position history, we easily query for the vehicle's average velocity over an extended period of time, which leads us to fire at the right spot.
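
Again in pseudocode, history-based dead reckoning might look like this; the window length is made up, and flight_time stands for however long the shell will take to arrive.

average_velocity = mean(derivative(target_position), 5);  // Average velocity over ~5 seconds
aim_point = target_position + average_velocity * flight_time;
fire_at(aim_point);  // Lands where the zig-zagging vehicle is headed on average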

The sample code script2 implements this. A target is controlled by the mouse pointer, and a mortar vehicle attempts to fire at it. The mortar shells are color-coded: white shells mean that the mortar sees the target traveling in a coherent direction and is dead-reckoning it; red shells indicate that the target's movement is too erratic to predict, but it is remaining within a confined area, so the shell is targeted at the center of that area. If the target is moving erratically over a large area, the mortar will not fire.

You can still confuse the mortar by moving in large but obvious patterns, like wide circles or other curves. That's because the current dead reckoning is only linear. You could implement nonlinear dead reckoning by making several queries of the target's position and curve-fitting them; with the finite-differencing operator mentioned in the previous section, the implementation could be even simpler than that.
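
For example (hypothetically), you could query an average acceleration as well as an average velocity and extrapolate quadratically, rather than fitting a curve by hand.

velocity     = mean(derivative(target_position), 5);              // Average velocity over ~5 seconds
acceleration = mean(derivative(derivative(target_position)), 5);  // Average acceleration over ~5 seconds
aim_point    = target_position + velocity * flight_time
             + acceleration * (flight_time * flight_time * 0.5);
fire_at(aim_point);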


Scenario 3: A Tailing Mission

Sometimes you want to know whether one entity has been in proximity to another for a period of time. Maybe one entity is a proximity mine, or an unidentified cargo ship that you are supposed to scan in an X-Wing-style space game. The fiction of script3 is that you are a Thief, and you're supposed to follow a messenger at night-time long enough to get a good look at him and identify him.

This is a case where we want a primitive that operates on the entire history of its inputs. We want to say (difference_vector = the_whole_history_of(messenger_position - thief_position)), then ask "What is the typical length of difference_vector over the past 30 seconds?"

This is easy; we can define a whole-history version of subtraction that returns a variable whose mean is the difference of the arguments' means. Because the history filters are just weighted averages, and averaging is linear, this gives the same result as if we had subtracted the two values every frame and built a history from those differences. So to ask our question, we just query the average of this result over 30 seconds, then take its length.
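
In pseudocode, the witness check becomes a single query; whole_history_of() is the primitive described above, while WITNESS_DISTANCE and the message are made up.

difference_vector = whole_history_of(messenger_position - thief_position);
if (length(mean(difference_vector, 30)) < WITNESS_DISTANCE) {  // Typical separation over 30 seconds
    output("You got a good look at the messenger.");
}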

In some cases, like in the tick code that evaluates mission objectives, it would be just as easy and clean to find the difference vector between the thief and messenger every frame, and put the result into a variable that builds its own history. But suppose we're not writing tick code; we're writing one-shot code that happens when an event is triggered. Say, the messenger has a conversation with a third party, and the end of the conversation fires an event that asks whether you have been near enough to witness most of the conversation. With a whole-history subtraction, we perform a single query and we're done. Without history, we need to add tick code to the Thief that maintains a distance measure for the event script to refer to. So we need to coordinate information between two scripts that run at different times, which is the kind of spaghetti code that strangles you if too much of it builds up. You ought to be able to implement a simple event without having to modify the entity tick, right?

Without a history mechanism, we would have to implement this with a per-frame distance measurement built into the Thief tick; so in the event-driven case we must suffer the two coordinating scripts. Essentially this solution is a bounding-volume implementation like Scenario 2, but it's not as nasty because the volume only concerns two specific entities and the measurement is 1-dimensional.

Even so, this solution suffers from discontinuities. We would have a timestamp that gets reset whenever you are standing at a distance greater than k from the messenger; our query would end up being something like "Is the current time, minus that timestamp, greater than the number of seconds required for success?" Success or failure is determined by a discontinuity that is hard for the player to intuit; if he steps a tiny distance over some invisible line, the timestamp resets and he probably fails, regardless of how well he did during the rest of the mission. The history solution is continuous with respect to the instantaneous distance, and has some forgiveness built in (you can make up for some poor performance by exhibiting some better performance). This is probably better default behavior for most game design cases. Though there are times when you do want a hard discontinuity, they're rare.


Conclusion

We've only scratched the surface of the huge pile of time-related features a scripting language could support. I hope I've encouraged you to think about the space of features available to you when designing your next language.

Even if you never make a scripting language, the history techniques described in this column are still very useful. It is not difficult to code them up in C++ or whatever your favorite language is; they won't be as transparent, but the problem-solving power is still there.