ericd.net - iPhone Development RSS

I have become pretty heavily involved in iPhone and Mac development, so I offer up some aggregation for visitors as well as myself. Enjoy.

iPhone Development RSS (aggregated)
iPhone & Mac Development

Application Development for the iPhone using Apple's official SDK.

WWDC 2014

We're less than a week out from WWDC. The conference really snuck up on me this year. I never even got around to updating my WWDC First Timer's Guide. Re-reading the guide from last year, I don't think I would've changed much anyway. If you're coming to WWDC for the first time this year, last year's post is probably worth a read, as are several other similar posts. For a somewhat less serious take, Mark Dalrymple's 2013 first timer's guide is a fun read.

For the first time in a long time, I really don't have a firm idea what to expect this year. The past several years, I've had a pretty solid notion about what was coming. I usually didn't know the fine details, but I had a solid idea about the broad strokes. This year, I have some guesses and some wishes, but nothing I'm super confident about. I don't have any real idea what we'll see in iOS 8 or in Mac OS X Eureka.¹

While I don't know what is coming, I do know how many of the Apple engineers feel about what's coming out next week, and they seem to be pretty excited about it all this year. That's enough to make me excited and confident that it's going to be a good year for developers.

It's been a strange WWDC lead up this year for me for a number of reasons, not the least of which is that this will be the first year since before the iPhone came out that I don't have a ticket. I'm not upset about that - I actually didn't even put my name into the lottery - but it does change the nature of my anticipation somewhat. I will, of course, be in San Francisco for the week. Even without a ticket, it's still my favorite week of the year.

I am a little saddened that the new, larger MartianCraft will have so little representation inside the walls of Moscone West, though. We have 48 people between our employees and full-time contractors, and the vast majority of our work is iOS and Mac software development of one form or another. Around twenty of our people attempted to get tickets, and only one got one. From what I've seen from friends and acquaintances in the community, that ratio is not out of line with what others experienced. The breaking of my seven-year streak pales in comparison with the streaks of some others who didn't get tickets this year. In fact, most people I personally know in the community are going without tickets.

I've noodled a bit in the past about how Apple might "fix" WWDC. My general opposition to WWDC becoming a mega-conference like JavaOne remains unchanged. But… I can't help feeling that the need for Apple to do something becomes more imperative every year. As a matter of fact, I think we're past the point where that something should already have been done. Apple needs to accept that a single 5,200-person conference and a few scattered tech talks every year or three just isn't meeting the needs of the community. A lottery makes it as fair as such a thing can be, but it's addressing a symptom, not the actual problem.

While the WWDC sessions and labs are awesome, the WWDC moments that have stuck with me have been the chance meetings. It's been the opportunity to meet people who created something I use or to talk with people who have done things I admire. It's been the chance to sit down in a seat next to somebody I didn't even know I wanted to know.

It's the moments that have made me feel like I belong that stick with me and make me want to keep being part of all this. That's what WWDC has always been to me: the chance to connect and reconnect with the people in our community. Some of that has always happened outside Moscone West in the surrounding restaurants, bars, hotels, and even on the streets of San Francisco. Most of it, however, used to happen inside the conference center. Even a lot of what happened in the surrounding areas happened because of connections originally made inside.

But that's no longer the case. It no longer can be the case as long as the conference stays the same size and the community grows.

The number of people who can attend WWDC has remained constant for over a decade, but the number of people coming into town for WWDC has risen steadily since 2008.  We've now reached a point where the importance of the actual conference has been eclipsed by what's happening in the surrounding areas. WWDC sets the time for when we all come into town, but it has already stopped being the center of the universe for us that week.

That's not a good thing for Apple or for us. Apple should be at the center of the universe for its third party developers. Session videos and documentation become available online. Keynotes are streamed. You don't need to be in San Francisco to take advantage of any of those things.

But having so many of us in one city at the same time for a week is still important. Personal connections matter. Sharing food and drink and war stories helps create and maintain a sense of community and makes us feel like we belong to something.  It allows our increasingly larger industry to keep some of the character that made it so special when it was small.

Hopefully, the fine folks at Apple realize that WWDC is not really sessions, labs, and boxed lunches.

WWDC is people. And because of that, I can't wait for next week.


P.S. We're doing MartianCraft shirts again this year, with pickup at the conference available as long as inventory holds out.

1 - I honestly have no idea what city they're going to pick for the next version of Mac OS X. I don't like the sound of "Mac OS X 10.10", so I randomly picked a city name from a Wikipedia list. If it actually turns out to be "Eureka"… well… in that case, let's just say I should've bought a lottery ticket instead of writing this post.


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 27 May 2014 | 12:26 pm

Announcing: Republic Sniper

Sorry for being so quiet lately. We've been busy at MartianCraft.

In addition to merging with Empirical Development, we've also been studiously working on the first game set in the Turncoat Universe™.

It's called Republic Sniper™.

Today, we released a teaser trailer for the game. This trailer was done entirely in-house. Most of the production was done over just a two-week period by a team of four people, only one of whom was dedicated full-time. They did a hell of a job and I need to give a special shout-out to Patrick Letourneau for not just carrying the ball across the goal line on the trailer, but running it the entire length of the damn field. He accomplished an amazing amount in a completely ridiculous amount of time and we're both grateful and impressed.

We've also been sharing concept art, trivia, and WIP screenshots from the Republic Sniper Twitter account.

We haven't announced a ship date for Republic Sniper, but you can sign up for our mailing list on the website to be notified as soon as we have more information to share!

Our much larger combined company has a dedicated game team with multiple in-house and contract games currently under development and we're always happy to talk with people about new game (or non-game) projects.



Posted on 13 February 2014 | 2:08 pm

Turncoat Dev Diary: The Grind

This "every week" commitment has turned out to be harder than I expected. I've been pretty heads down working on game mechanics of late. I actually found time to write a couple of posts that I can't publish yet because we're still trying to sort out a problem with our Apple developer account. We've got our domain reserved, but we don't want to reveal the game's name or details until we've also reserved the app name. At present, we can't do that. Hopefully we'll get that resolved this week.

But work continues on the game. The gun you get at the start of the game has now been fully modeled and textured and we've generated a low-poly normal-mapped version for the game.




The yellow paint denotes that this is a training range rifle - it will only be on the gun when you're on the training range, though we may end up going with a different color or ditching the paint altogether. While it looks pretty nice in the renders, it tends to blow out in the actual game when you're near lights.

We've also got the initial model for the RAR-14 assault rifle:


This is the standard Republic assault rifle, the gun from which the sniper rifle above was derived. If you look at the receiver, you'll see that it's essentially the same gun with a different stock and barrel. The sniper variant also uses slightly different ammunition, so the clip is a little longer to account for the longer cartridges it uses.

You can see from the orange marks that we're playing around with different color markings, trying to figure out what will look good in-game.






Before too long, we should have the training range done. The first five levels — a tutorial plus four challenge levels — will take place on the training range. The range will also be used for a training mode that will let players test out weapons and weapon modifications. The level has been blocked out and the area behind the shooter has been partially detailed:



I've also made quite a bit of progress on developing the game mechanics. I've got a working prototype with bad guys. There's no real AI to speak of yet, just an algorithm that I like to call "lambs to the slaughter". The bad guys will avoid obstacles and each other, but otherwise, they just walk towards the shooter until they get shot and die.

The basics are working, though. There's a basic hit point system with location-based damage adjustments, particle blood on impact, and death animations. It's not exactly a game yet, but it's actually starting to be kinda fun testing the builds.

I was pretty depressed earlier this week, though. The test level was working great on desktop builds, but trying to play on any mobile device except the iPhone 5S resulted in really bad framerates, and this was without much in the way of dramatic lighting or complex models. The test level I was using has a relatively tiny poly count, and the proxy bad guy models I was using do as well. The textures were reasonable in size and used hardware-supported compression, and I was using only simple shaders designed for mobile use. Yet, on an iPad Mini, the level would start at 15fps and quickly drop to 3 or 4. It was playable, if not great, at 15fps, but when it dropped below that, it became unusable.

I reached out to some friends who have more experience with Unity for advice. With their help and the help of Instruments and Unity's excellent profiler, I found a couple of things that were just killing performance by forcing the physics engine to re-create a bunch of objects every frame. A few hours of poking around and asking questions and I had the prototype running at 30fps even on an iPad Mini and even with a dozen or more bad guys visible.

Once I got performance working well, I started playing around with Unity's light maps and light probes, which let you do fairly impressive lighting without the use of performance-killing rendered lights. I'm really happy with the results I've gotten so far and can't wait to see what we can do with these features on an actual artist-created level.

Overall, things are moving along well.


Posted on 24 October 2013 | 11:30 am

Turncoat Dev Diary: Going Ballistic

We're still working on getting our ducks in a row administratively so we can actually announce the name and basic details of our first game, but as I've mentioned before, it is going to be centered around the use of guns. The Turncoat universe is set four hundred years in the future, so there will be fancy futuristic weapons available, but I wanted the first weapons you get access to in the game to be essentially more advanced variants of modern ballistic firearms.

You could argue that by the year 2400, cartridge-based combustion-propelled firearms will be horribly obsolete. Certainly many fictional futures have taken that route and opted for only ray guns, lasers, blasters, phasers, ion cannons, and other such options.

But, if you think about it, the sophisticated modern firearms of today are based on the exact same principles as weapons created in the fourteenth century. Using combustion to propel a small piece of metal very, very fast has proven to be a very effective way to harm living things. Firearms have been around for seven hundred years without becoming obsolete, so there's no reason why they wouldn't still be in use in some form four hundred years from now alongside whatever other new ways of killing get created.

When it comes down to it, however, it really doesn't matter how likely anything is in reality. From a gameplay perspective, we want to have many options for our players. We want them to be able to use different kinds of guns with different gameplay characteristics and to be able to upgrade those guns in numerous ways. Our goal is to add variety to the experience of playing the game, not accurately predict the future of weaponry.

As I started prototyping the gun mechanics in code, I found a lot of examples and tutorials scattered around the Internet about how to do guns in Unity. Most of those tutorials recommended a simple raycast (combined, of course, with sound and visual effects). You cast a ray out from the gun's barrel and see if it collides with something. If it does, you have a hit and the result of that hit happens immediately. And why not? As far as human senses are concerned, bullets from modern guns might as well be instantaneous except at the very longest range of the most powerful sniper rifles.
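The hitscan idea is easy to sketch outside Unity. Below is a minimal Python illustration, assuming a spherical target; Unity's Physics.Raycast does the equivalent test against real collider geometry, so treat this purely as a picture of the math:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to its first intersection with a
    sphere, or None if the ray misses. direction must be a unit vector."""
    # Vector from the ray origin (gun barrel) to the sphere center
    oc = [c - o for o, c in zip(origin, center)]
    # Project onto the ray direction to find the point of closest approach
    t_closest = sum(a * b for a, b in zip(oc, direction))
    if t_closest < 0:
        return None  # target is behind the shooter
    # Squared perpendicular distance from the sphere center to the ray
    d2 = sum(a * a for a in oc) - t_closest ** 2
    if d2 > radius ** 2:
        return None  # ray passes outside the sphere
    # Step back from closest approach to the sphere's entry point
    return t_closest - math.sqrt(radius ** 2 - d2)

# Muzzle at the origin, firing straight down +Z at a target 50 m away
hit_distance = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 50), 0.5)
```

With a hitscan weapon, that single test per shot is the whole simulation: the hit is resolved the same frame the trigger is pulled.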

But that approach is no good for our game. One of the main benefits of the later advanced weapons in the game is that they don't suffer from some of the problems that ballistic firearms have. For example, when shooting at long range with a sniper rifle, you have to account for the trajectory of a bullet, the fact that gravity pulls the bullet downward as it moves forward, and the fact that other forces, like wind, can act on the bullet. Ray guns don't have that problem. They're simply point and shoot, so to speak, and raycasting makes perfect sense for those later, more advanced weapons. But raycasting doesn't take the realities of a physical world into account for a ballistic firearm.

I want the firearms in the game to "feel" real, and I want the bullets to behave the way a real bullet would. I'm all for cheating when it makes for a better experience and raycasting bullets is a great solution for many types of games, but the mechanics I've been working on really put the behavior of the guns front and center, so I really want the bullets to be part of the physics simulation.

I did find some tutorials and code examples that created bullets as rigidbody objects and applied force to them, which is the basic approach I wanted to use. There are some problems with this approach, however. First and foremost is simply that bullets travel very, very fast, and physics calculations only happen so many times a second. On mobile devices, those calculations tend to happen fewer times per second than on a desktop computer or console because there's simply less computing horsepower available. What can happen as a result is that bullets pass right through objects they should have hit. In one frame, the bullet is on the near side of the target, and by the time the next physics frame rolls around, the bullet is on the other side of it, and no collision is detected.

For a desktop game, this is easy to rectify; you just crank up the physics frame rate (which is distinct from the display framerate in Unity) so that the calculations happen more often. For a mobile game, that's not an ideal solution. You have to use the available CPU (and GPU) power efficiently on mobile if you want an overall experience to be good. Fortunately, there's a good solution to this problem on the Unity Wiki. You have your projectile do short ray casts in any physics frame where it travels far enough between frames to have missed a collision.
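I won't reproduce the Unity Wiki snippet here, but the core idea — sweep a check over the distance the projectile covered since the last physics step — can be sketched in Python against a single hypothetical wall:

```python
def swept_hit(prev_pos, cur_pos, wall_z):
    """Detect a hit on a wall at z == wall_z that a fast projectile may
    have tunneled through between two physics frames. Returns the point
    of impact, or None if the segment never crosses the wall."""
    z0, z1 = prev_pos[2], cur_pos[2]
    if (z0 - wall_z) * (z1 - wall_z) > 0:
        return None  # both samples on the same side: no crossing
    # Interpolation factor at which the segment crosses the wall plane
    t = (wall_z - z0) / (z1 - z0)
    return tuple(p + t * (c - p) for p, c in zip(prev_pos, cur_pos))

# A ~900 m/s bullet covers 18 m in a 1/50 s physics step -- far enough
# to skip straight past a wall at z = 10 without this swept check.
impact = swept_hit((0.0, 1.5, 0.0), (0.0, 1.5, 18.0), 10.0)
```

In the real game, the "wall" is whatever colliders a short raycast from the previous position to the current one reports, but the principle is the same.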

The bigger problem for me was trying to figure out just what values to use in the physics system. How much mass should the bullet have? How much force do we need to apply to that bullet?

The examples I found seem to have arrived at values by pure trial and error, and they all felt "off" to me. Many of the examples I found used the default mass value for the bullet, for example. In Unity, the default value of "1" is equivalent to 1 kilogram. If you've ever held a bullet, you know that it masses nowhere near a kilogram. Even the giant .50 caliber BMG round doesn't come close. You know what shoots bullets that weigh a kilogram? Battleships, not rifles.

Instead of taking the same trial-and-error approach to getting mass and force values that feel right, I decided I'd do a little research. There's a lot of science behind guns and a lot of people who are interested in guns, so I figured it couldn't be too hard to find real data on real bullets.

It ended up being even easier than I thought it would be. Wikipedia has gathered that data for pretty much every modern form of ammunition, including the exact mass of the bullet, the muzzle velocity, and the amount of energy used to propel the bullet to that velocity.

So, I gathered up that data for an assortment of assault, sniper, and high-powered hunting rifles in a spreadsheet. You can download that spreadsheet here, if you're interested.


Using the bullet's mass in Unity's physics system is easy enough. Just divide the grams by 1000 and that gives the value to use as the projectile's rigid body mass. But, how do we know how much force to apply to the bullet? Unity's documentation for the AddForce() method doesn't say what units it wants for input.

After digging around, I found that somebody had actually gone through the process of figuring out the answer to that while trying to counteract gravity for an object in their game. They determined that the AddForce() method uses 1/50th of a joule as its unit. Since we know how many joules of energy propel each of these modern bullets, we just multiply the number of joules by 50 and feed that value to the AddForce() method.
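Putting the two conversions together gives a small Python sketch. The ammunition figures below are ballpark numbers for a 7.62×51mm NATO round, not values taken from the spreadsheet above, and the 1/50th-joule scale for AddForce() is the community-derived convention described in the post:

```python
# Approximate published figures for a 7.62x51mm NATO round -- treat these
# as illustrative stand-ins, not authoritative ballistics data.
bullet_mass_g = 9.5       # bullet mass in grams
muzzle_energy_j = 3300.0  # muzzle energy in joules

# Unity rigidbody mass is in kilograms, so divide grams by 1000
rigidbody_mass = bullet_mass_g / 1000.0

# AddForce() reportedly works in 1/50th-joule units, so scale joules by 50
add_force_units = muzzle_energy_j * 50
```

Feed rigidbody_mass to the projectile's rigidbody and add_force_units to AddForce() along the barrel direction, and each gun's feel falls directly out of its real-world ammunition data.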

Great! But modern guns also spin the bullet as it travels down the barrel. In fact, the name "rifle" comes from the grooves in the barrel that cause that spin. After experimenting a bit, I came to the conclusion that for purposes of the game physics, rifling really isn't needed. Rifling helps deal with real-world problems that just aren't present in the game's physics engine unless we add them.

But, I decided I still wanted the bullets to spin.

That may seem like an unnecessary bit of realism, but there's actually a reason for it. In some situations, like if you finish a level with a head shot, we're going to slow down time and follow the bullet to its target with the camera. It's a little clichéd, but it's still a cool effect when used sparingly. When we do it, though, I don't want people noticing that the bullet isn't spinning.

And they will.

In the real world, the twist rate of rifling is measured a couple of different ways, including revolutions per minute and the length required to complete one revolution inside the barrel. What it's not measured in is joules. And since this is just for show, I don't want to actually model the rifling into the gun's physics model, because that would be a lot of work and would force the physics engine to do an awful lot of calculations. Instead, I just want to spin the bullet right at the moment it's spawned. Unity will let me do that in one line of code using the AddTorque() method. This method takes the same 1/50th of a joule input as AddForce().

But how much torque in joules should I add to the bullet's Z axis?

Honestly, I have no idea, and I really don't think it's worth spending a huge amount of time trying to figure it out since it doesn't actually affect the bullet's trajectory. I know it's a lot less force than is used to propel the bullet itself, so I'm going to start with a small number - 100 units (2 joules) - and see how it looks when we switch to the bullet cam. I'll then tweak the value if it doesn't look right. Sometimes trial and error is the right approach. Or maybe it's the lazy approach. Maybe it's both. Regardless, it's the approach I'm taking here.
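For completeness, the same 1/50th-joule convention applied to torque, as a tiny Python sketch of the arithmetic behind that starting guess:

```python
def to_addtorque_units(joules):
    """Convert joules to the 1/50th-joule units AddTorque() reportedly
    expects, per the same convention used for AddForce() above."""
    return joules * 50

# The starting guess from the post: 2 joules of spin on the Z axis
spin_units = to_addtorque_units(2)  # 100 units
```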

I threw together a quick shooting gallery to test my real-world-based gun physics. Yes, it's an ugly shooting gallery. This is what you get when a developer throws something together quickly instead of asking his artists to make it for him.  Despite the ugliness, I'm actually pretty darn happy with the results. Here's what it looks like shooting a gun based on values taken from the .300 Winchester ammunition:



There's still a lot of work to be done on my gun class. I need to get recoil in there, for example, as well as muzzle flash. But, I've got most of the basics down for building a variety of weapons by simply configuring parameters in Unity's inspector. Change the weight and force and a handful of other parameters and you get a gun that behaves and feels very differently. Change the 3D model as well, and you basically have a new, different gun.

On a related note, my early prototypes used the gyroscope for aiming on mobile devices. It had a really natural feel that I loved, but it proved problematic when you zoomed in very far. The tiniest movements from holding the device in your hands would translate into very noticeable, unwanted movement. That movement actually felt like real shake and scope drift, but beyond about 4x magnification, the game became basically unplayable. I spent some time trying to add stabilization and smoothing to the gyroscope input, but was never happy with the result or the amount of control we had over it.
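The post doesn't show its smoothing code, but exponential smoothing is a common first attempt at damping gyro jitter, and a Python sketch of it also illustrates the trade-off that makes it unsatisfying:

```python
def smooth(samples, alpha=0.15):
    """Exponentially smooth a stream of gyroscope readings. Lower alpha
    means steadier aim but more perceived input lag -- the control
    trade-off that's hard to tune to everyone's satisfaction."""
    out, value = [], samples[0]
    for s in samples:
        value = alpha * s + (1 - alpha) * value
        out.append(value)
    return out

# Noisy readings jittering around a steady aim angle of 1.0
raw = [1.0, 1.4, 0.6, 1.3, 0.7, 1.0]
steadied = smooth(raw)  # much smaller swings than the raw input
```

The smoothed stream swings far less than the raw one, but every real aim adjustment also arrives late, which is exactly the loss of control described above.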

After a while I admitted defeat and ended up ripping out the gyroscope code and replacing it with code that used the accelerometer for aiming. I then added scope drift back in algorithmically and created a parameter for it. That means we can easily change how steady a gun is when used. A rifle with a bipod, for example, will have almost no drift, while a large gun used while standing will have a fair bit more.



Posted on 12 October 2013 | 9:02 am

Turncoat Dev Diary: Concept and Mechanics

As I mentioned in my last post, we now have dedicated game artists working on Turncoat, and we've mostly decided on the mechanics and basic structure of our first game. I'm not quite ready to announce the game's name or share much detail about it until we've finalized that decision and taken some administrative steps such as reserving the app and domain names.

For the first game, we've decided not to make it story-driven. This was a tough call, because our eventual goal is to create large, story-heavy cinematic games. However, we also need to run this as a business. Long cutscenes and story-driven plot would greatly increase the budget and timeline of this first game and probably wouldn't greatly increase our sales.

So, what we're doing is a game that's smaller in scope but set in the same universe, taking place in and around the main storyline. Developing a more casual game will allow us to build up a library of game assets that can eventually be used in the full story-driven game while still keeping a reasonable timeline for shipping something.

In the last post, I showed you the selection of gun silhouettes that Alex came up with:



We decided to start working with the silhouettes J and K (which are similar) for the first gun the player gets to use. I did a write-up of the characteristics of the gun and wrote a little in-universe history for it to help Alex visualize it. My thought was that the player would start with a more general-purpose gun; something modular that came in several variants. We settled on an assault rifle that came in regular, sniper, and tactical variants. After noodling around for a bit, Alex turned that into these designs:


When he first sent it to me, I really wanted to find something that needed to be improved. I pretty much failed to find fault with the designs, though. They're pretty much exactly what I was hoping for. The only flaw I found was in the variant names. The "X" designation only applies to the sniper variant. The tactical variant is the RAR-14T, and the regular version is the RAR-14.

RAR stands for "Republic Assault Rifle", and it's pronounced "rawr fourteen".  I was originally going to drop the first R, making "Republic" assumed.  "AR-14" or "Republic AR-14" sounded more like a gun to me than "RAR-14". It turns out, there was a reason for that. The original name of the M16 rifle was "AR-15". Even today, the civilian semi-automatic variant of that gun is sold under the trademarked designation AR-15 by Colt. To keep our distance and not sound too derivative, I decided to stick with the original three-letter acronym pronounced like a word.

Patrick, our other game artist, took Alex's designs and started working on the 3D model for the gun. The model's not finished, but it's looking pretty sharp so far:


Meanwhile, needing a break from designing guns, Alex started working on concept art for the game's first level. We decided that the first level would be a shooting range on board a ship. Our earliest idea was to make a small, long, windowless range. In Deep Fleet ships, space is at a premium, so I initially wanted the space to feel cramped to reflect that, almost as if the designers of the ship had to make room for the rifle range as an afterthought. We explored that idea for a while, but after talking through it, we opted to go in a different direction. We decided that the first level the player sees needed to have a little "wow" to it, and a cramped, dingy range squirreled away in the bowels of the ship just wouldn't give us that. It makes sense in-universe, but it doesn't work for the game.

Essentially, we decided to let aesthetics trump in-universe realities, and went with a range with large (very bulletproof) windows through which stars, the sun, or maybe even Mars or Earth can be seen. Alex isn't done with the concept art for the shooting range yet, but he's off to a good start if you ask me.


The range has a large overhead window looking into space that prevents the room from feeling cramped or small. The shooting stalls can slide in for individual practice or slide out for tactical training. The large pyramid above with the catwalks extending off of it is a holographic projector. Although some elements on the range are real physical items, the targets themselves will be projected holograms. We were originally going to go with targets that pop up the way they do on modern tactical training ranges, but then decided we wanted to go a bit more futuristic.

It'll be interesting to see how much of this changes before we ship the first version of the game, but so far I'm incredibly happy with the progress we're making. On the game mechanics side, I've been experimenting with the gyroscope and trying to decide whether it can be used for certain game mechanics. What I've found is that it's quite well suited to certain situations, but not to others. For example, when you use a scope like the one on the rifle above and zoom in far, the tiniest movements of your hands cause large movements in the scoped view. While this is somewhat realistic for shooting at long range - holding a gun absolutely perfectly still is impossible - it takes scope drift out of our control.

We need to be able to control things like that. How do we make a gun on a bipod more stable than a gun that's just held? How do we make a large, heavy gun behave differently than a smaller one? Contrariwise, how do we keep players from gaining stability by resting their device on a table?

No. Scope drift has to be something we have precise control over. It can't be a byproduct of our control mechanism.

Although I really like the feel of the gyroscope for shooting, I'm becoming convinced that it's not the right mechanism for this game. That being said, I think it might be the right way to control the view in at least some situations when you're not using a scope. The gyroscope is far more accurate than the accelerometer, and any control mechanism that requires screen touches would require screen real estate we're going to need for other controls.

We're making progress and I look forward to sharing more designs with you as we go along. Once we have our ducks in a row and have finalized the game's name and basic story, I'll also share that.



Posted on 3 October 2013 | 10:46 am

Turncoat Dev Diary: Visual Design Begins

We're starting to narrow in on our first game after putting the stealth game on the back burner. I'll be ready to share more about that in the next week or so. Today's post, though, is not about game mechanics, it's about the look and feel.

We've brought two excellent visual artists — Patrick and Alex — on board to help establish the visual style of our game universe and the first game.

You can check out Patrick's work at his Tumblr and on his blog. You can also follow him on Twitter… um… if you dare.

You can see some of Alex's stuff on his blog and follow him on Twitter.

I'm really excited to be working with these guys and can't wait to share some of the stuff they create.

In one of my next few posts, I'll talk about the mechanics of our first game, but for now, I'll just say that guns — and especially scoped rifles — are an important element of the game, so one of the first things I wanted to explore was what those guns might look like in the 24th century.

The process started with silhouettes. Alex came up with a sheet of different gun outlines based on both historical and modern weapons as well as taking inspiration from a variety of fictional sources. Here is a low-res version of the first silhouette sheet:



Talk about decision paralysis. So many cool-looking gun silhouettes!

While we'll have multiple guns in the game when it ships and we'll eventually explore several of these designs, we have to start with one. Picking just one wasn't easy, though. Instead of deciding based on aesthetics, I decided to look at function. Our protagonist needs to start with a gun, but we don't want them to start with the coolest, fanciest, or biggest gun. Rather, we want them to start with something practical and multi-purpose. Both J & K looked to me like assault rifles that have been modified for sniping, and that feels like a good starting point for the default weapon. It's the weapon of a newly-qual'd sniper deployed with his or her squad.

So, Alex is now working on variations of J & K to come up with the design of the first gun our players will use. We'll be exploring some of the other silhouettes later and evolving those into finalized designs as well.

While Alex is exploring guns, Patrick has been exploring environments. The logical starting point for him was to create a rifle range for practice and training levels. We don't want players to worry about enemies shooting back at them until they've had a chance to at least try out their gun against inanimate objects, so Patrick is working on figuring out just what the rifle range on a 24th century spaceship might look like. None of the environment stuff is far enough along to share yet, but I'm looking forward to when we can.

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 24 September 2013 | 11:06 am

Turncoat Dev Diary: Touch Controls are Hard… Let's go Shopping!

I haven't been making my "every week" blog post commitment for the last couple of weeks. I apologize for that. There are a few reasons, on top of the ordinary work-life busy-ness, that have caused it.

First… well, touch controls are hard. I've got a partially written post exploring the use of touch controls for stealth games, but I haven't been able to home in on something I'm 100% happy with. I've got something that I like better than any stealth-based iOS game I've found, but it's still nowhere near being shipworthy. Part of that is because this type of game grew up in the console world, where you have controllers like this:


Have you ever thought about the sheer amount of input you can take through one of these modern controllers? The Xbox 360 controller, for example, has two analog joysticks, each of which allows analog input on two separate axes. That's four inputs, each accepting a range of values, letting you (for example) not just specify that you want to move forward, but actually specify the speed at which you want to move.

But there's actually another two analog controls on top of those. The left and right triggers are not buttons, they're also analog controls with one axis each. The harder you press them, the higher the value received. The DPad is the equivalent of eight tac buttons. There are four standard buttons (A,B,X,Y) and two shoulder buttons (RB, LB). Even without counting the start, Xbox, and back buttons, and without using combinations of buttons, we're talking about 14 buttons and 6 analog axes. Oh, but wait… each of the analog sticks can be pressed down and used as a button, so it's 16 buttons and 6 analog axes. If you count all the buttons, it's 19 buttons and 6 axes. You can also chord the A/X, A/B, X/Y, and B/Y buttons, allowing the equivalent of an additional four inputs.

That's an awful lot of input. These controllers are well designed, so you don't think about just how much data you're able to submit to a game using them, but as a game designer, it's something you have to think about.

If you look at the most successful and popular iOS games, they're not (generally speaking) copies of console games. There are exceptions, of course, like the recent Deus Ex game but, frankly, that one got by on its production value and franchise nostalgia. The controls are actually quite frustrating: a sloppy combination of direct manipulation, virtual joystick, and on-screen buttons that's hard to learn and hard to use.

I still believe that there's a way to do a stealth game on a touchscreen well without using an external controller, but I haven't found it yet. I think I'm going to put this idea on a back burner and return to it in a little while, maybe for the second or third game in the series. 

Another reason I haven't blogged recently is because I've been busy recruiting some pretty amazing artists to work on Turncoat. Pretty soon, I should be able to start posting some concept art and pictures of game assets. I'll tell you more about these artists in a future post but, for now, I will say that I'm super excited to be working with them and I can't wait to start showing you some of the art they create for the game.

So, where are we going from here? Well, we're probably going to be focusing on some high level look-and-feel stuff for the next few weeks and are also going to explore alternate game mechanics for the first game. It's important to me that the first game be really solid and also that it be produced in a timely manner. I just don't think that's going to happen with our original concept.

I'm also thinking about getting away from the prequel idea. There's something in the backstory that I was going to have to reveal if we kept going with the prequel game as originally imagined, and it's something I really don't want to reveal yet for a couple of reasons. Instead, I'm thinking about focusing on origin stories for the main members of the squad. Everybody who gets recruited into The Squad did something to get noticed: some act of heroism, selflessness, or brilliance that caused the Squad's Commander to recruit them.

So, instead of going a hundred years in the past, we're going to only go back 2-5 years. We're in the same universe, dealing with a lot of the same characters, but they're not on The Squad yet. These will be fairly self-contained stories that can be told without having to reveal any of the secrets of the universe.

At this point, I know which character's origin story we're going to do first, but I don't know for sure the game mechanics that will be used to tell that story. I've got some ideas that I'm going to explore, though, so look for future posts.


Posted on 17 September 2013 | 4:37 pm

Turncoat Dev Diary: Help! I'm Falling and I Can't Stand Up…

(This is part of a series. The first post in the series is here.)

Just as I started trying to figure out how the game's touch controls should work, I began to be really bothered by a couple of problems in the basic movement of my character. One of them I've mentioned before: the funky camera accordioning in the arc right and arc left animations. It turns out those issues were more than cosmetic; the stuttering camera, combined with the fact that stopping isn't instantaneous, made it virtually impossible to line up the character precisely as you stopped moving.

It was easy enough to solve, though. I simply removed the arc left and right animations from the blend trees in my state machine, then added some code to my character controller class to rotate the whole character as she walked:

    transform.Rotate(0, horizontal * turnSpeed * Time.deltaTime, 0);

The turnSpeed variable can be set in the inspector, so it can be adjusted on a per-character basis. The horizontal value is pulled from the x-axis of the joystick or determined from the left/right buttons or touch screen controls. The resulting turn animation is a tiny bit less realistic than using the animated left and right turns. You'd think that just rotating the whole character a small amount while they walked forward would look really fake, but it doesn't. Maybe it's simply that this is how most third-person games ever created, including pretty much every MMORPG, have worked. Maybe our eyes are just accustomed to this particular cheat. Either way, I'm willing to sacrifice that tiny bit of realism for better, more precise controls.
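Put together, the turn handling might look something like this inside an Update() method. This is just a sketch: the class name is invented, and I'm assuming the input comes from Unity's standard Input.GetAxis.

```csharp
using UnityEngine;

// Sketch of the turn handling described above. The class name is
// hypothetical; turnSpeed is exposed in the inspector so it can be
// tuned per character.
public class PrototypeCharacterController : MonoBehaviour
{
    public float turnSpeed = 120f; // degrees per second

    void Update()
    {
        // -1..1 from the joystick x-axis, left/right buttons, or touch controls.
        float horizontal = Input.GetAxis("Horizontal");

        // Rotate the whole character around the y-axis instead of playing
        // the arc left/right animations.
        transform.Rotate(0, horizontal * turnSpeed * Time.deltaTime, 0);
    }
}
```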



After playing with it a bit, I decided that the turn speed probably shouldn't be the same when walking and running. Instead of just setting a single turn speed in the inspector, I'll let you set both a walking and a running turn speed and then interpolate between them. They can be set the same using this approach, but they don't have to be.

float turnSpeed = (turnSpeedDifference * currentRun) + walkingTurnSpeed;
transform.Rotate(0, horizontal * turnSpeed * Time.deltaTime, 0);

The variable turnSpeedDifference gets calculated once at startup, since I don't anticipate these values changing at runtime:

turnSpeedDifference = runningTurnSpeed - walkingTurnSpeed;

I'm pretty happy with turning now, but there's another problem that I didn't notice until I expanded the playing field. The original field was sufficient for testing the basics of movement, but I realized that once I moved beyond the basics, I'd need ways to test things like crawling through vents and taking cover, so I expanded the test field, making it taller and longer. I added some ducts the same size as the ones in the prototype level, added some objects to take cover against, and added a third level platform and another set of stairs. That made the test level look like this:


As I started exploring this expanded test level, I realized that falling from a height greater than maybe two or three meters looked unnatural, because my character would try to walk or idle. Walking on air is a pretty neat trick, but not very realistic.

The provided CharacterController class, which is what I've been using to handle basic interaction with the environment (climbing stairs, being affected by gravity), has a method called IsGrounded()¹ that will tell you if you're standing on the ground. If you're walking, running, or idling, this will return true. If you're jumping or falling, it will return false.

That's the theory, at least. In practice, it always returns true for me, no matter what my character is doing. Now, I understand why it might not work when jumping: the elevation increase is baked into my jump animation, so the character controller doesn't actually leave the ground. The bone colliders move up into the air, so interaction with props is correct, but the implicit collider used for interacting with terrain does not, and as a result IsGrounded() returns true. More confusing to me, though, was why it also returns true when I fall off one of the higher levels. I had no working theories about that. Even when falling off a third-story platform, it never reported false for IsGrounded().

Because CharacterController is an opaque class provided by Unity, there wasn't an easy way to debug why it wasn't working as expected, so I decided to stop using the provided class and roll its functionality into my own controller. I removed the CharacterController component and added a RigidBody component (the component in Unity that makes something part of the physics world) as well as a CapsuleCollider. Because my character is part of the layer Player Controller, just like CharacterController used to be, CapsuleCollider should only interact with terrain, not with props, which will be left to the bone colliders. In theory, everything should work just like before except for situations that were being explicitly handled by the CharacterController class.
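For reference, the swap amounts to something like this. This is a sketch, not my actual controller; freezing the rigidbody's rotation is an assumption to keep the physics engine from tipping the character over, since turning is handled in code.

```csharp
using UnityEngine;

// Sketch of the Rigidbody + CapsuleCollider setup replacing the stock
// CharacterController. Capsule dimensions get tuned in the inspector.
[RequireComponent(typeof(Rigidbody))]
[RequireComponent(typeof(CapsuleCollider))]
public class CustomCharacterController : MonoBehaviour
{
    void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();

        // Gravity moves the character vertically, but the physics engine
        // should never tip them over; rotation is applied in code.
        body.freezeRotation = true;
    }
}
```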

Surprisingly, the swap worked far better than I expected. Without implementing the IsGrounded() functionality, I'm already able to move around the level just as I was before. I had to tweak various values on the RigidBody and CapsuleCollider components to get things just right, but it turns out I was getting far less benefit from the CharacterController component than I realized. Even climbing up slopes and stairs works pretty much as expected.

Pleasant surprises like this one are few and far between. I expected to put a lot more work into replicating the functionality I was getting from CharacterController, so I took a moment to savor the victory.

Then it was time to turn my attention to figuring out when my character is grounded, when they're jumping, and when they're falling so that I can show the correct animation for each situation.

I tackled whether they're grounded first. There are a couple of different possible approaches here. The one I opted for is to simply cast a ray straight down from the player to determine the distance to the ground. If that distance is greater than what it is when they're just standing, we know the character is not grounded. In my case, that looks a bit like this:

RaycastHit groundHit;
Physics.Raycast(origin, transform.up * -1, out groundHit, 100f, groundLayers);
grounded = groundHit.distance - groundedDistance <= 0f;

The variable origin is the calculated center of the capsule collider. The second parameter to Physics.Raycast is the direction I want the ray cast in, which is straight down. I don't know why Unity provides a property to give you a vector pointing up but not one pointing down, but multiplying transform.up by -1 gives us a vector pointing straight down from the character.

The third parameter is used to determine what object was hit, if any. C#, like Java, doesn't have pointers, so that funny out keyword is used to pass groundHit by reference rather than by value. As I've said before, I don't hate C# nearly as much as I hate Java, but there are still times when this language bugs me. Here's one example. I miss pointers. I know many devs feel we've outgrown the need for pointers and that our languages should hide them from us but, personally, I find this whole out² business to be far clunkier than simply passing the address of a variable. I understand some of the security concerns around pointers, but all the other arguments against them ring hollow to me.

Anyway, the next argument (100f) simply tells the ray cast to stop looking if it hasn't hit something within 100 units. In my test level, units are roughly equivalent to meters, so that should be far enough to hit the ground no matter where I am in the level. The final argument is called groundLayers, and this one's a little confusing. It's a bitmask field used to specify which layers I want the ray cast to consider. It's very similar to the physics settings I used previously to keep the bone colliders and character collider from interfering with each other.

Determining which values correspond to which layers is a little confusing but, fortunately, you don't need to do it by hand. You can declare a public LayerMask variable, and Unity will present a user interface in the inspector to let you select the layers to be included.
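In code, that's just a public field. The class name below is made up for illustration, and the LayerMask.GetMask call shows the equivalent mask built by hand, assuming a layer named "Terrain" exists.

```csharp
using UnityEngine;

// Sketch of declaring the mask field. Class and layer names are invented.
public class GroundCheck : MonoBehaviour
{
    // Unity draws this as a layer-selection dropdown in the inspector.
    public LayerMask groundLayers;

    void Awake()
    {
        // The same mask can be built in code if you know the layer names.
        int codeMask = LayerMask.GetMask("Terrain");
    }
}
```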


Once I have the results of my ray cast, it's relatively easy to figure out if I'm grounded. The variable groundedDistance is half the height of the capsule collider plus a small amount extra to account for small terrain changes. I'm ray casting from the center of the collider, so the ground should be half the collider's height away. If it's further than that distance (plus a little slop), we're not grounded.

In my testing, this works perfectly, except the jump problem is still there. My capsule collider doesn't move up as the character jumps, so this code reports that we're grounded when we're jumping.

For the jump problem, all I have to do is add a boolean variable to the class to track when a jump starts and when it ends. Only, it's not quite that simple. When you tap the jump button, that starts the jump animation. With a running jump, the character immediately springs into the air, but with a standing jump, there's a build-up as the character bends their knees and then springs up. In both instances, the character's feet hit the ground some time before the animation ends. It seems like that's the point where we want them to start falling. We don't want them to land on thin air and then start to fall down.

The first thing to do was to figure out the exact timings for my two jump situations. After some trial and error, I came up with these values:


// Timings used on the jump. Running jump starts immediately and transitions immediately back.
private static float jumpResetDelay = .1f;                    // Used to set the Jump input back to 0
private static float runningJumpAnimationDuration = .416667f; // Running jump animation is reported as .867f seconds,
                                                              // but is actually .416667; Mixamo probably trimmed on import

// Standing jump has a build up and recover, so feet don't leave the ground immediately,
// and the animation continues for a short period of time after.
private static float standingJumpLeavesGround = .5f;  // When the feet leave the ground on standing jump
private static float standingJumpBackOnGround = 1.2f; // When the feet are back on the ground

Now, the trick is to use them. This is one of those areas where language differences bite you. Pretty much none of the mechanisms I would use to accomplish this in Objective-C or C are available in Unity using C#. Apparently, for performance reasons, Unity's APIs are not threadsafe. Even though C# supports threading, Unity kinda doesn't. Instead, the suggested way to do something like this is to use these funky things called co-routines, which are functions that can yield execution back to the caller and resume later. In Unity, these functions fire on the main thread, but they can yield time back to it, similar to a thread sleeping for a specified period of time.

After some playing around, I came up with something that seems to work well. When the jump button is tapped, this co-routine fires:

IEnumerator TriggerJump(bool isRunning)
{
    if (isRunning)
    {
        jumping = true;
        AnimatorSetJump(true);
        yield return new WaitForSeconds(jumpResetDelay);
        AnimatorSetJump(false);
        yield return new WaitForSeconds(runningJumpAnimationDuration - jumpResetDelay);
    }
    else
    {
        AnimatorSetJump(true);
        yield return new WaitForSeconds(standingJumpLeavesGround);
        jumping = true;
        yield return new WaitForSeconds(jumpResetDelay);
        AnimatorSetJump(false);
        yield return new WaitForSeconds(standingJumpBackOnGround - (standingJumpLeavesGround + jumpResetDelay));
    }

    jumping = false;
}


If the character is doing a running jump, the public variable jumping gets set to true immediately, but if they're doing a standing jump, then we wait until the character's feet actually leave the ground to set it. In both cases, we set the Jump input to the animation state engine back to false after a short delay to make sure we don't accidentally trigger a second jump animation, and then, when the character's feet are back on the ground, we set jumping back to false.

Back in our Update() method, we should now be able to check at any point whether our character is jumping and get the correct value (though some tweaks to the timing are to be expected during testing). Knowing this will help us avoid falling into gaps that we're trying to jump over, for example. Now that I can tell when we're jumping, I can update the grounded check to take jumping into account.

Vector3 origin = transform.position + transform.up * movementCollider.center.y;

RaycastHit groundHit;
Physics.Raycast(origin, transform.up * -1, out groundHit, 100f, groundLayers);
grounded = groundHit.distance - groundedDistance <= 0f && !jumping;

Now that I have a reasonably accurate way to determine if the character is grounded, I should be able to tell when to fall and when to stop falling, right?

*sigh*

I knew I'd pay for that earlier bit of serendipity. It turns out the whole falling thing is harder than I expected. I implemented the code to start falling when the ground is a certain distance away. I made that distance configurable, since it could conceivably change based on the character's height, and then set it for this character. It mostly worked. There are some edge cases, such as going up stairs quickly, where it needs to be tweaked but, for the most part, starting a fall works as expected.

Landing, however… Well, landing doesn't work so well. The character "lands" a few feet above the ground and then settles down to the ground as they start to stand up.

This one made me pull my hair out. It made no sense to me.

It wasn't until I watched the character in Unity's scene view that I realized what was happening. The character's height changes as they fall, and then again as they absorb the impact of the fall, but the capsule collider being used to figure out when they've hit the ground doesn't change in height, so we detect hitting the ground while our character's feet are still a few feet above the ground.

That might make more sense if you see it in action:


You can see how significant the difference in height is in this screenshot:


There's a couple of ways I can fix this. The way that the Unity Mecanim tutorials show is to use an animation curve and tie the height of the capsule collider to that curve.

We do need to make the capsule smaller and adjust its origin up a little so it overlaps our character while falling but, in addition to that, we're raycasting from the center of the capsule in code to figure out if we're grounded, so we have to account for this change in height in that code as well. Since I have to write code to deal with this, I think I'd rather handle the capsule collider changes there as well. By saying that, I probably sound to the Unity folks the way people who refuse to use Interface Builder sound to us old school Mac and iOS devs, but it seems logical to keep the functionality in one place.
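As a rough idea of what handling it in code looks like, something along these lines. All of the heights and centers here are placeholder values; the real ones would come from the same trial and error described below.

```csharp
using UnityEngine;

// Sketch of resizing the capsule collider in code while falling, instead
// of driving it from an animation curve. All numeric values are invented
// for illustration and would need tuning.
public class FallColliderAdjuster : MonoBehaviour
{
    public CapsuleCollider movementCollider;

    public float standingHeight = 1.8f; // meters, matches the idle pose
    public float fallingHeight = 1.2f;  // tucked pose while falling
    public float fallingCenterY = 1.1f; // shift the capsule up over the torso

    private Vector3 standingCenter;

    void Awake()
    {
        standingCenter = movementCollider.center;
    }

    // Called by the controller when the falling state starts and ends.
    public void SetFalling(bool falling)
    {
        movementCollider.height = falling ? fallingHeight : standingHeight;
        movementCollider.center = falling
            ? new Vector3(standingCenter.x, fallingCenterY, standingCenter.z)
            : standingCenter;
    }
}
```

Any code that ray casts from the capsule's center would also need to use the adjusted center while falling.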

With some trial and error, I found the right values and timings for resizing the collider. Those will likely need some tweaking as I test more, but I'm pretty happy with the overall result. I was just about ready to move back to figuring out touch controls when I started noticing another movement problem. When the character ran up stairs or up the slope, they would sometimes start falling at the top, even though there wasn't any way they could possibly fall there.

Ray casting doesn't take the size of the collider into account; it just draws a line straight down from the specified point. There's a small gap between the top stair and the platform. It's tiny - not big enough for a person (or our collider) to fall through - but if the ray cast happens to be exactly over that gap when I do my check for falling, we get a false positive, and the wrong animation gets kicked off.

I could cast multiple rays down to make sure the gap isn't too small to fall through, but Unity actually provides a way to do a ray cast that takes X and Z size into account. It's called a sphere cast, and I stumbled upon it purely by accident. Fixing this issue turned out to be a matter of simply changing my ray cast call to a sphere cast call, using the radius from the capsule collider:

RaycastHit groundHit;
// Physics.Raycast(origin, transform.up * -1, out groundHit, 100f, groundLayers);
Physics.SphereCast(origin, movementCollider.radius, transform.up * -1, out groundHit, 100f, groundLayers);
grounded = groundHit.distance - groundedDistance <= 0f && !jumping;

At this point, basic movement is working pretty well. I can walk, run, jump, and fall down fairly realistically. I still have to do crouch and cover but, with these fixes, I think I'm finally ready to start exploring touch controls.

Next: Touch Controls
Previous: Prototyping Player Game Mechanics, Episode II



1: Yes, this is correct. The accepted convention in C# for naming methods is to start them with a capital letter. Considering this language came from the same people who gave us Hungarian Notation, however, this is a pretty tolerable bit of ugliness.

2: It seems simple, right? Specify out if you want to pass by reference; leave the keyword out if not. Only, it's not quite that simple. You can also use the keyword ref to specify that you want an argument passed by reference. Two ways that do almost the same thing, except that with ref, the variable has to be initialized before it can be passed in, while with out, it doesn't. This isn't simplicity, it's just different complexity with less power.
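A quick illustration of the difference (the names here are arbitrary):

```csharp
class RefOutDemo
{
    // ref: the caller must initialize the variable first; the method can read it.
    static void AddOne(ref int value) { value += 1; }

    // out: the caller may pass an uninitialized variable, but the method
    // must assign it before returning.
    static void Answer(out int value) { value = 42; }

    static void Main()
    {
        int a = 1;     // must be initialized for ref
        AddOne(ref a); // a is now 2

        int b;         // may be left uninitialized for out
        Answer(out b); // b is now 42
    }
}
```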


Posted on 4 September 2013 | 8:31 am

Delayed

Just wanted to post a quick note to say that there's a slight chance I won't be able to keep my "at least one Turncoat Dev Diary entry a week" promise for this week. My next post is rather long and involved - an exploration of touch controls for first- and third-person games and the process I went through trying to adapt my game's controls to the touch screen - and I've been traveling all this week with a very busy schedule. Worst case scenario, I'll have the next diary entry posted by next Monday, though I'm hoping to get it out earlier.



Posted on 28 August 2013 | 1:39 pm

Turncoat Dev Diary: Prototyping Player Game Mechanics, Episode I

(This is part of a series. The first post in the series is here.)


The first game mechanic that needs to be nailed down is player movement. This is the most important mechanic to get right because it's on the screen all the time. Up to this point, I've just been navigating the prototype map using the stock first-person controller that Unity provides. It's time to move past that and figure out the actual player movement and controls for the game. I'm sure we'll be tweaking these right up until release, but we at least need to get to a good starting point.

I've been going back and forth in my own mind over whether the Escape game should be a first-person or a third-person game. Both have merits. For shooters, the first-person perspective often works better because it's easier to aim guns and other distance weapons from that point of view. For games that use melee weapons or that aren't primarily about combat at all (like ours), I tend to think that third person works better. From a storytelling perspective, I kind of want the main character visible on screen. They are the protagonist, after all, so I want the player to be able to see them. The high-level considerations seem to be pointing more toward third-person perspective.

But third-person perspective falls apart sometimes. The situation in our game that seems potentially problematic for third-person control is when you're crawling through the ducts. Space is tight, and the player character will be in front of the camera taking up most of the available space. That's going to make it impossible to see what's in front of the character and difficult to effectively control their movements.

On the other hand, there are times when first-person perspective isn't ideal either. I want the player to have a number of options for hiding and taking cover. If we're using first-person perspective and the character does something like flatten herself up against a wall or take cover behind a piece of furniture, it's going to be hard to see everything the player needs to see to effectively play the game. Those perspectives may be disorienting as well. In real life, when you're hiding behind something, you can't see the stuff on the other side. But in a game, you have to be able to see at least some of what's going on for the game to be playable.

Perhaps, the camera needs to be able to change perspective. Crawl into a vent? Move to first person view. Press up against a wall to hide? Force third person view. 

What about when both perspectives work, such as when simply walking or sneaking around the cell block? It seems like there are two possible choices there. We can either be opinionated and force one perspective or the other on the player, or we can let the player choose the point of view they want. I'm leaning toward letting the player choose, but I'm going to play wait and see before making the final decision on that. I think we need to let testers try it both ways and see the response.

Time to start building a character controller.

Of course, we don't have character designs yet, let alone completed 3D characters. So, how do we prototype character movement?

We use this:


This is a free character from Mixamo designed specifically for prototyping. With Unity 4's Mecanim animation system and its ability to retarget motions, pretty much any animation designed for a bipedal character can be used with any other bipedal character. That means we can write a generic controller object for this prototyping character, then simply swap in the correct character model later once it has been completed. As long as our models are designed correctly and aren't radically different in their basic proportions, it should just work.

Mecanim is impressive. The motion retargeting is some of the best I've seen and the importer almost always maps all the bones correctly regardless of the naming convention used or the number of bones in the model. The only downside to Mecanim is that it's still fairly new, so a lot of Unity users haven't moved to it yet and are instead sticking with the legacy animation stuff for their current projects. As a result, there's just not as much out there in terms of tutorials or available help. Mecanim makes it really easy to do the basics, but once you start going beyond the basics, you pretty quickly get into uncharted territory.

Uncharted territory can be fun, but it's almost always time consuming. Before we can even get to Mecanim, though, we need to get our prototyping model and animations into Unity.

Over the last year, I've purchased a selection of animations from Mixamo that I thought we'd be likely to need for the game. I supplemented those with a few packs bought from the Unity Asset Store. In the future, I'll be buying any stock animations directly from Mixamo. The Mixamo motions in the Asset Store are much cheaper, but you only get the proprietary Mecanim animation file, not the original FBX file, so you can't modify the animations, you can't set curves (which are used to tie the timing of other actions to the animation), and you can't fix mistakes in the motions. 

Unfortunately, the three packs I bought through the Asset Store - the male and female locomotion packs and the prototyping pack - all contained mistakes. In fairness, Mixamo responded to my feedback very quickly. In less than twenty-four hours, they fixed the worst problems - the ones that made the packs unusable. They seem less inclined to fix the more minor issues or to provide some way to use Mecanim curves, so from now on, I'm paying extra for the full motions.

By mixing and matching animations from different packs with the ones bought directly from Mixamo, I should have most of the animations I need to get started with basic character movement. When I discover gaps, I can buy or create animations to fill them.

Rather than test character movement in the prototype level, I'm going to work in a fresh Unity file with a simple map. Once I have the basic movement mechanics working well here, I'll then export the asset over and start testing it in the prototype level. The reason to work in a fresh file is to isolate what's causing problems I encounter. Determining the cause of a problem is much harder if both the map and the character are being constantly changed.

Here's the simple prototyping level I'll be using:

There's not much to it other than a ramp, two sets of stairs, a partial second story, and some room for dropping in objects so we can see how the character interacts with the virtual world.

Because physics calculations can be processor-intensive, game engines usually keep a second set of 3D models in memory that mirror the display model. These collision models (or colliders) are not displayed to the user and are only used for calculating the physical interactions in the virtual game world. These colliders are comprised of lower-resolution models and mathematically defined "primitives" like spheres, cubes, and capsules. These collision models allow the physics engine to provide fairly realistic physical interactions using a smaller amount of processing power than it would take if using the higher-resolution display models.

This is why neither the stock first- or third-person controller provided by Unity is going to work for our game. I want our characters to interact with the world in a believable manner. Both Unity's first- and third-person controllers use a single, large capsule collider to represent the character inside the physics engine. Although the characters look like the 3D model you create, they interact with the world like a giant floating pill.

The green capsule is how your character looks to the physics engine
when using Unity's stock character controllers

For many games, especially first person shooters, this provides a sufficiently believable interaction. It's not going to give the result I want for this game, however. I want interactions with the world to have a higher fidelity than that. Our game is going to rely less on combat and more on stealth and problem solving. Having objects bump away when you get near them, but not move when an extended arm or leg passes through them, is just not going to cut it.

I spent a lot of time trying to modify the stock controllers to give the results I want, but eventually decided I either had to roll my own, or find a third-party controller that works how I want. I found several third-party controllers that were better for my needs than Unity's, but none were perfect, so I'm going to have to build my own.

As I started working on my character controller, I ran into an unexpected limitation of Unity: it lacks support for animated collision meshes. Game models in many engines are composed of two meshes - a higher-resolution mesh that gets displayed to the user and a very low-resolution mesh used for physics calculations. Both meshes are rigged to the model's armature (the virtual skeleton used to animate the model), which allows the physics engine to calculate interactions based on the actual position and pose of the character. By using the animated collision mesh, the engine knows the general shape of the body at any given moment and can calculate physics accordingly.

I built this type of physics mesh for my prototyping character.  In Unity, I added a mesh collider to the character using that lower-fidelity mesh, but then discovered that it didn't animate along with the armature as I expected it to. A little research turned up that this is the documented behavior of Unity. For performance reasons, mesh colliders do not deform with a character's armature.

Needless to say, I was surprised to find out that I couldn't do what I thought was a fairly standard practice. The response from many Unity users in the forums and on Stack Overflow can be paraphrased as "just use the provided controller; it's good enough for us, so it should be good enough for you," which isn't a particularly helpful bit of advice.

I spent a lot of time experimenting, trying to find a way to use an animated collision mesh in Unity. I came up with a way that I thought would let me get around Unity's limitations. Since Unity, Blender, and the FBX file format all allow objects to be parented to individual bones, I thought I could create separate collision objects for different parts of the body and achieve the same result as using a mesh that animates along with the character. You can see this attempt below; the collision objects are the orange wireframe shapes surrounding the model. Instead of a single animated collision mesh, I built nineteen separate meshes, each of which moved along with a single parent bone.



I rendered a short animation to see whether the physics meshes would animate properly when built this way. Everything seemed to be in line with what I needed.


As I moved over to Unity, things looked good at first. The exported model looked right, and all the physics meshes were in the correct places. They weren't acting as physics meshes, though; they were being displayed. That didn't concern me. I just needed to turn off each one's mesh renderer and add a mesh collider to the appropriate bone.

I should've known it wasn't going to be that easy. 

Unity mesh colliders automatically place their mesh so that they take on the parent object's transform (position, scale, rotation). In the case of a bone, that means the collision mesh gets moved to the head of the bone and rotated 90°. All my colliders ended up in the completely wrong place. Here, for example, is where the left thigh collider ended up.



I know how to fix this — I just have to move the origin of each collision object to match its parent bone's transform and then adjust the mesh's shape to overlap the length of the bone. But, I was starting to feel like I was fighting Unity… that I wasn't working with the system the way it was intended to be used.

I went back to researching, and found a few people saying to build your character's collision mesh right in Unity rather than in a modeling program. Primitive colliders like capsule, box, and sphere colliders give much better performance than mesh colliders, even mesh colliders that use low-resolution meshes. Building the collision mesh in Unity is a little tedious, but probably less tedious than fixing the collision mesh in Blender and importing it, and I'll get better performance. I'll lose a little precision, but probably not enough to matter. Most importantly, I won't be fighting my tools.
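As a rough illustration of what that involves (in practice it's an editor task, but it can also be scripted), adding a primitive collider to a single bone might look like the following sketch. The bone path and the size values are assumptions; they depend entirely on the model's rig.

```csharp
using UnityEngine;

// Sketch: attach a capsule collider to one bone of an imported
// character. The bone path and dimensions are placeholders.
public class AddThighCollider : MonoBehaviour
{
    void Start()
    {
        // This path is hypothetical; it depends on your armature's hierarchy
        Transform thigh = transform.Find("Armature/Hips/LeftThigh");
        if (thigh != null)
        {
            CapsuleCollider capsule = thigh.gameObject.AddComponent<CapsuleCollider>();
            capsule.direction = 1;    // align the capsule with the bone's local Y axis
            capsule.radius = 0.08f;   // placeholder values; tune per bone
            capsule.height = 0.45f;
        }
    }
}
```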

So, back to Blender I went to re-export the model without its collision meshes. After bringing the updated model back into Unity, I began adding primitive colliders to major bones and ended up with this:


Just like with the earlier model, I wanted to make sure the collision objects moved appropriately when animated, so I made the character dance and filmed it. Yikes. That sounds way creepier typed out than it did in my head.


It looks pretty good, right? But there is a problem with this collision model that's not obvious from the animation above. Everything looks good… until I enable rigid body physics on the character itself. If you look at the green outlined collision objects in the animation above, you'll notice that they overlap at times. The arm, chest, and shoulder collision objects overlap each other, for example, as do the pelvis and thigh objects. Since these are used in physics calculations, this becomes a problem if the parent object (the imported character model) uses physics.

Unity's physics engine is going to try to prevent these meshes from intersecting because it wants to treat them as solid physical objects. That's the whole point of a collision mesh: Unity wants to bounce these off each other. Even if I make all the colliders kinematic (which means they affect other colliders, but don't get moved around by the physics engine themselves), they still cause problems, because the engine will still try to bounce the character off the colliders attached to its own bones.
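For reference, marking a bone's rigid body kinematic is a one-line setting; this minimal sketch assumes the rigid body lives on the same object as the script.

```csharp
using UnityEngine;

// Sketch: a kinematic rigid body pushes other colliders around but is
// not itself moved by the physics engine (it follows its animation).
public class MakeKinematic : MonoBehaviour
{
    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        if (rb != null)
        {
            rb.isKinematic = true;
        }
    }
}
```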

This causes weird, erratic results. I've seen my character start walking on air, as if an invisible staircase of randomly sized steps stood in front of her, and I've seen body parts suddenly fly away for no apparent reason, as if they were connected to the body only by a skin of really pliant rubber.

I could have solved this by leaving large enough gaps between the collision meshes so that they simply never overlap during normal movement. To do that, though, the gaps would have to be large enough that smaller objects in the world could pass through them or, worse, get stuck in them. 

Fortunately, Unity provides two different ways to tell objects not to collide with specific other objects.

The first way is to simply attach a script to objects with colliders that specifies which other objects they shouldn't collide with. Doing that looks something like this:

using UnityEngine;
using System.Collections;

// Tells the physics engine to skip collisions between this object's
// collider and one other, specified collider.
public class IgnoreOtherObject : MonoBehaviour
{
    public Collider objectToIgnore;

    void Start()
    {
        // 'collider' is Unity's shorthand for GetComponent<Collider>()
        Physics.IgnoreCollision(objectToIgnore, collider);
    }
}

This solves the problem, but it's tedious, fragile, and a pain in the ass to maintain. There are nineteen collision meshes in this model, each of which is a separate object that has to be told to ignore each of the eighteen other collision objects. Any change to the collision model means updating all of the scripts.

There are ways to do this in code that would be less tedious than writing nineteen different scripts, of course. We could, for example, have a single shared script that iterates over all the bones in the character. The script could tell the physics engine to ignore all the other bones that make up the character. That's way better than hardcoding eighteen objects into nineteen different scripts, but it's still a bit tedious because it has to be attached to all of the bones individually. It's also unnecessary.
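One variant of that idea, sketched below, attaches a single script to the character's root and pairs up every collider beneath it. This is my own guess at what such a shared script might look like, not the approach I ended up using.

```csharp
using UnityEngine;

// Sketch: attached once to the character's root, this tells the physics
// engine to ignore collisions between every pair of bone colliders.
public class IgnoreSelfCollisions : MonoBehaviour
{
    void Start()
    {
        Collider[] bones = GetComponentsInChildren<Collider>();
        for (int i = 0; i < bones.Length; i++)
        {
            for (int j = i + 1; j < bones.Length; j++)
            {
                Physics.IgnoreCollision(bones[i], bones[j]);
            }
        }
    }
}
```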

There's a better way: Unity Layers.

Any object in a scene can be assigned to a layer and the physics engine can be set to have layers interact — or not interact — with any other layer. I created a new layer called Player Collision and assigned all the player character's bones to that layer:



Once they were all assigned to the same layer, turning off collisions between objects on that layer was a simple matter of going into the scene's Physics Settings:


By unchecking the box where the row is Player Collision and the column is also Player Collision, I've effectively told Unity to ignore any collision between two objects that are both assigned to the Player Collision layer, but to continue calculating collisions between those objects and all other objects in the world. In other words, the physics engine ignores when one part of the player collides with another part of the player, which is exactly the behavior we need.
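If you'd rather not rely on the editor checkbox, the same setting can be made from code with Physics.IgnoreLayerCollision; this sketch uses the Player Collision layer name from above.

```csharp
using UnityEngine;

// Sketch: the code equivalent of unchecking the layer-collision box
// for the Player Collision row and column.
public class DisableSelfLayerCollisions : MonoBehaviour
{
    void Awake()
    {
        int layer = LayerMask.NameToLayer("Player Collision");
        Physics.IgnoreLayerCollision(layer, layer, true);
    }
}
```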

That's enough for today's installment. In the next episode, I'll get our protagonist moving around and interacting with the world.

Next Up: Prototyping Player Game Mechanics, Episode II
Previous: Thinking About Characters

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 21 August 2013 | 11:55 am

Turncoat Dev Diary: Prototyping Player Game Mechanics, Episode II

(This is part of a series. The first post in the series is here.)

As I started working on the basics of movement, I decided that my prototyping model needed a little something extra. I want my character controller to support "physics bones", which are bones that aren't pre-animated, but instead are controlled by the physics engine. You might use physics bones if a character has a pony tail, for example, so that the pony tail moves naturally. If a character has an item hanging from their belt, you might put a physics bone on it to make the item bounce around as the character walks. You can also use physics bones to fake cloth and hair physics. The results aren't as good as you get from true physical simulations, but those are often too processor intensive to do in real time, especially on mobile devices. You can get surprisingly good results by faking more complex simulations using a number of constrained physics bones.

In order to ensure physics bones work with my controller, I went back and added a pony tail to my prototyping character. Well, more like a pony-spike-out-the-back-of-the-head, but it'll work.


Because these bones are not standard parts of a bipedal character, they're not part of any of the animations I have. By default, bones that aren't part of an animation just move along with the nearest ancestor bone that is part of the animation. In our case, that's the head bone, and the result is less than realistic.


To get the pony tail to behave the way I want it to behave, I have to add colliders to each of the new bones and make them all part of the physics system. It was a little tedious getting the colliders set up for the pony tail. I have no idea why Unity places colliders, by default, at the head of bones rather than at the mid-point of the bone. While it might make some sense on paper since the head of the bone is its pivot point, in practice, you're almost always going to want the collider to be placed so it covers the length of the bone. If the colliders defaulted to the midpoint of the bone (halfway between the tail and head), it would take a lot less time to set up physics bones and skeleton colliders.


Once I placed all the colliders and added rigid body physics to each of the bones, I fired it up to try it out. The bones were definitely affected by gravity, just not in the way I wanted; the pony tail fell right to the ground and bounced around on the floor because I forgot to connect the bones to each other with joints.


Joint components are Unity's way of telling the physics engine that certain things should stick together. There are several types of joints in Unity, but the one I want here is called a Character Joint. Adding a character joint to each of the pony tail bones will keep them connected to each other, but will allow them to swing and twist within set limits. After some playing around, I came up with these values for my pony tail.
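In case it's useful, here's roughly what configuring one of those joints from code looks like. The limit values below are placeholders, not the values I settled on.

```csharp
using UnityEngine;

// Sketch: connect a pony tail bone to its parent bone with a
// CharacterJoint so it can swing and twist within set limits.
public class AttachPonyTailJoint : MonoBehaviour
{
    public Rigidbody parentBone;  // the bone above this one in the chain

    void Start()
    {
        CharacterJoint joint = gameObject.AddComponent<CharacterJoint>();
        joint.connectedBody = parentBone;

        // Placeholder limits; real values need hand tuning
        SoftJointLimit swing = new SoftJointLimit();
        swing.limit = 30f;
        joint.swing1Limit = swing;
        joint.swing2Limit = swing;

        SoftJointLimit low = new SoftJointLimit();
        low.limit = -10f;
        joint.lowTwistLimit = low;

        SoftJointLimit high = new SoftJointLimit();
        high.limit = 10f;
        joint.highTwistLimit = high;
    }
}
```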


If this were a real model, and not just a prototyping one, I'd probably spend a lot more time tweaking these values to get them just right. Since all I really care about right now is whether parts of the character can interact with the physics engine properly, it doesn't make sense to spend a lot of time tweaking these parameters. Testing it out, it looks okay. Or, at least, as okay as I can tell without being able to move the character around.


I guess I know what I need to do next: basic movement controls. I'm eventually going to work on touch controls, since this will be released on iPad, but for now, I'm just going to use keyboard and joystick to get the animations working. Translating these types of controls to touch is, I think, going to be a fairly time-consuming task, so I want to tackle it separately.

Unity's Mecanim system lets you automatically transition between different animations or even combine animations by building a state machine. You can pass different parameters into this state machine from your code and set it up to transition between various animations based on the values you pass in.

I started with a state machine provided by Mixamo with one of the motion packs that I bought from the Unity Asset Store, but I had to fix quite a few things to get it functioning to my satisfaction, then I expanded on it to get the basics of movement covered. This state machine includes stand, turn in place, walk forward, walk backward, run forward, run backward, jump in place, and jump while walking.

Here's what it looks like in Unity's editor:


I'll have to add stealth movement, crawling, and taking cover to my state machine later, but I want to get these basics working well before I start on the more complex parts. Down in the lower left corner, you can see the parameters that can be passed into my state machine. Here's how a parameter gets passed in from code:

Animator anim = GetComponent<Animator>();
float vertical = Input.GetAxis("Vertical");
anim.SetFloat("Speed", vertical);

Pretty straightforward. Just grab a reference to the animator component that represents this state machine and use SetFloat, SetBool, SetInteger, etc. to pass in whatever value is needed.

The orange Idle state in the middle of the picture above is orange because it's the default state. That means that when the level starts, the character will immediately start animating using the Idle animation, which is just a short looping animation of standing still. Using an animation for the idle state is more natural looking than just having the character stay frozen in place when not moving.

Every white line in the state machine is a transition that defines when the character should switch from one animation to another. For example, if the Speed parameter becomes greater than 0.1, the character will automatically transition from the Idle animation to Forward Locomotion, because that's the condition I've specified for it:


This ability to move between animations based on parameters is pretty neat in and of itself, but the system is even more powerful than that. Forward Locomotion isn't actually an animation the way Idle is. It's what Unity calls a Blend Tree: a grouping of animations that can be interpolated together, based on input parameters, to create new animations. This allows you to take, for example, a walking animation and a running animation and blend them together to create a fast-walk animation. Here's the blend tree called Forward Locomotion that gets fired when a character starts moving forward:


This blend tree takes two parameters - Run and Direction. The Direction parameter goes from -1.0, representing turning as far left as possible, to 1.0, which represents turning as far to the right as possible.  Run, similarly, goes from -1.0 representing full backwards motion, to 1.0, which is full forward motion. This particular tree will only ever get Run values between 0.0 and 1.0, however, because there's a separate tree for handling backwards movement.

This tree will automatically create new animations by mixing the run and walk animations together based on the value of the Run parameter. If Run is 0.0, the character will walk. If Run is 1.0, they will run. For any value in between, it will mix the two animations to create something between a run and a walk. Similarly, the tree will interpolate between the walking-forward and walking-left-or-right animations based on the Direction value so that the character turns left or right naturally. All the hard work of interpolating between multiple animations is done for you: just pass the two parameters in and everything else gets handled based on the way the state machine is set up.

It's a pretty great system, and I'm mostly happy with the basic locomotion as it exists right now. There are a few things I don't like, however.

The first is that the arc left and arc right animations I have from Mixamo all seem to have a problem. I'm not exactly sure what the root cause of the problem is because I don't have access to the source animations, but whenever you transition into running or walking all the way to the left or right, the camera starts moving in a jerky, accordion-like manner that just looks terrible. It's not as noticeable with the walk animations, but when you start running, it's really noticeable.

But that's not a problem with the controller or state machine, it's a problem with the animations themselves, and I can replace those at any point. I'm not going to worry about this problem for now. I'm just going to make a note to find replacement animations that don't have this problem or else to buy the full version of these animations and fix the problem myself. If I can't do either of those, then I'll get rid of the Direction part of the blend tree and turn the character left and right in code, which is the traditional way of turning a character that uses an animated walk cycle.

The other problem that I notice is that there's no way to do a standing forward jump. The state machine jumps forward if you're already moving forward and jumps straight up in place otherwise. If the forward button is pressed (or the joystick is pushed forward), but you haven't hit that 0.1 Speed threshold yet, you jump straight up, which doesn't feel like correct behavior to me.
Are you wondering why there's both Speed and Run? Speed represents the actual value taken from the Y-axis of the joystick (if not using a joystick, Speed is 1.0 if the forward button is pressed, 0.0 otherwise). Run, on the other hand, is a calculated acceleration value that builds up over time based on Speed. Speed is used to determine when to transition to forward or backward movement, but Run is used to interpolate between the walk and run animations in a natural manner.
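If it helps, here's a guess at how those two parameters might be fed to the animator each frame. The acceleration constant and the easing function are my assumptions; the post doesn't show this code.

```csharp
using UnityEngine;

// Sketch: Speed comes straight from input each frame, while Run
// accelerates toward Speed over time to blend walk into run smoothly.
public class LocomotionParameters : MonoBehaviour
{
    public float acceleration = 0.5f;  // made-up tuning constant
    private float run = 0f;
    private Animator anim;

    void Start()
    {
        anim = GetComponent<Animator>();
    }

    void Update()
    {
        float speed = Input.GetAxis("Vertical");
        // ease run toward the current speed value over time
        run = Mathf.MoveTowards(run, speed, acceleration * Time.deltaTime);
        anim.SetFloat("Speed", speed);
        anim.SetFloat("Run", run);
    }
}
```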
My first thought for fixing this was to simply change the transition to jump forward when Speed is greater than 0.0 rather than 0.1, but I realized that the 0.1 threshold wasn't the problem. The problem is that there's currently no transition that goes directly from Idle to Running Jump. To fix this, I needed to add a new transition from Idle to Running Jump, and then make sure the transitions to the jump states are never ambiguous. I also needed to add a transition back from Running Jump to Idle for when the character's no longer moving forward; otherwise there's a little stutter step as the state machine goes first back to Forward Locomotion and then to Idle. Here's what my state machine looked like after tweaking the basic movement:


Happy with the basic movements, I decided to take my character out for a stroll around the level. She knocked over some barrels, which told me she was interacting with the physics engine appropriately, but when I got to the stairs or the ramp, she wouldn't climb.

Making my character part of the physics system isn't enough to get her to walk up stairs or slopes. The physics engine will keep her from walking through walls, but it won't handle changes in elevation. Dealing with that requires some logic.

There are two possible ways to fix this. I can write code to handle the elevation changes needed to properly traverse stairs and slopes, or I can use the provided CharacterController class, which already contains code to deal with slopes and stairs.
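For context, using the provided class means driving a CharacterController component with its Move or SimpleMove methods instead of rigid-body forces; its slopeLimit and stepOffset settings govern what it can climb. A minimal sketch, with a made-up walk speed:

```csharp
using UnityEngine;

// Sketch: driving a CharacterController with SimpleMove, which applies
// gravity and respects the controller's slopeLimit and stepOffset.
[RequireComponent(typeof(CharacterController))]
public class SimpleMover : MonoBehaviour
{
    public float walkSpeed = 2f;  // placeholder value
    private CharacterController controller;

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        Vector3 forward = transform.forward * Input.GetAxis("Vertical");
        controller.SimpleMove(forward * walkSpeed);
    }
}
```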

My favorite kind of code is the kind of code you don't have to write at all, so I decided to try using the provided class before setting off to write my own physics-friendly replacement. Unfortunately, this particular class is the one I talked about in the last post that turns your character into a giant floating pill in the physics engine. Once I added a CharacterController component to my character, it instantly started doing wacky things like walking on air.

I added the character to the same Player Collision layer as the bone colliders. Because that layer is set not to interact with itself, that stopped the problem. Suddenly, my character could walk up stairs, climb slopes, and just generally move around the level pretty naturally. But there were new problems. For one thing, the physics bones in the pony tail weren't bouncing off the character's back; they were passing right through it. For another, I was back to interacting with physics objects, like the barrels on my test level, as if I were a giant valium pill instead of a bipedal creature.

I tried moving the physics bones in the pony tail to a different layer. That caused them to begin bouncing off the other bone colliders, which is good, but it also caused them to interact with the character controller, which is bad, because it puts us back to crazy stuttery walk-on-air time.

All is not lost, though. Taking a step back to think about what should interact with what, I realized there was a way to make this work. I don't want any of the bone colliders - neither the physics bones, nor the regular ones - to interact with the CharacterController because that causes weird, undesirable behavior. I also don't want the CharacterController interacting with props or moveable items because I went through all the effort of setting up bone colliders to do that. I do, however, want the CharacterController to interact with the terrain, but I don't really want all the bone colliders to do that because it would be redundant.

That means all I need to do to get this working is to add a few more layers. If I move the CharacterController to a different layer than the bone colliders and put the physics bones back on the same layer as the rest of the bone colliders, then create separate layers for the terrain and moveable objects, I can then use the physics preferences to make everything work correctly.
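Moving a whole bone hierarchy onto a layer is easy to script; here's a sketch. The Player Collision layer name comes from this post, but the controller layer name is my invention, since the post doesn't name the new layers.

```csharp
using UnityEngine;

// Sketch: put every bone under the character onto the bone-collider
// layer, then move the root (with its CharacterController) to its own.
public class AssignCollisionLayers : MonoBehaviour
{
    void Start()
    {
        int boneLayer = LayerMask.NameToLayer("Player Collision");
        foreach (Transform child in GetComponentsInChildren<Transform>())
        {
            child.gameObject.layer = boneLayer;
        }
        // Set the root last; GetComponentsInChildren includes this object
        // "Player Controller" is a hypothetical layer name
        gameObject.layer = LayerMask.NameToLayer("Player Controller");
    }
}
```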


With these settings, terrain is handled by the character controller, props are handled by the bone colliders and ne'er the two shall meet. Props and terrain still interact with each other, which is necessary because the props would fall through the floor if they didn't.

This seems like it should work. Let's see if it does.


Ahhhh....


That's not a bad start as far as I'm concerned. Time to start working on more advanced movement, like taking cover, as well as switching between first and third person perspective. But not today.

Next Up: Prototyping Player Game Mechanics, Episode III
Previous: Prototyping Player Game Mechanics, Episode I

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 21 August 2013 | 11:44 am

Turncoat Dev Diary: Thinking About Characters

(This is part of a series. The first post in the series is here.)

The reason our original Turncoat concept was going to cost so much to create is that the scope of the concept was just massive. The squad of soldiers the story follows consists of thirty-four soldiers. They are stationed on an enormous starship with over eight thousand people on board, which is part of a battle group containing about fifty other ships. In addition to the squad, there are many other characters that play into the story, including the Admiral of the fleet and her staff, the ship's captain and bridge crew, the ship's chief medical officer and sick bay staff, the "regular" complement of shipboard marines, and several command officers stationed back at Earth. Then, there's the Seditionists.

There are a lot of fully developed characters required to tell that story and a lot more that would need to be on screen at times in order to make the universe feel as if it was fully populated. On board ships and on an overpopulated planet, space is tight. To convey a proper sense of claustrophobia, we need lots of characters.

Characters are expensive to create.

Characters are also really important to the type of games we want to make. A casual game relies more on the mechanics of the game to generate interest, but a cinematic game puts the story — and thus, the characters — front and center, on equal footing with the gameplay. But a fully detailed, sculpted, rigged, and facially animatable character like the ones you see in AAA console games might take a single artist a full month to create. We toyed with several ideas for reducing the cost of creating all these characters, such as doing cut scenes with comic-book-like motion graphics instead of fully animated and lip-synched characters, and using a lot of close-cropped shots, but in the end we decided that we couldn't reduce the cost enough without hurting our vision of what the game should be.

When we decided to put the main story on the back burner and start working on the Turncoat Escape game, one of our guiding mandates was to keep the scope of the game down. Way down. That meant having only a single level at first. It also meant keeping the number of characters as small as we could while still making the game work.

That's not an easy mandate to stick to. Stories are about people. If you can get your audience to buy into your characters… to empathize with them and care about what happens to them, you can get those viewers to overlook a lot of other things. But you need that emotional attachment to the characters if your goal is to tell a story.

Or, to put it another way: "It’s the characters, stupid."

There's obviously one character in the Escape game that we just can't do without: the protagonist. The player has to be somebody in the game. There has to be somebody who needs to escape the facility.

In our earliest version of the original Turncoat story, we had one character, nicknamed "Rook", who we envisioned as a customizable character. The player would see events unfolding through this character's eyes, and the player would be able to choose what that character looked like. They could decide what Rook's real name was, whether Rook was male or female, and they could select from a range of skin tones and adjust facial and body proportions and hairstyle.

As the story evolved and grew in complexity, Rook morphed into a specific character with defined traits. I still liked the original idea, but it eventually became clear that the story needed us to know more about Rook.

I like games that let you decide what your character looks like, though. Certainly, there are times when it's important to control the narrative and specify exactly what the main character looks like. Hell, we did exactly that with Rook in order to make our original story work. But, there's also value in making a game welcoming. There's value in telling the player that they can be whoever they want to be in the game, and that whatever they want to be, is okay.

The hero of the story doesn't have to look any certain way.

While we chose to abandon the character customization idea in the larger Turncoat game, the reasons we did that don't apply to the Escape game. This story is less complex. Making the main character customizable won't interfere with our ability to tell the story.

So, that's what we're going to do.

It will add some work for us, which runs contrary to our mandate to keep scope down, but we're still going to do it. It means we need at least two base models for the protagonist - a male and a female - and we need to put some effort into allowing them to be customized. We'll want the player to be able to change the skin tone, body shape, and the facial features in addition to letting them select gender.

So, what other characters do we absolutely need to tell this story? There have to be other people in this facility. Well, I guess there don't have to be other people. Through both Portal games, you never see another living person, and those games work extraordinarily well. But we're not making Portal; our story and our game rely on there being more people.

We need guards and inmates. Neither the guards nor the inmates will feature prominently in any cutscenes, so we don't need to make them as detailed as the player model, but we don't want them all to look exactly the same, either. Similar to character customization, we can randomize the features of the guards and inmates to make it feel like there are many different guards and many different inmates in the facility.
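Randomizing a background character could be as simple as nudging a few rendering values at spawn time. This sketch is purely illustrative: the tint ranges are invented, and it assumes the mesh has blend shapes to vary.

```csharp
using UnityEngine;

// Purely illustrative sketch: randomize a character's skin tint and one
// blend shape. The ranges and blend shape index are invented.
public class RandomizeAppearance : MonoBehaviour
{
    void Start()
    {
        SkinnedMeshRenderer body = GetComponentInChildren<SkinnedMeshRenderer>();
        if (body == null) return;

        // Vary the material tint slightly between instances
        float tone = Random.Range(0.6f, 1.0f);
        body.material.color = new Color(tone, tone * 0.9f, tone * 0.8f);

        // Vary a facial blend shape, if the mesh has any
        if (body.sharedMesh.blendShapeCount > 0)
        {
            body.SetBlendShapeWeight(0, Random.Range(0f, 100f));
        }
    }
}
```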

We also need a Guard Supervisor. The Guard Supervisor will have a different uniform from the rest of the guards as well as a defined non-random appearance; this character will always look the same every time you play the game. Some of the ways of escaping will require you to either find or avoid the Guard Supervisor, who is more observant and generally more competent than the regular guards.

Finally, we have the "white coats". The white coats are not part of the regular game level. They're never in the cell block or the guard areas. They're only inside "the theater" - that part of the map that the player can see from some of the ventilation ducts, but can't directly interact with. Exactly what it is that the white coats are doing in this facility is one of the things the player will be able to discover if they explore. They won't need to know that information in order to escape, but they'll have a better understanding of why they need to escape and why they're in the facility in the first place if they do.

One thought I had for the white coats was that since the vents are mostly down low at floor level, we might be able to make it so you simply never see their faces. As long as the characters aren't too far away from the vent, the angle should hide their faces from view. This might add to the mystery of these characters, and has the added bonus that we won't have to fully model or animate the white coats' faces. That should reduce the amount of modeling effort.

So, our tentative list of characters right now is:

  • Configurable female protagonist
  • Configurable male protagonist
  • Randomizable female guard
  • Randomizable male guard
  • Guard Supervisor
  • Randomizable female inmate
  • Randomizable male inmate
  • White Coat Male (no face rigging)
  • White Coat Female (no face rigging)
That seems doable. It's perhaps a little longer a list than would be ideal, but as long as we don't go too crazy with the ability to customize / randomize, we should be okay, especially if we try to reüse parts, like faces and hands, between the guards, inmates, and white coats.

While we're going to create some of the characters in-house, I think we've probably got more characters than we can do without some outside help, so if you know any good character modelers with game experience looking for a little freelance work, have them drop me a line at jeff at martiancraft dot com.

Next Up: Prototyping Basic Game Mechanics Part 1
Previous: Experiments in Environment Creation

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 19 August 2013 | 10:39 am

Turncoat Dev Diary: Table of Contents

A few people have asked for an index to the Turncoat Dev Diary posts, so here it is. I'll try to keep this updated as I post new entries in the series.


If you're new to these diaries and are not sure where to start, the first one is probably your best bet. Each entry is linked to the next and previous entry, so navigating between them is pretty straightforward.

  1. Introduction: The Turncoat Dev Diary
  2. Origin of the Universe
  3. Life in the Turncoat Universe
  4. Finding a Smaller Game in the Backstory
  5. Platform Decisions
  6. Deciding on Tools and Frameworks
  7. Just Getting Something Running
  8. Experiments in Environment Creation
  9. Thinking About Characters

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 15 August 2013 | 12:04 pm

New Ventures: The Turncoat Dev Diary

At MartianCraft, we've done a lot of work over the past few years under what are called "no-publicity" clauses. That means we can't talk about those projects or put them in our portfolio. In fact, most of the really interesting work we did our first couple years was done that way. That was one of several reasons why, about a year ago, we chose to create a products division. By developing our own software, we're also filling out our portfolio with apps that we can show to prospective clients. Of course, that's not the only reason we decided to write our own software, but the fact that we couldn't talk about our most interesting projects was certainly a factor.

Getting Briefs out the door was a long slog, but it has been a great experience for us. The app has been well received and is being used regularly by an active and engaged community of users. Development has continued unabated since the 1.0 release, with new features being planned and actively developed.

Part of what made Briefs' development so hard was keeping the very existence of the project under wraps until a few months before release. Spending nearly a year working on something we couldn't talk about really took its toll on our team.

But Briefs wasn't the only product idea we came up with last year. It's not even the only idea that we began to work on when we decided to create our own products.

We've had a second skunkworks project going from the time Briefs started, but on a much slower burn. For over a year, we've been working on ideas for a series of games. The project has been so secret that most of our staff at MartianCraft know little more about the project than the fact that it exists.

At first, there wasn't a lot of day-to-day work happening on this other project. It was mostly just Rob and me brainstorming ideas for what to do after Briefs. Then it morphed into something that Rob and I would talk about in the evenings, on the weekends or when we just needed a break.

And then it took on a life of its own.

It turns out that we're both interested in games as a storytelling medium and we both wanted our next project to be a game, preferably one that provides an immersive, cinematic experience for the player. We also realized that we already have an awful lot of the talent in-house needed to create these kinds of games. While we've done a small amount of graphics and game work for clients over the years, we both wanted to create something that we controlled… something that was completely ours.

Over several months, we created a universe and populated it with dozens of characters. We explored the state of technology and the politics of the universe and mapped out a hundred years or so of history. We came up with ideas for several interconnected games set in the universe and wrote scripts for game cinematics. We also wrote scripts and stories that weren't tied to a specific game, but were written to help us get to know our characters and our universe better.

After a year, we had several hundred pages of back story and scripts and we began to realize we would need an AAA game budget to fully implement our vision. We had no way to fund that size of an undertaking without outside investment, so we began looking to carve out a smaller, standalone game that we could bootstrap ourselves, just as we did with Briefs.

Eventually, we picked a piece of our fictional history suited to making a good immersive game that was smaller in scope and not directly connected with the other games we'd mapped out. Then we decided to turn that idea into the game equivalent of a movie short: Polish the hell out of a short game and release it for free.

With Briefs and our non-NDA contracting work, we now can demonstrate the ability to create — soup to nuts — a wide variety of Mac, iOS, web, and Android apps. But when it comes to games, a market that interests us, we don't have anything in our portfolio to show.

We want that to change.

Here's the thing, though: Unlike Briefs, we're not going to develop this project under a veil of secrecy. In fact, we're going the exact opposite route. I'm going to share a lot of the process with you right here as it happens. I'll be blogging at least once a week, and often more frequently, until we ship. I'm going to talk about how we plan and design the app, how we create the assets used in the game. I'm going to show in-progress screenshots and concept art, and even share code. I'm going to talk about the tools we're using and why we chose them, and I will even admit to the mistakes we're going to make along the way, because we will make mistakes along the way.

There will be things about the game we're going to keep secret, but only so you can experience it as it was intended, spoiler-free. And, as long as the spam doesn't get too bad, I'm going to enable comments on these posts and will answer any questions people want to ask about the undertaking.

Wish us luck.

Next Up: Origin of the Universe


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 15 August 2013 | 5:48 am

Turncoat Dev Diary: Just Getting Something Running

(This is part of a series. The first post in the series is here.)

Once I had made the list of tasks that had to be accomplished in order to get the game created, my brain started going in a lot of different directions all at once. I kept flitting between thinking about different tasks, trying to judge whether I could do each one myself, whether one of our existing MartianCraft developers or designers could do it, or whether I'd need to go out-of-house to get it done at the level of quality and finish I wanted. A few items, like sound effects and music, I felt comfortable pushing to the back burner for now, but I felt a need to do an informal triage of many of the tasks. Some of them would require finding either freelancers or new hires, and that adds time. I wanted at least some idea of what kind of outside talent I would need.

After about a half day of bouncing between the various tasks and just generally being a disorganized mess, I realized I was putting the cart before the horse. I needed to just get something running to make sure the core idea was even worth pursuing. The greatest art and sound can't rescue a game that's not enjoyable. So, I pushed aside all my other concerns and thoughts to try and get a simple prototype up and running.

Fortunately, with Unity, that can be done pretty quickly. It took me less than a day to get a prototype level built so that I could navigate it. Today's dev diary is about that process.

Unity doesn't have a level editor, per se. It has a scene editor with some basic primitives and some really good terrain tools. Levels are generally built in the scene editor using components created in an external 3D program, unless it's an outdoor environment, in which case you can often do everything right in Unity.

From the time I first thought of the escape game concept, I had some idea of how I thought the level should look and about its overall layout. Now, that layout wasn't really driven by gameplay concerns, but rather by storytelling concerns. I had certain information I wanted to get to the player, and some information that I wanted to make available to more adventurous players who explored beyond what was strictly necessary to escape. I envisioned the basic map as looking something like this:

The black area I envisioned as a somewhat standard prison cell block, although probably a little more futuristic looking than a prison you might find today, and the light blue boxes I thought of as solitary confinement cells, some of which would be used as starting points for the level. The darker blue areas would be administrative areas: the guard break room and the supervisor's office. Marked in red is a series of ducts that can be used by some characters (those who aren't too big) to hide from guards and navigate around the map where the guards can't see them. The green area I think of as "the theater." The ventilation grates in those rooms don't open and the player can't actually go into them, but if you are in the ducts and get close to the grates in any of those rooms, it will trigger in-game animation sequences (and not necessarily always the same one). These sequences will hint at additional ways of escaping the level and also fill in more background information about the universe and why the character has been imprisoned.

The goal of the game is to get to the security elevator, which is outlined in gold. It looks like a straight shot up the center from the isolation cells — and it is — but that hallway is well lit and well guarded. Plus, you can never go directly to the elevator. You always have to do something first before you can go there. You might have to disable a force field, find a key, or restore power to the elevator before going to it makes any sense. Once you've done that, then you need to get past the various guards without being detected in order to escape.

Once I saw the map laid out, I realized it wasn't enough. There needed to be more than one way to get to the elevator so that we could give the game some amount of replayability and also give the player more stuff to explore. I felt that there needed to be more rooms outside of the main prison block to give the player places to hide and explore.

I came up with the idea of adding on an "Intake Processing Center". The existing entrance to the elevator would be the one that guards and other staff used, but new inmates would come in a different way. They would come in through a series of rooms where their belongings were stored, mugshots were taken, and prison clothes were issued. I added on a series of rooms to the map for this purpose, as can be seen in purple below.
I originally started trying to draw out the level map old-school style. But, it wasn't really working for me. It was keeping me from thinking in three dimensions. So, I fired up Blender and started planning the map in 3D. The maps above are actually screenshots of the top orthographic view in Blender that I added some color to using Photoshop.  As you can see, it's actually a three dimensional map:
Working in 3D seemed to make sense, since the file I created can be exported right to Unity for prototyping. At least, it can if I built it right. There was only one way to find that out, though: export it from Blender and import it into Unity to try it out.

My first attempt didn't work out very well. I dropped a First Person Controller (something provided by Unity for creating first person games) onto my map so that I'd be able to navigate around the map. For the final game, I won't be able to use the provided Unity component, but it will work plenty well for letting me look around my map. 

I hit play, and saw… nothing.

Oh, right. Lights! The "real" lighting for the game will be done much later in the process, but I needed some light to see anything. I could've just turned on global ambient lighting, but that wouldn't give any shadows to judge shapes or distances. Instead, I dropped a somewhat random assortment of real time point lights onto the map. Performance won't be good, but at least I'll be able to see well enough to navigate the map. Since I'm using my dev machine to navigate around the prototype level right now, I'm not overly concerned about performance issues yet.

Once I had the lights added, I hit play again. And saw… nothing. Again.

Then I swore at my computer.

Fortunately, I realized what the problem was before the swear words were even completely out of my mouth. Blender (and, I'd imagine, most 3D programs) assumes that objects are going to be viewed from the outside, not from the inside. While Blender supports two-sided polygons, Unity doesn't, so when designing interior architecture, you have to make sure your objects are built, essentially, inside out, with the face normals — which mark the forward or visible direction of the polygon — pointing inwards.

You can see in the Blender screenshot below that the normals (the light blue lines) are facing outward. Most 3D objects get created this way so they can be seen from the outside when you're using one sided polygons.

Outwards, in our case, is bad. Outward pointing normals mean you can see this room from the outside, but not when you're standing inside of it. Fixing it was a simple matter of selecting each room in Blender, going into Edit mode, selecting all faces and then hitting the "Normals / Flip Direction" button in the left toolbar.
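For the curious, the math behind "Flip Direction" is just winding order: a face's normal is computed from the cross product of two of its edge vectors, so reversing the order of the face's vertices reverses the normal. Here's a small illustrative sketch in Python (the `face_normal` helper is mine, not Blender's API):

```python
def face_normal(v0, v1, v2):
    """Right-handed normal of a triangle: cross product of two edge vectors."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

# A triangle wound counter-clockwise (viewed from +Z) faces "up"...
tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(face_normal(*tri))            # normal points along +Z: (0, 0, 1)

# ...and reversing the winding order, which is effectively what
# "Flip Direction" does, makes the same face point the other way.
print(face_normal(*reversed(tri)))  # normal points along -Z: (0, 0, -1)
```

This is also why flipping is cheap: no geometry moves, only the stored vertex order (and the cached normals) change.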


Once I fixed the normals, I re-exported, went back into Unity, waited for the level to re-import, then hit play. This time, I actually got something:


Well, yay! I've got a map and I can even walk around the level. I haven't written any code yet, but I can actually navigate from the user's perspective and get a feel for the level. I'm really liking my decision not to try and write my own game engine right now.

I found it kind of hard to maneuver, though. It is soooo white in here that it's hard for the eye to grab onto anything, especially when you're not near a point light. Even for a throwaway prototype, I needed some textures to give the eye something to latch onto, so I pulled down a few tileable images from CGTextures.

Using repeating textures won't cut it for the final game. That was the state of the art a decade or two ago, but not today. The human eye is just too damn good at picking out patterns for us to rely on repeating images for very much. But, for testing, getting any kind of texture on the floor and ceiling was going to make a big difference. I also made the cells a different color than the hallways, which helped quite a bit as well.


It's not going to win any awards for level design or aesthetics, but it's a starting point. I can walk around, find the more glaring problems, and get a feel for how the layout will work. The first thing I did was to make sure I could get everywhere I wanted the player to be able to go. I found a few mistakes along the way - polygons that should've been deleted to make doors and normals that didn't get flipped. I also found a few gaps between rooms that were noticeable. Those were all pretty easy to fix in Blender, which I did.

The one thing that did strike me, as I walked around the map, was that the level is too small. Probably a lot too small. At a walking speed that feels natural, you can navigate the entire map fairly quickly. Even accounting for the fact that you'll be sneaking and avoiding guards much of the time, it's still too small. I'll have to go in later and make it bigger. But this is enough for an early prototype - to test out game mechanics and bad guy AI.

Now, if I were focused, this is the point where I would start working on some actual game mechanics. I'd drop in a few bad guys with simple AI so I could start actually playing the prototype and figuring out what works, if anything. But, I'm not focused. I'm the kind of person who…

Ooh, look! A shiny object…

What was I saying? Something something, focused… oh, right. There are some tasks that I know we're going to have to go out of house for, and some that I know we can do in-house. Then, there are the ones that I'm just not sure about yet. One task that I think we can handle in-house with existing talent, but don't know for sure, is painting and lighting the environment. So, I allowed myself to be distracted from the prototype to do a quick paint-up of one room on the level. That room will probably need to get re-done at least once before we ship, but it seemed a worthy experiment and something that would be kind of fun. Plus, it would help me get familiar with some Unity functionality that I've not used before (light maps) and some new 3D painting improvements in the latest version of Blender, and it would just generally help with the decisions that need to be made about the overall aesthetic feel of the game. I'm a visual person and sometimes need to see things to know whether I like them, because everything always looks good in my imagination.


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 15 August 2013 | 5:47 am

Turncoat Dev Diary: Experiments in Environment Creation

(This is part of a series. The first post in the series is here.)

Today, I'm going to talk about a little side quest I took while prototyping the game: painting and lighting one of the rooms. When I initially planned the level out in Blender, I did it almost entirely with cubes. I scaled and extruded and adjusted vertices, of course, but when you come down to it, it really was just a bunch of boxes. Of course, a lot of buildings in real life are just assemblages of boxes if you look at them from far enough away, but to try and make an environment that feels real, I knew I needed more than just boxes.

I selected one of the smaller rooms to paint up - a sort of locker/shower room that's part of the intake processing area. I chose it because it's small and because I should be able to texture it quickly using readily available stock textures. While the game is going to be set in the future, I want to base the look of it on the real world, so I started looking for reference images as well as tileable images suited to how I was picturing this room in my head. It's funny the kind of little details you don't really notice until you try to recreate something, though. In most cases, for example, doors just don't look right without some kind of molding or frame. It took me surprisingly long to figure out what it was about the doors in my prototype that bothered me. Just cutting a hole in a wall is perfectly functional, but it doesn't feel right, because in the real world, doors almost always have a visible frame around them.

Another thing I noticed is that when you start looking closely, even at a relatively clean environment, there are marks, scuffs, and smudges on nearly everything. They're often subtle, but they're almost always there. Now, a lot of that subtle smudge detail in a normal room would fall below the fidelity of the medium, but in this case, I want to exaggerate it and make everything extra grimy. I want to convey not just that this is a depressing, unfortunate place to be, but I almost want a sense of neglect and even foreboding. I want it to feel like the people running the facility don't care about this particular place or the people it contains. Even before props or characters are added or dialogue is recorded, I want this level to convey a sense of desolation. Things are dark, dirty, and just generally unpleasant. This is an out-and-out bad place; escape is imperative.

After spending a few hours with Google's image search looking for inspiration and reference, I started adding some details to the cubes that made up the room I had chosen to paint. I cut out a window between the two rooms, added moldings to the doors and window. I cut out drains in the floor of the shower area and cut vents in the wall. I added planes to hold some signage and also for the ceiling lights.

I really don't know how much of this detail will survive through to the final version of the game, and it really doesn't matter. The goal here is to explore the process and to try and get a feel for whether I need to bring in a dedicated environment artist. If the actual work gets thrown out, that's okay. That's the nature of prototyping.

Once I had the details modeled, I started mapping textures from CGTextures, the Blender Texture CD, and a few other sources onto the room's surfaces. I did this to give me a starting point. It's easier to work from something rather than painting on a completely blank canvas.

Once I had all the surfaces mapped to textures, I "baked" them to a fresh texture map. Baking the textures to a single image will allow me to paint right on the 3D model directly using Blender's paint tools. Here is a reduced size version of the baked map for the shower room (left), along with how the room looks in paint mode in Blender (right):


It's certainly more realistic than my earlier white-walled prototype. Firing up Unity, this is what it looks like now:


It's definitely better, but it still falls way short of feeling real and it's way too clean and bright. On top of that, the current lighting just makes no sense. There's no visible light source, but the room is lit as if there was a light floating in the middle of the room. I'll fix the lighting later, but first, I want to work on the textures.

I went back to Blender and started to filthy the place up. The grime will help make the room feel decrepit, and it will also break up the repeating patterns from the tiled images I used, making it harder for the eye to pull them out. I loaded up an assortment of dirt, grime, and grunge images to use as brushes and started painting on the room's textures. It's a little weird painting on the inside of an object. Blender's Texture Paint mode works like Unity in that you can only see the "front" of polygons, which, in this case, means the inside. Like one of those optical illusions, my eye wanted to think the texture was on the outside rather than the inside, but after a few minutes I adjusted, and work started going faster.

Doing something like this really makes you appreciate the power of undo.  I hit ⌘Z a lot while working on this, and it took me a long time to find an approach that gave results I liked. 

Once I had the new details and grime added to the texture, I pulled the new maps over to Unity to try them out. I also tweaked the shaders a bit, adding a distortion map to the window and creating normal maps for the tiles to give them a tiny bit of depth.


It's definitely an improvement. I might've been a little heavy handed with the grime, but I want to wait until I've done some work on the lighting to decide whether I should pull back on it. I envision this area being much darker and poorly lit than it is now, so I want to judge the texture in the correct lighting. 

In addition to being darker, I also want some of the lights to be flickering or burnt out. A little googling around found me a great light flickering script that gives just the effect I want. In Unity, I started placing lights around the room. Each of the two sections of the room has four light fixtures, but I decided that each section would have one light that was burnt out. The shower room would additionally have a flickering light and an illuminated exit sign.
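The script I found is a Unity script, so I won't reproduce it here, but the general shape of this kind of flicker logic is simple enough to sketch (in Python, with names and ranges I made up for illustration): each frame, nudge the light's intensity by a small random amount and clamp it, so the light jitters instead of jumping between unrelated brightness levels.

```python
import random

def flicker_step(intensity, lo=0.4, hi=1.0, jitter=0.15, rng=random):
    """One frame of flicker: a random walk on intensity, clamped to [lo, hi]."""
    intensity += rng.uniform(-jitter, jitter)
    return max(lo, min(hi, intensity))

# Simulate roughly two seconds of flicker at 60 fps.
intensity = 1.0
for _ in range(120):
    intensity = flicker_step(intensity)

print(0.4 <= intensity <= 1.0)  # True: the intensity never leaves its range
```

The random-walk approach (nudging the previous value) tends to look more like a failing fluorescent tube than picking a fresh random intensity each frame, which reads as noise.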

In order to keep performance reasonable with so many lights, I decided to bake light maps for all of the lights except the one that flickers. "Baking" a light map essentially creates a texture map that stores pre-calculated lighting information, which lets you use lighting effects that are normally too processor intensive for real-time calculation. The downside is that baked lights don't cast shadows on dynamic objects (like the player, moveable props, or bad guys). For a desktop computer or console version of the game, we'll probably want to make more of the lights real-time to get realistic shadows, but for mobile, we really have to be careful to limit the number of dynamic lights to keep the framerate up.

The Beast Lightmapper that's built into Unity is quite good and not that hard to learn, but calculating lightmaps is a fairly processor intensive task. Getting all the lights configured the way I wanted ended up taking a long time. I'd tweak one or two settings, then re-bake and wait five minutes or so. Lather, rinse, repeat.

After a while, I got the light maps baked mostly to my satisfaction. I started liking the overall look much better. The grime, which was kind of overpowering in the brightly lit room, feels just about right now. It still could use some furniture and other props, and the overhead lights could use some more detail, but on the whole, it's not terrible.


I wanted the shower room to be even less well-lit than the outer room, possibly to be used as a hiding place in the final game. This is how it looks after re-doing the lighting and baking the light maps:


It's not perfect by any stretch of the imagination, but the overall effect is roughly in line with what I want to achieve for the final game, and it only took me two days of experimenting to get to this result.

Here's a short video if you want to see it in motion.


At this point, I've done enough to believe we can do the environment work in-house. I understand the technical process well enough, so if I team up with one of our designers who has a better aesthetic sense than I do, I think I can get an environment that looks the way I want it to look.

Well, enough of this side quest. I think it's time to return to prototyping. Before that, however, I want to talk a little about the game's characters.


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 15 August 2013 | 5:47 am

Turncoat Dev Diary: Origin of the Universe

Once we decided to make a game, the next thing we had to do was figure out what game we were going to make. That's a surprisingly hard thing to do, not because it's difficult to come up with ideas, but because it's hard to pick just one. We toyed with a few different genres, but both Rob and I kept coming back to science fiction, so we decided to run with that.

The first concrete idea we had for a game was inspired by one part of the book Starship Troopers - an important part that was completely absent from the horrible Verhoeven movie: the drop.



If you haven't read the book, the troopers in the book (unlike in the movie) wear massive powered armor and are dropped onto planets from orbiting spaceships. The first chapter of the book, and one of its most memorable scenes, describes the anticipation and terror experienced by the main character before a drop. On the very first page of the book, the protagonist, Johnny Rico, tells us:
I always get the shakes before a drop. I've had the injections, of course, and hypnotic preparation, and it stands to reason that I can't really be afraid. The ship's psychiatrist has checked my brain waves and asked me silly questions while I was asleep and he tells me that it isn't fear, it isn't anything important -- it's just like the trembling of an eager race horse in the starting gate. I couldn't say about that; I've never been a race horse. But the fact is: I'm scared silly, every time.
The actual drop and the combat that follows are described in quite a bit of detail by Heinlein, but I always found this prelude much more moving. A few pages later, we see Rico in combat and learn that he's a badass. But the very first thing we learn about him is that he's scared. Being Mobile Infantry makes him cool, but being afraid makes him human and relatable.

From a game perspective, the drop itself seemed to have the potential to make an interesting physics-based game and this human element of fear and anticipation is exactly the kind of thing we want to drive the cinematic portions of our games.

It was a good starting point, but there were some obvious problems. Even if we could obtain the license (unlikely), we didn't really want to work in someone else's universe. We also didn't want to create something overly derivative. Like programming, all stories build upon what came before: Nobody writes in a vacuum. But there's a huge difference between being inspired by something and flat-out copying it. The success of Zynga and GameLoft shows that creating blatantly derivative games can be a path to financial success, but that's not a path we're interested in.

So, what we did next was look at what it was about this idea and the book that appealed to us, not so much in terms of game mechanics (though we may very well revisit the idea of an orbital drop in a game at some point) as in terms of story and character.

We liked that the protagonist was relatable. He might be a badass, but he's a badass who gets scared, who has self-doubt, who makes mistakes, and who gets nervous around girls he thinks are pretty. He reminds us of real people we've known, or maybe even of ourselves a little bit.

We liked that the characters in Starship Troopers are in serious peril. Rico becomes a non-com fairly quickly because mortality is so high. An awful lot of the people Rico meets along the way die. These folks live in a dangerous universe and they have dangerous jobs. That creates an awful lot of stress, which brings out the best in some people and the worst in others. Both extremes are ripe for creating dramatic tension.

But the danger has to feel real. We don't want to succumb to the Red Shirt cheat. We don't want the viewer to feel like the main characters are safe and we don't want to be afraid, ourselves, to murder our darlings.

We also don't want to be mindless entertainment. We want to, at times, challenge the assumptions of our viewers. There's a nuance to Starship Troopers that's often lost on modern readers. Toward the end of the book, Rico tells another soldier that his native language is Tagalog. That means that the protagonist, Johnny Rico, was Filipino. Why does this matter? Well, the book was written in 1959, and Heinlein was challenging preconceived notions by getting the reader to relate to and like Johnny before revealing his ethnicity. But there was more to it than that. Heinlein had been a Navy man, and he was also challenging something very specific. At the time Starship Troopers was written and, indeed, up until 1971, Filipinos were restricted to a single Navy rating: Stewardsman. In other words, the only job enlisted Filipinos were allowed to have in the US Navy was serving food and cleaning up after meals. This, despite the fact that the U.S. Military had been officially desegregated since 1948. Heinlein wasn't in your face about the point he was making (in fact, many people completely missed it), but, as an author, he was never afraid to challenge preconceived notions.

Frankly, that's what good science fiction does: it challenges you and makes you think a little while it entertains you. A science fiction game should be no different in that regard.

Another thing we really wanted was for the person playing the game to go through a process of discovery about the universe and the characters. We wanted to hint early on at the important things that drive the story without telling the viewer outright. We wanted a certain complexity to the universe, and we wanted real motivations for the actions of the characters, even if those motivations aren't immediately obvious. We really didn't want the viewer to know much at the start about the characters, the history, or even the reason they're at war. We almost wanted a murder mystery vibe to it, only without actually having it be a murder mystery.

Once we had identified these goals, our "hook" almost wrote itself. We decided to call the series of games "Turncoat". We didn't know many of the details yet, but we knew that the game would follow the exploits of some kind of military squad. This would be a very loyal, very highly trained group with a lot of esprit de corps. It's a little cliché, but these would be the best of the best: a small elite that does what others can't or won't.

The original idea was that we would create a number of short games. Early on, each game would introduce one new major character. Before each episode, the viewer would be told or reminded of a single important fact: on a specific date, about a year after the events of the first game episode, one of the characters will betray the squad. Each game would have cinematics or cutscenes that would slowly, over the course of many episodes, fill in information leading up to that foretold act of treason.

This is the point where the project started to take on a life of its own. During the day, I would do my regular MartianCraft work. But, in the evenings and on the weekends, I'd often sit at my computer working on our Story Bible. Periodically, I'd ship what I had over to Rob, and we'd hash out things back and forth, trying to suss out what didn't work and what we should keep. After the universe was sufficiently fleshed out, we started working on individual game ideas and scripts. After more than a year, we ended up knowing an awful lot about the Turncoat Universe.

[Image: Scrivener window showing the Story Bible document]

I still really like this original idea and hope we can do it someday, but for it to work, we realized that the episodes would have to come out relatively frequently to keep players interested. That would mean dedicating a large team to the project, larger than we can currently fund without outside investment. For that kind of undertaking, we don't just need developers and designers. We need concept artists, modelers, animators, voice actors, music composers, musicians, and people coordinating all of the various efforts.

It's a big universe, though, and there are plenty of other stories we can tell in it. We just needed to find one that was self-contained, wouldn't spoil the later Turncoat story, and was small enough in scope that we could fund it ourselves.


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 9 August 2013 | 8:30 am

Turncoat Dev Diary: Life in the Turncoat Universe

Since much of the Turncoat Universe's history exists to provide the backstory for a mystery, I'm hesitant to get too specific about it, but throughout this series of blog posts, I'll be talking about the process and motivation that went into building our universe, as well as the nuts and bolts of building the actual game. For the former to make any sense, you'll need at least a little context.


The original Turncoat story is set in the late 24th century during a war being fought between Earth and its former scientific colony on Mars. People of Earth call the people of Mars "Seditionists" or "Seds"; they only rarely call them "Martians" and never "Colonists".

This is not a civil war, however. Though no Earth government ever officially recognized Mars' sovereignty, neither was there ever any attempt to prevent the secession. The two planets had been peacefully coexisting until about thirty years ago. The Mars Colony actually seceded a century earlier, during a bloody multi-nation (but fortunately, non-nuclear) conflict on Earth, a conflict that led to the eventual creation of a single, unified Earth government.

Why are Earth and the Mars Colony at war? What started it, and why can't the two planets find a peaceful resolution? Those are some of the bigger mysteries of the Turncoat Universe; the player and the major characters don't know all the forces that are driving the conflict. In fact, most of what they do know is based solely on propaganda from their own side. What the player does know for sure from early on is that the war was started by the Seditionists, who launched a surprise attack on Earth using devastating long-range weapons. Why the Seds attacked Earth is the subject of much conjecture, but the real reasons are not known on Earth. Players also find out quickly that the war has been going on for a very long time and shows no signs of ending any time soon.

In the Turncoat universe, there is no faster-than-light travel and there are only a handful of humans who have ever left the solar system. Travel between Earth and Mars is still a danger-fraught six to eight week journey depending on the relative positions of the planets. That makes the war difficult and expensive to carry on. Large-scale engagements are relatively rare, and only a small percentage of Earth citizens are involved in fighting the war. War happens "out there" and doesn't really affect the day-to-day life of average Earth folk who aren't in the military.

But "out there" is a dangerous place. Ships don't have shields like in Star Trek, nor do they have FTL capabilities. They're operating millions of kilometers away from home with very little in the way of support. When a ship takes a hit from enemy weapons, people die and parts of the ship become unusable. But these ships have a job to do, and if a ship is still able to fight, it stays "out there" and fights.

The player in Turncoat sees events from the point of view of elite Earth soldiers stationed on one of Earth's "Deep Fleet" ships. The Martians are the "bad guys". They're the mostly-nameless and mostly-faceless soldiers who are trying to kill them. At first, we don't even really see them as people. They're represented by enemy ships and mirror-faced space suits that are shooting at the player.

While players don't know much about the culture, history, or internal politics of these faceless enemies, at least at the start of our story, they do start finding out about life on 24th century Earth right from the get-go. So we had to put a lot of thought into what life would be like on our 24th century Earth.

One of the nice things about writing fiction is that you get to decide how things play out. We decided that we wanted our 24th century Earth to be, on the whole, better than now. We want to present an optimistic outlook, but one tempered by reality. There will always be problems and conflicts; we do not want to present a utopia. People are still self-interested and petty. There are still rich and poor, and there are still people willing to profit at the expense of others.

But, overall, life is better for most people than in the past thanks to steady advances in medicine and technology and other fields.

Thinking forward nearly four hundred years is not as easy as you might think. If you work backwards that same amount of time, you'd be in the early seventeenth century. To put that in perspective, the early seventeenth century was the dawn of the Age of Sail. Firearms existed, but were not the primary weapons used to wage war. Only one of the American colonies had been founded. It would be nearly two hundred years before the British Empire would outlaw slavery and over two hundred and fifty years until the American Civil War and the Emancipation Proclamation would do the same in the United States. Throughout most of the world, women were considered property, without the right to vote or hold title in their own name. People were wildly xenophobic and superstitious. In the early sixteen hundreds, people were still being put on trial and often hanged or burnt at the stake for witchcraft and heresy, and both the Spanish and Portuguese Inquisitions were in full force. It would be another two hundred and twenty-five years, give or take, before Darwin's voyage on the HMS Beagle.

A person from the early seventeenth century would have a very hard time comprehending the modern world. It would be naïve and more than a little arrogant to think we would do any better comprehending life four hundred years from now. Realistically, we can't change our fictional world as much as the real world will actually change by then, and even if we were capable of doing so, we probably wouldn't want to because our player would feel out of place and uncomfortable.

But we need to convey to the player that social mores and culture have, in fact, changed. In my mind, to be science fiction, rather than just futuristic fantasy, social mores and cultural standards have to be different from today's. Those changes can be for the better or for the worse, but there have to be tangible differences in how people think about the world and how they interact with others; otherwise you're just creating General Hospital in Space. With lasers.

Not that there's anything wrong with that, but it's not what we're going for with Turncoat.

So, what cultural changes did we decide on for our Twenty-Fourth Century Earth?

First, we decided that the entire world had moved to a single, unified government. People are no longer citizens of countries; they're citizens of Earth. Countries are more like states or provinces or maybe even counties in today's world, and the existence of a common enemy has reinforced this feeling of planetary patriotism. To the mainstream twenty-fourth century Earthling, all other affiliations and identifications are secondary to world citizenship, whether it's cultural heritage, ethnicity, religion, or anything else. Those things still have some importance to some people, but they're almost always of secondary importance.

There are a small number of people in the twenty-fourth century who consider their association with a cultural group, ethnicity, race, or religion to be more important than their citizenship. Those people are called "Traditionalists", and they tend to be looked down upon by mainstream citizens, much the way many people today might look down upon uneducated rural people (but more so). There aren't a lot of these Traditionalists, though, and they tend to live in isolated communities far away from population centers.

A world-wide government doesn't come without its problems, of course. There is a lot of bureaucracy and waste in both the civilian government and the military. Even with four hundred years of technological advancement, logistics are never handled perfectly. Just like today, large bureaucracies mean mistakes get made and change is often slow. Shipments and personnel that arrive at a ship are often completely wrong. A ship might requisition food and end up with parts for equipment that the ship doesn't even have. They might desperately need a repair technician but be sent a trauma nurse instead.

Second, not only has racism been defeated, but few people in the twenty-fourth century outside of historical sociologists would even understand what the term means. Most people in the twenty-fourth century have ancestors from many ethnic backgrounds. It's not uncommon for people to have a surname from one culture, a first name from a different culture, and the physical traits that today we would associate with a third culture. You might have someone, for example, with a Japanese surname and a Portuguese first name, but who has blue eyes, pale skin, freckles, and red hair. Essentially, there are no rules when it comes to names or physical characteristics. The world has become a true melting pot.

Third, twenty-fourth century Earth has long since achieved true gender equality. Women fight alongside men, and both the civilian and military leadership have nearly identical numbers of men and women at all levels. We decided to have a little fun with this idea, though. We made that equality almost — but not quite — universal. We decided, for grins and giggles, to put one tiny bit of subtle gender bias into our universe, but in just a single occupation. The job of fighter pilot, in the Turncoat universe, is a job heavily weighted toward women. We thought it would be interesting to reverse the situation of today. We decided to play "what if", and make it so that in the Turncoat Universe, for some reason, women just tend to be better fighter pilots in zero gravity. We don't go into why — whether it's genetic or cultural or what — but the CAG, LSO, wing commanders, and most (but not all) of the accomplished pilots we meet in the Deep Fleet are women. In literally every other job in the universe, though, there is absolute and complete gender equality.

Fourth, we decided that the world had evolved in terms of sexuality and sexual relations, but that this will be something that's more of a background element rather than something we put right out in front of the viewer. Mainstream Earth citizens have mostly moved beyond caring about things like sexual orientation or sexual identity. Except for Traditionalists, people just aren't uncomfortable with the idea that other people are different than they are. Related to that, the 24th century has no body modesty taboo. The ship's facilities — showers, bathrooms, barracks, and locker rooms — are all mixed gender except for a small portion on some ships that are set aside for "Borderliners" - Traditionalists who have chosen to try and live in mainstream society for one reason or another.

Overall, the citizens of twenty-fourth century Earth are generally rational, literate people. Public education systems are fairly good, and most people have at least a basic foundational knowledge of math, science, and history. Most people are also not overly superstitious. There are some exceptions, though. One area where most people are very superstitious is with regard to the Seditionists. People just don't have much hard information about the Martians because it's been over seventy-five years since there was any meaningful communication with the red planet. As a result, there are a lot of rumors, stories, and myths about them, ranging from the plausible to the completely absurd. Almost all of these stories are completely untrue, and yet many are fervently believed by some and given some credence by many.

This view of Earth people towards the Martians is actually inspired by U.S. attitudes about the Soviet Union during the Cold War, especially the Reagan-era Cold War. If you look at popular culture back then, you see a lot of examples of "red fear" in movies and television shows. Stories like No Way Out, Red Dawn, and Little Nikita played on American fears about the threat posed by the Soviet Union and the possibility of embedded Soviet "sleeper" spies living in America. While entertaining, these stories were fueled as much by half-truths and ignorance as they were by reality. Earth beliefs about Martians are similar, but the stories are even more dramatic and even more dramatically wrong.

In addition to cultural changes, there are also some important aspects of twenty-fourth century Earth that will color the events and decisions made by characters. The most important of these is that Earth is teetering on the brink of overpopulation and starvation. Restrictions on population growth, combined with advances in agriculture and genetics and the use of the moon and several agricultural space stations to grow food, mean there's enough to sustain Earth's population, but just barely. People on Earth live on a rationed 1800-calorie diet, except for the very wealthy, and most of that comes from processed foods. Fresh fruits and produce are relatively rare treats for the middle class and almost unheard of for the poor. Only the very wealthy ever have meat, and it is looked upon by the non-wealthy the way some people today look upon caviar and foie gras: as unappetizing foods the wealthy eat just to show off the fact that they're wealthy.

For the Deep Fleet, keeping everyone fed is a challenge, and it's not uncommon for the ships to go on "starvation rations" of 1000 or 1200 calories a day for days at a time, sometimes even longer. Supply convoys are often targeted by the Seditionists and bureaucratic mix-ups often result in too little food being sent in the first place. There is a thriving (and tolerated) black market in the fleet that uses foodstuffs as the primary currency, and several larger ships have taken to converting unused space into makeshift hydroponics bays.

More than one Earth politician has looked at the sparsely populated and lushly terraformed Mars as an option for alleviating the overpopulation problem once the war has been won.

There's more to the Turncoat Universe, of course. In fact, there are several hundred pages more, but this should give you enough context to play along at home without spoiling anything too important.

Next Up: Finding a Smaller Game in the Backstory
Previous: Origin of the Universe

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 9 August 2013 | 8:29 am

Turncoat Dev Diary: Finding a Smaller Game in the Backstory

After investing a lot of time into Turncoat, we started pre-production, with our first task being to figure out just what it was going to take to make the series of games we had envisioned. We didn't go too far down that road before we realized it was going to take an awful lot of work and resources. Too much work and too many resources: far more than we could swing without outside funding.

That's not exactly the kind of realization you enjoy having, but we were happy to have realized it before we actually started sinking money into development. At this point, we had really only invested our time, and since we had enjoyed the process, it couldn't really be considered a loss. So, we decided to put the larger Turncoat story on the back burner, but use the universe we had created for it as the setting for another game of smaller scope. The tricky part here was that Turncoat was designed as a mystery with a complex backstory, most of which exists to support that mystery. Any number of facts from the existing story could spoil Turncoat for players if and when we're finally able to revisit the original concept.

We started investigating ideas for both 2D sprite-based games and full 3D games. We came up with several ideas, and decided to actually run with two of them. One is a take on a traditional 2D platformer that we're going to use the new SpriteKit framework for. The other is a stealth-based 3D game. It's the latter that we've already started working on and which I'm writing this dev diary for.

In this game, you have to escape from a jail-like facility using a combination of stealth and problem solving, with maybe an occasional spot of violence. It will have things in common with several existing stealth-based first- and third-person shooters, but the focus will not be on combat. If you get into a firefight, you've probably already failed. Funny thing about jails: they're harder to escape from if the guards know you're trying to escape.

The game will take place about a hundred years before the original Turncoat story, during the "Last Great War" — the war that led to the creation of a single Earth government. During that war, several of the nation states created internment facilities and rounded up a certain class of people. There were no mass killings or attempts at genocide like in the Holocaust, but the treatment was degrading and often violent and the inmates were treated as less than human. Though there were no systematic executions, many inmates were killed for one reason or another.

Exactly who these inmates are and why they were imprisoned won't be revealed at first, but we'll drop some hints throughout the game, both in regular dialogue and in hidden easter eggs that will give the player pieces of the game's puzzle. We'll also throw in a few small hints about the mysteries of the larger Turncoat Universe. The important thing that we'll convey to the player at the start, though, is that they have been imprisoned unfairly, that they are being treated poorly, and that they really need to get the hell out. That's all they really need to know to get started, but we want them to be able to discover more about why they've been imprisoned and more about the world they live in as they play the game.

Once we had decided on a basic concept, next up was figuring out what we need to do to actually build it, keeping an eye on creating something polished and professional while keeping the scope of the game reasonable. Here's the initial, high-level list I came up with of things that need to get done, in no particular order:

  1. Level Design. We need a place to escape from. To keep scope down, we're going to limit ourselves to a single level for this project, though we're going to leave open the possibility of additional levels in the future if the game is well received. To increase replayability for the one level, however, there will be multiple starting points and multiple ways to get out. Some options will always be available; others may only be available from a certain starting point or using a character with certain attributes.
  2. Script Writing. We're still trying to tell a story, even if it is a much smaller story than our original vision, and to tell that story, we need dialogue. We'll need an opening cinematic to set up the game's scenario and make sure the player knows what they need to do. We'll need to figure out what the guards and other people in the facility say if the player gets near them. There are going to be hints about the world that will be dropped through dialogue in certain places. When a player escapes, there will be an ending cinematic to tease possible future levels and to reward them for their accomplishment, and it will probably be a different cinematic for each possible exit. All of that dialogue needs to be written.
  3. Overall Aesthetic. We need to figure out, stylistically, how everything will look. Will we try to make it realistic, or will it be somehow stylized? Will we favor bright colors or muted ones, or will that vary depending on some factor?
  4. Environment Design. Once we have a level, we have to make it feel like a real place by layering textures and lighting to create an environment that is believable and immersive.
  5. Character Design. A story means people, so we need to figure out who the people in our story are — both the protagonist and antagonists. We need to know why they're in the facility, what they're going to look like, and at least a little bit about their background.
  6. Character Modeling. Once we know what characters are going to look like, their models have to be created. For some characters, we'll need both high resolution models for cinematics and low-resolution models for game play. For others, we'll only need the game-resolution models.
  7. Game Mechanics. We need to figure out how the player maneuvers around the level and what tasks they have to accomplish to get out. 
  8. Sound Design and Foley. An environment won't feel real if it's dead silent. Even games set in the vacuum of space (which, in reality, should be silent) don't feel right without some kind of sound. 
  9. Music. Just like with movies, games need music to set the mood. Music, like other sounds, can also be used to give the player feedback. We might have a different musical theme playing, for example, when the player is heard by guards, arousing their suspicion and putting them on alert.
  10. Animation. Although we can use stock animations from companies like Mixamo for some of the character and enemy movements, there will likely be some game-specific motions that we're not going to be able to buy, so we'll need to animate them or use motion capture to create them.
  11. "Finding the Fun". This is a term that a friend of mine who has worked in the game industry uses. It describes the process of iterating over the basic game mechanic until you find something that's enjoyable to play. Unfortunately, you can't "find the fun" until some of the other work has been done. You don't have to have a fully polished game to start, but you need something. If you can't find the fun, the game should be abandoned or drastically overhauled, so it's best if you can start this process early.
  12. Voice Acting and Direction. Dialogue means voice acting, so we're going to need to find voice actors and we're going to need to direct them to make sure they say lines the way they were intended.
  13. Branding and Marketing: Even a free product is a product, and it does no good if people don't find out about it.
  14. Testing, Testing, Testing: Just like any other kind of software, games need to be tested extensively before they can be shipped. Fortunately, it's often easier to find people willing to try out an unfinished game than, say, an unfinished productivity tool.
Yikes.

I know I've probably missed a few things, but I think that covers the bulk of the tasks at a very high level. I don't know about you, but to me that's a pretty intimidating list, though also one with a lot of potentially fun, cool tasks. It's also not nearly as intimidating as it would be for an individual developer. Fortunately, we have a team and resources to hire freelancers to handle the tasks we can't handle in-house.

Although many of these things can be done in parallel by different people, some of the tasks have to happen before others can be started. You can't really begin character modeling until you've got character designs, for example, and you can't really begin working on game mechanics or "finding the fun" until you have at least part of a level to maneuver around. None of these things happen in a vacuum, though, and none can be considered completely done until the game has shipped. Characters might get redesigned, for example, and the level will almost certainly need to be tweaked as we test. As with any software development, the process will be one of iteration, so we can't get too married to anything.

Before we can begin even the first task, though, we need to decide what platforms we're targeting and what tools or libraries we're going to use to build the game.


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 9 August 2013 | 8:29 am

Turncoat Dev Diary: Platform Decisions

In our earliest brainstorming for Turncoat, there wasn't really any debate about which platform we were going to target. The iPad was going to be our first priority: our reference platform, if you will. We would also ship on iPhone if our game mechanics worked well on, or could be adapted to, that device's smaller screen. We'd look at porting to the desktop or to Android devices later if we felt the response warranted it.

Our earliest visions for Turncoat were, quite honestly, driven by the potential we saw in the iPad's big, beautiful Retina screen as a storytelling medium.

After more than a year of developing the idea, this is still the approach we're going to take. But there is one thing about targeting the iPad as our principal platform that honestly gives me a little bit of pause: the App Review process.

Because of the size and makeup of the iOS market, it's really the best platform for what we want to achieve, but the arbitrariness of app review and the vagueness of the review guidelines really do concern me. The fact that Apple's official App Store Review Guidelines glibly adopt the famous Potter Stewart line about pornography and put it forth as a valid approach to reviewing Apps borders on being childish.

They'll know it when they see it?

Really?

Worse than that, Apple says, flat out,

"We view Apps different (sic) than books or songs, which we do not curate. If you want to criticize a religion, write a book. If you want to describe sex, write a book or a song, or create a medical App." [emphasis mine]

They say "Apps are different" but clearly what they're implying is that "Apps are less". They're telling us that if we want to do certain things, we shouldn't try and do them in an App, regardless of context, regardless of value, regardless of how well-suited an App might be to the task. Apps, Apple tells us, are a less valuable means of expression. They're less appropriate for challenging people and making them think. They're less valuable for making a personal or a social statement or for pushing any kinds of boundaries.

See, this notion is exactly 180° turned around from what I think. I think Apps, and especially immersive Apps for the iPad, not only have the same potential as books, movies, and songs to challenge, educate, and enlighten people… I think they have tremendously more potential. They can be interactive. Stories can be told, but customized to the viewer or changed based on any number of inputs or conditions.

Put simply, Apps are a medium with nearly unlimited potential as a storytelling tool. Apps can be more than useful utilities and fun diversions. They could be used to really explore human interactions, to play on emotions, to let people experience life from the perspective of others, to force people to think, or to challenge what they think they already know. Given the chance, Apps could surpass the far more limited traditional forms of media that the App Review Guidelines seem to hold in such high regard. But right now, nobody's going to take a chance on doing anything like that, because Apple has decided to set themselves up as gatekeeper, and as gatekeeper, they have decided that the App, as a form of communication, must remain in perpetual adolescence and never grow up.

Apps are the Lost Boys of media.

And that's really a shame, because there is so much that can and should be done with these amazing little devices we all carry around every day.

Even worse than the fact that Apps are viewed, by Apple, as less than books, movies, and songs is the fact that Apple never lays out definitively what is okay and what is not. The rules aren't fixed or concrete. There's no way to predict whether specific content will be allowed onto the App Store until after you have already invested the money to create it. Content similar to material that is available on the iTunes Store in R-rated and sometimes even PG-13-rated movies can be, and has been, grounds for rejection in the App Store.

Where's the line? We don't know. Apple won't tell us.

They'll know when they see it.

We know there are some things you'll never get on the App Store, like hardcore pornography. Or, maybe you will if you're Brian K. Vaughn, but probably not if you're anybody else. But there's a huge amount of gray area short of that. There's an awful lot of content that might be okay and might not. There's a lot of stuff that might get in one day but not the next, or that might be allowed by one reviewer but not another. Developers are expected to invest substantial time and money into creating apps and then submit them to Apple knowing full well that they might get rejected for violating some unwritten rule. We're all expected to be okay with the fact that the fate of our app will be a subjective decision made by some faceless stranger who will probably have, at most, twenty minutes to look at and judge our app. And we're expected to be okay with all this despite the fact that we have no alternative market for our creation. If Apple rejects us, there's nowhere else we can easily go with our creation.

This situation creates what First Amendment attorneys call a "chilling effect". Content creators tend to intentionally stay well behind the line of what they think will be accepted because the financial implications of crossing the line are high. But at least nobody will be offended, nobody's world views will be challenged, and nobody will ever have to think. What a brave new world it is.

I'm not arguing that Apple doesn't have the right to be gatekeeper and decide what content gets put on their store: they most certainly do. I'm just saying that as a writer, developer, and content creator, this ambiguity and treatment of Apps as a less mature and less worthy medium bothers me and seems more than a little short-sighted. It saddens me that Apple is actually discouraging creators from exploring this new medium to its fullest.

Go write a book or a song.

But, I don't want to write a book to tell this story. I don't want to create a song about these people. I want to leverage everything that an App has to offer to make the most impact and to make people care and think.

I don't think any of the scripts written for Turncoat so far would be particularly offensive to most reasonable, mature people. I'm not on a mission to get embroiled in controversy or push any boundaries.

But I am on a mission to tell a story, and that story has pleasant and unpleasant parts. Things happen that people won't like, and the characters have a strange knack for acting like real people.

It concerns me that by targeting iOS as the primary platform for Turncoat, the power to decide whether I can tell the story I want to tell, the way I want to tell it, will reside with some anonymous app reviewer sitting in Cupertino working a thankless job and doing the best they can to follow intentionally vague guidelines.

For that reason, and that reason alone, I seriously debated trying to convince Rob that we should target desktop computers first, and then maybe bring the Turncoat games to the iPad later. But when it comes down to it, iOS is just too big and desirable a market. The Turncoat Escape game, and likely every other Turncoat game we're able to produce, will target iOS first. If we run into problems with App Review, we'll take whatever steps are necessary to pass review. Then, maybe, we'll release the full version on another platform so people can see the story the way it was intended.

It's funny, though. I can't help but think about another medium as I write this. See, I stopped watching television around 1991. For a very long time, I didn't watch any television at all. I still owned a TV, but I had no cable or satellite – not even a UHF antenna. I stopped watching for practical reasons; I was very busy at that point in my life and quite literally had no time to watch. But for the next decade or so, any time I thought about starting to watch television again, I found myself unable to invest myself in the medium. Between the fact that a third of the airing time was devoted to grating, obnoxious advertisements and the fact that stories had to follow all the unwritten rules of the medium, I found any attempt to get back into watching television annoying. I mean, it's hard not to lose your suspension of disbelief when a hardened mob boss says "fudge" instead of "fuck", or a major character waxes poetic about some brand of automobile for no apparent reason. After being away from it for a while, the flaws of the medium became really obvious to me, and I continued not to watch.

But something happened. Television — or at least some of it — became good. By the time TiVo and Netflix were available (which is when I started watching some television shows again), there was a lot of halfway decent television to watch, and some of it was very good.

What changed? I can give you my theory.

The Sopranos. The Sopranos, and other cable television shows like it, didn't have to guess at and conform to the arbitrary guidelines of the FCC or the demands of advertisers. Freed from the possibility of being fined for random violations of unwritten rules, television got better as a medium, and some of it became great. Some people might be tempted to credit technological advances for television getting better, but I really don't think that's the reason. TV got better because the writers were allowed to write without having to worry about what some bureaucrat might do as a result of what they wrote, or what some uptight zealot in Utah might put in a letter to their member of Congress.

The more things change, the more they stay the same. App developers are stuck back in the world where Lucy and Desi had separate twin beds.

As you can probably tell, I don't like the present situation. I don't mind constraints, but I want to know what those constraints are so I can work around them intelligently, and I want them to be reasonable rather than catering to the least common denominator. I don't think the way Apple handles App Review right now is a good long term strategy but, hey… you play the cards you're dealt.

Someday, though… I do hope Apple will decide to let Apps grow up and we can start seeing some truly great storytelling happen on our platform.

Next Up: Deciding on Tools and Frameworks
Previous: Finding a Smaller Game in the Universe

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 9 August 2013 | 8:28 am

Turncoat Dev Diary: Deciding on Tools and Frameworks

Once we knew our platform, it was time to start figuring out the toolset or frameworks that we were going to use to make the game. The essential decision we had to make was whether to build our games from scratch, essentially creating our own game engine in the process, or to leverage one of the many existing commercial or open source game engines. Although the idea of creating our own engine had some appeal, we knew that practical considerations weighed heavily in favor of using an existing one. There are costs associated with using many engines, and doing so makes you dependent on somebody else's work, but the cost/benefit equation still makes the decision pretty simple. We want to tell a story and create games; we don't want to reinvent the wheel, and writing a 3D game engine from scratch is very much reinventing the wheel.

It actually didn't take us very long to figure out which engine to use. We ruled out a few very quickly. Cocos2D wouldn't work because we want to create a full 3D game. Cocos3D is still a little too immature for us to be comfortable relying on it. Since we want to keep our options open for releasing on other platforms, some other engines were ruled out. Sio2, although a good mobile engine that supports both iOS and Android, doesn't have desktop or console support.

Ogre3D, a well-regarded open source game engine, just has too many rough edges for my tastes. The cost savings from the fact that it is free and open source seemed to be far more than offset by the additional time and headache involved in using it. I'm all for open source software when it's the right tool for the job — Blender is still my general purpose 3D app of choice — but the gap between Ogre3D and the commercial engines is pretty wide, not in terms of what you can achieve, but in the amount of effort it takes to achieve it.

We very easily got the list down to just three: the UDK, the Source Engine, and Unity3D. Then, two of those three got quickly crossed off the list, as well.

Although the UDK is an amazing engine, we ruled it out for one simple reason: the toolset is entirely Windows based. Although it can create iOS and Mac games, most of the work involved in creating the game has to be done on Windows. We're almost entirely a Mac shop and I, personally, am much more productive and happy when working on a Mac. Even if I didn't mind spending much of my day in Windows, I'd still have to compile, test, and upload to the App Store using a Mac, which seems a rather convoluted and inefficient process. It's probably not much overhead for a large game shop, but it's more hassle than I'd want to deal with.

The Source Engine has similar limitations. Although Valve has been promising Mac tools for a while, they have not shown up yet, and there have been no recent comments from Valve about Mac support, leading me to question whether they've dropped the plan. On top of that, Valve hasn't delivered official support for any mobile platforms yet. There are rumors of a Source 2 engine in the works that will likely address these issues, but we can't develop with something that's not out yet.

Before long, there was only one engine left standing: Unity3D. We've used Unity for a few client projects in the past and I'm, frankly, rather impressed with it. I thought that I would really hate working in C# but it turns out I don't mind it at all. I don't like it as much as Objective-C, but I don't have the kind of hatred for it that I seem to have developed for Java and C++ over the years. Like all languages, it has its quirks, but I don't feel like the language is working against me and I don't have problems context shifting between Objective-C and C# like I do with Objective-C and Java. Objective-C and C# are surprisingly compatible languages given their differences.

Although I've only got a few hundred hours of experience with Unity under my belt, it strikes me as having the right balance between ease of use and power. The development environment runs natively on the Mac (and Windows also) and it is capable of generating iOS, Android, Mac, Windows, and Linux executables. It's even possible to build your apps for the Xbox, PS3, and Wii, though doing so requires contacting Unity and negotiating separate licenses. There is, of course, some work involved to account for the various platform differences, but a surprising amount of it is handled for you.

Once we got the licenses squared away, it was time to get something built. There's one school of thought in game development that says you should try and get a prototype up and running as soon as possible. The earlier you start being able to play, the faster you'll know whether the game's going to work. So, let's get a skeleton of our level hashed out so that we can get our first rough prototype stood up.

Next Up: Just Getting Something Running
Previous: Platform Decisions


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 9 August 2013 | 8:26 am

Turncoat Tool Time #1: The Story Bible and Scrivener

After posting the first Turncoat Dev Diary post yesterday, I received a number of questions from people about what software was featured in the screenshot of our Story Bible. The software we're using to write the Story Bible is Scrivener by Literature and Latte.

We looked around a little bit for apps designed specifically for game design, but nothing we found really jumped out at us as a good tool for what we needed. A few months ago, a program called Articy came out. If it had come out a year earlier and if it weren't Windows only, I might have taken a hard look at it. I'm not bashing Windows; it's just that I'm comfortable, and therefore more productive, when using a Mac.

But, I had used Scrivener quite a lot and it seemed like an awfully good fit for what we wanted to do, especially since we didn't completely know what we wanted to do. Scrivener is great at collecting and organizing research, and it lets you write styled or unstyled text and then easily reorganize what you've written. One of the nicest features for what we were doing turned out to be something called "Binders".

Binders allow you to organize a subset of your Scrivener project's files, sort of like virtual notebooks (hence the name). We have our big Turncoat project, but then we also have a binder called "Story Bible", which stores all the information about our universe and the characters, but none of our research or game-specific information. We have individual Binders for game concepts we came up with, but also have a binder called "Scripts", which contains all the scripts from all the various game ideas we came up with in order. Items can be in more than one binder and stay automatically in sync, so they're incredibly useful when you have a lot of information that might need to get presented in different ways at different times or to different people.

Scrivener is quite easily one of my favorite pieces of software ever. It's like an IDE for writing. It doesn't matter what type of writing you do, either. Whether it's fiction, screenplays, academic theses, or something else altogether, Scrivener can make the process better. I always hated that I was never able to find a way to use Scrivener in Apress' publishing workflow without problems.

If you do any serious amount of writing, you should probably take a little time to check out Scrivener. It's got a bit of a learning curve, but the instructional videos are well done, and once you get over the learning curve, it's a huge help.

My only complaints about Scrivener are that it doesn't have better collaboration tools, and that it can be a little tricky to use with source control. You have to make sure you never, ever commit while the program is open. Those are relatively minor quibbles, though, and I don't think I could write without Scrivener these days. Well, maybe I could, but I wouldn't want to.


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 30 July 2013 | 5:40 am

iOS 7: Update or Languish

I spent last week in San Francisco attending Apple's World Wide Developer Conference. I'm always excited by the new stuff that Apple releases there. I spend that week each year lodged firmly inside the RDF.

But this year was different.

This year was a whole new level of "Wow!" for me. Given that it was my seventh consecutive WWDC, that's saying something. And the bulk of the wows actually came after the keynote during the various NDA sessions.

During the keynote, the focus was on the user-facing changes to OS X and iOS, plus some hardware announcements, which included a completely redesigned Mac Pro (another "Wow!"). I may talk about the new hardware at some point, but right now, I want to focus on the software changes, and specifically the changes to iOS, because they are substantial and somewhat in-your-face.

We all saw the new UI that was shown during the keynote, and it's been the subject of much debate ever since. Every single designer with a dribbble account or a copy of Photoshop has spent the last week or so telling anyone who would listen why the design of iOS 7 sucks. Now, I'm not a designer, so I'm not going to enter the fray except to say two things:

  1. What they're all judging is a developer preview released at a developer conference and made available to registered developers only. We're at least three months, possibly more, from a final product. It's very likely that many of the "design problems" that people are pointing out in these early builds will be gone by the time iOS 7 is released. Apple doesn't design and then go build as two distinct and separate steps. They iterate, and they are still iterating, and they will continue to iterate for quite some time. The right place to point out actual design flaws on a pre-release version of iOS is right here, not on a blog or on dribbble.

  2. I've read a lot of posts claiming the new iOS design breaks various "rules of design" and that "no designers" think what Apple has done with iOS 7 is right. They're pointing out things like iOS 7 "using straight-from-the-tube colors" and explaining how "the new icon grid is wrong". I'm sure there are many valid and worthy points buried amongst all the whining (but, again, see #1). Of course, when I hear these comments, I can't help but think back to something an art teacher once said to me. I can't remember the exact quote, but it was something along the lines of: "Competent artists know the rules and follow them. Masters know when to break them." When the dust settles and iOS 7 ships, most of the "broken" design rules at that time will likely have been broken intentionally. Maybe you're a better designer than Jony and his team, but you're a dark horse in that race if so.
Don't get me wrong: iOS 7 isn't perfect. But, nobody should be expecting it to be perfect at this point. It's not done. That's why it was only released to developers.

Whether you like it or not, though, we've seen enough to know the general direction Apple is taking for the foreseeable future. While the look and feel will evolve a little with each beta, the broad strokes we've been shown will still be there when iOS 7 ships. Lighter colors, thinner fonts, playful physics-based animation? Those, without a doubt, are going to be prominent parts of iOS 7, and likely iOS 8 and 9 as well.

So, if you're an app developer and you don't update your apps or if you continue to create what I'd call "heavy" skeuomorphic interfaces, your application is going to look out of place on iOS 7. It's going to look outdated no matter how well designed it might be. Go back and look at a screenshot of Mac OS 9. What you just felt looking at that is what people will feel when they look at your "heavier" iOS apps six months or a year from now.

These changes to iOS 7 mean an awful lot of work for developers and designers alike. But, for the most part, the developers are not complaining. Almost every developer I've talked to is incredibly excited about everything that came out this year. We like progress. We're okay having to do extra work to keep up and we're happy to file bug reports to tell Apple what's not working. We want to help Apple iterate toward a better final product. We like this game and we're really happy to be playing it.

Which is good, because it's time to "update or languish". Marco Arment got it basically right: everything is in flux right now. Whether you like Apple's new direction or not, apps that don't revisit their interaction model and visual design are, in most cases, going to be pushed aside by newer, lighter, more playful apps that take advantage of the cool stuff that iOS 7 has given us.

Don't get hung up on what you don't like. Focus on what you need to do to keep moving forward and to keep your apps relevant and exciting. That's going to help users far more than knowing that the corner radii on their home screen are "wrong".



If you need help figuring out what to do with an existing app, or want to create a new one, that's what we do at MartianCraft, so feel free to drop us a line. We'd love to talk with you.

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 18 June 2013 | 7:57 pm

What a Long Strange Trip It's Been…

In 2009, Briefs began its long, strange journey to the App Store. It took a year of sitting in review, changes to the App Store rules, and a complete re-envisioning requiring a ground-up rewrite, but at midnight last night, Briefs opened its eyes and woke up from its coma.

It has taken a huge amount of work to get to this point. The version of Briefs that's now available on the App Store has taken nine months of active development to create. It had a core team of seven people, but seventeen different developers and designers were directly involved in its creation at different points over the course of those nine months. Other than the design of Briefs' icons, which was handled by the awesome folks at Pacific Helm, we did everything in-house. We did both the interaction design and the graphic design. We did the development work. We did the product photography, the website, and the promotional videos.

As you can imagine, there were many very late nights along the way. And none of it would have been possible if we didn't have an amazing team of multi-talented, devoted, and generally kick-ass people working for us.

The original Briefs was a tool for testing app designs on a device. The new Briefs is designed to be that, but also much more. The design of the new Briefs grew out of our experiences at MartianCraft working with clients. What we realized after working on many client projects, both big and small, is that clients, designers, and developers don't always speak the same language. Static mockups and traditional design documents rarely communicate everything about an app's design that needs to be communicated. Once development starts, there's always a lot of back and forth necessary to clarify intent and to handle issues not dealt with or anticipated by the design documents. This adds time and cost to the development process. Sometimes, it adds a lot of time and cost.

Briefs are more than prototypes. They're also schematics that tell developers exactly how an app should look and behave. They communicate with pixel precision how the app needs to be built.

Briefs can be purchased directly from the Mac App Store. Its companion app, Briefscase, can be downloaded for free from the iOS App Store. Not ready to buy, but curious? Check out the Briefs website, where we have informational videos and a free trial of the app with no time limit. You can also check out reviews by MacStories, iMore, TUAW, and Macworld. If you're interested in the thought process that went into the design of the new Briefs, check out this article Rob wrote for Fast Company.


©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 1 May 2013 | 11:00 am

WWDC First Timer's Guide 2013 Edition

Well, this year's WWDC announcement ended up being a little bittersweet for me, since a lot of people I'm used to seeing at WWDC didn't get tickets this year. For those of you who did, especially those of you attending for the first time, I've decided to update my First Timer's Guide to WWDC. If interested, you can read past versions of it here (2012, 2011, 2010, 2009), though they don't change all that much.


Remember that WWDC is different every year, so don't take anything written here as gospel. Things change every year, and I expect this year will be no exception. Hopefully these hints and suggestions will help some of you.

  1. Arrive on Sunday or Earlier. Registration is usually open most of the day on Sunday. You really, really want to get your badge and related swag (usually a bag and a shirt or jacket, etc.) on Sunday if you plan to get in line for the keynote. The line for the keynote will start forming many hours before the doors to Moscone West open up on Monday. The past five years, people have started lining up before midnight on Sunday; the last two years, the line started forming by late afternoon. If you do not have your badge when you get to Moscone on Monday morning, you will almost certainly end up in an overflow room for the Keynote and may even miss part of the presentation. Even if you don't care about being in the main room, there's still a lot going on on Sunday and it's a good time to meet new people and catch up with old friends. You really don't want to deal with the badge process on Monday. Developers, especially those coming from overseas, start coming into town much earlier, so it's not a bad idea to come in on Saturday or even Friday if you have developer friends to catch up with.

  2. Do not lose your badge. If you lose it, you are done. You will spend your time crying on the short steps in front of Moscone West while you watch everyone else go in to get schooled. Sure, you'll still be able to attend many of the unofficial after-hours goings-on (aka "showcializing"), but not the Thursday night party, which is often a blast (though the band quality has been in a downward spiral for several years now). Without a badge, you'll miss out on some of the really important stuff if you're a first timer. No amount of begging or pleading will get you a replacement badge, and since they sold out, no amount of money will get you another one, either. And that would suck. Treat it like gold. When I'm not in Moscone West or somewhere else where I need the badge, I put it in my backpack, clipped to my backpack's keyper (the little hook designed to hold your keys so they don't get lost in the bottom of your bag). Yes, there have been isolated stories of people managing to convince a sympathetic conference worker to print them a new badge, but don't expect it; those are exceptions. The employees are not supposed to print new badges, and most won't.

  3. Eat your fill. In the past, they've provided two meals a day; you're on your own for dinner. Breakfast starts a half-hour before the first session, and it's most likely going to be a continental breakfast: fruit, pastries, juice, coffee, donuts, toast, and those round dinner rolls that Californians think are bagels, but really aren't. If you're diabetic, need to eat gluten-free, or are an early riser, you'll probably want to eat beforehand. Lunch used to be (IIRC) a hot lunch, but several years back they switched to boxed lunches. They're okay as far as boxed lunches go, but they are boxed lunches. A lot of people complain (loudly) about them and choose to go to a nearby restaurant during the lunch break, which is pretty long - at least 90 minutes.

  4. Party hard (not that you have a choice). There are lots of official and unofficial events in the evening. A list of WWDC events is maintained at http://wwdcparties.com/, but your best bet is to follow as many iPhone and Mac devs on Twitter as you can - the unofficial gatherings happen at various places downtown, often starting with a few "seed crystal" developers stopping for a drink and tweeting their whereabouts. The unofficial, spontaneous gatherings can be really fun and a great opportunity to meet people. The sponsored parties often start before WWDC - there are usually a few on Sunday, and there have been ones as early as the Friday before. Pretty much any bar within stumbling distance of Moscone West will be used for both planned and informal gatherings. As we get closer, there will be lists and calendars devoted to all the events and parties. Some are invite-only, but many are first-come, first-served. Although there's a lot of drinking going on, these are worth attending even if you don't drink. Great people, great conversations... good times, whether you imbibe or not. And even if you do enjoy alcohol, it's not a bad idea to take a night off during the week. WWDC is a marathon, not a sprint. Learning to pace yourself is a survival skill.

  5. Everything is Crowded. As you probably guessed from how quickly it sold out, WWDC is popular. This extends to pretty much any organized event, official or unofficial. WWDC parties are often invite-only, and whether they are or not, they often have a long line to get in. So, if you've been stuck in a line too long, grab a few people who are in line with you and go to a nearby bar or restaurant that doesn't have a sponsored event, then tweet your whereabouts. You might be surprised at what happens.

  6. Take good notes. You are going to be drinking knowledge from a firehose there. The information will come at you fast and furious. As an attendee, you will get all the session videos on ADC on iTunes. It used to take some time before the videos were available, but hopefully they'll continue to get them out quickly as they have the last two years. The rumor is that many session videos will be available even before the conference is over. That's just a rumor, though, so make sure you write down any information you might need immediately.

  7. Collaborative note taking A few years ago, people started taking communal notes using SubEthaEdit and Panic's Coda (they are compatible with each other). That worked out really, really well. My notes from the past few years are ten times better than from previous years. With collaborative note taking, you don't have to type fast enough to catch every detail. Instead, the audience works as a team and everybody gets great notes. The license fee pays for itself in one WWDC, especially considering you can see notes being taken in other sessions, not just your own.

  8. Labs rule. If you're having a problem, find an appropriate lab. One of the concierges at any of the labs can tell you exactly which teams and/or which Apple employees will be at which labs when. If you're having an audio problem, you can easily stalk the Core Audio team until they beat the information into your skull, for example. It's unstructured, hands-on time with the people who write the frameworks and applications we use every day. It used to be that people started remembering about the labs later in the week, but now they fill up extremely quickly, so sign up early! 

  9. Buddy up, divide and conquer There will be at least a few times when you want to be at more than one presentation offered at the same time. Find someone who's attending one and go to the other (Twitter is a good way to find people), then share your notes. Also, see #6 above.

  10. Make sure to sleep on the plane. You won't get many other chances once you get there. Everybody is ragged by Friday, some of us even earlier. Everyone remains surprisingly polite given how sleep-deprived and/or hungover people are.

  11. Thank your hosts. The folks at Apple - the engineers, managers, and evangelists who give the presentations and staff the labs, kill themselves for months to make WWDC such a great event. So, do your mother proud and remember your manners. Say thank you when someone helps you, or even if they try and don't. And if you see one of them at an after hours event, it's quite alright to buy them a beer to say thanks.

  12. Remember you're under NDA. This one is hard, especially for me. We see so much exciting, amazing stuff that week that it's natural to want to tweet it, blog it, or even tell the guy handing out advertisements for strip joints on the corner all about it. Don't. Everything from morning to night, except the Keynote and the Thursday night party, is under NDA.

  13. Brown Bag it. Many days there are "brown bag" sessions. These are speakers not from Apple who give entertaining, enlightening, or inspiring talks at lunchtime. Check the schedule; some of them are bound to be worth your time.

  14. Monday, Monday. I don't know what to say about Monday. The last few years, people started lining up before midnight the night before. I'm typically on East coast time and usually walk over around 4:15 to see what's going on. I've done the line, and I've done the have-a-leisurely-breakfast route, and both have their merits. If you straggle too much, though, they may start before you get into the room. This has happened to me twice. The tradeoff, of course, is that you'll be much better rested for the rest of the day.

    Waiting in line is not really my thing any more, but you do get to talk to a lot of very cool people while waiting in line, and there is a sense of camaraderie that develops when you do something silly with other people like that. Some people probably want me to suggest what time to get in line. I have no idea. Most people will get into the main room to see the Keynote. There will be some people diverted to the overflow rooms, but because the number of attendees is relatively low and the Presidio (the keynote room) is so big, it's a tiny percentage who have to go to the overflow rooms (maybe the last 1,000 to 1,500 or so, depending on number of VIPs in attendance). On the other hand, you'll actually get a better view in the overflow rooms unless you get in line crazy early - you'll get to watch it in real time on huge screens and you'll get to see what's happening better than the people at the back of the Presidio. So, go when you want to. If you want to get up early and go be one of the "crazy ones," cool! If you want to get up later, you'll still get to see the keynote sitting in a comfy room with other geeks.

  15. Turn off your MiFi/Clear/other wireless router. I'm so totally not kidding on this one. People will punch you if they find out you've got one turned on. Two years ago, so many people had MiFis and other mobile hotspots running during the keynote that it interfered with the conference center's (usually very good) WiFi network and disrupted some of the tech demos. Once you're in the building, you don't need it. They have a crazy fast pipe in the building, so just use the provided WiFi or wired connection and turn your wireless router off. Seriously.

  16. Park it once in a while There will be time between sessions, and maybe even one or two slots that have nothing you're interested in. Or, you might find yourself just too tired to take in the inner workings of some technology. In that case, there are several lounges around where you can crash in a bean bag chair, comfy chair, moderately-comfy chair, or patch of floor. There is good wi-fi throughout the building and crazy-fast wired connections and outlets in various spots. So, find a spot, tweet your location, and zone out for a little while or do some coding. You never know who you might end up talking with. If you move around too much, well… let's just say a moving target is harder to hit than a stationary one.

  17. Twitter is invaluable. There's really no better way to hook up with people you didn't travel with than Twitter. There used to be problems with Twitter staying up during the keynotes, but that seems to be resolved and we've had several years without major outages during the keynote.

  18. It's okay to leave. Don't worry if a few minutes into a session you decide that you've made a horrible mistake and it's too boring/advanced/simple/etc, or you're just too damn hungover. Just get up and leave quietly and go to a different session or sit down somewhere. Nobody is going to be offended if you leave politely and without causing a disturbance.

  19. Bring proof of age on Thursday night. The official party is always on Thursday night, and it's always a blast. There's good food, good drink, great company, and sometimes a pretty good band. They are pretty strict about making sure only people who are over 21 get alcohol. So, if you want to have a drink or five on Thursday, don't leave your license or passport in your hotel room, even if you're 70 years old. Also, if you're under eighteen, I have some bad news: you can't attend the bash, sorry.

  20. It's okay to take breaks. Your first time, you're going to be tempted to go to every session you possibly can. Somewhere around Wednesday or Thursday, though, that effort, combined with lack of sleep, is going to take its toll on you. If you're too tired or overwhelmed to process information, it's okay to hole up on a couch or at a table instead of going to a session, or even to go back to your hotel (you did get a close one, right?). In fact, it's a darn good idea to map out a few "sacrificial" time slots that you won't feel bad about missing, just in case you need a break. You don't want to burn out and then miss something you are really interested in. And some of the best, more advanced sessions fall at the end of the week, so don't shoot your wad early in the week.

  21. Get a close hotel If at all possible, try to get a hotel within two blocks, and definitely not more than five blocks, from Moscone West. Five blocks doesn't seem like a lot, but it can become quite a hassle, especially if you're north of Moscone West, because you'll be climbing a pretty decent hill to return to your hotel each night.

  22. Official Evening Events In addition to the Thursday night Beer Bash, there are other official activities in the evening that are very entertaining and usually happen in the early evening before the parties really get going. The two stalwarts are the Apple Design Awards and Stump the Chumps (it's actually called "Stump the Experts", but most of the participants refer to it as just "Stump"). Stump the Experts is an Apple trivia game-show-like event with notable tech luminaries and former Apple employees. Lots of sharp wits and deep knowledge of Apple make for some good entertainment. There used to also be a Monday night reception and cocktail hour, but if memory serves, it hasn't happened in several years now.

  23. Take BART If you're flying into either SFO or OAK and are staying near Moscone West (or near any BART station), there's really no reason to bother with renting a car or taking a cab from the airport. Just take BART, get off at the Powell Street station, and walk south up 4th Street. Moscone West will be about four blocks down, on your right.

  24. Bring a Sweatshirt or Jacket A lot of first-timers assume that it's California in the summer, so it's going to be hot. Well, it could be, during the middle of the day, but look up Mark Twain's quote about San Francisco in the summer. It can be downright chilly in San Francisco in the summertime, especially in the evenings and early mornings. Bring a sweatshirt or light jacket, and wear layers, because the temperature differential over the course of the day can be forty or fifty degrees Fahrenheit.

  25. Sample Code Many sessions will have sample code, usually downloadable from the schedule or class descriptions web pages. The sample code will stay up for a while, but may not stay around forever, so it's a good idea to download any code samples you want as soon as you can. Edit: It looks like starting with 2009, you can get to the old source code for years you attended by logging in to ADC on iTunes, however I always save off a copy just in case.

  26. Get a Battery Pack You might want to consider a battery pack for your iPhone and/or iPad. You'll be in for some very long days, and it's not uncommon for your phone to be bone dry by early evening if you don't remember to charge it during the day. AT&T reception in San Francisco is notoriously bad, and that takes a toll on battery life.

  27. Don't Sound Like a N00b It's technically called the "World Wide Developer's Conference", so logically, you'd expect people to refer to it as "the WWDC" (e.g. "I'm going to head over to the WWDC")… only people rarely do. It's just "WWDC" ("Are you going to WWDC this year?"). Less commonly, it's also called "DubDubDeeCee" or just "Dubdub" ("Man, what an awesome Dubdub that was", or "What time are you heading over to Dubdub?").

  28. American Drinking Age If you're coming from a country with a civilized drinking age, and you're under the age of 21, you're in for a bit of an unpleasant surprise: you won't be allowed to drink here, and most places are very strict about it, because they will lose their license to serve alcohol if they're caught serving an underage person.

  29. Clean up your mess! I never thought I'd have to say this, but the last two years, I noticed a disturbing thing: people leaving trash and garbage all over Moscone. It was especially bad during the keynote line, but even during the rest of the conference, it was embarrassing. Don't. Just really don't. There are garbage cans and recycling bins. Use them. You're an adult, and even if you weren't, your mom's not at Moscone to clean up after you.

  30. Update your Avatars: I know, you like that picture from ten years ago better. I know Calvin and Hobbes are just to die for. I know everybody loves those eight-bit avatars (or not). But if you want the people you know online to be able to find you in meatspace, it really helps if they know who they're looking for.

Have more suggestions for first-timers? Throw them in the comments!

©2008-2013 Jeff LaMarche.
http://iphonedevelopment.blogspot.com

Posted on 29 April 2013 | 8:02 am