Blog

News, Updates and General Ramblings

Recipes 1.5.0 Running on iOS 4.x/5.x

Hi everyone,

Unfortunately, the brand new 1.5.0 update contains a bug that crashes the app every time you try to open a recipe, author, drink, etc. on iOS 4.x and 5.x. A fix is already on its way to Apple…

Alex

Update (Nov 13, 2012): Version 1.5.1 containing the fix is now available in the AppStore.


Recipes on AppStore Front Page

… granted, it’s only the “food and drinks” section AND it’s only in the Austrian AppStore, but Recipes is currently listed in the “What’s hot”-section!!! Missed Germany by one place (!) a couple of days ago… pretty damn cool… : )


Totemo Prototype

Just uploaded a video of the new Totemo for iPhone/iPad prototype. While showing the guys and gals of Surprised Stare Games the current version of Streetsoccer, Tony – the author of Totemo – suggested that the mixed 2D/3D camera control would be perfect for Totemo. I don’t know why, but for some reason his idea struck me, and I immediately found myself thinking “I wonder how fast I can do this”. So a train ride back to Hamburg and a few hours here and there over the course of the week resulted in this first prototype.

Totemo Prototype

The graphics have been taken from the original rule book; the model and ambient occlusion maps were created very quickly in Blender. Since I was in a hurry, the blocks look rather dark. Anyway, this is just a tech demo, the final version will probably look nothing like this. I had an idea last night on how to turn this into a fun single-player solitaire version as well; I’ll probably do that before I rework the Streetsoccer AI for Totemo.

Funny thing is, I should have spent that time continuing to work on Streetsoccer. However, changing the “perspective” on the old code base helped a lot in solving a number of problems Streetsoccer had. I’ve found various bugs in the animation system, camera control, rendering, … so in the end, I made good progress on Streetsoccer this week as well…


Recipes 1.4.0 and 1.5.0

Hi everyone,

First post in a while and first Recipes update in a while, too : ) . As already mentioned in another post, the introduction of iCloud changed some rules on what Apple permits when submitting updates. Although I don’t plan to support iCloud in the near future (database synchronization = programming headaches), code that had been in the app since day one had to be changed. Well, as of last week, 1.4.0 is out and the app is finally up to date again…

… or almost so. What was planned as a quick 1.4.1 has become probably the biggest update ever in the history of the app: 1.5.0. The new version – just submitted to Apple a few seconds ago – contains a huge list of changes, some of which have been on the hot list of requests for a while! Here are a few I just want to mention quickly:

  • A new web-importer: A while ago, I stumbled upon Google Recipes, a dedicated search engine for recipes. It uses HTML metadata (Microformat, Microdata, RDF) to allow specific searches based on ingredients and so on. I’ve added a new importer to the app that uses the exact same information to import recipes. In keeping with the theme of the app, I didn’t want to add something that automatically rips off huge bunches of recipes from other people. Think of this more as a replacement for writing down the ingredients list manually.
  • Import/export via iTunes: It’s no longer necessary to mail recipes to yourself. Simply upload them via iTunes to the app’s document folder on the device.
  • Description/comment for ingredient entries in a recipe.
  • Images no longer stretch when being imported.
  • Authors can be created directly while entering new recipes.
  • Fixes to various importers, mostly Rezkonv.
  • … and lots more

I was actually surprised myself how much stuff came together. It feels like a very mature product, which is fitting considering that it’s now almost 2 years (!) since the very first version came out. Feedback over the years has mostly been overwhelming; I just wish I had managed to iron out a few of the nasty bugs early and avoid a few 1-2 star ratings in the app store. But apparently, every app has those, so… : )

Once 1.5.0 is out, development on Recipes will go into maintenance mode again. I’ve pretty much packed in everything I personally ever wanted (although the list of ideas is still quite long), and the next big step will be adding iPad support. But first, it’s full steam ahead with Streetsoccer again…

Alex


iOS 6 or the Price of Progress

Just downloaded Xcode 4.5 and migrated Streetsoccer to support iOS 6 and the new, longer iPhone 5 screen. This was necessary as Apple only allows app submissions built with the latest development environment, which as a developer I can totally understand. Supporting old versions is always a source of major headaches. What I cannot really understand is why they dropped support for pre-4.3 devices in Xcode 4.5! The iOS 5.1 SDK was still able to target them, so why drop it now?

I was totally unaware of this and – of course – only noticed it AFTER updating my system. This rendered two of my test devices (a 1st and 2nd gen iPod Touch) completely obsolete, one of which I had just recently sent to someone for testing Streetsoccer beta builds, and to whom I was planning to send an updated version soon.

So after 5 years, my very first iPod Touch has lost its last reason to stick around. I basically got it the first day it was available in Germany and started programming for it the day the SDK went public. I distinctly remember a good friend of mine asking why I “bought this piece of electronic garbage” that had no real use compared to his stylus-based mobile phone. Aaaah, those were the days. Now you almost cannot buy an iPhone because everyone else has one!

Hopefully, my 1st-gen iPad will stick around just a bit longer. They dropped iOS 6 support for it already, so it’s only a question of time until I won’t be able to develop for it I guess…


Streetsoccer Database and Statistics

Hi everyone,

Long time no post, I know… as always, it’s been a mixture of ambitious feature development and life simply getting in the way of ambitious feature development. For the last couple of weeks, I’ve been implementing and testing a new database feature inside the app which stores all sessions played.

Up to now, all the app did was store each session in its own file, basically just dumping the runtime data to the device. This worked fine, but after talking to a couple of beta testers and playing the new Lost Cities for iOS, I realized that Streetsoccer definitely needed some form of statistics. Although building and testing the database system was quite a lot of work, it actually paid off and solved a number of other problems I had postponed. For example, the Formation menu already let users customize the names and numbers of their players, but that information was not used in the in-game graphics simply because there was no data structure to put it in yet. The same goes for the team name and storing a customized team kit. While the in-game graphics still don’t use that data, it’s now mostly a question of designing where/how to show them rather than actually implementing it (except for the team kits, which will probably take a lot of work).

The actual schema got a bit more complex than I initially thought it would, but it opens up a lot of possibilities. For example, the soccer players are not fixed to one team but are instead assigned on a per-match basis. This way, a team can potentially – at some later point in development – change its line-up of players for each match, and a player can be used by multiple teams. A separate guest account has been integrated so that you don’t mess up your team statistics when handing your device to others. A human player can have multiple teams, and so on and so forth. Of course, none of that is exposed in the UI right now, but the database is already able to handle it.

The most apparent bonus of having a database is of course the possibility to do proper statistics. One beta user (hey Antii) did an astonishing analysis of his matches against the AI using an Excel sheet, and now I’m trying to re-create something similar in the app. Most important is of course the die roll distribution, but fields run per player piece, the ratio of spending points on running versus shooting the ball, most occupied positions and much more should be interesting. I’m still trying out mockup designs, trying to figure out how to show the raw numbers. Here are some early ideas:

Feel free to post ideas/suggestions/criticism in the comments.

In other news, I’ve got a couple of beta users testing the app and the feedback so far has been quite good. Of course there are still a lot of things to do, but the most apparent problem seems to be the still-prototype nature of the in-game graphics. Unfortunately, all potential 2D/3D artists I was in contact with at one point or the other have dropped out, so I guess I’ll be doing the final ones myself. The main question is still whether to keep the top-down 3D look with a user-adjustable camera or just redo everything in hand-painted 2D…

That’s it for now. There have been other developments with the app but I’ll save those for the next post…

Alex


How To Code a Strong AI for an iOS Boardgame

One of the most daunting tasks when developing a computer version of a boardgame is creating a good computer opponent. Granted, with the advent of mobile devices and MMOs, it has become more important to connect users with each other than to have a strong AI. However, there are still lots of situations where it is simply more convenient to play against the computer: The AI won’t complain if you take a break and continue the game days later, it’s independent of your network connection, and it’s available even at the strangest hour. So having an AI that is fun and/or challenging is great. A lot of users won’t notice it or will simply take it for granted, but pretty much everyone will notice a bad one or none at all.

Looking Back

Streetsoccer the Boardgame

I still remember playing the original Age of Empires, which I loved with a passion. I may not be the most astonishing general there ever was and – to be honest – I cannot really remember if I actually played it at the highest level of difficulty. Still, after all those years I remember something strange: Time and again, it worked to attack an opponent from the sea side and – when done right – use that to completely crush the opponent. The ships had limited range but were pretty strong, so when a guard tower placed on the coast was heavily damaged, the AI would send a worker to repair it. The trick was not to destroy the tower but to cease fire and instead hit the poor repair guy. Again and again, the AI would send new repair guys until it ran out of workers. One just had to make sure to never actually destroy that tower…

So although the AI was in general pretty strong, it managed to appear stupid. The inverse is also possible: For years, people have wondered about how the ghosts in Pacman cooperate to corner you. As far as I’ve heard, they each simply had a different – fairly simple – strategy to choose their path, and users interpreted that as cooperation. So developing an AI that is fun is a tricky thing, especially if you have limited computational resources as on a mobile device. This article will show you a number of techniques that I applied when writing the AI for StreetSoccer.

Alpha-Beta Pruning

Book artificial intelligence

On a basic level, StreetSoccer is similar to chess: There are two players, all information is available to both players, and there is one winner in the end. So it stands to reason that the same techniques developed for chess engines can be used for StreetSoccer as well. The basic algorithm I used is called alpha-beta pruning. In essence, the AI generates all possible moves for a given situation, evaluates each possible move and picks the one with the highest score. That is called a 1-ply search. A 3-ply search would first generate all moves for the current situation, then generate all the opponent’s moves for each of the generated moves, and again all the AI’s moves for the opponent’s moves. The decision of which top-level move to choose is made by minimizing/maximizing the scores evaluated on the lowest level: If it’s the AI’s turn, it will pick the branch with the highest score, and if it’s the opponent’s turn, it assumes the opponent will choose the branch that puts the AI in the worst situation. Alpha-beta pruning is an optimization technique on top of that which drops a branch as soon as one can derive that it is worse than a previously found result.
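
To make the recursion concrete, here is a minimal alpha-beta sketch in plain C. All types and helpers (GameState, Move, generateMoves and friends) are hypothetical stand-ins, not the actual StreetSoccer code:

#include <math.h>   // for INFINITY

typedef struct GameState GameState;            // opaque board state
typedef struct { int piece, dx, dy; } Move;    // hypothetical move encoding
enum { MAX_MOVES = 512 };

extern int   generateMoves(GameState *state, Move *outMoves);
extern void  applyMove(GameState *state, Move const *move);
extern void  undoMove(GameState *state, Move const *move);
extern float evaluate(GameState const *state);

// Returns the best achievable score from the maximizing player's point of
// view, searching `depth` plies deep.
static float alphaBeta(GameState *state, int depth, float alpha, float beta, int maximizing)
{
    if (depth == 0)
        return evaluate(state);               // leaf: score the board

    Move moves[MAX_MOVES];
    int count = generateMoves(state, moves);

    float best = maximizing ? -INFINITY : INFINITY;
    for (int i = 0; i < count; ++i)
    {
        applyMove(state, &moves[i]);
        float score = alphaBeta(state, depth - 1, alpha, beta, !maximizing);
        undoMove(state, &moves[i]);

        if (maximizing) { if (score > best) best = score; if (best > alpha) alpha = best; }
        else            { if (score < best) best = score; if (best < beta)  beta  = best; }

        if (beta <= alpha)
            break;   // prune: the other player will never let this branch happen
    }
    return best;
}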

Describing alpha-beta pruning in full is outside the scope of this article, and there are already a lot of good resources on it. I recommend checking out the book Artificial Intelligence for Games by Millington and Funge, which helped me a lot with the basic concepts. The standard volume on the topic is of course “Artificial Intelligence – A Modern Approach” by Russell and Norvig, but personally I found it too “theoretical”, for lack of a better word. There is also a recommendable series of tutorials on chess programming on GameDev. The rest of this article assumes that the reader is familiar with the basic concepts and implementations of alpha-beta pruning and focuses on the implementation details not discussed in the literature.

Iterative State Changing

One of the first lessons learned was that it’s usually a bad idea to generate all moves for a level, store them in a list and then evaluate each one in turn (which might require recursing to the next deeper level). The main motivation for this approach is usually a desire to filter out duplicate moves or do some sort of ordering.

Especially when doing a multiple-ply search, the cost of temporarily allocating the data structures to store the moves and destroying them again will slow down the computation tremendously. Instead, it worked much better to have just one state/board situation in memory and iteratively modify it. So if, for example, the move generation wants to move a piece by one field, apply that move to the state, call the next level of recursion, and when returning from the call simply reverse the step. This usually just requires a very small number of variables to be stored on the stack instead of allocating massive numbers of moves on the heap.
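
Here is a minimal sketch of that apply/recurse/undo pattern with a hypothetical state layout; the real code additionally has to handle the ball, the movement rules and so on:

// Hypothetical state: piece coordinates are modified in place during the search.
typedef struct { int pieceX[12]; int pieceY[12]; } BoardState;

extern float searchDeeper(BoardState *state, float alpha, float beta, int movementPointsLeft);

// Apply one single-field step, recurse, then reverse the step -- only two
// ints live on the stack, and no move lists are allocated on the heap.
static float tryStep(BoardState *state, int piece, int dx, int dy,
                     float alpha, float beta, int movementPointsLeft)
{
    int const oldX = state->pieceX[piece];
    int const oldY = state->pieceY[piece];

    state->pieceX[piece] = oldX + dx;      // apply the step in place
    state->pieceY[piece] = oldY + dy;

    float const score = searchDeeper(state, alpha, beta, movementPointsLeft - 1);

    state->pieceX[piece] = oldX;           // reverse the step on the way back up
    state->pieceY[piece] = oldY;

    return score;
}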

Killer Moves

Alpha-beta pruning works best if the moves are ordered to maximize the pruning effect. While doing a complete ordering is a bit difficult when using iterative state changing, what can easily be done is to at least try the single best move first. For example, in StreetSoccer, movement is first computed for the player that is closest to the ball, by the reasoning that he has the biggest potential of actually moving the ball to a good location. That was quite a noticeable improvement in performance.
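
A sketch of that ordering step; the piece count and the distanceToBall helper are assumptions:

typedef struct BoardState BoardState;  // opaque; stands in for the real state

enum { PIECES_PER_TEAM = 5 };          // assumed team size

extern int distanceToBall(BoardState const *state, int piece);

// Search the piece closest to the ball first -- it has the best chance of
// producing the strong move that triggers an early cutoff.
static void orderPiecesByBallDistance(BoardState const *state, int order[PIECES_PER_TEAM])
{
    for (int i = 0; i < PIECES_PER_TEAM; ++i)
        order[i] = i;

    // With a handful of pieces, a simple selection sort costs next to nothing.
    for (int i = 0; i < PIECES_PER_TEAM - 1; ++i)
        for (int j = i + 1; j < PIECES_PER_TEAM; ++j)
            if (distanceToBall(state, order[j]) < distanceToBall(state, order[i]))
            {
                int const tmp = order[i];
                order[i] = order[j];
                order[j] = tmp;
            }
}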

Use Transposition Tables

A transposition table is a hash table that “caches” evaluation results. If you are wondering whether you should implement one or not: do it! The key term to look for is “Zobrist keys”, which are encoded representations of a board situation and are used for indexing the entries in the transposition table.

Depending on the cost of your evaluation function and how likely your move generation is to produce duplicates, this can save you a lot of time. But make sure to add some form of debugging statistics and optimize the number of hash collisions.
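
A rough sketch of such a table; the sizes, names and state layout are assumptions, and a real implementation would typically also update the Zobrist key incrementally while applying/undoing moves:

#include <stdbool.h>

enum { NUM_PIECES = 11, NUM_FIELDS = 96, TT_SIZE = 1 << 16 };  // assumed sizes; tune TT_SIZE against your collision statistics

typedef struct { int fieldOfPiece[NUM_PIECES]; } BoardState;   // hypothetical layout
typedef struct { unsigned long long key; float score; int depth; } TTEntry;

static TTEntry table[TT_SIZE];

// One random 64-bit number per (piece, field) pair, filled once at startup.
static unsigned long long zobrist[NUM_PIECES][NUM_FIELDS];

static unsigned long long hashState(BoardState const *state)
{
    unsigned long long key = 0;
    for (int p = 0; p < NUM_PIECES; ++p)
        key ^= zobrist[p][state->fieldOfPiece[p]];  // XOR one number per piece
    return key;
}

// True if a stored result for this position (searched deep enough) can be
// reused; the full key is kept in the entry to detect index collisions.
static bool lookupScore(unsigned long long key, int depth, float *outScore)
{
    TTEntry const *entry = &table[key % TT_SIZE];
    if (entry->key == key && entry->depth >= depth)
    {
        *outScore = entry->score;
        return true;
    }
    return false;
}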

Avoid the Cost of Method Calls

A turn in StreetSoccer gives the player an amount of movement points that can be used to move the pieces. So unlike chess, a turn is not done after moving a piece by one field; instead, a piece can for example move two fields to the left, one up and one down again to spend 4 points of movement.

The original implementation of the StreetSoccer AI used a single call to the move generation function to move a player piece by exactly one field and deduct one point of movement. It then recursively called itself until all points were spent. That makes the implementation fairly easy but produces a large call tree. It is especially true for Objective-C, but on some level also for pure C, that each method invocation comes at a small performance cost. However, since we work recursively and do a multiple-ply search, that small cost quickly sums up to a considerable amount of time spent just invoking the method. Or in other words: If you do something a million times, you will notice it even if each individual execution takes just a fraction of a second.

In a long computation that took around 60 seconds, objc_msgSend took up to 4 seconds before I optimized the call tree. So I first changed all but the highest-level methods to pure C functions. Then I rewrote the move generation to produce all possible moves within a single call to the generation function.
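
The pattern looks roughly like this (all names hypothetical): the hot inner routines become static C functions that receive an explicit context struct, and only the top-level entry point remains an Objective-C method:

#import <Foundation/Foundation.h>

// What used to live in instance variables is passed explicitly, so the hot
// path needs no self pointer at all.
typedef struct
{
    float alpha;
    float beta;
    int   depth;
    /* ... board state ... */
} SearchContext;

// Hot path: a static C function, called directly -- no objc_msgSend dispatch.
static float searchRecursive(SearchContext *ctx)
{
    if (ctx->depth == 0)
        return 0.0f;  // placeholder for the evaluation function

    /* ... generate moves, recurse via plain searchRecursive() calls ... */
    return 0.0f;
}

@interface StreetsoccerAI : NSObject  // hypothetical class name
- (float)searchWithDepth:(int)depth;
@end

@implementation StreetsoccerAI

// Only the top-level entry point still pays the Objective-C dispatch cost.
- (float)searchWithDepth:(int)depth
{
    SearchContext ctx = { -10000.0f, 10000.0f, depth };
    return searchRecursive(&ctx);
}

@end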

Avoid Duplicate Moves

Even if one is using a transposition table to store already calculated board positions, checking the table whether a given situation has already been calculated costs time. So I rewrote the move generation yet again and made sure that it would not generate duplicate positions. For example, a player piece may move left and then up, or up and then left, to arrive at the same board position. My initial idea was that having a fast move generation and then relying on the transposition table to filter out duplicates would be more efficient than having a complex move generation. However, in my case it turned out to be vastly more efficient to have a slightly more complex move generation and thereby avoid a lot of duplicates. For one, checking the transposition table takes time, and for another, transposition tables only have a limited capacity. What happened was that although the transposition table should have stored the duplicated boards once and then reused the result, simply having more requests going to the table increased the likelihood of hash collisions, and already pre-computed results were overwritten by other board situations. So when a duplicated move was checked against the transposition table, chances were that the previous entry had already been overwritten by something else.

Chance Nodes: The Roll of a Die

A turn in StreetSoccer starts with a die roll, giving the player between 1 and 6 movement points. Unfortunately, chance nodes are often left out when it comes to discussions of alpha-beta pruning. I stumbled upon Joel Veness’ Bachelor thesis “Expectimax Enhancements for Stochastic Game Players”, which briefly discusses the required extensions.

Instead of generating the movement tree for a specific die value, a new level of recursion starts by generating and evaluating the movement tree for all possible results of the die roll. The score of the tree is then the sum of the scores weighted by their probability of occurring. To do alpha-beta pruning, one checks after each possibility whether the sum of the already computed scores plus the worst/best score of the remaining possibilities already invalidates the alpha/beta criteria. Or in other words: If I have computed the scores for die rolls 1-4 and they are so bad that even if the opponent could score a goal with 5-6 he wouldn’t pick that movement branch, I don’t have to compute 5-6 at all.

char evaluatedDieValues = 0;
float summedScore = 0.0f;
for ( char dieValue = 1; dieValue <= 6; ++dieValue, ++evaluatedDieValues )
{
    // Bounds on the final average, assuming all remaining die values turn out
    // as good/bad as a goal in either direction.
    float const upperBound = ( summedScore + ( 6 - evaluatedDieValues ) * AI_GOAL_SCORE ) / 6.0f;
    float const lowerBound = ( summedScore + ( 6 - evaluatedDieValues ) * -AI_GOAL_SCORE ) / 6.0f;
    // Rescale the outer alpha/beta window for the sub-search of this die value.
    float subAlpha = MAX(-AI_GOAL_SCORE, ( *alpha - upperBound ) * 6.0f + AI_GOAL_SCORE );
    float subBeta = MIN(AI_GOAL_SCORE, ( *beta - lowerBound ) * 6.0f - AI_GOAL_SCORE );
    float possibleScore = ( currentState.isPlayer1Turn ? -10000.0f : 10000.0f);
    if ( [self generateMovementWithAlpha:&subAlpha beta:&subBeta bestFoundScore:&possibleScore] )
    {
        summedScore += possibleScore;
    }
    else
    {
        // Pruned: fill in the remaining die values with their worst/best case.
        if ( currentState.isPlayer1Turn == NO)
        {
            summedScore += possibleScore + ( 6 - evaluatedDieValues - 1 ) * -AI_GOAL_SCORE;
        }
        else
        {
            summedScore += possibleScore + ( 6 - evaluatedDieValues - 1 ) * AI_GOAL_SCORE;
        }
        evaluatedDieValues = 6;
        break;
    }
}
// Expected score: the average over the six equally likely die values.
return summedScore / (float)evaluatedDieValues;

Note that depending on your specific game rules, it might make a difference in what order you evaluate the different possibilities. For example, evaluating the 6 before the 1 might lead to more pruning, or not; it depends on your game. But it is one of the things you should profile for.

Optimize the Evaluation Function

On an average board situation with a 4 rolled, a 3-ply search of StreetSoccer can lead to millions of board situations that have to be evaluated. So even checking a bool for being true produces a cost. Here are some tips on what worked for me:

Lazy Evaluation

Think carefully about the order of computing the individual factors that contribute to your evaluation function. For example, the StreetSoccer AI uses the distance of the ball to both of the goals as one criterion. That criterion is also used to check if a move is valid: The rules say that no player is allowed to block the ball such that the other player could not score a goal even with an infinite amount of movement points. So if the distance computation fails, the rest of the evaluation function can be aborted, saving a couple of operations.
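
A sketch of such an early-out evaluation function; the helper and the weighting constant are hypothetical:

#include <stdbool.h>

typedef struct BoardState BoardState;  // opaque; stands in for the real state

// Hypothetical helper: returns false if the ball cannot reach one of the goals.
extern bool ballDistancesToGoals(BoardState const *state, int *own, int *opponent);

#define DISTANCE_WEIGHT 1.0f           // assumed weighting

// Cheapest (and possibly aborting) criterion first: the ball-to-goal
// distances double as the legality check for the blocking rule.
static bool evaluateBoard(BoardState const *state, float *outScore)
{
    int distOwn, distOpponent;
    if (!ballDistancesToGoals(state, &distOwn, &distOpponent))
        return false;                  // illegal blocking position: skip the rest

    float score = (float)(distOpponent - distOwn) * DISTANCE_WEIGHT;
    /* ... the remaining, more expensive criteria go here ... */

    *outScore = score;
    return true;
}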

Remove Debug Code

Again, do something very fast often enough and it will hurt you. I had some conditional code in both the movement generation and the evaluation function, all of which was used just for debugging. I replaced the if-statements with preprocessor defines and got, again, a small speed boost.
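
For example, something along these lines, with a hypothetical AI_DEBUG_STATISTICS switch:

// In release builds, the macro is compiled out entirely -- no branch remains
// in the hot path, unlike a runtime if (debugEnabled) check.
#if AI_DEBUG_STATISTICS
    static long gNodesVisited = 0;
    #define AI_COUNT_NODE() (++gNodesVisited)
#else
    #define AI_COUNT_NODE() ((void)0)
#endif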

Bit-Shifting

The distance computation in StreetSoccer is rather tricky. A player piece cannot walk through a field that is occupied by any other piece. The ball cannot be moved onto fields that are occupied by the opponent, but one gains a point of movement when it moves onto a field occupied by a member of one’s own team.

I tried at least four implementations of those distance functions: Even the simplest one was a highly optimized region-growing algorithm, but it required a lot of ifs and some temporary memory to mark which fields had already been processed. The final version (which gave me a 50% speed boost) is actually just a set of bit-shifting operations!

The trick was to notice that the 6×10 fields within the lines can be represented as bits in a single long long variable. At first, only the field containing the ball is set to one. Then, in each iteration, the fields next to the already processed fields are calculated. For example, all fields adjacent to and north of the already processed fields are calculated by simply shifting the processed fields by 6 bits to the left, and all to the south by 6 bits to the right. All fields to the west and east are done by shifting exactly one bit; one only has to be careful not to produce any overflow to the other side. Handling the player pieces is a bit tricky, but in the end the algorithm uses about 20 bit operations times the number of iterations it takes to reach the ball, which needless to say is much, much faster than checking fields individually.

........     ........     ........
.  1   .     .  1   .     .  1   .
. C4D  .     . C4D  .     . C4D  .
.     2.     .     2.     .     2.
.      .     .      .     .  X   .
.      .     .  X   .     . XXX  .
.  X  B.     . XXX B.     .XXXXXB.
.    5 .     .  X 5 .     . XXX5 .
.      .     .      .     .  X   .
.  3   .     .  3   .     .  3   .
.  A  E.     .  A  E.     .  A  E.
........     ........     ........

So what about player movement? Players can walk outside the lines, so all 8×12 fields have to be respected, and that doesn’t fit into a single long long. However, it does fit into two, so the player distance computation actually uses two long longs to represent the left/right halves of the field. One just has to be careful to have the front correctly grow from one half to the other.
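
To make the idea concrete, here is a minimal sketch of one expansion step for the ball distance (the 6×10 case in a single 64-bit value), under an assumed row-major packing with 6 bits per row; the column masks keep the horizontal shifts from wrapping into the neighboring row, and the own-team movement bonus is left out:

typedef unsigned long long Bitboard;

#define FIRST_COLUMN 0x0041041041041041ULL  // bit 0 of each 6-bit row
#define LAST_COLUMN  0x0820820820820820ULL  // bit 5 of each 6-bit row
#define BOARD_MASK   0x0FFFFFFFFFFFFFFFULL  // only bits 0..59 are board fields

static Bitboard growFront(Bitboard front, Bitboard blocked)
{
    Bitboard const north = front << 6;                   // whole front one row up
    Bitboard const south = front >> 6;                   // one row down
    Bitboard const west  = (front & ~FIRST_COLUMN) >> 1; // mask first so nothing
    Bitboard const east  = (front & ~LAST_COLUMN)  << 1; // wraps to the other side
    return (front | north | south | west | east) & ~blocked & BOARD_MASK;
}

// Number of expansion steps from the ball to a target field; -1 means the
// target is unreachable, which doubles as the blocking-rule check.
static int ballDistance(Bitboard ball, Bitboard target, Bitboard blocked)
{
    Bitboard front = ball;
    for (int steps = 0; ; ++steps)
    {
        if (front & target)
            return steps;
        Bitboard const grown = growFront(front, blocked);
        if (grown == front)
            return -1;                                   // front stopped growing
        front = grown;
    }
}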

Don’t Assume, Profile

In general, never assume anything. The great thing about iOS development is that a great profiler is already included in the development environment. So use Apple’s Instruments tool, and use it often. Make sure that you have a fixed set of board positions that you run through the computation as a personal benchmark, and check if the total time goes down. Instruments can only give you the percentage of time spent in a function, and it is easy to optimize one function only to notice that it has increased the computation time somewhere else. Which brings me to the next two points:

Record Games

Add something to your game that records the games you play and allows you to watch them again. Not only is it a great feature for your users, it will be invaluable to you. Record everything and later analyze where the AI performed badly. Use these situations to verify that your modifications actually solved the problem.

Test, Test, Test

If you have never used unit tests, you should do so now. It’s next to impossible to debug a highly recursive algorithm such as alpha-beta pruning by stepping through the code. What worked great for me was to play against the AI and code a unit test for each misbehavior of the AI, with the “correct” solution as the expected test result. Over time, I accumulated 20-30 different situations, and now every time I modify the AI, I just run the test suite to verify everything still works as expected.
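
A sketch of what one such regression test can look like with the OCUnit/SenTestingKit framework that ships with Xcode; the game classes and the loader method are hypothetical:

#import <SenTestingKit/SenTestingKit.h>
#import "StreetsoccerAI.h"  // hypothetical header providing GameState and the AI

@interface AIRegressionTests : SenTestCase
@end

@implementation AIRegressionTests

// One recorded misbehavior, with the "correct" move as the expected result.
- (void)testAIBlocksOpenGoal
{
    GameState *state = [GameState stateFromFile:@"bug_open_goal.session"];  // hypothetical loader
    NSString *move = [StreetsoccerAI bestMoveNotationForState:state plies:3];
    STAssertEqualObjects(move, @"B5-B6-C6", @"AI should block the open goal");
}

@end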

Remember to also throw in some things that you always expect to fail and some that you always expect to be true. Rather late in development, I noticed that a sub-function that checked whether a certain game rule was fulfilled always returned true. Now I have like three different tests where the sub-function has to fail, and the problem will never come up again.

Unit tests are also great as a performance benchmark. Just make sure you run them on the device and not on your Mac, as the behavior can be quite different. For StreetSoccer, there is a speed factor of 5 or so between running the same situations on the Mac and on my iPad.

Character

Don’t just have one AI; give it different characters. This is usually done by artificially limiting the ply-level and/or using different evaluation functions. Especially when someone is new to the game, it won’t be fun to play against an AI that you have been tweaking for ages. It’s just plain bad salesmanship if your AI beats a potential customer 5:0 in the very first game… : )

Images are very important so that users can associate behavior with an AI, so put in something that allows the user to distinguish the AIs. There is a reason why the ghosts in Pacman had different colors!

It’s hard to do right, but having the AI do trash talk can be great. Things like “awh, you got me”, “damn, you’re lucky” or some character tag line can go a long way toward users actually loving/hating your AI.

Summary

So hopefully you got some inspiration from reading this. It is by far not meant as a complete “here is the code” article, but as something to point you in the right direction. All in all, the StreetSoccer AI is about 3000 lines of code, 50% of which are written in pure C instead of Objective-C. When I began, the AI would take about an hour for a single 3-ply search; now it takes about two minutes to run my entire test suite. Compared to chess, you might be wondering why it is so slow, but consider this: A single ply can lead to roughly 300 different board situations, plus there are chance nodes, which add a factor of 6, plus the movement rules are rather complex. So all in all, StreetSoccer is a surprisingly complex game to compute!

Writing AIs is a lot of fun and it is strangely satisfying when some code you wrote actually beats you for the first time. As always, feel free to add your thoughts in the comments.


Streetsoccer Play-By-Mail – Manual Move Entry

I’m just playing a game of StreetSoccer against Corne van Moorsel, the author of the StreetSoccer board game. We’re testing a new feature that allows one to play against people who don’t own the app (yet 🙂 ). Basically, I do my move, the app generates an email with a nice screenshot of the board, and he writes me back his move in a chess-like notation (e.g. “A3-B3-B4”). I haven’t seen another app doing it like this, but it feels awesome! Copy & paste his move into the app and boom, the pieces move as if he were here. I just have the biggest smile on my face that it works this great…
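
Parsing such a notation is pleasantly simple. A sketch, assuming columns lettered from A and rows numbered from 1 (not necessarily the app’s actual coordinate scheme):

NSString *notation = @"A3-B3-B4";  // as copied from the reply mail
for (NSString *field in [notation componentsSeparatedByString:@"-"])
{
    if ([field length] < 2)
        continue;                                              // skip malformed entries
    int column = [field characterAtIndex:0] - 'A';             // 'A' -> column 0
    int row    = [[field substringFromIndex:1] intValue] - 1;  // "3" -> row 2
    NSLog(@"step onto column %d, row %d", column, row);
}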


Xcode Bash Script for On-Demand PVR Texture Encoding

Do you know those tiny little things that always annoy you – for a reeaaaallly long time – but you never get around to fixing? For me, one of those things was the custom build phase step that converts my textures to the compressed PVR format. Streetsoccer uses some big textures for the ground as well as the skymap, and for some reason, Xcode seemed to rebuild those PVRs far too often. So I just sat down and finally modified my build script to check if the PVR version already exists.

To be honest, before rewriting the script, Xcode sometimes did rebuild the PVRs and sometimes it did not. So perhaps this is a moot point and texturetool does this check internally, but I don’t think so. Anyway, here goes:


function convert
{
    # Only convert if the PVR does not exist or the source texture is more recent
    if [ ! -f "$2" ] || [ "$1" -nt "$2" ]; then
        echo converting "$1" to "$2"
        xcrun -sdk iphoneos texturetool -m -e PVRTC --bits-per-pixel-4 -o "$2" -f PVR "$1"
    else
        echo skipping "$2", already up-to-date
    fi
}

convert "$SRCROOT/Models/BallDiffuseMap.png" "$SRCROOT/Models/PVR/BallDiffuseMap.pvr"
convert "$SRCROOT/Models/Baselayer.png" "$SRCROOT/Models/PVR/Baselayer.pvr"
convert "$SRCROOT/Models/BlueGoalieDiffuseMap.png" "$SRCROOT/Models/PVR/BlueGoalieDiffuseMap.pvr"

# ... of course you got to modify this list to match your textures and folders ...

Stylish UITextField & UILabel with Gradients and Drop Shadows

As promised in another post, here are some recent findings on achieving stylish text rendering on iOS: If you’re anything like me, you’re doing design mockups in Adobe Photoshop first, putting the different elements into PNGs and then coding the UI behavior and animations in Xcode. This works great up to the point where you hit one of two things:

  • You want to localize your app to various languages and have to render PNGs for each language
  • You have dynamic textual content, e.g. player names, high scores, …

The problem is that in order to get high-quality typography, one usually needs to add either a blurred drop shadow, a gradient or an outer glow. Since this is just a matter of ticking a checkbox in Photoshop’s layer style palette, we all have become so accustomed to seeing this that plain solid-color text just doesn’t do anymore.

While there are a number of tutorials and source code examples on the web for this, I’ve found that in my case, they lacked something in one area or the other. Hence this post.


Gradients

Let’s start with gradients: After some research, I stumbled upon Dimitris Doukas’ blog http://dev.doukasd.com/2011/05/stylelabel-a-uilabel-with-style/ . He creates a gradient image and then sets the UILabel’s textColor property to a pattern brush using UIColor::colorWithPatternImage: . There were a few problems that his code did not handle that were quite important for me:

  1. He does it for a UILabel but I needed it to work for a UITextField as well
  2. His code does not handle a UILabel that has multiple lines of text
  3. It did not work well for me when the frame is much larger than the text contained in it.

The second is quite easy to fix by analyzing how many lines the text will be split into and creating an appropriate gradient image. The only tricky bit is to make sure the first line of pixels of a text line does not spill into the previous line. In my first approach, I had a blue-to-red gradient, and sure enough the second text line started with a thin line of bright red. The frame issue can also be addressed by modifying his gradient creation routine a bit, no biggie.

Adapting his code for UITextField was rather straightforward except for – of course – the usual unexpected problems. Chief among them was that upon setting a pattern image color, the UITextField would not show any text while in editing mode. The only solution I have found for this so far is to implement the UITextFieldDelegate::textFieldShouldBeginEditing: method and temporarily set the textColor back to a non-image-based color. I would love to handle this inside my ExtendedTextField class as well, but using key-value observing did not seem to work.
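
For illustration, here is a sketch of that delegate workaround; editingColor is a hypothetical ivar of whatever object acts as the text field’s delegate:

// The pattern-image color makes the text invisible while editing, so swap in
// a plain color for the duration and restore the gradient afterwards.
- (BOOL)textFieldShouldBeginEditing:(UITextField *)textField
{
    editingColor = [textField.textColor retain];
    textField.textColor = [UIColor blackColor];
    return YES;
}

- (void)textFieldDidEndEditing:(UITextField *)textField
{
    textField.textColor = editingColor;  // the gradient pattern color from before
    [editingColor release];
    editingColor = nil;
}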

One trivial optimization of Dimitris’ code was to create the gradient with a width of 1 pixel. Since the pattern color brush repeats the texture anyway, it should reduce the memory footprint and be faster to generate, although I didn’t do any profiling on that. It just seemed to make sense.

Drop Shadows

Drop shadows were also an issue when going from UILabel to UITextField. There are two main approaches to doing drop shadows in iOS:

  1. CoreGraphics: Override drawTextInRect: and use CGContextSetShadowWithColor.
  2. CoreAnimation: Use CALayer::shadowOpacity and the various other shadow properties on CALayer and have CoreAnimation render the shadow for you.

Again, it turns out that UITextField is a bit tricky. I wanted to use CoreGraphics as this gives you the best performance, but on UITextField, the drop shadow ended up being cropped at the bottom all the time. So I currently use CoreAnimation for my ExtendedTextField and CoreGraphics for my ExtendedLabel. At first I – for the sake of consistency – tried to use CoreAnimation for both labels and text fields, but when animating the various elements in my UI, performance was just too bad.

On a side note, I found the shadow nice to use as an outer-glow replacement when I don’t need a drop shadow. For example, the score board in Streetsoccer uses subtle gradients and outer glows which are hardly noticeable, but if you see the before and after, it makes a huge difference.

Summary

I wish I had done this research sooner. Everything looks much more professional now. The only problem I have is that my labels currently use Photoshop’s Trajan Pro font, and that one is a) not available on iOS – and licensing fees for embedding fonts are in general quite outrageous – and b) I need full Unicode support for the text fields while Trajan Pro only covers roughly the ASCII characters. I can almost see myself buying a font creator tool and making my own custom TrueType font…

– Alex

Source Code

For completeness’ sake, here is the code as I currently use it. It is far from perfect, so if you decide to use it, do so at your own risk. I post it just for educational purposes.

ExtendedLabel.h

#import <UIKit/UIKit.h>

@interface ExtendedLabel : UILabel
{
    NSArray *gradientColors;
    UIColor *strokeColor;
    
    CGFloat shadowBlur;
    CGSize shadowOffset;
    UIColor * shadowColor;
    BOOL isGradientValid;
}

@property (retain) NSArray *gradientColors;
@property (retain) UIColor *strokeColor;

- (void)setShadowWithColor:(UIColor *)color Offset:(CGSize)offset Radius:(CGFloat)radius;

@end

ExtendedLabel.m


#import "ExtendedLabel.h"
#import <QuartzCore/QuartzCore.h>

@implementation ExtendedLabel
@synthesize gradientColors, strokeColor;

- (void)dealloc
{
    [gradientColors release];
    [strokeColor release];
    [shadowColor release];
    
    [super dealloc];
}

- (void)resetGradient
{
    if (CGRectEqualToRect(self.frame, CGRectZero))
    {
        return;
    }

    if ( [self.gradientColors count] == 0 )
    {
        self.textColor = [UIColor blackColor];
        return;
    }
    if ( [self.text length] == 0 )
    {
        return;
    }
    
    UIGraphicsBeginImageContext(CGSizeMake(1, self.frame.size.height));
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(context);
    
    int const colorStops = [self.gradientColors count];
    CGSize lineSize = [self.text sizeWithFont:self.font]; 
    CGSize textSize = [self.text sizeWithFont:self.font constrainedToSize:self.bounds.size lineBreakMode:self.lineBreakMode];
    CGFloat topOffset = (self.bounds.size.height - textSize.height) / 2.0f;
    CGFloat lines =  textSize.height / lineSize.height;
    
    size_t num_locations = colorStops * lines + 2;
    CGFloat locations[num_locations];
    CGFloat components[num_locations * 4];
    locations[0] = 0.0f;
    [[gradientColors objectAtIndex:0] getRed:&(components[0]) green:&(components[1]) blue:&(components[2]) alpha:&(components[3])];
    locations[num_locations - 1] = 1.0f;
    [[gradientColors lastObject] getRed:&(components[(num_locations-1) * 4]) green:&(components[(num_locations-1) * 4 + 1]) blue:&(components[(num_locations-1) * 4 + 2]) alpha:&(components[(num_locations-1) * 4 + 3])];
    for ( int l = 0; l < lines; ++l )
    {
        for ( int i = 0; i < colorStops; ++i )
        {
            int index = 1 + l * colorStops + i;
            locations[index] = ( topOffset + l * lineSize.height + lineSize.height * (CGFloat)i / (CGFloat)(colorStops - 1) ) / self.frame.size.height;
            
            UIColor *color = [gradientColors objectAtIndex:i];
            [color getRed:&(components[4*index+0]) green:&(components[4*index+1]) blue:&(components[4*index+2]) alpha:&(components[4*index+3])];
        }
        
        // Add a little bit to the first stop so that it won't render into the last line of pixels at the previous line of text.
        locations[1 + l * colorStops] += 0.01f;
    }
    
    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGGradientRef gradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
    CGPoint topCenter = CGPointMake(0, 0);
    CGPoint bottomCenter = CGPointMake(0, self.frame.size.height);
    CGContextDrawLinearGradient(context, gradient, topCenter, bottomCenter, 0);
    
    CGGradientRelease(gradient);
    CGColorSpaceRelease(rgbColorspace);
    
    UIGraphicsPopContext();
    self.textColor = [UIColor colorWithPatternImage:UIGraphicsGetImageFromCurrentImageContext()];
    UIGraphicsEndImageContext();
}

- (void)setShadowWithColor:(UIColor *)color Offset:(CGSize)offset Radius:(CGFloat)radius 
{
    shadowOffset = offset;
    shadowBlur = radius;
    [color retain];
    [shadowColor release];
    shadowColor = color;
        
    [self setNeedsDisplay];
}

- (void)setText:(NSString *)text
{
    [super setText:text];
    isGradientValid = NO;
}

- (void)setFont:(UIFont *)font
{
    [super setFont:font];
    isGradientValid = NO;
}

- (void)setFrame:(CGRect)aFrame
{
    [super setFrame:aFrame];
    
    isGradientValid = NO;
}

- (CGRect)textRectForBounds:(CGRect)rect
{
    return CGRectMake(rect.origin.x + MAX(0, shadowBlur - shadowOffset.width), rect.origin.y + MAX(0, shadowBlur - shadowOffset.height), rect.size.width - ABS(shadowOffset.width) - shadowBlur, rect.size.height - ABS(shadowOffset.height) - shadowBlur);
}

- (void)drawTextInRect:(CGRect)rect
{
    if ( isGradientValid == NO )
    {
        isGradientValid = YES;
        [self resetGradient];
    }
    
    CGContextRef context = UIGraphicsGetCurrentContext();
    
    //draw stroke
    if (self.strokeColor != nil)
    {
        CGContextSetStrokeColorWithColor(context, strokeColor.CGColor);
        CGContextSetTextDrawingMode(context, kCGTextFillStroke);
    }
   
    // Note: Setting shadow on the context is much faster than setting shadow on the CALayer.
    if ( shadowColor != nil )
    {
        // We take the radius times two to get the same result as setting the CALayer's shadow radius.
        // CALayer seems to take a true radius where CGContext seems to take a number of pixels (so 2 would
        // be one pixel in each direction or something like that).
        CGContextSetShadowWithColor(context, shadowOffset, shadowBlur * 2.0f, [shadowColor CGColor]);
    }
    
    [super drawTextInRect:rect];
}

@end

ExtendedTextField.h


#import <UIKit/UIKit.h>


@interface ExtendedTextField : UITextField
{
    NSArray *gradientColors;
    UIColor * placeholderColor;
    UIColor *strokeColor;
    
    CGFloat shadowBlur;
    CGSize shadowOffset;
    BOOL isGradientValid;
}

@property (retain) NSArray *gradientColors;
@property (retain) UIColor *placeholderColor;
@property (retain) UIColor *strokeColor;

- (void)setShadowWithColor:(UIColor *)color Offset:(CGSize)offset Radius:(CGFloat)radius;

@end

ExtendedTextField.m


#import <QuartzCore/QuartzCore.h>
#import "ExtendedTextField.h"


@implementation ExtendedTextField
@synthesize gradientColors, placeholderColor, strokeColor;

- (void)dealloc
{
    [gradientColors release];
    [strokeColor release];
    [placeholderColor release];
    
    [super dealloc];
}

- (void)resetGradient
{
    if (CGRectEqualToRect(self.frame, CGRectZero))
    {
        return;
    }

    // create a new bitmap image context
    UIGraphicsBeginImageContext(self.frame.size);
    
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    
    // push context to make it current (need to do this manually because we are not drawing in a UIView)
    UIGraphicsPushContext(context);
    
    //draw gradient
    CGGradientRef gradient;
    CGColorSpaceRef rgbColorspace;
    
    CGSize textSize;
    if ( [self.text length] != 0 )
    {
        textSize = [self.text sizeWithFont:self.font]; 
    }
    else
    {
        textSize = [self.placeholder sizeWithFont:self.font];
    }
    if ( textSize.height == 0.0f )
    {
        // Balance the context pushed above before bailing out.
        UIGraphicsPopContext();
        UIGraphicsEndImageContext();
        return;
    }
    
    //set uniform distribution of color locations
    size_t num_locations = [gradientColors count];
    CGFloat locations[num_locations];
    for (int k=0; k<num_locations; k++) {
        locations[k] = textSize.height / self.frame.size.height * (CGFloat)k / (CGFloat)(num_locations - 1); // distribute the stops evenly over the text's portion of the frame height
    }
    
    //create c array from color array
    CGFloat components[num_locations * 4];
    for (int i=0; i<num_locations; i++) {
        
        UIColor *color = [gradientColors objectAtIndex:i];
        [color getRed:&(components[4*i+0]) green:&(components[4*i+1]) blue:&(components[4*i+2]) alpha:&(components[4*i+3])];
    }
    
    rgbColorspace = CGColorSpaceCreateDeviceRGB();
    gradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
    CGPoint topCenter = CGPointMake(0, 0);
    CGPoint bottomCenter = CGPointMake(0, self.frame.size.height);
    CGContextDrawLinearGradient(context, gradient, topCenter, bottomCenter, 0);
    
    CGGradientRelease(gradient);
    CGColorSpaceRelease(rgbColorspace);
    
    // pop context
    UIGraphicsPopContext();
    
    // get a UIImage from the image context
    UIImage *gradientImage = UIGraphicsGetImageFromCurrentImageContext();
    
    // clean up drawing environment
    UIGraphicsEndImageContext();
    
    self.textColor = [UIColor colorWithPatternImage:gradientImage];
}

- (void)setShadowWithColor:(UIColor *)color Offset:(CGSize)offset Radius:(CGFloat)radius 
{
    shadowOffset = offset;
    shadowBlur = radius;    // also used by the textRect computations below
    
    self.layer.shadowOpacity = 1.0f;
    self.layer.shadowRadius = radius;
    self.layer.shadowColor = color.CGColor;
    self.layer.shadowOffset = offset;
    self.layer.shouldRasterize = YES;
    
    [self setNeedsDisplay];
}

- (void)setText:(NSString *)text
{
    [super setText:text];
    isGradientValid = NO;
}

- (void)setFont:(UIFont *)font
{
    [super setFont:font];
    isGradientValid = NO;
}

- (void)setFrame:(CGRect)aFrame
{
    [super setFrame:aFrame];

    isGradientValid = NO;
}

- (CGRect)textRectForBounds:(CGRect)rect
{
    return CGRectMake(rect.origin.x + MAX(0, shadowBlur - shadowOffset.width), rect.origin.y + MAX(0, shadowBlur - shadowOffset.height), rect.size.width - ABS(shadowOffset.width) - shadowBlur, rect.size.height - ABS(shadowOffset.height) - shadowBlur);
}

- (void)drawTextInRect:(CGRect)rect
{
    if ( isGradientValid == NO )
    {
        isGradientValid = YES;
        [self resetGradient];
    }
    
    CGContextRef context = UIGraphicsGetCurrentContext();
 
    //draw stroke
    if (self.strokeColor != nil)
    {
        CGContextSetStrokeColorWithColor(context, strokeColor.CGColor);
        CGContextSetTextDrawingMode(context, kCGTextFillStroke);
    }
    
    [super drawTextInRect:rect];
}

- (CGRect)placeholderRectForBounds:(CGRect)rect
{
    return CGRectMake(rect.origin.x + MAX(0, shadowBlur - shadowOffset.width), rect.origin.y + MAX(0, shadowBlur - shadowOffset.height), rect.size.width - ABS(shadowOffset.width) - shadowBlur, rect.size.height - ABS(shadowOffset.height) - shadowBlur);
}

- (void)drawPlaceholderInRect:(CGRect)rect
{
    if ( isGradientValid == NO )
    {
        isGradientValid = YES;
        [self resetGradient];
    }
    
    [super drawPlaceholderInRect:rect];
}

@end

Streetsoccer Achievements and Team Formation

Hey everyone,

Long time no post, I know, but I’ve picked up the pace again! I’ve just completed the achievement system for Streetsoccer. I always loved the achievements in “Tilt to Live”, where they are more like riddles than gaming points without much purpose, so that’s what I did for Streetsoccer. There will be roughly 30 achievements to start with, and for each you only get a hint. Some are pretty obvious, some are plays on words, some you will probably get by some nifty move inside the game without aiming for an achievement. They sync with Game Center and everything.

The main reason this took me a while was that I had to figure out how to do nice-looking text on iOS. You see, most of the buttons and labels I simply render in Photoshop with a nice gradient and drop shadow, but that of course is no option for dynamically generated text. I’ve found a couple of code snippets here and there on how to do fancy labels, but all of them had some flaws. For example, most of them didn’t seem to handle text gradients with multiple lines of text correctly. Another thing was that I needed UITextField controls with gradients and shadows as well, and they were a bit tricky. I’ll probably do a post just on that topic soon.

In the end, it works great and I’m seriously thinking of replacing all the pre-rendered text elements with my fancy labels now. The only problem is that I don’t particularly like the iOS built-in fonts, and for some reason it is ridiculously expensive to license a font for mobile apps. I can almost see myself creating my own TrueType font…

The other great news is that the Team Formation view is done as well. This is where you can change the player numbers and assign names to your team as well as to the individual team members.

Streetsoccer Formation

I would love to add the ability to design your own team shirts, but the list of todos is probably too long for that to make the first release. Speaking of which, I guess I have a deadline. I was just reminded today that the European soccer championship is coming soon…


Streetsoccer Play-By-Mail – Part 2

Work on play-by-mail continues. Funnily enough, it takes longer to design the dialogs than to actually code them, thanks to the maturity of the underlying GameController code. I was just reminded the other day that I’ve now been coding on StreetSoccer for just over a year, not counting that a large part of the code was re-used from MonkeyDash. It’s about time this thing finally gets out of the door and into the hands of you, the users…

I’m still following the concept that play-by-mail works both between two app owners and between one app owner and a non-owner. As part of this, the rules for Streetsoccer are now also online.


Streetsoccer Play-by-Mail

I’ve started coding the play-by-mail mode of Streetsoccer. At first I thought it would not be necessary for the initial release, but lately I more and more get the feeling that this should be an app that you play with others rather than against the computer. Maybe it comes from the fact that I’m constantly losing against the AI now that I’ve fixed some more bugs in its evaluation code… : )

One idea which might be somewhat unique is that – if possible – I want to implement play-by-mail so that only one of the players needs to have the app. So for example, one player creates the new match inside the app, enters the email address of the opponent, and the app sends him a mail with a screenshot and instructions. The opponent can then simply reply with a list of fields he wants to move (i.e. “A3-B3-B4-B5”) or, if he owns the app, import the session and do his turn inside the app. I hope I can re-use most of the playback code so that a player can re-watch the previous moves before entering a new one.

I also re-did the iTunes artwork and app icon. Though I’m not completely happy with it yet, it’s at least a big improvement over the previous one.

Streetsoccer app icon

What I’m currently struggling with is a good design for the goal animation. I’m thinking of lots of tiny people jumping up but haven’t found a way to make it work yet…


Recipes iPad Version

Ever since the original iPad was released, a number of people have asked me about an iPad version of Recipes. Since I received another such mail today, I thought I’d quickly do an official post on it.



Blog moved to tumblr

Believe it or not, up to now I’ve been doing blog posts the hard way, i.e. hand-crafted HTML. While this produced very clean HTML output, it just lacked a number of features I really wanted, like tags, comments and so on. So when I found out about tumblr, it was rather a question of finding the time to move everything than of whether to do it at all…

Actually, the whole procedure was rather painless although it took a couple of hours to figure all the customization stuff out:

Getting tumblr to work as a subdomain of athenstean.com was quite easy: add a new configuration to my DNS account, add the address inside tumblr, done. Customizing the appearance was a bit more tricky. Basically one does write a HTML template and adds certain text fragments defined by tumblr which are then filled in to show the blog text, comments, images, … the only tricky part was that I like centered images/videos inside text and sometimes like to have text floating around images. The trick is to define a custom CSS inside the “customize appearance” tumblr dialog and then use CSS-classes not CSS-styles inside the blog posts. For some reason tumblr’s HTML editor removes all style attributes when saving changes. This is most likely to support a very cool feature that allowed me to copy&paste my old posts from the browser into the editor and automatically matched what was a heading, what was an image and what was normal text.

I’m now trying to figure out how to add the tags and comment part to my customized appearance…

– Alex


Streetsoccer Menus

As promised, here is a video of the new menus in action. Most of the work is done; what is missing are the team profile, career mode and the AI character images. As some may notice, I just dumped in the ones from Monkey Dash for now.

On the coding side, a lot has been achieved: OpenAL sound and background music, lots of bug fixes, reduced memory consumption, … all in all, it gets closer and closer to being a proper, stable app. I’m currently hoping for the end of January as a possible release date, but that will probably depend on how long it takes to do the in-game 3D assets properly.

– Alex


New Years Post

Happy new year everyone! Life is what happens while you’re making plans, and so it has been quite a while since the last post. A lot has been going on, both on and off topic. I don’t do new year’s resolutions, but otherwise “post more regularly” would definitely be one of them…

To make up for it, here is a quick run-down of what’s been happening in the meantime: Recipes is going the same as always. There is still an unfinished iPad version lying around somewhere on my hard disk, but I haven’t made progress on it in a while. A couple of mails from happy users, a few bug reports, not much development on the code side. I hope to do an update soon to fix some issues for various people, but by and large all my efforts currently go into getting a first public version of Streetsoccer done.

Speaking of which, Streetsoccer is going great. I finally nailed the design for the menu graphics, which was a big problem. I had contacted various people to help me with the graphics work, but in the end I had to do them myself. I’ll probably do another post and video for that soon, since it’s not the same when you don’t see the animations. I used a technique similar to what I did with Monkey Dash and have all the elements fly in and out, creating a sort of parallax effect. Feedback so far has been awesome.

I also managed to solve quite a few problems in the code. First, there were frequent crashes due to the amount of memory the app required while loading all the 3D assets. That again is worth a post of its own. In the end, it amounted to using hardware texture compression and avoiding use of garbage collection. When I started, the maximum memory usage was about 60MB. After a day of tweaking and reading up on various topics, the peak is now at 5MB with no visual difference.

Another interesting issue comes from the way the internal architecture is designed. The 3D view and user interface are completely decoupled from the actual game logic, which created all sorts of multi-threading problems. It was only a couple of days ago that I finally resolved what I hope was the last of a ton of related issues. What makes the whole thing so difficult is that the game controller fires a series of events and has to wait for all players and the 3D view to acknowledge that they are done processing/playing animations for one event before firing another. On the other hand, the user interface has to use the game controller to do various checks (can piece X move to Y, and so on), which produces an intricate, interlocking event system. Long story short, it now works! Also another post-to-be.

The main topics for Streetsoccer are now doing the final meshes/textures for the in-game view, implementing the career mode and, if possible, a tournament server system. All in all, it’s still lots of work, but I’m seeing the end.

Totemo has been on hold as well while I was working on Streetsoccer, but since it’s using 90% of the same code, it also progressed in one form or the other. There should be more development on this soon, as Vicki Paull will jump on board to do the graphics. I’m really excited about this after seeing her other work. Still no word from Rob Fisher concerning Monkey Dash, so I’m not sure if this is DOA or MIA. I’m not giving up on it just yet.

Well, what’s left? I bought a 5th-gen iPod Touch, which I love; I wish the iPhone had that form factor. Funny thing is that it’s now my third iPod Touch (1st gen, 2nd gen, 5th gen) in addition to an iPad 1. The iPhone SDK has probably been the best “marketing ploy” Apple ever came up with, as it “encourages” developers to constantly buy new devices for testing *g*. My Cintiq has been idle for a bit due to a lot of personal stuff happening at the end of 2011, but I’m looking very much forward to spending some more quality time with it. If I weren’t writing this post, I would probably be sitting at it right now… 🙂

All in all, the last two months should be worth at least 5 follow-up posts. I learned a lot, to say the least. Now, let’s see what 2012 will bring. For me, I’m very excited about what’s on the horizon…

– Alex


StreetSoccer Chat System

StreetSoccer now has an internal chat system that players can use to talk to each other while playing via Game Center. So besides some robustness improvements and testing, online multiplayer should be complete.

Featured Post

Wacom Cintiq 24 HD vs. Intuos 4 Review

A dream has come true… my Wacom Cintiq finally arrived! I’ve wanted to own one of these since I first heard of the concept a few years back. A display on which one can draw! How amazing would that be? …

A Personal History

A friend of mine from university first got me interested in Wacom. He had an Intuos (no number, it was first generation!) and when he showed it to me, I was mesmerized. I never was much of an artist, the days when I was well versed with pen and ink long gone, but this was something else. A pen that draws into a computer. I didn’t have much money, so I picked up a cheap knock-off and learned the first lesson of pen tablets: always buy Wacom! It may be a bit pricey, but nothing else comes close. The Wacom just worked, while my knock-off needed a battery for the pen, and the whole shape and feel were worlds apart. My cheap board was gathering dust pretty soon…

Fast forward a couple of years; I believe it was 2008 when I got my hands on an Intuos 3. The company I worked for at the time had a few of those lying around, and one day I asked one of the artists if I could borrow one for a few hours. I stayed at work until midnight or so, and although my drawing skills really were rusty, it was immense fun. I really liked the new pen design; the shape of it felt just great in my hand.

Last year, I finally bought a Wacom Intuos 4. Man, had they made progress over the years. The pen is perfect, the touch wheels are awesome, and it even has customizable buttons that you can label with your own text. And by labeling I don’t mean “insert your paper snippet here” but little displays just for labeling the buttons. Most people consider the A5 wide the best size, and I agree completely: big enough that you don’t feel limited, small enough that it still fits into a bag. The only thing is, your pen is disconnected from your drawing. If you’ve never tried it, it’s a bit weird at first: you don’t look at the pen but at the screen while you are drawing. After a couple of minutes, your brain adapts and your hand moves the way you want the cursor to move on the screen. Unfortunately, I didn’t have much time for drawing since then, and so once again, a tablet gathered dust. But this time, it was my fault, not the tablet’s.

The Present

Then Engadget leaked an FCC filing a couple of months ago. 24 inches, a new stand, Full HD resolution… this did sound interesting! When I saw the first videos a couple of weeks ago, I knew this was what I had been waiting for all those years. I decided to sell my Intuos 4 and buy the Cintiq 24HD. If it didn’t work out, I could probably sell it again without losing too much money. Plus, I needed to redo the graphics for Streetsoccer, so the timing was perfect.

Well, yesterday it arrived! You know you’re in for a treat when it takes 2 (!) UPS guys to actually bring in your new gadget. In one word: this thing is MASSIVE. It weighs just shy of 30kg and is huge; actually, it’s the size of my small kitchen table. You can think of the 21UX as a big laptop that fits onto your desk, but for the 24HD, you really need a dedicated space. Right now, there is no piece of furniture in my apartment where I can properly put it except for my kitchen table! The box it comes in is 95x75x35cm… easily the biggest box any piece of hardware I’ve bought has ever come in. I took some quick pictures this morning to show you guys what I mean:

Wacom Cintiq 24HD

Well, I knew it would be big, but wow, I wasn’t expecting THIS big. I particularly like the large arm rests on both sides of the display area. They make the whole thing feel like you’re sitting at an architect’s drawing table instead of in front of a display. As you can see, the new stand allows the Cintiq to actually extend down over the edge of the table. For me, just one more thing that makes this device feel “just right”. The new stand is actually quite different from the older Cintiq 21UX’s construction:

The New Stand

Wacom Cintiq 24 HD

By pulling the lever behind the display, the angle can be adjusted. In its topmost position, the display is almost perpendicular to the table. When pushing the hinges away from you, the metal bar holding the screen locks in, and you have to pull the release bar in the center of the stand to release the lock. Once unlocked, the metal hinges do not lock all the way down! That came as a surprise to me, but I guess it would be mechanically difficult to have such a heavy screen held by those hinges alone. Instead, you adjust the angle and lower the screen until its backside rests on the front of the stand. It feels quite weird at first but seems to work alright. I actually liked the way the Cintiq 21UX’s stand seemed to hold its own at every angle. Although the new stand can be adjusted to pretty much the same angles, it feels strange not to have the hinges lock and to instead rely on the whole thing touching the front of the stand. At first I thought my Cintiq was broken.

One actually needs quite a bit of push to make the Cintiq lock in the top position, more than I was comfortable trying at first with a piece of equipment this expensive. On second try, it’s actually not an unusual amount of force, and the Cintiq takes it well. After a couple of times, it just felt like a good, firm, secure lock. Releasing the lock, on the other hand, still feels strange. One has to pull the big release lever and slightly lift the display. This works fine, but when moving the hinges, a plastic-sounding clonk occurs when the lock snaps back. To my ears, it sounds like some plastic snapping. I’ll have to call tech support on Monday to make sure this really is okay. If it is, this is a design fault in my book. On the other hand, I don’t plan on having the Cintiq in its upright position anyway…

Wacom Cintiq 24 HD

The two side panels have the same touch wheel as the Intuos 4, which just works great. The three buttons to the left of the wheel change the mode the wheel is in; this way, for example, one can switch between zooming and changing the brush size in Photoshop. The other buttons can be set to shortcuts for easy access. I, for example, like to have history step forward/back on them when working in Photoshop. If you’ve worked with an Intuos 4, there won’t be anything new about those buttons. For all those who haven’t had the chance: the OSX configuration panel of the Cintiq allows assigning custom shortcuts to those buttons, which can even be different for each application. For example, the leftmost button can trigger one thing in Photoshop and another shortcut in Blender without the user having to switch configurations.

Another cool little detail is the set of three buttons at the top right of the screen. Actually, they are more indentations than actual buttons. The left one shows a display overlay of which function is currently assigned to which button, a very handy replacement for the small displays next to each button on the Intuos 4. The middle button lets the OSX on-screen keyboard pop up so you can “type” with your pen when the keyboard isn’t handy. Also a great idea, although I haven’t managed to produce an uppercase letter with it yet. The right button opens the Cintiq’s preference panel for adjusting the button assignments and re-doing the pen calibration.

Display and Pen

Wacom Cintiq 24 HD

The display looks great, as you would expect at this price range. The viewing angle and colors are awesome, but it is a bit weird to sit so close to a 1920×1200 resolution. If you look closely, you can see individual pixels, which is odd in today’s age of retina displays on mobile phones. Drawing in a normal upright position, I haven’t found this to be any inconvenience. At first I thought the whole screen was a bit “mushy”, but that was of course a user error: I had set up the display to mirror my MacBook’s display, which meant the Cintiq 24HD had to scale the MacBook’s 1280×800 up to its 1920×1200 resolution. Properly set up to extend the MacBook’s display, it’s as crisp as it should be.

There is little to tell about the pen: it’s perfect, same as it was with the Intuos 4. I seriously don’t know what they will come up with next for the Intuos 5 to really improve it. The shape is great, the feel is great, it’s all great. Due to the nature of drawing on top of the glass, there is of course a small offset between where your pen nib is and where the cursor is, but I haven’t found this to be a problem. It is in fact the same phenomenon as with an Intuos: your brain adapts after a few minutes of drawing and you hardly notice it.

One little tidbit I hadn’t noticed during my research: the Cintiq 24HD actually has a little fan in it. Luckily, it is so quiet you will most likely not notice it, or at least not in a negative way. It’s a gentle and constant “wush” that reminds me of that extra-quiet desktop PC I put together a while ago.

First Impression

As you may have guessed, I’m really excited about the Cintiq 24HD. Granted, it’s a bit more “huge” than I had expected, but I really like it. I just haven’t found the right piece of furniture to put it on yet, I guess. It seems to be the right device for me, but is it for you?

Well, the first decision should be whether you want an Intuos or a Cintiq. If you’re on a tight budget, the Cintiqs are out, since the price difference is considerable. But don’t despair, there is actually a pretty good reason for buying an Intuos anyway: you see what you draw. This may sound odd, since the main point of the Cintiq is that you see exactly what you are drawing. However, with a Cintiq, your hand can actually be in the way of what you want to see. For example, consider the Photoshop tool bar. It’s on the left of the screen by default, and when you want to reach one of the secondary tools, you press and hold the tool bar and… yes, that’s right, the list of secondary tools is right under your hand, so you can’t see it. No biggie, just move the toolbar to the other side, but it’s small things like that that can be annoying. I did a bit of research before ordering the Cintiq, and there are in fact a lot of professionals who prefer an Intuos to a Cintiq for exactly that reason. If you have a bit of practice and draw every day, your brain will have no problem at all directing precise pen movements while you look at the screen. On the other hand, having your pen right where you draw is an awesome experience. For me, it just feels “more right” than using an Intuos. So if you can, go to a store and try out any Cintiq; size won’t matter for that first impression.

Say you want a Cintiq; which of the three should it be? Size really matters in this case. I can imagine that a lot of people won’t have enough space for the Cintiq 24HD even on their office desk, let alone at home. I’m a developer, not an artist, so I need two or more displays, a keyboard and mouse, plus space for scribble paper, a tea mug and so on. I can tell you, the Cintiq 24HD wouldn’t fit on my desk at work unless I seriously rearranged things. The Cintiq 21UX rather felt like a piece of the workspace, one more display so to speak. It’s a bit clunky as “just an additional display”, but it works. The Cintiq 12WX, for my taste, is just too small. It’s great to be able to put it on your lap, but you will really have to optimize your screen real estate. That said, I know a few people who feel the intimacy of being able to lift it and put it on your lap is worth more than a few extra pixels.

So here is my purely personal opinion:

  • Intuos 4: Great value for money, an absolute workhorse, highly portable.
  • Cintiq 12WX: If you imagine yourself sitting on your couch drawing, this is the one for you.
  • Cintiq 21UX: If you need a Cintiq that is part of your workspace, buy this one.
  • Cintiq 24HD: This IS your workspace.

At first I was wondering why Wacom decided to keep the Cintiq 21UX in their product line, but rightly so, I have to say. It is a completely different beast compared to the Cintiq 24HD, and both have their place.

Time will tell, but for now I just love the new Cintiq 24HD. I wanted to have a dedicated space for drawing in my apartment anyway, so this is exactly what I needed. All the small details feel right, and I’m happy I didn’t buy the Cintiq 21UX but waited for this one instead.

… that said… let’s see where I can put that monster…

– Alex

P.S.: All opinions are purely subjective and all specifications are just rough measurements. Please refer to the Wacom webpage for exact dimensions…

Featured Post

Blender’s “No Objects or Images to Bake” Error

Today, I was once again in the situation that I wanted to mock up a prototype and started modeling some simple shapes in Blender. When I got to the step of actually baking shadow textures, I ran into the old “no objects or images to bake” error message. I remembered having run into it before, but it took me quite a while to figure out the solution once again. So this time, I’m going to do the proper thing and document it. Let’s start with the default Blender scene (I’m using Blender 2.5, by the way).

Blender Shadow Baking

First thing we need is a UV map, so switch to the UV Editing layout by clicking on the button to the left of “Default” in the top bar. With the mouse cursor over the right 3D view, right-click the cube to select it and press TAB to go to edit mode. Switch to edge selection mode, select the proper edges, press Space and then choose Mark Seam. Press A twice to select all faces, press Space and choose Unwrap. We now have our basic UV map.

Blender Shadow Baking

In the left view, press S and scale the UV map down a bit, so there is a little space between the boundary of the grid and the edges of the UV map. This is done so Blender can “overbake”, i.e. fill some extra pixels around the actual UV map area. This helps with filtering problems caused by texture lookups that sample pixels outside the part of the UV map actually covered by triangles.

Now comes the important part. While still in edit mode, click on New (to the right of the “UV” menu entry in the bottom bar) to create a new image. The background of the UV view will turn black, as the image is completely empty. This does two things: it creates an image, and it assigns that image to the UV map. The “no objects or images to bake” error usually appears when an image has been created but not assigned to the UV map!
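
If you prefer scripting, both steps can also be done from Blender’s Python console. This is just a minimal sketch against Blender 2.5’s Python API (run it in Object Mode with the cube selected; the image name is made up):

    import bpy

    obj = bpy.context.active_object
    mesh = obj.data

    # Step 1: create an empty image to bake into
    img = bpy.data.images.new("bake_target", width=512, height=512)

    # Step 2: assign it to every face of the active UV layer; skipping
    # this assignment is what usually triggers the baking error
    for face in mesh.uv_textures.active.data:
        face.image = img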

Switch back to the Default view layout. In the right menu bar, click on the camera icon to get to the render panel. At the bottom, switch the bake mode to Ambient Occlusion and press Bake. A progress bar appears in the top menu bar, but nothing else seems to change. Go back to the UV Editing layout to see the baked shadow texture (in this case, everything is gray, as there is nothing that could block the ambient occlusion). Note the star next to “Image” in the bottom bar: it means the image has unsaved changes. Click “Image”, select Save Image As and choose a name and location to store the texture.
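
The bake itself can be scripted as well. Again a sketch against the Blender 2.5 API, reusing the image from above; the bake_margin line is the “overbake” padding mentioned earlier:

    import bpy

    scene = bpy.context.scene
    scene.render.bake_type = 'AO'   # bake ambient occlusion
    scene.render.bake_margin = 4    # extra pixels around the UV islands

    # bakes into the image assigned to the active object's UV map
    bpy.ops.object.bake_image()

    # save the result next to the .blend file ("//" means a relative path)
    img = bpy.data.images["bake_target"]
    img.filepath_raw = "//bake_target.png"
    img.file_format = 'PNG'
    img.save()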

Blender Shadow Baking

To see the shadow map applied to the cube, go back to the Default layout and click on the Texture panel button in the right menu (the small checkerboard icon to the right of the camera icon). On my MacBook, that button is off the screen, and one has to drag the left boundary of the menu to widen it first. Under Type, choose Image; in the “Image” section, click on Open, and the shadow texture should become visible in the small preview window above. In the “Mapping” section, switch Coordinates from “Generated” to UV.
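
For completeness, the script equivalent, once more as a sketch against the 2.5 API. It assumes the cube already has a material and reuses the image name from the earlier snippets:

    import bpy

    mat = bpy.context.active_object.active_material

    # wrap the baked image in an image texture
    tex = bpy.data.textures.new("shadow_map", type='IMAGE')
    tex.image = bpy.data.images["bake_target"]

    # add it to the material and map it via the mesh UVs
    slot = mat.texture_slots.add()
    slot.texture = tex
    slot.texture_coords = 'UV'   # instead of the default "Generated"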

Finally, move the mouse cursor over the large 3D view and press N to show the view properties panel, unfold the Display section and switch the shading from “Multitexture” to GLSL. At last, the shadow texture is on our cube. You can now head over to Photoshop, Gimp, Pixelmator or whatever image editing tool you like best and tweak your shadow map.

Blender Shadow Baking

Of course, a cube is pretty boring. Go ahead and try something more interesting yourself as an exercise. If you run into problems, check these things:

  • Did you really assign an image to the UV map or just create one?
  • Is the layer the object is on selected in the render panel?
  • Do you have an active UV map for the object?
  • Did you save the image after baking?
  • Have you changed the texture mapping to use the object UVs?

If you’ve found this while searching the net for a solution and it helped you, drop me an email. It’s nice to know the time spent writing it down was well spent…

– Alex

Featured Post