Blog

News, Updates and General Ramblings

Concordia iOS UI Study

A while ago, I had a very brief chat with Mac Gerdts (the author of Concordia) about whether or not someone had approached him to do Concordia for iOS. It was an interesting discussion and he thought that getting the AI right would be the trickiest part. My 2 cents were that – in most cases – doing board games on iOS is unfortunately not a (financially) viable enterprise.

Read More
Featured Post
My Favorite Recipes iPad Recipe View

My Favorite Recipes Is Back in the AppStore

It’s back! After a massive overhaul a while ago, I finally managed to tie up all the loose ends and re-submit the app to the AppStore. The app first hit the AppStore almost 10 years ago!

That meant porting it from what basically was iOS 3.1 code to iOS 12, switching to a Storyboard-based interface, creating an iPad version, supporting Dynamic Type (resizable fonts) and many, many more things that iOS users today simply expect. It’s still not the most flashy recipe app, but darn it if it isn’t the one with the best user interaction : )

My Favorite Recipes iPhone Recipe View

Unfortunately, I haven’t managed to implement iCloud syncing yet, which would be very handy for keeping my iPhone and iPad in sync. But implementing this correctly will take some serious time, so I opted to get the app out into the wild again and add features like that later.

I also dropped the price to the lowest tier; it will be interesting to see what effect that has.

Featured Post

My Favorite Recipes – Moving from iOS 3 to 11

Due to popular demand, I’ve reactivated my recipe management app for iOS. It dropped out of the AppStore with the release of iOS 11 and the requirement that apps be 64-bit only from that point on. Truth be told, the code had gotten a bit dated as the initial version was launched all the way back in September 2009. As a frame of reference, the first ever iOS SDK was launched in March of 2008! So we are talking the early days of iOS…

As a consequence, getting the app ready for iOS 11 would have taken a major rewrite and I didn’t have THAT MUCH time to spend on this project. But after a couple of users contacted me, I was curious: what does it take to bring an old iOS app up to speed with the latest in iOS technology? Also, I like my 12.9″ iPad Pro a lot and it always seemed a shame that recipes didn’t run on it!

This post is meant as a short review/history lesson in iOS development and its evolution through the ages. For more of an end-user perspective on the app itself, I’ll update the app’s page soon. But what I can tell you is: Boy, things have changed over the years! While the database schema and the data layer required hardly any adaptation at all, pretty much every line of code in the user interface layer had to be changed. Unfortunately, for a recipe app the user interface is about 80% of the code!

Autolayout, Dynamic Type and Storyboards

The biggest advancement in iOS comes in the form of auto layout and storyboards. Back in the days, life was simple: the iPad didn’t exist, all iPhones had the same screen resolution of 320×480 (no retina, no larger iPhone 5 aspect ratio, etc) and the general consensus was to avoid Interface Builder and rather do everything in code. Nowadays, we have iPhones and iPads of various physical sizes and resolutions, iPad apps can run in split screen with other apps and so on. So a) you have no idea at what resolution the user will run your app and b) even if you do, it can change at any moment (for example if the user starts another app side-by-side on iPad).

The only sensible way of handling this is to use Storyboards: They are a visual way of designing the UI and enforce splitting controls / layout from the controller classes that implement behaviour. The nice thing about Storyboards is that they allow connecting different pages of the app with transitions called “Segues” and thus represent the whole user flow of the app.

Microsoft tried to do a similar split with WPF/XAML/Expression Blend: Have one language/tool for the UI designer and another for the developer. Well, it didn’t work there because most designers cannot code and most developers cannot design. But when you try to implement a new feature, you need both aspects at the same time.

The reasons why it seems to work here are:

  • The design language of iOS is more restrictive and as long as you stick to it, “designing” a user interface amounts to placing controls and not worrying about pixel spacing. Even if a company decides to enforce their own corporate identity, the framework and tools actively “encourage” the use of standard controls, gestures and animations. E.g. if you stick to the standard font definitions (“body”, “caption”, “heading”, …) instead of using custom font sizes/types, you get Dynamic Type support (see below) for free.
  • Storyboards – even when only used as a developer-non-designer tool – can be used as a means of communication with a designer. The developer can do the basic layout and transitions and then show the Storyboard to the designer. It really gives you a nice overview of the user flow of the app.
  • Clear view/controller-separation: In the Microsoft WPF world, people started to use weird XAML-constructs to put code into their UI-description that belonged in the controller. Since Interface Builder doesn’t give you options to do that, you end up with better code and an increased likelihood of being able to reuse the same controller in different views of the application.

Unfortunately, Interface Builder gets slower the larger the Storyboard gets. So I had to split it into a main one and sub-Storyboards for the individual tab contents (such as the main recipe, shopping list, sharing, etc). Even on my well-specced 13″ MacBook Pro, loading the Storyboard below freezes Xcode for 20-30 seconds!

My Favorite Recipes Storyboard – Main

My Favorite Recipes Storyboard – Recipes

In addition, Storyboards support variants such as changing individual font sizes or switching a horizontal to a vertical layout depending on the available screen space. One can even add/remove individual controls and it is all handled pretty much for free.

So yes, starting with Recipes 2.0 the app will finally support iPads as well! Note in the screenshot below that the layout automatically adapts to the huge iPad Pro 12.9″ screen size and increases the left and right margins. This is one of the many small things iOS does pretty much out of the box for you if you stick to using the system default layout margins.

My Favorite Recipes iPad Pro Layout

Another recent addition to the iOS world is Dynamic Type. What this means is that the user can increase (or decrease if your eyes are good enough) the font size in the system settings and – if your app supports it – the app changes the layout. The changes can be pretty dramatic, especially for the huge accessibility font sizes.

Dynamic Type: Default and largest accessibility font size

Dynamic Type is one of those features where you listen to the WWDC session, think “hey, that should be done in a few hours” and then it takes a couple of days. The first 90% are pretty easy and basically amount to setting font sizes correctly, enabling multi-line UILabels, dynamic cell height, etc. Then comes the hard part:

  • If you have any label that uses a custom font size/type (instead of the predefined “body”-, “caption”-, …- styles), Dynamic Type won’t scale the font. You have to write code to do that.
  • If you want to use a different layout when the user switches to one of the huge accessibility sizes, you have to programmatically remove the auto layout constraints and add new ones.
  • If you use standard subtitle cells, the resizing does not seem to work correctly.

It’s all doable. However, I kept having the feeling that Dynamic Type isn’t as “out of the box” as the WWDC session lets you believe. In the end it took quite a lot longer than I expected, but it’s a great feature to have.

As a bonus, this finally fulfils the user request to have multi-line cell labels to support very long recipe names!

iOS 11 Style

Compared to iOS 3, the style of iOS has changed a lot. So I had to redesign:

  • the app icon (much flatter and simpler design)
  • tab icons (still working on it)
  • placeholder icons (still working on them as well)
  • interaction mechanisms (such as export using the standard iOS share sheet instead of custom menus)
  • control layout (users have different expectations where to find what nowadays)
  • button style (flat buttons instead of the old “glass button” look)
  • dialog style on iPad (popups, modal dialogs, …)

New iOS 11 style app icon

The actual coding for Recipes 2.0 is complete. However, I’m still struggling to find a consistent style for all the tab-icons, placeholder images and so forth. On the one hand, it seems idiotic to not release the app right away just because some icons look crappy. On the other hand, after spending so much time on redoing the layouts and improving the usability, it would be sad to lose potential users just because the icons look crappy.

Objective-C and ARC

Under the hood, things have also changed. Someone has asked me lately if they changed for the better or worse. I guess it depends. The world was easier back in the days: there was only one screen size, multitasking was limited, no device syncing, no dynamic type, … In general, user expectations have grown with the maturity of the platform. So while it has gotten way easier to write a basic app, all that improvement has been eaten up by having to support more features.

On the level of programming language, things have noticeably improved:

  • Automatic Reference Counting (ARC) gets rid of all the memory management code
  • Properties are auto-synthesized by default (the getter/setter are added by the compiler)
  • Static and Runtime Analyzers help reduce the number of bugs
  • Grand Central Dispatch (GCD) and blocks (=lambda expressions) make it easier to write multi-threaded code. Using NSOperation or performSelector to dispatch code to a background thread used to pollute the class scope because you needed a separate method to be called. Things got even worse because UI elements have to be updated from the main thread, which caused -(void)doSomething, -(void)doSomethingOnThread and -(void)doSomethingAfterThread method triples.
  • Using blocks instead of delegation: The new pattern seems to be to set functors (=blocks) instead of having to derive from a delegate-protocol. As with the previous point, this helps keep code that belongs together in the same place.

Misc Stuff

There is also lots of small stuff that has changed over the years. Here are just some of the more noteworthy things.

JSON-LD and the Structured Web

My Favorite Recipes was one of the first apps to use meta-information provided in HTML pages to extract and import recipes from websites. This was based on three formats (Microdata, Microformats and RDF) that the Google Recipes initiative had supported. At some point, they switched to JSON-LD which Recipes 2.0 now also supports. This is a nice, structured way to make web pages machine readable and way easier to implement than the older formats.
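To give an idea of the format, a minimal JSON-LD recipe block (a hand-written example following the schema.org Recipe vocabulary, not taken from an actual website) looks roughly like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Apple Pie",
  "recipeYield": "8 servings",
  "recipeIngredient": ["6 apples", "1 pie crust", "150 g sugar"],
  "recipeInstructions": [
    { "@type": "HowToStep", "text": "Peel and slice the apples." },
    { "@type": "HowToStep", "text": "Fill the crust, add sugar and bake." }
  ]
}
```

Such a block sits inside a `<script type="application/ld+json">` tag, so an importer only has to find that tag and run its contents through a JSON parser instead of scraping the visible HTML.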

Unit Tests and UI Tests

I cannot remember if Xcode actually offered unit test integration back in the days, but nowadays it’s there. I currently use unit tests for the import/export code and UI tests for pretty much everything else. Recipes is such a UI-heavy application that there isn’t much code where non-UI unit testing makes sense.

As on other platforms, UI testing works by using an app’s accessibility support to identify individual controls. So as long as you set proper accessibility labels (which you should anyway to support blind users), things are ready to go. The record functionality in Xcode seems great at first as it creates test code while you run the app in the simulator and tap on the various controls. However, I’ve found that it often doesn’t work or produces crappy code, so I just use it as a quick way to identify controls and then write the test code manually.

Unified Logging

It’s a small thing, but it is welcome. Apple has introduced a unified logging system (unified in the sense that it behaves the same on all of their platforms) which replaces NSLog. It’s pretty easy to use and allows grouping log messages into sub-categories which is nice.

Files App

Back in the days, one of the most common user questions was “how do I get my recipe files into the app”. The old upload-via-iTunes mechanism still exists but by using the standard file browser, there is now a nice, unified interface for it. And if you use iCloud-Drive, it’s part of the same dialog and makes moving files from your desktop to the app even easier.

Browser

The new WebKit-view makes it easier to have a fully functional embedded browser. Over the years, websites have changed and much of the content you see on a page is actually loaded via JavaScript and not part of the original HTML page. The new browser control makes it possible to grab the web content as it is rendered and thus produces way more reliable results when searching websites for the recipe information contained in them.

Share/Action Sheets

Those simply didn’t exist back in the days! Now, a single button in the recipe view allows sharing a recipe, adding it as a note, putting it on the integrated shopping list and more.

Summary

Things have improved a lot over the years. Complexity no longer comes from the language or shortcomings/bugs in the iOS frameworks. It rather depends on what kind of feature/usability level you want to achieve. iOS 11 has a lot of features that aren’t strictly necessary but are kind of expected at this point in time.

I’m wondering what the role of platform-independent UI frameworks or HTML5/JavaScript UIs is in iOS development. For me, the number of small details (dynamic type, readable content margins on large iPads, …) and options for deep system integration (files dialog, share sheet, …) I have found during this project is so large that I wonder what kind of user experience a framework that tries to unify Android and iOS can even provide. My guess is that as long as you “just need an app” they are fine, but for a great user experience you simply need to develop a native UI.

My Favorite Recipes has always tried to behave as much as possible like one of the built-in apps. This meant a lot of work adapting to all the new capabilities of iOS 11 (and I still haven’t had the time to implement iCloud syncing). But finally having an iPad version feels great, and using Storyboards has helped to improve the user experience a lot.

Hope you like the new 2.0 version of the app when it comes out!

Featured Post

Website redesign

As you probably have noticed, this website has changed a lot. I’ve moved away from a hand-crafted HTML + Tumblr blog to using WordPress now. This allows for a better structure, improved design and should make it easier to post content in the future. It’s also a good occasion for a general status update:

  • Recipes: My Favorite Recipes automatically dropped out of the AppStore with the introduction of iOS 11, which requires 64-bit apps. As you may or may not know, the app has been around for a long time. To be precise, it had its origins in the iOS 2 era, which was the first iOS version one could develop apps for! Over almost a decade, a lot of things change, and the way apps are built today is completely different than back then. Things like multi-tasking, storyboards or auto-layout UIs did not exist. So it isn’t easy to bring such an old app up to iOS 11, but I’ve started the process – which is the main reason for this website revamp. I’ll post more on this soon…
  • I’ve converted all the old blog entries but split all content concerning my 3D Modeler project to a separate site. It can now be found at https://metashapes.com. This has been my primary focus over the last couple of years which is also the reason why there was little activity on this site.
  • Streetsoccer has been removed from the AppStore for a while now. While I still get occasional requests to bring it back, the license agreement I had with Cwali (the publisher of the original board game) has expired, so unfortunately there is no chance to get it up and running again.
Featured Post

StreetSoccer V1.1.0 – German Localization and Movement Area

StreetSoccer V1.1.0 is now heading for Apple review. This again brings a bunch of bugfixes as well as two new features:

  1. German Localization
  2. Movement Area Visualization

Read More

Featured Post

Streetsoccer in the News

Exciting times! Essen game convention has just started and Streetsoccer is in the news on a couple of sites:

Read More

Featured Post
Streetsoccer app icon

Streetsoccer for iOS available on the AppStore

It’s finally here. Spread the word, post reviews, let us know what you would like to see in feature updates… and have fun playing : )

Featured Post

DICE+ Support for Streetsoccer

I just received a package from Poland containing my DICE+ DevKit. If you haven’t heard of DICE+, it’s a die that connects to iOS devices via Bluetooth and has tons of sensors inside. It even detects if a roll wasn’t valid or someone tried to cheat!

While waiting for Streetsoccer V1.0.0 to pass review, I’ve started working on DICE+ support for Streetsoccer V1.1.0. I had to change the internal state engine a bit as die rolls are no longer a user-triggered action. But most of the work is done, and now comes verifying that nothing has been broken and adding the user interface.

Below is a picture of the dev kit (underneath are the loading cable, wristband and other goodies). Made me smile when I opened the package:

Would be fun to have the support ready in time for Essen and show up at their booth with it : )

Featured Post
Streetsoccer app icon

Streetsoccer is on the way!

Well, if hell didn’t just freeze over: After almost three years of work, I’ve submitted Streetsoccer to the app store! Actually, today was the third time in three days : )

Read More

Featured Post

Recipes iOS 7 Updates and other news

I took a small break from working on Streetsoccer and made the necessary adjustments to get Recipes working with iOS 7. The update went into review yesterday and will hopefully be public in a week or so.

As for Streetsoccer, beta users had found a bug in the Game Center integration which I already fixed but otherwise feedback is excellent. I only need to paint the four AI character images and it should finally be good to go…

Featured Post

Streetsoccer Full In Game Recording

I’ve just uploaded a video showing off the various menus and a full match against the AI. As you can see, some art assets like the AI characters are still missing, but coding-wise the game is pretty much done. Have fun…

Featured Post

Streetsoccer Coding Complete

After 2 ½ years, coding on Streetsoccer is finally complete. Well, truth be told, I still have to add something to handle potential support cases tomorrow but basically the game itself is done! I’ve just re-checked the repository: The very first commit for Streetsoccer was done on Dec 25, 2010. Before then I had worked on Monkey Dash and used a lot of that code as a basis for Streetsoccer. So all in all let’s call it three years of spare time coding, testing, trying to find artists to help … all interrupted by a few things like day jobs, relationships and other things that were simply more important… : )

The game has gained a tremendous level of polish in the last couple of weeks.

  • After a match, there are now some basic statistics and a direct way to have a re-match or re-watch the game.
  • There are two new, easier AI characters: one that plays like a novice player and one that plays completely defensively (it even gets challenging to reach the ball at all).
  • Finally some background music!
  • The jersey generator mentioned in previous posts
  • Better 3D camera controls (one-finger dragging)
  • Skip button for the intro camera-flight
  • New, more obvious undo-all button is shown next to the confirm button when a turn is complete
  • Better lighting of the 3D models
  • Lots of bug-fixes

I’m not sure how much work on the in-game 3D assets will be done before submitting the app. Probably most of that will have to wait for the first update, just to finally get the app out!

The plan is to send new test builds to beta testers later today, paint the missing AI character images and then call it quits … for now! I’ve already got like 70 tickets full of ideas and improvements for future updates.

Stay tuned…

Featured Post

Streetsoccer Custom Jerseys – Part 2

This is the continuation of the previous post about custom jerseys inside Streetsoccer. The in-game rendering code for custom jerseys is now complete, and I also baked the player numbers into the individual shirts. The scoreboard now reflects the team colors, which was made possible by a compositing operation similar to the one I use for generating the jerseys. Luckily, Core Graphics already has blending modes such as “Overlay” implemented, so it amounted to writing out individual layers instead of one baked PNG.

One thing that proved particularly difficult was resolving conflicts between team jerseys that looked too similar, especially in network situations where the opponent’s color is not known in advance. Finally, I’ve spent some time on making the goalies clearly identifiable. There is now a “1” as an overlay marker on the field a goalie is standing on, in addition to increasing the brightness of their jerseys.

To achieve all this, a lot of work had to be done behind the scenes, restructuring the database and the renderer itself. But all looks well and according to my tracking system, I’m at 65% of the final tickets done before doing a first public release. What’s left is mainly drawing AI characters, modeling trees and stuff, adding music and a few things that will make support easier once the game is out. Fingers crossed, but it’s looking like the end is indeed near…

I’ve also uploaded an extended version of the previous video that now shows the in-game graphics. Note that the iOS simulator slowed down considerably when I did the screen recording. Everything runs smoothly on the device of course!

Featured Post

Streetsoccer Custom Jersey Generator

I’ve just completed a first version of the new jersey generator. The user can pick his own base color, a color for the shorts, one of the pre-defined patterns and a color for the pattern. The app internally does some on-the-fly compositing to generate the baked texture, which is then used in the game. The 3D model is still the ugly player piece from all the prototypes; there hasn’t been time yet to improve it.

While the generator itself is done, there are still a number of edge cases I have to handle. What if both players choose a very similar jersey and are hard to distinguish? Is the jersey correctly transferred in online sessions? … There are quite a few non-obvious consequences of having custom jerseys, but hopefully they will be easy to fix.

Featured Post

An HSV Color Picker Control for iOS

Things like this surprise me: How many years has it been since the first iPhone SDK came out? They just announced iOS 7 and I haven’t checked it out yet, but in iOS 6 at least, there is still no color picker control! When doing some googling, I found a number of people who have implemented custom color picker controls, but none of those seemed simple to use or looked okay visually. Even worse, when looking for some code snippets to base a custom implementation on, those snippets had bugs! Kind of feels like the stone ages, but on the other hand, C#/WPF doesn’t have a stock color picker either…

So a couple of days ago, when I needed a color picker for Streetsoccer, I found myself in the situation of having to write one myself, and I thought I’d spare everyone the trouble of going through the process by writing this post. What we are about to set off on is a journey on how to create a basic hue/saturation/brightness circular color picker like in the screenshot above.

The control itself consists of four parts:

  • The hue circle
  • The saturation/brightness box
  • The two current value markers (gray circles)

Note that the black background and the shape do not belong to the control itself but to a UIPopoverController that simply hosts the ColorPicker. The screenshot is taken from the custom jersey configuration in StreetSoccer.

The first part I started with was the hue circle. The easy way to do this would be to generate it in Photoshop and then bake it into an image. However, I wanted the control to work at various resolutions, so Core Graphics was the obvious candidate. Unfortunately, Core Graphics has no angular (conic) gradient, and so some folks over at StackOverflow proposed the solution I’m also using: drawing a number of circle segments, each with a flat color. If the number of segments is high enough, it looks like a smooth gradient.

There were however three problems with that code:

  1. The code draws sort of a skewed rectangular shape and rotates it around the center of the control. When the number of subdivisions is small, we don’t have a circle but some n-gon shape.
  2. As one commenter noted, when the number of subdivisions isn’t high enough, gaps between the segments are noticeable. The reason for this is that the calculation of the segment vertex positions is done using the circle perimeter (for an x-offset) and the radius (for a y-offset), neglecting the fact that such a point will not lie on the circle itself! It simply does not follow the curvature of the circle. This was easily solved by using the proper trigonometric functions.
  3. It’s a lot of stuff to draw.

Okay, the last point requires some more explanation. My initial design of the control used a class derived from UIView that did all the drawing by overriding UIView::drawRect. However, when the user starts modifying the hue/saturation/brightness values, I had to draw the whole control over and over again, and drawing all those segments slowed down the UI considerably.

So I scrapped that idea and instead of overriding UIView::drawRect, I implemented three custom Core Animation layers, one for each element in the control. The hue circle, for example, only has to be redrawn when the control size changes, not when the user changes the color values. A Core Animation layer nicely caches the rasterized image of the circle and saves us from drawing all those individual segments again. This, by the way, works even more efficiently than manually drawing the circle into a UIImage and then drawing that image in UIView::drawRect.

With the circle done, the next head scratcher was on how to do the complex gradient inside the saturation/brightness box. At first it looks like one could layer multiple linear gradients to get the correct result, but in fact that’s not possible. And since the whole box changes very often (every time the user changes the hue value), we cannot rely on caching as easily as above. Luckily, I had done some HLSL shader programming lately and therefore an almost trivial solution came to mind:

Use a layer with an OpenGL ES 2.0 context and draw a rectangle with a custom shader. This is very efficient since the shader is very simple and it uses the GPU, so frequent updates should not be a problem. I therefore took the OpenGL ES 2.0 shader example from the iOS documentation, searched for HSV to RGB conversion code and hooked it all together.

The third type of layer was the simplest one: the markers are simply layers that draw an ellipse. With all three layer types complete, the control itself only has to do the touch handling and layouting. The result is a single class consisting of two files which reacts very quickly to user changes. It probably isn’t perfect, but it works well enough for me at this time, so I thought I’d post it:

ColorPicker.h


//
//  ColorPicker.h
//  StreetSoccer
//
//  Created by Alex Klein on 6/21/13.
//  Copyright (c) 2013 Athenstean.com. All rights reserved.
//

#import <UIKit/UIKit.h>

@class HueCircleLayer;
@class SaturationBrightnessLayer;
@class MarkerLayer;

@protocol ColorPickerDelegate;

// A Hue/Saturation/Brightness (HSB) color picker control that shows hue as a
// color gradient circle and saturation/brightness in a box inside the circle.
//
// Note, everything is rendered in layers to maximize caching. The hue circle
// is drawn using core graphics and the saturation/brightness box is drawn
// using an OpenGL ES 2.0 layer with a pixel shader.
@interface ColorPicker : UIView<UIGestureRecognizerDelegate>
{
    HueCircleLayer * layerHueCircle;
    SaturationBrightnessLayer * layerSaturationBrightnessBox;
    MarkerLayer * layerHueMarker;
    MarkerLayer * layerSaturationBrightnessMarker;
    CGFloat colorHue;
    CGFloat colorSaturation;
    CGFloat colorBrightness;
    CGFloat colorAlpha;
    CGFloat boxSize;
    CGPoint center;
    CGFloat radius;
    CGFloat thickness;
    unsigned int subDivisions;
    UILongPressGestureRecognizer * hueGestureRecognizer;
    UILongPressGestureRecognizer * saturationBrightnessGestureRecognizer;
    
    NSObject<ColorPickerDelegate> * delegate;
}

// The color represented by the control.
@property (retain) UIColor * color;

// Subdivisions is currently only there to adjust the smoothness of the
// hue circle, but in the future we might actually clip to a lower number
// of discrete values (e.g. allow the user to pick only from 6 values).
@property (assign) unsigned int subDivisions;

@property (assign) NSObject<ColorPickerDelegate> * delegate;
@end

@protocol ColorPickerDelegate <NSObject>
- (void)colorPicker:(ColorPicker*)colorPicker changedColor:(UIColor*)color;
@end

ColorPicker.m


//
//  ColorPicker.m
//  StreetSoccer
//
//  Created by Alex Klein on 6/21/13.
//  Copyright (c) 2013 Athenstean.com. All rights reserved.
//

#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
#import <QuartzCore/QuartzCore.h>

#import "ColorPicker.h"

// This defines the thickness of the hue circle.
static float const CIRCLE_THICKNESS = 0.2f;
// This defines the size of the saturation/brightness box.
static float const BOX_THICKNESS = 0.7f;


@interface HueCircleLayer : CALayer
{
    unsigned int subDivisions;
}

@property (assign) unsigned int subDivisions;
@end

@implementation HueCircleLayer
@synthesize subDivisions;

- (void)drawInContext:(CGContextRef)context
{
    // First, draw the Hue gradient circle. This is based on
    // http://stackoverflow.com/questions/11783114/draw-outer-half-circle-with-gradient-using-core-graphics-in-ios
    // but with a few bug fixes and changes.
    float const radius = MIN(self.bounds.size.width, self.bounds.size.height) / 2.0f;
    float const thickness = radius * CIRCLE_THICKNESS;
    
    // Bugfix: Opposed to the original code, we draw proper curved pieces and calculate the correct
    // circle position. The original code calculated an incorrect offset that caused gaps between the
    // segments.
    float const sliceAngle = 2.0f * M_PI / self.subDivisions;
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, cos(-sliceAngle /2.0f) * (radius - thickness), sin(-sliceAngle/2.0f) * (radius - thickness));
    CGPathAddArc(path, NULL, 0.0f, 0.0f, radius - thickness, -sliceAngle/2.0f, sliceAngle/2.0f + 1.0e-2f, false);
    CGPathAddArc(path, NULL, 0.0f, 0.0f, radius, sliceAngle/2.0f + 1.0e-2f, -sliceAngle/2.0f, true);
    CGPathCloseSubpath(path);
    
    // Move origin to center of control so we can rotate around it to draw our
    // circle.
    CGContextTranslateCTM(context, self.bounds.size.width/2.0f, self.bounds.size.height/2.0f);
    
    float const incrementAngle = 2.0f * M_PI / (float)self.subDivisions;
    for ( int i = 0; i < self.subDivisions; ++i)
    {
        UIColor * color = [UIColor colorWithHue:(float)i/(float)self.subDivisions saturation:1 brightness:1 alpha:1];
        CGContextAddPath(context, path);
        CGContextSetFillColorWithColor(context, color.CGColor);
        CGContextFillPath(context);
        CGContextRotateCTM(context, -incrementAngle);
    }
    CGPathRelease(path);
}

@end

@interface SaturationBrightnessLayer : CAEAGLLayer
{
    CGFloat hue;
    EAGLContext * glContext;
    GLuint framebuffer;
    GLuint renderbuffer;
    GLuint program;
    
    // attribute index
    enum {
        ATTRIB_VERTEX,
        ATTRIB_COLOR,
        NUM_ATTRIBUTES
    };
}

@property (assign) CGFloat hue;

@end

@implementation SaturationBrightnessLayer

-(id)init
{
    self = [super init];
    if (self)
    {
        self.opaque = YES;
        glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:glContext];
        glGenRenderbuffers(1, &renderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
        [glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];
        
        glGenFramebuffers(1, &framebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer);
        
        [self loadShaders];
    }
    return self;
}

- (void)dealloc
{
    if (framebuffer)
    {
        glDeleteFramebuffers(1, &framebuffer);
        framebuffer = 0;
    }
	
    if (renderbuffer)
    {
        glDeleteRenderbuffers(1, &renderbuffer);
        renderbuffer = 0;
    }
	
    // release the shader program object
    if (program)
    {
        glDeleteProgram(program);
        program = 0;
    }
	
    // tear down context
    if ([EAGLContext currentContext] == glContext)
        [EAGLContext setCurrentContext:nil];
	
    [glContext release];
    glContext = nil;
   
    [super dealloc];
}

- (void)layoutSublayers
{
    // Allocate color buffer backing based on the current layer size
    glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
    [glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];
}

- (void)loadShaders
{
    // create shader program
    program = glCreateProgram();
    
    const GLchar * vertexProgram = "precision highp float; \n\
        \n\
        attribute vec4 position; \n\
        varying vec2 uv; \n\
        \n\
        void main() \n\
        { \n\
            gl_Position = vec4(2.0 * position.x - 1.0, 2.0 * position.y - 1.0, 0.0, 1.0); \n\
            uv = position.xy; \n\
        }";
    
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexProgram, NULL);
    glCompileShader(vertexShader);
    glAttachShader(program, vertexShader);

    // https://gist.github.com/eieio/4109795
    const GLchar * fragmentProgram = "precision highp float; \n\
    varying vec2 uv; \n\
    uniform float hue; \n\
    vec3 hsb_to_rgb(float h, float s, float l) \n\
    { \n\
        float c = l * s; \n\
        h = mod((h * 6.0), 6.0); \n\
        float x = c * (1.0 - abs(mod(h, 2.0) - 1.0)); \n\
        vec3 result; \n\
         \n\
        if (0.0 <= h && h < 1.0) { \n\
            result = vec3(c, x, 0.0); \n\
        } else if (1.0 <= h && h < 2.0) { \n\
            result = vec3(x, c, 0.0); \n\
        } else if (2.0 <= h && h < 3.0) { \n\
            result = vec3(0.0, c, x); \n\
        } else if (3.0 <= h && h < 4.0) { \n\
            result = vec3(0.0, x, c); \n\
        } else if (4.0 <= h && h < 5.0) { \n\
            result = vec3(x, 0.0, c); \n\
        } else if (5.0 <= h && h < 6.0) { \n\
            result = vec3(c, 0.0, x); \n\
        } else { \n\
            result = vec3(0.0, 0.0, 0.0); \n\
        } \n\
     \n\
    result.rgb += l - c; \n\
     \n\
    return result; \n\
    } \n\
     \n\
    void main() \n\
    { \
        gl_FragColor = vec4(hsb_to_rgb(hue, uv.x, uv.y), 1.0); \
    }";

    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentProgram, NULL);
    glCompileShader(fragmentShader);
    glAttachShader(program, fragmentShader);
    
    // bind attribute locations
    // this needs to be done prior to linking
    glBindAttribLocation(program, ATTRIB_VERTEX, "position");
    
    glLinkProgram(program);
    
    glDeleteShader(vertexShader);
    glDeleteShader(fragmentShader);
}

- (void)setHue:(CGFloat)value
{
    hue = value;
    [self setNeedsDisplay];
}

- (CGFloat)hue
{
    return hue;
}

- (void)display
{
    // Draw a frame
    [EAGLContext setCurrentContext:glContext];
    const GLfloat squareVertices[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };
    
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glViewport(0, 0, self.bounds.size.width, self.bounds.size.height);
    
    // use shader program
    glUseProgram(program);

    glUniform1f(glGetUniformLocation(program, "hue"), hue);
    
    // update attribute values
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
	
    // draw
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    
    glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
    [glContext presentRenderbuffer:GL_RENDERBUFFER];
}
@end

@interface MarkerLayer : CALayer
@end

@implementation MarkerLayer

- (void)drawInContext:(CGContextRef)context
{
    float const thickness = 3.0f;
    CGContextSetLineWidth(context, thickness);
    CGContextSetStrokeColorWithColor(context, [UIColor grayColor].CGColor);
    CGContextAddEllipseInRect(context, CGRectInset(self.bounds, thickness, thickness));
    CGContextStrokePath(context);
}

@end

@implementation ColorPicker
@synthesize subDivisions, delegate;

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.opaque = NO;
        self.color = [UIColor whiteColor];
        
        layerHueCircle = [[HueCircleLayer alloc] init];
        layerHueCircle.frame = self.bounds;
        [layerHueCircle setNeedsDisplay];
        [self.layer addSublayer:layerHueCircle];
        
        layerSaturationBrightnessBox = [[SaturationBrightnessLayer alloc] init];
        layerSaturationBrightnessBox.frame = self.bounds;
        [layerSaturationBrightnessBox setNeedsDisplay];
        [self.layer addSublayer:layerSaturationBrightnessBox];
        
        layerHueMarker = [[MarkerLayer alloc] init];
        [layerHueMarker setNeedsDisplay];
        [self.layer addSublayer:layerHueMarker];

        layerSaturationBrightnessMarker = [[MarkerLayer alloc] init];
        [layerSaturationBrightnessMarker setNeedsDisplay];
        [self.layer addSublayer:layerSaturationBrightnessMarker];
        
        hueGestureRecognizer = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleDragHue:)];
        hueGestureRecognizer.allowableMovement = FLT_MAX;
        hueGestureRecognizer.minimumPressDuration = 0.0f;
        hueGestureRecognizer.delegate = self;
        [self addGestureRecognizer:hueGestureRecognizer];
        saturationBrightnessGestureRecognizer = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(handleDragSaturationBrightness:)];
        saturationBrightnessGestureRecognizer.allowableMovement = FLT_MAX;
        saturationBrightnessGestureRecognizer.minimumPressDuration = 0.0;
        saturationBrightnessGestureRecognizer.delegate = self;
        [self addGestureRecognizer:saturationBrightnessGestureRecognizer];
        
        self.subDivisions = 256;
    }
    return self;
}

- (void)dealloc
{
    [layerHueCircle release];
    [layerSaturationBrightnessBox release];
    [layerHueMarker release];
    [layerSaturationBrightnessMarker release];
    [hueGestureRecognizer release];
    [saturationBrightnessGestureRecognizer release];
    
    [super dealloc];
}

- (void)layoutSubviews
{
    [super layoutSubviews];
    
    float const resolution = MIN(self.bounds.size.width, self.bounds.size.height);
    
    radius = resolution / 2.0f;
    thickness = CIRCLE_THICKNESS * radius;
    boxSize = sqrt(BOX_THICKNESS * radius * BOX_THICKNESS * radius / 2.0f) * 2.0f;
    center = CGPointMake(self.bounds.size.width / 2.0f, self.bounds.size.height / 2.0f);

    layerHueCircle.frame = self.bounds;
    layerSaturationBrightnessBox.frame = CGRectMake((self.bounds.size.width - boxSize) / 2.0f, (self.bounds.size.height - boxSize) / 2.0f, boxSize, boxSize);
    layerHueMarker.frame = [self hueMarkerRect];
    layerSaturationBrightnessMarker.frame = [self saturationBrightnessMarkerRect];
}

#pragma mark - Properties

- (void)setColor:(UIColor *)aColor
{
    colorHue = 1.0f;
    colorSaturation = 1.0f;
    colorBrightness = 1.0f;
    colorAlpha = 1.0f;
    if ( [aColor getHue:&colorHue saturation:&colorSaturation brightness:&colorBrightness alpha:&colorAlpha] == NO )
    {
        colorHue = 0.0;
        colorSaturation = 0.0f;
        [aColor getWhite:&colorBrightness alpha:&colorAlpha];
    }
        
    layerSaturationBrightnessBox.hue = colorHue;
    layerHueMarker.frame = [self hueMarkerRect];
    layerSaturationBrightnessMarker.frame = [self saturationBrightnessMarkerRect];
}

- (UIColor*)color
{
    return [UIColor colorWithHue:colorHue saturation:colorSaturation brightness:colorBrightness alpha:colorAlpha];
}

- (void)setSubDivisions:(unsigned int)value
{
    subDivisions = value;
    layerHueCircle.subDivisions = value;
}

- (unsigned int)subDivisions
{
    return subDivisions;
}

#pragma mark - Marker positioning

- (CGRect)hueMarkerRect
{
    CGFloat const radians = colorHue * 2.0f * M_PI;
    CGPoint const position = CGPointMake(cos(radians) * (radius - thickness / 2.0f), -sin(radians) * (radius - thickness / 2.0f));
    return CGRectMake(position.x - thickness / 2.0f + self.bounds.size.width / 2.0f, position.y - thickness / 2.0f+ self.bounds.size.height / 2.0f, thickness, thickness);
}

- (CGRect)saturationBrightnessMarkerRect
{
    return CGRectMake(colorSaturation * boxSize - boxSize / 2.0f - thickness / 2.0f + self.bounds.size.width / 2.0f, (1.0f - colorBrightness) * boxSize - boxSize / 2.0f - thickness / 2.0f + self.bounds.size.height / 2.0f, thickness, thickness);
}

#pragma mark - Touch handling

- (BOOL)gestureRecognizerShouldBegin:(UIGestureRecognizer *)gestureRecognizer
{
    if ( gestureRecognizer == hueGestureRecognizer )
    {
        // Check if the touch started inside the circle.
        CGPoint const position = [gestureRecognizer locationInView:self];
        CGFloat const distanceSquared = (center.x - position.x) * (center.x - position.x) + (center.y - position.y) * (center.y - position.y);
        return ( (radius - thickness) * (radius - thickness) < distanceSquared ) && ( distanceSquared <= radius * radius );
    }
    else if ( gestureRecognizer == saturationBrightnessGestureRecognizer )
    {
        // Check if the touch started inside the saturation/brightness box.
        CGPoint const position = [gestureRecognizer locationInView:self];
        CGFloat const saturation = (position.x - center.x) / boxSize + 0.5f;
        CGFloat const brightness = (position.y - center.y) / boxSize + 0.5f;
        
        return (saturation > -0.1) && (saturation < 1.1) && (brightness > -0.1) && (brightness < 1.1);
    }
    return YES;
}

- (void)handleDragHue:(UIGestureRecognizer *)gestureRecognizer
{
    if ( (gestureRecognizer.state == UIGestureRecognizerStateBegan) || (gestureRecognizer.state == UIGestureRecognizerStateChanged) )
    {
        CGPoint const position = [gestureRecognizer locationInView:self];
        CGFloat const distanceSquared = (center.x - position.x) * (center.x - position.x) + (center.y - position.y) * (center.y - position.y);
        if ( distanceSquared < 1.0e-3f )
        {
            return;
        }

        CGFloat const radians = atan2(center.y - position.y, position.x - center.x);
        colorHue = radians / (2.0f * M_PI);
        if ( colorHue < 0.0f )
        {
            colorHue += 1.0f;
        }
        layerSaturationBrightnessBox.hue = colorHue;
        [CATransaction begin];
        [CATransaction setValue: (id) kCFBooleanTrue forKey: kCATransactionDisableActions];
        layerHueMarker.frame = [self hueMarkerRect];
        [CATransaction commit];
        
        if ( [delegate respondsToSelector:@selector(colorPicker:changedColor:)] )
        {
            [delegate colorPicker:self changedColor:self.color];
        }
    }
}

- (void)handleDragSaturationBrightness:(UIGestureRecognizer *)gestureRecognizer
{
    if ( (gestureRecognizer.state == UIGestureRecognizerStateBegan) || (gestureRecognizer.state == UIGestureRecognizerStateChanged) )
    {
        // Clamp the touch position to the saturation/brightness box.
        CGPoint const position = [gestureRecognizer locationInView:self];
        colorSaturation = MAX(0.0f, MIN(1.0f, (position.x - center.x) / boxSize + 0.5f));
        colorBrightness = MAX(0.0f, MIN(1.0f, (center.y - position.y) / boxSize + 0.5f));
        [CATransaction begin];
        [CATransaction setValue: (id) kCFBooleanTrue forKey: kCATransactionDisableActions];
        layerSaturationBrightnessMarker.frame = [self saturationBrightnessMarkerRect];
        [CATransaction commit];
        
        if ( [delegate respondsToSelector:@selector(colorPicker:changedColor:)] )
        {
            [delegate colorPicker:self changedColor:self.color];
        }
    }
}

@end

Planar Quads in Blender – Making Faces Flat

Yesterday I saw a lovely flat-shaded rendering style, and while trying to emulate it, I noticed how important it is that quads are actually flat for flat-shaded rendering. Since the rendering system splits everything into triangles, a non-flat quad results in a visible edge, because the two triangles of the quad will be shaded slightly differently. To my surprise, no good information on how to make a polygon flat could be found on the net…

The easiest way that I came up with to make a non-axis aligned polygon flat is this:

  1. Change the Transformation Orientation Mode from Global to Normal (combo box next to the transformation gizmo settings)
  2. Change the vertex snapping mode from Increment to Vertex and deactivate it (the little magnet is grayed out, combo box next to it shows two dots on a cube)
  3. Select three vertices that form the final plane you want the quad to lie in.
  4. Press Ctrl+Alt+Space to create a new custom orientation (a transformation coordinate system), give it a name and make sure Overwrite Previous is turned on. The coordinate system’s z-axis will be orthogonal to the plane we just defined (left part of the screenshot).
  5. Now select the off-plane vertex, press g to translate, and twice z to limit the transformation to the newly created coordinate system’s z axis (notice the blue line in the right part of the screenshot).
  6. Press and hold ctrl and move the mouse cursor over one of the three reference vertices so that the transformation snaps its z-value to that vertex (notice the orange snap circle in the right part of the screenshot).
  7. Release the mouse button and voila, the quad is now perfectly flat.

Since we enabled “Overwrite Previous”, for the next quad you just select three vertices, press Ctrl+Alt+Space (overwriting the previously created coordinate system), then g, z, z, hold Ctrl and move over a reference vertex (to snap), release, done. It sounds complicated at first but is actually very fast to do. Hope that helps!
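By the way, the underlying math is easy to check outside Blender as well: a quad is planar exactly when its fourth vertex lies in the plane spanned by the other three, i.e. when the scalar triple product of the edge vectors is (close to) zero. A small C sketch (the vector helpers are mine, not from Blender):

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return (Vec3){ a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* A quad v0..v3 is planar when v3 lies in the plane of v0, v1, v2,
   i.e. when the scalar triple product (v1-v0) x (v2-v0) . (v3-v0)
   is close to zero. */
int quad_is_planar(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 v3, double epsilon) {
    Vec3 const normal = cross(sub(v1, v0), sub(v2, v0));
    return fabs(dot(normal, sub(v3, v0))) < epsilon;
}
```

The snapping trick in the steps above effectively moves the fourth vertex along the plane normal until that triple product becomes zero.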

Alex


Detecting and fixing encoding problems with NSString

When you’re working with strings on iOS, it’s only a question of time before you start using stringWithContentsOfURL, either for downloading something from the web or for handling a file import into your app. One of the major pains of working with strings is encoding: a string is an array of bytes, and to make sense of it, you have to know what the bytes mean.

In the early days, one just used one byte per character, most famously in the ASCII encoding. But 256 characters are by far not enough to handle all the characters in the world (think of all the Asian languages), so different people invented different encodings until the Unicode people came along with an effort to define a character set containing all characters for all languages. Unfortunately, there are several Unicode encodings such as UTF-8 and UTF-16, so there isn’t even a single Unicode encoding, but hey, that’s beside the point here. Unicode made a lot of stuff simpler and the world a better place.

Classes like NSString do a good job of hiding that problem away. It hits you when you receive bytes from an external source (a.k.a. a webpage) and have to figure out what encoding they are in. Let’s take the German character “Ä” as an example: in Latin1 encoding, that’s just one byte with a value of 196. In UTF-8, it’s two bytes: 0xC3 0x84. So you download that list of bytes and have to figure out what’s what. If you have a UTF-8 encoded page and incorrectly assume it’s Latin1, you end up with “Ä”. Luckily, most modern formats like HTML or XML require the encoding to be stated explicitly somewhere in the file.
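To make the byte-level picture concrete, here is a small C sketch (illustrative only, not from the app) of the two encodings of “Ä” and the arithmetic that relates them:

```c
/* "Ä" (code point U+00C4) encoded as UTF-8 is the two-byte
   sequence 0xC3 0x84. */
static const unsigned char utf8_A_umlaut[2] = { 0xC3, 0x84 };

/* In Latin1 (ISO 8859-1) every byte IS its code point, so a decoder
   that wrongly assumes Latin1 turns 0xC3 into U+00C3 ("Ã") and 0x84
   into a control character - the familiar mojibake. */
unsigned int latin1_to_codepoint(unsigned char byte) {
    return (unsigned int)byte;   /* Latin1 maps 1:1 onto U+0000..U+00FF */
}

/* For a UTF-8 lead byte of 0xC3, the original code point is simply
   the continuation byte plus 0x40. */
unsigned int utf8_c3_to_codepoint(unsigned char continuation) {
    return 0x40u + (unsigned int)continuation;
}
```

This “continuation byte plus 0x40” relation is also what makes the cleanup of mis-decoded 0xC3 sequences a one-line fix.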

What’s that got to do with iOS, you may ask. Well, I ran into a couple of problems when trying to use Apple’s methods to automatically detect the correct encoding when it comes to Latin1 encoded data. So here is some source code to help others with the same problem:

StringUtils.h


#import <Foundation/Foundation.h>

@interface NSString (NSStringAdditions)
// Checks for UTF8 German umlauts being incorrectly interpreted as Latin1.
- (BOOL)containsUTF8Errors;
// Replaces the umlaut errors with the correct characters.
- (NSString*)stringByCleaningUTF8Errors;
// Uses various attempts to guess the right encoding and fix common
// problems like NSString's failure to detect Latin1 correctly.
+ (NSString*)stringWithContentsOfURLDetectEncoding:(NSURL*)url error:(NSError**)error;
@end

StringUtils.m


#import "StringUtils.h"

@implementation NSString (NSStringAdditions)

- (BOOL)containsUTF8Errors
{
    // Check for byte order marks
    // http://en.wikipedia.org/wiki/Byte_order_mark
    if ( [self rangeOfString:@"Ôªø"].location != NSNotFound )
    {
        return true;
    }
    // Now check for weird character patterns like
    // Ä ä Ö ö Ü ü ß
    // We basically check the Latin-1 range of Unicode,
    // i.e. U+0000 to U+00FF.
    for ( int index = 0; index < [self length]; ++index )
    {
        unichar const charInput = [self characterAtIndex:index];
        if ( ( charInput == 0xC2 ) && ( index + 1 < [self length] ) )
        {
            // Check for degree character and similar that are UTF8 but have incorrectly
            // been translated as Latin1 (ISO 8859-1) or ASCII.
            unichar const char2Input = [self characterAtIndex:index+1];
            if ( ( char2Input >= 0xa0 ) && ( char2Input <= 0xbf ) )
            {
                return true;
            }
        }
        if ( ( charInput == 0xC3 ) && ( index + 1 < [self length] ) )
        {
            // Check for german umlauts and french accents that are UTF8 but have incorrectly
            // been translated as Latin1 (ISO 8859-1) or ASCII.
            unichar const char2Input = [self characterAtIndex:index+1];
            if ( ( char2Input >= 0x80 ) && ( char2Input <= 0xbf ) )
            {
                return true;
            }
        }
    }
    return false;
}

- (NSString*)stringByCleaningUTF8Errors
{
    // For efficiency reasons, we don't use replaceOccurrencesOfString but scan
    // over the string ourselves. Each time we find a problematic character pattern,
    // we copy over all characters we have scanned over and then add the replacement.
    
    NSMutableString * result = [NSMutableString stringWithCapacity:[self length]];
    NSRange scanRange = NSMakeRange(0, 0);
    NSString * replacementString = nil;
    NSUInteger replacementLength;
    for ( int index = 0; index < [self length]; ++index )
    {
        unichar const charInput = [self characterAtIndex:index];
        if ( ( charInput == 0xC2 ) && ( index + 1 < [self length] ) )
        {
            unichar const char2Input = [self characterAtIndex:index+1];
            if ( ( char2Input >= 0xa0 ) && ( char2Input <= 0xbf ) )
            {
                unichar charFixed = char2Input;
                replacementString = [NSString stringWithFormat:@"%C", charFixed];
                replacementLength = 2;
            }
        }
        if ( ( charInput == 0xC3 ) && ( index + 1 < [self length] ) )
        {
            // Check for german umlauts and french accents that are UTF8 but have incorrectly
            // been translated as Latin1 (ISO 8859-1) or ASCII.
            unichar const char2Input = [self characterAtIndex:index+1];
            if ( ( char2Input >= 0x80 ) && ( char2Input <= 0xbf ) )
            {
                unichar charFixed = 0x40 + char2Input;
                replacementString = [NSString stringWithFormat:@"%C", charFixed];
                replacementLength = 2;
            }
        }
        else if ( ( charInput == 0xef ) && ( index + 2 < [self length] ) )
        {
            // Check for Unicode byte order mark, see:
            // http://en.wikipedia.org/wiki/Byte_order_mark
            unichar const char2Input = [self characterAtIndex:index+1];
            unichar const char3Input = [self characterAtIndex:index+2];
            if ( ( char2Input == 0xbb ) && ( char3Input == 0xbf ) )
            {
                replacementString = @"";
                replacementLength = 3;
            }
        }
        
        if ( replacementString == nil )
        {
            // No pattern detected, just keep scanning the next character.
            continue;
        }

        // First, copy over all chars we scanned over but have not copied yet. Then
        // append the replacement string and update the scan range.
        scanRange.length = index - scanRange.location;
        [result appendString:[self substringWithRange:scanRange]];
        [result appendString:replacementString];
        scanRange.location = index + replacementLength;
        
        replacementString = nil;
    }
    
    // Copy the rest
    scanRange.length = [self length] - scanRange.location;
    [result appendString:[self substringWithRange:scanRange]];
    
    return result;
}

+ (NSString*)stringWithContentsOfURLDetectEncoding:(NSURL*)url error:(NSError**)error
{
    NSError * errorBuffer = nil;
    NSStringEncoding encoding;
    // First, let the system try to detect the encoding itself. Per Cocoa
    // convention, the error is only meaningful if the result is nil, so we
    // check the return value rather than the error buffer.
    NSString * result = [NSString stringWithContentsOfURL:url usedEncoding:&encoding error:&errorBuffer];
    if ( result == nil )
    {
        errorBuffer = nil;
        result = [NSString stringWithContentsOfURL:url encoding:NSUTF8StringEncoding error:&errorBuffer];
    }
    if ( result == nil )
    {
        errorBuffer = nil;
        result = [NSString stringWithContentsOfURL:url encoding:NSISOLatin1StringEncoding error:&errorBuffer];
        if ( ( result != nil ) && ( [result containsUTF8Errors] ) )
        {
            result = [result stringByCleaningUTF8Errors];
        }
    }
    if ( result == nil )
    {
        errorBuffer = nil;
        result = [NSString stringWithContentsOfURL:url encoding:NSASCIIStringEncoding error:&errorBuffer];
    }
    
    // The error parameter is optional, so guard against a NULL pointer.
    if ( error != NULL )
    {
        *error = errorBuffer;
    }
    return result;
}

@end

Turn-Based Game Center Support and more

Hi all,

well, once again I missed a self-set deadline. I wanted to finish the first version of Streetsoccer and submit it to Apple by the end of the first week of January, but I guess that one is over now. I’ve been busy though, and the list of missing things has gotten pretty short. What’s mainly preventing me from submitting the app are a few art assets plus some bigger changes to the internal data structures that require some more testing. But hopefully, another week and then it’s finally on its way…

Turn-based Game Center support

The app now also offers asynchronous online play via Apple’s Game Center infrastructure (besides the real-time Game Center support and play-by-mail that were already implemented). Unfortunately, this is probably the worst documented and worst implemented API on iOS I have seen so far and so the implementation has cost me a lot of nerves (and more time than I would have thought). But it seems to work quite well now and this will probably be the preferred way to play online for most players.

When you complete your turn, the app automatically sends your move to the Game Center server and your opponent receives a notification. You can now either quit the app and wait for the notification that it’s your turn again, or you can stay in the match. When you receive your opponent’s move, the app automatically runs the appropriate animations and you can continue with your next move. It feels so much like playing in the real-time server mode that I’m actually thinking about removing that mode altogether.

I’ll probably write a follow-up post on Game Center soon and describe the implementation details. The only thing missing right now is a way to transfer match results to a custom server and thus offer proper rankings. Unfortunately, Game Center’s leaderboards are too limited to use for this.

Player Labels

There are now labels underneath each player piece that show the player’s name and number. It’s surprising how much personality this simple feature brings to the game. I’ve implemented it in such a way that the labels are hidden while a player makes a move, so they don’t obscure things.

Bug Fixes and Misc

  • Tons of bug fixes, too many to go into details here.
  • Added end game animations and an animation when going into overtime.
  • Reworked some menu graphics.
  • Removed the achievement system (didn’t feel right, will be re-added later).
  • Support for alpha maps for improved graphics (see previous post).

Alpha is not Transparency – Premultiplied Alpha, Alpha Maps and Trees on iOS

I’ve lately been working on creating 3D models for Streetsoccer that act as background props. One of the most interesting areas is low poly trees, and if you look through the Internet, there is hardly any good information out there. Most low poly assets or tutorials do not work well in real-time engines due to two things that are always difficult in real-time 3D: transparency and self-shadowing. Since a lot of people I talked to haven’t yet fallen into all the pitfalls related to those topics, I thought I’d quickly write down some of them and what one can do about them.

A common technique to create low poly trees is to take a texture of a cluster of leaves, put it on a plane, subdivide it and bend it a little. Use 5-10 of those intersecting each other and it looks quite alright. The problem is that in reality the upper parts of a tree cast shadows on the lower parts. So if you use just one leaf texture, you end up with a tree that has the same brightness everywhere. If you want to do it right, you end up using multiple textures depending on which part of the cluster is in shadow and which isn’t. The trees in stock asset suites usually look great because they have been raytraced and have some form of ambient occlusion baked in.

The other area is transparency. As you may or may not know, real-time 3D rendering is rather stupid: take a triangle, run it through some matrices, calculate some simplified form of the lighting equation and draw the pixels on the screen. Take the next triangle, run the math again, put pixels on the screen, and so on. So the order of occlusion generally depends on the order of triangles in a mesh and the order of meshes in a scene. To fix this, someone invented the Z-Buffer or Depth Buffer, which stores for each pixel the depth of what has already been drawn to the screen. Before drawing a pixel, we check whether the pixel the new triangle wants to draw lies in front of or behind the depth value stored in the depth buffer. If the triangle is behind it, we don’t draw the pixel. This saves us the trouble of sorting all triangles by depth from the viewer position before drawing. By the way, all of this explanation is rather over-simplified and boiled down to what you need to know for the purposes of this discussion.
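In toy C code, the per-pixel depth test boils down to something like this (a sketch of the idea, not actual GPU code; the buffer layout is mine):

```c
#include <float.h>

#define FB_WIDTH  4
#define FB_HEIGHT 4

static float depth_buffer[FB_WIDTH * FB_HEIGHT];
static unsigned int color_buffer[FB_WIDTH * FB_HEIGHT];

void clear_buffers(void) {
    for (int i = 0; i < FB_WIDTH * FB_HEIGHT; ++i) {
        depth_buffer[i] = FLT_MAX;  /* everything starts "infinitely far away" */
        color_buffer[i] = 0;
    }
}

/* Draw a pixel only if it is closer than what is already there - this is
   what makes triangle submission order irrelevant for opaque geometry. */
void write_pixel(int x, int y, float depth, unsigned int color) {
    int const i = y * FB_WIDTH + x;
    if (depth < depth_buffer[i]) {
        depth_buffer[i] = depth;
        color_buffer[i] = color;
    }
}
```

Whatever order triangles arrive in, only the nearest surface survives at each pixel.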

Considering that real-time 3D graphics work on a per-triangle basis, transparency obviously becomes difficult. Following the description above, there is no real “in front or behind” but rather “what’s on the screen already and what’s getting drawn over it”. So what real-time APIs like OpenGL or DirectX do is use blending: when a non-100%-opaque pixel is drawn, it is blended with what is already on the screen in proportion to the transparency of the new triangle. That solves the color (sort of), but what about depth? Do we update the value in the depth buffer to the depth of the transparent sheet of glass or keep it at its old value? What happens if the next triangle is also transparent but lies in between? The general rule is that one has to sort all transparent objects by depth from the viewer and, after rendering all opaque objects, render the transparent ones back to front.
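The “sort transparent objects back to front” rule can be sketched with a plain qsort (the struct is hypothetical, not from the Streetsoccer code):

```c
#include <stdlib.h>

typedef struct {
    float view_depth;   /* distance from the camera */
    int   id;
} TransparentObject;

/* Sort descending by depth so the farthest object is drawn first and
   each closer one blends over everything behind it. */
static int compare_back_to_front(const void * a, const void * b) {
    float const da = ((const TransparentObject *)a)->view_depth;
    float const db = ((const TransparentObject *)b)->view_depth;
    return (da < db) - (da > db);
}

void sort_transparent_objects(TransparentObject * objects, size_t count) {
    qsort(objects, count, sizeof(TransparentObject), compare_back_to_front);
}
```

Note this sorts whole objects, not triangles; intersecting transparent objects (like the leaf planes of a tree) have no single correct order, which is part of why trees are so hard.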

If you’ve read a bit about 3D graphics, that should all sound familiar to you. So here comes the interesting parts: The things you don’t expect until you run into them!

Filtering and Pre-Multiplied Alpha Textures

Whenever a texture is applied and the size of the texture does not match the size of the pixels it is drawn to, filtering occurs. The easiest form of filtering is called nearest neighbor where the graphics card just picks the single pixel that is closest to whatever U/V-value has been computed for a pixel on the triangle. Since that produces very ugly results, the standard is to use linear filtering, which takes the neighboring pixels into account and rather returns a weighted average. You probably have noticed this as the somewhat blurry appearance of textures in 3D games.

For reasons of both performance and quality, a technique called Mipmaps is often used which just means lower resolution versions of the original texture are pre-computed by the graphics card. If an object is far away, the lower resolution version is used which better matches the amount of pixels that object is drawn on and thus improves quality.

What few people have actually dealt with is that filtering and transparency do not work well together in real time 3D graphics. When using a PNG texture on iOS, XCode optimizes the PNG before bundling it into your app. Basically it changes the texture so that the hardware can work more efficiently. As one of the things, XCode pre-multiplies the alpha component on to the RGB components. What this means is that instead of storing r, g, b, alpha for each pixel, one stores r times alpha, g times alpha, b times alpha and alpha. The reasoning is that if an image has an alpha channel, the image usually has to be blended when it is rendered anyway and instead of multiplying alpha and RGB every time a pixel in an image is used, it is done once when the image is created. This usually works great and saves three multiplications.

The trouble starts when filtering comes in. Imagine a red pixel that has an alpha value of zero. Multiply the two and you get a black pixel with zero alpha. Why should that be a problem, it’s fully transparent anyway, right? As stated above, filtering takes neighboring pixels into account and interpolates between them. What happens can be seen in Photoshop when creating gradients.

The closer the U/V-values are to the border of the opaque region of the texture and the larger the region of texture that is filtered to a single pixel, the more grayish the result becomes. I’ve first learned this the hard way when it came to the goal nets in Streetsoccer. As probably everyone would, I had just created one PNG with alpha in Photoshop and this is what it looked like:

Although the texture is pretty much pure white, the premultiplied alpha at that distance makes the goal net look dark gray. So how do you get to the version below? Avoid premultiplied alpha!

What I’ve done in the shot below is use a separate black-and-white alpha texture in addition to the diffuse texture. At render time, the RGB values are taken from the diffuse map and the alpha value is interpolated from the alpha map. I filled the previously transparent parts of the diffuse map with pixels that matched the opaque parts, and the result speaks for itself.

Since the Streetsoccer code still uses OpenGL ES 1.1, I couldn’t simply use a pixel shader but had to use texture combiners. Since that’s kind of legacy territory and information is hard to find, here is the code:


// Switch to second texture unit
glActiveTexture( GL_TEXTURE1 );
glEnable( GL_TEXTURE_2D );

// Activate texture combiners and set replace/previous for RGB. This
// just takes the RGB from the previous texture unit (our diffuse texture).
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvf(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS); // diffuse map
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);

// For alpha, replace/texture so we take the alpha from our alpha texture.
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvf(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_TEXTURE); // alpha map
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);

[self bindTexture:mesh.material.alphaTexture];

glActiveTexture( GL_TEXTURE0 );

[self bindTexture:mesh.material.diffusTexture];

One important thing though: the alpha map has to be uploaded as a GL_ALPHA texture instead of the usual GL_RGB or GL_RGBA, otherwise this won’t work. Speaking of which, I could probably just have combined the two UIImages during upload and uploaded them as one GL_RGBA texture… got to check that one out… : )
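That merge-on-upload idea would boil down to interleaving the two pixel buffers before handing them to glTexImage2D. A sketch of the interleaving step (buffer layout and names are my assumptions, not the app’s actual code):

```c
#include <stddef.h>

/* Merge a separate 3-byte-per-pixel RGB diffuse map and a 1-byte-per-pixel
 * alpha map into one interleaved RGBA buffer, so a single GL_RGBA upload
 * replaces the two-texture combiner setup. */
static void merge_rgba(const unsigned char *rgb, const unsigned char *alpha,
                       unsigned char *rgba, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; i++) {
        rgba[4 * i + 0] = rgb[3 * i + 0];
        rgba[4 * i + 1] = rgb[3 * i + 1];
        rgba[4 * i + 2] = rgb[3 * i + 2];
        rgba[4 * i + 3] = alpha[i];
    }
}
```

Since the diffuse map already has matching colors under the transparent regions, the merged texture would stay straight (non-premultiplied) alpha and keep filtering clean.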

Intra-object Occlusion

A lot of people are aware of the object-to-object occlusion problem when using transparency, and know that depth-order sorting is needed to solve it. However, what I noticed just lately is that – of course – the same problem can also arise within a single object.
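For the object-to-object case, the usual fix is to draw transparent objects back to front, sorted by view-space depth each frame. A minimal sketch (the struct and field names are hypothetical):

```c
#include <stdlib.h>
#include <stddef.h>

/* Back-to-front sorting for transparent objects: largest view-space
 * distance is drawn first. */
typedef struct { float view_depth; int mesh_id; } DrawItem;

static int farther_first(const void *a, const void *b)
{
    float da = ((const DrawItem *)a)->view_depth;
    float db = ((const DrawItem *)b)->view_depth;
    return (da < db) - (da > db); /* descending depth */
}

static void sort_back_to_front(DrawItem *items, size_t n)
{
    qsort(items, n, sizeof(DrawItem), farther_first);
}
```

This works per object, which is exactly why it breaks down when the conflicting triangles live inside a single mesh.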

The screenshot above was generated during early testing of the alpha map code. I used an asset from the excellent Eat Sheep game, which they kindly provide on their website. Again, it is quite obvious, but again, I was surprised when I saw this. What happens here is that the triangles with the flowers are rendered before the stone, but all of them are within the same mesh. Depth sorting every single triangle is a bit of overkill, and per-object sorting clearly does not work here. In the original game this is not a problem because the asset is usually seen from above.

Not sure what to do about this one just yet. One could reorder the mesh’s triangle list so the flower triangles come after the others, but that would have to be re-done every time the mesh is modified. The other idea is to split it into two objects, which of course adds the overhead of a couple of OpenGL context switches. But for trees seen from a wide range of viewing angles, that would explode the number of meshes…

Update Dec 17, 2012

Well, I did a bit more digging yesterday and the situation gets even weirder. According to some sources on the net:

  • Photoshop produces PNGs with pre-multiplied alpha
  • The PVR compression tool shipped with iOS does straight alpha (but the PVR compression tool from the PowerVR website can also do pre-multiplied)
  • Xcode always does pre-multiplied for PNGs as part of its optimizations

And to make things even more interesting, pre-multiple alpha seems not only to be the source of my original problem but also the answer. The most cited article on this topic seems to be TomF’s Tech Blog. Turns out, if your mipmap texture is in pre-multiplied alpha, filtering does not cause any fringes, halos or whatever, one just has to switch to a different blending function (that is ONE and ONE_MINUS_SRC_ALPHA … which matches my equation from above)…. well, in fact it doesn’t. For as long as I’ve been doing OpenGL, I’ve always read “use alpha and 1-alpha” but that’s wrong! If you check the equation above and assume you are blending a half-transparent pixel on to an opaque pixel, you get 0.5×0.5+1.0×0.5=0.75. That’s clearly not what we want. I’m seriously wondering why this hasn’t caused more problems for me!

The right way to do it is to use glBlendFuncSeparate to apply a different weighting to the alpha channel. That gives us a new equation, and finally one that matches what pre-multiplied alpha does (note that most sources use ONE and not ONE_MINUS_SRC_ALPHA as the destination alpha weight in the non-pre-multiplied alpha case, which doesn’t seem right if you ask me):
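A quick numeric sanity check of the two alpha weightings, simulating the blend arithmetic on the CPU rather than calling GL (a sketch; in the real code this is glBlendFuncSeparate picking the source alpha factor):

```c
/* Destination alpha after blending a half-transparent source over an opaque
 * destination. With SRC_ALPHA as the source alpha factor the result is 0.75;
 * with ONE (as set via glBlendFuncSeparate) it stays at the expected 1.0. */
static float blended_alpha(float src_a, float dst_a, int src_factor_is_one)
{
    float src_weight = src_factor_is_one ? 1.0f : src_a;
    return src_a * src_weight + dst_a * (1.0f - src_a);
}
```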

There seem to be concerns about whether or not pre-multiplied alpha causes problems when using texture compression. But the fact is that using a separate alpha map adds a number of OpenGL calls for the texture combiners (less of an argument with OpenGL ES 2.0 shaders) and another texture bind. So I guess I’ll try to change my content pipeline to use pre-multiplied alpha textures throughout!

– Alex
