Using NSOperation to speed up compute-intensive tasks.

April 29, 2011

The NSOperation class, an abstract class available in Mac OS X 10.5 and newer, is a fantastically simple class to use and makes it very easy to gain performance on Mac OS X computers with multiple processors.

The iOS version of Lens•Lab did not have any multi-processor optimizations in it: the only iOS device with multiple processors is the iPad 2 and it wasn’t even released when I began work on Lens•Lab.

The biggest performance problem I faced with Lens•Lab was the simulated blur. At the beginning I always feared that any performance problems I was going to see would be related to drawing the background artwork. As it turns out, that job is trivial, performance-wise.

One of the cool things (I think) about Lens•Lab is the blur we do on the near and far out of focus areas. It really drives home the depth of field idea and gives one a more intuitive grasp of how optics works. But I immediately found out that doing a blur in realtime on mobile devices would be a challenge.

(At this point some programmers smarter than I am might ask why I didn’t just use OpenGL for the blur. The answer is that I don’t know OpenGL, and learning it would have pushed back the release of Lens•Lab. I really wanted to get it out into the world, plus it was fun getting the blur algorithm tuned and trying to squeeze every drop of sweet, delicious performance that I could out of it.)

After doing some performance analysis and fine-tuning of the blur algorithm I was fairly satisfied with the results. Lens•Lab got released for iOS devices.

One thing that didn’t make the iOS cut is the fancy blur we do where the blur increases gradually as we get further and further from the exact near and far depth of field distances. This gradually increasing blur is super-expensive computation-wise. It could be that I have a crappy algorithm, but I’ve made the algorithm work as fast as I can. So what to do? Multi-cores to the rescue!
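(Just to illustrate what “gradually increasing” means here, and not the actual Lens•Lab algorithm: you can think of it as picking a bigger blur radius the further a strip of the image sits past the depth of field boundary, something like this made-up linear ramp:)

// Hypothetical illustration only: map the distance past the depth of field
// boundary to a blur radius, clamped at maxRadius. The real falloff curve
// and parameters used in Lens•Lab aren't shown here.
static int radiusForDistance(float distancePastBoundary, float falloffDistance, int maxRadius) {
    if (distancePastBoundary <= 0.0f) return 0;        // still in focus: no blur
    float t = distancePastBoundary / falloffDistance;  // 0..1 across the ramp
    if (t > 1.0f) t = 1.0f;
    return (int)(t * maxRadius);                        // blur grows with distance
}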

I took the blur algorithm code out of my NSView subclass and moved it into its own NSOperation subclass. The header looks like this:

#import <Cocoa/Cocoa.h>

@interface BlurOperation : NSOperation {
    CGImageRef inImage;
    int pixelRadius;
    float scale;
    int iterations;
    BOOL near;
    CGImageRef outImage;
}

@property CGImageRef outImage;

- (id)initWithImageRef:(CGImageRef)image pixelRadius:(int)radius scale:(float)imageScale iterations:(int)iterationAmount isNear:(BOOL)near;

- (void)main;

@end

So we have some ivars (including references to the inImage and outImage CGImageRefs), an init method that gets the info we need, and a main.

The implementation is as simple as you would expect. The init method takes the arguments and sets up the ivars. Main looks like this:

- (void)main {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // ... blur stuff ...

    [pool release];
}
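For the curious, the init method is just bookkeeping along these lines (a sketch rather than the exact code; the CGImageRetain and the dealloc cleanup are assumptions about how the ownership is handled):

- (id)initWithImageRef:(CGImageRef)image pixelRadius:(int)radius scale:(float)imageScale iterations:(int)iterationAmount isNear:(BOOL)nearFlag {
    self = [super init];
    if (self) {
        inImage = CGImageRetain(image);   // keep the input image alive until main runs
        pixelRadius = radius;
        scale = imageScale;
        iterations = iterationAmount;
        near = nearFlag;
    }
    return self;
}

- (void)dealloc {
    CGImageRelease(inImage);
    CGImageRelease(outImage);             // assumes main created outImage for us
    [super dealloc];
}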

And that’s it. In our NSView subclass, all we need to do is set up an NSOperationQueue when we initialize:

operationQueue = [[NSOperationQueue alloc] init];

And then when we want to perform the blur, we just have to do this:

BlurOperation *blurNear = [[BlurOperation alloc] initWithImageRef:nearCropRef pixelRadius:15 scale:1.0 iterations:kBlurIterations isNear:TRUE];
[operationQueue addOperation:blurNear];

BlurOperation *blurFar = [[BlurOperation alloc] initWithImageRef:farCropRef pixelRadius:20 scale:1.0 iterations:kBlurIterations isNear:FALSE];
[operationQueue addOperation:blurFar];

[operationQueue waitUntilAllOperationsAreFinished];

Of course we release the BlurOperation instances when we’re all done.
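Putting that together, the tail end of the drawing code might look something like this (the outImage property comes from the header above; exactly how the two images get composited back into the view is hand-waved here):

// Both operations are finished once waitUntilAllOperationsAreFinished returns.
CGImageRef nearBlurred = blurNear.outImage;
CGImageRef farBlurred = blurFar.outImage;

// ... composite nearBlurred and farBlurred into the view's drawing ...

[blurNear release];
[blurFar release];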

What I found when I did this was a 50% speedup in frames per second when manipulating the controls or resizing the window. Which makes sense: the near blur area is roughly half the size of the far blur area, with half as many pixels to process, so with the two operations running in parallel the total time is basically just the time of the bigger far blur instead of the two added together. When the two operations are added to the queue, the near blur is almost always going to get done first. Either way, taking the compute-intensive task of doing this graduated blur in real time and turning it into an NSOperation subclass was really easy and made a huge difference in how fast the Mac version of Lens•Lab runs. Hopefully you can be inspired to try this with your app as well!

The cool thing about this is that I will be able to add this to the iOS version of Lens•Lab and have awesome performance with devices that use the new Apple A5 dual-core processor!


Can you keep a secret?

April 28, 2011

Check out what we’re cooking up for the Mac!

If you wanna take a sneak peek, click here. It may crash on you and might not work exactly right. Don’t tell anyone!

(Two things about the Mac version of Lens•Lab that I think are cool:

  1. It’s multi-threaded so we compute the blur on two cores if you got ’em.
  2. The blur is more advanced: the amount of blur increases as the distance from the defocus point increases. Neat!)

International Support Quick Note

April 27, 2011

After seeing the sales data, I’m super glad I put this little check in before we set the settings for the first time:

if ([[[NSLocale currentLocale] objectForKey:NSLocaleCountryCode] isEqualToString:@"US"]) {
    [userDefaults setInteger:kImperialSegment forKey:@"units"];
} else {
    [userDefaults setInteger:kMetricSegment forKey:@"units"];
}

Half the sales of Lens•Lab are from places outside the US where they use the much more intelligent and rational metric system. We set metric as the default for these lucky souls.


Our First Professional Review!

April 27, 2011

Check it out! Lens•Lab got reviewed over at 148Apps! 4 out of 5 stars. Not bad!

The review was really well written!


Lens•Lab is New & Noteworthy!

April 21, 2011

Yay! Lens•Lab is in the “New & Noteworthy” section of the Photography category for iPad:

I don’t know how that works but OK!

It’s also “New & Noteworthy” in the Photography category for iPhone but hidden away in the “See All” part:


Lens•Lab 1.0 Released for iOS Devices!

April 19, 2011

We are pleased to announce the release of Lens•Lab 1.0 for all iOS devices!

What is Lens•Lab? Here’s a short description:

Lens•Lab is a virtual lens simulation tool for photographers of all skill levels. It graphically illustrates how depth of field and field of view relate to your SLR camera and lenses.

Lens•Lab works with all iOS devices running firmware 3.0 or newer and works great with iPad and iPhone 4 retina displays!


Method for Interpreting Illustrator Art Assets as Cocoa CGPathRef

April 17, 2011

In the creation of Lens•Lab, we had to figure out a way to create an art asset that contained visual detail that would scale from 10cm to 100m. We know Adobe Illustrator very well, so we were hoping we could create the art asset there. The problem was this: we had to take this path in Illustrator and scale it programmatically in realtime over 4 orders of magnitude.

The solution we came up with is an Objective-C method that parses a specially prepared chunk of an Illustrator EPS file and creates a Cocoa CGPathRef object which one can then manipulate to one’s heart’s content.

Step 1: Create your path in Illustrator. Since this is going to end up as a CGPathRef object, we won’t be getting any color or stroke data from Illustrator. So basically, limit yourself to one single shape (compound paths are fine), fill it with black, and then you’re done. Oh, and you have to flip it vertically because of differences in coordinate systems between Illustrator and Core Graphics. Save this artwork as an Illustrator 3 EPS with no preview.

Step 2: Open this EPS in your favorite text editor. Find the chunk between %%EndSetup and %%PageTrailer. Copy and paste this into either an NSString or a text file for reading in later.

Step 3: In your code,

#include "ShapeUtilities.h"

and call

CGPathRef yourPath = [ShapeUtilities createCGPathRefFromEPSString:yourString];

Oh look! I made you a zip archive with an Xcode 4 iPhone project all set up! And here’s what it looks like!

I’m sure this class could be extended to do much better (read: any) error checking and I suppose you could even try to parse appearance stuff as well. Perhaps even wrap up the CGPathRefs in an array if there are multiple shapes. Just go for it!
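If you want the gist without opening the archive, here’s a rough sketch of the idea (a simplified sketch, not the code in the zip): Illustrator’s path data is, roughly, a series of lines of the form “x y m” for a moveto, “x y l” or “x y L” for a lineto, six coordinates followed by “c” or “C” for a curveto, and a painting operator like “f” to finish the path. Mapping those onto a CGMutablePathRef looks more or less like this:

// Rough sketch of an EPS-chunk-to-CGPath parser. Assumptions: each line of the
// chunk is "x y m" (moveto), "x y l" or "x y L" (lineto), "x1 y1 x2 y2 x3 y3 c"
// or "C" (curveto), or a closing operator like "f". Illustrator's other curve
// shorthands and any error handling are left out; the class in the zip archive
// is the real implementation.
+ (CGPathRef)createCGPathRefFromEPSString:(NSString *)epsString {
    CGMutablePathRef path = CGPathCreateMutable();
    NSArray *lines = [epsString componentsSeparatedByCharactersInSet:
                      [NSCharacterSet newlineCharacterSet]];
    for (NSString *line in lines) {
        // Split the line into non-empty whitespace-separated tokens.
        NSMutableArray *tokens = [NSMutableArray array];
        for (NSString *token in [line componentsSeparatedByCharactersInSet:
                                 [NSCharacterSet whitespaceCharacterSet]]) {
            if ([token length] > 0) [tokens addObject:token];
        }
        if ([tokens count] == 0) continue;
        NSString *op = [tokens lastObject];

        if ([op isEqualToString:@"m"] && [tokens count] >= 3) {
            CGPathMoveToPoint(path, NULL,
                              [[tokens objectAtIndex:0] floatValue],
                              [[tokens objectAtIndex:1] floatValue]);
        } else if (([op isEqualToString:@"l"] || [op isEqualToString:@"L"]) && [tokens count] >= 3) {
            CGPathAddLineToPoint(path, NULL,
                                 [[tokens objectAtIndex:0] floatValue],
                                 [[tokens objectAtIndex:1] floatValue]);
        } else if (([op isEqualToString:@"c"] || [op isEqualToString:@"C"]) && [tokens count] >= 7) {
            CGPathAddCurveToPoint(path, NULL,
                                  [[tokens objectAtIndex:0] floatValue],
                                  [[tokens objectAtIndex:1] floatValue],
                                  [[tokens objectAtIndex:2] floatValue],
                                  [[tokens objectAtIndex:3] floatValue],
                                  [[tokens objectAtIndex:4] floatValue],
                                  [[tokens objectAtIndex:5] floatValue]);
        } else if ([op isEqualToString:@"f"] || [op isEqualToString:@"F"]) {
            CGPathCloseSubpath(path);
        }
    }
    return path; // follows the Create rule: the caller releases it
}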