Some GitHub Projects

Greetings y’all. I’ve been uploading some newer / older / whatever iOS and macOS projects to GitHub. Some of these were just random ideas I wanted to try out; others are products I was pursuing that never made it to the final stages for whatever reason.

I thought it’d be better for these to die on the vine in public, where others can either take a look at them or perhaps get inspired to create their own projects.

Let’s take a gander at a couple, shall we?

First up is dynamic pixelizer.

It’s an iOS app that uses video from the front-facing camera to create a pixelated grid. Tap the pixels and they turn into SpriteKit nodes with associated physics bodies. The pixels then drop and bounce while being updated with the video feed in real time.
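For flavor, here’s a rough sketch of the kind of downsampling that turns a camera frame into a grid of “pixels” — plain Python with grayscale 2D lists standing in for real pixel buffers, and the function name is mine, not the app’s:

```python
# Rough sketch of the pixelation step: average each cell of a frame
# into a single grid "pixel". Illustrative only, not the app's code.

def pixelate(frame, cell):
    """Downsample a 2D frame into a grid of cell-averaged values."""
    rows, cols = len(frame), len(frame[0])
    grid = []
    for r in range(0, rows, cell):
        row = []
        for c in range(0, cols, cell):
            block = [frame[y][x]
                     for y in range(r, min(r + cell, rows))
                     for x in range(c, min(c + cell, cols))]
            row.append(sum(block) // len(block))
        grid.append(row)
    return grid
```

Each resulting grid cell is what gets promoted to a physics-enabled node when tapped.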

The first incarnation of this used Box2D; then I tried the physics engine built into iOS (which is Box2D under the hood, AFAIK). Box2D by itself was more performant (perhaps because of its precision defaults?), but then I tried Chipmunk physics and found it to be roughly twice as fast as Box2D while behaving basically the same as far as I could tell.

Next is Fractal Xplorer Mac.

I’ve been fascinated by fractals since the early ’80s (I typed a fractal-generation BASIC program into my Atari 800XL when I was 14; it took 8 hours to generate a low-resolution image, gah!), and I was interested in learning OpenCL programming. So this little app has been evolving for years, getting tiny amounts of attention whenever I was inspired.

The app will use your Mac’s GPU if possible to generate a Mandelbrot set image (and the associated Julia set fractal) and display them in real time. You can pan and zoom, and also hold the Command key while hovering over the Mandelbrot / leftmost pane to update the corresponding Julia set.
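The Mandelbrot/Julia pairing falls out of one shared loop: both fractals iterate z → z² + c and count steps until escape; they differ only in which value is the pixel and which is the constant. A minimal Python sketch (not the app’s OpenCL kernel — the names are mine):

```python
# Escape-time iteration behind both fractals. For the Mandelbrot set,
# c is the pixel's point and z starts at 0; for a Julia set, z is the
# pixel's point and c is fixed (e.g. the point under the cursor).

def escape_time(z, c, max_iter=100):
    """Return the iteration count at which |z| exceeds 2, or max_iter."""
    for i in range(max_iter):
        if abs(z) > 2.0:
            return i
        z = z * z + c
    return max_iter

def mandelbrot(point, max_iter=100):
    return escape_time(0j, point, max_iter)

def julia(point, c, max_iter=100):
    return escape_time(point, c, max_iter)
```

That symmetry is why hovering over a Mandelbrot point can drive the Julia pane: the hovered point just becomes the fixed c.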

Finally, there’s EZHDR.

Photography is (or at least was) an interest of mine, and I’ve been playing with High Dynamic Range images for a decade at least. I’ve used various HDR generation software packages and have developed my own approaches using Photoshop, but I thought it would be fun to try to generate some programmatically on a telephone.

The basic idea is that we get real-time video output from the iPhone’s camera and do some basic analysis of the pixels to see to what extent the image is under- or overexposed. We use that to configure a series of three bracketed exposures and do some post-processing.
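As a hedged sketch of that analysis step — the thresholds and the EV formula here are invented for illustration, not taken from EZHDR:

```python
# Hypothetical exposure analysis: measure how much of the frame clips
# dark or bright, then widen the three-shot bracket accordingly.

def exposure_brackets(pixels, low=0.1, high=0.9):
    """pixels: normalized luminance values in 0..1.
    Returns (underexposed, normal, overexposed) EV offsets."""
    n = len(pixels)
    under = sum(1 for p in pixels if p < low) / n   # fraction crushed dark
    over = sum(1 for p in pixels if p > high) / n   # fraction blown bright
    ev = 1.0 + 2.0 * max(under, over)               # widen as clipping grows
    return (-ev, 0.0, ev)
```

A flat, well-exposed frame gets a modest ±1 EV bracket; a heavily clipped one gets pushed wider.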

First, all images are “auto adjusted”. Then we take the underexposed image, create a greyscale version, do a fast box blur, and use that as a mask to blend in the darkest areas of the overexposed image. After that we do the opposite for the overexposed image (use its data as a mask to pull in the best parts of the underexposed image), and then we merge everything together to create a final image that should hopefully have no underexposed or overexposed areas.
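The masking-and-merge steps could be sketched roughly like this — a toy 1D Python version; the real app works on 2D images (with Core Image), and every name here is illustrative:

```python
# Toy 1D sketch of the mask-and-merge step, assuming normalized
# luminance values in 0..1.

def box_blur(vals, radius=1):
    """Naive 1D box blur standing in for the fast box blur on the mask."""
    out = []
    for i in range(len(vals)):
        lo, hi = max(0, i - radius), min(len(vals), i + radius + 1)
        out.append(sum(vals[lo:hi]) / (hi - lo))
    return out

def blend(a, b, mask):
    """Per-pixel mix: mask 0 keeps a, mask 1 takes b."""
    return [av * (1 - m) + bv * m for av, bv, m in zip(a, b, mask)]

def merge_hdr(under, normal, over):
    """Pull shadow detail from the overexposed frame and highlight
    detail from the underexposed frame, gated by blurred masks."""
    shadow_mask = box_blur([1.0 - p for p in under])  # bright where 'under' is dark
    highlight_mask = box_blur(over)                   # bright where 'over' is blown
    result = blend(normal, over, shadow_mask)
    return blend(result, under, highlight_mask)
```

Blurring the masks is what keeps the seams between the blended regions soft instead of hard-edged.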

So the technique works fine, I guess, but it wasn’t as robust and effective as I was hoping. Still, I came up with some neat things along the way (check out the categories on CIImage!).

I hope you enjoy and/or find these experiments useful.

