13″ Retina MacBook Pro (Late 2012) sort of review: That’s a lotta pixels!

March 10, 2013

I kind of want to do a review of the 13″ Retina MacBook Pro I got a few weeks ago. But I want to do it in a way that doesn’t rehash what others have already written. Let’s see how this goes.

There are two aspects of computers that have really not improved over the past two decades: hard drive speeds and display densities. The hard drive problem has been ameliorated by SSDs, so that really leaves displays as the final piece that needs to play catch-up. And catch up they have! But first things first.

Apple announced the 15″ Retina MacBook Pro models last summer. The announcement filled my heart with joy but I was also a little disappointed for two reasons. One, I’m not a fan of the 15″ form factor (it’s just too damn big). And two, the starting price of $2,200 was way above what I’d ever be willing to pay for a laptop.

When the 13″ Retina MacBook Pro was announced last November, one of those disappointments floated away. But I was still left with the prospect of paying $1,700 for a new laptop. For the record, the most I’ve ever paid for a laptop (of any kind) was $1,600 for an iBook back in late 2001. My normal laptop purchase M.O. is to get the lowest-end 13″ (or so) model and do upgrades myself.

Recently the price of the 13″ Retina dropped by $200. It’s also worth noting that the educational discount on this model is $100, which means you can get a 13″ Retina MacBook Pro for $1,400 with the educational discount. It looks like the planets are aligning nicely!

So, after having had it for a few weeks I can safely say that this is by far the best laptop I have ever had. The screen still blows me away every day and the form factor is almost perfect. Let’s go over the highlights, shall we?

The display: oh my god the display

The star of the show really is the 13.3″ 2560 x 1600 IPS display. It’s the same size as the screen in my old 13″ MacBook Pro (1280 x 800), but there are four times as many pixels, and the color and viewing angles are significantly better. At normal viewing distances it’s nearly impossible to resolve individual pixels.

Here are some observations I’ve made over the past few weeks in no particular order.

There is really something strange about the retina display and I think I have figured out what it is: solid colors (like the solid white background of this page) seem to “disappear” because there is no pixel grain for your eyes to focus on. On a normal low-resolution display, a solid block of color still has the grid inherent in the display technology for your eyes to latch onto. On a retina display, there is just solid color and nothing else. This is not to say that the display is hard to focus on (quite the opposite); it’s just strange not being able to focus as easily on solid colors.

[Screenshot: WordPress]

I thought that web browsing was going to be a painful experience (since the web is mostly not retina aware, i.e., images and other in-page UI elements display at poor resolutions), but it has actually been pretty damn good. It helps that all text renders at high resolution. And many more sites are serving up high-resolution assets than I thought would be the case. For example: I’m typing this into WordPress and most of the UI elements are being drawn in high resolution. Other sites like Facebook, Flickr, and Imgur are retina aware to a greater or lesser extent.

Apple has done a fantastic job of creating high resolution UI assets. All the normal buttons, sliders, widgets, etc. that you’ve grown to love over the years have been given a high resolution makeover and the results are just amazing.

Apps that don’t use the Cocoa frameworks directly are displayed at the horrible “normal” resolution. This was clearly evident when I installed the Android Development Toolkit (which is based on Eclipse). There is a fix though, and it’s as easy as modifying a value in the app’s Info.plist. There’s a handy app that does this for you called Retinizer.
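If you’re curious what that value actually is: the key in question is NSHighResolutionCapable in the app’s Info.plist. Here’s a throwaway little command-line sketch (mine, not Retinizer’s, and purely illustrative) that just reads that key from an app bundle so you can see who has opted in:

#import <Foundation/Foundation.h>
#import <stdio.h>

int main(int argc, const char *argv[])
{
    @autoreleasepool {
        if (argc < 2) {
            fprintf(stderr, "usage: %s /path/to/Some.app\n", argv[0]);
            return 1;
        }
        // Read the bundle's Info.plist and report the Retina opt-in flag.
        NSString *appPath = [NSString stringWithUTF8String:argv[1]];
        NSString *plistPath = [appPath stringByAppendingPathComponent:@"Contents/Info.plist"];
        NSDictionary *info = [NSDictionary dictionaryWithContentsOfFile:plistPath];
        NSNumber *hiRes = info[@"NSHighResolutionCapable"];
        NSLog(@"NSHighResolutionCapable = %@", [hiRes boolValue] ? @"YES" : @"NO (or not set)");
    }
    return 0;
}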

[Screenshot]

As much of a revelation as browsing and developing in Xcode was, I was totally unprepared for how amazing it was going to be to load up my photos in Aperture. Being able to view and work on photos on a display that basically doesn’t have to do any downsampling is just incredible. It’s like looking at an actual printed photo, but so much better. It’s truly hard to get across how amazing it is.

[Screenshot]

The way Apple implemented HiDPI display scaling is pretty much mind-blowing. In the Displays preference pane, in addition to the “native” resolution of 1280 x 800, I can choose two scaled resolutions: 1440 x 900 and 1680 x 1050. What this does is create a virtual retina display at either 2880 x 1800 or 3360 x 2100 and scale those pixels down to the panel’s 2560 x 1600 in real time. (It looks like it’s doing bilinear scaling, which is perfect.) These alternate modes are very handy when running Xcode or tools like Photoshop or Illustrator.
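For the curious, here’s a tiny sketch (not from any real project) of how an app sees these modes: the screen reports its size in points, and the backing scale factor tells you how many pixels back each point. In the scaled 1440 x 900 mode you’d expect 1440 x 900 points at a scale of 2.0, i.e. the 2880 x 1800 virtual display described above:

#import <Cocoa/Cocoa.h>

int main(void)
{
    @autoreleasepool {
        NSScreen *screen = [NSScreen mainScreen];
        NSRect frame = screen.frame;                          // size in points
        CGFloat scale = screen.backingScaleFactor;            // 2.0 on a retina display
        NSRect backing = [screen convertRectToBacking:frame]; // size in pixels
        NSLog(@"%.0f x %.0f points @ %.1fx = %.0f x %.0f pixels",
              frame.size.width, frame.size.height, scale,
              backing.size.width, backing.size.height);
    }
    return 0;
}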

Speaking of Illustrator: if you get a chance, totally run the retina-aware version of Illustrator on a Mac with a retina display. It’s kind of like these two things were made for each other. If you run it at the HiDPI 1680 x 1050 setting you’re basically running Illustrator on a virtual 7-megapixel display (3360 x 2100 works out to just over 7 million pixels). Holy cow!

[Screenshot]

Other various observations

Being able to run multiple external displays is really kind of neat, and it makes my pile of Dell 20″ displays more useful. I’ve got a suspicion that it’s possible to run three external displays in clamshell mode if two of them are DisplayPort, but I haven’t been able to test this hypothesis yet. I would need either two Thunderbolt displays or two displays using active DisplayPort adapters, plus a third over HDMI.

Thunderbolt is cool, but right now USB 3 is where it’s at. Being able to buy a 2.5″ USB 3 enclosure for $20, slap an SSD in it, and get most of the drive’s performance over USB 3 is very handy. I’ve got my Aperture library on an external USB 3 enclosure with a 250GB Samsung 840 in it. I get 200MB/sec reads and writes, and random access is as speedy as you would expect. WIN!

[Photo]

The Intel HD 4000 graphics are fine at 2D and merely adequate at 3D. I mostly play computer games on my Windows machine, but it’s fun to play some Portal occasionally at 2560 x 1600.

The battery life and i5 CPU are both fine. And boring.

The built-in 128GB SSD is quite speedy (we’re talking reads and writes at over 400MB/sec).

The machine boots up from cold boot to desktop in under 10 seconds. Yowzers!

The iPhone/iPad simulator in Xcode is retina aware so basically I can run a simulated retina iPad in landscape mode without any kind of scaling or clipping. Big win!

I don’t miss having an optical drive one little bit.

To sum up, this is just an amazing piece of technology. And it’s really only possible to do something like this when you have a single company that:

  1. Controls the hardware
  2. Controls the software
  3. Cares about quality, design, aesthetics, and user experience

The two technologies that have lagged horribly over the past two decades, spinning hard drives and low-density displays, are finally becoming a thing of the past: SSDs are now mainstream, and high-resolution displays are not only affordable, they have been made immediately useful by excellent operating system support.

PCs as we know them traditionally still have years of life left. I’m excited that these years are going to be spent with beautiful and functional displays.


New App, Etc.

March 8, 2013

My new app has been on the App Store for a couple of weeks. It’s a QR Code Clock. You read that right.

https://itunes.apple.com/us/app/qrcodeclock/id601461609?mt=8

Also, I still have more Android retrospective stuff coming, plus a review of the new 13″ Retina MacBook Pro I got! Wowzers!


Dot Notation in Objective-C

February 4, 2013

I’m going to jump right into this contentious issue. There has been a lot written about using dot notation in Objective-C. Before we get into the arguments, let me first show you something from some actual code I’m working on right now:

“Normal” bracket notation Objective-C:

[fetchRequest setSortDescriptors:[NSArray arrayWithObject:sort]];

Dot notation Objective-C 2.0:

fetchRequest.sortDescriptors = @[sort];

Alright, so I used some literals in there as well. But that’s really part of my point. The language evolves over time and just because something has “always been done this way” doesn’t mean it’s necessarily the best way to do it.

The dot notation line is about half as long and, to my mind, much easier to read. Having been typing Objective-C for a few years now, I can tell you that brackets are tough to manage. Xcode seems to get its “bracket helping” wrong almost half the time. And they’re just visual clutter.

Some say “OK, use dots but just for properties!” Me, I’m getting used to using the dots for everything except (void) methods with no arguments.

Everything else is up for grabs, I guess because I think of everything as a property and/or object to grab. For instance, if I use:

UIColor.blackColor

I immediately think “OK, we’re peeling off a blackColor from the UIColor class.”
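To make that concrete, here’s roughly where the dots and brackets end up for me (a throwaway sketch, not from any real project):

#import <UIKit/UIKit.h>

static void DotStyleSketch(UITableView *tableView, NSMutableArray *array)
{
    NSUInteger count = array.count;              // zero-argument getter: dot
    UIColor *black = UIColor.blackColor;         // class-level "getter": dot
    NSString *shout = @"hello".uppercaseString;  // works on any expression: dot

    [tableView reloadData];                      // void, no-argument action: brackets
    [array insertObject:black atIndex:0];        // anything with real arguments: brackets anyway

    NSLog(@"%lu items, %@, %@", (unsigned long)count, black, shout);
}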

Here’s a more involved example. I have a “SensorManager” class that is a singleton. This class basically wraps up the functionality and behavior of the location manager and the motion manager. Here’s some actual code from the app I’m working on:

SensorSample *newSample = SensorSample.new;
newSample.latitude = @(SensorManager.sharedInstance.locationManager.location.coordinate.latitude);

This may make you cringe or it may make you go “ah-ha!”

When I look at it, I think to myself, “OK, we’ve got a new sample that we’re peeling off of the SensorSample class. We’re setting its latitude to a boxed-up NSNumber that we get from the SensorManager’s shared instance’s location manager’s location’s coordinate’s latitude.”

The other way to write this is:

SensorSample *newSample = [[SensorSample alloc] init];
[newSample setLatitude:[NSNumber numberWithDouble:[[[[SensorManager sharedInstance] locationManager] location] coordinate].latitude]];

ARE YOU FUCKING KIDDING ME?!?!? I couldn’t even type that second line without Xcode fucking up the brackets. Only it works fine IF YOU KNOW EXACTLY HOW MANY BRACKETS YOU HAVE TO TYPE FROM THE BEGINNING.

Also, the only time I needed to use the dot was to get the latitude out of the struct. Are the dot-notation naysayers really saying that I have to type all those brackets because of that single dot attached to the struct? Whatever.
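For the record, SensorManager itself is nothing exotic. It’s roughly this shape, a dispatch_once singleton hanging onto the two managers (a sketch with paraphrased names, not the exact code):

#import <Foundation/Foundation.h>
#import <CoreLocation/CoreLocation.h>
#import <CoreMotion/CoreMotion.h>

@interface SensorManager : NSObject
@property (nonatomic, strong, readonly) CLLocationManager *locationManager;
@property (nonatomic, strong, readonly) CMMotionManager *motionManager;
+ (instancetype)sharedInstance;
@end

@implementation SensorManager

+ (instancetype)sharedInstance
{
    // Standard dispatch_once singleton.
    static SensorManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [[self alloc] init];
    });
    return shared;
}

- (id)init
{
    self = [super init];
    if (self) {
        _locationManager = [[CLLocationManager alloc] init];
        _motionManager = [[CMMotionManager alloc] init];
    }
    return self;
}

@end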

Geeze. This post went from casual to rage in a hurry.


Ooh!

January 26, 2013

Look what else we’re doing:

[iOS Simulator screenshot]


Lens•Lab 1.1 for iOS is here!

January 25, 2013

I guess we didn’t have to wait that long! Lens•Lab for iOS has been bumped to version 1.1! We’re super excited about this release and hope you like it too!


Lens•Lab for iOS. Submitted, now we wait.

January 21, 2013

Just submitted version 1.1 of Lens•Lab for iOS. If you’re following along you can look forward to these groovy features in a week or so!

 

Lens•Lab, the world’s most advanced yet simple-to-use depth of field tool, has been updated! Finally, after almost two long years. Wowsers!

Here’s what’s new: 

• iPhone 5 wideness! 

• Focal length constraints! You can customize the virtual lens’s minimum and maximum focal length. Values from 1mm to “very large number” are supported. 

• More sensor formats! By popular demand, we’ve added APS-H and DX formats. 

• Better imperial measurements! 8.7 feet? What was I thinking? Now we report imperial measurements in feet, feet and inches, and 1/4, 1/2, and 3/4 inches. Metric measurements are unchanged. 

• Better aperture controls! Pick an arbitrary value from ƒ/0.95 to ƒ/64! 

• Precision sliders! The slider controls now have precision built-in. Just slide your finger vertically to make the slider more or less precise. Check out this neato feature! 

• Better interaction! Slide your finger in the left 1/4 of the screen to adjust focal length (up & down) and aperture (left & right). 

This is the best version of Lens•Lab yet! We hope you like it and it makes your life better. We appreciate awesome ratings!

 


Android Development Retrospective, Part 1

January 20, 2013

After wrapping up development on the Android version of Lens•Lab last night, I thought I should sit down and write a retrospective of my experiences.

I’ve been developing apps for iOS for a few years now. Before that I had *some* programming experience (creating desktop apps in VB.NET and C#) and really no formal education in programming or computer science (or anything, for that matter).

6 months ago I got hired to work for a really cool company on some neat iOS projects. Some of the iOS stuff relied on back-end services written in Java. So I got to learn a little Java.

It sounds like my team at work is going to be taking over some Android projects so I figured now would be a good time to get my Android learn on!

I started building the Android version of Lens•Lab a few months ago, but I got bogged down coming up with the UI (that took far longer than it should have) and only really picked up the pace within the last few weeks. So the bulk of the development has happened in my free time and on weekends over the past couple of weeks.

So I’m going to look at building Android apps from an iOS developer’s perspective. I’m by no means an expert on Android or Java, so bear with me. Some of my criticisms of Android might be coming from a place of ignorance. If there’s anything I’ve got wrong, tell me so I can change my wrongness!

One of the major pain points when starting to develop software for Android is the fragmentation in the software and hardware ecosystem. There are (according to Google Play) over 2,200 different devices that can run Lens•Lab. Since version 2.2 of the Android OS there have been 8 updates (both major and minor) with 8 different API levels. There are 4 different screen density tiers. There are countless third party “themes” and skins that affect the appearance and behavior of Android devices. There are a multitude of display resolutions and CPU performance levels.

One person’s “diversity” is another person’s “fragmentation.” I suppose the vast differences in hardware and software can be seen as a plus, but from a beginning developer’s perspective it’s a little intimidating. Where do you start? What types of devices will you target? What is going to be your minimum OS version?

Operating system uptake in Android land is far different than in iOS land. With iOS, *any* device that can run the newest operating system tends to get it within a month or two. Right now, iOS 5 or later covers about 95% of the iOS installed base, and iOS 5 is only one version behind the current release.

In Android land, the decision to update OS versions is, most of the time, left up to the carriers and hardware manufacturers. You might get lucky and have a carrier or hardware manufacturer who bestows an operating system update on you but most of the time most people won’t be so lucky.

So there’s that.

Personally, I didn’t run into any major API issues (Lens•Lab doesn’t use too many whiz-bang API features) but I was a little concerned about CPU performance. Blurring bitmaps in realtime (even with a fast box blur) is no trivial task. Not knowing anything about the hardware you’re running on makes it hard to make performance tradeoff decisions at runtime. More about how I solved this later.

Language:

Java, compared to Objective-C, is simpler and more straightforward, but it also seems much more pedestrian. I’ve come to love Objective-C’s syntax and named parameters so fucking much. They make code easier to read and understand, and they help the code convey more meaning.
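Here’s a made-up but representative example of what I mean (the Java line is a real Android Canvas call, shown only as a comment; the Objective-C call is a real UIImage method):

// Java:  canvas.drawBitmap(bitmap, srcRect, dstRect, paint);
//        (quick: which rect is the source and which is the destination?)
#import <UIKit/UIKit.h>

static void DrawSketch(UIImage *image, CGRect targetRect)
{
    // In Objective-C the argument names are part of the call itself.
    [image drawInRect:targetRect
            blendMode:kCGBlendModeNormal
                alpha:0.5];
}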

Where Objective-C is lousy with brackets, Java is riddled with parentheses. Keeping track of all the parentheses is a full time job.

IDE:

Xcode, for all its faults, is a very beautiful IDE. Once you get it nailed down, it’s a great environment in which to build great applications. I’ve really gotten used to Xcode 4 and all the niceties it provides. Using the iOS simulator is just fantastic. Since the iOS frameworks and your code are compiled to x86, apps end up running faster in the simulator than they do on any real device. Compare this with Android land, where running in the emulator is an exercise in frustration.

Eclipse has some really cool refactoring tools, but that’s about all I can say for developing Android apps in that IDE. Everything from autocomplete to the severely broken GUI-building tools feels, well, just wonky and janky. I don’t know how else to describe it.

Oh, and don’t get me started on all the XML editing you’re going to be doing when you make an Android app.

Hardware:

To develop Lens•Lab for Android I relied on two pieces of hardware: a Nexus 7 running Android 4.2.1 and an older Motorola Droid running Android 2.2.3. The Nexus 7 is hard to beat price-wise (only $199), and the Droid was a friend’s that was just sitting around. I felt it was important to experience the app on as wide a range of hardware as possible, and also to support older versions of the operating system to a reasonable extent. Supporting down to 2.2 (API level 8) covers around 95% of the installed base, so that’s pretty good.

More soon…