out of confusion

attempting to make sense out of a crazy world.

Retina iMac Reality

I was wrong when I posted some time ago about Retina display iMacs.

I assumed Apple would try to release them as soon as possible, and that pixel doubling the 27" iMac would be too hard.

Instead, Apple waited until they could “do it right” and released a pixel-doubled 5K iMac, despite having to use crazy technology internally to get it done.

In retrospect this seems obvious - Apple doesn’t care about being first, they care about waiting until they can release a product the “right way.” But I was overly enthusiastic.

Two Things

or What’s Wrong with Facebook?

thefacebook.com timeline

This is a sanitized screenshot of my Facebook News Feed. What do you see? Your news feed probably looks pretty similar. I’m on a desktop monitor, not a mobile device, and I see two things: Heinz Ketchup and an article from Rolling Stone.

Two things. That’s all I can see when I load the page.

And neither of those things is a picture one of my friends uploaded, or a status update, or anything that really has to do with Facebook. They both link outside the website. And it’s content I don’t really care about; it’s only there because someone clicked “like,” or commented on a website, or otherwise “engaged” with an outside brand or piece of content.

Facebook used to be about what your friends are doing. How they’re feeling. And their content - pictures, notes, etc.

The problem with Facebook is that it’s not Facebook any more. Most of the content I see links outside the website and isn’t about my friends.

Here’s what I care about:

  • Events
  • Photos
  • Status updates
  • Conversations

Sure, events show up, but they are not front and center. Messaging works but is tucked in the side. Photos are mixed in with all this stuff I don’t care about, and have to scroll past. Status updates and likes now also have their own miniature feed on the side.

Facebook is spending most of the real estate promoting other websites and services. And mostly it’s commercial brands, external articles, and news. Worst of all, you only see a couple things on a page at a time.

I see two things on a page. These things are not info on my friends. They are commercial content. And they encourage me to leave Facebook.

Maybe my news feed is different from yours. Maybe I need to unfollow all these people who click “like” or “share” on these links and brands and articles. But my guess is that most people see something pretty similar. And it’s not hard to see why young teens are leaving Facebook if what they see is mostly commercials encouraging them to visit other sites or brands.

Uncool.


Two notes.

First, the original version of this post incorrectly confused the timeline and the News Feed. They shouldn’t be confused, and that’s my mistake.

Second, I scrolled down a bit in the screenshot because the landing page was actually worse: I have injudiciously followed a few brands and news sites like NPR, which litter my news feed needlessly with more trash. That’s still Facebook’s fault, but we share the blame in that case, so I scrolled to a point in my news feed where the content came from friends, not pages or brands.

What’s the Apple Equivalent?

Google bought Nest for around $3.2 billion. That’s a lot of money, but Google has a lot of money. A lot of people think that Google and Apple are in a race to see who can get better at the other’s competencies first. There’s no denying Google and Apple are competing directly, but they do things differently. The thinking goes: Apple is good at products, design, hardware, and retail, whereas Google is good at scale, web services, backend, and advertising. If Apple gets good at web stuff faster than Google gets good at design and hardware, Apple will beat them. If Google gets good at design and hardware faster than Apple can become good at web and scale, then Google will win.

I don’t think the situation is quite so binary (and neither do most commenters online, to be fair to Gruber). There’s Twitter and Facebook, and even Amazon, Uber, and Square hanging out as potential or real competitors in various ways (for example, Square and Amazon compete with Google Wallet and iTunes in certain respects). But there’s a lot of truth to the statement that Google and Apple are trying to copy the “good parts” of each other.

Nest is popularly known as CEO Tony Fadell’s personal vision post-Apple. He is also popularly known as the “father of the iPod.” Nest has a lot of great people, and the iPod team had a lot of great people beyond Fadell too. It’s not as simple as “the iPod comes to Google.” But it’s a fair thing to consider. Google obviously thought Nest had a good product. They probably had decent revenue ($110-120 million maybe? Rumors are hard to quantify). And they have a good team. But $3.2 billion isn’t the value of a thermostat and smoke alarm (obviously), and it also isn’t the value of the team. Some individuals online have argued that Google is going to monetize the data stream coming back from these internet-of-things devices in people’s households. Nest put out a press release saying Google won’t do that (for now).

But let’s say Google saw Nest as a piece of Apple they could buy, that would instantly give them a great leg up in hardware and design. Buying an actual piece of Apple is, of course, impossible, but this may be the next best thing. And it’s probably a good buy if it moves Google ahead in those areas.

What’s the equivalent purchase for Apple? Dropbox?

I think that’s an interesting question.

Photography and the iPad

The Problem is Transport

I recently had a brief Twitter conversation with Scott Williams (swilliams), which he was kind enough to follow up with a blog post on the topic of photography and the iPad. I think there are a number of problems with the current ecosystem of tools and apps, and no good solution for photography on the iPad. I think there are a few use cases which Apple could target, or more realistically third parties could address, with future hardware, online services, and/or changes to the OS in iOS 8. I also think that the possibility of a larger-format iPad (12-13”) might form part of Apple’s motivation for making some of these changes. Apple likes to introduce new hardware with new, impressive applications that show it off. A “pro” iPad might merit some “pro” apps.

PhotoStream

First, I’ll address the immediate and most obvious problem, or distinction, in use cases. Some users have cameras; other users have iOS devices with cameras. The experience of getting a photo onto your iOS device is very different (obviously) depending on whether you use your iOS device to take the photograph, or whether you use an unconnected device. If you have an iPhone or iPad, your photographs will appear by default in PhotoStream. PhotoStream is confusing and has spurred a number of back-and-forth discussions in the tech press. But fundamentally PhotoStream is positioned by Apple as a “your pictures everywhere” temporary transport system for your images. PhotoStream is not backup. PhotoStream is not archival. PhotoStream is not storage. It is transport. PhotoStream gets your images onto your device.

For users who take their photos somewhere else, however, PhotoStream right now is not a good solution. This means everyone who downloads their images from point & shoot cameras, digital SLRs, or whatever device they own has to jump through hoops to get the photos on their iPad or iPhone. Yes, if you download the images from your camera onto your Mac, and then upload them to iPhoto, they will appear in PhotoStream. I don’t know anyone who does this regularly.

The problem is compounded for pro users or home users with “prosumer” desires. RAW-format images, and RAW conversion, are not handled in a good way by PhotoStream. Ideally, PhotoStream would upload the RAW file, metadata tags for any adjustments or settings, and possibly then the converted JPEG image. However, PhotoStream will only store the RAW file if you haven’t made any adjustments. If you make adjustments, PhotoStream instead stores only the JPEG preview image, which is useless for any workflow centered around RAW images. Worse yet, if you have a camera and take a considerable number of images, and only download them occasionally, it appears that PhotoStream will delete the oldest images – even if those images are newly uploaded. So if you upload a few hundred photos which are a month or two old, they might already be past the 1000-image limit and end up immediately pushed out of PhotoStream, since there are newer pictures. This behavior is documented on Apple’s support forums, but I haven’t reproduced it personally.
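
To make that reported eviction behavior concrete, here’s a toy model in Python. This is my own sketch of the rule as described on the support forums (newest capture dates win), not anything Apple documents:

```python
def photostream_after_upload(stream, new_photos, limit=1000):
    """Stream contents after an upload: keep the `limit` newest capture dates."""
    combined = sorted(stream + new_photos, reverse=True)  # (date, name) tuples
    return combined[:limit]  # everything past the limit is evicted

# 1000 recent phone shots already in the stream...
recent = [("2014-01-01", f"phone_{i}.jpg") for i in range(1000)]
# ...then a freshly uploaded shoot from two months ago.
old_shoot = [("2013-11-15", f"dslr_{i}.cr2") for i in range(300)]

kept = photostream_after_upload(recent, old_shoot)
print(sum(1 for date, _ in kept if date.startswith("2013-11")))  # 0: all evicted
```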

The Computer

All this discussion ignores the fundamentally frustrating problem of actually downloading images from the camera to the computer. It’s a tedious, annoying process. But maybe it won’t go away. Theoretically this is a distinct job-to-be-done that is separate from the iOS photo app question. There are current wifi SD-card products, wifi-battery-grips for digital SLRs, and other solutions which attempt to address the question of uploading pictures, but these all require a nearby computer when the photograph is taken. An LTE cellular data connection on every camera body would address this problem, but only if manufacturers could agree together on some shared standard (Android?) for applications, or some shared platform (Dropbox?) for storing the images. This isn’t going to happen, and wouldn’t be cost effective or technology-effective in most cases (cellular technology changes faster than digital SLR or point & shoot camera technology, and you might easily end up with an obsolete and unsupported Android-based camera with great lenses and a usable sensor).

Scott Williams posits a “camera wireless connector” of some kind on his blog, which would upload each photo immediately, as taken, to the nearest iOS device (or the app, directly). I don’t see this as a working solution. First of all, the hypothetical photo app would fundamentally be an iPad app, and you might not have your iPad around with you when taking pictures. Second of all (as swilliams notes), pairing a camera with an iPad is hard on a hardware level and software level. Cameras mostly do not have wifi or Bluetooth, so you would need an adapter card. Then you’d have to create an ad-hoc network somehow, and join both devices. Finally, the app would have to be open, and accept data from the camera. Background app restrictions make this final piece annoying as things currently stand in iOS. I believe the computer is a frustrating, but still necessary piece of the puzzle.

Thinking with Portals

If we focus, then, on the computer as the portal through which images must flow to the app, a simple “camera gateway” photo uploader application seems like an obvious solution. This tool would function like Apple’s Image Capture, and simply upload images, whether RAW, JPEG, or another format, into a cloud platform. This cloud platform could be PhotoStream, or it could be a third-party service. Currently, Image Capture does not support PhotoStream.
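
As a sketch, the gateway could be as simple as a script that sweeps a mounted card’s DCIM folder and pushes everything to the cloud platform. The endpoint here is hypothetical, and a real tool would also track what it has already uploaded:

```python
import pathlib
import urllib.request

UPLOAD_URL = "https://photocloud.example/v1/ingest"  # imaginary service
IMAGE_TYPES = {".jpg", ".jpeg", ".cr2", ".nef", ".raf"}  # JPEG plus some RAW types

def upload_card(mount_point):
    """Push every image under the card's DCIM folder to the cloud platform."""
    for path in pathlib.Path(mount_point, "DCIM").rglob("*"):
        if path.suffix.lower() in IMAGE_TYPES:
            req = urllib.request.Request(UPLOAD_URL, data=path.read_bytes(),
                                         headers={"X-Filename": path.name})
            urllib.request.urlopen(req)  # supplying data makes this a POST

upload_card("/Volumes/EOS_DIGITAL")
```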

There’s another job-to-be-done, though, and that is the need for archive and storage of images. EverPix tried to provide a service like this, and there are other competitors, but ideally this service would store full resolution, RAW or JPEG images, without additional compression, permanently. This application and cloud platform would perform the image transport job and image archive job simultaneously. I’d like to see Apple do this, but I don’t have any confidence that Apple cares about this market. Storing this kind of data permanently also gets expensive very quickly, and the cost would almost certainly have to be passed along to users in the form of a recurring monthly or yearly subscription payment. Apple does not have much interest in this kind of model. Dropbox (or a similar file-storage service) could be used, even as it works today, for this, though it would quickly become pricey for the average user. Overall there is no good solution to get bulk images from external cameras into a cloud platform for long term storage and use on iOS.

This brings me then to the question of the app itself. Unfortunately the above digression was necessary because I will now pretend that such a cloud photo ingestion, storage, and archive platform exists. It doesn’t.

What’s in an App?

Jobs-to-be-done

I imagine a solid iPad app handling several photo-related tasks. The jobs-to-be-done would mostly overlap with the current feature sets of programs like Adobe’s Lightroom or Apple’s Aperture. This app wouldn’t be a companion to either program; it would (ideally) be a replacement for a computer-based photo workflow. Again, we’re assuming the problem of transport is solved somehow.

These are the basic jobs in my mind:

  • Sorting and grouping images
  • Choosing good images (some call this “editing” or making “selects”)
  • Making RAW adjustments and conversion
  • Simple photo editing like contrast, color, sharpness, curves
  • Exporting images or sharing them

There are other jobs which people might want the app to do. Tagging images and marking them with metadata is very important for some workflows. Facial identification matters to some people. Creating a website or gallery matters to people. Printing matters a lot to certain users. But I think the jobs above would be the basic requirements of the app.

The Photos app on iPad currently doesn’t do these jobs. iPhoto on iPad doesn’t do these jobs. There are a number of photo-editing iPad apps, but no good all-around tool to manage these things that I have ever run into. Admittedly, this is mostly because the first problem, the problem of transporting and storing images, holds apps back.

I said I would assume that we have the cloud platform solved, but let me go back briefly and talk about why current apps can’t do these things. Without getting too technical, iOS holds them back. Apps can’t delete or mess with the Camera Roll or PhotoStream. Apps can add photos, but can’t replace them (iPhoto can edit, but it’s a special case). Apps can’t easily create a shared pool to move images between themselves. So each app is an island, and they can’t solve the bigger picture workflow problems.

But let’s say this cloud service works, and works really well. You could use it instead of the native Camera Roll or PhotoStream, and avoid the above issues. All you’d need to handle photo capture on iOS (from iPhones or iPads) would be an uploader/connector app on the phone or iPad. Dropbox and others already offer this functionality.

Workflow

Many people don’t manage their photos. But this doesn’t mean they don’t want to - it’s just that the current tools are bad, confusing, and time consuming. Apple’s started to recognize this, and the iOS 7 photos application is an improvement in that it lets you see thumbnails and zoom in/out of them chronologically. Events in iPhoto on the Mac were another attempt to deal with this problem of organization. Images need to be grouped into logical groups, whether you call them “events,” “albums,” “folders,” or something else. Tagging might also solve this problem, if implemented correctly (and I think tagging is harder than folders to get right, and mostly more confusing for users).

Space matters for this. The iPhone is too small to do much sorting and dragging. I would argue even the iPad mini is effectively too small for this. I think a large-format iPad could shine in this area. Retina displays and room to see many thumbnails or small images together let people drag or quickly flick images around. I think Light Tables in Aperture don’t make a lot of sense (and I’ve never understood what they’re really meant for), but that kind of approach might work on a large iPad. Or there might be another way. But sorting and grouping matters, and I think it’s one of the key things people need to do. Sometimes an “import session” might be a logical grouping, and maybe the portal application could handle some of this automatically based on import and other metadata, but I think there needs to be a manual process, and a good user interface.

Making selects, or choosing images, is the second most important part. More important than editing images. You may have 10-15 images in a burst or short succession, and you only want to work with one. Apple tried to do this with stacks, and sort of had some success, but they really nailed it with the burst mode in iOS 7. Combining the burst mode picker from iOS 7 with some kind of rating system or choosing system would be fantastic.

Everyone chooses images differently, but when I’m working with an album I usually use the “comb” approach. I’ve heard this described different ways, but it works like this (a rough code sketch follows the list):

  • First, go through an album and tag anything that’s decent (you might use two/three stars, or some other method). If some are really good then tag or rate them appropriately at this point.
  • Next, looking only at the “decent” images (three star or better smart album, for example), make a pass to ensure they’re all up to snuff. Inevitably some need to be revised down and will pop out. If some outshine the pack, revise their ratings upwards.
  • Now, make another group of only the better images (e.g. a four star+ smart album) and go through them with the same logic - revise up and down as necessary.
  • Finally, make a “top” or “selects” album and you should have the best images. Go through and purge any that don’t make the grade. Usually there is a rough number target I’m trying to hit for how many “good” images I need to output. Sometimes that number is tiny, like 5-6. Usually it’s around 50 or so (starting from hundreds). Adjust ratings as appropriate.
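
Here’s the comb in schematic code form - my own sketch of the passes above, with the human judgment stubbed out as a rating function:

```python
def comb(images, rate, thresholds=(3, 4, 5)):
    """Successive rating passes over a shrinking pool of images."""
    ratings = {img: 0 for img in images}
    pool = list(images)
    for threshold in thresholds:
        for img in pool:
            ratings[img] = rate(img, ratings[img])  # revise up or down
        pool = [img for img in pool if ratings[img] >= threshold]
    return pool  # the "selects"

# Toy usage: pretend a precomputed score is the only criterion.
photos = {"IMG_%04d" % i: i % 6 for i in range(200)}
selects = comb(photos, rate=lambda img, old: photos[img])
print(len(selects), "selects from", len(photos), "images")  # 33 from 200
```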

The only real issue with the “comb” method occurs when you need things like a photo of each person in a group (so you have to make sure at least one picture of each person “makes the grade”), or certain short timed events. Professional photographers have many more use cases and workflows for selecting images, but making selects is by far the most annoying and time consuming part of the process for me. I put it off. I procrastinate. I don’t like to do it.

One of the caveats for this process, of course, is that you’d need to be able to zoom in and out of the image, or use a loupe of some kind, to assess sharpness and detail. A larger iPad with a higher-resolution display would help, but there are no current monitors which can display images from even consumer digital SLRs at full resolution. Canon’s Rebel T3i (600D) costs $469 on Amazon and takes 18MP images. The file pixel dimensions are 5184 x 3456. That’s more than double the resolution of the Retina-display iPad Air in every dimension. A hypothetical larger iPad would probably not have the screen size or pixel density to show that at full 1:1 pixel resolution.

Editing is fun. It can be work, it can be tedious, but you take something and improve it. And it can be done in chunks. Organizing and selecting, on the other hand, is like working with email (think Inbox Zero) - it requires self-judgment and criticism. I believe this is the hardest part of the application to get right (though obviously not the only important part). It’s a frustrating and tedious job, so the user experience should try to make it a bit less bad.

Adjustments and Editing

RAW adjustments are not easy on iOS. Top-tier RAW converters number only a handful, and this is probably the most difficult part of the application from a technical perspective. New cameras come out all the time. Individual lenses can require different conversions and settings. Maybe just let Apple handle this somehow (but how? in the gateway uploader?). I don’t have any good ideas about the RAW bottleneck, but I know people aren’t going to accept JPEG-only. I wouldn’t. I think the only hope here is a cloud-based backend for processing RAW. Otherwise, the app would always have to know about new cameras, and would have to bring a lot of processing horsepower to the table.

Ideally, the metadata describing how the RAW conversion occurred would be stored, available, and editable at any time. In a cloud-based system, maybe the app only works with JPEG, but there’s a “preview” of the RAW edits, and a cloud server delivers a new JPEG to the app any time the metadata changes. You play with some sliders, see a preview, and hit “develop” to get the full-resolution JPEG output. This is ugly, but people usually convert from RAW in batches. Maybe the cloud service could “learn” over time (privacy implications notwithstanding) how you, or even everyone, likes to convert pictures from certain camera/ISO/lens combinations. Excepting certain over- or underexposed images, people could play with settings for a shoot on import, batch convert, and then work with individual photos in the app.
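
In code, an edit would be nothing more than a small settings record, with the service rendering JPEGs from it on demand. A sketch, with a hypothetical endpoint and field names:

```python
import json
import urllib.request

def develop(photo_id, settings, base_url="https://photocloud.example/v1"):
    """Ask the (imaginary) cloud service to render `photo_id` with `settings`."""
    req = urllib.request.Request(
        f"{base_url}/photos/{photo_id}/develop",
        data=json.dumps(settings).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # full-resolution JPEG bytes

# The RAW file never leaves the cloud; only this small record travels.
jpeg = develop("IMG_5312.CR2", {"exposure": +0.3, "wb": 5200, "nr": "low"})
```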

Other editing is more straightforward. Non-destructive editing isn’t necessary, but reverting to the original image is always going to be required. Just keep two copies of everything. Remember, I’m positing a magical cloud service that handles and archives everything!

There are some editing wrinkles, of course. People will want to edit during the selection and organization process. There will have to be some brush-based editing tools. And people will want to apply certain global edits (noise, sharpness, blurring) with painted masks or gradients.

Well, that’s about it. It isn’t easy, and it’s not simple, but I think there’s a potentially large market for a great iPad photo organization and editing application, and a service to back it.

What’s it worth?

I would pay upwards of $100 a year for a lossless cloud photo service that integrated well with my iOS devices (even without this application), even if it required a gateway or portal application on a Mac. However, some people might not pay this. Professional photographers already have solutions. Consumers are conditioned these days to expect online services to be free. So maybe it wouldn’t work out.

I would also pay $100 for this application, every two years. This is more than Aperture or Lightroom cost on the Mac. I think both Aperture and Lightroom leave a lot to be desired, and would prefer that the tedious task of making selects be performed on an easier platform such as an iPad. Again, I think consumers are trained not to value software on iOS, but in this case I believe professional photographers would be willing to pay for this since anything that saves a significant amount of time saves a significant amount of money.

In the end, I think iOS and Apple’s mobile hardware probably aren’t ready to handle the workflow, file sizes, and RAW conversion, so we won’t see this kind of application. And the App Store might not support it or the business model necessary (remember, EverPix went out of business, and they only stored lossy compressed images). Maybe if we see an iPad Pro with a larger screen at the end of 2014, along with changes to what apps can do in iOS 8. But maybe not.

I do think it is inevitable that the photo workflow will go mobile eventually. It’s just a question of time. I hope someone gets this right sooner, rather than later, though.

iOS outnumbers everything else on in-flight wifi

Gogo just released their new wifi usage numbers.

There’s obviously a few caveats to the numbers, chief among them the fact that this only really represents people who fly and are willing to pay for in-flight wifi. Gogo is also not available on international flights.

I use Gogo when I fly. Business travelers are fairly heavy users.

Here’s their infographic. I thought there were two important numbers. First, despite a slight shift towards Android since 2011, iOS still dominates the mobile numbers, at 84% usage. Second, mobile represents 67% of all connections.

Combined, this means iOS represents around 56% of all connections (0.84 × 0.67 ≈ 0.56). Put simply, iOS connections on Gogo outnumber all Windows PC users of Gogo, all Mac users, and all Android users, combined. A majority of all users are accessing in-flight wifi on iOS devices.

That’s a big deal.

Now, there’s a bit of a post-script here. First, there are different prices for mobile usage of Gogo versus laptop/PC usage. Second, it’s a lot more convenient to use a mobile device during a flight compared with digging out your laptop from under a seat or in an overhead bin. And battery life constraints, particularly for a lot of business-type PC laptop users, may greatly restrict laptop usage during flights (most planes still do not have free power).

But a majority of devices being iOS is still a big deal.

CoreStorage and the Fusion Drive

The Fusion Drive

Following today’s Apple Event, there’s been some discussion of the new “Fusion Drive” Apple is featuring in the new iMac models. To clarify, the Fusion Drive is a combination of a fast SSD and a large traditional platter-based hard disk drive. Initially many commentators speculated this was simply Apple rebranding or reinventing the concept of a “hybrid drive,” a technology which many hard drive makers have tried (with varying degrees of success) to push to bridge the gap between the performance of solid state drives and the enormous storage capacity of traditional hard disk technology.

The Fusion Drive is not this kind of “hybrid drive.” Apple hasn’t explained much yet beyond what was contained in their presentation, but the presentation made clear that there are two physically separate drives, “paired together” through software.

This is almost certainly done through the magic of CoreStorage.

CoreStorage

One of the most exciting things happening under the hood of OS X in the last couple of years has been the advent of CoreStorage. Apple’s filesystem of choice, HFS+, is old. This is simply the truth. It’s not as modern, or as fancy, as some competing solutions. It does a good job for most users, but Apple’s clever engineers had to jump through some hoops to do things like Time Machine which, on some competing filesystems, might be nearly built-in. The thing is, changing filesystems is hard. Applications are written expecting certain behaviors and break when those behaviors change. It’s confusing for consumers.

CoreStorage is part of the answer to Apple’s filesystem questions.

Apple’s approach is to abstract the filesystem one step away. It’s a different idea - instead of writing or finding a new filesystem and converting everything over, you abstract things one level up and you can start playing fast and loose with what’s happening on the disk, with applications and users none the wiser. CoreStorage is a volume manager. It’s not a filesystem, and it’s not a disk - it’s what sits between the two.

Most people interact with CoreStorage, probably without knowing it, through Apple’s FileVault 2 whole-disk encryption. CoreStorage allows the disk to be encrypted piece by piece while you keep working, and it’s part of the reason speeds are so good once the disk is fully encrypted and running.

Lots of platforms have volume managers. CoreStorage isn’t visibly used for much… yet. But the Fusion Drive points the way Apple is heading. Doing clever things with hardware and using CoreStorage to make them seamless doesn’t require breaking compatibility. And it’s just the start.

How the Fusion Drive Might Work

I don’t have any details yet, but my guess is Apple has created a logical volume group using a new feature of CoreStorage that sits somewhere between JBOD and RAID. This doesn’t appear to be in the released OS X 10.8.2 build, but there have been undocumented CoreStorage commands before, and it could already be squirreled away somewhere.

CoreStorage is already able to write out pieces of a filesystem and move them around; being able to reposition data after operations is critical to the FileVault2 creation process that lets you work while it’s going on. Joining two disks together and using them in a volume group was already possible with CoreStorage, though the whole magic of the Fusion Drive certainly was not, to my knowledge.

Tracking how often you hit something isn’t part of CoreStorage as far as I’m aware, but it would have to be integrated into either the volume manager or (perhaps more likely) the OS. If the OS or volume manager marks blocks of files (representing an application, or a media file, or a library, etc.) as being used “more often,” then the volume manager could easily move them to the “fast” designated physical volume portion of the logical volume group. Right now that’s the SSD.
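
Here’s that guess as a toy model - pure speculation about Fusion Drive’s internals, but it shows the shape of the idea: count accesses and keep the hottest blocks on the small fast tier:

```python
from collections import Counter

class TieredVolume:
    """Toy two-tier volume: a small fast set backed by a big slow disk."""

    def __init__(self, fast_capacity_blocks):
        self.fast_capacity = fast_capacity_blocks
        self.heat = Counter()  # block id -> access count
        self.fast = set()      # blocks currently promoted to the SSD

    def read(self, block):
        self.heat[block] += 1
        self._rebalance()
        return "ssd" if block in self.fast else "hdd"

    def _rebalance(self):
        # Real migration would happen lazily in the background.
        self.fast = {b for b, _ in self.heat.most_common(self.fast_capacity)}

vol = TieredVolume(fast_capacity_blocks=2)
for b in [1, 1, 1, 2, 2, 3]:
    vol.read(b)
print(vol.read(1), vol.read(3))  # -> ssd hdd
```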

But this could be something else in the future. Imagine a volume group with block level storage on the network and locally. Files get used more, they move local. Used less, they are “archived” automatically by CoreStorage on the network. It would obviously require a persistent connection for things to make sense (you can’t lose that OS library you rarely use if the network goes out) but it’s a fascinating concept.

I’m excited, and I can’t wait to see how this works.

Retina iMac Speculation

Retina iMacs

In anticipation of the upcoming Apple event, speculation has swirled (to varying degrees) about the possibility of Retina iMacs.

While I personally have no information on what Apple will, or will not, release, I think some very simple math on the concept and definition of a “Retina iMac” is worth exploring.

Retina and Pixel Doubling

First of all, Retina and Pixel Doubling are two distinct concepts. While the iPhone and iPad both received exact pixel-doubled screens, the Retina MacBook Pro did not exactly receive the same treatment.

Yes, the resolution of the MacBook Pro with Retina display (Apple’s terminology is a mouthful; hereafter rMBP) is 2880 by 1800, or exactly double 1440 by 900. However, the “traditional” (for lack of a better term) MacBook Pro offered a 1680 by 1050 display as well - a choice almost every “professional” or “power user” selected. If pixel doubling were really the Only Way, we would see a shockingly high resolution of 3360 by 2100 available as a CTO option for the rMBP. This is not the case.

In fact, I’m typing this very post on a 13" MacBook Air, which has that same 1440 x 900 resolution. Would a theoretical 13" “MacBook Air with Retina display” offer the same 2880 by 1800 resolution as the 15" rMBP? Likely not.

Sufficient versus Necessary

Pixel doubling is therefore sufficient to achieve Retina resolutions, and is and was convenient for the iPad and iPhone, but it is not necessary.

So what is necessary?

This is where it becomes slightly complex. Apple’s definition of Retina is not completely explicit, but in essence boils down to “When the average person, at the normal viewing distance, can’t see individual distinctions between pixels.”

It doesn’t mean no person can distinguish pixels at the normal viewing distance, and it doesn’t mean an average person can’t distinguish pixels at any arbitrary viewing distance.

This excellent article on Retina mathematics by Richard Gaywood on The Unofficial Apple Weblog goes into the mathematics in-depth, and is definitely recommended reading. As he notes:

The usual figure quoted in the literature for 20/20 vision is that the eye can tell the difference between two lines that are more than one arc minute apart – i.e., 1/60 of a degree.

What this boils down to, simply, is a straightforward calculation involving viewing distance, 20/20 vision, and the resolution necessary to achieve Retina status.
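
The acuity math is easy to check. A minimal Python sketch (my own, but it reproduces Apple’s shipping numbers):

```python
import math

ARC_MINUTE = math.radians(1 / 60)  # the 20/20 acuity threshold

def retina_distance(ppi):
    """Farthest distance (inches) at which `ppi` still qualifies as Retina."""
    return (1 / ppi) / math.tan(ARC_MINUTE)  # pixel pitch subtends 1 arcminute

def retina_ppi(distance):
    """Minimum ppi needed to qualify as Retina at `distance` inches."""
    return 1 / (distance * math.tan(ARC_MINUTE))  # inverse, used in later charts

for name, ppi in [("iPhone 4/4s", 330), ("The New iPad", 264), ("Retina MBP", 220)]:
    print(name, round(retina_distance(ppi), 1))  # 10.4, 13.0, 15.6 inches
```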

Viewing Distance

Viewing distance is where things get a bit more complex. With a phone or tablet, we can already compute Apple’s “intended” Retina viewing distance by assuming they intend you to view the device at the maximum distance at which the assumptions behind Retina still hold. In other words, the farthest distance from the device at which it is still Retina.

For the iPhone 4 and 4s, with a ~330ppi display at 3.5" diagonal, that viewing distance is around 10.4". For the New iPad, a ~264ppi display at 9.7" diagonal, the Retina distance is roughly 13".

And for the only laptop Apple has thus far released with a Retina display, the ~220ppi display at 15.4" diagonal yields a Retina distance of around 15.6".

I’ve charted below the approximate values for all three of Apple’s current Retina products:

Product      | Screen | Display  | Distance
iPhone 4/4s  | 3.5"   | ~330 ppi | 10.4"
The New iPad | 9.7"   | ~264 ppi | 13.0"
Retina MBP   | 15.4"  | ~220 ppi | 15.6"

We can see two trends here:

  • As the display gets larger, the Retina viewing distance increases.
  • As the display gets larger, the ppi decreases.

In fact, with the rMBP, the Retina viewing distance is already equal to the screen size.

HDTV Viewing Angles

The iMac has never truly been a television, or a television-set competitor, but nonetheless it is often used to watch television and film content. From Wikipedia:

THX recommends that the “best seat-to-screen distance” is one where the view angle approximates 40 degrees…. Their recommendation was originally presented at the 2006 CES show, and was stated as being the theoretical maximum horizontal view angle, based on average human vision… This equates to multiplying the diagonal measurement with about 1.2.

This can be thought of, in a sense, as a minimum viewing distance - if you get any closer, you’re not going to be able to see the whole screen at one time. We noticed previously that while the ppi of Retina displays decreases with size, the viewing distance increases. For the rMBP, the 1.2x minimum viewing distance (18.5") already exceeds the Retina distance (15.6"), implying that most individuals view the screen from a position farther away than the Retina resolution requires.

It is worth considering that for the iPhone 4 and 4s, this distance would be 4.2", which seems ludicrously close for viewing iPhone content (though not, perhaps, a motion picture in landscape mode). For the iPad, this corresponds to 11.6", which is still somewhat closer than the distance at which most people likely view iPad content. For the iPhone and iPad, however, these 1.2x distances are closer than the Retina distance. For the rMBP, this distance is farther than the Retina distance.

In other words, if you fill your field of view with the content on a rMBP, it will all appear Retina, whereas for the iPhone and iPad, at fill-your-vision distances pixels might still be visible.

This observation squares well with experiences in the field - to me, personally, the rMBP seems “magically sharp,” even more than my iPhone or iPad. Others have likewise commented (I seem to recall Gruber stating this on an episode of the Talk Show) that the rMBP feels the sharpest at their personal normal viewing distances.

Amending the previous chart:

Product      | Screen | 40° Distance | Retina Distance
iPhone 4/4s  | 3.5"   | 4.2"         | 10.4"
The New iPad | 9.7"   | 11.6"        | 13.0"
Retina MBP   | 15.4"  | 18.5"        | 15.6"

The MacBook Pro with Retina display is the first product where the Retina Distance is less than the fill-your-vision 40° distance recommended by THX. Apple went ahead and used, roughly, the size of the screen as a guide for the Retina Distance.

We can therefore use the 40° angle (1.2x diagonal display size) and the size of the screen itself (1.0x diagonal display size) as the bounds for viewing distance on a Retina iMac. Most people’s viewing distances will likely fall somewhere between these two bounds. If watching content that spans the whole screen, perhaps people will move back to 1.2x, and if looking at a portion of the screen they will perhaps move closer, as close as the diagonal screen size, or even closer.

Aspect ratio

I will confess to glossing over the various differing aspect ratios above, as well as throughout the rest of this piece. Suffice to say, I am ignoring the effect aspect ratio has on viewing distance, and assuming that any possible iMacs would all feature the same 16x9 or 1.77778 aspect ratio of the current iMac line.

I think this is reasonable, and it simplifies the process of thinking about the question of Retina iMacs.

PPI and Display Manufacturing

Apple, in the modern “Tim Cook era” has been extremely conservative with manufacturing, and likes to execute on tried-and-true processes. They reuse technology, test processes in low volume products (e.g. die-shrunk AppleTV) and don’t like to take unnecessary risks in unproven manufacturing processes.

Many speculate that an upcoming iPad Jr. (credit to Dan Benjamin at 5by5.tv) would have a screen size of around ~7.8 inches, and reuse the extremely mature LCD fabrication process of the iPhone 3GS, simply cut into larger panels.

It’s worth exploring this as a concept for Retina iMacs.

There are two products that stand out as already having very high ppi for use in Retina iMacs.

The first, most obviously, is those iPhone 3GS screens. At 164.8 ppi, they would offer a huge resolution increase over the current 27" iMac’s display density of 108.8 ppi. However, while it might be possible to cut these displays in 7.8" iPad Jr. sizes, it may be less feasible to scale them up to 21.5", 24", or 27".

The second would be another, less-explored display: that of either Apple’s 11" MacBook Air or the iPad 1/2. These displays have pixel densities of ~135.2 ppi and ~132 ppi, respectively. Either of these displays could serve in a Retina iMac, and the choice would likely come down to whichever display process was more mature. Apple has shipped far more iPads than 11" MacBook Air computers, and the scale-up of either display would likely present a similar challenge; scaling up to 27" from 11" is not likely to be more difficult than scaling up from 9.7".

Thunderbolt and Bandwidth

The rMBP has a display density of ~220.5 ppi, but this is simply not feasible at iMac sizes for the simple reality of video bandwidth. Ignoring the cost issue, a 27" Retina iMac built with a display density of 220.5 ppi would require a resolution of around 5188 by 2920. That is 15,148,960 pixels, or around 15.1 Megapixels.

This Stack Exchange discussion indicates that (rephrased) Thunderbolt allows 20Gbps for a display, which at 32bpp and 60Hz yields around 10 Megapixels.

Internally, however, Thunderbolt currently uses DisplayPort for sending the video signal to monitors. Assuming Apple is using 4 lanes for the forward-link DisplayPort inside Thunderbolt, it can transfer up to 17.28 Gbps for a monitor. (From Wikipedia’s article on DisplayPort)

Wikipedia also suggests 30bpp as the correct display depth, or even 24bpp, since no alpha channel is necessary for a display and the standard computer display depth is 8bpp per RGB channel, with “large gamut” displays offering 10bpp per color channel.

We can therefore divide 17.28 Gbps by 30bpp and 60Hz to get around 9.6 Megapixels, or, at 24bpp, around 12 Megapixels.
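
The arithmetic, as a quick sanity check:

```python
def max_megapixels(link_gbps, bits_per_pixel, refresh_hz=60):
    """Pixels per frame a video link can carry, in megapixels."""
    return link_gbps * 1e9 / (bits_per_pixel * refresh_hz) / 1e6

print(max_megapixels(17.28, 30))  # 9.6 MP at 30bpp
print(max_megapixels(17.28, 24))  # 12.0 MP at 24bpp
```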

All these numbers, regardless of calculation, are significantly below the 15.1 Megapixels a 220 ppi Retina iMac would require.

It seems logical that Apple would want to stay within the engineering constraints of their current iMac line, and at the same time be able to produce a Retina Thunderbolt Display. Thunderbolt target display mode, external Retina monitors in the iMac form factor, and other engineering constraints would all seem to point to a display below the limits of current Thunderbolt/DisplayPort technology.

Furthermore, it is simply not necessary to use a 220 ppi display to achieve retina resolutions at iMac screen sizes.

It is therefore prudent to assume that whatever display density Apple elects to build for a theoretical Retina iMac, they will stay within this ~10 Megapixel limit of thunderbolt.

Two Possible iMacs

From the previous section on PPI, it seems likely Apple would use a display at either 132 or 164.8 ppi. Personally, it is my inclination that for cost and manufacturing reasons, the iPad 2 screen process would be used, but it could admittedly go either way. What kind of screens would we see with these display densities?

Recall from the previous chart:

Product      | Screen | Display  | 40° Distance | Retina Distance
iPhone 4/4s  | 3.5"   | ~330 ppi | 4.2"         | 10.4"
The New iPad | 9.7"   | ~264 ppi | 11.6"        | 13.0"
Retina MBP   | 15.4"  | ~220 ppi | 18.5"        | 15.6"

Now, since the viewing ranges we have (from 1x display size to the 1.2x THX 40° angle) are fixed, we can make some tables and see what combinations of ppi, viewing distance, and monitor size qualify for Retina status.

So for iMacs at 132 ppi (iPad 2 display density):

Display | Viewing Distance | Retina ppi | % of Retina
27"     | 27.0" (1x)       | ~127 ppi   | 104% *
27"     | 32.4" (1.2x)     | ~106 ppi   | 124% *
24"     | 24.0" (1x)       | ~143 ppi   | 92%
24"     | 28.8" (1.2x)     | ~120 ppi   | 111% *
21.5"   | 21.5" (1x)       | ~160 ppi   | 83%
21.5"   | 25.8" (1.2x)     | ~132 ppi   | 100% *

I’ve starred the ones which qualify under our definition as Retina, but to summarize: the 27", thanks to its extended viewing distance, qualifies as Retina regardless. All other display sizes require the 1.2x 40° viewing distance to qualify as Retina.

Now, if Apple were instead to use the higher 164.8 ppi (iPhone 3GS display density):

Display | Viewing Distance | Retina ppi | % of Retina
27"     | 27.0" (1x)       | ~127 ppi   | 129%
27"     | 32.4" (1.2x)     | ~106 ppi   | 155%
24"     | 24.0" (1x)       | ~143 ppi   | 115%
24"     | 28.8" (1.2x)     | ~120 ppi   | 138%
21.5"   | 21.5" (1x)       | ~160 ppi   | 103%
21.5"   | 25.8" (1.2x)     | ~132 ppi   | 124%

In this case all theoretical iMacs, at all viewing distances, meet and in some cases drastically exceed the required Retina ppi.

So if Apple meets the standard set by the 15" MacBook Pro with Retina display, which is to say a 1:1 ratio of display size to Retina viewing distance, they would have to use 3GS display densities, somewhere around 164.8 ppi.

If instead Apple moves to something like the THX standard of viewing at 1.2x the display size, they might use 132 ppi screens from the iPad 1/2 process (or the 11" MacBook Air).

Without any other information, the trend does seem to be towards increasing the viewing distances with monitor size. The only question is whether Apple will stabilize at a ratio of 1x or 1.2x for viewing distances for the purpose of their internal definition of Retina. However, we can judge the implications of that decision.

iMac Resolution and Thunderbolt, revisited

Recall that we have assumed Thunderbolt, at an absolute maximum, can carry just under 10 Megapixels of resolution for a display. What are the implications for the size assumptions made above?

The current 27" iMac features a resolution of 2560 x 1440, which works out to around 3.7 Megapixels. High-Resolution 30" displays (such as the models sold previously by Apple, and currently by Dell and HP) push 2560 x 1600, or around 4.1 Megapixels.

Thunderbolt currently has no issues connecting to either display.

The minimum resolution of a retina-qualifying iMac would be the 132 ppi 21.5" iMac, assuming a 1.2x viewing distance, at roughly 2474 by 1390. This is obviously within reach of current Thunderbolt display capacity.

The maximum resolution of a Retina-qualifying iMac would be the 164.8 ppi 27" iMac, which would have a resolution of 3878 by 2182. That is around 8.5 Megapixels, and while it would fit within the theoretical display bandwidth of Thunderbolt/DisplayPort calculated previously, it does seem to push the boundaries of what is possible.

At 132 ppi that same 27" model would have native display dimensions of 3106 by 1748 pixels, or 5.4 Megapixels. This isn’t so much higher than the current resolution required by 30" displays.
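
All the resolution figures in this section fall out of the same bit of geometry. A small sketch of it, assuming the 16:9 aspect ratio discussed earlier:

```python
import math

def native_pixels(diagonal_in, ppi, aspect=(16, 9)):
    """Pixel dimensions of a display `diagonal_in` inches across at `ppi`."""
    w, h = aspect
    diag = math.hypot(w, h)  # diagonal in aspect-ratio units
    return (round(diagonal_in * w / diag * ppi),
            round(diagonal_in * h / diag * ppi))

for ppi in (132, 164.8):
    width, height = native_pixels(27, ppi)
    print(ppi, width, height, round(width * height / 1e6, 1), "MP")
# 132   -> 3106 x 1747, 5.4 MP
# 164.8 -> 3878 x 2182, 8.5 MP
```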

Usable Points

Despite my previous explanation of why pixel-doubling of the current iMac resolutions is both impractical and unnecessary, we must briefly return to pixel doubling in the context of the points versus pixels divide.

OS X, in this Retina era, finally supports what used to be called HiDPI mode through 2x assets. In other words, the OS pixel-doubles. The vagaries of how this is implemented are somewhat more complex (with non-native resolutions involving higher scale factors), but for the purposes of this discussion we can take the “native” Retina resolution of these monitors to be half the pixel-doubled resolution.

For the two possibilities we explored - 132 ppi and 165 ppi - this yields, with some rounding, the following charts of possible native resolutions in horizontal and vertical points.

For 132 ppi (iPad 2 density) at 2x:

Display | Horizontal Points | Vertical Points
27"     | 1600              | 900
24"     | 1366              | 768
21.5"   | 1280              | 720

And for 164.8 ppi (iPhone 3GS density) at 2x:

Display | Horizontal Points | Vertical Points
27"     | 1920              | 1080
24"     | 1720              | 966
21.5"   | 1600              | 900

We can see from this simple chart that if Apple wants to be able to offer a pixel doubled 1080p, they will have to use a higher ppi, such as the 164.8 ppi display of the iPhone 3GS.

This, for me, is perhaps the clearest indicator that Apple will not, despite my previous inclinations, use the iPad 1/2 or 11" MacBook Air display process. It would seem silly not to offer a native pixel-doubled 1080p at a minimum. While Retina makes things look beautiful, screen real estate is still important for the 27" iMac purchaser.

In summary, I think Apple can and will release Retina iMacs. Maybe not in the near future, but the obstacles aren’t as high as people make them out to be, simply because the resolutions necessary aren’t as extreme as intuition (pixel-doubling existing screens) would suggest. Thunderbolt can probably support them, and Apple can probably fab them.

Whether it’s an iPhone 3GS-resolution screen at 164.8 ppi, or more like an iPad or MacBook Air screen at 132 ppi, I don’t know. But I’m leaning towards the higher number.

2010 Nano Gets New Nano Interface

Available today in iTunes, last year’s iPod Nano gets the new clocks, interface, and other changes. This is a nice bonus from Apple for owners of the previous iPod Nano.

As someone who owns the 2010 iPod Nano, along with a few watch cases, it’s nice that Apple released this free update. This goes back to my post about the iPhone 4S and my note about Apple’s continued support for legacy devices.

The Smartphone Virus is Coming

In case you haven’t heard, pretty much all of HTC’s (slightly-custom) Android phones have a huge data logging problem and backdoor vulnerability.

This security hole was publicized by Trevor Eckhart, Justin Case, and Artem Russakovskii at AndroidPolice.com. It’s since been reported on all over the web, but I’ll quote some key bits from their original post about the problem.

In recent updates to some of its devices, HTC introduces a suite of logging tools that collected information. Lots of information. LOTS.

[…]

Theoretically, it may be possible to clone a device using only a small subset of the information leaked here.

[…]

HTC also included another app called HtcLoggers.apk. This app is capable of collecting all kinds of data, as I mentioned above, and then… provide it to anyone who asks for it by opening a local port. Yup, not just HTC, but anyone who connects to it, which happens to be any app with the INTERNET permission. Ironically, because a given app has the INTERNET permission, it can also send all the data off to a remote server, killing 2 birds with one stone permission.

In short, this disastrous back door that HTC created allows almost any program installed on your phone to steal your confidential data and upload it onto the internet. Basically all HTC Android phones released recently suffer from this security hole.

Android Police now reports that HTC admits the problem and is “working very diligently” to eventually release a patch, after a “testing period by our carrier partners.” Which phones will get this patch? When will it come out? How long is this testing period? Nobody knows.

This is a big deal. Hopefully it will be fixed soon. Unfortunately, it was also nearly inevitable that this would happen sooner or later.

Open

Android is “open.”

Well, in theory at least. At this point it’s basically only theory, since the source code for Honeycomb hasn’t been released after the better part of a year of devices being on the market. There are all kinds of excuses for this, but they’re not so important.

Android is open to manufacturers and carriers. This is more true. Manufacturers definitely mess with the OS on their phones and tablets. HTC installs their “Sense” user interface. Samsung has their own thing (which, notably, has landed them in legal hot water due to its similarity to iOS). Motorola has played with customization as well, but now that Google’s taking over that desire may well disappear.

Manufacturers want to customize their Android phones because they are trying to differentiate themselves from their competitors. But they’re not as rigorous with their software development practices as Google, and just don’t care as much about things like security. Google uses the threat of decertifying their phones as “Android Compatible” to keep manufacturers in line, but this is more about protecting Google services than ensuring high quality code. Google can’t do all the programming and QA for their partners, so they have to assume some amount of competency.

Carriers also want to customize phones, add carrier-specific app stores, and prominently place various sponsored apps and links to websites. They don’t do as much customization as the manufacturers - they just don’t need to differentiate in that way. The stuff they do put in place is more annoying than dangerous. Links you can’t move, apps you can’t remove, phones you can’t root or side-load. It’s more like the junkware you get with a new Windows PC than an attempt at a custom operating system.

So there’s a lot of customization - on almost every phone sold. The only real ‘stock’ Android phones are the Nexus One and Nexus S. They’re not exactly big sellers. And more changes, more companies messing with things, means more potential holes.

The only solution in this situation is to identify problems as quickly as possible, and fix them through updates once they appear.

Updates

However, cellular carriers are the ones who approve updates. With a few exceptions, neither Google nor manufacturers can release or push updates to phones without the mobile carriers signing off. Verizon doesn’t want untested code causing problems with their network. Likewise with AT&T, T-Mobile, Sprint, and so on. There are a lot of carriers, and a lot of testing.

Getting carriers to approve and push out updates, in fact, has been such a big problem for Android that Microsoft specifically tried to address it on their Windows Phone 7 platform. From the beginning, they included contractual agreements with carriers who ship Windows Phone 7 devices to deliver timely updates - and specifically restricted carriers’ ability to skip updates. Despite this, delivering updates has still been a problem for Microsoft, and their first updates to WP7 were something of a disaster. Android, mostly, just doesn’t get updated. Customers tend to get updates late, or in some cases never. There are point releases that people get, but updates, especially major ones, are few and far between.

In all fairness, there’s a huge combination of phone models, manufacturers, and carriers, and expecting any vendor to push software simultaneously across all these competing platforms and interests is a tall order.

(Apple does it just fine, but they rule with an iron fist, and don’t have to deal with third party manufacturers.)

But updates are critical to software security. Even in the best, most secure operating systems, security problems are inevitably found. Patching and applying updates in a timely fashion is critical. Once a problem is discovered and publicized it’s open season for whoever wants to take advantage of it. At that point, the only fix is deploying an update to the affected systems, immediately. Anything running the old software is at risk. And there’s lots of Android phones running old versions of Android, and old versions of manufacturer software.

All this adds up to a jungle of messy software, spanning multiple releases, multiple operating system versions, and with a mishmash of update policies. And the only key stakeholder in the operating system quality is Google.

Loyalty and Customers

Google is the only one who wants to update things quickly, but Google, in many ways, has the least power. While consumers, in some way, are the end-customers of Google’s Android, they never interact with Google directly. Google licenses the OS to manufacturers, approves their implementation as Android compatible, and their real power ends there. Manufacturers sell the phones to carriers and stores. Carriers subsidize the purchase of new phones, and whether they sell the phone directly or through an intermediary, ultimately hold the customer’s contract. And unless your contract is up, carriers aren’t really motivated to care about your phone experience. They want their network to work well, so they’ll test things that affect them, but once that contract is signed, consumers aren’t their priority.

HTC doesn’t have a lot of brand loyalty - to Google, or from phone users. Phone users - and by this I mean actual human beings - buy their phone from a Verizon store, or a kiosk in a mall, or by going to a website. They don’t get their phones from HTC. They’re not the customer. HTC’s customers are the mobile phone carriers, and they care more about things like price and branding than whether HTC has rigorously audited the security of software subroutines included with their custom user interface.

HTC (along with the other Android vendors) also doesn’t have a great financial motivation to update existing phones with new software. They’d rather people just buy a new phone, which gives them a chunk of change from the carrier subsidizing the purchase. They don’t make money when they have to code up software updates and security fixes. Bad PR will force them to do it now and again, but it’s not their priority.

The App Store

But there’s another, even bigger Android security problem - their wild-west anything-goes app store.

Google’s “open philosophy” is most clearly displayed in the official Android application store. It’s full of junk. Applications pretty much just get approved. If they do something crazy, or access private data, they flash a warning to users. People click through these things - just like they do on Windows, and Mac OS X. Training people to read and understand some complicated dialog about how a program might be dangerous because it can invoke these rights is a losing battle.

People don’t read that stuff. They open the sexy-pics.jpg.exe file every time.

Google does have a kill switch. They can forcibly remove apps from your phone, remotely. And they do it, all the time. It seems like every week there’s a new story about some Android malware of some kind or other that Google has to remotely disable.

Google’s approach is “approve first, ask questions later.” This is undoubtedly more open. More stuff gets approved. But it’s also more risky. Their limited automated testing can’t catch everything. And it doesn’t. It’s not only malware - the Android store is full of copyright infringing apps, ripoff apps, lookalike apps, cheat-code apps that do nothing, and so on. There’s a low average quality of apps. A chaotic sea of bad software. But you can install basically whatever you want.

Apple has the ability to remotely kill apps as well. But they’ve never used it - not even once. Apple has pulled apps from the App Store - they do it all the time. They reject a ton of stuff. Sometimes it seems like they behave capriciously. But if you have an iPhone App they removed from the store, it stays on your phone. It isn’t deleted. It stays in iTunes. You keep it. Apple hasn’t had to remove any malware remotely, because there hasn’t been any.

(This isn’t to say there aren’t junk apps in Apple’s App Store. There certainly are. But they’re fewer in number, they get taken down more, and they are more of the boring crappy-app variety.)

But not only is Google’s app store a bit crazy - it’s also not the only one. There’s the Amazon App Store. The Verizon App Store. The Samsung App Store.

Carrier App stores. Manufacturer App Stores. Third Party App Stores.

And you can side load any program you want, downloaded off some Russian bulletin board if you like. Your phone has to be unlocked, but there are apps on Google’s Android store that will do that for you.

An Android Virus: the real threat.

This combination of sloppy programming by hardware manufacturers, old code floating around thanks to late updates by carriers, and lack of oversight regarding installed software produces an environment that is ripe for security problems. There’s just too many combinations to test.

HTC’s exposed, unprotected backdoor may be the wakeup call Google needs. It looks like they’ve already begun to understand - Honeycomb has been closed and controlled. Ice Cream Sandwich may be as well.

Users are getting a different message, though. There’s already Android antivirus software for sale. There hasn’t even been a real, remote-execution, self-propagating Android virus. But it seems like, at some point, there will be. If things continue this way, it’s only a matter of time.

The discovery of HTC’s vulnerability here is mind-blowing. There’s a backdoor to your personal data that any app can secretly access. And it was not an accident - this is something they coded intentionally. Today, this can, and probably will, be used for malware. But tomorrow, someone might figure out a way to combine a poorly-inspected app with some chink in Android’s armor. I’m betting on another hole related to custom software, either carrier- or manufacturer-installed. And then it will be too late.

People entrust an enormous amount of personal, sensitive data to their smartphones. Your entire identity is wrapped up in your phone. And every day the phone gets more access. With online banking apps, software like Square, and ideas like Google Wallet, there are obvious financial implications.

Innovation is possible precisely because people do trust their devices with all this information. In the simplest sense, if people didn’t trust their phones with their contacts, they wouldn’t save any numbers. If they didn’t trust the privacy of their calls and call logs, they might not even use their phones at all. The recent voicemail hacking scandal in the UK has shown how hurt people feel when this trust is damaged.

If that trust is lost, it will be bad not only for Google, or HTC, or Verizon, but the whole industry. Innovation will be hurt across the board.

May the 4S be with you.

Apple announced the iPhone 4S today. Many people were upset by what seemed like a minor update - or perhaps by the lack of a phone from Apple with a new, distinct appearance. By most other metrics the phone adds a lot: the A5 represents a big processor bump, the camera quality is hugely improved, and the redesigned antenna with its dual-mode “WorldPhone” capability is great (particularly for Verizon customers who travel outside the US). There are other changes too, like Bluetooth 4.0, HSDPA+, and so on. Mostly, it’s just a solid successor to the hugely successful iPhone 4.

The only new “killer feature” is Siri - Apple’s voice recognition “intelligent assistant.” Siri looks promising, though I question how heavily this sort of thing will be used in practice.

However, the real news is Apple’s current phone lineup.

Apple is selling three models. They’re all iPhones. They all run iOS 5. And you can get an iPhone, new, with contract, for free.

This is a big deal for a couple reasons.


Competing at every price point.

Apple is now competing with Android across the board, with phones that range from $0 to $400. Supercuts offers a free Android phone on contract. Now Apple offers a free iPhone. Sure, these are prices on contract, with carrier subsidies. And the 3GS is only available for GSM. But Apple’s no longer ceding the low end to Android. The 3GS is a very capable phone, and will certainly compete with the cheap Android phones that are being given away with contract renewals.

Additionally, the $99 iPhone 4 looks identical to the iPhone 4S. Apple will sell a lot of these. People won’t be embarrassed to have an “old” phone since it’s (a) an iPhone, and (b) looks identical to its faster, newer, and pricier sibling. And in six months or so, it wouldn’t surprise me if there’s a price drop across the board, with the iPhone 4 taking the place of the 3GS, followed by a price drop or capacity shift on the 4S models.

Most important of all, these are all iPhones. They all have, roughly, the same features. Sure, the more expensive phones do some new things, but they all run iOS, they all run apps, they all browse the web - they all do the things you expect an iPhone to do.

It’s Apple’s old standby lineup of three models - Good, Better, Best.


Longevity

The second reason this is a big deal is that Apple is still manufacturing, still selling, still supporting, and still updating a phone that is well over two years old. The 3GS was released in June 2009. And it’s still working, still getting the latest operating system, and still a supported phone.

As a point of comparison, the HTC Droid Eris was released in the US in November 2009, and the original Motorola Droid the same month. These phones have not received the latest Android software. And they’re certainly not still being supported by their manufacturers.

Apple’s sending a very clear message. We love our customers - and we want to keep them. Your phone will still work next year, and we’ll still be updating it.

Apple releases iOS updates all the time. Somehow, they’re the only phone manufacturer that’s able to consistently release updates, across their whole product line, without carrier politics or deployment issues getting in the way. With iOS 5, those updates are even better, since you won’t even need iTunes.

People understand this. When you get an iPhone, you get the new features Apple adds later. Current iPhone owners have seen this happen. Android phone owners, meanwhile, have learned that Google’s problems getting partners and carriers to ship updates are just about as complicated as you’d expect a three-party negotiation to be.

Apple could make more money if they withheld features from older phones, or if they didn’t update them at all. (In fact, it’s worth pointing out that they may be withholding Siri from older phones as a differentiating factor.) But Apple has historically erred on the side of updating too much - the iPhone 3G probably shouldn’t have been allowed to run iOS 4. Apple had to fix that by updating the iPhone 3G yet again, to improve performance.

What Apple gets from this is incredible brand loyalty. They give up a few short term phone sales to grumpy customers who want some new feature. But people remember that they got new stuff for free, and it pays off in the long run.


So, in the end, is the 3GS a good buy?

Well, probably not. If you’re entering into an expensive two year contract, the additional money at signing required to get the very best phone hardware is inconsequential compared to the total cost of contract. You should force your cellular provider to subsidize your phone purchase as much as possible - and that means buying the latest and greatest phone. That’s true for any phone though - the ones they offer for free are always a bad deal.
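
To put rough numbers on “inconsequential,” here’s a quick sketch of the contract math in Python. The $80/month plan rate and the $199 price for the 4S are my own illustrative assumptions, not figures from Apple:

```python
# Rough total-cost-of-contract math behind the advice above.
# The $80/month plan and the $199 4S price are illustrative
# assumptions, not figures from this post.
MONTHLY_PLAN = 80   # typical US smartphone plan, dollars/month
MONTHS = 24         # two-year contract

for phone, upfront in [("iPhone 3GS (free)", 0),
                       ("iPhone 4", 99),
                       ("iPhone 4S", 199)]:
    total = upfront + MONTHLY_PLAN * MONTHS
    print(f"{phone}: ${total} over the contract "
          f"({upfront / total:.1%} of it is the phone)")
```

Even for the most expensive option, the handset comes out to less than a tenth of what you’ll pay the carrier over two years.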

But it’s amazing Apple is still supporting this stuff, still selling it, and has found a way to compete at this price point.

iTunes, the App Store, and Apple’s Revenue

People talk about Apple’s dominance of the online music industry, where Amazon is their only credible competitor (Google is trying, but without much commercial success). Apple’s movie and TV show business also gets a fair amount of press, and Apple has clear plans in this area, despite their continued description of the AppleTV as a “hobby.” The App Store, and in particular how it is tied to the iOS platform, also receives a lot of discussion from both traditional media and blogs.

Somewhere among all this, however, the numbers tend to get lost. Apple just doesn’t make much money on this business of “digital distribution.” Time and again, they’ve described the margins as small, and from the beginning they have made clear that these businesses were started, and continue to exist, in order to further their hardware businesses.

The iTunes music store was originally created, according to Apple, to make iPods better. Before the iTunes Music Store, getting songs onto portable devices legally was a huge headache, and involved buying a CD and ripping it. The iTunes store made it easier for people to fill their iPods with content. The TV and movie stores and the rental business let Apple sidestep Blu-ray on their computers, and made the AppleTV possible. And the App Store is the backbone of the developer community surrounding iOS. Its catalog is arguably one of the biggest selling points of iPhones, iPod touches, and iPads.

Apple hasn’t wanted other companies, like Palm or Real, syncing with iTunes and getting songs off the iTunes Music Store. In the past, Apple has pursued legal and engineering approaches to preventing other devices from syncing store content. These digital distribution stores aren’t profit drivers for Apple. They’re meant to protect and serve Apple’s real products - their hardware business.

Apple says these businesses are not profit centers, but putting profit aside, looking at pure revenue is very enlightening.

The chart below shows revenue numbers from Apple’s FY2011 Q3 quarterly report (10-Q), covering the quarter ended June 25, 2011. The graph is small, unfortunately, but you can see the short bar representing revenue from the iTunes/App/Movie businesses.

The long bar in the middle shows the revenue from Apple’s combined hardware businesses, and the bottom bar shows Apple’s total revenue for the quarter.

Apple Q3 Revenue FY2011

Ignoring profit entirely, it’s obvious the iTunes Music Store, App Store, Movie and TV businesses are just not a huge part of Apple’s business.

In the quarter, Apple produced around $1.57 billion of revenue in the combined digital distribution business (this figure does not include the nascent Mac App Store, for various reasons). This represents about 5.49% of their total revenue - roughly 1/20 of their business.

1/20. That’s the fraction of Apple’s revenue that the App Store, iTunes Music Store, Movie sales, Movie Rentals, and TV sales combined represent.

In comparison, the Mac desktop business alone earned a similar $1.58 billion this quarter. The Mac laptop business yielded more than double that figure, at $3.58 billion, the iPad produced $6.04 billion, and the iPhone brought in a whopping $13.31 billion in revenue. iPhone sales alone produce nearly an order of magnitude more revenue than the entire iTunes ecosystem.

Combining Apple’s various hardware businesses shows $26.30 billion in revenue - a full 92% of Apple’s total revenue for the quarter. This dwarfs the 5.5% of the App and iTunes businesses.

Apple Q3 Revenue FY2011 Pie
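
As a sanity check, the arithmetic fits in a few lines of Python. The revenue figures are the ones quoted above; the $28.57 billion total is simply the value implied by the 5.49% and 92% shares:

```python
# Sanity check of the revenue shares above, using the FY2011 Q3
# figures cited in this post (all in billions of dollars). The
# total is the value implied by the quoted percentages.
DIGITAL = 1.57    # iTunes music + apps + movies/TV (no Mac App Store)
HARDWARE = 26.30  # all hardware lines combined
IPHONE = 13.31
TOTAL = 28.57     # implied total quarterly revenue

print(f"digital share:  {DIGITAL / TOTAL:.1%}")       # ~5.5%, about 1/20
print(f"hardware share: {HARDWARE / TOTAL:.1%}")      # ~92%
print(f"iPhone vs digital: {IPHONE / DIGITAL:.1f}x")  # ~8.5x
```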

Even if Apple were earning much better profit margins on the digital distribution businesses than on hardware sales, there is simply no way for them to generate more cash than the hardware businesses.

Apple is a hardware company that produces software and online services to promote its hardware. The numbers don’t lie.

Kindle Confusion, continued

In my previous post, Too Many Kindles, I outlined how, in my opinion, Amazon’s Kindle product line has too many models, and why that’s a bad thing.

What can they do about it?

There are two steps Amazon needs to take:


Drop the Old Models

The first thing they need to do is to stop selling the legacy Kindles. Sell them through other channels, offer them to colleges and institutions only, or hide them somehow. But don’t make things complicated for customers.

Amazon should sell the Kindle, the Kindle Touch, and the Kindle Fire. That’s it. Consumers shouldn’t see the other models, or have to worry about the difference between “Kindle with Keyboard” and “Kindle Touch.” This is pretty easy, and I have to think Amazon’s planning on doing this anyway. It’s just crazy right now with all the different models.


Make Choosing Easy

The second, and more important step Amazon needs to take is to make clear the difference between the Kindle e-reader models and the Kindle tablet models.

The Kindle Fire is completely different from the Kindle e-readers. There’s no hint of that in the name. If the choice were up to me, I might have called it the Kindle Pad or Kindle Tablet, but the details are not so important. I don’t love the Kindle Fire name, but people made fun of the Nintendo Wii, the Apple iPod, and the Apple iPad for being stupidly named, and all were huge successes.

The critical thing, though, is to make it impossible for someone to get confused between an e-reader and a multipurpose tablet. One is for reading books. The other is for everything.

When people click on “Kindle” they should see two large buttons. One says Kindle e-Readers and the other says Kindle Tablets.

Throw the Kindle Fire brand in under Kindle Tablets - I’m sure there are plans for other tablets (rumors suggest a 10" model is coming soon). Make it flexible. But don’t conflate tablets and e-readers. They’re not the same.

Below these buttons, they should have bullet points explaining the features of each. Make it big, make it easy, make it simple. There are two product lines: e-readers and tablets. They aren’t the same. They’re both part of the Kindle brand, but don’t mix them together.

How would Apple do it?

Apple makes laptops and desktops, but you don’t pick the size of your MacBook Pro on the same screen where you customize an iMac.

If Apple sold an e-Reader (alongside the iPad), they’d make the choice clear. E-Ink, long battery life, and simplicity versus color, movies, apps, the web, and flexibility. This is the tradeoff. Explain it. Who knows, some people might want both. I certainly know people who own an iPad and a Kindle (or Kindle with Keyboard 3G, as it’s now called).

Once consumers choose whether they want an e-reader or a multipurpose tablet, then show them the options for the type of device they’ve chosen. I think it would be beneficial to simplify the product lines further, but even the existing choices between a touchscreen or buttons, advertisement-supported or full price, and 3G or Wifi are fairly straightforward decisions.

Personally, I would have Amazon offer the Kindle Basic - the cheapest possible model - and the Kindle Touch with 3G included. Eliminate the odd middle choice where there is touch sensitivity but no 3G connectivity. Whispernet is one of the Kindle’s best features, and people will have a better experience if you force them to buy it. You still sell the cheap model for the price-conscious, and people who want a touchscreen get the premium experience all around.

However, regardless of whether they simplify their product lines, Amazon should present these options separately. First, have the consumer pick whether they want a touchscreen. Show the starting prices for each choice. Then, ask them if they want 3G, and show the prices there. Finally, present the option of the ad-subsidized price.
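
Here’s a minimal sketch, in Python, of the one-question-per-screen flow I’m describing. The product names and base prices come from Amazon’s lineup; the ask() helper and the flat $40 ad-free premium are illustrative assumptions, not Amazon’s actual code or pricing:

```python
# A minimal sketch of the one-question-per-screen flow proposed above.
# Product names and base prices are from the post; the ask() helper and
# the flat $40 ad-free premium are illustrative assumptions.
def ask(question, options):
    """Show one decision at a time, with a price next to each option."""
    print(question)
    for key, (label, price) in options.items():
        print(f"  [{key}] {label} - ${price}")
    return options[input("> ")]  # sketch: assumes the shopper types a valid key

def choose_kindle_ereader():
    # Step 1: touchscreen or buttons, with starting prices shown.
    model, price = ask("Do you want a touchscreen?", {
        "y": ("Kindle Touch", 99),
        "n": ("Kindle", 79),
    })
    # Step 2: 3G, offered only where the lineup actually has it.
    if model == "Kindle Touch":
        model, price = ask("Do you want free 3G wireless?", {
            "y": ("Kindle Touch 3G", 149),
            "n": (model, price),
        })
    # Step 3: the ad-subsidized "Special Offers" discount comes last.
    model, price = ask("Accept ads on the home screen for a lower price?", {
        "y": (model + " with Special Offers", price),
        "n": (model, price + 40),  # the real ad-free premium varies by model
    })
    return model, price
```

Running choose_kindle_ereader() walks the shopper through three small decisions - touchscreen, then 3G, then ads - instead of dumping one giant grid on them.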

Right now Amazon shows most of these choices side by side. Maybe that’s a limitation of how their storefront is built, but it’s too confusing, and it’s bad design.

Too Many Kindles

Amazon announced their widely anticipated Kindle lineup this week. While there has been a lot of discussion regarding the new Kindle Fire - Amazon’s $199, low-cost, custom-Android competitor to the iPad - there hasn’t been much talk about the lineup they are fielding, and what it means for the Kindle brand.

First, it’s worth mentioning that Amazon has, over the last few years, developed a very strong brand with their Kindle e-reader. People know what a Kindle is. They probably know someone who owns a Kindle. They understand the brand, and it is seen in a positive light in the context of reading books.

Amazon is seeking to bring this Kindle brand into the more multi-purpose tablet market, while simultaneously continuing to refine and sell the existing e-reader business. This is not a bad approach. In fact, some people speculated before the announcement that Amazon’s tablet would be simply called the “Kindle Touch.” I am using the term “tablet” here in the sense of distinguishing the product from a device intended primarily as an e-reader.

However, Amazon surprised a lot of people by not only releasing a low-cost general purpose tablet, but updating their existing line of Kindle products, and continuing to make available the “old” product line. Amazon’s calling their new 7" tablet the Kindle Fire, while the other products are called Kindle, Kindle Touch, Kindle Touch 3G, Kindle with Keyboard, Kindle with Keyboard 3G, and Kindle DX.

This raises the question of whether Amazon’s Kindle product matrix is simply too large, and too confusing. Are there too many Kindles?

If you look at Amazon’s homepage, you get this image:

Amazon's Kindle Promo

This is reasonable, and makes it look like there are three product lines:

  • Kindle
  • Kindle Touch
  • Kindle Fire

The 3G option for the Kindle Touch makes it a little bit confusing, but it’s not crazy.

The problem is that when you actually click on the image, it takes you to their store page, which proceeds to show you this:

Subset Kindle Product Matrix

Now it looks like there are six products - or four categories, if you don’t count 3G separately - covering the entire gamut of the price range, from the Kindle at $79 to the Kindle Fire at $199.

  • $79 - Kindle
  • $99 - Kindle Touch
  • $149 - Kindle Touch 3G
  • $99 - Kindle with Keyboard
  • $149 - Kindle with Keyboard 3G
  • $199 - Kindle Fire

But these prices aren’t so simple. Some of the prices shown are based on the consumer choosing “Kindle with Special Offers” - in other words, an ad-supported price. Choose the version without ads, and the price jumps up.

Kindle Prices

The Kindle Fire, on the other hand, doesn’t show any ads. So you have the Kindle Touch 3G at $189, versus the Kindle Fire at $199. What’s the difference? Well, there’s the name, and obviously a color screen - that’s shown in the little thumbnails. But you have to scroll down to actually see what’s going on. And here is where things get really ugly.

Full Kindle Product Matrix

The first thing you see is that there are actually five categories - not the four shown above. The Kindle DX is also available, though why it isn’t given top billing is unclear. And it’s more expensive than the Kindle Fire?

Imagine your mother or grandmother goes online and wants to buy this “Kindle” thing she’s heard about:

It’s like an iPad, except cheaper! And you can read books on it! (I think it does apps, too?)

That’s going to be the extent of most people’s information on the Kindle. So she clicks on the graphic in the middle of the page on Amazon.com, and is confronted with all these options. What’s the difference? Which ones do apps? Which ones play video? Which have a touchscreen? Which let you write emails?

They’re all called Kindle. And the Kindle brand, as I mentioned at the beginning, is a strong brand. It’s a good name. People know about it. With the popularity of the iPad, and Amazon’s publicity regarding their new announcement, people will be interested. But what should they buy? Do they want an e-reader? Do they want to play movies? Do they want to run Apps?

You have to keep scrolling down, and then you see this confusing mess. Things were ugly before, but it actually gets worse.

Kindle Feature Comparison

Kindle UI Comparison

I’ve cut down the images to save space, and only show the most relevant parts, but there’s at least a full page and change of just product matrix - and on some monitors it’s probably two pages of information.

What’s the difference? And why does it matter?


You can’t differentiate by price.

The feature set doesn’t just get better - it gets different. An e-reader doesn’t necessarily compete with a tablet. This is the major problem. The feature sets are distinct - they don’t completely overlap. There are all these different models, but they’re all called “Kindle.” And you have to choose between, well, all of them.

Which have e-ink? Amazon’s been running advertisements about how great e-ink is, for reading at the beach, on a plane, everywhere. Which have good battery life? What’s the screen size difference? Which have a keyboard? Touch screens? Multitouch? Which run apps?

The Kindle Touch has some of this stuff, the Fire has other stuff, but there’s not a clear progression of features with price. It’s confusing, and that’s not good.

Fundamentally the problem is that the name “Kindle Touch 3G” compared to “Kindle Fire” tells you nothing about what each product does. You can’t tell from the price, and you can’t tell from the name.

You have to look at this giant product matrix and actually read the feature list to figure things out. That isn’t good design, or a good business model.

Dell gets away with this (if you can call their business successful right now) because you are buying a product from them, and customizing it at the time of purchase. It’s like options on a car. In some ways, Dell has a simple product matrix - they sell a couple of laptops and a couple of desktops, and consumers customize them. Dell has, of course, drifted away from this simplicity, but they’ve probably done so to their detriment.

A confusing product matrix just isn’t a good strategy. Apple learned this the hard way, in the mid 90s. Performa, Quadra, Centris. PowerMac 6XXX, 7XXX, 8XXX, 9XXX. AV models. WGS models. It was all over the place.

When Jobs came back, one of the first things he did was start slashing the product line, until they had a very simple product matrix - four boxes. A Pro Desktop, Pro Laptop, Consumer Desktop, Consumer Laptop.

Here’s a confusing, but relevant photo from Wikipedia:

Apple Product Matrix Timeline

The vertical line occurs around the time when Steve Jobs took over the CEO position at Apple, and started discontinuing models left and right. Before the vertical line, you have all these colored products, and then suddenly you don’t. Simplicity wins, because it’s easier for consumers to understand.

Apple let you customize these computers, and in some cases choose the color, but it was obvious what each part of the product matrix represented. If you were a home user, you’d get an iMac or an iBook. Need a professional or business machine? PowerMac or PowerBook. Apple’s product matrix doesn’t look so different today, more than 10 years later.

In the same way, Apple lets you customize the iPad. They really only sell one thing: the iPad 2. That’s it.

They sell other things, like iPhones and iPods, but those don’t have the same name. They’re obviously different.

When you go to Apple’s site, and you click “buy now” on the iPad button, it asks you whether you want a black or white model.

Apple iPad Colors

Then, Apple has you choose between six options, which reflect different combinations of storage size and 3G wireless capability. This isn’t complicated, though, and, crucially, the price is an easy indicator of what is better. You don’t lose anything with the more expensive model. There are no tricks. There are only two questions: “Do you want 3G?” and “How many gigs?”

Apple iPad Choices

These are questions consumers understand. People know about gigs. They may be fuzzy on gigabyte versus gigabit versus gibibyte, but GB is one of those computer things people “get” - you have more, you can store more stuff. It’s been drilled into people’s minds. And Wifi versus 3G is also a pretty easy choice. Apple lays it all out for you.
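
In fact, the whole iPad 2 matrix can be generated from those two questions. A tiny sketch - the dollar figures are the US launch prices as I recall them (a $130 premium for 3G), so treat them as illustrative:

```python
# The entire iPad 2 product matrix, generated from two questions:
# "How many gigs?" and "Do you want 3G?". Prices are US launch
# figures from memory ($130 premium for 3G) - illustrative only.
from itertools import product

WIFI_PRICES = {16: 499, 32: 599, 64: 699}  # GB -> WiFi-only price
THREE_G_PREMIUM = 130

for gigs, has_3g in product(WIFI_PRICES, (False, True)):
    price = WIFI_PRICES[gigs] + (THREE_G_PREMIUM if has_3g else 0)
    radio = "WiFi + 3G" if has_3g else "WiFi"
    print(f"iPad 2 {gigs}GB {radio}: ${price}")
```

Two questions, six combinations - that’s the whole decision.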

Finally, if you do choose a model with 3G, Apple asks if you want AT&T or Verizon for your carrier. Another easy question:

Apple iPad 3G Carriers

This isn’t a big product matrix. It isn’t a confusing grid. It’s easy.

There’s one product. It’s called the iPad 2. And you can customize it with the options you want.