Wednesday, September 15, 2010

Thoughts on iPad Mini

There have been rumors swirling that Apple will release a new version of the iPad in time for the holidays. There have been other rumors that Apple will release an iPad in a smaller form-factor.

In general I subscribe to the philosophy that most rumor-mongers are full of shit.

However, I think these rumors do make sense, and I'll explain why in a minute. But first, my guess. Everyone likes to make guesses; here's mine:

Apple will release an "iPad Mini" in November, running iOS 4.2, with a 6" screen at 1024x768 (213 dpi), putting its screen resolution and size on par with the Kindle 3. The smaller form factor (roughly a 60% reduction in size) would also make for a lighter device. That eliminates two of the three advantages the Kindle has over the iPad -- higher-resolution reading and lighter weight -- leaving only one: viewing in direct sunlight.
Will Apple also redesign the screen to work better in sunlight (or at least eliminate the polarization issue that makes the screen look completely off when you're wearing polarized sunglasses)? Maybe. I doubt we'll see many advances along that line until next year's release. (I'm lumping "overheating" issues into that third Kindle advantage of viewing in direct sunlight, by the way.)

I think it goes without saying that any new release of iPad will come with a front-facing camera for FaceTime support.

I came up with the precise specs for my guess by looking at what Apple already did with iPhone 4, and at how they try to inflict minimal pain on iOS developers. iPhone 4 doubled the resolution in each dimension and kept the screen size the same. That makes it fairly easy for iOS developers to adapt their applications -- double the size of your graphics, and you're done.

What would be even easier? If the pixel density increased but the pixel count remained exactly the same. Then developers wouldn't need to do anything! Obviously the screen size has to come down if the density goes up, which is no surprise if one of the goals is to compete with the Kindle form-factor at six inches diagonal. The 213 dpi I've posited is not "retina" level, per se, but it is in the range of the Kindle 3's DPI, and I think that's where this device is squarely aimed. (Alas, I was not able to find the actual Kindle 3 DPI documented officially anywhere, but it's said to be in the 200s.)

I've read rumors of a 7" iPad Mini, but I would be surprised if Apple went with that size: at 1024x768 it would mean sub-200 DPI (183 DPI), which, if you think about it, is not a whole lot more than the 163 DPI of the iPhone 3GS. It also wouldn't let Apple cut the weight as much as a smaller screen would.
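Both DPI figures fall out of simple geometry: pixels along the diagonal divided by the diagonal length in inches. A quick sanity check of the arithmetic, using the resolutions and screen sizes assumed in this post:

```python
import math

def dpi(width_px, height_px, diagonal_inches):
    """Pixel count along the screen diagonal divided by the diagonal in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(dpi(1024, 768, 6)))  # 6" iPad Mini guess -> 213
print(round(dpi(1024, 768, 7)))  # 7" rumor           -> 183
```

(1024x768 is a 4:3 multiple of a 3-4-5 triangle, so the diagonal works out to exactly 1280 pixels.)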

Apple wants to have their iPad cake and eat Kindle's, too. What's that, you want a full-sized mobile computing device? Here's the iPad. Oh, you're mainly interested in reading, but wouldn't mind getting the hundreds of thousands of apps that Kindle doesn't offer? Here's the iPad Mini.

Apple could release this product in November for the holidays, and developers would have to do nothing at all -- all of their software would run perfectly on this new device because nothing has changed in terms of the pixel count or aspect ratio.

Food for thought... Next Spring... will Apple release iPad 2 with a full retina display increase to ~326dpi, and a doubling of the pixels for the large form-factor (original) iPad line? If so, how long until Apple finally eliminates pixels entirely? Why are developers still using PNGs sprinkled with pixel fairy dust, rather than vector graphics? Will iOS 5 and Mac OS XI eliminate these relics once and for all?

Full disclosure: I am a long-term investor in both Apple and Amazon stock.

Monday, September 13, 2010

Geo::IP built for ActivePerl 5.12

You can find it here.

MSDeploy vs. xcopy network bandwidth throughput

MSDeploy (aka Microsoft Web Deploy) is great for syncing up all kinds of things between two machines. One issue I've noticed recently is that, for whatever reason, it doesn't always do well at maxing out your network bandwidth.

We were recently syncing up large SQL Server backups between two boxes connected through a gigabit Ethernet switch. I initially decided to use msdeploy.exe because I wanted it to not only bring over the latest files, but also delete the files on the destination server that no longer exist on the source server. Using -verb:sync with XML -source:manifest and -dest:manifest files, I ran the test sync from the destination server and watched the network usage in Task Manager.

The average throughput was in the 5-15% range on our gigabit connection.

As a test, I ran xcopy /S /D over the same directory tree, and was getting over 90% throughput most of the time, often 99%!

Unfortunately, xcopy will not truly sync; it only copies files over. So now our .bat file runs the xcopy command first, copying over all the newest data at a high rate of speed, and then calls the same msdeploy command as before to delete any files on the destination that aren't found on the source. Since all the copying of new data has already been done by xcopy, msdeploy finishes very quickly, and now we're synced up.
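For reference, here's a minimal sketch of that .bat file. The server names, paths, and manifest file names are placeholders, not our actual ones:

```bat
@echo off

rem Step 1: fast bulk copy of new and changed files.
rem   /S copies subdirectories, /D copies only files newer than the destination copy,
rem   /Y suppresses overwrite prompts.
xcopy "\\SOURCEBOX\Backups" "D:\Backups" /S /D /Y

rem Step 2: let msdeploy handle the deletes (files on destination missing from source).
rem   Since xcopy already moved the bulk of the data, this finishes quickly.
msdeploy.exe -verb:sync -source:manifest=source-manifest.xml -dest:manifest=dest-manifest.xml
```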

I am aware that robocopy can do a lot of this stuff, but it's clear that msdeploy is where Microsoft is putting its eggs for a lot of the newer syncing technology, so we default to using that first, and just augment with xcopy when pure throughput is needed.

If anyone knows why msdeploy is so much slower, please comment and I'll update this article with any tips to gain performance. My guess is that it's due to the HTTP agent. It just seems weird, because I'm talking about copying really huge files, so the slowdown isn't in hash checking.