Thursday, October 13, 2011

iOS 5 Killed (Temporarily) my AT&T Tethering

It came back when AT&T tech support told me to go to Settings : General : Reset and hit the Reset Network Settings button. That dropped my call to AT&T and caused my phone to sort of reboot, but when it came back up, there was my tethering option in the Settings : General : Network pane.

Sunday, July 31, 2011

Why Some Internet Plugins Stopped Working With Safari 5.1

When I was a junior developer working his first pro job in 1995, we didn't have Safari; we had Netscape, and if we wanted to make the browser do something it couldn't do, like play a video or render a PDF, we wrote Netscape plugins against the NPAPI. I wouldn't say we were happy, because the API was complex and fragile, but it was what we had. Come forward 16 years, and I'm dealing with NPAPI plugins again. It's only been in the last few years that interested parties have been actively working to move the API to something that fits better into a modern OS X browser environment.

Step one came a couple of years ago: removing the classic QuickDraw-based drawing model and replacing it with rendering into a browser-provided Quartz context. This also tightened the rules for when a plugin could draw--in response to a drawing event only--no more drawing from inside a mouse tracking loop.

The second step was changing the event model, from one loosely modeled on the pre-Carbon event loop to one loosely modeled on the messages received by a Cocoa view. When Safari dropped support for the old event model between 5.0.1 and 5.1, a bunch of Internet Plugins just stopped loading.
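
To make the first two steps concrete, here is a minimal sketch of what a plugin does under the CoreGraphics drawing model and Cocoa event model. It assumes the standard npapi.h declarations and a browserFuncs pointer (the NPNetscapeFuncs table the plugin saved at NP_Initialize time); error handling is trimmed.

// Negotiate the CoreGraphics drawing model and Cocoa event model in NPP_New,
// then draw only when the browser sends a draw event.
#include "npapi.h"
#include "npfunctions.h"
#include <ApplicationServices/ApplicationServices.h>

extern NPNetscapeFuncs *browserFuncs;   // saved at NP_Initialize time (assumed)

NPError NPP_New(NPMIMEType type, NPP instance, uint16_t mode,
                int16_t argc, char *argn[], char *argv[], NPSavedData *saved)
{
    NPBool hasCoreGraphics = FALSE, hasCocoaEvents = FALSE;
    browserFuncs->getvalue(instance, NPNVsupportsCoreGraphicsBool, &hasCoreGraphics);
    browserFuncs->getvalue(instance, NPNVsupportsCocoaBool, &hasCocoaEvents);
    if (!hasCoreGraphics || !hasCocoaEvents)
        return NPERR_INCOMPATIBLE_VERSION_ERROR;

    browserFuncs->setvalue(instance, NPPVpluginDrawingModel, (void *)NPDrawingModelCoreGraphics);
    browserFuncs->setvalue(instance, NPPVpluginEventModel, (void *)NPEventModelCocoa);
    return NPERR_NO_ERROR;
}

int16_t NPP_HandleEvent(NPP instance, void *event)
{
    NPCocoaEvent *cocoaEvent = (NPCocoaEvent *)event;
    if (cocoaEvent->type == NPCocoaEventDrawRect) {
        // The only legal time to draw: into the context the browser hands us.
        CGContextRef cg = cocoaEvent->data.draw.context;
        CGContextSetRGBFillColor(cg, 0.0, 0.5, 1.0, 1.0);
        CGContextFillRect(cg, CGRectMake(cocoaEvent->data.draw.x, cocoaEvent->data.draw.y,
                                         cocoaEvent->data.draw.width, cocoaEvent->data.draw.height));
        return 1;   // handled
    }
    return 0;
}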

The third step involves the use of Core Animation layers as the drawing environment instead of a Quartz context. Currently an optional drawing mode, the CALayer-based model is much more powerful and convenient than the alternative, and I suspect it will become the primary flavor of NPAPI plugins on OS X. It might even allow sharing code between iPhone apps, Mac applications and browser plugins if developers are careful about how they wrap it--CALayers are the underlying drawing environment of both UIViews (iOS) and NSViews (OS X).
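
Here is a sketch of the Core Animation flavor, assuming NPP_New has already selected NPDrawingModelCoreAnimation (after checking NPNVsupportsCoreAnimationBool, just like the negotiation above) and that the instance keeps its layer in instance->pdata. My understanding of the ownership convention is that the plugin returns a retained layer and the browser releases it; check your host browser's expectations.

// Hand the browser a CALayer instead of drawing into a Quartz context on demand.
#import <QuartzCore/QuartzCore.h>
#import <ApplicationServices/ApplicationServices.h>
#include "npapi.h"

NPError NPP_GetValue(NPP instance, NPPVariable variable, void *value)
{
    if (variable == NPPVpluginCoreAnimationLayer) {
        CALayer *layer = (CALayer *)instance->pdata;
        if (!layer) {
            layer = [[CALayer alloc] init];   // created once, kept for the life of the instance
            layer.backgroundColor = CGColorGetConstantColor(kCGColorBlack);
            instance->pdata = layer;
        }
        *(CALayer **)value = [layer retain];  // one retain for the caller
        return NPERR_NO_ERROR;
    }
    return NPERR_GENERIC_ERROR;
}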

With Safari 5.1 the default browser on 10.7, and recently pushed out to 10.6 users, there are probably a fair number of users searching for updates and tardy developers finalizing their mouse tracking code. Including me.

Wednesday, June 15, 2011

Neato xv-11 Robot Vacuum

I have never been a regular vacuumer. I have a moderately priced vacuum I bought 20 years ago while in graduate school, and I've probably gone through all of 8 bags. However, I'm a home owner now, with kids, kids who bring in bits of sand, drop crumbs everywhere, methodically chop bits of paper into confetti, and generally make a mess. And a wife who's busy starting her own business. My floors need vacuuming.

I perked up my ears on hearing the new editor of Engadget say his favorite product review of all time was the Neato xv-11 robot vacuum cleaner. I had, of course, heard of the iRobot Roomba—and drive by the iRobot headquarters in Bedford, Massachusetts 5 days a week—but had never heard of the Neato. Reading through the Amazon reviews gave me a picture of a more robust product than the Roomba, with more cleaning per charge and a square face capable of doing corners. So I bought one.

I like it quite a bit, but it cannot operate in the chaotic child-infested environment of my home without supervision. The kids are likely to drop a brush-fouling yo-yo string or a pile of clothes right in front of its charging station, so if I set it to operate on a schedule, I'm likely to come home and find it stopped after half a room with something jammed in its brush, or stuck behind a moved space heater. Scheduled cleanings are for people whose houses are always neat.

I've found that the robot and I work best as a team. I'll take it down to the basement, start it going at one end, and begin picking up in front of it, quickly getting far enough ahead that I can tidy the basement in 10 minutes while it takes 45 minutes to vacuum. I still have to come rescue it a few times, as it will sometimes wedge itself under a piece of furniture or get its drive wheels lifted off the floor, but that takes a few seconds and barely cuts into the labor savings over vacuuming that thoroughly myself. It will leave a few things on the floor--it isn't a human that will go back and forth over a clingy bit of paper until it finally gets sucked in--but frequent vacuuming quickly leads to pretty floors. I just wish it could climb stairs.

The noise is like a distant jet turbine: not as loud as my old traditional vacuum, but enough to distract you from any productive work.

One major problem is that, for whatever reason, the Neato does not keep its clock time; like a blinking VCR, it can't be relied upon to keep a schedule.

The Neato differs from the Roomba in its approach. The Roomba uses a random-walk algorithm to achieve coverage in a room. The Neato finds a path around the circumference of a room and then vacuums it in a grid; I believe this is more efficient and allows it to cover more area on a charge. The Roomba also has beacon lights you are supposed to set up to aid in navigation; the Neato uses an internal laser range finder to map rooms. Whatever it is doing, it almost always knows how to find its way back to where it started, and it has an amazing ability to follow curved surfaces, like the circular base of my recliner.

My wife is most happy with the fact that it will clean under beds, picking up years of dust bunnies the first time and keeping things tidy going forward. I'm most happy emptying its reservoir, seeing all the dust and dirt that was messing up my floors, getting my feet dirty, and potentially getting into my lungs, and sending it along to the landfill.

Saturday, June 04, 2011

Fear, Uncertainty and .Net

[Update: Mary Jo Foley has posted a blog entry saying that her sources are saying that .Net will be available for immersive Windows 8 development. If true, it'd be nice if Microsoft would actually come out and say that.]

Microsoft demoed the tablet application framework for Windows 8 development Thursday. Going forward, traditional apps written in C++, .Net and other legacy technologies will be available to tablet users but utterly painful to use while away from one's keyboard and mouse. You could see a bit of that in the conference video where the presenter fumbles several times trying to snap Excel's document window into place. Old apps are going to be dreaded while in tablet mode. People will need new applications, and combined with the new application store, some developer is going to make a bundle on a touchable version of Notepad.

Flashy new apps were presented, written in HTML 5 and Javascript. The interface might look like the Windows Phone 7 Metro UI, but this is not the Silverlight-based technology beloved by C# coders. Pure web technologies with no plugins have been embraced by Microsoft, and this is strange and unexpected to me as an observer.

Why is this strange?

One. Microsoft developers are going to go through Denial, Anger, Bargaining, Depression and Acceptance--but mainly anger--if this is true. .Net programmers believe they have the best tools, language and framework--as odd as Cocoa programmers might find that belief--and they really do. They also tend to have a disdain for dynamic languages like Javascript, especially Javascript with its odd object model and not-quite-C syntax. And they've spent the last several years mastering the C# language and the massively large .Net frameworks. Any platform company gets its strength from its developer community, and this just seems gratuitously hurtful. Finally, they expected to be part of a gold rush to fill both a new Microsoft Windows application store and custom development orders for the new platform.

Two. It is my impression that Javascript is not an appropriate language for large app development--although I could be convinced otherwise. It's possible that the Javascript part was simply what was working, and that it is appropriate for lightweight widget-like applications: weather apps, Twitter feeds, etc. At a later date, Microsoft could add support for other frameworks. That would be the same path Apple took with the iPhone--web development first, followed by native development.

Three. It is conversely and perversely possible that it is .Net itself that doesn't scale well, in performance at least, and wasn't up to the task of being scaled up from a little phone screen to 30 inch monitors, at least not in how Microsoft's OS team would have used it. Presumably, if Microsoft could have released a version of Office built on the .Net framework, they would have. I've had some tangential experience with .Net and complicated renderings, and it hasn't been good, but I'd always assumed things would work out given time to optimize.

Four. It is unlike Microsoft to want their developers to use easily portable technologies. The extent to which Windows 8 Javascript and HTML5 isn't littered with calls to Microsoft extensions and frameworks will be an indication of a surprising lack of strength on Microsoft's part to convince developers to make unhedged bets on Windows.

As I said, perplexing.

And frustrating for anyone seeking to use a base of code across a range of platforms. At one time, a story was building up that you could write a chunk of code in C# and execute it on the Mac and Linux via Mono, on Java platforms via a code translator, on Android via either Xamarin's .Net framework or a Java translator, on iOS via Xamarin, on Windows Phone 7 via Silverlight, and on Windows 8 tablets via either Silverlight or extensions to .Net. Maybe it wouldn't run well, and wouldn't compete with natively developed apps, but the story was there for project managers to believe. And now that story is sounding iffy.

This might just be a miscommunication. It seems insane for Microsoft to take such an odd route. Maybe when the whole story comes out, JavaScript+HTML5 will be just a presentation layer and the bulk of an app will be written in .Net; maybe they just don't have the .Net code ready to demo. But whatever it is, Microsoft owes it to its developers to let them know now, because they can't afford to wait to choose technologies.

Thursday, May 19, 2011

Hint, You Want Your App to be Hard To Program

I come up with many ideas for iPad apps. I have almost zero time to write them, but I do come up with ideas every couple of weeks. And when I take my vacation time, I'll write one. How to choose?

Well, here's an idea: choose the hardest one you can write in the time allowed, assuming you think it will sell. Hard means people will be willing to pay $5 for an app. Hard means you won't have cut-rate competition the next week. Hard means you can be proud of showing what you are capable of.

iOS is filled with programming tasks that should be hard but aren't. Want to play an H.264 MP4 movie? It's about 5 lines of code. Want to play streaming broadcast MPEG-2 off your HDHomerun? That's hard to do without hardware decoding (and probably involves getting an expensive license from Dolby for the sound, among other IP fees). Nobody will pay you for the former; there are people who will pay for the latter. Then again, the latter might also be impossible on even an A5.
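
For the record, the easy case really is about that short. A minimal sketch using the MediaPlayer framework of that era (the method lives in a UIViewController subclass, and the bundled file name is hypothetical):

// Play a local H.264 .mp4 in roughly five lines (pre-ARC, iOS 4-era MediaPlayer API).
#import <MediaPlayer/MediaPlayer.h>

- (void)playDemoMovie
{
    NSURL *movieURL = [[NSBundle mainBundle] URLForResource:@"demo" withExtension:@"mp4"];
    MPMoviePlayerViewController *player = [[MPMoviePlayerViewController alloc] initWithContentURL:movieURL];
    [self presentMoviePlayerViewControllerAnimated:player];  // category on UIViewController
    [player release];
}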

So I have my own hard idea, which I'm not sharing, and I'll be spending the summer writing it. I'll let you know how it turns out.

Wednesday, March 09, 2011

Setup of iPhone 4 Hotspot

I called AT&T as soon as I saw that iOS 4.3 had gone live. When I called, they had no information about setting it up and didn't seem to be ready for any influx of setup requests. But, eventually I was handed off to a technician and that person was able to find out the setup information.

So over the phone, they were able to convert me from my legacy unlimited plan to the monthly 4 gigabyte plan with hotspot. I received a text message saying the data plan had changed, and the hotspot button in the network section of the general settings pane allowed me to access the hotspot settings. I turned it on, changed the password, and saw that it worked fine with my MacBook Pro. I'm using it right now.

Pretty cool.

When I plugged the syncing cable into my MacBook Pro's USB port, I got a notification of a new network connection. For whatever reason, I had to restart the Mac System Preferences application to get the "iPhone USB" connection in the Network settings panel to connect. I will delete this connection, as I wouldn't want to accidentally spend data when WiFi is available.

The CNET Online Speed Test gave me 1288 kbps, which is about what you'd expect for mid-day in Cambridge on AT&T.

A Facetime chat over the Internet to my iPod Touch from my MacBook Pro was surprisingly good. My wife in Nashua only looked a little pixelated and the audio was as good as it ever is.

I'm looking forward to dropping my iPad service, which will save me $15 a month off my old $30+$30 plans, and also let my kids use their iPod Touches in the car and allow occasional use of my MacBook. I've never come close to using 2GB on either my phone or my iPad, so there is no real downside in terms of cost.


So a great service to finally have on iPhone.

Wednesday, December 29, 2010

On Keeping Politics Out of Non-Political Blogs

This is a blog about technology and programming; that is its mission, and why readers find it*. They do not come for my political opinions. As far as my readers are concerned, I don't have any. It would be a betrayal of the trust between me and whoever visits my blog to insert snarky little bits of politics. I certainly feel betrayed and imposed upon when an otherwise fine blog or podcast feels the need to sneak in something political; do it too often and that podcast is off my iPhone (e.g., Grammar Girl). Same with blogs (e.g., Roughly Drafted).

This comes up because of the last several minutes of the Engadget Podcast of Christmas 2010. Now the Engadget Podcast is my favorite podcast; I will actually re-listen to some episodes. It can be very funny, although the Engadget Bingo thing is lame, and it's about gadgets, so what's not to love? Well, what's not to love is when Joshua Topolsky and Nilay Patel decide that doing their actual job--putting on a gadget podcast and making amusing pop culture references--is too unimportant, and start verbally abusing Paul Miller, who disagrees with them but, more to the point, would rather talk about gadgets.

If Joshua or Nilay would like to build up an audience for a political podcast, that would be swell**. But taking their pre-existing audience, whose only shared interest is a love of devices, and subjecting them to their views is a breach of the compact that comes from their podcast being listed in the Gadgets section of iTunes.

Is this post an imposition on my readers? Well, a bit. In its defense, it is a meta-post about political content, not that content itself. And I have few frequent readers to offend.


* I'm not claiming I have regular readers, or many readers. I have people Googling specific technical problems and never coming back. If I'm lucky, this post will be read by a couple friends of mine, as it serves no practical purpose other than me venting.

** I wouldn't actually listen to such a podcast, because for whatever reason I have no interest in podcasts or TV shows about politics, although I read a large number of political blogs.

Tuesday, December 21, 2010

On The Need for MonoMac

I continue to listen to the Dot Net Rocks podcast even though I've yet to write a line of C#. It is a well-done and even-handed podcast, and one of these days I shall write a post about how the hosts demonstrate weekly how one can advocate a technology while maintaining credibility. But this post is about MonoMac, which I haven't used, but someone at my company is using it, and I go to the meetings. If this means I'm writing without enough facts, then this might be a blog.

[Update: I've been informed that what I've written below is not possible in MonoMac. Apparently, the MonoMac developer community cannot imagine the concept that C# isn't the solution for every programming situation. I am not a publicly profane person, but I'm tempted here. But please keep on reading about an alternate universe where programmers don't fall in love with a language at the expense of reason.]


Tireless advocate for all things Mono, Miguel de Icaza was on the show this week and he got me thinking about why anyone would use MonoMac. And I do think many people will use MonoMac, but (I hope) not for the reasons Mr. de Icaza gave on the podcast.

To summarize de Icaza's pitch for using MonoMac: if you love C# and managed code, you can use your favorite language to write Mac applications. You'll have to do at least some work outside your beloved Visual Studio, and you'll have to use completely unfamiliar APIs, but at least it'll be in C#. And you'll get to use the whole Cocoa API, plus various other OS X technologies like Core Audio, Core Animation, etc., but they'll be wrapped in C# shims. Oh, and you'll get a strongly typed language instead of the horrors of dealing with the chaos of loose typing.

To someone who's spent his professional life writing cross-platform code, this is a really bad and limiting sales pitch. It is aimed at people who just don't want to learn another language for whatever reason, but who would be willing to learn the entire OS X API--trapped in a language whose calling syntax for that API becomes cumbersome.

First of all, you have to realize that the Objective-C language is small. It is not C++, which even a professional coder might never learn in its entirety. It is an elegant little language which adds object-oriented extensions to C. The fact that you spend most of your time using those extensions doesn't make them any more verbose. It has had some further extensions in recent years, but it is still reasonably compact.

Also realize that the OS X APIs are huge, composed of literally thousands of messages, methods, structures, enumerations and constants in dozens of frameworks. I've been programming on OS X for a decade, and Apple adds APIs at a rate faster than I can learn them.

So, MonoMac saves you the trouble of learning a small language, while expecting you to master a large API which has been transmogrified to make sense to C# programmers. And the vast majority of the documentation and forum postings for that API assume it is in another language. That's not much of a pitch.

I used to work on a karyotyping application when I was living in suburban Chicago. It was Mac/PC and was written in C++; 90% of our code was shared between the platforms. We used standard application frameworks, at the time PowerPlant and MFC, which relieved us of maintaining code every application uses and gave us a lightweight entry into native appearances and behaviors. Almost all of the document data structure and rendering code was cross-platform. So 10% of the code dealt with the user-facing part of the application or called into native operating system services, and the rest was vanilla C++ dealing with an abstract environment.

Consider a typical Model-View-Controller design. If one has a shared language like C# running in Mono, then we can factor our code such that all or almost all of the Model code is in shared vanilla C#; some or even most of the Controller code can be written in C#, although I'd prefer that Objective-C be used for Controllers on the Mac; and the bulk of the View code is no code at all, but what you get from the framework by using standard widgets, windows and other classes with the provided GUI tools. Basically, you want the framework to handle everything outside the document frame in your window. Depending on the application, you can have very high code reuse, or less if you need high-performance access to operating system services, such as the example de Icaza gave of using the Audio Units API for sound processing. Regardless, this should be abstracted away and created with factories so that it is callable from vanilla code.

So you'd end up with 4 kinds of code:
  • Vanilla C# that doesn't do much beyond manipulate vanilla objects and simple types, and implement various abstract interfaces or protocols.
  • Windows-specific C# that makes use of Windows frameworks.
  • MonoMac-specific C# that makes use of native Mac services.
  • Objective-C for Mac-specific GUI code, where C# would be awkward.

For a lot of applications, the first kind can be the bulk of the code.


This takes discipline but it results in a more maintainable, focused code base.

I think that's a better pitch.

Not that I'm advocating it; I haven't tried to see if my decade-old experiences with cross-platform design are transferable to the Mono runtime environment. Nor do I think Mono has proven itself enough to risk a project on it. I was floored that such a well-known project only has 200 or so active users.

And I like Objective-C, it's a lot of fun.

Wednesday, November 03, 2010

Logitech Still Sells the Z-5500

I just yesterday replaced the decade-old Logitech Z-5500 Digital speaker system in my TV room. It had spent several years on my desk when I had an apartment, and then spent the last 5 years giving surround sound to my house. It gave me reliability and a nice set of inputs. As a gadget guy, I'm just shocked that something so archaic is still being actively sold. Go look at its specification page and it touts its “Analog stereo-mini (on side panel of control center) for portable CD, MP3, or MiniDisc® players”. Now that's a blast from the past, the MiniDisc and the portable CD player.

I replaced the Z-5500 with a Denon AVR 591 which is a real receiver with modern HDMI ports, modern audio codecs and an auto calibration system. Still, I am going to miss the conveniently small control module of the Z-5500.

Thursday, September 30, 2010

New version of Signal GH

Just got a note from Apple saying that a new version of my Signal GH iOS app, which monitors over-the-air TV signal quality with HDHomeruns, was approved. Which is good. I make little money on the product, but I like tweaking it.

It has two major features: an iPad layout and a map of broadcast towers in the United States (sorry, Canada).

Both of these features were a lot of fun. The size of an iPad screen is luxurious and freeing. In this case, I just scaled everything up, as I think the tabbed interface works for this particular application. It might have been more iPad-ish if I'd put the settings in a popover control, but I felt people would be spending a good amount of time in setup and would get nervous spending too much time in a spring-loaded widget. Other iPad apps I've written have gone whole hog using popovers for accessing functionality.

The map was a joy to create. MapKit is a great example of a framework that Apple just gets right. Getting my TV towers from a Core Data database to the screen could not have been easier due to the flexible use of protocols and categories. However, the standard drop pin wouldn't work because TV stations tend to share a single tower. So I went with a custom flower petal annotation giving me room for 7 major networks and a generic logo.
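
As an illustration of what "protocols and categories" buys you here, a category can make a Core Data entity a map annotation directly, so fetched managed objects can be handed straight to the map view. This is a sketch, not Signal GH's actual code; the TVStation entity and its latitude, longitude and callSign attributes are assumptions.

// Hypothetical Core Data entity "TVStation" made MapKit-ready with a category;
// no wrapper objects are needed before calling -[MKMapView addAnnotations:].
#import <MapKit/MapKit.h>
#import "TVStation.h"   // assumed NSManagedObject subclass

@interface TVStation (MapAnnotation) <MKAnnotation>
@end

@implementation TVStation (MapAnnotation)

- (CLLocationCoordinate2D)coordinate
{
    return CLLocationCoordinate2DMake([self.latitude doubleValue],
                                      [self.longitude doubleValue]);
}

- (NSString *)title
{
    return self.callSign;
}

@end

After a fetch, the results can go straight onto the map with something like [mapView addAnnotations:fetchedStations].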

Compass support was nearly as easy, although I did have to use my own custom annotation for that, when it would have been nice to just set a flag and have it added. On the other hand, I felt the standard radar-wave animation for the user's current location was too distracting and brought attention to the wrong detail, so I replaced it with a more static graphic merged with the compass.

Thanks to my sister, Sarah Howes, for doing the research needed for populating my TV station database.

Wednesday, September 15, 2010

Finally! Apple Embraces A Standard for Metadata in PDFs

I draw your attention to the header file CGPDFContext.h in the iOS 4 SDK:

void CGPDFContextAddDocumentMetadata(CGContextRef context, CFDataRef metadata)
    CG_AVAILABLE_STARTING(__MAC_10_7, __IPHONE_4_0);


The iOS 4 SDK has been out for several months, but I hadn't noticed this change until today. Not that I have any use for it today on iOS; it's on the Mac where it is crucially needed.

Starting with Mac OS X 10.7, developers will be able to embed arbitrary XML data in the PDFs they generate with Apple's APIs. [Update: I guess there is no actual requirement of XML based on just the API. I'd recommend standardizing on XML though.] They have decided to use the method advocated by Adobe, in which a metadata stream object with a compressed XML payload is inserted into a PDF (Apple does the compression for you; you just have to provide a block of XML data). This is a good way of doing it, but more importantly, it is a simple and standard way of doing it. I would have preferred having an additional vendor tag where I could mark the metadata with a "com.genhelp.mydrawingapp" identifier, but that is not crucial; I can get that data from the XML.

Why is this Important?

As I've explained several times before, on the Mac we used to have this feature which I will call Round Trip Editing, wherein a user could make a drawing in one application, copy and paste the drawing into another application, and then later copy and paste back into the original application, and still be able to edit the drawing. Generations of Mac users relied on this feature to go from applications like ChemDraw into PowerPoint and back again.

This feature has been lost as applications transition from using the archaic PICT clipboard flavor to the modern and beautiful PDF clipboard flavor. There was no direct way to embed large data in Apple-generated PDFs, and thus developers were left on their own to munge the format if they dared. And, with no standard or expectation of data embedding, applications did not bother to preserve the original PDF, resulting in data loss.

Getting round trip editing working again requires 3 steps: Apple has to provide an API for data embedding; content-generating applications have to be modified to use that API; and office applications have to be modified to return the original PDF to the clipboard when a single image is selected. With Mac OS X 10.7, step one will be here--about 4 OS versions tardy.


BTW, here is how to call it:

// requires <ApplicationServices/ApplicationServices.h> on the Mac (or <CoreGraphics/CoreGraphics.h> on iOS)
CFDataRef MakeAPDF(CFDataRef someXML)
{
    CGRect mediaRect = CGRectMake(0, 0, 400, 600);
    // use your own rect instead

    CFMutableDataRef result = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGDataConsumerRef PDFDataConsumer = CGDataConsumerCreateWithCFData(result);

    // mark the PDF as coming from your program
    CFMutableDictionaryRef auxInfo = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                                               &kCFTypeDictionaryKeyCallBacks,
                                                               &kCFTypeDictionaryValueCallBacks);
    CFDictionaryAddValue(auxInfo, kCGPDFContextCreator, CFSTR("Your Programs Name"));
    CFDictionaryRef auxiliaryInformation = CFDictionaryCreateCopy(kCFAllocatorDefault, auxInfo);
    CFRelease(auxInfo);

    // create a context to draw into
    CGContextRef graphicContext = CGPDFContextCreate(PDFDataConsumer, &mediaRect, auxiliaryInformation);
    CFRelease(auxiliaryInformation);
    CGDataConsumerRelease(PDFDataConsumer);

    // actually make the call to embed your XML
    CGPDFContextAddDocumentMetadata(graphicContext, someXML);

    CGContextBeginPage(graphicContext, &mediaRect);
    // do your drawing, like this grey rectangle
    CGContextSetGrayFillColor(graphicContext, 0.5, 0.5);
    CGContextAddRect(graphicContext, mediaRect);
    CGContextFillPath(graphicContext);
    // end your drawing

    CGContextEndPage(graphicContext);
    CGPDFContextClose(graphicContext);
    CGContextRelease(graphicContext);
    return result; // caller is responsible for releasing
}

And here's how to get the data back. Check that the XML is yours instead of some other program's.
CFDataRef ExtractMetaDataFromPDFData(CFDataRef pdf)
{
    CFDataRef result = NULL;

    CFRetain(pdf);  // keep the backing bytes alive while the data provider uses them
    const UInt8 *pdfData = CFDataGetBytePtr(pdf);
    CFIndex pdfDataLength = CFDataGetLength(pdf);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(kCFAllocatorDefault, pdfData, pdfDataLength, NULL);
    CGPDFDocumentRef pdfDocument = CGPDFDocumentCreateWithProvider(dataProvider);
    CGDataProviderRelease(dataProvider);

    if (pdfDocument)
    {
        CGPDFDictionaryRef docDict = CGPDFDocumentGetCatalog(pdfDocument);
        CGPDFStreamRef metastream = NULL;
        if (CGPDFDictionaryGetStream(docDict, "Metadata", &metastream))
        {
            CGPDFDataFormat format = CGPDFDataFormatRaw;
            CFDataRef streamData = CGPDFStreamCopyData(metastream, &format);
            if (streamData)
            {
                if (format == CGPDFDataFormatRaw)
                    result = streamData; // the Copy call already gave us ownership
                else
                    CFRelease(streamData);
            }
        }
        CGPDFDocumentRelease(pdfDocument);
    }
    CFRelease(pdf);

    return result; // check that this is your XML; remember to release it when done
}

Monday, August 23, 2010

The Three Numbers in Computer Programming

When I was working at a small company in Illinois, Vysis, my team had an architect named Ian Poole who taught me many things, and one of them was the simple fact that there are only three numbers in software design: zero, one and every. By this he meant that you can support the case where there are zero copies of an entity, one copy of an entity, or an arbitrary number.

I ran into this problem when revamping my iPhone app, Signal GH, which monitors antenna signal strength from an HDHomerun. I had mistakenly written it to handle two tuners in a single device. The code was littered with references to the "yellow" tuner and the "cyan" tuner, as those were the colors of their graphs. Well, Silicon Dust came out with a version with a single tuner, which sort of shot me in the foot. And then I bought that device, which gave me three tuners on my network. I had a painful time backing out my bad design choice in favor of an arbitrary number (I did limit it to keeping track of 8 tuners, but that number is kept in only one place and could be changed in a few minutes).
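
Here's the shape of the fix, as a hypothetical sketch rather than Signal GH's real code: tuners live in a collection, the only magic number is a cap defined in one place, and the Tuner class is a stand-in.

// Hypothetical sketch of "zero, one, every": identical code whether there are
// zero, one, or eight tuners on the network.
#import <Foundation/Foundation.h>

static const NSUInteger kMaxTrackedTuners = 8;   // the one place to change it

@interface Tuner : NSObject                      // stand-in for a real tuner wrapper
- (void)updateSignalStrength;                    // poll the device, update the graph model
@end

@interface TunerMonitor : NSObject {
    NSMutableArray *tuners;
}
- (void)addTuner:(Tuner *)tuner;
- (void)refreshAll;
@end

@implementation TunerMonitor

- (id)init
{
    if ((self = [super init]))
        tuners = [[NSMutableArray alloc] init];
    return self;
}

- (void)addTuner:(Tuner *)tuner
{
    if ([tuners count] < kMaxTrackedTuners)
        [tuners addObject:tuner];
}

- (void)refreshAll
{
    for (Tuner *tuner in tuners)
        [tuner updateSignalStrength];
}

- (void)dealloc
{
    [tuners release];
    [super dealloc];
}

@end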

The new version will be out shortly; I'm fighting off one last bug.

Wednesday, July 07, 2010

iPhone 4 improves on audio output isolation

I often hook up my iPhone to my car stereo and listen to podcasts while driving. I tend to drive with one earphone in place so I can click to pause or answer phone calls. In the past, whenever I turned off my car's stereo, I could still hear an indistinct and annoying noise coming through my headphones even though the sound was supposed to be going out the line-level output in the phone's dock connector. Now, with the iPhone 4, it's quiet. Just another little improvement.

Tuesday, June 01, 2010

On Fixing the Personal Computer's Original Sin

A conversation with an old friend the other day brought up the limited file handling ability of the iPad. Apple really doesn't want you to deal with files as such on the iPad. When the original iPod came out, there was a lot of criticism about not allowing the user to maintain their own MP3 file trees and drag and drop onto the device as competing products allowed; and Apple kept it that way. These are two examples of rectifying one of the original sins of the computer: forcing users to deal in an ad hoc manner with individual documents as files.

Oh, you might say that it is freedom itself to be responsible for filing away today's expense report as a spreadsheet inside of a folder with 500 not quite identical others, or knowing where each of 10 versions of Margaritaville is located, and which is your favorite. To the contrary, it is just another example of the user being trapped by a lack of imagination on the software architect's part into doing something the computer should be much better at doing.

Or you might say that the individual file is a natural unit of information, like the Boolean bit; that is not true. The computer file was invented by a man (unless it was invented by Grace Hopper, which I doubt); computers could have evolved with some other mechanism, perhaps one mimicking the human brain, which I don't believe uses files.

If I were a bench chemist, and every day I drew some new variant of a steroid, where would I want my drawings: in individual proprietary documents, or in a database giving my work structure and context? Pretty obviously in a database. It creeps up on us, the slow flood of documents; the more organized among us can keep it together through ever-deepening trees of folders and files, but eventually individual files in folders become unmanageable. I've been using a computer for 25 years, and I look forward to at least another 40 years of use. There is no way I will be able to productively keep track of all my creations over that time in a tree structure. As it is, anything older than 5 years might as well be non-existent.

So the operating system vendors introduce search, and we get by like our computers are mini-Googles; as long as we can remember some key phrase we can find it; except when words fail us, or the document can't be parsed by the indexer.

And the cloud beckons. Anyone who thinks the cloud is just a well-sorted FTP site doesn't understand. The iPad is right now at one of those points where Apple can see, but cannot provide or even enunciate, the ultimate solution, but does not want users to get into the habit of using individual files for their document needs; so we get the hacky solutions that will be cast aside the moment anything remotely elegant is provided. And I don't know what this solution is either. People have been trying to improve upon the file-based system for years: the Newton didn't have files; BeOS had some sort of database file system; OpenDoc tried to get all documents to live together in harmony; etc. And none of these were a market success. Files will not die easily, and you will still have to export to a file for a good long time.

Barring a solution, the point of this entry is just to ask people to keep an open mind about files or the lack of them. Files are not the thing you want; you want to create, edit and view media and documents, and how they are stored is secondary to how quickly you get to them and how safely they are stored.

Monday, May 24, 2010

Flash's History of Neglect on the Mac - An Example

I turn your attention to this apparently genuine June 18, 2008 forum comment by Tinic Uro, which I will quote:
We have identified the bottleneck in the Flash Player for OSX. Like in the other plugins the culprit is text rendering, in this case rendering using device text. This benchmark spends >50% in a single OSX function: ATSUGetUnjustifiedBounds. You can verify this yourself using Shark. I am working on a change which will cache the results returned by that API to where this call should completely disappear from the performance profile.

If everything turns out well I hope to see that change in future release. Some new numbers on my Mac Pro (note that these represent numbers which can change at any time if we decide to do things differently):

Before the change: ~8.5fps
After the change: ~28fps in Safari (~26fps in Firefox)

This speaks volumes about how little platform-specific TLC Adobe historically put into the Mac plugin. A performance bottleneck as glaringly obvious as this should never have been seen by a user, much less bedeviled users and Flash developers for years before someone bothered to run Shark, or nowadays the CPU Sampler instrument. Adobe must never have tried to optimize on the Mac at all; if I found a common use case where I was spending half my time measuring the same block of text over and over again, I'd joyously jump on the opportunity to make my users' lives easier, because I could fix that in an hour. Apparently, Adobe couldn't be bothered to do the right thing until years later, and countless hours were lost by Flash developers trying to coax the plugin into looking just as smooth on the Mac as it did on Windows.
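
The fix Uro describes is plain memoization. A hypothetical sketch of the idea (MeasureTextSlowly stands in for the expensive ATSUGetUnjustifiedBounds-based measurement; a real cache key would have to include the font and style, not just the characters, and this one isn't thread-safe):

// Cache expensive text measurements so identical runs become a dictionary lookup.
#import <Foundation/Foundation.h>

extern NSRect MeasureTextSlowly(NSString *text);   // assumed expensive measurement call

NSRect CachedTextBounds(NSString *text)
{
    static NSMutableDictionary *cache = nil;
    if (!cache)
        cache = [[NSMutableDictionary alloc] init];

    NSValue *cached = [cache objectForKey:text];
    if (cached)
        return [cached rectValue];

    NSRect bounds = MeasureTextSlowly(text);
    [cache setObject:[NSValue valueWithRect:bounds] forKey:text];
    return bounds;
}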

And now that Apple is the big dog in the mobile application space, it's payback time for all the years of neglect and shoddiness. And with the eyes of the world upon it, Adobe is having to do the actual heavy lifting of getting decent performance and battery life out of a handful of Android phones just to prove Apple wrong. At least something good might come of this.

In the meantime, I've installed Click to Flash on my Mac and gotten iPad-like silky-smooth performance out of Safari. It is amazing what all those little ads cluttering up the dozens of open pages I tend to accumulate were doing to my poor MacBook.

Friday, May 07, 2010

Will the Next 13" MacBook Pro have an Optical Drive?

I've been following the news about the new 13" MacBook Pro still having a Core 2 Duo when its bigger brothers have i5s and i7s. Apparently this was because there was not room to fit the added NVidia chipset needed to allow GPU switching. If this chipset problem persists, I'm going to bet that Apple removes the optical drive from the 13" MacBook Pro, giving room for the chipset and a bigger battery, and I'll further speculate that they begin selling a little NAS computer that includes a shareable optical drive, a centralized iTunes server, a Time Machine server, and AppleTV functionality. This mashup of 4 separate products--Time Capsule, AppleTV, the MacBook Air optical drive, and the iTunes library functionality--makes a lot of sense, and would be billed as an energy-efficient and environmentally friendly way to reduce redundancy. And somehow, this will tie into their cloud strategy, which will be coming online shortly. And it would probably have enough GPU-powered oomph for on-the-fly re-compression of video for use on iPhones and iPads.

Wednesday, March 24, 2010

Bye Bye Pentium 4 Server

No sooner had I assembled a mini home theater PC for the TV room than Amazon Vine offered me a Zotac MAG HD-ND01-U, which is basically the same thing in more appliance-oriented packaging.

The guts of my original HTPC will now be moving into the house's new Linux/MythTV server: add a large hard drive, a cheap DVD drive and a larger case. This is going to save me a lot of electricity. A Pentium 4 is a pretty poor choice for a lightweight server, and is now draining over $10 of power a month; the new Atom-based server should draw less than a third as much energy, maybe even less, as I'm hoping the more modern design will have better idle characteristics. I'll save quite a bit of money in the long run by mothballing my old Dell. And the new Atom/Ion motherboard should be fine for serving the occasional file and recording TV shows.

2.0 GHz Pentium 4 + hard drive + external hard drive = 110 W (or so) at idle
Atom N330 + laptop drive + sleeping hard drive = 25 W idle

If I pay a dollar for every 10W per month, then I'm saving around $8 a month or $96 a year, so the added $180 I'm spending will be paid for in 2 years (or in 4 years if I had had to buy a motherboard and RAM). And I'll have a quiet server that isn't annoying me constantly with its fan.

[Update: It's so pleasantly quiet in my laundry room today. Just have to migrate the svn server on the little NAS to the new server and it's going to be so peaceful in the basement.]

Tuesday, March 09, 2010

Holding onto Legacy PCI Cards is Limiting

After the happiness of putting together an NVidia Ion HTPC for the TV room, I've been looking at replacing the Linux server in the laundry room. It's a typical noisy, energy-hogging Pentium 4 Dell with non-gigabit Ethernet and multiple hard drives either inside it or in an external case. I'd like to replace it with a low-power, quiet, modern model. I could collapse my MythTV backend, SVN server, and file server into one box and save on the order of $15/month in electricity. So, I'd like to move to something like:
  • Dual core Intel Atom N330 processor
  • Ion chipset
  • Small SSD boot drive
  • Large (1.5 or 2.0TB) Green Drive
  • 2 GB RAM
  • mini-ITX form factor

The fly in this ointment is that I have an ATSC tuner on a PCI card in the current box (a very reliable PCHDTV 3000), and getting a mini-ITX Ion board with a PCI slot is limiting. As far as I can tell, the choice comes down to one motherboard: the ASUS AT3N7A-I. And anytime I see customer reviews that include such tidbits as "noisy fan" or "uses more energy than other Ion motherboards", I start wishing for other options.

And the other option is to go with another network tuner. I already have one HDHomerun, which along with my PCI tuner gives me the capacity to record three simultaneous shows on the rare days when that's a good thing. Replacing the PCI version with another HDHomerun would open up a larger world of Ion or Atom motherboards with more modern card slots. I even see that Silicon Dust has recently released an economy single-tuner model which can be had from newegg for $81.97 shipped. And this will keep the new server cooler and quieter, as well as giving Windows Media Center on my wife's computer another tuner to access. Win, win, win.

Tuesday, February 23, 2010

Could Apple Transition to Using iPhone Apps as Safari Plugins?

I've been knee deep in trying to keep an old style NPAPI (Netscape) plugin working under Snow Leopard, and believe me it is not easy. And one of the reasons it is not easy is that Apple has used the 64-bit transition to lock down what an NPAPI plugin can do and when it can do it. No longer are we allowed to do things like bring up our own windows, draw whenever we feel like it, create our own timers, track our own mice and generally do things the easy way; we have to live in the browser and follow the browser's rules.
I refer you to the Web Plugin Programming Topics Guide
Beginning in Mac OS X v10.6, on 64-bit-capable computers, Netscape-style plug-ins execute in an out-of-process fashion. This means that each Netscape-style plug-in gets its own process separate from the application process. This design applies to all Netscape-style plug-ins. (It does not apply to WebKit plug-ins, nor at present to WebKit-based applications when running in 32-bit mode.)


It goes on to enumerate a large number of new restrictions on how plugins are supposed to operate in their new out-of-process home. At this point, it becomes pretty hard to justify using an NPAPI-style plugin for anything but the simplest Safari plugin; it is just too strangling. So the mind turns to WebKit plugins, which are basically Cocoa NSViews with a few limitations and added message handlers. As the quote above says, they are not (at present) limited in the way Netscape plugins are; in this way Apple is nudging people to use WebKit plugins, if for no other reason than that they allow you to do things the NPAPI plugins do not.
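
For contrast, here is roughly what the WebKit flavor looks like: an ordinary NSView whose class doubles as the view factory, plus the WebPlugIn lifecycle methods. This is a minimal sketch with a made-up class name; a real plugin bundle also needs the WebPlugin keys in its Info.plist.

// Minimal shape of a WebKit (non-NPAPI) plugin view.
#import <Cocoa/Cocoa.h>
#import <WebKit/WebPlugin.h>
#import <WebKit/WebPluginViewFactory.h>

@interface MyPlugInView : NSView <WebPlugInViewFactory>
@end

@implementation MyPlugInView

+ (NSView *)plugInViewWithArguments:(NSDictionary *)arguments
{
    // arguments carries the container, the element's attributes, the base URL, etc.
    return [[[self alloc] initWithFrame:NSZeroRect] autorelease];
}

- (void)webPlugInStart
{
    // Begin normal operation (timers, network requests, animation).
}

- (void)webPlugInStop
{
    // Stop work when the page goes away or the view is removed.
}

- (void)drawRect:(NSRect)dirtyRect
{
    [[NSColor blueColor] set];     // plain Cocoa drawing, no special plugin rules
    NSRectFill(dirtyRect);
}

@end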

And here we come to the idea for this posting: it seems as though Apple is interested in locking down and rationalizing Safari plugins. It is also reasonable that at some time in the future, they will allow plugins for Safari on the iPad. It is also almost a given that those plugins will not be NPAPI-style plugins, but will instead be UIView or UIViewController derivatives. And at that point, the question becomes: wouldn't the UIView-style plugin, with its simpler and more modern object design, make a cleaner foundation for plugins for Safari on the Mac? Apple already has an iPhone simulator on the Mac. Of course, there would have to be some extension allowing mouse hovering events.

    Advantages
  • Code sharing between Mac and iPad
  • Plugins are closer in scope to iPhone apps than full-blown Mac apps
  • At this point most Objective-C programmers are iPhone developers, so the developer pool would be larger
  • Expands iPhone apps to the Mac
  • iPhone apps are used to being sandboxed
Anyway, just a thought.