Is Apple Crazy to Screw Adobe?


There’s a lot of chatter on the nets about Apple’s new developer agreement, which effectively obliterates the use of third-party tools to make iPhone apps. It says, in a nutshell, that you must use C, C++, Objective-C, and, okay, JavaScript, to write — not ship, but write — your apps. Essentially, they might as well have said "no middleware."

The problem for Apple was that extremely cool packages like Unity3D and Adobe CS5 were poised to make it really easy for developers to write great apps in a safe clean sandbox and press a big orange button to deploy to iPhone.

This was a very clever way around Apple’s previous requirement that all apps be native, as in rock-solid compiled vs. flexibly interpreted/JITed like Flash and Silverlight. But Apple, clearly masochists the lot of them, wanted everyone else to use their circa-1980s-with-GUI-glued-on Xcode development environment too. Or they just wanted to screw their whole ecosystem because they’re too arrogant to fail, right?

Well, unfortunately, that’s not really what’s going on here. Apple isn’t stupid. The problem with CS5 and Unity3D for Apple is that they are inherently cross-platform. The people inside Apple, I’m sure, don’t really care what you use to develop your apps as long as those apps make money (for Apple), sell more iPhones, iPads & iUnderwear, and generally don’t suck (at least in aggregate).

But Apple does care when I write a game that runs as well or better on Android or WinMo7 or Netbook+Flash, with no added effort, and gives users a great set of choices, some of them free. For developers, this is exactly what we want — the most/cheapest outlets for our work. So Apple is forcing developers to choose. "You want our luscious money-printing (for a select few) platform, then we want 100% loyalty."

This is the same reason other companies, ahem, look for games that offer exclusives to their console. Do I like it? As a developer, of course not. As an employee, meh. Not my call. But it does work, more than it doesn’t, and helps build a brand instead of a commodity.

Where it fails, and where Apple is being stupid IMO (and only IMO), is when they no longer have the sexiest hardware or the lion’s share of mindshare, which may be sooner than they think. When someone else has something more appealing, then Apple will be forced to help developers cross-develop onto Apple’s platform, as other companies do now for theirs, and as Apple once did for all sorts of creative apps like Adobe’s.

All of that said, I’m ditching my iPhone anyway — the only reason I got it was because there was nothing better at the time. I have no loyalty to brands, only people.


  1. #1 by ianf on June 15, 2010 - 5:51 am

    I think that, without resorting to your (not illogical) conclusion that Apple wants the best out of its iPhone (now iOS) developers, and therefore forces them to develop for its own unique requirements rather than cross-platform ones, there is one major reason why they had no option but to go that only-Xcode-will-do route. With 2000+ iOS apps submitted for approval every day, and allegedly 95% of these approved within a week, they must’ve developed advanced validation tools to abet that process. That means checking for known API entry points, etc., in the compiled object code as a first step. Had they allowed any language/libs/tools to produce code, they could never have automated such checking. So even if their decision was anchored in “corporate ego,” this need to have the code conform to certain validation parameters trumps everything else.

    • #2 by avi on June 15, 2010 - 6:57 am

      If I use Unity3D, there is compiled object code running native — it’s Unity’s code in this case, and it was probably compiled with Apple’s tools. Unity may also use Mono to compile my “non-native” code into IL, or in this case probably into pure native code, to comply with Apple’s older TOS. In any event, that code sits on an iPhone and can be tested with static analysis, because it must call Apple’s libraries to actually do anything.

      There is no way I can think of to write .Net, Python, Lua or any scripting code that does not eventually result in some compiled native code calling Apple’s APIs. So looking for those entry points, as you say, is no different than if I’d written the application natively but used any sort of data-driven routine.
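To make that static-analysis point concrete, here is a toy sketch of flagging references to non-public entry points. All the symbol names and the allow-list are invented for illustration; a real checker would parse the submitted Mach-O binary’s symbol table rather than take a Python list.

```python
# Toy sketch of automated entry-point checking: collect the external
# symbols a compiled app references, then flag anything outside an
# allow-list of public APIs. Symbol names here are invented.

ALLOWED_APIS = {"UIApplicationMain", "objc_msgSend", "CGContextDrawImage"}

def flag_private_calls(imported_symbols):
    """Return, sorted, any referenced symbols not on the public allow-list."""
    return sorted(set(imported_symbols) - ALLOWED_APIS)

print(flag_private_calls(["UIApplicationMain", "objc_msgSend"]))     # []
print(flag_private_calls(["UIApplicationMain", "_GSEventPrivate"]))  # ['_GSEventPrivate']
```

The check works the same whether the native code was emitted by Xcode or by a middleware toolchain — which is the point being argued above.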

      In fact, let’s point out that most of the “best” apps out there are in some way data-driven, even the ones with top performance and slick UI. Think about any game you’ve ever played that has config files that people can modify. Does Apple require that every IF/THEN statement in a program be hard-compiled, every character’s AI tree or state machine be fully expressed in native code? That would result in worse, not better apps, because the cost of developing such code would be too high, and the bloat of the code would consume available memory.
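As a toy illustration of that data-driven point: the AI behavior below lives entirely in a table, interpreted by a few fixed lines of “native” code. The states and events are invented for this sketch.

```python
# A data-driven character AI: behavior is a transition table the engine
# interprets, not hard-compiled IF/THEN branches. A modder could edit
# this table without any new native code being written.

TRANSITIONS = {
    ("idle", "sees_player"): "chase",
    ("chase", "lost_player"): "idle",
    ("chase", "in_range"): "attack",
    ("attack", "player_fled"): "chase",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged, so bad
    # data degrades gracefully instead of reaching unintended code.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["sees_player", "in_range", "player_fled"]:
    state = step(state, event)
print(state)  # chase
```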

      A simple 2D texture map is a great example of data-driven results. If every texture map were implemented in unique native code, one would need to write code to color every pixel of a triangle, probably separately for every triangle shape and every unique texture. Yet we ship what is essentially a table of color values that hardware can interpret to color a polygon. No one can express a color that is unsupported or crash a computer by their choice of texture values.
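A minimal sketch of that idea, with a made-up 2×2 texture: the texture is just a table, and the fixed sampling code clamps every input, so no choice of texel values or coordinates can reach outside it.

```python
# A texture is a table of color values; fixed code interprets it.
# Inputs are clamped, so no data choice can index out of bounds.
# The 2x2 RGB texture below is invented for illustration.

TEXTURE = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

def sample(u, v):
    """Nearest-neighbor sample; u, v in [0, 1] are clamped then scaled."""
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    x = min(int(u * len(TEXTURE[0])), len(TEXTURE[0]) - 1)
    y = min(int(v * len(TEXTURE)), len(TEXTURE) - 1)
    return TEXTURE[y][x]

print(sample(0.1, 0.1))   # (255, 0, 0)
print(sample(5.0, -3.0))  # wildly out-of-range input is clamped, not a crash
```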

      Similarly for apps, using scripting or IL results in lower-level native code being called. In the case of iPhone apps, even those scripts were generally being converted to native code to comply with the previous TOS. Now, Apple is saying that even the original source language and compiler must be theirs. I say that if this is the only way they can check to ensure people aren’t calling illegal functions, then Apple is probably doing it wrong.

  2. #3 by ianf on June 16, 2010 - 9:12 am

    Fair enough, Avi, I won’t dispute checking compiled object code at such levels of granularity, because that competent I am not. I am merely pointing out that Apple’s changing the ToS in favor of known tools needn’t all be (if at all) for “Xcode-hegemonic” reasons, but may well be due to the practicalities of handling the volume of submitted apps. Logistics, dammit!

    When the SDK and the App Store first appeared 2 years ago, it started with a trickle — which, as I recall, Apple had difficulties handling without delays — and grew to the present 15,000 units/week. In a mere 6 weeks since the iPad went on sale there are already over 10,000 approved apps — I don’t know how many of them were written for and/or take advantage of the iPad’s specific capabilities, but still. That’s quite a volume to “CURATE” no matter how one counts it.

    So, in an attempt to walk back the cat, I asked myself what Apple must have done to ensure the scalable sanity and future dependability of the app-approval process. Even with half the volume, say 1000 apps/day, the workload defies belief. Clearly, the only sane answer lies in automating as much of it as possible. But for that to work, the tests need to be constrained to a specific range of checkable APIs, library subroutines, etc. — such as are native to Apple’s own tools.

    If I read you correctly, you claim that, since at some point all written code needs to latch onto Apple-kosher libraries anyway, the resulting objects will be functionally (if not binarily) equivalent to Xcode’s own output. I don’t think so. If it were that easy to circumvent Apple’s ToS constraints, nobody would make much of a fuss. But I suspect that Apple’s validation controls must be pretty “tuned in” to precisely that, in order to detect, flag, and thus eliminate submissions that clearly were written using other tools and methods. Incidentally, I keep posting questions about Apple’s methodology in various Mac fora without receiving a reliable answer. So there.

    • #4 by avi on June 18, 2010 - 9:52 pm

      I think I get your point, but I don’t think it holds up. Ask yourself, which of the following requires more work to test?

      1. 1000 unique compiled apps a day.
      2. 1000 unique scripted apps which re-use one of ten popular engines, where each such engine was previously exhaustively vetted for its compiled calls into the OS.

      In other words, a big benefit of scripting (to Apple) is that you can’t generally call functions that the scripting engine doesn’t support. That means much less work to validate in terms of OS calls, not necessarily in terms of wacky combinations thereof.
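A toy illustration of that benefit — the engine, its commands, and its API surface below are all invented: a script can only name the calls the vetted engine chose to expose, so validating the engine once bounds what every script running on it can do.

```python
# Sketch of why a vetted scripting engine narrows the validation surface:
# the script is interpreted against an explicit whitelist of wrapped
# calls, and cannot name anything outside it.

SAFE_API = {
    "draw": lambda arg: f"drew {arg}",
    "play_sound": lambda arg: f"played {arg}",
}

def run_script(lines):
    results = []
    for line in lines:
        name, _, arg = line.partition(" ")
        if name not in SAFE_API:
            # A script simply cannot reach an OS call the engine never wrapped.
            raise ValueError(f"unknown command: {name}")
        results.append(SAFE_API[name](arg))
    return results

print(run_script(["draw sprite", "play_sound beep"]))
# run_script(["dlopen /usr/lib/private"]) would raise ValueError
```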

      OTOH, each custom native app can do things like self-modifying code, fetching code from data, exploiting its own crash or buffer-overrun bugs, etc. to hide all sorts of nastiness. Mono (.Net), for example, is extremely robust and is designed to prevent those sorts of exploits through explicit security and code-signing models.
