Have you noticed that lately, there has been a debate ripping at the fabric of our society?
No, not politics: operating systems. Like the infamous Cola Wars, sides have been chosen. Debates rage about which one is better. But far more important than the question of which is better is the question: what about the children? Seriously, why are our schools taking sides in this corporate and technological debate? A number of schools require students not just to acquire a computer, but to buy a specific brand of computer or tablet. The most egregious examples revolve around Apple’s MacBook, but the practice also extends to tablets, both drawing tablets like Wacom’s and multipurpose tablets like the iPad.
At first blush this may seem reasonable. Schools make choices about brands all the time, from the student level in the form of textbook choices, to the school-wide level in the form of multimedia display systems. But the problem is more fundamental than a question of endorsement. At its root, this is a problem of technological choice. As users and consumers, when we make those choices, we solidify or change the trajectory of technologies.
Technologies are not inevitable: the way we use a technology can have a real impact on how it develops. The bicycle is one example. Originally, bicycling was seen as a daredevil sport, but as more people wanted to use the bicycle for transportation, innovations moved it from the traditional penny-farthing design to one with pneumatic tires of equal size. These changes made the bicycle a viable form of transportation, attracting a user base far wider than the young racers. But this moment of technological flexibility is fleeting. Technologies solidify, and when they do, they lock in the value judgments of the users who were engaged and vocal about their requirements.
This is the root problem of the operating system wars, and of the role schools play when they participate in them. The personal computing industry has been developing for approximately 30 years, but it is difficult to say the market is mature, given that prior to 1997 less than a quarter of US households had a computer. Since then that number has climbed to 75% of households. More realistically, then, the personal computing industry has been in a period of significant growth for 16 years. Compare the automobile industry, which began its first period of significant growth after the release of the Model T in 1908: by that measure, we are comparing the personal computers of our time with the cars of 1924. The question then becomes: would it have been appropriate in 1924 to teach students to drive only one specific model of car? Would that knowledge have stood them in good stead throughout the maturation of the automobile? Given that the number of automobile manufacturers in the US dropped from 253 in 1908 to our current 3 major manufacturers, I doubt it.
This problem is not limited to personal computers, because it is really a problem of teaching a specific piece of software rather than a transferable skill. In a graphic design course, is it really appropriate to teach students how to use Photoshop? Or should they be required to demonstrate the skill, regardless of what software they use to do so? In a photography class, we don’t teach students to use a Canon; we teach them photography. In music, we don’t teach students to play a Fender; we teach them to play guitar. This is a significant cultural shift in how we teach students to engage with the world, and yet we appear to be satisfied with the explanation that it is easier to teach everyone on the same technological framework.
In part, I think we are prepared to accept such a simplistic answer because of a profound misunderstanding of the maturity of the current industry. While it may appear to be maturing, it is, in fact, stagnating. With only two large operating systems available, sold by companies with very different approaches, to be satisfied with either is to accept that this industry has matured and that there isn’t room for alternative approaches to compete. And the more we lock students into a choice between one or the other, the less likely it is that an alternative will emerge that can compete.
This argument is equally applicable to Microsoft’s pre-installation agreements. Unlike Apple, which controls its entire product, Microsoft bundles its operating system into the computer you purchase from a separate entity, the manufacturer. This illustrates the difference between the two companies. Apple is, ostensibly, manufacturing a single product, while Microsoft is a software supplier with a Machiavellian spider web of licensing agreements. This alone indicates a significant problem with the argument that the personal computing field has reached a level of maturity that justifies cementing two major players into the field’s future development. Comparing the two major players is like confusing a car manufacturer with a company that builds only engines. What is troubling is that in allowing and perpetuating the idea that there is no third way, we all but guarantee that a nascent industry will conform to a status quo that developed absent critical assessment, inhibiting legitimate competition. In a country that has 145 manufacturers dedicated to toilet paper and related sanitary paper products, is there not a market for more than two commercial operating systems?
Instead of accepting this status quo, we should be taking a cultural stand against it, encouraging users to experiment with different software suites. Our educators and employers should accept work in different software formats – or expect work to be submitted in a format independent of the hardware and software used to create it, such as the Portable Document Format (PDF). Instead of being drafted into a battle between giants, we should be staging a revolution. We need to lay down our arms in the Operating System Wars and join together to demand more from the personal computing industry.