Two Worlds, One Wall, No Sense

Trashy reality TV with a sci-fi twist – what could be better? Turns out, pretty much anything.

I had the opportunity to review the show “Opposite Worlds” on the channel formerly known as “Sci Fi” and now known as Syfy – television for people who can’t spell phonetically.

Let me assure you – this is one of those times when you really only want to read the review. So you should! Check out my review at happynicetimepeople.com

http://happynicetimepeople.com/tomorrows-future-today-yesterday/

Pageant Princesses, Apathy and Crazy Eyes

I have previously alluded to my love of all things trashy reality TV.

Well, this week I get to prove it by reviewing the almost-premiere of the amazingly absurd “Kim of Queens”. If you have not yet seen the amazing Kim Gravel attempt to teach girls to “become ladies”, you should check out my review and then schedule your live-tweet for next week’s episode.

Those that don’t know me

…might not realize that Janet Evanovich is one of my favorite authors. Evanovich’s books are fantastically funny and well written, with outlandish yet somehow believable plots.

And I got a chance to review her latest book “Takedown Twenty” for my friends over at HappyNiceTimePeople!

Malcolm Gladwell and Kelly Clarkson: great thinkers of our generation

This week I had the opportunity to write a review of Malcolm Gladwell’s latest book, “David and Goliath”, for a very funny website: HappyNiceTimePeople.com. It’s a very different style from what I use here, and it’s a little risqué!

Read it here, at your own risk!

Is Dave Eggers Prescient or Just Lucky?

I had the opportunity to review Dave Eggers’s latest book, “The Circle”, for a website that I love: happynicetimepeople.com. The style is very different from what I use here and it’s a little risqué, but for those who aren’t faint of heart, I bring you a semi-snarky review of “The Circle”.

“The Circle” was an interesting journey into the near future, and it makes you wonder – should we worry that Google has part ownership in the first quantum computer?

Update! Or that they bought Boston Dynamics, the creators of BigDog?!

The Great O/S Debate

Have you noticed that lately, there has been a debate ripping at the fabric of our society?

No, not politics: operating systems. Like the infamous Cola Wars, sides have been chosen. Debates rage about which one is better. But far more important than the question of which is better is the question – what about the children? Seriously, why are our schools taking sides in this corporate and technological debate? A number of schools require students not just to acquire a computer, but to buy specific brands of computer or tablet. The most egregious examples revolve around Apple’s MacBook, but the practice also extends to tablets, both drawing tablets like Wacom’s and multipurpose tablets like the iPad.

At first blush, this may seem reasonable. Schools make choices about brands all the time, from textbook choices for individual courses to school-wide decisions like multimedia display systems. But the problem is more fundamental than the question of endorsement. At its root, this is a problem of technological choice. As users and consumers, when we make those choices, we solidify or change the trajectory of technologies.

Technologies are not inevitable: the way we use a technology can have a real impact on how it develops. One example of this can be found in the bicycle. Originally, bicycling was thought of as a daredevil sport, but as more people wanted to use the bicycle for transportation, innovations changed it from the traditional penny-farthing design to a design with pneumatic tires and wheels of equal size. These changes made the bicycle a viable form of transportation, serving a user base far wider than the young racers. But this moment of technological flexibility is fleeting. Technologies solidify, and when they do, they lock in the value judgments of those users who were engaged and vocal about their requirements for use.

This is the root problem of the operating system wars, and of the role schools are playing when they participate in them. The personal computing industry has been developing for approximately 30 years, but it is difficult to say that the market is mature, given that prior to 1997 less than a quarter of US households had a computer. Since then, that number has climbed to 75% of households. So, more realistically, the personal computing industry has been in a period of significant growth for 16 years. When compared to the automobile industry, which began its first period of significant growth after the release of the Model T, we are comparing the personal computers of our time with the cars of 1929. The question then becomes: would it have been appropriate in 1929 to teach students to drive only one specific model of car? Would that knowledge have held them in good stead throughout the maturation of the automobile? Given that the number of automobile manufacturers in the US dropped from 253 in 1908 to our current three major manufacturers, I doubt it.
This problem is not limited to personal computers, because it is really a problem of teaching a specific piece of software rather than a transferable skill. In a graphic design course, is it really appropriate to teach students how to use Photoshop? Or should they be required to demonstrate the skill, regardless of what piece of software they use? In a photography class, we don’t teach students to use a Canon; we teach them photography. In music, we don’t teach students to play a Fender; we teach them to play guitar. This is a significant cultural shift in how we teach students to engage with the world, and yet we appear to be satisfied with the explanation that it is easier to teach everyone using the same technological framework.

In part, I think the reason we are prepared to accept such a simplistic answer is a profound misunderstanding of the maturity of the current industry. While it may appear to be maturing, it is, in fact, stagnating. With only two large operating systems available, sold by companies with very different approaches, to be satisfied with either is to accept that this industry has matured and that there isn’t room for alternative approaches to compete. And the more we lock students into a choice between one or the other, the less likely it is that an alternative will ever be able to compete.

This argument is equally applicable to Microsoft’s pre-installation agreements. Unlike Apple, which controls its entire product, Microsoft bundles its operating system into the computer you purchase from a separate entity, the manufacturer. This illustrates the difference between the two companies. Apple is, ostensibly, manufacturing a single product, while Microsoft is a software supplier with a Machiavellian spider web of licensing agreements. This alone indicates a significant problem with the argument that the personal computing field has reached a level of maturity that justifies cementing two major players into the field’s future development. Even comparing the two major players in the field is the equivalent of confusing a car manufacturer with a company that builds only engines. What becomes troubling is that in allowing and perpetuating the idea that there is no third way, we are almost guaranteed to force a nascent industry to conform to a status quo that developed absent critical assessment, inhibiting legitimate competition. In a country that has 145 manufacturers dedicated to toilet paper and related sanitary paper products, is there not a market for more than two commercial operating systems?

Instead of accepting this status quo, we should be taking a cultural stand against it, encouraging users to experiment with different software suites. Our educators and employers should accept work in different software formats – or expect work to be submitted in a format independent of the hardware and software used in its creation, like the portable document format (PDF). Instead of being drafted into a battle between giants, we should be staging a revolution. We need to lay down our arms in the Operating System Wars and join together to demand more from the personal computing industry.

Mundane Explosions

Last week I had the opportunity to speak about “Spectacular Spectacles” at MCAD. I chose to look at software failure as an example of spectacle. Software often fails spectacularly, which corresponds well with the theme, but there is also an interesting argument to be made about how software fails. Software failures are often the result of the most mundane of mistakes.

Photo credit: Unknown

One such spectacular failure is the explosion of the Ariane 5 rocket.

The rocket was part of a family of rockets intended to carry payloads into orbit.

The specific payload that the Ariane 5 was supposed to launch was Cluster, a set of four research satellites.

A $7 billion joint venture of the European Space Agency (ESA) and the Centre national d’études spatiales (CNES), the rocket took ten years to complete.  It was intended to give Europe an edge in the competitive, private space industry.

On its maiden voyage, the unmanned rocket exploded about forty seconds after lift-off from French Guiana.

The cause of the failure was a software error in the Inertial Reference System. Worse, the failure stemmed from one of the most frequent mistakes made in coding:

When programming languages store data, they store it as a particular type of data. Integers, alphanumeric strings, and Boolean variables are examples of data types.

Numbers can be stored in a variety of ways, but in the specific case of the Ariane 5 there was an error in the way a particular number was being stored. The inertial guidance system used an integer type to store the horizontal velocity. This had worked perfectly well in the previous Ariane 4 rockets. But the Ariane 5 was faster than its predecessors, and the horizontal velocity exceeded 32,767 – the largest value a 16-bit signed integer can hold.* The value should instead have been kept as a floating point value. But floating point values are less efficient than integers, and since the calculation was not considered mission critical, the risky conversion to an integer was overlooked.
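
To make the failure mode concrete, here is a minimal C sketch of the same class of bug. The values are invented for illustration, and the real guidance code was written in Ada (converting a 64-bit floating point value into a 16-bit signed integer), but the narrowing misbehaves in the same way:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical velocity-like readings, for illustration only. */
    int32_t ariane4_reading = 30000;  /* fits comfortably in 16 bits */
    int32_t ariane5_reading = 40000;  /* exceeds 32,767 */

    /* Narrow each value to 16 bits, as the guidance code effectively did. */
    int16_t stored4 = (int16_t)ariane4_reading;
    int16_t stored5 = (int16_t)ariane5_reading;

    /* The first value survives; the second wraps to -25536 on
       typical two's-complement machines. */
    printf("%d -> %d\n", (int)ariane4_reading, (int)stored4);
    printf("%d -> %d\n", (int)ariane5_reading, (int)stored5);
    return 0;
}
```

The Ariane 4-sized value round-trips untouched, while the Ariane 5-sized value silently wraps into nonsense – exactly the kind of quiet corruption that a downstream system can then mistake for valid data.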

This caused an error in the Inertial Reference System. The error message was then interpreted by the on-board computer as flight data, at which point it altered the rocket’s course. Part of the launcher began to disintegrate because of the steep angle of flight, and that disintegration triggered the self-destruct mechanism.

This isn’t an esoteric error in computing: typed variables and the limitations of each type are a first-year programming topic at any university. Nor is this a dig at the engineers: the software for this rocket was sophisticated code, developed over the course of a decade. And the consequences were wildly out of proportion to the mistake – the metaphorical equivalent of your house falling down around you because you accidentally slammed a door. But it is remarkable that such a mundane mistake can have such spectacular consequences.

And this leads us to the point of this post: software fails; it fails frequently, and for easily understandable reasons. How to verify that software works as intended has been one of the foremost research topics in computer science since the mid-1960s. While there has been significant progress, it remains a challenge for the field.

For the public, this should be a growing concern as modern, Western society becomes more and more dependent upon computers and the software that drives them.

*More information on data types

References:

Lions, J. L., et al. (1996). ARIANE 5 Flight 501 Failure: Report by the Inquiry Board. Paris.

Gleick, J. (1996, December 1). A Bug and a Crash: Sometimes a Bug Is More Than a Nuisance. The New York Times.

The Public Trust: What Sesame Street, Honey Boo Boo and the Real Housewives have in common

The kerfuffle over Big Bird that came out of last Wednesday’s presidential debate has been great fodder for the 24-hour news cycle and for comedians, but it seems to be considered only in the context of the two candidates’ differing policies.

In passing, I saw a post on Facebook about the history of TLC and how it changed after privatization. That prompted me to consider the underlying assumptions of this story. When Romney spoke about this, his underlying assumption was that, outside of the addition of commercials, programming would undergo minimal change if PBS were privatized.

“We’re not going to kill Big Bird. But Big Bird is going to have advertisements. Alright?”

The history of TLC really challenges that assumption. TLC was originally “The Learning Channel”. That name was not a marketing tool – in 1972 the channel was launched as a joint venture of the federal Department of Health, Education and Welfare and NASA, and it was provided free using NASA satellites. The channel was privatized in 1980. In the period directly following its privatization, TLC continued to provide educational programming like Ready, Set, Learn and Paleoworld. After changing hands a number of times, TLC ended up housed in Discovery Communications. With increasing pressure on the channel for ratings share, its programming has devolved into sensationalist fodder for the popular culture mill. Case in point – Honey Boo Boo. But this isn’t a criticism of TLC – far be it from me to criticize anyone for watching sensationalistic, voyeuristic crap; I live on that type of programming.

But we also need our broccoli. And, in this analogy, PBS is the broccoli. Public television is a national trust, one that we need to protect. It provides reliable sources that inform our understanding of the world. It helps to create an engaged, informed citizenry. And it uses a modern technological achievement to contribute to the education of children – particularly those in lower-income households who can’t afford cable packages with dedicated child-centric networks. In short, public television raises the bar of social discourse for a very reasonable price.

TLC is not the only example of how programming can decline in the face of constant competition for ratings: Bravo and A&E also gave up their highbrow focus in favor of less educational, more sensationalist programming. And again, this is not a criticism of those choices – I will watch Andy Cohen and the Real Housewives of [any city large enough to house a camera crew] every day of the week. But surely the stories of these channels’ intellectual demise challenge the underlying assumption that privatization would not change the nature of PBS.

You can argue that we don’t need to subsidize educational programming.  I believe we do, but at least that could be a discussion rooted in reality.  What we can’t do is pretend that PBS will survive in its present incarnation if we decide to change its underlying principles.

For more information about the transformation of TLC see this great article at Modern Primate.
For details about the benefits of public broadcasting, see Brown, A. (1996). Economics, Public Service Broadcasting, and Social Values. Journal of Media Economics, 9(1), 3.

How did the Dinosaurs Die?

Image credit: Don Davis, NASA

Since we are celebrating the discovery of a new dinosaur, Pegomastax africanus, I thought about the Alvarez (meteorite impact) theory of dinosaur extinction. The theory was hotly contested in the scientific community, but it has gone on to be almost universally accepted. And it doesn’t just describe the demise of the dinosaurs – the Alvarez team argued that the giant meteorite wiped out half of all species of plants and animals, including the mighty dinosaurs. Before the Alvarez team published their article in Science in 1980, scientists had given little serious consideration to the demise of the dinosaurs.

R. P. Walker, S.T. Smith, and S. M. Smith, USGS
The K-T boundary layer in Caravaca, Spain

The theory stemmed from the work of Walter Alvarez, a geologist, on the clay layer that separates the Cretaceous and Tertiary geologic periods, commonly referred to as the KT boundary. This boundary is approximately sixty-five million years old and is defined by a distinct change in the earth’s flora and fauna: it marks the disappearance from the fossil record of about fifty percent of the Earth’s species at that time. In the mid-1970s, geologists had not yet dated the KT layer precisely, which led to confusion about the amount of time it represented. Alvarez proposed to determine the rate of the clay’s deposition, in a bid to understand the Cretaceous extinctions.

He enlisted the help of his father, the nuclear physicist and Nobel laureate Luis Alvarez. Luis and his associate Frank Asaro decided to measure the amount of iridium in a clay sample from Gubbio, Italy – a location where the KT boundary is exposed. Iridium is a metal found in trace quantities in Earth’s crust, deposited steadily and slowly as meteorite dust. If no iridium was found in the clay layer, the Alvarez team could assume that the layer had been deposited in a relatively short amount of time – less than one thousand years. A trace amount of iridium (approximately 0.03 parts per billion) would suggest that the clay layer had been deposited over a longer period – more than one thousand years. The test results took the Alvarez team by surprise. The sample yielded three parts per billion of iridium – one hundred times the expected trace level. It seemed incredible that such a thin layer could correspond to a period of one hundred thousand years!
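
The inference behind that exclamation is simple proportional reasoning – a back-of-the-envelope sketch using the figures quoted above, assuming iridium settles out at a constant background rate:

\[
\frac{t_{\text{KT layer}}}{t_{\text{baseline}}} \approx \frac{3\ \text{ppb}}{0.03\ \text{ppb}} = 100
\]

So a layer that would represent roughly one thousand years of deposition at trace concentrations appeared instead to represent on the order of one hundred thousand years.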

After waiting nine months for the results of the iridium test, the Alvarez team was finally able to go to work. Asaro tested layers above and below the KT boundary to make sure that the high iridium content was unique to the KT layer. Walter Alvarez looked for other locations where the KT layer was exposed, to make sure that the high iridium content was not an anomaly local to Gubbio. In time, they determined that high levels of iridium were unique to the KT layer and were not confined to the Gubbio sample.

What did the Alvarez team make of these results? While they could have adopted several alternative explanations, they concluded that some extra-terrestrial event had produced the iridium spike and that this event was also the cause of the mass extinction seen in the fossil record. They put these conclusions in the form of a provocative question –

What extra-terrestrial event could have caused the sudden extinction of half the genera on earth, while depositing the tell-tale iridium anomaly?

By 1979, the Alvarez team had definitively settled on the impact of a meteorite the size of Mount Everest.

With this answer in hand, the Alvarez group focused on how a meteorite could have caused the mass extinction – what became known as the killing mechanism. The first mechanism they proposed, inspired by the 1883 eruption of Krakatoa, was that the impact had launched a giant dust cloud into the upper atmosphere. This dust would choke off sunlight, end photosynthesis, and create a deadly rift in the food chain.

However, this was quickly discounted because some of the affected species lived in dark Arctic regions. Instead, impact supporters proposed global wildfires and an ‘impact winter’ (similar to nuclear winter), with accompanying acid rain, which would disrupt the food chain and cause the mass extinction. This killing mechanism was accepted because it explained the selectivity seen in the KT extinction event. While there is still debate, there is by now a broad, if by no means complete, consensus that a meteorite impact is the likely cause of the mass extinction at the KT boundary.

For more information about the demise of the dinosaurs, check out Alvarez’s book, in which he details the many anomalies that supported the theory – including the shocked quartz found at the boundary. Walter Alvarez (1997). T. rex and the Crater of Doom. Princeton University Press, Princeton, NJ.