That was the profound question asked by a skinny freshman in my English classroom eight years ago. His query was prompted by a new directive from Pierre: our ninth-graders would take their state-mandated writing test online. Rather than penciling essays for well-indoctrinated moonlighting English teachers to grade, the students would type their essays online, and a computer program would instantly assign numerical scores representing the quality of their writing. That year, the essay type was persuasive, so the implication was that the computer was now able to assess the persuasive quality of student writing.
My student posed a perfectly logical question, and we started looking for a logical answer. The result was this essay, in which a little research and reverse-engineering showed what one would expect: the computer had no concept of the actual persuasive quality of writing. The software the state purchased could only turn words into numbers, counting word frequencies and other interesting statistical data about each essay. Students could mash key topic words into Yoda-like sentences of consciously varied length, infused with random prepositions, sporadic unusual big words, and occasional semi-colons and dashes, and ace the test.
For example, the above paragraph might score just as highly as the following string:
Computer concept persuasive a writer mash unusual sporadic could really fly. Frequencies word intelligent with the quality of logical freedom only turn my result fruitless; never will show the actual words to a big persuasive computer. Quality arises? Certainly. Again concept consciously varied semi-colons—exception to the rule!—can better convince computer that this essay by macaroni rocks, though state money of software the purchase replace Pogany hilarious would be. Random, infused, yet persuasive writing beats essay actually composing understand the readers will not, but bean-counters of souls of students cog-in-the-machine always souls of students degradation to data cold digital.
The state didn't run the computer-scored essay test again while I was teaching. But the Department of Education has apparently found a better algorithm and is returning to computer-scored essays. Kids, start your writing engines!
As state director of assessment Wade Pogany notes, the computerized writing tests do offer some advantages. Practically speaking, teachers can derive some data about student spelling, grammar, and word choice. Where human-scored essays have to be read and returned by hand, the computer scores essays immediately, meaning teachers can use the software over and over to gather data about their students' writing and help them improve (the fancy term: "formative assessment").
And of course, the testing companies can increase their profit margins by decreasing their labor costs. They pay a panel of writing experts to score maybe several hundred sample essays, then pay a few computer geeks to run statistical analyses of those human scores and the correlated quantitative linguistic features in the essays. After that, the testing companies don't have to pay anyone but tech support and the marketing people who convince state departments of education to spend our tax dollars on this soul-numbing technology.
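To see how little machinery that racket takes, here's a rough sketch in Python. The feature set and the one-variable least-squares fit are my illustrative guesses at the kind of surface counting such software does, not the vendor's actual model:

```python
import re

def shallow_features(essay):
    """Count the sort of surface statistics an automated scorer sees."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "sentence_count": len(sentences),
        "long_words": sum(1 for w in words if len(w) >= 7),  # "unusual big words"
        "semicolons": essay.count(";"),
    }

def fit_line(xs, ys):
    """Toy 'training' step: ordinary least squares on one feature,
    fitting invented human scores to invented feature values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx
```

Feed a few hundred human-scored sample essays through something like this and you have a "writing assessor" that never reads a word.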
So remember, kids: you're not writing for humans anymore. Your words aren't art. Play the game, pass the test... and heaven help you if you ever need to express your creativity.
Sentences per paragraph: 4.5
Words per sentence: 19.2
Characters per word: 5.4
Passive sentences: 3%
Flesch Reading Ease: 41.6
Flesch-Kincaid Grade Level: 12.0
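Those readability figures, incidentally, come from the standard Flesch formulas, which are themselves just more counting: words per sentence and syllables per word plugged into fixed coefficients. A minimal version:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Standard Flesch Reading Ease; higher scores mean easier reading."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Standard Flesch-Kincaid Grade Level (approximate U.S. school grade)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
```

For example, a 100-word, 5-sentence, 150-syllable passage scores a Reading Ease of 59.635 and a grade level of 9.91.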