PROFESSORS at two United States universities say that, after ten years of research, they have perfected technology that can accurately evaluate student essays.
The Intelligent Essay Assessor uses mathematical analysis to measure the knowledge content of essays by counting the words related to the topic and by comparing the text to a sample paper written or graded by a human expert.
"The program has perfect consistency, an attribute that human graders almost never have," said Darrell Laham, a doctoral student at the University of Colorado who helped invent it. Also, he said, it does not get bored, rushed, sleepy, impatient or forgetful.
Researchers tested a version of the program last autumn on students at New Mexico State University who submitted essays to a computer and immediately received the estimated grade and suggestions for improvements.
"The students' essays all improved with each revision," said Peter Foltz, an assistant professor of psychology at New Mexico State who also helped develop the technology. And when the 500 students involved in the experiment were given the choice of having later essays graded by a human, all chose the computer.
Essay exams provide a better assessment of students' knowledge than other types of tests, according to most teachers and professors, but are considerably more difficult and time-consuming to grade.
The essay assessment software relies on latent semantic analysis, a type of artificial intelligence. Background text about a topic is entered, and the program builds a statistical model of how words in that material relate to one another, which it then uses to judge new essays.
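The article does not disclose the Intelligent Essay Assessor's internals, but the general shape of latent semantic analysis is well documented: build a term-document matrix from background text, reduce it with a truncated singular value decomposition, and compare essays by the angle between their vectors in the reduced space. The sketch below illustrates that idea only; the corpus, essays, and dimension count are invented for the example and are not from the researchers' system.

```python
# Minimal sketch of LSA-style essay comparison. Illustration only:
# the corpus, essays, and k=2 dimensions are invented assumptions,
# not the actual Intelligent Essay Assessor.
import numpy as np

def build_vocab(docs):
    """Map each distinct lowercase word to a row index."""
    words = sorted({w for d in docs for w in d.lower().split()})
    return {w: i for i, w in enumerate(words)}

def term_doc_matrix(docs, vocab):
    """Raw word-count matrix: rows are terms, columns are documents."""
    m = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.lower().split():
            if w in vocab:
                m[vocab[w], j] += 1
    return m

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Invented background corpus: three documents on the essay topic,
# two on an unrelated topic, so the latent dimensions separate them.
docs = [
    "plants use sunlight to make food",
    "photosynthesis turns sunlight into energy for plants",
    "chlorophyll absorbs sunlight in plant leaves",
    "the stock market rose on strong earnings",
    "investors traded shares on the market",
]
vocab = build_vocab(docs)
A = term_doc_matrix(docs, vocab)

# Truncated SVD: keep the k strongest latent dimensions.
u, s, _ = np.linalg.svd(A, full_matrices=False)
u_k = u[:, :2]

def essay_vec(text):
    """Project an essay's word counts into the latent space."""
    return u_k.T @ term_doc_matrix([text], vocab)[:, 0]

# Compare student essays to a human-written reference essay.
reference = essay_vec("photosynthesis lets plants use sunlight and chlorophyll to make energy")
score_on  = cosine(reference, essay_vec("plants make food from sunlight using chlorophyll"))
score_off = cosine(reference, essay_vec("investors traded stock shares for earnings"))
```

In this toy setup the on-topic essay lands near the reference in the latent space while the off-topic one does not, so `score_on` exceeds `score_off`. A real system would train on far more text and weight the counts (for instance with log-entropy weighting) before the decomposition.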
The inventors concede that someone could include all the right words in random order and get a good grade, but say there are safeguards to alert a human supervisor to unusual sentence structure. Besides, they say, listing all the words requires that a student learn them in the first place.
"The easiest way to cheat this system is to study hard, know the material and write a good essay," said Thomas Landauer, a University of Colorado psychology professor who was the third member of the research team.