Judging educational software

I have been an educator, I am an academic researcher at heart, and I have spent the last three years running an educational software company, so I wear three sets of interchangeable glasses when assessing the quality of educational software.
Having toiled in the belly of the beast, I know all too well the compromises that must be made to get a product out the door, but I am first and foremost a cognitive scientist who understands what good pedagogy looks like. What parents, teachers, and administrators should ask before investing in educational software is this:
Is the pedagogy strong?
In other words, has the product been designed to use the strengths of a computer to maximize the user’s learning? There is a productive intersection between what computers do well and what learners need. Before discussing this intersection, I need to define the term “generativity.”
In the educational arena, generativity means that the user has generated a response that required cognitive effort – in general, the more effort, the better the learning. Research dating back to the 1960s shows that students who create their own word-pair antonyms or their own summaries of texts recall those word pairs or texts significantly better than students who merely read or heard them.
One of the goals of good software should be to make users generate responses, not allow them to default to a one-out-of-four guessing game. Obviously, I do not consider multiple-choice questions to be particularly generative; I consider the creation of constructed responses (writing free text) to be generative. And, yes, computers are now able to score constructed responses with an accuracy that rivals professional human scorers. Generativity leads to deeper processing, and deeper processing leads to better learning.
Optimal learning is also predicated on timely feedback. Learners need fast feedback on the quality of what they have generated.
Where learning and computers intersect
The following list is not exhaustive, but to learn, people need certain things, such as:
• To practice until mastery.
• Sensitive, ascending levels of difficulty.
• Immediate feedback in learning and post-assessment.
• Appropriate, well-timed rewards.
• And here’s that term again, generativity.
The things computers do well include:
• Practice.
• Individual placement at levels of difficulty as fine-grained as the designer can make them.
• Immediate feedback.
• Games as rewards.
• Facilitating certain types of generativity.
• Scoring from a rubric. (Whenever I speak of computer scoring, I mean scoring that happens automatically and in real time, which is the most meaningful window for the learner.)
The things computers don’t do well are:
• Engage thoroughly unmotivated users.
• Personalize feedback at an emotional level. (Actually, they don’t do anything at an emotional level, which is especially devastating for the unmotivated).
• Score unconstrained tasks. (The more generative the task, the worse the computer will be at scoring it).
The last point is important. There is a trade-off between generativity and automated scoring, and I’ll use the verbal domain as an example. A computer will always do a lousy job of analyzing and scoring the quality of a poem. Computational linguistics can approximate semantics or “meaning” to a degree, but computers never really “get it.” (See my earlier lament about emotion.)
This is because our “wetware” is not predicated on binary-coded symbol manipulation. Software currently exists that does a very nice job of scoring short answers in a constrained space, but a smart-aleck kid intent on flummoxing the system can wreak havoc.
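To make that trade-off concrete, here is a minimal, hypothetical sketch, written in Python and not drawn from any actual product, of how a constrained short-answer scorer might work: it compares a student’s response against teacher-written reference answers using simple word overlap. The function names, the reference answer, and the similarity measure are all assumptions for illustration; real systems use far richer computational-linguistic models, but even they can be gamed, as the “keyword salad” example shows.

```python
def tokenize(text: str) -> set[str]:
    """Lowercase a response and split it into a set of word tokens."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())


def score_short_answer(response: str, reference_answers: list[str]) -> float:
    """Return the best word-overlap (Jaccard) similarity, from 0.0 to 1.0,
    between the student's response and any of the reference answers."""
    response_tokens = tokenize(response)
    best = 0.0
    for reference in reference_answers:
        reference_tokens = tokenize(reference)
        union = response_tokens | reference_tokens
        if not union:
            continue
        overlap = response_tokens & reference_tokens
        best = max(best, len(overlap) / len(union))
    return best


if __name__ == "__main__":
    references = [
        "Plants use sunlight, water, and carbon dioxide to make glucose and oxygen."
    ]
    honest = "Plants make glucose and oxygen from sunlight, water, and carbon dioxide."
    keyword_salad = "sunlight water carbon dioxide glucose oxygen plants"

    print(score_short_answer(honest, references))         # a genuine answer scores high
    print(score_short_answer(keyword_salad, references))  # but so does an ungrammatical word dump
```

The point is not the arithmetic; it is that any scorer working in a constrained space rewards surface features, which is why the smart-aleck can wreak havoc and why producers must be modest about where they deploy automated scoring.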
That said, software that scores constructed responses is extremely valuable because it forces users to be generative. Software producers just need to be careful about the promises they make and the topics to which they apply automated scorers. This is one of the reasons we chose to apply our short-answer scoring system to science topics.
What should you look for when purchasing educational software? Listed below are three sites that evaluate educational software. Great strides have been made in the usability realm, so let’s assume that the big companies have figured out how to make the upload, navigation, and tech support processes friendlier. These sites are very practical, so I would encourage thoughtful buyers to place the following questions at the top of their criteria:
• Does it encourage generativity?
• Will it automatically score along the way?
• Will it give the user feedback during the learning process based on those scores?
• Will it engage at an individual-appropriate level?
This is a subjective call, and it encompasses several attributes, including the level of difficulty and rewards. You want software to be difficult enough to require cognitive effort, but amenable enough to allow for completion and a sense of mastery and fulfillment. You’ll have to use your best judgment here, but if the entire class groans when the teacher asks students to log on, engagement has not been achieved.
And one more bit of advice: Always go for the Web-based version because it means immediate updates and bug fixes, and the kids can use it at home. Here, then, are the sites that evaluate educational software.
California Learning Resource Network – The most thorough site, but very California-centric. It includes description, standards, profiles, abstracts, additional features, and reviewer notes.
Learning Village – Reviewers are teachers, university instructors, and educational professionals who believe that good, quality educational software, matched with the right person at the right time, can be “magical.”
They pick the best of the lot and then review it on a 4.0-to-5.0 scale; the hard part is deciding what to make of software that is not listed. Was it poor software, or has it simply not been reviewed yet? The assessment includes a description, commentary, who it’s for, why they selected it, and computer requirements.
Superkids – Superkids includes a 1-to-5 rating in three categories: educational value, kid appeal, and ease of use and installation. It also includes a “Bottom line” write-up.

Mina C. Johnson-Glenberg, Ph.D., is president of The NeuronFarm, LLC, and an Honorary Fellow at The Waisman Center at the University of Wisconsin-Madison. She can be reached at mina@neuronfarm.com.