Balancing human-human and human-computer interaction

A fundamental challenge in implementing personalized learning is determining just how much of it should be personal—or interpersonal, to be more specific. Carlo Rotella highlights the tension between the customization afforded by technology and the machine interface needed to collect the data supporting that customization. He zeroes in on the crux of the problem thus:

For data to work its magic, a student has to generate the necessary information by doing everything on the tablet.

That invites worries about overuse of technology interfering with attention management, sleep cycles, creativity, and social relationships.

One simple solution is to treat the technology as a tool secondary to the humans interacting around it, with expert human facilitators knowing when and how to turn the screens off and refocus attention on the people in the room. As with any tool, recognizing when it is hindering rather than helping will always remain a critical skill in using it effectively.

Yet navigating the human-to-data translation remains a tricky concern. In some cases, student data or expert observations can be coded and entered into the database manually, where the effort is worthwhile. Wearable technologies (e.g., Google Glass, Mio, e-textiles) seek to shorten the translation distance by integrating sensory input and feedback more seamlessly into the environment. Electronic paper, whiteboards, and digital pens provide alternate data capture methods through familiar writing tools. While these tools bring the technology closer to the human experience, they require more analysis to convert the raw data into manipulable form, and they raise the question of whether the answer to too much technology is still more technology. Instructional designers will always need to weigh the cost-benefit equation: when is intuitive human observation and reflection superior, and when is technology-enhanced aggregation and analysis?
What should we assess?

Some thoughts on what tests should measure, from Justin Minkel:

Harvard education scholar Tony Wagner was quoted in a recent op-ed piece by Thomas Friedman on what we should be measuring instead: “Because knowledge is available on every Internet-connected device, what you know matters far less than what you can do with what you know. The capacity to innovate—the ability to solve problems creatively or bring new possibilities to life—and skills like critical thinking, communication and collaboration are far more important than academic knowledge.”

Can we measure these things that matter? I think we can. It’s harder to measure critical thinking and innovation than it is to measure basic skills. Harder but not impossible.

His suggestions:

For starters, we need to make sure that tests students take meets [sic] three basic criteria:

1. They must measure individual student growth.

2. Questions must be differentiated, so the test captures what students below and above grade-level know and still need to learn.

3. The tests must measures [sic] what matters: critical thinking, ingenuity, collaboration, and real-world problem-solving.

Measuring individual growth and providing differentiated questions are obvious design goals for personalized assessment. The third criterion—measuring what matters—remains a challenge for assessment design all around.