Is adaptivity a qualitative or quantitative problem?

One common criticism of adaptive learning is that, by tailoring instruction so closely to students’ needs, it doesn’t challenge them enough. As James Paul Gee puts it:

People who never confront challenge and frustration, who never acquire new styles of learning, and who never face failure squarely may in the end become impoverished humans. They may become forever stuck with who they are now, never growing and transforming, because they never face new experiences that have not been customized to their current needs and desires.

While I agree that the dangers he describes are real, I question the causal attribution.

First, adaptive learning systems that indulge in too much customization may instead be guilty of relying on a too-narrow prescription for the student’s “zone of proximal development (ZPD)”. Individualized learning does not require giving only incremental steps; it can (and should) include more ambitious steps to occasionally challenge students, perhaps just beyond their conventional ZPD (or at the limits of their ZPD when defined by “lots of help”). Students need to struggle—manageably—as part of their learning. Adapting to students’ needs can include optimizing the nature and amount of that struggle based on past experiences and future expectations.

Second, this can also be overcome by building a certain amount of variability into the system, for the sake of both the students and the system. Occasionally presenting students with problems that may or may not lie within their ZPD can help them learn “what to do when you don’t know what to do” (in the words of a dear colleague of mine, Joe Wise). Whether framed as desirable difficulties, germane cognitive load, preparation for future learning, or the development of adaptive expertise rather than just routine expertise, unexpected challenges can offer invaluable learning opportunities. Further, adaptive learning systems need to reach beyond what is already known in order to improve themselves. A truly intelligent system should be discovering new knowledge about its particular learners and even about learning in general. The possible paths a student might take are infinite, and the system’s designers don’t know what’s best—only what tends to be better compared to other paths that have already been examined. That is, adaptive learning must itself be an adaptive learner.
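To make the exploration idea concrete, here is a minimal sketch (in Python) of a problem selector that mostly targets the student’s estimated ZPD but occasionally reaches beyond it. Every name and parameter here is a made-up assumption for illustration, not a description of any actual system:

    import random

    STRETCH_RATE = 0.15  # assumed fraction of problems drawn from beyond the ZPD

    def select_problem(problems, mastery, zpd_width=0.2):
        """Pick the next problem given a mastery estimate in [0, 1].

        problems: list of (problem_id, difficulty) pairs, difficulty in [0, 1].
        """
        in_zpd = [p for p in problems if mastery < p[1] <= mastery + zpd_width]
        beyond = [p for p in problems if p[1] > mastery + zpd_width]

        # Mostly exploit the ZPD, but occasionally explore beyond it, so the
        # student practices handling the unexpected and the system gathers
        # evidence about paths it has not yet examined.
        if beyond and (random.random() < STRETCH_RATE or not in_zpd):
            return random.choice(beyond)
        return random.choice(in_zpd) if in_zpd else random.choice(problems)

The stray draws from beyond the ZPD serve both purposes at once: desirable difficulty for the student and fresh data for the system.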

Both of these issues point to a quantitative problem due to adapting too narrowly or too often. The deeper question is whether adaptivity is a fundamental, qualitative problem: Does having any adaptivity at all invite complacency among students accustomed to having their learning experiences at least partly tailored to their needs? Given the well-established importance of scaffolding instruction according to students’ needs, I would argue that adaptive learning is a valuable tool not simply for accelerating but also for enriching instruction.


Personalized instruction: The other half of personalized learning

As I explained in a previous post on personalized learning, one important way personalized learning goes beyond merely adaptive learning is by personalizing the experience on the instructional side, not just the learner’s side. Amidst all the excitement about adaptive learning, teachers remain an often-forgotten yet crucial part of the equation. Well-designed personalization takes advantage of the human intelligence embedded in expert instructors, including opportunities for them to exercise their professional judgment in deciding which activities will work best for their students given their particular contexts and constraints.

This EdSurge report on Rocketship’s upcoming changes (“New model attempts to bring teachers closer to students’ online learning experience”) describes returning some classroom control to the teacher:

Rocketship’s new model will shift focus from running purely adaptive programs, to using programs that give teachers greater control over content that gets assigned.

What this highlights is the need for personalized learning programs to identify when to allocate decisions to teachers (possibly with recommendations among which to choose) and when to adapt the students’ learning experience immediately, without waiting for additional human input. While this depends in part on the professional knowledge of the instructors implementing the system, some decisions may be straightforward or simple enough to automate. Decisions best left to expert human intervention are likely to be more complex, to depend on more contingencies, to require interpersonal contact, or to carry more uncertainty about their effectiveness. Where that balance lies is subject to continual readjustment, but since there are always unknowns and since social interaction is fundamental to the human experience, there will always remain a need for personalization.
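As a hedged sketch of that allocation, consider routing each decision by its complexity, uncertainty, and interpersonal nature; the fields and thresholds below are invented for this example, not any real product’s design:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        complexity: float    # how many contingencies it depends on (0 to 1)
        uncertainty: float   # how unsure we are of its effectiveness (0 to 1)
        interpersonal: bool  # does it require human contact?

    def route(decision, recommendations):
        """Automate simple decisions; refer complex ones to the teacher."""
        if (decision.interpersonal
                or decision.complexity > 0.7
                or decision.uncertainty > 0.5):
            # Defer to professional judgment, offering ranked suggestions.
            return ("teacher", recommendations)
        # Straightforward enough to adapt immediately, without waiting.
        return ("system", recommendations[:1])

Even in the automated branch, keeping the recommendation list around preserves a record the teacher can review later.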

MOOCsperiments: How should we assign credit for their success and failure?

That San Jose State University’s Udacity project is on “pause” due to comparatively low completion rates is understandably big news for a big venture.

We ourselves should take pause to ponder what this means, not just regarding MOOCs in particular, but regarding how to enable effective learning more broadly. The key questions we need to consider are whether the low completion rates come from the massive scale, the online-only modality, the open enrollment, some combination thereof, or extraneous factors in how the courses were implemented. That is, are MOOCs fundamentally problematic? How can we apply these lessons to future educational innovation?

Both SJSU and Udacity have pointed to the difficulties of hasty deployment and of starting with at-risk students. In an interview with MIT Technology Review, Thrun credits certificates and student services with helping to boost completion rates in recent pilots, while noting that inflexible course length can impede some students’ completion. None of these factors is inherent to the MOOC model, however; face-to-face and hybrid settings experience the same challenges.

As Thrun also points out, online courses offer some access advantages for students who face geographic hurdles in attending traditional institutions. Yet in their present form, they only partly take advantage of the temporal freedom they can potentially provide. While deadlines and time limits may help to forestall indefinite procrastination and to maintain a sense of shared experience, they also interfere with realizing the “anytime, anywhere” vision of education that is so often promoted.

But the second half of “easy come, easy go” means that easy online access also makes persistence harder. When massive-scale participation exacerbates student anonymity, no one notices if you’re absent or falling behind. While improved student services may help, there remain undeveloped opportunities to change the model of student interaction so that it ramps up the role of the person, requiring more meaningful contributions and individual feedback. By drawing from a larger pool of students who can interact across space and time, massive online education has great untapped potential for pioneering novel models of cohorting and socially situated learning.

Online learning can also harness the benefits of AI in rapidly aggregating and analyzing student data, where such data are digitally available, and adapting instruction accordingly. This comes at the cost of either providing learning experiences in digital format or converting the data to digital format. It is a fundamental tension that all computer-delivered education must continually revisit as technologies and analytical methods change, as access to equipment and network infrastructure changes, and as interaction patterns change.

The challenges of open enrollment, particularly at massive scale, replay the recurring debates about homogeneous tracking and ability-grouping. This is another area ripe for development, since students’ different prior knowledge, backgrounds, preferences, abilities, and goals all influence their learning, yet students benefit from some heterogeneity. Here, the great variability in possible outcomes magnifies the stakes of design: compare the consequences of throwing together random collections of people without much support with those of constraining group formation within limits on homogeneity and heterogeneity while instituting productive interaction norms.

As we all continue to explore better methods for facilitating learning, we should be alert to the distinction between integral and incidental factors that hinder progress.

Expensive assessment

One metric for evaluating automated scoring is to compare it against human scoring. For some domains and test formats (e.g., multiple-choice items on factual knowledge), automation has an accepted advantage in objectivity and reliability, although whether such questions assess meaningful understanding is often debated. With more open-ended domains and designs, human reading is typically considered superior, allowing room for individual nuance to shine through and get recognized.

Yet this exposé of some professional scorers’ experience reveals how even that cherished human judgment can get distorted and devalued. Here, narrow rubrics, mandated consistency, and expectations of bell curves valued sameness over subtlety and efficiency over reflection. In essence, such simplistic procedures reduced scoring to reverse-engineering cookie-cutter essays that all had to fit one of six rubric categories, differing details be damned.

Individual algorithms and procedures for assessing tests need to be improved so that they can make better use of a broader base of information. So does a system which relies so heavily on particular assessments that the impact of their weaknesses can get magnified so greatly. Teachers and schools collect a wealth of assessment data all the time; better mechanisms for aggregating and analyzing these data can extract more informational value from them and decrease the disproportionate weight on testing factories. When designed well, algorithms and automated tools for assessment can enhance human judgment rather than reducing it to an arbitrary bin-sorting exercise.
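As one illustration of such a mechanism, a simple weighted aggregate across many routine assessments keeps any single test from dominating. The weights here are assumptions for illustration; a real system would need validated measurement models:

    def composite_score(assessments):
        """Aggregate (score, weight) pairs so no single test dominates.

        score is on a 0-100 scale; weight might reflect reliability or recency.
        """
        total_weight = sum(w for _, w in assessments)
        if total_weight == 0:
            raise ValueError("no weighted evidence to aggregate")
        return sum(s * w for s, w in assessments) / total_weight

    # Hypothetical example: a high-stakes exam (weight 1.5) no longer
    # outweighs a term's worth of quizzes and projects combined.
    evidence = [(78, 1.0), (85, 1.0), (90, 1.0), (72, 1.5)]
    print(composite_score(evidence))  # about 80.2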

Individualized instruction as a subset of personalized learning

David Warlick muses on the distinction between individualized instruction and personalized learning, noting that the former is decreasing while the latter is increasing in popularity, according to Google Trends. As he summarizes:

Personalized learning, in essence, is a life-long practice, as it is for you and me, as we live and learn independent of teachers, textbooks, and learning standards.  Individualized instruction is more contained.

Part of me is tempted to wonder what a word-cloud analysis would reveal as the key differences between how the two phrases get used. Absent such an analysis, I would focus on the two dimensions highlighted by the words themselves: personalized vs. individualized, and learning vs. instruction. The latter distinction is quite straightforward, with instruction emphasizing what others do to the student and learning emphasizing what the student does to learn.

The former distinction highlights the learner as a person, not merely an individual. As articulated in my earlier post explaining personalized learning, the core of personalization is the role of the learner as an intelligent and social person making choices for herself and interacting with others in order to learn. I would thus add to Warlick’s matrix, under “student’s role,” an explicit expectation for the student to direct her own learning and collaborate with and challenge fellow learners in making sense of the world. Warlick already emphasizes the role of the teacher’s expertise in deciding how to craft the learning environment; here, under “teacher’s role,” I would also add the responsibility to create and guide learning experiences within social settings. This highlights the importance of how students learn from communicating and collaborating with each other in an environment that truly recognizes them as intelligent, interdependent people.

Alternate models for structuring learning interactions

Timothy Chester ponders the power of many-to-many peer networks in facilitating learning:

If there is to be a peer-based, many-to-many collaborative structure ensuring rigor and the mastery of learning outcomes, it must also be deemed authoritative and persuasive by participants. Some ways to ensure authority and persuasiveness might include the following:

  1. The teacher must drive the collaboration. While teachers engaged in many-to-many relationships with students are not the authoritative center of the collaboration, they are responsible for structuring the student experience and stewarding the learning processes that occur.
  2. The collaboration has to be bounded by a mutually agreed upon scope and charter. Compared to traditional one-to-many collaborations, many-to-many forms can appear chaotic or disorganized. In order to drive effective learning, many-to-many collaborations must operate within a set of boundaries – those things we might define as learning objectives, outcomes, standards, or rubrics. As steward of the learning process, the teacher must take responsibility for structuring the learning collaboration within a set of consistent and firm boundaries that include these structures.
  3. There must be incentives for full student participation. Critics of peer grading systems in MOOCs note that such interactions by students many times lack significant investment of time and focus – resulting in peer feedback that is spurious. Both the quality and the quantity of peer feedback within a many-to-many system have to be statistically significant in order to avoid such spuriousness.

There are many models of such networks in both formal and informal learning settings: peer review systems (e.g., Calibrated Peer Review, SWoRD peer review, Expertiza), tutoring and peer learning communities (e.g., Grockit, P2PU, Khan Academy, OpenStudy), Q&A / discussion boards (e.g., StackOverflow), online communities (e.g., DIY, Ravelry), and wikis. The challenge for formal learning environments is to foster and nurture the kind of authentic, meaningful social interactions that emerge from sustained interaction within informal communities, in the context of the top-down and often short-lived peer experiences typically associated with school classes. Yet for personalized learning to succeed on a large scale, it needs to solve this problem effectively, so that learners are not isolated but can benefit from each other’s presence, support, errors, and wisdom.

Standardized tests as market distortions

Some historical context on how standardized tests have affected the elite points out how gatekeepers can magnify the influence of certain factors over others, whether through chance or through bias:

In 1947, the three significant testing organizations, the College Entrance Examination Board, the Carnegie Foundation for the Advancement of Teaching and the American Council on Education, merged their testing divisions into the Educational Testing Service, which was headed by former Harvard Dean Henry Chauncey.

Chauncey was greatly affected by a 1948 Scientific Monthly article, “The Measurement of Mental Systems (Can Intelligence Be Measured?)” by W. Allison Davis and Robert J. Havighurst, which called intelligence tests nothing more than a scientific way to give preference to children from middle- and upper-middle-class families. The article challenged Chauncey’s belief that by expanding standardized tests of mental ability and knowledge America’s colleges would become the vanguard of a new meritocracy of intellect, ability and ambition, and not finishing schools for the privileged.

The authors, and others, challenged that the tests were biased. Challenges aside, the proponents of widespread standardized testing were instrumental in the process of who crossed the American economic divide, as college graduates became the country’s economic winners in the postwar era.

As Nicholas Lemann wrote in his book “The Big Test,” “The machinery that (Harvard President James) Conant and Chauncey and their allies created is today so familiar and all-encompassing that it seems almost like a natural phenomenon, or at least an organism that evolved spontaneously in response to conditions. … It’s not.”

As a New Mexico elementary teacher and blogger explains:

My point is that test scores have a lot of IMPACT because of the graduation requirements, even if they don’t always have a lot of VALUE as a measure of growth.

Instead of grade inflation, we have testing-influence inflation, where the impact of certain tests is magnified beyond that of other assessment metrics. It becomes a kind of market distortion in the economics of test scores, where some measurements are more visible and assume more value than others, inviting cheating and “gaming the system”.

We can restore openness and transparency to the system by collecting continuous assessment data that assign more equal weight across a wider range of testing experiences, removing incentives to cheat or “teach to the test”. Adaptive and personalized assessment go further in alleviating pressures to cheat, by reducing the inflated number of competitors against whom one may be compared. Assessment can then return to fulfilling its intended role of providing useful information on what a student has learned, thereby yielding better measures of growth and becoming more honestly meritocratic.

Distinguishing MOOCs from OER

Stanford mathematics professor Keith Devlin suggests that we should drop MOOCs and focus on MOORs (massively open online resources) or OERs (open educational resources):

no single MOOC should see itself as the primary educational resource for a particular learning topic. Rather, those of us currently engaged in developing and offering MOOCs are, surely, creating resources that will be part of a vast smorgasbord from which people will pick and choose what they want or need at any particular time.

Yet even if current MOOCs follow a mediocre model for structuring learning experiences, they do still attempt to meet a need for learners who seek guidance, structure, and social cohorting for the way they access educational resources. I would be interested in decoupling OERs from MOOCs and similar pathways, in order to broaden the scope of available OERs from which anyone can choose. That opens up possibilities for more innovative approaches to enabling diverse learning paths and cohorting models.

Learner, Know Thyself

As “Big Data” loom larger and larger, the value of owning your own data likewise increases. Learners need access to all of their prior educational data, just as patients need access to all of their prior medical records, especially as they move between multiple providers and change over time. Rather than locking up valuable information in the hands of individual organizations with their own proprietary or idiosyncratic institutional habits, putting data in learners’ hands lets them share it with new educational providers to analyze.

Putting data back in the learners’ hands also empowers them to act as their own student-advocates, not just recognizing patterns in when they are learning more effectively (or less), but having the evidence to support their position. With accurate self-assessment and self-regulated learning becoming increasingly important goals in education these days, having students take literal ownership of their own learning and assessment data can help them make progress toward those goals.
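As a sketch of what literal ownership might look like, imagine a portable record the learner exports and carries between providers; the schema below is hypothetical, invented purely for illustration:

    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class LearnerRecord:
        learner_id: str
        events: list = field(default_factory=list)  # scores, attempts, etc.

        def export(self):
            """Serialize the record so the learner can take it anywhere."""
            return json.dumps(asdict(self), indent=2)

    record = LearnerRecord("learner-123")  # hypothetical identifier
    record.events.append({"source": "algebra-course", "item": "quiz-4", "score": 0.9})
    print(record.export())  # the learner, not the provider, holds this copy

The point of the export method is that the copy travels with the learner, not with any one institution’s database.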

Beating cheating

Between cheating to learn and learning to cheat, current discourse on academic dishonesty upends the “if you can’t beat ’em, join ’em” approach.

From Peter Nonacs, UCLA professor teaching Behavioral Ecology:

Tests are really just measures of how the Education Game is proceeding. Professors test to measure their success at teaching, and students take tests in order to get a good grade.  Might these goals be maximized simultaneously? What if I let the students write their own rules for the test-taking game?  Allow them to do everything we would normally call cheating?

And in a new MOOC titled “Understanding Cheating in Online Courses,” taught by Bernard Bull at Concordia University Wisconsin:

The start of the course will cover the basic vocabulary and different types of cheating. The course will then move into discussing the differences between online and face-to-face learning, and the philosophy and psychology behind academic integrity. One unit will examine the best practices to minimize cheating.

Cheating crops up whenever there is a mismatch between effort and reward, something which happens often in our current educational system. Assigning unequal rewards to equal efforts biases attention toward the inflated reward, motivating cheating. Assigning equal rewards to unequal efforts favors the lesser effort, enabling cheating. The greater the disparities, the greater the likelihood of cheating.

Thus, one potential avenue for reducing cheating would be to better align the reward to the effort, to link the evaluation of outputs more closely to the actual inputs. High-stakes tests separate them by exaggerating the influence of a single, limited snapshot. In contrast, continuous, passive assessment brings them closer by examining a much broader range of work over time, collected in authentic learning contexts rather than artificial testing situations. Education then becomes a series of honest learning experiences, rather than an arbitrary system to game.

In an era where students learn what gets assessed, the answer may be to assess everything.