Sadly, it’s not particularly surprising that it took a proclamation by researchers from prominent institutions (Harvard and MIT) to draw the media’s attention to what should have been obvious all along. That they don’t have alternative metrics handy highlights the difficulty of assessment in the absence of high-quality data both inside and outside the system. Inside the system, designers of online courses are still figuring out how to assess knowledge and learning quickly and effectively. Outside the system, would-be analysts lack information on how students (graduates and drop-outs alike) make use of what they learned, or not. Measuring long-term retention and far transfer will continue to pose a problem for evaluating educational experiences as they become more modularized and unbundled, unless systems emerge for integrating outcome data across experiences and over time. In economic terms, it exemplifies the need to internalize the system’s externalities.
The social component of learning has long been overlooked from both a regulatory and a design perspective, with community formation often assumed to happen through the traditions of brick-and-mortar institutions. But as students spend less time at physical campuses, whether due to part-time status, family and work commitments, or online classes, deliberately planning how students will connect meaningfully with each other becomes necessary.
Coursera’s partnership to create “learning hubs” offers one example of how the education, business, and government worlds are exploring solutions to strengthen the tenuous social fabric that keeps students in class. Along with the basics of internet and technology access, these hubs also offer a more fundamental reason to return: social ties. Fellow classmates can offer instrumental support by sharing knowledge and experiences, but they also offer emotional support and validation when uncertainty strikes. While the time and effort required to build social ties may initially seem costly, the investment can pay off through higher enrollment and retention, as well as improved learning and satisfaction.
As these initiatives reveal, personalizing learning effectively goes beyond mere individualization to include genuine integration of the participants as people connected in a community.
In The Coming Big Data Education Revolution, Doug Guthrie argues that “big data”, rather than MOOCs, represents the true revolution in education:
MOOCs are not a transformative innovation that will forever remake academia. That honor belongs to a more disruptive and far-reaching innovation – “big data.” A catchall phrase that refers to the vast numbers of data sets that are collected daily, big data promises to revolutionize online learning and, in doing so, higher education.
I agree that there are exciting discoveries and innovations still to be made through the advent of big data in education, and I also agree that MOOCs’ current reliance on scaling up delivery of existing content isn’t particularly revolutionary. Yet I see the two movements as overlapping and complementary rather than as competing forces.
While MOOCs may not (yet) have revolutionized instruction, they have revolutionized access for many learners. Part of their appeal for those interested in their growth is their potential for enabling large-scale analysis due to the high enrollments as well as the availability of online data. The opportunity to study such large numbers of students across such disparate contexts is rare in traditional academic settings, and it permits discoveries of learning trajectories and error patterns that might otherwise get missed as noise amidst smaller samples.
Another potential innovation that traditional MOOCs (xMOOCs) have not yet explored is new models for building cohorts and communities within a large pool of learners, a goal at the heart of the peer-learning pedagogy of “connectivist MOOCs” (cMOOCs). Combine xMOOCs and cMOOCs, and you can improve educational access even further by enabling courses to spring up whenever and wherever enough people, interest, and resources converge. Add in the analytical power of big data, and you have the capacity to truly personalize learning, providing both the experiences that best support students’ learning and the human interactions that enrich those experiences.
That San Jose State University’s Udacity project is on “pause” due to comparatively low completion rates is understandably big news for a big venture.
We ourselves should take pause to ponder what this means, not just regarding MOOCs in particular, but regarding how to enable effective learning more broadly. The key questions we need to consider are whether the low completion rates come from the massive scale, the online-only modality, the open enrollment, some combination thereof, or extraneous factors in how the courses were implemented. That is, are MOOCs fundamentally problematic? How can we apply these lessons to future educational innovation?
Both SJSU and Udacity have pointed to the difficulties of hasty deployment and of starting with at-risk students. In an interview with MIT Technology Review, Thrun credits certificates and student services with helping to boost completion rates in recent pilots, while noting that inflexible course length can impede some students’ completion. None of these factors is inherent to the MOOC model, however; face-to-face and hybrid settings face the same challenges.
As Thrun also points out, online courses offer some access advantages for students who face geographic hurdles in attending traditional institutions. Yet in their present form, they only partly take advantage of the temporal freedom they can potentially provide. While deadlines and time limits may help to forestall indefinite procrastination and to maintain a sense of shared experience, they also interfere with realizing the “anytime, anywhere” vision of education that is so often promoted.
But online access cuts both ways: “easy come” also means “easy go,” and persistence becomes harder. Especially in combination with massive-scale participation that exacerbates student anonymity, no one notices if you’re absent or falling behind. While improved student services may help, there remain undeveloped opportunities for changing the model of student interaction to ramp up the role of the person, requiring more meaningful contributions and individual feedback. By drawing from a larger pool of students who can interact across space and time, massive online education has great untapped potential for pioneering novel models of cohorting and socially situated learning.
Online learning also can harness the benefits of AI in rapidly aggregating and analyzing student data, where such data are digitally available, and adapting instruction accordingly. This comes at the cost of either providing learning experiences in digital format, or converting the data to digital format. This is a fundamental tension which all computer-delivered education must continually revisit, as technologies and analytical methods change, as access to equipment and network infrastructure changes, and as interaction patterns change.
The challenges of open enrollment, particularly at massive scale, replay the recurring debates about homogeneous tracking and ability-grouping. This is another area ripe for development, since students’ different prior knowledge, backgrounds, preferences, abilities, and goals all influence their learning, yet they benefit from some heterogeneity. Here, the wide range of possible outcomes heightens its importance: compare the consequences of throwing together random collections of people without much support versus constraining group formation within certain limits of homogeneity and heterogeneity and instituting productive interaction norms.
As we all continue to explore better methods for facilitating learning, we should be alert to the distinction between integral and incidental factors that hinder progress.
Stanford mathematics professor Keith Devlin suggests that we should drop MOOCs and focus on MOORs (massive open online resources) or OERs (open educational resources):
no single MOOC should see itself as the primary educational resource for a particular learning topic. Rather, those of us currently engaged in developing and offering MOOCs are, surely, creating resources that will be part of a vast smorgasbord from which people will pick and choose what they want or need at any particular time.
Yet even if current MOOCs follow a mediocre model for structuring learning experiences, they do still attempt to meet a need for learners who seek guidance, structure, and social cohorting for the way they access educational resources. I would be interested in decoupling OERs from MOOCs and similar pathways, in order to broaden the scope of available OERs from which anyone can choose. That opens up possibilities for more innovative approaches to enabling diverse learning paths and cohorting models.
EdX, the most prominent nonprofit MOOC provider, plans to use and share automated software to grade and give feedback on student essays. On the heels of this announcement come legitimate skepticism about how well computers actually grade student work (i.e., “Can this be done?”) and understandable concern about whether this is a worthwhile direction for education to proceed in (i.e., “Should this be done?”). Recasting these two questions in terms of how, when, and why to apply automated assessment yields a more critical framework for finding the right balance between machine-intelligent and human-intelligent assessment.
When evaluating the limitations of artificial intelligence, I find it helpful to ask whether they can be classified as issues with data or algorithms. In some cases, available data simply weren’t included in the model, while in others, such data may be prohibitively difficult or expensive to capture. The algorithms contain the details of how data get transformed into predictions and recommendations. They codify what gets weighted more heavily, which factors are assumed to influence each other, and how much.
Limitations of data: Train on a broader set of sample student work as inputs
Todd Pettigrew describes some familiar examples of how student work might appear to merit one grade on the surface but another for content:
it is quite common to see essays that are superficially strong — good grammar, rich vocabulary — but lack any real insight… Similarly some very strong essays—with striking originality and deep insight—have a surprising number of technical errors that would likely lead a computer algorithm to conclude it was bad.
This highlights the need to train the model on these edge cases to distinguish between style and substance, and to ensure that it does not false-alarm on spurious features. Elijah Mayfield points out that a training set of only 100 hand-graded essays is inadequate; this is just one example of the kind of information such a small sample could fail to capture adequately.
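To make the style-versus-substance failure concrete, here is a minimal sketch with hypothetical essays and a deliberately naive length-based scorer (not any real grading system): a model that latches onto a surface feature like word count rewards padded verbosity and penalizes concise insight, exactly the pattern a larger, content-labeled training set is meant to expose.

```python
# Hypothetical sketch: a naive scorer that uses only essay length,
# a surface feature, as its grading signal.

def length_score(text):
    # More words -> higher score, capped at a 1-5 grading scale.
    return min(5, 1 + len(text.split()) // 25)

concise_insight = "brevity with depth"        # merits a high grade
padded_fluff = " ".join(["filler"] * 120)     # merits a low grade

print(length_score(concise_insight))  # 1 -- concise insight scored low
print(length_score(padded_fluff))     # 5 -- padded verbosity scored high
```

With only a hundred training essays, nothing stops a learned model from converging on a rule this shallow if length happens to correlate with the sample grades.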
Limitations of data: Include data from beyond the assignment and the course
Another relevant concern is whether the essay simply paraphrased an idea from another source, or if it included an original contribution. Again from Todd Pettigrew:
the computer cannot possibly know how the students’ answers have related to what was done elsewhere in the course. Did a student’s answer present an original idea? Or did it just rehash what the prof said in class?
Including other information presented in the course would allow the model to recognize low-level rehashing; adding information from external sources could help situate the essay’s ideas relative to other ideas. A compendium of previously-expressed ideas could also be labeled as normative (consistent with the target concepts to be learned) or non-normative (such as common misconceptions), to better approximate the distance between the “new” idea and “old-but-useful” ideas or “old-but-not-so-useful” ideas. But confirming whether that potentially new idea is a worthwhile insight, a personal digression, or a flawed claim is probably still best left to the human expert, until we have better models for evaluating innovation.
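A crude sketch of that compendium idea follows, using hypothetical example ideas and simple bag-of-words cosine similarity as a stand-in for the much richer semantic models a real system would need: a student claim that sits close to a non-normative entry gets flagged as a likely misconception, while a claim far from every entry gets routed to a human expert as potentially new.

```python
from collections import Counter
import math

def cosine(a, b):
    # Bag-of-words cosine similarity between two short texts.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical compendium of previously-expressed ideas, labeled
# normative (consistent with target concepts) or non-normative
# (e.g., a common misconception).
compendium = [
    ("objects fall because gravity pulls them toward the earth", "normative"),
    ("heavier objects fall faster than lighter ones", "non-normative"),
]

def nearest_idea(claim):
    # Return (similarity, text, label) of the closest known idea.
    return max(((cosine(claim, text), text, label)
                for text, label in compendium), key=lambda t: t[0])

sim, text, label = nearest_idea(
    "heavy objects always fall faster than light ones")
# High similarity to a non-normative entry flags a likely misconception;
# low similarity to every entry flags a potentially new idea that still
# needs a human expert's judgment.
```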
Limitations of data: Optimizing along the wrong output parameters
Scores that were dashed off by harried, overworked graders provide a poor standard for training an AI system. More fundamentally, the essay grade itself is not the goal; it is only a proxy for what we believe the goals of education should be. Robust assessment relies on multiple measures collected over time, across contexts, and corroborated by different raters. If we value long-term retention, transfer, and future learning potential, then our assessment metrics and models should include those.
I recognize that researchers are simply using the data that are most readily available and that have the greatest face validity. My own work sought to predict end-of-course grades as a preliminary proof of concept because that is the information we consistently have and use, and our society (perhaps grudgingly) accepts it. Ideally, I would prefer different assessment data. In pointing the direction for such innovations, we ultimately need to identify better data (through educators, assessment experts, and learning scientists), make them readily available (through policymakers and data architects), and demand their incorporation into the algorithms and tools we use (through data analysts, machine learning specialists, and developers).
Limitations of algorithms: Model for meaning
Predictive or not, features such as essay length, sophistication of vocabulary, sentence complexity, and use of punctuation are typically not the most critical determinants of essay quality. What we care about is content, which demands modeling the conceptual domain. Hierarchical topic models can map the relative conceptual sophistication of an essay, tracking the depth and novelty of a student’s writing. While a simple semantic “bag-of-words” model ignores word order and proximity, a purely syntactic model accepts grammatical gibberish. A combined semantic-syntactic model can capture not just word co-occurrence patterns, but higher-order relations between words and concepts, as evident in sentence and document structure. Compared to the approaches earning such public rebuke now, more sophisticated algorithms exist, although they need more testing on better data.
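The bag-of-words limitation is easy to demonstrate with a toy example: two sentences with opposite meanings produce identical word counts, so a purely semantic model of this kind cannot tell them apart.

```python
from collections import Counter

# Two sentences with reversed roles (and opposite meanings)...
s1 = "the student corrected the grader"
s2 = "the grader corrected the student"

# ...produce identical bag-of-words vectors, so a model that ignores
# word order and syntax cannot distinguish them.
assert Counter(s1.split()) == Counter(s2.split())
```

Capturing who corrected whom requires exactly the higher-order, syntax-aware relations described above.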
The question here is which parts of the assessment process are best kept “personalized,” and which parts are best made “adaptive.”
Rapid, automated feedback is useful only if the information can actually be used productively before the manual feedback would have arrived. For a student whose self-assessment is wide of the mark, an immediate grade can offer reassurance or a kick in the pants. For others, it may enable doing just enough to get by. Idealist instructors might shudder at the notion, but students juggling competing demands on their time might welcome the guidance. How well students can make sense of the feedback will depend on its specificity, understandability, actionability, and perceived cost-benefit calculus, all open questions in need of further iteration.
For instructors, rapid feedback can provide a snapshot of aggregate patterns that might otherwise take them hours, days, or longer to develop. Beyond simply highlighting averages which an expert instructor could already have predicted, such snapshots could cluster similar essays that should be read together to ensure consistency of grading. They could flag unusual ideas in individual essays or unexpected patterns across multiple essays for closer attention. Student work could be aggregated for pattern analysis within an individual class, across the history of each student, or across multiple instances of the same class over time. Some forms of contextualization may be overwhelming or even undesirably biasing, while others can promote greater fairness and enable deeper analysis. Determining which information is most worthwhile for an instructor to know during the grading process is thus another important open question.
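As one sketch of the essay-clustering idea (hypothetical short answers and simple word-overlap similarity, not any production method), similar responses can be grouped so that an instructor reads and grades them together:

```python
from collections import Counter
import math

def cosine(a, b):
    # Bag-of-words cosine similarity between two short answers.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(essays, threshold=0.5):
    # Greedy single pass: attach each essay to the first cluster whose
    # seed is similar enough; otherwise start a new cluster.
    clusters = []
    for e in essays:
        for c in clusters:
            if cosine(e, c[0]) >= threshold:
                c.append(e)
                break
        else:
            clusters.append([e])
    return clusters

essays = [
    "photosynthesis converts light into chemical energy",
    "plants use photosynthesis to turn light into chemical energy",
    "mitosis divides one cell into two identical cells",
]
groups = cluster(essays)
# The two photosynthesis answers land in one group to be graded
# together; the mitosis answer forms its own group for separate review.
```

The same grouping could also surface the outliers (singleton clusters) as the unusual ideas worth an instructor’s closer attention.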
Both cases explicitly acknowledge the role of the person (either the student or the instructor) in considering how they may interact with the information given to them. An adaptive system would provide first-pass feedback for each user to integrate, but the instructor would still retain responsibility for evaluating student work and developing more sophisticated feedback on it, with both instructor and student continuing the conversation from there: the personalized component.
Essay-grading itself may not be the best application of this technology. It may be more aptly framed as a late-stage writing coach for the student, or an early-stage grading assistant for the instructor. It may be more useful when applied to a larger body of a given student’s work than to an individual assignment. Or it may be more effective for both student and instructor when applied to an online discussion, outlining emerging trends and concerns, highlighting glaring gaps, helping hasty writers revise before submitting, and alerting facilitators when and where to intervene. While scaling up assessment is an acknowledged “pain point” throughout the educational enterprise, automation may fulfill only some of those needs, with other innovations taking over the rest.
The purpose of technology should be to augment the human experience, not to replace or shortchange it, and education, especially writing, is fundamentally about connecting with other people. Many of the objections to automated essay grading reflect these beliefs, even if not explicitly stated. People question whether automation can capture something which goes far beyond that essay alone: not just the student’s longer learning trajectory, but the sense of a conversation between two people that extends over time, the participation in a meaningful interpersonal relationship. Whether our current instantiation of higher education meets this ideal is not the point. Rather, in a world where we can design technology to meet goals of our own choosing, and in which good design is a time-consuming and labor-intensive process, we should align those expensive technologies with worthwhile goals.
By these standards, any assessment of writing which robs the student and author of these extended conversations fundamentally fails. Jane Robbins claims that students need “the guidance of experts with depth and breadth in the field at hand”, a teacher who can also be “mentor, coach, prodder, supervisor.” As the students on the Brown Daily Herald’s editorial board argue:
an evaluation of an essay by a professor is just as important, if not more, to a student’s scholarship and writing. The ability to sit down and discuss the particularities of an essay with another well-informed and logical human is an essential part of the essay writing experience.
Coupled with arguments that other types of (machine-graded) assessment are better suited for evaluating content knowledge or even low-level critical thinking, these arguments raise the question: Why try to automate assessment of writing at all? After all, much of what I have advocated here simply accelerates the assessment process rather than truly automating it.
The most basic reason is simply that writing instruction is important, and students need ongoing practice and feedback to keep improving. To the extent that any assessment feedback can be effectively automated, it can help support this goal. That raises two additional questions: How deep must feedback be for a writing exercise to be worthwhile? More controversially, can writing that never sees a real audience still serve a legitimate pedagogical purpose?
Considering the benefits not just of actively retrieving and generating information, but also of organizing one’s thoughts into coherent expression, I would argue that some writing exercises can facilitate learning even without an audience. Less clear is how far that “some” stretches, or what specific parameters demand feedback from an expert human. Likely factors include more complex assignments, more extreme work quality, weaker feelings of student belonging, longer intervals between receiving human feedback, less sophisticated automated feedback, and more nuanced expert feedback. Better articulating these limits, and anticipating what we can gain and lose, will help guide future development and application of automated assessment.
Much of the recent buzz in educational technology and higher education has focused on issues of access, whether through online classes, open educational resources, or both (e.g., massive open online courses, or MOOCs). Yet access is only the beginning; other questions remain about outcomes (what to assess and how) and process (how to provide instruction that enables effective learning). Some anticipate that innovations in personalized learning and assessment will revolutionize both, while others question their effectiveness given broader constraints. The goal of this blog is to explore both the potential promises and pitfalls of personalized and adaptive learning and assessment, to better understand not just what they can do, but what they should do.