Research Overview (January 31, 2015)

Leveraging Online Educational Resources for Research on Learning

The prevalence of online educational resources (like videos, exercises, and courses) has dramatically increased in the last few years and may play an even larger role in K-12 education and our universities in the near future. Regardless of the staying power of Massive Open Online Courses or specific platforms like Khan Academy, new opportunities are emerging for research as everyday learning occurs through interactions with digital online resources. An online mathematics exercise is far more amenable than pen-and-paper homework to automatic collection of data, experimental comparisons of different versions of an exercise, and iterative revision and improvement.

My research adapts blended/online educational resources that are widely used by students, using embedded experiments and computational methods to identify practical improvements to these resources and to conduct novel research on learning processes. So far, I have investigated why and how generating explanations improves learning, and how motivation can be improved by teaching students “growth mindset” beliefs that intelligence is malleable. This work is complemented by laboratory-style experiments and by computational models from machine learning and data mining.

Investigating the role of explanation in learning

While instruction (especially online) too often merely provides students with explanations, people’s internal or external efforts to generate explanations and answer questions powerfully impact learning and transfer. Supporting our anecdotal experience that we can learn by teaching, this self-explanation effect (Chi, 2000) and the importance of explanations are reflected in research across science and mathematics education, cognitive psychology, artificial intelligence, and other fields. Moreover, prompting for explanations is valuable from a design perspective: it can be incorporated into a wide variety of learning resources, is technically easy to manipulate, and combines the virtues of guidance from an external interface with the benefits of actively engaging people to interact with the software and internally drive their construction of knowledge.

A Subsumptive Constraints Account of Explanation. 

My research into what mechanisms underlie this effect has considered whether asking “why?” produces general benefits – by boosting overall engagement, monitoring, or use of language – or selectively guides learning. I have developed a Subsumptive Constraints account (Williams & Lombrozo, 2010, Cognitive Science) of explanation and learning, according to which explaining “why?” drives people to seek patterns or generalizations that underlie the facts or observations they are explaining, and to particularly privilege broad, unifying generalizations. For example, explaining why 2 x 3 = 6 goes beyond remembering the fact, instead appealing to principles that characterize multiplication as repeated addition.

Williams & Lombrozo (2010, Cognitive Science) used controlled artificial categories to provide evidence that explaining “why?” selectively promotes the discovery of patterns, supporting the Subsumptive Constraints account over alternative accounts in terms of elaborative or metacognitive benefits from verbal description. Williams & Lombrozo (2013, Cognitive Psychology) further found that explaining increased people’s consultation of their existing knowledge in guiding which pattern was discovered, and which pattern was expected to generalize to novel observations.

Recent work with a graduate student in developmental psychology has found that explaining helps children as young as five draw on their prior knowledge and find underlying patterns and causal relationships (Walker, Williams, Gopnik, & Lombrozo, 2012; under revision). Other collaborative work (Edwards, Williams, & Lombrozo, 2013; in prep) has investigated how the benefits of explaining are similar to and different from those of comparing examples, another learning strategy known to promote abstraction and pattern discovery.

Selective enhancement & impairment. 

The Subsumptive Constraints account proposes that explaining does not simply boost attention and motivation, but selectively guides learning and reasoning. This has both theoretical and practical importance, with implications for when to expect prompts to explain to be educationally beneficial, irrelevant, or even harmful. In fact, Williams, Lombrozo, & Rehder (2013, Journal of Experimental Psychology: General) found that explaining “why?” can actually impair learning about specific cases through over-generalization of misleading patterns.

Using the educational task of learning about z-scores, I further investigated this phenomenon to identify the factors that determine whether explaining promotes or inhibits revision of beliefs in overgeneralizations. Explaining specific cases preferentially promoted belief revision (relative to articulating thoughts) only when sufficiently many conflicting observations were present (Williams et al., 2012), and when these decisively ruled out alternative beliefs (Williams et al., 2013).

Self-explanation strategies in learning from online mathematics exercises.

My current work extends the real-world relevance of the previous studies on explanation by examining actual students using Khan Academy. I chose this platform and partnership after evaluating the technical implementation details and research suitability of three other online platforms. A valuable feature of Khan Academy's website for research is its collection of over 400 kinds of online mathematics exercises, which are used monthly by over one million students, from middle school through community college and university. These provide a paradigm that furnishes continual measures of learning over many weeks and allows random assignment to different versions of exercises, which can be changed through simple modifications of text and HTML.
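To make the random-assignment step concrete, the sketch below shows one simple way an embedded experiment could deterministically assign each student to a single exercise variant so that they see a consistent condition across sessions. This is a minimal illustration under assumed names (the function, experiment label, and condition names are hypothetical), not Khan Academy's actual infrastructure.

```python
# Minimal sketch (not Khan Academy's actual infrastructure): deterministically
# assign each student to one version of an exercise, so a student sees a
# consistent condition across sessions. All names here are hypothetical.
import hashlib

CONDITIONS = ["control", "explain_prompt", "reflect_prompt", "paraphrase_prompt"]

def assign_condition(user_id: str, experiment: str = "explanation-prompts") -> str:
    """Hash the (experiment, user) pair into a stable bucket and map it to a condition."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

# The same student always lands in the same condition:
print(assign_condition("student-12345"))
```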

To maximize the impact beyond the targeted exercises themselves, my addition of explanation prompts was framed to students as learning a strategy for self-questioning. Messages explaining the strategy were added above math exercises, and prompts to explain were interspersed into problems and worked-out example solutions. The study is ongoing, but results after one month and 3000 students suggest that encouraging students to explain solution steps (e.g., “Why is it helpful to take this step?”; see a demo at tiny.cc/whatwhyhow) or to reflect on their problem solving (e.g., “What are you doing or thinking right now?”; tiny.cc/whatwhyhow2) increases engagement and results in more learning than Khan Academy’s typical exercises. A more stringent control condition requesting paraphrasing (e.g., “Restate what this solution step is saying in your own words”; tiny.cc/whatwhyhowcontrol) is better than typical exercises without prompts, but does not match the benefits of explaining or reflecting on thinking.

The great potential of teaching general study skills or metacognitive strategies is often countered by the difficulty of showing robust benefits of such training, particularly when it is taught in the abstract. Online and blended resources may present a unique opportunity for such research. Methodologically, research embedded in an online resource can reach large numbers of participants and therefore has sufficient power to detect and further explore promising effects. Moreover, the logging of repeated observations of a student’s behavior over several months is difficult to achieve in a typical classroom. From a theoretical perspective, using online interfaces to embed such strategy training in the context of solving specific exercises holds greater promise, given discouraging results for teaching such strategies in the abstract without links to specific content. Online environments are also uniquely well suited to repeatedly prompting and reminding students to use such strategies, allowing extended interventions and the boosting of adaptive study behaviors.

One of the most exciting features of research using online resources is the potential for extensive investigation and iterative revision. The next line of planned experiments rolls out prompts to a wide range of exercises across the site, in a design that will allow assignment of over 200,000 students to 50 different conditions (more details about the experiments are at tiny.cc/crowdsourcedresearch). To support future research using this paradigm of embedding experimental manipulations in mathematics exercises, in September I wrote and submitted a 25-page grant proposal through the Vice Provost for Online Learning to the Department of Education’s Institute of Education Sciences.

Promoting motivation by changing beliefs about intelligence

The most recent stage in my research on explanation and learning focuses on teaching students educational habits and behaviors, so that an intervention that augments instruction not only helps learning of a specific concept or problem type, but also engenders dispositions and beliefs that improve learning across a wide range of future content.

To continue this focus, I have begun a complementary line of research that aims to encourage “Growth Mindset” beliefs about the malleability of intelligence (Dweck, 2007), which can increase investment of effort and resilience in the face of challenges and errors over the long term. Not only is increasing motivation a ubiquitous challenge in education (both online and “offline”), but more motivated students may also be more likely to use adaptive strategies like self-explanation.

The current study on Khan Academy finds that adding messages above exercises that prompt students to see intelligence as malleable (i.e., to hold a “Growth Mindset” about intelligence, e.g., “Remember, the more you practice the smarter you become!”) increases both effortful behaviors (like attempting more problems) and learning gains, although in this first study the effects are small.

However, there is no effect of inserting positively valenced messages of encouragement that do not emphasize that intelligence can be changed (e.g., “This might be a tough problem, but we know you can do it.”). This subtle change confers no benefit beyond the original exercises with no messages at all, and it is significantly worse than the Growth Mindset messages (preliminary results: Williams, Paunesku, Haley, & Sohl-Dickstein, 2013).

This isolates the effect of emphasizing intelligence’s malleability above and beyond reminders or encouragement, and provides the first real-world experimental study to directly pit these alternative interpretations against each other. A pending study with Neil Heffernan on the www.Assistments.org educational website aims to evaluate the effect of providing targeted motivational messages only in response to students’ wrong answers, leveraging functionality of that platform which complements Khan Academy’s.

Computational Modeling & Experimental Design at Scale

I received a small ($22K) grant from the Gates Foundation and Athabasca University’s MOOC Research Initiative to conduct more extensive statistical modeling and data mining research with the Khan Academy data set from the completed experiment. I am identifying mediators and moderators of the effects of the motivational messages, such as additional time invested, strategic use of hints, item difficulty, and the time course of the effect. In addition, the data set of 100,000 students in the control condition captures learning behaviors over two months of work on fractions exercises. One future direction is using novel non-parametric Bayesian models (like CrossCat, in collaboration with Pat Shafto) to categorize different features of learners and kinds of problems.
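As a concrete illustration of the kind of moderator analysis involved, the sketch below fits a regression with a condition-by-prior-ability interaction. The file name and column names are hypothetical placeholders for fields in the logged exercise data, not the actual Khan Academy schema.

```python
# Illustrative sketch only: one simple moderator analysis for the message
# experiment. The file name and columns (condition, prior_ability,
# problems_attempted) are hypothetical stand-ins for the logged data.
import pandas as pd
import statsmodels.formula.api as smf

logs = pd.read_csv("exercise_logs.csv")  # hypothetical per-student export

# Does the effect of the message condition on problems attempted depend on
# a student's prior ability? Test via a condition-by-ability interaction.
model = smf.ols("problems_attempted ~ C(condition) * prior_ability", data=logs).fit()
print(model.summary())
```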

Such modeling work targets new questions about the massive data sets available in traces of online learning and behavior, but draws on my computational background in using Bayesian models from statistics and machine learning to characterize people’s knowledge and reasoning in computational terms. This includes past work on modeling how people learn about functional relationships and causal connections, and on characterizing the knowledge that guides their reasoning about randomness, probability, and similarity (paper in JEP: Learning, Memory & Cognition).

The unique affordances of online environments for embedding experiments raise a need for bridging the computational and behavioral sciences, as people in industry and computer science will increasingly conduct experiments or “A/B tests” and could benefit from collaboration with social and behavioral scientists. A course proposal I submitted to CHI 2014 aims to elucidate practical principles of experimental design, while a paper at the NIPS Data Driven Education workshop outlines the logic underlying examples of experiments in online courses and considers whether experimental comparisons might serve as a conceptual tool for instructional design.
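To show the basic logic of such an A/B comparison, here is a minimal, self-contained sketch that compares completion rates between two versions of an exercise with a two-proportion z-test; the counts are invented purely for illustration.

```python
# Bare-bones illustration of the A/B-test logic discussed above: compare
# completion rates between two versions of an exercise. Counts are invented.
from statsmodels.stats.proportion import proportions_ztest

completed = [1843, 1990]  # students who finished the exercise in version A vs. B
assigned = [2500, 2500]   # students randomly assigned to each version

z_stat, p_value = proportions_ztest(count=completed, nobs=assigned)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```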

Synthesizing, Applying and Linking Research to Practice

In addition to the previous experiments investigating novel uses of prompts and messages, in a workshop paper at the AIED MOOCshop and the NIPS Data Driven Education workshop (Williams, 2013) I synthesized implications of existing research for augmenting online videos and exercises with motivational messages and prompts to explain, and created concrete prototypes to illustrate the kinds of instructional features that can be built.

Many of the online learning resources now in everyday use through industry and startups may not incorporate the existing decades of research on pedagogy and behavior conducted in HCI, education, and the cognitive and learning sciences. Part of my work is therefore reviewing and synthesizing research on learning with particular applicability to online contexts (e.g., Williams, 2013; www.josephjaywilliams.com/education) and disseminating these potential applications to organizations like iNACOL, EdX, Khan Academy, and Google, particularly in the context of how researchers can contribute to instructional design even when they are not domain or content experts (as seen in a recent EdX talk, a worksheet guide, and a NIPS Data Driven Education workshop paper).

I have also done such synthesis work in the apparently very different setting of online lessons teaching Cognitive Behavioral Therapy (CBT), working with the UCSF Internet World Health Research Center and Allison Harvey’s clinical psychology research group at Berkeley. This has been advantageous for testing the generalizability of these learning and design principles for question prompts and messages, and has also allowed me to link my work more directly to research on habit and behavior change. Doing so raises new directions and insights for understanding people’s interactions with online software, and for research especially germane to fostering “learning” that closely resembles habit or behavior change, like acquiring study-skill habits or changing metacognitive behaviors. This line of work in fact provided substantive insights and motivation for the recent experiment on prompting for explanations on Khan Academy.

This work has resulted in a co-authored review paper in Perspectives on Psychological Science. Experience developing interactive online “coaches” for applying CBT to everyday life also served as the basis for a current project that aims to embed interactive “Learning Assistant” prompts into the study of video, adapting explanation prompts from my own work, self-explanation training (McNamara, 2004), and reciprocal teaching strategies (Palincsar & Brown, 1984), and to remind and guide students in learning these new behaviors.

Current work on adding what the PPS paper terms cognitive support to people’s learning of CBT has involved developing the Cognitive Support Rating Scale (CSRS), an instrument for measuring the extent to which an online resource or instructor uses our target cognitive support techniques and other evidence-based pedagogy. We have also developed training guides and manuals for psychotherapists who are taught to apply cognitive support strategies – like prompting for explanations of therapy points – which are being evaluated in an ongoing randomized controlled trial (RCT).

My goal for this synthesis work is not only to yield practical benefits for the many people reached by these resources, but also to interest designers of practical resources in collaborating with scientific researchers. I am hopeful that the approach I am taking to embedding scientific research within widely used online educational products can also ensure that the expertise of other cognitive and learning sciences researchers is leveraged to impact learning and conduct research in these settings.

Bridging research and practice in a way that is mutually beneficial is a core goal of the symposia I organized at the Association for Psychological Science, the Annual Meeting of the Cognitive Science Society, and the American Educational Research Association. By forming relationships with platform innovators and industry practitioners, understanding the technical and practical constraints they face, and identifying researchers whose focus targets a particular platform’s practical needs and affordances, I hope that we can use Internet environments and resources to ensure that future developments in online education have both a rigorous scientific foundation and substantial practical benefits for students.