Projects

MHA

“A digital intervention focused on improving mental health”

Mental health is an area where personalization and user-focused design can be especially beneficial for people. With collaborators from Northwestern University, we are working on a digital intervention for Mental Health America which consists of treatment modules that involve psychoeducation material, interactive activities, and supportive messaging. Using machine learning, we will personalize content by determining which messages/prompts are most effective and engaging for users. Additionally, we are working on deploying and measuring the effects of TenQ, a condensed ten-question survey informed by cognitive behavioural therapy meant to help people reflect on their mental health, with the goal of lowering the barrier to accessing mental health resources.

Identity Reframing

“Interventions designed to shift students' perceptions”

This project aims to examine interventions designed to shift students' perceptions of failure and challenges in academic settings. Specifically, the study seeks to reframe students' negative self-perceptions associated with receiving poor grades by fostering a growth mindset. Through the use of quotes and role models, the study aims to encourage students to view challenges as opportunities for personal development rather than as indicators of weakness. Additionally, the project aims to identify factors that may influence students' expectations and the challenges they face to pinpoint effective intervention strategies. This research has the potential to provide insights into effective approaches for promoting resilience and a growth mindset among students in academic settings.

Students OnTrack

“A digital intervention focused on improving homework completion”

Randomized A/B experiments provide one source of evidence for designing instructional interventions and understanding how students learn. Conducting experiments in the field, however, raises the question of how the data can be used to rapidly benefit both current and future students. Adaptive experiments offer one solution, as they can be deployed to increase the chances that current students also obtain better learning outcomes. Our goal is to provide a standalone system that can send students reminders and prompts to reflect. We know instructors have their own style and approach to messaging, so the OnTrack messaging service is meant to be a separate channel, clearly distinct from instructors' high-priority announcements.


OnTrack uses randomized A/B comparisons to evaluate different reminders and alternative ideas about how best to encourage students to start work earlier and, crucially, to measure the impact of these interventions on student behavior. The effect of homework email reminders is further evaluated through an adaptive experiment using multi-armed bandit algorithms, which identify the most effective treatment and allocate more students to it. The rationale is threefold: (1) students can receive extra support with everything being online, with the IAI lab putting energy into crafting different messages based on student feedback; (2) OnTrack emails won't 'interfere' with instructor communication or lead students to start ignoring instructors' messages (concerns raised by instructors, which we share); (3) instructors still have the option to get data from what we test and to suggest or request emails they would like to see us test.
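The adaptive allocation described above can be sketched with a small Beta-Bernoulli Thompson Sampling simulation. The two "messages" and their completion rates below are invented for illustration, not data from the OnTrack deployment.

```python
import random

def thompson_assign(successes, failures, n_students=1000,
                    true_rates=(0.30, 0.45)):
    """Adaptively assign students to one of two reminder messages
    with Beta-Bernoulli Thompson Sampling."""
    counts = [0] * len(true_rates)
    for _ in range(n_students):
        # Draw a plausible completion rate for each message from its posterior
        draws = [random.betavariate(1 + successes[a], 1 + failures[a])
                 for a in range(len(true_rates))]
        arm = draws.index(max(draws))  # send the message with the best draw
        counts[arm] += 1
        # Simulate whether this student completes the homework
        if random.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return counts

random.seed(0)
counts = thompson_assign([0, 0], [0, 0])
```

As evidence accumulates, most students end up assigned to the message with the higher completion rate, which is the "benefit current students" property discussed above.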

Personalised Explanations/PCRS DropDowns

“A digital intervention focused on improving student learning”

Different students are likely to get different levels of learning outcome even if they receive the same prompt or message. For example, although there is a large body of academic literature showing that students learn better when they write explanations of the concepts they are learning, some of them might not be affected in the real world because they rush to finish their problem set without time to reflect. In the personalized explanations project, we design randomized experiments to examine what educators should tell students in what context to improve their learning and engagement. We then apply contextual bandits to tailor students’ experience on online education systems. We embed this system in the University of Toronto’s PCRS online learning system.


Attributions & Decision-Making in Goal Pursuit

“How does people’s causal reasoning (attributions) influence their interpretation of positive and negative feedback information?”


Throughout daily goal pursuit, humans continually receive feedback from the environment regarding their performance and goal progress. This enables them to make decisions about their efforts, strategies, and perseverance but also allows them to determine when their abilities are not suited to a goal, when circumstances are unfavorable for goal progress, and when the goal should be adjusted or abandoned. A variety of factors influence both how an individual will interpret performance feedback and how that interpretation will influence their subsequent behaviors. These include attributions, self-efficacy beliefs, and various individual and motivational factors. 


This study explores: 

1) How people’s causal reasoning (attributions) influences their interpretation of positive and negative feedback information and their subsequent decisions (do they quit? do they change strategy?)

2) How individual, cultural, and motivational factors influence this causal reasoning, as well as subsequent decision-making.


This study also leverages the results of theoretical experiments to inform interventions that could help people make more adaptive attributions and decisions in the face of negative feedback.


Statistically Considerate Bandits

“Hypothesis testing for adaptive experiments”

The IAI group uses bandit algorithms, a type of statistical tool, throughout our research. We want to reduce the rate of false positives that analyses of bandit-collected data can produce and make those analyses more accurate. The lab works toward this by applying a variety of statistical approaches to improve bandit algorithms.

Contextual Bandits

“Research on contextual bandits and their applications to various areas of human-computer interaction”

Contextual bandit algorithms are powerful reinforcement learning techniques that enable personalization and user-focused design. They balance the exploration of new possibilities with the exploitation of the best existing options to learn and act optimally. Our lab conducts research on contextual bandits and their applications to various areas of human-computer interaction, such as encouraging people to exercise, where we use them to balance showing people new motivational messages against those that have proven effective in the past.

Reflective Questioning Activity

“Investigating users’ perspectives on an online reflective question activity”

We investigate users’ perspectives on an online reflective question activity (RQA) that prompts people to externalize their underlying emotions about a troubling situation. Inspired by principles of cognitive behavioral therapy, our 15-minute activity encourages self-reflection without a human or automated conversational partner. A deployment of our RQA on Amazon Mechanical Turk suggests that people perceive several benefits from it, including structured awareness of their thoughts and problem-solving around managing their emotions. Quantitative evidence from a randomized experiment suggests that people feel less worried about their selected situation after our RQA and consider it worth the minimal time investment. A further two-week technology probe deployment with 11 participants indicates that people see benefits to doing this activity repeatedly, although it may become monotonous over time. In summary, this work demonstrates the promise of online reflection activities that carefully leverage principles of psychology in their design.

Voice Reflections

“Interventions aiming to support students in reflecting on course material by voice.”

We previously deployed a reflections system in a third-year Introduction to Databases course with a flipped-classroom environment. Students in this environment watched mini-lecture videos and worked on exercises before coming to class. To check their understanding of the lecture videos, we prompted them to reflect via text-based responses right after they watched the videos and before they worked on exercises about the content. We noticed that when students reflected on a topic and then answered questions on that topic, they took fewer submissions to get an answer right than students who did not reflect on that topic. We aim to check whether letting students reflect on a lecture topic using voice recordings is as beneficial as using text-based responses when they work on homework. We divided the students into three groups: those who reflected using voice responses, those who used text responses, and those who chose between voice and text. More recently, we have been focusing on prompting students after their reflection to get them to think deeper and take advantage of the activity. We are currently working on using large language models to make the prompting more interactive, so students are able to answer and ask questions based on their reflection.

Gratitude Interventions

“Testing the feasibility of a gratitude app.”

We are testing the feasibility of a gratitude app built by collaborators from University College London using React Native. The deployment will take place on Android and iOS platforms. During this process, we will fix minor bugs and design our own study. The project is running over the summer of 2022, and we hope to submit our findings at the end of this session. We are currently in the second round of deployment. Participants in this study will receive a prompt asking them to record their mood before and after an event they are grateful for on a 5-point scale, and to briefly describe this event. This stage will last approximately two weeks. Through this practice, participants build the gratitude skills of noticing and acknowledging the people and things in their lives worth being grateful for. This helps to improve the app as well as investigate how recording gratitude affects people’s well-being.

Student MetaSkills Interventions

“Effects of multiple interventions intended to improve first-year students’ meta-skills”

We are investigating the effects of multiple interventions intended to improve first-year students’ “metaskills”: transferable skills that can help students in multiple areas of their lives (e.g., planning, growth mindset, stress management). These interventions have been studied separately, but it is not yet known how they interact, or whether there are crossover effects. Students receive a random subset of these “metaskills” modules, while the control (null subset) condition receives generic study advice. We plan to measure the effects of these interventions on midterm and final grades, as well as on student mindset.

Statistical Inference with Multi-armed Bandit Algorithms

“Identifying techniques to control the type 1 error rate of MAB algorithms in online educational experiments.”

Multi-armed bandit (MAB) algorithms maximize expected reward, whereas randomized experiments maximize statistical power and control the type 1 error rate. Randomized experiments may not be ideal in the context of deciding which version of an online educational technology to present to students (e.g., text vs. video explanations), since students may receive inferior versions of this technology during the experiment. MAB algorithms are thus appealing, as they maximize reward by assigning more students to the better version. However, MAB algorithms have been shown to inflate the type 1 error rate and reduce power. In this project, we are working on techniques to control the type 1 error rate of MAB algorithms in online educational experiments while minimizing the loss of power.
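The inflation can be seen in a small simulation: under a null where both versions are equally effective, applying a naive two-proportion z-test to Thompson-Sampling-collected data rejects more often than the same test on uniformly randomized data. The sample sizes and rates here are arbitrary choices for illustration.

```python
import math
import random

def run_experiment(adaptive, n=200, p=0.5, seed=None):
    """One two-arm experiment under the null (both arms have rate p).
    Returns the z-statistic of the naive difference-in-means test."""
    rng = random.Random(seed)
    s, f = [0, 0], [0, 0]
    for _ in range(n):
        if adaptive:  # Beta-Bernoulli Thompson Sampling assignment
            draws = [rng.betavariate(1 + s[a], 1 + f[a]) for a in (0, 1)]
            arm = 0 if draws[0] >= draws[1] else 1
        else:         # uniform random assignment
            arm = rng.randrange(2)
        if rng.random() < p:
            s[arm] += 1
        else:
            f[arm] += 1
    n0, n1 = s[0] + f[0], s[1] + f[1]
    if n0 == 0 or n1 == 0:
        return 0.0
    pooled = (s[0] + s[1]) / n
    se = math.sqrt(pooled * (1 - pooled) * (1 / n0 + 1 / n1)) or 1e-9
    return (s[1] / n1 - s[0] / n0) / se

def type1_rate(adaptive, reps=2000):
    """Fraction of null experiments where |z| > 1.96 (nominal 5% test)."""
    return sum(abs(run_experiment(adaptive, seed=i)) > 1.96
               for i in range(reps)) / reps
```

Comparing type1_rate(True) with type1_rate(False) shows the adaptive condition rejecting the true null more often than the nominal 5% level, which is the problem this project targets.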

Factorial Experiment Design

“Evaluating how Thompson Sampling performs in different problem settings.”

Factorial design is a systematic way of designing experiments used by scientists. The purpose is to examine experimental variables (a.k.a. factors) to see if and how each of them affects the outcome. We frame it as a bandit problem and use Thompson Sampling to approach it. Our goal is to evaluate how Thompson Sampling performs in different problem settings as well as provide insights on details and nuances of such solutions.
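One way to frame a factorial design as a bandit problem is to treat every combination of factor levels as an arm and run Thompson Sampling over those arms. The factor names and levels below are hypothetical examples, not factors from a lab study.

```python
import itertools
import random

def factorial_arms(factors):
    """Enumerate every combination of factor levels as a bandit arm.
    factors: dict mapping factor name -> list of levels."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in itertools.product(*(factors[n] for n in names))]

arms = factorial_arms({
    "message_tone": ["neutral", "encouraging"],
    "send_time": ["morning", "evening"],
})

# One Beta-Bernoulli posterior per arm: [successes + 1, failures + 1]
posterior = {i: [1, 1] for i in range(len(arms))}

def pick_arm():
    """Thompson Sampling: sample each posterior, play the best draw."""
    draws = {i: random.betavariate(a, b) for i, (a, b) in posterior.items()}
    return max(draws, key=draws.get)

def record(arm, success):
    posterior[arm][0 if success else 1] += 1
```

A 2x2 design already yields four arms, and the arm count grows multiplicatively with more factors and levels, which is one reason evaluating how Thompson Sampling behaves in these settings matters.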

Contextual Bandits with non-stationarity in factorial designs

“Estimating the causal relationship between an outcome of interest and an intervention.”

Real-world data collected across multiple time points are generally complex, often exhibiting non-stationarity that must be appropriately modeled. This is particularly true when study designs get more complicated, as in the case of multi-factorial designs with several levels for each factor (e.g., the DIAMANTE Study). If the goal of the study is to estimate the relationship (more specifically, the causal relationship) between an outcome of interest and some interventions, with the final aim of identifying the best intervention, (contextual) multi-armed bandits are typically used.

However, while there is a broad literature dealing with algorithms’ theoretical regret bounds, in some cases also accounting for non-stationarity, non-stationarity in real-world settings based on complex factorial designs has not yet been addressed. In this project, we want to investigate, through simulations based on real-world data, the performance of some existing bandit algorithms in different types of non-stationary scenarios. The ultimate goal is to develop a new bandit algorithm that appropriately incorporates this problem in mHealth, where non-stationarity manifests as habituation phenomena.
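One standard starting point for handling habituation-style drift is to discount old observations so the sampler can track a changing reward rate. The sketch below applies exponential discounting to Beta-Bernoulli Thompson Sampling; it is a generic illustration of the idea, not the algorithm this project is developing.

```python
import random

class DiscountedTS:
    """Beta-Bernoulli Thompson Sampling with exponential discounting,
    so older observations fade and drifting arms can be re-learned."""
    def __init__(self, n_arms, gamma=0.95):
        self.gamma = gamma           # discount factor in (0, 1]
        self.s = [0.0] * n_arms      # discounted success counts
        self.f = [0.0] * n_arms      # discounted failure counts

    def choose(self):
        draws = [random.betavariate(1 + s, 1 + f)
                 for s, f in zip(self.s, self.f)]
        return draws.index(max(draws))

    def update(self, arm, reward):
        # Decay every arm's counts, then add the new observation
        for a in range(len(self.s)):
            self.s[a] *= self.gamma
            self.f[a] *= self.gamma
        if reward:
            self.s[arm] += 1.0
        else:
            self.f[arm] += 1.0
```

With gamma = 1 this reduces to ordinary Thompson Sampling; smaller gamma shortens the effective memory window, trading statistical efficiency in stationary periods for faster adaptation when an intervention's effect habituates.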

Personalized Support Through Large Language Models for Supporting Mental Health and Learning

“Combining Reinforcement Learning with Large Pre-trained models to personalize interactions”

This class of projects (led by Harsh Kumar) involves combining our research in Reinforcement Learning with Large Pre-trained models to personalize interactions, as well as make these models more useful for real-world problems. Recent work on this has been presented at CHI, LAK, and SIGCSE.


Supporting Students and Instructors on Q&A Forums

“Interventions designed to make Q&A forums more inclusive.”

Q&A forums provide academic support and reduce isolation as course enrollment increases. They have also been linked to reduced drop rates among students. However, evidence suggests that these forums may cause stress and exacerbate feelings of isolation for some students who perceive their peers as having more experience and authority. Therefore, this study seeks to develop interventions that address this issue by setting more realistic expectations for forum participation and providing tools and frameworks that promote confidence and comfort among students. Previously, we have explored the use of exclusive groups as a means of creating a safer and more inclusive forum environment. The insights gained from this research have the potential to improve the effectiveness of Q&A forums in promoting student engagement and reducing drop rates, ultimately contributing to a more positive and supportive learning environment.