
A research team from the MIT Integrated Learning Initiative (MITili) traveled to the Canadian Red Cross (CRC) International First Aid Education Conference in Niagara Falls, Ontario, to conduct a study of workplace learning effectiveness. Close to fifty conference participants joined the study, which compared two variants of a learning experience covering what to do before, during, and after a power outage. In one variant, an instructor led learners through the content; in the other, learners navigated a mobile app. Read on to learn what MITili found, or skip ahead to the results if you’d just like the answer.
1. Setting the Stage
One aspect of the CRC’s mission is emergency preparedness. The best way to deliver that preparedness was an open question, however. MITili joined forces with CRC and participants at its 2018 International First Aid Education Conference in Niagara Falls, Ontario, to address that question.
“The International First Aid Education Conference is an ‘un-conference’. We are expecting over 400 attendees to come prepared to share, network, and build new strategies to address how we educate and support Canadians and other global citizens on first aid to improve people’s confidence in first aid skills and their propensity to act.”
Specifically, the team decided to evaluate two ways of delivering learning in the service of this preparedness: in person with an instructor, and via the CRC’s “Hazards” app for mobile devices.
The Hazards app contains emergency preparedness information for a number of potential disasters: house fires, hurricanes, power outages, and the like. The figure below shows these and other disasters organized within the app’s “home” page.
Working together, CRC and MITili selected the “Outage” disaster as the source of content for the research project. MITili then reviewed the content within the Outage topic to understand its breadth and organization. The following two screenshots show the organization of the Outage topic and the content within the “During” sub-topic; the “Before” and “After” topics were similarly configured.
1a. The learning objectives
From the Outage content, and following the Wiggins and McTighe “backward design” approach (described in detail in their book “Understanding by Design”), MITili extracted eight learning objectives, which are spelled out in the appendix.
MITili shared the learning objectives with the instructor who would lead the in-person sessions at the conference. The instructor then built course materials aimed at helping one group of learners gain the necessary information; the second group of learners would navigate the app to gain the same information.
1b. The assessment questions
The next step in the Understanding by Design (UbD) approach was to create assessment questions that would measure whether learners achieved the learning objectives. MITili developed 12 open-ended and multiple-choice assessment questions, targeting 5-10 minutes for the average learner to complete the full assessment.
1c. The content
Typically, the UbD approach ends with the creation of content. In this case, the content for the app variant already existed, so a truncated UbD approach was used: MITili applied it to arrive at the assessment questions above, and the instructor applied it to develop the content for the in-person sessions.
2. On-site at the Conference
Conferences afford an excellent opportunity to work with large numbers of research participants, albeit not for longitudinal studies. MITili worked with CRC to recruit participants both in advance of and during the conference. The following recruiting text was used; it was kept purposefully vague so as not to bias participants.
People and organizations that learn are people and organizations that succeed. To deliver on learning, the workplace, like K-12 and higher education, cannot rely solely on practices of the past; it must also implement the findings of learning science in areas ranging from personalized learning to just-in-time learning to learning while performing.
The mission of the MIT Integrated Learning Initiative (MITili) is to conduct research on learning effectiveness and share the resulting findings. Key variables include the characteristics of the learner, the nature of the instruction, and the corporate policy environment in which the learning takes place.
Conference gatherings afford researchers the opportunity to interact with a large and diverse group of learners to explore areas of relevance to these audiences. At the invitation of the Canadian Red Cross, MITili will conduct an on-site research project at the 2018 International First Aid Education Conference. So as not to bias the results, MITili is unable to share a full description of the project. Its general direction, however, will explore differences in delivery for a specific set of learning objectives.
CRC provided two classrooms for the experiment. One classroom was used for four sessions of the instructor-led (IL) version while the other was used for four sessions of the app-delivered (AD) version. Following the learning experience, participants in both rooms were given the option to see a short presentation on representative science of learning findings.
- Monday AM, 11a – 12n
  - Up to 10 instructor-led (IL) and up to 10 app-delivered (AD) learners
- Monday PM, 2p – 3p
  - Up to 10 IL and up to 10 AD learners
- Tuesday AM, 11:30a – 12:30p
  - Up to 10 IL and up to 10 AD learners
- Tuesday PM, 2:45p – 3:45p
  - Up to 10 IL and up to 10 AD learners
Before attending the learning experiences measured by the study, participants signed a consent form. The learning experiences themselves (instructor-led and app-delivered) were set at 12 minutes in length. An online survey and post-assessment were administered the next day rather than immediately following the learning experience, as research shows that results collected a day later are a better indicator of longer-term retention.
3. Results
The number of participants in the learning experience sessions, and the subset of those learners who completed the survey and assessment the next day, were as follows. (The numbers were not equal across instructor-led and app-delivered because the number of people who participated in a session tended to be a few less than the number who signed up for that session.)
- Instructor-led
- 21 participants
- 19 respondents (90%)
- App-delivered
- 25 participants
- 25 respondents (100%)
The following sections summarize the findings of the study. It is important to note that none of the differences reported below were statistically significant given the number of post-test respondents.
In short, it was a tie: instructor-led and app-delivered learning did not yield statistically different outcomes; post-assessment scores were in the low to mid 60% range for both modes of delivery.
3a. How similar or different were the two learner populations (IL and AD)?
- How familiar were you with the material beforehand? (1–5 scale, 5 = most familiar)
  - Instructor-led
    - Average: 3.7
    - Median: 4.0
  - App-delivered
    - Average: 2.9
    - Median: 3.0
- How relevant was the content to you? (1–5 scale, 5 = most relevant)
  - Instructor-led
    - Average: 3.6
    - Median: 4.0
  - App-delivered
    - Average: 3.6
    - Median: 4.0
- How important do you think it is to learn about the material? (1–10 scale, 10 = most important)
  - Instructor-led
    - Average: 8.1
    - Median: 9.0
  - App-delivered
    - Average: 8.7
    - Median: 9.0
- How interested are you in learning about the material? (1–10 scale, 10 = most interested)
  - Instructor-led
    - Average: 7.1
    - Median: 7.0
  - App-delivered
    - Average: 7.9
    - Median: 8.0
3b. How similarly or differently did the two learner populations (IL and AD) perform?
The following differences were not statistically significant given the number of study participants. Note that, per the first question in section 3a above, the app-delivered learners entered the learning experience with less familiarity. To the extent that the answers to that question represent a pre-test of sorts, the small gap in favor of app-delivered learning widens somewhat (but again, not to the point of statistical significance). A rough illustration of the significance test appears after the list below.
- Post-test score
  - Instructor-led
    - Average: 63.3%
    - Median: 61.8%
  - App-delivered
    - Average: 65.6%
    - Median: 66.0%
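As a rough illustration of why the gap above falls short of significance, the sketch below runs Welch's two-sample t-test on the reported summary statistics. The means and group sizes come from the study; the standard deviation is an assumed placeholder (none was reported), so the p-value it prints is illustrative only.

```python
# Illustrative significance check for the post-test gap.
# Means and group sizes are from the study; ASSUMED_SD is a
# hypothetical standard deviation, since none was reported.
from scipy.stats import ttest_ind_from_stats

ASSUMED_SD = 15.0  # hypothetical SD, in percentage points

result = ttest_ind_from_stats(
    mean1=63.3, std1=ASSUMED_SD, nobs1=19,  # instructor-led respondents
    mean2=65.6, std2=ASSUMED_SD, nobs2=25,  # app-delivered respondents
    equal_var=False,  # Welch's t-test
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.2f}")
# With a 15-point SD, the 2.3-point gap gives p around 0.6, far
# above the usual 0.05 threshold, consistent with the reported tie.
```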
3c. What might be done differently?
It’s valuable to hypothesize how the learning experiences and the study might have been designed differently so that one form of learning would have yielded better outcomes than the other.
- Despite the lack of statistical significance, the app-delivered learning did show a small improvement over the instructor-led learning. If that gap persisted with a larger set of participants, it might reach statistical significance (see the power-analysis sketch after this list).
- Had MITili/CRC provided more dynamic instructional materials for the instructor-led sessions, outcomes for that mode of delivery might have improved.
- Modifying the app to add interactive elements (for instance, inserted quiz questions) might have improved outcomes for that mode of delivery.
- Choosing subject matter of greater complexity might have swung the balance in favor of one mode of delivery or the other.
- Of course, instructor-led and app-delivered learning aren’t the only options; delivery that included video or even virtual reality activities might have yielded different outcomes from the two modes examined.
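To put a number on the first bullet above, the sketch below estimates how many respondents per group would be needed to detect the observed 2.3-point gap with conventional 80% power. As in the earlier sketch, the standard deviation is an assumed placeholder, so the printed sample size is a ballpark only.

```python
# Ballpark sample size for detecting the observed gap.
# The effect size depends on ASSUMED_SD, a hypothetical value.
from statsmodels.stats.power import TTestIndPower

ASSUMED_SD = 15.0               # hypothetical SD, in percentage points
gap = 65.6 - 63.3               # observed difference in mean post-test score
effect_size = gap / ASSUMED_SD  # Cohen's d of roughly 0.15 (a small effect)

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,              # two-sided significance level
    power=0.80,              # conventional target power
    alternative="two-sided",
)
print(f"~{n_per_group:.0f} respondents per group")
# Roughly 650-700 per group under these assumptions -- far more
# than the 19 and 25 respondents available at the conference.
```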
3d. Implications for organizations
Given the statistical tie, what might we conclude from the results? One hypothesis going into the study was that the interactivity of instructor-led learning would yield better outcomes than the relatively static app-delivered option as currently configured. With equal outcomes, we might turn our attention to other reasons to choose one mode over the other.
Two such reasons are cost and learner convenience. The cost comparison depends on the instructor cost versus the cost to create the app; it generally favors the instructor for lower numbers of learners and the app (or another scalable approach) for higher numbers. A simple break-even sketch follows below.
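That break-even logic can be made concrete with a quick calculation. Every figure below (app build cost, instructor fee, session size) is a hypothetical placeholder rather than a number from the study.

```python
# Break-even sketch: at what number of learners does a one-time
# app build become cheaper than running instructor-led sessions?
# All cost figures are hypothetical placeholders.
APP_BUILD_COST = 50_000           # one-time app development cost ($)
INSTRUCTOR_FEE_PER_SESSION = 500  # instructor cost per session ($)
LEARNERS_PER_SESSION = 10         # session size, as in the study design

def instructor_cost(n_learners: int) -> int:
    """Total instructor cost: one fee per session of up to 10 learners."""
    sessions = -(-n_learners // LEARNERS_PER_SESSION)  # ceiling division
    return sessions * INSTRUCTOR_FEE_PER_SESSION

# Find the first learner count at which the app is no longer costlier.
n = 1
while instructor_cost(n) < APP_BUILD_COST:
    n += 1
print(f"App breaks even at roughly {n} learners")
# About 1,000 learners under these placeholder numbers; with fewer
# learners the instructor is cheaper, with more the app wins.
```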
The convenience comparison tends to favor the app or another asynchronous approach. Learners (or their employers) forgo travel costs, and learners may access the material on their own schedule rather than on the instructor’s schedule or a cohort-based compromise.
MITili appreciates the opportunity that CRC provided to join its conference and conduct the experiment described above. The ability to test hypotheses, even when no differences emerge, is crucial to the advancement of the science of learning.