Understanding learning differences and how best to accommodate them in the classroom is critical to providing high quality, equitable education. But with relatively small numbers of both funded researchers and study participants in the special education field, the research process is often slow and challenging.
A team of professors at the UVA School of Education and Human Development – Vivian Wong, Bill Therrien and Bryan Cook – believes that open science, a movement to make scientific research transparent and accessible, could offer a solution.
Their new pilot project, called the Special Education Research Accelerator, uses open science principles in a bid to shake up how special education research operates. The platform is testing whether crowdsourcing, or pooling data from multiple teams of researchers across the country, could compress research that would typically take decades into a year or less – transforming the education research landscape.
What is open science?
In recent years, a growing open science movement has emerged within the social sciences, fueled by reports that researchers have been unable to replicate findings from many prominent studies published in distinguished journals.
Concerns about replication pose a threat to the value of applied research. “Replicability of results is a cornerstone of science,” said Wong. “If we ask policymakers and practitioners to make decisions based on scientific evidence, we want confidence that the results are replicable for some target population of interest.”
According to Cook, the literature base may be biased toward results that are new, positive, and statistically significant. Researchers are human and are incentivized (consciously or not) to pursue research topics or strategies that provide such results. To combat these realities, the open science movement promotes practices like open access journals, shared data, and registered reports – where a journal agrees to publish a study before it’s completed, whatever the results turn out to be.
“The hard part about talking about open science is that it really is an umbrella term,” Cook said. “But the basic idea is that we want to make the process of science as open as possible to improve transparency, credibility, and impact.”
Another benefit, Cook explained, is that open science techniques help diversify and democratize who is able to conduct research. By connecting researchers and pooling resources, crowdsourcing opens the door for small teams and individuals without their own funding to participate in high-quality, large-scale research projects.
Diversifying the researchers themselves ultimately leads to more diverse and representative data. “With [more diverse researchers] comes diversification of the regions and the students that are represented in that research,” Cook said. That, in turn, improves the research findings by making them more broadly generalizable.
Interest in open science principles and how to apply them to education led to many conversations with UVA psychology professor Brian Nosek, co-founder and executive director of the Charlottesville-based Center for Open Science. Inspired by the Psychological Research Accelerator, which crowdsources data at an international level with hundreds of research partners, Cook, Wong, and Therrien set out to create a similar platform for special education.
The value of crowdsourcing
The Special Education Research Accelerator focuses on one important open science principle: crowdsourcing data collection. To explain the value of crowdsourcing, Cook likes to quote a 2018 publication: “Crowdsourcing flips research planning from ‘What is the best we can do with the resources we have to investigate our question?’ to ‘What is the best way to investigate our question, so that we can decide what resources to recruit?’”
In a niche research area like special education, sharing data sets is particularly promising because it’s challenging for individual researchers to put together large data sets on their own. Studying rare conditions or disabilities on a large scale is even more difficult.
“Education – especially special education – research is often underpowered, largely due to relatively small numbers of participants in a study,” Cook said. “To do an adequately powered group study where you’ve got a control condition and an experimental condition, and large groups to really test out the effects of an intervention, it just doesn’t happen often.”
With grant funding from the Institute of Education Sciences, the team is developing and testing the platform. In collaboration with Brian Wright at the UVA School of Data Science, they have already launched a website, developed resources and training, and created templates for partnership agreements.
So far, the team has recruited about 70 research partners in total, with plans for more systematic recruitment as the project grows. They are currently testing out the platform with a pilot study focused on teaching science vocabulary to students with autism. The study is a replication of a 1994 study that examined the effects of different teaching methods on the acquisition and retention of science facts among elementary-age students with high-functioning autism.
With 21 planned research partners, the study will include approximately 150 participants – a relatively large number for a special education study. The participating research partners are spread across the country, representing each of the nine census divisions. The website hosts training videos and serves as a hub for each of the research partners throughout data collection, allowing the team to track each research partner and pool the data from every site.
The future of special education research
Ultimately, the team envisions a network of special education researchers – working with carefully tested infrastructure and processes – conducting high-quality, large-scale, and open replication studies.
“The long-term goal is to provide researchers with access to both the technological tools for collecting, processing, and analyzing data in systematic replication studies, as well as to methodological experts on the best ways to design, implement, and analyze systematic replication studies in field settings,” Wong said.
With the tools and expertise readily available on a platform like SERA, teams of independent researchers could essentially conduct scores of replication studies at once.
“If we do this on a broad scale, we could look at different replications and see what are the boundaries of the effectiveness of a practice,” Cook said. “Who does it work for, who does it not work for? In what settings does it work and not work? For what outcomes does it work or not work? Instead of one research team doing a series of studies that might take decades, if we have 50 research partners, we could find all that out in a year.”
In short, Therrien said it’s about the power of bringing special education researchers together. “When we band together as a field, we can do things we would never be able to do on our own,” he said. “Multiply that many times over and you can see what a game changer crowdsourcing research could be.”
Cook emphasized that the goal is not to point fingers or discourage researchers from pursuing any traditional small-scale studies, but to improve the credibility and accessibility of research by encouraging new methods. Not everything should be crowdsourced – smaller studies provide important opportunities for innovation, for instance – but he would like to see crowdsourced research as a viable alternative. “I think it’s just a fascinating way to think about research a little differently,” he said. “We’d like to see it as something that’s on the menu of possibilities for education researchers to help advance the field.”
According to Therrien, researchers also hope to someday expand their work into other sub-disciplines of education. “Open science is about sharing and working together,” he said. “We will continue with that spirit in mind.”