CASPER Research Projects
Welcome! Here you can learn more about our research projects. Below you can read about our ongoing projects; the menu at the left links to our completed projects.
Ongoing Projects
Motivated by the findings of the special issue documenting the scarcity of replication research, especially direct replications, in special education (see Cook, Therrien, & Coyne, 2016), CASPER is conducting a series of direct replications.
We are replicating five influential studies in special education that utilize various research designs and focus on different outcomes and learner populations. See below for brief descriptions of each of the five direct replication studies:
Nick Gage, Betsy Talbott, and Bryan Cook
Gage, Talbott, and Cook will conduct a replication of Morgan, Frisco, Farkas, and Hibel’s (2010) study, which examined the efficacy of special education in math, reading, and behavior among a nationally representative group of 5th-grade students using propensity score matching and the ECLS-K data set. Gage et al. will replicate the study with a nationally representative group of 4th-grade students, focusing on mathematics achievement, using the National Center for Teacher Effectiveness Main Study data set.
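For readers unfamiliar with the technique, the sketch below illustrates the general logic of propensity score matching using simulated data. It is a minimal, hypothetical example (the variables, sample size, and models are illustrative assumptions) and does not reflect the ECLS-K or NCTE data sets or the original authors’ analysis code.

```python
# Minimal illustration of propensity score matching (hypothetical data,
# not the ECLS-K or NCTE analyses).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "prior_achievement": rng.normal(size=n),
    "ses": rng.normal(size=n),
})
# "Treatment" = receiving special education services (simulated here).
logit = -1.0 - 0.8 * df["prior_achievement"] + 0.3 * df["ses"]
df["sped"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df["math_score"] = df["prior_achievement"] + 0.2 * df["ses"] + rng.normal(size=n)

# 1. Estimate propensity scores: P(sped | covariates).
X = df[["prior_achievement", "ses"]]
df["pscore"] = LogisticRegression().fit(X, df["sped"]).predict_proba(X)[:, 1]

# 2. Match each treated student to the nearest untreated student on the score.
treated = df[df["sped"] == 1]
control = df[df["sped"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare outcomes in the matched sample (effect on the treated group).
att = treated["math_score"].mean() - matched_control["math_score"].mean()
print(f"Estimated effect of special education on math scores: {att:.2f}")
```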
Michael Coyne and Bill Therrien
CASPER members Michael Coyne and Bill Therrien are conducting a direct replication of O’Connor, White, and Swanson’s (2007) study, which compared the effectiveness of repeated reading and continuous reading on students’ reading fluency and comprehension achievement.
Replication of Jenkins, Schulze, Marti, and Harbaugh’s (2017) study
Two analyses will replicate Jenkins, Schulze, Marti, and Harbaugh’s (2017) study:
CASPER member Chris Lemons and Samantha Gesel are conducting a direct replication of Jenkins et al.’s (2017) study, which estimated students’ “true growth” on oral reading fluency CBM measures, simulated different progress monitoring schedules, and conducted OLS regression and binomial tests to examine the relative decision-making accuracy and timeliness of each schedule. In addition to replicating the analyses used by Jenkins and colleagues with a different sample of students, Gesel and Lemons will conduct additional extension analyses that will use (a) a priori decision rules to assess the sufficiency of each progress monitoring schedule’s accuracy and (b) statistical tests to assess differences in timeliness across progress monitoring schedules. (A simplified sketch of this schedule-comparison logic appears after the two replication descriptions below.)
CASPER member Erica Lembke and Erica Mason are conducting a conceptual replication of Jenkins et al.’s (2017) study. They will replicate the analyses used by Jenkins and colleagues with a different sample of students, using CBM measures in mathematics.
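As a rough illustration of the kind of simulation logic involved (not the Jenkins et al. data, parameters, or analysis code), the sketch below generates weekly CBM scores with a known growth rate, samples them under two hypothetical monitoring schedules, fits OLS slopes, and compares how often each schedule yields the correct responder decision. All numbers are assumptions chosen for illustration.

```python
# Illustrative comparison of progress monitoring schedules (assumed
# parameters; not the Jenkins et al. (2017) data or analyses).
import numpy as np

rng = np.random.default_rng(1)
true_slope = 1.5     # assumed "true growth": words correct per minute per week
baseline = 50.0      # assumed starting oral reading fluency
residual_sd = 10.0   # assumed week-to-week measurement noise
goal_slope = 1.0     # decision rule: slope above this counts as adequate response
n_students = 2000
weeks = np.arange(20)

def decision_accuracy(schedule):
    """Fit an OLS slope to scores observed on the scheduled weeks and report
    how often the estimated slope matches the student's true responder status."""
    correct = 0
    for _ in range(n_students):
        scores = baseline + true_slope * weeks + rng.normal(0, residual_sd, len(weeks))
        slope = np.polyfit(weeks[schedule], scores[schedule], 1)[0]  # OLS slope
        correct += (slope > goal_slope) == (true_slope > goal_slope)
    return correct / n_students

weekly = np.arange(20)
biweekly = np.arange(0, 20, 2)
print("Weekly schedule accuracy:  ", decision_accuracy(weekly))
print("Biweekly schedule accuracy:", decision_accuracy(biweekly))
```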
Jason Travers, Daniel Maggin, and Bryan Cook
Travers, Maggin, and Cook will conduct a replication of Sham and Smith’s (2014) review examining publication bias in single-case design research. Sham and Smith (2014) compared points of nonoverlapping data between published studies and unpublished dissertations that used experimental single-case designs to examine the effects of pivotal response therapy for individuals with autism. They found that published studies had, on average, 22% more nonoverlapping data points than unpublished dissertations. The replication study will use a variety of effect sizes to evaluate differences between published and unpublished research.
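For context, the sketch below shows how one simple nonoverlap metric, the percentage of nonoverlapping data (PND), is typically computed for a single A-B data series. The scores are invented for illustration and are not drawn from Sham and Smith (2014) or from the replication.

```python
# Minimal sketch of the percentage of nonoverlapping data (PND) for one
# single-case (A-B) series; the scores below are made up for illustration.
def pnd(baseline, intervention, improvement_is_increase=True):
    """Share of intervention-phase points that exceed (or fall below) every
    baseline point, expressed as a percentage."""
    if improvement_is_increase:
        threshold = max(baseline)
        nonoverlapping = [x for x in intervention if x > threshold]
    else:
        threshold = min(baseline)
        nonoverlapping = [x for x in intervention if x < threshold]
    return 100 * len(nonoverlapping) / len(intervention)

baseline_phase = [2, 3, 2, 4, 3]         # hypothetical baseline observations
intervention_phase = [5, 6, 4, 7, 8, 6]  # hypothetical intervention observations
print(f"PND = {pnd(baseline_phase, intervention_phase):.0f}%")
```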