October 22, 2024

Emerging Best Practice for Determining ROI #1

As part of my dissertation, I’ve compiled a list of what I think are emerging best practices for determining rate of improvement. In other words, what are the necessary conditions for having confidence in the ROI statistic? The first emerging best practice I’m considering is the use of technically adequate and psychometrically sound measures. If we don’t have good data, there’s no point in creating a trend line or calculating an ROI statistic. One of the best resources is the progress monitoring tools chart. Note that this chart can still be accessed through the National Center on Response to Intervention but is now housed at the National Center on Intensive Intervention. Not all assessments appear on this chart, since tools are submitted voluntarily, but it’s a good place to start if your school team is looking for ideas.

When school-based teams are considering high-stakes decisions such as special education eligibility, the quality of data needed is much higher. Teams need to consider whether the assessments they are using measure what they intend to measure (validity) and whether they measure skills consistently (reliability). Other aspects to consider in relation to generating a stable trend line are whether the measure can detect small increments of growth and whether the assessment can be repeated through alternate forms. Examples of technically adequate measures designed to produce ROI are the computer adaptive tests from Renaissance Learning, STAR Reading and STAR Math. After only four data points, the system will generate a stable trend line that can be used to interpret student progress. A non-example would be teacher-made assessments or unit tests. The latter may help teachers see how students are performing with concepts taught in class, but they have not been validated for the purpose of generating trend lines or ROI from the results.
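To make the ROI statistic concrete, here's a minimal sketch (the scores and weekly schedule are made up, and this is the generic ordinary-least-squares approach, not the specific method any vendor uses): the rate of improvement is the slope of the trend line fit to repeated progress-monitoring scores.

```python
# Hypothetical weekly progress-monitoring scores (e.g., words read correctly).
# ROI is estimated as the slope of an ordinary-least-squares trend line:
# points gained per week.
weeks = [1, 2, 3, 4, 5, 6]
scores = [42, 45, 44, 49, 51, 54]

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(scores) / n

# OLS slope = covariance(x, y) / variance(x)
roi = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores)) / \
      sum((x - mean_x) ** 2 for x in weeks)

print(round(roi, 2))  # → 2.37 points gained per week
```

A slope like this is only as trustworthy as the measure behind it, which is the point of this post: with unreliable scores, the fitted line (and any decision based on it) inherits that noise.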

Questions and comments are welcome! What are your school teams using to document student progress?


Computer Adaptive Tests

I’m in my second year at my current district, where I serve the secondary student population (grades 7-12). My district’s elementary schools are doing a really nice job with their response to intervention (RtI) framework, but we have a lot of work to do at my middle and high schools. At the end of last year, we purchased licenses to use STAR Math and STAR Reading (and STAR Early Literacy for the elementary folks) through Renaissance Learning. I’ve recently been through a couple of trainings on how to administer the assessments and interpret the reports. I’m hopeful that these assessments will do a better job of capturing my older students’ skills, especially for the students in special education, since they’ve been using the same CBM probes for years. Our STAR assessments can be used for universal screening, progress monitoring, and diagnostic purposes. Plus, they tie into the Common Core standards, which is so helpful for teachers who need to make the connection between assessments and high school classes!

Another feature I’m impressed with so far (surprise) is that the system provides you with a Student Growth Percentile. For instance, if a 7th grade student scores a grade equivalent of 4.5 on the Reading assessment, the report will tell me how much a student with that same profile (7th grade scoring 4.5) will typically “grow” by the end of the year. I’ve always wondered “how much growth can we expect?” from our students. I’ll have to see how this plays out for the school year. Because I need another project… :)
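My understanding of the growth-percentile idea, in a toy sketch (all numbers invented, and this is just the general concept, not Renaissance Learning's actual model): compare one student's observed growth against the growth of academic peers who started at the same score, and report where it falls as a percentile.

```python
# Toy illustration of a growth percentile: rank one student's growth
# against the growth observed for peers with the same starting score.
# All values are invented grade-equivalent gains over a school year.
peer_growth = [0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.3]
student_growth = 0.9

# One common convention: percent of peers whose growth was at or below
# this student's growth.
percentile = 100 * sum(g <= student_growth for g in peer_growth) / len(peer_growth)
print(percentile)  # → 70.0
```

So a Student Growth Percentile of 70 would mean the student grew as much as or more than 70% of comparable students, which is what makes "how much growth can we expect?" answerable.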

After attending some sobering workshops at my national school psych convention in February, I’m a little worried about the weight we place on student rate of improvement data when research suggests that we need at least 14 data points to have a reliable oral reading fluency trend line (Christ, Zopluoglu, Long, & Monaghen, 2012)! The STAR assessments can provide a reliable trend line after 4 data points. Think about how much sooner we could be making solid instructional decisions! I’m curious if anyone else is using computer adaptive tests. It will certainly be a learning curve (ha…) for me this year!
