Key obstacles include the following: (1) there is substantial variation in which types of user interactions are recorded and how developers report them; and (2) usage data vary widely in accessibility and ease of use.
While the tool is free and open for any school or district to use (and many have done so already, with over 1,700 individuals registered), we worked closely with 12 districts to pilot the RCE Coach, and half of those pilots are already complete. The pilots spanned both evaluation designs and studied how selected math or literacy products affected student academic achievement. Below are eight lessons we have learned from these first pilots.
In the coming months, we’re recruiting more districts to pilot with us. We’re also collecting and building resources aimed at helping districts and schools define concrete outcome measures for ed tech applications that fall outside the realm of student academic achievement. These include outcomes for non-academic student growth (such as grit, motivation, self-awareness, and participation) and outcomes for ed tech tools that support teacher professional development and staff productivity.
Many developers have shown interest in getting involved with the RCE Coach as a way to demonstrate the value of their products and deepen engagement with districts. However, we have also encountered reluctance from a number of developers to take part in RCEs, primarily because of the possibility of unfavorable results, the potential drain on time and staff, and their lack of control over implementation.
At the moment, people within a district may use inconsistent approaches to evaluating the effectiveness (and cost-effectiveness) of ed tech. Consequently, weighing the relative effectiveness of different technologies and prioritizing which tools to use can be challenging. One pilot district views the RCE Coach as a tool for encouraging common approaches to evaluation across a school, district, or state so that conclusions can be drawn from more comparable information.
We hope that these concerns will fade as RCEs become more widely recognized. We also hope that developers will come to see RCEs as an opportunity to learn how and when their products are most effective and to build their evidence base.
Moving from broad to narrow research questions is an important part of the process.
Rebecca Griffiths is a senior researcher at SRI International’s Center for Technology in Learning.
We hope to see more districts and schools pilot the RCE Coach and help us continue to learn and build on the lessons we have already gleaned. For those interested, you can complete a brief survey here.
In discussions with district staff, we heard repeatedly that people want to know whether the technology they use is making a difference for students and is worth the cost, and that such evidence should be generated more rigorously and systematically.
Many districts need to know whether the technology they are already using is helping students, but they lack the ideal conditions for a causal study. For example, a school might have rolled out a new program to all students, but only certain teachers actually used it with their students.
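To make the matched comparison idea concrete in that scenario, here is a minimal sketch, assuming hypothetical classroom records, that pairs each classroom whose teacher used the program with the non-using classroom that has the closest prior score average. The field names and values are illustrative only, and the RCE Coach's own matching procedure may differ.

```python
# Hypothetical classroom records: (classroom ID, used the program?, prior score average).
records = [
    ("A101", True, 72.0), ("A102", False, 71.5),
    ("B201", True, 65.0), ("B202", False, 64.0),
    ("C301", False, 80.0), ("C302", True, 79.0),
]

users = [r for r in records if r[1]]
non_users = [r for r in records if not r[1]]

# For each using classroom, find the non-using classroom with the closest prior average.
matches = []
available = list(non_users)
for room_id, _, prior in users:
    best = min(available, key=lambda r: abs(r[2] - prior))
    matches.append((room_id, best[0], abs(best[2] - prior)))
    available.remove(best)  # match without replacement

for user_room, match_room, gap in matches:
    print(f"{user_room} matched to {match_room} (prior-score gap: {gap:.1f})")
```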
A large district with a central data analysis, program evaluation, or research unit may decide to train school staff to use the RCE Coach in order to build local capacity and enable the evaluation of more technologies than a central team could handle alone. Several state departments of education also expressed interest in promoting use of the RCE Coach across their districts.
Districts can, in theory, conduct RCEs without developer assistance, provided they have information about who is using the technology and who is not. However, RCEs will often yield more meaningful insights about effectiveness and strength of implementation with the cooperation of developers. Furthermore, a productive partnership can ease the process of assembling data sets and make the best use of usage data.
The RCE Coach can support common approaches to evaluation.
Jackie Pugh was a research fellow at the Office of Educational Technology at the U.S. Department of Education for a year, through May 2017.
Rapid-cycle evaluations can fall into the tricky space of being perceived as important but not urgent. As a result, they are susceptible to delays when more pressing tasks arise.
- Matched comparison, which constructs two similar groups of users and non-users of an ed tech application already in use at a school site; and
- Randomized pilot, which randomly assigns participants to groups of users and non-users of an ed tech application that has not yet been implemented (a minimal assignment sketch follows this list).
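To illustrate the randomized pilot design, here is a minimal sketch, using made-up classroom IDs, of how an analyst might randomly assign classrooms to a pilot group and a comparison group before rollout; it is not part of the RCE Coach itself.

```python
import random

# Hypothetical list of classroom IDs eligible for the pilot (illustrative only).
classrooms = ["A101", "A102", "B201", "B202", "C301", "C302", "D401", "D402"]

# Fix the seed so the assignment can be reproduced and audited later.
rng = random.Random(20170501)
shuffled = classrooms[:]
rng.shuffle(shuffled)

# Split the shuffled list in half: first half pilots the tool, second half does not.
midpoint = len(shuffled) // 2
pilot_group = sorted(shuffled[:midpoint])
comparison_group = sorted(shuffled[midpoint:])

print("Pilot (uses the ed tech tool):", pilot_group)
print("Comparison (business as usual):", comparison_group)
```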
Practices for collecting, reporting, and sharing usage data are still emergent.
In theory, detailed information about whether and how students, teachers, and other users interact with systems, as well as about their performance on embedded assessments, should be a treasure trove for ed tech evaluations. In practice, several obstacles impede routine use of these data for evaluation purposes.
Rapid-cycle evaluations (rigorous, scientific approaches to research that give decision makers timely and actionable evidence of whether operational changes improve program outcomes) work best for narrow questions that address specific implementations of technology, but most districts start from a different point.
Additionally, as we continue to pilot the RCE Coach, we plan to document, in detailed case studies, the areas that cause the most confusion in the rapid-cycle evaluation process. For example, be on the lookout for an upcoming resource, embedded directly in the RCE Coach, that explains how to design a successful pilot. It covers topics such as randomization, number of participants, unit of assignment, data accessibility, and selecting meaningful probability thresholds. We have also added a facilitator’s guide on how to demonstrate the platform, for school and district leaders who would like to lead their own trainings on the RCE Coach.
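As a rough illustration of what a probability threshold can mean (this is our own simplified example, not the RCE Coach's actual computation), the sketch below uses a bootstrap over hypothetical scores to estimate the probability that the pilot group outperformed the comparison group, which can then be compared against a pre-chosen decision threshold.

```python
import random

# Hypothetical end-of-pilot scores for two small groups (illustrative values only).
pilot_scores = [78, 82, 74, 88, 69, 91, 77, 85]
comparison_scores = [72, 80, 70, 79, 66, 84, 73, 75]

rng = random.Random(7)

def resample_mean(scores):
    """Mean of a bootstrap resample drawn with replacement."""
    return sum(rng.choice(scores) for _ in range(len(scores))) / len(scores)

# Estimate how often the pilot group's resampled mean beats the comparison group's.
trials = 10_000
wins = sum(resample_mean(pilot_scores) > resample_mean(comparison_scores)
           for _ in range(trials))
probability = wins / trials

threshold = 0.75  # a pre-chosen decision threshold; districts would pick their own
print(f"Estimated probability the pilot group scored higher: {probability:.0%}")
print("Meets threshold" if probability >= threshold else "Below threshold")
```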
For the last 18 months, the Office of Educational Technology at the U.S. Department of Education, in partnership with the Department’s Institute of Education Sciences (IES), has been working with Mathematica Policy Research and SRI International to build the Rapid Cycle Evaluation Coach (the RCE Coach). The RCE Coach is a free, open, web-based platform that helps districts and schools generate evidence about whether their educational technology apps and tools are improving outcomes for students. The platform was released in beta in October 2016 and updated in January 2017. The RCE Coach currently includes two types of evaluation designs:
We hypothesize that districts are most likely to complete evaluations when they have staff devoted to data analysis or program directors who are less exposed to the pressures of day-to-day school operations. Over the next year, we hope to learn more about the skill sets needed to navigate the RCE Coach independently and how the RCE Coach can best be integrated into existing operations.
Lesson 5. Large systems may see the RCE Coach as a resource for local capacity building.
Lesson 1. The fundamental problem addressed by the RCE Coach, the need for stronger evidence to inform decisions about ed tech procurement and implementation in schools, has broad resonance in the field.
Therefore, it is important that the RCE Coach help users decide what types of analyses are appropriate and feasible given their specific circumstances. It is also important to be clear about the strength of the evidence each approach provides so that districts can use the results appropriately.
Ed tech developers are important partners for RCEs.
From a policy standpoint, it may make more sense to encourage developers to invest in standardized reporting functionality than to encourage responsiveness to requests for customized reports. For the RCE Coach, we have developed several templates with common indicators of usage, progress, and performance. However, we recognize that developing standards for system data reporting is likely to be a long-term, more organic process.
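To show what such a template could contain, here is one hypothetical report layout expressed in Python; the field names are our own illustration and not an official RCE Coach or vendor schema.

```python
import csv
import io

# Hypothetical column set covering the three kinds of indicators mentioned above:
# usage, progress, and performance. Field names are illustrative, not a standard.
FIELDNAMES = [
    "student_id",         # de-identified student identifier
    "weeks_active",       # usage: number of weeks with at least one login
    "total_minutes",      # usage: total time in the application
    "lessons_completed",  # progress: units or lessons finished
    "avg_quiz_score",     # performance: mean score on embedded assessments (0-100)
]

sample_rows = [
    {"student_id": "S001", "weeks_active": 8, "total_minutes": 340,
     "lessons_completed": 12, "avg_quiz_score": 81},
    {"student_id": "S002", "weeks_active": 5, "total_minutes": 150,
     "lessons_completed": 7, "avg_quiz_score": 68},
]

# Write the template to CSV text so it could be shared between a vendor and a district.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDNAMES)
writer.writeheader()
writer.writerows(sample_rows)
print(buffer.getvalue())
```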
Many of our pilot districts stated their research questions in rather broad terms. For example, they wanted to know whether technology in general is moving the needle or whether a school-wide, technology-based intervention is working. Rapid-cycle evaluations are most useful for examining whether components of a school improvement plan have the desired effect on student outcomes, or whether the desired effect comes from one particular technology for a targeted group of students.
Having a champion in the right role at the school or district is a must.