The adoption of educational technology has increased significantly over the past decade, and it is clear that K-12 schools are becoming accustomed to and embracing new technology norms. The next step for school leaders is to focus on strategically purchasing educational technology to ensure these tools have a true positive impact on teaching and learning.
But effectively evaluating EdTech products is no easy task. School districts must balance diverse needs, ensure data privacy, and align technology initiatives with educational goals. Part of the process involves navigating budget constraints, integrating new tools into existing systems, and ensuring access for all students. To uncover how school districts are tackling these challenges, EdSurge spoke with three education technology leaders.
Susan Ulam, Director of Educational Technology at Rockford Public Schools in Illinois, draws on her background as a classroom teacher, curriculum dean and instructional coach to bridge the gap between IT initiatives and classroom instruction. April Chamberlain of Trussville City Schools in Alabama also began her career in the classroom before assuming a key role in aligning technology initiatives with educational needs. Jessica Peters of KIPP DC Public Schools leverages her experience as a classroom teacher and instructional technology coach to oversee the integration of educational technology across 22 schools.
Together, they provided valuable insight into the challenges and strategies surrounding the procurement and implementation of educational technology in their respective districts, as well as shared expectations for participating in the Benchmarking Project. Benchmarking is an ISTE research project funded by the Walton Family Foundation and the Chan Zuckerberg Initiative to support school districts working to improve how they assess, measure, and report student progress based on needs and context. As part of the Benchmarking Project, ISTE worked with six public school districts across the United States to explore practice issues related to assessment and selection within their districts.
EdSurge: How does your district approach evaluating and selecting EdTech products, and what challenges do you face in the procurement process?
Ulam: Rockford Public Schools is a relatively large district with 27,000 students. We have a high level of mobility within the district – almost 20% – and balance the varied needs of each school. As such, we strive to respect the professional choices of our teachers while providing a consistent education and experience to families across the district.
When a request for a new EdTech product comes in, there are checkpoints we use to evaluate whether the tool meets our needs: Does it duplicate an existing tool? How is this tool different and better? Can the pilot provide a true trial? [Product evaluation] isn’t just a matter of whether teachers and students like the tool; it has to be worth the time and effort it takes to learn to use it effectively.
Chamberlain: We ask similar questions. Our state has a multi-year program that helps evaluate current resources to determine if they need to be realigned, eliminated, or supplemented with new ones. We use a multi-tiered system of support (MTSS), so bringing together representation from all stakeholders when considering EdTech is important, but also a challenge.
Last school year, we audited our district’s programs, initiatives, and projects. The district meeting was attended by representatives from technology, student services, administration, counseling, and curriculum. Then, principals conducted a similar audit at the building level. We started by listing all the instructional and operational EdTech products that teachers were using, which revealed some surprising results. We then categorized these resources by subject, such as English, math, behavior, or basic health, and further subdivided them by the settings (Tier 1, 2, 3) that each product addressed. This allowed us to identify gaps and overlaps with EdTech products.
Going forward, we have a form that teachers fill out to request new products. Teachers answer questions about the tool, including technical details and how it fits into or improves their instruction. The completed form is sent to the school’s technology team, who discuss the product and compare it to products already in use at the school or district. Once approved at the school level, we will move forward with a pilot to determine if there is sustained value in introducing the new product in other settings in the school or district.
Peters: At KIPP DC, we have a few checkpoints: Mid-year, around January or February when budget planning begins, we do a quick analysis of all current products to identify underutilized, ineffective, or overlapping products. We are generally very open to pilot requests, but we may decline some where there is extreme overlap. Each summer, we do a thorough effectiveness analysis of all core and pilot products. Because of the work of the KIPP Foundation and strong backing from senior education leaders, some products may escape data review, and we have to adapt accordingly.
EdSurge: How does the evaluation framework and tool support educators and district leaders in evaluating and selecting educational technology products?
Peters: This tool is more thorough than any I’ve used and covers just about every question you can think of. If you run a product through the entire tool, you’ll have more confidence that it’s actually appropriate to use and meets all the criteria. It is a heavy tool, though, so going through the entire framework is time-consuming and not something you can ask a teacher or your average school principal to do. But I think it’s great for district-level evaluations.
Ulam: When COVID-19 first started, we were overwhelmed by the thousands of products that teachers were using. We needed a better language, a framework, that could address all of the products. This tool helped us filter out the marketing language vendors use to describe their products and ask questions like, “What are the accessibility features? Where are they? Is it interoperable?” This makes our evaluation more fact-based and removes emotion and opinion.
There are a lot of questions in the framework, so we took each part of it and developed some guiding questions based on that part; if a product passes those questions, we dig deeper. [The tool] has given me the opportunity to take a deep breath before buying a shiny new product.
Chamberlain: We learned to change the question [we ask] vendors from, “Does this product do this?” to, “Tell me how this product does this.” This tool guides us to ask the right questions and think about what we’re trying to accomplish with our products. So instead of saying, “I want this math product,” we say, “I want a way to better assess the skills of my third-grade students, who our data shows are low-performing.” That’s very powerful.
Ulam: We need to think about the role of technology in schools and how we evaluate whether products are improving teaching and learning. We are at a critical crossroads where we understand data privacy and online presence in ways we haven’t had to before. Things were different when kids were simply playing The Oregon Trail. The risks are rising. We’ve been hit by ransomware ourselves. So it’s imperative that data privacy is part of the product evaluation discussion.
Peters: The framework takes the emotion out of the conversation and instead grounds it in data. The great success we’ve seen at KIPP DC is that we no longer base [product purchasing] conversations on emotion or on how cool something looks. Now we have an effectiveness analysis that tells us what’s working and is worth the effort. This tool has made a huge difference in the standards we look for in a product.