What role does research play in EdTech decision-making?
Published by: WCET | 8/3/2017
Tags: Research, Technology
How are EdTech-related decisions made at your institution? Do your decision-makers review research when selecting EdTech for use in the classroom? This week we welcome Fiona Hollands of Teachers College, Columbia University, to discuss the role of research in EdTech decisions. Thank you, Fiona, for this post!
Enjoy the read,
~Lindsey
In the spring of 2016, I was invited to participate in a symposium that aimed to bring together a variety of stakeholders – researchers, entrepreneurs, school district and higher education leaders, investors, philanthropists, teachers, and professors. The symposium focused on the role of efficacy research in the development, adoption, and implementation of educational technology. Ten working groups were formed to study various topics to present at a gathering of the entire group in May of this year.
My group was tasked with investigating the role of research in higher education EdTech decision-making. As a researcher myself, I found this topic of particular interest. I always wonder whether my work makes any difference to what practitioners do.
My big takeaway from the symposium: we collectively need to find more and better ways to use research to inform decisions about acquiring and using technology in education to improve student outcomes.
To put things in perspective before homing in on the specific role of research, we set out to understand how EdTech decisions are made in higher education:
We interviewed 52 people who actively participate in EdTech decision-making at their colleges or universities: CIOs, presidents, directors of IT, digital learning, or eLearning, and other administrators and academics. Our sample included 2-year and 4-year institutions, public and private, for-profit and non-profit.
Our line of questioning implicitly assumed that EdTech decision-making is rational, that is, it starts with a need and ends with a solution.
In practice, we found that wasn’t always the case. There were a number of situations in which an EdTech administrator came across an EdTech product or service that seemed too appealing to pass up. They purchased the product and then engaged faculty members in trying to figure out how to make it useful in the classroom.
There is a formal name for this type of decision-making – it’s called the garbage can model, in which solutions go looking for problems to attach themselves to rather than the other way around.
But for the majority of EdTech decisions described to us, the process did start intentionally with one or more specific educational goals to be addressed – for example, providing individualized math instruction at scale – and proceeded to a search for viable solutions.
Final decisions about EdTech acquisitions were most frequently made by administrators. Non-profit institutions usually engaged faculty members and students in testing out different EdTech options and providing input about usability and preferences before making a final selection. This approach helps to create buy-in, which is conducive to more successful implementation. While a non-profit might spend 1-3 years (and a lot of stakeholder time) choosing among 2-3 platform options that really aren’t that dissimilar from each other, for-profits sometimes reported making important EdTech decisions around a C-suite table in the course of one afternoon. If faculty and student input was sought, it was generally after the decision had been made.
One interviewee at a for-profit institution amusingly contrasted non-profit and for-profit decision-making as follows: “Our previous president was the Chancellor of University of Maine’s system. When he came here, he said the difference was like [the difference] between driving a cruise ship and driving a sports car. Kind of good and bad. You could make bad decisions really quickly.”
There’s probably a happy medium that allows the institution to build buy-in and capacity for a technology adoption without being an excessive drain on time and resources.
On average, decision-makers considered six distinct aspects of EdTech products during the selection process. These fell into the following five categories:
Category of decision criteria | % of interviews in which criteria in this category were listed
Features and functionality | 95%
Feasibility of implementation | 82%
Cost or Return-on-Investment considerations | 82%
User experience or usability | 61%
Vendor characteristics | 41%
No one listed the existence of research about the product’s impact on student outcomes as a criterion for choosing among the possible solution options. However, everyone claimed to do research about EdTech, and many collected significant amounts of data to inform their decisions.
What they meant by “doing research” varied. In all cases, this included an ongoing effort to stay abreast of EdTech developments and applications through constant interaction with colleagues at conferences, via social media and internet sources, and by reading EdTech news and publications. Peer-reviewed academic journals were listed as a source of EdTech information in only 9% of interviews (which is one reason I am writing this blog post instead of revising and resubmitting a journal article I wrote previously).
One explanation given for the lack of reliance on existing research evidence is that the results of studies conducted in different contexts and with different student and faculty populations may not be relevant in the decision-maker’s own context. Instead, decision-makers prefer to collect their own local evidence. For example, for almost 40% of the EdTech decisions discussed in our interviews, the college or university engaged in a pilot of one or more alternative products. Typically, this would involve asking a portion of the faculty to use the product in regular classes during the semester to assess pedagogical usefulness, ease of use, and feasibility of implementation. In a few cases (11%), impact on student engagement, completion, retention, or other student outcomes was also investigated at this stage. Alarmingly, impact on actual learning was rarely discussed at this point. And, curiously, impact on student outcomes was far more often assessed after a product had been acquired and implemented. While these data may be helpful at that point to make decisions about whether to continue using a product, it might be wise to put this horse before the initial purchasing cart.
One of the consequences of this preference for local evidence is that the same products are simultaneously being piloted at many institutions, at no small cost, without the results being shared. Context is certainly critical, but it is likely that colleges and universities have an exaggerated sense of their uniqueness when it comes to end-user needs for and reactions to technology. A second issue is that most of these pilots are not particularly rigorous in terms of assessing whether students using one technology solution perform better academically than those using another solution, or no technology at all.
It may be the case that pilots provide more value in building buy-in and gradually ramping up implementation capacity than in assessing technology’s contribution to improved learning. To achieve the latter, more rigorous studies will be needed, ideally with comparison groups. It might be helpful for someone – perhaps WCET – to provide guidelines for robust design of EdTech pilot studies. It would also be helpful to establish an online repository for members to share the results of their internal EdTech studies. If study results are accompanied by descriptions of the implementation context and of the types of students and faculty involved, other institutions can look for “near peers” to gauge the potential for a technology product’s success at their own site.
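As a rough illustration of what a more rigorous pilot analysis might involve, the sketch below compares end-of-term scores between a pilot group and a comparison group using Welch’s t-test. The data are hypothetical, and a real study would also need to consider how students and instructors were assigned to each group and what other differences might exist between sections.

```python
# Minimal sketch of a comparison-group analysis for an EdTech pilot.
# All scores below are hypothetical and for illustration only.
from statistics import mean, stdev
from scipy import stats

# End-of-term scores for sections using the piloted tool vs. sections that did not.
pilot_scores = [78, 85, 69, 91, 74, 88, 80, 77, 83, 90]
comparison_scores = [72, 81, 65, 84, 70, 79, 75, 73, 78, 82]

# Welch's t-test: is the difference in means larger than chance would suggest?
t_stat, p_value = stats.ttest_ind(pilot_scores, comparison_scores, equal_var=False)

print(f"Pilot mean:      {mean(pilot_scores):.1f} (sd {stdev(pilot_scores):.1f})")
print(f"Comparison mean: {mean(comparison_scores):.1f} (sd {stdev(comparison_scores):.1f})")
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
```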
More detailed findings and recommendations from our study and some resources that EdTech decision-makers shared with us are available at https://www.edtechdecisionmakinginhighered.org.
Fiona Hollands
Center for Benefit-Cost Studies of Education
Teachers College, Columbia University