What role does research play in EdTech decision-making?

Published by: WCET | 8/3/2017

Tags: Research, Technology

How are EdTech-related decisions made at your institution? Do your decision-makers review research when selecting EdTech for use in the classroom? This week we welcome Fiona Hollands from Teachers College, Columbia University, to discuss the role of research in EdTech decisions. Thank you, Fiona, for this post!

Enjoy the read,

~Lindsey


In the spring of 2016, I was invited to participate in a symposium that aimed to bring together a variety of stakeholders – researchers, entrepreneurs, school district and higher education leaders, investors, philanthropists, teachers, and professors. The symposium focused on the role of efficacy research in the development, adoption, and implementation of educational technology. Ten working groups were formed to study various topics and present their findings at a gathering of the entire group in May of this year.

My group was tasked with investigating the role of research in higher education EdTech decision-making. As a researcher myself, I found this topic particularly interesting: I always wonder whether my work makes any difference to what practitioners do.

My big takeaway from the symposium: we collectively need to find more and better ways to use research to inform decisions about acquiring and using technology in education to improve student outcomes.

The Role of Research

To put things in perspective before homing in on the specific role of research, we set out to understand how EdTech decisions are made in higher education:

  • Who are the stakeholders?
  • Who are the actual decision-makers?
  • Who identifies the needs to be addressed?
  • Where do these decision-makers get their information about EdTech products and trends?
  • What criteria do they use to choose among alternative products and services?
  • How do they evaluate the different options?

We interviewed 52 CIOs; presidents; directors of IT, digital learning, or eLearning; and other administrators and academics who actively participate in EdTech decision-making at their colleges or universities. Our sample included both 2-year and 4-year institutions, public and private, for-profit and non-profit.

The Garbage Can Decision-Making Model

Our line of questioning implicitly assumed that EdTech decision-making is rational, that is, it starts with a need and ends with a solution.

In practice, we found that wasn’t always the case. There were a number of situations in which an EdTech administrator came across an EdTech product or service that seemed too appealing to pass up. They purchased the product and then engaged faculty members in trying to figure out how to make it useful in the classroom.

There is a formal name for this type of decision-making – it’s called the garbage can model.

But for the majority of EdTech decisions described to us, the process did start intentionally with one or more specific educational goals to be addressed – for example, providing individualized math instruction at scale – and then proceeded to a search for viable solutions.

Final decisions about EdTech acquisitions were most frequently made by administrators. Non-profit institutions usually engaged faculty members and students in testing different EdTech options and providing input about usability and preferences before making a final selection. This approach helps create buy-in, which is conducive to more successful implementation. But while a non-profit might spend 1-3 years (and a lot of stakeholder time) choosing among 2-3 platform options that really aren’t that dissimilar from each other, for-profits sometimes reported making important EdTech decisions around a C-suite table in the course of one afternoon. If faculty and student input were sought, it was generally after the decision had been made.

One interviewee at a for-profit institution amusingly contrasted non-profit and for-profit decision-making as follows: “Our previous president was the Chancellor of University of Maine’s system. When he came here, he said the difference was like [the difference] between driving a cruise ship and driving a sports car. Kind of good and bad. You could make bad decisions really quickly.”

There’s probably a happy medium that allows the institution to build buy-in and capacity for a technology adoption without being an excessive drain on time and resources.

Choosing Among EdTech Options

On average, decision-makers considered six distinct aspects of EdTech products during the selection process. These fell into the following five categories:

Category of decision criteria (% of interviews in which criteria in this category were listed):

  • Features and functionality – 95%
  • Feasibility of implementation – 82%
  • Cost or return-on-investment considerations – 82%
  • User experience or usability – 61%
  • Vendor characteristics – 41%

Notes: There were 44 interviews in which criteria for making a specific EdTech decision were elicited. In total, 277 criteria were named by interviewees; these were initially sorted into 88 categories and subsequently aggregated into the five categories shown above.
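
For readers curious about the mechanics behind these percentages, here is a minimal sketch in Python using entirely hypothetical coded interview data (the interview names and category assignments are illustrative, not the study's actual data). Each interview counts at most once per category, no matter how many individual criteria it named in that category:

```python
from collections import defaultdict

# Hypothetical coded interview data: each interview maps to the set of
# top-level categories into which its named criteria were sorted.
interviews = {
    "interview_01": {"Features and functionality", "Cost or ROI"},
    "interview_02": {"Features and functionality", "Feasibility of implementation"},
    "interview_03": {"User experience or usability", "Features and functionality"},
}

# For each category, count the number of interviews mentioning it at least once.
counts = defaultdict(int)
for categories in interviews.values():
    for category in categories:
        counts[category] += 1

# Report each category as a share of all interviews.
total = len(interviews)
for category, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{category}: {n / total:.0%} of interviews")
```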

No one listed the existence of research about a product’s impact on student outcomes as a criterion for choosing among the possible solutions. However, everyone claimed to do research about EdTech, and many collected significant amounts of data to inform their decisions.

What they meant by “doing research” varied. In all cases, this included an ongoing effort to stay abreast of EdTech developments and applications through constant interaction with colleagues at conferences, via social media and internet sources, and by reading EdTech news and publications. Peer-reviewed academic journals were listed as a source of EdTech information in only 9% of interviews (which is one reason I am writing this blog post instead of revising and resubmitting a journal article I wrote previously).

Decision-Makers Prefer Local Evidence

One explanation given for the lack of reliance on existing research evidence is that the results of studies conducted in different contexts and with different student and faculty populations may not be relevant in the decision-maker’s own context. Instead, decision-makers prefer to collect their own local evidence. For example, for almost 40% of the EdTech decisions discussed in our interviews, the college or university piloted one or more alternative products. Typically, this involved asking a portion of the faculty to use the product in regular classes during the semester to assess pedagogical usefulness, ease of use, and feasibility of implementation. In a few cases (11%), impact on student engagement, completion, retention, or other student outcomes was also investigated at this stage. Alarmingly, impact on actual learning was rarely discussed at this point. And, curiously, impact on student outcomes was far more often assessed after a product had been acquired and implemented. While these data may be helpful at that point for deciding whether to continue using a product, it might be wise to put this horse before the initial purchasing cart.

Implications

One of the consequences of this preference for local evidence is that the same products are simultaneously being piloted at many institutions, at no small cost, without the results being shared. Context is certainly critical, but it is likely that colleges and universities have an exaggerated sense of their uniqueness when it comes to end-user needs for and reactions to technology. A second issue is that most of these pilots are not particularly rigorous in terms of assessing whether students using one technology solution perform better academically than those using another solution, or no technology at all.

Moving Forward

It may be the case that pilots provide more value in building buy-in and gradually ramping up implementation capacity than in assessing technology’s contribution to improved learning. To achieve the latter, more rigorous studies will be needed, ideally with comparison groups. It might be helpful for someone – perhaps WCET – to provide guidelines for robust design of EdTech pilot studies. It would also be helpful to establish an online repository for members to share the results of their internal EdTech studies. If study results are accompanied by descriptions of the implementation context and of the types of students and faculty involved, other institutions can look for “near peers” to gauge the potential for a technology product’s success at their own site.
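
As a rough illustration of what a more rigorous pilot analysis might look like, here is a minimal sketch in Python using a comparison group and entirely made-up exam scores (the data, group sizes, and outcome measure are all hypothetical). A real study would also need careful attention to how sections or students are assigned to groups:

```python
import statistics
from scipy import stats

# Hypothetical final-exam scores: one group of sections used the EdTech
# product during the pilot; a comparison group did not.
pilot_scores = [78, 85, 72, 90, 81, 77, 88, 83]
comparison_scores = [74, 80, 70, 82, 79, 73, 84, 76]

# Welch's t-test compares the group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(pilot_scores, comparison_scores, equal_var=False)

print(f"Pilot mean:      {statistics.mean(pilot_scores):.1f}")
print(f"Comparison mean: {statistics.mean(comparison_scores):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With samples this small, even a sizable difference in means may not reach statistical significance, which is one reason informal pilots rarely settle the learning-impact question.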

More detailed findings and recommendations from our study and some resources that EdTech decision-makers shared with us are available at https://www.edtechdecisionmakinginhighered.org.

Fiona Hollands
Center for Benefit-Cost Studies of Education
Teachers College, Columbia University
