Research Dispels Myth that Adult Students Don’t Cheat in Online Classes
Published by: Lindsey Rae Downs | 10/2/2018
We love insights and stories about improving academic integrity in all settings and we welcome today’s guest post. Beth Rubin, Ph.D., has been practicing and leading online education at for-profit and non-profit universities for the past 19 years. She is currently the Dean of Adult and Online Education at Campbell University.
Relatedly, WCET invites you to join us and participate in the “International Day of Action Against Contract Cheating” on October 17. Make your opinion known!
We’ll also be hosting a webcast on October 16th: The Cheating Economy and Integrity, which is free and open to the public. Register now!
— Russ Poulin, WCET
It’s a tough time in higher education, and resources are tight. I was sitting at the Dean’s Council of a large university, and raised the issue of proctoring software.
“Students cheat,” I said. “If we want to grow online learning, we can’t have online testing without proctoring that faculty can easily monitor.” I was trying to convince the Provost and Deans to upgrade our software that provided computer-based monitoring of online tests.
I shared the results of a study my colleagues and I conducted, examining 9 sections of a popular online course. We varied which of the four exams, all drawn from the same exam pools, were proctored, and looked at the effects of proctoring on exam scores and on the amount of time students took to complete the tests. Because every section used the same exam pools and proctoring varied across sections, this design controlled for teacher effects and exam-difficulty effects.
We found that when exams were proctored with automated software that recorded both the students and their computers, students scored more than 15 points lower (out of 100), and completed exams much more quickly.
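The core of the analysis is a simple comparison of means between proctored and unproctored conditions. The sketch below illustrates the idea with entirely made-up scores and completion times; it is not the study's data or code, just an example of the kind of comparison behind such a finding.

```python
import statistics

# Illustrative (made-up) data for one exam taken by two sections:
# scores out of 100 and completion times in minutes.
proctored_scores = [72, 68, 75, 70, 66, 74, 69, 71]
unproctored_scores = [88, 91, 85, 90, 87, 93, 84, 89]
proctored_minutes = [38, 42, 35, 40, 37, 41, 39, 36]
unproctored_minutes = [71, 64, 69, 75, 66, 73, 68, 70]

# Difference in mean score and mean time between conditions.
score_gap = statistics.mean(unproctored_scores) - statistics.mean(proctored_scores)
time_gap = statistics.mean(unproctored_minutes) - statistics.mean(proctored_minutes)
print(f"Unproctored sections scored {score_gap:.1f} points higher on average")
print(f"and took {time_gap:.1f} more minutes per exam.")
```

In a real study these gaps would of course be tested for statistical significance across many sections, with the proctored exam varied by section to rule out teacher and exam effects.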
The evidence indicated that, when exams were not proctored, students were looking up answers.
To my surprise, the head of our adult education unit argued against new proctoring software. The arguments were familiar ones: adult students don't cheat, and well-designed courses and assessments make proctoring unnecessary.
I was amazed; we had followed all of those rules about course and test design. Authentic assessments were used regularly, and the tests accounted for less than half the points of the course. The tests were well constructed, with questions drawn randomly from an exam pool, response options varied, and limited time to complete them.
Administrators ignored the data in front of them, even though it came from multiple faculty teaching sections of the same course. I explained that many of the studies concluding that adult students don't cheat relied on surveys asking students whether they had cheated or would cheat, rather than looking at actual test behavior (e.g., Watson & Sottile, 2010). The studies that did look at actual behavior focused on single sections of one course taught by one professor, making the results subject to the effects of idiosyncratic teacher behavior (e.g., Beck, 2014; Ladyshewsky, 2015). Systems for all professors should not be based on the behavior or skills of one.
Fast forward two years: our study was published in one of the more prestigious journals on online teaching (Online Learning), followed by another based on our work the following year (International Review of Research in Open and Distributed Learning). Additional studies were published, looking at actual student behavior in multiple sections of online classes (Daffin & Jones, 2018; Hylton, Levy, & Dringus, 2016), and at adult learners working both on MOOCs and on work-for-hire (Corrigan-Gibbs, Gupta, Northcutt, Cutrell, & Thies, 2015).
Researchers have used different techniques to study online test cheating, including our practice of varying which exams are proctored across multiple sections of a class and examining the effects on exam scores and time taken to complete a test. Others have created online "honey pots" of fake answers, designed to show up in Google searches, and tracked test-takers' visits to those sites.
The results have been consistent: adult students often cheat on exams. They search the web for answers. They use smartphones and mobile devices, so locking down browsers during a test has little, if any, effect on cheating. They use Google searches and specialized websites that provide answers to open-ended as well as closed-ended assessments. They perceive less opportunity to cheat when they are monitored by automated proctoring systems.
When I read the reviewer feedback on our most recent article, I saw some of the exact same arguments that I had heard from other Deans: adult students don't cheat, it's all in the design of assessments, and if you just use open-ended assessments then students can't cheat, or at least cheating becomes negligible. I realized that this is a myth in our field. Many of us believe that adult students simply don't cheat, while the data are piling up and telling a different story.
Proctoring software is not a luxury for high-end programs; it is a necessary element to ensure academic integrity. It has to make it easy for faculty to review the results; very few will comb through hundreds of thumbnail images for each test in a class of 25 or 30 students to find evidence of cheating.
The solution is acquiring proctoring software that is cost-effective and easy to use. It involves improving the analytic techniques to identify cheating more accurately; the tools need to distinguish whether a student's eyes are moving because she is looking up information on her cell phone or because she is nervous. And it involves changing the institutional culture so that faculty routinely require proctoring for testing in online courses. A key part of this culture change is puncturing the myths and bringing in reality.
Author Bio: Beth Rubin has been practicing and leading online education at for-profit and non-profit universities for the past 19 years. She is currently the Dean of Adult and Online Education at Campbell University.
References

Alessio, H.M., Malay, N.J., Maurer, K.T., Bailer, A.J., & Rubin, B. (2017). Examining the effect of proctoring on online test scores. Online Learning, 21(1), 146-161.
Alessio, H.M., Malay, N.J., Maurer, K.T., Bailer, A.J., & Rubin, B. (in press). Interaction of proctoring and student major on online test performance. The International Review of Research in Open and Distributed Learning.
Beck, V. (2014). Testing a model to predict online cheating: Much ado about nothing. Active Learning in Higher Education, 15(1), 65-75.
Corrigan-Gibbs, H., Gupta, N., Northcutt, C., Cutrell, E., & Thies, W. (2015). Deterring cheating in online environments. ACM Transactions on Computer-Human Interaction, 22(6), Article 28. Retrieved from: http://dx.doi.org/10.1145/2810239
Daffin, L.W., & Jones, A.A. (2018). Comparing student performance on proctored and non-proctored exams in online psychology courses. Online Learning, 22(1), 131-145. doi:10.24059/olj.v22i1.1079
Hylton, K., Levy, Y., & Dringus, L. P. (2016). Utilizing webcam-based proctoring to deter misconduct in online exams. Computers & Education, 92-93, 53-63.
Ladyshewsky, R.K. (2015). Post-graduate student performance in 'supervised in-class' versus 'unsupervised online' multiple choice tests: Implications for cheating and test security. Assessment and Evaluation in Higher Education, 40(7), 883-897. doi:10.1080/02602938.2014.956683
Watson, G. & Sottile, J. (2010). Cheating in the digital age: Do students cheat more in online courses? Online Journal of Distance Learning Administration, 13(1). Retrieved from http://www.westga.edu/~distance/ojdla/spring131/watson131.html