Rigor, Meet Reality
Published by: Lindsey Rae Downs | 4/5/2018
How do you define academic rigor? I know when I was completing my undergraduate and graduate coursework, I could tell the difference between a rigorous course and one that would be a little less time-consuming. I also understood, especially in graduate school, that the more rigorous a course was, the more I “got out of it.” Is there a way to capture that difference so instructors can ensure the best educational experiences for their students?
To help us do just that, today we’re excited to welcome Andria Schwegler from Texas A&M University. Andria is here to discuss her search for the definition of academic rigor, gathered through discussions with her colleagues and presentations at Quality Matters conferences.
Enjoy the read and enjoy your day,
Lindsey Downs, WCET
Rigor is touted by many educators as a desirable quality in learning experiences, and many institutional mission statements promise rigorous academic experiences for students. Unfortunately, a definition of rigor has been elusive. How do institutions leverage evidence to support their claim of rigorous experiences?
In search of a definition of academic rigor, I sought out the types of evidence that faculty members at my institution use to document rigor in their academic programs and courses. The search was prompted by an invitation to serve as a panel discussant representing a faculty member’s perspective in one of two special sessions “Quality Online Education: What’s Rigor Got to Do with It?” at the Quality Matters Connect Conference in 2017. The purpose of the sessions was to open dialogue across stakeholders in higher education regarding the role of rigor in traditional and alternative learning experiences. As a faculty member, I could articulate rigor in my own courses and program, but because defining rigor had not been a topic of discussion across departments, I sought to learn what my colleagues considered evidence of academic rigor.
Several of my colleagues accepted my invitation to discuss the issue, and they provided a variety of measurable examples, drawn from their own assignments and assessments, to demonstrate rigor in their courses and programs.
Distilling a broader summary from course-specific assignments and program-specific assessments revealed that the most frequently cited evidence to support claims of rigor was student-created artifacts. These artifacts resulted from articulated program- and course-learning outcomes that specified higher-level cognitive processing. Learning outcomes as static statements were not considered evidence of rigor in themselves; they were prerequisites for learning experiences that could be considered rigorous (or not) depending on how they were implemented.
Implementing these activities included a high degree of faculty support and interaction as students created artifacts that integrated program- or course-specific content across time to demonstrate learning. The activities in which students engaged were leveled to align with the program or course and were consistent with those they would perform in their future careers (i.e., authentic assessment; see Mueller, 2016). Existing definitions of rigor include students’ perceptions of challenge (“Rigor,” 2014), and the evidence of rigor faculty members provided highlighted the intersection of students’ active engagement with curriculum relevant to their goals and interaction with the instructor. These conditions are consistent with flow, which is characterized by concentration, interest, and enjoyment that facilitate peak performance when engaged in challenging activities (Shernoff, Csikszentmihalyi, Schneider, & Shernoff, 2003).
Creating meaningful assessments of learning outcomes that integrate academic content across time highlights the importance of planning learning experiences, not only in a course but also in a program. Faculty colleagues explicitly warned against considering evidence of rigor in a course outside of the context of the program the course supports. Single courses do not prepare students for professional careers; programs do. It was argued that faculty members must plan collaboratively beyond the courses they teach to design program-level strategies to demonstrate rigor.
Planning at the program level allows faculty members to make decisions regarding articulating curriculum in courses, sequencing coursework, transferring coursework, creating program goals, and assessing and implementing program revisions. Given these responsibilities, instead of being viewed as an infringement on their academic freedom (see Cain, 2014), faculty members indicated that collaborative planning and curriculum design were essential in setting the conditions for creating assessments demonstrating rigor.
Though specific operational definitions of rigor provided by colleagues were as diverse as the content they taught, the underlying elements of their examples were similar and evoked notions of active student engagement in meaningful tasks consistent with a “vigorous educative curriculum” (Wraga, 2010, p. 6).
Stepping back from the examples of student work that faculty members offered as evidence of rigor, I reflected on student activities that were missing from our conversations. None of my colleagues indicated that attending class, listening to lecture, and taking notes were evidence of rigor. Though lecture is “the method most widely used in universities throughout the world” (Svinicki & McKeachie, 2011, p. 55), student activities associated with it never entered our conversations.
In fact, one colleague’s example of rigorous classroom discussion directly contradicted the lecture approach. She explained that during discussions, she tells her students not to believe a word she says, though she was quick to add that she does not mislead them. Her approach places the burden of obtaining support for discussion claims on students, who cannot passively rely on the teacher as an authority. Instead, students are held accountable for substantiating the claims they offer. This technique provides more evidence of rigor than simply receiving content via lecture.
None of my colleagues suggested that students’ grades were evidence of rigor.
One noted that some students may not meet high standards, leading them to fail a course or program. But these failures to demonstrate performance were viewed as unfortunate consequences of rigor, not evidence to document its existence. This sentiment was complemented by another colleague’s comment that providing support (e.g., remedial instruction, additional resources) to students was not a threat to the rigor of a course or program. Helping students meet high standards and improve performance was evidence of rigor, whereas failing grades assigned because students found the content difficult were not.
None of my colleagues suggested that students’ evaluations of a course or an assignment were evidence of rigor. When documenting rigor, faculty members offered students’ performance on critical, discipline-specific tasks, not their opinions of the activities. Supporting this observation, Duncan, Range, and Hvidston (2013) found no correlation between students’ perceptions of rigor and self-rated learning in online courses. Further, definitions of rigor provided by graduate students in their study were strikingly similar to the definitions provided by my colleagues (e.g., “challenge and build upon existing knowledge…practical application and the interaction of theory, concept, and practice…must be ‘value-added’” p. 22). Finally, none of my colleagues indicated that mode of delivery (e.g., face-to-face, blended, online) was related to rigor, an observation also supported by Duncan et al. (2013).
Thanks to my colleagues, I arrived at the Quality Matters Connect conference with 17 single-spaced pages of notes documenting an understanding of rigor. Though the presentation barely scratched the surface of the content, I was optimistic that we were assembling information to facilitate multiple operational definitions of rigor that could be used flexibly to meet assessment needs. This optimism contributed to my surprise when, during small group discussion among session attendees, the claim was made that academic rigor has too many interpretations and cannot be defined.
I cannot support this claim because most variables addressing human behavior have multiple ways they can be operationally defined, and converging evidence across diverse assessments increases our understanding of a given variable. From this perspective, a single, narrow definition of rigor is neither required nor desirable. A research-based perspective allows for multiple operational definitions and makes salient the value of assessment data that may be underutilized when it informs only a single program’s continuous improvement plans. As Hutchings, Huber, and Ciccone (2011) argue, faculty members engage in valuable work when they apply research methods to examine student learning and share results with others. When assessment and improvement plans are elevated to the level of research, the information can be shared to inform others’ plans and peer reviewed to further improve and expand the process.
Articulating and sharing ways to observe and measure rigor can provide educators and administrators a selection of techniques that can be shaped to meet their needs. Engaging in an explicit examination of this issue across institutions, colleges, programs, and courses facilitates the identification of effective techniques to provide evidence of rigor to support the promises made to our stakeholders.
Andria Schwegler
Associate Professor, Counseling and Psychology
Texas A&M University – Central Texas
Cain, T. R. (2014, November). Assessment and academic freedom: In concert, not conflict. (Occasional Paper No. 22). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomesassessment.org/documents/OP2211-17-14.pdf
Duncan, H. E., Range, B., & Hvidston, D. (2013). Exploring student perceptions of rigor online: Toward a definition of rigorous learning. Journal on Excellence in College Teaching, 24(4), 5-28.
Hutchings, P., Huber, M. T., & Ciccone, A. (2011). The scholarship of teaching and learning reconsidered: Institutional integration and impact. San Francisco, CA: Jossey-Bass.
Mueller, J. (2016). Authentic assessment toolbox. Retrieved from http://jfmueller.faculty.noctrl.edu/toolbox/whatisit.htm
Rigor. (2014, December 29). In The Glossary of Education Reform. Retrieved from https://www.edglossary.org/rigor/
Shernoff, D. J., Csikszentmihalyi, M., Schneider, B., & Shernoff, E. S. (2003). Student engagement in high school classrooms from the perspective of flow theory. School Psychology Quarterly, 18(2), 158-176.
Svinicki, M., & McKeachie, W. J. (2011). How to make lectures more effective. In McKeachie’s teaching tips: Strategies, research, and theory for college and university teachers (13th ed., pp. 55-71). Belmont, CA: Wadsworth.
Wraga, W. G. (2010, May). What’s the problem with a “rigorous academic curriculum”? Paper presented at the meeting of the Society of Professors of Education/American Educational Research Association, Denver, Colorado. Retrieved from https://eric.ed.gov/?id=ED509394