The Case for Evidence-Based Practice in Assessment of Student Learning
This topic brings me back to an earlier time in my life. During my undergraduate days as a psychology major, professors lamented ad nauseam the scientist-practitioner gap and its threat to the public good. I was fascinated, and although I didn’t know back then that I would devote my life’s work to higher education, I focused my early studies on a series of advanced statistics courses (I thought, at the time, that my growing statistical prowess was the primary remedy for my professors’ problem).
While most of my professors’ grieving was directed toward a cross-section of clinicians delivering shabby psychotherapies, I later discovered that their complaints applied equally well to the field of higher education. In fact, to my surprise, I was once recognized during a department meeting for my meaningful use of student achievement data to understand variation in performance. As a contingent faculty member, I smiled at the congratulatory recognition, but I secretly thought at the time, “How are these people allowed to teach? Doesn’t everyone do this?”
Yes, there’s the issue of time availability. Professionals require mental breaks. Instructors are busy. Very busy. Not to mention that, because the majority of colleges and universities rely quite heavily on contingent faculty labor, part-timers (or “freeway flyers,” “academic gypsies,” or “fly-by-night faculty,” if you are so inclined) deliver the bulk of courses. And although recent evidence suggests that contingent faculty are, on average, stronger instructors than their traditional counterparts, complex academic realities compete for their time in the same way that off-campus jobs compete for students’ time.
At the most basic level, the case for instructors’ evidence-based practice is grounded in accreditation standards and criteria. Reflecting on my institution’s accreditor, the WSCUC, I can justify my activity by referencing certain Criteria for Review, such as 2.7, 2.10, 4.1, 4.2, and 4.6, to name a few. Regardless, it seems to me that our professional use of student achievement data shouldn’t be compliance-driven, but rather should spring forth naturally (I know that I’m an idealist).
Approaches for Instructors to Use Student Achievement Data in Their Courses to Support the Accreditation Agenda
While I am quite religious about dissecting student achievement data to maximize my instructional effectiveness and grow as a teacher, I recognize that the nature of students’ performances will vary by both course and examination types. There are a number of actions I use to cultivate high-quality achievement data in the spirit of student success, and they’re quite scalable.
My favorite course in which to use the following actions is statistics, and it’s the course I’ve taught the longest. After presenting my recommendation, I’ll shed some light on how the suggestion transpires in the context of my statistics course. Feel free to reach out or comment for additional information.
- Design assessment tools to collect a variety of types of student-level data to evidence learning. — In my statistics courses, students usually assume that the entirety of their class experience will involve “too many numbers with Greek symbols.” Over time, I help them see that this initial notion is a common misconception, and that statistical methods apply to non-numerical data as well. To achieve the kind of conceptual change this requires, I carefully design assessment protocols that request qualitative feedback from students, in addition to the more traditional measures they associated with the class when they first signed up. What do I mean by this? We learn to analyze patterns in string data using a common open-source software tool, and then summarize the information with elementary statistics, like measures of central tendency or spread. This is all to say that my assessment activities gather evidence of student learning through multiple means and methods, which makes more types of information available to me for self-improvement purposes. Likewise, please consider disaggregating your data by various student characteristics (more on this another time).
- Involve students in analyzing their own achievement so they can set their own learning goals. — Using the same example, students have found that examining their class’s spread of scores (anonymized, of course) is a valuable exercise. Not to mention, I’m able to relate introductory topics to their own achievement data. From our discussion of the data, students draw conclusions about items and content areas that inform their future academic behaviors. This activity becomes habitual over time, and students have told me that they’ve extended the approach to other classes. Monitoring their own performance through our cooperative exercises grows their metacognitive abilities.
- Use data to encourage evaluation flexibility. — When an instructor examines variability in scores, the additional insights often point to the evaluation procedure or protocol itself, or to an instructional weakness, rather than to some other common factor, like a student’s lack of preparation. I am not infallible, and acknowledging this fact is healthy. There have been occasions, although infrequent, when my examination of student achievement data revealed something I had missed (e.g., a word-choice error, awkward phrasing, or other absent-minded omission). What’s more, it is important for me to maintain assessment flexibility so that I can improve the tool as well as the scores. For instance, it’s my philosophy that if a strong group of students overwhelmingly performs poorly on an exam, then some intervening factor external to the students’ preparedness likely contributed to the result. My instructional flexibility allows me to courageously consider the possibility that my discussion of a topic wasn’t rich enough to support high achievement. Instructional flexibility keeps us open-minded about the range of variables impacting student success, and, as gatekeepers, shouldn’t we promote students’ open-mindedness by example?
- Share and discuss student achievement in department meetings or with colleagues. — What do you discuss in your department meetings, assuming they regularly occur? Sometimes these meetings risk taking on the same useless aroma that cloaks other meetings in higher education. Let’s focus together and keep things on our shared student success agenda. Other disciplinary experts have helped me while I’ve led this activity, and vice versa. Collaborative discussion transcends the meeting room and indirectly impacts student achievement.
- Reflect on the ways in which student achievement data can inform both current and future instructional approaches. — Sometimes the data tell me that students continue to struggle with a certain concept (experimental design, for example, is a perennial content-area struggle in my statistics courses). Over time, after engaging with high-quality student achievement data, I get a sense for what works and what doesn’t for a particular group. Admittedly, some of this stems from my active searching, while other insights burst into my consciousness spontaneously. While reflecting on my instructional approach, which is primarily Socratic in nature, I’ve realized over the years that certain areas demand a little more explicit detailing on my part. I’ve been able to use my data to discover points in the curriculum where I ought to choose between disciplined questioning and disciplined telling. It’s a fine line, and only my use of student achievement data has enlightened my approaches to instruction and student-faculty interaction.
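To make the first recommendation concrete, here is a minimal sketch of summarizing qualitative string data with elementary statistics. The responses, tokenization choices, and use of Python’s standard library are illustrative assumptions on my part, not a description of any particular classroom tool:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical open-ended exam responses (illustrative, not real student work)
responses = [
    "confused by standard deviation",
    "standard deviation finally makes sense",
    "need more practice with sampling",
    "sampling distributions are confusing",
]

# Treat each response as string data: tokenize and count term frequencies
tokens = [word for r in responses for word in r.lower().split()]
term_counts = Counter(tokens)

# Summarize the qualitative data with elementary statistics
lengths = [len(r.split()) for r in responses]
print(term_counts.most_common(3))
print("mean length:", mean(lengths), "median length:", median(lengths))
```

Even this toy version surfaces recurring terms (“standard,” “deviation,” “sampling”) that hint at where students’ attention is concentrated.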
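The score-spread exercise from the second recommendation can be sketched just as simply. The scores below are hypothetical and anonymized, as they would be in class:

```python
from statistics import mean, median, stdev

# Hypothetical anonymized exam scores for one class section
scores = [62, 71, 74, 78, 80, 83, 85, 88, 91, 95]

# The summary students and I examine together: center and spread
summary = {
    "mean": round(mean(scores), 1),
    "median": median(scores),
    "stdev": round(stdev(scores), 1),
    "range": max(scores) - min(scores),
}
print(summary)
```

Walking students through a table like this lets them locate their own score within the distribution and set goals against it.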
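The kind of item-level check that surfaces a flawed exam question, per the third recommendation, might look like the following. The results and the 0.4 cutoff are illustrative assumptions, not a fixed standard:

```python
# Hypothetical item-level results: 1 = correct, 0 = incorrect, one entry per student
item_results = {
    "Q1": [1, 1, 1, 0, 1, 1],
    "Q2": [0, 0, 1, 0, 0, 0],  # suspiciously hard: worth re-reading the item
    "Q3": [1, 0, 1, 1, 1, 0],
}

# Item difficulty = proportion of students answering correctly
difficulty = {q: sum(r) / len(r) for q, r in item_results.items()}

# Flag items where an otherwise well-prepared class performed poorly;
# the 0.4 threshold is an assumption chosen for illustration
flagged = [q for q, p in difficulty.items() if p < 0.4]
print(difficulty, flagged)
```

A flagged item is an invitation to reread the question for word-choice errors or awkward phrasing before concluding that students were unprepared.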
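Finally, tracking a stubborn concept across terms, in the spirit of the last recommendation, can be as simple as comparing topic-level averages. The topic averages and the 70% threshold below are hypothetical:

```python
from statistics import mean

# Hypothetical topic-level exam averages across recent terms (percent correct)
topic_scores = {
    "descriptive statistics": [88, 85, 90],
    "experimental design": [64, 61, 66],  # persistently low across terms
    "hypothesis testing": [78, 74, 80],
}

# Topics whose multi-term average falls below the illustrative 70% threshold
# become candidates for more explicit detailing next term
needs_attention = [t for t, s in topic_scores.items() if mean(s) < 70]
print(needs_attention)
```

When the same topic lands on this list term after term, that is the data telling me where disciplined questioning should give way to disciplined telling.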
But what about administrators? How can academic management support the faculty’s use of student achievement information? This is very important for building an accreditation agenda, and doubly valuable for evidencing educational effectiveness.
Actions for Administrators to Document Faculty’s Use of Student Achievement Data for Compliance and Regulatory Purposes
I believe that if administrators are going to help students achieve, then they need to assist in the building process by creating academic and non-academic structures that allow faculty members to routinely and systematically use data to guide their decision-making.
How can the administration begin to do this? I’ve listed my top five recommendations below. Please note that the activities nested within these five thematic areas provide thick evidence for your accreditation work and various other self-reflective exercises. Feel free to ask me how this can be done, as these efforts tend to crystallize differently across college and university contexts.
- Establish institutional scaffolding to support a data-driven culture and inquiry.
- Develop and maintain a strong central repository of data.
- Empower department chairs and instructional leaders to collect, analyze, and disseminate student achievement artifacts.
- Extend sustained professional development opportunities.
- Identify auxiliary tools and methodologies for supporting school-wide data use and evidence of educational effectiveness.
References for Further Reading
Anderson, S., Leithwood, K., & Strauss, T. (2010). Leading data use in schools: Organizational conditions and practices at the school and district levels. Leadership and Policy in Schools, 9(3), 292-327.
Boyd, D. J., Grossman, P. L., Lankford, H., Loeb, S., & Wyckoff, J. (2009). Teacher preparation and student achievement. Educational Evaluation and Policy Analysis, 31(4), 416-440.
Hamilton, L., Halverson, R., Jackson, S. S., Mandinach, E., Supovitz, J. A., Wayman, J. C., … & Steele, J. L. (2009). Using student achievement data to support instructional decision making.
Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to promote data use for instructional improvement: Actions, outcomes, and lessons from three urban districts. American Journal of Education, 112(4), 496-520.
Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to inform practice. Educational Psychologist, 47(2), 71-85.
Means, B., Chen, E., DeBarger, A., & Padilla, C. (2011). Teachers’ ability to use data to inform instruction: Challenges and supports. Office of Planning, Evaluation and Policy Development, US Department of Education.
Vescio, V., Ross, D., & Adams, A. (2008). A review of research on the impact of professional learning communities on teaching practice and student learning. Teaching and Teacher Education, 24(1), 80-91.
Wayne, A. J., & Youngs, P. (2003). Teacher characteristics and student achievement gains: A review. Review of Educational Research, 73(1), 89-122.