The Relationship Between IR and Assessment in Higher Education

Hi, folks. It's been a while; sorry about that. Accreditation activities have kept me busy. Fortunately, this ongoing debate recently gave me the spark to reconnect. Read on.

What Are IR and Assessment?

This might strike you as a basic line of inquiry, but you'd be quite surprised. I'm going to go out on a limb and submit that the majority of higher education professionals have no clue what constitutes institutional research. Assessment is a little less fuzzy, especially among full-time faculty, but for most it still remains shrouded in mystery.

First, institutional research (hereafter, IR) is a fairly variable category of work in colleges and universities; its activities inform decision-making across functional areas. IR's "nine-to-fivers" systematically collect and analyze data with the aim of transforming their products into actionable information for key groups. This is my distilled interpretation, of course. Anyone interested in the vastness of the field is encouraged to consult publications on the topic. Indeed, the field continues to evolve rapidly, and experts continue to debate what IR ought to comprise (e.g., the extent to which IR and IT responsibilities should intersect).

Second, assessment in higher education can be roughly divided into two areas: academic and co-curricular. For the purpose of advancing our common understanding, I will shed light only on the academic area. Assessment is a systematic process of gathering empirical data on phenomena within the teaching and learning environment. Its purpose is to refine programs and implement enhancements that advance students' learning. What's more, assessment should be seen as a continuous cycle defined by measurable student learning outcomes, sufficient opportunities for students to achieve those outcomes, and demonstrated use of the resulting information for further improvement. Still not sure? Please consult this priceless resource; your work (and your students) will benefit tremendously.

The crux of the matter is the relationship between these two crucial functions, as an institution cannot thrive without careful articulation between the areas.

The Debate: Contact Between IR and Assessment

I'm going to come right out and say it: I think that folks who argue that institutional research and assessment must be fully separate entities do not fully understand the capacities and opportunities within those areas. Phew. By this point, I'm sure a number of folks are about to divert to Facebook or another social media site, and I apologize. For those remaining, welcome. This topic can be about as bad as talking politics or religion.

The last national-level IR meeting I attended was the 2016 meeting of the Association for Institutional Research (I'm hoping to see colleagues in Orlando, Florida at the end of May). I vividly recall professionals hotly debating the IR-assessment relationship at that meeting. Had I not been so exhausted at the time, I'd have suggested a dinner meeting with the horde to further unpack the issue.

I remember that I didn't like what I heard. My colleagues took intense positions on either side. It's sort of funny, now that I think about it. The reception of new information was effectively stunted, and nothing fresh emerged.

But this is important. Your future accreditation peer evaluators may take a position, and you must be prepared to respond (please note that I am not taking a compliance position, but rather suggesting that you adopt a model appropriate for advancing the dialogue internally for quality and improvement purposes).


Integrating IR and Assessment to Support Viability and Avoid Potential Pitfalls

My recommendation should be obvious, given the position I've already taken. Combine. Combine. Combine. A number of advanced California Community Colleges (CCCs) have already initiated the connection between these areas. After all, many institutions within the sector have the financial resources to refine and perfect the model, so let them pay for it and let the rest of us adapt what they learn (not to mention, the institutional funding available to their basic aid districts has yielded even more sophistication).

If this recommendation philosophically departs from your position, consider any differences in our assumptions. The service of IR is not only to supply data; that would represent an underdeveloped function, and I've seen too many IR offices like this. My first assumption, then, is that IR guides constituencies on meaning-making and subsequent action planning, which is particularly helpful for the faculty. This is a foundational assumption, and it justifies the need for unity between IR and assessment with only one word: faculty. My second assumption is that an institution's instructional personnel are charged with spearheading assessment activities, as the faculty own the curriculum and possess disciplinary expertise that lends itself to assessing students' learning. The usefulness of IR for these purposes is self-explanatory.

A successful combination of these units is largely dependent upon high-level administrative support and a thriving culture of evidence-based decision-making. Not to mention, a campus-wide thirst for data and relevant information is quite helpful, given the ad hoc requests and visibility that follow.

Staffing. I was recently asked about the credentialing of folks who ought to lead this type of work, and I firmly believe that a terminal degree is required because the role demands methodological expertise and effective collaboration with faculty groups. Analyst- and coordinator-level positions benefit greatly from some level of post-baccalaureate training in the social and behavioral sciences, as well.

Institutional Culture. The cultural piece is a bit more challenging, but institutionalizing policies and practices to build a college or university committed to quality assurance and improvement is key. The Research and Planning (RP) Group does fantastic work to help institutions accomplish this and, while their service is dedicated to California Community Colleges, the work is generalizable to 4-year institutions, too. What’s more, the Bridging Research, Information, and Culture (BRIC) initiative offers tremendously valuable insights, including foundational resources, to support such efforts. The initiative’s Model for Building Information Capacity offers additional inspiration.

Hard and Soft Skills. It is no surprise that the debate over required skills lives on. Some of my colleagues emphasize a highly technical skill set that aligns closely with what we might find in information technology offices. I've even heard some IR professionals indicate a strong preference for technical skills over graduate-level training. Again, I philosophically differ on this, as graduate-level training, namely at the doctoral level, fulfills a powerful socialization function. The outcomes of that socialization include disciplinary understanding (i.e., content knowledge), stronger critical thinking abilities, and habits of mind for lifelong learning. We can easily teach someone how to employ a domain-specific language to manage information within a relational database system, for instance, but cultivating the other intellectual features requires significantly more effort. That aside, mixed-methods prowess is crucial for assessment work because of the need to relate divergent empirical streams when working with faculty to accomplish improvement. Other characteristics of successful individuals include emotional intelligence, self-awareness, self-regulation, initiative, empathy, and collaboration. These soft skills, too, are much more valuable than one's technical ability at a given point in time, and they indicate capacity for training and growth. Finding the right mix is priceless.
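To make that technical point concrete, here is a minimal sketch of the kind of routine query work I have in mind, written in Python with the standard sqlite3 module. The database, table, and column names (e.g., a hypothetical enrollments table) are purely illustrative assumptions, not a reference to any particular campus system.

```python
import sqlite3

# A minimal sketch of routine, teachable query work (illustrative names only).
conn = sqlite3.connect("ir_demo.db")
cur = conn.cursor()

# Create and populate a small, fictional enrollments table.
cur.execute("""
    CREATE TABLE IF NOT EXISTS enrollments (
        term TEXT,
        program TEXT,
        student_id INTEGER
    )
""")
cur.executemany(
    "INSERT INTO enrollments (term, program, student_id) VALUES (?, ?, ?)",
    [("Fall 2016", "Biology", 1), ("Fall 2016", "Biology", 2), ("Fall 2016", "History", 3)],
)

# The domain-specific language at work: unduplicated headcount by program for one term.
cur.execute("""
    SELECT program, COUNT(DISTINCT student_id) AS headcount
    FROM enrollments
    WHERE term = 'Fall 2016'
    GROUP BY program
    ORDER BY headcount DESC
""")
for program, headcount in cur.fetchall():
    print(program, headcount)

conn.close()
```

The point is not the snippet itself; this sort of skill can be taught on the job, whereas the habits of mind described above take far longer to cultivate.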

Policies and Practices. None of these efforts to marry the two functions is sustainable without the appropriate structures and activities. Thoughtful articulation of governance groups will give rise to meaningful curricular and assessment planning exercises, which may in turn address program review findings. Conversations will emerge spontaneously by virtue of the process. Campus-wide visibility of attention to IR and assessment issues will occupy mental real estate among key actors and traverse communication boundaries. Faculty involvement will grow, to the benefit of the work. However, involving contingent faculty calls for targeted motivational strategies, since they typically have no contractual expectation to participate (contact me if you need concrete ideas).

Forging a productive relationship between these crucial functional areas can be likened to an organ transplant: if the proper procedure is not followed, the union risks rejection. Loose coupling between these two units only perpetuates the myth that they must remain separate.
