April 22, 2010

Why all the course-level assessment?

When students are asked an open-ended question on an exam and they don't know the answer, they tend to write down everything they do know, whether or not it was asked...and they write and write and write. Why do they do that? Probably for one of two reasons: they hope the faculty member will find the right answer somewhere in the rambling, or they hope to be given extra "points" for their effort. Why do we approach program assessment the same way? I can hear it now: "We don't know what the accreditors want, so we will just give them everything."

This approach always ends up creating undue work for faculty and, while producing massive amounts of data, yields absolutely no information that can be used to improve the student learning experience. We have lost sight of the fact that we are assessing the "program," not the student. Yes, we gather evidence from students, but it is not for the purpose of assessing them--we are already doing that in their courses. It is for the purpose of assessing ourselves. Can we provide evidence that by the end of the academic program students have attained the ability to (fill in the blank)?

If we believe that student learning is cumulative over time--that what is learned in one course is applied in another, built upon in a third, and so on throughout the curriculum--then by the time students are ready to graduate, their learning should be more than the sum of all their courses. Why, then, do we collect data in lower-level courses, average them with data taken in upper-level courses, and pretend we know what the result means? Are we really saying that all courses contribute equally to cumulative learning, and that the complexity and depth/breadth at which students are expected to perform is the same in every course for any given outcome?

Why not collect "evidence" of student learning only in the course where students have a culminating experience related to the outcome? Yes, collecting evidence in a lower-level course can be helpful for understanding student strengths and weaknesses related to a given outcome; it enables faculty to reinforce and emphasize the concepts where students were weak before the culminating experience. But those data should not be aggregated with the data collected in the culminating experience. Why do you need more than one data point for each student from whom you have collected data?

We need to bring some sanity back into what we are doing or the entire "outcomes assessment" process will collapse under its own weight--as well it should.


Rebeca Leal said...

Dear Gloria, are you talking about us? It's striking to read your post because it describes exactly what is happening to us. We are conducting our first outcomes assessment for two programs, and we are facing the problem that we are generating a lot of data (percentages, graphs, tables, etc.) but we don't know what it means. And it seems to be even more complicated than that: it appears we didn't define the Performance Indicators for the Learning Outcomes correctly, which is why we are now lost in pages and pages of information. Thanks for being so direct and clear in your advice. (Rebeca, from México)

Nils Peterson said...

Gloria, Washington State University's Office of Assessment and Innovation has begun implementing a process to help programs assess how well they are doing their outcomes assessment. A second round of self-studies is being prepared for formative feedback on May 17. You can see the packet we provide to programs here.

In particular, look at the Guide to Assessment, our rubric for thinking about a program's assessment system. We'd welcome feedback. Our vision is that programs will move toward getting feedback on their assessment from interested stakeholders. If you'd be interested in piloting that online with us, let Gary Brown know.

Thanks for this post; it's the simple, clear version of the advice I give to programs new to this work.

Rebeca Leal said...

Gloria, hello!

What do you think about assessing outcomes using group evidence, such as team projects?
