August 08, 2012

How far have we come?


We have come a long way, and we should not fail to appreciate that progress.

Before:  What are programs DOING?
Now:  Is what programs are doing achieving the desired outcomes?

Before:  Focus on "inputs" (e.g., quality of faculty, quality of classrooms, etc.).
Now: Focus on the "effect" of the programs--"outcomes."

Before: Educational activities as an end.
Now: Educational activities as a means to an end.

Before: Educational practices determine the outcomes.
Now:  Outcomes inform educational practices.

Before: Assessment as a process to meet external requirements.
Now: Assessment as a process for feedback to the program with the purpose of improving student learning.

Unfortunately, there are still those who long to go back to the "good old days"--those who felt (with good reason) that if they waited long enough, it would all just go away.  Are we where we really need to be?  Probably not.  Having worked with many faculty who serve on visiting teams for accrediting agencies (both regional and programmatic), I have learned that there are still those who cling to the past and resist the change that needs to happen to create a truly continuous process of improving student learning.  Only many examples of how the process can work efficiently and effectively, without inundating faculty with unnecessary and unproductive requirements, will make this a truly sustainable process.

January 09, 2012

Students are important, too!

I continue to be surprised when I visit colleges and universities that are investing hundreds if not thousands of hours of faculty time in learning-outcomes data collection but do not provide students with feedback on their performance. That is, they have mapped courses to outcomes, they have (in some cases) developed well-written analytic scoring rubrics for the outcomes, faculty are scoring student performance in multiple classes, and results are being aggregated and reported as evidence of student learning for the program/institution to satisfy accreditors and/or state agencies.  When I ask faculty whether students are provided with the rubrics before the scoring so they understand how they will be evaluated, I am generally told, "No." When I then ask whether students are provided with the scoring results along with the rubric so they can see what they need to do to improve their performance, I am again told, "No." It seems that, in our anxiety over collecting data for accreditation and/or program review, students are not even an afterthought. We know that student performance is cumulative over time and that what students learn in one class they use, practice, and develop in other classes. This is especially true for the core curriculum competencies as well as the discipline outcomes. Without being intentional, there is no reason to believe that learning will be enhanced by this data collection process.
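The data side of this is simple, which is part of the point: the same scores collected for accreditation could feed a per-student report at almost no extra cost. Here is a minimal sketch in Python; the record fields, course names, and 1-4 scale are illustrative assumptions, not any institution's actual system:

```python
from collections import defaultdict
from statistics import mean

# One record per rubric score: a faculty member scores one student on one
# program outcome in one course (1-4 scale, an illustrative assumption).
scores = [
    {"student": "s1", "course": "ENG101", "outcome": "written_communication", "score": 3},
    {"student": "s1", "course": "BUS301", "outcome": "written_communication", "score": 2},
    {"student": "s2", "course": "ENG101", "outcome": "written_communication", "score": 4},
]

# Program-level aggregation -- the report that goes to accreditors.
by_outcome = defaultdict(list)
for r in scores:
    by_outcome[r["outcome"]].append(r["score"])
program_report = {o: mean(v) for o, v in by_outcome.items()}

# Student-level view -- the feedback this post argues is missing. Each
# student sees their own scores per outcome across courses, rubric in hand.
student_report = defaultdict(list)
for r in scores:
    student_report[r["student"]].append((r["course"], r["outcome"], r["score"]))
```

The point is not the code but the design choice: once scores exist per student, per course, per outcome, the student-facing report is just a second grouping of records the institution already has.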

For example, several years ago I was at a comprehensive university, speaking before a large group of faculty about student outcomes assessment and their accreditation process. While I was talking about core competencies, one of the faculty members from the School of Business stood up and said, "Our students cannot write! They are terrible writers and I am embarrassed to think that they will represent our institution in the workplace." Then one of the faculty members from the English Department stood up and said, "Well, I can tell you one thing! They could write when they left our courses!" I then asked the Business faculty member whether, when students turned in these terrible papers, he returned them and asked the students to rewrite them. "No." Did he mark their grades down for bad writing? "No." Did he give them any feedback at all on their writing? "No." When this happens, what is the message we send our students? "Writing doesn't matter!"

Why should we expect student performance to improve if we don't give students feedback on their performance on program outcomes across multiple courses? Providing students with well-defined rubrics that describe levels of performance gives them the information they need not only to understand their current performance level on a given program outcome, but also to see what they must do to improve.  The outcomes at the end of the program cannot be laid at the feet of one department or course.  We have a collective responsibility for student learning when scoring program- or institutional-level outcomes.  Let's bring students back into the process.

November 18, 2011

Should the Accreditors Determine Your Learning Outcomes?

In the November 18, 2011 issue of the Inside Higher Ed electronic newsletter, an article highlighted what is happening at the Western Association of Schools and Colleges (WASC), a regional accrediting association.  Higher education professionals need to read it to be aware of the next wave in the accreditation and outcomes-assessment wars.  The full article can be found at:

http://www.insidehighered.com/news/2011/11/18/western-accreditor-pushes-boundaries-quality-assurance#ixzz1e40Gb7lu.

The underlying premise is that, in the name of quality assurance and its responsibility to the public to assure the credibility of the institutions it accredits, WASC has adopted policies under which the accreditation letter and the accrediting team's report will be posted for the public to see.  The article states:

"One major prong of the package (proposal package) is that the accreditation process needs to become more transparent, and with the commission's approval this month, WASC will now be the first of the regional accrediting agencies to make public on its own website all of its "action letters" (in which the commission announces whether it has reaccredited an institution or taken some punitive action instead) and the reports of its accrediting teams on which the commission based its action. The norm for accrediting agencies to date has been to release a list of institutions that were either approved or sanctioned in some ways, and lists of the relevant provision numbers, but little to no additional detail."

Two other proposals were not adopted (yet, anyway):

"One would require institutions not only to define a "stated level of proficiency" for five skill areas for graduates (written and oral communication, quantitative skills, critical thinking, and information literacy) but to compare themselves to other institutions on at least two of those areas. The other was a suggestion that all institutions might be required to map their expectations for degree recipients to the Degree Qualifications Profile proffered by the Lumina Foundation for Education." 

If you haven't seen the Lumina Foundation Degree Qualifications Profile, it is a must-read:

http://www.luminafoundation.org/publications/The_Degree_Qualifications_Profile.pdf

Outcomes assessment is coming under greater scrutiny than ever...will we be ready?

November 11, 2011

Baby or Bathwater?

I am working with a number of great institutions and programs on the development of meaningful assessment processes.  Of course, I love what I do, so it is easy to let it consume me, as I know that ultimately a successful process will lead to more successful students.  However, I am struck by how easy it is to get so focused on the process that we forget the real purpose of all this activity--thus the title of this entry.  Yes, the process is designed to determine how well the program/institution is doing in producing student outcomes.  However, is that all there is?

If we believe that student learning is cumulative over time and that what students learn in one course they will use, practice, or further develop in another course, then collecting data in a "summative" course is sufficient to document how well the program/institution is doing in delivering education.  However, we need to remember that this is about STUDENTS and what they know or can do.   Research tells us that students do best when they know the performance that is expected of them and when they get feedback on their performance.  Individual students should be told what is expected of them (i.e., what will be on the test) and given feedback on their performance in formative contexts (courses and experiences before the summative assessment).  For example, do they get the scoring rubric that will be used to evaluate their performance, or written comments on how to improve?  Does their performance "count" in the final grading process?  If they are asked to write an essay about the importance of diversity in the work setting, do they also get feedback on their writing as well as on the "content" of the essay?  If so, is it part of the grade for the assignment?  Is writing important in all contexts, or just in writing classes?  What is the message being sent to students?

Providing students with feedback on their performance throughout the curriculum will improve student outcomes by the end of the program.  It is important that we don't leave students out of this process.  Too many times, we are left with only the bathwater.