
April 13, 2010

Accountability not going away!


In the April 13, 2010, Inside Higher Ed article, "No let up from Washington" (http://www.insidehighered.com/news/2010/04/13/hlc), Molly Corbett Broad, president of the American Council on Education, is quoted as saying, "...I believe it’s wise for us to assume they [federal policy makers] will have little reservation about regulating higher education now that they know it is too important to fail." The comment was made in the context of holding higher education accountable for learning outcomes.

Are institutions really prepared, or have they been dragging their feet, hoping that the change in administrations would make it all go away? It is clear that institutions and educational programs must get serious about demonstrating student learning, and must do it in a way that honors faculty time and produces information, not just tons of data whose meaning remains a mystery. This also means that accrediting agencies must get serious about preparing their peer evaluators to "know it when they see it." As the feds put pressure on the accrediting agencies to demand accountability for student learning, the agencies need to become more intentional in the preparation of their peer evaluators, who, for the most part, are faculty and administrators in higher education.

2 comments:

K said...

"It is clear that institutions and educational programs must get serious about demonstrating student learning and do it in a way that honors faculty time and produces information and not just tons of data that remain a mystery as to what they mean." This statement is absolutely true and appeals to our sense of logic. Having said this, according to Linda Suskie, objective tests remain widely used for three reasons:
1. they are what testing experts call efficient;
2. while they are difficult and time-consuming to construct, they are fast and easy to score;
3. results can be simply summarized.

Objective tests can efficiently assess cognitive levels from Knowledge through Analysis in Bloom's Taxonomy. Since faculty give tests routinely, it makes sense to use an instrument they are comfortable administering and interpreting. However, an objective test should be planned. Just as you would develop a rubric for certain types of performance appraisals or portfolios, you would develop a "test blueprint" for a test. Test blueprints are widely known among measurement experts and have been used for assessment purposes for decades.

Why a test blueprint?
1. They help ensure that the test focuses on the learning outcomes identified for assessment.
2. They help ensure the test gives appropriate emphasis to thinking skills, not just simple conceptual knowledge.
3. They make writing test items much easier because faculty know exactly what must be covered and at what level of the cognitive skill domain.
4. They help document what students have achieved, or not achieved, with respect to the identified learning outcomes.

Questions to help you decide whether to use a locally developed exam (a departmental exam or a common set of items given in key terminal courses for your program):
1. Will the exam items help you understand what students are learning in the program?
2. Will the results help you decide how to improve student learning?
3. Will you have enough confidence in the results to use them to help make program-level decisions?


A departmental exam or a set of common items on a final exam for key courses will most likely yield valuable information for some of the technical program-level student learning outcomes.

This excerpt is not intended to persuade faculty that tests are the magical answer to their program assessment work. Quite frankly, a test that yields valid results is not easy to construct. Anything that is quick to construct usually yields questionable results, and tests are no exception. However, even though constructing tests can be time-consuming, they are a defensible direct method that can be useful in program assessment.

Measurement experts typically advocate the use of multiple measures to tell the story of how an institution is fulfilling a program outcome; having multiple measures is sometimes referred to as “triangulation.” Tests are one tool that gives faculty data, which, when used with a test blueprint, becomes information. That test information is even more powerful when combined with other pieces of information to tell a valid, reliable story.

When someone asks me, “Why would you want to have more than one method of assessment?” I answer, "Well, I wouldn’t want to rely on just one instrument." I remind them that the story must not only be told expeditiously but also be told with accuracy. We're not looking for a micrometer level of precision. At the program level, we're looking for two poles and a chain (an analogy drawn from how football determines whether 10 yards have been gained). You wouldn't use one pole and a chain; you need both poles to stretch the chain taut and measure the 10 yards accurately.

Gloria Rogers said...

I agree that developing test blueprints for course testing is a very valuable process to ensure that what is being tested matches the instructional goals for the course. Anything we can do in individual courses to make the teaching/learning process intentional will not only assist individual students in the learning process but also enhance the overall program outcomes.

It is important to remember that learning is cumulative over time and that what a student learns in one course s/he uses and/or builds on in other courses (scaffolding). The overall learning experience should be more than the sum of the individual courses. We need to be very careful that we do not get lost in the weeds.
