
May 22, 2010

Data Do Not Always Information Make

In my last post, I indicated that it is not necessary to collect data in every course on every student in order to understand how well the program is meeting its student outcomes. However, I continue to be amazed at the number of programs that have developed very elaborate data collection processes, usually involving homegrown databases, spreadsheets, web applications, course reports, and sometimes commercial software.  Faculty members dutifully enter data on the extent to which they believe the students in their courses have met the program student outcome(s).  Data are collected in multiple courses (sometimes all courses) and then aggregated, averaged, and reported as the level to which students have demonstrated the outcome.  Targets of expectation are set (usually in the 70-75% range) and victory is declared.  Of course, all of this generally happens the year before the accreditation visit (so much for CONTINUOUS quality improvement).

It is important to remember that having data and targets for performance does not necessarily translate into information that can be used to improve the teaching/learning process.  Information means that you are able to look at the data and understand what the students' strengths and weaknesses are related to the outcome--that you are able to discern from the data how well the program is meeting the outcome and what can be done to improve.   In order to accomplish this, the program needs to define the outcome in terms of a few performance indicators and understand how the results inform the achievement of the outcome.  Data are collected related to the performance indicators, and improvement efforts focus on student performance on those indicators (I will discuss performance indicators more fully in my next post).  This enables the program to focus the data collection (perhaps different courses for different indicators) and to clearly understand students' strengths and weaknesses related to the outcome.  It also enables the program to target improvement efforts and sets the stage for the next cycle of data collection and evaluation.  We need to stop the data-dump approach to continuous quality improvement, as it only promotes continuous faculty frustration.
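To make the contrast concrete, here is a minimal sketch (the outcome indicators and scores below are invented for illustration) of how data that "meet the target" in aggregate can still point to a specific weakness once they are broken out by performance indicator:

```python
# A minimal sketch with invented indicator names and scores, showing why
# an aggregated outcome score can hide indicator-level weaknesses.

# Percent of students rated satisfactory on each performance indicator
# for a single student outcome (hypothetical data).
indicator_results = {
    "organizes written reports logically": 0.88,
    "uses discipline-appropriate graphics": 0.82,
    "delivers effective oral presentations": 0.55,  # a clear weakness
}

TARGET = 0.75  # a typical "target of expectation" in the 70-75% range

# The data-dump view: one averaged number compared against the target.
aggregate = sum(indicator_results.values()) / len(indicator_results)
verdict = "meets" if aggregate >= TARGET else "misses"
print(f"Aggregate outcome score: {aggregate:.0%} ({verdict} the {TARGET:.0%} target)")

# The indicator view: the same data, but now actionable.
for indicator, score in indicator_results.items():
    status = "OK" if score >= TARGET else "NEEDS ATTENTION"
    print(f"  {score:.0%}  {status:<16}  {indicator}")
```

In this invented case the averaged score lands at exactly 75% and victory is declared, while the indicator breakdown shows precisely where the weakness lies and where improvement efforts should be focused.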

1 comment:

Rebeca Leal said...

Three years ago, we decided to measure all POs and LEs at the same time, every semester. We used as many instruments as we could, and this became an exorbitant process that left no time for meaningful analysis and improvement. In order to have relevant improvement actions, starting this academic period we established a timeline for our POs and LEs. We divided our Learning Outcomes into three groups (similar relevance, different kinds of outcomes), and we will analyze one group at a time, presenting the results to our Boards and defining improvement actions while we collect data for the other groups of outcomes. This timetable is very useful to avoid getting buried in a lot of data. I can share this timetable if you want.

 