Program Assessment of Student Learning<br />
<br />
<strong>How far have we come?</strong> (August 8, 2012)<br />
<div>
<span style="font-family: Arial, Helvetica, sans-serif;">We still have a long way to go, but we should not fail to appreciate how far we have come.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Before:</u> What are programs DOING?</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Now:</u> Is what programs are doing achieving the desired outcomes?</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Before:</u> Focus on "inputs" (e.g., quality of faculty, quality of classrooms, etc.).</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Now:</u> Focus on the "effect" of the programs--"outcomes."</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Before:</u> Educational activities as an end.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Now:</u> Educational activities as a means to an end.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Before:</u> Educational practices determine the outcomes.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Now:</u> Outcomes inform educational practices.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Before:</u> Assessment as a process to meet external requirements.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><u>Now:</u> Assessment as a process for feedback to the program with the purpose of improving student learning.</span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;"><br /></span></div>
<div>
<span style="font-family: Arial, Helvetica, sans-serif;">Unfortunately, there are still those who long to go back to the "good old days"--those who felt (with good reason) that if they waited long enough, it would all just go away. Are we where we really need to be? Probably not. Having worked with many faculty who serve on visiting teams for accrediting agencies (both regional and programmatic), I have learned that there are still those who cling to the past and resist the change that needs to happen for a truly continuous process of improving student learning. Only by accumulating examples of how the process can work efficiently and effectively, without inundating faculty with unnecessary and unproductive requirements, will this become a truly sustainable process.</span></div>
<br />
<strong>Students are important, too!</strong> (January 9, 2012)<br />
<span style="font-family: Arial, Helvetica, sans-serif;">I continue to be surprised when I visit colleges and universities that are investing hundreds if not thousands of hours of faculty time in learning outcomes data collection but do not provide the students with feedback on their performance. That is, they have mapped courses to outcomes, they have (in some cases) developed well-written analytic scoring rubrics for the outcomes, faculty are scoring student performance in multiple classes, and results are being aggregated and reported as evidence of student learning for the program/institution to satisfy accreditors and/or state agencies. When I ask faculty if students are provided with the rubrics before the scoring so they understand how they are going to be evaluated, I am generally told, "No." When I then ask if the students are provided with the scoring results along with the rubric so they can see what they need to do to improve their performance, I am again told, "No." It seems that, in our anxiety over collecting data for accreditation and/or program review, students are not even an afterthought. We know that student performance is cumulative over time and that what students learn in one class they use, practice, and develop in other classes. This is especially true for the core curriculum competencies as well as the discipline outcomes. Unless we are intentional about it, there is no reason to believe that this data collection process will enhance learning.</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">For example, several years ago I was at a comprehensive university, speaking before a large group of faculty about student outcomes assessment and their accreditation process. While we were talking about core competencies, one of the faculty members from the School of Business stood up and said, "Our students cannot write! They are terrible writers and I am embarrassed to think that they will represent our institution in the workplace." Then one of the faculty members from the English Department stood up and said, "Well, I can tell you one thing! They could write when they left our courses!" I then asked the Business faculty member: when students turned in these terrible papers, did he return the papers to the students and ask them to rewrite them? "No." Did he mark their grades down for bad writing? "No." Did he give them any feedback at all on their writing? "No." What message do we send our students when we don't give them feedback on their writing? "Writing doesn't matter!"</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">Why should we expect student performance to improve if we don't give them feedback on their performance on program outcomes across multiple courses? Well-defined rubrics that describe levels of performance give students the information they need not only to understand their current level of performance on a given program outcome, but also to see what they need to do to improve. The outcomes at the end of the program cannot be laid at the feet of one department or course. We have a collective responsibility for student learning when scoring program- or institutional-level outcomes. Let's bring students back into the process.</span>
<br />
<strong>Should the Accreditors Determine Your Learning Outcomes?</strong> (November 18, 2011)<br />
<span style="font-family: Arial, Helvetica, sans-serif;">In its November 18, 2011 edition, Inside Higher Ed ran an article highlighting what is happening at the Western Association of Schools and Colleges (WASC), a regional accrediting agency. Higher education professionals need to read it to be aware of the next wave in the accreditation and student-outcomes-assessment wars. The full article can be found at:</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;"><a href="http://www.insidehighered.com/news/2011/11/18/western-accreditor-pushes-boundaries-quality-assurance#ixzz1e40Gb7lu">http://www.insidehighered.com/news/2011/11/18/western-accreditor-pushes-boundaries-quality-assurance#ixzz1e40Gb7lu</a>.</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">The underlying premise is that, in the name of quality assurance and its responsibility to the public for assuring the credibility of the institutions it accredits, WASC has adopted policies under which the accreditation letter and the accrediting team's report will be posted for the public to see. The article states:</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;"><em>"One major prong of the package</em> (proposal package)<em> is that the accreditation process needs to become more transparent, and with the commission's approval this month, WASC will now be the first of the regional accrediting agencies to make public on its own website all of its "action letters" (in which the commission announces whether it has reaccredited an institution or taken some punitive action instead) and the reports of its accrediting teams on which the commission based its action. The norm for accrediting agencies to date has been to release a list of institutions that were either approved or sanctioned in some ways, and lists of the relevant provision numbers, but little to no additional detail."</em></span><br />
<br />
<span style="font-family: Arial;">Two other proposals were <strong><u>not</u></strong> adopted (yet, anyway):</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;"><em>"One would require institutions not only to define a "stated level of proficiency" for five skill areas for graduates (written and oral communication, quantitative skills, critical thinking, and information literacy) but to compare themselves to other institutions on at least two of those areas. The other was a suggestion that all institutions might be required to map their expectations for degree recipients to the Degree Qualifications Profile proffered by the Lumina Foundation for Education." </em></span><br />
<br />
<span style="font-family: Arial;">If you haven't seen the Lumina Foundation Degree Qualifications Profile, it is a must-read:</span><br />
<br />
<a href="http://www.luminafoundation.org/publications/The_Degree_Qualifications_Profile.pdf"><span style="font-family: Arial, Helvetica, sans-serif;">http://www.luminafoundation.org/publications/The_Degree_Qualifications_Profile.pdf</span></a><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">Outcomes assessment is coming under greater scrutiny than ever...will we be ready?</span>
<br />
<strong>Baby or Bathwater?</strong> (November 11, 2011)<br />
<span style="font-family: Arial, Helvetica, sans-serif;">I am working with a number of great institutions and programs on the development of meaningful assessment processes. Of course, I love what I do, so it is easy to let it consume me; I know that, ultimately, a successful process will lead to more successful students. However, I am struck by how easy it is to get so focused on the <strong><u>process</u></strong> that we forget the real purpose of all this activity--thus the title of this entry. Yes, the process is designed to determine how well the program/institution is doing in producing student outcomes. However, is that all there is?</span><br />
<br />
<span style="font-family: Arial;">If we believe that student learning is cumulative over time and that what students learn in one course they will use, practice, or further develop in another course, then collecting data in a "summative" course is sufficient to document how well the program/institution is doing in delivering education. However, we need to remember that this is about STUDENTS and what they know or can do. Research tells us that students do best when they know the performance that is expected of them and when they get feedback on their performance. Individual students should be given information on what is expected of them (i.e., what will be on the test) and feedback on their performance in formative contexts (courses/experiences before the summative assessment). For example, do they get the scoring rubric that will be used to evaluate their performance, or written comments on how to improve it? Does their performance "count" in the final grading process? If they are being asked to write an essay about the importance of diversity in the work setting, do they also get feedback on their writing as well as on the "content" of the essay? If so, is the writing considered part of the grade for the assignment? Is writing important in all contexts or just in writing classes? What is the message that is being sent to students?</span><br />
<br />
<span style="font-family: Arial;">Providing students with feedback on their performance throughout the curriculum will improve student outcomes by the end of the program. It is important that we don't leave students out of this process. Too many times we only get bathwater.</span>
<br />
<strong>Procrastinate and Perish</strong> (October 11, 2011)<br />
<span style="font-family: Arial, Helvetica, sans-serif;">As an experienced assessment professional, I often get calls to help institutions and/or programs with their student outcomes assessment plans as they are preparing for an accreditation visit. Without exception (well, almost), my phone rings a year before the report is due. Unfortunately, this is an indication that assessment is being treated as an <u>event</u> instead of a <u>process</u>.</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">A well-developed assessment plan is a continuous process. This does not mean that you assess every outcome continuously. What it does mean is that you have a systematic plan that enables you to smooth out the workload over time. There are well-defined cycles of data collection with defined timelines and areas of responsibility. For example, if you have six learning outcomes and a three-year cycle, you would assess two outcomes every year. This does not mean that there is no activity related to an outcome in the years between data collections. To see what a systematic cycle might look like, see the example table below:</span><br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-vvTGDOMAn9A/TpSWsREqh8I/AAAAAAAAAB8/mSmevoVyUAo/s1600/Outcomes+table.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-vvTGDOMAn9A/TpSWsREqh8I/AAAAAAAAAB8/mSmevoVyUAo/s1600/Outcomes+table.JPG" /></a></div><div class="separator" style="clear: both; text-align: left;"><span style="font-family: Arial, Helvetica, sans-serif;">Notice that data are collected in '10-11 and evaluated in '11-12, recommendations are implemented in '12-13, and a second cycle of data collection begins in '13-14. The cycle is: define outcomes as measurable statements (performance indicators), map the curriculum, decide where to collect the data, collect the data, evaluate the results and processes, recommend improvements, design and implement the improvements, and collect data again.</span></div>
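<div class="separator" style="clear: both; text-align: left;"><span style="font-family: Arial, Helvetica, sans-serif;">For those who want to lay such a rotation out programmatically, here is a minimal sketch in Python; the outcome names, the 2010-11 start year, and the phase labels are illustrative placeholders, not a prescription:</span></div>
<pre>
# A minimal sketch of a staggered assessment calendar. The outcome
# names, the 2010-11 start year, and the three-phase cycle are
# illustrative assumptions, not a prescription.

OUTCOMES = ["Outcome 1", "Outcome 2", "Outcome 3",
            "Outcome 4", "Outcome 5", "Outcome 6"]
PHASES = ["collect data", "evaluate results", "implement improvements"]
START_YEAR = 2010                          # academic year 2010-11
PER_YEAR = len(OUTCOMES) // len(PHASES)    # 6 outcomes / 3-year cycle = 2 per year

def schedule():
    """Walk each pair of outcomes through collect, evaluate, implement,
    starting a new pair each year. The rotation then repeats, so the
    first pair is collected again three years after its first collection."""
    plan = {}
    for i, outcome in enumerate(OUTCOMES):
        start_offset = i // PER_YEAR       # year in which this outcome begins
        for step, phase in enumerate(PHASES):
            year = START_YEAR + start_offset + step
            label = f"{year}-{str(year + 1)[2:]}"
            plan.setdefault(label, []).append(f"{outcome}: {phase}")
    return plan

for year, tasks in sorted(schedule().items()):
    print(year, "|", "; ".join(tasks))
</pre>
<div class="separator" style="clear: both; text-align: left;"><span style="font-family: Arial, Helvetica, sans-serif;">Running this prints one line per academic year showing which outcomes are being collected, evaluated, or acted on--a quick check that no single year is overloaded.</span></div>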
<div class="separator" style="clear: both; text-align: left;"><span style="font-family: Arial;">When last-minute processes are developed, the result is chaos and an undue burden on faculty. It also generally means massive data collection that is difficult, if not impossible, to interpret. Plan wisely!</span></div>
<br />
<strong>Using Sampling for Assessment of Programs</strong> (July 23, 2011)<br />
<span style="font-family: Arial, Helvetica, sans-serif;">One of the most important questions around program assessment concerns data collection: how many students, from whom, and how often. It is critical to remember that the process of assessing the program is about the program, NOT individual students. We are trying to answer the question: "How is the <strong><u>program</u></strong> doing in promoting student learning?" Whether we are talking about the general education curriculum or a specific program, the issues are the same.</span><br />
<br />
<span style="font-family: Arial;">If the number of students in a program is sufficient, sampling is acceptable. Good sampling techniques should be used, and the sample being assessed should mirror the population of students on those characteristics identified as critical to justify generalizing the findings. For example, if there are seven sections of a particular course where students are given opportunities to demonstrate their learning, it may not be necessary to assess every section. If the number of students in the population is not sufficient to justify sampling, then you would need to assess the performance of all students in the cohort.</span><br />
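<br /><span style="font-family: Arial;">As one concrete illustration, here is a minimal Python sketch of proportional sampling across those seven sections; the section sizes and the 25% sampling fraction are hypothetical placeholders:</span><br />
<pre>
import random

# A minimal sketch of drawing a proportional sample across course
# sections so the sample mirrors the population. The section sizes
# and the sampling fraction are hypothetical placeholders.

random.seed(2011)                      # fixed seed so the illustration repeats

SECTION_SIZES = [28, 31, 25, 30, 27, 33, 26]    # seven sections of one course
sections = {
    f"Section {n + 1}": [f"S{n + 1}-{i + 1:03d}" for i in range(size)]
    for n, size in enumerate(SECTION_SIZES)
}

FRACTION = 0.25                        # assess roughly a quarter of each section

sample = []
for name, roster in sections.items():
    k = max(1, round(len(roster) * FRACTION))
    sample.extend(random.sample(roster, k))

population = sum(len(r) for r in sections.values())
print(f"{len(sample)} of {population} students sampled across all seven sections")
</pre>
<span style="font-family: Arial;">Because every section contributes in proportion to its size, the sample mirrors the population on at least that characteristic; the same idea extends to stratifying on any other characteristic identified as critical for generalizing the findings.</span><br />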
<br />
<span style="font-family: Arial;">Refer to my post of May 22, 2010, "Data Do Not Always Information Make," for a discussion of how much data to collect. In my next post, I will address the issue of "how often" to collect data.</span>
<br />
<strong>Using Technology in Program Assessment</strong> (July 28, 2010)<br />
I often get asked about the use of technology in program assessment. In the mid-'90s I was involved in the development of an electronic portfolio system that students used to document their work related to the performance indicators (see previous posting) and that faculty used to rate student performance with scoring rubrics. It was also used to map the curriculum and generate reports. It was a huge investment in human and capital resources, as this was before commercial products were available. However, I remain very proud of the effort and the results.<br />
<br />
Since that time, numerous commercial products have been developed to manage the outcomes assessment process. Is it necessary to buy commercial tools to have a robust outcomes assessment process? Absolutely not! Can technology solutions be helpful in managing the process and tracking the results? Absolutely.<br />
<br />
There are many considerations when evaluating a technology solution for the outcomes assessment process. The first is to be very clear about what a system can and cannot do. It CANNOT do your program assessment and evaluation for you! The institution or program must first define the intended outcomes and performance indicators. Without a doubt, that is the most difficult part of the process. Once the indicators have been defined, you need to be clear about the roles of students and faculty in the use of the technology. Also, decide who the technology "owner" is--who will maintain it, keep the outcomes/indicators current, generate reports, and so on.<br />
<br />
Get references! Talk to other institutions and programs to see what their experience with the technology has been. Don't just listen to the company's salespeople. Ask yourself, "What are we currently doing, or not doing, that technology could make more efficient or effective?" Use that as a starting point.<br />
<br />
For your convenience, here is a list of SOME of the commercial tools available, as a way to get started in your search. I have included two commonly used survey tools as well:<br />
<br />
<span style="font-size: xx-small;"><a href="http://www.docstoc.com/docs/48267576/Technology-for-Program-Assessment">Technology for Program Assessment</a></span>
<br />
<strong>What is a "performance indicator" anyway?</strong> (May 29, 2010)<br />
<span style="font-family: Arial, Helvetica, sans-serif;">In the discussion of the difference between data and information in my previous post, I indicated that meaningful data collection around student outcomes must focus on student achievement of specific performance indicators that define the program outcome. What is a <strong>performance indicator</strong>? The best way to think about performance indicators is to relate them to the concept of <strong>leading indicators</strong> used in economics. Although one could look at many economic factors, experience has shown that certain characteristics are the best indicators of the overall health of the economy. Taken together, these <strong>leading</strong> indicators provide <strong>information</strong> about the current state of the economy and also serve to predict future economic trends.</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">So it is with student learning outcomes. A program does not need to look at every possible skill or knowledge element related to a given student outcome to know how well the program is doing on student attainment of that outcome. Faculty need a good understanding of their students, their faculty, their program educational objectives (what students should be able to do AFTER graduation), and the needs of their constituents, and they need to ask themselves, "How will we know when our students have achieved the desired outcome(s) that will prepare them for early career success?"</span><br />
<br />
<span style="font-family: Arial, Helvetica, sans-serif;">Virtually every program in any discipline (not just the technical disciplines) includes "effective communicators" among its student outcomes. What does that mean? Does being an "effective communicator" mean the same thing for a communications major as it does for a European history major? Does it mean the same thing for a civil engineering major as it does for a chemical engineering major? Does "writing skills" mean the same thing for the civil engineering program at University X as it does for the civil engineering program at College Y? Probably not. In order to develop meaningful performance indicators, we need to think about how students will use their communication skills in the profession. Performance indicators should identify the focus of instruction (content referent) and the level at which students should demonstrate their performance (cognitive/affective level). Here is ONE example of performance indicators for "effective writing skills" (your indicators will undoubtedly look different):</span><br />
<ul><li><span style="font-family: Arial, Helvetica, sans-serif;">Students consistently use the rules of standard English (application level)</span></li>
<li><span style="font-family: Arial, Helvetica, sans-serif;">Word choices are appropriate to the audience (evaluation level)</span></li>
<li><span style="font-family: Arial, Helvetica, sans-serif;">Supporting details utilize appropriate graphical representation (application level)</span></li>
<li><span style="font-family: Arial, Helvetica, sans-serif;">Organizational pattern is logical (application level)</span></li>
</ul><span style="font-family: Arial, Helvetica, sans-serif;">Are there other performances we could use as indicators? Absolutely. However, we are only looking for those indicators that the faculty believe are the <strong>leading</strong> indicators of student performance related to writing skills. Measures taken on the indicators show the degree to which the outcome is being met and provide information that can be used to improve student performance.</span><br />
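<br /><span style="font-family: Arial, Helvetica, sans-serif;">For programs that like to keep their indicator sets in a structured, machine-readable form, here is a minimal Python sketch; the dataclass layout is a hypothetical convenience, not a required format, and the indicators simply restate the writing example above:</span><br />
<pre>
from dataclasses import dataclass

# A minimal sketch of one way an outcome and its leading performance
# indicators could be represented. The indicators and Bloom-style
# levels mirror the writing example above; the structure itself is
# a hypothetical convenience, not a required format.

@dataclass
class Indicator:
    text: str       # the observable performance (content referent)
    level: str      # intended cognitive/affective level

OUTCOME = "Effective writing skills"
INDICATORS = [
    Indicator("Consistently uses the rules of standard English", "application"),
    Indicator("Word choices are appropriate to the audience", "evaluation"),
    Indicator("Supporting details use appropriate graphical representation",
              "application"),
    Indicator("Organizational pattern is logical", "application"),
]

for ind in INDICATORS:
    print(f"{OUTCOME} -- {ind.text} ({ind.level} level)")
</pre>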
<br />
<strong>Data Do Not Always Information Make</strong> (May 22, 2010)<br />
<span style="font-family: Arial, Helvetica, sans-serif;">In my last post, I indicated that it is not necessary to collect data in every course on every student in order to understand how well the program is doing in meeting its student outcomes. However, I continue to be amazed at the number of programs that have developed very elaborate data collection processes, usually involving homegrown databases, spreadsheets, web applications, course reports, and sometimes commercial software. Faculty members dutifully enter data on the extent to which they believe the students in their courses have met the program student outcome(s). Data are collected in multiple courses (sometimes all courses) and then aggregated, averaged, and reported as the level to which students have demonstrated the outcome. Targets of expectation are set (usually in the 70-75% range) and victory is declared. Of course, all of this generally happens the year before the accreditation visit (so much for CONTINUOUS quality improvement).</span><br />
<br />
<span style="font-family: Arial;">It is important to remember that having data and targets for performance does not necessarily translate into <strong>information</strong> that can be used to improve the teaching/learning process. Information means that you are able to look at the data and understand what the students' strengths and weaknesses are related to the outcome--that you are able to discern from the data how well the <strong>program</strong> is meeting the outcome and what can be done to improve. To accomplish this, the program needs to define the outcome as a few performance indicators and understand how the results inform the achievement of the outcome. Data are collected on the performance indicators, and improvement efforts focus on student performance on those indicators (I will discuss performance indicators more fully in my next post). This enables the program to focus the data collection (perhaps different courses for different indicators) and to clearly understand student strengths and weaknesses related to the outcome. It also enables the program to target improvement efforts and sets the stage for the next cycle of data collection and evaluation. We need to stop the data-dump approach to continuous quality improvement, as it only promotes continuous faculty frustration.</span>
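<br /><span style="font-family: Arial;">To make the contrast concrete, here is a minimal Python sketch of the same rubric data reported two ways; the ratings (on a 1-4 scale) and the 3.0 target are fabricated for illustration only:</span><br />
<pre>
from statistics import mean

# A minimal sketch contrasting a single grand average with
# per-indicator reporting. The rubric ratings (1-4 scale) and the
# 3.0 target are fabricated for illustration only.

scores = {                    # indicator: ratings from the mapped course(s)
    "Standard English": [3, 4, 3, 3, 4, 3],
    "Word choice for audience": [2, 2, 3, 2, 2, 3],
    "Graphical support": [3, 3, 4, 3, 3, 4],
    "Logical organization": [3, 3, 3, 4, 3, 3],
}

# The "data dump" view: one number that hides everything.
grand = mean(r for ratings in scores.values() for r in ratings)
print(f"Grand average: {grand:.2f} (data, not information)")

# The per-indicator view: strengths and weaknesses become visible.
TARGET = 3.0
for indicator, ratings in scores.items():
    m = mean(ratings)
    status = "meets target" if m >= TARGET else "needs attention"
    print(f"{indicator}: {m:.2f} ({status})")
</pre>
<span style="font-family: Arial;">The grand average hides the weakness in audience-appropriate word choice; the per-indicator view is what turns the data into information a program can act on.</span><br />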
<br />
<strong>Why all the course-level assessment?</strong> (April 22, 2010)<br />
<span style="font-family: arial;">When students are asked an open-ended question on an exam and they don't know the answer, they tend to write everything they do know that they weren't asked...and they write and write and write. Why do they do that? Probably for one of two reasons: they hope the faculty member will find the right answer in all their ramblings, or they hope to be given extra "points" for their effort. Why do we approach program assessment the same way? I can hear it now: "We don't know what the accreditors want, so we will just give them everything."<br />
This approach always ends up creating undue work for faculty and, while producing massive amounts of data, produces absolutely no information that can be used effectively to improve the student learning experience. We have lost sight of the fact that we are assessing the "program," not the student. Yes, we gather evidence from students, but it is not for the purpose of assessing them--we are already doing that in their courses. It is for the purpose of assessing ourselves. Can we provide evidence that, by the end of the academic program, students have attained the ability to (fill in the blank)?<br />
<br />
If we believe that student learning is cumulative over time and that what is learned in one course is applied in another course, built upon in another course, and so on throughout the curriculum, then by the time students are ready to graduate, their learning should be more than the sum of all their courses. Why do we collect data in lower-level courses, average them with the data taken in upper-level courses, and pretend that we know what they mean? Are we really saying that all courses contribute equally to cumulative learning, and that the complexity and depth/breadth at which students are expected to perform is the same in all courses for any given outcome? Why not collect "evidence" of student learning only in the course where students have a culminating experience related to the outcome? Yes, collecting evidence in a lower-level course can be helpful in understanding student strengths and weaknesses related to a given outcome; this enables faculty to reinforce and emphasize those concepts where students were weak prior to the culminating experience. However, these data should not be aggregated with the data collected in the culminating experience. Why do you need more than one data point for each student from whom you have collected data?<br />
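<br />
To make the "where to collect" decision concrete, here is a minimal Python sketch built on a curriculum map; the course numbers are hypothetical, and the I/R/E coding (introduced, reinforced, emphasized/culminating) is one common convention rather than a requirement:<br />
<pre>
# A minimal sketch of using a curriculum map to decide WHERE to
# collect program-level evidence for one outcome. The course numbers
# are hypothetical, and the I/R/E coding (introduced, reinforced,
# emphasized/culminating) is one common convention, not the only one.

curriculum_map = {
    "ENG 101": "I",        # outcome introduced
    "PROF 210": "R",       # reinforced
    "PROF 330": "R",       # reinforced
    "CAPSTONE 480": "E",   # culminating experience
}

program_evidence = [c for c, lvl in curriculum_map.items() if lvl == "E"]
formative_only = [c for c, lvl in curriculum_map.items() if lvl in ("I", "R")]

print("Aggregate for program assessment:", program_evidence)
print("Use for formative feedback only (not aggregated):", formative_only)
</pre>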
<br />
We need to bring some sanity back into what we are doing, or the entire "outcomes assessment" process will collapse under its own weight--as well it should.</span>
<br />
<strong>Accountability not going away!</strong> (April 13, 2010)<br />
<span style="font-family:Arial;">In the April 13, 2010, Inside Higher Ed article, "No let up from Washington" (<a href="http://www.insidehighered.com/news/2010/04/13/hlc">http://www.insidehighered.com/news/2010/04/13/hlc</a>), Molly Corbett Broad, president of the American Council on Education, is quoted as saying, "...I believe it’s wise for us to assume they (federal policy makers) will have little reservation about regulating higher education now that they know it is too important to fail." This was in the context of holding higher education accountable for learning outcomes.</span><br />
<br />
<span style="font-family:Arial;">Are institutions really prepared, or have they been dragging their feet hoping that a change in administrations would make it all go away? It is clear that institutions and educational programs must get serious about demonstrating student learning, and must do it in a way that honors faculty time and produces information, not just tons of data whose meaning remains a mystery. This also means that accrediting agencies must get serious about preparing their peer evaluators to "know it when they see it." As the feds put pressure on the accrediting agencies to demand accountability for student learning, those agencies need to become more intentional in preparing their peer evaluators who, for the most part, are faculty and administrators in higher education.</span>