PREPARING FUTURE FACULTY TO ASSESS STUDENT LEARNING
A report on a CGS project supported by
a grant from the Teagle Foundation
Preparing Future Faculty to
Assess Student Learning
Council of Graduate Schools
This report was prepared for the Council of Graduate Schools by: Daniel D. Denecke, Julia Kent, and William
Wiener. CGS is grateful to the Teagle Foundation for the one-year grant that supported the activities resulting
in this report.
Cover photo used with permission from Microsoft.
Copyright © 2011 Council of Graduate Schools, Washington, D.C.
ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced or used in
any form by any means—graphic, electronic, or mechanical including photocopying, recording, taping, Web
distribution, or information storage and retrieval systems—without the written permission of the Council of
Graduate Schools, One Dupont Circle, NW, Suite 230, Washington, D.C. 20036-1173.
ISBN-13: 978-1-933042-31-2
ISBN-10: 1-933042-31-1
Printed in the United States
Table of Contents
Preface . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Acknowledgments . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Executive Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2. The National Context: Learning, Quality, and Accountability in
Higher Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3. The Institutional Context: Assessment of Student Learning and
the Graduate School Mission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Student Learning Outcomes
Learning Outcomes and Regional Accreditation
Learning Assessment and the Graduate School Mission
4. Preparing Future Faculty: A Review of Past Efforts, Current
Challenges, and Future Opportunities . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Preparing Future Faculty: A Model Professional Development Program?
What Do We Know about Current Practice in “PFF” Programs
and Student Learning Outcomes?
Where Do We Need to Go Next? Incorporating the
Assessment of Student Learning into Professional Development Programs
5. Insights, Lessons Learned, and Areas of Future Work:
A CGS Workshop on Enhancing Graduate Student Professional
Development Programs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Challenges to Creating a Culture that Values Assessment of Student Learning
Opportunities for Eecting Culture Change
The Broad Parameters of an Enhanced PFF Program
Possible Curricular Content on Learning Assessment in PFF
Measuring Success in Program Integration
Developing Programs with Broader Impact
6. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Web Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Endnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Preface
When the Preparing Future Faculty (PFF) initiative began, there were two major concerns facing US
higher education: first, that universities were not doing enough to prepare graduate students with
the full range of career skills expected of faculty; second, that US institutions were paying insufficient
attention to the quality of undergraduate teaching. At the graduate level, while there is still more work
to be done, universities have made huge strides in developing PFF and similar programs to prepare
graduate students for faculty careers, and through programs such as the Professional Science Master's
to prepare graduate students for non-academic careers. At many colleges and universities, centers for
teaching and learning and a host of initiatives now provide faculty with access to resources and tools
that address professional development needs in teaching and learning.
Despite improvements in undergraduate education that have resulted from such reforms, public
concerns about the quality of undergraduate education have resurfaced, especially around issues
of assessment and accountability. Today’s faculty and accredited US higher education institutions
are typically required to document learning objectives and demonstrate student learning
outcomes in ways that would be unfamiliar to their predecessors. As any cursory glance through
the weekly headlines in the higher education press will reveal, there is an entire industry growing
to meet public demands for accountability in this area. Meanwhile, there is vigorous debate,
particularly among faculty, about proposed instruments and approaches that some see as overly
bureaucratic, based on insufficient evidence of effectiveness, or inappropriate to field knowledge.
We believed that there was an opportunity here to explore how Preparing Future Faculty and other
similar programs were preparing graduate students to assess student learning. We suspected that
graduate institutions could play a critical role in improving undergraduate student learning, bringing
benets to graduate students and the institutions that hire new faculty, and answering public calls
for accountability through proven, faculty-tested best practices in teaching and learning. is report
is intended to catalyze broader, national discussions in the US graduate community as well as local,
campus discussions about needs and opportunities for enhancing the professional development of
graduate students aspiring to faculty careers. The immediate goal of such enhanced preparation is to
produce a generation of new faculty who are better trained to assess undergraduate student learning
with condence. e longer term goal is the improved learning in American postsecondary education
that would result from future generations of faculty more deeply engaged in meaningful assessment.
I hope this report will be useful to Graduate Deans, graduate school staff, and all those who are
responsible for generating professional development resources for graduate students, for enhancing
faculty development programs and resources, or for overseeing institutional learning assessment
efforts. I also hope that this report may provide the basis for conversations among administrators,
faculty, and students about creating vital models for the development and exchange of best practices in
approaches to learning assessment.
From our perspective, this report represents the conclusion of a first, exploratory step in a new
direction to enhance and expand professional development programs, nationwide, that provide
tomorrow’s faculty with the skills to assess undergraduate (and graduate) student learning.
Debra W. Stewart
President
Acknowledgments
The success of a project on faculty teaching and student learning depends on the support
of a broad range of experts and advocates. For their material and intellectual contributions
to this study, the authors would like to thank the Teagle Foundation, particularly Donna
Heiland, Teagle Vice-President, and Program Officer Cheryl Ching, who both provided
inspiring ideas and comments at various stages. We are also grateful to David Bell, Karen
DePauw, Sally Francis, Chris Golde, George Kuh, Laura Rosenthal, Debra Stewart, and Jo
Rae Wright for their helpful comments on earlier versions of this manuscript, and to the
committed graduate students, deans, and assessment experts who participated in the 2010
workshop, “Preparing Future Faculty to Assess Student Learning” (see the CGS website,
www.cgsnet.org, for the agenda and participant list). We also thank all those graduate deans
and sta who devoted their time to responding to a survey on professional development
programs that informed this publication and the workshop. For assistance with survey design
and data analysis, we thank CGS staff Sheila Kirby, Scott Naftel, and Jeff Allum, and we thank
Josh Mahler for his assistance with the workshop, publication design, and project website.
Executive Summary
The assessment of student learning is one of the most important responsibilities in U.S.
higher education. Faculty are expected to assess student learning and use those results to
make improvements in the classroom and in the educational environment, yet few faculty
have expertise or receive formal training in learning assessment methods. The Council of
Graduate Schools (CGS) recognized an opportunity to address this need through a network
of existing professional development programs for graduate students aspiring to faculty
positions. CGS developed an exploratory project with funding from the Teagle Foundation
to understand current trends and challenges in this area and identify future directions for a
new program to prepare the next generation of faculty and university leaders with skills and
expertise in learning assessment.
This report provides a broad overview of national needs in the assessment of student
learning and gaps in existing future faculty preparation programs.
Chapters One and Two discuss student learning assessment in the context of
national discussions about higher education quality and accountability, with an emphasis on
some of the tensions surrounding the assessment issue and on challenges in achieving the
genuine faculty engagement on which rigorous learning assessment depends.
Chapter ree discusses the key role that Graduate Schools can play by bringing
multiple stakeholders together to ensure that improved assessment practices result in both
greater accountability and enhanced teaching and learning environments in US higher
education.
Chapter Four discusses the Preparing Future Faculty program as a model for the
professional development of graduate students and highlights key findings from a CGS
survey of existing US programs that illustrate challenges and opportunities for program
enhancement.
The report concludes with a synthesis of a far-ranging discussion of opportunities,
challenges, and next steps at a national workshop hosted by CGS on November 22, 2010, in
Washington, DC. The workshop brought together assessment experts, senior leaders in
graduate education, and students from universities with model professional development
programs for graduate students aspiring to faculty careers. The broader purpose of this
exploratory project was to identify the optimal elements that would shape a broad, national
initiative to encourage systematic integration of assessment skills and expertise into future
faculty preparation programs. Such an initiative has been called for by national education
leaders (e.g., Hutchings, 2010), and is here envisioned as a key strategy for shaping future
teachers, scholars, and leaders across the US higher education system.
1. Introduction
Student learning is now a key focus in national discussions about the quality of higher
education. Calls for greater public accountability and, specifically, for more compelling
evidence that students are learning are coming simultaneously from a number of different
groups: the federal government, regional accrediting bodies, state governing boards, and
the higher education community itself. As a result, higher education faculty across different
institutional contexts must devote a significant amount of time to assessing student learning
in ways that are unfamiliar to them from their graduate training or past experience.
One example of such an expectation of faculty and their institutions is the development
of “student learning outcomes,” that is, explicit statements of generic skills and abilities
and disciplinary competencies that a student is expected to have acquired as a result of
successfully completing a course, a coordinated set of core courses, or other activities
including co-curricular experiences. This is commonly required at both undergraduate and
graduate levels. These expectations can help faculty to evaluate the level of student learning
and engagement, and develop a better sense of how a particular course or activity fits into
the overall educational mission of the institution. Such requirements can encourage faculty
to reflect on their scholarly responsibilities beyond research, as teachers, and to experiment
with new teaching approaches to enhance learning inside their classrooms. They can also be
used to enhance the activities of all those working to provide a rich learning environment at
their institution, whether as mentors, lab and program directors, or administrators.
Enhanced assessment of student learning has great potential to increase the public trust
in our higher education institutions and result in long-term improvements in teaching
and learning. As these new requirements are currently being dened, communicated, and
implemented, however, a chasm is emerging between stakeholders outside the institutions
calling for greater accountability and practicing faculty within them who are responsible
for the day-to-day activity of teaching and student learning. Faculty sometimes perceive
these requirements as bureaucratic exercises to appease accreditation agencies and public
accountability champions rather than as opportunities to improve the quality of teaching
and learning. If universities want the public accountability movements to be effective,
they cannot afford to alienate those who will create, apply, and hopefully learn from these
measures in their own classrooms. It is important, then, to identify strategies for increasing
current faculty engagement in these discussions (Hutchings, 2010). One of the most
promising long-term strategies for creating a faculty and institutional culture that values
assessment is to begin work now to engage the next generation of future faculty who will
soon inherit the responsibility of educating undergraduate and graduate students (ibid.).
Background
Under the leadership of the Council of Graduate Schools and the Association of American
Colleges and Universities, in collaboration with US graduate schools and 11 disciplinary
societies, the Preparing Future Faculty (PFF) program grew into a nationally recognized
initiative for addressing the professional development needs of future higher education
faculty. From 1993-2003, with supporting funds from the Pew Charitable Trusts, the
PREPARING FUTURE FACULTY TO ASSESS STUDENT LEARNING 9
National Science Foundation, and the Atlantic Philanthropies, hundreds of US colleges
and universities from across the higher education sector worked together to develop pilot
professional development programs for doctoral students aspiring to faculty careers.
National coordination provided a framework for graduate deans, faculty, and disciplinary
society leaders to exchange best practices and ideas on issues such as sustainability,
collaboration strategies, structure, and curricular content. During this period, CGS worked
with US graduate deans, through national meetings, presentations, publications, and the PFF
National Office, to advance the notion that the PhD should represent not only preparation
for research but broader professional development for academic and non-academic careers.
CGS also engaged a wide range of stakeholders in national dialogue about strategic directions
for PFF to help ensure that these programs continued to prepare participating graduate
students with the skills that would be expected of faculty in the future.
Since the end of the grant period in 2003, similar programs have sprung up at a number of
universities around the country while others have been discontinued. Meanwhile, many of
the original programs have evolved relatively independently of one another to prepare at
least a small portion of the nation's graduate students for faculty careers.
The end of grant funding for PFF has meant that a powerful network for improving the
quality of US higher education in many ways now lies dormant. The Council of Graduate
Schools and the Teagle Foundation recognized the potential for renewing this national
network of PFF programs and expanding that network to include other programs with a
similar purpose, such as the Teagle-funded programs to integrate learning assessment skills
into graduate student teaching in the arts and sciences. We believe that a broadened and
revitalized network of future faculty programs will provide one of the most effective means
of addressing the vital national need for faculty who are better equipped to assess student
learning and participate in their institutions’ decisions about assessment that have broader
implications for their students, programs, and colleges or universities.
The Current Project
This report, Preparing Future Faculty to Assess Student Learning, describes an exploratory
project by CGS, funded by the Teagle Foundation, to investigate the potential of PFF and
similar programs to prepare graduate students for their future responsibilities to engage in
thoughtful assessment of student learning. The report discusses the following topics:
• The importance of learning in the context of current discussions of higher education
quality and accountability;
• Outcomes assessment in the history of US graduate education and graduate reform
initiatives;
• The development of the Preparing Future Faculty initiative, characteristics of
active PFF programs, and gaps in our current knowledge and practice that call for
transformed practice in PFF;
• Opportunities for integrating enhanced understanding and skills in assessing
student learning into PFF and similar programs.
Included in the discussion are key results from a 2010 survey conducted by CGS to better
understand the scope and nature of university activities in two areas: the professional
development of graduate students and faculty/graduate student preparation in learning
assessment. ose results suggest the need for better understanding of how assessment is
integrated into PFF programs, more evidence about eective strategies for higher education
learning assessment, and better communication between institutions about best practices
in learning assessment and programmatic integration of assessment skills into professional
development programs. We conclude with a summary of results from a Fall 2010 workshop
designed to stimulate discussion about the key challenges and opportunities for enhancing
programs to prepare graduate students for faculty careers.
The stakes in national discussions of these issues are high. A loss of public confidence
in the ability or willingness of our institutions to communicate their impact on student
learning could have broader consequences. In the long term, such a loss of confidence could
weaken the historically close relationship between US universities and the American public.
One possible solution for increasing public accountability is greater federal oversight: as
Tennessee Senator Lamar Alexander recently warned, “if colleges and universities do not
accept more responsibility for assessment and accountability, the federal government will
do it for them” (Alexander, 2007). Many in the higher education community fear that
federal oversight could actually compromise rather than improve quality at many US higher
education institutions.
An alternative solution is that universities take a more proactive role in voluntary
assessment and accountability. Universities should embrace this more proactive role not
only in reaction to fears of federal oversight, but also out of a genuine interest in
quality improvement. Indeed, surging public interest in student learning at US institutions
of higher education should be welcomed as providing colleges and universities with an
historic opportunity to improve quality while at the same time strengthening the compact
between the American public and American universities.
This exploratory project addresses this opportunity by facilitating discussion between
experts in the assessment of student learning and graduate education leaders from
universities across the country with a demonstrated commitment to the professional
development of graduate students. When PFF programs first began, there was strong
interest in the improvement of teaching, but universities had not yet developed structured
programs to address the professionalization of future faculty. We are now at a newly
challenging stage where such Preparing Future Faculty program structures exist but there is
little understanding or dialogue around best practices in the assessment of student learning
within those structured programs. Our aim is to identify what would be needed nationally
to advance promising practices and identify strong models for preparing future faculty to
actively participate in the assessment of student learning.
2. The National Context: Learning, Quality,
and Accountability in Higher Education
In assessment circles, it is common to say that, while we should aspire to measure what we
value, too oen we end up valuing what we measure. In other words, we come to dene
our standards by what is easiest to collect and quantify. Achieving the ideal of “measuring
what we value” is complicated, however, by the fact that this “we” is made up of dierent
communities that hold dierent perspectives on what would constitute valuable educational
outcomes. ese dierences help to explain some of the variation in approaches to higher
education quality assessment. e values, or interests, of various stakeholders in higher
education might be reected in input and outcomes measures such as the following:
1
• Students and parents typically value the private goods of enrichment, knowledge,
and employability, and ask questions such as: Is the school I am considering widely
regarded as able to provide a high quality education? Will my child (or will I) earn a
good job upon graduation?
• Government stakeholders and the public typically value a return on public investments
in colleges and universities, and ask: Are our colleges and universities efficiently producing
a skilled workforce in sufficient numbers? Do these investments result in a net increase in
jobs, tax revenue, security, economic competitiveness, and social well-being?
• Educational institutions, especially public colleges and universities, must of course
take into consideration the outcomes valued by stakeholders both within and beyond
their campuses, and ask additional questions such as: Are we enabling students to
satisfy their educational and career goals in line with our public or private mission? And
are we able to excel among universities of the same size and type? Are we contributing to
the production of an educated population worthy of a democratic society?
Large-scale measurement enterprises have developed around such important questions, and
while each set of questions may require a different set of metrics, the combined data can
contribute to an overall picture of educational quality. Influential surveys and longitudinal
data sets, for example, have helped multiple stakeholders begin to answer their questions
in ways that can serve to promote both greater transparency about returns on public and
private investment and improvements in higher education. A burgeoning number of for-
prot companies have also recently emerged to provide “data solutions” to help institutions
collect and analyze relevant information (Hutchings, 2009).
Such activities have grown in response to increased requirements by regional accrediting
bodies discussed later in this report. Also fueling this growth has been a competition among
dierent notions about “quality” in our higher education degree programs and institutions.
Such notions are expressed in a range of metrics, including: admit ratios and undergraduate
grade point averages of admitted students, degree completion rates and average time to
degree, rates of employment and salaries aer graduation, and reputational evaluations
by peer faculty and alumni satisfaction results. ese kinds of metrics can facilitate intra-
institutional comparisons (between programs and of single programs at dierent points in
time) as well as inter-institutional comparisons.
But even when the metrics for comparison are easily quantifiable, such as degree completion
rates, achieving agreement on how to use such metrics to assess quality has been no small
task in the United States, where the higher education system is among the most varied and
decentralized in the world (see Benjamin and Chun, 2003). The variety of institutions and
missions in the US has great advantages, such as the access and opportunity students have at
every level of socio-economic status and academic background to further their education in
pursuit of a full range of personal and professional/vocational goals. This variety, however, also
means that metrics for measuring quality as defined by one type of institution and mission (to
be a national exemplar in engaging students in research and scholarship, for example) may
look very different from metrics of quality as defined by another type with a very different
mission (such as to serve the community and provide access to all eligible students).
Where the metrics are not easily quantifiable, the challenges of achieving consensus on
how to use the respective data to enhance quality within the academic community are
magnified. US higher education institutions have made great progress over the last decade
on developing a data infrastructure to track output measures such as degree completion
and time to degree. The “elephant in the room” in discussions of accountability and
national assessments of quality in higher education, however, has long been student
learning. Learning may be one of the few core areas many would agree upon as the common
denominator by which quality in education should be measured. And yet, as many have
noted, it has been among the least well-defined and least operationalized concepts in
calls for improvement in higher education (Adelman 2010; Banta 2007; Shavelson and
Huang 2003; Chun 2002). Methods for directly measuring student learning include:
locally prepared tests, standardized tests, course papers, presentations, performances,
portfolios, and other means that demonstrate what the student is able to produce. When
inter-institutional comparisons have been made, however, the quality of learning has
typically been inferred in national assessment efforts primarily through “indirection and
proxies” (Chun 2002). Data collected to facilitate comparative institutional evaluation and
benchmarking have rarely included direct evidence of learning (see Kuh, 2010).2 This is true
whether we are speaking about the national college rankings and ratings that drive so many
students’ choices of where to apply, the longitudinal data sets that inform education policy, or
the alumni and employer surveys used for various purposes.
In recent years, however, discussions of higher education quality and accountability have
turned the national spotlight toward the quality and systematic assessment of student
learning at all degree levels.3 Educators, assessment experts, accrediting bodies, and
policymakers are now asking of institutions questions that faculty have typically asked in
the classroom: what, how, and how much are students learning?
Understandably perhaps, many faculty have pushed back and expressed concerns that, as
professional teachers and experts in their fields, they are the best people to ask and answer
such questions in the context of the classroom.4 The risk of focusing on measures such as
degree completion and time to degree as “proxies” for quality learning, as expressed by
critics of such an approach, is that faculty may feel pressured to compromise on quality
in their evaluation of student work and focus instead on student progress through the
system, regardless of student performance.5
Similarly, the risk of adopting common learning
assessment instruments across dissimilar institutions and disciplines, it is feared, is that
faculty may be pressured to “teach to the test” in ways that would compromise academic
freedom and undermine teaching and learning (Shavelson and Huang, 2003). The ultimate
concern here may be that rather than “measuring what we value” we (the faculty) will come
to value what we (the assessment experts) measure, and what is easily measurable may not
be what is most valuable about higher education.
To say that “not all faculty are on board,” as one CGS survey respondent phrased it, may be
an understatement. As expressed in the 45 responses to a September 7, 2010 article by David Glenn in
The Chronicle of Higher Education titled “Assessment Projects from Hell,” faculty resentment
at what is perceived to be the bureaucratization of US higher education is strong.6 Faculty
responses to the article voice concerns about a range of issues, including: increased paperwork,
threats to academic freedom and faculty autonomy, and the need to adopt what they perceive
as fashionable assessment “jargon” and embrace reforms before they have been empirically
proven eective. Some faculty also fear that if standards for content and teaching are dened
from outside the institution, department, or classroom, they will result in nothing more than a
legitimization of compulsory mediocrity in US higher education curricula (see Fritzschler, 2010).
Without serious faculty engagement and input into the discussions shaping the future metrics
by which quality learning in higher education will be measured, ownership of the problem and
its resolution will be claimed by a limited set of stakeholders.7 In taking these responsibilities on
for themselves, administrative bodies within the institution (and other organizations contracted
by it to help meet the new burdens of documentation) could jeopardize the capacity of senior
leaders, learning experts, and faculty to leverage improvement in the classroom or attract the
requisite respect and attention of their faculty peers. Under one increasingly plausible scenario,
a culture of learning assessment for accountability could arise that would be conducted by expert
administrative units to meet the requirements of regional accrediting bodies, state governing
boards, etc., but which is almost entirely divorced from the day-to-day practice of faculty
(see Banta 2007). [This is not the direction accrediting agencies would like to see, nor is it the
direction that is likely to result in the most meaningful improvements.]
At the same time, mechanisms to use learning assessment to improve teaching may indeed
already be in place, though those efforts currently are mostly conducted by individual
faculty members on an ad hoc basis. One could imagine a scenario in which “institutional
assessment for accountability” could come to perform what might be called a “ceremonial
function” of demonstrating to the public that “learning” is taking place, while the real
assessment of learning and any best practice exchange that such assessment might foster
would go undocumented. On the surface, it might seem that the advantage of such a
scenario is a certain degree of efficiency. Faculty who already feel overwhelmed by a
range of scholarly responsibilities may feel relieved to know that others are handling the
accountability paperwork. The disadvantage is a lost opportunity to take advantage of public
interest in the quality of learning to better document and therefore more broadly replicate
what the country’s best faculty are finding to be the best practices in teaching, i.e., those
most conducive to learning in different contexts across US classrooms and institutions.
In responding to calls for action, especially in developing institutional responses to such
calls, there are challenges and risks as well as opportunities. We must steer the course
carefully on these issues or we risk jeopardizing faculty engagement with overly bureaucratic
paperwork or, even worse, devaluing real teaching by calling on quality teachers and
institutions to meet prematurely defined minimal standards. Organizations such as the
Teagle Foundation, the Lumina Foundation, the Carnegie Corporation of New York, the
Carnegie Foundation for the Advancement of Teaching, and the Spencer Foundation have
led important eorts to chart the course wisely. One of the greatest challenges in all of these
eorts has been engaging faculty from across the disciplines in discussions about how best
to articulate and meet the new expectations. e projects sponsored by these organizations
have recognized that meaningful assessment of student learning must be dened from the
“bottom up” by faculty and actively teaching scholars in the disciplines in close dialogue
with assessment experts rather than from the “top down” by government ocials. In a
recent report for the National Institute for Learning Outcomes Assessment, Pat Hutchings
(2010) recognizes this need and recommends “Build[ing] Assessment into the Preparation
of Graduate Students” as one of six key strategies for obtaining greater faculty engagement:
“Weaving assessment into courses and experiences designed to prepare beginning scholars
for their future work as educators is a promising step forward, with long-term benefits as
today’s graduate students become tomorrow’s faculty members and campus leaders” (ibid.).8
There is an important role for government, regional accrediting bodies, and state boards
to play in setting clear expectations for public accountability and transparency in higher
education about learning outcomes. Assessment experts too have a key role to play,
especially in identifying promising strategies and techniques for assessing student learning.
But the success of all these efforts depends upon the vital engagement of faculty and of
graduate students aspiring to faculty positions. Graduate schools and graduate deans are key
figures in making such engagement happen. Graduate deans have oversight responsibility
in two arenas. First, at institutions where graduate teaching assistants serve as instructors
of record, graduate schools can ensure that the assistants understand core principles and
methods for assessing student learning and using the results to improve teaching. Second,
at institutions where PFF or other programs are in existence, graduate deans play a role in
ensuring that students who aspire to become faculty are prepared for their future assessment
responsibilities across all of their roles: teaching, service, and research supervision.
Exploring how senior administrators and faculty might best work together to provide the
next generation of faculty with the assessment skills they need requires better understanding
of the key opportunities and challenges, as well as the institutional context for the new
assessment requirements facing faculty and institutions. The following sections provide
some of this background information through a discussion of: the institutional context
for outcomes assessment, the Preparing Future Faculty model and current institutional
programs, and the experiences of leaders at those institutions in integrating learning
assessment into faculty and future faculty preparation activities.
3. The Institutional Context: The Assessment of
Student Learning and the Graduate School Mission
The following section describes the institutional context for the assessment of student
learning, including: the purposes of learning assessment; some definitions and typical
approaches; and the institutional requirements for regional accreditation in the US. This
section concludes with a discussion of the important roles that graduate deans and Graduate
Schools have to play in advancing the assessment of learning through academic program
review, benchmarking activities, and professional development programs for graduate
students aspiring to faculty positions.
Student Learning Outcomes
A quality college or university education requires a well-planned curriculum and goals
that aim to meet specific “student learning outcomes.” In the US, there are two levels of
student learning outcomes that are typically required at the undergraduate level (Huba and
Freed, 2000). First there are outcomes related to general education courses. Such courses
have the dual goals of (a) developing specific proficiencies such as writing, communication,
mathematics, critical thinking, foreign language, etc., and (b) exposing students to a range
of disciplines that will broaden their understanding of such areas as fine arts, humanities,
cultures and civilizations, social and behavioral sciences, natural sciences, and health and
well-being. General education is designed to prepare students to become well-rounded
citizens and serve as a platform for advanced knowledge in their “major” or chosen field
of study.9
At the second level, undergraduate students are expected to develop a thorough
understanding of the eld or discipline in which they major. is understanding oen
includes the history of the eld, its theory, its methodology, and its application. Here,
students also develop a set of skills or abilities that may be specic to that eld or may have
broader application.
Similarly, at the graduate level there are also two levels of student learning outcomes.
While there generally are not common general education or core courses at the graduate
level, there are individual foundational concepts and skills that are taught throughout the
curriculum. ey may include oral and written communication, critical evaluation, research
methodology, research ethics, and professional ethics. e second level of graduate learning
outcomes is specic to the discipline and encompasses content relating to the eld of study.
ese outcomes may dier signicantly by eld or discipline.
Assessment
Attainment of student learning outcomes is evaluated by a process known as “assessment.”
Although there are other kinds of quality assessment in higher education, learning
assessment has been defined as the systematic collection, review, and use of information
about student learning in order to inform decisions about how to improve teaching and
learning (Palomba and Banta, 1999; Walvoord, 2004). Assessment in this sense developed
as a type of action research intended neither to collect data for external stakeholders nor to
grade a student’s performance but rather to inform improvements in the curriculum and the
delivery of information. It can be used to measure student learning of disciplinary content
as well as critical thinking, scientific reasoning, or other skills.
The purposes of assessment are to describe what the student should know and be able to do
(student learning outcomes) and to show evidence that documents their attainment of these
abilities (Anderson, et al., 2005). Assessment can rely on objective measures as well as on
informed professional judgments. Ideally, assessment is an ongoing cycle that consists of 1)
establishing institutional and departmental goals for student learning, 2) operationalizing
the goals into measurable expected outcomes of student learning, 3) providing sufficient
opportunities to achieve those outcomes, 4) gathering data on how well students achieve
the outcomes, 5) analyzing and interpreting the evidence that has been collected, 6) using
the evidence to make changes that will improve student learning, and 7) evaluating the
eectiveness of the assessment process itself (Walvoord, 2004; Suskie, 2009). e cycle
should reoccur each year and is expected to result in changes that will make a positive
impact on the quality of the educational program. It is clear that without broad and active
involvement of faculty, however, the assessment of student learning outcomes will be weak
and tangential (Bers 2008; Grindley et al. 2010).
There are three approaches to assessing general education learning goals (Palomba
and Banta, 1999). The first is an individual course-based approach that encourages
faculty to embed their own assessments in the courses they teach. The second is
a multi-course or theme-based approach that is used across courses and across a
college. The third is a non-course approach designed by those who teach in related
disciplines within and across colleges. Ultimately it is possible to use all three or
combinations of these approaches.
Integrating evidence across different sections of courses or among different courses
is facilitated by the use of “rubrics.”10 A rubric is a guide to scoring that provides a
task description, the evaluation criteria to be used, levels of the task required for
success, and a scale for evaluation. Rubrics make scoring more consistent and allow
comparisons across different learning situations (Suskie 2009; Allen & Knight 2009;
Stevens and Levi 2005).
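
To make the elements named above concrete, the sketch below shows one possible way to represent a rubric (a task description, evaluation criteria, performance levels, and a scale) and to score a piece of student work against it. It is a minimal illustration in Python; the task, criteria, level names, and functions are hypothetical examples invented for this report's discussion, not part of any published rubric or assessment tool.

    # A minimal, hypothetical sketch of a rubric as a data structure.
    # The task, criteria, and performance levels below are illustrative only.

    SCALE = {"beginning": 1, "developing": 2, "proficient": 3, "exemplary": 4}

    RUBRIC = {
        "task": "Write a short persuasive essay on an assigned topic",
        "criteria": {
            "thesis": "States a clear, arguable thesis",
            "evidence": "Supports claims with relevant evidence",
            "organization": "Orders ideas logically with transitions",
            "mechanics": "Uses correct grammar, spelling, and citation style",
        },
        "success_level": "proficient",  # minimum level expected for success
    }

    def score_work(ratings: dict) -> dict:
        """Convert per-criterion ratings into numeric scores and a success flag."""
        scores = {criterion: SCALE[level] for criterion, level in ratings.items()}
        threshold = SCALE[RUBRIC["success_level"]]
        return {
            "scores": scores,
            "total": sum(scores.values()),
            "meets_expectations": all(s >= threshold for s in scores.values()),
        }

    # Example: one student's essay rated on each criterion.
    print(score_work({
        "thesis": "proficient",
        "evidence": "developing",
        "organization": "exemplary",
        "mechanics": "proficient",
    }))

Because every rater works from the same criteria and scale, two instructors scoring the same essay are more likely to reach comparable results, which is the consistency the rubric literature cited above emphasizes.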
The evidence to be collected in assessment consists of both direct measures and
indirect measures. Direct measures are those that are based on tangible examples of
student work or thinking. They might include standardized tests, locally developed
tests, portfolios, papers, projects, presentations, and other original work. Indirect
measures are proxy signs that learning has occurred. They might include course
grades, graduation rates, admission rates into graduate programs, placement rates
of graduates, satisfaction surveys, student engagement data,11 alumni perceptions,
and employer surveys. Final course grades are considered indirect measures because
they provide an overall view but do not measure explicit student learning outcomes.
Grading within courses can become a part of direct assessment by examining specific
results of key assignments and aggregating student learning results across sections
and courses (Suskie, 2009).
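
As a companion to the rubric sketch above, the following hypothetical example illustrates the aggregation idea described here: rubric scores on a key assignment, collected from several sections of a course, are combined to estimate how many students meet an expected outcome. The section names, scores, and benchmark are invented for illustration and do not reflect any institution's actual results.

    # Hypothetical aggregation of rubric scores on one key assignment
    # across several sections of the same course (scale of 1-4, where a
    # score of 3 or higher is taken to mean the outcome was met).

    section_scores = {
        "ENGL101-01": [3, 4, 2, 3, 3],
        "ENGL101-02": [2, 3, 4, 4, 3, 2],
        "ENGL101-03": [3, 3, 3, 4],
    }

    THRESHOLD = 3  # local benchmark for "outcome met"

    def summarize(scores_by_section: dict) -> None:
        """Report the overall share of students meeting the outcome and each section's mean."""
        all_scores = [s for scores in scores_by_section.values() for s in scores]
        met = sum(1 for s in all_scores if s >= THRESHOLD)
        print(f"Students assessed: {len(all_scores)}")
        print(f"Met the outcome: {met} ({met / len(all_scores):.0%})")
        for section, scores in scores_by_section.items():
            mean = sum(scores) / len(scores)
            print(f"  {section}: mean score {mean:.2f} (n={len(scores)})")

    summarize(section_scores)

A summary of this kind is direct evidence in the sense defined above, because it is built from scored student work rather than from proxies such as final grades or graduation rates.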
Assessment is a fruitless operation unless faculty use the results to examine where
improvements are possible in teaching and curriculum design. As with the collection of
any data set, analysis of assessment results can be performed using tallies, percentages,
aggregates, averages, and qualitative summaries. It is helpful to make comparisons using
benchmarks based on local standards, external standards, best practice, or value-added
contributions (Suskie, 2009). With this information at hand, faculty can come together
and examine strengths, weaknesses, and trends. They can then determine where changes
are needed and can implement those changes.
Learning Outcomes and Regional Accreditation
Accrediting bodies have turned their attention to the need for strong assessment programs
within universities. In 2001, in an article entitled Accreditation and Student Learning
Outcomes: A Point of Departure, the Council for Higher Education Accreditation argued that it is
important for the accrediting bodies to take a more active role in encouraging assessment
through student learning outcomes (Ewell, 2001). The Council for Higher Education
Accreditation (CHEA) in its 2003 Statement of Mutual Responsibilities for Student Learning
Outcomes urged the use of student learning outcomes to improve higher education (Council
for Higher Education Accreditation, 2003). The six regional accrediting agencies12 that are
responsible for evaluating universities in the US have similarly focused much attention
on assessment. With the changes to the Higher Education Act in 1992, these bodies
were mandated by the federal government to pursue student learning outcomes in their
accreditation requirements (Business-Higher Education Forum, 2004). The Higher Learning
Commission of the North Central Association, as an example, has created the Institute on
Assessment of Student Learning for institutions that seek instruction in how to improve
assessment processes (2010); standard 14 of the Middle States Commission on Higher
Education requires that assessment of student learning demonstrate student achievement
of appropriate “knowledge, skills, and competencies”; and the New England Association of
Schools and Colleges (NEASC) Commission on Institutions of Higher Education includes
detailed guidance on multiple methods of appropriate assessment.13
The number-one reason for follow-ups by nearly every regional accrediting body last
year was a deficiency in student learning outcomes assessment.14 Institutions have engaged
in a range of strategies for meeting the assessment requirements of accreditation bodies,
a number of which are “bottom up” efforts to develop institutionally comparable student
learning outcomes in general education, in the disciplines, and in community colleges.15
These activities have great potential to assist colleges and universities in addressing the new
requirements in ways that ultimately result in improved student learning. In addition to the
regional accreditors, other accrediting bodies also require similar documentation of student
learning assessment. The professional and specialized bodies that accredit individual
programs within universities have also shown commitments to focusing upon student
learning outcomes (Palomba and Banta, 2001).
As has been pointed out in the 2010 NILOA paper on the topic of regional accreditation
and student learning outcomes, accrediting bodies may provide guidelines for student
learning outcomes, but they do not provide a solution for meeting these requirements in the
classrooms where they may (or may not) be implemented. The paper’s author, Staci Provezis,
observes: “Despite calling for faculty involvement, all regional accreditation standards are
weak in respect to assuring such involvement” (2010, p. 13). In the section below, we discuss
several potential ways in which the current roles of graduate deans and graduate schools
may help to address this concern.
Learning Assessment and the Graduate School Mission
The mission of the Graduate School is to oversee the overall quality of graduate programs.
As noted by several graduate deans at a workshop described in Chapter 5, below, this
mission places the graduate school at the intersection of public accountability and
continued internal program improvements. Three areas in particular make the Graduate
School an essential partner in any future efforts to address the professional preparation of
future faculty in the area of student learning assessment:
(1) academic program review, where the Graduate School plays a key role in
coordination and/or oversight;
(2) the institutional adoption of benchmarking tools and practices, where graduate
deans provide important leadership in monitoring national developments and
international trends as well as identifying opportunities and instruments; and
(3) professional development, where the Graduate School has historically played an
important role in administering and overseeing strong graduate student programs such as PFF.
(1) Academic Program Review
One of the key levers for continued improvement in higher education is academic program
review, i.e., the evaluation of departmental programs for the purpose of continuous quality
improvement. Program review considers many inputs including application rates, selectivity,
yield rates, applicant grades and admission scores, curricula, and faculty teaching. Since
the quality of a program depends in part upon how well the program is achieving its goals,
academic program review must also consider program outcomes. Assessment designed
to measure student learning outcomes therefore should be a part of the program review
process. Many universities incorporate a report on student learning outcomes as part of
their program review evaluations.
Graduate deans have an important role to play in the academic program review process.
Because they have a leadership and/or support role on the graduate council or other faculty
governance body, they typically coordinate the review process for graduate programs. This
involves providing guidelines for a program’s self-study, arranging for external reviewers,
coordinating the logistics for the reviewers’ visit, obtaining a final report, engaging
in a dialogue with the department regarding that report, and sharing the report and
recommendations with the academic leadership. At universities where graduate teaching
assistants are an integral part of undergraduate instruction, graduate deans may also
participate in the review of undergraduate programs. In some cases, graduate deans may
be responsible for coordination of a combined review of both graduate and undergraduate
programs within an academic unit or interdisciplinary area.
Graduate schools can help to ensure that the assessment of academic programs transcends
the evaluation of specic courses and that programs are meeting goals and assessments
that go well beyond the goals of those courses. Eective curricula are evaluated not only
by what is contributed by individual courses but also by how they complement each other
(Cuevas et al., 2010). Academic program review helps the graduate school with its mission
to determine if the collective goals have been met.
(2) Monitoring the Adoption of Benchmarking Tools and Practices
In today’s post-Spellings climate, it may not be sufficient to rely solely on internal goals and
metrics for improvement nor to rely solely on external data for accountability purposes.
External benchmarking measures have been developed to permit comparisons within and
between universities for undergraduate learning. Tests such as the Collegiate Assessment of
Academic Prociency (CAAP), the Measure of Academic Prociency and Progress (MAPP),
the College Basic Academic Subjects Examination (CBASE) and the Collegiate Learning
Assessment (CLA) are examples of instruments that may provide such evidence (Ewell,
2007).
16
Colleges and universities may also consider developing internally-based, but cross-
referenced performance tasks that permit institutional comparisons. Sharing the results of
such standardized tests and assessment metrics as well as student learning outcomes with
the public may be an expectation in the near future, and accrediting bodies may move
to make more information about the results of their reviews transparent to the public
(CHEA, 2009). In this environment, faculty and senior university leaders should be active
now in deciding which assessment instruments work and which fall short in adequately
assessing the student learning outcomes that they have dened for higher education at their
institution. If they do not play an active role in these decisions and related discussions, it is
possible that decisions about which instruments should be adopted and how they should be
used will be made for them by external stakeholders.
ese eorts reect a growing interest in assessment for improvement, but also assessment
for accountability. e US is not alone, and not necessarily the leader, in requiring greater
accountability of its higher education institutions. In Europe the Bologna Process attempts,
through systems of external examiners, to align subject standards across institutions
and also to move toward greater accountability. In service of these efforts to harmonize
degree structures in Europe, the “Tuning Project” was developed to establish common
frameworks for what students should know and what abilities they should have as a result of
achieving bachelor’s, master’s, and doctoral degrees across European universities (Adelman,
2008). This project has been influential upon efforts led by the Lumina Foundation to
define similar qualifications for US degrees. Work on bachelor’s degrees has begun, and
preliminary work to explore such qualifications for master’s degrees is now underway. A
key ingredient in the success of such an approach will be the involvement of faculty in the
disciplines from multiple institutions in discussions and decisions about how to define such
qualifications. As participants in this project note, Graduate Schools can serve an important
role in brokering such faculty discussions, which also have implications for graduate student
professional development.
(3) Professional Development
Integrating learning assessment into professional development programs for future faculty
can help quell tensions between those who view improvement eorts to be an internal
matter and those who advocate for a more formal and external approach to accountability.
The trends discussed in this paper all point to a need to prepare future faculty to take
a leadership role in developing learning assessment systems and a learning assessment
culture. A CGS survey of PFF programs (see next section) found that students preparing to
become faculty members have not yet taken sides in the debates between those who support
assessment and those who see it as nothing more than another bureaucratic demand from
senior leadership. Students appear open to developing skills in learning assessment and to
documenting results. Providing students in a more structured way with such knowledge and
skills relating to assessment, and exchanging best practices in such preparation across and
between institutions, could energize a new cohort of faculty to take on their responsibilities
in this evolving arena.
If the history of PFF is an indication of future possibilities, such a structured professional
development program strengthened by national discussion of the topic will require
the leadership of the graduate community. With unique responsibilities for overseeing
the quality and review of academic programs across disciplines and a responsibility to
monitor national benchmarking practices, graduate deans have a lead role to play in the
shaping of the professional development of graduate students. What such professional
development programs look like, and how these programs might be enhanced, is the topic
of the next section.
4. The Preparing Future Faculty Initiative: A Review
of Past Efforts, Current Challenges, and Future
Opportunities
Universities have been engaged for decades in serious reflection on the quality of learning in
higher education through a number of successful national reform initiatives.17 In response
to influential national reports such as the 1991 COSEPUP report, Reshaping the Graduate
Education of Scientists and Engineers, for example, and a series of widely publicized critiques
of the quality of US undergraduate education that emerged around the same time, many
universities turned inward to reflect on how they could improve the quality of
graduate students’ preparation for faculty careers and enhance the quality of undergraduate
education. Some of the most successful institutional responses were made possible by
national “best practice” initiatives focused on enhancing graduate education. These
initiatives included the highly successful Preparing Future Faculty program, sponsored by
the Council of Graduate Schools in collaboration with the Association of American Colleges
and Universities. This project was premised on the idea that enhancing the quality of
graduate education required better preparation of doctoral students for their professional
future responsibilities and greater dialogue between institutions.
The Preparing Future Faculty (PFF) model sprang from a decades-old realization that
faculty members need to be better prepared for their multiple roles in academe. Across
the US, PFF programs prepared doctoral students in a strategic and comprehensive way
for the full range of roles and responsibilities of US higher education faculty.
18
Between
1993 and 2003, the PFF initiative evolved to award 76 grants to 44 doctoral universities
that partnered, collectively, with nearly 300 other higher education institutions, and 11
disciplinary societies; thousands of students have participated in PFF programs, many of
whom are now teaching in tenured and tenure-track positions.
Many of these programs include some training in the assessment of student learning in
post-secondary education. By participating in PFF programs, graduate students learn about
teaching styles and pedagogy (including pedagogical issues germane to their disciplines)
and how to design a course curriculum, as well as how expectations and priorities for teaching,
research, and service may vary depending on institutional context and mission. By meeting
the broader professional development needs of graduate students aspiring to faculty
positions, the PFF initiative advances one of the larger goals of improving the quality of
graduate education in the US.
Another driver behind the PFF initiative is a broadly shared interest in improving the
quality of undergraduate education. By preparing doctoral students (during the grant period)
and, in subsequent years, master's students and postdoctoral scholars for the full range
of faculty roles and responsibilities and for the diverse expectations of US higher education
institutions, participating universities contribute to the quality and systemic improvement
of the entire US higher education enterprise. Graduate schools and graduate deans played
a strong leadership role in this improvement. A large-scale external evaluation of the PFF
initiative found that the most successful programs were those in which: graduate schools
coordinated and provided broad student access to professional development in the general
knowledge, skills, and competencies required of successful faculty; partner institutions
shaped experiences for participants specific to their institutional contexts; and participating
programs or departments provided discipline-specific curricular content and experiences.19
Preparing Future Faculty: A Model Professional Development Program?
PFF programs go far beyond TA-training programs in scope and content. One innovative
design feature that, during the grant-funded phases, made PFF programs different from
traditional TA-training programs, for example, involved institutional collaborations.
Research universities in PFF programs were originally required to partner in "clusters"
with liberal arts colleges, master's-focused universities, community colleges, and minority-
serving institutions. PFF participants in programs with institutional collaborations often
had opportunities to observe and experience faculty responsibilities at a variety of academic
institutions with varying missions, diverse student bodies, and different expectations for
faculty. Through these collaborations, students participated in supervised teaching and
other professional development experiences, and received exposure to higher education
environments often dissimilar to their own doctorate-granting institution. In many
cases, students reported that these experiences helped them identify their own career
preferences and provided them with knowledge and experience that proved advantageous
on the job market.
Since 2003, CGS has run the PFF National Office, which provides a central clearinghouse
for information about PFF and contact information for campus PFF programs. The results
of model PFF programs are documented on a dedicated website (www.preparing-faculty.org)
maintained by CGS, through CGS-hosted workshops, deans' dialogues, and plenary
sessions, and in a series of best practice publications.20 Since the end of the grant period,
PFF programs have continued to evolve and thrive, and many universities have developed
similar professional development programs on the PFF model, often in consultation with
CGS.21 Some of these programs have expanded their partnerships; some have scaled back
student travel while maintaining institutional collaborations; and others now focus on
enhancing the on-campus curricula and have scaled back institutional collaborations
altogether (see next section for results of a CGS survey). Many of these programs have
evolved to encompass additional professional development areas such as: using technology
in the classroom, ethical issues in research and academia, financial management, and
university governance. The successful PFF model has been widely emulated beyond the US
as well, for example in the UK and Japan, in consultation with US universities and the CGS
PFF National Office.
PFF programs today provide the most comprehensive and recognizable models
for preparing graduate students who aspire to teaching careers. In this climate of
accountability, they afford a unique opportunity to embed assessment knowledge and
skills into their programs. They provide a means to channel training in assessment to
the upcoming cohort of new faculty, and yet these programs have not been fully utilized
to enhance the assessment of student learning in higher education. While assessment
experts, policy makers, and foundations have been publicly deliberating about what to
do about student learning assessment on a national scale, some of these programs have
already been quietly doing it by providing numbers of graduate students with expertise
and practice in learning assessment.
Various models have been developed by some of these innovative PFF programs, but their
effectiveness has not yet been studied. Promising practices and a growing body of evidence
about what works in the assessment of learning and in the professional preparation of future
faculty to assess student learning may well be developing at universities and in clusters
where such programs exist. And because senior university leaders have typically not been
involved in monitoring and institutionalizing such efforts, they have not been a part of a
national dialogue about best practices in graduate student preparation for faculty careers or
in learning assessment generally.
What Do We Know about Current Practice in “PFF” Programs and Student Learning Outcomes?
Preparing Future Faculty programs would seem to provide an ideal opportunity for
introducing graduate students to institutional expectations for learning assessment,
to skills and techniques in assessment, and to the broader issues of how and why
student learning should be assessed and how results of that assessment can be used to
improve teaching and the curricula. CGS sought to better understand the extent to which
preparation in the assessment of student learning may already be integrated into PFF
programs and to identify opportunities for enhanced integration. We therefore designed
a survey that queried universities on the status and scope of their PFF programs, ways in
which those programs have evolved, the degree of institutional collaboration, and other
issues. Separately, we asked within the same survey about what university resources and
activities were available to help faculty with student learning outcomes assessment and
whether such resources and skills preparation were available to graduate students aspiring
to faculty positions.
We sent the survey to 57 universities, including every university that received a PFF grant
or that requested a similar program to be listed on the PFF National Office website, as well
as other universities with professional development programs at least partly coordinated
by the graduate school or with sustained involvement in assessment. We sent the survey
to graduate deans to oversee responses, but asked for input on the survey from project
directors, staff, and campus experts who would be able to provide accurate responses to
both areas of inquiry. We received 37 completed surveys (a 65% response rate), and two
e-mail responses from institutions indicating that their PFF activities are no longer active.
The great majority of respondents (78%) reported that, over the past decade, requirements
for faculty at their university in the assessment of student learning increased. Only 14%
reported that such requirements stayed about the same, and none reported a decrease
in such requirements. We sought to learn whether or not, in response to this trend, the
development of student learning outcomes was currently integrated into structured
professional development programs for graduate students aspiring to faculty positions. If the
survey demonstrated that such integration was already in place, we also sought to use the
survey to gather information about opportunities for enhancing such integration and for
making current promising practices a part of the national dialogue.
Survey results shed light on opportunities to meet national and institutional needs
discussed earlier in this paper. Some key opportunities and needs identified in survey
results are discussed below. Many of these findings point toward the value of PFF as a
delivery model for the professional development of graduate students in undergraduate
learning assessment. But results also point to specific needs: to reach more students within
universities with active PFF and similar programs; to foster greater dialogue between
universities about how to integrate learning assessment into professional development
programs for tomorrow's faculty in ways that are scalable, sustainable, and effective; and to
create more opportunities for dialogue within and across universities about best practices in
the disciplines, as were fostered during the Preparing Future Faculty initiative.
Opportunities and Needs
1. Many programs developed with seed money from the PFF initiative remain "Active"
or "Somewhat Active."
The majority of respondent institutions (76%) described their PFF or PFF-like programs
as currently "Active," which we defined as: "continue to maintain an active professional
development program with institutional partnerships and supervised teaching experiences
and/or certificate/transcript recognition for student participation in a range of activities." An
additional 22% described their program as "Somewhat Active," i.e., operating with "scaled
back resources and/or activities," institutional partnerships, etc., since the original grant-
funded period. Only 3% of those that returned completed surveys described their PFF or
similar programs as "Inactive." [Results reported below include responses from institutions
reporting on both "active" and "inactive" programs.]
2. Graduate Schools provide strong support for PFF and similar programs.
Early grant phases supported the development and institutionalization of predominantly
centralized PFF programs, which were housed in the Graduate School or in some other
central unit with graduate school input or oversight. During the latter grant phases, new
PFF programs were developed in the graduate programs and departments (often run in
combination with centralized PFF activities or programs). The majority of institutions
surveyed (59%) described their programs as "centralized," defined as "open to graduate
students from across the campus, focusing on issues that pertain to multiple fields and
programs," and 35% described theirs as "hybrid," that is, containing both centralized
and program-specific components. By contrast, only 5% described their PFF activities as
"program-specific," defined as "housed in the departments or programs, including emphasis
on issues specific to the field or program." Three quarters (75%) of those respondents who
described their "centralized or hybrid [programs] with centralized components" reported
their programs to be housed in the Graduate School or Graduate College. The status of PFF
programs suggests the potential for strong leadership from graduate deans and graduate
schools to influence the priorities and activities of professional development programs in
positive ways in the area of learning assessment.
3. Graduate deans and other senior administrators are leading calls, and institutional
responses to calls, for accountability in the area of learning assessment.
Presented with a variety of possible factors contributing to increased university
requirements for student learning outcomes, 100% of respondents reported that "Strategic
commitment of senior administration to improve quality of education" was either very
important or somewhat important in prompting such increased requirements, and 96%
reported that both "Institutional/regional accreditation standards" and "Specialized or
programmatic accreditation" were very or somewhat important in prompting increased
requirements for faculty assessment of student learning outcomes. The leadership of
graduate deans in working with faculty to meet these requirements, and the place of
Graduate Schools at the intersection of graduate student professional development and
public accountability, make the case for stronger Graduate School involvement in the
current national dialogue surrounding enhanced learning assessment.
4. Institutional collaboration appears to play a smaller role now than in the grant-
funded period of PFF programs.
As mentioned above, one of the features that distinguished grant-funded PFF programs
from typical TA-training programs is that participants in the former experienced
supervised teaching and service mentoring on other campuses via formalized institutional
partnerships. Typically, a PhD-granting PFF university partnered in "clusters" with at least
one master's-focused institution or liberal arts college and at least one community college,
on average involving between 3 and 6 other institutions (and as many as 15 institutions) per
cluster. Sustainability concerns, travel time, and incentives for partner institution faculty
had been identified in the past as challenges, and survey results suggest that the numbers
of institutional partnerships may have decreased. Despite the number of respondents that
described their programs as "Active" above, and the emphasis on institutional collaboration
in the definition of "Active," results from a separate question on institutional partnerships
indicate that just over one third have what they would describe as active partnerships with
either master's-focused institutions or community colleges. Respondents reported "current
institutional collaborations" with the following types of higher education institutions as
part of their PFF programs: master's-focused/comprehensive universities (35%); liberal
arts/four-year colleges (41%); community colleges (35%); minority-serving institutions or
predominantly-minority institutions (21%); and other research/doctoral universities (19%).
5. The majority of PFF programs are providing at least some graduate students with
preparation in student learning assessment.
Over half (68%) of respondents reported that "the development of Student Learning Outcomes
(SLOs) and/or the assessment of student learning" is "an integral feature" of [their] PFF or
similar programs. Methods to which PFF programs exposed participants included: classroom
assessment techniques, use of technology to improve student learning, use of feedback from peer
or mentor observation to improve teaching and learning, and use of learning assessment data to
enhance syllabi or curricula; 86% of respondents reported that students in their PFF programs
learned about "Development of Student Learning Outcomes for individual courses."
These promising activities have not been documented, nor have they been cited in most
discussions of the need for greater accountability in higher education learning assessment.
These findings suggest that there is a huge opportunity to tap existing PFF programs to
explore what approaches are being used to expose graduate students to learning assessment,
how well students learn from these programs, and how such efforts might be enhanced
to impact the institutional climate surrounding assessment and accountability. One clear
opportunity in any revitalized PFF network suggested by these results would include
the documentation of current practices, and the creation of a centralized repository of
information about such approaches. Moreover, as suggested by participants at a CGS-
hosted workshop described in the next chapter, this preparation of graduate students in PFF
programs could potentially yield models to engage and enhance professional development
activities for current faculty.
6. Respondents report that faculty receive minimal preparation for student learning
assessment, mostly through passive print materials and one-time orientation events.
While the CGS survey found that at least some graduate students in PFF programs receive
intentional exposure to student learning assessment issues and techniques, when asked how
faculty typically learn about Student Learning Outcomes, respondents reported a variety of
modes: university-wide handbooks (54%), program-specific handbooks (41%), and new
faculty orientation or workshops (49%); 38% reported that faculty learn about this kind of
assessment from other "Resources provided to faculty by the graduate school and/or college
dean," and 30% reported that an office of institutional research provided this exposure.
Several respondents cited centers of excellence in teaching and learning, where
individual faculty members who request assistance may receive it. Overall, these
findings are difficult to interpret. On the one hand, they suggest that, as faculty
development may not be under the explicit purview of the graduate school, a revitalized
PFF initiative bringing faculty, disciplinary societies, and graduate students together
under the aegis of graduate school-led campus-wide programs could answer a need
for greater faculty engagement. On the other hand, most faculty are not introduced in a
systematic way to learning assessment as faculty, and most of the existing mechanisms for
engaging faculty in this kind of assessment rely on either a passive, static format (a handbook)
or a one-time orientation event, without a chance for meaningful reflection and
formative input, follow-up, or sustained dialogue with experts.
Summary
Overall, the CGS survey identified graduate schools as playing key roles in responding
to calls for accountability and shaping accountability and assessment plans, as well as in
supporting professional development programs for graduate students, the majority of which
integrate aspects of student learning assessment. The potential of these programs to serve
broader needs for greater faculty engagement in learning assessment and enhanced national
discussion about best practices in faculty preparation is high, but, as the next section
indicates, realizing it will require significant coordinated efforts and re-envisioning to ensure
that key obstacles of institutionalization and scale-up are overcome.
Active institutional collaborations during the grant-funded phases of PFF provided some
of the richest opportunities for PFF programs to address differences in learning assessment
expectations by institutional type. While most respondents described their programs
as "Active," it is clear that many have scaled back the scope of their current
institutional collaborations since the original PFF seed grants, when external grant funding
supported travel, supervised teaching experiences, etc. To the extent that PFF participants
seek and obtain academic employment at a variety of different institutional types,
and such differences have implications for how learning assessment is conducted in the
classroom and in the institution, this may be an unaddressed need. This finding may also
suggest, however, that PFF has evolved to address these needs in ways that do not require
the sustained physical time on partnering campuses that was an integral part of the PFF
model during the grant-funded period.
Key Challenges
Universities face a number of obstacles in preparing graduate students to understand and
conduct assessments. Any concerted effort to address the needs of future faculty in regard to
the assessment of student learning must be designed with such challenges in mind.
1. Faculty Resistance
Many survey respondents highlighted an obstacle discussed earlier: faculty resistance to
assessment. About one-third indicated that most faculty do not currently see preparation
in the assessment of student learning as appropriate to a graduate degree program, and
several stated that negative attitudes about (or lack of interest in) student learning outcomes
are passed on from advisors to graduate students. It is important to keep in mind, however,
that the respondents to this survey consisted largely of graduate deans and administrators
of professional development programs. A number of respondents indicated that the
perceived faculty resistance reflects certain values or beliefs, often expressed in the view that
assessment is grounded in common sense rather than in learned skills, or that "teaching does not
require systematic training."
Yet many respondents also indicated that faculty resistance does not necessarily reflect
disagreement with the potential of assessment to improve teaching and learning. Over
one-half of survey takers who identified faculty resistance as a key challenge pointed to
broader factors that inhibit greater faculty interest in, or commitment to, assessment
efforts. These factors include:
• Insufficient time to address assessment
• Greater pressure to perform in other professional areas (e.g., research)
• A lack of knowledge or training in assessment strategies
• Unclear professional expectations regarding assessment responsibilities and few
incentives to perform well in this area
• Weak support for assessment within academic programs
• A lack of discipline-specific instruments
It is important to note that not all of these concerns may be evenly distributed across fields
(and some may vary by institution). Several respondents indicated that assessment is given
different weight within different disciplines, and one respondent stated that the humanities and
liberal arts face greater hurdles due to a perception on the part of some faculty that learning
outcomes in these fields defy measurement.
2. Student Engagement
The general nature of some responses suggests that the obstacles listed above could
apply both to current faculty and future faculty, i.e., graduate students. On the whole,
however, fewer respondents indicated that graduate students expressed strong resistance
to assessment, and a number of survey-takers characterized students as more open to this
activity than current faculty. One survey-taker opined that the tightening academic job
market would likely spark increasing interest among future faculty in assessment practices:
As the job market continues to tighten, we predict that more and more students will see the
value of developing and documenting assessment skills, as such skills give them an edge in
landing a faculty job.
How might universities respond to growing student openness and interest? Survey
responses indicated that they will need to take into consideration the developmental
trajectories of new and future faculty. It was reported that assessment may be put on the
"back-burner" as students work to balance different professional responsibilities at the
same time: "For doctoral students, a key challenge is balancing the multiple expectations
from their program, their major professors, and their desire to learn more about teaching
and an academic career," one survey taker noted. Another stated that this balancing act
may continue into a student's first faculty appointment as he or she works to master
the responsibilities of teaching and learning assessment, scholarship and research, and
service or outreach. The time issue was considered to be of particular concern for students
supported by grants, since they may not receive "release time" or encouragement from
faculty advisors to focus on teaching and assessment.
These responses highlight one of the key advantages of early and gradual exposure to learning
assessment. If new faculty are introduced to assessment only when they land their first jobs, they
may be more likely to experience this responsibility as an additional burden disconnected from
the professional identity and practices they developed in graduate school. If, however, they
have a prior understanding of the concepts and their importance and have had the opportunity
to develop and practice this skill, they may respond in a much more positive and engaged way.
3. Pedagogical Issues
One pedagogical challenge identified by survey takers was that of teaching future faculty
to think of outcomes in a broad context—not just in the framework of a particular course.
One respondent noted that assessment encompasses a much broader range of outcomes
for programs and curricula, of which individual courses are only a part. Yet another
indicated that minimal exposure leaves graduate students uncertain about how their
teaching responsibilities for individual courses contribute to the broader formation of
undergraduates within a certain major: "It is challenging for graduate students, who often
have limited TA or teaching opportunities, to gain perspective on the breadth of student
learning outcomes defined not only as short-term knowledge (from a course), but also more
broadly as long-term, accumulated knowledge, skills, attitudes, abilities and habits of mind
from a program of study."
Additional challenges involved linking assessment to student learning. A number of
responses highlighted the need to focus more on outcomes that demonstrate evidence
of learning rather than on teaching methodologies alone. Introducing graduate students
to criterion-referenced assessments, rather than the easier-to-implement, norm-referenced
assessments often used in large survey courses, was also perceived to be a challenge. Norm-
referenced assessments are used to compare the performance of large groups of students
on an examination that is not directly tied to a particular course or curriculum, while
criterion-referenced assessment is used to measure student performance in relation to a set of
explicit skills and concepts. Both forms of assessment are valid in different contexts, but it is
important for future faculty to understand that these assessments serve different purposes.
If, out of a lack of familiarity with criterion-referenced assessment, assessment itself is equated
with required norm-referenced evaluations used by institutions, then there is a danger that
young faculty will see assessment as a practice that is not relevant to their particular courses
or programs and in which they have no direct role to play.
4. Operational Challenges
The most frequently cited operational challenge to preparing future faculty to assess student
learning was a lack of centralized or equally accessible resources. Particular concerns were:
• Variations in training and standards within disciplines and programs;
• A lack of centralized university support for, or integration of, assessment; and
• A lack of resources and programs with cross-disciplinary breadth.
In many ways these responses supported other concerns about the need to ensure that all
future faculty are given systematic training in teaching and that they are provided with
a broad context for assessment, not only the context provided through a specific course
or program. One respondent added that disciplinary norms may restrict thinking about
assessment, even if, as another noted, there is a need for more discipline-specific tools for
measuring outcomes.
Other operational challenges identified in the survey were:
• Contradictory requirements between university-wide assessment efforts required by
accreditors and the assessment efforts instituted by programs
• Over-mechanization of assessment efforts
• Lack of models/examples of effective, assessment-driven reforms
• A need for more financial and human resources
Where Do We Need to Go Next? Incorporating the Assessment of Student Learning into
Professional Development Programs
As was clear in the responses to our question about challenges, graduate deans and
the Directors of PFF and similar programs are already working to confront obstacles
surrounding assessment and the teaching of assessment. To gain a better sense of how
professional development programs for graduate students might play a role in overcoming
these challenges, we included an additional open-ended question on the
survey: "In your opinion, how could professional development programs for graduate
students aspiring to faculty careers best be modified, enhanced, or utilized to incorporate
participants' training in the assessment of student learning and the development of
Student Learning Outcomes?"
1. Approaches to Integrating Assessment Training
Survey-takers provided experience-based suggestions for making this topic more central,
with varying degrees of emphasis on the need for formalized training. Many respondents
provided more specific suggestions for integration: one-half of the responses focusing on
integration recommended dedicating special sessions, workshops, courses, or events to
Student Learning Outcomes (SLOs); roughly one-third recommended requiring and/or
giving credit for courses in professional development or curricula for teaching certificates;
and one-third recommended involving faculty advisors and mentors in training initiatives.
A handful of other responses highlighted a number of other strategies for integrating
assessment training within and beyond PFF programs:
• Inviting guest speakers involved in accreditation and faculty development to present
at PFF or other professional development events
• Requiring students to incorporate SLOs in their present teaching responsibilities
• Incorporating scholarship of teaching and learning methodologies in training
programs
• Training the trainers of teaching programs in the vocabulary/value of assessment
• Linking TA training to faculty development workshops on assessment
• Developing online resources, including discipline-specific resources, available to all
graduate students
Some of the ideas offered above have already been incorporated into existing programs,
while others were presented as possible innovations.
2. Modeling Innovations
The modeling of assessment was a topic that repeatedly emerged in comments on faculty
mentoring. These comments might call to mind a situation in which a faculty advisor leads
a workshop on assessment, encourages a student to participate in that workshop, or reviews
a student's syllabus with feedback about incorporating SLOs. Survey takers shared a number
of other innovations in modeling that could potentially help students to play a more direct
role in the creation and assessment of learning outcomes. Several respondents pointed
out that professional development programs and other graduate courses are an excellent
opportunity for making the medium the message.
One respondent wrote,
"In the [professional development] programs I currently offer […] I use an activity/
competency checklist based on the objectives that the participants should achieve before
they complete the program. They may utilize alternate ways of attaining the program
objectives, but they are aware that they will be evaluated based on their achievement of
the expected competencies. The rationale for this type of competency-based learning
and evaluation is explained, and participants are expected to try this model in their own
teaching. To assist in this endeavor, when I review the syllabi that the participants prepare
to teach in their own classes, I emphasize the need for stating specific measurable objectives
right at the beginning of the course so that the students will be aware of their intended
learning outcomes. I also demonstrate to future faculty that end-of-course assessment is made
much simpler because they can develop questions based on stated objectives and students
will become aware of their own learning outcomes."
In this approach, students are given the opportunity to simultaneously experience the roles
of teacher and student in the learning assessment process, making the value of assessment to
learning more tangible and real.
Yet another set of responses recommended involving graduate students in the process of
creating SLOs for their own graduate programs. This practice might also have additional
positive effects, demonstrating that faculty have an important role to play in the creation
or implementation of SLOs (beyond plugging in standards handed down to them) and
showing that learning outcomes concepts are applicable to all levels of learning (and not just
ways of enforcing minimum standards of attainment).
3. Management Strategies
Of course, all of the recommendations above require support on the part of graduate
deans and other senior administrators working to ensure that professional development
programs are effective, relevant, and provided by faculty or staff well-trained in learning
assessment. A number of respondents emphasized that the graduate school plays a key role
in providing adequate funding, staffing, and follow-up for programs that prepare future
faculty to understand and use learning outcomes assessments.
A number of specific strategies for this support were also suggested:
For current graduate students and future faculty:
• Make teaching experience a requirement for all graduate students
• Include questions about assessment in surveys of doctoral students to gauge need in
this area
• Provide stipends for top graduate student teachers to build their expertise in this area
For current faculty:
• Include consideration of a faculty member's integration of the Scholarship of
Teaching and Learning in the tenure and promotion review process
• Create opportunities to highlight effective practices of the newest faculty members,
who can provide models and serve as "change agents"
These suggestions are not comprehensive, nor will they be practical at every institution, but
they can serve as a platform for future discussion about the ways in which graduate deans
can work to create a positive culture of assessment that supports the activities of all those
involved in the professional development of future faculty: graduate students seeking to
develop their teaching skills, faculty advisors overseeing their professional development,
and Directors of PFF and similar programs.
Summary
The results of the CGS survey described above suggest that PFF programs hold promise
as sites of significant intervention into the quality of faculty practice in the assessment of
student learning. There is potential for graduate deans and graduate schools to build on
work already underway in professional development for graduate students and academic
program review to assume greater leadership in discussions of institutional accountability.
While graduate deans' primary responsibility is the quality of graduate education, since
PFF was created, graduate schools have recognized the key role graduate students play,
as future faculty, in the quality of undergraduate education. PFF and similar programs
with graduate school oversight and/or coordination are one important means of answering
the national calls for greater accountability in the assessment of undergraduate student
learning and improved quality in higher education. The activities that these programs
support provide the kind of rich engagement of future faculty that, nationally, we should
be seeking for all faculty.
This project has helped us to understand that most PFF programs expose participating
graduate students to learning assessment, but that these programs currently reach small
numbers of students, that significant challenges impede the scale-up, broader impact, and
disciplinary relevance of these programs, and that dialogue between senior university leadership
and faculty is largely limited to internal campus discussions. Survey respondents and
workshop participants identified obstacles to progress on key issues such as incentivizing
participation of sufficient numbers of students and providing sufficient engagement in the
disciplines to make a difference in faculty attitudes toward assessment and accountability.
The close collaboration between departments or programs, disciplinary societies, and
graduate schools under the leadership of graduate deans in the latter phases of the
Preparing Future Faculty programs suggests that a revitalized national PFF initiative may
be the most effective means of answering the core needs. External evaluators of the PFF
program identified this hybrid model as the most effective structure for PFF in ensuring
"visibility, credibility, and institutionalization" and recommended that future PFF
activity build on this optimal collaboration between graduate schools and departments
or programs (Goldsmith et al., 2004).22
Because such a hybrid structure builds in input
from and collaboration with program faculty in the curricula, a renewed and expanded
PFF initiative building on this structure could result in promising new resources and a
change of attitudes toward learning assessment and accountability. The resources and
training models that result could also potentially answer the need for more professional
development opportunities for current faculty.
In the absence of a national coordinating body to convene stakeholders and participants,
PFF programs face a number of challenges: confusion about how methods for assessing
student learning may vary by course, program/discipline, and institution; a paucity of
forums to support a vibrant cross-institutional conversation about promising practices in
the assessment of student learning; and missed opportunities for institutional collaboration,
where aspiring faculty may have opportunities to understand how the institutional context
of the higher education sector in which they seek careers may reflect unique expectations
and student needs. As discussed in this report, many of the features of the CGS-sponsored
Preparing Future Faculty initiative could help institutions address such challenges. The
review of the national and institutional contexts described above, and the results of the CGS
survey of active PFF programs, suggest that a new nationally coordinated set of pilot projects
could provide a significant bridge between national calls for greater accountability and
current faculty practice in the assessment of student learning.
5. Insights, Lessons Learned, and Areas of Future
Work: A CGS Workshop on Enhancing Graduate
Student Professional Development Programs
At a November 2010 workshop, "Preparing Future Faculty to Assess Student Learning,"
participants probed the observations and issues gleaned from the survey described
in chapter four. Workshop invitees included faculty and national experts in learning
assessment, graduate deans and program directors with active Preparing Future Faculty
(PFF) and similar programs, and current and recent graduate students who had participated
in faculty preparation programs with an assessment component.23 The purpose of the
workshop was to identify opportunities and challenges for enhancing the preparation of
future faculty to assess student learning.
A discussion paper provided background, preliminary analysis of survey findings (revised
in earlier chapters of this volume), and a framework for the workshop discussion. The
workshop began with presentations on broad trends and issues in higher education
assessment from three national experts. George Kuh, co-Director of the National Institute
for Learning Outcomes Assessment, helped to frame discussion by providing an overview
of the national context for learning assessment. Dr. Kuh described a variety of data sources
and tools, but emphasized that choosing among these requires a clear understanding of what
we value. This presentation was followed by a panel addressing current research on the
most effective approaches to assessment. Marc Chun, Director of Education for the Collegiate
Learning Assessment at the Council for Aid to Education, described a variety of methods
and approaches for assessing undergraduate learning, while Ann Austin, Professor of
Higher, Adult, and Lifelong Education at Michigan State University, discussed strategies for
creating a faculty culture that values assessment.
Following the three framing talks, each workshop participant presented on topics in
which they had particular interest, experience, and expertise. (See the CGS website for the
full workshop agenda). Presentations and discussion focused on issues in four areas:
• Creating a Culture that Values Learning Assessment
• The Broad Parameters of an Enhanced PFF Program
• Potential Curricular Content for Learning Assessment in PFF
• Assessing Success in Program Integration
The goal of the workshop was not to establish consensus on particular approaches to
learning assessment where, as prior chapters discuss, there is now lively debate, but rather
to identify promising models and structures for enhancing graduate students' familiarity
with different approaches and skills in learning assessment. A large part of the discussion
focused on how to create effective programs that have the capacity, over the long
term, to transform the broader culture of graduate education and faculty preparation. This
chapter synthesizes the results of workshop presentations and the all-group discussions that
followed. The results provide a valuable framework for future enhancements and expansion
to PFF and similar programs and a compelling case for a more strategic, national approach
to meeting graduate student needs for professional development in learning assessment.
Challenges to Creating a Culture that Values Assessment of Student Learning
As discussed earlier, preparing future faculty with appropriate expertise in the assessment
of learning is an important strategy for fostering university cultures that value assessment.
At the workshop, Dr. Eduardo Ochoa, US Assistant Secretary for Postsecondary Education,
emphasized the key role of current graduate students in supporting the current and future
quality of higher education in the United States: "Bringing [assessment] into the preparation
of future faculty is critical. The new faculty coming into the institutions are the great hope
for cultural transformation."
Workshop participants explored specific obstacles and opportunities in effecting such a
transformation. Some of the cited obstacles were specific to certain types of institutions,
while others were more universal. Three dominant challenges emerged from the discussion:
(1) A complex and often confusing assessment landscape,
(2) Different disciplinary cultures, and
(3) A gap between assessment scholarship and faculty practice.
(1) A Complex Assessment Landscape
Participants highlighted a number of factors that make the assessment landscape difficult to
navigate for both graduate deans and faculty: real or perceived tensions between assessment
efforts that stress accountability and those that stress improvement; lags between collecting
and reporting data; a relative lack of attention to how such data might be used to improve
learning; and uncertainty about what the data might show.
Valid questions and concerns about assessment have led to caution within some sectors of
the academic community when certain assessment approaches have been recommended or
adopted. These concerns typically cluster around issues such as: a) how assessment data will
be used, and whether they will be used at all; b) whether an assessment activity will be used
to compare the performance of individual faculty members, departments, and/or institutions;
and c) whether assessment data, once provided, will have a real impact on the quality of
courses and programs. Deans reported that while it is important to answer such questions in a
clear and coherent way, it is often difficult to reconcile the pressures of external accountability
efforts with faculty concerns, and that they have sometimes served as translators or "honest
brokers" for two groups that speak different languages about student assessment.
There was general agreement that universities must be clear and effective in addressing
questions about how assessment data will be used if they are to stimulate genuine
faculty engagement in the assessment process. It was clear from discussion, however,
that dierences in institutional context may require dierent communication strategies.
While some participants described institutional contexts already supportive of enhanced
learning assessment and spoke of the challenges of scaling up and integrating promising
practices at their institutions, others described cultures where more discussion is needed of
fundamental questions, such as “What is quantied and quantiable?” ey also said that
broader support for assessment may depend on gathering and sharing more evidence that
evidence-driven teaching reforms have been eective.
(2) Dierent Disciplinary Cultures
One of the reasons that a faculty culture of assessment at an institution typically does not
take shape of its own accord is that many faculty view their disciplines as their primary
context for scholarship, and dierent disciplines may dene and practice learning
assessment in dierent ways. Faculty members may be more likely to see the relevance
of learning assessment to their own work when its importance is framed within the
context of the discipline, rather than the institution. And yet, faculty members oen
encounter requirements and models for learning outcomes assessment presented in
the broader, institutional context. is suggests that, while learning assessment is an
institutional responsibility, disciplinary societies may be important partners in fostering the
identication and exchange of promising practices among faculty.
An awareness of disciplinary cultures can also help university leaders shape effective messages
about the value of assessment. A dean at a university with a strong focus on STEM fields and
professional degrees commented, "My university is populated with very pragmatic data-driven
people […] in a way that makes doing evaluation and assessment more straightforward because
they're very open to using data to inform practice." While leaders at her university can effectively
appeal to this openness to outcomes data, she explained, they must also not forget that the
purposes of the data collection process must be transparent and focused on clear goals: "faculty
are not very interested in collecting data if it's seen as bean counting or [mere] accountability."
As universities move forward with new solutions to the challenges of assessment, it will
be important for graduate deans to have strategies for navigating these differences in
institutional and disciplinary cultures. As mentioned above, the collaboration between
graduate schools and disciplinary societies in the latter grant-funded phases of PFF suggests
a model upon which new directions could build, since the disciplinary societies can help to
promote the value of learning assessment in the disciplines and encourage more discussion
about promising practices. Graduate schools, meanwhile, can address the key obstacles to
effective institutionalization, such as devising appropriate incentive structures, allocating
resources, evaluating programs, encouraging diffusion of best practices across programs,
and ensuring that campus accountability efforts build upon (rather than merely compete
with) genuine learning assessment practices and principles in the disciplines.
(3) A Gap between Assessment Scholarship and Faculty Practice
The growing body of scholarship devoted to teaching and learning in higher education,
along with the growing number of tools, rubrics, and templates developed for widespread use,
presents both a challenge and an opportunity to universities. Some faculty have drawn from
this body of resources in developing their teaching practice in very effective and measurable
ways. As Dr. Austin observed at the workshop, online resources can strengthen and support an
emerging culture of assessment because faculty members can use them on their own to pursue
questions that they identify as important in their teaching and their students' learning, as well
as issues important within institutional dynamics and requirements.
Most participants agreed, however, that most faculty are not aware of the many tools now
available or under development. One dean described this problem as one of bringing the
"outside" world of assessment scholarship into the "inside" world of teaching and learning.
Assessment resources can also enhance the position of faculty as "stewards" of the discipline:
assessment can support their efforts to advance the discipline and form the next generation of
scholars.24 He emphasized that faculty will have more influence on the quality of scholarship
in their discipline if they are better able to analyze their own success as teachers.
Opportunities for Eecting Culture Change
To gain a more concrete sense of the work that could be done to stimulate more
engagement of current and future faculty in assessment, we asked workshop participants
to consider what types of incentives and rewards would encourage greater faculty
engagement in student learning assessment and to foster greater exchange of promising
practices. Suggestions and recommendations were varied, but fell roughly into the ve
following approaches:
(1) Link Assessment to Research and Scholarship;
(2) Use Data to Demonstrate the Impact of Assessment;
(3) Create Opportunities for Faculty Ownership and Leadership;
(4) Develop and Improve Incentives for Faculty and Student Involvement;
and
(5) Connect Assessment to Professional Success.
Responses focused on both current and future faculty members, and many participants
emphasized that eective change will depend on engagement of both groups.
(1) Link Assessment to Research and Scholarship
Many participants indicated that faculty may be more open to assessment and pedagogical
reflection if these processes are presented in the context of intellectual and scholarly work. This
point supports one of the recommendations made by Pat Hutchings in her recent paper on
faculty involvement in assessment: that universities must "reframe the work of assessment as
scholarship" (Hutchings, p. 15). Relevant concepts cited by participants, several of them used in
the Scholarship of Teaching and Learning, include "teaching as research"; teaching scholarship;
applied research; theory-based practice; and evidence-based practice. Many of the PFF programs
represented at the workshop already integrate these concepts into their curricula.
Demonstrating that assessment may be a form of scholarly inquiry may also help spark more
campus conversations about how faculty may already be practicing assessment in ways that
can be expanded and refined. This may be particularly important for current faculty, who
may not find assessment relevant to their established teaching practices, but also for graduate
students who lack confidence in their teaching skills. David Payne, Vice President and COO at
Educational Testing Service (ETS), observed, "[...] Graduate students really need to appreciate
that the skills they're acquiring in research and scholarship can also be directly applied to
assessment: asking the right questions, looking at a variety of approaches."25
As many participants pointed out, disinterest in or resistance to assessment may also be
rooted in traditional ways of thinking about teaching and learning. Some may hold the
view, for example, that teaching and research are distinct activities, or, as mentioned above,
that evaluation criteria serve primarily to judge students' work rather than to improve it.
Experts in student outcomes assessment indicated that change is difficult in this area. Along
with a number of graduate deans, they stressed the importance of closely examining the
language and concepts that are used to describe teaching roles and pedagogical practice
on their campuses. This examination may be most fruitful if both university leaders and
faculty are engaged in the discussions. Some proposed shifts in concepts and language, with
descriptions that emerged from the discussions, are outlined below:
What shis are needed in the way we talk about teaching?
Traditional Concepts
Concepts Based on Scholarship
of Teaching and Learning
Teaching
Teaching practice is developed independently of
learning outcomes.
Teaching & Learning”
Practice and outcomes are mutually informing.
Teaching vs. Research/Scholarship
Teaching and research are independent (and
competing) activities.
Teaching as Research/Scholarship
Research informs teaching, and teaching is an object
of research.
Assessment is Implicit and Summative
Evaluation criteria may be vague or
implicit.
Course goals are focused on content
with little attention to broader skills.
e context for evaluation is singular
(the course).
Evaluation criteria are not typically
shared or compared among colleagues.
Faculty member evaluates students.
Assessment is Explicit and Formative
Evaluation criteria are explicitly articulated
and communicated.
Course goals are related to objectives and
skills specic to the program/degree.
Multiple contexts for evaluation (course,
program, institution).
Faculty share evaluation criteria articulated
in teaching materials, print or online.
Faculty member evaluates students and
°
Student acquires tools for evaluating
his/her work.
°
Faculty member evaluates teaching
eectiveness and renes practice.
One dean noted that a focus on teaching and learning is a means of reaching the goal of
accountability. She advised that university leaders should focus on “producing the best
students you can" and that "the accountability will follow." This comment reflects a view
expressed by many: that assessment is not an end in itself but a means of improving the
overall quality of higher education.
(2) Use Data to Demonstrate the Impact of Assessment to Graduate Faculty
A key step toward connecting assessment to improvement is demonstrating this relationship
with data. Some participants expressed concern that this step is left out of university assessment
initiatives, as when data are processed by an institutional research or assessment office that does
not communicate with faculty. One dean stressed the importance of making graduate faculty
aware of the impact of assessment data: "Show [graduate faculty] that you're using the data, but
more importantly that the university is better because you used it." She added that it is difficult to
improve learning outcomes for undergraduate students if graduate faculty do not take
seriously the preparation of future faculty to conduct and use assessments of student learning.
(3) Create Opportunities for Faculty Ownership and Leadership
Earlier sections of this paper highlighted a tension in current learning assessment practices:
the core purpose of assessment is the improvement of teaching and learning, yet faculty may
view the particular assessment approaches being endorsed for adoption as disconnected
from their professional values and practices. According to many deans and experts, one
way to respond to this problem is to create new opportunities for faculty ownership of
assessment. Specific recommendations that applied to current faculty members included:
• Promoting the recognition that assessment is a faculty governance issue, i.e., through
faculty governance boards or other forums;
• Facilitating faculty access to assessment tools, which can make the process of
assessment more transparent;
• Giving faculty members opportunities to share their own effective assessment
strategies and experiences with other faculty members.
Faculty leadership in this area will depend on opportunities to apply and share knowledge and
expertise in dierent contexts and to integrate this expertise into their own professional practice,
as mentors, advisors and teachers. Many of the PFF and other professional development
programs represented at the workshop have developed new strategies in this area. [See the
Teagle Foundation website for more information about current projects that focus attention on
graduate student leadership in assessment, in particular, Columbia University’s “Teagle Teaching
Scholars Program: Transforming the Way that Doctoral Students are Trained to Teach,"
Stanford University’s “Graduate Student Teaching in the Foreign Literatures,” and Northwestern
University’s “Northwestern Initiative for Teaching and Learning by Graduate Students.”]
(4) Develop and Improve Incentives for Faculty and Student Involvement
Many incentives for faculty involvement in assessment are intangible, such as observing
greater understanding of course objectives in one's students and a stronger motivation
to achieve them, or being able to pinpoint less effective teaching methods and make
adjustments accordingly. Unfortunately, these incentives are not effective drivers in
circumstances where faculty lack the time, resources, training, or experience to take on new
assessment strategies. Many participants indicated that universities should explore ways to
reward faculty for their commitments to strong assessment practices with more concrete
incentives. ree potential structures were mentioned:
• Competitions and grants that reward either students or faculty for integration of
assessment into courses;
• Structures of professional advancement that reward faculty participation in
assessment, i.e., promotion and tenure (also cited in the survey); and
• For graduate students, transcript notations or certificate programs that validate
assessment experience and expertise.
For more information about current programs that are developing and testing certificates
that recognize graduate student preparation in assessment, see the Teagle-funded projects
described on the Teagle Foundation website: "Cornell University Graduate Teaching
Certificate Initiative" and "Graduate Student Teaching Certificate at UC Berkeley:
Developing a Workshop and Course Module on How Students Learn."
(5) Connect Assessment Skills to Professional Success
Many workshop participants lent further weight to an observation made by respondents
to the survey discussed in the previous chapter: that there is increasing evidence to suggest
that the academic job market already values assessment expertise. Both graduate deans and
students reported that participation in PFF programs, or completion of a Graduate Teaching
Certificate, already provides students with an "edge" in the job market. As assessment
continues to be recognized as a growing area of teaching practice and responsibility, the
market value of assessment experience may grow, providing incentives to both future faculty
and to their advisors to seek professional development in this area.
The potential for culture change in this area may be two-directional, as much the result of
future faculty demonstrating their ability to participate in university accountability efforts
as of increasing demands for assessment expertise on the part of faculty search committees.
One dean gave an example of a campus activity that has given graduate students greater
influence in this area: a workshop that encourages students to demonstrate their assessment
skills in their teaching portfolios and job interviews.
Finally, many graduate deans indicated that it would be useful to improve tracking of job
placement, experiences and career outcomes of PhD students, especially those aspiring to a
faculty career. e following types of data and their potential uses were cited:
• Placement records of students who have completed PFF and certificate programs to
further support anecdotal evidence on their role in helping students get faculty positions.
• Experiences of students who receive academic placements: are they prepared for
assessment responsibilities?
• Career trajectories (academic and non-academic) to better understand the needs of
graduate students served by professional development programs.
The Broad Parameters of an Enhanced PFF Program
By bringing together views from the graduate school and the classroom, the workshop
made it possible to think in broad terms about the structures that might best support the
preparation of faculty to assess student learning. We asked participants to reflect on the
programmatic features of current PFF programs, how these programs might help enhance
the skills of students in learning outcomes assessment, and how they could be reshaped
to have the greatest possible impact on both future faculty and the broader university
community. Here, three priority areas were proposed:
(1) A Balanced Program Structure,
(2) Mentors for Teaching and Learning, and
(3) Potential Areas of Variation and Innovation.
(1) A Balanced Program Structure
Various structures for providing assessment skills to graduate students were discussed over
the course of the workshop: mandatory TA-training programs; PFF or similar programs
that focus only on graduate students aspiring to faculty careers; teaching certificates;
department-based courses and activities; educational opportunities provided by a center for
teaching and learning; or some combination of the above. Graduate schools play an important
role in many PFF and similar professional development programs for graduate students.
Participants discussed how housing a program in the graduate school can help universities
address the problem of scale. Graduate school programs make it easier to reach graduate
students from a range of programs, integrate expertise from across the campus, and enhance
visibility and access. As the survey results discussed in the previous chapter indicated, both
centralized and hybrid PFF programs are typically housed in graduate schools. Some noted,
however, that centralization poses risks if it does not also leverage the needed support and
engagement of departments and program faculty. And as one participant commented, a
structure entirely supported by the graduate school or other non-departmental body might
not encourage graduate students to become agents of change within their departments. In
principle, a hybrid approach that combines departmental and university-wide elements, even
where programs are administratively centralized, received strong support.
The idea of making a center for teaching and learning a potential partner in program
delivery also received focused attention. Some participants indicated that a center for
teaching and learning had been a critical partner by providing expertise in assessment and
revamping more traditional offerings in TA training. For others, centers that focused on
current faculty made it difficult to extend eligibility for training to graduate students without
redesign. Universities may wish to consider a range of possible options for providing cross-
disciplinary support to programs.
These basic questions about structure and organization merit careful consideration,
given the diversity of university types and graduate school structures.
(2) Mentors for Teaching and Learning
A need for graduate faculty to model assessment strategies to their students was echoed
by all participants in the workshop, as was a concern that this modeling was not taking
place. e discussion shed light on the fact that several aspects of the PFF model have the
potential to ll this gap.
First, PFF and some similar programs typically
provide students with teaching mentors who are
not their research advisors. One student at the
workshop indicated that this had made a significant
difference in her preparation to teach and conduct
assessments: "I've found that it's been really helpful
to have a research advisor and then also a mentor-
teacher who is someone who cares very much
about teaching […] my teaching mentor was the
one who encouraged me to go to a lot of [events]
and forwarded emails on to me.” A graduate dean echoed this point, saying that her
graduate school had found it productive to pair students with faculty who are not
supervising their research.
Second, institutional collaboration, also a core feature of PFF models, can help provide
students with outside mentors, often at a college where teaching is a high priority.
These collaborations give students "a high level of responsibility in a mentored context,"
said one dean, and allow students to learn about assessment from faculty with extensive
expertise in this area.
At least at research institutions, mentoring students in assessment is rare since research
supervisors typically focus on their students’ research training. Where students do have
access to mentors who focus on other responsibilities, such as teaching, this access is
typically only made possible by programs such as PFF that currently reach only small
numbers of future faculty, and which may not include explicit information about
assessment approaches.
(3) Potential Areas of Variation and Innovation
Both graduate deans and graduate students indicated that faculty preparation programs
had made a signicant and positive impact on students’ development of assessment
skills. ese programs varied widely in the degree to which they emphasized student
learning assessment, and the approaches used to prepare future faculty in this area varied.
Characteristics of future faculty development programs cited as effective by workshop
participants included:
• Collection of information on students' desired and actual professional development needs
• Development of learning outcomes for graduate students
• Engagement of groups with similar needs (faculty, adjunct staff, postdocs)
• Strategic recruiting of students and faculty
• Program activities matched to a student's career stage
• Teaching mentor(s) in addition to research supervisors
• Leadership development opportunities for students and faculty
• Professional development workshops for faculty mentors
• Incentives and rewards for faculty mentors
• Utilization of campus expertise/resources (centers for teaching and learning,
libraries, experts, etc.)
• Partnerships with institutions committed to teaching and learning assessment
• Engagement of assessment experts from within or outside the university
• Inclusion of exposure to and practice with assessment technologies
• Means of recognizing student participation/completion (e.g., a certificate)
• Assessment of overall program success
In the specic area of curriculum development, eective characteristics cited include:
• Integration within existing structures/resources
• Progressively sequenced curricular content
• Options for students with dierent career goals
• Appropriate resources for students and faculty
• Professional development activities related to assessment
The contributions of participants to the discussion about the structural features of effective
programs are not meant to be exhaustive or prescriptive, but are presented here to help
focus future discussions of core strategies for developing and enhancing programs. Several
deans and assessment experts indicated that it would be useful to examine, more rigorously,
the efficacy of the activities and innovations that institutions are using in the area of
assessment, both to develop a better sense of the core features of successful programs and to
consider appropriate variations on these features.
Some of the features listed above might be considered core to any strong professional
development program involving graduate students and faculty, including the PFF model.
Yet it is also notable that a number of suggestions extend principles of outcomes-based
learning to the graduate context: establishing learning outcomes for graduate students
who participate in the program and developing and applying methods for assessing their
learning. One of the participants who uses this method in her program observed that
one of the best ways to teach the value of assessment to graduate students is to model it
in a professional development program. A number of other participants supported the
development of graduate learning outcomes that would include objectives for teaching and
assessment. While this issue did not fall within the scope of the current project or workshop,
CGS is currently exploring the issue and assessing next steps.
Possible Curricular Content on Learning Assessment in PFF
Participants were also invited to share ideas about the potential curricular content for
learning assessment in PFF programs. The following general questions framed discussion
of this topic: “What are the key skills and areas of content knowledge relevant to student
learning assessment and accountability that graduate students should acquire before
assuming faculty responsibilities?”; and, “What are some of the key approaches, instruments
or subject expertise that graduate students should have when they assume faculty
responsibilities for assessing student learning?”
Responses to these questions emerged in different conversations that took place over the
course of the workshop. Comments focused on two areas: knowledge and skills needed for
higher education in general and those needed for assessment in the disciplines.
Assessment Expertise in Higher Education
Some comments addressed general knowledge and skills needed for undergraduate
teaching across the disciplines. Many graduate students receive training or experience in
teaching without a clear understanding of American higher education or the diversity of US
institutions, some noted. Several stated that students would benefit from more contextual
knowledge about the history of higher education, different institutional types, and the pressures
and stakes surrounding assessment debates. Some social context for learning was also
recommended, in particular, an awareness of the achievement gap and differences in learning
styles, since these differences among students are often related to common learning obstacles.
Providing students with opportunities to acquire knowledge of the Scholarship of Teaching
and Learning, including some grasp of theories about how students learn, such as “higher-
order” learning, knowledge transfer, and metacognitive skills, was also widely recommended
as a key objective for curricular content. Several assessment experts said that it would be
particularly helpful for future faculty to develop an awareness of skills that undergraduates
may apply across and between disciplines, such as ethical reasoning, integrative learning, and
post-formal reasoning, i.e. creativity and innovation. Knowledge of assessment technologies
and tools, such as rubrics and E-portfolios, was also considered important.
Participants also singled out specific assessment skills in the context of teaching (a
hypothetical illustration follows the list below). These included:
• Applying the Scholarship of Teaching and Learning (SoTL) in context;
• Articulating learning outcomes at the course level;
• Applying and using assessment tools and technologies in the context of a course;
• Developing “pedagogical content knowledge,” or matching teaching strategies to
subject matter; and
• Integrating assessment into curricula.
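To make the course-level skills listed above concrete, the sketch below shows one hypothetical way a new instructor might record a simple analytic rubric and summarize how a class performed against each stated outcome. It is an illustration added for this discussion, not a tool described in this report or used by the workshop participants; the outcome names, performance levels, and the outcome_summary helper are invented for the example.

# Hypothetical illustration only: a minimal course-level rubric expressed as
# plain data, plus a helper that averages scores so an instructor can see how
# the class performed against each stated learning outcome.
from statistics import mean

# Each learning outcome is paired with short descriptors for three performance levels.
RUBRIC = {
    "constructs a logically valid argument": {
        1: "claims are unsupported or contradictory",
        2: "argument is mostly sound but has gaps",
        3: "argument is complete, valid, and clearly presented",
    },
    "uses evidence appropriately": {
        1: "little or no relevant evidence",
        2: "relevant evidence, loosely connected to claims",
        3: "well-chosen evidence explicitly tied to each claim",
    },
}

def outcome_summary(scores_by_student):
    """Average each outcome across students, e.g. to spot which course goal
    needs more instructional attention before the next offering."""
    summary = {}
    for outcome in RUBRIC:
        summary[outcome] = mean(s[outcome] for s in scores_by_student)
    return summary

if __name__ == "__main__":
    # Invented scores for two students, keyed by the rubric's outcomes.
    scores = [
        {"constructs a logically valid argument": 3, "uses evidence appropriately": 2},
        {"constructs a logically valid argument": 2, "uses evidence appropriately": 2},
    ]
    for outcome, avg in outcome_summary(scores).items():
        print(f"{outcome}: class average {avg:.1f} / 3")

Even a summary this simple can point an instructor toward the course goal that needs the most attention, which is the kind of course-embedded, formative use of assessment that participants emphasized.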
Overall, there was an emphasis on skills that can be applied in the context of a course.
While some acknowledged the value of understanding and applying assessment skills at
departmental levels, a number of participants indicated that PFF and similar programs would
be most eective if they focused on assessment within courses, the pedagogical setting in
which new faculty will be most invested and for which they will have primary responsibility.
One dean commented that programs should not aim to turn students into assessment
specialists, but rather give them the skills that will allow them to enhance their teaching.
A nal area of skills development included professional values, attitudes and habits. In
general, these traits might be described as comprising a professional and scholarly ethos
that both reects and reinforces a strong investment in the outcomes of ones teaching on
students. ey included:
• A commitment to teaching as research
• An appreciation of diversity
• A concern for ethics and integrity
• Interpersonal skills
• A “sense of ongoing learning”
• Condence in ones ability to make and achieve teaching goals (“self-ecacy”)
Many of the professional attitudes listed above might be considered desirable qualities
in any undergraduate faculty member, or indeed, in any educator. They also overlap
with qualities of ideal faculty members listed in statements by institutions and groups of
institutions seeking to recruit new faculty (Gaff and Pruitt-Logan, 2000, p. 45). However, as
some noted, they also reinforce assessment in specific ways: an appreciation of social diversity
is an important foundation for understanding that students learn in different ways and
may not respond in identical ways to the same pedagogies or tests of knowledge and skills.
The strong focus in PFF and similar programs on developing a teaching identity rooted
in respect for students and the profession of undergraduate education has the potential to
support specific knowledge and skills related to assessment.
Assessment Expertise in the Disciplines
Due to the broad scope of the workshop, the discussion of the skills and knowledge that
might be built into PFF and similar programs was primarily focused on undergraduate
teaching and learning. Some attention was given, however, to the types of skills and
knowledge that could be honed in the context of a future faculty member’s discipline. A
general recommendation was that graduate students come to understand the relevance of the
scholarship of teaching and learning to their discipline. The value of this point was borne
out by a comment from a graduate student participant in the workshop, who explained that
applying general assessment strategies to her field of mathematics raised many challenges
and questions. While she had been exposed to the use of rubrics as part
of professional development and training, she found that some metrics of student skill (in
the example she provided, the creation of "elegant" proofs to mathematical problems) were
difficult to measure using rubrics. She added that her teaching mentor had helped her to
think through such questions as they had worked together to evaluate undergraduates.
Other participants in the workshop also pointed out possible areas where PFF and similar
programs might support strong assessment in the disciplines. In the area of discipline-
specific skills, some participants cited the creation of learning outcomes in relation to
departmental goals, as well as the ability to assess capstone projects. Addressing discipline-
specific professional attitudes, Marc Chun noted that effective future faculty must also be
able to relate to undergraduates who have relatively little natural interest in their discipline.
He added that many faculty members were drawn to their disciplines because they had
excelled in them as younger students, and they may find it difficult to empathize with and spark
interest in students who are not automatically oriented to their field.
These discussions raise an important question for further consideration: how much
context for higher education assessment, at both the university and national levels, will
graduate students need in order to effectively assess student learning in their classrooms?
In answering this question, it will be important for universities to consider issues of quality,
purpose, and scale.
Measuring Success in Program Integration
Perhaps one of the strongest signs that graduate students have integrated assessment into
their teaching practices would be measurable improvements over time in undergraduate
learning. Faculty, graduate deans, and assessment experts will need to work together to
explore the optimal models for integrating assessment into PFF and similar programs. All
stand to benet from more specic metrics focusing on the individuals directly involved in
these programs, graduate students and their mentors.
A recommendation from many workshop participants was that universities adopt better
mechanisms to assess the learning and leadership of graduate students who have participated
in PFF and similar programs. Such assessment is needed both to measure the impact of
practices and investments in such programs over time and, potentially, to measure
the comparative effects of different kinds of programs. In many ways, all of the recent PhD
students who participated in the workshop modeled the skills and attitudes that graduate
deans and experts saw as learning objectives for future programs: a facility with terms and
concepts of the scholarship of teaching and learning, a strong sense of responsibility to
measure and improve the learning of their students, experience with electronic platforms for
assessment, an ability to describe the processes and challenges of their own development as
teachers, and, perhaps most importantly, a conviction that teaching is a mode of scholarly
inquiry and research. Better data on the impact of PFF and similar programs could also serve
to make these attainments more visible to the graduate community.
Developing Programs with a Broader Impact
Parallel to the discussion of programmatic features of PFF, workshop participants also
explored challenges, ideas, and issues surrounding the scale and potential impact of
programs. Many deans at the workshop emphasized that faculty preparation programs on
their campuses involve only a small number of students, often by self-selection. A number
of deans at the workshop indicated that truly successful programs could promote changes
well beyond PFF activities and the institutions where they take place. One dean commented
that while individual university efforts are important, improvements in individual programs
depend on an integrated effort that builds on successes. In moving forward, many stressed, it
will be essential to have a better understanding of the current networks in which PFF and similar
programs are situated and to generate new ideas for potential networks. Most of the observations
focused on the U.S. context, although there was also a discussion of the international context in
which outcomes assessment has become an issue of growing importance.
The Scale of U.S. Programs
The most prominent topic to emerge was the issue of scalability. The first issue introduced was
the scale of programs within institutions, or the current small size of PFF cohorts. Several deans
participating in the workshop indicated that they lacked the financial and personnel resources to
create programs that reached a large number of students. Assistant Secretary Ochoa underscored
this problem, observing that the most striking aspect of participants’ presentations was that
strong programs were only reaching a “small fraction” of graduate students.
A second and related issue concerned the national scalability of efforts to integrate
assessment into PFF and other programs. On the one hand, there was positive evidence
that university leaders are actively looking to capitalize on networks and institutional
collaborations such as the University of Wisconsin-based Center for the Integration of
Research, Teaching and Learning (CIRTL) Network focused on STEM fields, PFF and
similar programs, and more informal structures of communication. A number of deans
pointed out that they had adopted certain features of their programs from the websites of
other universities. At present, this is happening on an ad hoc basis. But many deans urged
that a better network of collaboration must be built so that individual programs are not
wholly or mostly reliant on their own institutional expertise and resources.
Potential Global Impact
Participants briey discussed the preparation of international students who may return to
their countries of origin to teach, and of domestic students who may take faculty positions
outside the U.S. e question of whether future faculty teaching outside the U.S. would
require radically dierent preparation for teaching responsibilities was also raised: some
participants suggested that students who go abroad may not enter a context where teaching
and learning is valued, while others indicated that awareness of student learning outcomes
is essential in any national context. It was also noted that there are broader movements
underway that are making evidence-based learning a common currency in global higher
education. Donna Heiland, Vice-President of the Teagle Foundation, pointed out that
the U.S. has helped to shape a global conversation about critical thinking skills. CGS
President Debra Stewart added that the CGS Global Summit on Graduate Education had
demonstrated broad international interest in student learning outcomes, at the graduate as
well as the undergraduate level.[26] While it may be outside the scope of the PFF program to
focus on the preparation of students for teaching outside the U.S., it will be useful to think
about the potential global impact of this and other U.S. programs that prepare future faculty.
6. Conclusion and Next Steps
National eorts to enhance US higher education through a focus on student learning are
well underway, but for these eorts to succeed, serious obstacles must be overcome. e
opening chapters of this publication discussed several such obstacles, the foremost being:
the perception in the academic community of a divide between assessment for improved
learning and assessment for accountability, and the related perception that “student
learning outcomes” are primarily about accountability to external stakeholders rather than
improving student learning.
The tensions that surround the assessment of student learning in US higher education
have been well documented, but a practical national strategy is still needed to ensure that
reforms serve both the purposes of enhanced learning and institutional accountability. Such
a strategy would need to be of sufficient national scale and scope to have the potential to
transform faculty culture. To reflect field differences, it should include significant input
from scholars and, like earlier phases of PFF, benefit from the contributions and support
of disciplinary societies across a wide range of disciplines. At the same time, however, it
should embrace lessons learned by assessment experts and galvanize significant institutional
leadership from senior administrators to reinforce the notion that the assessment of student
learning is a valued faculty responsibility.
We began this project with the hypothesis that such a national strategy would need to
leverage existing professional development programs for preparing graduate students for
their roles and responsibilities as faculty. The project activities described in this report
sought to answer the following questions: Is a nationally coordinated program to integrate
student learning assessment into faculty preparation programs for graduate students viable
and needed? and, If so, what key features should such a program address and what challenges
should it anticipate? We conclude by briefly summarizing what we have learned.
First, we learned that such an approach is viable, as many universities with PFF and similar
programs have already begun to integrate the assessment of student learning into them, and
that these developments have brought great benefits. Graduate students, faculty and deans
report significant benefits of such exposure in terms of students' confidence as teachers
and their success in obtaining faculty positions. Professional development programs for
graduate students that effectively integrate skills in the assessment of student learning also
have the potential to benefit many other individuals and groups beyond the students who
report a competitive edge in the academic job market: a student's future colleagues, their
future employer institution and its students, and even the larger culture of higher education.
Future efforts to build on these successes and coordinate the integration of learning
assessment into graduate student professional development programs should define,
document, and realize tangible benefits to US higher education.
We learned that key obstacles and challenges must be overcome, however, if such programs
are to have any broader impact on faculty culture. Many deans reported that their PFF
or similar programs currently reach only a small percentage of the host institution's
graduate students who aspire to faculty careers. While many of the best programs reach
small numbers of students, other programs have scaled back their efforts and reach since
receiving the seed grants to develop them over a decade ago, and many US universities still lack
professional development programs for graduate students aspiring to faculty careers. Even active
programs are oen constrained by the expertise and resources of their particular institutions.
In light of this new information, we conclude that priority areas for future work should
include exposing more individual students to learning assessment strategies in PFF and
similar programs and creating broader networks for the exchange of promising practices
and lessons learned. A parallel and complementary goal would be to develop a framework
for facilitating the exchange of information within and across institutions about how to use
learning assessment to measure the effectiveness and success of such programs.[27] One of
the key uncharted contributions of the current integration of learning assessment into PFF
programs could be to provide a model for evaluating the effectiveness of these programs in a
way that could potentially encourage greater participation by students, greater endorsement
by faculty, and greater adoption by US universities.
The scholarship of teaching and learning was a core feature of several grant-funded PFF
programs; "pedagogy in the discipline" was an important common component of many
departmental and hybrid programs; and collaboration with strong centers for teaching
and learning was, and continues to be, a common characteristic of PFF. The assessment
of student learning, however, was neither a required nor a common feature of PFF, nor
was it a criterion for evaluating the success of PFF programs. The results of this project
suggest that while most graduate student participants in PFF programs are now exposed to
learning assessment principles and practices for individual courses, broader uses of learning
assessment receive less attention, and there is little opportunity for best practice networking
across institutions. As a result, some of the key challenges (faculty support, broad student
participation, and perceived relevance) remain, and therefore prevent these programs from
achieving their full potential, either on their own campuses or in the broader faculty culture.
Achieving this potential will require new models of collaboration to identify and document
best practices and to encourage the broad integration of these practices into all graduate
programs that seek to prepare students for faculty careers.
Bibliography and Web Resources
Adelman, C. (2008). The Bologna club: What U.S. higher education can learn from a
decade of European reconstruction. Washington, D.C.: Institute for Higher
Education Policy.
Adelman, C. (2010). The white noise of accountability. Inside Higher
Ed, June 24. Retrieved from http://www.insidehighered.com/
views/2010/06/24/adelman
Alexander, L. (US Senator). (2007) Statement of Lamar Alexander, May 24,
2007: Accountability in Higher Education.
Allen, S., & Knight, J. (2009). A method for collaboratively developing and validating a rubric.
International Journal for Scholarship of Teaching and Learning, 3(2), 1-17.
Allen, M. J. (2006), Assessing general education programs. Bolton, MA: Anker Publishing
Company.
Angelo, T.A. (1993). A "teacher's dozen": Fourteen general, research-based
principles for improving higher learning in our classrooms. AAHE
Bulletin, 45(8), 3-7, 13.
Anderson, H.M., Anaya, G., Bird, E., and Moore, D.L. (2005). A review of educational
assessment. American Journal of Pharmaceutical Education, 69 (1): article 12.
Association of American Colleges and Universities. (2007). College learning for the new
global century: A report from the National Leadership Council for Liberal Education
and America's Promise. Washington, DC: Association of American Colleges and
Universities. Retrieved from http://www.aacu.org/leap/documents/GlobalCentury_final.pdf
Association of American Colleges and Universities. VALUE: Valid Assessment of Learning
in Undergraduate Education. AAC&U website, http://www.aacu.org/value/.
Accessed 11-05-10.
Association of American Colleges and Universities. (2002). Greater expectations: A new
vision for learning as a nation goes to college. Washington, DC: AAC&U.
Association of American Colleges and Universities. (2004). Our students’ best work: A
framework for accountability worthy of our mission. Washington, DC: Association of
American Colleges and Universities.
Association of American Colleges and Universities. (2008). Our students' best work: A
framework for accountability worthy of our mission, 2nd ed. Washington, DC:
Association of American Colleges and Universities.
Association of Governing Boards. (2010). How boards oversee educational quality: A report
on a survey on boards and the assessment of student learning. Washington, DC:
Association of Governing Boards of Universities and Colleges.
Austin, A.E. (2002). Preparing the next generation of faculty. Journal of Higher Education,
73(1), 94-122.
Austin, A.E. and McDaniels, M. (2006). Doctoral education and Boyer's four domains.
In R.K. Toutkoushian (Series Ed.) & J. M. Braxton (Ed.), New directions for
institutional research: Analyzing faculty work and rewards: Using Boyer's four domains
of scholarship. No. 129, 51-65. San Francisco, CA: Jossey-Bass.
Banta, T.W., Lund, J.P., Black, K.E., and Oblander, F.W. (1996). Assessment in
practice: putting principles to work on college campuses. San Francisco:
Jossey-Bass.
Banta, T.W., and Associates. (2002). Building a scholarship of assessment. San
Francisco: Jossey-Bass.
Banta, T. (2007). Can assessment for accountability complement assessment for
improvement? Peer Review, 9(2), 9-12.
Barr, R. B., & Tagg, J. (1995). From teaching to learning: A new paradigm for
undergraduate education. Change, 27(6), 13-25.
Benjamin, R. and Chun, M. (2003). A new field of dreams: The collegiate
learning assessment project. Peer Review, Spring 2003, 26-29.
Bers, T.H. (2008). The role of institutional assessment in assessing student
learning outcomes. New directions for higher education, 141, 31-39.
Bess, J. (1978). Anticipatory socialization of graduate students. Research in Higher
Education, 8, 289-317.
Boyer, E.L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ:
Carnegie Foundation for the Advancement of Teaching.
Business-Higher Education Forum. (2004). Public accountability for student
learning in higher education: Issues and options. Washington, DC.
Retrieved from http://www.bhef.com/includes/pdf/2004_public_accountability.pdf
Chism, N.V.N., and Warner, S.B. (Eds). (1987). Institutional responsibilities and
responses in employment and education of teaching assistants: Readings from a
national conference. Columbus: The Ohio State University, Center for Teaching
Excellence.
Chun, M. (2002). Looking where the light is better: A review of the literature
on assessing higher education quality. Peer Review 4(2/3), 16-25.
Council for Higher Education Accreditation. (2003). Statement of mutual
responsibilities for student learning outcomes: accreditation, institutions, and
programs. Retrieved from http://www.chea.org/pdf/
StmntStudentLearningOutcomes9-03.pdf.
Council for Higher Education Accreditation. (2004). Balancing competing goods:
Accreditation and information to the public about quality. Washington, D.C.:
Council for Higher Education Accreditation.
Council for Higher Education Accreditation. (2009). Inside Accreditation, 5:4.
Cuevas, N.M.; Matveev, A.G.; Miller, K.O. (2010). Mapping general education outcomes in
the major: Intentionality and transparency. Peer Review, 12(1), 10-15.
European University Association (2005). Doctoral programmes for the European
knowledge society: Report on the EUA Doctoral Programmes Project. Brussels,
Belgium: EUA.
Ewell, P.T. (2006). Making the grade: how boards can ensure academic
quality. Washington, D.C.: Association of Governing Boards of Universities
and Colleges.
Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of
departure. Washington, DC: Council for Higher Education Accreditation.
Retrieved from http://www.chea.org/award/StudentLearningOutcomes2001.pdf
Ewell, P.T. (2007). Chapter 2, Assessment Supplement, In Assessing and accounting
for student learning: Beyond the Spellings Commission. Borden, M. H., Pike,
G.R., (Eds). San Francisco: Jossey-Bass.
Ewell, P.T. (2008). No correlation: musings on some myths about quality.
Change, November/December. Retrieved from http://www.changemag.org/Archives/
Back%20Issues/November-December%202008/full-no-correlation.html
Fritschler, L. (2010). Setting quality standards for higher education. Inside
Higher Ed, September 9. Retrieved from http://www.insidehighered.com/
views/2010/09/09/fritschler
Ga, J. G. (2005). Preparing future faculty and multiple forms of scholarship. In O’Meara,
K. & Rice, R.E. (Eds), Faculty priorities reconsidered: rewarding multiple forms of
scholarship. Jossey-Bass: San Francisco, CA.
Ga, J.G. and Pruitt-Logan, A. (2000). Building the faculty we need: colleges and universities
working together. Washington, DC: Council of Graduate Schools and Association of
American Colleges and Universities.
Ga, J.G., Pruitt-Logan, A., Sims, L., and Denecke, D. (2003). Preparing Future Faculty in
the Humanities and Social Sciences: A guide for change. Washington, DC: Council of
Graduate Schools and Association of American Colleges and Universities.
Glenn, D. (2010). Assessment Projects from Hell. Chronicle of Higher Education,
September 7. Retrieved from http://chronicle.com/blogs/measuring/assessment-
projects-from-hell/26733
Golde, C. M., and Dore, T. M. (2001). At cross purposes: What the experiences of
doctoral students reveal about doctoral education. Philadelphia: A Report for the
Pew Charitable Trusts. Retrieved from http://www.phd-survey.org/report%20final.pdf
Golde, C.M. (1997, November). Gaps in the training of future faculty: Doctoral student
perceptions. Paper presented at the annual meeting of the Association for the Study
of Higher Education.
Golde, C. M., & Walker, G. E. (Eds.) (2006). Envisioning the future of doctoral
education: Preparing stewards of the discipline (Carnegie essays on the doctorate).
San Francisco: Jossey-Bass.
Goldsmith, S., Haviland, D., Daily, K., and Wiley, A. (2004). Preparing Future
Faculty Initiative: Final Evaluation Report. Retrieved from http://www.aacu.org/p/
pdfs/PFF_Final_Report.pdf
Grindley, C. J., Bernal-Carlo, A., Brennan, S., Frenz-Belkin, P., Gampert, R., Li, I.,
Mangino, C., Zoe, L. (2010). Pulling it all together: Connecting liberal arts outcomes
with departmental goals through general education. Peer Review, 12(1), 27-9.
Higher Learning Commission. (2010). Commission Institute on the Assessment
of Student Learning. North Central Association of Colleges and
Schools. Retrieved from http://www.ncahlc.org/information-for-institutions/
resources-for-institutions.html
Huba, M.E., and Freed, J.E. (2000). Learner-centered assessment on college campuses: shifting
the focus from teaching to learning. Needham Heights, MA: Allyn & Bacon.
Hutchings, P. (2009). The new guys in assessment town. Change, 41(3), 26-33.
Retrieved from
http://www.changemag.org/May-June%202009/full-assessment- town.html
Hutchings, P. (2010, April). Opening doors to faculty involvement in assessment.
Urbana, IL: University of Illinois and Indiana University, National Institute for
Learning Outcomes Assessment.
Johnston, S.W. and Long, K. (2010). How boards oversee educational quality: a
report on a survey of boards and the assessment of student learning.
Washington, DC: Association of Governing Boards.
Keller, C. and Hammang, J. (2007). Chapter 3, Assessment Supplement. In
Assessing and accounting for student learning: Beyond the Spellings Commission.
Borden, M. H., Pike, G. R., (Eds). San Francisco: Jossey-Bass.
Kuh, G.D., & Ewell, P.T. (2010). The state of learning outcomes assessment in the United
States. Higher Education Management & Policy, 22(1), 9-28.
Kuh, G. and Ikenberry, S. (2009). More than you think, less than we need:
Learning outcomes assessment in American higher education. Urbana, IL:
University of Illinois and Indiana University, National Institute for Learning
Outcomes Assessment.
Kuh, G.D. (2010). Rehabbing the rankings: Fool's errand or the Lord's work?
Presentation for the AACRAO 20th Annual Strategic Enrollment Management
Conference (SEM XX), November 7, 2010. http://www.aacrao.org/sem20/executive.htm
Kuh, G.D. (2008). High-impact educational practices: What they are, who has
access to them, and why they matter. Washington, DC: Association of American
Colleges and Universities.
Kuh, G.D. (2009). NILOA: Tracking the status of outcomes assessment in the
U.S. Presentation at New England Associations of Schools and Colleges, Dec.
3, 2009. Retrieved from http://learningoutcomesassessment.org/documents/
NEASCNILOAplenary2009.pdf.
Lederman, D. (2008). Spreading the Gospel on student learning. Inside Higher
Ed, October 13.
Maki, P. and Borkowski, N. (2006). The assessment of doctoral education. Sterling, VA: Stylus.
Lewis, K.G. (Ed.). (1993). The TA experience: Preparing for multiple roles.
Stillwater, OK: New Forums Press.
Masterson, K. (2010). Many College Boards Are at Sea in Assessing Student
Learning. e Chronicle of Higher Education. Sep. 9.
Messick, S. J. (1999). (Ed.). Assessment in higher education. Mahwah, NJ: Lawrence Erlbaum
Associates.
National Commission on Excellence in Education. (1983). A nation at risk. Washington,
DC: U.S. Government Printing Office.
National Governors Association. (1986). Time for results: the governors’ 1991 report on
education. Washington, DC: National Governors Association.
National Institute of Education. (1984). Involvement in learning: Realizing the potential of
American higher education. Washington, DC: U.S. Government Printing Office.
Nowlis, V., Clark, K. E., and Rock, M. (1968). The graduate student as teacher
(American Council on Education Monograph). Washington, DC: American
Council on Education.
Nyquist, J. D., Abbott, R. D., Wulff, D. H., and Sprague, J. (Eds.) (1991). Preparing
the professoriate of tomorrow to teach: Selected reading in TA training. Dubuque, IA:
Kendall/Hunt.
Nyquist, J.D., Manning, L., Wulff, D.H., Austin, A.E., Sprague, J., Fraser, P.K., Calcagno, C.,
& Woodford, B. (1999). On the road to becoming a professor: The graduate student
experience. Change, 31(3), 18-27.
Nyquist, J.D., & Sprague, J. (1992). Developmental stages of TAs. In J.D. Nyquist & D.H.
Wul (Eds.), Preparing teaching assistants for instructional roles: Supervising TAs in
communication (pp. 101-113). Annadale, VA: Speech Communication Association.
Nyquist, J., & Woodford, B. (2000). Re-envisioning the PhD: Seven propositions from
the national conference. Seattle, WA: Center for Instructional Development and
Research, University of Washington. Retrieved from http://www.grad.washington.
edu/envision/project_resources/metathemes.html
Palomba, C.A., & Banta, T.W. (1999). Assessment essentials: Planning, implementing,
and improving assessment in higher education. San Francisco, CA: Jossey-Bass
Publishers.
Palomba, C.A., and Banta, T.W. (Eds). (2001). Assessing student competence in
accredited disciplines: pioneering approaches to assessment in higher education.
Sterling, VA: Stylus.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students
know: e science and design of educational assessment. Washington DC:
National Academy Press.
Peterson, M.W., Einarson, M. K., Augustine, C.H., & Vaughan, D.S. (1999).
Designing student assessment to strengthen institutional performance in doctoral
and research institutions. Stanford, CA: National Center for Postsecondary
Improvement.
Provezis, S. (2010, October). Regional accreditation and student learning
outcomes: Mapping the territory. Urbana, IL: University of Illinois and Indiana
University, National Institute for Learning Outcomes Assessment.
Schmidt, P. (2008). Harvard's Derek Bok: Professors, study thy own
teaching. The Chronicle of Higher Education, October 13.
Shavelson, R.J. (2007). A brief history of student learning assessment. Washington,
DC: Association of American Colleges and Universities.
Shavelson, R.J. and Huang, L. (2003). Responding responsibly to the frenzy to
assess learning in higher education. Change, 35(1), 11-19.
Shulman, L.S. (2002). Making differences: A table of learning. Change, 34(6), 36-44.
Suskie, L. (2009). Assessing student learning. San Francisco: Jossey-Bass.
Stevens, D.D. & Levi, A. (2005). Introduction to rubrics: an assessment tool to save
grading time, convey effective feedback, and promote student learning. Sterling,
VA: Stylus Publishing.
Teagle Foundation. (2006). Initiatives in value added assessment. Retrieved from
http://www.teaglefoundation.org/learning/outcome.aspx
Tierney, W.G. (Ed.). (1990). Assessing academic climates and cultures. New Directions for
Institutional Research Series: 68. San Francisco: Jossey-Bass.
U.S. Department of Education. (2002). Public Law 107-110, No Child Left
Behind Act, 107th Congress. Retrieved from
http://www.ed.gov/nclb/landing.jhtml
U.S. Department of Education. (1988). Secretary's procedures and criteria for recognition
of accrediting agencies. Federal Register, 53(127), 25088-99. Washington, DC.
U.S. Department of Education. (2006). A test of leadership: charting the future
of U.S. Higher Education. Washington, D.C. Retrieved from http://www.ed.gov/
about/bdscomm/list/hiedfuture/reports/pre-pub-report.pdf
Volkwein, J.F. (2003, May). Implementing outcomes assessment on your
campus. e RP Group eJournal. Retrieved from
http://www.rpgroup.org/sites/default/les/implementing%20outcomes%20
assessment.pdf
Walvoord, B.E. (2004). Assessment clear and simple: A practical guide for
institutions, departments, and general education. San Francisco: Jossey-Bass.
Wul, D. H., Austin, A.E. (2004). e challenge to prepare the next generation of
faculty. In D. H. & A.E. Austin & Associates (Eds.), Paths to the professoriate:
Strategies for enriching the preparation of future faculty (pp. 3-16). San Francisco:
Jossey-Bass.
Wingspread Group on Higher Education. (1993). An American imperative: Higher
expectations for higher education.
Woodrow Wilson National Fellowship Foundation. (2005). The responsive Ph.D.:
Innovations in U.S. doctoral education. Princeton, NJ: Woodrow Wilson.
Web Resources
Center for the Integration of Research, Teaching, and Learning (CIRTL)
http://www.cirtl.net/
Council of Graduate Schools
Preparing Future Faculty National Office
www.preparing-faculty.org
NEASC/CIHE standards
http://cihe.neasc.org/standards_policies/standards/standards_html_version
Lumina's Tuning USA Project
http://www.luminafoundation.org/our_work/tuning/
The National Institute for Learning Outcomes Assessment
http://www.learningoutcomeassessment.org
The Teagle Foundation
http://www.teaglefoundation.org/
The Carnegie Corporation of New York
http://carnegie.org/
Endnotes
1. For a review of this history and variety, see Shavelson and Chun.
2. George Kuh (2010) warns of the dangers of prematurely including learning outcomes in university rankings.
He writes that significant work must be done before valid measurements of desired learning outcomes can be
included in ranking systems: "[…] ranking outfits need valid, reliable data from large numbers of colleges and
universities that have the same or comparable measures" (Kuh, October 7, 2010).
3. Driving this heightened focus on learning assessment has been a convergence of forces, including: influential
national reports such as that of the Spellings Commission on higher education (A Test of Leadership:
Charting the Future of U.S. Higher Education, 2006) and the Educational Testing Service's Culture of Evidence:
Postsecondary Assessments and Learning Outcomes, 2006; an increase in follow-up requests by regional
accrediting bodies for additional institutional documentation of the assessment of student learning outcomes;
and recent calls for greater oversight in the quality of student learning by state governing boards (Johnston and
Long, 2010).
4. It is important to note that resistance is not as common in disciplines where specialized accreditation is
conducted.
5. Of course, this is not to say that measures such as degree completion have only been used as proxies for
quality learning or that they have not driven other important national conversations about educational
quality. The CGS PhD Completion Project, for example, has been instrumental in empowering graduate
schools and program faculty to work together to address a variety of interventions in policies and practices.
6. The article appears in a new online section of the Chronicle, "Measuring Stick," created to monitor the
growing national discussion around quality and accountability in higher education.
7. The importance of faculty involvement was underscored in the set of recommendations that came out of a
recent NILOA report on learning outcomes assessment (Kuh and Ikenberry, 2009). The report recommended
that "Faculty members must systematically collect data about student learning, carefully examine and discuss
these results with colleagues, and use this information to improve student outcomes" (p. 28). The difficulty of
this effort was also acknowledged in the report.
8. Hutchings 2010, p. 15. This paper recommends six methods of directly connecting assessment with faculty
teaching: 1) Build assessment around the regular, ongoing work of teaching and learning; 2) Make a place
for assessment in faculty development; 3) Integrate assessment into the preparation of graduate students;
4) Reframe assessment as scholarship; 5) Create campus spaces and occasions for constructive assessment
conversation and action; and 6) Involve students in assessment (3).
9. AAC&U's project, Liberal Education and America's Promise (LEAP), has, for example, defined essential
learning outcomes divided into four areas: (1) Knowledge of Human Cultures and the Physical and Natural
World; (2) Intellectual and Practical Skills; (3) Personal and Social Responsibility; and (4) Integrative Learning.
See AAC&U 2007, p. 3.
10. See, for example, AAC&U's VALUE Project (Valid Assessment of Learning in Undergraduate
Education), which produced 15 rubrics for student learning that have been developed by faculty and
experts on student learning.
11. See Shulman 2002 for a discussion of the importance of measuring student engagement.
12. For purposes of this paper, the term "accrediting bodies" refers to the agencies, and the number (six)
does not count multiple commissions at each agency as a separate body, as is the practice in some citations.
These six regional accrediting bodies include: Middle States Association of Colleges and Schools, New
England Association of Colleges and Schools, North Central Association of Colleges and Schools, Northwest
Association of Accredited Schools, Southern Association of Colleges and Schools, and Western Association of
Schools and Colleges.
13. Standard 14 of the Middle States Commission's standards reads: "Assessment of student learning
demonstrates that, at graduation, or other appropriate points, the institution's students have knowledge, skills,
and competencies consistent with institutional and appropriate higher education goals." See: http://www.
msche.org/?Nav1=About&Nav2=FAQ&Nav3=Question07. All URLs retrieved on March 19, 2010. The New
England Association of Schools and Colleges (NEASC), Commission on Institutions of Higher Education's
standards for accreditation include more detailed expectations:
The institution uses a variety of quantitative and qualitative methods to understand the
experiences and learning outcomes of its students. Inquiry may focus on a variety of
perspectives, including understanding the process of learning, being able to describe
student experiences and learning outcomes in normative terms, and gaining feedback
from alumni, employers, and others situated to help in the description and assessment of
student learning. The institution devotes appropriate attention to ensuring that its methods
of understanding student learning are trustworthy and provide information useful in the
continuing improvement of programs and services for students (section 4.50).
NEASC standards include explicit statements on faculty responsibility for student learning outcomes
assessment [“Responsibilities of teaching faculty include instruction and the systematic understanding
of eective teaching/learning processes and outcomes in courses and programs for which they share
responsibility” (section 5.3)] and on transparency [“e institution has readily available valid documentation
for any statements and promises regarding such matters as program excellence, learning outcomes, success in
placement, and achievements of graduates or faculty” (section 10.12)]. NEASC/CIHE standards are available
online at: http://cihe.neasc.org/standards_policies/standards/standards_html_version
14. See Kuh 2009 and Provezis 2010.
15. Examples include Lumina's Tuning USA Project (http://www.luminafoundation.org/our_work/tuning/), and
the National Institute for Learning Outcomes Assessment (http://www.learningoutcomeassessment.org), as well
as other activities supported by the Teagle Foundation, Lumina, and the Carnegie Corporation of New York.
16. The first three depend upon multiple-choice responses, while the CLA involves performance tasks that tap
into the student's ability to draw conclusions based upon multiple sources of information (Shavelson, 2007).
17. Examples of such graduate reform initiatives include: the Carnegie Initiative on the Doctorate; the Responsive
PhD initiative of the Woodrow Wilson National Fellowship Foundation; and Re-envisioning the PhD.
18. In the late 1980s the Pew Charitable Trusts championed a series of conferences on this topic (Chism &
Warner, 1987; Lewis, 1993; Nyquist, Abbott, Wulff, and Sprague, 1991). Out of these conferences, a series of
grant-funded initiatives, supported by Pew, the National Science Foundation, and the Atlantic Philanthropies
and awarded to CGS working with the Association of American Colleges and Universities, funded centralized
and discipline-specific PFF programs across the United States.
19. S. Goldsmith, D. Haviland, K. Daily, and A. Wiley. 2004. "Preparing Future Faculty Initiative: Final
Evaluation Report.” http://www.aacu.org/p/pdfs/PFF_Final_Report.pdf
20. See Gaff, Pruitt-Logan, and Weibl (2000) and Gaff, Pruitt-Logan, Sims and Denecke (2003).
21. For a listing of professional development programs developed without CGS PFF grant funds, see: http://
www.preparing-faculty.org/PFFWeb.Like.htm.
22. "With both centralized and departmental components, a hybrid model enhances the visibility, credibility,
and institutionalization of PFF programs on university campuses. Case studies provided a clear sense that
campuses with either centralized or hybrid models have PFF programs that are larger, more visible, and enjoy
greater institutional support than stand-alone departmental programs” (Goldsmith et al., 2004, available
online at: http://www.aacu.org/p/pdfs/PFF_Final_Report.pdf).
23. Participants included 15 graduate deans representing different types of institutions (public and private, from
different U.S. regions); six researchers and experts in the area of student learning outcomes assessment; six
recent or current graduate students who had participated in PFF or similar programs; and representatives of
two organizations that specialize in the development of assessment tools for higher education.
24. See Carnegie 2006.
25. An example of a pilot program specifically focused on helping graduate students cultivate a research-based
teaching practice, Princeton University's "Effective Teaching and Learning in a Research-Based Environment,"
is described on the Teagle Foundation's website.
26. Papers and discussions highlighting the emergence of learning outcomes for graduate students will be
featured in the proceedings of the 2010 Strategic Leaders Global Summit on Graduate Education, Global
Perspectives on Quality Assessment: Proceedings of the 2010 Strategic Leaders Global Summit on Graduate
Education (CGS: forthcoming 2011).
27. Such an effort would support one of the four recommendations for the future of PFF made by external
evaluators in a study commissioned by the funding agencies that supported the original grant-funded phases:
“Future studies of PFF should document faculty career outcomes of alumni and assess the impact of alumni on
graduate and undergraduate education, including on student achievement” (Goldsmith et al., 2004, p.3).
Council of Graduate Schools
One Dupont Circle, NW
Suite 230
Washington, DC 20036-1173
www.cgsnet.org