Assessment plan: The document that describes how the program will assess the upcoming year's performance. This document includes the Mission Statement, Goals, Learning Outcomes, Curriculum, Criteria, and Methods.
Assessment report: The document that presents data and discusses how assessment results will be used to change curriculum and/or assessment procedure for the coming year. That is, the two key components of this report include the Results and Use of Results.
Assessment: The ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within our institutional system, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of higher education (3). Also defined as the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development (11, 12).
Assessment document: This includes the assessment report and the assessment plan. Because of the nature of the Assessment Plan Composer, they are submitted together.
Assessment Plan Composer: A website that allows faculty to write and submit assessment reports and plans in the desired format. This system promotes ongoing, evolving assessment.
Benchmarking: A criterion-referenced objective. Performance data that are used for comparative purposes. A program can use its own data as a baseline benchmark against which to compare future performance. It can also use data from another program as a benchmark. In the latter case, the other program often is chosen because it is exemplary and its data are used as a target to strive for, rather than as a baseline. (7)
Best practice: Compares your results against the best of your peers. (6)
Bloom’s taxonomy: Six levels in which cognitive learning objectives can be categorized by increasing complexity; the revised levels are Remember, Understand, Apply, Analyze, Evaluate, and Create (2).
Course-level assessment: Assessment to determine the extent to which a specific course is achieving its learning goals and outcomes, as well as assessment to improve the teaching of specific courses or segments of courses. (6) Compare with program-level assessment.
Criteria: Describes the relevant measures that will be used; states precisely what students will be doing; explains the conditions under which students will perform the task; and states an acceptable level of aggregate performance.
Criterion-referenced assessment: Compares student performance or scores against an established standard. (6)
Curriculum: States where in the curriculum students will be exposed to the material that will allow them to achieve each learning outcome (e.g., specific courses, co-curricular opportunities).
Curriculum map: Curriculum maps demonstrate where in the program’s curriculum learning outcomes are being addressed. In essence, a curriculum map consists of a table with two axes, one pertaining to program learning outcomes, the other to courses in the major. “Mapping” program outcomes to course outcomes shows how students develop skills and knowledge in courses that are required for their programs of study.
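As a minimal sketch of this two-axis structure (the outcome and course names below are invented for illustration, not drawn from any actual program), a curriculum map can even be represented programmatically to check coverage:

```python
# Hypothetical curriculum map: program learning outcomes mapped to the
# required courses in which each outcome is addressed.
curriculum_map = {
    "Outcome 1: effective written communication": ["ENGL 101", "CAPSTONE 490"],
    "Outcome 2: quantitative reasoning": ["MATH 210", "STAT 300"],
    "Outcome 3: disciplinary research skills": [],  # gap: not yet covered
}

# Flag outcomes that no required course currently addresses.
gaps = [outcome for outcome, courses in curriculum_map.items() if not courses]
print(gaps)  # outcomes missing from the curriculum
```

Reviewing the map this way makes curricular gaps visible at a glance: any outcome with an empty course list is not being taught anywhere in the required sequence.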
Direct measures: Direct measures of student learning require students to display their knowledge and skills as they respond to the instrument itself. (12) Compare with indirect measures.
Exit surveys and interviews: Information obtained from students on completion of their study. This typically includes information about student growth and change, satisfaction with academic programs, their experiences in their majors, and their immediate and future plans. (12)
Focus group: A carefully planned discussion to obtain perceptions on a defined area of interest in a permissive, nonthreatening environment. It is conducted with approximately 7 to 10 people by a skilled interviewer. (8)
Formative assessment: Assessment conducted during the life of a program (or performance) with the purpose of providing feedback that can be used to modify, shape, and improve the program (or performance). (12) Compare with summative assessment.
Goals: Goal statements are broad but provide a more detailed discussion of the general aims of the program that support the mission. Goal statements describe intended outcomes for students/graduates of the program in very general terms and must list the intended outcomes dictated by the mission statement.
Indirect measures: Assessment methods that involve perceptions of learning rather than actual demonstrations of learning outcome achievement; for example, a student survey about whether a course helped develop a greater sensitivity to diversity, or an employer survey asking for feedback on graduates’ skills (9). Compare with direct measures.
Institutional effectiveness: Assessment to determine the extent to which a college or university is achieving its mission. (6)
Learning outcomes: A statement that describes the measurable skills, knowledge, and attitudes that students should be able to demonstrate as a result of the course or program. Learning outcomes should be SMART: Specific, Measurable, Agreed Upon, Realistic, and Time-Framed.
Methods: Describes how and when the outcomes will be assessed and who will conduct the assessment; describes how assessment data will be disseminated to faculty and staff as appropriate.
Mission statement: The mission statement is usually a short, one-paragraph general explanation of what the program is and why it exists. The mission statement should not be a mission discussion; keep it short and very general, and avoid jargon and phrasing not in general use. At the program level, support from the university administration is easier to gain if the program mission statement supports the college or school mission statement.
Norm-referenced: A norm-referenced test is one designed to highlight achievement differences between and among students, producing a dependable rank order of students across a continuum of achievement from high achievers to low achievers. (4)
Portfolio: Collections of multiple student work samples, usually compiled over time and rated using rubrics. The design of the portfolio depends upon how the scoring results are going to be used. (4)
Program-level assessment: Assessment to determine the extent to which students in an academic program can demonstrate the learning outcomes for the program. (6) Compare with course-level assessment.
Qualitative assessment: Assessments that rely on description rather than numerical scores or ratings. The emphasis is on the measurement of opinions, reflections, and/or judgments. Examples include interviews, focus groups, and observations. (9) Compare with quantitative assessment.
Quantitative assessment: Assessments that rely on numerical scores or ratings. The emphasis is on the use of statistics, cumulative numbers, aggregated data, and numerical measurements. (9) Compare with qualitative assessment.
Random sample: A sample drawn from the population such that every member of the population has an equal opportunity to be included in the sample. (5)
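As a minimal sketch (assuming a program is selecting student papers for rubric scoring; the collection and sample size here are invented), a simple random sample can be drawn with Python's standard library, which gives every member of the population an equal chance of selection:

```python
import random

# Population: all 200 student papers submitted in a capstone course
# (hypothetical; stand-ins for real student work).
papers = [f"paper_{i}" for i in range(1, 201)]

random.seed(42)  # fixed seed so the sample can be reproduced for audit
sample = random.sample(papers, k=30)  # 30 papers, drawn without replacement

print(len(sample))
```

Sampling without replacement, as `random.sample` does, ensures no paper is scored twice while preserving equal selection probability across the population.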
Results: Presents the findings from the data that have been collected and analyzed in a simple, easily understood format; accurately depicts the findings relevant to each learning outcome at the program level.
Rubric: A set of categories that define and describe the important components of the work being completed, critiqued, and assessed. Each category contains a gradation of levels of completion or competence, with a score assigned to each level and a clear description of the criteria that must be met to attain the score at each level. (4)
Summative assessment: An assessment that is done at the conclusion of a course or some larger instructional period (e.g., at the end of the program). The purpose is to determine success, or to what extent the program/project/course met its goals and learning outcomes. (10) Compare with formative assessment.
Triangulation: The use of a combination of assessment methods in a study. An example of triangulation would be an assessment that incorporated surveys, interviews, and research papers. (4)
Use of Results: Explains how specific results from assessment activities will be used for decision-making, strategic planning, program evaluation, and improvement; assists in documenting changes and the reasons for the changes.
Value-added (growth or pre-post): Compares students’ scores at entry to the program against their scores at the end of the program or course of study (4); student learning is demonstrated by determining how much students have gained through participation in the program (1).
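As a minimal sketch of the pre-post comparison (the scores below are invented for illustration), the value added is simply each student's gain from entry to exit, often summarized as a mean gain:

```python
# Hypothetical matched pre/post scores for five students (same rubric,
# administered at program entry and at program exit).
pre_scores = [55, 62, 48, 70, 66]
post_scores = [72, 80, 65, 85, 79]

# Per-student gain: exit score minus entry score.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_gain = sum(gains) / len(gains)

print(mean_gain)
```

A positive mean gain suggests growth during the program, though real analyses would also consider score reliability and attrition between the two administrations.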
References: