Assessment Glossary


Aggregated data
Assessment plan
Assessment report
Assessment
Assessment document
Assessment Plan Composer
Benchmarking
Best practice
Bloom’s taxonomy
Course-level assessment
Criteria
Criterion-referenced assessment
Curriculum
Curriculum map
Direct measures
Exit surveys and interviews
Focus group
Formative assessment
Goals
Indirect measures
Institutional effectiveness
Learning outcomes
Methods
Mission statement
Norm-referenced
Portfolio
Program-level assessment
Qualitative assessment
Quantitative assessment
Random sample
Results
Rubric
Summative assessment
Triangulation
Use of Results
Value-added

 

Aggregated data: Data that have been combined to show averages or other representations of groups of students. (9) Aggregated data should be collected for program-level assessment.
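
As a minimal sketch of the idea, the following hypothetical Python example (the scores and the 70-point threshold are invented for illustration) aggregates individual results into the kind of group-level figures a program-level report would use:

    # Hypothetical example: aggregating individual student scores
    # into program-level summary statistics.
    from statistics import mean, stdev

    scores = [78, 85, 92, 67, 88, 74, 81, 90]  # invented individual scores

    # Aggregated data describe the group, not any one student.
    print(f"Program mean: {mean(scores):.1f}")
    print(f"Standard deviation: {stdev(scores):.1f}")
    print(f"Percent at or above 70: {sum(s >= 70 for s in scores) / len(scores):.0%}")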

 


 

Assessment plan: The document that describes how the program will assess its performance in the upcoming year.  This document includes the Mission Statement, Goals, Learning Outcomes, Curriculum, Criteria, and Methods.

 


 

Assessment report: The document that presents data and discusses how assessment results will be used to change the curriculum and/or assessment procedures for the coming year.  That is, its two key components are the Results and the Use of Results.

 


 

Assessment: The ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within our institutional system, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of higher education (3).  Also defined as the systematic collection, review, and use of information about educational programs, undertaken for the purpose of improving student learning and development (11, 12).

 


 

Assessment document: This includes the assessment report and the assessment plan.  Because of the nature of the Assessment Plan Composer, they are submitted together.

 


 

Assessment Plan Composer: A website that allows faculty to write and submit assessment reports and plans in the desired format.  This system promotes ongoing, evolving assessment.

 


 

Benchmarking: A criterion-referenced objective; performance data that are used for comparative purposes.  A program can use its own data as a baseline benchmark against which to compare future performance, or it can use data from another program as a benchmark.  In the latter case, the other program is often chosen because it is exemplary, and its data are used as a target to strive for rather than as a baseline. (7)

 


 

Best practice: Compares your results against the best of your peers. (6) 

 


 

Bloom’s taxonomy: Six levels in which cognitive learning objectives can be categorized by increasing complexity; the revised levels are Remember, Understand, Apply, Analyze, Evaluate, and Create (2).

 


 

Course-level assessment: Assessment to determine the extent to which a specific course is achieving its learning goals and outcomes, as well as assessment to improve the teaching of specific courses or segments of courses. (6)  Compare with program-level assessment.

 


 

Criteria: Describes relevant measures that will be used; states precisely what students will be doing; explains the conditions under which students will perform the task; states an acceptable level of aggregate performance.  

 


 

Criterion-referenced assessment: Compares student performance or score against an established standard. (6)

 


 

Curriculum: States where in the curriculum students will be exposed to the materials that will allow them to achieve the learning outcomes (e.g., specific courses, co-curricular opportunities).

 


 

Curriculum map: Curriculum maps demonstrate where in the program’s curriculum learning outcomes are being addressed.  In essence, a curriculum map consists of a table with two axes, one pertaining to program learning outcomes, the other to courses in the major. “Mapping” program outcomes to course outcomes shows how students develop skills and knowledge in courses that are required for their programs of study.  
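
A minimal hypothetical map is sketched below (the course numbers and outcome labels are invented, and the I/R/M notation, for introduced, reinforced, and mastered, is one common convention rather than a required format):

                    Outcome 1   Outcome 2   Outcome 3
    COURSE 101      I           I
    COURSE 250      R                       I
    COURSE 499      M           R           M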

 


 

Direct measures: Direct measures of student learning require students to display their knowledge and skills as they respond to the instrument itself. (12) Compare with indirect measures.

 


 

Exit surveys and interviews: Information obtained from students on completion of their study. This typically includes information about student growth and change, satisfaction with academic programs, their experiences in their majors, and their immediate and future plans. (12)

 


 

Focus group: A carefully planned discussion to obtain perceptions on a defined area of interest in a permissive, nonthreatening environment.  It is conducted with approximately 7 to 10 people by a skilled interviewer. (8)

 


 

Formative assessment:  Formative assessment is conducted during the life of a program (or performance) with the purpose of providing feedback that can be used to modify, shape, and improve the program (or performance). (12)  Compare with summative assessment.

 


 

Goals: Goal statements are broad but provide a more detailed discussion of the general aims of the program that support the mission.  They describe intended outcomes for students and graduates of the program in very general terms, and they must list the intended outcomes dictated by the mission statement.

 


 

Indirect measures:  Assessment methods that involve perceptions of learning rather than actual demonstrations of learning outcome achievement.  For example, a student survey about whether a course helped develop a greater sensitivity to diversity or an employer survey asking for feedback on graduates’ skills (9).  Compare with direct measures.

 


 

Institutional effectiveness: Assessment to determine the extent to which a college or university is achieving its mission. (6)  See also the SACS Principles of Accreditation (13).

 


 

Learning outcomes: A statement that describes the measurable skills, knowledge, and attitudes that students should be able to demonstrate as a result of the course or program.  Learning outcomes should be SMART: Specific, Measurable, Agreed Upon, Realistic, and Time Framed.

 


 

Methods: Describes how and when the outcomes will be assessed, and who will conduct the assessment; describes how assessment data will be disseminated to faculty and staff as appropriate. 

 


 

Mission statement: The mission statement is usually a short, one-paragraph general explanation of what the program is and why it exists.  It should not be a mission discussion; keep it short and very general, and avoid words and phrases not in general use.  At the program level, a mission statement that supports the college or school mission statement helps gain support from the university administration.

 


 

Norm-referenced: A norm-referenced test is one designed to highlight achievement differences between and among students to produce a dependable rank order of students across a continuum of achievement from high achievers to low achievers. (4)

 


 

Portfolio: A collection of multiple student work samples, usually compiled over time and rated using rubrics.  The design of a portfolio depends on how the scoring results will be used. (4)

 


 

Program-level assessment: Assessment to determine the extent to which students in an academic program can demonstrate the learning outcomes for the program. (6) Compare with course-level assessment.

 


 

Qualitative assessment: Assessments that rely on description rather than numerical scores or ratings. The emphasis is on the measurement of opinions, reflections and/or judgments. Examples include interviews, focus groups, and observations. (9) Compare with quantitative assessment.

 


 

Quantitative assessment: Assessments that rely on numerical scores or ratings. The emphasis is on the use of statistics, cumulative numbers, aggregated data, and numerical measurements. (9) Compare with qualitative assessment.

 


 

Random sample: A sample drawn from the population such that every member of the population has an equal opportunity to be included in the sample. (5)
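
A minimal sketch in Python (the roster and sample size are invented for illustration); random.sample selects without replacement, so every member of the population has an equal chance of inclusion:

    # Hypothetical example: drawing a simple random sample of students.
    import random

    population = [f"student_{i}" for i in range(1, 201)]  # invented roster of 200

    sample = random.sample(population, k=30)  # each student equally likely
    print(sample[:5])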

 


 

Results: Presents the findings from the data that have been collected and analyzed in a simple, easily understood format; accurately depicts the findings relevant to each learning outcome at the program level. 

 


 

Rubric: A set of categories that define and describe the important components of the work being completed, critiqued, and assessed.  Each category contains a gradation of levels of completion or competence, with a score assigned to each level and a clear description of the criteria that must be met to attain the score at each level. (4)
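
A minimal hypothetical example for a single rubric category (the levels, scores, and descriptions are invented for illustration):

    Category: Organization of a written report
      4 (Exemplary)   Ideas follow a logical order; transitions connect every section.
      3 (Proficient)  Ideas are mostly ordered; transitions are occasionally abrupt.
      2 (Developing)  Some logical order is present; transitions are often missing.
      1 (Beginning)   No discernible order; sections read as disconnected.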

 


 

Summative assessment: An assessment that is done at the conclusion of a course or some larger instructional period (e.g., at the end of the program). The purpose is to determine success, or the extent to which the program/project/course met its goals and learning outcomes. (10)  Compare with formative assessment.

 


 

Triangulation: The use of a combination of assessment methods in a study.  An example of triangulation would be an assessment that incorporates surveys, interviews, and research papers. (4)

 


 

Use of Results: Explains how specific results from assessment activities will be used for decision-making, strategic planning, program evaluation and improvement; assists in documenting changes and the reasons for the changes. 

 


 

Value-added (growth or pre-post): Compares students’ results at the end of the program or course of study against their scores when they started or entered the program (4); student learning is demonstrated by determining how much students have gained through participation in the program (1).
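
A minimal sketch of the pre-post computation in Python (the paired scores are invented for illustration):

    # Hypothetical example: value-added measured as mean gain
    # from paired pre- and post-program scores.
    pre = [55, 62, 48, 70, 65]   # invented scores at program entry
    post = [72, 80, 63, 85, 79]  # invented scores at program completion

    gains = [after - before for before, after in zip(pre, post)]
    print(f"Mean gain: {sum(gains) / len(gains):.1f} points")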

 


 

References:

  1. Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker.


  2. Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (Complete ed.). New York: Longman.


  3. Angelo, T. A. (1995). Reassessing (and defining) assessment. American Association for Higher Education (AAHE) Bulletin, 48(2), 7-9.


  4. Assessment Services of Northern Illinois University (2008). Assessment terms glossary. Retrieved February 25, 2008, from http://www.niu.edu/assessment/resources/terms.shtml


  5. Bordens, K.S., & Abbott, B.B. (1997). Research design and methods: A process approach (4th ed.). Mountain View, CA: Mayfield.


  6. Burns, M., Fager, J., Gumm, A., Haley, A., Krider, D., Linrud, J., Osborn, W., Riebschleger, J., Shahabuddin, S., & Webster, D. (2008). Central Michigan University assessment toolkit. Retrieved February 25, 2008, from http://academicaffairs.cmich.edu/caa/assessment/resources/toolkit


  7. Hatry, H., van Houten, T., Plantz, M., & Greenway, M.T. (1996). Measuring program outcomes: A practical approach. Alexandria, VA: United Way of America.


  8. Krueger, R.A. (1994). Focus groups: A practical guide for applied research (2nd ed.). Thousand Oaks, CA: Sage.


  9. Oxnard College (2006). Oxnard College SLO Glossary of Terms. Retrieved February 26, 2008, from http://www.oxnardcollege.edu/faculty/slo/SLO%20Glossary%20of%20Terms.pdf


  10. Ozarka College. (n.d.). Assessment glossary. Retrieved March 5, 2008, from http://www.ozarka.edu/assessment/glossary.cfm


  11. Marchese, T. J. (1987). Third down, ten years to go. AAHE Bulletin, 40, 3-8.


  12. Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.


  13. Southern Association of Colleges and Schools. (2008). Principles of accreditation: Foundations for quality enhancement. Retrieved March 3, 2008, from http://www.sacscoc.org/pdf/2008%20Interim%20Principles%200108.pdf

 
