Measures
A measure is the process by which data is collected and evaluated to determine whether students are achieving learning outcomes. There are two types of measures: direct and indirect. Direct measures require students to demonstrate their competency or ability in some way that is evaluated for measurable quality by an expert, such as an instructor, internship supervisor, or industry representative.
Indirect measures provide secondhand information about student learning. Whereas direct measures are concerned with the quality of student work as it demonstrates learning, indirect measures are indicators that students are probably learning. Often, indirect measures are too broad to represent achievement of specific learning outcomes, but they may provide useful supplemental information.
For academic programs, direct measures of student learning should be prioritized.
For academic and student support units, indirect measures may provide sufficient information to determine whether objectives have been met.
Examples of Direct Measures:
- Written assignments, oral presentations, or portfolios of student work to which a rubric or other detailed criteria are applied
- Exam questions focused on a particular learning outcome or content area
- Scores on standardized exams (e.g., licensure, certification, or subject area tests)
- Employer, internship supervisor, or committee chair evaluations of student performance
- Competency interviews
- Evaluations of student teaching and classroom observations
- Other assignment grades based on defined criteria
Examples of Indirect Measures:
- Survey questions students answer about their own perception of their abilities
- Tasks tracked by recording completion or participation rates
- Completion of certain degree requirements
- Number of students who publish manuscripts or give conference presentations
- Job placement data
- Course grades and some comprehensive exam grades (i.e., broad exams that cover a variety of learning outcomes)
- GPAs
- Course enrollment data
These short videos, created by the Office of University Assessment at the University of Kentucky, summarize key information and highlight important considerations when determining measures for collecting assessment data.
Rubrics for Assessing Student Learning
Programs and departments are encouraged to use and/or revise existing rubrics to fit their needs. The American Association of Colleges & Universities (AAC&U) VALUE rubrics were created specifically for this purpose, and many of them have been used as the foundation for Core Curriculum assessment rubrics currently used at Texas A&M University. Attached to each AAC&U rubric is a cover page that provides a definition of the learning outcome, framing language, and a glossary of key terminology used in the rubric. The Division of Student Affairs also has a rubric hub on its website, covering a variety of learning outcomes and other skills.
Targets
Once assessment data is collected and analyzed, the next step is to determine what the findings mean. Was the outcome achieved or not, and to what degree? It is important to set targets ahead of time for each assessment measure you are using. The strongest targets are those based on previous information, such as benchmarks, past assessment findings, or some other observed performance or achievement. The faculty/staff group should collectively decide what target should be set. Targets should be clear, specific, and aligned with the language of the measure.
Targets may be quantitative or qualitative, or even multi-faceted to include both types of data. Below are some examples.
Quantitative Targets
- 85% of students will earn at least 7 out of 10 points on the critical thinking essay question.
- 100% of students will achieve the “Competent” threshold on the Content Development rubric criterion.
- 70% of students will score above the 80th percentile on the ACS standardized exam.
- 80% of students will select “Agree” or “Strongly Agree” that the training improved their mentoring skills.
- 75% of service requests will be acknowledged within 24 hours.
- Enrollment of women students in this activity/event will increase by 15% over last year.
- 90% of reports will be submitted on time.
- Demographics of students participating in this experience will match the demographics of students on TAMU campus (list percentages).
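Checking a quantitative target like the first example above is a simple percentage calculation. The sketch below is illustrative only: the scores, the 7-out-of-10 threshold, and the 85% target are hypothetical values, not real assessment data.

```python
def target_met(scores, min_score, target_pct):
    """Return (percent of students achieving min_score, whether target_pct was reached)."""
    achieving = sum(1 for s in scores if s >= min_score)
    pct = 100 * achieving / len(scores)
    return pct, pct >= target_pct

# 20 hypothetical essay scores out of 10 points
scores = [9, 7, 8, 10, 6, 7, 9, 8, 7, 5, 10, 8, 7, 9, 6, 8, 7, 10, 9, 8]
pct, met = target_met(scores, min_score=7, target_pct=85)
print(f"{pct:.0f}% of students earned at least 7/10; target met: {met}")
```

Recording the computed percentage alongside the pass/fail judgment preserves the information needed for the benchmark-based targets described above, since next cycle's target can be set relative to this cycle's observed result.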
Qualitative Targets
- Each submitted developmental portfolio will demonstrate growth (as defined by the program) in incorporating credible research sources.
- When asked open-ended questions about their experience with the service/support, users in focus groups will mention keywords or synonyms related to the unit’s purpose and/or mission statement (e.g., belonging, inclusion, safe space, etc.).
- Each debriefing session with clients will indicate that they are satisfied with the team’s pre-event communication.
- Open-ended survey questions will reveal favorable overarching themes.