DISCLAIMER: The data in this section are fictitious and do not, in any way, represent any of the programs at Gallaudet University. This information is intended only as an example.
Types of Scoring Criteria (Rubrics)
A rubric is a scoring guide used to assess performance against a set of criteria. At a minimum, it is a list of the components you are looking for when you evaluate an assignment. At its most advanced, it is a tool that divides an assignment into its component parts, and provides explicit expectations of acceptable and unacceptable levels of performance for each component.
Types of Rubrics
1 - Checklists, the least complex form of scoring system, are simple lists indicating the presence, NOT the quality, of the elements. Therefore, checklists are NOT frequently used in higher education for program-level assessment. But faculty may find them useful for scoring and giving feedback on minor student assignments or practice/drafts of assignments.
Example 1: Critical Thinking Checklist
__ Accurately interprets evidence, statements, graphics, questions, etc.
__ Identifies the salient arguments (reasons and claims)
__ Offers analyses and evaluations of major alternative points of view
__ Draws warranted, judicious, non-fallacious conclusions
__ Justifies key results and procedures, explains assumptions and reasons
__ Fair-mindedly follows where evidence and reasons lead
Example 2: Presentation Checklist
__ engaged audience
__ used an academic or consultative ASL register
__ used adequate ASL syntactic and semantic features
__ cited references adequately in ASL
__ stayed within allotted time
__ managed PowerPoint presentation technology smoothly
2 - Basic Rating Scales are checklists of criteria that evaluate the quality of elements and include a scoring system. The main drawback of rating scales is that the meaning of the numeric ratings can be vague. Without descriptors for the ratings, raters must judge based on their own perceptions of what the terms mean. For the same presentation, one rater might rate a student "good" while another might feel the same student was "marginal."
Example: Basic Rating Scale for Critical Thinking
Accurately interprets evidence, statements, graphics, questions, etc
Identifies the salient arguments (reasons and claims)
Offers analyses and evaluations of major alternative points of view
Draws warranted, judicious, non-fallacious conclusions
Justifies key results and procedures, explains assumptions and reasons
Fair-mindedly follows where evidence and reasons lead
3 - Holistic Rating Scales use a short narrative of characteristics to award a single score based on an overall impression of a student's performance on a task. A drawback of holistic rating scales is that they do not identify specific areas of strength and weakness and are therefore less useful for focusing your improvement efforts.
Use a holistic rating scale when the projects to be assessed will vary greatly (e.g., independent study projects submitted in a capstone course) or when the number of assignments to be assessed is significant (e.g., reviewing all the essays from applicants to determine who will need developmental courses).
Example: Holistic Rating Scale for Critical Thinking Scoring
- Peter A. Facione, Noreen C. Facione, and Measured Reasons LLC. (2009), The Holistic Critical Thinking Scoring Rubric: A Tool for Developing and Evaluating Critical Thinking. Retrieved April 12, 2010 from Insight Assessment.
4 - Analytic Rating Scales are rubrics that include explicit performance expectations for each possible rating, for each criterion. Analytic rating scales are especially appropriate for complex learning tasks with multiple criteria.
Evaluate carefully whether this is the most appropriate tool for your assessment needs. Analytic rating scales can provide more detailed feedback on student performance and more consistent scoring among raters, but they can be time-consuming to develop and apply.
Results can be aggregated to provide detailed information on strengths and weaknesses of a program.
Example: Critical Thinking Portion of the Gallaudet University Rubric for Assessing Written English
IDEAS and CRITICAL THINKING
Central point
1. Assignment lacks a central point.
2. Displays central point, although not clearly developed.
3. Displays adequately-developed central point.
4. Displays clear, well-developed central point.
5. Central point is uniquely displayed and developed.

Development of ideas
1. Displays no real development of ideas.
2. Develops ideas superficially or inconsistently.
3. Develops ideas with some consistency and depth.
4. Displays insight and thorough development of ideas.
5. Ideas are uniquely developed.

Support for main ideas
1. Lacks convincing support for ideas.
2. Provides weak support for main ideas.
3. Develops adequate support for main ideas.
4. Develops consistently strong support for main ideas.
5. Support for main ideas is uniquely accomplished.

Critical manipulation of ideas
1. Includes no analysis, synthesis, interpretation, and/or other critical manipulation of ideas.
2. Includes little analysis, synthesis, interpretation, and/or other critical manipulation of ideas.
3. Includes analysis, synthesis, interpretation, and/or other critical manipulation of ideas in most parts of the assignment.
4. Includes analysis, synthesis, interpretation, and/or other critical manipulation of ideas throughout.
5. Includes analysis, synthesis, interpretation, and/or other critical manipulation of ideas throughout, leading to an overall sense that the piece could withstand critical analysis by experts in the discipline.

Integration of ideas
1. Demonstrates no real integration of ideas (the author's or the ideas of others) to make meaning.
2. Begins to integrate ideas (the author's or the ideas of others) to make meaning.
3. Displays some skill at integrating ideas (the author's or the ideas of others) to make meaning.
4. Is adept at integrating ideas (the author's or the ideas of others) to make meaning.
5. Integration of ideas (the author's or the ideas of others) is accomplished in novel ways.
Steps for Creating an Analytic Rating Scale (Rubric) from Scratch
There are two ways to approach building an analytic rating scale: the logical method and the organic method. Steps 1-3 are the same for both.
Steps 1 – 3: Logical AND Organic Methods
Determine the Best Tool
- Identify what is being assessed (e.g., ability to apply theory); this guide focuses on program-level learning assessment.
Determine first whether an analytic rating scale is the most appropriate way of scoring the performance and/or product.
An analytic rating scale is probably a good choice
a. if there are multiple aspects of the product or process to be considered
b. if a basic rating scale or holistic rating scale cannot provide the breadth of assessment you need.
Building the Shell
- Identify what is being assessed (e.g., ability to apply theory).
- Specify the skills, knowledge, and/or behaviors that you will be looking for.
- Limit the characteristics to those that are most important to the assessment.
- Develop a rating scale with levels of mastery that are meaningful.
Tip: Adding numbers to the ratings can make scoring easier. However, if you plan to use the rating scale for course-level grading as well, a meaning must be attached to each score. For example, what is the minimum score that would be considered acceptable for a "C"?
Other possible descriptors include:
* Exemplary, Proficient, Marginal, Unacceptable
* Advanced, High, Intermediate, Novice
* Beginning, Developing, Accomplished, Exemplary
* Outstanding, Good, Satisfactory, Unsatisfactory
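As a concrete illustration of the tip above, a total score can be mapped to letter grades once the program decides on cutoffs. The sketch below is hypothetical: it assumes a five-criterion rubric rated 1-5, and the cutoffs are invented examples, not prescribed values.

```python
# Hypothetical sketch: attaching meaning to numeric rubric scores for
# course-level grading. Assumes five criteria rated 1-5 (totals 5-25);
# the grade cutoffs are invented and must be set by the program.

GRADE_CUTOFFS = [(23, "A"), (18, "B"), (13, "C"), (9, "D")]

def letter_grade(criterion_scores):
    """Sum the per-criterion ratings and map the total to a letter grade."""
    total = sum(criterion_scores)
    for minimum, grade in GRADE_CUTOFFS:
        if total >= minimum:
            return grade
    return "F"  # below the minimum acceptable score

print(letter_grade([5, 4, 4, 5, 4]))
print(letter_grade([3, 3, 2, 3, 2]))
```

Under these invented cutoffs, a total of 13 (e.g., ratings of 3, 3, 2, 3, 2) is the minimum considered acceptable for a "C".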
Step 4: Writing the Performance Descriptors in the Cells
The descriptors are the critical piece of an analytic rating scale. To produce useful, valid scores, attributes in your descriptors must be consistent across the ratings and easy to read. See examples of inconsistent performance characteristics and suggested corrections.
- Use either the logical or the organic method to write the descriptions for each criterion at each level of mastery.
Logical method:
- For each criterion, at each rating level, brainstorm a list of the performance characteristics*. Each should be mutually exclusive.

Organic method:
- Have experts sort sample assignments into piles labeled by ratings (e.g., Outstanding, Good, Satisfactory, Unsatisfactory).
- Based on the documents in the piles, determine the performance characteristics* that distinguish the assignments.
Tips: Keep the list of characteristics manageable by including only critical evaluative components. Extremely long, overly detailed lists make a rating scale hard to use.
In addition to keeping descriptions brief, the language should be consistent. Below are several ways to keep descriptors consistent:
- Refer to specific aspects of the performance for each level
analyses the effect of …
describes the effects of …
lists the effects of …
- Keep the aspects of performance the same across the levels, adding adjectives or adverbial phrases to show the qualitative difference
provides a complex explanation
provides a detailed explanation
provides a limited explanation
shows a comprehensive knowledge
shows a sound knowledge
shows a basic knowledge
- Refer to the degree of assistance needed by the student to complete the task
uses correctly and independently
uses with occasional peer or teacher assistance
uses only with teacher guidance
- Use numeric references to show quantitative differences among levels
A word of warning: numeric references on their own can be misleading. They are best paired with a qualitative reference (e.g., three appropriate and relevant examples) to avoid rewarding quantity at the expense of quality.
provides three appropriate examples
provides two appropriate examples
provides an appropriate example
uses several relevant strategies
uses some relevant strategies
uses few or no relevant strategies
Steps 5-6: Logical AND Organic Methods
- Test the rating scale before making it official. Have a norming* session.
Ask colleagues who were not involved in the rating scale’s development to apply it to some products or behaviors and revise as needed to eliminate ambiguities, confusion, and/or inconsistencies. You might also let students self-assess using the rating scale.
*See University of Hawaii’s “Part 6. Scoring Rubric Group Orientation and Calibration” for directions for this process.
- Review and revise.
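During a norming session, one quick numeric check of consistency is the exact-agreement rate between two raters who scored the same sample assignments. This sketch is illustrative only; the ratings below are invented.

```python
# Illustrative sketch for a norming session: the fraction of sample
# assignments on which two raters gave the same rating. Ratings invented.

def exact_agreement(rater_a, rater_b):
    """Proportion of assignments where two raters' ratings match exactly."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_a = [4, 3, 5, 2, 4]  # one rater's scores on five sample assignments
rater_b = [4, 3, 4, 2, 4]  # a colleague's scores on the same assignments

print(f"Exact agreement: {exact_agreement(rater_a, rater_b):.0%}")
```

A low agreement rate signals ambiguous descriptors that need revision before the rating scale is made official.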
Steps for Adapting an Existing Analytic Rating Scale (Rubric)
- Evaluate the rating scale. Ask yourself:
- Does the rating scale relate to all or most of the outcome(s) I need to assess?
- Does it address anything extraneous?
- Adjust the rating scale to suit your specific needs.
- Add missing criteria
- Delete extraneous criteria
- Adapt the rating scale
- Edit the performance descriptors
- Test the rating scale.
- Review and revise again, if necessary.
Uses of Rating Scales (Rubrics)
Use rating scales for program-level assessment to see trends in strengths and weaknesses of groups of students.
- To evaluate a holistic project (e.g., a thesis, exhibition, or research project) in a capstone course that pulls together all that students have learned in the program.
- Supervisors might use a rating scale developed by the program to evaluate the field experience of students and provide the feedback to both the student and the program.
- Aggregate the scores of a rating scale used to evaluate a course-level assignment. For example, the Biology department develops a rating scale to evaluate students' reports from 300- and 400-level sections. The professors use the scores to help determine students' grades and to give students feedback for improvement. The scores are also given to the department's Assessment Coordinator, who summarizes them to determine how well the program is meeting its student learning outcome, "Make appropriate inferences and deductions from biological information."
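A minimal sketch of the aggregation step described above; the criterion names and ratings are invented examples, not any department's actual rubric data.

```python
# Hypothetical sketch: aggregating course-level rubric scores so a program
# can see trends in group strengths and weaknesses. Data are invented.

from collections import defaultdict
from statistics import mean

# Each record holds one student's ratings (1-5) on a shared rubric.
scores = [
    {"inference": 4, "evidence": 3, "organization": 5},
    {"inference": 2, "evidence": 4, "organization": 4},
    {"inference": 3, "evidence": 2, "organization": 5},
]

by_criterion = defaultdict(list)
for record in scores:
    for criterion, rating in record.items():
        by_criterion[criterion].append(rating)

# Per-criterion averages point to program-level strengths and weaknesses.
for criterion, ratings in sorted(by_criterion.items()):
    print(f"{criterion}: mean {mean(ratings):.2f}")
```

Because the scores are kept per criterion rather than as a single total, the summary shows which outcomes the program's students handle well and which need attention.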
For more information on using course-level assessment to provide feedback to students and to determine grades, see University of Hawaii's "Part 7. Suggestions for Using Rubrics in Courses" and the section on Converting Rubric Scores to Grades in Craig A. Mertler's "Designing Scoring Rubrics for Your Classroom".
Sample Rating Scales (Rubrics)
Adapted from the sources below:
Allen, Mary. (January 2006). Assessment Workshop Material. California State University, Bakersfield. Retrieved DATE from http://www.csub.edu/TLC/options/resources/handouts/AllenWorkshopHandoutJan06.pdf
Creating and Using Rubrics. (March 2008). University of Hawai'i at Manoa. Retrieved April 5, 2010 from http://www.uhm.hawaii.edu/assessment/howto/rubrics.htm
Creating an Original Rubric. Teaching Methods and Management, TeacherVision. Retrieved April 7, 2010 from http://www.teachervision.fen.com/teaching-methods-and-management/rubrics/4523.html?detoured=1
Danielson, Cherry and Naser, Curtis. (November 7, 2009). Developing Effective Rubrics: A New Tool in Your Assessment Toolbox. Workshop at the annual NEAIR Conference.
How to Design Rubrics. Assessment for Learning, Curriculum Corporation. Retrieved April 7, 2010 from http://www.assessmentforlearning.edu.au/professional_learning/success_criteria_and_rubrics/success_design_rubrics.html
Mertler, Craig A. (2001). Designing Scoring Rubrics for Your Classroom. Practical Assessment, Research & Evaluation. Retrieved April 7, 2010 from http://pareonline.net/getvn.asp?v=7&n=25
Mueller, Jon. (2001). Rubrics. Authentic Assessment Toolbox. Retrieved April 12, 2010 from http://jonathan.mueller.faculty.noctrl.edu/toolbox/rubrics.htm
Rubric (academic). (2010, March 3). In Wikipedia, the free encyclopedia. Retrieved April 7, 2010 from http://en.wikipedia.org/wiki/Rubric_(academic)
Tierney, Robin & Simon, Marielle. (2004). What's Still Wrong With Rubrics: Focusing on the Consistency of Performance Criteria Across Scale Levels. Practical Assessment, Research & Evaluation, 9(2). Retrieved April 13, 2010 from http://PAREonline.net/getvn.asp?v=9&n=2
A range of material is available, including examples of candidate evidence with commentaries, as part of our Understanding Standards programme. This material is for teachers and lecturers, to help them develop their understanding of the standards required for assessment. As new material is developed, we will publish this information in our weekly Centre News. All available material can be found in the following locations:
- Understanding Standards website: material relating to externally assessed components of course assessment, with the exception of those subject to visiting assessment.
- Secure website: material relating to internally assessed components of course assessment, and components of course assessment that are subject to visiting assessment. Material relating to freestanding units that are no longer part of National 5 courses can also be found on this website. Teachers and lecturers can arrange access to these materials through their SQA Co-ordinator.
More information on Understanding Standards material for this subject can be found on our Understanding Standards website at http://www.understandingstandards.org.uk/Subjects/Biology
The National 5 webinar provides a detailed overview of the revised course assessment for this subject.
National 5 Biology 15 June 2017
Additional CPD support
Where particular areas of concern are identified that are not addressed by our Understanding Standards events or support materials, we will offer free continuing professional development (CPD) training, subject to request. CPD support is subject-specific and can be tailored to cover one or more qualification levels. To find out more about this service, visit our CPD page.
SSERC, in partnership with SQA, has produced teacher/technician guides that provide background information for teachers and lecturers. Candidate guides containing protocols have also been produced. These are available via the following link:
National 5 Biology Assignment SSERC Resources