Developing Your Assessment Criteria

Having transparent assessment grading criteria, especially for assessment activities that require professional judgement, is one of the most important factors in students' learning success, according to Hattie and Donoghue (2016). Students benefit greatly from understanding what is expected of them, so it is important that you actively discuss your criteria with them.

The SOLO taxonomy

The Structure of Observed Learning Outcomes (SOLO) taxonomy is a tool that can help you craft your assessment criteria. It describes five levels of performance of increasing complexity: pre-structural, uni-structural, multi-structural, relational and extended abstract. In practice, you might find it unreasonable to expect students to reach the highest level in every course. Comparing and discussing with colleagues can help you determine what level is reasonable for your students.

[Figure: The five levels of the SOLO taxonomy. Adapted from Biggs & Collis (1982)]


Ways of organising your assessment criteria

There are two main ways to judge performance against your assessment criteria: analytic and holistic. Use the table below to weigh the advantages and disadvantages of each approach and decide which would work best for your assessment activities.

Holistic or analytic: one or several judgements?

Analytic
Definition: Each criterion (dimension, trait) is evaluated separately.
Advantages:
  1. Gives diagnostic information to the teacher.
  2. Gives formative feedback to students.
  3. Easier to link to instruction than holistic rubrics.
  4. Good for formative assessment; adaptable for summative assessment. If you need an overall score, you can combine the criterion scores (see the worked example below the table).
Disadvantages:
  1. Takes more time to score than holistic rubrics.
  2. Takes more time to achieve inter-rater reliability than with holistic rubrics.
  3. Assumes that a fixed weighting can be, or should be, assigned to each criterion.
  4. May result in rigid marking that potentially discourages “out of the box” performances and originality.

Holistic
Definition: All criteria (dimensions, traits) are evaluated simultaneously.
Advantages:
  1. Scoring is faster than with analytic rubrics.
  2. Requires less time to achieve inter-rater reliability.
  3. Good for summative assessment.
  4. Allows flexibility in assessing the quality of a work as a whole rather than as the sum of its parts.
Disadvantages:
  1. A single overall score does not communicate information about what to do to improve.
  2. Less specific for formative assessment.

Adapted from Brookhart (2013)
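
As a simple illustration of combining analytic scores (the criteria and weights here are hypothetical, not taken from Brookhart): if an essay scores 3 out of 4 for quality of argument, 4 out of 4 for use of secondary material and 2 out of 4 for language, and the three criteria are weighted 50%, 30% and 20% respectively, the overall score is (0.5 × 3) + (0.3 × 4) + (0.2 × 2) = 3.1 out of 4. This is also where the third disadvantage applies: fixed weights assume that the relative importance of each criterion can be decided in advance.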

Challenges with using assessment criteria

UNSW provides a good summary of the challenges of using assessment rubrics. These include the difficulty of crafting rubrics and assessing with them, and the risk that students become over-reliant on the rubrics and less inclined to develop their own judgement.

To overcome these challenges, faculty can consider creating the rubrics together with students, getting them to practise peer review with the rubrics, refining the rubrics with them, and giving feedback based on the rubrics. These practices help both you and your students develop a clearer idea of what the descriptors mean and of how students can demonstrate their achievement of the assessed learning outcomes.

Another approach is to craft the rubrics with the guidance of the SOLO taxonomy. Assessment criteria based on the SOLO taxonomy attempt to capture qualitative differences in students' performance.


Examples of rubrics using the SOLO taxonomy

Here is an example from the NTU School of Physical and Mathematical Sciences that follows the SOLO taxonomy closely. Notice that instead of counting the number of “clever uses” of science, the descriptors emphasise the rigour and depth with which students used science in the project assignment.

Criterion: Does this project involve some clever use of Science? (LO 1, 4)
[Explain the Scientific principles behind your project.]

  Extended Abstract: Involves the novel application of existing scientific principles and/or discovery of new scientific principles.
  Relational: The project was designed based on sound scientific principles.
  Multi-Structural: The project was designed based on some scientific principles and involved some trial and error.
  Uni-Structural: The project was designed based loosely on scientific principles and involved mainly uneducated guesses.
  Pre-Structural: No Science or Mathematics involved in this project.

Note: This is just one of the criteria in the complete rubric. If you are interested in the complete rubric, please email us at [email protected]

Here is a second example, from a writing class. Notice that the difference between the performance bands is determined by the quality of the arguments that students construct. The bands closely follow the five levels of the SOLO taxonomy.

Quality of argument
  Unsatisfactory: No coherence; misses the point.
  Developing: Makes one or two points that are not connected.
  Satisfactory: Makes a series of points, but without synthesis.
  Proficient: Synthesises points into a coherent argument.
  Exemplary: Highly coherent; goes beyond synthesis to create a new, independent point of view.

Use of secondary material
  Unsatisfactory: No use of secondary material.
  Developing: Some use of secondary material, but without real relevance to the argument.
  Satisfactory: A range of relevant secondary material used uncritically to illustrate points.
  Proficient: A range of relevant secondary material used, critically integrated into the argument.
  Exemplary: A range of relevant secondary material used, critically integrated to advance the argument and to theorise.

You can find more examples from Macquarie University, Australia.

Summary of Recommendations

  • Use assessment criteria to give your students clarity, especially on assessment activities that require professional judgement.
  • Use the SOLO Taxonomy to craft your assessment criteria.
  • Discuss with your colleagues what level of performance on the learning outcomes is reasonable for your students.
  • Consider whether you should use your assessment criteria holistically or analytically.
  • Share and discuss your assessment criteria with your students, as this will help in their learning.

 

References

Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-19.

Biggs, J. (2003). Aligning teaching for constructing learning. Higher Education Academy, 1-4.

Biggs, J., & Collis, K. F. (1982). Evaluating the quality of learning: The SOLO Taxonomy. New York: Academic Press.

Biggs, J., & Tang, C. (2011). Teaching for Quality Learning at University: What the student does. McGraw-Hill Education (UK).

Boud, D., & Dochy, F. (2010). Assessment 2020: Seven propositions for assessment reform in higher education.

Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD.

Dunn, D. S., McCarthy, M. A., Baker, S. C., & Halonen, J. S. (2010). Using quality benchmarks for assessing and developing undergraduate programs. Lismore, NSW: John Wiley & Sons.

Earl, L. (2003). Assessment of Learning, for Learning, and as Learning. Thousand Oaks: Corwin Press.

Hattie, J. A., & Donoghue, G. M. (2016). Learning strategies: A synthesis and conceptual model. npj Science of Learning, 1, 16013. doi:10.1038/npjscilearn.2016.13

Sadler, D. R. (2005). Interpretations of criteria‐based assessment and grading in higher education. Assessment & Evaluation in Higher Education, 30(2), 175-194.

Trevelyan, R., & Wilson, A. (2012). Using patchwork texts in assessment: clarifying and categorising choices in their use. Assessment & Evaluation in Higher Education, 37(4), 487-498.