We have talked about why we do assessment validation in another post. We have also talked about how to validate assessment tools according to the requirements set in Clause 1.8 of the Standards. Validation can be further divided into two processes, each of which addresses a different section of Clause 1.8.
As you can see in the figure above, the first half of the clause can be met by pre-assessment validation. This process checks your RTO’s tools, strategies, and processes to make sure they meet training package requirements along with individual unit requirements.
On the other hand, the second half of the clause can be met by post assessment validation. This process reviews how your assessors judged completed assessments. It helps you make sure that your RTO conducts assessments according to the Principles of Assessment and the Rules of Evidence.
While both processes are required by the Standards, let’s take a closer look at post assessment validation.
Step 1: Do a risk assessment
To determine what your RTO needs to validate, conduct a risk assessment of your training products.
The risk level of each training product you deliver will depend on its risk indicators. For example, if you get a lot of complaints about your assessments for a certain unit, you could consider that unit “high risk”, meaning it would need to be validated sooner rather than later. On the other hand, if you have been delivering a unit for a while now and the feedback has been positive, you could consider that unit “low risk”.
The number of enrolments also affects the risk level of a training product. More enrolments mean higher risk, because changes to the training product would affect more students. In contrast, fewer enrolments mean lower risk, because changes to the training product would affect fewer students.
Risk can also be determined by the nature of the competencies themselves. For example, units that deal with safety (e.g. WHS, handling of hazardous materials, etc.) should always be treated as “high risk”. Units that deal with caring for children or the elderly also carry a higher risk than others. Assessing a student accurately becomes even more crucial for these kinds of training products, because they involve the safety and wellbeing of other people.
Doing a risk assessment is important because it tells you what training products to prioritise in your validation schedule, and which specific units you need to validate. It also tells you how many completed assessments you need to validate.
Step 2: Calculate a statistically valid sample size.
It’s not practical to validate every assessment submission. Unless your risk assessment tells you otherwise, you only need to validate a statistically valid sample of completed assessments to be compliant.
Here are some things you need to consider when calculating a statistically valid sample size of completed assessments:
• Margin of error – This is how much the results from your sample are allowed to differ from what you would find if you validated every completed assessment. If your assessment outcomes tend to be consistent, your sample size matters less, which means you can accept a greater margin of error. Conversely, if your assessment outcomes vary often, your sample size matters more, which means you need a smaller margin of error.
• Confidence level – This is how sure you need to be that the validation outcome from your sample accurately reflects all assessment judgements. If, for example, only a few assessors conduct assessments for the same unit, you can be fairly certain that their judgements will be more consistent than if many assessors were doing the same thing.
Other factors like delivery methods (face-to-face, online, apprenticeship, etc.) and learner cohorts can also affect the consistency of your assessment judgements. The fewer of these variables you have, the more consistent your judgements are likely to be, and the smaller the sample you will need.
• Population size – This is the total from which your sample will be drawn. To get the population size, count the total number of assessment judgements made in the unit you are validating within a set period of time (usually about 6 months).
• Response distribution – This is how much you expect your sample to lean toward a certain response. You can minimise skew by selecting the assessments to be validated at random.
There are different ways to make your selection random. For example, you can list your population alphabetically and then select every odd- or even-numbered entry on the list. You can also list them in an electronic spreadsheet (like Microsoft Excel) and use a randomisation function to help you make your selection.
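The spreadsheet approach above can equally be done in a few lines of code. Here is a minimal sketch, assuming a hypothetical list of assessment IDs and an illustrative sample size of 36 (your actual sample size comes from the calculation in this step):

```python
import random

# Hypothetical IDs for every assessment judgement in the validation period
population = [f"ASSESS-{i:03d}" for i in range(1, 201)]  # 200 judgements

# Draw a simple random sample without replacement: every judgement has
# an equal chance of being selected, which keeps the selection unbiased
sample = random.sample(population, k=36)
```

Because `random.sample` draws without replacement, no assessment can appear in the sample twice.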
You don’t have to do the maths yourself: a number of statistically valid sample size calculators can “do the math” for you. Raosoft’s calculator, in particular, is recommended by ASQA for calculating the number of assessments you need to validate.
• Note that the default value for Raosoft’s margin of error is 5%. This will result in a much larger sample size than you may need, especially if you have fairly consistent assessment outcomes. Unless your risk assessment demands otherwise, we recommend you change the value to 15%.
• The widely accepted value for the confidence level is 95%. You can increase this value to 99%, but keep in mind that doing so will also increase your sample size.
If you have properly randomised your selection, you can keep the response distribution at 50%.
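The four factors above combine in a standard way: Cochran’s sample size formula with a finite population correction, which is the same kind of calculation Raosoft-style calculators perform. Here is a minimal Python sketch using the values recommended above; the function name and the example population of 200 judgements are illustrative:

```python
import math
from statistics import NormalDist

def sample_size(population: int, confidence: float = 0.95,
                margin_of_error: float = 0.15, distribution: float = 0.5) -> int:
    """Cochran's formula with a finite population correction."""
    # z-score for the chosen confidence level (e.g. about 1.96 for 95%)
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # sample size for an effectively infinite population
    n0 = (z ** 2) * distribution * (1 - distribution) / margin_of_error ** 2
    # correct for the actual (finite) number of assessment judgements
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# e.g. 200 assessment judgements made over the last 6 months
print(sample_size(200))  # 36 assessments to validate
```

For the same 200 judgements, Raosoft’s default 5% margin of error would call for a sample of 132, which illustrates why relaxing the margin to 15% makes validation far more manageable.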
Step 3: Conduct assessment validation.
To conduct post assessment validation, you need to look at three things in an assessment submission.
1. Student Response. Does the student’s response answer the question being asked? Or, if the assessment item is a performance task, does the student perform the task as required?
2. Marking Guide. Did the assessor follow the marking guide? Do the benchmark answers allow enough flexibility in the student’s response?
3. Assessor Judgement. Consider each assessment item. Do you agree with the assessor’s feedback? Do you agree with how the student’s response was marked? Do you think the assessor made the correct judgement?
If you don’t agree with an assessor’s judgement, explain why you disagree and how you think the assessment should have been judged instead. Remember to base your own judgement on the Principles of Assessment and the Rules of Evidence. Discuss what the assessor’s judgement may mean for the validity and reliability of your assessment methods. If corrective actions need to be taken, be sure to take note of them.
Step 4: Finalise observations and offer recommendations.
You can conclude the post assessment validation process with a report that details your findings. If you have found any errors with the assessment tool, add suggestions on how to correct them in your report. You may even devote a whole section of your report to suggested rectifications.
Based on the issues you identified in your sample, conduct another risk assessment to capture any additional risks your validation has exposed. For example, if a number of assessments contained invalid evidence but the students who submitted them were still marked competent, then those students were assessed incorrectly and should not have been issued certificates. What does it mean for your RTO if you already have? Consider the risks carefully, and note the corrective actions you will need to take.
ASQA requires all RTOs to maintain evidence of having conducted assessment validation. Keeping your post assessment validation reports can be handy during an audit, because they serve as evidence that you comply with the Standards’ requirements for validation.