Having a hard time designing your assessment tools? Make sure your questions are effective in measuring how much a student has actually learned.
Below are some common problems found in assessment tools. If you find any of these in your own assessment tools, you need to rectify them immediately.
Clearly mark each problem you find so that it can be easily identified. You may also give recommendations on how to fix the assessment tools.
As we have discussed in another post, assessment tools need to pass two key requirements:
• compliance with the assessment requirements of the relevant training package or VET accredited course;
• compliance with the Principles of Assessment and the Rules of Evidence.
The first requirement varies depending on the skills and knowledge covered by the training package. The second requirement, however, does not. Clause 1.8 of the Standards outlines the four Principles of Assessment and the four Rules of Evidence. These are guidelines that all assessments must follow.
Below are some guide questions that can help you with the Principles of Assessment and Rules of Evidence. You can read more about these requirements on ASQA’s website.
Principles of Assessment
• Fairness – Are the individual learner’s needs considered? Does the candidate require reasonable adjustment?
• Flexibility – Does the assessment tool reflect the learner’s needs? Are a range of assessment methods used? Are competencies held by the candidate acquired from other sources or RTOs considered?
• Validity – Are the assessment decisions of the RTO justified, based on the evidence of performance of the candidate?
• Reliability – Are the pieces of evidence presented for assessment consistently interpreted, no matter which of the RTO’s qualified assessors conducts the assessment?
Rules of Evidence
• Validity – Does the evidence show that the candidate has the skills, knowledge and attributes as described in the module or unit of competency and associated assessment requirements?
• Sufficiency – Does the assessment tool elicit quality, relevant evidence that was produced enough times to prove the candidate’s competency?
• Authenticity – Does the assessment tool have proof that the work is the candidate’s own?
• Currency – Are the assessment tools based on current units of competency and up-to-date industry practice?
Problems with assessment tools usually happen when these tools fail to address either one (or both) of these requirements. Let’s take a look at some of these common errors.
Unaddressed Unit of Competency Requirements
One of the most common problems with assessment tools is that they often fail to address every component of a unit requirement. To avoid this common mistake, follow these four easy steps.
Step 1: Make sure the verb is performed
For example, let’s say your RTO delivers assessments for MSFFM3005 – Fabricate Custom Furniture. One of its performance criteria (PC) states: “Doors, drawers and shelves are assembled and fitted.”
Because this is a performance requirement, your assessment must make sure that the student can demonstrate the assembling and fitting of doors, drawers, and shelves.
Step 2: Make sure to address the required quantities
The PC states: “Doors, drawers and shelves are assembled and fitted.”
This means that your assessment must make sure that the student can assemble and fit more than one door, drawer, and shelf.
Step 3: Make sure all components of a requirement are addressed
Your assessment must be able to address the PC’s three components:
• doors
• drawers
• and shelves
Step 4: Make sure your assessment has specific benchmarks
Each assessment item must have clear and specific benchmark answers that can guide the assessor’s judgement.
Vague, Ambiguous Instructions
Vague, ambiguous instructions can confuse students and keep them from providing proper evidence in a knowledge or skill assessment. An assessment that cannot draw proper evidence from a student is not valid and, therefore, violates the Rules of Evidence.
Vague instructions can also make assessment benchmarks unreliable. If a marking guide, for example, has too much room for interpretation, different assessors may provide different outcomes for the same assessment item. This would cause your assessment to violate the Principles of Assessment.
Make sure the instructions for your assessment are as specific as possible. Make sure your marking guides allow very little room for interpretation. If an assessment item calls for multiple answers, specify the number of answers a student needs to provide. This way, your assessment can be conducted reliably, and the evidence you collect will be valid.
Double-Barreled Questions
A double-barreled question asks the student for more than one answer at the same time. This can confuse both the student and the assessor, and it violates the Rules of Evidence by making the assessment item invalid. Try splitting a double-barreled question into two or more questions, so that each question asks for only one answer.
Underpinning Knowledge Only
A unit’s performance requirements (i.e. performance criteria and performance evidence) must be demonstrated through performance. Asking the candidate how they would perform, or how they had previously performed, a skill does not qualify as valid evidence under Clause 1.8 of the Standards. Questions like these only assess a student’s underpinning knowledge.
You must either add or modify instructions so that the student demonstrates the performance requirements.
No Benchmarks or Insufficient Benchmarks
Each assessment item must have valid benchmarks that a student must meet. Benchmarks can be derived from observable actions that characterise good performance, according to a particular industry’s standards and best practices.
For items that require a specific answer, the benchmarks must indicate the answers the student must provide. For items that require a level of interpretation, however, the benchmarks must outline the criteria the student’s output must meet before it can be considered satisfactory.
Without benchmarks, there is no standard measure for determining a student’s competence in a skill or area of knowledge. This renders your assessment unreliable, which violates the Principles of Assessment.
Add benchmark answers to assessment items that lack them. Likewise, vague or insufficient benchmarks in the marking guide must be replaced with clear and specific ones.
In addition, quantify benchmarks whenever possible. This will not only make them more specific; it will also address sufficiency in assessments.
Why is it important to address these problems?
Addressing these common problems early can do two things:
• make it easier for students to answer assessments properly, and to provide proper evidence;
• make it easier for assessors to judge a student’s competence accurately.
In addition, addressing these problems will align your assessments with a unit of competency’s requirements. It will also minimise the risk of violating the Principles of Assessment and the Rules of Evidence.
For enquiries about validation and rectification services, please contact us through:
1300 931 604 or firstname.lastname@example.org