Our office is closed until 6 January 2025. You can still enrol online and send us enquiries, and we will activate enrolments and respond to enquiries in the New Year! From our Fortress family to yours, have a Merry Christmas and a Happy New Year!
We have decided to take a look at what experienced training professionals do, and compare that with what the TAE50116 Dip VET says they should be doing.
The knowledge and skills required to function competently within the VET sector are packaged in the TAE Training Package. While the near-ubiquitous Certificate IV in Training & Assessment has become accepted as an entry-level qualification, the Diploma of Vocational Education & Training is rapidly being recognised as essential for anyone involved in the administration, management or leadership of RTOs.
As with all qualifications, the Dip VET is limited by its Packaging Rules. Of the 10 units in total, 6 are Core. From this, we can confidently presume that those 6 units contain the essence of a productively functioning VET professional.
That is the idea, anyway.
The reality is, however, that not all people who are in positions that align with the requirements of the Dip VET’s core units are competent in all of those requirements. Indeed, there may well be considerable gaps between what the Qualification prescribes, and what practitioners actually know and do.
Understanding these gaps is key to identifying where the VET sector's unknown unknowns lie: the places where professional trainers and assessors are performing their duties in a manner inconsistent with how their own industry wants them performed. Given that the Dip VET must be held in order to deliver the more entry-level Cert IV TAE, there is great benefit in identifying these gaps, if for no other reason than to highlight where the preparation of the industry's future trainers may be falling short. More broadly, this identification will also inform the operations of RTOs, so that the currently unknown unknowns are made known.
What is Fortress Learning investigating?
In simple terms, we are looking at the RPL outcomes for Dip VET students.
Because RPL is sought by people who believe they already possess the knowledge and skills a qualification requires, we can safely assume that people pursuing RPL for the Dip VET's core units believe they are competent in those units. By extension, this also means they believe they are doing their jobs correctly.
For each of the core units, we are calculating the proportion of performance benchmarks that RPL applicants satisfactorily demonstrate. We are using a sample size of 50 students for each unit.
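The calculation behind these figures can be sketched as follows. This is a minimal, hypothetical illustration, not our actual analysis code: it assumes each candidate's record for a unit is simply a list of pass/fail flags, one per benchmark.

```python
# Hypothetical sketch of the per-unit statistics. Each candidate's record is a
# list of booleans, one per benchmark (True = satisfactorily demonstrated).

def performance_summary(candidates):
    """Average percentage of Performance Evidence benchmarks met, plus the range."""
    # Percentage of benchmarks each candidate demonstrated satisfactorily.
    pcts = [100 * sum(flags) / len(flags) for flags in candidates]
    avg = sum(pcts) / len(pcts)
    return avg, min(pcts), max(pcts)

def knowledge_all_met(candidates):
    """Percentage of candidates who demonstrated every Knowledge Evidence benchmark."""
    met_all = sum(1 for flags in candidates if all(flags))
    return 100 * met_all / len(candidates)

# Example with two made-up candidates, each assessed against four benchmarks:
sample = [[True, True, False, False], [True, True, True, True]]
avg, lowest, highest = performance_summary(sample)   # 75.0, 50.0, 100.0
pct_all = knowledge_all_met(sample)                  # 50.0
```

In our study the same summary is produced for each unit over its 50 sampled candidates.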
What have we found so far?
It is early days with the data analysis, but we are starting to collect some descriptive data that we can share. These data relate to three units:
- TAEASS501 Provide advanced assessment practice
- TAEASS502 Design and develop assessment tools
- TAEDEL502 Provide advanced facilitation practice
The following table summarises the assessment outcomes in the above units. For Performance Evidence, the figures represent the average percentage of benchmarks satisfactorily demonstrated, and the range, across all 50 sampled candidates. For Knowledge Evidence, the figures represent the percentage of candidates who satisfactorily demonstrated all the benchmarks.

Unit      | Performance Evidence: average (range) | Knowledge Evidence: all benchmarks met
TAEASS501 | 56% (24-90%)                          | 82%
TAEASS502 | 76% (54-100%)                         | 78%
TAEDEL502 | 80% (66-100%)                         | 94%
For the unit TAEASS501, across the 50 RPL candidates whose submissions were sampled, the average percentage of Performance Evidence benchmarks demonstrated satisfactorily was 56%. The range among these 50 candidates was 24-90%, meaning that the strongest performer was a candidate who demonstrated 90% of the required performance benchmarks. In the Knowledge Evidence area, 82% of candidates satisfactorily demonstrated all the required benchmarks.
For the unit TAEASS502 (now also a core unit in TAE40116), the average across the 50 candidates was 76% of Performance Evidence benchmarks demonstrated satisfactorily, with a range of 54-100%. In this unit, 78% of candidates demonstrated all the required Knowledge Evidence benchmarks.
For the unit TAEDEL502, the average percentage of Performance Evidence benchmarks demonstrated satisfactorily was 80%, with a range of 66-100%. In this unit, 94% of candidates demonstrated all the required Knowledge Evidence benchmarks.
Discussion
While little in the way of conclusions can be drawn at this stage, candidate performance clearly differs across the three units. For Performance Evidence, the unit averages span 24 percentage points, from 56% to 80% of benchmarks demonstrated satisfactorily. For Knowledge Evidence, the gap across units narrows to 16 percentage points, ranging from 78% to 94%.
Of interest, perhaps, is that these data suggest stronger knowledge in TAEASS501 than in TAEASS502 (82% vs 78%), whereas the pattern for performance is reversed (56% vs 76%). More broadly, RPL candidates appear generally stronger in the DEL unit than in the ASS units.