Critically evaluate an assessment process
The assessment system being evaluated is Competency Based Training (CBT), which is used by my colleagues and me in the Automotive Trade section. This report describes the processes implemented in my section and the issues that affect how assessment is conducted. A critique, along with personal recommendations that may enhance the process, is also included.
Assessment can be defined as “the process of collecting evidence and making judgments on whether competency has been achieved, or whether specific skills and knowledge have been achieved that will lead to the attainment of competence”. This appears to be a simple credo, yet all too often it fails either the candidate or the industry they represent. To trace the events leading up to assessment, we must first look at the source. Industry identifies a need in the workforce and, in conjunction with the Australian National Training Authority (ANTA), develops a training package, which may be described as the “bare bones” of what is required to be achieved.
The automotive sector adheres to the training package AUR99. TAFE unpacks this training package so that staff can retrieve a translation, in the form of units of competency, contained in the syllabus for a particular module. This is available to staff on the Course Information Documents Online (CIDO) website. The units of competency can be further broken down into elements of competency, which are translated into greater detail through the performance criteria from which the learning outcomes are derived.
An example of a module taught in Automotive is “Electrical system minor repairs”. This module addresses the unit of competency AUR18708A, retrieved from the National Training Information Service (NTIS), and is undertaken by stage one auto-electricians, light vehicle mechanics, and heavy vehicle mechanics. Its purpose is to provide these trainees and apprentices with the knowledge and skills to carry out minor repairs on automotive electrical circuits and systems.
In order to assess a candidate effectively, there are basic principles of assessment that must be adhered to. This report defines each principle below and evaluates the above module against it, along with observed critiques.
Validity is the process of ensuring that an assessment task actually assesses what it was designed to assess.
The module's assessment is valid in that it addresses the outcomes, and learners must successfully complete both theory and practical tasks. There is a question of age in the resources used in the section, but as technology in this area has changed little, I believe industry validity is current. Having said this, the inclusion of practical assessments creates its own problems: availability of sufficient resources, OH&S concerns, and so on. Because of the financial constraint of having fifteen cars and their equipment available at the one time, it is normal to break the class into three groups, with one group doing a theory test in the classroom and the other two doing different practical tasks in separate parts of the workshop. Apart from the obvious OH&S issues, there is also the question of how valid an assessment can be when the students doing the theory can copy from each other, while it is impossible for the assessor to view the whole of the practical task for all ten people in the workshop.
Reliability. This principle refers to the consistency of an assessment outcome regardless of varying locations, time and assessors.
Although the definition is idealistic, this principle seems to be lacking in many colleges. Having taught this module in the light vehicle and auto-electrical sections, and having viewed the heavy vehicle assessments, I have noted that, even setting aside the human element, each section constructs its own test questions and practical resources without consulting the others, even though at my college these three sections are next door to each other. The outcomes of these assessments are therefore unreliable.
Flexibility allows candidates to negotiate time and place with their assessor, and should make allowances for reasonable adjustment in the case of obstacles, such as disability or literacy issues, that may inhibit a candidate's ability to achieve a competency.
In teaching this module among others, I have noticed that although the government prides itself on flexibility, it does not occur to any great extent in the automotive trades. If a practical task is being assessed and a candidate's level of literacy could hinder successful completion of the competency, the education department does have provision for translators or scribes; to my knowledge, however, no student is able to negotiate a time and place for their assessment that suits them better.
Fairness is a process to ensure that no one is disadvantaged during assessment. The candidates must be fully informed about assessment opportunities and be confident that there are no hidden agendas. The assessment must be accessible to all eligible students regardless of age, gender, disability, race, social background, language or geographic location.
I believe no one is specifically disadvantaged in the above categories, except for location. Although students are forewarned when an assessment is taking place, if a student travelling from far away experiences difficulties on the trip due to traffic, for instance, there is no flexibility with regard to starting later. The next time they are able to sit the assessment may be up to six months away.
Authenticity refers to the fact that the evidence used to make an assessment is the student’s own product or performance.
This is achievable in this module, and indeed in most of the automotive modules, as there is rarely homework or a project for students to complete outside college grounds; students should therefore be under constant observation by their teacher, whether in a quiet room for a theory exam or during practical observation in the workshop.
In CBT, there are four types of assessment to be used. They are as follows:
Diagnostic. This type of assessment is used to assist in identifying educational or training needs and determine if a candidate is ready to undertake the desired course. This would include a numeracy, literacy and language test, often called a screen test.
This type of testing does occur at the beginning of the trade course certificate III, but not at the beginning of each module. If a student is identified as requiring extra assistance in these areas, we will provide tutorial support.
Formative assessment is done during the learning process and provides feedback about the student's progress towards competency.
Although I have seen no official policy in place for formative assessments, it is up to the teacher to provide this in good faith. I personally give students formative assessments to keep them steered in the right direction.
Summative assessment is more formal and is conducted at the end of a module to determine whether the candidate has successfully achieved the outcomes of the competency.
This process is always undertaken in my section at work.
Holistic assessment brings together the three domains of learning (cognitive, affective and psychomotor), attempting to include technical hands-on skills, problem solving and ethical attitudes in the assessment event. This is normally more complex and requires more than a single assessment tool; ideally it draws on various formats such as written tests, oral questioning, and direct visual evidence gathered by an assessor.
I would once again question whether this occurs, due to financial constraints: all of the students' assessment tasks, both practical and theoretical, are marked, but gathering individual direct evidence is costly and time consuming.
One fundamental flaw between the nationally endorsed training package from ANTA and the translated syllabus from CIDO is the issue of grading within a competency-based assessment, which I believe is contentious. The words “competency-based” imply that the candidate should be judged as either “competent” or “not yet competent”. This leaves no room for high achievers, as these people are encouraged, through lack of recognition, to put in only enough effort to be deemed competent.
Most learners who have put a lot of effort into a piece of work do not wish to see simply a ‘satisfactory’ label on their results. The system is contradictory and unclear on this issue: it is modelled on CBT but chooses to grade some modules, such as this one, while conducting others in an ungraded, pass-or-fail format. As most automotive trade tests are Category C, grade code 71, the section is allowed to locally set and mark its own assessments.
Grading is as follows:
Distinction: 83% and above
Credit: 70% and above
Pass: 50% and above
Fail: below 50%
It interests me that 50% is the pass mark. What determined this? Should it be 60% or 70%, or why not 40%? In most cases, in my view, 50% seems far too low.
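The grade boundaries above amount to a simple threshold lookup. As a minimal sketch (the function name and structure are my own illustration, not part of any TAFE system):

```python
def grade(mark: float) -> str:
    """Map a percentage mark to a grade under the Category C scheme above."""
    if mark >= 83:
        return "Distinction"
    if mark >= 70:
        return "Credit"
    if mark >= 50:
        return "Pass"
    return "Fail"
```

Writing it out this way makes the arbitrariness of the cut-offs plain: moving the final threshold from 50 to 60 changes only one number, yet shifts the standard every apprentice must meet.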
Work Evidence Modules (WEMs) also play a part in assessment: over the course of their apprenticeship, students must gather work evidence to be signed by their employer, validating that they have completed authentic practical tasks. I have seen flaws in this system. Some employers simply sign WEMs books en masse without scrutiny; I have seen various trade tasks signed off in apprentices' books where I am well aware that their particular workshop does not undertake certain facets of the trade, leaving the system open to corruption.
Students' workbooks are also used in module assessments, and may count for up to 50% of the mark. There is inconsistency in marking workbooks, with some teachers giving half marks and others applying a ‘black and white’, all-or-nothing rule. This means that whose class an apprentice ends up in can decide the level of his or her grade.
In the case of assessment validity, I would recommend that two staff be made available to ‘team teach’ during assessment events in order to view all facets of each task.
In the case of reliability, where the same subject is taught across different trade sections, there should be a policy in place for the exact same test paper to be used by all, so that, for example, mechanics are not given a less challenging electrical theory test than auto-electricians, as currently occurs in this autonomous climate.
I recommend a change in local policy so that students can negotiate, within reason, an assessment time and place, once again with a second staff member available to supervise.
Fairness could be further enhanced for students who have to travel, and this issue would be largely addressed if coupled with the flexibility recommendation.
In the grading debate, we must decide on which side of the fence we stand: CBT ungraded, or graded. I prefer grading students on their merit, because I believe some are more competent than others. However, provided we took a stand on one side and stuck to it, I would accept either graded or ungraded, but not both throughout the course.
WEMs books should be eliminated, as I believe they will be in the not too distant future. They do not work, and we produced fine tradespeople in the past without them.
As the above review and personal recommendations make apparent, there are flaws in our assessment processes, but this is part and parcel of being an educator. So long as we remain reflective and continue to consult both industry and practitioners, we will further fine-tune our system to better serve future students, who in this day and age are the customer.
Australian National Training Authority. (n.d.). Retrieved Aug 14, 2004, from http://www.anta.gov.au/
Certificate IV in assessment and workplace training. (2003). NSW., Australia: TAFE.
CIDO. (n.d.). Retrieved Aug 14, 2004, from https://www.det.nsw.edu.au/cgi-forte/fortecgi.exe?servicename=CDO&TemplateName=cdo_logon.html
Everyone’s guide to assessment. (2004). (2nd ed.). Darlinghurst, NSW., Australia: TAFE and NSW DET.
National Training Information Service. (n.d.). Retrieved Aug 14, 2004, from http://www.ntis.gov.au/
NCVER. (2002). Research at a Glance. Competency Based Training in Australia, pp. 159-166.