I am a lecturer at Lambeth College, Vauxhall Centre, teaching the IMI Awards specification in Motorcycle Engineering, Maintenance and Repair. The IMI is the Sector Skills Council for the retail automotive industry and is also responsible for setting the national occupational standards for the sector. In my present role, I cater to a variety of learners of mixed age, gender, ethnicity and background seeking to achieve a Level 1, 2 or 3 Certificate or Diploma. My duties include interviews, enrolments, inductions, demonstrations, monitoring, assessments, certifications, and course and 1:1 tutorials. I have chosen a Level 3 group studying for a Diploma, as the group's diversity in culture, knowledge and receptiveness is conducive to testing the efficacy, range, types and appropriateness of their assessment programme, and to linking its effect to the core values and philosophies surrounding the principles of assessment methods.
Students who meet the criteria to be considered for a place on the course must undergo an interview based on a question-and-answer form and a preliminary test giving a general impression of their literacy and numeracy skills, followed, during post-enrolment induction, by a diagnostic test in maths, English, comprehension and ICT. Together, these should expose any obstacles or barriers the students might face in dealing with the required coursework, as skills in calculation and in sifting through term-heavy information are important to achievement in the subject. Furthermore, these tests give the students a glimpse of the level of background knowledge that will be expected of them, which might help them finalise their intentions before the interview or probation cycle is over. As mentioned by H.D. Brown (2001, p.391), the aim of an initial assessment is to place a prospective student at the appropriate level.
Most importantly, this form of assessment should indicate "the point at which the student will find a level or class to be…appropriately challenging". This suggests that if students were not challenged, they would neither be able to show their potential at a level nor display the errors that point to a wrong course placement. Placement assessments do not need as much detail and differ from the closely related diagnostic assessments. Diagnostic testing follows once the student is placed. This test is used as formative 'assessment for learning', in that it informs the student and tutor of current abilities, facilitates the development of schemes of work and alternative methods of testing, and as a result identifies possible avenues for differentiation. In essence, it is the first step in assessing learners' needs (Petty, 2009, p.532).
During portfolio development, the students are required to study units which are either knowledge- or skills-based and cover a specific topic on motorcycle systems. These two units create a topic 'set' and, unless a combination is specified in the qualification structure, the 'set' for a specific topic must be achieved through the collection of evidence, with a practical emphasis on a workshop- or workplace-based environment. This evidence constitutes the backbone of their formative and summative assessments. The greatest value of learning in the workplace is that it facilitates an integrated approach: the learner needs to bring understanding, skills and attitudes or values together in solving a problem or executing a task.
In order for the students to meet the General Evidence Requirements, they are presented with a list of learning outcomes stipulating what they are expected to know, understand and/or be able to do, and the criteria specifying the standards they must meet to show the learning outcomes have been achieved. The assessment of this evidence must show that they meet all of the performance objectives consistently and that they possess all the knowledge required. They must produce evidence resulting from work they have carried out, and been observed carrying out, on real vehicles in their normal workplace, in a realistic working environment managed and organised by the college, or a combination of both.
The evidence these Level 3 students must collect falls into three main types:
1. Performance Evidence, which may include records of observation, authenticated job sheets, emails, reports, minutes of meetings, audio and video recordings and witness testimonies from within the workplace.
2. Supplementary Evidence, which is used to support claims to competence and could include records of Q&A sessions, professional discussions, completed written tests and online exam results.
3. Historical Evidence, provided through prior learning, achievement or activity to support claims to competence, such as plans, reports, letters of validation from employers and certification from other sources.
Although records of learners' progress and achievements may take a variety of formats, they must be clear and concise and show unsuccessful assessments as well as achievements, providing a clear audit trail of where, when and how learners have met the criteria. The idea behind collecting such evidence is that it will show how the assessor or tutor:
•matched the awarding body's instruction to the needs of learners
•gave learners the opportunities to practise their skills
•monitored that learners understood, and adapted instruction as appropriate
•gave learners positive feedback on the learning experience and the outcome achieved
•identified anything that prevented learning and reviewed this with the learner
The students are given parameters as to how to gather, develop and present quality evidence, focusing on self-evaluation and assessment, such as:
•Is the evidence I am producing relevant to the unit(s) of competence I am claiming?
•Is the evidence true? Has it been authenticated?
•Can I show that my evidence and thinking at the time were right for the context in which I was operating?
•Have I produced enough evidence, drawn from different sources and in a range of formats?
•If asked, could I explain the key principles and methods behind what I have done?
•Have I laid out my evidence so that an assessor, even one with little knowledge of the technical aspects of my job, can easily see where and how I have met the standards described in the units of competence?
I believe that this sort of checklist can provide the students with evidence of validity and quality. The importance of this self-target setting lies in its vital role in developing self-esteem and in measuring the validity of an assessment. Race also mentions incremental application as a key principle in assessment. An important factor in any summative assessment should be that formative and progressive assessments can be embedded into the teaching leading up to it by creating targets. "In addition to measuring achievement against externally devised outcomes, we should also be concerned to confirm the achievement of targets, which our learners have internally identified for themselves" (Fawbert, 2008, p.221).
As an assessor, I am appointed by the centre and the awarding organisation to judge a student's gathered evidence against the national occupational standards, decide whether the candidate has demonstrated competence and ensure that my own assessment practice meets IMI Awards guidance and national standards. In the process, I have to make sure that the assessment and verification records and documents are fit for purpose and meet the awarding body's requirements. Although only some assessment results are needed to compile the portfolios for certification, these are summative and depict only retention ability, not necessarily gained knowledge or improved thinking processes. This is why a variety of assessment methods needs to be applied that will actually indicate what, how and when learning is taking place. An assessment can take many forms and "is not only a core activity which impinges on all other aspects of teaching and learning: it is also a means of promoting or denying learner achievement and autonomy" (Armitage et al, 2007, p.98).
Assessment methods employed throughout the course for all the units include:
•Direct observation of a student's work in progress. This is the preferred method of assessment, as it gives a clear picture of the student's practical abilities and application of knowledge in such a hands-on subject.
•Finished work inspections, where the work must be wholly or partially the result of work undertaken by the student.
•Oral questioning, a quick and revealing way of determining whether or not the student has the necessary knowledge behind the required performance.
•Examination of written material, such as written tests and essays, multiple-choice tests and competence/skills tests. The written assessments are in addition to the online tests; together they provide slightly varied approaches to ensuring critical aspects of knowledge are addressed.
•Personal statements, which offer the opportunity to explain in writing how a task was achieved. They are a way of demonstrating knowledge of terminology and method around a given task, and of the thinking process behind its completion.
•Professional discussions, which will usually focus on the reasons for selecting specific actions, the alternatives the student has considered, an evaluation of successes and failures, and learning points for the future.
In order to ensure inclusion, opportunity and diversity in the assessments, the awarding body must accept a wide range of qualifying evidence, as well as allowing comparison of results and traits for norm referencing. The majority of Level 3 units have been designed to be criterion-referenced (assessment criteria are listed on written assessments). By the nature of these units, acceptable answers may vary considerably.
It is expected that the assessor will take this variation into account when allocating marks. As Armitage notes, "To find out how effectively learning has taken place, we need to compare a performance, demonstration of skill, knowledge or ability with something else as a way of characterizing it". Norm referencing compares achievement with that of others in the particular group being assessed. Criterion referencing is used extensively for the written assignments of the IMI courses, and here "the correspondence is between a formative or summative result and an objective standard or criterion" (Armitage et al, 2007, p.148).
Assessments are a crucial teaching element in a student's further improvement and, in their many forms (formal, informal, product- or process-focused, initial, diagnostic), not only measure and record learning but can also be a contributing factor in motivating the student through results and feedback. Some of the reasons for including assessments in teaching are to:
•measure the relationship between the teacher's aims and the students' output
•test the progress of students
•diagnose particular weaknesses or highlight strengths
•provide feedback to learners, leading to future improvement
•predict future achievement
•estimate learners' current skills
•form part of a student's profile of abilities
•demonstrate to students that they have attained some goal or acquired some skill
•provide formal recognition for achievement and certification of learning
Hattie's research showed that formative assessments have the most positive impact on achievement, and that specific, constructive, quality feedback is an effective motivating tool that encourages achievement. Hattie states, "It is what teachers and students strategize, think about, play with, and build conceptions about worthwhile knowledge and understanding. Monitoring, assessing and evaluating the progress in this task is what then leads to the power of feedback" (2009, p.238). Therefore, the quality of feedback given by the assessor can be very effective: the student is motivated through interest in the subject, not for reward alone, and may be more likely to consider this feedback in their future approach to work.
There are four key principles that must be considered in assessment practice: validity, reliability, authenticity and practicality. Validity measures whether the assessment allows the criteria or learning outcomes to be met, and relates to practicality. All students are observed by the tutor, and evidence of this process is collected in each of their portfolios; in this way, the formative assessment covers the learning outcomes. Validity is closely linked with authenticity: the assigned practical task must be a valid example of a real-life scenario, set in as authentic an environment as possible.
The students are given a similar amount of time to that which they would have professionally. In these respects, the assessment is as authentic as it can be. Reliability in assessment refers to whether fairness is employed: "where validity is concerned with measuring what it is supposed to measure, reliability is about consistency of that measurement" (Armitage & Renwick, 2008, p.24). It is closely related to practicality, as an efficient, timely delivery of assessment must account for the variables that can threaten consistency. An assessment is reliable if two students of a similar level achieve similar results. However, different environmental and physical conditions can affect reliability; for example, variation in the timing of the assessment could affect memory. Petty points out that assessment "is only reliable if the criteria are well defined…Otherwise different markers will apply different standards, or the same markers may apply different standards on different days or to different candidates" (2009, p.480).
Some written assignments are questionable in terms of their reliability, because the students' portfolios may be permitted away from the classroom for completion. This means that the documentation of their process and research into fault diagnostics could be plagiarised; any assessment that is not monitored within class is questionable with regard to authorship and reliability. Considered against the learning advantages of investing in a process- and result-recording portfolio, however, this risk is acceptable. Teachers and students can both affect validity and reliability and, in fact, undermine the whole learning process. "Ensuring fairness to students in the assessment process also includes practical considerations about consistent practices among staff in relation to individual students."
I believe that the formative and summative assessments chosen and applied throughout the IMI Awards courses are generally successful models, in that they are inclusive, appropriate and effective. Through their range, a variety of oral and written feedback can be supplied to the students, as well as opportunities for self-evaluation and assessment. Of course, every type of assessment has advantages and disadvantages, but when it is combined with other types or methods, the disadvantages balance out and the assessments become more effective in their purpose. The greater the diversity in the methods of assessment, the fairer the assessment is to students.
On the subject of fairness, Brown, Race and Smith (2005, p.3) advise that "students should have equivalence of opportunities to succeed even if their experiences are not identical", giving another reason why a range of assessments is important. The variety of assessments employed during an IMI course permits tutors and students of different abilities and skills to evaluate their progress within the group's activities and to decide how they should improve upon it or maintain it at the group's level. A variety of feedback can be given to the students which, when delivered constructively and in a timely manner, will aid the learning process enormously, with beneficial results for both teachers and students.