Assessing Writing Skills in a Communicative Paradigm



Communicative Language Testing is intended to assess learners’ ability to use the target language in real-life situations. It is now ten years since Communicative Language Teaching (CLT) was introduced in the secondary English curriculum of Bangladesh. Therefore, the test of English at the SSC level now faces the challenge of assessing learners’ communicative skills. This study looks at the existing model of the SSC English test and explores the possibilities of incorporating a more communicatively based test format. The study is carried out on the basis of an evaluation of the test items on writing skills set in the SSC test papers.

It also explores the views of Bangladeshi secondary English teachers and internationally renowned language testing experts. In this paper, it is argued that, though secondary English education in Bangladesh entered a communicative era a decade ago, the current SSC test is not in accordance with the curriculum objectives. It is found that the test items on writing lack both validity and reliability. Suggestions made for improving the current SSC test include: defining the purpose of communication in English for SSC-level learners, drafting test specifications, setting test items which are more relevant to a communicative purpose, and developing a marking scheme for marking the subjective items.


The concept of Communicative Language Teaching (CLT) has had much influence in the fields of English language teaching, curriculum and test design. Since the 1970s, there have been considerable developments in the area of language testing. Various theories and practical testing models have evolved following the concept of communicative competence. Bangladesh has introduced a communicative English curriculum in its secondary education sector.

However, the aims and objectives of the communicative curriculum can never be achieved without a testing system that assesses the communicative ability of learners. This paper looks at the existing Secondary School Certificate (SSC) English examination to identify the elements of communicative testing in it and examines the suitability of this testing system to the curriculum goals. The study involves a critical analysis of the current SSC test. It also explores the views of Bangladeshi secondary English teachers and two internationally renowned language testing experts on the SSC test and investigates the ways of making it more communicatively based.

Background of English Language Teaching (ELT) in Bangladesh

The teaching of English in Bangladesh has a long history that traces back to the colonial era. British models of teaching English continued to shape the ELT scenario of post-colonial Bengal even after colonial rule ended in 1947, and the grammar-translation method remained the dominant teaching approach in the Indian subcontinent. After the independence of Bangladesh (1971), several attempts were made to redesign the ELT sector, with little or no success.

In 1990, a four-year ELT project called Orientation of Secondary School Teachers for Teaching English in Bangladesh (OSSTTEB) was jointly launched by the Government of Bangladesh and DFID, UK, to improve English language teaching and learning at the secondary level. This project revised, adapted and revamped the secondary English curriculum (Hoque, 1999). In 1997, a major step was taken with the introduction of the English Language Teaching Improvement Project (ELTIP). The project started working with a view to improving the communicative competence of secondary-level learners. Under this project, a communicative curriculum, revised textbooks and newly written Teachers’ Guides (TGs) were developed, and some 30,000 English teachers, test administrators and markers were trained.

The SSC examination

The SSC is the first public examination in Bangladesh that learners sit for, after 10 years of schooling. Students take English as a compulsory subject at this level. The examination is administered countrywide through the seven Boards of Intermediate and Secondary Education (BISE). The question papers are set by the respective BISE independently, following the national curriculum and syllabus of the National Curriculum and Textbook Board (NCTB). The syllabus document of the NCTB explicitly recommends a testing system that is in keeping with the spirit of CLT. The new syllabus document for classes 9-10 (NCTB 1999: 135) mentions, “Until and unless a suitable public examination is devised that tests English language skills rather than students’ ability to memorise and copy without understanding, the aims and objectives of the syllabus can never be realised.” Moreover, samples of question papers were provided in the TGs, and teachers were encouraged to follow these test models.

Research Questions

This study is concerned with the following research questions:

1. How are students’ writing skills tested by the existing SSC English examinations?

2. To what extent are these test items communicatively based?

3. What do Bangladeshi teachers and the international testing experts think of the current SSC English examination?

4. How can the SSC examination be improved to reflect the goals stated in the national curriculum and syllabus document?

Research methodology

The approach to this research belongs to the interpretative epistemology, which argues that knowledge in social research is concerned not with generalisation, prediction and control but with interpretation, meaning and illumination (Usher, 1996: 12). The approach here is guided by the belief that reality is a complex phenomenon which does not admit of orderly events or simple cause-effect relationships. The data used are concerned not only with facts but also with values.

In looking at a testing system which is comparatively new in the context of Bangladesh, it is admitted that reality is a human construct. The aim here is to explore perspectives and shared meanings (Wellington, 2000: 16) and the data used here is qualitative.

The research procedure uses three different sources for collecting data and involves three steps: a) a critical evaluation of the SSC English test format, b) collecting the views of Bangladeshi English teachers through questionnaires, and c) interviewing two Australian testing experts based at Melbourne University. The evaluation of the SSC examination includes a close analysis of the existing SSC test papers, syllabus document and marking criteria. The questionnaire attempts to explore the values and attitudes of secondary English teachers in relation to the SSC English testing system. The interviews with the language testing experts are intended to generate valuable ideas that could be applicable in improving the SSC testing system.

The development of modern language testing

The development of modern language testing occurred in three historical phases prior to and during the 1970s. These three periods are the pre-scientific era, the psychometric-structuralist era and the integrative-sociolinguistic era (Spolsky, 1978: 5). According to Spolsky, the pre-scientific era was characterised by a lack of concern for statistical matters or for such notions as objectivity and reliability in language testing, whereas the psychometric-structuralist period was concerned with discrete-item tests. In fact, the psychometric-structuralist approach provided the basis for the flourishing of the standardised language test, with its emphasis on discrete structure-point items. However, discrete-point tests were also criticised as insufficient indicators of language proficiency (Oller, 1979: 212). Language testing turned to global tests in the 1970s, which opened up the psycholinguistic-sociolinguistic era (Weir, 1988: 3). This format of global and integrative tests (such as cloze) gained theoretical support from many researchers.

Davies distinguishes four important types of language tests on the basis of their function or use: achievement tests, proficiency tests, aptitude tests and diagnostic tests (Davies, 1977: 46-7). While achievement tests are concerned with assessing what has been learned of a known syllabus, proficiency tests assess the learning of either a known or unknown syllabus.

The concept of communicative competence

The idea of communicative language teaching emerged in the 1970s following Hymes’ theory of communicative competence, which greatly emphasised learners’ ability to use language in context, particularly in terms of the social demands of performance (McNamara, 2000: 116). Hymes believes that knowing a language is more than knowing its rules. Once Hymes proposed the idea of communicative competence, it was expanded in various ways during the following two decades. The term competence was interpreted in many different ways by researchers. To some it simply means the ability to ‘communicate’; to others it means the social rules of language use; and to yet others, it refers to a set of abilities including knowledge of linguistic, sociolinguistic and discourse rules (Bachman & Palmer, 1984: 34). However, the basic idea of communicative competence remains the ability to use language appropriately, both receptively and productively, in real situations (Kitao et al., 1996: 1).

The development of communicative language testing

The idea of communicative testing was developed on the basis of Hymes’ two-dimensional model of communicative competence, which comprises a linguistic and a sociolinguistic element. Davies et al. give the following definition of communicative language tests:

Communicative tests are tests of communicative skills, typically used in contradistinction to tests of grammatical knowledge. Such tests often claim to operationalise theories of communicative competence, although the form they take will depend on which dimension they choose to emphasise, be it specificity to context, authenticity of materials or the simulation of real life performance.  (Davies et al. 1999: 26)

Harrison mentions three ingredients which distinguish a communicative language test from other tests. He argues:

1. A communicative test should assess language used for a purpose beyond itself.

2. A communicative test should depend on the bridging of an information gap. It has to propose a language-using purpose which can be fulfilled by the communicative skill so far acquired by the learners.

3. A communicative test should represent an encounter. The situation at the end of it should be different from what it was at the beginning, and this means that there has to be some sequence within the test.

(Harrison, 1983: 77-8)

Competence vs. performance

There have been debates among researchers regarding the nature and function of communicative tests. One issue of controversy was how to specify the components of communicative competence and relate them in measuring performance. Another complication arose because the terms ‘competence’ and ‘performance’ were used differently by various researchers, suggesting important distinctions between them. Chomsky (1965) claimed that ‘competence’ refers to the linguistic system which an ideal native speaker has internalised, whereas ‘performance’ is mainly concerned with the psychological factors involved in the perception and production of speech.

Later Hymes (1972) explicitly, and Campbell and Wales (1970) implicitly proposed a broader notion of communicative competence in which they included grammatical competence as well as contextual or sociolinguistic competence. They, however, adopted the distinction between communicative ‘competence’ and ‘performance’. According to Canale and Swain (1980: 3) ‘competence’ refers to knowledge of grammar and other aspects of language while ‘performance’ refers to actual use.

For language testing researchers it was difficult to determine an ideal test model which could be valid and reliable enough to test communicative competence. They were concerned with what performance tasks need to be devised in order to assess learners’ communicative competence. The most discussed answer to this query is the one provided by Canale and Swain (1980), who, in their influential work ‘Theoretical Bases of Communicative Approaches to Second Language Teaching and Testing’, specified four aspects of knowledge or competence: grammatical competence, sociolinguistic competence, strategic competence and discourse competence.

What makes good communicative tests?

Though a communicative language test intends to measure how students use language in real life, it is difficult to set a task that can measure communicative competence in real contexts. Ellison (2001: 44) argues that testing by its very nature is artificial, and unless we are to follow an examinee around all the time noting how he/she deals with the target language in all situations, we necessarily have a less-than-real situation. However, it should be the aim of the test setter to approximate real situations as much as possible. Referring to the difficulty of identifying the elements of communicative testing, Morrow (1991) states:

The essential question which a communicative test must answer is whether or not (or how well) a candidate can use language to communicate meanings. But ‘communicate meanings’ is a very elusive criterion indeed on which to base judgment.

(Morrow, 1991: 112)

There have been attempts to develop a model of communicative competence and valid tests of its components. Bachman and Palmer (1984: 35) describe three approaches to specifying what language tests measure: the skills-component approach, the communicative approach and the measurement approach. Offering a detailed interpretation of the Canale-Swain communicative approach, Bachman and Palmer specify some factors (trait factors, modal factors, method factors) that should be considered while designing a performance test. Having examined the structure of a model which encompasses these three factors, Skehan (1991: 9) regarded it as ‘being of pivotal importance in influencing the language testing theories and practices throughout the 1990s’. Later, Bachman went further, offering important distinctions between task-based and construct-based approaches to test design. He explained:

The procedures for design, development, and use of language tests must incorporate both a specification of the assessment tasks to be included and a definition of the abilities to be assessed.

(Bachman, 2000: 456)

Task-based language assessment gave rise to two questions: a) how real-life task types are identified, selected and characterised, and b) how pedagogic or assessment tasks are related to these (Bachman, 2000: 459).

The discussion of different approaches to language testing is concerned with their strengths and limitations in terms of the criteria of validity and reliability. Validity in language testing concerns whether a test measures what it is intended to measure. Other arguments regarding test validity include questions of content relevance and representativeness, task difficulty, etc. Reliability refers to the extent to which test scores are consistent.

Assessing second language writing

Assessment of second language writing has been discussed on the basis of two different approaches: objective tests of writing and direct tests of writing. Objective tests claim to test writing through verbal reasoning, error recognition and other measures that have been shown to correlate fairly highly with measured writing ability (Hamp-Lyons, 1991: 5). In direct tests of writing, actual samples of students’ writing are assessed. In fact, direct tests of writing have won the support of many researchers, as they engage students with more communicative and creative task types. However, this approach has also been criticised for lacking reliability. Despite their problems with reliability, direct tests are still very popular in many academic settings throughout the world.

Kitao et al. (1996: 2) refer to some typical problems of testing writing. They point out that testing writing objectively may not necessarily reflect the way it is used by students in the real world. On the other hand, testing writing in a way that reflects how students use it in the real world is difficult to evaluate objectively, and the test setter has less control over the writing tasks. However, they argue that the ability to write involves six component skills: grammatical ability, lexical ability, mechanical ability, stylistic skills, organisational skills and judgment of appropriacy. Among the writing tasks they find useful are gap filling, form completion, making corrections, and letter and essay writing.

Weir (1988: 63-4) offers an elaborate discussion of both indirect (objective) and direct tests and distinguishes the two types. He argues that writing can be divided into discrete elements such as grammar, vocabulary and punctuation, and that these elements can be tested separately by the use of objective tests. He suggests that both productive and receptive skills can be broken down into levels of grammar and lexis according to a discrete-point framework, and that objective tasks such as cloze, selective deletion and gap filling can be designed for testing reading with writing. Weir describes the direct test of writing as a more integrative test, which assesses a candidate’s ability to perform certain of the functional tasks required in the performance of duties in the target situation.

Research on writing involving both native speakers and second language learners is also concerned with basic studies of the nature of the writing process, in order to relate them to the validity of writing test tasks. Some of the questions concerned are:

1. To what extent is performance influenced by the amount of prior knowledge that writers have about the topic they are asked to write about in a test?

2. Does it make a difference how the writing task is specified on the test paper?

3. Do different types of tasks produce significant differences in the performance of learners in a writing test? (Read, 1991: 77)

Johns (1991: 171) suggests three criteria for academic testing of writing: (1) use of reading for writing assessment: testing for audience awareness, (2) exploitation of common writing genres: argumentation and problem-solution, and (3) testing of subject matter, conceptual control and planning. He insists that reading and writing be combined to give a more authentic context for testing writing for academic purposes. He says:

Because reading and writing are interconnected at all academic levels, it seems unprofessional and certainly unacademic to test writing without the genuine interactivity that reading provides.

(Johns, 1991: 176)

Literature on testing has suggested different strategies to cope with the problem of setting direct writing tasks. The problem with these tasks is that they are very difficult to mark, as the marking is somewhat subjective. One solution suggested by many testing experts is to use an analytical marking scheme to help make the marking consistent. Murphy (1979: 19) outlined the nature of a marking scheme demanded by the Associated Examining Board: “A marking scheme is a comprehensive document indicating the explicit criteria against which candidates’ answers will be judged; it enables the examiners to relate particular marks to answers of specified quality.”

There have been discussions of two types of marking for free writing tasks: impressionistic and analytic. However, there are arguments over what valid and reliable measures of writing can be used and what the relationship of these measures to overall impressionistic quality ratings might be. The TOEFL examination included a direct writing measure in 1986 (Connor, 1991: 216) with the Test of Written English, which was marked holistically (TOEFL Test of Written English Guide, 1989).

A great deal of research was conducted by the Educational Testing Service into the development and validation of a measure to assess communicative competence in writing (Bridgeman & Carlson, 1983; Carlson et al., 1985). A holistic scoring guide, with six levels and both syntactic and rhetorical criteria, was developed to mark two general topic types: comparison/contrast and describing a graph. The Test of Written English Scoring Guidelines (1989) identified the following criteria for a written task.

An essay in the highest category is well organized and well developed, effectively addresses the writing task, uses appropriate details to support or illustrate ideas, shows unity, coherence and progression, displays consistent facility in the use of language, and demonstrates syntactic variety and appropriate word choice.

(The Test of Written English Scoring Guidelines, 1989)

The marking scheme suggested by ELTIP to help teachers assess written compositions is based on five criteria: grammar, vocabulary, mechanical accuracy, communication and content. A marking scheme like this shows how developments in language testing research are providing models to cope with the challenges of marking writing tasks.

The SSC Curriculum, syllabus and the test

The SSC is the school-leaving public examination for grade 10 students. English is a compulsory subject at this level, and the test of English is an achievement test in nature. The test is designed to assess reading and writing skills only, as there is no provision for testing listening and speaking skills.

The NCTB syllabus of English focuses on the development of the four skills through learner-centred activities within meaningful contexts. It gives importance to choosing contexts which reflect actual social situations outside the classroom and make the learning of English ‘relevant, interesting and enjoyable’. It is expected, as per the syllabus, that students should achieve an ‘elementary to intermediate command of the four language skills’ by the end of the secondary level. The curriculum document specifies the objectives and purposes of learning English as it states:

English needs to be recognised as an essential work-oriented skill that is needed if the employment, development and educational needs of the country are to be met successfully. Increased communicative competence in English, therefore, constitutes a vital skill for learners at this stage.

(SSC Syllabus Document, 1999, NCTB: 136)

Terminal competencies in four skills are specified in the NCTB syllabus. The competencies for writing skills for grade 10 are defined as follows:

Students should be able to-

a) write simple dialogues, formal and informal letters including letters of application, and reports;

b) demonstrate imagination and creativity in appropriate writing forms;

c) fill in forms (i.e. job applications etc.) and write a curriculum vitae;

d) plan and organise the above tasks efficiently so as to communicate ideas and facts clearly, accurately and with relevance to the topic;

e) take notes and dictation;

f) use different punctuation and graphological devices appropriately.

