Approximately 210,000 third- and fourth-grade students respond annually to writing and reading tasks through the New Jersey Assessment of Skills and Knowledge (NJ ASK). Students in both grades prepare two writing samples (one narrative, one expository). Grade 3 students also respond to two passage-linked open-ended questions, while grade 4 students respond to three. Each writing sample and reading response is scored individually by two independent readers.
To accomplish the scoring of these writing samples and reading responses, the test contractor selects, trains, and qualifies experienced readers (scorers).
All readers, regardless of experience, are required to participate in an intensive training period that is focused on application of the appropriate rubric to one particular type of task (e.g., a narrative or story based on a picture prompt). The materials used for training and qualifying the readers are scored by consensus during range-finding and then selected for use as anchors (or benchmark papers), training and practice sets, qualifying sets, calibration sets, and monitor papers.
Only those readers who meet the 80 percent agreement standard qualify to score New Jersey writing samples and reading responses. By the end of training, the readers have internalized the defined criteria at each score point of the rubric through practice scoring and discussion of more than 100 sample student responses.
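The 80 percent agreement standard can be illustrated with a short sketch. Note the source does not define whether "agreement" counts exact matches only or also adjacent scores; the function below assumes exact agreement against consensus (anchor) scores on a qualifying set, and all names and data are hypothetical.

```python
def agreement_rate(candidate_scores, anchor_scores):
    """Fraction of responses where a candidate reader's score
    exactly matches the consensus (anchor) score.

    Hypothetical illustration: the agreement definition is assumed,
    not taken from the source."""
    if len(candidate_scores) != len(anchor_scores):
        raise ValueError("score lists must be the same length")
    matches = sum(c == a for c, a in zip(candidate_scores, anchor_scores))
    return matches / len(anchor_scores)

# A candidate reader scoring a ten-response qualifying set:
candidate = [4, 3, 5, 2, 4, 3, 6, 1, 4, 3]
anchors   = [4, 3, 5, 3, 4, 3, 6, 1, 4, 2]
rate = agreement_rate(candidate, anchors)
qualified = rate >= 0.80  # the 80 percent standard
print(rate, qualified)    # 0.8 True
```

A candidate who matched the consensus score on 8 of 10 qualifying responses would just meet the standard under this (assumed) exact-agreement definition.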
Although New Jersey has begun using image (or electronic) scoring for NJ ASK, the procedures for scoring remain consistent with those used by the state since the inception of performance-based tasks in statewide assessment. All writing samples and reading responses are monitored and scored by trained, experienced personnel who have met the same rigorous standards established with the initial holistic scoring study conducted in 1986. Many individuals are responsible for ensuring the success of scoring for any large-scale assessment.
Key to the process of scoring NJ ASK responses accurately and reliably are the contractor’s senior project manager, chief reader, team leaders, and readers.
The contractor’s senior project manager works closely with the New Jersey Department of Education throughout the scoring process. The senior project manager participates in selection of the range-finding and training papers prior to the onset of reader training. The senior project manager directs the activities of the chief reader and oversees all aspects of the project including monitoring reader performance (reader reliability and production rates), directing retraining efforts, and supervising the capture of scoring data.
The chief reader participates in the reading, scoring, and preparation of range-finding papers along with the contractor’s senior project manager, New Jersey teachers, and the department’s Language Arts Literacy coordinator. Additionally, the chief reader prepares annotations for the anchor papers that, along with the scoring criteria, are used to train the team leaders and, in turn, the readers. It is the responsibility of the chief reader to introduce the task, the rubrics, and the sample responses; to conduct the training sessions; and to ensure that readers score reliably and consistently throughout the scoring process. The chief reader supervises the team leaders, directs all scoring and validity procedures, reads and interprets reader quality control reports, and conducts all retraining activities. Finally, the chief reader handles all resolution readings (when the two readers assign non-adjacent scores to the same response).
Team leaders rely heavily upon periodic individual and small-group training to correct reader drift (that is, scoring that is not in accord with the criteria). They spot-check reader scoring throughout the project and counsel readers who have a higher than acceptable discrepancy rate. A paper is considered discrepant if two independent readers assign non-adjacent scores to the same response (e.g., one reader assigns a “5,” the second reader a “3”). Every response with discrepant scores must be adjudicated, that is, submitted for a third reading and resolution of the discrepant scores.
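The discrepancy rule and the rate team leaders monitor can be sketched as follows; this is a minimal illustration, and the function names and the sample score pairs are hypothetical, not taken from the actual scoring system.

```python
def is_discrepant(score_a, score_b):
    """Two independent scores are discrepant when they are
    non-adjacent, i.e. they differ by more than one point."""
    return abs(score_a - score_b) > 1

def discrepancy_rate(score_pairs):
    """Fraction of a reader's responses whose two independent
    scores were non-adjacent (a hypothetical monitoring metric)."""
    flagged = sum(is_discrepant(a, b) for a, b in score_pairs)
    return flagged / len(score_pairs)

# The example from the text: a "5" against a "3" is non-adjacent.
print(is_discrepant(5, 3))  # True -> adjudication required
print(is_discrepant(4, 5))  # False -> adjacent scores are accepted

pairs = [(5, 3), (4, 4), (3, 4), (2, 5)]
print(discrepancy_rate(pairs))  # 0.5
```

Under this rule, adjacent scores such as a “4” and a “5” stand as given, while any wider gap triggers a third, resolving read.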
Once trained in the application of the scoring criteria to a given task, the readers’ primary responsibility is to score accurately all responses provided through the electronic system. These responses have been scanned from students’ original test booklets and sorted electronically by item type and number. Therefore, when the reader logs onto the scoring system, a randomly assembled “packet” of responses to the given task is made available for the reader to read and score electronically. As part of the scoring procedures, the reader may not be able to assign a scale score because the response is off-topic, in the wrong format, not in English, or blank. In that event, the reader will instead assign one of several codes to indicate the reason that the response is not scorable. After all responses in the “packet” have been scored, the “packet” of responses and reader-assigned scores (or codes) is returned to the main system, where the responses are regrouped into other packets for a second reading. If the second reading results in non-adjacent scores, the scoring system automatically identifies the discrepancy, and the response is submitted to the chief reader for a third reading and resolution of the discrepant scores.
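The routing that follows the two readings can be sketched in a few lines. This is an illustration only: the condition codes, their labels, and the routing outcomes below are hypothetical, since the source does not name the actual codes the scoring system uses.

```python
# Hypothetical non-scorable condition codes; the actual codes used
# by the scoring system are not specified in the source.
NOT_SCORABLE = {"OT": "off-topic", "WF": "wrong format",
                "NE": "not in English", "BL": "blank"}

def route_response(first, second):
    """Decide what happens to a response after two independent
    readings. Each reading is either an integer scale score or one
    of the NOT_SCORABLE codes. A sketch of the routing described
    above, with assumed outcome labels."""
    if first in NOT_SCORABLE or second in NOT_SCORABLE:
        return "review code assignment"
    if abs(first - second) > 1:  # non-adjacent scores
        return "third reading by chief reader"
    return "scores accepted"

print(route_response(4, 5))    # scores accepted
print(route_response(5, 3))    # third reading by chief reader
print(route_response("BL", 4)) # review code assignment
```

In practice the system performs this check automatically as each packet returns, so discrepant responses reach the chief reader without manual sorting.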
Readers are also responsible for recognizing and flagging “alert” papers (e.g., suspicion of child abuse) so that these papers can be handled in the correct manner. Alert papers are flagged if they reflect potential abuse, emotional or psychological difficulty, or possible plagiarism. Alert papers are scored but then forwarded to the chief reader for review. If the chief reader agrees that the student’s own words specifically state a situation that qualifies as an alert or reflects a potential risk situation for a child, the paper is copied and sent to the Department of Education for documentation and follow-up with district authorities. The Office of Evaluation and Assessment in the department brings these alerts to the attention of school district personnel.