Editing involves reviewing questionnaires to increase accuracy and precision. It consists of screening questionnaires to identify illegible, incomplete, inconsistent, or ambiguous responses. Responses may be illegible if they have been poorly recorded, such as answers to unstructured or open-ended questions. Likewise, questionnaires may be incomplete to varying degrees. A few or many questions may be unanswered. At this stage, the researcher makes a preliminary check for consistency. A response is ambiguous if, for example, the respondent has circled both 4 and 5 on a 7-point scale.
Coding means assigning a code, usually a number, to each possible response to each question. The code includes an indication of the column position and data record it will occupy. For example, the gender of respondents may be coded as 1 for females and 2 for males. A field represents a single item of data, such as the gender of the respondent. A record consists of related fields, such as gender, marital status, age, household size, and occupation. Thus, each record can span several columns. Generally, all the data for a respondent are stored on a single record, although several records may be used for each respondent.
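The coding step above can be sketched in a few lines of Python. This is a minimal illustration, not a standard scheme: the field names and code values (1 = female, 2 = male, and so on) are assumptions chosen for the example.

```python
# Illustrative coding scheme: each field maps a raw response to a numeric code.
GENDER_CODES = {"female": 1, "male": 2}
MARITAL_CODES = {"single": 1, "married": 2, "divorced": 3, "widowed": 4}

def code_record(raw):
    """Turn one respondent's raw answers into a coded record (a list of fields)."""
    return [
        GENDER_CODES[raw["gender"]],          # field 1: gender
        MARITAL_CODES[raw["marital_status"]], # field 2: marital status
        raw["age"],                           # field 3: already numeric, stored as-is
        raw["household_size"],                # field 4: already numeric
    ]

record = code_record({
    "gender": "female",
    "marital_status": "married",
    "age": 34,
    "household_size": 3,
})
print(record)  # [1, 2, 34, 3]
```

Each list returned here corresponds to one record, and each position in the list to one field, mirroring the field/record layout described above.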
It is often helpful to prepare a codebook containing the coding instructions and the necessary information about the variables in the data set. Data cleaning consists of consistency checks and the treatment of missing responses. While preliminary consistency checks are made during editing, the checks at this stage are more thorough and extensive, since they are made by computer. Consistency checks identify data that are out of range, logically inconsistent, or have extreme values.
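The three kinds of consistency check just named can be sketched as follows. The field names, valid ranges, and logical rules are illustrative assumptions, not part of any standard.

```python
# A minimal sketch of computerized consistency checks.

def check_record(rec):
    """Return a list of problems found in one coded record."""
    problems = []
    # Out-of-range: satisfaction must be coded 1-7 on the assumed scale.
    if not 1 <= rec["satisfaction"] <= 7:
        problems.append("satisfaction out of range")
    # Logically inconsistent: a respondent without a car cannot report mileage.
    if rec["owns_car"] == 0 and rec["weekly_miles"] > 0:
        problems.append("mileage reported by non-car-owner")
    # Extreme value: flag implausibly high ages for manual review.
    if rec["age"] > 110:
        problems.append("extreme age value")
    return problems

print(check_record({"satisfaction": 9, "owns_car": 0, "weekly_miles": 12, "age": 34}))
# ['satisfaction out of range', 'mileage reported by non-car-owner']
```

In practice, flagged records would be returned to editing staff or, if the problems cannot be resolved, treated as missing responses.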
Data with values not defined by the coding scheme are inadmissible. Missing responses are values of a variable that are unknown, either because respondents provided ambiguous answers or because their answers were not properly recorded. Proper selection, training, and supervision of field workers should minimize the incidence of missing responses. Data cleansing, data cleaning, or data scrubbing is the process of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database.
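Screening for inadmissible and missing values can be sketched as below. The coding scheme (1 = female, 2 = male, with 9 as the missing-value code) is an assumption made for the example.

```python
# A minimal sketch of screening one variable for missing and inadmissible values.
ADMISSIBLE_GENDER = {1, 2}
MISSING_CODE = 9

def screen(values):
    """Return the positions of missing and inadmissible values in a column."""
    missing, inadmissible = [], []
    for i, v in enumerate(values):
        if v == MISSING_CODE or v is None:
            missing.append(i)            # unknown value: treat separately later
        elif v not in ADMISSIBLE_GENDER:
            inadmissible.append(i)       # value not defined by the coding scheme
    return missing, inadmissible

print(screen([1, 2, 9, 5, None]))  # ([2, 4], [3])
```

How the flagged values are then treated (substitution, casewise deletion, and so on) is a separate analytical decision.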
Used mainly in databases, the term refers to identifying incomplete, incorrect, inaccurate, or irrelevant parts of the data and then replacing, modifying, or deleting this dirty data. After cleansing, a data set will be consistent with other similar data sets in the system. The inconsistencies detected or removed may have been originally caused by user entry errors, by corruption in transmission or storage, or by different data dictionary definitions of similar entities in different stores.
Data cleansing differs from data validation in that validation is performed at entry time and almost invariably means rejecting bad data from the system at entry, rather than operating on batches of data. The actual process of data cleansing may involve removing typographical errors or validating and correcting values against a known list of entities. The validation may be strict (such as rejecting any address that does not have a valid postal code) or fuzzy (such as correcting records that partially match existing, known records).
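The strict and fuzzy approaches can be contrasted in a short sketch. The five-digit postal-code pattern and the reference list of city names are illustrative assumptions.

```python
import re
import difflib

KNOWN_CITIES = ["Springfield", "Shelbyville", "Capital City"]

def strict_valid_postcode(code):
    """Strict validation: reject anything that is not exactly five digits."""
    return re.fullmatch(r"\d{5}", code) is not None

def fuzzy_correct_city(name):
    """Fuzzy correction: replace a value that closely matches a known entity."""
    matches = difflib.get_close_matches(name, KNOWN_CITIES, n=1, cutoff=0.8)
    return matches[0] if matches else None

print(strict_valid_postcode("6250A"))     # False -> record rejected outright
print(fuzzy_correct_city("Springfeild"))  # 'Springfield' -> record corrected
```

The strict check simply accepts or rejects; the fuzzy check repairs a near-match but leaves genuinely unrecognized values (which return None here) for manual review.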
Some data cleansing solutions clean data by cross-checking it against a validated data set. Data enhancement, where data is made more complete by adding related information, is also a common data cleansing practice; for example, appending phone numbers to the addresses they relate to. Data cleansing may also involve harmonization and standardization of data: for example, harmonizing short codes (St, Rd, etc.) to the actual words (street, road), or standardizing a reference data set to a new standard, such as the use of standard codes.
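The harmonization of short codes described above can be sketched with a simple lookup table. The abbreviation mapping is an illustrative assumption; real solutions use much larger, locale-specific tables.

```python
# A minimal sketch of harmonizing address short codes to full words.
ABBREVIATIONS = {"st": "street", "rd": "road", "ave": "avenue"}

def harmonize_address(address):
    """Expand known abbreviations, word by word, ignoring case and trailing dots."""
    words = []
    for word in address.split():
        key = word.rstrip(".").lower()
        words.append(ABBREVIATIONS.get(key, word))
    return " ".join(words)

print(harmonize_address("12 Main St"))  # '12 Main street'
print(harmonize_address("4 Oak Rd."))   # '4 Oak road'
```

After harmonization, all records use the same vocabulary, which makes the cross-checking and de-duplication steps mentioned above far more reliable.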