The selection process is a systematic series of events that results in an organization making a selection from a group of applicants. The group of applicants usually consists of the individuals who best meet the selection criteria for the position available. Many people graduating from college will ask themselves, “Just what is involved in the selection process from the perspective of an HR manager, or, as it is referred to in most cases, the interviewer?” Let’s take a look at the selection process, since landing that job of a lifetime is one of the primary reasons most students decide to continue their education.
Many employers follow a format known as an employer’s guide to good practices. First there is personnel assessment. This assessment is a systematic approach to gathering information about individuals, which is used to make employment or career-related decisions about applicants and employees. For example, an employer may use personnel assessment to select employees for a job.
And career counselors may conduct personnel assessment to provide career guidance to clients. Personnel assessment tools, such as tests and procedures, are used to measure an individual’s employment or career-related qualifications.
There are many types of personnel assessment tools, including traditional knowledge and ability tests, inventories, subjective procedures, projective instruments, and work samples. Personnel assessment tools differ in purpose, in what they are designed to measure and predict, and in their levels of standardization, objectivity, and quantifiability. All assessment tools used to make employment decisions, regardless of their format, are subject to legal standards.
For example, both the evaluation of a resume and the use of a highly standardized achievement test must comply with applicable laws.
Assessment tools used only for career counseling are usually not held to the same legal standards. Employers should remember that personnel tests provide only part of the picture about a person; a sound process combines and evaluates all the information gathered about a person to make career- or employment-related decisions. People differ on many psychological and physical characteristics. These characteristics are called constructs. For example, people skillful in verbal and mathematical reasoning are considered high on mental ability.
Those who have little physical stamina and strength are labeled low on endurance and physical strength. The terms mental ability, endurance, and physical strength are constructs. Constructs are used to identify personal characteristics and to sort people in terms of how much of these characteristics they possess. For example, organizations cannot observe physical strength directly, but they can observe people with great strength lifting heavy objects and people with limited strength attempting, but failing, to lift those objects. Such differences in characteristics among people have important implications in the employment context.
Employees and applicants vary widely in their knowledge, skills, abilities, interests, work styles, and other characteristics. These differences systematically affect the way people perform or behave on the job. Why do organizations conduct assessment? Organizations use assessment tools and procedures to help them perform the following human resource functions: selection, placement, training and development, promotion, career exploration and guidance, and training evaluation. Organizations should use assessment tools in a purposeful manner. It is critical to have a clear understanding of what needs to be measured and for what purpose.
It’s also important not to rely too much on any one test to make decisions; organizations should use the whole-person approach to assessment. Secondly, there is understanding the legal context of assessment: the employment laws and regulations with implications for assessment. Examples include Title VII of the Civil Rights Act (CRA) of 1964 (as amended in 1972) and the Tower Amendment to Title VII. Title VII is landmark legislation that prohibits unfair discrimination in all terms and conditions of employment based on race, color, religion, sex, or national origin.
Other legislation has added age and disability to the list. Women and men, people age 40 and older, people with disabilities, and people belonging to a racial, religious, or ethnic group are protected under Title VII and other employment laws. Individuals in these categories are referred to as members of a protected group. The employment practices covered by this law include the following: recruitment, transfer, performance appraisal, disciplinary action, hiring, training, compensation, termination, job classification, promotion, union or other membership, and fringe benefits.
Employers having fifteen or more employees, employment agencies, and labor unions are subject to this law. The Tower Amendment to this act stipulates that professionally developed workplace tests can be used to make employment decisions; however, only instruments that do not discriminate against any protected group can be used. The Age Discrimination in Employment Act (ADEA) of 1967 prohibits discrimination against employees or applicants age 40 or older in all aspects of the employment process. Individuals in this group must be provided equal employment opportunity.
If an older worker charges discrimination under the ADEA, the employer may defend the practice if it can show that the job requirement is a matter of business necessity. Employers must have documented support for the argument they use as a defense. The ADEA covers employers having 20 or more employees, labor unions, and employment agencies. Certain groups of employees are exempt from ADEA coverage, including public safety personnel such as police officers and firefighters. Uniformed military personnel are also exempt from ADEA coverage.
The Equal Employment Opportunity Commission (EEOC), whose enforcement powers were expanded by the Equal Employment Opportunity Act of 1972, is responsible for enforcing federal laws prohibiting employment discrimination. It receives, investigates, and processes charges of unlawful employment practices filed against employers by an individual or group of individuals. If the EEOC determines there is “reasonable cause” to believe that an unlawful employment practice has occurred, it is authorized to sue on behalf of the charging individual or individuals. The EEOC participated in developing the Uniform Guidelines on Employee Selection Procedures.
The Uniform Guidelines on Employee Selection Procedures of 1978 govern the use of employee selection procedures according to applicable laws. They provide a framework for employers and other organizations for determining the proper use of tests and other selection procedures. The Guidelines are legally binding under a number of civil rights laws, and the courts generally attach great importance to their technical standards for establishing the job-relatedness of tests. One of the basic principles of the Uniform Guidelines is that it is unlawful to use a test or selection procedure that creates adverse impact, unless justified.
Adverse impact occurs when there are substantially different rates of selection in hiring, promotion, or other employment decisions that work to the disadvantage of members of a race, sex, or ethnic group. Adverse impact is normally indicated when the selection rate for one group is less than 80% (four-fifths) of the rate for another. If your assessment process results in adverse impact, you are required to eliminate it or justify its continued use. The Americans with Disabilities Act (ADA) of 1990 states that individuals with disabilities must be given equal opportunity in all aspects of employment.
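The four-fifths rule described above is simple arithmetic. As a minimal sketch, with purely hypothetical applicant counts, it might look like this:

```python
def adverse_impact_check(selected_a, applicants_a, selected_b, applicants_b):
    """Apply the four-fifths (80%) rule to two groups' selection rates.

    Returns the impact ratio (lower rate divided by higher rate) and
    whether adverse impact is indicated (ratio below 0.8).
    """
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < 0.8

# Hypothetical numbers: 50 of 100 applicants hired from group A,
# 30 of 100 from group B. Rates are 0.50 and 0.30, so the ratio is 0.60.
ratio, flagged = adverse_impact_check(50, 100, 30, 100)
print(f"impact ratio = {ratio:.2f}, adverse impact indicated: {flagged}")
# → impact ratio = 0.60, adverse impact indicated: True
```

A ratio at or above 0.8 does not by itself prove the absence of adverse impact, and a ratio below it is only an indicator, not a legal conclusion; the Uniform Guidelines treat it as a rule of thumb for further scrutiny.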
The law prohibits employers from discriminating against qualified individuals with disabilities. Prohibited discrimination includes failure to provide reasonable accommodation to persons with disabilities when doing so would not impose undue hardship. A qualified individual is one with a disability who can perform the essential functions of a job, with or without reasonable accommodation. Record keeping on adverse impact and job-relatedness of tests requires that all employers maintain a record of their employment-related activities, including statistics related to testing and adverse impact.
Some states and localities have issued their own fair employment practices laws, and some have adopted the federal Uniform Guidelines. These state and local laws may be stricter than corresponding federal laws; where they conflict, federal laws and regulations take precedence. Employees and applicants should become thoroughly familiar with their own state and local laws on employment and testing before applying. What makes a good test? An employment test is considered good if the following can be said about it. The test measures what it claims to measure consistently and reliably.
For example, if a person were to take the test again, the person would get a similar test score. The test measures what it claims to measure. For example, a test of mental ability does in fact measure mental ability, and not some other characteristic. The test is job-relevant; in other words, it measures one or more characteristics that are important to the job. By using the test, more effective employment decisions can be made about individuals. For example, an arithmetic test may help you select qualified workers for a job that requires knowledge of arithmetic operations.
A test that has these qualities is characterized by two technical properties: reliability and validity. Reliability refers to how dependably or consistently a test measures a characteristic. If a person takes the test again, will he or she get a similar test score, or a much different score? A test that produces similar scores for a person who repeats the test is said to measure a characteristic reliably. How do we account for someone who does not get exactly the same test score every time he or she takes the test? There could be a variety of reasons.
The test taker’s temporary psychological or physical state, such as high anxiety, fatigue, or low motivation, may affect the applicant’s test results. Environmental factors, such as differences in the testing environment like room temperature, lighting, noise, or even the test administrator, can affect an individual’s test performance. Test form matters because many tests have more than one version or form: items differ on each form, but each form is supposed to measure the same thing. Different forms of a test are known as parallel forms or alternate forms.
These forms are designed to have similar measurement characteristics, but they contain different items; because the forms are not exactly the same, a test taker might do better on one form than on another. Multiple raters also matter, because some tests’ scoring is determined by raters’ judgments of the test taker’s performance or responses, and differences in training, experience, and frame of reference among raters can produce different test scores for the same test taker. These factors are all sources of chance, or random, measurement error in the assessment process.
If there were no random errors of measurement, the individual would get the same test score, the individual’s true score, each time. The degree to which test scores are unaffected by measurement errors is an indication of the reliability of the test. Reliable assessment tools produce dependable, repeatable, and consistent information about people. In order to meaningfully interpret test scores and make useful employment or career-related decisions, you need reliable tools. This brings us to the next principle of assessment: interpreting information from test manuals and reviews is essential.
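The idea that an observed score is a true score plus random error can be illustrated with a small simulation. All the numbers here are hypothetical, chosen only to make the point that single administrations scatter around the true score while the errors average out over many administrations:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is repeatable

TRUE_SCORE = 80.0  # the error-free value of the characteristic being measured

def observed_score():
    """One test administration: the true score plus random measurement
    error (anxiety, fatigue, room conditions, rater judgment, etc.)."""
    return TRUE_SCORE + random.gauss(0, 4.0)

# A single administration can land several points off the true score...
scores = [observed_score() for _ in range(1000)]
# ...but across many administrations the random errors cancel out,
# so the mean observed score sits very close to the true score.
print(round(statistics.mean(scores), 1))
```

The spread of the individual scores around the true score is exactly what a reliability estimate tries to quantify: the smaller the random error, the more reliable the test.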
For example, test manuals and independent reviews of tests provide information on test reliability. The following discussion will help you interpret the reliability information about any test. There are several types of reliability estimates, each influenced by different sources of measurement error. Test developers have the responsibility of reporting the reliability estimates that are relevant for a particular test. Before deciding to use a test, read the test manual and any independent reviews to determine whether its reliability is acceptable.
The acceptable level of reliability will differ depending on the type of test and the reliability estimate used. Test validity is the most important issue when an employer is selecting a test. Validity refers to what characteristic the test measures and how well the test measures that characteristic. It tells you if the characteristic being measured by a test is related to job qualifications and requirements. Validity gives meaning to the test scores. It is evidence that supports the link between test performance and job performance. It can tell you what you may conclude or predict about someone from his or her score on the test.
If a test has been demonstrated to be a valid predictor of performance on a particular job, you can assume that persons scoring high on the test are more likely to perform well on the job than persons who score low on the test. Remember, it is important to understand the difference between reliability and validity. Validity will tell you how good a test is for a particular situation, while reliability will tell you how trustworthy a score on that test will be. You cannot draw valid conclusions from a test score unless you are sure that the test is reliable.
Even when a test is reliable, it may not be valid. You should be careful that any test you select is both reliable and valid for your situation. A test’s validity is established in reference to a specific purpose and the test may not be valid for different purposes. For example, the test you use to make valid predictions about someone’s technical proficiency on the job may not be valid for predicting his or her leadership skills or absenteeism rate. This leads to the next principle of assessment in which the test’s validity is established in reference to specific groups.
These groups are called the reference groups. The test may not be valid for every group of applicants. For example, a test designed to predict the performance of managers in situations requiring problem solving may not allow you to make valid or meaningful predictions about the performance of clerical employees. If, for example, the kind of problem-solving ability required for the two positions is different, or the reading level of the test is not suitable for clerical applicants, the test results may be valid for managers, but not for clerical employees.
Test developers have the responsibility of describing the reference groups used to develop the test. The manual should describe the groups for whom the test is valid, and the interpretation of test results for individuals belonging to each of these groups. You must determine whether the test can be used appropriately with the type of people you want to test. This group of people is called your target population or target group. Your target group and the reference group do not have to match on all factors, yet they must be sufficiently similar that the test will yield meaningful scores for your group.
For example, a writing ability test developed for use with college seniors may be appropriate for measuring the writing ability of white-collar professionals or managers, even though these groups do not have identical characteristics. In determining the correct test for your target groups, consider factors such as occupation, reading level, cultural differences, and language barriers. A valid personnel tool is one that measures an important characteristic of the job the applicant is interested in. Use of valid tools will, on average, enable you to make better employment-related decisions.
From both the business and legal viewpoints, it is important to use only tests that are valid for the organization’s intended use. In order to be certain an employment test is useful and valid, evidence must be collected relating the test to the job. The process of establishing the job-relatedness of a test is called validation. The Uniform Guidelines recognize three methods of conducting validation studies: criterion-related validation, content-related validation, and construct-related validation.
These three general methods often overlap, and depending on the situation, one or more will usually be appropriate. Professionally developed tests should come with reports on validity evidence, including explanations of how validation studies were conducted. If you develop your own tests or procedures, you will need to conduct your own validation studies. As the test user, you have the ultimate responsibility for making sure that validity evidence exists for the conclusions you reach using the tests.
This applies to all tests and procedures you use, whether they have been bought off the shelf, developed externally, or developed in-house. Validity evidence is especially critical for tests that have adverse impact. When a test has adverse impact, the Uniform Guidelines require that validity evidence for that specific employment decision be provided. The particular job for which a test is selected should be very similar to the job for which the test was originally developed. Determining the degree of similarity will require a job analysis.
Job analysis is a systematic process used to identify the tasks, duties, responsibilities, and working conditions associated with a job, and the knowledge, skills, abilities, and other characteristics required to perform it. Job analysis information may be gathered by direct observation of people currently in the job, interviews with experienced supervisors and job incumbents, questionnaires, personnel and equipment records, and work manuals. In order to meet the requirements of the Uniform Guidelines, it is recommended that a qualified professional conduct the job analysis.
For example, an industrial and organizational psychologist or other professional well trained in job analysis techniques. Job analysis information is central to deciding what to test for and which tests to use. To determine whether a particular test is valid for your intended use, consult the test manual and available independent reviews, and consider these factors individually and in combination with one another. It is important to remember that an assessment instrument, like any tool, is most effective when used properly and can be very counterproductive when used inappropriately.
Here are some important issues and concerns surrounding these limitations. Careful attention to them will help organizations produce fair and effective assessment programs. Deciding whether or not to test is an important decision. You need to carefully consider several technical, administrative, and practical matters. Sometimes a more aggressive employee-training program will help improve individual and organizational performance without expanding your current selection procedures.
Sometimes a careful review of each candidate’s educational background and work history will help you select better workers, and sometimes using additional tests will be beneficial. Consider how much additional time and effort will be involved in expanding your assessment program. As in every business decision, you will want to determine whether the potential benefits outweigh the expenditure of time and effort. Be sure to factor in all the costs, such as the purchase of tests and staff time, and balance these against all the benefits, including potential increases in productivity.
In summary, before expanding your assessment program, it is important to have a clear picture of your organization’s needs, the benefits you can expect, and the related costs. Some applicants view tests as threats and are intimidated at the mere thought of taking one. Some may fear that testing will expose their weaknesses, and some may fear that tests will not measure what they can really do on the job.
Others view certain tests as an invasion of privacy. This is especially true of personality tests, honesty tests, medical tests, and tests that screen for drug use. Remember that fear or mistrust of tests can lower the scores of some otherwise qualified candidates. To reduce these fears, it is important for the organization to take the time to explain a few things about the testing program before administering it. The explanation should cover why the test is being administered, the fairness of the test, the confidentiality of the results, and how the results will be used in the assessment process.
Furthermore, organizations should not rely entirely on any one assessment instrument in making employment decisions. Using a variety of assessment tools will help an organization obtain a fuller and more accurate picture of an applicant. Every test taker should have a fair chance to demonstrate his or her performance on an assessment procedure. However, there are times when a fair chance will most likely not occur. For example, a person who has a child at home with the measles may not be able to concentrate on taking a vocabulary test.
Or someone sitting next to a noisy air conditioner may not be able to concentrate on the test questions. On another day, under different circumstances, these individuals might receive a different score. A test taker who believes a test score does not reflect his or her true ability should consider asking for an alternate means of assessment. A single test cannot be expected to be valid in all situations and for all groups of people. However, employers can effectively use personnel assessment tests to measure job-related skills and capabilities of applicants and employees.
These tools can identify and select better workers and can help improve the quality of an organization’s overall performance. To use these tools properly, employers must be aware of the limitations of any assessment procedure, as well as the legal issues involved in assessment. The guide is organized around thirteen important assessment principles and their applications. These thirteen principles provide the framework for conducting an effective personnel assessment program. First there is the use of assessment tools in a purposeful manner.
Assessment instruments, like other tools, are helpful when used properly but can be useless, harmful, or illegal when used inappropriately. Often, inappropriate use results from not having a clear understanding of what you want to measure and why you want to measure it. As an employer, you must first be clear about what you want to accomplish with your assessment program in order to select the proper tools to achieve those goals. Your assessment strategies should be based on both an understanding of the kind of employment decisions to be made and the population to be assessed.
Once you are clear on your purpose, you will be better able to select appropriate assessment tools and use them effectively. Secondly, use the whole-person approach to assessment. An assessment instrument may provide you with important employment-related information about an individual. However, no assessment tool is 100% reliable or valid; all are subject to errors, both in measuring job-related characteristics and in predicting job performance. More importantly, a single assessment instrument provides only a limited view of a person’s qualifications.
Using a variety of tools to measure skills, abilities, and other job-related characteristics gives you a solid basis upon which to make important career and employment decisions. Thirdly, organizations should use assessment instruments that are unbiased and fair, tests that will help them select a qualified and diverse workforce. Organizations should review the fairness evidence associated with assessment instruments before selecting tools, by examining the test manual and independent test reviews. Next, organizations should use only reliable assessment instruments and procedures.
For example, if a person takes the same test again, will he or she get a similar score, or a very different one? A reliable instrument will provide accurate and consistent scores. Test manuals will usually provide a statistic, known as the reliability coefficient, giving you an indication of a test’s reliability. The higher the reliability coefficient, the more confidence one can have that the score is accurate. Fifth, organizations should use only assessment procedures and instruments that have been demonstrated to be valid for the specific purpose for which they are being used.
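As a rough sketch, the reliability coefficient in a test-retest design is simply the correlation between scores from two administrations of the same test. The scores below are hypothetical:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five test takers on two administrations.
first = [70, 75, 80, 85, 90]
second = [72, 74, 81, 86, 89]
print(round(pearson_r(first, second), 2))  # close to 1.0: high test-retest reliability
```

Other reliability estimates (parallel forms, internal consistency, inter-rater) are computed differently, but each is ultimately a correlation-style index, which is why a coefficient near 1.0 signals a dependable test and a low one signals heavy measurement error.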
Validity is the most important issue in selecting assessment tools. It refers to the characteristic the assessment instrument measures and how well the instrument measures that characteristic. Validity is not a property of the assessment instrument itself; rather, it relates to how the instrument is being used. A test’s validity is established in reference to a specific purpose; it may not be valid for different purposes. For example, a test that is valid for predicting someone’s job knowledge may not be valid for predicting his or her leadership skills.
One must be sure that the instrument is valid for the purpose for which it is to be used; selecting a commercially developed instrument does not relieve an organization of this responsibility. The test manual usually provides a statistic, the validity coefficient, which gives an indication of the test’s validity for a specific purpose under specific circumstances. It measures the degree of relationship between test performance and job performance. The sixth principle relates to using assessment tools that are appropriate for the target population.
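In the same spirit, a validity coefficient is the correlation between test scores and a job-performance measure, and fitting a simple regression on the same data yields a predicted performance level for a new score. All numbers here are hypothetical:

```python
import statistics

# Hypothetical data: applicants' test scores and their later
# supervisor performance ratings (1-5 scale).
scores = [50, 60, 70, 80, 90]
ratings = [2.8, 2.6, 3.6, 3.9, 4.4]

ms, mr = statistics.mean(scores), statistics.mean(ratings)
cov = sum((s - ms) * (r - mr) for s, r in zip(scores, ratings))
var_s = sum((s - ms) ** 2 for s in scores)
var_r = sum((r - mr) ** 2 for r in ratings)

# Validity coefficient: correlation between test and job performance.
validity = cov / (var_s ** 0.5 * var_r ** 0.5)

# Least-squares line predicting rating from score.
slope = cov / var_s
intercept = mr - slope * ms

print(f"validity coefficient: {validity:.2f}")
print(f"predicted rating for a score of 75: {intercept + slope * 75:.1f}")
```

A higher validity coefficient means test scores track job performance more closely, which is exactly the evidence the Uniform Guidelines ask for in a criterion-related validation study.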
An assessment tool is usually developed for use with a specific group and may not be valid for other groups. For example, a test designed to predict the performance of office managers might not be valid for clerical workers. The skills and abilities required for the two positions may be different, or the reading level of the test may not be suitable for clerical workers. Tests should be appropriate for the individuals you want to test, that is, your target population. The manual should indicate the group or groups the test is designed to assess.
Your target population should be similar to the group on which the test was developed. In determining the suitability of an instrument for your target group, an organization should consider such factors as reading levels, cultural backgrounds, and language barriers. Principle seven deals with using assessment instruments for which understandable and comprehensive documentation is available. For example, are the instructions for administration and interpretation understandable? Is the information sufficiently comprehensive to evaluate whether the tool suits your needs?
An organization should carefully evaluate the documentation provided by the test publisher to be sure that the tools selected do the intended job and furnish the information needed. If the documentation is not understandable or complete, an organization runs the risk of selecting inappropriate instruments. Test manuals should provide information about both the development and the psychometric characteristics of tests. They should cover topics such as procedures for administration, scoring, and interpretation.
They should also include a description of the validation procedures used and the evidence of validity, reliability, and test fairness. Principle eight deals with ensuring that the administration staff is properly trained. Assessment instruments must be administered properly to obtain valid results. Organizations should consult the administration manual for guidelines on the qualifications and training required for test administrators. These requirements will vary depending on the nature and complexity of the test, and only suitable staff should be selected.
Administrators should be given ample time to learn their responsibilities and should practice administering tests. Principle nine is to ensure that testing conditions are suitable for all test takers. Various influences may affect the reliability and validity of an assessment procedure. For example, noise in the testing room, poor lighting, inaccurate timing, and damaged test equipment may affect test takers. Staff should ensure that the testing environment is suitable and that administration procedures are uniform for all test takers.
Principle ten is to provide reasonable accommodation in the assessment process for people with disabilities. This ensures that qualified individuals with disabilities have an equal chance to demonstrate their potential. Modifying test equipment or the testing process, or providing assistance to the test taker, may be necessary. For example, an organization may have to administer a test in Braille, allow extra time to complete the test, or supply a reader. Principle eleven is to maintain the security of assessment instruments.
All materials used in the assessment process must be kept secure. Lack of security could result in some test takers having access to test questions beforehand, thus invalidating their scores. To prevent this, organizations should, for example, keep testing materials in locked rooms or cabinets and limit access to those materials to the staff involved in the assessment process. Security is also the responsibility of test developers. The security of a test may become compromised over time, so to protect it, test developers should periodically introduce new forms of tests.
Principle twelve is to maintain the confidentiality of assessment results. Assessment results should be regarded as highly personal, and employers must respect the test taker’s right to confidentiality. Assessment results should be shared only with those who have a legitimate need to know, which would include staff involved in interpreting assessment results and making employment decisions. Personal information should not be released to other organizations or individuals without the informed consent of the test taker.
And lastly, principle thirteen is to ensure that scores are interpreted properly. Tests are used to make inferences about people’s characteristics, capabilities, and future performance. These inferences should be reasonable, well founded, and not based upon stereotypes. If test scores are not interpreted properly, the conclusions drawn from them are likely to be invalid, leading to poor decision making. The organization should ensure there is solid evidence to justify the interpretation of test scores and the employment decisions based on those scores.