We thank the Community Technology Foundation of California (ZeroDivide), UCACCORD, and Computers for Classrooms, Inc. for funding. We thank Hardik Bhatt, Jesse Catlin, Eric Deveraux, Oliver Falck, Eszter Hargittai, Ofer Malamud, Jeff Prince, Jon Robinson, Rhonda Sharpe and participants at seminars and workshops at the Center for Human Potential and Public Policy, the University of Chicago, UCLA, Case Western Reserve University, University College Dublin, University of Rome, UC Irvine, UC Berkeley, the University of Wisconsin, San Francisco Federal Reserve, UCSC and APPAM meetings for comments and suggestions. We also thank Mike Rasmussen, Karen Micalizio, Katalin Miko, Bev McManus, Linda Cobbler, Zeke Rogers and others at Butte College for helping with administering the program and providing administrative data, and Samantha Grunberg, Miranda Schirmer, Luba Petersen, Caitlin White, Anita McSwane-Williams, Matt Jennings, and Emilie Juncker for research assistance. Finally, special thanks go to Pat Furr for providing computers for the study and for her extensive help in administering the giveaway program.
There is no clear theoretical prediction regarding whether home computers are an important input in the educational production function. To investigate the hypothesis that access to a home computer affects educational outcomes, we conduct the first-ever field experiment involving the provision of free computers to students for home use. Financial aid students attending a large community college in Northern California were randomly selected to receive free computers and were followed for two years. Although estimates for a few measures are imprecise and cannot rule out zero effects, we find some evidence that the treatment group achieved better educational outcomes than the control group.
The estimated effects, however, are not large. We also provide some evidence that students initially living farther from campus benefit more from the free computers than students living closer to campus. Home computers appear to improve students’ computer skills and may increase the use of computers at non-traditional times. The estimated effects of home computers on educational outcomes from the experiment are smaller than the positive estimates reported in previous studies. Using matched CPS data, we find estimates of educational effects that are considerably larger than the experimental estimates.
The use of computers is ubiquitous in the U.S. educational system. Nearly all instructional classrooms in U.S. public schools have computers with Internet access, with an average of more than one instructional computer for every four schoolchildren (U.S. Department of Education 2007). A growing number of state, school district and individual school programs have further increased the ratio of computers to students to as high as one to one through the provision of laptops to all schoolchildren and teachers (Warschauer 2006, Silvernail and Gritter 2007). The federal government has also played an active role in reducing disparities in access to technology, spending roughly $2 billion per year on the E-rate program, which provides discounts to schools and libraries for the costs of telecommunications services and equipment (Puma, et al. 2000, Universal Services Administration Company 2007). Schools themselves are spending more than $5 billion per year on technology (MDR 2004).
Despite the efforts to improve computer access in schools, a total of 45 million households in the United States (38 percent of all households) do not have computers with Internet access at home (U.S. Department of Commerce 2008). Access to home computers is also not evenly distributed across the population; large disparities exist by income and race. For example, only 46 percent of the 50 million U.S. households with less than $50,000 in annual income have computers with Internet access at home compared to 87 percent of households with more than $50,000 in income. These disparities in access to home computers—known as the Digital Divide—may contribute to educational inequality. There is no clear theoretical prediction, however, regarding whether home computers are likely to have a negative or positive effect on educational outcomes.
Having access to a home computer is undoubtedly useful for completing school assignments because it increases the availability and flexibility of computer access for these purposes. On the other hand, home computers may crowd out schoolwork time because youth commonly use them for games, social networking, downloading music and videos, communicating with friends, and other entertainment (Lenhart 2009, U.S. Department of Commerce 2004, Jones 2002). A better understanding of the extent to which home computers affect educational outcomes is critical because it sheds light on whether home computers are an important input in the educational production process and whether disparities in access to technology will translate into educational inequality.
Although an extensive literature examines the effectiveness of computer use in the classroom, very little research has focused on the question of whether home computers improve educational outcomes. The handful of previous studies examining the relationship between home computers and educational outcomes find somewhat mixed results (Attewell and Battle 1999, Schmitt and Wadsworth 2004, Fuchs and Woessmann 2004, Fairlie 2005, Beltran, Das and Fairlie 2010, and Malamud and Pop-Eleches 2010). A limitation of this literature, however, is that most of these previous studies may suffer from omitted variable bias—specifically that the most educationally motivated students and families are likely to be the same ones that purchase computers for school use.
To address this limitation, we conduct the first-ever field experiment involving the provision of free computers to students for home use. Participating students were randomly selected to receive free computers and were followed for two years. The random-assignment evaluation is conducted with 286 entering students receiving financial aid at a large community college in Northern California. We received enough funding to provide half of these students with computers. Although baseline and follow-up surveys were conducted, we primarily use administrative data provided by the college for all students to analyze educational outcomes, eliminating concerns about non-response.
The field experiment identifies the causal effects of home computers on educational outcomes and provides evidence on the potential mechanisms exerting both positive and negative influences on educational outcomes. The findings from this study shed light on whether home computers are an important educational input and whether we should view the remaining digital divide in the United States as a difference in consumer preferences or a disparity in educational resources.
The results from the experiment also provide the first evidence in the literature on the effects of home computers for post-secondary students. The focus on the impacts of computers on community college students is especially important. Community colleges enroll nearly half of all students attending public universities, and an even larger share of disadvantaged students (U.S. Department of Education 2011). The share is even higher in states such as California where community colleges serve 2.9 million students a year, representing more than 70 percent of all public higher education enrollment (Sengupta and Jepsen 2006; California Community Colleges Chancellor’s Office 2009). The returns to community colleges are also high, and recently President Obama proposed an unprecedented funding increase for community colleges that will boost enrollments by 5 million students by 2020.
In addition to providing workforce training and basic skills education, community colleges serve as an important gateway to four-year colleges, especially among low-income and minority students. The cost savings from spending two years at a community college before entering a four-year college can be substantial and are likely to rise — typical full-time tuition at a community college is $2,063, compared with $5,950 at a public university and $21,588 at a private university (U.S. Department of Education 2009).
In some states with large community college systems, such as California, nearly half of all students attending a 4-year college previously attended a community college (California Community Colleges Chancellor’s Office 2009). Finally, unlike four-year colleges where many students live on campus and have access to large computer labs, community college students often have limited access to on-campus technology. Data from an extensive survey of U.S. colleges in 2004 indicates that 80 percent of 4-year college students use their own computers compared with only 35 percent of 2-year college students (Educause 2005).
The findings from the experiment indicate that the treatment group of students receiving free computers had better educational outcomes than the control group along a few measures. Although a few of the estimates are imprecisely measured and cannot rule out zero effects, the point estimates are consistently positive and small in magnitude across several educational outcomes. Estimates for a summary index of educational outcomes indicate that the treatment group is 0.14 standard deviations higher than the control group mean. Students living farther from campus and students who have jobs may have benefited more from home computers, possibly through increased flexibility and total time using computers for schoolwork. Estimates from the randomized experiment are also found to be smaller than non-experimental estimates from matched CPS data, raising concerns that previously reported estimates of large, positive effects of home computers on educational outcomes may be overstated.
2. Previous Research
The educational production function commonly estimated in the literature relates student performance to student, family, teacher, and school inputs measured directly or as fixed effects (see Rivkin, Hanushek and Kain 2005 for example). The personal computer is an example of one of these inputs in the educational production process. The use of computers in U.S. schools is now universal and has been studied extensively, but the role of home computers as an input in educational production is not well understood.
There are several reasons to suspect that home computers may represent an important educational input. First, personal computers make it easier to complete course assignments through the use of word processors, the Internet, spreadsheets, and other software (Lenhart, et al. 2001, Lenhart, et al. 2008). Although many students could use computers at school and libraries, home access represents the highest quality access in terms of availability, flexibility and autonomy, which may provide the most benefits to the user (DiMaggio and Hargittai 2001). Almost all students using home computers use these computers to complete school assignments and nearly three out of every four use them for word processing (Beltran, Das and Fairlie 2010).
Access to a home computer may also improve familiarity with software, increasing the effectiveness of computer use for completing school assignments and the returns to computer use at school (Underwood, et al. 1994, Mitchell Institute 2004, and Warschauer and Matuchniak 2009). Enhanced computer skills from owning a personal computer may also alter the economic returns to education, especially in fields in which computers are used extensively. Finally, the social distractions of using a computer in a crowded computer lab on campus may be avoided by using a computer at home.
On the other hand, home computers are often used for games, networking, downloading music and videos, communicating with friends, and other forms of entertainment, potentially displacing time for schoolwork (U.S. Department of Commerce 2004, Jones 2002). Nearly three-quarters of home computer users use their computers for games, and a large percentage of these users report playing games at least a few times a week (Beltran, Das and Fairlie 2010, Lenhart, Jones and Rankin 2008). Social networking sites such as Facebook and MySpace and other entertainment sites such as YouTube and iTunes have grown rapidly in recent years (Lenhart 2009).
The number of Facebook users alone increased from only 2 million in 2004 to 150 million in 2009. Computers are also often criticized for displacing other, more active and effective forms of learning and for emphasizing presentation (e.g. graphics) over content (Giacquinta, et al. 1993, Stoll 1995 and Fuchs and Woessmann 2004). Computers and the Internet also facilitate cheating and plagiarism and make it easier to find information from non-credible sources (Rainie and Hitlin 2005). In the end, there is no clear theoretical prediction on the sign or magnitude of the effects of home computers on educational achievement, and thus an empirical analysis is needed.
To identify the effects of home computers, the starting empirical approach has been to regress educational outcomes on the presence of a home computer controlling for detailed student, family and parental characteristics. Studies using this approach generally find relatively large positive effects of home computers on educational outcomes (Attewell and Battle 1999, Fairlie 2005, Schmitt and Wadsworth 2006, Beltran, Das and Fairlie 2010), although there is some evidence of negative effects (Fuchs and Woessmann 2004). In some cases these controls include prior educational attainment, difficult-to-find detailed characteristics of the educational environment in the household, and extracurricular activities of the student (Attewell and Battle 1999, Schmitt and Wadsworth 2006, Beltran, Das and Fairlie 2010). However, these estimates of the effects of home computers on educational outcomes may still be biased due to omitted variables.
The main concern is that if the most educationally motivated students and families are the ones who are the most likely to purchase computers, then a positive relationship between academic performance and home computers may simply capture the effect of unmeasurable motivation on academic performance. Several studies have investigated this issue using instrumental variable techniques, future computer ownership, falsification tests, individual-student fixed effects, or regression discontinuity designs (RDD). Estimates from bivariate probits for the joint probability of an educational outcome and computer ownership reveal large positive estimates (Fairlie 2005 and Beltran, Das and Fairlie 2010). Another approach, first taken by Schmitt and Wadsworth (2006), is to include future computer ownership in the educational outcome regression.
A positive estimate of future computer ownership on educational attainment would raise suspicions that current ownership proxies for an unobserved factor, such as educational motivation. However, previous studies do not find a positive estimate for future computer ownership, and do not find positive estimates for additional falsification tests (Schmitt and Wadsworth 2006 and Beltran, Das and Fairlie 2010). Malamud and Pop-Eleches (2010) address the endogeneity problem with an RDD based on the effects of a government program in Romania that allocated a fixed number of vouchers for computers to low-income children in public schools. Estimates from the discontinuity created by the allocation of computer vouchers by a ranking of family income indicate that Romanian children winning vouchers have lower grades, but higher cognitive ability and better computer skills.
We take a new approach to address the problem of correlated unobservables by conducting the first random-assignment field experiment providing free computers to students for home use. The only previous study that randomly assigned free computers to individuals for home use is Servon and Kaestner (2008): through a program with a major bank, computers with Internet service were randomly assigned to low- and moderate-income families to determine how they affect the use of financial services. Although there are recent random experiments involving the provision of computer-assisted learning in schools (e.g. Barrow, Markman and Rouse 2009 and Mathematica 2009), to our knowledge no previous study has randomly provided free computers to students for home use. Furthermore, no previous study explores the impact of home computers on the educational outcomes of college students.
3. The Field Experiment
To study the educational impacts of home computers, we randomly assigned free computers to entering community college students who were receiving financial aid. The students attended Butte College, which is located in Northern California and has a total enrollment of over 20,000 students. Butte College is part of the California Community College system, which is the largest higher educational system in the nation and includes 110 colleges and educates more than 2.6 million students per year (California Community Colleges Chancellor’s Office 2008). Compared with the average community college in the United States, Butte College is larger, but does not differ substantially in the composition of its student body. For example, Butte College has a roughly similar share of female students as the U.S. total (55.0 percent compared with 58.5 percent) and roughly similar share of non-minority students (65.4 percent compared with 60.8 percent).
The computers used in the study were provided by Computers for Classrooms, Inc., a computer refurbisher located in Chico, California. To implement the study, we first obtained a list of all financial aid students with less than 24 units attending the college in Fall 2006. The 24-unit cutoff was chosen to capture new and relatively new students as of Fall 2006; it ensures that students in the study have less than two previous full-time semesters at the college. In Fall 2006, there were 1,042 financial aid students and 6,681 students in total who met the course unit restriction. The Butte College Office of Financial Aid advertised the program by mailing letters to all financial aid students.
Participation in the program involved returning a baseline questionnaire and consent form releasing future academic records from the college for the study. Students who already owned computers were not excluded from participating in the lottery because their computers may have been very old and not fully functional with the latest software and hardware. The results presented below are not sensitive to the exclusion of these students who represented 29 percent of the sample. We received 286 responses with valid consent forms and completed questionnaires, and received enough funding to randomly provide free computers to 141 of these students.
Eligible students were notified by mail and instructed to pick up their computers at the Computers for Classrooms warehouse. More than 90 percent of eligible students picked up their free computers by the end of November 2006. All correspondence with students was conducted through Butte College’s Office of Financial Aid. We conducted a follow-up survey of study participants in late Spring/Summer 2008 with a response rate of 65 percent. Butte College provided us with detailed administrative data on all students in July 2008.
Who applied for the computer giveaway?
Table 1 reports administrative data from the original application to the college for students applying to the computer-giveaway program, all financial aid students, and all entering students. The racial composition of study participants is very similar to that of all financial aid students, the group initially targeted for the study. A total of 60.1 percent of study participants are white compared to 61.3 percent of all financial aid students. Latinos, the largest minority group, comprise 16.8 percent of study participants and 15.6 percent of all financial aid students. A similar percentage of primarily English language students also participated in the study compared to all financial aid students. The one difference between study participants and the population of financial aid students is that a larger percentage of women than men applied for the computer lottery. Women comprise 62.6 percent of all study participants, which is higher than the 54.7 percent among all financial aid students.
Information about students’ educational goals was also collected on the application form. The most common response is “undecided on goal,” which represents 37.4 percent of study participants and 36.5 percent of all financial aid students. The second most common goal reported by applicants is to “obtain an associate degree and transfer to a four-year institution.” Of the study participants, 20.6 percent reported this goal compared to 23.3 percent of all financial aid students. The next most common goal reported is to “transfer to a four-year institution without an associate degree.” Slightly more than 10 percent of both study participants and all financial aid students reported this goal. Overall, the distributions of reported goals at the time of application are very similar.
A comparison to all students reveals that study participants are more likely to be female than the total student body. Women comprise 55.3 percent of all students attending the college. Study participants as well as all financial aid students are more likely to be from minority groups than all students, but are less likely to be non-primary English language students, which may be related to applying for financial aid. These differences, however, are small.
Although study participants are a self-selected group from all financial aid students, they do not appear to be very different from either financial aid students or the entire student body along observable characteristics. They may differ, however, along dimensions directly related to participation in the study. Specifically, they may have less access to computers and disposable income than other financial aid students. These differences have implications for our ability to generalize the results based on study participants to all community college students receiving financial aid. But, students with limited access to computers and financial resources are the population of most interest for any policy intervention involving the provision of free or subsidized computers.
Comparability of Treatment and Control Groups
Table 2 reports a comparison of background characteristics for the treatment and control groups. All study participants were given a baseline survey that included detailed questions on gender, race, age, high school grades, household income, parents’ education, and other characteristics. The average age of study participants is 25. More than half of the students have a parent with at least some college education, and about one third of students received mostly grades of A and B in high school. A little over one quarter of study participants have children and one third live with their parents. As would be expected among financial aid students, study participants have low income levels with only 17 percent having current household incomes of $40,000 or more. The majority of study participants have household incomes below $20,000 and more than half are employed. The treatment and control groups are also similar along the educational goals reported at the time of application.
The similarity of the mean values of these baseline characteristics confirms that the randomization created comparable treatment and control groups for the experiment. We do not find large differences for any of the characteristics, and none of the differences are statistically significant.
First-Stage Results for Computer Use
If distributing free computers has an impact on educational outcomes, we might expect to see more hours of computer use by the treatment group than the control group. Home computers, however, only increase the potential for more computer use and actual use may decline if home computers allow for more efficient use of computers than school computers. Efficiency gains may result from increased familiarity and better suited software on home computers, but may also result from fewer distractions or less interrupted time than found in crowded campus computer labs. Roughly one quarter of students report experiencing wait times when using computers at the college. Nevertheless, it is useful to compare total hours of computer use before exploring the impacts on educational outcomes.
To investigate this issue we examine data from the follow-up survey which was conducted in Spring 2008. Although these data are not as comprehensive in terms of coverage of students as the administrative data that we use to examine educational outcomes (which are available for all study participants), they provide some suggestive information on first-stage effects. Table 3 reports the total number of hours of computer use for the treatment and control groups for the 185 students completing follow-up surveys.
The treatment group reports using computers 16.1 hours per week on average compared to 13.4 hours per week for the control group. The estimated difference of 2.7 hours is large, representing 20 percent more hours, but is not statistically significant at conventional levels for a two-tailed test (the p-value is 0.15). Controlling for baseline characteristics, we find a similar difference in hours of computer use between the treatment and control groups.
As another first-stage result, we examine whether home computers allow students increased flexibility in the times at which they use computers. Table 3 reports the percentages of students reporting using computers at various times of the day to complete school assignments. Students who received home computers were more likely to report using computers in the daytime, late evening, and nighttime than students who did not receive computers, although the differences are not precisely measured. The treatment and control groups are equally likely to use computers in the early morning and early evening. Although certainly not conclusive, the estimated differences provide some suggestive evidence that the provision of home computers increased both the total time and the flexibility of computer use.
4. Estimating the Effects of Home Computers on Educational Outcomes
To examine whether the home computers improved educational outcomes, we start by briefly examining the full distribution of grades received in courses taken by the two groups of students. Figure 1 displays grade distributions for all courses taken by study participants after Fall 2006 (when the computers were distributed) through Spring 2008. Grade information was provided by Butte College in its administrative data and is available for all students and all quarters in the study period.
Butte College assigns letter grades of A, B, C, D, and F, and non-letter grades of CR and NC. The CR grade is considered equivalent to a C or higher, and C grades and higher are considered satisfactory. D grades are considered passing but unsatisfactory, and an NC grade is considered unsatisfactory or failing. The treatment group appears more likely than the control group to receive B and C grades, less likely to take courses for non-letter grades (i.e. CR/NC), and less likely to receive an NC grade.
The primary measure used by the college for measuring the success and eligibility of students for various programs is the percentage of courses in which students receive a satisfactory or higher grade (i.e. C, B, A or CR grade), referred to as the “course success rate.” Panel I of Table 4 reports estimates for the course success rate by treatment status. For the treatment group, 82.6 percent of courses received a successful grade.
The control group has a course success rate of 80.7 percent. Another important educational outcome is the percentage of courses taken for grades, which may be an indicator of student confidence in doing well in courses and the future possibility of transferring to other community colleges or four-year colleges. Among the treatment group, 95.1 percent of courses are taken for letter grades compared with 91.1 percent of courses taken by the control group (see Panel II of Table 4).
In addition to the effects on grades, receiving a free computer may affect longer term outcomes such as transferring to a four-year college or graduating with a degree from the community college. Although Butte College and other community colleges in California do not collect information on whether their students transfer to four-year colleges, we can examine whether students take transferable courses. Transfer course enrollment is tracked by the California Community Colleges Chancellor’s Office (2009) as a measure of community college performance.
Also, previous research using a special cohort of students linked through California state system-wide administrative data indicates that enrollment in transferable courses in the first and second years of study is a major predictor of who eventually transfers to 4-year universities (Sengupta and Jepsen 2006). All courses offered at the college can be identified as being transferable to the California State University or University of California systems. Panel III of Table 4 reports estimates for the treatment/control difference in the probability of taking transferable courses. Of the courses taken by the control group, 74 percent are transferable. For the treatment group, the share of transferable courses is 6.3 percentage points higher, at 80.3 percent.
The college also provided us with information on whether students received a degree or certificate by summer 2008. Students may have received an associate degree, a vocational degree, or a vocational certificate. Estimates of the treatment-control differences are reported in Panel IV of Table 4. We find that 18.4 percent of computer-eligible students received a degree by summer 2008, compared with 15.9 percent of students who were not eligible for computers.
The estimates reported in Table 4 provide evidence of positive and statistically significant effects of home computers on two of the reported educational outcomes. Although the treatment estimates are imprecisely measured for the course success and graduation rates, the direction and rough magnitude of these point estimates are consistent with the other educational outcomes. To provide additional evidence on the overall effects of home computers on educational outcomes we create a summary index that aggregates information over multiple treatment effect estimates following the approach recently taken in Kling, Liebman and Katz (2007) and Karlan and Zinman (2008). Specifically, we create an index of educational outcomes that combines the four educational measures reported above.
By aggregating the separate educational outcomes we improve the statistical power to detect treatment effects that work in the same direction, which is the case here. To create the index we first calculate z-scores for each of the dependent variables by subtracting the control group mean and dividing by the control group standard deviation. Thus, each dependent variable has mean zero and standard deviation equal to one for the control group. The educational outcome index is then calculated from an equally-weighted average of the z-scores for the four dependent variables. The treatment effect estimate for this index indicates where the mean of the treatment group is in the distribution of the control group in terms of standard deviation units.
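A minimal sketch of this index construction follows; the function name and array layout are our own illustrative choices, not from the paper, and we use the sample standard deviation of the control group for the standardization.

```python
import numpy as np

def summary_index(outcomes, treated):
    """Equally weighted average of z-scores, standardized against the
    control group, in the style of Kling, Liebman and Katz (2007).

    outcomes: (n_students, n_outcomes) array of educational measures
    treated:  boolean array of length n_students
    Returns the treatment effect on the index in control-group
    standard deviation units.
    """
    outcomes = np.asarray(outcomes, dtype=float)
    control = outcomes[~treated]
    # z-score each outcome with the control group's mean and SD, so each
    # z-scored outcome has mean 0 and SD 1 in the control group
    z = (outcomes - control.mean(axis=0)) / control.std(axis=0, ddof=1)
    # equally weighted average across outcomes
    index = z.mean(axis=1)
    # control-group mean of the index is 0 by construction
    return index[treated].mean() - index[~treated].mean()
```

Because each z-score has control mean zero, the treatment-group mean of the index directly measures how far the treatment group sits in the control distribution, in standard deviation units.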
Panel V of Table 4 reports estimates for the educational outcome index. By definition, the control group mean for the index is 0. The treatment group index, which captures the treatment effect, is 0.1368, implying that the treatment group mean is 0.1368 standard deviations higher than the control group mean. This difference between the treatment and control groups in the educational summary index is statistically significant. The treatment effect estimate for the summary measure confirms the findings across the individual educational outcomes from the experiment: we consistently find positive point estimates for the treatment effects, although some estimates are statistically insignificant.
To improve precision and confirm that the results are robust to the inclusion of controls, we estimate several regressions for the educational outcomes. The regression equation is straightforward in the context of the random experiment:

(4.1) yij = α + βXi + δTi + λt + λd + ui + εij,
where yij is the outcome for student i in course j, Xi includes baseline characteristics, Ti is the treatment indicator, λt are quarter fixed effects, λd are department fixed effects, and ui + εij is the composite error term. The effect of becoming eligible for a free computer, or the “intent-to-treat” effect of the giveaway program, is captured by δ. All specifications are estimated using OLS, and robust standard errors are reported with adjustments for multiple observations per student (i.e. clustered by student). Marginal effects estimates from probit and logit models are similar and are thus not reported.
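As a minimal sketch of the estimating equation, note that with a single binary regressor and no controls the OLS coefficient δ in (4.1) reduces to the treatment-control difference in means. The outcome and assignment data below are hypothetical:

```python
from statistics import mean

# Hypothetical outcomes; T marks treatment (computer-eligible) students.
y = [0.62, 0.55, 0.70, 0.48, 0.66, 0.51]
T = [1, 1, 1, 0, 0, 0]

# Intent-to-treat estimate as a simple difference in means.
itt = mean(yi for yi, ti in zip(y, T) if ti == 1) - mean(
    yi for yi, ti in zip(y, T) if ti == 0)

# The same number via the OLS slope formula cov(T, y) / var(T).
Tbar, ybar = mean(T), mean(y)
slope = sum((ti - Tbar) * (yi - ybar) for ti, yi in zip(T, y)) / sum(
    (ti - Tbar) ** 2 for ti in T)

print(itt, slope)
```

In the paper's specifications the controls and fixed effects change the standard errors far more than the point estimates, which is what randomization leads one to expect.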
Specification 1 of Table 4 reports estimates of the treatment effect after controlling for gender, race/ethnicity, age, parents’ highest education level, high school grades, presence of own children, live with parents, family income, initial campus location, has a job, and primarily speaks English. These detailed controls are taken from the baseline survey administered to all study participants or from the application form to the college, which are both measured prior to the receipt of the free computers. The inclusion of these controls results in similar treatment effect estimates for all of the educational outcomes. The precision of the estimates also improves with the inclusion of these baseline controls.
One concern is that students in the two groups may have taken different types of courses, which could ultimately be responsible for differences in grades. A comparison of course departments between the treatment and control groups, however, reveals similar distributions. Table 5 reports distributions for the 10 most popular departments. The most popular department is mathematics: a similar percentage of treatment and control students take mathematics courses (11.5 percent compared to 11.1 percent, respectively). The next most popular department is business computer information systems, representing 8.5 percent of the treatment group and 10.5 percent of the control group. To further investigate this concern, we estimate regressions that add fixed effects for course departments and the quarter in which the course was taken (Specification 2 of Table 4). The coefficient estimates on the treatment variable remain similar for all of the educational outcomes.
In Specification 3, we add administrative information on basic assessment tests collected by the college for most entering students. Assessments in math, English and reading are available; these assessment scores are used for student placement in courses. Adding more confidence to the results, we find similar treatment effect estimates after including these assessment scores, which are generally strong predictors of educational outcomes. Overall, we find that the treatment-control differences in educational outcomes are not sensitive to controlling for detailed student characteristics and other factors.
Compliance and Local Average Treatment Effects
We next address the potential problem of imperfect compliance in both the treatment and control groups. All of the estimates discussed thus far include the full sample of computer-eligible students in the treatment group. We start by noting again that 92 percent of eligible students pick up their free computers (see Table 2). To check that the “treatment-on-the-treated” estimate does not differ substantially from the previous “intent-to-treat” estimate, we estimate an instrumental variables regression. Specifically, we use computer eligibility as an instrumental variable for whether the student picked up the free computer. The first-stage regression for the probability of computer receipt is:

(4.2) Ci = ω + γXi + πTi + λt + λd + ui + εij.
The second-stage regression is:
(4.3) yij = α2 + β2Xi + φĈi + λt + λd + ui + εij,
where Ĉi is the predicted value of computer ownership from (4.2). In this case, φ provides an estimate of the “treatment-on-the-treated” effect. The IV estimates are reported in Specification 2 of Table 6 (Specification 1 reports the OLS estimates for convenience). As expected given the high compliance rate, the estimates are only slightly larger than the intent-to-treat estimates and approximate the simple OLS coefficients divided by 0.92.
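The observation that the IV estimates approximate the OLS coefficients divided by 0.92 follows from the Wald logic of the estimator. A minimal sketch, using the paper's pick-up rate and a hypothetical ITT coefficient:

```python
# With one-sided noncompliance, the treatment-on-the-treated (TOT) estimate is
# the reduced-form (ITT) coefficient scaled by the first stage.
itt_estimate = 0.0733   # hypothetical reduced-form (ITT) coefficient
pickup_rate = 0.92      # share of eligible students who picked up a computer (Table 2)

tot = itt_estimate / pickup_rate
print(tot)
```

Because the first stage is close to one, the rescaling moves the estimates only slightly, exactly as the text notes.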
As in most social experiments, it is not possible to prevent the control group from receiving an intervention that potentially has the same effect as the treatment intervention. In this case, the control group may have purchased computers on their own during the study period. From the follow-up survey taken at the end of the study period, we find that 29 percent of the control group reports obtaining a new computer, but no information is available on when they purchased it. Although students in the control group who purchased their computers near the end of the study period are not likely to have a large effect on the estimates, students in the control group purchasing computers at the beginning of the study period may dampen estimated differences between the treatment and control groups.
To investigate these issues further, we expand on the “treatment-on-the-treated” results to estimate the more general local average treatment effect (LATE). The estimates reported in Specification 2 implicitly assume that all students in the control group received a computer at the end of the study period. The other extreme is to assume that all of the students in the control group who reported obtaining a computer in the follow-up survey received it at the beginning of the study period. In this case, control group students obtaining computers contribute to the estimation of (4.2) with Ci=1. Specification 3 in Table 6 reports these estimates. The new IV estimates are now larger than the TOT estimates and are roughly 35-40 percent larger than the original OLS estimates.
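The same Wald logic extends to two-sided noncompliance: the first stage becomes the difference in computer take-up between the two groups. The sketch below is a simplified illustration only; the ITT coefficient is hypothetical, the 0.29 control-group rate is applied under the upper-bound assumption that all such computers were acquired at the start of the study, and the paper's regression-based estimates, which include covariates and clustering, scale somewhat differently:

```python
# Wald/LATE sketch with two-sided noncompliance: eligibility is the instrument,
# and the first stage is the treatment-control difference in computer take-up.
itt_estimate = 0.0733      # hypothetical reduced-form (ITT) coefficient
takeup_treatment = 0.92    # pick-up rate among eligible students (Table 2)
takeup_control = 0.29      # upper-bound assumption: all acquired at the start

late = itt_estimate / (takeup_treatment - takeup_control)
print(late)
```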
The IV estimates indicate that the effects of having a home computer on educational outcomes are nontrivial, but not extremely large. The estimates imply effects of 3 to 4 percent relative to the control group mean for the rate of taking courses for grades. For taking CSU- and UC-eligible transfer courses, the effects of having a home computer range from 10 to 13 percent of the control group mean. Although not statistically significant, the point estimates for the course success rate imply a 4 to 5 percent effect relative to the control group mean, and the point estimates for the graduation rate imply home computer effects of 12 to 15 percent. The summary index estimates indicate that the treatment group mean is 0.12 to 0.14 standard deviation units higher than the control group mean. As discussed further below, these estimated magnitudes of the effects of home computers on educational outcomes are smaller than those typically found in non-experimental studies.
The IV estimates for the effects on graduation are useful for generating a rough, back-of-the-envelope estimate of the value of a computer. Although estimates of the returns to obtaining an associate’s degree relative to not obtaining one vary widely, they appear to be in the range of 5 to 11 percent (Kane and Rouse 1995, 1999). Using the mid-point of this range implies that the returns to an associate’s degree are 8 percent or $2,500 per year, and the gain in the present value of lifetime earnings is $32,000.
The average LATE point estimate on the graduation rate of 0.021 implies that the computer is worth $675 in present value of lifetime earnings. If the cost of the computers is $500, then there may be some under-investment in personal computers, although it does not appear to be extremely large. Financial constraints may bind for some students, especially low-income students, limiting purchases of computers for educational use even when they would otherwise be optimal, but there might also be technical and informational constraints due to having less previous experience with computers.
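The back-of-the-envelope calculation above can be reproduced in a few lines. The 8 percent return, the $2,500 annual gain, the $32,000 present value, and the 0.021 graduation effect are from the text; the 7 percent discount rate over a 40-year working life is our assumption, chosen because it roughly reproduces the $32,000 present-value figure:

```python
# Present value of a $2,500/year earnings gain as an annuity, under an assumed
# 7% discount rate and 40-year working life (these two parameters are not
# stated in the text and are chosen to roughly match its $32,000 figure).
annual_gain = 2500.0
r, years = 0.07, 40
pv_annuity = annual_gain * (1 - (1 + r) ** -years) / r

# Expected value of the computer: graduation effect times the lifetime gain,
# using the text's $32,000 present value (the text rounds the result to ~$675).
late_graduation = 0.021
computer_value = late_graduation * 32000.0

print(pv_annuity, computer_value)
```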