This chapter presents an appraisal of related studies considered relevant to this research. It is divided into a conceptual and an empirical review. The concepts of green IT, its advantages, the latest trends in green IT, and virtualisation are appraised within the conceptual framework, followed by an empirical review of the previous literature on these subjects.
Green Information Technology (Green IT) is a term used to describe a systematic application of ecological sustainability criteria (such as pollution prevention, product stewardship, use of clean technologies) to the creation, sourcing, use and disposal of the IT technical infrastructure as well as within the IT human and managerial practices that directly or indirectly address environmental sustainability in organizations (Mittal, 2014).
The goals of green IT are similar to those of green chemistry: reduce the use of hazardous materials, maximize energy efficiency during the product’s lifetime, and promote the recyclability or biodegradability of defunct products and factory waste (Mittal, 2014). Green IT is important for all classes of systems, ranging from handheld devices to large-scale data centres.
Many corporate IT departments have green IT initiatives to reduce the environmental effect of their IT operations.
Modern IT systems rely upon a complicated mix of people, networks, and hardware; as such, a green computing initiative must cover all of these areas as well. A solution may also need to address end user satisfaction, management restructuring, regulatory compliance, and return on investment (ROI) (Mittal, 2014). There are also considerable fiscal motivations for companies to take control of their own power consumption; of the power management tools available, one of the most powerful may still be simple, plain, common sense.
The latest trends in green IT are described below:
Product longevity
Gartner maintains that the PC manufacturing process accounts for 70% of the natural resources used in the life cycle of a PC. More recently, Fujitsu released a Life Cycle Assessment (LCA) of a desktop showing that manufacturing and end of life account for the majority of the desktop’s ecological footprint. Therefore, the biggest contribution to green computing is usually to prolong the equipment’s lifetime. Another report from Gartner (2007) recommends looking for product longevity, including upgradability and modularity. For instance, manufacturing a new PC leaves a far bigger ecological footprint than manufacturing a new RAM module to upgrade an existing machine.
Data centre design
Data centre facilities are heavy consumers of energy, accounting for between 1.1% and 1.5% of the world’s total energy use in 2010. The U.S. Department of Energy estimates that data centre facilities consume 100 to 200 times more energy than standard office buildings. Energy efficient data centre design should address all of the energy use aspects of a data centre: from the IT equipment to the HVAC (heating, ventilation and air conditioning) equipment to the actual location, configuration and construction of the building (Kurp, 2016).
The U.S. Department of Energy specifies five primary areas on which to focus energy efficient data centre design best practices: information technology (IT) systems, environmental conditions, air management, cooling systems and electrical systems. Energy efficient data centre design should help to better utilize a data centre’s space and increase performance and efficiency (Kurp, 2016).
In 2018, three US patents applied facility design to cool data centres and generate electrical power simultaneously from internal and external waste heat. All three employ a silo design in which the air that cools the silo’s computing racks is recirculated: US Patent 9,510,486 uses the recirculating air for power generation; its sister patent, US Patent 9,907,213, forces the recirculation of that air; and US Patent 10,020,436 exploits thermal differences in temperature to achieve negative power usage effectiveness. With negative power usage effectiveness, the temperature differences are at times so extreme that the computing facility can run entirely from sources other than the power used for computing.
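Power usage effectiveness (PUE) is the ratio of total facility energy to the energy delivered to IT equipment; an ideal facility approaches 1.0, and the "negative" regime described above would require the facility to export more recovered energy than the overhead it consumes. A minimal numeric sketch in Python (the figures are illustrative, not taken from the cited patents):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy.
    An ideal data centre approaches 1.0."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers only:
print(pue(1500.0, 1000.0))  # 1.5 -- typical legacy facility
print(pue(1100.0, 1000.0))  # 1.1 -- efficient modern facility
```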
Software and deployment optimization
Algorithmic efficiency: The efficiency of algorithms affects the amount of computer resources required for any given computing function, and there are many efficiency trade-offs in writing programs. Algorithm changes, such as switching from a slow (e.g. linear) search algorithm to a fast (e.g. hashed or indexed) search algorithm, can dramatically reduce the resources required for a given task. A study by Rear (2009) estimated that the average Google search released 7 grams of carbon dioxide (CO2). However, Google disputed this figure, arguing instead that a typical search produced only 0.2 grams of CO2.
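The linear-versus-hashed trade-off mentioned above can be demonstrated in a few lines of Python: membership testing in a list scans elements one by one (O(n)), while a hash-based set locates the element in roughly constant time:

```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Worst case for the list: the element is at the very end.
linear = timeit.timeit(lambda: n - 1 in as_list, number=100)
hashed = timeit.timeit(lambda: n - 1 in as_set, number=100)

print(f"linear search: {linear:.4f}s, hashed search: {hashed:.4f}s")
```

On typical hardware the hashed lookup is several orders of magnitude faster, which translates directly into fewer CPU cycles and less energy per query.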
Resource allocation: Algorithms can also be used to route data to data centres where electricity is less expensive. Researchers have tested an energy allocation algorithm that successfully routes traffic to the location with the cheapest energy costs. The researchers project up to a 40 percent savings on energy costs if their proposed algorithm were to be deployed. However, this approach does not actually reduce the amount of energy being used; it reduces only the cost to the company using it. Nonetheless, a similar strategy could be used to direct traffic to rely on energy that is produced in a more environmentally friendly or efficient way. A similar approach has also been used to cut energy usage by routing traffic away from data centres experiencing warm weather; this allows computers to be shut down to avoid using air conditioning (Ernesta, 2006).
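The cost-aware routing idea can be sketched as a simple selection over current energy prices; the site names and prices below are hypothetical, and a production system would of course also weigh latency and capacity:

```python
from typing import Dict

def route_request(prices_per_kwh: Dict[str, float]) -> str:
    """Pick the data centre with the cheapest current energy price.
    Hypothetical sites and prices, for illustration only."""
    return min(prices_per_kwh, key=prices_per_kwh.get)

sites = {"oregon": 0.04, "virginia": 0.07, "frankfurt": 0.11}
print(route_request(sites))  # -> oregon
```

Replacing the price table with a carbon-intensity table yields the greener variant described above: traffic follows clean energy rather than cheap energy.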
Larger server centres are sometimes located where energy and land are inexpensive and readily available. Local availability of renewable energy, a climate that allows outside air to be used for cooling, or sites where the heat produced can be put to other uses may all be factors in green siting decisions. Approaches that actually reduce the energy consumption of network devices through proper network/device management techniques are surveyed by Ernesta (2006). The authors group the approaches into four main strategies: (i) Adaptive Link Rate (ALR), (ii) Interface Proxying, (iii) Energy-Aware Infrastructure, and (iv) Energy-Aware Applications.
The Advanced Configuration and Power Interface (ACPI), an open industry standard, allows an operating system to directly control the power-saving aspects of its underlying hardware. This allows a system to automatically turn off components such as monitors and hard drives after set periods of inactivity. In addition, a system may hibernate, when most components (including the CPU and the system RAM) are turned off. ACPI is a successor to an earlier Intel-Microsoft standard called Advanced Power Management, which allows a computer’s BIOS to control power management functions (Ernesta, 2006).
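The idle-timer behaviour that ACPI enables can be modelled loosely as a mapping from inactivity time to a power state. In this toy Python sketch the state names echo ACPI's conventions, but the thresholds and transition policy are invented for illustration:

```python
# Toy model of OS idle timers driving ACPI-style power states.
# Thresholds are invented; real systems make them user-configurable.
def target_state(idle_minutes: float) -> str:
    """Map inactivity time to a power state."""
    if idle_minutes < 5:
        return "working"          # fully on (ACPI S0)
    if idle_minutes < 15:
        return "display_off"      # monitor powered down
    if idle_minutes < 60:
        return "suspend_to_ram"   # stand-by (ACPI S3)
    return "suspend_to_disk"      # hibernate (ACPI S4)

print(target_state(2), target_state(30), target_state(120))
```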
Some programs allow the user to manually adjust the voltages supplied to the CPU, which reduces both the amount of heat produced and the electricity consumed. This process is called undervolting. Some CPUs can automatically undervolt the processor depending on the workload; this technology is called SpeedStep on Intel processors, PowerNow!/Cool’n’Quiet on AMD chips, LongHaul on VIA CPUs, and LongRun on Transmeta processors.
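Undervolting saves energy because the dynamic power of CMOS logic scales roughly with the square of the supply voltage (P ≈ C·V²·f). A back-of-the-envelope Python estimate, using illustrative (not measured) capacitance, voltage and frequency figures:

```python
def dynamic_power(capacitance: float, voltage: float, frequency_hz: float) -> float:
    """Approximate dynamic CMOS power: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency_hz

# Illustrative figures only: 1.20 V stock vs. a 1.05 V undervolt.
baseline = dynamic_power(1e-9, 1.20, 3.0e9)
undervolted = dynamic_power(1e-9, 1.05, 3.0e9)
print(f"saving: {1 - undervolted / baseline:.1%}")  # ~23% less dynamic power
```

A modest 0.15 V reduction cuts dynamic power by roughly a quarter, which is why voltage scaling is such an effective power-management lever.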
Data centre power
Data centres, which have been criticized for their extraordinarily high energy demand, are a primary focus for proponents of green computing. According to a Greenpeace study, data centres represent 21% of the electricity consumed by the IT sector, about 382 billion kWh a year. Data centres can potentially improve their energy and space efficiency through techniques such as storage consolidation and virtualisation. Many organizations are aiming to eliminate underutilized servers, which results in lower energy usage (Heikenfeld, 2011). The first step toward this aim is the training of data centre administrators. The U.S. federal government set a minimum 10% reduction target for data centre energy usage by 2011 (Heikenfeld, 2011). With the aid of a self-styled ultra-efficient evaporative cooling technology, Google Inc. has been able to reduce its energy consumption to 50% of the industry average.
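The savings from eliminating underutilized servers follow from simple arithmetic; this hedged Python sketch assumes a typical idle draw of 150 W per server (an illustrative figure, not one from the cited study):

```python
def consolidation_savings_kwh(servers_removed: int,
                              avg_idle_power_w: float = 150.0,
                              hours_per_year: int = 8760) -> float:
    """Annual energy saved by decommissioning idle servers.
    The 150 W idle draw is an assumed figure for illustration."""
    return servers_removed * avg_idle_power_w * hours_per_year / 1000.0

print(consolidation_savings_kwh(10))  # kWh/year for ten idle servers
```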
Operating system support
Microsoft Windows has included limited PC power management features since Windows 95. These initially provided for stand-by (suspend-to-RAM) and a monitor low-power state. Later iterations of Windows added hibernate (suspend-to-disk) and support for the ACPI standard. Windows 2000 was the first NT-based operating system to include power management. This required major changes to the underlying operating system architecture and a new hardware driver model. Windows 2000 also introduced Group Policy, a technology that allowed administrators to centrally configure most Windows features (Heikenfeld, 2011). However, power management was not one of those features, probably because the power management settings relied on a connected set of per-user and per-machine binary registry values, effectively leaving it up to each user to configure their own power management settings.
This approach, which is not compatible with Windows Group Policy, was repeated in Windows XP. The reasons for this design decision by Microsoft are not known, and it has drawn heavy criticism. Microsoft significantly improved matters in Windows Vista by redesigning the power management system to allow basic configuration through Group Policy, although the support offered is limited to a single per-computer policy. The most recent release, Windows 7, retains these limitations but includes refinements for timer coalescing, processor power management, and display panel brightness. The most significant change in Windows 7 is in the user experience: the prominence of the default High Performance power plan has been reduced with the aim of encouraging users to save power (Heikenfeld, 2011).
There is a significant market in third-party PC power management software offering features beyond those present in the Windows operating system. Most products offer Active Directory integration and per-user/per-machine settings with the more advanced offering multiple power plans, scheduled power plans, anti-insomnia features and enterprise power usage reporting (Heikenfeld, 2011). Notable vendors include 1E NightWatchman, Data Synergy PowerMAN (Software), Faronics Power Save, Verdiem SURVEYOR and EnviProt Auto Shutdown Manager.
Linux systems started to provide laptop-optimized power-management in 2005, with power-management options being mainstream since 2009.
Desktop computer power supplies are in general 70-75% efficient, dissipating the remaining energy as heat. A certification program called 80 Plus certifies PSUs that are at least 80% efficient; typically, these models are drop-in replacements for older, less efficient PSUs of the same form factor. As of July 20, 2007, all new Energy Star 4.0-certified desktop PSUs must be at least 80% efficient.
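The wall-socket draw implied by a given PSU efficiency follows directly from the load; a short Python sketch comparing a legacy supply with an 80 Plus unit:

```python
def wall_power_w(dc_load_w: float, efficiency: float) -> float:
    """AC power drawn from the wall for a given DC load;
    the difference is dissipated as heat inside the supply."""
    return dc_load_w / efficiency

# A 300 W load on a 72%-efficient supply vs. an 80%-efficient one:
print(round(wall_power_w(300, 0.72), 1))  # 416.7 W
print(round(wall_power_w(300, 0.80), 1))  # 375.0 W
```

For this load, the more efficient supply draws about 42 W less from the wall, all of which would otherwise have been wasted as heat.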
Evaluation of Related Studies. (2019, Nov 28). Retrieved from https://studymoose.com/evaluation-of-related-studies-essay