With the advent of the computer age, the growth of the internet has necessitated technologies that can cope with the demands of users while remaining within the limitations of current hardware. With multitasking having become the norm, the availability of media and applications online has made it increasingly important to provide high availability (Marcus 2003).
The mission-critical applications that have surfaced on the internet have added pressure to ensure that highly available services are always ready. As such, this brief discourse will discuss the current state of high availability technology as well as recent trends and variations. High availability, as the term suggests, refers to systems in information technology that are not only continuously available but also continuously operational for long periods of time (Marcus 2003).
The term availability refers to the access that users or members of the user community have to the system. This access can include anything from uploading files and changing entries to updating or simply browsing previous work (Marcus 2003). Failure to access the system results in downtime, or unavailability. An example would be the way community users expect to be able to use Facebook to chat, watch videos, update links and upload pictures all at the same time.
With a network that has low availability, users will occasionally experience failures when logging in or accessing different functions of a website because of the downtime required for system updating and maintenance (Ulrick 2010). This downtime can be damaging for a website or an internet application because it reduces the desirability of the technology. Under pressure to provide complete and persistent accessibility, companies have tried to achieve an optimum “100% operational”, or never-failing, availability status. One way of providing almost constant availability (high availability) is by creating clusters.
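The cost of falling short of “100% operational” can be made concrete by converting an availability percentage into annual downtime. The following sketch (the availability figures are standard illustrative values, not taken from this essay) shows how quickly downtime accumulates even at seemingly high availability levels:

```python
# Downtime per year implied by a given availability level.
# Illustrative figures only (99%, 99.9%, 99.99%).

HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_hours(availability: float) -> float:
    """Return hours of downtime per year for a given availability fraction."""
    return (1.0 - availability) * HOURS_PER_YEAR

for a in (0.99, 0.999, 0.9999):
    print(f"{a:.2%} available -> {annual_downtime_hours(a):.2f} h downtime/year")
```

Even 99.9% availability still implies roughly 8.76 hours of downtime per year, which for a mission-critical service can be unacceptable.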
These computer systems or networks consist of several components that act as backups or failover processing mechanisms that store data and allow for access. These include the Redundant Array of Independent Disks (RAID) and the Storage Area Network (SAN), which are used as backup storage to ensure constant availability (Marcus 2003). These systems, however, are constantly evolving with the available technology, producing designs with solid membership administration, consistent group-communication subsystems, quorum subsystems, and even concurrency-control subsystems, among others.
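The redundancy idea behind RAID can be sketched in a few lines: store one extra parity block alongside the data blocks, and any single lost block can be reconstructed by XOR. This is a simplified illustration of the principle (real RAID levels involve striping, controllers and much more), not an implementation:

```python
# Simplified RAID-style parity: XOR of all data blocks is stored as a
# redundant block; XOR of the survivors reconstructs a lost block.

def parity(blocks: list[int]) -> int:
    p = 0
    for b in blocks:
        p ^= b
    return p

data = [0b1010, 0b0110, 0b1100]
p = parity(data)                       # kept on a redundant disk
lost = data[1]                         # simulate losing one block
recovered = parity([data[0], data[2], p])
print(recovered == lost)               # True
```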
By creating a clustered computer system or network, backups are created in the form of redundancies for both hardware and software. This is achieved by grouping several independent nodes, each simultaneously running a copy of the operating system (OS) and the application software (Marcus 2003). Whenever a node or daemon fails, the system can quickly be reconfigured and the existing workload passed on to the remaining functional nodes within the cluster.
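The failover behaviour described above can be sketched as follows. This is a minimal illustration of the idea, with hypothetical names (`Node`, `Cluster`); real cluster managers add heartbeats, quorum voting and fencing:

```python
# Minimal failover sketch: every node can run the workload; when the
# node holding it fails, the work is reassigned to a surviving node.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True
        self.workload: list[str] = []

class Cluster:
    def __init__(self, nodes: list[Node]):
        self.nodes = nodes

    def assign(self, task: str) -> None:
        # Place new work on the first healthy node.
        self._first_healthy().workload.append(task)

    def _first_healthy(self) -> Node:
        for node in self.nodes:
            if node.healthy:
                return node
        raise RuntimeError("no healthy nodes: cluster is down")

    def fail(self, node: Node) -> None:
        # Simulate a node (or daemon) failure: mark it unhealthy and
        # move its workload to the remaining healthy nodes.
        node.healthy = False
        orphaned, node.workload = node.workload, []
        for task in orphaned:
            self.assign(task)

cluster = Cluster([Node("a"), Node("b")])
cluster.assign("serve-requests")
cluster.fail(cluster.nodes[0])
print(cluster.nodes[1].workload)  # ['serve-requests']
```

The user community never talks to a specific node, so as long as one node survives, the service remains available.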
Thus, in theory, there is always one system available and running to handle services and access for the user community. It was reported in 1996 that lost revenue and productivity due to downtime amounted to over US$4.54 billion for American businesses alone (IBM 1998). As such, high-availability technology has been consistently upgraded to address this issue. Recent developments include the creation of High Availability Clusters (HA Clusters or Failover Clusters).
The concept is to provide greater high availability by operating several computer clusters at the same time. While this applies the same principle as high availability generally, it creates several failover systems and clusters to support it, and it retains the same practice of constant monitoring to make sure that the systems are running as programmed and as planned. Recent research in this field has shown that a principle of diminishing returns also applies.
Until recently, it was thought that by building an expansive network with several clusters, availability could be increased proportionally. However, findings show that high availability can actually decrease as more components are added to the system: instead of improving the process, the additional components undermine it. The reason for this, according to Chee-Wei Ang, is that the more complex a system is, the more potential failures arise (Ang 2007).
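This diminishing-returns effect can be made concrete with the standard reliability arithmetic for combined components: components that must all work (in series) multiply their availabilities together, so the total falls as parts are added, while redundant copies (in parallel) raise it. A short sketch, using illustrative availability figures not taken from the essay:

```python
# Combined availability: series components (all must work) lose
# availability as parts are added; parallel redundancy gains it.

from math import prod

def series_availability(parts: list[float]) -> float:
    # All components must be up at once.
    return prod(parts)

def parallel_availability(parts: list[float]) -> float:
    # At least one redundant copy must be up.
    return 1.0 - prod(1.0 - a for a in parts)

print(series_availability([0.99] * 2))    # ~0.9801
print(series_availability([0.99] * 10))   # ~0.9044 -- more parts, less availability
print(parallel_availability([0.99] * 2))  # 0.9999
```

Adding components only helps when they are genuinely redundant; every extra component on the critical path is one more thing that can fail, which is Ang's point about complexity.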
Since there are more subsystems to monitor, it becomes more difficult to pinpoint exactly where a problem lies. This can be compared to a complicated plumbing system in which it becomes difficult to find the source of a leak. Experts have argued that many highly available systems therefore use a simple design architecture built from high-quality, multipurpose components. Yet even so, these systems still require constant upgrading, patching and maintenance.
Recent developments in this field include more advanced system designs that streamline and facilitate the maintenance of systems without compromising availability. This has been achieved through load balancing and more advanced failover techniques. It must be admitted, however, that despite these developments, such systems, like all hardware, remain prone to human error and ordinary wear and tear; these cannot be avoided, though their effects can be mitigated by more effective and efficient designs.
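The combination of load balancing and failover mentioned above can be sketched as a round-robin balancer that skips backends marked unhealthy. The names (`LoadBalancer`, `node1`, etc.) are hypothetical; production balancers also perform active health checks and connection draining:

```python
# Round-robin load balancing with failover: spread requests across
# backends, skipping any that have been marked down.

from itertools import cycle

class LoadBalancer:
    def __init__(self, backends: list[str]):
        self.backends = backends
        self.healthy = set(backends)
        self._ring = cycle(backends)

    def mark_down(self, backend: str) -> None:
        self.healthy.discard(backend)

    def pick(self) -> str:
        # Walk the ring until a healthy backend is found.
        for _ in range(len(self.backends)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("all backends down")

lb = LoadBalancer(["node1", "node2", "node3"])
lb.mark_down("node2")
print([lb.pick() for _ in range(4)])  # ['node1', 'node3', 'node1', 'node3']
```

Because maintenance on one backend looks the same as a failure, nodes can be drained, patched, and returned to service one at a time without ever taking the service as a whole offline.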