In information technology, the term “high availability” refers to a system or component that remains continuously operational for a desirably long period of time. Simply put, high availability means a 100% operational, always running, never-failing system or component.
Technically, 100% availability is impossible to achieve, but a widely held, difficult-to-reach standard of availability for a system or product is known as “five nines” availability: 99.999%.
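To make the “nines” concrete, here is a quick sketch (assuming a 365-day year) of how much downtime each availability level actually permits:

```python
# Downtime allowed per year at a given availability level.
# Assumes a 365-day year (525,600 minutes); leap years shift these slightly.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.95, 99.99, 99.999):
    print(f"{pct}% availability -> {downtime_minutes_per_year(pct):.1f} min/year of downtime")
```

At five nines (99.999%) the budget is only about 5.3 minutes of downtime per year, versus roughly 26 minutes at 99.995%.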
There has been much debate about the need for high availability infrastructure. Some argue that there isn’t much difference between a 99.999% operational rate and a 99.995% one. However, one needs to factor in the following scenarios:
- What if the downtime happens at peak time?
- What if the 0.004% difference means that a majority of your customers will be affected?
- What if the company’s productivity is at stake when applications necessary for operations come to a halt?
These scenarios show why high availability infrastructure should be not just an option but a necessity. Here are three reasons why you should have high availability infrastructure:
1. It will save you money.
According to a report, the most common causes of downtime are hardware failure, upgrades and migrations. The hourly cost of downtime (money lost for a business) ranges from $8,580.99 for a small business to as much as $686,250.00 for a large company. (source)
The cost of maintaining highly available infrastructure is small compared to the potential losses a business might incur during a downtime or outage.
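As a rough illustration using the hourly figures above (and assuming, for simplicity, that the hourly cost stays constant, which real outages rarely do), annual downtime cost can be estimated directly from an availability level:

```python
# Rough estimate of annual downtime cost from an availability level and an
# hourly cost of downtime. Hourly figures are the ones cited in the article;
# the constant-rate assumption is a simplification for illustration.

HOURS_PER_YEAR = 365 * 24

def annual_downtime_cost(availability_pct: float, hourly_cost: float) -> float:
    """Expected yearly downtime cost at a given availability percentage."""
    downtime_hours = HOURS_PER_YEAR * (1 - availability_pct / 100)
    return downtime_hours * hourly_cost

# Small business at 99.9% availability (~8.8 hours of downtime per year):
print(f"${annual_downtime_cost(99.9, 8_580.99):,.2f}")

# Large company at the same availability level:
print(f"${annual_downtime_cost(99.9, 686_250.00):,.2f}")
```

Even at three nines, a large company in this scenario loses millions per year, which dwarfs the price of the extra infrastructure needed to reach four or five nines.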
2. It will protect your business reputation and earn customer satisfaction and loyalty.
Consider a business in the banking industry: even a few seconds of downtime could be disastrous for clients, traders and consumers.
In today’s global online economy, an organization’s IT end-users and customers demand 24/7 access to applications. Given these high stakes, it’s not surprising that business continuity and disaster recovery are becoming top priorities for organizations of all types and sizes.
3. Investing in fault-tolerant hardware rather than traditional server clustering is a longer-term, more highly available and more cost-effective solution.
According to reports, 67% of best-in-class organizations use fault-tolerant servers and software-based fault-tolerance solutions to provide high availability. While the upfront cost of this type of hardware is considerably high, the complexity of implementation and the level of human interaction required after a failure are very low, which keeps operational and management costs low on average.
Compare that to traditional high-availability server clustering, where all the key factors involved (initial purchase price, complexity of implementation and level of human interaction after failure) are high. In addition, a failover cluster instance does not by itself provide data protection: data loss depends on the storage system implementation, which is another cost altogether.
An IT system or network comprises many parts, and for high availability to be achieved, planning should cover backup, failover processing, and data storage and access. Examples include redundant array of independent disks (RAID) and storage area network (SAN) configurations for storage.
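Failover processing itself can be as simple as promoting a standby node after the primary fails several consecutive health checks. The sketch below is a minimal illustration of that idea; the node names, threshold, and function are assumptions for this example, not the behavior of any specific clustering product:

```python
# Minimal failover decision sketch: given a sequence of health-check results
# for the primary node, decide which node should be serving traffic.
# Threshold and node names are illustrative assumptions.

def choose_active(health_samples, primary="primary", standby="standby", threshold=3):
    """Promote the standby after `threshold` consecutive failed checks.

    health_samples: iterable of booleans, True if the primary passed a check.
    Returns the name of the node that should serve traffic.
    """
    consecutive_failures = 0
    for ok in health_samples:
        consecutive_failures = 0 if ok else consecutive_failures + 1
        if consecutive_failures >= threshold:
            return standby  # fail over to the standby node
    return primary  # primary never breached the failure threshold

# One transient failure does not trigger failover; three in a row does.
print(choose_active([True, False, True, True]))
print(choose_active([True, False, False, False]))
```

Requiring several consecutive failures before failing over is a common way to avoid “flapping” on a single transient glitch; real systems layer fencing and data replication on top of this basic decision.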