Source: R “Ray” Wang
Mainframes entered the market in the early 1950s, when IBM and the “seven dwarfs” (Burroughs, UNIVAC, NCR, Control Data, Honeywell, GE, and RCA) created the computing age and competed to run critical applications, sophisticated modeling, and large-scale transactions and workloads for the largest organizations. Over the past seven decades, compute power, storage, and networking have moved through successive waves of centralization and decentralization with each wave of disruptive technology adoption.
In fact, many competitive industry leaders value a hybrid approach, with cloud and mainframe together forming a trusted, efficient architecture for their enterprises, delivering intense hybrid workloads: securely filing healthcare claims, filling life-saving prescriptions, booking travel, initiating credit checks, and reducing fraud in online banking. A hybrid approach lets top businesses apply the right technologies to the right workloads and reduce their risk. Why? Elasticity, distributed compute and storage, and shared infrastructure have withstood the test of time. As the CIO of a Fortune 50 financial services firm noted, “The security, performance, reliability, and value equation of the mainframe give us confidence to continue with our hybrid approach. The cost of data movement and I/O costs would be punishingly expensive with our current public cloud contracts.”
We’ve also seen this happen in the midrange. Many customers are going hybrid, running “legacy” (read: “proven”) applications in-house or on machines hosted by a managed service provider, and tying those systems to SaaS providers.