Kubernetes Use Cases in Industry

Tushar Agarwal
9 min read · Apr 1, 2021

Five main reasons why anyone should adopt Kubernetes:

Here are five fundamental business capabilities that Kubernetes can drive in the enterprise, be it large or small. And to add teeth to these use cases, we have identified some real-world examples that validate the value enterprises are getting from their Kubernetes deployments.

  1. Faster time to market
  2. IT cost optimization
  3. Improved scalability and availability
  4. Multi-cloud (and hybrid cloud) flexibility
  5. Effective migration to the cloud

Let’s look at the values in greater detail next.

1. Faster time to market (aka improved app development/deployment efficiencies)

Kubernetes enables a “microservices” approach to building apps. Now you can break up your development team into smaller, more agile teams, each focused on a single microservice. APIs between these microservices minimize the amount of cross-team communication required to build and deploy. So, ultimately, you can scale multiple small teams of specialized experts who each help support a fleet of thousands of machines.

Kubernetes also allows your IT teams to manage large applications across many containers more efficiently by handling many of the nitty-gritty details of maintaining container-based apps. For example, Kubernetes handles service discovery, helps containers talk to each other, and arranges access to storage from various providers such as AWS and Microsoft Azure.
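To make the service discovery point concrete, here is a minimal sketch of a Kubernetes Service manifest. All names in it (`orders`, `orders-service`, the port) are hypothetical; the pattern is what matters: a Service gives a set of pods a stable DNS name that other containers in the cluster can call, without anyone hard-coding pod IPs.

```yaml
# Hypothetical example: a Service fronting an "orders" microservice.
# Other pods in the same namespace can reach it at
# http://orders-service:8080 via Kubernetes' built-in cluster DNS.
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders          # routes traffic to pods labeled app=orders
  ports:
    - port: 8080         # port clients connect to
      targetPort: 8080   # port the container actually listens on
```

As pods come and go, Kubernetes keeps the Service's endpoint list up to date automatically, which is exactly the nitty-gritty detail teams no longer have to manage by hand.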

Real-World Case Study

Airbnb’s transition from a monolithic to a microservices architecture is pretty amazing. They needed to scale continuous delivery horizontally, making it available to the company’s 1,000 or so engineers so they could add new services. Airbnb adopted Kubernetes to support over 1,000 engineers concurrently configuring and deploying over 250 critical services to Kubernetes. The net result is that Airbnb can now do over 500 deploys per day on average.

Tinder: One of the best examples of accelerating time to market comes from Tinder. This blog post describes Tinder’s K8s journey well. Here’s the CliffsNotes version of the story: due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability, and they realized that the answer to their struggle was Kubernetes. Tinder’s engineering team migrated 200 services and ran a Kubernetes cluster of 1,000 nodes, 15,000 pods, and 48,000 running containers. While the migration process wasn’t easy, the Kubernetes solution proved critical to ensuring smooth business operations going forward.

2. IT cost optimization

Kubernetes can help your business cut infrastructure costs quite drastically if you’re operating at massive scale. Kubernetes makes a container-based architecture feasible by packing apps together optimally, making the most of your cloud and hardware investments. Before Kubernetes, administrators often over-provisioned their infrastructure to conservatively handle unexpected spikes, or simply because it was difficult and time-consuming to manually scale containerized applications. Kubernetes intelligently schedules and tightly packs containers, taking into account the available resources. It also automatically scales your application to meet business needs, thus freeing up human resources to focus on other productive tasks.
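The “tight packing” above is driven by resource requests and limits declared on each container. A hedged sketch (the app name, image, and numbers are all hypothetical): the scheduler bin-packs pods onto nodes based on their `requests`, while `limits` cap what a container may consume at runtime.

```yaml
# Hypothetical pod-spec fragment. The scheduler places pods onto
# nodes with enough unreserved capacity to cover "requests";
# "limits" enforce a hard ceiling on actual consumption.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: billing
  template:
    metadata:
      labels:
        app: billing
    spec:
      containers:
        - name: billing
          image: example.com/billing:1.0   # hypothetical image
          resources:
            requests:
              cpu: "250m"       # what the scheduler packs against
              memory: "256Mi"
            limits:
              cpu: "500m"       # runtime ceiling
              memory: "512Mi"
```

Sizing requests close to real usage is what lets Kubernetes pack more workloads per node and reclaim the over-provisioned headroom described above.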

There are many examples of customers who have seen dramatic improvements in cost optimization using K8s.

Real-World Case Study

Spotify is an early K8s adopter and has seen significant cost savings by adopting K8s, as described in this note. Leveraging K8s, Spotify has seen a 2–3x improvement in CPU utilization thanks to K8s orchestration capabilities, resulting in better IT spend optimization.

Pinterest is another early K8s customer. Leveraging K8s, the Pinterest IT team reclaimed over 80 percent of capacity during non-peak hours. They now use 30 percent fewer instance-hours per day compared to the static cluster.

3. Improved scalability and availability

The success of today’s applications does not depend only on features, but also on the scalability of the application. After all, if an application cannot scale well, it will be highly non-performant at best, and totally unavailable at worst.

As an orchestration system, Kubernetes is a critical management layer that can “auto-magically” scale and improve app performance. Suppose we have a CPU-intensive service with a dynamic user load that changes based on business conditions (for example, an event ticketing app that sees a dramatic spike in users and load just before the event and low usage at other times). What we need here is a solution that can scale up the app and its infrastructure, so that new machines are automatically spun up as the load increases (more users are buying tickets) and scaled back down when the load subsides. Kubernetes offers just that capability, scaling up the application when CPU usage goes above a defined threshold (for example, 90 percent on the current machine). And when the load reduces, Kubernetes can scale the application back, thus optimizing infrastructure utilization. Kubernetes auto-scaling is not limited to infrastructure metrics; any type of metric, from resource utilization metrics to custom application metrics, can be used to trigger the scaling process.
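The scenario above maps directly to a HorizontalPodAutoscaler. A minimal sketch, assuming a hypothetical `ticketing-app` Deployment and the 90 percent CPU threshold from the example:

```yaml
# Hypothetical HorizontalPodAutoscaler: keeps "ticketing-app"
# between 2 and 20 replicas, adding pods when average CPU
# utilization across replicas rises above 90 percent and
# removing them as load subsides.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ticketing-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ticketing-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90
```

The same `metrics` list can reference custom metrics instead of CPU, which is how the “any type of metric” flexibility mentioned above is exposed in practice.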

Real-World Case Study

LendingTree: Here’s a great article from LendingTree. LendingTree has many microservices that make up its business apps. LendingTree uses Kubernetes and its horizontal scaling capability to deploy and run these services, and to ensure that their customers have access to service even during peak load. And to get visibility into these containerized and virtual services and monitor its Kubernetes deployment, LendingTree uses Sumo Logic.

4. Multi-cloud flexibility

One of the biggest benefits of Kubernetes and containers is that they help you realize the promise of hybrid and multi-cloud. Enterprises today are already running multi-cloud environments and will continue to do so in the future. Kubernetes makes it much easier to run any app on any public cloud service or any combination of public and private clouds. This allows you to put the right workloads on the right cloud and helps you avoid vendor lock-in. And getting the best fit, using the right features, and having the leverage to migrate when it makes sense all help you realize more ROI (short and longer term) from your IT investments.

Need more data to validate the multi-cloud and Kubernetes match-made-in-heaven story? This finding from the Sumo Logic Continuous Intelligence Report identifies a very interesting upward trend in K8s adoption based on the number of cloud platforms organizations use, with 86 percent of customers on all three using managed or native Kubernetes solutions. Should AWS be worried? Probably not. But it may be an early sign of a level playing field for Azure and GCP, because apps deployed on K8s can be easily ported across environments (on-premise to cloud, or across clouds).

Real World Case Study

Gannett/USA Today is a great example of a customer using Kubernetes to operate multi-cloud environments across AWS and Google Cloud Platform. In the beginning, Gannett was an AWS shop. Gannett moved to Kubernetes to support their growing scale of customers (they did 160 deployments per day during the 2016 presidential news season!), but as their business and scaling needs changed, Gannett used the fact that they were deployed on Kubernetes in AWS to seamlessly run the apps in GCP.

5. Seamless migration to cloud

Whether you are rehosting (lift and shift of the app), re-platforming (make some basic changes to the way it runs), or refactoring (the entire app and the services that support it are modified to better suit the new compartmentalized environment), Kubernetes has you covered.

Since K8s runs consistently across all environments, on-premise and in clouds like AWS, Azure, and GCP, Kubernetes provides a more seamless and prescriptive path for porting your application from on-premise to cloud environments. Rather than dealing with all the variations and complexities of the cloud environment, enterprises can follow a more prescribed path:

  1. Migrate apps to Kubernetes on-premise. Here you are more focused on re-platforming your apps to containers and bringing them under Kubernetes orchestration.
  2. Move to a cloud-based Kubernetes instance. You have many options here — run Kubernetes natively or choose a managed Kubernetes environment from the cloud vendor.
  3. Now that the application is in the cloud, you can start to optimize your application to the cloud environment and its services.
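The mechanics behind steps 1 and 2 are deliberately boring: kubectl targets whichever cluster your kubeconfig context points to, so the same manifests apply unchanged on-premise and in a managed cloud service. A sketch (not runnable without real clusters; the context names and manifest file are hypothetical):

```shell
# Step 1: deploy the containerized app to the on-premise cluster.
kubectl config use-context onprem          # hypothetical context name
kubectl apply -f app-deployment.yaml       # hypothetical manifest

# Step 2: point at a managed cloud cluster (EKS/AKS/GKE) and apply
# the very same manifests -- no application changes required.
kubectl config use-context cloud-managed   # hypothetical context name
kubectl apply -f app-deployment.yaml
```

Cluster-specific details (storage classes, load balancer annotations) are usually the only things that differ, which is what makes step 3's cloud-specific optimization a separate, later concern.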

Real-World Case Study

Shopify started as a data-center-based application and over the last few years has completely migrated all of its applications to Google Cloud Platform. Shopify first started running containers (Docker); the natural next step was to adopt K8s as a dynamic container management and orchestration system.

So I deployed Kubernetes. What next?

So there you have it: five reasons why every CIO should consider Kubernetes now, with some real-world results to boot.

But what happens after I deploy Kubernetes, you ask. How do I manage Kubernetes? How do I get visibility into Kubernetes? How do I proactively monitor the performance of apps in Kubernetes? How do I secure my application in Kubernetes? That’s where Sumo Logic comes in.

Sumo Logic has a solution built to help your teams get the most out of Kubernetes and accelerate your digital transformation. The solution provides discoverability, observability, and security for your Kubernetes implementation and helps you manage your apps better. Want to know more about Sumo Logic and our Kubernetes solution? Sign up for our service or read more in this easy-to-understand Kubernetes eBook.

How to Choose The Right Kubernetes Management Platform

There are a number of things to consider as you choose your Kubernetes management platform for your enterprise, including:

  • Production-readiness — Does it provide the features you need to fully automate Kubernetes configuration, without the configuration hassles? Does it have enterprise-grade security features? Will it take care of all management tasks on the cluster — automatically? Does it provide high-availability, scalability, and self-healing for your applications?
  • Future-readiness — Does the platform support a multi-cloud strategy? Although Kubernetes lets you run your apps anywhere and everywhere without the need to adapt them to the new hosting environment, be sure your Kubernetes management platform can support these capabilities so you can configure them when you need them in the future.
  • Ease of management — Does it incorporate automated intelligent monitoring and alerts? Does it remove the problem of analyzing Kubernetes’ raw data so that you have a single pane of glass view into system status, errors, events, and warnings?
  • Support and training — As your enterprise ramps up its container strategy, will your Kubernetes management platform provider assure you of 24×7 support and training?

Of all the available options, only a few check each of these boxes. Kublr, for instance, is a cost-effective and production-ready platform that accelerates and streamlines the set-up and management of Kubernetes. With it, you can gain a self-healing, auto-scaling solution that brings your legacy systems to the cloud on a single engine, while you seamlessly maintain, rebuild, or replace them in the background. You get dynamism, flexibility, and unmatched transparency between modules. It’s a win-win.

How to Choose the Right Kubernetes Management Platform Vendor

As you think about and plan your Kubernetes enterprise strategy, educate yourself about the hurdles along the way and the challenges and misconceptions about Kubernetes. Find out what you should be looking for in a Kubernetes platform, spend some time doing a Kubernetes management platform comparison. Finally, see for yourself how automation tools can provide the production-readiness (the single most important feature), future-readiness, ease-of-management, and the support you need to use Kubernetes, without the management overhead.
