I have been interested in sustainability and going green since my early teenage years. However, in my professional life, sustainability has rarely been a focus for databases deployed in on-premises data centers. When you are busy with traditional operational, performance, and availability challenges, there is rarely time to think about going green. This is one more reason why I love the impact the cloud has made on the world of technology. Within the AWS ecosystem, we're presented with an array of powerful tools that let us not only address the most demanding IT tasks, such as database management, but also inject a sustainable edge.

The data (updated)

In part one of this series, I illuminated the breadth and depth of our real-world dataset. I am pleased to report that this dataset has expanded substantially and is now enriched with more insights.

  • Our ever-evolving Rapid Discovery tooling now covers even more sources. We've added support for HP-UX as a data source, allowing us to assess all primary Oracle-supported operating systems on-premises. We've also integrated AWS EC2 and RDS as sources.

  • We capture more than 60 new data elements regarding the potential migration from Oracle to Postgres (more on that later in the article). Rapid Discovery runs analytics in minutes across dozens, hundreds, or thousands of databases, identifying potential candidates for a Database Freedom migration assessment.

  • Our close collaboration with AWS goes beyond Postgres. We now capture more workload metadata (25+ new data points per database), allowing us to identify potential modernization targets such as Amazon Redshift and ElastiCache.

  • We enrich the data with more than 150 new dimensions sourced from diverse auxiliary data streams and rule-based derivations. Those lead to some powerful insights in our assessments.

In the last six months, we've been helping more clients with their migration journeys. As a result, we now have anonymized data from over 8,600 databases, which is a 49% increase. These databases run on 4,180 hosts, an increase of 32%, with over 20,000 physical cores and 40,000 threads, about 33% more than in the initial dataset.

The average CPU age in the dataset is six years, and the average consumption is about 15 W per core. The total consumption of all CPU cores is almost 7.5 MWh per day and more than 2.729 GWh a year.
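
For readers who want to sanity-check the arithmetic, here is a minimal Python sketch of the calculation. The exact core count behind "over 20,000 physical cores" is not published, so the value below is an assumption back-computed from the stated totals.

```python
# Back-of-the-envelope reproduction of the baseline figures above.
CORES = 20_760          # assumed physical core count (back-computed, not published)
WATTS_PER_CORE = 15     # average consumption per core from the dataset

daily_mwh = CORES * WATTS_PER_CORE * 24 / 1_000_000   # ~7.5 MWh per day
annual_gwh = daily_mwh * 365 / 1_000                  # ~2.73 GWh per year

print(f"{daily_mwh:.2f} MWh/day, {annual_gwh:.3f} GWh/year")
```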

Besides the new data, the experience gained across a growing number of migrations has also improved our assessment engine. Among the many improvements in Rapid Discovery, three stand out for their significant impact on sustainability:

  • Improved identification of suitable Postgres migration targets.

  • A new module for modeling database consolidation.

  • Intelligent algorithms that identify potential candidates for cloud-native data stores.

First Level of Optimization: 72.49% of CPU Power Required

The first step is similar to the one we take for license optimization. Moving databases to modern CPUs means you need fewer cores to run the same load at equal or better performance. Taking advantage of the cloud's flexibility, you can also provision database servers (RDS, EC2, or other) based on your peak load today. In contrast, buying DB systems on-prem requires over-provisioning with at least 3-5 years of growth in mind, as adding capacity to the data center is not trivial.

We also have other means to reduce the initial CPU requirements. Based on each system's SLA, we can provision a Pilot Light DR - instance, storage, or both. This means we can provision smaller instances in the DR region, leading to lower power consumption. In the case of a planned or unplanned switchover, scaling up to a bigger instance takes a few minutes, and the process can be fully automated.
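
To illustrate how simple that automation can be, here is a hedged boto3 sketch that scales a pilot-light RDS instance up to a full-size class during failover. The region, instance identifier, and instance class are hypothetical placeholders, not Cintra's actual automation.

```python
import boto3

# Hypothetical identifiers - replace with your own DR resources.
DR_REGION = "eu-west-2"
DR_INSTANCE_ID = "orders-db-dr"              # small pilot-light RDS instance
FAILOVER_INSTANCE_CLASS = "db.r6g.4xlarge"   # full-size class used only during failover

rds = boto3.client("rds", region_name=DR_REGION)

# Scale the pilot-light instance up to production size. ApplyImmediately avoids
# waiting for the next maintenance window; the resize typically takes minutes.
rds.modify_db_instance(
    DBInstanceIdentifier=DR_INSTANCE_ID,
    DBInstanceClass=FAILOVER_INSTANCE_CLASS,
    ApplyImmediately=True,
)

# Block until the instance reports "available" again.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier=DR_INSTANCE_ID)
```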

Using those techniques, we ended up with 30,122 target vCPUs. This is a 27.51% smaller carbon footprint than the original thread count - 751 MWh saved annually. And it's just the beginning.
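
A rough way to reproduce these first-level figures, assuming energy scales linearly with the thread/vCPU count against the ~2,729 MWh baseline (the original thread count below is an assumption back-computed from the published percentages):

```python
# First-level optimization arithmetic (approximate reproduction).
BASELINE_MWH = 2_729
SOURCE_THREADS = 41_553      # assumption: back-computed from 30,122 / 0.7249
TARGET_VCPUS = 30_122

remaining_share = TARGET_VCPUS / SOURCE_THREADS           # ~0.7249
saved_mwh = BASELINE_MWH * (1 - remaining_share)          # ~751 MWh per year

print(f"{remaining_share:.2%} of CPU power required, {saved_mwh:.0f} MWh saved")
```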

Identification of Postgres targets

In the last two years, Cintra has built vast experience in migrating from legacy Oracle RDBMS to Postgres. Our procedures, processes, and tools are constantly improving.

Starting from version 3.7 of our Rapid Discovery tool, we can run significantly improved Postgres migration diagnostics on any Oracle database. The new, enhanced functionality aligns with the AWS methodology. We detect and evaluate more than 60 unique aspects of the database workload.

As a result, our assessment services can identify and categorize targets with increasing levels of Postgres migration complexity:

  • Low complexity - High automation. Minor, simple manual conversions may be needed.

  • Medium complexity - Medium automation. Low-medium complexity manual conversions may be needed.

  • High complexity - Low automation. Medium-high complexity manual conversions may be needed. The migration is possible but not recommended by default because of such a project's high risk and cost.

  • Highest complexity. Or, as I call it, "forget-about-it" level of complexity. Here, we see databases using many Oracle-specific features or vast amounts of PL/SQL code. For example, we have seen a single database hosting more than 63 million lines of code.

Aside from assigning complexity scores to each database across big database estates, we can also point out the specific challenges each database will present (many incompatible data types here, usage of particular packages there, and so on).
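
To make the idea of rule-based complexity scoring concrete, here is a purely illustrative Python sketch. The inputs and thresholds are hypothetical and much simpler than the 60+ aspects Rapid Discovery actually evaluates.

```python
# Illustrative only: a toy version of rule-based complexity scoring.
# Feature names and thresholds are hypothetical, not Rapid Discovery's rules.
def postgres_migration_complexity(plsql_lines: int,
                                  incompatible_datatype_columns: int,
                                  oracle_only_features: int) -> str:
    """Bucket a database into a Postgres migration-complexity tier."""
    if plsql_lines > 1_000_000 or oracle_only_features > 20:
        return "Highest complexity"          # the "forget-about-it" tier
    if plsql_lines > 100_000 or oracle_only_features > 10:
        return "High complexity - Low automation"
    if plsql_lines > 10_000 or incompatible_datatype_columns > 50:
        return "Medium complexity - Medium automation"
    return "Low complexity - High automation"

print(postgres_migration_complexity(plsql_lines=3_500,
                                    incompatible_datatype_columns=12,
                                    oracle_only_features=1))
```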

Second Level of Optimization: 59.15% of CPU Power Required

So, how does Postgres migration help sustainability? There are two main gains right after the migration.

First, Postgres has had virtually unlimited multitenant capabilities since day one. This allows us to consolidate multiple smaller databases into one more efficient instance or cluster. There are no licenses required and no artificial limitations. In contrast, Oracle - being Oracle - has limited usage to three PDBs per instance on non-Oracle public clouds. This is an excellent incentive for database modernization.

However, the more significant sustainability gain comes from the ability to run both RDS Postgres and Aurora Postgres on ARM-based CPUs. AWS Graviton-based instances use up to 60% less energy than comparable EC2 instances for the same performance.

Our powerful Postgres identification engine has placed 5,178 databases (59.89% of the total estate) into the Low and Medium migration complexity categories. Using the Consolidation module and its practical rules, we consolidate those to 2,810 instances with 11,074 cores. As those are Graviton instances, we can estimate a power saving equal to 5,537 source cores (assuming a more conservative 50% power saving). This saves a further 364 MWh per year and reduces the CPU consumption to 59.16% of the initial number.
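
One way to reproduce these second-level figures is sketched below. Prorating the saved cores against the original thread count is my assumption about the model, not a published formula.

```python
# Second-level optimization arithmetic (approximate reproduction).
BASELINE_MWH = 2_729
SOURCE_THREADS = 41_553          # assumption: back-computed from the level-1 figures
LEVEL1_SHARE = 0.7249            # share of CPU power required after level one

graviton_cores = 11_074          # consolidated Low/Medium-complexity estate
graviton_power_saving = 0.50     # conservative Graviton saving vs. the source cores
saved_core_equivalents = graviton_cores * graviton_power_saving      # ~5,537

saved_mwh = BASELINE_MWH * saved_core_equivalents / SOURCE_THREADS   # ~364 MWh
remaining_share = LEVEL1_SHARE - saved_core_equivalents / SOURCE_THREADS  # ~0.59

print(f"{saved_mwh:.0f} MWh saved, {remaining_share:.1%} of CPU power required")
```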

There are further potential gains that I am currently not factoring in. For example, 1% of the Postgres candidates are also candidates for a Redshift migration. This may sound like a relatively low number, but those are some of the biggest and busiest databases. However, such an architecture change needs further evaluation based on non-DB factors, so I am not putting this in the model.

Third Level of Optimization: 54.04% of CPU Power Required

One of the magic tricks Aurora has is being able to run in a serverless fashion. This is great for workloads with unpredictable or wildly variable demands. Running serverless means you run close to the number of CPUs you need (within certain limitations) and not the maximum you have provisioned. If a database goes idle for some configurable time (think of Dev/Test DBs during the night or on weekends), it can even go down to zero CPUs. The database service is resumed upon the first query to the endpoint.
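
For reference, here is a minimal boto3 sketch of what provisioning such a cluster could look like. The identifiers are hypothetical and the capacity range is just an example.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-2")

# Hypothetical cluster/instance names. Capacity is expressed in ACUs (Aurora
# Capacity Units); the cluster scales between Min and Max on demand, and newer
# Aurora Serverless v2 versions also allow MinCapacity=0 so an idle Dev/Test
# database can pause entirely.
rds.create_db_cluster(
    DBClusterIdentifier="reporting-cluster",
    Engine="aurora-postgresql",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

rds.create_db_instance(
    DBClusterIdentifier="reporting-cluster",
    DBInstanceIdentifier="reporting-instance-1",
    DBInstanceClass="db.serverless",   # capacity comes from the scaling config above
    Engine="aurora-postgresql",
)
```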

One of the new data points we gather shows the "workload variability". It allows us to identify candidates where serverless makes sense from a cost and power perspective.

Out of the databases identified as Aurora candidates, 50.09% have a variability of 2x or higher. Among this estate, the average saving from running serverless is estimated at 4.29x. In other words, a database like this, provisioned to constantly meet peak demand, occupies on average 4.29 times more CPU cores than required. If we run it as serverless, the actual CPU cores used will go up and down based on demand.

Adding this to the sustainability model, 50.09% of the Aurora candidates are provisioned as serverless. This means we save about 4,253 Graviton cores, bringing the total CPU consumption down to 1.474 GWh per year, or 54.04% of the initial number.
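
The serverless numbers can be approximated as follows; counting each saved Graviton core as roughly half a source core is my assumption about the model.

```python
# Third-level optimization arithmetic (approximate reproduction).
BASELINE_MWH = 2_729
SOURCE_THREADS = 41_553          # assumption: back-computed from the level-1 figures
LEVEL2_MWH = 1_614               # 2,729 - 751 - 364 from the previous two levels

aurora_cores = 11_074
serverless_share = 0.5009        # candidates with 2x or higher variability
variability_factor = 4.29        # average over-provisioning for those candidates

serverless_cores = aurora_cores * serverless_share                     # ~5,547
saved_graviton_cores = serverless_cores * (1 - 1 / variability_factor) # ~4,253

# Graviton cores draw roughly half the power of the source cores in this model.
saved_mwh = BASELINE_MWH * (saved_graviton_cores * 0.5) / SOURCE_THREADS  # ~140 MWh
total_mwh = LEVEL2_MWH - saved_mwh                                        # ~1,474 MWh

print(f"~{saved_graviton_cores:.0f} Graviton cores saved, ~{total_mwh:.0f} MWh/year total")
```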

Fourth Level of Optimization: 51.86% of CPU Power Required

AWS ElastiCache allows organizations to run enterprise-grade in-memory database caches in a fully managed fashion. ElastiCache for Redis scales automatically and can serve hundreds of millions of requests per second. One crucial aspect of the Redis engine is that it is open-source (no license required), and AWS is among the major contributors to the project.

In the last few months, we have worked closely with the AWS ElastiCache team to enable Rapid Discovery to identify the potential for ElastiCache offloading across the assessed database estate.

  • We added more than a dozen new data points to be gathered during the initial discovery.

  • Based on the new data points, we have added specific business logic to identify potential candidates for ElastiCache offloading.

My real-life experience has shown me how a critical enterprise database running on Exadata can benefit significantly from offloading the most popular queries to an in-memory cache, cutting the CPU and/or IOPS requirements in half or more. However, this is experience rather than precise measurement, and I want to base this article on real numbers.

Working with the AWS ElastiCache team, we designed a test case to measure the CPU benefits precisely. The result shows that, for suitable workloads, offloading the most popular queries to Redis brings, on average, a 38.50% reduction in CPU demand. This includes both the Oracle and ElastiCache cores in the target architecture. As a bonus, the average response time is significantly better (the improvement can be an order of magnitude), and database reliability improves during high-load periods.
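
The technique behind this offloading is the classic cache-aside pattern. Below is a minimal Python sketch against an ElastiCache for Redis endpoint; the endpoint, key naming, TTL, and the query_database helper are all hypothetical.

```python
import json
import redis

# Hypothetical ElastiCache endpoint - replace with your own.
cache = redis.Redis(host="my-cache.abc123.euw2.cache.amazonaws.com", port=6379)

def query_database(product_id: int) -> dict:
    """Placeholder for the real Oracle/Postgres lookup."""
    return {"id": product_id, "name": "example"}

def get_product(product_id: int) -> dict:
    """Serve hot lookups from the cache; fall back to the database on a miss."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: no database CPU spent

    row = query_database(product_id)         # cache miss: hit the database once
    cache.setex(key, 300, json.dumps(row))   # keep the result for 5 minutes
    return row
```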

As per our statistics, about 7.83% of the database estate (both Oracle and Postgres) can benefit from ElastiCache. This means 677 databases running on 2,358 target vCPUs can benefit from an average 38.50% improvement, saving 908 vCPUs - a further 120 MWh of power saved per annum. Adding this to the sustainability model, we reach 1.4 GWh total, or 51.86% of the initial CPU carbon footprint.
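
These fourth-level figures can be approximated with the same 15 W-per-core assumption used for the baseline:

```python
# Fourth-level optimization arithmetic (approximate reproduction).
target_vcpus = 2_358             # vCPUs behind the 677 offloading candidates
cpu_reduction = 0.385            # measured average CPU reduction

saved_vcpus = target_vcpus * cpu_reduction                    # ~908 vCPUs
saved_mwh = saved_vcpus * 15 * 24 * 365 / 1_000_000           # ~120 MWh per year

print(f"~{saved_vcpus:.0f} vCPUs saved, ~{saved_mwh:.0f} MWh per year")
```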

(I did not factor in the use of Graviton for ElastiCache, so the actual number is even better.)

Further improvements

When talking to customers, I always emphasize that the initial migration is only the first step of the cloud journey. Yes, it is a necessary first step, but its immediate benefits are relatively modest. Once you are in the cloud, you gain a plethora of optimization opportunities - not only to manage costs, enhance performance, and ensure reliability, but also to address environmental impact and other crucial business objectives.

So far, we are saving more than 1.33 GWh annually - for the CPUs alone. Here are some examples of what can be done next:

  • Aurora Global Database headless clusters - You can use the intelligent Aurora storage to replicate your database to your DR region without having any remote instance. Of course, in case of disaster, you need to start at least one instance in the surviving region, but this takes a few minutes and can be automated. There is no lower consumption (or cost) than zero.

  • Cross-region automated backups for RDS Oracle - if your SLA permits, you can instruct RDS to automatically replicate your backups to a DR region (see the sketch after this list). Use the benefits of the cloud - provision your instance only when you need it!

  • Cloud-native databases. Once your workload is migrated into AWS, you can gradually shift towards cloud-native data stores built with AWS fabric in mind. Besides substantial cost, reliability, and performance benefits, this approach can achieve even better energy efficiency (up to 95% less power consumption). This is known as "Sustainability THROUGH the cloud".

  • The AWS Customer Carbon Footprint Tool provides excellent visibility over your workloads running in AWS and the potential room for improvement.
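
As an example of the cross-region backup replication mentioned above, here is a hedged boto3 sketch; the ARNs, regions, and KMS key are placeholders.

```python
import boto3

# Run this against the RDS client in the *destination* (DR) region to start
# replicating automated backups from the source instance.
dr_rds = boto3.client("rds", region_name="eu-west-2")

dr_rds.start_db_instance_automated_backups_replication(
    SourceDBInstanceArn="arn:aws:rds:eu-west-1:123456789012:db:orders-db",
    BackupRetentionPeriod=7,   # days of backups to keep in the DR region
    # Key in the destination region, needed when the source instance is encrypted.
    KmsKeyId="arn:aws:kms:eu-west-2:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```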

Sustainability OF the cloud

So far, I've outlined potential strategies to minimize your carbon footprint during the migration. This is known as "Sustainability IN the cloud" - in other words, what you as a customer can do. There is also one more significant benefit in migration to AWS, known as "Sustainability OF the cloud" - in other words, what AWS themselves do to provide more carbon-neutral service.

Amazon is the world's largest corporate purchaser of renewable energy:

  • Path to 100% renewable energy by 2025

  • 232 global renewable energy projects

  • 85 utility-scale solar and wind projects

  • 147 on-site solar systems

  • 10 GW of total renewable capacity

When compared to surveyed enterprise data centers across several geographic regions, AWS can lower a customer's carbon footprint by nearly 80% today. 

Understand the benefits for YOUR organization.

Cintra would be delighted to help your organization understand the benefits of migrating to AWS by engaging in a funded assessment.

Some of the benefits include:

  • Gain an immediate, accurate, deep understanding of your current estate

  • Understand the future state options for rehost, replatform, and modernization

  • Reduce costs by optimizing and rightsizing your compute, storage, and licensing requirements

  • Plan based on data-driven insights and execute quickly to set the stage for a confident migration

You can find more benefits on our website or contact me via LinkedIn.
