Client: Top-tier bank (confidential)
Industry: Financial Services
Engagement: 3-year program

Migrating a bank's datacenter to AWS — 700 apps, 18,000 servers, 3 years.

Cloudism led a 3-year datacenter migration program for a top-tier bank as an AWS Partner, working alongside AWS Professional Services and AWS Managed Services. We ran the program as a true migration factory: a structured Migration Readiness & Planning workstream feeding a 5-stage execution lifecycle with six formal quality gates, a 28-day wave cadence per application, and a 12-day hypercare observation window after every cutover.

Program at a glance

~700 applications migrated
~18,000 servers retired or moved
3-year end-to-end program
3 environments per application: DEV, NPRD, PRD
The challenge

Two aging datacenters. A portfolio measured in thousands of servers.

A top-tier bank had reached the end of the road with two on-premises datacenters. Hardware refresh cycles were coming due, real-estate and power costs were climbing, and the operational model — racks, network teams, capacity planning weeks in advance — was holding back the speed at which the business could deliver new digital services. Leadership decided to exit both datacenters and rebuild the foundation on AWS.

The scale was the hard part. The portfolio spanned roughly 700 applications running across ~18,000 servers, with workloads ranging from internal back-office tools to customer-facing systems with strict regulatory, audit, and uptime obligations. Each application had three environments to think about — development (DEV), non-production (NPRD), and production (PRD) — and many had upstream and downstream dependencies that hadn't been formally documented in years.

A program of this size cannot be improvised. It needed a repeatable migration factory, a clear way to triage what moves how, and tight partnership with AWS to keep momentum across a 3-year window.

The approach

A migration factory built on Migration Readiness & Planning, then executed wave by wave.

We ran the program as a true migration factory: a repeatable production line where each application moved through the same gated set of stages, with the same tooling, the same documentation artifacts, and the same go/no-go criteria. That is what made 700 applications across 18,000 servers tractable: turning every move into a small, well-understood unit of work that scales.

Discovery: Migration Readiness & Planning (MRP)

Before any server was migrated, we ran a structured Migration Readiness & Planning (MRP) workstream to build the data foundation the rest of the program depended on. MRP gave each application a defined scope, a documented technical baseline, an assigned migration pattern, and an entry point into the execution queue.

Capacity benchmark from the program: roughly 30 business days of discovery work to fully ready a batch of ~100 three-tier applications for execution, with ~10 discovery engineers running in parallel. That ratio is what let the factory keep feeding the execution pipeline without stalls.
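As a sanity check on that ratio, the benchmark works out to about three engineer-days of discovery per application. A minimal sketch of the arithmetic, using only the numbers stated above:

```python
# Discovery capacity implied by the program benchmark:
# ~100 three-tier apps readied in ~30 business days by ~10 engineers.
apps_per_batch = 100
business_days = 30
engineers = 10

engineer_days_total = business_days * engineers          # 300 engineer-days per batch
engineer_days_per_app = engineer_days_total / apps_per_batch

# Throughput the discovery team must sustain to keep execution fed:
apps_per_engineer_per_day = apps_per_batch / engineer_days_total

print(engineer_days_per_app)                 # 3.0
print(round(apps_per_engineer_per_day, 2))   # 0.33
```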

The 6 Rs — how the portfolio split out

Execution: a 5-stage lifecycle with formal quality gates

Every application moved through the same five-stage lifecycle, with named quality gates between stages. No stage could begin until the prior gate was signed off by the right stakeholder. This is what kept the factory predictable across hundreds of cutovers.

Six formal gates governed progress between stages: Pre-Requisite Validation, Test Cutover Approval, Pre-WIG Validation (Test), Final Cutover Approval, Pre-WIG Validation (Final), and DNS Change & Handover Complete. Each gate had an owner and a documented set of evidence — no application moved forward on verbal sign-off alone.
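The gate model above can be sketched as a small data structure. The six gate names come from the program; the owner role and evidence items in this example are hypothetical illustrations, not the bank's actual checklist:

```python
from dataclasses import dataclass, field

# The six formal gates named in the program.
GATES = [
    "Pre-Requisite Validation",
    "Test Cutover Approval",
    "Pre-WIG Validation (Test)",
    "Final Cutover Approval",
    "Pre-WIG Validation (Final)",
    "DNS Change & Handover Complete",
]

@dataclass
class Gate:
    name: str
    owner: str                      # accountable sign-off role (illustrative)
    required_evidence: list
    evidence_received: set = field(default_factory=set)

    def sign_off(self) -> bool:
        # A gate passes only when every required artifact is on file:
        # no application moves forward on verbal sign-off alone.
        return set(self.required_evidence) <= self.evidence_received

gate = Gate(
    name=GATES[0],
    owner="Application Owner",      # illustrative
    required_evidence=["runbook", "dependency map", "rollback plan"],
)
gate.evidence_received.update({"runbook", "dependency map"})
print(gate.sign_off())   # False: rollback plan still missing
gate.evidence_received.add("rollback plan")
print(gate.sign_off())   # True
```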

Wave timeline

Each application moved through the 5-stage execution lifecycle on a roughly 28-day cadence: 2 days of preparation, ~2 weeks of replication, 1 week of test cutover and validation, half a day of final cutover, and a day of handover — followed by 12 days of hypercare. Multiple waves ran in parallel across the factory, with the execution timeline tuned per application based on its complexity tier.
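The cadence above can be laid out as a simple schedule calculation. The phase durations come from the program; rounding the half-day final cutover up to one calendar day is an assumption for the sketch:

```python
from datetime import date, timedelta

# Phase durations for one wave, in calendar days.
PHASES = [
    ("Preparation",                2),
    ("Replication",               14),   # ~2 weeks
    ("Test cutover & validation",  7),   # 1 week
    ("Final cutover",              1),   # half a day, rounded up
    ("Handover",                   1),
]
HYPERCARE_DAYS = 12

def wave_schedule(start: date):
    """Return (phase, start_date, end_date) tuples for one wave."""
    rows, cursor = [], start
    for name, days in PHASES:
        rows.append((name, cursor, cursor + timedelta(days=days)))
        cursor += timedelta(days=days)
    rows.append(("Hypercare", cursor, cursor + timedelta(days=HYPERCARE_DAYS)))
    return rows

for name, s, e in wave_schedule(date(2024, 1, 1)):
    print(f"{name:28s} {s} -> {e}")
```

In practice the execution timeline was tuned per application by complexity tier, and multiple waves of this shape ran in parallel.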

Multi-account model and change governance

The target AWS environment was structured as a multi-account landing zone with clear separation between the migration staging account and the target operational accounts, with consistent security guardrails applied across both. Operational changes inside the target environment — AMI sharing, snapshot sharing, AMI re-encryption with target-account KMS keys, Security Group provisioning via CloudFormation, EC2 instance provisioning, hostname changes — all flowed through AWS Managed Services Request for Change (RFC) workflows. Every change was logged, reviewed, and auditable, which mattered enormously for a regulated banking environment.
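One of those RFC-governed steps, re-encrypting a shared AMI with a target-account KMS key, can be sketched as follows. The account ID, key ARN, region, and application name are illustrative placeholders, and the actual change would run through an AMS RFC rather than ad hoc:

```python
# Sketch: an AMI shared from the migration staging account is copied in the
# target account with a target-account KMS key, yielding a re-encrypted AMI.

def build_reencrypt_copy(source_ami_id: str, source_region: str,
                         target_kms_key_arn: str, app_name: str) -> dict:
    """Build kwargs for ec2.copy_image() so the copy is re-encrypted
    with the target account's customer-managed key."""
    return {
        "Name": f"{app_name}-reencrypted",
        "SourceImageId": source_ami_id,
        "SourceRegion": source_region,
        "Encrypted": True,                  # force encryption on the copy
        "KmsKeyId": target_kms_key_arn,     # target-account key, not the shared default
    }

params = build_reencrypt_copy(
    "ami-0123456789abcdef0", "eu-west-1",
    "arn:aws:kms:eu-west-1:111111111111:key/EXAMPLE", "payments-app")

# In the target account, the approved RFC would then execute:
#   boto3.client("ec2", region_name="eu-west-1").copy_image(**params)
print(params["Name"], params["Encrypted"])   # payments-app-reencrypted True
```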

DMZ workloads

Servers in the bank's DMZ followed the same lifecycle, with one addition: proxy configuration on the source server (HTTPS proxy environment variable, proxy reachable on the bank's outbound web gateway) before the CloudEndure agent could replicate. Once the proxy path was validated, those servers ran through the standard factory.
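That proxy pre-step can be sketched as follows. The proxy URL and the commented installer invocation are illustrative placeholders, not the bank's actual configuration:

```python
import os

# Sketch: export an HTTPS proxy pointing at the outbound web gateway before
# running the replication agent installer, so a DMZ server with no direct
# internet egress can reach the replication endpoints.
PROXY_URL = "https://proxy.example-bank.internal:8080"   # illustrative

def agent_install_env(proxy_url: str) -> dict:
    """Environment for the agent installer, with the HTTPS proxy set."""
    env = dict(os.environ)
    env["https_proxy"] = proxy_url
    env["HTTPS_PROXY"] = proxy_url   # some tools read the uppercase variant
    return env

env = agent_install_env(PROXY_URL)
# After validating the proxy path, run the standard agent install with this
# environment, e.g. (path illustrative):
#   subprocess.run(["sudo", "-E", "python3", "installer_linux.py", "--no-prompt"],
#                  env=env, check=True)
print(env["https_proxy"])
```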

Running the upskill workstream alongside the migration

Many of the bank's internal teams started the program with limited AWS experience. We treated this as a delivery workstream of its own rather than an afterthought, because a migration that leaves the client unable to operate what you've just built isn't really finished. Cloudism's Learning Launchpad ran in parallel with the factory: role-based learning paths (infrastructure, application, security, operations), AWS certification preparation, and hands-on labs grounded in the bank's actual landing-zone patterns and the services that were showing up in their migration waves. Application teams entered each cutover with the AWS literacy to validate their workloads, run their own troubleshooting, and operate the result in steady-state. By the end of the program, the bank's internal teams were AWS-fluent, not dependent on us.

Named teams and responsibilities

The factory ran with five clearly bounded teams whose responsibilities were defined up front, so ownership of each stage and gate was never ambiguous.

The outcome

A 700-app, 18,000-server datacenter exit, executed predictably across 3 years.

By the end of the 3-year program, the bank's application portfolio had been migrated to AWS, operating under a governed multi-account landing zone with environment separation across DEV, NPRD, and PRD. Steady-state operations had been transitioned to AWS Managed Services with documented runbooks, monitoring baselines, and on-call procedures in place for every migrated workload. The bank's internal teams were freed from datacenter-tied operational work and could focus on building new digital capabilities.

Services & technology

Built with

RISC Networks CloudScape · AWS Migration Hub · AWS Migration Factory · CloudEndure / MGN · AWS DMS · AWS Landing Zone · Multi-Account Architecture · AWS Direct Connect · AWS Managed Services (AMS) · AMS RFC Workflows · Amazon EC2 · Amazon RDS · Amazon S3 · Amazon VPC · AWS IAM · AWS KMS · AWS CloudFormation · Active Directory · plus delivery & collaboration tooling

Planning a datacenter exit?

We've done this at scale, in a regulated industry, alongside AWS. Tell us where you are and we'll help you scope a realistic path.