
AWS Lambda Managed Instances (Part 2 - Cost)

· 5 min read
Brian McNamara
Software Developer

You've likely seen the AWS launch post and are intrigued by what AWS Lambda Managed Instances (AWS LMI) can offer. You may have even read my overview of AWS LMI in an earlier blog post.

In this post, we'll look at how AWS Lambda On-Demand costs are calculated, how costs change with AWS LMI, when it makes sense to use AWS LMI, and when it makes sense to stick with On-Demand AWS Lambda.

How Are AWS Lambda On-Demand Costs Calculated?

Lambda costs are not always easy to understand. Despite the serverless operating model, there are a few factors that influence cost.

At their core, Lambda costs come down to how many invocations were made and how much compute those invocations consumed, measured as allocated memory multiplied by duration (GB-seconds). Typically, ARM (Graviton) architectures are less expensive than x86.

Request charges (number of invocations) + Compute charges (GB-seconds per month)

Asynchronous event sources like S3, SNS, EventBridge, Step Functions, and CloudWatch Logs incur one invocation charge for the first 256 KB of payload and one additional invocation charge per additional 64 KB chunk.
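To make the formula concrete, here's a minimal sketch of the On-Demand calculation. The per-GB-second rates below are approximate us-east-1 list prices and are my own illustrative assumptions; check the AWS Lambda pricing page for current numbers.

```python
# Back-of-the-envelope On-Demand Lambda cost.
# Rates are illustrative us-east-1 list prices -- verify against the
# AWS Lambda pricing page before using them for real decisions.
REQUEST_PRICE = 0.20 / 1_000_000  # $ per request
GB_SECOND_PRICE = {
    "x86_64": 0.0000166667,  # approximate $ per GB-second
    "arm64": 0.0000133334,   # Graviton is typically cheaper
}

def on_demand_cost(invocations: int, memory_mb: int, duration_ms: int,
                   arch: str = "arm64") -> float:
    """Monthly cost = request charges + compute charges (GB-seconds)."""
    gb_seconds = invocations * (memory_mb / 1024) * (duration_ms / 1000)
    return invocations * REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE[arch]

# Example: 10M invocations/month, 2048 MB, 200 ms on arm64
print(f"${on_demand_cost(10_000_000, 2048, 200):,.2f}/month")
```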

Keep in mind there are other things that affect cost:

  • Provisioned Concurrency (PC) keeps a specified number of execution environments up and available to minimize initialization time (i.e. cold starts). Keeping those environments warm has a cost associated with it.

  • Ephemeral Storage costs are incurred when you allocate more than 512 MB to your function. You pay only for the amount you allocate above 512 MB, billed as a fraction of a penny per GB-second of function duration.

  • Data Transfer costs can come up if your function is configured to use a VPC. Even if you're not using a VPC you may incur costs based upon what you're interacting with. Target AWS services may have their own data transfer pricing considerations.

How Does the Cost Discussion Change with AWS LMI?

This is where things get a bit more interesting - and possibly more complex.

There is an uptime component to AWS LMI just like there is for Provisioned Concurrency. You're paying to have capacity available to you. You will need to specify the instance type.

Because you have an entire instance (or several instances, based on your Capacity Provider configuration), you do not pay per GB-second. Instead, you pay request charges, EC2 instance uptime charges, and an AWS management fee. The good news is that the request charges for AWS LMI are the same as for On-Demand Lambda ($0.20 per 1M requests). You can reduce EC2 instance costs with Compute Savings Plans, Reserved Instances, or other EC2 pricing options, but now you're dipping your toes into the world of instance capacity planning.

Rather than just considering things like allocated RAM, duration, and the number of invocations, you're going to need to consider your capacity provider configuration. What types of instances are optimal for your workload?
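To make those components concrete, here's a minimal sketch of an LMI monthly estimate. The $0.20 per 1M request charge comes from above; the 15% management fee and the m7g.large rates in the example match what the calculator below assumes; the instance count and the 730-hour month are my own simplifying assumptions.

```python
# Rough monthly cost for Lambda Managed Instances: request charges +
# EC2 instance uptime + a management fee on the EC2 On-Demand price.
# Instance count is something you now have to plan for yourself.
HOURS_PER_MONTH = 730  # simplifying assumption

def lmi_cost(invocations: int, instance_count: int,
             ec2_on_demand_hourly: float,   # e.g. 0.0816 for m7g.large
             effective_hourly: float,       # what you actually pay (Savings Plan, RI, ...)
             mgmt_fee_rate: float = 0.15) -> float:
    request_cost = invocations * 0.20 / 1_000_000
    instance_cost = instance_count * effective_hourly * HOURS_PER_MONTH
    mgmt_fee = instance_count * ec2_on_demand_hourly * mgmt_fee_rate * HOURS_PER_MONTH
    return request_cost + instance_cost + mgmt_fee

# Example: 10M invocations/month on one m7g.large under a 72% Savings Plan
print(f"${lmi_cost(10_000_000, 1, 0.0816, 0.0228):,.2f}/month")
```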

Interactive Cost Calculator

I've created an interactive cost calculator to make it easier to visualize when it might make sense to consider AWS LMI and when it might make sense to stick with On-Demand Lambda.

Full disclosure - and for anyone who knows me this should come as no surprise - I created the cost calculator using Claude Code. I ran through several scenarios to evaluate On-Demand and AWS LMI pricing using the AWS Pricing Calculator for AWS Lambda.

This cost calculator is meant to be directional. It's not intended to be used to make business decisions. Be sure to test your assumptions - don't just rely on my interactive cost calculator.

Lambda On-Demand vs Managed Instances

Interactive cost comparison with break-even analysis across regions and instance types

Selected Instance: m7g.large

  • Family: General Purpose
  • Architecture: arm64
  • vCPUs: 2
  • Memory: 8 GB
  • On-Demand (us-east-1): $0.0816/hr
  • w/ 72% Savings Plan: $0.0228/hr
  • + 15% Mgmt Fee: $0.0122/hr

Cost Comparison

Break-even (w/ Savings Plans): 3.8M invocations/mo
Break-even (On-Demand): 10.3M invocations/mo
Memory: 2048 MB | Duration: 200 ms | Region: US East (N. Virginia)

[Chart: monthly cost ($, log scale) vs. monthly invocations (100K to 1B, log scale) for On-Demand Lambda, Managed Instances (Standard), and Managed Instances (Savings Plan)]

Potential Savings with Managed Instances

2048 MB memory, 200ms duration, m7g.large with 72% Savings Plan in US East (N. Virginia)

[Chart: savings percentage (-40% to +80%) vs. monthly invocations (1M to 1B)]

Positive = Managed Instances cheaper | Negative = On-Demand Lambda cheaper

Supported Instance Types & Pricing (US East (N. Virginia))

All C, M, R family instances (large and above) are supported. Prices shown are On-Demand + 15% management fee.

| Instance | Family | Arch | vCPUs | Memory | On-Demand/hr | Savings Plan/hr | Total w/ Mgmt Fee |
|---|---|---|---|---|---|---|---|
| m7g.large | General | arm64 | 2 | 8 GB | $0.0816 | $0.0228 | $0.0351 |
| m7g.xlarge | General | arm64 | 4 | 16 GB | $0.1632 | $0.0457 | $0.0702 |
| m7g.2xlarge | General | arm64 | 8 | 32 GB | $0.3264 | $0.0914 | $0.1404 |
| m7g.4xlarge | General | arm64 | 16 | 64 GB | $0.6528 | $0.1828 | $0.2807 |
| m7g.8xlarge | General | arm64 | 32 | 128 GB | $1.3056 | $0.3656 | $0.5614 |
| m7g.12xlarge | General | arm64 | 48 | 192 GB | $1.9584 | $0.5484 | $0.8421 |
| m7g.16xlarge | General | arm64 | 64 | 256 GB | $2.6112 | $0.7311 | $1.1228 |
| m7i.large | General | x86_64 | 2 | 8 GB | $0.1008 | $0.0282 | $0.0433 |
| m7i.xlarge | General | x86_64 | 4 | 16 GB | $0.2016 | $0.0564 | $0.0867 |
| m7i.2xlarge | General | x86_64 | 8 | 32 GB | $0.4032 | $0.1129 | $0.1734 |
| m7i.4xlarge | General | x86_64 | 16 | 64 GB | $0.8064 | $0.2258 | $0.3468 |
| m7i.8xlarge | General | x86_64 | 32 | 128 GB | $1.6128 | $0.4516 | $0.6935 |
| m7i.12xlarge | General | x86_64 | 48 | 192 GB | $2.4192 | $0.6774 | $1.0403 |
| m7i.16xlarge | General | x86_64 | 64 | 256 GB | $3.2256 | $0.9032 | $1.3870 |
| m7a.large | General | x86_64 | 2 | 8 GB | $0.1157 | $0.0324 | $0.0498 |

Showing all families. 72% Savings Plan discount shown. Management fee is always based on On-Demand price.

Key Takeaways

When Managed Instances Win

  • High invocation volume (>10M/month with Savings Plans)
  • Long function durations (>500ms)
  • Steady, predictable traffic patterns
  • Existing EC2 Savings Plans commitments
  • Latency-critical workloads (no cold starts)
  • Function memory ≥ 2 GB requirement met

When On-Demand Lambda Wins

  • Low to moderate volume (<5M/month)
  • Short function durations (<100ms)
  • Bursty, unpredictable traffic
  • Need for scale-to-zero
  • Function memory < 2 GB
  • Simple operational model preferred

Instance Selection Tips

  • Graviton (g suffix): Best price/performance
  • C family: CPU-intensive workloads
  • M family: Balanced workloads
  • R family: Memory-intensive workloads
  • Let AWS choose for best availability
  • Minimum size: large (no medium/small)

Pricing data as of December 2025. Managed Instances include 15% management fee on EC2 On-Demand price. Savings Plan assumes 72% discount (3-year Compute Savings Plan). Regional pricing uses multipliers from us-east-1 base. Multi-concurrency factor: 10 concurrent requests per environment at 70% efficiency.
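If you want a feel for how that multi-concurrency assumption turns traffic into an instance count, here's a rough sizing sketch. It's simple averaging arithmetic under the calculator's stated assumptions (10 concurrent requests per environment at 70% efficiency), not AWS's actual placement logic, and the environments-per-instance knob is a hypothetical simplification since the real number depends on function memory and instance size.

```python
import math

# Rough instance-count estimate for a steady workload, using the
# calculator's stated multi-concurrency assumption. This is averaging
# arithmetic, not AWS's actual capacity provider behavior.
def instances_needed(invocations_per_month: int, duration_ms: int,
                     concurrency_per_env: int = 10, efficiency: float = 0.70,
                     envs_per_instance: int = 1) -> int:
    seconds_per_month = 730 * 3600
    avg_concurrency = invocations_per_month * (duration_ms / 1000) / seconds_per_month
    effective_slots = concurrency_per_env * efficiency * envs_per_instance
    return max(1, math.ceil(avg_concurrency / effective_slots))

# Example: 100M invocations/month at 200 ms each
print(instances_needed(100_000_000, 200))  # ~2 instances on average
```

Remember this averages traffic over the whole month; bursty workloads need headroom on top of this, which is exactly where On-Demand Lambda's scale-to-zero model shines.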

Now What?

AWS LMI is a nice addition to the serverless compute toolbelt for AWS customers. For customers running steady-state, high-volume Lambda functions, it offers a mechanism to save a substantial amount of money.

Critically evaluate whether AWS LMI is the right option or whether you should stick with On-Demand AWS Lambda. If you're just getting started, I'd encourage you to stick with On-Demand until you have more data to inform a jump to AWS LMI.

As a rule of thumb, I'd only consider AWS LMI once a steady-state function hits 5 million invocations per month, and only for functions configured with 2 GB of memory or more. That 2 GB minimum is a current limit (as of January 2026), so check the AWS Lambda documentation for changes over time.

Stay curious! 🚀