TL;DR
Azure Cost Management shows spend at the elastic pool resource, not per database. If you need per‑database allocation, you have to build it.
Start with an accuracy ladder: (1) split pools by owner, (2) allocate by a simple proxy, (3) allocate by measured resource usage from DMVs, (4) go deeper with Query Store for query-level attribution.
The goal is not accounting perfection. It is accountability that drives the right engineering and platform decisions.

Why elastic pools hide the truth
Elastic pools are a shared compute envelope. Billing is attached to the pool, because that is the thing you provision and pay for.
Inside the pool, databases burst and are idle at different times.
That is the whole point: sharing reduces wasted headroom.
The catch is chargeback. Finance wants answers like “what does Database X cost?” and the portal answers “the pool costs $Y.”
Define what “allocation” means for your org
Before you touch a query, pick the decision you want the allocation to drive:
Showback: make owners aware of their share so they change behavior.
Chargeback: book costs to a cost center with rules you can defend.
Engineering action: identify the top 1–3 databases that deserve isolation, tuning, or a different tier.
If the outcome is “move the noisy neighbor out of the pool,” you do not need a PhD-level model. You need a model that is consistent and repeatable.
The accuracy ladder (Good → Better → Best)
Good: split by ownership, so the bill is already clean
This is the highest‑ROI move when chargeback is the real requirement.
Create one pool per product, per business unit, or per environment boundary (prod vs non‑prod).
Put “shared platform” databases (admin, monitoring, CMDB) in their own pool so they do not distort product allocation.
Use tags and naming on the pool resource, because tagging a database does not change billing.
Pros: simplest, auditable. Cons: more pools to manage, and you may lose some pooling efficiency.
Better: allocate using a proxy that correlates with demand
If you cannot split pools yet, pick a proxy that maps to what you are trying to control.
Even split (only if the pool exists for a single owner and you just need a quick report).
Storage share (GB used or allocated per database, sketched after this list). Works when “big databases cost more” is directionally true.
Peak connection or worker usage (if you have that data). Useful for multitenant SaaS patterns.
Pros: fast. Cons: weak correlation for CPU-heavy or IO-heavy workloads, easy to argue with.
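For the storage proxy, here is a minimal sketch of capturing allocated data size per database; run it inside each database (sys.database_files reports size in 8 KB pages, and log files are excluded here).
-- Run inside each database: allocated data file size in GB
SELECT DB_NAME() AS database_name,
       SUM(CAST(size AS bigint)) * 8.0 / 1024 / 1024 AS allocated_gb
FROM sys.database_files
WHERE type_desc = 'ROWS';
Divide each database's GB by the pool-wide total and multiply by the pool's bill for the period to get its allocated cost.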
Best: allocate using measured resource consumption (DMVs + Query Store)
This is where you stop guessing and start measuring.
Use per-database resource stats from inside each database to quantify CPU, data IO, and log write usage over time.
Normalize those stats into a single “consumption score” per database, then allocate the pool bill by each database’s share.
Optionally, go deeper with Query Store to attribute heavy queries to an application component or team.
A practical allocation model you can defend
Here is a model I’ve seen work in real FinOps programs because it is explainable and it drives action.
Export the pool’s cost for the month (or day) from Azure Cost Management.
Collect per-database usage stats for the same time window.
Compute a weighted “consumption score” per database.
Allocate pool cost by each database’s share of the total score.
Publish the top offenders and the trend, not just a monthly number.
Consumption score (simple, tunable)
Start with a weighted score. Tune weights later if your workloads skew heavily toward one dimension.
Example weights:
CPU weight: 0.6
Data IO weight: 0.3
Log write weight: 0.1
Score per interval per database:
score = (cpu_pct * 0.6) + (data_io_pct * 0.3) + (log_write_pct * 0.1)
Then sum scores over the month and compute share:
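share = db_total_score / SUM(all db_total_scores)
allocated_cost = pool_cost_for_period * share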
Is it perfect? No.
Is it consistent and evidence‑based? Yes.
That is usually enough to drive the right decisions.
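To make that concrete, here is a minimal T-SQL sketch that turns monthly scores into dollars; the table variable, database names, and pool cost are illustrative, and in practice the scores come from the aggregation query shown later.
-- Illustrative only: scores come from your aggregation, the cost from Azure Cost Management
DECLARE @PoolMonthlyCost decimal(12,2) = 2500.00;
DECLARE @MonthlyScores TABLE (database_name sysname, total_score decimal(18,4));
INSERT INTO @MonthlyScores VALUES (N'OrdersDb', 5200), (N'CatalogDb', 2100), (N'ArchiveDb', 300);
SELECT database_name,
       total_score,
       total_score / SUM(total_score) OVER () AS cost_share,
       CAST(@PoolMonthlyCost * total_score / SUM(total_score) OVER () AS decimal(12,2)) AS allocated_cost
FROM @MonthlyScores
ORDER BY allocated_cost DESC;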
How to collect per-database usage (DMVs)
You have two useful perspectives: pool-level saturation and database-level usage. Use both.
Pool-level: see how “hot” the pool is
Query pool usage using sys.dm_elastic_pool_resource_stats (recent, granular) or sys.elastic_pool_resource_stats (catalog view). Use this to validate whether the pool is consistently pressured or just spiky.
-- Run in any database on the logical server
SELECT TOP (2880) -- ~24h at 30s intervals if available
end_time,
avg_cpu_percent,
avg_data_io_percent,
avg_log_write_percent,
max_worker_percent,
max_session_percent
FROM sys.dm_elastic_pool_resource_stats
ORDER BY end_time DESC;
Database-level: measure each database’s share
Run this inside each database (or automate looping through all databases). sys.dm_db_resource_stats returns roughly 15-second slices with CPU, data IO, and log write percentages, but it only keeps about an hour of history.
-- Run inside EACH database in the pool
SELECT TOP (240) -- roughly 1 hour of ~15-second intervals
end_time,
avg_cpu_percent,
avg_data_io_percent,
avg_log_write_percent,
avg_memory_usage_percent,
max_worker_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
If you need longer retention in a single query, use sys.resource_stats in the master database of the logical server (5-minute intervals per database, retained for roughly 14 days), sketched below, or stream metrics to Log Analytics and keep the raw time series there.
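A minimal sketch against sys.resource_stats; it covers every database on the logical server in one query, so verify the columns and retention in your environment before relying on it.
-- Run in the master database of the logical server
SELECT database_name,
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.resource_stats
WHERE end_time >= DATEADD(DAY, -14, SYSUTCDATETIME())
ORDER BY database_name, end_time DESC;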
Monthly aggregation query (per database)
If you store the DMV output in a central table (recommended), aggregate like this:
-- Example schema: dbo.DbResourceStats(database_name, end_time, cpu_pct, data_io_pct, log_pct)
-- Aggregate for a month
DECLARE @start datetime2 = '2026-01-01T00:00:00';
DECLARE @end datetime2 = '2026-02-01T00:00:00';
SELECT database_name,
AVG(cpu_pct) AS avg_cpu_pct,
AVG(data_io_pct) AS avg_data_io_pct,
AVG(log_pct) AS avg_log_write_pct,
SUM((cpu_pct*0.6) + (data_io_pct*0.3) + (log_pct*0.1)) AS total_score
FROM dbo.DbResourceStats
WHERE end_time >= @start AND end_time < @end
GROUP BY database_name
ORDER BY total_score DESC;
Going deeper: Query Store for “who did it” attribution
If your question is not “which database” but “which workload,” Query Store helps. It captures query runtime stats, including CPU time, and lets you rank top consumers.
This is best for tuning and accountability. It is more work than the DMV model, so only do it when you need that level of detail.
-- Top CPU-consuming queries (example pattern)
SELECT TOP (50)
qsq.query_id,
qsqt.query_sql_text,
SUM(qsrs.count_executions) AS executions,
SUM(qsrs.avg_cpu_time * qsrs.count_executions) / 1000000.0 AS est_total_cpu_seconds
FROM sys.query_store_runtime_stats qsrs
JOIN sys.query_store_plan qsp ON qsrs.plan_id = qsp.plan_id
JOIN sys.query_store_query qsq ON qsp.query_id = qsq.query_id
JOIN sys.query_store_query_text qsqt ON qsq.query_text_id = qsqt.query_text_id
GROUP BY qsq.query_id, qsqt.query_sql_text
ORDER BY est_total_cpu_seconds DESC;
From there, map queries to an app owner. If you do not have an owner map, the data will still be useful for tuning, but not for chargeback.
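If you do have (or can build) an owner map, here is a hedged sketch that rolls Query Store CPU up to teams; the mapping table, patterns, and team names are all hypothetical, and matching on query text is crude but often good enough to start.
-- Hypothetical owner map: patterns and team names are illustrative
CREATE TABLE #AppOwnerMap (match_pattern nvarchar(200), owner_team nvarchar(100));
INSERT INTO #AppOwnerMap VALUES (N'%dbo.Orders%', N'Commerce'), (N'%dbo.Invoices%', N'Finance');

WITH query_cpu AS (
    SELECT qsqt.query_sql_text,
           SUM(qsrs.avg_cpu_time * qsrs.count_executions) / 1000000.0 AS cpu_seconds
    FROM sys.query_store_runtime_stats AS qsrs
    JOIN sys.query_store_plan AS qsp ON qsrs.plan_id = qsp.plan_id
    JOIN sys.query_store_query AS qsq ON qsp.query_id = qsq.query_id
    JOIN sys.query_store_query_text AS qsqt ON qsq.query_text_id = qsqt.query_text_id
    GROUP BY qsqt.query_sql_text
)
SELECT m.owner_team, SUM(q.cpu_seconds) AS cpu_seconds
FROM query_cpu AS q
JOIN #AppOwnerMap AS m ON q.query_sql_text LIKE m.match_pattern
GROUP BY m.owner_team
ORDER BY cpu_seconds DESC;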
Option comparison (pick the one that matches your maturity)
Option | Effort | Accuracy | Best for | Gotchas |
Split pools by owner | Medium | High | Clean chargeback | May reduce pooling efficiency |
Even split | Low | Low | Quick showback | Easy to dispute |
Storage share (GB) | Low | Low–Medium | Data-heavy apps | CPU-heavy apps look “cheap” |
DMV consumption score | Medium | Medium–High | Actionable showback/chargeback | Need automation to collect stats |
Query Store attribution | High | High (for tuning) | Root cause, tuning, app accountability | Retention and query grouping require care |
Automation pattern (so this does not become a manual chore)
If you have more than a handful of databases, make data collection a job:
A scheduled job loops through all databases in the pool and inserts new sys.dm_db_resource_stats rows into a central table (run it at least hourly given that view’s roughly one-hour history; nightly is enough if you collect from sys.resource_stats in master instead). A minimal sketch follows below.
Pool-level stats are pulled once per interval and stored alongside (same time window).
A monthly view produces the allocation output table. Power BI or a workbook reads that view.
Operator rule: if you cannot collect it automatically, it will not survive quarter two.
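A minimal incremental collection sketch, assuming the dbo.DbResourceStats schema from the aggregation example and that the table is reachable from the database you run this in; the scheduling itself (Elastic Jobs, Azure Automation, or a pipeline) sits outside the snippet.
-- Run inside each pooled database at least hourly; table name matches the aggregation example
INSERT INTO dbo.DbResourceStats (database_name, end_time, cpu_pct, data_io_pct, log_pct)
SELECT DB_NAME() AS database_name,
       rs.end_time,
       rs.avg_cpu_percent,
       rs.avg_data_io_percent,
       rs.avg_log_write_percent
FROM sys.dm_db_resource_stats AS rs
WHERE rs.end_time > (SELECT ISNULL(MAX(end_time), '19000101')
                     FROM dbo.DbResourceStats
                     WHERE database_name = DB_NAME());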
Common pitfalls
Percent values are relative to pool/database limits. If per-database max settings differ, normalize before comparing (see the sketch after this list).
Idle databases may not have rows for every interval. Treat missing intervals as zero usage, not missing data.
Short retention windows mean you must export or persist stats if you want a full month without gaps.
Allocation debates explode when you try to be perfect. Anchor on repeatability and decision value.
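One hedged way to normalize, assuming the percentages are reported relative to each database’s own per-database max (verify this for your tier), is to scale each percentage by that cap relative to the pool:
normalized_cpu_pct = avg_cpu_percent * (per_database_max_vcores / pool_vcores)
Apply the same scaling to data IO and log write before computing the consumption score.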
What to do with the results
Once you have the top consumers, your playbook is simple:
Tune the top 3 queries in the noisiest database (Query Store).
If one database dominates, move it to its own pool or a single database tier so cost becomes transparent.
Right-size the pool after you remove the spikes, not before.
Set a monthly review with app owners: trend, top changes, and one action each.
CTA: Want the queries and examples?
Click here for the guide:
A ready-to-run DMV collection script (looping databases) plus a central table schema.
A sample allocation worksheet (weights, formulas, and a monthly rollup view).
Query Store examples for top CPU, reads, writes, and regressions.
Verification checklist (quick)
Confirm the DMVs/views you plan to use are available in your Azure SQL Database deployment and tier.
Confirm your retention needs. DMV history is short (about an hour for sys.dm_db_resource_stats, roughly 14 days for sys.resource_stats), so plan a regular export if you need month-long rollups.
Confirm whether your pool uses uniform per-database min/max limits. If not, normalize percentages to a common unit before allocation.
Confirm Query Store is enabled and sized appropriately for the workload if you plan to use it for attribution.
Confirm your finance month boundaries and whether you allocate by calendar month or billing month.
