Lower the bill by controlling what lands, where it lands, and how long it stays interactive.

Most Log Analytics bills do not spiral because someone forgot a budget alert. They drift because the workspace keeps collecting too much, stores it too long, or puts the wrong data in the wrong table plan.

That is good news. It means the fix is usually architectural, not magical. You do not need to shut monitoring off. You need a cleaner control path.

This guide focuses on three of the most practical levers you can use right now: retention, table plans, and data collection rules. If you get these three right, you can usually cut waste without damaging the operator experience.

TL;DR

Retention is not just a compliance setting. It is a cost dial. Set interactive retention to match how long teams actively investigate data, then use long-term retention only where it truly earns its keep.

Table plans matter. Analytics is the premium lane for the data you operate on every day. Basic and Auxiliary exist so that noisy, lower-touch data does not burn premium dollars.

DCRs are where upstream savings happen. If you can filter or reshape junk before ingestion, you beat every downstream cost-control trick.

Figure 1. The cost-control chain for Log Analytics.

Why this topic matters

Azure Monitor is flexible enough to hold high-value operational data, noisy troubleshooting logs, and long-hold audit data in the same workspace. That flexibility is powerful, but it also means defaults can quietly become expensive.

The practical pattern is simple: keep hot data hot, keep cold data cheap, and stop ingesting data that nobody uses. That sounds obvious. In real environments, it rarely happens by accident.

The reason I like this topic for beginners is that it teaches a bigger cloud lesson. Cost control gets easier when you make placement decisions early. The further a bad decision travels downstream, the more painful it becomes to unwind.

Retention basics that actually change the bill

Start with the workspace default, but do not stop there. In Azure Monitor Logs, Analytics tables inherit the workspace's default interactive retention unless you override it at the table level. The default is 30 days, and interactive retention on Analytics tables can be extended up to two years (730 days). Table-level total retention can go much longer for long-term hold. That means your first question should be: how many days do teams really need this data to stay interactive?

A lot of teams keep 90 days or 180 days of interactive data because it feels safer. Then they discover that most incident reviews happen within 7 to 30 days. If that is your pattern, you may be paying premium interactive retention for old data nobody is touching.

There is also an operational nuance that catches people off guard. If you shorten total retention, Azure Monitor waits 30 days before removing the older data. That buffer is useful when you make a mistake and need to reverse a change. Another nuance: if a workspace is set to 30 days, Microsoft documents that data might remain for 31 days unless you explicitly configure immediate purge through the API. If privacy timing matters, verify the setting before you publish a policy or promise.

My practical rule: decide interactive retention based on investigation habits, not vague comfort. Decide long-term retention by compliance, audit, or forensics needs. If you cannot name the reason, do not pay for the days.

Figure 2. A quick comparison of Analytics, Basic, and Auxiliary table plans.

At-a-glance comparison

| Plan | Best fit | Ingestion | Query pricing | Operator note |
| --- | --- | --- | --- | --- |
| Analytics | High-value operational data, alerting, dashboards, investigations | Standard | Included | Fastest, richest experience |
| Basic | Verbose troubleshooting logs you still need to search for short windows | Reduced | Charged per GB scanned | Cheaper ingestion with query limits |
| Auxiliary | Low-touch custom logs, audit trails, long-hold data | Minimal | Charged per GB scanned | Slowest lane, best when cost beats speed |

Table plans: which lane should the data live in?

Think of table plans as lanes with different economics. Analytics is the full-featured lane. It is built for continuous monitoring, faster queries, richer analytics, Insights, and broader operational use. If the table feeds daily troubleshooting, dashboards, workbooks, alerting, or repeated cross-table analysis, keep it in Analytics.

Basic is the middle lane. It lowers ingestion cost and works well for high-volume troubleshooting data that still needs a short-window interactive search. The tradeoff is that the query model is more limited, the query cost is separate, and the practical experience is narrower than Analytics. That is often fine for verbose platform or container logs that are useful during an incident, but not something teams study every hour.

Auxiliary is the cheapest lane, but it is not a universal downgrade target. It is best for low-touch custom data and long-hold records where cost and retention matter more than speed. Queries can be slower, features are reduced, and it is not the place for fast-moving operational workflows.

One mistake I see often is treating every noisy table as a Basic or Auxiliary candidate. That can backfire. If the data drives response workflows, joins, or richer visuals, the cheaper plan can create friction that pushes teams to re-ingest somewhere else or work around the platform. That is not savings. That is cost shifted into human pain.

A cleaner test is this: if an operator needs the data often and expects a premium experience, keep it in Analytics. If the data is mostly there for occasional troubleshooting, Basic may be perfect. If the data is low-touch, custom, and retention-heavy, Auxiliary might fit.

DCR basics: the control point that pays you back fastest

Data collection rules are where Azure Monitor starts to feel like a real control plane instead of a bucket. A DCR defines how data should be collected, transformed, and sent to its destination. The practical value is simple: you can shape incoming data before it lands in the workspace.

That matters because the cheapest log line is the one you never ingest. If a field is useless, project it away. If a class of events is noise, filter it out. If incoming data needs schema cleanup, do it before storage. Microsoft’s transformation model uses KQL against a virtual input called source, which keeps the idea approachable once you have written a few normal Log Analytics queries.

Keep the goal modest at first. Do not try to write a heroic transformation on day one. Start with the obvious junk. Remove rows you never investigate. Drop columns you never query. Then measure again.

There are two beginner-friendly ideas to remember. First, not all tables support transformations, so verify your target before designing around it. Second, a workspace transformation DCR applies at the workspace level for the table it targets, which is useful but also means you should be deliberate. Broad changes belong behind a simple review process.
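With those caveats in mind, here is a slightly fuller transformation sketch in the same hedged style: it filters a noisy severity, caps an oversized column, and drops a field nobody queries. The column names (Level, Message, RawPayload) are placeholders for illustration, not a real table schema.

```kusto
// Hedged DCR transformation sketch. Column names are placeholders, not a
// real schema; verify your target table supports transformations first.
source
| where Level != "Verbose"                       // drop rows nobody investigates
| extend Message = substring(Message, 0, 2048)   // cap oversized payloads before storage
| project-away RawPayload                        // remove a column nobody queries
```

Test a transformation like this against representative sample rows before attaching it to a DCR. A filter that is slightly too broad will silently discard data you wanted to keep.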

Figure 3. A safe rollout path for one noisy workspace.

A practical operator workflow you can run this week

1) Open the workspace and review usage. Find the loudest billable tables by volume. You are looking for obvious offenders, not perfection.

2) Label each candidate. Is it daily operational data, occasional troubleshooting data, or audit-and-keep data?

3) Adjust retention first. If a table stays in Analytics, lower the interactive retention to the shortest period that still supports normal investigation.

4) Review the table plan. Move suitable supported tables from Analytics to Basic only after you confirm the query and alerting experience still matches the use case. Use Auxiliary for the custom, low-touch scenarios it was designed for.

5) Apply upstream filtering with a DCR where it makes sense. Start with simple filters and test them against known good samples.

6) Wait a week or two, then compare volume, cost behavior, and operator feedback. Good cost control is not just cheaper. It is cheaper without making the incident response worse.
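For step 1, and again when you measure at step 6, a query against the built-in Usage table gives a quick volume baseline. This is a sketch rather than an official snippet; note that Quantity in the Usage table is reported in MB, so the division below converts to GB.

```kusto
// Rank billable tables by ingested volume over the last 30 days
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| order by IngestedGB desc
| take 10
```

Run it in the workspace's Logs blade. The top few DataType values are usually your candidate tables for retention, plan, or DCR work.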

Copy-paste starter ideas

Use the Azure CLI to set the workspace retention when Analytics tables should inherit a new default.

Use table-level updates when one table deserves a different retention profile or when a supported table should move to Basic.

Use a small transformation query in a DCR to filter obvious noise before it reaches the workspace.

Set workspace default analytics retention (CLI)

az monitor log-analytics workspace update \
  --resource-group <rg-name> \
  --workspace-name <workspace-name> \
  --retention-time 60

Set a supported table plan to Basic or back to Analytics (CLI)

az monitor log-analytics workspace table update \
  --resource-group <rg-name> \
  --workspace-name <workspace-name> \
  --name <table-name> \
  --plan Basic

az monitor log-analytics workspace table update \
  --resource-group <rg-name> \
  --workspace-name <workspace-name> \
  --name <table-name> \
  --plan Analytics

Set table-level interactive and total retention (CLI)

az monitor log-analytics workspace table update \
  --resource-group <rg-name> \
  --workspace-name <workspace-name> \
  --name <table-name> \
  --retention-time 30 \
  --total-retention-time 180
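After a retention or plan change, it is worth confirming what the table actually reports. Here is a hedged verification sketch using the same placeholder names as above; the JMESPath keys plan, retentionInDays, and totalRetentionInDays are the properties I would expect on the table resource, but confirm them against current CLI output before relying on this.

```shell
# Verify a table's plan and retention after a change (placeholder names)
az monitor log-analytics workspace table show \
  --resource-group <rg-name> \
  --workspace-name <workspace-name> \
  --name <table-name> \
  --query "{plan:plan, interactiveDays:retentionInDays, totalDays:totalRetentionInDays}" \
  --output table
```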

Starter transformation idea for a DCR

// Column names below are placeholders; match them to your table's schema.
source
| where SeverityLevel != 'info'    // drop informational rows nobody investigates
| project-away RawEventData        // remove a column nobody queries

What beginners should avoid

Do not use the daily cap as your main cost strategy. It is a brake for surprises, not a design pattern. Microsoft explicitly warns that it cannot stop ingestion precisely at the configured threshold, and excess data can still be billed.

Do not move a table to a cheaper plan without checking what the team actually does with it. A lower unit cost can still be the wrong business choice.

Do not write complicated transformations before you understand the data. When transformations become slow or brittle, they stop being controls and start becoming hidden failure points.

Do not make twenty table changes in one wave. Pick one workspace, one or two tables, and prove the pattern.

The art of the possible here is bigger than a lower bill. When you tune retention, place data in the right table plan, and filter upstream with DCRs, your observability estate starts acting like a designed platform instead of a growing pile of telemetry.

That is the real win. Less noise. Better operator experience. Fewer arguments about cost after the fact.

If you want a good first move, pick one noisy workspace today and answer three questions: which tables are loudest, which ones are actually used, and what could have been filtered before ingestion. That small review will usually show you where the next savings live.

Verification checklist before publishing or implementing

Use this short list before turning the guide into a production change or a published post.

Recheck current Microsoft Learn guidance for table-plan support on your target built-in tables.

Verify whether any affected alert rules, dashboards, workbooks, or downstream tools depend on Analytics-only behaviors.

Confirm whether privacy or data-deletion requirements require immediate purge when retention is set to 30 days.

Test transformation KQL against real sample data before rolling into production.

Review daily cap only as a safety control, not as your primary optimization plan.

Call to action: Start with one workspace and one noisy table. Lower the premium footprint first, then prove the change with operator feedback and usage data.
