Sample Assessment Output
What you get back from the Cloudsaver AI + Cloud Savings Assessment. The examples below represent the types of analysis included in the written report.
Cloud Savings — Rate Optimization
How you're paying for cloud today, where commitment coverage gaps exist, and the recommended instrument mix to close them. Includes current vs. projected coverage, total identified savings, and a breakdown of existing commitments alongside new Cloudsaver-managed instruments.
Current Coverage
34%
of eligible spend
With Cloudsaver
91%
projected coverage
Total Savings
$4.3M
annualized
Savings Rate
18.4%
of addressable spend
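The summary figures above are internally consistent: dividing the $4.3M total savings by the 18.4% savings rate back-solves the addressable-spend base, which the sample doesn't state directly. A minimal sketch — the division is an assumption about how the rate is defined, not a stated formula:

```python
total_savings = 4_300_000  # annualized, from the sample summary above
savings_rate = 0.184       # 18.4% of addressable spend

# Implied addressable-spend base (not stated directly in the summary).
addressable_spend = total_savings / savings_rate
print(f"${addressable_spend / 1e6:.1f}M")  # → $23.4M
```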
Coverage Comparison
Recommended Instrument Mix
Existing 1-Year RIs
$1.2M
28% of coverage
Existing 3-Year RIs
$680K
16% of coverage
Cloudsaver 30-Day
$1.8M
42% of coverage
Cloudsaver 1-Year
$620K
14% of coverage
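The coverage percentages in the mix above are simply each instrument's share of the $4.3M total. A short sketch reproducing them from the dollar figures (instrument names and values copied from the sample):

```python
# Instrument mix from the sample above; values are annualized dollars.
mix = {
    "Existing 1-Year RIs": 1_200_000,
    "Existing 3-Year RIs": 680_000,
    "Cloudsaver 30-Day": 1_800_000,
    "Cloudsaver 1-Year": 620_000,
}

total = sum(mix.values())  # 4,300,000 — matches the $4.3M total savings

# Each instrument's share of coverage, rounded to a whole percent.
shares = {name: round(100 * value / total) for name, value in mix.items()}
for name, pct in shares.items():
    print(f"{name}: {pct}% of coverage")  # 28 / 16 / 42 / 14
```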
Cloud Savings — Usage Optimization
The resources themselves — what's over-provisioned, what's idle, what's running on previous-generation hardware. Each finding is categorized, quantified, and broken down to the resource level with projected annual savings.
Usage Optimization
287 resources identified across 3 categories
Total Identified
$585K/yr
Rightsizing
Over-provisioned compute instances across 8 accounts
Idle & Orphaned
Resources running with no meaningful utilization
Previous Generation
Resources on previous-generation instance families with newer, better-priced equivalents available
Tag Health
A composite score across three dimensions — coverage, compliance, and clarity — with specific remediation recommendations ranked by severity. This is the foundation that makes cost attribution, showback, and forecasting reliable.
Overall Health
Coverage: 72% of resources have at least one tag
Compliance: 58% of tagged resources comply with tag policies
Clarity: 44% consistency across tag keys and values
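The sample doesn't specify how the three dimensions combine into the overall health score; an equal-weight average is one plausible reading, shown here purely for illustration:

```python
# Three tag-health dimensions from the sample, as fractions.
dimensions = {
    "coverage": 0.72,    # resources with at least one tag
    "compliance": 0.58,  # tagged resources meeting tag policies
    "clarity": 0.44,     # consistency across tag keys and values
}

# Equal-weight composite — an assumed weighting, for illustration only.
composite = sum(dimensions.values()) / len(dimensions)
print(f"Overall tag health: {composite:.0%}")  # → 58%
```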
Remediations
5 findings
AI — License Optimization
Seat utilization across every AI platform with recommendations organized into three buckets: reduce (reclaim idle seats), elevate (upgrade power users), and expand (fulfill waitlist demand). Includes idle detection and estimated value recovered.
Total Pool
3,250
Active
2,088
Idle
362
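The pool figures above leave a remainder that the sample doesn't label; "unassigned" below is our term for seats that are neither active nor flagged idle. A quick derivation of that remainder and overall utilization:

```python
total_pool = 3250  # contracted seats across all AI platforms
active = 2088      # seats with recent activity
idle = 362         # seats flagged for reclamation

# Remainder — presumably unassigned seats (our label; not stated in the sample).
unassigned = total_pool - active - idle
utilization = active / total_pool
print(unassigned, f"{utilization:.1%}")  # → 800 64.2%
```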
Recommendations
Reduce
- 84 idle Claude seats — no activity in 60+ days. Reclaim and reallocate.
- 112 idle ChatGPT seats — last active 45+ days ago. $33K/yr recoverable.
- 9 Gemini Finance seats unused since provisioning. Reclaim entirely.
Elevate
- 38 ChatGPT Team users hitting rate limits weekly — upgrade to Enterprise tier or shift to API.
- 22 Claude Standard users averaging 4x the usage of peers — evaluate Premium upgrade for ROI.
Expand
- 62 Copilot waitlist requests from Engineering — current utilization supports expansion.
- 41 Claude waitlist requests from Product — idle seats available for immediate reassignment.
- 28 ChatGPT requests from Customer Success — consider provisioning from reclaimed seats.
AI — Usage Patterns
Aggregate consumption patterns across your AI portfolio — model mix by spend, cost optimization opportunities, and individual flags for concerning behavior. Not a per-user roster, but the patterns that matter.
Model Mix by Spend
GPT-4o
44% · $38K/mo
Claude Sonnet
28% · $24K/mo
Claude Opus
21% · $18K/mo
GPT-4o Mini
4% · $4K/mo
Gemini Pro
3% · $3K/mo
Aggregate Patterns
Individual Flags
AI — Behavioral Insights (Tier 2)
Combined usage metrics and prompt categorization surfacing what your organization is actually doing with AI. Blends token volumes, model selection, and categorized prompt patterns into findings you can act on.
Prompt Categorization
What your organization is using AI for, based on categorized prompt analysis.
Code generation
38%
Content writing
24%
Data analysis
18%
Research / Q&A
12%
Other / unclassified
8%
Key Findings
The top 8% of users (by token volume) average 920K tokens/user/month and are primarily focused on marketing content generation. Their usage has increased at a 22% monthly rate over the last quarter — if unchecked, this cohort alone will add $18K/mo by Q3.
42% of prompts sent to GPT-4o and Claude Opus are single-turn Q&A or formatting requests. Prompt categorization confirms these produce equivalent results on cheaper models. Shifting this traffic to GPT-4o Mini or Claude Haiku would save $14K/mo with no quality loss.
Categorization flagged prompts containing what appear to be customer names, account numbers, and internal financial data across 34 users. Recommend reviewing AI acceptable use policy and implementing input guardrails before expanding access.
Engineering users generate 3.4x the token volume of other groups, but 78% of code-gen prompts are single-shot with no follow-up. Structured AI workflow training could improve output quality and reduce redundant generation — estimated 15-20% token reduction.
API traffic analysis combined with prompt categorization identified 3 unsanctioned tools routing through personal API keys. Combined spend: $4.2K/mo. Usage patterns suggest these tools duplicate capabilities already available through sanctioned platforms.
What the Full Report Includes
The written assessment is delivered as a structured document covering:
- Executive summary — total identified savings, key risks, prioritized recommendations
- Cloud savings findings — rate optimization (commitment coverage, 30-day opportunities, over-commitment, 1yr/3yr recommendations) and usage optimization (rightsizing, idle resources, tagging health)
- AI cost and usage findings — seat utilization vs. contracted across all platforms, spend by team/user/model, anomaly events, license recommendations, cross-platform inventory
- (Tier 2 only) Behavioral findings — categorized prompt analysis, use-case patterns across teams, governance recommendations grounded in actual usage
Each finding includes the underlying data, the recommended action, the projected savings or risk reduction, and the suggested implementation owner.
Want to see how this applies to your environment?
Get your free savings assessment