Terms and Conditions
Last Updated: January 7, 2026
Introduction
This document outlines the Service Terms and Service Level Agreements (SLAs) governing your use of CloudIDR's services. CloudIDR offers five distinct services: Disaster Recovery Compute Service (DRCS), Schedule Compute Service (SCS) / FlexCompute, GPU Service, Hosted Models, and LLM Ops, each with its own terms and SLA.
These Service Terms and Service Level Agreement ('Service Terms') supplement and amend the Customer Agreement between CloudIDR and Customer. In the event of any conflict or inconsistency between these Service Terms and the Customer Agreement, these Service Terms shall take precedence and control with respect to the Services described herein. For matters not explicitly addressed in these Service Terms, the Customer Agreement shall govern. Terms defined in these Service Terms supersede any conflicting definitions in the Customer Agreement.
1. Disaster Recovery Compute Service (DRCS)
1.1 Service Description
DRCS provides high-availability compute services enabling businesses to recover their applications in a designated failover region. Customers can purchase one or more Disaster Recovery (DR) policies for their failover regions. DRCS is specifically designed and must be used exclusively for DR events.
1.2 Disaster Recovery Policy Terms
Each DR policy includes:
Designated region and availability zone (AZ)
Specified instance types and quantities
12-month term commitment
Usage limitations:
96 total hours of usage per instance within 12 months
Maximum 48 continuous hours of usage per instance
Concurrent utilization of all policy-defined instances
Maximum 3 activation requests per 12-month period
Policy terms:
Non-refundable
Modifications permitted only for same-region upgrades, subject to CloudIDR's capacity
All instances within a policy must be utilized in the same Region/AZ
Instance substitution:
In rare cases where the exact instance type is unavailable, CloudIDR may substitute an equivalent instance of the same or a higher family/size
Substitution maintains or exceeds specifications (CPU, memory, network performance)
Customer will be notified of substitution
No additional charges for higher-tier substitutions
1.3 DR Policy Availability (DRPA)
Target availability: 99.999% for policy-specified instances
Measurement: Based on instance availability during DR event activations
Eligibility for Service Credits in the event of SLA non-compliance
1.4 DRCS Service Level Agreement
1.4.1 Availability Target
99.999% instance availability (excluding scheduled maintenance)
Maximum allowable downtime: approximately 5.26 minutes per year (99.999% of a calendar year) during policy usage
1.4.2 Availability Exceptions
Force majeure events
Underlying cloud provider failures affecting an AZ or region
Scheduled maintenance (with 48-hour advance notice)
1.4.3 Service Credits
Below 99.999%: 10% credit
Below 99.0%: 25% credit
Non-availability: 100% credit for affected instance-minutes
2. Schedule Compute Service (SCS) / FlexCompute
2.1 Service Description
SCS (also known as FlexCompute) provides AWS EC2 compute resources (GPU and CPU) at a discount compared to AWS On-Demand pricing. The service operates in two modes:
Scheduled Reservations: advance capacity reservations for customers with flexible usage schedules; requires advance booking
Pay-As-You-Go: billing based on usage-minutes, rounded up to the nearest minute
In both modes, instances are provisioned directly in the customer's AWS account
Pricing available at cloudidr.com/pricing
2.2 Service Terms
2.2.1 Scheduled Reservations (SCS) Terms:
Refund Policy:
Full refund: Within 10 minutes of creation and before policy start
80% credit: When cancelled ≥24 hours before start time
Early termination: Unused instance-hours credited up to a maximum of 20% of purchase price
Modifications: Not permitted after booking
Billing: Pre-paid based on reserved capacity; credits applied at the end of the month
2.3 Service Level Agreement
2.3.1 Scheduled Reservations (SCS) SLA:
Availability Target: 99% during reservation period
Maximum Downtime: Approximately 14.4 minutes per day under continuous (24/7) usage
Service Credits: 100% credit for unavailable reserved instance-minutes
Instance Availability: Best-effort basis; CloudIDR does not guarantee 100% availability
2.4 Customer Responsibilities
Maintain valid AWS account with appropriate permissions
Provide CloudIDR with necessary IAM roles for instance provisioning
Monitor and manage instances through AWS Console or CloudIDR dashboard
Terminate instances when no longer needed to stop billing (Pay-As-You-Go)
Utilize scheduled reservations during booked time windows (Scheduled Reservations)
3. GPU Service
3.1 Service Description
GPU Service provides SSH-only access to discounted AWS GPU instances. Customers receive shell access to GPU capacity at a discount compared to AWS On-Demand pricing.
3.2 Service Terms
Access Method: SSH access only; no AWS Console access to these specific instances
Billing: Usage billing based on actual minutes of usage
Instance Availability: Best-effort basis; no guaranteed availability
Data Location: Instances run in AWS regions
Session Management: Sessions may be terminated with notice if capacity reallocation is needed
Cancellation and Refunds: Same as for Schedule Compute Service (SCS); see Section 2.2
3.3 GPU Service Level Agreement
No SLA commitment; service provided on best-effort basis
CloudIDR will provide reasonable notice (minimum 24 hours when possible) before terminating a session due to capacity needs
No service credits available
3.4 Customer Responsibilities
Maintain secure SSH keys and access credentials
Back up data regularly; CloudIDR is not responsible for data loss
Comply with SSH access policies and security best practices
Terminate sessions when not in use to minimize costs
4. Hosted Models
4.1 Service Description
Hosted Models provides managed LLM hosting where CloudIDR installs and operates AI models (including OpenAI-compatible, Mistral, DeepSeek, and other supported models) on behalf of customers. Billing is token-based.
4.2 Service Terms
Billing: Per-token pricing; rates vary by model and usage tier
Supported Models: OpenAI-compatible models, Mistral, DeepSeek, and others as listed on cloudidr.com
Integration: API endpoint configuration
Model Updates: CloudIDR maintains model versions; customers are notified of updates
Data Handling: Customer prompts and responses are processed but not stored beyond operational logs (retained for 7 days)
API Keys: Customer maintains their own API keys for model providers when applicable
4.3 Hosted Models Service Level Agreement
Free Tier: No SLA commitment; best-effort service
Paid Tiers: 99% API availability during billing period
Response Time: 95th percentile response time within documented limits per model type
Service Credits:
Below 99% availability: 10% credit
Below 95% availability: 25% credit
Below 90% availability: 50% credit
4.4 Customer Responsibilities
Maintain valid API credentials
Comply with model provider terms of service
Monitor token usage and associated costs
Ensure proper error handling in applications
5. LLM Ops
5.1 Service Description
LLM Ops is a cost tracking and monitoring service that provides visibility into AI API spending across multiple providers (OpenAI, Anthropic Claude, Google Gemini). The service operates as a pass-through proxy, logging metadata while routing API requests to the original providers.
5.2 How LLM Ops Works
Proxy Service: LLM Ops sits between customer's application and AI providers (OpenAI, Anthropic, Google)
API Key Handling: Customer API keys pass through the proxy via HTTPS but are never stored or logged
Metadata Collection: Only metadata is collected (token counts, model names, timestamps, request IDs, calculated costs)
Content Privacy: Customer prompts, responses, and all message content pass through encrypted but are never logged or stored
Direct Routing: All API requests are immediately forwarded to the respective provider; CloudIDR does not process or modify content
Integration Options:
Direct proxy integration (change base URL)
Callback/webhook support for applications using LiteLLM or similar proxy frameworks
Custom integration via the CloudIDR API
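The "change base URL" integration above can be sketched as follows. This is an illustrative sketch, not official CloudIDR client code: the proxy URL shown is a placeholder, and the request shape follows the common OpenAI-style chat completions format.

```python
import json
import urllib.request

# Placeholder proxy URL -- the real value comes from the LLM Ops dashboard.
PROXY_BASE_URL = "https://llmops.example.invalid/v1"
# Removing the proxy means pointing back at the provider directly:
DIRECT_BASE_URL = "https://api.openai.com/v1"

def build_chat_request(base_url: str, api_key: str, model: str,
                       messages: list) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against any base URL.

    Switching between proxy and direct access only changes base_url; the
    payload and headers (including the API key, which the proxy passes
    through without storing) are identical in both cases.
    """
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(PROXY_BASE_URL, "sk-placeholder", "gpt-4o-mini",
                         [{"role": "user", "content": "hello"}])
```

Routing requests directly to the provider is then a one-line change: pass DIRECT_BASE_URL instead of PROXY_BASE_URL.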
5.3 LLM Ops Service Terms
5.3.1 Data Collection and Privacy
Metadata Only: CloudIDR collects and stores only:
Token counts (input/output)
Model identifiers
Timestamps
Request/response IDs
Calculated costs
Custom tags (team, agent, department - if provided by customer)
No Content Storage: Prompts, responses, and message content are never logged or stored
API Key Security: API keys pass through via HTTPS but are never stored, logged, or retained
Data Storage Location: All metadata stored in AWS data centers in the United States
Data Retention: Metadata retained while customer account is active
Data Deletion: All metadata permanently deleted within 30 days of account deletion
5.3.2 Service Availability and Reliability
Best Effort: LLM Ops provided on best-effort basis for Free Tier
Proxy Availability: If LLM Ops proxy becomes unavailable:
Customer may experience service interruption
Customer is responsible for implementing failover to direct provider access
CloudIDR is not liable for unavailability or service interruptions
Customer can immediately route requests directly to providers by removing proxy configuration
Data Accuracy: Cost tracking data is provided on a best-effort basis:
CloudIDR makes reasonable efforts to ensure accuracy
No guarantee of 100% accuracy; discrepancies may occur in cost calculations and other derived metrics
Customers should verify costs against provider invoices
CloudIDR not liable for cost tracking errors or discrepancies
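The failover responsibility described above can be sketched as a try-the-proxy-then-go-direct wrapper. This is an illustrative sketch, not CloudIDR-provided code; the URL list and request builder are assumptions supplied by the caller.

```python
import urllib.error
import urllib.request

def send_with_fallback(base_urls, build_request, send=urllib.request.urlopen):
    """Attempt each base URL in order (proxy first, direct provider second).

    If the proxy is unavailable, the same request is retried against the
    provider directly: cost tracking is lost for that request, but service
    continues uninterrupted.
    """
    last_error = None
    for base_url in base_urls:
        try:
            return send(build_request(base_url))
        except (urllib.error.URLError, OSError) as error:
            last_error = error  # remember the failure and try the next URL
    raise last_error  # every endpoint failed
```

A caller would pass the proxy URL first and the direct provider URL second, with `build_request` producing the same request body for either base URL.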
5.3.3 Pricing Tiers
Free Tier (Current):
Unlimited API calls tracked
All supported providers (OpenAI, Claude, Gemini)
Real-time dashboard
Cost alerts
Team/agent/model breakdown
Limited features
No SLA commitment
Commercial use allowed
Paid Tiers (Future):
Enhanced features (to be announced)
99.99% uptime SLA
Priority support
Advanced analytics and optimizations
Custom integrations
Pricing to be announced
5.3.4 Free Tier Limitations
No guaranteed uptime or availability
No service credits for downtime
Best-effort customer support
Feature limitations compared to paid tiers
CloudIDR reserves the right to add usage limits with 30 days' notice
5.4 LLM Ops Service Level Agreement
5.4.1 Free Tier SLA
No SLA Commitment: Free tier provided as-is with no uptime guarantees
Best Effort: CloudIDR will make reasonable efforts to maintain service availability
No Service Credits: Service credits not available for free tier
Maintenance Windows: CloudIDR may perform maintenance without prior notice
5.4.2 Paid Tier SLA (When Available)
Uptime Target: 99.99% monthly uptime
Maximum Downtime: Approximately 4.38 minutes per month
Measurement Period: Calendar month
Exclusions:
Scheduled maintenance (with 48-hour notice)
Force majeure events
Customer's network or configuration issues
Third-party provider (OpenAI, Anthropic, Google) outages
DDoS attacks or security incidents
Service Credits:
99.9% - 99.99%: 10% credit
99.0% - 99.9%: 25% credit
Below 99.0%: 50% credit
Below 95.0%: 100% credit
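The downtime budget and credit schedule above can be expressed numerically. A minimal sketch, assuming an average month length of 30.44 days and using the credit tiers exactly as listed; illustrative only, not a billing implementation.

```python
# Downtime budget implied by a 99.99% monthly uptime target.
MINUTES_PER_MONTH = 30.44 * 24 * 60                  # average calendar month
allowed_downtime = MINUTES_PER_MONTH * (1 - 0.9999)  # ~4.38 minutes

def llm_ops_service_credit(availability_pct: float) -> int:
    """Credit percentage for a measured monthly availability, per the schedule above."""
    if availability_pct >= 99.99:
        return 0   # target met, no credit
    if availability_pct >= 99.9:
        return 10
    if availability_pct >= 99.0:
        return 25
    if availability_pct >= 95.0:
        return 50
    return 100
```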
5.4.3 Data Accuracy
No SLA or guarantee for cost tracking accuracy
Best-effort calculation based on provider pricing
Customer responsible for verifying costs against provider invoices
CloudIDR not liable for cost calculation errors
5.5 Customer Responsibilities
Integration: Properly integrate LLM Ops into applications (typically 2 lines of code)
API Key Security: Maintain secure storage and handling of API keys
Fallback Configuration: Implement fallback to direct provider access in case of proxy unavailability
Cost Verification: Verify cost tracking data against actual provider invoices
Usage Monitoring: Monitor API usage and implement appropriate rate limiting
Compliance: Ensure use complies with provider terms of service (OpenAI, Anthropic, Google)
Commercial Use: Allowed for all tiers; customer responsible for compliance with provider terms
5.6 Limitations of Liability (LLM Ops Specific)
In addition to Section 9 of the Customer Agreement:
No Liability for Proxy Downtime: CloudIDR not liable for costs, losses, or damages resulting from LLM Ops proxy unavailability
No Liability for Cost Tracking Errors: CloudIDR not liable for financial impact of cost tracking inaccuracies
Third-Party Provider Responsibility: CloudIDR not liable for third-party provider (OpenAI, Anthropic, Google) issues, outages, or changes to their services
Customer Failover Responsibility: Customer responsible for implementing failover mechanisms
Data Loss: CloudIDR not liable for loss of metadata in case of service failure
7. Definitions
7.1 Instance-Minutes
Total minutes of usage per specific instance type. Example: m7g.xlarge used for 1.5 hours = 90 instance-minutes
7.2 Instance-Minutes Price
Calculated as: (per-minute price) × (instance-minutes)
Example: $0.50 per minute × 90 minutes = $45
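The two definitions above, together with the rounding rule from Section 2 (usage-minutes rounded up to the nearest minute), can be checked with a few lines. Illustrative only; not a billing implementation.

```python
import math

def instance_minutes(seconds_used: float) -> int:
    """Usage in minutes, rounded up to the nearest whole minute."""
    return math.ceil(seconds_used / 60)

def instance_minutes_price(per_minute_price: float, minutes: int) -> float:
    """(per-minute price) x (instance-minutes), as defined above."""
    return per_minute_price * minutes

# Worked example from the definitions: m7g.xlarge for 1.5 hours at $0.50/minute.
minutes = instance_minutes(1.5 * 3600)        # 90 instance-minutes
cost = instance_minutes_price(0.50, minutes)  # $45.00
```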
7.3 Disaster Recovery (DR) Events
Qualifying Events:
Physical infrastructure damage from:
Criminal activities
Military actions
Natural hazards
Accidental fire or building damage
Extended power outages exceeding backup capacity
Cyber incidents that disable infrastructure
Non-Qualifying Events:
Temporary shutdowns from operator error
Reversible software upgrade issues
Issues resolvable through power cycling
7.4 Metadata (LLM Ops Specific)
Non-content information collected by LLM Ops including but not limited to:
Token counts (input and output)
Model names and identifiers
Timestamps
Request and response IDs
Calculated costs based on provider pricing
Custom tags provided by customer (team, agent, department)
Does NOT include:
Prompts or queries sent to AI providers
Responses or completions from AI providers
Any message content
API keys or authentication credentials
8. Service Credit Claims
To request service credits for SLA violations:
Email: claims@cloudidr.com
Required Information:
Account ID
Policy/Service name
Date and time of service failure
Description of impact
Supporting documentation
Timeframe: Claims must be submitted within 30 days of incident
Processing: CloudIDR will respond within 15 business days
9. Modifications to Terms
9.1 Right to Modify
CloudIDR reserves the right to modify these Service Terms, including pricing, at any time.
Changes may be made to reflect:
Service improvements
Market conditions
Legal or regulatory requirements
Operational needs
9.2 Notice of Modifications
Material changes: 30 days advance notice
Non-material changes: Effective upon posting
Notice provided through:
Email to account administrators
Dashboard notifications
Service documentation updates
Each notification will include:
Summary of changes
Effective date
Any required customer actions
9.3 Acceptance of Changes
Continued use of Services after effective date constitutes acceptance of modified terms
For material changes affecting services with commitments:
Existing policies/contracts continue under original terms until renewal
New or renewed services follow modified terms
For pay-as-you-go services (FlexCompute, LLM Ops, GPU Service, Hosted Models):
Changes apply to all usage after effective date
No grandfather clauses for pay-as-you-go services
9.4 Customer Rights
If you do not agree with material changes:
Complete current committed services under existing terms
Option to not renew under new terms
Cancel pay-as-you-go services without penalty
Early termination options available per standard cancellation policies
9.5 Emergency Changes
Immediate changes may be implemented if required for:
Security reasons
Legal compliance
Critical system stability
Notice will be provided as soon as practicable
Document Version: 2.0
Previous Version Date: December 19, 2024
Current Version Date: January 7, 2026
Summary of Changes from Previous Version
Major Additions:
Merged FlexCompute into Section 2: Schedule Compute Service (SCS) / FlexCompute with two modes (Scheduled Reservations and Pay-As-You-Go)
Added Section 3: GPU Service terms and SLA
Added Section 4: Hosted Models terms and SLA
Added Section 5: LLM Ops comprehensive terms, privacy, and SLA
Updated Section 7.4: Added Metadata definition for LLM Ops
Enhanced Section 9.3: Clarified acceptance rules for different service types
Added DRCS instance substitution policy in Section 1.2
Added LLM Ops webhook/callback integration support in Section 5.2
Service-Specific Changes:
LLM Ops: Complete new section covering proxy operation, data privacy, no-storage policy, best-effort SLA, webhook/callback support
SCS / FlexCompute: Merged into unified section with two modes (Scheduled Reservations + Pay-As-You-Go), AWS account integration
GPU Service: New section covering SSH-only access model
Hosted Models: New section covering token-based billing (removed time-based integration claims)
DRCS: Added instance substitution policy for equivalent or higher instances
All sections updated to reflect current service portfolio
Policy Changes:
Clarified that API keys are never stored (LLM Ops)
Established metadata-only collection policy (LLM Ops)
Added best-effort accuracy disclaimer for cost tracking (LLM Ops)
Added no-liability clauses for proxy downtime (LLM Ops)
Clarified free tier has no SLA commitment (multiple services)
Added future paid tier SLA commitments (LLM Ops, FlexCompute)

