Churn Predictor Using GPT-3.5 Turbo
Optimized prompts and cost breakdowns for running the Churn Predictor with GPT-3.5 Turbo.
About GPT-3.5 Turbo
- Provider: OpenAI
- Context window: 16K tokens
- Input cost: $0.50 / 1M tokens
- Output cost: $1.50 / 1M tokens
Best For
Legacy support, simple classification, quick responses
Optimized Prompts for GPT-3.5 Turbo
These prompts are tailored to work best with GPT-3.5 Turbo's capabilities.
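As a reference point, the template that follows can be filled in with customer data before being sent as the user message of a chat-completion request. This is a minimal sketch; the helper and the sample values are illustrative, and the keys mirror the `{{...}}` placeholders in the template:

```python
# Minimal sketch: fill the churn-analysis template's {{placeholders}}
# before sending it as the user message of a chat-completion request.
# The sample customer values below are invented for illustration.

TEMPLATE = """Analyze this customer's churn risk:
{{customer_data}}
Usage trends: {{usage_trends}}
Support tickets: {{support_history}}
Provide:
1. Churn risk score (1-100)
2. Key risk factors
3. Recommended interventions
4. Urgency level"""

def render_prompt(template: str, values: dict) -> str:
    """Replace each {{key}} placeholder with its value."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template

prompt = render_prompt(TEMPLATE, {
    "customer_data": "Acme Corp, 14 seats, Pro plan, 18 months tenure",
    "usage_trends": "logins down 40% over the last 30 days",
    "support_history": "3 unresolved tickets in the past week",
})
print(prompt)
```

The rendered string would then be passed as the `content` of a user message in your chat-completion call.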
Prompt: analyze
Analyze this customer's churn risk:
{{customer_data}}
Usage trends: {{usage_trends}}
Support tickets: {{support_history}}
Provide:
1. Churn risk score (1-100)
2. Key risk factors
3. Recommended interventions
4. Urgency level
Cost Breakdown
Estimated costs for running this workflow with GPT-3.5 Turbo
- Input tokens: ~916
- Output tokens: ~927
- Cost per request: ~$0.0018
- Monthly (1,000 requests): ~$1.85
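The per-request and monthly figures follow directly from the listed per-token prices and the token estimates above; a quick check:

```python
# Recompute the estimated cost from the listed GPT-3.5 Turbo prices.
INPUT_PRICE_PER_M = 0.50    # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50   # $ per 1M output tokens
input_tokens = 916          # estimated input tokens per request
output_tokens = 927         # estimated output tokens per request

cost_per_request = (input_tokens * INPUT_PRICE_PER_M
                    + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
monthly_cost = cost_per_request * 1000  # 1,000 requests per month

print(f"cost per request: ${cost_per_request:.4f}")     # ~$0.0018
print(f"monthly (1000 requests): ${monthly_cost:.2f}")  # ~$1.85
```

Actual costs vary with prompt size and response length, so treat the token counts as estimates.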
Rate Limiting Considerations
Consider batching requests during peak hours. Check provider rate limits for GPT-3.5 Turbo.
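One simple way to stay under a requests-per-minute quota is client-side throttling. A sketch, assuming an illustrative limit (check your actual provider quota rather than relying on this number):

```python
import time

# Naive client-side throttle: space requests so we never exceed an
# assumed requests-per-minute limit. The limit here is illustrative,
# not OpenAI's published figure.
REQUESTS_PER_MINUTE = 60
MIN_INTERVAL = 60.0 / REQUESTS_PER_MINUTE  # seconds between requests

_last_request_at = 0.0

def throttled(send_fn, *args, **kwargs):
    """Sleep if needed, then invoke send_fn (e.g. a chat-completion call)."""
    global _last_request_at
    wait = MIN_INTERVAL - (time.monotonic() - _last_request_at)
    if wait > 0:
        time.sleep(wait)
    _last_request_at = time.monotonic()
    return send_fn(*args, **kwargs)

# Usage: wrap each model call, here stubbed so no network is needed.
results = [throttled(lambda: "ok") for _ in range(2)]
```

A production setup would more likely use a token-bucket limiter or the provider's retry-after headers, but the idea is the same: smooth bursts instead of hitting the limit.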
Monitoring with Administrate
Track latency and token usage for GPT-3.5 Turbo in the Administrate dashboard.
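The same signals can also be captured per call in your own code. A minimal sketch around a placeholder model-call function, reading the `usage` field that Chat Completions responses report (the wrapper and log sink are assumptions, not part of Administrate):

```python
import time

def track_usage(call_model, log):
    """Wrap a model call, recording latency and token usage.

    `call_model` is any function returning a dict with a `usage` field
    shaped like a Chat Completions response; `log` is a list that
    accumulates one record per request. Both are placeholders for
    whatever client and metrics sink you actually use.
    """
    def wrapped(*args, **kwargs):
        start = time.monotonic()
        response = call_model(*args, **kwargs)
        log.append({
            "latency_s": time.monotonic() - start,
            "prompt_tokens": response["usage"]["prompt_tokens"],
            "completion_tokens": response["usage"]["completion_tokens"],
        })
        return response
    return wrapped

# Example with a stubbed response (no network call):
log = []
fake_call = track_usage(lambda: {"usage": {"prompt_tokens": 916,
                                           "completion_tokens": 927}}, log)
fake_call()
```

Records like these make it easy to spot drift between the estimated token counts above and what requests actually consume.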