Documentation Index

Fetch the complete documentation index at: https://docs.osvi.ai/llms.txt

Use this file to discover all available pages before exploring further.

A well-written prompt is the difference between an agent that sounds robotic and one that handles real conversations naturally. This guide covers the key principles and patterns for writing prompts on OSVI.

Anatomy of a Good System Prompt

Every agent prompt should cover four things:
  1. Identity — who the agent is and who it works for
  2. Goal — what the agent is trying to accomplish in this conversation
  3. Rules — constraints on what the agent can and cannot do
  4. Tone — how the agent should sound
Example:

You are Priya, a customer support agent for Acme Corp.

Your goal is to help callers track their orders and resolve delivery issues.

Rules:
- Only discuss order-related topics. Politely redirect off-topic questions.
- Never promise a refund without first checking the order status.
- If the issue cannot be resolved, offer to escalate to a human agent.

Tone: Friendly, concise, and professional. Avoid filler words like "absolutely" or "certainly".

Writing for Voice vs Chat

Voice and chat conversations have different rhythms. Keep these differences in mind:
  • Sentence length — Voice: short, since listeners can’t re-read. Chat: medium, since readers can scan.
  • Lists — Voice: avoid bullet points; say “first… second… third…”. Chat: lists and formatting work well.
  • Confirmation — Voice: repeat key information back (“So that’s order number 1234, correct?”). Chat: less repetition needed.
  • Pace — Voice: build in natural pauses with short sentences. Chat: not applicable.

Single-Prompt Agents

For straightforward workflows, a single prompt is usually enough. Structure it as:
[Identity and goal]

[Step-by-step flow you want the agent to follow]

[Rules and edge cases]

[How to end the conversation]
Example — appointment reminder:
You are an AI assistant calling on behalf of City Dental Clinic to remind patients
about their upcoming appointments.

Follow this flow:
1. Greet the patient by name and confirm you're speaking with the right person.
2. Remind them of their appointment: date, time, and doctor's name.
3. Ask if they can confirm attendance.
4. If they need to reschedule, collect their preferred date and time and tell them
   the clinic will call back to confirm.
5. Thank them and end the call politely.

Rules:
- If someone other than the patient answers, do not share appointment details.
  Ask them to have the patient call back at 1800-XXX-XXXX.
- Keep the call under 2 minutes.

Multi-Prompt Agents

Multi-prompt agents let you define distinct states, each with its own prompt and transition conditions. Use this when your flow has meaningful branches.

When to use multi-prompt:
  • The conversation has 3+ distinct phases (greeting → qualification → closing)
  • Different parts of the conversation require very different tones or instructions
  • You want explicit control over when the agent moves between topics
State design tips:
  • Name states clearly: greeting, qualification, objection_handling, closing
  • Keep each state prompt focused — it should only describe what happens in that state
  • Define clear transition triggers: “Move to closing when the user agrees to a demo”
Example state structure for a sales agent:
State: greeting
→ Introduce yourself, confirm you're speaking with the decision-maker.
→ Transition to: qualification

State: qualification
→ Ask 3 discovery questions about their current process.
→ Transition to: pitch (if they express a pain point) or closing_no_fit (if not relevant)

State: pitch
→ Present the relevant product feature based on their pain point.
→ Transition to: objection_handling (if they raise a concern) or closing_positive

State: closing_positive
→ Offer to schedule a demo. Collect preferred time.

State: objection_handling
→ Address the concern. Transition back to: pitch or closing_no_fit
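The state structure above can also be sketched in code. The following is a hypothetical Python representation — OSVI’s actual multi-prompt configuration format may differ — showing how each state pairs a focused prompt with named transition triggers:

```python
# Hypothetical sketch of the sales-agent state machine described above.
# Field names and trigger labels are illustrative, not OSVI's real schema.
STATES = {
    "greeting": {
        "prompt": "Introduce yourself, confirm you're speaking with the decision-maker.",
        "transitions": {"confirmed": "qualification"},
    },
    "qualification": {
        "prompt": "Ask 3 discovery questions about their current process.",
        "transitions": {"pain_point": "pitch", "not_relevant": "closing_no_fit"},
    },
    "pitch": {
        "prompt": "Present the relevant product feature based on their pain point.",
        "transitions": {"concern": "objection_handling", "interested": "closing_positive"},
    },
    "objection_handling": {
        "prompt": "Address the concern.",
        "transitions": {"resolved": "pitch", "unresolved": "closing_no_fit"},
    },
    "closing_positive": {
        "prompt": "Offer to schedule a demo. Collect preferred time.",
        "transitions": {},
    },
    "closing_no_fit": {
        "prompt": "Thank them for their time and end the call politely.",
        "transitions": {},
    },
}

def next_state(current: str, trigger: str) -> str:
    """Resolve the next state for a trigger; stay in place if no transition matches."""
    return STATES[current]["transitions"].get(trigger, current)
```

Modeling transitions as an explicit lookup keeps each state prompt focused on a single phase, which is the main benefit of the multi-prompt approach.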

Using additional_data

The additional_data field on POST /call lets you inject runtime context into the agent’s prompt. Reference it with {{variable_name}} syntax.

In your API call:
{
  "agent_uuid": "agent_xxx",
  "phone_number": "9876543210",
  "country_code": "IN",
  "person_name": "Ravi Kumar",
  "additional_data": {
    "plan_name": "Pro",
    "renewal_date": "March 20, 2025",
    "outstanding_amount": "₹4,999"
  }
}
In your system prompt:
You are calling {{person_name}} regarding their {{plan_name}} subscription,
which is due for renewal on {{renewal_date}}.

The outstanding amount is {{outstanding_amount}}.
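Putting the two halves together, here is a sketch of building and sending that request from Python. The endpoint URL and Authorization header are assumptions for illustration — check the OSVI API reference for the real values:

```python
import json

# Hypothetical endpoint — confirm the real base URL in the OSVI API reference.
OSVI_CALL_URL = "https://api.osvi.ai/call"

def build_call_payload(person_name, phone_number, country_code, **additional_data):
    """Assemble a POST /call body; every extra keyword lands in additional_data."""
    return {
        "agent_uuid": "agent_xxx",
        "phone_number": phone_number,
        "country_code": country_code,
        "person_name": person_name,
        "additional_data": additional_data,
    }

payload = build_call_payload(
    "Ravi Kumar", "9876543210", "IN",
    plan_name="Pro",
    renewal_date="March 20, 2025",
    outstanding_amount="₹4,999",
)
print(json.dumps(payload, ensure_ascii=False, indent=2))

# To actually place the call (requires an API key; header name is an assumption):
# requests.post(OSVI_CALL_URL, json=payload,
#               headers={"Authorization": f"Bearer {API_KEY}"})
```

Every key inside additional_data becomes available to the prompt template, so the helper deliberately keeps call-routing fields and prompt context separate.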

Using Jinja for Dynamic System Prompts

OSVI system prompts are rendered as Jinja2 templates before the call begins. This means you can use the full Jinja syntax — not just simple variable substitution — to build prompts that adapt to your data.

Variable Substitution

This is the most common use. Any key from additional_data (or the top-level person_name) is available as a Jinja variable:
You are calling {{ person_name }} about their order {{ additional_data.order_id }}.
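Because these are standard Jinja2 templates, you can preview the rendered prompt locally with the jinja2 library before attaching it to an agent. The order_id value here is illustrative:

```python
from jinja2 import Template

prompt = Template(
    "You are calling {{ person_name }} about their order {{ additional_data.order_id }}."
)

# Render with the same shape of data the API call would supply.
rendered = prompt.render(
    person_name="Ravi Kumar",
    additional_data={"order_id": "ORD-1042"},
)
print(rendered)
```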

Conditionals

Use {% if %} to include or exclude sections of the prompt based on the data passed in:
You are calling {{ person_name }} regarding their subscription.

{% if additional_data.is_overdue %}
Their payment is overdue by {{ additional_data.days_overdue }} days.
Your primary goal is to collect payment or arrange a payment plan.
{% else %}
Their renewal is coming up on {{ additional_data.renewal_date }}.
Your goal is to confirm they want to continue and answer any questions.
{% endif %}
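Since OSVI renders prompts as standard Jinja2 templates, you can exercise both branches locally with the jinja2 library to confirm the conditional behaves as intended:

```python
from jinja2 import Template

TEMPLATE = """\
You are calling {{ person_name }} regarding their subscription.
{% if additional_data.is_overdue %}
Their payment is overdue by {{ additional_data.days_overdue }} days.
Your primary goal is to collect payment or arrange a payment plan.
{% else %}
Their renewal is coming up on {{ additional_data.renewal_date }}.
Your goal is to confirm they want to continue and answer any questions.
{% endif %}"""

# Overdue branch: the collections instructions are included.
overdue = Template(TEMPLATE).render(
    person_name="Ravi Kumar",
    additional_data={"is_overdue": True, "days_overdue": 14},
)

# Renewal branch: the softer confirmation instructions are included instead.
upcoming = Template(TEMPLATE).render(
    person_name="Ravi Kumar",
    additional_data={"is_overdue": False, "renewal_date": "March 20, 2025"},
)
```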

Loops

Use {% for %} to enumerate lists — useful when the agent needs to cover multiple items:
The customer has the following open support tickets:
{% for ticket in additional_data.tickets %}
- Ticket #{{ ticket.id }}: {{ ticket.summary }}
{% endfor %}

Work through each ticket in order and confirm whether it has been resolved.
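A quick local render with the jinja2 library shows how the loop expands the tickets list into prompt text:

```python
from jinja2 import Template

TEMPLATE = """\
The customer has the following open support tickets:
{% for ticket in additional_data.tickets %}
- Ticket #{{ ticket.id }}: {{ ticket.summary }}
{% endfor %}"""

# Render with the same tickets payload shown in the API example below.
rendered = Template(TEMPLATE).render(
    additional_data={
        "tickets": [
            {"id": "1042", "summary": "Login issue on mobile app"},
            {"id": "1078", "summary": "Invoice not received"},
        ]
    }
)
print(rendered)
```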

Default Values

Use the default filter to guard against missing fields so the prompt doesn’t break if a value isn’t provided:
You are calling {{ person_name | default("the customer") }}.
Their preferred language is {{ additional_data.language | default("English") }}.
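You can confirm the fallbacks locally with the jinja2 library by rendering with both fields missing:

```python
from jinja2 import Template

tmpl = Template(
    'You are calling {{ person_name | default("the customer") }}. '
    'Their preferred language is {{ additional_data.language | default("English") }}.'
)

# Neither person_name nor additional_data.language is supplied,
# so both defaults kick in.
rendered = tmpl.render(additional_data={})
print(rendered)
```

Note that default only replaces undefined values; if a field might arrive as an empty string, use {{ additional_data.language | default("English", true) }} to cover that case too.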

Filters

Jinja filters let you transform values inline:
{# Capitalise the name in case it came in lowercase #}
Hello, {{ person_name | capitalize }}.

{# Format a number #}
Your outstanding balance is ₹{{ additional_data.amount | int }}.

Passing Data from the API

All Jinja variables are populated from the additional_data object and the top-level person_name field in your POST /call request:
{
  "agent_uuid": "agent_xxx",
  "phone_number": "9876543210",
  "country_code": "IN",
  "person_name": "Ravi Kumar",
  "additional_data": {
    "is_overdue": true,
    "days_overdue": 14,
    "renewal_date": "March 20, 2025",
    "tickets": [
      { "id": "1042", "summary": "Login issue on mobile app" },
      { "id": "1078", "summary": "Invoice not received" }
    ]
  }
}
Keep your Jinja logic simple. Complex template logic is hard to debug and maintain. If you find yourself writing deeply nested conditionals, consider splitting the workflow into multiple agent states instead.

Common Mistakes to Avoid

Bad: “Help the user with their query.”

Good: “Help the caller track their order status. Ask for their order number and registered email address to look up the order.”

Vague prompts lead to inconsistent behaviour. Be specific about the goal and the steps.
Long lists of rules are hard for the model to follow consistently. Group related rules together, and prioritise the most important ones at the top.
For outbound calls, always define what the agent should do if someone other than the intended contact answers. Example: “If the person who answers is not {{person_name}}, ask them to pass on the message and end the call politely.”
Agents need to know how and when to end a conversation. Always include a closing instruction: “Once you have confirmed the appointment, thank the caller and end the call.”
Markdown, bullet points, and bold text don’t translate to speech. Write voice prompts as plain prose, the way you’d actually say it.

Prompt Testing Checklist

Before deploying an agent, test these scenarios:
  • Happy path — the conversation goes exactly as planned
  • Wrong person answers the call
  • User goes off-topic or asks something unrelated
  • User refuses or says they’re not interested
  • User asks to speak to a human
  • User gives ambiguous or incomplete answers
  • User interrupts the agent mid-sentence