Conversational AI with Language Models
CALM (Conversational AI with Language Models) is the dialogue system that runs Rasa text and voice assistants. It interprets user input, manages dialogue, and keeps interactions on track. By combining language model flexibility with predefined logic, Rasa enables fluent, high-trust conversations that reliably resolve user requests.
Key Benefits
- Separation of concerns: In CALM assistants, LLMs keep the conversation fluent but don't guess your business logic.
- Built-in conversational awareness: Detects and handles common conversational patterns like topic changes, corrections, and clarifications for smoother interactions.
- Deterministic execution: Follows structured workflows for reliable, debuggable interactions.
- Designed for efficiency: Optimized for smaller, fine-tuned models (e.g., Llama 8B) to reduce latency and inference costs.
- Works with your existing stack: Integrates with NLU classifiers, entity extractors, and tools, so you can enhance your assistant without starting from scratch.
Who is CALM For?
- AI/ML practitioners and developers looking to build scalable conversational assistants.
- Conversation designers who care about user experience and want to build high-trust AI assistants.
- Businesses seeking robust, next-gen AI applications without sacrificing control or reliability.
Note for researchers: If you use CALM in your research, please consider citing our research paper.
How CALM Works
CALM is a controlled framework that uses an LLM to interpret user input and suggest next steps, ensuring the assistant follows predefined logic rather than improvising. The LLM identifies what the user wants and dynamically routes the conversation through structured “Flows”: predefined business processes broken down into clear steps.
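To make the idea of a Flow concrete, here is a hedged sketch of how such a business process could be declared. The flow name, slot names, and custom action below are illustrative assumptions, not taken from the text above:

```yaml
flows:
  transfer_money:
    description: Help the user send money to a recipient.
    steps:
      - collect: recipient        # ask for (or reuse) the recipient's name
      - collect: amount           # ask for (or reuse) the amount to send
      - action: action_execute_transfer   # hypothetical custom action that performs the transfer
```

In this setup the LLM decides when to start the flow and fills the slots from the conversation, while the order of the steps themselves stays deterministic.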
Let’s walk through how CALM processes user input to see how this works in practice.