Marvin 1.1: More ways to play
Introducing Anthropic and Azure support, native mapping, natural language guidance, and more.
Marvin is an AI engineering framework for building natural language interfaces that are reliable, scalable, and easy to trust.
With the 1.0 release last week, we focused on delivering four key components that you can adopt anywhere in your software:
🧩 AI Models for structuring text into type-safe schemas
🏷️ AI Classifiers for bulletproof classification and routing
🪄 AI Functions for complex business logic and transformations
🤝 AI Applications for interactive use and persistent state
Marvin 1.0 was focused on getting the developer experience just right by reusing familiar interfaces like Pydantic models, enums, and functions. Today, we’re making some substantial enhancements to that core functionality in Marvin 1.1.
Anthropic & Azure OpenAI Service support
In addition to OpenAI, Marvin 1.1 adds support for both Anthropic and the Azure OpenAI Service through a new providers interface. To change your model, set your `llm_model` setting (or the `MARVIN_LLM_MODEL` environment variable) to a compatible string, like this:
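A minimal sketch of switching providers. The `"anthropic/claude-2"` identifier below is an assumption for illustration; check the supported-models list for the exact strings Marvin accepts:

```python
import os

# Option 1: set the environment variable before Marvin is imported.
# "anthropic/claude-2" is an assumed identifier -- see the docs for the
# actual list of supported model strings.
os.environ["MARVIN_LLM_MODEL"] = "anthropic/claude-2"

# Option 2: set it in code via Marvin's settings object:
#   import marvin
#   marvin.settings.llm_model = "anthropic/claude-2"
```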
Adding new providers is straightforward, as long as they are compatible with the OpenAI functions API (or can be taught to be compatible with it!). You can see the full list of supported models here.
Please note that at this time, AI Classifiers only support OpenAI-compatible APIs because they take advantage of a special tokenization feature to deliver results extremely quickly.
Mapping: process batches in parallel
AI Models, AI Classifiers, and AI Functions all have a new `.map()` method that applies their logic to multiple inputs. Mapping operates concurrently, which means that processing happens (almost) in parallel and your results will be available as soon as the slowest call completes.
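In Marvin the call is simply `component.map(inputs)`. The concurrency behavior is roughly what this plain-asyncio sketch shows, with a local function standing in for the LLM call (all names here are illustrative):

```python
import asyncio

# Stand-in for a single LLM call (in Marvin this would be an AI Function,
# AI Model, or AI Classifier invocation).
async def classify(text: str) -> str:
    await asyncio.sleep(0.01)  # simulated network latency
    return "positive" if "great" in text else "negative"

async def map_concurrently(inputs: list[str]) -> list[str]:
    # All calls are dispatched at once; gather returns the results in
    # input order once the slowest call has completed.
    return await asyncio.gather(*(classify(text) for text in inputs))

results = asyncio.run(map_concurrently(["a great product", "broke on day one"]))
print(results)  # ['positive', 'negative']
```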
Guide behavior with instructions
AI Models, AI Classifiers, and AI Functions all have a new `instructions` parameter that can be used to steer their behavior. This is helpful for controlling parsing or handling edge cases without redefining your entire component. Instructions can be provided when you decorate a component or on a per-call basis:
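For example, with an AI Function. The function below and the per-call keyword usage are illustrative assumptions (check the docs for exact signatures), and running it requires a configured API key, so treat this as a sketch rather than a drop-in script:

```python
from marvin import ai_fn

# Instructions supplied at decoration time steer every call:
@ai_fn(instructions="Respond only with formal, business-appropriate wording.")
def rewrite_email(draft: str) -> str:
    """Rewrite `draft` as a polished email."""

# ...or supplied per call, without redefining the component
# (the per-call keyword shown here is an assumption):
# rewrite_email("hey, need that report asap", instructions="Keep it friendly.")
```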
Note that for AI Models, the parameter is called `instructions_` to avoid conflicts with models that have an actual “instructions” field!
Streaming and more
All Marvin LLMs now support streaming outputs, which is most useful when working with AI Applications. In addition, 1.1 is full of improvements, bug fixes, and prompt tweaks designed to improve the overall developer experience.
Upgrade today (`pip install marvin -U`), check out the docs, and give the repo a star!