Concepts
Here are the core concepts that we have built Composable around.
Prompt Templates
Prompt Templates are a core concept of Composable. They are the building blocks from which prompts are created: templates are assembled to define the final prompt for a task (an Interaction).
Two template formats are available:
- **JS Template**: A JavaScript template engine running in a jailed environment. You can use standard JavaScript string-interpolation syntax (`${var}`), control blocks (`for`, `if`, `else`, etc.), and array functions (`map`, `reduce`, `filter`, etc.). The template must return a string; see the sketch after this list.
- **Plain Text**: A simple plain-text format with no variable replacement. Useful for application context or safety prompts.
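As a hedged illustration, a JS Template body might look like the sketch below, assuming the engine wraps the body in a function and supplies the input variables; the variables `customer` and `products` are hypothetical, not part of Composable's API:

```js
// Sketch of a JS Template body: plain JavaScript evaluated in a jailed
// environment. `customer` and `products` are assumed inputs.
const lines = [];
lines.push(`Hello ${customer.name},`);
if (products.length > 0) {
  // Array functions such as filter/map are available.
  const names = products.filter((p) => p.inStock).map((p) => p.name);
  lines.push(`Here are some products you may like: ${names.join(", ")}.`);
} else {
  lines.push("Take a look at our catalog for recommendations.");
}
// A JS Template must return a string.
return lines.join("\n");
```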
Interactions
Interactions are a core concept of Composable. They define the tasks the LLM is requested to perform.
An interaction is defined by the following main components:
- **Name**: The name of the interaction.
- **Description**: A description of the interaction.
- **Prompt Segments**: A list of prompt templates to be rendered as part of the final prompt.
- **Schema**: The JSON Schema the generative model is asked to follow for its response. It is also used to validate the response.
- **Configuration**: The environment and model on which to execute the interaction, plus execution parameters.
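Putting these components together, here is a hedged sketch of what an interaction definition could look like. The field names, object shape, and model name are illustrative assumptions, not Composable's actual API:

```js
// Hypothetical interaction definition, sketched as a plain object.
const summarizeReview = {
  name: "SummarizeReview",
  description: "Summarize a product review and classify its sentiment.",
  // References to prompt templates rendered into the final prompt.
  promptSegments: ["safety-guidelines", "review-summary-template"],
  // JSON Schema requested from the model; also used to validate the response.
  schema: {
    type: "object",
    properties: {
      summary: { type: "string" },
      sentiment: { type: "string", enum: ["positive", "neutral", "negative"] },
    },
    required: ["summary", "sentiment"],
  },
  // Environment, model, and execution parameters.
  configuration: {
    environment: "openai",
    model: "gpt-4o",
    temperature: 0.2,
  },
};
```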
Runs
A run is an execution of an interaction; it captures both the request to and the response from the generative model.
Runs have the following statuses:
- `created`: The run has been created but not yet started. This is typically the case while waiting for the client to start streaming.
- `processing`: The run is currently executing.
- `completed`: The run has completed successfully.
- `failed`: The run has failed. The failure reason is stored in the `error` field.
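A minimal sketch of consuming these statuses, assuming a hypothetical `run` object carrying the `status` and `error` fields described above:

```js
// Map a run's lifecycle status to a human-readable message.
function describeRun(run) {
  switch (run.status) {
    case "created":
      return "Created; waiting for streaming to start.";
    case "processing":
      return "Currently executing.";
    case "completed":
      return "Completed successfully.";
    case "failed":
      // The failure reason is stored in the `error` field.
      return `Failed: ${run.error}`;
    default:
      return `Unknown status: ${run.status}`;
  }
}
```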
Environments
Environments connect to LLM inference providers, the execution platforms that run generative models.
We currently support environments for the following inference providers:
- `azure_openai`: Azure OpenAI Service
- `bedrock`: Amazon Bedrock
- `groq`: Groq
- `huggingface_ie`: Hugging Face's Inference Endpoints
- `mistralai`: Mistral AI's La Plateforme
- `openai`: OpenAI
- `replicate`: Replicate
- `togetherai`: TogetherAI
- `vertexai`: Google's Vertex AI
- `watsonx`: IBM's watsonx.ai
In addition to the core inference providers above, we have created virtual providers that assemble models and platforms into a single, synthetic LLM and offer several balancing and execution strategies:
- `virtual_lb`: a synthetic environment that provides load balancing and failover across multiple models
- `virtual_mediator`: a synthetic environment that enables multi-head execution and LLM mediation
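To illustrate the idea behind `virtual_lb`, here is a conceptual sketch of failover across multiple models. The `targets` list and `execute` method are hypothetical; this shows the strategy, not Composable's implementation:

```js
// Conceptual sketch: try each target model in turn, failing over to the
// next one when a call errors out. A real load balancer would also weigh
// or shuffle targets; this only illustrates the failover path.
async function executeWithFailover(targets, request) {
  let lastError;
  for (const target of targets) {
    try {
      return await target.execute(request); // `execute` is an assumed method
    } catch (err) {
      lastError = err; // remember the failure and try the next model
    }
  }
  throw lastError ?? new Error("No targets configured");
}
```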