JavaScript SDK

The JavaScript SDK library provides an easy way to integrate your new or existing JavaScript applications with Composable Prompts Studio.
This guide focuses on installation, basic usage, and interaction execution. We recommend using TypeScript in your projects to leverage the SDK types and the interactions generated types.

Requirements

Node version 18 or higher is required (the SDK relies on the built-in fetch API). It also works with Node version 17.5 by using the --experimental-fetch flag.

Installation

npm install @composableai/sdk

Usage

We recommend using Node ES modules when building server-side applications (i.e. use "type": "module" in your package.json). ES modules, when used with TypeScript, require the .js extension when importing dependencies. If you are using VS Code, you can configure it to automatically add the extensions. Create a new file (or edit the existing one) .vscode/settings.json in your project and add this:

{
    "javascript.preferences.importModuleSpecifierEnding": "js",
    "typescript.preferences.importModuleSpecifierEnding": "js"
}

For web applications, you can use a bundler or a development environment like Vite.

In the code examples below, we will use ES modules with TypeScript.

To connect to Studio Server, you need an API KEY that can be generated in your Studio account. See API Keys in the settings page.

In the example below, we import the StudioClient and make a call to list the projects you created in your Studio account.

import {StudioClient} from "@composableai/sdk"

const client = new StudioClient({
    apikey: "YOUR_API_KEY_HERE"
})

const projects = await client.projects.list();

for (const project of projects) {
  console.log(project.name + ': ' + project.id);
}

The primary focus of Composable Prompts is to build and manage interactions with LLM environments easily. Interactions are created inside a project. To do something useful with the SDK, you first need to connect to a project. This can be done when creating the client by passing a projectId property. Let's pick a project ID and configure the client to connect to that project. You can find the current project ID in the dashboard or pick one from the output of the example above.

Here is an example that lists the interactions available in a project:

import { StudioClient } from "@composableai/sdk"

const client = new StudioClient({
    apikey: "YOUR_API_KEY_HERE",
    projectId: "YOUR_PROJECT_ID_HERE"
})

const interactions = await client.interactions.list();

for (const interaction of interactions) {
    console.log(interaction.name + ': ' + interaction.id);
}

You can also select a project after the client has been created by assigning the target project ID to the client.project property.

Example:

import { StudioClient } from "@composableai/sdk"

const client = new StudioClient({
    apikey: "YOUR_API_KEY_HERE",
})

const projects = await client.projects.list();
if (projects.length > 0) {
    console.log('Select project', projects[0].name);
    // set the target project
    client.project = projects[0].id;

    const interactions = await client.interactions.list();

    console.log('Interactions:');
    for (const interaction of interactions) {
        console.log(interaction.name + ': ' + interaction.id);
    }
}

Executing an Interaction

The most basic interactions do not consume any input and output a plain string. However, real-life interactions almost always consume some input data, which is used to generate the prompts sent to the LLM. Also, in many cases, you will want to get back structured output, not just a string.
Thus, an interaction will usually consume an object and output either a string or a structured object. You can define the input and output schemas using JSON Schema.

Example:

Here is an example of an interaction that requires an input object of type

{
    questions: string[];
}

and produces a string as output.
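As an illustration, this input type could be described with a JSON Schema along these lines (a sketch; the schema you actually define in Studio may differ):

```json
{
    "type": "object",
    "properties": {
        "questions": {
            "type": "array",
            "items": { "type": "string" }
        }
    },
    "required": ["questions"]
}
```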


import { StudioClient } from "@composableai/sdk"

const client = new StudioClient({
    apikey: "YOUR_API_KEY_HERE",
})

// interactionId must be the ID of an interaction that belongs to the project you are connected to.
const interactionId = "YOUR_INTERACTION_ID_HERE";
const run = await client.interactions.execute(interactionId, {
    data: {
        questions: [
            "What is the origin of the 'hello world' sentence in code examples?",
            "Say 'hello world' in french"
        ]
    }
});

console.log(run.result);

Note that the execution may take some time depending on your target AI, so it is useful to render some progress feedback. Alternatively, you can stream the execution and display the response chunk by chunk, as soon as each chunk becomes available.

Streaming an Execution

We will skip the client initialization for the sake of clarity.


const run = await client.interactions.execute(interactionId, {
    data: {
        questions: [
            "What is the origin of the 'hello world' sentence in code examples?",
            "Say 'hello world' in french"
        ]
    }
}, (chunk: string) => {
    // we got the next chunk of the result. Print it on screen
    process.stdout.write(chunk);
});

// we got the entire result.
console.log();
console.log("Result:\n", run.result);

Execution Runs

When an interaction execution completes, an ExecutionRun object is returned. This object contains the full configuration of the execution, including the input object, the generated prompt, and the response. Run objects are stored, so you can inspect or reuse them later.

To easily find an execution run, you can tag the executions and use these tags later to find the matching runs.

Tags can be used to:

  1. Filter out runs in the Runs Page. This way, you can quickly find a run and inspect its input data, the generated prompt, and the result.
  2. Find a specific run to re-use it.

Example:

Let's say you are implementing a dictionary. An interaction will ask to translate a given word into a target language.
You can tag each execution using the word, the source language, and the target language: ["hello", "en", "fr"]. Later, when someone wants a translation of the same word into the same language, you can retrieve the result from the existing runs by filtering on tags. This way, you avoid costly executions of prompts you have already run.

Execution Tags

When executing an interaction, you can specify a tags property which will accept either a string or an array of strings (in case you want to add multiple tags).

Example

Let's update the previous example:

const run = await client.interactions.execute(interactionId, {
    data: {
        questions: [
            "What is the origin of the 'hello world' sentence in code examples?",
            "Say 'hello world' in french"
        ]
    },
    tags: ['documentation', 'test']
});

Also, you can specify a session name at the client level, which will be added as a tag to all interactions executed through the client. This can be done by setting the client.sessionName property, or when initializing the client along with the apikey and projectId properties.

const client = new StudioClient({
    apikey: "YOUR_API_KEY_HERE",
    sessionName: "documentation"
})
// or
// client.sessionName = "documentation"

const run = await client.interactions.execute(interactionId, {
    data: {
        questions: [
            "What is the origin of the 'hello world' sentence in code examples?",
            "Say 'hello world' in french"
        ]
    },
    tags: 'test'
});

The run above will be tagged with both documentation and test.

Search runs using tags

This will retrieve all runs of the specified interaction, tagged as documentation and test:

const runs = await client.runs.search.execute({interaction: "TARGET_INTERACTION_ID", tags: ['documentation', 'test']})
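Putting execution tags and run search together, the dictionary idea from earlier can be sketched as follows. The TranslationClient interface is a minimal stand-in for the relevant parts of StudioClient, and the input shape { word, from, to } is an illustrative assumption, not the SDK's API:

```typescript
// Minimal stand-in types for this sketch (the real SDK exposes
// client.runs.search.execute and client.interactions.execute).
interface Run { result: string }
interface TranslationClient {
    searchRuns(query: { interaction: string; tags: string[] }): Promise<Run[]>;
    execute(interactionId: string, payload: { data: unknown; tags: string[] }): Promise<Run>;
}

// Translate a word, reusing an existing run when one matches the tags.
async function translateWithCache(
    client: TranslationClient,
    interactionId: string,
    word: string,
    from: string,
    to: string
): Promise<string> {
    const tags = [word, from, to]; // e.g. ["hello", "en", "fr"]
    const cached = await client.searchRuns({ interaction: interactionId, tags });
    if (cached.length > 0) {
        // reuse the result of an earlier, identical execution
        return cached[0].result;
    }
    // no matching run: execute and tag it for future lookups
    const run = await client.execute(interactionId, { data: { word, from, to }, tags });
    return run.result;
}
```

On a cache hit, the existing run's result is returned without triggering a new LLM execution.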

Generated Interaction Classes

Working with complex input and output objects is difficult in pure JavaScript. We recommend using TypeScript and the Composable Prompts CLI application to generate strongly typed TypeScript classes for your interactions, along with interfaces for the input and output objects.

This was already discussed in the Overview guide. Here we will focus on the structure of the generated classes and on integration with existing projects.

For details on how to generate interaction classes, take a look at the Composable Prompts CLI guide.

Let's use the same interaction as in the Overview guide: an interaction named Which Color, which takes an object name as input and outputs a possible color for that object. The input object shape is {object: string} and the output object shape is {color: string}.

Running cpcli codegen WhichColor creates a new directory, ./interactions, in the current working directory, which contains a directory named WhichColor.
The code for the WhichColor class is not exported in a single file, since the interaction may have multiple versions. When generating the code of an interaction, the default is to generate the draft version and the last published version. You can of course control which versions are generated, as well as the target directory, by using the options provided by the CLI codegen command.

The WhichColor directory will contain one file per version, plus an index.ts file.

The files corresponding to the published versions are named using the 'v' prefix followed by the version number: v{number}.ts. The draft version is always named draft.ts.
The index file does not contain any code; it just re-exports one of the version files. You can choose at generation time which version the index re-exports by using the -x option. The default is to re-export the latest published version or, if no published versions were generated, the draft version.
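For instance, assuming one published version exists, the generated layout would look like this (illustrative):

```
interactions/
  WhichColor/
    draft.ts    <- the current draft version
    v1.ts       <- published version 1
    index.ts    <- re-exports one of the version files
```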

Let's suppose we don't have any published versions yet. So, we will have only the draft.ts. Here is the content of that file:

//#export 654df9de09676ad3b8631dc3 6554cf617eae1c28ef5f3d40 @2023-11-21T12:07:37.264Z
// This is a generated file. Do not edit.

import { StudioClient, StudioClientProps, InteractionBase } from "@composableai/sdk";

/**
 * WhichColor input type
 */
export interface WhichColorProps {
    object: string;
}

/**
 * WhichColor result type
 */
export interface WhichColorResult {
    color: string;
}

/**
 * WhichColor
 */
export class WhichColor extends InteractionBase<WhichColorProps, WhichColorResult> {
    readonly projectId = "654df9de09676ad3b8631dc3";
    constructor(clientOrProps: StudioClient | StudioClientProps) {
        super("6554cf617eae1c28ef5f3d40", clientOrProps);
        this.client.project = this.projectId;
    }
}

We can see the class doesn't contain much code, because it extends the InteractionBase class, which is the base class for all generated interaction classes.
The InteractionBase class is a generic class that takes two type parameters: the input type and the output type.

In practice, you could achieve the same result by directly instantiating the InteractionBase class, passing the right interaction ID to the constructor, and writing the correct interfaces for the input and output objects.
But this would be quite inefficient, because you would have to modify the interfaces each time you modify the interaction schemas.
With code generation you don't need to bother about this: just run cpcli codegen WhichColor again and it will regenerate the version files, and also update the index to re-export the latest published version.
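The generic pattern can be sketched as follows. InteractionSketch is a simplified stand-in, not the real SDK class (which performs the actual network calls); it only shows how the two type parameters flow into execute()'s payload and result:

```typescript
// Simplified stand-in for the generic pattern used by InteractionBase.
class InteractionSketch<P, R> {
    constructor(readonly interactionId: string) {}
    async execute(payload: { data: P }): Promise<{ result: R }> {
        // A real implementation would call the Studio API here.
        return { result: undefined as unknown as R };
    }
}

interface WhichColorProps { object: string }
interface WhichColorResult { color: string }

const whichColor = new InteractionSketch<WhichColorProps, WhichColorResult>("YOUR_INTERACTION_ID_HERE");
// The compiler now rejects a misspelled payload such as { objct: "sky" },
// and knows that run.result has a `color: string` property.
```

This is why the generated classes stay so small: all the typing work is done by the two parameters passed to the base class.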

Using generated classes

To use the generated classes in your project, just move the generated folder to the desired location in your project sources (or generate directly to the desired location).
Then import the index.ts as you would import any other TypeScript file.

Running the generated interaction is quite simple:

import { WhichColor } from "./WhichColor/index.js"

const wcolor = new WhichColor({
    apikey: "YOUR_API_KEY_HERE"
});

const run = await wcolor.execute({
    data: { object: "sky" }
});

console.log(run.result);

To execute in streaming mode, just pass an onChunk callback as the second argument of the execute method. The callback will be called for each response chunk as soon as it is available.

At the time of writing this guide, the generated classes implement this interface:

interface InteractionBase<P, R> {
    retrieve(): Promise<Interaction>

    update(payload: InteractionUpdatePayload): Promise<Interaction>

    execute(payload?: InteractionExecutionPayload<P>,
        onChunk?: (chunk: string) => void): Promise<ExecutionRun<P, R>>
}

If you need to do more advanced stuff, you can use the client instance configured by the class to access the server.

const wcolor = new WhichColor({
    apikey: "YOUR_API_KEY_HERE"
});

const client = wcolor.client;

const envs = await client.environments.list();

Or, if you have multiple generated classes from the same project and want to share the same client instance, you can instantiate a client and pass it as an argument to the generated classes:

const client = new StudioClient({
    apikey: "YOUR_API_KEY_HERE"
})

const wcolor = new WhichColor(client);
