Constructor
new Agent(config)
Creates an instance of a LoopGPT Agent.
Parameters:
Name | Type | Description
---|---|---
config | object | The configuration object for initializing the Agent class.
Members
_tools
constraints :string|Array.<any>
Type:
- string | Array.<any>
history :Array.<{role: string, content: any}>
Type:
- Array.<{role: string, content: any}>
plan :Array.<any>
Type:
- Array.<any>
progress :Array.<any>
Type:
- Array.<any>
Methods
_getNonUserMessages(n)
This function returns the last n non-user messages from a chat history, excluding any system messages that contain the phrase "do_nothing".
Parameters:
Name | Type | Description
---|---|---
n | number | The number of non-user messages to retrieve from the chat history.
Returns:
An array of the last n non-user messages from the chat history, excluding any system messages that contain the phrase "do_nothing".
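The filtering described above can be sketched as a standalone function; this is a hypothetical reconstruction operating on a plain history array, not the actual method body:

```javascript
// Sketch of _getNonUserMessages: keep the last n messages that are not from
// the user and are not system messages mentioning "do_nothing".
function getNonUserMessages(history, n) {
  return history
    .filter(
      (msg) =>
        msg.role !== "user" &&
        !(msg.role === "system" && String(msg.content).includes("do_nothing"))
    )
    .slice(-n); // keep only the last n matches
}
```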
(async) chat(object)
Handles a user message for the chatbot agent: runs the language model to generate a response, and can stage and run tools based on that response.
Parameters:
Name | Type | Description
---|---|---
object | Object | {message: string\|null, run_tool: boolean}
Returns:
The function chat() returns a Promise that resolves to the reply message from the agent's conversation with the user, or to an error message if an error occurs during the conversation.
config(init_prompt, next_prompt) → {Agent}
Takes two boolean parameters and returns the agent's configuration object with a compressed history.
Parameters:
Name | Type | Attributes | Default | Description
---|---|---|---|---
init_prompt | boolean | &lt;optional&gt; | false | Whether to include the init_prompt property in the returned configuration object.
next_prompt | boolean | &lt;optional&gt; | false | Whether to include the next_prompt property in the returned configuration object.
Returns:
The config object with the init_prompt and next_prompt properties removed if their corresponding arguments are false, and with the history property set to the compressed history obtained from the getCompressedHistory() method.
- Type
- Agent
constraintsPrompt()
The function generates a prompt message listing the constraints.
Returns:
The function constraintsPrompt() returns a string that lists the agent's constraints, with each constraint numbered and separated by a newline character.
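The numbered-list format described above can be sketched as follows; the "CONSTRAINTS:" header text is an assumption for illustration:

```javascript
// Sketch of constraintsPrompt: number each constraint and join with newlines.
// The exact header string used by the library is assumed, not confirmed.
function constraintsPrompt(constraints) {
  const lines = constraints.map((c, i) => `${i + 1}. ${c}`);
  return ["CONSTRAINTS:", ...lines].join("\n");
}
```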
(async) extractJsonWithGpt(s)
The function extracts JSON from a given string using GPT.
Parameters:
Name | Type | Description
---|---|---
s | string | The string to extract JSON from.
Returns:
The function extractJsonWithGpt returns the result of calling this.model.chat with a set of messages, a temperature of 0.0, and a maximum number of tokens to generate. The messages include a system message containing a JavaScript function and a default response format, and a user message containing s.
getCompressedHistory()
This function returns a compressed version of a chat history by removing certain properties from assistant messages.
Returns:
The function getCompressedHistory() returns a modified version of the history array of messages. The modifications include removing all messages with the role of "user" and removing certain properties from the thoughts object of any messages with the role of "assistant". The modified history array is then returned.
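A hypothetical sketch of this compression, as a standalone function: which thoughts properties are kept is an assumption here (only a `text` field survives), since the documentation does not name them:

```javascript
// Sketch of getCompressedHistory: drop user messages and strip all but the
// main text from assistant "thoughts". The kept property name is assumed.
function getCompressedHistory(history) {
  return history
    .filter((msg) => msg.role !== "user")
    .map((msg) => {
      if (msg.role === "assistant" && msg.content && msg.content.thoughts) {
        const { text } = msg.content.thoughts; // keep only the main text
        return { ...msg, content: { ...msg.content, thoughts: { text } } };
      }
      return msg;
    });
}
```

Note the sketch returns new objects rather than mutating the original history, so the full history remains available for later turns.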
getFullMessage(message) → {string}
This function returns a message with a prompt based on the current state of an agent.
Parameters:
Name | Type | Description
---|---|---
message | string \| null | The user's input or response to the agent's prompt. Optional.
Returns:
The function getFullMessage returns a string that includes either the init_prompt or next_prompt property of the current object instance, followed by a newline and the message parameter (if provided).
- Type
- string
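A minimal sketch of this behavior: the criterion for choosing between the two prompts (an empty history means the first turn) is an assumption, as the documentation does not state it:

```javascript
// Sketch of getFullMessage: pick init_prompt on the first turn, next_prompt
// afterwards, and append the user's message on a new line if provided.
// The history-length check is an assumed selection criterion.
function getFullMessage(agent, message) {
  const prompt = agent.history.length ? agent.next_prompt : agent.init_prompt;
  return message ? `${prompt}\n${message}` : prompt;
}
```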
(async) getFullPrompt(user_input) → {Promise.<{full_prompt: Array.<{role: string, content: string}>, token_count: number}>}
This function generates a full prompt for a chatbot conversation, including system messages, user input, and relevant memory.
Parameters:
Name | Type | Attributes | Description
---|---|---|---
user_input | string | &lt;optional&gt; | The user's input. If provided, it is added to the prompt as a user message.
Returns:
- An object with two properties: "full_prompt", an array of messages that make up the prompt, and "token_count", the number of tokens used by the messages in the "full_prompt" array.
- Type
- Promise.<{full_prompt: Array.<{role: string, content: string}>, token_count: number}>
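The assembly can be sketched as below; the message ordering and the crude four-characters-per-token estimate are assumptions standing in for the library's real layout and tokenizer:

```javascript
// Sketch of getFullPrompt: system header first, then memory, then the
// optional user input. Token counting here is a rough chars/4 estimate.
function getFullPrompt(headerPrompt, memoryMessages, user_input) {
  const full_prompt = [
    { role: "system", content: headerPrompt },
    ...memoryMessages,
  ];
  if (user_input) full_prompt.push({ role: "user", content: user_input });
  const chars = full_prompt.reduce((sum, m) => sum + m.content.length, 0);
  return { full_prompt, token_count: Math.ceil(chars / 4) };
}
```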
goalsPrompt()
The function generates a prompt displaying a list of goals.
Returns:
The goalsPrompt() function returns a string that lists the agent's goals, with each goal numbered and separated by a newline character.
headerPrompt()
The function returns a string prompt based on the persona, goals, constraints, plan, and progress of a project.
Returns:
The headerPrompt() function returns a string that includes the prompts for the persona, goals, constraints, plan, and progress, joined together with line breaks.
(async) loadJson(s, try_gpt)
The function attempts to parse a string as JSON, and if it fails, it may try to extract the JSON using GPT or return the original string.
Parameters:
Name | Type | Attributes | Default | Description
---|---|---|---|---
s | string | | | The input string that contains the JSON data to be parsed.
try_gpt | boolean | &lt;optional&gt; | true | Whether to try extracting JSON using GPT if the initial parse fails.
Returns:
The loadJson function returns a parsed JSON object if the input string is valid JSON, or a string representation of the input if it cannot be parsed as JSON. If parsing fails and the try_gpt parameter is true, the function will attempt to extract JSON using a GPT model and retry parsing. If parsing still fails, an error is thrown.
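The parse-then-fallback flow can be sketched as follows; extractJson is a placeholder for the GPT-based extraction step (extractJsonWithGpt in the real class), injected here so the sketch stays self-contained:

```javascript
// Sketch of loadJson: try a plain parse first, then optionally fall back to
// a model-based extraction pass. extractJson is a stand-in dependency.
async function loadJson(s, try_gpt, extractJson) {
  try {
    return JSON.parse(s);
  } catch (e) {
    if (!try_gpt) return s; // give back the raw string
    const extracted = await extractJson(s); // e.g. a GPT-based repair step
    return JSON.parse(extracted); // throws if extraction also failed
  }
}
```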
personaPrompt()
The function returns a string that includes the name and description of a person.
Returns:
The function personaPrompt() returns a string that includes the name and description of the agent it is called on, taken from this.name and this.description.
planPrompt()
The function returns a string that displays the current plan.
Returns:
The planPrompt() method returns a string containing the current plan joined together with newline characters and preceded by the text "CURRENT PLAN:".
progressPrompt()
The function generates a progress prompt by iterating through a list of completed tasks and displaying them in a formatted string.
Returns:
The progressPrompt() function returns a string that lists the progress made so far. The string includes the header "PROGRESS SO FAR:" and a numbered list of completed tasks, with each item formatted as "DONE - [task description]" and separated by newline characters.
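Following the format described above, a standalone sketch:

```javascript
// Sketch of progressPrompt: header plus a numbered "DONE - ..." line per
// completed task, joined with newlines.
function progressPrompt(progress) {
  const lines = progress.map((task, i) => `${i + 1}. DONE - ${task}`);
  return ["PROGRESS SO FAR:", ...lines].join("\n");
}
```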
(async) runStagingTool()
The function runs a staging tool with specified arguments and returns the result or an error message.
Returns:
The function runStagingTool() returns different responses depending on the conditions met in the code: either a string response or an object response, depending on the command and arguments provided. The specific response returned is indicated in the code comments.
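The dispatch can be sketched as below; the shape of the staged command ({name, args}) and the error-message strings are assumptions for illustration:

```javascript
// Sketch of runStagingTool: look up the staged tool by name, run it with the
// staged arguments, and return either its result or an error string.
function runStagingTool(tools, staged) {
  const { name, args } = staged;
  const tool = tools[name];
  if (!tool) return `Tool "${name}" is not available.`;
  try {
    return tool.run(args); // may return a string or an object
  } catch (e) {
    return `Error running "${name}": ${e.message}`;
  }
}
```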
toolsPrompt() → {string}
Displays the prompt for selecting tools.
Returns:
The tool prompt.
- Type
- string