new OpenAIModel(apiKey, [model])
Constructor that initializes an OpenAI chatbot client with the specified API key and model.
Parameters:
Name | Type | Attributes | Default | Description
---|---|---|---|---
apiKey | string | | | The API key that authenticates requests to OpenAI's API. Required for all requests.
model | string | &lt;optional&gt; | gpt-3.5-turbo | The OpenAI language model to use for generating text.
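As an illustration of the signature above, a minimal stub (not the real implementation; only the parameter handling shown here is taken from this documentation):

```javascript
// Illustrative stub of the documented constructor (not the real class body)
class OpenAIModel {
  constructor(apiKey, model = 'gpt-3.5-turbo') {
    this.apiKey = apiKey; // required: authenticates API requests
    this.model = model;   // optional: defaults to 'gpt-3.5-turbo'
  }
}

const bot = new OpenAIModel('sk-...');
console.log(bot.model); // 'gpt-3.5-turbo'
```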
Members
apiUrl :string
Type:
- string
max_tokens :number|undefined
Type:
- number | undefined
Methods
(async) chat(messages, [maxTokens], [temperature]) → {Promise.&lt;any&gt;}
Sends a POST request to the API endpoint with the specified parameters and returns the response data.
Parameters:
Name | Type | Attributes | Default | Description
---|---|---|---|---
messages | any | | | An array representing the conversation history or prompt the chatbot should respond to.
maxTokens | number | &lt;optional&gt; | | The maximum number of tokens the API should generate in its response. If omitted, the API uses its default value.
temperature | number | &lt;optional&gt; | 0.8 | Controls the randomness of generated responses: higher values yield more diverse and unpredictable output, lower values more conservative and predictable output.
Returns:
The response data from the API call made with the provided messages, maxTokens, and temperature, after any errors raised during the call have been handled.
- Type
- Promise.<any>
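The request body such a call would assemble can be sketched as follows. The helper name and its defaults are assumptions for illustration, not the class's actual code; the field names (model, messages, max_tokens, temperature) follow OpenAI's chat completions API, and the 0.8 default comes from this documentation:

```javascript
// Hypothetical sketch of the JSON body a chat() call would POST.
function buildChatRequestBody(model, messages, maxTokens, temperature = 0.8) {
  const body = { model, messages, temperature };
  if (maxTokens !== undefined) {
    body.max_tokens = maxTokens; // omit entirely so the API applies its own default
  }
  return body;
}

const body = buildChatRequestBody('gpt-3.5-turbo', [
  { role: 'user', content: 'Hello!' },
]);
console.log(body.temperature); // 0.8
```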
config()
The function returns an object with the model and apiKey properties.
Returns:
An object with two properties, model and apiKey, read from the current instance via this.
(async) countPromptTokens(messages)
The function counts the number of tokens in a set of messages based on the selected language model.
Parameters:
Name | Type | Description
---|---|---
messages | any | An array of message objects, each with properties such as "name" and "text".
Returns:
the total number of tokens in the messages array, based on the model being used.
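A rough, self-contained approximation of the idea (the real method almost certainly uses a proper tokenizer such as tiktoken; the 4-characters-per-token heuristic and per-message overhead below are assumptions for illustration only):

```javascript
// Crude token estimate: ~4 characters per token plus a small
// per-message overhead. Illustration only; not the real tokenizer.
function approximateTokenCount(messages) {
  return messages.reduce(
    (total, msg) => total + Math.ceil((msg.text || '').length / 4) + 4,
    0
  );
}

const count = approximateTokenCount([
  { name: 'user', text: 'Hello, how are you?' }, // 19 chars -> 5 + 4 overhead
]);
console.log(count); // 9
```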
getTokenLimit() → {number}
The function returns the token limit for a specific language model.
Returns:
The token limit for the language model selected by this.model, returned as an integer.
- Type
- number
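One plausible shape for such a lookup (the model names and limit values below are assumptions for illustration, not values confirmed by this documentation):

```javascript
// Hypothetical limit table keyed by model name; the fallback of 4096
// is assumed to match the default gpt-3.5-turbo context window.
const TOKEN_LIMITS = {
  'gpt-3.5-turbo': 4096,
  'gpt-4': 8192,
};

function getTokenLimit(model) {
  return TOKEN_LIMITS[model] ?? 4096; // integer token limit
}

console.log(getTokenLimit('gpt-4')); // 8192
```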
setApiKey(value)
Parameters:
Name | Type | Description
---|---|---
value | string | The API key to use for subsequent requests.
(static) fromConfig(config)
This function returns a new OpenAIModel object using the apiKey and model specified in the config parameter.
Parameters:
Name | Type | Description
---|---|---
config | Object | The configuration object containing the apiKey and model properties.
Returns:
A new instance of the OpenAIModel class with its apiKey and model properties set from the config object passed as an argument.
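The factory and config() pair can be sketched as a round trip (the class is stubbed here for illustration; only the field forwarding is taken from this documentation):

```javascript
// Stub showing how fromConfig() and config() mirror each other.
class OpenAIModel {
  constructor(apiKey, model = 'gpt-3.5-turbo') {
    this.apiKey = apiKey;
    this.model = model;
  }
  config() {
    return { model: this.model, apiKey: this.apiKey };
  }
  static fromConfig(config) {
    return new OpenAIModel(config.apiKey, config.model);
  }
}

const restored = OpenAIModel.fromConfig({ apiKey: 'sk-test', model: 'gpt-4' });
console.log(restored.config()); // { model: 'gpt-4', apiKey: 'sk-test' }
```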