Chore: Add new tool parameters #343

Merged 1 commit on May 17, 2024
27 changes: 15 additions & 12 deletions docs/docs/07-gpt-file-reference.md
@@ -43,18 +43,21 @@ Tool instructions go here.

Tool parameters are key-value pairs defined at the beginning of a tool block, before any instructional text. They are specified in the format `key: value`. The parser recognizes the following keys (keys are case-insensitive, and spaces within them are ignored):

| Key                 | Description                                                                                                                                     |
|---------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| `Name`              | The name of the tool.                                                                                                                           |
| `Model Name`        | The LLM model to use; defaults to `gpt-4-turbo`.                                                                                                |
| `Global Model Name` | The LLM model to use for all tools.                                                                                                             |
| `Description`       | The description of the tool. It is important that this properly describes the tool's purpose, as the description is used by the LLM.            |
| `Internal Prompt`   | Setting this to `false` will disable the built-in system prompt for this tool.                                                                  |
| `Tools`             | A comma-separated list of tools that are available to be called by this tool.                                                                   |
| `Global Tools`      | A comma-separated list of tools that are available to be called by all tools.                                                                   |
| `Credentials`       | A comma-separated list of credential tools to run before the main tool.                                                                         |
| `Args`              | Arguments for the tool. Each argument is defined in the format `arg-name: description`.                                                         |
| `Max Tokens`        | Set to a number to limit the maximum number of tokens that can be generated by the LLM.                                                         |
| `JSON Response`     | Setting this to `true` will cause the LLM to respond in JSON format. If set to `true`, you must also include JSON formatting instructions in the tool body. |
| `Temperature`       | A floating-point number representing the temperature parameter. The default temperature is 0. Set it higher for more creative output.           |
| `Chat`              | Setting this to `true` will enable an interactive chat session for the tool.                                                                    |
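For illustration, here is a hypothetical tool block combining several of these parameters; the tool name, description, argument, and instruction text are invented for this sketch:

```
Name: summarize
Description: Summarizes the text passed in the text argument.
Model Name: gpt-4-turbo
Temperature: 0.2
Max Tokens: 1024
JSON Response: true
Args: text: The text to summarize

Respond with a JSON object containing a single "summary" field
that holds a short summary of ${text}.
```

Note that because `JSON Response` is `true`, the instruction text explicitly tells the model what JSON shape to return. The `Global Model Name` and `Global Tools` parameters, per the table above, apply to all tools rather than only the one whose block declares them.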


