diff --git a/README.md b/README.md
index 5de388d4..df4c61f1 100644
--- a/README.md
+++ b/README.md
@@ -1,280 +1,33 @@
# GPTScript
-[](https://discord.gg/9sSf4UyAMC)
+
-## Overview
+GPTScript is a framework that allows Large Language Models (LLMs) to operate and interact with various systems. These systems can range from local executables to complex applications with OpenAPI schemas, SDK libraries, or any RAG-based solutions. GPTScript is designed to easily integrate any system, whether local or remote, with your LLM using just a few lines of prompts.
-GPTScript is a new scripting language to automate your interaction with a Large Language Model (LLM), namely OpenAI. The ultimate goal is to create a natural language programming experience. The syntax of GPTScript is largely natural language, making it very easy to learn and use.
-Natural language prompts can be mixed with traditional scripts such as bash and python or even external HTTP service
-calls. With GPTScript you can do just about anything, like [plan a vacation](./examples/travel-agent.gpt),
-[edit a file](./examples/add-go-mod-dep.gpt), [run some SQL](./examples/sqlite-download.gpt), or [build a mongodb/flask app](./examples/hacker-news-headlines.gpt). Here are some common use cases for GPTScript:
+Here are some sample use cases of GPTScript:
+1. Chat with a local CLI - [Try it!](https://docs.gptscript.ai/examples/cli)
+2. Chat with an OpenAPI compliant endpoint - [Try it!](https://docs.gptscript.ai/examples/api)
+3. Chat with local files and directories - [Try it!](https://docs.gptscript.ai/examples/local-files)
+4. Run an automated workflow - [Try it!](https://docs.gptscript.ai/examples/workflow)
-1. [Retrieval-Augmented Generation (RAG)](./docs/README-USECASES.md#retrieval)
-2. [Task Automation](./docs/README-USECASES.md#task-automation)
-3. [Agents and Assistants](./docs/README-USECASES.md#agents-and-assistants)
-4. [Data Analysis](./docs/README-USECASES.md#data-analysis)
-5. [Vision, Image, and Audio](./docs/README-USECASES.md#vision-image-and-audio)
-6. [Memory Management](./docs/README-USECASES.md#memory-management)
-7. [Chatbots](./docs/README-USECASES.md#chatbots)
-| :memo: | We are currently exploring options for interacting with local models using GPTScript. |
-| ------ | :------------------------------------------------------------------------------------ |
-
-The following example illustrates how GPTScript allows you to accomplish a complex task by writing instructions in English:
-
-```yaml
-# example.gpt
-
-Tools: sys.download, sys.exec, sys.remove
-
-Download https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip to a
-random file. Then expand the archive to a temporary location as there is a sqlite
-database in it.
-
-First inspect the schema of the database to understand the table structure.
-
-Form and run a SQL query to find the artist with the most number of albums and output
-the result of that.
-
-When done remove the database file and the downloaded content.
-```
-
-```shell
-$ gptscript ./example.gpt
-```
-
-```
-OUTPUT:
-
-The artist with the most number of albums in the database is Iron Maiden, with a total
-of 21 albums.
+### Getting started
+macOS and Linux:
```
-
-## Quick Start
-
-### 1. Install the latest release
-
-#### Homebrew (macOS and Linux)
-
-```shell
-brew install gptscript-ai/tap/gptscript
+brew install gptscript-ai/tap/gptscript
+gptscript github.com/gptscript-ai/llm-basics-demo
```
-#### Install Script (macOS and Linux):
-
-```shell
-curl https://get.gptscript.ai/install.sh | sh
+Windows:
```
-
-### Scoop (Windows)
-
-```shell
-scoop bucket add extras # If 'extras' is not already enabled
-scoop install gptscript
-```
-
-#### WinGet (Windows)
-
-```shell
winget install gptscript-ai.gptscript
+gptscript github.com/gptscript-ai/llm-basics-demo
```
-#### Manually
-
-Download and install the archive for your platform and architecture from the [releases page](https://github.com/gptscript-ai/gptscript/releases).
-
-### 2. Get an API key from [OpenAI](https://platform.openai.com/api-keys).
-
-#### macOS and Linux
-
-```shell
-export OPENAI_API_KEY="your-api-key"
-```
-
-#### Windows
-
-```powershell
-$env:OPENAI_API_KEY = 'your-api-key'
-```
-
-### 3. Run Hello World
-
-```shell
-gptscript https://get.gptscript.ai/echo.gpt --input 'Hello, World!'
-```
-
-```
-OUTPUT:
-
-Hello, World!
-```
-
-The model used by default is `gpt-4o` and you must have access to that model in your OpenAI account.
-
-### 4. Extra Credit: Examples and Run Debugging UI
-
-Clone examples and run debugging UI
-
-```shell
-git clone https://github.com/gptscript-ai/gptscript
-cd gptscript/examples
-
-# Run the debugging UI
-gptscript --server
-```
-
-## How it works
-
-**_GPTScript is composed of tools._** Each tool performs a series of actions similar to a function. Tools have available
-to them other tools that can be invoked similar to a function call. While similar to a function, the tools are
-primarily implemented with a natural language prompt. **_The interaction of the tools is determined by the AI model_**,
-the model determines if the tool needs to be invoked and what arguments to pass. Tools are intended to be implemented
-with a natural language prompt but can also be implemented with a command or HTTP call.
-
-### Example
-
-Below are two tool definitions, separated by `---`. The first tool does not require a name or description, but
-every tool after name and description are required. The first tool, has the parameter `tools: bob` meaning that the tool named `bob` is available to be called if needed.
-
-```yaml
-tools: bob
-
-Ask Bob how he is doing and let me know exactly what he said.
-
----
-name: bob
-description: I'm Bob, a friendly guy.
-args: question: The question to ask Bob.
-
-When asked how I am doing, respond with "Thanks for asking "${question}", I'm doing great fellow friendly AI tool!"
-```
-
-Put the above content in a file named `bob.gpt` and run the following command:
-
-```shell
-$ gptscript bob.gpt
-```
-
-```
-OUTPUT:
-
-Bob said, "Thanks for asking 'How are you doing?', I'm doing great fellow friendly AI tool!"
-```
-
-Tools can be implemented by invoking a program instead of a natural language prompt. The below
-example is the same as the previous example but implements Bob using python.
-
-```yaml
-Tools: bob
-
-Ask Bob how he is doing and let me know exactly what he said.
-
----
-Name: bob
-Description: I'm Bob, a friendly guy.
-Args: question: The question to ask Bob.
-
-#!python3
-
-import os
-
-print(f"Thanks for asking {os.environ['question']}, I'm doing great fellow friendly AI tool!")
-```
-
-With these basic building blocks you can create complex scripts with AI interacting with AI, your local system, data,
-or external services.
-
-## GPT File Reference
-
-### Extension
-
-GPTScript files use the `.gpt` extension by convention.
-
-### File Structure
-
-A GPTScript file has one or more tools in the file. Each tool is separated by three dashes `---` alone on a line.
-
-```yaml
-Name: tool1
-Description: This is tool1
-
-Do sample tool stuff.
-
----
-Name: tool2
-Description: This is tool2
-
-Do more sample tool stuff.
-```
-
-### Tool Definition
-
-A tool starts with a preamble that defines the tool's name, description, args, available tools and additional parameters.
-The preamble is followed by the tool's body, which contains the instructions for the tool. Comments in
-the preamble are lines starting with `#` and are ignored by the parser. Comments are not really encouraged
-as the text is typically more useful in the description, argument descriptions or instructions.
-
-```yaml
-Name: tool-name
-# This is a comment in the preamble.
-Description: Tool description
-# This tool can invoke tool1 or tool2 if needed
-Tools: tool1, tool2
-Args: arg1: The description of arg1
-
-Tool instructions go here.
-```
-
-#### Tool Parameters
-
-Tool parameters are key-value pairs defined at the beginning of a tool block, before any instructional text. They are specified in the format `key: value`. The parser recognizes the following keys (case-insensitive and spaces are ignored):
-
-
-| Key | Description |
-|------------------|-----------------------------------------------------------------------------------------------------------------------------------------|
-| `Name` | The name of the tool. |
-| `Model Name` | The OpenAI model to use, by default it uses "gpt-4-turbo" |
-| `Description` | The description of the tool. It is important that this properly describes the tool's purpose as the description is used by the LLM. |
-| `Internal Prompt`| Setting this to `false` will disable the built-in system prompt for this tool. |
-| `Tools` | A comma-separated list of tools that are available to be called by this tool. |
-| `Args` | Arguments for the tool. Each argument is defined in the format `arg-name: description`. |
-| `Max Tokens` | Set to a number if you wish to limit the maximum number of tokens that can be generated by the LLM. |
-| `JSON Response` | Setting to `true` will cause the LLM to respond in a JSON format. If you set true you must also include instructions in the tool. |
-| `Temperature` | A floating-point number representing the temperature parameter. By default, the temperature is 0. Set to a higher number for more creativity. |
-
-
-#### Tool Body
-
-The tool body contains the instructions for the tool which can be a natural language prompt or
-a command to execute. Commands must start with `#!` followed by the interpreter (e.g. `#!/bin/bash`, `#!python3`)
-a text that will be placed in a file and passed to the interpreter. Arguments can be references in the instructions
-using the format `${arg1}`.
-
-```yaml
-name: echo-ai
-description: A tool that echos the input
-args: input: The input
-
-Just return only "${input}"
-
----
-name: echo-command
-description: A tool that echos the input
-args: input: The input
-
-#!/bin/bash
-
-echo "${input}"
-```
-
-## Built in Tools
-
-There are several built in tools to do basic things like read/write files, download http content and execute commands.
-Run `gptscript --list-tools` to list all the built-in tools.
-
-## Examples
-
-For more examples check out the [examples](examples) directory.
+A few notes:
+- You'll need an [OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
+- On Windows, after installing gptscript you may need to restart your terminal for the changes to take effect
+- The above script is a simple chat-based assistant. You can ask it questions and it will answer to the best of its ability.
## Community
diff --git a/docs/docs/01-overview.md b/docs/docs/01-overview.md
index 68f94afb..b808ae6f 100644
--- a/docs/docs/01-overview.md
+++ b/docs/docs/01-overview.md
@@ -2,39 +2,41 @@
title: Overview
slug: /
---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
[](https://discord.gg/9sSf4UyAMC)
-GPTScript is a new scripting language to automate your interaction with a Large Language Model (LLM), namely OpenAI. The ultimate goal is to create a natural language programming experience. The syntax of GPTScript is largely natural language, making it very easy to learn and use. Natural language prompts can be mixed with traditional scripts such as bash and python or even external HTTP service calls. With GPTScript you can do just about anything, like [plan a vacation](https://github.com/gptscript-ai/gptscript/blob/main/examples/travel-agent.gpt), [edit a file](https://github.com/gptscript-ai/gptscript/blob/main/examples/add-go-mod-dep.gpt), [run some SQL](https://github.com/gptscript-ai/gptscript/blob/main/examples/sqlite-download.gpt), or [build a mongodb/flask app](https://github.com/gptscript-ai/gptscript/blob/main/examples/hacker-news-headlines.gpt).
-
-:::note
-We are currently exploring options for interacting with local models using GPTScript.
-:::
-
-```yaml
-# example.gpt
-
-Tools: sys.download, sys.exec, sys.remove
-
-Download https://www.sqlitetutorial.net/wp-content/uploads/2018/03/chinook.zip to a
-random file. Then expand the archive to a temporary location as there is a sqlite
-database in it.
-
-First inspect the schema of the database to understand the table structure.
-
-Form and run a SQL query to find the artist with the most number of albums and output
-the result of that.
-
-When done remove the database file and the downloaded content.
-```
-```shell
-$ gptscript ./example.gpt
-```
-```
-OUTPUT:
-
-The artist with the most number of albums in the database is Iron Maiden, with a total
-of 21 albums.
-```
-
-For more examples check out the [examples](https://github.com/gptscript-ai/gptscript/blob/main/examples) directory.
+
+
+GPTScript is a framework that allows Large Language Models (LLMs) to operate and interact with various systems. These systems can range from local executables to complex applications with OpenAPI schemas, SDK libraries, or any RAG-based solutions. GPTScript is designed to easily integrate any system, whether local or remote, with your LLM using just a few lines of prompts.
+
+Here are some sample use cases of GPTScript:
+1. Chat with a local CLI - [Try it!](examples/cli)
+2. Chat with an OpenAPI compliant endpoint - [Try it!](examples/api)
+3. Chat with local files and directories - [Try it!](examples/local-files)
+4. Run an automated workflow - [Try it!](examples/workflow)
+
+### Getting Started
+
+
+
+ ```shell
+ brew install gptscript-ai/tap/gptscript
+ gptscript github.com/gptscript-ai/llm-basics-demo
+ ```
+ A few notes:
+ - You'll need an [OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
+ - The above script is a simple chat-based assistant. You can ask it questions and it will answer to the best of its ability.
+
+
+ ```shell
+ winget install gptscript-ai.gptscript
+ gptscript github.com/gptscript-ai/llm-basics-demo
+ ```
+ A few notes:
+ - You'll need an [OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)
+ - After installing gptscript you may need to restart your terminal for the changes to take effect
+ - The above script is a simple chat-based assistant. You can ask it questions and it will answer to the best of its ability.
+
+
diff --git a/docs/docs/02-examples/01-cli.md b/docs/docs/02-examples/01-cli.md
new file mode 100644
index 00000000..7a59f592
--- /dev/null
+++ b/docs/docs/02-examples/01-cli.md
@@ -0,0 +1,286 @@
+# Chat with a Local CLI
+
+GPTScript makes it easy to write AI integrations with CLIs and other executables available on your local workstation. This is powerful because it allows you to work with AI to solve complex problems using your available CLIs. You can describe complex requests in plain English, and GPTScript will figure out the best CLI commands to make that happen. This guide will show you how to build a GPTScript that integrates with two CLIs:
+
+- [gh](https://cli.github.com/) - the GitHub CLI
+- [kubectl](https://kubernetes.io/docs/reference/kubectl/) - the Kubernetes CLI
+
+:::warning
+This script **does not install** or configure gh or kubectl. We assume you've done that already.
+
+- For gh, you must be logged in via `gh auth login`. [See here for more details](https://docs.github.com/en/github-cli/github-cli/quickstart)
+- For kubectl, you must have a proper `kubeconfig`. [See here for more details](https://kubernetes.io/docs/tasks/tools/)
+
+:::
+
+## Too Long; Didn't Read
+
+Want to start using this script now? Just run:
+
+```
+gptscript github.com/gptscript-ai/cli-demo
+```
+
+Or if you want to skip ahead and just grab the full script so that you can start hacking on it, jump to the [Putting it all together section](cli#putting-it-all-together).
+
+## Getting Started
+
+The rest of this guide will walk you through building a script that can serve as an assistant for GitHub and Kubernetes tasks. We'll be explaining the how, what, and why along the way.
+
+First, open a new GPTScript file in your favorite editor. We'll call the file `cli-demo.gpt`:
+
+```
+vim cli-demo.gpt
+```
+
+All edits below are assumed to be in this file. At the end, we'll share the entire script as one cohesive file, but along the way we'll just be adding tools one-by-one.
+
+## Create the Kubernetes Agent
+
+Let's start by adding the Kubernetes agent. In our script, add the following:
+
+```
+---
+Name: k8s-agent
+Description: An agent that can help you with your Kubernetes cluster by executing kubectl commands
+Context: shared-context
+Tools: sys.exec
+Parameter: task: The kubectl related task to accomplish
+Chat: true
+
+You have the kubectl cli available to you. Use it to accomplish the tasks that the user asks of you.
+
+```
+
+Now, let's walk through this tool line-by-line.
+
+**---** is a block separator. It's how we delineate tools in a script.
+
+**Name and Description** help the LLM understand the purpose of this tool. You should always have meaningful names and descriptions.
+
+**Tools: sys.exec** makes the built-in `sys.exec` tool available to this agent. This gives the agent the ability to execute arbitrary commands. Based on our prompt, it will be used for kubectl commands. GPTScript's authorization system will prompt for approval whenever it's going to run a `sys.exec` command.
+
+**Parameter: task:** defines a parameter named "task" for this tool. This will be important later on when other tools need to hand off to this tool - they'll pass the task to it as this parameter. As with the name and description fields, it's important to provide a good description so that the LLM knows how to use this parameter.
+
+**Chat: true** turns this tool into a "chat-able" tool, which we also call an "agent". This is important for open-ended tasks that might take some iteration.
+
+Finally, we have the **tool body**, which in this case is a prompt:
+
+```
+You have the kubectl cli available to you. Use it to accomplish the tasks that the user asks of you.
+```
+
+This is what the tool will actually do. Tool bodies can be prompts or raw code like python, javascript, or the [world's best programming language](https://x.com/ibuildthecloud/status/1796227491943637125) - bash. For chat-able tools, your tool body should always be a prompt.
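+
+For contrast, here's a hedged sketch of a tool body implemented as raw code rather than a prompt. This is a hypothetical standalone tool, not part of this demo - when the body starts with a shebang, GPTScript runs it directly instead of sending it to the LLM:
+
+```
+---
+Name: cluster-info
+Description: Prints basic information about the current Kubernetes cluster
+
+#!/usr/bin/env bash
+
+# This body is executed directly as a script; no LLM call is involved.
+kubectl cluster-info
+kubectl get nodes
+```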
+
+That's all there is to the Kubernetes agent. You can try it out now. One nice thing about GPTScript is that tools are composable, so you can get this tool working well and then move on to the next tool without affecting this one. To launch this tool, run:
+
+```
+gptscript --sub-tool k8s-agent cli-demo.gpt
+```
+
+Once you're chatting, try asking it to do something like listing all the pods in your cluster or even launching a new deployment in the cluster.
+
+## Create the GitHub Agent
+
+Now let's add the GitHub Agent. Drop the following into the file below the tool we just added.
+
+```
+---
+Name: github-agent
+Description: An agent to help you with GitHub related tasks using the gh cli
+Context: learn-gh
+Tools: sys.exec
+Parameter: task: The GitHub task to accomplish
+Chat: true
+
+You have the gh cli available to you. Use it to accomplish the tasks that the user asks of you.
+
+```
+
+This tool is very similar to the Kubernetes agent. There are just a few key differences:
+
+1. Names and descriptions have been changed to reference GitHub and gh as appropriate.
+2. We've introduced the `learn-gh` context. We'll explore this next.
+
+### The learn-gh context tool
+
+Add this for the learn-gh context tool:
+
+```
+---
+Name: learn-gh
+
+#!/usr/bin/env bash
+
+echo "The following is the help text for the gh cli and some of its sub-commands. Use these when figuring out how to construct new commands. Note that the --search flag is used for filtering and sorting as well; there is no dedicated --sort flag."
+gh --help
+gh repo --help
+gh issue --help
+gh issue list --help
+gh issue create --help
+gh issue comment --help
+gh issue delete --help
+gh issue edit --help
+gh pr --help
+gh pr create --help
+gh pr checkout --help
+gh release --help
+gh release create --help
+```
+
+As we saw, this tool is used as the context for the github-agent. Why did we add this and what does it do?
+
+To answer that, let's first understand what the Context stanza does. Any tools referenced in the Context stanza will be called, and their output will be added to the chat context. As the name suggests, this gives the LLM additional context for subsequent messages. Sometimes an LLM needs extra instructions or context in order to achieve the desired results. There's no hard-and-fast rule for when you should include context; it's best discovered through trial and error.
+
+We didn't need extra context for the Kubernetes tool because we found that our default LLM knows kubectl (and Kubernetes) quite well. However, the same testing showed that our default LLM doesn't know the gh cli as well. Specifically, the LLM would sometimes hallucinate invalid combinations of flags and parameters. Without this context, the LLM often takes several tries to get a gh command correct.
+
+:::tip
+Did you catch that "takes several tries to get the command correct" part? One useful feature of GPTScript is that it will feed error messages back to the LLM, which allows the LLM to learn from its mistake and try again.
+:::
+
+And that's the GitHub Agent. You can try it out now:
+
+```
+gptscript --sub-tool github-agent cli-demo.gpt
+```
+
+Once you're chatting, try asking it to do something like "Open an issue in gptscript-ai/gptscript with a title and body that says Hi from me and states how wonderful gptscript is but jazz it up and make it unique"
+
+## Your CLI Assistant
+
+Right now if you were to launch this script, you'd be dropped right into the Kubernetes agent. Let's create a new entrypoint whose job it is to handle your initial conversation and route to the appropriate agent. Add this to the **TOP** of your file:
+
+```
+Name: Your CLI Assistant
+Description: An assistant to help you with local cli-based tasks for GitHub and Kubernetes
+Agents: k8s-agent, github-agent
+Context: shared-context
+Chat: true
+
+Help the user accomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You don't need to start off by guiding them.
+```
+
+By being at the top of the file, this tool will serve as the script's entrypoint. Here are the parts of this tool that are worth additional explanation:
+
+**Agents: k8s-agent, github-agent** puts these two agents into a group that can hand-off to each other. So, you can ask a GitHub question, then a Kubernetes question, and then a GitHub question again and the chat conversation will get transferred to the proper agent each time.
+
+Next is **Context: shared-context**. You're already familiar with contexts, but in the next section we'll explain what's unique about this one.
+
+### The shared-context tool
+
+Drop the shared-context tool in at the very bottom of the file:
+
+```
+---
+Name: shared-context
+Share Context: github.com/gptscript-ai/context/history
+
+#!sys.echo
+Always delegate to the best tool for the user's request.
+Ask the user for information needed to complete a task.
+Provide the user with the exact action you will be taking and get the user's confirmation when creating or updating resources.
+ALWAYS ask the user to confirm deletions, and provide as much detail about the action as possible.
+```
+
+and do one more thing: make sure it's wired in as a context tool for both agents. The k8s-agent already has the line `Context: shared-context`; for github-agent, modify the existing Context line to: `Context: learn-gh, shared-context`.
+
+**Share Context: github.com/gptscript-ai/context/history** - In this line, "Share Context" means that the specified tool(s) will be part of the context for any tool that references this tool in its Context stanza. It's a way to compose and aggregate contexts.
+
+The specific tool referenced here - github.com/gptscript-ai/context/history - makes it so that when you transition from one agent to the next, your chat history is carried across. In this script, that means all the Kubernetes information you gathered stays available when you're talking to the GitHub agent.
+
+The **#!sys.echo** body is a simple way to directly output whatever text follows it. This is useful if you just have a static set of instructions you need to inject into the context. The actual text should make sense if you read it. We're telling the agents how we want them to behave and interact.
+
+## Putting it all together
+
+Let's take a look at this script as one cohesive file:
+
+```
+Name: Your CLI Assistant
+Description: An assistant to help you with local cli-based dev tasks
+Context: shared-context
+Agents: k8s-agent, github-agent
+Chat: true
+
+Help the user accomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You don't need to start off by guiding them.
+
+---
+Name: k8s-agent
+Description: An agent that can help you with your Kubernetes cluster by executing kubectl commands
+Context: shared-context
+Tools: sys.exec
+Parameter: task: The kubectl related task to accomplish
+Chat: true
+
+You have the kubectl cli available to you. Use it to accomplish the tasks that the user asks of you.
+
+---
+Name: github-agent
+Description: An agent to help you with GitHub related tasks using the gh cli
+Context: learn-gh, shared-context
+Tools: sys.exec
+Parameter: task: The GitHub task to accomplish
+Chat: true
+
+You have the gh cli available to you. Use it to accomplish the tasks that the user asks of you.
+
+---
+Name: learn-gh
+
+#!/usr/bin/env bash
+
+echo "The following is the help text for the gh cli and some of its sub-commands. Use these when figuring out how to construct new commands. Note that the --search flag is used for filtering and sorting as well; there is no dedicated --sort flag."
+gh --help
+gh repo --help
+gh issue --help
+gh issue list --help
+gh issue create --help
+gh issue comment --help
+gh issue delete --help
+gh issue edit --help
+gh pr --help
+gh pr create --help
+gh pr checkout --help
+gh release --help
+gh release create --help
+
+
+---
+Name: shared-context
+Share Context: github.com/gptscript-ai/context/history
+
+#!sys.echo
+Always delegate to the best tool for the user's request.
+Ask the user for information needed to complete a task.
+Provide the user with the exact action you will be taking and get the user's confirmation when creating or updating resources.
+ALWAYS ask the user to confirm deletions, and provide as much detail about the action as possible.
+```
+
+There isn't anything new to cover in this file; we just wanted you to get a holistic view of it. This script is now fully functional. You can launch it via:
+
+```
+gptscript cli-demo.gpt
+```
+
+### Adding your own CLI
+
+By now you should notice a simple pattern emerging that you can follow to add your own CLI-powered agents to a script. Here are the basics of what you need:
+
+```
+Name: {your cli}-agent
+Description: An agent to help you with {your task} related tasks using the {your cli} cli
+Context: {your biggest decision to make}, shared-context
+Tools: sys.exec
+Parameter: task: The {your task} task to accomplish
+Chat: true
+
+You have the {your cli} cli available to you. Use it to accomplish the tasks that the user asks of you.
+```
+
+You can drop in your task and CLI and have a fairly functional CLI-based chat agent. The biggest decision you'll need to make is what and how much context to give your agent. For well-known CLIs/technologies like kubectl and Kubernetes, you probably won't need a custom context. For custom CLIs, you'll definitely need to help the LLM out. The best approach is to experiment and see what works best.
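+
+As a concrete instance of the pattern, here's a hedged sketch of an agent for the docker CLI. The tool names and help sub-commands are hypothetical choices, not part of this demo; adjust the context tool to whatever help text your CLI provides:
+
+```
+---
+Name: docker-agent
+Description: An agent to help you with container-related tasks using the docker cli
+Context: learn-docker, shared-context
+Tools: sys.exec
+Parameter: task: The docker related task to accomplish
+Chat: true
+
+You have the docker cli available to you. Use it to accomplish the tasks that the user asks of you.
+
+---
+Name: learn-docker
+
+#!/usr/bin/env bash
+
+# Feed the LLM the help text it needs to construct valid docker commands.
+echo "The following is the help text for the docker cli and some of its sub-commands."
+docker --help
+docker run --help
+docker ps --help
+```
+
+You would also add `docker-agent` to the entrypoint's `Agents:` line so the assistant can hand off to it.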
+
+## Next steps
+
+Hopefully you've found this guide helpful. From here, you have several options:
+
+- You can check out some of our other guides available in this section of the docs
+- You can dive deeper into the options available when [writing scripts](/tools/gpt-file-reference)
diff --git a/docs/docs/02-examples/02-api.md b/docs/docs/02-examples/02-api.md
new file mode 100644
index 00000000..d84ae653
--- /dev/null
+++ b/docs/docs/02-examples/02-api.md
@@ -0,0 +1,238 @@
+# Chat with an API
+
+Interacting with cloud providers through dashboards, APIs, and CLIs is second nature to DevOps engineers. With AI chat, an engineer can express a goal and let the AI generate and execute the calls needed to achieve it, saving the time of looking up the necessary API calls themselves. GPTScript makes building a chat integration with an existing OpenAPI schema quick and easy.
+
+This guide will walk through the process of using the OpenAPI spec from Digital Ocean to build a chatbot capable of launching droplets and databases. The reader will be able to continue adding Digital Ocean capabilities or build their own chatbot with another OpenAPI schema.
+
+## Too Long; Didn't Read
+
+If you just want to try out the Digital Ocean chatbot first:
+
+Follow the [API credential](#api-access) setup instructions below.
+
+Then you can run the following commands to get started:
+
+```bash
+gptscript github.com/gptscript-ai/digital-ocean-agent
+```
+
+## Getting started
+
+First, we will need to download a copy of the OpenAPI spec. The spec can technically be accessed by URL, but it is easier to start by downloading a copy and saving it as `openapi.yaml`.
+
+### The Digital Ocean openapi.yaml spec
+
+Getting the openapi.yaml file from Digital Ocean can be done by running the following command in a terminal.
+
+```bash
+curl -o openapi.yaml -L https://api-engineering.nyc3.cdn.digitaloceanspaces.com/spec-ci/DigitalOcean-public.v2.yaml
+```
+
+This will download a copy of the openapi.yaml file to the local directory.
+
+Let's take a look at the spec file. GPTScript's OpenAPI integration creates a tool named after each operationId in the spec. You can see what these tools would be by running the following:
+
+```bash
+grep operationId openapi.yaml
+# …
+# operationId: domains_delete_record
+# operationId: droplets_list
+# operationId: droplets_create
+# operationId: droplets_destroy_byTag
+# operationId: droplets_get
+# operationId: droplets_destroy
+# operationId: droplets_list_backups
+# operationId: droplets_list_snapshots
+# operationId: dropletActions_list
+# operationId: dropletActions_post
+# operationId: dropletActions_post_byTag
+# operationId: dropletActions_get
+# operationId: droplets_list_kernels
+# operationId: droplets_list_firewalls
+# operationId: droplets_list_neighbors
+# …
+```
+
+If we look at the operationIds, you'll notice they are structured around an object like droplet, database, or project. Each object has a collection of verbs like list, get, delete, create, etc. Each tool in GPTScript has its own set of tools, so we can create agents - tools with chat enabled - that are experts in a specific set of objects and have access to all of the object_verb tools available to them. This allows us to fan out from a main entrypoint to multiple experts that can solve the user's tasks.
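+
+To see how the operationIds cluster into objects, you can count them by prefix. Here's a sketch using a small sample of IDs for illustration; to run it against the real spec, pipe in `grep operationId openapi.yaml | awk '{print $2}'` instead of the sample list:
+
+```bash
+# Count operationIds per object prefix (sample IDs shown for illustration).
+printf '%s\n' \
+  droplets_list droplets_create droplets_destroy \
+  databases_list databases_create \
+  projects_list \
+  | awk -F_ '{count[$1]++} END {for (o in count) print o, count[o]}' \
+  | sort
+# Prints:
+#   databases 2
+#   droplets 3
+#   projects 1
+```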
+
+Let's explore this design pattern.
+
+## Creating Main Entrypoint
+
+Let's start by creating the main entrypoint to our Digital Ocean chatbot. The main tool in a GPTScript chat program is usually in a file named agent.gpt. Let's first set up the agent by giving it a name, the ability to chat, basic instructions, and the main greeting prompt. Create an agent.gpt file with the following contents.
+
+agent.gpt
+
+```
+Name: Digital Ocean Bot
+Chat: true
+
+You are a helpful DevOps assistant that is an expert in Digital Ocean.
+Using only the tools available, respond to the user's task; do not answer without using a tool.
+Greet the User with: "Hello! How can I help you with Digital Ocean?"
+```
+
+Running this file will start a chat session that greets you and waits for your input.
+
+
+
+In its current form, the chatbot will not be able to do anything since it doesn't have access to any APIs. Let's address that now: open the agent.gpt file and update it with the following.
+
+agent.gpt
+
+```
+Name: Digital Ocean Bot
+Chat: true
+Agents: droplets.gpt
+
+You are a helpful DevOps assistant that is an expert in Digital Ocean
+Using only the tools available, do not answer without using a tool, respond to the user task.
+Greet the User with: "Hello! How can I help you with Digital Ocean?"
+```
+
+Now let's create a droplets.gpt file to bring in the droplet tools.
+
+droplets.gpt
+
+```
+Name: Droplet Agent
+Chat: true
+Tools: droplets* from ./openapi.yaml
+Description: Use this tool to work with droplets
+Args: request: the task requested by the user
+
+Help the user complete their Droplet operation requests using the tools available.
+When creating droplets, always ask if the user would like to access via password or via SSHkey.
+```
+
+Here we have defined the Droplet Agent and enabled chat. We have also brought in a subset of the openapi.yaml tools that relate to droplets; by using droplets* we make everything droplet-related available to this agent. The description tells the main agent, and any other agent with access to it, when to utilize this tool. Finally, the "request" argument lets the LLM pass along the user's request when it calls the agent, so the Droplet Agent doesn't have to ask again.
+
+## Chat with Digital Ocean
+
+### API Access
+
+Now that we have brought in our first tool using the OpenAPI spec, we need to set up authentication. The openapi.yaml file defines how the Digital Ocean API expects authenticated requests to work: if you look at components.securitySchemes in the spec, you will see that Digital Ocean expects bearer_auth. You will therefore need to create an API key in the Digital Ocean dashboard with the level of access you want the LLM to have. For instance, a read-only key will only allow querying information, while a full-access key lets the operator work with the LLM to do anything in the project. It is up to you. For this example, we will use a full-access token, but you can adjust for your needs. You can create your API key in the [Apps & API](https://cloud.digitalocean.com/account/api/tokens) section of your account.
+
+Once you have an API key, set an environment variable with its value:
+
+```bash
+export GPTSCRIPT_API_DIGITALOCEAN_COM_BEARER_AUTH=******
+```
+
+Replace ****** with the API key you created in the dashboard.
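+
+The variable name is derived from the API server's hostname. As a rough sketch of the apparent convention (dots become underscores, everything is uppercased, and the security scheme name is appended — this helper function is illustrative, not part of GPTScript):
+
+```python
+# Hypothetical illustration of how the env var name is built from the
+# API hostname and the security scheme defined in the OpenAPI spec.
+def bearer_auth_env_var(hostname: str, scheme: str = "BEARER_AUTH") -> str:
+    return f"GPTSCRIPT_{hostname.upper().replace('.', '_')}_{scheme}"
+
+print(bearer_auth_env_var("api.digitalocean.com"))
+# GPTSCRIPT_API_DIGITALOCEAN_COM_BEARER_AUTH
+```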
+
+### Chatting with Digital Ocean APIs
+
+Now you can run gptscript to start your conversation with Digital Ocean.
+
+```bash
+gptscript agent.gpt
+```
+
+You should now be able to ask "How many droplets are running?" and get a response from the chatbot.
+
+This is great, but not quite ready to use just yet. Let's keep adding functionality.
+
+## Adding Database Support
+
+Now that we can work with droplets, we can add support for databases just as easily. Let's create a databases.gpt file with the following contents.
+
+databases.gpt
+
+```
+Name: Database Agent
+Chat: true
+Tools: databases* from ./openapi.yaml
+Description: Call this tool to manage databases on digital ocean
+Args: request: the task requested by the user
+
+Help the user complete database operation requests with the tools available.
+```
+
+Here again, we are essentially scoping our agent to handle database calls with the Digital Ocean API. Now in order for this to be used, we need to add it to our agent list in the main agent.gpt file.
+
+agent.gpt
+
+```
+Name: Digital Ocean Bot
+Chat: true
+Agents: droplets.gpt, databases.gpt
+
+You are a helpful DevOps assistant that is an expert in Digital Ocean
+Using only the tools available, do not answer without using a tool, respond to the user task.
+Greet the User with: "Hello! How can I help you with Digital Ocean?"
+```
+
+Now when we test it out, we can ask "How many databases are running?" and it should give back the appropriate response.
+
+Now, when it comes to creating a database or droplet, we are missing some APIs to gather the correct information. We don’t have access to size information, regions, SSH Keys, etc. Since these are common tools, it would be a bit of a hassle to add lines to both the databases.gpt and droplets.gpt files. To avoid this, we can make use of the GPTScript Context to provide a common set of tools and instructions.
+
+## Context
+
+Context is a powerful concept in GPTScript: it provides information to the system prompt and offers a mechanism to compose a common set of tools, reducing duplication in your GPTScript application. Let's add a context.gpt file to our chatbot with the following contents.
+
+context.gpt
+
+```
+Share Tools: sys.time.now
+
+Share Tools: images* from ./openapi.yaml
+Share Tools: regions_list from ./openapi.yaml
+Share Tools: tags* from openapi.yaml
+Share Tools: sizes_list from ./openapi.yaml
+Share Tools: sshKeys_list from ./openapi.yaml
+
+
+#!sys.echo
+Always delegate to the best tool for the users request.
+Ask the user for information needed to complete a task.
+Provide the user with the exact action you will be taking and get the users confirmation when creating or updating resources.
+ALWAYS ask the user to confirm deletions, provide as much detail about the action as possible.
+```
+
+There is quite a bit going on here, so let's break it down. Anywhere you see Share Tools, that tool is made available to anything that uses the context. In this case, the context provides access to the time tool so you can ask what was created yesterday and the LLM has a frame of reference. It also provides a common set of Digital Ocean APIs that are needed for placement, organization (tags), sizes, images, etc. Since multiple components in Digital Ocean use these values, it is useful to define them only once. Lastly, we provide a set of common instructions for how we want the chatbot to behave overall, so we don't need to repeat them in each agent. Also, since this is in the system prompt, it carries a higher weight than the instructions in the individual agents.
+
+Now let's add this to our agents. You will need to add the line:
+
+```
+Context: context.gpt
+```
+
+to each of our agents, so droplets.gpt, agent.gpt, and databases.gpt will each have this line.
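+
+For example, with the context added, droplets.gpt would look like this:
+
+```
+Name: Droplet Agent
+Chat: true
+Context: context.gpt
+Tools: droplets* from ./openapi.yaml
+Description: Use this tool to work with droplets
+Args: request: the task requested by the user
+
+Help the user complete their Droplet operation requests using the tools available.
+When creating droplets, always ask if the user would like to access via password or via SSHkey.
+```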
+
+## Wrapping up
+
+Provided your token grants API access, you should now be able to run the chatbot, create a database or a droplet, and be walked through the process. You should also be able to ask questions like "What VMs were created this week?"
+
+You now know how to add additional capabilities to the chatbot through agents. You can follow the same patterns outlined above to add more capabilities, or check out the chatbot repository to see additional functionality.
+
+### Use your own OpenAPI schema
+
+If you have your own OpenAPI schema, you can follow the same pattern to build a chatbot for your own APIs. The simplest way to get started is to create a gptscript file with the following contents.
+
+```
+Name: {Your API Name} Bot
+Chat: true
+Tools: openapi.yaml
+
+You are a helpful assistant. Say "Hello, how can I help you with {Your API Name} system today?"
+```
+
+You can then run that and the LLM will be able to interact with your API.
+
+#### Note on OpenAI tool limits
+
+As mentioned before, GPTScript creates a tool for each operationId in the OpenAPI spec. If you have a large OpenAPI spec, you may run into a limit on the number of tools that can be created: OpenAI, the provider of the GPT-4o model, only allows a total of 200 tools to be passed in at one time. If you exceed this limit, you will see an error message from OpenAI. If you run into this issue, you can follow the same pattern we used above for our Digital Ocean bot.
+
+For a quick check of how many tools would be created in total, run the following:
+
+```bash
+grep operationId openapi.yaml|wc -l
+ 306
+```
+
+In the case of our Digital Ocean spec, 306 tools would be created. This would not fit into a single agent, so breaking it up into multiple agents is the best way to handle it.
+
+## Next Steps
+
+Now that you have seen how to create a chatbot from an OpenAPI schema, check out our other guides to see how to build other chatbots and agents.
diff --git a/docs/docs/02-examples/04-local-files.md b/docs/docs/02-examples/04-local-files.md
new file mode 100644
index 00000000..522deb01
--- /dev/null
+++ b/docs/docs/02-examples/04-local-files.md
@@ -0,0 +1,244 @@
+# Chat with Local Files
+
+With GPTScript, interacting with local files is simple and powerful. This can help you streamline repetitive or data-intensive tasks. In this guide, we'll build a script that can query Excel files, CSVs, and PDFs. We'll then use the script to read, transform, and utilize the data in these files.
+
+## Too Long; Didn't Read
+
+:::warning
+The command below gives GPTScript access to the files in your ~/Documents directory. Change the directory if you want to restrict it.
+:::
+
+Want to start using this script now? Just run:
+```
+gptscript --workspace=~/Documents github.com/gptscript-ai/local-files-demo
+```
+
+## Getting Started
+The rest of this guide will walk you through building and using a data processing assistant. We'll be explaining the how, what, and why along the way.
+
+First, let's get some sample data to work with. You can clone our repo with our sample data:
+```
+git clone https://github.com/gptscript-ai/local-files-demo.git
+cd local-files-demo
+```
+
+Next, open up a new gptscript file in your favorite editor. We'll call the file data-assistant.gpt.
+```
+vim data-assistant.gpt
+```
+All edits below are assumed to be in this file.
+
+### Create the Assistant
+Put this in the gpt file:
+```
+Name: Your Data Processing Assistant
+Description: An assistant to help you with processing data found in files on your workstation. Helpful for querying spreadsheets, CSVs, JSON files, and PDFs.
+Tools: github.com/gptscript-ai/structured-data-querier, github.com/gptscript-ai/pdf-reader
+Context: github.com/gptscript-ai/context/workspace
+Chat: true
+
+You are a helpful data processing assistant. Your goal is to help the user with data processing. Help the user accomplish their tasks using the tools you have. When the user starts this chat, just say hi, introduce yourself, and ask what you can help with.
+```
+This is actually the entirety of the script. We're packing a lot of power into just a handful of lines here. Let's talk through them.
+
+**Name and Description** help the LLM understand the purpose of this tool. You should always have meaningful names and descriptions.
+
+The **Tools: ...** stanza pulls two useful tools into this assistant.
+
+The [structured-data-querier](https://github.com/gptscript-ai/structured-data-querier) makes it possible to query csv, xlsx, and json files as though they were SQL databases (using an application called [DuckDB](https://duckdb.org/)). This is extremely powerful when combined with LLMs because it lets you ask natural language questions that the LLM can then translate to SQL.
+
+The [pdf-reader](https://github.com/gptscript-ai/pdf-reader) isn't quite as exciting, but still useful. It parses and reads PDFs and returns the contents to the LLM. This will put the entire contents in your chat context, so it's not appropriate for extremely large PDFs, but it's handy for smaller ones.
+
+**Context: github.com/gptscript-ai/context/workspace** introduces a context tool that makes this assistant "workspace" aware. Its description reads:
+> Adds the workspace and tools needed to access the workspace to the current context
+
+That translates to telling the LLM what the workspace directory is and instructing it to use that directory for reading and writing files. As we saw above, you can specify a workspace like this:
+```
+gptscript --workspace=/Your/path/here ...
+```
+If you don't specify one, a temporary directory will be created and used for the workspace.
+
+This context also shares the `sys.read`, `sys.write`, and `sys.ls` built-in tools with the assistant so that it automatically has access to them.
+
+Next we have **Chat: true**, which you've seen if you looked at any of our other guides. This makes the current tool "chat-able". We refer to chat-able tools as agents or assistants.
+
+Finally, we have the prompt:
+> You are a helpful data processing assistant. Your goal is to help the user with data processing tasks. Help the user accomplish their tasks using the tools you have. When the user starts this chat, just say hi, introduce yourself, and ask what you can help with.
+
+
+## Using the Assistant
+Once again, that's all there is to this assistant. You can start using it by specifying your own workspace or using our sample-data directory as the workspace. Assuming you're using our sample data and have followed these instructions, here's how you launch it:
+```
+gptscript --workspace=./sample-data data-assistant.gpt
+```
+
+Here are a few sample interactions with these files.
+
+### Cleaning up data
+```
+> whats in the key contacts file?
+
+ The key_contacts.txt file contains the following information:
+
+ Contains location, name & email
+ Scranton - Michael michael@gptscript-fakeco.com
+ Stamford Martin marty@gptscript-fakeco.com
+ Nashua - Rachael, rach@gptscript-fakeco.com
+ Utica,Ross - ross@gptscript-fakeco.com
+ Albany Allen allen@gptscript-fakeco.com
+ Akron;Pam;pam@gptscript-fakeco.com
+ Camden. Kali. kali@gptscript-fakeco.com
+ Yonkers - Meredith - meredith@gptscript-fakeco.com
+ Rochester Kevin
+ Syracuse William will@gptscript-fakeco.com
+
+ Is there anything specific you would like to do with this information?
+
+> That looks bad. Can you make a cleaned up csv of it?
+
+ Sure, I'll clean up the data and create a CSV file for you. Here's the cleaned-up data in CSV format:
+
+ Location,Name,Email
+ Scranton,Michael,michael@gptscript-fakeco.com
+ Stamford,Martin,marty@gptscript-fakeco.com
+ Nashua,Rachael,rach@gptscript-fakeco.com
+ Utica,Ross,ross@gptscript-fakeco.com
+ Albany,Allen,allen@gptscript-fakeco.com
+ Akron,Pam,pam@gptscript-fakeco.com
+ Camden,Kali,kali@gptscript-fakeco.com
+ Yonkers,Meredith,meredith@gptscript-fakeco.com
+ Rochester,Kevin,
+ Syracuse,William,will@gptscript-fakeco.com
+
+ I will now create a CSV file with this cleaned-up data.
+```
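+
+The kind of cleanup the assistant performed can also be sketched deterministically. This is just an illustration of the transformation, under the assumption of a simple name/email pattern; the assistant itself uses the LLM and DuckDB rather than this code:
+
+```python
+import csv
+import io
+import re
+
+# A few of the messy lines from key_contacts.txt
+raw_lines = [
+    "Scranton - Michael michael@gptscript-fakeco.com",
+    "Utica,Ross - ross@gptscript-fakeco.com",
+    "Akron;Pam;pam@gptscript-fakeco.com",
+    "Rochester Kevin",
+]
+
+rows = []
+for line in raw_lines:
+    # Pull out the email if present, then split the rest on any delimiter
+    email_match = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", line)
+    email = email_match.group(0) if email_match else ""
+    rest = line[: email_match.start()] if email_match else line
+    parts = [p for p in re.split(r"[-,;.\s]+", rest) if p]
+    location, name = parts[0], (parts[1] if len(parts) > 1 else "")
+    rows.append((location, name, email))
+
+buf = io.StringIO()
+writer = csv.writer(buf)
+writer.writerow(["Location", "Name", "Email"])
+writer.writerows(rows)
+print(buf.getvalue())
+```
+
+The point of the assistant is that you get this result conversationally, without writing any parsing code yourself.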
+
+### Identifying and fixing data gaps
+```
+> is there any missing data in that csv?
+...
+ Yes, there is missing data in the CSV file. Specifically, the email for Kevin in Rochester is missing.
+
+ Would you like to update this information or perform any other actions?
+> yes set his email to kev@ and use the same domain as the others
+
+ I will update Kevin's email to kev@gptscript-fakeco.com in the CSV file.
+```
+
+### Cross-referencing
+```
+> what were sales like for Kevin's location?
+
+ To find out sales for Kevin's location, I will need to check the yoy_sales.xlsx file for any relevant data. Let me take a
+ look at the contents of that file.
+...
+ The sales data for Rochester, where Kevin is located, is as follows:
+
+ LOCATION │ PRODUCT SKU │ MARCH 2023 SALES │ MARCH 2024 SALES │ YOY SALES GROWTH (%)
+ ────────────┼─────────────┼──────────────────┼──────────────────┼───────────────────────
+ Rochester │ SKU_001 │ $803.91 │ $843.10 │ 12.12%
+ Rochester │ SKU_002 │ $3,013.36 │ $3,159.02 │ 13.13%
+ Rochester │ SKU_003 │ $1,879.24 │ $1,968.20 │ 12.15%
+ Rochester │ SKU_004 │ $4,989.81 │ $5,239.30 │ 13.88%
+ Rochester │ SKU_005 │ $4,334.03 │ $4,550.73 │ 12.70%
+ Rochester │ SKU_006 │ $2,143.76 │ $2,251.95 │ 13.07%
+ Rochester │ SKU_007 │ $7,976.54 │ $8,375.37 │ 12.23%
+ Rochester │ SKU_008 │ $313.21 │ $328.87 │ 13.80%
+ Rochester │ SKU_009 │ $671.70 │ $705.29 │ 12.91%
+ Rochester │ SKU_010 │ $4,084.94 │ $4,289.19 │ 13.03%
+
+ Is there anything else you would like to know or do with this data?
+```
+### Pulling all the info together
+```
+> Let's help Kevin raise sales. What promotions do we have going on?
+...
+ Here are the current promotions for office paper products:
+
+ 1. Dunder Supreme Multi-Purpose Paper
+ • Product SKU: SKU_001
+ • Promotion Period: May 1 - May 31
+ • Details: Buy two reams of Dunder Supreme and get one free. This paper is ideal for both professional and
+ personal use, made from sustainably sourced forests.
+ 2. Scranton Elite Glossy Photo Paper
+ • Product SKU: SKU_003
+ • Promotion Period: May 1 - May 31
+ • Details: Receive a 25% discount on every pack. This paper is designed for vibrant color reproduction and
+ unrivaled photo quality, perfect for professional portfolios or family memories.
+ 3. MiSlin Premium Cardstock
+ • Product SKU: SKU_005
+ • Promotion Period: May 1 - May 31
+ • Details: Buy one, get one 50% off on all cardstock purchases. This cardstock is ideal for presentations,
+ report covers, and crafts, praised for its durability and smooth finish.
+
+
+ These promotions can help Kevin boost sales in Rochester. Is there anything specific you would like to do with
+ this information?
+
+> craft an email summarizing performance at this location and these promotions
+
+ Here's a draft email summarizing the performance at Kevin's location and the current promotions:
+
+ --------
+
+ Subject: Sales Performance and Upcoming Promotions for Rochester
+
+ Dear Kevin,
+
+ I hope this email finds you well.
+
+ I wanted to take a moment to review the recent sales performance for Rochester and share some exciting promotions
+ that can help boost our sales further.
+
+ Sales Performance: The sales data for Rochester has shown promising growth across various product SKUs. Here are
+ the highlights:
+
+ • SKU_001: March 2023 Sales: $803.91 | March 2024 Sales: $843.10 | YoY Growth: 12.12%
+ • SKU_002: March 2023 Sales: $3,013.36 | March 2024 Sales: $3,159.02 | YoY Growth: 13.13%
+ • SKU_003: March 2023 Sales: $1,879.24 | March 2024 Sales: $1,968.20 | YoY Growth: 12.15%
+ • SKU_004: March 2023 Sales: $4,989.81 | March 2024 Sales: $5,239.30 | YoY Growth: 13.88%
+ • SKU_005: March 2023 Sales: $4,334.03 | March 2024 Sales: $4,550.73 | YoY Growth: 12.70%
+ • SKU_006: March 2023 Sales: $2,143.76 | March 2024 Sales: $2,251.95 | YoY Growth: 13.07%
+ • SKU_007: March 2023 Sales: $7,976.54 | March 2024 Sales: $8,375.37 | YoY Growth: 12.23%
+ • SKU_008: March 2023 Sales: $313.21 | March 2024 Sales: $328.87 | YoY Growth: 13.80%
+ • SKU_009: March 2023 Sales: $671.70 | March 2024 Sales: $705.29 | YoY Growth: 12.91%
+ • SKU_010: March 2023 Sales: $4,084.94 | March 2024 Sales: $4,289.19 | YoY Growth: 13.03%
+
+ Upcoming Promotions: To further enhance our sales, we have some exciting promotions lined up for May:
+
+ 1. Dunder Supreme Multi-Purpose Paper (SKU_001)
+ • Promotion Period: May 1 - May 31
+ • Details: Buy two reams and get one free. Ideal for both professional and personal use, made from sustainably
+ sourced forests.
+ 2. Scranton Elite Glossy Photo Paper (SKU_003)
+ • Promotion Period: May 1 - May 31
+ • Details: Receive a 25% discount on every pack. Perfect for vibrant color reproduction and unrivaled photo
+ quality.
+ 3. MiSlin Premium Cardstock (SKU_005)
+ • Promotion Period: May 1 - May 31
+ • Details: Buy one, get one 50% off on all cardstock purchases. Ideal for presentations, report covers, and
+ crafts.
+
+
+ These promotions are designed to attract more customers and increase our sales. Please ensure that these
+ promotions are prominently displayed and communicated to our customers.
+
+ If you have any questions or need further assistance, feel free to reach out.
+
+ Best regards,
+
+ [Your Name] [Your Position] [Your Contact Information]
+
+ --------
+
+ Feel free to customize this email further to better suit your needs. Let me know if there's anything else I can
+ assist you with!
+```
+Try it out yourself and see what you can come up with.
+
+## Next steps
+
+Hopefully you've found this guide helpful. From here, you have several options:
+
+- You can check out some of our other guides available in this section of the docs
+- You can dive deeper into the options available when [writing scripts](/tools/gpt-file-reference)
diff --git a/docs/docs/02-examples/05-workflow.md b/docs/docs/02-examples/05-workflow.md
new file mode 100644
index 00000000..0aca6fc3
--- /dev/null
+++ b/docs/docs/02-examples/05-workflow.md
@@ -0,0 +1,189 @@
+# Run an Automated Workflow
+
+Automating a sequence of tasks that integrate with one or more systems is a ubiquitous engineering problem that typically requires some degree of domain-specific knowledge up front. Workflows written with GPTScript all but eliminate this prerequisite, enabling developers to build workflows by describing the high-level steps they should perform.
+
+This guide will show you how to build a GPTScript that encapsulates a workflow consisting of the following steps:
+1. Get a selection of Twitter posts
+2. Summarize their content
+3. Summarize the content of any links they directly reference
+4. Write the results to a Markdown document
+
+We'll be explaining the how, what, and why along the way.
+
+## Too long; didn't read
+
+Want to start using this script now? Just run:
+
+```
+gptscript github.com/gptscript-ai/workflow-demo
+```
+
+Or if you want to skip ahead and just grab the full script so that you can start hacking on it, jump to the [Putting it all together section](workflow#putting-it-all-together).
+
+## Getting Started
+
+First, open up a new GPTScript file in your favorite editor. We'll call the file `workflow-demo.gpt`
+
+```
+vim workflow-demo.gpt
+```
+
+All edits below are assumed to be in this file. At the end, we'll share the entire script as one cohesive file, but along the way we'll just be adding tools one-by-one.
+
+## Workflow Entrypoint
+
+To get started, let's define the first tool in our file. This will act as the entrypoint for our workflow.
+
+Add the following tool definition to the file:
+
+```
+Tools: sys.write, summarize-tweet
+Description: Summarize tweets
+
+Always summarize tweets synchronously in the order they appear.
+Never summarize tweets in parallel.
+Write all the summaries for the tweets at the following URLs to `tweets.md`:
+- https://x.com/acornlabs/status/1798063732394000559
+- https://x.com/acornlabs/status/1797998244447900084
+```
+
+Let's walk through the important bits.
+
+This tool:
+- imports two other tools
+ - `sys.write` is a built-in tool which enables the entrypoint tool to write files to your system.
+ - `summarize-tweet` is a custom tool that encapsulates how each tweet gets summarized. We'll define this tool in the next step.
+- requires tweets to be summarized one at a time so that they appear in the correct order
+- defines the tweet URLs to summarize and the file to write them to
+
+At a high-level, it's getting the summaries for two tweets and storing them in the `tweets.md` file.
+
+## Tweet Summarization Tool
+
+The next step is to define the `summarize-tweet` tool used by the entrypoint to summarize each tweet.
+
+Add the following tool definition below the entrypoint tool:
+
+```
+---
+Name: summarize-tweet
+Description: Returns a brief summary of a tweet in markdown
+Tools: get-hyperlinks, summarize-hyperlink, github.com/gptscript-ai/browser
+Parameters: url: URL of the tweet to summarize
+
+Return a markdown formatted summary of the tweet at ${url} using the following format:
+
+## [](${url})
+
+Short summary of the tweet.
+
+### References
+
+
+```
+
+This tool:
+- takes the `url` of a tweet as an argument
+- imports three other tools to solve summarization sub-problems
+ - `github.com/gptscript-ai/browser` is an external tool that is used to open the tweet URL in the browser and extract the page content
+  - `get-hyperlinks` and `summarize-hyperlink` are custom helper tools we'll define momentarily that extract hyperlinks from tweet text and summarize them
+- describes the markdown document this tool should produce, leaving it up to the LLM to decide which of the available tools to call to make this happen
+
+## Hyperlink Summarization Tools
+
+To complete our workflow, we just need to define the `get-hyperlinks` and `summarize-hyperlink` tools.
+
+Add the following tool definitions below the `summarize-tweet` definition:
+
+```
+---
+Name: get-hyperlinks
+Description: Returns the list of hyperlinks in a given text
+Parameters: text: Text to extract hyperlinks from
+
+Return the list of hyperlinks found in ${text}. If ${text} contains no hyperlinks, return an empty list.
+
+---
+Name: summarize-hyperlink
+Description: Returns a summary of the page at a given hyperlink URL
+Tools: github.com/gptscript-ai/browser
+Parameters: url: HTTPS URL of a page to summarize
+
+Briefly summarize the page at ${url}.
+```
+
+As we can see above, `get-hyperlinks` takes the raw text of the tweet and returns any hyperlinks it finds, while `summarize-hyperlink`
+uses the `github.com/gptscript-ai/browser` tool to get content at a URL and returns a summary of it.
+
+## Putting it all together
+
+Let's take a look at this script as one cohesive file:
+
+```
+Description: Summarize tweets
+Tools: sys.write, summarize-tweet
+
+Always summarize tweets synchronously in the order they appear.
+Never summarize tweets in parallel.
+Write all the summaries for the tweets at the following URLs to `tweets.md`:
+- https://x.com/acornlabs/status/1798063732394000559
+- https://x.com/acornlabs/status/1797998244447900084
+
+---
+Name: summarize-tweet
+Description: Returns a brief summary of a tweet in markdown
+Tools: get-hyperlinks, summarize-hyperlink, github.com/gptscript-ai/browser
+Parameters: url: Absolute URL of the tweet to summarize
+
+Return a markdown formatted summary of the tweet at ${url} using the following format:
+
+## [](${url})
+
+Short summary of the tweet.
+
+### References
+
+
+
+---
+Name: get-hyperlinks
+Description: Returns the list of hyperlinks in a given text
+Parameters: text: Text to extract hyperlinks from
+
+Return the list of hyperlinks found in ${text}. If ${text} contains no hyperlinks, return an empty list.
+
+---
+Name: summarize-hyperlink
+Description: Returns a summary of the page at a given hyperlink URL
+Tools: github.com/gptscript-ai/browser
+Parameters: url: HTTPS URL of a page to summarize
+
+Briefly summarize the page at ${url}.
+```
+
+You can launch it via:
+
+```
+gptscript workflow-demo.gpt
+```
+
+Which should produce a `tweets.md` file in the current directory:
+
+```markdown
+## Jun 4, 2024 [Acorn Labs Tweet](https://x.com/acornlabs/status/1798063732394000559)
+
+Our latest release of #GPTScript v0.7 is available now. Key updates include OpenAPI v2 support, workspace tools replacement, and a chat history context tool.
+
+### References
+
+- [Acorn | Introducing GPTScript v0.7](https://buff.ly/3V1KiWw): The page is a blog post introducing GPTScript v0.7, highlighting its new features and fixes. Key updates include support for OpenAPI v2, removal and replacement of workspace tools, introduction of a chat history context tool, and several housekeeping changes like defaulting to the gpt-4o model and logging token usage. The post also mentions upcoming features in v0.8 and provides links to related articles and resources.
+
+## Jun 4, 2024 [Acorn Labs Tweet](https://x.com/acornlabs/status/1797998244447900084)
+
+Use #GPTScript with #Python to build the front end of this AI-powered YouTube generator. Part of this series walks you through it all.
+
+### References
+
+- [Tutorial on building an AI-powered YouTube title and thumbnail generator (Part 1)](https://buff.ly/4aGJYCD): Covers setting up the front end with Flask, modifying the script, and creating templates for the front page and result page.
+- [Tutorial on building an AI-powered YouTube title and thumbnail generator (Part 2)](https://buff.ly/49MSK1k): Covers the installation and setup of GPTScript, creating a basic script, handling command-line arguments, generating a YouTube title, and generating thumbnails using DALL-E 3.
+```
diff --git a/docs/docs/02-examples/_category_.json b/docs/docs/02-examples/_category_.json
new file mode 100644
index 00000000..f5419091
--- /dev/null
+++ b/docs/docs/02-examples/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Examples and Guides"
+}
diff --git a/docs/docs/02-getting-started.md b/docs/docs/02-getting-started.md
deleted file mode 100644
index 51a02769..00000000
--- a/docs/docs/02-getting-started.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Getting Started
-
-### 1. Install the latest release
-
-#### Homebrew (macOS and Linux)
-
-```shell
-brew install gptscript-ai/tap/gptscript
-```
-
-#### Install Script (macOS and Linux):
-
-```shell
-curl https://get.gptscript.ai/install.sh | sh
-```
-
-#### WinGet (Windows)
-
-```shell
-winget install gptscript-ai.gptscript
-```
-
-#### Manually
-
-Download and install the archive for your platform and architecture from the [releases page](https://github.com/gptscript-ai/gptscript/releases).
-
-### 2. Get an API key from [OpenAI](https://platform.openai.com/api-keys).
-
-#### macOS and Linux
-
-```shell
-export OPENAI_API_KEY="your-api-key"
-```
-
-#### Windows
-
-```powershell
-$env:OPENAI_API_KEY = 'your-api-key'
-```
-
-### 3. Run Hello World
-
-```shell
-gptscript https://get.gptscript.ai/echo.gpt --input 'Hello, World!'
-```
-
-```
-OUTPUT:
-
-Hello, World!
-```
-
-The model used by default is `gpt-4o` and you must have access to that model in your OpenAI account.
-
-### 4. Extra Credit: Examples and Run Debugging UI
-
-Clone examples and run debugging UI
-
-```shell
-git clone https://github.com/gptscript-ai/gptscript
-cd gptscript/examples
-
-# Run the debugging UI
-gptscript --server
-```
diff --git a/docs/docs/05-how-it-works.md b/docs/docs/03-tools/06-how-it-works.md
similarity index 100%
rename from docs/docs/05-how-it-works.md
rename to docs/docs/03-tools/06-how-it-works.md
diff --git a/docs/docs/07-gpt-file-reference.md b/docs/docs/03-tools/07-gpt-file-reference.md
similarity index 100%
rename from docs/docs/07-gpt-file-reference.md
rename to docs/docs/03-tools/07-gpt-file-reference.md
diff --git a/docs/docs/03-tools/_category_.json b/docs/docs/03-tools/_category_.json
index 51a6b95d..0bf6fac6 100644
--- a/docs/docs/03-tools/_category_.json
+++ b/docs/docs/03-tools/_category_.json
@@ -1,3 +1,3 @@
{
- "label": "Tools"
+ "label": "Writing GPTScripts"
}
diff --git a/docs/docs/04-alternative-model-providers.md b/docs/docs/04-alternative-model-providers.md
index 7c105fcc..a60d1974 100644
--- a/docs/docs/04-alternative-model-providers.md
+++ b/docs/docs/04-alternative-model-providers.md
@@ -1,4 +1,4 @@
-# Alternative Model Providers
+# Support Models and Platforms
## Usage
diff --git a/docs/docs/06-use-cases.md b/docs/docs/06-use-cases.md
deleted file mode 100644
index a88598f2..00000000
--- a/docs/docs/06-use-cases.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# Use Cases
-
-## Retrieval
-
-Retrieval-Augmented Generation (RAG) leverages a knowledge base outside of the training data before consulting the LLM to generate a response.
-The following GPTScript implements RAG:
-
-```yaml
-name: rag
-description: implements retrieval-augmented generation
-args: prompt: a string
-tools: query
-
-First query the ${prompt} in the knowledge base, then construsts an answer to ${prompt} based on the result of the query.
-
----
-
-name: query
-description: queries the prompt in the knowledge base
-args: prompt: a string
-
-... (implementation of knowledge base query follows) ...
-```
-
-The idea behind RAG is simple. Its core logic can be implemented in one GPTScript statement: `First query the ${prompt} in the knowledge base, then construsts an answer to ${prompt} based on the result of the query.` The real work of building RAG lies in the tool that queries your knowledge base.
-
-You construct the appropriate query tool based on the type of knowledge base you have.
-
-| Knowledge Base | Query Tool |
-|------|------|
-| A vector database storing common document types such as text, HTML, PDF, and Word | Integrate with LlamaIndex for vector database support [Link to example]|
-| Use the public or private internet to supply up-to-date knowledge | Implement query tools using a search engine, as shown in [`search.gpt`](https://github.com/gptscript-ai/gptscript/tree/main/examples/search.gpt)|
-| A structured database supporting SQL such as sqlite, MySQL, PostgreSQL, and Oracle DB | Implement query tools using database command line tools such as `sqlite` and `mysql` [Link to example]|
-| An ElasticSearch/OpenSearch database storing logs or other text files | Implement query tools using database command line tools [Link to example]|
-| Other databases such as graph or time series databases | Implement query tools using database command line tools [Link to example]|
-
-## Task Automation
-
-### Planning
-
-Here is a GPTScript that produces a detailed travel itinerary based on inputs from a user: [`travel-agent.gpt`](https://github.com/gptscript-ai/gptscript/tree/main/examples/travel-agent.gpt)
-
-### Web UI Automation
-
-Here is a GPTScript that automates a web browser to browse and navigate websites and extract information: [coachella-browse.gpt](https://github.com/gptscript-ai/browser/tree/main/examples/coachella-browse.gpt)
-
-For additional examples, please explore [here](https://github.com/gptscript-ai/browser?tab=readme-ov-file#examples).
-
-### API Automation
-
-Here are a few GPTScripts that interact with issues on GitHub:
-
-- [Create GitHub Issues](https://github.com/gptscript-ai/create-github-issues/tree/main/examples/example.gpt)
-- [Modify GitHub Issues](https://github.com/gptscript-ai/modify-github-issues/tree/main/examples/example.gpt)
-- [Query GitHub Issues](https://github.com/gptscript-ai/gptscriptquery-github-issues/tree/main/examples/example.gpt)
-
-## Agents and Assistants
-
-Agents and assistants are synonyms: software programs that leverage an LLM to carry out tasks.
-
-In GPTScript, agents and assistants are implemented using tools. Tools can use other tools. Tools can be implemented using natural language prompts or using traditional programming languages such as Python or JavaScript. You can therefore build arbitrarily complex agents and assistants in GPTScript. Here is an example of an assistant that leverages an HTTP client, MongoDB, and Python code generation to display Hacker News headlines: [`hacker-news-headlines.gpt`](https://github.com/gptscript-ai/gptscript/tree/main/examples/hacker-news-headlines.gpt)
-
-## Data Analysis
-
-Depending on the context window supported by the LLM, you can either send a large amount of data to the LLM to analyze in one shot or supply data in batches.
-
-### Summarization
-
-Here is a GPTScript that sends a large document in batches to the LLM and produces a summary of the entire document: [hamlet-summarizer](https://github.com/gptscript-ai/gptscript/tree/main/examples/hamlet-summarizer)
-
-### Tagging
-
-Here is a GPTScript that performs sentiment analysis on the input text: [Social Media Post Sentiment Analyzer](https://github.com/gptscript-ai/gptscript/tree/main/examples/sentiments)
-
-### CSV Files
-
-Here is a GPTScript that reads the content of a CSV file and answers natural language queries about it: [csv-reader.gpt](https://github.com/gptscript-ai/csv-reader/tree/main/examples/csv-reader.gpt)
-
-### Understanding Code
-
-Here is a GPTScript that summarizes the code stored under the current directory: [`describe-code.gpt`](https://github.com/gptscript-ai/gptscript/tree/main/examples/describe-code.gpt)
-
-## Vision, Image, and Audio
-
-### Vision
-
-Here is an example of a web app that leverages GPTScript to recognize ingredients in a photo and suggest a recipe based on them: [recipe-generator](https://github.com/gptscript-ai/gptscript/tree/main/examples/recipegenerator).
-
-### Image Generation
-
-Here is a GPTScript that takes a story prompt and generates an illustrated children's book: [story-book.gpt](https://github.com/gptscript-ai/gptscript/tree/main/examples/story-book)
diff --git a/docs/docs/08-sdks.md b/docs/docs/08-sdks.md
deleted file mode 100644
index da0a6a27..00000000
--- a/docs/docs/08-sdks.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# SDKs
-
-Currently, there are three SDKs being maintained: [Python](https://github.com/gptscript-ai/py-gptscript), [Node](https://github.com/gptscript-ai/node-gptscript), and [Go](https://github.com/gptscript-ai/go-gptscript). They are currently under development and are being iterated on relatively rapidly. The READMEs in each repository contain the most up-to-date documentation for the functionality of each.
\ No newline at end of file
diff --git a/docs/docs/09-faqs.md b/docs/docs/09-faqs.md
new file mode 100644
index 00000000..a06e5818
--- /dev/null
+++ b/docs/docs/09-faqs.md
@@ -0,0 +1,11 @@
+# FAQs
+
+#### I don't have Homebrew, how can I install GPTScript?
+On macOS and Linux, you can alternatively install via: `curl https://get.gptscript.ai/install.sh | sh`
+
+On all supported systems, you can download and install the archive for your platform and architecture from the [releases page](https://github.com/gptscript-ai/gptscript/releases).
+
+
+#### Does GPTScript have an SDK or API I can program against?
+
+Yes. Three SDKs are currently maintained: [Python](https://github.com/gptscript-ai/py-gptscript), [Node](https://github.com/gptscript-ai/node-gptscript), and [Go](https://github.com/gptscript-ai/go-gptscript). All three are under active development and iterated on rapidly; the README in each repository contains the most up-to-date documentation of its functionality.
diff --git a/docs/static/img/chat-api.png b/docs/static/img/chat-api.png
new file mode 100644
index 00000000..cb839a6e
Binary files /dev/null and b/docs/static/img/chat-api.png differ
diff --git a/docs/static/img/demo.gif b/docs/static/img/demo.gif
new file mode 100644
index 00000000..de19ee07
Binary files /dev/null and b/docs/static/img/demo.gif differ