
debug message caused program to panic on any gptscript #109

Closed
@michaellzc

Description


I tried both the released version and a build from main. Regardless of which example script I ran, it always panics at the same place:

slog.Debug("stream", "content", response.Choices[0].Delta.Content)

How to reproduce

$ gptscript -v
gptscript version v0.1.4-701b9c8e-dirty

$ gptscript examples/echo.gpt --cache false --input 'Hello World'

16:59:33 started  [main] [input=--cache false --input Hello World]
16:59:33 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | --cache false --input Hello World
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
github.com/gptscript-ai/gptscript/pkg/openai.(*Client).call(_, {_, _}, {{0x105800100, 0x13}, {0x140003901e0, 0x3, 0x4}, 0x0, 0x322bcc77, ...}, ...)
        github.com/gptscript-ai/gptscript/pkg/openai/client.go:425 +0x7cc
github.com/gptscript-ai/gptscript/pkg/openai.(*Client).Call(0x140000ca040, {0x1058a27a0, 0x1400003a460}, {{0x105800100, 0x13}, 0x0, {0x0, 0x0, 0x0}, {0x140000cc600, ...}, ...}, ...)
        github.com/gptscript-ai/gptscript/pkg/openai/client.go:295 +0x4ec
github.com/gptscript-ai/gptscript/pkg/llm.(*Registry).Call(0x104ffc024?, {0x1058a27a0, 0x1400003a460}, {{0x105800100, 0x13}, 0x0, {0x0, 0x0, 0x0}, {0x140000cc600, ...}, ...}, ...)
        github.com/gptscript-ai/gptscript/pkg/llm/registry.go:53 +0x94
github.com/gptscript-ai/gptscript/pkg/engine.(*Engine).complete(0x140000b8f60, {0x1058a27a0, 0x1400003a460}, 0x140000e6630)
        github.com/gptscript-ai/gptscript/pkg/engine/engine.go:218 +0x1f8
github.com/gptscript-ai/gptscript/pkg/engine.(*Engine).Start(_, {{0x105b64088, 0x1}, {0x1058a27a0, 0x1400003a460}, 0x0, 0x140000b8e10, {{{0x0, 0x0}, {0x140000be0cd, ...}, ...}, ...}}, ...)
        github.com/gptscript-ai/gptscript/pkg/engine/engine.go:189 +0x98c
github.com/gptscript-ai/gptscript/pkg/runner.(*Runner).call(_, {{0x105b64088, 0x1}, {0x1058a27a0, 0x1400003a460}, 0x0, 0x140000b8e10, {{{0x0, 0x0}, {0x140000be0cd, ...}, ...}, ...}}, ...)
        github.com/gptscript-ai/gptscript/pkg/runner/runner.go:104 +0x234
github.com/gptscript-ai/gptscript/pkg/runner.(*Runner).Run(0x140000ae2c0, {0x1058a27a0, 0x1400003a460}, {{0x16ae1b4aa, 0x11}, {0x140000ac138, 0x13}, 0x140000b8270, 0x0}, {0x14000398008, ...}, ...)
        github.com/gptscript-ai/gptscript/pkg/runner/runner.go:60 +0x1f0
github.com/gptscript-ai/gptscript/pkg/cli.(*GPTScript).Run(0x14000186100, 0x1400018af08, {0x140000d6000, 0x5, 0x5})
        github.com/gptscript-ai/gptscript/pkg/cli/gptscript.go:238 +0x848
github.com/acorn-io/cmd.Command.bind.func4(0x1400018af08, {0x140000d6000, 0x5, 0x5})
        github.com/acorn-io/[email protected]/builder.go:477 +0x188
github.com/spf13/cobra.(*Command).execute(0x1400018af08, {0x1400001e1f0, 0x5, 0x5})
        github.com/spf13/[email protected]/command.go:983 +0x840
github.com/spf13/cobra.(*Command).ExecuteC(0x1400018af08)
        github.com/spf13/[email protected]/command.go:1115 +0x344
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/[email protected]/command.go:1039
github.com/spf13/cobra.(*Command).ExecuteContext(...)
        github.com/spf13/[email protected]/command.go:1032
github.com/acorn-io/cmd.Main(0x1400018af08)
        github.com/acorn-io/[email protected]/builder.go:74 +0x54
main.main()
        github.com/gptscript-ai/gptscript/main.go:9 +0x44

Then, apply this patch to remove the offending line:

diff --git a/pkg/openai/client.go b/pkg/openai/client.go
index 1590f29..1cff199 100644
--- a/pkg/openai/client.go
+++ b/pkg/openai/client.go
@@ -422,7 +422,6 @@ func (c *Client) call(ctx context.Context, request openai.ChatCompletionRequest,
                } else if err != nil {
                        return nil, err
                }
-               slog.Debug("stream", "content", response.Choices[0].Delta.Content)
                if partial != nil {
                        partialMessage = appendMessage(partialMessage, response)
                        partial <- types.CompletionStatus{

Rebuild and run the fresh binary:

$ go build ./
$ ./gptscript examples/echo.gpt --cache false --input 'Hello World'

17:03:44 started  [main] [input=--cache false --input Hello World]
17:03:44 sent     [main]
         content  [1] content | Waiting for model response...
         content  [1] content | --cache false --input Hello World
17:03:45 ended    [main]

INPUT:

--cache false --input Hello World

OUTPUT:

--cache false --input Hello World

Notes

On a cache hit (i.e. the same input is used again), it does not panic even without the patch. I suppose the cached response is returned right away without hitting the LLM. Of course, without the patch you can't populate the cache in the first place.
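For what it's worth, an alternative to deleting the log line would be to guard the index, which keeps the debug output while avoiding the panic. A minimal sketch below; the `delta`/`choice`/`chunk` structs and `firstDeltaContent` helper are simplified stand-ins for illustration, not the real go-openai types or gptscript code:

```go
package main

import (
	"fmt"
	"log/slog"
)

// Simplified stand-ins for a streamed chat-completion chunk (assumptions,
// not the actual go-openai definitions).
type delta struct{ Content string }
type choice struct{ Delta delta }
type chunk struct{ Choices []choice }

// firstDeltaContent returns the first choice's delta content, if any.
// The panic at client.go:425 came from indexing Choices unconditionally,
// even though some stream chunks arrive with an empty Choices slice.
func firstDeltaContent(resp chunk) (string, bool) {
	if len(resp.Choices) == 0 {
		return "", false
	}
	return resp.Choices[0].Delta.Content, true
}

func main() {
	// Empty chunk: previously this indexing pattern panicked; now it is skipped.
	if content, ok := firstDeltaContent(chunk{}); ok {
		slog.Debug("stream", "content", content)
	}
	// Populated chunk: content is available as before.
	if content, ok := firstDeltaContent(chunk{Choices: []choice{{delta{"Hello World"}}}}); ok {
		fmt.Println(content)
	}
}
```

Guarding rather than deleting preserves the debug trace for chunks that do carry content, which may matter when diagnosing other streaming issues.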
