assemblyai/lemur.py — 4 additions & 4 deletions
@@ -172,7 +172,7 @@ def question(
         Args:
             questions: One or a list of questions to ask.
             context: The context which is shared among all questions. This can be a string or a dictionary.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", and "assemblyai/mistral-7b").
+            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the answer(s).
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -214,7 +214,7 @@ def summarize(
         Args:
             context: An optional context on the transcript.
             answer_format: The format on how the summary shall be summarized.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", and "assemblyai/mistral-7b").
+            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the summary.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -254,7 +254,7 @@ def action_items(
         Args:
             context: An optional context on the transcript.
             answer_format: The preferred format for the result action items.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", and "assemblyai/mistral-7b").
+            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the action items response.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -289,7 +289,7 @@ def task(

         Args:
             prompt: The prompt to use for this task.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", and "assemblyai/mistral-7b").
+            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the task.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
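The four hunks above all make the same change: the `final_model` docstring in `question`, `summarize`, `action_items`, and `task` now lists `"anthropic/claude-2-1"` as a supported option. A minimal sketch of guarding against an unsupported value before calling LeMUR — the option strings come from the diff itself, but the `validate_final_model` helper and the commented SDK usage are illustrative assumptions, not part of the SDK:

```python
# Documented final_model options per the docstrings in this diff.
FINAL_MODELS = (
    "basic",
    "default",
    "assemblyai/mistral-7b",
    "anthropic/claude-2-1",
)


def validate_final_model(name: str) -> str:
    """Return `name` if it is one of the documented final_model options.

    Hypothetical helper for illustration; the SDK does not ship this.
    """
    if name not in FINAL_MODELS:
        raise ValueError(f"unsupported final_model: {name!r}")
    return name


# Sketch of passing the new option to a LeMUR call (requires an API key;
# parameter names follow the docstrings above):
#
#   import assemblyai as aai
#   aai.settings.api_key = "..."
#   transcript = aai.Transcriber().transcribe("audio.mp3")
#   result = transcript.lemur.question(
#       questions=aai.LemurQuestion(question="What topics are discussed?"),
#       final_model=validate_final_model("anthropic/claude-2-1"),
#       temperature=0.0,  # 0 = most deterministic, 1 = least
#   )
```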