Repo sync for protected branch #273

Open · wants to merge 3 commits into base: main
8 changes: 8 additions & 0 deletions semantic-kernel/Frameworks/process/process-best-practices.md
@@ -22,6 +22,14 @@ Organizing your project files in a logical and maintainable structure is crucial

An organized structure not only simplifies navigation within the project but also enhances code reusability and facilitates collaboration among team members.

### Kernel Instance Isolation

> [!Important]
> Do not share a single Kernel instance between the main Process Framework and any of its dependencies (such as agents, tools, or external services).

Sharing a Kernel across these components can result in unexpected recursive invocation patterns, including infinite loops, as functions registered in the Kernel may inadvertently invoke each other. For example, a Step may call a function that triggers an agent, which then re-invokes the same function, creating a non-terminating loop.

To avoid this, instantiate separate Kernel objects for each independent agent, tool, or service used within your process. This ensures isolation between the Process Framework’s own functions and those required by dependencies, and prevents cross-invocation that could destabilize your workflow. This requirement reflects a current architectural constraint and may be revisited as the framework evolves.

### Common Pitfalls
To ensure smooth implementation and operation of the Process Framework, be mindful of these common pitfalls to avoid:
40 changes: 19 additions & 21 deletions semantic-kernel/concepts/planning.md
@@ -74,6 +74,9 @@ To use automatic function calling in Semantic Kernel, you need to do the followi
2. Create an execution settings object that tells the AI to automatically call functions
3. Invoke the chat completion service with the chat history and the kernel

> [!TIP]
> The following code sample uses the `LightsPlugin` defined [here](./plugins/adding-native-plugins.md#defining-a-plugin-using-a-class).

::: zone pivot="programming-language-csharp"

```csharp
@@ -127,31 +130,25 @@ do {
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.functions import kernel_function
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.chat_completion_client_base import ChatCompletionClientBase
from semantic_kernel.contents.chat_history import ChatHistory
from semantic_kernel.functions.kernel_arguments import KernelArguments

from semantic_kernel.connectors.ai.open_ai.prompt_execution_settings.azure_chat_prompt_execution_settings import (
from semantic_kernel.connectors.ai.open_ai import (
AzureChatCompletion,
AzureChatPromptExecutionSettings,
)
from semantic_kernel.contents import ChatHistory
from semantic_kernel.functions import kernel_function

async def main():
# 1. Create the kernel with the Lights plugin
kernel = Kernel()
kernel.add_service(AzureChatCompletion(
deployment_name="your_models_deployment_name",
api_key="your_api_key",
base_url="your_base_url",
))
kernel.add_service(AzureChatCompletion())
kernel.add_plugin(
LightsPlugin(),
plugin_name="Lights",
)

chat_completion : AzureChatCompletion = kernel.get_service(type=ChatCompletionClientBase)
chat_completion: AzureChatCompletion = kernel.get_service(type=ChatCompletionClientBase)

# 2. Enable automatic function calling
execution_settings = AzureChatPromptExecutionSettings()
@@ -173,12 +170,11 @@ async def main():
history.add_user_message(userInput)

# 3. Get the response from the AI with automatic function calling
result = (await chat_completion.get_chat_message_contents(
result = await chat_completion.get_chat_message_content(
chat_history=history,
settings=execution_settings,
kernel=kernel,
arguments=KernelArguments(),
))[0]
)

# Print the results
print("Assistant > " + str(result))
@@ -263,14 +259,16 @@ if __name__ == "__main__":

When you use automatic function calling, all of the steps in the automatic planning loop are handled for you and added to the `ChatHistory` object. After the function calling loop is complete, you can inspect the `ChatHistory` object to see all of the function calls made and results provided by Semantic Kernel.

## What about the Function Calling Stepwise and Handlebars planners?
## What happened to the Stepwise and Handlebars planners?

The Stepwise and Handlebars planners have been deprecated and removed from the Semantic Kernel package. These planners are no longer supported in either Python, .NET, or Java.

The Stepwise and Handlebars planners are still available in Semantic Kernel. However, we recommend using function calling for most tasks as it is more powerful and easier to use. Both the Stepwise and Handlebars planners will be deprecated in a future release of Semantic Kernel.
We recommend using **function calling**, which is both more powerful and easier to use for most scenarios.

Learn how to [migrate Stepwise Planner to Auto Function Calling](../support/migration/stepwise-planner-migration-guide.md).
To update existing solutions, follow our [Stepwise Planner Migration Guide](../support/migration/stepwise-planner-migration-guide.md).

> [!CAUTION]
> If you are building a new AI agent, we recommend that you _not_ use the Stepwise or Handlebars planners. Instead, use function calling as it is more powerful and easier to use.
> [!TIP]
> For new AI agents, use function calling instead of the deprecated planners. It offers better flexibility, built-in tool support, and a simpler development experience.

## Next steps
Now that you understand how planners work in Semantic Kernel, you can learn more about how to influence your AI agent so that it best plans and executes tasks on behalf of your users.
23 changes: 15 additions & 8 deletions semantic-kernel/concepts/plugins/adding-native-plugins.md
@@ -34,6 +34,9 @@ Below, we'll walk through the two different ways of providing your AI agent with

The easiest way to create a native plugin is to start with a class and then add methods annotated with the `KernelFunction` attribute. It is also recommended to liberally use the `Description` annotation to provide the AI agent with the necessary information to understand the function.

> [!TIP]
> The following `LightsPlugin` uses the `LightModel` defined [here](./index.md#1-define-your-plugin).

::: zone pivot="programming-language-csharp"

```csharp
@@ -81,22 +84,24 @@ public class LightsPlugin
::: zone pivot="programming-language-python"

```python
from typing import List, Optional, Annotated
from typing import Annotated

from semantic_kernel.functions import kernel_function

class LightsPlugin:
def __init__(self, lights: List[LightModel]):
def __init__(self, lights: list[LightModel]):
self._lights = lights

@kernel_function
async def get_lights(self) -> List[LightModel]:
async def get_lights(self) -> list[LightModel]:
"""Gets a list of lights and their current state."""
return self._lights

@kernel_function
async def change_state(
self,
change_state: LightModel
) -> Optional[LightModel]:
) -> LightModel | None:
"""Changes the state of the light."""
for light in self._lights:
if light["id"] == change_state["id"]:
@@ -538,22 +543,24 @@ This approach eliminates the need to manually provide and update the return type
When creating a plugin in Python, you can provide additional information about the functions in the `kernel_function` decorator. This information will be used by the AI agent to understand the functions better.

```python
from typing import List, Optional, Annotated
from typing import Annotated

from semantic_kernel.functions import kernel_function

class LightsPlugin:
def __init__(self, lights: List[LightModel]):
def __init__(self, lights: list[LightModel]):
self._lights = lights

@kernel_function(name="GetLights", description="Gets a list of lights and their current state")
async def get_lights(self) -> List[LightModel]:
async def get_lights(self) -> list[LightModel]:
"""Gets a list of lights and their current state."""
return self._lights

@kernel_function(name="ChangeState", description="Changes the state of the light")
async def change_state(
self,
change_state: LightModel
) -> Optional[LightModel]:
) -> LightModel | None:
"""Changes the state of the light."""
for light in self._lights:
if light["id"] == change_state["id"]:
130 changes: 127 additions & 3 deletions semantic-kernel/support/migration/stepwise-planner-migration-guide.md
@@ -1,17 +1,21 @@
---
title: .NET Migrating from Stepwise Planner to Auto Function Calling
title: Migrating from Stepwise Planner to Auto Function Calling
description: Describes the steps for SK caller code to migrate from Stepwise Planner to Auto Function Calling.
zone_pivot_groups: programming-languages
author: dmytrostruk
ms.topic: conceptual
ms.author: dmytrostruk
ms.date: 06/10/2025
ms.service: semantic-kernel
---
::: zone pivot="programming-language-csharp"

# Stepwise Planner Migration Guide
This migration guide shows how to migrate from `FunctionCallingStepwisePlanner` to the recommended approach for planning: [Auto Function Calling](../../concepts/ai-services/chat-completion/function-calling/index.md). The new approach produces results more reliably and uses fewer tokens than `FunctionCallingStepwisePlanner`.

## Plan generation

::: zone pivot="programming-language-csharp"

The following code shows how to generate a new plan with Auto Function Calling by using `FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()`. After a request is sent to the AI model, the plan will be located in the `ChatHistory` object, where a message with the `Assistant` role will contain a list of functions (steps) to call.

Old approach:
@@ -114,7 +118,127 @@ ChatMessageContent result = await chatCompletionService.GetChatMessageContentAsy
string planResult = result.Content;
```

The code snippets above demonstrate how to migrate your code that uses Stepwise Planner to use Auto Function Calling. Learn more about [Function Calling with chat completion](../../concepts/ai-services/chat-completion/function-calling/index.md).
::: zone-end

::: zone pivot="programming-language-python"

The following code shows how to generate a new plan with Auto Function Calling by using `function_choice_behavior = FunctionChoiceBehavior.Auto()`. After a request is sent to the AI model, the plan will be located in the `ChatHistory` object, where a message with the `Assistant` role will contain a list of functions (steps) to call.

Old approach:
```python
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.planners.function_calling_stepwise_planner import (
FunctionCallingStepwisePlanner,
FunctionCallingStepwisePlannerResult,
)

kernel = Kernel()
kernel.add_service(AzureChatCompletion())

# Add any plugins to the kernel that the planner will leverage
kernel.add_plugins(...)

planner = FunctionCallingStepwisePlanner(service_id="service_id")

result: FunctionCallingStepwisePlannerResult = await planner.invoke(
kernel=kernel,
question="Check current UTC time and return current weather in Boston city.",
)

generated_plan = result.chat_history
```

New approach:

```python
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, AzureChatPromptExecutionSettings
from semantic_kernel.contents import ChatHistory

chat_completion_service = AzureChatCompletion()

chat_history = ChatHistory()
chat_history.add_user_message("Check current UTC time and return current weather in Boston city.")

request_settings = AzureChatPromptExecutionSettings(function_choice_behavior=FunctionChoiceBehavior.Auto())

# Add any plugins to the kernel that the planner will leverage
kernel = Kernel()
kernel.add_plugins(...)

response = await chat_completion_service.get_chat_message_content(
chat_history=chat_history,
settings=request_settings,
kernel=kernel,
)
print(response)

# The generated plan is now contained inside of `chat_history`.
```

## Execution of the new plan
The following code shows how to execute the new plan with Auto Function Calling by using `function_choice_behavior = FunctionChoiceBehavior.Auto()`. This approach is useful when only the final result is needed, without the intermediate plan steps. In this case, the `Kernel` object can be used to pass a goal to the `invoke_prompt` method. The result of the plan execution will be located in a `FunctionResult` object.

Old approach:
```python
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.planners.function_calling_stepwise_planner import (
FunctionCallingStepwisePlanner,
FunctionCallingStepwisePlannerResult,
)

kernel = Kernel()
kernel.add_service(AzureChatCompletion())

# Add any plugins to the kernel that the planner will leverage
kernel.add_plugins(...)

planner = FunctionCallingStepwisePlanner(service_id="service_id")

result: FunctionCallingStepwisePlannerResult = await planner.invoke(
kernel=kernel,
question="Check current UTC time and return current weather in Boston city.",
)

print(result.final_answer)
```

New approach:

```python
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, AzureChatPromptExecutionSettings
from semantic_kernel.contents import ChatHistory
from semantic_kernel.functions import KernelArguments

kernel = Kernel()
kernel.add_service(AzureChatCompletion())
# Add any plugins to the kernel that the planner will leverage
kernel.add_plugins(...)

chat_history = ChatHistory()
chat_history.add_user_message("Check current UTC time and return current weather in Boston city.")

request_settings = AzureChatPromptExecutionSettings(function_choice_behavior=FunctionChoiceBehavior.Auto())

response = await kernel.invoke_prompt(
"Check current UTC time and return current weather in Boston city.",
arguments=KernelArguments(settings=request_settings),
)
print(response)
```

::: zone-end

::: zone pivot="programming-language-java"

> Planners were not available in SK Java. Please use function calling directly.

::: zone-end

The code snippets above demonstrate how to migrate your code that uses Stepwise Planner to use Auto Function Calling. Learn more about [Function Calling with chat completion](../../concepts/ai-services/chat-completion/function-calling/index.md).