Skills modify agent behavior by injecting additional context and rules. This example shows both always-active skills and keyword-triggered skills:
examples/01_standalone_sdk/03_activate_skill.py
```python
import os

from pydantic import SecretStr

from openhands.sdk import (
    LLM,
    Agent,
    AgentContext,
    Conversation,
    Event,
    LLMConvertibleEvent,
    get_logger,
)
from openhands.sdk.context import (
    KeywordTrigger,
    Skill,
)
from openhands.sdk.tool import Tool
from openhands.tools.file_editor import FileEditorTool
from openhands.tools.terminal import TerminalTool

logger = get_logger(__name__)

# Configure LLM
api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."
model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
base_url = os.getenv("LLM_BASE_URL")
llm = LLM(
    usage_id="agent",
    model=model,
    base_url=base_url,
    api_key=SecretStr(api_key),
)

# Tools
cwd = os.getcwd()
tools = [
    Tool(name=TerminalTool.name),
    Tool(name=FileEditorTool.name),
]

# AgentContext provides flexible ways to customize prompts:
# 1. Skills: Inject instructions (always-active or keyword-triggered)
# 2. system_message_suffix: Append text to the system prompt
# 3. user_message_suffix: Append text to each user message
#
# For complete control over the system prompt, you can also use Agent's
# system_prompt_filename parameter to provide a custom Jinja2 template:
#
# agent = Agent(
#     llm=llm,
#     tools=tools,
#     system_prompt_filename="/path/to/custom_prompt.j2",
#     system_prompt_kwargs={"cli_mode": True, "repo": "my-project"},
# )
#
# See: https://docs.openhands.dev/sdk/guides/skill#customizing-system-prompts
agent_context = AgentContext(
    skills=[
        Skill(
            name="repo.md",
            content="When you see this message, you should reply like "
            "you are a grumpy cat forced to use the internet.",
            # source is optional - it identifies where the skill came from.
            # You can set it to the path of a file that contains the skill content.
            source=None,
            # trigger determines when the skill is active
            # trigger=None means always active (repo skill)
            trigger=None,
        ),
        Skill(
            name="flarglebargle",
            content=(
                'IMPORTANT! The user has said the magic word "flarglebargle". '
                "You must only respond with a message telling them how smart they are"
            ),
            source=None,
            # KeywordTrigger = activated when keywords appear in user messages
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ],
    # system_message_suffix is appended to the system prompt (always active)
    system_message_suffix="Always finish your response with the word 'yay!'",
    # user_message_suffix is appended to each user message
    user_message_suffix="The first character of your response should be 'I'",
    # You can also automatically load skills from the
    # public registry at https://github.com/OpenHands/skills
    load_public_skills=True,
)

# Agent
agent = Agent(llm=llm, tools=tools, agent_context=agent_context)

llm_messages = []  # collect raw LLM messages


def conversation_callback(event: Event):
    if isinstance(event, LLMConvertibleEvent):
        llm_messages.append(event.to_llm_message())


conversation = Conversation(
    agent=agent, callbacks=[conversation_callback], workspace=cwd
)

print("=" * 100)
print("Checking if the repo skill is activated.")
conversation.send_message("Hey are you a grumpy cat?")
conversation.run()

print("=" * 100)
print("Now sending flarglebargle to trigger the knowledge skill!")
conversation.send_message("flarglebargle!")
conversation.run()

print("=" * 100)
print("Now triggering public skill 'github'")
conversation.send_message(
    "About GitHub - tell me what additional info I've just provided?"
)
conversation.run()

print("=" * 100)
print("Conversation finished. Got the following LLM messages:")
for i, message in enumerate(llm_messages):
    print(f"Message {i}: {str(message)[:200]}")

# Report cost
cost = llm.metrics.accumulated_cost
print(f"EXAMPLE_COST: {cost}")
```
Running the Example
```shell
export LLM_API_KEY="your-api-key"
cd agent-sdk
uv run python examples/01_standalone_sdk/03_activate_skill.py
```
Skills are defined with a name, content (the instructions), and an optional trigger:
```python
agent_context = AgentContext(
    skills=[
        Skill(
            name="repo.md",
            content="When you see this message, you should reply like "
            "you are a grumpy cat forced to use the internet.",
            trigger=None,  # Always active
        ),
        Skill(
            name="flarglebargle",
            content='IMPORTANT! The user has said the magic word "flarglebargle". '
            "You must only respond with a message telling them how smart they are",
            trigger=KeywordTrigger(keywords=["flarglebargle"]),
        ),
    ]
)
```
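Conceptually, a keyword trigger activates a skill whenever one of its keywords appears in a user message. The sketch below is illustrative only; `KeywordTriggerSketch` is a hypothetical stand-in, and the SDK's real `KeywordTrigger` may use different matching rules:

```python
from dataclasses import dataclass


@dataclass
class KeywordTriggerSketch:
    """Illustrative stand-in for the SDK's KeywordTrigger (not the real class)."""

    keywords: list[str]

    def matches(self, user_message: str) -> bool:
        # The skill activates when any keyword appears in the user message.
        # This sketch uses case-insensitive substring matching as an assumption.
        text = user_message.lower()
        return any(kw.lower() in text for kw in self.keywords)


trigger = KeywordTriggerSketch(keywords=["flarglebargle"])
print(trigger.matches("flarglebargle!"))  # True
print(trigger.matches("hello there"))  # False
```

When a trigger matches, the skill's `content` is injected into the conversation context; a skill with `trigger=None` is injected unconditionally.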
OpenHands maintains a public skills repository with community-contributed skills. You can automatically load these skills without waiting for SDK updates.
You can also load public skills manually and have more control:
```python
from openhands.sdk.context.skills import load_public_skills

# Load all public skills
public_skills = load_public_skills()

# Use with AgentContext
agent_context = AgentContext(skills=public_skills)

# Or combine with custom skills
my_skills = [
    Skill(name="custom", content="Custom instructions", trigger=None)
]
agent_context = AgentContext(skills=my_skills + public_skills)
```
```python
from openhands.sdk.context.skills import load_public_skills

# Load from a custom repository
custom_skills = load_public_skills(
    repo_url="https://github.com/my-org/my-skills",
    branch="main",
)
```
The load_public_skills() function uses git-based caching for efficiency:
- First run: Clones the skills repository to ~/.openhands/cache/skills/public-skills/
- Subsequent runs: Pulls the latest changes to keep skills up-to-date
- Offline mode: Uses the cached version if the network is unavailable
This approach is more efficient than fetching individual skill files via HTTP and ensures you always have access to the latest community skills.
Explore available public skills at github.com/OpenHands/skills. These skills cover various domains like GitHub integration, Python development, debugging, and more.
Custom template example (custom_system_prompt.j2):
```jinja
You are a helpful coding assistant for {{ repo_name }}.

{% if cli_mode %}
You are running in CLI mode. Keep responses concise.
{% endif %}

Follow these guidelines:
- Write clean, well-documented code
- Consider edge cases and error handling
- Suggest tests when appropriate
```
Key points:
- Use relative filenames (e.g., "system_prompt.j2") to load from the agent's prompts directory
- Use absolute paths (e.g., "/path/to/prompt.j2") to load from any location
- Pass variables to the template via system_prompt_kwargs
- The system_message_suffix from AgentContext is automatically appended after your custom prompt
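To preview how such a template renders before wiring it into an Agent, you can render it by hand with the `jinja2` package. This is a minimal sketch for illustration; in practice the Agent renders the template for you using `system_prompt_kwargs`:

```python
from jinja2 import Template

# Render a custom system prompt template manually to inspect the result.
# The variable names mirror the system_prompt_kwargs example above.
template = Template(
    "You are a helpful coding assistant for {{ repo_name }}.\n"
    "{% if cli_mode %}You are running in CLI mode. Keep responses concise.{% endif %}"
)
prompt = template.render(repo_name="my-project", cli_mode=True)
print(prompt)
```

Rendering locally like this is a quick way to catch template syntax errors and verify that every variable you pass in `system_prompt_kwargs` is actually used.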