Harness and provider profiles are Python-only and require `deepagents>=0.5.4`. They are public beta APIs and may change in future releases.

Use `HarnessProfile` when building profiles in Python; use `HarnessProfileConfig` when loading or saving YAML/JSON files. Deep Agents ships built-in harness profiles for OpenAI and Anthropic (Claude) models.
Provider profiles are a narrower companion API for model-construction kwargs, which don’t affect the harness. Most callers don’t need them; reach for one when you want `init_chat_model` defaults, credential checks, or runtime-derived kwargs shipped as defaults alongside your provider choice (for example, when packaging a provider integration).
Harness profiles
A `HarnessProfile` describes prompt-assembly, tool-visibility, middleware, and default-subagent adjustments that `create_deep_agent` applies after the chat model has been constructed:

- `base_system_prompt`: replace the base Deep Agents system prompt (CUSTOM in Prompt assembly).
- `system_prompt_suffix`: append text to the assembled base prompt (SUFFIX in Prompt assembly); applied to the main agent, declarative subagents, and the auto-added general-purpose subagent.
- `tool_description_overrides`: override individual tool descriptions, keyed by tool name.
- `excluded_tools`: remove specific harness-level tools from the tool set.
- `excluded_middleware`: strip specific middleware classes from the stack. Accepts middleware classes or string names.
- `extra_middleware`: append middleware to every stack this profile applies to.
- `general_purpose_subagent`: disable, rename, or re-prompt the general-purpose subagent. When this field’s `system_prompt` is set alongside `base_system_prompt`, the general-purpose-specific subagent prompt wins; see General-purpose subagent prompt.

Caller-supplied `system_prompt=` always sits at the front of the assembled prompt, and `system_prompt_suffix` always sits at the end, regardless of which model is selected. The same overlay rules apply to subagents: each subagent re-runs profile resolution against its own model. See Prompt assembly for the full per-case breakdown (main agent, subagents, and the general-purpose subagent).

`excluded_middleware` entries accept two forms:
- A middleware class (matched by exact type), or a plain string that matches `AgentMiddleware.name`. Use plain strings for built-ins and public aliases such as `"SummarizationMiddleware"`.
- A `module:Class` import ref (for example, `"my_pkg.middleware:TelemetryMiddleware"`) to target an exact middleware class from a config file. Import refs resolve lazily, so use them only for trusted local configuration: loading one imports Python code.
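As a minimal sketch of how a `module:Class` import ref can be resolved lazily (illustrative only, not the library's implementation): the string is split once on `:`, the module is imported only at resolve time, and the attribute is looked up.

```python
import importlib


def resolve_import_ref(ref: str) -> type:
    """Resolve a "module:Class" import ref to the class it names.

    Illustrative sketch only; the real resolver is internal to deepagents.
    """
    module_name, _, class_name = ref.partition(":")
    if not module_name or not class_name:
        raise ValueError(f"expected 'module:Class', got {ref!r}")
    # Lazy: the module is imported only when the ref is actually resolved,
    # which is why import refs should come only from trusted configuration.
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


# Example: resolve a stdlib class the same way a config-file ref would be.
cls = resolve_import_ref("collections:OrderedDict")
```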
Lookup order for preconfigured model instances
When you pass a preconfigured chat model instance instead of a `provider:model` string, the harness synthesizes the canonical `provider:identifier` key from the instance and looks it up in this order:

1. Exact `provider:identifier` match
2. Identifier-only (only when the identifier already contains `:`)
3. Provider-only fallback
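The three-step probe can be sketched as a plain dictionary lookup (a simplified illustration over a registry keyed as described above; not the library's code):

```python
def lookup_profile(registry: dict, provider: str, identifier: str):
    """Probe a profile registry in the documented order.

    1. Exact provider:identifier match
    2. Identifier-only (only when the identifier itself contains ":")
    3. Provider-only fallback
    Sketch only; the real lookup is internal to deepagents.
    """
    exact = f"{provider}:{identifier}"
    if exact in registry:
        return registry[exact]
    if ":" in identifier and identifier in registry:
        return registry[identifier]
    return registry.get(provider)  # None when nothing matches


registry = {"openai": "provider-level", "openai:gpt-5.4": "model-level"}
```

A model-level key shadows the provider-level one for its specific model, while every other model from that provider falls back to the bare provider key.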
Registration keys
Both profile types use the same key format:

- Provider-level — a bare provider name like `"openai"` applies to every model from that provider.
- Model-level — a fully qualified `provider:model` key like `"openai:gpt-5.4"` applies only to that specific model.
There is no wildcard key that matches every provider. To apply the same overrides everywhere—say, dropping `TodoListMiddleware` regardless of which model is selected—register the profile under each provider key you use. Profiles are intended for adjustments that depend on the model being selected; global adjustments that should apply regardless of model belong at the `create_deep_agent` call site.

Merge semantics
| Field | Merge behavior |
|---|---|
| `base_system_prompt`, `system_prompt_suffix` | New value wins when set; otherwise inherits |
| `tool_description_overrides` | Mappings merge per key; new value wins on a shared key |
| `excluded_tools`, `excluded_middleware` | Set union |
| `extra_middleware` | Merged by concrete class: a new instance replaces an existing one at its position; novel classes append |
| `general_purpose_subagent` | Merged field-wise (unset fields inherit) |
| `init_kwargs` (provider) | Dicts merge key-wise; new value wins on a shared key |
| `pre_init` (provider) | Callables chain: existing runs first, then the new one |
| `init_kwargs_factory` (provider) | Factories chain, with their outputs merged on every `resolve_model` call |
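A few of these rules can be illustrated with plain-Python merges (a behavioral sketch of the semantics in the table, not the library's merge code; the dict shape is a stand-in for the profile types):

```python
def merge_profiles(base: dict, overlay: dict) -> dict:
    """Sketch of the documented merge semantics for a few field kinds."""
    merged = dict(base)
    # Scalar prompt fields: new value wins when set; otherwise inherit.
    if overlay.get("system_prompt_suffix") is not None:
        merged["system_prompt_suffix"] = overlay["system_prompt_suffix"]
    # Mappings merge per key; the overlay wins on a shared key.
    merged["init_kwargs"] = {
        **base.get("init_kwargs", {}),
        **overlay.get("init_kwargs", {}),
    }
    # Exclusion fields take the set union.
    merged["excluded_tools"] = (
        set(base.get("excluded_tools", ())) | set(overlay.get("excluded_tools", ()))
    )
    # pre_init callables chain: the existing one runs first, then the new one.
    chained = [f for f in (base.get("pre_init"), overlay.get("pre_init")) if f]
    merged["pre_init"] = lambda: [f() for f in chained]
    return merged


base = {"init_kwargs": {"temperature": 0, "timeout": 30}, "excluded_tools": {"a"}}
overlay = {
    "init_kwargs": {"timeout": 60},
    "excluded_tools": {"b"},
    "system_prompt_suffix": "Be terse.",
}
m = merge_profiles(base, overlay)
```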
Provider profiles
A `ProviderProfile` declares how Deep Agents should construct a chat model for a given provider or specific model spec. It applies only when you provide a `provider:model` string when creating the deep agent (the path that constructs the model with `init_chat_model`), not when you pass a preconfigured model. A profile can declare:

- Static initialization arguments forwarded to `init_chat_model`.
- Side effects to run before construction (for example, credential validation).
- Kwargs derived from runtime state (for example, headers pulled from environment variables).
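As a shape sketch of how those three capabilities fit together, using a stand-in dataclass rather than the real `ProviderProfile` (only the field names `init_kwargs`, `pre_init`, and `init_kwargs_factory` are taken from the merge table above; the resolution logic is illustrative):

```python
import os
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class ProviderProfileSketch:
    """Stand-in for ProviderProfile; field names follow the merge table."""
    init_kwargs: dict = field(default_factory=dict)            # static kwargs for init_chat_model
    pre_init: Optional[Callable[[], None]] = None              # side effect before construction
    init_kwargs_factory: Optional[Callable[[], dict]] = None   # runtime-derived kwargs

    def resolve_kwargs(self) -> dict:
        if self.pre_init:
            self.pre_init()  # e.g. credential validation; raises on failure
        derived = self.init_kwargs_factory() if self.init_kwargs_factory else {}
        # Factory outputs act as defaults; static init_kwargs win on a shared key.
        return {**derived, **self.init_kwargs}


profile = ProviderProfileSketch(
    init_kwargs={"temperature": 0},
    pre_init=lambda: None,  # stand-in for a credential check
    init_kwargs_factory=lambda: {
        "default_headers": {"x-org": os.environ.get("MY_ORG", "demo")}
    },
)
kwargs = profile.resolve_kwargs()
```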
Load profiles from config files
For YAML/JSON-backed workflows, use `HarnessProfileConfig`. It mirrors the declarative subset of `HarnessProfile` (prompt text, tool-description overrides, excluded tools and middleware, general-purpose subagent edits) and owns `to_dict` / `from_dict`. Runtime-only state (middleware instances, factories, and class-form `excluded_middleware` entries) stays on `HarnessProfile`.

`register_harness_profile` accepts either type, so config-backed callers don’t need a manual conversion step.

`HarnessProfileConfig.from_harness_profile(...)` exports a runtime profile back to the declarative shape when it only uses serializable features:
- Class-form `excluded_middleware` entries serialize as a public alias (when the class exposes one via `serialized_name: ClassVar[str]`) or as a `module:Class` import ref.
- Non-empty `extra_middleware`, and middleware classes declared in `__main__` or inside a function scope, cannot be serialized; export raises `ValueError`.
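The alias-or-import-ref rule can be sketched as follows (an illustrative stand-in, not deepagents' exporter; only `serialized_name` and the `module:Class` format come from the text above):

```python
def serialize_middleware_class(cls: type) -> str:
    """Serialize a middleware class as its public alias or a module:Class ref.

    Sketch of the documented rule, not the library's export code.
    """
    alias = getattr(cls, "serialized_name", None)
    if isinstance(alias, str):
        return alias  # public alias wins when the class declares one
    if cls.__module__ == "__main__" or "<locals>" in cls.__qualname__:
        # Neither form can round-trip through a config file.
        raise ValueError(f"{cls!r} cannot be serialized to a config file")
    return f"{cls.__module__}:{cls.__qualname__}"


class Aliased:
    serialized_name = "SummarizationMiddleware"


from collections import OrderedDict

assert serialize_middleware_class(Aliased) == "SummarizationMiddleware"
assert serialize_middleware_class(OrderedDict) == "collections:OrderedDict"
```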
Ship a profile as a plugin
Distributable profiles can register themselves via `importlib.metadata` entry points instead of requiring callers to run `register_*_profile` by hand. Load order is built-ins first, then entry-point plugins, then any direct `register_*_profile` calls in user code; all three paths funnel through the same additive registration, so later registrations layer on top of earlier ones under the same key.
Declare an entry point in the distribution’s own `pyproject.toml` under the appropriate group; the entry point’s target runs when `deepagents.profiles` is imported.
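A sketch of such a declaration (the group name and the module/callable names here are assumptions for illustration; check the Deep Agents docs for the exact entry-point group):

```toml
# pyproject.toml of the distributable profile package
[project.entry-points."deepagents.profiles"]   # group name assumed for illustration
my_provider = "my_pkg.profiles:register"       # hypothetical module and callable
```

The target callable would invoke `register_harness_profile` / `register_provider_profile` when the plugin is loaded.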
Related
- Harness — overview of harness capabilities
- Models — configure model providers and parameters
- Customization — full `create_deep_agent` configuration surface

