Skills
A skill is the fundamental unit of executable logic within the Newo agent framework. Each skill encapsulates a script, an LLM model configuration, and optional parameters, providing the building block that flows use to handle events and perform work.
What a skill does
Skills are the "how" of an agent's behavior. While flows define the structure and event routing for an agent, skills contain the actual logic that runs when an event is dispatched. A single skill might:
- Generate a reply to a user message
- Extract structured data from a conversation
- Look up persona attributes and set state values
- Send system events to trigger other flows
- Call an LLM to classify, summarize, or reason about information
Every skill is defined inside a flow's YAML configuration and points to a script file that contains its logic. When an event triggers a skill, the Newo Script Language (NSL) runtime loads the script, injects any parameters, and executes it against the configured LLM model.
Skill anatomy
A skill definition in YAML contains the following fields:
- title: ""
idn: GenerateReplySkill
prompt_script: flows/MyFlow/skills/GenerateReplySkill.nsl
runner_type: nsl
model:
model_idn: gemini25_flash
provider_idn: google
parameters:
- name: user_id
default_value: ""
- name: prompt
default_value: ""| Field | Description |
|---|---|
title | Optional human-readable display name for the skill |
idn | The identifying name of the skill, used to reference it in event subscriptions and from other scripts |
prompt_script | Relative path to the script file (.nsl or .nslg) that contains the skill's logic |
runner_type | The script engine used to execute the skill: nsl or guidance |
model | The LLM provider and model to use when the skill calls generation functions |
parameters | Array of named parameters with default values that can be passed into the skill at invocation |
Runner types
The runner_type field determines which script engine processes the skill's script file. The platform supports two runner types, each with its own syntax and file extension.
NSL runner (nsl)
NSL (Newo Script Language) scripts use Jinja2-style template syntax and are stored in .nsl files. This is the more common runner type, used for the majority of skills including utility logic, control flow, and LLM generation.
NSL scripts use {% %} for control blocks, {{ }} for expressions, and support standard Jinja2 features like set, if/else, and for loops:
```
{% set user_id = GetUser().id | string %}
{% set email = GetPersonaAttribute(id=user_id, field="email").strip() %}
{% if email %}
{{SendSystemEvent(
  eventIdn="email_worker_send_email",
  connectorIdn="system",
  email=email,
  subject="Follow-up",
  body="Thank you for your inquiry."
)}}
{% endif %}
{{Return(val=email)}}
```
NSL skills call LLM generation using Gen() and GenStream() within the Jinja2 expression syntax. A typical pattern places the system prompt and assistant markers around the generation call:
```
{{system}}{{prompt.strip()}}{{end}}
{{assistant}}
{% set result = Gen(
  temperature=0.2,
  topP=0,
  maxTokens=4000,
  thinkingBudget=85
) %}
{{end}}
{{Return(val=result)}}
```
Guidance runner (guidance)
Guidance scripts use Handlebars-style template syntax and are stored in .nslg files. This runner type uses {{#if}}...{{/if}} blocks, {{Set()}} for variable assignment, and {{#system~}}...{{~/system}} for prompt sections:
```
{{StartNotInterruptibleBlock()}}
{{Set(name="user_id", value=GetUser(field="id"))}}
{{Set(name="integration_idn", value=GetActor(field="integrationIdn"))}}
{{#if integration_idn == "newo_chat"}}
{{SendSystemEvent(eventIdn="extend_session", connectorIdn="system")}}
{{/if}}
{{StopNotInterruptibleBlock()}}
```
Guidance scripts call LLM generation using Gen() within a {{Set()}} expression. The system prompt is wrapped in {{#system~}}...{{~/system}} blocks:
```
{{#system~}}
Analyze the following conversation and extract user information.
Conversation:
{{GetMemory(count=10, maxLen=5000)}}
{{~/system}}
{{#assistant~}}
{{Set(name="result", value=Gen(
  jsonSchema=schema,
  validateSchema="True",
  temperature=0.2,
  topP=0,
  thinkingBudget=425
), expose="True")}}
{{~/assistant}}
```
::: 🗒️ NOTE
The runner_type must match the file extension of the prompt_script. Use nsl for .nsl files and guidance for .nslg files. Mismatched runner types and file extensions will cause script execution errors.
:::
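The match between runner type and file extension can be pictured as a small validation step. The following Python sketch is illustrative only; `check_runner_matches_script` is a hypothetical helper, not a platform function:

```python
from pathlib import Path

# Expected file extension for each runner type, per the note above
RUNNER_EXTENSIONS = {"nsl": ".nsl", "guidance": ".nslg"}

def check_runner_matches_script(runner_type, prompt_script):
    """Return True when runner_type matches the prompt_script extension."""
    expected = RUNNER_EXTENSIONS.get(runner_type)
    return expected is not None and Path(prompt_script).suffix == expected

print(check_runner_matches_script("nsl", "flows/MyFlow/skills/GenerateReplySkill.nsl"))  # True
print(check_runner_matches_script("guidance", "flows/MyFlow/skills/GenerateReplySkill.nsl"))  # False
```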
Model configuration
Each skill specifies the LLM it uses through the model field, which contains two identifiers:
```yaml
model:
  model_idn: gemini25_flash
  provider_idn: google
```

| Field | Description |
|---|---|
| `provider_idn` | The LLM provider identifier (e.g., `google`, `openai`, `anthropic`) |
| `model_idn` | The specific model identifier within that provider (e.g., `gemini25_flash`, `gpt4o`) |
The model configuration can be set to null, in which case the skill uses the flow's default model defined at the flow level:
```yaml
default_runner_type: guidance
default_provider_idn: google
default_model_idn: gemini25_flash
```

::: 🗒️ NOTE
The model configuration controls which LLM is used when the skill's script calls Gen() or GenStream(). Skills that do not invoke any generation functions still require a model entry in their YAML definition, but the model will not be called at runtime.
:::
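The fallback behavior can be sketched in a few lines of Python. Here `resolve_model` is a hypothetical helper used only for illustration, assuming a null model simply defers to the flow-level defaults:

```python
def resolve_model(skill_model, flow_defaults):
    """Use the skill's own model when present; otherwise fall back to the
    flow-level defaults (the behavior described above for a null model)."""
    if skill_model is not None:
        return skill_model
    return {"provider_idn": flow_defaults["default_provider_idn"],
            "model_idn": flow_defaults["default_model_idn"]}

flow_defaults = {"default_provider_idn": "google",
                 "default_model_idn": "gemini25_flash"}
print(resolve_model(None, flow_defaults))
# {'provider_idn': 'google', 'model_idn': 'gemini25_flash'}
print(resolve_model({"provider_idn": "openai", "model_idn": "gpt4o"}, flow_defaults))
# {'provider_idn': 'openai', 'model_idn': 'gpt4o'}
```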
Parameter passing
Parameters allow data to flow between skills and into skill scripts. Each parameter has a name and a default_value:
```yaml
parameters:
  - name: user_id
    default_value: ""
  - name: count
    default_value: "15"
  - name: include_system
    default_value: "False"
```

How parameters are passed
When one skill calls another, it passes parameters as named arguments. The called skill receives these values as template variables.
In an NSL (.nsl) script, the calling skill invokes another skill like a function:
```
{% set memory = get_memory(
  user_id=user_id,
  count="10",
  include_system="True"
) %}
```
Inside the get_memory skill's script, user_id, count, and include_system are available as template variables that can be used directly:
```
{% if include_system == "True" %}
{% set actors = GetActors(personaId=user_id, integrationIdn="system") %}
{% endif %}
{{Return(val=GetMemory(count=count, maxLen=10000, filterByActorIds=actors))}}
```
In a Guidance (.nslg) script, calling another skill works similarly:
{{Set(name="channel", value=get_conversation_channel(integrationIdn=integration_idn))}}
Default values
If a parameter is not supplied by the caller, the default_value from the YAML definition is used. All default values are strings. Common conventions include:
""(empty string) — The parameter is expected to be provided by the caller"False"or"True"— Boolean-like flags, compared as strings in scripts- A meaningful default like
"15"— Used when the parameter has a reasonable fallback
The Gen() and GenStream() functions
Skills interact with LLMs through two primary generation functions.
Gen()
Gen() performs a synchronous LLM generation call and returns the result as a string. It is the most commonly used generation function across both runner types.
Common parameters:
| Parameter | Description |
|---|---|
| `temperature` | Controls randomness of output (e.g., 0.2 for deterministic, 0.65 for creative) |
| `topP` | Nucleus sampling threshold (typically 0 or 0.5) |
| `maxTokens` | Maximum number of tokens in the generated response |
| `thinkingBudget` | Token budget allocated for the model's internal reasoning |
| `jsonSchema` | A JSON schema string to constrain the output to structured JSON |
| `validateSchema` | When "True", validates the output against the provided jsonSchema |
| `skipFilter` | When True, bypasses content filtering on the output |
Example of structured generation using Gen() with a JSON schema:
```
{{system}}{{prompt.strip()}}{{end}}
{{assistant}}
{% set result_json = Gen(
  jsonSchema=schema,
  validateSchema="True",
  temperature=0.2,
  topP=0,
  maxTokens=4000,
  skipFilter=True,
  thinkingBudget=85
) %}
{{end}}
{{Return(val=result_json)}}
```
GenStream()
GenStream() performs a streaming LLM generation call. Instead of waiting for the full response, it streams tokens to specified actors in real time. This is used for interactive reply generation, particularly in chat and voice scenarios.
```
{{system}}{{prompt.strip()}}{{end}}
{{assistant}}
{% set agent_answer = GenStream(
  interruptMode="interruptWindow",
  interruptWindow=0.7,
  temperature=0.2,
  topP=0.5,
  maxTokens=4000,
  skipFilter=True,
  sendTo="actors",
  actorIds=[actor_id],
  thinkingBudget=thinking_budget
) %}
{{end}}
{{Return(val=agent_answer)}}
```
The sendTo and actorIds parameters direct the streamed output to specific conversation participants, enabling real-time message delivery while the LLM is still generating.
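The difference between the two generation functions can be pictured with a toy Python sketch: both produce the same final string, but the streaming variant hands each token to a delivery callback as it is produced. All names here are illustrative, not platform APIs:

```python
def gen(tokens):
    """Synchronous generation: wait for everything, return the full string."""
    return "".join(tokens)

def gen_stream(tokens, send):
    """Streaming generation: deliver each token as it is produced,
    then return the assembled result."""
    parts = []
    for tok in tokens:
        send(tok)        # token reaches the actor while generation continues
        parts.append(tok)
    return "".join(parts)

delivered = []
result = gen_stream(["Hel", "lo", "!"], delivered.append)
print(delivered)                          # ['Hel', 'lo', '!']
print(result == gen(["Hel", "lo", "!"]))  # True
```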
Skill execution lifecycle
When a skill executes, it follows this sequence:
1. Event dispatch — An incoming event matches a flow's event subscription, which identifies the target skill by `skill_idn` (or resolves it dynamically from a state field).
2. Parameter injection — The runtime injects any parameters provided by the caller (or from the event payload), falling back to `default_value` for unspecified parameters.
3. Script loading — The runtime loads the script file specified in `prompt_script` and selects the appropriate runner (`nsl` or `guidance`).
4. Execution — The script runs, accessing platform functions (`GetUser()`, `GetPersonaAttribute()`, `SetState()`, `SendSystemEvent()`, etc.) and optionally calling `Gen()` or `GenStream()` against the configured LLM.
5. Return — The skill produces a return value via `Return(val=...)`, which is available to the calling skill or to the flow engine for further processing.
6. Sub-skill calls — During execution, a skill can invoke other skills defined in the same flow. These calls are synchronous: the calling skill pauses until the sub-skill returns.
::: ⚠️ CAUTION
Skills that use StartNotInterruptibleBlock() and StopNotInterruptibleBlock() create a protected execution window. While inside a non-interruptible block, incoming events are queued rather than interrupting the running skill. This is critical for skills that must complete an atomic sequence (e.g., setting up session state on conversation start).
:::
Event subscriptions
Skills are connected to incoming events through a flow's events array. Each event subscription maps a specific event (optionally scoped by integration and connector) to a target skill.
```yaml
events:
  - idn: user_message
    skill_selector: skill_idn
    skill_idn: UserNewoChatReplySkill
    state_idn: null
    integration_idn: newo_chat
    connector_idn: newo_chat
    interrupt_mode: queue
```

Skill selectors
The skill_selector field determines how the target skill is resolved:
- `skill_idn` — The skill to execute is specified directly in the `skill_idn` field. This is the most common mode.
- `skill_idn_from_state` — The skill identifier is read dynamically from a state field identified by `state_idn`. This allows the target skill to change at runtime based on agent state.
Example of a state-based selector:
```yaml
events:
  - idn: user_message
    skill_selector: skill_idn_from_state
    skill_idn: null
    state_idn: phone_reply_skill
    integration_idn: newo_voice
    connector_idn: newo_voice_connector
    interrupt_mode: interrupt
```

In this example, when a user_message event arrives from the newo_voice integration, the flow reads the value of the phone_reply_skill state field and executes whichever skill identifier is stored there.
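The two selector modes can be modeled with a small Python sketch. `resolve_target_skill` and the dictionaries below are illustrative only, not platform APIs:

```python
def resolve_target_skill(subscription, state):
    """Resolve the target skill of an event subscription per skill_selector."""
    if subscription["skill_selector"] == "skill_idn":
        return subscription["skill_idn"]
    if subscription["skill_selector"] == "skill_idn_from_state":
        # The skill identifier lives in the state field named by state_idn
        return state[subscription["state_idn"]]
    raise ValueError("unknown skill_selector")

direct = {"skill_selector": "skill_idn",
          "skill_idn": "UserNewoChatReplySkill", "state_idn": None}
dynamic = {"skill_selector": "skill_idn_from_state",
           "skill_idn": None, "state_idn": "phone_reply_skill"}
state = {"phone_reply_skill": "PhoneReplySkill"}  # hypothetical stored value
print(resolve_target_skill(direct, state))   # UserNewoChatReplySkill
print(resolve_target_skill(dynamic, state))  # PhoneReplySkill
```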
Interrupt modes
The interrupt_mode field controls how the event is handled if the flow is already executing a skill for the same user:
| Mode | Behavior |
|---|---|
| `interrupt` | Cancels the currently running skill and immediately starts the new one |
| `queue` | Waits for the current skill to finish, then executes the new one |
| `cancel` | Discards the event if a skill is already running |
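A toy Python dispatcher makes the three behaviors concrete (all names here are illustrative, not platform functions):

```python
def handle_event(mode, running, queue, start, cancel_running):
    """Dispatch an incoming event per interrupt_mode when a skill for the
    same user may already be running."""
    if not running:
        start()
    elif mode == "interrupt":
        cancel_running()     # stop the current skill...
        start()              # ...and run the new one immediately
    elif mode == "queue":
        queue.append(start)  # deferred until the current skill finishes
    elif mode == "cancel":
        pass                 # event is discarded

log, pending = [], []
handle_event("queue", running=True, queue=pending,
             start=lambda: log.append("started"),
             cancel_running=lambda: log.append("cancelled"))
print(log, len(pending))  # [] 1  (nothing ran yet; one event queued)
```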
Naming conventions
Skill identifiers (idn) follow CamelCase naming by convention. The platform uses two common patterns:
- CamelCase with "Skill" suffix — Used for event-triggered skills that handle a specific concern: `ConversationStartedSkill`, `UserNewoChatReplySkill`, `UpdateConversationMetaSkill`
- snake_case — Used for utility skills that function as reusable helpers: `get_memory`, `structured_generation`, `format_response`
- Underscore prefix — Skills prefixed with `_` are private helpers not intended to be called from outside their own flow: `_validateInput`, `_formatPayload`
::: 🗒️ NOTE
The skill idn is the exact string used to invoke the skill from other scripts and to reference it in event subscriptions. Choose descriptive, consistent names because renaming a skill idn requires updating all references across the flow's scripts and event configuration.
:::
Skills within a flow YAML
A flow's YAML file is the central configuration that ties skills, events, state, and defaults together. Here is an annotated excerpt showing how skills fit into the broader flow structure:
```yaml
title: CAMessageFlow                    # Flow display name
idn: CAMessageFlow                      # Flow identifier
description: null
agent_id: null
skills:                                 # All skills belonging to this flow
  - title: ""
    idn: SendMessage                    # Called by event subscription below
    prompt_script: flows/CAMessageFlow/skills/SendMessage.nsl
    runner_type: nsl
    model:
      model_idn: gemini25_flash
      provider_idn: google
    parameters: []
  - title: ""
    idn: generate_text_by_instruction   # Utility skill called by SendMessage
    prompt_script: flows/CAMessageFlow/skills/generate_text_by_instruction.nsl
    runner_type: nsl
    model:
      model_idn: gemini25_flash
      provider_idn: google
    parameters:
      - name: input
        default_value: ""
      - name: instruction
        default_value: ""
      - name: user_id
        default_value: ""
  - title: ""
    idn: get_user_email                 # Simple helper, returns a string
    prompt_script: flows/CAMessageFlow/skills/get_user_email.nsl
    runner_type: nsl
    model:
      model_idn: gemini25_flash
      provider_idn: google
    parameters:
      - name: user_id
        default_value: ""
events:                                 # Event-to-skill mappings
  - idn: convoagent_send_notification_message
    skill_selector: skill_idn
    skill_idn: SendMessage
    state_idn: null
    integration_idn: system
    connector_idn: system
    interrupt_mode: queue
state_fields: []                        # Flow-level state (none for this flow)
default_runner_type: guidance           # Fallback runner if skill omits it
default_provider_idn: google            # Fallback LLM provider
default_model_idn: gemini25_flash       # Fallback LLM model
```

This flow defines three skills and one event subscription. When the convoagent_send_notification_message event fires, the SendMessage skill executes. Inside its script, SendMessage calls get_user_email and generate_text_by_instruction as sub-skills, passing parameters to each.
Cross-references
- See Integrations and connectors for how integration and connector identifiers scope event subscriptions
- See Agents and the multi-agent system for how flows and skills relate to agent architecture
- See Attributes system for details on `GetCustomerAttribute`, `GetPersonaAttribute`, and other attribute functions used within skill scripts
- See Event Identifier List for all event identifiers that can trigger skills
