Memory-based Conversation

The “SendMessage” action was used to demonstrate some Skill Script actions and the functionality of the Sandbox chat. Now, we will expand the code to include various other actions and integrate an LLM. Use the same flow from the “Hello World” example.

Goal: Have the agent respond to a user question in the Sandbox chat panel by involving an LLM. Include past user/agent memory so the LLM can understand subsequent questions and responses.

🎉

Prefer a different perspective on the Newo.ai platform functionality? Check out our co-founder lessons YouTube channel. Like, comment, and share to get the word out!

🚧

If you are creating a new agent and flow, go through the steps to create a new Sandbox chat connector with a different name. If you skip this and reuse an existing connector, every flow using that connector will activate simultaneously whenever a user sends a message in the Sandbox chat, which is not ideal.

LLM-integrated Skill Script

Replace the “SendMessage” action with the following code:

{{#system~}}

{{~/system}}

{{#assistant~}}

{{gen(name='RESULT', temperature=0.75)}}

{{~/assistant}}

The system block, denoted by {{#system~}} and {{~/system}}, contains everything that will be sent to the LLM as the prompt.

When you send a message in the Sandbox chat, the previously created event triggers and activates the Skill. When this happens, each expression in double curly braces is evaluated and replaced with its output.

In this case, the system block produces nothing. The assistant block, denoted by {{#assistant~}} and {{~/assistant}}, activates the LLM and talks to GPT-4 (the model shown in the "Skill Settings"). However, since the system block is empty, a blank prompt is sent to the LLM.

To send something to the LLM, add a plain text prompt in the system block. For example:

{{#system~}}

Give a proof of the Pythagoras theorem.

{{~/system}}

{{#assistant~}}

{{gen(name='RESULT', temperature=0.75)}}

{{~/assistant}}

Click Save and Publish, type anything in the Sandbox chat, and click the send icon.

The LLM will respond with the 'RESULT'; give it a few seconds. Once the result appears, click the > icon and the Show Prompt button to see what prompt was sent to the LLM and what result was returned.

Everything between <|im_start|>system and <|im_end|> was sent to the LLM, and everything between <|im_start|>assistant and <|im_end|> is the LLM's 'RESULT.'
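For the Pythagoras prompt above, the captured exchange looks roughly like this (the proof text is abbreviated here, and the exact wording will vary from run to run):

```
<|im_start|>system
Give a proof of the Pythagoras theorem.
<|im_end|>
<|im_start|>assistant
Consider a right triangle with legs a and b and hypotenuse c. ...
<|im_end|>
```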

However, there’s an issue with the agent in this state: if you type another message and click the send icon, the Skill ignores the previous LLM response and simply generates another answer to “Give a proof of the Pythagoras theorem.”

This is not ideal in a real-life scenario, as you’d want your agent to carry on a coherent conversation. To achieve this, add memory to the agent as follows:

{{#system~}}

{{set(name='agent_', value=GetAgent())}}  
{{set(name='memory', value=GetMemory(count=40, maxLen=20000))}}  

{{memory}}  
{{agent_}}:

{{~/system}}

{{#assistant~}}

{{gen(name='RESULT', temperature=0.75)}}

{{~/assistant}}

Note the additional set actions needed to declare 'agent_' and 'memory.'

Since the memory is now sent to the LLM each time a message is sent, there is no need to include the prompt “Give a proof of the Pythagoras theorem,” as it is already part of the Sandbox chat memory. Once the above code is added to the Skill Script, click Save and Publish again.
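If you also want to steer the agent’s tone, a standing instruction can sit alongside the retrieved memory in the same system block. A sketch of this idea (the instruction text is illustrative and not required by the platform):

```
{{#system~}}

You are a patient math tutor. Continue the conversation below.

{{set(name='agent_', value=GetAgent())}}
{{set(name='memory', value=GetMemory(count=40, maxLen=20000))}}

{{memory}}
{{agent_}}:

{{~/system}}

{{#assistant~}}

{{gen(name='RESULT', temperature=0.75)}}

{{~/assistant}}
```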

Type anything in the Sandbox chat, such as “Explain the steps in more detail,” and click the send icon. Again, give it a few seconds to generate a response.

Once the result appears, click the > icon and the Show Prompt button to see what prompt was sent to the LLM and what result was returned. Observe how the past agent and user conversations are added to the prompt, which is used to generate a response.
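With memory included, the rendered prompt should look roughly like the following. The transcript formatting is illustrative; the exact layout depends on how GetMemory serializes past turns and on your agent’s name:

```
<|im_start|>system
User: Give a proof of the Pythagoras theorem.
Agent: Consider a right triangle with legs a and b and hypotenuse c. ...
User: Explain the steps in more detail.
Agent:
<|im_end|>
```

Because the prompt ends with the agent’s name and a colon, the LLM continues the conversation as the agent, now with the full history in context.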