Memory-based Conversation

In the previous lesson, the “SendMessage” action demonstrated some skill script actions and the functionality of the Sandbox chat. Now we will expand the code to include other actions and integrate an LLM. Use the same flow from the “Hello World” example.

Goal: Have the agent respond to a user question in the Sandbox chat panel by calling an LLM. Include past user/agent messages as memory so the LLM can understand follow-up questions and responses.

🎉

Prefer a different perspective on the Newo.ai platform functionality? Check out our co-founder lessons YouTube channel. Like, comment, and share to get the word out!

OpenAI API Keys

To use an LLM, its API keys need to be added to the platform. Keys can come from OpenAI, Anthropic, LLaMa providers, etc. Let’s use OpenAI for this example.

  1. Navigate to the OpenAI website and click Log in.
  2. If you already have an account, fill in your credentials. If not, click the Sign up link and go through the steps to verify your account.
  3. Once logged in, click API.
  4. Click API keys in the left-side navigation panel.
  5. Click Create new secret key and give the key a name.
  6. Click Create secret key and copy the generated key to your clipboard.

Adding LLM API Keys to the Newo.ai Platform

Let’s add the LLM key you have copied to your clipboard to the Newo.ai platform:

  1. Navigate to your Newo.ai profile from the left-side navigation panel on the platform.
  2. Click the LLM Keys tab.
  3. Click Add Key inside the OpenAI section.
  4. Paste the key from your clipboard into the “LLM Key” field. Keep all remaining values as their default.
  5. Click Create.

When adding LLM keys, you can select whether you’d like the model to be used for your main agent(s) or used as support for your main agent(s).

Additionally, you can set the priority: if your priority 1 LLM runs out of tokens, requests fall back to your priority 2 LLM, and so on.

🚧

If creating a new agent and flow, go through the steps to create a new Sandbox chat connector with a different name. If you instead reuse an existing connector, every flow subscribed to it will activate simultaneously whenever a user sends a message in the Sandbox chat, which is not ideal.

LLM-integrated Skill Script

Replace the “SendMessage” action with the following code:

{{#system~}}

{{~/system}}

{{#assistant~}}

{{gen(name='RESULT', temperature=0.75)}}

{{~/assistant}}

The system block, denoted by {{#system~}} and {{~/system}}, contains everything that will be sent to the LLM as the prompt.

When you send a message in the Sandbox chat, the previously created event triggers and activates the skill. When this happens, each expression in double curly braces is evaluated and replaced with its output.

In this case, the system block produces nothing. The assistant block, denoted by {{#assistant~}} and {{~/assistant}}, calls the LLM (GPT-4, the model shown in the skill settings). However, since the system block is empty, a blank prompt is sent to the LLM.

To send something to the LLM, add a plain text prompt in the system block. For example:

{{#system~}}

Give a proof of the Pythagoras theorem.

{{~/system}}

{{#assistant~}}

{{gen(name='RESULT', temperature=0.75)}}

{{~/assistant}}

Click Save and Publish, type anything in the Sandbox chat, and click the send icon.

The LLM will respond with the ‘RESULT’; just give it a few seconds. Once the result appears, click the > icon and then the Show Prompt button to see what prompt was sent to the LLM and what result was returned.

Everything between <|im_start|>system and <|im_end|> was sent to the LLM, and everything between <|im_start|>assistant and <|im_end|> is the LLM’s ‘RESULT.’
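For the example above, the logged prompt should look roughly like the sketch below. The exact formatting of the role markers may differ slightly depending on the model, and the assistant content is a placeholder:

```text
<|im_start|>system
Give a proof of the Pythagoras theorem.
<|im_end|>
<|im_start|>assistant
[the generated proof, i.e. the 'RESULT']
<|im_end|>
```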

However, there’s an issue with the agent in this state: if you type another message and click the send icon, the agent ignores the previous LLM response and simply generates another answer to “Give a proof of the Pythagoras theorem.”

This is not ideal in a real-life scenario, as you’d want your agent to carry on a coherent conversation. To do that, memory needs to be added to the agent as follows:

{{#system~}}

{{set(name='agent_', value=GetAgent())}}  
{{set(name='memory', value=GetMemory(count=40, maxLen=20000))}}  

{{memory}}  
{{agent_}}:

{{~/system}}

{{#assistant~}}

{{gen(name='RESULT', temperature=0.75)}}

{{~/assistant}}

Note the additional set actions needed to declare ‘agent_’ and ‘memory.’

Since the memory is now sent to the LLM each time a message is sent, there is no need to include the prompt “Give a proof of the Pythagoras theorem,” as it is already part of the Sandbox chat memory. Once the above code is added to the skill script, click Save and Publish again.

Type anything in the Sandbox chat, such as “Explain the steps in more detail,” and click the send icon. Again, give it a few seconds to generate a response.

Once the result appears, click the > icon and the Show Prompt button to see what prompt was sent to the LLM and what result was returned. Observe how the past agent and user conversations are added to the prompt, which is used to generate a response.
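As an illustration, the memory-enabled prompt might look roughly like the sketch below. The exact speaker labels and layout depend on how GetMemory renders the conversation, and the agent name and proof text here are hypothetical:

```text
<|im_start|>system
User: Give a proof of the Pythagoras theorem.
TestAgent: Consider a right triangle with legs a and b and hypotenuse c...
User: Explain the steps in more detail.
TestAgent:
<|im_end|>
```

Because the previous question and answer are included, the LLM can interpret “Explain the steps in more detail” as referring to the proof it already gave.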