Simplest AI Agent
We create the simplest possible AI agent by wrapping a code harness around the LLM call.
The harness does two things:
a. It implements the tool calls made by the LLM.
b. It runs a while loop that feeds each tool call's output back to the LLM until the LLM stops requesting tools.
a. Implement the tool calls made by the LLM
An agent is something that can act on its environment. An LLM call can only suggest what action to take, via a tool_call; the harness code turns that suggestion into a real action.
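For concreteness, here is roughly what such a suggestion looks like in the OpenAI-style chat format (the id and command values are illustrative). Note that the arguments arrive as a JSON string, so the harness must parse them before acting:

```javascript
// A hypothetical assistant message suggesting a tool call.
const assistantMessage = {
  role: 'assistant',
  content: null,
  tool_calls: [{
    id: 'call_abc123',   // hypothetical call id
    type: 'function',
    function: {
      name: 'run_command',
      arguments: '{"command":"ls -la"}'  // JSON-encoded string, not an object
    }
  }]
};

// The harness parses the arguments string before executing anything.
const args = JSON.parse(assistantMessage.tool_calls[0].function.arguments);
console.log(args.command); // "ls -la"
```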
Following is an example of the harness code that implements a 'run_command' tool. First, the tool schema, which tells the LLM what the tool does and what arguments it takes:
```javascript
const tool = {
  type: 'function',
  function: {
    name: 'run_command',
    description: 'Run a shell command and return its output.',
    parameters: {
      type: 'object',
      properties: {
        command: { type: 'string', description: 'The shell command to execute' }
      },
      required: ['command']
    }
  }
};
```

And the code that actually executes the command the LLM asked for:

```javascript
import { execSync } from 'child_process';

function executeTool(name, args) {
  switch (name) {
    case 'run_command': {
      try {
        return execSync(args.command, { encoding: 'utf-8' });
      } catch (e) {
        return `Error (exit ${e.status}): ${e.stderr || e.message}`;
      }
    }
    default:
      return `Unknown tool: ${name}`;
  }
}
```

b. While loop
Every request to the LLM can come back with a tool call. When it does, we execute the suggestion and send a follow-up request containing the output of that tool call. The LLM may then respond with another tool call, and this can go on until the LLM responds with a plain message instead of a tool call.
A while loop drives this cycle, making LLM calls incrementally until the goal is achieved.
```javascript
async function runAgent() {
  while (true) {
    const reply = await callAPI();
    messages.push(reply);

    // If there are no tool calls, we're done
    if (!reply.tool_calls || reply.tool_calls.length === 0) {
      return reply.content;
    }

    // Execute each tool call and feed its result back
    for (const tc of reply.tool_calls) {
      const args = JSON.parse(tc.function.arguments);
      const result = executeTool(tc.function.name, args);
      messages.push({
        role: 'tool',
        tool_call_id: tc.id,
        content: result
      });
    }
  }
}
```
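To see the loop terminate end to end without touching a real model or a real shell, here is a self-contained sketch where both callAPI and executeTool are stubs (the two scripted turns and the helper names are hypothetical, standing in for a real chat-completions endpoint and the real executor above):

```javascript
// Conversation history, starting with the user's request.
const messages = [{ role: 'user', content: 'What is in this directory?' }];

// Stubbed LLM: first turn suggests a tool call, second turn answers.
const scripted = [
  { role: 'assistant', content: null, tool_calls: [{
      id: 'call_1', type: 'function',
      function: { name: 'run_command', arguments: '{"command":"ls"}' } }] },
  { role: 'assistant', content: 'The directory contains README.md.' }
];
async function callAPI() { return scripted.shift(); }

// Stubbed executor: echoes instead of running a real shell command.
function executeTool(name, args) {
  return name === 'run_command' ? `ran: ${args.command}` : `Unknown tool: ${name}`;
}

async function runAgent() {
  while (true) {
    const reply = await callAPI();
    messages.push(reply);
    if (!reply.tool_calls || reply.tool_calls.length === 0) {
      return reply.content;
    }
    for (const tc of reply.tool_calls) {
      const result = executeTool(tc.function.name, JSON.parse(tc.function.arguments));
      messages.push({ role: 'tool', tool_call_id: tc.id, content: result });
    }
  }
}

runAgent().then(answer => console.log(answer));
```

After the run, the history holds four messages in order: user, assistant (tool call), tool result, and the final assistant answer — exactly the alternation the while loop exists to produce.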