
LLM Call

LLMs are models that take text input and produce sensible text output. At their core, that is all they do.

The LLM input typically contains three things: a. a system prompt, b. a user prompt, and c. a tool list.

The following is a simplified example of an input message.

message = [{"role": "system", "content": "You are a helpful assistant"},
           {"role": "user",   "content": "Write an essay on Earth in 100 words"}]

tools = [{
      "name": "read_file",
      "description": "Read the contents of a file at the given path.",
      "parameters": {
        "type": "object",
        "properties": {
          "path": {"type": "string", "description": "Absolute or relative file path"}
        },
        "required": ["path"]
      }
}]
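Putting the pieces together, the request body sent to the model is plain JSON. A minimal sketch in Python (field names vary by provider; this follows the common chat-message convention shown above):

```python
import json

# System and user prompts, following the chat-message convention above.
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Write an essay on Earth in 100 words"},
]

# One tool the model is allowed to call.
tools = [
    {
        "name": "read_file",
        "description": "Read the contents of a file at the given path.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Absolute or relative file path"}
            },
            "required": ["path"],
        },
    }
]

# The request body is just serialized JSON.
payload = json.dumps({"messages": messages, "tools": tools}, indent=2)
print(payload)
```

The payload above is what an HTTP client would POST to the model's endpoint; the response comes back in the same JSON style.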

The LLM output contains either a. message content or b. a tool call. The following is an example.

message = {"role": "assistant",
           "content": "Hello, how can I help you today?"}
tool_call = {"name": "read_file",
             "arguments": {"path": "path-to-current-folder"}}

This simple element is at the core of every AI application, from a simple chatbot like ChatGPT to a coding CLI like Claude Code.