
# Creating and Managing AI Agents on GTWY

## Overview

This document provides a step-by-step guide to creating, configuring, testing, and publishing AI agents on the GTWY platform. It also covers monitoring agent behavior and integrating agents into applications using GTWY’s built-in tools and APIs.


## 1. Choose Agent Type

When creating a new agent, first select the appropriate agent type based on your use case:

  • API Agent – For backend services, workflows, and system integrations

  • Chatbot Agent – For user-facing conversational interfaces (websites, apps, dashboards)


## 2. Create a New Agent

After selecting the agent type, create the agent in GTWY.
This will open the full configuration interface where you can define the agent’s behavior, models, tools, and integrations.


## 3. Prompt Configuration

### 3.1 Pre-Function (Optional)

You may define a function that executes before the main prompt.
This can be used for tasks such as:

  • Input normalization

  • Context enrichment

  • Pre-processing user queries
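
How a pre-function is registered is platform-specific, but its logic can be sketched as a plain function that cleans up user input before the main prompt runs. The function name and normalization rules below are illustrative assumptions, not GTWY's actual pre-function API:

```python
def pre_function(raw_query: str) -> str:
    """Illustrative pre-function: normalize user input before the main prompt.

    This is a generic sketch, not a GTWY API -- the platform defines how
    a pre-function is actually registered and invoked.
    """
    # Input normalization: trim and collapse runs of whitespace
    query = " ".join(raw_query.split())
    # Pre-processing: strip a common greeting prefix so the prompt sees the task
    for prefix in ("hi,", "hello,", "hey,"):
        if query.lower().startswith(prefix):
            query = query[len(prefix):].lstrip()
    return query

print(pre_function("  Hello,   what is   my order status? "))
```

A pre-function like this keeps the main prompt simpler, because it never has to account for messy or decorated input.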

### 3.2 Prompt Structure

Define the main prompt using the following structure:

  • Role – Specifies who the agent is

  • Objective / Task – Defines the primary responsibility of the agent

  • Instructions – Provides detailed guidance on how the agent should behave and respond

Using this structure ensures consistent and predictable outputs.
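
The Role / Objective / Instructions structure can be assembled mechanically. The helper and the example content below are illustrative, not a GTWY template:

```python
def build_prompt(role: str, objective: str, instructions: list[str]) -> str:
    """Assemble a prompt following the Role / Objective / Instructions structure."""
    lines = [
        f"Role: {role}",
        f"Objective: {objective}",
        "Instructions:",
    ]
    # Each instruction becomes one explicit, checkable rule
    lines += [f"- {step}" for step in instructions]
    return "\n".join(lines)

prompt = build_prompt(
    role="You are a customer-support agent for an online store.",
    objective="Resolve order and shipping questions accurately.",
    instructions=[
        "Answer only from verified order data.",
        "Escalate refund requests to a human agent.",
    ],
)
print(prompt)
```

Keeping the three parts separate makes it easy to revise instructions later without touching the agent's role or objective.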


## 4. Response Type

Configure how the agent should return responses:

  • Structured response (e.g., JSON or schema-based)

  • Plain text

  • Default response format (if no strict structure is required)
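
GTWY's exact schema syntax is not shown here; as an illustration only, a JSON-Schema-style definition for a structured response, and the plain JSON an agent constrained by it would return, might look like this (field names are hypothetical):

```python
import json

# Illustrative schema-based response format (not GTWY's exact schema syntax):
# the agent is constrained to return these fields with these types.
response_schema = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string"},
        "status": {"type": "string", "enum": ["processing", "shipped", "delivered"]},
        "eta_days": {"type": "integer"},
    },
    "required": ["order_id", "status"],
}

# A structured response is then plain JSON that downstream code can parse:
raw = '{"order_id": "A-1042", "status": "shipped", "eta_days": 2}'
parsed = json.loads(raw)
print(parsed["status"])
```

Structured responses are the right choice whenever another program, rather than a person, consumes the agent's output.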


## 5. Model Configuration

### 5.1 Service Provider

Select the AI service provider (e.g., OpenAI, Anthropic, or Google).

### 5.2 Model Selection

Choose the model appropriate for your use case.

### 5.3 API Key

Provide the API key for the selected provider.

### 5.4 Parameters

Configure model-level parameters, including:

  • Token limits

  • Tool enablement

  • Parallel tool calls (if supported by the provider)


## 6. Fallback Model Configuration

Configure fallback models to ensure reliability in case the primary model fails.

  • Define one or more fallback models

  • Configure multiple API keys if required

  • The fallback model is invoked automatically upon failure of the primary model
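
GTWY performs this failover automatically; the behavior it automates can be sketched generically, with plain callables standing in for the configured primary and fallback models:

```python
def call_with_fallback(query, models):
    """Try each configured model in order; return the first successful response.

    `models` is an ordered list of callables standing in for the primary and
    fallback models -- a generic sketch of the behavior GTWY automates.
    """
    last_error = None
    for model in models:
        try:
            return model(query)
        except Exception as err:  # a real setup would catch provider-specific errors
            last_error = err
    raise RuntimeError("all models failed") from last_error

def primary(query):
    raise TimeoutError("primary provider unavailable")

def fallback(query):
    return f"fallback answer to: {query}"

print(call_with_fallback("hello", [primary, fallback]))
```

The key property is ordering: the fallback is only consulted after the primary model has actually failed, so normal traffic always uses the primary model.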


## 7. Connectors (Tools Integration)

Add and configure tools that the agent can use, such as:

  • Web search

  • Image-related tools

  • External APIs

  • ViaSocket integrations

Enable only the tools required for your use case to maintain security and performance.
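
The "enable only what you need" principle amounts to an allowlist over the available connectors. The tool names below are examples, not GTWY identifiers:

```python
# Illustrative tool allowlist: expose to the agent only the connectors the
# use case needs (tool names here are examples, not GTWY identifiers).
available_tools = {
    "web_search": "search the public web",
    "image_gen": "generate images",
    "crm_api": "look up customer records",
}

enabled = ["crm_api"]  # a support agent needs CRM access, nothing else
agent_tools = {name: desc for name, desc in available_tools.items() if name in enabled}
print(sorted(agent_tools))
```

A smaller tool surface reduces both the attack surface and the chance the model calls an irrelevant tool.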


## 8. Knowledge and Agent Connections

### 8.1 Knowledge Base

Attach documents and data sources that the LLM can search, using your own content as the source of truth for grounded, context-aware responses.
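
GTWY handles knowledge-base search server-side (typically with embeddings); a minimal keyword-overlap sketch nonetheless shows what "grounding in attached documents" means. Everything here is illustrative:

```python
def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Minimal keyword-overlap retrieval sketch.

    A real knowledge base would use embeddings and chunking; this only
    illustrates selecting the attached document most relevant to a query.
    """
    terms = set(query.lower().split())
    # Rank documents by how many query words they share
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    "Refund policy: refunds are issued within 14 days of purchase.",
    "Shipping policy: standard delivery takes 3-5 business days.",
]
print(retrieve("how long does shipping delivery take", docs))
```

The retrieved passage is then injected into the prompt, so the model answers from your content rather than from its general training data.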

### 8.2 Agent Connections

Connect published agents to reuse their capabilities within other agents.


## 9. Memory Configuration

Define what user context and conversation data should be retained by the agent, such as:

  • User preferences

  • Relevant historical messages

  • Ongoing task context

This enables continuity and more personalized responses.
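
GTWY manages retention itself; as a sketch only, the configured memory for one user might hold something like the structure below (class and field names are assumptions, not GTWY's data model):

```python
class AgentMemory:
    """Sketch of per-user memory: preferences, recent messages, task context.

    Illustrative only -- GTWY defines what is actually retained and how.
    """
    def __init__(self, max_messages: int = 5):
        self.preferences = {}     # e.g. user preferences
        self.messages = []        # relevant historical messages
        self.task_context = {}    # ongoing task context
        self.max_messages = max_messages

    def remember_message(self, text: str) -> None:
        # Keep only the most recent messages to bound context size
        self.messages.append(text)
        self.messages = self.messages[-self.max_messages:]

memory = AgentMemory(max_messages=2)
memory.preferences["language"] = "en"
for msg in ["hi", "track order A-1042", "thanks"]:
    memory.remember_message(msg)
print(memory.messages)
```

Bounding the retained history is what keeps memory useful without letting old, irrelevant context crowd out the current task.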


## 10. Advanced Settings

  • Orchestral Agent – Coordinate workflows across tools and sub-agents

  • Tone – Define the response style (professional, friendly, technical, etc.)

  • Response Time Preference – Optimize for speed or depth of response

  • Problem Bar – Limit how deeply the agent searches external or web sources

  • Maximum Function Call Limit – Set how many tool calls are allowed per request
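
The Maximum Function Call Limit setting behaves like a per-request cap on tool execution. A generic sketch of that behavior (not GTWY's implementation):

```python
def run_agent_turn(tools_needed: list[str], max_function_calls: int) -> list[str]:
    """Sketch of a per-request tool-call cap (Maximum Function Call Limit).

    Generic illustration: once the cap is reached, no further tools run and
    the agent must answer with the results it already has.
    """
    executed = []
    for tool in tools_needed:
        if len(executed) >= max_function_calls:
            break  # cap reached; remaining tool calls are skipped
        executed.append(tool)
    return executed

print(run_agent_turn(["search", "lookup", "summarize"], max_function_calls=2))
```

Such a cap bounds latency and cost per request and prevents runaway tool-call loops.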


## 11. Testing with Playground

Before publishing, validate your agent using the Playground:

  • Submit test queries

  • Verify prompt behavior

  • Test all configured parameters

  • Validate tool calls and fallback behavior

  • Ensure response formats meet expectations

This step is strongly recommended before production use.


## 12. Monitoring and History

Use the History tab to monitor agent interactions:

  • Review user queries

  • Review model responses

  • Inspect tool calls and failures

This enables debugging, performance analysis, and continuous improvement of agent behavior.


## 13. Summary and Publishing

Before publishing your agent:

  • Add a concise summary describing the agent’s purpose

  • Review all configurations

  • Publish the agent

Once published, the agent becomes available for integration and production use.


## 14. Integration Guide

Use the GTWY Integration Guide to:

  • Embed chatbots into applications

  • Integrate API agents into backend systems

  • Follow step-by-step instructions for frontend and backend integration
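
The exact endpoint and payload shape come from the GTWY Integration Guide itself; as a hedged sketch, a backend call to a published API agent is generally an authenticated HTTP POST. The URL, header names, and body fields below are placeholders, not GTWY's real API:

```python
import json

# Placeholder values -- substitute the real endpoint, key, and agent ID
# from the GTWY Integration Guide. None of these identifiers are real.
GTWY_API_URL = "https://example.invalid/v1/agents/{agent_id}/invoke"
API_KEY = "YOUR_GTWY_API_KEY"

def build_request(agent_id: str, user_query: str) -> tuple[str, dict, str]:
    """Assemble URL, headers, and JSON body for an API-agent call (sketch only)."""
    url = GTWY_API_URL.format(agent_id=agent_id)
    headers = {
        "Authorization": f"Bearer {API_KEY}",   # header name is an assumption
        "Content-Type": "application/json",
    }
    body = json.dumps({"input": user_query})    # field name is an assumption
    return url, headers, body

url, headers, body = build_request("support-agent", "Where is order A-1042?")
print(url)
```

An actual integration would send this with your HTTP client of choice and parse the structured response configured in section 4.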


## Best Practices

  • Test agents thoroughly in the Playground before publishing

  • Use fallback models for production reliability

  • Attach only relevant tools and documents

  • Review History logs regularly to refine prompts and behavior

  • Use structured response formats for automation workflows

