AI Map Agent Plugin

The AI Map Agent plugin gives a language model conversational control over a TomTom map and its services. You bring the LLM and optionally a chat UI — the plugin handles the tool loop, agent state, and map coordination.

import { MapAgentChat } from './chat/MapAgentChat';
import { useMapAgent } from './useMapAgent';

export function App() {
    const chat = useMapAgent();

    return (
        <div id="app">
            <div id="sdk-map" />
            <div id="chat-panel">
                {chat.transport ? (
                    <MapAgentChat transport={chat.transport} />
                ) : (
                    <div id="chat-loading">Initializing assistant...</div>
                )}
            </div>
        </div>
    );
}

What it is

@tomtom-org/maps-sdk-plugin-ai-agent is a headless plugin built on the Vercel AI SDK. It wires a TomTomMap instance and TomTom services to a curated set of LLM-callable tools, and manages the multi-step tool loop via ToolLoopAgent.

You bring:

  • An LLM model instance — any Vercel AI SDK provider (OpenAI, Anthropic, Azure, Google, etc.)
  • A chat UI, or programmatic calls to the agent

The plugin provides:

  • 30+ built-in tools covering map control, search, routing, traffic, and utilities
  • An intent classifier that selects the right tool subset per message
  • Plugin state that persists places, routes, and waypoints across tool calls
  • A composable API to add, remove, or replace any tool

How it works

Architecture overview: your application supplies the chat UI and the LLM provider (both BYO); the AI Map Agent plugin sits between them and the Maps SDK for JavaScript, coordinating its intent classifier, tools, and state.

Every user message goes through two phases before a response is sent:

1. Intent classification

Before the model sees the full tool list, a lightweight pre-pass selects only the tools relevant to this turn. Fewer tools in the prompt means fewer wrong choices and lower token usage.

The classifier is built from each tool’s classificationPrompt — a one-liner that describes when to activate that tool. When you add a custom tool, it participates in classification automatically as long as it has a classificationPrompt.
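The selection step can be pictured as a filter over the registry. The sketch below is purely illustrative (the real classifier asks an LLM; here a keyword predicate stands in for it), but it shows why a tool without a classificationPrompt never reaches the classifier:

```typescript
// Hypothetical sketch: how a classifier narrows the tool list per turn.
// The real plugin consults an LLM; a keyword predicate stands in here.
type ToolMeta = { classificationPrompt?: string };

function selectTools(
    registry: Record<string, ToolMeta>,
    relevant: (prompt: string) => boolean,
): string[] {
    return Object.entries(registry)
        .filter(([, t]) => t.classificationPrompt && relevant(t.classificationPrompt))
        .map(([name]) => name);
}

const registry: Record<string, ToolMeta> = {
    locatePlace: { classificationPrompt: 'Find a place by name or address.' },
    showRoute: { classificationPrompt: 'Display a computed route on the map.' },
    internalOnly: {}, // no classificationPrompt, so never offered by the classifier
};

// A routing question keeps only route-related tools in the prompt.
const active = selectTools(registry, (p) => p.toLowerCase().includes('route'));
```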

The default classifier reuses your main model. For cost-sensitive deployments, swap it for a smaller model:

import { createMapAgent, createDefaultClassifier } from '@tomtom-org/maps-sdk-plugin-ai-agent';
import { openai } from '@ai-sdk/openai';

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    classifier: createDefaultClassifier({ model: openai('gpt-4o-mini') }),
});

Or disable classification entirely (all tools always visible):

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    classifier: false,
});

2. Tool loop

The model calls tools — geocode, search, route, show on map — in sequence or in parallel until it can form a complete answer. Tools never return raw GeoJSON to the model; they return compact summaries (counts, names, labels, status). Full geospatial data is kept in plugin state for follow-up use.

State persists across turns, so the user can say “add a stop after the first one” or “what were those restaurants again?” without repeating context. The agent retrieves prior results from its internal stores via dedicated recall tools.
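The "compact summary out, full data in state" split can be sketched as follows. This is an illustrative model of the pattern, not the plugin's actual showPlaces implementation; the Feature and State shapes are simplified stand-ins:

```typescript
// Hypothetical sketch: a tool keeps full results in shared state but
// returns only a compact summary to the model.
type Feature = { properties: { name: string } };
type State = { places: { currentPlaces: Feature[] } };

function showPlacesResult(features: Feature[], state: State) {
    state.places.currentPlaces = features; // full data kept for later tools
    return {
        count: features.length, // what the model sees: counts and names only
        names: features.map((f) => f.properties.name),
    };
}

const state: State = { places: { currentPlaces: [] } };
const summary = showPlacesResult(
    [{ properties: { name: 'Cafe A' } }, { properties: { name: 'Cafe B' } }],
    state,
);
```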

The tool registry

The plugin ships a DEFAULT_TOOLS registry — a flat record of named ToolEntry objects. Each entry defines what the tool does and how the agent should use it:

{
    description: string;           // Full tool description for the model
    inputSchema: ZodType;          // Validated input parameters
    outputSchema?: ZodType;        // Structured output (improves reliability)
    execute: (input, state) => Promise<any>;

    // Classifier metadata
    classificationPrompt?: string; // One-liner: when to activate this tool
    tags?: string[];               // Category labels (e.g. 'route', 'traffic')
    examples?: string[];           // Usage examples shown in the help tool
    examplePrompts?: string[];     // Natural-language prompts shown in the help tool
    relatedTools?: string[];       // Tools often used together
    dependsOn?: string[];          // Tools that must run first
}

Tools are grouped by what they do, not which API they wrap:

| Category        | Representative tools |
| --------------- | -------------------- |
| Location        | locatePlace, reverseGeocode, getCurrentLocation, getViewport |
| Routing         | setRouteLocations, setRouteParameters, addStopToRoute, removeStopFromRoute |
| Places          | discoverPlaces, getPOICategoryCodes, searchAlongRoute |
| Traffic         | getTrafficIncidents, getTrafficAreaAnalytics, queryTrafficAnalytics |
| Map display     | showPlaces, showRoute, showWaypoints, showTrafficAreaAnalytics, clearMap |
| Map control     | flyTo, zoomInOrOut, setMapStandardStyle, setLanguage, toggleTraffic*, togglePOIs |
| MapLibre direct | executeMaplibreCode, setLayoutProperties, setPaintProperties, getMapStyleLayers |
| State / recall  | recallPlaces, recallRoutes, recallRanges, getCurrentWaypoints |
| Utilities       | formatDistance, formatDuration, calculateBBox, getRouteProgress, help |
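The utility tools are pure helpers with no service calls. As an illustration of that category (a stand-in sketch, not the plugin's actual formatDistance implementation), a distance formatter might look like:

```typescript
// Illustrative utility-style tool body: deterministic formatting,
// no API request, no state access. Not the plugin's real implementation.
function formatDistance(meters: number): string {
    return meters >= 1000
        ? `${(meters / 1000).toFixed(1)} km`
        : `${Math.round(meters)} m`;
}
```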

The help tool

The built-in help tool is available to the model at runtime. When a user asks “what can you do?” or “show me routing examples”, the model calls help to surface tool descriptions, usage examples, and example prompts filtered by tag. Users discover capabilities through natural conversation — no documentation browsing required.
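Conceptually, help is a lookup over the same registry metadata described above. The sketch below is a simplified model of that behavior (the entry shapes and filtering are assumptions, not the plugin's source):

```typescript
// Hypothetical sketch: filter registry entries by tag and surface
// their example prompts, as a help-style tool might.
type Entry = { tags?: string[]; examplePrompts?: string[] };

function help(registry: Record<string, Entry>, tag?: string) {
    return Object.entries(registry)
        .filter(([, e]) => !tag || e.tags?.includes(tag))
        .map(([name, e]) => ({ name, examplePrompts: e.examplePrompts ?? [] }));
}

const registry = {
    setRouteLocations: { tags: ['route'], examplePrompts: ['Route from A to B'] },
    showPlaces: { tags: ['places'] },
};
const routingHelp = help(registry, 'route');
```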

Tools that read state vs. fetch data

A key design distinction: some tools fetch fresh data from TomTom services; others read from plugin state. The recall tools (recallPlaces, recallRoutes, recallRanges) exist because tool results are not retained in conversation history. If a user asks “what were those places?” an hour into a session, the model calls a recall tool — it does not guess or hallucinate prior results.
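A recall tool therefore needs no network access at all; it only reads the store. A minimal sketch of the idea (the state shape here is a simplified assumption):

```typescript
// Hypothetical sketch: a recall-style tool answers "what were those
// places?" from plugin state rather than from conversation history.
type PlacesState = { currentPlaces: { name: string }[] };

function recallPlaces(state: { places: PlacesState }) {
    return state.places.currentPlaces.map((p) => p.name);
}

const state = {
    places: { currentPlaces: [{ name: 'Cafe A' }, { name: 'Cafe B' }] },
};
const names = recallPlaces(state);
```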

Plugin state

The plugin maintains structured state across the full conversation, organized by feature area:

| State slice | What it holds |
| ----------- | ------------- |
| places      | Place search results, geocoded locations, PlacesModule access |
| routing     | Current routes, waypoints, route parameters, RoutingModule access |
| baseMap     | Viewport, style, language, MapLibre map instance |
| traffic     | Traffic flow/incident visibility, area analytics results |
| mapPOIs     | POI category visibility and filters |
| ranges      | Reachable range (isochrone) results and bounding boxes |

State is shared across all tools. A place resolved by locatePlace is immediately available to addStopToRoute without the model having to pass it along — tools read directly from shared stores. This prevents the hallucination-prone “read then pass” pattern.
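The handoff can be sketched as two tool bodies sharing one state object. This is an illustrative model of the pattern, with simplified types and a hard-coded geocode result standing in for the real service call:

```typescript
// Hypothetical sketch of the shared-state handoff: one tool resolves a
// place into state, a later tool reads it back without the model
// relaying coordinates through the conversation.
type LngLat = [number, number];
type AgentState = {
    places: { lastResolved?: { name: string; position: LngLat } };
    routing: { waypoints: LngLat[] };
};

const state: AgentState = { places: {}, routing: { waypoints: [] } };

// locatePlace-style tool: writes the resolved place into state
function locatePlaceTool(name: string, s: AgentState) {
    const position: LngLat = [13.405, 52.52]; // pretend geocode result (Berlin)
    s.places.lastResolved = { name, position };
    return { name, found: true }; // compact summary for the model
}

// addStopToRoute-style tool: reads the place straight from state
function addStopToRouteTool(s: AgentState) {
    const place = s.places.lastResolved;
    if (!place) return { added: false };
    s.routing.waypoints.push(place.position);
    return { added: true, stop: place.name };
}

locatePlaceTool('Berlin', state);
const result = addStopToRouteTool(state);
```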

You can inspect live state at any time from your application:

agent.state.routing.currentRoutes; // most recent route results
agent.state.places.currentPlaces; // most recent place results
agent.state.baseMap.mapLibreMap; // raw MapLibre Map instance

Installation

All plugins use @tomtom-org/maps-sdk as a peer dependency — ensure the SDK is installed first. See the Project Setup guide if needed.

npm install @tomtom-org/maps-sdk-plugin-ai-agent

Add a Vercel AI SDK provider for your chosen model:

npm install ai @ai-sdk/openai # or @ai-sdk/anthropic, @ai-sdk/azure, etc.

Quick start

import { TomTomConfig } from '@tomtom-org/maps-sdk/core';
import { TomTomMap } from '@tomtom-org/maps-sdk/map';
import { createMapAgent } from '@tomtom-org/maps-sdk-plugin-ai-agent';
import { openai } from '@ai-sdk/openai';
import { DirectChatTransport } from 'ai';
import { useChat } from 'ai/react';

TomTomConfig.instance.put({ apiKey: 'YOUR_API_KEY' });

const map = new TomTomMap({ mapLibre: { container: 'map' } });
const agent = createMapAgent(map, { model: openai('gpt-4o') });

// Wire to any Vercel AI SDK chat interface
const { messages, sendMessage } = useChat({
    transport: new DirectChatTransport({ agent }),
});

Once wired, users can interact naturally:

  • “Route from Berlin to Munich avoiding tolls”
  • “Show coffee shops near the map center”
  • “What traffic incidents are on this route?”
  • “Switch to dark mode”
  • “How far is the second stop?”

Customizing tools

Tools are resolved by merging the DEFAULT_TOOLS registry with your tools option. You can add, replace, or remove individual entries — the rest remain untouched.
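The merge semantics can be sketched as plain object spreading, with false acting as a removal marker. This is an illustrative model under those assumptions, not the plugin's source:

```typescript
// Hypothetical sketch of tool resolution: user entries override
// defaults by key, and `false` removes a default tool.
type ToolEntry = { description: string };
type ToolOverrides = Record<string, ToolEntry | false>;

function resolveTools(
    defaults: Record<string, ToolEntry>,
    overrides: ToolOverrides = {},
): Record<string, ToolEntry> {
    const merged: Record<string, ToolEntry> = { ...defaults };
    for (const [name, entry] of Object.entries(overrides)) {
        if (entry === false) delete merged[name]; // remove a default
        else merged[name] = entry;                // add or replace
    }
    return merged;
}

const defaults = {
    setLanguage: { description: 'built-in' },
    flyTo: { description: 'built-in' },
};
const tools = resolveTools(defaults, {
    setLanguage: false,                          // removed
    getFleetVehicle: { description: 'custom' },  // added
});
```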

Remove a default tool

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    tools: { setLanguage: false },
});

Replace a default tool

Pass a ToolEntry with the same key to swap the built-in implementation with yours:

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    tools: {
        getCurrentLocation: {
            description: 'Returns the user position from the company fleet system.',
            inputSchema: z.object({}),
            execute: async (_input, state) => {
                const position = await fleetApi.getDriverPosition();
                state.baseMap.userPosition = position;
                return { position };
            },
            classificationPrompt: 'Get the driver\'s current GPS position from the fleet system.',
            tags: ['location'],
        },
    },
});

Add a custom tool (BYOD blending)

Bring your own data sources alongside the built-in TomTom tools. Custom tools receive the same state object, so they can display results using SDK modules without any extra wiring:

import { z } from 'zod';
import type { ToolEntry, ToolState } from '@tomtom-org/maps-sdk-plugin-ai-agent';

const getFleetVehicle: ToolEntry = {
    description: 'Get the current map position of a fleet vehicle by ID and show it as a place marker.',
    classificationPrompt: 'Locate or display a fleet vehicle on the map by its ID.',
    inputSchema: z.object({ vehicleId: z.string().describe('The fleet vehicle identifier') }),
    execute: async ({ vehicleId }, state) => {
        const position = await fleetApi.getPosition(vehicleId);
        const placesModule = await state.places.getPlacesModule();
        placesModule.show({ type: 'FeatureCollection', features: [toFeature(position)] });
        return { vehicleId, position, status: 'shown' };
    },
    tags: ['location'],
    examplePrompts: ['Where is vehicle TT-001?', 'Show fleet vehicle on the map'],
};

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    tools: { getFleetVehicle },
});

The custom tool participates in intent classification automatically. If the user says “Where is vehicle TT-001?”, the classifier activates getFleetVehicle alongside any other tools needed to answer the question.

Start from a blank slate

Set includeDefaultTools: false to begin with no built-in tools. Useful when building a narrowly scoped agent that should not stray into general map control:

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    includeDefaultTools: false,
    tools: {
        getFleetVehicle,
        locatePlace, // selectively re-add built-in tools you do want
    },
});

Extending the system prompt

Append to the built-in prompt

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    systemPromptSuffix: 'Always respond in Spanish. Never show more than 5 places at once.',
});

Full replacement

Import BASE_SYSTEM_PROMPT as a baseline to extend rather than starting from scratch. The base prompt contains coordinate order rules, tool-usage guidance, and response formatting instructions that are important for reliability:

import { createMapAgent, BASE_SYSTEM_PROMPT } from '@tomtom-org/maps-sdk-plugin-ai-agent';

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    systemPrompt: BASE_SYSTEM_PROMPT + `
ADDITIONAL INSTRUCTIONS:
- This is a logistics application. Prioritize route efficiency over scenery.
- Always show estimated arrival times in responses.
- When a vehicle ID is mentioned, call getFleetVehicle before anything else.
`,
});

Per-tool classification prompts

If your custom tool is being activated too broadly or not activated when it should be, tune its classificationPrompt. This is the one-liner the classifier uses to decide whether a user message needs this tool:

// Too broad — activates for any location question
classificationPrompt: 'Get fleet vehicle data.'

// Precise — activates only for explicit vehicle ID references
classificationPrompt: 'Locate a fleet vehicle by its ID (e.g. "TT-001"); not for general location queries.'

Custom state slices

If your custom tools need to share data across calls, extend ToolState with your own slice. Implement reset() to participate in agent.destroy() cleanup:

import type { ToolState, StateSlice } from '@tomtom-org/maps-sdk-plugin-ai-agent';

class FleetState implements StateSlice {
    vehicles: Map<string, VehiclePosition> = new Map();
    reset() { this.vehicles.clear(); }
}

interface MyState extends ToolState {
    fleet: FleetState;
}

const agent = createMapAgent<MyState>(map, {
    model: openai('gpt-4o'),
    state: { fleet: new FleetState() },
    tools: { getFleetVehicle: myFleetTool },
});

// Access live state from your application
agent.state.fleet.vehicles;
agent.state.routing.currentRoutes;

Observing classification

Use onClassify to inspect which tools were selected for each turn — useful for debugging classification behaviour or building a dev panel:

const agent = createMapAgent(map, {
    model: openai('gpt-4o'),
    onClassify: (result) => {
        if (result) {
            console.log('Selected tools:', result.activeToolNames);
            console.log('Classification took:', result.timeMs, 'ms');
        }
    },
});

Cleanup

agent.destroy(); // resets all built-in and custom state slices that implement reset()
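The cleanup contract can be pictured as a walk over the state slices. This sketch is a simplified model of that behavior (the destroyState helper and optional reset are assumptions for illustration; the real contract is defined by the plugin's StateSlice type):

```typescript
// Hypothetical sketch: destroy-style cleanup calls reset() on every
// state slice that implements it.
interface StateSlice {
    reset?(): void;
}

function destroyState(state: Record<string, StateSlice>) {
    for (const slice of Object.values(state)) slice.reset?.();
}

const fleet = {
    vehicles: new Map([['TT-001', { lat: 52.5, lng: 13.4 }]]),
    reset() { this.vehicles.clear(); },
};
const state = { fleet };

destroyState(state); // fleet.vehicles is emptied
```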

API Reference

Map Agent Plugin API Reference