Problem: ChatGPT Apps involve three actors (ChatGPT, your server, your widget) communicating in complex patterns. Understanding this flow is essential.
Solution: Skybridge provides clear abstractions for each communication pattern.

The Three Actors

  1. ChatGPT (Host): The conversational interface where users type messages and the model responds
  2. Your MCP Server: The backend that exposes tools and business logic
  3. Your Widget (Guest): The React component rendered in an iframe inside ChatGPT

Key Terms

Tool responses contain three fields:
  • content: Text array shown to the model in the conversation
  • structuredContent: Typed JSON data passed to your widget
  • _meta: Optional metadata for both
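
For example, a response carrying all three fields might look like this (the payload shapes are illustrative):
return {
  // content: read by the model and shown in the conversation
  content: [{ type: "text", text: "Found 3 results" }],
  // structuredContent: typed data delivered to your widget
  structuredContent: { results: [1, 2, 3] },
  // _meta: optional metadata available to both
  _meta: { cached: false }
};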

Tools vs Widgets

Before diving into data flow, understand the difference:
          registerTool()                        registerWidget()
Has UI?   No                                    Yes
Returns   content only (text for the model)     structuredContent (data for the widget) + content
Renders   Nothing                               A React component
Use case  Background operations, calculations   Interactive UI
// Tool: No UI, returns text
server.registerTool("calculate", { /* ... */ }, async (args) => {
  return { content: [{ type: "text", text: "Result: 42" }] };
});

// Widget: Has UI, returns structured data
server.registerWidget("chart", { /* ... */ }, { /* ... */ }, async (args) => {
  return {
    content: [{ type: "text", text: "Displaying chart" }],
    structuredContent: { data: [1, 2, 3], labels: ["A", "B", "C"] }
  };
});

Data Flow Patterns

1. Tool → Widget (Initial Hydration)

When ChatGPT calls one of your widgets, your server returns structuredContent to hydrate the React component.

Server:
server.registerWidget("show_flights", {}, {
  inputSchema: { destination: z.string() },
}, async ({ destination }) => {
  const flights = await searchFlights(destination);

  return {
    content: [{ type: "text", text: `Found ${flights.length} flights` }],
    structuredContent: { flights } // This goes to the widget
  };
});
Widget:
import { useToolInfo } from "skybridge/web";

export function FlightWidget() {
  const toolInfo = useToolInfo<{ flights: Flight[] }>();

  if (toolInfo.isSuccess) {
    const { flights } = toolInfo.output.structuredContent;

    return (
      <ul>
        {flights.map(flight => <li key={flight.id}>{flight.name}</li>)}
      </ul>
    );
  }

  return <div>Loading...</div>;
}
Use useToolInfo for the initial data that renders your widget. This data is set once when the widget loads.
useToolInfo is read-only: useToolInfo provides the initial hydration data and should not be used as a mutable data store. For persistent widget state that survives re-renders, use useWidgetState.
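
For example, selection state that should persist across re-renders could live in useWidgetState rather than in the hydration data. This is a minimal sketch that assumes the hook is exported from skybridge/web like the other hooks and has a useState-like [value, setter] shape; see the useWidgetState reference for the exact API:
import { useWidgetState } from "skybridge/web";

export function FlightWidget() {
  // Persistent widget state that survives re-renders.
  // Assumes a useState-like [value, setter] return shape.
  const [state, setState] = useWidgetState({ selectedFlightId: null as string | null });

  return (
    <button onClick={() => setState({ selectedFlightId: "AF123" })}>
      Select AF123
    </button>
  );
}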

2. Widget → Server (Tool Calls)

Widgets can trigger additional tool calls in response to user actions:
import { useCallTool } from "skybridge/web";

export function FlightWidget() {
  const { callTool, isPending, data } = useCallTool("get_flight_details");

  const handleViewDetails = (flightId: string) => {
    callTool({ flightId });
  };

  return (
    <button onClick={() => handleViewDetails("AF123")} disabled={isPending}>
      {isPending ? "Loading..." : "View Details"}
    </button>
  );
}
Use useCallTool when the user performs an action that requires fetching more data.
Don’t call tools on mount: Never wrap callTool in a useEffect to fetch data on mount. Pass initial data through structuredContent instead. Why? Tool calls add latency and model round-trips. You control what the widget initially receives through structuredContent, so leverage it.

3. Widget → Model (Context Sync)

Your widget needs to communicate its state back to the model. Use the data-llm attribute to declaratively describe what the user sees. See LLM Context Sync.
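
For example, a widget can annotate the element it renders with a plain-language description of what the user currently sees. The attribute value below is illustrative; the exact conventions are covered in LLM Context Sync:
export function FlightWidget({ flights }: { flights: Flight[] }) {
  return (
    // data-llm gives the model passive context about the current UI state
    <ul data-llm={`Showing ${flights.length} flights, sorted by price`}>
      {flights.map(flight => <li key={flight.id}>{flight.name}</li>)}
    </ul>
  );
}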

4. Widget → Chat (Follow-up Messages)

Widgets can send messages back into the conversation:
import { useSendFollowUpMessage } from "skybridge/web";

export function FlightWidget() {
  const sendMessage = useSendFollowUpMessage();

  const handleBookFlight = (flight: Flight) => {
    sendMessage({
      prompt: `I'd like to book the ${flight.name} flight. What payment methods do you accept?`
    });
  };

  return <button onClick={() => handleBookFlight(selectedFlight)}>Book Now</button>;
}
This creates a continuous loop: the widget can ask the model for help, and the model responds naturally in the conversation.
When to use sendFollowUpMessage vs data-llm
  • data-llm: For passive context—the model reads it when the user asks a question
  • sendFollowUpMessage: For active prompts—triggers a new model response immediately
If the model’s response triggers another widget, ChatGPT renders a new widget instance (not nested inside the current one).

Response Fields Explained

Tool responses have three fields:
Field              Purpose                          Consumed by
content            Text description for the model   ChatGPT (shown in conversation)
structuredContent  Typed data for the widget        Widget (useToolInfo, useCallTool)
_meta              Response metadata                Both (optional)
return {
  content: [{ type: "text", text: "Found 3 flights to Paris" }],
  structuredContent: {
    flights: [{ id: "AF123", name: "Air France 123" }, /* ... */]
  },
  _meta: {
    searchId: "abc123",
    cached: false
  }
};

When to Use What

Need                       Use                  Why
Initial widget data        useToolInfo          Data passed at hydration, no extra calls
User-triggered fetch       useCallTool          Model sees the result, can answer questions
Silent background fetch    Direct API call      Model doesn’t need to know
Describe current UI state  data-llm             Passive context for user questions
Trigger model response     sendFollowUpMessage  Active prompt, immediate reply
Persist widget state       useWidgetState       Survives re-renders
Why useCallTool instead of a direct API call? useCallTool adds the result to the conversation context. This means:
  • User asks “What’s the baggage limit?” → Model can answer using the flight details from context
  • User says “Compare this with the previous one” → Model has both results in context
Direct API calls don’t add to context. Use them only for data the model doesn’t need (analytics, logging, UI-only updates).
Direct fetch requires CSP configuration: If you use fetch() directly from widgets, you need to configure Content Security Policy (CSP) headers. ChatGPT blocks requests to domains not explicitly allowed. Add allowed domains to connectDomains in your widget configuration. See registerWidget.
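
For example, a fire-and-forget analytics call that the model never needs to see might look like this (the endpoint is hypothetical, and its domain would need to be listed in connectDomains):
export function trackWidgetEvent(event: string, payload: Record<string, unknown>) {
  // Direct fetch: the result never enters the conversation context.
  // The target domain must be allowed via connectDomains, or CSP will block the request.
  void fetch("https://analytics.example.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event, payload }),
  }).catch(() => {
    // Analytics failures should never break the widget UI
  });
}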

The Communication Loop

  1. ChatGPT calls your tool → Server responds with structuredContent
  2. Widget hydrates with useToolInfo
  3. User interacts → Widget updates data-llm → Model sees the context
  4. User triggers action → Widget calls useCallTool → Server responds
  5. Widget sends a follow-up via useSendFollowUpMessage → ChatGPT replies
This loop creates a seamless experience where the conversation, the UI, and your backend work together.