The data-llm attribute allows your widget to communicate its current UI state back to the ChatGPT model. This creates a feedback loop where the model can understand what the user is seeing and respond contextually to their questions.

Basic usage

Static string

export function StatusWidget() {
  return (
    <div data-llm="User is on the home page">
      <h1>Welcome Home</h1>
    </div>
  );
}

Dynamic expression

import { useState } from "react";

type Flight = { id: string; name: string };

export function FlightWidget() {
  const [selectedFlight, setSelectedFlight] = useState<Flight | null>(null);

  return (
    <div data-llm={
      selectedFlight
        ? `User is viewing ${selectedFlight.name} (${selectedFlight.id})`
        : "User is browsing the flight list"
    }>
      {selectedFlight ? (
        <FlightDetails flight={selectedFlight} />
      ) : (
        <FlightList onSelect={setSelectedFlight} />
      )}
    </div>
  );
}

Why it exists

ChatGPT Apps introduce a unique challenge: the model needs to understand both the conversation history and what the user is currently viewing in your widget. Without data-llm, the model only knows about the initial tool call that rendered your widget. As users interact with your UI, the model remains unaware of state changes unless you explicitly sync them.

Example scenario:
  1. User asks “Show me flights to Paris”
  2. Your widget displays 10 flights
  3. User clicks on “Flight AF123” to view details
  4. User asks “What’s the baggage policy?”
Without data-llm, the model doesn’t know which flight the user selected. With data-llm, your widget can sync this context, allowing the model to answer accurately.

How it works

The data-llm attribute is syntactic sugar that gets transformed at build time by Skybridge’s Babel plugin.

What you write:
<div data-llm="User is viewing Flight AF123">
  {/* Your flight details UI */}
</div>
What it compiles to:
import { DataLLM } from "skybridge/web";

<DataLLM content="User is viewing Flight AF123">
  <div>
    {/* Your flight details UI */}
  </div>
</DataLLM>
The DataLLM component:
  1. Registers its content in a global state tree
  2. Automatically syncs with window.openai.setWidgetState
  3. Only shares currently rendered content (removed components are cleaned up)
  4. Supports nested hierarchies for complex UIs
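
Conceptually, the wrapper is small. Here is a minimal sketch of what such a component could look like, assuming the setNode, removeNode, and formatTree helpers described under Technical details below (the real Skybridge implementation may differ, and nesting/parentId resolution is omitted):

import { useEffect, useId, type ReactNode } from "react";

// Minimal sketch of a DataLLM-style wrapper. setNode, removeNode, and
// formatTree stand in for the internal helpers described in "Technical
// details"; parentId resolution for nested hierarchies is omitted.
function DataLLMSketch({ content, children }: { content: string; children: ReactNode }) {
  const id = useId();

  useEffect(() => {
    // Register (or update) this node's description in the global tree,
    // then re-sync the formatted context with ChatGPT
    setNode({ id, parentId: null, content });
    window.openai.setWidgetState({
      ...window.openai.widgetState,
      __widget_context: formatTree(),
    });

    return () => {
      // On unmount, remove the node so stale context is no longer shared
      removeNode(id);
      window.openai.setWidgetState({
        ...window.openai.widgetState,
        __widget_context: formatTree(),
      });
    };
  }, [id, content]);

  return <>{children}</>;
}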

Use cases and examples

E-commerce: Product browsing

import { useState } from "react";

type Product = {
  id: string;
  name: string;
  price: number;
  category: string;
};

export function ProductCatalogWidget() {
  const [selectedProduct, setSelectedProduct] = useState<Product | null>(null);
  const [cart, setCart] = useState<Product[]>([]);

  return (
    <div data-llm={
      selectedProduct
        ? `User is viewing ${selectedProduct.name} ($${selectedProduct.price}).
           Cart has ${cart.length} items.`
        : `User is browsing products. Cart has ${cart.length} items.`
    }>
      {selectedProduct ? (
        <ProductDetails
          product={selectedProduct}
          onAddToCart={() => setCart([...cart, selectedProduct])}
          onBack={() => setSelectedProduct(null)}
        />
      ) : (
        <ProductGrid onSelect={setSelectedProduct} />
      )}
    </div>
  );
}
Now when the user asks “Can I get a discount?”, the model knows which product they’re viewing.

Multi-step wizard

import { useState } from "react";

export function BookingWizard() {
  const [step, setStep] = useState<"dates" | "rooms" | "payment">("dates");
  const [bookingData, setBookingData] = useState<{
    checkIn: string | null;
    checkOut: string | null;
    roomType: string | null;
  }>({
    checkIn: null,
    checkOut: null,
    roomType: null,
  });

  const stepDescriptions = {
    dates: "User is selecting check-in and check-out dates",
    rooms: `User is selecting a room type. Dates: ${bookingData.checkIn} to ${bookingData.checkOut}`,
    payment: `User is on the payment page. Room: ${bookingData.roomType}, Dates: ${bookingData.checkIn} to ${bookingData.checkOut}`,
  };

  return (
    <div data-llm={stepDescriptions[step]}>
      {step === "dates" && <DateSelector onNext={(dates) => {
        setBookingData({ ...bookingData, ...dates });
        setStep("rooms");
      }} />}
      {step === "rooms" && <RoomSelector onNext={(room) => {
        setBookingData({ ...bookingData, roomType: room });
        setStep("payment");
      }} />}
      {step === "payment" && <PaymentForm bookingData={bookingData} />}
    </div>
  );
}
The model now understands which step the user is on and can provide contextual guidance.

Interactive data visualization

import { useState } from "react";

export function AnalyticsWidget() {
  const [selectedMetric, setSelectedMetric] = useState<string>("revenue");
  const [dateRange, setDateRange] = useState<string>("last-7-days");
  const [hoveredDataPoint, setHoveredDataPoint] = useState<string | null>(null);

  return (
    <div data-llm={
      hoveredDataPoint
        ? `User is hovering over ${hoveredDataPoint} in the ${selectedMetric} chart (${dateRange})`
        : `User is viewing ${selectedMetric} chart for ${dateRange}`
    }>
      <MetricSelector value={selectedMetric} onChange={setSelectedMetric} />
      <DateRangeSelector value={dateRange} onChange={setDateRange} />
      <Chart
        metric={selectedMetric}
        dateRange={dateRange}
        onHover={setHoveredDataPoint}
      />
    </div>
  );
}
Now when the user asks “Why did it spike?”, the model knows exactly which data point they’re referencing.

Search and filter interfaces

import { useState } from "react";

export function SearchWidget() {
  const [query, setQuery] = useState("");
  const [filters, setFilters] = useState({ category: "all", priceRange: "any" });
  const [results, setResults] = useState<SearchResult[]>([]);

  return (
    <div data-llm={
      query
        ? `User searched for "${query}" with filters: ${JSON.stringify(filters)}.
           Found ${results.length} results.`
        : "User hasn't searched yet"
    }>
      <SearchInput value={query} onChange={setQuery} />
      <Filters values={filters} onChange={setFilters} />
      <ResultsList results={results} />
    </div>
  );
}
The model can now help refine searches based on current query and filters.

Advanced patterns

Nested data-llm attributes

You can nest data-llm attributes to create hierarchical context:
import { useState } from "react";

export function DashboardWidget() {
  const [activeSection, setActiveSection] = useState<"overview" | "details">("overview");
  const [selectedItem, setSelectedItem] = useState<string | null>(null);

  return (
    <div data-llm={`User is on the ${activeSection} section`}>
      {activeSection === "overview" && (
        <div data-llm="Viewing 10 summary cards">
          <SummaryCards onSelectItem={setSelectedItem} />
        </div>
      )}

      {activeSection === "details" && selectedItem && (
        <div data-llm={`Viewing detailed information for ${selectedItem}`}>
          <DetailView item={selectedItem} />
        </div>
      )}
    </div>
  );
}
The model receives a hierarchical description:
- User is on the details section
  - Viewing detailed information for Item-123
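
Under the hood, each nested attribute compiles to its own DataLLM wrapper, so the hierarchy comes from component nesting. A sketch of the compiled output for the details branch, following the transformation shown in “How it works” (the exact generated code may differ):

import { DataLLM } from "skybridge/web";

<DataLLM content={`User is on the ${activeSection} section`}>
  <div>
    {activeSection === "details" && selectedItem && (
      <DataLLM content={`Viewing detailed information for ${selectedItem}`}>
        <div>
          <DetailView item={selectedItem} />
        </div>
      </DataLLM>
    )}
  </div>
</DataLLM>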

Conditional context

Tailor the data-llm value to whatever state is currently meaningful to share:
import { useState } from "react";

export function NotificationWidget() {
  const [notifications, setNotifications] = useState<Notification[]>([]);
  const [selectedNotification, setSelectedNotification] = useState<Notification | null>(null);

  return (
    <div data-llm={
      notifications.length === 0
        ? "User has no notifications"
        : selectedNotification
        ? `User is reading notification: ${selectedNotification.title}`
        : `User has ${notifications.length} unread notifications`
    }>
      {/* Your UI */}
    </div>
  );
}

Rich context descriptions

Provide rich context that helps the model understand user intent:
import { useState } from "react";

export function FormWidget() {
  const [formData, setFormData] = useState({
    name: "",
    email: "",
    preferences: [],
  });
  const [errors, setErrors] = useState<Record<string, string>>({});

  return (
    <div data-llm={
      `User is filling out a registration form.
       Completed fields: ${Object.keys(formData).filter(k => formData[k]).join(", ")}.
       ${Object.keys(errors).length > 0
         ? `Has validation errors in: ${Object.keys(errors).join(", ")}`
         : "No validation errors"
       }`
    }>
      <Form data={formData} errors={errors} onChange={setFormData} />
    </div>
  );
}

Best practices

Do: Describe what the user sees

// Good: Describes the current UI state
<div data-llm="User is viewing 3 available time slots for Dr. Smith">
// Bad: Describes internal state
<div data-llm="timeSlots.length === 3">

Do: Update when meaningful state changes

// Good: Updates when selection changes
<div data-llm={selectedItem ? `Selected: ${selectedItem.name}` : "No selection"}>
// Bad: Updates on every mouse move
<div data-llm={`Mouse at ${mouseX}, ${mouseY}`}>

Do: Be concise but descriptive

// Good: Clear and concise
<div data-llm="User viewing Flight AF123 details. Price: $450. Departure: 10:30 AM">
// Bad: Too verbose
<div data-llm="The user is currently in the process of viewing the detailed information panel for the flight with the identifier AF123, which has a price point of $450.00 USD and is scheduled to depart at 10:30 AM local time">

Don’t: Include implementation details

// Bad: Exposes technical details
<div data-llm={`State: ${JSON.stringify(internalState)}`}>
// Good: User-focused description
<div data-llm="User has filtered products by price: $50-$100">

Don’t: Use data-llm for every element

// Bad: Too granular
<div data-llm="Main container">
  <button data-llm="Submit button">Submit</button>
  <button data-llm="Cancel button">Cancel</button>
</div>
// Good: One meaningful context per view
<div data-llm="User is reviewing their order before checkout">
  <button>Submit</button>
  <button>Cancel</button>
</div>

When to use data-llm

Use data-llm when:
  • User interactions change what’s displayed (navigation, selections, filters)
  • The widget shows different views or states (wizard steps, tabs, modals)
  • Context about the current view helps answer user questions
  • You want the model to understand progressive actions (multi-step flows)

When NOT to use data-llm

Avoid data-llm when:
  • The widget is purely static and never changes
  • State changes are too frequent (animations, hover effects)
  • The information is already in the conversation history
  • The state is purely cosmetic (theme, collapsed panels)

Technical details

How context is synced

  1. Each data-llm attribute creates a DataLLM component
  2. Each component gets a unique ID and registers itself in a global map
  3. When content changes, the entire tree is traversed and formatted
  4. The formatted string is stored in window.openai.widgetState.__widget_context
  5. ChatGPT reads this value and includes it in the model’s context
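
For instance, after the nested dashboard example above has synced, reading the value back might show (illustrative):

// Reading the synced context back from the widget state (illustrative output)
console.log(window.openai.widgetState.__widget_context);
// "- User is on the details section\n  - Viewing detailed information for Item-123"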

Component lifecycle

// On mount or content change
setNode({ id, parentId, content });
window.openai.setWidgetState({
  ...window.openai.widgetState,
  __widget_context: formatTree()
});

// On unmount
removeNode(id);
window.openai.setWidgetState({
  ...window.openai.widgetState,
  __widget_context: formatTree()
});

Context format

Nested data-llm attributes create an indented list:
- User is on the dashboard
  - Viewing revenue metrics
    - Hovering over Q4 data
  - 5 notifications pending
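
A minimal sketch of how such a string could be assembled from the registered nodes (illustrative only; Skybridge’s actual formatTree may differ):

type ContextNode = { id: string; parentId: string | null; content: string };

// Global registry of currently mounted data-llm nodes (simplified)
const nodes = new Map<string, ContextNode>();

// Walk the tree depth-first, emitting one indented bullet per node
function formatTree(): string {
  const lines: string[] = [];
  const visit = (parentId: string | null, depth: number) => {
    for (const node of nodes.values()) {
      if (node.parentId === parentId) {
        lines.push(`${"  ".repeat(depth)}- ${node.content}`);
        visit(node.id, depth + 1);
      }
    }
  };
  visit(null, 0);
  return lines.join("\n");
}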