The `data-llm` attribute creates a live description of your widget’s state that the model can read.
## The Dual Interaction Surface
ChatGPT Apps have a unique challenge: two interaction surfaces. The user clicked Flight AF123, but the model only sees the conversation text, so it doesn’t know what “this one” refers to.

## How data-llm Solves This
Add `data-llm` attributes to describe what the user is seeing:
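A minimal sketch of what this looks like for a flight-results widget (the `FlightCard` component and the `Flight` fields are assumptions for illustration, not part of the Skybridge API):

```tsx
type Flight = { number: string; origin: string; destination: string; price: number };

function FlightCard({ flight, selected }: { flight: Flight; selected: boolean }) {
  // Plain-language description of what this card shows, kept in sync with props
  const description =
    `Flight ${flight.number}: ${flight.origin} to ${flight.destination}, ` +
    `$${flight.price}${selected ? " (selected)" : ""}`;

  return <div data-llm={description}>{/* visual card markup */}</div>;
}
```

Now when the user clicks AF123 and types “book this one,” the model can resolve “this one” from the widget’s context.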
## How It Works Under the Hood
- **Build time:** The Vite plugin transforms `data-llm` attributes into `DataLLM` React components
- **Runtime:** `DataLLM` components register their content in a global tree
- **State sync:** When content changes, the tree generates a hierarchical string
- **Model context:** This string is stored in `window.openai.widgetState.__widget_context`
- **LLM reads:** ChatGPT includes this context when generating responses
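The state-sync step can be pictured as a small tree-to-string walk. This is an illustrative sketch, not Skybridge’s actual implementation; the node shape and output format are assumptions:

```typescript
interface ContextNode {
  text: string;
  children?: ContextNode[];
}

// Walks the registered data-llm tree and renders one hierarchical string,
// indenting each child under its parent node.
function renderContext(node: ContextNode, depth = 0): string {
  const line = `${"  ".repeat(depth)}- ${node.text}`;
  const childLines = (node.children ?? []).map((c) => renderContext(c, depth + 1));
  return [line, ...childLines].join("\n");
}

const tree: ContextNode = {
  text: "Flight results for Paris",
  children: [
    { text: "Flight AF123, $450 (selected)" },
    { text: "Flight BA456, $510" },
  ],
};

console.log(renderContext(tree));
// - Flight results for Paris
//   - Flight AF123, $450 (selected)
//   - Flight BA456, $510
```

A string of this shape is what ends up under the reserved widget-state key for the model to read.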
## Best Practices
**Do: Describe what the user sees**
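A sketch of the difference, assuming a hypothetical flight-search widget:

```tsx
// Good: mirrors the rendered view in plain language
<ul data-llm={`Showing ${flights.length} flights to ${city}, sorted by price`}>
  {/* flight rows */}
</ul>

// Avoid: raw data dumps the model can't relate to what's on screen
<ul data-llm={JSON.stringify(flights)}>{/* flight rows */}</ul>
```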
**Do: Update on user interaction**
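For example, tie the description to selection state so it changes as the user interacts (names hypothetical; `useState` is React’s):

```tsx
const [selected, setSelected] = useState<Flight | null>(null);

return (
  <ul
    data-llm={
      selected
        ? `User selected flight ${selected.number} ($${selected.price})`
        : "No flight selected yet"
    }
  >
    {flights.map((f) => (
      <li key={f.number} onClick={() => setSelected(f)}>
        {f.number}
      </li>
    ))}
  </ul>
);
```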
**Don’t: Include sensitive data**
`data-llm` content is sent to ChatGPT’s servers and included in the model’s context. Avoid exposing tokens, passwords, internal IDs, or any data you wouldn’t want in a prompt.
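A sketch of the distinction (names hypothetical):

```tsx
// Avoid: secrets and internal identifiers end up in the model's prompt
<div data-llm={`Order ${order.internalDbId}, session ${authToken}`}>…</div>

// Better: only the user-facing summary
<div data-llm={`Order confirmed: ${order.items.length} items, arriving ${order.eta}`}>…</div>
```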
**Don’t: Sync everything**
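Keep the context down to the few facts the model needs; ephemeral UI details are noise. A sketch (names hypothetical):

```tsx
// Avoid: churns on every keystroke and scroll, with no conversational value
<div data-llm={`Scroll ${scrollY}px, hovering row ${hoverIndex}, draft query "${draft}"`}>…</div>

// Better: stable, conversation-relevant state
<div data-llm={`Search results for "${submittedQuery}": ${results.length} matches`}>…</div>
```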
**Do: Nest for hierarchy**
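Nested `data-llm` elements produce a nested context string, so structure the descriptions the way the UI is structured. A sketch:

```tsx
<section data-llm="Trip itinerary for Paris, June 10-14">
  <article data-llm="Outbound: Flight AF123, June 10, $450">{/* details */}</article>
  <article data-llm="Return: Flight AF124, June 14, $460">{/* details */}</article>
</section>
```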
## Expression Limitations
Keep `data-llm` expressions simple (strings, ternaries, template literals). Pre-compute complex logic:
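For instance, move derived values into ordinary code and hand `data-llm` the finished string (the cart shape here is a hypothetical example):

```typescript
interface CartItem {
  name: string;
  price: number;
  qty: number;
}

// Derive the summary in plain TypeScript, outside the JSX expression
function describeCart(items: CartItem[]): string {
  const count = items.reduce((n, i) => n + i.qty, 0);
  const total = items.reduce((sum, i) => sum + i.price * i.qty, 0);
  return count === 0
    ? "Cart is empty"
    : `Cart: ${count} item(s), total $${total.toFixed(2)}`;
}

// In the component, the expression stays trivial:
//   <div data-llm={describeCart(items)}>…</div>
console.log(describeCart([{ name: "Socks", price: 9.5, qty: 2 }]));
// Cart: 2 item(s), total $19.00
```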
## `__widget_context` Reserved Key
Skybridge uses a reserved key, `__widget_context`, in widget state:
- Automatically managed by `DataLLM` components
- Filtered out when you use `useWidgetState` (you only see your own state)
- Read by ChatGPT when the user sends a message (passive context)
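The filtering behavior can be sketched as follows (an illustrative model of what `useWidgetState` does, not Skybridge’s actual code):

```typescript
const RESERVED_KEY = "__widget_context";

// Strip the reserved key so the widget only ever sees its own state
function filterWidgetState(raw: Record<string, unknown>): Record<string, unknown> {
  const { [RESERVED_KEY]: _context, ...own } = raw;
  return own;
}

const raw = {
  step: 2,
  __widget_context: "Wizard on step 2 of 4: Payment details",
};

console.log(filterWidgetState(raw));
// { step: 2 }
```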
## Example: Multi-step Wizard
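A sketch of what such a wizard can look like (component, step names, and state shape are hypothetical):

```tsx
import { useState } from "react";

const STEPS = ["Trip details", "Passenger info", "Payment", "Confirm"];

export function BookingWizard() {
  const [step, setStep] = useState(0);

  return (
    // One context string summarizes the user's progress through the flow
    <div data-llm={`Booking wizard on step ${step + 1} of ${STEPS.length}: ${STEPS[step]}`}>
      <h2>{STEPS[step]}</h2>
      {/* form fields for the current step */}
      <button onClick={() => setStep((s) => Math.min(s + 1, STEPS.length - 1))}>
        Next
      </button>
    </div>
  );
}
```

If the user then types “go back a step,” the model can tell from the context which step they are on.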
## Related
- `data-llm` API Reference - Full API documentation
- Communicating with the Model Guide - More patterns
- `useSendFollowUpMessage` - Send messages to the conversation
