GenUI in FlutterFlow 2026: How AI Agents Now Compose Your App's UI in Real Time
GenUI is Google's open-source SDK for Flutter that enables AI agents to compose user interfaces dynamically at runtime. Instead of showing a chat text response, the AI agent assembles actual UI components — product cards, booking tiles, dashboards — from your app's existing widget catalog, in response to what the user needs. FlutterFlow implements GenUI as "GenUI Chat," allowing FlutterFlow apps to deliver agent-driven experiences where the UI adapts in real time to user intent. The underlying protocol is A2UI (Agent-to-UI), an open project by Google defining declarative UI communication between agents and frontends.
Every UI you have ever built makes the same assumption: you know in advance what the user will need.
You design screens for the expected user journeys. You build forms for the expected inputs. You create navigation for the expected flows. The UI is a static prediction of what users will want.
GenUI breaks this assumption. The AI agent observes what the user is trying to accomplish — and assembles the appropriate UI at runtime, from your existing widget catalog, in response to the user's actual intent.
The Core Concept: From Text to UI
The evolution of AI in apps:
Gen 1: AI returns text
User: "Show available rooms for next weekend"
AI: "We have Room 101 available at $120/night and Room 205 at $150/night..."
Gen 2: AI returns structured data (JSON)
User: "Show available rooms for next weekend"
AI returns: { rooms: [{id: 101, price: 120, ...}, {id: 205, ...}] }
App renders: a list view using hardcoded template
Gen 3 (GenUI): AI returns UI specification
User: "Show available rooms for next weekend"
AI returns A2UI JSON:
{
  "type": "RoomGrid",
  "items": [{"widget": "RoomCard", "id": 101, "price": 120, "features": [...], "cta": "Book Now"}],
  "layout": "grid_2col",
  "filter_widget": "PriceRangeSlider"
}
App renders: actual RoomCard widgets + PriceRangeSlider, all interactive
In Gen 3, the agent does not just answer — it builds the interface. The user gets functional UI components, not a text description of what the UI would show.
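The shift from Gen 2 to Gen 3 can be sketched outside of Flutter entirely. The TypeScript toy below is illustrative only, not the GenUI SDK: renderGen2 hardcodes one template for the data, while renderGen3 just dispatches the agent's widget names to registered builders. Every name here (renderGen2, renderGen3, the registry entries) is invented for the sketch.

```typescript
// Toy contrast between Gen 2 and Gen 3; all names are invented, not SDK APIs.

type ComponentSpec = { widget: string; data: Record<string, unknown> };

// Gen 2: the app owns the layout. One hardcoded template, whatever the intent.
function renderGen2(rooms: { id: number; price: number }[]): string {
  return rooms.map(r => `ListTile(Room ${r.id}, $${r.price})`).join("\n");
}

// Gen 3: the agent picks the widgets; the app only maps names to builders.
const registry: Record<string, (data: Record<string, unknown>) => string> = {
  RoomCard: d => `RoomCard(id=${d.id}, price=${d.price})`,
  PriceRangeSlider: () => "PriceRangeSlider()",
};

function renderGen3(components: ComponentSpec[]): string[] {
  return components.map(c => {
    const builder = registry[c.widget];
    if (!builder) throw new Error(`Unknown widget: ${c.widget}`);
    return builder(c.data);
  });
}
```

The key design difference: in Gen 3 the app never decides *which* components appear, only *how* each named component renders, which is why the catalog (the registry above) becomes the app's main contract with the agent.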
How GenUI Works Technically
GenUI uses a three-layer architecture:
Layer 1: Widget Catalog Registration
You register your app's components with the GenUI SDK — defining what widgets exist and what data they accept:
// Register your widget catalog with GenUI
final catalog = GenUIWidgetCatalog(
  widgets: [
    WidgetSpec(
      name: "ProductCard",
      description: "Displays a product with image, name, price, and add-to-cart button",
      schema: {
        "product_id": SchemaField.string(required: true),
        "name": SchemaField.string(required: true),
        "price": SchemaField.number(required: true),
        "image_url": SchemaField.string(),
        "rating": SchemaField.number(min: 0, max: 5),
        "in_stock": SchemaField.boolean(defaultValue: true),
      },
      builder: (data, onAction) => ProductCard(
        productId: data["product_id"],
        name: data["name"],
        price: data["price"],
        imageUrl: data["image_url"],
        rating: data["rating"],
        onAddToCart: () => onAction("add_to_cart", {"product_id": data["product_id"]}),
      ),
    ),
    WidgetSpec(
      name: "BookingCalendar",
      description: "Interactive calendar for selecting available appointment slots",
      schema: {
        "available_dates": SchemaField.array(SchemaField.string()),
        "time_slots": SchemaField.array(SchemaField.string()),
        "service_type": SchemaField.string(),
      },
      builder: (data, onAction) => BookingCalendar(
        availableDates: data["available_dates"],
        timeSlots: data["time_slots"],
        onSlotSelected: (date, time) => onAction("slot_selected", {
          "date": date,
          "time": time,
        }),
      ),
    ),
    // ... more widgets
  ],
);
Layer 2: Agent-to-UI Communication
When the user sends input, GenUI sends it to the LLM along with the widget catalog schema. The LLM responds with A2UI JSON — a specification of which widgets to render with what data:
{
  "ui_response": {
    "message": "I found 3 available slots for your haircut appointment this week:",
    "components": [
      {
        "widget": "BookingCalendar",
        "data": {
          "available_dates": ["2026-05-19", "2026-05-20", "2026-05-22"],
          "time_slots": ["10:00 AM", "2:00 PM", "4:30 PM"],
          "service_type": "haircut"
        }
      },
      {
        "widget": "ServiceInfoCard",
        "data": {
          "service": "Classic Haircut",
          "duration": "45 minutes",
          "price": 35
        }
      }
    ]
  }
}
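A practical consequence of this layer: the agent's response should be validated against the registered catalog before anything is rendered, so a hallucinated widget name or a missing required field fails fast instead of crashing the UI. The sketch below is protocol-level TypeScript (A2UI is framework-agnostic), not the GenUI SDK; FieldSpec, CatalogSchema, and validateComponent are invented names for illustration.

```typescript
// Illustrative validation of an A2UI component against a catalog schema.
// All type and function names here are invented, not GenUI SDK APIs.

type FieldSpec = { type: "string" | "number" | "array"; required?: boolean };
type CatalogSchema = Record<string, Record<string, FieldSpec>>;

// A schema mirroring the BookingCalendar registration shown earlier.
const catalogSchema: CatalogSchema = {
  BookingCalendar: {
    available_dates: { type: "array", required: true },
    time_slots: { type: "array", required: true },
    service_type: { type: "string" },
  },
};

// Returns a list of problems; an empty list means the component is renderable.
function validateComponent(
  widget: string,
  data: Record<string, unknown>,
  schema: CatalogSchema
): string[] {
  const spec = schema[widget];
  if (!spec) return [`unknown widget "${widget}"`];
  const errors: string[] = [];
  for (const [field, rule] of Object.entries(spec)) {
    const value = data[field];
    if (value === undefined) {
      if (rule.required) errors.push(`missing required field "${field}"`);
      continue;
    }
    const ok =
      rule.type === "array" ? Array.isArray(value) : typeof value === rule.type;
    if (!ok) errors.push(`field "${field}" should be ${rule.type}`);
  }
  return errors;
}
```

In the real SDK this mapping and checking happens inside the Dart layer; the point of the sketch is that the catalog schema serves double duty, both as the prompt contract sent to the LLM and as the guardrail applied to its output.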
Layer 3: State Feedback Loop
When the user interacts with the rendered widget (selects a time slot, clicks Add to Cart, fills a form), the action is fed back to the agent:
// GenUI automatically feeds user actions back to the agent
GenUIChat(
  catalog: catalog,
  model: FirebaseAI.googleAI().generativeModel(model: "gemini-2.5-flash"),
  onAction: (actionType, actionData, conversation) {
    // The "slot_selected" action fires when the user picks a booking time.
    // GenUI automatically includes it in the next agent context, so the
    // agent knows what the user selected and responds appropriately.
  },
)
The agent maintains context of the entire interaction — what UI it showed, what the user clicked, what data was displayed. The next user input continues a stateful conversation where the agent knows exactly where you are in the flow.
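The mechanism behind that statefulness is simple: each widget interaction becomes a structured turn in the conversation history, alongside the user's text messages. The TypeScript below sketches that idea only; the turn shape and function names are invented for this example, not GenUI SDK APIs.

```typescript
// Illustrative sketch of the Layer 3 feedback loop; ConversationTurn and
// recordAction are invented names, not GenUI SDK APIs.

type ConversationTurn =
  | { role: "user" | "agent"; content: string }
  | { role: "ui_action"; action: string; payload: Record<string, unknown> };

// A widget interaction becomes a structured conversation turn, so the next
// model call sees what the user clicked, not just what they typed.
function recordAction(
  history: ConversationTurn[],
  action: string,
  payload: Record<string, unknown>
): ConversationTurn[] {
  return [...history, { role: "ui_action", action, payload }];
}

let history: ConversationTurn[] = [
  { role: "user", content: "Book me a haircut this week" },
  { role: "agent", content: "Here are the available slots:" },
];

// The user taps the 2:00 PM slot in the rendered BookingCalendar widget:
history = recordAction(history, "slot_selected", {
  date: "2026-05-20",
  time: "2:00 PM",
});
```

Because the selection arrives as structured data rather than free text, the agent does not have to parse "I'll take the 2 PM one" out of prose; it already holds the exact date and time for the next step of the flow.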
FlutterFlow's GenUI Chat
For FlutterFlow developers, GenUI Chat is available without writing the SDK integration manually:
- Enable GenUI Chat in the FlutterFlow widget library
- Define your widget catalog — select which FlutterFlow components the agent can use (you pick from your existing component library)
- Connect AI model — select Firebase AI Logic + Gemini (zero configuration) or configure a custom LLM endpoint
- Configure the agent's system prompt — define the agent's role, what it can do, what widgets to use for different scenarios
FlutterFlow generates the widget catalog schema automatically from your component library — you do not need to write JSON schemas by hand.
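To make "generates the widget catalog schema automatically" concrete, the sketch below shows one plausible shape of that transformation: component parameter definitions in, a JSON-Schema-style description out, ready to hand to the LLM. This is a hypothetical illustration in TypeScript, not FlutterFlow's actual implementation; every name here is invented.

```typescript
// Hypothetical sketch of deriving a catalog schema from component definitions,
// in the spirit of what FlutterFlow automates. All names are invented.

type Param = {
  name: string;
  type: "string" | "number" | "boolean";
  required: boolean;
};
type Component = { name: string; description: string; params: Param[] };

// Emit a JSON-Schema-style description the LLM can read alongside the prompt.
function toCatalogSchema(components: Component[]) {
  return components.map(c => ({
    name: c.name,
    description: c.description,
    schema: {
      type: "object",
      properties: Object.fromEntries(
        c.params.map(p => [p.name, { type: p.type }])
      ),
      required: c.params.filter(p => p.required).map(p => p.name),
    },
  }));
}

const generated = toCatalogSchema([
  {
    name: "ProductCard",
    description: "Product tile with price and add-to-cart",
    params: [
      { name: "name", type: "string", required: true },
      { name: "price", type: "number", required: true },
      { name: "rating", type: "number", required: false },
    ],
  },
]);
```

Whatever the real internals look like, the value is the same: the schema stays in lockstep with your component library, so adding a parameter to a component automatically changes what the agent is allowed to send.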
Real Use Cases in Production
E-Commerce: Dynamic Product Discovery
Instead of fixed category pages, the agent assembles product grids based on natural language queries. "Show me running shoes under $100 for wide feet" → agent renders a filtered ProductGrid with PriceRangeSlider and FootWidthFilter widgets, with results pre-filtered to the user's criteria.
Healthcare: Personalized Patient Dashboards
Instead of one-size-fits-all health dashboards, the agent assembles relevant widgets based on each patient's conditions, medications, and upcoming appointments. A diabetic patient's dashboard shows GlucoseTracker and MealLogger. A cardiac patient's shows HeartRateCard and MedicationSchedule. For regulated environments, this requires HIPAA-compliant development practices throughout the agent and data layers.
B2B SaaS: Adaptive Analytics
Instead of fixed dashboard layouts, the agent assembles the metrics most relevant to each user's role and current goals. The sales VP sees pipeline and quota attainment. The account manager sees their book of business and renewal risk. This adaptive approach is core to modern SaaS development where personalization drives retention.
Travel Apps: Conversational Booking
"I want a beach vacation in Southeast Asia, first week of July, family of 4, ~$3,000 budget" → agent assembles FlightSearchResults + HotelGrid + ItineraryBuilder, pre-filtered and pre-populated with the user's constraints.
What This Changes for Flutter Developers
GenUI shifts the Flutter developer's role:
Before GenUI: You design every possible screen, every state, every user flow — and hope you predicted what users would need.
With GenUI: You build a rich widget catalog — high-quality, composable components. The agent decides which widgets to assemble for each user's specific situation. You focus on making great widgets; the AI focuses on assembling them correctly.
The implication: the UI layer becomes a vocabulary of components, and the agent is the grammar that arranges them. Apps that implement GenUI can handle user intents that were never explicitly designed for, because the agent can assemble novel combinations of existing widgets.
Ortem Technologies builds Flutter and FlutterFlow applications with GenUI integration — creating adaptive, agent-driven experiences for healthcare, fintech, and enterprise clients. Explore our Flutter development services → | AI agent development → | Talk to our mobile team →
About Ortem Technologies
Ortem Technologies is a premier custom software, mobile app, and AI development company. We serve enterprise and startup clients across the USA, UK, Australia, Canada, and the Middle East. Our cross-industry expertise spans fintech, healthcare, and logistics, enabling us to deliver scalable, secure, and innovative digital solutions worldwide.
About the Author
Director – AI Product Strategy, Development, Sales & Business Development, Ortem Technologies
Praveen Jha is the Director of AI Product Strategy, Development, Sales & Business Development at Ortem Technologies. With deep expertise in technology consulting and enterprise sales, he helps businesses identify the right digital transformation strategies - from mobile and AI solutions to cloud-native platforms. He writes about technology adoption, business growth, and building software partnerships that deliver real ROI.
Frequently Asked Questions
- What is GenUI? GenUI (Generative UI) is Google's open-source SDK for Flutter that enables AI models to generate and compose Flutter UI components at runtime. Instead of returning plain text in a chat interface, the AI returns a JSON specification of UI components — product cards, dashboards, booking tiles, form widgets — which GenUI renders using your app's actual widget catalog. The user sees real, interactive Flutter widgets instead of a text description. The GenUI SDK handles the JSON-to-widget mapping, state management, and the feedback loop that sends user interactions back to the AI agent.
- What is A2UI? A2UI (Agent-to-UI) is an open protocol specification by Google that defines how AI agents communicate UI structure to frontends. It uses a JSON-based declarative format: the agent describes what UI to show ("show a product grid with these items, with add-to-cart buttons") and the frontend renders it using its own component library. A2UI is framework-agnostic — the protocol works with Flutter (via the GenUI SDK), React (via web implementations), and other frontends. FlutterFlow's GenUI Chat feature is an implementation of A2UI for FlutterFlow apps.
- What is GenUI Chat in FlutterFlow? GenUI Chat is FlutterFlow's implementation of agent-driven UI, available in FlutterFlow apps without writing custom code. In GenUI Chat mode, you define your widget catalog (the components your app has — product cards, booking tiles, user profile cards, etc.), connect an AI model (Gemini or another LLM), and FlutterFlow generates a chat interface where the AI assembles UI from your catalog in response to user input. A user asking "show me available appointments for next week" gets a calendar widget with available slots — not a text list of dates.
- How is GenUI different from a regular chatbot? A regular chatbot returns text; GenUI returns interactive UI components. The difference is significant for user experience: a chatbot answering "What products do we have?" returns a text list. GenUI answers with an actual product grid — card components with images, prices, ratings, and functional "Add to Cart" buttons — assembled from your app's real widget catalog. When the user clicks "Add to Cart" in the GenUI-rendered component, that action is fed back to the agent, which updates its context and assembles the next appropriate UI state.
- Which AI models work with GenUI? The GenUI SDK works with any LLM that supports function calling / structured output. Google recommends Gemini models (Gemini 2.5 Pro for complex UI reasoning, Gemini 2.5 Flash for faster, cheaper generation). The SDK supports Firebase AI Logic as the backend (the successor to Vertex AI in Firebase), which connects directly to Gemini without custom backend setup. Third-party LLMs (Claude, GPT-5.5) can also be used via a custom backend if configured to output the A2UI JSON schema. Firebase AI Logic is the recommended path for FlutterFlow implementations due to its zero-configuration setup.