State Management in AI-Generated React Native Apps

By Sanket Sahu

9th May 2026

Last updated: 9th May 2026

When a generative AI tool writes a React Native app for you, somebody has to pick the state management library. The model doesn't deliberate; it imitates whatever pattern the system prompt and training data nudge it toward. Pick badly and every generated app inherits a footgun — re-render storms, stale data on offline reconnect, a 200KB Redux store for a screen with one form. Pick well and the app feels like it was written by a careful engineer who knew the tradeoffs.

This post is a candid breakdown of how RapidNative handles react native state management across two very different surfaces: the editor where you build the app, and the generated React Native code that ships to your users. They run on opposite philosophies, and that's deliberate.

Photo by Daniel Korpai on Unsplash — mobile developer coding on a laptop with a phone showing an app preview

The two state problems nobody talks about

Most articles on react native state management assume one app, one team, one set of decisions. Generative AI breaks that assumption. RapidNative has two state surfaces:

  1. Editor state — the canvas, the file tree, the chat with the AI, the artboards, the preview iframes. This is a complex, single-page Next.js app with collaborative editing, undo/redo, recovery from crashes, and ten kinds of modal. It needs centralized, normalized, time-travel-friendly state.
  2. Generated app state — the React Native / Expo code the AI writes for end users. It needs to be small, idiomatic, offline-aware, and easy for a non-technical user to extend without learning a new abstraction.

Those are different problems. We solve them with different stacks. The editor runs on Redux Toolkit. The generated apps don't ship Redux at all — they use TanStack Query for server state and React's built-in primitives for everything else. Conflating the two is the most common mistake teams make when adding AI to a builder.

TL;DR — How does RapidNative handle state in AI-generated React Native apps? The editor uses Redux Toolkit with five slices for canvas, files, chat, theming, and recovery. The apps it generates use TanStack Query for server data with AsyncStorage persistence for offline support, plus useState and a single React Context for local UI. Redux, Zustand, and Jotai are deliberately not generated.

Why the editor uses Redux

The editor in src/modules/editor/store/store.ts is configured with Redux Toolkit 2.8.2 and a custom middleware stack. Five slices live under the root reducer:

| Slice | Responsibility |
| --- | --- |
| app | User session, credits, subscription tier, team context, projects list |
| editor | Files, artboards, chat messages, preview state, modals, runtime errors |
| themeEditor | Design system colors and typography |
| integration | Connected third-party integrations (RevenueCat, Stripe, etc.) |
| recovery | Undo/redo history and crash-recovery snapshots |

The editor slice alone is 1,157 lines. It tracks the file map (files: { [path]: { content, name, mimeType, id } }), the artboard layout on the canvas (positions, zoom, device frames), the AI request lifecycle (isAiRequestInProgress, optimistic messages, streaming tokens), and a runtime error log surfaced from the preview iframes. Every dispatched action goes through a messageManagerMiddleware that records to a TimelineEventLog so we can replay the last 30 seconds of state if the tab crashes.
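The replay mechanism can be sketched in a few lines. This is a hypothetical illustration of the idea, not RapidNative's actual messageManagerMiddleware or TimelineEventLog; only the 30-second window comes from the description above:

```typescript
// Hypothetical sketch of a replay middleware in the spirit of
// messageManagerMiddleware. Not RapidNative's actual implementation.
type Action = { type: string; payload?: unknown };
type Next = (action: Action) => unknown;

interface TimelineEvent { at: number; type: string }

const WINDOW_MS = 30_000; // keep the last 30 seconds, per the editor's design

export const timeline: TimelineEvent[] = [];

// Standard Redux middleware signature: store => next => action.
export const timelineMiddleware =
  (_store: unknown) =>
  (next: Next) =>
  (action: Action) => {
    const now = Date.now();
    timeline.push({ at: now, type: action.type });
    // Evict events older than the window so the log stays bounded.
    while (timeline.length > 0 && now - timeline[0].at > WINDOW_MS) {
      timeline.shift();
    }
    return next(action);
  };
```

Because the log records every dispatched action with a timestamp, replaying a crash is just re-dispatching the surviving window against the last snapshot.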

If that sounds like overkill for a "form on a page," that's because it is — and that's also the point. You only want this complexity when you have a 1,000-line component graph that needs deterministic state and replay. Most React Native apps don't.

Photo by Carl Heyerdahl on Unsplash — multi-monitor setup for tracing complex state

Why the generated apps don't get Redux

The first version of every AI mobile builder I've seen ships Redux into the user's app. It's the most common pattern in training data, so the model defaults to it. The result is an 80-line store.ts, three slices, an action for every form field, and a junior developer staring at it wondering why their "to-do app" has more boilerplate than business logic.

In 2026, Zustand has overtaken Redux in adoption for new React projects, and TanStack Query has become the de-facto standard for server state. Our generated stack reflects that. Looking at the package.json for our fullstack scaffold:

{
  "@tanstack/react-query": "^5.90.16",
  "@tanstack/query-async-storage-persister": "^5.90.18",
  "@react-native-async-storage/async-storage": "2.2.0",
  "@react-native-community/netinfo": "^11.4.1",
  "@vibecode-db/client": "3.0.4"
}

There is no @reduxjs/toolkit. No zustand. No jotai. The system prompt that drives code generation explicitly forbids them. Why? Because for the apps people actually build with a prompt-to-app tool — feed trackers, internal tools, marketplace MVPs, content apps — every piece of state falls into one of three buckets:

  • Server state that lives in a database. Belongs to TanStack Query.
  • Local UI state that dies with the screen (modal open? input value?). Belongs to useState.
  • Cross-cutting context like the auth client and the DB client. Belongs to one React Context.

Three buckets, three primitives, zero new vocabulary. A founder building their first app reads useState and understands it. They read useQuery and Google "how to fetch data with react query" and get a thousand correct tutorials. They read a custom Redux slice and they're lost.

Teaching the LLM the rules

The interesting part isn't the choice of TanStack Query — it's how we make the LLM honor it consistently across millions of generated lines. The system prompt for our fullstack template (tools/project-templates/fullstack/ai/prompts/system-prompt.ts) ships these rules verbatim, in priority order:

  1. DB client: const { client } = useApp(). User: const { user } = useAuth(). Use user?.id in queries.
  2. Query results: const { data, error } = await client.from('posts').select('*'). The result is { data, error }, not an array.
  3. Hooks at the top of the component, never inside if-blocks, loops, or after early returns.
  4. Auth-scoped queries only when the schema has a user_id column; disable the query otherwise.

The prompt then shows a canonical example the model is expected to mimic:

const { client } = useApp();
const queryClient = useQueryClient();

const { data: posts = [], isLoading, isFetching } = useQuery({
  queryKey: ['posts'],
  queryFn: async () => {
    const { data, error } = await client
      .from('posts')
      .select('*')
      .order('created_at', { ascending: false })
      .limit(50);
    if (error) throw error;
    return data;
  },
});

That snippet is doing more work than it appears to. It establishes the query-key convention ([tableName]), the descending-by-created_at default, the .limit(50) guard against runaway lists, and the data: posts = [] default that prevents downstream .map errors before the first fetch resolves. The model sees this once and reproduces it for every list screen — feed, comments, products, history.
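Those conventions are small enough to state as code. The helpers below are hypothetical, written purely to make the conventions concrete; the generated code inlines the literals rather than calling helpers like these:

```typescript
// Hypothetical helpers illustrating the query-key convention above:
// [tableName] for lists, [tableName, id] for single rows.
export const listKey = (table: string) => [table] as const;
export const detailKey = (table: string, id: string | number) =>
  [table, id] as const;

// The default-value convention (`data: posts = []`) guards .map before
// the first fetch resolves. The same idea as a plain function:
export const withDefault = <T>(data: T[] | undefined): T[] => data ?? [];
```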

Server state vs. UI state — the boundary we enforce

The single biggest mistake LLMs make with state in React is using useState to mirror server data. You see it constantly: useEffect fetches a list, dumps it into useState, and now you have two sources of truth and no way to invalidate them.

Our system prompt makes the boundary explicit:

No useState for server data. All server state goes through TanStack Query. Use useState only for local UI state: modal visibility, form input values, animation flags, transient selections.

This rule alone eliminates an entire class of stale-data bugs. When a user creates a post on screen A and navigates back to screen B, screen B doesn't show stale data because the mutation invalidated ['posts'] and TanStack Query refetched on focus. There is no Redux action, no event bus, no forceUpdate. The cache is the synchronization mechanism.
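The idea is easier to see stripped of React entirely. Here is a toy cache in plain TypeScript, nothing like TanStack Query's real implementation, showing why invalidation is the only synchronization step a mutation needs:

```typescript
// Toy illustration of "the cache is the synchronization mechanism".
// Hypothetical sketch; TanStack Query does this with far more nuance.
type Fetcher<T> = () => Promise<T>;

export class TinyCache {
  private data = new Map<string, unknown>();
  private fetchers = new Map<string, Fetcher<unknown>>();

  // Every screen reads through the cache; the first read populates it.
  async query<T>(key: string, fetcher: Fetcher<T>): Promise<T> {
    this.fetchers.set(key, fetcher as Fetcher<unknown>);
    if (!this.data.has(key)) this.data.set(key, await fetcher());
    return this.data.get(key) as T;
  }

  // Invalidation = refetch. Every screen reading this key sees the new
  // data on its next read; no events, no actions, no forceUpdate.
  async invalidate(key: string): Promise<void> {
    const fetcher = this.fetchers.get(key);
    if (!fetcher) return;
    this.data.set(key, await fetcher());
  }
}
```

Screen A's mutation calls invalidate('posts'); screen B's next read of the same key returns the fresh rows. That is the entire synchronization story.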

Photo by Daniel Romero on Unsplash — server state vs. UI state on a real mobile screen

Offline-first by default

React Native apps live on phones, and phones spend a non-trivial percentage of their lives in elevators, planes, and basements. Generated apps need to handle that without the user having to ask. Our queryClient.ts configures two modes — designer mode (running inside the live preview during editing) and standalone mode (the deployed app):

const queryClient = new QueryClient({
  defaultOptions: isDesigner ? {
    queries: {
      staleTime: 0,
      gcTime: 0,
      networkMode: 'always',
    },
  } : {
    queries: {
      staleTime: 1000 * 60 * 5,
      gcTime: 1000 * 60 * 60 * 24,
      networkMode: 'offlineFirst',
    },
  },
});

In designer mode every change is fresh — the user just edited the schema, they want to see the new column immediately. In production the cache holds for 5 minutes and survives in memory for 24 hours, with networkMode: 'offlineFirst' so the UI hydrates from cache before the network resolves.

The persister bridges the gap between sessions:

export const asyncStoragePersister = createAsyncStoragePersister({
  storage: AsyncStorage,
  key: 'REACT_QUERY_OFFLINE_CACHE',
  throttleTime: 1000,
  shouldDehydrateQuery: (query) => {
    if (query.state.status === 'error') return false;
    if (query.queryKey[0] === 'auth') return false;
    return true;
  },
});

Two details worth calling out. First, errored queries are not persisted — we don't want to hydrate a failed state on next launch and present it as truth. Second, auth queries are excluded; sessions get re-validated on launch, never replayed from disk. The throttle keeps AsyncStorage writes to one per second so a chatty screen doesn't thrash the SQLite-backed store.

Network status flows through NetInfo into TanStack Query's onlineManager via a useOffline hook:

import { useEffect, useState } from 'react';
import NetInfo from '@react-native-community/netinfo';
import { onlineManager } from '@tanstack/react-query';

export function useOffline() {
  const [isOffline, setIsOffline] = useState(false);
  const [lastSyncTime, setLastSyncTime] = useState<Date | null>(null);
  useEffect(() => {
    const unsubscribe = NetInfo.addEventListener((state) => {
      const online = !!state.isConnected && !!state.isInternetReachable;
      setIsOffline(!online);
      onlineManager.setOnline(online);
      if (online) setLastSyncTime(new Date());
    });
    return () => unsubscribe();
  }, []);
  return { isOffline, lastSyncTime };
}

When the device comes back online, queries that were paused automatically retry. Mutations the user fired while offline replay in order. None of this requires a Redux store, a custom queue, or an opinion from the user.
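That replay behavior can be modeled in a few lines. A toy sketch, far simpler than TanStack Query's paused-mutation machinery, but the same shape: queue while offline, flush in FIFO order on reconnect:

```typescript
// Toy model of offline mutation replay. Hypothetical illustration only;
// TanStack Query's real paused-mutation handling is much richer.
type Mutation = () => Promise<void>;

export class MutationQueue {
  private queue: Mutation[] = [];
  private online = true;

  // Online: run immediately. Offline: park it until connectivity returns.
  async run(mutation: Mutation): Promise<void> {
    if (this.online) return mutation();
    this.queue.push(mutation);
  }

  async setOnline(online: boolean): Promise<void> {
    this.online = online;
    if (!online) return;
    // Replay in the order the user issued them while offline.
    while (this.queue.length > 0) {
      const next = this.queue.shift()!;
      await next();
    }
  }
}
```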

The one Context we ship

There is exactly one React Context in the generated apps, and it lives in src/providers/AppProvider.tsx:

const AppContext = createContext<AppContextValue | null>(null);

export function useApp() {
  const ctx = useContext(AppContext);
  if (!ctx) throw new Error('useApp must be used within AppProvider');
  return ctx;
}

It exposes a single value: the database client. The client is a thin wrapper from @vibecode-db/client that swaps adapters at runtime — mock (AsyncStorage-backed, used inside the preview), supabase, or pocketbase. The selection happens via EXPO_PUBLIC_ADAPTER_TYPE, which means the same generated code runs against fake data in the preview and against real Supabase in production with no rewrites.
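The swap can be pictured as a small factory registry keyed by the adapter name. Everything below is hypothetical, including the DbAdapter shape; the real @vibecode-db/client API is not shown here, and this only illustrates the runtime-selection idea:

```typescript
// Hypothetical sketch of EXPO_PUBLIC_ADAPTER_TYPE-style adapter selection.
// The interface and factories are invented for illustration.
interface DbAdapter {
  name: string;
  from(table: string): {
    select(cols: string): Promise<{ data: unknown[]; error: Error | null }>;
  };
}

const makeMockAdapter = (): DbAdapter => ({
  name: 'mock',
  from: (_table) => ({
    // The real mock adapter is AsyncStorage-backed; this returns nothing.
    select: async (_cols) => ({ data: [], error: null }),
  }),
});

const registry: Record<string, () => DbAdapter> = {
  mock: makeMockAdapter,
  // supabase: makeSupabaseAdapter,     // wired up in production builds
  // pocketbase: makePocketbaseAdapter,
};

export function createClient(adapterType: string): DbAdapter {
  const factory = registry[adapterType];
  if (!factory) throw new Error(`Unknown adapter: ${adapterType}`);
  return factory();
}
```

The generated screens only ever see the DbAdapter interface, which is why the same code runs against fake data in the preview and real Supabase in production.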

We avoided pulling in a Context-based store framework like Jotai or Zustand because Context here is doing what Context is good at: dependency injection, not state management. Mistaking those two is how teams end up with re-render storms.

Why not Zustand?

Zustand is excellent. We use it personally on other projects. It didn't fit the generated surface for two reasons.

First, adoption rate beats elegance when an LLM is the author. TanStack Query has more high-quality training data and more idiomatic patterns in public code. The model produces correct TanStack Query code on the first try roughly 95% of the time in our internal evals. Zustand's API is smaller, but the patterns for derived state, async actions, and cache invalidation are inconsistent across the corpus, and the model produces subtly broken stores often enough to matter.

Second, server state and client state collapse into one in most prompt-to-app projects. A to-do app, a journaling app, an internal admin dashboard — these are 90% CRUD. If you put that data in Zustand you've built a worse cache. Better to put it in the right cache and lean on the vocabulary TanStack Query already provides for fetching, mutation, invalidation, and retries.

That doesn't mean Zustand never appears. If a user explicitly asks for "a global shopping cart with persistence and animations," the model can install and configure it. But it doesn't ship by default.

How RapidNative's editor and generated state interact

This is the part most architectural posts skip. The editor and the generated app aren't isolated — they share preview iframes that need to stay in sync. When you edit a screen in the canvas and the same screen is rendered three times (small phone, large phone, tablet), every preview should show the same data.

We solve this without Redux, without a state library, and without a message bus. Inside src/modules/editor/store/localStore.ts there's a single shared Map:

let sharedTables: Map<string, Record<string, any>[]> | null = null;

function syncIframeVibecode(iframe: HTMLIFrameElement) {
  const adapter = iframe.contentWindow?.vibecode?.adapter;
  if (!adapter?.tables) return;

  if (!sharedTables) {
    sharedTables = adapter.tables;
  } else if (adapter.tables !== sharedTables) {
    adapter.tables = sharedTables;
  }
}

The first preview iframe to mount wins; subsequent iframes have their tables Map reference swapped to the same instance. Now when the user adds a row in preview A, preview B and C see it instantly because they're all reading from the same memory. The MockAdapter's useSyncExternalStore integration triggers a re-render in every iframe. Zero state library involved — just a shared reference and React's built-in subscription primitive.
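The contract React needs for this is small: a subscribe function and a getSnapshot function. A minimal sketch of such an external store follows; it is illustrative, not the MockAdapter's actual code:

```typescript
// Minimal external store with the subscribe/getSnapshot contract that
// useSyncExternalStore consumes. Hypothetical sketch, not the MockAdapter.
type Listener = () => void;

export function createTableStore() {
  let tables = new Map<string, Record<string, unknown>[]>();
  const listeners = new Set<Listener>();

  return {
    subscribe(listener: Listener) {
      listeners.add(listener);
      return () => listeners.delete(listener);
    },
    getSnapshot: () => tables,
    // Writes replace the Map reference so snapshot comparison is a cheap
    // identity check, then notify every subscriber (every preview iframe).
    insert(table: string, row: Record<string, unknown>) {
      const next = new Map(tables);
      next.set(table, [...(next.get(table) ?? []), row]);
      tables = next;
      listeners.forEach((l) => l());
    },
  };
}
```

In a component this would be wired up as useSyncExternalStore(store.subscribe, store.getSnapshot); every iframe sharing the store re-renders on insert.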

This pattern only works because we control both the editor and the runtime. It's the kind of optimization you can't do when you're integrating someone else's state library — and it's one of the reasons we ship our own minimal DB client instead of forcing every generated app to import a heavy ORM.

A quick comparison table

For readers who want the punch line:

| State surface | Library | Why |
| --- | --- | --- |
| Editor (Next.js app) | Redux Toolkit + RTK Query | Complex normalized state, undo/redo, replay, dev tooling |
| Generated app — server data | TanStack Query | Caching, offline persistence, retry, refetch on focus |
| Generated app — local UI | useState / useReducer | Built-in, idiomatic, smallest surface |
| Generated app — auth + DB client | Single React Context | Dependency injection, not state |
| Cross-iframe preview sync | Shared Map reference | Zero-overhead, ref-equal subscriptions |

People also ask

Is TanStack Query a replacement for Redux in React Native?

For most apps, yes. TanStack Query handles server state — fetching, caching, mutating, syncing — far better than a hand-rolled Redux store, and it ships with offline persistence and retry built in. Use Redux only when you have complex client-side state that doesn't come from a server, like a multi-step wizard with branching logic.

What's the smallest viable state stack for a new React Native app?

useState for local UI, one React Context for cross-cutting dependencies (DB client, auth), and TanStack Query for any data that lives in a backend. Add @tanstack/query-async-storage-persister if you need offline support. That's the entire stack we generate by default.

Should AI-generated React Native apps ship with Redux?

No, unless the app genuinely has complex non-server state. Redux adds boilerplate and cognitive load without solving problems that TanStack Query, Context, and useState already handle. Generated apps should match the vibe-coding best practices of being readable, idiomatic, and skinnable by a non-expert.

What this means if you're building an app on RapidNative

You don't have to learn any of this to use RapidNative. The point of opinionated defaults is that you get correct state management for free. But if you ever export your code, push it to a real Expo project, or extend a screen by hand, you'll notice that the generated app reads like an app a careful engineer would have written — not like a template fight between five competing libraries.

If you're curious how the rest of the stack lines up, we've written about how the AI generation pipeline streams components in real time, how the visual point-and-edit feature works, and how the export pipeline turns a session into App Store-ready code.

State management in AI-generated React Native apps isn't hard because the libraries are bad. It's hard because somebody has to choose, and they have to choose well enough that the model can imitate them. We picked TanStack Query, AsyncStorage persistence, and exactly one Context, and we wrote the rules down where the LLM can read them. Two years and a lot of generated apps later, that's still the right call.

Ready to see it in action? Start building a React Native app and inspect the code yourself — the state patterns above are exactly what you'll find.


Frequently Asked Questions

What is RapidNative?

RapidNative is an AI-powered mobile app builder. Describe the app you want in plain English and RapidNative generates real, production-ready React Native screens you can preview, edit, and publish to the App Store or Google Play.