ACT AI
@egeuysall · May 9, 2026
ACT AI is a Next.js 16 app that helps students prep for the ACT. It combines a streaming AI tutor, saved chat history, MDX-based tutorials and practice sets, and a RAG-powered source index, all behind Clerk authentication.
| Layer | Technology |
|---|---|
| Framework | Next.js 16 App Router + React 19 |
| Auth | Clerk |
| Database / Backend | Convex |
| AI | Vercel AI SDK, ToolLoopAgent, openai/gpt-5.4-nano |
| Markdown rendering | Streamdown + KaTeX |
| UI | shadcn sidebar components |
| Route | What it does |
|---|---|
| / | Landing page; redirects signed-in users to /dashboard |
| /sign-in and /sign-up | Clerk auth; both redirect to /dashboard on success |
| /dashboard and /dashboard/[chatId] | AI chatbot tab |
| /dashboard/tutorials/[slug] | MDX tutorial lessons with optional YouTube embeds |
| /dashboard/practice/[slug] | MDX ACT practice sets with KaTeX math |
Users type a question (or pick a suggestion), and the message is sent to POST /api/agent/chat. The route checks the user's session, saves the message to the database, hands the full conversation to the AI agent, and streams the reply back in real time. Once the AI finishes, the reply is saved to the database too.
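In outline, the route handler might look like the sketch below. The `agent` and `saveMessage` helpers are assumptions for illustration, and the streaming method names follow the AI SDK v5 surface, which may differ from the version this app uses.

```ts
// app/api/agent/chat/route.ts -- simplified sketch, not the app's actual code.
import { auth } from "@clerk/nextjs/server";
import { agent } from "@/lib/agent";       // hypothetical agent instance
import { saveMessage } from "@/lib/chats"; // hypothetical persistence helper

export async function POST(req: Request) {
  // Reject unauthenticated callers before touching the database.
  const { userId } = await auth();
  if (!userId) return new Response("Unauthorized", { status: 401 });

  const { chatId, messages } = await req.json();
  await saveMessage(chatId, userId, messages.at(-1)); // persist the user's message

  // Stream the agent's reply to the client; persist it once generation finishes.
  // (Message format conversion is omitted for brevity.)
  const result = agent.stream({ messages });
  return result.toUIMessageStreamResponse({
    onFinish: ({ responseMessage }) => saveMessage(chatId, userId, responseMessage),
  });
}
```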
The composer supports image uploads in PNG, JPG, and WebP formats, with a max of 2 files at 4 MB each. It also saves draft text to localStorage under the key act-ai:chat-input-draft so it survives a page reload.
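A draft hook in that spirit might look like this; the hook itself is illustrative, and only the storage key is taken from the app:

```tsx
import * as React from "react";

const DRAFT_KEY = "act-ai:chat-input-draft";

// Illustrative hook: keep the composer's text in sync with localStorage
// so an accidental reload doesn't lose an in-progress question.
function useChatDraft() {
  const [draft, setDraft] = React.useState(() =>
    typeof window === "undefined" ? "" : (localStorage.getItem(DRAFT_KEY) ?? "")
  );
  React.useEffect(() => {
    localStorage.setItem(DRAFT_KEY, draft);
  }, [draft]);
  return [draft, setDraft] as const;
}
```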
The agent is built on the Vercel AI SDK's ToolLoopAgent. Rather than answering in a single shot, a tool loop agent can pause mid-response, call one of its tools to fetch information or run a calculation, and then continue writing. It repeats this process up to 8 steps per response.
This matters for ACT prep because raw language model answers about test strategy can be vague or wrong. Giving the agent tools lets it ground answers in real source material, build structured study plans, and verify its own math before responding.
The model defaults to openai/gpt-5.4-nano but can be overridden with the AGENT_MODEL environment variable.
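Putting those pieces together, the agent construction plausibly looks something like the sketch below. The system prompt and module paths are assumptions, and option names (e.g. system vs. instructions) have shifted across SDK versions, but `stepCountIs` is the SDK's standard way to cap a tool loop:

```ts
import { ToolLoopAgent, stepCountIs } from "ai";
// Hypothetical module path for the four tools described below.
import { searchActSources, listActTools, planDrill, verifyDerivative } from "@/lib/agent/tools";

export const agent = new ToolLoopAgent({
  // AGENT_MODEL overrides the default model id at deploy time.
  model: process.env.AGENT_MODEL ?? "openai/gpt-5.4-nano",
  system: "You are an ACT tutor. Search the source library before giving strategy advice.",
  tools: { searchActSources, listActTools, planDrill, verifyDerivative },
  // Stop the tool loop after at most 8 steps per response.
  stopWhen: stepCountIs(8),
});
```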
RAG is the technique of giving an AI access to a searchable library of documents so it can look things up instead of relying purely on what it learned during training. Here is how it works in this app.
A seed script fetches two Google Docs (ACT Math Strategies and ACT Reading Strategies) using the export URL format below, splits them into chunks of roughly 1200 characters each, and stores those chunks in Convex.
```
https://docs.google.com/document/d/{documentId}/export?format=txt
```

This only runs once, or whenever the source docs need updating, via:

```bash
bun run seed:sources
```

At chat time, the agent searches those stored chunks using a Convex full-text search index and pulls the most relevant ones into its context before writing a response. This means answers about ACT strategy are grounded in the actual content of those documents rather than guesswork.
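For a concrete picture, the seed step can be sketched as below. The naive fixed-size splitter and the `insertChunk` mutation name are assumptions; the real script may well split on paragraph boundaries to land near 1200 characters.

```ts
// scripts/seed-sources.ts -- sketch only; mutation and table names assumed.
import { ConvexHttpClient } from "convex/browser";
import { api } from "../convex/_generated/api";

const convex = new ConvexHttpClient(process.env.NEXT_PUBLIC_CONVEX_URL!);

const exportUrl = (documentId: string) =>
  `https://docs.google.com/document/d/${documentId}/export?format=txt`;

// Naive fixed-size chunker; the real script targets roughly 1200 characters.
function chunkText(text: string, size = 1200): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

async function seedDoc(documentId: string, subject: string) {
  const text = await (await fetch(exportUrl(documentId))).text();
  for (const chunk of chunkText(text)) {
    // `insertChunk` is a hypothetical mutation writing to actSourceChunks.
    await convex.mutation(api.sources.insertChunk, { text: chunk, subject });
  }
}
```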
The Convex search index is defined on the actSourceChunks table like this:
```ts
.searchIndex("search_text", {
  searchField: "text",
  filterFields: ["sourceId", "subject"],
})
```

The agent has four tools it can call during a response.
searchActSources is the RAG tool. It takes a search query, runs it against the Convex chunk index, and returns the most relevant passages from the ACT strategy documents. The agent is instructed to call this before giving any ACT strategy advice.
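On the Convex side, the underlying query presumably runs against that search index; a minimal sketch (function and argument names assumed) would be:

```ts
import { v } from "convex/values";
import { query } from "./_generated/server";

// Hypothetical query backing searchActSources: full-text search over chunks.
export const searchChunks = query({
  args: { queryText: v.string(), subject: v.optional(v.string()) },
  handler: async (ctx, args) => {
    return await ctx.db
      .query("actSourceChunks")
      .withSearchIndex("search_text", (q) => {
        const search = q.search("text", args.queryText);
        // filterFields lets the query narrow by subject when the agent asks for one.
        return args.subject ? search.eq("subject", args.subject) : search;
      })
      .take(5); // top matches by relevance
  },
});
```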
listActTools tells the agent what subject areas and source documents exist in the library. This lets it understand the scope of what it can search before deciding how to answer.
planDrill generates a structured study session given a weak area and a time limit. It outputs a warmup block, a focused drill block, and a review block with specific time allocations. The output follows a fixed structure rather than being freely generated, which keeps drill plans consistent and practical. For a 20-minute session the breakdown looks like this:
| Block | Duration |
|---|---|
| Method review (warmup) | 4 min |
| Focused solving (drill) | 11 min |
| Review and log misses | 5 min |
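The fixed three-block structure suggests an output schema along these lines; the field names are illustrative, not the app's actual schema:

```ts
import { z } from "zod";

// Illustrative output schema for planDrill: three fixed blocks with minute budgets.
const drillPlanSchema = z.object({
  weakArea: z.string(),
  totalMinutes: z.number().int().positive(),
  blocks: z.tuple([
    z.object({ kind: z.literal("warmup"), label: z.string(), minutes: z.number() }),
    z.object({ kind: z.literal("drill"), label: z.string(), minutes: z.number() }),
    z.object({ kind: z.literal("review"), label: z.string(), minutes: z.number() }),
  ]),
});
```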
verifyDerivative exists because language models make calculus mistakes. When the agent proposes a derivative, this tool checks it numerically using the mathjs library. It samples several x values, estimates the true derivative using central differences, evaluates the proposed derivative at those same points, and returns a pass or fail with error details. If it fails, the agent is expected to revise its answer before responding.
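A numerical check of that kind is straightforward with mathjs. The sketch below (sample points and tolerance are assumptions) compares a central-difference estimate against the proposed derivative:

```ts
import { compile } from "mathjs";

// Compare a proposed derivative d(x) against a central-difference estimate of f'(x).
function checkDerivative(fExpr: string, dExpr: string): { pass: boolean; maxError: number } {
  const f = compile(fExpr);
  const d = compile(dExpr);
  const h = 1e-5;
  let maxError = 0;
  for (const x of [-2.3, -0.7, 0.4, 1.1, 2.6]) {
    // Central difference: (f(x+h) - f(x-h)) / 2h approximates f'(x).
    const numeric = (f.evaluate({ x: x + h }) - f.evaluate({ x: x - h })) / (2 * h);
    const proposed = d.evaluate({ x });
    maxError = Math.max(maxError, Math.abs(numeric - proposed));
  }
  return { pass: maxError < 1e-4, maxError };
}

console.log(checkDerivative("x^2", "2 * x"));     // pass: true
console.log(checkDerivative("sin(x)", "sin(x)")); // pass: false (should be cos(x))
```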
Tutorials and practice sets are written as .mdx files, which are Markdown files that can also include React components. Each file exports a metadata object like this:
```ts
export const metadata = {
  title: "Backsolving Fast Start",
  summary: "A short lesson for using answer choices as a shortcut on ACT Math.",
  subject: "Math",
  minutes: 6,
  status: "sample",
};
```

These files are registered manually in src/lib/content/posts.ts, which the app uses to render listing cards and load the right content on detail pages. Math renders via remark-math and rehype-katex, which process LaTeX syntax into properly formatted equations. Inline math uses $...$ and block math uses $$...$$.
To add a new tutorial, create the file at src/content/tutorials/my-new-lesson.mdx and add it to posts.ts. The same pattern applies for practice sets under src/content/practice/.
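The registry's exact shape isn't shown here, but based on the metadata fields above it plausibly looks like this sketch:

```ts
// src/lib/content/posts.ts -- illustrative shape; actual fields may differ.
export type Post = {
  slug: string;
  kind: "tutorial" | "practice";
  title: string;
  summary: string;
  subject: string;
  minutes: number;
  status: "sample" | "published";
};

export const posts: Post[] = [
  {
    slug: "backsolving-fast-start",
    kind: "tutorial",
    title: "Backsolving Fast Start",
    summary: "A short lesson for using answer choices as a shortcut on ACT Math.",
    subject: "Math",
    minutes: 6,
    status: "sample",
  },
  // Add a new entry here whenever you create a new .mdx file.
];
```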
The browser never talks to the database directly for protected data. All chat history requests go through Next.js API routes, which verify the Clerk session and attach CONVEX_APP_SERVER_SECRET before querying Convex. Convex rejects any request missing that secret. Chat history is scoped per user using the identifier clerk:{userId}, so one user cannot query another user's chats.
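A sketch of that server-side check inside a Convex function (the function name and index name are assumed):

```ts
import { v } from "convex/values";
import { query } from "./_generated/server";

export const listChats = query({
  args: { serverSecret: v.string(), userId: v.string() },
  handler: async (ctx, args) => {
    // Reject any caller that does not present the shared server secret.
    if (args.serverSecret !== process.env.CONVEX_APP_SERVER_SECRET) {
      throw new Error("Unauthorized");
    }
    // Chats are keyed by "clerk:{userId}", so users only ever see their own history.
    return await ctx.db
      .query("chats")
      .withIndex("by_user", (q) => q.eq("userId", `clerk:${args.userId}`))
      .collect();
  },
});
```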
Environment variables:

```env
CONVEX_DEPLOYMENT=
NEXT_PUBLIC_CONVEX_URL=
CONVEX_APP_SERVER_SECRET= # required, not in .env.example
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=
CLERK_SECRET_KEY=
CLERK_JWT_ISSUER_DOMAIN=
AI_GATEWAY_API_KEY=
AGENT_MODEL= # optional override
```