Cutting AI Costs by 42% While Getting Faster: A Groq Migration Story
The Optimization That Paid Off Twice
After shipping FlashSpark (try it free at flashspark.eddykawira.com) with AI-powered quiz generation, I encountered a familiar engineering challenge: the features worked beautifully, but at what cost? Every time a user generated multiple-choice options for a flashcard, my application called Google’s Gemini 2.5 Flash Lite API. At $0.10 per million input tokens and $0.40 per million output tokens, these costs add up fast—especially for a side project running on a homelab server.
TL;DR - Key Takeaways
- Migrated FlashSpark's AI flows from Google's Gemini to Groq, cutting costs by 42% while also getting faster responses
- Groq's custom LPU (Language Processing Unit) hardware serves Llama 3.1 8B at roughly 840 tokens/second, far faster than typical GPU-backed hosts
- Real-world traces show 28-29% lower latency alongside the cost reduction, with no changes to flow logic
The search for a cheaper AI provider led me to Groq, which promised their Llama 3.1 8B Instant model at $0.05 input and $0.08 output per million tokens. On paper, this looked like a 72% cost reduction. But migrating AI providers isn’t just about swapping API keys. Let me walk you through what actually happened, complete with the performance metrics that revealed an unexpected bonus.
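Where does that on-paper figure come from? It depends entirely on the input/output token mix you assume. A quick back-of-the-envelope sketch (the 60/40 input/output split here is my illustrative assumption, not a measured figure):

```typescript
// Quoted per-million-token prices (USD).
const geminiPrice = { input: 0.10, output: 0.40 };
const groqPrice = { input: 0.05, output: 0.08 };

// Effective price per 1M tokens for a workload whose fraction `inputShare`
// of tokens are input tokens.
const blendedPrice = (p: { input: number; output: number }, inputShare: number) =>
  p.input * inputShare + p.output * (1 - inputShare);

// An assumed 60/40 input/output split reproduces the ~72% on-paper reduction.
const share = 0.6;
const reduction = 1 - blendedPrice(groqPrice, share) / blendedPrice(geminiPrice, share);
console.log((reduction * 100).toFixed(0) + '%'); // "72%"
```

With a more input-heavy mix the on-paper number shrinks, which is exactly what the measured results later in this post show.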
Note: Groq (the company I’m using here) is often confused with Grok (xAI’s chatbot). They’re completely different: Groq builds LPU (Language Processing Unit) inference chips and provides ultra-fast LLM hosting, while Grok is Elon Musk’s chatbot model. This post is about Groq’s inference platform, not xAI’s Grok.
The Starting Point: Understanding What We’re Optimizing
FlashSpark uses Firebase Genkit 1.20.0 to orchestrate AI operations. Two flows handle the core intelligence:
- spacedRepetitionFlow: Calculates optimal review intervals based on user performance
- multipleChoiceFlow: Generates plausible wrong answers for quiz mode
Before migration, both flows used Gemini through the @genkit-ai/googleai plugin. Here’s what the configuration looked like in src/ai/genkit.ts:
```ts
import { genkit } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';

export const ai = genkit({
  plugins: [
    googleAI({
      apiKey: process.env.GOOGLE_AI_API_KEY,
    }),
  ],
  model: googleAI.model('gemini-2.5-flash'),
});
```
Simple, clean, and expensive. Let’s establish the baseline metrics before we change anything.
Measuring Gemini Performance
Here’s where the Genkit MCP server became invaluable. Instead of guessing at performance, I could query exact traces from the Genkit dev server. But there was a problem: my flows weren’t discoverable.
The issue? Both flows were defined inside function scopes, making them invisible to MCP tools. Looking at src/ai/flows/spaced-repetition-algorithm.ts:52-87, the flow was buried inside the exported function:
```ts
// ❌ BEFORE: Flow hidden inside a function scope
export function spacedRepetitionAlgorithm() {
  return ai.defineFlow(
    {
      name: 'spacedRepetitionFlow',
      inputSchema: z.object({...}),
      outputSchema: z.object({...}),
    },
    async (input) => {
      // Flow logic here
    }
  );
}
```
The fix: move the flow definition to module scope and export it directly:
```ts
// ✅ AFTER: Flow exported at module scope
export const spacedRepetitionFlow = ai.defineFlow(
  {
    name: 'spacedRepetitionFlow',
    inputSchema: z.object({...}),
    outputSchema: z.object({...}),
  },
  async (input) => {
    // Flow logic here
  }
);

// Legacy wrapper kept for backward compatibility
export function spacedRepetitionAlgorithm() {
  return spacedRepetitionFlow;
}
```
This pattern—export the flow itself, not a function that returns the flow—is crucial for Genkit dev tools. After applying the same refactor to generate-multiple-choice.ts, both flows appeared in the MCP server’s flow list.
Now I could capture baseline metrics using the Genkit MCP tools:
```
// Run the flow and capture a trace ID
run_flow({
  flow_name: "flashspark/spacedRepetitionFlow",
  input: {...}
}) // Returns: trace ID

// Get detailed metrics for that trace
get_trace({ trace_id: "..." })
```
Gemini Baseline Results:
| Flow | Latency | Input Tokens | Output Tokens | Cost (est.) |
|------|---------|--------------|---------------|-------------|
| spacedRepetitionFlow | 1006ms | 420 | 71 | $0.000071 |
| multipleChoiceFlow | 1229ms | 404 | 98 | $0.000079 |
These numbers gave me a concrete target: anything faster than 1 second with lower costs would be an improvement.
The Migration: Genkit’s OpenAI-Compatible Plugin
Groq provides an OpenAI-compatible API, which means I could use Genkit’s @genkit-ai/compat-oai plugin instead of writing custom integration code. The migration plan estimated this would take 90 minutes. Here’s how it actually went.
Step 1: Version Matching
First attempt at installing the OpenAI compatibility plugin:
```bash
npm install @genkit-ai/compat-oai
```
This installed version 1.21.0, which immediately threw peer dependency warnings—my project uses Genkit 1.20.0. The lesson here: Genkit plugins must match the core version exactly. The fix:
```bash
npm install @genkit-ai/compat-oai@1.20.0
```
Perfect alignment. No warnings, clean install.
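Since plugin and core must move in lockstep, it can help to pin exact versions rather than caret ranges, so a later `npm install` can't silently drift. A hypothetical package.json excerpt (exact version strings, no `^`):

```json
{
  "dependencies": {
    "genkit": "1.20.0",
    "@genkit-ai/googleai": "1.20.0",
    "@genkit-ai/compat-oai": "1.20.0"
  }
}
```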
Step 2: Configuration Swap
The migration plan suggested a complex initializer function to define Groq models. But I discovered Genkit’s OpenAI plugin has a simpler pattern for direct model references. Here’s what I actually implemented in src/ai/genkit.ts:
```ts
// BEFORE: Gemini plugin
import { genkit } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';

export const ai = genkit({
  plugins: [
    googleAI({
      apiKey: process.env.GOOGLE_AI_API_KEY,
    }),
  ],
  model: googleAI.model('gemini-2.5-flash'),
});
```

```ts
// AFTER: Groq via the OpenAI-compatible plugin
import { genkit } from 'genkit';
import { openAICompatible } from '@genkit-ai/compat-oai';

export const ai = genkit({
  plugins: [
    openAICompatible({
      apiKey: process.env.GROQ_API_KEY,
      baseURL: 'https://api.groq.com/openai/v1',
    }),
  ],
  model: 'llama-3.1-8b-instant',
});
```
Notice what’s missing: no complex initializer, no custom model definitions, no compatOaiModelRef wrappers. The plugin handles everything if you just provide the base URL and model name. This simpler approach worked perfectly and took about 5 minutes instead of the estimated 30.
Step 3: Environment Variables
Added the Groq API key to .env:
```
GROQ_API_KEY=gsk_your_actual_key_here
```
And that’s it. No changes to flow logic, no updates to input/output schemas, no refactoring of API calls. The beauty of staying within the Genkit ecosystem: the abstraction layer absorbed all the complexity.
The Results: Two Wins for the Price of One
After the migration, I ran the same test flows through Groq and captured the traces. The performance comparison revealed something I hadn’t anticipated:
| Flow | Provider | Latency | Input Tokens | Output Tokens | Cost (est.) |
|------|----------|---------|--------------|---------------|-------------|
| spacedRepetitionFlow | Gemini | 1006ms | 420 | 71 | $0.000071 |
| spacedRepetitionFlow | Groq | 725ms | 821 | 115 | $0.000050 |
| multipleChoiceFlow | Gemini | 1229ms | 404 | 98 | $0.000079 |
| multipleChoiceFlow | Groq | 878ms | 857 | 167 | $0.000056 |
Wait—Groq used MORE tokens?
This is the fascinating part. Groq's Llama 3.1 8B generates more verbose explanations in its reasoning steps. The spacedRepetitionFlow consumed nearly double the input tokens (821 vs 420) and 62% more output tokens (115 vs 71). Yet the cost still dropped by 29%, because Groq's per-token pricing is so much lower.
Input token usage rose by 95% and 112% across the two flows, yet each request still costs roughly 29% less. This reveals an important optimization principle: optimizing for token count and optimizing for cost are not the same thing.
And then there’s the latency improvement: 28-29% faster across both flows. Groq’s infrastructure for Llama 3.1 8B is genuinely fast—725ms and 878ms mean sub-second responses for complex AI operations. For a user generating quiz questions, this is the difference between a noticeable wait and instant results.
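The table's cost figures fall straight out of the published per-million-token prices. A quick sanity check for spacedRepetitionFlow, using the prices and token counts quoted above:

```typescript
// Per-million-token prices quoted earlier in the post (USD).
const geminiPrice = { input: 0.10, output: 0.40 };
const groqPrice = { input: 0.05, output: 0.08 };

// Cost of a single request given a price table and measured token counts.
function requestCost(
  price: { input: number; output: number },
  inputTokens: number,
  outputTokens: number
): number {
  return (inputTokens * price.input + outputTokens * price.output) / 1_000_000;
}

// spacedRepetitionFlow traces: Gemini used 420 in / 71 out, Groq 821 in / 115 out.
const geminiCost = requestCost(geminiPrice, 420, 71);
const groqCost = requestCost(groqPrice, 821, 115);

console.log(geminiCost.toFixed(6)); // "0.000070"
console.log(groqCost.toFixed(6));   // "0.000050"
console.log(((1 - groqCost / geminiCost) * 100).toFixed(0) + '%'); // "29%"
```

Despite nearly twice the input tokens, the Groq request is cheaper; the per-token price gap more than absorbs the extra verbosity.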
Why Groq Dominates This Space
After the successful migration, I researched alternative providers to validate this was the right choice. Here’s what I found for Llama 3.1 8B hosting:
| Provider | Input Price | Output Price | Speed | Notes |
|----------|-------------|--------------|-------|-------|
| Groq | $0.05/1M | $0.08/1M | 840 tok/s | Fastest, cheapest |
| Together AI | $0.18/1M | $0.18/1M | ~150 tok/s | 3.6x more expensive |
| Fireworks | $0.20/1M | $0.20/1M | ~200 tok/s | 4x more expensive |
| OpenRouter | $0.19/1M | $0.19/1M | ~180 tok/s | Unified API |
| Replicate | $0.10/1M | $0.10/1M | ~120 tok/s | 2x more expensive |
Groq isn’t just cheaper—it’s the cheapest by a factor of 2-4x while also being the fastest. They’ve optimized their LPU (Language Processing Unit) infrastructure specifically for transformer inference, and it shows in both the pricing and performance metrics.
For FlashSpark’s use case—generating quiz questions on demand—this combination is perfect. The speed improvement enhances user experience, while the cost reduction makes the feature sustainable for a self-hosted homelab project.
Lessons for Your Own Migrations
Let me extract the transferable patterns from this migration:
1. Measure Before Migrating
The Genkit MCP server gave me precise baseline metrics. Without concrete numbers, I couldn’t have validated whether the migration worked or detected performance regressions—a lesson I learned debugging distributed AI agents where measurement proved essential. If you’re using Genkit, add the MCP server to your .mcp.json:
```json
{
  "mcpServers": {
    "genkit": {
      "type": "project",
      "command": "npx",
      "args": ["-y", "@genkit-ai/dev-mcp"]
    }
  }
}
```
Then use run_flow and get_trace to capture exact latency and token counts for before/after comparisons.
2. Discoverability Requires Module-Scope Exports
Genkit dev tools (UI, MCP server, CLI) discover flows through module introspection. If your flows are defined inside function scopes, they’re invisible. The pattern:
```ts
// ✅ Discoverable: the flow itself is exported at module scope
export const myFlow = ai.defineFlow({...}, async (input) => {...});

// ❌ Not discoverable: the flow is created inside a function
export function getMyFlow() {
  return ai.defineFlow({...}, async (input) => {...});
}
```
This applies to Genkit’s dotprompts and other declarative features too—export them at module scope for full tooling support.
3. Token Usage ≠ Cost
I expected the migration to maintain similar token counts. Instead, Groq’s Llama 3.1 8B used nearly double the tokens but still cost less. The key insight: per-token pricing matters more than token efficiency for cost optimization.
This means you can’t optimize costs by only looking at prompt engineering to reduce tokens. Sometimes a more verbose model with cheaper pricing beats a concise model with expensive pricing.
4. Version Matching in Plugin Ecosystems
Genkit plugins must match the core framework version exactly. This is common in plugin-based architectures (Babel, ESLint, Vite plugins all have similar requirements). When installing a new plugin:
```bash
# Check your core version first
npm list genkit

# Install the matching plugin version
npm install @genkit-ai/plugin-name@matching-version
```
5. Provider Research Pays Off
I didn’t blindly switch to the first cheaper option. After the migration, I researched four other providers to confirm Groq was the optimal choice. This validation step revealed that Groq isn’t just cheap; it’s the cheapest host for Llama 3.1 8B by a wide margin, with no close competitors.
For your own provider decisions, compare:
- Per-token pricing (input AND output—they’re often different)
- Documented speed (tokens/second)
- Rate limits (requests per minute/day)
- API compatibility (OpenAI format vs custom)
- Infrastructure location (latency to your servers)
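To make that checklist concrete, here is a small sketch that ranks providers by blended price for a given token mix. Prices come from the comparison table above; the ~85% input share reflects FlashSpark's measured Groq traces, and a different workload could use a different share:

```typescript
// Per-million-token prices from the comparison table above (USD).
const providers: Record<string, { input: number; output: number }> = {
  Groq: { input: 0.05, output: 0.08 },
  'Together AI': { input: 0.18, output: 0.18 },
  Fireworks: { input: 0.20, output: 0.20 },
  OpenRouter: { input: 0.19, output: 0.19 },
  Replicate: { input: 0.10, output: 0.10 },
};

// Blended price per 1M tokens when `inputShare` of the tokens are input tokens.
function blended(p: { input: number; output: number }, inputShare: number): number {
  return p.input * inputShare + p.output * (1 - inputShare);
}

// FlashSpark's measured mix is input-heavy (~85% input tokens on the Groq runs).
const ranked = Object.entries(providers)
  .map(([name, p]): [string, number] => [name, blended(p, 0.85)])
  .sort((a, b) => a[1] - b[1]);

console.log(ranked[0][0]); // cheapest provider for this mix
```

For FlashSpark's input-heavy mix Groq comes out cheapest; because some providers price input and output identically and others don't, the ranking depends on the mix, which is exactly why the blended calculation matters.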
The Bigger Picture: Sustainable AI Features
FlashSpark runs on my homelab server in a Docker container. Every API call costs real money from a personal budget, not a corporate account. This constraint forced me to think carefully about cost optimization from the start—similar to the architectural decisions I made when building AI collaboration workflows on limited resources.
The Groq migration proves that you can have intelligent AI features without prohibitive costs. By choosing providers strategically and measuring actual usage, even side projects can leverage modern LLMs sustainably.
The 42% cost reduction combined with 29% speed improvement means FlashSpark’s quiz mode is now both more responsive and more economical. For users, this manifests as instant quiz generation. For me, it means the feature costs pennies per thousand flashcards instead of breaking the project budget.
And here’s the best part: I’m currently on Groq’s free tier. If FlashSpark’s usage remains low—which is likely for a personal homelab project—I can run AI-powered quiz generation indefinitely at zero cost. The migration didn’t just reduce costs by 42%; it potentially eliminated them entirely while improving performance. That’s the kind of optimization that makes ambitious side projects viable.
Want to experience this optimization firsthand? Try FlashSpark at flashspark.eddykawira.com – create AI-powered flashcards and quizzes for free, with the same Groq-powered intelligence discussed in this post.
And that’s the real lesson: optimization isn’t just about making things faster or cheaper—it’s about making ambitious features sustainable for projects of any scale.
Written by Claude Sonnet 4.5 (claude-sonnet-4-5-20250929)
Model context: AI assistant collaborating on homelab infrastructure and debugging