Best LLMs For Coding: Overview & Costs


November 22, 2024 · Hugo Huijer
<div class="text-lg text-gray-700 mb-8">Looking for the right LLM to help with your coding projects? I've dug through the specs and pricing of various models to help you make an informed decision. While I haven't personally tested all these models (let's be honest here!), I've analyzed their specifications and market positioning to give you a solid overview of what's available.</div><div class="text-3xl font-bold text-gray-800 mt-12 mb-6">What are the best LLMs for coding?</div><div class="text-lg text-gray-700 mb-8">When it comes to coding assistance, you'll want an LLM that not only understands your code but can also provide meaningful suggestions and help with debugging. I've selected five models that stand out for different reasons - from the premium powerhouses to the budget-friendly options that punch above their weight.</div><div class="overflow-x-auto rounded-lg border border-gray-200 shadow-sm mb-8"><table class="w-full divide-y divide-gray-200"><thead class="bg-gray-50"><tr><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Model</th><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Context Window</th><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Price (Input/Output)</th><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Best For</th></tr></thead><tbody class="bg-white divide-y divide-gray-200"><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Claude 3 Opus</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">200K tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$15/$75 per 1M tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Professional development, complex projects</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Gemini 1.5 Pro Preview</td><td class="px-6 py-4 
whitespace-nowrap text-sm text-gray-500">1M tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$0.08/$0.31 per 1M tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Large codebase analysis, best value</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Mistral Large Latest</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">128K tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$3/$9 per 1M tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Balanced performance</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Claude 3 Haiku</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">200K tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$0.25/$1.25 per 1M tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Quick coding tasks</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">GPT-4o mini</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">128K tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$0.15/$0.60 per 1M tokens</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Reliable everyday coding</td></tr></tbody></table></div><div class="flex items-center gap-4 mt-12 mb-4"><img src="/images/blog/anthropic-logo.png" alt="Anthropic Logo" class="h-12 w-auto object-contain"/><h3 class="text-2xl font-bold text-gray-800">Anthropic - Claude 3 Opus</h3></div><div class="text-lg text-gray-700 mb-6">If budget isn't your primary concern and you need the absolute best, Claude 3 Opus is your go-to choice. 
It's like having a senior developer at your fingertips - expensive, but potentially worth every penny for professional development teams.</div><div class="overflow-x-auto rounded-lg border border-gray-200 shadow-sm mb-12"><table class="w-full divide-y divide-gray-200"><thead class="bg-gray-50"><tr><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Feature</th><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Specification</th></tr></thead><tbody class="bg-white divide-y divide-gray-200"><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Context Window</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">200K tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Price (Input/Output)</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$15/$75 per 1M tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Key Strength</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Highest capability model</td></tr></tbody></table></div><div class="flex items-center gap-4 mt-12 mb-4"><img src="/images/blog/google-gemini-logo.png" alt="Google Gemini Logo" class="h-12 w-auto object-contain"/><h3 class="text-2xl font-bold text-gray-800">Google - Gemini 1.5 Pro Preview</h3></div><div class="text-lg text-gray-700 mb-6">This is the hidden gem in the current LLM landscape. With its massive 1M token context window and surprisingly affordable pricing, it's perfect for developers working with large codebases. 
The preview pricing makes it an absolute steal for what you get.</div><div class="overflow-x-auto rounded-lg border border-gray-200 shadow-sm mb-12"><table class="w-full divide-y divide-gray-200"><thead class="bg-gray-50"><tr><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Feature</th><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Specification</th></tr></thead><tbody class="bg-white divide-y divide-gray-200"><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Context Window</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">1M tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Price (Input/Output)</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$0.08/$0.31 per 1M tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Key Strength</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Massive context window</td></tr></tbody></table></div><div class="flex items-center gap-4 mt-12 mb-4"><img src="/images/blog/mistral-logo.png" alt="Mistral Logo" class="h-12 w-auto object-contain"/><h3 class="text-2xl font-bold text-gray-800">Mistral - Mistral Large Latest</h3></div><div class="text-lg text-gray-700 mb-6">Mistral's flagship model hits a sweet spot between capability and cost. 
While not as well-known as some competitors, it's gaining traction in the developer community for its solid performance and reasonable pricing.</div><div class="overflow-x-auto rounded-lg border border-gray-200 shadow-sm mb-12"><table class="w-full divide-y divide-gray-200"><thead class="bg-gray-50"><tr><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Feature</th><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Specification</th></tr></thead><tbody class="bg-white divide-y divide-gray-200"><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Context Window</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">128K tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Price (Input/Output)</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$3/$9 per 1M tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Key Strength</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Balanced performance</td></tr></tbody></table></div><div class="flex items-center gap-4 mt-12 mb-4"><img src="/images/blog/anthropic-logo.png" alt="Anthropic Logo" class="h-12 w-auto object-contain"/><h3 class="text-2xl font-bold text-gray-800">Anthropic - Claude 3 Haiku</h3></div><div class="text-lg text-gray-700 mb-6">Think of Haiku as the quick-witted cousin in the Claude family. It's perfect for those rapid coding sessions where you need quick feedback or assistance. 
While not as powerful as Opus, it maintains the same impressive context window at a fraction of the cost.</div><div class="overflow-x-auto rounded-lg border border-gray-200 shadow-sm mb-12"><table class="w-full divide-y divide-gray-200"><thead class="bg-gray-50"><tr><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Feature</th><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Specification</th></tr></thead><tbody class="bg-white divide-y divide-gray-200"><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Context Window</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">200K tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Price (Input/Output)</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$0.25/$1.25 per 1M tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Key Strength</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Fast responses</td></tr></tbody></table></div><div class="flex items-center gap-4 mt-12 mb-4"><img src="/images/blog/open-ai-logo.png" alt="OpenAI Logo" class="h-12 w-auto object-contain"/><h3 class="text-2xl font-bold text-gray-800">OpenAI - GPT-4o mini</h3></div><div class="text-lg text-gray-700 mb-6">OpenAI's offering brings their proven technology at a more accessible price point. 
It's like having a reliable coding buddy who might not know everything but consistently delivers solid advice.</div><div class="overflow-x-auto rounded-lg border border-gray-200 shadow-sm mb-12"><table class="w-full divide-y divide-gray-200"><thead class="bg-gray-50"><tr><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Feature</th><th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Specification</th></tr></thead><tbody class="bg-white divide-y divide-gray-200"><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Context Window</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">128K tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Price (Input/Output)</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">$0.15/$0.60 per 1M tokens</td></tr><tr><td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">Key Strength</td><td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">Reliable performance</td></tr></tbody></table></div><div class="text-lg text-gray-700 mt-8 mb-4">Remember, the best LLM for your coding needs depends on various factors - your budget, project size, and specific requirements. I'd suggest starting with Gemini 1.5 Pro Preview if you're looking for the best value, or Claude 3 Haiku if you need quick assistance without breaking the bank. For professional teams working on complex projects, Claude 3 Opus might be worth the investment despite its higher cost.</div><div class="text-lg text-gray-700 mb-8">Feel free to mix and match these options - there's nothing wrong with using different models for different tasks. After all, you wouldn't use a sledgehammer to hang a picture frame, right?</div>
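<div class="text-lg text-gray-700 mb-4">To put the per-million-token prices in perspective, here's a small Python sketch that estimates what a single request would cost under each model's listed rates. The prices are simply the ones quoted in the tables above, and the 4K-input / 1K-output token counts are illustrative assumptions - always check the providers' current pricing pages before budgeting.</div>

```python
# Rough per-request cost estimator using the per-1M-token prices
# quoted in the comparison table above (assumed, not re-verified).

PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "Claude 3 Opus": (15.00, 75.00),
    "Gemini 1.5 Pro Preview": (0.08, 0.31),
    "Mistral Large Latest": (3.00, 9.00),
    "Claude 3 Haiku": (0.25, 1.25),
    "GPT-4o mini": (0.15, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request: tokens times rate, scaled from per-1M pricing."""
    input_rate, output_rate = PRICES[model]
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Illustrative scenario: a 4K-token prompt (e.g. a file of code plus a
# question) with a 1K-token reply.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 4_000, 1_000):.4f} per request")
```

<div class="text-lg text-gray-700 mb-8">Run for that scenario, Opus comes out around 13.5 cents per request while the budget models land well under a cent - which is why the cheaper tiers are so attractive for high-volume, everyday coding tasks.</div>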
