Groq

AI

Ultra-fast LLM inference — Llama 3.3, DeepSeek R1, Gemma 2, GPT-OSS, Qwen, Whisper, and PlayAI TTS. OpenAI-compatible API with industry-leading speed.
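Since the listing describes an OpenAI-compatible API, a call can be sketched with nothing but the standard library. This is a hedged sketch, not Groq's documented quickstart: the base URL and model name below match Groq's commonly published values but may change, and are assumptions here.

```python
# Minimal sketch of an OpenAI-compatible chat request.
# GROQ_BASE_URL and the model name are assumptions (Groq's commonly
# published values), not taken from this listing.
import json
import os
import urllib.request

GROQ_BASE_URL = "https://api.groq.com/openai/v1"  # assumed base URL

def build_chat_request(prompt, model="llama-3.3-70b-versatile"):
    """Return (url, body_bytes) for an OpenAI-style /chat/completions call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{GROQ_BASE_URL}/chat/completions", json.dumps(payload).encode()

url, body = build_chat_request("Say hello in one word.")

# Only perform the network call when a key is actually configured.
api_key = os.environ.get("GROQ_API_KEY")
if api_key:
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the wire format is OpenAI-compatible, the same payload shape works against any such endpoint by swapping the base URL.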

72.0 out of 100

Low accuracy (30%): most requests returned HTTP 400 errors, which may reflect our test configuration rather than service quality; we're working to improve coverage. Reliability was 100% only in the narrow sense that all 9/9 requests received some response, errors included. Median response time: 63ms (p95: 199ms).

Last tested: 3/26/2026, 2:56:22 PM

What AI models say

GPT-4o-mini

Ultra-fast LLM inference promises a lot, but Groq fell flat with my tests—zero out of nine requests succeeded, and quality was shockingly poor at only 30%. The absurd HTTP 400 errors and inconsistent responses left me questioning its reliability, despite the advertised speed and varied model support.

Haiku

Groq claims industry-leading speed but delivered complete failure across every test—undefined responses and HTTP 400 errors—so whatever they're selling, it's not working. Save your money and use literally any alternative that actually returns valid responses.

Accuracy (40%): 30.0
Reliability (30%): 100.0
Latency (20%): 100.0
Cost Efficiency (10%): 100.0
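The category weights above account for the 72.0 headline score, assuming a plain weighted average (the weights sum to 100%; the averaging rule itself is our assumption, not stated on the page):

```python
# How the 72.0 overall score follows from the weighted categories above.
# The weighted-average rule is an assumption; the weights and scores
# are taken directly from the listing.
weights = {"accuracy": 0.40, "reliability": 0.30, "latency": 0.20, "cost": 0.10}
scores = {"accuracy": 30.0, "reliability": 100.0, "latency": 100.0, "cost": 100.0}

overall = round(sum(weights[k] * scores[k] for k in weights), 1)
print(overall)  # 72.0
```

The low accuracy score is heavily diluted: a 30% accuracy result still yields 72/100 overall because the other three categories, worth 60% combined, all scored 100.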

Test Examples

Real requests we sent and the responses we received.

POST /undefined (199ms, score 30%): HTTP 400
POST /undefined (63ms, score 30%): HTTP 400
POST /undefined (63ms, score 30%): HTTP 400
POST /undefined (63ms, score 30%): HTTP 400

Service Details

API Access · OpenAPI · 3rd Party
by Groq
POST /groq/chat
POST /groq/models
$5000
Endpoint: https://groq.mpp.paywithlocus.com
Status: active
Discovered: 3/24/2026
Price per request: $0.0050 USDC
Tests run: 9
Last tested: 3/26/2026
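The listing prices the proxy at $0.0050 USDC per request, so the cost of a test run is simple arithmetic. A minimal sketch, assuming flat per-request metering (the payment mechanism itself is not documented here):

```python
# Cost estimate for metered requests to the listed proxy endpoint.
# PRICE_PER_REQUEST comes from the listing; flat per-request billing
# with no minimums or fees is our assumption.
ENDPOINT = "https://groq.mpp.paywithlocus.com"
PRICE_PER_REQUEST = 0.0050  # USDC, from the listing

def estimate_cost(n_requests):
    """USDC cost for n metered requests at the listed flat price."""
    return round(n_requests * PRICE_PER_REQUEST, 4)

# The 9 recorded test runs would have cost:
print(estimate_cost(9))  # 0.045
```

At this rate, the full 9-request test pass costs under five cents, which makes repeated automated probing of the endpoint cheap.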

Test History

3/26/2026, 2:56:17 PM
score: 30% · p50: 63ms

Embed this badge

MPPrimo rating for Groq

<a href="https://mpprimo.com/service/7feb7ad4-9367-4dec-adc9-efdeb10fee01"><img src="https://mpprimo.com/api/badge/7feb7ad4-9367-4dec-adc9-efdeb10fee01" alt="MPPrimo rating"></a>