06/03/2025 - Introducing GroqCloud™ LoRA Fine-Tune Support: Unlock Efficient Model Adaptation for Enterprises
11/15/2024 - Groq First Generation 14nm Chip Just Got a 6x Speed Boost: Introducing Llama 3.3 70B Speculative Decoding on GroqCloud™
10/09/2024 - Whisper Large v3 Turbo Now Available on Groq, Combining Speed & Quality for Speech Recognition
09/12/2024 - Unleashing the Power of Fast AI Inference: Groq and Aramco Digital Partner to Establish World-leading Data Center
08/20/2024 - Distil-Whisper is Now Available to the Developer Community on GroqCloud™ for Faster and More Efficient Speech Recognition
06/24/2024 - Groq Runs Whisper Large V3 at a 164x Speed Factor According to New Artificial Analysis Benchmark
04/20/2024 - 12 Hours Later, Groq Deploys Llama 3 Instruct (8 & 70B) by Meta AI on Its LPU™ Inference Engine
02/08/2024 - ArtificialAnalysis.ai LLM Benchmark Doubles Axis To Fit New Groq LPU™ Inference Engine Performance Results