ChatGPT, Gemini and Copilot are three artificial intelligence (AI) chatbots that I use every day to support my writing. All three have their own strengths, but they share one common drawback: relatively long processing times. On average, each takes around 5-10 seconds to start responding, and up to 20 seconds to deliver the complete answer.
Enter Groq, an AI chatbot that can deliver full answers to questions in under two seconds. Groq is faster because it runs on a Language Processing Unit (LPU), a chip the company designed itself. According to Groq, the LPU can accelerate large language model (LLM) processing several times over compared with existing CPUs and GPUs.
In my own tests, Groq processed over 500 tokens per second, far ahead of Copilot's roughly 18 tokens per second. At those rates, a 400-token answer arrives in under a second on Groq versus more than 20 seconds on Copilot. Paired with a voice-based virtual assistant, Groq can respond in real time, as if you were chatting with a real human.
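For readers who want to check throughput figures like these themselves, here is a minimal sketch of how one might time a streaming response. It assumes Groq's official Python SDK (pip install groq) and its OpenAI-style chat completions endpoint; the model name is illustrative, and counting streamed chunks is only a rough stand-in for exact token counts.

```python
import time

from groq import Groq  # Groq's official Python SDK

client = Groq()  # reads the GROQ_API_KEY environment variable

start = time.perf_counter()
token_count = 0

# Stream the completion so chunks can be counted as they arrive.
stream = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # illustrative; one of the Mixtral models Groq hosts
    messages=[{"role": "user", "content": "Explain what an LPU is in two paragraphs."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        # Each streamed chunk typically carries about one token, so
        # counting chunks gives a serviceable throughput approximation.
        token_count += 1

elapsed = time.perf_counter() - start
print(f"{token_count} tokens in {elapsed:.2f}s "
      f"≈ {token_count / elapsed:.0f} tokens/second")
```

Measured this way, anything in the hundreds of tokens per second is effectively instant for a reader, since people read far more slowly than that.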
Fast as Groq is, I found its weakness lies in the LLaMA 2 and Mixtral models it runs, because the information they provide is outdated. For example, Groq claims the Snapdragon 8 Gen 3 and the MacBook Pro M3 are products that do not exist.
It also hallucinates: asked for a list of the biggest AI news of 2024, it offered stories dated March 2024, a month that had not yet arrived! But ask it about Tun Mahathir, for example, and Groq delivers a long answer about the former prime minister in under four seconds.
Groq was founded in 2016 by Jonathan Ross, who previously worked at Google, where he helped develop the Tensor Processing Unit (TPU) chip that now powers many of Google's AI systems.