Guru is an ardent explorer of the domains where neuroscience, differential geometry and artificial intelligence converge.
He has had the fortune of learning from the best, and standing on the shoulders of giants, while at IIT-M, CMU, and most recently Caltech, where he earned his PhD in computational neuroscience.
Things he loves (apart from Neuro-AI): running, the 5AM club, and studying Eastern philosophy.
TL;DR: We benchmarked several popular open-source LLMs (including the latest Llama-v2-7b-chat) to estimate both the frequency and the degree of hallucination. Overall, we find that, on average, popular open-source models hallucinate close to 55% of the time on a context-aware Q&A task when tested without any tuning.
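To make the two quantities concrete, here is a minimal sketch of how "degree" (how much of an answer is unsupported) and "frequency" (how often answers cross a hallucination threshold) could be scored on a context-aware Q&A task. The token-overlap metric and the 0.5 threshold below are illustrative assumptions, not the actual metric used in the benchmark.

```python
# Illustrative sketch only: scores hallucination by token overlap with the
# provided context. This is NOT the benchmark's actual metric.

def hallucination_degree(answer: str, context: str) -> float:
    """Fraction of answer tokens that never appear in the context."""
    context_tokens = set(context.lower().split())
    answer_tokens = answer.lower().split()
    if not answer_tokens:
        return 0.0
    unsupported = [t for t in answer_tokens if t not in context_tokens]
    return len(unsupported) / len(answer_tokens)

def hallucination_rate(pairs, threshold: float = 0.5) -> float:
    """Fraction of (answer, context) pairs whose degree exceeds a threshold
    (the 'frequency' of hallucination across a dataset)."""
    degrees = [hallucination_degree(a, c) for a, c in pairs]
    return sum(d > threshold for d in degrees) / len(degrees)
```

A grounded answer scores near 0.0, a fully fabricated one near 1.0; averaging the thresholded degrees over a dataset gives the kind of headline rate quoted above.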