Episode 25: 1.5 years of LLMs - hope, hype and hallucination
Some 18 months after the release of ChatGPT, the verdict is still out on the role that large language models (LLMs) will play in biotech, pharma, and medicine. On paper, the range of tasks that LLMs can perform in biomedical research and healthcare is vast—excavating relevant information for drug discovery from mountains of scientific literature, designing novel proteins, transcribing doctors' notes, aiding diagnostic decision-making, and acting as patient-facing chatbots.
But given the models’ propensity to hallucinate, we need to define how much error we can tolerate for different LLM use cases in the biomedical fields and create evaluation frameworks that allow us to apply the models confidently. In some cases, it might turn out that the time spent on human supervision of the model will outweigh the efficiency gain.
In episode 25 of We’re doomed, we’re saved, we talk to idalab founder and mathematician Paul von Bünau about the promise and challenges of LLMs in the biomedical field and ask whether we can ever stop them from hallucinating.
Content and Editing:
Louise von Stechow and Andreas Horchler
Disclaimer:
Louise von Stechow & Andreas Horchler and their guests express their personal opinions, which are founded on research on the respective topics, but do not claim to give medical, investment or even life advice in the podcast.
Learn more about the future of biotech in our podcasts and keynotes. Contact us here:
scientific communication: https://science-tales.com/
Podcasts: https://www.podcon.de/
Keynotes: https://www.zukunftsinstitut.de/louise-von-stechow
Image:
jo-coenen-studio-dries-2-6-yST9mzlMVLQ via Unsplash