Finetuning LLaMA on Medical Papers
The result is PMC-LLaMA, trained on 4.8 million medical papers.
Welcome Back!
As some of you know, I'm a big follower of A.I. in the future of healthcare, so I thought I had to cover this. From this tweet:
Finetuning LLaMA on Medical Papers
- fine-tunes LLaMA on 4.8 million biomedical papers
- enhances capabilities in the medical domain
- the proposed model, PMC-LLaMA, achieves high performance on biomedical QA benchmarks
paper: https://arxiv.org/abs/2304.14454
code: https://github.com/chaoyi-wu/PMC-LLaMA
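The recipe in the tweet (continued causal-LM training on paper text) starts by turning the corpus into fixed-length token blocks. Here is a minimal sketch of that data-preparation step, not the authors' code; a whitespace split stands in for the LLaMA tokenizer, and `chunk_corpus` is a hypothetical helper name:

```python
# Sketch (not the PMC-LLaMA code) of preparing papers for causal-LM
# fine-tuning: concatenate documents, then slice into fixed-size blocks.

def chunk_corpus(papers, block_size=8):
    """Concatenate tokenized papers and split into fixed-size blocks."""
    tokens = []
    for paper in papers:
        tokens.extend(paper.split())  # stand-in for a real tokenizer
        tokens.append("</s>")         # document separator / end-of-text
    # drop the ragged tail so every block has exactly block_size tokens
    n = (len(tokens) // block_size) * block_size
    return [tokens[i:i + block_size] for i in range(0, n, block_size)]

papers = [
    "aspirin reduces platelet aggregation in patients",
    "metformin improves glycemic control in type two diabetes",
]
blocks = chunk_corpus(papers, block_size=8)
print(len(blocks))                       # 2
print(all(len(b) == 8 for b in blocks))  # True
```

Each block then becomes one training example whose labels are the shifted input tokens, the standard next-token-prediction setup used when adapting a base model like LLaMA to a new domain corpus.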
With the emergence of a "Linux of A.I." moment (mid 2023), this is more interesting than it first appears.