Document details
ID

oai:arXiv.org:2409.05732

Topic
Computer Science - Computation and...
Author
Zhou, Meng; Parmar, Surajsinh; Bhatti, Anubhav
Category

Computer Science

Year

2024

Listing date

11 September 2024

Keywords
language instruction fine-tuning

Abstract

Open-source, multilingual medical large language models (LLMs) have the potential to serve linguistically diverse populations across different regions.

Adapting generic LLMs for healthcare often requires continual pretraining, but this approach is computationally expensive and sometimes impractical.

Instruction fine-tuning on a single task does not always yield optimal performance, because the model lacks the broader domain knowledge it needs to understand and reason effectively across diverse scenarios.

To address these challenges, we introduce two multilingual instruction fine-tuning datasets, MMed-IFT and MMed-IFT-MC, containing over 200k high-quality medical samples in six languages.

We propose a two-stage training paradigm: the first stage injects general medical knowledge using MMed-IFT, while the second stage fine-tunes the model on task-specific multiple-choice questions using MMed-IFT-MC.

Our method achieves competitive results on both English and multilingual benchmarks, striking a balance between computational efficiency and performance.

We plan to make our dataset and model weights public at https://github.com/SpassMed/Med-Llama3 in the future.
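The two-stage paradigm described in the abstract can be approximated with two sequential causal-LM fine-tuning passes over the two instruction datasets. The minimal sketch below uses Hugging Face transformers and datasets; the base model name, the JSONL file names (mmed_ift.jsonl, mmed_ift_mc.jsonl), the "text" field, and all hyperparameters are illustrative assumptions, not the authors' released setup.

    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    BASE_MODEL = "meta-llama/Meta-Llama-3-8B"  # assumed base model; the repo name suggests Llama 3

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

    def tokenize(batch):
        # Assumes each record has a single "text" field with prompt and answer concatenated.
        return tokenizer(batch["text"], truncation=True, max_length=1024)

    def run_stage(model, data_file, output_dir):
        """One fine-tuning pass over one instruction dataset (standard causal-LM objective)."""
        ds = load_dataset("json", data_files=data_file)["train"]
        ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)
        args = TrainingArguments(
            output_dir=output_dir,
            per_device_train_batch_size=2,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
            learning_rate=2e-5,
            logging_steps=50,
            save_strategy="epoch",
        )
        trainer = Trainer(
            model=model,
            args=args,
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
        )
        trainer.train()
        return trainer.model

    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

    # Stage 1: inject general medical knowledge with the MMed-IFT samples.
    model = run_stage(model, "mmed_ift.jsonl", "stage1-general-medical")

    # Stage 2: task-specific fine-tuning on multiple-choice questions (MMed-IFT-MC).
    model = run_stage(model, "mmed_ift_mc.jsonl", "stage2-mcq")

    model.save_pretrained("med-llama3-two-stage")

Running the second stage on the checkpoint produced by the first is what distinguishes this from single-stage instruction tuning; details such as parameter-efficient adapters or sequence packing are left out of the sketch.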

Comment: Technical Report v1, work in progress

Zhou, Meng; Parmar, Surajsinh; Bhatti, Anubhav, 2024, Towards Democratizing Multilingual Large Language Models For Medicine Through A Two-Stage Instruction Fine-tuning Approach
