Exploratory practice of generative large language models in the construction of medical item banks
Item development in healthcare profession education is time-consuming and heavily reliant on content experts. While large language models (LLMs) offer a new way to reduce this burden, the quality of the generated items is largely contingent on the prompt. To improve the quality of medical test items and help medical educators use LLMs effectively for item writing, this article surveys the prompt engineering strategies commonly used with LLMs. Using item generation on "postoperative bile leakage" as an example, it demonstrates the effectiveness of Zero-shot, Few-shot, Chain of Thought (CoT), CoT with Self-Consistency (CoT-SC), and Tree of Thoughts (ToT) prompting. Zero-shot and Few-shot methods are straightforward to apply but somewhat limited in item diversity and depth. By contrast, prompting strategies that incorporate "thought" components can guide the LLM through item-writing stages of drafting, refining, comparing, and finalizing, thereby raising item quality. Although refined prompts lead to notable improvements in item-writing performance, substantial room remains for exploring and optimizing prompt formulations and strategies, and further research is needed. Advancing prompt engineering techniques promises to raise the standard of item bank development in medical education.
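The contrast between the strategies named above can be sketched as prompt templates. This is a minimal illustration, not the prompts used in the study: the topic "postoperative bile leakage" comes from the article, but the exact wording of each template and the helper names (`zero_shot_prompt`, `few_shot_prompt`, `cot_prompt`) are hypothetical.

```python
# Hypothetical prompt templates illustrating the strategies discussed
# in the abstract; the wording is an assumption, not the authors' prompts.

def zero_shot_prompt(topic: str) -> str:
    """Zero-shot: a direct instruction with no worked examples."""
    return (
        f"Write one single-best-answer multiple-choice question on "
        f"'{topic}' with five options (A-E), and indicate the correct answer."
    )

def few_shot_prompt(topic: str, examples: list[str]) -> str:
    """Few-shot: prepend worked items so the model imitates their format."""
    shots = "\n\n".join(f"Example:\n{e}" for e in examples)
    return f"{shots}\n\nNow write a similar question on '{topic}'."

def cot_prompt(topic: str) -> str:
    """Chain of Thought: ask for intermediate reasoning before the item."""
    return (
        f"Think step by step: first list the key learning points of "
        f"'{topic}', then draft a clinical vignette, and finally write "
        f"the question stem, five options, and the correct answer."
    )

# Example usage: build a few-shot prompt from one worked item.
prompt = few_shot_prompt("postoperative bile leakage", ["<worked item here>"])
```

CoT-SC would sample several such CoT responses and keep the most consistent answer, while ToT would branch and compare multiple drafts before finalizing; both build on the `cot_prompt` pattern rather than requiring a different template.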