Item type | Symposium(1) |
Release date (公開日) | 2024-09-10 |
Title (タイトル) | An Empirical Study on Small Language Models in Sentiment Analysis for Software Engineering |
Language (言語) | en / eng |
Keywords (キーワード) |
Subject scheme (主題Scheme) | Other |
Subject (主題) | 大規模言語モデル (Large Language Models) |
Resource type (資源タイプ) | conference paper |
Resource type identifier (資源タイプ識別子) | http://purl.org/coar/resource_type/c_5794 |
Author affiliation (著者所属) | Kyushu University (all four authors) |
Author affiliation, English (著者所属(英)) | en | Kyushu University (all four authors) |
Authors (著者名) | Chunrun Tao; Honglin Shu; Masanari Kondo; Yasutaka Kamei |
Authors, English (著者名(英)) | Chunrun Tao; Honglin Shu; Masanari Kondo; Yasutaka Kamei |
Abstract (論文抄録) |
Description type (内容記述タイプ) | Other |
Description (内容記述) | Software engineering has become important in both daily life and scientific research. Quickly understanding developers' emotions, especially negative ones, during the software development process, as well as a software product's reputation and user feedback, is crucial in software engineering today. Over the years, many tools have been developed for Sentiment Analysis for Software Engineering (SA4SE), but capturing sentiment efficiently and accurately remains challenging. Fine-tuned models perform well but rely on large amounts of high-quality labeled data. Large Language Models (LLMs) are relatively easy to use and do not depend on such datasets, but their performance is generally mediocre except in a few cases, and they require substantial computational resources. In this study, we introduce Small Language Models (SLMs) and empirically determine their characteristics. We also compare their performance with that of existing models to generalize SLMs' characteristics and see whether they improve performance. In addition, the emergence of various chatbots provides this research with a new opportunity: Language Model (LM) Negotiation. This study examines whether it can improve performance compared to a single LM. The experimental results show that SLMs currently perform similarly to LLMs, indicating that SLMs have good potential for this task. Additionally, LM Negotiation slightly improves performance compared to individual models. |
Abstract, English (論文抄録(英)) |
Description type (内容記述タイプ) | Other |
Description (内容記述) | Identical to the abstract above. |
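Note: as a purely illustrative aid and not the authors' actual pipeline, the sketch below shows what zero-shot SA4SE with a small language model can look like using the Hugging Face transformers text-generation pipeline; the model checkpoint, prompt wording, and label parsing are assumptions made for this example only.

# Illustrative sketch only: zero-shot sentiment classification of a
# software-engineering comment with a small, locally runnable language model.
# The checkpoint below is an assumed example of an "SLM"; any small
# instruction-tuned model could be substituted.
from transformers import pipeline

classifier = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

def classify_sentiment(text: str) -> str:
    """Ask the model to label a developer comment as positive/negative/neutral."""
    prompt = (
        "Classify the sentiment of the following software-engineering comment "
        "as positive, negative, or neutral. Answer with one word.\n"
        f"Comment: {text}\nSentiment:"
    )
    out = classifier(prompt, max_new_tokens=5, do_sample=False)
    # The pipeline returns the prompt plus the continuation; keep only the label.
    return out[0]["generated_text"][len(prompt):].strip().lower()

print(classify_sentiment("This API is a nightmare to debug."))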
Bibliographic information (書誌情報) | ソフトウェアエンジニアリングシンポジウム2024論文集 (Proceedings of the Software Engineering Symposium 2024), Vol. 2024, pp. 130-136, published 2024-09-10 |
Publisher (出版者) | Language: ja | 情報処理学会 (Information Processing Society of Japan) |