
An Empirical Study on Small Language Models in Sentiment Analysis for Software Engineering

https://ipsj.ixsq.nii.ac.jp/records/239252
63244e35-31d7-4db7-b442-505438122dc2
File: IPSJ-SES2024024.pdf (138.4 kB)
Available for download from September 10, 2026.
License: Copyright (c) 2024 by the Information Processing Society of Japan
Price: Non-members: ¥660, IPSJ members: ¥330, SE members: ¥0, DLIB members: ¥0
Item type: Symposium(1)
Publication date: 2024-09-10
Title (en): An Empirical Study on Small Language Models in Sentiment Analysis for Software Engineering
Language: eng
Keywords
Subject scheme: Other
Subject: 大規模言語モデル (Large Language Models)
Resource type
Resource type identifier: http://purl.org/coar/resource_type/c_5794
Resource type: conference paper
Author affiliations (ja/en): Kyushu University (all four authors)
Authors (ja/en): Chunrun, Tao; Honglin, Shu; Masanari, Kondo; Yasutaka, Kamei
Abstract (ja/en)
Description type: Other
Description: Software engineering has become very important in daily life and scientific research. The ability to quickly understand developers' emotions, especially negative ones, during the software development process, as well as the reputation and user feedback of the software, is crucial in software engineering today. Over the years, many tools have been developed for Sentiment Analysis for Software Engineering (SA4SE), but capturing sentiment efficiently and accurately remains challenging. Fine-tuned models perform well but rely on large, high-quality labeled datasets. While Large Language Models (LLMs) are relatively easy to use and do not depend on such datasets, they generally show mediocre performance except in a few cases, and they require a large amount of computational resources. In this study, we introduce Small Language Models (SLMs) and empirically determine their characteristics. We also compare their performance with existing models to generalize SLMs' characteristics and see whether they improve performance. In addition, the emergence of various chatbots provides this research with a new opportunity: Language Models (LMs) Negotiation. This study examines whether it can improve performance compared to a single LM. The experimental results show that SLMs currently perform similarly to LLMs, indicating that SLMs have good potential for this task. Additionally, LMs Negotiation slightly improves performance compared to individual models.
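
The abstract describes classifying developer sentiment with small language models in the SA4SE setting. As a purely illustrative sketch that is not taken from the paper, the snippet below shows one way such off-the-shelf classification of developer comments could look; the model name, example comments, and label set are assumptions, and the paper's actual setup may differ.

```python
# Illustrative sketch only -- not the authors' implementation.
# Model choice, prompts/labels, and example comments are assumptions.
from transformers import pipeline

# A small pretrained sentiment classifier stands in for an "SLM" here;
# SA4SE work typically labels developer text as positive / negative / neutral.
classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest",  # assumed model
)

comments = [
    "This API is a nightmare to debug.",
    "Thanks, the fix works perfectly now!",
    "Updated the README as requested.",
]

# Classify each developer comment and print label, confidence, and text.
for text, result in zip(comments, classifier(comments)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {text}")
```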
Bibliographic information: ソフトウェアエンジニアリングシンポジウム2024論文集 (Proceedings of the Software Engineering Symposium 2024)
Volume 2024, pp. 130-136, issued 2024-09-10
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)
Versions
Ver.1 2025-01-19 08:20:47.864674
