

Towards Reliable Machine Learning Models for Code

https://ipsj.ixsq.nii.ac.jp/records/239237
6a3ddc47-c540-4d7d-8b3e-392d429cf8d6
File: IPSJ-SES2024009.pdf (34.6 kB)
Available for download from September 10, 2026.
License: Copyright (c) 2024 by the Information Processing Society of Japan
Price: Non-member: ¥0, IPSJ member: ¥0, SE member: ¥0, DLIB member: ¥0
Item type: Symposium(1)
Publication date: 2024-09-10
Title: Towards Reliable Machine Learning Models for Code (language: en)
Language: eng
Keywords (subject scheme: Other): 国際セッション (International Session)
Resource type: conference paper (identifier: http://purl.org/coar/resource_type/c_5794)
Author affiliation: Polytechnique Montréal
Author affiliation (en): Polytechnique Montréal
Author name: Foutse, Khomh
Author name (en): Foutse, Khomh
Abstract (description type: Other): Machine learning (ML) models trained on code are increasingly integrated into various software engineering tasks. While they generally demonstrate promising performance, many aspects of their capabilities remain unclear. Specifically, there is a lack of understanding regarding what these models learn, why they learn it, how they operate, and when they produce erroneous outputs. In this talk, I will present findings from a series of studies that (i) examine the abilities of these models to complement human developers, (ii) explore the syntax and representation learning capabilities of ML models designed for software maintenance tasks, and (iii) investigate the patterns of bugs these models exhibit. Additionally, I will discuss a novel self-refinement approach aimed at enhancing the reliability of code generated by Large Language Models (LLMs). This method focuses on reducing the occurrence of bugs before execution, autonomously and without the need for human intervention or predefined test cases.
Bibliographic information: ソフトウェアエンジニアリングシンポジウム2024論文集 (Proceedings of the Software Engineering Symposium 2024), Vol. 2024, p. 17-17, issued 2024-09-10
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)
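
The abstract mentions a self-refinement approach that reduces bugs in LLM-generated code before execution, with no human feedback and no predefined test cases. The following Python sketch illustrates that general idea only, not the method presented in the talk: a hypothetical generate/refine model interface is looped against a simple pre-execution syntax check, and all function names and the choice of feedback signal here are assumptions.

# Illustrative sketch only: a pre-execution self-refinement loop for
# LLM-generated code. The names (static_check, self_refine) and the use of
# a syntax check as the feedback signal are assumptions, not talk details.
import ast
from typing import Callable, Optional

def static_check(code: str) -> Optional[str]:
    """Return an error message if the code fails a pre-execution check, else None."""
    try:
        ast.parse(code)  # parse without executing the code
    except SyntaxError as exc:
        return f"SyntaxError: {exc}"
    return None

def self_refine(generate: Callable[[str], str],
                refine: Callable[[str, str], str],
                prompt: str,
                max_rounds: int = 3) -> str:
    """Generate code, then repeatedly ask the model to repair issues reported
    by the static check -- no human feedback and no test cases involved."""
    code = generate(prompt)
    for _ in range(max_rounds):
        error = static_check(code)
        if error is None:
            break  # nothing flagged before execution
        code = refine(code, error)
    return code

# Toy stand-ins for a real LLM so the sketch runs end to end.
if __name__ == "__main__":
    buggy = "def add(a, b)\n    return a + b\n"   # missing colon
    fixed = "def add(a, b):\n    return a + b\n"
    print(self_refine(generate=lambda p: buggy,
                      refine=lambda c, err: fixed,
                      prompt="write an add function"))

In a real setting, generate and refine would wrap calls to a code-generation model, and the static check could be replaced by any analysis that runs before the code is executed.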