
Study on Potential of Speech-pathological Features for Deepfake Speech Detection

https://ipsj.ixsq.nii.ac.jp/records/237251
0bdae827-d735-46d1-a89b-7dfd369d58ee
Name / File: IPSJ-CSEC24106045.pdf (1.2 MB)
License: Copyright (c) 2024 by the Institute of Electronics, Information and Communication Engineers. This SIG report is only available to those in membership of the SIG.
Price: CSEC: Member: ¥0 / DLIB: Member: ¥0
Item type: SIG Technical Reports (1)
Publication date: 2024-07-15
Title (en): Study on Potential of Speech-pathological Features for Deepfake Speech Detection
Language: eng
Keywords: ISEC/EMM (subject scheme: Other)
Resource type: technical report (identifier: http://purl.org/coar/resource_type/c_18gh)
Author affiliations:
  • School of Information Science, Japan Advanced Institute of Science and Technology / Sirindhorn International Institute of Technology, Thammasat University
  • National Science and Technology Development Agency
  • National Science and Technology Development Agency
  • Sirindhorn International Institute of Technology, Thammasat University
  • School of Information Science, Japan Advanced Institute of Science and Technology
Authors (en):
  • Anuwat, Chaiwongyen
  • Suradej, Duangpummet
  • Jessada, Karnjana
  • Waree, Kongprawechnon
  • Masashi, Unoki
Abstract
Description type: Other
Description: This paper proposes a method to detect deepfakes using speech-pathological features commonly used to assess unnaturalness in disordered voices associated with voice-production mechanisms. We investigated the potential of eleven speech-pathological features for distinguishing between genuine and deepfake speech: jitter (three types), shimmer (four types), harmonics-to-noise ratio, cepstral harmonics-to-noise ratio, normalized noise energy, and glottal-to-noise excitation ratio. The paper also introduces a new method that employs a segmental frame-analysis technique to significantly improve the effectiveness of deepfake speech detection. We evaluated the proposed method on datasets from the Automatic Speaker Verification Spoofing and Countermeasures Challenges (ASVspoof). The results demonstrate that the proposed method outperforms the baselines in terms of recall and F2-score, achieving 99.46% and 98.59%, respectively, on the ASVspoof 2019 dataset.
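
The F2-score reported above is the β = 2 case of F_β = (1 + β²)·P·R / (β²·P + R), which weights recall four times as heavily as precision; that choice fits this task, where letting a deepfake through is costlier than flagging a genuine utterance. As a minimal sketch of the pipeline, the following extracts three of the eleven named features (local jitter, local shimmer, harmonics-to-noise ratio) with the parselmouth wrapper around Praat and scores toy predictions. The report does not state its toolchain, so the library choice, file name, analysis thresholds, and labels here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: parselmouth (a Python wrapper around Praat) computes
# three of the eleven voice-quality features named in the abstract.
# Thresholds are Praat's defaults; the paper may use different settings.
import parselmouth
from parselmouth.praat import call
from sklearn.metrics import fbeta_score, recall_score

def voice_quality_features(wav_path):
    """Return [local jitter, local shimmer, mean HNR] for one utterance."""
    snd = parselmouth.Sound(wav_path)
    # Glottal pulse marks, required by the jitter/shimmer queries.
    pulses = call(snd, "To PointProcess (periodic, cc)", 75, 500)
    jitter = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, pulses], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)
    harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
    hnr = call(harmonicity, "Get mean", 0, 0)  # mean HNR in dB
    return [jitter, shimmer, hnr]

# Recall and F2 (beta=2), the two metrics the abstract reports,
# on toy labels (1 = deepfake, 0 = genuine).
y_true = [1, 1, 0, 1, 0, 1]
y_pred = [1, 1, 0, 1, 1, 1]
print("recall:", recall_score(y_true, y_pred))
print("F2-score:", fbeta_score(y_true, y_pred, beta=2))
```

The segmental variant described in the abstract would apply the same measurements per analysis frame and aggregate them, rather than computing one value per utterance; that detail is not reproduced here.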
Bibliographic record ID (NCID): AA11235941
Bibliographic information: IPSJ SIG Technical Report: Computer Security (CSEC), Vol. 2024-CSEC-106, No. 45, pp. 1-6, issued 2024-07-15
ISSN: 2188-8655
Notice
SIG Technical Reports are non-refereed and hence may later be published in journals, conference proceedings, symposia, and so on.
Publisher (ja): 情報処理学会 (Information Processing Society of Japan)

Versions

Ver.1 2025-01-19 08:56:36.595269


Export

OAI-PMH
  • OAI-PMH JPCOAR
  • OAI-PMH DublinCore
  • OAI-PMH DDI
Other Formats
  • JSON
  • BIBTEX
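
The OAI-PMH entries above refer to the standard OAI-PMH protocol, so the record can also be harvested programmatically. A minimal sketch follows; the verb and parameter names are fixed by the OAI-PMH specification, but the endpoint path, OAI identifier, and metadata prefix below are assumptions inferred from typical WEKO3 deployments, so confirm them against the repository's Identify and ListMetadataFormats responses before relying on them.

```python
# Hedged sketch of an OAI-PMH GetRecord request. "verb", "metadataPrefix",
# and "identifier" are standard OAI-PMH parameters; BASE and IDENTIFIER
# are assumptions -- verify via <BASE>?verb=Identify.
import urllib.parse
import urllib.request

BASE = "https://ipsj.ixsq.nii.ac.jp/oai"          # assumed endpoint path
IDENTIFIER = "oai:ipsj.ixsq.nii.ac.jp:00237251"   # assumed OAI identifier

params = {
    "verb": "GetRecord",
    "metadataPrefix": "jpcoar_1.0",  # JPCOAR, per the format list above
    "identifier": IDENTIFIER,
}
url = BASE + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    print(resp.read(500).decode("utf-8", errors="replace"))
```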



Powered by WEKO3