
Timing Attack on Random Forests: Experimental Evaluation and Detailed Analysis

https://ipsj.ixsq.nii.ac.jp/records/214336
File: IPSJ-JNL6212003.pdf (4.8 MB)
License: Copyright (c) 2021 by the Information Processing Society of Japan
Access: Open access
Item type: Journal (1)
Date of release: 2021-12-15
Title: Timing Attack on Random Forests: Experimental Evaluation and Detailed Analysis
Language: eng
Keywords (subject scheme: Other): [Special Issue: Information Security and Trust in the Digital Society (Recommended Paper)] side-channel attack, adversarial examples, black-box attack, evolution strategy
Resource type: journal article (http://purl.org/coar/resource_type/c_6501)
Author affiliations:
NTT Secure Platform Laboratories
NTT Social Informatics Laboratories
NTT Social Informatics Laboratories
Authors: Yuichiro Dan, Toshiki Shibahara, Junko Takahashi
Abstract (description type: Other)

This paper proposes a novel implementation attack on machine learning. The threat of such attacks has recently become a problem in machine learning. These attacks include side-channel attacks, which use information acquired from implemented devices, and fault attacks, which inject faults into implemented devices using external tools such as lasers. Thus far, these attacks have mainly targeted deep neural networks; however, other common methods such as random forests can also be targets. In this paper, we investigate the threat of implementation attacks on random forests. Specifically, we propose a novel timing attack that generates adversarial examples. Additionally, we experimentally evaluate and analyze its attack success rate. The proposed attack exploits a fundamental property of random forests: the response time from input to output depends on the number of conditional branches invoked during prediction. More precisely, we generate adversarial examples by optimizing the response time. This optimization affects predictions because changes in the response time indicate changes in the results of the conditional branches. For the optimization, we use an evolution strategy that tolerates measurement error in the response time. Experiments are conducted in a black-box setting where attackers can use only predicted labels and response times. Experimental results show that the proposed attack generates adversarial examples with higher probability than a state-of-the-art attack that uses only predicted labels. Detailed analysis of these results indicates an unfortunate trade-off: restricting the tree depth of random forests may mitigate this attack but decreases prediction accuracy.
------------------------------
This is a preprint of an article intended for publication in the Journal of Information Processing (JIP). This preprint should not be cited. This article should be cited as: Journal of Information Processing Vol.29 (2021) (online)
DOI: http://dx.doi.org/10.2197/ipsjjip.29.757
------------------------------
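
As a rough illustration of the mechanism described in the abstract (and not the authors' implementation), the sketch below uses a scikit-learn random forest as a stand-in victim model and runs a simple (1+1)-style evolution search that, given only predicted labels and averaged response times, perturbs an input toward timing behaviour that differs from the original. The helper names (query, timing_attack), the dataset, the step size, and the acceptance rule are illustrative assumptions, not details taken from the paper.

# Minimal sketch, assuming scikit-learn and a (1+1)-style search; it
# illustrates the idea in the abstract, not the paper's actual algorithm.
import time
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in victim model; the attacker is assumed to have only query access.
X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def query(x, repeats=30):
    # Black-box oracle: returns (predicted label, mean response time).
    # Averaging over several queries tolerates timing measurement error.
    x = np.asarray(x).reshape(1, -1)
    start = time.perf_counter()
    for _ in range(repeats):
        label = clf.predict(x)[0]
    return label, (time.perf_counter() - start) / repeats

def timing_attack(x0, sigma=0.05, budget=200):
    # (1+1)-style evolution search: keep the perturbation whose response time
    # deviates most from the original, since a different time suggests that
    # different conditional branches were taken inside the trees.
    orig_label, t_orig = query(x0)
    x_best, best_dev = np.array(x0, dtype=float), 0.0
    step = sigma * (X.max(axis=0) - X.min(axis=0))  # per-feature step size
    for _ in range(budget):
        candidate = x_best + rng.normal(0.0, step)
        label, t = query(candidate)
        if label != orig_label:
            return candidate, label        # adversarial example found
        dev = abs(t - t_orig)
        if dev > best_dev:
            x_best, best_dev = candidate, dev
    return None, orig_label                # budget exhausted

adv, new_label = timing_attack(X[0])
print("success" if adv is not None else "failure", new_label)

In this sketch, a larger deviation of the averaged response time from the original is taken as a crude proxy for changed branch outcomes; the paper's evolution strategy and its handling of measurement error differ, and the parameters used here (repeats=30, sigma=0.05, budget=200) are arbitrary.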
Bibliographic record ID (NCID): AN00116647
Bibliographic information: 情報処理学会論文誌 (IPSJ Journal), Vol. 62, No. 12, date of issue 2021-12-15
ISSN: 1882-7764