Item type |
Symposium(1) |
Publication date |
2017-11-03 |
Title |
|
|
Title |
Accelerate Parallel Deep Learning Inferences with MCTS in the game of Go |
Language |
en |
|
Language |
eng |
Keywords |
|
|
Subject Scheme |
Other |
|
Subject |
Deep Learning inference |
Keywords |
|
|
Subject Scheme |
Other |
|
Subject |
Monte Carlo Tree Search |
Keywords |
|
|
Subject Scheme |
Other |
|
Subject |
Computer Go |
Keywords |
|
|
Subject Scheme |
Other |
|
Subject |
Parallel computing |
Keywords |
|
|
Subject Scheme |
Other |
|
Subject |
GPU |
Keywords |
|
|
Subject Scheme |
Other |
|
Subject |
AVX-512 |
Keywords |
|
|
Subject Scheme |
Other |
|
Subject |
Xeon Phi |
Resource type |
|
|
Resource type identifier |
http://purl.org/coar/resource_type/c_5794 |
|
Resource type |
conference paper |
Author affiliation |
|
|
|
Dept. of Computer Science and Information Engineering, National Dong Hwa University |
Author affiliation |
|
|
|
Dept. of Computer Science and Information Engineering, National Dong Hwa University |
Author affiliation |
|
|
|
Dept. of Computer Science and Information Engineering, National Taipei University |
Author affiliation (English) |
|
|
|
en |
|
|
Dept. of Computer Science and Information Engineering, National Dong Hwa University |
Author affiliation (English) |
|
|
|
en |
|
|
Dept. of Computer Science and Information Engineering, National Dong Hwa University |
Author affiliation (English) |
|
|
|
en |
|
|
Dept. of Computer Science and Information Engineering, National Taipei University |
Author name |
Ching-Nung, Lin
Shi-Jim, Yen
Jr-Chang, Chen
|
Author name (English) |
Ching-Nung, Lin
Shi-Jim, Yen
Jr-Chang, Chen
|
Abstract |
|
|
Description type |
Other |
|
Description |
The performance of deep learning inference is a serious issue when it is combined with speed-constrained Monte Carlo Tree Search (MCTS). The traditional hybrid CPU and graphics processing unit solution is bounded by frequent, heavy data transfers. This research focuses on accelerating parallel, synchronized Deep Convolutional Neural Network (DCNN) prediction in MCTS. This paper proposes a method to accelerate parallel DCNN prediction and MCTS execution on the GPU, on Intel AVX-512 CPUs, and on the Xeon Phi Knights Corner. It outperforms the original architecture, which uses a GPU forwarding server. In some cases the GPU speeds up inference 7.2 times, the AVX-512 CPU 15.7 times, and the Xeon Phi Knights Corner 11.1 times. In addition, with 64 threads on Google Cloud Platform, a maximal speedup of 53.8 times is achieved. |
Abstract (English) |
|
|
Description type |
Other |
|
Description |
The performance of deep learning inference is a serious issue when it is combined with speed-constrained Monte Carlo Tree Search (MCTS). The traditional hybrid CPU and graphics processing unit solution is bounded by frequent, heavy data transfers. This research focuses on accelerating parallel, synchronized Deep Convolutional Neural Network (DCNN) prediction in MCTS. This paper proposes a method to accelerate parallel DCNN prediction and MCTS execution on the GPU, on Intel AVX-512 CPUs, and on the Xeon Phi Knights Corner. It outperforms the original architecture, which uses a GPU forwarding server. In some cases the GPU speeds up inference 7.2 times, the AVX-512 CPU 15.7 times, and the Xeon Phi Knights Corner 11.1 times. In addition, with 64 threads on Google Cloud Platform, a maximal speedup of 53.8 times is achieved. |
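The abstract above contrasts a per-leaf GPU forwarding server with parallel, synchronized DCNN prediction. The following is a minimal illustrative sketch only, not the authors' implementation: it shows one common way that parallel MCTS search threads can hand leaf positions to a single evaluator that batches them into one synchronized forward pass, so the accelerator (GPU, AVX-512 CPU, or Xeon Phi) amortizes data-transfer and launch overhead over many positions. All identifiers here (BATCH, batched_forward, evaluate_leaf) are assumptions, not names from the paper.

# Minimal sketch (Python): batching MCTS leaf evaluations for synchronized inference.
import queue
import threading

BATCH = 16                  # assumed batch size per forward pass
requests = queue.Queue()    # (position, reply_queue) pairs from search threads

def batched_forward(positions):
    # Stand-in for one synchronized DCNN inference over a whole batch.
    return [0.5] * len(positions)

def evaluator():
    # Gather up to BATCH pending requests and evaluate them in one pass.
    while True:
        batch = [requests.get()]            # block for the first request
        while len(batch) < BATCH:
            try:
                batch.append(requests.get_nowait())
            except queue.Empty:
                break                       # run a partial batch
        results = batched_forward([pos for pos, _ in batch])
        for (_, reply), value in zip(batch, results):
            reply.put(value)                # wake the waiting search thread

def evaluate_leaf(position):
    # Called by each MCTS search thread when it reaches a leaf node.
    reply = queue.Queue(maxsize=1)
    requests.put((position, reply))
    return reply.get()                      # block until the batch result arrives

threading.Thread(target=evaluator, daemon=True).start()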
Bibliographic information |
Proceedings of Game Programming Workshop 2017
Vol. 2017,
p. 131-137,
Issue date 2017-11-03
|
Publisher |
|
|
Language |
ja |
|
Publisher |
情報処理学会 |