{"updated":"2025-01-21T19:20:49.578534+00:00","links":{},"created":"2025-01-18T23:35:39.920585+00:00","metadata":{
"_oai":{"id":"oai:ipsj.ixsq.nii.ac.jp:00081487","sets":["934:989:6743:6744"]},
"path":["6744"],"owner":"11","recid":"81487",
"title":["強化学習を用いたチーム編成の効率化モデルの提案と環境変化に対する評価"],
"pubdate":{"attribute_name":"公開日","attribute_value":"2012-03-05"},
"_buckets":{"deposit":"9acd09c6-cd10-4026-b862-87496734c86c"},
"_deposit":{"id":"81487","pid":{"type":"depid","value":"81487","revision_id":0},"owners":[11],"status":"published","created_by":11},
"item_title":"強化学習を用いたチーム編成の効率化モデルの提案と環境変化に対する評価",
"author_link":["0","0"],
"item_titles":{"attribute_name":"タイトル","attribute_value_mlt":[{"subitem_title":"強化学習を用いたチーム編成の効率化モデルの提案と環境変化に対する評価"},{"subitem_title":"Efficient Team Formation Based on Learning and Reorganization and Influence of Change of Tasks","subitem_title_language":"en"}]},
"item_keyword":{"attribute_name":"キーワード","attribute_value_mlt":[{"subitem_subject":"オリジナル論文","subitem_subject_scheme":"Other"}]},
"item_type_id":"3","publish_date":"2012-03-05",
"item_3_text_3":{"attribute_name":"著者所属","attribute_value_mlt":[{"subitem_text_value":"早稲田大学大学院基幹理工学研究科情報理工学専攻"},{"subitem_text_value":"早稲田大学大学院基幹理工学研究科情報理工学専攻"}]},
"item_3_text_4":{"attribute_name":"著者所属(英)","attribute_value_mlt":[{"subitem_text_value":"Department of Computer Science and Engineering, Waseda University","subitem_text_language":"en"},{"subitem_text_value":"Department of Computer Science and Engineering, Waseda University","subitem_text_language":"en"}]},
"item_language":{"attribute_name":"言語","attribute_value_mlt":[{"subitem_language":"jpn"}]},
"item_publisher":{"attribute_name":"出版者","attribute_value_mlt":[{"subitem_publisher":"情報処理学会","subitem_publisher_language":"ja"}]},
"publish_status":"0","weko_shared_id":-1,
"item_file_price":{"attribute_name":"Billing file","attribute_type":"file","attribute_value_mlt":[{"url":{"url":"https://ipsj.ixsq.nii.ac.jp/record/81487/files/IPSJ-TOM0501006.pdf"},"date":[{"dateType":"Available","dateValue":"2014-03-05"}],"format":"application/pdf","billing":["billing_file"],"filename":"IPSJ-TOM0501006.pdf","filesize":[{"value":"2.4 MB"}],"mimetype":"application/pdf","priceinfo":[{"tax":["include_tax"],"price":"660","billingrole":"5"},{"tax":["include_tax"],"price":"330","billingrole":"6"},{"tax":["include_tax"],"price":"0","billingrole":"17"},{"tax":["include_tax"],"price":"0","billingrole":"44"}],"accessrole":"open_date","version_id":"dce1dd84-0e8f-45a5-b8b4-40978edf85de","displaytype":"detail","licensetype":"license_note","license_note":"Copyright (c) 2012 by the Information Processing Society of Japan"}]},
"item_3_creator_5":{"attribute_name":"著者名","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"佐藤, 大樹"},{"creatorName":"菅原, 俊治"}],"nameIdentifiers":[{}]}]},
"item_3_creator_6":{"attribute_name":"著者名(英)","attribute_type":"creator","attribute_value_mlt":[{"creatorNames":[{"creatorName":"Daiki, Satoh","creatorNameLang":"en"},{"creatorName":"Toshiharu, Sugawara","creatorNameLang":"en"}],"nameIdentifiers":[{}]}]},
"item_3_source_id_9":{"attribute_name":"書誌レコードID","attribute_value_mlt":[{"subitem_source_identifier":"AA11464803","subitem_source_identifier_type":"NCID"}]},
"item_resource_type":{"attribute_name":"資源タイプ","attribute_value_mlt":[{"resourceuri":"http://purl.org/coar/resource_type/c_6501","resourcetype":"journal article"}]},
"item_3_source_id_11":{"attribute_name":"ISSN","attribute_value_mlt":[{"subitem_source_identifier":"1882-7780","subitem_source_identifier_type":"ISSN"}]},
"item_3_description_7":{"attribute_name":"論文抄録","attribute_value_mlt":[{"subitem_description":"インターネット上のサービスに対応したタスクは,それを構成する複数のサブタスクを処理することで達成される.効率的なタスク処理のためには,サブタスクを対応する能力やリソースを持つエージェントに適切に割り当てる必要がある.我々はこれまで,強化学習とそれに基づくネットワーク構造の再構成により,チーム編成とネットワーク構造を同時に効率化する手法を提案してきた.さらに,通信遅延の生じる環境においても既存手法より効率的なチームを編成できることを示した.しかし,そこで用いた機械学習は,近隣のエージェントの内部状態を既知としており,必ずしも現実のシステムと合致していない.また,実験で仮定したエージェントの配置も固定的であった.そこで本論文では,まず提案手法を,他のエージェントの内部状態ではなく,近隣からのメッセージと遅延を考慮した減衰率から報酬を求め,それに基づいてQ学習するようにモデル化する.次に,エージェントの配置もランダムに行い,多様な配置の初期状態にかかわらず,学習と組織構造の変化を組み合わせることで既存手法よりも効率化できることを示す.さらに,タスクの量・種類といった環境の変化についても,効率的なチーム編成が可能なことを実験により評価する.","subitem_description_type":"Other"}]},
"item_3_description_8":{"attribute_name":"論文抄録(英)","attribute_value_mlt":[{"subitem_description":"A task in a distributed environment is usually achieved by performing a number of subtasks that require different functions and resources. These subtasks have to be processed cooperatively by an appropriate team of agents that have the required functions with sufficient resources, but it is difficult to anticipate, at the design stage of the system, what kinds of tasks will be requested in a dynamic and open environment. We have already shown that the proposed method, which combines learning for team formation with reorganization in a way that is adaptive to the environment, can improve the overall performance and increase the success rate even under communication delays that may change dynamically. However, the previous method assumed that agents know the internal states of neighboring agents in order to learn appropriate actions; this information is not always available in real systems. In this paper, we propose a method of distributed team formation that uses modified Q-learning, in which rewards are derived from success messages received from downstream agents and the times elapsed since the task requests. We also perform a number of experiments with more general deployments of agents. We show that the method can improve the overall performance and can adapt to environments in which the range and quantity of tasks may change.","subitem_description_type":"Other"}]},
"item_3_biblio_info_10":{"attribute_name":"書誌情報","attribute_value_mlt":[{"bibliographicPageEnd":"49","bibliographic_titles":[{"bibliographic_title":"情報処理学会論文誌数理モデル化と応用(TOM)"}],"bibliographicPageStart":"40","bibliographicIssueDates":{"bibliographicIssueDate":"2012-03-05","bibliographicIssueDateType":"Issued"},"bibliographicIssueNumber":"1","bibliographicVolumeNumber":"5"}]},
"relation_version_is_last":true,"weko_creator_id":"11"},
"id":81487}