Item type |
JInfP(1) |
Release date |
2015-11-15 |
Title |
An Approach to Dynamic Query Classification and Approximation on an Inference-enabled SPARQL Endpoint |
Language |
en |
Keywords |
Subject scheme |
Other |
Subject |
[Special Issue on E-Service and Knowledge Management toward Smart Computing Society] SPARQL, inference, ontology mapping |
Resource type |
Resource type identifier |
http://purl.org/coar/resource_type/c_6501 |
Resource type |
journal article |
Author affiliation |
Graduate School of Informatics, Shizuoka University |
Author names |
Yuji Yamagata
Naoki Fukuta
|
Abstract |
Description type |
Other |
Description |
When retrieving Linked Open Data via SPARQL, it is important to consider the execution cost of a query, especially when the query relies on the inference capabilities of the endpoint. A query can cause unpredictable and unwanted consumption of the endpoint's computing resources, since it is often difficult to understand and predict what computations the query will trigger on the endpoint. To prevent the execution of such time-consuming queries, approximating the original query can be a good way to reduce the load on endpoints. In this paper, we present an idea and a conceptual model for building endpoints with a mechanism that automatically reduces unwanted inference computation by predicting its computational cost and transforming such queries into faster ones through a GA-based query rewriting approach. Our analysis shows the potential benefit of preventing unexpectedly long inference computations and of keeping the variance of inference-enabled query execution times low when our query rewriting approach is applied. We also present a prototype system that uses machine learning techniques on the endpoint side to classify whether a query execution will be time-consuming, and that rewrites such time-consuming queries by applying our approach. |
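Note: the abstract describes two mechanisms, an endpoint-side machine-learning classifier that predicts whether a query execution will be time-consuming, and a GA-based rewriting step for queries flagged as expensive. The sketch below illustrates only the classification step and is not the authors' prototype; the structural features, the toy training set, and the use of scikit-learn's RandomForestClassifier are hypothetical stand-ins for whatever the actual system uses.

# Illustrative sketch only (hypothetical features and toy data); not the authors' prototype.
import re
from sklearn.ensemble import RandomForestClassifier

def query_features(query: str) -> list:
    """Extract rough structural features from a SPARQL query string."""
    q = query.lower()
    return [
        q.count("?"),                            # variable occurrences
        q.count(" ."),                           # rough count of triple patterns
        q.count("optional"),                     # OPTIONAL blocks (often costly)
        q.count("union"),                        # UNION branches
        q.count("filter"),                       # FILTER expressions
        0 if re.search(r"\blimit\b", q) else 1,  # 1 when no LIMIT clause is present
    ]

# Hypothetical training data: past queries labeled 1 (time-consuming) or 0 (fast),
# e.g. gathered from execution logs of the inference-enabled endpoint.
past_queries = [
    ("SELECT * WHERE { ?s ?p ?o . } LIMIT 10", 0),
    ("SELECT * WHERE { ?s ?p ?o . }", 1),
    ("SELECT ?x WHERE { ?x a ?c . ?c rdfs:subClassOf ?d . }", 1),
    ("SELECT ?x WHERE { ?x rdfs:label ?l . } LIMIT 100", 0),
]
X = [query_features(q) for q, _ in past_queries]
y = [label for _, label in past_queries]
classifier = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

incoming = "SELECT ?s WHERE { ?s a ?t . ?t rdfs:subClassOf ?super . }"
if classifier.predict([query_features(incoming)])[0] == 1:
    # A real endpoint would now hand the query to the GA-based rewriter
    # to produce a cheaper, approximated version before executing it.
    print("predicted time-consuming: candidate for GA-based rewriting")
else:
    print("predicted fast: execute as-is")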
Bibliographic record ID |
Source identifier type |
NCID |
Source identifier |
AA00700121 |
Bibliographic information |
Journal of Information Processing
Vol. 23,
No. 6,
pp. 759-766,
Issue date 2015-11-15
|
ISSN |
Source identifier type |
ISSN |
Source identifier |
1882-6652 |
Publisher |
情報処理学会 (Information Processing Society of Japan) |
Language |
ja |