
Abstract

Background & Objectives

Semantic Web technologies are used extensively in the health domain to enable expressive, standards-based reasoning. Deploying Semantic Web reasoning processes directly on mobile devices has a number of advantages, including robustness to connectivity loss and more timely results. By leveraging local reasoning processes, Clinical Decision Support Systems (CDSS) can thus present timely alerts for dangerous health issues, even when connectivity is lacking. However, a number of challenges arise as well, related to mobile platform heterogeneity and limited computing resources. To tackle these challenges, developers should be empowered to benchmark mobile reasoning performance across different mobile platforms, with rulesets and datasets of varying scale and complexity, and under typical CDSS reasoning process flows. To deal with the current heterogeneity of rule formats, a uniform interface on top of mobile reasoning engines also needs to be provided.

System

We present a mobile, cross-platform benchmark framework comprising two main components: 1) a generic Semantic Web layer, supplying a uniform, standards-based ruleset and dataset interface to mobile reasoning engines; and 2) a Benchmark Engine, to investigate mobile reasoning performance. This framework was implemented using the PhoneGap cross-platform development tool, allowing it to be deployed on a range of mobile platforms. During benchmark execution, the benchmark ruleset and dataset, encoded using the SPARQL Inferencing Notation (SPIN) and the Resource Description Framework (RDF), are first passed to the generic Semantic Web layer. In this layer, the local Proxy component contacts an external Conversion Web Service, where converters translate the ruleset and dataset into the different rule engine formats. Developers may develop new converters to support other engines. The results are then communicated back to the Proxy and passed on to the local Benchmark Engine.
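The converter mechanism of the generic Semantic Web layer might look roughly like the following sketch. All names here (ConverterRegistry, convertRuleset, the engine key 'nools') are illustrative assumptions, not the framework's actual API; real converters would parse SPIN rules rather than wrap a string.

```javascript
// Hypothetical sketch of the Conversion Web Service's converter registry.
// A converter turns a SPIN-encoded ruleset into an engine-specific format.
class ConverterRegistry {
  constructor() {
    this.converters = {};
  }

  // Developers register one converter per supported reasoning engine.
  register(engineName, convertFn) {
    this.converters[engineName] = convertFn;
  }

  // The local Proxy would (via the web service) request the ruleset
  // in the target engine's custom format.
  convertRuleset(engineName, spinRuleset) {
    const convert = this.converters[engineName];
    if (!convert) {
      throw new Error('No converter registered for engine: ' + engineName);
    }
    return convert(spinRuleset);
  }
}

// Example: a trivial placeholder converter for a Nools-like engine.
const registry = new ConverterRegistry();
registry.register('nools', (spin) => ({ engine: 'nools', rules: [spin] }));

const converted = registry.convertRuleset('nools', 'CONSTRUCT { ... } WHERE { ... }');
```

Supporting an additional engine would then amount to registering one more conversion function, without touching the Proxy or the Benchmark Engine.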
In the Benchmark Engine, reasoning can be conducted using different process flows, to better align the benchmarks with real-world CDSS. To plug in new reasoning engines (JavaScript or native), developers need to implement a plugin realizing a uniform interface (e.g., load data, execute rules). New process flows can also be supplied. The benchmarks measure data and rule loading times as well as reasoning times. From our work in clinical decision support, we identified two useful reasoning process flows:

* Frequent Reasoning: Each time a certain timespan has elapsed, the reasoning engine is loaded with the entire datastore and the relevant ruleset is executed to infer new facts.
* Incremental Reasoning: The datastore is kept in-memory, and reasoning is applied each time a new fact has been added.

Currently, four reasoning engines (and their custom formats) are supported: RDFQuery (https://code.google.com/p/rdfquery/wiki/RdfPlugin), RDFStore-JS (http://github.com/antoniogarrote/rdfstore-js), Nools (https://github.com/C2FO/nools) and AndroJena (http://code.google.com/p/androjena/).

Conclusion

In this paper, we introduced a mobile, cross-platform and extensible benchmark framework for comparing mobile Semantic Web reasoning performance. Future work consists of investigating techniques to optimize mobile reasoning processes.
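The two reasoning process flows described above can be sketched against such a uniform plugin interface as follows. This is a minimal illustration under assumed names (makeEngine, loadData, executeRules, frequentReasoning, incrementalReasoning); the real framework's interface and timing logic may differ, and actual reasoning is replaced here by a trivial stand-in.

```javascript
// Minimal in-memory engine plugin implementing an assumed uniform interface.
function makeEngine() {
  const store = [];
  return {
    loadData(facts) { store.length = 0; store.push(...facts); },
    addFact(fact) { store.push(fact); },
    executeRules(rules) {
      // Stand-in for real reasoning: just count fact/rule combinations.
      return { inferences: store.length * rules.length };
    },
  };
}

// Frequent Reasoning: reload the entire datastore, then execute the ruleset.
// In the framework this would be triggered each time a timespan elapses.
function frequentReasoning(engine, datastore, rules) {
  engine.loadData(datastore);
  return engine.executeRules(rules);
}

// Incremental Reasoning: the datastore stays in memory; reasoning runs
// each time a single new fact is added.
function incrementalReasoning(engine, newFact, rules) {
  engine.addFact(newFact);
  return engine.executeRules(rules);
}

const engine = makeEngine();
const frequent = frequentReasoning(engine, ['fact1', 'fact2'], ['rule1']);
const incremental = incrementalReasoning(engine, 'fact3', ['rule1']);
```

A benchmark run would wrap each of these calls with timers to capture the loading and reasoning times mentioned above.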

DOI: 10.5339/qfarc.2014.ITPP0792 (published 2014-11-18)