|Appears in Collections:||Computing Science and Mathematics Conference Papers and Proceedings|
|Title:||Strategies for efficiently keeping local linked open data caches up-to-date|
|Citation:||Dividino R, Gottron T & Scherp A (2015) Strategies for efficiently keeping local linked open data caches up-to-date. In: Simperl E, Arenas M, Corcho O, Strohmaier M, d'Aquin M, Srinivas K, Groth P, Dumontier M, Heflin J, Thirunarayan K & Staab S (eds.) The Semantic Web - ISWC 2015. ISWC 2015. Lecture Notes in Computer Science, 9367. 14th International Semantic Web Conference, ISWC 2015, Bethlehem, PA, USA, 11.10.2015-15.10.2015. Cham, Switzerland: Springer Verlag, pp. 356-373. https://doi.org/10.1007/978-3-319-25010-6_24|
|Series/Report no.:||Lecture Notes in Computer Science, 9367|
|Conference Name:||14th International Semantic Web Conference, ISWC 2015|
|Conference Dates:||2015-10-11 - 2015-10-15|
|Conference Location:||Bethlehem, PA, USA|
|Abstract:||Quite often, Linked Open Data (LOD) applications pre-fetch data from the Web and store local copies of it in a cache for faster access at runtime. Yet, recent investigations have shown that data published and interlinked on the LOD cloud is subject to frequent changes. As the data in the cloud changes, local copies of the data need to be updated. However, due to limitations of the available computational resources (e.g., network bandwidth for fetching data, computation time), LOD applications may not be able to continuously visit all of the LOD sources at brief intervals in order to check for changes. These limitations imply the need to prioritize which data sources should be considered first for retrieving their data and synchronizing the local copy with the original data. In order to make best use of the resources available, it is vital to choose a good scheduling strategy that determines when to fetch data from which data source. In this paper, we investigate different strategies proposed in the literature and evaluate them on a large-scale LOD dataset that is obtained from the LOD cloud by weekly crawls over the course of three years. We investigate two different setups: (i) in the single step setup, we evaluate the quality of update strategies for a single and isolated update of a local data cache, while (ii) the iterative progression setup involves measuring the quality of the local data cache when considering iterative updates over a longer period of time. Our evaluation indicates the effectiveness of each strategy for updating local copies of LOD sources, i.e., for a given bandwidth limitation, we demonstrate the strategies' performance in terms of data accuracy and freshness. The evaluation shows that measures capturing the change behavior of LOD sources over time are most suitable for conducting updates.|
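The core idea the abstract describes, prioritizing sources by their observed change behavior and refreshing only as many as a bandwidth budget allows, can be illustrated with a minimal sketch. All names, the cost model, and the change-rate measure below are illustrative assumptions, not the authors' actual strategies or implementation.

```python
def change_rate(history):
    """Fraction of past crawls in which the source was observed to change.

    history: list of 0/1 flags, one per past crawl (assumed bookkeeping).
    """
    return sum(history) / len(history) if history else 0.0

def schedule_updates(sources, budget):
    """Pick sources to refresh, highest change rate first, within budget.

    sources: dict mapping source name -> (fetch_cost, change_history)
    budget:  total fetch cost allowed in this update round
    """
    ranked = sorted(sources.items(),
                    key=lambda kv: change_rate(kv[1][1]),
                    reverse=True)
    chosen, spent = [], 0
    for name, (cost, _) in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

# Hypothetical sources: (fetch cost, per-crawl change flags).
sources = {
    "dbpedia.org":    (5, [1, 1, 1, 0]),  # changes often -> refresh first
    "geonames.org":   (3, [0, 1, 0, 0]),
    "static.example": (2, [0, 0, 0, 0]),  # never changed -> lowest priority
}
print(schedule_updates(sources, 8))
```

Under this sketch, frequently changing sources are fetched first and stable ones are skipped when the budget runs out, which mirrors the paper's finding that change-behavior measures are the most suitable basis for scheduling updates.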
|Status:||VoR - Version of Record|
|Rights:||The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.|
|Dividino et al 2015.pdf||Fulltext - Published Version||729.46 kB||Adobe PDF||Under Permanent Embargo|