Program
09:00 - 09:10 | Introduction |
09:10 - 10:30 | Developers talk |
10:30 - 11:00 | Coffee break |
11:00 - 12:30 | Tutorial on Linked Data Fragments |
12:30 - 14:00 | Lunch |
14:00 - 17:30 | Semantic Web & Linked Data Hackathon |
Developers talk (09:10 - 10:30)
Each presentation lasts 10 minutes, followed by 10 minutes for questions and/or discussion.
- Creating Semantic Mind Maps from Linked Data with AutoMind Creator
- Linked Data Reactor: a Framework for Building Reactive Linked Data Applications
Tutorial on Linked Data Fragments (11:00 - 12:30)
For a long time, querying Linked Data has been a story of two extremes, with Linked Data documents on the one side and the SPARQL protocol on the other. Currently, neither of them is able to drive real-world applications on the Web. On the one hand, public SPARQL endpoints are limited in number and suffer from frequent downtime. Their resource consumption, caused by the expressiveness of the language and individual user demand, is hard to predict. This downtime results in insufficient reliability for client applications. Linked Data documents, on the other hand, are both larger in number and more predictable. However, link-traversal-based query methods are significantly slower, and result sets are inherently incomplete because disconnected graph components cannot be traversed. Both are undesirable traits for user applications. The issues with these two query solutions hint at a need for other client/server trade-offs.
Linked Data Fragments (LDF) is a conceptual framework that provides a uniform view on all possible interfaces to RDF, by observing that each interface partitions a dataset into its own specific kind of fragments. Since its first publication in 2014, it has gained traction throughout the Semantic Web and Linked Data community. Several popular datasets, such as DBpedia and Wikidata, are already available through alternative, LDF-based APIs. That LDF can fulfill the demand formulated above for more versatile client/server trade-offs is shown by the fact that researchers from multiple research groups already use LDF in their work. To support this promising trend in Linked Data publishing and querying, this tutorial introduces more people to this space of trade-offs. First, we want to guide people who want to start using LDF but do not know where to begin. Second, we want to establish LDF as a general concept, vision, and approach, and thereby expand its community, rather than letting it be perceived as a single method by specific authors. Therefore, the objectives of this tutorial are three-fold:
- learn about the LDF framework and how it can be used to describe Linked Data interfaces
- learn about alternative interfaces like Triple Pattern Fragments
- learn how to develop client applications on top of different LDF interfaces
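To make the client side concrete, here is a minimal sketch of how an application addresses a Triple Pattern Fragments interface: a fragment is just the result of an HTTP GET request whose query string carries the (optional) subject, predicate, and object of a triple pattern. The helper function and the DBpedia fragment URL below are illustrative assumptions; the exact dataset path of a public TPF endpoint may differ.

```python
from urllib.parse import urlencode

def tpf_request_url(fragment_base, subject=None, predicate=None, obj=None):
    """Build the URL of a Triple Pattern Fragment for a given triple pattern.

    Any combination of subject, predicate, and object may be left
    unspecified; an empty pattern selects the whole dataset, page by page.
    (Helper name and parameter names are our own, for illustration.)
    """
    params = {}
    if subject is not None:
        params["subject"] = subject
    if predicate is not None:
        params["predicate"] = predicate
    if obj is not None:
        params["object"] = obj
    if not params:
        return fragment_base
    return fragment_base + "?" + urlencode(params)

# Example: request all triples about Ghent from a (hypothetical) DBpedia
# TPF interface; the client would GET this URL with an RDF Accept header.
url = tpf_request_url(
    "http://fragments.dbpedia.org/2016-04/en",
    subject="http://dbpedia.org/resource/Ghent",
)
```

A TPF client answers full SPARQL queries by decomposing them into many such simple pattern requests, which is what shifts query effort from the server to the client.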
Covered material
In this tutorial, we will cover the following topics:
- Linked Data Fragments: What interfaces exist? What are current interaction patterns? What are the key components of LDF? How can you use it to view an interface?
- Triple Pattern Fragments: What is it? How do the client and server work together? What are the trade-offs involved?
- Self-descriptive APIs: How can you describe your interface to a client? What is Hydra? How does a client discover API features?
- Exploring the LDF axis: How to extend existing interfaces? How to design new trade-offs?
- Application development: How to create requests to the TPF interface? How to use existing libraries?
- Data publishing: How to publish your own LDF endpoint to make your Linked Dataset available for others to query? How to expose an existing query endpoint through LDF?
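As a sketch of the publishing step: the Linked Data Fragments project provides a Node.js Triple Pattern Fragments server that is driven by a JSON configuration file mapping datasources to data files. The package name, datasource type, and settings below are assumptions to verify against the project's current documentation.

```json
{
  "title": "My Linked Data Fragments server",
  "datasources": {
    "mydata": {
      "title": "My dataset",
      "type": "HdtDatasource",
      "settings": { "file": "data/mydata.hdt" }
    }
  }
}
```

With a configuration like this saved as `config.json`, the server (historically published on npm as `ldf-server`) is started with the config file and a port, after which fragments of the dataset are served over plain HTTP, ready to be queried by any TPF client.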
Semantic Web & Linked Data Hackathon (14:00 - 17:30)
In this Hackathon, we let developers team up to create proof-of-concept applications or small prototypes. Each team can work on one of the following topics:
- Triple Pattern Fragments
- …
Do you still have a great application idea? Submit your subject now!