
Linked Data

Linked Data technologies make it possible to set links between records in distinct databases and thus to connect these databases into a global data space. Over recent years, Linked Data technologies have been adopted by an increasing number of data providers, including government agencies, libraries, and major players in the media and pharmaceutical industries, leading to the creation of a global Web of Linked Data.

In their lecture, Christian Bizer and Heiko Paulheim will give an overview of the Web of Linked Data as well as applications that work on top of this data space. Afterwards, they will discuss how the openness and self-descriptiveness of Linked Data allow data integration costs to be split between data publishers, data consumers, and third parties, and thus might enable global-scale data integration in an evolutionary, pay-as-you-go fashion.

Lecture Slides: Download Part-1, Part-2

Linked Data promises that a large portion of Web data will be usable as one big interlinked RDF database against which structured queries can be answered. In this lecture we will show how reasoning - using RDF Schema (RDFS) and the Web Ontology Language (OWL) - can help to obtain more complete answers for such queries over Linked Data. We first look at the extent to which RDFS and OWL features are being adopted on the Web. We then introduce two high-level architectures for query answering over Linked Data and outline how these can be enriched by (lightweight) RDFS and OWL reasoning, enumerating the main challenges faced and discussing reasoning methods that make practical and theoretical trade-offs to address these challenges. Finally, we ask whether RDFS and OWL are enough and discuss numeric reasoning methods that are beyond the scope of these standards but that are often important when integrating Linked Data from several heterogeneous sources.
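To give a flavour of the lightweight RDFS reasoning mentioned above, the following sketch forward-chains two standard RDFS entailment rules (subclass transitivity and type propagation) over a handful of triples. The triples and prefixes (`ex:Bob`, `ex:Student`, ...) are invented for illustration; real systems would use an RDF store rather than Python sets.

```python
# Toy forward-chaining of two RDFS entailment rules over a small triple set.
# All data below is illustrative, not taken from any real Linked Data source.

RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"

def rdfs_closure(triples):
    """Saturate a triple set under rdfs11 (subClassOf transitivity)
    and rdfs9 (type propagation) until a fixpoint is reached."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in triples:
            if p == SUBCLASS:
                for (s2, p2, o2) in triples:
                    # rdfs11: (A sc B), (B sc C)  =>  (A sc C)
                    if p2 == SUBCLASS and s2 == o:
                        new.add((s, SUBCLASS, o2))
                    # rdfs9: (x type A), (A sc B)  =>  (x type B)
                    if p2 == RDF_TYPE and o2 == s:
                        new.add((s2, RDF_TYPE, o))
        if not new <= triples:
            triples |= new
            changed = True
    return triples

data = {
    ("ex:Bob", RDF_TYPE, "ex:Student"),
    ("ex:Student", SUBCLASS, "ex:Person"),
    ("ex:Person", SUBCLASS, "ex:Agent"),
}
closed = rdfs_closure(data)
print(("ex:Bob", RDF_TYPE, "ex:Agent") in closed)  # True
```

A query for all instances of `ex:Agent` answered against `closed` rather than `data` returns `ex:Bob`, which is exactly the kind of "more complete answer" the lecture is about.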

Ontologies and Description Logics

Description logics (DLs) are a family of knowledge representation formalisms with formal semantics that allow knowledge to be represented in a declarative way. Based on this semantics, a range of powerful reasoning services has been defined, and algorithms for computing them have been investigated for a range of DLs. In recent years, DLs have received increased attention, since they are the formalism underlying the standardized web ontology language OWL. The course will cover an introduction to the basic notions of Description Logic systems and their fundamental reasoning services. We will study the reasoning procedures for these services. Another focus of the course is on those DLs that form the core of the OWL 2 profiles and the reasoning services they are tailored for.
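Two of the fundamental reasoning services mentioned above, computing subsumers and checking concept satisfiability, can be illustrated on a deliberately tiny DL fragment with only atomic subsumptions and atomic complements. The TBox below (the classic penguin example) and the encoding of complements as `("not", B)` are invented for this sketch; real reasoners for the OWL 2 profiles use far richer completion rules.

```python
# Illustrative only: a minimal "classifier" for a toy DL fragment with
# atomic subsumptions (A, B) meaning A ⊑ B, and complements ("not", B).

def subsumers(tbox, concept):
    """All concepts subsuming `concept`: the reflexive-transitive
    closure of the told subsumptions."""
    seen = {concept}
    frontier = [concept]
    while frontier:
        c = frontier.pop()
        for (sub, sup) in tbox:
            if sub == c and sup not in seen:
                seen.add(sup)
                frontier.append(sup)
    return seen

def is_satisfiable(tbox, concept):
    """In this fragment a concept is unsatisfiable iff it is subsumed
    by both some B and its complement ("not", B)."""
    s = subsumers(tbox, concept)
    return not any(("not", b) in s for b in s)

tbox = [
    ("Penguin", "Bird"),
    ("Bird", "CanFly"),
    ("Penguin", ("not", "CanFly")),
]
print(sorted(str(c) for c in subsumers(tbox, "Penguin")))
print(is_satisfiable(tbox, "Penguin"))  # False: Penguin ⊑ CanFly and Penguin ⊑ ¬CanFly
```

Even this toy shows the interplay of the services: classification (computing all subsumers) is what exposes the clash that makes `Penguin` unsatisfiable.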

Lecture Slides: Download

Following on from Anni-Yasmin's course, we will have a closer look at the reasoning services and - for some prominent description logics - discuss how they can be realised and how costly this can be. This will provide us with some useful insights into the sources of complexity and the limits of the expressive power of the description logics investigated. We will also discuss how worst-case complexity relates to state-of-the-art reasoner performance, and look into mechanisms for understanding reasoner answers, in particular unexpected ones. We assume that students taking this course have some understanding of OWL or description logics, e.g., from following Anni-Yasmin's course, as well as some basic knowledge of computer science.

Lecture Slides: Download

Rules and Queries

Answer Set Programming (ASP) evolved from various fields such as Logic Programming, Deductive Databases, Knowledge Representation, and Nonmonotonic Reasoning, and serves as a flexible language for declarative problem solving. There are two main tasks in problem solving, representation and reasoning, which are clearly separated in the declarative paradigm. In ASP, representation is done using a rule-based language, while reasoning is performed using implementations of general-purpose algorithms, referred to as ASP solvers. Rules in ASP are interpreted according to common-sense principles, including a variant of the closed-world assumption (CWA) and the unique-name assumption (UNA). Collections of ASP rules are referred to as ASP programs, which represent the modeled knowledge. Each ASP program is associated with a collection of answer sets, or intended models, which stand for the solutions to the modeled problem; this collection can also be empty, meaning that the modeled problem does not admit a solution. Several reasoning tasks exist: the classical ASP task is enumerating all answer sets or determining whether an answer set exists, but ASP also allows for query answering in brave or cautious modes. This lecture provides an introduction to the field, starting with historical perspectives, followed by a definition of the core language, a guideline to knowledge representation, an overview of existing ASP solvers, and a panorama of current research topics in the field.
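The semantics behind the answer sets mentioned above can be made concrete with a brute-force checker: a candidate interpretation M is an answer set iff M equals the least model of the Gelfond-Lifschitz reduct of the program with respect to M. The sketch below implements exactly that for ground normal programs; the example program and its tuple encoding `(head, positive_body, negative_body)` are invented for illustration, and real ASP solvers (clingo, DLV, ...) are vastly more sophisticated.

```python
from itertools import combinations

# Brute-force answer-set enumeration for ground normal logic programs,
# meant only to illustrate the stable-model (Gelfond-Lifschitz) semantics.

def least_model(definite_rules):
    """Least model of a definite (negation-free) program via fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in definite_rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(rules, atoms):
    """M is an answer set iff M equals the least model of the reduct w.r.t. M."""
    results = []
    for r in range(len(atoms) + 1):
        for candidate in map(set, combinations(sorted(atoms), r)):
            # Gelfond-Lifschitz reduct: drop rules whose negative body
            # intersects M; strip negative literals from the remaining rules.
            reduct = [(h, pos) for (h, pos, neg) in rules if not (neg & candidate)]
            if least_model(reduct) == candidate:
                results.append(candidate)
    return results

# The classic two-answer-set program:   p :- not q.   q :- not p.
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(answer_sets(rules, {"p", "q"}))  # [{'p'}, {'q'}]
```

The two answer sets `{p}` and `{q}` correspond to the two alternative "solutions" of the program, which is precisely the problem-solving reading described in the paragraph above.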

Lecture Slides: Download


Ontology-based data access (OBDA) is regarded as a key ingredient for the new generation of information systems. In the OBDA paradigm, an ontology defines a high-level global schema of (already existing) data sources and provides a vocabulary for user queries. An OBDA system rewrites such queries and ontologies into the vocabulary of the data sources and then delegates the actual query evaluation to a suitable query answering system such as a relational database management system (RDBMS) or a datalog engine.

This lecture will mainly focus on OBDA with the ontology language OWL 2 QL, one of the three profiles of the W3C standard Web Ontology Language OWL 2, and relational databases, although other possible languages will also be discussed. It will cover the following topics:

- different types of conjunctive query rewriting, their existence and succinctness;
- different architectures of OBDA systems;
- practical OBDA with the open source system Ontop (http://ontop.inf.unibz.it).
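The rewriting idea at the heart of OBDA can be sketched for the simplest case, instance queries over a class hierarchy: a query for instances of a class is rewritten into a union over all its subclasses, so that the source only needs the explicitly stored (told) memberships. The class names and data below are invented; real rewriters such as Ontop handle full OWL 2 QL ontologies and conjunctive queries, and emit SQL rather than Python.

```python
# A highly simplified sketch of OBDA-style query rewriting for instance
# queries over a class hierarchy. Axioms (A, B) mean A ⊑ B.

def subclasses(axioms, cls):
    """cls itself plus every class entailed to be a subclass of it."""
    result = {cls}
    changed = True
    while changed:
        changed = False
        for (sub, sup) in axioms:
            if sup in result and sub not in result:
                result.add(sub)
                changed = True
    return result

def rewrite_instance_query(axioms, cls):
    """Rewrite 'x rdf:type cls' into a union over all subclasses of cls,
    pushing the ontology into the query instead of into the data."""
    return subclasses(axioms, cls)

def evaluate(source, rewritten):
    """Evaluate the rewritten (union) query against the raw source data,
    which stores only told class memberships."""
    return {x for c in rewritten for x in source.get(c, ())}

axioms = [("Professor", "Teacher"), ("Teacher", "Person")]
source = {"Professor": {"alice"}, "Teacher": {"bob"}}
print(sorted(evaluate(source, rewrite_instance_query(axioms, "Person"))))
# ['alice', 'bob']
```

Note the division of labour this illustrates: the ontology is consulted only at rewriting time, and the (possibly much larger) evaluation step is delegated to the underlying data engine, here played by plain dictionary lookups.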

Special Topics

Geospatial semantics as a research field studies how to publish, retrieve, reuse, and integrate geo-data, how to describe geo-data by conceptual models, and how to develop formal specifications on top of data structures to reduce the risk of incompatibilities. Geo-data is highly heterogeneous and ranges from qualitative interviews and thematic maps to satellite imagery and complex simulations. This makes ontologies, semantic annotations, and reasoning support essential ingredients towards a Geospatial Semantic Web. We present an overview of major research questions, recent findings, and core literature.

The lecture will provide an overview of some existing state-of-the-art large-scale information extraction projects. The majority of the learning algorithms developed for these projects are based on the lexical and syntactic processing of large web corpora. Given the size of the processed data, existing knowledge representation formalisms are often inadequate because the associated inference problems are intractable. The lecture will present recent advances in combining tractable logical and probabilistic models that bring statistical language processing and rule-based approaches closer together. The main goal of the lecture is to convince the attendees of the numerous synergies and research agendas that can arise when uncertainty-based, data-driven research meets rule-based, schema-driven research.