
## 1st International Workshop on Business Models, Business Rules and Ontologies (BuRO 2010)

(workshop co-located with the 4th International Conference on Web Reasoning and Rule Systems, RR2010, Bressanone/Brixen, Italy, September 22-24, 2010)

### Description of the Workshop

Enabling the right people to interact, in their own way, with the right part of a business application is a central challenge for any business. We distinguish three views of a business organization: (1) the view of the business analyst, using a formal and validated business model; (2) the view of the knowledge engineer, via ontologies and rules; and (3) the view of the IT department, via an operationalization in applications. These views can be glued together by an end-to-end solution: (1) conceptualization and, where possible, acquisition of business models and their transformation into ontologies and rules; (2) their management and maintenance; and (3) their transparent operationalization in IT applications.

The vision at the heart of the Semantic Web is highly relevant in a business setting as well. The workshop addresses the issues that arise in a business that wishes to transfer knowledge present in business documents (expressing, e.g., policies) to an IT operationalization transparently and, where possible and useful, semi-automatically. Moreover, the workshop tackles these issues from a holistic perspective, raising awareness of the overall picture instead of focusing on stand-alone issues. For example, although OWL is well investigated, it is unclear how business knowledge expressed in SBVR can be mapped to it. Another example is the W3C's RIF effort: although based on well-investigated rule paradigms, it is less well connected to the upper business layers: how does one go from a formal business model to RIF rules, and how do those rules interact with derived ontologies?

During the ONTORULE project, which shares a similar vision, it has been recognized that this holistic view goes beyond the results attainable within the project and that much more discussion and exchange is needed. The workshop therefore aims to create awareness among researchers in stand-alone fields (ontology acquisition, business modeling, integration of ontologies and rules, implementations of rule/ontology engines) that there is a bigger picture, one that can and should be used to extract requirements on the one hand and to provide output fine-tuned for other fields on the other.

### Topics of interest

Suggested topics include, but are not limited to:

• the acquisition of ontologies and rules from unstructured text via Natural Language Processing (NLP) techniques
• the development of a complete, formal and validated business model, taking all possible inputs into account (people and documents, structured and unstructured, some of which are output from an NLP phase), using the Semantics of Business Vocabulary and Business Rules (SBVR)
• the transformation of structured business representations, such as SBVR models, into RDF/OWL and/or rules
• the management and maintenance of business models, ontologies and rules, e.g., consistency maintenance and the integration of rules and ontologies (semantics, algorithms)
• implementations of such management systems
• use cases and field reports

### Program

The workshop will be held during the full day of September 21st, from 9:00 to 18:00. Check out the full program for further details.

### Proceedings

The proceedings are composed of the following papers:

You can cite the proceedings as:

Thomas Eiter, Adil El Ghali, Sergio Fernández, Stijn Heymans, Thomas Krennwallner, François Lévy (eds.): Proceedings of the First International Workshop on Business Models, Business Rules and Ontologies (BuRO 2010), co-located with the 4th International Conference on Web Reasoning and Rule Systems (RR2010), Bressanone/Brixen (Italy), September 21, 2010.

### Submissions

We invite full papers of up to 14 pages in length. The workshop contributions will be made available in separate workshop proceedings. Please use the Springer LNCS format for the papers. Submitted papers will be reviewed by at least two members of the program committee. Papers must be submitted electronically, in PDF format, via the EasyChair conference management system: http://www.easychair.org/conferences/?conf=buro2010

### Important Dates

• Submission deadline: ~~August 6, 2010~~ August 20, 2010
• Notification of acceptance: ~~August 20, 2010~~ September 3, 2010
• Camera-ready paper submission: ~~September 3, 2010~~ September 10, 2010
• Workshop: September 21, 2010

### Invited Talks

#### Adventures of Two Little OWLs in Rule Land

Markus Kroetzsch, Oxford University Computing Laboratory

Abstract: Combining ontological and rule-based modelling can be an onerous task, from the choice of a suitable semantic framework (there are quite a few) to the selection of a chain of tools for supporting it (there are just a few). Typical solutions combine not only the advantages but also the difficulties of both domains, especially regarding computational complexity. For the recently introduced light-weight profiles of OWL 2, however, the situation is remarkably different. Here we find that existing rule-based systems can rather easily be adapted to support ontological inferencing using established algorithmic methods. This is well-known for OWL RL – “RL” is for “Rule Language” after all – but much less so for OWL EL.

In this talk, we take a closer look at this exciting grey area between light-weight ontologies and rules where both approaches are close enough to allow for an easy combination. We recall the features of OWL EL and RL, and explain how reasoning tasks in both languages can be answered by common rule systems with only a slight transformation of syntax. This approach uses rules as a computational formalism for implementing OWL reasoning without implying a semantic connection: even production rule systems could be used. Going further, we aim at a more intimate semantic combination of (logical) rules, OWL EL, and OWL RL, carefully tuned to allow efficient implementation in polynomial time. Further insights into matters of practical efficiency are gained from recent results on the worst-case space requirements of OWL EL inferencing, and from our experiences with the prototype implementation Orel.
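The reduction described above can be made concrete with a generic forward-chaining engine. The following is a minimal, hypothetical sketch (not the speaker's implementation or the Orel prototype) that applies two rules from the OWL 2 RL/RDF rule set, scm-sco (transitivity of rdfs:subClassOf) and cax-sco (propagating instances along subclass links), to a toy set of triples:

```python
# Forward chaining over triples until a fixpoint is reached.
# Toy class and individual names are invented for illustration.

SCO, TYPE = "rdfs:subClassOf", "rdf:type"

def owl_rl_closure(triples):
    """Compute the closure of two OWL 2 RL rules over a triple set."""
    facts = set(triples)
    while True:
        new = set()
        for (s, p, o) in facts:
            for (s2, p2, o2) in facts:
                # scm-sco: C1 sub C2, C2 sub C3  =>  C1 sub C3
                if p == SCO and p2 == SCO and o == s2:
                    new.add((s, SCO, o2))
                # cax-sco: C1 sub C2, x type C1  =>  x type C2
                if p == SCO and p2 == TYPE and o2 == s:
                    new.add((s2, TYPE, o))
        if new <= facts:          # nothing new derived: fixpoint
            return facts
        facts |= new

triples = [
    ("Dog", SCO, "Mammal"),
    ("Mammal", SCO, "Animal"),
    ("rex", TYPE, "Dog"),
]
closure = owl_rl_closure(triples)
assert ("Dog", SCO, "Animal") in closure   # derived by scm-sco
assert ("rex", TYPE, "Animal") in closure  # derived by cax-sco twice
```

Real OWL RL engines implement the full rule table and index facts by predicate rather than using this quadratic pairwise scan, but the fixpoint structure is the same.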

Short Bio: Markus Krötzsch is a post-doctoral researcher at the Oxford University Computing Laboratory. He completed his PhD studies at the Institute of Applied Informatics and Formal Description Methods (AIFB) of the Karlsruhe Institute of Technology (KIT) in 2010. His research interest is the intelligent automatic processing of information, ranging from the foundations of formal knowledge representation to application areas like the Semantic Web. He is the lead developer of the successful Semantic Web application platform Semantic MediaWiki, co-editor of the W3C OWL 2 specification, chief maintainer of the semanticweb.org community portal, and co-author of the textbook Foundations of Semantic Web Technologies.

#### Combining Nonmonotonic Knowledge Bases for Modular and Distributed Knowledge-Based Information Systems

Thomas Krennwallner, Vienna University of Technology

Abstract: The developments in information technology during the last decade have been rapidly changing the possibilities for data and knowledge access. To respect this, several declarative knowledge representation formalisms have been extended with the capability to access data and knowledge sources that are external to a knowledge base. Such knowledge sources can come in various forms and may be as simple as a query interface to a database up to a full-fledged knowledge base.

In this talk we present two formalisms that are centered around Answer Set Programming and have been designed with multiple knowledge bases in mind. One is modular nonmonotonic logic programs (MLP), which take up the issue of combining modules of logic programs into a coherent framework. The other is multi-context systems (MCS), which are concerned with integrating knowledge from heterogeneous and possibly nonmonotonic knowledge bases (the contexts) using bridge rules, and combine them into a system with a semantics for contextual reasoning. We will argue that MLPs have the potential to host other formalisms relevant to the Semantic Web, like hybrid languages that combine ontologies and rules. MCS, on the other hand, are well-suited for distributed scenarios, where we can only assume an interface to contextualized knowledge bases (e.g., description logics or default theories) and do not get access to the actual content of the individual contexts. Heterogeneous nonmonotonic multi-context systems and modular nonmonotonic logic programs provide a basis for advanced knowledge-based information systems, which are targeted in ongoing research projects. They have been developed by the KBS group of the Vienna University of Technology in cooperation with external colleagues.
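To give a rough feel for the bridge-rule mechanism, here is a toy sketch (a drastic simplification, not the nonmonotonic equilibrium semantics of the talk): contexts hold sets of believed atoms, and a bridge rule adds its head to a target context once all of its premises hold in the referenced contexts. All context and atom names are hypothetical.

```python
# Two contexts with hypothetical content: a database-like context "db"
# and an ontology-like context "onto". Bridge rules are triples
# (target context, head atom, [(context, premise atom), ...]).

beliefs = {
    "db":   {"employee(ann)"},
    "onto": set(),
}

bridge_rules = [
    # onto believes person(ann) if db believes employee(ann)
    ("onto", "person(ann)", [("db", "employee(ann)")]),
    # a second rule chaining on the first one's conclusion
    ("onto", "insured(ann)", [("onto", "person(ann)")]),
]

changed = True
while changed:  # propagate beliefs across contexts to a fixpoint
    changed = False
    for target, head, body in bridge_rules:
        if head not in beliefs[target] and \
           all(atom in beliefs[ctx] for ctx, atom in body):
            beliefs[target].add(head)
            changed = True

assert "insured(ann)" in beliefs["onto"]
```

Actual MCS allow each context to have its own logic and acceptability notion, and bridge rules may be nonmonotonic (negated premises), which is precisely what makes their semantics and distributed evaluation interesting.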

This work has been supported by the Austrian Science Fund (FWF) projects P20840 & P20841, the EC ICT Integrated Project Ontorule (FP7 231875), and the Vienna Science and Technology Fund (WWTF) project ICT08-020.

Short Bio: Thomas Krennwallner has been a project assistant at the Institute of Information Systems at Vienna University of Technology (TU Wien), Austria, since June 2008, funded by the EU FP7 project "Ontorule" and the Austrian Science Fund project "Modular HEX-Programs." In 2007 and 2008, he worked as a research intern at the Digital Enterprise Research Institute Galway, Ireland, in the EU FP6-funded project "inContext." Between 1999 and 2004 he was a software developer at several companies. He has contributed to various software systems, most recently to DLVHEX, DMCS, GiaBATA, and XSPARQL. He is currently pursuing his PhD at the Knowledge-Based Systems Group at TU Wien, where he is developing extensions and algorithms for modular and distributed evaluation of HEX-programs, modular nonmonotonic logic programs, and heterogeneous nonmonotonic multi-context systems. He obtained a master's degree in Computational Intelligence in 2007 and a bachelor's degree in Software and Information Engineering in 2005, both from TU Wien.

#### Using OWL in Ontology-based data integration

Domenico Lembo, Sapienza University of Rome

Abstract: Data integration is the problem of providing a single interface and unified mechanisms to access data stored in several autonomous, possibly heterogeneous, information sources. This is a challenging task in many IT applications, such as enterprise information management and data warehousing, as well as in scenarios like e-science, e-government, and web data management. In the context of the Semantic Web, data integration has often been approached through the adoption of shared conceptualizations of the domain of interest, referred to as ontologies, with the aim of placing the semantics of the application domain at the center of the scene. It is therefore interesting to analyze what the implications are of using ontologies in data integration, and in particular of adopting Semantic Web languages, such as OWL, within the traditional architecture for data integration. According to this architecture, a data integration system is composed of a global schema, which represents the interface towards the user; a source schema, which models all the sources to be integrated; and the mapping between the two.

In this talk, we consider data integration under this framework when the global schema is specified in OWL, and discuss the impact of this choice on the computational complexity of query answering under different instantiations of the framework, in terms of the query language and the form and interpretation of the mapping. As we will see, query answering in the resulting setting is in general computationally too complex, and some limitations on the expressive power of the various components of the framework have to be adopted in order to achieve efficient query answering. In particular, we will present OWL 2 QL, a tractable profile of OWL 2, and consider it as the ontology language used to express the global schema. OWL 2 QL essentially corresponds to a member of the DL-Lite family, a family of Description Logics designed to have a good trade-off between the expressive power of the language and the computational complexity of reasoning.
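The key property behind this tractability is that, in DL-Lite/OWL 2 QL, a query can be rewritten using the ontology alone and then evaluated directly over the data. The following toy sketch (hypothetical class names, and handling only atomic subclass axioms, a small fragment of the actual rewriting technique) illustrates the idea for atomic queries:

```python
# TBox: child class -> parent class (atomic subclass axioms only)
subclass_of = {
    "Manager": "Employee",
    "Employee": "Person",
}

def rewrite(query_class):
    """All classes whose instances answer a query for query_class."""
    result = {query_class}
    changed = True
    while changed:
        changed = False
        for child, parent in subclass_of.items():
            if parent in result and child not in result:
                result.add(child)
                changed = True
    return result

# Source-level facts; the TBox is not consulted at evaluation time.
data = [("ann", "Manager"), ("bob", "Person")]

def answer(query_class):
    """Evaluate the rewritten union of atomic queries over the data."""
    classes = rewrite(query_class)
    return {ind for ind, cls in data if cls in classes}

assert answer("Person") == {"ann", "bob"}  # "ann" via Manager -> Employee -> Person
```

In a real system the rewriting produces a union of conjunctive queries that is handed to an SQL engine, so all data-level work is done by the relational database.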

The results in this talk represent joint work with Diego Calvanese (Free University of Bozen/Bolzano), Giuseppe De Giacomo, Maurizio Lenzerini, and Riccardo Rosati (SAPIENZA University of Rome).

Short Bio: Domenico Lembo is an assistant professor at the Department of Computer and System Sciences of the SAPIENZA University of Rome. His research interests concern mainly information integration, Description Logics, ontologies and the Semantic Web, and inconsistency tolerance in information systems. He has authored more than 50 publications on these topics in international journals and conferences. He is the author of several tutorials in the areas of data integration, ontologies, and the Semantic Web.

#### Incorporating regulations, business rules and other texts in the IT

François Lévy, Paris13 University

(involving material and ideas from A. Guissé, A. Nazarenko)

Abstract: Quite a lot of regulations available as text impact everyday activities and are of interest to IT systems. Whether rights or duties apply to particular individuals or companies is more easily determined with automated analysis of individual cases. Likewise, the conformance of organizational procedures to the many constraints that apply to organizations can be checked automatically, as soon as the relevant constraints are made explicit and formalized. This is the case for laws and regulations originating from official organizations as well as for internal regulations and business rules used in companies. Nevertheless, there is a critical bottleneck: analyzing regulatory texts to extract the rules that the IT system will have to implement is still a challenging task. This is the topic of this talk.

The first part of the talk will focus on the general process leading from a source text written in natural language to a normalized set of rules and constraints that model the source policy, and then to its translation into a specialized formalism (production rules, or a form of ontology *plus* logic programming, for instance). The goal is to obtain a reformulation as close as possible to a controlled language, without loss of information.

After this general view, the various operations that contribute to a progressive normalization of the source text will be inventoried and described, starting with the identification of relevant sentences. The challenge is to design tools that support an efficient computer-aided transformation. We will present the rule editing environment that is currently being developed for that purpose, showing how ontology building, semantic annotation of the source text, semantic calculus, pattern-based analysis and index querying can help the task of human analysts.
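As a hedged illustration of the first of these operations, identifying relevant sentences (this is not the editing environment described in the talk), a crude pattern-based pass might flag sentences containing deontic markers as candidate business rules:

```python
import re

# Deontic markers commonly used in regulatory prose; the list is an
# illustrative assumption, not an exhaustive or validated lexicon.
DEONTIC = re.compile(r"\b(must|shall|should|may not|is required to)\b",
                     re.IGNORECASE)

def candidate_rules(sentences):
    """Return sentences likely to express obligations or prohibitions."""
    return [s for s in sentences if DEONTIC.search(s)]

text = [
    "A driver must hold a valid licence.",
    "The company was founded in 1990.",
    "Claims shall be filed within 30 days.",
]
flagged = candidate_rules(text)
assert len(flagged) == 2  # the 1990 sentence carries no deontic marker
```

Real systems refine such shallow filters with ontology-driven annotation and syntactic analysis, which is exactly where the tooling described in the talk comes in.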

Even if a formal evaluation of this aided modeling process is difficult to set up, we will consider the role of natural language processing and knowledge engineering in light of the life cycle of IT systems, showing the benefit of backward traceability to source texts in different maintenance tasks.

Short Bio: François Lévy has been a full professor at Université Paris 13 since 1993. He has been responsible for the French group "Natural Language Semantics", working on logical representations of semantic and cognitive phenomena such as events, processes in narratives, and causality. He has also worked on default logic and diagnosis.

#### The Entity-centric organization

Heiko Stoermer (Fondazione Bruno Kessler, Trento, Italy)

Abstract: Semantic technologies enable a shift from a schema-centric (or data-centric) approach to data management in complex organizations to an entity-centric approach, where different data sources are viewed as potential providers of statements about relevant business entities (people, companies, products, locations, events, projects, etc.). We will argue that such a shift may enormously simplify the management of data, and in particular their integration and exploration. This claim will be supported by a number of very concrete use cases in which we have used this approach to solve very different issues, showing why the entity-centric approach fared better (along different dimensions) than more traditional approaches.
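A minimal sketch of the entity-centric view, with hypothetical sources and entity identifiers: once sources are seen as providers of statements about entities, integration reduces to grouping statements by entity identifier rather than aligning source schemas.

```python
from collections import defaultdict

# Two hypothetical sources, each emitting (entity_id, attribute, value)
# statements about shared business entities.
crm = [("e1", "name", "ACME Corp"), ("e1", "phone", "555-0100")]
billing = [("e1", "vat_id", "IT123"), ("e2", "name", "Globex")]

# Group all statements by entity id; no schema mapping is needed.
entities = defaultdict(dict)
for source in (crm, billing):
    for eid, attr, value in source:
        entities[eid][attr] = value

assert entities["e1"] == {"name": "ACME Corp",
                          "phone": "555-0100",
                          "vat_id": "IT123"}
```

The hard part in practice, and the subject of projects like OKKAM, is agreeing on the entity identifiers themselves; this sketch simply assumes they are already shared across sources.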

Short Bio: Heiko Stoermer works as a researcher in the area of entity-centric DIKM at Fondazione Bruno Kessler (FBK-irst), Trento, Italy. After studies in Linguistics he moved to the area of Computer Science, where he holds a German university degree. He gathered industry experience as a software consultant and project manager, and opted for an academic career in 2004, when he received a scholarship at the International Doctorate School in Trento. In 2008, he was awarded a PhD for his work on identity and reference on the Semantic Web. His research interests include information integration, semantic interoperability and contextual knowledge representation. He has (co-)authored a number of scientific publications, acted as a reviewer for important international conferences, is a co-organizer of several workshops, and is a steering committee member of the workshop series on Semantic Web Applications and Perspectives. Currently, he is devoting most of his time to his position as Technical Director of the European large-scale Integrated Project OKKAM, which he co-founded.

### Organizing Committee

• Thomas Eiter, TU Vienna, Austria
• Adil El Ghali, IBM, France
• Sergio Fernández, Fundación CTIC, Spain
• Stijn Heymans, TU Vienna, Austria
• Thomas Krennwallner, TU Vienna, Austria
• François Lévy, Université Paris 13, France

### Program Committee

• Patrick Albert, IBM, France
• Darko Anicic, FZI, Germany
• Christopher Brewster, Aston University, UK
• Jordi Cabot, Université de Nantes, France
• Jean Charlet, INSERM, France
• Michael Erdmann, ontoprise GmbH, Germany
• Bernardo Cuenca Grau, University of Oxford, UK
• Stephan Grimm, FZI, Germany
• Pascal Hitzler, Wright State University, USA
• Giovambattista Ianni, Università della Calabria, Italy
• Thomas Krennwallner, Vienna University of Technology, Austria
• Markus Kroetzsch, AIFB, University of Karlsruhe, Germany
• Yue Ma, LIPN, Univ. Paris 13, France
• Diana Maynard, University of Sheffield, UK
• Adeline Nazarenko, Univ. Paris 13, France
• Sjir Nijssen, PNA University
• Adrian Paschke, Free University Berlin, Germany
• Maria Teresa Pazienza, Univ. Tor Vergata, Roma, Italy
• Luis Polo, Fundación CTIC, Spain
• Edna Ruckhaus, Universidad Simón Bolívar, Caracas, Venezuela
• Sebastian Rudolph, AIFB, University of Karlsruhe, Germany
• Jos de Bruijn, Vienna University of Technology, Austria

### Venue

The workshop is co-located with the 4th International Conference on Web Reasoning and Rule Systems (RR2010) and will take place in Bressanone/Brixen (Italy) on September 21, 2010. Some hotels around Brixen offer reduced rates for attendees of BuRO 2010.

### Registration

Participants can register for the workshop at https://conf.seekda.com/conference/registration/register/BURO2010. The registration fee is EUR 40.

If you additionally want to participate in RR and/or SWAP, you have to fill out a separate registration form, accessible from the RR home page (for RR or RR+SWAP registration) or the SWAP home page (for SWAP-only registration).

### Attachments

• accommodation-BuRO-2010.pdf (332 KB)
• BuRO-2010-program.pdf (91 KB)
• buro2010-proceedings.pdf (1957 KB)
• buro2010_paper_1.pdf (226 KB)
• buro2010_paper_2.pdf (670 KB)
• buro2010_paper_3.pdf (743 KB)