To handle real-world problems, state-of-the-art probabilistic logic programming and learning frameworks such as ProbLog reduce expensive probabilistic inference to efficient Weighted Model Counting. To do so, ProbLog employs a sequence of transformation steps called an \emph{inference pipeline}; each step in the pipeline is a \emph{pipeline component}. The choice of mechanism used to implement a component can be crucial to the performance of the system. In this paper we describe several ProbLog pipelines in detail and then perform an empirical analysis to determine which components have a crucial impact on efficiency. Our results show that the conversion to a Boolean formula is the crucial component in an inference pipeline. Our main contributions are a thorough analysis of ProbLog inference pipelines and the introduction of new pipelines, one of which performs very well on our benchmarks.
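To make the reduction to Weighted Model Counting concrete, the following is a minimal sketch in Python of naive WMC by exhaustive enumeration. It is an illustration only, not ProbLog's implementation (which compiles the Boolean formula into a representation that supports efficient counting); the CNF encoding, the variable numbering, and the helper \texttt{weighted\_model\_count} are our own.

\begin{verbatim}
from itertools import product

def weighted_model_count(clauses, weights):
    """Naive weighted model counting by enumerating all assignments.

    clauses: CNF as a list of clauses, each a list of signed ints
             (positive literal = variable true, negative = variable false).
    weights: dict mapping each variable to a (weight_true, weight_false) pair.
    """
    variables = sorted(weights)
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # A model must satisfy at least one literal in every clause.
        if all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses):
            weight = 1.0
            for var, value in assignment.items():
                weight *= weights[var][0] if value else weights[var][1]
            total += weight
    return total

# Illustrative program: alarm holds if burglary or earthquake holds.
# Variables: 1 = burglary (p = 0.6), 2 = earthquake (p = 0.3), 3 = alarm (derived).
weights = {1: (0.6, 0.4), 2: (0.3, 0.7), 3: (1.0, 1.0)}
# CNF encoding of alarm <-> (burglary or earthquake).
clauses = [[-3, 1, 2], [-1, 3], [-2, 3]]
# P(alarm) = WMC(formula and alarm) / WMC(formula) = 0.72 / 1.0
print(weighted_model_count(clauses + [[3]], weights) /
      weighted_model_count(clauses, weights))
\end{verbatim}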
Published in: Practical Aspects of Declarative Languages (PADL 2015), vol. 9131, pp. 90–104. Portland, Oregon, USA, 18–19 June 2015.
DOI: 10.1007/978-3-319-19686-2_7
Licence: CC BY-NC-SA