== Abstract ==
Big Data Pipelines decompose complex analyses of large data sets into a series of simpler tasks, with independently tuned components for each task. This modular setup allows components to be reused across several different pipelines. However, the interaction of independently tuned components yields poor end-to-end performance, as errors introduced by one component cascade through the whole pipeline and degrade overall accuracy. We propose a novel model for reasoning across components of Big Data Pipelines in a probabilistically well-founded manner. Our key idea is to view the interaction of components as dependencies on an underlying graphical model. Different message passing schemes on this graphical model provide various inference algorithms that trade off end-to-end performance against computational cost. We instantiate our framework with an efficient beam search algorithm and demonstrate its effectiveness on two Big Data Pipelines: parsing and relation extraction.
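As a minimal sketch of the idea, rather than the paper's exact algorithm: the fragment below carries a beam of top-scoring hypotheses through a sequence of pipeline components instead of committing to each component's single best output, which is the myopic behaviour that lets errors cascade. The names <code>beam_search</code>, <code>Component</code>, and <code>beam_width</code> are illustrative assumptions, as is the interface in which each component returns a scored k-best list.

<syntaxhighlight lang="python">
import heapq
from typing import Any, Callable, List, Tuple

# Assumed interface: a component expands one hypothesis into a scored
# k-best list of candidate outputs (scores taken to be log-probabilities).
Component = Callable[[Any], List[Tuple[Any, float]]]

def beam_search(pipeline: List[Component],
                x: Any,
                beam_width: int = 5) -> Tuple[float, Any]:
    """Keep the top `beam_width` hypotheses alive through every stage
    instead of committing to each component's single best output."""
    beam: List[Tuple[float, Any]] = [(0.0, x)]
    for component in pipeline:
        # Expand every surviving hypothesis with every candidate the
        # current component proposes, accumulating scores along the way.
        candidates = [
            (score + cand_score, cand)
            for score, hyp in beam
            for cand, cand_score in component(hyp)
        ]
        # Prune back to the beam width before the next stage.
        beam = heapq.nlargest(beam_width, candidates, key=lambda t: t[0])
    # Return the best end-to-end (score, hypothesis) pair.
    return max(beam, key=lambda t: t[0])
</syntaxhighlight>

Setting <code>beam_width</code> to 1 recovers the standard myopic pipeline that greedily commits to each component's top prediction; larger beams trade extra computation for better end-to-end accuracy, mirroring the trade-off described above.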
The original document was published as follows:
Published on 01/01/2013
Volume 2013 (2013)
DOI: 10.1145/2487575.2487588
Licence: CC BY-NC-SA