Frequent Itemset Mining (FIM) is one of the best-known techniques for extracting knowledge from data. The combinatorial explosion inherent in FIM methods becomes even more problematic when they are applied to Big Data. Fortunately, recent advances in the field of parallel programming provide good tools to tackle this problem. However, these tools come with their own technical challenges, e.g., balanced data distribution and inter-communication costs. In this paper, we investigate the applicability of FIM techniques on the MapReduce platform. We introduce two new methods for mining large datasets: Dist-Eclat focuses on speed, while BigFIM is optimized to run on very large datasets. Our experiments show the scalability of our methods.
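To situate the algorithm the abstract names: Eclat mines frequent itemsets over a vertical database layout, where each item maps to the set of transaction ids (its tid-list) and the support of an itemset union is the size of the tid-list intersection. Below is a minimal single-machine sketch of that idea in Python; it is an illustration of plain Eclat, not the distributed Dist-Eclat or BigFIM implementation from the paper, and all names in it are hypothetical:

```python
def eclat(transactions, min_support):
    """Plain (non-distributed) Eclat sketch over a vertical tid-list layout.

    transactions: list of sets of items; min_support: absolute support count.
    Returns a dict mapping frozenset(itemset) -> support.
    """
    # Build the vertical layout: item -> set of transaction ids (tid-list).
    tidlists = {}
    for tid, t in enumerate(transactions):
        for item in t:
            tidlists.setdefault(item, set()).add(tid)

    result = {}

    # Depth-first extension: support of (prefix + item) is the size of the
    # intersection of their tid-lists.
    def extend(prefix, prefix_tids, candidates):
        for i, (item, tids) in enumerate(candidates):
            new_tids = prefix_tids & tids
            if len(new_tids) >= min_support:
                itemset = prefix | {item}
                result[frozenset(itemset)] = len(new_tids)
                extend(itemset, new_tids, candidates[i + 1:])

    items = sorted(tidlists.items())
    for i, (item, tids) in enumerate(items):
        if len(tids) >= min_support:
            result[frozenset([item])] = len(tids)
            extend({item}, tids, items[i + 1:])
    return result
```

Dist-Eclat and BigFIM distribute this search: conceptually, the prefix tree rooted at each frequent itemset prefix can be mined independently, which is what makes the depth-first extension step a natural unit of work for MapReduce mappers.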
This paper was published at the Workshop on Scalable Machine Learning: Theory and Applications on October 6, 2013.
A stand-alone version of BigFIM and Dist-Eclat can be found on GitLab:
(The original source code for Mahout is still available on GitLab; however, this version no longer receives updates: