Packet classification is crucial to the implementation of several advanced services that must distinguish traffic belonging to different flows, such as firewalls, intrusion detection systems, and many QoS implementations. Although hardware solutions such as TCAMs provide high search speed, they do not scale to large rulesets. Instead, some of the most promising algorithmic research exploits the data redundancy found in real-life rulesets to achieve high-performance packet classification. In this paper, we provide a general framework for discerning the relationships and distinctions among existing packet classification algorithms across the design space. Several well-known algorithms, such as RFC and HiCuts/HyperCuts, are carefully analyzed within this framework, and an improved scheme is proposed for each. All algorithms studied in this paper, along with their variations, are objectively assessed using both real-life and synthetic rulesets. The source code of these algorithms is made publicly available on the Web.
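For readers unfamiliar with the problem setting, the sketch below shows a minimal baseline classifier: a linear search over 5-tuple range rules that returns the action of the first (highest-priority) matching rule. The types `Rule`, `Header`, and `classify` are illustrative assumptions, not taken from the paper; algorithms such as RFC and HiCuts/HyperCuts build precomputed data structures precisely so that a lookup does not have to examine every rule in this way.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// A classification rule: a range on each field of the classic 5-tuple.
// Real rulesets typically use prefixes for addresses and ranges for ports;
// ranges subsume both, so they are used here for simplicity.
struct Rule {
    uint32_t src_lo, src_hi;     // source IP range
    uint32_t dst_lo, dst_hi;     // destination IP range
    uint16_t sp_lo, sp_hi;       // source port range
    uint16_t dp_lo, dp_hi;       // destination port range
    uint8_t proto_lo, proto_hi;  // protocol range (exact value or wildcard)
    int action;                  // e.g. permit/deny or a flow identifier
};

struct Header {
    uint32_t src, dst;
    uint16_t sport, dport;
    uint8_t proto;
};

// Baseline linear search over the ruleset, assumed sorted by priority.
// Returns the action of the first matching rule, or nothing on no match.
std::optional<int> classify(const std::vector<Rule>& rules, const Header& h) {
    for (const Rule& r : rules) {
        if (h.src >= r.src_lo && h.src <= r.src_hi &&
            h.dst >= r.dst_lo && h.dst <= r.dst_hi &&
            h.sport >= r.sp_lo && h.sport <= r.sp_hi &&
            h.dport >= r.dp_lo && h.dport <= r.dp_hi &&
            h.proto >= r.proto_lo && h.proto <= r.proto_hi) {
            return r.action;
        }
    }
    return std::nullopt;
}
```

This O(N) lookup is the reference point against which decision-tree and decomposition-based schemes are measured: they trade memory and preprocessing time for far fewer memory accesses per packet.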
Published: 01/01/2005
Volume: 2005
DOI: 10.1109/icas-icns.2005.74
Licence: CC BY-NC-SA