Search-Space Reduction Beyond Kernelization
The framework of kernelization gives a mathematical model for the rigorous analysis of preprocessing, aimed at showing that any instance with parameter \(k\) can be efficiently reduced to an equivalent one whose size depends only on \(k\). Kernelization serves as a useful tool to obtain FPT algorithms, since any brute-force algorithm solves the reduced instance in time depending only on \(k\). However, from the definition it is not clear why kernelization should lead to significant speed-ups when the reduced instance is solved not by brute force, but by a fixed-parameter tractable algorithm whose running time is governed by the value of the parameter. To speed up such algorithms, it is the parameter controlling the size of the search space that should decrease, rather than the encoding size of the input. This discrepancy between reducing the instance size and shrinking the search space is the focus of this talk. The quest for preprocessing algorithms that reduce the search space leads to new kinds of algorithmic and graph-theoretic questions. The talk gives an introduction to this budding line of inquiry by combining examples from recent work with open problems for future research.
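To make the discrepancy concrete, here is a minimal back-of-the-envelope sketch, assuming an FPT algorithm with a typical running time of the form \(c^k \cdot n^{O(1)}\) on instances of size \(n\) with parameter \(k\) (the concrete running-time form is an illustrative assumption). A kernelization that shrinks the instance to size \(g(k)\) only improves the polynomial factor,
\[
  c^k \cdot n^{O(1)} \;\longrightarrow\; c^k \cdot g(k)^{O(1)},
\]
whereas a preprocessing step that decreases the parameter to some \(k' < k\) attacks the exponential term,
\[
  c^k \cdot n^{O(1)} \;\longrightarrow\; c^{k'} \cdot n^{O(1)},
\]
which is where the dominant cost of the search lies.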