Before OCR (Optical Character Recognition) can be applied to an old print, all regions within a scan have to be segmented. This means separating text from images, as well as categorising textual elements into finer-grained classes such as main text, columns, marginalia, or footnotes. A general solution does not currently exist, as possible printing layouts vary widely. We improve the current state of research with a combination of image processing techniques using neural networks (pixel classifiers based on an encoder-decoder architecture with skip connections; contour-based approaches to distinguish letters from non-letters; baseline approaches to recognise virtual lines within text lines) and integrate all segmentation approaches into an OCR pipeline, e.g. OCR4all.
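The pixel-classifier idea (an encoder-decoder with skip connections, in the spirit of U-Net) can be illustrated with a toy numpy sketch. All pooling steps, weights, and thresholds here are illustrative assumptions, not the trained model described above:

```python
import numpy as np

def avg_pool2(x):
    # encoder step: 2x2 average pooling halves the resolution
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    # decoder step: nearest-neighbour upsampling restores the resolution
    return x.repeat(2, axis=0).repeat(2, axis=1)

def segment(image, w_ctx=1.0, w_skip=0.5, bias=-0.6):
    """Toy per-pixel classifier: encoder -> decoder with one skip connection.
    The weights are hand-picked for this example; a real model learns them."""
    enc = avg_pool2(image)            # coarse contextual features
    dec = upsample2(enc)              # back to full resolution
    # skip connection: fuse coarse context with the fine-grained input
    score = w_ctx * dec + w_skip * image + bias
    return (score > 0).astype(int)    # 1 = text pixel, 0 = background

img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0        # bright 2x2 blob standing in for "text"
mask = segment(img)        # mask marks exactly the blob pixels as text
```

The skip connection is the key element: the coarse encoder output alone cannot localise the blob precisely, so the decoder combines it with the full-resolution input before classifying each pixel.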