The digital nature of Online Communities allows unrestricted and anonymous exchange of content. Consequently, they are vulnerable to abuse, especially in the form of offensive communication (Hate Speech). The rising amount of Hate Speech has led Online Communities to hire personnel who manually examine user-generated content to detect such cases. However, due to the vast number of messages, this task is labor-intensive and time-consuming. The goal of this dissertation is to design and implement software artifacts that automatically detect Hate Speech, including the victims it references. The developed methods are based on a sequence model to structure texts and a pattern-based approach to detect grammatical links between hateful words and the referenced victims. The presented method improves classification performance compared to existing approaches. Additionally, it enables the analysis of interrelated communication that is typical of Cyberbullying.
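To make the idea of linking hateful words to referenced victims concrete, the following is a minimal, illustrative sketch. The lexicon, POS tags, and window-based matching are hypothetical stand-ins of my own; the dissertation's actual approach uses grammatical (parse-based) patterns, which this toy token-window matcher only approximates.

```python
# Toy pattern-based detector that pairs a hateful term with a nearby
# candidate victim token. All names and values here are illustrative
# assumptions, not the dissertation's actual lexicon or patterns.
from typing import List, Tuple

HATE_LEXICON = {"idiot", "scum"}     # hypothetical hate lexicon
VICTIM_TAGS = {"PRON", "PROPN"}      # POS tags of candidate victim mentions


def detect_links(tagged: List[Tuple[str, str]],
                 window: int = 3) -> List[Tuple[str, str]]:
    """Return (hateful word, victim) pairs found within a token window.

    `tagged` is a list of (token, POS tag) pairs, assumed to come from
    an upstream tagger.
    """
    links = []
    for i, (tok, _) in enumerate(tagged):
        if tok.lower() in HATE_LEXICON:
            lo, hi = max(0, i - window), min(len(tagged), i + window + 1)
            for j in range(lo, hi):
                word, tag = tagged[j]
                if j != i and tag in VICTIM_TAGS:
                    links.append((tok, word))
    return links


# Example: "you are an idiot" with toy POS tags
print(detect_links([("you", "PRON"), ("are", "VERB"),
                    ("an", "DET"), ("idiot", "NOUN")]))
# → [('idiot', 'you')]
```

A real system would replace the token window with dependency-parse relations, so that the link between the hateful word and its target reflects actual grammatical structure rather than mere proximity.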