Parallel task processing is ubiquitous in modern applications and is necessary to keep up with the requirements of modern software systems. However, the current implementation of parallel processing in GECKODB, a graph database system developed in our group, spawns many short-lived threads that each execute a single task and then terminate. Because threads are never reused, this incurs considerable overhead. To counter this effect, we implemented a thread pool for GECKODB that avoids the cost of repeatedly creating and destroying thousands of threads. In this paper, we present our thread pool implementation for processing independent tasks in parallel, including support for waiting on a set of submitted tasks, and we compare it against the current implementation. Additionally, we show that the task and thread pool configuration has a high impact on performance, depending on the use case. Our evaluation demonstrates that the implementation fulfils all given requirements and generally reduces overhead compared to creating and terminating threads individually.