Ensuring data privacy is an essential objective that competes with the ever-rising capabilities of machine learning approaches fueled by vast amounts of centralized data. Federated learning addresses this conflict by moving the model to the data, ensuring that the data itself never leaves a client's device. However, preserving privacy introduces new challenges concerning algorithm performance and the fairness of an algorithm's results that remain largely unexplored from a sociotechnical perspective. We address this research gap by conducting a structured literature review and analyzing 152 articles to develop a taxonomy of federated learning applications with nine dimensions and 24 characteristics. Our taxonomy illustrates how different attributes of federated learning may affect the trade-off between an algorithm's privacy, performance, and fairness. Despite increasing interest in the technical implementation of federated learning, our work is among the first to emphasize an information systems perspective on this emerging and promising topic.
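
For readers unfamiliar with the mechanism summarized above, the following minimal sketch (not taken from the paper or the reviewed articles) illustrates the federated learning principle: clients train on their own local data and share only parameter updates, which a server aggregates in a FedAvg-style weighted average. The linear model, client datasets, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the federated learning principle: each client trains on its
# own local data, and only model parameters -- never raw data -- are sent to
# the server for aggregation. All data and model choices are illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training (simple linear regression via gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w  # only the updated parameters leave the device

def federated_round(global_w, client_data):
    """Server aggregates client updates, weighted by local dataset size (FedAvg-style)."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three clients with private local datasets that never leave the "device".
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):          # communication rounds
    w = federated_round(w, clients)
print("learned weights:", w)  # approaches true_w without pooling any raw data
```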