    Data Science Chair

    Research

    At the Data Science Chair, we focus on a diverse set of research topics, all connected by a common interest in the fundamentals of machine learning. Here, we give a general overview of the methods and research topics we are currently interested in. For our various applications and projects, click here

    Knowledge Enriched Machine Learning

    Leveraging additional domain knowledge in deep learning models, for example knowledge graphs for NLP, additional item attributes in recommendation, or knowledge about physical characteristics in Neural PDEs.

    Large Language Models

    We are working on adapting Large Language Models such as BERT and BART to solve diverse tasks, as well as on extracting knowledge from and injecting knowledge into such models.

    Time Series & Sequence Modeling

    Reasonable modeling of time or order is a challenge in many areas, such as sensor data, but also (human) behavior and language. Other examples include RNA sequences, behavior sequences in recommendation, and many more.
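    A common starting point for the sequence modeling described above, sketched here with an illustrative toy series and window length (not a method specific to the chair), is the sliding-window setup that turns an ordered series into (history, next-value) training pairs:

```python
# Hedged sketch of the standard sliding-window setup for sequence
# modeling: turn an ordered series into (input window, next value)
# pairs while preserving order. Window length and data are illustrative.

def sliding_windows(series, window=3):
    """Yield (history window, next element) pairs from an ordered series."""
    return [(series[i:i + window], series[i + window])
            for i in range(len(series) - window)]

# Toy sensor readings.
readings = [1, 2, 3, 4, 5]
pairs = sliding_windows(readings)  # [([1, 2, 3], 4), ([2, 3, 4], 5)]
```

    The same windowing applies whether the elements are sensor readings, tokens, or items in a user's interaction history.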

    Deep Representation and Metric Learning 

    Developing common approaches for designing and learning representations and metrics for a specific domain or downstream task.
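    One widely used loss in deep metric learning, given here as an illustration rather than as the chair's specific approach, is the triplet loss: an anchor should be closer to a positive example than to a negative one by at least a margin. The embeddings and margin below are hypothetical:

```python
# Hedged sketch of the triplet loss from deep metric learning.
# Embedding vectors and margin are illustrative toy values.

def squared_euclidean(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(anchor, positive) - d(anchor, negative) + margin)."""
    return max(0.0, squared_euclidean(anchor, positive)
               - squared_euclidean(anchor, negative) + margin)

# Toy embeddings: the positive is near the anchor, the negative far away,
# so the margin is already satisfied and the loss is zero.
loss = triplet_loss([0.0, 0.0], [0.1, 0.0], [2.0, 0.0])  # → 0.0
```

    Minimizing this loss over many triplets shapes the embedding space so that distances reflect the similarity structure of the domain or downstream task.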

    Deep Learning for Imbalanced Data

    Improving and developing machine learning techniques that can handle rare events, for example in weather or bee hive data, or anomalies in security or ERP data.
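    One standard remedy for such imbalance, shown here as a generic illustration with made-up label counts, is inverse-frequency class weighting, which upweights the rare class in the training loss:

```python
# Hedged sketch of inverse-frequency class weights for imbalanced data.
# The label counts are illustrative, not from any real dataset.

from collections import Counter

def class_weights(labels):
    """Weight each class c by n_samples / (n_classes * count(c))."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * count) for c, count in counts.items()}

# 90 normal events vs. 10 rare events (e.g. anomalies):
labels = [0] * 90 + [1] * 10
weights = class_weights(labels)  # the rare class gets a ~9x larger weight
```

    These weights are typically passed to the loss function, so that misclassifying a rare event costs roughly as much in total as misclassifying the common class.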

    Explainable AI

    Understanding models through explainable AI techniques helps to effectively build models tailored to the specific challenges of the various application areas.

    Ranking

    We have also done research on ranking methods based on PageRank, as well as on methods for learning to rank.
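    For readers unfamiliar with PageRank, the classic algorithm can be sketched as power iteration on a link graph; the tiny graph and the conventional damping factor of 0.85 below are illustrative, not tied to any specific project:

```python
# Hedged sketch of PageRank via power iteration on a toy link graph.
# The graph and damping factor (0.85, the conventional choice) are
# illustrative only.

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each node to its list of outgoing neighbors."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, out in links.items():
            if out:
                # Distribute v's rank equally over its outgoing links.
                share = damping * rank[v] / len(out)
                for w in out:
                    new[w] += share
            else:
                # Dangling node: spread its rank uniformly over all nodes.
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# Toy graph: both 'a' and 'b' link to 'c', so 'c' ranks highest.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
```

    The ranks form a probability distribution (they sum to 1) and converge to the stationary distribution of a random surfer who follows links with probability 0.85 and teleports to a random node otherwise.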