    Data Science Chair

    Our paper "Enhancing Sequential Next-Item Prediction through Modelling Non-Item Pages" won the Best Paper Award at the ICDM NeuRec Workshop 2023

    01.12.2023

At the 4th International Workshop on Advanced Neural Algorithms and Theories for Recommender Systems (NeuRec), held at ICDM 2023, our paper "Enhancing Sequential Next-Item Prediction through Modelling Non-Item Pages" received the Best Paper Award.

    Abstract 
    By analyzing the sequence of historical interactions between users and items, sequential recommendation models can learn and infer user intent and make predictions about the next item of interest. Next to these item interactions, in most systems there are also interactions with pages not related to specific items, for example navigation pages, account pages, and pages for a specific category or brand, which may provide additional insights into the user interests. However, while there are various approaches to integrate additional information about items and users, the topic of non-item pages has been less explored. In order to fill this gap, we propose various approaches to representing these non-item pages (e.g., based on their content or a unique id) to use them as an additional information source for the task of sequential next-item prediction in transformer-based models. We create a synthetic dataset with non-item pages highly related to the subsequent item to show that the models are generally capable of learning from these non-item page interactions, and subsequently evaluate the improvements gained by including non-item pages contained in two real-world datasets. We adapt two state-of-the-art models capable of integrating item attributes and investigate the abilities of bi- and uni-directional transformers to extract user intent from additional non-item pages. Our results show that non-item pages are a valuable source of information, but representing such a page well is the key to leveraging them successfully. The inclusion of non-item pages, when represented appropriately, increases the performance of state-of-the-art transformer models for next-item prediction.
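
    To make the idea concrete, here is a minimal, hypothetical sketch in Python/PyTorch (not the code from the paper) of the unique-id variant described in the abstract: each non-item page gets its own embedding, which is merged into the same interaction sequence a transformer encoder reads for next-item prediction. All class names, variable names, and hyperparameters below are illustrative assumptions.

    # Illustrative sketch only -- not the authors' implementation.
    # Shows one way a non-item page (e.g., a category page) could be
    # represented by a unique id alongside item ids in a transformer-based
    # next-item model.
    import torch
    import torch.nn as nn

    class SequenceEncoder(nn.Module):
        def __init__(self, num_items, num_pages, d_model=64, n_heads=2,
                     n_layers=2, max_len=50):
            super().__init__()
            # index 0 is reserved for padding in both vocabularies
            self.item_emb = nn.Embedding(num_items + 1, d_model, padding_idx=0)
            self.page_emb = nn.Embedding(num_pages + 1, d_model, padding_idx=0)
            self.pos_emb = nn.Embedding(max_len, d_model)
            layer = nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=128, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.out = nn.Linear(d_model, num_items + 1)  # scores over items only

        def forward(self, item_ids, page_ids):
            # item_ids and page_ids are aligned per step: exactly one of the
            # two is non-zero, so summing the embeddings selects the right one.
            x = self.item_emb(item_ids) + self.page_emb(page_ids)
            pos = torch.arange(item_ids.size(1), device=item_ids.device)
            x = x + self.pos_emb(pos)
            # bidirectional attention here; adding a causal mask would give
            # the uni-directional variant the abstract also mentions
            h = self.encoder(x)
            return self.out(h[:, -1])  # predict the next *item* from the last step

    # toy usage: session [item 3, non-item page 7, item 5] -> next-item scores
    model = SequenceEncoder(num_items=100, num_pages=20)
    items = torch.tensor([[3, 0, 5]])
    pages = torch.tensor([[0, 7, 0]])
    scores = model(items, pages)  # shape: (1, 101)

    A content-based representation would replace the page-id embedding with an encoding of the page's content or attributes; the paper compares several such representation choices.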

    Link to the paper
