A GNN-based multi-task learning framework for personalized video search

Abstract

Watching online videos has become increasingly popular, and users tend to watch videos according to their personal tastes and preferences. Providing a customized ranking list that maximizes user satisfaction is therefore crucial for online video platforms. Existing personalized search methods (PSMs) train their models on user feedback (e.g., clicks). However, we observe that in video search such feedback signals may indicate attractiveness but not necessarily relevance. Moreover, click data and user history are usually too sparse to train a good PSM, unlike conventional Web search, where rich user history is available. To address these concerns, we propose a multi-task graph neural network architecture for personalized video search (MGNN-PVS) that jointly models users' click behavior and the relevance between queries and videos. To relieve the sparsity problem and learn better representations for users, queries, and videos, we develop an efficient and novel GNN architecture based on neighborhood sampling and a hierarchical aggregation strategy that leverages different hops of neighbors in the user-query and query-document click graphs. Extensive experiments on a major commercial video search engine show that our model significantly outperforms state-of-the-art PSMs, demonstrating the effectiveness of the proposed framework.
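
The abstract does not spell out implementation details, so the following is only a minimal, hypothetical PyTorch sketch of the two ideas it names: hierarchical aggregation over sampled multi-hop neighbors, and two task heads (click prediction and query-video relevance) trained jointly. All class names, dimensions, the mean-pooling aggregator, and the loss weight are assumptions for illustration, not the paper's actual architecture.

    import torch
    import torch.nn as nn

    class HierAgg(nn.Module):
        """One aggregation step: pool sampled neighbors, fuse with the node itself."""
        def __init__(self, dim):
            super().__init__()
            self.proj = nn.Linear(2 * dim, dim)

        def forward(self, node, neigh):
            # node: (B, d); neigh: (B, n, d) sampled neighbor embeddings
            pooled = neigh.mean(dim=1)  # simple mean-pooling (an assumption)
            return torch.relu(self.proj(torch.cat([node, pooled], dim=-1)))

    class MGNNPVSSketch(nn.Module):
        """Hypothetical multi-task model: click head and relevance head share encoders."""
        def __init__(self, dim=64):
            super().__init__()
            self.agg_hop2 = HierAgg(dim)   # folds 2-hop neighbors into 1-hop
            self.agg_hop1 = HierAgg(dim)   # folds 1-hop neighbors into the node
            self.click_head = nn.Linear(3 * dim, 1)  # user x query x video -> click
            self.rel_head = nn.Linear(2 * dim, 1)    # query x video -> relevance

        def encode(self, node, hop1, hop2):
            # Hierarchical aggregation: 2-hop -> 1-hop, then 1-hop -> node.
            # hop1: (B, n1, d); hop2: (B, n1, n2, d)
            B, n1, n2, d = hop2.shape
            hop1 = self.agg_hop2(hop1.reshape(B * n1, d),
                                 hop2.reshape(B * n1, n2, d)).reshape(B, n1, d)
            return self.agg_hop1(node, hop1)

        def forward(self, user, u1, u2, query, q1, q2, video):
            u = self.encode(user, u1, u2)      # user encoded from its click graph
            q = self.encode(query, q1, q2)     # query encoded from its click graph
            click = self.click_head(torch.cat([u, q, video], dim=-1))
            rel = self.rel_head(torch.cat([q, video], dim=-1))
            return click, rel

    # Usage with random tensors; n1/n2 are the per-hop sample sizes (assumed).
    d, B, n1, n2 = 64, 8, 5, 3
    model = MGNNPVSSketch(d)
    user, query, video = torch.randn(B, d), torch.randn(B, d), torch.randn(B, d)
    u1, q1 = torch.randn(B, n1, d), torch.randn(B, n1, d)
    u2, q2 = torch.randn(B, n1, n2, d), torch.randn(B, n1, n2, d)
    click_logit, rel_logit = model(user, u1, u2, query, q1, q2, video)
    # Joint multi-task objective; the 0.5 weight is illustrative only.
    loss = nn.functional.binary_cross_entropy_with_logits(click_logit, torch.rand(B, 1)) \
         + 0.5 * nn.functional.binary_cross_entropy_with_logits(rel_logit, torch.rand(B, 1))

Decoupling the click head from the relevance head is what lets the model treat clicks as attractiveness signals while still learning query-video relevance, as the abstract describes.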

Publication
International Conference on Web Search and Data Mining (WSDM)
Li Zhang
PhD Student (now an Associate Lecturer at University College London)
Haiping Lu
Director of the UK Open Multimodal AI Network, Professor of Machine Learning, and Head of AI Research Engineering