Welcome to EGAD

Abstract

In this study, we present a dynamic graph representation learning model on weighted graphs to accurately predict the network capacity of connections between viewers in a live video streaming event. We propose EGAD, a neural network architecture that captures graph evolution by introducing a self-attention mechanism on the weights between consecutive graph convolutional networks (GCNs). In addition, we account for the fact that our neural architecture may require a large number of parameters to train, which increases the online inference latency and negatively affects the user experience during a live video streaming event. To address the high online inference latency caused by the large number of parameters, we propose a knowledge distillation strategy. We design a distillation loss function that first pretrains a teacher model on offline data and then transfers the knowledge from the teacher to a smaller student model with fewer parameters. We evaluate our proposed model on the link prediction task on three real-world datasets, generated by live video streaming events lasting 80 minutes at our company, Hive Streaming AB. The experiments demonstrate the effectiveness of the proposed model in terms of link prediction accuracy and number of required parameters, evaluated against competitive approaches. In addition, we study the distillation performance of the proposed model in terms of compression ratio for different distillation strategies, and we show that the proposed model can achieve a compression ratio of up to 15:100 while preserving high link prediction accuracy.
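The teacher-to-student transfer described above can be illustrated with a minimal sketch. The details below are assumptions for illustration only: we assume the distillation objective blends the student's error against the ground-truth link weights with its error against the frozen teacher's predictions, using squared error for both terms and a hypothetical mixing weight `alpha`; the paper's actual loss design may differ.

```python
import numpy as np

def distillation_loss(student_pred, teacher_pred, target, alpha=0.5):
    """Hypothetical distillation objective (illustrative, not the paper's exact loss).

    Combines two squared-error terms:
      - task term: student predictions vs. ground-truth link weights
      - distillation term: student predictions vs. frozen teacher predictions
    `alpha` balances the two terms.
    """
    task_term = np.mean((student_pred - target) ** 2)
    distill_term = np.mean((student_pred - teacher_pred) ** 2)
    return alpha * task_term + (1.0 - alpha) * distill_term

# Example: the student is trained to stay close to both the labels
# and the pretrained teacher's outputs.
student = np.array([2.0, 0.5])
teacher = np.array([1.5, 0.5])
target = np.array([1.0, 1.0])
loss = distillation_loss(student, teacher, target, alpha=0.5)
```

In practice, the teacher is pretrained offline and kept fixed while this loss is minimized with respect to the student's (fewer) parameters, which is what enables the compression reported in the experiments.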

Datasets

For this paper, we collected three real-world datasets of live video streaming events. You can download them using the following links:

  1. LiveStream-4K
  2. LiveStream-6K
  3. LiveStream-16K