Graph representation learning with node2vec
Author: Khalid Salama
Date created: 2021/05/15
Last modified: 2021/05/15
Description: Implementing the node2vec model to generate embeddings for movies from the MovieLens dataset.
Introduction
Learning useful representations from objects structured as graphs is useful for a variety of machine learning (ML) applications, such as social and communication network analysis, biomedicine studies, and recommendation systems. Graph representation learning aims to learn embeddings for the graph nodes, which can be used for a variety of ML tasks such as node label prediction (e.g. categorizing an article based on its citations) and link prediction (e.g. recommending an interest group to a user in a social network).
node2vec is a simple, yet scalable and effective technique for learning low-dimensional embeddings for nodes in a graph by optimizing a neighborhood-preserving objective. The aim is to learn similar embeddings for neighboring nodes, with respect to the graph structure.
Given your data items structured as a graph (where the items are represented as nodes and the relationships between items are represented as edges), node2vec works as follows:
Generate item sequences using (biased) random walk.
Create positive and negative training examples from these sequences.
Train a word2vec model (skip-gram) to learn embeddings for the items.
In this example, we demonstrate the node2vec technique on the small version of the MovieLens dataset to learn movie embeddings. Such a dataset can be represented as a graph by treating the movies as nodes, and creating edges between movies that have similar ratings by the users. The learnt movie embeddings can be used for tasks such as movie recommendation, or movie genre prediction.
This example requires the networkx package, which can be installed using the following command:
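The standard installation command is:

```shell
pip install networkx
```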
Setup
Download the MovieLens dataset and prepare the data
The small version of the MovieLens dataset includes around 100k ratings from 610 users on 9,742 movies.
First, let's download the dataset. The downloaded folder will contain three data files: `users.csv`, `movies.csv`, and `ratings.csv`. In this example, we will only need the `movies.csv` and `ratings.csv` data files.
Then, we load the data into a Pandas DataFrame and perform some basic preprocessing.
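The preprocessing might look like the sketch below, shown on a tiny in-memory stand-in for `ratings.csv` (prefixing the movie ids with `movie_` is one plausible choice for turning them into string tokens; the column names follow the MovieLens layout):

```python
import pandas as pd

# Tiny stand-in for ratings.csv (the real file has ~100k rows).
ratings = pd.DataFrame(
    {"userId": [1, 1, 2], "movieId": [1, 3, 1], "rating": [4.0, 5.0, 3.5]}
)

# Prefix movie ids so they read as string tokens rather than integers.
ratings["movieId"] = ratings["movieId"].apply(lambda x: f"movie_{x}")
ratings["rating"] = ratings["rating"].astype("float")
```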
Let's inspect a sample instance of the `ratings` DataFrame.
Next, let's check a sample instance of the `movies` DataFrame.
Implement two utility functions for the `movies` DataFrame.
Construct the Movies graph
We create an edge between two movie nodes in the graph if both movies are rated by the same user with a rating >= `min_rating`. The weight of the edge is based on the pointwise mutual information between the two movies, which is computed as: `log(xy) - log(x) - log(y) + log(D)`, where:

- `xy` is how many users rated both movie `x` and movie `y` with >= `min_rating`.
- `x` is how many users rated movie `x` with >= `min_rating`.
- `y` is how many users rated movie `y` with >= `min_rating`.
- `D` is the total number of movie ratings with >= `min_rating`.
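The weight computation above can be written directly (a minimal sketch; the function name is illustrative):

```python
from math import log

def pmi_weight(xy: int, x: int, y: int, D: int) -> float:
    """Pointwise mutual information: log(xy) - log(x) - log(y) + log(D)."""
    return log(xy) - log(x) - log(y) + log(D)

# Example: 5 users rated both movies, 20 rated movie x, 25 rated movie y,
# out of 1000 qualifying ratings in total.
w = pmi_weight(5, 20, 25, 1000)  # log(5 * 1000 / (20 * 25)) = log(10)
```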
Step 1: create the weighted edges between movies.
Step 2: create the graph with the nodes and the edges
To reduce the number of edges between nodes, we only add an edge between movies if the weight of the edge is greater than `min_weight`.
Let's display the total number of nodes and edges in the graph. Note that the number of nodes is less than the total number of movies, since only the movies that have edges to other movies are added.
Let's display the average node degree (number of neighbours) in the graph.
Step 3: Create vocabulary and a mapping from tokens to integer indices
The vocabulary is the nodes (movie IDs) in the graph.
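A minimal sketch of the mapping (reserving index 0 for a padding/NA token is an assumption mirroring common practice; the node list is a stand-in for the graph's nodes):

```python
# Stand-in for the graph's node list (movie ids as string tokens).
nodes = ["movie_1", "movie_3", "movie_6"]

# Index 0 is reserved for a padding/NA token (an assumption).
vocabulary = ["NA"] + nodes
vocabulary_lookup = {token: idx for idx, token in enumerate(vocabulary)}
```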
Implement the biased random walk
A random walk starts from a given node, and randomly picks a neighbour node to move to. If the edges are weighted, the neighbour is selected probabilistically with respect to the weights of the edges between the current node and its neighbours. This procedure is repeated for `num_steps` to generate a sequence of related nodes.
The biased random walk balances between breadth-first sampling (where only local neighbours are visited) and depth-first sampling (where distant neighbours are visited) by introducing the following two parameters:
- Return parameter (`p`): Controls the likelihood of immediately revisiting a node in the walk. Setting it to a high value encourages moderate exploration, while setting it to a low value would keep the walk local.
- In-out parameter (`q`): Allows the search to differentiate between inward and outward nodes. Setting it to a high value biases the random walk towards local nodes, while setting it to a low value biases the walk to visit nodes which are further away.
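A minimal sketch of one biased step, using a plain adjacency dict instead of a networkx graph for self-containment (function names are illustrative):

```python
import random

def next_step(graph, previous, current, p, q):
    """graph: dict mapping node -> {neighbour: edge_weight}.
    Pick the next node with node2vec's biased transition probabilities."""
    neighbours = list(graph[current])
    weights = []
    for neighbour in neighbours:
        w = graph[current][neighbour]
        if neighbour == previous:
            weights.append(w / p)  # returning to the previous node
        elif previous is not None and previous in graph[neighbour]:
            weights.append(w)      # neighbour is also adjacent to previous
        else:
            weights.append(w / q)  # moving outward
    return random.choices(neighbours, weights=weights, k=1)[0]

def random_walk(graph, start, num_steps, p, q):
    walk = [start]
    previous = None
    while len(walk) < num_steps:
        current = walk[-1]
        walk.append(next_step(graph, previous, current, p, q))
        previous = current
    return walk
```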
Generate training data using the biased random walk
You can explore different configurations of `p` and `q` to get different results of related movies.
Generate positive and negative examples
To train a skip-gram model, we use the generated walks to create positive and negative training examples. Each example includes the following features:

- `target`: A movie in a walk sequence.
- `context`: Another movie in a walk sequence.
- `weight`: How many times these two movies occurred in walk sequences.
- `label`: The label is 1 if these two movies are sampled from the walk sequences, otherwise (i.e., if randomly sampled) the label is 0.
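The example-generation step above can be sketched without Keras utilities (the function and parameter names below are illustrative; positives come from co-occurrence within a window, negatives pair a target with a random token):

```python
import collections
import random

def generate_examples(sequences, window_size, num_negative, vocabulary_size):
    """Create (target, context, label) -> weight counts from walk sequences."""
    example_weights = collections.defaultdict(int)
    for sequence in sequences:
        for i, target in enumerate(sequence):
            lo = max(0, i - window_size)
            hi = min(len(sequence), i + window_size + 1)
            for j in range(lo, hi):
                if i == j:
                    continue
                example_weights[(target, sequence[j], 1)] += 1  # positive pair
            for _ in range(num_negative):
                negative = random.randint(1, vocabulary_size - 1)
                example_weights[(target, negative, 0)] += 1  # random negative
    targets, contexts, labels, weights = [], [], [], []
    for (t, c, label), w in example_weights.items():
        targets.append(t)
        contexts.append(c)
        labels.append(label)
        weights.append(w)
    return targets, contexts, labels, weights
```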
Generate examples
Let's display the shapes of the outputs
Convert the data into tf.data.Dataset
objects
Train the skip-gram model
Our skip-gram is a simple binary classification model that works as follows:
- An embedding is looked up for the `target` movie.
- An embedding is looked up for the `context` movie.
- The dot product is computed between these two embeddings.
- The result (after a sigmoid activation) is compared to the label.
- A binary crossentropy loss is used.
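A sketch of such a model in Keras (the single shared embedding table is an assumption; two separate tables would also match the description above, and the sigmoid is folded into the loss via `from_logits=True`):

```python
from tensorflow import keras
from tensorflow.keras import layers

def create_skipgram_model(vocabulary_size, embedding_dim):
    # Integer ids for the target and context movies.
    target = layers.Input(name="target", shape=(), dtype="int32")
    context = layers.Input(name="context", shape=(), dtype="int32")
    # A single shared embedding table (an assumption).
    embed = layers.Embedding(
        vocabulary_size, embedding_dim, name="item_embeddings"
    )
    # Dot product of the two looked-up embeddings produces the logit.
    logits = layers.Dot(axes=1)([embed(target), embed(context)])
    model = keras.Model(inputs=[target, context], outputs=logits)
    # Sigmoid + binary crossentropy, applied to the raw logit.
    model.compile(
        optimizer="adam",
        loss=keras.losses.BinaryCrossentropy(from_logits=True),
    )
    return model
```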
Implement the model
Train the model
We instantiate the model and compile it.
Let's plot the model.
Now we train the model on the `dataset`.

Finally, we plot the learning history.
Analyze the learnt embeddings
Find related movies
Define a list with some movies called `query_movies`.
Get the embeddings of the movies in `query_movies`.
Compute the cosine similarity between the embeddings of `query_movies` and all the other movies, then pick the top k for each.
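A possible NumPy sketch of this step (the function name is illustrative; note that each query movie ranks itself first with similarity 1.0):

```python
import numpy as np

def top_k_related(embeddings, query_indices, k):
    """Cosine similarity between query embeddings and all embeddings,
    returning the indices of the k most similar items per query."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / np.maximum(norms, 1e-12)
    scores = normalized[query_indices] @ normalized.T
    # Sort descending by similarity and keep the top k per query row.
    return np.argsort(-scores, axis=1)[:, :k]
```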
Display the top related movies in `query_movies`.
Visualize the embeddings using the Embedding Projector
Download the `embeddings.tsv` and `metadata.tsv` files to analyze the obtained embeddings in the Embedding Projector.
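Writing the two files might look like this (the variable names, the toy embedding matrix, and the convention of skipping the padding token are assumptions):

```python
import numpy as np

# Hypothetical inputs: an (N, d) embedding matrix and the vocabulary list.
movie_embeddings = np.random.default_rng(0).normal(size=(3, 4))
vocabulary = ["NA", "movie_1", "movie_3"]

with open("embeddings.tsv", "w", encoding="utf-8") as f_emb, open(
    "metadata.tsv", "w", encoding="utf-8"
) as f_meta:
    for token, vector in zip(vocabulary, movie_embeddings):
        if token == "NA":
            continue  # skip the padding token
        f_emb.write("\t".join(str(x) for x in vector) + "\n")
        f_meta.write(token + "\n")
```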