"""
Title: WGAN-GP with R-GCN for the generation of small molecular graphs
Author: [akensert](https://github.com/akensert)
Date created: 2021/06/30
Last modified: 2021/06/30
Description: Complete implementation of WGAN-GP with R-GCN to generate novel molecules.
Accelerator: GPU
"""

"""
## Introduction

In this tutorial, we implement a generative model for graphs and use it to generate
novel molecules.

Motivation: The [development of new drugs](https://en.wikipedia.org/wiki/Drug_development)
(molecules) can be extremely time-consuming and costly. Deep learning models can
speed up the search for good candidate drugs by predicting properties of known molecules
(e.g., solubility, toxicity, affinity to a target protein). As the number of
possible molecules is astronomical, the space in which we search for/explore molecules is
just a fraction of the entire space. Therefore, it's arguably desirable to implement
generative models that can learn to generate novel molecules (which would otherwise never have been explored).

### References (implementation)

The implementation in this tutorial is based on/inspired by the
[MolGAN paper](https://arxiv.org/abs/1805.11973) and DeepChem's
[Basic MolGAN](https://deepchem.readthedocs.io/en/latest/api_reference/models.html#basicmolganmodel).

### Further reading (generative models)

Recent implementations of generative models for molecular graphs also include
[Mol-CycleGAN](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-019-0404-1),
[GraphVAE](https://arxiv.org/abs/1802.03480) and
[JT-VAE](https://arxiv.org/abs/1802.04364). For more information on generative
adversarial networks, see [GAN](https://arxiv.org/abs/1406.2661),
[WGAN](https://arxiv.org/abs/1701.07875) and [WGAN-GP](https://arxiv.org/abs/1704.00028).
"""

"""
## Setup

### Install RDKit

[RDKit](https://www.rdkit.org/) is a collection of cheminformatics and machine-learning
software written in C++ and Python. In this tutorial, RDKit is used to conveniently and
efficiently transform
[SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) to
molecule objects, and then from those obtain sets of atoms and bonds.

SMILES expresses the structure of a given molecule in the form of an ASCII string.
The SMILES string is a compact encoding which, for smaller molecules, is relatively
human-readable. Encoding molecules as strings simplifies database and web
searches for a given molecule. RDKit uses algorithms to
accurately transform a given SMILES to a molecule object, which can then
be used to compute a great number of molecular properties/features.
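
For example, RDKit can parse the SMILES string for ethanol (`CCO`) and answer simple
queries about the resulting molecule object (a minimal illustration):

```
from rdkit import Chem

mol = Chem.MolFromSmiles("CCO")  # ethanol: two carbons, one oxygen (hydrogens implicit)
print(mol.GetNumHeavyAtoms())    # 3
print(Chem.MolToSmiles(mol))     # canonical SMILES: "CCO"
```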

Notice, RDKit is commonly installed via [Conda](https://www.rdkit.org/docs/Install.html).
However, thanks to
[rdkit_platform_wheels](https://github.com/kuelumbus/rdkit_platform_wheels), rdkit
can now (for the sake of this tutorial) be installed easily via pip, as follows:

```
pip -q install rdkit-pypi
```

And to allow easy visualization of molecule objects, Pillow needs to be installed:

```
pip -q install Pillow
```
"""

"""
### Import packages
"""

from rdkit import Chem, RDLogger
from rdkit.Chem.Draw import IPythonConsole, MolsToGridImage
import numpy as np
import tensorflow as tf
from tensorflow import keras

RDLogger.DisableLog("rdApp.*")

"""
## Dataset

The dataset used in this tutorial is a
[quantum mechanics dataset](http://quantum-machine.org/datasets/) (QM9), obtained from
[MoleculeNet](http://moleculenet.ai/datasets-1). Although many feature and label columns
come with the dataset, we'll only focus on the
[SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system)
column. The QM9 dataset is a good first dataset to work with for generating
graphs, as the maximum number of heavy (non-hydrogen) atoms found in a molecule is only nine.
"""

csv_path = tf.keras.utils.get_file(
    "qm9.csv", "https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/qm9.csv"
)

data = []
with open(csv_path, "r") as f:
    for line in f.readlines()[1:]:
        data.append(line.split(",")[1])

# Let's look at a molecule of the dataset
smiles = data[1000]
print("SMILES:", smiles)
molecule = Chem.MolFromSmiles(smiles)
print("Num heavy atoms:", molecule.GetNumHeavyAtoms())
molecule

"""
### Define helper functions

These helper functions will help convert SMILES to graphs and graphs to molecule objects.

**Representing a molecular graph**. Molecules can naturally be expressed as undirected
graphs `G = (V, E)`, where `V` is a set of vertices (atoms) and `E` a set of edges
(bonds). In this implementation, each graph (molecule) will be represented as an
adjacency tensor `A`, which encodes the existence/non-existence of bonds between atom
pairs, with the one-hot encoded bond types stretching an extra dimension, and a feature
tensor `H`, which for each atom one-hot encodes its atom type. Notice, as hydrogen atoms
can be inferred by RDKit, hydrogen atoms are excluded from `A` and `H` for easier modeling.
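
As a concrete illustration, for the two heavy atoms of formaldehyde (`C=O`), `H` would
one-hot encode carbon in row 0 and oxygen in row 1, and `A` would mark their double bond
in channel 1 (channels-first): `A[1, 0, 1] = A[1, 1, 0] = 1`. All remaining atom and
atom-pair slots fall into the last ("non-atom"/"non-bond") channels.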

"""

atom_mapping = {
    "C": 0,
    0: "C",
    "N": 1,
    1: "N",
    "O": 2,
    2: "O",
    "F": 3,
    3: "F",
}

bond_mapping = {
    "SINGLE": 0,
    0: Chem.BondType.SINGLE,
    "DOUBLE": 1,
    1: Chem.BondType.DOUBLE,
    "TRIPLE": 2,
    2: Chem.BondType.TRIPLE,
    "AROMATIC": 3,
    3: Chem.BondType.AROMATIC,
}

NUM_ATOMS = 9  # Maximum number of atoms
ATOM_DIM = 4 + 1  # Number of atom types ("C", "N", "O", "F", plus "non-atom")
BOND_DIM = 4 + 1  # Number of bond types (single, double, triple, aromatic, plus "non-bond")
LATENT_DIM = 64  # Size of the latent space


def smiles_to_graph(smiles):
    # Convert SMILES to molecule object
    molecule = Chem.MolFromSmiles(smiles)

    # Initialize adjacency and feature tensors
    adjacency = np.zeros((BOND_DIM, NUM_ATOMS, NUM_ATOMS), "float32")
    features = np.zeros((NUM_ATOMS, ATOM_DIM), "float32")

    # Loop over each atom in the molecule
    for atom in molecule.GetAtoms():
        i = atom.GetIdx()
        atom_type = atom_mapping[atom.GetSymbol()]
        features[i] = np.eye(ATOM_DIM)[atom_type]
        # Loop over one-hop neighbors
        for neighbor in atom.GetNeighbors():
            j = neighbor.GetIdx()
            bond = molecule.GetBondBetweenAtoms(i, j)
            bond_type_idx = bond_mapping[bond.GetBondType().name]
            adjacency[bond_type_idx, [i, j], [j, i]] = 1

    # Where there is no bond, add 1 to the last channel (indicating "non-bond")
    # Notice: channels-first
    adjacency[-1, np.sum(adjacency, axis=0) == 0] = 1

    # Where there is no atom, add 1 to the last column (indicating "non-atom")
    features[np.where(np.sum(features, axis=1) == 0)[0], -1] = 1

    return adjacency, features


def graph_to_molecule(graph):
    # Unpack graph
    adjacency, features = graph

    # RWMol is a molecule object intended to be edited
    molecule = Chem.RWMol()

    # Remove "no atoms" & atoms with no bonds
    keep_idx = np.where(
        (np.argmax(features, axis=1) != ATOM_DIM - 1)
        & (np.sum(adjacency[:-1], axis=(0, 1)) != 0)
    )[0]
    features = features[keep_idx]
    adjacency = adjacency[:, keep_idx, :][:, :, keep_idx]

    # Add atoms to molecule
    for atom_type_idx in np.argmax(features, axis=1):
        atom = Chem.Atom(atom_mapping[atom_type_idx])
        _ = molecule.AddAtom(atom)

    # Add bonds between atoms in molecule; based on the upper triangles
    # of the [symmetric] adjacency tensor
    (bonds_ij, atoms_i, atoms_j) = np.where(np.triu(adjacency) == 1)
    for bond_ij, atom_i, atom_j in zip(bonds_ij, atoms_i, atoms_j):
        if atom_i == atom_j or bond_ij == BOND_DIM - 1:
            continue
        bond_type = bond_mapping[bond_ij]
        molecule.AddBond(int(atom_i), int(atom_j), bond_type)

    # Sanitize the molecule; for more information on sanitization, see
    # https://www.rdkit.org/docs/RDKit_Book.html#molecular-sanitization
    flag = Chem.SanitizeMol(molecule, catchErrors=True)
    # Let's be strict. If sanitization fails, return None
    if flag != Chem.SanitizeFlags.SANITIZE_NONE:
        return None

    return molecule


# Test helper functions
graph_to_molecule(smiles_to_graph(smiles))
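
"""
If the conversion round trip succeeded, the reconstructed molecule should match the
original (up to SMILES canonicalization). A quick optional check:
"""

reconstructed = graph_to_molecule(smiles_to_graph(smiles))
if reconstructed is not None:
    print("Original:     ", Chem.MolToSmiles(Chem.MolFromSmiles(smiles)))
    print("Reconstructed:", Chem.MolToSmiles(reconstructed))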

"""
### Generate training set

To save training time, we'll only use a tenth of the QM9 dataset.
"""

adjacency_tensor, feature_tensor = [], []
for smiles in data[::10]:
    adjacency, features = smiles_to_graph(smiles)
    adjacency_tensor.append(adjacency)
    feature_tensor.append(features)

adjacency_tensor = np.array(adjacency_tensor)
feature_tensor = np.array(feature_tensor)

print("adjacency_tensor.shape =", adjacency_tensor.shape)
print("feature_tensor.shape =", feature_tensor.shape)

"""
## Model

The idea is to implement a generator network and a discriminator network via WGAN-GP,
which will result in a generator network that can generate small novel molecules
(small graphs).

The generator network needs to be able to map (for each example in the batch) a vector `z`
to a 3-D adjacency tensor (`A`) and 2-D feature tensor (`H`). For this, `z` will first be
passed through a fully-connected network, whose output will be further passed
through two separate fully-connected networks. Each of these two fully-connected
networks will then output (for each example in the batch) a tanh-activated vector
followed by a reshape and softmax to match that of a multi-dimensional adjacency/feature
tensor.

As the discriminator network will receive as input a graph (`A`, `H`) from either the
generator or from the training set, we'll need to implement graph convolutional layers,
which allow us to operate on graphs. This means that the input to the discriminator network
will first pass through graph convolutional layers, then an average-pooling layer,
and finally a few fully-connected layers. The final output should be a scalar (for each
example in the batch) which indicates the "realness" of the associated input
(in this case a "fake" or "real" molecule).

### Graph generator
"""


def GraphGenerator(
    dense_units,
    dropout_rate,
    latent_dim,
    adjacency_shape,
    feature_shape,
):
    z = keras.layers.Input(shape=(latent_dim,))
    # Propagate through one or more densely connected layers
    x = z
    for units in dense_units:
        x = keras.layers.Dense(units, activation="tanh")(x)
        x = keras.layers.Dropout(dropout_rate)(x)

    # Map outputs of previous layer (x) to [continuous] adjacency tensors (x_adjacency)
    x_adjacency = keras.layers.Dense(tf.math.reduce_prod(adjacency_shape))(x)
    x_adjacency = keras.layers.Reshape(adjacency_shape)(x_adjacency)
    # Symmetrify tensors in the last two dimensions
    x_adjacency = (x_adjacency + tf.transpose(x_adjacency, (0, 1, 3, 2))) / 2
    x_adjacency = keras.layers.Softmax(axis=1)(x_adjacency)

    # Map outputs of previous layer (x) to [continuous] feature tensors (x_features)
    x_features = keras.layers.Dense(tf.math.reduce_prod(feature_shape))(x)
    x_features = keras.layers.Reshape(feature_shape)(x_features)
    x_features = keras.layers.Softmax(axis=2)(x_features)

    return keras.Model(inputs=z, outputs=[x_adjacency, x_features], name="Generator")


generator = GraphGenerator(
    dense_units=[128, 256, 512],
    dropout_rate=0.2,
    latent_dim=LATENT_DIM,
    adjacency_shape=(BOND_DIM, NUM_ATOMS, NUM_ATOMS),
    feature_shape=(NUM_ATOMS, ATOM_DIM),
)
generator.summary()
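
"""
As a quick sanity check, the generator should map a batch of latent vectors to a pair of
continuous tensors matching `adjacency_shape` and `feature_shape` (batch dimension first):
"""

z = tf.random.normal((2, LATENT_DIM))
adjacency_continuous, features_continuous = generator(z, training=False)
print("adjacency:", adjacency_continuous.shape)  # (2, BOND_DIM, NUM_ATOMS, NUM_ATOMS)
print("features:", features_continuous.shape)  # (2, NUM_ATOMS, ATOM_DIM)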

"""
### Graph discriminator

**Graph convolutional layer**. The
[relational graph convolutional layer](https://arxiv.org/abs/1703.06103) implements
non-linearly transformed neighborhood aggregations. We can define these layers as follows:

`H^{l+1} = σ(D^{-1} @ A @ H^{l} @ W^{l})`

Where `σ` denotes the non-linear transformation (commonly a ReLU activation), `A` the
adjacency tensor, `H^{l}` the feature tensor at the `l:th` layer, `D^{-1}` the inverse
diagonal degree tensor of `A`, and `W^{l}` the trainable weight tensor at the `l:th`
layer. Specifically, for each bond type (relation), the degree tensor expresses, in the
diagonal, the number of bonds attached to each atom. Notice, in this tutorial `D^{-1}` is
omitted for two reasons: (1) it's not obvious how to apply this normalization to the
continuous adjacency tensors (generated by the generator), and (2) the performance of the
WGAN without normalization seems to work just fine. Furthermore, in contrast to the
[original paper](https://arxiv.org/abs/1703.06103), no self-loop is defined, as we don't
want to train the generator to predict "self-bonding".
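
Concretely, in the layer below, `A` has shape `(batch, bond_dim, N, N)`, `H^{l}` has shape
`(batch, N, atom_dim)` and is broadcast across the bond-type channels, `W^{l}` has shape
`(bond_dim, atom_dim, units)`, and the per-relation outputs are summed over the bond axis,
giving an output of shape `(batch, N, units)`.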
"""


class RelationalGraphConvLayer(keras.layers.Layer):
    def __init__(
        self,
        units=128,
        activation="relu",
        use_bias=False,
        kernel_initializer="glorot_uniform",
        bias_initializer="zeros",
        kernel_regularizer=None,
        bias_regularizer=None,
        **kwargs
    ):
        super().__init__(**kwargs)

        self.units = units
        self.activation = keras.activations.get(activation)
        self.use_bias = use_bias
        self.kernel_initializer = keras.initializers.get(kernel_initializer)
        self.bias_initializer = keras.initializers.get(bias_initializer)
        self.kernel_regularizer = keras.regularizers.get(kernel_regularizer)
        self.bias_regularizer = keras.regularizers.get(bias_regularizer)

    def build(self, input_shape):
        bond_dim = input_shape[0][1]
        atom_dim = input_shape[1][2]

        self.kernel = self.add_weight(
            shape=(bond_dim, atom_dim, self.units),
            initializer=self.kernel_initializer,
            regularizer=self.kernel_regularizer,
            trainable=True,
            name="W",
            dtype=tf.float32,
        )

        if self.use_bias:
            self.bias = self.add_weight(
                shape=(bond_dim, 1, self.units),
                initializer=self.bias_initializer,
                regularizer=self.bias_regularizer,
                trainable=True,
                name="b",
                dtype=tf.float32,
            )

        self.built = True

    def call(self, inputs, training=False):
        adjacency, features = inputs
        # Aggregate information from neighbors:
        # (batch, bond, N, N) @ (batch, 1, N, atom_dim) -> (batch, bond, N, atom_dim)
        x = tf.matmul(adjacency, features[:, None, :, :])
        # Apply linear transformation:
        # (batch, bond, N, atom_dim) @ (bond, atom_dim, units) -> (batch, bond, N, units)
        x = tf.matmul(x, self.kernel)
        if self.use_bias:
            x += self.bias
        # Reduce bond types dim: (batch, bond, N, units) -> (batch, N, units)
        x_reduced = tf.reduce_sum(x, axis=1)
        # Apply non-linear transformation
        return self.activation(x_reduced)


def GraphDiscriminator(
    gconv_units, dense_units, dropout_rate, adjacency_shape, feature_shape
):
    adjacency = keras.layers.Input(shape=adjacency_shape)
    features = keras.layers.Input(shape=feature_shape)

    # Propagate through one or more graph convolutional layers
    features_transformed = features
    for units in gconv_units:
        features_transformed = RelationalGraphConvLayer(units)(
            [adjacency, features_transformed]
        )

    # Reduce 2-D representation of molecule to 1-D
    x = keras.layers.GlobalAveragePooling1D()(features_transformed)

    # Propagate through one or more densely connected layers
    for units in dense_units:
        x = keras.layers.Dense(units, activation="relu")(x)
        x = keras.layers.Dropout(dropout_rate)(x)

    # For each molecule, output a single scalar value expressing the
    # "realness" of the input molecule
    x_out = keras.layers.Dense(1, dtype="float32")(x)

    return keras.Model(inputs=[adjacency, features], outputs=x_out)


discriminator = GraphDiscriminator(
    gconv_units=[128, 128, 128, 128],
    dense_units=[512, 512],
    dropout_rate=0.2,
    adjacency_shape=(BOND_DIM, NUM_ATOMS, NUM_ATOMS),
    feature_shape=(NUM_ATOMS, ATOM_DIM),
)
discriminator.summary()

"""
### WGAN-GP
"""


class GraphWGAN(keras.Model):
    def __init__(
        self,
        generator,
        discriminator,
        discriminator_steps=1,
        generator_steps=1,
        gp_weight=10,
        **kwargs
    ):
        super().__init__(**kwargs)
        self.generator = generator
        self.discriminator = discriminator
        self.discriminator_steps = discriminator_steps
        self.generator_steps = generator_steps
        self.gp_weight = gp_weight
        self.latent_dim = self.generator.input_shape[-1]

    def compile(self, optimizer_generator, optimizer_discriminator, **kwargs):
        super().compile(**kwargs)
        self.optimizer_generator = optimizer_generator
        self.optimizer_discriminator = optimizer_discriminator
        self.metric_generator = keras.metrics.Mean(name="loss_gen")
        self.metric_discriminator = keras.metrics.Mean(name="loss_dis")

    def train_step(self, inputs):
        if isinstance(inputs[0], tuple):
            inputs = inputs[0]

        graph_real = inputs

        self.batch_size = tf.shape(inputs[0])[0]

        # Train the discriminator for one or more steps
        for _ in range(self.discriminator_steps):
            z = tf.random.normal((self.batch_size, self.latent_dim))

            with tf.GradientTape() as tape:
                graph_generated = self.generator(z, training=True)
                loss = self._loss_discriminator(graph_real, graph_generated)

            grads = tape.gradient(loss, self.discriminator.trainable_weights)
            self.optimizer_discriminator.apply_gradients(
                zip(grads, self.discriminator.trainable_weights)
            )
            self.metric_discriminator.update_state(loss)

        # Train the generator for one or more steps
        for _ in range(self.generator_steps):
            z = tf.random.normal((self.batch_size, self.latent_dim))

            with tf.GradientTape() as tape:
                graph_generated = self.generator(z, training=True)
                loss = self._loss_generator(graph_generated)

            grads = tape.gradient(loss, self.generator.trainable_weights)
            self.optimizer_generator.apply_gradients(
                zip(grads, self.generator.trainable_weights)
            )
            self.metric_generator.update_state(loss)

        return {m.name: m.result() for m in self.metrics}

    def _loss_discriminator(self, graph_real, graph_generated):
        logits_real = self.discriminator(graph_real, training=True)
        logits_generated = self.discriminator(graph_generated, training=True)
        loss = tf.reduce_mean(logits_generated) - tf.reduce_mean(logits_real)
        loss_gp = self._gradient_penalty(graph_real, graph_generated)
        return loss + loss_gp * self.gp_weight

    def _loss_generator(self, graph_generated):
        logits_generated = self.discriminator(graph_generated, training=True)
        return -tf.reduce_mean(logits_generated)

    def _gradient_penalty(self, graph_real, graph_generated):
        # Unpack graphs
        adjacency_real, features_real = graph_real
        adjacency_generated, features_generated = graph_generated

        # Generate interpolated graphs (adjacency_interp and features_interp)
        alpha = tf.random.uniform([self.batch_size])
        alpha = tf.reshape(alpha, (self.batch_size, 1, 1, 1))
        adjacency_interp = (adjacency_real * alpha) + (1 - alpha) * adjacency_generated
        alpha = tf.reshape(alpha, (self.batch_size, 1, 1))
        features_interp = (features_real * alpha) + (1 - alpha) * features_generated

        # Compute the logits of interpolated graphs
        with tf.GradientTape() as tape:
            tape.watch(adjacency_interp)
            tape.watch(features_interp)
            logits = self.discriminator(
                [adjacency_interp, features_interp], training=True
            )

        # Compute the gradients with respect to the interpolated graphs
        grads = tape.gradient(logits, [adjacency_interp, features_interp])
        # Compute the gradient penalty
        grads_adjacency_penalty = (1 - tf.norm(grads[0], axis=1)) ** 2
        grads_features_penalty = (1 - tf.norm(grads[1], axis=2)) ** 2
        return tf.reduce_mean(
            tf.reduce_mean(grads_adjacency_penalty, axis=(-2, -1))
            + tf.reduce_mean(grads_features_penalty, axis=(-1))
        )

"""
## Train the model

To save time (if run on a CPU), we'll only train the model for 10 epochs.
"""

wgan = GraphWGAN(generator, discriminator, discriminator_steps=1)

wgan.compile(
    optimizer_generator=keras.optimizers.Adam(5e-4),
    optimizer_discriminator=keras.optimizers.Adam(5e-4),
)

wgan.fit([adjacency_tensor, feature_tensor], epochs=10, batch_size=16)

"""
## Sample novel molecules with the generator
"""


def sample(generator, batch_size):
    z = tf.random.normal((batch_size, LATENT_DIM))
    graph = generator.predict(z)
    # Obtain one-hot encoded adjacency tensor
    adjacency = tf.argmax(graph[0], axis=1)
    adjacency = tf.one_hot(adjacency, depth=BOND_DIM, axis=1)
    # Remove potential self-loops from adjacency
    adjacency = tf.linalg.set_diag(adjacency, tf.zeros(tf.shape(adjacency)[:-1]))
    # Obtain one-hot encoded feature tensor
    features = tf.argmax(graph[1], axis=2)
    features = tf.one_hot(features, depth=ATOM_DIM, axis=2)
    return [
        graph_to_molecule([adjacency[i].numpy(), features[i].numpy()])
        for i in range(batch_size)
    ]


molecules = sample(wgan.generator, batch_size=48)

MolsToGridImage(
    [m for m in molecules if m is not None][:25], molsPerRow=5, subImgSize=(150, 150)
)

"""
## Concluding thoughts

**Inspecting the results**. Ten epochs of training seemed enough to generate some
decent-looking molecules! Notice, in contrast to the
[MolGAN paper](https://arxiv.org/abs/1805.11973), the uniqueness of the generated
molecules in this tutorial seems very high, which is great!

**What we've learned, and prospects**. In this tutorial, a generative model for molecular
graphs was successfully implemented, which allowed us to generate novel molecules. In the
future, it would be interesting to implement generative models that can modify existing
molecules (for instance, to optimize solubility or protein-binding of an existing
molecule). For that, however, a reconstruction loss would likely be needed, which is
tricky to implement as there's no easy and obvious way to compute similarity between two
molecular graphs.

Example available on HuggingFace

| Trained Model | Demo |
| :--: | :--: |
| [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Model-wgan%20graphs-black.svg)](https://huggingface.co/keras-io/wgan-molecular-graphs) | [![Generic badge](https://img.shields.io/badge/%F0%9F%A4%97%20Spaces-wgan%20graphs-black.svg)](https://huggingface.co/spaces/keras-io/Generating-molecular-graphs-by-WGAN-GP) |
"""