📚 The CoCalc Library - books, templates and other resources
License: OTHER
Credits: Forked from deep-learning-keras-tensorflow by Valerio Maggio
Unsupervised learning
AutoEncoders
An autoencoder is an artificial neural network used for learning efficient codings.
The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for the purpose of dimensionality reduction.
Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses. The most common unsupervised learning method is cluster analysis, which is used for exploratory data analysis to find hidden patterns or grouping in data.
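As a minimal sketch of the idea (the 784-dimensional input, e.g. flattened 28x28 MNIST images, and the 32-dimensional code are assumptions, not the exact model used later in this notebook):

```python
# Minimal fully-connected autoencoder sketch in Keras.
# Input size (784, e.g. flattened MNIST) and code size (32) are assumptions.
from keras.layers import Input, Dense
from keras.models import Model

input_dim = 784      # assumed input dimensionality
encoding_dim = 32    # size of the learned representation

inputs = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='relu')(inputs)    # encoder
decoded = Dense(input_dim, activation='sigmoid')(encoded)   # decoder

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True)
```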
Testing the Autoencoder
Sample generation with Autoencoder
Pretraining encoders
One of the powerful applications of autoencoders is using the trained encoder to generate meaningful representations of the input feature vectors.
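A rough sketch of this idea, reusing the hypothetical `inputs` and `encoded` tensors from the autoencoder sketch above:

```python
# Sketch: wrap the encoder half of the trained autoencoder in its own model
# and use it as a feature extractor. Variable names come from the sketch above.
from keras.models import Model

encoder = Model(inputs, encoded)      # maps raw inputs to the learned code
# codes = encoder.predict(x_test)     # low-dimensional representations of the data
```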
Natural Language Processing using Artificial Neural Networks
“In God we trust. All others must bring data.” – W. Edwards Deming, statistician
Word Embeddings
What?
Convert words to vectors in a high-dimensional space. Each dimension denotes an aspect such as gender or the type of object / word.
"Word embeddings" are a family of natural language processing techniques aiming at mapping semantic meaning into a geometric space. This is done by associating a numeric vector to every word in a dictionary, such that the distance (e.g. L2 distance or more commonly cosine distance) between any two vectors would capture part of the semantic relationship between the two associated words. The geometric space formed by these vectors is called an embedding space.
Why?
By converting words to vectors we build relations between words. The more similar two words are along a dimension, the closer their scores are.
Example
W(green) = (1.2, 0.98, 0.05, ...)
W(red) = (1.1, 0.2, 0.5, ...)
Here the vector values of green and red are very similar in one dimension because they are both colours. The values in the second dimension are very different because red might depict something negative in the training data while green is used for something positive.
By vectorizing words we are indirectly building different kinds of relations between them.
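A quick way to quantify this closeness is cosine similarity; the sketch below uses only the three components shown in the example above:

```python
# Sketch: cosine similarity between the (truncated) illustrative vectors above.
import numpy as np

w_green = np.array([1.2, 0.98, 0.05])   # first three components of W(green)
w_red   = np.array([1.1, 0.2, 0.5])     # first three components of W(red)

cosine_similarity = np.dot(w_green, w_red) / (np.linalg.norm(w_green) * np.linalg.norm(w_red))
print(cosine_similarity)   # values near 1 indicate closely related words
```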
Example of word2vec
using gensim
Reading blog post from data directory
Word2Vec
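A minimal gensim sketch of training Word2Vec (the toy `sentences` corpus and hyper-parameters are assumptions; in gensim >= 4.0 the `size` argument is called `vector_size`):

```python
# Sketch: training a Word2Vec model with gensim on a tokenized corpus.
from gensim.models import Word2Vec

sentences = [["deep", "learning", "with", "keras"],
             ["word", "embeddings", "capture", "semantics"]]

model = Word2Vec(sentences, size=100, window=5, min_count=1, workers=4)
# print(model.wv.most_similar("learning"))   # words closest in the embedding space
```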
Doc2Vec
Doc2Vec extrapolates the word2vec technique to documents: everything done in word2vec is done here as well, and in addition a vector is learned for each document.
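A rough gensim sketch (the toy corpus and hyper-parameters are assumptions; argument and attribute names vary slightly across gensim versions, e.g. `docvecs` became `dv` in gensim 4.0):

```python
# Sketch: gensim Doc2Vec learns a vector per document in addition to word vectors.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words=["deep", "learning", "with", "keras"], tags=["doc_0"]),
        TaggedDocument(words=["word", "embeddings", "capture", "semantics"], tags=["doc_1"])]

model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=20)
# model.docvecs["doc_0"]               # the learned vector for the first document
# model.infer_vector(["new", "text"])  # vector for an unseen document
```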
Convolutional Neural Networks for Sentence Classification
Train a convolutional network for sentiment analysis, based on "Convolutional Neural Networks for Sentence Classification" by Yoon Kim, http://arxiv.org/pdf/1408.5882v2.pdf
'CNN-non-static' gets to 82.1% after 61 epochs with the following settings: embedding_dim = 20, filter_sizes = (3, 4), num_filters = 3, dropout_prob = (0.7, 0.8), hidden_dims = 100.
'CNN-rand' gets to 78-79% after 7-8 epochs with the following settings: embedding_dim = 20, filter_sizes = (3, 4), num_filters = 150, dropout_prob = (0.25, 0.5), hidden_dims = 150.
'CNN-static' gets to 75.4% after 7 epochs with the following settings: embedding_dim = 100, filter_sizes = (3, 4), num_filters = 150, dropout_prob = (0.25, 0.5), hidden_dims = 150.
It turns out that such a small data set as "Movie reviews with one sentence per review" (Pang and Lee, 2005) requires a much smaller network than the one introduced in the original article:
the embedding dimension is only 20 (instead of 300; 'CNN-static' still requires ~100)
2 filter sizes (instead of 3)
higher dropout probabilities and
3 filters per filter size is enough for 'CNN-non-static' (instead of 100)
embedding initialization does not require pre-built Google Word2Vec data. Training Word2Vec on the same "Movie reviews" data set is enough to achieve the performance reported in the article (81.6%)
Another distinct difference is a sliding MaxPooling window of length 2 instead of MaxPooling over the whole feature map as in the article.
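Under these settings, a rough Keras sketch of the 'CNN-non-static' variant could look like the following (sequence length and vocabulary size are assumptions, and exact layer names may differ between Keras versions):

```python
# Sketch of the 'CNN-non-static' variant with the settings quoted above.
from keras.layers import (Input, Embedding, Dropout, Conv1D, MaxPooling1D,
                          Flatten, Dense, concatenate)
from keras.models import Model

sequence_length = 56     # assumed maximum sentence length
vocab_size = 20000       # assumed vocabulary size
embedding_dim = 20
filter_sizes = (3, 4)
num_filters = 3
dropout_prob = (0.7, 0.8)
hidden_dims = 100

inputs = Input(shape=(sequence_length,))
x = Embedding(vocab_size, embedding_dim, input_length=sequence_length)(inputs)
x = Dropout(dropout_prob[0])(x)

# one convolution branch per filter size, each followed by the sliding
# MaxPooling window of length 2 mentioned above
branches = []
for fs in filter_sizes:
    conv = Conv1D(num_filters, fs, activation='relu')(x)
    pool = MaxPooling1D(pool_size=2)(conv)
    branches.append(Flatten()(pool))

merged = concatenate(branches)
merged = Dropout(dropout_prob[1])(merged)
merged = Dense(hidden_dims, activation='relu')(merged)
outputs = Dense(1, activation='sigmoid')(merged)

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```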
Parameters
Model Variations. See Yoon Kim's "Convolutional Neural Networks for Sentence Classification", Section 3, for details.
Data Preparation
Building CNN Model
Another Example
Using Keras + GloVe - Global Vectors for Word Representation
Using pre-trained word embeddings in a Keras model
Reference: https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
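A condensed sketch of the approach described in that post (the GloVe file path and the toy `word_index` mapping are assumptions; normally the mapping comes from a Keras Tokenizer):

```python
# Sketch: loading pre-trained GloVe vectors into a frozen Keras Embedding layer.
import numpy as np
from keras.layers import Embedding

embedding_dim = 100
glove_path = 'glove.6B.100d.txt'          # assumed path to the downloaded GloVe file
word_index = {'movie': 1, 'review': 2}    # toy word -> index mapping

# 1. read the GloVe file into a {word: vector} dictionary
embeddings_index = {}
with open(glove_path) as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

# 2. build the embedding matrix for our vocabulary
embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))
for word, i in word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector

# 3. use it to initialize an Embedding layer whose weights stay fixed during training
embedding_layer = Embedding(len(word_index) + 1, embedding_dim,
                            weights=[embedding_matrix],
                            trainable=False)
```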