GitHub Repository: y33-j3T/Coursera-Deep-Learning
Path: blob/master/Sequence Models/Week 1/Jazz improvisation with LSTM/__pycache__/data_utils.cpython-36.pyc

from music_utils import *
from preprocess import *
from keras.utils import to_categorical

# Load the raw musical data and build the corpus of tones
chords, abstract_grammars = get_musical_data('data/original_metheny.mid')
corpus, tones, tones_indices, indices_tones = get_corpus_data(abstract_grammars)
N_tones = len(set(corpus))

# Dimensions used by the model: n_a hidden units, 78 unique tone classes
n_a = 64
x_initializer = np.zeros((1, 1, 78))
a_initializer = np.zeros((1, n_a))
c_initializer = np.zeros((1, n_a))


def load_music_utils():
    chords, abstract_grammars = get_musical_data('data/original_metheny.mid')
    corpus, tones, tones_indices, indices_tones = get_corpus_data(abstract_grammars)
    N_tones = len(set(corpus))
    X, Y, N_tones = data_processing(corpus, tones_indices, 60, 30)
    return (X, Y, N_tones, indices_tones)
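
# A minimal usage sketch (hypothetical call, not part of the original module).
# load_music_utils() returns one-hot training data: with the values used above
# (60 snippets, 30 time steps, 78 tone classes), X should have shape (60, 30, 78).
#
#   X, Y, N_tones, indices_tones = load_music_utils()
#   print('shape of X:', X.shape)          # expected: (60, 30, 78)
#   print('number of unique tones:', N_tones)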


def generate_music(inference_model, corpus=corpus, abstract_grammars=abstract_grammars, tones=tones,
                   tones_indices=tones_indices, indices_tones=indices_tones, T_y=10, max_tries=1000,
                   diversity=0.5):
    """
    Generates music using a model trained to learn the musical patterns of a jazz soloist. Creates an audio stream
    to save the music and play it.

    Arguments:
    inference_model -- Keras model instance, output of djmodel()
    corpus -- musical corpus, list of 193 tones as strings (ex: 'C,0.333,<P1,d-5>')
    abstract_grammars -- list of grammars; one element can be: 'S,0.250,<m2,P-4> C,0.250,<P4,m-2> A,0.250,<P4,m-2>'
    tones -- set of unique tones; ex: 'A,0.250,<M2,d-4>' is one element of the set
    tones_indices -- python dictionary mapping each unique tone (ex: 'A,0.250,<m2,P-4>') to its index (0-77)
    indices_tones -- python dictionary mapping each index (0-77) to its unique tone (ex: 'A,0.250,<m2,P-4>')
    T_y -- integer, number of time steps generated per sequence
    max_tries -- integer, maximum number of generation attempts
    diversity -- scalar value, defines how conservative/creative the model is when generating music

    Returns:
    out_stream -- music21 stream containing the generated sounds
    """
    # Set up the audio stream
    out_stream = stream.Stream()

    curr_offset = 0.0                        # offset at which sounds are written to the stream
    num_chords = int(len(chords) / 3)        # number of different sets of chords

    print("Predicting new values for different set of chords.")
    # For each set of chords, generate a sequence of tones and convert it,
    # together with the current chords, into actual sounds.
    for i in range(1, num_chords):

        # Retrieve the current set of chords
        curr_chords = stream.Voice()
        for j in chords[i]:
            curr_chords.insert((j.offset % 4), j)

        # Generate a sequence of tones using the model
        _, indices = predict_and_sample(inference_model)
        indices = list(indices.squeeze())
        pred = [indices_tones[p] for p in indices]

        predicted_tones = 'C,0.25 '
        for k in range(len(pred) - 1):
            predicted_tones += pred[k] + ' '
        predicted_tones += pred[-1]

        # Post-processing: treat 'A' and 'X' tones as 'C' (a common choice)
        predicted_tones = predicted_tones.replace(' A', ' C').replace(' X', ' C')

        # Pruning #1: smooth the measure
        predicted_tones = prune_grammar(predicted_tones)

        # Use the predicted tones and the current chords to generate sounds
        sounds = unparse_grammar(predicted_tones, curr_chords)

        # Pruning #2: remove repeated or too-close-together sounds
        sounds = prune_notes(sounds)

        # Quality assurance: clean up the sounds
        sounds = clean_up_notes(sounds)

        print('Generated %s sounds using the predicted values for the set of chords ("%s") and after pruning'
              % (len([k for k in sounds if isinstance(k, note.Note)]), i))

        # Insert the sounds and the chords into the output stream
        for m in sounds:
            out_stream.insert(curr_offset + m.offset, m)
        for mc in curr_chords:
            out_stream.insert(curr_offset + mc.offset, mc)

        curr_offset += 4.0

    # Set the tempo of the output stream to 130 beats per minute
    out_stream.insert(0.0, tempo.MetronomeMark(number=130))

    # Save the audio stream to a MIDI file
    mf = midi.translate.streamToMidiFile(out_stream)
    mf.open("output/my_music.midi", 'wb')
    mf.write()
    print("Your generated music is saved in output/my_music.midi")
    mf.close()

    return out_stream
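
# A minimal usage sketch (hypothetical; assumes an `inference_model` built and
# trained as in the accompanying notebook, e.g. via djmodel() and an
# inference-model helper wired to the same trained LSTM cell):
#
#   out_stream = generate_music(inference_model)
#   # -> writes output/my_music.midi and returns the music21 stream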


def predict_and_sample(inference_model, x_initializer=x_initializer, a_initializer=a_initializer,
                       c_initializer=c_initializer):
    """
    Predicts the next sequence of values using the inference model.
    
    Arguments:
    inference_model -- Keras model instance for inference time
    x_initializer -- numpy array of shape (1, 1, 78), one-hot vector initializing the values generation
    a_initializer -- numpy array of shape (1, n_a), initializing the hidden state of the LSTM_cell
    c_initializer -- numpy array of shape (1, n_a), initializing the cell state of the LSTM_cell
    Ty -- length of the generated sequence, fixed by the inference model
    
    Returns:
    results -- numpy-array of shape (Ty, 78), matrix of one-hot vectors representing the values generated
    indices -- numpy-array of shape (Ty, 1), matrix of indices representing the values generated
    """
    # Run the inference model on the initial values, take the most likely tone
    # at each time step, and re-encode the chosen indices as one-hot vectors.
    pred = inference_model.predict([x_initializer, a_initializer, c_initializer])
    indices = np.argmax(pred, axis=-1)
    results = to_categorical(indices, num_classes=78)

    return results, indices
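
# A minimal usage sketch (hypothetical; `inference_model` must be a Keras model
# taking [x, a, c] as inputs, as described in the docstring above):
#
#   results, indices = predict_and_sample(inference_model, x_initializer,
#                                         a_initializer, c_initializer)
#   print('np.argmax(results[12]) =', np.argmax(results[12]))
#   print('indices[12] =', indices[12])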