
GitHub Repository: y33-j3T/Coursera-Deep-Learning
Path: blob/master/Convolutional Neural Networks/dummy/__pycache__/resnets_utils.cpython-36.pyc

# resnets_utils.py: TF 1.x training/evaluation helpers for the ResNet exercise,
# plus the SIGNS (train_signs.h5 / test_signs.h5) dataset utilities.
import os
import math

import h5py
import numpy as np
import tensorflow as tf
from tqdm import tqdm

import cifar10  # local module providing get_steps_per_epoch()

LOG_DIR = '/output/log/'

def train(loss_op, train_op, accuracy, update_op, num_epochs, batch_size):
    """
    Train the network.
        Args:
            loss_op: Tensorflow op to run to compute loss.
            train_op: Tensorflow op to run to train network.
            accuracy: Tensor of current accuracy of the model.
            update_op: Tensorflow op needed to update accuracy
            num_epochs: Number of epochs to train for.
            batch_size: Batch size
    """
    global_op = tf.global_variables_initializer()
    local_op = tf.local_variables_initializer()
    init_op = tf.group(global_op, local_op)
    saver = tf.train.Saver()

    # Start from a clean log directory on every run.
    if tf.gfile.Exists(LOG_DIR):
        tf.gfile.DeleteRecursively(LOG_DIR)
    tf.gfile.MakeDirs(LOG_DIR)

    steps_per_epoch = cifar10.get_steps_per_epoch('train', batch_size)

    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        tf.train.start_queue_runners(sess=sess, coord=coord)
        sess.run(init_op)
        for step in tqdm(range(num_epochs * steps_per_epoch)):
            _, loss, _ = sess.run([train_op, loss_op, update_op])
            if step % steps_per_epoch == 0 and step > 0:
                loss, acc = sess.run([loss_op, accuracy])
                print('Finished epoch {}.'.format(int(step / steps_per_epoch)))
                print('Current loss: {} Current train accuracy: {}'.format(loss, acc))
                saver.save(sess, os.path.join(LOG_DIR, 'resnet'), global_step=step)
        coord.request_stop()

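# Usage sketch (hypothetical): train() only needs the four ops plus the epoch count
# and batch size; the graph-building helper named below is an assumption, not part
# of this module.
#
#   loss_op, train_op, accuracy, update_op = build_resnet_graph(batch_size=128)
#   train(loss_op, train_op, accuracy, update_op, num_epochs=10, batch_size=128)
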
def test(accuracy, update_op, batch_size):
    """
    Test the network.
        Args:
            accuracy: Tensor of current accuracy of the model.
            update_op: Tensorflow op needed to update accuracy
            batch_size: Batch size
    """
    global_op = tf.global_variables_initializer()
    local_op = tf.local_variables_initializer()
    init_op = tf.group(global_op, local_op)
    saver = tf.train.Saver()
    model_checkpoint_path = tf.train.latest_checkpoint(LOG_DIR)
    steps_per_epoch = cifar10.get_steps_per_epoch('test', batch_size)

    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        tf.train.start_queue_runners(sess=sess, coord=coord)
        sess.run(init_op)
        saver.restore(sess, model_checkpoint_path)
        # One full pass over the test set, accumulating the streaming accuracy.
        for step in tqdm(range(steps_per_epoch)):
            sess.run([update_op])
        acc = sess.run([accuracy])
        print('Finished passing over the test set.')
        print('Test accuracy: {}'.format(acc))
        coord.request_stop()

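# Usage sketch (hypothetical): test() restores the latest checkpoint that train()
# wrote to LOG_DIR, so it assumes train() has already run against the same graph.
# The evaluation-graph helper named below is an assumption.
#
#   accuracy, update_op = build_resnet_eval_graph(batch_size=128)
#   test(accuracy, update_op, batch_size=128)
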
r2cstd�tj�}g}x:|D]2}tjtjjd|d�}dd�|D�}|j|�qW�fdd�|D�}tjt	|�|f�}	tj
���}
tjj�}tjj
|
|d�|
j|�xXtt|��D]H}|
j|�|g�}
|
dd
�}x$tt	|��D]}|||	||<q�Wq�W|j�WdQRX|	S)a�
    Construct a queued batch of images and labels.
        Args:
            loss_op: Tensorflow op to run to compute loss.
            train_op: Tensorflow op to run to train network.
            steps: Number of steps to train.
            layers: List of ints. indicating for which layers to
                    get the gradients
        Returns:
            grads_list: List of np.arrays. Gradients computed for
                        specificed layers for every train step

    z&Start training to collect gradients...z	layer_%d/)�scopecSsg|]}d|jkr|�qS)�kernel)�name)�.0�vr0r0r1�
<listcomp>dsz&get_gradient_norms.<locals>.<listcomp>cs(g|] }tjdd�tj�|�D���qS)cSsg|]}tj|��qSr0)r�abs)r9�gr0r0r1r;fsz1get_gradient_norms.<locals>.<listcomp>.<listcomp>)r�reduce_mean�	gradients)r9Zvars_j)r!r0r1r;fs)rrN������)rrr�get_collection�	GraphKeys�TRAINABLE_VARIABLES�append�np�zeros�lenrrrrrrrr )r!r"�steps�layersr)�	variables�layerZvars_iZ	grads_opsZ
grads_listrrr,�results�grads�ir0)r!r1�get_gradient_normsOs,





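# Usage sketch (hypothetical): collect per-layer gradient magnitudes for layers that
# were built under variable scopes named 'layer_0/', 'layer_1/', ...; the op names
# below are assumptions.
#
#   grads = get_gradient_norms(loss_op, train_op, steps=500, layers=[0, 5, 10])
#   # grads has shape (len(layers), steps); e.g. plot grads[i] to compare layers.
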
def load_dataset():
    train_dataset = h5py.File('datasets/train_signs.h5', 'r')
    train_set_x_orig = np.array(train_dataset['train_set_x'][:])  # train set features
    train_set_y_orig = np.array(train_dataset['train_set_y'][:])  # train set labels

    test_dataset = h5py.File('datasets/test_signs.h5', 'r')
    test_set_x_orig = np.array(test_dataset['test_set_x'][:])  # test set features
    test_set_y_orig = np.array(test_dataset['test_set_y'][:])  # test set labels

    classes = np.array(test_dataset['list_classes'][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes


def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples) (m, Hi, Wi, Ci)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) (m, n_y)
    mini_batch_size -- size of the mini-batches, integer
    seed -- this is only for the purpose of grading, so that your "random" minibatches are the same as ours.

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    m = X.shape[0]  # number of training examples
    mini_batches = []
    np.random.seed(seed)

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[permutation, :, :, :]
    shuffled_Y = Y[permutation, :]

    # Step 2: Partition (shuffled_X, shuffled_Y), minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[k * mini_batch_size:(k + 1) * mini_batch_size, :, :, :]
        mini_batch_Y = shuffled_Y[k * mini_batch_size:(k + 1) * mini_batch_size, :]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handle the end case (last mini-batch smaller than mini_batch_size).
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[num_complete_minibatches * mini_batch_size:m, :, :, :]
        mini_batch_Y = shuffled_Y[num_complete_minibatches * mini_batch_size:m, :]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches


def convert_to_one_hot(Y, C):
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y


def forward_propagation_for_predict(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    Z1 = tf.add(tf.matmul(W1, X), b1)   # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                 # A1 = relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)  # Z2 = np.dot(W2, A1) + b2
    A2 = tf.nn.relu(Z2)                 # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)  # Z3 = np.dot(W3, A2) + b3

    return Z3


def predict(X, parameters):
    W1 = tf.convert_to_tensor(parameters['W1'])
    b1 = tf.convert_to_tensor(parameters['b1'])
    W2 = tf.convert_to_tensor(parameters['W2'])
    b2 = tf.convert_to_tensor(parameters['b2'])
    W3 = tf.convert_to_tensor(parameters['W3'])
    b3 = tf.convert_to_tensor(parameters['b3'])

    params = {'W1': W1, 'b1': b1,
              'W2': W2, 'b2': b2,
              'W3': W3, 'b3': b3}

    x = tf.placeholder('float', [12288, 1])

    z3 = forward_propagation_for_predict(x, params)
    p = tf.argmax(z3)

    sess = tf.Session()
    prediction = sess.run(p, feed_dict={x: X})

    return prediction
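
# Minimal demo of the SIGNS-dataset helpers, assuming the course's datasets/*.h5
# files sit next to this module. Guarded so that importing resnets_utils stays
# side-effect free.
if __name__ == '__main__':
    X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

    # Normalise images and one-hot encode the 6 SIGNS classes.
    X_train = X_train_orig / 255.
    Y_train = convert_to_one_hot(Y_train_orig, 6).T

    # Shuffle and split into mini-batches of 64 examples (the default size).
    batches = random_mini_batches(X_train, Y_train, mini_batch_size=64, seed=0)
    mb_X, mb_Y = batches[0]
    print('number of mini-batches:', len(batches))
    print('first mini-batch shapes:', mb_X.shape, mb_Y.shape)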