GitHub Repository: y33-j3T/Coursera-Deep-Learning
Path: blob/master/Natural Language Processing with Attention Models/Week 4 - Chatbot/model/train/events.out.tfevents.1608284936.b0ea9a627a44
[Binary TensorBoard event-record framing (brain.Event:2) omitted; the readable payload of the first records is the gin_config text summary reproduced below.]

gin_config (text summary):

#### Parameters for Adam:

    Adam.b1 = 0.9
    Adam.b2 = 0.999
    Adam.clip_grad_norm = None
    Adam.eps = 1e-05
    Adam.weight_decay_rate = 1e-05
    
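For reference, these bindings correspond to constructing the optimizer directly; a minimal sketch, assuming the `trax` package (the learning rate itself comes from the warmup_and_rsqrt_decay schedule configured further down, not from a fixed value here):

    import trax

    # Sketch only: mirrors the gin bindings above for trax.optimizers.Adam.
    optimizer = trax.optimizers.Adam(
        b1=0.9,
        b2=0.999,
        eps=1e-05,
        weight_decay_rate=1e-05,
        clip_grad_norm=None,
    )
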
#### Parameters for AddLossWeights:

    # None.
    
#### Parameters for backend:

    backend.name = 'jax'
    
#### Parameters for BucketByLength:

    BucketByLength.length_axis = 0
    BucketByLength.length_keys = None
    BucketByLength.strict_pad_on_len = False
    
#### Parameters for FastGelu:

    # None.
    
#### Parameters for FilterByLength:

    FilterByLength.length_axis = 0
    FilterByLength.length_keys = None
    
#### Parameters for LogSoftmax:

    LogSoftmax.axis = -1
    
#### Parameters for random_spans_helper:

    # None.
    
#### Parameters for layers.SelfAttention:

    layers.SelfAttention.attention_dropout = 0.0
    layers.SelfAttention.bias = False
    layers.SelfAttention.chunk_len = None
    layers.SelfAttention.masked = False
    layers.SelfAttention.n_chunks_after = 0
    layers.SelfAttention.n_chunks_before = 0
    layers.SelfAttention.n_parallel_heads = None
    layers.SelfAttention.predict_drop_len = None
    layers.SelfAttention.predict_mem_len = None
    layers.SelfAttention.share_qk = False
    layers.SelfAttention.use_python_loop = False
    layers.SelfAttention.use_reference_code = False
    
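These are the gin-configurable arguments of Trax's chunked self-attention layer; a hedged sketch of instantiating it with the values above (model-specific arguments such as the number of heads are not recorded in this config and are left at their defaults):

    import trax.layers as tl

    # Sketch only: just the gin-configured arguments are passed explicitly.
    attention = tl.SelfAttention(
        attention_dropout=0.0,
        bias=False,
        chunk_len=None,
        masked=False,
        n_chunks_after=0,
        n_chunks_before=0,
        n_parallel_heads=None,
        predict_drop_len=None,
        predict_mem_len=None,
        share_qk=False,
        use_python_loop=False,
        use_reference_code=False,
    )
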
#### Parameters for SentencePieceVocabulary:

    # None.
    
#### Parameters for Serial:

    # None.
    
#### Parameters for Shuffle:

    Shuffle.queue_size = 1024
    
#### Parameters for data.Tokenize:

    # None.
    
#### Parameters for tf_inputs.Tokenize:

    tf_inputs.Tokenize.keys = None
    tf_inputs.Tokenize.n_reserved_ids = 0
    tf_inputs.Tokenize.vocab_type = 'subword'
    
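Tokenize, FilterByLength, BucketByLength, Shuffle and AddLossWeights above are Trax data-pipeline combinators; a hedged sketch of how such a pipeline is typically composed with trax.data.Serial (the vocabulary file, length limit, bucket boundaries and batch sizes are illustrative assumptions, not values recorded in this file):

    import trax

    # Sketch only: placeholder lengths, buckets and vocabulary file.
    data_pipeline = trax.data.Serial(
        trax.data.Shuffle(queue_size=1024),
        trax.data.Tokenize(vocab_type='subword',
                           vocab_file='en_32k.subword',  # placeholder
                           n_reserved_ids=0),
        trax.data.FilterByLength(max_length=2048),        # placeholder limit
        trax.data.BucketByLength(boundaries=[128, 256, 512, 1024],  # placeholder
                                 batch_sizes=[16, 8, 4, 2, 1]),     # placeholder
        trax.data.AddLossWeights(),
    )
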
#### Parameters for Vocabulary:

    # None.
    
#### Parameters for warmup_and_rsqrt_decay:

    # None.
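
No parameters are bound here, but the name refers to Trax's warmup-then-inverse-square-root-decay learning-rate schedule; a hedged sketch of calling it (the warmup length and peak value are illustrative assumptions, not recorded in this file):

    from trax.supervised import lr_schedules

    # Sketch only: 1000 warmup steps and a 0.01 peak value are assumptions.
    lr_schedule = lr_schedules.warmup_and_rsqrt_decay(
        n_warmup_steps=1000, max_value=0.01)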

[The remaining binary records carry scalar summaries tagged metrics/CrossEntropyLoss, training/learning_rate, training/steps per second, training/gradients_l2, training/loss, and training/weights_l2; their float values are not recoverable from this text rendering.]
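
To inspect the file programmatically rather than as raw bytes, it can be read as a stream of Event protos; a minimal sketch, assuming TensorFlow is installed and the events file is in the working directory:

    import tensorflow as tf

    path = 'events.out.tfevents.1608284936.b0ea9a627a44'  # this file

    # Each record in a TensorBoard events file is a serialized Event proto;
    # this prints the step and tag of every summary value it contains.
    for event in tf.compat.v1.train.summary_iterator(path):
        for value in event.summary.value:
            print(event.step, value.tag)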