This situation presents two challenges: computing an overall F1 score when you only have per-batch values, and doing so in a multilabel setting where micro, macro, and weighted F1 scores are all expected. What a journey!
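The trick is that per-batch F1 scores don't average correctly, but per-batch counts do: accumulate true positives, false positives, and false negatives across batches, then compute F1 at the end. Here's a minimal sketch with NumPy; the helper names are mine, and it assumes binary indicator arrays of shape `(batch, n_labels)`.

```python
import numpy as np

def batch_counts(y_true, y_pred):
    """Per-label TP, FP, FN counts for one batch of binary indicator arrays."""
    tp = np.logical_and(y_true, y_pred).sum(axis=0)
    fp = np.logical_and(np.logical_not(y_true), y_pred).sum(axis=0)
    fn = np.logical_and(y_true, np.logical_not(y_pred)).sum(axis=0)
    return tp, fp, fn

def f1_from_counts(tp, fp, fn, average="micro"):
    """Micro / macro / weighted F1 from accumulated per-label counts."""
    if average == "micro":
        # pool counts over all labels, then compute a single F1
        tp, fp, fn = tp.sum(), fp.sum(), fn.sum()
        return 2 * tp / (2 * tp + fp + fn)
    # per-label F1 (guard against empty labels with a 0/1 denominator floor)
    per_label = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
    if average == "macro":
        return per_label.mean()
    # weighted: mean of per-label F1, weighted by label support (tp + fn)
    support = tp + fn
    return (per_label * support).sum() / support.sum()
```

Because the counts are additive, summing `batch_counts` over all batches and calling `f1_from_counts` once gives exactly the same result as a single pass over the full dataset.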
Splitting a multilabel dataset into train and test sets is more complicated than the single-label case: you can't simply stratify on one class at a time, because each sample carries several labels at once. You have to be more clever and stratify across all labels jointly; here's how.
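To give a feel for joint stratification, here is a toy greedy sketch (not a full iterative stratification algorithm): each sample goes to whichever split is currently furthest below its per-label targets for that sample's labels. The function name and structure are mine, for illustration only.

```python
import numpy as np

def greedy_multilabel_split(Y, test_frac=0.2, seed=0):
    """Greedy stratified split sketch for a binary (n_samples, n_labels) array Y.

    Each sample is assigned to the split whose remaining per-label
    deficit (desired counts minus current counts) is largest over
    that sample's active labels.
    """
    rng = np.random.default_rng(seed)
    n, n_labels = Y.shape
    totals = Y.sum(axis=0)
    desired = {"train": (1 - test_frac) * totals, "test": test_frac * totals}
    counts = {"train": np.zeros(n_labels), "test": np.zeros(n_labels)}
    splits = {"train": [], "test": []}
    for i in rng.permutation(n):
        active = Y[i].astype(bool)
        # how far each split still is from its label targets, on this sample's labels
        deficit = {s: (desired[s] - counts[s])[active].sum() for s in splits}
        target = max(deficit, key=deficit.get)
        splits[target].append(i)
        counts[target] += Y[i]
    return np.array(splits["train"]), np.array(splits["test"])
```

In practice you'd reach for a proper implementation such as `iterative_train_test_split` from scikit-multilearn, but the deficit-chasing idea above is the core intuition.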
The SavedModel API makes saving easy. Restoring the model and running inference is a bit trickier when the input Tensors come from a tf.data.Dataset. We'll see here how it works.
Here is my personal setup: links, descriptions, and configuration, from ZSH to Pyenv via Spaceship and Tmux.
Here is my second AMI, upgraded from the previous one with a lot of what you need.
Use my vict0rsch-2.0 AMI on a p2.xlarge instance
The broad strokes that young cousins and old uncles alike can understand, to be a bit more careful on the internet. English version some day.
Using a personal AMI, I'll go through every single step, from zero (you don't even need to know what an instance is) to running a TensorFlow example on GPU, including how to connect to the instance.
I will show you the world... Kidding. Let's tool up for TensorFlow, Keras, etc. STATUS -> being written, be patient.
Theano and TensorFlow don't handle computations the way most libraries do: they build a symbolic graph first and only evaluate it later. They are millennials, so they feel the need to be special.
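The idea of deferred evaluation can be shown without either library. Below is a toy expression graph in plain Python, entirely my own construction: writing `z = x * y + x` builds a graph but computes nothing; the result only exists once you evaluate with concrete values, much like Theano's symbolic variables or TF1 placeholders fed through a session.

```python
class Node:
    """A node in a toy deferred-computation graph."""
    def __init__(self, fn, parents=()):
        self.fn, self.parents = fn, parents

    def __add__(self, other):
        return Node(lambda env: self.eval(env) + other.eval(env), (self, other))

    def __mul__(self, other):
        return Node(lambda env: self.eval(env) * other.eval(env), (self, other))

    def eval(self, env):
        # evaluation only happens here, on demand
        return self.fn(env)

def placeholder(name):
    """A named symbolic input, resolved from `env` at evaluation time."""
    return Node(lambda env: env[name])

x = placeholder("x")
y = placeholder("y")
z = x * y + x                     # builds a graph; computes nothing yet
print(z.eval({"x": 3, "y": 4}))   # only now is the result computed → 15
```

Real frameworks add much more on top (shape checking, graph optimization, automatic differentiation), but the separation between graph construction and graph execution is exactly this.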