Tag: TensorFlow

Posts

10 May 2018 /
TensorFlow Serving can handle a variable batch size when doing predictions. I never understood how to configure this, nor the shape of the results returned. Having finally figured it out, here are the changes to our previous serving setup to accept a variable number of images to classify with our model. Serving input function: the first thing is to update our serving input receiver function placeholder. In the past we set the placeholder to a shape of [1]; for a variable batch size, it's as easy as setting it to [None].
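As a rough sketch of what that looks like, the snippet below is a serving input receiver function that takes a variable-length batch of encoded image bytes; the tensor name `image_bytes`, the feature key `input` and the 150x150 resize are illustrative assumptions, not values from the original post.

```python
import tensorflow as tf

def serving_input_receiver_fn():
    # shape=[None] (instead of the earlier shape=[1]) lets the client send
    # any number of serialized images per request.
    serialized = tf.placeholder(dtype=tf.string, shape=[None], name='image_bytes')

    # Decode and resize each image in the batch independently.
    images = tf.map_fn(
        lambda img: tf.image.resize_images(
            tf.image.decode_jpeg(img, channels=3), [150, 150]),
        serialized, dtype=tf.float32)

    return tf.estimator.export.ServingInputReceiver(
        features={'input': images},
        receiver_tensors={'image_bytes': serialized})
```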
Here we’ll look at exporting our previously trained dog and cat classifier and calling it with local or remote files to test it out. To do this, I’ll run TensorFlow Serving in a Docker container and use a Python client to call the remote host. _Update 12th June, 2018: I used the gRPC interface here, but TensorFlow Serving now has a REST API that could be beneficial or of more interest._
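For reference, a minimal Python gRPC client along these lines might look like the sketch below; the host, port, model name and input key are placeholders rather than the values used in the post.

```python
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

def predict(image_path, host='localhost', port=8500, model_name='dogs_vs_cats'):
    # Read the raw bytes of a local (or downloaded) image file.
    with open(image_path, 'rb') as f:
        image_bytes = f.read()

    channel = grpc.insecure_channel('%s:%d' % (host, port))
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = model_name
    request.model_spec.signature_name = 'serving_default'
    # 'image_bytes' must match the receiver tensor name used when exporting.
    request.inputs['image_bytes'].CopyFrom(
        tf.make_tensor_proto([image_bytes], shape=[1]))

    # 10 second deadline for the RPC.
    return stub.Predict(request, 10.0)
```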
21 April 2018 /
This notebook is available as a codelab. TensorFlow Hub was announced at the TensorFlow Dev Summit 2018 and promises to reduce the effort required to use existing machine learning models and weights in your own custom model. From the overview page: TensorFlow Hub is a library to foster the publication, discovery, and consumption of reusable parts of machine learning models. A module is a self-contained piece of a TensorFlow graph, along with its weights and assets, that can be reused across different tasks in a process known as transfer learning.
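At the time of writing, consuming a published module looked roughly like the sketch below (the TF 1.x hub.Module API); the particular module URL and the two-class dense head are illustrative choices, not taken from the post.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Any compatible image feature-vector module handle can be used here.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/2")
height, width = hub.get_expected_image_size(module)

images = tf.placeholder(tf.float32, shape=[None, height, width, 3])
features = module(images)  # [batch_size, feature_dim] pre-trained features

# Transfer learning: train only a small classification head on top.
logits = tf.layers.dense(features, units=2)
```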
15 January 2018 / / Machine Learning / Azure
Azure Batch AI gives us a PaaS option for using GPU resources in the cloud. The idea is to use virtual machines in a managed cluster (i.e. you don’t have to maintain them) and run jobs on it as you see fit. For my use case, the option of low-priority VMs to reduce the cost of GPU machines is also particularly promising. What I’ll run through here is running our first job on Azure Batch AI.
02 January 2018 / / Machine Learning
The TensorFlow canned estimators were promoted to core in version 1.3 to make training and evaluation of machine learning models very easy. This API allows you to describe your input data (categorical, numeric, embedding, etc.) through the use of feature columns. The estimator API also allows you to write a custom model for your unique job, and feature columns can be utilised there as well to simplify or enhance things.
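As a rough sketch of the idea, the snippet below wires a couple of feature columns into a canned DNNClassifier; the column names, vocabulary and network sizes are made up for illustration.

```python
import tensorflow as tf

# Describe the input data through feature columns.
age = tf.feature_column.numeric_column('age')
breed = tf.feature_column.categorical_column_with_vocabulary_list(
    'breed', vocabulary_list=['labrador', 'poodle', 'tabby', 'siamese'])
breed_embedding = tf.feature_column.embedding_column(breed, dimension=4)

# A canned estimator consumes the columns directly.
estimator = tf.estimator.DNNClassifier(
    feature_columns=[age, breed_embedding],
    hidden_units=[32, 16],
    n_classes=2)

# estimator.train(input_fn=train_input_fn)
# estimator.evaluate(input_fn=eval_input_fn)
```

The same feature columns can also be reused inside a custom model_fn via tf.feature_column.input_layer.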
06 December 2017 /
Understanding the shape of your model is sometimes non-trivial when it comes to machine learning. Look at convolutional neural nets, with their numbers of filters, padding and kernel sizes, and it’s quickly evident why understanding the shapes of your inputs and outputs will keep you sane and reduce the time spent digging into strange errors. TensorFlow’s RNN API exposed me to similar frustrations and misunderstandings about what I was expected to give it and what I was getting in return.
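To make those shapes concrete, here is a small sketch using tf.nn.dynamic_rnn (TF 1.x); the time, feature and unit dimensions are arbitrary example values.

```python
import tensorflow as tf

max_time, num_features, num_units = 20, 8, 64

# Inputs to dynamic_rnn default to batch-major: [batch_size, max_time, num_features].
inputs = tf.placeholder(tf.float32, shape=[None, max_time, num_features])

cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

# outputs: [batch_size, max_time, num_units] -- one output per time step.
# state:   an LSTMStateTuple (c, h), each [batch_size, num_units].
print(outputs.shape)  # (?, 20, 64)
```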