
PyTorch vs TensorFlow

by Attune World Wide

This blog is all about the main differences I’ve found between PyTorch and TensorFlow. It is intended to be useful for anyone considering starting a new project or switching from one deep learning framework to the other. The focus is on programmability and flexibility when setting up the components of the training and deployment deep learning stack.

PyTorch is popular for its dynamic computational graph and efficient memory usage. A dynamic graph is a very good fit for specific use cases such as working with text, where input lengths vary from example to example.
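To make that concrete, here is a minimal sketch (the RNN cell and sizes are illustrative, not from this post) of why a dynamic graph suits variable-length text: the ordinary Python loop below runs for a different number of steps per input, and PyTorch simply records whatever operations actually execute.

```python
import torch
import torch.nn as nn

# A single recurrent cell; input and hidden sizes are arbitrary here.
rnn_cell = nn.RNNCell(input_size=8, hidden_size=16)

def encode(tokens):
    # `tokens` is a (seq_len, 8) tensor; seq_len can differ per example.
    h = torch.zeros(1, 16)
    for t in range(tokens.size(0)):      # ordinary Python control flow
        h = rnn_cell(tokens[t].unsqueeze(0), h)
    return h

short = encode(torch.randn(3, 8))    # 3 time steps
long = encode(torch.randn(12, 8))    # 12 time steps, same code, new graph
```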

TensorFlow was written mainly in C++ and CUDA, NVIDIA’s own language for programming GPUs. With TensorFlow, you are not restricted to Python. Even though the language syntax differs a bit between bindings, the concepts remain the same.

So let’s start the battle between PyTorch and TensorFlow:

PyTorch is like an advanced version of NumPy that can use the power of GPUs, with added functionality for building and training deep neural networks. This makes PyTorch easy to learn if you already know NumPy.
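As a quick illustration of that NumPy-like feel, here is a small sketch (the shapes and values are arbitrary, assumed only for the example): the same operations run on the CPU, and moving the tensors to the GPU is the only change needed.

```python
import torch

x = torch.rand(3, 4)           # analogous to np.random.rand(3, 4)
y = torch.ones(3, 4)
z = (x + y).mean()             # broadcasting and reductions work as in NumPy

if torch.cuda.is_available():  # the only change needed for GPU execution
    x = x.cuda()
    y = y.cuda()
    z = (x + y).mean()
```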

On the other hand, TensorFlow feels like a programming language embedded within Python. When you write TensorFlow code, it is compiled into a graph by Python and then run by the TensorFlow execution engine. TensorFlow has a few extra concepts to learn, such as the session, the graph, and variable scoping, and more boilerplate code is needed to get a basic model running.
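For comparison, here is a minimal sketch of those extra concepts using the TensorFlow 1.x-style API this post describes (the placeholder shapes and values are illustrative): the graph is defined first, and nothing runs until the session executes it.

```python
import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[None])  # graph construction only;
b = tf.placeholder(tf.float32, shape=[None])  # nothing runs yet
c = a + b

with tf.Session() as sess:                    # execution happens here
    result = sess.run(c, feed_dict={a: [1.0, 2.0], b: [3.0, 4.0]})
    print(result)                             # [4. 6.]
```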

  • The ramp-up time to get going with PyTorch is definitely shorter than with TensorFlow.

PyTorch is the younger framework, and there is a fair amount of functionality that TensorFlow supports and PyTorch doesn’t. Some features that PyTorch lacks:

  • Tensor flipping along a dimension (np.flip, np.flipud, np.fliplr)
  • Checking a tensor for NaN and infinity (np.isnan, np.isinf)
  • Fast Fourier transforms (np.fft)

All of these are supported in TensorFlow, and the TensorFlow contrib package has many more functions and models than PyTorch.

  • Graph construction is dynamic in PyTorch, meaning the graph is built at run time. TensorFlow graph construction is static: the graph is “compiled” first and then run.
  • Debugging PyTorch code is like debugging ordinary Python code: you can use pdb and set a breakpoint anywhere, as in the sketch after this list. Debugging TensorFlow code is not as easy. There are two choices: request the variables you want to inspect from the session, or learn and use the TensorFlow debugger.
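Here is a minimal sketch of that debugging workflow (the tiny model is hypothetical, there only to host a breakpoint): because PyTorch builds the graph as the code runs, a pdb breakpoint inside forward sees real tensor values.

```python
import pdb
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        h = self.fc(x)
        # pdb.set_trace()  # uncomment to inspect `h` interactively
        return torch.relu(h)

net = TinyNet()
out = net(torch.randn(1, 4))  # the breakpoint fires during this call
```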

PyTorch has a simple API that can either save all the weights of a model or pickle the entire class. The TensorFlow Saver object is also easy to use and exposes a few more options for checkpointing. The advantage TensorFlow has in serialization is that the entire graph can be stored as a protocol buffer, and that graph can then be loaded in other supported languages (C++, Java) as well.
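Here is a sketch of the two PyTorch checkpointing styles just mentioned (the model and file names are made up for the example); the TensorFlow counterpart the paragraph refers to is the 1.x-era tf.train.Saver object.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# Option 1: save just the weights (the state_dict).
torch.save(model.state_dict(), "weights.pt")
restored = nn.Linear(4, 2)                    # must rebuild the same shape
restored.load_state_dict(torch.load("weights.pt"))

# Option 2: pickle the entire module, class and all.
torch.save(model, "full_model.pt")
same_model = torch.load("full_model.pt")
```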

So TensorFlow has the edge here.

In PyTorch, the data-loading interfaces are specified in a dataset, a sampler, and a data loader. A data loader takes a dataset and a sampler and produces an iterator over the dataset according to the sampler’s schedule. Data loading in TensorFlow is more awkward, in part because adding all the preprocessing code you want to run in parallel into the TensorFlow graph is not always straightforward.
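A minimal sketch of those three PyTorch interfaces (the dataset class and its contents are invented for illustration): a Dataset defines the examples, a Sampler (here the built-in RandomSampler) defines the visiting order, and a DataLoader produces batches according to that schedule.

```python
import torch
from torch.utils.data import Dataset, DataLoader, RandomSampler

class SquaresDataset(Dataset):
    """Toy dataset mapping i -> i squared."""
    def __len__(self):
        return 100

    def __getitem__(self, i):
        return torch.tensor([float(i)]), torch.tensor([float(i * i)])

dataset = SquaresDataset()
loader = DataLoader(dataset, batch_size=8, sampler=RandomSampler(dataset))

for inputs, targets in loader:
    pass  # each batch arrives already collated into (8, 1) tensors
```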
