Every now and then, a Python library is developed that can make a difference in the field of deep learning. PyTorch is one such library.
The creators of PyTorch describe their philosophy as imperative: computations run the moment you write them. This fits the Python programming style because we don’t have to wait until all the code is written to find out whether it works. We can run a piece of code and test it in real time. For anyone who has had to debug a neural network, this is a boon!
PyTorch is a Python library designed to provide flexibility as a deep learning development platform. PyTorch’s workflow is as close as possible to Python’s scientific computing library, numpy.
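As a quick illustration, here is a minimal sketch of that numpy-like, run-it-now workflow (the tensor shapes here are arbitrary):

```python
import torch
import numpy as np

# Create tensors much as you would numpy arrays
x = torch.ones(3, 2)          # analogous to np.ones((3, 2))
y = torch.rand(3, 2)          # analogous to np.random.rand(3, 2)

# Operations execute immediately; there is no session or compile step
z = x + y
print(z)                      # inspect the result right away

# Tensors convert to and from numpy arrays
a = z.numpy()                 # torch.Tensor -> np.ndarray
b = torch.from_numpy(a)       # np.ndarray  -> torch.Tensor
```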
Now you may ask, why use PyTorch specifically to build deep learning models? I can list three things that can help answer this question:
- Easy to use API – it is as simple as Python can be;
- Python support – as mentioned above, PyTorch integrates seamlessly with the Python data science stack. It is so similar to numpy that you may not even notice the difference;
- Dynamic Computation Graphs – instead of predefined graphs with fixed functions, PyTorch gives us a framework to build computation graphs as we go and even change them at runtime. This is useful when we don’t know in advance how much memory – or what graph structure – creating the network will require, for example when processing variable-length inputs (see the sketch after this list).
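Here is a small sketch of what building a graph at runtime looks like: the loop body runs a data-dependent number of times, so the depth of the graph is only known once the code actually executes (the input size and the threshold of 10 are arbitrary choices):

```python
import torch

x = torch.randn(3, requires_grad=True)

# The graph grows inside the loop; its depth depends on runtime data
y = x
while y.norm() < 10:
    y = y * 2

loss = y.sum()
loss.backward()    # gradients flow through however many steps actually ran
print(x.grad)
```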
A few other advantages of using PyTorch are support for multiple GPUs, custom data loaders, and simplified preprocessors.
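To make the data-loader point concrete, here is a minimal sketch of a custom dataset fed to PyTorch’s DataLoader; ToyDataset and its random tensors are hypothetical stand-ins for real data:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """A hypothetical dataset wrapping tensors of features and labels."""
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

dataset = ToyDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Batching and shuffling come for free
for batch_features, batch_labels in loader:
    print(batch_features.shape, batch_labels.shape)  # torch.Size([16, 4]) torch.Size([16])
    break
```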
Since its release in early 2017, many researchers have embraced it because of how easy it makes building new and extremely complex graphs. However, it will still be some time before PyTorch is adopted by most data science practitioners, because it is new and still under active development.
PyTorch uses an imperative paradigm. That is, each line of code required to build the graph defines a component of that graph. We can perform computations on these components independently, even before the graph is fully constructed. This methodology is called “define-by-run”.
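A short sketch of define-by-run in action: each line adds a node to the graph, and any intermediate node can be evaluated and inspected on the spot, before the rest of the graph exists (the scalar values are arbitrary):

```python
import torch

a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)

c = a * b          # this node exists and is usable immediately
print(c)           # tensor(6., grad_fn=<MulBackward0>)

d = c + a          # the graph keeps growing as more lines run
d.backward()       # differentiate through whatever has been defined so far
print(a.grad)      # d = a*b + a, so dd/da = b + 1 = 4
```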