What is PyTorch? With Examples

PyTorch is an open-source, Python-based machine learning and deep learning framework that is widely used for natural language processing and computer vision applications. PyTorch was developed by Facebook’s AI Research lab (FAIR) and has been adopted by companies such as Uber, Twitter, Salesforce, and NVIDIA.

History of PyTorch

PyTorch derives its current form from two sources. The first is Torch, a machine learning library written in Lua that dates back to 2002. Torch is no longer actively developed and has effectively been superseded by PyTorch. The second is the Chainer framework, developed in Japan in 2015, which uses NumPy-like tensor structures for computation and an eager, define-by-run approach to automatic differentiation. Both of these ideas were adopted by PyTorch.

Another framework developed by Facebook, Caffe2 (Convolutional Architecture for Fast Feature Embedding), was later merged into PyTorch.

Features of PyTorch

  • Versatile Collection of Modules: PyTorch ships with purpose-built libraries such as torchtext, torchvision, and torchaudio for different areas of deep learning, such as NLP, computer vision, and speech processing.
  • NumPy friendly: PyTorch computations operate on NumPy-like tensor structures, all of which can also run on GPUs (see the tensor sketch after this list).
  • Easy to implement backpropagation: PyTorch supports automatic differentiation; it records the operations performed on a tensor and replays them backward to compute gradients. This greatly simplifies complex calculations such as backpropagation, saving time and taking the burden off the programmer (a short autograd sketch follows this list).
  • More Pythonic: many developers consider PyTorch more Pythonic, since it lets you make changes to your code and model structure dynamically.
  • Flexible, painless debugging: PyTorch doesn’t require you to define the entire computation graph up front. It runs with an imperative paradigm, meaning each line of code adds a component to the graph, and each component can be run, tested, and debugged independently of the complete graph structure, which makes it very flexible.
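
To illustrate the NumPy-friendly, GPU-ready tensors mentioned above, here is a minimal sketch. It assumes only that PyTorch and NumPy are installed; the GPU branch runs only when CUDA is available.

```python
import numpy as np
import torch

# Create tensors directly or from a NumPy array
a = torch.ones(2, 3)                                                  # 2x3 tensor of ones
b = torch.from_numpy(np.arange(6, dtype=np.float32).reshape(2, 3))   # shares memory with the array

# Element-wise arithmetic works just like NumPy
c = a + b

# Convert back to NumPy when needed
c_np = c.numpy()

# Move the computation to a GPU when one is available
if torch.cuda.is_available():
    c_gpu = c.to("cuda")    # copy the tensor to GPU memory
    print(c_gpu.device)     # e.g. cuda:0
```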

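And here is a small sketch of the automatic differentiation and define-by-run behaviour described in the last two bullets; the numbers are purely for illustration.

```python
import torch

# requires_grad=True tells autograd to record every operation on this tensor
x = torch.tensor([2.0, 3.0], requires_grad=True)

# The graph is built eagerly, line by line (define-by-run)
y = x ** 2          # y = [4, 9]
loss = y.sum()      # backward() needs a scalar

# Run backpropagation: gradients of loss w.r.t. x are stored in x.grad
loss.backward()
print(x.grad)       # d(loss)/dx = 2*x  ->  tensor([4., 6.])
```
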
Comparison to TensorFlow

Although Google’s TensorFlow is a well-established ML/DL framework with many loyal supporters, PyTorch has built its own stronghold thanks to its dynamic-graph approach and flexible debugging. For these reasons it has strong support among researchers: between 2018 and 2019, the number of research papers mentioning PyTorch roughly doubled.

TensorFlow 2.0 has introduced an eager execution paradigm for dynamic graph definition along similar lines to PyTorch, though learning resources for this feature are still relatively sparse. While TensorFlow is often touted as the industrial-strength ML/DL library, PyTorch continues to rise, owing to its gentler learning curve for newcomers.

This tutorial series aims to equip you with the skills you need to start developing and training your own neural networks with PyTorch.

So bookmark the PyTorch page and keep an eye on the new topics that will be covered in the future.
