.. note::
    :class: sphx-glr-download-link-note

    Click :ref:`here <sphx_glr_download_beginner_blitz_autograd_tutorial.py>` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_beginner_blitz_autograd_tutorial.py:


Autograd: Automatic Differentiation
===================================

Central to all neural networks in PyTorch is the ``autograd`` package.
Let's first briefly visit this package, and then we will go on to train our
first neural network.

The ``autograd`` package provides automatic differentiation for all operations
on Tensors. It is a define-by-run framework, which means that your backprop is
defined by how your code is run, and that every single iteration can be
different.

Let us see this in simpler terms with some examples.

Tensor
--------

``torch.Tensor`` is the central class of the package. If you set its attribute
``.requires_grad`` as ``True``, it starts to track all operations on it. When
you finish your computation you can call ``.backward()`` and have all the
gradients computed automatically. The gradient for this tensor will be
accumulated into the ``.grad`` attribute.

To stop a tensor from tracking history, you can call ``.detach()`` to detach
it from the computation history and to prevent future computation from being
tracked.

To prevent tracking history (and using memory), you can also wrap the code
block in ``with torch.no_grad():``. This can be particularly helpful when
evaluating a model, because the model may have trainable parameters with
``requires_grad=True`` for which we don't need the gradients.

There's one more class which is very important for the autograd
implementation: ``Function``.

``Tensor`` and ``Function`` are interconnected and build up an acyclic graph
that encodes a complete history of computation. Each tensor has a ``.grad_fn``
attribute that references the ``Function`` that created the ``Tensor`` (except
for Tensors created by the user; their ``grad_fn`` is ``None``).

If you want to compute the derivatives, you can call ``.backward()`` on a
``Tensor``. If the ``Tensor`` is a scalar (i.e. it holds one element of data),
you don't need to specify any arguments to ``backward()``; however, if it has
more elements, you need to specify a ``gradient`` argument that is a tensor of
matching shape.


.. code-block:: python


    import torch


Create a tensor and set ``requires_grad=True`` to track computation with it


.. code-block:: python


    x = torch.ones(2, 2, requires_grad=True)
    print(x)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[1., 1.],
            [1., 1.]], requires_grad=True)


Do an operation on the tensor:


.. code-block:: python


    y = x + 2
    print(y)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[3., 3.],
            [3., 3.]], grad_fn=<AddBackward0>)


``y`` was created as a result of an operation, so it has a ``grad_fn``.


.. code-block:: python


    print(y.grad_fn)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    <AddBackward0 object at 0x...>


Do more operations on ``y``


.. code-block:: python


    z = y * y * 3
    out = z.mean()

    print(z, out)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[27., 27.],
            [27., 27.]], grad_fn=<MulBackward0>) tensor(27., grad_fn=<MeanBackward0>)


``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad`` flag
in-place. The input flag defaults to ``False`` if not given.


.. code-block:: python


    a = torch.randn(2, 2)
    a = ((a * 3) / (a - 1))
    print(a.requires_grad)
    a.requires_grad_(True)
    print(a.requires_grad)
    b = (a * a).sum()
    print(b.grad_fn)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    False
    True
    <SumBackward0 object at 0x...>
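
Before moving on to gradients, here is a minimal sketch of the two ways to
stop tracking that were described above, ``.detach()`` and
``torch.no_grad()``. The tensor names ``m``, ``n`` and ``p`` below are new and
used only for this illustration.


.. code-block:: python


    m = torch.ones(2, 2, requires_grad=True)

    # .detach() returns a tensor that shares the same data but carries no history
    n = (m * 3).detach()
    print(n.requires_grad)    # False

    # inside torch.no_grad(), no graph is built for the computation
    with torch.no_grad():
        p = m * 3
    print(p.requires_grad)    # False
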
Gradients
---------

Let's backprop now. Because ``out`` contains a single scalar,
``out.backward()`` is equivalent to ``out.backward(torch.tensor(1.))``.


.. code-block:: python


    out.backward()


Print gradients d(out)/dx


.. code-block:: python


    print(x.grad)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([[4.5000, 4.5000],
            [4.5000, 4.5000]])


You should have gotten a matrix of ``4.5``. Let's call the ``out``
*Tensor* ":math:`o`".
We have that :math:`o = \frac{1}{4}\sum_i z_i`,
:math:`z_i = 3(x_i+2)^2` and :math:`z_i\bigr\rvert_{x_i=1} = 27`.
Therefore,
:math:`\frac{\partial o}{\partial x_i} = \frac{1}{4} \cdot 6(x_i+2) = \frac{3}{2}(x_i+2)`,
hence
:math:`\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5`.

You can do many crazy things with autograd!


.. code-block:: python


    x = torch.randn(3, requires_grad=True)

    y = x * 2
    while y.data.norm() < 1000:
        y = y * 2

    print(y)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([-1178.9551,  1202.9015,   293.6342], grad_fn=<MulBackward0>)


.. code-block:: python


    gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
    y.backward(gradients)

    print(x.grad)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    tensor([1.0240e+02, 1.0240e+03, 1.0240e-01])


You can also stop autograd from tracking history on Tensors with
``.requires_grad=True`` by wrapping the code block in
``with torch.no_grad()``:


.. code-block:: python


    print(x.requires_grad)
    print((x ** 2).requires_grad)

    with torch.no_grad():
        print((x ** 2).requires_grad)


.. rst-class:: sphx-glr-script-out

 Out:

 .. code-block:: none

    True
    True
    False


**Read Later:**
Documentation of ``autograd`` and ``Function`` is at
https://pytorch.org/docs/autograd


**Total running time of the script:** ( 0 minutes  0.007 seconds)


.. _sphx_glr_download_beginner_blitz_autograd_tutorial.py:


.. only:: html

  .. container:: sphx-glr-footer
     :class: sphx-glr-footer-example


    .. container:: sphx-glr-download

       :download:`Download Python source code: autograd_tutorial.py <autograd_tutorial.py>`


    .. container:: sphx-glr-download

       :download:`Download Jupyter notebook: autograd_tutorial.ipynb <autograd_tutorial.ipynb>`


.. only:: html

  .. rst-class:: sphx-glr-signature

     `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.readthedocs.io>`_