diff --git a/beginner_source/basics/autogradqs_tutorial.py b/beginner_source/basics/autogradqs_tutorial.py
index 8eff127ddee..107ff3cd2bc 100644
--- a/beginner_source/basics/autogradqs_tutorial.py
+++ b/beginner_source/basics/autogradqs_tutorial.py
@@ -133,7 +133,7 @@
 # - To mark some parameters in your neural network as **frozen parameters**.
 # - To **speed up computations** when you are only doing forward pass, because computations on tensors that do
 #   not track gradients would be more efficient.
-
+# See this note in the autograd mechanics documentation for additional reference.
 
 
 ######################################################################
@@ -160,6 +160,15 @@
 # - accumulates them in the respective tensor’s ``.grad`` attribute
 # - using the chain rule, propagates all the way to the leaf tensors.
 #
+# To get a sense of what this computational graph looks like, we can use the
+# following tools:
+#
+# 1. ``torchviz`` is a package for visualizing computational graphs.
+#
+# 2. Setting the environment variable ``TORCH_LOGS="+autograd"`` enables
+#    logging for the backward pass.
+#
+#
 # .. note::
 #   **DAGs are dynamic in PyTorch**
 #   An important thing to note is that the graph is recreated from scratch; after each
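
The patch points readers at ``torchviz`` and ``TORCH_LOGS`` for inspecting the autograd graph. As a quick illustration of what that graph contains, here is a minimal sketch that needs only ``torch`` itself (no ``torchviz``): it walks the ``grad_fn`` / ``next_functions`` chain that autograd records and prints the backward node names. The ``print_graph`` helper is my own, not part of the tutorial or the PyTorch API.

```python
import torch

# Build the same small graph the tutorial uses: z = x @ w + b
x = torch.ones(5)
w = torch.randn(5, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
z = torch.matmul(x, w) + b

def print_graph(fn, depth=0):
    """Recursively print the backward-graph node reachable from ``fn``."""
    if fn is None:
        return
    # Each backward node is a class like AddBackward0 or AccumulateGrad.
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        print_graph(next_fn, depth + 1)

# The root node corresponds to the last op that produced z (the addition).
print_graph(z.grad_fn)
```

This text dump is a rough stand-in for the picture ``torchviz``'s ``make_dot`` would render; the point is the same: every tensor produced by an operation on ``requires_grad`` tensors carries a ``grad_fn`` linking it into the DAG.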