Commit 93b03e4

Fixing documentation for torchtext nn modules (#1267)
1 parent eb5e39d commit 93b03e4

File tree

 docs/source/nn_modules.rst (+3, −3)
 torchtext/nn/modules/multiheadattention.py (+3, −0)

2 files changed: +6, −3 lines

docs/source/nn_modules.rst

Lines changed: 3 additions & 3 deletions
@@ -1,11 +1,11 @@
 .. role:: hidden
     :class: hidden-section
 
-torchtext.nn.modules.multiheadattention
+torchtext.nn
 =======================================
 
-.. automodule:: torchtext.nn.modules.multiheadattention
-.. currentmodule:: torchtext.nn.modules.multiheadattention
+.. automodule:: torchtext.nn
+.. currentmodule:: torchtext.nn
 
 :hidden:`MultiheadAttentionContainer`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
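Note: pointing the automodule directive at torchtext.nn only picks up these classes because they are re-exported at the package level; the docstring import added in the second file relies on the same thing. A minimal sketch of that presumed re-export in torchtext/nn/__init__.py follows. The exact contents of that file are an assumption, since it is not part of this commit:

    # torchtext/nn/__init__.py (assumed layout, not shown in this commit)
    # Re-export the attention building blocks so they are importable
    # directly from torchtext.nn and visible to Sphinx's automodule.
    from torchtext.nn.modules.multiheadattention import (
        MultiheadAttentionContainer,
        InProjContainer,
        ScaledDotProduct,
    )

    __all__ = ["MultiheadAttentionContainer", "InProjContainer", "ScaledDotProduct"]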

torchtext/nn/modules/multiheadattention.py

Lines changed: 3 additions & 0 deletions
@@ -20,6 +20,7 @@ def __init__(self, nhead, in_proj_container, attention_layer, out_proj, batch_fi
 
         Examples::
             >>> import torch
+            >>> from torchtext.nn import MultiheadAttentionContainer, InProjContainer, ScaledDotProduct
             >>> embed_dim, num_heads, bsz = 10, 5, 64
             >>> in_proj_container = InProjContainer(torch.nn.Linear(embed_dim, embed_dim),
                                                     torch.nn.Linear(embed_dim, embed_dim),
@@ -122,6 +123,7 @@ def __init__(self, dropout=0.0, batch_first=False):
                 as `(batch, seq, feature)`. Default: ``False``
 
         Examples::
+            >>> import torch, torchtext
             >>> SDP = torchtext.nn.ScaledDotProduct(dropout=0.1)
             >>> q = torch.randn(21, 256, 3)
             >>> k = v = torch.randn(21, 256, 3)
@@ -245,6 +247,7 @@ def forward(self,
             value (Tensor): The values to be projected.
 
         Examples::
+            >>> import torch
             >>> from torchtext.nn import InProjContainer
             >>> embed_dim, bsz = 10, 64
             >>> in_proj_container = InProjContainer(torch.nn.Linear(embed_dim, embed_dim),
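For context, the imports added above make the docstring snippets self-contained. Assembled together they form a small end-to-end example; the sketch below follows the docstring's shapes, and the printed output shape is an expectation for the default sequence-first layout rather than verified output:

    import torch
    from torchtext.nn import MultiheadAttentionContainer, InProjContainer, ScaledDotProduct

    embed_dim, num_heads, bsz = 10, 5, 64

    # Separate linear projections for query, key and value.
    in_proj_container = InProjContainer(torch.nn.Linear(embed_dim, embed_dim),
                                        torch.nn.Linear(embed_dim, embed_dim),
                                        torch.nn.Linear(embed_dim, embed_dim))

    # Container wires projections -> scaled dot-product attention -> output projection.
    MHA = MultiheadAttentionContainer(num_heads,
                                      in_proj_container,
                                      ScaledDotProduct(dropout=0.1),
                                      torch.nn.Linear(embed_dim, embed_dim))

    # Default batch_first=False: tensors are (seq_len, batch, embed_dim).
    query = torch.rand((21, bsz, embed_dim))
    key = value = torch.rand((16, bsz, embed_dim))

    attn_output, attn_weights = MHA(query, key, value)
    print(attn_output.shape)  # expected: torch.Size([21, 64, 10])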
