DOC: make RST files conform to pandas token usage #37393


Closed · wants to merge 3 commits
4 changes: 2 additions & 2 deletions doc/source/development/contributing.rst
@@ -134,7 +134,7 @@ want to clone your fork to your machine::
git remote add upstream https://github.com/pandas-dev/pandas.git

This creates the directory ``pandas-yourname`` and connects your repository to
-the upstream (main project) *pandas* repository.
+the upstream (main project) pandas repository.

Note that performing a shallow clone (with ``--depth==N``, for some ``N`` greater
or equal to 1) might break some tests and features as ``pd.show_versions()``
@@ -1381,7 +1381,7 @@ using ``.`` as a separator. For example::

will only run the ``GroupByMethods`` benchmark defined in ``groupby.py``.

-You can also run the benchmark suite using the version of ``pandas``
+You can also run the benchmark suite using the version of pandas
already installed in your current Python environment. This can be
useful if you do not have virtualenv or conda, or are using the
``setup.py develop`` approach discussed above; for the in-place build
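
As a side note on the ``pd.show_versions()`` caveat quoted in this hunk, a minimal sanity check of a development install might look like the sketch below (output depends on the local build and on the git history being available):

.. code-block:: python

    import pandas as pd

    # Reports the installed pandas version and build details; on a shallow
    # clone the git-derived information may be incomplete.
    pd.show_versions()
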
2 changes: 1 addition & 1 deletion doc/source/development/contributing_docstring.rst
@@ -616,7 +616,7 @@ be added with blank lines before and after them.

The way to present examples is as follows:

-1. Import required libraries (except ``numpy`` and ``pandas``)
+1. Import required libraries (except ``numpy`` and pandas)

Contributor comment: leave this; actually, we generally do want to use double backticks around pandas, as it highlights it.

2. Create the data required for the example

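
For context, points 1 and 2 above describe the ordering of a docstring ``Examples`` section. A minimal sketch of that layout (the function is hypothetical, not taken from the pandas source):

.. code-block:: python

    def sketch_examples_section():
        """
        Hypothetical docstring illustrating the Examples layout.

        Examples
        --------
        Import required libraries first (``numpy`` and pandas are assumed to
        be available as ``np`` and ``pd``), then create the example data.

        >>> import datetime
        >>> s = pd.Series([1, 2, 3])
        >>> s.head(2)
        0    1
        1    2
        dtype: int64
        """
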
2 changes: 1 addition & 1 deletion doc/source/development/developer.rst
@@ -33,7 +33,7 @@ where ``KeyValue`` is
}

So that a ``pandas.DataFrame`` can be faithfully reconstructed, we store a
-``pandas`` metadata key in the ``FileMetaData`` with the value stored as :
+pandas metadata key in the ``FileMetaData`` with the value stored as :

.. code-block:: text

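
To make the ``FileMetaData`` discussion concrete, the ``pandas`` key can be inspected with pyarrow roughly as follows (a sketch; the file path is illustrative and the key is only present when the file was written with pandas metadata):

.. code-block:: python

    import json

    import pyarrow.parquet as pq

    # File-level key/value metadata is exposed as a dict of bytes; the value
    # under b"pandas" is the JSON document described in this section.
    file_meta = pq.read_metadata("example.parquet").metadata
    pandas_meta = json.loads(file_meta[b"pandas"])
    print(pandas_meta["index_columns"], pandas_meta["columns"])
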
12 changes: 6 additions & 6 deletions doc/source/development/extending.rst
@@ -188,7 +188,7 @@ your ``MyExtensionArray`` class, as follows:

.. note::

-Since ``pandas`` automatically calls the underlying operator on each
+Since pandas automatically calls the underlying operator on each
element one-by-one, this might not be as performant as implementing your own
version of the associated operators directly on the ``ExtensionArray``.

@@ -303,7 +303,7 @@ dtypes included in pandas, and ensure roundtrip to pyarrow and the Parquet file
Subclassing pandas data structures
----------------------------------

-.. warning:: There are some easier alternatives before considering subclassing ``pandas`` data structures.
+.. warning:: There are some easier alternatives before considering subclassing pandas data structures.

1. Extensible method chains with :ref:`pipe <basics.pipe>`

@@ -313,7 +313,7 @@ Subclassing pandas data structures

4. Extending by :ref:`extension type <extending.extension-types>`

-This section describes how to subclass ``pandas`` data structures to meet more specific needs. There are two points that need attention:
+This section describes how to subclass pandas data structures to meet more specific needs. There are two points that need attention:

1. Override constructor properties.
2. Define original properties
@@ -327,15 +327,15 @@ Override constructor properties

Each data structure has several *constructor properties* for returning a new
data structure as the result of an operation. By overriding these properties,
-you can retain subclasses through ``pandas`` data manipulations.
+you can retain subclasses through pandas data manipulations.

There are 3 constructor properties to be defined:

* ``_constructor``: Used when a manipulation result has the same dimensions as the original.
* ``_constructor_sliced``: Used when a manipulation result has one lower dimension(s) as the original, such as ``DataFrame`` single columns slicing.
* ``_constructor_expanddim``: Used when a manipulation result has one higher dimension as the original, such as ``Series.to_frame()``.

-Following table shows how ``pandas`` data structures define constructor properties by default.
+Following table shows how pandas data structures define constructor properties by default.

=========================== ======================= =============
Property Attributes ``Series`` ``DataFrame``
@@ -411,7 +411,7 @@ Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame
Define original properties
^^^^^^^^^^^^^^^^^^^^^^^^^^

-To let original data structures have additional properties, you should let ``pandas`` know what properties are added. ``pandas`` maps unknown properties to data names overriding ``__getattribute__``. Defining original properties can be done in one of 2 ways:
+To let original data structures have additional properties, you should let pandas know what properties are added. pandas maps unknown properties to data names overriding ``__getattribute__``. Defining original properties can be done in one of 2 ways:

1. Define ``_internal_names`` and ``_internal_names_set`` for temporary properties which WILL NOT be passed to manipulation results.
2. Define ``_metadata`` for normal properties which will be passed to manipulation results.
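
A compact sketch of the subclassing pattern this file describes (class and attribute names are illustrative), combining the constructor properties with the ``_metadata`` / ``_internal_names`` hooks from the surrounding hunks:

.. code-block:: python

    import pandas as pd


    class SubclassedSeries(pd.Series):
        @property
        def _constructor(self):
            return SubclassedSeries

        @property
        def _constructor_expanddim(self):
            return SubclassedDataFrame


    class SubclassedDataFrame(pd.DataFrame):
        # temporary properties, NOT passed on to manipulation results
        _internal_names = pd.DataFrame._internal_names + ["internal_cache"]
        _internal_names_set = set(_internal_names)
        # normal properties, passed on to manipulation results
        _metadata = ["added_property"]

        @property
        def _constructor(self):
            return SubclassedDataFrame

        @property
        def _constructor_sliced(self):
            return SubclassedSeries


    df = SubclassedDataFrame({"A": [1, 2], "B": [3, 4]})
    df.added_property = "kept through manipulations"
    type(df[["A"]])  # SubclassedDataFrame, via _constructor
    type(df["A"])    # SubclassedSeries, via _constructor_sliced
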
2 changes: 1 addition & 1 deletion doc/source/user_guide/io.rst
@@ -3365,7 +3365,7 @@ All pandas objects are equipped with ``to_pickle`` methods which use Python's
df
df.to_pickle("foo.pkl")

-The ``read_pickle`` function in the ``pandas`` namespace can be used to load
+The ``read_pickle`` function in the pandas namespace can be used to load
any pickled pandas object (or any other pickled object) from file:


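
For reference, the round trip described in this hunk looks roughly like the sketch below (the file name is arbitrary):

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2, 3]})
    df.to_pickle("foo.pkl")           # method on the object being pickled
    df2 = pd.read_pickle("foo.pkl")   # top-level function in the pandas namespace
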
2 changes: 1 addition & 1 deletion doc/source/user_guide/options.rst
@@ -22,7 +22,7 @@ You can get/set options directly as attributes of the top-level ``options`` attr
pd.options.display.max_rows = 999
pd.options.display.max_rows

-The API is composed of 5 relevant functions, available directly from the ``pandas``
+The API is composed of 5 relevant functions, available directly from the pandas
namespace:

* :func:`~pandas.get_option` / :func:`~pandas.set_option` - get/set the value of a single option.
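
A short sketch of the options functions referenced above (the option names are just examples):

.. code-block:: python

    import pandas as pd

    pd.set_option("display.max_rows", 999)
    pd.get_option("display.max_rows")        # 999
    pd.describe_option("display.max_rows")   # prints the option's documentation
    pd.reset_option("display.max_rows")      # restore the default

    # option_context applies settings only within the with-block
    with pd.option_context("display.max_rows", 10, "display.max_columns", 5):
        pass
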
4 changes: 2 additions & 2 deletions doc/source/whatsnew/v0.12.0.rst
@@ -18,7 +18,7 @@ API changes
~~~~~~~~~~~

- The I/O API is now much more consistent with a set of top level ``reader`` functions
-accessed like ``pd.read_csv()`` that generally return a ``pandas`` object.
+accessed like ``pd.read_csv()`` that generally return a pandas object.

* ``read_csv``
* ``read_excel``
@@ -179,7 +179,7 @@ API changes
``bs4`` + ``html5lib`` when lxml fails to parse. a list of parsers to try
until success is also valid

-- The internal ``pandas`` class hierarchy has changed (slightly). The
+- The internal pandas class hierarchy has changed (slightly). The
previous ``PandasObject`` now is called ``PandasContainer`` and a new
``PandasObject`` has become the base class for ``PandasContainer`` as well
as ``Index``, ``Categorical``, ``GroupBy``, ``SparseList``, and
4 changes: 2 additions & 2 deletions doc/source/whatsnew/v0.13.0.rst
@@ -46,7 +46,7 @@ API changes
- Text parser now treats anything that reads like inf ("inf", "Inf", "-Inf",
"iNf", etc.) as infinity. (:issue:`4220`, :issue:`4219`), affecting
``read_table``, ``read_csv``, etc.
-- ``pandas`` now is Python 2/3 compatible without the need for 2to3 thanks to
+- pandas now is Python 2/3 compatible without the need for 2to3 thanks to
@jtratner. As a result, pandas now uses iterators more extensively. This
also led to the introduction of substantive parts of the Benjamin
Peterson's ``six`` library into compat. (:issue:`4384`, :issue:`4375`,
@@ -57,7 +57,7 @@ API changes
filter, map and zip, plus other necessary elements for Python 3
compatibility. ``lmap``, ``lzip``, ``lrange`` and ``lfilter`` all produce
lists instead of iterators, for compatibility with ``numpy``, subscripting
-and ``pandas`` constructors.(:issue:`4384`, :issue:`4375`, :issue:`4372`)
+and pandas constructors.(:issue:`4384`, :issue:`4375`, :issue:`4372`)
- ``Series.get`` with negative indexers now returns the same as ``[]`` (:issue:`4390`)
- Changes to how ``Index`` and ``MultiIndex`` handle metadata (``levels``,
``labels``, and ``names``) (:issue:`4039`):
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.14.0.rst
@@ -101,7 +101,7 @@ API changes
you have a local variable that is *not* a column you must still refer to
it with the ``'@'`` prefix.
- You can have an expression like ``df.query('@a < a')`` with no complaints
-from ``pandas`` about ambiguity of the name ``a``.
+from pandas about ambiguity of the name ``a``.
- The top-level :func:`pandas.eval` function does not allow you use the
``'@'`` prefix and provides you with an error message telling you so.
- ``NameResolutionError`` was removed because it isn't necessary anymore.
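
A minimal sketch of the ``'@'`` behaviour this entry describes:

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2, 3]})
    a = 1

    # '@a' refers to the local variable, the bare 'a' to the column,
    # so there is no ambiguity for pandas to complain about.
    df.query("@a < a")  # rows where column 'a' exceeds the local variable
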
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.17.1.rst
@@ -8,7 +8,7 @@ Version 0.17.1 (November 21, 2015)

.. note::

-We are proud to announce that *pandas* has become a sponsored project of the (`NumFOCUS organization`_). This will help ensure the success of development of *pandas* as a world-class open-source project.
+We are proud to announce that pandas has become a sponsored project of the (`NumFOCUS organization`_). This will help ensure the success of development of pandas as a world-class open-source project.

.. _numfocus organization: http://www.numfocus.org/blog/numfocus-announces-new-fiscally-sponsored-project-pandas

2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.18.0.rst
@@ -1274,7 +1274,7 @@ Bug fixes
- Bug in ``.groupby`` where a ``KeyError`` was not raised for a wrong column if there was only one row in the dataframe (:issue:`11741`)
- Bug in ``.read_csv`` with dtype specified on empty data producing an error (:issue:`12048`)
- Bug in ``.read_csv`` where strings like ``'2E'`` are treated as valid floats (:issue:`12237`)
-- Bug in building *pandas* with debugging symbols (:issue:`12123`)
+- Bug in building pandas with debugging symbols (:issue:`12123`)


- Removed ``millisecond`` property of ``DatetimeIndex``. This would always raise a ``ValueError`` (:issue:`12019`).
4 changes: 2 additions & 2 deletions doc/source/whatsnew/v0.18.1.rst
@@ -381,9 +381,9 @@ NumPy function compatibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Compatibility between pandas array-like methods (e.g. ``sum`` and ``take``) and their ``numpy``
-counterparts has been greatly increased by augmenting the signatures of the ``pandas`` methods so
+counterparts has been greatly increased by augmenting the signatures of the pandas methods so
as to accept arguments that can be passed in from ``numpy``, even if they are not necessarily
-used in the ``pandas`` implementation (:issue:`12644`, :issue:`12638`, :issue:`12687`)
+used in the pandas implementation (:issue:`12644`, :issue:`12638`, :issue:`12687`)

- ``.searchsorted()`` for ``Index`` and ``TimedeltaIndex`` now accept a ``sorter`` argument to maintain compatibility with numpy's ``searchsorted`` function (:issue:`12238`)
- Bug in numpy compatibility of ``np.round()`` on a ``Series`` (:issue:`12600`)
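
As an illustration of the signature compatibility described above (a sketch; the extra arguments are accepted even when pandas does not use them):

.. code-block:: python

    import numpy as np
    import pandas as pd

    s = pd.Series([1.234, 5.678, np.nan])

    # NumPy passes arguments such as ``axis`` through to the pandas methods;
    # the pandas signatures accept them for compatibility.
    np.sum(s, axis=0)
    np.round(s, 2)
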
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.19.0.rst
@@ -404,7 +404,7 @@ Google BigQuery enhancements
Fine-grained NumPy errstate
^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Previous versions of pandas would permanently silence numpy's ufunc error handling when ``pandas`` was imported. pandas did this in order to silence the warnings that would arise from using numpy ufuncs on missing data, which are usually represented as ``NaN`` s. Unfortunately, this silenced legitimate warnings arising in non-pandas code in the application. Starting with 0.19.0, pandas will use the ``numpy.errstate`` context manager to silence these warnings in a more fine-grained manner, only around where these operations are actually used in the pandas code base. (:issue:`13109`, :issue:`13145`)
+Previous versions of pandas would permanently silence numpy's ufunc error handling when pandas was imported. pandas did this in order to silence the warnings that would arise from using numpy ufuncs on missing data, which are usually represented as ``NaN`` s. Unfortunately, this silenced legitimate warnings arising in non-pandas code in the application. Starting with 0.19.0, pandas will use the ``numpy.errstate`` context manager to silence these warnings in a more fine-grained manner, only around where these operations are actually used in the pandas code base. (:issue:`13109`, :issue:`13145`)

After upgrading pandas, you may see *new* ``RuntimeWarnings`` being issued from your code. These are likely legitimate, and the underlying cause likely existed in the code when using previous versions of pandas that simply silenced the warning. Use `numpy.errstate <https://numpy.org/doc/stable/reference/generated/numpy.errstate.html>`__ around the source of the ``RuntimeWarning`` to control how these conditions are handled.

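
The fine-grained approach described here amounts to wrapping only the offending operation, roughly:

.. code-block:: python

    import numpy as np

    arr = np.array([np.nan, 1.0, 2.0])

    # Instead of silencing ufunc warnings globally at import time, suppress
    # them only around the comparison that is expected to hit missing values.
    with np.errstate(invalid="ignore"):
        result = arr > 0
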
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.20.0.rst
@@ -1543,7 +1543,7 @@ Other deprecations
- ``TimedeltaIndex.searchsorted()``, ``DatetimeIndex.searchsorted()``, and ``PeriodIndex.searchsorted()`` have deprecated the ``key`` parameter in favor of ``value`` (:issue:`12662`)
- ``DataFrame.astype()`` has deprecated the ``raise_on_error`` parameter in favor of ``errors`` (:issue:`14878`)
- ``Series.sortlevel`` and ``DataFrame.sortlevel`` have been deprecated in favor of ``Series.sort_index`` and ``DataFrame.sort_index`` (:issue:`15099`)
-- importing ``concat`` from ``pandas.tools.merge`` has been deprecated in favor of imports from the ``pandas`` namespace. This should only affect explicit imports (:issue:`15358`)
+- importing ``concat`` from ``pandas.tools.merge`` has been deprecated in favor of imports from the pandas namespace. This should only affect explicit imports (:issue:`15358`)
- ``Series/DataFrame/Panel.consolidate()`` been deprecated as a public method. (:issue:`15483`)
- The ``as_indexer`` keyword of ``Series.str.match()`` has been deprecated (ignored keyword) (:issue:`15257`).
- The following top-level pandas functions have been deprecated and will be removed in a future version (:issue:`13790`, :issue:`15940`)
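
For the ``concat`` import deprecation in this list, the migration is simply:

.. code-block:: python

    # Import from the top-level pandas namespace ...
    from pandas import concat

    # ... rather than the deprecated location:
    # from pandas.tools.merge import concat
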
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.23.0.rst
@@ -454,7 +454,7 @@ These bugs were squashed:
``Series.str.cat`` has gained the ``join`` kwarg
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Previously, :meth:`Series.str.cat` did not -- in contrast to most of ``pandas`` -- align :class:`Series` on their index before concatenation (see :issue:`18657`).
+Previously, :meth:`Series.str.cat` did not -- in contrast to most of pandas -- align :class:`Series` on their index before concatenation (see :issue:`18657`).
The method has now gained a keyword ``join`` to control the manner of alignment, see examples below and :ref:`here <text.concatenate>`.

In v.0.23 ``join`` will default to None (meaning no alignment), but this default will change to ``'left'`` in a future version of pandas.
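
A small sketch of the ``join`` keyword on :meth:`Series.str.cat` discussed above:

.. code-block:: python

    import pandas as pd

    s = pd.Series(["a", "b", "c"])
    t = pd.Series(["x", "y", "z"], index=[2, 1, 0])

    # With join="left", t is aligned to s's index before concatenation,
    # giving ["az", "by", "cx"] rather than positional pairing.
    s.str.cat(t, join="left")
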
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v1.2.0.rst
@@ -188,7 +188,7 @@ Alternatively, you can also use the dtype object:
Index/column name preservation when aggregating
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-When aggregating using :meth:`concat` or the :class:`DataFrame` constructor, Pandas
+When aggregating using :meth:`concat` or the :class:`DataFrame` constructor, pandas
will attempt to preserve index (and column) names whenever possible (:issue:`35847`).
In the case where all inputs share a common name, this name will be assigned to the
result. When the input names do not all agree, the result will be unnamed. Here is an
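
A minimal sketch of the name-preservation behaviour summarised above, assuming inputs that share an index name:

.. code-block:: python

    import pandas as pd

    s1 = pd.Series([1, 2], index=pd.Index(["a", "b"], name="idx"))
    s2 = pd.Series([3, 4], index=pd.Index(["c", "d"], name="idx"))

    # Both inputs share the index name "idx", so the result keeps it;
    # if the names disagreed, the result index would be unnamed.
    pd.concat([s1, s2]).index.name  # "idx"
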