[NFC][MLGO] Convert notes to proper RST note directives in MLGO.rst #146450

Merged 1 commit on Jul 1, 2025.
34 changes: 21 additions & 13 deletions llvm/docs/MLGO.rst
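For context, the change wraps plain "NOTE" paragraphs in proper reStructuredText ``note`` directives. A minimal sketch of the directive form (the body text here is illustrative, not from MLGO.rst): Sphinx renders the indented body as a highlighted admonition box.

```rst
.. note::

  This text is rendered inside a highlighted "Note" admonition box.
  Every body line must share one consistent indentation level under
  the directive, with a blank line separating it from ``.. note::``.
```

A misindented body line would be treated as ordinary text outside the admonition, which is why the diff below indents each converted paragraph uniformly.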
@@ -15,11 +15,13 @@ Currently the following heuristics feature such integration:
 
 This document is an outline of the tooling and APIs facilitating MLGO.
 
-Note that tools for orchestrating ML training are not part of LLVM, as they are
-dependency-heavy - both on the ML infrastructure choice, as well as choices of
-distributed computing. For the training scenario, LLVM only contains facilities
-enabling it, such as corpus extraction, training data extraction, and evaluation
-of models during training.
+.. note::
+
+  The tools for orchestrating ML training are not part of LLVM, as they are
+  dependency-heavy - both on the ML infrastructure choice, as well as choices of
+  distributed computing. For the training scenario, LLVM only contains facilities
+  enabling it, such as corpus extraction, training data extraction, and evaluation
+  of models during training.
 
 
 .. contents::
@@ -329,8 +331,10 @@ We currently feature 4 implementations:
   the neural network, together with its weights (essentially, loops performing
   matrix multiplications)
 
-NOTE: we are actively working on replacing this with an EmitC implementation
-requiring no out of tree build-time dependencies.
+.. note::
+
+  we are actively working on replacing this with an EmitC implementation
+  requiring no out of tree build-time dependencies.
 
 - ``InteractiveModelRunner``. This is intended for training scenarios where the
   training algorithm drives compilation. This model runner has no special
@@ -531,9 +535,11 @@ implementation details.
 Building with ML support
 ========================
 
-**NOTE** For up to date information on custom builds, see the ``ml-*``
-`build bots <http://lab.llvm.org>`_. They are set up using
-`like this <https://github.com/google/ml-compiler-opt/blob/main/buildbot/buildbot_init.sh>`_.
+.. note::
+
+  For up to date information on custom builds, see the ``ml-*``
+  `build bots <http://lab.llvm.org>`_. They are set up using
+  `like this <https://github.com/google/ml-compiler-opt/blob/main/buildbot/buildbot_init.sh>`_.
 
 Embed pre-trained models (aka "release" mode)
 ---------------------------------------------
@@ -567,9 +573,11 @@ You can also specify a URL for the path, and it is also possible to pre-compile
 the header and object and then just point to the precompiled artifacts. See for
 example ``LLVM_OVERRIDE_MODEL_HEADER_INLINERSIZEMODEL``.
 
-**Note** that we are transitioning away from the AOT compiler shipping with the
-tensorflow package, and to a EmitC, in-tree solution, so these details will
-change soon.
+.. note::
+
+  We are transitioning away from the AOT compiler shipping with the
+  tensorflow package, and to a EmitC, in-tree solution, so these details will
+  change soon.
 
 Using TFLite (aka "development" mode)
 -------------------------------------