From 69eff5a4a24244b8c6a019402175b921c7864e90 Mon Sep 17 00:00:00 2001
From: svkeerthy
Date: Tue, 1 Jul 2025 01:21:56 +0000
Subject: [PATCH] Doc cleanup

---
 llvm/docs/MLGO.rst | 34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/llvm/docs/MLGO.rst b/llvm/docs/MLGO.rst
index 6f7467063552f..a33af82c287f2 100644
--- a/llvm/docs/MLGO.rst
+++ b/llvm/docs/MLGO.rst
@@ -15,11 +15,13 @@ Currently the following heuristics feature such integration:
 
 This document is an outline of the tooling and APIs facilitating MLGO.
 
-Note that tools for orchestrating ML training are not part of LLVM, as they are
-dependency-heavy - both on the ML infrastructure choice, as well as choices of
-distributed computing. For the training scenario, LLVM only contains facilities
-enabling it, such as corpus extraction, training data extraction, and evaluation
-of models during training.
+.. note::
+
+  The tools for orchestrating ML training are not part of LLVM, as they are
+  dependency-heavy, both on the choice of ML infrastructure and on the choice
+  of distributed computing. For the training scenario, LLVM only contains
+  facilities enabling it, such as corpus extraction, training data extraction,
+  and evaluation of models during training.
 
 .. contents::
 
@@ -329,8 +331,10 @@ We currently feature 4 implementations:
   the neural network, together with its weights (essentially, loops performing
   matrix multiplications)
 
-NOTE: we are actively working on replacing this with an EmitC implementation
-requiring no out of tree build-time dependencies.
+.. note::
+
+  We are actively working on replacing this with an EmitC implementation
+  requiring no out-of-tree build-time dependencies.
 
 - ``InteractiveModelRunner``. This is intended for training scenarios where
   the training algorithm drives compilation. This model runner has no special
@@ -531,9 +535,11 @@ implementation details.
 Building with ML support
 ========================
 
-**NOTE** For up to date information on custom builds, see the ``ml-*``
-`build bots `_. They are set up using
-`like this `_.
+.. note::
+
+  For up-to-date information on custom builds, see the ``ml-*``
+  `build bots `_. They are set up
+  `like this `_.
 
 Embed pre-trained models (aka "release" mode)
 ---------------------------------------------
@@ -567,9 +573,11 @@ You can also specify a URL for the path, and it is also possible to pre-compile
 the header and object and then just point to the precompiled artifacts. See for
 example ``LLVM_OVERRIDE_MODEL_HEADER_INLINERSIZEMODEL``.
 
-**Note** that we are transitioning away from the AOT compiler shipping with the
-tensorflow package, and to a EmitC, in-tree solution, so these details will
-change soon.
+.. note::
+
+  We are transitioning away from the AOT compiler shipping with the
+  tensorflow package to an EmitC, in-tree solution, so these details will
+  change soon.
 
 Using TFLite (aka "development" mode)
 -------------------------------------
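
As an aside on the override mechanism the last hunk documents, a configure step using it might look roughly like the sketch below. This is not part of the patch: the artifact path is hypothetical, and only the variable name ``LLVM_OVERRIDE_MODEL_HEADER_INLINERSIZEMODEL`` comes from the text above.

```shell
# Sketch: configure an LLVM build that points the inliner size model at a
# pre-compiled header instead of running the AOT compiler at build time.
# The path is hypothetical; only the cache-variable name appears in the doc.
cmake -G Ninja ../llvm \
  -DLLVM_OVERRIDE_MODEL_HEADER_INLINERSIZEMODEL=/path/to/InlinerSizeModel.h
```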