[IR] Initial introduction of memset_pattern #97583

Merged

Changes shown from 7 commits (29 commits total)
627f1ef  [IR] Initial introduction of memset_pattern (asb, Jul 10, 2024)
3a0d10a  Tweak wording in Langref description based on feedback (asb, Jul 24, 2024)
d710a1f  Removing ConstantInt getLength (holdover from when this was a restric… (asb, Jul 24, 2024)
4bcc00e  Properly update memset-pattern.ll test cases (asb, Jul 24, 2024)
7d1347c  Removed outdated comment (asb, Jul 31, 2024)
60ba68b  Change to memset_pattern taking a count rather than a number of bytes (asb, Jul 31, 2024)
be558f9  Rename to llvm.memset.pattern as requested in review (asb, Jul 31, 2024)
dfc0564  Add comments to memset_pattern intrinsic to describe args (asb, Aug 14, 2024)
8ac8b69  Improve memset.pattern langref: fix outdated refs to bytes and mentio… (asb, Aug 14, 2024)
6d16c82  Excise errant memset_pattern mention (asb, Sep 11, 2024)
55ee84a  Fix incorrect mangling in LangRef and explain memory address is incre… (asb, Sep 11, 2024)
1e60edd  Allow memset.pattern expansion for big endian targets (asb, Sep 11, 2024)
88b5af3  Allow non-power-of-two length patterns (asb, Sep 11, 2024)
ea429b4  Remove unnecessary and incorrect mangling from llvm.memset.pattern uses (asb, Sep 11, 2024)
e9c98c8  Rename memset-pattern-inline.ll test to memset-pattern.ll to reflect … (asb, Sep 11, 2024)
30d59b9  Remove unnecessary comment (asb, Sep 11, 2024)
d83fdfb  Fix logic for alignment of stores in memset.pattern expansion (asb, Sep 11, 2024)
64bc6af  Merge remote-tracking branch 'origin/main' into 2024q2-memset-pattern… (asb, Nov 6, 2024)
c19adc1  Regenerate memset-pattern.ll after merge (asb, Nov 8, 2024)
03e07d5  Use normal createMemsetAsLoop helper for memset.pattern (asb, Nov 8, 2024)
a7373b7  Rename to llvm.experimental.memset.pattern (asb, Nov 8, 2024)
a68aa8d  Move MemSetPattern out of the MemSet hierarchy (asb, Nov 8, 2024)
9580ab0  Fix underline length in langref (asb, Nov 8, 2024)
4ebc985  Address review comments (asb, Nov 9, 2024)
78bad3b  Verkfy llvm.experimental.memset.pattern pattern arg is integral numbe… (asb, Nov 9, 2024)
71dd9b5  Revert "Verkfy llvm.experimental.memset.pattern pattern arg is integr… (asb, Nov 9, 2024)
ad7585c  Remove outdated comment about integral bit widths only (asb, Nov 9, 2024)
4d6d9ab  Adopt Nikita's langref rewording suggestion (asb, Nov 13, 2024)
0b0e81e  typo fixes (asb, Nov 15, 2024)
56 changes: 56 additions & 0 deletions llvm/docs/LangRef.rst
@@ -15230,6 +15230,62 @@
The behavior of '``llvm.memset.inline.*``' is equivalent to the behavior of
'``llvm.memset.*``', but the generated code is guaranteed not to call any
external functions.

.. _int_memset_pattern:

'``llvm.memset.pattern``' Intrinsic
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Syntax:
"""""""

This is an overloaded intrinsic. You can use ``llvm.memset.pattern`` on any
integer bit width that is an integral number of bytes and for different
address spaces. However, not all targets support all bit widths.

::

declare void @llvm.memset.pattern.p0.i64.i128(ptr <dest>, i128 <val>,
i64 <len>, i1 <isvolatile>)

Overview:
"""""""""

The '``llvm.memset.pattern.*``' intrinsics fill a block of memory with
a particular value. This may be expanded to an inline loop, a sequence of
stores, or a libcall depending on what is available for the target and the
expected performance and code size impact.

Arguments:
""""""""""

The first argument is a pointer to the destination to fill, the second
is the value with which to fill it, the third argument is an integer
argument specifying the number of times to fill the value, and the fourth is a
boolean indicating a volatile access.

The :ref:`align <attr_align>` parameter attribute can be provided
for the first argument.

If the ``isvolatile`` parameter is ``true``, the
``llvm.memset.pattern`` call is a :ref:`volatile operation <volatile>`. The
detailed access behavior is not very cleanly specified and it is unwise to
depend on it.

Semantics:
""""""""""

The '``llvm.memset.pattern*``' intrinsic fills "len" bytes of memory
Collaborator: Here it says that len is bytes of memory, while above it says
it is the number of times the pattern should be repeated. Confusing. But
maybe we haven't settled if the pattern is allowed to be non-byte-sized or
not.

asb (author): Thanks, that had indeed been missed and has now been addressed.

starting at the destination location. If the argument is known to be aligned
to some boundary, this can be specified as an attribute on the argument.

If ``<len>`` is not an integer multiple of the pattern width in bytes, then any
remainder bytes will be copied from ``<val>``.
Collaborator: Could be worth describing how this works with respect to
endianness. I imagine that for a big-endian target the <val> pattern is
specified as a big-endian number, and bytes are written by picking bytes
round robin starting at the most significant byte in <val>, while for
little-endian targets the first byte written is the least significant byte
in <val>.

asb (author): I've added what aims to be a minimal description. Also fixed
the outdated references to byte counts from an earlier iteration of the
intrinsic.

If ``<len>`` is 0, it is a no-op modulo the behavior of attributes attached to
Contributor: Another thing that could be clarified here is that we increment
by the alloc size of the type. (Only relevant for over-aligned types.)

asb (author): I've added text that intends to convey this.
the arguments.
If ``<len>`` is not a well-defined value, the behavior is undefined.
If ``<len>`` is not zero, ``<dest>`` should be well-defined, otherwise the
behavior is undefined.

.. _int_sqrt:

'``llvm.sqrt.*``' Intrinsic
3 changes: 3 additions & 0 deletions llvm/include/llvm/IR/InstVisitor.h
@@ -208,6 +208,7 @@ class InstVisitor {
RetTy visitDbgInfoIntrinsic(DbgInfoIntrinsic &I){ DELEGATE(IntrinsicInst); }
RetTy visitMemSetInst(MemSetInst &I) { DELEGATE(MemIntrinsic); }
RetTy visitMemSetInlineInst(MemSetInlineInst &I){ DELEGATE(MemSetInst); }
RetTy visitMemSetPatternInst(MemSetPatternInst &I) { DELEGATE(MemSetInst); }
RetTy visitMemCpyInst(MemCpyInst &I) { DELEGATE(MemTransferInst); }
RetTy visitMemCpyInlineInst(MemCpyInlineInst &I){ DELEGATE(MemCpyInst); }
RetTy visitMemMoveInst(MemMoveInst &I) { DELEGATE(MemTransferInst); }
@@ -295,6 +296,8 @@ class InstVisitor {
case Intrinsic::memset: DELEGATE(MemSetInst);
case Intrinsic::memset_inline:
DELEGATE(MemSetInlineInst);
case Intrinsic::memset_pattern:
DELEGATE(MemSetPatternInst);
case Intrinsic::vastart: DELEGATE(VAStartInst);
case Intrinsic::vaend: DELEGATE(VAEndInst);
case Intrinsic::vacopy: DELEGATE(VACopyInst);
19 changes: 18 additions & 1 deletion llvm/include/llvm/IR/IntrinsicInst.h
@@ -1208,6 +1208,7 @@ class MemIntrinsic : public MemIntrinsicBase<MemIntrinsic> {
case Intrinsic::memmove:
case Intrinsic::memset:
case Intrinsic::memset_inline:
case Intrinsic::memset_pattern:
case Intrinsic::memcpy_inline:
return true;
default:
@@ -1219,14 +1220,16 @@ class MemIntrinsic : public MemIntrinsicBase<MemIntrinsic> {
}
};

/// This class wraps the llvm.memset and llvm.memset.inline intrinsics.
/// This class wraps the llvm.memset, llvm.memset.inline, and
/// llvm.memset.pattern intrinsics.
class MemSetInst : public MemSetBase<MemIntrinsic> {
public:
// Methods for support type inquiry through isa, cast, and dyn_cast:
static bool classof(const IntrinsicInst *I) {
switch (I->getIntrinsicID()) {
case Intrinsic::memset:
case Intrinsic::memset_inline:
case Intrinsic::memset_pattern:
return true;
default:
return false;
@@ -1249,6 +1252,18 @@ class MemSetInlineInst : public MemSetInst {
}
};

/// This class wraps the llvm.memset.pattern intrinsic.
class MemSetPatternInst : public MemSetInst {
public:
// Methods for support type inquiry through isa, cast, and dyn_cast:
static bool classof(const IntrinsicInst *I) {
return I->getIntrinsicID() == Intrinsic::memset_pattern;
}
static bool classof(const Value *V) {
return isa<IntrinsicInst>(V) && classof(cast<IntrinsicInst>(V));
}
};
Contributor: Why do we need both MemSetPatternIntrinsic and MemSetPatternInst?

asb (author): I'm mirroring the pattern followed by other mem intrinsics, and
although it's not super pretty, having both classes like this (as for the
standard MemSet intrinsics) seems the way that reduces copy and paste of
accessor code.

Contributor: Hm okay, I thought this might only be needed for cases where we
have both atomic and non-atomic variants.

/// This class wraps the llvm.memcpy/memmove intrinsics.
class MemTransferInst : public MemTransferBase<MemIntrinsic> {
public:
@@ -1328,6 +1343,7 @@ class AnyMemIntrinsic : public MemIntrinsicBase<AnyMemIntrinsic> {
case Intrinsic::memmove:
case Intrinsic::memset:
case Intrinsic::memset_inline:
case Intrinsic::memset_pattern:
case Intrinsic::memcpy_element_unordered_atomic:
case Intrinsic::memmove_element_unordered_atomic:
case Intrinsic::memset_element_unordered_atomic:
@@ -1350,6 +1366,7 @@ class AnyMemSetInst : public MemSetBase<AnyMemIntrinsic> {
switch (I->getIntrinsicID()) {
case Intrinsic::memset:
case Intrinsic::memset_inline:
case Intrinsic::memset_pattern:
case Intrinsic::memset_element_unordered_atomic:
return true;
default:
8 changes: 8 additions & 0 deletions llvm/include/llvm/IR/Intrinsics.td
@@ -1003,6 +1003,14 @@ def int_memset_inline
NoCapture<ArgIndex<0>>, WriteOnly<ArgIndex<0>>,
ImmArg<ArgIndex<3>>]>;

// Memset variant that writes a given pattern.
Contributor: Comment what the operands are.

asb (author): Added with inline comments (there's not a totally consistent
pattern for describing args - many intrinsics have no description at all, but
I see some other examples in the file using inline comments for the args).

Contributor: Really we ought to have tablegen mandatory doc strings.
def int_memset_pattern
: Intrinsic<[],
[llvm_anyptr_ty, llvm_anyint_ty, llvm_anyint_ty, llvm_i1_ty],
[IntrWriteMem, IntrArgMemOnly, IntrWillReturn, IntrNoFree, IntrNoCallback,
Contributor: DefaultAttrIntrinsic would hide most of these.

asb (author): I dug through this and unfortunately I can't make use of
DefaultAttrIntrinsic because nosync isn't necessarily true.
https://reviews.llvm.org/D86021 switched over memset to using
DefaultAttrIntrinsic but this was later backed out in a888e49 due to nosync
not applying unconditionally.
NoCapture<ArgIndex<0>>, WriteOnly<ArgIndex<0>>,
ImmArg<ArgIndex<3>>]>;

// FIXME: Add version of these floating point intrinsics which allow non-default
// rounding modes and FP exception handling.

8 changes: 8 additions & 0 deletions llvm/lib/CodeGen/PreISelIntrinsicLowering.cpp
@@ -276,6 +276,13 @@ bool PreISelIntrinsicLowering::expandMemIntrinsicUses(Function &F) const {
Memset->eraseFromParent();
break;
}
case Intrinsic::memset_pattern: {
auto *Memset = cast<MemSetPatternInst>(Inst);
expandMemSetAsLoop(Memset);
Changed = true;
Memset->eraseFromParent();
break;
}
default:
llvm_unreachable("unhandled intrinsic");
}
@@ -294,6 +301,7 @@ bool PreISelIntrinsicLowering::lowerIntrinsics(Module &M) const {
case Intrinsic::memmove:
case Intrinsic::memset:
case Intrinsic::memset_inline:
case Intrinsic::memset_pattern:
Changed |= expandMemIntrinsicUses(F);
break;
case Intrinsic::load_relative:
3 changes: 2 additions & 1 deletion llvm/lib/IR/Verifier.cpp
@@ -5435,7 +5435,8 @@ void Verifier::visitIntrinsicCall(Intrinsic::ID ID, CallBase &Call) {
case Intrinsic::memcpy_inline:
case Intrinsic::memmove:
case Intrinsic::memset:
case Intrinsic::memset_inline: {
case Intrinsic::memset_inline:
case Intrinsic::memset_pattern: {
break;
}
case Intrinsic::memcpy_element_unordered_atomic:
60 changes: 60 additions & 0 deletions llvm/lib/Transforms/Utils/LowerMemIntrinsics.cpp
@@ -456,6 +456,56 @@ static void createMemMoveLoop(Instruction *InsertBefore, Value *SrcAddr,
ElseTerm->eraseFromParent();
}

static void createMemSetPatternLoop(Instruction *InsertBefore, Value *DstAddr,
Value *Count, Value *SetValue,
Align DstAlign, bool IsVolatile) {
BasicBlock *OrigBB = InsertBefore->getParent();
Function *F = OrigBB->getParent();
const DataLayout &DL = F->getDataLayout();

if (DL.isBigEndian())
report_fatal_error("memset.pattern expansion not currently "
"implemented for big-endian targets",
false);
Contributor: Why this check? Your implementation is endianness-independent.
The endian-specific handling is inside the store.

asb (author, Sep 11, 2024): Now removed - this was a holdover from an earlier
implementation. The PPC test now serves as a simple smoke test for the
expansion proceeding as expected on big-endian targets.

if (!isPowerOf2_32(SetValue->getType()->getScalarSizeInBits()))
report_fatal_error("Pattern width for memset_pattern must be a power of 2",
false);
Contributor: This limitation would have to be checked by the IR verifier. I
don't think it's needed though; your code should work fine without it.

asb (author): I've removed this limitation and added test coverage.

Type *TypeOfCount = Count->getType();

BasicBlock *NewBB = OrigBB->splitBasicBlock(InsertBefore, "split");
BasicBlock *LoopBB =
BasicBlock::Create(F->getContext(), "storeloop", F, NewBB);
IRBuilder<> Builder(OrigBB->getTerminator());

Builder.CreateCondBr(
Builder.CreateICmpEQ(ConstantInt::get(TypeOfCount, 0), Count), NewBB,
LoopBB);
OrigBB->getTerminator()->eraseFromParent();

IRBuilder<> LoopBuilder(LoopBB);
PHINode *CurrentDst = LoopBuilder.CreatePHI(DstAddr->getType(), 0);
CurrentDst->addIncoming(DstAddr, OrigBB);
PHINode *LoopCount = LoopBuilder.CreatePHI(TypeOfCount, 0);
LoopCount->addIncoming(Count, OrigBB);

// Create the store instruction for the pattern
LoopBuilder.CreateAlignedStore(SetValue, CurrentDst, DstAlign, IsVolatile);
Contributor: Doesn't this also need the "PartAlign" logic? It's DstAlign on
the first iteration, but afterwards it's the common alignment of DstAlign and
the stride.

asb (author): Good catch, thanks. Logic fixed and test coverage for expected
alignment added.

Value *NextDst = LoopBuilder.CreateInBoundsGEP(
SetValue->getType(), CurrentDst, ConstantInt::get(TypeOfCount, 1));
CurrentDst->addIncoming(NextDst, LoopBB);

Value *NewLoopCount =
LoopBuilder.CreateSub(LoopCount, ConstantInt::get(TypeOfCount, 1));
LoopCount->addIncoming(NewLoopCount, LoopBB);

LoopBuilder.CreateCondBr(
LoopBuilder.CreateICmpNE(NewLoopCount, ConstantInt::get(TypeOfCount, 0)),
LoopBB, NewBB);
}

static void createMemSetLoop(Instruction *InsertBefore, Value *DstAddr,
Value *CopyLen, Value *SetValue, Align DstAlign,
bool IsVolatile) {
@@ -591,6 +641,16 @@ bool llvm::expandMemMoveAsLoop(MemMoveInst *Memmove,
}

void llvm::expandMemSetAsLoop(MemSetInst *Memset) {
if (isa<MemSetPatternInst>(Memset)) {
return createMemSetPatternLoop(
/* InsertBefore */ Memset,
Contributor: Suggested change:
-      /* InsertBefore */ Memset,
+      /* InsertBefore= */ Memset,
/* DstAddr */ Memset->getRawDest(),
/* Count */ Memset->getLength(),
/* SetValue */ Memset->getValue(),
/* Alignment */ Memset->getDestAlign().valueOrOne(),
Memset->isVolatile());
}

createMemSetLoop(/* InsertBefore */ Memset,
/* DstAddr */ Memset->getRawDest(),
/* CopyLen */ Memset->getLength(),