[RISCV] Add isel special case for (and (srl X, c2), c1) -> (slli_uw (srli x, c2+c3), c3). #100966
Conversation
Where c1 is a shifted mask with 32 set bits and c3 trailing zeros. Fixes llvm#100936.
@llvm/pr-subscribers-backend-risc-v

Author: Craig Topper (topperc)

Changes: Where c1 is a shifted mask with 32 set bits and c3 trailing zeros. Fixes #100936.

Full diff: https://github.com/llvm/llvm-project/pull/100966.diff

2 Files Affected:
diff --git a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
index eef6ae677ac85..01b2bc08d3ba0 100644
--- a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
@@ -1393,6 +1393,18 @@ void RISCVDAGToDAGISel::Select(SDNode *Node) {
ReplaceNode(Node, SLLI);
return;
}
+ // If we have 32 bits in the mask, we can use SLLI_UW instead of SLLI.
+ if (Trailing > 0 && Leading + Trailing == 32 && C2 + Trailing < XLen &&
+ OneUseOrZExtW && Subtarget->hasStdExtZba()) {
+ SDNode *SRLI = CurDAG->getMachineNode(
+ RISCV::SRLI, DL, VT, X,
+ CurDAG->getTargetConstant(C2 + Trailing, DL, VT));
+ SDNode *SLLI_UW = CurDAG->getMachineNode(
+ RISCV::SLLI_UW, DL, VT, SDValue(SRLI, 0),
+ CurDAG->getTargetConstant(Trailing, DL, VT));
+ ReplaceNode(Node, SLLI_UW);
+ return;
+ }
}
// Turn (and (shl x, c2), c1) -> (slli (srli x, c3-c2), c3) if c1 is a
diff --git a/llvm/test/CodeGen/RISCV/rv64zba.ll b/llvm/test/CodeGen/RISCV/rv64zba.ll
index 61be5ee458e9d..20a0484464018 100644
--- a/llvm/test/CodeGen/RISCV/rv64zba.ll
+++ b/llvm/test/CodeGen/RISCV/rv64zba.ll
@@ -2962,8 +2962,7 @@ define i64 @srli_slliuw_2(i64 %1) {
;
; RV64ZBA-LABEL: srli_slliuw_2:
; RV64ZBA: # %bb.0: # %entry
-; RV64ZBA-NEXT: srli a0, a0, 15
-; RV64ZBA-NEXT: srli a0, a0, 3
+; RV64ZBA-NEXT: srli a0, a0, 18
; RV64ZBA-NEXT: slli.uw a0, a0, 3
; RV64ZBA-NEXT: ret
entry:
@@ -2985,8 +2984,7 @@ define i64 @srli_slliuw_canonical_2(i64 %0) {
;
; RV64ZBA-LABEL: srli_slliuw_canonical_2:
; RV64ZBA: # %bb.0: # %entry
-; RV64ZBA-NEXT: srli a0, a0, 15
-; RV64ZBA-NEXT: srli a0, a0, 3
+; RV64ZBA-NEXT: srli a0, a0, 18
; RV64ZBA-NEXT: slli.uw a0, a0, 3
; RV64ZBA-NEXT: ret
entry:
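As a sanity check (not part of the PR), the rewrite's arithmetic can be exercised over random inputs in Python. The helpers `srli` and `slli_uw` below are illustrative models of the RV64 instructions, under the assumption that `slli.uw` zero-extends the low 32 bits of its source before shifting left:

```python
import random

XLEN = 64
MASK64 = (1 << XLEN) - 1

def srli(x, sh):
    """Model of srli: logical right shift of a 64-bit value."""
    return (x >> sh) & MASK64

def slli_uw(x, sh):
    """Model of Zba slli.uw: zero-extend low 32 bits, then shift left (mod 2^64)."""
    return ((x & 0xFFFFFFFF) << sh) & MASK64

random.seed(0)
for _ in range(10000):
    x = random.getrandbits(64)
    c2 = random.randrange(0, 32)
    c3 = random.randrange(1, 32)        # Trailing > 0, as in the new check
    # c1 is a shifted mask: 32 set bits with c3 trailing zeros,
    # so Leading + Trailing == 32 and C2 + Trailing < XLen hold here.
    c1 = (0xFFFFFFFF << c3) & MASK64
    lhs = srli(x, c2) & c1                       # (and (srl X, c2), c1)
    rhs = slli_uw(srli(x, c2 + c3), c3)          # (slli_uw (srli x, c2+c3), c3)
    assert lhs == rhs
print("ok")
```

The equality holds because shifting right by c2+c3 and then left by c3 clears exactly the c3 trailing-zero positions of c1, while the zero-extension in `slli.uw` supplies the 32-bit AND.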
I don't think this is a good way to fix these kinds of issues; I'd prefer to do a final cleanup for slli+slli/srli+srli pairs instead.

Another special case with slli+slli (sampled from pybind11): https://godbolt.org/z/rc95a37ej
Is that something we should handle in generic DAGCombine? The X86 code is also bad: https://godbolt.org/z/7Tv3Gsc5f
Unfortunately we cannot handle this in DAGCombine, since it happens during instruction selection.
I wouldn't want to expand sign_extend_inreg by itself. But maybe we should expand this into a sra+shl during DAGCombine.
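For illustration (this sketch is not from the PR or the discussion), the suggested expansion can be modeled in Python: `sign_extend_inreg` from i32 on a 64-bit value is equivalent to a left shift by 32 followed by an arithmetic right shift by 32. The function names are hypothetical reference models, not LLVM APIs:

```python
def sext_inreg_i32(x):
    """Reference model: sign_extend_inreg from i32 within a 64-bit register."""
    v = x & 0xFFFFFFFF
    return v - (1 << 32) if v & (1 << 31) else v

def shl_sra(x):
    """Proposed expansion: shl by 32, then arithmetic shift right by 32."""
    MASK64 = (1 << 64) - 1
    v = (x << 32) & MASK64
    if v >= 1 << 63:          # reinterpret as signed 64-bit
        v -= 1 << 64
    return v >> 32            # Python's >> on negatives is arithmetic

for x in [0, 1, 0x7FFFFFFF, 0x80000000, 0xFFFFFFFF, 0x123456789]:
    assert sext_inreg_i32(x) == shl_sra(x)
print("ok")
```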
LGTM
(The discussion about a generic option can continue even if this lands.)
I filed #101040 to track this issue.