rust/library/core/benches
Scott McMurray 8ca47d7ae4 Stop manually SIMDing in swap_nonoverlapping
Like I previously did for `reverse`, this leaves it to LLVM to pick how to vectorize the swap, since it can choose a chunk size suited to the target rather than the fixed "32 bytes always" approach we currently have.
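
Roughly, the new shape is just a plain per-element swap loop that the auto-vectorizer can widen however suits the target. The sketch below is a simplified illustration under that assumption (hypothetical helper name, not the actual library code):

```rust
use core::ptr;

/// Minimal sketch: swap `count` elements one at a time, with no manual
/// SIMD, and let LLVM's auto-vectorizer pick the chunk width.
///
/// SAFETY: `x` and `y` must each be valid for reads and writes of
/// `count` elements, and the two regions must not overlap.
unsafe fn swap_nonoverlapping_simple<T>(x: *mut T, y: *mut T, count: usize) {
    unsafe {
        for i in 0..count {
            let a = ptr::read(x.add(i));
            ptr::write(x.add(i), ptr::read(y.add(i)));
            ptr::write(y.add(i), a);
        }
    }
}
```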

It does still need logic to type-erase where appropriate, though: while LLVM is now smart enough to vectorize over slices of things like `[u8; 4]`, it fails to do so over slices of `[u8; 3]`.
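
One way to picture that type-erasure step (again a hedged sketch with hypothetical names; the real dispatch condition may differ): when `T`'s size is a multiple of the word size and its alignment is at least word alignment, swap the same bytes as word-sized chunks instead, a shape LLVM vectorizes reliably.

```rust
use core::mem::{align_of, size_of, MaybeUninit};
use core::ptr;

/// Hypothetical sketch of the type-erasure dispatch, not the real code.
///
/// SAFETY: `x` and `y` must each be valid for reads and writes of
/// `count` elements, and the two regions must not overlap.
unsafe fn swap_nonoverlapping_erased<T>(x: *mut T, y: *mut T, count: usize) {
    // The same per-element loop as in the previous sketch, generic over
    // the (possibly erased) element type.
    unsafe fn swap_loop<U>(x: *mut U, y: *mut U, count: usize) {
        unsafe {
            for i in 0..count {
                let a = ptr::read(x.add(i));
                ptr::write(x.add(i), ptr::read(y.add(i)));
                ptr::write(y.add(i), a);
            }
        }
    }

    if size_of::<T>() % size_of::<usize>() == 0
        && align_of::<T>() >= align_of::<usize>()
    {
        // Same bytes, but word-sized chunks. Read as `MaybeUninit<usize>`
        // rather than `usize`, since `T`'s padding bytes (if any) may be
        // uninitialized.
        let words = count * (size_of::<T>() / size_of::<usize>());
        unsafe {
            swap_loop(
                x.cast::<MaybeUninit<usize>>(),
                y.cast::<MaybeUninit<usize>>(),
                words,
            )
        }
    } else {
        // Layout rules out word-sized chunks; swap element by element
        // and rely on LLVM, which still vectorizes e.g. `[u8; 4]`.
        unsafe { swap_loop(x, y, count) }
    }
}
```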

As a bonus, this also means one no longer gets the spurious `memcpy`s at the end of swapping a slice of `__m256`s: <https://rust.godbolt.org/z/joofr4v8Y>
2022-02-21 00:54:02 -08:00
ascii
char Add two more benchmarks for strictly ASCII and non-ASCII cases 2021-02-26 11:42:59 -06:00
hash
num Cosmetic fixes. 2021-09-09 20:06:46 +02:00
str Respond to review feedback, and improve implementation somewhat 2022-02-05 11:15:18 -08:00
any.rs
ascii.rs Unify way to flip 6th bit. (Same assembly generated) 2021-02-08 12:21:36 +00:00
fmt.rs move core::hint::black_box under its own feature gate 2021-04-25 11:08:12 +02:00
iter.rs Use HTTPS links where possible 2021-06-23 16:26:46 -04:00
lib.rs Auto merge of #88788 - falk-hueffner:speedup-int-log10-branchless, r=joshtriplett 2021-10-12 03:18:54 +00:00
ops.rs
pattern.rs
slice.rs Stop manually SIMDing in swap_nonoverlapping 2022-02-21 00:54:02 -08:00
str.rs Optimize core::str::Chars::count 2022-02-05 11:15:17 -08:00