Add VecDeque::extend from TrustedLen specialization
Continuation of #95904
Inspired by how [`VecDeque::copy_slice` works](c08b235a5c/library/alloc/src/collections/vec_deque/mod.rs (L437-L454)).
## Benchmarks
Before
```
test vec_deque::bench_extend_chained_bytes ... bench: 1,026 ns/iter (+/- 17)
test vec_deque::bench_extend_chained_trustedlen ... bench: 1,024 ns/iter (+/- 40)
test vec_deque::bench_extend_trustedlen ... bench: 637 ns/iter (+/- 693)
```
After
```
test vec_deque::bench_extend_chained_bytes ... bench: 828 ns/iter (+/- 24)
test vec_deque::bench_extend_chained_trustedlen ... bench: 25 ns/iter (+/- 1)
test vec_deque::bench_extend_trustedlen ... bench: 21 ns/iter (+/- 0)
```
## Why do it this way
https://rust.godbolt.org/z/15qY1fMYh
The Compiler Explorer example shows that "just" removing the capacity check, like the [`Vec` `TrustedLen` specialization](c08b235a5c/library/alloc/src/vec/spec_extend.rs (L22-L58)) does, wouldn't have been enough for `VecDeque`: `wrap_add` would still have greatly limited what LLVM could do while optimizing.
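For context, here is a minimal sketch of the specialization side of this approach, using nightly features and hypothetical names, with a plain `Vec` standing in for the ring buffer; the real change additionally writes straight into the deque's head/tail slices so that no per-element `wrap_add` is needed:

```rust
#![feature(specialization, trusted_len)]
#![allow(incomplete_features)]

use std::iter::TrustedLen;

// `Vec` stands in for the real ring buffer here.
struct Ring<T> {
    buf: Vec<T>,
}

trait SpecExtend<I> {
    fn spec_extend(&mut self, iter: I);
}

impl<T, I: Iterator<Item = T>> SpecExtend<I> for Ring<T> {
    default fn spec_extend(&mut self, iter: I) {
        // General path: capacity is re-checked for every element.
        iter.for_each(|item| self.buf.push(item));
    }
}

impl<T, I: TrustedLen<Item = T>> SpecExtend<I> for Ring<T> {
    fn spec_extend(&mut self, iter: I) {
        if let (low, Some(high)) = iter.size_hint() {
            // A TrustedLen iterator's size hint is exact, so one reservation
            // up front removes the per-element capacity checks (and, in the
            // real deque, the per-element `wrap_add`).
            debug_assert_eq!(low, high);
            self.buf.reserve(high);
            iter.for_each(|item| self.buf.push(item));
        } else {
            // No upper bound means more than usize::MAX elements; fall back.
            iter.for_each(|item| self.buf.push(item));
        }
    }
}

fn main() {
    let mut ring = Ring { buf: Vec::new() };
    // Ranges are TrustedLen, so this takes the specialized path.
    ring.spec_extend(0u32..10);
    assert_eq!(ring.buf.len(), 10);
}
```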
---
r? `@the8472`
Batch proc_macro RPC for TokenStream iteration and combination operations
This is the first part of #86822, split off as requested in https://github.com/rust-lang/rust/pull/86822#pullrequestreview-1008655452. It reduces the number of RPC calls required for common operations such as iterating over and concatenating TokenStreams.
btree: avoid forcing the allocator to be a reference
The previous code forces the actual allocator used to be some `&A`. This generalizes the code to allow any `A: Copy`. If people truly want to use a reference, they can use `&A` themselves.
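As a rough illustration (hypothetical types, using the unstable `allocator_api`): storing the allocator by value only needs `A: Allocator + Copy`, and callers who really want a reference keep working because `&A` is itself `Copy` and implements `Allocator`.

```rust
#![feature(allocator_api)]
#![allow(dead_code)]

use std::alloc::{Allocator, Global};

// Before (conceptually): the internal handle types forced callers into `&A`.
struct HandleByRef<'a, A: Allocator> {
    alloc: &'a A,
}

// After: any `A: Allocator + Copy` can be stored by value.
struct HandleByValue<A: Allocator + Copy> {
    alloc: A,
}

fn main() {
    // `Global` is a zero-sized `Copy` allocator, so it is stored directly.
    let _by_value = HandleByValue { alloc: Global };

    // A reference still works: `&A: Allocator + Copy` whenever `A: Allocator`.
    let global = Global;
    let _by_ref = HandleByValue { alloc: &global };
}
```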
Fixes https://github.com/rust-lang/rust/issues/98176
Rollup of 5 pull requests
Successful merges:
- #97803 (Impl Termination for Infallible and then make the Result impls of Termination more generic)
- #97828 (Allow configuring where artifacts are downloaded from)
- #98150 (Emscripten target: replace -g4 with -g, and -g3 with --profiling-funcs)
- #98195 (Fix rustdoc json primitive handling)
- #98205 (Remove a possible unnecessary assignment)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Remove a possible unnecessary assignment
The reference issue has been closed (the feature has been stabilized)
and things work fine without it, it seems.
Signed-off-by: Yuki Okushi <jtitor@2k36.org>
Emscripten target: replace -g4 with -g, and -g3 with --profiling-funcs
Emscripten prints the following warning:
```
emcc: warning: please replace -g4 with -gsource-map [-Wdeprecated]
```
`@sbc100`
Allow configuring where artifacts are downloaded from
Bootstrap has support for downloading prebuilt LLVM and rustc artifacts to speed up local builds, but that currently works only for users working on `rust-lang/rust`. Forks of the repository (for example Ferrocene) might have different URLs to download artifacts from, or might use a different email address on merge commits, breaking both LLVM and rustc artifact downloads.
This PR refactors bootstrap to load the download URLs and other constants from `src/stage0.json`, allowing downstream forks to tweak those values. It also future-proofs the download code to easily allow forks to add their own custom protocols (like `s3://`).
This PR is best reviewed commit-by-commit.
Impl Termination for Infallible and then make the Result impls of Termination more generic
This allows things like `Result<ExitCode, E>` to 'just work'
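A small usage sketch of what this enables (the error type and exit code here are arbitrary): with the generalized impl, roughly `impl<T: Termination, E: Debug> Termination for Result<T, E>`, `main` can return `Result<ExitCode, E>` and the `Ok` exit code is used directly.

```rust
use std::process::ExitCode;

fn main() -> Result<ExitCode, std::io::Error> {
    let succeeded = do_work()?;
    // The ExitCode in the Ok case becomes the process exit status.
    Ok(if succeeded { ExitCode::SUCCESS } else { ExitCode::from(2) })
}

fn do_work() -> Result<bool, std::io::Error> {
    Ok(true)
}
```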
ctfe: limit hashing of big const allocations when interning
Const allocations are only hashed for interning. However, they can be large, making the hashing expensive, especially since it uses `FxHash`, which is better suited to short keys than to potentially big buffers like the allocation's actual bytes and the associated 1/8th-sized `InitMask`.
We can partially hash these fields when they're large: hash the length plus the head and tail of these buffers, limiting possible collisions while avoiding most of the hashing work.
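The idea, sketched with made-up thresholds (the real cutoffs and hasher live in the compiler):

```rust
use std::hash::{Hash, Hasher};

const FULL_HASH_LIMIT: usize = 256;
const PARTIAL_CHUNK: usize = 64;

fn hash_bytes_partially<H: Hasher>(bytes: &[u8], state: &mut H) {
    if bytes.len() <= FULL_HASH_LIMIT {
        // Small buffers: hash everything, as before.
        bytes.hash(state);
    } else {
        // Large buffers: hash the length plus a fixed-size head and tail.
        // The length distinguishes buffers that happen to share both ends.
        bytes.len().hash(state);
        bytes[..PARTIAL_CHUNK].hash(state);
        bytes[bytes.len() - PARTIAL_CHUNK..].hash(state);
    }
}

fn main() {
    use std::collections::hash_map::DefaultHasher;
    let mut hasher = DefaultHasher::new();
    hash_bytes_partially(&vec![0u8; 4096], &mut hasher);
    println!("{:x}", hasher.finish());
}
```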
r? `@ghost`
Rollup of 5 pull requests
Successful merges:
- #95392 (std: Stabilize feature try_reserve_2)
- #97798 (Hide irrelevant lines in suggestions to allow for suggestions that are far from each other to be shown)
- #97844 (Windows: No panic if function not (yet) available)
- #98013 (Subtype FRU fields first in `type_changing_struct_update`)
- #98191 (Remove the rest of unnecessary `to_string`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Subtype FRU fields first in `type_changing_struct_update`
This fixes a subtle bug that `type_changing_struct_update` introduced, where it would no longer coerce the base expr correctly. I actually think this code is easier to understand now, too.
r? `@lcnr` since you reviewed the last one
Windows: No panic if function not (yet) available
In some situations (e.g. #97814) it is possible for required functions to be called before they've had a chance to be loaded. Therefore, we make it possible to recover from this situation simply by looking at error codes.
`@rustbot` label +O-windows
Hide irrelevant lines in suggestions to allow for suggestions that are far from each other to be shown
This is an attempt to fix suggestions where one part is 6 or more lines away from the first. I've noticed "the problem" (of not showing some parts of the suggestion) here: https://github.com/rust-lang/rust/pull/97759#discussion_r889689230.
I'm not sure about the implementation (this big closure is just bad and makes already complicated code even more so), but I want to at least discuss the result.
Here is an example of how this changes the output:
Before:
```text
help: consider enclosing expression in a block
|
3 ~ 'l: { match () { () => break 'l,
4 |
5 |
6 |
7 |
8 |
...
```
After:
```text
help: consider enclosing expression in a block
|
3 ~ 'l: { match () { () => break 'l,
4 |
...
31|
32~ } };
|
```
r? `@estebank`
`@rustbot` label +A-diagnostics +A-suggestion-diagnostics
`BitSet` related perf improvements
This commit makes two changes:
1. Changes `MaybeLiveLocals` to use `ChunkedBitSet`
2. Overrides the `fold` method for the iterator for `ChunkedBitSet`
I have local benchmarks verifying that each of these changes individually yields significant perf improvements to #96451. I'm hoping this will be true outside of that context too. If that is not the case, I'll try to gate things on where they help as needed.
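To illustrate the second change, here is a sketch (hypothetical types, not the rustc code) of why a custom `fold` pays off for a chunked bit set: internal iteration walks whole chunks directly, instead of re-deriving the position from the iterator state on every `next` call.

```rust
struct ChunkedOnes<'a> {
    chunks: &'a [u64],
    chunk_idx: usize,
    current: u64,
}

impl<'a> ChunkedOnes<'a> {
    fn new(chunks: &'a [u64]) -> Self {
        ChunkedOnes {
            chunks,
            chunk_idx: 0,
            current: chunks.first().copied().unwrap_or(0),
        }
    }
}

impl Iterator for ChunkedOnes<'_> {
    type Item = usize;

    fn next(&mut self) -> Option<usize> {
        loop {
            if self.current != 0 {
                let bit = self.current.trailing_zeros() as usize;
                self.current &= self.current - 1; // clear the lowest set bit
                return Some(self.chunk_idx * 64 + bit);
            }
            self.chunk_idx += 1;
            self.current = *self.chunks.get(self.chunk_idx)?;
        }
    }

    // Internal iteration: process chunk after chunk with plain loops, without
    // round-tripping through the iterator state on every element.
    fn fold<B, F: FnMut(B, usize) -> B>(self, init: B, mut f: F) -> B {
        let mut acc = init;
        for (i, &chunk) in self.chunks.iter().enumerate().skip(self.chunk_idx) {
            let mut word = if i == self.chunk_idx { self.current } else { chunk };
            while word != 0 {
                let bit = word.trailing_zeros() as usize;
                word &= word - 1;
                acc = f(acc, i * 64 + bit);
            }
        }
        acc
    }
}

fn main() {
    let chunks = [0b1010u64, 0, 1 << 63];
    let v: Vec<usize> = ChunkedOnes::new(&chunks).collect();
    assert_eq!(v, [1, 3, 191]);
    let sum: usize = ChunkedOnes::new(&chunks).fold(0, |a, b| a + b);
    assert_eq!(sum, 195);
}
```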
r? `@nnethercote`, who I believe was working on closely related things; cc `@tmiasko` because of the destprop PR.
Move `finish` out of the `Encoder` trait.
This simplifies things, but requires making `CacheEncoder` non-generic.
(This was previously merged as commit 4 in #94732 and then was reverted
in #97905 because it caused a perf regression.)
r? `@ghost`
This is an experimental patch to try to reduce the codegen complexity of
TokenStream's FromIterator and Extend implementations for downstream
crates, by moving the core logic into a helper type. This might help
improve build performance of crates which depend on proc_macro as
iterators are used less, and the compiler may take less time to do
things like attempt specializations or other iterator optimizations.
The change intentionally sacrifices some optimization opportunities,
such as using the specializations for collecting iterators derived from
Vec::into_iter() into Vec.
This is one of the simpler potential approaches to reducing the amount
of code generated in crates depending on proc_macro, so it seems worth
trying before other more-involved changes.
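A minimal sketch of this shape, with hypothetical names (`Stream`, `ConcatHelper`) rather than the real proc_macro types: the generic `FromIterator` front end only loops and forwards, while the concatenation logic is compiled once in the helper.

```rust
struct Stream(Vec<u32>);

struct ConcatHelper {
    parts: Vec<Stream>,
}

impl ConcatHelper {
    fn with_capacity(n: usize) -> Self {
        ConcatHelper { parts: Vec::with_capacity(n) }
    }

    fn push(&mut self, stream: Stream) {
        self.parts.push(stream);
    }

    // All of the concatenation work lives here and is monomorphized exactly
    // once, no matter how many iterator types downstream crates collect from.
    fn build(self) -> Stream {
        let mut out = Vec::new();
        for part in self.parts {
            out.extend(part.0);
        }
        Stream(out)
    }
}

impl FromIterator<Stream> for Stream {
    fn from_iter<I: IntoIterator<Item = Stream>>(iter: I) -> Stream {
        let iter = iter.into_iter();
        // The only code instantiated per iterator type is this thin loop.
        let mut helper = ConcatHelper::with_capacity(iter.size_hint().0);
        iter.for_each(|stream| helper.push(stream));
        helper.build()
    }
}

fn main() {
    let streams = vec![Stream(vec![1, 2]), Stream(vec![3])];
    let combined: Stream = streams.into_iter().collect();
    assert_eq!(combined.0, vec![1, 2, 3]);
}
```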
This significantly reduces the cost of common interactions with TokenStream
when running with the CrossThread execution strategy, by reducing the number of
RPC calls required.
Add `#[inline]` to small fns of futex `RwLock`
The important methods like `read` and `write` were already inlined,
which can propagate all the way to inlining in user code, but these
small state functions were left behind as normal calls. They should
almost always be inlined as well, as they're just a few instructions.
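For illustration only, with a made-up bit layout rather than the actual futex state encoding: helpers this small compile to a couple of instructions, so `#[inline]` lets them vanish into callers, including ones in other crates.

```rust
const WRITE_LOCKED: u32 = 1;
const READERS_SHIFT: u32 = 1;

#[inline]
fn is_write_locked(state: u32) -> bool {
    state & WRITE_LOCKED != 0
}

#[inline]
fn reader_count(state: u32) -> u32 {
    state >> READERS_SHIFT
}

fn main() {
    let state = 3 << READERS_SHIFT; // three readers, not write-locked
    assert!(!is_write_locked(state));
    assert_eq!(reader_count(state), 3);
}
```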
Rollup of 5 pull requests
Successful merges:
- #97377 (Do not suggest adding semicolon/changing delimiters for macros in item position that originates in macros)
- #97675 (Make `std::mem::needs_drop` accept `?Sized`)
- #98118 (Test NLL fix of bad lifetime inference for reference captured in closure.)
- #98166 (Add rustdoc-json regression test for #98009)
- #98169 (Keyword docs: Link to wikipedia article for dynamic dispatch)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Test NLL fix of bad lifetime inference for reference captured in closure.
This came up as a use-case for the `thread::scope` API that only compiles successfully since `feature(nll)` was stabilized recently.
Closes #93203, which had been re-opened to track adding this very test case.
Make `std::mem::needs_drop` accept `?Sized`
This change attempts to make `needs_drop` work with types like `[u8]` and `str`.
This enables code in types like `Arc<T>` that was not possible before, such as https://github.com/rust-lang/rust/pull/97676.
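A small sketch of what the relaxed bound allows, assuming the change has landed:

```rust
use std::mem::needs_drop;

fn main() {
    // With the `?Sized` bound, unsized types can be queried directly.
    assert!(!needs_drop::<str>());
    assert!(!needs_drop::<[u8]>());
    assert!(needs_drop::<[String]>());
}
```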
Compile `unicode-normalization` faster
Various optimizations and cleanups aimed at improving compilation of `unicode-normalization`, which is notable for having several very large `match`es with many char ranges.
Best reviewed one commit at a time.
r? `@oli-obk`