Begin fixing all the broken doctests in `compiler/`
Begins to fix #95994.
All of them pass now, but I've marked 24 of them with `ignore HELP (<explanation>)` (asking for help) because I'm unsure how to get them to work, or whether we should leave them as they are.
There are also a few that I marked `ignore` that could maybe be made to work but seem less important.
Each `ignore` has a rough "reason" for ignoring after it in parentheses (an example is sketched after this list), with
- `(pseudo-rust)` meaning "mostly rust-like but contains foreign syntax"
- `(illustrative)` a somewhat catchall for either a fragment of rust that doesn't stand on its own (like a lone type), or abbreviated rust with ellipses and undeclared types that would get too cluttered if made compile-worthy.
- `(not-rust)` stuff that isn't rust but benefits from the syntax highlighting, like MIR.
- `(internal)` uses `rustc_*` code which would be difficult to make work with the testing setup.
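For a concrete picture, an annotated doctest of this kind looks roughly like the following; the function and its doc text are invented for this example, and only the fence annotation itself reflects the convention described above.

```rust
/// Returns the type of the given item (this function and its documentation
/// are hypothetical; only the `ignore (...)` annotation mirrors the real
/// convention).
///
/// The snippet below still gets Rust syntax highlighting in rustdoc, but the
/// doctest runner skips it; the parenthesized note records why.
///
/// ```ignore (illustrative)
/// let ty: Ty<'tcx> = cx.type_of(def_id);
/// ```
fn example() {}
```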
Those reason notes are applied a bit inconsistently and are messy, though. If that's important I can go through them again and try a more principled approach. When I run `rg '```ignore \(' .` on the repo, there look to be lots of different conventions that other people have used for this sort of thing. I could try unifying them all if that would be helpful.
I'm not sure if there was a better existing way to do this but I wrote my own script to help me run all the doctests and wade through the output. If that would be useful to anyone else, I put it here: https://github.com/Elliot-Roberts/rust_doctest_fixing_tool
Overhaul `MacArgs`
Motivation:
- Clarify some code that I found hard to understand.
- Eliminate one of the three places where `TokenKind::Interpolated` values are created.
r? `@petrochenkov`
The value in `MacArgs::Eq` is currently represented as a `Token`.
Because of `TokenKind::Interpolated`, `Token` can be either a token or
an arbitrary AST fragment. In practice, a `MacArgs::Eq` starts out as a
literal or macro call AST fragment, and then is later lowered to a
literal token. But this is very non-obvious. `Token` is a much more
general type than what is needed.
This commit restricts things, by introducing a new type `MacArgsEqKind`
that is either an AST expression (pre-lowering) or an AST literal
(post-lowering). The downside is that the code is a bit more verbose in
a few places (though also shorter in some others). The benefit is that it
makes much clearer what the possibilities are. Also, it
removes one use of `TokenKind::Interpolated`, taking us a step closer to
removing that variant, which will let us make `Token` impl `Copy` and
remove many "handle Interpolated" code paths in the parser.
Things to note:
- Error messages have improved. Messages like this:
```
unexpected token: `"bug" + "found"`
```
now say "unexpected expression", which makes more sense. Although
arbitrary expressions can exist within tokens thanks to
`TokenKind::Interpolated`, that's not obvious to anyone who doesn't
know compiler internals.
- In `parse_mac_args_common`, we no longer need to collect tokens for
the value expression.
This uses an obviously-placeholder syntax. An RFC would still be needed before this could have any chance at stabilization, and it might be removed at any point.
But I'd really like to have it in nightly at least to ensure it works well with try_trait_v2, especially as we refactor the traits.
This commit rearranges the `match`. The new code avoids testing for
`MacArgs::Eq` twice, at the cost of repeating the `self.print_path()`
call. I think this is worthwhile because it puts the `match` in a more
standard and readable form.
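The general shape of that trade-off, as a schematic sketch (the types, methods, and printing logic below are invented stand-ins, not the actual rustc_ast_pretty code):

```rust
enum MacArgs {
    Empty,
    Eq(String),
}

struct Printer;

impl Printer {
    fn prepare_eq(&mut self) { /* hypothetical Eq-specific setup */ }
    fn print_path(&mut self) { /* print the attribute path */ }
    fn print_value(&mut self, _value: &str) { /* print ` = <value>` */ }

    // Before: the `Eq` case is tested twice, on either side of a shared
    // `print_path` call.
    fn print_attr_before(&mut self, args: &MacArgs) {
        if matches!(args, MacArgs::Eq(_)) {
            self.prepare_eq();
        }
        self.print_path();
        if let MacArgs::Eq(value) = args {
            self.print_value(value);
        }
    }

    // After: a single `match`, repeating the `print_path` call in each arm.
    fn print_attr_after(&mut self, args: &MacArgs) {
        match args {
            MacArgs::Eq(value) => {
                self.prepare_eq();
                self.print_path();
                self.print_value(value);
            }
            MacArgs::Empty => {
                self.print_path();
            }
        }
    }
}
```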
Parse inner attributes on inline const block
According to https://github.com/rust-lang/rust/pull/84414#issuecomment-826150936, inner attributes are intended to be supported *"in all containers for statements (or some subset of statements)"*.
This PR adds inner attribute parsing and pretty-printing for inline const blocks (https://github.com/rust-lang/rust/issues/76001), which contain statements just like an unsafe block or a loop body.
```rust
let _ = const {
#![allow(...)]
let x = ();
x
};
```
It's only needed for macro expansion, not as a general element in the
AST. This commit removes it, adds `NtOrTt` for the parser and macro
expansion cases, and renames the variants in `NamedMatch` to better
match the new type.
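A minimal sketch of the new type's shape, with placeholder stand-ins rather than the compiler's real `Nonterminal` and `TokenTree` types:

```rust
struct Nonterminal; // stand-in: a fully parsed AST fragment
struct TokenTree;   // stand-in: an unparsed token tree

// A matched fragment is either a parsed nonterminal or a raw token tree.
enum NtOrTt {
    Nt(Nonterminal),
    Tt(TokenTree),
}
```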
Factor convenience functions out of main printer implementation
The pretty printer in rustc_ast_pretty has a section of methods commented "Convenience functions to talk to the printer". This PR pulls those out to a separate module. This leaves pp.rs with only the minimal API that is core to the pretty printing algorithm.
I found this separation to be helpful in https://github.com/dtolnay/prettyplease because it makes clear when changes are adding some fundamental new capability to the pretty printer algorithm vs just making it more convenient to call some already existing functionality.
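As a rough standalone illustration of the split (simplified types, not the actual rustc_ast_pretty API), the convenience layer is made of small wrappers built purely on top of the core primitives:

```rust
struct Printer {
    out: String,
}

impl Printer {
    // "Core" primitives, analogous to what stays behind in pp.rs.
    fn word(&mut self, w: &str) {
        self.out.push_str(w);
    }
    fn space(&mut self) {
        self.out.push(' ');
    }

    // A "convenience" helper of the kind pulled out into its own module:
    // no new capability, just a shorthand over existing primitives.
    fn word_space(&mut self, w: &str) {
        self.word(w);
        self.space();
    }
}
```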
The `print_expr` method already places an `ibox(INDENT_UNIT)` around
every expr that gets printed. Some exprs were then using `self.head`
inside of that, which does its own `cbox(INDENT_UNIT)`, resulting in two
levels of indentation:
```
while true {
        stuff;
    }
```
This commit fixes those cases to produce the expected single level of
indentation within every expression containing a block.
```
while true {
    stuff;
}
```
Previously the pretty printer would compute indentation always relative
to whatever column a block begins at, like this:
```
fn demo(arg1: usize,
        arg2: usize);
```
This is never the right thing to do in the dominant contemporary Rust style.
Rustfmt's default and the style used by the vast majority of Rust
codebases is block indentation:
```
fn demo(
    arg1: usize,
    arg2: usize,
);
```
where every indentation level is a multiple of 4 spaces and each level
is indented relative to the indentation of the previous line, not the
position that the block starts in.
Render more readable macro matcher tokens in rustdoc
Follow-up to #92334.
This PR lifts some of the token rendering logic from https://github.com/dtolnay/prettyplease into rustdoc so that even the matchers for which a source code snippet is not available (because they are macro-generated, or any other reason) follow some baseline good assumptions about where the tokens in the macro matcher are appropriate to space.
The below screenshots show an example of the difference using one of the gnarliest macros I could find. Some things to notice:
- In the **before**, notice how a couple places break in between `$(....)`↵`*`, which is just about the worst possible place that it could break.
- In the **before**, the lines that wrapped are weirdly indented by 1 space relative to column 0. In the **after**, we use the typical way of block indenting in Rust syntax, which is to put the open/close delimiters on their own line and indent their contents by 4 spaces relative to the previous line (so 8 spaces relative to column 0, because the matcher itself is indented by 4 relative to the `macro_rules` header).
- In the **after**, macro_rules metavariables like `$tokens:tt` are kept together, which is how just about everybody writing Rust today writes them.
## Before
![Screenshot from 2022-01-14 13-05-53](https://user-images.githubusercontent.com/1940490/149585105-1f182b78-751f-421f-a234-9dbc04fa3bbd.png)
## After
![Screenshot from 2022-01-14 13-06-04](https://user-images.githubusercontent.com/1940490/149585118-d4b52ea7-3e67-4b6e-a12b-31dfb8172f86.png)
r? `@camelid`
A `PrintStackElem` with `pbreak = PrintStackBreak::Fits` always carried a
meaningless `offset = 0` value. We can combine the two types `PrintStackElem`
and `PrintStackBreak` into one `PrintFrame` enum that stores `offset` only
for `Broken` frames.
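Sketched out, the change looks roughly like this (payload types simplified; `Breaks` stands in for the printer's break-mode enum):

```rust
enum Breaks {
    Consistent,
    Inconsistent,
}

// Before: two types, with `offset` stored even when it is meaningless.
struct PrintStackElem {
    offset: isize,
    pbreak: PrintStackBreak,
}
enum PrintStackBreak {
    Fits,
    Broken(Breaks),
}

// After: one enum that carries `offset` only in the `Broken` case.
enum PrintFrame {
    Fits,
    Broken { offset: isize, breaks: Breaks },
}
```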
The pretty printer algorithm involves 2 VecDeques: a ring-buffer of
tokens and a deque of ring-buffer indices. Confusingly, those two deques
were being grown in opposite directions for no good reason. Ring-buffer
pushes would go on the "back" of the ring-buffer (i.e. higher indices)
while scan_stack pushes would go on the "front" (i.e. lower indices).
This commit flips the scan_stack accesses to grow the scan_stack and
ring-buffer in the same direction, so that a push does the same operation
as a `Vec` push, i.e. inserting on the high-index end.
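In `VecDeque` terms, the change amounts to using the high-index end for both kinds of push, as in this simplified illustration (the variable names are stand-ins, not the printer's actual fields):

```rust
use std::collections::VecDeque;

fn main() {
    let mut buf: VecDeque<char> = VecDeque::new();
    let mut scan_stack: VecDeque<usize> = VecDeque::new();

    buf.push_back('a'); // ring-buffer entries were already pushed on this end
    scan_stack.push_back(buf.len() - 1); // previously this was a push_front

    // Both deques now grow toward higher indices, like `Vec::push`.
    assert_eq!(scan_stack.back(), Some(&0));
}
```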
Pretty printer algorithm revamp step 2
This PR follows #92923 as a second chunk of modernizations backported from https://github.com/dtolnay/prettyplease into rustc_ast_pretty.
I've broken this up into atomic commits that hopefully are sensible in isolation. At every commit, the pretty printer is compilable and has runtime behavior that is identical to before and after the PR. None of the refactoring so far changes behavior.
The general theme of this chunk of commits is: the logic in the old pretty printer is doing some very basic things (pushing and popping tokens on a ring buffer) but expressed in a too-low-level way that I found makes it quite complicated/subtle to reason about. There are a number of obvious invariants that are "almost true" -- things like `self.left == self.buf.offset` and `self.right == self.buf.offset + self.buf.data.len()` and `self.right_total == self.left_total + self.buf.data.sum()`. The reason these things are "almost true" is the implementation tends to put updating one side of the invariant unreasonably far apart from updating the other side, leaving the invariant broken while unrelated stuff happens in between. The following code from master is an example of this:
e5e2b0be26/compiler/rustc_ast_pretty/src/pp.rs (L314-L317)
In this code the `advance_right` is reserving an entry into which to write a next token on the right side of the ring buffer, the `check_stack` is doing something totally unrelated to the right boundary of the ring buffer, and the `scan_push` is actually writing the token we previously reserved space for. Much of what this PR is doing is rearranging code to shrink the amount of stuff in between when an invariant is broken to when it is restored, until the whole thing can be factored out into one indivisible method call on the RingBuffer type.
The end state of the PR is that we can entirely eliminate `self.left` (because it's now just equal to `self.buf.offset` always) and `self.right` (because it's equal to `self.buf.offset + self.buf.data.len()` always) and the whole `Token::Eof` state which used to be the value of tokens that have been reserved space for but not yet written.
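A minimal sketch of the kind of indivisible operation this enables, using a simplified stand-in for the compiler's actual ring buffer type:

```rust
use std::collections::VecDeque;

// Simplified stand-in: `offset` is the global index of the first element in
// `data`, so "left" is always `offset` and "right" is always
// `offset + data.len()`; no separately maintained fields are needed.
struct RingBuffer<T> {
    data: VecDeque<T>,
    offset: usize,
}

impl<T> RingBuffer<T> {
    // Push on the right and return the new element's global index, keeping
    // the invariant intact within a single method call.
    fn push(&mut self, value: T) -> usize {
        let index = self.offset + self.data.len();
        self.data.push_back(value);
        index
    }

    // Pop from the left, advancing `offset` so it keeps tracking "left".
    fn advance_left(&mut self) -> Option<T> {
        let value = self.data.pop_front()?;
        self.offset += 1;
        Some(value)
    }
}
```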
Without these changes I found the pretty printer implementation hard to reason about, and I wasn't able to confidently introduce improvements like trailing commas in `prettyplease` until after this refactor. The logic here is 43 years old at this point (Graydon translated it as directly as possible from the 1979 pretty printing paper), and while there are advantages to following the paper as closely as possible, in `prettyplease` I decided that if we're going to adapt the algorithm to work better for Rust syntax, it was worthwhile making it easier to follow than the original.
Move expr- and item-related pretty printing functions to modules
Currently *compiler/rustc_ast_pretty/src/pprust/state.rs* is 2976 lines on master. The `tidy` limit is 3000, which is blocking #92243.
This PR adds a `mod expr;` and `mod item;` to move logic related to those AST nodes out of the single huge file.