Adjust handleCopyOut() to stop trying to write data once it's failed
one time. For typical cases such as out-of-disk-space or broken-pipe,
additional attempts aren't going to do anything but waste time, and
in any case clean truncation of the output seems like a better behavior
than randomly dropping blocks in the middle.
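For illustration only, a minimal client-side sketch of that behavior
(not the actual psql code; the function name and messages here are
made up):

    #include <stdbool.h>
    #include <stdio.h>
    #include <libpq-fe.h>

    /* Drain a COPY OUT, but stop writing after the first output failure. */
    static bool
    copy_out_truncating(PGconn *conn, FILE *copystream)
    {
        bool    ok = true;
        char   *buf;
        int     len;

        while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
        {
            if (ok && fwrite(buf, 1, len, copystream) != (size_t) len)
            {
                fprintf(stderr, "could not write COPY data\n");
                ok = false;     /* keep draining, but write nothing more */
            }
            PQfreemem(buf);
        }
        if (len == -2)          /* connection or server error */
            ok = false;
        /* caller still needs PQgetResult() to collect the final status */
        return ok;
    }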
Also remove dubious (and misleadingly documented) attempt to force our way
out of COPY_OUT state if libpq didn't do that. If we did have a situation
like that, it'd be a bug in libpq and would be better fixed there, IMO.
We can hope that commit fa4440f516 took care
of any such problems, anyway.
Also fix longstanding bug in handleCopyIn(): PQputCopyEnd() only supports
a non-null errormsg parameter in protocol version 3, and will actively
fail if one is passed in version 2. This would've made our attempts
to get out of COPY_IN state after a failure into infinite loops when
talking to pre-7.4 servers.
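In client terms the rule is simply this (sketch, hypothetical helper
name):

    #include <stddef.h>
    #include <libpq-fe.h>

    /* End a COPY IN; only protocol 3 accepts a non-null errormsg. */
    static int
    end_copy_in(PGconn *conn, const char *errormsg)
    {
        if (errormsg != NULL && PQprotocolVersion(conn) < 3)
            errormsg = NULL;    /* pre-7.4 servers: plain COPY END only */
        return PQputCopyEnd(conn, errormsg);
    }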
Back-patch the COPY_OUT state change business back to 9.2 where it was
introduced, and the other two fixes into all supported branches.
Previously we were piggybacking on transaction ID parameters to freeze
multixacts; but since there isn't necessarily any relationship between
rates of Xid and multixact consumption, this turns out not to be a good
idea.
Therefore, we now have multixact-specific freezing parameters:
vacuum_multixact_freeze_min_age: when to remove multis as we come across
them in vacuum (defaults to 5 million, i.e. early in comparison to Xid's
default of 50 million)
vacuum_multixact_freeze_table_age: when to force whole-table scans
instead of scanning only the pages marked as not all-visible in the
visibility map (defaults to 150 million, same as for Xids). Whichever
of the two ages reaches the 150 million mark first will cause a
whole-table scan (sketched below).
autovacuum_multixact_freeze_max_age: when to force emergency,
uninterruptible whole-table scans (defaults to 400 million, double the
value for Xids). This means emergency vacuuming shouldn't happen any
more often than before, unless multixacts are being used very rapidly.
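As a standalone sketch of the whichever-comes-first rule (names below
are illustrative, not the server's actual variables):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Force a whole-table vacuum scan if either the table's Xid age or
     * its multixact age has crossed its *_freeze_table_age threshold.
     */
    static bool
    need_whole_table_scan(uint32_t xid_age, uint32_t mxid_age,
                          uint32_t freeze_table_age,           /* 150000000 */
                          uint32_t multixact_freeze_table_age) /* 150000000 */
    {
        return xid_age >= freeze_table_age ||
               mxid_age >= multixact_freeze_table_age;
    }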
Backpatch to 9.3 where multixacts were made to persist enough to require
freezing. To avoid an ABI break in 9.3, VacuumStmt has a couple of
fields in an unnatural place, and StdRdOptions is split in two so that
the newly added fields can go at the end.
Patch by me, reviewed by Robert Haas, with additional input from Andres
Freund and Tom Lane.
We used the length of the input string, not the de-escaped string, as
the trigger for NAMEDATALEN truncation. AFAICS this would only result
in sometimes printing a phony truncation warning; but it's just luck
that there was no worse problem, since we were violating the API spec
for truncate_identifier(). Per bug #9204 from Joshua Yanovski.
This has been wrong since the Unicode-identifier support was added,
so back-patch to all supported branches.
We have a practice of providing a "bread crumb" trail between the minor
versions where the migration section actually tells you to do something.
Historically that was just plain text, eg, "see the release notes for
9.2.4"; but if you're using a browser or PDF reader, it's a lot nicer
if it's a live hyperlink. So use "<xref>" instead. Any argument against
doing this vanished with the recent decommissioning of plain-text release
notes.
Vik Fearing
In pqSendSome, if the connection is already closed at entry, discard any
queued output data before returning. There is no possibility of ever
sending the data, and anyway this corresponds to what we'd do if we'd
detected a hard error while trying to send(). This avoids possible
indefinite bloat of the output buffer if the application keeps trying
to send data (or even just keeps trying to do PQputCopyEnd, as psql
indeed will).
Because PQputCopyEnd won't transition out of PGASYNC_COPY_IN state
until it's successfully queued the COPY END message, and pqPutMsgEnd
doesn't distinguish a queuing failure from a pqSendSome failure,
this omission allowed an infinite loop in psql if the connection closure
occurred when we had at least 8K queued to send. It might be worth
refactoring so that we can make that distinction, but for the moment
the other changes made here seem to offer adequate defenses.
To guard against other variants of this scenario, do not allow
PQgetResult to return a PGRES_COPY_XXX result if the connection is
already known dead. Make sure it returns PGRES_FATAL_ERROR instead.
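A client-side sketch of the pattern these defenses are meant to support
(hypothetical helper; error handling simplified):

    #include <stdbool.h>
    #include <stdio.h>
    #include <libpq-fe.h>

    /* Finish a COPY IN without looping forever on a dead connection. */
    static bool
    finish_copy_in(PGconn *conn)
    {
        PGresult   *res;
        bool        ok = true;

        /*
         * PQputCopyEnd returns 0 only when the output buffer is full in
         * nonblocking mode; give up retrying once the connection is bad.
         */
        while (PQputCopyEnd(conn, NULL) == 0)
        {
            if (PQstatus(conn) == CONNECTION_BAD)
                break;
        }

        /*
         * With the fix, a dead connection yields PGRES_FATAL_ERROR here
         * rather than another PGRES_COPY_IN that could restart the cycle.
         */
        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
                ok = false;
            }
            PQclear(res);
        }
        return ok;
    }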
Per report from Stephen Frost. Back-patch to all active branches.
In a database that's not yet reached consistency, it's possible that some
segments of a relation are not full-size but are not the last ones either.
Because of the way smgrnblocks() works, asking for a new page with P_NEW
will fill in the last not-full-size segment --- and if that makes it full
size, the apparent EOF of the relation will increase by more than one page,
so that the next P_NEW request will yield a page past the next consecutive
one. This breaks the relation-extension logic in XLogReadBufferExtended,
possibly allowing a page update to be applied to some page far past where
it was intended to go. This appears to be the explanation for reports of
table bloat on replication slaves compared to their masters, and probably
explains some corrupted-slave reports as well.
Fix the loop to check the page number it actually got, rather than merely
Assert()'ing that dead reckoning got it to the desired place. AFAICT,
there are no other places that make assumptions about exactly which page
they'll get from P_NEW.
Problem identified by Greg Stark, though this is not the same as his
proposed patch.
It's been like this for a long time, so back-patch to all supported
branches.
If an error occurs in the foreground (backup) process of pg_basebackup,
and we exit in a controlled way, the background process (streaming
xlog process) would stay around and keep streaming.
This is evidently the default on buildfarm member narwhal, but that
is a pretty ancient Mingw version, and there is reason to think that
more recent versions of GNU ld have this feature turned on by default.
Since we are trying to achieve consistency of link behavior across
all Windows toolchains, let's just make sure here.
This is expected to make it start failing when contrib modules
reference non-PGDLLIMPORT'ed global variables, as the other Windows
build methods do. Aside from the value of consistency, the underlying
implementation of this switch is pretty ugly and not really something
we want to rely on if we have to use PGDLLIMPORT anyway for MSVC.
This should make the MSVC build act more like builds for other platforms,
i.e. backend global variables will be automatically available to loadable
libraries without need for explicit PGDLLIMPORT marking.
Craig Ringer
Get rid of use of dlltool for linking the main postgres executable.
dlltool is obsolete and we'd prefer to stop depending on it.
Also, include $(LDAP_LIBS_FE) in $(libpq_pgport). (It's not clear that
this is really needed, or why it's not a linker bug if it is needed.
But reports are that it's needed on current Cygwin.)
We might want to back-patch this if it works, but first let's see
what the buildfarm thinks.
Marco Atzeri
This results in spurious empty lines in the server log. Instead, add
the newlines only when printing out the --echo output. In some cases,
this was already done, leading to two newlines being printed. Clean
that up as well.
From: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Providing this information as plain text was doubtless worth the trouble
ten years ago, but it seems likely that hardly anyone reads it in this
format anymore. And the effort required to maintain these files (in the
form of extra-complex markup rules in the relevant parts of the SGML
documentation) is significant. So, let's stop doing that and rely solely
on the other documentation formats.
Per discussion, the plain-text INSTALL instructions might still be worth
their keep, so we continue to generate that file.
Rather than remove HISTORY and src/test/regress/README from distribution
tarballs entirely, replace them with simple stub files that tell the reader
where to find the relevant documentation. This is mainly to avoid possibly
breaking packaging recipes that expect these files to exist.
Back-patch to all supported branches, because simplifying the markup
requirements for release notes won't help much unless we do it in all
branches.
WakeupWaiters() is supposed to wake up all LW_WAIT_UNTIL_FREE waiters of
the slot, but the loop incorrectly also woke up the first LW_EXCLUSIVE
waiter, if there were no LW_WAIT_UNTIL_FREE waiters in the queue.
Noted by Andres Freund. This code is new in 9.4, so no backpatching.
Make ftello error-checking consistent across all calls and remove a
bit of ftello-related code which has been #if 0'd out since 2001.
Note that we are not concerned with the ftello() call under
snprintf() failing as it is just building a string to call
exit_horribly() with; printing -1 in such a case is fine.
Rather than resetting errno (or just hoping that it's already clear),
check just the ftello result for < 0 to determine whether there was an
issue.
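In plain C terms the pattern is just (illustrative, not the pg_dump
code itself):

    #include <stdio.h>
    #include <sys/types.h>

    /* Return the current offset, or -1 if it cannot be determined. */
    static long long
    current_offset(FILE *fp)
    {
        off_t   pos = ftello(fp);

        if (pos < 0)            /* no errno inspection required */
            return -1;
        return (long long) pos;
    }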
Oversight by me, pointed out by Tom.
This prevents pg_basebackup from generating excessive output when
dumping large clusters. The status is now updated once per second,
still making it possible to see that there is progress happening,
but limiting the total bandwidth.
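A minimal sketch of the throttling idea (names are made up):

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Print progress at most once per second (and always when forced). */
    static void
    maybe_report_progress(long long done, long long total, bool force)
    {
        static time_t   last_report = 0;
        time_t          now = time(NULL);

        if (!force && now == last_report)
            return;             /* already reported within this second */
        last_report = now;
        fprintf(stderr, "%lld/%lld kB\r", done / 1024, total / 1024);
    }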
Mika Eloranta, reviewed by Sawada Masahiko and Oskari Saarenmaa
When using verbose mode for pg_basebackup, in tar format sent to
stdout, we'd print an uninitialized buffer as the filename.
Reported by Pontus Lundkvist
Improve pg_dump by checking the results of various fgetc() calls which
previously were unchecked, ditto for ftello. Also clean up a couple
of very minor memory leaks by waiting to allocate structures until
after the initial check(s).
Issues spotted by Coverity.
The shimTriConsistentFn, which calls the opclass's consistent function with
all combinations of TRUE/FALSE for any MAYBE argument, modifies the entryRes
array passed by the caller. Change startScanKey to re-initialize it between
each call to accommodate that.
It's actually a bad habit by shimTriConsistentFn to modify its argument. But
the only caller that doesn't already re-initialize the entryRes array was
startScanKey, and it's easy for startScanKey to do so. Add a comment to
shimTriConsistentFn about that.
Note: this does not give a free pass to opclass-provided consistent
functions to modify the entryRes argument; shimTriConsistent assumes that
they don't, even though it does it itself.
While at it, refactor startScanKey to allocate the requiredEntries and
additionalEntries after it knows exactly how large they need to be. Saves a
little bit of memory, and looks nicer anyway.
Per complaint by Tom Lane, buildfarm and the pg_trgm regression test.
If you have a GIN query like "rare & frequent", we currently fetch all the
items that match either rare or frequent, call the consistent function for
each item, and let the consistent function filter out items that only match
one of the terms. However, if we can deduce that "rare" must be present for
the overall qual to be true, we can scan all the rare items, and for each
rare item, skip over to the next frequent item with the same or greater TID.
That greatly speeds up "rare & frequent" type queries.
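As a standalone illustration of the skipping strategy (sorted arrays of
TIDs stand in for the real entry streams, which can additionally seek
within posting trees instead of stepping item by item; all names are
made up):

    #include <stddef.h>

    /*
     * Intersect a small sorted "rare" list with a large sorted
     * "frequent" list: for each rare TID, advance the frequent cursor
     * to the first TID >= it rather than visiting every frequent item.
     */
    static size_t
    intersect_rare_frequent(const unsigned long *rare, size_t nrare,
                            const unsigned long *freq, size_t nfreq,
                            unsigned long *out)
    {
        size_t  i, j = 0, n = 0;

        for (i = 0; i < nrare; i++)
        {
            while (j < nfreq && freq[j] < rare[i])
                j++;            /* skip non-matching frequent items */
            if (j == nfreq)
                break;
            if (freq[j] == rare[i])
                out[n++] = rare[i];
        }
        return n;
    }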
To implement that, introduce the concept of a tri-state consistent function,
where the 3rd value is MAYBE, indicating that we don't know if that term is
present. Operator classes only provide a boolean consistent function, so we
simulate the tri-state consistent function by calling the boolean function
several times, with the MAYBE arguments set to all combinations of TRUE and
FALSE. Testing all combinations is only feasible for a small number of MAYBE
arguments, but it is envisioned that we'll provide a way for operator
classes to provide a native tri-state consistent function, which can be much
more efficient. But that is not included in this patch.
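A standalone sketch of that boolean-to-tri-state shim (made-up names;
the real GIN machinery passes opclass state along as well):

    #include <stdbool.h>

    typedef enum { TRI_FALSE, TRI_TRUE, TRI_MAYBE } TriValue;

    /*
     * Simulate a tri-state consistent check with a boolean one: try
     * every TRUE/FALSE assignment of the MAYBE entries; if all calls
     * agree, that is the answer, otherwise the result is MAYBE.  Only
     * feasible for a handful of MAYBE entries (2^n calls).
     */
    static TriValue
    shim_tri_consistent(bool (*consistent)(const bool *check, int nkeys),
                        const TriValue *check, int nkeys)
    {
        bool            boolcheck[32];
        int             maybepos[32];
        int             nmaybe = 0;
        bool            seen_true = false;
        bool            seen_false = false;
        unsigned int    combo;
        int             i;

        if (nkeys > 32)
            return TRI_MAYBE;   /* sketch handles small key counts only */

        for (i = 0; i < nkeys; i++)
        {
            boolcheck[i] = (check[i] == TRI_TRUE);
            if (check[i] == TRI_MAYBE)
                maybepos[nmaybe++] = i;
        }
        if (nmaybe > 8)
            return TRI_MAYBE;   /* give up rather than try 2^n cases */

        for (combo = 0; combo < (1U << nmaybe); combo++)
        {
            for (i = 0; i < nmaybe; i++)
                boolcheck[maybepos[i]] = (combo >> i) & 1;
            if (consistent(boolcheck, nkeys))
                seen_true = true;
            else
                seen_false = true;
            if (seen_true && seen_false)
                return TRI_MAYBE;
        }
        return seen_true ? TRI_TRUE : TRI_FALSE;
    }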
We were already using that trick for lossy pages, calling the consistent
function with the lossy entry set to TRUE and FALSE. Now that we have the
tri-state consistent function, use it for lossy pages too.
Alexander Korotkov, with a fair amount of refactoring by me.
We may process relcache flush requests during transaction startup or
shutdown. In general it's not terribly safe to do catalog access at those
times, so the code's habit of trying to immediately revalidate unflushable
relcache entries is risky. Although there are no field trouble reports
that are positively traceable to this, we have been able to demonstrate
failure of the assertions recently added in RelationIdGetRelation() and
SearchCatCache(). On the other hand, it seems safe to just postpone
revalidation of the cache entry until we're inside a valid transaction.
The one case where this is questionable is where we're exiting a
subtransaction and the outer transaction is holding the relcache entry open
--- but if we made any significant changes to the rel inside such a
subtransaction, we've got problems anyway. There are mechanisms in place
to prevent that (to wit, locks for cross-session cases and
CheckTableNotInUse() for intra-session cases), so let's trust to those
mechanisms to keep us out of trouble.
Commit 42c80c696e added an
Assert(IsTransactionState()) in SearchCatCache(), to catch
any code that thought it could do a catcache lookup outside
transactions. Extend the same idea to relcache lookups.
These flushes were added in my commit d2896a9ed, which added the btree
logic that keeps a cached copy of the index metapage data in index relcache
entries. The idea was to ensure that other backends would promptly update
their cached copies after a change. However, this is not really necessary,
since _bt_getroot() has adequate defenses against believing a stale root
page link, and _bt_getrootheight() doesn't have to be 100% right.
Moreover, if it were necessary, a relcache flush would be an unreliable way
to do it, since the sinval mechanism believes that relcache flush requests
represent transactional updates, and therefore discards them on transaction
rollback. Therefore, we might as well drop these flush requests and save
the time to rebuild the whole relcache entry after a metapage change.
If we ever try to support in-place truncation of btree indexes, it might
be necessary to revisit this issue so that _bt_getroot() can't get caught
by trying to follow a metapage link to a page that no longer exists.
A possible solution to that is to make use of an smgr, rather than
relcache, inval request to force other backends to discard their cached
metapages. But for the moment this is not worth pursuing.
postgres_fdw tended to say "unknown error" if it tried to execute a command
on an already-dead connection, because some paths in libpq just return a
null PGresult for such cases. Out-of-memory might result in that, too.
To fix, pass the PGconn to pgfdw_report_error, and look at its
PQerrorMessage() string if we can't get anything out of the PGresult.
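The idea in client terms, as a hedged sketch (invented helper name; the
real pgfdw_report_error does considerably more):

    #include <stddef.h>
    #include <libpq-fe.h>

    /*
     * Prefer the PGresult's error text, but if the result is missing or
     * silent (dead connection, out of memory), fall back to the
     * connection's own message.
     */
    static const char *
    best_error_message(PGconn *conn, PGresult *res)
    {
        const char *msg = res ? PQresultErrorMessage(res) : NULL;

        if (msg == NULL || msg[0] == '\0')
            msg = PQerrorMessage(conn);
        if (msg == NULL || msg[0] == '\0')
            msg = "unknown error";
        return msg;
    }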
Also, fix the transaction-exit logic to reliably drop a dead connection.
It was attempting to do that already, but it assumed that only connection
cache entries with xact_depth > 0 needed to be examined. The folly in that
is that if we fail while issuing START TRANSACTION, we'll not have bumped
xact_depth. (At least for the case I was testing, this fix masks the
other problem; but it still seems like a good idea to have the PGconn
fallback logic.)
Per investigation of bug #9087 from Craig Lucas. Backpatch to 9.3 where
this code was introduced.
Commit a5ff502fce was a brick shy of a load
in the backend lexer too, not just psql. Per further testing of bug #9068.
In passing, improve related comments.