We can't really fix the problem that the result is defined to depend on
random(), so it is still going to fail the "unstable input conversion"
test in parse_type.c. However, we can at least satisfy valgrind. (It
looks like this code used to be valgrind-clean, actually, until somebody
did a careless s/strncpy/strlcpy/g on it.)
In passing, let's just make real sure that chkpass_out doesn't overrun
its output buffer.
No need for backpatch, I think, since this is just to satisfy debugging
tools.
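For illustration, a minimal sketch of the hazard (not the actual chkpass
code): strncpy() copies exactly the byte count given and never reads
further, while strlcpy() also scans the source for its terminating NUL,
so running it over a partially-initialized buffer makes Memcheck flag
reads of uninitialized bytes.

    #include <stdlib.h>
    #include <string.h>

    static const char a64[] =
        "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

    /* Build a two-character salt; dst must have room for 3 bytes. */
    static void
    make_salt(char *dst)
    {
        char salt[2];

        salt[0] = a64[random() & 0x3f];
        salt[1] = a64[random() & 0x3f];

        memcpy(dst, salt, 2);   /* bounded copy: no hunt for a NUL */
        dst[2] = '\0';          /* terminate the destination explicitly */
    }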
Asif Naeem
To get CRC functionality in a client program, you now need to link with
libpgcommon instead of libpgport. The CRC code has nothing to do with
portability, so libpgcommon is a better home. (libpgcommon didn't exist
when pg_crc.c was originally moved to src/port.)
Remove the possibility of getting CRC functionality just by #including
pg_crc_tables.h. I'm not aware of any extensions that actually did that and
couldn't simply link with libpgcommon.
This also moves the pg_crc.h header file from src/include/utils to
src/include/common, which will require changes to any external programs
that currently do #include "utils/pg_crc.h". That seems acceptable, as
include/common is clearly the right home for it now, and the change needed
to any such programs is trivial.
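Concretely, the change needed in an affected program is just the include
path and the library linked:

    /* before this commit: */
    #include "utils/pg_crc.h"       /* and link with -lpgport */

    /* after this commit: */
    #include "common/pg_crc.h"      /* and link with -lpgcommon */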
Commit 13629df changed the metaphone() function to return an empty string on
empty input, but it left the old error message in place. It's now dead code.
Michael Paquier, per Coverity warning.
This function uses uninitialized stack and heap buffers as supplementary
entropy sources. Mark them so Memcheck will not complain. Back-patch
to 9.4, where Valgrind Memcheck cooperation first appeared.
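A minimal sketch of the marking, using the standard Memcheck client
request (the entropy consumer is hypothetical):

    #include <stddef.h>
    #include <valgrind/memcheck.h>

    extern void mix_into_pool(const unsigned char *buf, size_t len);

    static void
    add_noise(void)
    {
        unsigned char noise[64];    /* deliberately uninitialized: used
                                     * only as supplementary entropy */

        /* Tell Memcheck these bytes are meant to be read as-is, so it
         * stops reporting uses of uninitialized memory. */
        VALGRIND_MAKE_MEM_DEFINED(noise, sizeof(noise));

        mix_into_pool(noise, sizeof(noise));
    }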
Marko Tiikkaja
This covers alterations to buffer sizing and zeroing made between imath
1.3 and imath 1.20. Valgrind Memcheck identified the buffer overruns
and reliance on uninitialized data; their exploit potential is unknown.
Builds specifying --with-openssl are unaffected, because they use the
OpenSSL BIGNUM facility instead of imath. Back-patch to 9.0 (all
supported versions).
Security: CVE-2015-0243
Most callers pass a stack buffer. The ensuing stack smash can crash the
server, and we have not ruled out the viability of attacks that lead to
privilege escalation. Back-patch to 9.0 (all supported versions).
Marko Tiikkaja
Security: CVE-2015-0243
Coverity points out that mdc_finish returns a pointer to a local buffer
(which of course is gone as soon as the function returns), leaving open
a risk of misbehaviors possibly as bad as a stack overwrite.
In reality, the only possible call site is in process_data_packets()
which does not examine the returned pointer at all. So there's no
live bug, but nonetheless the code is confusing and risky. Refactor
to avoid the issue by letting process_data_packets() call mdc_finish()
directly instead of going through the pullf_read() API.
Although this is only cosmetic, it seems good to back-patch so that
the logic in pgp-decrypt.c stays in sync across all branches.
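The hazardous pattern, reduced to its essentials (illustrative, not the
pgcrypto code):

    extern void compute_digest(unsigned char *out);

    static unsigned char *
    digest_finish_bad(void)
    {
        unsigned char hash[20];     /* lives on this function's stack */

        compute_digest(hash);
        return hash;                /* dangling: the storage disappears
                                     * as soon as the function returns */
    }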
Marko Kreen
connectby() didn't adequately check that the constructed SQL query returns
what it's expected to; in fact, since commit 08c33c426b it wasn't
checking that at all. This could result in a null-pointer-dereference
crash if the constructed query returns only one column instead of the
expected two. Less excitingly, it could also result in surprising data
conversion failures if the constructed query returned values that were
not I/O-conversion-compatible with the types specified by the query
calling connectby().
In all branches, insist that the query return at least two columns;
this seems like a minimal sanity check that can't break any reasonable
use-cases.
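In sketch form, the minimal check amounts to no more than this (error
wording illustrative; tupdesc describes the constructed query's result):

    if (tupdesc->natts < 2)
        ereport(ERROR,
                (errcode(ERRCODE_DATATYPE_MISMATCH),
                 errmsg("query must return at least two columns")));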
In HEAD, insist that the constructed query return the types specified by
the outer query, including checking for typmod incompatibility, which the
code never did even before it got broken. This is to hide the fact that
the implementation does a conversion to text and back; someday we might
want to improve that.
In back branches, leave that alone, since adding a type check in a minor
release is more likely to break things than make people happy. Type
inconsistencies will continue to work so long as the actual type and
declared type are I/O representation compatible, and otherwise will fail
the same way they used to.
Also, in all branches, be on guard for NULL results from the constructed
query, which formerly would cause null-pointer dereference crashes.
We now print the row with the NULL but don't recurse down from it.
In passing, get rid of the rather pointless idea that
build_tuplestore_recursively() should return the same tuplestore that's
passed to it.
Michael Paquier, adjusted somewhat by me
Benchmarks have shown that aligning the buffer descriptor array to
cache lines is important for scalability, especially on bigger,
multi-socket machines.
Currently the array sometimes happens to be aligned, purely by
happenstance, depending on how large previous shared memory allocations
were. That can lead to wildly varying performance results after minor
configuration changes.
In addition to aligning the start of descriptor array, also force the
size of individual descriptors to be of a common cache line size (64
bytes). That happens to already be the case on 64bit platforms, but
this way we can change the struct BufferDesc more easily.
As the alignment primarily matters in highly concurrent workloads
which probably all are 64bit these days, and the space wastage of
element alignment would be a bit more noticeable on 32bit systems, we
don't force the stride to be cacheline sized on 32bit platforms for
now. If somebody does actual performance testing, we can reevaluate
that decision by changing the definition of BUFFERDESC_PADDED_SIZE.
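A sketch of the padding arrangement, using the usual union trick (with
BUFFERDESC_PADDED_SIZE as named above, 64 on 64bit platforms):

    typedef union BufferDescPadded
    {
        BufferDesc  bufferdesc;
        char        pad[BUFFERDESC_PADDED_SIZE];
    } BufferDescPadded;

An array of BufferDescPadded then gives every descriptor a full
cache-line stride regardless of sizeof(BufferDesc).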
Discussion: 20140202151319.GD32123@awork2.anarazel.de
Per discussion with Bruce Momjian, Tom Lane, Robert Haas, and Peter
Geoghegan.
gist_poly_compress() and gist_circle_compress() checked for a NULL-pointer
key argument, but that was dead code; the gist code never passes a
NULL pointer to the "compress" method.
This commit also removes a documentation note added in commit a0a3883,
about doing NULL-pointer checks in the "compress" method. It was added
based on the fact that some implementations were doing NULL-pointer
checks, but those checks were unnecessary in the first place.
The NULL-pointer check in gbt_var_same() function was also unnecessary.
The arguments to the "same" method come from the "compress", "union", or
"picksplit" methods, but none of them return a NULL pointer.
None of this is to be confused with SQL NULL values. Those are dealt with
by the gist machinery, and are never passed to the GiST opclass methods.
Michael Paquier
Some fields of the sinfo struct are modified within PG_TRY and then
referenced within PG_CATCH, so as with the recent patch to async.c, "volatile"
is necessary for strict POSIX compliance; and that propagates to a couple
of subroutines as well as materializeQueryResult() itself. I think the
risk of actual issues here is probably higher than in async.c, because
storeQueryResult() is likely to get inlined into materializeQueryResult(),
leaving the compiler free to conclude that its stores into sinfo fields are
dead code.
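A sketch of why the qualifier matters (field names illustrative):

    volatile storeInfo sinfo;   /* modified in PG_TRY, read in PG_CATCH */

    PG_TRY();
    {
        storeQueryResult(&sinfo);   /* stores into sinfo fields */
    }
    PG_CATCH();
    {
        /* We arrive here via longjmp().  Without "volatile", a compiler
         * that inlined storeQueryResult() could treat its stores into
         * sinfo as dead and hand us stale field values here. */
        if (sinfo.cur_res != NULL)
            PQclear(sinfo.cur_res);
        PG_RE_THROW();
    }
    PG_END_TRY();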
strncpy() has a well-deserved reputation for being unsafe, so make an
effort to get rid of nearly all occurrences in HEAD.
A large fraction of the remaining uses were passing length less than or
equal to the known strlen() of the source, in which case no null-padding
can occur and the behavior is equivalent to memcpy(), though doubtless
slower and certainly harder to reason about. So just use memcpy() in
these cases.
In other cases, use either StrNCpy() or strlcpy() as appropriate (depending
on whether padding to the full length of the destination buffer seems
useful).
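In sketch form, the two replacement patterns look like this (strlcpy()
comes from src/port on platforms that lack it):

    #include <string.h>

    void
    copy_examples(char *namebuf, char *destbuf, const char *src, size_t len)
    {
        /* Case 1: len is known to be <= strlen(src), so strncpy() could
         * not have null-padded anyway; memcpy() says exactly what
         * happens, and is faster. */
        memcpy(namebuf, src, len);

        /* Case 2: fixed-size destination (here assumed 64 bytes) where
         * truncation must still leave a NUL-terminated string and
         * zero-padding the remainder is not wanted. */
        strlcpy(destbuf, src, 64);
    }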
I left a few strncpy() calls alone in the src/timezone/ code, to keep it
in sync with upstream (the IANA tzcode distribution). There are also a
few such calls in ecpg that could possibly do with more analysis.
AFAICT, none of these changes are more than cosmetic, except for the four
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength
source leads to a non-null-terminated destination buffer and ensuing
misbehavior. These don't seem like security issues, first because no stack
clobber is possible and second because if your values of sslcert etc are
coming from untrusted sources then you've got problems way worse than this.
Still, it's undesirable to have unpredictable behavior for overlength
inputs, so back-patch those four changes to all active branches.
contrib/pg_stat_statements will sometimes run the core lexer a second time
on submitted statements. Formerly, if you had standard_conforming_strings
turned off, this led to sometimes getting two copies of any warnings
enabled by escape_string_warning. While this is probably no longer a big
deal in the field, it's a pain for regression testing.
To fix, change the lexer so it doesn't consult the escape_string_warning
GUC variable directly, but looks at a copy in the core_yy_extra_type state
struct. Then, pg_stat_statements can change that copy to disable warnings
while it's redoing the lexing.
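The second lexing pass in pg_stat_statements can then look roughly like
this:

    core_yy_extra_type yyextra;
    core_yyscan_t      yyscanner;

    yyscanner = scanner_init(query, &yyextra,
                             ScanKeywords, NumScanKeywords);

    /* we don't want the lexer to repeat any escape-string warnings */
    yyextra.escape_string_warning = false;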
It seemed like a good idea to make this happen for all three of the GUCs
consulted by the lexer, not just escape_string_warning. There's not an
immediate use-case for callers to adjust the other two AFAIK, but making
it possible is easy enough and seems like good future-proofing.
Arguably this is a bug fix, but there doesn't seem to be enough interest to
justify a back-patch. We'd not be able to back-patch exactly as-is anyway,
for fear of breaking ABI compatibility of the struct. (We could perhaps
back-patch the addition of only escape_string_warning by adding it at the
end of the struct, where there's currently alignment padding space.)
The folly of the previous arrangement was just demonstrated: there's no
convenient way to add fields to ExplainState without breaking ABI, even
if callers have no need to touch those fields. Since we might well need
to do that again someday in back branches, let's change things so that
only explain.c has to have sizeof(ExplainState) compiled into it. This
costs one extra palloc() per EXPLAIN operation, which is surely pretty
negligible.
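The resulting constructor looks roughly like this, keeping
sizeof(ExplainState) private to explain.c:

    ExplainState *
    NewExplainState(void)
    {
        ExplainState *es = (ExplainState *) palloc0(sizeof(ExplainState));

        /* set fields whose defaults are not zero/false */
        es->costs = true;
        es->str = makeStringInfo();
        return es;
    }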
Previously, read() might have returned a length equal to the buffer
length, and then the subsequent store to buf[len] would write a
zero-byte one byte past the end. This doesn't seem likely to be
a security issue, but there's some chance it could result in
pg_standby misbehaving.
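The fix amounts to reserving the terminator's byte up front, roughly
(names illustrative):

    #include <unistd.h>

    static void
    read_trigger_file(int fd)
    {
        char    buf[32768];
        ssize_t len;

        /* ask for one byte less than the buffer size, so the store to
         * buf[len] below stays in bounds even if read() returns a full
         * buffer's worth */
        len = read(fd, buf, sizeof(buf) - 1);
        if (len < 0)
            len = 0;            /* real code would report the error */
        buf[len] = '\0';
    }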
Spotted by Coverity; patch by Michael Paquier, reviewed by me.
Previously the computation of the total test duration, measured in
microseconds, overflowed due to the accidental use of signed 32bit
arithmetic. As the only consequence is that pg_test_timing invocations
with such overly large durations never finished, the practical
consequences of this bug are minor.
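A signed 32bit microsecond count overflows once the duration exceeds
2^31 / 10^6, i.e. about 2147 seconds (36 minutes); the fix is simply to
widen before multiplying, in the style of:

    #include <stdint.h>

    static int64_t
    duration_in_microseconds(int32_t duration_secs)
    {
        /* the cast must happen before the multiply, or the product is
         * still computed in (overflowing) 32bit arithmetic */
        return (int64_t) duration_secs * 1000000;
    }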
Pointed out by Coverity.
Backpatch to 9.2 where pg_test_timing was added.
In the unlikely case of stdin (fd 0) being closed, the off-by-one
would lead to pg_xlogdump failing to open files.
Spotted by Coverity.
Backpatch to 9.3 where pg_xlogdump was introduced.
Previously, if you wanted anything besides C-string hash keys, you had to
specify a custom hashing function to hash_create(). Nearly all such
callers were specifying tag_hash or oid_hash, which is tedious and rather
error-prone, since a caller could easily miss the opportunity to optimize
by using hash_uint32 when appropriate. Replace this with a design whereby
callers using simple binary-data keys just specify HASH_BLOBS and don't
need to mess with specific support functions. hash_create() itself will
take care of optimizing when the key size is four bytes.
This nets out saving a few hundred bytes of code space, and offers
a measurable performance improvement in tidbitmap.c (which was not
exploiting the opportunity to use hash_uint32 for its 4-byte keys).
There might be some wins elsewhere too, I didn't analyze closely.
In future we could look into offering a similar optimized hashing function
for 8-byte keys. Under this design that could be done in a centralized
and machine-independent fashion, whereas getting it right for keys of
platform-dependent sizes would've been notationally painful before.
For the moment, the old way still works fine, so as not to break source
code compatibility for loadable modules. Eventually we might want to
remove tag_hash and friends from the exported API altogether, since there's
no real need for them to be explicitly referenced from outside dynahash.c.
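Usage under the new design looks like this (the entry type is
hypothetical):

    HASHCTL     ctl;
    HTAB       *htab;

    memset(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);              /* simple 4-byte binary key */
    ctl.entrysize = sizeof(MyHashEntry);    /* hypothetical entry type */

    /* HASH_BLOBS replaces hand-picking tag_hash/oid_hash; with a
     * 4-byte keysize, hash_create() selects hash_uint32 by itself */
    htab = hash_create("my lookup table", 128, &ctl,
                       HASH_ELEM | HASH_BLOBS);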
Teodor Sigaev and Tom Lane
Use SSPI authentication to allow connections exclusively from the OS
user that launched the test suite. This closes on Windows the
vulnerability that commit be76a6d39e
closed on other platforms. Users of "make installcheck" or custom test
harnesses can run "pg_regress --config-auth=DATADIR" to activate the
same authentication configuration that "make check" would use.
Back-patch to 9.0 (all supported versions).
Security: CVE-2014-0067
As with NOT NULL constraints, we consider that such constraints are merely
reports of constraints that are being enforced by the remote server (or
other underlying storage mechanism). Their only real use is to allow
planner optimizations, for example in constraint-exclusion checks. Thus,
the code changes here amount to little more than removal of the error that
was formerly thrown for applying CHECK to a foreign table.
(In passing, do a bit of cleanup of the ALTER FOREIGN TABLE reference page,
which had accumulated some weird decisions about ordering etc.)
Shigeru Hanada and Etsuro Fujita, reviewed by Kyotaro Horiguchi and
Ashutosh Bapat.
If the called command fails to return data, runShellCommand forgot to
pclose() the pipe before returning. This is fairly harmless in the current
code, because pgbench would then abandon further processing of that client
thread; so no more than nclients descriptors could be leaked this way. But
it's not hard to imagine future improvements whereby that wouldn't be true.
In any case, it's sloppy coding, so patch all branches. Found by Coverity.
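The repaired early-return path looks roughly like this (surrounding
pgbench variable names assumed):

    if (fgets(res, sizeof(res), fp) == NULL)
    {
        fprintf(stderr, "could not read result of shell command\n");
        pclose(fp);             /* previously leaked on this path */
        return false;
    }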
In commit 462bd95705, I changed postgres_fdw
to rely on get_plan_rowmark() instead of get_parse_rowmark(). I still
think that's a good idea in the long run, but as Etsuro Fujita pointed out,
it doesn't work today because planner.c forces PlanRowMarks to have
markType = ROW_MARK_COPY for all foreign tables. There's no urgent reason
to change this in the back branches, so let's just revert that part of
yesterday's commit rather than trying to design a better solution under
time pressure.
Also, add a regression test case showing what postgres_fdw does with FOR
UPDATE/SHARE. I'd blithely assumed there was one already, else I'd have
realized yesterday that this code didn't work.
Ordinarily we can omit checking of a WHERE condition that matches a partial
index's condition, when we are using an indexscan on that partial index.
However, in SELECT FOR UPDATE we must include the "redundant" filter
condition in the plan so that it gets checked properly in an EvalPlanQual
recheck. The planner got this mostly right, but improperly omitted the
filter condition if the index in question was on an inheritance child
table. In READ COMMITTED mode, this could result in incorrectly returning
just-updated rows that no longer satisfy the filter condition.
The cause of the error is using get_parse_rowmark() when get_plan_rowmark()
is what should be used during planning. In 9.3 and up, also fix the same
mistake in contrib/postgres_fdw. It's currently harmless there (for lack
of inheritance support) but wrong is wrong, and the incorrect code might
get copied to someplace where it's more significant.
Report and fix by Kyotaro Horiguchi. Back-patch to all supported branches.
dblink and postgres_fdw use SHLIB_PREREQS = submake-libpq to build libpq
first. This doesn't work in a PGXS build, because there is no libpq to
build. So just omit setting SHLIB_PREREQS in this case.
Note that PGXS users can still use SHLIB_PREREQS (although it is not
documented). The problem here is only that contrib modules can be built
in-tree or using PGXS, and the prerequisite is only applicable in the
former case.
Commit 6697aa2bc2 previously attempted to
address this by creating a somewhat fake submake-libpq target in
Makefile.global. That was not the right fix, and it was also done in a
nonportable way, so revert that.
Transactions can now set their commit timestamp directly as they commit,
or an external transaction commit timestamp can be fed from an outside
system using the new function TransactionTreeSetCommitTsData(). This
data is crash-safe, and truncated at Xid freeze point, same as pg_clog.
This module is disabled by default because it causes a performance hit,
but can be enabled in postgresql.conf, requiring only a server restart.
A new test in src/test/modules is included.
Catalog version bumped due to the new subdirectory within PGDATA and a
couple of new SQL functions.
Authors: Álvaro Herrera and Petr Jelínek
Reviewed to varying degrees by Michael Paquier, Andres Freund, Robert
Haas, Amit Kapila, Fujii Masao, Jaime Casanova, Simon Riggs, Steven
Singer, Peter Eisentraut
The logical decoding patchset introduced the PROC_IN_LOGICAL_DECODING
PGXACT flag, which allows such backends to be skipped when computing
the xmin horizon/snapshots. That's fine and sensible for walsenders
streaming out logical changes, but not at all fine for SQL backends
doing logical decoding. If the latter set that flag, any change they
have performed outside of logical decoding will not be regarded as
visible - which e.g. can lead to that change being vacuumed away.
Note that not setting the flag for SQL backends isn't particularly
bothersome - the SQL backend doesn't do streaming, so it only runs for
a limited amount of time.
Per buildfarm member 'tick' and Alvaro.
Backpatch to 9.4, where logical decoding was introduced.
We expose a function IsValidJsonNumber that internally calls the lexer
for JSON numbers. That allows us to use the same test everywhere,
instead of inventing a broken test for hstore conversions. The new
function is also used in datum_to_json, replacing the code that is now
moved to the new function.
Backpatch to 9.3 where hstore_to_json_loose was introduced.
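The intended call pattern, e.g. when deciding whether a value can be
emitted as a bare JSON number or must be quoted:

    if (IsValidJsonNumber(str, strlen(str)))
        appendStringInfoString(result, str);    /* emit as a number */
    else
        escape_json(result, str);               /* quote as a string */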
This is advance preparation for introducing even more test modules; the
easy solution is to add them to contrib, but that's bloated enough that
it seems a good time to think of something different.
Moved modules are dummy_seclabel, test_shm_mq, test_parser and
worker_spi.
(test_decoding was also a candidate, but there was too much opposition
to moving that one. We can always reconsider later.)
This patch adds a way of iterating through the members of a bitmapset
nondestructively, unlike the old way with bms_first_member(). While
bms_next_member() is very slightly slower than bms_first_member()
(at least for typical-size bitmapsets), eliminating the need to palloc
and pfree a temporary copy of the target bitmapset is a significant win.
So this method should be preferred in all cases where a temporary copy
would be necessary.
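The new iteration idiom looks like this:

    int     x = -1;

    while ((x = bms_next_member(inputset, x)) >= 0)
    {
        /* process member x; inputset is left intact, so no temporary
         * copy (and no palloc/pfree cycle) is needed */
    }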
Tom Lane, with suggestions from Dean Rasheed and David Rowley
Mark Simonetti reported that libxslt sometimes crashes for him, and that
swapping xslt_process's object-freeing calls around to do them in reverse
order of creation seemed to fix it. I've not reproduced the crash, but
valgrind clearly shows a reference to already-freed memory, which is
consistent with the idea that shutdown of the xsltTransformContext is
trying to reference the already-freed stylesheet or input document.
With this patch, valgrind is no longer unhappy.
I have an inquiry in to see if this is a libxslt bug or if we're just
abusing the library; but even if it's a library bug, we'd want to adjust
our code so it doesn't fail with unpatched libraries.
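A sketch of the safe ordering, assuming objects named as in a typical
transform (error paths elided; this may not mirror xslt_process exactly):

    /* free in reverse order of creation: shut the transform context
     * down while the stylesheet and documents it references are
     * still alive */
    xsltFreeTransformContext(tctxt);
    xmlFreeDoc(restree);
    xmlFreeDoc(doctree);
    xsltFreeStylesheet(stylesheet);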
Back-patch to all supported branches, because we've been doing this in
the wrong(?) order for a long time.
Code that checks the flag no longer needs #ifdefs, which is more convenient.
In particular, it makes it easier to write extensions that depend on it.
In passing, modify sslinfo's ssl_is_used function to check ssl_in_use
instead of the OpenSSL-specific 'ssl' pointer. It doesn't make any
difference currently, as sslinfo is only compiled when built with OpenSSL,
but seems cleaner anyway.
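For example, extension code can now simply test the flag (sketch):

    /* no USE_SSL #ifdef needed; the flag exists in every build */
    if (MyProcPort != NULL && MyProcPort->ssl_in_use)
        ereport(NOTICE, (errmsg("connection is SSL-encrypted")));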
postgres_fdw would send query conditions involving system columns to the
remote server, even though it makes no effort to ensure that system
columns other than CTID match what the remote side thinks. tableoid,
in particular, probably won't match and might have some use in queries.
Hence, prevent sending conditions that include non-CTID system columns.
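In sketch form, the shippability rule for a Var found in a condition is:

    /* system attributes have negative attribute numbers; only ctid
     * (SelfItemPointerAttributeNumber) is safe to send, it being the
     * one system column we ensure matches the remote side */
    if (var->varattno < 0 &&
        var->varattno != SelfItemPointerAttributeNumber)
        return false;           /* not shippable to the remote server */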
Also, create_foreignscan_plan neglected to check local restriction
conditions while determining whether to set fsSystemCol for a foreign
scan plan node. This again would bollix the results for queries that
test a foreign table's tableoid.
Back-patch the first fix to 9.3 where postgres_fdw was introduced.
Back-patch the second to 9.2. The code is probably broken in 9.1 as
well, but the patch doesn't apply cleanly there; given the weak state
of support for FDWs in 9.1, it doesn't seem worth fixing.
Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me