Rotate instead of shifting hash join batch number.

Our algorithm for choosing batch numbers turned out not to work
effectively for multi-billion key inner relations.  We would use
more hash bits than we have, and effectively concentrate all tuples
into a smaller number of batches than we intended.  While ideally
we should switch to wider hashes, for now, change the algorithm to
one that effectively gives up bits from the bucket number when we
don't have enough bits.  That means we'll finish up with longer
bucket chains than would be ideal, but that's better than having
batches that don't fit in work_mem and can't be divided.

Back-patch to all supported releases.
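
For illustration only (not part of the commit): a minimal standalone C sketch
contrasting the old shift-based and the new rotate-based batch-number
computation, with hypothetical bucket and batch counts chosen so that
log2(nbuckets) + log2(nbatch) exceeds 32, the case the old scheme mishandled.

/*
 * Illustration only: contrast the old shift-based and new rotate-based
 * batch number selection when more hash bits are needed than exist.
 * The constants are hypothetical; PostgreSQL derives them at run time.
 */
#include <stdint.h>
#include <stdio.h>

#define LOG2_NBUCKETS	23			/* 8M buckets (hypothetical) */
#define NBATCH			4096		/* 12 batch bits: 23 + 12 > 32 */

static uint32_t
rotate_right32(uint32_t word, int n)
{
	/* local stand-in for the pg_rotate_right32 helper added by this commit */
	return (word >> n) | (word << (32 - n));
}

int
main(void)
{
	uint32_t	hashvalue = 0x9e3779b9; /* any 32-bit hash value */

	/*
	 * Old: DIV by shifting.  Only 32 - 23 = 9 hash bits remain, so the top
	 * 3 bits of batchno are always zero and just 512 of the 4096 batches
	 * can ever receive tuples: tuples pile up in too few batches.
	 */
	uint32_t	batchno_old = (hashvalue >> LOG2_NBUCKETS) & (NBATCH - 1);

	/*
	 * New: rotate.  batchno reuses ("steals") low-order bucket bits, so all
	 * 4096 batches stay reachable, at the cost of longer bucket chains.
	 */
	uint32_t	batchno_new = rotate_right32(hashvalue, LOG2_NBUCKETS) & (NBATCH - 1);

	printf("old batchno = %u, new batchno = %u\n", batchno_old, batchno_new);
	return 0;
}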

Author: Thomas Munro
Reviewed-by: Tom Lane, thanks also to Tomas Vondra, Alvaro Herrera, Andres Freund for testing and discussion
Reported-by: James Coleman
Discussion: https://postgr.es/m/16104-dc11ed911f1ab9df%40postgresql.org
Thomas Munro 2019-12-24 11:31:24 +13:00
parent d5b9c2baff
commit e69d644547
2 changed files with 18 additions and 4 deletions

src/backend/executor/nodeHash.c

@@ -37,6 +37,7 @@
 #include "miscadmin.h"
 #include "pgstat.h"
 #include "port/atomics.h"
+#include "port/pg_bitutils.h"
 #include "utils/dynahash.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
@@ -1877,7 +1878,7 @@ ExecHashGetHashValue(HashJoinTable hashtable,
  * chains), and must only cause the batch number to remain the same or
  * increase.  Our algorithm is
  *		bucketno = hashvalue MOD nbuckets
- *		batchno = (hashvalue DIV nbuckets) MOD nbatch
+ *		batchno = ROR(hashvalue, log2_nbuckets) MOD nbatch
  * where nbuckets and nbatch are both expected to be powers of 2, so we can
  * do the computations by shifting and masking.  (This assumes that all hash
  * functions are good about randomizing all their output bits, else we are
@@ -1889,7 +1890,11 @@ ExecHashGetHashValue(HashJoinTable hashtable,
  * number the way we do here).
  *
  * nbatch is always a power of 2; we increase it only by doubling it.  This
- * effectively adds one more bit to the top of the batchno.
+ * effectively adds one more bit to the top of the batchno.  In very large
+ * joins, we might run out of bits to add, so we do this by rotating the hash
+ * value.  This causes batchno to steal bits from bucketno when the number of
+ * virtual buckets exceeds 2^32.  It's better to have longer bucket chains
+ * than to lose the ability to divide batches.
  */
 void
 ExecHashGetBucketAndBatch(HashJoinTable hashtable,
@@ -1902,9 +1907,9 @@ ExecHashGetBucketAndBatch(HashJoinTable hashtable,
 
 	if (nbatch > 1)
 	{
-		/* we can do MOD by masking, DIV by shifting */
 		*bucketno = hashvalue & (nbuckets - 1);
-		*batchno = (hashvalue >> hashtable->log2_nbuckets) & (nbatch - 1);
+		*batchno = pg_rotate_right32(hashvalue,
+									 hashtable->log2_nbuckets) & (nbatch - 1);
 	}
 	else
 	{
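
The header comment above keeps the requirement that repartitioning may only
leave a tuple's batch number the same or increase it.  A quick standalone
check (illustration only, hypothetical parameters) that the rotate-based
scheme still has that property as nbatch doubles:

/*
 * Illustration only: doubling nbatch adds one more (higher-order) bit to
 * batchno, so a tuple's batch number can only stay the same or grow.
 */
#include <assert.h>
#include <stdint.h>

static uint32_t
rotate_right32(uint32_t word, int n)
{
	return (word >> n) | (word << (32 - n));
}

static uint32_t
batchno(uint32_t hashvalue, int log2_nbuckets, uint32_t nbatch)
{
	return rotate_right32(hashvalue, log2_nbuckets) & (nbatch - 1);
}

int
main(void)
{
	int			log2_nbuckets = 20; /* hypothetical */
	uint32_t	h;

	for (h = 0; h < 1000000; h++)
	{
		uint32_t	nbatch;

		/* includes doublings where 20 + log2(nbatch) exceeds 32 */
		for (nbatch = 1; nbatch <= (1U << 16); nbatch <<= 1)
			assert(batchno(h, log2_nbuckets, nbatch * 2) >=
				   batchno(h, log2_nbuckets, nbatch));
	}
	return 0;
}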

src/include/port/pg_bitutils.h

@@ -136,4 +136,13 @@ extern int (*pg_popcount64) (uint64 word);
 /* Count the number of one-bits in a byte array */
 extern uint64 pg_popcount(const char *buf, int bytes);
 
+/*
+ * Rotate the bits of "word" to the right by n bits.
+ */
+static inline uint32
+pg_rotate_right32(uint32 word, int n)
+{
+	return (word >> n) | (word << (sizeof(word) * BITS_PER_BYTE - n));
+}
+
 #endif							/* PG_BITUTILS_H */
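
One property worth noting (illustration only, not from the commit): whenever
log2(nbuckets) + log2(nbatch) still fits in 32 bits, the rotated and shifted
values agree under the batch mask, so the new scheme picks exactly the same
batches as the old one and behaves differently only for the huge joins being
fixed.  A standalone sketch with hypothetical parameters:

/*
 * Illustration only: when log2_nbuckets + log2(nbatch) <= 32, rotating and
 * shifting select the same batch.  The helper below copies the expression
 * added to pg_bitutils.h; the parameters are hypothetical.
 */
#include <assert.h>
#include <stdint.h>

#define BITS_PER_BYTE 8

static inline uint32_t
pg_rotate_right32_copy(uint32_t word, int n)
{
	return (word >> n) | (word << (sizeof(word) * BITS_PER_BYTE - n));
}

int
main(void)
{
	int			log2_nbuckets = 14; /* 16384 buckets (hypothetical) */
	uint32_t	nbatch = 1U << 10;	/* 1024 batches: 14 + 10 <= 32 */
	uint32_t	h;

	for (h = 0; h < 5000000; h++)
	{
		uint32_t	old_way = (h >> log2_nbuckets) & (nbatch - 1);
		uint32_t	new_way = pg_rotate_right32_copy(h, log2_nbuckets) & (nbatch - 1);

		assert(old_way == new_way);
	}
	return 0;
}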