Information completely translated to the SGML/DocBook source files

pgtcl.sgml, spi.sgml, trigger.sgml.
Online docs in HTML and Postscript are the current versions.
Thomas G. Lockhart 1998-04-05 17:10:41 +00:00
parent e98562a511
commit 7a9385e9d3
3 changed files with 0 additions and 1043 deletions


@ -1,200 +0,0 @@
pgtcl is a tcl package for front-end programs to interface with PostgreSQL
backends. PgTcl does not use the libpq library but communicates to
the backend directly via the frontend-backend protocol. Thus, it is
more efficient than previous postgres->tcl bindings which are layered
on top of libpq. In addition, pgtcl can handle multiple backend
connections from a single frontend application.
If you have any questions or bug reports, please send them to
Jolly Chen at jolly@cs.berkeley.edu.
-------------------------------------------------------------------
The pgtcl package provides the following commands.
pg_connect - opens a connection to the backend server
pg_disconnect - closes a connection
pg_exec - send a query to the backend
pg_select - loop over the result of a select statement
pg_result - manipulate the results of a query
pg_lo_creat - create a large object
pg_lo_open - open a large object
pg_lo_close - close a large object
pg_lo_read - read a large object
pg_lo_write - write a large object
pg_lo_lseek - seek to a position on a large object
pg_lo_tell - return the current seek position of a large object
pg_lo_unlink - delete a large object
pg_lo_import - import a Unix file into a large object
pg_lo_export - export a large object into a Unix file
1) pg_connect: opens a connection to the backend
syntax:
pg_connect dbName [-host hostName] [-port portNumber] [-tty pqtty] [-options optionalBackendArgs]
The return result is either an error message or a handle for a database
connection. Handles start with the prefix "pgp".
2) pg_disconnect: closes a connection
syntax:
pg_disconnect connection
The argument passed in must be a connection handle returned by pg_connect.
3) pg_exec: send a query string to the backend
syntax:
pg_exec connection query
The return result is either an error message or a handle for a query
result. Handles start with the prefix "pgp".
4) pg_select: loop over the result of a select statement
syntax:
pg_select connection query var proc
The query must be a select statement. Anything else returns an error.
The var variable is an array name used in the loop. For each tuple
returned by the query it is filled in using the field names as the
associative indices. Proc is the procedure that is run for each tuple
found.
example: (DB is set to the database name)
    set conn [pg_connect $DB]
    pg_select $conn "SELECT * from table" array {
        puts [format "%5d %s" $array(control) $array(name)]
    }
    pg_disconnect $conn
5) pg_result: get information about a query result
syntax:
pg_result result ?option?
the options are:
-status
the status of the result
-oid
if the last query was an insert, returns the oid of the
inserted tuple
-conn
the connection that produced the result
-assign arrayName
assign the results to an array
-numTuples
the number of tuples in the query
-attributes
returns a list of the name/type pairs of the tuple attributes
-getTuple tupleNumber
returns the values of the tuple in a list
-clear
clear the result buffer. Do not reuse after this
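For example, here is a minimal sketch that combines pg_exec and pg_result;
the database "mydb", the table "mytable" and its columns are hypothetical:
    set conn  [pg_connect mydb]
    set res   [pg_exec $conn "SELECT name, control FROM mytable"]
    puts "status: [pg_result $res -status]"
    set ntups [pg_result $res -numTuples]
    for {set i 0} {$i < $ntups} {incr i} {
        # values of tuple $i, as a list
        puts [pg_result $res -getTuple $i]
    }
    pg_result $res -clear                  ;# done with the result buffer
    pg_disconnect $conn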
----------------------------------------------------------------------------
The pg_lo* routines are interfaces to the Inversion large objects in postgres.
The functions are designed to mimic the analogous file system functions in
the standard Unix file system interface.
The pg_lo* routines should typically be used within a BEGIN/END transaction
block because the file descriptor returned by pg_lo_open is only valid for
the current transaction. pg_lo_import and pg_lo_export MUST be used
in a BEGIN/END transaction block.
* pg_lo_creat: create a large object
syntax:
pg_lo_creat conn mode
mode can be any OR'ing together of INV_READ, INV_WRITE, and INV_ARCHIVE.
The OR delimiter character is "|".
e.g. [pg_lo_creat $conn "INV_READ|INV_WRITE"]
returns the oid of the large object created.
* pg_lo_open: open a large object
syntax:
pg_lo_open conn objOid mode
where mode can be either "r", "w", or "rw"
returns a file descriptor for use in later pg_lo* routines
* pg_lo_close: close a large object
syntax:
pg_lo_close conn fd
* pg_lo_read: read a large object
syntax:
pg_lo_read conn fd bufVar len
reads at most len bytes from a large object into a variable named bufVar.
Note that the third argument should be a variable name.
* pg_lo_write: write a large object
syntax:
pg_lo_write conn fd buf len
write at most len bytes to a large object.
The third argument should be the actual string to write, not a variable name.
* pg_lo_lseek: seek to a position on a large object
syntax:
pg_lo_lseek conn fd offset whence
whence can be "SEEK_CUR", "SEEK_END", or "SEEK_SET"
* pg_lo_tell: return the current seek position of a large object
syntax:
pg_lo_tell conn fd
* pg_lo_unlink: delete a large object
syntax:
pg_lo_unlink conn lobjId
* pg_lo_import: import a Unix file into a large object
syntax:
pg_lo_import conn filename
pg_lo_import must be called within a BEGIN/END transaction block
* pg_lo_export: export a large object into a Unix file
syntax:
pg_lo_export conn lobjId filename
pg_lo_export must be called within a BEGIN/END transaction block
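As an illustration, here is a hedged sketch of a complete large-object round
trip, wrapped in a transaction as recommended above; the data written is
arbitrary and $conn is assumed to be an open connection:
    pg_exec $conn "BEGIN"
    set lobjId [pg_lo_creat $conn "INV_READ|INV_WRITE"]   ;# oid of the new large object
    set fd [pg_lo_open $conn $lobjId "w"]
    pg_lo_write $conn $fd "hello" 5
    pg_lo_close $conn $fd
    set fd [pg_lo_open $conn $lobjId "r"]
    pg_lo_read $conn $fd buf 5                            ;# fills the variable buf
    puts "read back: $buf"
    pg_lo_close $conn $fd
    pg_exec $conn "END"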
------------------------------------------------------------------
Here's a small example of how to use the routines:
# getDBs :
# get the names of all the databases at a given host and port number
# with the defaults being the localhost and port 5432
# return them in alphabetical order
proc getDBs { {host "localhost"} {port "5432"} } {
    # datnames is the list to be returned
    set datnames {}
    set conn [pg_connect template1 -host $host -port $port]
    set res [pg_exec $conn "SELECT datname FROM pg_database ORDER BY datname"]
    set ntups [pg_result $res -numTuples]
    for {set i 0} {$i < $ntups} {incr i} {
        lappend datnames [pg_result $res -getTuple $i]
    }
    pg_result $res -clear
    pg_disconnect $conn
    return $datnames
}


@ -1,518 +0,0 @@
PostgreSQL Server Programming Interface
The Server Programming Interface (SPI) is an attempt to give users the
ability to run SQL-queries inside user-defined C-functions. Given the lack
of a proper Procedural Language (PL) in the current version of PostgreSQL,
SPI is the only way to write server stored procedures and triggers. In the future
SPI will be used as the "workhorse" for PL.
In fact, SPI is just a set of built-in interface functions to simplify
access to the Parser, Planner, Optimizer and Executor. SPI also does some
memory management.
To avoid misunderstanding we'll use the word "function" for SPI interface
functions and the word "procedure" for user-defined C-functions using SPI.
SPI procedures are always called by some (upper) Executor and the SPI
manager uses the Executor to run your queries. Other procedures may be
called by the Executor running queries from your procedure.
Note that if, during execution of a query from a procedure, the transaction
is aborted, then control will not be returned to your procedure - all work
will be rolled back and the server will wait for the next command from the
client. This will be changed in future versions.
Other restrictions are the inability to execute BEGIN, END and ABORT
(transaction control statements) and cursor operations. This will also be
changed in the future.
Interface functions
If successful, SPI functions return a non-negative result (either via
returned (int) value or in SPI_result global variable, as described below).
On error, a negative result will be returned.
int SPI_connect (void)
Connects your procedure to the SPI manager. Initializes the SPI internal
structures for query execution and memory management.
You should call this function if you will need to execute queries. Some
utility SPI functions may be called from un-connected procedures.
Returns:
SPI_OK_CONNECT if connected.
SPI_ERROR_CONNECT if not. You may get this error if SPI_connect() is
called from an already connected procedure - e.g. if you directly call one
procedure from another connected one. Actually, while the child procedure
will be able to use SPI, your parent procedure will not be able to continue
to use SPI after the child returns (if SPI_finish() is called by the child).
It's bad practice.
int SPI_finish(void)
Disconnects your procedure from the SPI manager and frees all memory
allocations made by your procedure via palloc() since the SPI_connect().
These allocations can't be used any more! See Memory management.
After SPI_finish() is called your procedure loses the ability to run
queries. The server is in the same state as just before the call to
SPI_connect().
Returns:
SPI_OK_FINISH if properly disconnected.
SPI_ERROR_UNCONNECTED if called from an un-connected procedure. No problem
with this - it just means that nothing was done by the SPI manager.
NOTE! SPI_finish() MUST be called by a connected procedure or you may get
unpredictable results! But you are able to skip the call to SPI_finish()
if you abort the transaction (via elog(WARN)).
int SPI_exec(char *query, int tcount)
Creates an execution plan (parser+planner+optimizer) and executes query
for tcount tuples. This should only be called from a connected procedure.
If tcount is zero then it executes the query for all tuples returned by the
query scan. Using tcount > 0 you may restrict the number of tuples for
which the query will be executed:
SPI_exec ("insert into _table_ select * from _table_", 5);
- at most 5 tuples will be inserted into _table_.
If execution of your query was successful then one of the following
(non-negative) values will be returned:
SPI_OK_UTILITY if some utility (e.g. CREATE TABLE ...) was executed.
SPI_OK_SELECT if SELECT (but not SELECT ... INTO!) was executed.
SPI_OK_SELINTO if SELECT ... INTO was executed.
SPI_OK_INSERT if INSERT (or INSERT ... SELECT) was executed.
SPI_OK_DELETE if DELETE was executed.
SPI_OK_UPDATE if UPDATE was executed.
NOTE! You may pass many queries in one string, or the query string may be
rewritten by RULEs. SPI_exec() returns the result for the last query
executed.
The actual number of tuples for which the (last) query was executed is
returned in the global variable SPI_processed (if not SPI_OK_UTILITY).
If SPI_OK_SELECT returned and SPI_processed > 0 then you may use global
pointer SPITupleTable *SPI_tuptable to access the selected tuples:
Structure SPITupleTable is defined in spi.h:
   typedef struct
   {
       uint32      alloced;        /* # of alloced vals */
       uint32      free;           /* # of free vals */
       TupleDesc   tupdesc;        /* tuple descriptor */
       HeapTuple  *vals;           /* tuples */
   } SPITupleTable;
HeapTuple *vals is an array of pointers to tuples. TupleDesc tupdesc is
a tuple descriptor which you may pass to SPI functions dealing with
tuples.
NOTE! Functions SPI_exec(), SPI_execp() and SPI_prepare() change both
SPI_processed and SPI_tuptable (just the pointer, not the contents of the
structure)! So, save them in local procedure variables if you need them.
Also NOTE that SPI_finish() frees all SPITupleTables and makes them
unusable! (See Memory management.)
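To illustrate, here is a minimal fragment (inside a connected procedure; the
query text is arbitrary) that saves both globals into local variables right
after SPI_exec():

    int             ret;
    int             ntuples;
    SPITupleTable  *tuptable;

    ret = SPI_exec("select * from mytable", 0);

    /* save the globals at once - a later SPI_exec(), SPI_execp() or
       SPI_prepare() call would overwrite SPI_processed and SPI_tuptable */
    ntuples  = SPI_processed;
    tuptable = SPI_tuptable;

    if (ret == SPI_OK_SELECT && ntuples > 0)
    {
        /* tuptable->vals[0 .. ntuples - 1] and tuptable->tupdesc stay
           usable until SPI_finish() frees them */
    }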
SPI_exec() may return one of the following (negative) values:
SPI_ERROR_ARGUMENT if query is NULL or tcount < 0.
SPI_ERROR_UNCONNECTED if procedure is unconnected.
SPI_ERROR_COPY if COPY TO/FROM stdin.
SPI_ERROR_CURSOR if DECLARE/CLOSE CURSOR, FETCH.
SPI_ERROR_TRANSACTION if BEGIN/ABORT/END.
SPI_ERROR_OPUNKNOWN if type of query is unknown (this shouldn't occur).
void *SPI_prepare(char *query, int nargs, Oid * argtypes)
Creates and returns an execution plan (parser+planner+optimizer) but doesn't
execute the query. Should only be called from a connected procedure.
nargs is number of parameters ($1 ... $<nargs> - as in SQL-functions),
*argtypes is an array of parameter type OIDs.
nargs may be 0 only if there is no $1 in the query.
Execution of prepared execution plans is sometimes much faster so this
feature may be useful if the same query will be executed many times.
NOTE! The plan returned by SPI_prepare() may be used only in current
invocation of procedure: SPI_finish() frees memory allocated for a plan.
See SPI_saveplan().
If successful, NOT NULL pointer will be returned. Otherwise, you'll get
a NULL plan. In both cases SPI_result will be set like the value returned
by SPI_exec, except
SPI_ERROR_ARGUMENT if query is NULL or nargs < 0 or nargs > 0 && argtypes
is NULL.
void *SPI_saveplan(void *plan)
Currently, there is no ability to store prepared plans in the system
catalog and fetch them from there for execution. This will be implemented
in future versions.
As a workaround, there is the ability to reuse prepared plans in
subsequent invocations of your procedure in the current session.
SPI_saveplan() saves a passed plan (prepared by SPI_prepare()) in memory
protected from freeing by SPI_finish() and by the transaction manager and
returns a pointer to the saved plan. You may keep the returned pointer in
a global or static variable so the plan can be reused in later invocations.
Always check whether this pointer is NULL, both when preparing a plan and
when using an already prepared plan in SPI_execp (see below).
NOTE! If one of objects (relation, function, ...) referenced by prepared
plan is dropped during your session (by your backend or another) then the
results of SPI_execp (for this plan) will be unpredictable.
If successful, NOT NULL is returned. Otherwise, SPI_result is set to:
SPI_ERROR_ARGUMENT if plan is NULL.
SPI_ERROR_UNCONNECTED if procedure is un-connected.
int SPI_execp(void *plan, Datum * values, char *Nulls, int tcount)
Executes a plan prepared by SPI_prepare() (or returned by SPI_saveplan()).
Should only be called from a connected procedure.
plan is a pointer to an execution plan, values points to the actual parameter
values, Nulls points to an array describing which parameters are NULL ('n' -
NULL, ' ' - NOT NULL), and tcount is the number of tuples for which the plan
is to be executed.
If Nulls is NULL then SPI assumes that all values (if any) are NOT NULL.
Returns the same value as SPI_exec, except
SPI_ERROR_ARGUMENT if plan is NULL or tcount < 0.
SPI_ERROR_PARAM if values is NULL and the plan was prepared with some parameters.
If successful, SPI_tuptable and SPI_processed are initialized as in
SPI_exec().
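Putting SPI_prepare(), SPI_saveplan() and SPI_execp() together, here is a
hedged sketch; the table mytable, its int4 column x and the function name are
hypothetical, and INT4OID comes from catalog/pg_type.h:

    #include "executor/spi.h"
    #include "catalog/pg_type.h"            /* for INT4OID */

    static void    *saved_plan = NULL;      /* survives across invocations in this session */

    int count_bigger(int4 threshold);

    int
    count_bigger(int4 threshold)
    {
        Oid     argtypes[1] = { INT4OID };  /* type of $1 */
        Datum   values[1];
        int     ret,
                proc;

        SPI_connect();

        if (saved_plan == NULL)
        {
            void   *plan;

            plan = SPI_prepare("select * from mytable where x > $1", 1, argtypes);
            if (plan == NULL)
                elog(WARN, "count_bigger: SPI_prepare failed (%d)", SPI_result);
            saved_plan = SPI_saveplan(plan);        /* protect the plan from SPI_finish() */
            if (saved_plan == NULL)
                elog(WARN, "count_bigger: SPI_saveplan failed (%d)", SPI_result);
        }

        values[0] = Int32GetDatum(threshold);
        ret = SPI_execp(saved_plan, values, NULL, 0);       /* NULL: no parameter is NULL */
        if (ret < 0)
            elog(WARN, "count_bigger: SPI_execp returned %d", ret);

        proc = SPI_processed;
        SPI_finish();
        return (proc);
    }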
All functions described below may be used by connected and unconnected
procedures.
HeapTuple SPI_copytuple(HeapTuple tuple)
Makes a copy of tuple in the upper Executor context (see Memory management).
If successful, a NOT NULL pointer is returned. NULL (i.e. an error) will be
returned only if NULL is passed in.
HeapTuple SPI_modifytuple(Relation rel, HeapTuple tuple, int natts,
int *attnum, Datum * Values, char *Nulls)
Modifies a tuple of relation rel as described by the rest of the arguments.
natts is the number of attribute numbers in attnum.
attnum is an array of numbers of the attributes which are to be changed.
Values are new values for the attributes specified.
Nulls describes which of the attributes specified are NULL (if Nulls is
NULL then no NULLs).
If successful, a NOT NULL pointer to the new tuple is returned. The new tuple
is allocated in the upper Executor context (see Memory management). The passed
tuple is not changed.
Returns NULL if failed with cause in SPI_result:
SPI_ERROR_ARGUMENT if rel is NULL or tuple is NULL or natts <= 0 or
attnum is NULL or Values is NULL.
SPI_ERROR_NOATTRIBUTE if there is an invalid (<= 0 or > number of
attributes in tuple) attribute number in attnum.
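As a fragment sketch (typically inside a BEFORE trigger; rel, rettuple and the
chosen attribute number attno are assumed to be supplied by the surrounding
code):

    int         attnum[1];
    Datum       newvals[1];
    HeapTuple   newtuple;

    attnum[0]  = attno;                 /* 1-based number of the attribute to change */
    newvals[0] = Int32GetDatum(42);     /* arbitrary new value for an int4 attribute */

    /* NULL for Nulls: the new value is NOT NULL */
    newtuple = SPI_modifytuple(rel, rettuple, 1, attnum, newvals, NULL);
    if (newtuple == NULL)
        elog(WARN, "SPI_modifytuple failed (%d)", SPI_result);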
int SPI_fnumber(TupleDesc tupdesc, char *fname)
Returns the attribute number for the attribute with name in fname.
tupdesc is tuple description.
Attribute numbers are 1 based.
Returns SPI_ERROR_NOATTRIBUTE if the named attribute is not found.
char *SPI_fname(TupleDesc tupdesc, int fnumber)
Returns (a copy of) the name of the attribute with number fnumber.
Returns NULL and (SPI_ERROR_NOATTRIBUTE in SPI_result) if fnumber is
greater than the number of attributes in tupdesc or fnumber <= 0.
char *SPI_getvalue(HeapTuple tuple, TupleDesc tupdesc, int fnumber)
Returns an external (string) representation of the value of attribute
fnumber in tuple with descriptor tupdesc. Allocates memory as required
by the value.
Returns NULL if
attribute is NULL (SPI_result is 0 - no error);
fnumber is invalid (SPI_result is SPI_ERROR_NOATTRIBUTE);
there is no output function (SPI_result is SPI_ERROR_NOOUTFUNC).
Datum SPI_getbinval(HeapTuple tuple, TupleDesc tupdesc, int fnumber,
bool *isnull)
Returns the value of attribute fnumber in the tuple with descriptor
tupdesc. This is a binary value in internal form. This is not a copy!
Returns NULL indicator in *isnull.
SPI_result is SPI_ERROR_NOATTRIBUTE if fnumber is invalid.
char *SPI_gettype(TupleDesc tupdesc, int fnumber)
Returns (a copy of) the type name for attribute fnumber.
Returns NULL (and SPI_ERROR_NOATTRIBUTE in SPI_result) if fnumber
is invalid.
Oid SPI_gettypeid(TupleDesc tupdesc, int fnumber)
Returns type OID for attribute fnumber.
SPI_result is SPI_ERROR_NOATTRIBUTE if fnumber is invalid.
char *SPI_getrelname(Relation rel)
Returns (a copy of) the name of relation rel.
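The accessor functions above are typically combined like this (a fragment
sketch; tuple and tupdesc are assumed to come from SPI_tuptable, and the
column name "x" is arbitrary):

    int     fnumber;
    char   *type;
    char   *value;

    fnumber = SPI_fnumber(tupdesc, "x");        /* 1-based; negative if not found */
    if (fnumber <= 0)
        elog(WARN, "no such attribute: x");

    type  = SPI_gettype(tupdesc, fnumber);
    value = SPI_getvalue(tuple, tupdesc, fnumber);      /* NULL if the attribute is NULL */

    elog(NOTICE, "%s (%s) = %s", SPI_fname(tupdesc, fnumber), type,
         value ? value : "null");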
void *SPI_palloc (Size size)
Allocates memory in upper Executor context (see Memory management).
void *SPI_repalloc(void *pointer, Size size)
Re-allocates memory allocated in upper Executor context (see Memory
management).
void SPI_pfree(void *pointer)
Frees memory allocated in upper Executor context (see Memory management).
Memory management
The server allocates memory in memory contexts in such a way that allocations
made in one context may be freed by destroying that context without affecting
allocations made in other contexts. All allocations (via palloc(), etc) are
made in the context which is chosen as the current one. You'll get
unpredictable results if you try to free (or reallocate) memory that was not
allocated in the current context.
Creating memory contexts and switching between them is handled by the SPI
manager as part of its memory management.
SPI procedures deal with two memory contexts: upper Executor memory
context and procedure memory context (if connected).
Before a procedure is connected to the SPI manager, the current memory context
is the upper Executor context, so all allocations made by the procedure itself
via palloc()/repalloc() or by SPI utility functions before connecting to SPI
are made in this context.
After SPI_connect() is called, the current context is the procedure's own
context. All allocations made via palloc()/repalloc() or by SPI utility
functions (except for SPI_copytuple(), SPI_modifytuple(), SPI_palloc() and
SPI_repalloc()) are made in this context.
When a procedure disconnects from the SPI manager (via SPI_finish()) the
current context is restored to the upper Executor context and all allocations
made in the procedure memory context are freed and can't be used any more!
If you want to return something to the upper Executor then you have to
allocate memory for this in the upper context!
SPI has no ability to automatically free allocations in the upper Executor
context!
SPI automatically frees memory allocated during execution of a query when
this query is done!
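For example, here is a fragment sketch of handing a string back to the caller:
the copy is made with SPI_palloc() in the upper Executor context, so it
survives SPI_finish(), while the string from SPI_getvalue() does not (the
choice of tuple and attribute is arbitrary, and the attribute is assumed to be
NOT NULL):

    char   *tmp;        /* allocated in the procedure context - freed by SPI_finish() */
    char   *result;     /* allocated in the upper Executor context - survives */

    tmp = SPI_getvalue(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1);

    result = (char *) SPI_palloc(strlen(tmp) + 1);
    strcpy(result, tmp);

    SPI_finish();       /* frees tmp (and all other procedure-context memory), not result */

    /* result may now be returned to the upper Executor */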
Data changes visibility
PostgreSQL data changes visibility rule: during a query execution, data
changes made by the query itself (via SQL-function, SPI-function, triggers)
are invisible to the query scan. For example, in query
INSERT INTO a SELECT * FROM a
the tuples inserted are invisible to the SELECT's scan. In effect, this
duplicates the database table within itself (subject to unique index
rules, of course) without recursing.
Changes made by query Q are visible to queries which are started after
query Q, no matter whether they are started inside Q (during the execution
of Q) or after Q is done.
The last example of the usage of SPI procedure below demonstrates the
visibility rule.
Examples
There are more complex examples in src/test/regress/regress.c and
in contrib/spi.
This is a very simple example of SPI usage. The procedure execq accepts
an SQL-query in its first argument and tcount in its second, executes the
query using SPI_exec and returns the number of tuples for which the query
was executed:
----------------------------------------------------------------------------
#include "executor/spi.h" /* this is what you need to work with SPI */
int execq(text *sql, int cnt);
int
execq(text *sql, int cnt)
{
int ret;
int proc = 0;
SPI_connect();
ret = SPI_exec(textout(sql), cnt);
proc = SPI_processed;
/*
* If this is SELECT and some tuple(s) fetched -
* returns tuples to the caller via elog (NOTICE).
*/
if ( ret == SPI_OK_SELECT && SPI_processed > 0 )
{
TupleDesc tupdesc = SPI_tuptable->tupdesc;
SPITupleTable *tuptable = SPI_tuptable;
char buf[8192];
int i;
for (ret = 0; ret < proc; ret++)
{
HeapTuple tuple = tuptable->vals[ret];
for (i = 1, buf[0] = 0; i <= tupdesc->natts; i++)
sprintf(buf + strlen (buf), " %s%s",
SPI_getvalue(tuple, tupdesc, i),
(i == tupdesc->natts) ? " " : " |");
elog (NOTICE, "EXECQ: %s", buf);
}
}
SPI_finish();
return (proc);
}
----------------------------------------------------------------------------
Now, compile and create the function:
create function execq (text, int4) returns int4 as '...path_to_so' language 'c';
vac=> select execq('create table a (x int4)', 0);
execq
-----
0
(1 row)
vac=> insert into a values (execq('insert into a values (0)',0));
INSERT 167631 1
vac=> select execq('select * from a',0);
NOTICE:EXECQ: 0 <<< inserted by execq
NOTICE:EXECQ: 1 <<< value returned by execq and inserted by upper INSERT
execq
-----
2
(1 row)
vac=> select execq('insert into a select x + 2 from a',1);
execq
-----
1
(1 row)
vac=> select execq('select * from a', 10);
NOTICE:EXECQ: 0
NOTICE:EXECQ: 1
NOTICE:EXECQ: 2 <<< 0 + 2, only one tuple inserted - as specified
execq
-----
3 <<< 10 is max value only, 3 is real # of tuples
(1 row)
vac=> delete from a;
DELETE 3
vac=> insert into a values (execq('select * from a', 0) + 1);
INSERT 167712 1
vac=> select * from a;
x
-
1 <<< no tuples in a (0) + 1
(1 row)
vac=> insert into a values (execq('select * from a', 0) + 1);
NOTICE:EXECQ: 0
INSERT 167713 1
vac=> select * from a;
x
-
1
2 <<< there was single tuple in a + 1
(2 rows)
-- This demonstrates data changes visibility rule:
vac=> insert into a select execq('select * from a', 0) * x from a;
NOTICE:EXECQ: 1
NOTICE:EXECQ: 2
NOTICE:EXECQ: 1
NOTICE:EXECQ: 2
NOTICE:EXECQ: 2
INSERT 0 2
vac=> select * from a;
x
-
1
2
2 <<< 2 tuples * 1 (x in first tuple)
6 <<< 3 tuples (2 + 1 just inserted) * 2 (x in second tuple)
(4 rows) ^^^^^^^^
tuples visible to execq() in different invocations


@ -1,325 +0,0 @@
PostgreSQL Trigger Programming Guide
While the current version of PostgreSQL has various client interfaces
such as Perl, Tcl, Python and C, it lacks an actual Procedural Language
(PL). We hope to have a proper PL one day. In the meantime it is possible
to call C functions as trigger actions. Note that STATEMENT-level trigger
events are not supported in the current version. You can currently specify
BEFORE or AFTER on INSERT, DELETE or UPDATE of a tuple as a trigger event.
If a trigger event occurs, the trigger manager (called by the Executor)
initializes the global structure TriggerData *CurrentTriggerData (described
below) and calls the trigger function to handle the event.
The trigger function must be created before the trigger itself is created,
as a function taking no arguments and returning opaque.
The syntax for creating triggers is as follows.
CREATE TRIGGER <trigger name> <BEFORE|AFTER> <INSERT|DELETE|UPDATE>
ON <relation name> FOR EACH <ROW|STATEMENT>
EXECUTE PROCEDURE <procedure name> (<function args>);
The name of the trigger is used if you ever have to delete the trigger.
It is used as an argument to the DROP TRIGGER command.
The next word determines whether the function is called before or after
the event.
The next element of the command determines what event(s) will trigger
the function. Multiple events can be specified separated by OR.
The relation name determines which table the event applies to.
The FOR EACH clause determines whether the trigger is fired once for each
affected row or only once before (or after) the entire statement has completed.
The procedure name is the C function called.
The args are passed to the function in the CurrentTriggerData structure.
The purpose of passing arguments to the function is to allow different
triggers with similar requirements to call the same function.
Also, the same function may be used for triggers on different relations
(such functions are known as "general trigger functions").
As an example of using both features above, there could be a general
function that takes as its arguments two field names and puts the current
user in one and the current timestamp in the other. This allows triggers to
be written on INSERT events to automatically track creation of records in a
transaction table for example. It could also be used as a "last updated"
function if used in an UPDATE event.
Trigger functions return HeapTuple to the calling Executor. This
is ignored for triggers fired after an INSERT, DELETE or UPDATE operation
but it allows BEFORE triggers to:
- return NULL to skip the operation for the current tuple (and so the
tuple will not be inserted/updated/deleted);
- return a pointer to another tuple (INSERT and UPDATE only) which will
be inserted (as the new version of the updated tuple if UPDATE) instead
of the original tuple.
Note that there is no initialization performed by the CREATE TRIGGER
handler. This will be changed in the future. Also, if more than one trigger
is defined for the same event on the same relation, the order of trigger
firing is unpredictable. This may be changed in the future.
If a trigger function executes SQL-queries (using SPI) then these queries
may fire triggers again. This is known as cascading triggers. There is no
explicit limitation on the number of cascade levels.
If a trigger is fired by INSERT and inserts a new tuple in the same
relation then this trigger will be fired again. Currently, there is nothing
provided for synchronization (etc) of these cases but this may change. At
the moment, there is a function funny_dup17() in the regress tests which uses
some techniques to stop recursion (cascading) on itself...
Interaction with the trigger manager
As mentioned above, when a function is called by the trigger manager, the
structure TriggerData *CurrentTriggerData is NOT NULL and initialized. So
it is better to check that CurrentTriggerData is not NULL at the start,
and to set it to NULL just after fetching the information, to guard against
calls to the trigger function that do not come from the trigger manager.
struct TriggerData is defined in src/include/commands/trigger.h:
   typedef struct TriggerData
   {
       TriggerEvent    tg_event;
       Relation        tg_relation;
       HeapTuple       tg_trigtuple;
       HeapTuple       tg_newtuple;
       Trigger        *tg_trigger;
   } TriggerData;
tg_event
describes the event for which the function is called. You may use the
following macros to examine tg_event:
TRIGGER_FIRED_BEFORE(event) returns TRUE if trigger fired BEFORE;
TRIGGER_FIRED_AFTER(event) returns TRUE if trigger fired AFTER;
TRIGGER_FIRED_FOR_ROW(event) returns TRUE if trigger fired for
ROW-level event;
TRIGGER_FIRED_FOR_STATEMENT(event) returns TRUE if trigger fired for
STATEMENT-level event;
TRIGGER_FIRED_BY_INSERT(event) returns TRUE if trigger fired by INSERT;
TRIGGER_FIRED_BY_DELETE(event) returns TRUE if trigger fired by DELETE;
TRIGGER_FIRED_BY_UPDATE(event) returns TRUE if trigger fired by UPDATE.
tg_relation
is a pointer to a structure describing the triggered relation. Look at
src/include/utils/rel.h for details about this structure. The most
interesting fields are tg_relation->rd_att (descriptor of the relation's
tuples) and tg_relation->rd_rel->relname (the relation's name; this is
not a char* but a NameData - use SPI_getrelname(tg_relation) to get a
char* copy of the name).
tg_trigtuple
is a pointer to the tuple for which the trigger is fired. This is the tuple
being inserted (if INSERT), deleted (if DELETE) or updated (if UPDATE).
For INSERT/DELETE this is what you should return to the Executor if
you don't want to replace the tuple with another one (INSERT) or skip the
operation.
tg_newtuple
is a pointer to the new version of the tuple if UPDATE, and NULL if this is
an INSERT or a DELETE. This is what you should return to the Executor if
UPDATE and you don't want to replace this tuple with another one or skip
the operation.
tg_trigger
is a pointer to the structure Trigger defined in src/include/utils/rel.h:
   typedef struct Trigger
   {
       char      *tgname;
       Oid        tgfoid;
       func_ptr   tgfunc;
       int16      tgtype;
       int16      tgnargs;
       int16      tgattr[8];
       char     **tgargs;
   } Trigger;
tgname is the trigger's name, tgnargs is the number of arguments in tgargs,
tgargs is an array of pointers to the arguments specified in the CREATE
TRIGGER statement. Other members are for internal use only.
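To sketch how a general trigger function might use these fields, the fragment
below (hypothetical; it assumes the trigger was created with one argument
naming a column of the triggered relation) reads its argument from tg_trigger
and locates that column:

    TriggerData *trigdata = CurrentTriggerData;
    Trigger     *trigger  = trigdata->tg_trigger;
    TupleDesc    tupdesc  = trigdata->tg_relation->rd_att;
    char        *colname;
    int          attnum;

    if (trigger->tgnargs != 1)
        elog(WARN, "%s: expected one argument (a column name)", trigger->tgname);

    colname = trigger->tgargs[0];               /* from CREATE TRIGGER ... (colname) */
    attnum  = SPI_fnumber(tupdesc, colname);    /* 1-based attribute number */
    if (attnum <= 0)
        elog(WARN, "%s: relation %s has no attribute %s",
             trigger->tgname, SPI_getrelname(trigdata->tg_relation), colname);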
Visibility of Data Changes
PostgreSQL data changes visibility rule: during a query execution, data
changes made by the query itself (via SQL-function, SPI-function, triggers)
are invisible to the query scan. For example, in query
INSERT INTO a SELECT * FROM a
the tuples inserted are invisible to the SELECT's scan. In effect, this
duplicates the database table within itself (subject to unique index
rules, of course) without recursing.
But keep in mind this notice about visibility in the SPI documentation:
Changes made by query Q are visible to queries which are started after
query Q, no matter whether they are started inside Q (during the
execution of Q) or after Q is done.
This is true for triggers as well so, though a tuple being inserted
(tg_trigtuple) is not visible to queries in a BEFORE trigger, this tuple
(just inserted) is visible to queries in an AFTER trigger, and to queries
in BEFORE/AFTER triggers fired after this!
Examples
There are more complex examples in src/test/regress/regress.c and
in contrib/spi.
Here is a very simple example of trigger usage. Function trigf reports
the number of tuples in the triggered relation ttest and skips the
operation if the query attempts to insert NULL into x (i.e., it acts as a
NOT NULL constraint but doesn't abort the transaction).
----------------------------------------------------------------------------
#include "executor/spi.h" /* this is what you need to work with SPI */
#include "commands/trigger.h" /* -"- and triggers */
HeapTuple trigf(void);
HeapTuple
trigf()
{
TupleDesc tupdesc;
HeapTuple rettuple;
char *when;
bool checknull = false;
bool isnull;
int ret, i;
if (!CurrentTriggerData)
elog(WARN, "trigf: triggers are not initialized");
/* tuple to return to Executor */
if (TRIGGER_FIRED_BY_UPDATE(CurrentTriggerData->tg_event))
rettuple = CurrentTriggerData->tg_newtuple;
else
rettuple = CurrentTriggerData->tg_trigtuple;
/* check for NULLs ? */
if (!TRIGGER_FIRED_BY_DELETE(CurrentTriggerData->tg_event) &&
TRIGGER_FIRED_BEFORE(CurrentTriggerData->tg_event))
checknull = true;
if (TRIGGER_FIRED_BEFORE(CurrentTriggerData->tg_event))
when = "before";
else
when = "after ";
tupdesc = CurrentTriggerData->tg_relation->rd_att;
CurrentTriggerData = NULL;
/* Connect to SPI manager */
if ((ret = SPI_connect()) < 0)
elog(WARN, "trigf (fired %s): SPI_connect returned %d", when, ret);
/* Get number of tuples in relation */
ret = SPI_exec("select count(*) from ttest", 0);
if (ret < 0)
elog(WARN, "trigf (fired %s): SPI_exec returned %d", when, ret);
i = SPI_getbinval(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1, &isnull);
elog (NOTICE, "trigf (fired %s): there are %d tuples in ttest", when, i);
SPI_finish();
if (checknull)
{
i = SPI_getbinval(rettuple, tupdesc, 1, &isnull);
if (isnull)
rettuple = NULL;
}
return (rettuple);
}
----------------------------------------------------------------------------
Now, compile and create the table and function:
create table ttest (x int4);
create function trigf () returns opaque as
'...path_to_so' language 'c';
vac=> create trigger tbefore before insert or update or delete on ttest
for each row execute procedure trigf();
CREATE
vac=> create trigger tafter after insert or update or delete on ttest
for each row execute procedure trigf();
CREATE
vac=> insert into ttest values (null);
NOTICE:trigf (fired before): there are 0 tuples in ttest
INSERT 0 0
-- Insertion skipped and AFTER trigger is not fired
vac=> select * from ttest;
x
-
(0 rows)
vac=> insert into ttest values (1);
NOTICE:trigf (fired before): there are 0 tuples in ttest
NOTICE:trigf (fired after ): there are 1 tuples in ttest
^^^^^^^^
remember what we said about visibility.
INSERT 167793 1
vac=> select * from ttest;
x
-
1
(1 row)
vac=> insert into ttest select x * 2 from ttest;
NOTICE:trigf (fired before): there are 1 tuples in ttest
NOTICE:trigf (fired after ): there are 2 tuples in ttest
^^^^^^^^
remember what we said about visibility.
INSERT 167794 1
vac=> select * from ttest;
x
-
1
2
(2 rows)
vac=> update ttest set x = null where x = 2;
NOTICE:trigf (fired before): there are 2 tuples in ttest
UPDATE 0
vac=> update ttest set x = 4 where x = 2;
NOTICE:trigf (fired before): there are 2 tuples in ttest
NOTICE:trigf (fired after ): there are 2 tuples in ttest
UPDATE 1
vac=> select * from ttest;
x
-
1
4
(2 rows)
vac=> delete from ttest;
NOTICE:trigf (fired before): there are 2 tuples in ttest
NOTICE:trigf (fired after ): there are 1 tuples in ttest
NOTICE:trigf (fired before): there are 1 tuples in ttest
NOTICE:trigf (fired after ): there are 0 tuples in ttest
^^^^^^^^
remember what we said about visibility.
DELETE 2
vac=> select * from ttest;
x
-
(0 rows)