The pljava module allows stored procedures, triggers, and functions to be written in the Java language and executed in the Postgres Pro backend.
pljava provides the following main features:
An ability to write functions, triggers, and user-defined types using recent Java versions.
Standardized utilities to install and maintain Java code in a database.
Standardized mappings of parameters and results. Supports scalar and composite user-defined types (UDTs), pseudo-types, arrays, and sets.
An embedded high-performance JDBC driver utilizing the internal Postgres Pro SPI routines.
Metadata support for the JDBC driver. Both
DatabaseMetaData and
ResultSetMetaData are included.
Integration with Postgres Pro savepoints and exception handling.
An ability to use IN, INOUT,
and OUT parameters.
Two language handlers: javau (functions are
not restricted in behavior; only superusers can create them)
and java (functions run under a security
manager that blocks filesystem access; the users allowed to create
them are controlled with GRANT/REVOKE).
Transaction and savepoint listeners enabling code execution when a transaction or savepoint is committed or rolled back.
Backend functions and triggers are written in Java using a directly-connected efficient version of the standard Java JDBC API that pljava transparently provides, with enhanced capabilities found in the pljava API.
A function or trigger in SQL resolves to a static method in a Java class. For the function to execute, the appointed class must be installed in the database. pljava adds a set of functions that help with installing and maintaining Java classes.
The Java compiler also writes an SQLJ deployment descriptor containing the SQL statements that must be executed when installing and uninstalling the compiled Java code in the Postgres Pro backend.
The compiled Java code and the deployment descriptor file are stored together in a Java archive (JAR file). The sqlj.install_jar function both loads the code into Postgres Pro backend and executes the necessary SQL commands in the deployment descriptor, making new types, functions, and triggers available for use.
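As a sketch of that workflow (the file URL and JAR name below are hypothetical), installing such a JAR and running its deployment descriptor could look like:

```sql
-- Load the JAR into the backend; the final argument 'true' asks
-- sqlj.install_jar to also execute the INSTALL actions of the
-- deployment descriptor found in the JAR.
SELECT sqlj.install_jar('file:///tmp/examples.jar', 'examples', true);

-- Make the classes in the JAR resolvable for functions declared
-- in the javatest schema.
SELECT sqlj.set_classpath('javatest', 'examples');
```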
pljava implements a standardized way of
passing parameters and return values. Complex types and sets are passed
using the standard JDBC ResultSet
class. Great care was taken not to introduce any proprietary interfaces
unless absolutely necessary so that Java code written using
pljava becomes as database agnostic as possible.
A JDBC driver is included in pljava. This driver is written directly on top of the internal Postgres Pro SPI routines. It is essential, since it is very common for functions and triggers to access the database in turn. When they do, they must use the same transactional boundaries that were used by the caller.
pljava is optimized for performance. The Java virtual machine executes within the same process as the backend itself, which keeps call overhead very low. pljava is designed to bring the power of Java into the database itself, so that database-intensive business logic can execute as close to the actual data as possible.
The standard Java Native Interface (JNI) is used when bridging calls from the backend into the Java virtual machine and vice versa.
pljava is provided with Postgres Pro Enterprise
as a separate pre-built package pljava-ent-17
(for the detailed installation instructions, see Chapter 17).
The sqlj.install_jar, sqlj.replace_jar, and sqlj.remove_jar functions can act on a deployment descriptor, allowing SQL commands to be executed after the JAR file is installed or prior to its removal.
The descriptor is added as a normal text file to your JAR file. In the manifest of the JAR file, there must be an entry that appoints the file as the SQLJ deployment descriptor.
Name: deployment/examples.ddr
SQLJDeploymentDescriptor: TRUE
Such a file can be written by hand according to the format below but the
usual method is to add specific Java annotations in the source code, as
described in the Generating SQL
Automatically section. The Java compiler then generates
the deployment descriptor file at the same time it compiles the
Java sources, and the compiled classes and .ddr
file can all be placed in the JAR file together.
The format of the deployment descriptor is stipulated by ISO/IEC 9075-13:2003.
<descriptor_file> ::=
    SQLActions <left_bracket> <right_bracket> <equal_sign>
    { [ <double_quote> <action_group> <double_quote>
        [ <comma> <double_quote> <action_group> <double_quote> ] ] }

<action_group> ::=
    <install_actions>
  | <remove_actions>

<install_actions> ::=
    BEGIN INSTALL [ <command> <semicolon> ]... END INSTALL

<remove_actions> ::=
    BEGIN REMOVE [ <command> <semicolon> ]... END REMOVE

<command> ::=
    <SQL_statement>
  | <implementor_block>

<SQL_statement> ::= <SQL_token>...

<implementor_block> ::=
    BEGIN <implementor_name> <SQL_token>... END <implementor_name>

<implementor_name> ::= <identifier>

<SQL_token> ::= ! an SQL lexical unit specified by the term "<token>"
    in Subclause 5.2, "<token> and <separator>", in ISO/IEC 9075-2
If implementor blocks are used, pljava considers
only those with the PostgreSQL implementor name
(case insensitive) by default. Here is a sample deployment descriptor:
SQLActions[] = {
"BEGIN INSTALL
CREATE FUNCTION javatest.java_getTimestamp()
RETURNS timestamp
AS 'org.postgresql.pljava.example.Parameters.getTimestamp'
LANGUAGE java;
END INSTALL",
"BEGIN REMOVE
DROP FUNCTION javatest.java_getTimestamp();
END REMOVE"
}
Although, by default, only the PostgreSQL implementor
name is recognized, the implementor name(s) to be recognized can be set
as a list in the pljava.implementors configuration
parameter. It is consulted after every command while executing a deployment
descriptor, which gives code in the descriptor a rudimentary form of conditional
execution control, by changing which implementor blocks will be
executed based on discovered conditions.
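As a rough sketch of that technique (the implementor name WithIdentity and the version check are hypothetical, not taken from the pljava examples), an unconditional command early in the descriptor can change pljava.implementors, and a later implementor block then runs only if its name is in the list:

```sql
SQLActions[] = {
"BEGIN INSTALL
SELECT CASE WHEN current_setting('server_version_num')::int >= 100000
  THEN set_config('pljava.implementors', 'WithIdentity,PostgreSQL', true)
  ELSE current_setting('pljava.implementors')
END;
BEGIN WithIdentity
CREATE FUNCTION javatest.java_getTimestamp()
  RETURNS timestamp
  AS 'org.postgresql.pljava.example.Parameters.getTimestamp'
  LANGUAGE java
END WithIdentity;
END INSTALL"
}
```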
A Java function is declared with the name of a class and a public
static method on that class. The class is resolved using the
classpath that has been defined for the schema
where the function is declared. If no classpath
has been defined for that schema, the classpath of the
public schema is used. Note that the
System ClassLoader always takes precedence;
there is no way to override classes loaded with that loader.
The following function can be declared to access the static
getProperty method of the
java.lang.System class:
CREATE FUNCTION getsysprop(VARCHAR)
RETURNS VARCHAR
AS 'java.lang.System.getProperty'
LANGUAGE java;
SELECT getsysprop('java.version');
Both the parameters and the return value can be explicitly stated, so the example above can also be written as follows:
CREATE FUNCTION getsysprop(VARCHAR)
RETURNS VARCHAR
AS 'java.lang.String=java.lang.System.getProperty(java.lang.String)'
LANGUAGE java;
This way of declaring the function is useful when the default mapping is inadequate. pljava uses a standard Postgres Pro explicit cast when the SQL type of the parameter or return value does not correspond to the Java type defined in the mapping.
Note that the explicit cast here referred to is not accomplished
by creating an actual SQL CAST expression but
by mostly equivalent means.
The simplest way to write the SQL function declaration that corresponds to your Java code is to have the Java compiler generate it from an annotation:
public class Hello {
@Function
public static String hello(String toWhom) {
return "Hello, " + toWhom + "!";
}
}
When this function is compiled, a deployment descriptor containing
the right SQL function declaration is also produced. When it is
included in a JAR file with the compiled code,
the sqlj.install_jar function of
pljava creates the SQL function
declaration at the same time it loads the file.
The method signature of a trigger is predefined. A trigger method
must always return void and have the
org.postgresql.pljava.TriggerData parameter. The
TriggerData interface provides access to two
java.sql.ResultSet instances: one representing the
old row and one representing the new row. The old row is read-only, while
the new row can be updated.
ResultSets are only available for triggers that
are fired on each row. Delete triggers have no new row, and insert triggers
have no old row. Only update triggers have both.
In addition to the result sets, several boolean methods exist to gain more information about how the trigger was fired.
CREATE TABLE mdt (
id int4,
idesc text,
moddate timestamp DEFAULT CURRENT_TIMESTAMP NOT NULL);
CREATE FUNCTION moddatetime()
RETURNS trigger
AS 'org.postgresql.pljava.example.Triggers.moddatetime'
LANGUAGE java;
CREATE TRIGGER mdt_moddatetime
BEFORE UPDATE ON mdt
FOR EACH ROW
EXECUTE PROCEDURE moddatetime (moddate);
The corresponding Java code looks as follows:
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

import org.postgresql.pljava.TriggerData;
import org.postgresql.pljava.TriggerException;

/**
 * Update a modification time when the row is updated.
 */
static void moddatetime(TriggerData td)
throws SQLException
{
if(td.isFiredForStatement())
throw new TriggerException(td, "can't process STATEMENT events");
if(td.isFiredAfter())
throw new TriggerException(td, "must be fired before event");
if(!td.isFiredByUpdate())
throw new TriggerException(td, "can only process UPDATE events");
ResultSet _new = td.getNew();
String[] args = td.getArguments();
if(args.length != 1)
throw new TriggerException(td, "one argument was expected");
_new.updateTimestamp(args[0], new Timestamp(System.currentTimeMillis()));
}
Scalar types are mapped in a straightforward way. The table below shows the current mappings.
Table H.9. Scalar Type Mapping
| Postgres Pro | Java |
|---|---|
| bool | boolean |
| "char" | byte |
| int2 | short |
| int4 | int |
| int8 | long |
| float4 | float |
| float8 | double |
| char | java.lang.String |
| varchar | java.lang.String |
| text | java.lang.String |
| name | java.lang.String |
| bytea | byte[] |
| date | java.sql.Date |
| time | java.sql.Time (stored value treated as local time) |
| timetz | java.sql.Time |
| timestamp | java.sql.Timestamp (stored value treated as local time) |
| timestamptz | java.sql.Timestamp |
All scalar types can be represented as arrays. Although
Postgres Pro allows you to declare
multidimensional arrays with fixed sizes, pljava
treats all arrays as having one dimension (with the exception of
bytea[], which maps to byte[][]). The
reason for this is that the information about dimensions and sizes
is not stored anywhere and not enforced in any way.
The current implementation enforces neither the declared array sizes
nor the declared number of dimensions: arrays of a particular element
type are considered to be of the same type regardless of size or
number of dimensions, and declaring sizes or dimensions in
CREATE TABLE does not affect run-time behavior.
Table H.10. Scalar Array Type Mapping
| Postgres Pro | Java |
|---|---|
| bool[] | boolean[] |
| "char"[] | byte[] |
| int2[] | short[] |
| int4[] | int[] |
| int8[] | long[] |
| float4[] | float[] |
| float8[] | double[] |
| char[] | java.lang.String[] |
| varchar[] | java.lang.String[] |
| text[] | java.lang.String[] |
| name[] | java.lang.String[] |
| bytea[] | byte[][] |
| date[] | java.sql.Date[] |
| time[] | java.sql.Time[] (stored value treated as local time) |
| timetz[] | java.sql.Time[] |
| timestamp[] | java.sql.Timestamp[] (stored value treated as local time) |
| timestamptz[] | java.sql.Timestamp[] |
A domain type is mapped in accordance with the type that it extends unless you installed a specific mapping to override that behavior.
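For instance (the domain and function names here are hypothetical), a domain over int4 is passed to Java as int, the mapping of its underlying type:

```sql
-- A domain over int4; with no specific mapping added via
-- sqlj.add_type_mapping, pljava maps it like int4, i.e. to Java int.
CREATE DOMAIN javatest.percentage AS int4
  CHECK (VALUE BETWEEN 0 AND 100);

CREATE FUNCTION javatest.half(javatest.percentage)
  RETURNS int4
  AS 'org.postgresql.pljava.example.Parameters.half'
  LANGUAGE java;
```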
Table H.11. Pseudo-Type Mapping
| Postgres Pro | Java |
|---|---|
| "any" | java.lang.Object |
| anyelement | java.lang.Object |
| anyarray | java.lang.Object[] |
| cstring | java.lang.String |
| record | java.sql.ResultSet |
| trigger | org.postgresql.pljava.TriggerData (see Triggers) |
NULL Handling of Primitives #
Scalar types that map to Java primitives cannot be passed as
NULL values. To allow NULL, those
types have an alternative object-type mapping, which you enable by
stating the Java types explicitly in the method reference.
CREATE FUNCTION trueIfEvenOrNull(integer)
RETURNS bool
AS 'foo.fee.Fum.trueIfEvenOrNull(java.lang.Integer)'
LANGUAGE java;
In Java code, you should have something like:
package foo.fee;
public class Fum
{
public static boolean trueIfEvenOrNull(Integer value)
{
return (value == null)
? true
: (value.intValue() % 2) == 0;
}
}
The following statements should yield true:
SELECT trueIfEvenOrNull(NULL);
SELECT trueIfEvenOrNull(4);
To return NULL values from a Java method, use
the object type that corresponds to the primitive (i.e. return
java.lang.Integer instead of int).
The pljava resolver mechanism finds the
method anyway. Since two Java methods with the same name and
parameters cannot differ only in return type, this does not
introduce any ambiguities.
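As a minimal illustration (the class and method here are hypothetical, not part of the pljava examples), declaring the return type as java.lang.Integer lets the method return null, which pljava would pass on as SQL NULL:

```java
public class NullableReturn {
    // Declared as Integer rather than int, so the method can return
    // null; pljava would map that null to SQL NULL.
    public static Integer halveIfEven(int value) {
        return (value % 2 == 0) ? value / 2 : null;
    }
}
```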
It is also possible to have NULL values in arrays.
pljava handles them in the same way
as scalar NULLs, i.e. you can declare methods that take a
java.lang.Integer[] parameter instead of an
int[] parameter.
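A short sketch (the class and method are hypothetical): with java.lang.Integer[] instead of int[], individual elements can arrive as Java null wherever the SQL array held NULL:

```java
public class ArrayNulls {
    // Integer[] rather than int[]: each element may be null, which is
    // how pljava would represent a SQL NULL element.
    public static int countNonNull(Integer[] values) {
        int count = 0;
        if (values != null)
            for (Integer v : values)
                if (v != null)
                    count++;
        return count;
    }
}
```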
A composite type is passed as a read-only java.sql.ResultSet
instance containing a single row. The ResultSet is
already positioned on that row, so no call to next()
should be made. Values of the composite type are retrieved using
the standard getter methods of ResultSet.
CREATE TYPE compositeTest
AS(base integer, incbase integer, ctime timestamptz);
CREATE FUNCTION useCompositeTest(compositeTest)
RETURNS VARCHAR
AS 'foo.fee.Fum.useCompositeTest'
IMMUTABLE LANGUAGE java;
In the Fum class, the following static method
is added:
public static String useCompositeTest(ResultSet compositeTest)
throws SQLException
{
int base = compositeTest.getInt(1);
int incbase = compositeTest.getInt(2);
Timestamp ctime = compositeTest.getTimestamp(3);
return "Base = \"" + base +
"\", incbase = \"" + incbase +
"\", ctime = \"" + ctime + "\"";
}
Types that have no mapping are currently mapped to
java.lang.String. The standard Postgres Pro
textin/textout routines
registered for respective types are used when values are converted.
Using pljava, you can install a mapping between an arbitrary type and a Java class. There are the following prerequisites for doing this:
You must know the storage layout of the SQL type that you map.
The Java class that you map must implement the
java.sql.SQLData interface.
This example shows how to map the Postgres Pro
geometric point type to a Java class. A point is stored as two
float8 values: the x and
y coordinates.
Once the layout of the point type is known, you can create the
java.sql.SQLData implementation that uses the
java.sql.SQLInput class to read data and the
java.sql.SQLOutput class to write data.
package org.postgresql.pljava.example;
import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;
public class Point implements SQLData {
private double m_x;
private double m_y;
private String m_typeName;
public String getSQLTypeName() {
return m_typeName;
}
public void readSQL(SQLInput stream, String typeName) throws SQLException {
m_x = stream.readDouble();
m_y = stream.readDouble();
m_typeName = typeName;
}
public void writeSQL(SQLOutput stream) throws SQLException {
stream.writeDouble(m_x);
stream.writeDouble(m_y);
}
/* Meaningful code that actually does something with this type was
* intentionally left out
*/
}
Finally, install the type mapping using the
add_type_mapping command:
SELECT sqlj.add_type_mapping('point', 'org.postgresql.pljava.example.Point');
Now you can use this new class. pljava
maps any point parameter to the
org.postgresql.pljava.example.Point class.
Here is an example of a complex type created as a composite user-defined type.
CREATE TYPE javatest.complextuple AS (x float8, y float8);
SELECT sqlj.add_type_mapping('javatest.complextuple',
'org.postgresql.pljava.example.ComplexTuple');
package org.postgresql.pljava.example;
import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;
public class ComplexTuple implements SQLData {
private double m_x;
private double m_y;
private String m_typeName;
public String getSQLTypeName()
{
return m_typeName;
}
public void readSQL(SQLInput stream, String typeName) throws SQLException
{
m_typeName = typeName;
m_x = stream.readDouble();
m_y = stream.readDouble();
}
public void writeSQL(SQLOutput stream) throws SQLException
{
stream.writeDouble(m_x);
stream.writeDouble(m_y);
}
/* Meaningful code that actually does something with this type was
* intentionally left out
*/
}
The SQL shown above for this example will be written by the Java
compiler if the ComplexTuple class
is annotated as a “mapped user-defined type”
with the desired SQL name and structure.
@MappedUDT(schema="javatest", name="complextuple",
structure={"x float8", "y float8"})
public class ComplexTuple implements SQLData {
...
Generating SQL reduces the burden of maintaining definitions in two places.
This text assumes that you have some familiarity with how scalar types are created and added to the Postgres Pro type system. For more information, refer to User-Defined Types.
Creating a new scalar type using Java functions is very similar to
how they are created using C functions from an SQL perspective but
different when looking at the actual implementation. Java stipulates
that a mapping between a Java class and a corresponding SQL type
should be done using the java.sql.SQLData,
java.sql.SQLInput, and java.sql.SQLOutput
interfaces, which are used by pljava.
In addition, the Postgres Pro type system
stipulates that each type must have a textual representation.
The example below shows how to create a type called
javatest.complex. The name of the corresponding
Java class will be
org.postgresql.pljava.example.ComplexScalar.
The Java class for a scalar UDT must implement the
java.sql.SQLData interface. In addition, it
must implement a static parse() method that
creates and returns an instance of the class, and a
toString() method that returns something
the parse() method can parse.
package org.postgresql.pljava.example;
import java.io.IOException;
import java.io.StreamTokenizer;
import java.io.StringReader;
import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;
import java.util.logging.Logger;
import org.postgresql.pljava.annotation.Function;
import org.postgresql.pljava.annotation.SQLType;
import org.postgresql.pljava.annotation.BaseUDT;
import static org.postgresql.pljava.annotation.Function.Effects.IMMUTABLE;
import static
org.postgresql.pljava.annotation.Function.OnNullInput.RETURNS_NULL;
@BaseUDT(schema="javatest", name="complex",
internalLength=16, alignment=BaseUDT.Alignment.DOUBLE)
public class ComplexScalar implements SQLData
{
private double m_x;
private double m_y;
private String m_typeName;
private static Logger s_logger = Logger.getAnonymousLogger();
@Function(effects=IMMUTABLE, onNullInput=RETURNS_NULL)
public static ComplexScalar parse(String input, String typeName)
throws SQLException
{
try
{
StreamTokenizer tz = new StreamTokenizer(new StringReader(input));
if(tz.nextToken() == '('
&& tz.nextToken() == StreamTokenizer.TT_NUMBER)
{
double x = tz.nval;
if(tz.nextToken() == ','
&& tz.nextToken() == StreamTokenizer.TT_NUMBER)
{
double y = tz.nval;
if(tz.nextToken() == ')')
{
return new ComplexScalar(x, y, typeName);
}
}
}
throw new SQLException("Unable to parse complex from string \""
+ input + '"');
}
catch(IOException e)
{
throw new SQLException(e.getMessage());
}
}
public ComplexScalar()
{
}
public ComplexScalar(double x, double y, String typeName)
{
m_x = x;
m_y = y;
m_typeName = typeName;
}
@Override
public String getSQLTypeName()
{
return m_typeName;
}
@Function(effects=IMMUTABLE, onNullInput=RETURNS_NULL)
@Override
public void readSQL(SQLInput stream, String typeName) throws SQLException
{
m_x = stream.readDouble();
m_y = stream.readDouble();
m_typeName = typeName;
}
@Function(effects=IMMUTABLE, onNullInput=RETURNS_NULL)
@Override
public void writeSQL(SQLOutput stream) throws SQLException
{
stream.writeDouble(m_x);
stream.writeDouble(m_y);
}
@Function(effects=IMMUTABLE, onNullInput=RETURNS_NULL)
@Override
public String toString()
{
s_logger.info(m_typeName + " toString");
StringBuffer sb = new StringBuffer();
sb.append('(');
sb.append(m_x);
sb.append(',');
sb.append(m_y);
sb.append(')');
return sb.toString();
}
/* Meaningful code that actually does something with this type was
* intentionally left out
*/
}
The class itself is annotated with @BaseUDT,
giving its SQL schema and name, as well as the length and alignment
needed for its internal stored form.
Because the compiler knows that the class is a BaseUDT,
it expects the parse(), toString(),
readSQL(), and writeSQL()
methods to be present and will generate correct SQL to declare them
as functions to Postgres Pro. The
@Function annotations are only there to
declare immutability and on-null-input behavior for those
methods, because those values are not the defaults when
declaring a function.
pljava handles a complex return value as
an appended OUT parameter. If you
declare a function that returns a complex type, you will need to
use a Java method with the boolean return type and with a last
parameter of the java.sql.ResultSet type, added after
all of the visible method parameters. That output parameter will be
initialized to an updatable ResultSet that contains
exactly one row.
CREATE FUNCTION createComplexTest(int, int)
  RETURNS complexTest
  AS 'foo.fee.Fum.createComplexTest'
  IMMUTABLE LANGUAGE java;
The pljava method resolver will now find the
following method in the foo.fee.Fum class:
public static boolean createComplexTest(int base, int increment, ResultSet receiver)
throws SQLException
{
receiver.updateInt(1, base);
receiver.updateInt(2, base + increment);
receiver.updateTimestamp(3, new Timestamp(System.currentTimeMillis()));
return true;
}
The return value denotes if the receiver parameter
should be considered as a valid tuple (true) or
NULL (false).
Returning sets is tricky. You should not first build the entire set
and then return it, since large sets would require excessive resources.
It is better to produce one row at a time. Incidentally, that is exactly
what the Postgres Pro backend expects from a function that
returns SETOF <type>. The <type>
can be a scalar type, such as int, float, or
varchar, a complex type, or the RECORD type.
In order to return a set of a scalar type, you need to create a Java
method that returns an implementation of the
java.util.Iterator interface.
CREATE FUNCTION javatest.getNames()
  RETURNS SETOF varchar
  AS 'foo.fee.Bar.getNames'
  IMMUTABLE LANGUAGE java;
The corresponding Java class:
package foo.fee;
import java.util.ArrayList;
import java.util.Iterator;
import org.postgresql.pljava.annotation.Function;
import static org.postgresql.pljava.annotation.Function.Effects.IMMUTABLE;
public class Bar
{
@Function(schema="javatest", effects=IMMUTABLE)
public static Iterator<String> getNames()
{
ArrayList<String> names = new ArrayList<>();
names.add("Lisa");
names.add("Bob");
names.add("Bill");
names.add("Sally");
return names.iterator();
}
}
A method returning a set of a complex type must use either the
org.postgresql.pljava.ResultSetProvider or
org.postgresql.pljava.ResultSetHandle interface.
The reason for having two interfaces is that they cater for optimal
handling of two distinct use cases. The former is great when you
want to dynamically create each row that is to be returned from
the SETOF function. The latter makes sense
when you want to return the result of an executed query.
ResultSetProvider Interface #
This interface has two methods:
boolean assignRowValues(java.sql.ResultSet tupleBuilder, int rowNumber)
and void close(). The Postgres Pro
query evaluator will call the assignRowValues
method repeatedly until it returns false or until
the evaluator decides that it does not need any more rows. It will then
call the close method.
You can use this interface the following way:
CREATE FUNCTION javatest.listComplexTests(int, int)
  RETURNS SETOF complexTest
  AS 'foo.fee.Fum.listComplexTests'
  IMMUTABLE LANGUAGE java;
The function maps to a static Java method that returns an instance
that implements the ResultSetProvider interface.
public class Fum implements ResultSetProvider
{
private final int m_base;
private final int m_increment;
public Fum(int base, int increment)
{
m_base = base;
m_increment = increment;
}
public boolean assignRowValues(ResultSet receiver, int currentRow)
throws SQLException
{
// Stop when reaching 12 rows
//
if(currentRow >= 12)
return false;
receiver.updateInt(1, m_base);
receiver.updateInt(2, m_base + m_increment * currentRow);
receiver.updateTimestamp(3, new Timestamp(System.currentTimeMillis()));
return true;
}
public void close()
{
// Nothing needed in this example
}
@Function(effects=IMMUTABLE, schema="javatest", type="complexTest")
public static ResultSetProvider listComplexTests(int base, int increment)
throws SQLException
{
return new Fum(base, increment);
}
}
The listComplexTests(int base, int increment)
method is called once. It may return NULL if
no results are available or an instance of ResultSetProvider.
Here the Fum class implements this interface, so it
returns an instance of itself. The
assignRowValues(ResultSet receiver, int currentRow)
method will then be called repeatedly until it returns
false. At that time, close will
be called.
The currentRow parameter can be useful in some
cases and unnecessary in others. It will be passed as 0
on the first call and incremented by 1 on each
subsequent call. If ResultSetProvider is
returning results from some source (like Iterator)
that remembers its own position, it can simply ignore currentRow.
ResultSetHandle Interface #
This interface is similar to the
ResultSetProvider interface in that it has a
close method that will be called at the end. But
instead of having the evaluator call a method that builds one row
at a time, this interface has a getResultSet
method that returns a ResultSet.
The query evaluator will iterate over this set and deliver its contents,
one tuple at a time, until the call to next returns
false or the evaluator decides that no more
rows are needed.
Here is an example that executes a query using a statement that it obtained using the default connection. The SQL looks like this:
CREATE FUNCTION javatest.listSupers()
  RETURNS SETOF pg_user
  AS 'org.postgresql.pljava.example.Users.listSupers'
  LANGUAGE java;

CREATE FUNCTION javatest.listNonSupers()
  RETURNS SETOF pg_user
  AS 'org.postgresql.pljava.example.Users.listNonSupers'
  LANGUAGE java;
And here is the Java code:
public class Users implements ResultSetHandle
{
private final String m_filter;
private Statement m_statement;
public Users(String filter)
{
m_filter = filter;
}
public ResultSet getResultSet()
throws SQLException
{
m_statement = DriverManager.getConnection("jdbc:default:connection")
.createStatement();
return m_statement.executeQuery("SELECT * FROM pg_user WHERE " + m_filter);
}
public void close()
throws SQLException
{
m_statement.close();
}
@Function(schema="javatest", type="pg_user")
public static ResultSetHandle listSupers()
{
return new Users("usesuper = true");
}
@Function(schema="javatest", type="pg_user")
public static ResultSetHandle listNonSupers()
{
return new Users("usesuper = false");
}
}
pljava contains a JDBC driver that maps to the Postgres Pro SPI functions. A connection that maps to the current transaction can be obtained using the following statement:
Connection conn = DriverManager.getConnection("jdbc:default:connection");
Now you can prepare and execute statements just like with any other JDBC connection. There are a couple of limitations:
The transaction cannot be managed in any way. Thus, you cannot use methods on the connection such as:
commit()
rollback()
setAutoCommit()
setTransactionIsolation()
A savepoint cannot outlive the function in which it was set and it must also be rolled back or released by that same function.
ResultSets returned from
executeQuery() are always
FETCH_FORWARD and
CONCUR_READ_ONLY.
CallableStatement (for stored procedures)
is not yet implemented.
Clob/Blob types need more work.
byte[] and String work fine for
bytea/text respectively. A more efficient
mapping is planned where the actual array is not copied.
You can catch and handle an exception in the Postgres Pro
backend just like any other exception. The backend
ErrorData structure is exposed as a property in
the ServerException class derived from
java.sql.SQLException, and the Java try/catch
mechanism is synchronized with the backend mechanism.
For several reasons, referring to ServerException
and ErrorData from your code is not currently
recommended and may become impossible in the future. An improved
mechanism is expected in future releases. Until then, using only
the standard Java API of java.sql.SQLException
and its standard attributes (such as SQLState)
is recommended wherever possible.
pljava will always catch exceptions that you
do not. They will cause a Postgres Pro error and
the message is logged using the Postgres Pro
logging utilities. The stack trace of the exception
will also be printed if the log_min_messages
configuration parameter is set to DEBUG1 or lower.
When the backend throws an exception, you will not be able to continue executing backend functions until your function returns and the error is propagated, unless you used a savepoint. When a savepoint is rolled back, the exceptional condition is reset and execution can continue.
Postgres Pro savepoints are exposed using the
standard setSavepoint() and
releaseSavepoint() methods of the
java.sql.Connection interface. The following restrictions
apply:
A savepoint must be rolled back or released in the function where it was set.
A savepoint must not outlive the function where it was set.
“Function” here refers to the pljava function that is called from SQL. The restrictions do not prevent the Java code from being organized into several methods but the savepoint cannot survive after the eventual return from Java to the SQL caller.
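A sketch of code that obeys these rules (the table and function names are hypothetical, and this requires a live pljava session, so it is illustrative only): the savepoint is set, then either released or rolled back before the pljava function returns:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Savepoint;

public class SavepointExample {
    // Hypothetical pljava function: attempts an insert and uses a
    // savepoint to recover if the statement fails.
    public static String tryInsert(String value) throws SQLException {
        Connection conn =
            DriverManager.getConnection("jdbc:default:connection");
        Savepoint sp = conn.setSavepoint("attempt");
        try {
            PreparedStatement stmt = conn.prepareStatement(
                "INSERT INTO mytable(val) VALUES (?)");
            stmt.setString(1, value);
            stmt.execute();
            conn.releaseSavepoint(sp); // released before the function returns
            return "inserted";
        } catch (SQLException e) {
            conn.rollback(sp);         // resets the error; execution continues
            return "failed: " + e.getSQLState();
        }
    }
}
```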
pljava uses the standard
java.util.logging.Logger class. Hence, you can
write things like:
Logger.getAnonymousLogger().info(
"Time is " + new Date(System.currentTimeMillis()));
At present, Logger is hardwired to a handler that
maps the level in the log_min_messages configuration
parameter to a valid Logger level and that outputs all
messages using the ereport() backend function.
Importantly, Logger methods can quickly discard any
message logged at a finer level than the one that was mapped from the
Postgres Pro parameter at the time
pljava was first used in the current session.
Such messages never even get as far as ereport() even
if the Postgres Pro parameter is changed later.
So, if expected messages from Java code are not showing up, make sure
that the Postgres Pro parameters are fine enough at the
time of the first use of pljava in the session, so
that Java will not throw the messages away. Once pljava
has started, the parameters can be changed as desired and will control,
in the usual way, what ereport() does with the messages
pljava delivers to it.
The cutoff level in Java is set based on the finer of
log_min_messages and client_min_messages.
The following mapping applies between the Logger levels
and the Postgres Pro backend levels:
Table H.12. Logger Level Mapping
| java.util.logging.Level | Postgres Pro level |
|---|---|
| SEVERE | ERROR |
| WARNING | WARNING |
| INFO | INFO |
| FINE | DEBUG1 |
| FINER | DEBUG2 |
| FINEST | DEBUG3 |
sqlj.install_jar
#
Loads a JAR file from a location given by a URL into the SQLJ repository. It is an error if a JAR file with the given name already exists in the repository.
Usage:
SELECT sqlj.install_jar(<jar_url>, <jar_name>, <deploy>);
Parameters:
jar_url: The URL that denotes the location
of the JAR file that should be loaded.
jar_name: The name by which this JAR file
can be referenced once it has been loaded.
deploy: true if the
JAR file should be deployed according to a deployment
descriptor, false otherwise.
sqlj.replace_jar
#
Replaces a loaded JAR file with another JAR file. Use it to update already loaded files. It is an error if the JAR file is not found.
Usage:
SELECT sqlj.replace_jar(<jar_url>, <jar_name>, <redeploy>);
Parameters:
jar_url: The URL that denotes the location
of the new JAR file to load.
jar_name: The name of the JAR file to be
replaced.
redeploy: true if the
JAR file should be undeployed according to the deployment
descriptor of the old JAR file and deployed according to the
deployment descriptor of the new JAR file, false
otherwise.
sqlj.remove_jar
#
Drops the JAR file from the JAR repository. Any classpath
that references this JAR file is updated accordingly. It is an error
if the JAR file is not found.
Usage:
SELECT sqlj.remove_jar(<jar_name>, <undeploy>);
Parameters:
jar_name: The name of the JAR file to be removed.
undeploy: true if the
JAR file should be undeployed according to the deployment
descriptor, false otherwise.
sqlj.get_classpath
#
Returns the classpath that was defined for the
given schema. NULL is returned if the schema has
no classpath. It is an error if the given schema
does not exist.
Usage:
SELECT sqlj.get_classpath(<schema>);
Parameters:
schema: The name of the schema.
sqlj.set_classpath
#
Defines a classpath for the given schema. A
classpath consists of a colon-separated list of
JAR names. It is an error if the given schema does not exist or if
one or more JAR names reference nonexistent JAR files.
Usage:
SELECT sqlj.set_classpath(<schema>, <classpath>);
Parameters:
schema: The name of the schema.
classpath: The colon-separated list of
JAR names.
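For instance, a typical sequence installs a JAR, runs its deployment actions, and then puts it on a schema's classpath. The file path, JAR name, and schema name below are placeholders:

```sql
-- Hypothetical example: load a JAR, run its deployment descriptor, and
-- make its classes visible to functions in schema myschema.
SELECT sqlj.install_jar('file:///srv/jars/myjar.jar', 'myjar', true);
SELECT sqlj.set_classpath('myschema', 'myjar');
SELECT sqlj.get_classpath('myschema');
```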
sqlj.add_type_mapping
#Installs a mapping between a SQL type and a Java class. Once the mapping is in place, parameters and return values are mapped accordingly. Read Mapping SQL Type to Java Class for detailed information.
Usage:
SELECT sqlj.add_type_mapping(<sql_type>, <java_class>);
Parameters:
sql_type: The name of the SQL type. The
name can be qualified with a schema (namespace). If the schema
is omitted, it is resolved according to the current value of
the search_path parameter.
java_class: The name of the class. The
class must be found in the classpath in
effect for the current schema.
sqlj.drop_type_mapping
#Removes a mapping between a SQL type and a Java class.
Usage:
SELECT sqlj.drop_type_mapping(<sql_type>);
Parameters:
sql_type: The name of the SQL type. The
name can be qualified with a schema (namespace). If the schema
is omitted, it is resolved according to the current value of
the search_path parameter.
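A class used with sqlj.add_type_mapping is expected to implement the standard java.sql.SQLData interface, reading and writing its attributes in the order the SQL type declares them. The following minimal sketch uses a hypothetical SQL type name and fields:

```java
import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;

// Minimal sketch of a class suitable for sqlj.add_type_mapping; the SQL
// type name javatest.complex and its two double attributes are hypothetical.
public class ComplexNumber implements SQLData {
    private double x;
    private double y;
    private String typeName = "javatest.complex";

    @Override
    public String getSQLTypeName() {
        return typeName;
    }

    @Override
    public void readSQL(SQLInput in, String typeName) throws SQLException {
        this.typeName = typeName;
        x = in.readDouble();  // read attributes in declaration order
        y = in.readDouble();
    }

    @Override
    public void writeSQL(SQLOutput out) throws SQLException {
        out.writeDouble(x);   // write attributes in the same order
        out.writeDouble(y);
    }
}
```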
The install_jar and replace_jar
functions accept a URL (that must be reachable from the server) to a
JAR file. It is even possible, using the rules for URLs of JAR files,
to construct one that refers to a JAR file within another JAR file. For example:
jar:file:outer.jar!/inner.jar
However, Java caching of the “outer” JAR file may frustrate attempts to replace or reload a newer version within the same session.
Several configuration parameters can affect pljava operation, including some common Postgres Pro parameters as well as pljava's own parameters.
check_function_bodies
#
Affects how strictly pljava validates
a new function at the time of CREATE FUNCTION
execution or when installing a JAR file with
CREATE FUNCTION among its deployment actions.
With check_function_bodies set to on,
pljava makes sure that the referenced
class and method can be loaded and resolved. If the referenced
class depends on classes in other JAR files, those other JAR files
must be already installed and specified in the classpath,
so loading JAR files with dependencies in the wrong order can cause
validation errors. With check_function_bodies
set to off, only basic syntax is checked at
CREATE FUNCTION time, so it is possible to
declare functions or install JAR files in any order, postponing
any errors about unresolved dependencies until the
functions are actually used.
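For example, with validation relaxed, a function can be declared before the JAR providing its class is installed. The function, schema, and class names below are hypothetical:

```sql
-- With check_function_bodies off, the referenced class need not be
-- resolvable yet at CREATE FUNCTION time.
SET check_function_bodies TO off;
CREATE FUNCTION myschema.to_upper(text) RETURNS text
  AS 'org.example.Util.toUpper'
  LANGUAGE java;
```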
dynamic_library_path
#
Influences where native pljava code
objects can be found if the full path is not given to the
LOAD command.
server_encoding
#
Affects all text/character strings exchanged between
Postgres Pro and Java.
UTF8 as the database and server encoding is
strongly recommended. If a different encoding is used, it should
be one of the fully defined character encodings. In
particular, the Postgres Pro
SQL_ASCII pseudo-encoding does not define
what values outside ASCII represent; it is usable, but with
limitations.
pljava.allow_unenforced
#
Only used when pljava is run with no
policy enforcement, this parameter is a list of language names (such
as javau and java) in which
functions will be allowed to execute. This parameter has an empty
default and should be changed carefully.
pljava.allow_unenforced_udt
#
Only used when pljava is run with no
policy enforcement, this parameter controls whether data conversion
functions associated with pljava mapped
user-defined types are allowed to execute. This parameter defaults to
off and should be changed carefully.
pljava.enable
#
Setting this parameter to off prevents
pljava startup from completing until
the parameter is later set to on. It can be
useful for debugging purposes.
pljava.implementors
#
A list of “implementor names” that pljava
recognizes when processing deployment
descriptors inside a JAR file being installed or removed.
Deployment descriptors can contain commands with no implementor
name, which are always executed, or commands with an implementor name,
which are executed only on a system recognizing that name. By default, this
list contains only the postgresql entry. A
deployment descriptor containing commands with other implementor
names can achieve a rudimentary kind of conditional execution if
earlier commands adjust this list of names. Commas separate the
elements of this list; elements that are not regular identifiers
need to be surrounded by double quotes.
pljava.java_thread_pg_entry
#
A choice of allow, error,
block, or throw controlling
pljava thread management. Java makes
heavy use of threading, while Postgres Pro
may not be accessed by multiple threads concurrently. Historical
behavior of pljava is allow,
which serializes access by Java threads into
Postgres Pro, allowing a different
Java thread in only when the current one calls or returns into Java.
pljava formerly made some use of
Java object finalizers, which required this approach, as finalizers
run in their own thread.
pljava itself no longer requires the
ability for any thread to access Postgres Pro
other than the original main thread. User code developed for
pljava, however, may still rely on
that ability. To test whether it does, the error
or throw value can be used here: any
attempt by a Java thread other than the main one to enter
Postgres Pro then incurs an
exception (with a stack trace written to the server's standard
error channel). When confident that no code
will need to enter Postgres Pro except
on the main thread, the block value can be
used. It eliminates pljava's frequent
lock acquisitions and releases when the main thread crosses between
Postgres Pro and Java, and simply
blocks indefinitely any other Java thread that attempts to enter
Postgres Pro. This is an efficient
value but can lead to blocked threads or a deadlocked
backend if used with code that does attempt to access
Postgres Pro from more than one thread.
The throw value is like
error but more efficient. Under the
error value, attempted entry by the
wrong thread is detected in the native C code only after a
lock operation and call through JNI. Under the
throw value, the lock operations are elided
and an entry attempt by the wrong thread results in no
JNI call and an exception thrown directly in Java.
pljava.libjvm_location
#
The full path to a libjvm shared object,
used by pljava to load the Java runtime.
The version of the Java library pointed to by this parameter
determines whether pljava can run
with security policy enforcement or with no policy enforcement.
pljava.module_path
#The module path to be passed to the Java application class loader. The default is computed from the Postgres Pro configuration and is usually correct, unless pljava files were installed in unusual locations. If the path must be set explicitly, there must be at least two (and usually only two) entries: the JAR file with the pljava API and the JAR file with pljava internals.
pljava.policy_urls
#
Only used when pljava is running
with security policy enforcement. When running with no policy
enforcement, this parameter is ignored. It is a list of URLs to
Java security policy files determining the permissions available
to pljava functions. Each URL should
be enclosed in double quotes; any double quote that is literally
part of the URL may be represented as two double quotes (in SQL
style) or as %22 in the URL convention. Between
double-quoted URLs, a comma is the list delimiter.
The java.security file of the Java installation
usually defines the following policy file locations:
A systemwide policy from the Java vendor sufficient for the Java runtime itself to function as expected.
A per-user location, where a policy file, if found, can add to the policy from the systemwide file.
The list in pljava.policy_urls modifies the
list from the Java installation, by default after the first entry,
keeping the Java-supplied systemwide policy but replacing the
customary per-user file (there probably is not one in the home
directory of the postgres user, and if there is,
it is probably not tailored for pljava).
Any entry in this list can start with n =
(inside the quotes), for a positive integer n, to
specify which entry of the Java policy location list it replaces
(1 corresponds to the systemwide policy,
2 to the customary per-user file). URLs
not prefixed with n = follow consecutively. If
the first entry is not so prefixed, 2= is assumed.
A final entry of = (in the required
double quotes) prevents use of any remaining entries in
the Java site-configured list.
This parameter defaults to
"file:${org.postgresql.sysconfdir}/pljava.policy","=".
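A policy file referenced this way uses the standard Java policy syntax. The following fragment is a hypothetical sketch; the codeBase form and the granted path are assumptions to be adjusted for the actual installation:

```
// Hypothetical pljava.policy fragment: grant read access under /tmp to
// code installed from the JAR named examples.
grant codeBase "sqlj:examples" {
    permission java.io.FilePermission "/tmp/-", "read";
};
```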
pljava.release_lingering_savepoints
#
Controls how a return from a pljava function
treats any savepoints created within it that have not been explicitly
either released (the savepoint analog of being committed)
or rolled back. If off (the default),
they are rolled back. If on, they are
released/committed. Where possible, rather than setting this
parameter to on, it would be safer to fix the
function to release its own savepoints when appropriate.
pljava.statement_cache_size
#The number of most recently prepared statements pljava can keep open.
pljava.vmoptions
#
Any parameters to be passed to the Java runtime in the same
form as the documented parameters for the java
command. The string is split on whitespace unless
found between single or double quotes. A backslash treats
the following character literally but the backslash itself
remains in the string, so not all values can be expressed
with these rules. If the server encoding is not
UTF8, only ASCII characters should be
used in pljava.vmoptions.
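As a hypothetical illustration of the splitting rules, the following postgresql.conf entry passes two heap-size options and a system property whose value contains a space (kept together by the inner double quotes); the option values themselves are placeholders:

```
# Placeholder values; adjust for the actual installation.
pljava.vmoptions = '-Xms64m -Xmx256m -Dmy.prop="two words"'
```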