pg-protocol
Low level postgres wire protocol parser and serializer written in TypeScript. Used by node-postgres. Needs more documentation. :smile:
The postgres client/server binary protocol, implemented in TypeScript
Changelog

All major and minor releases are briefly explained below.

For richer information consult the commit log on GitHub with referenced pull requests.

We do not include break-fix version releases in this file.
- Add `queryMode` config option to force use of the extended query protocol on queries without any parameters.
- Emit `release` event when a client is returned to the pool.
- Changes to `pool.query`.
- Add `allowExitOnIdle` option to the pool (idle pooled clients no longer keep the process alive when it is enabled).
- Add support for `lock_timeout` in client config.
- Compatibility with the TypeScript `--isolatedModules` flag.
- Export `DatabaseError` from pg-protocol.
- Add support for `sslmode` in the connection string.
- Add `{ options: string }` field on client/pool config.
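As a rough sketch (not from the changelog itself), the two config fields above can be passed straight through the client config; the connection values, the `search_path` setting, and the timeout value are placeholders:

```js
const { Client } = require('pg')

// hypothetical connection values for illustration only
const client = new Client({
  host: 'localhost',
  user: 'postgres',
  database: 'postgres',
  options: '-c search_path=my_schema', // forwarded to the server at startup
  lock_timeout: 10000,                 // milliseconds
})

client.connect()
```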
- Switch to `pg-connection-string` for connection string parsing. This includes better support for SSL argument parsing from connection strings and ensures continuity of support.
- Add `&ssl=no-verify` option to connection strings and `PGSSLMODE=no-verify` environment variable support for the pure JS driver. This is the equivalent of passing `{ ssl: { rejectUnauthorized: false } }` to the client/pool constructor. The advantage of having support in connection strings and environment variables is that it can be "externally" configured via environment variables and CLI arguments much more easily, and should remove the need to directly edit any application code for the SSL default changes in 8.0. This should make using `pg@8.x` significantly less difficult on environments like Heroku, for example.
- Changes to `pg` also impact `pg-pool`, as they both use the same connection parameter and connection string parsing code for configuring SSL.
- Drop node `6 lts` from the test matrix. `pg>=8.0` may still work on older versions but it is no longer officially supported.
- Change the default behavior when not specifying `rejectUnauthorized` with the SSL connection parameters. Previously we defaulted to `rejectUnauthorized: false` when it was not specifically included. We now default to `rejectUnauthorized: true`. Manually specify `{ ssl: { rejectUnauthorized: false } }` for the old behavior.
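A minimal sketch of the equivalent ways to opt back into the old (pre-8.0) behavior of skipping certificate verification; the connection strings below are placeholders:

```js
const { Pool } = require('pg')

// 1) explicitly in the constructor
const pool = new Pool({
  connectionString: 'postgres://user:secret@db.example.com/mydb',
  ssl: { rejectUnauthorized: false },
})

// 2) via the connection string (pure JS driver)
const pool2 = new Pool({
  connectionString: 'postgres://user:secret@db.example.com/mydb?ssl=no-verify',
})

// 3) or externally, without touching application code:
//    PGSSLMODE=no-verify node app.js
```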
- Use the `user` config option if available. Previously `process.env.USER` was used.
- Change `pg.Pool` and `pg.Query` to be ES6 classes.
- Make `pg.native` non-enumerable.
- `notice` messages are no longer instances of `Error`.
- Change the default value of `rejectUnauthorized` from `false` to `true`, making things more secure in the general use case.
- Change `stream.close` to `stream.destroy`, which is the official way to terminate a readable stream. This is a breaking change if you rely on the `stream.close` method on pg-query-stream... though it should be just a find/replace type operation to upgrade, as the semantics remain very similar (not exactly the same, since the internals are rewritten, but more in line with how streams are "supposed" to behave).
- Change `config.batchSize` and `config.highWaterMark` to both do the same thing: control how many rows are buffered in memory. The `ReadableStream` will manage exactly how many rows are requested from the cursor at a time. This should give better out-of-the-box performance and help with efficient async iteration.
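A sketch of how the buffering option and async iteration fit together with pg-query-stream (assumes a reachable database; the query and the `highWaterMark` value are illustrative):

```js
const { Client } = require('pg')
const QueryStream = require('pg-query-stream')

async function run() {
  const client = new Client()
  await client.connect()

  // highWaterMark / batchSize: how many rows are buffered in memory
  const stream = client.query(
    new QueryStream('SELECT * FROM generate_series(0, $1) num', [10000], { highWaterMark: 500 })
  )

  for await (const row of stream) {
    console.log(row.num)
  }

  await client.end()
}

run()
```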
- Add `idle_in_transaction_session_timeout` option.
- Changes around construction with `new`.
- Changes to passing `null` or `undefined` to `client.query`.
- Drop support for node < `4.x`.
- Remove `pg.connect`, `pg.end`, and `pg.cancel` singleton methods.
- `Client#connect(callback)` now returns `undefined`. It used to return an event emitter.
- Upgrade pg-pool to `2.x`.
- Upgrade pg-native to `2.x`.
- Results from multi-statement text queries such as `SELECT 1; SELECT 2;` are now returned as an array of results instead of a single result with one array containing rows from both queries.

Please see here for a migration guide.
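A small sketch of the new multi-statement result shape (assumes an already connected `client`; the queries are illustrative):

```js
client.query('SELECT 1 AS one; SELECT 2 AS two;', (err, results) => {
  if (err) throw err
  console.log(results[0].rows) // rows from "SELECT 1 AS one"
  console.log(results[1].rows) // rows from "SELECT 2 AS two"
})
```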
- Add `Client#connect() => Promise<void>` and `Client#end() => Promise<void>` calls. Promises are now returned from all async methods on clients if and only if no callback was supplied to the method.
- Add `connectionTimeoutMillis` to pg-pool.
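A sketch combining the promise-returning client methods with the new pool-level timeout (the 2000 ms value is illustrative):

```js
const { Client, Pool } = require('pg')

const client = new Client()
client
  .connect() // returns a Promise when no callback is supplied
  .then(() => client.query('SELECT NOW()'))
  .then((res) => console.log(res.rows[0]))
  .then(() => client.end()) // also returns a Promise
  .catch((err) => console.error(err))

// pool.connect() / pool.query() will fail if a connection cannot be
// established within connectionTimeoutMillis
const pool = new Pool({ connectionTimeoutMillis: 2000 })
```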
- Add support for parsing `replicationStart` messages.
- Add optional callback parameter to the pure JavaScript `client.end` method. The native client already supported this.
- Remove `pg.pools`. There is still a reference kept to the pools created & tracked by `pg.connect`, but it has been renamed, is considered private, and should not be used. Accessing this API directly was uncommon and was supposed to be private but was incorrectly documented on the wiki. Therefore, it is a breaking change of an (unintentionally) public interface to remove it by renaming it & making it private. Eventually `pg.connect` itself will be deprecated in favor of instantiating pools directly via `new pg.Pool()`, so this property should become completely moot at some point. In the mean time... check out the new features...
- Internal pooling is now handled by pg-pool, a step toward eventually deprecating the singleton `pg.connect`. The pg-pool constructor is exported from node-postgres at `require('pg').Pool`. It provides a backwards compatible interface with `pg.connect` as well as a promise based interface & additional niceties.

You can now create an instance of a pool and don't have to rely on the `pg` singleton for anything:
```js
var pg = require('pg')
var pool = new pg.Pool()

// your friendly neighborhood pool interface, without the singleton
pool.connect(function (err, client, done) {
  // ...
})
```
Promise support & other goodness lives now in pg-pool.
Please read the readme at pg-pool for the full api.
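For instance, a minimal sketch of the promise style (the query text is illustrative):

```js
var pg = require('pg')
var pool = new pg.Pool()

pool
  .query('SELECT $1::text AS name', ['pg-pool'])
  .then(function (res) {
    console.log(res.rows[0].name)
  })
  .catch(function (err) {
    console.error('query failed', err)
  })
```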
You can also pass `keepAlive: true` to the client constructor, e.g. `var client = new Client({ keepAlive: true })`. This should help with backends incorrectly considering idle clients to be dead and prematurely disconnecting them.
`client.connect` and `client.query` implement the promise interface. This is the first step towards promisifying more of the node-postgres api. Example:
```js
var client = new Client()
client.connect()
client.query('SELECT $1::text as name', ['brianc']).then(function (res) {
  console.log('hello from', res.rows[0])
  client.end()
})
```
- `require('pg').native` now returns null if the native bindings cannot be found; previously, this threw an exception.
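A tiny sketch of the guard this enables:

```js
const pg = require('pg')

// fall back to the pure JS client when pg-native is not installed
const Client = pg.native ? pg.native.Client : pg.Client
```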
- Handle `undefined` as a query parameter.
- Support `defaults.connectionString`.
- Support `returnToHead` being passed to generic pool.
- Warn to `stderr` if a named query exceeds 63 characters, which is the max length supported by postgres.
- Unpin `pg-types` semver. Allow it to float against `pg-types@1.x`.
- Native bindings are now an optional install (`npm install pg-native`); access to them should be wrapped in a `try/catch` block.
- Changes to emitting `end` from the `pg` object when a pool is drained.

After some discussion it was decided node-postgres was non-compliant in how it was handling DATE results. They were being converted to UTC, but the PostgreSQL documentation specifies they should be returned in the client timezone. This is a breaking change, and if you use the `date` type you might want to examine your code and make sure nothing is impacted.
pg@v2.0 included changes to not convert large integers into their JavaScript number representation because of the possibility of numeric precision loss. The same types in arrays were not taken into account. This fix applies the same type-coercion rules to arrays of those types, so there will be no more possible numeric loss on an array of very large int8s, for example. This is a breaking change because a return type from a query of `int8[]` will now contain string representations of the integers. Use your favorite JavaScript bignum module to represent them without precision loss, or override the type converter to return the old style arrays again.
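A short sketch of what this looks like in practice (assumes a connected `client`; `BigInt` is shown as one possible conversion on modern Node runtimes):

```js
client.query('SELECT ARRAY[9007199254740993, 2]::int8[] AS big', function (err, res) {
  if (err) throw err
  console.log(res.rows[0].big)             // [ '9007199254740993', '2' ]
  console.log(res.rows[0].big.map(BigInt)) // [ 9007199254740993n, 2n ]
})
```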
Single `date` parameters were properly sent to the PostgreSQL server in local time, but an input array of dates was being changed into UTC dates. This is a violation of what PostgreSQL expects. It is a small breaking change, but nonetheless something you should check out if you are inserting an array of dates.
This is a small change to bring the semantics of query more in line with other EventEmitters. The tests all passed after this change, but I suppose it could still be a breaking change in certain use cases. If you are doing clever things with the `end` and `error` events of a query object you might want to check to make sure it's still behaving normally, though it is most likely not an issue.
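For reference, a sketch of listening to those events, written against the submittable `Query` constructor exported by current versions of pg (older versions returned the query object from `client.query(text)` directly):

```js
const { Client, Query } = require('pg')

const client = new Client()
client.connect()

const query = client.query(new Query('SELECT generate_series(1, 3) AS n'))
query.on('row', (row) => console.log('row', row.n))
query.on('error', (err) => console.error('query failed', err))
query.on('end', () => client.end())
```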
The long & short of it is now any object you supply in the list of query values will be inspected for a `.toPostgres` method. If the method is present it will be called and its result used as the raw text value sent to PostgreSQL for that value. This allows the same type of custom type coercion on query parameters as was previously afforded to query result values.
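A hedged sketch of the hook (the `places` table and the point-style formatting are made up for illustration; assumes a connected `client`):

```js
const point = {
  x: 1,
  y: 2,
  // called by the driver; the return value is used as the raw text for $1
  toPostgres: function () {
    return '(' + this.x + ',' + this.y + ')'
  },
}

client.query('INSERT INTO places (location) VALUES ($1)', [point], function (err) {
  if (err) console.error(err)
})
```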
If domains are active node-postgres will honor them and do everything it can to ensure all callbacks are properly fired in the active domain. If you have tried to use domains with node-postgres (or many other modules which pool long lived event emitters) you may have run into an issue where the active domain changes before and after a callback. This has been a longstanding footgun within node-postgres and I am happy to get it fixed.
This avoids a scenario where your pool could fill up with disconnected & unusable clients.
To provide better documentation and a clearer explanation of how to override the query result parsing system we broke the type converters out into their own module. There is still work to do around removing the 'global-ness' of the type converters so each query or connection can return types differently, but this is a good first step and allows a much more obvious way to return int8 results as JavaScript numbers, for example: `pg.defaults.parseInt8 = true`
| PostgreSQL | pg@v2.0 JavaScript | pg@v1.0 JavaScript |
| ---------- | ------------------ | ------------------ |
| float4     | number (float)     | string             |
| float8     | number (float)     | string             |
| int8       | string             | number (int)       |
| numeric    | string             | number (float)     |
| decimal    | string             | number (float)     |
For more information see https://github.com/brianc/node-postgres/pull/353. If you are unhappy with these changes you can always override the built-in type parsing fairly easily.
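For example, a sketch of overriding individual parsers via `pg.types.setTypeParser` (type OIDs 20 and 1700 correspond to int8 and numeric; the usual precision caveats apply):

```js
var pg = require('pg')

// return int8 columns as JavaScript numbers again
pg.types.setTypeParser(20, function (val) {
  return parseInt(val, 10)
})

// return numeric columns as floats again
pg.types.setTypeParser(1700, function (val) {
  return parseFloat(val)
})
```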
- Add support for the `JSON` data type for PostgreSQL Server @ v9.2.0 or greater.
- Callback function passed to `pg.connect` now requires 3 arguments.
- Changes to values returned as `String`.
- Emit `end` when disconnected from back-end server.