

pg-pool

brianc · 31.8m · MIT · 3.9.6 · TypeScript support: definitely-typed

Connection pool for node-postgres

pg, postgres, pool, database

readme

pg-pool

A connection pool for node-postgres

install

npm i pg-pool pg

use

create

to use pg-pool you must first create an instance of a pool

var Pool = require('pg-pool')

// by default the pool uses the same
// configuration as whatever `pg` version you have installed
var pool = new Pool()

// you can pass properties to the pool
// these properties are passed unchanged to both the node-postgres Client constructor
// and the node-pool (https://github.com/coopernurse/node-pool) constructor
// allowing you to fully configure the behavior of both
var pool2 = new Pool({
  database: 'postgres',
  user: 'brianc',
  password: 'secret!',
  port: 5432,
  ssl: true,
  max: 20, // set pool max size to 20
  idleTimeoutMillis: 1000, // close idle clients after 1 second
  connectionTimeoutMillis: 1000, // return an error after 1 second if connection could not be established
  maxUses: 7500, // close (and replace) a connection after it has been used 7500 times (see below for discussion)
})

// you can supply a custom client constructor
// if you want to use the native postgres client
var NativeClient = require('pg').native.Client
var nativePool = new Pool({ Client: NativeClient })

// you can even pool pg-native clients directly
var PgNativeClient = require('pg-native')
var pgNativePool = new Pool({ Client: PgNativeClient })

Note:

The Pool constructor does not support passing a Database URL as the parameter. To use pg-pool on Heroku, for example, you need to parse the URL into a config object. Here is an example of how to parse a Database URL.

const Pool = require('pg-pool');
const url = require('url')

const params = url.parse(process.env.DATABASE_URL);
const auth = params.auth.split(':');

const config = {
  user: auth[0],
  password: auth[1],
  host: params.hostname,
  port: params.port,
  database: params.pathname.split('/')[1],
  ssl: true
};

const pool = new Pool(config);

/*
  Transforms 'postgres://DBuser:secret@DBHost:#####/myDB' into
  config = {
    user: 'DBuser',
    password: 'secret',
    host: 'DBHost',
    port: '#####',
    database: 'myDB',
    ssl: true
  }
*/

acquire clients with a promise

pg-pool supports a fully promise-based api for acquiring clients

var pool = new Pool()
pool.connect().then(client => {
  client.query('select $1::text as name', ['pg-pool']).then(res => {
    client.release()
    console.log('hello from', res.rows[0].name)
  })
  .catch(e => {
    client.release()
    console.error('query error', e.message, e.stack)
  })
})

plays nice with async/await

this ends up looking much nicer if you're using co or async/await:

// with async/await
(async () => {
  var pool = new Pool()
  var client = await pool.connect()
  try {
    var result = await client.query('select $1::text as name', ['brianc'])
    console.log('hello from', result.rows[0])
  } finally {
    client.release()
  }
})().catch(e => console.error(e.message, e.stack))

// with co
co(function * () {
  var client = yield pool.connect()
  try {
    var result = yield client.query('select $1::text as name', ['brianc'])
    console.log('hello from', result.rows[0])
  } finally {
    client.release()
  }
}).catch(e => console.error(e.message, e.stack))

your new favorite helper method

because it's so common to just run a query and return the client to the pool afterward, pg-pool has this built-in:

var pool = new Pool()
var time = await pool.query('SELECT NOW()')
var name = await pool.query('select $1::text as name', ['brianc'])
console.log(name.rows[0].name, 'says hello at', time.rows[0].now)

you can also use a callback here if you'd like:

var pool = new Pool()
pool.query('SELECT $1::text as name', ['brianc'], function (err, res) {
  console.log(res.rows[0].name) // brianc
})

pro tip: unless you need to run a transaction (which requires a single client for multiple queries) or you have some other edge case like streaming rows or using a cursor, you should almost always just use pool.query. It's easy, it does the right thing ™, and won't ever forget to return clients back to the pool after the query is done.
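
When you do need a transaction, the pattern is to check out a single client and release it in a finally block. A minimal sketch under those assumptions (`withTransaction` is a hypothetical helper name, not part of pg-pool's API):

```javascript
// hypothetical helper (not part of pg-pool): run a callback inside a
// transaction on one checked-out client, always releasing it afterward
async function withTransaction (pool, fn) {
  const client = await pool.connect()
  try {
    await client.query('BEGIN')
    const result = await fn(client)
    await client.query('COMMIT')
    return result
  } catch (e) {
    await client.query('ROLLBACK')
    throw e
  } finally {
    client.release()
  }
}
```

Usage would look like `withTransaction(pool, client => client.query('UPDATE ...'))`; every query inside the callback runs on the same client, so the whole block commits or rolls back together.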

drop-in backwards compatible

pg-pool still supports, and always will support, the traditional callback api for acquiring a client. This is the exact API node-postgres has shipped with for years:

var pool = new Pool()
pool.connect((err, client, done) => {
  if (err) return done(err)

  client.query('SELECT $1::text as name', ['pg-pool'], (err, res) => {
    done()
    if (err) {
      return console.error('query error', err.message, err.stack)
    }
    console.log('hello from', res.rows[0].name)
  })
})

shut it down

When you are finished with the pool, if all the clients are idle the pool will close them after config.idleTimeoutMillis and your app will shut down gracefully. If you don't want to wait for the timeout you can end the pool as follows:

var pool = new Pool()
var client = await pool.connect()
console.log(await client.query('select now()'))
client.release()
await pool.end()

a note on instances

The pool should be a long-lived object in your application. Generally you'll want to instantiate one pool when your app starts up and use the same instance of the pool throughout the lifetime of your application. If you are frequently creating a new pool within your code you likely don't have your pool initialization code in the correct place. Example:

// assume this is a file in your program at ./your-app/lib/db.js

// correct usage: create the pool and let it live
// 'globally' here, controlling access to it through exported methods
var pool = new pg.Pool()

// this is the right way to export the query method
module.exports.query = (text, values) => {
  console.log('query:', text, values)
  return pool.query(text, values)
}

// this would be the WRONG way to export the connect method
module.exports.connect = () => {
  // notice how we would be creating a pool instance here
  // every time we called 'connect' to get a new client?
  // that's a bad thing & results in creating an unbounded
  // number of pools & therefore connections
  var aPool = new pg.Pool()
  return aPool.connect()
}

events

Every instance of a Pool is an event emitter. These instances emit the following events:

error

Emitted whenever an idle client in the pool encounters an error. This is common when your PostgreSQL server shuts down, reboots, or a network partition otherwise causes it to become unavailable while your pool has connected clients.

Example:

const Pool = require('pg-pool')
const pool = new Pool()

// attach an error handler to the pool for when a connected, idle client
// receives an error by being disconnected, etc
pool.on('error', function(error, client) {
  // handle this in the same way you would treat process.on('uncaughtException')
  // it is supplied the error as well as the idle client which received the error
})

connect

Fired whenever the pool creates a new pg.Client instance and successfully connects it to the backend.

Example:

const Pool = require('pg-pool')
const pool = new Pool()

var count = 0

pool.on('connect', client => {
  client.count = count++
})

pool
  .connect()
  .then(client => {
    return client
      .query('SELECT $1::int AS "clientCount"', [client.count])
      .then(res => console.log(res.rows[0].clientCount)) // outputs 0
      .then(() => client)
  })
  .then(client => client.release())

acquire

Fired whenever a client is acquired from the pool

Example:

This allows you to count the number of clients which have ever been acquired from the pool.

var Pool = require('pg-pool')
var pool = new Pool()

var acquireCount = 0
pool.on('acquire', function (client) {
  acquireCount++
})

var connectCount = 0
pool.on('connect', function () {
  connectCount++
})

for (var i = 0; i < 200; i++) {
  pool.query('SELECT NOW()')
}

setTimeout(function () {
  console.log('connect count:', connectCount) // output: connect count: 10
  console.log('acquire count:', acquireCount) // output: acquire count: 200
}, 100)

environment variables

pg-pool & node-postgres support some of the same environment variables as psql supports. The most common are:

PGDATABASE=my_db
PGUSER=username
PGPASSWORD="my awesome password"
PGPORT=5432
PGSSLMODE=require

Usually I will export these into my local environment via a .env file with environment settings, or export them in ~/.bash_profile or something similar. This way I get configurability which works with both the postgres suite of tools (psql, pg_dump, pg_restore) and node; I can vary the environment variables locally and in production, and it supports the concept of a 12-factor app out of the box.

bring your own promise

In versions of node <=0.12.x there is no native promise implementation available globally. You can polyfill the promise globally like this:

// first run `npm install promise-polyfill --save`
if (typeof Promise == 'undefined') {
  global.Promise = require('promise-polyfill')
}

You can use any other promise implementation you'd like. The pool also allows you to configure the promise implementation on a per-pool level:

var bluebirdPool = new Pool({
  Promise: require('bluebird')
})

please note: in node <=0.12.x the pool will throw if you do not provide a promise constructor in one of the two ways mentioned above. In node >=4.0.0 the pool will use the native promise implementation by default; however, the two methods above still allow you to "bring your own."

maxUses and read-replica autoscaling (e.g. AWS Aurora)

The maxUses config option can help an application instance rebalance load against a replica set that has been auto-scaled after the connection pool is already full of healthy connections.

The mechanism here is that a connection is considered "expended" after it has been acquired and released maxUses number of times. Depending on the load on your system, this means there will be an approximate time in which any given connection will live, thus creating a window for rebalancing.

Imagine a scenario where you have 10 app instances providing an API running against a replica cluster of 3 that are accessed via a round-robin DNS entry. Each instance runs a connection pool size of 25. With an ambient load of 50 requests per second, the connection pool will likely fill up in a few minutes with healthy connections.

If you have weekly bursts of traffic which peak at 1,000 requests per second, you might want to grow your replicas to 10 during this period. Without setting maxUses, the new replicas will not be adopted by the app servers without an intervention -- namely, restarting each in turn in order to build up new connection pools that are balanced against all the replicas. Adding additional app server instances will help to some extent because they will adopt all the replicas in an even way, but the initial app servers will continue to focus additional load on the original replicas.

This is where the maxUses configuration option comes into play. Setting maxUses to 7500 will ensure that over a period of 30 minutes or so the new replicas will be adopted as the pre-existing connections are closed and replaced with new ones, thus creating a window for eventual balance.

You'll want to test based on your own scenarios, but one way to make a first guess at maxUses is to identify an acceptable window for rebalancing and then solve for the value:

maxUses = rebalanceWindowSeconds * totalRequestsPerSecond / numAppInstances / poolSize

In the example above, assuming we acquire and release 1 connection per request and we are aiming for a 30 minute rebalancing window:

maxUses = rebalanceWindowSeconds * totalRequestsPerSecond / numAppInstances / poolSize
   7200 =        1800            *          1000          /        10       /    25
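
The same first-guess calculation, expressed as code (`estimateMaxUses` is an illustrative helper, not part of pg-pool):

```javascript
// first guess at a maxUses value for a target rebalancing window,
// per the formula above; assumes one acquire/release per request
function estimateMaxUses (rebalanceWindowSeconds, totalRequestsPerSecond, numAppInstances, poolSize) {
  return Math.round(
    (rebalanceWindowSeconds * totalRequestsPerSecond) / numAppInstances / poolSize
  )
}

estimateMaxUses(1800, 1000, 10, 25) // 7200
```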

tests

To run tests, clone the repo, run npm i in the working dir, and then run npm test

contributions

I love contributions. Please make sure they have tests, and submit a PR. If you're not sure whether an issue is worth it or will be accepted, it never hurts to open an issue to begin the conversation. If you're interested in keeping up with node-postgres related stuff, you can follow me on Twitter at @briancarlson - I generally announce any noteworthy updates there.

license

The MIT License (MIT) Copyright (c) 2016 Brian M. Carlson

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

changelog

All major and minor releases are briefly explained below.

For richer information consult the commit log on github with referenced pull requests.

We do not include break-fix (patch) releases in this file.

pg@8.15.0

  • Add support for esm importing. CommonJS importing is still also supported.

pg@8.14.0

pg@8.13.0

pg@8.12.0

pg-pool@3.10.0

  • Emit release event when client is returned to the pool.

pg@8.9.0

pg@8.8.0

pg-pool@3.5.0

pg@8.7.0

  • Add optional config to pool to allow process to exit if pool is idle.

pg-cursor@2.7.0

pg@8.6.0

pg-query-stream@4.0.0

  • Library has been converted to TypeScript. The behavior is identical, but there could be subtle breaking changes due to class names changing or other small inconsistencies introduced by the conversion.

pg@8.5.0

pg@8.4.0

  • Switch to optional peer dependencies & remove semver package which has been a small thorn in the side of a few users.
  • Export DatabaseError from pg-protocol.
  • Add support for sslmode in the connection string.

pg@8.3.0

pg@8.2.0

  • Switch internal protocol parser & serializer to pg-protocol. The change is backwards compatible but results in a significant performance improvement across the board, with some queries as much as 50% faster. This is the first work to land in an ongoing performance improvement initiative I'm working on. Stay tuned as things are set to get much faster still! 🚀

pg-cursor@2.2.0

  • Switch internal protocol parser & serializer to pg-protocol. The change is backwards compatible but results in a significant performance improvement across the board, with some queries as much as 50% faster.

pg-query-stream@3.1.0

  • Switch internal protocol parser & serializer to pg-protocol. The change is backwards compatible but results in a significant performance improvement across the board, with some queries as much as 50% faster.

pg@8.1.0

  • Switch to using monorepo version of pg-connection-string. This includes better support for SSL argument parsing from connection strings and ensures continuity of support.
  • Add &ssl=no-verify option to connection string and PGSSLMODE=no-verify environment variable support for the pure JS driver. This is equivalent to passing { ssl: { rejectUnauthorized: false } } to the client/pool constructor. The advantage of having support in connection strings and environment variables is that it can be "externally" configured via environment variables and CLI arguments much more easily, and should remove the need to directly edit any application code for the SSL default changes in 8.0. This should make using `pg@8.x` significantly easier on environments like Heroku, for example.

pg-pool@3.2.0

  • Same changes to pg impact pg-pool as they both use the same connection parameter and connection string parsing code for configuring SSL.

pg-pool@3.1.0

pg@8.0.0

note: for detailed release notes please check here

  • Remove versions of Node older than 6 LTS from the test matrix. pg>=8.0 may still work on older versions but it is no longer officially supported.
  • Change default behavior when not specifying rejectUnauthorized with the SSL connection parameters. Previously we defaulted to rejectUnauthorized: false when it was not specifically included. We now default to rejectUnauthorized: true. Manually specify { ssl: { rejectUnauthorized: false } } for old behavior.
  • Change default database when not specified to use the user config option if available. Previously process.env.USER was used.
  • Change pg.Pool and pg.Query to be an es6 class.
  • Make pg.native non-enumerable.
  • notice messages are no longer instances of Error.
  • Passwords no longer show up when instances of clients or pools are logged.

pg@7.18.0

  • This will likely be the last minor release before pg@8.0.
  • This version contains a few bug fixes and adds a deprecation warning for a pending change in 8.0 which will flip the default behavior over SSL from rejectUnauthorized from false to true making things more secure in the general use case.

pg-query-stream@3.0.0

  • Rewrote stream internals to better conform to node stream semantics. This should make pg-query-stream much better at respecting highWaterMark and getting rid of some edge case bugs when using pg-query-stream as an async iterator. Due to the size and nature of this change (effectively a full re-write) it's safest to bump the semver major here, though almost all tests remain untouched and still passing, which brings us to a breaking change to the API....
  • Changed stream.close to stream.destroy which is the official way to terminate a readable stream. This is a breaking change if you rely on the stream.close method on pg-query-stream...though should be just a find/replace type operation to upgrade as the semantics remain very similar (not exactly the same, since internals are rewritten, but more in line with how streams are "supposed" to behave).
  • Unified the config.batchSize and config.highWaterMark to both do the same thing: control how many rows are buffered in memory. The ReadableStream will manage exactly how many rows are requested from the cursor at a time. This should give better out of the box performance and help with efficient async iteration.

pg@7.17.0

  • Add support for idle_in_transaction_session_timeout option.

7.16.0

  • Add optional, opt-in behavior to test new, faster query pipeline. This is experimental, and not documented yet. The pipeline changes will grow significantly after the 8.0 release.

7.15.0

7.14.0

7.13.0

7.12.0

7.11.0

7.10.0

7.9.0

7.8.0

7.7.0

7.6.0

7.5.0

7.4.0

7.3.0

7.2.0

  • Pinned pg-pool and pg-types to a tighter semver range. This is likely not a noticeable change for you unless you were specifically installing older versions of those libraries for some reason, but making it a minor bump here just in case it could cause any confusion.

7.1.0

Enhancements

7.0.0

Breaking Changes

  • Drop support for node < 4.x.
  • Remove pg.connect pg.end and pg.cancel singleton methods.
  • Client#connect(callback) now returns undefined. It used to return an event emitter.
  • Upgrade pg-pool to 2.x.
  • Upgrade pg-native to 2.x.
  • Standardize error message fields between JS and native driver. The only breaking changes were in the native driver as its field names were brought into alignment with the existing JS driver field names.
  • Result from multi-statement text queries such as SELECT 1; SELECT 2; are now returned as an array of results instead of a single result with 1 array containing rows from both queries.
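
A small illustrative helper (not part of pg's API) can smooth over the new multi-statement result shape when migrating, since single-statement queries still return one result object:

```javascript
// normalize pg@7 query results: multi-statement text queries such as
// 'SELECT 1; SELECT 2;' return an array of result objects, while
// single-statement queries return one result object
function allRows (result) {
  const results = Array.isArray(result) ? result : [result]
  return results.map(r => r.rows)
}
```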

Please see here for a migration guide

Enhancements

  • Overhauled documentation: https://node-postgres.com.
  • Add Client#connect() => Promise<void> and Client#end() => Promise<void> calls. Promises are now returned from all async methods on clients if and only if no callback was supplied to the method.
  • Add connectionTimeoutMillis to pg-pool.

v6.2.0

v6.1.0

  • Add optional callback parameter to the pure JavaScript client.end method. The native client already supported this.

v6.0.0

Breaking Changes

  • Remove pg.pools. There is still a reference kept to the pools created & tracked by pg.connect but it has been renamed, is considered private, and should not be used. Accessing this API directly was uncommon and was supposed to be private but was incorrectly documented on the wiki. Therefore, it is a breaking change of an (unintentionally) public interface to remove it by renaming it & making it private. Eventually pg.connect itself will be deprecated in favor of instantiating pools directly via new pg.Pool() so this property should become completely moot at some point. In the meantime...check out the new features...

New features

  • Replace internal pooling code with pg-pool. This is the first step in eventually deprecating and removing the singleton pg.connect. The pg-pool constructor is exported from node-postgres at require('pg').Pool. It provides a backwards compatible interface with pg.connect as well as a promise based interface & additional niceties.

You can now create an instance of a pool and don't have to rely on the pg singleton for anything:

var pg = require('pg')

var pool = new pg.Pool()

// your friendly neighborhood pool interface, without the singleton
pool.connect(function(err, client, done) {
  // ...
})

Promise support & other goodness lives now in pg-pool.

Please read the readme at pg-pool for the full api.

  • Included support for tcp keep alive. Enable it as follows:
var client = new Client({ keepAlive: true })

This should help with backends incorrectly considering idle clients to be dead and prematurely disconnecting them.

v5.1.0

  • Make the query object returned from client.query implement the promise interface. This is the first step towards promisifying more of the node-postgres api.

Example:

var client = new Client()
client.connect()
client.query('SELECT $1::text as name', ['brianc']).then(function (res) {
  console.log('hello from', res.rows[0])
  client.end()
})

v5.0.0

Breaking Changes

  • require('pg').native now returns null if the native bindings cannot be found; previously, this threw an exception.

New Features

  • better error message when passing undefined as a query parameter
  • support for defaults.connectionString
  • support for returnToHead being passed to generic pool

v4.5.0

  • Add option to parse JS date objects in query parameters as UTC

v4.4.0

  • Warn to stderr if a named query exceeds 63 characters which is the max length supported by postgres.

v4.3.0

  • Unpin pg-types semver. Allow it to float against `pg-types@1.x`.

v4.2.0

  • Support for additional error fields in postgres >= 9.3 if available.

v4.1.0

v4.0.0

  • Make native bindings an optional install with npm install pg-native
  • No longer surround query result callback with try/catch block.
  • Remove built in COPY IN / COPY OUT support - better implementations provided by pg-copy-streams and pg-native

v3.6.0

v3.5.0

  • Include support for parsing boolean arrays

v3.4.0

v3.2.0

v3.1.0

v3.0.0

Breaking changes

After some discussion it was decided node-postgres was non-compliant in how it was handling DATE results. They were being converted to UTC, but the PostgreSQL documentation specifies they should be returned in the client timezone. This is a breaking change, and if you use the date type you might want to examine your code and make sure nothing is impacted.

pg@v2.0 included changes to not convert large integers into their JavaScript number representation because of the possibility of numeric precision loss. The same types in arrays were not taken into account. This fix applies the same type-coercion rules to arrays of those types, so there will be no more possible numeric loss on an array of very large int8s, for example. This is a breaking change because a return type from a query of int8[] will now contain string representations of the integers. Use your favorite JavaScript bignum module to represent them without precision loss, or punch over the type converter to return the old style arrays again.

Single date parameters were properly sent to the PostgreSQL server in local time, but an input array of dates was being changed into utc dates. This is a violation of what PostgreSQL expects. Small breaking change, but none-the-less something you should check out if you are inserting an array of dates.

This is a small change to bring the semantics of query more in line with other EventEmitters. The tests all passed after this change, but I suppose it could still be a breaking change in certain use cases. If you are doing clever things with the end and error events of a query object you might want to check to make sure it's still behaving normally, though it is most likely not an issue.

New features

The long & short of it is now any object you supply in the list of query values will be inspected for a .toPostgres method. If the method is present it will be called and its result used as the raw text value sent to PostgreSQL for that value. This allows the same type of custom type coercion on query parameters as was previously afforded to query result values.
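For example, a hypothetical Point type could serialize itself like this (the class and the text format are illustrative, not prescribed by the library):

```javascript
// a custom type whose toPostgres method supplies the raw text
// value sent to PostgreSQL when the object is used as a query parameter
function Point (x, y) {
  this.x = x
  this.y = y
}

Point.prototype.toPostgres = function () {
  return '(' + this.x + ',' + this.y + ')'
}

// client.query('INSERT INTO pts (loc) VALUES ($1)', [new Point(1, 2)])
// would send the text '(1,2)' for $1
```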

If domains are active node-postgres will honor them and do everything it can to ensure all callbacks are properly fired in the active domain. If you have tried to use domains with node-postgres (or many other modules which pool long lived event emitters) you may have run into an issue where the active domain changes before and after a callback. This has been a longstanding footgun within node-postgres and I am happy to get it fixed.

Avoids a scenario where your pool could fill up with disconnected & unusable clients.

To provide better documentation and a clearer explanation of how to override the query result parsing system, we broke the type converters into their own module. There is still work around removing the 'global-ness' of the type converters so each query or connection can return types differently, but this is a good first step and allows a much more obvious way to return int8 results as JavaScript numbers, for example.

v2.11.0

v2.10.0

v2.9.0

v2.8.0

  • Add support for parsing JSON[] and UUID[] result types

v2.7.0

  • Use single row mode in native bindings when available [@rpedela]
    • reduces memory consumption when handling row values in 'row' event
  • Automatically bind buffer type parameters as binary [@eugeneware]

v2.6.0

  • Respect PGSSLMODE environment variable

v2.5.0

  • Ability to opt-in to int8 parsing via pg.defaults.parseInt8 = true

v2.4.0

  • Use eval in the result set parser to increase performance

v2.3.0

  • Remove built-in support for binary Int64 parsing. Due to the low usage & required compiled dependency this will be pushed into a 3rd party add-on

v2.2.0

v2.1.0

v2.0.0

  • Properly handle various PostgreSQL to JavaScript type conversions to avoid data loss:
PostgreSQL | pg@v2.0 JavaScript | pg@v1.0 JavaScript
-----------|--------------------|-------------------
float4     | number (float)     | string
float8     | number (float)     | string
int8       | string             | number (int)
numeric    | string             | number (float)
decimal    | string             | number (float)

For more information see https://github.com/brianc/node-postgres/pull/353. If you are unhappy with these changes you can always override the built-in type parsing fairly easily.

v1.3.0

  • Make client_encoding configurable and optional

v1.2.0

  • return field metadata on result object: access via result.fields[i].name/dataTypeID

v1.1.0

  • built in support for JSON data type for PostgreSQL Server @ v9.2.0 or greater

v1.0.0

  • remove deprecated functionality
    • Callback function passed to pg.connect now requires 3 arguments
    • Client#pauseDrain() / Client#resumeDrain removed
    • numeric, decimal, and float data types no longer parsed into float before being returned. Will be returned from query results as String

v0.15.0

  • client now emits end when disconnected from back-end server
  • if client is disconnected in the middle of a query, query receives an error

v0.14.0

  • add deprecation warnings in prep for v1.0
  • fix read/write failures in native module under node v0.9.x