
ut-port-sql

softwaregroup-bg · Apache-2.0 · 9.2.8

UT port sql module

readme

SQL Port: ut-port-sql

The purpose of this port is to work with a SQL Server database: connecting, creating the database, creating schema objects and executing stored procedures.

For connecting to MSSQL it uses the mssql module.

Configuration

SQL port is usually initialized like this:

module.exports = function db({config: {test}}) {
    return class db extends require('ut-port-sql')(...arguments) {
        get defaults() {
            return {
                createTT: true,
                retry: false,
                retrySchemaUpdate: true,
                namespace: [
                    'db/moduleName',
                    test && 'moduleNameTest'
                ].filter(value => value),
                imports: [
                    'sql',
                    'utModuleName.sql',
                    'utModuleName.sqlSeed',
                    'utModuleName.sqlStandard',
                    test && 'utModuleName.sqlTest',
                ].filter(value => value),
            };
        }
    };
};

An important part of the configuration is specifying the connection parameters. This involves the following settings:

db:
  compatibilityLevel: 120 # optional, the default is 120
  recoveryModel: 'Simple' # optional, possible values: 'Full', 'Simple', 'Bulk-Logged'
  connection:
    server: <db server>
    database: <db name>
    user: <db user>
    password: <db password>
    connectionTimeout: 60000
    requestTimeout: 60000
    # other parameters to pass: see general and tedious options in mssql module
    options: # this can be skipped if the below defaults are acceptable
      encrypt: true
      trustServerCertificate: true
  create:
    user: <user that can create db and the above db user>
    password: <password for the above user>

The parameters specified in connection are used during normal operation of the port. Optionally, a user and password can be specified in create if automatic creation of the database is allowed.

Use the following additional configuration options to work with the alternative msnodesqlv8 driver:

db:
  connection:
    driver: msnodesqlv8
    TrustServerCertificate: 'yes'

See mssql module docs for more details about other options to pass.

Schema sync

The SQL port has a configuration property named schema. This property can be set to an array of objects, or functions returning an array of objects, specifying folders that contain the source code of database objects. Here is an example:

const path = require('path');

module.exports = function sql() {
    return {
        // folders containing files, used to create the schemas
        schema: [{
            path: path.join(__dirname, 'schema1'),
            linkSP: true,
            exclude: /\.deprecated\.sql$/ // hypothetical pattern: a string, regex or array of these
        }, {
            path: path.join(__dirname, 'schema2'), linkSP: false
        }],

        // do not generate table types for the specified tables
        skipTableType: ['schema1.table1']
    };
};

This configuration will make the port scan the specified folders and synchronize the database schema. The optional exclude property specifies files to be excluded from synchronization. It can be a string, a regular expression, or an array of these.
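The exclude matching could be sketched like this (the matching semantics here, substring matching for strings and test for regular expressions, are an assumption for illustration, not the port's documented behavior; isExcluded is a hypothetical helper):

```javascript
// Sketch: check a file name against an exclude setting that may be a
// string, a regular expression, or an array of these.
function isExcluded(filename, exclude) {
    return [].concat(exclude || []).some(rule =>
        rule instanceof RegExp ? rule.test(filename) : filename.includes(rule)
    );
}

isExcluded('350-schema1.legacyTable.sql', 'legacyTable');       // true
isExcluded('350-schema1.table1.sql', [/legacy/, 'deprecated']); // false
```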

Calling stored procedures

In addition, procedures from the first folder will be automatically linked to the port (because of the linkSP: true setting), so that they can be executed directly, based on their schema name and procedure name. Given the following procedure:

CREATE PROCEDURE schema1.storedProcedureName
  @param1 int,
  @param2 nvarchar(20),
  @param3 tableType READONLY
AS
  ...

It can be called using this:

bus.importMethod('schema1.storedProcedureName')({
  param1: 1,
  param2: 'value',
  param3:[{
    column1: 'value row 1'
  },{
    column1: 'value row 2'
  }]
})

Stored procedure parameters can be named so that they refer to nested properties of the first argument of the method call.

For example, if we have the following procedure:

CREATE PROCEDURE schema1.storedProcedureName
  @a_b_c int,
  @d_0_e_1 nvarchar(20)
AS
  ...

When called from JS using this:

bus.importMethod('schema1.storedProcedureName')({
  a: {b: {c: 1}},
  d: [{e:['zero', 'one']}]
})

will receive the following values for the parameters:

  • @a_b_c = 1
  • @d_0_e_1 = 'one'

The parameter names (without the @) are split on _ (as with the String.prototype.split function), and the resulting array is passed as the path argument to the lodash.get function.
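A minimal sketch of this resolution (a stand-in for lodash.get, assuming plain property paths; resolveParam is a hypothetical helper):

```javascript
// Sketch of how a parameter name is resolved against the call payload:
// split on '_' and walk the object, as lodash.get would for an array path.
function resolveParam(paramName, payload) {
    return paramName.split('_').reduce(
        (value, key) => (value == null ? undefined : value[key]),
        payload
    );
}

const payload = {a: {b: {c: 1}}, d: [{e: ['zero', 'one']}]};
resolveParam('a_b_c', payload);   // 1
resolveParam('d_0_e_1', payload); // 'one'
```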

Linking can be fine-tuned with the linkSP configuration property, which can be set on the port. It can be a boolean, function, object, regular expression, or an array of the previous types, which will be checked against the schema name or the full object name of each procedure. Here are some examples:

module.exports = { linkSP: null || undefined /* link all SP by default */ };
module.exports = { linkSP: false /* do not link any SP */ };
module.exports = { linkSP: true /* link all SP */ };
module.exports = {
  linkSP: ['schema1', 'schema2'] /* link all SP in the listed schemas */
};
module.exports = {
  linkSP: /user\.identity\..*/ /* link all SP matching the pattern */
};
module.exports = { linkSP: ['user.identity.check'] /* link individual SPs */ };
module.exports = {
  linkSP: ({namespace, full}) =>
    shallLinkSP(full) /* link based on result of function call */
};
module.exports = {
  linkSP: [
    'schema1',
    /user\.identity\..*/,
    ({namespace, full}) => shallLinkSP(full)
  ] /* mix of the above */
};

The schema sync algorithm matches database objects to files in the file system and updates the database objects when differences are detected. File names are expected to follow the pattern xxx-schemaName.objectName.(sql|js|json|yaml), where xxx is a number which controls the order of execution. The convention below is recommended. Note the camelCase naming convention; file names, including the extension, are case sensitive.
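As a quick illustration, the naming pattern can be parsed like this (a sketch only; the helper and its regular expression are assumptions, not part of the port's API):

```javascript
// Sketch: split a schema file name into its ordering prefix, schema name,
// optional object name and extension, per the convention described above.
function parseSchemaFile(filename) {
    const match = /^(\d+)-([^.]+)(?:\.(.+))?\.(sql|js|json|yaml)$/.exec(filename);
    if (!match) return null;
    const [, order, schema, object, extension] = match;
    return {order: Number(order), schema, object, extension};
}

parseSchemaFile('750-schemaName.procedureName.sql');
// → order 750, schema 'schemaName', object 'procedureName', extension 'sql'
parseSchemaFile('150-schemaName.sql');
// → order 150, schema 'schemaName', no object name, extension 'sql'
```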

  • 150-schemaName.sql for schema creation:

    CREATE SCHEMA [schemaName]
  • 250-schemaName.synonymName.sql for creating a single synonym per file. Note that the statement needs to be on a single line in the actual file, following the exact query format below (i.e. using the [ and ] brackets and no extra spaces), with a filename matching the xxx-schema.object pattern. Any deviation from this query format will lead to pointless recreation of synonyms:

    DROP SYNONYM [schemaName].[synonymName] CREATE SYNONYM [schemaName].[synonymName] FOR [foreignDatabaseName].[foreignSchemaName].[foreignObjectName]
  • 250-schemaName.scriptName.sql for multiple synonyms per file creation. Filename should not match any object name in the database. The recommended queries in this case are:

    IF NOT EXISTS (
      SELECT
        *
      FROM
        sys.synonyms sy
      JOIN
        sys.schemas s ON s.schema_id=sy.schema_id AND s.name='schemaName'
      WHERE
        sy.base_object_name=
          '[foreignDatabaseName].[foreignSchemaName].[foreignObjectName]' AND
        sy.type='SN' AND sy.name='synonymName'
    )
    BEGIN
      IF EXISTS (
      SELECT
        *
      FROM
        sys.synonyms sy
      JOIN
        sys.schemas s ON s.schema_id=sy.schema_id AND s.name='schemaName'
      WHERE
        sy.type='SN' AND sy.name='synonymName'
      ) DROP SYNONYM [schemaName].[synonymName]
      CREATE SYNONYM [schemaName].[synonymName] FOR [foreignDatabaseName].[foreignSchemaName].[foreignObjectName]
     END
  • 340-schemaName.typeName.sql for user defined table types. The script will be executed as is, only when the user defined table type does not exist.

    CREATE TYPE [schemaName].[typeName] AS TABLE(
      column1 int,
      ... -- other column definitions
    )
  • 350-schemaName.tableName.sql for tables without FK. File content should start with CREATE or ALTER keyword. Note the key field and primary key naming conventions. The script will be executed as is, only when the table does not exist.

    CREATE TABLE [schemaName].[tableName](
      tableNameId INT,
      ... -- other column definitions
      CONSTRAINT [pkSchemaNameTableName] PRIMARY KEY CLUSTERED (tableNameId ASC),
      CONSTRAINT [ukSchemaNameTableName_field1_field2]
        UNIQUE ([field1] ASC, [field2] ASC)
    )
  • 360-schemaName.tableName.sql for tables with 1 FK. File content should start with CREATE or ALTER keyword. Note the foreign field and foreign key naming conventions and the use of underscore to separate the foreign table name. The script will be executed as is, only when the table does not exist.

    CREATE TABLE [schemaName].[tableName](
      tableNameId INT,
      foreignTableNameId INT,
      ... -- other column definitions
      CONSTRAINT [pkSchemaNameTableName] PRIMARY KEY CLUSTERED (tableNameId ASC),
      CONSTRAINT [fkSchemaNameTableName_foreignTableName] FOREIGN KEY([foreignTableNameId])
        REFERENCES [schemaName].[foreignTableName] ([foreignTableNameId])
    )
  • 370-schemaName.tableName.sql for tables with more than 1 FK. File content should start with CREATE or ALTER keyword. Note the foreign field and foreign key naming conventions and the use of underscore to separate the foreign table name. The script will be executed as is, only when the table does not exist.

    CREATE TABLE [schemaName].[tableName](
      xxxxForeignTableNameId INT,
      yyyyForeignTableNameId INT,
      ... -- other column definitions
      CONSTRAINT [pkSchemaNameTableName]
        PRIMARY KEY CLUSTERED (xxxxForeignTableNameId ASC,yyyyForeignTableNameId ASC),
      CONSTRAINT [fkSchemaNameTableName_xxxxForeignTableName]
        FOREIGN KEY([xxxxForeignTableNameId])
        REFERENCES [schemaName].[foreignTableName] ([foreignTableNameId]),
      CONSTRAINT [fkSchemaNameTableName_yyyyForeignTableName]
        FOREIGN KEY([yyyyForeignTableNameId])
        REFERENCES [schemaName].[foreignTableName] ([foreignTableNameId])
    )
  • 380-schemaName.alterTable.sql for altering tables to add missing columns or modify existing columns that do not involve table recreation. This script will be executed on each reconnection to the database.

      IF NOT EXISTS(
        SELECT *
        FROM sys.columns
        WHERE Name = N'fieldName' and Object_ID = Object_ID(N'tableName')
      )
      BEGIN
        ALTER TABLE tableName ADD fieldName INT
      END
  • 450-schemaName.functionName.sql for functions. File content should start with CREATE or ALTER keyword.

    CREATE FUNCTION [schemaName].[functionName](
      @parameterName1 INT,
      @parameterName2 INT
    )
    RETURNS type
    AS
    BEGIN
      ...
    END
  • 550-schemaName.tableNameInsert.sql for INSERT triggers. File content should start with CREATE or ALTER keyword. A similar pattern can be used for other triggers.

    CREATE TRIGGER [schemaName].[tableNameInsert]
    ON [schemaName].[tableName]
    FOR INSERT
    AS
      ...
  • 650-schemaName.viewName.sql for views. File content should start with CREATE or ALTER keyword.

     CREATE VIEW [schemaName].[viewName]
     AS
       ...
  • 750-schemaName.procedureName.sql for stored procedures. File content should start with CREATE or ALTER keyword. Note the mandatory camelCase naming convention for procedure, parameter and result column names.

    CREATE PROCEDURE [schemaName].[procedureName]
        @paramName1 INT,
        @paramName2 INT,
        @paramName3 INT OUT
    AS
      ...
      SELECT x columnName1, y columnName2, ...
  • 800-schemaName.tableName.columnName1ColumnName2.sql for creating indexes.

CREATE NONCLUSTERED INDEX [schemaName.tableName.columnName1ColumnName2]
ON schemaName.tableName(columnName1, columnName2)
  • 850-schemaName.data.sql for loading system default data. WARNING: THIS IS DEPRECATED; THE RECOMMENDED APPROACH IS TO LOAD DATA THROUGH *.yaml FILES. Used for any scripts that insert missing data:

    IF NOT EXISTS (SELECT * FROM tableName WHERE tableNameId = '...')
    BEGIN
      SET IDENTITY_INSERT tableName ON --in case of presence of identity columns
      INSERT INTO tableName(...) VALUES  (...)
      SET IDENTITY_INSERT tableName OFF --in case of presence of identity columns
    END
  • 850-schemaName.entity.merge.yaml for loading system default data. This will call a stored procedure, which should have already been created by previous files. These are usually procedures that can safely merge the provided data, which is often not normalized and includes no DB keys. The SP will receive parameters based on the values in the yaml file:

    parameter1: value1
    parameter2:
      - {column1: row1value1, column2: row1value2}
      - {column1: row2value1, column2: row2value2}

    If the SP name includes [, it will be called only when the SP exists.

Automatic creation of user defined table types

The SQL port has a configuration property named createTT. This property can be set to a boolean or an array of strings, specifying whether user defined table types matching the tables' structure should be automatically created. There are several scenarios:

  • If the property is set to createTT: true, then user defined table types will be automatically created for all tables.

  • If the property is set to an array of strings (e.g. createTT: ['schema1.tableName1', 'schema2.tableName2']) it means that user defined table types will be created for these tables only.

  • If the property is omitted or set to createTT: false then no user defined table types will be created.

The user defined table types that get automatically created will have names equal to their corresponding tables' names, followed by 'TT' and 'TTU'.

E.g. if a table's name is schema.name then its corresponding user defined table types would be named schema.nameTT and schema.nameTTU.

In case a user defined table type with the same name is manually defined in the schema folder, it will take priority over the automatic one upon schema initialization/upgrade.

Automatic insertion of permission check snippets

The SQL port schemas have a configuration property named permissionCheck. This property can be set to a boolean or an array of strings, specifying whether stored procedures defined in the schema should automatically check the caller permissions. Note that in addition to this configuration, each stored procedure should have a specific comment (--ut-permission-check) in its body, which will be replaced with the respective snippet. There are several scenarios:

  • If the property is set to true, the permission check snippet will be enabled for all stored procedures which have the --ut-permission-check comment somewhere within their body, e.g.:

      const path = require('path');
    
      module.exports = function sql() {
          return {
            schema: [
              {
                path: path.join(__dirname, 'schema'),
                permissionCheck: true
              }
            ]
          };
      };
  • If the property is set to an array of strings the permission check snippet will be enabled for those stored procedures only.

      const path = require('path');
    
      module.exports = function sql() {
          return {
            schema: [
              {
                path: path.join(__dirname, 'schema'),
                permissionCheck: [
                  'schema1.entity1.action1',
                  'schema1.entity1.action2'
                ]
              }
            ]
          };
      };
  • If the property is omitted or set to false, no permission checking will be enabled.

--ut-permission-check

/*
  The comment above will be
  automatically replaced with the code below
  (
    note that all the code will be output
    on 1 row so that the line numbers correctness
    in the stack traces can be preserved
  ):
*/

DECLARE
  @actionID VARCHAR(100) = OBJECT_SCHEMA_NAME(@@PROCID) + '.' + OBJECT_NAME(@@PROCID),
  @return INT = 0;

EXEC @return = [user].[permission.check]
  @actionId = @actionID,
  @objectId = NULL,
  @meta = @meta;

IF (@return != 0)
BEGIN
  RETURN 55555;
END

Table and column comments

Table and column comments are parsed and inserted as descriptions in MSSQL. A single line comment (--) after a parameter, field, column or constraint will be added as the description of that item.

Multiline comments between /* and */ are added as the description of the next parameter.

If both a single line comment and a multiline comment exist, only the multiline comment will be saved.

  CREATE PROCEDURE [test].[db.method] -- Procedure description
      @param1 int, -- param1 description
      /* multiline description
        related to param2 */
      @param2 nvarchar(20),
      @param3 test.tableValue2 READONLY -- param3 description
  AS
      SELECT * FROM @param3
      IF @error=1 RAISERROR('test.error',11,0)

For a table or type, the table description is on the same line as the CREATE statement, after the opening bracket.

  CREATE TABLE [schema].[table] ( -- Table description
      /* multiline description
        related to column1 */
      [column1] varchar(200) NOT NULL,
      -- ignored, because multiline description for the same column already exists
      [column2] bigint NOT NULL, -- single line description related to column2
      [column3] varchar(50) NOT NULL,
      CONSTRAINT [pk1] PRIMARY KEY
        NONCLUSTERED (sessionId), -- description for constraint are also parsed
      /* multiline is supported
        for constraints too */
      CONSTRAINT [fkSession_User] FOREIGN KEY([actorId]) REFERENCES [user].[user](actorId)
  )

Database diagrams

Database diagrams are stored in system tables which are not created by default. These tables will be created if create.diagram=true is set in the port configuration.

Result set processing

Single result set handling

Stored procedures can return a single result set like this:

CREATE PROCEDURE [module].[entity.action]
AS
SELECT * FROM module.someTable

This procedure can be called in one of the following ways, each handling the procedure's returned non-error result (an array of result sets, each an array of row objects) differently:

  • bus.importMethod('subject.object.predicate')(params) - will resolve with the full result
  • bus.importMethod('subject.object.predicate#[^]')(params) - will resolve with null, or throw portSQL.noRowsExpected if the procedure returned more than one result set or more than 0 rows in the first result set
  • bus.importMethod('subject.object.predicate#[0]')(params) - will resolve with an object containing the first row of the first result set (result[0][0]), or throw portSQL.oneRowExpected if the procedure did not return exactly one result set with exactly one row
  • bus.importMethod('subject.object.predicate#[?]')(params) - will resolve with null or an object containing the first row of the first result set (result[0][0]), or throw portSQL.maxOneRowExpected if the procedure did not return exactly one result set with at most one row
  • bus.importMethod('subject.object.predicate#[+]')(params) - will resolve with an array holding the rows of the first result set (result[0]), or throw portSQL.minOneRowExpected if the procedure did not return exactly one result set with one or more rows
  • bus.importMethod('subject.object.predicate#[]')(params) - will resolve with an array holding the rows of the first result set ([{row}, ...]), or throw portSQL.singleResultsetExpected if more than one result set is returned
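The checks behind the #[0] and #[?] modifiers can be sketched like this (assumed semantics based on the list above; firstRow is a hypothetical helper, not the port's code):

```javascript
// Sketch of the checks behind the '#[0]' and '#[?]' modifiers.
function firstRow(resultsets, {optional = false} = {}) {
    if (resultsets.length !== 1) {
        throw new Error('portSQL.oneRowExpected'); // more than one result set
    }
    const rows = resultsets[0];
    if (rows.length > 1) {
        throw new Error(optional ? 'portSQL.maxOneRowExpected' : 'portSQL.oneRowExpected');
    }
    if (rows.length === 0) {
        if (optional) return null; // '#[?]' tolerates zero rows
        throw new Error('portSQL.oneRowExpected');
    }
    return rows[0];
}

firstRow([[{id: 1}]]);            // returns {id: 1}
firstRow([[]], {optional: true}); // returns null
```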

Multiple result set handling

When executing stored procedures, the following pattern can be used to return named result sets:

CREATE PROCEDURE [module].[entity.action]
AS

SELECT 'result1' resultSetName
SELECT * FROM module.someTable

SELECT 'result2' resultSetName
SELECT * FROM module.someTable

Calling bus.importMethod('subject.object.predicate')(params) will resolve with {result1: [], result2: []}.
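The pairing of marker and data result sets could be sketched as follows (the shape of the driver's recordsets array is assumed; nameResultSets is a hypothetical helper, not the port's code):

```javascript
// Sketch: pair each single-row 'resultSetName' marker set with the data
// set that follows it, producing an object keyed by result set name.
function nameResultSets(recordsets) {
    const named = {};
    for (let i = 0; i < recordsets.length; i += 2) {
        named[recordsets[i][0].resultSetName] = recordsets[i + 1] || [];
    }
    return named;
}

nameResultSets([
    [{resultSetName: 'result1'}], [{id: 1}],
    [{resultSetName: 'result2'}], []
]); // {result1: [{id: 1}], result2: []}
```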

The cryptoAlgorithm parameter in the config specifies how the password is encrypted.

Throwing errors with more data

In case simple error throwing through RAISERROR is not suitable, the following pattern can be used:

SELECT
    'someData' resultSetName
SELECT
    *
FROM
    someData

SELECT
    'ut-error' resultSetName,
    'someModule.someError' [type],
    @@servername serverName,
    @@version [version]
IF @@trancount > 0 ROLLBACK TRANSACTION -- if rollback is needed
EXEC [core].[error] @type = 'someModule.someError' -- if processing is needed
RETURN 55555

This will throw an error of type someModule.someError, which will also have properties set to the selected columns in the ut-error result set, plus additional properties named after all the other result sets. If the specified error type is known, an instance of that error will be thrown. Otherwise a PortSQL error will be thrown, with its cause property set to the returned error.

Storing and retrieving JSON

Storing JSON happens automatically, as objects are serialized to JSON when needed: if a parameter is not based on xml or some form of date or time, its value is passed to JSON.stringify whenever it is an Object that is not a Buffer.

Converting such JSON back to an object can also happen automatically, if the column name in the result set has the suffix .json. In this case the column value is parsed with JSON.parse and stored in a property without the .json suffix. This enables easier consumption of stored JSON by just renaming the columns in the result set.
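The .json suffix convention can be sketched as follows (parseJsonColumns is a hypothetical helper illustrating the described behavior):

```javascript
// Sketch: parse '.json' suffixed columns and store them under the
// column name without the suffix, leaving other columns untouched.
function parseJsonColumns(row) {
    const out = {};
    for (const [column, value] of Object.entries(row)) {
        if (column.endsWith('.json')) {
            out[column.slice(0, -'.json'.length)] = JSON.parse(value);
        } else {
            out[column] = value;
        }
    }
    return out;
}

parseJsonColumns({id: 1, 'details.json': '{"a":1}'}); // {id: 1, details: {a: 1}}
```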

Bulk import and export

The SQL port can scan folders for *.fmt files and create methods which invoke the bulk copy utility bcp (included with the ODBC Driver for SQL Server) to import or export large amounts of data efficiently. Pass the list of folders in the format configuration property:

{
  format: [{path: path.join(__dirname, 'format')}]
}

The folder is scanned for files named:

  • schema.table.fmt - this will create two methods in the SQL port with names schema.table.import and schema.table.export
  • schema.table.operation.fmt - this will create one method in the SQL port with the name schema.table.operation

The files will be used as format files for import or export of data in or from the schema.table table. These methods accept the following properties in their first argument:

  • command - the bcp command to execute (in / out / format). For the *.import and *.export methods this is implicitly set to in or out respectively
  • file - file to use during the import or export
  • firstRow - start row, the default is 1
  • lastRow - end row
  • maxErrors - maximum number of allowed errors
  • hints - hints for bulk import
  • tempDir - temporary folder to use for error messages, the default is os.tmpdir()
  • separator - field separator to use (useful when generating a format file)
  • terminator - row terminator to use (useful when generating a format file)
  • unicode - allow usage of unicode characters, default is true (useful when generating a format file)

For more details see the bcp documentation

Usually the file parameter is enough for successful import or export of data. Note that files with data should be treated as temporary and be deleted or encrypted after the bulk operation is finished.

The recommended command for generating format files is:

bcp schema.table format nul -c -fschema.table.fmt -Uuser -Ppass -Sserver

Encryption of SP parameters

SQL server port will automatically encrypt values of stored procedure parameters in these cases:

  • the parameter type is varbinary and its size is a multiple of 16
  • the parameter name starts with 'encrypted'
  • if the parameter type is a table, the values of its columns will be encrypted when:
    • the column type is varbinary and its size is a multiple of 16
    • the column name starts with 'encrypted'

Since the stored procedures usually use auto generated table types, encryption of parameters is easily achieved by just defining columns in the table, for example:

CREATE TABLE [schemaName].[tableName](
    [text] VARBINARY(2048),
    [encryptedAttachments] VARBINARY(MAX),
    [encryptedDetails] VARBINARY(MAX)
)
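The rules above can be summarized in a sketch (shouldEncrypt is a hypothetical predicate, not the port's code; VARBINARY(MAX) is represented here with a negative size for illustration):

```javascript
// Sketch of the encryption rules: encrypt when the name starts with
// 'encrypted', or when the type is varbinary with a size multiple of 16.
function shouldEncrypt({name, type, size}) {
    return name.startsWith('encrypted') ||
        (type === 'varbinary' && size > 0 && size % 16 === 0);
}

shouldEncrypt({name: 'text', type: 'varbinary', size: 2048});           // true (2048 is a multiple of 16)
shouldEncrypt({name: 'encryptedDetails', type: 'varbinary', size: -1}); // true (name prefix)
shouldEncrypt({name: 'note', type: 'nvarchar', size: 100});             // false
```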

Decryption of SP results

SQL server port will automatically decrypt columns in the result set if:

  • the column type is varbinary and its size is a multiple of 16
  • the column name starts with 'encrypted'; in this case the result will include a column without this prefix

Indexing of encrypted values or tag lists

To enable easier indexing of encrypted values, SQL port will look for annotations in the *.sql files that define the tables and procedures. These annotations allow creation and usage of n-gram hashes to index and search the encrypted values.

Annotating tables

Annotating tables allows automatic creation of some helper objects. The annotation uses JSON within a multiline comment after the table definition:

CREATE TABLE [schemaName].[tableName](
    -- list of columns, constraints, etc.
)
/*{
    "ngram": {
        "index": true,
        "search": true
    }
}*/

  • Setting the ngram property will create a table type named [schemaName].[ngramTT] that can be used in stored procedures that want to receive n-grams
  • Setting ngram.index to true will:
    • create an index table named [schemaName].[tableNameIndex] for storing the n-grams
    • create a table type named [schemaName].[ngramIndexTT]
    • create a stored procedure named [schemaName].[tableNameIndexMerge] for merging n-grams into the index table
  • Setting ngram.search to true will create a function named [schemaName].[tableNameSearch] for searching the n-gram index

Annotating stored procedure parameters

Annotating stored procedure parameters allows n-grams to be generated and passed to the stored procedure. The annotation consists of JSON within a multiline comment before an ngramTT parameter:

CREATE PROCEDURE [schemaName].[method]
    @paramName [schemaName].[tableNameTT] READONLY,
/*{
    "paramName.text": 1,
    "paramName.demo": {
      "id": 2,
      "min": 3,
      "max": 5
    }
}*/
    @ngram [schemaName].[ngramTT] READONLY
AS

The JSON properties define how each column is to be indexed and can include parameters for the n-gram generation. Passing just a number activates the most common case, trigram generation. All generated n-grams will be passed in the @ngram parameter. Each row in the @ngram variable includes the parameter name and the table row number used during n-gram generation.
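Trigram generation itself can be sketched as follows (a simplified stand-in; the port's changelog mentions ut-function.ngram as the actual implementation):

```javascript
// Sketch: produce all 3-character substrings (trigrams) of a string.
function trigrams(text) {
    const result = [];
    for (let i = 0; i + 3 <= text.length; i += 1) {
        result.push(text.slice(i, i + 3));
    }
    return result;
}

trigrams('search'); // ['sea', 'ear', 'arc', 'rch']
```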

The stored procedure must use the @ngram parameter to save the passed data to the corresponding index table. For example, if a variable named @inserted contains the rows inserted in the main table, the following code could be used to save the n-grams in the index:

INSERT INTO
    [schemaName].tableNameIndex(id, ngram)
SELECT
    i.id, n.ngram
FROM
    (SELECT id, RANK() OVER (ORDER BY id) rnk FROM @inserted) i
JOIN
    @ngram n ON n.row = i.rnk AND n.param = 'paramName'

Indexing of tags

Tags or labels can be used as flags which mark specific conditions of the data. They are useful when the data includes values for which no individual table column is available.

Tags can be passed in a string parameter or table type column. The following format is recommended:

aaa.bbb=ccc
...
xxx.yyy=zzz

Individual tags are alphanumeric strings, which can also contain the dot and the equal sign. They can be of any length, but for the index they are hashed to a fixed length of 32 bytes (256 bits) and passed in the ngramTT parameter for which they are configured.

Distinguishing between tags and n-grams is done by name or by explicit configuration:

-- by column/parameter name suffix "Tags"
CREATE PROCEDURE [schemaName].[method]
    @someTags VARCHAR(100),
    @paramName [schemaName].[tableNameTT] READONLY
/*{
    "paramName.demoTags": 1,
    "someTags": 2
}*/
    @ngram [schemaName].[ngramTT] READONLY
AS
-- by configuration
CREATE PROCEDURE [schemaName].[method]
    @paramName [schemaName].[tableNameTT] READONLY
/*{
    "paramName.demo": {
        "tags": true,
        "id": 1
    }
}*/
    @ngram [schemaName].[ngramTT] READONLY
AS

Searching in n-gram index

To search in the n-gram index, a procedure must annotate a parameter in a way similar to the procedures that modify data. It can then use the automatically created function, or an alternative one, to search using the passed n-gram hashes. Here is an example of how the automatically created function can be used:

CREATE PROCEDURE [schemaName].[search]
    @paramName [schemaName].[tableNameTT] READONLY,
/*{
    "paramName.text": true
}*/
    @ngram [schemaName].[ngramTT] READONLY
AS
DECLARE @ngramCount INT = (SELECT COUNT(*) FROM @ngram)
SELECT
    t.id, t.text
FROM
    [schemaName].[tableName] t
WHERE
    (
        @ngramCount = 0 OR
        t.id IN (SELECT id FROM [schemaName].[tableNameSearch](@ngram, 1))
    )

The automatically created search function expects 2 parameters:

  • @ngram - the list of n-gram hashes to search for
  • @fuzzy - a float value that specifies how many of the n-grams must match, where 1 means 100%

The function returns the list of keys that contain the passed n-grams.

changelog

9.2.8 (2024-10-07)

Bug Fixes

  • default expressions for sql table columns (9dd68e9)

9.2.7 (2024-08-14)

Bug Fixes

  • decrypt column for sql query with process 'return' (#150) (783d3df)

9.2.6 (2024-07-16)

9.2.5 (2024-07-11)

9.2.4 (2024-06-13)

Bug Fixes

9.2.3 (2024-04-05)

9.2.2 (2024-04-04)

Bug Fixes

  • APEXG-2852 add missing error params (1993500)
  • APEXG-2852 comments (0fe9f79)

9.2.1 (2023-11-03)

Bug Fixes

  • support computed columns (7b591f2)

9.2.0 (2023-10-13)

Features

9.1.0 (2023-10-11)

Features

9.0.7 (2023-09-21)

Bug Fixes

9.0.6 (2023-08-19)

Bug Fixes

  • xmlParserOptions to be passed to xml2js parser (#142) (387db0b)

9.0.5 (2023-06-29)

9.0.4 (2023-06-08)

Bug Fixes

9.0.3 (2023-05-19)

Bug Fixes

9.0.2 (2023-05-05)

Bug Fixes

9.0.1 (2023-04-19)

9.0.0 (2023-04-19)

BREAKING CHANGES

  • --ut-permission-check is replaced by default with a call to [user].[permission.check], you can opt-out via permissionCheck: false in the schema object

8.2.2 (2023-02-01)

8.2.1 (2022-12-21)

Bug Fixes


8.2.0 (2022-12-13)

Features

8.1.9 (2022-11-28)

8.1.8 (2022-11-25)

Bug Fixes

8.1.7 (2022-11-16)

Bug Fixes

8.1.6 (2022-11-14)

Bug Fixes

8.1.5 (2022-10-31)

Bug Fixes

  • decrypt stable encrypted column in result (153c537)

8.1.4 (2022-10-20)

Bug Fixes

  • better doc parse error + usage of ut-function.ngram (#126) (6070dd5)

8.1.3 (2022-09-15)

8.1.2 (2022-09-15)

Bug Fixes

  • use local config when interpolating *.sql files (dac6f52)

8.1.1 (2022-07-21)

Bug Fixes

  • improve type generation (0af2345)

8.1.0 (2022-07-19)

Features

  • stored procedures that start with _ to be private (47be1dc)

8.0.4 (2022-07-19)

Bug Fixes

8.0.3 (2022-06-30)

Bug Fixes

8.0.2 (2022-06-23)

Bug Fixes

8.0.1 (2022-06-22)

Bug Fixes

  • explicitly convert stringified object to buffer when varbinary (#121) (2738124)

8.0.0 (2022-06-21)

Features

7.44.6 (2022-05-10)

7.44.5 (2022-04-15)

Bug Fixes

  • configurable name for ngram search (b95dca3)

7.44.4 (2022-03-22)

Bug Fixes

  • bcp trustServerCertificate (782ed6a)

7.44.3 (2022-03-22)

Bug Fixes

  • bcp trustServerCertificate (231b554)

7.44.2 (2022-03-16)

Bug Fixes

  • configure csv separator on csv export ALZ-7910 (#114) (620db65)

7.44.1 (2022-03-11)

7.44.0 (2022-03-10)

Features

7.43.0 (2022-03-01)

7.42.3 (2021-12-09)

7.42.2 (2021-12-06)

Bug Fixes

  • allow already prefixed times (d46cfb8)

7.42.1 (2021-11-16)

Bug Fixes

  • pass @callParams to core.errorLog if provided as a parameter of the procedure (f47f692)

7.42.0 (2021-10-26)

Features

  • pass callParams to user.permission.check (df352a1)

7.41.1 (2021-10-25)

Bug Fixes

  • explicit check for $meta.forward (127174f)

7.41.0 (2021-10-25)

Features

7.40.2 (2021-10-12)

Bug Fixes

  • switch dot-prop with lodash.get (faf9b73)

7.40.1 (2021-10-11)

Bug Fixes

7.40.0 (2021-09-09)

Features

  • collect SP parameter types (2f54379)

7.39.2 (2021-07-23)

Bug Fixes

  • xml decrypt stable [UTCORE-121] (5fc431f)

7.39.1 (2021-07-23)

Bug Fixes

7.39.0 (2021-07-22)

Features

  • decrypt XML tag and attr values [UTCORE-121] (4b2560e)

7.38.0 (2021-07-20)

Features

  • enable skipping calls to non-existing SP [UTCORE-121] (8a8e2e1)

7.37.5 (2021-06-24)

7.37.4 (2021-06-18)

Bug Fixes

  • change log level [UTCORE-121] (7382e0d)

7.37.3 (2021-05-21)

Bug Fixes

  • BDC-1866 - pass auth.checkSession to SPs (0d98b47)

7.37.2 (2021-02-05)

Bug Fixes

7.37.1 (2021-01-28)

Bug Fixes

  • Buffer objects are no longer stringified when passed directly (7e944ec)
  • bug in fetching db table types when user-defined data types exist (ba1ef0a)

7.37.0 (2021-01-25)

Features

  • config for loading dbo schema (6c8a577)

7.36.2 (2021-01-14)

Bug Fixes

  • do not log after port is stopped (33e37c9)

7.36.1 (2020-12-19)

Bug Fixes

7.36.0 (2020-11-25)

Features

7.35.1 (2020-11-16)

Bug Fixes

  • log filename for parse errors (5dd92bf)
  • return serializable data (65ea91a)

7.35.0 (2020-10-22)

Features

  • enable stable cbc with prefix (d0e690c)

7.34.0 (2020-09-24)

Features

7.33.1 (2020-09-23)

Bug Fixes

7.33.0 (2020-09-23)

Features

  • optional user and password for msnodesqlv8 (#100) (547c454)

7.32.0 (2020-08-25)

Features

  • enable saveAs stream result (401069f)

7.31.0 (2020-08-20)

Features

7.30.0 (2020-07-30)

Bug Fixes

Features

7.29.0 (2020-07-22)

Features

  • allow more exclude options (78e741a)

7.28.0 (2020-07-22)

Bug Fixes

  • use compatibility level 120 as default (07785b4)

Features

7.27.1 (2020-07-08)

7.27.0 (2020-07-01)

Features

  • allow passing error type to core.errorStack (a173cb3)

7.26.3 (2020-06-08)

Bug Fixes

7.26.2 (2020-05-05)

Bug Fixes

  • pass cbc where required (cb59c6b)

7.26.1 (2020-04-30)

Bug Fixes

7.26.0 (2020-04-29)

Features

7.25.6 (2020-03-11)

Bug Fixes

7.25.5 (2020-03-11)

Bug Fixes

  • properly pad when value is unicode string (7807493)

7.25.4 (2020-03-11)

Bug Fixes

7.25.3 (2020-03-06)

7.25.2 (2020-03-05)

Bug Fixes

  • do not auto create lowercase objects (503b177)

7.25.1 (2020-03-05)

Bug Fixes

7.25.0 (2020-03-05)

Features

  • auto create IndexMerge SP (0c4aa55)

7.24.1 (2020-03-02)

7.24.0 (2020-03-01)

Features

  • pass method for error interpolation (ab92a36)

7.23.0 (2020-02-28)

Features

  • encrypt stable and date (8cacc24)

7.22.2 (2020-02-27)

Bug Fixes

7.22.1 (2020-02-27)

Bug Fixes

7.22.0 (2020-02-27)

Features

  • add field id in ngram index, support unicode (4482a49)

7.21.3 (2020-02-25)

Bug Fixes

  • cbc.decrypt: gracefully handle bad length (650524e)

7.21.2 (2020-02-25)

Bug Fixes

  • pass cbc to processFiles function (#94) (019771d)

7.21.1 (2020-02-20)

Bug Fixes

  • allow logging of binary columns (0e99365)
  • do not try to create ngramtt again (6043f60)

7.21.0 (2020-02-11)

Features

  • support yaml files in schema folders (7e52b30)

7.20.1 (2020-01-29)

Bug Fixes

7.20.0 (2020-01-29)

Features

7.19.1 (2020-01-24)

7.19.0 (2020-01-24)

Bug Fixes

  • do not read cache with require() (8466081)

Features

  • include sql files in compiled app (3f60d55)

7.18.3 (2020-01-24)

7.18.2 (2020-01-14)

7.18.1 (2020-01-14)

7.18.0 (2019-12-17)

Features

7.17.0 (2019-12-04)

Features

7.16.2 (2019-10-30)

Bug Fixes

  • handle rowversion fields (f1a9717)

7.16.1 (2019-10-10)

Bug Fixes

  • use getError for variable errors (9706d1d)

7.16.0 (2019-10-10)

Bug Fixes

  • unify saveAs and resultsets processing (c9fe4a0)

7.15.4 (2019-09-18)

Bug Fixes

  • always delete ***.json columns from resultSet (#83) (3fec6be)

7.15.3 (2019-09-17)

Bug Fixes

7.15.2 (2019-09-03)

Bug Fixes

  • handle linux newline format (cb3a41d)

7.15.1 (2019-08-15)

Bug Fixes

  • skip unneeded auth properties (87910e5)

7.15.0 (2019-07-25)

Features

7.14.1 (2019-07-18)

Bug Fixes

  • check if stored procedure is linked before calling it (b29ee62)

7.14.0 (2019-06-25)

Features

  • add procedure description to sql extended properties (7c2a359)
  • add table description to sql extended properties (befbc4c)

7.13.4 (2019-05-29)

Bug Fixes

7.13.3 (2019-05-29)

Bug Fixes

  • skip null stringification (a058c21)

7.13.2 (2019-05-28)

Bug Fixes

  • skip date stringification (e6377cb)

7.13.1 (2019-05-26)

Bug Fixes

  • set hash only when dropped (39569ff)
  • show filename on error (6d15a81)

7.13.0 (2019-05-04)

Features

  • support json in columns and params (e6bd124)

7.12.2 (2019-04-27)

Bug Fixes

  • improve offline and cacheing (6f95974)

7.12.1 (2019-04-18)

Bug Fixes

7.12.0 (2019-04-17)

Features

  • convert objects parameters to JSON (9800354)

7.11.0 (2019-04-16)

Bug Fixes

  • stringify objects when encrypting (f7e3e8e)

Features

  • allow using '-' as prefix separator (41d2069)

7.10.1 (2019-03-27)

Bug Fixes

  • support encryption in TVP (5a1ca7f)

7.10.0 (2019-03-27)

Features

  • allow seed data coming from .js or .json (b15d772)

7.9.2 (2019-03-22)

7.9.1 (2019-03-14)

Bug Fixes

7.9.0 (2019-03-13)

Bug Fixes

Features

  • crypto activated by field/param prefix (a2b473a)
  • support ?# before the modifiers (36c8c81)

7.8.2 (2019-02-25)

Bug Fixes

  • only create when create.user is configured (4d52389)

7.8.1 (2019-02-19)

Bug Fixes

  • handle schemas in import order (7763192)

7.8.0 (2019-02-19)

Features

  • access whole config subtrees as JSON when interpolating (37b81ab)

7.7.2 (2019-02-14)

Bug Fixes

7.7.1 (2019-02-14)

Bug Fixes

  • interpolate falsy values correctly (d6aa73d)

7.7.0 (2019-02-08)

Bug Fixes

Features

  • easily encrypt test data (04440ff)

7.6.0 (2019-02-02)

Bug Fixes

7.5.0 (2018-12-06)

Features

7.4.0 (2018-08-02)

Features

  • add possibility to interpolate filenames, tables and tabletypes (#60) (51dc26b)

7.3.1 (2018-06-11)

Bug Fixes

7.3.0 (2018-06-07)

Bug Fixes

  • pass options to initial connection (a34918d)

7.2.1 (2018-05-09)

Bug Fixes

  • handle all updates cases (559a164)

7.2.0 (2018-04-17)

7.1.1 (2018-03-04)

Bug Fixes

  • log error instead of plain object (#52) (76aef9f)

7.1.0 (2018-01-30)

7.0.0 (2017-12-06)

Bug Fixes

  • add ability to exclude part of the schema (#42) (6f9af5e)

Features

6.8.0 (2017-10-09)

Features

  • allow disabling schemaUpdate retries (3220776)

6.7.1 (2017-09-28)

Bug Fixes

  • additional check if user exists but is not a db_owner (cea48cd)
  • deadlocked on parallel test caused by sp_addrolemember (c141540)

6.7.0 (2017-09-28)

Bug Fixes

  • deadlocked on parallel test caused by sp_addrolemember (#37) (0ae966c)
  • optimize when no schemas to update (bde3c93)

Features

  • improve logging and performance metrics (041af81)

6.6.4 (2017-08-21)

Bug Fixes

6.6.3 (2017-08-02)

Bug Fixes

  • remove dependency to when (65e14d8)

6.6.2 (2017-07-28)

Bug Fixes

  • throw proper error from resultSet (64a8107)

6.6.1 (2017-06-26)

Bug Fixes

6.6.0 (2017-06-20)

Features

  • grant server state view permission to the db user, generate uuid in meta UIS-2550 (f95967a)

6.5.0 (2017-05-31)

Features

6.4.11 (2017-05-31)

Bug Fixes

  • Change rowversion column type to binary for table-types (#30) (2725f6e)

6.4.10 (2017-05-22)

6.4.9 (2017-05-22)

6.4.8 (2017-05-22)

6.4.7 (2017-05-22)

6.4.6 (2017-05-22)

6.4.5 (2017-05-15)

Bug Fixes

6.4.4 (2017-05-11)

Bug Fixes

  • throw correct error when sql parser fails (#28) (8cfb82c)

6.4.3 (2017-05-05)

Bug Fixes

  • support default output params (dc8c917)

6.4.2 (2017-05-05)

Bug Fixes

6.4.1 (2017-05-04)

Bug Fixes

6.4.0 (2017-04-19)

Features

  • allow calling schema returning function (92469c5)

6.3.1 (2017-04-19)

Bug Fixes

6.3.0 (2017-03-31)

Features

  • support timestamp and rowversion (dd8e52b)

6.2.2 (2017-03-31)

Bug Fixes

6.2.1 (2017-03-30)

Bug Fixes

6.2.0 (2017-03-30)

Features

  • flexible SP linking and schema update (d139fb4)

6.1.1 (2017-03-27)

Bug Fixes

  • scalar type size parsing (01ef232)

6.1.0 (2017-03-23)

Bug Fixes

Features

  • enable parse of table check constraint (0f55920)
  • skipping creation of individual table types (80354e3)

6.0.1 (2017-03-08)

Bug Fixes

6.0.0 (2017-03-06)

Features

  • allow call modifiers [], [^], [?], [0], [+] (#22) (9cbcae9)
  • throw errors containing data (#23) (b8eed61)

BREAKING CHANGES

  • errors are now thrown using lowercase first letter

5.8.3 (2017-02-24)

Bug Fixes

  • know which port fails on connect error (#21) (0eb6b34)

5.8.2 (2017-02-18)

Bug Fixes

  • do not refresh views without schema (ff3386b)

5.8.1 (2017-02-10)

Bug Fixes

  • don't try to create user twice if config.create.user = config.db.user (aeed58e)

5.8.0 (2017-02-03)

Features

  • support for expressions (2c63a26)

5.7.3 (2017-01-17)

Bug Fixes

5.7.2 (2017-01-17)

Bug Fixes

5.7.1 (2017-01-13)

Bug Fixes

  • added error logging before adding query to fail array (fbc1f3e)
  • from error to warn (dd1950d)

5.7.0 (2016-12-19)

Features

  • add possibility to have multiple core.error procedure executions in different try catch blocks (9ed280f)

5.6.3 (2016-12-19)

Bug Fixes

  • conversion for datetime and xml parameters (#17) (375255b)

5.6.2 (2016-12-09)

Bug Fixes

  • do not flood with messages (9490e2f)

5.6.1 (2016-12-01)

Bug Fixes

5.6.0 (2016-12-01)

Features

5.5.10 (2016-11-23)

Bug Fixes

5.5.9 (2016-11-23)

Bug Fixes

5.5.8 (2016-11-07)

Bug Fixes

5.5.7 (2016-11-04)

Bug Fixes

5.5.6 (2016-11-04)

5.5.5 (2016-11-04)

5.5.4 (2016-11-04)

Bug Fixes

  • fix issue with national characters in XML columns (34dd4ae)

5.5.3 (2016-11-03)

Bug Fixes

  • fix failed queries check (75b04f9)

5.5.2 (2016-11-03)

Bug Fixes

  • should not try to read directory in the schema folder (44c0b06)

5.5.1 (2016-10-24)

5.5.0 (2016-10-13)

Features

  • schema property as function (cf472a5)

5.4.12 (2016-09-20)

5.4.11 (2016-09-20)

Bug Fixes

5.4.10 (2016-09-13)

5.4.9 (2016-09-13)

5.4.8 (2016-09-13)

5.4.7 (2016-08-24)

5.4.6 (2016-08-17)

Bug Fixes

5.4.5 (2016-08-12)

Bug Fixes

5.4.4 (2016-08-02)

5.4.3 (2016-07-26)

Bug Fixes

5.4.2 (2016-07-26)

5.4.1 (2016-07-11)

Features

5.3.65 (2016-06-23)

Features

  • add support for saving documentation when doc=true in the config (0f090fa)

5.3.64 (2016-06-20)

Features

  • add support for time type (6800701)

5.3.63 (2016-06-20)

Features

  • add support for time columns in a table type (9652a77)

5.3.62 (2016-06-16)

5.3.61 (2016-06-15)

Features

5.3.60 (2016-06-03)

Features

  • add parser for default expression (a6d5450)
  • add support for default values and datetime2 precision in TVP (5475910)
  • support parsing default values for table columns (abdaa51)
  • support sending default values in TVP and fix datetime2 handling (83a407b)

5.3.59 (2016-06-02)

Features

5.3.58 (2016-06-01)

5.3.57 (2016-06-01)

Features

5.3.56 (2016-05-30)

Features

  • add support for endpoint/method syntax (4a5d41f)

5.3.55 (2016-05-26)

Bug Fixes

  • remove comment (f47d03d)
  • remove fields filtration by default and handle in implementation (1fc3223)

Features

  • add automatic XML building from JSON (744f517)

5.3.54 (2016-05-25)

Bug Fixes

  • improve iteration algorithm (74e04bb)

Features

  • XML parsing functionality (76b4630)

5.3.53 (2016-05-17)

Bug Fixes

5.3.52 (2016-05-17)

Features

5.3.51 (2016-05-10)

Features

  • support passing $meta and using . in parameter/column names (706fe1f)

5.3.50 (2016-05-06)

Features

  • generate TTU automatically for all tables based on their structure (0c769cb)

5.3.49 (2016-04-28)

Bug Fixes

  • call-sp: Default parameters should not be overridden with NULL when properties are not specified. (55c7ef9)

Features

  • resultsets-order: Include output parameters with predefined key when using named resultsets. (25914da)

5.3.48 (2016-04-27)

5.3.47 (2016-04-25)

Features

  • allow retry to be configured and fail when retry is disabled (55783de)

5.3.46 (2016-04-22)

Features

  • add support for creating database (04ab4fb)

5.3.45 (2016-04-19)

Bug Fixes

  • fix naming convention for auditCall (edbec62)

5.3.44 (2016-04-15)

Features

  • add Gitlab-ci and Jenkins scripts (4591399)

5.3.43 (2016-04-05)

Features

  • report error location in file (1f60345)

5.3.42 (2016-04-05)

Features

  • parser: add function and params comment as documentation (fbb7e8a)
  • parser: add support for alter (ce74eab)
  • parser: add table and procedure comment as description in mssql (91075e5)

5.3.41 (2016-04-04)

Features

  • update ut-error dependency (fed374c)

5.3.40 (2016-03-31)

Bug Fixes

Features