
Package detail

web-speech-cognitive-services

compulim · 40.4k downloads · MIT · 8.1.1 · TypeScript support: included

Polyfill Web Speech API with Cognitive Services Speech-to-Text service

cognitive services, dictation, microphone, polyfill, react, speak, speech recognition, speech synthesis, speech to text, speechsynthesis, stt, text to speech, tts, unified speech, utterance, voice recognition, web speech, webrtc, webspeech

readme

web-speech-cognitive-services

Web Speech API adapter to use Cognitive Services Speech Services for both speech-to-text and text-to-speech service.


Description

Speech technologies enable many interesting scenarios, including intelligent personal assistants, and provide alternative inputs for assistive technologies.

Although the W3C has standardized speech technologies in the browser, speech-to-text and text-to-speech support is still scarce. Cloud-based speech technologies, however, are very mature.

This polyfill provides the W3C Speech Recognition and Speech Synthesis APIs in the browser by using Azure Cognitive Services Speech Services. It brings speech technologies to all modern first-party browsers on both PC and mobile platforms.

Demo

Before getting started, please obtain a Cognitive Services subscription key from your Azure subscription.

Try out our demo at https://compulim.github.io/web-speech-cognitive-services. If you don't have a subscription key, you can still try out our demo in a speech-supported browser.

We use react-dictate-button and react-say to quickly set up the playground.

Browser requirements

Speech recognition requires the WebRTC API, and the page must be hosted over HTTPS or on localhost. Although iOS 12 supports WebRTC, native apps using WKWebView do not support WebRTC.

Special requirement for Safari

Speech synthesis requires the Web Audio API. On Safari, a user gesture (click or tap) is required before audio clips can be played through the Web Audio API. To prime the Web Audio API so it can be used without a user gesture, you can synthesize an empty string; this does not trigger any network call and only plays a hardcoded, empty short audio clip. If you already have a "primed" AudioContext object, you can also pass it as an option.
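As a rough sketch, priming inside a click handler could look like the snippet below. The ponyfill variable and the #start-button element are assumptions for illustration; the ponyfill itself is created as shown in the samples later in this README.

// Prime Web Audio inside a user gesture. "ponyfill" and "#start-button" are assumed
// to exist for this illustration; create the ponyfill as shown later in this README.
const { speechSynthesis, SpeechSynthesisUtterance } = ponyfill;

document.querySelector('#start-button').addEventListener('click', () => {
  // Synthesizing an empty string plays a short hardcoded audio clip and does not
  // trigger any network call, so Web Audio is ready for later synthesis.
  speechSynthesis.speak(new SpeechSynthesisUtterance(''));
});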

How to use

There are two ways to use this package:

  1. Using <script> to load the bundle
  2. Install from NPM

Using <script> to load the bundle

To use the ponyfill directly in HTML, you can use our published bundle from unpkg.

In the sample below, we use the bundle to perform text-to-speech with a voice named "Aria24kRUS".

<!DOCTYPE html>
<html lang="en-US">
  <head>
    <script src="https://unpkg.com/web-speech-cognitive-services/umd/web-speech-cognitive-services.production.min.js"></script>
  </head>
  <body>
    <script>
      const { speechSynthesis, SpeechSynthesisUtterance } = window.WebSpeechCognitiveServices.create({
        credentials: {
          region: 'westus',
          subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
        }
      });

      speechSynthesis.addEventListener('voiceschanged', () => {
        const voices = speechSynthesis.getVoices();
        const utterance = new SpeechSynthesisUtterance('Hello, World!');

        utterance.voice = voices.find(voice => /Aria24kRUS/u.test(voice.name));

        speechSynthesis.speak(utterance);
      });
    </script>
  </body>
</html>

We do not host the bundle. You should always use Subresource Integrity to protect bundle integrity when loading from a third-party CDN.
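For example, pin an exact version and add an integrity attribute. The hash below is a placeholder, not a real hash; compute it for the exact file you pin.

<!-- Subresource Integrity sketch: the integrity value is a placeholder. -->
<script
  crossorigin="anonymous"
  integrity="sha384-REPLACE_WITH_HASH_OF_PINNED_FILE"
  src="https://unpkg.com/web-speech-cognitive-services@8.1.1/umd/web-speech-cognitive-services.production.min.js"
></script>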

The voiceschanged event fires shortly after the ponyfill is created. You will need to wait for the event before you can choose a voice for your utterance.

Install from NPM

For production build, run npm install web-speech-cognitive-services.

For development build, run npm install web-speech-cognitive-services@master.

Since the Speech Services SDK is not on NPM yet, we bundle the SDK inside this package for now. When the Speech Services SDK is released on NPM, we will define it as a peer dependency.

Polyfilling vs. ponyfilling

In JavaScript, a polyfill is a technique to bring newer features to an older environment. A ponyfill is very similar, but instead of polluting the environment by default, it lets the developer choose what they want. This article talks about polyfill vs. ponyfill.

In this package, we prefer ponyfilling because it does not pollute the hosting environment. You are also free to mix-and-match multiple speech recognition engines in a single environment.
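For example, a minimal sketch of the difference (assigning onto window is optional and only shown to illustrate polyfill-style usage):

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

// Ponyfill: keep the implementation in local variables; nothing on window is touched.
const { SpeechRecognition } = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

// Polyfill-style (optional): assign it globally yourself if existing code expects
// window.SpeechRecognition.
window.SpeechRecognition = SpeechRecognition;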

Options

The following lists all options supported by the adapter.

audioConfig: AudioConfig
Default: fromDefaultMicrophoneInput()
AudioConfig object to use with speech recognition. Please refer to this article for details on selecting different audio devices. (A sketch follows this list.)

audioContext: AudioContext
Default: undefined
The AudioContext used to synthesize speech. If this is undefined, an AudioContext object will be created on first synthesis.

credentials: (
  ICredentials ||
  Promise<ICredentials> ||
  () => ICredentials ||
  () => Promise<ICredentials>
)

ICredentials: {
  authorizationToken: string,
  region: string
} || {
  region: string,
  subscriptionKey: string
} || {
  authorizationToken: string,
  customVoiceHostname?: string,
  speechRecognitionHostname: string,
  speechSynthesisHostname: string
} || {
  customVoiceHostname?: string,
  speechRecognitionHostname: string,
  speechSynthesisHostname: string,
  subscriptionKey: string
}

(Required) Credentials (including Azure region) from Cognitive Services. Please refer to this article to obtain an authorization token.

A subscription key is not recommended for production use, as it will be leaked in the browser.

For sovereign clouds such as Azure Government (United States) and Azure China, instead of specifying region, specify speechRecognitionHostname and speechSynthesisHostname. You can find the sovereign cloud connection parameters in this article.

enableTelemetry
Default: undefined
Pass-through option to enable or disable telemetry for the Speech SDK recognizer, as outlined in the Speech SDK. This adapter does not collect any telemetry.

By default, the Speech SDK collects telemetry unless this is set to false.

looseEvents: boolean
Default: false
Specifies whether the event order should strictly follow observed browser behavior (false) or be loosened (true). Regardless of the option, both behaviors conform with the W3C specifications.

You can read more about this option in the event order section.

ponyfill.AudioContext: AudioContext
Default: window.AudioContext || window.webkitAudioContext
Ponyfill for the Web Audio API.

Currently, only the Web Audio API can be ponyfilled. We may expand to WebRTC for audio recording in the future.

referenceGrammars: string[]
Default: undefined
Reference grammar IDs to send for speech recognition.

speechRecognitionEndpointId: string
Default: undefined
Endpoint ID for the Custom Speech service.

speechSynthesisDeploymentId: string
Default: undefined
Deployment ID for the Custom Voice service.

When you are using Custom Voice, you will need to specify your voice model name through SpeechSynthesisVoice.voiceURI. Please refer to the "Custom Voice support" section for details.

speechSynthesisOutputFormat: string
Default: "audio-24khz-160kbitrate-mono-mp3"
Audio format for speech synthesis. Please refer to this article for the list of supported formats.

textNormalization: string
Default: "display"
Supported text normalization options:

  • "display"
  • "itn" (inverse text normalization)
  • "lexical"
  • "maskeditn" (masked ITN)

Setting up for sovereign clouds

You can use the adapter to connect to sovereign clouds, including Azure Government (United States) and Microsoft Azure China.

Please refer to this article on limitations when using Cognitive Services Speech Services on sovereign clouds.

Azure Government (United States)

createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    speechRecognitionHostname: 'virginia.stt.speech.azure.us',
    speechSynthesisHostname: 'virginia.tts.speech.azure.us'
  }
});

Microsoft Azure China

createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    speechRecognitionHostname: 'chinaeast2.stt.speech.azure.cn',
    speechSynthesisHostname: 'chinaeast2.tts.speech.azure.cn'
  }
});

Code snippets

For readability, we omitted the async function wrapper in all code snippets. To run the code, you will need to wrap it in an async function, as shown below.
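A minimal wrapper could look like this:

(async function () {
  // Paste any of the snippets below here so that "await" is allowed.
})().catch(err => console.error(err));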

Speech recognition (speech-to-text)

import { createSpeechRecognitionPonyfill } from 'web-speech-cognitive-services/lib/SpeechServices/SpeechToText';

const {
  SpeechRecognition
} = await createSpeechRecognitionPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

const recognition = new SpeechRecognition();

recognition.interimResults = true;
recognition.lang = 'en-US';

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();

Note: most browsers require HTTPS or localhost for WebRTC.

Integrating with React

You can use react-dictate-button to integrate speech recognition functionality to your React app.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
import DictateButton from 'react-dictate-button';

const {
  SpeechGrammarList,
  SpeechRecognition
} = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

export default props =>
  <DictateButton
    onDictate={ ({ result }) => alert(result.transcript) }
    speechGrammarList={ SpeechGrammarList }
    speechRecognition={ SpeechRecognition }
  >
    Start dictation
  </DictateButton>

Speech synthesis (text-to-speech)

import { createSpeechSynthesisPonyfill } from 'web-speech-cognitive-services/lib/SpeechServices/TextToSpeech';

const {
  speechSynthesis,
  SpeechSynthesisUtterance
} = await createSpeechSynthesisPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

speechSynthesis.addEventListener('voiceschanged', () => {
  const voices = speechSynthesis.getVoices();
  const utterance = new SpeechSynthesisUtterance('Hello, World!');

  utterance.voice = voices.find(voice => /Aria24kRUS/u.test(voice.name));

  speechSynthesis.speak(utterance);
});

Note: speechSynthesis is camel-cased because it is an instance.

List of supported regions can be found in this article.

pitch, rate, voice, and volume are supported. Only onstart, onerror, and onend events are supported.
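For example, a small sketch exercising those properties and events, assuming speechSynthesis and SpeechSynthesisUtterance from the ponyfill created above and a voice already selected via the voiceschanged snippet:

const utterance = new SpeechSynthesisUtterance('Hello, World!');

// Prosody and volume tweaks supported by the ponyfill.
utterance.pitch = 1.2;
utterance.rate = 0.9;
utterance.volume = 0.8;

// Only these three events are supported.
utterance.onstart = () => console.log('started');
utterance.onerror = event => console.error(event);
utterance.onend = () => console.log('ended');

speechSynthesis.speak(utterance);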

Integrating with React

You can use react-say to integrate speech synthesis functionality to your React app.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';
import React, { useEffect, useState } from 'react';
import Say from 'react-say';

export default () => {
  const [ponyfill, setPonyfill] = useState();

  useEffect(() => {
    // useEffect callbacks should not return a Promise, so wrap the async work.
    (async function () {
      setPonyfill(await createPonyfill({
        credentials: {
          region: 'westus',
          subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
        }
      }));
    })();
  }, [setPonyfill]);

  return (
    ponyfill &&
      <Say
        speechSynthesis={ ponyfill.speechSynthesis }
        speechSynthesisUtterance={ ponyfill.SpeechSynthesisUtterance }
        text="Hello, World!"
      />
  );
};

Using authorization token

Instead of exposing the subscription key in the browser, we strongly recommend using an authorization token.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    authorizationToken: 'YOUR_AUTHORIZATION_TOKEN',
    region: 'westus'
  }
});

You can also provide an async function that fetches the authorization token and Azure region on demand. You should cache the authorization token for subsequent requests. For simplicity, the snippet below does not cache the result.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: async () => {
    const res = await fetch('https://example.com/your-token');

    return {
      authorizationToken: await res.text(),
      region: 'westus'
    };
  }
});

List of supported regions can be found in this article.
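A rough sketch of a caching variant follows. The token endpoint and the 10-minute refresh interval are illustrative assumptions; match the interval to your token's actual lifetime.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

let cachedCredentials;
let fetchedAt = 0;

const ponyfill = await createPonyfill({
  // The callback may be invoked very frequently, so cache the token and refresh it
  // before it expires. The 10-minute window below is an assumption for illustration.
  credentials: async () => {
    if (!cachedCredentials || Date.now() - fetchedAt > 600000) {
      const res = await fetch('https://example.com/your-token');

      cachedCredentials = {
        authorizationToken: await res.text(),
        region: 'westus'
      };

      fetchedAt = Date.now();
    }

    return cachedCredentials;
  }
});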

Lexical and ITN support

Lexical and ITN support is unique to Cognitive Services Speech Services. Our adapter adds the properties transcriptITN, transcriptLexical, and transcriptMaskedITN to surface these results, in addition to transcript and confidence.
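For example, a small sketch reading these properties from the latest result, assuming a recognition object set up as in the speech recognition snippet above:

recognition.onresult = ({ results }) => {
  // Take the first alternative of the most recent result.
  const alternative = results[results.length - 1][0];

  console.log(alternative.transcript);          // display form
  console.log(alternative.transcriptLexical);   // lexical form
  console.log(alternative.transcriptITN);       // inverse text normalization
  console.log(alternative.transcriptMaskedITN); // masked ITN
  console.log(alternative.confidence);
};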

Biasing towards some words for recognition

In some cases, you may want the speech recognition engine to be biased towards "Bellevue", because it is not trivial for the engine to distinguish between "Bellevue", "Bellview", and "Bellvue" (without the "e"). By giving it a list of words, the speech recognition engine will be more biased towards your choice of words.

Since Cognitive Services does not work with weighted grammars, we built another SpeechGrammarList to better fit the scenario.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const {
  SpeechGrammarList,
  SpeechRecognition
} = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  }
});

const recognition = new SpeechRecognition();

recognition.grammars = new SpeechGrammarList();
recognition.grammars.phrases = ['Tuen Mun', 'Yuen Long'];

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();

Custom Speech support

Please refer to "What is Custom Speech?" for tutorial on creating your first Custom Speech model.

To use Custom Speech for speech recognition, you need to pass the endpoint ID when creating the ponyfill.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  },
  speechRecognitionEndpointId: '12345678-1234-5678-abcd-12345678abcd',
});

Custom Voice support

Please refer to "Get started with Custom Voice" for tutorial on creating your first Custom Voice model.

To use Custom Voice for speech synthesis, you need to pass the deployment ID while creating the ponyfill, and pass the voice model name as voice URI.

import createPonyfill from 'web-speech-cognitive-services/lib/SpeechServices';

const ponyfill = await createPonyfill({
  credentials: {
    region: 'westus',
    subscriptionKey: 'YOUR_SUBSCRIPTION_KEY'
  },
  speechSynthesisDeploymentId: '12345678-1234-5678-abcd-12345678abcd',
});

const { speechSynthesis, SpeechSynthesisUtterance } = ponyfill;

const utterance = new SpeechSynthesisUtterance('Hello, World!');

utterance.voice = { voiceURI: 'your-model-name' };

await speechSynthesis.speak(utterance);

Event order

According to the W3C specifications, the result event can be fired at any time after the audiostart event.

In continuous mode, the finalized result event is sent as early as possible. But in non-continuous mode, we observed that browsers send the finalized result event just before audioend, instead of as early as possible.

By default, we follow the event order observed in browsers (a.k.a. strict event order). For speech recognition in non-continuous mode with interim results, the observed event order will be:

  1. start
  2. audiostart
  3. soundstart
  4. speechstart
  5. result (these are interim results, with isFinal property set to false)
  6. speechend
  7. soundend
  8. audioend
  9. result (with isFinal property set to true)
  10. end

You can loosen event order by setting looseEvents to true. For the same scenario, the event order will become:

  1. start
  2. audiostart
  3. soundstart
  4. speechstart
  5. result (these are interim results, with isFinal property set to false)
  6. result (with isFinal property set to true)
  7. speechend
  8. soundend
  9. audioend
  10. end

For error events (abort, "no-speech", or other errors), we always send them just before the final end event.

In some cases, loosening the event order may improve recognition performance. This will not break conformance to the W3C standard.
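To observe the ordering yourself, a small sketch like the one below logs every lifecycle event. It assumes SpeechRecognition from the ponyfill, as in the earlier snippets, and EventTarget-style addEventListener as in the browser API.

const recognition = new SpeechRecognition();

// Log every lifecycle event to compare strict vs. loose ordering.
[
  'start', 'audiostart', 'soundstart', 'speechstart',
  'result', 'speechend', 'soundend', 'audioend', 'error', 'end'
].forEach(name => recognition.addEventListener(name, () => console.log(name)));

recognition.interimResults = true;
recognition.lang = 'en-US';
recognition.start();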

Test matrix

For detailed test matrix, please refer to SPEC-RECOGNITION.md or SPEC-SYNTHESIS.md.

Known issues

  • Speech recognition
    • Interim results do not return confidence; the final result does have confidence
      • We always return 0.5 for interim results
    • Cognitive Services supports grammar lists, but not in JSGF format; more work to be done in this area
      • Although Google Chrome supports grammar lists, it seems the grammar list is not used at all
  • Speech synthesis
    • onboundary, onmark, onpause, and onresume are not supported/fired
    • pause will pause immediately and does not pause on word breaks, due to the lack of boundary events

Roadmap

  • Speech recognition
    • [x] Add tests for lifecycle events
    • [x] Support stop() and abort() function
    • [x] Add dynamic phrases
    • [x] Add reference grammars
    • [x] Add continuous mode
    • [ ] ~Investigate support of Opus (OGG) encoding~
    • [x] Support custom speech
    • [x] Support ITN, masked ITN, and lexical output
  • Speech synthesis
    • [x] Event: add pause/resume support
    • [x] Properties: add paused/pending/speaking support
    • [x] Support custom voice fonts

Contributions

Like us? Star us.

Want to make it better? File us an issue.

Don't like something you see? Submit a pull request.

changelog

Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog and this project adheres to Semantic Versioning.

8.1.1 - 2025-01-21

Changed

8.1.0 - 2025-01-06

Added

  • Added initialSilenceTimeout option to shorten or prolong the timeout of silence detection before speech is detected, by @compulim, in PR #232

8.0.0 - 2024-11-26

Changed

Fixed

  • Fixed #218. Speech recognition should stop properly in some cases, in PR #218
    • Interactive mode, muted microphone
    • Continuous and interactive mode, stop shortly after start
  • Fixed #221. Continuous mode with successful interims should stop without errors, in PR #222
  • Fixed #226. createSpeechServicesPonyfill should return both SpeechRecognition and SpeechSynthesis ponyfill, in PR #227
    • 💥 createSpeechServicesPonyfill will throw if the browser does not support Media Capture and Streams API, instead of warning and continuing

7.1.3 - 2022-11-29

Changed

7.1.2 - 2022-09-27

Changed

7.1.1 - 2021-07-20

Changed

7.1.0 - 2021-02-01

Changed

7.0.1 - 2020-08-06

Changed

7.0.0 - 2020-05-27

Changed

  • 💥 To enable developers to select their version of Cognitive Services Speech SDK and use newer features, we are moving microsoft-cognitiveservices-speech-sdk to peerDependencies.
    • When you install web-speech-cognitive-services, you will also need to install a compatible version of microsoft-cognitiveservices-speech-sdk.

Fixed

6.3.0 - 2020-03-28

Changed

6.2.0 - 2020-03-27

Changed

  • 💥 Temporarily reverting breaking changes by reintroducing Bing Speech and fetchAuthorizationToken, by @compulim in PR #92.

6.1.0 - 2020-03-26

Added

Changed

Removed

6.0.0 - 2019-12-03

Added

  • playground: Add delayed start to playground for testing speech recognition initiated outside of user gestures, in PR #78
  • Speech recognition: New looseEvents option, default is false. When enabled, we will no longer follow observed browser event order. We will send finalized result event as early as possible. This will not break conformance to W3C specifications. By @compulim, in PR #79
  • Speech recognition: Create ponyfill using SpeechRecognizer object from microsoft-cognitiveservices-speech-sdk, by @compulim, in PR #73
  • credentials option is added for obtaining authorization token and region, or subscription key and region, in a single object or function call, by @compulim in PR #80
  • Speech recognition: Polyfill will have abort/stop function set to undefined if the underlying recognizer from Cognitive Services SDK does not support stop/abort, in PR #81

Changed

  • 💥 Unifying options to pass credentials
    • authorizationToken, region, and subscriptionKey are being deprecated in favor of credentials options. credentials can be one of the following types:
      • { authorizationToken: string, region: string? }
      • { region: string?, subscriptionKey: string }
      • Promise<{ authorizationToken: string, region: string? }>
      • Promise<{ region: string?, subscriptionKey: string }>
      • () => { authorizationToken: string, region: string? }
      • () => { region: string?, subscriptionKey: string }
      • () => Promise<{ authorizationToken: string, region: string? }>
      • () => Promise<{ region: string?, subscriptionKey: string }>
    • If credentials is a function, it will be called just before the credentials are needed and may be called very frequently. This behavior matches the deprecated authorizationToken option. You are expected to cache the result of the call.
    • If region is not returned, the default value of "westus" will be used.
  • Bumped dependencies, in PR #73

Removed

  • 🔥 authorizationToken, region, and subscriptionKey are being deprecated in favor of credentials options, by @compulim in PR #80

Fixed

  • Speech recognition: Removed extraneous finalized result event in continuous mode, by @compulim, in PR #79

5.0.1 - 2019-10-25

Changed

  • Fixed dependencies in PR #76
    • bundle package
      • dependencies: Moved eslint to development dependencies
    • component package
      • peerDependencies: No longer requires react
      • dependencies
        • Moved eslint to development dependencies
        • Removed event-target-shim because of incompatibility with ES5
      • devDependencies: Removed react
    • Removed import '@babel/runtime' explicitly

5.0.0 - 2019-10-23

Added

  • Speech recognition: Fix #23 and #24, support audiostart/audioend/soundstart/soundend event, in PR #33
  • Speech recognition: Fix #25 and #26, support true abort and stop function, in PR #33
  • Speech recognition: Fix #29, support continuous mode, in PR #33
    • Quirks: in continuous mode, calling stop in-between recognizing and recognized will not emit final result event
  • Speech recognition: New audioConfig option to override the default AudioConfig.fromDefaultMicrophoneInput, in PR #33
  • Speech synthesis: Fix #32, fetch voices from services, in PR #35
  • Speech synthesis: Fix #34, in PR #36 and PR #44
    • Support user-controlled AudioContext object to be passed as an option named audioContext
    • If no audioContext option is passed, will create a new AudioContext object on first synthesis
  • Speech synthesis: If an empty utterance is being synthesized, will play a local empty audio clip, in PR #36
  • Speech recognition: Fix #30, support dynamic phrases, in PR #37
    • Pass it as an array to SpeechRecognition.grammars.phrases
  • Speech recognition: Fix #31, support reference grammars, in PR #37
    • When creating the ponyfill, pass it as an array to referenceGrammars options
  • Speech recognition: Fix #27, support custom speech, in PR #41
    • Use option speechRecognitionEndpointId
  • Speech synthesis: Fix #28 and #62, support custom voice font, in PR #41 and PR #67
    • Use option speechSynthesisDeploymentId
    • Voice list is only fetched when using a subscription key
  • Speech synthesis: Fix #48, support output format through outputFormat option, in PR #49
  • *: Fix #47, add enableTelemetry option for disabling collecting telemetry data in Speech SDK, in PR #51 and PR #66
  • *: Fix #53, added ESLint, in PR #54
  • Speech synthesis: Fix #39, support SSML utterance, in PR #57
  • Speech recognition: Fix #59, support stop() function by finalizing partial speech, in PR #60
  • Fix #67, add warning when using subscription key instead of authorization token, in PR #69
  • Fix #70, fetch authorization token before every synthesis, in PR #71

Changed

Fixed

  • Fix #45. Speech synthesis should emit "start" and "error" if the synthesized audio clip cannot be fetched over the network, in PR #46

4.0.0 - 2018-12-10

Added

  • New playground for better debuggability
  • Support of Speech Services SDK, with automated unit tests for speech recognition
  • Speech recognition: Support stop on Speech Services
  • Speech synthesis: Support pause and resume (with pause and resume event)
  • Speech synthesis: Support speaking property

Changed

  • Ponyfills are now constructed based on options (authorization token, region, and subscription key)
    • A new set of ponyfills will be created every time an option changes

Fixed

  • Fix #13 Speech recognition: SpeechRecognitionResult should be iterable

3.0.0 - 2018-10-31

Added

  • Speech Synthesis: Will asynchronously fetch the speech token instead of throwing an exception

Changed

  • Use @babel/runtime and @babel/plugin-transform-runtime, instead of babel-polyfill
  • Better error handling on null token
  • Updated voice list from https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/supported-languages
  • Reliability around cancelling a playing utterance
    • Instead of shutting down the AudioContext, we will stop the AudioBufferSourceNode for a graceful stop
  • Simplify speech token authorization
    • recognition.fetchToken = async () => 'your subscription key';
    • recognition.fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');
    • fetchToken will be called every time a token is required; the implementor should cache the token as needed
  • Bump to @babel/core@7.1.2 and jest@^23.6.0
  • Bump to react-scripts@2.0.4
  • Publish /packages/component/ instead of /
  • Bump to event-as-promise@1.0.5

2.1.0 - 2018-07-09

Added

  • Speech priming via custom SpeechGrammarList

2.0.0 - 2018-07-09

Added

  • SpeechSynthesis polyfill with Cognitive Services

Changed

  • Removed CognitiveServices prefix
    • Renamed CognitiveServicesSpeechGrammarList to SpeechGrammarList
    • Renamed CognitiveServicesSpeechRecognition to SpeechRecognition
    • Removed default export, now must use import { SpeechRecognition } from 'web-speech-cognitive-services';
  • Speech Recognition: changed speech token authorization
    • recognition.speechToken = new SubscriptionKey('your subscription key');

1.0.0 - 2018-06-29

Added

  • Initial release
  • SpeechRecognition polyfill with Cognitive Services