Solutions for Applications Requiring Audio and Speech Playback
Audio applications that require playback of voice generally fall into one of two classes:
- Playback of pre-recorded and stored audio
- Playback of streaming audio
In both cases, audio quality depends on both the compression scheme (G.726A, Speex, etc.) used in firmware and the hardware peripherals (Pulse-Width Modulator, Digital-to-Analog Converter, etc.) used to reproduce the sound. Some applications that play back streaming audio may also need to encode recorded speech for duplex transmission. In such cases, the processor throughput (MIPS) required of the microcontroller (MCU) or Digital Signal Controller (DSC) depends largely on the encoding algorithm; those applications are discussed in detail in the Communications section. In this section, we present solutions for playback-only applications in two parts: the software algorithms and the hardware playback options.
The choice of software algorithm used to play back audio and speech signals depends on the compression algorithm used to encode the raw speech data prior to storage. The graph below compares a variety of compression schemes in use today. We provide software libraries for algorithms that require no royalty payments: IMA ADPCM for 8- and 16-bit PIC® MCUs; G.711, Speex and G.726A for PIC24 MCUs and dsPIC® DSCs; and G.711, Speex and ADPCM for PIC32 MCUs.
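To illustrate how these codecs trade computation for storage, the decode step of IMA ADPCM is simple enough to sketch in portable C. This is a generic textbook implementation of the standard algorithm, not Microchip's library code:

```c
#include <stdint.h>

/* Standard IMA ADPCM step-size table (89 entries). */
static const int16_t step_table[89] = {
    7, 8, 9, 10, 11, 12, 13, 14, 16, 17,
    19, 21, 23, 25, 28, 31, 34, 37, 41, 45,
    50, 55, 60, 66, 73, 80, 88, 97, 107, 118,
    130, 143, 157, 173, 190, 209, 230, 253, 279, 307,
    337, 371, 408, 449, 494, 544, 598, 658, 724, 796,
    876, 963, 1060, 1166, 1282, 1411, 1552, 1707, 1878, 2066,
    2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358,
    5894, 6484, 7132, 7845, 8630, 9493, 10442, 11487, 12635, 13899,
    15289, 16818, 18500, 20350, 22385, 24623, 27086, 29794, 32767
};

/* Index adjustment per 4-bit code. */
static const int8_t index_table[16] = {
    -1, -1, -1, -1, 2, 4, 6, 8,
    -1, -1, -1, -1, 2, 4, 6, 8
};

typedef struct { int16_t predicted; int8_t index; } ima_state_t;

/* Decode one 4-bit IMA ADPCM nibble into a 16-bit PCM sample. */
int16_t ima_decode_nibble(ima_state_t *s, uint8_t nibble)
{
    int step = step_table[s->index];
    int diff = step >> 3;                 /* rounding term */
    if (nibble & 1) diff += step >> 2;
    if (nibble & 2) diff += step >> 1;
    if (nibble & 4) diff += step;
    int pred = s->predicted;
    if (nibble & 8) pred -= diff; else pred += diff;  /* bit 3 is the sign */
    if (pred > 32767)  pred = 32767;      /* clamp to 16-bit range */
    if (pred < -32768) pred = -32768;
    s->predicted = (int16_t)pred;
    int idx = s->index + index_table[nibble & 0x0F];
    if (idx < 0)  idx = 0;
    if (idx > 88) idx = 88;
    s->index = (int8_t)idx;
    return s->predicted;
}
```

Because each nibble encodes one sample, the decoder packs two samples per stored byte, which is where ADPCM's 4:1 compression over 16-bit PCM comes from.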
Note: The MIPS usage statistics shown for algorithms within gray ellipses in the graph above represent the requirements of Microchip’s implementation of these algorithms on 16-bit PIC MCUs and dsPIC DSCs.
The choice of algorithm is often a trade-off between audio quality and the system cost of storing large amounts of audio data. The table below shows how much memory each algorithm needs to store one second of encoded speech.
| | G.711 | G.726A | Speex |
|---|---|---|---|
| Memory needed to store 1 second of encoded speech | 8 KB | 2, 3, 4 or 5 KB | 1 KB |
How does this affect your choice of MCU, DSC or Memory component?
This table shows how many seconds of speech can be stored in the on-chip Flash or external memory of some example devices using the same encoding algorithms:
| Example devices and their code/audio storage capability | G.711 | G.726A | Speex |
|---|---|---|---|
| 25XX1024 Serial EEPROM (128 KB of storage) | 16 sec | 25 to 64 sec | 128 sec |
| PIC24FJ256GA or PIC24HJ256GP610 (256 KB of storage) | 32 sec¹ | 52 to 128 sec | n/a |
| dsPIC33EP512MU810 (512 KB of storage) | 64 sec¹ | 102 to 256 sec | 512 sec |
| PIC32MX360F512L (512 KB of storage) | 64 sec¹ | 102 to 256 sec | 512 sec |
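The figures in the table follow directly from the per-second storage costs of each codec; a one-line helper (`speech_seconds` is a hypothetical name, not part of any Microchip library) makes the arithmetic explicit:

```c
#include <stdint.h>

/* Seconds of speech that fit in a given memory, for a codec that
 * consumes bytes_per_second of storage per second of audio.
 * Hypothetical helper for illustration only. */
uint32_t speech_seconds(uint32_t memory_bytes, uint32_t bytes_per_second)
{
    return memory_bytes / bytes_per_second;
}
```

For example, a 128 KB serial EEPROM holding Speex data at 1 KB/sec stores 128 seconds, matching the first row of the table; in practice, some of a device's Flash is also occupied by application code, so usable audio capacity is somewhat lower.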
Several hardware implementation options are available, based on trade-offs such as audio quality versus system cost and simplicity versus system integration. Popular methods supported by our PIC MCUs and dsPIC DSCs are listed below:
- Use on-chip Analog-to-Digital Converter (ADC)
- Use off-chip codecs or ADCs
- Use off-chip codecs or Digital-to-Analog Converters (DACs)
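When a PWM peripheral is used as the output stage, each PCM sample must be mapped onto a duty-cycle value before being written to the duty register in a timer interrupt at the sample rate. The sketch below shows only that sample-to-duty mapping in portable C; the register names and period value would come from the specific device's data sheet:

```c
#include <stdint.h>

/* Map a signed 16-bit PCM sample onto a PWM duty value in [0, period].
 * Generic illustration: an actual PIC/dsPIC design would write this
 * value to the device's PWM duty register from a sample-rate ISR,
 * then low-pass filter the PWM output to recover the analog audio. */
uint16_t pcm_to_duty(int16_t sample, uint16_t period)
{
    /* Shift the signed sample into unsigned [0, 65535]... */
    uint32_t u = (uint32_t)(sample + 32768);
    /* ...then scale it to the PWM period. */
    return (uint16_t)((u * period) >> 16);
}
```

A full-scale negative sample yields a 0% duty cycle and a mid-scale sample yields roughly 50%, so the filtered PWM output tracks the audio waveform.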