AUDIO DSP

With our long experience in audio DSP and audio measurement systems, and our state-of-the-art acoustic labs, we can not only develop and tune your own DSP algorithms but also evaluate how they actually perform under different environmental conditions.

Sigma Connectivity has the knowledge and lab equipment to host a complete Audio DSP offering for our customers.

Audio digital signal processing (DSP) enhances an audio signal to achieve a goal. That goal can be anything from reducing noise in a captured speech signal to improve its intelligibility, to tuning a playback system so that music sounds better through a loudspeaker than it otherwise would. Audio DSP is a highly flexible tool, but employing it in a product requires in-depth knowledge and the right equipment to achieve the best quality and cost-effectiveness.

HOW WE DEVELOP AUDIO DSP

Audio DSP Architecture:

A combination of hardware, software and DSP blocks that can fulfill a product’s requirements. The complete system design is very important: DSP cannot fully compensate for a poor hardware design. One also needs to make sure there is an adequate amount of memory and processing power for the task at hand, which is especially important if the DSP algorithms must run in parallel with the general software. Dedicated DSP hardware can be a good solution.
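
As a rough illustration of this kind of budgeting, the short calculation below checks whether an assumed DSP chain fits a processing block. The clock rate, block size and cycles-per-sample figures are invented for the example, not measurements:

```python
# Rough DSP budget check (all figures are illustrative assumptions).
SAMPLE_RATE_HZ = 48_000          # audio sample rate
BLOCK_SIZE = 64                  # samples per processing block
CPU_CLOCK_HZ = 200_000_000       # assumed DSP core clock
CYCLES_PER_SAMPLE = 900          # assumed cost of the whole DSP chain

block_period_s = BLOCK_SIZE / SAMPLE_RATE_HZ        # time available per block
cycles_available = CPU_CLOCK_HZ * block_period_s    # cycle budget per block
cycles_needed = CYCLES_PER_SAMPLE * BLOCK_SIZE      # cycles the chain consumes

load = cycles_needed / cycles_available
print(f"Block period: {block_period_s * 1e3:.2f} ms, CPU load: {load:.0%}")
# Anything approaching 100% leaves no headroom for the general SW running in parallel.
```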

Develop from scratch:

Many types of audio DSP blocks are available commercially or as open source. If one of them fits the project's needs, that can be a good choice because it is tested and proven. If not, developing something new may be the better path. Sigma Connectivity can help either way.

Algorithm Development:

Research methods to solve domain-specific problems and then transform them into prototype code, typically in MATLAB, Python or C/C++. Iterations are needed to test solutions and refine algorithms, often in combination with prototype hardware to make the setup realistic enough. There are often hardware constraints to consider as well, such as a limited amount of memory or processing power, and optimizations might be needed to make the algorithm run more efficiently.
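
As a minimal sketch of what an early prototype iteration can look like, the Python snippet below implements a basic spectral-subtraction noise reduction. The frame size, noise-estimation window and spectral floor are illustrative assumptions, not tuned values:

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_seconds=0.5, floor=0.05):
    """Small spectral-subtraction prototype: estimate the noise magnitude
    from the first `noise_seconds` of the signal (assumed noise-only) and
    subtract it from every frame, keeping a spectral floor."""
    f, t, X = stft(x, fs=fs, nperseg=512)
    hop = 512 // 2                                        # default 50% overlap
    noise_frames = max(int(noise_seconds * fs / hop), 1)
    noise_mag = np.mean(np.abs(X[:, :noise_frames]), axis=1, keepdims=True)
    mag = np.maximum(np.abs(X) - noise_mag, floor * np.abs(X))
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs=fs, nperseg=512)
    return y

# Example: clean up a noisy tone that starts after 0.5 s of noise-only signal.
fs = 16_000
tt = np.arange(fs) / fs
noisy = np.sin(2 * np.pi * 1000 * tt) * (tt > 0.5) + 0.1 * np.random.randn(fs)
cleaned = spectral_subtraction(noisy, fs)
```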

Porting to a hardware platform:

Sometimes, high-level DSP algorithms, or even graphical representations of an algorithm, can be ported with tools like MATLAB/Simulink or DSP Concepts Audio Weaver directly to an embedded audio device; these tools generate the C code needed to run the algorithm on a specific platform. At other times, high-level DSP code must be manually rewritten and optimized for the chosen embedded platform, for example various ARM cores, Qualcomm Hexagon aDSP, Analog Devices SHARC, Tensilica HiFi, XMOS, etc.
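
As a small, hedged example of the kind of manual work a port can involve, the sketch below quantizes floating-point biquad coefficients to the Q15 fixed-point format many embedded DSP cores expect. The example coefficients and scaling are arbitrary:

```python
import numpy as np

def to_q15(coeffs):
    """Quantize floating-point filter coefficients to Q15 (int16).
    Coefficients must lie in [-1, 1); biquads are often pre-scaled to fit."""
    q = np.round(np.asarray(coeffs, dtype=float) * 2**15)
    return np.clip(q, -2**15, 2**15 - 1).astype(np.int16)

# Example: a biquad designed in floating point, halved so every coefficient
# fits the Q15 range (the factor of two is restored on the target).
b = np.array([0.2929, 0.5858, 0.2929])
a = np.array([1.0, 0.0, 0.1716])
b_q15 = to_q15(b / 2)
a_q15 = to_q15(a / 2)
print(b_q15, a_q15)
```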

OS Support:

Based on the product's architecture and use case, porting to an operating system (Android, Linux, and others) can be performed by mapping all audio devices to a pipeline according to the architecture and by validating against the relevant compatibility test suites to ensure the product meets its requirements.

Audio Tuning:

Commonly, an audio product requires DSP parameter configuration and optimization to reach the design goals. This is often called “audio tuning” and is an iterative process that alternates between objective and subjective measurements and DSP parameter updates.
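
A single tuning iteration often boils down to adjusting a parametric filter against a measurement. The sketch below uses the well-known RBJ Audio EQ Cookbook peaking filter; the centre frequency, gain and Q are assumed values chosen only for illustration:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0, gain_db, q, fs):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# A tuning iteration might cut a measured 3 dB resonance around 2.5 kHz:
fs = 48_000
b, a = peaking_eq(f0=2500, gain_db=-3.0, q=2.0, fs=fs)
x = np.random.randn(fs)          # stand-in for the playback signal
y = lfilter(b, a, x)             # apply the correction filter
```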

Audio DSP Benchmarking:

With our long experience in audio DSP and audio measurement systems, and our state-of-the-art acoustic labs, we can help you evaluate the actual performance of many types of DSP algorithms using standards from ITU, ETSI (3GPP), TIA, etc.
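
Formal benchmarking follows the referenced standards end to end, but the core idea of comparing a processed signal against a reference can be illustrated with a toy metric. The delay-aligned residual SNR below is a simplified stand-in, not a standardized measure:

```python
import numpy as np

def aligned_snr_db(reference, processed):
    """Align `processed` to `reference` by cross-correlation and report the
    SNR of the residual. A toy stand-in for standardized metrics such as the
    ITU-T speech-quality measures used in real benchmarking."""
    corr = np.correlate(processed, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    if lag > 0:
        processed = processed[lag:]
    elif lag < 0:
        reference = reference[-lag:]
    n = min(len(reference), len(processed))
    residual = processed[:n] - reference[:n]
    return 10 * np.log10(np.sum(reference[:n] ** 2) / np.sum(residual ** 2))

# Example: a delayed, slightly noisy copy of a 440 Hz reference tone.
fs = 16_000
ref = np.sin(2 * np.pi * 440 * np.arange(fs // 4) / fs)
deg = np.concatenate([np.zeros(120), ref]) + 0.01 * np.random.randn(fs // 4 + 120)
print(f"Residual SNR: {aligned_snr_db(ref, deg):.1f} dB")
```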

CAPTURE

  • Voice communications systems usually have firm size constraints and need to be used in noisy environments. Various DSP techniques are needed to improve the loudness and frequency response of the loudspeaker, to increase the signal-to-noise ratio from the microphone, and to limit the acoustic and structural vibration feedback between the loudspeaker and microphone. Common audio DSP blocks are microphone beamforming, noise reduction, echo cancellation, dynamic range control, automatic gain control and equalization (a minimal beamforming sketch follows this list).

  • Collect application-specific audio data and train a machine learning model to identify or categorize sounds. Applications can be anything where a system needs to respond to an audio event: engine or machine diagnostics, where a change in the sound or vibration signature could be an early sign of wear; a baby monitor that listens for baby cries; or alarm systems that listen for glass breaking or other suspicious sounds (a simple energy-based detector is sketched after this list).

  • Typically consists of wake-word models, command transcription and intent mapping. Applications include voice control of smart speakers, wireless headsets, phones, car media systems and anything else where there is a need for voice control. Many systems use cloud services for command transcription and intent mapping, but it is possible to do this locally on a device.
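
As referenced in the first bullet above, the sketch below shows delay-and-sum beamforming for a uniform linear microphone array. It uses integer-sample delays and an assumed geometry (4 microphones, 2 cm spacing) purely for illustration; a real implementation would use fractional delays and a calibrated array model:

```python
import numpy as np

def delay_and_sum(mics, fs, mic_spacing_m, angle_deg, c=343.0):
    """Steer a linear microphone array towards `angle_deg` (from broadside)
    by delaying each channel and averaging. `mics` has shape
    (n_mics, n_samples). Integer-sample delays only, for simplicity."""
    n_mics, n_samples = mics.shape
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # Arrival-time difference for mic m relative to mic 0 (sign depends
        # on array orientation; this assumes a simple broadside convention).
        delay_s = m * mic_spacing_m * np.sin(np.deg2rad(angle_deg)) / c
        delay_samples = int(round(delay_s * fs))
        out += np.roll(mics[m], -delay_samples)   # advance to time-align
    return out / n_mics

# Example: 4-mic array, 2 cm spacing, steered 30 degrees off broadside.
fs = 16_000
mics = np.random.randn(4, fs)        # stand-in for captured channels
enhanced = delay_and_sum(mics, fs, mic_spacing_m=0.02, angle_deg=30)
```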
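
And as a deliberately simple stand-in for the learned sound classifiers described in the second bullet, the snippet below flags audio frames whose RMS level crosses a threshold; the frame length and threshold are arbitrary assumptions:

```python
import numpy as np

def detect_events(x, fs, frame_ms=20, threshold_db=-30.0):
    """Return start times (seconds) of frames whose RMS level exceeds the
    threshold. A trained model would replace this rule in a real product."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame
    frames = x[:n_frames * frame].reshape(n_frames, frame)
    rms_db = 20 * np.log10(np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12)
    return np.flatnonzero(rms_db > threshold_db) * frame / fs

# Example: a quiet recording with a short loud burst in the middle.
fs = 16_000
x = 0.001 * np.random.randn(fs)
x[8000:8400] += 0.5 * np.random.randn(400)
print(detect_events(x, fs))
```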

PLAYBACK

  • To maximize the loudness of small speakers, one needs to drive the transducer to its limits while simultaneously keeping it from breaking. Real-time monitoring of voltage and resistance at the coil makes it possible to limit both the peak excursion and excessive coil temperature (a toy protection loop is sketched after this list).

  • Techniques to play back sounds and position them naturally in three dimensions, with applications in virtual reality, augmented reality, gaming, teleconferencing, and music and video playback. The approach builds on modeling natural hearing, where sounds at different angles of incidence give rise to interaural time, level and frequency differences that the brain uses to extract the direction of a sound. The goal is to mimic natural 3D sound fields through, for example, a pair of headphones (a simplified time/level-difference sketch follows this list).

  • Builds on the principle of measuring noise and outputting inverted anti-noise that cancels the noise in a well-defined and limited volume of air. Typically used in headphones and headsets, but can also be targeted at, for example, car and airplane cabins (an adaptive-filter sketch follows this list).

  • A group of techniques to optimize the response of a loudspeaker or headphones at the ear of the listener. Typically, it is linear distortion components like uneven frequency and phase response that are linearized. For loudspeaker playback, one can include the room response in the linearization so that the speaker integrates better into the environment in which it is placed. For headphones, it is possible in some circumstances to get a more natural presentation by introducing a cross-feed of signals between the left and right channels, or even some artificial room reverberation. Another emerging area is personalized equalization based on the listener's hearing profile.
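
The sketches below expand on the playback bullets above. First, a toy speaker-protection loop: it is not the algorithm described in the case study further down, only an illustration of backing off the gain when an estimated coil temperature or signal peak approaches a limit. All model constants are invented for the example:

```python
import numpy as np

def protect(block, coil_temp_c, gain, temp_limit_c=80.0, peak_limit=0.8,
            heating_per_w=0.5, cooling=0.02):
    """One block of a toy speaker-protection loop (illustrative model only):
    reduce the gain when the estimated coil temperature or the output peak
    gets too close to its limit, otherwise recover slowly towards unity."""
    out = gain * block
    power_w = np.mean(out ** 2)      # crude power estimate into a nominal load
    # First-order thermal model: heating from power, cooling towards ambient.
    coil_temp_c += heating_per_w * power_w - cooling * (coil_temp_c - 25.0)
    if coil_temp_c > temp_limit_c or np.max(np.abs(out)) > peak_limit:
        gain *= 0.9                  # back off
    else:
        gain = min(gain * 1.01, 1.0) # recover
    return out, coil_temp_c, gain

# Example: stream a loud low-frequency signal through the limiter block by block.
fs, gain, temp = 48_000, 1.0, 25.0
signal = 0.9 * np.sin(2 * np.pi * 60 * np.arange(fs) / fs)
for start in range(0, fs, 256):
    out, temp, gain = protect(signal[start:start + 256], temp, gain)
```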
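
Second, a simplified spatial-audio example that places a mono source using only interaural time and level differences. Real systems use measured HRTFs; the Woodworth-style ITD estimate and the 6 dB ILD cap here are assumptions for illustration:

```python
import numpy as np

def place_source(mono, fs, azimuth_deg, head_radius_m=0.09, c=343.0):
    """Pan a mono signal using interaural time and level differences only
    (no HRTF filtering). Positive azimuth = source to the right."""
    az = np.deg2rad(azimuth_deg)
    itd_s = head_radius_m * (abs(az) + np.sin(abs(az))) / c   # Woodworth-style ITD
    delay = int(round(itd_s * fs))
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)]  # delayed far ear
    far = far * 10 ** (-6 * abs(np.sin(az)) / 20)              # crude ILD, up to ~6 dB
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)

# Example: a 440 Hz tone placed 45 degrees to the right.
fs = 48_000
tone = 0.3 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
stereo = place_source(tone, fs, azimuth_deg=45)
```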
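
Third, a classic LMS adaptive noise canceller as a simplified relative of the filtered-x LMS used in real active noise cancellation systems; the filter length, step size and synthetic noise path are illustrative only:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.01):
    """LMS adaptive noise canceller: estimate the noise in `primary` from a
    correlated `reference` and subtract it. Real ANC uses the related
    filtered-x LMS, which also models the acoustic path to the ear."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # current + past reference samples
        noise_est = np.dot(w, x)
        e = primary[n] - noise_est                 # error = estimate of the wanted signal
        w += mu * e * x                            # LMS weight update
        out[n] = e
    return out

# Example: a tone buried in noise that also reaches a reference microphone.
fs = 16_000
noise = np.random.randn(fs)
primary = np.sin(2 * np.pi * 300 * np.arange(fs) / fs) + np.convolve(noise, [0.6, 0.3], 'same')
cleaned = lms_cancel(primary, noise)
```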

We design complex acoustic and audio systems in all kinds of products like mobile phones, smart speakers, loudspeakers, and headsets. We offer a wide range of design, testing, and measuring services for your products and solutions.


CASE STUDY: Speech algorithm development

A major technology company making wearables is developing a new way to pick up speech by using contact microphones.

Sigma Connectivity provided:

To showcase the technology in the right context, a set of speech enhancement algorithms was developed. Sigma built prototypes that were recorded in various environments with a range of users, followed by expert analysis to identify what kind of processing blocks were needed. Sigma then developed reference algorithms in MATLAB, which were tested, tuned and ported to C code deployed on a target DSP processor. The quality was formally evaluated using a structured, standardized subjective listening test and with tools that estimate the expected quality of experience. Finally, the client could successfully demo the technology to its key stakeholders, with the code running live on the DSP.

CASE STUDY: Speaker Protection

A major technology company needed its own, in-house speaker protection algorithm, as the off-the-shelf solutions on the market compromised performance too much.

Sigma Connectivity provided:

Following a fast development cycle, a configurable and adjustable algorithm was developed that actively protects the speaker from overheating and mechanical damage while maintaining a high-quality, loud sound. The algorithm monitors the speaker coil's temperature and limits the excursion of the diaphragm by using signals measured on the speaker.