Tecnura

Print version ISSN 0123-921X

Tecnura vol. 18, special issue, Bogotá, Dec. 2014

DOI: https://doi.org/10.14483/udistrital.jour.tecnura.2014.DSE1.a04

Embedded wavelet analysis of non-audible signals

Análisis embebido por ondículas de señales no audibles

Andrés Camilo Ussa Caycedo*, Olga Lucía Ramos Sandoval**, Darío Amaya Hurtado***, Jorge Enrique Saby Beltrán****

* Mechatronics Engineer, Research Assistant, Universidad Militar Nueva Granada, Engineering Faculty, Mechatronics Engineering Program, Bogotá D.C., Colombia. E-mail: u1801205@unimilitar.edu.co
** Electronics Engineer, Specialization in Electronics Instrumentation, Master in Teleinformatics, Universidad Militar Nueva Granada, Engineering Faculty, Mechatronics Engineering Program, Bogotá D.C., Colombia. E-mail: olga.ramos@unimilitar.edu.co
*** Electronics Engineer, Specialization in Industrial Process Automation, Master in Teleinformatics, PhD in Mechanical Engineering, Universidad Militar Nueva Granada, Engineering Faculty, Mechatronics Engineering Program, Bogotá D.C., Colombia. E-mail: dario.amaya@unimilitar.edu.co
**** Bachelor's Degree in Linguistics and Literature, Specialization in Semiotics, Master in Spanish Linguistics, PhD in Linguistics and Literature, PhD in Education Science, Universidad Distrital Francisco José de Caldas, Bogotá D.C., Colombia. E-mail: jesabyb@udistrital.edu.co

Received: June 10th, 2014. Accepted: November 4th, 2014

Citation: Ussa Caycedo, A. C., Ramos Sandoval, O. L., Amaya Hurtado, D. & Saby Beltrán, J. E. (2014). Embedded wavelet analysis of non-audible signals. Revista Tecnura, 18 (special doctoral issue), 51-60. doi: 10.14483/udistrital.jour.tecnura.2014.DSE1.a04


Abstract

The analysis of non-audible signals has gained significant importance due to its many fields of application, among them speech synthesis for people with speech disabilities. This analysis can be used to acquire information from the vocal apparatus, without the need of speaking, in order to produce a phonetic expression. This paper proposes the Wavelet-transform analysis of Spanish words recorded through a non-audible murmur microphone, in order to achieve an embedded silent speech recognition system for the Spanish language. A non-audible murmur microphone is used as the sensor of non-vocal speech. Coding of the input data is done through a Wavelet transform using a fourth-order Daubechies function. The acquisition, processing and transmission system is implemented on an STM32F4-Discovery evaluation board. The vocabulary consists of command words aimed at controlling mobile robots or human-machine interfaces. The Wavelet transformation of four Spanish words, each with five independent samples, was accomplished. An analysis of the resulting data was performed, and features such as average, peaks and frequency were distinguished. The processing of the signals is performed successfully, and further work on speech activity detection and feature classifiers is proposed.

Keywords: Communication Aids for Disabled, Esophageal, Phonetics, Speech, Wavelet Analysis.




Introduction

Research and development of automatic silent speech recognition systems has produced several approaches over the years. Along with electromyography (EMG), the non-audible murmur (NAM) microphone is one of the most promising (Tomoki et al., 2009), due to its robustness against external noise, ease of use, low cost and non-invasive nature. It can be used to detect low-amplitude sounds produced by air flow through the larynx, making it a useful tool for speech synthesis applications that require little or no audible sound. In order to build a silent speech recognition system, a way to code and analyze the acquired signals is required. A multi-resolution analysis through the Wavelet transform is proposed because of its feature characterization properties, easy implementation and wide use in this kind of application. Since this study's goal is to embed the system in an electronic device, acquisition, processing and transmission were performed on an ST evaluation board. This paper describes previous work on the subject, the technology and techniques used during development, and the results obtained upon implementation.

A NAM microphone is a high-sensitivity microphone which adheres to the human skin and acts as a stethoscope. It is encapsulated in a soft silicone material, which improves the impedance match between the microphone's diaphragm and the skin and avoids the noise caused by clothes or other objects (Denby et al., 2010). The sensing unit is an electret condenser without a metallic cover, allowing direct contact between the diaphragm and the silicone. Studies with this kind of microphone showed that its proper placement is on the side of the neck, where it can capture vibrations from the resonance caused by the vocal tract as air flows (Babani, Tomoki, Hiroshi & Kiyohiro, 2011).

The discrete Wavelet transform is a fast, linear operation that operates on a data vector whose length is an integer power of two, transforming it into a numerically different vector of the same length (Abair & Alimbayev, 2014). A diagram of this process can be seen in Figure 1. The Wavelet transform is invertible and orthogonal. In the Wavelet domain, the basis functions are called "mother functions" and "wavelets". Individual Wavelet functions are localized in space and, simultaneously, localized in frequency or characteristic scale. Wavelets are defined by the Wavelet function ψ(t) (i.e. the mother Wavelet) and the scaling function Φ(t) (also called the father Wavelet) in the time domain. Each of these functions is specified by a particular set of numbers, called Wavelet filter coefficients. The scaling function acts like a low-pass filter which extracts the "smooth" information, and the Wavelet function acts as a high-pass filter which extracts the "detail" information. The discrete Wavelet transform is defined in equation (1) (Xinyi, Gengyin, Ming & K.I., 2011).

$$W_{\psi}(j,k) = 2^{-j/2} \sum_{t} s(t)\, \psi^{*}\!\left(2^{-j}t - k\right) \qquad (1)$$

where ψ(t) is the Wavelet function, s(t) is a real signal, and ψ* denotes the complex conjugate of ψ.

A popular mother function, the Daubechies Wavelet function, is defined in equation (2).

$$\psi(t) = \sum_{k} \beta_{k}\, \Phi(2t - k) \qquad (2)$$

where Φ(t) is the scaling function, which is defined in equation (3), and β is the Wavelet sequence.

$$\Phi(t) = \sum_{k} \alpha_{k}\, \Phi(2t - k) \qquad (3)$$

where α is the scaling sequence. The discrete Wavelet transform then consists of applying the Wavelet and scaling functions, first to the full data vector of length N, then to the "smooth" vector of length N/2, then to the "smooth-smooth" vector of length N/4, and so on, until only a trivial number of "smooth-...-smooth" components (usually two) remain (Abair & Alimbayev, 2014). The output of the transformation consists of these remaining components and all the "detail" components that were accumulated along the way.
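For reference, one level of this cascade can be written in filter-bank form, using the scaling sequence α and Wavelet sequence β defined above (a standard formulation, assumed here since the source presents the scheme only in words):

$$a_{j+1}[k] = \sum_{n} \alpha_{n-2k}\, a_{j}[n], \qquad d_{j+1}[k] = \sum_{n} \beta_{n-2k}\, a_{j}[n]$$

where $a_0$ is the input vector of length $N$, each "smooth" vector $a_{j+1}$ has half the length of $a_j$, and the "detail" vectors $d_j$ are the components kept at each level.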

A real-time operating system (RTOS) was implemented on the evaluation board. An RTOS is responsible for managing the hardware resources of the tasks running in the system with very precise timing and a high degree of reliability (National Instruments, 2013). The RTOS kernel supplies four main types of basic services to application software: inter-process communication (IPC) and synchronization, time management, dynamic memory allocation and task management (Laplante & Ovaska, 2011). It must have a known maximum time for each of the critical operations it performs, including OS calls and interrupt handling. Each task is assigned a specific priority, which defines when it runs. This makes development timing-controlled, extensible and modular, and eases the handling of interrupts and peripherals (FreeRTOS, 2013).

The remainder of this paper is organized as follows. Previous work regarding Wavelet applications is presented in section two. The methodology followed is described in section three. The results are presented in section four and, lastly, the conclusions in section five.

Previous work

Research regarding Wavelet analysis of signals is abundant. This technique has had a constant presence in diverse studies in recent years. Applications can be found in medicine, image processing, signal filtering, and speech synthesis and classification, among others. It is a very useful technique that keeps growing and finding new fields of application. Some related research follows.

Some applications in the medical field are related to heart diseases. de Vos & Blanckenberg (2007) aimed to discriminate between pathological and non-pathological heart sounds in children through an automated artificial neural network, as well as a direct-ratio and a Wavelet analysis technique, applied to electronic auscultation signals. Their objective was to help non-specialist physicians evaluate heart murmurs with higher confidence. They used Wavelet analysis to reduce noise in the input data, which consisted of heart sound and electrocardiogram signals. This was done through a Daubechies Wavelet of order five (db5) with a decomposition level of eight. Wavelets were also used to analyze certain characteristics of the input data, where direct systolic energy values were taken as an indicator. The fourth-order Daubechies Wavelet (db4) was used for this purpose.

Three years later, Samjin, Youngkyun & Hun-Kuk (2011) used a Wavelet packet from which four features were analyzed: the maximum peak frequency (MPF), the position index of the Wavelet packet coefficient corresponding to the MPF, and the S1nS2-to-murmur ratios of energy and entropy. S1nS2 are heart sounds that are always audible and appear at very high amplitudes in normal subjects. Their objective was to identify insufficiency murmurs (IM), such as aortic insufficiency, using the resulting Wavelet decomposition. The input data for the experiment consisted of a data set of normal and IM sounds, plus sounds recorded with a wireless electronic stethoscope from healthy subjects and heart disease patients. Their results suggested that their novel IM identification method was applicable to clinical environments.

Another study regarding cardiac diseases was presented by Verma, Cabrera, Mayorga & Nazeran (2013), where a robust algorithm was developed to derive heart rate variability from electrocardiogram or photoplethysmographic signals, in order to ease its digital spectral analysis, which provides quantitative markers of the autonomic nervous system. An undecimated discrete Wavelet transform with the Daubechies-6 family of filters was used to selectively remove some of the high-frequency subbands from the signals. The Wavelet transform was also used by Suresh & Balasubramanyam (2013) as a feature extractor for raw electroencephalography (EEG) data, where transient features were accurately captured and localized in both the time and frequency contexts. The mother Wavelet chosen was the fourth-order Daubechies. The objective was to detect peaks, which are related to head injuries and epilepsy. This was successfully achieved using neural networks with a Wavelet transform as a preprocessor.

Wavelet properties have also allowed the transform to be used in the implementation of human interfaces. For example, Satiyan, Hariharan & Nagarajan (2010) investigated the performance of the Daubechies Wavelet family in recognizing facial expressions. The input data were 2D coordinates recorded from the movements of luminance stickers on the face while performing facial expressions. The Wavelet was applied with different orders, from db1 to db20. The standard deviation of the resulting approximation coefficients was used as the input of a neural network to classify over eight facial expressions. The average maximum recognition rate of facial expressions was 97% for the Daubechies Wavelet of order one.

A research work regarding a silent speech interface also took advantage of Wavelet features (Torres-García, Reyes-García & Villaseñor-Pineda, 2012). Their intention was to interpret electroencephalography (EEG) signals associated with imagining the pronunciation of words that belong to a reduced vocabulary, without moving the articulatory muscles and without uttering any audible sound. The recorded vocabulary reflected movements to control the cursor on a computer, and the discrete Wavelet transform was used to extract features from the delimited windows. The subsets were used to train classifiers. They obtained evidence to affirm that EEG signals carry useful information that allows the classification of unspoken words.

These research works show that the Wavelet technique is very useful for the denoising, processing and classification of biological signals, and that the Daubechies Wavelet is widely used due to its easy implementation and suitable properties for signal analysis.

Methodology

The method to apply a Wavelet transform to signals acquired by a NAM microphone can be divided into three stages. The first stage consists of the analog-to-digital conversion (ADC) of the signals coming from the sensor. The second stage consists of applying the Wavelet transform and transmitting its output to a PC through USB. The third stage consists of receiving the USB data and storing it for further analysis. The STM32F4-Discovery evaluation board (from this point onward, the "Discovery board") was used as the input data collector and data transmitter; it was also in charge of applying the Wavelet transform. A C# graphical user interface was designed for data reception and storage. The sensor used to acquire the murmur signals was a non-audible murmur microphone.

One male Spanish-speaking subject participated in the experiment. He was informed about the microphone's operation, the manner of utterance and the vocabulary used in the test. The NAM microphone was placed below the ear, against the side of the neck. The recorded vocabulary consisted of four Spanish words: "adelante", "atrás", "derecha" and "izquierda", which translate to forward, backward, right and left, respectively. These words were chosen with the intention of using this silent speech interface for mobile robot control. Five recordings of each word were acquired, each lasting approximately five seconds. In order to select the data range over which the transformation would be applied, the recorded data was stored directly on a computer. The period of the ADC was 1.5 milliseconds, which means the conversion was performed at a rate of approximately 666 Hz. This resulted in approximately 3300 samples per recording, from which 1024 samples with valuable information were manually identified and extracted for subsequent processing. A Wavelet transform was applied to each data sample; this was performed by the Discovery board, which transmitted the output back to the PC through USB.

The Discovery board features a 32-bit ARM Cortex-M4F core microcontroller, with 1 MB of Flash and 192 KB of RAM, along with several peripherals, among them the ADC and USB modules. Keil's MDK-ARM software development environment was used to program the Discovery board. A real-time operating system was implemented on the board for task management; the chosen RTOS was FreeRTOS due to its professional development, strict quality control, robustness, support and free availability (FreeRTOS, 2013). Three tasks were implemented: the USB task, the ADC task and the Wavelet transform task, along with the peripheral and interrupt configuration. The USB task was set with a higher priority level than the other tasks, since it was imperative to send and receive data as soon as it became available. Two queues were created to handle the data shared between tasks: one to hold the input data acquired from the analog-to-digital conversion or from the received USB buffer, and another to pass the output of the Wavelet transform to the transmission function. ADC and USB interrupts were also employed.
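A minimal sketch of this task and queue configuration in FreeRTOS is shown below. The task names, stack sizes and queue depths are assumptions (the paper does not publish code), while the API calls are standard FreeRTOS:

```c
/* Illustrative FreeRTOS setup for the three-task architecture described
 * above. Names and sizes are assumptions; the API calls are standard. */
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

QueueHandle_t xInputQueue;   /* ADC samples or received USB buffer -> Wavelet task */
QueueHandle_t xOutputQueue;  /* Wavelet output (as string) -> USB task */

extern void vAdcTask(void *pv);
extern void vWaveletTask(void *pv);
extern void vUsbTask(void *pv);

void vCreateSystem(void)
{
    /* One slot per 1024-sample frame; the queues carry pointers to frames. */
    xInputQueue  = xQueueCreate(1, sizeof(float *));
    xOutputQueue = xQueueCreate(1, sizeof(char *));

    /* USB gets the highest priority so transmission and reception preempt
     * acquisition and processing, as the text describes. */
    xTaskCreate(vAdcTask,     "ADC", 256, NULL, 1, NULL);
    xTaskCreate(vWaveletTask, "DWT", 512, NULL, 1, NULL);
    xTaskCreate(vUsbTask,     "USB", 256, NULL, 2, NULL);

    vTaskStartScheduler();  /* never returns while the scheduler runs */
}
```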

The ADC task ran continuously; the converter had a 12-bit resolution, sampled the input voltage for 480 cycles and had an end-of-conversion interrupt which was set in order to store each measured value. The USB module was configured with the communication device class (CDC) at full speed, as well as the related reception interrupt. The system's configuration can be seen in Figure 2: the microphone was connected in series with a 10 kΩ resistor, with 3 V as the voltage source. The Discovery board was powered through USB.
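For illustration, the described ADC setup (12-bit resolution, 480-cycle sample time, end-of-conversion interrupt) might be configured as in the following sketch. It assumes the STM32F4 Standard Peripheral Library, which the paper does not name, and an arbitrary input channel; GPIO and clock details are omitted:

```c
/* Possible ADC configuration matching the description. Assumes the STM32F4
 * Standard Peripheral Library; the channel choice is an assumption. */
#include "stm32f4xx.h"

void ADC1_Config(void)
{
    ADC_InitTypeDef ADC_InitStructure;

    RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);

    ADC_StructInit(&ADC_InitStructure);
    ADC_InitStructure.ADC_Resolution = ADC_Resolution_12b;   /* 12-bit result */
    ADC_InitStructure.ADC_ContinuousConvMode = DISABLE;      /* paced externally */
    ADC_InitStructure.ADC_ExternalTrigConvEdge = ADC_ExternalTrigConvEdge_None;
    ADC_InitStructure.ADC_DataAlign = ADC_DataAlign_Right;
    ADC_InitStructure.ADC_NbrOfConversion = 1;
    ADC_Init(ADC1, &ADC_InitStructure);

    /* 480 cycles matches the stated sample time; channel 1 is assumed. */
    ADC_RegularChannelConfig(ADC1, ADC_Channel_1, 1, ADC_SampleTime_480Cycles);

    /* End-of-conversion interrupt, used to store each measured value. */
    ADC_ITConfig(ADC1, ADC_IT_EOC, ENABLE);
    NVIC_EnableIRQ(ADC_IRQn);

    ADC_Cmd(ADC1, ENABLE);
    /* A 1.5 ms periodic trigger (e.g. a timer) would call
     * ADC_SoftwareStartConv(ADC1) to match the stated 666 Hz rate. */
}

void ADC_IRQHandler(void)
{
    if (ADC_GetITStatus(ADC1, ADC_IT_EOC) != RESET) {
        uint16_t value = ADC_GetConversionValue(ADC1);  /* store the sample */
        ADC_ClearITPendingBit(ADC1, ADC_IT_EOC);
        (void)value;  /* in the real firmware this feeds the input frame */
    }
}
```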

A Wavelet transformation was chosen as the signal coding method because of its property of offering simultaneous time and frequency analysis, its ability to decompose a signal into fine details (Abair & Alimbayev, 2014), and due to previous successful approaches in signal and speech analysis and characterization. The mother Wavelet used in this process was the Daubechies orthogonal Wavelet of four coefficients (daub4); this specific function allows a fast implementation and proper feature extraction (Madishetty, Madanayake & Cintra, 2013). The action performed on the input vector consists of two related convolutions: one with c0, ..., c3 (the scaling coefficients), which acts as a smoothing filter, and the other with c3, −c2, c1, −c0 (the Wavelet coefficients), which gives the detail information. Each result is then decimated by half and the remaining halves are interleaved. The procedure is then applied to the resulting "smooth" vector, and so on, until only two values remain.

The daub4 coefficients are shown in equation (4).

$$c_0 = \frac{1+\sqrt{3}}{4\sqrt{2}}, \quad c_1 = \frac{3+\sqrt{3}}{4\sqrt{2}}, \quad c_2 = \frac{3-\sqrt{3}}{4\sqrt{2}}, \quad c_3 = \frac{1-\sqrt{3}}{4\sqrt{2}} \qquad (4)$$
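A compact C implementation of this scheme, modeled on the classic Numerical Recipes daub4 routine (the paper's actual firmware is not published, so the buffer handling here is illustrative):

```c
/* One-dimensional daub4 wavelet transform, pyramid scheme, in the style of
 * Numerical Recipes. Input length must be a power of two (1024 here). */
#include <stdlib.h>

#define C0  0.4829629131445341f   /* (1+sqrt(3))/(4*sqrt(2)) */
#define C1  0.8365163037378079f   /* (3+sqrt(3))/(4*sqrt(2)) */
#define C2  0.2241438680420134f   /* (3-sqrt(3))/(4*sqrt(2)) */
#define C3 -0.1294095225512604f   /* (1-sqrt(3))/(4*sqrt(2)) */

/* One level: convolve with the smoothing filter (c0..c3) and the detail
 * filter (c3,-c2,c1,-c0), decimate by two, and interleave smooth|detail. */
static void daub4_step(float *a, unsigned n)
{
    if (n < 4) return;
    float *tmp = malloc(n * sizeof(float));
    const unsigned nh = n >> 1;
    unsigned i, j;

    for (i = 0, j = 0; j < n - 3; j += 2, i++) {
        tmp[i]      = C0*a[j] + C1*a[j+1] + C2*a[j+2] + C3*a[j+3]; /* smooth */
        tmp[i + nh] = C3*a[j] - C2*a[j+1] + C1*a[j+2] - C0*a[j+3]; /* detail */
    }
    /* Wrap-around for the final pair. */
    tmp[i]      = C0*a[n-2] + C1*a[n-1] + C2*a[0] + C3*a[1];
    tmp[i + nh] = C3*a[n-2] - C2*a[n-1] + C1*a[0] - C0*a[1];

    for (i = 0; i < n; i++) a[i] = tmp[i];
    free(tmp);
}

/* Full transform: apply the step to vectors of length n, n/2, ... down to 4,
 * leaving two final "smooth" components plus all accumulated details. */
void daub4_transform(float *a, unsigned n)
{
    for (unsigned len = n; len >= 4; len >>= 1)
        daub4_step(a, len);
}
```

Applied to a 1024-sample frame, the first two entries of the output hold the final "smooth" components and the remainder holds the detail coefficients accumulated at each scale.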

The Wavelet task waited for the queue to be filled with a 1024-length float input vector; hence the procedure was only performed when the data was ready to be processed. The output was then placed into the transmission queue in string format, saving memory space and simplifying the sending protocol. Due to the space required by a 1024-length float array, memory allocation and release functions were used actively; in this way, stack overflows and memory allocation failures were avoided.
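A sketch of what the Wavelet task body could look like under this scheme; the queue names match the earlier sketch, and the string format and buffer sizes are assumptions:

```c
/* Possible shape of the Wavelet task: block on the input queue, transform,
 * format as a string and hand off to the USB transmission queue.
 * daub4_transform is the routine sketched above; all names are illustrative. */
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

#define SAMPLE_COUNT 1024
extern QueueHandle_t xInputQueue, xOutputQueue;
extern void daub4_transform(float *a, unsigned n);

void vWaveletTask(void *pv)
{
    float *frame;
    for (;;) {
        /* Block indefinitely until a full 1024-sample frame is available. */
        xQueueReceive(xInputQueue, &frame, portMAX_DELAY);

        daub4_transform(frame, SAMPLE_COUNT);

        /* Format the coefficients as text; allocate from the RTOS heap and
         * release after use, as the text describes. 16 bytes per value is
         * an assumed worst case for the chosen "%.5f;" format. */
        char *msg = pvPortMalloc(SAMPLE_COUNT * 16);
        size_t pos = 0;
        for (unsigned i = 0; i < SAMPLE_COUNT; i++)
            pos += snprintf(msg + pos, 16, "%.5f;", frame[i]);

        xQueueSend(xOutputQueue, &msg, portMAX_DELAY); /* USB task frees msg */
        vPortFree(frame);  /* input frame no longer needed */
    }
}
```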

The logic sequence of the program is as follows. The ADC task starts performing the conversion and, when it finishes collecting all the values converted during a certain amount of time, it places the generated array into the ADC queue and goes to the ready state, allowing the next task to run. The next task, the Wavelet task, is blocked indefinitely waiting for the queue to be filled; when it is ready to run, it takes the input string and converts it to a float data type, in order to prepare the data for processing. The Wavelet transform is performed on the data, and an array of the same size is the result. This array is then placed into the transmission queue, on which the USB task is blocked waiting, so the USB task is unblocked and runs. This happens immediately, since the USB task has a higher priority than the other tasks and therefore preempts the Wavelet task. As soon as it finishes sending the data to the PC, the Wavelet task completes its operation and then yields, allowing the process to start over. A schematic of the whole process is shown in Figure 3.

The tasks were designed in such a way that the non-audible murmur processing could be performed solely by the Discovery board, without the assistance of external devices. However, this approach is not yet implemented, since it requires designing a trigger for speech activity detection. Instead, the range in which the speech information took place was selected manually, always obeying the required input data length, and later passed to the Discovery board.
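Purely as an illustration of this missing component, such a trigger could be as simple as a short-term energy threshold that starts frame capture when the signal rises above the background level; the window length and threshold below are arbitrary placeholders, not values from the study:

```c
/* Illustrative energy-threshold trigger for speech activity detection.
 * The paper leaves this detector as future work; WIN and THRESHOLD are
 * placeholders that would have to be tuned against recorded background. */
#include <stdint.h>

#define WIN 32                  /* samples per short-term energy window */
#define THRESHOLD 100000UL      /* tune against recorded background noise */

/* Returns nonzero when the mean-removed energy of the latest window exceeds
 * the threshold, i.e. when capture of the 1024-sample frame should start. */
int speech_active(const uint16_t *window)
{
    uint32_t mean = 0, energy = 0;
    for (int i = 0; i < WIN; i++) mean += window[i];
    mean /= WIN;
    for (int i = 0; i < WIN; i++) {
        int32_t d = (int32_t)window[i] - (int32_t)mean;
        energy += (uint32_t)(d * d);
    }
    return energy > THRESHOLD;
}
```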

Results

Each set of words was successfully processed, resulting in a dataset of five transformations per word. Since some of the words used in the experiment have the same number of syllables, their input data can look similar, as can be seen in Figure 4 when comparing "izquierda" to "derecha". Dissimilarity can be achieved by coding the signals through the Wavelet transform and analyzing certain features (e.g. standard deviation, average) of the output values, in order to discriminate among the vocabulary. Another feature to take into account is the amount of silent time, which in this case occurs before and after the recorded data. This time exists because the recording lasted the same period for all the words, causing the shorter words to have longer silent intervals and the longer words the opposite. In this case, the word "atrás" is the shortest; consequently, it also presents the longest silent period. This specific circumstance of the experiment also affects the transform output, and may cause issues in further attempts to classify the words. This is why it is essential to start recording as soon as the utterance begins, registering only the useful information and, additionally, saving system resources.

The Wavelet transform of the sample input signals presented in Figure 4 can be seen in Figure 5. All results show evident constant-frequency high peaks; the amplitude of these peaks varied depending on the word, but their recurrence is equal due to the nature of the mother Wavelet. The words "adelante" and "atrás" have similar magnitudes for their highest peak, in contrast to the other words; this could suggest that those peaks depend on the first syllable of a word.

It can be observed that, even though some input signals were similar, their outputs present substantial differences (e.g. in amplitude and frequency) that can be exploited to find proper features for further classification. For example, the words "izquierda" and "derecha" have the same number of syllables, but they can be differentiated by the average value of the signal or by the number of peaks (high and low) they present. Some common feature extraction methods are the moving root mean square (Basavaraj & Veerappa, 2009), the short-time Fourier transform (Junhong & Ming, 2010), linear predictive coefficients (Pattanaburi, Onshaunjit & Srinonchat, 2012) and zero-crossing points (López-Larraz, Mozos, Antelis & Minguez, 2010), among others. This extraction can also be performed by combining the previously stated techniques.
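As a small illustration of two of these measures, a moving root mean square and a zero-crossing count over a frame could be computed as follows (a sketch; the window length is an arbitrary choice):

```c
/* Two simple feature extractors mentioned in the text: moving RMS and
 * zero-crossing count. Sketch only; the caller chooses the window length. */
#include <math.h>
#include <stddef.h>

/* Moving RMS over a sliding window of w samples; out must hold n-w+1 values. */
void moving_rms(const float *x, size_t n, size_t w, float *out)
{
    for (size_t i = 0; i + w <= n; i++) {
        float acc = 0.0f;
        for (size_t j = 0; j < w; j++)
            acc += x[i + j] * x[i + j];
        out[i] = sqrtf(acc / (float)w);
    }
}

/* Number of sign changes across the frame (zero-crossing points). */
size_t zero_crossings(const float *x, size_t n)
{
    size_t count = 0;
    for (size_t i = 1; i < n; i++)
        if ((x[i - 1] < 0.0f) != (x[i] < 0.0f))
            count++;
    return count;
}
```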

Concerning the consistency of the results associated with a specific word, the output signals remained fairly constant. The words with the most constant results were "adelante" and "atrás"; the other two words showed variability in their amplitude and average, though not in their frequency. This outcome can be related to the conditions in which the test was conducted; a more controlled environment could lead to better results. Another cause is the not-yet-implemented speech activity detection, for which manual selection was employed instead; subjectivity and inconsistency are consequences of this. Even with that component in place, the speed at which the words are pronounced, and the variation it can induce between recordings of the same word, must also be considered. If a word is identified at a certain tone and speed, that same word may not be identified if the utterance's volume or duration changes. This is why this property of the input data should be confronted and overcome by applying a time-domain feature extraction technique that deals with this situation.

Conclusions

A Wavelet analysis of non-audible signals of four Spanish words captured by a NAM microphone was performed. The objective was to design a system in which a device could acquire, process and transmit Wavelet-processed signals. The signals were successfully captured and transformed into their Wavelet counterparts. The acquired signals were manually selected so that only the portion with valuable information was processed; this could be done automatically by the device if a speech activity detector were implemented, and studies on this matter will be carried out in the future. A standardized method for the development of the experiment is suggested: specifying the exact position of the microphone on the neck (the same for all subjects), determining a controlled space in which to perform the experiment, safe from noise and disruptions, and establishing the process in which the pronunciation should be done, testing different tones and durations. All these measures should lead to better quality tests, avoiding subjectivity and inconvenience. The next step toward automatic speech recognition is to design a classifier that uses features of the resulting signals to discriminate and identify Spanish words. In order to gain a better understanding of its operation, studies with an extended vocabulary and different mother wavelets will be performed.


References

Abair, D. & Alimbayev, T. (2014). Conjugate Heat Transfer in a Developing Laminar Boundary Layer. Proceedings of the World Congress on Engineering 2014, II, London, UK. Retrieved October 10, 2014, from http://www.iaeng.org/publication/WCE2014/WCE2014_pp1387-1392.pdf

Babani, D., Tomoki, T., Hiroshi, S. & Kiyohiro, S. (2011). Acoustic model training for non-audible murmur recognition using transformed normal speech data. 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5224-5227. Prague. doi:10.1109/ICASSP.2011.5947535

Basavaraj, S. & Veerappa, B. P. (2009). An acoustic signature based neural network model for type recognition of two-wheelers. 2009 International Multimedia, Signal Processing and Communication Technologies (IMPACT '09). Aligarh: IEEE. doi:10.1109/MSPCT.2009.5164166

de Vos, J. & Blanckenberg, M. (2007). Automated Pediatric Cardiac Auscultation. IEEE Transactions on Biomedical Engineering, 54(2), 244-252. doi:10.1109/TBME.2006.886660

Denby, B. et al. (2010, April). Silent speech interfaces. Speech Communication, 52(4), 270-287. doi:10.1016/j.specom.2009.08.002

FreeRTOS. (2013, June 6). FreeRTOS. Retrieved September 30, 2014, from http://www.freertos.org/FAQWhat.html#WhyUseRTOS

Junhong, D. & Ming, Y. (2010). Analysis of guided wave signal based on LabVIEW and STFT. 2010 International Conference on Computer, Mechatronics, Control and Electronic Engineering (CMCE), 5, 115-117. Changchun: IEEE. doi:10.1109/CMCE.2010.5610045

Laplante, P. & Ovaska, S. (2011). Concepts and misconceptions. In Real-Time Systems Design and Analysis: Tools for the Practitioner (4th ed., pp. 5-8). United States of America: IEEE/Wiley.

López-Larraz, E., Mozos, O., Antelis, J. & Minguez, J. (2010). Syllable-based speech recognition using EMG. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 4699-4702. Buenos Aires: IEEE. doi:10.1109/IEMBS.2010.5626426

Madishetty, S. K., Madanayake, A. & Cintra, R. (2013, June). Architectures for the 4-Tap and 6-Tap 2-D Daubechies Wavelet Filters Using Algebraic Integers. IEEE Transactions on Circuits and Systems I: Regular Papers, 60(6), 1455-1468. doi:10.1109/TCSI.2012.2221171

National Instruments. (2013, November 22). Retrieved October 5, 2014, from http://www.ni.com/white-paper/3938/en/

Pattanaburi, K., Onshaunjit, J. & Srinonchat, J. (2012). Enhancement Pattern Analysis Technique for Voiced/Unvoiced Classification. 2012 International Symposium on Computer, Consumer and Control (IS3C), 389-392. Taichung: IEEE. doi:10.1109/IS3C.2012.105

Samjin, C., Youngkyun, S. & Hun-Kuk, P. (2011, April). Selection of wavelet packet measures for insufficiency murmur identification. Expert Systems with Applications, 38(4), 4264-4271. doi:10.1016/j.eswa.2010.09.094

Satiyan, M., Hariharan, M. & Nagarajan, R. (2010). Comparison of performance using daubechies wavelet family for facial expression recognition. 2010 6th International Colloquium on Signal Processing and Its Applications (CSPA), 1-5. Malacca City: IEEE. doi:10.1109/CSPA.2010.5545262

Suresh, H. & Balasubramanyam, V. (2013). Wavelet transforms and neural network approach for epileptical EEG. 2013 IEEE 3rd International Advance Computing Conference (IACC). Ghaziabad: IEEE. doi:10.1109/IAdCC.2013.6506807

Tomoki, T., Keigo, N., Takayuki, N., Tomomi, K., Yoshitaka, N. & Kiyohiro, S. (2009). Technologies for Processing Body-Conducted Speech Detected with Non-Audible Murmur Microphone. 10th Annual Conference of the International Speech Communication Association, 632-635. Brighton, UK.

Torres-García, A. A., Reyes-García, C. A. & Villaseñor-Pineda, L. (2012). Toward a silent speech interface based on unspoken speech. Proceedings of the International Conference on Bio-inspired Systems and Signal Processing, 370-373. Portugal. Retrieved October 6, 2014, from http://ccc.inaoep.mx/~villasen/articulos/SilentSpeechInterfaceBasedOnUnspokenSpeech-BIOSTEC12.pdf

Verma, A., Cabrera, S., Mayorga, A. & Nazeran, H. (2013). A robust algorithm for derivation of heart rate variability spectra from ECG and PPG signals. 2013 29th Southern Biomedical Engineering Conference (SBEC), 35-36. Miami, Florida. doi:10.1109/SBEC.2013.26

Xinyi, G., Gengyin, L., Ming, Z. & K.I., L. (2011). Wavelet transform based approach to harmonic analysis. 2011 11th International Conference on Electrical Power Quality and Utilisation (EPQU). Lisbon. doi:10.1109/EPQU.2011.6128954
