
Revista de investigación e innovación en ciencias de la salud

On-line version ISSN 2665-2056

Rev. Investig. Innov. Cienc. Salud vol.6 no.1 Medellín Jan./June 2024  Epub Jan 23, 2024

https://doi.org/10.46634/riics.238 

Research article

Surveying Colombian Speech-Language Pathologists on their Reported Training & Practices of Auditory-Perceptual Evaluation of Voice

Encuestando a fonoaudiólogos colombianos sobre su entrenamiento y prácticas reportadas en la evaluación perceptual auditiva de la voz

Martha Peña Sanchez1  * 
http://orcid.org/0000-0002-3112-3201

Fernando Delprado-Aguirre1 
http://orcid.org/0000-0002-0097-0475

1 Vocology Center; Bogotá; Colombia.


Abstract

Objective:

To explore the training and use of auditory perceptual evaluation of the voice reported by Colombian speech-language pathologists.

Study Design:

Cross-sectional observational research with a quantitative approach.

Methods:

A digital questionnaire was designed and distributed to gather information regarding professionals' training process and implementation of auditory-perceptual evaluation procedures. Descriptive statistics were applied, and several generalized linear models were adjusted to determine the influence of certain variables on others.

Results:

The survey received responses from 40 speech-language pathologists. Within this group, the scales most used for training and for evaluating vocal quality were direct magnitude estimations (82.5% and 77.5%, respectively). Similarly, the tasks most frequently used both in training and as an evaluation strategy were vowel productions (38%), followed by spontaneous speech (30%). Most practitioners in this group were trained under a conceptual framework involving multiple exposures to rating (42.5%). The use of direct magnitude estimation in training with a normal voice showed significance (p = 0.015), as did the use of the vowel /i/ in training with an equal-appearing interval scale (p = 0.013). The statistical models relating the scale used to the scale on which participants were trained were also significant (p < 0.05).

Conclusions:

The GRBAS scale is the training tool most used by the speech-language pathologists in the study group in Colombia. Future efforts should focus on improving training practices for auditory-perceptual evaluation, exploring alternative conceptual frameworks, and incorporating external references to enhance validity and reliability.

Keywords: Auditory-perceptual evaluation; voice; voice quality; training; voice assessment; perception; rating; anchors; voice judgments; scale

Resumen

Objetivo:

Explorar los reportes de fonoaudiólogos colombianos acerca del entrenamiento y uso de la evaluación perceptual auditiva de la voz.

Diseño de estudio:

Se eligió un diseño de investigación observacional transversal con un enfoque cuantitativo.

Metodología:

Se diseñó y distribuyó un cuestionario digital para recopilar información sobre el proceso de formación de los profesionales y la implementación de procedimientos de evaluación perceptual auditiva. Se aplicaron estadísticas descriptivas y se ajustaron varios modelos lineales generalizados para determinar la influencia de ciertas variables en otras.

Resultados:

La encuesta recibió respuestas de 40 fonoaudiólogos, revelando que las escalas más utilizadas para la formación y la evaluación de la calidad vocal en el grupo son las estimaciones de magnitud directa (82.5% y 77.5%). Del mismo modo, en este grupo las tareas más frecuentemente utilizadas para la formación y el uso como estrategia de evaluación son las vocales (38%), seguidas por el habla espontánea (30%). La mayoría de los profesionales del grupo fueron formados utilizando un marco conceptual que involucra múltiples exposiciones a la calificación (42.5%). El uso de la estimación de magnitud directa en la formación con una voz normal mostró significancia (p = 0.015), al igual que el uso de la vocal /i/ en la formación con intervalos de igual apariencia (p = 0.013). Los modelos estadísticos que relacionan la escala utilizada con la escala en la que los participantes fueron entrenados también fueron significativos (p < 0.05).

Conclusiones:

La escala GRBAS es la herramienta de formación más utilizada por el grupo de fonoaudiólogos del estudio. Los esfuerzos futuros deberían centrarse en mejorar las prácticas de formación para la evaluación perceptual auditiva, explorar marcos conceptuales alternativos e incorporar referencias externas para mejorar la validez y la confiabilidad.

Palabras clave: Evaluación perceptual auditiva; voz; calidad vocal; formación; evaluación vocal; percepción; calificación; anclajes; juicios vocales; escala

Introduction

For professionals specializing in voice analysis, auditory-perceptual evaluation is an essential measurement process that enables them to clinically determine the presence or absence of a voice disorder [1]. Speech-language pathologists must have a thorough understanding of the conceptual framework underlying auditory-perceptual evaluation and of the conditions required to conduct a reliable auditory-perceptual analysis. In the Colombian context, theoretical and practical training is provided at the undergraduate level, offering the basic knowledge and skills needed to perform the evaluation process; this knowledge is also a fundamental requirement for professional practice [2]. Perceptual scales are part of both training and common use, especially GRBAS and its derivatives RASAT and RASATI, tools widely used not only by speech-language pathologists but also by other professionals such as ENT specialists [3].

Conceptual framework of auditory-perceptual evaluation of voice

Kreiman et al. [4] proposed a conceptual model that incorporates various intervening factors in assigning specific ratings to vocal acoustic signals. During perceptual evaluation, listeners compare several (potentially nonspecific) qualities they perceive in a speaker's voice with their own subjective understanding of how these qualities should be heard in a voice. Therefore, perceptual assessment involves comparing the evaluator's internal standards with the vocal production of the individual being assessed, allowing the evaluator to make judgments based on their own criteria or standards [5]. These internal reference standards are "average" or "typical" examples (normal or altered) of certain qualities being rated [4,6]. These standards are stored in memory and are developed through exposure to multiple voices. As a result, the standards can vary among listeners and are inherently unstable, influenced by factors such as memory and attention lapses or external factors like the acoustic context [5,7]. Given the variability of internal standards, the use of external standards or anchors, which are reference stimuli that listeners employ for comparison with the voice they are evaluating, is currently suggested [8].

However, the perception of voice quality by an evaluator is influenced by various factors, starting with listener attributes. These include internal reference standards and specific perceptual biases (such as being a native speaker of a particular language), professional training, and general sensitivity to certain vocal qualities [9-12]. It is widely acknowledged that training and extensive exposure to diverse voices are instrumental in refining internal reference standards [7,8,13,14]. Regarding training methods, Walden and Khayumov [15] discuss three theoretical foundations for auditory-perceptual assessment training: multiple exposures to rating, which demands that participants listen to voices repeatedly to enhance reliability; use of external references, in which the learner compares the stimuli to be evaluated with a reference sample [16]; and incorporation of perceptual input, which provides additional support to the listener through the visual sensory channel, typically using spectrograms, although laryngeal images can also be used. Additionally, random errors such as fatigue, attention lapses, or transcription mistakes also influence listener ratings [12,17].

This could explain why scores differ between experienced and novice listeners. On one hand, a body of evidence suggests that most individuals have relatively stable internal references for normal voice, because experience with typical voices is comparatively similar across listeners. On the other hand, training methods may or may not differ among individuals. Therefore, when novice listeners rate vocal quality, they do so with reference to normalcy, whereas experienced listeners compare the signal to an internal repository of pathological voices acquired through training and rating practice [12,18,19]. Nevertheless, inter-rater reliability (agreement) appears to be low among experienced raters, even though intra-rater reliability (consistency) indices are high for these same subjects [12,20,21].
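The agreement and consistency indices mentioned above can be quantified with chance-corrected statistics such as Cohen's kappa. As a minimal, self-contained sketch (the ratings below are invented for illustration), comparing one clinician's two passes over the same voices yields an intra-rater consistency estimate:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two rating series of equal
    length (e.g. a rater's first and second pass over the same voices)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_chance = sum(ca[k] * cb[k] / n**2 for k in ca.keys() | cb.keys())
    return (p_obs - p_chance) / (1 - p_chance)

# Invented 0-3 severity ratings: one clinician rating eight voices twice.
first_pass  = [0, 1, 2, 2, 3, 1, 0, 2]
second_pass = [0, 1, 2, 2, 3, 1, 1, 2]
print(round(cohens_kappa(first_pass, second_pass), 3))  # 0.826
```

The same function applied to two different raters' scores would give an inter-rater agreement index, which the literature above suggests tends to be lower.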

The second factor that influences perceived quality is related to the task itself, encompassing various aspects such as the scale used to quantify voice sound phenomena, instructions for completing the scale, the rating environment, and the quality of the voice sample [22-24]. The literature describes different types of scales, including categorical ratings, direct magnitude estimations, equal-appearing interval scales, visual analog scales, and paired comparisons [4,25]; a detailed description of each of the scales described in the literature is presented in Table 1. The use and reliability of these scales heavily rely on the level of training of the judges and the analysis strategies employed [4,12], hence, the various efforts to establish the diagnostic validity of the different scales [16,21,26,27].

Table 1 Types of vocal quality rating scales. 

Type of scale Characteristics Example
Categorical ratings Assignment of specific categories, with or without a specific order Descriptors such as muffled, hoarse, high-pitched, low-pitched, among others
Direct magnitude estimations Ordinal scale presenting numbers in a natural order, but where the distances between numbers are not equal; the assigned number indicates the extent to which a certain quality is present GRBAS scale or its derivative versions GBA, RASAT, or RASATI [45]
Equal-appearing interval (EAI) Ordinal scale with equidistant points; requires listeners to assign a number between 1 and n (the number of points on the scale) Buffalo Voice Profile [46]
Visual analog scale An undifferentiated line, usually 100 mm long, with two extremes: one indicating the absence of disturbance and the other a complete disturbance; the score is marked as a vertical line crossing the undifferentiated line The Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) [1]
Paired comparisons Two stimuli, usually opposite, are compared, judging how different they are on each dimension Bipolar vocal self-estimate scale [47]
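As an illustration of the direct magnitude estimation family in Table 1, a GRBAS judgment can be represented as a small data structure. This is a hypothetical sketch assuming the conventional 0-3 ordinal grades; the class name and fields are illustrative, not part of any standard:

```python
from dataclasses import dataclass

GRADES = range(4)  # conventional ordinal grades: 0 = normal ... 3 = severe

@dataclass
class GRBASRating:
    """Hypothetical container for one GRBAS judgment; as with any
    direct magnitude estimation, the distances between ordinal grades
    are not assumed to be equal."""
    grade: int        # G: overall degree of dysphonia
    roughness: int    # R
    breathiness: int  # B
    asthenia: int     # A: weakness
    strain: int       # S

    def __post_init__(self):
        # Reject values outside the ordinal range of the scale.
        for name, value in vars(self).items():
            if value not in GRADES:
                raise ValueError(f"{name} must be an integer from 0 to 3")

rating = GRBASRating(grade=2, roughness=2, breathiness=1, asthenia=0, strain=1)
print(rating.grade)  # 2
```

A categorical rating, by contrast, would store free descriptors rather than ordinal numbers, which is part of why it is harder to compare across raters.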

Moreover, speech-language pathologists need to consider the conditions of the speech samples, particularly the quality of the audio recordings. Additionally, authors like Maryn et al. [28] and Maryn and Roy [29] suggest the inclusion of vowels and connected speech in different modalities, such as phonetically balanced readings/phrases or spontaneous speech [30]. Lastly, it is crucial to implement the auditory-perceptual rating process in controlled environmental conditions to minimize biases or errors [31-33].

The final factor to consider is the interaction between the listener and the task, and how it relates to the signal being evaluated. This includes the selection of the scale utilized and the use of anchor stimuli [8,13]. Additionally, there is a phenomenon where the internal standards unconsciously shift when evaluating stimuli of varying severity, which can affect the assessment of subsequent samples [34].

So far, a portion of the robust conceptual framework for auditory-perceptual assessment of voice has been described. It is important to acknowledge that, in the Colombian context, it is unknown whether the information obtained from auditory-perceptual evaluation is merely one component of a broader voice evaluation protocol and, if so, whether it is considered decisive in providing relevant information about an individual case, selecting appropriate evaluation instruments, or making decisions regarding vocal treatment.

On the other hand, it is assumed that the training in auditory-perceptual assessment varies across the country. While this is a challenge inherent in the tool itself, the training provided to Colombian speech therapists in this area shows significant variability and may not be supported by a comprehensive conceptual framework like the one proposed by Walden and Khayumov [15]. Consequently, the practice of auditory-perceptual assessment may lack a solid theoretical foundation that considers the underlying variables and how they systematically influence the scores assigned to collected speech samples.

Furthermore, the relevance of auditory-perceptual assessment of voice was highlighted in the context of speech-language services during the Covid-19 pandemic [35]. With the urgent need to transition to telepractice and the impossibility of performing instrumental examinations, several authors have emphasized the use of auditory-perceptual assessment due to its compatibility with remote connections [36-38].

Considering this issue, one of the hypotheses of this study is that training in auditory perceptual evaluation of voice is variable among professionals of speech-language pathology in the nation. Furthermore, it is hypothesized that Colombian speech-language pathologists perform the auditory perceptual evaluation procedure without control of the factors associated with the process.

In accordance with the description above, the objective of this research was to explore the training and use of auditory-perceptual evaluation of the voice reported by Colombian speech-language pathologists. Knowing the training that professionals receive in this field, as well as the specificities of voice quality ratings, can facilitate decision-making processes aimed at standardizing auditory-perceptual evaluation practices in Colombia. Additionally, it can inform the qualification process for current and future generations of speech-language pathologists in the country.

Material and methods

This cross-sectional observational research employed a quantitative approach. A digital questionnaire, a valuable tool for obtaining initial information on a specific situation [39], was designed and distributed to speech-language pathologists in Colombia. The initial version of the questionnaire comprised 26 questions categorized into five sections. Each section consisted of questions of various types, such as closed-ended questions with dichotomous options or multiple-choice questions with single or multiple responses; open-ended questions provided an opportunity for concise or detailed answers. To verify the content and grammatical structure of each statement, an evaluation instrument was developed and administered by an external evaluator, a speech-language pathologist with a PhD and a master's degree in education; the survey assessment instrument with the advisor's observations is attached (see Appendix 1). After the necessary revisions, a final version of the structured questionnaire was obtained; its sections are presented in Table 2. It is worth emphasizing that the second section was created with the understanding that this research is considered risk-free, as it employs questionnaires that do not intentionally modify biological, physiological, psychological, or social variables. Furthermore, the questions in the fourth and fifth sections align with the task and evaluator variables proposed by Kreiman et al. [4].

Table 2 Description of the sections in the questionnaire. 

Sections Objective Variables Number of questions
Inclusion criteria Define suitability to respond the survey Not applicable 1
Informed consent Voluntary manifestation of willingness to participate in the research Not applicable 1
Sociodemographic data To establish the general characteristics of the participants City, age, gender, year of graduation, highest level of education attained, years of experience in the area, types of populations served 7
Dimension 1: Training process To inquire how participants were trained in auditory perceptual assessment in typical and pathological voices: hours of training, types of scales, training samples, and continuing education programs Task aspects Evaluator aspects 9
Dimension 2: Implementation of the procedure To inquire about the implementation of auditory perceptual voice assessment in clinical practice: voice tasks, sample recording procedure, scale used, and perceived usefulness Task aspects Evaluator aspects 9
Total 27

To distribute the instrument, requests were made to colleagues through the email list of the Colegio Colombiano de Fonoaudiólogos (CCF) and other electronic channels. The CCF disseminated the invitation through mass communication among its registered members nationwide (n = 161). Simultaneously, a chain distribution was carried out through instant messaging applications (n = 25). The questionnaire was made available in August 2021 and remained accessible for 15 days, during which it was redistributed solely through instant messaging applications. The sample was selected by convenience, taking care to include the professors from the 14 speech therapy schools in the country that teach auditory-perceptual voice evaluation. This type of sampling was preferred because random sampling is currently difficult to define, as there are no official statistics indicating the number of professionals dedicated to the field of voice. In addition, the following inclusion criterion was established: being a professional attending voice consultations. Professionals focusing on other areas of speech-language pathology were excluded from the sample. Additionally, participants were required to answer each of the questions presented in the instrument.

All responses were recorded and processed in a data table in Microsoft Excel. Descriptive analyses were conducted, including frequency counts and percentages for each item. Frequency graphs were also created to observe trends and response patterns. Additionally, several generalized linear models with binomial response and logit link function were fitted, with the response and predictor variables of each statistical model shown in Table 3. It is important to highlight that all variables were dichotomized so that statistical models with binomial response could be fitted. These analyses aimed to verify whether, within the analyzed dataset, the response variable could be explained by the predictor variables. All analyses were conducted at a 95% confidence level using R software. Finally, open questions were analyzed considering trends identified in the participants' responses.

Table 3 Established generalized linear models with binomial response. 

Predictor variables Response variables Number of models fitted
Highest level of education and years of experience in the field. Training in normal and pathological voice 2
Hours of training in normal and pathological voice Type of scale trained 5
Type of scale trained Type of task trained 6
Type of scale trained Type of trained vowels 5
Hours of training in normal and pathological voice Performance of auditory perceptual evaluation 1
Type of scale trained Type of scale used 5
Type of task trained Type of task used 6
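The fitting procedure described above (binomial GLM with logit link over dichotomized variables) can be sketched outside R as well. The following is a minimal NumPy illustration with invented data, not the study's dataset; `fit_binomial_glm` is a hypothetical helper implementing the iteratively reweighted least squares that R's `glm()` uses internally:

```python
import numpy as np
from math import erf, sqrt

def fit_binomial_glm(X, y, iters=25):
    """Binomial GLM with logit link, fitted by iteratively
    reweighted least squares (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))   # inverse logit
        w = mu * (1.0 - mu)               # IRLS weights
        z = eta + (y - mu) / w            # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    # Wald p-value for the predictor, from the inverse Fisher information.
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    w = mu * (1.0 - mu)
    cov = np.linalg.inv(X.T @ (w[:, None] * X))
    z_stat = abs(beta[1]) / sqrt(cov[1, 1])
    p_value = 1 - erf(z_stat / sqrt(2))   # two-sided normal tail
    return beta, p_value

# Invented dichotomized responses (1 = yes, 0 = no): was the respondent
# trained with a given scale, and do they use that scale in practice?
trained = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0], dtype=float)
uses    = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0], dtype=float)

X = np.column_stack([np.ones_like(trained), trained])  # intercept + predictor
beta, p_value = fit_binomial_glm(X, uses)
print(beta[1])   # log odds ratio of using the scale given training in it
print(p_value)   # judged against alpha = 0.05, as in the study
```

Each row of Table 3 corresponds to one such model, with a different dichotomized predictor/response pair substituted for `trained` and `uses`.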

Results

Sociodemographic information

Sociodemographic information related to the participants was included in Table 4.

Table 4 Demographic information. 

Category Results
Sex 4 men (10%)
36 women (90%)
Average age 40.98 years (±10.25)
Study level 1 doctorate (2.5%)
11 master’s degree (27.5%)
18 specialization degree (45%)
4 diploma course (10%)
18 undergraduate degree (45%)
Average years of experience in the field of voice 11.93 years (±9.1)
Populations served 4 Neonates (10%)
2 Early childhood (5%)
7 Middle childhood (17.5%)
13 Adolescents (32.5%)
40 Adults (100%)
19 Elderly (47.5%)

Training of auditory-perceptual evaluation of voice

In this survey, 35 of the respondents (87.5%) reported receiving training in auditory-perceptual evaluation of voice involving listening exercises and analysis of typical voices across the lifespan, while 37 (92.5%) stated that they had received training in listening exercises and analysis of disordered voices. The average training hours for these two tasks were 50.55 hours (±103.302) and 54.05 hours (±92.787), respectively. None of the generalized linear models showed a statistically significant association between training in normal and pathological voices and the participants' educational level or years of experience in vocology (p > 0.05). A detailed summary of the statistical results is displayed in Appendix 2.

Figure 1 displays the type of scale and the number of participants who received training with each scale. On the other hand, Figure 2 and Figure 3 depict the voice and speech tasks that respondents received training with, along with the number of respondents for each task. The statistical model that explained the use of direct magnitude scales resulting from training with normal voice was found to be significant (p = 0.015). Similarly, the statistical model that explained the use of the vowel /i/ resulting from training with the equal appearing interval was also significant (p = 0.013). However, none of the statistical models established an association between the use of a specific task and the scale on which the participants received training (p > 0.05).

Note. The bar graph shows the number of participants using different scales of auditory perceptual evaluation. The included scales were CR: categorical ratings, DME: direct magnitude estimation, EAI: equal appearing interval, PC: paired comparisons, VAS: visual analog scale.

Figure 1 Type of scale used in training 

Note. The bar graph shows the number of participants who received training in auditory perceptual evaluation using different voice tasks.

Figure 2 Vocal tasks used for training 

Note. The bar graph shows the number of participants who received training in auditory perceptual evaluation using different vowels.

Figure 3 Vowels used for training 

Some professionals received multiple forms of training, which is why the total number of participants for each type of training does not match the total study sample. Additionally, when a participant's response did not allow for inference regarding the type of training received, it was classified as undetermined. The conceptual framework of training is presented in Table 5.

Table 5 Conceptual framework for training and number of participants. 

Type of training Participants Percentage
• Use of external references
○ Anchor
- Consensus 1 2.5%
• Multiple exposures to rating
○ Practice
- With feedback 5 12.5%
- No feedback 9 22.5%
- Feedback unclear 3 7.5%
○ Group consensus 4 10%
• Addition of perceptual input
○ Use of spectrograms 2 5%
• Undetermined by response 18 45%

Note: This conceptual framework is taken from Walden and Khayumov [15].

Out of the total respondents, 92.5% (n = 37) reported conducting auditory-perceptual evaluations as part of their clinical practice. However, the adjusted statistical model that aimed to explain test performance based on the hours of training in normal and impaired voice was not statistically significant (p > 0.05). Furthermore, 80% of the respondents stated that this assessment strategy is very useful, while 15% considered it useful. Only 5% rated it as moderately useful, and none of the respondents considered perceptual assessment as not very useful or useless. The purposes of auditory perceptual assessment were categorized based on the participants' responses (refer to Table 6 for details).

Table 6 Purposes of auditory-perceptual evaluation of voice. 

Purposes of voice evaluation Number Percentage
Initial stage
• Determine:    
- Presence/absence of a voice disorder 11 27.5%
- Severity of voice disorder 9 22.5%
- Nature of voice disorder 16 40%
Treatment stage
• Define goals and methods 10 15%
• Educate/counsel the patient about the voice disorder 1 2.5%
• Identify outcomes 12 30%
Undetermined 14 35%

A total of 10 individuals reported correlating the results of perceptual evaluation with acoustic analysis of voice to establish a vocal diagnosis. Additionally, 2 participants mentioned that time plays a significant role in deciding whether or not to perform auditory-perceptual evaluation of voice in their daily clinical practice. Regarding the procedures for recording voice signals for auditory-perceptual evaluation of voice, diverse responses were obtained (refer to Table 7). It is worth noting that only one participant reported not making recordings due to a shortage of supplies.

Table 7 Recording practices reported by participants. 

Components Participant’s report
Microphone Type of microphone
Condenser
Microphone of recorder device
From smartphone
With WDRC
Unidirectional with frequency response (50Hz-20kHz)
Flat frequency omni-directional
Anti-pop
 
Mouth distance
One quarter
5-10 cm
7-10 cm, measured with ruler
15 cm
30 cm
 
Angulation from the mouth
30° angulation
Preamplifier Audio interface
Digital recording Software and hardware
Audio editing (Audacity)
Acoustic analysis (Praat, WaveSurfer)
Smartphone application
Professional recorder
 
Format specifications
16-bit or 32-bit resolution
44,000 or 44,100 Hz sampling rate
WAV format
Decibel calibration Instrument
Not reported
Recording environment Sonometer to verify that samples have noise below 40 dB
Sound-proof cabinet
Quiet space

Note: WDRC: Wide dynamic range compression, WAV: Waveform audio file format.
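The format specifications reported in Table 7 (WAV, 16- or 32-bit resolution, 44.1 kHz sampling rate) can be verified programmatically before a sample is rated. This is a minimal sketch using Python's standard `wave` module; the helper name and demo file are illustrative:

```python
import wave

def meets_recommended_specs(path, min_rate=44100, min_bits=16):
    """Check a WAV recording against the sampling rate and bit depth
    most often reported by participants (44.1 kHz, 16-bit or better)."""
    with wave.open(path, "rb") as wf:
        return (wf.getframerate() >= min_rate
                and wf.getsampwidth() * 8 >= min_bits)

# Write one second of 16-bit mono silence at 44.1 kHz as a demo file.
with wave.open("demo.wav", "wb") as wf:
    wf.setnchannels(1)       # mono
    wf.setsampwidth(2)       # 2 bytes = 16 bits
    wf.setframerate(44100)   # 44.1 kHz
    wf.writeframes(b"\x00\x00" * 44100)

print(meets_recommended_specs("demo.wav"))  # True
```

A check of this kind does not replace a controlled recording environment, but it catches samples whose digital format alone would compromise the auditory-perceptual rating.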

A total of 25 participants (62.5%) indicated that they always perform auditory-perceptual evaluation of voice, while 8 (20%) reported doing it almost always, 4 (10%) mentioned doing it sometimes, 2 (5%) stated they almost never do it, and 1 (2.5%) reported never performing the procedure. Regarding the rating scales used by professionals, the most frequently utilized was direct magnitude estimations (n = 31; 77.5%), specifically with GRBAS and RASAT/RASATI. This was followed, in order of usage, by categorical ratings (n = 12; 30%), paired comparisons (n = 9; 22.5%), equal-appearing intervals (n = 8; 20%) with Buffalo Vocal Profile, and visual analog scale (n = 7; 17.5%) with CAPE-V. Only 2% of the participants reported not using any rating scale. The adjusted statistical models showed significant associations between the scale used and the scale on which the participants were trained: GRBAS (p = 0.012), Buffalo Vocal Profile (p = 0.033), and paired comparisons (p = 0.013).

Figure 4 displays the speech and voice tasks utilized by respondents in their daily auditory-perceptual evaluation practice. None of the statistical models used to associate task usage with the participants' training proved to be statistically significant. Regarding the timing of perceptual assessment, 20 respondents (50%) reported conducting the assessment in real-time, while 15 respondents (35%) stated that they perform a recording and subsequently rate it through one or multiple opportunities to listen. Additionally, 3 participants (7.5%) reported performing the process using a combination of the aforementioned conditions, and another 3 participants (7.5%) reported conducting a rating after recording and subsequently performing a new rating.

Note. The bar graph shows the number of participants who use auditory perceptual evaluation using different vocal tasks.

Figure 4 Vocal tasks used in auditory-perceptual evaluation practice. 

Discussion

Training in auditory perceptual evaluation of the voice

The association between training in auditory-perceptual evaluation of the voice, educational level, and years of experience in the field of voice/vocology was assessed. It is important to recognize that the sampling method chosen for this research reduces the generalizability of the findings. Nevertheless, based on the analyzed data, it can be concluded that a higher educational level or more years of experience in the field does not guarantee a higher level of training in this evaluation strategy. Consequently, further studies are necessary to investigate this matter.

Considering that the auditory-perceptual evaluation procedure is taught at the undergraduate level in Colombia [40], it would be expected that all professionals who participated in the survey had received training in performing it. However, not all respondents indicated that they had received training in evaluating both typical and pathological voices. One possible explanation is the variation in auditory-perceptual assessment training across the country. This assumption is based on the fact that the 14 existing speech therapy programs in the country provide training in auditory-perceptual evaluation either through dedicated voice courses or as part of fundamental courses in the speech area. These courses typically last 96 to 144 working hours, but the specific time dedicated to studying this tool varies between 6 and 9 hours. As a result, the training provided to Colombian speech-language pathologists in this area exhibits significant variability and lacks depth. Furthermore, it is important to highlight concerns about the reliability of the reported data due to the possibility of memory bias: several professionals reported receiving training over ten years ago, which raises questions about the accuracy of their recollection.

Regarding training duration, the results indicated an average of approximately 50 hours for both normal and altered voices. This value far exceeds the training durations reported in the literature, which typically range from 1 to 20 hours. Analysis of the responses to open-ended questions revealed confusion among the concepts of auditory-perceptual evaluation of the voice, patient-reported outcome measures, and even acoustic analysis. It is highly likely that the reported training hours encompass a combination of auditory-perceptual evaluation and other vocal evaluation strategies. Furthermore, it is plausible that more training is focused on pathological voices than on normal voices. This poses a problem, as it creates a bias by setting specific internal standards for certain pathologies instead of general vocal quality, which negatively impacts the training and execution of auditory-perceptual assessments. However, it is important to highlight that the data collection instrument used in this study did not include questions about the number of training sessions; it is recommended that future research investigate this aspect.

Statistical tests revealed a relationship between the use of normal voice stimuli and direct magnitude estimation scales, even though other scales also provide space for rating normal voices. This result may be associated with the fact that direct magnitude estimation scales were the most commonly used in the training of the clinicians in the study.

Similarly, categorical ratings are utilized not only by healthcare professionals but also by arts professionals, owing to their distinct training methods. In the field of speech-language pathology, however, descriptors should align with physiological reasoning. The large number and variety of terms used in categorical ratings make it challenging to characterize and establish relationships between each attribute and its corresponding sound emission in auditory-perceptual evaluation [41]. This raises concerns about the extensive use of this type of measurement scale in Colombian speech-language pathology practice, even today.

Likewise, the Buffalo vocal profile spread through certain areas of the country as a result of the initiative of some schools of speech-language pathology to disseminate the scientific advances of that time; the CAPE-V and paired comparison scales, by contrast, have been used in the country for a relatively short period. Based on the above, a reflection within the national professional community is called for in order to develop more training processes and promote the use of robust instruments and tools for auditory-perceptual assessment.

Regarding the speech tasks that professionals were trained in, there is an evident preference for vowels over other speech tasks. It is equally noteworthy that all respondents reported being trained with the vowel /a/ more frequently than with other vowels. Although this research did not explore the reasons behind the choice of specific vowels for training in auditory-perceptual evaluation, most protocols suggest the use of these vowels. For instance, authors such as Kempster et al. [1] recommend the use of /a/ due to its neutral vocal tract configuration, and /i/ because it is the stimulus used in stroboscopy to observe laryngeal behavior [42]. That said, the statistical tests only confirmed an association between training with the equal-appearing interval scale and the use of the vowel /i/ as part of the trained tasks. This finding reinforces the idea that the stimuli used in training are selected without any apparent specific criteria, or based on the stimuli available to the trainer.

At this point, it should be noted that tasks such as vowels, sentence reading, and spontaneous speech were not statistically associated with visual analog scales, even though tools such as the CAPE-V clearly define the types of stimuli with which they should be trained and administered. The findings mentioned so far differ from what has been reported in the literature, where the main stimulus for training evaluators is connected speech (spontaneous speech and sentence and paragraph reading) [7,43]. At the same time, the findings agree that synthesized stimuli (both vowel and speech) are the least used means of preparing judges' perceptual skills [14,44]. Finally, the speech tasks with which the trainings were performed remain unclear; they may have been drawn from the caseloads of those conducting the trainings or from a pre-existing database.

A different point of discussion is the theoretical basis of training in auditory-perceptual evaluation most commonly reported by respondents: multiple exposures to rating. Within this category, the most frequently indicated practice was rating without feedback. From this finding, it can be inferred that expertise in voice rating is assumed to be acquired simply by listening to stimuli a certain number of times or across multiple sessions; internal standards would thus be developed and reinforced through the act of listening itself. While even limited feedback can benefit sensory learning, the complete absence of feedback for novice judges is particularly problematic: without adequate feedback, the establishment and calibration of internal standards become uncertain [18].

It is noteworthy that only one participant mentioned receiving training that involved the use of external standards, specifically consensus-based anchors for scoring. It is striking that one of the most effective training methods has been underutilized in the country, given its potential to enhance the validity and reliability of auditory-perceptual evaluation of voice [8,13,32]. It is also important to highlight the inability to classify nearly half of the respondents within a theoretical training framework. Initially, it might be assumed that the participants' lack of a frame of reference for their training means they did not receive it, which contradicts previous reports regarding the number of trained professionals. Additionally, the way participants describe their training experiences reveals certain shortcomings in the training itself. The authors postulate that this may be the root cause of the prevailing notion in the country that auditory-perceptual evaluation does not require training. The widespread belief that this assessment process is straightforward and does not require a deep understanding of the underlying processes, or of the appropriate scoring procedures for each tool, indicates a difficulty in reflecting on optimal training methods and an obstacle to recognizing the need for training to establish and calibrate internal standards.

Performance of auditory-perceptual evaluation of voice

Regarding the implementation of the procedure, it is worth noting that not all professionals reported performing it. While some participants justified this response by citing time constraints in daily clinical practice, it is possible that speech-language pathologists themselves are not fully aware that they are indeed implementing it. Numerous authors have emphasized that this is a crucial component for measuring vocal quality and conducting research [32,33]. Furthermore, speech-language pathologists often rely on the results of auditory-perceptual evaluation to inform their efforts in training or rehabilitating individuals with voice disorders, which calls this finding into question. The statistical test examining the relationship between training hours and execution of the procedure supports the notion that, in the studied dataset, the number of training hours does not significantly impact whether the process is performed in routine clinical practice.
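As a sketch of how a relationship of this kind can be modeled, the following minimal logistic regression (one instance of the generalized linear models the study mentions) regresses a binary "performs the procedure" outcome on training time. The hours and outcomes below are invented for illustration only; in the study's data, the analogous effect was not significant, and the point here is solely the mechanics of the fit, using only the Python standard library:

```python
from math import exp

def fit_logistic(xs, ys, lr=0.1, steps=10000):
    """Fit y ~ sigmoid(b0 + b1*x) by batch gradient descent on the
    negative log-likelihood (no regularization; fine for tiny data)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + exp(-(b0 + b1 * x)))
            g0 += p - y          # gradient w.r.t. intercept
            g1 += (p - y) * x    # gradient w.r.t. slope
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical data: training time in tens of hours, and whether the
# clinician reports performing auditory-perceptual evaluation (1 = yes).
hours = [0.5, 1.0, 2.0, 4.0, 6.0, 8.0]
performs = [0, 0, 1, 0, 1, 1]
b0, b1 = fit_logistic(hours, performs)
print(f"intercept = {b0:.2f}, slope = {b1:.2f}")
```

In practice one would test whether the slope differs significantly from zero (e.g. via a Wald or likelihood-ratio test); a near-zero, non-significant slope is what a finding like the study's would look like under this model.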

It is encouraging that no participant considered this strategy of little or no use in daily clinical practice. This confirms that, even when presented with alternative tools for measuring vocal function, Colombian colleagues recognize the inherent value of auditory-perceptual evaluation. However, when asked to explain their usefulness ratings, less than half of the professionals gave reasons. The aspects most frequently mentioned relate to the ability to determine the nature of the voice disorder and to identify treatment outcomes. These trends allow the authors to infer a perception that auditory-perceptual evaluation serves only as a partial baseline and post-treatment comparison measure. Nevertheless, auditory-perceptual evaluation should also be directed towards achieving a diagnosis and establishing treatment goals and methods that educate and counsel the patient about their voice disorder and ways to improve it [34,35].

The findings presented in Table 7 highlight the significant disparity among Colombian speech-language pathologists in terms of adhering to established quality standards when recording vocal signals [36]. This is of utmost importance, considering that inaccurately or inadequately recorded voices can impact the reliability of auditory-perceptual judgments [38]. In future studies, it is crucial to thoroughly investigate the recording practices of acoustic vocal signals, with a particular focus on standardizing the fundamental conditions for audio sample recording across the country.

These latter data may better reflect the number of professionals who actually perform this type of assessment. With respect to the measurement scale, however, the hypothesis that the GRBAS and RASAT/RASATI scales, together with categorical ratings, are the most widely used tools in the country is confirmed. The statistical results confirm that professionals trained in GRBAS tend to use that scale; the same holds for the Buffalo vocal profile. The observed usage may therefore reflect the fact that most professionals were trained with these instruments.

When examining the tasks requested of clients during auditory-perceptual evaluation, a divergence from the use of scales is observed: the tasks with which participants were trained do not necessarily align with the tasks employed in their daily practice. Moreover, there is a consistent emphasis on vowels over other voice and speech tasks. It is hypothesized that this preference may stem from the desire to obtain vowel samples for subsequent acoustic analysis, potentially at the expense of the auditory-perceptual evaluation itself. This finding raises two uncertainties that warrant further exploration. First, participants may be unaware of the physiological rationale underlying the selection of certain tasks over others, leading to limited use of the available tasks without a clear justification. Second, there is uncertainty regarding the tasks associated with each auditory-perceptual evaluation tool, suggesting that participants may not be familiar with the specific procedures required by each tool, resulting in inconsistent task selection [28].

Lastly, it is important to note that half of the surveyed speech-language pathologists reported conducting real-time auditory perceptual evaluation during consultations. This raises concerns regarding the validity of the results obtained through this approach, as many published reports emphasize the need to record voices and listen to them repeatedly to mitigate the potential impact of auditory memory or attention lapses [36].

Limitations

It is necessary to acknowledge several significant limitations of this study. First, a common risk when selecting a non-random sample is the inability to generalize to a population; convenience sampling significantly weakens any resulting generalizations due to selection bias. For this reason, it is not feasible to assume that the findings of this study apply to the entire population of Colombian speech-language pathologists.

It is recognized that, to enhance face validity, the questionnaire should have undergone a pilot test in a sample with similar characteristics before its administration to the definitive sample. Since this was not done, the results obtained with the questionnaire may be subject to bias. Likewise, experts in the field should have been consulted to obtain indicators of content validity; although the questionnaire's content was evaluated by an external evaluator, it could have benefited from a thorough review by experts.

Finally, this study considered Colombian speech-language pathologists as the population of interest. However, the sample included teachers from different speech pathology schools who teach auditory-perceptual voice evaluation. This could introduce bias, as these individuals may have a different level of involvement in the clinical field than other speech-language pathologists. Additionally, the sample represents individuals from different professional backgrounds, and the participants' high level of education is noteworthy: the sample included individuals with doctoral degrees, a significant proportion with master's degrees, and specialization degrees. This may also contribute to potential selection bias.

Conclusions

Based on the findings presented in this study, there is an urgent need to establish systematic training programs for auditory-perceptual evaluation of voice. These programs should be based on a conceptual framework of sensory learning that considers both normal voices throughout the lifespan and disordered voices across different degrees of severity. It is crucial to recognize the differences in internal standards between those who have analyzed populations with pathological voices and those who have only had experience with typical voices.

Given the specific context in Colombia described earlier, it is essential that clinicians receive training in conducting evaluations that align with international standards, encompassing diverse scales, indices, and precise terminology. These standards should be adapted to meet the specific needs of the country. Similarly, voice clinicians must undergo training with standardized parameters to ensure consistency within the evaluation team, considering the characteristics of the population they serve.

Lastly, each institution or working group should develop controlled and systematic protocols for auditory-perceptual evaluation. These protocols should include reproducible and interpretable tests, as well as intra- and inter-rater comparisons, to guide the initial and ongoing training of the team. It is imperative to use methods that incorporate descriptors, scale values, and well-organized speech samples to foster group consensus and maintain a high level of reliability in the judgments issued during auditory-perceptual evaluation. This approach will facilitate clear diagnosis, goal setting, and selection of appropriate treatment methods.

References

1. Kempster GB, Gerratt BR, Verdolini Abbott K, Barkmeier-Kraemer J, Hillman RE. Consensus Auditory-Perceptual Evaluation of Voice: Development of a Standardized Clinical Protocol. Am J Speech Lang Pathol [Internet]. 2009;18(2):124-32. doi: https://doi.org/10.1044/1058-0360(2008/08-0017)

2. Van Stan JH, Whyte J, Duffy JR, Barkmeier-Kraemer J, Doyle P, Gherson S, et al. Voice Therapy According to the Rehabilitation Treatment Specification System: Expert Consensus Ingredients and Targets. Am J Speech Lang Pathol [Internet]. 2021;30(5):2169-201. doi: https://doi.org/10.1044/2021_AJSLP-21-00076

3. Morato-Galán M, Caminero Cueva MJ, Rodrigo JP, Suárez Nieto C, Núñez-Batalla F. Valoración de la calidad vocal tras el tratamiento del carcinoma faringolaríngeo avanzado en un protocolo de preservación de órgano. Acta Otorrinolaringol Esp [Internet]. 2014;65(5):283-8. doi: https://doi.org/10.1016/j.otorri.2013.12.005

4. Kreiman J, Gerratt BR, Kempster GB, Erman A, Berke GS. Perceptual Evaluation of Voice Quality: Review, Tutorial, and a Framework for Future Research. J Speech Lang Hear Res [Internet]. 1993;36(1):21-40. doi: https://doi.org/10.1044/jshr.3601.21

5. Gerratt BR, Kreiman J, Antonanzas-Barroso N, Berke GS. Comparing Internal and External Standards in Voice Quality Judgments. J Speech Lang Hear Res [Internet]. 1993;36(1):14-20. doi: https://doi.org/10.1044/jshr.3601.14

6. Kreiman J, Gerratt BR. Validity of rating scale measures of voice quality. J Acoust Soc Am [Internet]. 1998;104(3):1598-608. doi: https://doi.org/10.1121/1.424372

7. Ghio A, Dufour S, Wengler A, Pouchoulin G, Revis J, Giovanni A. Perceptual Evaluation of Dysphonic Voices: Can a Training Protocol Lead to the Development of Perceptual Categories? J Voice [Internet]. 2015;29(3):304-11. doi: https://doi.org/10.1016/j.jvoice.2014.07.006

8. Chan KMK, Yiu EM-L. The Effect of Anchors and Training on the Reliability of Perceptual Voice Evaluation. J Speech Lang Hear Res [Internet]. 2002;45(1):111-26. doi: https://doi.org/10.1044/1092-4388(2002/009)

9. Altenberg EP, Ferrand CT. Perception of Individuals with Voice Disorders by Monolingual English, Bilingual Cantonese-English, and Bilingual Russian-English Women. J Speech Lang Hear Res [Internet]. 2006;49(4):879-87. doi: https://doi.org/10.1044/1092-4388(2006/063)

10. Chaves CR, Campbell M, Côrtes Gama AC. The Influence of Native Language on Auditory-Perceptual Evaluation of Vocal Samples Completed by Brazilian and Canadian SLPs. J Voice [Internet]. 2017;31(2):258.e1-258.e5. doi: https://doi.org/10.1016/j.jvoice.2016.05.021

11. De Bodt MS, Wuyts FL, Van de Heyning PH, Croux C. Test-retest study of the GRBAS scale: Influence of experience and professional background on perceptual rating of voice quality. J Voice [Internet]. 1997;11(1):74-80. doi: https://doi.org/10.1016/S0892-1997(97)80026-4

12. Eadie TL, Kapsner M, Rosenzweig J, Waugh P, Hillel A, Merati A. The Role of Experience on Judgments of Dysphonia. J Voice [Internet]. 2010;24(5):564-73. doi: https://doi.org/10.1016/j.jvoice.2008.12.005

13. Eadie TL, Kapsner-Smith M. The Effect of Listener Experience and Anchors on Judgments of Dysphonia. J Speech Lang Hear Res [Internet]. 2011;54(2):430-47. doi: https://doi.org/10.1044/1092-4388(2010/09-0205)

14. Chan KMK, Yiu EM-L. A Comparison of Two Perceptual Voice Evaluation Training Programs for Naive Listeners. J Voice [Internet]. 2006;20(2):229-41. doi: https://doi.org/10.1016/j.jvoice.2005.03.007

15. Walden PR, Khayumov J. The Use of Auditory-Perceptual Training as a Research Method: A Summary Review. J Voice [Internet]. 2022;36(3):322-34. doi: https://doi.org/10.1016/j.jvoice.2020.06.032

16. Patel S, Shrivastav R, Eddins DA. Developing a Single Comparison Stimulus for Matching Breathy Voice Quality. J Speech Lang Hear Res [Internet]. 2012;55(2):639-47. doi: https://doi.org/10.1044/1092-4388(2011/10-0337)

17. Anand S, Kopf LM, Shrivastav R, Eddins DA. Objective Indices of Perceived Vocal Strain. J Voice [Internet]. 2019;33(6):838-45. doi: https://doi.org/10.1016/j.jvoice.2018.06.005

18. Eadie TL, Baylor CR. The Effect of Perceptual Training on Inexperienced Listeners' Judgments of Dysphonic Voice. J Voice [Internet]. 2006;20(4):527-44. doi: https://doi.org/10.1016/j.jvoice.2005.08.007

19. Sofranko JL, Prosek RA. The Effect of Levels and Types of Experience on Judgment of Synthesized Voice Quality. J Voice [Internet]. 2014;28(1):24-35. doi: https://doi.org/10.1016/j.jvoice.2013.06.001

20. Webb AL, Carding PN, Deary IJ, MacKenzie K, Steen N, Wilson JA. The reliability of three perceptual evaluation scales for dysphonia. Eur Arch Otorhinolaryngol [Internet]. 2004;261(8):429-34. doi: https://doi.org/10.1007/s00405-003-0707-7

21. Zraick RI, Kempster GB, Connor NP, Thibeault S, Klaben BK, Bursac Z, et al. Establishing Validity of the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V). Am J Speech Lang Pathol [Internet]. 2011;20(1):14-22. doi: https://doi.org/10.1044/1058-0360(2010/09-0105)

22. Nagle KF. Emerging Scientist: Challenges to CAPE-V as a Standard. Perspect ASHA SIGs [Internet]. 2016;1(3):47-53. doi: https://doi.org/10.1044/persp1.SIG3.47

23. Shrivastav R, Sapienza CM, Nandur V. Application of Psychometric Theory to the Measurement of Voice Quality Using Rating Scales. J Speech Lang Hear Res [Internet]. 2005;48(2):323-35. doi: https://doi.org/10.1044/1092-4388(2005/022)

24. Patel RR, Awan SN, Barkmeier-Kraemer J, Courey M, Deliyski D, Eadie T, et al. Recommended Protocols for Instrumental Assessment of Voice: American Speech-Language-Hearing Association Expert Panel to Develop a Protocol for Instrumental Assessment of Vocal Function. Am J Speech Lang Pathol [Internet]. 2018;27(3):887-905. doi: https://doi.org/10.1044/2018_AJSLP-17-0009

25. Eadie TL, Doyle PC. Direct magnitude estimation and interval scaling of pleasantness and severity in dysphonic and normal speakers. J Acoust Soc Am [Internet]. 2002;112(6):3014-21. doi: https://doi.org/10.1121/1.1518983

26. Iwarsson J, Bingen-Jakobsen A, Johansen DS, Kølle IE, Pedersen SG, Thorsen SL, et al. Auditory-Perceptual Evaluation of Dysphonia: A Comparison Between Narrow and Broad Terminology Systems. J Voice [Internet]. 2018;32(4):428-36. doi: https://doi.org/10.1016/j.jvoice.2017.07.006

27. Yamauchi EJ, Imaizumi S, Maruyama H, Haji T. Perceptual evaluation of pathological voice quality: A comparative analysis between the RASATI and GRBASI scales. Logopedics Phoniatrics Vocology [Internet]. 2010;35(3):121-8. doi: https://doi.org/10.3109/14015430903334269

28. Maryn Y, Corthals P, Van Cauwenberge P, Roy N, De Bodt M. Toward Improved Ecological Validity in the Acoustic Measurement of Overall Voice Quality: Combining Continuous Speech and Sustained Vowels. J Voice [Internet]. 2010;24(5):540-55. doi: https://doi.org/10.1016/j.jvoice.2008.12.014

29. Maryn Y, Roy N. Sustained vowels and continuous speech in the auditory-perceptual evaluation of dysphonia severity. J Soc Bras Fonoaudiol [Internet]. 2012;24(2):107-12. doi: https://doi.org/10.1590/S2179-64912012000200003

30. Anand S, Skowronski MD, Shrivastav R, Eddins DA. Perceptual and Quantitative Assessment of Dysphonia Across Vowel Categories. J Voice [Internet]. 2019;33(4):473-81. doi: https://doi.org/10.1016/j.jvoice.2017.12.018

31. Delgado Romero C. La identificación de locutores en el ámbito forense [PhD thesis]. Madrid: Universidad Complutense de Madrid; 2001. 366 p. Available from: https://docta.ucm.es/handle/20.500.14352/55123

32. dos Santos PCM, Vieira MN, Sansão JPH, Gama ACC. Effect of Auditory-Perceptual Training With Natural Voice Anchors on Vocal Quality Evaluation. J Voice [Internet]. 2019;33(2):220-5. doi: https://doi.org/10.1016/j.jvoice.2017.10.020

33. Solomon NP, Helou LB, Stojadinovic A. Clinical Versus Laboratory Ratings of Voice Using the CAPE-V. J Voice [Internet]. 2011;25(1):e7-14. doi: https://doi.org/10.1016/j.jvoice.2009.10.007

34. Helou LB, Solomon NP, Henry LR, Coppit GL, Howard RS, Stojadinovic A. The Role of Listener Experience on Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) Ratings of Postthyroidectomy Voice. Am J Speech Lang Pathol [Internet]. 2010;19(3):248-58. doi: https://doi.org/10.1044/1058-0360(2010/09-0012)

35. Cantarella G, Barillari MR, Lechien JR, Pignataro L. The Challenge of Virtual Voice Therapy During the COVID-19 Pandemic. J Voice [Internet]. 2021;35(3):336-7. doi: https://doi.org/10.1016/j.jvoice.2020.06.015

36. Castillo-Allendes A, Contreras-Ruston F, Cantor-Cutiva LC, Codino J, Guzman M, Malebran C, et al. Voice Therapy in the Context of the COVID-19 Pandemic: Guidelines for Clinical Practice. J Voice [Internet]. 2021;35(5):717-27. doi: https://doi.org/10.1016/j.jvoice.2020.08.001

37. Fujiki RB, Sanders PW, Sivasankar MP, Halum S. Determining Medical Urgency of Voice Disorders Using Auditory-Perceptual Voice Assessments Performed by Speech-Language Pathologists. Ann Otol Rhinol Laryngol [Internet]. 2022;131(6):579-86. doi: https://doi.org/10.1177/00034894211032779

38. Dahl KL, Weerathunge HR, Buckley DP, Dolling AS, Díaz-Cádiz M, Tracy LF, et al. Reliability and Accuracy of Expert Auditory-Perceptual Evaluation of Voice via Telepractice Platforms. Am J Speech Lang Pathol [Internet]. 2021;30(6):2446-55. doi: https://doi.org/10.1044/2021_AJSLP-21-00091

39. Behrman A. Common Practices of Voice Therapists in the Evaluation of Patients. J Voice [Internet]. 2005;19(3):454-69. doi: https://doi.org/10.1016/j.jvoice.2004.08.004

40. Peña Sánchez MJ, Ángel Gordillo LF, Sastoque Hernández ME, Rodríguez Campo A, Calvache Mora CA. Evaluación Perceptual de la Voz: Resignificando lo que hacemos. Jornada de teleconferencias nacionales Celebración Día mundial de la voz 2018. Rev Colomb Rehabil [Internet]. 2018;17(1):52-8. doi: https://doi.org/10.30788/RevColReh.v17.n1.2018.322

41. Peña Sánchez MJ. Evaluación perceptual auditiva. Fundamento de la valoración integral de la voz. In: Cecconello LA, editor. Iº Congreso Iberoamericano de voz cantada y hablada [Internet]; 2010 Oct 22-23; Buenos Aires, Argentina: Fundación Iberoamericana de voz cantada y hablada; 2010. p. 32-41.

42. Ma EP-M, Yiu EM-L. Handbook of Voice Assessments. 1st ed. San Diego: Plural Publishing; 2011. 400 p.

43. Hurren A, Miller N, Carding P. Perceptual Assessment of Tracheoesophageal Voice Quality with the SToPS: The Development of a Reliable and Valid Tool. J Voice [Internet]. 2019;33(4):465-72. doi: https://doi.org/10.1016/j.jvoice.2017.12.006

44. Chan KMK, Li M, Law TY, Yiu EML. Effects of immediate feedback on learning auditory perceptual voice quality evaluation. Int J Speech Lang Pathol [Internet]. 2012;14(4):363-9. doi: https://doi.org/10.3109/17549507.2012.679746

45. Hirano M. Clinical examination of voice. Berlin: Springer Verlag; 1981. 100 p.

46. Wilson D. Voice problems of children. Philadelphia: Williams & Wilkins; 1987. 400 p.

47. Heuillet-Martin G, Garson-Bavard H, Legré A. Una voz para todos. SOLAL; 2003. 400 p.

Cite like this: Peña Sanchez, Martha; Delprado-Aguirre, Fernando. (2024). Surveying Colombian Speech-Language Pathologists on their Reported Training & Practices of Auditory-Perceptual Evaluation of Voice. Revista de Investigación e Innovación en Ciencias de la Salud, 6(1), 148-168. https://doi.org/10.46634/riics.238

Editor: Fraidy-Alonso Alzate-Pamplona, MSc. https://orcid.org/0000-0002-6342-3444

Copyright: © 2024. María Cano University Foundation. The Revista de Investigación e Innovación en Ciencias de la Salud provides open access to all its content under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.

Declaration of interests: The authors have declared that there is no conflict of interest.

Data availability: All relevant data is in the article. For further information, contact the corresponding author.

Financing: None. This research did not receive specific grants from funding agencies in the public, commercial, or nonprofit sectors.

Disclaimer: The content of this article is the sole responsibility of the authors and does not represent an official opinion of their institutions or of the Revista de Investigación e Innovación en Ciencias de la Salud.

Acknowledgements: We extend our gratitude to our Colombian colleagues who participated in the survey. Special thanks to Yenny Rodríguez for her efforts in creating and implementing the instrument to assess the validity of the data collection questionnaires. Additionally, we appreciate the valuable support provided by Cristian González in the statistical analysis of the data.

Contribution of the authors

Martha Peña Sanchez: Conceptualization, data curation, formal analysis, investigation, methodology, software, validation, visualization, writing - original draft, writing - review & editing.

Fernando Delprado-Aguirre: Conceptualization, data curation, formal analysis, investigation, methodology, software, validation, visualization, writing - original draft, writing - review & editing.

Received: June 20, 2023; Revised: August 05, 2023; Accepted: November 14, 2023

*Correspondence: Martha Peña Sanchez. Email: fonoaudiologiavozmarthap@gmail.com
