Íkala, Revista de Lenguaje y Cultura

Print version ISSN 0123-3432

Íkala, vol. 26, no. 3, Medellín, Sep./Dec. 2021. Epub Oct 27, 2021

https://doi.org/10.17533/udea.ikala.v26n3a09 

Empirical Studies

“A Stressful Unknown” or “an Oasis”?: Undergraduate Students’ Perceptions of Assessment in an In-Class and Online English Phonetics Course

"Fuente de estrés" u "Oasis de paz": percepciones de estudiantes de pregrado sobre la evaluación en una clase de fonética inglesa presencial y en línea

« Source de stress » ou « Oasis de paix » : perceptions des étudiants sur l'évaluation dans un cours de phonétique anglaise présentiel et en ligne

Anna Czura1 

Małgorzata Baran-Łucarz2 

1Post-Doctoral Fellow, Universitat Autònoma de Barcelona, Barcelona, Spain. anna.czura@uab.cat https://orcid.org/0000-0001-5234-6618

2Assistant Professor, University of Wrocław, Wrocław, Poland. malgorzata.baran-lucarz@uwr.edu.pl https://orcid.org/0000-0001-5234-6618


Abstract

The sudden need to switch from traditional in-class instruction to online teaching and assessment due to the COVID-19 pandemic has posed considerable challenges not only to teachers, but also to learners. The mixed-methods study reported in this article compared Polish undergraduate students’ cognitive, affective, and behavioural responses to assessment provided in two practical English phonetics courses taught during an in-class fall semester and an online spring semester. The quantitative data were collected by means of an online questionnaire, which consisted of three categories of semantic differential scales referring to the cognitive, affective, and behavioural responses. The qualitative data consisted of drawings, open-ended surveys, and individual interviews with the students. The t-test results showed significant differences in students’ perceptions in terms of cognitive and behavioural aspects. The qualitative data revealed that although the students highly valued formative assessment in the course, the online mode weakened their engagement and interest in receiving feedback. It was also observed that students’ perceptions of in-class and online assessment were shaped largely by their individual differences and learning preferences. The study underscores the importance of using anxiety-lowering techniques in both in-class and online instruction, and the need to foster undergraduate students’ autonomous learning skills.

Keywords: assessment; English phonetics; students’ perceptions; online instruction; ICT; COVID-19; EFL

Resumen

La repentina necesidad de pasar de la enseñanza tradicional en las aulas a la docencia y la evaluación en línea, por cuenta de la pandemia de COVID-19, no solo ha supuesto retos considerables para los docentes, sino también para los estudiantes. El estudio de método mixto que se presenta en este artículo comparó las respuestas cognitivas, afectivas y comportamentales de estudiantes universitarios polacos a las evaluaciones impartidas en dos cursos prácticos de fonética inglesa durante un semestre en modalidad presencial y un semestre en modalidad virtual. Los datos cuantitativos se recogieron mediante un cuestionario en línea que consistió en tres categorías de escalas semánticas diferenciales. Los datos cualitativos se obtuvieron mediante dibujos, encuestas de preguntas abiertas y entrevistas individuales con los estudiantes. Los resultados de las pruebas t mostraron diferencias significativas en las percepciones de los estudiantes en términos de aspectos cognitivos y comportamentales. Los datos cualitativos revelaron que, si bien los estudiantes valoraron muy positivamente la evaluación formativa en el curso, el modo virtual debilitó su compromiso e interés en la realimentación. También se observó que las percepciones de los estudiantes sobre la evaluación presencial y virtual se vieron determinadas ampliamente por sus diferencias individuales y sus preferencias de aprendizaje. El estudio hace hincapié en la importancia de emplear técnicas para mitigar la ansiedad tanto en las clases presenciales como virtuales y la necesidad de promover las destrezas de aprendizaje autónomo en los estudiantes de pregrado.

Palabras clave: evaluación, fonética inglesa; percepciones de estudiantes; enseñanza en línea; TIC; COVID-19; inglés como lengua extranjera

Résumé

Le besoin soudain de passer de l’enseignement traditionnel en classe à l’enseignement et à l’évaluation en ligne, en raison de la pandémie de COVID-19, a posé des défis considérables non seulement pour les enseignants, mais aussi pour les étudiants. L’étude de méthode mixte présentée dans cet article a comparé les réponses cognitives, affectives et comportementales des étudiants universitaires polonais aux évaluations données dans deux cours pratiques de phonétique anglaise au cours d’un semestre présentiel et d’un semestre en mode virtuel. Les données quantitatives ont été recueillies à l’aide d’un questionnaire en ligne composé de trois catégories d’échelles sémantiques différentielles. Les données qualitatives ont été obtenues via des dessins, des enquêtes à questions ouvertes et des entretiens individuels avec les étudiants. Les résultats des tests t ont montré des différences significatives dans les perceptions des étudiants en termes d’aspects cognitifs et comportementaux. Les données qualitatives ont révélé que bien que les étudiants aient apprécié l’évaluation formative dans le cours, le mode virtuel a affaibli leur engagement et leur intérêt pour les commentaires. Il a également été observé que les perceptions des étudiants vis-à-vis de l’évaluation en face-à-face et virtuelle ont été largement déterminées par leurs différences individuelles et leurs préférences d’apprentissage. L’étude met l’accent sur l’importance d’employer des techniques pour atténuer l’anxiété dans les cours en face-à-face et virtuels et sur la nécessité de promouvoir les compétences d’apprentissage autonome chez les étudiants de premier cycle.

Mots-clés : évaluation; phonétique anglaise; perceptions des étudiants; enseignement en ligne; TIC; COVID-19; anglais langue étrangère

Introduction

Until recently, research on foreign language (FL) assessment tended to emphasise the role of the teachers - their perceptions (Sahinkarakas, 2012), the design of the assessment process, as well as the assessment tools and strategies used (Czura, 2013). Nowadays, in light of the learner-centred and person-centred approaches (cf. Jacobs & Renandya, 2016) to education and educational research, more and more attention is being paid to the role learners play in the assessment process. Current approaches to assessment based on cognitive and constructivist theories underscore the importance of learner agency (Andrade & Brookhart, 2020) and self-regulation (Zimmerman & Schunk, 2011), which is central to the ability to derive learning gains from both formative and summative assessment. The meaning learners make of both explicit and implicit presentation of teacher expectations, assessment tasks and criteria, and the form of feedback provision shapes a unique classroom assessment environment (Brookhart & DeVoge, 1999; Stiggins & Conklin, 1992), which, in turn, affects learners’ willingness to engage in a task and develop their motivation to learn (McMillan & Workman, 1998). With this in mind, it is necessary to explore learner perceptions of and reactions to the assessment they are subjected to. “Students’ points of view are windows into their reasoning” (Brooks & Brooks, 1993, p. 60), and consequently, their perspectives need to be considered in instructional planning and assessment.

Earlier studies on learners’ perceptions of assessment typically adopted a quantitative approach, employing instruments such as Dorman and Knightley’s (2006) Perceptions of Assessment Tasks Inventory (PATI) or Alkharusi’s (2011) Perceived Classroom Assessment Environment Scale. More recently, a growing number of studies have explored learner perceptions of assessment by means of qualitative research methods. For instance, Huhta et al. (2006) used oral diaries in a longitudinal study that focused on Finnish test-takers’ perceptions of a high-stakes language test. The qualitative studies that investigated learners’ views of English language assessment and the assessment-related emotions collected data through a critical incident technique (Czura, 2017), or a combination of a draw-a-picture technique and an interview in primary school (Carless & Lam, 2012) and high school settings (Xiao & Carless, 2013).

The awareness of learner perceptions of and affective response to assessment is of particular importance in times of uncertainty and rapid changes. A sudden switch from traditional in-class instruction to fully online teaching and assessment during the COVID-19 pandemic has posed considerable challenges not only to teachers, but also to learners, and may have affected their participation, performance, and attainment. The different teaching mode necessitated the introduction of new assessment strategies that would allow for evaluating learning objectives in an online environment. The study reported in this article sets out to compare Polish undergraduate students’ responses to assessment strategies used in a practical phonetics course in the 2019/20 academic year during regular in-person classes in the fall semester and online instruction introduced as an emergency measure in March 2020. The data in this mixed-methods research were collected by means of semantic differential scales that encouraged a comparative analysis in terms of cognitive, affective, and behavioural responses; an interview; a draw-a-picture technique; and an online open-ended survey.

Theoretical Framework

This section first presents several approaches to assessment in online education. It then discusses the content of pronunciation assessment and its diagnostic, formative, and summative roles.

Assessment in Online Learning

Since teacher-student interaction in online learning is mediated by computers, teaching strategies, rather than being transferred directly from traditional in-class lessons, should be adjusted to the affordances offered by this mode of communication. The planning process in online education entails considering different modes of communication (synchronous, asynchronous), the level of student engagement (Dennen et al., 2007; McLoughlin & Luca, 2002) and self-regulation (Vonderwell et al., 2007, p. 323), the lack of visual cues, and the possible occurrence of technical problems (Reeves, 2000). Consequently, as Qing and Akins (2005, p. 52) observe, “face-to-face pedagogy can and should be used to inform online pedagogy. But this in itself cannot be the driving force to designing online courses; one must consider e-pedagogy to create a successful and meaningful course.” The same reasoning should inform the design of the assessment process in terms of not only its form, but also the choice of objectives, tools, and strategies.

The implementation of computer-mediated communication (CMC) in education has opened new possibilities of efficient collaborative practice and synchronous and asynchronous communication between peers and teachers (Garrison, 1997). This constructivist turn in online education, characterised by its interactive and participatory nature, necessitated a radical change in course design, teaching, and assessment. Assessment in distance education involving CMC emphasises the role of learner-centred, formative approaches to assessment, which, by offering meaningful feedback, guide students’ learning and help them select the most efficient learning strategies.

Beebe et al. (2010) identified the main factors that affected a successful transition from in-person to online assessment: (1) efficient time management; (2) student responsibility and initiative in the assessment of learning; (3) the structure of the online medium, which involved information about course requirements and assessment deadlines; (4) the complexity of content; and (5) informal assessment, which was tightly linked to student initiative in asking for feedback. Given the role of learner independence in assessment in online learning, the subject literature also emphasises the need for authentic assessment tools (e.g., Kim et al., 2008; Lino & Thomson, 2018) and assessment that supports learner autonomy and self-regulation (Booth et al., 2003). Designing assessment in online courses should also entail listening to students’ voices. In their study on student satisfaction in online courses, Fredericksen et al. (2000) observed that students appreciate assessment forms that value student learning. “The valuing of student performance” (Fredericksen et al., 2000, p. 36) can take the form of portfolio assessment or a discussion that is not only graded, but also authentic and interactive. Student/teacher and student/student interaction was also indicated as critical to successful online learning (Fredericksen et al., 2000).

Pronunciation Assessment

Irrespective of whether pronunciation is integrated in a general FL course or taught in a course dedicated exclusively to pronunciation improvement, its teaching ought to concern three areas: productive skills, listening/discrimination abilities, and phonological competence (Derwing & Munro, 2015; Pennington & Rogerson-Revell, 2019). Given that assessment is a crucial element of the didactic process, these three areas should be systematically evaluated through diagnostic, formative assessment (FA), and summative assessment (SA) (Celce-Murcia et al., 2010; Derwing & Munro, 2015). In order to determine teaching priorities, it is generally recommended that pronunciation teaching begin with diagnosing the level of productive and discrimination skills, and the phonological competence, of each student and the group. This initial stage is of utmost importance since, as several studies show, “even experienced L2 learners seem to find it difficult to self-assess correctly their pronunciation skills” (e.g., Dlaska & Krekeler, 2008, p. 506). Furthermore, identification of the priorities supports instructors in designing the treatment and selecting appropriate assessment tools (Celce-Murcia et al., 2010; Derwing & Munro, 2015).

An analytic/atomistic rather than holistic/impressionistic evaluation is recommended (Harding, 2011) as the basis of diagnostic assessment of productive skills. It is advisable that it take the form of recording students performing tasks that allow for various degrees of speech control, such as reading short passages, sentences, and words, describing pictures, and free speech (see, e.g., Celce-Murcia et al., 2010). Following the atomistic approach, the judge (usually the teacher) identifies which particular areas of pronunciation require improvement. In summative assessment of productive skills, whether conducted after shorter or longer periods of time, the same types of tasks and criteria of assessment are suggested so as to increase the reliability of the observed progress (Celce-Murcia et al., 2010). Since both perceptive skills (Derwing & Munro, 2015) and phonological competence (Wrembel, 2003) support the ability to progress in pronunciation, they should also be properly diagnosed and developed throughout the course. Both of these abilities can be easily verified with various types of written tasks: discrimination, odd-one-out, cloze, and dictation tasks in the case of perceptive abilities, and true/false, open questions, and multiple-choice questions in the case of phonological competence.

Formative assessment is used to determine the effectiveness of instruction and provide students with immediate assistance before difficulties accumulate. It helps learners become aware of their strengths and weaknesses, and makes them more eager to implement new strategies that could facilitate their progress (Fernandes, 2011). Additionally, since it does not involve formal grading, it is less anxiety-generating (Cassady & Gridley, 2005), which is particularly important due to the highly emotional nature of pronunciation learning (Baran-Łucarz, 2014). Finally, FA can be expected to facilitate students’ progress in pronunciation, taking into account that it develops self-assessment and self-monitoring skills, and promotes autonomy (Butler & Jiyoon, 2010). Several researchers (e.g., Acton, 1984; Ricard, 1986) stress that self-directed pronunciation learning can significantly boost advancement in pronunciation. Consequently, FA should be applied daily in general FL courses, pronunciation courses, or practical phonetics courses, by means of numerous exercises (see, e.g., Celce-Murcia et al., 2010), many of which can be analogous to those used later for formal assessment. As Celce-Murcia et al. (2010) sum up, “The best tool we can provide our students is teaching them how to elicit feedback on their pronunciation from their environment and then how to make constructive use of this feedback” (p. 359).

Method

The study aimed to analyse the cognitive-affective-behavioural pathway of students’ response to in-class and online assessment in a practical phonetics course. The cognitive aspects referred to students’ positive or negative attitudes to the assessment process, its quality, perceived level of difficulty, fairness, and structure. For the second dimension, we focused on the affective response, which included the participants’ motivation, anxiety, and the general sense of contentment that the two modes of assessment evoked. Finally, we analysed the expressions of behavioural response to the assessment measures in both semesters, which involved the students’ level of active involvement and independence. In particular, we addressed the following research questions: What were the students’ attitudes to the in-class and online assessment in the course of practical phonetics? What affective response did the students experience during the in-class and online assessment? What were the students’ behavioural reactions to the two modes of assessment?

Participants

Two groups of undergraduate first-year English majors (23 students) who had just finished their two-semester course of practical phonetics (an in-class fall semester and an online spring semester) were invited to take part in the study. Although most of them showed interest and declared eagerness to participate in the project, eventually 10 students completed the questionnaire, seven of whom provided us with additional qualitative data by taking part in either an interview (n=4) or an open-ended survey (n=3). The age of the participants ranged from 19 to 24. Six of the participants were female and four were male (interviews: two females and two males; open-ended survey: two females and one male). Except for one female Ukrainian student, the group consisted of Polish students. Only one participant had taken part in a practical phonetics or pronunciation course before.

The Practical Phonetics Course

The following sections outline the aims, content, and teaching techniques, as well as the assessment procedures applied in the in-class and online course of practical phonetics.

Aims, Content and Teaching Approach

The most important aim of the first-year practical course of phonetics was to help students acquire English pronunciation at the C1 level of the Common European Framework of Reference (CEFR), which is one of the requirements of the undergraduate study programme. The course attempted to help students gain the ability to “articulate virtually all of the sounds of the target language with a high degree of control” and to “self-correct if he/she noticeably mispronounces a sound”, while controlling at the same time stress, rhythm, and intonation (Council of Europe, 2018, p. 136). Standard models of pronunciation, i.e., modern Received Pronunciation (RP) and General American (GA), constituted the points of reference, which complies with the expectations and needs of most of our students (Baran-Łucarz, 2013). The detailed course syllabus and course objectives were presented at the very beginning of each semester.

It is recommended that three main aspects be developed in a pronunciation course - phonetic and phonological knowledge/awareness, perceptive/discriminative capacities, and articulatory skills (Derwing & Munro, 2015). In regard to the first domain, the participants were expected to gain knowledge of the characteristics of English segments, the articulatory differences between the target language and the L1, suprasegmentals (particularly word stress and rhythm), basic features of connected speech, and the characteristics of RP and GA. Moreover, the ability to receptively and productively use the International Phonetic Alphabet (IPA) was systematically developed to increase phonetic competence, discrimination skills, and pronunciation of words. Finally, in both semesters, the participants’ autonomous pronunciation learning skills were gradually developed through different strategies and specific exercises to practice the articulation of particular aspects of pronunciation, perception, and transcription. The online semester additionally aimed at advancing students’ knowledge of and ability to recognize and understand native English non-standard accents.

Each 90-minute class had an analogous structure in both semesters. It would focus on one or two segments, complemented with basic information on and practice of selected suprasegmentals or aspects of connected speech. The lesson usually opened with a game-like warm-up activity, homework checking, and reading aloud words and dialogues practiced during earlier classes. This stage, though it rarely involved formal grading, allowed the teacher to monitor how much students worked individually after class. Then the articulatory features of a new sound would be introduced, followed by simple gymnastics of the articulators and practice in transcribing selected vocabulary items or phrases. Finally, repetition of words and sentences, and practice in reading dialogues filled with the new sound took place. The class would usually end with a communicative task or a relaxing game-like activity, and assigning homework. Authentic materials, such as short film excerpts and songs, were also systematically implemented. Additionally, in the online semester, volunteers were invited to prepare a short PowerPoint® presentation on a non-standard English accent.

In the fall semester (October-February), the course was conducted in the classroom. Most of the spring semester (March-June) was taught online: first via Zoom and since April via Microsoft Teams. Neither the teacher nor the students had any earlier experience of distance education.

Concerning the online classes, several difficulties were encountered, particularly at the beginning of the course. Despite the teacher’s repeated requests, only approximately 30-40 % of the students would have their cameras switched on. Since the lack of vision, unlike in a traditional class, made it difficult for the teacher to observe students’ on-the-spot reactions that could indicate their attitudes, involvement, and motivation, they were encouraged to share their opinions at the end of each class about the exercises, the materials used, and any difficulties they encountered. Only occasionally would students share their perceptions, and if so, they were usually positive. Additionally, some students were not always audible enough, which they blamed on their microphones or poor Internet connection. Finally, the pace of the online lessons seemed a bit slower due to, among other reasons, waiting longer for students’ answers, technical problems, or time spent on changing the materials shown on the screen.

While most of the in-class written exercises were conducted in pairs and then checked in unison, the online tasks were usually completed individually within a given time limit or done on the spot in lockstep to save time. Chorus repetition of words and sentences, one of the basic activities used in the classroom, was replaced in online teaching by students practicing quiet echo reading while listening together to recordings played by the teacher. This activity was used to practice proper positioning and movement of the articulators. To improve ongoing teacher/student communication, the students were also reminded about the possibility of seeing the teacher individually online or sending a message via the chat panel.

Assessment in the In-Class and Online Semesters

Following the CEFR, assessment in this course is understood as any formal and informal measures that aim to respond to students’ performance and learning process. The in-class course opened in the fall semester with a diagnosis of the students’ pronunciation level and difficulties, and a survey about their needs and targets. During individual meetings, the students carried out a few tasks, i.e., passage and word reading, picture description, and free speech. The learners’ performance was recorded and then supplemented with the teacher’s feedback. At the end of the in-class semester, a similar procedure was carried out to allow for comparison, and the performance was then graded. In another graded task, the participants were asked to prepare and imitate a fragment of their favourite movie. As before, the learners received recordings of their performance and detailed feedback.

During the in-class semester, the students took three written tests verifying their ability to use the IPA (a transcription task) and their phonological competence (true/false statements and cloze tasks). The teacher played some calm, quiet classical music in the background, a technique whose successful application for stress reduction was emphasized in Suggestopedia (Lozanov, 1982) and observed in the phonetics teacher’s earlier teaching experience (Baran-Łucarz, 2013). The students could also volunteer to prepare extra-credit exercises for their classmates. The results of the oral and written tests each constituted 50 % of the final grade.

Formal assessment during the online semester was conducted using analogous tasks and assessment criteria, and was explained to the students as soon as the transition to the online mode was confirmed. Among the graded tasks, which aimed at motivating all the students to work individually at home, was identifying selected consonants and vowels in a text of their choice. To minimize cheating, the written transcription tests and the tests on standard/non-standard English accents consisted of multiple-choice questions with a set time limit. The test was taken simultaneously by all the students, and the items were presented in a different order for each student. The students could not return to previous questions. Before the first test, the format was piloted, and as a result a few more seconds were allotted to each task. The scores for both the pilot and the actual tests were normally distributed and ranged from very high to very low, with average scores being the most frequent.
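For illustration, the sketch below (our own construction, not the platform actually used in the course) shows how such a protocol might randomize item order per student while keeping a fixed, piloted time limit; the item names, the seconds value, and the helper order_for() are hypothetical.

```python
# A minimal sketch of the anti-cheating set-up described above, under our
# own assumptions: every student receives the multiple-choice items in a
# different order, with a fixed time limit and no returning to earlier
# questions. Item names and the time limit are hypothetical.
import random

ITEMS = ["item_01", "item_02", "item_03", "item_04", "item_05"]
SECONDS_PER_ITEM = 45  # extended slightly after piloting, as in the course

def order_for(student_id: str) -> list[str]:
    """Deterministically shuffle the item order for a given student."""
    shuffled = ITEMS.copy()
    random.Random(student_id).shuffle(shuffled)  # per-student seed
    return shuffled

for student in ("s01", "s02"):
    print(student, order_for(student), f"({SECONDS_PER_ITEM}s per item)")
```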

In the final online oral test, the same types of tasks (passage reading, word reading, free speech), criteria of assessment (accuracy in segment production, word stress and rhythm, consistency in using RP or GA), and benchmarks were used as in the in-class semester. As before, the outcomes were thoroughly discussed with each student, and scans of the feedback were sent back to the students. This time, however, the oral performance constituted 60 % of the final score, and the written performance 40 %.

Some differences between the two semesters can be identified in terms of formative assessment. A mainstay of traditional classes - practicing reading dialogues aloud in pairs or small groups, during which the teacher would approach each pair and offer feedback - was replaced by students reading aloud in unison. Since the teacher considered this stressful for the learners, initially the task was performed by a few volunteers, and only after some time would other students be nominated to read aloud. The feedback was given on the class forum and was not a basis for formal assessment. Although transcription exercises were also set, not all the students were eager to share their screens or write their answers in the chat. As before, the learners were encouraged to perform optional tasks - written and oral - to receive detailed written feedback or help during online office hours. Whereas in the in-class semester many students eagerly sought additional feedback, here only three of the 23 students took this opportunity.

To promote autonomous learning, the students were encouraged to write weekly diaries with entries devoted to potential progress, effectiveness of various strategies, and feelings accompanying their pronunciation practice. Although students could get additional credits for diary reflections, none of them followed the teacher’s suggestion. Finally, at the end of the online semester, to encourage reflective practice and to help the teacher grade the students fairly, each student filled out a self-assessment sheet, which focused on the quality of individual work, actual involvement in the online course and classes, level of and progress in IPA use, and accuracy in pronunciation. Students’ evaluations had to be justified, and any further comments were invited.

Data Collection and Analysis

The data in this mixed-methods research were collected by means of an online questionnaire, which consisted of 18 sets of semantic differential scales that encouraged a comparative analysis of students’ cognitive, affective, and behavioural responses to assessment in two distinct modes of teaching. Each of these scales represented a dimension with a bipolar adjective pair (cf. Osgood, 1952) and seven points in between, e.g.,

Challenging __ __ __ __ __ __ __ Easy

Semantic differential scales are commonly used to assess “the 3-dimensional structure of objects, events, and situations” (Bradley & Lang, 1994, p. 50). In this study, two identical sets of such scales were applied to explore students’ attitudes to assessment used in a traditional in-class and an online course of practical phonetics. The instrument addressed three dimensions of students’ responses: cognitive (seven adjective pairs), affective (eight adjective pairs), and behavioural (three adjective pairs). Seven adjective pairs were framed negatively and were reverse coded prior to the analysis. For each pair of adjectives, the participants were asked to indicate the point between the two adjectives that best reflected their attitude to the assessment process in a given semester. The questionnaire was distributed as an online Qualtrics survey after the end of the spring semester in June 2020. We calculated the statistical significance of the results by means of paired-sample t-tests in SPSS.
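As a rough illustration of this procedure, the sketch below reproduces the two computational steps named above - reverse coding negatively framed items and running a paired-sample t-test - in Python rather than SPSS. All data values, the item positions in NEGATIVE_ITEMS, and the helper scale_means() are invented for demonstration only.

```python
# A minimal, illustrative sketch (not the authors' analysis script):
# reverse-code negatively framed items on a 1-7 semantic differential
# and compare per-student scale means across the two semesters with a
# paired-sample t-test, mirroring the SPSS procedure described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_STUDENTS, N_ITEMS, POINTS = 10, 18, 7
NEGATIVE_ITEMS = [2, 5, 8, 10, 13, 15, 17]  # hypothetical positions

def scale_means(raw: np.ndarray) -> np.ndarray:
    """Reverse-code negative items, then average each student's items."""
    coded = raw.copy()
    coded[:, NEGATIVE_ITEMS] = POINTS + 1 - coded[:, NEGATIVE_ITEMS]
    return coded.mean(axis=1)

# Simulated 1-7 responses: students x items, one matrix per semester.
in_class = rng.integers(1, POINTS + 1, size=(N_STUDENTS, N_ITEMS))
online = rng.integers(1, POINTS + 1, size=(N_STUDENTS, N_ITEMS))

t_stat, p_value = stats.ttest_rel(scale_means(in_class), scale_means(online))
print(f"paired t(df={N_STUDENTS - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```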

In the next step, we supplemented the quantitative data with individual interviews involving a draw-a-picture technique (cf. Kalaja & Melo-Pfeifer, 2019; Kalaja & Pitkänen-Huhta, 2018) or an online open-ended survey. The use of graphic and visual imagery in social sciences dates back to Bronisław Malinowski’s use of photographs and illustrations in his anthropological work (cf. Kalaja & Pitkänen-Huhta, 2018). In studies on FL education, hand-drawn illustrations have been successfully implemented to explore linguistic landscapes (Kalaja & Melo-Pfeifer, 2019), multilingual practices (Pitkänen-Huhta & Rothoni, 2018), and teacher and learner beliefs and perceptions (Chik, 2018).

Shortly before the interview, the participants were requested to draw two pictures and formulate corresponding captions on the basis of their thoughts, experiences, understandings of, and attitudes towards assessment in each semester. When a participant’s attitudes to assessment in the two semesters did not differ, one picture was sufficient. We then asked the participants to elaborate on the pictures at the beginning of the interviews, which lasted approximately 30 minutes each and were carried out by means of a video-conference tool. The remaining interview questions touched upon the participants’ perceptions of the strong and weak points of the assessment process in both semesters, and their investment in completing the obligatory and optional assignments. The open-ended survey contained the same questions as the interview, except for the draw-a-picture technique, and was introduced as an emergency measure given the small number of interview participants. The participants could decide whether they preferred to be interviewed by their practical phonetics teacher (the second author) or an exterior researcher (the first author). Prior to the data collection, the participants granted their informed consent, which included explicit permission to use the pictures they produced for the purpose of research analysis and in scholarly publications.

The interviews were then transcribed, anonymized, and pooled together with the survey data. The qualitative data, including the pictures drawn by students, were content analysed on the basis of the three dimensions that informed the structure of the quantitative questionnaire: cognitive, affective, and behavioural responses. These three categories, together with the constituent adjective pairs, were used as a framework for deductive qualitative data analysis. First, we coded the data individually, adding any emerging codes when necessary. In the next step, we compared the findings and agreed on the final coding of the data.
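To make the deductive frame concrete, here is a small sketch (our own, not the authors’ coding instrument) organising adjective pairs under the three dimensions; the pair lists are abbreviated and partly drawn from pairs named in the Results, and the keyword-matching helper code_excerpt() is purely hypothetical - the actual coding was done manually by both researchers.

```python
# Illustrative deductive coding frame (our construction): the three
# response dimensions are top-level categories and the adjective pairs
# are constituent codes. The matching below is a naive stand-in for the
# manual coding the authors actually performed.
CODING_FRAME = {
    "cognitive": ["clear-confusing", "fair-unfair", "challenging-easy"],
    "affective": ["confident-shy", "cheerful-frustrated", "happy-angry"],
    "behavioural": ["active-passive", "independent-imposed"],
}

def code_excerpt(excerpt: str, frame: dict[str, list[str]]) -> list[str]:
    """Tag an excerpt with every code whose adjectives it mentions."""
    text = excerpt.lower()
    return [
        f"{dimension}:{pair}"
        for dimension, pairs in frame.items()
        for pair in pairs
        if any(adjective in text for adjective in pair.split("-"))
    ]

print(code_excerpt("I felt shy and passive during the online tests", CODING_FRAME))
# -> ['affective:confident-shy', 'behavioural:active-passive']
```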

Results

The presentation of the results starts with the quantitative analysis of the data derived from the online questionnaires. In the second step, we attend to the data collected by means of the interview and draw-a-picture technique in reference to the three types of student response: cognitive, affective, and behavioural.

Quantitative Data

Table 1 presents the descriptive statistics illustrating the comparison of students’ perceptions of the in-class and online assessment. On the whole, the perceptions of the assessment process in both the in-class and online semesters were positive. Except for the ‘lenient-strict’, ‘happy-angry’, and ‘independent-imposed’ pairs, the responses in reference to the second semester tended to lean more towards the negatively phrased adjectives.

Table 1 Students’ Cognitive, Affective, and Behavioural Reactions to In-Class and Online Assessment 

Starting from the cognitive dimension, the students rather unanimously described the in-class assessment process as clear, fair, and worthwhile. The evaluations of these categories deteriorated in the online semester, whereas the values of standard deviation (SD) increased. In the affective dimension, except for the ‘happy-angry’ pair, which remained on the same level, the evaluations of the online assessment were more negative in comparison with the first semester, especially with regard to the level of stress and self-confidence. Additionally, the relatively high SD values in the evaluations of the online assessment suggest great variability in students’ responses, which is particularly visible in the ‘confident-shy’ and ‘cheerful-frustrated’ pairs. In the behavioural dimension, although students indicated that online assessment entailed more independence, they described it as more passive. The analysis of SD values indicates that the students tended to be less unanimous in their reactions to the online assessment than to the one conducted in class.

Since the assumption of normal distribution was met, paired t-tests were calculated for the whole scale, the constituent subscales, and the individual items to determine whether the students’ attitudes to the two modes of assessment were statistically different. With the alpha level set at 0.05, a significant difference was established between the participants’ overall attitudes to in-class vs. online assessment (df=9; t=-3.13; p=0.012), as well as in terms of the cognitive (df=9; t=3.07; p=0.013) and behavioural responses (df=9; t=-2.492; p=0.034). Similar analyses for individual questionnaire items revealed statistical significance in the following adjective pairs: ‘demotivating-motivating’ (t=-2.46, p=.015), ‘active-passive’ (t=-2.34, p=.023), and ‘confident-shy’ (t=3.28, p=.01).
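As a sanity check on these figures (our own verification, not part of the original analysis), the subscale p-values can be recovered from the reported t statistics and df alone; with df = 9, t = 3.07 yields a two-tailed p of about 0.013, which is why the cognitive result reads as significant at the 0.05 level.

```python
# Recompute the two-tailed p-values implied by the reported subscale
# t statistics, with df = 9 (n = 10 paired observations).
from scipy import stats

REPORTED = {"overall": -3.13, "cognitive": 3.07, "behavioural": -2.492}
for subscale, t in REPORTED.items():
    p = 2 * stats.t.sf(abs(t), df=9)  # two-tailed p from Student's t
    print(f"{subscale:>11}: t = {t:+.3f}, p = {p:.3f}")
# overall ~0.012, cognitive ~0.013, behavioural ~0.034 - matching the text.
```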

Qualitative Data

The qualitative data analysis was structured around the three main categories of students’ responses: cognitive, affective, and behavioural. For the sake of brevity, only the most representative pictures drawn by participants are presented.

Fall Semester - In-Class Course

In terms of the cognitive dimension, Participant 1 [P1] entitled his/her drawing “The Working Mechanism” (Figure 1) and explained that “a rope has a beginning and an end. (…) so the whole rope for me was the first semester which was, well, essentially how it was supposed to be.”

Figure 1 “The Working Mechanism” - A Drawing Illustrating In-Class Assessment [P1]

In their evaluations of the assessment process, most of the participants highlighted that the assessment was fair, which was strengthened by the fact that two out of four illustrations were entitled “Fairness” (one presented in Figure 2). It was emphasised that the assessment was clear, transparent, and involved no ambiguity. All the participants believed that the grades reflected their engagement and/or skills. As P5 pointed out, “my final grade reflected my learning efforts because I received quite a high grade which, in my opinion, fully corresponds to the progress I made.” In the statement “in this course I got the real feedback”, P2 underlined that, unlike in some other courses, feedback in the phonetics course was detailed and offered in-depth comments that helped students improve their competences and track progress. One student drew an open door to illustrate that the assessment process would “open [his] eyes to new perspectives and new horizons,” and new possibilities of developing pronunciation skills. On the other hand, P7 perceived the content of assessment as excessively challenging as he/she would rather focus on the basics.

Figure 2 “Fairness” - A Drawing Illustrating In-Class Assessment [P4]

Regarding the affective aspects, the participants expressed predominantly positive feelings towards the subject, the teacher, and the assessment. They underlined that they felt calm and peaceful throughout the process. In the participants’ opinions, the tension was minimised by ongoing, continuous assessment based on clear rules and criteria, individual or group feedback sessions, and such anxiety-reducing techniques as playing music during tests. As P2 explained, “I had this feeling and later I discussed it with my colleagues [that] we went into the assessment and before we knew it, it was already over.” In contrast, for one person, who considered assessment challenging (see above), it was the source of embarrassment, whereas two persons voiced critical opinions about the written phonetic transcription test which P2 described as “less important, (…) inferior to the oral part, which in my mind was like the real part.”

A few students completed some additional assignments, encouraged by the positive atmosphere, an interesting and creative task, the perceived usefulness of feedback, and the clarity of assessment. P5 noted that such tasks served as a driving force “in a time of my laziness.” However, other students admitted that it was the grade that motivated them to engage in extra activities or do homework: “an extra grade is always welcome [laughter] in a positive way” [P3]. Even though P1 was aware of the value of the additional activities, she/he failed to do them: “[it was] an encouragement to try some tasks or try some exercises at home, which would improve one’s pronunciation and I didn’t do those.” Such optional tasks were juxtaposed with the obligatory homework exercises, which P1 did “because when we [the class] meet on weekly basis, you’re constantly, you have more invigilation.” She/he further explained: “In class you’re naked. You can’t really hide anything, so you know everything will be checked, so therefore, well, at least I did motivate myself to do more things.”

Spring Semester - Online Course

Most of the participants summed up the assessment positively, saying, for example: “I don’t have any major complaints when it comes to assessment. I didn’t see much difference in assessment” [P3]. It seems, however, that many shared the opinion of P1, who used the term “impaired mechanism” in reference to online assessment. As in the case of the in-class semester, the participant again drew a rope (see Figure 3) and explained:

it’s also a rope. It also has a beginning and end, but we can see that it’s being torn apart. We can see that the rope is very tense. It’s likely to be damaged. So there were some difficulties on the way.

Figure 3 “The Impaired Mechanism” - A Drawing Illustrating Online Assessment [P1]

We will start with the cognitive evaluations of assessment, which at the same time led to several emotional and behavioural reactions. Regarding the clarity of online assessment, most of the participants claimed it was well-organised and clear. As P2 put it, it “was very clear, very well-explained, (…), nothing surprising,” which contributed to their sense of security. This is reflected in P2’s picture of an oasis (see Figure 4), on which he commented: “the phonetics assessment was sort of an oasis (…). It was like a safe haven that I knew I could, sort of, rely on.” The student explained the feeling of security by referring again to the lack of ambiguity in assessment. As he put it, “I knew that this assessment would be according to some standard (…), it was familiar to me. I knew from the previous semester what to work on, the form was quite similar, so it was all known and sort of safe. It was a peaceful place amongst the chaos.”

Figure 4 “Oasis” - A Drawing Illustrating Online Assessment [P2]

P4 had a different impression, stating as follows: “the second semester, as much as I really enjoyed the class, was a bit of (…) an unknown when it came to the final grade.” This was further illustrated in a drawing representing a question mark and entitled “A stressful unknown.” All the participants claimed the final grade they received at the end of the online semester was fair (e.g., “Yeah, I think I received a grade that reflected my stage of pronunciation skills” [P1]). Some students, however, had the feeling that it was their effort rather than their skills that determined the final grade (“In fact I think the final mark reflected the amount of work and effort I put this semester” [P3]). Others, e.g., P2, believed that “perhaps the grade was more reflective of the skills and less reflective of the actual (…) effort.”

The problem of fairness was also indicated by most of the students in reference to the online written tests. Many of them complained that the time for providing the responses was too short, which, as they thought, did not allow them to show their actual knowledge (“I was feeling a bit unfair because I knew I was prepared and still I wouldn’t receive max points because of just the time” [P1]) and made them stressed. According to some students, the fairness of the written transcription tests was also negatively affected by their preference for pen-and-paper rather than online tests, and open-ended rather than multiple-choice tests: “seeing it [transcriptions] on the computer is a bit different than transcribing by me. It’s easier for me with the IPA to write it down with my hand,” said P4. On the other hand, P3 commented on the general weaknesses of multiple-choice tests, claiming as follows: “I think this could not have been avoided, but the grades for this part did not represent the reality too well, […] but rather assessed the ability to shoot.” Finally, students’ doubts about the reliability of the online test were also strengthened by their anxiety related to encountering technical problems while writing. Student P1 explained as follows: “…it was much more tense for me because I was worried not only about my skills in transcription, but also about the condition of my Internet. […] So it was like extra baggage on the test.”

However, further analyses of students’ responses revealed that the feeling of security during online classes, which some considered even higher than during in-class meetings, was due to yet another factor. As P1 explained, he/she did not do the homework as “it’s easier to hide behind the computer (...), hide things we didn’t do.” She/he added that it is different in class, where “you feel ashamed in a way for not doing them [homework assignments] and you feel responsible in a way because that’s your fault.” Participant P2 also confessed that she/he rarely did homework assignments not only because, studying in two faculties, she/he did not have time, but also because they were not graded. Similarly, the assurance of being secure, of the teacher not punishing the students with fail grades for not having done the homework, allowed P4 to remain more passive and not to do some homework or optional tasks: “We didn’t have to send them (…). We just had to have them with us. I sometimes didn’t do them just because I would forget, or I wouldn’t have the motivation to do them, have the stress or fear that if I don’t do them, it’s going to be something bad.” Since many of the homework assignments were voluntary, it was up to the students whether they would receive qualitative feedback from the teacher. Although P1 considered him/herself an autonomous learner, it is clear that he/she failed to understand that seeking feedback on one’s own performance (cf. Cotterall, 2000) is an important element of learner autonomy:

some people will ask for the feedback and some people will not and I think I belong rather to the second group of the people that I [sic] like to work autonomously and I’m kind of scared of remarks sometimes.

So when someone’s not giving me remarks directly, I’m not willing pretty [sic] to ask for them, so I probably did receive less feedback. [P1]

Some students, however, especially those who were more grade-oriented (e.g., P3), were still active and motivated to do most of the optional tasks. Moreover, it turned out that this was not only due to extrinsic motivation, but also because certain exercises were considered particularly worthwhile for them. Among these exercises were the presentations on various accents or writing dialogues filled with particular sounds, which “was simply a good, creative, challenging task” that allowed the students to “further develop” (P3).

As regards other emotional reactions, participant P4 expressed feeling weird and unnatural during online assessment: “it wasn’t as natural and easygoing as it was in the first semester, because not everyone had the chance to speak up and be heard properly.” Although she/he regarded her/himself as the most active student, she/he still had the feeling of not having practised enough: “I knew I was speaking the most out of the class because I try to be active, but it was one minute per week or even less. It was a bit weird and stressful.” On the other hand, she/he enjoyed taking assessment at home: “I think the assessment online, the final speaking exam was less stressful. You can sit in your own chair, wear anything you want and still be comfortable. […]. But I just think if it wasn’t for the pandemic, I would love to get back to the institute.” Student P5 shared another affective and behavioural response to online on-the-spot formative assessment, stressing that “feedback of tasks couldn’t be as fast as in the case of normal classes” and that she/he favoured formative assessment provided to her/him individually: “it was complicated for me to speak and discuss my mistakes because I felt that everyone has to hear it and spend their time on it instead of doing something more useful.” On the other hand, P7 acknowledged being happy about not having “direct contact with other students,” for it made her/him less stressed. Although a few more students claimed they enjoyed working individually, since they could “talk a lot to themselves” [P7] and freely organize their time and work [P1, P2, P3], some found it difficult to motivate themselves to “do anything due to not being able to communicate in real life with others and spending almost all the time at home” [P5] or found individual oral practice particularly strange and unnatural (“it’s a bit weird practicing by myself in my room during the night” [P4]). Finally, one of the participants strongly stressed the value of self-assessment (see Figure 5), which, according to her/him, was a new worthwhile experience, placing the learner in the centre and encouraging reflection on how much progress had been made and what still needed to be worked on to more successfully direct future work.

Figure 5 “Student Contribution” - A Drawing Illustrating Online Assessment [P2]

Discussion

The quantitative data indicate that the participants held predominantly positive perceptions of assessment in both the in-class and online teaching modes; however, in most cases their evaluations of being assessed online leaned towards the negative side of the spectrum. This was confirmed by the statistical calculations, which revealed a significant difference in the students’ overall perceptions of the in-class and online assessment, as well as in their cognitive and behavioural responses.

The general attitudes towards the assessment process in both semesters expressed during the interviews were also largely positive. The students viewed it as well-organised, fair, clear, adjusted to the level of the students, and detailed. Notably, in their evaluations of the assessment process, the students underlined the value of ongoing, continuous feedback and formative forms of assessment.

However, a deeper analysis of the pictures and responses indicated that some aspects of online assessment were not perceived as sufficiently fair. From the perspective of the participants, the fairness of the written tests was distorted by the very form of the test (multiple choice), the anxiety generated by the short time limit for providing responses, and the anticipation of technical problems while taking the test. The students voiced such perceptions despite the fact that the test was piloted, which gave them a chance to familiarise themselves with and voice their opinion about the test type. Moreover, the test results were normally distributed, which suggests a balanced spread of high, average, and low scores. This poses the question of how to collect evidence of learning certain content (here, phonetic transcription and phonological competence), which is normally verified in the form of a written test, in a fair and stress-free way in an online form. This may prove difficult if we want to avoid resorting to proctoring, which raises a number of privacy-related, environmental, and psychological concerns (Kharbat & Abu Daabes, 2021). This question becomes even more pertinent in grade-oriented contexts, in which students tend to express positive attitudes to cheating (Chudzicka-Czupała et al., 2013). It also seems that learners need adequate training in how to effectively manage online assessment and the emotions it evokes.

It appears that the participants were very much aware that their attitudes to assessment were shaped by their individual differences and learning preferences. As student P1 put it, “It’s really about individuals.” Indeed, it appears that several learner-based factors - their self-perceived and actual levels of pronunciation, initial level of anxiety, preferences for different learning strategies, and probably also personality - affected their perceptions of assessment. These individual differences could also explain the lack of confidence experienced by some participants during the formative assessment of oral performance provided during online lessons. What supports such a claim is the statistically significant difference in the confident-shy subcategory of the affective responses, with shyness being higher in the online mode. The highly diversified preferences of learners call for a variety of approaches to presentation, practice, and assessment in the two modes of learning.

The analysis of the quantitative data suggests that students’ affective reaction to the different modes of assessment did not change. This may be attributed to the wide range of teaching techniques the teacher consciously introduced to create a positive atmosphere and reduce the anxiety level, such as individualised feedback, music during tests, and voluntary activities. On the other hand, a more detailed analysis of this subscale implies significant differences in the items referring to students’ perceived motivation and self-confidence. Therefore, a more in-depth exploration of the affective domain in the online environment is needed.

Both the quantitative and qualitative data corroborate certain changes in students’ behavioural response. Some participants, particularly those who were grade-oriented, used the possibility of “hiding” behind the screen during online classes as an excuse not to do regular homework assignments. At the same time, despite considering themselves autonomous and valuing detailed formative assessment, they did not take advantage of the possibility of receiving systematic feedback from the teacher, which, as stressed by Celce-Murcia et al. (2010), they could have made constructive use of.

These findings confirm those of numerous studies indicating that student engagement, understood here as students’ willingness to interact with the teacher, peers, and the course content, is central to student learning; this is even more pronounced in distance education, which entails a feeling of isolation and disconnection from the group (Dennen et al., 2007; Robinson & Hullinger, 2008). Following the assertion made by Vonderwell et al. (2007, p. 323) that “assessment as a process requires that online learning activities facilitate self-assessment, peer-assessment, self-regulatory mechanisms, and learner autonomy,” the assessment tasks in the phonetics course encouraged self-reflection and participatory practice. Nevertheless, not all students were interested in engaging in such activities. The emergency, unexpected, and rather abrupt introduction of distance learning during the COVID-19 pandemic was caused by the external situation and was not the mode of learning the students had signed up for; however, it revealed an urgent need to foster learner autonomy and self-regulation in students at the undergraduate level. Given that student engagement and participation had a direct impact on the effectiveness of the teaching and assessment processes, further studies and training opportunities that focus on student motivation and teacher motivational strategies in online classes are called for. The study findings also confirm earlier assertions (e.g., Qing & Akins, 2005) that although in-class pedagogy, especially successful in-class pedagogy, should inform the instructional planning and implementation of online education, it is necessary to take into account both the affordances and constraints of learning and teaching in online environments.

The students’ propensity to hide behind a computer screen and remain inactive was also reflected in their unresponsiveness to invitations to the present study. Despite the initial declarations the students made during one-on-one online sessions with teachers, in the end only a fraction of the students decided to participate in the project. The poor responsiveness of the students in this online study was somewhat surprising, given our positive experiences of data collection in the same institution in the past. This observation points to a more general problem of conducting online research, and a further search for effective and efficient ways of gathering data online is necessary. Consequently, although the study enriches our understanding of how learners view in-class and online assessment, caution is needed in drawing clear-cut conclusions. Additionally, we are aware that the comparison of students’ perceptions of the two modes of assessment would have been more accurate if the data concerning the in-class assessment had been collected directly after the end of the fall semester. However, it must be taken into account that the research design and data collection took place in a time of unprecedented uncertainty and the exceptionally heavy workload that the transition from in-class to online learning entailed for both the researchers and the participants.

Considering the methodological choices in the present study, we believe that the three instruments complemented each other and enabled both quantitative and qualitative data analysis and a discussion centred around the three types of student response. Of note is that the participants’ commentary on their illustrations during the interviews proved essential for fully understanding the visual conceptualisations and the metaphors they used. Finally, the draw-a-picture technique appeared to appeal to the interview participants, who eagerly submitted their illustrations and elaborated on their content.

Conclusions

The present study indicates that despite university students’ predominantly positive affective response to the transition from traditional in-class to online assessment in a practical phonetics course, there was a marked contrast in their cognitive and behavioural responses, shaped mainly by learner autonomy, agency, motivation, and individual learning styles and, to a lesser extent, by anxiety and technological limitations. Whereas the time management, content, and structure of the assessment, except for an online test with strict time limits, did not raise the participants’ concerns, it appears that students did not fully benefit from the informal and formative forms of assessment offered by the instructor. In light of students’ rather passive participation in the online classes and tasks, there is a need to implement a wider, systemic approach to fostering learner autonomy at the undergraduate level, as well as to introduce more regular interim measures for eliciting student work in online classes.

Acknowledgements

We wish to express our deepest gratitude to all the participants who took part in the study. We are most grateful to them for sacrificing their free time, spent once again in front of their screens, to provide us with all the necessary information. It is only thanks to their involvement that we have been able to shed more light on the matters presented in this paper. Many thanks!

References

Acton, W. (1984). Changing fossilized pronunciation. TESOL Quarterly, 18(1), 71-85. https://doi.org/10.2307/3586336

Alkharusi, H. (2011). Development and datametric properties of a scale measuring students’ perceptions of the classroom assessment environment. International Journal of Instruction, 4(1), 105-120. https://doi.org/10.1037/t03442-000

Andrade, H. L., & Brookhart, S. M. (2020). Classroom assessment as the co-regulation of learning. Assessment in Education: Principles, Policy and Practice, 27(4), 350-372. https://doi.org/10.1080/0969594X.2019.1571992

Baran-Łucarz, M. (2013). Phonetics learning anxiety: Results of a preliminary study. Research in Language, 11(1), 57-79. https://doi.org/10.2478/v10015-012-0005-9

Baran-Łucarz, M. (2014). The link between pronunciation anxiety and willingness to communicate in the foreign language classroom: The Polish EFL context. Canadian Modern Language Review/La Revue canadienne des langues vivantes, 70(4), 445-473. https://doi.org/10.3138/cmlr.2666

Beebe, R., Vonderwell, S., & Boboc, M. (2010). Emerging patterns in transferring assessment practices from F2F to online environments. Electronic Journal of e-Learning, 8(1), 1-12.

Booth, R., Clayton, B., Hartcher, R., Hungar, S., Hyde, P., & Wilson, P. (2003). The development of quality online assessment in vocational education and training: Vol. 1. Australian National Training Authority.

Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49-59. https://doi.org/10.1016/0005-7916(94)90063-9

Brookhart, S. M., & DeVoge, J. G. (1999). Testing a theory about the role of classroom assessment in student motivation and achievement. Applied Measurement in Education, 12(4), 409-425. https://doi.org/10.1207/S15324818AME1204_5

Brooks, J., & Brooks, M. (1993). In search of understanding: The case for constructivist classrooms. Association for Supervision and Curriculum Development.

Butler, Y., & Lee, J. (2010). The effects of self-assessment among young learners of English. Language Testing, 27(1), 5-31. https://doi.org/10.1177/0265532209346370

Carless, D., & Lam, R. (2012). The examined life: Perspectives of lower primary school students in Hong Kong. Education 3-13, 42(3), 313-329. https://doi.org/10.1080/03004279.2012.689988

Cassady, J. C., & Gridley, B. E. (2005). The effects of online formative and summative assessment on test anxiety and performance. The Journal of Technology, Learning and Assessment, 4(1). https://ejournals.bc.edu/index.php/jtla/article/view/1648

Celce-Murcia, M., Brinton, D. M., & Goodwin, J. M. (2010). Teaching pronunciation: A reference for teachers of English to speakers of other languages (2nd ed.). Cambridge University Press.

Chik, A. (2018). Beliefs and practices of foreign language learning: A visual analysis. Applied Linguistics Review, 9(2-3), 307-331. https://doi.org/10.1515/applirev-2016-1068

Chudzicka-Czupała, A., Lupina-Wegener, A., Borter, S., & Hapon, N. (2013). Students’ attitude toward cheating in Switzerland, Ukraine and Poland. The New Educational Review, 32, 66-76.

Cotterall, S. (2000). Promoting learner autonomy through the curriculum: Principles for designing language courses. ELT Journal, 54(2), 109-117. https://doi.org/10.1093/elt/54.2.109

Council of Europe. (2018). Common European framework of reference for languages: Learning, teaching, assessment. Companion volume with new descriptors. https://rm.coe.int/CEFR-companion-volume-withnew-descriptors-2018/1680787989

Czura, A. (2013). The role of peer-assessment in developing adolescent learners’ autonomy. Baltic Journal of English Language, Culture and Literature, 3, 20-32.

Czura, A. (2017). Adolescent learner perceptions of foreign language assessment: Critical incident analysis. Glottodidactica, 44(2), 25-39. https://doi.org/10.14746/gl.2017.44.2.02

Dennen, V. P., Darabi, A. A., & Smith, L. J. (2007). Instructor-learner interaction in online courses: The relative perceived importance of particular instructor actions on performance and satisfaction. Distance Education, 28(1), 65-79. https://doi.org/10.1080/01587910701305319

Derwing, T. M., & Munro, M. J. (2015). Pronunciation fundamentals: Evidence-based perspectives for L2 teaching and research. John Benjamins. https://doi.org/10.1075/lllt.42

Dlaska, A., & Krekeler, C. (2008). Self-assessment of pronunciation. System, 36(4), 506-516. https://doi.org/10.1016/j.system.2008.03.003

Dorman, J. P., & Knightley, W. M. (2006). Initial use of the Perceptions of Assessment Tasks Inventory (PATI) in English secondary schools. Alberta Journal of Educational Research, 52(3), 196-199. https://doi.org/10.1037/t68629-000

Fernandes, D. (2011). Avaliar para melhorar as aprendizagens: Análise e discussão de algumas questões essenciais [Evaluate to improve learning: Analysis and discussion of some key issues]. In I. Fialho & H. Salgueiro (Eds.), Turma mais e sucesso escolar: Contributos teóricos e práticos (pp. 81-107). Centro de Investigação em Educação e Psicologia da Universidade de Évora.

Fredericksen, E., Pickett, A., Shea, P., Pelz, W., & Swan, K. (2000). Student satisfaction and perceived learning with online courses: Principles and examples from the SUNY learning network. Journal of Asynchronous Learning Networks, 4(2), 7-41. https://doi.org/10.24059/olj.v4i2.1899

Garrison, D. R. (1997). Computer conferencing: The post-industrial age of distance education. Open Learning, 12(2), 3-11. https://doi.org/10.1080/0268051970120202

Harding, L. (2011). Accent and listening assessment: A validation study of the use of speakers with L2 accents on an academic English listening test. Peter Lang.

Huhta, A., Kalaja, P., & Pitkänen-Huhta, A. (2006). The discursive construction of a high-stakes test: The many faces of a test-taker. Language Testing, 23(3), 326-350. https://doi.org/10.1191/0265532206lt331oa

Jacobs, G. M., & Renandya, W. A. (2016). Student-centred learning in ELT. In W. A. Renandya & H. Widodo (Eds.), English language teaching today: Linking theory and practice (pp. 13-24). Springer. https://doi.org/10.1007/978-3-319-38834-2_2

Kalaja, P., & Melo-Pfeifer, S. (Eds.). (2019). Visualising multilingual lives: More than words. Multilingual Matters. https://doi.org/10.21832/9781788922616

Kalaja, P., & Pitkänen-Huhta, A. (2018). ALR special issue: Visual methods in applied language studies. Applied Linguistics Review, 9(2-3), 157-176. https://doi.org/10.1515/applirev-2017-0005

Kharbat, F. F., & Abu Daabes, A. S. (2021). E-proctored exams during the COVID-19 pandemic: A close understanding. Education and Information Technologies. https://doi.org/10.1007/s10639-021-10458-7

Kim, N., Smith, M. J., & Maeng, K. (2008). Assessment in online distance education: A comparison of three online programs at a university. Online Journal of Distance Learning Administration, 11(1). https://www.westga.edu/~distance/ojdla/spring111/kim111.html

Iino, A., & Thomson, R. I. (2018). Effects of web-based HVPT on EFL learners’ recognition and production of L2 sounds. In P. Taalas, J. Jalkanen, L. Bradley, & S. Thouësny (Eds.), Future-proof CALL: Language learning as exploration and encounters - Short papers from EUROCALL 2018 (pp. 106-111). Research-publishing.net. https://doi.org/10.14705/rpnet.2018.26.821

Lozanov, G. (1982). Suggestology and suggestopedia. In R. W. Blair (Ed.), Innovative approaches to language teaching (pp. 146-159). Newbury House Publishers.

McLoughlin, C., & Luca, J. (2002). A learner-centered approach to developing team skills through web-based learning and assessment. British Journal of Educational Technology, 33(5), 571-582. https://doi.org/10.1111/1467-8535.00292

McMillan, J. H., & Workman, D. J. (1998). Classroom assessment and grading practices: A review of the literature. Metropolitan Educational Research Consortium. (ERIC Document Reproduction Service No. ED453263)

Osgood, C. (1952). The nature and measurement of meaning. Psychological Bulletin, 49, 172-237. https://doi.org/10.1037/h0055737

Pennington, M., & Rogerson-Revell, P. (2019). English pronunciation teaching and research: Contemporary perspectives. Palgrave Macmillan. https://doi.org/10.1057/978-1-137-47677-7

Pitkänen-Huhta, A., & Rothoni, A. (2018). Visual accounts of Finnish and Greek teenagers’ perceptions of their multilingual language and literacy practices. Applied Linguistics Review, 9(2-3), 333-364. https://doi.org/10.1515/applirev-2016-1065

Qing, L., & Akins, M. (2005). Sixteen myths about online teaching and learning: Don’t believe everything you hear. TechTrends, 49(4), 51-60. https://doi.org/10.1007/BF02824111

Reeves, T. C. (2000). Alternative approaches for online learning environments in higher education. Journal of Educational Computing Research, 23(1), 101-111. https://doi.org/10.2190/GYMQ-78FA-WMTX-J06C

Ricard, E. (1986). Beyond fossilization: A course on strategies and techniques in pronunciation for advanced adult learners. TESL Canada Journal, Special Issue, 1, 243-253. https://doi.org/10.18806/tesl.v3i0.1009

Robinson, C. C., & Hullinger, H. (2008). New benchmarks in higher education: Student engagement in online learning. Journal of Education for Business, 84(2), 101-109. https://doi.org/10.3200/JOEB.84.2.101-109

Sahinkarakas, S. (2012). The role of teaching experience on teachers’ perceptions of language assessment. Procedia - Social and Behavioral Sciences, 47, 1787-1792. https://doi.org/10.1016/j.sbspro.2012.06.901

Stiggins, R. J., & Conklin, N. F. (1992). In teachers’ hands: Investigating the practices of classroom assessment. SUNY Press.

Vonderwell, S., Liang, X., & Alderman, K. (2007). Asynchronous discussions and assessment in online learning. Journal of Research on Technology in Education, 39(3), 309-328. https://doi.org/10.1080/15391523.2007.10782485

Wrembel, M. (2003). An empirical study on the role of metacompetence in the acquisition of foreign language phonology. In M.-J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS) (pp. 985-988). Universidad Autónoma de Barcelona; International Phonetic Association.

Xiao, Y., & Carless, D. R. (2013). Illustrating students’ perceptions of English language assessment: Voices from China. RELC Journal, 44(3), 319-340. https://doi.org/10.1177/0033688213500595

Zimmerman, B., & Schunk, D. (Eds.). (2011). Handbook of self-regulation of learning and performance. Routledge.

*This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement N.° 845783.

**How to cite this article: Czura, A., & Baran-Łucarz, M. (2021). “A stressful unknown” or “an oasis”?: Undergraduate students’ perceptions of assessment in an in-class and online English phonetics course. Íkala, Revista de Lenguaje y Cultura, 26(3), 623-641. https://doi.org/10.17533/udea.ikala.v26n3a09

Received: March 08, 2021; Accepted: June 21, 2021

This is an open-access article distributed under the terms of the Creative Commons Attribution License.