
Revista Colombiana de Estadística

Print version ISSN 0120-1751

Rev. Colomb. Estad. vol.44 no.2, Bogotá, Jul./Dec. 2021. Epub 01-Sep-2021

https://doi.org/10.15446/rce.v44n2.89661 

Original research article

Bayesian Multi-Faceted IRT Models for Measuring Professors' Performance in the Classroom


KAREN ROSANA CORDOBA1  a 

ALVARO MAURICIO MONTENEGRO1  b 

1 Departamento de Estadística, Facultad de Ciencias, Universidad Nacional de Colombia, Bogotá, Colombia


Abstract

Evaluations of professor performance rest on the assumption that students learn more from highly qualified professors and on the fact that students observe professor performance in the classroom. However, many studies question the methodologies used for such measurements, mainly because averages of categorical responses make little statistical sense. In this paper, we propose Bayesian multi-faceted item response theory models to measure teaching performance. The basic model takes into account effects associated with the severity of the students responding to the survey and with the courses being evaluated. This model is applied to a data set obtained from a survey of perceived professor performance conducted by the Science Faculty of the Universidad Nacional de Colombia among its students. The professor scores obtained as model outputs are real numerical values that can be used to compute the statistics commonly reported in professor evaluation; in this case, those statistics are mathematically consistent. Some of them are shown to illustrate the usefulness of the model.

Key words: Bayesian inference; Multi-faceted IRT model; Professor performance


Full text available only in PDF format
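The full text is available only as a PDF, but the model family the abstract describes, a multi-faceted IRT (Rasch-type) model in which a professor's latent score competes with the rating student's severity and a course effect, can be sketched numerically. The function below is an illustrative rating-scale formulation under assumed parameterization; it is not the paper's exact specification, and all parameter values are hypothetical.

```python
import numpy as np

def rating_probs(theta, severity, course_effect, thresholds):
    """Category probabilities for one rating under a many-facet
    rating-scale model: the logit of each category step is the
    professor score minus the student's severity, the course
    effect, and that step's threshold."""
    steps = theta - severity - course_effect - np.asarray(thresholds, dtype=float)
    # Unnormalized log-weight of category m is the sum of the first m
    # step logits; the lowest category has log-weight 0.
    log_w = np.concatenate(([0.0], np.cumsum(steps)))
    w = np.exp(log_w - log_w.max())  # subtract the max for numerical stability
    return w / w.sum()

# Hypothetical values: a professor score, a moderately harsh student,
# a course effect, and three step thresholds for a 4-category survey item.
p = rating_probs(theta=1.2, severity=0.8, course_effect=-0.3,
                 thresholds=[-1.0, 0.0, 1.0])
```

Under this sketch, raising the student's severity shifts probability mass toward the lower response categories while the professor's score stays fixed, which is the separation of facets that makes the model-based scores comparable across raters and courses.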


Received: August 2020; Accepted: June 2021

a Statistician. E-mail: krcordobap@unal.edu.co

b Associate professor. E-mail: ammontenegro@unal.edu.co

This is an open-access article distributed under the terms of the Creative Commons Attribution License.