Content validity evidence for the Enade psychology exam
DOI: https://doi.org/10.18222/eae.v29i72.4897

Keywords: Validity, Exame Nacional de Desempenho dos Estudantes (Enade), Blueprint, Cognitive Demand

Abstract
This study analyzes content-based validity evidence for the psychology exam administered in the 2015 Exame Nacional de Desempenho dos Estudantes (Enade). The analysis used the blueprint, a tool recommended by the international literature to support test planning. There was significant divergence between the competencies and skills listed in the reference matrix and the cognitive demand required by most of the objective items. Some skills in the matrix were not covered by the exam at all, and others were covered by a single item, which compromises the reliability of the measure. The analyses showed the need for techniques that can improve this primary source of validity evidence.
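The coverage check described above — tallying how many items target each skill in the reference matrix and flagging skills that are absent or represented by a single item — can be sketched in a few lines. The skill labels and the item-to-skill mapping below are purely illustrative placeholders, not taken from the actual Enade 2015 psychology exam or its reference matrix.

```python
# Hypothetical sketch of a blueprint coverage check: count items per
# matrix skill and flag gaps. All labels below are illustrative only.
from collections import Counter

matrix_skills = ["assessment", "intervention", "research", "ethics", "management"]

# Each objective item is tagged with the matrix skill it targets.
item_skills = ["assessment", "assessment", "intervention", "research",
               "assessment", "intervention", "research", "ethics"]

counts = Counter(item_skills)
for skill in matrix_skills:
    n = counts.get(skill, 0)
    if n == 0:
        status = "not covered by the exam"
    elif n == 1:
        status = "single item (reliability concern)"
    else:
        status = "covered"
    print(f"{skill}: {n} item(s) - {status}")
```

With this toy mapping, "management" would be flagged as uncovered and "ethics" as measured by only one item, mirroring the kinds of matrix-exam mismatches the study reports.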
License
Copyright (c) 2018 Girlene Ribeiro de Jesus, Renata Manuelly de Lima Rêgo, Victor Vasconcelos de Souza

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
a. Authors retain copyright and grant the journal the right of first publication.
b. All works are licensed under the Creative Commons Attribution License (CC BY 4.0), which permits sharing of the work with attribution.