Investigation into the Transformation of Knowledge-Centered Pedagogy with ChatGPT/Generative AI
Published online: 2023-06-25
This paper explores the transformative role of ChatGPT in the knowledge-concept-centered teaching model. As a language generation model, ChatGPT learns from massive amounts of language data to mine co-occurrence relationships among words, enabling deep language comprehension and innovative combination. In education, however, ChatGPT faces limitations such as over-reliance on its training data, weak logical reasoning, and a limited ability to handle novel scenarios. To overcome these limitations and improve the accuracy and relevance of the content ChatGPT generates, this paper proposes organically combining ChatGPT with the knowledge-concept-centered organization of teaching resources, and improving ChatGPT by constructing structure diagrams of knowledge concepts. It also proposes several concrete, feasible ways for ChatGPT to assist teachers and students. Finally, the paper discusses how the prompting research paradigm can be combined with the knowledge-concept-centered teaching model to help ChatGPT establish a "knowledge system", turning it into a language generation model driven by both data and knowledge, capable of providing more intelligent and personalized services in education, and promoting the development and transformation of the field.
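The abstract does not specify an implementation, but the following minimal sketch (in Python) illustrates one plausible reading of the proposal: a knowledge-concept structure diagram is modeled as a prerequisite graph and serialized into the prompt, so that generation is conditioned on an explicit knowledge structure rather than on training data alone. The graph contents, function names, and prompt wording below are illustrative assumptions, not the paper's method.

    # Minimal sketch (an assumption, not the paper's method): serialize a
    # knowledge-concept structure diagram into a prompt so that a language
    # model is conditioned on the curriculum's knowledge structure, not
    # only on its training data.

    # Toy structure diagram: each knowledge concept maps to its prerequisites.
    concept_graph = {
        "gradient descent": ["derivatives", "loss functions"],
        "loss functions": ["functions"],
        "derivatives": ["functions"],
        "functions": [],
    }

    def prerequisites(concept, graph):
        """Collect all transitive prerequisites of a concept, in learning order."""
        ordered, seen = [], set()
        def visit(c):
            for p in graph.get(c, []):
                if p not in seen:
                    seen.add(p)
                    visit(p)
                    ordered.append(p)
        visit(concept)
        return ordered

    def build_prompt(concept, question, graph):
        """Prepend the concept's place in the knowledge structure to a question."""
        chain = ", ".join(prerequisites(concept, graph)) or "none"
        return (
            f"Knowledge concept: {concept}\n"
            f"Prerequisites, in learning order: {chain}\n"
            f"Answer for a student who has already mastered the prerequisites.\n"
            f"Question: {question}"
        )

    print(build_prompt("gradient descent",
                       "Why does the learning rate matter?",
                       concept_graph))

Under this reading, the prerequisite chain prepended to the question plays the role of the "knowledge system": the same question is answered differently depending on where the target concept sits in the structure diagram.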
Jingyuan Chen, Liya Hu, Fei Wu. Investigation into the Transformation of Knowledge-Centered Pedagogy with ChatGPT/Generative AI[J]. Journal of East China Normal University (Educational Sciences), 2023, 41(7): 177-186. DOI: 10.16382/j.cnki.1000-5560.2023.07.016