Journal of East China Normal University (Educational Sciences)
How Natural Language Processing Technology Empowers the AIED: The Perspective of AI Scientist
Accepted date: 2022-05-30
Online published: 2022-08-24
Natural language processing (NLP) is one of the most important research branches of artificial intelligence (AI). With the rapid growth of computing power and the construction of large-scale corpora over the last decade, NLP technology has made great progress and has been widely applied in many areas, especially education. In this paper, we investigate the current state and trends of NLP research, and how NLP promotes the development of artificial intelligence in education (AIED), by studying and analyzing publications, reports, and speeches from eminent domestic and international AI specialists. Our aim is to explore the future direction and trends of AIED.
Bo Zhang, Ruihai Dong. How Natural Language Processing Technology Empowers the AIED: The Perspective of AI Scientist[J]. Journal of East China Normal University (Educational Sciences), 2022, 40(9): 19-31. DOI: 10.16382/j.cnki.1000-5560.2022.09.003
1. Alibaba Cloud. (2020). Intelligent text classification. https://ai.aliyun.com/nlp/tc.
2. Dai, J., & Gu, X. (2020). Where will artificial intelligence take education? An interpretation of the WIPO report Technology Trends 2019: Artificial Intelligence. China Educational Technology, (10), 24-31.
3. Deloitte. (2020). Global report on the development of intelligent education. Retrieved from the Deloitte website: https://www2.deloitte.com/cn/zh/pages/technology-media-and-telecommunications/articles/development-of-ai-based-education-in-china.html.
4. He, X. (2019). Natural language processing helps people better connect with the world. China Conference on Artificial Intelligence (CCAI). http://2019.ccai.cn/.
5. Peng, S. (2021). Defining the meaning and exploring the principles of artificial intelligence education. China Educational Technology, (06), 49-59.
6. Qianzhan Industry Research Institute. (2020). Report on the development prospects and investment strategy planning of China's intelligent education industry, 2021-2026.
7. Institute for Artificial Intelligence, Tsinghua University. (2020). Artificial Intelligence Development Report (2011-2020).
8. Shen, X. (2018). Artificial intelligence will disrupt all business applications. Artificial Intelligence Conference. Retrieved from the Microsoft website (September 19, 2018): https://www.microsoft.com/zh-cn/ard/news/news_2018_58.
9. Wu, H. (2020). What might advances in artificial intelligence bring to the development of contemporary pedagogy? University Education Science, (05), 103-111.
10. Microsoft Research Asia. (2019). Zhou Ming: The technical framework and future path of natural language processing. Retrieved from the Microsoft Research Asia website (July 15, 2019): https://www.msra.cn/zh-cn/news/features/ccf-gair-2019-ming-zhou.
11. Wu, Y., Liu, B., & Ma, X. (2017). Building the ecosystem of "AI + education". Journal of Distance Education, 35(5), 27-39.
12. Zheng, N. (2019). Facing the challenge of artificial intelligence: What is the next step for talent cultivation? China University Teaching, (02), 9-13+8.
13. Zhihu. (2019). Liu Qun, chief scientist of speech and semantics at Huawei, on natural language processing. Retrieved from Zhihu (February 12, 2019): https://zhuanlan.zhihu.com/p/56526597.
14. Zong, C. (2020). Special column: Prospects for human language technology. Communications of the CAAI (Chinese Association for Artificial Intelligence), 1(10).
15. Zhuang, F., Luo, P., He, Q., & Shi, Z. (2015). Research progress on transfer learning. Journal of Software, 26(1), 26-39.
16. Alhawiti, K. M. (2014). Natural language processing and its use in education. Tabuk, Saudi Arabia.
17. Ardoin, S. P., Williams, J. C., Christ, T. J., Klubnik, C., & Wellborn, C. (2010). Examining readability estimates' predictions of students' oral reading rate: Spache, Lexile, and Forcast. School Psychology Review, 39(2), 277-285.
18. Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv: 1409.0473.
19. Bengio, Y., Ducharme, R., Vincent, P., & Janvin, C. (2003). A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb), 1137-1155.
20. Bin Dahmash, N. (2020). 'I can't live without Google Translate': A close look at the use of Google Translate App by second language learners in Saudi Arabia. Arab World English Journal (AWEJ), 11(3), 226-240.
21. Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022.
22. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. arXiv: 2005.14165.
23. Dan, J. (2017). Dan Jurafsky on natural language processing. Retrieved from https://www.youtube.com/watch?v=QIdB6M5WdkI.
24. Cancino, M., & Panes, J. (2021). The impact of Google Translate on L2 writing quality measures: Evidence from Chilean EFL high school learners. System, 98, 102464.
25. Chen, X., Cui, Z., Zhang, J., Wei, C., Cui, J., Wang, B., ... & Yan, R. (2020). Reasoning in dialog: Improving response generation by context reading comprehension. arXiv: 2012.07410.
26. Chen, P., Lu, Y., Liu, J., & Xu, Q. (2021). An intelligent assistant for problem behavior management. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 18, pp. 16007-16010).
27. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv: 1406.1078.
28. Conneau, A., Lample, G., Ranzato, M. A., Denoyer, L., & Jégou, H. (2017). Word translation without parallel data. arXiv: 1710.04087.
29. Cynthia, B. (2019). Developing social and empathetic AI. Retrieved from https://www.youtube.com/watch?v=T52g7dCxJ4A.
30. Dragomir, R. (2017). Deep learning for NLP. Retrieved from https://vimeo.com/230044807.
31. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv: 1810.04805.
32. Fedus, W., Zoph, B., & Shazeer, N. (2021). Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv: 2101.03961.
33. Guo, Q., Qiu, X., Liu, P., Xue, X., & Zhang, Z. (2020). Multi-scale self-attention for text classification. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 05, pp. 7847-7854).
34. Hofmann, T. (1999). Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 50-57).
35. Kalchbrenner, N., & Blunsom, P. (2013). Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (pp. 1700-1709).
36. Koelstra, S., Muhl, C., Soleymani, M., Lee, J. S., Yazdani, A., Ebrahimi, T., ... & Patras, I. (2011). DEAP: A database for emotion analysis using physiological signals. IEEE Transactions on Affective Computing, 3(1), 18-31.
37. Lex, F. (2021). MIT deep learning and artificial intelligence lectures. Retrieved from https://deeplearning.mit.edu/.
38. Liu, Y., Zhang, J., Xiong, H., Zhou, L., He, Z., Wu, H., ... & Zong, C. (2020). Synchronous speech recognition and speech-to-text translation with interactive decoding. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 05, pp. 8417-8424).
39. Lu, Y., Pian, Y., Chen, P., Meng, Q., & Cao, Y. (2021). RadarMath: An intelligent tutoring system for math education. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 18).
40. Matthew, M. (2020). AI, Analytics, Machine Learning, Data Science, Deep Learning Research Main Developments in 2020 and Key Trends for 2021. Retrieved from https://www.kdnuggets.com/2020/12/predictions-ai-machine-learning-data-science-research.html.
41. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv: 1301.3781.
42. Minaee, S., Kalchbrenner, N., Cambria, E., Nikzad, N., Chenaghlu, M., & Gao, J. (2021). Deep learning-based text classification: A comprehensive review. ACM Computing Surveys (CSUR), 54(3), 1-40.
43. Newman, H., & Joyner, D. (2018). Sentiment analysis of student evaluations of teaching. In International Conference on Artificial Intelligence in Education (pp. 246-250). Springer, Cham.
44. OpenAI. (2021). DALL·E: Creating Images from Text. Retrieved from https://openai.com/blog/dall-e/.
45. Russell, S. J. (2020). The Future of Artificial Intelligence. Retrieved from https://www.carnegiecouncil.org/studio/multimedia/20200219-future-artificial-intelligence-stuart-russell.
46. Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 311-318).
47. Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1532-1543).
48. Peng, Y., Chen, P., Lu, Y., Meng, Q., Xu, Q., & Yu, S. (2019). A task-oriented dialogue system for moral education. In International Conference on Artificial Intelligence in Education (pp. 392-397). Springer, Cham.
49. Qi, F., Chang, L., Sun, M., Ouyang, S., & Liu, Z. (2020). Towards building a multilingual sememe knowledge base: Predicting sememes for BabelNet synsets. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 05, pp. 8624-8631).
50. Rajput, Q., Haider, S., & Ghani, S. (2016). Lexicon-based sentiment analysis of teachers' evaluation. Applied Computational Intelligence and Soft Computing, 2016.
51. Rajpurkar, P., Jia, R., & Liang, P. (2018). Know what you don't know: Unanswerable questions for SQuAD. arXiv: 1806.03822.
52. Ruder, S., Vulić, I., & Søgaard, A. (2019). A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65, 569-631.
53. Russell, S., & Norvig, P. (2002). Artificial intelligence: A modern approach.
54. Sebastian, R. (2021). ML and NLP Research Highlights of 2020. Retrieved from https://ruder.io/research-highlights-2020/.
55. Shao, C., Zhang, J., Feng, Y., Meng, F., & Zhou, J. (2020). Minimizing the bag-of-ngrams difference for non-autoregressive neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 01, pp. 198-205).
56. Shoham, Y., Perrault, R., Brynjolfsson, E., & Clark, J. (2017). 2017 Stanford Artificial Intelligence Index Annual Report.
57. Shoham, Y., Perrault, R., Brynjolfsson, E., Clark, J., Manyika, J., Niebles, J. C., Lyons, T., Etchemendy, J., Grosz, B., & Bauer, Z. (2018). The AI Index 2018 Annual Report. AI Index Steering Committee, Human-Centered AI Initiative, Stanford University, Stanford, CA.
58. SQuAD2.0. (2021). The Stanford Question Answering Dataset. Retrieved from https://rajpurkar.github.io/SQuAD-explorer/.
59. Swieczkowski, D., & Kułacz, S. (2021). The use of the Gunning Fog Index to evaluate the readability of Polish and English drug leaflets in the context of health literacy challenges in medical linguistics: An exploratory study. Cardiology Journal, 28(4), 627-631.
60. Tang, Y., & Yu, D. (2020). The method of calculating sentence readability combined with deep learning and language difficulty characteristics. In Proceedings of the 19th Chinese National Conference on Computational Linguistics (pp. 731-742).
61. Tongpoon-Patanasorn, A., & Griffith, K. (2020). Google Translate and translation quality: A case of translating academic abstracts from Thai to English. PASAA: Journal of Language Teaching and Learning in Thailand, 60, 134-163.
62. Tzacheva, A., Ranganathan, J., & Jadi, R. (2019). Multi-label emotion mining from student comments. In Proceedings of the 2019 4th International Conference on Information and Education Innovations (pp. 120-124).
63. Tsai, S. C. (2020). Chinese students' perceptions of using Google Translate as a translingual CALL tool in EFL writing. Computer Assisted Language Learning, 1-23.
64. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).
65. Wang, Z., Liu, J., & Dong, R. (2018). Intelligent auto-grading system. In 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS) (pp. 430-435). IEEE.
66. Xu, J., Wang, H., Niu, Z., Wu, H., & Che, W. (2020). Knowledge graph grounded goal planning for open-domain conversation generation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 05, pp. 9338-9345).
67. Xu, R., Tao, C., Jiang, D., Zhao, X., Zhao, D., & Yan, R. (2020). Learning an effective context-response matching model with self-supervised tasks for retrieval-based dialogues. arXiv: 2009.06265.
68. Xu, Y., Wu, Z., Huang, H., Yang, T., Yu, P., & Lu, E. (2016). Grammar automatic checking system for English abstracts of master's theses. In International Conference on Bio-Inspired Computing: Theories and Applications (pp. 497-506). Springer, Singapore.
69. Yang, L., Li, J., Cunningham, P., Zhang, Y., Smyth, B., & Dong, R. (2021). Exploring the efficacy of automatically generated counterfactuals for sentiment analysis. arXiv: 2106.15231.
70. Zhang, M., Liu, Y., Luan, H., & Sun, M. (2017). Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 1959-1970).
71. Zhang, Y., Ou, Z., & Yu, Z. (2020). Task-oriented dialog systems that consider multiple appropriate responses under the same context. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 05, pp. 9604-9611).
72. Zheng, L., Qi, F., Liu, Z., Wang, Y., Liu, Q., & Sun, M. (2020). Multi-channel reverse dictionary model. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 01, pp. 312-319).
73. Zheng, Y., Zhang, R., Mensah, S., & Mao, Y. (2020). Replicate, walk, and stop on syntax: An effective neural network model for aspect-level sentiment classification. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 05, pp. 9685-9692).
74. Zheng, Y., Zhang, R., Huang, M., & Mao, X. (2020). A pre-training based personalized dialogue generation model with persona-sparse data. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 05, pp. 9693-9700).