Semi-automatic classification of student drawings and texts in Science Education
Students often interpret scientific processes based on everyday experiences. Designing effective learning processes means drawing on these student conceptions, building on them or, as the case may be, contrasting them. Diagnosing the internal conceptions of the students present in a classroom is therefore an important first step. For diagnosis, students are asked to produce individual representations of the scientific process, in the form of descriptions in their own words and/or sketched drawings. To support individual learning, the created artefacts need to be analysed in detail, a task which is hard to accomplish in the bustle of everyday school life.
The PhD project develops novel analysis methods that integrate state-of-the-art machine learning techniques with knowledge graphs. The objective is the automatic pre-classification of student artefacts of different modalities (drawings, text, and their combination) according to domain-specific similarity metrics. Different notions of “similarity” will be explored: for example, a grouping could be based on the similarity of the conceptions expressed in the artefacts, or on the degree of their perceived correctness.
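To illustrate what similarity-based pre-classification could look like, the following is a minimal sketch, assuming each artefact has already been mapped to a numeric feature vector (e.g. by a neural encoder for drawings or text; the vectors and the greedy threshold grouping below are hypothetical illustrations, not the project's actual method):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def group_artefacts(vectors, threshold=0.9):
    """Greedy grouping: assign each artefact to the first group whose
    representative (first member) is at least `threshold` similar;
    otherwise open a new group."""
    groups = []  # each group is a list of indices into `vectors`
    for i, vec in enumerate(vectors):
        for group in groups:
            if cosine_similarity(vectors[group[0]], vec) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

# Toy vectors standing in for learned artefact embeddings (invented values)
artefacts = [
    [1.0, 0.1, 0.0],  # e.g. a drawing expressing one conception
    [0.9, 0.2, 0.1],  # a similar conception, perhaps in another modality
    [0.0, 0.1, 1.0],  # a clearly different conception
]
print(group_artefacts(artefacts))  # → [[0, 1], [2]]
```

In practice the similarity metric itself would be domain-specific (e.g. informed by a knowledge graph of the target conceptions) rather than plain cosine distance, and a more robust clustering algorithm would replace the greedy assignment.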
Existing works on the automatic analysis of drawings in school contexts address only very specific application scenarios, such as the analysis of component compositions in physics lessons (Shelton et al. 2016); Forbus et al. (2018) discuss applications in computer science and earth sciences. The automatic analysis of drawings remains a great challenge, despite the progress driven by deep learning and neural networks (Krizhevsky et al. 2012; Sharif Razavian et al. 2014; see Ewerth et al. 2017 for an overview).
In close cooperation with experts from the educational sciences, the project also investigates under which instructional conditions such an automated classification of individual representations can succeed.
Ewerth, R., Springstein, M., Phan-Vogtmann, L. A., & Schütze, J. (2017). “Are machines better than humans in image tagging?” A user study adds to the puzzle. In European Conference on Information Retrieval (pp. 186–198). Springer, Cham.
Forbus, K. D., Garnier, B., Tikoff, B., Marko, W., Usher, M., & McLure, M. D. (2018). Sketch worksheets in STEM classrooms: Two deployments. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18) (pp. 7665–7672). Retrieved from https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16540
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097–1105).
Sharif Razavian, A., Azizpour, H., Sullivan, J., & Carlsson, S. (2014). CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 806–813).
Shelton, A., Smith, A., Wiebe, E., Behrle, C., Sirkin, R., & Lester, J. (2016). Drawing and writing in digital science notebooks: Sources of formative assessment data. Journal of Science Education and Technology, 25(3), 474-488.