Video lecture taxonomy
Alenka Lipovec
University of Maribor, Faculty of Education, Faculty of Natural Sciences and Mathematics, Maribor, Slovenia
Abstract
This is joint work with Martin Putzlochner, Stiftland-Gymnasium Tirschenreuth, Germany, m.putzlocher@stiftland-gymnasium.de
Video has given rise to a pedagogy of its own within current learning and teaching practice: video pedagogy. Flipped learning with video learning materials was used extensively during the coronavirus pandemic, and the highest positive correlation between flipped learning and student achievement has been reported for STEM subjects. The effectiveness of video lectures depends on many characteristics, and the vast majority of empirical results relate to the university level. People appear to learn better from an instructional video when the lesson includes prompts to summarise or explain the material (generative activity), the instructor draws graphics on the board during the lecture (dynamic drawing), the instructor shifts eye gaze between the audience and the board (gaze guidance), and a demonstration is filmed from a first-person perspective (perspective principle). Standard formats for video lectures include lecture capture, picture-in-picture and voiceover. Lecture capture involves videotaping a physical lecture. Picture-in-picture combines a full-screen presentation of the slide content with a small video recording of the lecturer (e.g. a talking head in a lower corner), whereas voiceover combines a full-screen presentation with audio narration by the instructor. Another format combines images of the instructor with content that the instructor can monitor in real time; this "live composite" format has a distinct advantage over the other video lecture formats.
Further guidelines for video lecture design include multimedia (present words together with graphics), coherence (avoid redundant material in slides and script), signalling (highlight key material), redundancy (no subtitles that repeat the spoken word), spatial contiguity (place printed text next to the corresponding part of the graphic), temporal contiguity (present related visual and verbal material at the same time), segmentation (break a complex lecture into progressively presented parts), modality (present words as spoken text), personalisation (use conversational language), voice (use an appealing voice), and embodiment (display a gesturing instructor).
The language of instruction plays a crucial role. Subtitles alone do not seem to overcome a language barrier effectively; instead of subtitles, the audio track can be machine-translated into the target language. Additional requirements (e.g. avoiding language-specific on-screen text) must be considered for multilingual videos created with automatic speech recognition and machine translation. The creation and use of educational video lectures for teaching purposes is not new, yet there is still no concrete guidance to help teachers choose the appropriate video for their students. We have therefore attempted to create a taxonomy of video lectures based on a hierarchical structure of levels. The goal is to use the characteristics and principles of video lectures described above to provide a taxonomy scheme that helps teachers determine the quality level of the video resources they need for effective instruction. Bloom's hierarchical taxonomy, the most widely used taxonomy in education, was chosen as an analogy: reaching the lower levels is a prerequisite for achieving the higher taxonomic levels. The revised Bloom's taxonomy comprises six levels: remember, understand, apply, analyse, evaluate and create.
We have summarised the various features of educational video lectures into six hierarchically organised levels: instruction (including generative activity and temporal contiguity); active learning (including generative activity and instructor visibility, e.g. recording format, gaze guidance, perspective, personalisation, voice and embodiment); interactivity; segmentation; dynamic visualisation (including dynamic drawing, multimedia, coherence, signalling and spatial contiguity); and the multilingual principle (including redundancy and modality). For simplicity, the video lecture taxonomy is formatted as a checklist. The teacher scores each level's realisation with points ranging from 0 to 4 (1 – minimal requirements, 2 – medium quality, 3 – high quality, 4 – excellent quality). For each level, descriptions and example videos help the teacher decide how many points to award the evaluated video lecture.
For instance, at the dynamic visualisation level, one point is awarded if the dynamic drawing principle is followed, two points if the video shows the lecturer using dynamic visualisation available on a third-party site, three points if the lecturer uses linked dynamic visualisation/geometry, and four points if the students themselves use a dynamic visualisation tool (e.g. as part of a JSXGraph interactive element in H5P). In our talk, we will focus further on the interactivity level and present video lectures produced with different features and options of JSXGraph within H5P.
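The checklist scoring described above can be sketched in code. The following is a minimal, hypothetical illustration only: the level names and the 1–4 descriptors follow the abstract, while the interpretation of 0 as "not realised" and the aggregation into a total score are our assumptions, not part of the published taxonomy.

```python
# Hypothetical sketch of the video lecture taxonomy checklist.
# Level names and the 1-4 descriptors follow the abstract; treating
# 0 as "not realised" and summing a total are illustrative assumptions.

LEVELS = [
    "instruction",
    "active learning",
    "interactivity",
    "segmentation",
    "dynamic visualisation",
    "multilingual principle",
]

SCALE = {
    0: "not realised",          # assumed meaning of a zero score
    1: "minimal requirements",
    2: "medium quality",
    3: "high quality",
    4: "excellent quality",
}

def score_video(ratings: dict[str, int]) -> dict:
    """Turn a teacher's per-level ratings into a simple quality profile."""
    for level in LEVELS:
        if ratings.get(level, 0) not in SCALE:
            raise ValueError(f"rating for {level!r} must be between 0 and 4")
    profile = {level: SCALE[ratings.get(level, 0)] for level in LEVELS}
    total = sum(ratings.get(level, 0) for level in LEVELS)
    return {"total": total, "max": 4 * len(LEVELS), "profile": profile}
```

For example, a video rated 3 on dynamic visualisation and 4 on interactivity would yield a profile marking those levels "high quality" and "excellent quality", with all unrated levels reported as not realised.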