Emerging research on the future of humans and digitalisation: GenZ pilot studies
The GenZ pilot studies address, for example, the following questions:
- What are the human competences or skills needed for interaction and work in teams face to face and remotely?
- How do different technologies affect interaction?
- How can conditions for interactions be designed and supported for COVID/post-COVID needs?
- How has the COVID-19 pandemic transformed (facilitated, constrained or otherwise changed) the ways of expressing, understanding and interpreting interactions?
The pilot study Digital and Collaborative Learning Competence of the Students and their Educators in Health Sciences (DigiCollaborativeLearning) aims to reinforce digital collaborative learning competence in the health sciences education of bachelor's students and their educators by developing a hybrid model that creates better conditions for students' and educators' professional development during the COVID/post-COVID era. The study observes and interviews health sciences bachelor's students during their learning to examine their experiences of, and competence in, digital collaboration and how these support their learning outcomes, professional competence and quality of life both in person and virtually. The pilot study involved an educational intervention with 17 bachelor's students in health sciences and two educators in the spring of 2021. In addition, the experiences of the bachelor's students were examined through individual thematic interviews. The themes addressed students' experiences of distance learning, well-being and resilience during a pandemic crisis, as well as skills development and digital learning in health science education. The project team is analysing video recordings of the students' interaction and competence development and conducting a content analysis of the student interviews. As outcomes of the project, the research team will prepare a preliminary hybrid learning guideline for educators, produce at least two original scientific publications and apply for additional funding to continue the study. The HealthEduCom research group is led by Prof. Kristina Mikkonen; the other researchers are Sari Pramila-Savukoski, Heli Kuivila, Kärnä, Ashlee Oikarainen and Jonna Juntunen from the Research Unit of Nursing Science and Health Management, Faculty of Medicine.
The pilot study on developing machine learning methods for recognizing interactive gestures between two persons is led by Dr. Xiaobai Li, Center for Machine Vision and Signal Analysis (CMVS), Faculty of Information Technology and Electrical Engineering. The research work comprises three steps: first, collecting an interactive gesture dataset at LeaF; second, training machine learning methods on the collected data; and third, adapting and testing the trained method on educational data to support education research analysis. The data was gathered and reorganized from online resources and complemented with a literature review of related machine learning methods. The team is currently developing machine learning methods for interactive gesture recognition. The Transformer architecture is attracting growing attention in deep learning, as it has been shown to be effective not only for natural language processing but also for visual tasks such as image and video pattern recognition. The team will build a Transformer-based approach, named 'MTimeSformer', to model the dynamic interactive gestures between two persons.
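The source does not describe the architecture of MTimeSformer itself. As a rough, hedged illustration of the general approach, the sketch below shows a minimal Transformer encoder over per-frame video features for gesture classification; all names, dimensions and the feature-extraction step are assumptions, not the team's actual design.

```python
# Minimal sketch of a Transformer-based gesture classifier over video frames.
# Illustration only: this is NOT the team's MTimeSformer model. All names and
# dimensions here are assumptions.
import torch
import torch.nn as nn

class GestureTransformer(nn.Module):
    def __init__(self, feat_dim=512, n_heads=8, n_layers=4, n_classes=10, max_len=64):
        super().__init__()
        # Learnable positional embeddings for up to max_len frames.
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, feat_dim))
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, time, feat_dim) per-frame features, e.g. from a
        # CNN backbone; for two-person interaction these could concatenate
        # both persons' features per frame (an assumption, not the paper's method).
        x = frame_feats + self.pos_emb[:, : frame_feats.size(1)]
        x = self.encoder(x)                     # self-attention across time
        return self.classifier(x.mean(dim=1))   # pool over time, then classify

# Example: classify a batch of 2 clips, each with 32 frames of 512-d features.
model = GestureTransformer()
logits = model(torch.randn(2, 32, 512))
print(logits.shape)  # torch.Size([2, 10])
```

The key design point such models share is that self-attention lets every frame attend to every other frame, so temporally distant parts of a gesture exchange can inform each other without recurrence.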
The pilot study Co-shaping humane technology use: hybrid sociabilities in times of a pandemic is led by Dr. Samira Ibnelkaïd from the Research Unit for Languages and Literature, Faculty of Humanities. The pilot study focuses on the social experience of expatriates in Oulu, Finland, during the COVID-19 pandemic. It explores the ways in which, despite the ongoing restrictions, these "expats" manage to build a new social network within their host city while keeping in touch with their relatives abroad, making use of digital technologies. The research group has collected a data set of video-recorded naturally occurring interactions between foreigners in Oulu, both offline and online. More specifically, 26 hours of interactions have been collected so far, including 16 hours of face-to-face gatherings between expatriates and 10 hours of online video calls between foreigners and their relatives abroad. Moreover, the group has conducted 7 hours of interviews with expats. The processing and analysis of the data are still ongoing, but as a preliminary result the group has built a collection of embodied and artifacted "experience-sharing" sequences and identified the multimodal sequentiality of online "experience-sharing", which involves eight subsequences: the preface, the initiation, the staging, the audit, the object assessment, the closing, the restaging and, finally, the transition to another conversational topic. These subsequences appear to be accomplished through technobodily ethnomethods. This intersubjective process of showing extimacy allows participants to make the invisible visible, prevents social exclusion and fosters participation even remotely. However, it requires individuals to develop a "technobodily literacy", defined as the process of developing and nurturing the technical, cognitive, sensory, socioaffective and bodily skills needed to enact an artifacted intercorporeality, to partake fully in the socio-digital world and to fight exclusion.
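To make the eight-part sequential structure concrete, the sketch below shows one hypothetical way the subsequences could be represented when coding video data. The subsequence labels come from the study; the annotation format itself is an assumption, not the team's actual tooling.

```python
# Hypothetical representation of the eight "experience-sharing" subsequences
# for coding video data. The labels come from the study; the format does not.
from dataclasses import dataclass
from enum import Enum

class Subsequence(Enum):
    PREFACE = 1
    INITIATION = 2
    STAGING = 3
    AUDIT = 4
    OBJECT_ASSESSMENT = 5
    CLOSING = 6
    RESTAGING = 7
    TOPIC_TRANSITION = 8

@dataclass
class Annotation:
    start_s: float       # start time in the recording, in seconds
    end_s: float         # end time, in seconds
    phase: Subsequence   # which subsequence this span realises

# Example: the opening of one annotated experience-sharing sequence.
sequence = [
    Annotation(12.0, 15.4, Subsequence.PREFACE),
    Annotation(15.4, 19.8, Subsequence.INITIATION),
    Annotation(19.8, 31.0, Subsequence.STAGING),
]
```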
The pilot study Tracing teacher education students' learning during an immersive virtual reality learning task using multimodal data is coordinated by doctoral student Marta Sobocinski from the Learning and Educational Technology Research Unit (LET). In the pilot data collection the research group was interested in finding out more about how teacher education students learn in an immersive virtual reality environment, and how this learning can be captured using multimodal data (physiological data, video data, self-reports, interviews). The group collected data in LeaF from 15 participants while they learnt about how viruses spread in a community. The participants were degree and doctoral students from the Faculty of Education with a background in teacher education. Preliminary results show that the participants achieved a high level of immersion in the environment ("I need to leave from here, people are coughing!") and often monitored their learning processes ("This is hard for me!", "I'm so confused right now"). The interviews showed that the participants had a mostly positive experience learning with VR but saw practical obstacles to implementing it in their own teaching in the future (space required, equipment costs, number of students in a class). The next step, according to the research group, is to go deeper into the data and also explore the physiological data and how it can be used together with the video data to capture critical moments in the learning process.
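The source does not specify how the physiological and video data will be combined. As one hedged illustration of the basic idea, the sketch below flags spikes in a physiological signal and converts them to video timestamps; the signal type, sampling rate and threshold are all assumptions.

```python
# A minimal sketch of one way physiological data could be aligned with video
# to flag candidate "critical moments". The method, signal name, threshold and
# sampling rate are assumptions, not the research group's actual approach.
import numpy as np

def flag_critical_moments(signal, fs_hz, z_threshold=2.0):
    """Return video timestamps (seconds) where the signal spikes.

    signal: 1-D physiological time series (e.g. electrodermal activity),
            assumed to start at the same instant as the video recording.
    fs_hz:  sampling rate of the signal in Hz.
    """
    z = (signal - signal.mean()) / signal.std()   # standardise the signal
    spikes = np.flatnonzero(z > z_threshold)      # sample indices above threshold
    return spikes / fs_hz                         # sample index -> seconds

# Example with synthetic data: a 60 s recording at 4 Hz with one injected spike.
rng = np.random.default_rng(0)
eda = rng.normal(size=240)
eda[120:124] += 5.0                               # spike around t = 30 s
print(flag_critical_moments(eda, fs_hz=4))        # ~ [30.  30.25 30.5  30.75]
```

The returned timestamps could then be used to jump to the corresponding moments in the video data for closer qualitative inspection.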
The Translingual communication practices in global business (GloBus) project at the Universities of Oulu and Helsinki explores business professionals' language and communication practices at work in the age of COVID-19 and beyond. A pilot study was conducted in the spring of 2021, focusing on customer training carried out by the health technology company TE3 Mobility. The study tracked the planning and execution of the training, including video-recording mobility analyses as well as collecting training videos and other materials related to the customer training trajectory. The LeaF research infrastructure services provided the test facilities. The preliminary findings of the pilot study point to the use of a rich array of material, technological and spatial resources, as well as body movement, in support of verbal interaction in the mobility analyses and training videos. The data also illustrate the business professionals' need to reflect on their language and communication practices with their customers (e.g. what kind of terminology to use, how to explain movement in a clear way, and how to design training videos that are accessible and engaging without compromising professionalism). As an important outcome of the pilot study, the company gained valuable experience of implementing training in virtual environments in the future. The GloBus team is led by Tiina Räisänen; the other researchers are Niina Hynninen (University of Helsinki), Mu Zhao (University of Helsinki) and Janne Pysäys (University of Oulu).
The aim of the pilot study Remote Research Collaboration Using VR and 360° Video (RReCO-VR360) is to research social interaction and activity, as well as to explore new ways of collecting, editing, analysing, transcribing and viewing video data. For the pilot study, the research team of Henna Kaaresto, Tiina Keisanen, Mirka Rauniomaa, Pauliina Siitonen, Maarit Siromaa, Heidi Spets and Anna Vatanen collaborated with the Big Video team (Prof. Paul McIlvenny, Dr Jacob Davidsen and their team of researchers) in Aalborg, Denmark, which is presently developing software tools for staging and inhabiting video. Among the tools are AVA360VR (Annotate, Visualise, Analyse 360 video in Virtual Reality), CAVA360VR (Collaborate, Annotate, Visualise, Analyse 360 video in VR) and DOTE (Distributed Open Transcription Environment). The group has acted as beta-testers for these three tools while working on their data. With the help of these software tools, they have explored new practices of collaboration, which has become especially important as the COVID-19 pandemic has made face-to-face collaboration challenging. The group has collected video data of a remote working environment in a family home, as well as of a game environment where the research participants interact with one another in a virtual setting using VR (Virtual Reality) headsets and controllers. The data of the remote working environment consists of 360°, GoPro and screen-recording videos showing the person working from home while also interacting with other members of the household. The data of the VR environment was collected in the LeaF infrastructure and consists of video material showing the participants acting in the physical space, as well as footage from inside the game captured from the participants' perspectives.
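Working with several parallel recordings (360° camera, GoPro, screen capture) requires placing them on a shared timeline. The tools named above handle this internally; as a simplified, hedged illustration of the underlying idea, the sketch below maps a time on one stream's clock to another's, assuming a common sync event (e.g. a clap) has been located in each recording. The offsets are invented for the example.

```python
# Minimal sketch of aligning several recordings on a shared timeline, assuming
# a common sync event (e.g. a clap) has been located in each stream. The
# offsets are illustrative; AVA360VR/CAVA360VR handle synchronisation itself.

# Time of the shared sync event within each recording, in seconds.
sync_event = {"cam360": 12.3, "gopro": 4.7, "screen": 0.9}

def to_local_time(stream, global_t, reference="cam360"):
    """Map a time on the reference stream's clock to another stream's clock."""
    offset = sync_event[stream] - sync_event[reference]
    return global_t + offset

# Example: the moment 60.0 s into the 360° video corresponds to
# 52.4 s in the GoPro file and 48.6 s in the screen recording.
for stream in ("gopro", "screen"):
    print(stream, to_local_time(stream, 60.0))
```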