VILA

VILA supports research into embodied human interaction in a wide range of environments, with a focus on (social) cognition, learning and design.

The labs qualify and support research activities that collect data in the field, i.e. typically outside the university, such as patient education in hospitals, play and learning activities inside and outside pedagogical institutions, mobility behaviour in exhibition halls, cooperation between elderly people, nursing staff and robots, learning to drive, project work, fashion design processes and much more.

Learning Resources

Video tutorials

Check out VILA's video tutorials on software such as ELAN and Transana.

E-learning

Take a look at VILA's different e-learning modules on 360 video, basic video transcription and much more.

Booking labs and materials

Read more about how to book resources and lab equipment, and see the full list of lab equipment.

Data treatment

Read more about tools for collecting video data, anonymisation of video data and informed consent forms.

Workshops and events

VILA organises three types of events – data sessions, research workshops and PhD seminars/research seminars.

Research from multiple perspectives

Although interactions can today be recorded reasonably well with any state-of-the-art smartphone, interaction researchers need more advanced technical and analytic competences. Whereas recordings were typically made with a single camera only some years ago, researchers today often record with multiple camera and audio streams to document interaction from different perspectives.

Recording and editing multi-stream audio/video recordings is one of the technical focus areas of the labs in DIGHUMLAB. Another focus is the documentation of mobile interactions, e.g. people driving cars, travelling in bicycle formation or moving together through space in other ways.
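
As an illustration of what multi-stream editing can involve, here is a minimal sketch (not a VILA workflow) that composites two camera angles of the same event side by side so they can be reviewed in parallel. It assumes ffmpeg is installed and on the PATH, and that the placeholder files cam_a.mp4 and cam_b.mp4 share the same frame height.

```python
"""Minimal sketch: place two camera angles of the same event next to each
other in a single video for parallel review. Assumes ffmpeg is installed;
the file names are placeholders, not VILA conventions."""
import subprocess


def stack_side_by_side(cam_a: str, cam_b: str, out: str) -> None:
    cmd = [
        "ffmpeg", "-y",                 # -y: overwrite the output if it exists
        "-i", cam_a,                    # first camera angle
        "-i", cam_b,                    # second camera angle (same height assumed)
        # put the two video streams next to each other in one frame
        "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
        "-map", "[v]",                  # use the stacked video ...
        "-map", "0:a?",                 # ... plus the first camera's audio, if any
        "-c:v", "libx264", "-crf", "18",
        "-c:a", "aac",
        out,
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    stack_side_by_side("cam_a.mp4", "cam_b.mp4", "side_by_side.mp4")
```

For streams recorded on separate devices, the inputs would normally have to be trimmed or offset first so that they start at the same moment; the sketch deliberately leaves that step out.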

A third focus is to experiment with and understand how technological innovations can be employed to capture and analyse interaction: 360-degree cameras have only recently become affordable and still need to be tested in the field. Finally, researchers may want to work with all of these data sources in combination with physiological sensors or eye-trackers.
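
As a small, hedged example of handling 360-degree material outside a dedicated tool, the sketch below renders a conventional flat view from an equirectangular 360° recording using ffmpeg's v360 filter. It assumes an ffmpeg build that includes v360; the file name, viewing direction and field of view are illustrative placeholders.

```python
"""Minimal sketch: extract a conventional 'flat' view from an equirectangular
360-degree recording, e.g. to inspect one participant's perspective. Assumes
an ffmpeg build with the v360 filter; file names are placeholders."""
import subprocess


def extract_flat_view(src: str, out: str, yaw: float = 0.0, pitch: float = 0.0) -> None:
    # Reproject the equirectangular ("e") input to a rectilinear ("flat") view
    # pointed at the given yaw/pitch, with a 90 x 60 degree field of view.
    vf = f"v360=e:flat:yaw={yaw}:pitch={pitch}:h_fov=90:v_fov=60:w=1280:h=720"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", vf,
         "-c:v", "libx264", "-crf", "20", "-c:a", "copy", out],
        check=True,
    )


if __name__ == "__main__":
    extract_flat_view("360_recording.mp4", "flat_view.mp4", yaw=45)
```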

Selected tools

Adobe Premiere Pro is a timeline-based video editing app developed by Adobe Systems.

ELAN is a professional program for transcribing and annotating audio and video streams.

Transana is a tool for transcribing and analysing text, images and audio/video.

AVA360VR – the software that empowers users to be “sucked into their video data worlds”

AVA360VR (and CAVA360VR) is a tool developed by VILA and Big Video.
AVA360VR is short for Annotate, Visualise and Analyse 360° video in Virtual Reality. It is a unique and flexible tool for annotating, visualising and analysing 360° video in virtual reality, and it empowers users – researchers, students and educators alike – to relive any 360° video-recorded situation with a VR headset.

Guidance and support

When data has been collected, there may be a need to convert, compress, align, splice together, transcribe and host it so that it can be used and shared by researchers.
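
For instance, converting and compressing raw recordings into a smaller, widely playable format is often a first step before sharing. The sketch below, which assumes ffmpeg is available and uses illustrative file names and quality settings rather than any VILA policy, re-encodes a folder of clips to 720p H.264/AAC MP4 files.

```python
"""Minimal sketch: batch-compress raw recordings into smaller, widely playable
MP4 files before sharing or archiving. Assumes ffmpeg on the PATH; folder
names and quality settings are illustrative only."""
import subprocess
from pathlib import Path


def compress_for_sharing(src: Path, out_dir: Path) -> Path:
    out_dir.mkdir(parents=True, exist_ok=True)
    out = out_dir / (src.stem + "_shared.mp4")
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", str(src),
            "-vf", "scale=-2:720",              # downscale to 720p, keep aspect ratio
            "-c:v", "libx264", "-crf", "26",    # higher CRF = smaller file, lower quality
            "-preset", "medium",
            "-c:a", "aac", "-b:a", "128k",
            str(out),
        ],
        check=True,
    )
    return out


if __name__ == "__main__":
    for clip in Path("raw_recordings").glob("*.MOV"):
        compress_for_sharing(clip, Path("shared"))
```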

In many projects, (semi-)automatic support for analysis is needed to handle large, complex data streams. For sustainability, these data must be stored in reliable and robust databases. The development and design of such data storage solutions and analytic tools are equally part of the experimental labs in DIGHUMLAB.

For guidance and support, VILA members can offer valuable insight into the challenges of collecting and analysing data, as well as converting, compressing, splicing and similar tasks.

Get in touch with VILA

Questions or comments? Do you wish to join the community? Or do you want to get started with a workshop or a meeting?

Please get in touch with Jacob Davidsen, community lead in VILA.
