The ACM Conference on Human Factors in Computing Systems (CHI) 2026 was held at the Barcelona International Convention Center from April 13 to 17. CHI is the flagship venue in Human-Computer Interaction, bringing together thousands of researchers from computer science, artificial intelligence, psychology, and related fields. This year, emergenCITY presented four works exploring the use of extended reality (XR) techniques for urban resilience and risk management.

On-Field Collaboration in Risk Management Supported by AR

Risk management involves complex problem-solving and therefore requires domain experts to work together on-site. To support this scenario, emergenCITY researchers are exploring XR techniques for colocated augmented reality (AR) collaboration from three perspectives: emotional rapport, shared understanding, and knowledge-gaining efficiency.

Yanni Mei et al. [1] researched how sharing visual social cues can support mutual understanding and emotional rapport among colocated AR users. In their paper, the authors provided design guidelines for visualizing social cues for different expressive purposes. Julian Rasch et al. [2] conducted a mixed-methods study to explore whether shared gaze can improve efficiency and communication in colocated AR collaboration tasks. They found that shared gaze does support shared attention and enhances the overall collaborative experience but does not yield performance benefits. Clara Sayffaerth et al. [3] focused on first-person-view physical task instructions in AR, exploring how timing (parallel or sequential) and different limb visualizations influence users’ learning speed, memory retention, and comfort.

AI Supports On-Demand XR Tool Creation

In risk management, AR augmentations can aid complex problem-solving in many ways, such as highlighting dangerous areas to guide attention, blurring distracting information to reduce clutter, or simulating visual deficiencies like colorblindness to design more accessible solutions. However, such needs are diverse and hard to predict a priori, making it difficult to cover them with off-the-shelf applications.

As a potential solution, Yanni Mei et al. [4] presented ShadAR, an application that allows users to create on-demand AR augmentations by verbally describing how they want to see the world (e.g., “highlight the pedestrians in red”). The system uses a large language model (LLM) to generate shader scripts and injects them into the AR application at runtime.
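To illustrate the idea, the following minimal Python sketch shows how a verbal description could be turned into fragment-shader source with an off-the-shelf LLM API. It is not the authors’ implementation: the system prompt, the model name, and the describe_to_shader helper are illustrative assumptions.

    # Minimal sketch of an LLM-to-shader pipeline in the spirit of ShadAR.
    # Assumptions: the `openai` Python package is installed and the
    # OPENAI_API_KEY environment variable is set; the model name is a
    # placeholder. Illustrative only, not the authors' implementation.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You write GLSL fragment shaders for an AR passthrough renderer. "
        "The camera image is available as `uniform sampler2D uCamera`. "
        "Reply with shader source code only, no commentary."
    )

    def describe_to_shader(request: str) -> str:
        """Translate a verbal description of a visual effect into shader code."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": request},
            ],
        )
        return response.choices[0].message.content or ""

    # Example request from the paper's demo scenario:
    shader_source = describe_to_shader("Highlight the pedestrians in red.")

    # A host engine (e.g., Unity) would compile `shader_source` and swap it
    # into the camera render pass, completing the runtime injection step.
    print(shader_source)

In a deployed system, the generated shader would presumably also need validation, such as compile checks with retries on errors, before being hot-swapped into the render pipeline.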

“We aim to support risk management scenarios by enabling domain experts to quickly create AR applications during problem solving, improving task efficiency and experimental practices. We demonstrated this in three CHI Interactivity sessions and had the prototype used by over 100 participants,” explains Yanni Mei.

Participants creatively combined their own research with the prototype. For example, visual health researchers created a simulator of colorblind vision, and digital art researchers used it to “see the world like Van Gogh.”

About the Author

Yanni Mei is a PhD researcher in the HCI Lab at TU Darmstadt, affiliated with emergenCITY. Her research explores future everyday AR scenarios, focusing on colocated collaboration, design, and productivity, as well as the integration of LLMs with XR techniques.

Papers

[1] Mei, Yanni, et al. “Meme, Myself and AR: Exploring Memes Sharing in Face-to-face Conversation using Augmented Reality.” Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. 2026.

[2] Rasch, Julian, et al. “Anticipation Without Acceleration: Benefits of Shared Gaze in Collocated Augmented Reality Collaboration.” Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. 2026.

[3] Sayffaerth, Clara, et al. “Do It Fast, Forget It Fast: How Timing and Limb Visualizations Affect First-Person Augmented Reality Instructions.” Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. 2026.

[4] Mei, Yanni, et al. “ShadAR: LLM-Driven Shader Generation to Transform Visual Perception in Augmented Reality.” Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems. 2026.