
See the 2025 conference program for more information.
PROCEEDINGS
SESSION: Tutorial on TIB AV-Analytics for AI-Assisted Video Analysis
Tutorial on TIB AV-Analytics for AI-Assisted Video Analysis
In this tutorial, we will present the web-based video analysis platform TIB AV-Analytics (https://service.tib.eu/tibava). It provides users with state-of-the-art methods from artificial intelligence (AI) including generative AI as well as visualizations for AI-assisted video analysis. Besides a short introduction and basics on the underlying deep learning technologies, the tutorial provides a hands-on experience for the participants including practical tips to analyze their own videos.
SESSION: Visual Storytelling in HCI
Visual Storytelling in HCI: A Workshop on Narrative Development Through Sequential Art
Visual narrative methodologies provide a more comprehensive and intuitive framework for articulating the multifaceted interactions between human users and computational systems. This workshop guides participants through five segments: image-based storytelling, figure sketching, narrative development, practical exercises, and collaborative critique. Participants learn to translate complex interactive systems into clear visual narratives using both analog and digital techniques. Through structured activities and provided resources, they develop skills to effectively communicate user experiences, system behaviors, and design concepts across stakeholder groups. The workshop equips both new and experienced practitioners with tools to enhance design communication and cross-cultural collaboration in Human-Computer Interaction (HCI).
SESSION: Explainable AI for the Arts 3 (XAIxArts3)
Explainable AI for the Arts 3 (XAIxArts3)
The third workshop on Explainable AI for the Arts (XAIxArts) continues to bring together and expand a community of researchers and creative practitioners in Human-Computer Interaction (HCI), Interaction Design, AI, explainable AI (XAI), and Digital Arts to explore the role of XAI for the Arts. XAI is a key concern of Responsible and Human-Centred AI, emphasising the use of HCI techniques to explore how to make complicated and opaque AI models more understandable to people. The previous workshops moved from mapping the landscape of XAI for the Arts to co-developing an XAIxArts manifesto. To continue driving discourse on XAIxArts, the anticipated outcomes of this workshop are: i) fresh insights into the evolving challenges of AI bias, lack of transparency and barriers to inclusivity through discussion of current and emerging XAIxArts practices; ii) co-developed speculative futures which expand XAIxArts discourse beyond post-hoc rationalisations of AI decisions into the imaginative possibilities of AI as an interlocutor in the creative process; iii) plans for a co-developed proposal of an edited book on XAIxArts; and iv) community expansion and engagement in wider discourses on Responsible and Human-Centred AI.
SESSION: Distributed Creativity: Individual and Collaborative Ideation in a Filmmaking Process
Distributed Creativity: Individual and Collaborative Ideation in a Filmmaking Process
Many practitioners in the film industries see AI as a threat to meaningful human employment, but have difficulty articulating what, specifically, may be irreplaceable about human input in the creative process. This workshop considers the question: can we identify specific aspects of human cognition and experience that are uniquely at work in creativity on complex technologically mediated productions?
By engaging participants in activities and the semi-structured creative conversations that filmmakers (particularly writers, directors, producers, designers, and cinematographers) generally undertake when beginning a collaboration, this workshop aims to provide an experience of some of the multiple and diverse manifestations of human creativity and dynamic human systems in creative film production. Through participating in a guided experience of generating film ideas, stories, and plans for mise en scène, participants will develop a deeper understanding of the cognitive challenges and skills of collaborative creativity. We will discuss the points at which human centrality in the creation and consumption of cinematic cultural experiences should potentially be sustained and why, as well as how and where it could potentially be augmented (not replaced) by AI tools.
SESSION: Exploring the Creative Potential of AI in Filmmaking
Exploring the Creative Potential of AI in Filmmaking
This workshop explores the integration of Artificial Intelligence (AI) into filmmaking, focusing on AI-driven video content analysis (VCA), AI-assisted content creation, and ethical considerations. As AI continues to reshape creative workflows, it opens new possibilities for filmmaking while raising important questions about human-AI collaboration. The workshop aims to bridge perspectives between creative practitioners, industry professionals, and AI researchers, fostering interdisciplinary dialogue on AI’s evolving role in creative practice. We will discuss how AI-powered VCA advances film grammar analysis and audience cognition research, informing creative decision-making, and how generative AI supports production processes while expanding artistic possibilities. Additionally, we will examine how AI-driven analysis can inform ethical practices, alongside addressing the risks associated with generative AI. Through a combination of theoretical discussion and practical demonstrations, participants will gain hands-on experience with AI filmmaking tools while critically engaging with the future directions of AI-augmented creativity.
SESSION: Future Cinema and Performance in the Age of Artificial Intelligence and Extended Reality
1st Workshop on Future Cinema and Performance in the Age of Artificial Intelligence and Extended Reality
This workshop aims to become a platform for researchers and industry practitioners to explore the transformative impact of Artificial Intelligence (AI) and Extended Reality (XR) on cinema and performance. It will discuss cutting-edge AI and XR techniques that enhance creative processes, production, audience engagement, and prototyping methods. By fostering collaboration, sharing innovative approaches, and identifying research opportunities, the workshop seeks to advance the integration of AI and XR in artistic domains through educated community action. Participants will benefit from keynote presentations, screenings, and interactive group discussions. The workshop aims to foster the growing community dedicated to pioneering AI- and XR-driven artistic practices.
SESSION: Interaction Design as a Form of Decolonial Care
Interaction Design as a Form of Decolonial Care
As interaction designers address socially impactful issues, a caregiving approach introduces ethical and relational dimensions that enrich both research and practice. This workshop invites participants to view design as a community-centered practice responsive to diverse and often marginalized experiences. Through collaborative storytelling, reflection, and discussion, participants will explore the intersections of care with race, gender, and Southern epistemologies and perspectives, identifying strategies for fostering practices that centre interconnectedness. By centering the voices of activists as knowledge holders in care praxis and feminist epistemologies, this workshop seeks to challenge dominant paradigms in Human-Computer Interaction (HCI).
SESSION: Graduate Student Symposium
Creative dialogue: evolving intent in designer-AI workflows
The aim of this doctoral research project is to study how the use of GenAI can influence the creative workflow and cognitive processes of product designers in training. The focus is on the initial design stages of problem-solution framing and concept creation/exploration. As a first step, a literature review has been performed to identify the different variables that might influence results in similar studies. A second phase involves exploratory studies with bachelor-level product design students. Based on the findings, a series of rapid design prototypes proposing new ways of interacting with GenAI will be tested and evaluated. The main contributions of this project are the development of principles and new models for co-creation that can support the evolution of design intent during the ideation process and assist designers in training to develop their creative potential.
AI-Driven Interpretation of Nonverbal Communication in AR-Enhanced Real-Time Captions: Effects on Cognitive Load, Comprehension, and User Engagement
Current real-time captioning systems focus on transcribing speech, often overlooking facial expressions, body language, and vocal prosody that convey essential communicative cues. We present an AI-driven augmented reality (AR) captioning system that interprets non-verbal signals in real time and renders them as dynamic visual cues within the user’s view. Grounded in Cognitive Load Theory, cross-modal plasticity, and computational creativity, our approach supports Deaf and Hard of Hearing (DHH) and neurodiverse learners by transforming captions into creative, expressive media. We explore: (RQ1) how non-verbal cues affect comprehension, engagement, and creative interpretation; (RQ2) how cultural differences influence cue perception; and (RQ3) what AI and design strategies enable low-latency, customizable AR captions without increasing cognitive load. A user study shows 45% comprehension gains and 25% reduction in mental demand with emotional indicators in captions. Future work includes building a cross-cultural cue corpus, an open-source AR captioning pipeline, and design guidelines for inclusive STEM education, advancing accessibility and fostering creativity-driven communication.
Reclaiming Agency in the Age of AI Co-Writing: Locus of Control and Narrative Identity
Designing for Unpleasant Memories: Investigating the Role of Technology in Reflective Practices
While human memory-aiding technologies often focus on preserving and sharing positive moments, people also revisit their emotionally complex memories, including both positive and negative emotions, to reflect, make sense of their past experiences, and achieve personal growth. This research investigates how digital technologies can support such reflective engagement with unpleasant memories. Through three interrelated studies, I explore (1) how individuals with different cognitive thinking styles, ruminators and reflectors, engage with technology during reflection on unpleasant memories, (2) how existing reflective technologies support or overlook these practices, and (3) how co-design workshops can reveal new design directions that support personal growth. Collectively, these studies aim to inform the design of emotionally aware technologies that support psychological well-being by enabling meaningful reflection on personal experiences, including unpleasant ones.
A philosopher, an artist, and an HCI researcher write a dissertation: Designing Ethical Human-AI Systems for Co-Creativity
For more than a decade now, artists’ engagement with AI as a medium for creative expression has largely been left to emerge after the fact, and the incorporation of their values into these systems has only in recent years become a burgeoning area of exploration. Accordingly, in this dissertation, I ask how we can design harmonious AI systems for creative expression that optimize the balance between artists and AI systems while pushing boundaries and generating creative possibilities for the co-creative process. I take a four-part research approach, beginning by promoting the responsible design of creative AI, then studying the creative possibilities of AI in creative practice, followed by proposing novel interaction mechanisms, and concluding with a design philosophy that centers the artist in the Human-AI co-creative process.
AI DoodleLab: Fostering Middle School Students’ AI Literacy through Project-Based Creative Drawing
This doctoral research proposes AI DoodleLab, a web-based learning tool designed to foster AI literacy through project-based creative drawing. AI DoodleLab engages middle school students in teaching AI by drawing multiple examples of an imaginary creature, allowing the AI to extract features, generalize patterns, and generate a refined version. Through this process, children develop an intuitive understanding of core AI concepts, including supervised learning, feature extraction, generalization, and model limitations. This research will explore how creative, hands-on interactions enhance children’s AI literacy by designing AI DoodleLab. Future studies will focus on evaluation methods, such as qualitative analysis, interviews, and custom assessment instruments, to investigate the tool’s educational impact. Additionally, we aim to examine the interdisciplinary potential of AI DoodleLab in art classrooms and its expansion beyond drawing to storytelling and animation. This research contributes to designing effective AI education tools, positioning AI as a co-creative learning partner for young learners.
SESSION: Augmented and Virtual Reality
Designing OWN, The Inner World as a Virtual Space: By and For Introspection
Nurturing the inner world of emotions, thoughts, and self is essential to our well-being. However, the abstract and invisible nature of the inner world makes it difficult for people to sense and manage. What, then, if people could build their own virtual world in which to introspect on their inner selves? This pictorial presents the creation process of ‘OWN’, a personal virtual space where users, the OWNers, can explore and interact with their inner world. As part of the co-creation process, each OWNer expresses a personal narrative about themselves and their surroundings, allowing psychologists to gain in-depth insight into their world. The psychologists’ analysis of the OWNer’s inner state suggests various elements that the designer visualizes as spaces in the virtual world. By designing OWN through expressive activities, OWNers were able to reflect more deeply on their lives. OWN then became a virtual oasis where OWNers could relax, reflect, and improve themselves.
Eliciting Change Towards Better Virtual Worlds: A Workshop Process to Foster Ethical Reflection in Creative Technology Design Processes
The concept of the metaverse, recently re-emerging in public discourse, is viewed by some as the internet’s next evolutionary stage, while others regard it as a dubious promise of a future where physical and digital worlds merge seamlessly. As the metaverse takes shape, it is crucial to question whether its design will embody the values of the societies it aims to serve.
Emphasizing ethical and inclusive technology development is essential, particularly through educating about the social impacts and ethical challenges in computer science and design. Our research contributes to this goal by introducing a five-day workshop process designed to encourage ethically reflective technology development. This workshop process integrates speculative design, service design, and digital ethics methodologies. We showcase its effectiveness by detailing the outcomes of its implementation in two distinct educational settings: a seminar in Germany and a summer school in Taiwan, both centered on the development of metaverse applications.
Seeing the Sound: Supporting Musical Collaboration with Augmented Reality
In musical collaboration, digital musical instruments often hinder effective communication and engagement by restricting visibility and limiting gestural and non-verbal interactions. These challenges reduce musicians’ situational awareness and complicate cohesive performance. To address this, we developed a head-mounted augmented reality (AR) system to enhance collaborative musical experiences by visualising musicians’ hand movements, eye gaze positions, and instrument interactions in real-time. We conducted a user study involving four pairs of musicians performing live music using different AR interface configurations. The results suggest that the AR system can enhance situational awareness and assist collaboration, as reflected in questionnaire responses. Interviews indicated that real-time visualisations of bodily movements and interactions helped participants better understand the collaborative process and anticipate their collaborators’ actions. These findings point to the potential of AR-assisted visualisation to support creative collaboration by tailoring visual information to different needs. Future research could explore its application in broader contexts of real-time creative cooperation.
Speculative Design for the Metaverse: Anti-Experiences in Virtual Retail
Retail experiences in the Metaverse are becoming increasingly popular, driven by a business model that relies on brand engagement and user-generated virtual worlds (UGVWs). While much attention has been given to ideal design patterns for enhancing user engagement, there is a need for a critical examination of pain points that negatively impact their virtual product experiences. This study introduces anti-experiences as a methodological contribution, using speculative design to deliberately explore undesirable design elements and highlight engagement barriers in these virtual product experiences. Through group co-design and storyboarding, we conducted workshops with 30 participants using the Roblox platform as a case study, including XR designers, physical-world designers, and non-users of the Metaverse. Through this process, participants identified three key pain points: (1) missing diegetic elements, (2) fragmented social interactions, and (3) navigational ambiguity. By developing anti-design storyboards, participants critically examined how these pain points affect user engagement and satisfaction, uncovering potential risks and limitations of current Metaverse retail experiences. From these findings, we propose three key design insights to enhance virtual shopping environments: (1) Introducing play and balancing risk, (2) Shopping together between realities, (3) Orchestrating sensory richness and subtlety.
SESSION: Data
Content Authenticities: A Discussion on the Values of Provenance Data for Creatives and Their Audiences
The proliferation of AI-generated digital content has intensified the user demand for accurate provenance information to ensure content authenticity. Technical advancements now provide tools to make the digital media content supply chain more transparent through the use of provenance data. This paper foregrounds the importance of understanding how the situated nature of user-content engagement influences perceptions and uses of this data. Insights from a workshop with experts in the creative media sector suggest that, as the adoption of provenance data becomes more common, users need richer and more nuanced information. We suggest that analyzing the increasing demand for content authenticity through the lens of multiple “authenticities”, each reflecting different user needs and contexts, can help identify and address the needs for, and uses of, provenance data by creators and audiences alike.
Data Sins: Exploring Data Colonialism through Storytelling-Based Speculative Design Practices
DATA SINS explores the concept of data colonialism as an emerging phenomenon in the early 21st century, fostering a critical understanding of its implications for human autonomy. Through speculative design, the project investigates how utopian ideals in data-driven technologies obscure neocolonial practices of data production and appropriation. Grounded in a theoretical framework, the storytelling mimics historical colonialism by exposing how the intertwined political, economic, and religious powers shape data-driven rituals and artifacts. This inquiry gains particular relevance in the context of Brazil’s political conservatism over the past decade, which has consolidated a union between religious moralism, authoritarianism, and economic liberalism. The research underscores the non-neutrality of data, highlighting its role in shaping creative socio-technological design practices that safeguard not only the integrity of the self but the future of democracy itself.
DataPhysIT: An Image-Schema-Based Data Physicalisation Design Toolkit Developed by Research through Design
Data physicalisation has the potential to enhance communication and comprehension of data, as well as interaction with data. However, most current physicalisations do not exploit the full potential of this medium, remaining generic, passive and visual. We present image schemas as a design approach to address this and to promote data physicalisation design. Image schemas are abstract representations of multisensory experiences that promise to support the design process and encourage more intuitive, innovative, (inter)active, and multimodal designs. We conducted a research through design process to combine previously developed image-schema-based tools into a toolkit to increase their ease of use and efficiency. Our contributions are the emerging Data Physicalisation Inspiration Toolkit (DataPhysIT) and preliminary findings on how the toolkit influences the data physicalisation design process and design ideas.
Evaluating DataPhysIT: An Image-Schema-based Toolkit to support Data Physicalisation Design
A physical data representation may facilitate more meaningful communication and understanding of data. Nevertheless, existing data physicalisations do not fully realise their potential. To foster the design of data physicalisations, we recommend image schemas. Image schemas are abstract representations of experienced interactions with the environment and have been shown to support interface design. Incorporated into the Data Physicalisation Inspiration Toolkit (DataPhysIT), image schemas may also work as a tool to facilitate data physicalisation design. In this paper we present the evaluation of the DataPhysIT employed in the data physicalisation design process. To research the toolkit’s utilisation, its effect on design action, and the generated ideas, both qualitative and quantitative data were gathered. The results demonstrate that the toolkit was widely accepted, works as inspiration, and provides structure to the design process. It reduces the perceived effort and facilitates more intuitive and tangible ideas.
Making Sense of Our Data: Exploring Well-Being Self-Tracking Through Creative Collaboration
Mental health and well-being research increasingly recognizes the potential of data to support preventative care and foster meaningful sensemaking. However, traditional health data visualizations often overlook emotional depth, contextual relevance, and lived experiences. In response, four HCI researchers conducted a 35-day field study, examining their well-being data as participants and co-designers to reimagine their relationship with self-tracking technologies and health data representations. Grounded in participatory, soma-inspired, and design fiction principles, this study encouraged participants to move beyond passive data consumption through creative experimentation, embedding personal experiences and collaborative exploration to challenge conventional representations and reimagine well-being through design. The findings demonstrate how collaborative sensemaking can reframe well-being interventions as creative processes that empower individuals’ lived experiences. By foregrounding reflection and shared interpretation, this work contributes to the discourse on how creative explorations with biodata redefine our relationship with wearable technology, highlighting the role of trust in understanding sensitive data collaboratively.
SESSION: Cultural Reimagination
“Salt is the Soul of Hakka Baked Chicken”: Reimagining Traditional Chinese Culinary ICH for Modern Contexts Without Losing Tradition
Intangible Cultural Heritage (ICH) like traditional culinary practices face increasing pressure to adapt to globalization while maintaining their cultural authenticity. Centuries-old traditions in Chinese cuisine are subject to rapid changes for adaptation to contemporary tastes and dietary preferences. The preservation of these cultural practices requires approaches that can enable ICH practitioners to reimagine and recreate ICH for modern contexts. To address this, we created workshops where experienced practitioners of traditional Chinese cuisine co-created recipes using GenAI tools and realized the dishes. We found that GenAI inspired ICH practitioners to innovate recipes based on traditional workflows for broader audiences and adapt to modern dining contexts. However, GenAI-inspired co-creation posed challenges in maintaining the accuracy of original ICH workflows and preserving traditional flavors in the culinary outcomes. This study offers implications for designing human-AI collaborative processes for safeguarding and enhancing culinary ICH.
3R (Robots, Rooms, Relationships): Speculative Homes, Sentient Machines, and the Future of Domesticity
The pictorial explores the role of domestic robots in shaping relational and emotional experiences in home environments. Through a participatory workshop featuring a speculative card game 3R (Robots, Rooms, Relationships), participants crafted future home scenarios by combining different robot types, household settings, and interaction modes. The results emphasize how different robot forms evoke unique relational responses. This research investigates how speculative scenarios expand human imagination and foster new emotional and cognitive dimensions in human-robot interaction (HRI). Our findings suggest that domestic robots are not merely functional tools but co-creators of affective ecologies, prompting new design frameworks. We advocate for a shift from deterministic robot functionalities to open-ended, imaginative design approaches that foster co-creative relationships between humans and robots.
Designing a Digital Game for Natureculture Heritage Encounters
Human-computer interaction (HCI) increasingly explores the role of digital games in engaging audiences with cultural heritage. While heritage games exist, few effectively integrate participatory, reflective, and speculative storytelling to bridge nature and culture. Moreover, existing methods often lack inclusivity and fail to capture the evolving perspectives on heritage. Here, we show how an iterative Research through Design (RtD) approach led to the development of the Natureculture Heritage (NCH) Game, evolving from a board game to a digital platform that fosters deep engagement with Madeira’s natureculture heritage. Playtests revealed the game’s ability to enhance participation, inclusivity, and speculative storytelling through mechanics such as character embodiment, multi-perspective narration, and Hero’s Journey structuring. The study contributes to HCI and heritage research by demonstrating how digital storytelling games can support sustainable heritage engagement. Future directions include refining multiplayer interaction, integrating factual content, and expanding accessibility for diverse audiences.
LoRA-Based Pattern Generation for Yi Ethnic Embroidery Heritage Preservation
Alive Yi 2.0 combines cultural heritage, design innovation, and artificial intelligence (AI) to preserve and reimagine Yi minority embroidery patterns. Using a curated database of traditional Yi embroidery patterns, we implemented LoRA-based AI models to generate new designs that maintain cultural authenticity while enabling contemporary interpretations. This work transforms traditional patterns into modern variations through fine-tuned stable diffusion models, creating designs that respect cultural elements while appealing to younger generations. Our approach demonstrates the potential of AI-assisted design in cultural heritage preservation and provides a framework for using computational creativity to revitalize traditional heritage in the digital era.
RetroChat: Designing for the Preservation of Past Chinese Online Social Experiences
Rapid changes in social networks have transformed the way people express themselves, turning past neologisms, values, and mindsets embedded in these expressions into online heritage. How can we preserve these expressions as cultural heritage? Instead of traditional archiving methods for static material, we designed an interactive and experiential form of archiving for Chinese social networks. Using dialogue data from 2000-2010 on early Chinese social media, we developed a GPT-driven agent within a retro chat interface, emulating the language and expression style of the period for interaction. Results from a qualitative study with 18 participants show that the design captures the past chatting experience and evokes memory flashbacks and feelings of nostalgia through conversation. Participants, particularly those familiar with the era, adapted their language to match the agent’s chatting style. This study explores how the design of preservation methods for digital experiences can be informed by experiential representations supported by generative tools.
SESSION: Form, Feeling, and Fabrication
A Declarative Human-Robot Interaction Workflow: Integrating Improvisation and Materiality in Robotic Fabrication and Design
Collaborative robots, with their computational power and versatile manipulation capabilities, hold significant potential in design and fabrication. However, their reliance on predefined CAD models and algorithms limits their effectiveness in creative, dynamic, and unstructured contexts. In contrast, humans easily adapt to dynamic conditions during craft, improvise novel making strategies, and embrace emergent material properties as part of form-finding. This paper investigates how human-robot interaction (HRI) can augment human adaptability through the computational power of robotics while embracing materials as a medium for creativity. The main contribution of this study is a declarative HRI workflow and its software implementation that relates real-time sensor feedback to the robot’s action selection. The results showed how this workflow enabled the improvisation, reproduction, and modification of material expressions through dynamic tool path planning in robotic sand casting and clay forming. Consequently, this paper expands ongoing discussions on innovative ways to combine robotic technology with craft sensibilities.
Architecture and the Self: Empirical Inquiry on Christopher Alexander’s Theory of Living Structure
The Biophilia hypothesis emphasizes humanity’s intrinsic desire to connect with nature, which in turn nurtures authenticity. Architectural designs that echo natural patterns can evoke feelings of wholeness, inspiration, and comfort, as proposed by architectural theorist Christopher Alexander in his Theory of Living Structure. This study empirically examined the connection between architecture and the authentic self. Participants engaged with Chinese philosophical texts from Chuang Tzu and The Analects of Confucius to explore their authentic selves. They then evaluated image pairs, assessing preference, liveliness, and self-connection, with one image exemplifying living structures characterized by multiple scales, varied patterns, and interconnected centers. Our participants exhibited a strong preference for living structures. Notably, individuals with lower susceptibility to external influence, an essential component of authenticity, were more likely to perceive living structures as self-connected. Our findings offer valuable insights into human-centered architectural design aimed at fostering authentic living.
Crafting a Personal Journaling Practice: Negotiating Ecosystems of Materials, Personal Context, and Community in Analog Journaling
Analog journaling has grown in popularity, with journaling on paper encompassing a range of motivations, styles, and practices including planning, habit-tracking, and reflecting. Journalers develop strong personal preferences around the tools they use, the ideas they capture, and the layout in which they represent their ideas and memories. Understanding how analog journaling practices are individually shaped and crafted over time is critical to supporting the varied benefits associated with journaling, including improved mental health and positive support for identity development. To understand this development, we qualitatively analyzed publicly-shared journaling content from YouTube and Instagram and interviewed 11 journalers. We report on our identification of the journaling ecosystem in which journaling practices are shaped by materials, personal context, and communities, sharing how this ecosystem plays a role in the practices and identities of journalers as they customize their journaling routine to best suit their personal goals. Using these insights, we discuss design opportunities for how future tools can better align with and reflect the rich affordances and practices of journaling on paper.
Emotional Dynamics in Art Appreciation: Aesthetic Engagement with Realist and Surrealist Artworks
In a temporally extended perceptual encounter with an artwork, an ideal challenge presents a manageable degree of unpredictability. This unpredictability triggers emotional satisfaction derived from resolving perceptual uncertainties, which, in turn, leads to rewarding experiences that motivate further engagement. Here we simulated and amplified perceptual unpredictability across three stages: a blurred version of the artwork, a clear version, and prolonged exposure to the clear artwork, and measured viewers’ dynamic emotional responses throughout this unfolding process. Our findings reveal significant associations between individuals’ aesthetic experiences and shifts in their emotional responses, specifically concerning pleasure and arousal. Additionally, participants exhibited stronger emotional changes related to pleasure and dominance when unfolding realist paintings (certainty expected) compared to surrealist artworks (uncertainty expected). Overall, the results suggest that viewers generally appreciate unpredictability and experiences that transcend their preconceived expectations, fostering deeper scientific inquiry into predictive processing in human aesthetic appreciation.
RelieFab: Gradual-Depth 2.5D Texture Prototyping using a Laser Cutter
We propose RelieFab, a rapid prototyping method that produces 2.5D textured surfaces with gradual depth transitions. Conventional rapid prototyping methods using a laser cutter can quickly cut out parts for 3D prototypes, but their surfaces can only be flat and smooth. 3D printing offers detailed textures but is time-consuming. Our method enables the rapid creation of objects with finely detailed gradual-depth 2.5D textured surfaces, distinguishing it from conventional binary engraving using a laser cutter. By applying localized high-density energy from a laser cutter to low thermal conductivity polystyrene foam, the laser cutter can selectively remove only the specifically targeted parts of the surface of the polystyrene foam. We have built a computational model and conducted experiments to determine the appropriate laser parameters to accurately and stably remove the polystyrene foam. By utilizing the appropriate laser parameters derived from this process, we can demonstrate applications, including haptic textures and visual translucency.
SESSION: Opening Keynote
Procedural and Generative Art: when the Art-Subject Reverses Time to Design the Future
The emergence of generative and procedural art marks a fundamental shift in creative practice, where the traditional “art-object” (painting, sculpture…) transforms into an “art-subject” paradigm that challenges temporal assumptions about design and futurity. As computational systems evolve beyond mere extensions of human memory and processing capacity, they demonstrate a unique form of creativity that can be characterized as “imachination”—a term that captures the machine’s distinctive approach to creative synthesis, which parallels yet diverges from normative human imagination.
This transformation necessitates a reconceptualization of artistic skills and creative agency. While traditional artistic mastery centered on manual dexterity and technical craftsmanship (tekhnè vs tekhnè), contemporary practice reveals that manual execution may have restricted the creative process to a temporally and spatially contained form, or reification. We may consider that, until the 19th century, the art-object restrained canonic practice to the use of the hands, a constraint imposed by the inability to project mental concepts directly through the eyes. Assuming with Leonardo that “la pittura è cosa mentale”, the art-object translates into an interruption of the mental process at a highly specific moment, putting a full stop to its dynamic expression. Temporality becomes a good way to understand the outcomes of generative media. While traditional practice focusses on the trace left by the brush stroke or the chisel print in marble, the evidence of “what happened” frozen in time (Roland Barthes’ “Ça a été” in La chambre claire), generative and processual artworks are, conversely, designed for “what will come”. Metaphorically, we could say that one is the hundred-million-year-old petrified tree, the other the seed of promises, thoroughly designed by the artist to open up fields of virtuality. The temporal switch is striking, and it affects not only the artwork but also the artist, who is forced to reconsider their practice.
The advent of collective (Pierre Lévy) and connective (Derrick de Kerckhove) intelligence introduces creativity models based on unlimited repositories of historical references. This creates a fundamental paradox: we expect creativity to forge new pathways while simultaneously grounding it in comprehensive analysis of existing patterns. The limitations of traditional IQ-based assessments of logical-mathematical intelligence become apparent in this context: neurotypical thinking is a highly normative conception of human thought, and neurodivergent thinking patterns, characterized by rapid, non-linear connections across conceptual spaces, may provide more accurate models for understanding how imachination operates.
During this talk, I will present original case studies exploring the conceptual territory of human-machine creative interweaving. Through the definition of conceptual building blocks such as iterative curatorship, egonomy, extended serendipity, critical fusion, artificial intentionality, and maieutic engines, the talk sheds light on how the human creative role will evolve: not by competing with artificial intelligence, but through new forms of human-machine symbiosis that expand the boundaries of creative possibility, while presenting humankind with a magic mirror in which to see its true colors.
The investigation draws upon the ancient concept of clinamen–the random disruption of the flow of particles that generates diversity in atomic systems–as a metaphor for understanding how creative divergence functions in both human neurodivergence and artificial intelligence systems. By following these “lines of steepest slope” through the latent spaces of machine learning, we can better comprehend the emerging landscape of augmented creativity and its implications for future artistic practice.
SESSION: Posters and Technical Demonstrations
Taper: Creative Constraints and Minimalist Design in a Computational Poetry Publication
In an era defined by rapid technological evolution, digital publications are not only effective means of distribution; they also advance creativity, collaboration, and cultural impact.
This paper explores the seven-year journey of Taper, a magazine for computational poetry, broadly defined, that invites computational creativity and uses a minimal design. By embracing deliberate constraints, including a restriction on program/poem size and different themes for different issues, Taper fosters innovation through remix culture, experimentation, and collaboration. These approaches nurture a dynamic community of practice at the intersection of literary art and programming while advancing grassroots strategies for sustainable growth and long-term viability. Reflecting on Taper’s evolution, this paper illustrates how minimalist design principles and computational frameworks can amplify creative expression, strengthen community engagement, and cultivate ecosystems capable of addressing pressing societal challenges. These findings demonstrate how a collectively-edited project can spur artistic innovation and “creativity for change,” enabling lasting impact in a shifting creative landscape.
Understanding Students’ Acceptance, Trust, and Attitudes towards AI-generated Images for Educational Purposes
Recent advances in artificial intelligence (AI) have expanded the use of AI-generated images across sectors such as design and creative industries. However, their application in educational settings—particularly among undergraduate students in computer science and software engineering—remains underexplored. This exploratory study investigates students’ acceptance, trust, and attitudes toward AI-generated images for academic tasks, including presentations, reports, and web design. Using a mixed-methods approach involving questionnaires and interviews, the findings reveal generally high levels of acceptance and trust, driven by perceptions of ease of use and academic utility. Nonetheless, concerns about technical precision—such as the AI’s inability to consistently generate images aligned with detailed prompts—limit their effectiveness in accuracy-dependent tasks. The results highlight the need for clearer quality standards, ethical guidelines, and improved image generation capabilities to support educational use. Strengthening these tools to meet specific user requirements may enhance creativity and learning outcomes in technical disciplines.
No Hardware, No Problem: Wizard of LLMs for Scenario Based Prototyping of IoT Systems
This work in progress explores the potential of large language models (LLMs) for rapid prototyping of Internet of Things (IoT) projects. The study proposes a prompting framework to facilitate using LLM capabilities to generate simulated sensor data and events of IoT systems. We propose a proof-of-concept for an open-source web application named Wizard of LLMs, which is designed to facilitate scenario-based prototyping of IoT systems. The tool uses the GPT-4 API to simulate device states and human actors, reducing the need for hardware in IoT prototyping. The system was studied through claim analysis and focus group sessions. Participants recognized the effectiveness of this tool for early ideation without using hardware. However, a future research direction was identified to address the need to connect the system to real-world data and enhance the limited affordances that currently support creativity.
What Makes an AI System Human-Centered? Preliminary Findings from an Empirical Study
This study presents an empirical investigation into perceptions of human-centered artificial intelligence (HCAI) by analyzing qualitative responses from 136 AI practitioners, academics, and students. While AI systems are increasingly integrated into everyday life, a critical gap persists between theoretical HCAI frameworks and empirically derived, stakeholder-informed guidelines. To address this gap, participants were asked: “What makes an AI system human-centered?” Thematic analysis using affinity diagramming revealed seven core themes: Ethics and Privacy, User-Centric Design, Transparency and Explainability, Human Augmentation, Emotional Intelligence, Inclusivity, and Societal Responsibility. Among these, Ethics and User-Centric Design were most frequently emphasized, suggesting strong alignment between practitioner insights and existing theoretical models. This study contributes to HCAI research by identifying empirically grounded principles that support the development of AI systems that are ethical, transparent, inclusive, and responsive to human needs. The findings offer actionable guidance for AI developers, policymakers, and educators committed to advancing human-centered approaches that align technological innovation with societal values.
Sculpt-to-Image: Exploring Clay-based Sculpting to Support Human-AI-Co-Creation of Images
Interaction with clay is a creative and direct way to generate physical objects. It offers intuitive and playful possibilities to create sculptures and sceneries. This work explores how clay-based sculpting can be added to the text-to-image AI generation process. While image generation is increasingly popular, it lacks the creative process and feeling of authorship that sculpting can provide. Additionally, clay offers haptic control over the composition of images. We present Sculpt-to-Image, a system that combines sculpting physical clay with a generative AI (genAI) tool to provide users with means to influence the composition, express themselves, and have a human-AI co-creation experience. The setup was examined in an explorative user study (n=18). Our results show a positive user experience in hedonic and exploratory aspects. Clay influenced the participants’ creative process by supporting the generation and visualization of ideas, and enabling them to contribute physically to the generated images positively affected their feeling of authorship.
A Framework for Creative Experimentation in Extended Reality Using Face-Centered Spatial Relationships and On-Device Inference
Extended Reality (XR) offers a powerful conceptual framework and a creative ecosystem for developing spatially aware content. While proprietary wearable devices establish structured spatial relationships through user and environmental tracking, complementary open, cross-platform approaches enable artists and content creators to explore novel, unconventional interactions using accessible devices. Such approaches enhance key aspects of the creative process, including continuous sketching, self-observation, freedom of movement, mental mapping, and dynamic content transformation, while maintaining privacy. This paper presents an experimental framework for creative sketching and exploration in XR, leveraging spatial relationships between the creator, the device, and the surrounding space through face-tracking and smartphone sensors. The framework is implemented using Google’s MediaPipe, an open-source on-device machine-learning solution, and Unity. To demonstrate its potential, this study analyzes the arrangement of content 360° around the creator. Preliminary experimentations are presented and future research directions are discussed.
Generative AI for Affective Vibration: Human-Centered Evaluation of LLM and VAE Models
Haptic feedback plays a significant role in expressing emotions; however, there is a lack of research on haptics compared with the visual and audio channels. In this paper we investigate AI and machine learning methodologies for generating affective vibrotactile feedback. Two generative AI (GenAI) approaches to vibration generation were examined using a custom dataset of vibration-emotion pairings: a Variational Autoencoder (VAE) approach and a fine-tuned large language model (LLM) approach. A quantitative user study involving 15 people validated the GenAIs’ capabilities to generate vibrations conveying a range of levels of emotional valence or arousal. Subjective interviews conducted afterwards provided valuable insights for multimodal interaction design and future research topics in affective haptics.
Designing a Teaching Interface for Tacit Knowledge: Approach and Implementation
Tacit knowledge remains central in design, art, and craft practices. At the same time, it remains difficult to capture, understand, and communicate such knowledge efficiently. We present a three-step pilot study on capturing, understanding, and communicating tacit knowledge in woodworking. We first present a way to capture tacit knowledge through a multi-sensory system that listens to human, material, and tool encounters. Then, we analyze data from the system, comparing expert and amateur performances. Based on the analysis, we propose insights and design criteria for an educational interface for tacit learning in woodworking. Finally, we present a tangible interface based on these criteria and implemented for communicating tacit knowledge.
Algorithms in Art and Code: How Teaching Embodied Artmaking Procedures Can Stimulate Analytical Thinking in Art Crafting and Computer Programming
People have pointed to a connection between the creative arts and computing. In the present longitudinal pilot study, we taught six programmers and six non-programmers how to read and write written crochet patterns, with or without the accompanying crochet gestures. Half of the programmers (three participants) and half of the non-programmers (three participants) were taught with the gestures, while the other halves were not. Over two weeks, we individually taught participants crochet during three separate 30-minute sessions. In a fourth session, we tested participants on crochet and on elementary programming and algorithms. Test results showed that both programmers and non-programmers performed better on average on both tests when they learned with gestures. We interviewed all participants afterwards; programmers provided examples of how crochet demonstrated elementary programming ideas, while non-programmers described what they thought about programming. Our empirical study provides evidence of embodied cognition and offers contributions towards developing novel teaching methods in computer science.
Gamified VR Human Posing Application for Deep Learning Synthetic Data Generation
As deep learning systems increasingly depend on human image and video data, privacy concerns make real data collection costly and time-consuming. Many companies turn to synthetic data, which is easier to acquire, GDPR-compliant, and helps reduce model bias. However, creating diverse human poses remains labor-intensive. To address this, we introduce VRPoser, a VR-based serious game and tool for intuitive pose creation. Users take on the role of artists, completing story-driven quests to pose mannequins. These poses are converted to OpenPose skeletons and used with ControlNet and Stable Diffusion to generate synthetic images. When evaluated for usability, VRPoser was found to be intuitive, fun, and efficient. Comparisons with standard posing software show that VRPoser achieves similar results in less time.
Player Agency Under Constraint: A Pilot Study on the Forced-Choice Effect in Narrative Difficulty Design
Narrative games increasingly rely on player agency to support immersive, expressive play. Yet little empirical work explores how difficulty design interacts with agency. In this pilot study, five participants completed 75 gameplay attempts across two boss encounters in Elden Ring, using embedded support mechanics introduced through gameplay and narrative framing. We hypothesised that providing affordances for agency under pressure would help sustain immersion. Instead, we observed convergence toward dominant strategies—what we describe as a forced-choice effect—with variation in how players interpreted them. While some preserved an interpretive sense of acting in character, others shifted attention toward the mechanics themselves, pulling players out of the character’s perspective and weakening perceived agency. These findings raise questions about how agency is shaped not just by design, but by interpretation. We flag this not as conclusions, but as timely concerns in an under-explored space of narrative game design.
Design Explorations in Distal Haptics for Touchscreen Input and Users with Upper-Body Motor Impairments
Haptic feedback, commonly experienced as vibrotactile cues on mobile devices, increases user performance and enhances user experience, but research addressing users with upper-body motor impairments remains scarce. In particular, variations in how touch input is performed across different motor abilities raise questions about the optimal location, on-device or on-body, for vibrotactile feedback to maximize its effectiveness. In this work, we apply design thinking to explore alternative approaches for delivering haptic feedback at locations distant from the on-screen touch point, such as the user’s hand, wrist, forearm, or even the other arm. To inform our design explorations, we leverage empirical findings from a dataset of touch gestures performed on mobile devices by users with various upper-body motor impairments. We present future research opportunities at the intersection of haptic technology, wearable devices, accessible computing, and touchscreen input.
Terms of Being Used: Rethinking User Agency in Surveillance Capitalism
“Terms of Being Used” critically explores the relationship between users and social media platforms through the lens of surveillance capitalism. By reconfiguring Instagram’s “Terms of Use,” the project inverts the roles of subject and object, transforming users from active participants into passive products. Using AI-generated art, the project employs selfies—collected via a Python-based web crawler—to create unsettling videos in which the subjects appear to recite the reversed terms. These AI-generated videos are presented in an immersive installation that surrounds the audience with visual and auditory cues, highlighting the paradox of voluntary submission in the digital age. The project challenges viewers to confront their complicity in self-commodification and critiques the exploitation inherent in digital platforms. “Terms of Being Used” aims to provoke a reflection on user agency, digital identity, and the ethical implications of data commodification within surveillance-driven ecosystems.
How design students construct electronic circuit designs for tiny interactive prototypes: A study in an undergraduate course
Our ability to study, develop, and innovate has drastically transformed with the advent of maker technology and personal fabrication. An increasing number of people have used a wide variety of these creative technologies in recent years. Creating an electronic circuit inside a tiny prototype can be challenging for design students. Studies on design students’ circuit assembly tasks on tiny prototypes are scarce. This study investigated how students assemble circuits in tiny prototyping and which factors influence these activities. Our interview study involved undergraduate students using technologies to prototype interactivities at a design school. We analyzed the students’ prototypes and discussions using the grounded theory framework. We observed that tiny prototype circuits might be characterised by volume, composition, structure, dynamics, and isolating properties. These properties might depend on the prototype’s form factor, circuit routing needs, circuit base material, and circuit flexibility. Planning, experimenting, and the varied availability of conductive and circuit support materials might facilitate experimentation during circuit design. Our results shed light on the need for additional study and the development of boards and electronic componentry for educational practitioners.
AI-Supported Dance Performances Provoke Audiences to Seek Creative Merit and Meaning in AI’s Artistic Decisions
With the development of tools using generative artificial intelligence (GenAI) to create art, stakeholders cannot come to an agreement on the value of these works. In this study we uncovered the mixed opinions surrounding art made by AI.
We developed two versions of a dance performance augmented by technology, either with or without GenAI. For each version, we informed audiences of how the performance was developed either before or after they had taken a survey on their perceptions of the performance. Thirty-nine participants (13 male, 26 female) were recruited and divided among the four performances. After the survey, we conducted focus groups with a subset of audience members. Results demonstrated that individuals were more inclined to attribute artistic merit to works made by GenAI when they were unaware of its use. Our work contributes to the understanding of the design and reception of AI-made art.
Can AI Take a Joke—Or Make One? A Study of Humor Generation and Recognition in LLMs
Knowing when to joke—and when not to—is a subtle skill often missing in large language models (LLMs). This study examines how well LLMs generate and recognize humor in emotionally sensitive, support-oriented conversations. We introduce two targeted datasets: one for evaluating humor generation across distinct styles, and another for testing humor and speaker role recognition in human-written supportive statements. Using GPT-4o, LLaMA3, and Gemini 1.5, we assess humor style alignment, emotional appropriateness, and role sensitivity. While models produce fluent and stylistically varied humor, they often struggle with contextual nuance and role interpretation. GPT-4o consistently performs best in tone alignment and emotional fit, but subtle humor types remain challenging across models. These results highlight current limitations in LLMs’ pragmatic and relational understanding, underscoring the importance of human oversight in humor-sensitive applications.
Tracing the Invisible: Understanding Students’ Judgment in AI-Supported Design Work
As generative AI tools become integrated into design workflows, students increasingly engage with these tools not just as aids, but as collaborators. This study analyzes reflections from 33 student teams in an HCI design course to examine the kinds of judgments students make when using AI tools. We found both established forms of design judgment (e.g., instrumental, appreciative, quality) and emergent types: agency-distribution judgment and reliability judgment. These new forms capture how students negotiate creative responsibility with AI and assess the trustworthiness of its outputs. Our findings suggest that generative AI introduces new layers of complexity into design reasoning, prompting students to reflect not only on what AI produces, but also on how and when to rely on it. By foregrounding these judgments, we offer a conceptual lens for understanding how students engage in co-creative sensemaking with AI in design contexts.
Evaluating Narrative Coherence in Collaborative Storytelling with Generative AI
Collaborative storytelling leverages the creativity of diverse participants, but maintaining narrative coherence across branching plots presents a significant challenge. We introduce a co-creative storytelling system that integrates large language model–based text processing with symbolic narrative planning to ensure both flexibility and consistency in multi-author story worlds. Specifically, we employ a PDDL-style planning framework inspired by Sabre to automatically translate user-contributed story episodes into symbolic representations. Each episode is evaluated across three dimensions—explainability, narrative alignment, and episodic coherence—to assess its suitability for integration into the evolving storyline. When inconsistencies are detected, contributors receive automated feedback to support iterative refinement. A node-based interface visualizes the branching narrative structure, highlighting episodes based on coherence scores to assist users in navigating and shaping the story world. This system advances collaborative digital storytelling by supporting narrative integrity at scale, enhancing user creativity, and informing the design of future co-creative platforms.
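As a rough illustration of the three-dimension check described in this abstract, the sketch below scores a contributed episode on explainability, narrative alignment, and episodic coherence and returns any failing dimensions as feedback to the contributor. The scoring values, threshold, and function names are hypothetical placeholders, not the authors' Sabre-inspired PDDL pipeline.

```python
# Minimal sketch of the three-dimension episode check described above.
# The real system translates episodes into PDDL-style symbolic plans; here the
# three scores are assumed to have been computed upstream and are placeholders.
from dataclasses import dataclass

@dataclass
class EpisodeScores:
    explainability: float        # can the planner justify the episode's actions?
    narrative_alignment: float   # consistency with the established story world
    episodic_coherence: float    # internal consistency of the episode itself

def accept_episode(scores: EpisodeScores, threshold: float = 0.6) -> tuple[bool, list[str]]:
    """Accept an episode only if every dimension clears the threshold;
    otherwise return the dimensions to feed back to the contributor."""
    failing = [name for name, value in vars(scores).items() if value < threshold]
    return (not failing, failing)

ok, feedback = accept_episode(EpisodeScores(0.9, 0.4, 0.8))
print(ok, feedback)   # False ['narrative_alignment']
```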
My Data is a Mirror: Personal Data Physicalization & Practices of Positional-Reflexivity
We offer a process of data physicalization grounded in autoethnography and research/creation and suggest that this combination provides opportunities for deepening positional-reflexivity. Reflexivity about positionality is a necessary and fraught exercise; it requires researchers to see themselves in relation to the systems that they are working within, and it can become formulaic. We are interested in processes that can foster ongoing and deep reflection practices to support understanding of positionality. Starting from personal data physicalization, we provide an example of a personal data art piece (Curbside) produced by the first author Karly, and outline how the blending of methods deepened Karly’s positional-reflexivity. We suggest that personal data physicalization grounded in methods of research/creation and autoethnography may be one way for researchers to engage with the complexities of positionality.
The Enhanced Subgoal Manager: Supporting Complex Learning During Information Seeking for Creative Tasks
Traditional search systems and generative AI (GenAI) tools are designed for quick information retrieval, yet creative tasks involving complex learning require sustained engagement and deep cognitive processing. Existing AI and search tools rarely provide the structured support necessary for navigating the iterative, exploratory, and reflective nature of creative tasks. We introduce the Enhanced Subgoal Manager (EnSM), a GenAI-integrated tool specifically designed to scaffold complex learning during creative information-seeking tasks. Through structured reflection, goal-setting, and iterative adaptation across multiple sessions, the EnSM supports learners engaging in complex cognitive processes such as understanding abstract concepts, critically evaluating information, and synthesizing multiple sources. This work contributes to a broader reconceptualization of AI-assisted learning from rapid information access toward sustained, meaningful engagement.
SYNthia: An Interface Concept for Writing With Large Language Models
Artificial Intelligence (AI) can assist writers, but may also generate distracting or imperfect suggestions. Word choice presents a challenge for writers that is often addressed via thesaurus usage, but online thesauruses can disrupt writing flow by requiring context switching, and generally lack context awareness. We present SYNthia, an LLM-based word suggestion interface that lets users request synonyms and provide natural language context. Although we hypothesized that novice and expert writers would use SYNthia differently, in a user study (n=19) we found no significant differences, raising more questions for further study. However, qualitative results revealed that adding context increased cognitive load, but also encouraged critical thinking about word choice and even plot. Even when suggestions were rejected, SYNthia’s suggestions influenced writer decisions. These findings make progress towards the greater goal of integrating AI in the writing process while maintaining human agency and ownership. All code can be found at the Github repository: https://github.com/iliang1234/SYNthia
Beyond Measurement: The Emotion & Behavioral Intent Grid for Multi-interpretability in Environmental Art Exhibitions
While cultural institutions increasingly serve as platforms for sustainability education, traditional measurement approaches often fail to capture the diverse ways viewers interpret and emotionally connect with environmental art. This paper presents the development and implementation of the Emotion & Behavioral Intent Grid, a novel qualitative public engagement tool designed to bridge the measurement gap between aesthetic response and environmental engagement in exhibition spaces. Through a case study of “ArtScape: Connecting Art, Data, and Sustainability,” an interactive environmental art exhibition featuring Driessens & Verstappen’s “Fossil Remains,” this paper demonstrates how the proposed tool captured the relationship between emotional valence, intensity, and action readiness while maintaining visitor immersion and encouraging interpretive plurality. The findings suggest that environmental art installations can generate diverse interpretations and emotional responses ranging from distress to fascination, which potentially motivate different pathways to environmental action. This research contributes to our understanding of how multi-interpretable aesthetic experiences impact emotion, cognitive reflection, and collective meaning-making in sustainability contexts, offering cultural institutions practical tools for designing more effective and inclusive engagement approaches.
Promoting Collaboration and Empathy in an Arts-Based STEM Engagement Pilot with K-12 Tribal Students
This poster focuses on a pilot engagement activity implemented in 2024 with students ranging in age from elementary to early high school at a rural tribal school in the Midwest. Students worked in small groups on creative STEM activities designed to be empathy-driven, collaborative, and personally meaningful. Two of the activities, Avatar Making and Sphero Bots, are the focus of this poster; both aimed to foster empathy, critical thinking, and computational exploration among the K-12 student participants who were attending a summer enrichment program at the school. In Avatar Making, students worked in groups to create an avatar, a fictional character representing the experiences and feelings of a new student at their school. The Sphero Bots activity introduced coding and simple robotics through a visual coding interface and an analog obstacle course design. The activity was designed to encourage iteration, experimentation, and teamwork while allowing space for artistic expression. This poster provides observations from the pilot activities and early evidence of how such engagement work may be a useful method to nurture technical and social-emotional skills among similar students in other educational settings. The design strategies utilized highlight the potential of combining creative tools with STEM-focused content to support inclusive, student-centered learning. Questions for further research have been developed based on the pilot study.
Advanced electronic connections challenges for design students engaged with constructing prototypes of interactive artefacts
Creating advanced connections in the electronic circuits of an interactive prototype can be challenging for design students. Today, studies on design students’ circuit construction tasks on prototypes of interactive artefacts are scarce. This research investigates how students assemble advanced circuits in interactive prototypes and identifies the factors that influence these activities. This study involved undergraduate students using technologies to prototype interactive artefacts at a design school. We analyzed the students’ prototypes and discussions using the grounded theory framework. We observed that clear contextual information might affect the construction of advanced, complex circuit connections, and that this information might depend on the connection types and prototype properties. Our results point to the need for further study and for the redesign of construction tools and early prototyping courses for design students.
IntraNote: Prototyping a Design Rationale System with AI-driven Reflective Reasoning
We demonstrate IntraNote, short for InteRaction Annotator, a reflection tool that explores new possibilities for Design Rationale Systems (DRS). Collaborative design plays a crucial role in the architectural design process, where multiple designers discuss and interact with 2D or 3D design artifacts physically or virtually. This discussion and the context of interaction fade from memory over time, yet existing 3D CAD workflows lack support for capturing and revisiting these design actions. Current DRS can document structured decisions and textual summaries but typically neglect semantic content and real-time interactions, especially in 3D CAD environments. Recent advances in Large Language Models make it possible to track and interpret iterations in dialogue, presenting a chance to bridge this gap. To explore this opportunity, IntraNote synchronously records dialogues and 3D object interactions, and generates timeline-based interaction tracking, LLM-driven semantic annotations, and customized causal inference. It demonstrates how generative AI can support reflection, understanding, and recall in collaborative design.
DreamScape: Fostering Financial Cognition through a Serious Game
Financial literacy is critical for economic empowerment, yet access to effective financial education remains uneven. Serious games provide an interactive and accessible approach to financial education, fostering decision-making and cognitive development through immersive gameplay. In this demonstration, we present DreamScape, a narrative-driven serious game designed to teach financial literacy through interactive storytelling and puzzle-based mini-games. By leveraging narrative immersion and problem-solving mechanics, this online demo illustrates how interactive systems and creative design can foster economic empowerment, contributing to a more financially literate and equitable society.
Interactive Movement-to-Audio with Pre-Trained Neural Networks
Systems to interactively generate audio from human movement are used by artists including dancers to support their performances and practice. However, current real-time movement-to-sound systems require specialized hardware or expertise, or map only very simple movement-to-audio relationships. We present a new technique and system implementation for interactive sonification of human movement through unsupervised machine learning. Our system maps between latent spaces, linking a pose estimator to a neural audio generator to enable sonification of human bodies. This may lower barriers to entry for artists to generate sound from their embodied movement through complex mappings. Our system requires no specialized hardware or niche AI expertise, minimal data to learn a user’s custom movements, and trains extremely fast. It represents a new method for mapping custom data to a latent space through unsupervised learning, and advances state-of-the-art interactive movement sonification through its increased accessibility and ease of use relative to its complexity.
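The latent-to-latent mapping this abstract describes could, under assumptions, resemble the minimal PyTorch sketch below: a small network regresses from a pose-estimator embedding to the latent vector of a pretrained neural audio generator (for example, a model in the spirit of RAVE). The dimensions, architecture, and pairing of pose frames with audio latents are illustrative, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a small regression network that maps a
# pose-estimator embedding to the latent vector of a pretrained neural audio generator.
# Dimensions and the downstream decoder are assumptions for the sake of the example.
import torch
import torch.nn as nn

POSE_DIM, AUDIO_LATENT_DIM = 34, 16   # e.g. 17 keypoints x (x, y); assumed sizes

class Pose2AudioLatent(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM, 64), nn.ReLU(),
            nn.Linear(64, AUDIO_LATENT_DIM),
        )

    def forward(self, pose):            # pose: (batch, POSE_DIM)
        return self.net(pose)           # -> (batch, AUDIO_LATENT_DIM)

# Fitting on a short recording of a user's custom movements could pair pose frames
# with time-aligned audio latents; here we just show a forward pass.
model = Pose2AudioLatent()
latents = model(torch.randn(8, POSE_DIM))
print(latents.shape)                    # torch.Size([8, 16])
```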
Speculative Co-design With AI: An Artist-Friendly Prototype for Non-Human Avatar Creation
This technical demonstration responds to the growing importance of virtual identity. It introduces an artist-friendly, no-code prototype designed specifically to transparently integrate speculative AI processes into artistic co-design methods focused on virtual identity exploration. Artists and designers often struggle to effectively integrate AI into their creative practice because packaged commercial AI tools rarely accommodate their unique visual language and generally obscure creative control through black-box interfaces. These tools typically lack transparency for advanced users and are inconvenient when managing diverse creative inputs, especially in participatory co-design contexts involving multiple contributors. Addressing these challenges, this demonstration showcases a prototype developed using ComfyUI, a visual, node-based interface for building generative AI workflows without coding, combined with custom-trained LoRA models to ensure visual consistency and personal artistic expression in participant-driven avatar generation. Grounded in literature emphasising AI’s potential to foster imaginative engagement through transparency and creative flexibility, this transparent AI tool supports co-design activities, encouraging a wider community of creative practitioners to confidently experiment with speculative AI-driven co-design.
SESSION: Undergraduate Track
A Nature HCI Approach to Intergenerational Icebreaking
Technology and nature are often viewed as separate domains, yet their integration has the potential to foster deeper connections between generations. In this study, we explore how asynchronous video storytelling using the Marco Polo app can help connect older and younger adults while enhancing engagement with nature. Over a two-week period, participants recorded and exchanged short outdoor video reflections, creating discussions on personal experiences and memories tied to natural settings. Through analysis of participant diaries, interviews, and video interactions, we found that storytelling served as a useful icebreaker, with nature acting as a catalyst for meaningful intergenerational communication. Our findings highlight the role of place, time, and community in shaping these experiences, revealing both the benefits and limitations of asynchronous communication in fostering emotional connections. These insights inform the design of future digital tools to integrate storytelling, nature, and intergenerational engagement, ultimately strengthening social bonds through technology.
Decolonizing Computer Science: Holograms in Holborn
This paper describes “Holograms in Holborn”: a speculative design of an interactive holographic experience to engage people in reflection on decolonization in their everyday life. Specifically, the goal is to produce a speculative design which encourages people to apply foresight in considering colonial biases whilst going about their everyday activity on campus, at the University of the Arts London in High Holborn. The design uses 3D holograms, gesture-controlled maps, voice navigation, and 360-degree movement for immersive learning. It also draws inspiration from depictions of holograms from film. An iterative process was followed balancing the design goals to i) create an engaging experience, ii) encourage foresight in everyday experience and not just one time reflections, and iii) be inclusive in its interaction design. The paper explores possibilities for creating inclusive, community-driven digital experiences that challenge colonial legacies.
A Review of E-Textiles in Learning Environments
The field of electronic textiles (e-textiles) combines digital technology with textile objects, and has applications in fields such as wearable computing, theatrical design, and medicine. Prior work has examined deploying this technology in educational settings, to teach such skills as circuit design, computer programming, and iterative design. However, e-textile-based learning materials are still not commonly used, and more validated examples of such interventions would be valuable. The aim of this project is to investigate the state of the art in e-textile technology, especially in educational contexts, and to develop and evaluate an e-textiles intervention which could be deployed in a classroom or extra-curricular setting to teach introductory programming skills. So far, we have conducted a literature review examining applications of e-textiles in learning environments. For example, in one study [9], the researchers provided a safe environment for children with ASD (Autism Spectrum Disorder) to create their own sensory haptic toy. We found that many of the studies targeted middle and high school age children as a way to gauge and increase their knowledge of electricity and sewing techniques, but not many examined undergraduates. Therefore, in future work, we plan to conduct an experiment investigating the effectiveness of e-textiles in undergraduate learning.
Decolonizing Computer Science: The Immersive Elevator Experience
The immersive elevator experience is a speculative design for a system which informs people about the issue of decolonization in computer science. The specific area of decolonization that we have chosen to focus on is the issue of how people in first-world countries use and dispose of technology, and how this links to the exploitation of people in third-world countries. We underwent an iterative design process and created an interactive experience that is installed inside an elevator. The aim is to educate users about how the way they use technology is harming people in poorer regions. Through using the iterative design process and receiving feedback from lecturers and our peers, we continuously made changes to design an installation that was engaging and informative. We also discovered how to make our project as user-friendly as possible. This helped us to ensure that users would pay attention to the important issues that we are raising and gives us confidence that they will come out of the experience willing to make a change.
ThermAssist: Augmenting Heat Perception in Plastic Thermoforming Using Colorimetric Spray-on Diacetylene Polymer Sensors
The practice of thermoforming plastics relies on understanding the effects of temperature. Although simulations can predict these effects with precise material and equipment parameters, they often fail to communicate experiential knowledge of how different materials and processes interact. Tactile feedback and visual cues are central to determining whether a material is malleable, a skill that simulations cannot replicate. Our work explores the use of a heat-sensitive spray-on smart material made from polydiacetylene (PDA) to improve heat perception. This sensor exhibits reversible colorimetric changes in response to temperature variations from 100 °C to 200 °C, acting as a visual cue perceivable by humans. This study evaluates the sensitivity, accuracy, and practicality of PDAs in real-time temperature monitoring during vacuum forming and acrylic bending. Our findings demonstrate that PDA-based sensors enhance visibility of material dispersion, provide safeguards against critical temperatures, and illustrate heat flow and conductivity, thereby improving accessibility, literacy, and relationships with materials in thermoforming practices.
SESSION: Composing with the Machine
From Pen to Prompt: How Creative Writers Integrate AI into their Writing Practice
Creative writing is a deeply human craft, yet AI systems using large language models (LLMs) offer the automation of significant parts of the writing process. So why do some creative writers choose to use AI? Through interviews and observed writing sessions with 18 creative writers who already use AI regularly in their writing practice, we find that creative writers are intentional about how they incorporate AI, making many deliberate decisions about when and how to engage AI based on their core values, such as authenticity and craftsmanship. We characterize the interplay between writers’ values, their fluid relationships with AI, and specific integration strategies—ultimately enabling writers to create new AI workflows without compromising their creative values. We provide insight for writing communities, AI developers and future researchers on the importance of supporting transparency of these emerging writing processes and rethinking what AI features can best serve writers.
Nabokov’s Cards: An AI Assisted Prewriting System to Support Bottom-Up Creative Writing
We introduce Nabokov’s Cards, a creativity support tool that uses Large Language Models (LLMs) to support prewriting. Inspired by the writing process of Vladimir Nabokov, Nabokov’s Cards enables prewriting ideation by providing users with an interface to write idea fragments on notecards and combine them into new sentences or concepts using an LLM. We evaluated Nabokov’s Cards through a one-week user study with professional creative writers (n=13) to explore writers’ prewriting processes and learn about their usage of the system. Through our interviews, we found that writers characterized prewriting as a long, amorphous process that involved observations of the real world and the accumulation of idea fragments. Writers in our study found that Nabokov’s Cards facilitated prewriting through nonlinear interactions, divergent thinking, play, improvisation, and reflection. It also encouraged innovative approaches among writers that surpassed the clichés and redundancy often found within AI generated text today. We note how future AI co-writing systems may benefit from designs that facilitate prompt engineering and modular thinking.
The Co-Creative Design Framework for Hybrid Intelligence
With the rapid advancement of generative AI, co-creation has emerged as a key interaction paradigm, enabling humans and AI to collaborate in creative processes. However, despite decades of research on co-creativity, recent AI developments often lack a structured framework to integrate these insights effectively. To address this gap, we propose the Co-Creative Design Framework (CCDF), which formalizes human-AI co-creation through cognitive and interaction principles. The framework is structured around three core dimensions: agency, which defines the balance of autonomy and control between user and AI; interaction dynamics, which describe the evolving relationship between collaborators and their shared creative product; and communication, which governs information exchange between human and AI. The CCDF provides a systematic approach to modeling co-creative AI and hybrid intelligence systems, defining key dimensions of variance that shape the interaction space of co-creation. In particular, it highlights agency and interaction dynamics, which have been underexplored in recent co-creative AI frameworks. This paper details the iterative development of CCDF, synthesizing insights from co-creativity literature and AI research. We apply the framework in a comparative analysis of Traditional ChatGPT, ChatGPT Canvas Mode, and DALL-E, demonstrating its ability to capture fine-grained differences in system design and user experience.
Thoughtful, Confused, or Untrustworthy: How Text Presentation Influences Perceptions of AI Writing Tools
AI writing tools have been shown to dramatically change the way people write, yet the effects of AI text presentation are neither well understood nor always intentionally designed. Although text presentation in existing large language model interfaces is linked to the speed of the underlying model, text presentation speed can impact perceptions of AI systems, potentially influencing whether AI suggestions are accepted or rejected. In this paper, we analyze the effects of varying text generation speed in creative and professional writing scenarios on an online platform (n = 297). We find that speed is correlated with perceived humanness and trustworthiness of the AI tool, as well as the perceived quality of the generated text. We discuss the implications for creative and writing processes, along with future steps in the intentional design of AI writing tool interfaces.
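A toy sketch of the manipulated variable in this study, text presentation speed, is shown below: the same sentence is revealed word by word at a slow and a fast rate. The speeds and text are illustrative only and are not drawn from the paper.

```python
# Toy sketch of varying text presentation speed: reveal text word by word,
# pausing to simulate a given generation rate.
import sys
import time

def stream_text(text: str, tokens_per_second: float) -> None:
    """Print text word by word at roughly the requested rate."""
    delay = 1.0 / tokens_per_second
    for token in text.split():
        sys.stdout.write(token + " ")
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

stream_text("The storm finally broke over the harbor.", tokens_per_second=3)   # slow
stream_text("The storm finally broke over the harbor.", tokens_per_second=30)  # fast
```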
WhatIF: Branched Narrative Fiction Visualization for Authoring Emergent Narratives using Large Language Models
Branched Narrative Fiction (BNF) refers to non-linear, text-based narrative games in which the player is an active participant shaping the story. Unlike linear narratives, BNF allows players to influence the direction, outcomes, and progression of the plot. A narrative fiction developer designs these branching storylines, creating a dynamic interaction between the player and the narrative, which requires significant time and skill. In this work we build and investigate the use of a visual analytics tool to help narrative fiction developers generate and plan these parallel worlds within a BNF. We present WhatIF, a visual analytics tool that helps BNF developers create BNF graphs, edit the graphs, obtain recommendations, visualize differences between storylines, and finally verify their BNF against custom metrics. Through a formative study (3 participants) and a user study (11 participants), we observe that WhatIF helps users plan and prototype their BNF, provides avenues to support iterative refinement of the narrative, and also aids in removing writer’s block. Furthermore, we explore how contemporary generative AI (GenAI) tools can empower game developers to build richer and more immersive narratives.
SESSION: Visual Thinking, Co-Creativity, and Design
“The Diagram is like Guardrails”: Structuring GenAI-assisted Hypotheses Exploration with an Interactive Shared Representation
Data analysis encompasses a spectrum of tasks, from high-level conceptual reasoning to lower-level execution. While AI-powered tools increasingly support execution tasks, there remains a need for intelligent assistance in conceptual tasks. This paper investigates the design of interactive tree diagrams as effective shared representations for AI-assisted hypothesis exploration. We developed a system with an ordered node-link diagram augmented with AI-generated information hints and visualizations. Through a design probe (n=22), participants generated diagrams averaging 21.82 hypotheses. Our findings showed that the node-link diagram acts as “guardrails” for hypothesis exploration, facilitating structured workflows, providing overviews, and enabling backtracking. The AI-generated information hints, particularly visualizations, aided users in transforming abstract ideas into data-backed concepts while reducing cognitive load. We further discuss how node-link diagrams can support both parallel exploration and iterative refinement in hypothesis formulation, potentially enhancing the breadth and depth of human-AI collaborative data analysis.
Coping with Uncertainty in UX Design Practice: Practitioner Strategies and Judgment
The complexity of UX design practice extends beyond ill-structured design problems to include uncertainties shaped by shifting stakeholder priorities, team dynamics, limited resources, and implementation constraints. While prior research in related fields has addressed uncertainty in design more broadly, the specific character of uncertainty in UX practice remains underexplored. This study examines how UX practitioners experience and respond to uncertainty in real-world projects, drawing on a multi-week diary study and follow-up interviews with ten designers. We identify a range of practitioner strategies—including adaptive framing, negotiation, and judgment—that allow designers to move forward amid ambiguity. Our findings highlight the central role of design judgment in navigating uncertainty, including emergent forms such as temporal and sacrificial judgment, and extend prior understandings by showing how UX practitioners engage uncertainty as a persistent, situated feature of practice.
Fuzzy Linkography: Automatic Graphical Summarization of Creative Activity Traces
Linkography—the analysis of links between the design moves that make up an episode of creative ideation or design—can be used for both visual and quantitative assessment of creative activity traces. Traditional linkography, however, is time-consuming, requiring a human coder to manually annotate both the design moves within an episode and the connections between them. As a result, linkography has not yet been much applied at scale. To address this limitation, we introduce fuzzy linkography: a means of automatically constructing a linkograph from a sequence of recorded design moves via a “fuzzy” computational model of semantic similarity, enabling wider deployment and new applications of linkographic techniques. We apply fuzzy linkography to three markedly different kinds of creative activity traces (text-to-image prompting journeys, LLM-supported ideation sessions, and researcher publication histories) and discuss our findings, as well as strengths, limitations, and potential future applications of our approach.
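A minimal sketch of the core idea, assuming design moves are available as short text strings, is shown below: each move is embedded and a link is drawn whenever the pairwise similarity clears a threshold. TF-IDF cosine similarity stands in here for the paper's "fuzzy" semantic-similarity model, and the example moves and threshold are illustrative.

```python
# Minimal sketch of the idea behind fuzzy linkography: embed each recorded design move,
# then draw a link between two moves when their semantic similarity exceeds a threshold.
# TF-IDF is a simple stand-in for the semantic model described in the paper.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

moves = [
    "sketch a bird with outstretched wings",
    "try a more geometric bird, like origami",
    "switch the palette to warm sunset colors",
    "render the origami bird against a sunset sky",
]

vectors = TfidfVectorizer().fit_transform(moves)
sim = cosine_similarity(vectors)

THRESHOLD = 0.2   # illustrative cutoff
links = [(i, j, round(float(sim[i, j]), 2))
         for i, j in combinations(range(len(moves)), 2)
         if sim[i, j] >= THRESHOLD]
print(links)      # pairs of move indices with their similarity scores
```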
Human-Centered AI Communication in Co-Creativity: An Initial Framework and Insights
Effective communication between AI and humans is essential for successful human-AI co-creation. However, many current co-creative AI systems lack effective communication, which limits their potential for collaboration. This paper presents the initial design of the Framework for AI Communication (FAICO) for co-creative AI, developed through a systematic review of 107 full-length papers. FAICO presents key aspects of AI communication and their impact on user experience, offering preliminary guidelines for designing human-centered AI communication. To improve the framework, we conducted a preliminary study with two focus groups involving skilled individuals in AI, HCI, and design. These sessions sought to understand participants’ preferences for AI communication, gather their perceptions of the framework, collect feedback for refinement, and explore its use in co-creative domains like collaborative writing and design. Our findings reveal a preference for a human-AI feedback loop over linear communication and emphasize the importance of context in fostering mutual understanding. Based on these insights, we propose actionable strategies for applying FAICO in practice and future directions, marking the first step toward developing comprehensive guidelines for designing effective human-centered AI communication in co-creation.
Reflection on Iteration and Interruption: What translating ethnographic work into comics reveals about dominant assumptions in research and design
This pictorial explores how researchers’ collaboration with an illustrator to create comics that visualize ethnographic research on creative family learning experiences revealed assumptions about family learning, and the facilitation of these experiences. We identify and visualize several key moments in the comic drafting process where an assumption made in the first comic draft needed to be addressed through careful visual revision by the illustrator, informed by feedback from researchers, to more accurately communicate the research. We conclude with insights for researchers and designers who are interested in using comics as a creative method for communicating research and a tool for uncovering new perspectives on their research.
SESSION: Alternative Views
“Can the rest of the world have flush toilets? No. Composting toilets? Yes!”: Mediating the Human-Nature Relations by Composting Toilets
Flush toilets pose challenges related to resource waste, public health, and social justice. To achieve the Sustainable Development Goal of clean water and sanitation for all, it is essential to address the complex issues around toilets and design alternative systems that effectively manage human waste. Inspired by Actor-Network Theory (ANT), this paper presents findings from a design ethnography in eco-villages about composting toilets and sustainable living. We describe three archetypal composting toilet systems and examine how they operate as creative, community-driven infrastructures that mediate human–nature relationships. We reflect on how ANT provides a useful lens for HCI to understand the socio-technical dynamics of alternative sanitation systems. We analyse how composting toilets mediate human-nature relations through three interconnected processes: tinkering, linking, and becoming. We discuss the implications of fostering DIY infrastructure practices as a form of transformational creativity in the pursuit of more sustainable futures.
Entangled Weathers: A Noticing Tactic
This pictorial introduces entangled weathers, a noticing tactic that poetically examines the interplay between our internal states and external weather. By drawing from the feminist concept of weathering and the introspective practice of noticing the weather inside, this work highlights how bodies are situated in a dynamic overlap between nature, culture, space, and time. Anchored in stories grounded in the author’s design practice, this pictorial illustrates how cultivating sensibilities towards the weather brings several actionable viewpoints to notice, including (1) measuring and mapping, (2) uncovering space and time reflections, (3) weathering narratives, (4) metaphorising and (5) surrendering control. This work invites design researchers to reflect on their own weathers and emerging themes, connecting with sensuous knowledge central to more-than-human design.
Illustrating Creative Applications of Data and Technology: A Visual Vocabulary
Contemporary technologies and data-driven methods have much potential to support innovation in the creative industries – from fashion design and craft, to architecture and music. However, discussing and understanding the applied potential of data and technology can be especially difficult for creative practitioners who have limited previous experience with data-driven research and development. In this pictorial, we address this challenge through the design of a ‘Visual Vocabulary’ of illustrations aimed to scaffold creative practitioners’ thinking about how they might innovatively employ a diverse range of data and technology to address their creative and business challenges. The illustrations serve as a resource for subverting common imageries of technologies and computational methods in popular media – which often fail to showcase their many creative affordances. Moreover, as an ideation card deck, they also serve to support discussion and exploration of new data-driven projects for creative practitioners.
Perceptual Biases in Multiview Navigation: Insights from Embodied Spatial Cognition and Mental Rotation
This study revisits the Mental Rotation Experiment (MRE) to explore how predictive processing and proprioception influence visuo-spatial performance. By manipulating the pose and orientation of test stimuli—and their spatial relationship with the observer—we observe significant response time improvements (over 200 ms) when stimuli align with embodied expectations derived from prior proprioceptive experiences. Based on these findings, we propose a model of visual cognition that integrates three concurrent, interacting processes: unconscious predictive processing, rapid pixel matching, and the conscious process of mental rotation using the mind’s eye. These mechanisms work together to enhance spatial task accuracy. Our insights have practical implications for multiview navigation interfaces, such as CAD tools and surveillance systems, where optimised spatial arrangements can reduce cognitive load. This study highlights how spatial congruence and embodied cognition can inform usability and accessibility improvements, with potential applications in creative systems, spatial navigation tasks, and environments with altered gravity conditions.
SESSION: Fragments of Thought
Beyond Productivity: Rethinking the Impact of Creativity Support Tools
Creativity Support Tools (CSTs) are widely used across diverse creative domains, with generative AI recently increasing the abilities of CSTs. To better understand how the success of CSTs is determined in the literature, we conducted a review of outcome measures used in CST evaluations. Drawing from (n=173) CST evaluations in the ACM Digital Library, we identified the metrics commonly employed to assess user interactions with CSTs. Our findings reveal prevailing trends in current evaluation practices, while exposing underexplored measures that could broaden the scope of future research. Based on these results, we argue for a more holistic approach to evaluating CSTs, encouraging the HCI community to consider not only user experience and the quality of the generated output, but also user-centric aspects such as self-reflection and well-being as critical dimensions of assessment. We also highlight a need for validated measures specifically suited to the evaluation of generative AI in CSTs.
Brewing Banter: Augmenting Intercontinental Studio Classes for Casual Communication
Modern online platforms provide highly functional live operational tools for Collaborative Online International Learning (COIL). However, teams of students often face communication challenges due to differences in cultural background, experience and language, which hinder true collaboration. This pictorial presents Brewing Banter, a concept designed to foster spontaneous, informal communication in art and design studio environments, using the analogy of water cooler conversations. Combining physical and digital interfaces, it connects geographically distant classrooms to encourage the exchange of perspectives in creative practice. The design augments activities in the studio space using peripheral awareness. Finally, we discuss perspectives on spatial approaches to blended learning in higher education.
Lyric Poetry in the Face of Posthumanism: An Analysis of Generative AI-Assisted Poetry Writing
Generative AI seems poised to transform a wide range of endeavors once thought to be solely the domain of humans—from journalism to legal practice to creative expression—into collaborative activities involving both human and machine. Poetry is no exception, as even general-purpose language models now routinely generate convincing emulations of poetic form. While researchers have closely examined such machine-generated poetry, few have studied human-AI collaboration in poetry writing from a posthuman perspective. Through semi-structured interviews with ten participants in an AI English poetry contest and an analysis of their dialogs with AIs, we summarize the affordances and challenges of such collaborative practice using posthumanism as a lens. We then expose interesting tensions, for example, between human self-expression and the diminished (or relocated) agency that AI collaboration often entails. This collaborative, yet often adversarial, process provides insights into the nature of the posthuman condition as regards creative collaboration between human and machine.
Orchid: A Creative Approach for Authoring LLM-Driven Interactive Narratives
Integrating Large Language Models (LLMs) into Interactive Digital Narratives (IDNs) enables dynamic storytelling where user interactions shape the narrative in real time, challenging traditional authoring methods. This paper presents the design study of Orchid, a creative approach for authoring LLM-driven IDNs. Orchid allows users to structure the hierarchy of narrative stages and define the rules governing LLM narrative generation and transitions between stages. The development of Orchid consists of three phases: 1) formulating Orchid through desk research on existing IDN practices; 2) implementing a technology probe based on Orchid; and 3) evaluating how IDN authors use Orchid to design IDNs, verifying Orchid’s hypotheses, and exploring user needs for future authoring tools. This study confirms that authors are open to LLM-driven IDNs but desire strong authorial agency over narrative structures, particularly regarding accuracy in branching transitions and story details. Future design implications for Orchid include introducing deterministic variable handling, support for trans-media applications, and narrative consistency across branches.
SESSION: Closing Keynote
Design After Extraction
In a world characterized by rapid change and uncertainty, I’ve come to realize that our instinct for control only exacerbates complexity. Amidst our urge to corral the vast intricacies of our world, only to find that we’ve magnified the challenge, I choose a different story about design and technology. One where design is not a tool to be wielded according to our intentions, but humble and adaptive attunement. And one where technology doesn’t just serve human needs, but helps us connect to other-than-human scales and sensibilities, opening up new possibilities for our collective survival.
SESSION: Art Exhibit
_Prog[W]res[tle]_: An Interactive Browser-Based Artwork
_Prog[W]res[tle]_ is a browser-based interactive artwork that examines the tangled urgencies of contemporary life—late-stage capitalism, the climate emergency, the relationship between dis/misinformation and radicalization, and gender-based power imbalances to name a few—through the unique linguistic and aesthetic lens of Mezangelle (a fusion of programming syntax and poetic expression). Structured around five sections of parallel scrolling text and imagery, the piece immerses viewers in a layered, non-linear journey that confronts the oscillation between despair and hope. By allowing both auto-scrolling and manual user control, _Prog[W]res[tle]_ highlights personal agency in interpreting societal and environmental challenges. This multi-modal narrative underscores an obligation to assess the state of late-stage capitalism, inspire dialogue concerning what lies past it, to foster empathy and provoke meaningful change in a time of global and personal upheaval.
Image of the Forest: Cognitive and Affective Responses to Spectral Frequencies in Virtual Nature
At present, the average user spends approximately seven hours per day consuming various types of digital media that aim to approximate or enhance real-world experiences. Image of the Forest is an immersive experience that provides a thorough account of the questions central to research on human interaction with artifacts that mimic aspects of nature within computationally generated environments. With the use of affective brain-computer interfaces (aBCIs), we aim to critically reflect on the ability of machine models of reality to provide real world experiences. We also address the research gap in understanding how spectral frequency manipulations can influence cognitive and affective responses.
Learning to Move, Learning to Play, Learning to Animate
Learning to Move, Learning to Play, Learning to Animate is a cross-disciplinary multimedia performance exploring the entanglement of human, synthetic, and organic intelligences. Through an interplay of human performers, robots crafted from organic materials, real-time AI-generated visuals, and biofeedback-driven soundscapes, the work critiques anthropocentrism and reimagines intelligence as an emergent, interdependent phenomenon. Drawing from David Abram’s concept of the “more-than-human world,” the performance dissolves traditional boundaries between nature and technology, fostering an interactive environment where embodied intelligence unfolds across physical and virtual dimensions. Shadows, movement, and sonic resonance serve as conduits for co-creation, allowing audiences to experience a world where intelligence is not singular but symbiotic. The work invites reflection on learning, adaptation, and coexistence, embracing a vision of shared creativity beyond human-centered paradigms.
Techno-empathy: Iterative Emotion Visualization
Techno-empathy is an interactive art experience that iteratively visualizes the emotions and perceptions of multiple participants in real time and presents them as generative visuals. This research-based artwork develops a workflow using heart rate as a physiological signal, including multiple participants’ emotional perception of and reaction to visual stimuli, to explore iterative emotion visualization as a form of technology-mediated empathy, i.e., techno-empathy. Real-time heart rate (HR) data from participants were mapped to dynamic visuals, creating an interactive and generative work that fosters emotional perception and empathy in multi-user collaborations. A crossover experiment with four participants showed that the workflow implemented in the artwork enhanced emotional contagion, scoring high in visibility, engagement, and rationality, though improvements are needed in concentration and autonomy. Contributions include the development of a heart-rate-based emotion visualization workflow (i.e., techno-empathy, a real-time iterative mapping from multiple participants’ biodata to visual art), its implementation as an interactive artwork, and the validation of the proposed approach. This study broadens emotion visualization applications in Affective Computing and offers new insights into multi-user emotional interactions.
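A minimal sketch of the kind of mapping the abstract describes, from a heart-rate reading to parameters of a generative visual, appears below; the rest/peak bounds and the chosen visual parameters are hypothetical, not the artists' actual pipeline.

```python
# Illustrative sketch (not the artwork's pipeline): normalize a participant's heart rate
# into a 0-1 arousal value and map it to parameters of a generative visual.
def hr_to_visual_params(hr_bpm: float, rest: float = 60.0, peak: float = 120.0) -> dict:
    arousal = min(max((hr_bpm - rest) / (peak - rest), 0.0), 1.0)
    return {
        "hue_deg": 220 - 180 * arousal,   # calm blue -> agitated red
        "pulse_rate_hz": hr_bpm / 60.0,   # visuals pulse in time with the heart
        "particle_count": int(200 + 800 * arousal),
    }

print(hr_to_visual_params(72))
print(hr_to_visual_params(110))
```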
Bend to: Explore the Proprioceptive Interaction Between Plants and Post-humans in an Immersive Experience
Bend to is an embodied experience based on plants’ phototropic sense in the digital sympathetic environment. By utilizing 3D scanning and time-lapse photography techniques, we gather plant phototropic data, which is then processed using the Support Vector Regression (SVR) model for prediction and ultimately visualized in a virtual reality experience. It immerses individuals into the sensual experience of plants’ responses to surroundings through proprioceptive interaction. With the embodied engagement of hand-tracking and virtual touching, this artwork aims to enhance participants’ awareness of their bodily position and strengthen connections with the spatial senses of plants. More importantly, it encourages individuals to explore the intricate relationship between nature and post-humans in the future light-radiated environment.
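The prediction step mentioned above could, under assumptions, resemble the scikit-learn sketch below: an SVR model is fit on time-stamped bend angles (synthetic here, standing in for values extracted from the 3D scans and time-lapse frames) and extrapolates the plant's phototropic bending for upcoming times. All values are illustrative.

```python
# Minimal sketch of Support Vector Regression for phototropic prediction.
# The bend-angle data here are synthetic stand-ins for measurements derived
# from 3D scanning and time-lapse photography.
import numpy as np
from sklearn.svm import SVR

hours = np.arange(0, 48, 2).reshape(-1, 1)                        # observation times
bend_deg = 25 * np.tanh(hours.ravel() / 12) + np.random.normal(0, 1, hours.shape[0])

model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(hours, bend_deg)
future = np.array([[50], [54], [60]])
print(model.predict(future))   # predicted bend angles driving the VR visualization
```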
Artificial Life, One Leg at a Time: The Aesthetics of Trial-and-Error in AI Training
This paper explores Artificial Life: One Leg at a Time, an AI art installation that translates the reinforcement learning process into an aesthetic and perceptual experience. The artwork simulates an agent learning to walk and run within a virtual environment, presenting its trial-and-error journey through various training scenarios. The depiction of the agent’s frequent failures introduces a layer of humor, inviting viewers to engage with the AI in an entertaining manner. By visualizing both the internal mechanisms of reinforcement learning and the humorous aspects of failure, the artwork demystifies AI, transforming its complexities into a perceptual experience. This study demonstrates how media art can provide an innovative lens through which to perceive AI, enhancing the public’s grasp of its operational mechanics and inherent constraints. It further shows how aesthetic strategies can reveal the iterative, often unpredictable nature of machine learning, encouraging critical reflection on the interplay between technology and artistic expression.
Blowing one hundred dandelions
Blowing One Hundred Dandelions is a research-through-design exploration that transforms an ephemeral, playful act into a deeper investigation of our relationship with time, the surrounding environment, and everyday life. The project consists of five progressively efficient products, ranging from a simple mouthpiece to an electric fan mechanism, allowing the designer to blow more dandelions at a time. When dandelions were in full bloom, these products were taken to various parks and public green spaces in London to accomplish their mission. The process was documented and presented through photographs. Through the process of seeking, collecting, and blowing dandelions, this project served as a medium to reconfigure the designer’s perception of the surroundings and of dandelions as resilient life forms. By interacting with the dandelion life cycle, the project highlights the dynamic interplay between natural rhythms, human influence, and designed artifacts.
Soundscape Thresholds: AI Hallucinations Reimagining the Sacred Experience of Ming Rituals
This video work explores how AI-generated hallucinations can recreate the spiritual dimension of the “unseen soundscape space” in Ming Dynasty heaven-worship rituals. Based on studies of Jiajing-era ceremonial music, the ritual constructs a liminal space for divine-human dialogue through specific soundscapes—rhythmic chanting, instrumental resonance, and synesthetic cultural symbols. Unlike fixed visual representations, its sacredness emerges from auditory ambiguity, forming an ontological contrast between clarity and transcendence.
The triptych video consists of: (1) Visual archive: over a thousand layered cultural symbols evolving with ritual lyrics; (2) Ritual program controller: an interactive diagram of “sound-symbol-space” from a celestial perspective; (3) AI-generated audiovisual hallucinations: a particle system merging historical imagery and sonic fluctuations. Here, AI hallucinations are not flaws but emergent collective imaginaries—when deities and mythical beings carry multiple historical archetypes, their indeterminacy reconstructs the sacred as an unfixed epistemic threshold.
Resonant Timekeepers: Echoes of the Periodical Cicada Emergence
This work captures and re-imagines the acoustic phenomenon of the rare emergence of periodical 17-year cicadas along the Sangamon River in Illinois. Conducted over a week during the summer of 2024, the field recordings document the interwoven calls of Magicicada septendecim and Magicicada cassini, revealing the sophisticated bio-acoustic mechanisms that enable these species to coexist without interference, each occupying distinct frequency bands in a continuously shifting sonic landscape. Through an immersive audiovisual installation, the work transforms these temporal soundscapes into an interactive experience, highlighting both the vibrancy and fragility of these biological rhythms. 3D-scanned models of cicada specimens and panoramic photographs of their habitat are integrated into the visual layer of the piece. Two modes of cognition are provided to the listener in the media art installation. Firstly, the simultaneously recorded calls of cicadas are played as an immersive backdrop and analyzed using real-time audio signal processing, as visuals respond to fluctuations in frequency, amplitude, and rhythm. This experience is interrupted by juxtaposing the thriving forest habitat against the striking sonic absence in adjacent farmlands, where disturbed soil has rendered the landscape inhospitable to cicada populations. Secondly, the listener is offered an exploratory mode where performative interfaces are provided to manipulate spectro-temporal characteristics of the cicada calls, to highlight the calls of either species and to discover new textured aesthetics in the cicada calls through the act of listening deeply. By merging emerging techniques in bio-acoustic field recording, generative audio processing and synthesis, and interactive visualization, Resonant Timekeepers invites reflection on deep-time cycles, ecological precarity, and the ways in which non-human temporalities intersect with human disruptions. The creative process explores how emergent sonic and biological systems can be archived, reinterpreted, and engaged with across sensory modalities. The installation highlights the role of sound as both a cultural and ecological memory, urging new forms of bio-heritage-archive-based listening and performativity that attune us to the delicate balance and synchronicity of planetary time.
Me vs. You: Wrestling with AI’s Limits Through Queer Experimental Filmmaking
Me vs. You is a multi-channel video installation that explores the complexities of queer intimacy by co-opting an AI machine vision system. In this work, footage from a wrestling match is transformed through a generative video pipeline into a fluid interaction that oscillates between aggression and tenderness. The initial wrestling footage is fed through an AI depth map network—designed to separate bodies—before being reconstructed using a diffusion video process. Rather than rendering discrete fighters, the system produces unstable, shifting forms that sensually collide and merge, destabilizing a clear reading of the interaction. The work exploits the machine vision system’s inability to delineate entangled bodies, challenging computational frameworks of classification and control; instead, it repurposes AI as a tool for poetic ambiguity. Situating Me vs. You within experimental filmmaking and AI surveillance debates, this paper examines how emerging technologies can disrupt narrow modes of machine perception and proposes more expansive ways of seeing.
The Myth of the Cave. Generated shadows and co-creation of light
The Myth of the Cave is an artwork generated by artificial intelligence (AI), a co-creation that reinterprets Plato’s famous allegory to invite us to reflect on how we perceive and understand reality. Using photographs recovered from historical archives, images that documented everyday reality (1900-1930), the work integrates AI into the creative process to offer a speculative prediction of the moments captured in silver gelatin. Although they appear to be fictitious recreations generated by AI, the piece visually explores, through the generated story, the intersubjective mechanisms by which we build our historical memory. It is a dialogue with the past to develop a story of the future, recreated by an algorithm fed by collective imaginaries. This technology allows us to observe not only the shadows projected in the cave, but also the fire that generates them, the objects that create these shadows, and the forces that control the experience.
Paper Bark
“Paper Bark” is an experimental film and VR experience about colonization, collapse, and rebirth. It uses generative algorithms, photogrammetry, and image style transfer to reconstruct the past and show potential futures. In the 1990s, the Maine paper industry collapsed leaving mill towns without their primary industry. The focus of “Paper Bark” is the remnants of a 130-year-old mill in Winslow, Maine. The title of the work refers to the bark of paper birches, a plant native to Maine, which has historically been used to make canoes, baskets, and other goods. “Paper Bark” tells a cyclical story of deterioration, decolonization, and the possible futures of a paper mill town. More information about the work can be found on the artwork’s website: http://projectiveplanes.com/paperbark/.
Language Is Leaving Me -An AI Cinematic Opera Of The Skin
“Language Is Leaving Me – An AI Cinematic Opera Of The Skin” (LILM) investigates newly emerging artificial intelligence cinema driven by large language models (LLMs) combined with human biometric measurements (skin, muscles) to reveal hidden and devastating aspects of the algorithmic processes underlying the use of AI in human perception, cognition, memory, and identity. The work focuses on epigenetic, or inherited, traumatic memories of diaspora cultures, which change an individual’s inherited DNA structure and are then passed from generation to generation. AI purports to understand, codify, and tag these vastly complex and uniquely human traits. Using the LAION-5B visual dataset behind Stable Diffusion, LILM compares and contrasts an original narrated English-language video of an epigenetic memory with its transformation into a representation of AI-induced cognitive dementia. This cinematic, performed biometric opera shows AI, prompted with scripts in Yiddish, Chinese, Tamil, and Xhosa, reinterpreting intergenerational memories and changing or obliterating their inherent semiotic and semantic references. LILM had its world premiere at the Copernicus Science Center in Warsaw, Poland, on October 7, 2023, at the exact moment the war in the Middle East broke out, thereby echoing the cyclic nature of epigenetic trauma.
Diffractive Constellations: Activating Human-Machine Co-Creation in Violin Performance through Embedded Modular Coding
Diffractive Constellations is a live performance and research project that combines acoustic violin improvisation with custom modular audio hardware. It extends the patching ethos of environments like Max into physical DSP modules, allowing sound and gesture to be shaped through embedded, materially constrained interactions. Each unit is prototyped in a visual patching interface, compiled onto embedded hardware, and interconnected in a Eurorack framework. Drawing on Karen Barad’s diffractive methodology, which foregrounds relational emergence over fixed boundaries, the system invites performer, code, and matter to co-constitute musical meaning. The modules capture, transform, re-sequence, and re-imagine violin gesture and sound by processing control voltages and audio signals that eschew prescriptive layers (e.g., MIDI) and standard design tropes. Minimal signals convey intentionality, echoing participatory sensemaking questions about the threshold of agency and foregrounding an ecological, materialist view of human–machine co-creation.
SESSION: Bodies in the Loop
Bringing LuminAI to Life: Studying Dancers’ Perceptions of a Co-Creative AI in Dance Improvisation Class
The intersection of dance and artificial intelligence presents fertile ground for exploring human-machine interaction, co-creation, and embodied expression. This paper reports on a seven-month, four-phase collaboration with fifteen dancers from a university dance department, encompassing a preliminary study, a redesign of LuminAI (a co-creative AI dance partner), a contextual diary study, and a culminating public performance. Thematic analysis of responses revealed LuminAI’s impact on dancers’ perceptions, improvisational practices, and creative exploration. By blending human and AI interactions, LuminAI influenced dancers’ practices by pushing them to explore the unexpected, fostering deeper self-awareness, and enabling novel choreographic pathways. The experience reshaped their creative subprocesses, enhancing their spatial awareness, movement vocabulary, and openness to experimentation. Our contributions underscore the potential of AI not only to augment dancers’ immediate improvisational capabilities but also to catalyze broader transformations in their creative processes, paving the way for future systems that inspire and amplify human creativity.
Designing Counter-Choreographies: Embodied Choreographic Approaches for Critical Examination of Online Tracking
This paper describes a workshop conducted as part of practice-based research that aims at critiquing online tracking algorithms commonly found in everyday web environments. The workshop introduced participants to online tracking algorithms using a series of choreographic exercises that informed a discussion on the topic and on strategies to counteract data-driven extractivist technologies. We analysed the outcomes of our workshop and showed that it allowed individuals to become more aware of their lack of agency over data harvesting and its use by digital services, and enabled them to develop strategies for reclaiming agency over their personal data. We discuss how the choreographic approach used in the workshop contributes to engaging people in a critical examination of online tracking in their everyday lives and to inspiring forms of countering extractive algorithmic systems. Our paper contributes empirical insights into how choreography can be used to raise awareness of online data tracking.
Sense and Sensability: Exploring Future Immersive Environments for Scholarly Sensemaking
Scholars must often make sense of vast amounts of complex and diverse scholarly information, much of which is not “senseable”: crucial information like questions, concepts, or assertions, along with key properties like truthlikeness or evocativeness, is primarily identified through effortful search or reasoning, rather than direct perception through the senses. In this pictorial, we explore how we might augment scholarly sensemaking by making the full range of scholarly information more senseable. First, we systematically reviewed systems for scholarly sensemaking, and enumerated key types of scholarly information and their properties. Then, we synthesized design patterns for materializing abstract information in modern artworks, and connected them with our enumerated scholarly information and properties to develop three novel conceptual designs for senseable scholarly sensemaking in immersive environments. Our work lays the foundation for a novel design framework for exploring future immersive environments for scholarly sensemaking.
Sustainable Robot Future: A Speculative Design about Humanity, Robots, and Ecology
Robotics has emerged as a critical field of technological innovation. However, current design paradigms, rooted in industrial-era models, often prioritize centralized control, planned obsolescence, and rigid, one-size-fits-all solutions, undermining adaptability, sustainability, and personal autonomy. To address these limitations, we propose a speculative robotic design framework rooted in Sustainability, Adaptability, and Modularity. Our framework envisions robots as modular systems that can be assembled, reconfigured, and personalized by people, shifting design control away from centralized decision-makers and enabling long-term usability. Modularity enables adaptability and reduces obsolescence, while adaptability reinforces sustainability through circular resource use and extended lifecycles. This speculative framework not only provides a technical vision but also reimagines robotic design as a participatory and collaborative process aligned with ecological responsibility. Our speculative framework offers a manifesto for systemic change, reshaping robotics for a more equitable, sustainable future between humans, robots, and ecology.
SESSION: The AI Mirror
Art, Identity, and AI: Navigating Authenticity in Creative Practice
This pictorial explores the intersection of art practice and generative artificial intelligence (GAI) through a first-person approach, examining its role in overcoming creative block. It traces a journey from March 2023 to January 2025 in which the researcher (as artist, and artist as researcher) examines how GAI tools, including ChatGPT and MidJourney, reignited a passion for a looser (imperfect) art practice. This process led to self-scrutiny through self-portraiture that bridges the personal and the research. While GAI ignited creativity, it also raised tensions around authenticity, originality, and ethics. These concerns are addressed through themes such as attribution transparency, the ethics of relying on non-human solutions, and balancing inspiration against imitation. The pictorial shares extracts from a two-year digital sketchbook that reflect how GAI can support artists in times of crisis, and it contributes to broader conversations about GAI’s role in art practice. It invites readers to consider the evolving relationship between artists and GAI in shaping the future of art practice.
Generative Rotoscoping: A First-Person Autobiographical Exploration on Generative Video-to-Video Practices
This paper contributes a first-person exploration of AI video-to-video technologies, a practice I call “Generative Rotoscoping”. It includes insights into the creation process, a set of prototype explorations, and an integrated workflow for multi-modal video generation. Generative video is rapidly evolving and delivering higher-quality outputs. While video generation models have potential for filmmaking and content creation, they lack controllability for creative expression: viable videos can require hundreds of unsuccessful attempts. To understand this emergent practice, and given the constant evolution of models and the limited number of early adopters, I explored Generative Rotoscoping over 12 months and created AI workflows leading to over 40,000 video and image files, examining a variety of models and techniques including structural guidance, frame consistency, image referencing and masks, and compositing, among others. Insights from this work can serve as a starting point for designing the next generation of video authoring tools.
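The abstract does not disclose the author’s exact tooling, but a minimal version of frame-wise “structural guidance” with a crude frame-consistency measure might look like the following sketch, assuming frames exported as PNGs, a Canny-edge ControlNet, and a fixed seed per frame (all assumptions made for illustration, not the author’s workflow).

```python
# Hedged sketch of one video-to-video strategy named in the abstract: edge-based
# structural guidance plus a fixed per-frame seed to reduce flicker.
from pathlib import Path

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

prompt = "hand-painted rotoscope animation style"  # hypothetical style prompt
Path("out").mkdir(exist_ok=True)
for frame_path in sorted(Path("frames").glob("*.png")):  # assumed frame export
    frame = Image.open(frame_path).convert("RGB").resize((512, 512))
    edges = cv2.Canny(np.array(frame), 100, 200)               # structural guidance
    control = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel edge map
    seed = torch.Generator("cuda").manual_seed(42)             # same seed each frame
    out = pipe(prompt, image=frame, control_image=control,
               strength=0.5, generator=seed).images[0]
    out.save(Path("out") / frame_path.name)
```

In practice, the strength value trades fidelity to the source footage against stylization, which is one reason the abstract reports that viable clips can take many attempts.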
Kaleidoscope Gallery: Exploring Ethics and Generative AI Through Art
Ethical theories and Generative AI (GenAI) models are dynamic concepts subject to continuous evolution. This paper investigates the visualization of ethics through a subset of GenAI models. We expand on the emerging field of Visual Ethics, using art as a form of critical inquiry and the metaphor of a kaleidoscope to invoke moral imagination. Through formative interviews with 10 ethics experts, we first establish a foundation of ethical theories. Our analysis reveals five families of ethical theories, which we then transform into images using the text-to-image (T2I) GenAI model. The resulting imagery, curated as Kaleidoscope Gallery and evaluated by the same experts, revealed eight themes that highlight how morality, society, and learned associations are central to ethical theories. We discuss implications for critically examining T2I models and present cautions and considerations. This work contributes to examining ethical theories as foundational knowledge that interrogates GenAI models as socio-technical systems.
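The paper does not specify which T2I model or prompts were used; as a hedged illustration of the image-generation step only, the model and prompt below are placeholders.

```python
# Illustrative sketch only: rendering a textual summary of an ethical theory with
# an off-the-shelf text-to-image model. Model choice and prompt are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical prompt distilled from an expert interview summary.
prompt = ("a kaleidoscopic scene depicting consequentialism: "
          "many small actions converging on their outcomes")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("consequentialism.png")
```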
Pareidolia in the Machine: Unintended Figuration in AI-Generated Video from Abstract Visual Music
This pictorial explores the interplay between algorithmic abstraction and AI-driven figuration in generative visual art. Using an AI video synthesis model, the authors transformed abstract visual music frames—created with the Processing programming language—into dynamic video sequences, aiming to preserve their non-representational aesthetic. However, the AI often reinterpreted geometric and fluid forms as representational entities, such as human figures, birds, and insects, despite the absence of such subjects in the source material. This phenomenon reveals two key dynamics: technical bias, in which AI, trained on representational datasets, prioritizes figuration and mirrors human pareidolia; and creative negotiation, where the artist’s abstract intent is continually reinterpreted, raising questions about authorship and the boundaries of procedural art. This study prompts deeper inquiry into whether natural language is inherently sufficient to evoke abstract aesthetics—a fundamental question that merits further exploration.
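A minimal sketch of the setup described, with assumed file names and an off-the-shelf image-to-video model standing in for whichever synthesis model the authors used: a single abstract frame exported from Processing is animated, and the resulting clip can then be inspected for emergent figures.

```python
# Hedged illustration, not the authors' exact pipeline: animate one frame of
# abstract visual music (exported from Processing as PNG) with an image-to-video
# diffusion model, then inspect the clip for unintended figuration.
import torch
from PIL import Image
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

# Hypothetical input frame rendered by a Processing sketch.
frame = Image.open("visual_music_frame.png").convert("RGB").resize((1024, 576))
frames = pipe(frame, decode_chunk_size=8,
              generator=torch.Generator("cuda").manual_seed(7)).frames[0]
export_to_video(frames, "visual_music_clip.mp4", fps=7)
# Faces, birds, or insects appearing in such clips despite an abstract input
# would be instances of the "machine pareidolia" the pictorial discusses.
```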
Pressure to use AI for college admissions: implications for adolescent self-concept and intelligent coaching design
US adolescents are readily adopting general-purpose large language models (LLMs), powered by artificial intelligence (AI), to write college application essays. However, our understanding of applicant AI use patterns and motivations for this task, and of AI’s impact on self-concept development, is limited. In interviews with 20 recent US college applicants, adolescents report feeling pressured to use AI to write essays in order to compete in an opaque process with peers who they suspect are using it for drafting essays. Most students, especially those who lack expert guidance, turn to AI for feedback on all aspects of their writing (e.g., word choice, content, structure) to offload effort when strapped for time and to generate what they perceive as higher-quality text, yet they recognize that AI use prevents important self-concept development, a skill necessary for future opportunity-seeking (e.g., future job interviews). We contribute a qualitative analysis of applicant AI use patterns and motivations when writing college application essays and discuss implications for designing intelligent systems to coach self-concept development through writing.