Maivel Abdelnoor


Does AI Feel the Same?

Evaluating the Quality of Experience of AI vs. Human-Encoded Haptic Feedback in Cinematic Experiences

Timeline
October 2023 – December 2024
Role
Lead Researcher (End-to-end)
Tools
SAS, SPSS, Qualtrics, Cobalt EDA Sensors, Hexoskin Smart Garment, Cognitive Absorption Scales, Affective Slider

This research was conducted at Tech3Lab, HEC Montréal's neuroergonomics and user experience lab — one of the most advanced UX research facilities in North America. The industry partner was D-BOX Technologies (Longueuil, QC), a company that manufactures high-fidelity vibro-kinetic (HFVK) cinema seats — the technology behind motion-enabled movie experiences in theaters worldwide.

D-BOX had developed an AI algorithm to automate haptic feedback encoding for films. The traditional process — having a human haptic artist manually synchronize vibrations, motions, and forces with audiovisual content — takes over 100 hours per movie. AI could theoretically do this in a fraction of the time. But a critical question remained unanswered: does the audience actually feel the difference?

My role: End-to-end research ownership — literature review (50+ scientific articles), ethics approval, questionnaire development, experiment moderation, physiological data analysis (SAS + SPSS), and full thesis writing.

Research team: Directors Sylvain Sénécal and Constantinos K. Coursaris (HEC Montréal); co-author Thaddé; Tech3Lab operations team and research assistants.

Published: Master's thesis, HEC Montréal (M.Sc. in User Experience), December 2024. Prepared for journal submission to Interacting with Computers.

Haptic feedback in cinema is immersive — but creating it is expensive, slow, and doesn't scale. A professional haptic artist can spend 100+ hours encoding a single film, manually aligning every vibration and motion cue to the audiovisual content.

D-BOX developed an AI algorithm to automate this process. It could potentially encode haptic feedback in hours rather than weeks, enabling more films, more markets, more content — faster.

But no one had studied what this meant for the audience.

Would viewers notice? Would they care? Would the AI version deliver the same depth of experience as a human artist's work? And could physiological data reveal something that self-reports couldn't?

Research Questions:

  1. To what extent does the method of haptic feedback encoding (AI vs. human artist) influence the Quality of Experience (QoE) of viewers in an HFVK cinematic setting?
  2. To what extent does the haptic encoding method impact viewers' intentions to relive the experience and recommend it to others?

Process

Study Design

Within-subjects laboratory experiment. Each participant experienced both conditions — maximizing statistical power while controlling for individual differences.

Stimuli

6 movie clips (2–3 minutes each), spanning 6 genres:

  • Sci-Fi: Dune
  • Adventure: Guardians of the Galaxy
  • Action: John Wick
  • Animation: Moana
  • Romance: Love at First Sight
  • Horror: Talk to Me

Each clip was encoded by D-BOX in both conditions (AI and human artist). Participants watched 3 clips with AI-encoded haptics and 3 with human-encoded haptics, assigned via a Latin square counterbalancing design to control for order effects.
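
The thesis summary does not specify the exact square construction; a cyclic Latin square is one standard way to counterbalance clip order, sketched minimally in Python (clip names are the study's; the row-assignment rule is illustrative):

```python
def latin_square(n):
    """Cyclic n x n Latin square: row i is [i, i+1, ..., i+n-1] mod n,
    so every position index appears exactly once per row and per column."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

# The six clips from the study; e.g. participant p could follow row p % 6.
clips = ["Dune", "Guardians of the Galaxy", "John Wick",
         "Moana", "Love at First Sight", "Talk to Me"]
square = latin_square(len(clips))
order_for_participant_0 = [clips[k] for k in square[0]]
```

Because every clip occupies every serial position equally often across rows, position-in-session effects (fatigue, habituation to the seat) average out across conditions.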

Participants

  • 29 participants (F=10, M=19), ages 20–50 (M=35.17)
  • Recruited via HEC Montréal's participant panel
  • Inclusion: Regular streaming consumers, fluent English speakers, aged 20–50
  • Exclusion: Haptic cinema enthusiasts with high prior exposure; individuals with severe motion sickness
  • 62% had never experienced a D-BOX seat before — deliberately targeting novice users for ecological validity

The Setup

A controlled cinema environment:

  • Darkened room with black curtains, eliminating visual distraction
  • 70×120cm Samsung HD TV, Pioneer 5.1 surround sound
  • D-BOX HFVK recliner seat delivering motion, vibration, and texture
  • Cobalt EDA sensors on participants' non-dominant hand (skin conductance, recorded at 500Hz)
  • Hexoskin Smart Garment for continuous HRV monitoring throughout the session
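
Skin conductance sampled at 500 Hz is typically reduced to a slower tonic signal before statistical analysis. A minimal windowed-mean sketch, assuming a simple per-second average (illustrative only, not the lab's actual preprocessing pipeline):

```python
def downsample_scl(samples, fs=500, window_s=1.0):
    """Average raw EDA samples (microsiemens) into one tonic skin
    conductance level (SCL) value per window (default: one per second)."""
    w = int(fs * window_s)
    return [sum(chunk) / len(chunk)
            for chunk in (samples[i:i + w] for i in range(0, len(samples), w))]

# One second at 1.0 uS followed by one second at 3.0 uS
levels = downsample_scl([1.0] * 500 + [3.0] * 500)  # → [1.0, 3.0]
```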

Measures

After each clip, participants completed self-reported questionnaires (Qualtrics on iPad):

  • Cognitive Absorption: Focused Immersion, Heightened Enjoyment, Temporal Dissociation (7-point Likert)
  • Perceived Arousal + Valence: Affective Slider (0–100)
  • Satisfaction (CSAT), Intention to Relive, Intention to Recommend

After all clips: semi-structured qualitative interview.

Analysis

  • Logistic regression with random intercepts for non-normally distributed QoE metrics
  • Linear mixed-effects models for continuous physiological data
  • Bonferroni-corrected post-hoc comparisons
  • Conducted in SAS and SPSS
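
The Bonferroni step divides the significance threshold by the number of post-hoc comparisons to control family-wise error. A minimal sketch (the p-values shown are hypothetical, not the study's):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject H0 for each test whose p-value falls below alpha / m,
    where m is the number of comparisons (family-wise error control)."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Three hypothetical contrasts: adjusted threshold is 0.05 / 3 ≈ 0.0167,
# so only the first comparison survives correction.
print(bonferroni_reject([0.013, 0.20, 0.04]))  # → [True, False, False]
```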

Insights

Finding 1: Perceptually, AI and human haptics feel the same

No statistically significant differences were found between AI and human-encoded haptics on any self-reported measure — cognitive absorption, perceived arousal, valence, or satisfaction. Viewers reported comparable levels of enjoyment, immersion, and emotional engagement regardless of encoding method.

What this means: From a user perception standpoint, AI-generated haptic feedback is a viable substitute for human artistry. For producers, this is a compelling case for scalability.

Finding 2: But the body tells a different story

Physiological arousal — measured via EDA — was significantly higher in the human-encoded condition (p = .013). The body reacted more strongly to human-crafted haptic feedback, even when the mind couldn't consciously tell the difference.

This is the study's most important and nuanced finding: AI can replicate the perceived experience, but it may not fully replicate the physiological depth of engagement that human artistry produces. The gap exists below the threshold of conscious awareness — but it's there.

What this means: AI haptics are good enough for everyday content delivery, but in high-stakes immersive contexts designed to maximize emotional impact — premium screenings, VR therapy, high-fidelity gaming — human encoding still has an edge.

Finding 3: What you watch matters more than how it was encoded

Movie genre had a significant effect on cognitive absorption, perceived arousal, valence, and physiological arousal — independent of encoding method.

  • Highest engagement: John Wick (action), Dune (sci-fi) — elevated CA and perceived arousal
  • Highest physiological arousal: Talk to Me (horror) — peak EDA response
  • Lowest engagement: Love at First Sight (romance) — lowest CA and behavioral intention scores

What this means: Content type is the primary driver of QoE in haptic-enhanced cinema. Both AI and human haptics enhance whatever genre they're applied to — neither undermines the other.

Finding 4: Intention to relive and recommend follows perception, not physiology

Satisfaction was driven by cognitive absorption, perceived arousal, and valence — and in turn predicted both intention to relive and recommend. But the physiological difference did not translate into behavioral differences: participants were equally likely to want to re-experience and recommend the film in both conditions.

What this means: Physiological arousal matters for depth of experience, but doesn't override conscious enjoyment signals when it comes to decision-making.

Deliverables

This was a research study — the output was knowledge, not a product.

For the academic community:

  • A validated research model connecting haptic encoding method → QoE dimensions → satisfaction → behavioral intentions
  • First-ever comparative study of AI vs. human-encoded haptics in an HFVK cinematic context
  • A methodological contribution: demonstrating the value of combining physiological and self-reported measures to reveal what self-reports alone cannot

For D-BOX Technologies:

  • Evidence that AI-encoded haptic feedback is perceptually equivalent to human encoding — a strategic green light for AI-assisted production at scale
  • A critical caveat: human encoding still produces stronger physiological arousal — preserve human artistry for premium, high-fidelity experiences
  • Genre-specific insight: action and horror genres benefit most from haptic enhancement — prioritize encoding resources accordingly
  • Recommendation to study expert vs. novice haptic users separately — experienced D-BOX users may perceive differences that novices cannot

Outcomes

For D-BOX Technologies: The research directly informed the company's strategic decision-making around AI adoption in their haptic encoding pipeline. Findings provided evidence that AI can be deployed at scale for standard content, while human haptic artists should be retained for premium, experience-critical productions. This framing — AI as a scalable tool, not a replacement — shaped the company's roadmap positioning for AI in their workflow.

For the academic field:

  • First published comparative study of AI vs. human haptic encoding in immersive cinematic contexts
  • Contribution to QoE methodology: validating the use of combined physiological + perceptual measures to detect experience differences invisible to self-report
  • Prepared for submission to Interacting with Computers (Oxford University Press)

Personal: Completed a 100+ page thesis as sole author, managed end-to-end research with 29 participants through full ethics approval (Certificate No. 2024-5896, HEC Montréal), and presented findings at the UXC Conference.

Reflection

The body knows things the mind doesn't. The divergence between physiological and self-reported data was the most intellectually challenging — and most rewarding — part of this research. It reshaped how I think about measuring experience: perception alone is incomplete.

Bridging two audiences is a skill. Writing for academic reviewers and for D-BOX's product team required two completely different framings of the same findings. Learning to translate rigor into strategy — without losing either — is now a core part of how I do research.

AI is a tool, not a replacement. This study confirmed something I believe about all AI applications: the most effective use isn't substitution, it's augmentation. AI scales the routine; humans own the exceptional.

Constraints sharpen questions. Working within a lab environment with equipment, ethics timelines, and a fixed participant pool forced creative problem-solving at every stage. The constraints made the research more rigorous, not less.