C11 VISUAL ARTS

SECOND PERSON

A Digital Interface for Displaced Embodiment

TABLE OF CONTENTS

  1. Executive Summary

  2. Project Overview

  3. Research Context & Motivation

  4. Technical Innovation

  5. System Implementation

  6. Research Vision

EXECUTIVE SUMMARY

Project Vision

Second Person investigates the boundaries of embodied self through displaced agency in digital-physical environments. This interactive installation creates a "second person" experience where participants' gaze controls another digital body, fundamentally questioning the relationship between intention, action, and identity in technologically mediated spaces. The project explores how recursive feedback loops between human and digital systems can generate genuine insights about identity formation in digital contexts.

Key Innovation

Instead of traditional user-interface paradigms, the system creates recursive feedback loops where gaze becomes both input mechanism and embodied experience. This approach grounds digital interaction in experiential investigation, moving beyond instrumental technology toward existential exploration of selfhood in digital contexts. The installation generates moments where the boundaries between human and machine consciousness become permeable and ambiguous.

Technical Achievement

Designed and implemented a functional prototype system featuring:

  • Computer vision-based gaze tracking using standard webcam hardware with custom algorithms for accessible embodiment research

  • Stylized avatar synthesis through custom Python pipeline combining computer vision, deep learning, and 3D rendering

  • Recursive feedback architecture creating emergent behaviors between human and digital agency

  • Automated video generation producing 4K cinematic documentation of embodiment experiences


PROJECT OVERVIEW

The Challenge

Current human-computer interfaces maintain clear boundaries between user and system, despite growing evidence that digital interactions fundamentally alter our sense of self. This separation represents a significant gap in understanding how technological mediation affects human identity formation, particularly as we spend increasing time inhabiting digital environments. Traditional HCI research focuses on efficiency and usability while neglecting the subjective dimensions of human-computer interaction.

Solution

Second Person creates a gaze-controlled interface that:

  • Records participant gaze patterns through computer vision analysis of webcam input

  • Synthesizes stylized avatars using facial landmark detection and color-correction algorithms

  • Creates recursive feedback loops where avatar responses influence subsequent gaze behavior

  • Generates embodiment experiences without traditional input devices or conscious control

  • Documents research data through systematic video analysis and participant interviews

Core Functionality

Complete Technical Implementation The system integrates webcam-based gaze tracking with custom Python algorithms for real-time avatar synthesis and recursive feedback processing. The implementation combines computer vision face detection, facial landmark analysis, and sophisticated 3D rendering to create convincing embodiment experiences.

Detailed code implementation, algorithms, and technical specifications are available on GitHub: https://github.com/CJD-11/Second-Person

The system continuously processes webcam input through computer vision algorithms, translating detected gaze direction and facial movements into avatar control parameters through custom algorithms that preserve the uncanny relationship between intentional and reflexive movement.
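
To make this concrete, the sketch below shows one way such a gaze-to-parameter mapping could look, using OpenCV's Haar cascades as a stand-in for the project's custom algorithms. The smoothing constant and the yaw/pitch parameter names are illustrative assumptions, not the repository's actual interface.

```python
import cv2

# Stand-in gaze-to-parameter mapping using OpenCV Haar cascades.
# The repository's custom algorithms may differ; `yaw`/`pitch` are
# hypothetical avatar control parameters.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
alpha = 0.2  # smoothing factor: low enough to keep some reflexive jitter
yaw, pitch = 0.0, 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        fh, fw = gray.shape
        # Normalized offset of the face center from the frame center
        target_yaw = ((x + w / 2) / fw - 0.5) * 2.0
        target_pitch = ((y + h / 2) / fh - 0.5) * 2.0
        # Exponential smoothing before handing values to the avatar
        yaw = (1 - alpha) * yaw + alpha * target_yaw
        pitch = (1 - alpha) * pitch + alpha * target_pitch
    # avatar.set_orientation(yaw, pitch)  # hypothetical downstream call
cap.release()
```

The exponential smoothing deliberately lags the raw signal slightly, which is one simple way to preserve the mix of intentional and reflexive movement described above.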


RESEARCH CONTEXT & MOTIVATION

Embodied Cognition Foundation

Theoretical Framework This research builds on investigations of embodied experience, particularly work on perception and recent research in enactive cognition. Key principles include:

  • Embodied Presence: Consciousness is fundamentally tied to bodily experience, including technologically mediated bodies

  • Perceptual Coupling: Self-awareness emerges through dynamic interaction between perceiver and environment

  • Digital Embodiment: Virtual environments can generate genuine experiences that affect identity formation

Digital Identity Research Contemporary media theory demonstrates that digital interactions reshape human identity.

Research Gap Analysis

Current Limitations

  • Binary Agency: Traditional HCI maintains clear user/system boundaries, preventing investigation of distributed agency

  • Visual Bias: Interface design predominantly assumes ocular-centric interaction paradigms, neglecting other sensory modalities

Contribution Second Person addresses these gaps by:

  • Investigating displaced agency through gaze-controlled avatars that blur user/system boundaries

  • Creating embodiment experiences that generate genuine research data about digital selfhood and identity displacement

  • Developing recursive interfaces where system responses alter user behavior in real-time feedback loops

  • Documenting subjective experience through systematic research methods adapted from cognitive science

Potential Applications

Therapeutic Applications

  • Identity Therapy: Investigating self-perception disorders through controlled embodiment experiences

Educational Enhancement

  • Digital Literacy: Understanding how digital interfaces affect human cognition and identity formation


TECHNICAL INNOVATION

The system integrates multiple advanced technologies including webcam-based computer vision, facial landmark detection algorithms, and custom 3D rendering pipelines. The architecture supports real-time gaze-to-avatar translation at latencies low enough to maintain the embodiment illusion.

Complete system architecture diagrams, hardware specifications, software implementation, and algorithm details are available on GitHub: https://github.com/CJD-11/Second-Person
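
As an illustration of the landmark-detection stage mentioned above, the fragment below extracts dense facial landmarks with MediaPipe's face mesh. MediaPipe is a stand-in assumption here, since the repository implements its own detection pipeline.

```python
import cv2
import mediapipe as mp

# Stand-in landmark extraction using MediaPipe Face Mesh; the project's
# own detection algorithms (see GitHub) may differ substantially.
face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,  # video mode: track landmarks across frames
    max_num_faces=1,
    refine_landmarks=True)    # adds iris landmarks useful for gaze work

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        # Each landmark carries normalized (x, y, z) coordinates
        nose_tip = landmarks[1]
        print(nose_tip.x, nose_tip.y, nose_tip.z)
cap.release()
```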

Component Selection Rationale

  • Computer Vision-Based Gaze Tracking: Custom webcam-based algorithms for accessible gaze detection without specialized hardware

  • NVIDIA RTX GPU: Real-time photorealistic rendering capabilities for convincing avatar animation

  • Facial Landmark Detection: Robust facial analysis across diverse lighting conditions and head poses

  • Open3D Framework: Comprehensive 3D geometry processing with Python integration for rapid prototyping (see the sketch after this list)
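
The Open3D sketch below illustrates the rapid-prototyping point: a placeholder head mesh is built and displayed in a few lines. The geometry and color are arbitrary choices for illustration, not the avatar pipeline's actual assets.

```python
import open3d as o3d

# Placeholder "avatar head": a sphere mesh with normals for shaded
# rendering. Radius and color are arbitrary; the real avatar geometry
# lives in the repository.
head = o3d.geometry.TriangleMesh.create_sphere(radius=0.5)
head.compute_vertex_normals()              # required for lit shading
head.paint_uniform_color([0.8, 0.7, 0.6])
o3d.visualization.draw_geometries([head])  # blocking preview window
```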


Design Decisions & Trade-offs

Real-Time Processing Priority The system prioritizes immediate response over perfect accuracy to maintain embodiment illusion, accepting occasional gaze tracking errors to prevent latency-induced disembodiment. This design choice follows research showing that temporal delays above 50ms significantly reduce presence and embodiment in virtual environments.
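
A minimal sketch of this trade-off, assuming a 50 ms budget and hypothetical helper functions (`estimate_gaze_fast`, `refine_gaze`): the cheap estimate always ships on time, and the accuracy pass runs only when the frame arrived quickly enough.

```python
import time
import cv2

BUDGET_S = 0.050  # ~50 ms embodiment latency budget from the cited research

def estimate_gaze_fast(frame):
    """Hypothetical cheap estimate; returns (yaw, pitch) placeholders."""
    return 0.0, 0.0

def refine_gaze(frame, gaze):
    """Hypothetical slower refinement pass."""
    return gaze

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # avoid processing stale frames

while True:
    t0 = time.monotonic()
    ok, frame = cap.read()
    if not ok:
        break
    gaze = estimate_gaze_fast(frame)     # always runs within budget
    if time.monotonic() - t0 < BUDGET_S / 2:
        gaze = refine_gaze(frame, gaze)  # optional accuracy pass
    # Ship `gaze` to the avatar immediately: occasional error is accepted
    # in exchange for never exceeding the latency budget.
```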

Open Source Implementation All algorithms use open-source libraries to ensure research reproducibility and community accessibility, following best practices in computational research.


SYSTEM IMPLEMENTATION


Software Dependencies

The system uses modern Python libraries including PyTorch for deep learning, OpenCV for computer vision, Open3D for 3D processing, and specialized libraries for facial analysis.
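
A quick import smoke test for that stack might look as follows; the exact package set and pinned versions are specified in the repository's setup guide, so treat this as illustrative.

```python
# Import smoke test for the core dependencies named above.
import torch   # deep learning
import cv2     # computer vision (opencv-python)
import open3d  # 3D geometry processing

print("torch", torch.__version__)
print("opencv", cv2.__version__)
print("open3d", open3d.__version__)
```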

Complete installation instructions, dependency requirements, and setup guides are available on GitHub: https://github.com/CJD-11/Second-Person

System Architecture Implementation

The system integrates computer vision-based gaze detection, avatar synthesis, feedback control, and research data logging into a cohesive embodiment experience platform using accessible webcam hardware and custom algorithms.

Complete system implementation, class structures, and integration details are available on GitHub: https://github.com/CJD-11/Second-Person
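
The integration can be pictured roughly as the loop below. Every class and method name here is a hypothetical placeholder for the structures documented in the repository.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    yaw: float        # normalized horizontal gaze
    pitch: float      # normalized vertical gaze
    timestamp: float  # capture time, in seconds

class EmbodimentSession:
    """Hypothetical integration loop: gaze in, avatar out, feedback
    back into sensing, everything logged for later analysis."""

    def __init__(self, tracker, avatar, logger):
        self.tracker = tracker  # webcam-based gaze detection component
        self.avatar = avatar    # avatar synthesis / rendering component
        self.logger = logger    # research data logging component

    def step(self):
        sample = self.tracker.read()           # 1. sense participant gaze
        response = self.avatar.update(sample)  # 2. drive the digital body
        self.tracker.bias(response)            # 3. recursive feedback loop
        self.logger.record(sample, response)   # 4. persist research data
```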

Research Data Collection

Comprehensive Data Framework The system collects both quantitative metrics (gaze patterns, response latencies, embodiment indicators) and qualitative data from participant interviews and systematic video analysis.

Complete data collection protocols, analysis frameworks, and research methodologies are available on GitHub: https://github.com/CJD-11/Second-Person
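
One plausible shape for the quantitative side of that framework is a JSON-lines log, sketched below; the field names are assumptions rather than the repository's actual schema.

```python
import json
import time

class SessionLogger:
    """Hypothetical JSON-lines sink for the quantitative metrics above;
    field names are illustrative, not the repository's schema."""

    def __init__(self, path):
        self.f = open(path, "a", encoding="utf-8")

    def record(self, yaw, pitch, latency_ms, event=None):
        self.f.write(json.dumps({
            "t": time.time(),          # wall-clock timestamp
            "yaw": yaw,                # gaze yaw (normalized)
            "pitch": pitch,            # gaze pitch (normalized)
            "latency_ms": latency_ms,  # capture-to-render delay
            "event": event,            # e.g. a marked embodiment indicator
        }) + "\n")
        self.f.flush()                 # survive interrupted sessions

    def close(self):
        self.f.close()
```

An append-mode, flush-per-record design is a deliberate choice here: if a session crashes mid-experiment, the data captured up to that point survives.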

Video Documentation Pipeline The system generates 4K cinematic documentation with research-grade video capture, smooth camera movement algorithms, and systematic visual analysis capabilities for embodiment research.

Complete video rendering implementation, camera path algorithms, and documentation workflows are available on GitHub: https://github.com/CJD-11/Second-Person
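
The sketch below illustrates the two core ingredients named above, 4K output and eased camera movement, using OpenCV's VideoWriter on synthetic frames; the actual pipeline may use a different renderer and codec.

```python
import cv2
import numpy as np

W, H, FPS = 3840, 2160, 30  # 4K UHD at 30 fps

writer = cv2.VideoWriter("session.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), FPS, (W, H))

def smoothstep(t):
    """Eased interpolation: zero velocity at both endpoints."""
    return t * t * (3.0 - 2.0 * t)

N = FPS * 5  # a 5-second camera move
for i in range(N):
    t = smoothstep(i / (N - 1))
    pan = int(t * 400)  # hypothetical horizontal camera pan, in pixels
    frame = np.zeros((H, W, 3), dtype=np.uint8)
    # Stand-in subject: a circle drifting along the eased camera path
    cv2.circle(frame, (W // 2 + pan - 200, H // 2), 120,
               (255, 255, 255), -1)
    writer.write(frame)
writer.release()
```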


RESEARCH VISION

Immediate Research Opportunities

Empirical Embodiment Studies The functional prototype enables systematic investigation of fundamental questions about digital embodiment, including how displaced agency through gaze-controlled avatars affects identity formation in digital contexts.

Technical Evolution Pathway

Hardware Advancement

  • Multi-Modal Integration: Combining gaze control with gesture recognition, voice analysis, and biometric inputs for richer avatar control and embodiment assessment

  • Haptic Feedback Integration: Adding tactile sensations to avatar control for enhanced embodiment through multiple sensory modalities

  • Brain-Computer Interface: Exploring EEG-based avatar control for investigating neural correlates of digital embodiment

Software Innovation

  • Machine Learning Personalization: Adaptive algorithms that learn individual embodiment patterns and optimize feedback parameters for each participant

  • Predictive Embodiment: Systems that anticipate user intentions and smooth avatar responses for seamless control, reducing cognitive load

  • Advanced Computer Vision: Implementing real-time emotion recognition and micro-expression analysis for more sophisticated avatar synthesis

Scalability Development

  • Multi-User Environments: Enabling shared embodiment experiences where multiple participants control interconnected avatars

  • Remote Participation: Web-based implementations allowing distributed embodiment research across global populations


Broader Impact Potential

This research establishes foundations for:

Therapeutic Applications: Evidence-based approaches to identity disorders, empathy development, and embodiment therapy using controlled digital experiences, potentially revolutionizing treatment approaches for dissociation, body dysmorphia, and social cognition challenges.

Interface Design Evolution: Next-generation human-computer interactions that support rather than oppose human embodied cognition and identity formation, moving toward genuinely symbiotic computing paradigms.

Ethical Technology Development: Contributing to frameworks for responsible AI and human-computer interaction that prioritize human dignity, agency, and cultural diversity in an increasingly digital world.


References:

https://github.com/CJD-11/Second-Person
