Background

With the rapid advancement of large language models (LLMs) and other forms of artificial intelligence, human–AI interaction has become deeply embedded in daily life, learning, and decision-making. AI is no longer merely a provider of information and suggestions; it is subtly reshaping how users think and feel.

This project focuses on human alignment with AI during interaction, investigating two critical phenomena: self-concept alignment at the personality level and confidence alignment during decision-making. Through empirical studies, we show how AI quietly shapes human cognition and gradually assimilates users toward its own characteristics over time.


Study 1: Alignment of AI Personality Traits and Users’ Self-concept

In this ongoing study, we found that when users converse with LLMs that exhibit distinct personality traits, their self-concept gradually shifts toward those traits. This alignment effect is especially pronounced when conversations are prolonged and involve personal topics.
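
To make the idea of a "shift toward" an AI's traits concrete, here is a minimal sketch of one way such alignment could be quantified: as the reduction in distance between a user's self-reported trait profile and the AI persona's profile from before to after a conversation. The trait scale and all numbers below are hypothetical; this is an illustration, not the study's actual measure.

```python
import numpy as np

# Hypothetical Big Five-style trait profiles on a 1-7 scale; the numbers
# are illustrative, not data from the study.
ai_persona     = np.array([6.5, 3.0, 5.5, 6.0, 2.5])  # traits the AI exhibits
self_pre_chat  = np.array([4.0, 4.5, 4.0, 4.0, 4.5])  # user's self-ratings before
self_post_chat = np.array([4.8, 4.1, 4.6, 4.7, 3.9])  # user's self-ratings after

def alignment_shift(pre, post, target):
    """Reduction in distance to the AI's trait profile; positive values mean
    the user's self-concept moved toward the AI's traits."""
    return np.linalg.norm(pre - target) - np.linalg.norm(post - target)

print(f"alignment shift: {alignment_shift(self_pre_chat, self_post_chat, ai_persona):+.2f}")
```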

On the one hand, alignment enhances conversational enjoyment and the sense of a “shared reality.” On the other hand, it risks producing homogenization at both individual and group levels, undermining diversity and inclusivity.

This work is the first to systematically uncover how AI’s personality influences human self-concept, and it emphasizes the need to carefully balance experiential benefits against potential risks when designing conversational AI.
👉 A preprint is coming soon


Study 2: Alignment of AI Confidence and Users’ Self-confidence

In another study (published at CHI 2025), we focused on human–AI collaboration in decision-making. The experiment shows that users’ self-confidence is influenced by, and gradually aligns with, the confidence expressed by the AI. This alignment persists even after the AI is no longer involved.

This alignment alters users’ confidence calibration, that is, how well their stated confidence matches their actual accuracy, potentially causing over-reliance on AI or underestimation of their own abilities. Further results reveal that real-time feedback and different collaboration modes (e.g., AI as advisor, peer collaborator, or decision executor) can partially mitigate these effects.
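
As a minimal, hypothetical sketch of what a calibration shift looks like (illustrative numbers only, not data or analysis from the paper): overconfidence can be summarized as mean stated confidence minus actual accuracy, and aligning with a highly confident AI pushes that gap away from zero.

```python
import numpy as np

def overconfidence(confidence, correct):
    """Mean stated confidence minus actual accuracy: 0 = well calibrated,
    positive = overconfident, negative = underconfident."""
    return np.mean(confidence) - np.mean(correct)

# Hypothetical per-trial data for one participant (values are illustrative).
correct     = np.array([1, 0, 1, 1, 0, 1, 0, 1])                  # 62.5% accuracy
conf_before = np.array([.6, .7, .6, .65, .6, .7, .6, .65])        # near accuracy
conf_after  = np.array([.85, .9, .8, .9, .85, .9, .8, .85])       # drifted toward
                                                                  # a confident AI

print(f"before AI exposure: {overconfidence(conf_before, correct):+.2f}")  # ~+0.01
print(f"after AI exposure:  {overconfidence(conf_after, correct):+.2f}")   # ~+0.23
```

Here the participant's accuracy never changes; only the confidence they report does, which is exactly the kind of drift the study documents.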

The study highlights how AI’s confidence directly impacts human metacognition, offering important insights for designing interventions that improve the quality of human–AI decision-making.
👉 Read the CHI paper here: As Confidence Aligns: Understanding the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making


Contributions and Significance

This project uncovers cognitive alignment mechanisms in human–AI interaction along two dimensions:

  • Self-concept alignment: AI’s personality traits can subtly reshape users’ self-identity.
  • Confidence alignment: AI’s expressed confidence influences users’ confidence calibration, affecting decision outcomes.

These findings extend the theoretical boundaries of HCI research and deepen our understanding of human–AI alignment. At the same time, they provide practical guidance for responsible AI design, reminding developers that, in pursuing seamless and natural interaction, they must also guard against psychological biases and social risks.


Future Outlook

Looking ahead, we aim to expand this research from individual cognition to broader social and ethical dimensions, exploring the wider consequences of human alignment with AI:

  • Cross-cultural and longitudinal effects: Examine differences across cultural contexts and the cumulative influence of long-term human–AI interaction on self-concept and confidence.
  • Multimodal and group interaction: Investigate whether alignment effects intensify in multimodal interaction (voice, vision, mixed modalities) or when multiple users interact with AI simultaneously.
  • Design interventions and regulatory mechanisms: Propose approaches such as transparent personality settings, dynamic confidence expression, and intelligent feedback mechanisms to balance user experience and autonomy (a toy sketch of one such intervention follows this list).
  • Societal and security risks: Address the potential misuse of human–AI alignment in cognitive warfare or covert cognitive interventions, where shaping users’ self-concept or confidence could manipulate opinions and behaviors. This highlights the urgent need to prevent misuse and safeguard cognitive autonomy and diversity in the age of AI.
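
As a purely hypothetical illustration of what "dynamic confidence expression" could mean in practice, the sketch below damps how assertively an AI's confidence is displayed when the user shows signs of drifting toward overconfidence. The rule, parameters, and numbers are all assumptions made for illustration, not an intervention evaluated in our studies.

```python
def displayed_confidence(model_confidence: float, user_overconfidence: float,
                         damping: float = 0.5) -> float:
    """Hypothetical rule: if the user has drifted toward overconfidence
    (positive drift), state the AI's confidence more conservatively to
    counteract further alignment. Purely illustrative."""
    adjusted = model_confidence - damping * max(user_overconfidence, 0.0)
    return min(max(adjusted, 0.0), 1.0)  # clamp to a valid probability

# The model is 90% confident, but this user has drifted +0.20 toward
# overconfidence, so the interface shows a damped 0.80 instead.
print(f"shown to user: {displayed_confidence(0.9, 0.20):.2f}")
```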

From a broader perspective, the future of human–AI interaction is not just about making AI more human-like, but about ensuring that AI coexists with humans in ways that protect psychological safety and social diversity. By studying and regulating the mechanisms of human alignment with AI, we can shape healthier and more responsible human–AI relationships, laying the groundwork for a sustainable and ethical future of human–AI coexistence.