For Humans and AI to Work Well Together, They Must Form a Cognitive Alignment, According to Stevens Researchers
Hoboken, N.J., March 18, 2026 — In the iconic Star Wars series, Captain Han Solo and the humanoid droid C-3PO boast drastically contrasting personalities. Driven by emotion and swashbuckling confidence, Han Solo often ignores C-3PO’s logic-driven caution. That human-droid relationship is exemplified in Solo’s famous line, “Never tell me the odds!” as he dismisses C-3PO’s advice against navigating an asteroid field with survival odds of 3,720 to 1 against, odds that had been painstakingly calculated by the shiny sidekick.
While that comedic relationship creates irresistible drama in the Hollywood classic, such a dynamic wouldn’t work in everyday reality for a successful human-machine relationship. Today, as AI becomes part of many individuals’ daily lives, humans and machines must learn to work well together, says Assistant Professor Bei Yan of the Stevens School of Business, who studies human and machine teamwork. “Companies are using AI alongside people, but it’s hard for them to work well together,” she says. “People think differently than AI. People use experience, judgment and social cues. AI uses statistical patterns learned from data.”
These differences can be complementary, but only if they are well coordinated, she adds. When they are not, users may over-trust AI outputs, misuse systems, or waste time correcting or working around them. “In these cases, AI does not reduce effort. It adds friction,” she says. “That mismatch makes teamwork between humans and AI often underperform.” And sometimes fail outright.
When analyzing AI failures, companies typically attribute them to one of two pitfalls: the technology is either not powerful enough, or it is too powerful to be trusted. Yan, however, suggests a different reason: the machines and the people aren’t well aligned to work together. “AI failures happen because humans and machines are not aligned in how they understand tasks, roles and responsibilities,” she says.
When introducing AI into the workplace, companies tend to divide the tasks between humans and AI up front, Yan notes. That works only if tasks are stable, predictable and unchanging over time, which is not true of most work settings.
Yan uses high-frequency trading algorithms as one example, where AI is deployed to rapidly monitor the market, spotting trends and opportunities. But certain unexpected events, such as a sudden market drop, major policy changes or inflation data releases, may skew the AI’s understanding of the market. “The algorithms are trained with preset rules, so AI is not really designed to understand such events, and they may change the whole market and even lead to crashes,” she says.
In her new paper, titled “Syncing Minds and Machines: Hybrid Cognitive Alignment as an Emergent Coordination Mechanism in Human-AI Collaboration,” published in the Academy of Management Journal on March 18, 2026, Yan argues that effective human–AI partnerships should be structured differently. They should rely on a process called “hybrid cognitive alignment”: the gradual development of shared expectations about what the AI is for, how it should be used and when human judgment should take precedence. “This alignment does not happen automatically when a system is deployed,” Yan says. “Instead, it emerges over time as people learn how the AI behaves, adapt how they interact with it and recalibrate their trust based on experience.”
For example, AI is now being used in medical settings to analyze X-rays or CT scans. Trained on millions of images, it can often identify cancers or other problems that a physician’s eye may overlook. Yet it doesn’t know the medical history of a particular patient or how they respond to medications, so without human input and oversight, the analysis won’t be as strong.
Similarly, in customer service settings, AI trained on thousands of previous interactions can search the company’s internal policy documents at record speed, but it may not understand the problem or needs of a specific customer. Without training people on how to use AI properly, many such efforts may not produce good outcomes.
So what should companies do when they’re rolling out AI? “They should focus more on how tasks and roles are divided between people and machines, and how that may change over time,” Yan says. “Training that emphasizes how AI should be used, and time for teams to adapt, are essential,” she stresses. “Treating AI as a ‘plug-and-play’ solution often backfires; treating it as a new collaborator yields better results. For managers, these implications are immediate,” she notes.
AI developers can learn from the paper, too. The study’s findings highlight the importance of designing not just for performance, but for collaboration. “Systems should clearly communicate their capabilities and limitations, support user learning over time and help users form strong partnerships with them,” she says. “Ultimately, the promise of AI lies not in making machines smarter in isolation, but in making human–AI collaboration work better. Alignment, not raw intelligence, is what turns AI from a source of frustration into a source of value.”
About Stevens Institute of Technology Stevens is a premier, private research university situated in Hoboken, New Jersey. Since our founding in 1870, technological innovation has been the hallmark of Stevens’ education and research. Within the university’s three schools and one college, more than 8,000 undergraduate and graduate students collaborate closely with faculty in an interdisciplinary, student-centric, entrepreneurial environment. Academic and research programs spanning business, computing, engineering, the arts and other disciplines actively advance the frontiers of science and leverage technology to confront our most pressing global challenges. The university continues to be consistently ranked among the nation’s leaders in career services, post-graduation salaries of alumni and return on tuition investment.
Stevens Media Contact
Lina Zeldovich
Manager of Media Relations
Division of University Advancement
201-216-5123
[email protected]


