“Hey, Alexa! Are you trustworthy?” | MIT News

The feature title refers to a prominent study conducted by researchers at the MIT Media Lab exploring the relationship between AI social behaviors and human trust.

The Core Finding: Social Cues Build Trust

The research highlights that the more an AI assistant—like Amazon’s Alexa or social robots like Jibo—exhibits social cues, the more likely users are to perceive it as competent and emotionally engaging. Key insights from the MIT study include:

- Participants' perceptions changed significantly when the "wake word" was switched from a personified name ("Hey Alexa") to a brand name ("Hey Amazon").
- Devices with high social abilities actually encouraged families to interact more with each other, leading to more laughing and side conversations during use.

Practical Privacy Context

While social behaviors increase trust, experts and privacy advocates often highlight the practical risks of smart speakers:

- Researchers warned that personifying AI can be misleading, as it may mask the fact that a large corporation is the one actually collecting and accessing user data.

Managing Trust and Security

- Devices are always active in a low-power state to detect wake words, though they do not continuously record everything.
- False triggers can lead to recordings being stored without the user's explicit intent.
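The always-listening and false-trigger points above can be sketched in code. The following is a minimal toy sketch, not any vendor's actual implementation: audio is simulated as short text "frames," and `detect_wake_word`, `WAKE_WORD`, and the pre-roll size are all illustrative assumptions. It shows why a device can be "always active" without continuously storing audio, and how a mis-heard wake word still causes a recording to be kept.

```python
# Toy sketch of the always-on wake-word pattern (illustrative only;
# real devices run an on-device keyword-spotting model over audio).
from collections import deque

WAKE_WORD = "alexa"       # hypothetical wake word
PRE_ROLL_FRAMES = 3       # small rolling buffer kept before a trigger

def detect_wake_word(frame: str) -> bool:
    """Stand-in for an on-device keyword-spotting model."""
    return WAKE_WORD in frame.lower()

def run_device(frames):
    """Return the list of 'recordings' stored after each trigger."""
    pre_roll = deque(maxlen=PRE_ROLL_FRAMES)  # constantly overwritten, never uploaded
    recordings = []
    recording = None
    for frame in frames:
        if recording is not None:
            recording.append(frame)
            if frame == "<silence>":   # end of utterance: stop and store
                recordings.append(recording)
                recording = None
        elif detect_wake_word(frame):
            # A false trigger starts a recording here too: any frame that
            # merely resembles the wake word causes audio to be stored.
            recording = list(pre_roll) + [frame]
        else:
            pre_roll.append(frame)     # everything else is discarded

    return recordings

# One intentional trigger plus one false trigger (a mis-heard phrase
# containing the wake word): both utterances end up stored.
stream = ["chatter", "hey alexa", "what's the weather", "<silence>",
          "more chatter", "alexa-sounding noise", "private talk", "<silence>"]
print(len(run_device(stream)))  # → 2
```

The design mirrors the trade-off in the bullets: the low-power loop only retains a few frames of rolling context, so nothing is kept continuously, yet a single false match is enough to capture speech the user never intended to record.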