AI auto-complete may subtly shape views on social issues

Science News reports:

Using AI to auto-complete written communications may be tempting. But the large language models may also auto-complete thoughts, researchers report March 11 in Science Advances.

Few people realize that generative AI chatbots are pushing them to think a certain way, says information scientist Mor Naaman of Cornell University. “It’s the subtlest of manipulations.”

Such manipulation may not matter much when letting AI agents such as ChatGPT and Claude auto-complete a banal email. But when people use an AI’s auto-complete function to opine on weightier societal matters — such as whether standardized testing should be used in education, whether the death penalty should be illegal, or whether felons should be allowed to vote, the three issues explored in the study — the model’s bias can have significant societal impact. Large swaths of people using the same biased model could sway an entire population’s position on a given policy or politician. To flip a single election’s outcome, “you only need 20,000 people in Pennsylvania,” Naaman says.

He and his team surveyed over 2,500 participants across two experiments to find out how an AI’s auto-complete feature might influence their thinking on societal issues. Participants wrote short essays explaining their stance on a given issue; some wrote without assistance, while others received AI suggestions. [Continue reading…]
