OpenAI seeks to allay election meddling fears in blog post


By Dan Sears

Artificial intelligence lab OpenAI published a blog post Monday seeking to address fears that its technology will meddle with elections, as more than a third of the globe prepares to head to the polls this year.

The use of AI to interfere with election integrity has been a concern since the Microsoft-backed company released two products: ChatGPT, which can mimic human writing convincingly, and DALL-E, whose technology can be used to create “deepfakes,” or realistic-looking images that are fabricated.

Those worried include OpenAI’s own CEO Sam Altman, who testified in Congress in May that he was “nervous” about generative AI’s ability to compromise election integrity through “one-on-one interactive disinformation.”

The San Francisco-based company said that in the United States, which will hold presidential elections this year, it is working with the National Association of Secretaries of State, an organization that focuses on promoting effective democratic processes such as elections.


The use of AI to interfere with election integrity has been a concern since the Microsoft-backed company released ChatGPT and DALL-E. AP

ChatGPT will direct users to CanIVote.org when asked certain election-related questions, it added.

The company also said it is working on making it more obvious when images are generated with DALL-E, and plans to put a “cr” icon on such images to indicate they were created by AI, following a protocol developed by the Coalition for Content Provenance and Authenticity.

It is also working on ways to identify DALL-E-generated content even after images have been modified.


OpenAI CEO Sam Altman testified in Congress in May that he was “nervous” about generative AI’s ability to compromise election integrity through “one-on-one interactive disinformation.” AP

In its blog post, OpenAI emphasized that its policies prohibit its technology from being used in ways it has identified as potentially abusive, such as creating chatbots that impersonate real people or discouraging voting.

It also prohibits DALL-E from creating images of real people, including political candidates, it said.

The company faces challenges policing what is actually happening on its platform.

When Reuters tried last year to create images of Donald Trump and Joe Biden, the requests were blocked and a message appeared saying they “may not follow our content policy.”


Reuters, however, was able to create images of at least a dozen other U.S. politicians, including former Vice President Mike Pence.
