From realistic photos of people who don't exist to methods for generating coherent text, AI-generated content is here to stay, looking more human than ever.
A game was designed to see whether people can tell the difference: real images and texts are pitted against their AI-generated counterparts in a pseudo-randomized quiz of 10 questions.
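A minimal sketch of how such a pseudo-randomized quiz could be assembled, assuming a pool of paired real and generated items; the pool contents, names, and pairing scheme are illustrative placeholders, not the published implementation:

```python
import random

# Hypothetical content pools: each real item has an AI-generated counterpart.
REAL_ITEMS = [f"real_{i}" for i in range(50)]
AI_ITEMS = [f"generated_{i}" for i in range(50)]

def build_quiz(n_questions: int = 10, seed: int | None = None) -> list[dict]:
    """Sample n question pairs without repeats, shuffling which side
    shows the real item so that position gives no hint."""
    rng = random.Random(seed)
    indices = rng.sample(range(len(REAL_ITEMS)), n_questions)
    quiz = []
    for i in indices:
        pair = [("real", REAL_ITEMS[i]), ("ai", AI_ITEMS[i])]
        rng.shuffle(pair)  # randomize presentation order per question
        quiz.append({"options": pair, "first_is_real": pair[0][0] == "real"})
    return quiz

if __name__ == "__main__":
    for question in build_quiz(seed=42):
        print(question["options"])
```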
With 4,000+ players and 35,000+ answers collected, the scores follow a normal distribution centered on an average of 5/10, suggesting that people cannot reliably tell the difference after all, and we should be worried.
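The reason 5/10 is the worrying number: with binary real-vs-AI questions, pure guessing yields an expected score of 5/10. A sketch of that check, assuming aggregate counts implied by the headline figures on this page (the exact correct-answer count is an assumption, not reported data):

```python
from scipy.stats import binomtest

n_answers = 35_000   # total answers collected (approximate, from this page)
n_correct = 17_500   # assumed: implied by the 5/10 (i.e. 50%) average score

# Test whether observed accuracy differs from the 50% expected by chance.
result = binomtest(n_correct, n_answers, p=0.5, alternative="two-sided")
print(f"p-value vs. chance: {result.pvalue:.3f}")
# A large p-value means the observed accuracy is statistically
# indistinguishable from coin-flipping on each question.
```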
The work was presented at the 13th CMI Conference 2020, showcased at the Artificial Creativity Conference 2020, and published by IEEE.
4,000+ players · 35,000+ answers · 5/10 average score
2020
Academic
Artificial Intelligence
Social Experiment
Web Development
Research Design