Post by wsannhbz on Dec 5, 2023 4:36:08 GMT -5
Google’s Search Center blog states that it is key for Google to “reward quality content regardless of how it is created […]. Automation has long been used to generate useful content, such as sports scores, weather forecasts and transcripts. AI can open up new levels of expression and creativity and be a key tool to support the creation of great web content.”

Unreliability of AI content detectors. Reality or myth?

Although AI content detectors are ubiquitous,
their effectiveness is questionable. The main problems are low accuracy in detecting AI-generated content, false positives, and the difficulty of adapting detectors to rapidly diversifying and improving AI models. Tests conducted by OpenAI showed that their classifier recognized GPT-generated text only 26% of the time. An interesting illustration of the unreliability of detectors comes from an experiment conducted by TechCrunch, in which the GPTZero tool correctly identified five out of seven AI-generated texts, while the OpenAI classifier identified only one.
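To make a figure like that 26% concrete: a detection rate is typically measured by running the detector over a set of texts known to be AI-generated and counting how many it flags. Below is a minimal sketch of that bookkeeping in Python; detect_ai_probability and the 0.5 threshold are hypothetical stand-ins for whatever score a real tool such as GPTZero or the OpenAI classifier returns, not an actual API.

def detect_ai_probability(text: str) -> float:
    # Hypothetical placeholder; wire up a real detector's API here.
    raise NotImplementedError("plug in a real detector")

def true_positive_rate(ai_texts: list[str], threshold: float = 0.5) -> float:
    # Fraction of known AI-generated texts that the detector flags as AI.
    flagged = sum(1 for t in ai_texts if detect_ai_probability(t) >= threshold)
    return flagged / len(ai_texts)

# Per OpenAI's own tests, this would come out around 0.26 for their
# classifier on GPT-generated text.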
[Image: AI content detectors. Source: GPTZero (https://gptzero.me/)]

In addition, there is a risk of false positives, that is, text written by a human being identified as AI-generated. For example, the beginning of the second chapter of Miguel de Cervantes’ Don Quixote was marked by the OpenAI detector as most likely written by artificial intelligence. While errors in the analysis of historical literary texts can be treated as an amusing curiosity, the situation becomes more complicated when detectors are used as tools for evaluating texts. The U.S. Constitution was marked by ZeroGPT as 92.15% written by artificial intelligence, and, according to a study published by researchers at Stanford University, 61% of TOEFL essays written by non-native English speakers were classified as AI-generated.
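Taken together, these numbers show how weak a “flagged as AI” verdict can be. A short Bayes' rule sketch makes the point; it treats the 26% detection rate and the 61% false-positive rate on non-native TOEFL essays as if they described one detector on one population, an assumption made purely for illustration since the figures come from different tests.

def p_ai_given_flag(prior_ai: float, tpr: float, fpr: float) -> float:
    # Bayes' rule: probability a flagged text really is AI-generated.
    flagged_ai = prior_ai * tpr            # AI texts correctly flagged
    flagged_human = (1 - prior_ai) * fpr   # human texts wrongly flagged
    return flagged_ai / (flagged_ai + flagged_human)

# Rates quoted above, combined for illustration only:
print(p_ai_given_flag(prior_ai=0.5, tpr=0.26, fpr=0.61))  # ~0.30

Even assuming half of the essays were machine-written, a flag would leave only about a 30% chance that a given non-native speaker’s essay is actually AI-generated, so under these rates the flag points away from, not toward, AI authorship.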