
Don't dare to use ChatGPT to pad your papers! OpenAI's anti-cheating tool has been exposed, with accuracy of up to 99.9%, and the "good news": it hasn't been launched yet

Qubits 2024/08/05 13:32
Yishui | From Aofeisi
Qubit | Official account QbitAI

A tool that checks whether content was written with ChatGPT, with an accuracy rate as high as 99.9%!

This tool is from OpenAI.

It is designed specifically to detect whether ChatGPT was used to pad out papers or assignments. The idea was already proposed back in November 2022, the same month ChatGPT was released.

But!

Such a useful tool has been kept internal for two years and still has not been made public.


Why?

OpenAI surveyed loyal users and found that nearly a third of them said they would abandon ChatGPT if it deployed an anti-cheating tool. The tool might also have an outsized impact on non-native English speakers.

However, there are also people within the company who have suggested that the use of anti-cheat methods is good for the OpenAI ecosystem.

The two sides have been at loggerheads, and the watermark detection tool has not been released as a result.

Besides OpenAI, companies such as Google and Apple have also built similar tools; some have entered internal testing, but none have officially launched.

Detection was discussed before ChatGPT was released

After ChatGPT became popular, many high school students and college students used it to write homework, so how to screen AI-generated content has also become a hot topic in the circle.

Judging from the latest exposed information, OpenAI took this issue into account long before the release of ChatGPT.

The person who developed the technology at the time was Scott Aaronson, who works on safety at OpenAI and is also a professor of computer science at the University of Texas.


In early 2023, John Schulman, one of the co-founders of OpenAI, outlined the pros and cons of the tool in a Google document.

The company's executives then decided that they would seek input from a range of people before taking further action.

In April 2023, a survey commissioned by OpenAI showed that only a quarter of people worldwide supported adding detection tools.

In the same month, OpenAI conducted another survey of ChatGPT users.

The results showed that nearly 30% of users said they would use ChatGPT less if it deployed watermarks.

Since then, there has been a lot of controversy around the technical maturity of the tool, as well as the preferences of users.

In early June of this year, OpenAI convened senior staff and researchers to discuss the project again.

In the end, the participants reportedly agreed that although the technology is mature, the results of last year's ChatGPT user survey could not be ignored.

Internal documents show that OpenAI believes they need to develop a plan by this fall to influence public perception of AI transparency.

However, as of the time the news broke, OpenAI had not revealed any specific countermeasures.

Why is it not public?

To summarize why OpenAI has been slow to release this technology, there are two main factors: the technology itself, and user preferences.

Let's start with the technology. As early as January 2023, OpenAI developed a tool to screen text from multiple AI models, including ChatGPT.

This technology uses a method similar to "watermarking" to embed invisible marks into the text.

This way, when someone analyzes the text with a detection tool, the detector can provide a score that indicates how likely the text is to be generated by ChatGPT.
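OpenAI has not published how its watermark works, but publicly described "green list" watermarking schemes from academic work give a sense of the idea: each previous token seeds a pseudo-random split of the vocabulary, generation is biased toward the "green" half, and the detector scores how often a text lands on green. Below is a minimal illustrative sketch; the toy vocabulary and all function names are hypothetical, not OpenAI's actual method:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GREEN_FRACTION = 0.5                      # half the vocabulary is "green"

def green_list(prev_token):
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def watermark_generate(length, seed=0):
    """Toy 'model' that always picks a green-list token.
    (A real model would only softly bias sampling toward the green list.)"""
    rng = random.Random(seed)
    tokens = ["tok0"]
    for _ in range(length):
        greens = green_list(tokens[-1])
        tokens.append(rng.choice(sorted(greens)))
    return tokens

def detect_score(tokens):
    """Fraction of tokens drawn from their green list: ~0.5 for human text,
    close to 1.0 for watermarked text. A real detector would turn this
    fraction into a z-score / probability."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / (len(tokens) - 1)
```

Because the split depends only on the text itself, the detector needs no access to the model: it recomputes each green list and counts hits, which is what makes the mark "invisible" yet checkable.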

However, its success rate at the time was only 26%, and OpenAI withdrew it just seven months later.

Later, OpenAI gradually raised the technology's success rate to 99.9%; technically, the project was ready for release about a year ago.

However, another controversy surrounding the technology is that internal employees believe that the technology could harm the quality of ChatGPT writing.

At the same time, employees raised the risk that people could circumvent the watermark.

For example, students could use a "translation trick": run the text through a tool like Google Translate into another language and then back again, which could erase the watermark.

Another example is tit-for-tat countermeasures: once watermarking tools are widely and openly used, netizens would come up with a cracked version in no time.

Beyond technology, the other major obstacle is users: multiple OpenAI surveys show that users are not enthusiastic about the technology.

That raises the question: what are users actually doing with ChatGPT?

For that, we can turn to a survey by The Washington Post, which examined nearly 200,000 English-language chats from the WildChat dataset, generated by humans conversing with two bots built on ChatGPT.

It can be seen that people mainly use ChatGPT for writing (21%) and helping with homework (18%).


Seen this way, it's not hard to understand why people oppose this detection technology.

So, would you agree to adding a watermark to tools like ChatGPT?

Reference link:
[1]https://www.wsj.com/tech/ai/openai-tool-chatgpt-cheating-writing-135b755a?st=ejj4hy2haouysas&reflink=desktopwebshare_permalink
[2]https://x.com/emollick/status/1820161210949464515
