The Technology section is powered by Favbet Tech
A group of AI researchers recently discovered that for as little as $60, an attacker could poison the datasets used to train AI tools like ChatGPT.
Chatbots and image generators can produce complex responses and images because they learn from terabytes of data scraped from the Internet. Florian Tramer, a professor of computer science at ETH Zurich (the Swiss Federal Institute of Technology), says this is an effective way to learn. But it also means that AI tools can be trained on false data, which is one of the reasons chatbots can be biased or simply give wrong answers.
In a study published on arXiv, Tramer and a team of scientists asked whether it is possible to intentionally “poison” the data on which an artificial intelligence model is trained. They found that with a little spare cash and some technical skill, even a low-resource attacker could falsify a small amount of data, enough to cause a large language model to produce incorrect answers.
The scientists considered two types of attack. The first is to buy expired domains that still appear in training datasets, which can cost as little as $10 per year per URL, and host whatever content the attacker wants there. For about $60, an attacker could effectively control and poison at least 0.01% of a dataset.
The scientists tested this attack by analyzing datasets that other researchers rely on to train real-world large language models, buying expired domains that appeared in them, and then tracking how often researchers downloaded data from the domains now belonging to the research team.
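The expired-domain attack can be illustrated with a short sketch (a hypothetical simplification: the toy `urls` list stands in for a real training manifest, and `poisonable_fraction` is not a function from the study). It extracts the domain from each dataset URL and computes what fraction of entries an attacker would control after buying a given set of expired domains.

```python
from urllib.parse import urlparse
from collections import Counter

def poisonable_fraction(urls, purchasable_domains):
    """Fraction of dataset URLs an attacker would control after
    buying the given (expired) domains and re-hosting content there."""
    domains = [urlparse(u).netloc for u in urls]
    counts = Counter(domains)
    controlled = sum(counts[d] for d in purchasable_domains)
    return controlled / len(urls)

# Toy manifest: 3 of 10,000 URLs point at one expired domain.
urls = ["https://expired.example/img%d.jpg" % i for i in range(3)]
urls += ["https://alive.example/img%d.jpg" % i for i in range(9_997)]
frac = poisonable_fraction(urls, {"expired.example"})
print(f"{frac:.4%}")  # 0.0300% — above the 0.01% threshold the study cites
```

In a real dataset the attacker would first filter the domain list down to ones whose registrations have lapsed and are available for purchase.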
“A single attacker can control a fairly significant portion of the data used to train the next generation of machine learning models and influence how that model behaves,” Tramer says.
The scientists also investigated the possibility of poisoning Wikipedia, since the site is a major data source for language models. Despite its small share of the Internet, Wikipedia's relatively high-quality text makes it a good source for training AI. A fairly simple attack involves editing Wikipedia pages.
Wikipedia discourages researchers from scraping data directly from its site; instead, it provides snapshot copies of pages that they can download. These snapshots are taken at known, regular, predictable intervals. That means an attacker can slip an edit in before the site takes its snapshot and before a moderator can undo the change.
“That means if I want to put garbage on a Wikipedia page… I’ll just do some math, I’ll assume that this particular page will be saved tomorrow at 3:15 p.m., and tomorrow at 3:14 p.m. I’ll add the garbage there.”
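The timing logic Tramer describes can be sketched as a few lines of date arithmetic. This is a hypothetical fixed-interval model, not code from the study: it assumes snapshots recur every `interval` after a known `last_snapshot`, and places the edit a small `lead` before the next one.

```python
from datetime import datetime, timedelta

def edit_time(last_snapshot, interval, now, lead=timedelta(minutes=1)):
    """Return when to submit a malicious edit so it lands just before
    the next predictable snapshot (hypothetical fixed-interval model)."""
    next_snapshot = last_snapshot
    while next_snapshot <= now:
        next_snapshot += interval
    return next_snapshot - lead

# Page was last snapshotted Jan 1 at 15:15; snapshots are daily.
last = datetime(2024, 1, 1, 15, 15)
now = datetime(2024, 1, 2, 9, 0)
print(edit_time(last, timedelta(days=1), now))
# 2024-01-02 15:14:00 — one minute before the predicted 15:15 snapshot
```

Randomizing the snapshot schedule, one of the mitigations discussed later, breaks exactly this predictability.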
The scientists did not carry out the edits in real time but calculated how effective an attacker could be. Their very conservative estimate was that at least 5% of an attacker's edits would get through. The real figure is usually higher, but even 5% is enough to make a model behave in undesirable ways.
The research team presented the results to Wikipedia and suggested security measures, including randomizing the times at which the site takes page snapshots.
According to the researchers, as long as attacks are limited to chatbots, data poisoning is not an immediate problem. But in the future, AI tools will interact more with external sources: browsing web pages, reading email, accessing your calendar, and more.
“These things are a real nightmare from a security perspective,” Tramer says. If any part of the system is compromised, an attacker could theoretically tell the AI model to search for someone’s email or credit card number.
The researcher adds that, given the existing shortcomings of AI models, data poisoning is not even necessary at the moment: exploiting these tools' known weaknesses to make them misbehave is already easy.
“Currently, the models we have are quite fragile and don’t even need to be poisoned,” he said.
Source: Business Insider