Post by account_disabled on Dec 2, 2023 2:23:03 GMT -8
Vulcan Cyber has identified a new malware delivery technique that we call "AI package hallucination." The technique relies on the fact that ChatGPT, and likely other generative AI platforms, sometimes answers questions with imaginary sources, links, blogs, and statistics. It even generates questionable patches for CVEs and, in this case, offers links to coding libraries that do not actually exist.

Using this technique, an attacker begins by asking ChatGPT for software that will solve a coding problem. ChatGPT responds with several package recommendations, some of which may not exist. This is where things get dangerous: when ChatGPT recommends a package that is not published in any legitimate software repository, the attacker can publish their own malware under that unclaimed name. The next time a user asks a similar question, ChatGPT may recommend the same name again, and the user ends up installing the attacker's malware.
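To make the defensive side of this concrete, here is a minimal sketch (our own illustration, not something from Vulcan Cyber's research) that checks whether package names suggested by an AI assistant actually exist on PyPI, using PyPI's public JSON API. The package names in the example are hypothetical placeholders.

```python
# Minimal sketch: verify that package names suggested by an AI assistant
# actually exist on PyPI before installing them.
import urllib.error
import urllib.request

PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI JSON API


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI knows about the package, False on a 404."""
    try:
        with urllib.request.urlopen(PYPI_JSON_URL.format(name=name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # Name is unclaimed on PyPI: a likely hallucinated package.
            return False
        # Other errors (rate limiting, outages) should not be read as "missing".
        raise


if __name__ == "__main__":
    # Hypothetical names an AI assistant might suggest for a coding problem.
    suggested = ["requests", "totally-made-up-http-helper"]
    for name in suggested:
        status = "exists" if package_exists_on_pypi(name) else "NOT on PyPI (possible hallucination)"
        print(f"{name}: {status}")
```

Note that this check only flags names that are currently unregistered; an attacker may register a hallucinated name later, and the mere existence of a package says nothing about its safety, so maintainer history and release metadata still need to be reviewed before installing.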
87% of French respondents (86% worldwide) say that the volume of data makes personal and professional decisions much more complicated, and 48% of French respondents (59% worldwide) admit to facing a dilemma more than once a day, meaning they do not know what decision to make. 39% of French respondents (35% worldwide) do not know which data or sources to trust, and 71% (70% worldwide) have already given up on making a decision because the volume of data was too large.