Large-Scale Cloning Attacks Target Gemini

Google revealed that its advanced model, Gemini, was subjected to intense cloning attempts during the first months of 2026. Attackers issued more than 100,000 API requests to probe the model's behavior and mimic its capabilities. Google's security team identified an organized campaign, originating from multiple international locations, that exploited public programming interfaces to target both the commercial and beta versions of the platform.

These attacks, known in the industry as “model stealing” or “model extraction,” allow adversaries to build competing models by inferring the original model’s behavior from its responses, without access to its training data or source code. By issuing repeated queries and analyzing the text responses, attackers increase the risk of breaching privacy or exposing the advanced processing techniques that distinguish Gemini from other available models. In response to this threat, Google launched urgent security updates in early February 2026. The measures include tighter monitoring of unusual activity, algorithmic analysis of incoming requests, and strict limits on the number and type of requests allowed within specific timeframes.
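The request-limiting measure described above is commonly implemented with a token-bucket rate limiter. The sketch below is a minimal, generic illustration of that technique, not Google's actual system; the class name, rate, and capacity are hypothetical choices for the example.

```python
import time


class TokenBucket:
    """Per-client token bucket: each request consumes one token, and
    tokens refill at a fixed rate up to a burst capacity. Requests that
    arrive with no tokens left are rejected."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # refill rate (tokens/second)
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)     # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if the request may proceed, False if it is throttled."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A per-client instance of such a limiter caps how many probing queries any single source can issue in a window, which directly raises the cost of the high-volume querying that extraction attacks depend on.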

Google stated that these new systems use AI to detect unconventional patterns in programming requests. An interactive security matrix has been activated to prevent attackers from extracting secrets about the model’s algorithms or linguistic processing methods. Additionally, periodic audits of text responses have been introduced, marking a significant shift in global AI security policies.
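Google does not disclose how its pattern detection works, but the general idea of scoring request streams for extraction-like behavior can be illustrated with a toy heuristic: extraction campaigns tend to combine high query volume with highly repetitive, templated prompts. Everything below (the function name, the thresholds, the diversity metric) is an assumption made for illustration only.

```python
def extraction_risk_score(prompts: list[str]) -> float:
    """Toy heuristic returning a risk score in [0, 1].

    High request volume combined with low lexical diversity (many
    near-identical, templated prompts) is treated as suspicious;
    small, varied workloads score near zero."""
    if not prompts:
        return 0.0
    # Lexical diversity: unique tokens / total tokens (low => templated).
    tokens = [t for p in prompts for t in p.lower().split()]
    diversity = len(set(tokens)) / max(len(tokens), 1)
    # Volume factor saturates at 1.0 around 1,000 prompts per window
    # (an arbitrary threshold chosen for this sketch).
    volume = min(len(prompts) / 1000.0, 1.0)
    return volume * (1.0 - diversity)
```

A production detector would use far richer signals (timing, embeddings of the prompts, account history), but the sketch shows the shape of the approach: turn a stream of requests into a score and act on clients above a threshold.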

The AI sector has seen a significant rise in cloning attempts since 2025, particularly as companies and institutions increasingly rely on large models like Gemini and ChatGPT for commercial and educational applications. These attempts are often driven by competitive motives or the desire to leverage the original model’s capabilities to develop parallel software without bearing the costs of training and development. While comprehensive statistics on successful attacks in 2025 and 2026 are not yet available, major companies like Microsoft and Nvidia have announced efforts to strengthen the security of AI models provided to government and research institutions.

The exposure of Gemini to such massive cloning attempts carries significant implications for the 2026 AI market. It reflects the growing shift towards protecting intellectual property and raises questions about companies' ability to counter organized cyber threats. Such attacks pose economic and security risks, ranging from untrustworthy clones appearing on the market to violations of user rights and data. Digital security experts warn that successfully cloning advanced AI models could lead to the spread of unsafe platforms used for fraud or disinformation, especially in vital sectors like health, finance, and education. Consequently, protecting data and verifying model integrity has become a top priority for technology manufacturers and service providers. Google expects the frequency of these attacks to increase amid fierce global tech competition and is preparing awareness initiatives and distributed protection tools alongside its ongoing security updates.
