
News: Google Claims World First As AI Finds 0-Day Security Vulnerability

A thread covering the latest news on trends, groundbreaking technologies, and digital innovations reshaping the tech landscape.

Cpvr (RealShit Staff, Real Moderator)
An AI agent has discovered a previously unknown, zero-day, exploitable memory-safety vulnerability in widely used real-world software. It’s the first publicly known example of such a find, according to Google’s Project Zero and DeepMind, the forces behind Big Sleep, the large language model-assisted vulnerability agent that spotted the flaw.


If you don’t know what Project Zero is and have not been in awe of what it has achieved in the security space, then you simply have not been paying attention these last few years. These elite hackers and security researchers work relentlessly to uncover zero-day vulnerabilities in Google’s products and beyond. The same accusation of lack of attention applies if you are unaware of DeepMind, Google’s AI research lab. So when these two technological behemoths joined forces to create Big Sleep, they were bound to make waves.

Google Uses Large Language Model To Catch Zero-Day Vulnerability In Real-World Code

In a Nov. 1 announcement, Google’s Project Zero blog confirmed that Project Naptime, its large language model-assisted vulnerability research framework, has evolved into Big Sleep. This collaborative effort, involving some of the very best ethical hackers from Project Zero and the very best AI researchers from Google DeepMind, has developed a large language model-powered agent that can uncover very real security vulnerabilities in widely used code. In the case of this world first, the Big Sleep team says it found “an exploitable stack buffer underflow in SQLite, a widely used open source database engine.”
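To make the bug class concrete, here is a minimal, made-up C sketch of a stack buffer underflow. This is purely illustrative; it is not the SQLite code, and the function and variable names are invented:

```c
/* Illustrative only: a stack buffer underflow writes *before* the
 * start of a stack-allocated buffer, corrupting adjacent stack
 * memory. Nothing here comes from SQLite; the names are made up. */

static void copy_field(const char *src, int n_fields) {
    char buf[16];
    int idx = n_fields - 1;   /* BUG: when n_fields == 0, idx is -1 */
    buf[idx] = src[0];        /* writes one byte below buf: underflow */
}

int main(void) {
    copy_field("x", 0);       /* triggers the out-of-bounds write */
    return 0;
}
```

If an attacker can influence the value that sits just below the buffer on the stack, an off-by-one like this becomes memory corruption, which is why this class of memory-safety bug is treated as exploitable.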


The zero-day vulnerability was reported to the SQLite development team in October, and the developers fixed it the same day. “We found this issue before it appeared in an official release,” the Big Sleep team from Google said, “so SQLite users were not impacted.”


AI Could Be The Future Of Fuzzing, The Google Big Sleep Team Says

Although you may not have heard the term fuzzing before, it has been a staple of security research for decades. Fuzzing is the practice of feeding random or malformed data into a program to trigger errors in its code. Although fuzzing is widely accepted as an essential tool for those who look for vulnerabilities in code, hackers will readily admit it cannot find everything. “We need an approach that can help defenders to find the bugs that are difficult (or impossible) to find by fuzzing,” the Big Sleep team said, adding that it hoped AI could fill the gap and find “vulnerabilities in software before it's even released,” leaving little scope for attackers to strike.
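For readers who have never seen fuzzing in practice, below is a minimal libFuzzer-style harness in C. The entry point LLVMFuzzerTestOneInput is the real libFuzzer convention, but copy_field() is a hypothetical stand-in for whatever parsing routine is under test:

```c
/* Minimal libFuzzer-style harness: the fuzzing engine calls this
 * entry point in a loop with random and mutated inputs, and
 * AddressSanitizer reports any memory error an input triggers.
 * copy_field() is a hypothetical target, not a real SQLite API. */
#include <stddef.h>
#include <stdint.h>

void copy_field(const uint8_t *data, size_t len);  /* hypothetical target */

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    if (size == 0)
        return 0;             /* nothing to parse */
    copy_field(data, size);   /* a crash or sanitizer report == bug found */
    return 0;
}

/* Build and run (clang):
 *   clang -g -fsanitize=fuzzer,address harness.c target.c -o fuzz && ./fuzz
 */
```

The limitation the Big Sleep team points to is visible here: the fuzzer only exercises code paths its random mutations happen to reach, so bugs hidden behind highly structured input or subtle program state may never be triggered. That is the gap an LLM-guided agent is meant to fill.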

“Finding a vulnerability in a widely-used and well-fuzzed open-source project is an exciting result,” the Google Big Sleep team said, but admitted the results are currently “highly experimental.” At present, the Big Sleep agent is seen as being only as effective as a target-specific fuzzer. However, it’s the near future that is looking bright. “This effort will lead to a significant advantage to defenders,” Google’s Big Sleep team said, “with the potential not only to find crashing test cases, but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future.”

Source: https://www.forbes.com/sites/daveyw...rst-as-ai-finds-0-day-security-vulnerability/
 
The first time I heard about this was about a week ago from someone developing forum software with it.

Hopefully, they're able to push out a patch.

But this is amazing, and scary at the same time.

Any software developer could run their code through it to determine if there are security vulnerabilities in the work.

Though, then comes the ethical question: Does the LLM save that code after it's done or does it junk it?

It could easily help Google rival Adobe, let's say, if they pushed Creative Cloud through it. Google could essentially use the same model, bundled with other LLM programming models, to release something that'd normally take years of R&D in a matter of weeks to months.

Apply the same logic to smaller developers. Google could release smaller helpful utility apps and bundle them with Android.

I don't know how much I'd trust it if I were trying to profit from my work. But to find vulnerabilities in open source software seems like a good use for now.
 
I have heard of Google DeepMind; it's an AI research lab, but I have limited information on it. I have no idea what exactly zero-day vulnerabilities are. I will have to read more about this.
 
I’ve always known that Google is the real deal when it comes to the safety and security of our data online. This tech giant has some of the best hackers on its team to help block leaks and loopholes. This news further deepens my confidence in Google.
 
This showcases the power of computers and how they can keep us safe. It's great that Google is discovering the dangers hidden online. It's a great discovery! It makes the internet even safer to use.
 