Hacker News

How many of those are false positives though? Probably just over 5000?

You get bug bounties if you report the kind of bugs Mythos identified. There's a reason no one collected bounties from the "5000 defects" Coverity identified.

The Mythos reports have several examples of chaining a whole bunch of logic in different parts of the program together to exploit something very subtle. The Coverity reports aren't anything like that. These tools aren't remotely in the same league or even universe.




Yeah, fuzzing, sanitizers, and bug bounties were our main pre-AI tools for finding bugs.

It's just sad that Coverity represents the best working C++ static analysis tool.

There's also PVS-Studio. They also scan open source projects - see https://pvs-studio.com/en/blog/inspections/

It's hard to convince managers to spend money on static analysis tools (or any development tool).

Unless your company just got bad publicity for a bug, and your devs can demonstrate that a particular static analysis tool would have flagged that exact piece of code, most managers will let the bean-counter facet dominate the decision-making process.


The best general purpose one, anyway. Specialty tools can be much better for their niches. Heck, compiler warnings are one such niche tool, and some of them are quite good.


