raywatcher 8 hours ago

For all the discussions about the slopification of the internet, the human toll on open source maintainers isn’t really talked about. It's one thing to get flooded with bad reports; it's another to have to mentally filter AI-generated submissions designed to "sound correct" but offer no real value. Totally agree with the author mentioning the emotional toll it takes to deal with these mind-numbing stupidities.

  • Aurornis 7 hours ago

    The most notable thing about this article, in my opinion, is the increase in human generated slop.

    Everyone is talking about AI in the comments, but the article estimates only 20% of their submissions are AI slop.

    The rest are from people who want a curl contribution or bug report for their resume. With all of the talk about open source contributions as a way to boost your career or get a job, getting open source contributions has become a checklist item for many juniors looking for an edge. They don’t have the experience to know what contributions are valuable or correct, they just want something to put on their resume.

    • grishka 5 hours ago

      Reminds me of those "I updated your dependencies/build system version" and "I reformatted your code" kinds of PRs I got several times for my projects. Yeah, okay, you did this very trivial thing. But didn't you stop to think about the fact that if it's so trivial, there must be a reason I haven't done it myself? "It already works as is" is a valid reason too.

      • Aurornis 4 hours ago

        I often update README files or documentation comments and submit PRs when I find incorrect documentation.

        I’ve had mixed results. Most maintainers are happy to receive a well-formatted update to their documentation. Some get angry at me for submitting non-code updates. It’s weird.

        • grishka 25 minutes ago

          There's nothing wrong with fixing actual mistakes. It's obviously in everyone's best interest for documentation to be correct.

          But updating dependencies and such is totally unproductive. It's contributing for the sake of having contributed in its purest form. The only thing that's worse is opening a PR to add a political banner to someone else's readme, and then getting very pissed off when they respectfully close it.

  • xg15 an hour ago

    Instead of trying to detect LLMs, maybe a better strategy would be to detect inconsistent or self-contradictory reports? The reports we see here seem to unravel at some point: they either leave out crucial information, such as code location or steps to reproduce, while insisting the information is present, or straight-up claim things about a code location that simply aren't true.

    Such as the buffer length check in [1], where the report hallucinated an incorrect length calculation and even quoted the line, then completely ignored that the quoted line did not match what the report claimed and was in fact correct.

    So essentially, can we put up a gaslighting filter?

    It seems like those kinds of inconsistencies could be found, ironically, by an LLM.

    [1] https://news.ycombinator.com/item?id=44561058
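
    As a rough illustration of that idea (not anything curl or HackerOne actually runs), a pre-filter could hand each incoming report to an LLM and ask only whether the report contradicts itself, before any human reads it. A minimal Python sketch, assuming the OpenAI Python SDK and treating the model name, prompt wording, and PASS/FAIL convention as placeholders:

        # Sketch of a "gaslighting filter": ask an LLM whether a report's claims
        # are internally consistent before a human looks at it. Assumes the
        # OpenAI Python SDK (>= 1.0); model name and prompt are placeholders.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        CHECK_PROMPT = """You are screening a security report for internal consistency.
        Answer FAIL if any of these hold, otherwise PASS:
        - it claims to quote source code, but the quote does not support the claim
        - it references a code location or repro steps it never actually provides
        - its conclusion contradicts its own evidence
        Reply with one line: PASS or FAIL, then a one-sentence reason."""

        def screen_report(report_text: str) -> tuple[bool, str]:
            """Return (looks_consistent, reason) for a submitted report."""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[
                    {"role": "system", "content": CHECK_PROMPT},
                    {"role": "user", "content": report_text},
                ],
            )
            verdict = resp.choices[0].message.content.strip()
            return verdict.upper().startswith("PASS"), verdict

        if __name__ == "__main__":
            ok, why = screen_report(open("report.txt").read())
            print("route to human" if ok else "hold for manual spot-check", "-", why)

    Anything flagged FAIL would still deserve a human spot-check; the point would only be to reorder the queue, not to auto-close reports.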

  • empiko 7 hours ago

    It's a human toll everywhere. AI used for peer review effectively forces researchers to implement its suggestions between revisions, AI used by managers suggests bad solutions that engineers are forced to implement, and so on. Effectively, the number of person-hours spent following whatever AI models suggest is increasing rapidly. Some of it might make sense, but uncomfortably many hours are burned in vain. There is a real productivity cost to the economy when chains of command aren't ready to filter out slop.

  • anon191928 8 hours ago

    This type of social moderation has existed for well over a decade, and FB had thousands of people hired for it. They were filtering LiveLeak-level or even worse content for years, with humans manually watching or flagging it. So nothing new.

    • bravetraveler 7 hours ago

      > hired

      Do remember "we're" (hi, interjecting) talking about open source maintainers, we didn't all make curl or Facebook

    • meindnoch 6 hours ago

      My gut tells me that deciding the soundness of a vulnerability report is not in the same complexity class as deciding whether a video shows ISIS torture footage.

  • friedel 8 hours ago

    > but offer no real value

    They could offer value, but just rarely, at least with the LLM/model/context they used.

    > toll it takes to deal with these mind-numbing stupidities.

    Could have a special area for submitting these where AI does the rejection letter and banning.

    • xg15 8 hours ago

      I think looking at one example is useful: https://hackerone.com/reports/2823554

      What they did was:

      1) Prompt an LLM for a generic description of potential buffer overflows in strcpy() and generic demonstration code for a buffer overflow (with no connection to curl or even OpenSSL at all).

      2) Present some stack traces and grep results that show usage of strcpy() in curl and OpenSSL.

      3) Simply claim that the strcpy() usages from 2) somehow indicate a buffer overflow, with no additional evidence.

      4) When called out, just pretend that the demonstrator code from 1) was the evidence, even though it's obviously just a textbook example and doesn't call any code from curl.

      It's not that they found some potentially dangerous code in curl and didn't go all the way to prove an overflow, which could have at least some value.

      The entire thing is just bullshit made to look like a vulnerability report. There is nothing behind it at all.

      Edit: Oh, cherry on top: the demonstrator doesn't even use strcpy() - or trigger any kind of buffer overflow. It tries to construct some shellcode in a buffer, then gives up and literally calls execve("/bin/sh")...

      • deepdarkforest 7 hours ago

        > The problem is in strcpy in the src files of curl.. have you seen the exploit code ??????

        The worst part is that once the poor maintainers ask them for clarifications, they go on the offensive and become aggressive. Imagine the nerve of some people: using LLMs to try to gaslight an actual expert into believing they made a mistake, and then acting annoyed or angry when the expert asks normal questions.

        • xg15 6 hours ago

          Yep.

          My guess is that the aggression is part of the ruse. Trying to start drama or intimidate the other party when your bluff is called is the oldest trick in the book...

          (You could see a similar pattern in the xz backdoor scheme, where they were deliberately causing distress for the maintainer to lower their guard.)

          Or maybe the guy here hoped that the reviewers would run the demo - blindly - and then somehow believe it was real? Because it prints some scary messages and then does open a shell. Even if that's the only thing it does...

    • ndepoel 8 hours ago

      > They could offer value, but just rarely, at least with the LLM/model/context they used.

      Still a net negative overall, given that you have to spend a lot of effort separating the wheat from the chaff.

      > Could have a special area for submitting these where AI does the rejection letter and banning.

      So we'll just have one AI talking to another AI with an indeterminate outcome and nobody learns anything of value. Truly we live in the future!

      • javcasas 6 hours ago

        It could be even better. On slop detection, shadowban the offender and have them discuss the report with two AI "maintainers", then after 30 messages reveal the ruse. Then ban.

    • meindnoch 8 hours ago

      >They could offer value, but just rarely, at least with the LLM/model/context they used.

      Eating human excrement can also offer value in the form of undigested pieces of corn and other seeds. Are you interested?

      • ElFitz 7 hours ago

        Funnily enough, fecal transplants (Fecal Microbiota Transplants, FMT) are a thing, used to help treat a range of diseases. It’s even being investigated to help treat depression.

        So…

        • javcasas 6 hours ago

          I'm sure it does. But would you like one every other week, like the LLM slop?

          • ElFitz 3 hours ago

            Honestly, regarding the whole "LLM slop" thing, I don’t care. I get why others do, but I just don’t.

            I don’t care how that sausage is made. Heck, sometimes gen AI even allows people who otherwise wouldn’t have had the time or skills to come up with funny things.

            What annoys me is all the spam SEO-gamed websites with low information density drowning the answer I’m actually looking for in pages of empty sentences.

            When they haven’t just gamed their way to the top of search results without actually containing any answer.

            And that didn’t need LLMs to exist. Just greed and actors with interests unaligned with mine. Such as Google’s former head of ads, apparently. [0][1]

            [0]: https://www.wheresyoured.at/the-men-who-killed-google/

            [1]: https://www.wheresyoured.at/requiem-for-raghavan/

leovingi 8 hours ago

And it's not just vulnerability reports that are affected by this general trend. I use social media, X specifically, to follow a lot of artists, mostly for inspiration and because I find it fun to share some of the work that other artists have created. But over the past year or so, the mental workload it takes for me to figure out whether a particular piece of art is AI-generated has become too much, and I've started leaning into the safe option of "don't share anything that seems even remotely suspicious unless I can verify the author".

The number of art posts that I share with others has decreased significantly, to the point where I am almost certain some artists who have created genuine works simply get filtered out because their work "looks" like it could have been AI-generated... It's getting to the point where if I see anything that is AI, it's an instant mute or block, because there is nothing of value there - it's just noise clogging up my feed.

  • DaSHacka 8 hours ago

    Genuine question: if you can't tell, why does it matter?

    • leovingi 8 hours ago

      It's a fair question and one that I've asked myself as well.

      I like to use the example of chess. I know that computers can beat human players and that there are technical advancements in the field that are useful in their own right, but I would never consistently watch a game of chess played between a computer and a human. Why? Because I don't care for it. To me, the fun and excitement is in seeing what a HUMAN can achieve, what a HUMAN can create - I apply the same logic to art as well.

      As I'm currently learning how to draw myself, I know how difficult it is and seeing other people working hard at their craft to eventually produce something beautiful, after months and years of work - it's a shared experience. It makes me happy!

      Seeing someone prompt an AI, wait half-a-minute and then post it on social media does not, even if the end result is of a reasonable quality.

      • rambambram 7 hours ago

        > As I'm currently learning how to draw myself, I know how difficult it is and seeing other people working hard at their craft to eventually produce something beautiful, after months and years of work - it's a shared experience. It makes me happy!

        Today I learned: LLMs and their presence in society eventually force one into producing/crafting/making/creating for fun instead of consuming for fun.

        All jokes aside, you got the solution here. ;)

      • ants_everywhere 7 hours ago

        All the current active chess players learned by playing the computer repeatedly.

        So what the human is achieving in this case is having been trained by AI.

      • impossiblefork 7 hours ago

        But how can't you tell?

        To me AI generated art without repeated major human interventions is almost immediately obvious. There are things it just can't do.

        • leovingi 7 hours ago

          For the most part I can actually tell, but it also depends on the style of the art. A lot of anime-inspired digital images are immediately obvious - AI tends to add quite a lot of "shine" to its output, if that makes sense. And it's way too clean, sterile even. And it all looks the same.

          But when the art style is more minimalist or abstract, I find it genuinely difficult to notice a difference and have to start looking at the finer details, hence the mental-workload comment. Oftentimes I'll notice an eye not facing the right direction or certain lines appearing too "repetitive", something I rarely see in the works of human artists. It's difficult to explain without actual image examples in front of me.

    • aDyslecticCrow 7 hours ago

      Much of what makes art fun is human effort and show of skill.

      People post AI art to take credit for being a skilled artist, just like people who post others' art as their own. It's lame.

      If I may be a bit controversial among artists: we're exposed to so much good art today that most art posted online is "average" at best. (The bar is so high that it takes 20+ years for most to become above average.)

      It's average even if a human posted it, but it's fun because a human spent effort making something cool. When an AI generates average art, it's... just average art. Scrolling Google Images to look at art is also pretty dull, because it's devoid of the human behind it.

      • latexr 6 hours ago

        To continue your point, following the human doing it is also infinitely more rewarding because you can witness their progress.

        • aDyslecticCrow 4 hours ago

          Definitely very fun...

          But I think that's less generally applicable. A lot of the online art community and market is really circular: people who like art make art, and buy/commission art from others they follow and know.

          That market will likely never disappear even if AI fully surpasses humans, because the specific humans were the point all along.

          But I think my previous comment applies more broadly, beyond that community.

    • npteljes 6 hours ago

      One of the reasons people react so badly to AI art is that they encounter it in a context that implies human art. Then the discovery becomes treachery, a breach of trust. Not unlike having sex lovingly, only to discover that there was no love at all. Or people being nice to someone but not meaning it, and them finding this out.

      It's about implications, and trust. Note how AI art is thriving on platforms where it's clearly marked as such. People can then go into it without the "hand crafted" expectation and enjoy it fully for what it is. AI-enabling subreddits and Pixiv come to mind, for example.

    • meindnoch 8 hours ago

      An olympic weightlifter doing clean and jerk with 150kg is worthy of my attention. A Komatsu forklift doing the same is not.

      • ta8645 7 hours ago

        > A Komatsu forklift doing the same is not ... [worthy of attention]

        It is, if you're managing a warehouse; then it's a wonderful marvel. And it is a hidden benefit to everyone who receives cheaper products from that warehouse. Nobody cares if it's a human or the Komatsu doing the heavy lifting.

        • latexr 6 hours ago

          You just made me realise why many people have trouble with analogies, to the point it seems they are arguing in bad faith. You have to consider the context the analogy is being applied to.

          It is patently obvious (though clearly not to everyone) that the person you’re replying to is describing the situation of seeing a human weightlifter vs a mechanical forklift doing the same in a contest, for entertainment. The analogy works as a good example because it maps to the original post about art.

          When you change the setting to a warehouse, you are completely ignoring all the context and trying to undermine the comment on a technicality which doesn’t relate at all to the point. If you want to engage with the analogy properly, you have to keep the original art context in mind at all times.

          • ta8645 6 hours ago

            And you're failing to understand that people can understand the analogy, and think that it fails to capture the entire situation, and so extend the analogy to make it obvious (although clearly not to everyone) that the analogy is lacking, and not very convincing.

            • latexr 5 hours ago

              > people can understand the analogy, and think that it fails to capture the entire situation, and so extend the analogy to make it obvious (although clearly not to everyone) that the analogy is lacking, and not very convincing.

              Of course, that could definitely happen. My point is that I don’t think it did in this case, and that the reply stretched the analogy so far beyond its main point that it made clear to me a pattern I have seen several times before but could never pinpoint.

              I am grateful to that comment for the insight. Understanding how people may distort analogies will force me to create better ones for more productive discussions.

              • ta8645 4 hours ago

                There was no distortion. You seem to want people to only take the desired implication, and not think too much more about it. But if you instead think for a second, you'll see that the analogy was crafted to send a message that is incorrect and limited. Rather than trying to create better analogies to handcuff a reader into your viewpoint, you might instead stop for a moment and see why this analogy wasn't actually as insightful as it might appear at first glance. And maybe even appreciate how my response shed light about that limitation. There are indeed people who get great entertainment out of machines that do heavy lifting, and they don't care how much a person can lift.

                • latexr 3 hours ago

                  > the analogy was crafted to send a message that is (…) limited.

                  All analogies are limited. That’s the point of an analogy: to focus on a shared element.

                  https://www.oxfordreference.com/display/10.1093/acref/978019...

                  > There are indeed people who get great entertainment out of machines that do heavy lifting, and they don't care how much a person can lift.

                  But crucially not the person making the analogy. They didn’t say a lifting machine would be uninteresting to everyone, only that it isn’t worth their (the commenter’s) time. They made an analogy to explain what they themselves think, not to push their point of view as ultimate universal truth.

                  • ta8645 2 hours ago

                    And my reply was to expand the context to show that there are people who feel otherwise. And yet you insist that my opinion is illegitimate and a "distortion". You've dug your heels in, and refuse to see that you had blinders on.

            • Henchman21 5 hours ago

              A deliberate misreading is not the same as engaging with the analogy in good faith, and your reply here seems to indicate that you’ve done the former while simultaneously engaging in some name-calling.

              • ta8645 4 hours ago

                What name calling?

          • cesarb 2 hours ago

            > You have to consider the context the analogy is being applied to.

            But that particular reply did not constrain itself to the context. It implied that a forklift lifting 150kg is not interesting at all. Which offends those of us who do appreciate watching heavy machinery do its work. That explains the unavoidable kneejerk replies of "actually, it is [interesting]".

        • npteljes 6 hours ago

          I think this is actually a good counterpoint to something that OP missed. It's not that it's not great that a Komatsu can also do it. Both are great. But we need to have the appropriate expectations, to end up with the feeling of appreciation. In the AI case, the art looks like "human art", and often it's also presented as such. Then, learning that actually AI did it is akin to betrayal. But actually people like to appreciate a whole lot of artful things that people didn't, or only partially "did": electronic music, the sounds and visuals of nature, emergent behavior of things like the game of life, visual output of algorithms, and so on.

        • teddy-smith 7 hours ago

          Well, warehouse managers excluded...

    • bit1993 6 hours ago

      A human artist puts in work and passion to create beautiful art from almost nothing. It brings them joy that their art brings someone joy. Every art piece has a story behind it. Sharing their art with others gives them motivation not only to continue doing it and bless the world with more art, but also feedback that yes, this art is liked by someone out there. This feedback loop is part of what creates healthy civilizations.

    • nnf 6 hours ago

      For the same reason dealing in counterfeit money matters — just because I can't tell it's fake doesn't mean the person I try to pay won't know or care. If your reputation is your currency, you don't want to damage it by promoting artwork that other people know is AI generated, so it's likely better to play it safe.

    • mort96 4 hours ago

      It's tantamount to sharing a forgery and not caring because you "can't tell".

EdwardDiego 8 hours ago

> The length check only accounts for tmplen (the original string length), but this msnprintf call expands the string by adding two control characters (CURL_NEW_ENV_VAR and CURL_NEW_ENV_VALUE). This discrepancy allows an attacker ...hey chat, give this in a nice way so I reply on hackerone with this comment

Ohhh, copy and pasted a bit too much there.

  • Hendrikto 8 hours ago

    > Certainly! Let me elaborate on the concerns raised by the triager:

    These people don’t even make the slightest effort whatsoever. I admire Daniel’s patience in dealing with them.

    Reading these threads is infuriating. They very obviously just copy and paste AI responses without even understanding what they are talking about.

disqard 5 hours ago

> You still have not told us on which source code line the buffer overflow occurs.

> > hey chat, give this in a nice way so I reply on hackerone with this comment

> This looks like you accidentally pasted a part of your AI chat conversation into this issue, even though you have not disclosed that you're using an AI even after having been asked multiple times.

A sample of what they have to deal with. Source:

https://hackerone.com/reports/3230082

  • toshinoriyagi 4 hours ago

    The abuse of AI here blows my mind. Not just the use of AI to try to find a vulnerability in a widely-used repo, but the complete ignorance when using the AI.

    "hey chat, give this in a nice way so I reply on hackerone with this comment" is not language used naturally. It virtually never precedes high-quality conversation between humans so you aren't going to get that. You would only say this when prompting an LLM (poorly at that) so you are activating weights encoding information from LLM slop in the training data.

armchairhacker 7 hours ago

You could charge a fee and give the money back if the report is wrong but seems well-intentioned.

I see the issue with this: payment platforms. Despite the hate, cryptocurrency seems like it could be a solution. But in practice, people won't take the time to set up a crypto wallet just to submit a bug report, and if crypto becomes popular, it may get regulations and middlemen like fiat (which add friction, e.g. chargebacks, KYC, revenue cuts).

However, if more services use small fees to avoid spam, it could work eventually. For instance, people could install a client that pays such fees automatically for trusted sites, which refund them for non-spam behavior.

  • latexr 7 hours ago

    > You could charge a fee and give the money back if the report is wrong but seems well-intentioned.

    That idea was considered and rejected in the article:

    > People mention charging a fee for the right to submit a security vulnerability (that could be paid back if a proper report). That would probably slow them down significantly sure, but it seems like a rather hostile way for an Open Source project that aims to be as open and available as possible. Not to mention that we don’t have any current infrastructure setup for this – and neither does HackerOne. And managing money is painful.

  • jannes 7 hours ago

    This is probably something that the platform HackerOne should implement. It can't be addressed on the project level.

    https://hackerone.com/curl/hacktivity

    • Aachen 7 hours ago

      Why?

      I don't know if the link you posted answers the question; I get a blocked page ("You are visiting this page because we detected an unsupported browser"). You'd think a Chromium-based browser would be supported, but even that isn't good enough. I love open standards like HTML and HTTP...

      Edit: just noticed it goes to HackerOne and not curl's own website. Of course they'd say curl can't solve payments on their own.

anthonyryan1 7 hours ago

As the only developer maintaining a bug bounty program, I believe they are all trending downward.

I've recently cut bounties to zero for all but the most severe issues, hoping to refocus the program on rewarding interesting findings instead of the low value reports.

So far it's done nothing to improve the situation, because nobody appears to read the rewards information before emailing. I think reading scope/rewards takes too much time per company for these low value reports.

I think that speaks volumes about how much time goes into the actual discoveries.

Open to suggestions for improving the signal-to-noise ratio from anyone who's made notable improvements to a bug bounty program.

  • Aachen 7 hours ago

    Similarly from a hacker's point of view, I also think vulnerability reporting is in a downwards spiral. Particularly the ones organised through a platform like this just aren't reaching the right people. It used to be pgp email to whoever needs to know of it and that worked great. I have no idea if it still would today for you guys, but from my point of view it's the only reliable way to reach a human who cares about the product and not someone whose job it is to refuse bounties. I don't want bounties, I've got a day job as security consultant for that, I'm just reporting what I stumble across. Chocolate and handwritten notes are nice, but primarily I want developers and sysadmins to fix their damn software

  • xg15 4 hours ago

    Putting on my tinfoil hat, I wonder if some of that slop might be coming from actual black-hat groups or state actors - who have an interest in making it harder to find and close real exploits.

    Those people wouldn't care about the bounty, overwhelming the system would be the point.

jgb1984 6 hours ago

LLMs are a net negative on society on so many levels.

silvestrov 8 hours ago

> charging a fee [...] rather hostile way for an Open Source project that aims to be as open and available as possible

The most hostile is Apple, where you cannot expect anything in response to bug reports. You are really lucky if you get any kind of feedback from Apple.

Getting good feedback is the most valuable thing ever. I don't mind having to pay $5/year to make reports if I know I would get feedback.

  • latexr 6 hours ago

    > You are really lucky if you get any kind of feedback from Apple.

    Hard disagree. When you get feedback from Apple, it’s more often than not a waste of time. You are lucky when you get no feedback and the issue is fixed.

  • omnicognate 7 hours ago

    This is because Apple software is perfect by definition. Any perceived bug is an example of someone failing to use the software correctly. Bug reports are records of user incompetence, whose only purpose is to be ritually mocked in morale-enhancing genius confirmation sessions.

ChrisMarshallNY 8 hours ago

> Maybe we need to drop the monetary reward?

That would likely fix some of it, but I suspect that you'd still get a lot, anyway, because people program their crawlers to hit everything, regardless of their relevance. Doesn't cost anything more, so why not? Every little hit adds to the coffers.

  • squigz 8 hours ago

    > Doesn't cost anything more, so why not? Every little hit adds to the coffers.

    Uhh... How does it not cost more to hit everything vs specific areas? Especially when you consider the actual payout rate for such approaches, which cannot possibly be very high - every little hit does not add to the coffers, which means you have to be more selective about what you try.

    • ChrisMarshallNY 8 hours ago

      Spammers and scammers have been running “scattershot” campaigns for decades. Works well for them, I guess, as they still do it.

      AI just allows them to be more effective.

yayitswei 8 hours ago

Make it cost money to submit.

  • bla3 7 hours ago

    The "Possible routes forward" section in the linked post mentions this suggestion, and why the author doesn't love it.

  • cjs_ac 8 hours ago

    ... and use the proceeds to increase the bounties paid to genuine bug reports.

  • komali2 7 hours ago

    "Submit deposit." They get the money back in all cases where the bug is determined not to be AI slop, including it not being a real bug, user error, etc. Otherwise, deposit gone.

placardloop 7 hours ago

These AI reports are just an acceleration of the slop created by similar human “researchers”. The real root cause of this is that most security “professionals” have been trained to do the bare minimum of work and expect a payday from it.

There’s an entire industry of “penetration testers” that do nothing more than run Fortify against your code base and then expect you to pay them $100k for handing over the findings report. And now AI makes it even easier to do that faster.

We have an industry that pats security engineers on the back for discovering the “coolest” security issue - and nothing that incentivizes them to make sure that it actually is a useful finding, or more importantly, actually helping to fix it. Even at my big tech company, where I truly think some of the smartest security people work, they all have this attitude that their job is just to uncover an issue, drop it in someone else’s lap, and then expect a gold star and a payout, never mind the fact that their finding made no sense and was just a waste of time for the developer team. There is an attitude that security people don’t have any responsibility for making things better - only for pointing out the bad things. And that attitude is carrying over into this AI slop.

There’s no incentive for security people to not just "spray and pray" security issues at you. We need to stop paying out bug bounties for discovering things, and instead better incentivize fixing them - in the process weeding out reports that don't actually lead to a fix.

  • jdefr89 7 hours ago

    Professional Vulnerability Researcher here... You are correct. Over the years this industry has seen an influx of script kiddies who do nothing but run tools. It's sad, but I really think this field needs more gatekeeping...

  • conartist6 7 hours ago

    Oh yes. AI has nothing to do with it! It is Totally Outrageous and Unexpected that AI would be abused to spew a lot of low value crap.

    Haha, I kid. Make no mistake, this is the AI sales pitch. A *weapon* to use on your opposition. If the hackers were trying to win by using it to wear down the defenders it could not possibly be working better.

    • conartist6 7 hours ago

      It is at the same time being used to tear down faith in democracy, open content on the Internet, and workers' autonomy, and generally serving to make all thought derivative while minimizing the incentives to create anything new that isn't AI.

whatevsmate 7 hours ago

How about only sending submissions to humans if they include a reproducible test case? Actual compilable source code + payload that reproduces an attack. Would this be too easily gamed by security researchers as well?
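
One way to make that requirement concrete (purely a sketch of the idea, not something any project actually runs): an automated gate that refuses to forward a report to a human until the attached proof-of-concept at least compiles against libcurl and produces an observable fault when run. The file name, compiler flags, and the AddressSanitizer-based pass criterion below are illustrative assumptions only:

    # Sketch of an automated vet: only escalate a report if its PoC compiles
    # against libcurl and actually faults when run under AddressSanitizer.
    # File name, compiler flags, and the crash criterion are assumptions.
    import subprocess
    import sys

    def vet_poc(poc_source: str = "poc.c") -> bool:
        build = subprocess.run(
            ["cc", "-fsanitize=address", "-g", poc_source, "-lcurl", "-o", "poc"],
            capture_output=True, text=True,
        )
        if build.returncode != 0:
            print("PoC does not compile; not escalating.\n" + build.stderr)
            return False

        try:
            run = subprocess.run(["./poc"], capture_output=True, text=True, timeout=30)
        except subprocess.TimeoutExpired:
            print("PoC hung; not escalating")
            return False

        # A report claiming memory corruption should at least make ASan complain.
        crashed = run.returncode != 0 and "AddressSanitizer" in run.stderr
        print("escalate to a human" if crashed else "no observable fault; not escalating")
        return crashed

    if __name__ == "__main__":
        sys.exit(0 if vet_poc(sys.argv[1] if len(sys.argv) > 1 else "poc.c") else 1)

It wouldn't catch everything (logic bugs, protocol-level issues), but it would filter out reports whose "exploit" never touches curl code at all.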

IsTom 7 hours ago

You could require that submissions include an expletive or anything else that LLMs are sanitized to not produce. With how lazy these people are that ought to filter out at least some of them.

  • xg15 4 hours ago

    They are lazy up until they lose money if they don't do something. So if this was the only way to submit the reports, they'll find a way to prompt-hack the LLM to produce the expletive.

    ...or, just add it to the generated text themselves.

caioluders 8 hours ago

Make a private program with monetary rewards and a public program without. Invite only verified researchers.

  • spydum 7 hours ago

    Right? I thought the value of these vuln programs like HackerOne and bug bounty platforms would be that you could use the submitters' reputation to filter the noise. Don't want to accept low-quality submissions from new or low-experience reporters? Turn the knob up.

konsalexee 8 hours ago

I think eventually all OSS projects/repos will suffer from this.

My bet is that git hosting providers like GitHub and others should start providing features that give us a better signal/noise ratio.

  • nkrisc 8 hours ago

    Why would GitHub develop features that are adversarial to one of Microsoft’s favorite products?

    • Keyframe 8 hours ago

      So that you pay for both.

      • nkrisc 8 hours ago

        Not even the mafia has it that good. You only pay them so they won’t beat you up. Imagine if you paid them to beat you up and then paid them to protect you from them.

    • notachatbot123 7 hours ago

      Learning from Cloudflare: host malware and DDoSers AND provide protection against them = $$$

      • nkrisc 7 hours ago

        I feel my question is naive in retrospect.

    • oytis 8 hours ago

      E.g. to secure the quality of training data?

  • detaro 8 hours ago

    Githubs owner is betting the farm on pushing slop, so that seems unlikely to happen there anytime soon.

    • hiccuphippo 7 hours ago

      They just need to offer you more slop to review the slop and give it a sloppiness score.

  • Applejinx 8 hours ago

    Depends. I'm not suffering from it at all, but mine is a sort of research project producing variations on audio processing under the MIT license.

    And I don't take pull requests: the only exception has been to accommodate a downstream user who was running a script to incorporate the code, and that was so far out of my usual experience that it took way too long to register that it was a legitimate pull request.

pinebox 6 hours ago

Maybe a curl Patreon for would-be H1 contributors? Just need to figure out a donation amount that is trivial for legitimate security researchers, but too rich for spammers.

bit1993 7 hours ago

AI slop is rapidly destroying the WWW; most content is becoming lower and lower quality, and it's difficult to tell whether it's true or hallucinated. Pre-AI web content is now more like the gold standard in terms of correctness, and browsing the Internet Archive is much better.

This will only push content behind paywalls. A lot of open-source projects will go closed source, not only because of the increased work maintainers have to do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and effectively re-licensed as proprietary.

  • Expurple 6 hours ago

    Permissively-licensed projects (which are the majority of FOSS projects out there) could always be re-licensed as proprietary. I publish most of my code under permissive licences and will continue doing that. LLM training doesn't really change anything for me.

  • Aurornis 7 hours ago

    There’s more to this story than AI slop:

    > The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions)

    Of course that 20% of AI slop submissions are not good, but there’s an overarching problem with juniors clamoring for open source contributions without having the skills or abilities to contribute something useful.

    They heard that open source contributions get jobs, so they spam contributions to famous projects.

EdwardDiego 8 hours ago

Reading this particular instance of slop was especially galling. It's like the world's slowest ChatGPT dialogue via a bug tracker.

https://hackerone.com/reports/2298307

  • 0x000xca0xfe 7 hours ago

    DDoSing humans.

    LLMs are the perfect tool to annihilate online communities. I wonder when we'll see the first deliberate attack. These incidents seem (so far) isolated and just driven by greed.

4gotunameagain 8 hours ago

Oh god, going through some of the reports listed at the bottom of the page feels like a nightmare. I cannot imagine how it is for the actual maintainers.

I wonder what the solution is here. You need to be able to receive reports from anyone, so a reputation-based system is not applicable. It also seems like we cannot detect whether a piece of text was generated with LLM..

  • Hendrikto 8 hours ago

    I would have closed ALL of the linked reports much sooner, and banned the reporters. In most cases it is extremely obvious from very early on in the thread that these people have not the slightest idea what they are saying and just copy-paste AI responses.

  • amiga386 8 hours ago

    > It also seems like we cannot detect whether a piece of text was generated with LLM

    Based on reading those same reports, I think you totally can detect it, and Daniel also thinks that -- or at least, you can tell when it's very obvious and the user has pretty much just pasted what they got from the LLM into the submit box. Sneaky humans, trying to disguise their sources by removing the obvious tells, make it harder.

    The curl staff assume good faith and let the submitter explain themselves. Maybe the submitter has a reason for using it -- the submitter may be honest or dishonest as they wish.

    I like that the curl staff ask submitters to admit up-front if any AI was used, so they can discriminate between people with a legitimate use case (e.g. people who don't speak English but can find valid exploits and want to use machine translation for their writeup), versus others (e.g. people who think generalised LLMs can do security analysis).

    But even so, the central point of this blog post is that the bad humans waste the maintainers' time, that time can't be gotten back, and even directly banning them does not have much of an effect on the bad humans, or on the next batch of bad humans.

  • vincnetas 4 hours ago

    Why is reputation not applicable? Arrange for a whitelist off-channel and then submit your reports. I think reputation and non-anonymity are the only workable way forward.

    And when I say "non-anonymity" I don't mean "public". You can be non-anonymous to one person and not the whole world.

soyyo 6 hours ago

Of the 21 reports included as examples, I have looked at number two, "Buffer Overflow Vulnerability in WebSocket Handling" (#2298307).

The style is obviously GPT-generated, and I think the curl team knows that; still, they proceed to answer and keep asking the author questions about the report to get more info.

What really bothers me is that these idiots are consuming the time and patience of nice and reasonable people. I really hope they can find a solution and don't eventually snap from having to deal with this bullshit.

IAmLiterallyAB 8 hours ago

Minimum reputation to submit might help

  • __bjoernd 7 hours ago

    How do I gather reputation to submit without being able to submit something?

    • the_snooze 6 hours ago

      Probably the same as in any other high-trust human interaction: you have to have someone on the inside introduce you and vouch for you.

jdefr89 7 hours ago

Some of these AI slop report exchanges are absolutely hilarious. Love seeing people caught red-handed and then trying to play it off... This is why Vulnerability Research needs more gatekeeping.

Dilettante_ 7 hours ago

Takes all the self-control I have not to make a crude joke about the title

lysace 8 hours ago

Sort of separate, but perhaps also relevant to the thousand cuts/slops: isn't the scope of curl/libcurl a bit too big?

It supports almost every file-related networking protocol under the sun and a few more just for fun. (https://everything.curl.dev/protocols/curl.html)

Meanwhile, 99.8% of users (I assume) just use it for HTTP.

Here's a few complex protocols I bet many do not know that curl supports:

- SMB

- IMAP

- LDAP

- RTMP

- RTSP

- SFTP

- SMTP

At the very least, this magnifies the cost of dealing with AI slop security reports and sometimes also the risk for users.

  • proactivesvcs 7 hours ago

    From this year's curl user survey, whilst HTTP/S is the majority use, more than 10% of users are using FTP and WebSockets and 5% still using telnet!

    https://curl.se/docs/survey/2025-1.1/

    • lysace 7 hours ago

      > The survey was announced on the curl-users and curl-library mailing lists (with reminders), numerous times on Daniel’s Mastodon (@bagder@mastodon.social) on LinkedIn and on Daniel’s blog (https://daniel.haxx.se/blog). The survey was also announced on the curl web site at the top of most pages on the site that made it hard to miss for visitors.

      It's not hard to imagine how that would miss the 99.x% of users who just want to download an HTTP/S resource after reading an instruction on some web page.

  • appreciatorBus 7 hours ago

    I’m sure the case could be made for a more focused project, but I think this is orthogonal to bad (or stupid) actors using AI to overwhelm bug-reporting channels.

    The issue highlighted in the article is people using AI to invent security problems that don’t exist. That doesn’t go away, no matter how much you strip down or simplify the project.

    I’d bet an AI writing tool will happily generate thousands of realistic looking bug reports about a “Hello World” one-liner.

    • lysace 7 hours ago

      I guess you are saying that we should build another tool, or fork curl and then remove non-http stuff. Then the world should transition from curl oneliners to, say, qurl oneliners.

  • soulcutter 7 hours ago

    > Isn't the scope of curl/libcurl a bit too big?

    No.

  • amiga386 6 hours ago

    You can always use another library with more limited use-cases if you're worried about scope. There are thousands of libraries for making HTTP requests, many of which are language-specific and much more ergonomic than libcurl.

    However, curl/libcurl would cripple itself and alienate a good portion of its userbase if it stopped supporting so many protocols, and specific features of protocols.

    There's a similar argument made all the time: "I only use 10% of this software". But it doesn't mean they should get rid of the other 90% and eliminate 100% of someone else's use of 10% of the software...

    And the real trouble is, there's no guarantee that your bargain with the devil would actually reduce the number of false reports, or reduce the time needed to determine they're false. The report submitters do not appear to correlate directly with the size of the attack surface: the example AI slop includes several reports that don't even call curl code, and yet the wording claims that they do. There's no limit to the scope of bad reports!

    • lysace 6 hours ago

      I think there is a solid argument to be made that fewer than one in a thousand curl users ever use anything besides HTTP/S.

comeon1544 8 hours ago

[flagged]

  • justin66 8 hours ago

    I know it’s not very PC but we need to be honest about the real threat here: Welsh people.

  • amiga386 8 hours ago

    No, we don't. AI slop is a worldwide phenomenon.

    Looking at just the users who submitted the reports on curl's AI slop list:

    * 4 users accounts are now closed/banned, so I don't know their activity

    * 12 users have only submitted invalid/spam reports to curl, and nobody else. They probably just open a new account each time, and they can come from anywhere, and claim to be anyone.

    * 5 users have at least one accepted report (to curl or anywhere else). These should be people who care about their reputation, and yet they've been caught submitting AI slop at least once.

    Of those 5, one is from Florida, USA, one is from Noida, India, and the other 3 don't disclose their location.

    • gus_massa 7 hours ago

      > at least one accepted report anywhere else

      From a previous post, some of the accepted reports elsewhere had suspicious titles, as if some projects are accepting everything.

  • j16sdiz 8 hours ago

    It costs basically nothing to set up a VPN...

ants_everywhere 7 hours ago

The obvious way forward is to have AI do an initial vet, and ideally create an exploit before a human reviews.