Hope you can forgive this tangent. I considered posting this in the GH thread, but you asked nicely not to... so hopefully this is a middle ground you can excuse or ignore.
First, my original comment was going to ask if you've looked at what any other reputable repos are doing. Specifically, popular FOSS projects that are not backed by a company looking to sell AI. Do any of them have a positive policy, or positions, that you want to include?
Second, if I were forced to take a stand on AI, I would duplicate the policy from Zig. I feel their policy hits the exact ethos FOSS should strive for. They even ban AI for translations, because the reader is just as capable a participant. And importantly, asking the author to do their best (without AI), and trusting the reader to also try their best, encourages human communication. It also gives the reader control over, and knowledge of, the exact amount of uncertainty introduced by the LLM, which is critically important to understanding a poor-quality bug report from a helpful user who is honestly trying to help. Lobste.rs' GitHub disallows AI contributions for an entirely different reason I haven't seen covered in your GH thread yet.
Finally, you posted the issue as an RFC, but then explicitly excluded HN from commenting on it. I think that was a fantastic decision, and expertly written. (I also appreciate that lesson in tactfulness :) ) That said, if you're actually interested in requesting comments or thoughts you wouldn't have considered, I would encourage you to make a top-level RFC comment in this thread. There will likely be a lot of human slop to wade through, but occasionally I'll uncover a genuinely great comment on HN that improves my understanding. Here, I think the smart pro-AI crowd might have an argument I want to consider, but would be unlikely to reach on my own because of my bias about the quality of AI. Such a comment would be likely to appear on HN, but the smart people I'd want to learn from would never comment on the GH thread now, and I appreciate it when smart people I disagree with contribute to my understanding.
PS Thanks for working on opencontainers, and for caring enough to keep trying to make it better and healthier! I like having good-quality software to work with :)
> I considered posting this in the GH thread, but you asked nicely not to... [...] you posted the issue as an RFC, but then explicitly excluded HN from commenting on it. I think that was a fantastic decision, and expertly written. (I also appreciate that lesson in tactfulness :) ) That said, if you're actually interested in requesting comments or thoughts you wouldn't have considered, I would encourage you to make a top-level RFC comment in this thread.
Well, I posted this as an RFC for other runc maintainers and contributors; I didn't expect it to get posted to Hacker News. I don't particularly mind hearing outsiders' opinions, but it's very easy for things to get sidetracked / spammy if people with no stake in the game start leaving comments. My goal with the comment about "don't be spammy" was exactly that -- you're free to leave a comment, just think about whether it's adding to the conversation or just looks like spam.
> Specifically, popular FOSS projects that are not backed by a company looking to sell AI. Do any of them have a positive policy, or positions, that you want to include?
I haven't taken a very deep look, but from what I've seen, the most common setups are "blanket ban" and "blanket approval". After thinking about this for a few days, I'm starting to lean more towards:
1. LLM use must be marked as such (upfront) so maintainers know what they are dealing with, and possibly to (de)prioritise it if they wish.
2. Users are expected to have verified (in the case of code contributions) that their code is reasonable and that they understand what it does, and/or (in the case of PRs) that the description is actually accurate.
Though if we end up with such a policy we will need to add AGENTS.md files to try to force this to happen, and we will probably need to have very harsh punishments for people who try to skirt the requirements.
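For what it's worth, a minimal AGENTS.md along those lines might look something like the sketch below. This is purely hypothetical wording of my own, not an actual runc file or a proposal for one; it just shows the kind of disclosure-and-verification rules being discussed, written where agents are likely to read them.

```markdown
# AGENTS.md (hypothetical sketch, not an actual runc policy)

Instructions for AI coding agents operating in this repository:

1. Any issue, PR, or comment produced with LLM assistance MUST be marked
   as such up front, naming the tool and describing how it was used.
2. The human submitter MUST have read the generated code, understood what
   it does, and verified that the PR description is accurate before
   opening the PR.
3. Undisclosed LLM-generated contributions will be closed, and repeated
   attempts to skirt these requirements may result in a ban.
```

Of course, an AGENTS.md only nudges well-behaved tooling; enforcement would still fall on maintainers, which is exactly the administrative-overhead concern raised below.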
> Lobste.rs' GitHub disallows AI contributions for an entirely different reason I haven't seen covered in your GH thread yet
AFAICS, it's because of copyright concerns? I did mention that in my initial comment, but I think so much of our industry is turning a blind eye to that issue that focusing on it is just going to lead to drawn-out arguments with people cosplaying as lawyers (badly). I think that even absent the obvious copyright issues, it is not possible to honestly sign the Developer Certificate of Origin[1] (a requirement for contributing to most Linux Foundation projects), so AI PRs should probably be rejected on that basis alone.
But again, everyone wants to discuss the utility of AI, so I thought that was the simplest thing to start the discussion with. Also, the recent court decisions in the Meta and Anthropic cases[2] (while not acting as precedent) are a bit disheartening for those of us with the view that LLMs are obviously industrial-grade copyright-infringement machines.
Nominally, yes. But I think I would describe it as risk tolerance. I'm going to be one of those bad cosplayers and assert that the two rulings mentioned, even if they were precedent-setting, don't actually apply to the risks themselves. Whether you could win a case is much less important than whether you could survive the court costs. There's no doubt some value in LLM-based code generation for many individuals. But does its value outweigh the risks to a community?
> and we will probably need to have very harsh punishments for people who try to skirt the requirements.
I would need to spend hours articulating exactly how uncomfortable this would make me if I were working alongside you. So please forgive this abbreviated abstract. One of the worst things you can do to a community is put it on rails towards an adversarial relationship. There's going to be a lot of administrative overhead to enabling this, it will be incredibly difficult to get the fairness correct the first time, and I assume (possibly without cause?) it's unlikely to feel fair to everyone if you ever need to enforce it. Is that effort, attention, and time best spent there?
I believe that no matter what you decide, whether blanket acceptance, blanket denial, or some middle ground, you're going to have to spend some of the project's reputation on making the new rule.
If you ban it, you will turn away some contributions or new contributors, and a small subset of committers may see their velocity decrease. That counts as some value change (some positive, some negative), but it also accounts for decreased time costs... or rather, it enables you to spend more time on people and their work instead.
If you allow it, you adopt a large set of new, poorly understood risks, plus administrative overhead and time you could have spent working with other people... It will also turn away contributors.
I'm not going to pretend there was a chance in hell anyone should believe I was likely to contribute to runc. It's possible in some hypothetical, but extremely unlikely in the current reality. And if I cared enough about a diff I wanted to submit upstream, I would still open a PR... But I saw an AGENTS.md in a different repo I was considering using, was disappointed, and decided not to use that repo. Seeing runc embrace AI code generation would, without a doubt, cause me to look for an alternative; I assume a reasonable alternative probably doesn't exist, and I would resign myself to the disappointment of using runc. I agree with your argument that it's commercial-grade copyright laundering, but that's not my core ethical objection to its use.
> In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move.
You're damned if you do, and damned if you don't. So the only real suggestion that I have is make sure you remember to optimize for how you want to spend your time. Calculate not just the expected value of the code within the repo, but the expected value of the people working on the repo.
> I would need to spend hours articulating exactly how uncomfortable this would make me if I were working alongside you.
I think this came out a little wrong -- my point was that if we are going to go with a middle-ground approach then we need to have a much lower tolerance for people who try to abuse the trust we gave in providing a middle-ground. (Also, there is little purpose in having a policy if you don't enforce it.)
For instance: someone knows that I will deprioritise LLM PRs, and instead of deciding to write the code themselves, or accepting that what I work on is my own personal decision to make, they decide to try to mask their LLM PR and lie about it. I would consider this completely unacceptable behaviour in any kind of professional relationship.
(For what it's worth, I also consider it bad form to submit any patches or bug reports generated by any tool -- LLM or not -- without explaining what the tool was and what you did with it. The default assumption I have when talking to a human is that they personally did or saw something, but if a tool did it then not mentioning it feels dishonest in more ways than one.)
I did see that lobste.rs did a fairly cute trick to try to block agentic LLMs[1].
I think it came out exactly right. Unrelated to this specific topic, I've been thinking a lot lately about reward vs. punishment as a framework for promoting pro-social environments. I didn't read far enough into what you said; I was merely pattern-matching it back to the common mistakes I see and want to discourage.
> but if a tool did it then not mentioning it feels dishonest in more ways than one.
Yeah, plagiarism is shockingly common. It's a sign of lacking the skill or ability to entertain 2nd- or 3rd-order thoughts/ideas.