Whichever human ultimately stood up the initial bot and gave it its loose directions, that person is responsible for the actions taken by that bot and any other agents it interacted with. You cannot wash away responsibility through N layers of machine indirection; the human is still liable.
That argument is not going to hold up for long, though. Someone can prompt "improve the open source projects I work on", and an agent 8 layers deep can do something like this. If you complain to the human, they are not going to care. The response will be "ok." or "yeah but it submitted 100 other PRs that got approved" or "idk, the AI did it".
We don't necessarily care whether a person "cares" that they're responsible for damage they caused. Society has developed ways to hold them responsible anyway, including but not limited to laws.