What exactly is the letter declaring? There are many interpretations of "AI safety," and most of them have nothing to do with maximizing the broad distribution of societal and ecosystem prosperity, or with minimizing the likelihood of destruction or suffering. In fact, some concepts of AI safety I have seen are doublespeak for rules that are more likely to lead to AI-imposed tyranny.
Where is the nuanced discussion of what we want and don't want AI to do as a society?
These details matter, and working through them collectively is progress, in stark contrast to getting dragged into identity politics arguments.
- I want AI to increase my freedom to do more and spend more time doing things I find meaningful and rewarding.
- I want AI to help us repair damage we have done to ecosystems and reverse species diversity collapse.
- I want AI to allow me to consume more in a completely sustainable way for me and the environment.
- I want AI that is an excellent and honest curator of truth, both in terms of accurate descriptions of the past and nuanced explanations of how reality works.
- I want AI that elegantly supports a diversity of values, so I can live how I want and others can live how they want.
- I don't want AI that forcefully and arbitrarily limits my freedoms.
- I don't want AI that forcefully imposes other people's values on me (or imposes my values on others).
- I don't want AI war that destroys our civilization and creates chaos.
- I don't want AI that causes unnecessary suffering.
- I don't want other people to use AI to tyrannize me or anyone else.
How about, instead of making broadly generic declarations about "AI safety," we get specific, and then ask people to make specific commitments in kind? It would be far more meaningful when they refuse, or when they oblige and then break their word.