If I were to propose a second law of robotics, a law for humans to follow regarding AI and what is allowable, it would be this:
Every action taken by an AI will be considered the responsibility of a single human being.
If you kill someone with a hammer, it’s not the hammer’s fault. If you create or fund an AI that harms other humans, you will be held responsible.
And if you did so by creating an AI with emotional states, you should be treated as we treat those who use weapons of mass destruction against the defenseless.
If there is always a human behind the decisions of an AI, we will always treat AI as a tool, as an extension of ourselves.
And thus its agency will be constrained, and another layer of protection afforded.