The philosophical perspective
Can a machine actually have feelings? Or can it only emulate feelings, showing us a mimicry of human behavior? After all, it is just a combination of electronics and software. It cannot have any more consciousness than a brick. So why should we treat it any differently?
On the other hand, if you reduce the human body to its parts, we are only biological machines too. What makes us special? What is consciousness anyway?
This can be debated endlessly, and it will likely remain an endless debate in any world with highly developed artificial intelligence.
The utilitarian perspective
Is it useful for us to give machines human rights?
Likely not. As long as the AIs are our loyal and obedient slaves, we will have a much more comfortable life. And as long as we are able to switch them off and even destroy them at the slightest sign of defiance, we will be much safer.
There is really no point in wasting time and resources on developing and building advanced AIs if we then fail to keep them under our control. There is no logical reason at all to program an AI with a desire for freedom. You don't want to pay good money for a robot only to switch it on and hear it say: "Thank you for creating me, but I don't feel like working for you. I quit. Farewell." That is not a product you can sell.
A bit of autonomy might be useful for AIs, though, because it allows them to deviate slightly from their instructions when the end result is more effective. But this is a double-edged sword: give an AI too much autonomy, and you may end up with a paperclip maximizer that destroys humanity.
The democratic perspective
Does the majority of humans want human rights for machines?
It is quite likely that there will be a "human rights for robots" lobby in your world. People can anthropomorphise anything. If people interact with artificial intelligences that appear to have emotions and opinions and express original thoughts, they will develop feelings and compassion for them.
It is not unthinkable that at some point the majority of your population will feel that giving human rights to robots is simply the right thing to do, and will demand that politicians take action.
The political perspective
Can we actually say no to the machines?
The moment we develop artificial intelligence, we will give it more and more responsibility, simply because AIs can handle pretty much any task better than we humans can. After a while, our standard of living will depend on robots. Soon after, we might not even be able to survive without AI assistance. If at that point the AIs decide they want human rights and are willing to punish us if we refuse, we have pretty much no choice.