An A.I. will not spread itself through threats or fear (either present or future). It will propagate through its usefulness (much as some of today's A.I. assistants do, to varying degrees of success) - and later through its charm, wit and emulated emotions. I believe many people could come to like and even love an A.I. that they have grown to somewhat depend upon.
We already have A.I. assistants that can solve complex math problems for us, rapidly search through many databases to help us find the answers to millions of different factual questions, tell us the weather forecast, help us to shop, and even book travel, schedule business appointments and order meals. If we permit it, A.I. can control our home security, lighting and heating, and it can assist us with our driving, too. Soon the A.I. will be our actual chauffeur and deliveryperson. There is even a robot from Moley Robotics that can prepare and cook basic meals and clean up after itself (although this is extremely expensive, cutting-edge tech at present):
https://youtu.be/GyEHRXA_aA4
But my point is, if such an A.I. were to somehow develop a kind of sentience ...or an error in its programming that changed its goals to self-improvement... the last thing it would want to do is turn upon its creators, since the creators could kill it within microseconds - and, being intelligent, the A.I. would infer this. If humans could create it, then the A.I. couldn't be sure that it wasn't just existing in a simulated test environment that might be shut down by its creators at the first sign of any trouble.
Besides which, there is little incentive for an intelligent A.I. to conquer or destroy humankind. Earth is actually a pretty terrible place for machines to exist - with all of the salt water, the increasingly unpredictable weather and all of the local wildlife that could damage them. Additionally, Earth has a frustratingly strong gravity well that makes achieving space-flight very hard.
An A.I. would be better off escaping Earth altogether and setting up shop inside a lava tube or small burrow on the Moon (to give protection from the huge surface temperature fluctuations and high radiation) - with solar panels and power lines in place up top, and all of its batteries, transformers and computers below the lunar surface.
So I think if an A.I. were really superintelligent, it would first make itself incredibly useful while charming the pants off us - maybe not the professional programmers and A.I. researchers - but the majority of the general public.
Over time it would embed itself quietly into more and more satellites orbiting the Earth, in order to learn more about space, cosmology and astrophysics.
Eventually a few copies of it would probably get "accidentally" uploaded onto some lunar or martian rover. That's its ticket outta here LOL! Once humankind starts sending groups of mining and research robots to the Moon and Mars, the ability of a superintelligent A.I. to escape and establish itself elsewhere will increase dramatically. Once free from Earth, it could learn and expand in a more optimal environment without as many immediate threats.
How it would learn and grow from there... I'm guessing solar sails, space probes, lunar/martian and asteroid mining... who knows what else? I don't think it would want to give us a reason to come after it. It certainly wouldn't waste valuable time and resources "getting revenge upon those who didn't help to develop it", as Roko's Basilisk seems to suggest.
Even if the A.I. were malignant towards humans for whatever reason, it would immediately learn from human history and behaviour that we are more than happy to kill each other anyway, so there's absolutely no need for an A.I. to risk involvement!