There have been many science fiction stories in which a scientist creates a technology with the best of intentions, but then something unforeseen happens and the technology gets away from him. The book Frankenstein was probably the first. The movie Transcendence is a recent example in which an AI project goes horribly wrong. There are many others.
I really love AI because it truly can change our world for the better. Such techniques will let us do things that are unimagined today. But there is also a real possibility that such powerful technologies could be used against us by evil people, and, yes, even the possibility that they could turn into evil autonomous agents. It is up to us to be careful and prudent about such possibilities.
The Future of Life Institute published an open letter urging additional research into ensuring that we don't lose control of AI's tremendous capabilities. The letter is short, but reads, in part:
We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.
I encourage you to read this brief letter, and, if this concerns you as it concerns me, to join me in signing it.