Recently I watched an episode of “Through the Wormhole with Morgan Freeman” that explored the distant possibilities of robotics. One topic that was particularly highlighted was how much freedom we should give to artificial intelligence. Should we allow robots to continue to think on their own and even communicate with each other through a language of their own making, or should we set firmer boundaries so that humans maintain control? The aspect I found most concerning was the possibility of robots creating a language of their own that would exclude us from their conversations. One scientist has already experimented with robots programmed to learn about themselves and, together, create their own unique language.
I am concerned about handing too much control to robots because one of the main reasons we build them is to perform certain actions much better or faster than we can, which means we are deliberately creating beings with superior capabilities. As a supporter of technology, I am not saying it is a bad thing that we create robots for these reasons, but I think we have to keep in mind that we could threaten our own species by creating a fully autonomous, thinking ‘species’ of robots.
As has come up many times in our other philosophy discussions, I think that balance is key. Tying together previous discussions on technology and on the existence of the human species, I think it is our responsibility to make sure that technology does not come to control us.