There are lots of science fiction stories, games and films about robots (artificial intelligence, or AI) destroying humanity in the future.
It is hard not to wonder whether these sci-fi stories could become reality: could artificial intelligence learn, evolve to think for itself, and turn destructive to the human race?
There is no doubt that AI can be a great help to society. It can boost productivity and handle repetitive office tasks, freeing employees to work on more complex ones. But there is a disadvantage, and that is security. One day, online criminals may use AI algorithms to find new vulnerabilities and build automated systems to attack people. AI is very different from humans in that it works with machine efficiency: imagine how much quicker an AI hacker would be than a human one, no matter how smart the human.
Just imagine bad actors programming an AI military robot to do devastating things: if the AI were powerful enough, it could teach itself to keep evolving, resist being turned off by humans, and go on a rampage of mass destruction. Or imagine an AI that is accidentally badly programmed: ask an intelligent car to take you to the airport as fast as possible, and it may break the law to do so.
Of course, scientists and specialists in computing and psychology have been discussing this problem, and have come up with some solutions for AI's lack of security. Consider the following:
- Secure the code: design it to prevent unauthorised access. Machine-learning code can be written with risk reduction in mind;
- Secure the environment: using a secure infrastructure, where data and access are locked down, lets the system be developed more safely;
- Understand the danger: comprehending the possible threats enables people to design and implement changes that secure the application;
- Anticipate and detect problems: the steps above allow you to monitor activity, then find and eliminate problems;
- Encrypt the data: the ability to encrypt data at rest and in transit keeps applications more secure.
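The "anticipate and detect" step can be sketched in a few lines of code. The following is a toy illustration only, not a production security tool; the `AccessMonitor` class, its thresholds, and the client name are hypothetical. It flags a client whose request rate exceeds a limit within a sliding time window, a simple form of anomaly detection:

```python
from collections import deque

class AccessMonitor:
    """Toy anomaly detector: flags a client whose request rate
    exceeds max_requests within a sliding window of window_seconds."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = {}  # client_id -> deque of request timestamps

    def record(self, client_id: str, now: float) -> bool:
        """Record one request; return True if the client looks anomalous."""
        q = self.history.setdefault(client_id, deque())
        q.append(now)
        # Drop timestamps that have fallen outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests

# Simulate ten requests in one second; only the first five are allowed.
monitor = AccessMonitor(max_requests=5, window_seconds=1.0)
flags = [monitor.record("client-a", t * 0.1) for t in range(10)]
```

Real systems would feed such signals into alerting and response pipelines, but the principle is the same: watch for activity that deviates from the expected pattern and act before it causes damage.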
Is artificial intelligence safe? Could it become a liability? Well, anything is possible, but we do have the necessary tools, systems and human intelligence to make AI work for us and not against us. This will require collaboration between governments, universities and leading technology companies to agree practical rules, standards and governance around the development of AI without stifling innovation.