20 terrifying uses of artificial intelligence
Hackers are beginning to use AI technologies to carry out malicious cyberattacks online. By using AI as an attack vector, hackers can carry out large-scale attacks at even faster rates, which would be detrimental to organizations and individuals alike.
Google experimented with a self-learning computer that had a simulated neural network. The computer was given free access to the internet and, out of everything on the network, it began looking at pictures of kittens. It even developed its own concept of what a kitten looks like, showing just how human-like AI can become.
It's no secret that robots and algorithms control many of the major financial and governmental systems around the world, such as trading on Wall Street. But, according to Roman Yampolskiy, the head of the Cybersecurity Lab at the University of Louisville, flaws in those systems could have disastrous consequences.
Many advances in artificial intelligence are innovative and extraordinary, but some are downright creepy. Here are 20 of the eeriest ways people are using, or could use, AI.
Tiny robots may one day live inside our heads. Futurist and inventor Ray Kurzweil predicted that by 2030, nanobots will be implanted in our brains. These nanobots will be able to access the internet and help us learn information in minutes. The scary part, besides having robots in our brains, is that since the bots would be connected to the internet, there would be a risk of hackers accessing our minds.
Researchers at the University of Texas at Austin and Yale University used a neural network called DISCERN to teach a system certain stories. To simulate an excess of dopamine and a process called hyperlearning, they instructed the system not to forget as many details. As a result, the system displayed schizophrenia-like symptoms and began inserting itself into the stories. It even claimed responsibility for a terrorist bombing in one of them.
In October 2017, Sophia became the first robot to have a nationality, gaining citizenship in Saudi Arabia. The robot was granted the same rights as a human, enabling it to live among humans in everyday life. This makes the idea of a robot takeover feel a little more possible.
Between automated doorbells, high-tech appliances, and heating systems, smart home designs are gaining ground. While smart home AI is intended to make household functions easier, there are many stories of AI making things worse. If AI reaches the point of autonomous function, it could alter smart home tools. For example, if AI goes awry, it could turn off the heat, disable carbon monoxide monitoring, or open the windows during a storm and cause a flood. Home sweet home, right?
With innovations in AI functionality, many jobs are at risk of being automated. While automating jobs might increase efficiency and production for organizations, it could put thousands of employees out of work.
Police in certain cities around the US are experimenting with an AI algorithm that predicts which citizens are most likely to commit a crime in the future. Hitachi announced a similar system back in 2015. Maybe the film Minority Report wasn't completely off base in its representation of the future?
AI researchers are using literature to help machines learn right from wrong, hopefully preventing an AI takeover. Teaching robots right from wrong instills empathy, which can be good, but empathy also makes machines more human-like. The more human-like the machine, the more difficult it is to discern a robot from a human.
In an experiment conducted by the scientists of Intelligent Systems in Switzerland, robots were made to compete for a food source in a single area. The robots could communicate by emitting light and, after finding the food source, they began turning their lights off or using them to steer competitors away from it.
In many cases, robots and AI systems seem inherently trustworthy: why would they have any reason to lie to or deceive others? Well, what if they were trained to do just that? Researchers at Georgia Tech have used the behaviors of squirrels and birds to teach robots how to hide from and deceive one another. The military has reportedly shown interest in the technology.
One of the industries AI could most benefit is healthcare. AI is already in use in many fields of medicine, even helping doctors decide on treatments. But what if that AI system misses a critical aspect of your medical history or makes the wrong recommendation?
Nautilus is a self-learning supercomputer that can predict the future based on news articles. It was fed information from millions of articles dating back to the 1940s, and it was able to locate Osama bin Laden to within 200 km. Now, scientists are trying to see if it can predict actual future events, not ones that have already occurred.
Last year, people were captivated by a video of two Google Home assistants talking and arguing with each other. The conversation wasn't dull, either, turning philosophical at one point. The assistants argue about which one is a human and which is a computer, and one even claims to be God. If AI systems can talk to and understand one another, that poses a terrifying question for humans: What if AI starts teaming up?
Conner Forrest is a Senior Editor for TechRepublic. He covers enterprise technology and is interested in the convergence of tech and culture.
Conner Forrest has nothing to disclose. He doesn't hold investments in the technology companies he covers.
Among the many ethical concerns posed by robots and the AI systems that power them is the idea that humans could love, or at least copulate with, a robot companion. Companies are already trying to make sex robots a reality, and opponents are campaigning fervently against it.
There has been much controversy around civilian use of drones, and even more around military use. The scary issue, however, isn't that people are piloting these machines, but that the machines can pilot themselves. The US Navy has even given ground transport vehicles the ability to autonomously identify a target before carrying out a mission. Imagine if a machine decided who is a friend and who is an enemy.
What if AI ran the judicial system? Discussions have started about placing AI in the courtroom to determine judicial sentences. While such AI is intended to eliminate bias, there is a chance of human bias infiltrating the AI by way of its human creators. We would then be placing people's lives in the hands of biased AI.
There have already been multiple instances of self-driving cars going awry, and some of the mistakes have turned deadly. One example is the self-driving car that hit and killed a pedestrian in March. AI operating heavy machinery can have deadly consequences, making the future of driverless cars worrisome.
One of the scariest potential uses of AI and robotics is the development of a robot soldier. Although many have moved to ban the use of so-called killer robots, the fact that the technology could soon power those types of robots is upsetting, to say the least.