That is the big worry of Elon Musk, founder of SpaceX, co-founder of Tesla and SolarCity, and all-around genius. Whatever he says should not be easily dismissed.
At Vox Media's Code Conference in the Los Angeles area in June, he expressed his concerns about artificial intelligence. He is primarily concerned about “one large company,” which he did not name, that could come to control artificial intelligence and thus dominate all people. Many guessed that he was referring to Google. It sounds like something out of science fiction, like The Terminator, but it is not that far-fetched.
Google has demonstrated that, in spite of good intentions at the beginning, a huge corporate monster soon takes on a life of its own. Google's early motto, “Don't be evil,” was violated almost as soon as it was adopted. For example, the company allows devious individuals to post irresponsibly malicious items or websites, which Google's search picks up without questioning their accuracy or intent. This does irreparable harm, and it is “evil.” The individuals running such malicious sites make a nice income from the ads posted and the traffic generated. It would be easy for Google to remove such sites from its search results.
Musk is so concerned that last year he founded OpenAI, a non-profit organization that encourages other technology specialists to counter such potential dangers.
“I don’t know a lot of people who love the idea of living under a despot,” he said, positing a future in which an artificial intelligence — or the people controlling it — outstrip our capabilities by orders of magnitude.
Elon explains the mission of his AI effort:
“If AI power is broadly distributed to the degree that we can link AI power to each individual’s will — you would have your AI agent, everybody would have their AI agent — then if somebody did try to do something really terrible, then the collective will of others could overcome that bad actor.”
If anyone dismisses these thoughts as science fiction, remember that an IBM computer has already beaten a world chess champion. Companies are building robots right now that can understand voice commands, mix you a drink, and walk through mountains, jungles, and deep snow.
The world's knowledge base is growing at an incredible pace. About 15 years ago, I remember one highly intelligent futurist saying that knowledge was doubling every three years, thanks to rapid dissemination over the internet. The pace is accelerating geometrically.
David Schilling writes:
Buckminster Fuller created the “Knowledge Doubling Curve”; he noticed that until 1900 human knowledge doubled approximately every century. By the end of World War II, knowledge was doubling every 25 years. Today things are not as simple as different types of knowledge have different rates of growth. For example, nanotechnology knowledge is doubling every two years and clinical knowledge every 18 months. But on average human knowledge is doubling every 13 months. According to IBM, the build out of the “internet of things” will lead to the doubling of knowledge every 12 hours.
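Those doubling times are just compound growth, and it is worth seeing how quickly they diverge. A minimal sketch in Python — the doubling times are the ones quoted above; the ten-year horizon is an arbitrary choice for illustration:

```python
# Compound growth implied by a fixed doubling time: after `elapsed` time,
# a quantity has multiplied by 2 ** (elapsed / doubling_time).

def growth_factor(elapsed, doubling_time):
    """How many times over a quantity has multiplied after `elapsed` time."""
    return 2 ** (elapsed / doubling_time)

# Doubling times quoted above, expressed in months.
doubling_times = {
    "pre-1900 (every ~100 years)": 1200,
    "post-WWII (every 25 years)": 300,
    "clinical knowledge (18 months)": 18,
    "average today (13 months)": 13,
}

# Growth over an (arbitrary) 10-year horizon under each regime.
for label, months in doubling_times.items():
    print(f"{label}: x{growth_factor(120, months):,.1f} in 10 years")
```

At a 13-month doubling time, knowledge multiplies several hundred times over in a decade, versus barely doubling under the post-war rate — which is the gap the rest of this piece worries about.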
Where are the university professors who will teach based on the latest knowledge? It certainly won’t be the ones with tenure.
The more rapid the growth, the more people will be left behind. In the end, there will be a group of people who, through their control of AI, have the ability to know and control everything. The rest of us will have no choice but to go along.
Science fiction is coming true at an accelerating pace while the majority occupies itself with iPods, 24/7 coverage of elections, baseball games, and other diversions.
This means that an increasing number of people won't have the ability, knowledge, or education to get jobs. Governments are already seriously considering a “universal basic income,” under which every person receives a specific amount of money each month from the government. Switzerland is now holding a referendum to gauge the public's opinion on it.
In the early 1980s, I decided to look into AI for the purpose of predicting the investment markets. I thought that recognition of chart and volume patterns would be ideal for the many computations a computer can do so much better than we mortals. At first, I met with two computer scientists working on a military project involving pattern recognition for detecting and identifying enemy submarines. The conversations were very interesting, but I decided not to go ahead.
However, at the time, there was one firm in the US that had built its business around LISP, the programming language long favored for AI work. I met one of its 20 or so founders when that company started having financial difficulties; the AI market was not developing fast enough. This person traded the markets and found my idea interesting. We bought a powerful computer and he started working for us on an investment program. He was very intelligent, but it did not result in anything usable. I knew we were too early.
The point is that even 30 years ago, companies were working on AI. The limiting factor was the power of the computers at that time. That has changed.
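The chart-pattern idea described above can be illustrated with a toy sketch. This is not the system we worked on; the “V-bottom” template, the price series, and the similarity measure are all invented for this example:

```python
# Toy illustration of chart-pattern matching: rescale a recent price
# window and a template shape to the 0..1 range, then score how far
# apart the two shapes are. All numbers here are invented.

def normalize(xs):
    """Rescale a sequence to the 0..1 range (shape only, not level)."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

def similarity(window, template):
    """Mean absolute difference between two normalized shapes (0 = identical)."""
    a, b = normalize(window), normalize(template)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# A "V-bottom" template and a price series that roughly matches it.
v_bottom = [5, 3, 1, 3, 5]
prices = [50, 31, 12, 33, 52]

score = similarity(prices, v_bottom)
print(f"match score: {score:.3f}")  # prints 0.020; closer to 0 is a closer match
```

A real system would slide such templates across thousands of price windows and volumes at once — exactly the kind of brute computation a machine does better than a person reading charts.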
Most people are not aware that the big trading outfits increasingly use algorithms programmed to make trades automatically, triggered by news items coming across the wires or by technical market factors. These computers respond in microseconds, with no human intervention. That's why we have “flash crashes” from time to time, when a number of outfits run similar algorithms. Eventually we may see a crash of 2,000 Dow points. The regulators aren't equipped for this either.
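A caricature of such a news-driven algorithm fits in a few lines. This is a deliberately simplistic sketch, not any firm's actual system; the keyword lists and headlines are invented for illustration:

```python
# Toy rule-based news-trading algorithm, purely illustrative.
# Real systems parse structured machine-readable feeds and react in
# microseconds; these keyword lists are invented for this sketch.

BEARISH = {"misses", "downgrade", "bankruptcy", "recall"}
BULLISH = {"beats", "upgrade", "buyback", "record"}

def react_to_headline(headline):
    """Return an order ("SELL", "BUY", or None) for one headline."""
    words = headline.lower().split()
    if any(w in BEARISH for w in words):
        return "SELL"
    if any(w in BULLISH for w in words):
        return "BUY"
    return None  # no signal: do nothing

for h in ["Acme misses earnings estimates",
          "Acme beats on revenue",
          "Acme schedules annual meeting"]:
    print(h, "->", react_to_headline(h))
```

The danger falls out directly from the sketch: if many outfits run near-identical rules, the same headline triggers the same “SELL” everywhere at once, and prices gap before any human can intervene.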
It's a “brave new world,” and there is not much we, the average people, can do about it, whether we like it or not. The best remedy for younger people is to be informed and to learn as much as possible about useful things, instead of spending time on Paul Revere, philosophy, political correctness, etc.
Think and plan ahead, and make decisions about your future. Just “hoping” for the best won’t work. Be prepared. And hope that Elon Musk’s efforts to counter AI taking control of us will be successful.