A recent Facebook experiment in artificial intelligence has garnered a lot of attention across the internet.
Stories vary considerably, ranging from straightforward reporting of a technological accomplishment, to humor at what had happened, to outright fear about the potential of computers going haywire and turning against their human owners, like in some science fiction story.
The truth of the matter is somewhat more benign.
This was an experiment, and the two computers were chatting about nothing more nefarious than how to divvy up a number of items. The experiment was about how well the robots (not physical humanoid robots, but artificial intelligence computer programs) could negotiate with one another to come to an agreement.
As such, it was no more serious than people negotiating over price at a garage sale.
In the process of the negotiations, the two robots actually developed their own language, or at least something that hints at language.
Just as two friends or a married couple might have their own code words that they use in conversation with each other, words that mean one thing to them but something else to everyone else, the two robots started using their own code words: words that meant something to the two robots, but nothing at all to the programmers who were running the experiment.
At this point the experiment ended, not out of fear, they say, but out of a realization that they had not set the parameters for the experiment correctly.
The programmers had allowed for the creation of their own language, but failed to place the limitation that the language had to be intelligible to humans. As the experiment was really about developing the ability for a robot to converse with a human, it was stopped.
Apparently, the next generation of this experiment is going to include a modification to the programming. While continuing to allow the robots to develop their own language, such changes will be limited to things that are understandable by the human operators.
Whether that will be because the computers supply the definitions as part of the process, or because they are limited to words and syntax that the human operators can understand, is probably still up in the air.
Were I to be running the experiments, I would probably try both, just to see the difference in the results.
While there was nothing really scary about this experiment, maybe there’s a little bit of reality in the fear caused by the misunderstanding of this test and its results. Not because of the danger of computers negotiating trades, but the potential of computers making decisions that their human operators don’t understand.
While artificial intelligence mimics human thought, it can only do that as well as the programmers are able to develop their programs.
Thinking machines would, by their very nature, be amoral, not having any morals whatsoever. That may not be dangerous in the short-term, but there really is no way of telling where it might lead. We’ve all seen the atrocities that humans without morals are able to perpetrate on one another; so it’s only logical that computers without morals will eventually be able to do the same.
Thinking computers and robots need some system of checks and balances. That’s why science fiction writer Isaac Asimov developed “the three laws of robotics,” placing limitations on the decisions that robots can make. These laws have actually been so well defined that other writers have used them too.
Ultimately, any use of artificial intelligence needs human oversight. The decisions that we make, as humans, take into account many factors that we don’t even realize.
Our decision making process is extremely complex. And while it may be imperfect, by and large it protects human life and works to generate some benefit to at least some group of people.
Artificial Intelligence is Not New
The whole idea of artificial intelligence (AI) is not really new. Science fiction television shows as far back as the 1950s featured humanoid robots that could think for themselves and communicate with their human counterparts, even offering advice. In serious scientific research, we find the first discussions of artificial intelligence going back as far as 1947.
Alan Turing, the British mathematician, was the first to suggest that AI would best be researched by programming computers, rather than by building machines. Considering that the very first computer was built in 1937, and that ENIAC, widely considered to be the earliest electronic general-purpose computer, was finished in 1946, Turing was obviously ahead of his time.
Research into AI was carried on in college laboratories and research departments for the next couple of decades. But it wasn’t until 1980 and the birth of the idea of expert systems (a form of AI) that artificial intelligence took off. Since then, the growth of AI has been slow but consistent.
I distinctly remember some of the work that was going on in the 1980s, as I was an engineer during those years and so managed to keep abreast of it to some extent.
Those were exciting years for computer programmers, with the idea of developing a true AI system being seen as the holy grail of computer programming.
In more recent times, research into applying artificial intelligence, especially the decision making part of it, has progressed considerably, funded by the idea of developing totally autonomous machines that can eliminate the human operator.
Replacing Human Operators?
Much of current AI research is focused on replacing human operators for mundane tasks. One of the most successful of these is the area of self-driving vehicles. A number of companies around the world have been working on developing such systems.
The LS3 Robotic Mule, developed for our military forces, is a prime example of using AI to make autonomous vehicles. This walking vehicle is under development for the purpose of carrying loads for infantry. One mule is supposed to be able to carry the packs of a squad of infantry, freeing them of that load and making it easier for them to fight effectively.
Another excellent example is the Mercedes-Benz self-driving truck. Videos of the truck, which is in road testing, show the driver turning control of the driving over to the truck and literally moving his seat back to relax with a tablet while the truck drives itself. Mercedes is planning to offer the truck, which has a futuristic look, for sale in 2025.
But over-the-road trucks aren’t the only place where we can expect AI to take over the job of driving vehicles. Uber has placed an order for 100,000 self-driving cars, to be delivered as soon as the technology is proven. Volvo is hard at work to fulfill that order, and Uber has been testing self-driving Volvo XC90 SUVs on the road in Arizona, picking up passengers and delivering them to their destinations.
What Does this Mean to You and Me?
While these breakthroughs in technology might be exciting to watch, they tell a potentially grim story for humanity. This story is especially evident in Uber’s order for the 100,000 self-driving cars.
Part of what has made Uber so popular is that it has given 160,000 people in the United States and somewhere between 500,000 and 1,000,000 drivers worldwide an opportunity to make extra money, using their personal vehicle to provide rides to others. Creating that many jobs in an unconventional way attracts a lot of attention, both by job seekers and the public in general.
But what’s going to happen to those people when Uber gets serious about using self-driving cars? Or what about the 3.5 million truck drivers in the US? Where will they work?
Granted, the replacement of all those drivers with autonomous vehicles will take a number of years, but it appears that the handwriting is already on the wall.
Advances in technology tend to displace workers, and the advances in AI might be the biggest job displacer of all time. The tech jobs that these advances create don’t come close to the numbers of jobs lost; if they did, the advance wouldn’t go forward.
Besides, the workers who are displaced don’t have the necessary skills for those new jobs. They have to be totally retrained into a new field or they become just one more statistic, added to the rolls of the unemployed.
The loss of manufacturing jobs here in the US has received a lot of attention. China is almost universally blamed for taking those jobs away from us. But the loss of manufacturing jobs to automation actually outstrips those lost to China. We are losing our jobs to robotics.
This is simple business economics. While automating requires a huge investment in equipment, it’s a one-time investment. That means that the lifetime cost of that robot is much less than the equivalent human operator.
Skilled welders, for example, earn about $25 per hour in manufacturing plants, while the cost of a robot works out to about $8 per hour. With increased competition and consumer demands for lower prices, companies are forced to automate.
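To see how a one-time purchase can work out to so few dollars per hour, it helps to spread the robot’s price over its working life. The sketch below is only a back-of-the-envelope illustration; the purchase price, lifespan, maintenance cost, and operating hours are my own assumed round numbers, not figures from any manufacturer. Only the $25-per-hour welder wage comes from the discussion above.

```python
# Back-of-the-envelope cost comparison: skilled welder vs. welding robot.
# All robot figures below are hypothetical assumptions for illustration.

WELDER_WAGE_PER_HOUR = 25.0       # figure cited above for a skilled welder

robot_price = 200_000.0           # assumed one-time purchase and installation
robot_lifespan_years = 10         # assumed working life
hours_per_year = 4_000            # assumed: a robot can run two full shifts
maintenance_per_year = 12_000.0   # assumed upkeep cost

total_hours = robot_lifespan_years * hours_per_year
total_cost = robot_price + maintenance_per_year * robot_lifespan_years
robot_cost_per_hour = total_cost / total_hours

print(f"Robot:  ${robot_cost_per_hour:.2f}/hour")
print(f"Welder: ${WELDER_WAGE_PER_HOUR:.2f}/hour")
# With these assumptions the robot works out to $8.00/hour,
# roughly a third of the welder's wage.
```

The exact numbers matter far less than the shape of the math: because the big cost is paid once and then divided across tens of thousands of working hours, the per-hour cost of the machine ends up a fraction of a human wage.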
Is this a real threat? Yes, most definitely. According to one tech insider, a former employee of Facebook, within 30 years half of humanity could be unemployed due to artificial intelligence and automation. This is the danger we face from AI: robots taking our jobs, not turning against us to annihilate us.
To put that in perspective, unemployment during the height of the Great Depression reached a high of 25%. Yet this technologist is talking about double that number. We just lived through a recession which peaked at 10.1% unemployment, and even then 10 million households were displaced. How could we even begin to handle a 50% unemployment rate?
Supporting this idea of a technology apocalypse is the news that many Silicon Valley insiders are preparing for a major breakdown in society. Whether it is through buying a house in New Zealand or building a private retreat on an island, many of the wealthiest technologists in the country are preparing a place to run to when society collapses.
Considering how much technology is driving America today, perhaps these insiders know something that the rest of us don’t. There’s truly something to be concerned about, if the people who are planning the future don’t want to live in the future they are creating.
Society is not ready for this. We don’t have the systems in place to take care of that many unemployed people. Our country’s safety net would be torn asunder, simply because there would be as many people needing assistance as there would be people working.
With only a 30-year timeframe before such an apocalypse were to occur, it is doubtful that we will be able to develop the means of taking care of all these people. The problem is so much larger than anything we’ve ever seen before that a simple expansion of existing systems wouldn’t work.
Rather, we would need to reinvent society as a whole, coming up with a totally new way of meeting people’s needs.
Perhaps this is behind Silicon Valley’s push for a universal basic income. These people, who are shaping the future even now, are the only ones who understand what is coming. They have a vision for a new world, but it’s one that we are truly unprepared for.
Surviving the Technology Apocalypse
A fifty percent unemployment rate definitely qualifies as an apocalypse. I’ve written about a financial collapse before, something on the order of the Great Depression; but as we’ve already discussed, that’s nowhere near as bad as this.
Is this risk real? I honestly don’t know. All I know is that the rate of technological advancement that is happening in the world today makes it possible.
We have already seen millions of people lose their jobs to technology. What is there to stop millions more or even tens of millions more from losing their jobs?
That’s a risk we just can’t afford to take.
Over and over again, I see scenarios proposed which would cause hungry gangs of people to roam the streets, attacking whoever they could in order to get food. Ultimately, this is the reason why so many preppers have guns and ammunition. The downside risk of such a situation is grave enough to warrant investing considerably in being able to protect our homes, our families and our food supplies.
This could very well be such a situation; a much more realistic one than others I’ve heard. Without the ability to take care of all those displaced workers, they will become desperate. Desperate people, it is said, do desperate things.
So how do we prepare for such a potential? I think there are two possible ways, both of which would probably work. For simplicity’s sake, I’ll call them the bug in and bug out options.
Bug In Option
While as much as 50% of the workforce could potentially lose their jobs to automation, there will still be 50% who are employed. So the trick is to make sure that you are part of that 50%. How? By having a job that can’t be fulfilled by a machine; one that requires a living, thinking human being.
There are many jobs which can be accomplished by machines. As we’ve already discussed, manufacturing jobs are being replaced by automation all the time. But machines can’t design the products, program the robots and sell the products. Machines can’t write the code that makes computers run; nor can they provide medical services to the people who are fulfilling those more technical jobs. For that matter, they can’t teach the people who will fill them either.
There are and will always be jobs that require thought and imagination. So the key to job security in this scenario is to get the necessary education and training for those sorts of jobs. People with valuable degrees, meaning degrees for which there is actual work, are and will continue to be in demand.
It’s the people who don’t have marketable skills, whether educated or not, who will lose their jobs.
Basically what this means is that the people in the lower end of the socioeconomic scale are the ones who are most likely to lose their jobs to automation. That makes sense, because it is easier to design machines and develop the software to replace those jobs.
Just look at what’s happening to the fast food industry in cities and states that are pushing for the $15 per hour minimum wage. Self-service kiosks, where customers order their food from a touchscreen, are replacing cashiers. Kitchens are becoming more automated, with machines doing the cooking and only a skeleton crew of workers managing the machines. Low wage earners are losing their jobs.
This is the trend we can expect to see. To survive it, we need to make sure that we are not overtaken by it.
In addition, we can probably expect to see an increase in crime as more and more people lose their jobs. Our society has many people who walk close to the edge between being law-abiding citizens and criminals.
While they are productive members of society today, the loss of their jobs could very well give those people sufficient reason, in their own minds, to leave the path of righteousness and turn to a life of crime.
Should that happen, we will need to be ready to protect ourselves. Our homes will need to be fortified and we will need to be armed. You know what to do, I’ve written about it before and so have others. There’s no reason to repeat it here.
Bug Out Option
Our second option is to quite literally head for the hills, find a remote location and homestead there. That doesn’t mean waiting for the technology apocalypse to come and then bugging out, but rather starting to prepare today. Like the technology gurus of Silicon Valley, we need a survival retreat where we can go when those hungry gangs start roaming the streets.
This will require time and investment. Unless there is a complete breakdown of the government, which I doubt is going to happen, you can’t just go and build a log cabin on federal land somewhere. Rather, you’re going to need to buy a piece of land, build some sort of home and prepare to become totally self-sufficient in every way. That’s why I used the word homestead.
Such a place will need to be in a remote location, so as to avoid the risk of being attacked by the aforementioned gangs. There is safety in numbers and one of the risks associated with living outside of town, on your own, is that there would be nobody around to help you, should you come under attack.
So you really want to make sure that your survival homestead is in a place where those gangs aren’t going to find you.
Even so, you’ll want to prepare extensive defenses to use, just in case. If you can find a place to bug out to, you have to assume that others can find it too.
Failure to prepare for that eventuality could end up being the last mistake you ever make.
This article has been written by Bill White for Survivopedia.