Wednesday, October 17, 2018

Something to Know - (Part 7 Atlantic article on "Is Democracy Dying")

Stuart Carlson Comic Strip for October 17, 2018

One wonders just how long Trump's insistence on vouching for the truth and sincerity of the thugs of this world can go on. I mean, 15 Saudi Arabian hit men go into a room and ostensibly interrogate Jamal Khashoggi, and it "goes wrong" and he gets cut up into little pieces. Trumpo cannot con the world into believing him. Something has got to give here. Keep an eye on the news.

(continuing with the chapter "Why Technology Favors Tyranny")

3. THE RISE OF DIGITAL DICTATORSHIPS

As many people lose their economic value, they might also come to lose their political power. The same technologies that might make billions of people economically irrelevant might also make them easier to monitor and control.
AI frightens many people because they don't trust it to remain obedient. Science fiction makes much of the possibility that computers or robots will develop consciousness—and shortly thereafter will try to kill all humans. But there is no particular reason to believe that AI will develop consciousness as it becomes more intelligent. We should instead fear AI because it will probably always obey its human masters, and never rebel. AI is a tool and a weapon unlike any other that human beings have developed; it will almost certainly allow the already powerful to consolidate their power further.
Consider surveillance. Numerous countries around the world, including several democracies, are busy building unprecedented systems of surveillance. For example, Israel is a leader in the field of surveillance technology, and has created in the occupied West Bank a working prototype for a total-surveillance regime. Already today whenever Palestinians make a phone call, post something on Facebook, or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones, or spy software. Algorithms analyze the gathered data, helping the Israeli security forces pinpoint and neutralize what they consider to be potential threats. The Palestinians may administer some towns and villages in the West Bank, but the Israelis command the sky, the airwaves, and cyberspace. It therefore takes surprisingly few Israeli soldiers to effectively control the roughly 2.5 million Palestinians who live in the West Bank.
In one incident in October 2017, a Palestinian laborer posted to his private Facebook account a picture of himself in his workplace, alongside a bulldozer. Adjacent to the image he wrote, "Good morning!" A Facebook translation algorithm made a small error when transliterating the Arabic letters. Instead of Ysabechhum (which means "Good morning"), the algorithm identified the letters as Ydbachhum (which means "Hurt them"). Suspecting that the man might be a terrorist intending to use a bulldozer to run people over, Israeli security forces swiftly arrested him. They released him after they realized that the algorithm had made a mistake. Even so, the offending Facebook post was taken down—you can never be too careful. What Palestinians are experiencing today in the West Bank may be just a primitive preview of what billions of people will eventually experience all over the planet.
Imagine, for instance, that the current regime in North Korea gained a more advanced version of this sort of technology in the future. North Koreans might be required to wear a biometric bracelet that monitors everything they do and say, as well as their blood pressure and brain activity. Using the growing understanding of the human brain and drawing on the immense powers of machine learning, the North Korean government might eventually be able to gauge what each and every citizen is thinking at each and every moment. If a North Korean looked at a picture of Kim Jong Un and the biometric sensors picked up telltale signs of anger (higher blood pressure, increased activity in the amygdala), that person could be in the gulag the next day.
And yet such hard-edged tactics may not prove necessary, at least much of the time. A facade of free choice and free voting may remain in place in some countries, even as the public exerts less and less actual control. To be sure, attempts to manipulate voters' feelings are not new. But once somebody (whether in San Francisco or Beijing or Moscow) gains the technological ability to manipulate the human heart—reliably, cheaply, and at scale—democratic politics will mutate into an emotional puppet show. 
We are unlikely to face a rebellion of sentient machines in the coming decades, but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability, at the behest of a human elite, to try to sell us something—be it a car, a politician, or an entire ideology. The bots might identify our deepest fears, hatreds, and cravings and use them against us. We have already been given a foretaste of this in recent elections and referendums across the world, when hackers learned how to manipulate individual voters by analyzing data about them and exploiting their prejudices. While science-fiction thrillers are drawn to dramatic apocalypses of fire and smoke, in reality we may be facing a banal apocalypse by clicking.
THE BIGGEST AND MOST FRIGHTENING impact of the AI revolution might be on the relative efficiency of democracies and dictatorships. Historically, autocracies have faced crippling handicaps in regard to innovation and economic growth. In the late 20th century, democracies usually outperformed dictatorships, because they were far better at processing information. We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all available information fast enough and make the right decisions. This is one reason the Soviet Union made far worse decisions than the United States, and why the Soviet economy lagged far behind the American economy.
However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you'll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. An authoritarian government that orders all its citizens to have their DNA sequenced and to share their medical data with some central authority would gain an immense advantage in genetics and medical research over societies in which medical data are strictly private. The main handicap of authoritarian regimes in the 20th century—the desire to concentrate all information and power in one place—may become their decisive advantage in the 21st century.
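Harari's data-scale point can be seen in miniature with a toy experiment. The Python sketch below is not from the article; the synthetic dataset, the sample sizes, and the scikit-learn calls are all illustrative assumptions. It trains one and the same learning algorithm on progressively larger slices of data and measures its accuracy on held-out cases, which typically climbs as the training set grows.

```python
# Illustrative sketch only: synthetic data and arbitrary sizes, chosen to
# show the general pattern that more training data tends to yield a
# better model from the same algorithm.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# One large synthetic "population"; the held-out test set stands in for
# the unseen people a model would later be used to predict.
X, y = make_classification(n_samples=200_000, n_features=40,
                           n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The same algorithm, given ever more data to analyze.
for n in (1_000, 10_000, 100_000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>7,} records -> test accuracy {acc:.3f}")
```

The exact numbers are beside the point; what matters is the direction of the curve, which is why a regime able to pool everyone's data would hold a structural advantage over one that cannot.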
New technologies will continue to emerge, of course, and some of them may encourage the distribution rather than the concentration of information and power. Blockchain technology, and the use of cryptocurrencies enabled by it, is currently touted as a possible counterweight to centralized power. But blockchain technology is still in the embryonic stage, and we don't yet know whether it will indeed counterbalance the centralizing tendencies of AI. Remember that the Internet, too, was hyped in its early days as a libertarian panacea that would free people from all centralized systems—but is now poised to make centralized authority more powerful than ever.

4. THE TRANSFER OF AUTHORITY TO MACHINES

Even if some societies remain ostensibly democratic, the increasing efficiency of algorithms will still shift more and more authority from individual humans to networked machines. We might willingly give up more and more authority over our lives because we will learn from experience to trust the algorithms more than our own feelings, eventually losing our ability to make many decisions for ourselves. Just think of the way that, within a mere two decades, billions of people have come to entrust Google's search algorithm with one of the most important tasks of all: finding relevant and trustworthy information. As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, "truth" is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space. People ask Google not just to find information but also to guide them around. Self-driving cars and AI physicians would represent further erosion: While these innovations would put truckers and human doctors out of work, their larger import lies in the continuing transfer of authority and responsibility to machines.
Humans are used to thinking about life as a drama of decision making. Liberal democracy and free-market capitalism see the individual as an autonomous agent constantly making choices about the world. Works of art—be they Shakespeare plays, Jane Austen novels, or cheesy Hollywood comedies—usually revolve around the hero having to make some crucial decision. To be or not to be? To listen to my wife and kill King Duncan, or listen to my conscience and spare him? To marry Mr. Collins or Mr. Darcy? Christian and Muslim theology similarly focus on the drama of decision making, arguing that everlasting salvation depends on making the right choice.
What will happen to this view of life as we rely on AI to make ever more decisions for us? Even now we trust Netflix to recommend movies and Spotify to pick music we'll like. But why should AI's helpfulness stop there? Every year millions of college students need to decide what to study. This is a very important and difficult decision, made under pressure from parents, friends, and professors who have varying interests and opinions. It is also influenced by students' own individual fears and fantasies, which are themselves shaped by movies, novels, and advertising campaigns. Complicating matters, a given student does not really know what it takes to succeed in a given profession, and doesn't necessarily have a realistic sense of his or her own strengths and weaknesses.
It's not so hard to see how AI could one day make better decisions than we do about careers, and perhaps even about relationships. But once we begin to count on AI to decide what to study, where to work, and whom to date or even marry, human life will cease to be a drama of decision making, and our conception of life will need to change. Democratic elections and free markets might cease to make sense. So might most religions and works of art. Imagine Anna Karenina taking out her smartphone and asking Siri whether she should stay married to Karenin or elope with the dashing Count Vronsky. Or imagine your favorite Shakespeare play with all the crucial decisions made by a Google algorithm. Hamlet and Macbeth would have much more comfortable lives, but what kind of lives would those be? Do we have models for making sense of such lives? 

CAN PARLIAMENTS AND POLITICAL PARTIES overcome these challenges and forestall the darker scenarios? At the current moment this does not seem likely. Technological disruption is not even a leading item on the political agenda. During the 2016 U.S. presidential race, the main reference to disruptive technology concerned Hillary Clinton's email debacle, and despite all the talk about job loss, neither candidate directly addressed the potential impact of automation. Donald Trump warned voters that Mexicans would take their jobs, and that the U.S. should therefore build a wall on its southern border. He never warned voters that algorithms would take their jobs, nor did he suggest building a firewall around California.
So what should we do?
For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated. If we invest too much in AI and too little in developing the human mind, the very sophisticated artificial intelligence of computers might serve only to empower the natural stupidity of humans, and to nurture our worst (but also, perhaps, most powerful) impulses, among them greed and hatred. To avoid such an outcome, for every dollar and every minute we invest in improving AI, we would be wise to invest a dollar and a minute in exploring and developing human consciousness.
More practically, and more immediately, if we want to prevent the concentration of all wealth and power in the hands of a small elite, we must regulate the ownership of data. In ancient times, land was the most important asset, so politics was a struggle to control land. In the modern era, machines and factories became more important than land, so political struggles focused on controlling these vital means of production. In the 21st century, data will eclipse both land and machinery as the most important asset, so politics will be a struggle to control data's flow. 
Unfortunately, we don't have much experience in regulating the ownership of data, which is inherently a far more difficult task than regulating land or machines. Data are everywhere and nowhere at the same time, they can move at the speed of light, and you can create as many copies of them as you want. Do the data collected about my DNA, my brain, and my life belong to me, or to the government, or to a corporation, or to the human collective?
The race to accumulate data is already on, and is currently headed by giants such as Google and Facebook and, in China, Baidu and Tencent. So far, many of these companies have acted as "attention merchants"—they capture our attention by providing us with free information, services, and entertainment, and then they resell our attention to advertisers. Yet their true business isn't merely selling ads. Rather, by capturing our attention they manage to accumulate immense amounts of data about us, which are worth more than any advertising revenue. We aren't their customers—we are their product.
Ordinary people will find it very difficult to resist this process. At present, many of us are happy to give away our most valuable asset—our personal data—in exchange for free email services and funny cat videos. But if, later on, ordinary people decide to try to block the flow of data, they are likely to have trouble doing so, especially as they may have come to rely on the network to help them make decisions, and even for their health and physical survival.
Nationalization of data by governments could offer one solution; it would certainly curb the power of big corporations. But history suggests that we are not necessarily better off in the hands of overmighty governments. So we had better call upon our scientists, our philosophers, our lawyers, and even our poets to turn their attention to this big question: How do you regulate the ownership of data?
Currently, humans risk becoming similar to domesticated animals. We have bred docile cows that produce enormous amounts of milk but are otherwise far inferior to their wild ancestors. They are less agile, less curious, and less resourceful. We are now creating tame humans who produce enormous amounts of data and function as efficient chips in a huge data-processing mechanism, but they hardly maximize their human potential. If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world.
If you find these prospects alarming—if you dislike the idea of living in a digital dictatorship or some similarly degraded form of society—then the most important contribution you can make is to find ways to prevent too much data from being concentrated in too few hands, and also find ways to keep distributed data processing more efficient than centralized data processing. These will not be easy tasks. But achieving them may be the best safeguard of democracy. #
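Harari does not say what "distributed data processing" would look like in practice. One frequently discussed candidate, offered here purely as an illustration and not as his proposal, is federated learning: each participant keeps its raw data to itself and shares only model parameters, which are then averaged. The Python sketch below is a toy version with made-up numbers and a simple linear model.

```python
# Toy illustration (not from the article) of federated averaging: raw
# data never leaves each participant; only model parameters are pooled.
# All sizes, rates, and round counts here are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)          # hidden relationship to be learned

def local_data(n=200):
    """Private data held by one participant; never pooled centrally."""
    X = rng.normal(size=(n, 5))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

participants = [local_data() for _ in range(10)]
w = np.zeros(5)                      # a shared model, but no shared data

for _ in range(50):
    local_models = []
    for X, y in participants:
        grad = X.T @ (X @ w - y) / len(y)    # gradient computed locally
        local_models.append(w - 0.1 * grad)  # one local update step
    w = np.mean(local_models, axis=0)        # only parameters are averaged

print("distance from hidden truth:", np.linalg.norm(w - true_w))
```

Whether schemes like this can ever match the raw efficiency of a single all-seeing database is exactly the open question the paragraph above poses.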
------------
Yuval Noah Harari is a historian and philosopher at the Hebrew University of Jerusalem. This article has been adapted from his new book, 21 Lessons for the 21st Century.



-- 
****
Juan
Social progress can be measured by the social position of the female sex.
- Karl Marx (so, by what measure do the GOP and Trump judge the progress so far?)
