Google Searching for Moral High Ground in the Wrong Places

Jun 21, 2018
COMMENTARY BY
Klon Kitchen

Former Director, The Heritage Foundation's Center for Technology Policy

Key Takeaways

Google has a problem: The company’s incredibly gifted and creative employees are also profoundly ignorant—and in a way that threatens our national security.

The United States government cannot secure its people or its interests without direct support from our private sector, particularly the technology industry.

Google's efforts to avoid contributing to wars may very well make future wars more likely and costly.

Google has a problem: The company’s incredibly gifted and creative employees are also profoundly ignorant—and in a way that threatens our national security.

Recently, Google Cloud CEO Diane Greene said the company is withdrawing from the Department of Defense’s Project Maven, a multifaceted effort to apply artificial intelligence to the Pentagon’s huge information stores, especially imagery data. The decision comes after more than 4,000 of the tech giant’s employees signed a petition protesting the company’s participation in Maven. At least 12 engineers resigned after the company’s involvement was revealed in March.

“We believe that Google should not be in the business of war,” the petitioners declared, demanding a policy “that neither Google nor its contractors will ever build warfare technology.” The petition also suggested that working with the Defense Department violates the corporate motto: “Don’t be evil.”

The company’s leadership heard the complaint loud and clear: CEO Sundar Pichai issued a set of “principles” to guide Google’s AI development going forward, including prohibitions on developing AI for “weapons,” for “surveillance,” or for uses that threaten “human rights.”

It’s easy to appreciate the humanitarian instincts behind these concerns. That doesn’t make the corresponding ignorance and naivete any less dangerous.

In the contemporary security environment, more and more of the burden for assuring national security falls to the private sector. Put simply: The United States government cannot secure its people or its interests without direct support from our private sector, particularly the technology industry. It’s not a one-way street. The private sector cannot thrive absent the peace and security provided by government.

The growing and evolving threats posed by hostile states, by non-state actors such as terrorists and hacking syndicates, by so-called “gray zone” conflicts like those in Africa and in the Baltics, and a host of other challenges all demand awareness, insight, and capabilities that can be realized only by effectively integrating human and machine capabilities. Many of these capabilities are being developed in the private sector—under the protective economic, social, and political umbrella provided by our government.

Effective cooperation is now essential for both the private and public sectors. But this cooperation should not be coerced—it should be voluntarily pursued, which brings us to the prohibitive ignorance of Google’s protests and “principles.”

Take, for example, its insistence that “Google should not be in the business of war.” War is, and always has been, a fact of life. Recognizing this truth is not the same as wanting war, but denying it does nothing to prevent war. In fact, by minimizing the proven danger of man’s thirst for power, denial makes war more likely.

Certainly efforts like Project Maven aim to improve our military’s lethality. And we need not apologize for that. If wars are inevitable, we ought to win them quickly and decisively. But those same efforts will also improve the Pentagon’s ability to reduce the combat-related deaths of innocents, provide disaster support, prevent terrorist attacks, deter hostile foreign countries, and complete the humanitarian missions with which we frequently task our armed forces. Surely these advances are not evil.

Some things are evil, however, such as developing and selling artificial intelligence to authoritarian regimes that will use the technology to control and oppress their people. Yet, strangely, I haven’t heard of any employee protests or resignations over Google’s expanding AI research in China. Do the company’s engineers not understand that the advanced algorithms and mass data-processing capabilities being developed at the new Google Artificial Intelligence Center in China could, and likely will, be used to monitor and oppress Chinese citizens and even political dissidents around the world? Doesn’t this violate Google’s AI “principles”?

Can Google prove that this is not or will not be the case? And is Google so infatuated with gaining greater access to the growing Chinese market that it is willing to hypocritically constrain its cooperation with the U.S. government while turning a blind eye to the totalitarian sins of Beijing?

Perhaps technology leaders are on the horns of a dilemma: they understand many of these global variables but are held hostage by an internal constituency of highly sought technical experts who can easily secure alternative employment if they become aggrieved by this or that company policy. If that is the case, these leaders have a vested interest in educating and informing their employees, giving them a fuller understanding of the world as it exists, of the good their efforts can produce, and of the evil their absence can enable.

Google and its employees may have deluded themselves into believing that their fates are independent from those of the United States. But this is an illusion. Their efforts to avoid contributing to wars may very well make future wars more likely and costly.

This piece originally appeared in The Weekly Standard.