Exploring AI / In Chapter One, “What is AI?” the authors embrace the definition offered by Indian engineers Shukla Shubhendu and Jaiswal Vijay: “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgement, and intention.” This definition differentiates AI from mechanical devices or traditional computer software: AI-based computer systems learn from data, text, or images and make intentional, intelligent decisions based on that analysis.
AI operating today is considered “artificial narrow intelligence” (ANI), defined as software that supports specific processes governed by well-defined rules (and that, incidentally, has no genuine “intelligence” or “common sense”). The next phase of AI, “artificial general intelligence” (AGI), would consist of software with cognitive abilities similar to humans’ and a sense of consciousness; so far, it remains technologically aspirational. Machine learning (ML), an important part of AI, consists of algorithms that can classify and learn from data, pictures, text, or objects without relying on rules-based programming. AI also depends on data analytics, the application of statistical techniques to uncover trends or patterns in large data sets. For AI to make informed decisions, effective ML and data analytics are prerequisites.
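The contrast between rules-based software and ML can be made concrete with a minimal sketch. The Python example below (not drawn from the book; the data and feature names are invented) trains a classifier that infers its own decision rule from labeled examples rather than following hand-coded if/then logic:

    # Illustrative only: the model "learns" its decision rule from labeled
    # examples instead of hand-written if/then rules. Data are invented.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training examples: [hours_online, purchases_last_month],
    # labeled 1 if the customer later upgraded to a paid plan, 0 otherwise.
    X = [[2, 0], [10, 3], [1, 0], [12, 5], [8, 2], [0, 0]]
    y = [0, 1, 0, 1, 1, 0]

    model = LogisticRegression().fit(X, y)   # learning step: fit parameters to data
    print(model.predict([[9, 4]]))           # classify a new, unseen customer
    print(model.predict_proba([[9, 4]]))     # probability estimate behind the decision

Because the rule comes entirely from the data, unrepresentative or incomplete data can yield biased decisions, a risk the authors return to throughout the book.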
AI applied / In Chapter Two, “Healthcare,” West and Allen note AI opportunities that already exist in assisting physician diagnostics in the fields of dermatology (“skin cancer”), ophthalmology (“diabetic retinopathy”), radiology (“detecting breast cancer”), and oncology (“offering personalized treatment of cancer at the molecular level”). Moreover, ML (specifically, “natural language processing”) is being used to analyze text-based medical records to anticipate patient risks. In new drug clinical trials, the application of AI and ML can reduce the time necessary to bring new drugs to market. In addition, AI can more efficiently scan research studies, molecular databases, and conference proceedings to identify possible drug candidates. AI and ML can also combat health care fraud, abuse, and waste (estimated by the U.S. Government Accountability Office at $75 billion annually) by identifying suspicious treatment plans or lab test usage. Yet AI problems in health care are pervasive; they include unrepresentative or incomplete data and operational uses of AI that promote biases based on race, gender, age, income, and geography.
Chapter Three, “Education,” finds that AI assists administrative processes, augments human teaching resources, and makes it possible for policymakers to make sense of large-scale data. Moreover, AI opportunities include helping manage school enrollment decisions, personalizing instruction for individual students, deploying AI-based teaching assistants to answer basic student questions online, tracking “at-risk” students, and protecting against school violence through monitoring of AI-linked video cameras. On the other hand, AI risks in educational systems involve student privacy, bias in educational algorithms, and inequitable access to quality K–12 schools.
Chapter Four, “Transportation,” focuses on autonomous vehicles (AVs) that use AI and ML to combine data from dozens of onboard cameras and sensors, analyze this information in real time, and automatically guide the vehicles using high-definition maps. The authors argue that the benefits of AVs include improving highway safety (according to the U.S.-based Insurance Institute for Highway Safety, full deployment of AI-based vehicles would save 11,000 American lives each year), alleviating highway congestion (translating to an annual savings of $121 billion in lost human labor), reducing air pollution and carbon emissions, and improving energy usage. In the United States, the authors believe the major challenge to broad AV deployment is overcoming state governments’ idiosyncratic vehicle laws and standardizing guidelines across state boundaries. Other issues to be resolved include making a significant national investment in infrastructure to facilitate advanced AV deployment, establishing how AVs are regulated, deciding where legal liability claims reside, settling data protection, privacy, and security issues involving automotive industry safeguards, and adopting legislation against malicious behavior perpetrated against AVs.
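The fusion of redundant, imperfect sensor readings on which this pipeline depends can be illustrated in greatly simplified form. The Python sketch below (sensor names and noise figures are invented; this is a generic textbook technique, not any vendor’s method) combines noisy distance estimates of a single obstacle by weighting each reading by the inverse of its variance:

    # Illustrative only: fuse noisy distance estimates of one obstacle from
    # several sensors, weighting each reading by the inverse of its variance.
    def fuse_estimates(readings):
        """readings: list of (distance_in_meters, variance) tuples."""
        weights = [1.0 / var for _, var in readings]
        fused = sum(w * d for (d, _), w in zip(readings, weights)) / sum(weights)
        return fused

    # Hypothetical readings from a camera, a radar, and a lidar unit.
    sensors = [(25.3, 4.0), (24.8, 1.0), (25.0, 0.25)]
    print(round(fuse_estimates(sensors), 2))  # the most precise sensor dominates

Production AV stacks are vastly more elaborate, but the underlying principle of cross-checking redundant sensors in real time is the one on which the safety argument rests.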
In Chapter Five, “E‑Commerce,” West and Allen explore how this economic sector has grown so dramatically in the United States and the role AI, targeted advertising, and data analytics have played in its expansion. U.S. e‑commerce grew from $28 billion in sales in 2000 to $451.9 billion in 2017, including 24% growth over the period 2015–2017. Amazon, the largest e‑commerce company in America, utilizes ML to predict which products will most likely interest customers and to make recommendations to them (recommendations estimated to drive 35% of its total annual sales). Likewise, eBay, another U.S. company, deploys AI and ML to design systems for advertisement placement, personalization, visual search, and shipping recommendations for customer-to-customer sellers.
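Amazon’s and eBay’s production systems are proprietary, but the core idea behind product recommendations can be sketched in a few lines. The Python example below (using an invented purchase matrix and a generic item-to-item similarity approach, not a description of either company’s method) scores products as related when the same users buy them:

    # Illustrative only: item-to-item recommendation over a tiny, invented
    # purchase matrix (rows = users, columns = products; 1 = purchased).
    import numpy as np

    purchases = np.array([
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 1, 1, 1],
        [0, 0, 1, 1],
    ])
    products = ["kettle", "tea", "mug", "coaster"]

    def similar_to(name):
        i = products.index(name)
        target = purchases[:, i]
        scores = []
        for j, other_name in enumerate(products):
            if j == i:
                continue
            other = purchases[:, j]
            cosine = target @ other / (np.linalg.norm(target) * np.linalg.norm(other))
            scores.append((other_name, round(float(cosine), 2)))
        return sorted(scores, key=lambda s: s[1], reverse=True)

    print(similar_to("tea"))  # products most often bought by the same users

Items bought together by the same users score as similar, which is the intuition behind “customers who bought this also bought” suggestions.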
E‑commerce challenges include expanding universal home internet access (89% of Americans presently have access) and 5G networks; addressing drone delivery and zoning obstacles; managing dynamic pricing (i.e., charging different prices based on consumer traffic volume or product demand), which often leads to charges of price gouging or of overt discrimination based on geography, income, race, age, or gender; revising labor laws and improving working conditions for independent contractors; and addressing rising cybersecurity threats and data breaches. West and Allen recommend addressing these challenges by encouraging telecommunication firms, internet providers, and satellite companies to build out their networks to underserved communities; experimenting with drone delivery systems to reduce neighborhood traffic congestion; changing rules to require new, large apartment buildings to have loading docks for delivery purposes; harmonizing tax rules for physical and digital retailers; recognizing labor unions to represent independent contractors; and enacting federal legislation to protect consumers from e‑commerce data breaches.
The authors evaluate the role of AI in “National Defense” in Chapter Six. They argue that AI will dramatically change the speed of war, both enhancing the human role in conflict and leveraging technology as never before, because technology is not only changing but changing at an accelerating rate. The ethical and human rights challenges of relinquishing human control of autonomous weapons systems in combat remain unresolved, but they are balanced by AI’s potential to improve the speed, scale, and quality of military and political decision-making, strengthening leadership capacity, general readiness, and battlefield performance. Furthermore, both China and Russia have enhanced their AI capabilities and are investing in robotics and autonomous systems with military applications, leaving the U.S. confronting increased AI-based national security risks. The authors advocate for increased financial investment in AI for national security, workforce development in STEM fields (to address a shortage of trained professionals with AI skills), general digital literacy programs, cybersecurity and infrastructure, and stronger domestic technology transfer and export controls.
Overcoming technology backlash / Techlash is a growing phenomenon. In Chapter Seven, West and Allen review several Brookings Institution opinion surveys examining American attitudes toward four emerging technologies: AI, robots, autonomous vehicles, and facial recognition software. In 2018, 14% of Americans surveyed were very positive about AI, 27% were somewhat positive, 23% were not very positive, and 36% did not know or gave no answer. When it came to Americans’ impressions of robots, 61% were uncomfortable with them, while only 16% were comfortable and 23% were unsure. Furthermore, when asked how likely they would be to ride in a self-driving vehicle, only 23% of American adult internet users said they would, compared to 61% who would not. Lastly, concerning whether facial recognition violates personal privacy, 42% of Americans thought it does, 28% did not, and 30% were unsure. The authors recognize a substantive backlash among Americans against emerging technologies they believe will invade their personal privacy, be used for public surveillance, take away employment opportunities, and discriminate against certain individuals, all factors feeding fears of a world where machines are ascendant and humans are oppressed.
Given those concerns, West and Allen discuss potential “ethical safeguards” in Chapter Eight, reviewing ways to build trustworthy AI and incorporate ethical considerations in corporate decision-making. Many non-government, academic, and corporate organizations have developed principles for AI development and processes to safeguard humanity. For example, Google, Microsoft, Amazon, Facebook, Apple, and IBM have collectively formed the Partnership on Artificial Intelligence to Benefit People and Society. The organization seeks to develop industry best practices to guide AI development with the goal of promoting “ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.” The authors recommend that business organizations hire professional ethicists for corporate staff; establish an ethical code that prescribes principles, processes, and ways of handling ethical aspects of AI development; establish internal AI review boards for evaluating product development and deployment; require annotated AI and AI audit trails; implement AI training programs; and provide a means of remediation for aggrieved AI consumers.
West and Allen, in Chapter Nine, offer a series of recommendations for “Building Responsible AI.” First, in a time of pandemic and remarkable technological change, it is appropriate to directly address AI’s challenges, to improve its governance through distributed collaboration that brings frontline workers together with people of differing skills to solve AI-related problems, and to create guiding principles establishing values, objectives, and criteria for AI’s further development. Moreover, the authors recommend adopting horizontal rules that apply across every industry sector and vertical rules that address AI problems in specific sectors, strengthening public sector oversight through formal AI impact assessments, and restoring the federal Office of Technology Assessment (abolished by Congress in 1995) to evaluate AI and other emerging technologies. West and Allen also recommend creating federal agency AI advisory boards composed of relevant stakeholders; defining corporate culpability, including reconsidering the legal immunity now accorded to digital platforms; and using existing federal statutes to administratively sanction privacy violations, anticompetitive practices, and discriminatory behavior.
The authors strongly endorse improving digital access for Americans; reducing AI biases through independent, third-party audits; and moving beyond existing notice-and-consent privacy requirements toward data sharing rules. West and Allen support the use of business and personal insurance to mitigate exposure to AI risks, diversifying the tech industry workforce, and penalizing (and thus discouraging) malicious or abusive practices designed to inappropriately manipulate software or use it for unsavory purposes. They also recommend establishing a national research “cloud” that provides computing access to technical experts and academic investigators, developing a U.S. data strategy that enables fair and unbiased use of AI, and addressing geographic inequalities and workforce training in America, especially for those Americans not attending universities or colleges. Lastly, the authors argue for improving mechanisms that exercise oversight and control of AI systems, encouraging AI for the “public good,” and actively building a community of democracies that deploy AI technology in responsible ways.
Conclusion / West and Allen have written a well-researched book that comprehensively covers AI as it has emerged in applications relevant to health care, education, transportation, e‑commerce, and national defense and law enforcement. The authors have thoughtfully recognized the “dual-use” aspects of AI, ML, and data analytics, focusing not only on the potential benefits that AI offers American society and the global community, but also on the potential threats of misuse and anti-democratic applications by authoritarian governments and mega-corporatist entities. In this light, West and Allen follow the late Georgia Tech technology historian Melvin Kranzberg, whose six laws of technology begin with “Technology is neither good nor bad; nor is it neutral.”
One can reasonably conclude from the Brookings survey results that Americans are not comfortable with emerging technology. But this backlash goes much deeper, and West and Allen do not discuss the economic power (and political influence) that the tech industry has, or at least is perceived to have, over American institutions. The authors argue for a litany of ethical safeguards for tech giants to implement (and these have the potential to be important safeguards for internal control and governance), but a more fundamental problem looms: do Americans trust these corporations to place the “right” people in these important deliberative positions of authority? Moreover, will the ethical safeguards carry any authoritative weight, beyond an “advisory” role, in the C‑suites? Lastly, does tech companies’ previous performance on issues of consumer privacy, security, transparency, censorship, and competitive behavior offer solace to Americans seeking substantive improvement?
Daily headlines proliferate announcing “hacks” of major tech (and non-tech) companies’ databases, “ransomware” attacks on businesses, and cyberattacks on defense agencies and other government systems, as most recently evidenced by the attack on Colonial Pipeline. Each year since 2001, the monetary damage caused by cybercrime has increased exponentially, reaching an estimated $4.2 billion in 2020. Moreover, data breaches reportedly exposed 36 billion records in the first half of 2020 alone.
With AI systems eventually able to make decisions that carry life-and-death consequences for individuals, how assured should Americans be that errors or malicious behavior will not have dire consequences for people? The record on the integrity of digital systems appears to be worsening. If “AI is here,” as the authors emphatically state, then threats to consumers and citizens from AI applications, whether from malicious behavior by hackers or from violations of human liberty and privacy by authoritarian governments, are a realistic prospect. The discussion of the threats of AI misuse and vulnerability needs to move from academia and think tanks to actionable policies developed and implemented by government agencies and corporations.
To that end, West and Allen offer thoughtful recommendations that should be considered by legislatures in democratic countries, as well as by industry associations and major corporations, seriously interested in developing AI for its potential benefits while ensuring liberty, privacy, and security for their citizens and customers.