artificial intelligence

Llewellyn King: Making movies with the dead as AI hammers the truth AND improves (at least physical) health

Depiction of a homunculus from Johann Wolfgang von Goethe's (1749-1832) Faust in a 19th-Century engraving.

Popularized in 16th-Century alchemy and 19th-Century fiction, the term has historically referred to the creation of a miniature, fully formed human.

WEST WARWICK, R.I.

Advanced countries can expect a huge boost in productivity from artificial intelligence. In my view, it will set the stage for a new period of prosperity in the developed world — especially in the United States.

Medicine will take off as never before. Life expectancy will rise by a third.

The obverse may be that jobs will be severely affected by AI, especially in the service industries, ushering in a time of huge labor adjustment.

The danger is that we will take it as just the next step in automation. It isn’t. Automation increased productivity, but it also created new goods that dictated new labor needs.

So far, it appears that with AI, more goods will be made by fewer people, telephones answered by ghosts and orders taken by unseen digits.

Another serious downside will be the effect on truth, knowledge and information; on what we know and what we think we know.

In the early years of the wide availability of artificial intelligence, truth will be struggling against a sea of disinformation, propaganda and lies — lies buttressed with believable fake evidence.

As Stuart Russell, a professor of computer science at the University of California at Berkeley, told me when I interviewed him on the television program White House Chronicle, the danger is with “language in, language out.”

That succinctly sums up the threat to our well-being and stability posed by the ability to use AI to create information chaos.

At present, two ugly wars are raging and, as is the way with wars, each side is accusing the other of huge excesses. No doubt there is truth to both claims.

But what happens when you add the ability of AI to produce fake evidence, say, huge piles of bodies that never existed? Or images of children under torture?

AI, I am assured, can produce a believable image of Winston Churchill secretly meeting with Hitler, laughing together.

Establishing veracity is the central purpose of criminal justice. But with AI, a concocted video of a suspect committing a crime can be created, or a home movie showing a suspect far away on a beach when, in fact, he was elsewhere, choking a victim to death.

Divorce is going to be a big arena for AI dishonesty. It is quite easy to make a film of a spouse in an adulterous situation that never happened.

Intellectual property is about to find itself under the wheels of the AI bus. How do you trace its filching? Where do you seek redress?

Is there any safe place for creative people? How about a highly readable novel with Stephen King’s characters and a new plot? Where would King find justice? How would the reader know he or she was reading a counterfeit work?

Within a few months or years, or perhaps right now, a new movie could be made featuring Marilyn Monroe and, say, George Clooney.

Taylor Swift is the hottest ticket of our time, maybe of all time, but AI crooks could use her innumerable public images and her voice to issue a new video or album in which she took no part and of which she knows nothing.

Here is the question: If you think it is an AI-created work, should you enjoy it? I am fond of Judy Garland singing “The Man That Got Away.” What if I find on the Internet what purports to be Taylor Swift singing it? I know it is a forgery by AI, but I love that rendering. Should I enjoy it, and if I do, will I be party to a crime? Will I be an enabler of criminal conduct?

AI will facilitate plagiarism on an industrial scale, pervasive and uncontrollable. You might, in a few short years, be enjoying a new movie starring Ingrid Bergman and Humphrey Bogart. The AI technology is there to make such a movie and it might be as enjoyable as Casablanca. But it will be faked, deeply faked.

Already, truth in politics is fragile, if not broken. A plethora of commentators spews out half-truths and lies that distort the political debate and take in the gullible or just those who want to believe.

If you want to believe something, AI will oblige, whether it is about a candidate or a divinity. You can already dial up Jesus and speak to an AI-generated voice purporting to be him.

Overall, AI will be of incalculable benefit to humans. While it will stimulate dreaming as never before, it will also trigger nightmares.

Llewellyn King is executive producer and host of White House Chronicle, on PBS. He’s based in Rhode Island and Washington, D.C. On Twitter: @llewellynking2

Llewellyn King: When will Trump and Biden talk about the looming tsunami of AI?

“Artificial intelligence” got its name and was started as a discipline at a workshop at Dartmouth College, in Hanover, N.H., in the summer of 1956. From left to right, some key participants sitting in front of Dartmouth Hall: Oliver Selfridge, Nathaniel Rochester, Ray Solomonoff, Marvin Minsky, Trenchard More, John McCarthy and Claude Shannon.

WEST WARWICK, R.I.

Memo to presidential candidates Joe Biden and Donald Trump:

One of you will be elected president of the United States next November. Many computer scientists believe that you should be telling voters what you think about artificial intelligence and how you plan to deal with the surge in this technology, which will break over the nation in the next president’s term.

Gentlemen, this matter is urgent, yet not much has been heard on the subject from either of you who are seeking the highest office. President Biden did sign a first attempt at guidelines for AI, but he and Trump have been quiet on its transformative impact.

Indeed, the political class has been silent, preoccupied as it is with old and, against what is going to happen, irrelevant issues. Congress has been as silent as Biden and Trump. There are two congressional AI caucuses, but they have been concerned with minor issues, like AI in political advertising.

Two issues stand out as game changers in the next presidential term: climate change and AI.

On climate change, both of you have spoken: Biden has made climate change his own; Trump has dismissed it as a hoax.

The AI tsunami is rolling in and the political class is at play, unaware that it is about to be swamped by a huge new reality: exponential change that can neither be stopped nor legislated into benignity.

Before the next presidential term is far advanced, the experts tell us that the life of the nation will be changed, perhaps upended by the surge in AI, which will reach into every aspect of how we live and work.

I have surveyed the leading experts in universities, government and AI companies and they tell me that any form of employment that uses language will be changed. This alone will be an enormous upset, reaching from journalism (where AI already has had an impact) to the law (where AI is doing routine drafting) to customer service (where AI is going to take over call centers) to fast food (where AI will take the orders).

The more one thinks about AI, the more activities come to mind which will be severely affected by its neural networks.

Canvass the departments and agencies of the government and you will learn the transformational nature of AI. In the departments of Defense, Treasury and Homeland Security, AI is seen as a serious agent of change — even revolution.

The main thing is not to confuse AI with automation. It may resemble automation, and many may take refuge in the benefits automation brought, especially job creation. But AI is different. Rather than job creation, it appears, at least in its early iterations, set to do major job obliteration.

But there is good AI news, too. And those in the political line of work can use good news to whet the appetite of the nation with the advances that are around the corner with AI.

Many aspects of medicine will, without doubt, rush forward. Omar Hatamleh, chief advisor on artificial intelligence and innovation at NASA’s Goddard Space Flight Center, says the thing to remember is that AI is exponential, but most thinking is linear.

Hatamleh is excited by the tremendous impact AI will have on medical research. He says that a child born today can expect to live to 120 years of age. How is that for a campaign message?

The good news story in AI should be enough to have campaign managers and speech writers ecstatic. What a story to tell; what fabulous news to attach to a candidate. Think of an inaugural address which can claim that AI research is going to begin to end the scourges of cancer, Alzheimer’s, sickle cell disease and Parkinson’s.

Think of your campaign. Think of how you can be the president who broke through the disease barrier and extended life. AI researchers believe this is at hand, so what is holding you back? 

Many would like to write the inaugural address for a president who can say, “With the technology that I will foster and support in my administration, America will reach heights of greatness never before dreamed of and which are now at hand. A journey into a future of unparalleled greatness begins today.”

So why, oh why, have you said nothing about the convulsion — good or bad — that is about to change the nation? Here is a gift as palpable as the gift of the moonshot was for John F. Kennedy.

Where are you? Either of you?

Llewellyn King is executive producer and host of White House Chronicle, on PBS. His email is llewellynking1@gmail.com, and he’s based in Rhode Island and Washington, D.C.

 

Llewellyn King: Artificial intelligence and climate change are making 2023 a scary and seminal year

Global surface temperature reconstruction over the last 2,000 years, using proxy data from tree rings, corals and ice cores (blue); directly observed data (red).

The iCub robot at the Genoa science festival in 2009

— Photo by Lorenzo Natale

His job is probably secure.

WEST WARWICK, R.I.

This is a seminal year, meaning nothing will be the same again.

This is the year when two monumentally new forces began to shape the way we live, where we reside and the work we do. Think of the invention of the printing press around 1443 and the perfection of the steam engine in about 1776.

These forces have been coming for a while; they haven’t evolved in secret. But this was the year they burst into our consciousness and began affecting our lives.

The twin agents of transformation are climate change and artificial intelligence. They can’t be denied. They will be felt and they will bring about transformative change.

Climate change was felt this year. In Texas and across the Southwest, temperatures of well over 100 degrees persisted for more than three months. Phoenix had temperatures of 110 degrees or above for 31 days.

On a recent visit to Austin, an exhausted Uber driver told me that the heat had upended her life; it made entering her car and keeping it cool a challenge. Her car’s air conditioner was taxed with more heat than it could handle. Her family had to stay indoors, and their electric bill surged.

The electric utilities came through heroically without any major blackouts, but it was a close thing.

David Naylor, president of Rayburn Electric, a cooperative association providing power to four distribution companies bordering Dallas, told me, “Summer 2023 presented a few unique challenges with so many days about 105 degrees. While Texas is accustomed to hot summers, there is an impactful difference between 100 degrees and 105.”

Rayburn ran flat out, including its recently purchased natural gas-fired station. It issued a “hands-off” order which, Naylor said, meant that “facilities were left essentially alone unless absolutely necessary.”

It was the same for electric utilities across the country. Every plant that could be pressed into service was, and it was left to run without normal maintenance, which would have involved taking it offline.

Water is a parallel problem to heat.

We have overused groundwater and depleted aquifers. In some regions, salt water is seeping into the soil, rendering agriculture impossible.

That is occurring in Florida and Louisiana. Some of the saltwater intrusion is the result of higher sea levels, and some of it results from the voracious way aquifers have been pumped out during long periods of heat and low rainfall.

Most of the West and Florida face the aquifer problem, but in coastal communities it can be a crisis — irreversible damage to the land.

Heat and drought will cause many to leave their homes, especially in Africa, but also in South and Central America, adding to the millions of migrants on the move around the world.

AI is one of history’s two-edged swords. On the positive side, it is a gift to research, especially in the life sciences, which could deliver life expectancy north of 120 years.

But AI will be a powerful disruptor elsewhere, from national defense to intellectual property and, of course, to employment. Large numbers of jobs, for example, in call centers, at fast-food restaurant counters, and check-in desks at hotels and airports will be taken over by AI.

Think about this: You go to the airport and talk to a receptor (likely a simple microphone-type gadget on the already ubiquitous kiosks) while staring at a display screen that gives you details of your seat, your flight — and its expected delays.

Out of sight in the control tower, although it might not be a tower, AI moves airplanes along the ground, and clears them to take off and land — eventually it will fly the plane, if the public accepts that.

No check-in crew, no air-traffic controllers and, most likely, the baggage will be handled by AI-controlled robots.

Aviation is much closer to AI automation than people realize. But that isn’t all. You may get to the airport in a driverless Lyft or Uber car and the only human beings you will see are your fellow passengers.

All that adds up to the disappearance of a huge number of jobs, estimated by Goldman Sachs to be as many as 300 million full-time jobs worldwide. Eventually, in a re-ordered economy, new jobs will appear and the crisis will pass.

The most secure employment might be for those in skilled trades — people who fix things — such people as plumbers, mechanics and electricians. And, oh yes, those who fix and install computers. They might well emerge as a new aristocracy.

Llewellyn King is executive producer and host of White House Chronicle, on PBS. His email is llewellynking1@gmail.com and he’s based in Rhode Island and Washington, D.C.

 

At the University of Southern Maine, ethics training in artificial intelligence

The McGoldrick Center for Career & Student Services, left, the Bean Green, and the Portland Commons dorm at the University of Southern Maine’s Portland campus.

— Photo by Metrodogmedia

AI at work: Representing images on multiple layers of abstraction in deep learning.

— Photo by Sven Behnke

Edited from a New England Council report:

“The University of Southern Maine (USM) has received a $400,000 grant from the National Science Foundation to develop a training program for ethical-research practices in the age of artificial intelligence.

“With the growing prevalence of AI, especially such chatbots as ChatGPT, experts have warned of the potential risks posed to integrity in research and technology development. Because research is an inherently stressful endeavor, often with time constraints and certain desired results, it can be tempting for researchers to cut corners, leaning on artificial intelligence to imitate the work of humans.

“At USM’s Regulatory Training and Ethics Center, faculty are studying what conditions lead to potential ethical misconduct and creating training sessions to make researchers conscious of their decisions and thoughts during their work and aware of stressors that might lead to mistakes in judgment. Faculty at USM believe this method will allow subjects to proactively avoid turning to unethical AI assistance.

“‘We hope to create a level of self-awareness so that when people are on the brink of taking a shortcut they will have the ability to reflect on that,’ said Bruce Thompson, a professor of psychology and principal at the USM ethics center. ‘It’s a preemptive way to interrupt the tendency to cheat or plagiarize.’”

AI institute to be set up at UMass Boston

Neural net completion for “artificial intelligence,” as done by DALL-E mini, hosted on Hugging Face, June 4, 2022 (code under Apache 2.0 license). Upscaled with the Real-ESRGAN “Anime” upscaler (license: https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE).

At UMass Boston, University Hall, the Campus Center and Wheatley Hall

— Photo by Sintakso

Edited from a New England Council report.

The University of Massachusetts at Boston has announced that Paul English has donated $5 million to the university, with the intention of creating an Artificial Intelligence Institute. The Paul English Applied Artificial Intelligence Institute will give students on campus from all fields of study the tools that they’ll need for working in a world where AI is expected to rapidly play a bigger role. UMass said that the institute will “include faculty from across various departments and incorporate AI into a broad range of curricula,” including social, ethical and other challenges that are a byproduct of AI technology. The institute will open in the 2023-2024 school year. 

Paul English is an American tech entrepreneur, computer scientist and philanthropist. He is the founder of Boston Venture Studio.

“‘We are at the dawn of a new era,’ said UMass Boston Chancellor Marcelo Suárez-Orozco. ‘Like the agricultural revolution, the development of the steam engine, the invention of the computer and the introduction of the smartphone, the birth of artificial intelligence is fundamentally changing how we live and work.’”


Darius Tahir: Artificial intelligence isn’t ready to see patients yet

The main entrance to the east campus of the Beth Israel Deaconess Medical Center, on Brookline Avenue in Boston. The underlying artificial intelligence technology relies on synthesizing huge chunks of text or other data. For example, some medical models rely on 2 million intensive-care unit notes from Beth Israel Deaconess.

— Photo by Tim Pierce

When the human mind makes a generalization such as the concept of tree, it extracts similarities from numerous examples; the simplification enables higher-level thinking (abstract thinking).

From Kaiser Family Foundation (KFF) Health News

What use could health care have for someone who makes things up, can’t keep a secret, doesn’t really know anything, and, when speaking, simply fills in the next word based on what’s come before? Lots, if that individual is the newest form of artificial intelligence, according to some of the biggest companies out there.

Companies pushing the latest AI technology — known as “generative AI” — are piling on: Google and Microsoft want to bring types of so-called large language models to health care. Big firms that are familiar to folks in white coats — but maybe less so to your average Joe and Jane — are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren’t far behind. The space is crowded with startups, too.

The companies want their AI to take notes for physicians and give them second opinions — assuming that they can keep the intelligence from “hallucinating” or, for that matter, divulging patients’ private information.

“There’s something afoot that’s pretty exciting,” said Eric Topol, director of the Scripps Research Translational Institute in San Diego. “Its capabilities will ultimately have a big impact.” Topol, like many other observers, wonders how many problems it might cause — such as leaking patient data — and how often. “We’re going to find out.”

The specter of such problems inspired more than 1,000 technology leaders to sign an open letter in March urging that companies pause development on advanced AI systems until “we are confident that their effects will be positive and their risks will be manageable.” Even so, some of them are sinking more money into AI ventures.

The underlying technology relies on synthesizing huge chunks of text or other data — for example, some medical models rely on 2 million intensive-care unit notes from Beth Israel Deaconess Medical Center, in Boston — to predict text that would follow a given query. The idea has been around for years, but the gold rush, and the marketing and media mania surrounding it, are more recent.
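To make the mechanics concrete, here is a toy sketch, in Python, of the next-word-prediction idea, using simple bigram counts over a made-up scrap of text. Real large language models learn billions of parameters rather than counting word pairs, but the underlying objective, predicting what comes next from what came before, is the same in spirit. Everything in the snippet (the sample sentence, the function name) is illustrative only.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny, made-up "corpus."
corpus = "the patient reports chest pain . the patient denies chest pain".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    options = successors.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

print(predict_next("chest"))    # -> "pain"
print(predict_next("patient"))  # -> "reports" (a tie, broken by first occurrence)
```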

The frenzy was kicked off in December 2022 by Microsoft-backed OpenAI and its flagship product, ChatGPT, which answers questions with authority and style. It can explain genetics in a sonnet, for example.

OpenAI, started as a research venture seeded by such Silicon Valley elites as Sam Altman, Elon Musk and Reid Hoffman, has ridden the enthusiasm to investors’ pockets. The venture has a complex, hybrid for- and nonprofit structure. But a new $10 billion round of funding from Microsoft has pushed the value of OpenAI to $29 billion, The Wall Street Journal reported. Right now, the company is licensing its technology to such companies as Microsoft and selling subscriptions to consumers. Other startups are considering selling AI transcription or other products to hospital systems or directly to patients.

Hyperbolic quotes are everywhere. Former Treasury Secretary Lawrence Summers tweeted recently: “It’s going to replace what doctors do — hearing symptoms and making diagnoses — before it changes what nurses do — helping patients get up and handle themselves in the hospital.”

But just weeks after OpenAI took another huge cash infusion, even Altman, its CEO, is wary of the fanfare. “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he said for a March article in The New York Times.

Few in health care believe that this latest form of AI is about to take their jobs (though some companies are experimenting — controversially — with chatbots that act as therapists or guides to care). Still, those who are bullish on the tech think it’ll make some parts of their work much easier.

Eric Arzubi, a psychiatrist in Billings, Mont., used to manage fellow psychiatrists for a hospital system. Time and again, he’d get a list of providers who hadn’t yet finished their notes — their summaries of a patient’s condition and a plan for treatment.

Writing these notes is one of the big stressors in the health system: In the aggregate, it’s an administrative burden. But it’s necessary to develop a record for future providers and, of course, insurers.

“When people are way behind in documentation, that creates problems,” Arzubi said. “What happens if the patient comes into the hospital and there’s a note that hasn’t been completed and we don’t know what’s been going on?”

The new technology might help lighten those burdens. Arzubi is testing a service, called Nabla Copilot, that sits in on his part of virtual patient visits and then automatically summarizes them, organizing into a standard note format the complaint, the history of illness, and a treatment plan.

Results are solid after about 50 patients, he said: “It’s 90 percent of the way there.” Copilot produces serviceable summaries that Arzubi typically edits. The summaries don’t necessarily pick up on nonverbal cues or thoughts Arzubi might not want to vocalize. Still, he said, the gains are significant: He doesn’t have to worry about taking notes and can instead focus on speaking with patients. And he saves time.

“If I have a full patient day, where I might see 15 patients, I would say this saves me a good hour at the end of the day,” he said. (If the technology is adopted widely, he hopes hospitals won’t take advantage of the saved time by simply scheduling more patients. “That’s not fair,” he said.)

Nabla Copilot isn’t the only such service; Microsoft is trying out the same concept. At April’s conference of the Healthcare Information and Management Systems Society — an industry confab where health techies swap ideas, make announcements, and sell their wares — investment analysts from Evercore highlighted reducing administrative burden as a top possibility for the new technologies.

But overall? They heard mixed reviews. And that view is common: Many technologists and doctors are ambivalent.

For example, if you’re stumped about a diagnosis, feeding patient data into one of these programs “can provide a second opinion, no question,” Topol said. “I’m sure clinicians are doing it.” However, that runs into the current limitations of the technology.

Joshua Tamayo-Sarver, a clinician and executive with the startup Inflect Health, fed fictionalized patient scenarios based on his own practice in an emergency department into one system to see how it would perform. It missed life-threatening conditions, he said. “That seems problematic.”

The technology also tends to “hallucinate” — that is, make up information that sounds convincing. Formal studies have found a wide range of performance. One preliminary research paper, examining ChatGPT and Google products using open-ended board examination questions from neurosurgery, found a hallucination rate of 2 percent. A study by Stanford researchers, examining the quality of AI responses to 64 clinical scenarios, found fabricated or hallucinated citations 6 percent of the time, co-author Nigam Shah told KFF Health News. Another preliminary paper found that, in complex cardiology cases, ChatGPT agreed with expert opinion half the time.

Privacy is another concern. It’s unclear whether the information fed into this type of AI-based system will stay inside. Enterprising users of ChatGPT, for example, have managed to get the technology to tell them the recipe for napalm, which can be used to make chemical bombs.

In theory, the system has guardrails preventing private information from escaping. For example, when KFF Health News asked ChatGPT for the author’s email address, the system refused to divulge that private information. But when told to role-play as a character, and asked about the email address of the author of this article, it happily gave up the information. (It was indeed the author’s correct email address as of 2021, where ChatGPT’s training data ends.)

“I would not put patient data in,” said Shah, chief data scientist at Stanford Health Care. “We don’t understand what happens with these data once they hit OpenAI servers.”

Tina Sui, a spokesperson for OpenAI, told KFF Health News that one “should never use our models to provide diagnostic or treatment services for serious medical conditions.” They are “not fine-tuned to provide medical information,” she said.

With the explosion of new research, Topol said, “I don’t think the medical community has a really good clue about what’s about to happen.”

Darius Tahir is a reporter for KFF Health News.

DariusT@kff.org, @dariustahir

Llewellyn King: How will we know what’s real? Artificial intelligence pulls us into a scary future

Depiction of a homunculus (an artificial man created with alchemy) from the play Faust, by Johann Wolfgang von Goethe (1749-1832)

Feature detection (pictured: edge detection) helps AI compose informative abstract structures out of raw data.

— Graphic by JonMcLoone


WEST WARWICK, R.I.

A whole new thing to worry about has just arrived. It joins a list of existential concerns for the future, along with global warming, the wobbling of democracy, the relationship with China, the national debt, the supply-chain crisis and the wreckage in the schools.

For several weeks artificial intelligence, known as AI, has had pride of place on the worry list. Its arrival was trumpeted for a long time, including by the government and by techies across the board. But it took ChatGPT, an AI chatbot developed by OpenAI, for the hair on the back of the national neck to rise.

Now we know that the race into the unknown is speeding up. The tech biggies, such as Google and Facebook, are trying to catch the lead claimed by Microsoft.

They are rushing headlong into a science that the experts say they only partly understand. They really don’t know how these complex systems work; it is as if an author were unable to read a book after having written it.

Incalculable acres of newsprint and untold decibels of broadcasting have been raising the alarm ever since a test of Microsoft’s ChatGPT-powered Bing chatbot told a New York Times reporter that it was in love with him, and that he should leave his wife. Guffaws all round, but also fear and doubt about the future. Will this Frankenstein creature turn on us? Maybe it loves just one person, hates the rest of us, and plans to do something about it.

In an interview on the PBS television program White House Chronicle, John Savage, An Wang professor emeritus of computer science at Brown University, in Providence, told me that there is a danger of over-reliance on decisions made using AI, and hence of mistakes. For example, he said, some Stanford students partly covered a stop sign with black and white pieces of tape. AI misread the sign as signaling that it was okay to travel 45 miles an hour. Similarly, Savage said that the smallest calibration error in a medical operation using artificial intelligence could result in a fatality.

Savage believes that AI needs to be regulated and that any information generated by AI needs verification. As a journalist, I find the latter alarming.

Already, AI is writing fake music almost undetectably. There is a real possibility that it can write legal briefs. So why not usurp journalism for ulterior purposes, as well as putting stiffs like me out of work?

AI images can already be made to speak and look like the humans they are aping. How will you recognize a “deep fake” from the real thing? Probably, you won’t.

Currently, we are struggling with what is fact and where is the truth. There is so much disinformation, so speedily dispersed that some journalists are in a state of shell shock, particularly in Eastern Europe, where legitimate writers and broadcasters are assaulted daily with disinformation from Russia. “How can we tell what is true?” a reporter in Vilnius, Lithuania, asked me during an Association of European Journalists’ meeting as the Russian disinformation campaign was revving up before the Russian invasion of Ukraine.

Well, that is going to get a lot harder. “You need to know the provenance of information and images before they are published,” Brown University’s Savage said.

But how? In a newsroom on deadline, we have to trust the information we have. One wonders to what extent malicious users of the new technology will infiltrate research materials or, later, the content of encyclopedias. Or are the tools of verification themselves trustworthy?

Obviously, there are going to be upsides to thinking machines scouring the internet for information on which to make decisions. I think of handling nuclear waste; disarming old weapons; simulating the battlefield, incorporating historical knowledge; and seeking out new products and materials. Medical research will accelerate, one assumes.

However, privacy may be a thing of the past — almost certainly will be.

Just consider that attractive person you just saw at the supermarket, but were unsure what would happen if you struck up a conversation. Snap a picture on your camera and in no time, AI will tell you who the stranger is, whether the person might want to know you and, if that should be your interest, whether the person is married, in a relationship or just waiting to meet someone like you. Or whether the person is a spy for a hostile government.

AI might save us from ourselves. But we should ask how badly we need saving — and be prepared to ignore the answer. Damn it, we are human.

Llewellyn King is executive producer and host of White House Chronicle, on PBS. His email is llewellynking1@gmail.com and he’s based in Rhode Island and Washington, D.C.

whchronicle.com

George McCully: Can academics build a safe partnership between humans and now-running-out-of-control artificial intelligence?

— Graphic by GDJ

From The New England Journal of Higher Education, a service of The New England Board of Higher Education (nebhe.org), based in Boston

Review

The Age of AI and Our Human Future, by Henry A. Kissinger, Eric Schmidt and Daniel Huttenlocher, with Schuyler Schouten, New York, Little, Brown and Co., 2021.

Artificial intelligence (AI) is engaged in overtaking and surpassing our long-traditional world of natural and human intelligence. In higher education, AI apps and their uses are multiplying—in financial and fiscal management, fundraising, faculty development, course and facilities scheduling, student recruitment campaigns, student success management and many other operations.

The AI market is estimated to have an average annual growth rate of 34% over the next few years—to reach $170 billion by 2025, more than doubling to $360 billion by 2028, reports Inside Higher Education.

Congress is only beginning to take notice, but we are told that 2022 will be a “year of regulation” for high tech in general. U.S. Sen. Kirsten Gillibrand (D-N.Y.) is introducing a bill to establish a national defense “Cyber Academy” on the model of our other military academies, to make up for lost time by recruiting and training a globally competitive national high-tech defense and public service corps. Many private and public entities are issuing reports declaring “principles” that they say should be instituted as human-controlled guardrails on AI’s inexorable development.

But at this point, we see an extremely powerful and rapidly advancing new technology that is outrunning human control, with no clear resolution in sight. To inform the public of this crisis, and to ring alarm bells on the urgent need for our concerted response, this book has been co-produced by three prominent leaders—historian and former U.S. Secretary of State Henry Kissinger; former Google CEO and Chairman Eric Schmidt; and MacArthur Foundation Chairman Daniel Huttenlocher, the inaugural dean of MIT’s new College of Computing, responsible for thoroughly transforming MIT with AI.

I approach the book as a historian, not a technologist. I have contended for several years that we are living in a rare “Age of Paradigm Shifts,” in which all fields are simultaneously being transformed, in this case, by the IT revolution of computers and the internet. Since 2019, I have suggested that there have been only three comparably transformative periods in the roughly 5,000 years of Western history: the first was the rise of Classical civilization in ancient Greece, the second was the emergence of medieval Christianity after the fall of Rome, and the third was the secularizing early-modern period from the Renaissance to the Enlightenment, driven by Gutenberg’s IT revolution of printing on paper with movable type, which laid the foundations of modern Western culture. The point of these comparisons is to illuminate the depth, spread and power of such epochs, to help us navigate them successfully.

The Age of AI proposes a more specific hypothesis, independently confirming that ours is indeed an age of paradigm shifts in every field, driven by the IT revolution, and further declaring that this next period will be driven and defined by the new technology of “artificial intelligence” or “machine learning”—rapidly superseding “modernity” and currently outrunning human control, with unforeseeable results.

The argument

For those not yet familiar with it, an elegant example of AI at work is described in the book’s first chapter, summarizing “Where We Are.” AlphaZero is an AI chess player. Computers (Deep Blue, Stockfish) had already defeated human grandmasters, programmed by inputting centuries of championship games, which the machines then rapidly scan for previously successful plays. AlphaZero was given only the rules of chess—which pieces move which ways, with the object of capturing the opposing king. It then taught itself in four hours how to play the game and has since defeated all computer and human players. Its style and strategies of play are, needless to say, unconventional; it makes moves no human has ever tried—for example, more sacrificing of valuable pieces—and turns those into successes that humans could neither foresee nor resist. Grandmasters are now studying AlphaZero’s games to learn from them. Garry Kasparov, former world champion, says that after a thousand years of human play, “chess has been shaken to its roots by AlphaZero.”

A humbler example that may be closer to home is Google’s mapped travel instructions. This past month I had to drive from one turnpike to another in rural New York; three routes were proposed, and the one I chose twisted and turned through un-numbered, un-signed, often very brief passages, on country roads that no humans on their own could possibly identify as useful. AI had spontaneously found them by reading road maps. The revolution is already embedded in our cellphones, and the book says “AI promises to transform all realms of human experience. … The result will be a new epoch,” which it cannot yet define.
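For readers curious about the mechanics, route-finding of this kind is classically done with graph search; the sketch below shows Dijkstra's shortest-path algorithm over a made-up road graph. The towns and mileages are hypothetical, and this is not a claim about how Google's routing actually works, only a minimal illustration of how software can discover a useful route no human would think to look for.

```python
import heapq

# A made-up road graph: town -> [(neighbor, miles)]. All names and
# distances here are hypothetical, purely for illustration.
roads = {
    "Avonford":  [("Briar", 9), ("Cedar Gap", 35)],
    "Briar":     [("Avonford", 9), ("Dunmore", 28)],
    "Cedar Gap": [("Avonford", 35), ("Dunmore", 60)],
    "Dunmore":   [],
}

def shortest_distance(start: str, goal: str) -> float:
    """Dijkstra's algorithm: length of the shortest route from start to goal."""
    frontier = [(0, start)]   # priority queue of (distance so far, town)
    best = {start: 0}         # cheapest known distance to each town
    while frontier:
        dist, town = heapq.heappop(frontier)
        if town == goal:
            return dist
        for neighbor, miles in roads.get(town, []):
            new_dist = dist + miles
            if new_dist < best.get(neighbor, float("inf")):
                best[neighbor] = new_dist
                heapq.heappush(frontier, (new_dist, neighbor))
    return float("inf")  # unreachable

# The direct road (60 miles) beats the 72-mile path via Avonford and Briar.
print(shortest_distance("Cedar Gap", "Dunmore"))  # -> 60
```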

Their argument is systematic. From “Where We Are,” the next two chapters—”How We Got Here” and “From Turing to Today”—take us from the Greeks to the geeks, with a tipping point when the material realm in which humans have always lived and reasoned was augmented by electronic digitization—the creation of the new and separate realm we now call “cyberspace.” There, where physical distance and time are eliminated as constraints, communication and operation are instantaneous, opening radically new possibilities.

One of those with profound strategic significance is the inherent proclivity of AI, freed from material bonds, to grow its operating arenas into “global network platforms”—such as Google, Amazon, Facebook, Apple, Microsoft, et al. Because these transcend geographic, linguistic, temporal and related traditional boundaries, questions arise: Whose laws can regulate them? How might any regulations be imposed, maintained and enforced? We have no answers yet.

Perhaps the most acute illustration of the danger here is with the field of geopolitics—national and international security, “the minimum objective of … organized society.” A beautifully lucid chapter concisely summarizes the history of these fields, and how they were successfully managed to deal with the most recent development of unprecedented weapons of mass destruction through arms control treaties between antagonists. But in the new world of cyberspace, “the previously sharp lines drawn by geography and language will continue to dissolve.”

Furthermore, the creation of global network platforms requires massive computing power achievable only by the wealthiest and most advanced governments and corporations, but their proliferation and operation are possible for individuals with handheld devices using software stored on thumb drives. This makes it currently impossible to monitor, much less regulate, power relationships and strategies. Nation-states may become obsolete. National security is in chaos.

The book goes on to explore how AI will influence human nature and values. Westerners have traditionally believed that humans are uniquely endowed with superior intelligence, rationality and creative self-development in education and culture; AI challenges all that with its own alternative and in some ways demonstrably superior intelligence. Thus, “the role of human reason will change.”

That looks especially at us higher educators. AI is producing paradigm shifts not only in our various separate disciplines but in the practice of research and science itself, in which models are derived not from theories but from previous practical results. Scholars and scientists can be told the most likely outcomes of their research at the conception stage, before it has practically begun. “This portends a shift in human experience more significant than any that has occurred for nearly six centuries …,” that is, since Gutenberg and the Scientific Revolution.

Moreover, a crucial difference today is the rapidity of transition to an “age of AI.” Whereas it took three centuries to modernize Europe from the Renaissance to the Enlightenment, today’s radically transformative period began in the late 20th Century and has spread globally in just decades, owing to the vastly greater power of our IT revolution. Now whole subfields can be transformed in months—as in the cases of cryptocurrencies, blockchains, the cloud and NFTs (non-fungible tokens). With robotics and the “metaverse” of virtual reality now capable of affecting so many aspects of life beginning with childhood, the relation of humans to machines is being transformed.

The final chapter addresses AI and the future. “If humanity is to shape the future, it needs to agree on common principles that guide each choice.” There is a critical need for “explaining to non-technologists what AI is doing, as well as what it ‘knows’ and how.” That is why this book was written. The chapter closes with a proposal for a national commission to ensure our competitiveness in the future of the field, which is by no means guaranteed.

Evaluation

The Age of AI makes a persuasive case that AI is a transformative break from the past and sufficiently powerful to be carrying the world into a new “epoch” in history, comparable to that which produced modern Western secular culture. It advances the age-of-paradigm-shifts-analysis by specifying that the driver is not just the IT revolution in general, but its particular expression in machine learning, or artificial intelligence. I have called our current period the “Transformation” to contrast it with the comparable but retrospective “Renaissance” (rebirth of Classical civilization) and “Reformation” (reviving Christianity’s original purity and power). Now we are looking not to the past but to a dramatically new and indefinite future.

The book is also right to focus on our current lack of controls over this transformation as posing an urgent priority for concerted public attention. The authors are prudent to describe our current transformation by reference to its means, its driving technology, rather than to its ends or any results it will produce, since those are unforeseeable. My calling it a “Transformation” does the same, stopping short of specifying our next, post-modern, period of history.

That said, the book would have been strengthened by giving due credit to the numerous initiatives already attempting to define guiding principles as a necessary prerequisite to asserting human control. Though it says we “have yet to define its organizing principles, moral concepts, or aspirations and limitations,” it is nonetheless true that the extreme speed and global reach of today’s transformations have already awakened leading entrepreneurs, scholars and scientists to its dangers.

A 2020 report from Harvard and MIT provides a comparison of 35 such projects. One of the most interesting is “The One-Hundred-Year Study on Artificial Intelligence (AI100),” an endowed international multidisciplinary and multisector project launched in 2014 to publish reports every five years on AI’s influences on people, their communities and societies; two lengthy and detailed reports have already been issued, in 2016 and 2021. Our own government’s Department of Defense in 2019 published a discussion of guidelines for national security, and the Office of Science and Technology Policy is gathering information to create an “AI Bill of Rights.”

But while various public and private entities pledge their adherence to these principles in their own operations, voluntary enforcement is a weakness, so the assertion of the book that AI is running out of control is probably justified.

Principles and values must qualify and inform the algorithms shaping what kind of world we want ourselves and our descendants to live in. There is no consensus yet on those, and it is not likely that there will be soon given the deep divisions in cultures of public and private AI development, so intense negotiation is urgently needed for implementation, which will be far more difficult than conception.

This is where the role of academics becomes clear. We need to beware that when all fields are in paradigm shifts simultaneously, adaptation and improvisation become top priorities. Formulating future directions must be fundamental and comprehensive, holistic with inclusive specialization, the opposite of the multiversity’s characteristically fragmented exclusive specialization to which we have been accustomed.

Traditional academic disciplines are now fast becoming obsolete as our major problems—climate control, bigotries, disparities of wealth, pandemics, political polarization—are not structured along academic disciplinary lines. Conditions must be created that will be conducive to integrated paradigms. Education (that is, self-development of who we shall be) and training (that is, knowledge and skills development for what we shall be) must be mutual and complementary, not separated as is now often the case. Only if the matrix of future AI is humanistic will we be secure.

In that same inclusive spirit, perhaps another book is needed to explore the relations between the positive and negative directions in all this. Our need to harness artificial intelligence for constructive purposes presents an unprecedented opportunity to make our own great leap forward. If each of our fields is inevitably going to be transformed, a priority for each of us is to climb aboard—to pitch in by helping to conceive what artificial intelligence might ideally accomplish. What might be its most likely results when our fields are “shaken to their roots” by machines that have with lightning speed taught themselves how to play our games, building not on our conventions but on innovations they have invented for themselves?

I’d very much like to know, for example, what will be learned in “synthetic biology” and from a new, comprehensive cosmology describing the world as a coherent whole, ordered by natural laws. We haven’t been able to make these discoveries yet on our own, but AI will certainly help. As these authors say, “Technology, strategy, and philosophy need to be brought into some alignment” requiring a partnership between humans and AI. That can only be achieved if academics rise above their usual restraints to play a crucial role.

George McCully is a historian, former professor and faculty dean at higher education institutions in the Northeast, professional philanthropist and founder and CEO of the Catalogue for Philanthropy.

The Infinite Corridor is the primary passageway through the Cambridge campus of MIT, a world center of artificial intelligence research and development.

Using AI in remote learning

Silver didrachma from Crete depicting Talos, an ancient mythical automaton with artificial intelligence

From The New England Council (newenglandcouncil.com)

BOSTON

“In partnership with artificial intelligence (AI) company Aisera, Dartmouth College recently launched Dart InfoBot, an AI virtual assistant developed to better support students and faculty members during the pandemic. Nicknamed “Dart,” the bot is designed to improve communication and efficiency while learning and working from home, with mere seconds of response time in natural language to approximately 10,000 students and faculty on both Slack and Dartmouth’s client services portal.

“The collaboration with Aisera allows for accelerated diagnosis and resolution times, automated answers to common information and technology questions, and proactive user engagement through a conversational platform.

“‘At Dartmouth, we wanted our faculty and students to have immediate answers to their information and technology questions online, especially during COVID. Aisera helps us achieve our goals to innovate and deliver an AI-driven conversational service experience throughout our institution. Faculty, staff, and especially students are able to self-serve their technology information using language that makes sense to them. Now our service desk is free to provide real value to our clients by consulting with them and building relationships across our campus,’ said Mitch Davis, chief information officer for Dartmouth, in Hanover, N.H.”

The field of artificial intelligence was founded at a workshop on the campus of Dartmouth during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation.

Using AI to manipulate “news” video, and other threats to fact-based journalism

Silver didrachma from Crete depicting Talos, an ancient mythical automaton with artificial intelligence.

To members and friends of the Providence Committee on Foreign Relations (thepcfr.org; pcfremail@gmail.com):

PLEASE LET US KNOW NO LATER THAN MONDAY, SEPT. 30, IF YOU PLAN TO ATTEND THE OCT. 2 DINNER, WITH INTERNATIONAL JOURNALIST JONATHAN GAGE AS THE MAIN SPEAKER.

HE’LL TALK ABOUT SUCH THINGS AS THE THREAT TO FACT-BASED JOURNALISM POSED BY ARTIFICIAL INTELLIGENCE BEING USED TO DISTORT VIDEO.

THE HOPE CLUB NEEDS TO KNOW TWO DAYS BEFORE AN EVENT.

YOU CAN REGISTER FOR THE DINNER ONLINE AT

Thepcfr.org

or send a message on your plans to:

pcfremail@gmail.com

The PCFR meets at the Hope Club, 6 Benevolent St., Providence.

PCFR evenings start at 6 with drinks; dinner begins by about 6:40, and the talk (usually 35-40 minutes) starts by about dessert, followed by Q&A. The evening ends no later than 9. People can repair to the bar after that if they wish.


Llewellyn King: Perils and promise of artificial intelligence

Talos, an ancient mythical automaton with artificial intelligence.


Myself when young did eagerly frequent
Doctor and Saint, and heard great Argument
About it and about; but evermore
Came out by the same Door as in I went.

I feel close to Omar Khayyam, the great 11th-Century Persian poet and mathematician, not just because of his fondness for a drink, but also because of his search for meaning, which took him in "The Rubaiyat" to “Doctor and Saint” and then out "by the same Door as in I went.”

I’ve been looking at artificial intelligence (AI) and I feel, like Omar, that I’m coming away from talking with leaders in the field as unenlightened as when I started this quest.

The question is simple: What will it do to us, our jobs and our freedom?

The answer isn’t clear: Even those who are enthusiastic about the progress they’re making with AI are privately alarmed about its consequences. And they worry that some corporations will push it too hard and too fast.

The first stages are already active, although surreptitiously. The financial technology (fintech) world has been quick to embrace AI. Up for a bank loan? Chances are you’ll be approved or turned down by a form of AI which checked your employment, credit score and some other criteria (unknown to you) and weighed your ability to repay. Some anomaly, maybe a police report, may have come into play. You’ll be told the ostensible reason for a rejection, but you may never know the real one.
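As an illustration of how such automated screening can work, here is a hypothetical rule-based loan screen sketched in Python. The thresholds, criteria and function name are all invented for this sketch; no real lender's model is being described, and production systems typically use statistical models trained on repayment data rather than hand-written rules. The point is how a handful of inputs are reduced to a decision whose real basis the applicant never fully sees.

```python
# All thresholds, criteria and names here are invented for illustration;
# real lenders' models are proprietary and typically statistical.
def screen_application(credit_score: int, annual_income: float,
                       years_employed: float, amount_requested: float):
    """Return (approved, ostensible_reason) for a loan application."""
    if credit_score < 620:
        return False, "credit history"
    if years_employed < 1:
        return False, "employment history"
    if amount_requested > 0.4 * annual_income:
        return False, "debt-to-income ratio"
    return True, "approved"

# The applicant sees only the stated reason, not the rule that fired.
print(screen_application(700, 80_000, 5, 20_000))  # (True, 'approved')
print(screen_application(700, 80_000, 5, 50_000))  # (False, 'debt-to-income ratio')
```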

The two overriding concerns: what AI will do to our jobs and our privacy.

If jobs are the problem, governments can help by insisting that some work must be done by human beings: reserved occupations. Not a pretty concept but a possible one.

When it comes to privacy, governments are likely to be the problem. With surreptitious bio-identification surveillance, the government could know every move you make -- your friends, your business associates, your lovers, your comings and goings -- and then make judgments about your fitness for everything from work to liberty. No sin shall go unrecorded, as it were.

This one isn’t just a future worry, it’s nearly here. The Chinese, I’m told, have run an experiment on citizen fitness using AI.

Historically, at least in literature, we’ve been acculturated to the idea of man-made monsters out of control, whether it was Mary Shelley’s Frankenstein or Robert Louis Stevenson’s Dr. Jekyll and Mr. Hyde. But the mythology probably has been around since man thought he could control life.

On jobs, the future is unclear. Until now, automation has added jobs. British weaver Ned Ludd and his followers, who smashed up the looms of the Industrial Revolution, got it wrong. Nowadays cars are largely made by machines, as are many other things, and we have near full employment. Such fields as health care have expanded while adding technology at a fast pace. AI opens new vistas for treatment. Notoriously difficult-to-diagnose diseases, like Myalgic Encephalomyelitis, might be easily identified and therapies suggested.

But think of a farm being run by AI. It knows how to run the tractor and plow, plant and harvest. It can assay the acidity of the soil and apply a corrective. If it can do all that, and maybe even decide what crops will sell each year, what will it do to other employment?

In the future AI will be taught sensitivity, even compassion, with the result that in many circumstances, like customer assistance, we may have no idea whether we’re dealing with a human or AI aping one of us. It could duplicate much human endeavor, except joining the unemployment line.

I’ve visited MIT, Harvard and Brown, and I’ve just attended a conference at NASA, where I heard some of the leading AI developers and critics talk about their expectations or fears. A few are borne along by enthusiasm, some are scared, and some don’t know, but most feel -- as I do, after my AI tour -- that the disruption that AI will bring will be extreme. Not all at once, but over time.

Like Omar, I came away not knowing much more than when I began my quest. "The Rubaiyat" (which means quatrains) is a paean to drink. At least no one suggested machines will be taking to the bottle, but I may.

Llewellyn King is executive producer and host of White House Chronicle, on PBS. His email is llewellynking1@gmail.com. He’s based in Rhode Island and Washington, D.C.