Friend or foe — dawn of the AI revolution

As Big Tech races to dominate AI, technologists say the benefits are real but the loss of trust in what we see and read could be the price. REUTERS PHOTO

A research scientist who works at the intersection of reasoning and machine learning likens the boom in artificial intelligence to the Industrial Revolution.

“I wasn’t around then,” says Hector Palacios, “but back in the day people would say, ‘I’m going to buy an engine.’” After a while, nobody was thinking about engines, Palacios says. Engines powered machinery “to become many specialized things in many contexts.”

AI has been evolving for decades, but the recent unveiling of chatbots that write scientific papers, legal briefs, and news stories has many fearing that a robot will someday replace them. Still, others point to AI’s potential to revolutionize fields like education, journalism, and science.

FTC chair Lina Khan recently summed up the contradictions inherent in this burgeoning technology when she said that AI can deliver critical innovation, but also “turbocharge fraud and automate discrimination.”

Vice-President Kamala Harris waded into the debate when she met last week with chief executives of four Big Tech firms to discuss artificial intelligence.

In a statement after the three-hour meeting, the administration said there had been “frank and constructive discussion” about companies being more open about their products and the importance of keeping those products out of the hands of bad actors.

Ramping up misinformation

“It used to be the case that we saw some text quickly, and said, ‘Oh, yes, that was fully written by a human’. But this is not true anymore,” Palacios says.

For example, the Republican National Committee released a 30-second spot last week in response to President Biden’s announcement that he was running for reelection. The ad showed fake visuals of China invading Taiwan and 80,000 immigrants overwhelming the Southern border, all of it interspersed with disturbing footage of civic unrest.

A barely noticeable disclaimer in the upper left-hand corner of the screen read: “Built entirely with AI imagery.”

“Artificial intelligence is software,” Palacios told reporters during a news briefing last week, noting that the technology is built on math capable of complex computing, which algorithms take a step further with large language models (LLMs).

Generative AI uses computers and LLMs to create new content from large sets of data. LLMs are designed specifically to generate text-based content.
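
To make that description concrete, here is a minimal sketch of text generation in Python, using the open-source Hugging Face transformers library and the small, publicly available GPT-2 model; the library, model, and prompt are illustrative choices, not tools cited by the researchers in this story.

# A minimal sketch of generative text: a language model continues a prompt
# by predicting likely next words, one token at a time.
# Assumes the open-source "transformers" library is installed (pip install transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small public model

result = generator(
    "Artificial intelligence will change newsrooms by",
    max_new_tokens=40,        # how much new text to produce
    num_return_sequences=1,   # ask for a single completion
)

print(result[0]["generated_text"])

Larger commercial models follow the same basic recipe, trained on far more data; what readers notice is the added fluency and breadth.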

Dr. Christopher Dede is a senior research fellow at Harvard’s Graduate School of Education and Associate Director of Research for the National AI Institute for Adult Learning and Online Education. When he reads a student essay that is just too good to believe, Dede doesn’t worry about it too much. Plagiarism has been around a lot longer than LLMs.

“So, at the start of the spring semester in my online course at Harvard, I spent 5 minutes talking to students about generative AI and I said, ‘You can use generative AI, and if you’re smart about it, we’re not going to be able to tell.’”

But when they go out for a job interview and are asked to produce a marketing plan in half an hour, he warns them, “If your marketing plan isn’t a lot better than what comes out of the AI, you’re not going to get hired.”

Seeing AI as a partner

The hit TV series Star Trek: The Next Generation offered a hopeful view of artificial intelligence as a valued partner for humanity.

Dede tells his students to think about AI as a partner, not as a substitute, an insight he came to as a young graduate student who loved Star Trek.

“In Star Trek: The Next Generation, you have Captain Picard, the wise human starship captain, and then you have Data, who looks like a person but is actually an android, a machine.”

“Data is capable of absorbing enormous amounts of data in a matter of seconds and doing what’s called reckoning, which is calculative prediction. Captain Picard has judgment, applied wisdom, and so he’s the one in charge of the starship, and he uses Data’s calculative predictions to help him make good decisions.”

Dede says Data augmented Picard’s human experience and the two partners did things together neither could do by themselves.

“To illustrate this in a less fantastic way, there are cancer specialists, oncologists now who have AI partners. The AI can do something that no cancer specialist can do. It can scan every morning 1,500 medical journals online and see if there’s something new about the treatment of a particular patient. It can scan medical records worldwide of similar patients undergoing a variety of treatments and get advice about what’s working and what’s not working,” he says.

But you would never want the AI making the decisions because the doctor knows things the AI doesn’t know. The doctor knows about pain and death. The doctor understands that cultures have different points of view about death, its effects on family as well as an individual, and so on.

“AI does not understand any of those things. It’s an alien kind of intelligence,” Dede says. And sometimes it really blows it.

Tracking AI bloopers

Sean McGregor earned his PhD in machine learning. He was the lead technical consultant for the IBM Watson AI XPRIZE, founded the Responsible AI Collaborative, and is developing an AI Incident Database to index AI performance in the real world.

Basically, McGregor scours the world for AI bloopers.

One woman in China was publicly shamed for jaywalking when AI picked up her image on the side of a bus. In 2021, a man in Bath, England, was cited for driving in a bus lane after AI captured a photo of a woman wearing a shirt that said “KNI9TER.” It looked a lot like the man’s license plate.

“What the database does is it collects each of these incidents that happen in the world and puts a number to them.” The goal, says McGregor, is to “make the whole AI industry safer.”
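
As a rough illustration of what indexing an incident and “putting a number” to it might look like, the sketch below models a single entry as a small Python data structure; the field names are hypothetical and do not reflect the actual AI Incident Database schema.

# A hypothetical sketch of one entry in an incident index; field names are
# illustrative, not the real AI Incident Database schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIIncident:
    incident_id: int          # the number assigned to the incident
    date: str                 # when the harm occurred
    system: str               # the AI system involved
    description: str          # what went wrong
    reports: List[str] = field(default_factory=list)  # links to news coverage

bus_lane_case = AIIncident(
    incident_id=101,
    date="2021",
    system="automated bus-lane enforcement camera",
    description="Camera misread text on a pedestrian's shirt as a license plate.",
)

print(f"Incident #{bus_lane_case.incident_id}: {bus_lane_case.description}")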

In the meantime, Palacios says the existential tenor of discussions around AI’s potential applications and impacts misses a finer point.

“Many things are going to happen with the AI revolution and probably the biggest surprises are going to come in the small details.”
