How businesses can win over AI skeptics

In a short time, AI has gone from a technology used by the tech elite to one that most people use, or at least encounter, daily. It is being deployed in health apps, customer service interactions, social media feeds, and marketing emails, to name just a few examples. While companies are building their own AI and figuring out how the technology fits into their businesses, they are also facing the challenge of transparently conveying how they use it.

“Many are subjected to AI, often without an explicit decision to use these systems,” said Julia Stoyanovich, professor and director of the Center for Responsible AI at New York University. “We want to give the ability to decide, to help debug, to help understand the benefits, and look out for risks and harms, back to people.”

According to a KPMG survey released this year, 42% of people believe generative AI is already having a “significant impact” on their personal lives, while 60% expect this within the next two years. Despite AI’s outsize impact, only 10% of Americans report being “more excited than concerned” about AI, according to a study last year from the Pew Research Center. As policymakers around the world examine potential regulations for AI, some companies are proactively offering insight into steps they’re taking to innovate responsibly.

At Intuit, AI is integrated across the company's product line, including generative AI assistants in TurboTax, QuickBooks, Credit Karma, and a suite of tools on the company's email marketing platform, Mailchimp. Millions of models are driving 65 billion machine learning predictions every day and conducting 810 million AI-powered interactions annually, according to the company.

“Five years ago we declared our strategy as a company was to build an AI-driven expert platform, which combines AI and expertise. We now have millions of live, AI-driven models in the offerings today as a result of that investment,” said Rania Succar, CEO of Intuit Mailchimp. “When generative AI came along, we were ready to go really big because of the investment we’ve made and because of the potential we saw for our end customers.”

With so many data points from small businesses demonstrating what works and what doesn't, the company saw an opportunity to bring generative AI to the masses, not just the big players who can afford to build their own AI models. Intuit built its own generative AI operating system that keeps the data it trains on private, Succar said. Intuit Mailchimp customers can then use the AI to generate marketing emails and text in their brand's voice, and to set up automated emails that welcome new customers or remind shoppers when they've left an item in their online cart.

In the past few months, adoption of Intuit Mailchimp's generative AI text generation has grown by more than 70%, Succar said. Despite the growth, the company is being careful about how the product is scaled.

An inherent problem with AI models is that they are never perfect. AI can hallucinate false information, generate offensive content, and exacerbate biases present in a model's training data. To keep this from happening, Succar said, Intuit Mailchimp is being deliberate in selecting which industries have access to its generative AI tools. (She declined to say which industries Intuit Mailchimp currently does not support with generative AI.)

Perhaps the real differentiator, though, is that Intuit still believes there's a place for humans in a world where AI is rapidly becoming capable of taking over everything from the mundane to the creative. Every piece of generated content is reviewed by the user before it is sent out to clients. Escalations, such as poor or inaccurate answers, can be reported to human content reviewers. Just as people can connect with a human expert on TurboTax, Succar said, there's a place for human experts in marketing.

“Human experts will always be able to add the next level of expertise that AI doesn’t and create confidence for the small business,” Succar noted.

Other technology companies are taking steps to help people understand how their AI works and distinguish between what's real and what isn't. TikTok rolled out a tool for creators to label their AI-generated content and said last year it is also testing ways to do so automatically. Meta announced it will label AI-generated images on Facebook, Instagram, and Threads. Microsoft explained in a blog post the safeguards it's put in place for its generative AI products Copilot and Microsoft Designer. And last year, Google revised its search algorithm to consider high-quality AI-generated content.

Understanding what's real and what isn't is only one part of the equation. The proliferation of deepfakes, most recently explicit images made in the likeness of Taylor Swift, has highlighted a fundamental problem with AI. Dan Purcell, cofounder and CEO of Ceartas, a company that uses AI models to combat online piracy, said he's been able to get an increasing number of AI-generated images removed for his clients, who range from celebrities and content creators to C-suite executives.

“The way our technology works is we build a model of an infringement. We don’t need access to the original content. We don’t need to fingerprint clips. We just need the name of the content, because that is how people find it online,” he said. “When we look at individual content creators and businesses, we slightly change ingredients to be more specific to that brand or individual, and then apply the learning to a broad spectrum.”

As the past two years have demonstrated, AI is only going to keep getting better. (Look no further than the reaction to Sora, OpenAI's text-to-video platform.) While there may no longer be an option to avoid AI, Stoyanovich said more work needs to be done to bring together industry players, academics, policymakers, and users to reach a consensus on an actionable AI governance framework. In the meantime, as people start to notice more examples of AI in their day-to-day lives, she offered this advice:

“What is important is to keep a healthy dose of skepticism about the capabilities of this and other kinds of technology,” she said. “If it sounds too good to be true and, at the same time, if we don’t know what data the model is based on and how it was validated, then it probably doesn’t work as advertised.”

This story was originally featured on Fortune.com