Paul Roetzer doesn’t like artificial intelligence.
The founder of Cleveland’s Marketing AI Institute, which he launched in 2016, isn’t a proponent of the technology any more than he is a proponent of air. He just thinks you should breathe.
“It’s a fatal mistake to not embrace it,” says the author of Marketing Artificial Intelligence: AI, Marketing, and the Future of Business. “When we started the institute, we said our job was to make AI approachable and actionable. It’s evolved into helping people understand and apply AI in a responsible way.”
From July 26 to 28, Roetzer’s company hosts its Marketing AI Conference at the Huntington Convention Center. The conference, which offers workshops and networking, is in its fourth year, but the introduction of ChatGPT — a generative AI tool that can write anything from book reports to website code — has given it a newfound energy.
We talked to Roetzer about ethical use of the transformational technology.
On Taking a Human-Centric Approach to AI: AI should benefit humans. If we have a team of 50 and we reduce that by 30, that would be a non-human-centered approach. The human-centered approach is: We have 50 amazing people; how can we enrich their jobs? How can we improve their creativity and productivity and innovation? Let them have their nights and weekends back?
On AI’s Exponential Growth: The tech we see today is doubling in capability every six to 12 months. It’s literally impossible for the human mind to envision what the near future will be like. The whole future of work is going to change in the next one to three years. Some industries may be reluctant to change because it threatens their business models, like law firms with billable hours. But it’s going to be like life before and after the internet or smartphones.
On the First Thing a Business Should Do to Confront AI: If you’re in a business setting, it’s critical to create a generative AI policy for your employees. Are we or are we not allowed to write content or create video and images with ChatGPT? Right now, most employees don’t know.
On Apocalyptic AI Warnings: Yes, there could be an existential threat to humanity. But the probability is extremely low. Very few [researchers] can even vocalize what exactly would have to happen. What we do have at the moment is AI that can dramatically shift elections. We have the ability for almost anyone to create synthetic images, video and text. We have biases within systems that are making decisions around people’s futures in education, court systems and home lending. I would much prefer that AI researchers and scientists focus on the known near-term threats that AI presents.