Mark Lusky | 04.04.23
If you haven’t heard about ChatGPT yet, you will before long. Essentially, it is artificial intelligence that enables the creation of content, graphics, and much more. Alongside ChatGPT itself and nascent rival offerings from organizations such as Microsoft and Google, apps are cropping up to detect its use – one of them developed by ChatGPT’s own creator.
Global consulting firm McKinsey recently weighed in on the discussion: “Products like ChatGPT and GitHub Copilot, as well as the underlying AI models that power such systems…are taking technology into realms once thought to be reserved for humans. With generative AI, computers can now arguably exhibit creativity. They can produce original content in response to queries, drawing from data they’ve ingested and interactions with users. They can develop blogs, sketch package designs, write computer code, or even theorize on the reason for a production error…Generative AI promises to make 2023 one of the most exciting years yet for AI. But as with every new technology, business leaders must proceed with eyes wide open, because the technology today presents many ethical and practical challenges.”
Proceed prudently and cautiously in the labeling realm. If not properly used or monitored, ChatGPT and other AI tools can run amok, impacting everything from consumer protection on label disclosures and claims to effective, consumer-friendly branding.
As these “shiny new toys” roll out, they are stirring both excitement and dread. I framed this discussion in a recent blog post: “Support vs. Supplant role…a chief cause of AI malaise is the flawed assertion that it fundamentally replaces human involvement and interaction. AI can play a great, complementary support role in tandem with human beings. But ‘shiny new tech toys’ do not eliminate the need for human due diligence, insight, capabilities, and experience…As a writer, I can see using ChatGPT as a support tool – much like many others out there. When I’m looking for ideas and insights, I often scour the internet as a brainstorming tool. Just as I don’t accept claims or assertions of fact without verifying the source and/or confirming information with multiple sources, I would verify ChatGPT-generated information…In some respects, I see this technology in the same way as money. According to showman P.T. Barnum, ‘Money is, in some respects, like fire. It is a very excellent servant, but a terrible master.’”
Unlike some shiny new toys, ChatGPT-type AI technology is here to stay. Notes a digitaltrends.com article: “Google has been attempting what ChatGPT can do now for decades, and the chatbot reportedly set off a ‘code red’ within Google. In response, the company announced it would slowly roll out its rival Google Bard AI, which will be integrated into search over time. We expect more of these ChatGPT alternatives to pop up in the coming months, as we’ve already seen with services like Jasper AI…Microsoft announced it’s bringing ChatGPT into Bing, as well as its full Edge browser.”
The article continues, “Generative AI is also pushing technology into a realm thought to be unique to the human mind – creativity. The technology leverages its inputs (the data it has ingested and a user prompt) and experiences (interactions with users that help it ‘learn’ new information and what’s correct/incorrect) to generate entirely new content. While dinner table debates will rage for the foreseeable future on whether this truly equates to creativity, most would likely agree that these tools stand to unleash more creativity into the world by prompting humans with starter ideas.”
As these AI offerings proliferate, here are top tips for “staying safe” in the labeling realm when trying out the technology:
- Feel free to experiment to aid in brainstorming label content, disclosures, claims, graphics, aligned QR code or Augmented Reality development, and the like.
- Use AI as a support tool only. Don’t let it replace human creativity, input, due diligence, oversight, and common sense. (Remember, AI-detection apps are sprouting up everywhere. Think about whether you want users, viewers, and competitors to be able to detect it.)
- Research and review the “goods,” “bads,” and “uglies” before proceeding. For example, with so much emphasis on transparency and authenticity in product development and presentation, do you want a competitor to “call you out” for something artificial versus real? I think about this a lot when it comes to thought leadership articles that are supposed to reflect the true views, voice, and insights of the author. If a thought leadership piece turns out to be nothing more than a ChatGPT-generated piece, what does that do to transparency and authenticity?
- Think of AI as another tool in the arsenal, not the arsenal itself. As more and more customer service-related functions and practices are automated, many mistakes are occurring – in large part because of a lack of human involvement and due diligence.
Mark Lusky is a marketing communications professional who has worked with Lightning Labels, an all-digital custom label printer in Denver, CO, USA, since 2008. Find Lightning Labels on Facebook for special offers and label printing news.