If a week is traditionally a long time in politics, it is a yawning chasm when it comes to AI. The pace of innovation from the leading providers is one thing; the ferocity of competition as the field hots up is quite another. But are the ethical implications of AI technology being left behind by this blistering pace?
Anthropic, creators of Claude, released Claude 3 this week, claiming it sets a ‘new standard for intelligence’ and surges ahead of competitors such as ChatGPT and Google’s Gemini. The company also says it has achieved ‘near-human’ proficiency in various tasks. Indeed, as Anthropic prompt engineer Alex Albert pointed out, during the testing phase of Claude 3 Opus, the most capable variant of the large language model (LLM), the model exhibited signs of awareness that it was being evaluated.
Moving to text-to-image, Stability AI announced an early preview of Stable Diffusion 3 at the end of February, just days after OpenAI unveiled Sora, a brand-new AI model capable of generating near-photorealistic, high-definition videos from simple text prompts.
While progress marches on, perfection remains difficult to attain. Google’s Gemini model was criticised for producing historically inaccurate images which, as this publication put it, ‘reignited concerns about bias in AI systems.’
Getting this balance right is a key priority across the industry. Google responded to the Gemini concerns by pausing the image generation of people for the time being. In a statement, the company said that Gemini’s AI image generation ‘does generate a wide range of people… and that’s generally a good thing because people around the world use it. But it’s missing the mark here.’ Stability AI, in previewing Stable Diffusion 3, noted that it believes in safe, responsible AI practices: ‘Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment,’ as a statement put it. OpenAI is adopting a similar approach with Sora; in January, the company announced an initiative to promote responsible AI usage among families and educators.
That is the vendor perspective – but how are major organisations tackling this issue? Consider how the BBC is looking to utilise generative AI while ensuring its values come first. In October, Rhodri Talfan Davies, the BBC’s director of nations, set out a three-pronged strategy: always acting in the best interests of the public; always prioritising talent and creativity; and being open and transparent.
Last week, more meat was put on these bones when the BBC outlined a series of pilots based on these principles. One example is reformatting existing content in ways that widen its appeal, such as rapidly converting a live sports radio commentary into text – a pipeline sketched below. In addition, the BBC’s editorial guidance on AI has been updated to note that ‘all AI usage has active human oversight.’
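The BBC has not detailed how these pilots are built, but as a rough illustration of the commentary-to-text idea, here is a minimal sketch using the open-source Whisper speech-recognition model. The audio file name and model size are placeholder assumptions, not anything the BBC has confirmed.

```python
# Minimal speech-to-text sketch: turn a commentary recording into text.
# Requires: pip install openai-whisper (and ffmpeg available on the system).
import whisper

# "base" is a small general-purpose model; a production system would likely
# use a larger model and a streaming setup for live broadcasts.
model = whisper.load_model("base")

# "commentary.mp3" is a hypothetical file name for a recorded segment.
result = model.transcribe("commentary.mp3")

print(result["text"])  # the recovered transcript
```

A live deployment would stream audio in short chunks rather than transcribing a finished file and, per the BBC’s updated guidance, keep a human editor reviewing the output.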
It is also worth noting that the BBC does not believe its data should be scraped without permission to train other generative AI models, and it has therefore blocked crawlers from the likes of OpenAI and Common Crawl. This will be another point on which stakeholders need to converge going forward.
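In practice, such a block is typically expressed through a site’s robots.txt file, using the crawlers’ published user-agent tokens (GPTBot for OpenAI, CCBot for Common Crawl). A minimal sketch might look like this:

```
# Ask OpenAI's and Common Crawl's crawlers not to index any page
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is a voluntary convention rather than an enforcement mechanism, which underlines why this remains a point on which stakeholders must agree.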
Another major company which takes its responsibilities for ethical AI seriously is Bosch. The engineering and technology firm has five guidelines in its AI code of ethics. The first is that all Bosch AI products should reflect the ‘invented for life’ ethos, which combines a quest for innovation with a sense of social responsibility. The second echoes the BBC: AI decisions that affect people should not be made without a human arbiter. The remaining three principles cover safe, robust, and explainable AI products; trust; and observing legal requirements while orienting to ethical principles.
When the guidelines were first announced, the company hoped its AI code of ethics would contribute to public debate around artificial intelligence. ‘AI will change every aspect of our lives,’ said Volkmar Denner, Bosch’s CEO at the time. ‘For this reason, such a debate is vital.’
It is in this spirit that the free virtual AI World Solutions Summit, brought to you by TechForge Media, takes place on March 13. Sudhir Tiku, VP, Singapore Asia Pacific region at Bosch, is a keynote speaker whose session at 1245 GMT will explore the intricacies of safely scaling AI and navigating the ethical considerations, responsibilities, and governance surrounding its implementation. Another session, at 1445 GMT, explores the longer-term impact on society and how business culture and mindset can be shifted to foster greater trust in AI.
Book your free pass to access the live virtual sessions today.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.