EDGE 2024: Exploring the real world possibilities of AI

News
30 Jul 2024 | 5 mins
Government | Industry

The waves of AI usage show the need for organisations to proactively support their workforce

Credit: Stela Solar (National AI Centre)

There is a real and growing vibrancy to the local artificial intelligence industry, despite the common perception that Australia is lagging in AI adoption, noted speakers at EDGE 2024.

According to Stela Solar, director at the National AI Centre, many individuals within organisations are exploring generative AI without formal acknowledgment from their employers.

“Data from reports by Microsoft, Deloitte and Slack indicates that Australian knowledge workers are increasingly utilising generative AI in their workplaces, with Microsoft reporting an 84 per cent usage rate,” she said.

According to Solar, “each of the reports finds Australian employees at the top of the generative AI adoption curve. This reflects the grassroots, bottom-up transformation that generative AI has catalysed within organisations and a strong desire for individuals to engage with generative AI to enhance their work processes.”

With these waves of AI usage, Solar said there was a need for organisations to proactively support their workforce through this transformation.

“LinkedIn research identifies nearly 500 tasks that generative AI can augment or automate, while about 800 tasks are exclusively human skills,” she said. “This presents an opportunity for organisations to assist employees in adapting to these changes, as 97 per cent of workers expect their companies to teach them about new AI tools.”

The growth in AI is particularly important to small and medium-sized enterprises (SMEs), which make up 99.8 per cent of Australian businesses and employ 70 per cent of the workforce, said Solar.

“Despite the predominance of SMEs, more than 80 per cent of the AI skills required are being outsourced, indicating the need for support in AI adoption,” she said.

Larger businesses can assist their SME partners in navigating the complexities of AI implementation to ensure mutual success in leveraging AI for growth and innovation.

However, Solar pointed out that not all AI is created equal, and that there are critical issues surrounding safe and responsible AI that need to be underscored.

“Relying on poorly trained and deployed AI can lead to misinformation, potentially harming customer relationships and non-compliance with regulations,” she said. “Thus, establishing safe and responsible AI systems is essential for delivering accurate, valuable and responsible outcomes for organisations and their stakeholders.”

However, keynote speaker Nancy Rademaker believes that many people fear AI, and that the technology needs to be broken down into three stages to consider its potential.

“The first stage is narrow AI, sometimes referred to as weak AI, where algorithms are designed for specific tasks,” she said at EDGE 2024. “Our brains are far more advanced than narrow AI because we can use a single brain to do everything—talk, paint, create music, socialise, run a business—while narrow AI is dedicated solely to specific tasks.”

According to Rademaker, the second phase is general AI, also known as strong AI, which can transfer knowledge from one domain to another, somewhat resembling human capabilities.

“However, we are still far from achieving this,” she said. “The third stage is super AI, which could be the best or worst thing for humanity. If we’re not even close to general AI, why worry about these doom scenarios related to super AI? It’s more crucial to discuss what AI can do today. For many, AI still feels magical, almost like hocus pocus, but for me, it’s more about its potential—the wonderful acronym for all the things AI can achieve.”

If partners are interested, there are steps that can be taken to build an AI program, said Emma Bromet, Mantel Group partner of data and AI.

Speaking on stage at EDGE 2024, Bromet said that historically, the starting point is the need for a data platform and strong data foundations.

“Nowadays, there are many off-the-shelf products available that can be used without vast amounts of data,” she said. “However, as a general rule, having a solid data foundation and good data practices is crucial. For example, if you’re building your own generative AI, your solution is only as good as your data. Poor data will yield poor results.”

Bromet said in the last 12 months, most of the projects Mantel Group has worked on have focused on experimentation, testing and proving concepts.

“Some customers must demonstrate that they’ve accomplished something, but their risk appetite and funding constraints limit moving models into production,” she said. “They test the tech and concepts but hesitate to move to production, often due to challenges in getting the necessary funding. Without a full machine-learning operations stack, it’s challenging to have a reliable and monitorable model in production.”

Ultimately, the key to success is selecting the right use case, noted Bromet. For example, an internal-facing use case allows for more risk and learning opportunities compared to a customer-facing one, which must be foolproof.

“From a funding perspective, if a use case promises a high ROI, it’s easier to secure funding from the CFO,” she said. “Conversely, if it’s aimed at improving customer experience without direct dollar value, convincing stakeholders to invest can be more challenging.”

Bromet also said it was “essential to clarify your objectives, identify the most significant case and discuss how to manage risks and secure the necessary budget.”