Transforming Digital Business: A Guide to Preparing for ChatGPT’s Impact

By Dr. Atif Farid Mohammad, Head of the AI R&D Center of Excellence, Apexon.


Generative AI has been gaining traction for years, but the arrival of programs like DALL-E and ChatGPT has catapulted it into the limelight in the past 12 months. Despite the inevitable hype, these advanced AI systems, which employ massive data sets to create content, are winning adopters because of their immediate accessibility. Domain experts across a wide range of fields have been quick to embrace them and explore their many applications.

Unlike earlier forms of AI that were accessible only to data scientists, the new wave of generative AI systems can be used by anyone, regardless of technical expertise. From sales and marketing to web design, legal, IT, and HR, generative AI has the potential to revolutionize many industries. Professionals in these fields are now actively experimenting to find innovative ways to integrate it into their workflows.

Already, we are seeing how sales and marketing professionals are using generative AI to create more engaging and personalized content for their audiences. Web designers have been leveraging it to generate custom designs that better reflect their clients' needs. Legal experts have been exploring how generative AI can be used to automate routine tasks and streamline their workflows, while HR professionals have been experimenting with it to improve candidate screening and onboarding processes.

And these are just the early days.

AI-powered Digital Engineering: Accelerating Faster Than Businesses Can Keep Up?

Generative AI is poised to have a transformative impact on virtually every industry in the world, and the field of digital engineering is no exception. While AI has been used in software engineering for some time now, recent advancements in large language models like ChatGPT have taken it to a whole new level of sophistication and autonomy. These advanced systems are capable of performing a wide range of tasks, including code generation, bug detection, natural language processing, documentation and testing. ChatGPT is arguably the most prominent of these generative AI models, but many other coding-specific alternatives exist.
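To make the code-generation use case concrete, the sketch below builds the kind of request payload a chat-based LLM service consumes. It assumes the shape of the OpenAI chat completions HTTP API (`POST /v1/chat/completions`); the model name, prompts, and helper function are illustrative placeholders, not a prescribed integration.

```python
import json

def build_codegen_request(task_description: str,
                          model: str = "gpt-3.5-turbo") -> dict:
    """Build a JSON payload asking a chat-based LLM to generate code.

    This is a hypothetical helper for illustration; real integrations
    would send this payload to the provider's completions endpoint.
    """
    return {
        "model": model,
        "messages": [
            # A system message constrains the assistant's behaviour.
            {"role": "system",
             "content": "You are a coding assistant. Reply with code only."},
            # The user message carries the actual engineering task.
            {"role": "user",
             "content": f"Write a Python function that {task_description}."},
        ],
        # A low temperature biases the model toward deterministic output,
        # which is usually preferable for code generation.
        "temperature": 0.2,
    }

payload = build_codegen_request("parses an ISO-8601 date string")
print(json.dumps(payload, indent=2))
```

The same payload pattern extends naturally to the other tasks mentioned above (bug detection, documentation, test generation) simply by changing the system and user messages.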

By automating and streamlining many of the processes involved in software development, implementation, and infrastructure, organizations can greatly accelerate their digital evolution and continue to meet the ever-increasing demand for updates, improvements, and new digital products. These time savings and productivity gains are particularly valuable in the age of speed.

Although we are still at an early stage of the AI learning curve, data sets and models are advancing rapidly. With each new generation of the GPT (generative pre-trained transformer) model used to train ChatGPT, we’ve seen giant technical leaps. However, as organizations race to harness AI’s capabilities, they also need to invest in preparing for its consequences, which will be significant and wide-ranging, affecting processes, policies, and people alike.

Developing a Trust Framework

The rise of ML algorithms capable of generating new content presents ethical, privacy, and ownership challenges. While regulators work to keep up, organizations are left with an ideal opportunity to take the lead in ensuring their AI models are trustworthy and transparent.

Establishing a framework of policies that inspire trust is going to be crucial to success. Gartner recommends an approach called AI TRiSM (AI trust, risk, and security management), predicting that organizations that implement it by 2026 will see a 50% improvement in AI adoption, achievement of business goals, and user acceptance.

Generative and ‘ethical’ AI face one common issue, centred on how these algorithms make assumptions. Because AI algorithms’ decision trees are extremely complex, it can be very difficult to know whether their datasets contain biases that influence the AI’s final decisions. Another problem is developing generative AI algorithms that work together to deliver human-like conclusions: we cannot be sure whether these algorithms weigh the ethical considerations a human would when passing a verdict or judgment, or taking an action.

Managing Content Ownership and Data Privacy

Ownership and intellectual property are other contested issues in the world of AI. High-profile examples include deepfakes of famous faces or even singer-songwriter Nick Cave’s reaction when ChatGPT generated a song in his style. As generative AI adoption rises, establishing ownership and usage rights will be critical from an early stage. While the basic premise holds that if a business owns the AI model, its content would belong to the business, the situation becomes more complex if the ownership of the data used to train the model is contested.

Additional major concerns involve data privacy and security. We’re already familiar with the ongoing battle to keep cyber threats at bay. AI-powered security practices will need to keep up with the cybercriminals or risk exposing the organisation to privacy breaches. In a security incident, there must be clear lines of accountability, including understanding how or why the AI’s insights or recommendations failed.

A Business-Wide Strategy to Drive AI Productivity

The sheer versatility of generative AI has already profoundly affected public awareness and engagement with the technology. With continued innovation and development, these systems have the potential to revolutionize every industry. Digital engineers are experiencing this first-hand: significantly reducing the costs of software production, expediting release dates, driving innovation and increasing productivity.

As the potential for competitive advantage becomes clear, organisations are racing to get to grips with generative AI’s applications. There is no doubt that now is an opportune time to experiment with AI; however, organisations must also lay the groundwork for its proper use and wider adoption.

This first involves preparing the foundations within their existing digital infrastructure. AI model creation, refinement, and integration require skills that are complex and currently in short supply. The second requirement is for businesses to create a robust governance framework comprising tools, policies, and protocols that govern ethical, legal, and trust-related matters. Lastly, alongside technical considerations and a trust framework, people will play a pivotal role in successful AI implementations. Employee engagement will be a vital part of the puzzle in addressing issues around user adoption and driving acceptance.
