The Persian Gulf region has long had designs on leveraging technology to diversify its economies: All six Gulf states have drawn up ambitious plans to put digitization at the heart of their futures. But this year, much like its Western peers, the region has zeroed in on generative AI (GenAI) and large language models (LLMs) to help catapult progress.
ChatGPT creator OpenAI has partnered with Abu Dhabi government-owned G42 to push AI adoption in the United Arab Emirates (UAE) and wider Middle East markets. Such a move only adds weight to the UAE’s plans to become a global leader in AI by 2031.
Management consultancy Strategy& estimates that the overall economic impact of GenAI in the Persian Gulf could reach $23.5 billion annually by 2030.
Through solutions built by tech conglomerate G42, regional organizations will be able to simplify the process of integrating advanced AI capabilities into their existing enterprise landscapes, unlocking the potential of OpenAI’s models.
In July, G42 also announced it had developed the world’s largest supercomputer for AI training, in collaboration with US-headquartered AI firm Cerebras Systems.
This enormous computing power will help fuel "Jais," the world's first high-quality Arabic LLM. Launched in July by the Abu Dhabi government and built on a bed of Arabic- and English-language data, the open source, bilingual model is now available for use by the world's 400-million-plus Arabic speakers.
The ability of LLMs to be fine-tuned for specific applications represents a compelling advantage for the region. For example, Abu Dhabi’s Department of Health plans to use Jais for data analysis and patient interactions; Abu Dhabi National Oil Company is implementing Jais for tasks ranging from predictive maintenance to data analytics; and Etihad Airways is deploying it for customer service and logistical optimization.
The most advanced LLMs today, including GPT-4, which powers ChatGPT, and Google's PaLM, which underpins the Bard chatbot, can all understand and generate text in Arabic; however, their aptitude is diluted because they juggle up to 100 languages. Jais, having been trained specifically on Arabic, produces more accurate, contextualized results and, therefore, better business outcomes.
With Opportunities Come Risks
Jais' owners have said privacy and intellectual property considerations are integral to its development; the model's operational framework incorporates data-privacy guidelines and regulations and complies with global intellectual property laws.
However, the Gulf states, like the rest of the world, are still in the early stages of AI regulation. Omar Sultan Al Olama, UAE minister of state for artificial intelligence, digital economy, and remote work applications, recently called for a fresh approach to how countries govern AI, urging the global community to come to a governance consensus based on use cases.
Certainly, as regional LLM usage ramps up, laser focus needs to be placed on ensuring transparency and accountability within the models. This includes establishing best practices for auditing algorithmic decision-making aids designed for use in government services and policy domains.
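As a minimal illustration of what such an audit trail might look like in practice, a service could record every model-assisted decision with enough context to reconstruct it later. The field names and function below are hypothetical, not drawn from any Gulf government system or from Jais itself:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id: str, prompt: str, output: str, decision: str) -> dict:
    """Build one audit-log entry for a model-assisted decision.

    Hashing the prompt lets auditors later verify exactly what was asked
    without storing potentially sensitive text in the audit log itself.
    (Illustrative sketch; field names are assumptions, not a standard.)
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output": output,
        "decision": decision,
    }

# Example: logging a hypothetical permit-review decision
record = audit_record(
    model_id="arabic-llm-13b",  # placeholder identifier
    prompt="Summarize the permit application",
    output="Summary of application...",
    decision="approved",
)
print(json.dumps(record, indent=2))
```

Append-only storage of such records, with the hash linking each entry back to its input, is one simple way to make algorithmic decisions reviewable after the fact.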
What's more, data protection is paramount, as these systems hold vast troves of confidential user and business information. Lessons should be heeded from ChatGPT's massive data leak in March of this year, in which over 1 million users had private credentials exposed on the Web.
There has been an overall 724% increase in global attacks against open source repositories since 2019, many of them caused by a lack of vulnerability management rather than one-off bugs.
Attention also needs to be paid to wider security concerns, among them easily generated deepfake content, easier malware coding, the production of harmful or religiously offensive content, and exacerbated racial, gender, or economic bias. Ethical considerations are even more essential in diverse linguistic contexts.
Finally, there is the potential for threat actors to exploit GenAI to manipulate AI systems, causing them to make incorrect predictions or deny service to customers.
One way to mitigate risk is to identify critical services that require “human-in-the-loop” decision-making. Selection criteria could include high-risk systems or systems that require special accountability.
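The routing logic behind such a "human-in-the-loop" gate can be sketched in a few lines. The risk categories and confidence threshold below are illustrative assumptions, not values from any deployed system:

```python
# Categories deemed high-risk, requiring special accountability (illustrative)
HIGH_RISK_CATEGORIES = {"medical", "legal", "financial"}

def route_decision(category: str, model_confidence: float,
                   threshold: float = 0.9) -> str:
    """Route a model output either to automation or to a human reviewer.

    High-risk categories always get human review; so do low-confidence
    outputs in any category. (Sketch under assumed criteria.)
    """
    if category in HIGH_RISK_CATEGORIES or model_confidence < threshold:
        return "human_review"
    return "auto_approve"
```

A gate like this keeps the model in an advisory role for the services where an incorrect prediction, or a manipulated one, would do the most damage.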
According to Martin Borrett, a senior executive at IBM Security, mega LLMs such as Jais have more of a tendency toward hallucinations and inaccuracy because they are so broad in scope. Their strength in size is also their weakness in security. “Building for general purpose across multiple languages sounds great from a mindshare point of view but, in reality, it’s harder to govern and manage,” he says.
The region’s rapid journey toward AI holds much promise for exponential progress, but all stakeholders must stay vigilant and proactive in managing the associated cybersecurity risks.