ALIBABA Group’s research institute DAMO Academy has rolled out SeaLLMs, a family of artificial intelligence (AI) large language models (LLMs) that includes support for Tagalog and other Southeast Asian languages.

“The models represent a technological leap forward in terms of inclusivity, offering optimized support for local languages in the region including Tagalog, Vietnamese, Indonesian, Thai, Malay, Khmer, Lao, and Burmese,” Alibaba said in a statement late last week.

“The conversational models, SeaLLM-chat, exhibit great adaptability to the unique cultural fabric of each market, aligning with local customs, styles, and legal frameworks, and emerging as an invaluable chatbot assistant for businesses engaging with SEA markets,” it added.

LLMs are a type of generative AI meant to help produce and predict text content.

Alibaba said SeaLLMs come in 13-billion-parameter and 7-billion-parameter versions and are meant to cater to the “linguistic diversity” of Southeast Asia. The models are now open-sourced on the AI community platform Hugging Face and can be used for both research and commercial purposes.
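Because the weights are published on Hugging Face, the models can in principle be loaded with the widely used transformers library. The snippet below is a minimal sketch only; the repository name shown ("SeaLLMs/SeaLLM-7B-Chat") and the Tagalog prompt are assumptions for illustration and should be checked against the SeaLLMs organization page on Hugging Face.

# Minimal sketch: loading an assumed SeaLLM chat checkpoint from Hugging Face
# and generating a short reply to a Tagalog question. The repository ID is an
# assumption, not confirmed by the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SeaLLMs/SeaLLM-7B-Chat"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Ano ang kabisera ng Pilipinas?"  # "What is the capital of the Philippines?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))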

“In our ongoing effort to bridge the technological divide, we are thrilled to introduce SeaLLMs, a series of AI models that not only understand local languages but also embrace the cultural richness of Southeast Asia,” Lidong Bing, director of the Language Technology Lab at Alibaba DAMO Academy, said. “This innovation is set to hasten the democratization of AI, empowering communities historically underrepresented in the digital realm.”

“Alibaba’s strides in creating a multi-lingual LLM are impressive. This initiative has the potential to unlock new opportunities for millions who speak languages beyond English and Chinese. Alibaba’s efforts in championing inclusive technology have now reached a milestone with SeaLLMs’ launch,” said Luu Anh Tuan, assistant professor at the School of Computer Science and Engineering at Nanyang Technological University, a partner of Alibaba in multilingual AI research.

The SeaLLM-base models underwent pre-training on a dataset that includes Southeast Asian languages to ensure an understanding of local nuances and native communication contexts, Alibaba said.

“This foundational work lays the groundwork for chat models, SeaLLM-chat models, which benefit from advanced fine-tuning techniques and a custom-built multilingual dataset. As a result, chatbot assistants based on these models can not only comprehend but respect and accurately reflect the cultural context of these languages in the region, such as social norms and customs, stylistic preferences, and legal considerations,” it added.

“A notable technical advantage of SeaLLMs is their efficiency, particularly with non-Latin languages. They can interpret and process up to 9 times longer text (or fewer tokens for the same length of text) than other models like ChatGPT for non-Latin languages such as Burmese, Khmer, Lao, and Thai. That translates into more complex task execution capabilities, reduced operational and computational costs, and a lower environmental footprint,” Alibaba said. — BVR
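The efficiency claim comes down to tokenization: a model whose vocabulary covers non-Latin scripts well splits the same sentence into far fewer tokens. A rough way to check this is to count tokens for the same Thai sentence with an assumed SeaLLM tokenizer and with the cl100k_base encoding used by ChatGPT-era models. The repository ID and sample sentence below are illustrative assumptions, and actual ratios will vary by text and model version.

# Rough sketch: comparing token counts for one Thai sentence. The SeaLLM
# repo ID is an assumption; the cl100k_base encoding is the one used by
# GPT-3.5/GPT-4-class models via the tiktoken library.
from transformers import AutoTokenizer
import tiktoken

text = "สวัสดีครับ วันนี้อากาศดีมาก"  # a short Thai greeting about the weather

sea_tok = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-Chat")  # assumed repo name
gpt_enc = tiktoken.get_encoding("cl100k_base")

print("SeaLLM tokens:", len(sea_tok.encode(text)))
print("cl100k_base tokens:", len(gpt_enc.encode(text)))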
