SSNLP 2023

The 2023 Singapore Symposium on Natural Language Processing


We are pleased to announce that the Singapore Symposium on Natural Language Processing (SSNLP 2023) will be held on Monday, December 4 (full day). This annual Singapore-based pre-conference practice workshop gives local students, practitioners, and faculty working in Natural Language Processing a chance to rehearse their presentations and to network. It was successfully held in 2018, 2019, 2020, and 2022, becoming an increasingly popular and impactful event.

We are further excited that EMNLP 2023, a premier Computational Linguistics and NLP conference, will be held in Singapore on 6-10 December. With many renowned researchers attending, we look forward to hosting them at SSNLP and to inviting them to the School of Computing at the National University of Singapore (NUS) to engage with us.

The symposium will be held at the Shaw Foundation Alumni House Auditorium, NUS (11 Kent Ridge Dr, Singapore 119244, [Map], [Outdoor photo], [Indoor photo]). We are planning a hybrid (onsite & online) format for SSNLP to ensure maximal outreach. Please note that the venue has limited seating capacity; in the event of oversubscription, we may offer online attendance to registered participants. Please register early to secure a seat.

Registration closed

In-person registration is now closed, but you may still register for online attendance by dropping us an email. We will send you a Zoom link.

Virtual Registration closed

Latest news

Dec 19, 2023 — All posters and related materials can be downloaded from here.

Nov 30, 2023 — The program is confirmed; see you all there!

Oct 26, 2023 — Registration is open; please register now.

Sep 30, 2023 — The date is confirmed: December 4, 2023


We will host oral and poster sessions for paper presentations, along with invited keynote presentations.

Time Event
08:00 - 08:15 Welcome and Opening Remarks
introducer:   TAN Kian Lee    
08:15 - 09:15 Paper session 1
session chair:   Sun Shuo    
09:30 - 10:30 Keynote 1 and Keynote 2 (each: 20-minute talk + 10-minute Q&A)
speakers:   Preslav & Farah    
session chair:   Su Jian    
10:30 - 11:30 Paper session 2
session chair:   Tan Qingyu    
11:30 - 12:30 Keynote 3 and Keynote 4 (each: 20-minute talk + 10-minute Q&A)
speakers:   Vivian & Tanya    
session chair:   Nancy Chen    
12:30 - 13:00 Poster lightning talks
session chair:   Yanxia Qin   
13:00 - 14:00 Poster session 1 and Lunch
14:00 - 15:00 Paper session 3
session chair:   Taha Aksu    
15:00 - 16:00 Keynote 5 and Keynote 6 (each: 20-minute talk + 10-minute Q&A)
speakers:   Diyi & Joao
session chair:   Kokil Jaidka   
16:00 - 16:30 Poster session 2 and Tea
16:30 - 18:30 Industry session: Speaker presentations (each: 20-minute talk + 10-minute Q&A)
speakers:   Daniel & Huda & Alessandro & Lidong
session chairs:   Suzanna Sia & Gao Wei    

Oral & Poster Papers

Papers are drawn mainly from EMNLP 2023 and ACL 2023. Oral presentations are allotted 12 minutes plus 3 minutes for immediate questions. Poster boards can accommodate posters up to 1m x 1m, in either portrait or landscape. We split the posters into Poster session 1 and Poster session 2, divided as follows, with each session containing 13 poster boards.

Click to see the paper list ↓
Paper session 1   

[1] Hai Ye, Qizhe Xie, Hwee Tou Ng. Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering (Slot: 08:15 - 08:30)
[2] Zhengyuan Liu, Yong Keong Yap, Hai Leong Chieu and Nancy F. Chen. Guiding Computational Stance Detection with Expanded Stance Triangle Framework (Slot: 08:30 - 08:45)
[3] Ahmed Masry*, Parsa Kavehzadeh*, Xuan Long Do, Enamul Hoque, Shafiq Joty. UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning (Slot: 08:45 - 09:00)
[4] Ibrahim Taha Aksu, Devamanyu Hazarika, Shikib Mehri, Seokhwan Kim, Dilek Hakkani-Tur, Yang Liu, Mahdi Namazifar. CESAR: Automatic Induction of Compositional Instructions for Multi-turn Dialogs (Slot: 09:00 - 09:15)

Paper session 2   

[1] Hannan Cao, Liping Yuan, Yuchen Zhang, Hwee Tou Ng. Unsupervised Grammatical Error Correction Rivaling Supervised Methods (Slot: 10:30 - 10:45)
[2] Moxin Li, Wenjie Wang, Fuli Feng, Yixin Cao, Jizhi Zhang, Tat-Seng Chua. Robust Prompt Optimization for Large Language Models Against Distribution Shifts (Slot: 10:45 - 11:00)
[3] Ratish Puduppully, Anoop Kunchukuttan, Raj Dabre, Ai Ti Aw, Nancy F. Chen. Decomposed Prompting for Machine Translation between Related Languages using Large Language Models (Slot: 11:00 - 11:15)
[4] Zhiyuan Liu, Sihang Li, Yanchen Luo, Hao Fei, Yixin Cao, Kenji Kawaguchi, Xiang Wang, Tat-Seng Chua. MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter. (Slot: 11:15 - 11:30)

Paper session 3   

[1] Jinggui Liang, Lizi Liao. ClusterPrompt: Cluster Semantic Enhanced Prompt Learning for New Intent Discovery (Slot: 14:00 - 14:15)
[2] Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee. LLM-Adapter: An Empirical Study of Adapter-based Parameter-Efficient Fine-Tuning for Large Language Models (Slot: 14:15 - 14:30)
[3] Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy F Chen, Zhengyuan Liu, Diyi Yang. CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation (Slot: 14:30 - 14:45)
[4] Quanyu Long, Wenya Wang, Sinno Jialin Pan. Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning (Slot: 14:45 - 15:00)

Poster session 1  

[1] Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan. On the Risk of Misinformation Pollution with Large Language Models (Board: P101)
[2] Fengzhu Zeng, Wei Gao. Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models (Board: P102)
[3] Shuo Sun, Yuchen Zhang, Jiahuan Yan, Yuze GAO, Donovan Ong, Bin Chen, Jian Su. Battle of the Large Language Models: Dolly vs LLaMA vs Vicuna vs Guanaco vs Bard vs ChatGPT - A Text-to-SQL Parsing Comparison (Board: P105)
[4] Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang. Logic-LM: Empowering large language models with symbolic solvers for faithful logical reasoning (Board: P106)
[5] Jiaxi Li and Wei Lu. Contextual Distortion Reveals Constituency: Masked Language Models are Implicit Parsers (Board: P109)
[6] Shaz Furniturewala, Abhinav Java, Surgan Jandial, Simra Shahid, Pragyan Banerjee, Balaji Krishnamurthy, Sumit Bhatia and Kokil Jaidka. Evaluating the Efficacy of Prompting Techniques for Debiasing Language Model Outputs (Board: P110)
[7] Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng. Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data (Board: P103)
[8] Yixi Ding, Yanxia Qin, Qian Liu, Min-Yen Kan. CocoSciSum: A Scientific Summarization Toolkit with Compositional Controllability (Board: P104)
[9] Shaurya Rohatgi, Yanxia Qin, Benjamin Aw, Niranjana Unnithan, Min-Yen Kan. The ACL OCL Corpus: Advancing Open Science in Computational Linguistics (Board: P107)
[10] Xinyuan Lu, Liangming Pan, Qian Liu, Preslav Nakov, Min-Yen Kan. SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables (Board: P108)
[11] Gerard Yeo, Kokil Jaidka. The PEACE-Reviews dataset: Modeling Cognitive Appraisals in Emotion Text Analysis (Board: P111)
[12] Kankan Zhou, Eason Lai, Wei Bin Au Yeong, Kyriakos Mouratidis, Jing Jiang. ROME: Evaluating Pre-trained Vision-Language Models on Reasoning beyond Visual Common Sense (Board: P112)
[13] Xiaobing Sun, Jiaxi Li, and Wei Lu. Unraveling Feature Extraction Mechanisms in Neural Networks (Board: P113)

Poster session 2  

[1] Xuan Long Do, Bowei Zou, Shafiq Joty, Anh Tai Tran, Liangming Pan, Nancy F. Chen, Ai Ti Aw. Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation (Board: P201)
[2] Rui Cao, Jing Jiang. Modularized Zero-shot VQA with Pre-trained Models (Board: P202)
[3] Bin Wang, Zhengyuan Liu, Nancy F. Chen. Instructive Dialogue Summarization with Query Aggregations (Board: P205)
[4] Huy Quang Dao, Lizi Liao, Dung D. Le, Yuxiang Nie. Reinforced Target-driven Conversational Promotion (Board: P206)
[5] Ibrahim Taha Aksu, Min-Yen Kan and Nancy F. Chen. Zero-shot Adaptive Prefixes for Dialogue State Tracking Domain Adaptation (Board: P209)
[6] Guangsheng Bao, Zhiyang Teng, Hao Zhou, Jianhao Yan, Yue Zhang. Non-Autoregressive Document-Level Machine Translation (Board: P210)
[7] Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, Preslav Nakov. Fact-Checking Complex Claims with Program-Guided Reasoning (Board: P203)
[8] Muhammad Reza Qorib, Hwee Tou Ng. System Combination via Quality Estimation for Grammatical Error Correction (Board: P204)
[9] Qingyu Tan, Hwee Tou Ng, Lidong Bing. Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models (Board: P207)
[10] Ruichao Yang, Wei Gao, Jing Ma, Zhiwei Yang. WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom (Board: P208)
[11] Ratish Puduppully, Parag Jain, Nancy F. Chen and Mark Steedman. Multi-Document Summarization with Centroid-Based Pretraining (Board: P211)
[12] Mathieu Ravaut, Shafiq Joty, Nancy F. Chen. Unsupervised Summarization Re-ranking (Board: P212)
[13] Zhiqiang Hu, Nancy F. Chen, Roy Ka-Wei Lee. Adapter-TST: A Parameter Efficient Method for Multiple-Attribute Text Style Transfer (Board: P213)

Keynote Speakers

The following speakers are invited to give keynotes at SSNLP 2023.

Title: Jais and Jais-chat: Building the World's Best Open Arabic-Centric Foundation and Instruction-Tuned Open Generative Large Language Models
Speaker: Preslav Nakov

Abstract: I will discuss Jais and Jais-chat, two state-of-the-art Arabic-centric foundation and instruction-tuned open generative large language models (LLMs). The models are based on the GPT-3 decoder-only architecture and are pretrained on a mixture of Arabic and English texts, including source code in various programming languages. The models demonstrate better knowledge and reasoning capabilities in Arabic than previous open Arabic and multilingual models by a sizable margin, based on extensive evaluation. Moreover, they are competitive in English compared to English-centric open models of similar size, despite being trained on much less English data. I will discuss the training, the tuning, the safety alignment, and the evaluation, as well as the lessons we learned.

Bio: Dr. Preslav Nakov is Professor and Department Chair for NLP at the Mohamed bin Zayed University of Artificial Intelligence. Previously, he was Principal Scientist at the Qatar Computing Research Institute, HBKU, where he led the Tanbih mega-project, developed in collaboration with MIT, which aims to limit the impact of "fake news", propaganda and media bias by making users aware of what they are reading, thus promoting media literacy and critical thinking. He received his PhD degree in Computer Science from the University of California at Berkeley, supported by a Fulbright grant. He is Chair-Elect of the European Chapter of the Association for Computational Linguistics (EACL), Secretary of ACL SIGSLAV, and Secretary of the Truth and Trust Online board of trustees. Formerly, he was PC chair of ACL 2022, and President of ACL SIGLEX. He is also a member of the editorial boards of several journals, including Computational Linguistics, TACL, ACM TOIS, IEEE TASL, IEEE TAC, CS&L, NLE, AI Communications, and Frontiers in AI. He authored a Morgan & Claypool book on Semantic Relations between Nominals, two books on computer algorithms, and 250+ research papers. He received a Best Paper Award at ACM WebSci'2022, a Best Long Paper Award at CIKM'2020, a Best Demo Paper Award (Honorable Mention) at ACL'2020, a Best Task Paper Award (Honorable Mention) at SemEval'2020, a Best Poster Award at SocInfo'2019, and the Young Researcher Award at RANLP'2011. He was also the first to receive the Bulgarian President's John Atanasoff award, named after the inventor of the first automatic electronic digital computer. His research was featured by over 100 news outlets, including Reuters, Forbes, Financial Times, CNN, Boston Globe, Aljazeera, DefenseOne, Business Insider, MIT Technology Review, Science Daily, Popular Science, Fast Company, The Register, WIRED, and Engadget, among others.

Title: Generative AI for Social Good: A Myth or a Reality?
Speaker: Farah Benamara

Abstract: Linguistically-informed processing of unstructured textual interactions offers an important testing ground for hybrid AI and AI for social good, in particular when the attempt is to automatically understand beyond what is said. This talk is about the implicit nature of linguistic expressions, investigating the role of context in their automatic processing: if humans need context, what about machines? I attempt to answer this question in two particular NLP applications: hate speech detection and crisis management. I review the main findings of current studies and question the use of generative AI models in applications with great social and ethical implications for society.

Bio: Dr. Farah Benamara is a Full Professor of computer science at Toulouse University Paul Sabatier. She is a member of the IRIT laboratory and co-head of the MELODI group. Her research concerns Natural Language Processing and focuses on the development of semantic and pragmatic models for language understanding, with particular attention to evaluative language processing, discourse processing, and information extraction from texts. She has published more than 100 papers in peer-reviewed international conferences and journals. She has served as area chair at ACL 2019, EACL 2021, and EACL 2024, and as Senior Area Chair at NAACL 2024. She is a member of the editorial boards of Dialogue and Discourse, IEEE Transactions on Affective Computing, and Traitement Automatique des Langues. She co-edited a special issue on contextual phenomena in evaluative language processing in the journal Computational Linguistics. She is PI of several ongoing projects, including DesCartes at CNRS@CREATE Singapore on hybrid AI for NLP; Sterheotypes, an EU project on the detection of racial stereotypes; QualityOnto, an ANR-DFG project on fact-checking for knowledge graph validation; and INTACT, a CNRS prematuration project on NLP-based crisis management from social media.

Title: From Bots to Buddies: Making Conversational Agents More Human-Like
Speaker: Vivian Chen

Abstract: While today's conversational agents are equipped with impressive capabilities, there remains a clear distinction between the intuitive prowess of humans and the operational limits of machines. An example of this disparity is evident in the human ability to infer implicit intents from users' utterances, subsequently guiding conversations toward specific topics or recommending appropriate tasks or products. This talk aims to elevate conversational agents to a more human-like realm, enhancing user experience and practicality. By exploring innovative strategies and frameworks that leverage commonsense knowledge, we delve into the potential ways conversational agents can evolve to offer more seamless, contextually aware, and user-centric interactions. The goal is not only to close the gap between human and machine interactions but also to unlock new possibilities in how conversational agents can be utilized in our daily lives.

Bio: Dr. Yun-Nung (Vivian) Chen is currently an associate professor in the Department of Computer Science & Information Engineering at National Taiwan University. She earned her Ph.D. degree from Carnegie Mellon University, where her research interests focus on spoken dialogue systems and natural language processing. She was recognized as the Taiwan Outstanding Young Women in Science and received Google Faculty Research Awards, Amazon AWS Machine Learning Research Awards, MOST Young Scholar Fellowship, and FAOS Young Scholar Innovation Award. Her team was selected to participate in the first Alexa Prize TaskBot Challenge in 2021. Prior to joining National Taiwan University, she worked in the Deep Learning Technology Center at Microsoft Research Redmond.

Title: Evaluation in the era of GPT-4
Speaker: Tanya Goyal

Abstract: As large language models become more embedded in user applications, there is a push to align their outputs with human preferences. But human preferences are highly subjective, making both model alignment and evaluation extremely challenging. In this talk, I will first outline work that highlights this subjectivity, for a relatively well-defined task like summarization, and its effects on downstream model evaluations. Next, I will discuss how effectively trained models can capture human preferences and the impact of integrating these models into RLHF pipelines.

Bio: Dr. Tanya Goyal is an incoming (Fall 2024) assistant professor of Computer Science at Cornell University. For the 2023-2024 academic year, she is a postdoctoral researcher at the Princeton Language and Intelligence (PLI) group. Her current research focuses on designing scalable and cost-effective evaluation techniques for LLMs. Particularly, she is interested in understanding and modeling the subjectivity in human feedback, and how this affects both evaluation and training of LLMs at scale. Previously, she received her Ph.D. in computer science from the University of Texas at Austin in 2023, advised by Dr. Greg Durrett. Her thesis research focused on building tools to automatically detect attribution errors in generated text.

Title: Human-AI Interaction in the Age of LLMs
Speaker: Diyi Yang

Abstract: Large language models have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, I share two distinct approaches to empowering human-AI interaction using LLMs. The first one explores how large language models transform computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. The second part focuses on social skill learning via LLMs by empowering therapists with LLM-empowered feedback and deliberative practices. These two works demonstrate how human-AI interaction via LLMs can foster positive change.

Bio: Dr. Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research focuses on natural language processing for social impact. She has received multiple best paper awards and recognitions at leading conferences in NLP and HCI. She is a recipient of IEEE “AI 10 to Watch” (2020), Intel Rising Star Faculty Award (2021), Samsung AI Researcher of the Year (2021), Microsoft Research Faculty Fellowship (2021), NSF CAREER Award (2022), and an ONR Young Investigator Award (2023).

Title: New Dimensions for the Evaluation of Conversational Agents
Speaker: João Sedoc

Abstract: The rapid advances in large language models brought about disruptive innovations in the field of conversational agents. However, recent advances also present new challenges in evaluating the quality of such systems, as well as the underlying models and methods. As conversational agents increasingly match or even surpass human performance in dimensions like 'coherence,' we must shift our focus to the qualities of conversational agents that are fundamental to human-like conversation (e.g., empathy and emotion). In this talk, I will focus on how we can integrate psychological metrics for evaluating conversational agents along dimensions such as emotion, empathy, and user traits.

Bio: Dr. João Sedoc is an Assistant Professor of Information Systems in the Department of Technology, Operations and Statistics at New York University Stern School of Business. He is also affiliated with the ML² Lab at the NYU Center for Data Science. His research areas are at the intersection of machine learning and natural language processing. His interests include conversational agents, hierarchical models, deep learning, and time series analysis. Before joining NYU Stern, he worked as an Assistant Research Professor in the Department of Computer Science at Johns Hopkins University. He received his PhD in Computer and Information Science from the University of Pennsylvania.

Title: Modular Language Modeling through Model Merging
Speaker: Daniel Preoțiuc-Pietro

Abstract: Pre-trained language models are the cornerstone of most NLP applications and their performance is dependent on using large and diverse data sets. Combining knowledge of multiple data sets in a single model either in pretraining or fine-tuning can lead to better overall performance on in-domain data and can better generalize on out-of-domain data. This talk will present methods and experiments with model merging, defined as combining multiple models into a single one in parameter space without access to data or retraining. This enables models to be modular by design, where models trained on individual data sets can be dynamically used, combined or, if needed, removed under arbitrary constraints. Merging is compute and parameter efficient and allows leveraging models without access to potentially private data used in their training.
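As a rough, hypothetical illustration of combining models in parameter space (not the specific methods presented in the talk), the simplest variant is a weighted average of checkpoints that share an architecture; the function and toy state dicts below are invented for the sketch:

```python
# Minimal sketch of model merging by uniform parameter averaging.
# State dicts map parameter names to flat lists of floats; real models
# would use framework tensors, but the arithmetic is the same.
def merge_state_dicts(state_dicts, weights=None):
    """Average several same-shaped state dicts in parameter space."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        params = [sd[name] for sd in state_dicts]
        merged[name] = [
            sum(w * p[i] for w, p in zip(weights, params))
            for i in range(len(params[0]))
        ]
    return merged

model_a = {"layer.weight": [1.0, 2.0]}
model_b = {"layer.weight": [3.0, 4.0]}
print(merge_state_dicts([model_a, model_b]))  # {'layer.weight': [2.0, 3.0]}
```

Note that no training data is touched: merging operates purely on the stored parameters, which is what makes it compute-efficient and privacy-friendly, as the abstract points out.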

Bio: Dr. Daniel Preoțiuc-Pietro is a Senior Research Scientist and the manager of the NLP Platforms group in the Bloomberg AI Engineering group. The group's work powers products for news, financial documents, social media, and search. His main research interests are in understanding and modelling the social, pragmatic, and temporal aspects of text, especially from social media, with applications in domains such as Psychology, Law, Political Science, and Journalism. His research has been featured in the popular press, including the Washington Post, BBC, Scientific American, and FiveThirtyEight. He is a co-organizer of the Natural Legal Language Processing workshop series. Prior to joining Bloomberg, he obtained his PhD from the University of Sheffield and was a postdoctoral researcher at the University of Pennsylvania.

Title: Perplexity-Driven Case Encoding Needs Augmentation for CAPITALIZATION Robustness
Speaker: Huda Khayrallah

Abstract: For most NLP models, upper and lower case letters are represented with distinct code-points. In contrast, most people naturally connect upper- and lower-cased letters as highly similar and therefore expect NLP models to perform similarly on inputs that only differ in casing. However, that is often not the case, and NLP models are often unstable on non-standard casings. Subword segmentation methods (e.g., BPE (Sennrich et al., 2016) and SPM (Kudo and Richardson, 2018)) handle the sparsity introduced by a variety of linguistic features (e.g., concatenative morphology) by learning a segmentation of words into shorter sequences of characters. However, such methods do not currently handle the sparsity introduced by casing well and can lead to terrible quality on ALL CAPS data. Prior work (Berard et al., 2019; Etchegoyhen and Gete, 2020) overcame the quality drop in machine translation but did so in a way that breaks the encoding optimality of perplexity-driven methods, leading to impractical sequence length/runtime. In this work, we re-encode capitalization to allow the perplexity-driven subword segmentation model to learn how to best segment this linguistic feature. Naturally occurring data accurately describes the prevalence of capitalization but underestimates the importance humans ascribe to capitalization robustness. We propose data augmentation to fill this gap. Overall, we increase translation quality on data with different casings (compared to standard SPM), with minimal impact on decoding speed on standard cased data and large speed improvements on ALL CAPS data.
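To illustrate the general idea of re-encoding capitalization (a simplified sketch, not the paper's actual scheme; the marker tokens and functions are invented), text can be lowercased while explicit case tokens carry the casing, so a subword model sees one spelling of each word:

```python
# Toy "inline casing" encoder/decoder: capitalization becomes explicit
# tokens (<T> = title case, <U> = all caps) preceding a lowercased word.
TITLE, UPPER = "<T>", "<U>"

def encode_case(text):
    out = []
    for word in text.split():
        if word.isupper() and len(word) > 1:
            out += [UPPER, word.lower()]
        elif word[:1].isupper():
            out += [TITLE, word.lower()]
        else:
            out.append(word)
    return " ".join(out)

def decode_case(text):
    out, mode = [], None
    for tok in text.split():
        if tok in (TITLE, UPPER):
            mode = tok
        else:
            if mode == UPPER:
                tok = tok.upper()
            elif mode == TITLE:
                tok = tok.capitalize()
            out.append(tok)
            mode = None
    return " ".join(out)

s = "Hello WORLD again"
assert decode_case(encode_case(s)) == s
print(encode_case(s))  # <T> hello <U> world again
```

Because the lowercased forms are shared across casings, a perplexity-driven segmenter can then learn subwords (and the case markers) from a single, denser distribution instead of seeing "world" and "WORLD" as unrelated strings.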

Bio: Dr. Huda Khayrallah is a senior researcher at Microsoft, working on the Microsoft Translator team. She holds a PhD in computer science from The Johns Hopkins University (JHU), where she was advised by Philipp Koehn. She also holds a bachelor’s in computer science from UC Berkeley. She has worked on a variety of topics in machine translation and NLP including: low resource MT, noisy data in MT, domain adaptation, chatbots, and more.

Title: Retrieval-Augmented Large Language Models for Personal Assistants
Speaker: Alessandro Moschitti

Abstract: Recent work has shown that Large Language Models (LLMs) can potentially answer any question with high accuracy, while also providing justifications of the generated output. At the same time, other research has shown that even the most powerful and accurate models, such as GPT-4, produce hallucinations, which often invalidate their answers. Retrieval-augmented LLMs are currently a practical solution that can effectively address this problem. However, the quality of the grounding is essential for improving the model, since noisy context degrades overall performance. In this talk, we present our experience with Generative Question Answering, which uses basic search engines and accurate passage rerankers to augment relatively small language models. Interestingly, our approach provides a more direct interpretation of knowledge grounding for LLMs.
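The retrieve-then-generate pattern the talk describes can be sketched, very loosely, as ranking candidate passages against the question and prepending the best ones to the generation prompt. The word-overlap scorer below is a stand-in for a real search engine and trained reranker, and all names and example passages are invented:

```python
# Toy retrieval-augmented prompting: rank passages by word overlap with
# the question, then build a grounded prompt for a generative model.
def rank_passages(question, passages, k=2):
    q = set(question.lower().split())
    return sorted(passages,
                  key=lambda p: len(q & set(p.lower().split())),
                  reverse=True)[:k]

def build_prompt(question, passages):
    context = "\n".join(f"- {p}" for p in rank_passages(question, passages))
    return f"Context:\n{context}\nQuestion: {question}\nAnswer:"

docs = [
    "SSNLP is a Singapore symposium on natural language processing.",
    "The auditorium seats are limited.",
    "EMNLP 2023 was held in Singapore in December.",
]
print(build_prompt("When was EMNLP 2023 held in Singapore?", docs))
```

The grounding-quality point in the abstract shows up directly here: if the ranker surfaces an irrelevant passage, that noise goes straight into the prompt, which is why accurate reranking matters.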

Bio: Dr. Alessandro Moschitti is a Principal Research Scientist at Amazon Alexa AI, where he has been leading the science of the Alexa information service since 2018. He designed the Alexa QA system based on unstructured text and, more recently, the first Generative QA system to extend the answer skills of Alexa. He obtained his Ph.D. in CS from the University of Rome in 2003 and then did his postdoc at The University of Texas at Dallas for two years. He was a professor in the CS Department of the University of Trento, Italy, from 2007 to 2021. He participated in the Jeopardy! Grand Challenge with the IBM Watson Research Center (2009 to 2011) and collaborated with them until 2015. He was a Principal Scientist at the Qatar Computing Research Institute (QCRI) for five years (2013-2018). His expertise concerns theoretical and applied machine learning in the areas of NLP, IR, and Data Mining. He is well known for his work on structural kernels and neural networks for syntactic/semantic inference over text, documented by more than 330 scientific articles. He has received four IBM Faculty Awards, one Google Faculty Award, and five best paper awards. He was the General Chair of EACL 2023 and EMNLP 2014, a PC co-Chair of CoNLL 2015, and has had a chair role in more than 70 conferences and workshops. He is currently a senior action/associate editor of ACM Computing Surveys and JAIR. He has led ~30 research projects, e.g., with MIT CSAIL.

Title: Research and Implementation of Large Language Models at Alibaba DAMO Academy
Speaker: Lidong Bing

Abstract: Over the past year, Large Language Models (LLMs) have brought about a significant transformation in the field of Natural Language Processing (NLP) and artificial intelligence (AI). This presentation will provide an overview of the research and practical initiatives carried out by Alibaba DAMO Academy in the domain of LLMs. On the practical front, the team has introduced an LLM called SeaLLMs, which demonstrates remarkable capabilities across major languages in the ASEAN region. When compared to models with similar parameter sizes, SeaLLMs has achieved state-of-the-art performance on various datasets, spanning from fundamental NLP tasks to complex general task solving. Additionally, SeaLLMs has been meticulously customized to enhance safety in these languages and improve its understanding of local cultures. On the research side, the presenter will introduce several recent projects undertaken by the team to advance the development of superior multilingual LLMs. These initiatives include the creation of a multilingual evaluation benchmark for LLMs, an extensive investigation into multilingual jailbreaking, a framework that enhances LLMs by incorporating adaptive knowledge sources, a method for extending context length in pretraining, and a framework aimed at making LLMs more effective for low-resource languages. Lastly, the presenter will offer insights into the directions that the team will investigate in the near future. Additionally, he will provide information about career opportunities at DAMO Academy.

Bio: Dr. Lidong Bing is the director of the Language Technology Lab at DAMO Academy of Alibaba Group. He received a Ph.D. from The Chinese University of Hong Kong and was a postdoc research fellow at Carnegie Mellon University. His research interests include various low-resource and multilingual NLP problems, large language models and their applications, etc. He has published over 150 papers on these topics in top peer-reviewed venues. Currently, he is serving as an Action Editor for Transactions of the Association for Computational Linguistics (TACL) and ACL Rolling Review (ARR), as well as an Area Chair for AI conferences and an Associate Editor for AI journals.


General Chair:

Kokil Jaidka, National University of Singapore

PC Chair:

Wei Gao, Singapore Management University

Organizing Committee:

Yanxia Qin, National University of Singapore

Hao Fei, National University of Singapore

Yang Deng, National University of Singapore

Sun Shuo, Institute for Infocomm Research

Suzanna Sia, Johns Hopkins University

Gerard Yeo, National University of Singapore


Min-Yen Kan, National University of Singapore

Hwee Tou Ng, National University of Singapore

Tat-Seng Chua, National University of Singapore

Shafiq Joty, Nanyang Technological University

Nancy Chen, Institute for Infocomm Research

Jian Su, Institute for Infocomm Research

Jing Jiang, Singapore Management University

Wei Lu, Singapore University of Technology and Design

Soujanya Poria, Singapore University of Technology and Design


SSNLP 2023 will be held at the Shaw Foundation Alumni House Auditorium, NUS (11 Kent Ridge Dr, Singapore 119244).


Contact Us

Please feel free to reach out if you have any inquiries: Yanxia Qin and Hao Fei.