
Jinlan Fu

  • Postdoc @ NUS-CS
  • jinlanjonna@gmail.com
  • NUS Innovation 4.0 Rm 406
  • Natural Language Processing

About Me

Hi there, I am a postdoc at the National University of Singapore, working with Prof. See-Kiong Ng. I received my Ph.D. from the School of Computer Science, Fudan University (Sep. 2016 ~ Jul. 2021), supervised by Prof. Xuanjing Huang and Prof. Qi Zhang. From Dec. 2019 to Jun. 2022, I was fortunate to work closely (remotely) with Dr. Pengfei Liu and Prof. Graham Neubig of the Language Technologies Institute (LTI) at Carnegie Mellon University.

My research focuses on the following areas of natural language processing:

  • Dialogue Systems;

  • Interpretable Analysis and Text Evaluation;

  • Information Extraction and Sequence Labeling;

  • Cross-lingual Transfer Learning.

🔥🔥 We are looking for highly motivated interns, research assistants, and Ph.D. students to work on natural language processing. Please drop me an email with your CV if you are interested.

Awards


Projects


Selected Publications

A complete list is available on Google Scholar.

* denotes the corresponding author.

2022

  • Polyglot Prompt: Multilingual Multitask PrompTraining

    Jinlan Fu, See-Kiong Ng, Pengfei Liu
    EMNLP | Full Text | Code | Abstract | BibTeX
    This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e. without any task/language-specific module? The benefit of achieving this could open new doors for future multilingual research, including allowing systems trained on low resources to be further assisted by other languages as well as other tasks. We approach this goal by developing a learning framework named Polyglot Prompting to exploit prompting methods for learning a unified semantic space for different languages and tasks with multilingual prompt engineering. We performed a comprehensive evaluation of 6 tasks, namely topic classification, sentiment classification, named entity recognition, question answering, natural language inference, and summarization, covering 24 datasets and 49 languages. The experimental results demonstrated the efficacy of multilingual multitask prompt-based learning and led to inspiring observations. We also present an interpretable multilingual evaluation methodology and show how the proposed framework, multilingual multitask prompt training, works. We release all datasets prompted in the best setting and code. 
    @inproceedings{fu2022poly,
      title = {Polyglot Prompt: Multilingual Multitask PrompTraining},
      author = {Jinlan Fu and See-Kiong Ng and Pengfei Liu},
      booktitle = {EMNLP},
      year = {2022}
    }
    
  • CorefDiffs: Co-referential and Differential Knowledge Flow in Document Grounded Conversations

    Lin Xu, Qixian Zhou, Jinlan Fu, Min-Yen Kan, See-Kiong Ng
    COLING | Full Text | Code | Abstract | BibTeX
    Knowledge-grounded dialog systems need to incorporate smooth transitions among knowledge selected for generating responses, to ensure that dialog flows naturally. For document-grounded dialog systems, the inter- and intra-document knowledge relations can be used to model such conversational flows. We develop a novel Multi-Document Co-Referential Graph (Coref-MDG) to effectively capture the inter-document relationships based on commonsense and similarity and the intra-document co-referential structures of knowledge segments within the grounding documents. We propose CorefDiffs, a Co-referential and Differential flow management method, to linearize the static Coref-MDG into conversational sequence logic. CorefDiffs performs knowledge selection by accounting for contextual graph structures and the knowledge difference sequences. CorefDiffs significantly outperforms the state-of-the-art by 9.5%, 7.4%, and 8.2% on three public benchmarks. This demonstrates that effectively modeling co-reference and knowledge differences in dialog flows is critical for transitions in document-grounded conversation.
    @inproceedings{lin2022corefdiffs,
      title = {CorefDiffs: Co-referential and Differential Knowledge Flow in Document Grounded Conversations},
      author = {Lin Xu and Qixian Zhou and Jinlan Fu and Min-Yen Kan and See-Kiong Ng},
      booktitle = {COLING},
      year = {2022}
    }
    
  • Are All the Datasets in Benchmark Necessary? A Pilot Study of Dataset Evaluation for Text Classification

    Yang Xiao, Jinlan Fu*, See-Kiong Ng, Pengfei Liu
    NAACL | Full Text | Code | DataLab | Abstract | BibTeX
    In this paper, we ask the research question of whether all the datasets in the benchmark are necessary. We approach this by first characterizing the distinguishability of datasets when comparing different systems. Experiments on 9 datasets and 36 systems show that several existing benchmark datasets contribute little to discriminating top-scoring systems, while those less used datasets exhibit impressive discriminative power. Taking the text classification task as a case study, we further investigate the possibility of predicting dataset discrimination based on its properties (e.g., average sentence length). Our preliminary experiments promisingly show that given a sufficient number of training experimental records, a meaningful predictor can be learned to estimate dataset discrimination over unseen datasets. We released all datasets with features explored in this work on DataLab.
    @inproceedings{xiao2022eval,
      title = {Are All the Datasets in Benchmark Necessary? A Pilot Study of Dataset Evaluation for Text Classification},
      author = {Yang Xiao and Jinlan Fu and See-Kiong Ng and Pengfei Liu},
      booktitle = {NAACL},
      year = {2022}
    }
    
  • DataLab: A Platform for Data Analysis and Intervention

    Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, Pengfei Liu
    ACL, Outstanding Demo | Full Text | Code | DataLab | Abstract | BibTeX
    Despite data’s crucial role in machine learning, most existing tools and research tend to focus on systems on top of existing data rather than how to interpret and manipulate data. In this paper, we propose DataLab, a unified data-oriented platform that not only allows users to interactively analyze the characteristics of data but also provides a standardized interface so that many data processing operations can be provided within a unified interface. Additionally, in view of the ongoing surge in the proliferation of datasets, DataLab has features for dataset recommendation and global vision analysis that help researchers form a better view of the data ecosystem. So far, DataLab covers 1,300 datasets and 3,583 transformed versions of them, where 313 datasets support different types of analysis (e.g., with respect to gender bias) with the help of 119M samples annotated by 318 feature functions. DataLab is under active development and will be supported going forward. We have released a web platform, web API, Python SDK, and PyPI published package, which, hopefully, can meet the diverse needs of researchers.
    @inproceedings{xiao2022datalab,
      title = {DataLab: A Platform for Data Analysis and Intervention},
      author = {Yang Xiao and Jinlan Fu and Weizhe Yuan and Vijay Viswanathan and Zhoumianze Liu and Yixin Liu and Graham Neubig and Pengfei Liu},
      booktitle = {ACL},
      year = {2022}
    }
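
    The sketch below shows how a dataset might be loaded for inspection through DataLab's Python SDK. It is a minimal illustration only; the load_dataset entry point and the "ag_news" dataset name are assumptions based on the project's HuggingFace-style interface, not a confirmed snippet from the paper.

    # Minimal sketch (assumed API): load a dataset via the DataLab SDK
    # (pip install datalabs) and inspect one sample with its features.
    from datalabs import load_dataset

    dataset = load_dataset("ag_news")   # dataset name is illustrative
    print(dataset["train"][0])          # one sample and its annotated fields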
    
  • Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing

    Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
    ACM Computing Surveys | Full Text | Resource | Abstract | BibTeX
    This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper, we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website (http://pretrain.nlpedia.ai/) including a constantly updated survey and paper list.
    @article{liu2021pretrain,
      title = {Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing},
      author = {Pengfei Liu and Weizhe Yuan and Jinlan Fu and Zhengbao Jiang and Hiroaki Hayashi and Graham Neubig},
      journal = {ACM Computing Surveys},
      year = {2022}
    }
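
    To make the template-filling process above concrete, here is a minimal cloze-prompting sketch using the HuggingFace transformers fill-mask pipeline; the model choice, template, and verbalizer words ("good"/"bad") are illustrative assumptions, not the survey's prescribed setup.

    from transformers import pipeline

    # Cloze-style prompting: wrap the input x in a template with one unfilled
    # slot, let a masked language model fill the slot, and map the filled
    # token back to a label y (e.g., "good" -> positive, "bad" -> negative).
    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    x = "I love this movie."
    prompt = f"{x} Overall, it was a [MASK] movie."

    for candidate in unmasker(prompt, targets=["good", "bad"]):
        print(candidate["token_str"], candidate["score"])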
    

2021

  • XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation

    Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Graham Neubig, Melvin Johnson
    EMNLP | Full Text | Code | Leaderboard | ExplainaBoard | Abstract | BibTeX
    Machine learning has brought striking advances in multilingual natural language processing capabilities over the past year. For example, the latest techniques have improved the state-of-the-art performance on the XTREME multilingual benchmark by more than 13 points. While a sizeable gap to human-level performance remains, improvements have been easier to achieve in some tasks than in others. This paper analyzes the current state of cross-lingual transfer learning and summarizes some lessons learned. In order to catalyze meaningful progress, we extend XTREME to XTREME-R, which consists of an improved set of ten natural language understanding tasks, including challenging language-agnostic retrieval tasks, and covers 50 typologically diverse languages. In addition, we provide a massively multilingual diagnostic suite and fine-grained multi-dataset evaluation capabilities through an interactive public leaderboard to gain a better understanding of such models.
    @inproceedings{ruder2021xtremer,
      title = {XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation},
      author = {Sebastian Ruder and Noah Constant and Jan Botha and Aditya Siddhant and Orhan Firat and Jinlan Fu and Pengfei Liu and Junjie Hu and Graham Neubig and Melvin Johnson},
      booktitle = {EMNLP},
      year = {2021}
    }
    
  • ExplainaBoard: An Explainable Leaderboard for NLP

    Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaicheng Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Zi-Yi Dou, Graham Neubig
    ACL, Best Demo | Full Text | Code | ExplainaBoard | Abstract | BibTeX
    With the rapid development of NLP research, leaderboards have emerged as one tool to track the performance of various systems on various NLP tasks. They are effective in this goal to some extent, but generally present a rather simplistic one-dimensional view of the submitted systems, communicated only through holistic accuracy numbers. In this paper, we present a new conceptualization and implementation of NLP evaluation: the ExplainaBoard, which in addition to inheriting the functionality of the standard leaderboard, also allows researchers to (i) diagnose strengths and weaknesses of a single system (e.g., what is the best-performing system bad at?), (ii) interpret relationships between multiple systems (e.g., where does system A outperform system B? What if we combine systems A, B, and C?), and (iii) examine prediction results closely (e.g., what are common errors made by multiple systems, or in what contexts do particular errors occur?). So far, ExplainaBoard covers more than 400 systems, 50 datasets, 40 languages, and 12 tasks. ExplainaBoard is kept up to date and was recently upgraded to support (1) multilingual multi-task benchmarks, (2) meta-evaluation, and (3) a more complicated task, machine translation, which reviewers also suggested. We not only released an online platform at http://explainaboard.nlpedia.ai/ but also make our evaluation tool available as an API with MIT License on GitHub (https://github.com/neulab/explainaBoard) and PyPI (https://pypi.org/project/interpret-eval/) so that users can conveniently assess their models offline. We additionally release all output files from systems that we have run or collected to motivate "output-driven" research in the future.
    @inproceedings{liu2021explain,
      title = {ExplainaBoard: An Explainable Leaderboard for NLP},
      author = {Pengfei Liu and Jinlan Fu and Yang Xiao and Weizhe Yuan and Shuaicheng Chang and Junqi Dai and Yixin Liu and Zihuiwen Ye and Zi-Yi Dou and Graham Neubig},
      booktitle = {ACL},
      year = {2021}
    }
    
  • SpanNER: Named Entity Re-/Recognition as Span Prediction

    Jinlan Fu, Xuanjing Huang, Pengfei Liu
    ACL | Full Text | Code | Demo | Abstract | BibTeX
    Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction. Despite its preliminary effectiveness, the span prediction model's architectural bias has not been fully understood. In this paper, we first investigate the strengths and weaknesses of the span prediction model when used for named entity recognition, compared with the sequence labeling framework, and how to further improve it, which motivates us to combine the complementary advantages of systems based on different paradigms. We then reveal that span prediction can simultaneously serve as a system combiner to re-recognize named entities from different systems' outputs. We experimentally implement 154 systems on 11 datasets, covering three languages; the comprehensive results show the effectiveness of span prediction models that serve both as base NER systems and as system combiners. We make all code and datasets available at https://github.com/neulab/spanner, along with an online system demo at http://spanner.sh. Our model has also been deployed on the ExplainaBoard platform, which allows users to flexibly perform system combination of top-scoring systems in an interactive way: http://explainaboard.nlpedia.ai/leaderboard/task-ner/.
    @inproceedings{fu2021spanner,
      title = {SpanNER: Named Entity Re-/Recognition as Span Prediction},
      author = {Jinlan Fu and Xuanjing Huang and Pengfei Liu},
      booktitle = {ACL},
      year = {2021}
    }
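
    The core of the span-prediction view of NER is to enumerate candidate spans and classify each one, including a non-entity class. The following is a framework-free sketch of that enumeration step; score_span stands in for a learned span classifier and is not the paper's actual model.

    def enumerate_spans(tokens, max_width=4):
        """Yield (start, end) for every candidate span; end is exclusive."""
        for start in range(len(tokens)):
            for end in range(start + 1, min(start + max_width, len(tokens)) + 1):
                yield start, end

    def predict_entities(tokens, score_span, labels=("PER", "LOC", "ORG", "O")):
        """Assign each candidate span its highest-scoring class; score_span
        (a stand-in for a learned scorer) maps (tokens, start, end, label)
        to a float."""
        predictions = []
        for start, end in enumerate_spans(tokens):
            best = max(labels, key=lambda l: score_span(tokens, start, end, l))
            if best != "O":  # keep only spans predicted to be entities
                predictions.append((start, end, best))
        return predictions

    In practice, a decoding step (e.g., greedily keeping the highest-scoring non-overlapping spans) resolves conflicts between overlapping predictions.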
    
  • Larger-Context Tagging: When and Why Does It Work?

    Jinlan Fu, Liangjing Feng, Qi Zhang, Xuanjing Huang, Pengfei Liu
    NAACL | Full Text | Demo | Abstract | BibTeX
    The development of neural networks and pretraining techniques has spawned many sentence-level tagging systems that achieved superior performance on typical benchmarks. However, a relatively less discussed topic is what if more context information is introduced into current top-scoring tagging systems. Although several existing works have attempted to shift tagging systems from sentence-level to document-level, there is still no consensus conclusion about when and why it works, which limits the applicability of the larger-context approach in tagging tasks. In this paper, instead of pursuing a state-of-the-art tagging system by architectural exploration, we focus on investigating when and why the larger-context training, as a general strategy, can work. To this end, we conduct a thorough comparative study on four proposed aggregators for context information collecting and present an attribute-aided evaluation method to interpret the improvement brought by larger-context training. Experimentally, we set up a testbed based on four tagging tasks and thirteen datasets. Hopefully, our preliminary observations can deepen the understanding of larger-context training and enlighten more follow-up works on the use of contextual information.
    @inproceedings{fu2021larger,
      title = {Larger-Context Tagging: When and Why Does It Work?},
      author = {Jinlan Fu and Liangjing Feng and Qi Zhang and Xuanjing Huang and Pengfei Liu},
      booktitle = {NAACL},
      year = {2021}
    }
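
    As a concrete picture of what larger-context training means in its simplest form, the sketch below builds document-level inputs by concatenating neighboring sentences around each target sentence; the fixed window is an illustrative stand-in for the aggregators compared in the paper.

    def larger_context_inputs(sentences, window=1):
        """For each sentence (a list of tokens), attach up to `window`
        neighboring sentences from the same document, so the tagger sees
        wider context while still being supervised only on the center
        sentence's tags."""
        examples = []
        for i, sent in enumerate(sentences):
            left = sentences[max(0, i - window): i]
            right = sentences[i + 1: i + 1 + window]
            context = [tok for s in left + [sent] + right for tok in s]
            examples.append({"tokens": context, "target": sent})
        return examples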
    
  • Towards More Fine-grained and Reliable NLP Performance Prediction

    Zihuiwen Ye, Pengfei Liu, Jinlan Fu, Graham Neubig
    EACL | Full Text | Code | Demo | Abstract | BibTeX
    Performance prediction, the task of estimating a system's performance without performing experiments, allows us to reduce the experimental burden caused by the combinatorial explosion of different datasets, languages, tasks, and models. In this paper, we make two contributions to improving performance prediction for NLP tasks. First, we examine performance predictors not only for holistic measures of accuracy like F1 or BLEU but also fine-grained performance measures such as accuracy over individual classes of examples. Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration. We perform an analysis of four types of NLP tasks and demonstrate both the feasibility of fine-grained performance prediction and the necessity of reliability analysis for performance prediction methods in the future. We make our code publicly available at https://github.com/neulab/Reliable-NLPPP.
    @inproceedings{ye2021towards,
      title = {Towards More Fine-grained and Reliable NLP Performance Prediction},
      author = {Zihuiwen Ye and Pengfei Liu and Jinlan Fu and Graham Neubig},
      booktitle = {EACL},
      year = {2021}
    }
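
    In its simplest form, performance prediction fits a regressor from experiment attributes to an observed metric. The sketch below, with made-up toy features and scores, uses scikit-learn quantile regression to add a crude interval around the point estimate, echoing the paper's argument that predictions need reliability estimates; it is not the paper's actual predictor.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Toy records: (train size, type-token ratio, avg sentence length) -> F1.
    # Real predictors use much richer dataset/language/model features.
    X = np.array([[14987, 0.21, 14.5], [3250, 0.35, 9.8], [20744, 0.18, 22.1],
                  [8936, 0.27, 17.3], [1696, 0.41, 11.2], [12543, 0.24, 19.6]])
    y = np.array([0.91, 0.78, 0.88, 0.85, 0.69, 0.87])  # observed F1 scores

    point = GradientBoostingRegressor(loss="squared_error").fit(X, y)
    lower = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, y)
    upper = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, y)

    x_new = np.array([[10000, 0.25, 15.0]])  # an unseen experiment setting
    print(point.predict(x_new), lower.predict(x_new), upper.predict(x_new))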
    
  • TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing

    Xiao Wang, Qin Liu, Tao Gui, Qi Zhang, Yicheng Zou, Xin Zhou, Jiacheng Ye, Yongxin Zhang, Rui Zheng, Zexiong Pang, Qinzhuo Wu, Zhengyan Li, Chong Zhang, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Shan Qin, Bolin Zhu, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, Xuanjing Huang
    ACL | Full Text | Code | TextFlint | Abstract | BibTeX
    TextFlint is a multilingual robustness evaluation toolkit for NLP tasks that incorporates universal text transformation, task-specific transformation, adversarial attack, subpopulation, and their combinations to provide comprehensive robustness analyses. This enables practitioners to automatically evaluate their models from various aspects or to customize their evaluations as desired with just a few lines of code. TextFlint also generates complete analytical reports as well as targeted augmented data to address the shortcomings of the model in terms of its robustness. To guarantee acceptability, all the text transformations are linguistically based and all the transformed data selected (up to 100,000 texts) scored highly under human evaluation. To validate the utility, we performed large-scale empirical evaluations (over 67,000) on state-of-the-art deep learning models, classic supervised methods, and real-world systems. The toolkit is already available at https://github.com/textflint with all the evaluation results demonstrated at textflint.io.
    @inproceedings{wang2021textflint,
      title = {TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing},
      author = {Xiao Wang and Qin Liu and Tao Gui and Qi Zhang and Yicheng Zou and Xin Zhou and Jiacheng Ye and Yongxin Zhang and Rui Zheng and Zexiong Pang and Qinzhuo Wu and Zhengyan Li and Chong Zhang and Ruotian Ma and Zichu Fei and Ruijian Cai and Jun Zhao and Xingwu Hu and Zhiheng Yan and Yiding Tan and Yuan Hu and Qiyuan Bian and Zhihua Liu and Shan Qin and Bolin Zhu and Xiaoyu Xing and Jinlan Fu and Yue Zhang and Minlong Peng and Xiaoqing Zheng and Yaqian Zhou and Zhongyu Wei and Xipeng Qiu and Xuanjing Huang},
      booktitle = {ACL},
      year = {2021}
    }
    

2020

  • Interpretable Multi-dataset Evaluation for Named Entity Recognition

    Jinlan Fu, Pengfei Liu, Graham Neubig
    EMNLP | Full Text | Code | Demo | Abstract | BibTeX
    With the proliferation of models for natural language processing tasks, it is even harder to understand the differences between models and their relative merits. Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 does not tell us why or how particular methods perform differently and how diverse datasets influence the model design choices. In this paper, we present a general methodology for interpretable evaluation for the named entity recognition (NER) task. The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them, identifying the strengths and weaknesses of current systems. By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area: https://github.com/neulab/InterpretEval. 
    @inproceedings{fu2020interpret,
      title = {Interpretable Multi-dataset Evaluation for Named Entity Recognition},
      author = {Jinlan Fu and Pengfei Liu and Graham Neubig},
      booktitle = {EMNLP},
      year = {2020}
    }
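
    The essence of this interpretable evaluation is to bucket test instances by an attribute (e.g., entity length) and report per-bucket performance instead of a single holistic score. Below is a generic sketch of that bucketing, not the InterpretEval implementation itself.

    from collections import defaultdict

    def bucketed_accuracy(examples, attribute, n_buckets=4):
        """Split examples (dicts with "gold" and "pred") into equal-width
        buckets of an attribute and report per-bucket accuracy, exposing
        where a system is strong or weak."""
        values = [attribute(ex) for ex in examples]
        lo, hi = min(values), max(values)
        width = (hi - lo) / n_buckets or 1.0  # guard against a zero range
        buckets = defaultdict(list)
        for ex, v in zip(examples, values):
            idx = min(int((v - lo) / width), n_buckets - 1)
            buckets[idx].append(ex["gold"] == ex["pred"])
        return {i: sum(hits) / len(hits) for i, hits in sorted(buckets.items())}

    # e.g., bucket NER predictions by entity length:
    # bucketed_accuracy(examples, attribute=lambda ex: len(ex["entity"].split()))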
    
  • RethinkCWS: Is Chinese Word Segmentation a Solved Task?

    Jinlan Fu, Pengfei Liu, Qi Zhang, Xuanjing Huang
    EMNLP | Full Text | Code | Demo | Abstract | BibTeX
    The performance of Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks, especially the successful use of large pre-trained models. In this paper, we take stock of what we have achieved and rethink what's left in the CWS task. Methodologically, we propose a fine-grained evaluation for existing CWS systems, which not only allows us to diagnose the strengths and weaknesses of existing models (under the in-dataset setting), but also enables us to quantify the discrepancy between different criteria and alleviate the negative transfer problem when doing multi-criteria learning. Strategically, despite not aiming to propose a novel model in this paper, our comprehensive experiments on eight models and seven datasets, as well as thorough analysis, could suggest some promising directions for future research. We make all codes publicly available and release an interface that can quickly evaluate and diagnose users' models: https://github.com/neulab/InterpretEval.
    @inproceedings{fu2020rethinkcws,
      title = {RethinkCWS: Is Chinese Word Segmentation a Solved Task?},
      author = {Jinlan Fu and Pengfei Liu and Qi Zhang and Xuanjing Huang},
      booktitle = {EMNLP},
      year = {2020}
    }
    
  • Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study

    Jinlan Fu, Pengfei Liu, Qi Zhang, Xuanjing Huang
    AAAI | Full Text | Data | Abstract | BibTeX
    While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: Does this excellent performance imply a perfect generalization model, or are there still some limitations? In this paper, we take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives and characterize the differences in their generalization abilities through the lens of our proposed measures, which guides us to better design models and training methods. Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models in terms of breakdown performance analysis, annotation errors, dataset bias, and category relationships, which suggest directions for improvement. We have released the datasets (ReCoNLL, PLONER) for future research at our project page: http://pfliu.com/InterpretNER/.
    @inproceedings{fu2020rethinking,
      title = {Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study},
      author = {Jinlan Fu and Pengfei Liu and Qi Zhang and Xuanjing Huang},
      booktitle = {AAAI},
      year = {2020}
    }
    
  • Recurrent Memory Reasoning Network for Expert Finding in Community Question Answering

    Jinlan Fu, Yi Li, Qi Zhang, Qinzhuo Wu, Renfeng Ma, Xuanjing Huang, Yu-Gang Jiang
    WSDM | Full Text | Abstract | BibTeX
    Expert finding is a task designed to enable recommendation of the right person who can provide high-quality answers to a requester's question. Most previous works try to involve a content-based recommendation, which only superficially comprehends the relevance between a requester's question and the expertise of candidate experts by exploring the content or topic similarity between the requester's question and the candidate experts' historical answers. However, if a candidate expert has never answered a question similar to the requester's question, then existing methods have difficulty making a correct recommendation. Therefore, exploring the implicit relevance between a requester's question and a candidate expert's historical records by perception and reasoning should be taken into consideration. In this study, we propose a novel recurrent memory reasoning network (RMRN) to perform this task. This method focuses on different parts of a question, and accordingly retrieves information from the histories of the candidate expert. Since only a small percentage of historical records are relevant to any requester's question, we introduce a Gumbel-Softmax-based mechanism to select relevant historical records from candidate experts' answering histories. To evaluate the proposed method, we constructed two large-scale datasets drawn from Stack Overflow and Yahoo! Answer. Experimental results on the constructed datasets demonstrate that the proposed method could achieve better performance than existing state-of-the-art methods.
    @inproceedings{fu2020recurrent,
      title = {Recurrent Memory Reasoning Network for Expert Finding in Community Question Answering},
      author = {Jinlan Fu and Yi Li and Qi Zhang and Qinzhuo Wu and Renfeng Ma and Xuanjing Huang and Yu-Gang Jiang},
      booktitle = {WSDM},
      year = {2020}
    }
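
    A minimal PyTorch sketch of the Gumbel-Softmax-based hard selection mentioned above; the relevance scorer here is a random stand-in, not the RMRN architecture.

    import torch
    import torch.nn.functional as F

    question = torch.randn(1, 64)      # question representation (stand-in)
    histories = torch.randn(10, 64)    # 10 historical answer records
    logits = question @ histories.t()  # (1, 10) relevance scores

    # hard=True yields a one-hot selection in the forward pass while
    # gradients flow through the soft sample (straight-through estimator).
    selection = F.gumbel_softmax(logits, tau=1.0, hard=True)  # (1, 10)
    selected_record = selection @ histories                   # (1, 64)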
    

2019

  • Distantly Supervised Named Entity Recognition using Positive-Unlabeled Learning

    Minlong Peng, Xiaoyu Xing, Qi Zhang, Jinlan Fu, Xuanjing Huang
    ACL | Full Text | Code | Abstract | BibTeX
    In this work, we explore the way to perform named entity recognition (NER) using only unlabeled data and named entity dictionaries. To this end, we formulate the task as a positive-unlabeled (PU) learning problem and accordingly propose a novel PU learning algorithm to perform the task. We prove that the proposed algorithm can unbiasedly and consistently estimate the task loss as if there is fully labeled data. A key feature of the proposed method is that it does not require the dictionaries to label every entity within a sentence, and it even does not require the dictionaries to label all of the words constituting an entity. This greatly reduces the requirement on the quality of the dictionaries and makes our method generalize well with quite simple dictionaries. Empirical studies on four public NER datasets demonstrate the effectiveness of our proposed method. We have published the source code at https://github.com/v-mipeng/LexiconNER.
    @inproceedings{peng2019distantly,
      title = {Distantly Supervised Named Entity Recognition using Positive-Unlabeled Learning},
      author = {Minlong Peng and Xiaoyu Xing and Qi Zhang and Jinlan Fu and Xuanjing Huang},
      booktitle = {ACL},
      year = {2019}
    }
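
    For intuition about how PU learning estimates a loss without negative labels, here is the standard non-negative PU risk of Kiryo et al. (2017) in PyTorch; the paper derives its own unbiased estimator for dictionary-labeled NER, so this is an illustration of the general mechanics rather than the paper's method.

    import torch

    def nn_pu_risk(loss_pos, loss_pos_neg, loss_unl_neg, prior):
        """Non-negative PU risk. loss_pos / loss_pos_neg: losses of positive
        examples treated as positive / negative; loss_unl_neg: losses of
        unlabeled examples treated as negative; prior: class prior P(y=+1)."""
        positive_risk = prior * loss_pos.mean()
        negative_risk = loss_unl_neg.mean() - prior * loss_pos_neg.mean()
        # clamping keeps the estimated negative risk from going negative
        return positive_risk + torch.clamp(negative_risk, min=0.0)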
    
  • Learning Task-specific Representation for Novel Words in Sequence Labeling

    Minlong Peng, Qi Zhang, Xiaoyu Xing, Tao Gui, Jinlan Fu, Xuanjing Huang
    IJCAI | Full Text | Abstract | BibTeX
    Word representation is a key component in neural-network-based sequence labeling systems. However, representations of unseen or rare words trained on the end task are usually too poor for appreciable performance. This is commonly referred to as the out-of-vocabulary (OOV) problem. In this work, we address the OOV problem in sequence labeling using only training data of the task. To this end, we propose a novel method to predict representations for OOV words from their surface-forms (e.g., character sequence) and contexts. The method is specifically designed to avoid the error propagation problem suffered by existing approaches in the same paradigm. To evaluate its effectiveness, we performed extensive empirical studies on four part-of-speech tagging (POS) tasks and four named entity recognition (NER) tasks. Experimental results show that the proposed method can achieve better or competitive performance on the OOV problem compared with existing state-of-the-art methods.
    @inproceedings{peng2019learning,
      title = {Learning Task-specific Representation for Novel Words in Sequence Labeling},
      author = {Minlong Peng and Qi Zhang and Xiaoyu Xing and Tao Gui and Jinlan Fu and Xuanjing Huang},
      booktitle = {IJCAI},
      year = {2019}
    }
    

2018

  • Adaptive Co-Attention Network for Named Entity Recognition in Tweets

    Qi Zhang, Jinlan Fu, Xiaoyu Liu, Xuanjing Huang
    AAAI | Full Text | Code | Abstract | BibTeX
    In this study, we investigate the problem of named entity recognition for tweets. Named entity recognition is an important task in natural language processing and has been carefully studied in recent decades. Previous named entity recognition methods usually only used the textual content when processing tweets. However, many tweets contain not only textual content, but also images. Such visual information is also valuable in the named entity recognition task. To make full use of textual and visual information, this paper proposes a novel method to process tweets that contain multimodal information. We extend a bi-directional long short-term memory network with conditional random fields and an adaptive co-attention network to achieve this task. To evaluate the proposed methods, we constructed a large scale labeled dataset that contained multimodal tweets. Experimental results demonstrated that the proposed method could achieve a better performance than the previous methods in most cases.
    @inproceedings{zhang2018adaptive,
      title = {Adaptive Co-Attention Network for Named Entity Recognition in Tweets},
      author = {Qi Zhang and Jinlan Fu and Xiaoyu Liu and Xuanjing Huang},
      booktitle = {AAAI},
      year = {2018}
    }
    
  • Neural Networks Incorporating Dictionaries for Chinese Word Segmentation

    Qi Zhang, Xiaoyu Liu, Jinlan Fu
    AAAI | Full Text | Abstract | BibTeX
    In recent years, deep neural networks have achieved significant success in Chinese word segmentation and many other natural language processing tasks. Most of these algorithms are end-to-end trainable systems and can effectively process and learn from large scale labeled datasets. However, these methods typically lack the capability of processing rare words and data whose domains are different from training data. Previous statistical methods have demonstrated that human knowledge can provide valuable information for handling rare cases and domain shifting problems. In this paper, we seek to address the problem of incorporating dictionaries into neural networks for the Chinese word segmentation task. Two different methods that extend the bi-directional long short-term memory neural network are proposed to perform the task. To evaluate the performance of the proposed methods, methods based on state-of-the-art supervised models, as well as domain adaptation approaches, are compared with our methods on nine datasets from different domains. The experimental results demonstrate that the proposed methods can achieve better performance than other state-of-the-art neural network methods and domain adaptation approaches in most cases.
    @inproceedings{zhang2018neural,
      title = {Neural Networks Incorporating Dictionaries for Chinese Word Segmentation},
      author = {Qi Zhang and Xiaoyu Liu and Jinlan Fu},
      booktitle = {AAAI},
      year = {2018}
    }
    

Service

  • Senior PC: IJCAI 2021
  • Conference Reviewer: ACL/EMNLP/IJCAI/AAAI (since 2019); CCL 2020; NeurIPS 2021.
  • Journal Reviewer: Expert Systems with Applications