At UCLA-NLP, our mission is to develop reliable, fair, accountable, and robust natural language understanding and generation technology to benefit everyone.

Below, we highlight our ACL 2021 research papers on the following topics:


Fairness and Social NLP

  1. Societal Biases in Language Generation: Progress and Challenges

    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in ACL, 2021.
    Full Text BibTeX Details
    @inproceedings{sheng2021societal,
      title = {Societal Biases in Language Generation: Progress and Challenges},
      author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
      booktitle = {ACL},
      year = {2021}
    }
    

    Related Publications

    1. Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia

      Jiao Sun and Nanyun Peng, in ACL, 2021.
      Full Text BibTeX Details
      @inproceedings{sun2021men,
        title = {Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia},
        author = {Sun, Jiao and Peng, Nanyun},
        booktitle = {ACL},
        year = {2021}
      }
      
      Details

    Details
  2. Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble

    Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, and Xuanjing Huang, in ACL, 2021.
    Full Text Code BibTeX Details
    Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples. We propose Dirichlet Neighborhood Ensemble (DNE), a randomized method for training a robust model to defend against synonym substitution-based attacks. During training, DNE forms virtual sentences by sampling embedding vectors for each word in an input sentence from a convex hull spanned by the word and its synonyms, and it augments them with the training data. In this way, the model is robust to adversarial attacks while maintaining the performance on the original clean data. DNE is agnostic to the network architectures and scales to large models (e.g., BERT) for NLP applications. Through extensive experimentation, we demonstrate that our method consistently outperforms recently proposed defense methods by a significant margin across different network architectures and multiple data sets.
    @inproceedings{zhou2021defense,
      title = {Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble},
      author = {Zhou, Yi and Zheng, Xiaoqing and Hsieh, Cho-Jui and Chang, Kai-Wei and Huang, Xuanjing},
      booktitle = {ACL},
      year = {2021}
    }
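
    To make the convex-hull sampling concrete, here is a minimal sketch (our illustration, not the authors' released code) of drawing a virtual word embedding with Dirichlet-distributed convex-combination weights; the toy vocabulary, synonym table, and alpha value are hypothetical.

    import numpy as np

    def sample_virtual_embedding(word, embeddings, synonyms, alpha=1.0, rng=None):
        """Draw an embedding from the convex hull of `word` and its synonyms.

        Dirichlet-sampled weights are non-negative and sum to 1, so every
        sample stays inside the hull spanned by the word vectors.
        """
        rng = rng or np.random.default_rng()
        hull_words = [word] + synonyms.get(word, [])
        vectors = np.stack([embeddings[w] for w in hull_words])    # (k, d)
        weights = rng.dirichlet(alpha * np.ones(len(hull_words)))  # (k,)
        return weights @ vectors                                   # (d,)

    # Toy usage with made-up 3-d embeddings and a one-entry synonym table.
    emb = {"good": np.array([0.9, 0.1, 0.0]),
           "great": np.array([0.8, 0.2, 0.1]),
           "fine": np.array([0.7, 0.3, 0.2])}
    syn = {"good": ["great", "fine"]}
    print(sample_virtual_embedding("good", emb, syn))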
    

    Related Publications

    1. VideoCon: Robust video-language alignment via contrast captions

      Hritik Bansal, Yonatan Bitton, Idan Szpektor, Kai-Wei Chang, and Aditya Grover, in CVPR, 2024.
      Full Text Code Demo Abstract BibTeX Details Best paper at DPFM workshop at ICLR
      Despite being (pre)trained on a massive amount of data, state-of-the-art video-language alignment models are not robust to semantically-plausible contrastive changes in the video captions. Our work addresses this by identifying a broad spectrum of contrast misalignments, such as replacing entities, actions, and flipping event order, which alignment models should be robust against. To this end, we introduce the VideoCon, a video-language alignment dataset constructed by a large language model that generates plausible contrast video captions and explanations for differences between original and contrast video captions. Then, a generative video-language model is finetuned with VideoCon to assess video-language entailment and generate explanations. Our VideoCon-based alignment model significantly outperforms current models. It exhibits a 12-point increase in AUC for the video-language alignment task on human-generated contrast captions. Finally, our model sets new state of the art zero-shot performance in temporally-extensive video-language tasks such as text-to-video retrieval (SSv2-Temporal) and video question answering (ATP-Hard). Moreover, our model shows superior performance on novel videos and human-crafted captions and explanations.
      @inproceedings{bansal2023videocon,
        author = {Bansal, Hritik and Bitton, Yonatan and Szpektor, Idan and Chang, Kai-Wei and Grover, Aditya},
        title = {VideoCon: Robust video-language alignment via contrast captions},
        booktitle = {CVPR},
        year = {2024}
      }
      
      Details
    2. Red Teaming Language Model Detectors with Language Models

      Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, and Cho-Jui Hsieh, in TACL, 2023.
      Full Text Code Abstract BibTeX Details
      The prevalence and high capacity of large language models (LLMs) present significant safety and ethical risks when malicious users exploit them for automated content generation. To prevent the potentially deceptive usage of LLMs, recent works have proposed several algorithms to detect machine-generated text. In this paper, we systematically test the reliability of the existing detectors, by designing two types of attack strategies to fool the detectors: 1) replacing words with their synonyms based on the context; 2) altering the writing style of generated text. These strategies are implemented by instructing LLMs to generate synonymous word substitutions or writing directives that modify the style without human involvement, and the LLMs leveraged in the attack can also be protected by detectors. Our research reveals that our attacks effectively compromise the performance of all tested detectors, thereby underscoring the urgent need for the development of more robust machine-generated text detection systems.
      @inproceedings{shi2023red,
        author = {Shi, Zhouxing and Wang, Yihan and Yin, Fan and Chen, Xiangning and Chang, Kai-Wei and Hsieh, Cho-Jui},
        title = {Red Teaming Language Model Detectors with Language Models},
        booktitle = {TACL},
        year = {2023}
      }
      
      Details
    3. CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning

      Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, and Kai-Wei Chang, in ICCV, 2023.
      Full Text Code Abstract BibTeX Details Best Paper Award at ICLR Workshop, Oral at ICCV (195 out of 8088 submissions, top 2.5%)
      Multimodal contrastive pretraining has been used to train multimodal representation models, such as CLIP, on large amounts of paired image-text data. However, previous studies have revealed that such models are vulnerable to backdoor attacks. Specifically, when trained on backdoored examples, CLIP learns spurious correlations between the embedded backdoor trigger and the target label, aligning their representations in the joint embedding space. Injecting even a small number of poisoned examples, such as 75 examples in 3 million pretraining data, can significantly manipulate the model’s behavior, making it difficult to detect or unlearn such correlations. To address this issue, we propose CleanCLIP, a finetuning framework that weakens the learned spurious associations introduced by backdoor attacks by independently re-aligning the representations for individual modalities. We demonstrate that unsupervised finetuning using a combination of multimodal contrastive and unimodal self-supervised objectives for individual modalities can significantly reduce the impact of the backdoor attack. We show empirically that CleanCLIP maintains model performance on benign examples while erasing a range of backdoor attacks on multimodal contrastive learning.
      @inproceedings{bansal2023cleanclip,
        author = {Bansal, Hritik and Singhi, Nishad and Yang, Yu and Yin, Fan and Grover, Aditya and Chang, Kai-Wei},
        title = {CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning},
        booktitle = {ICCV},
        year = {2023}
      }
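
      The core recipe, finetuning with a multimodal contrastive term plus unimodal self-supervised terms that re-align each modality independently, can be sketched as a loss combination. This is a minimal sketch under our own assumptions (the augmented-view embeddings and the weight lam are hypothetical; the released CleanCLIP code may differ):

      import torch
      import torch.nn.functional as F

      def info_nce(a, b, temperature=0.07):
          """Symmetric InfoNCE between two batches of matched embeddings."""
          a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
          logits = a @ b.t() / temperature
          targets = torch.arange(a.size(0), device=a.device)
          return 0.5 * (F.cross_entropy(logits, targets) +
                        F.cross_entropy(logits.t(), targets))

      def cleanclip_style_loss(img_z, txt_z, img_z_aug, txt_z_aug, lam=1.0):
          """Multimodal contrastive term plus unimodal self-supervision terms.

          *_aug are embeddings of independently augmented views of the same
          images / texts (hypothetical names).
          """
          multimodal = info_nce(img_z, txt_z)  # align image <-> text
          unimodal = info_nce(img_z, img_z_aug) + info_nce(txt_z, txt_z_aug)
          return multimodal + lam * unimodal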
      
      Details
    4. ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation

      Fan Yin, Yao Li, Cho-Jui Hsieh, and Kai-Wei Chang, in EMNLP, 2022.
      Full Text Abstract BibTeX Details
      Adversarial Examples Detection (AED) is a crucial defense technique against adversarial attacks and has drawn increasing attention from the Natural Language Processing (NLP) community. Despite the surge of new AED methods, our studies show that existing methods heavily rely on a shortcut to achieve good performance. In other words, current search-based adversarial attacks in NLP stop once model predictions change, and thus most adversarial examples generated by those attacks are located near model decision boundaries. To surpass this shortcut and fairly evaluate AED methods, we propose to test AED methods with Far Boundary (FB) adversarial examples. Existing methods show worse than random guess performance under this scenario. To overcome this limitation, we propose a new technique, ADDMU, adversary detection with data and model uncertainty, which combines two types of uncertainty estimation for both regular and FB adversarial example detection. Our new method outperforms previous methods by 3.6 and 6.0 AUC points under each scenario. Finally, our analysis shows that the two types of uncertainty provided by ADDMU can be leveraged to characterize adversarial examples and identify the ones that contribute most to model’s robustness in adversarial training.
      @inproceedings{yin2022addmu,
        title = {ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation},
        author = {Yin, Fan and Li, Yao and Hsieh, Cho-Jui and Chang, Kai-Wei},
        booktitle = {EMNLP},
        year = {2022}
      }
      
      Details
    5. Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers

      Jieyu Zhao, Xuezhi Wang, Yao Qin, Jilin Chen, and Kai-Wei Chang, in EMNLP-Finding (short), 2022.
      Full Text BibTeX Details
      @inproceedings{zhao2022investigating,
        title = {Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers},
        author = {Zhao, Jieyu and Wang, Xuezhi and Qin, Yao and Chen, Jilin and Chang, Kai-Wei},
        booktitle = {EMNLP-Finding (short)},
        year = {2022}
      }
      
      Details
    6. Unsupervised Syntactically Controlled Paraphrase Generation with Abstract Meaning Representations

      Kuan-Hao Huang, Varun Iyer, Anoop Kumar, Sriram Venkatapathy, Kai-Wei Chang, and Aram Galstyan, in EMNLP-Finding (short), 2022.
      Full Text BibTeX Details
      @inproceedings{huang2022unsupervised,
        title = {Unsupervised Syntactically Controlled Paraphrase Generation with Abstract Meaning Representations},
        author = {Huang, Kuan-Hao and Iyer, Varun and Kumar, Anoop and Venkatapathy, Sriram and Chang, Kai-Wei and Galstyan, Aram},
        booktitle = {EMNLP-Finding (short)},
        year = {2022}
      }
      
      Details
    7. Improving the Adversarial Robustness of NLP Models by Information Bottleneck

      Cenyuan Zhang, Xiang Zhou, Yixin Wan, Xiaoqing Zheng, Kai-Wei Chang, and Cho-Jui Hsieh, in ACL-Finding, 2022.
      Full Text Abstract BibTeX Details
      Existing studies have demonstrated that adversarial examples can be directly attributed to the presence of non-robust features, which are highly predictive, but can be easily manipulated by adversaries to fool NLP models. In this study, we explore the feasibility of capturing task-specific robust features, while eliminating the non-robust ones by using the information bottleneck theory. Through extensive experiments, we show that the models trained with our information bottleneck-based method are able to achieve a significant improvement in robust accuracy, exceeding performances of all the previously reported defense methods while suffering almost no performance drop in clean accuracy on SST-2, AGNEWS and IMDB datasets.
      @inproceedings{zhang2022improving,
        title = {Improving the Adversarial Robustness of NLP Models by Information Bottleneck},
        author = {Zhang, Cenyuan and Zhou, Xiang and Wan, Yixin and Zheng, Xiaoqing and Chang, Kai-Wei and Hsieh, Cho-Jui},
        booktitle = {ACL-Finding},
        year = {2022}
      }
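
      A common way to instantiate an information-bottleneck objective for classification is the variational form sketched below: cross-entropy keeps the stochastic representation predictive of the label, while a KL term to a standard-normal prior squeezes out input-specific (non-robust) detail. This is a generic sketch under our own assumptions, not necessarily the paper's exact formulation.

      import torch
      import torch.nn.functional as F

      def reparameterize(mu, logvar):
          """Sample z = mu + sigma * eps so gradients flow through the sampling."""
          return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

      def vib_loss(logits, labels, mu, logvar, beta=1e-3):
          """Cross-entropy plus beta-weighted KL(q(z|x) || N(0, I))."""
          ce = F.cross_entropy(logits, labels)
          kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
          return ce + beta * kl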
      
      Details
    8. Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

      Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, and Cho-Jui Hsieh, in EMNLP, 2021.
      Full Text Abstract BibTeX Details
      Recent studies have shown that deep neural networks are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models. However, there is a lack of systematic study comparing different defense approaches under the same attack setting. In this paper, we seek to fill this gap through a comprehensive study of the behavior of neural text classifiers trained with various defense methods under representative adversarial attacks. In addition, we propose an effective method to further improve the robustness of neural text classifiers against such attacks, and it achieves the highest accuracy on both clean and adversarial examples on the AGNEWS and IMDB datasets by a significant margin.
      @inproceedings{li2021searching,
        title = {Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution},
        author = {Li, Zongyi and Xu, Jianhan and Zeng, Jiehang and Li, Linyang and Zheng, Xiaoqing and Zhang, Qi and Chang, Kai-Wei and Hsieh, Cho-Jui},
        presentation_id = {https://underline.io/events/192/posters/8225/poster/38025-searching-for-an-effective-defender-benchmarking-defense-against-adversarial-word-substitution},
        booktitle = {EMNLP},
        year = {2021}
      }
      
      Details
    9. On the Transferability of Adversarial Attacks against Neural Text Classifier

      Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, and Kai-Wei Chang, in EMNLP, 2021.
      Full Text Abstract BibTeX Details
      Deep neural networks are vulnerable to adversarial attacks, where a small perturbation to an input alters the model prediction. In many cases, malicious inputs intentionally crafted for one model can fool another model. In this paper, we present the first study to systematically investigate the transferability of adversarial examples for text classification models and explore how various factors, including network architecture, tokenization scheme, word embedding, and model capacity, affect the transferability of adversarial examples. Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models. Such adversarial examples reflect the defects of the learning process and the data bias in the training set. Finally, we derive word replacement rules that can be used for model diagnostics from these adversarial examples.
      @inproceedings{yuan2021on,
        title = {On the Transferability of Adversarial Attacks against Neural Text Classifier},
        author = {Yuan, Liping and Zheng, Xiaoqing and Zhou, Yi and Hsieh, Cho-Jui and Chang, Kai-Wei},
        presentation_id = {https://underline.io/events/192/posters/8223/poster/38067-on-the-transferability-of-adversarial-attacks-against-neural-text-classifier},
        booktitle = {EMNLP},
        year = {2021}
      }
      
      Details
    10. Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation

      Chong Zhang, Jieyu Zhao, Huan Zhang, Kai-Wei Chang, and Cho-Jui Hsieh, in NAACL, 2021.
      Full Text Video Code Abstract BibTeX Details
      Robustness and counterfactual bias are usually evaluated on a test dataset. However, are these evaluations robust? If the test dataset is perturbed slightly, will the evaluation results remain the same? In this paper, we propose a "double perturbation" framework to uncover model weaknesses beyond the test dataset. The framework first perturbs the test dataset to construct abundant natural sentences similar to the test data, and then diagnoses the prediction change regarding a single-word substitution. We apply this framework to study two perturbation-based approaches that are used to analyze models’ robustness and counterfactual bias in English. (1) For robustness, we focus on synonym substitutions and identify vulnerable examples where prediction can be altered. Our proposed attack attains high success rates (96.0%-99.8%) in finding vulnerable examples on both original and robustly trained CNNs and Transformers. (2) For counterfactual bias, we focus on substituting demographic tokens (e.g., gender, race) and measure the shift of the expected prediction among constructed sentences. Our method is able to reveal the hidden model biases not directly shown in the test dataset.
      @inproceedings{zhang2021double,
        title = {Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation},
        booktitle = {NAACL},
        author = {Zhang, Chong and Zhao, Jieyu and Zhang, Huan and Chang, Kai-Wei and Hsieh, Cho-Jui},
        year = {2021},
        presentation_id = {https://underline.io/events/122/sessions/4229/lecture/19609-double-perturbation-on-the-robustness-of-robustness-and-counterfactual-bias-evaluation}
      }
      
      Details
    11. Provable, Scalable and Automatic Perturbation Analysis on General Computational Graphs

      Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, and Cho-Jui Hsieh, in NeurIPS, 2020.
      Full Text Code Abstract BibTeX Details
      Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense. The majority of LiRPA-based methods only consider simple feed-forward networks, and particular manual derivations and implementations are needed when they are extended to other architectures. In this paper, we develop an automatic framework to enable perturbation analysis on any neural network structure, by generalizing existing LiRPA algorithms such as CROWN to operate on general computational graphs. The flexibility, differentiability and ease of use of our framework allow us to obtain state-of-the-art results on LiRPA based certified defense on fairly complicated networks like DenseNet, ResNeXt and Transformer that are not supported by prior work. Our framework also enables loss fusion, a technique that significantly reduces the computational complexity of LiRPA for certified defense. For the first time, we demonstrate LiRPA based certified defense on Tiny ImageNet and Downscaled ImageNet, to which previous approaches cannot scale due to the relatively large number of classes. Our work also yields an open-source library for the community to apply LiRPA to areas beyond certified defense without much LiRPA expertise, e.g., we create a neural network with a provably flat optimization landscape. Our open source library is available at https://github.com/KaidiXu/auto_LiRPA
      @inproceedings{xu2020provable,
        author = {Xu, Kaidi and Shi, Zhouxing and Zhang, Huan and Wang, Yihan and Chang, Kai-Wei and Huang, Minlie and Kailkhura, Bhavya and Lin, Xue and Hsieh, Cho-Jui},
        title = {Provable, Scalable and Automatic Perturbation Analysis on General Computational Graphs},
        booktitle = {NeurIPS},
        year = {2020}
      }
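
      Since the paper ships an open-source library, a short usage sketch may help. This follows auto_LiRPA's documented interface as we understand it (APIs evolve across versions, so treat the details as approximate):

      import torch
      from auto_LiRPA import BoundedModule, BoundedTensor
      from auto_LiRPA.perturbations import PerturbationLpNorm

      # Any supported computational graph; a tiny MLP keeps the example small.
      model = torch.nn.Sequential(
          torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
      x0 = torch.randn(1, 4)

      # Wrap the model so linear relaxation bounds can propagate through its graph.
      bounded = BoundedModule(model, torch.empty_like(x0))
      # Declare an L-infinity ball of radius 0.1 around the input.
      x = BoundedTensor(x0, PerturbationLpNorm(norm=float("inf"), eps=0.1))
      # Provable lower/upper bounds on each output neuron under the perturbation.
      lb, ub = bounded.compute_bounds(x=(x,), method="CROWN")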
      
      Details
    12. On the Robustness of Language Encoders against Grammatical Errors

      Fan Yin, Quanyu Long, Tao Meng, and Kai-Wei Chang, in ACL, 2020.
      Full Text Slides Video Code Abstract BibTeX Details
      We conduct a thorough study to diagnose the behaviors of pre-trained language encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical errors. Specifically, we collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data. We use this approach to facilitate debugging models on downstream applications. Results confirm that the performance of all tested models is affected but the degree of impact varies. To interpret model behaviors, we further design a linguistic acceptability task to reveal their abilities in identifying ungrammatical sentences and the position of errors. We find that fixed contextual encoders with a simple classifier trained on the prediction of sentence correctness are able to locate error positions. We also design a cloze test for BERT and discover that BERT captures the interaction between errors and specific tokens in context. Our results shed light on understanding the robustness and behaviors of language encoders against grammatical errors.
      @inproceedings{yin2020robustness,
        author = {Yin, Fan and Long, Quanyu and Meng, Tao and Chang, Kai-Wei},
        title = {On the Robustness of Language Encoders against Grammatical Errors},
        booktitle = {ACL},
        presentation_id = {https://virtual.acl2020.org/paper_main.310.html},
        year = {2020}
      }
      
      Details
    13. Robustness Verification for Transformers

      Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, and Cho-Jui Hsieh, in ICLR, 2020.
      Full Text Video Code Abstract BibTeX Details
      Robustness verification that aims to formally certify the prediction behavior of neural networks has become an important tool for understanding the behavior of a given model and for obtaining safety guarantees. However, previous methods are usually limited to relatively simple neural networks. In this paper, we consider the robustness verification problem for Transformers. Transformers have complex self-attention layers that pose many challenges for verification, including cross-nonlinearity and cross-position dependency, which have not been discussed in previous work. We resolve these challenges and develop the first verification algorithm for Transformers. The certified robustness bounds computed by our method are significantly tighter than those by naive Interval Bound Propagation. These bounds also shed light on interpreting Transformers as they consistently reflect the importance of words in sentiment analysis.
      @inproceedings{shi2020robustness,
        author = {Shi, Zhouxing and Zhang, Huan and Chang, Kai-Wei and Huang, Minlie and Hsieh, Cho-Jui},
        title = {Robustness Verification for Transformers},
        booktitle = {ICLR},
        year = {2020}
      }
      
      Details
    14. Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification

      Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang, in EMNLP, 2019.
      Full Text Code Abstract BibTeX Details
      Adversarial attacks against machine learning models have threatened various real-world applications such as spam filtering and sentiment analysis. In this paper, we propose a novel framework, learning to DIScriminate Perturbations (DISP), to identify and adjust malicious perturbations, thereby blocking adversarial attacks for text classification models. To identify adversarial attacks, a perturbation discriminator validates how likely a token in the text is perturbed and provides a set of potential perturbations. For each potential perturbation, an embedding estimator learns to restore the embedding of the original word based on the context and a replacement token is chosen based on approximate kNN search. DISP can block adversarial attacks for any NLP model without modifying the model structure or training procedure. Extensive experiments on two benchmark datasets demonstrate that DISP significantly outperforms baseline methods in blocking adversarial attacks for text classification. In addition, in-depth analysis shows the robustness of DISP across different situations.
      @inproceedings{zhou2019learning,
        author = {Zhou, Yichao and Jiang, Jyun-Yu and Chang, Kai-Wei and Wang, Wei},
        title = {Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification},
        booktitle = {EMNLP},
        year = {2019}
      }
      
      Details
    15. Retrofitting Contextualized Word Embeddings with Paraphrases

      Weijia Shi, Muhao Chen, Pei Zhou, and Kai-Wei Chang, in EMNLP (short), 2019.
      Full Text Slides Video Code Abstract BibTeX Details
      Contextualized word embedding models, such as ELMo, generate meaningful representations of words and their context. These models have been shown to have a great impact on downstream applications. However, in many cases, the contextualized embedding of a word changes drastically when the context is paraphrased. As a result, the downstream model is not robust to paraphrasing and other linguistic variations. To enhance the stability of contextualized word embedding models, we propose an approach to retrofitting contextualized embedding models with paraphrase contexts. Our method learns an orthogonal transformation on the input space, which seeks to minimize the variance of word representations on paraphrased contexts. Experiments show that the retrofitted model significantly outperforms the original ELMo on various sentence classification and language inference tasks.
      @inproceedings{shi2019retrofitting,
        author = {Shi, Weijia and Chen, Muhao and Zhou, Pei and Chang, Kai-Wei},
        title = {Retrofitting Contextualized Word Embeddings with Paraphrases},
        booktitle = {EMNLP (short)},
        vimeo_id = {430797636},
        year = {2019}
      }
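
      The objective, learning an orthogonal map that pulls a word's contextual embeddings together across paraphrased contexts, can be sketched as below. This is our own simplification: orthogonality is encouraged with a penalty rather than enforced exactly, and the tensors are hypothetical.

      import torch

      def retrofit_loss(W, contexts, ortho_weight=1.0):
          """contexts: (n_paraphrases, d) embeddings of one word in paraphrased
          contexts. Penalize the variance of the W-transformed embeddings plus
          the deviation of W from orthogonality."""
          z = contexts @ W.t()
          variance = ((z - z.mean(dim=0, keepdim=True)) ** 2).sum(dim=1).mean()
          eye = torch.eye(W.size(0), device=W.device)
          ortho_penalty = ((W @ W.t() - eye) ** 2).sum()
          return variance + ortho_weight * ortho_penalty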
      
      Details
    16. Generating Natural Language Adversarial Examples

      Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang, in EMNLP (short), 2018.
      Full Text Code Abstract BibTeX Details Top-10 cited paper at EMNLP 18
      Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the network to misclassify. In the image domain, these perturbations can often be made virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a population-based optimization algorithm to generate semantically and syntactically similar adversarial examples. We demonstrate via a human study that 94.3% of the generated examples are classified to the original label by human evaluators, and that the examples are perceptibly quite similar. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.
      @inproceedings{alzanto2018generating,
        author = {Alzantot, Moustafa and Sharma, Yash and Elgohary, Ahmed and Ho, Bo-Jhang and Srivastava, Mani and Chang, Kai-Wei},
        title = {Generating Natural Language Adversarial Examples},
        booktitle = {EMNLP (short)},
        year = {2018}
      }
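
      The population-based search can be pictured as a standard genetic loop over synonym substitutions. A toy sketch (the fitness function, synonym table, and stopping score are placeholders, not the authors' exact operators):

      import random

      def genetic_attack(sentence, fitness, synonyms, pop_size=20, generations=10):
          """Population-based search for an adversarial paraphrase.

          fitness(tokens) returns a higher score when the victim model is
          closer to misclassifying; synonyms maps a word to substitute words.
          """
          def mutate(tokens):
              tokens = list(tokens)
              i = random.randrange(len(tokens))
              options = synonyms.get(tokens[i], [])
              if options:
                  tokens[i] = random.choice(options)
              return tokens

          def crossover(a, b):
              return [random.choice(pair) for pair in zip(a, b)]

          population = [mutate(sentence.split()) for _ in range(pop_size)]
          for _ in range(generations):
              best = max(population, key=fitness)
              if fitness(best) >= 1.0:  # e.g., the predicted label has flipped
                  break
              ranked = sorted(population, key=fitness, reverse=True)
              parents = ranked[: max(2, pop_size // 2)]
              population = [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(pop_size)]
          return max(population, key=fitness)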
      
      Details

    Details
  3. Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?

    Jieyu Zhao, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Kai-Wei Chang, in ACL-Finding (short), 2021.
    Full Text BibTeX Details
    Is it possible to use natural language to intervene in a model’s behavior and alter its prediction in a desired way? We investigate the effectiveness of natural language interventions for reading-comprehension systems, studying this in the context of social stereotypes. Specifically, we propose a new language understanding task, Linguistic Ethical Interventions (LEI), where the goal is to amend a question-answering (QA) model’s unethical behavior by communicating context-specific principles of ethics and equity to it. To this end, we build upon recent methods for quantifying a system’s social stereotypes, augmenting them with different kinds of ethical interventions and the desired model behavior under such interventions. Our zero-shot evaluation finds that even today’s powerful neural language models are extremely poor ethical-advice takers, that is, they respond surprisingly little to ethical interventions even though these interventions are stated as simple sentences. Few-shot learning improves model behavior but remains far from the desired outcome, especially when evaluated for various types of generalization. Our new task thus poses a novel language understanding challenge for the community.
    @inproceedings{zhao2021ethical,
      title = {Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?},
      author = {Zhao, Jieyu and Khashabi, Daniel and Khot, Tushar and Sabharwal, Ashish and Chang, Kai-Wei},
      booktitle = {ACL-Finding (short)},
      year = {2021}
    }
    
    Details
  4. Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification

    Yada Pruksachatkun, Satyapriya Krishna, Jwala Dhamala, Rahul Gupta, and Kai-Wei Chang, in ACL-Finding, 2021.
    Full Text Code BibTeX Details
    Existing bias mitigation methods to reduce disparities in model outcomes across cohorts have focused on data augmentation, debiasing model embeddings, or adding fairness-based optimization objectives during training. Separately, certified word substitution robustness methods have been developed to decrease the impact of spurious features and synonym substitutions on model predictions. While their end goals are different, they both aim to encourage models to make the same prediction for certain changes in the input. In this paper, we investigate the utility of certified word substitution robustness methods to improve equality of odds and equality of opportunity on multiple text classification tasks. We observe that certified robustness methods improve fairness, and using both robustness and bias mitigation methods in training results in improvements on both fronts.
    @inproceedings{pruksachatkun2021robustness,
      title = {Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification},
      author = {Pruksachatkun, Yada and Krishna, Satyapriya and Dhamala, Jwala and Gupta, Rahul and Chang, Kai-Wei},
      booktitle = {ACL-Finding},
      year = {2021}
    }
    

    Related Publications

    1. Measuring Fairness of Text Classifiers via Prediction Sensitivity

      Satyapriya Krishna, Rahul Gupta, Apurv Verma, Jwala Dhamala, Yada Pruksachatkun, and Kai-Wei Chang, in ACL, 2022.
      Full Text Abstract BibTeX Details
      With the rapid growth in language processing applications, fairness has emerged as an important consideration in data-driven solutions. Although various fairness definitions have been explored in the recent literature, there is a lack of consensus on which metrics most accurately reflect the fairness of a system. In this work, we propose a new formulation: ACCUMULATED PREDICTION SENSITIVITY, which measures fairness in machine learning models based on the model’s prediction sensitivity to perturbations in input features. The metric attempts to quantify the extent to which a single prediction depends on a protected attribute, where the protected attribute encodes the membership status of an individual in a protected group. We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness. It also correlates well with humans’ perception of fairness. We conduct experiments on two text classification datasets: JIGSAW TOXICITY and BIAS IN BIOS, and evaluate the correlations between metrics and manual annotations on whether the model produced a fair outcome. We observe that the proposed fairness metric based on prediction sensitivity is statistically significantly more correlated with human annotation than the existing counterfactual fairness metric.
      @inproceedings{krishna2022measuring,
        title = {Measuring Fairness of Text Classifiers via Prediction Sensitivity},
        author = {Krishna, Satyapriya and Gupta, Rahul and Verma, Apurv and Dhamala, Jwala and Pruksachatkun, Yada and Chang, Kai-Wei},
        booktitle = {ACL},
        year = {2022}
      }
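
      One plausible operational reading of prediction sensitivity (our own gradient-based sketch, not the paper's exact definition of ACCUMULATED PREDICTION SENSITIVITY) accumulates how strongly each prediction responds to the features encoding the protected attribute:

      import torch

      def accumulated_prediction_sensitivity(model, inputs, protected_mask):
          """Average gradient norm of the top prediction w.r.t. protected features.

          Assumes continuous feature inputs of shape (n, d); protected_mask is
          a 0/1 tensor of shape (d,) marking the features that encode
          protected-group membership (a hypothetical encoding).
          """
          inputs = inputs.clone().requires_grad_(True)
          probs = torch.softmax(model(inputs), dim=-1)
          top = probs.max(dim=-1).values.sum()
          (grad,) = torch.autograd.grad(top, inputs)
          return (grad * protected_mask).norm(dim=-1).mean()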
      
      Details
    2. Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification

      Yada Pruksachatkun, Satyapriya Krishna, Jwala Dhamala, Rahul Gupta, and Kai-Wei Chang, in ACL-Finding, 2021.
      Full Text Code Abstract BibTeX Details
      Existing bias mitigation methods to reduce disparities in model outcomes across cohorts have focused on data augmentation, debiasing model embeddings, or adding fairness-based optimization objectives during training. Separately, certified word substitution robustness methods have been developed to decrease the impact of spurious features and synonym substitutions on model predictions. While their end goals are different, they both aim to encourage models to make the same prediction for certain changes in the input. In this paper, we investigate the utility of certified word substitution robustness methods to improve equality of odds and equality of opportunity on multiple text classification tasks. We observe that certified robustness methods improve fairness, and using both robustness and bias mitigation methods in training results in an improvement in both fronts.
      @inproceedings{pruksachatkun2021robustness,
        title = {Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification},
        author = {Pruksachatkun, Yada and Krishna, Satyapriya and Dhamala, Jwala and Gupta, Rahul and Chang, Kai-Wei},
        booktitle = {ACL-Finding},
        year = {2021}
      }
      
      Details
    3. LOGAN: Local Group Bias Detection by Clustering

      Jieyu Zhao and Kai-Wei Chang, in EMNLP (short), 2020.
      Full Text Code Abstract BibTeX Details
      Machine learning techniques have been widely used in natural language processing (NLP). However, as revealed by many recent studies, machine learning models often inherit and amplify the societal biases in data. Various metrics have been proposed to quantify biases in model predictions. In particular, several of them evaluate disparity in model performance between protected groups and advantaged groups in the test corpus. However, we argue that evaluating bias at the corpus level is not enough for understanding how biases are embedded in a model. In fact, a model with similar aggregated performance between different groups on the entire data may behave differently on instances in a local region. To analyze and detect such local bias, we propose LOGAN, a new bias detection technique based on clustering. Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in a local region and allows us to better analyze the biases in model predictions.
      @inproceedings{zhao2020logan,
        author = {Zhao, Jieyu and Chang, Kai-Wei},
        title = {LOGAN: Local Group Bias Detection by Clustering},
        booktitle = {EMNLP (short)},
        presentation_id = {https://virtual.2020.emnlp.org/paper_main.2886.html},
        year = {2020}
      }
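
      In the spirit of LOGAN, here is a minimal sketch of cluster-level bias detection (our illustration with scikit-learn's KMeans, not the released code): cluster the examples, then compare per-group accuracy inside each cluster instead of only on the whole corpus.

      import numpy as np
      from sklearn.cluster import KMeans

      def local_bias_by_cluster(features, correct, group, n_clusters=10, seed=0):
          """Per-cluster accuracy gap between two demographic groups.

          features: (n, d) array of example representations
          correct:  (n,) 0/1 array, whether the model was right on each example
          group:    (n,) 0/1 array of group membership
          """
          labels = KMeans(n_clusters=n_clusters, n_init=10,
                          random_state=seed).fit_predict(features)
          gaps = {}
          for c in range(n_clusters):
              in_c = labels == c
              g0, g1 = in_c & (group == 0), in_c & (group == 1)
              if g0.any() and g1.any():
                  # A large |gap| flags a local region with biased behavior.
                  gaps[c] = correct[g1].mean() - correct[g0].mean()
          return gaps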
      
      Details
    4. Towards Understanding Gender Bias in Relation Extraction

      Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang, in ACL, 2020.
      Full Text Abstract BibTeX Details
      Recent developments in Neural Relation Extraction (NRE) have made significant strides towards automated knowledge base construction. While much attention has been dedicated towards improvements in accuracy, there have been no attempts in the literature to evaluate social biases exhibited in NRE systems. In this paper, we create WikiGenderBias, a distantly supervised dataset composed of over 45,000 sentences including a 10% human annotated test set for the purpose of analyzing gender bias in relation extraction systems. We find that when extracting spouse and hypernym (i.e., occupation) relations, an NRE system performs differently when the gender of the target entity is different. However, such disparity does not appear when extracting relations such as birth date or birth place. We also analyze two existing bias mitigation techniques, word embedding debiasing and data augmentation. Unfortunately, due to NRE models relying heavily on surface level cues, we find that existing bias mitigation approaches have a negative effect on NRE. Our analysis lays groundwork for future quantifying and mitigating bias in relation extraction.
      @inproceedings{gaut2020towards,
        author = {Gaut, Andrew and Sun, Tony and Tang, Shirlyn and Huang, Yuxin and Qian, Jing and ElSherief, Mai and Zhao, Jieyu and Mirza, Diba and Belding, Elizabeth and Chang, Kai-Wei and Wang, William Yang},
        title = {Towards Understanding Gender Bias in Relation Extraction},
        booktitle = {ACL},
        year = {2020},
        presentation_id = {https://virtual.acl2020.org/paper_main.265.html}
      }
      
      Details
    5. Mitigating Gender Bias Amplification in Distribution by Posterior Regularization

      Shengyu Jia, Tao Meng, Jieyu Zhao, and Kai-Wei Chang, in ACL (short), 2020.
      Full Text Slides Video Code Abstract BibTeX Details
      Advanced machine learning techniques have boosted the performance of natural language processing. Nevertheless, recent studies, e.g., Zhao et al. (2017), show that these techniques inadvertently capture the societal bias hidden in the corpus and further amplify it. However, their analysis is conducted only on models’ top predictions. In this paper, we investigate the gender bias amplification issue from the distribution perspective and demonstrate that the bias is amplified in the view of the predicted probability distribution over labels. We further propose a bias mitigation approach based on posterior regularization. With little performance loss, our method can almost remove the bias amplification in the distribution. Our study sheds light on understanding the bias amplification.
      @inproceedings{jia2020mitigating,
        author = {Jia, Shengyu and Meng, Tao and Zhao, Jieyu and Chang, Kai-Wei},
        title = {Mitigating Gender Bias Amplification in Distribution by Posterior Regularization},
        booktitle = {ACL (short)},
        year = {2020},
        presentation_id = {https://virtual.acl2020.org/paper_main.264.html}
      }
      
      Details
    6. Mitigating Gender Bias in Natural Language Processing: Literature Review

      Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Kai-Wei Chang, and William Yang Wang, in ACL, 2019.
      Full Text Slides Video Abstract BibTeX Details
      As Natural Language Processing (NLP) and Machine Learning (ML) tools rise in popularity, it becomes increasingly vital to recognize the role they play in shaping societal biases and stereotypes. Although NLP models have shown success in modeling various applications, they propagate and may even amplify gender bias found in text corpora. While the study of bias in artificial intelligence is not new, methods to mitigate gender bias in NLP are relatively nascent. In this paper, we review contemporary studies on recognizing and mitigating gender bias in NLP. We discuss gender bias based on four forms of representation bias and analyze methods recognizing gender bias. Furthermore, we discuss the advantages and drawbacks of existing gender debiasing methods. Finally, we discuss future studies for recognizing and mitigating gender bias in NLP.
      @inproceedings{sun2019mitigating,
        author = {Sun, Tony and Gaut, Andrew and Tang, Shirlyn and Huang, Yuxin and ElSherief, Mai and Zhao, Jieyu and Mirza, Diba and Chang, Kai-Wei and Wang, William Yang},
        title = {Mitigating Gender Bias in Natural Language Processing: Literature Review},
        booktitle = {ACL},
        vimeo_id = {384482151},
        year = {2019}
      }
      
      Details
    7. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods

      Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang, in NAACL (short), 2018.
      Full Text Poster Code Abstract BibTeX Details Top-10 cited paper at NAACL 18
      In this paper, we introduce a new benchmark for co-reference resolution focused on gender bias, WinoBias. Our corpus contains Winograd-schema style sentences with entities corresponding to people referred by their occupation (e.g. the nurse, the doctor, the carpenter). We demonstrate that a rule-based, a feature-rich, and a neural coreference system all link gendered pronouns to pro-stereotypical entities with higher accuracy than anti-stereotypical entities, by an average difference of 21.1 in F1 score. Finally, we demonstrate a data-augmentation approach that, in combination with existing word-embedding debiasing techniques, removes the bias demonstrated by these systems in WinoBias without significantly affecting their performance on existing datasets.
      @inproceedings{zhao2018gender,
        author = {Zhao, Jieyu and Wang, Tianlu and Yatskar, Mark and Ordonez, Vicente and Chang, Kai-Wei},
        title = {Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods},
        booktitle = {NAACL (short)},
        press_url = {https://www.stitcher.com/podcast/matt-gardner/nlp-highlights/e/55861936},
        year = {2018}
      }
      
      Details
    8. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints

      Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang, in EMNLP, 2017.
      Full Text Slides Code Abstract BibTeX Details EMNLP 2017 Best Long Paper Award; Top-10 cited paper at EMNLP 17
      Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found in web corpora.
      In this work, we study data and models associated with multilabel object classification and visual semantic role labeling. We find that (a) datasets for these tasks contain significant gender bias and (b) models trained on these datasets further amplify existing bias. For example, the activity cooking is over 33% more likely to involve females than males in a training set, but a trained model amplifies the disparity to 68% at test time. We propose to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for the resulting inference problems. Our method results in no performance loss for the underlying recognition task but decreases the magnitude of bias amplification by 33.3% and 44.9% for multilabel classification and visual semantic role labeling, respectively.
      @inproceedings{zhao2017men,
        author = {Zhao, Jieyu and Wang, Tianlu and Yatskar, Mark and Ordonez, Vicente and Chang, Kai-Wei},
        title = {Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints},
        booktitle = {EMNLP},
        year = {2017}
      }
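
      The calibration idea, corpus-level constraints on predictions solved with Lagrangian relaxation, reduces to dual ascent in the simplest case. A toy sketch with a single upper-bound constraint (the score matrix and label encoding are hypothetical):

      import numpy as np

      def calibrate_with_lagrangian(scores, counts_toward_ratio, max_ratio,
                                    lr=0.1, steps=100):
          """Dual ascent for one corpus-level constraint: the fraction of
          predictions in the constrained label set must not exceed max_ratio.

          scores: (n, k) model scores; counts_toward_ratio: (k,) 0/1 array
          marking the constrained labels (hypothetical encoding).
          """
          lam = 0.0
          preds = scores.argmax(axis=1)
          for _ in range(steps):
              # Inference with the current penalty folded into the scores.
              adjusted = scores - lam * counts_toward_ratio[None, :]
              preds = adjusted.argmax(axis=1)
              violation = counts_toward_ratio[preds].mean() - max_ratio
              if violation <= 0:
                  break
              lam += lr * violation  # strengthen the penalty while violated
          return preds, lam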
      
      Details

    Details
  1. Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia

    Jiao Sun and Nanyun Peng, in ACL, 2021.
    Full Text BibTeX Details
    @inproceedings{sun2021men,
      title = {Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia},
      author = {Sun, Jiao and Peng, Nanyun},
      booktitle = {ACL},
      year = {2021}
    }
    

    Related Publications

    1. Societal Biases in Language Generation: Progress and Challenges

      Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng, in ACL, 2021.
      Full Text BibTeX Details
      @inproceedings{sheng2021societal,
        title = {Societal Biases in Language Generation: Progress and Challenges},
        author = {Sheng, Emily and Chang, Kai-Wei and Natarajan, Premkumar and Peng, Nanyun},
        booktitle = {ACL},
        year = {2021}
      }
      
      Details

    Details

Language Generation

  1. Metaphor Generation with Conceptual Mappings

    Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, and Iryna Gurevych, in ACL, 2021.
    Full Text BibTeX Details
    @inproceedings{stowe2021metaphor,
      title = {Metaphor Generation with Conceptual Mappings},
      author = {Stowe, Kevin and Chakrabarty, Tuhin and Peng, Nanyun and Muresan, Smaranda and Gurevych, Iryna},
      booktitle = {ACL},
      year = {2021}
    }
    

    Related Publications

    1. MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding

      Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, and Nanyun Peng, in The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
      Full Text Poster Code Abstract BibTeX Details
      Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning. In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs. Based on a theoretically-grounded connection between metaphors and symbols, we propose a method to automatically construct a parallel corpus by transforming a large number of metaphorical sentences from the Gutenberg Poetry corpus to their literal counterpart using recent advances in masked language modeling coupled with commonsense inference. For the generation task, we incorporate a metaphor discriminator to guide the decoding of a sequence to sequence model fine-tuned on our parallel data to generate high-quality metaphors. Human evaluation on an independent test set of literal statements shows that our best model generates metaphors better than three well-crafted baselines 66% of the time on average. A task-based evaluation shows that human-written poems enhanced with metaphors proposed by our model are preferred 68% of the time compared to poems without metaphors.
      @inproceedings{chakrabarty2021mermaid,
        title = {MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding},
        author = {Chakrabarty, Tuhin and Zhang, Xurui and Muresan, Smaranda and Peng, Nanyun},
        booktitle = {The 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)},
        presentation_id = {https://underline.io/events/122/sessions/4240/lecture/19642-mermaid-metaphor-generation-with-symbolism-and-discriminative-decoding},
        talk_url = {https://underline.io/events/122/sessions/4240/lecture/19642-mermaid-metaphor-generation-with-symbolism-and-discriminative-decoding},
        year = {2021}
      }
      
      Details

    Details
  1. Select, Extract and Generate: Neural Keyphrase Generation with Layer-wise Coverage Attention

    Wasi Ahmad, Xiao Bai, Soomin Lee, and Kai-Wei Chang, in ACL, 2021.
    Full Text BibTeX Details
    In recent years, the deep neural sequence-to-sequence framework has demonstrated promising results in keyphrase generation. However, processing long documents using such deep neural networks requires high computational resources. To reduce the computational cost, the documents are typically truncated before being given as inputs. As a result, the models may miss essential points conveyed in a document. Moreover, most of the existing methods are either extractive (identify important phrases from the document) or generative (generate phrases word by word), and hence they do not benefit from the advantages of both modeling techniques. To address these challenges, we propose SEG-Net, a neural keyphrase generation model that is composed of two major components, (1) a selector that selects the salient sentences in a document, and (2) an extractor-generator that jointly extracts and generates keyphrases from the selected sentences. SEG-Net uses a self-attentive architecture, known as the Transformer, as the building block with a couple of unique features. First, SEG-Net incorporates a novel layer-wise coverage attention to summarize most of the points discussed in the target document. Second, it uses an informed copy attention mechanism to encourage focusing on different segments of the document during keyphrase extraction and generation. Besides, SEG-Net jointly learns keyphrase generation and part-of-speech tag prediction, where the latter provides syntactic supervision to the former. The experimental results on seven keyphrase generation benchmarks from scientific and web documents demonstrate that SEG-Net outperforms the state-of-the-art neural generative methods by a large margin in both domains.
    @inproceedings{ahmad2021select,
      title = {Select, Extract and Generate: Neural Keyphrase Generation with Layer-wise Coverage Attention},
      author = {Ahmad, Wasi and Bai, Xiao and Lee, Soomin and Chang, Kai-Wei},
      booktitle = {ACL},
      year = {2021}
    }
    

    Related Publications

    1. Representation Learning for Resource-Constrained Keyphrase Generation

      Di Wu, Wasi Uddin Ahmad, Sunipa Dev, and Kai-Wei Chang, in EMNLP-Finding, 2022.
      Full Text Code Abstract BibTeX Details
      State-of-the-art keyphrase generation methods generally depend on large annotated datasets, limiting their performance in domains with limited annotated data. To overcome this challenge, we design a data-oriented approach that first identifies salient information using unsupervised corpus-level statistics, and then learns a task-specific intermediate representation based on a pre-trained language model. We introduce salient span recovery and salient span prediction as denoising training objectives that condense the intra-article and inter-article knowledge essential for keyphrase generation. Through experiments on multiple keyphrase generation benchmarks, we show the effectiveness of the proposed approach for facilitating low-resource and zero-shot keyphrase generation. We further observe that the method especially benefits the generation of absent keyphrases, approaching the performance of models trained with large training sets.
      @inproceedings{wu2022representation,
        title = {Representation Learning for Resource-Constrained Keyphrase Generation},
        author = {Wu, Di and Ahmad, Wasi Uddin and Dev, Sunipa and Chang, Kai-Wei},
        booktitle = {EMNLP-Finding},
        year = {2022}
      }
      
      Details

    Details

Multilinguality

  1. Syntax-augmented Multilingual BERT for Cross-lingual Transfer

    Wasi Ahmad, Haoran Li, Kai-Wei Chang, and Yashar Mehdad, in ACL, 2021.
    Full Text Code BibTeX Details
    In recent years, we have seen a colossal effort in pre-training multilingual text encoders using large-scale corpora in many languages to facilitate cross-lingual transfer learning. However, due to typological differences across languages, cross-lingual transfer is challenging. Nevertheless, language syntax, e.g., syntactic dependencies, can bridge the typological gap. Previous works have shown that pre-trained multilingual encoders, such as mBERT (Devlin et al., 2019), capture language syntax, helping cross-lingual transfer. This work shows that explicitly providing language syntax and training mBERT with an auxiliary objective to encode the universal dependency tree structure helps cross-lingual transfer. We perform rigorous experiments on four NLP tasks, including text classification, question answering, named entity recognition, and task-oriented semantic parsing. The experiment results show that syntax-augmented mBERT improves cross-lingual transfer on popular benchmarks, such as PAWS-X and MLQA, by 1.4 and 1.6 points on average across all languages. In the generalized transfer setting, the performance is boosted significantly, by 3.9 and 3.1 points on average on PAWS-X and MLQA.
    @inproceedings{ahmad2021syntax,
      title = {Syntax-augmented Multilingual BERT for Cross-lingual Transfer},
      author = {Ahmad, Wasi and Li, Haoran and Chang, Kai-Wei and Mehdad, Yashar},
      booktitle = {ACL},
      year = {2021}
    }
    

    Related Publications

    1. Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction

      Kuan-Hao Huang, I.-Hung Hsu, Prem Natarajan, Kai-Wei Chang, and Nanyun Peng, in ACL, 2022.
      Full Text Code Abstract BibTeX Details
      We present a study on leveraging multilingual pre-trained generative language models for zero-shot cross-lingual event argument extraction (EAE). By formulating EAE as a language generation task, our method effectively encodes event structures and captures the dependencies between arguments. We design language-agnostic templates to represent the event argument structures, which are compatible with any language, hence facilitating the cross-lingual transfer. Our proposed model finetunes multilingual pre-trained generative language models to generate sentences that fill in the language-agnostic template with arguments extracted from the input passage. The model is trained on source languages and is then directly applied to target languages for event argument extraction. Experiments demonstrate that the proposed model outperforms the current state-of-the-art models on zero-shot cross-lingual EAE. Comprehensive studies and error analyses are presented to better understand the advantages and the current limitations of using generative language models for zero-shot cross-lingual transfer EAE.
      @inproceedings{huang2022multilingual,
        title = {Multilingual Generative Language Models for Zero-Shot Cross-Lingual Event Argument Extraction},
        author = {Huang, Kuan-Hao and Hsu, I-Hung and Natarajan, Prem and Chang, Kai-Wei and Peng, Nanyun},
        booktitle = {ACL},
        year = {2022}
      }
      
      Details
    2. Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training

      Kuan-Hao Huang, Wasi Ahmad, Nanyun Peng, and Kai-Wei Chang, in EMNLP, 2021.
      Full Text Code Abstract BibTeX Details
      Pre-trained multilingual language encoders, such as multilingual BERT and XLM-R, show great potential for zero-shot cross-lingual transfer. However, these multilingual encoders do not precisely align words and phrases across languages. In particular, learning alignments in the multilingual embedding space usually requires sentence-level or word-level parallel corpora, which are expensive to obtain for low-resource languages. An alternative is to make the multilingual encoders more robust: when fine-tuning the encoder on a downstream task, we train the encoder to tolerate noise in the contextual embedding spaces, so that even if the representations of different languages are not aligned well, the model can still achieve good performance on zero-shot cross-lingual transfer. In this work, we propose a learning strategy for training robust models by drawing connections between adversarial examples and the failure cases of zero-shot cross-lingual transfer. We adopt two widely used robust training methods, adversarial training and randomized smoothing, to train the desired robust model. The experimental results demonstrate that robust training improves zero-shot cross-lingual transfer on text classification tasks. The improvement is more significant in the generalized cross-lingual transfer setting, where the pair of input sentences belongs to two different languages. (A noise-injection sketch appears after this list.)
      @inproceedings{huang2021improving,
        title = {Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training},
        author = {Huang, Kuan-Hao and Ahmad, Wasi and Peng, Nanyun and Chang, Kai-Wei},
        presentation_id = {https://underline.io/events/192/posters/7783/poster/40656-improving-zero-shot-cross-lingual-transfer-learning-via-robust-training},
        booktitle = {EMNLP},
        year = {2021}
      }
      
      Details
    3. Syntax-augmented Multilingual BERT for Cross-lingual Transfer

      Wasi Ahmad, Haoran Li, Kai-Wei Chang, and Yashar Mehdad, in ACL, 2021.
      Full Text Video Code Abstract BibTeX Details
      In recent years, we have seen a colossal effort in pre-training multilingual text encoders using large-scale corpora in many languages to facilitate cross-lingual transfer learning. However, due to typological differences across languages, cross-lingual transfer is challenging. Nevertheless, language syntax, e.g., syntactic dependencies, can bridge the typological gap. Previous works have shown that pre-trained multilingual encoders, such as mBERT (Devlin et al., 2019), capture language syntax, helping cross-lingual transfer. This work shows that explicitly providing language syntax and training mBERT with an auxiliary objective to encode the universal dependency tree structure helps cross-lingual transfer. We perform rigorous experiments on four NLP tasks: text classification, question answering, named entity recognition, and task-oriented semantic parsing. The experimental results show that syntax-augmented mBERT improves cross-lingual transfer on popular benchmarks, such as PAWS-X and MLQA, by 1.4 and 1.6 points on average across all languages. In the generalized transfer setting, performance improves even more significantly, by 3.9 and 3.1 points on average on PAWS-X and MLQA. (A sketch of the auxiliary objective appears after this list.)
      @inproceedings{ahmad2021syntax,
        title = {Syntax-augmented Multilingual BERT for Cross-lingual Transfer},
        author = {Ahmad, Wasi and Li, Haoran and Chang, Kai-Wei and Mehdad, Yashar},
        booktitle = {ACL},
        year = {2021}
      }
      
      Details
    4. Evaluating the Values of Sources in Transfer Learning

      Md Rizwan Parvez and Kai-Wei Chang, in NAACL, 2021.
      Full Text Video Code Abstract BibTeX Details
      Transfer learning that adapts a model trained on data-rich sources to low-resource targets has been widely applied in natural language processing (NLP). However, when training a transfer model over multiple sources, not every source is equally useful for the target. To better transfer a model, it is essential to understand the values of the sources. In this paper, we develop SEAL-Shap, an efficient source valuation framework for quantifying the usefulness of the sources (e.g., domains/languages) in transfer learning, based on the Shapley value method. Experiments and comprehensive analyses on both cross-domain and cross-lingual transfers demonstrate that our framework is not only effective in choosing useful transfer sources, but the estimated source values also match the intuitive source-target similarity. (A Monte Carlo Shapley sketch appears after this list.)
      @inproceedings{parvez2021evaluating,
        title = {Evaluating the Values of Sources in Transfer Learning},
        author = {Parvez, Md Rizwan and Chang, Kai-Wei},
        booktitle = {NAACL},
        presentation_id = {https://underline.io/events/122/sessions/4261/lecture/19707-evaluating-the-values-of-sources-in-transfer-learning},
        year = {2021}
      }
      
      Details
    5. GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction

      Wasi Ahmad, Nanyun Peng, and Kai-Wei Chang, in AAAI, 2021.
      Full Text Code Abstract BibTeX Details
      Prevalent approaches in cross-lingual relation and event extraction use graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic representations such that models trained on one language can be applied to other languages. However, GCNs fall short in modeling long-range dependencies or disconnected words in the dependency tree. To address this challenge, we propose to utilize the self-attention mechanism, where we explicitly fuse structural information to learn the dependencies between words at different syntactic distances. We introduce GATE, a Graph Attention Transformer Encoder, and test its cross-lingual transferability on relation and event extraction tasks. We perform rigorous experiments on the widely used ACE05 dataset, which includes three typologically different languages: English, Chinese, and Arabic. The evaluation results show that GATE outperforms three recently proposed methods by a large margin. Our detailed analysis reveals that, due to its reliance on syntactic dependencies, GATE produces robust representations that facilitate transfer across languages. (A distance-aware attention sketch appears after this list.)
      @inproceedings{ahmad2021gate,
        author = {Ahmad, Wasi and Peng, Nanyun and Chang, Kai-Wei},
        title = {GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction},
        booktitle = {AAAI},
        year = {2021}
      }
      
      Details
    6. Cross-Lingual Dependency Parsing by POS-Guided Word Reordering

      Lu Liu, Yi Zhou, Jianhan Xu, Xiaoqing Zheng, Kai-Wei Chang, and Xuanjing Huang, in EMNLP-Finding, 2020.
      Full Text Abstract BibTeX Details
      We propose a novel approach to cross-lingual dependency parsing based on word reordering. The words in each sentence of a source language corpus are rearranged to match the word order of a target language under the guidance of a part-of-speech based language model (LM). To obtain the highest reordering score under the LM, a population-based optimization algorithm and its genetic operators are designed to deal with the combinatorial nature of such word reordering. A parser trained on the reordered corpus can then be used to parse sentences in the target language. We demonstrate through extensive experimentation that our approach achieves better or comparable results across 25 target languages (a 1.73% increase on average), and outperforms a baseline by a significant margin on languages that differ greatly from the source. For example, when transferring the English parser to Hindi and Latin, our approach outperforms the baseline by 15.3% and 6.7%, respectively. (A toy reordering sketch appears after this list.)
      @inproceedings{liu2020cross-lingual,
        author = {Liu, Lu and Zhou, Yi and Xu, Jianhan and Zheng, Xiaoqing and Chang, Kai-Wei and Huang, Xuanjing},
        title = {Cross-Lingual Dependency Parsing by POS-Guided Word Reordering},
        booktitle = {EMNLP-Finding},
        year = {2020}
      }
      
      Details
    7. Cross-lingual Dependency Parsing with Unlabeled Auxiliary Languages

      Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Kai-Wei Chang, and Nanyun Peng, in CoNLL, 2019.
      Full Text Poster Code Abstract BibTeX Details
      Cross-lingual transfer learning has become an important weapon to battle the unavailability of annotated resources for low-resource languages. One of the fundamental techniques for transferring across languages is learning language-agnostic representations, in the form of word embeddings or contextual encodings. In this work, we propose to leverage unannotated sentences from auxiliary languages to help learn language-agnostic representations. Specifically, we explore adversarial training for learning contextual encoders that produce invariant representations across languages to facilitate cross-lingual transfer. We conduct experiments on cross-lingual dependency parsing, where we train a dependency parser on a source language and transfer it to a wide range of target languages. Experiments on 28 target languages demonstrate that adversarial training significantly improves the overall transfer performance under several different settings. We conduct a careful analysis to evaluate the language-agnostic representations resulting from adversarial training.
      @inproceedings{ahmad2019crosslingual,
        author = {Ahmad, Wasi and Zhang, Zhisong and Ma, Xuezhe and Chang, Kai-Wei and Peng, Nanyun},
        title = {Cross-lingual Dependency Parsing with Unlabeled Auxiliary Languages},
        booktitle = {CoNLL},
        year = {2019}
      }
      
      Details
    8. Target Language-Aware Constrained Inference for Cross-lingual Dependency Parsing

      Tao Meng, Nanyun Peng, and Kai-Wei Chang, in EMNLP, 2019.
      Full Text Poster Code Abstract BibTeX Details
      Prior work on cross-lingual dependency parsing often focuses on capturing the commonalities between source and target languages and overlooks the potential of leveraging linguistic properties of the languages to facilitate the transfer. In this paper, we show that weak supervision of linguistic knowledge for the target languages can improve a cross-lingual graph-based dependency parser substantially. Specifically, we explore several types of corpus linguistic statistics and compile them into corpus-wise constraints to guide the inference process at test time. We adapt two techniques, Lagrangian relaxation and posterior regularization, to conduct inference with corpus-statistics constraints. Experiments show that Lagrangian relaxation and posterior regularization inference improve the performance on 15 and 17 out of 19 target languages, respectively. The improvements are especially significant for target languages that have different word order features from the source language.
      @inproceedings{meng2019target,
        author = {Meng, Tao and Peng, Nanyun and Chang, Kai-Wei},
        title = {Target Language-Aware Constrained Inference for Cross-lingual Dependency Parsing},
        booktitle = {EMNLP},
        year = {2019}
      }
      
      Details
    9. On Difficulties of Cross-Lingual Transfer with Order Differences: A Case Study on Dependency Parsing

      Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng, in NAACL, 2019.
      Full Text Video Code Abstract BibTeX Details
      Different languages might have different word orders. In this paper, we investigate cross-lingual transfer and posit that an order-agnostic model will perform better when transferring to distant foreign languages. To test our hypothesis, we train dependency parsers on an English corpus and evaluate their transfer performance on 30 other languages. Specifically, we compare encoders and decoders based on Recurrent Neural Networks (RNNs) and modified self-attentive architectures. The former relies on sequential information while the latter is more flexible at modeling word order. Rigorous experiments and detailed analysis show that RNN-based architectures transfer well to languages that are close to English, while self-attentive models have better overall cross-lingual transferability and perform especially well on distant languages.
      @inproceedings{ahmad2019difficulties,
        author = {Ahmad, Wasi Uddin and Zhang, Zhisong and Ma, Xuezhe and Hovy, Eduard and Chang, Kai-Wei and Peng, Nanyun},
        title = {On Difficulties of Cross-Lingual Transfer with Order Differences: A Case Study on Dependency Parsing},
        booktitle = {NAACL},
        year = {2019}
      }
      
      Details

    Details
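
Several of the methods in this section reduce to short sketches. First, the auxiliary objective of syntax-augmented mBERT can be approximated as a dependency-head prediction loss added to the task loss. The linear arc scorer and the toy gold heads below are our own assumptions for illustration, not the released implementation.

    import torch
    from transformers import BertModel, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
    encoder = BertModel.from_pretrained("bert-base-multilingual-cased")
    arc_scorer = torch.nn.Linear(768, 768, bias=False)  # simple bilinear arc scorer

    enc = tokenizer("The cat sat", return_tensors="pt")
    H = encoder(**enc).last_hidden_state[0]             # (seq_len, 768)

    head_logits = arc_scorer(H) @ H.T                   # score of head j for token i
    gold_heads = torch.zeros(H.size(0), dtype=torch.long)  # toy UD head indices
    aux_loss = torch.nn.functional.cross_entropy(head_logits, gold_heads)
    # fine-tuning objective (sketch): task_loss + lambda * aux_loss

Since universal dependency trees use one annotation scheme across languages, pushing the encoder to predict heads gives it a language-neutral signal that survives the transfer.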
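
Second, the robust-training idea of Huang et al. (EMNLP 2021) can be illustrated by injecting Gaussian noise into the word embedding space during fine-tuning, in the spirit of randomized smoothing. The function name and the noise scale sigma below are illustrative choices, not the paper's exact configuration.

    import torch
    from transformers import BertModel, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
    model = BertModel.from_pretrained("bert-base-multilingual-cased")

    def smoothed_cls(text, sigma=0.1):
        """Encode text with Gaussian-perturbed word embeddings."""
        enc = tokenizer(text, return_tensors="pt")
        word_embs = model.embeddings.word_embeddings(enc.input_ids)
        noisy = word_embs + sigma * torch.randn_like(word_embs)  # smoothing noise
        out = model(inputs_embeds=noisy, attention_mask=enc.attention_mask)
        return out.last_hidden_state[:, 0]   # [CLS] vector for the classifier

Training the classifier on such perturbed encodings teaches it to tolerate the embedding-space misalignment between languages that otherwise causes zero-shot transfer failures.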
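
Third, SEAL-Shap's source valuation rests on the Shapley value, which can be estimated by Monte Carlo sampling over source orderings. The utility function below is a made-up stand-in for "train on these sources, evaluate on the target"; the real framework adds efficiency tricks this toy omits.

    import random

    def shapley(sources, utility, num_samples=200):
        """Average marginal gain of each source over random orderings."""
        values = {s: 0.0 for s in sources}
        for _ in range(num_samples):
            random.shuffle(sources)
            coalition, prev = [], utility([])
            for s in sources:
                coalition.append(s)
                score = utility(coalition)
                values[s] += (score - prev) / num_samples
                prev = score
        return values

    # Invented utility: the target resembles "es" far more than "zh".
    sim = {"es": 0.9, "fr": 0.7, "zh": 0.2}
    print(shapley(list(sim), lambda c: sum(sim[s] for s in c)))

A source's value is thus how much, on average, adding it improves target performance, which is why the estimates track intuitive source-target similarity.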
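
Fourth, GATE's fusion of structure into self-attention can be caricatured as attention restricted by syntactic distance. The hard distance cutoff below is a deliberate simplification of the paper's distance-aware mechanism, and all tensors are toy data.

    import torch

    def syntax_attention(Q, K, V, dist, max_dist=4):
        """Self-attention limited to syntactically nearby word pairs."""
        scores = (Q @ K.transpose(-2, -1)) / K.size(-1) ** 0.5
        scores = scores.masked_fill(dist > max_dist, float("-inf"))
        return torch.softmax(scores, dim=-1) @ V

    Q = K = V = torch.randn(5, 16)          # 5 words, toy hidden size
    dist = torch.randint(1, 6, (5, 5))      # hops in the dependency tree
    dist.fill_diagonal_(0)                  # a word is distance 0 from itself
    out = syntax_attention(Q, K, V, dist)

Unlike a GCN, which only propagates along tree edges layer by layer, attention conditioned on tree distance lets far-apart but syntactically related words interact in a single step.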
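
Finally, the POS-guided reordering of Liu et al. (2020) searches for a word order that maximizes a POS language model score. The sketch below uses a single swap-mutation hill climb and invented bigram scores in place of the paper's full population-based genetic algorithm.

    import random

    POS_LM = {("DET", "NOUN"): 0.9, ("NOUN", "VERB"): 0.8, ("ADJ", "NOUN"): 0.7}

    def lm_score(pos_seq):
        """Score a POS sequence by (invented) target-language bigram weights."""
        return sum(POS_LM.get(bg, 0.01) for bg in zip(pos_seq, pos_seq[1:]))

    def reorder(words, pos, generations=100):
        best = list(range(len(words)))
        for _ in range(generations):
            child = best[:]
            i, j = random.sample(range(len(words)), 2)   # mutation: swap two slots
            child[i], child[j] = child[j], child[i]
            if lm_score([pos[k] for k in child]) > lm_score([pos[k] for k in best]):
                best = child
        return [words[k] for k in best]

    print(reorder(["cat", "the", "sat"], ["NOUN", "DET", "VERB"]))

The reordered source corpus then looks, in word order, like the target language, so a parser trained on it transfers with far less order mismatch.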

Information Extraction and Question Answering

[1]
  1. COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences

    Shikhar Singh, Nuan Wen, Yu Hou, Pegah Alipoormolabashi, Te-lin Wu, Xuezhe Ma, and Nanyun Peng, in ACL-Findings, 2021.
    Full Text BibTeX Details
    @inproceedings{sw2021com,
      title = {COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences},
      author = {Singh, Shikhar and Wen, Nuan and Hou, Yu and Alipoormolabashi, Pegah and Wu, Te-lin and Ma, Xuezhe and Peng, Nanyun},
      booktitle = {ACL-Findings},
      year = {2021}
    }
    

    Related Publications

    1. COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences

      Shikhar Singh, Nuan Wen, Yu Hou, Pegah Alipoormolabashi, Te-lin Wu, Xuezhe Ma, and Nanyun Peng, in ACL-Findings, 2021.
      Full Text BibTeX Details
      @inproceedings{sw2021com,
        title = {COM2SENSE: A Commonsense Reasoning Benchmark with Complementary Sentences},
        author = {Singh, Shikhar and Wen, Nuan and Hou, Yu and Alipoormolabashi, Pegah and Wu, Te-lin and Ma, Xuezhe and Peng, Nanyun},
        booktitle = {ACL-Findings},
        year = {2021}
      }
      
      Details
    2. Identifying Distributional Perspective Differences from Colingual Groups

      Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, and Nanyun Peng, in NAACL 2021 Workshop of Social NLP, 2021.
      Full Text Code Abstract BibTeX Details
      Perspective differences exist among different cultures or languages. A lack of mutual understanding among different groups about their perspectives on specific values or events may lead to uninformed decisions or biased opinions. Automatically understanding group perspectives can provide essential background for many downstream applications of natural language processing techniques. In this paper, we study colingual groups and use language corpora as a proxy to identify their distributional perspectives. We present a novel computational approach to learn shared understandings, and benchmark our method by building culturally-aware models for the English, Chinese, and Japanese languages. On a held-out set of diverse topics, including marriage, corruption, and democracy, our model achieves high correlation with human judgments regarding intra-group values and inter-group differences.
      @inproceedings{tian2021identifying,
        title = {Identifying Distributional Perspective Differences from Colingual Groups},
        author = {Tian, Yufei and Chakrabarty, Tuhin and Morstatter, Fred and Peng, Nanyun},
        booktitle = {NAACL 2021 Workshop of Social NLP},
        presentation_id = {https://underline.io/events/122/posters/4298/poster/20429-identifying-distributional-perspectives-from-colingual-groups},
        year = {2021}
      }
      
      Details

    Details
[1]
  1. Intent Classification and Slot Filling for Privacy Policies

    Wasi Ahmad, Jianfeng Chi, Tu Le, Thomas Norton, Yuan Tian, and Kai-Wei Chang, in ACL, 2021.
    Full Text Code BibTeX Details
    Understanding privacy policies is crucial for users, as it empowers them to learn about the information that matters to them. Sentences written in a privacy policy document explain privacy practices, and the constituent text spans convey further specific information about that practice. We refer to predicting the privacy practice explained in a sentence as intent classification and identifying the text spans sharing specific information as slot filling. In this work, we propose PolicyIE, a corpus consisting of 5,250 intent and 11,788 slot annotations spanning 31 privacy policies of websites and mobile applications. The PolicyIE corpus is a challenging benchmark with limited labeled examples, reflecting the cost of collecting large-scale annotations. We present two alternative neural approaches as baselines: (1) formulating intent classification and slot filling as a joint sequence tagging task, and (2) modeling them as a sequence-to-sequence (Seq2Seq) learning task. Experimental results show that both approaches perform comparably in intent classification, while the Seq2Seq method outperforms the sequence tagging approach in slot filling by a large margin. Error analysis reveals the deficiencies of the baseline approaches, suggesting room for improvement in future work. We hope the PolicyIE corpus will stimulate future research in this domain. (A Seq2Seq formulation sketch appears at the end of this section.)
    @inproceedings{ahmad2021intent,
      title = {Intent Classification and Slot Filling for Privacy Policies},
      author = {Ahmad, Wasi and Chi, Jianfeng and Le, Tu and Norton, Thomas and Tian, Yuan and Chang, Kai-Wei},
      booktitle = {ACL},
      year = {2021}
    }
    

    Related Publications

    1. DEGREE: A Data-Efficient Generative Event Extraction Model

      I.-Hung Hsu, Kuan-Hao Huang, Elizabeth Boschee, Scott Miller, Prem Natarajan, Kai-Wei Chang, and Nanyun Peng, in NAACL, 2022.
      Full Text Abstract BibTeX Details
      Event extraction (EE), the task that identifies event triggers and their arguments in text, is usually formulated as a classification or structured prediction problem. Such models usually reduce labels to numeric identifiers, making them unable to take advantage of label semantics (e.g., an event type named Arrest is related to words like arrest, detain, or apprehend). This prevents generalization to new event types. In this work, we formulate EE as a natural language generation task and propose GenEE, a model that not only captures complex dependencies within an event but also generalizes well to unseen or rare event types. Given a passage and an event type, GenEE is trained to generate a natural sentence following a predefined template for that event type. The generated output is then decoded into trigger and argument predictions. The autoregressive generation process naturally models the dependencies among the predictions: each new word predicted depends on those already generated in the output sentence. Using carefully designed input prompts during generation, GenEE is able to capture label semantics, which enables generalization to new event types. Empirical results show that our model achieves strong performance on event extraction tasks under zero-shot, few-shot, and high-resource scenarios. In particular, in the high-resource setting, GenEE outperforms the state-of-the-art model on argument extraction and achieves results competitive with the current best on end-to-end EE tasks.
      @inproceedings{hsu2021degree,
        title = {DEGREE: A Data-Efficient Generative Event Extraction Model},
        author = {Hsu, I-Hung and Huang, Kuan-Hao and Boschee, Elizabeth and Miller, Scott and Natarajan, Prem and Chang, Kai-Wei and Peng, Nanyun},
        booktitle = {NAACL},
        year = {2022}
      }
      
      Details
    2. Intent Classification and Slot Filling for Privacy Policies

      Wasi Ahmad, Jianfeng Chi, Tu Le, Thomas Norton, Yuan Tian, and Kai-Wei Chang, in ACL, 2021.
      Full Text Video Code Abstract BibTeX Details
      Understanding privacy policies is crucial for users, as it empowers them to learn about the information that matters to them. Sentences written in a privacy policy document explain privacy practices, and the constituent text spans convey further specific information about that practice. We refer to predicting the privacy practice explained in a sentence as intent classification and identifying the text spans sharing specific information as slot filling. In this work, we propose PolicyIE, a corpus consisting of 5,250 intent and 11,788 slot annotations spanning 31 privacy policies of websites and mobile applications. The PolicyIE corpus is a challenging benchmark with limited labeled examples, reflecting the cost of collecting large-scale annotations. We present two alternative neural approaches as baselines: (1) formulating intent classification and slot filling as a joint sequence tagging task, and (2) modeling them as a sequence-to-sequence (Seq2Seq) learning task. Experimental results show that both approaches perform comparably in intent classification, while the Seq2Seq method outperforms the sequence tagging approach in slot filling by a large margin. Error analysis reveals the deficiencies of the baseline approaches, suggesting room for improvement in future work. We hope the PolicyIE corpus will stimulate future research in this domain.
      @inproceedings{ahmad2021intent,
        title = {Intent Classification and Slot Filling for Privacy Policies},
        author = {Ahmad, Wasi and Chi, Jianfeng and Le, Tu and Norton, Thomas and Tian, Yuan and Chang, Kai-Wei},
        booktitle = {ACL},
        year = {2021}
      }
      
      Details

    Details
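
To illustrate the Seq2Seq baseline formulation from the PolicyIE entries above, here is a hedged sketch of casting intent classification and slot filling as one generation problem. The linearized target format, the example sentence, and its labels are hypothetical, and passing text_target to the tokenizer assumes a recent version of the transformers library; this is not the released baseline code.

    from transformers import BartTokenizerFast, BartForConditionalGeneration

    tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    source = "We share your email address with advertising partners."
    # Hypothetical linearized target: the intent label followed by slot spans.
    target = ("intent: third-party-sharing | data: email address | "
              "recipient: advertising partners")

    batch = tokenizer(source, text_target=target, return_tensors="pt")
    loss = model(**batch).loss   # standard seq2seq training loss
    loss.backward()

Generating the intent and the slots as one sequence lets a single decoder capture the dependencies between them, which is consistent with the reported advantage of the Seq2Seq baseline on slot filling.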