Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers
Jieyu Zhao, Xuezhi Wang, Yao Qin, Jilin Chen, and Kai-Wei Chang, in EMNLP-Finding (short), 2022.
Bib Entry
@inproceedings{zhao2022investigating,
  title = {Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers},
  author = {Zhao, Jieyu and Wang, Xuezhi and Qin, Yao and Chen, Jilin and Chang, Kai-Wei},
  booktitle = {EMNLP-Finding (short)},
  year = {2022}
}
Related Publications
- VideoCon: Robust Video-Language Alignment via Contrast Captions, CVPR, 2024
- CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning, ICCV, 2023
- Red Teaming Language Model Detectors with Language Models, TACL, 2023
- ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation, EMNLP, 2022
- Unsupervised Syntactically Controlled Paraphrase Generation with Abstract Meaning Representations, EMNLP-Finding (short), 2022
- Improving the Adversarial Robustness of NLP Models by Information Bottleneck, ACL-Finding, 2022
- Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution, EMNLP, 2021
- On the Transferability of Adversarial Attacks against Neural Text Classifier, EMNLP, 2021
- Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble, ACL, 2021
- Double Perturbation: On the Robustness of Robustness and Counterfactual Bias Evaluation, NAACL, 2021
- Provable, Scalable and Automatic Perturbation Analysis on General Computational Graphs, NeurIPS, 2020
- On the Robustness of Language Encoders against Grammatical Errors, ACL, 2020
- Robustness Verification for Transformers, ICLR, 2020
- Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification, EMNLP, 2019
- Retrofitting Contextualized Word Embeddings with Paraphrases, EMNLP (short), 2019
- Generating Natural Language Adversarial Examples, EMNLP (short), 2018