Abstract: Textual adversarial samples are widely used to assess the robustness and security of language models. Most existing methods generate these samples by substitution or deletion. However, such ...