Unsupervised Representation Disentanglement of Text: An Evaluation on Synthetic Datasets

Lan Zhang, Victor Prokhorov, Ehsan Shareghi

Published in Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), 2021

Abstract

To highlight the challenges of achieving representation disentanglement for the text domain in an unsupervised setting, in this paper we select a representative set of models successfully applied in the image domain. We evaluate these models on six disentanglement metrics, as well as on downstream classification tasks and homotopy. To facilitate the evaluation, we propose two synthetic datasets with known generative factors. Our experiments highlight the existing gap in the text domain and illustrate that certain elements, such as representation sparsity (as an inductive bias) or representation coupling with the decoder, could impact disentanglement. To the best of our knowledge, our work is the first attempt at the intersection of unsupervised representation disentanglement and text, and it provides the experimental framework and datasets for examining future developments in this direction.

BibTeX citation:

  @inproceedings{zhang-etal-2021-unsupervised-representation,
    title = "Unsupervised Representation Disentanglement of Text: An Evaluation on Synthetic Datasets",
    author = "Zhang, Lan  and
      Prokhorov, Victor  and
      Shareghi, Ehsan",
    editor = "Rogers, Anna  and
      Calixto, Iacer  and
      Vuli{\'c}, Ivan  and
      Saphra, Naomi  and
      Kassner, Nora  and
      Camburu, Oana-Maria  and
      Bansal, Trapit  and
      Shwartz, Vered",
    booktitle = "Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.repl4nlp-1.14/",
    doi = "10.18653/v1/2021.repl4nlp-1.14",
    pages = "128--140"
}