A Theory of Unsupervised Translation for Understanding Animal Communication

Abstract

We propose a theoretical framework for analyzing unsupervised machine translation when no parallel data are available and when it cannot be assumed that the source and target corpora address related subject domains or possess similar linguistic structure. The framework requires access to a prior probability distribution that should assign non-zero probability to possible translations. We instantiate our framework with two models of language. Our analysis suggests that the accuracy of translation depends on the complexity of the source language and the amount of common ground between the source language and the target prior. We also prove upper bounds on the amount of data required from the source language in the unsupervised setting, as a function of the amount of data required in a hypothetical supervised setting. Surprisingly, our bounds suggest that the amount of source data required for unsupervised translation is comparable to that in the supervised setting. For one of the language models we analyze, we also prove a nearly matching lower bound. Our analysis is purely information-theoretic; as such, it can inform how much source data needs to be collected, but it does not yield a computationally efficient procedure. Our work is motivated by an ambitious interdisciplinary initiative, Project CETI, which is collecting a large corpus of sperm whale communications for machine analysis.
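To make the role of the prior concrete, here is a minimal toy sketch (not the paper's construction) of the selection principle the abstract describes: among a set of candidate translators, prefer the one whose translations of the source corpus the target-language prior finds most plausible. The corpus, the prior table, and the substitution-map translators below are all hypothetical illustrations; in the framework the prior is a full language model and the translator class is far richer.

```python
import math

# Hypothetical tiny "source corpus" of observed sequences.
source_corpus = ["ab", "ba", "ab", "aa"]

# Hypothetical target-language prior: probabilities of short target strings.
# In the framework this is a full prior over translations; here it is a toy table.
target_prior = {"xy": 0.4, "yx": 0.3, "xx": 0.2, "yy": 0.1}

# Candidate translators: simple character-substitution maps (assumed for illustration).
translators = {
    "theta1": {"a": "x", "b": "y"},
    "theta2": {"a": "y", "b": "x"},
}

def translate(text: str, mapping: dict) -> str:
    return "".join(mapping[c] for c in text)

def log_likelihood_under_prior(mapping: dict) -> float:
    # Sum of log prior probabilities of the translated source samples.
    # Note the prior must assign non-zero probability to candidate translations,
    # as the abstract requires; a zero rules the translator out entirely.
    total = 0.0
    for sample in source_corpus:
        p = target_prior.get(translate(sample, mapping), 0.0)
        if p == 0.0:
            return float("-inf")
        total += math.log(p)
    return total

# Unsupervised selection: argmax over candidate translators of the prior log-likelihood.
best = max(translators, key=lambda name: log_likelihood_under_prior(translators[name]))
print(best, log_likelihood_under_prior(translators[best]))
```

Under this toy prior, `theta1` wins because its translations land on the higher-probability target strings; the paper's bounds concern how much source data such prior-guided selection needs, information-theoretically, compared to a supervised baseline.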

Publication
The 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2023)
Spotlight presentation at the Workshop on Information-Theoretic Principles in Cognitive Systems at the 36th Annual Conference on Neural Information Processing Systems (InfoCog @ NeurIPS 2022)
The Second Workshop on Efficient Natural Language and Speech Processing at the 36th Annual Conference on Neural Information Processing Systems (ENLSP @ NeurIPS 2022)
The Third International Symposium on the Mathematics of Neuroscience (Math of Neuro 2022)