Can you hear me now?
The article is from 2019. I was wondering because my impression was that this was pretty old news and common knowledge by now.
The methodology sounds bizarrely complex to me for the purpose of establishing a comparative information transfer rate.
Wouldn’t just timing how long it takes to communicate a controlled set of information answer that?
I’m confused by the concept of establishing an average “bitrate per syllable” and multiplying it by the syllable rate. Is this trying to address cases where language constructs DEMAND additional information be encoded in speech? Can one not construct a set of information intended to be communicated that could account for those quirks? Find some “lowest common denominator” sentences?
I feel like I’m missing something and I’m very curious about what my faulty assumption is
Can one not construct a set of information intended to be communicated that could account for those quirks? Find some “lowest common denominator” sentences?
I think this would require deeper knowledge of all 17 languages in question, and be a potential source of errors - for example, if you include some info in the set that is easier or harder to convey succinctly in one language than in the others.
In the meantime, it’s easy to get good averages for bits/syllable and syllables/second, even if you don’t know the languages in question.
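To make that concrete, here is a minimal sketch of the averaging approach being described (this is my own illustration, not the study's actual code; the syllable counts and speech rate are made-up toy numbers). Bits per syllable is estimated as the Shannon entropy of the syllable distribution, then multiplied by the average syllable rate:

```python
import math

def bits_per_syllable(syllable_counts):
    """Shannon entropy of the syllable distribution, in bits per syllable."""
    total = sum(syllable_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in syllable_counts.values())

# Toy corpus counts for an imaginary language (assumed data, for illustration only).
counts = {"ka": 40, "to": 30, "mi": 20, "su": 10}
syllables_per_second = 6.2  # assumed average speech rate

rate = bits_per_syllable(counts) * syllables_per_second
print(round(rate, 1))  # estimated information rate in bits/second
```

The point is that both averages can be estimated from a corpus and a stopwatch, without anyone needing to know what any given sentence means in all 17 languages.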