Monotonicity of meaning: between simplicity and expressiveness
Dariusz Kalociński (University of Warsaw)

Monotonicity is an abstract property of meaning that has been attested in several semantic domains, including adjectives, quantifiers and modals. The meaning of a signal is upward/downward monotone if the signal refers to all upper/lower bounds of each of its referents, with respect to some underlying ordering. It has been shown that monotonicity arises via artificial iterated learning with pragmatic agents biased towards simplicity and expressiveness (Carcassi et al., 2018). Monotone concepts have also been shown to be easier to learn for humans (Chemla et al., 2019) and neural networks (Steinert-Threlkeld & Szymanik, forthcoming).
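The definition above can be sketched concretely for a linearly ordered scale: a meaning (a set of referents) is upward monotone if every point above a referent is also a referent, and downward monotone in the dual case. The following is a minimal illustrative sketch, not taken from the cited work; the function names and the integer scale are assumptions for illustration.

```python
def is_upward_monotone(meaning, scale):
    """True if every scale point above a referent is also a referent."""
    return all(y in meaning for x in meaning for y in scale if y >= x)

def is_downward_monotone(meaning, scale):
    """True if every scale point below a referent is also a referent."""
    return all(y in meaning for x in meaning for y in scale if y <= x)

scale = range(5)  # a toy linearly ordered scale 0..4
print(is_upward_monotone({2, 3, 4}, scale))   # True: all upper bounds included
print(is_upward_monotone({1, 3}, scale))      # False: 2 is above 1 but excluded
print(is_downward_monotone({0, 1}, scale))    # True: all lower bounds included
```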
In my talk I will attempt to explain monotonicity in terms of a domain-general optimization principle seeking to reduce the communicative and cognitive costs associated with a language (Kemp & Regier, 2012). The communicative cost of a language is understood as the probability of confusing two random values from the scale. The cognitive cost is understood as change complexity (Aksentijevic & Gibson, 2012). Optimization depends on the relative weight of these two types of cost. For a wide range of possible divisions of labour between communication and cognition (including equal division), optimal languages turn out to be monotone. This shows that the tradeoff between simplicity and informativeness might be sufficient to explain monotonicity. Moreover, the generality of this argument suggests that monotonicity might arise at various timescales for which such optimization is viable. We back up this conclusion with initial simulations based on a recent model of meaning coordination (Kalociński et al., 2018).
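The cost tradeoff can be illustrated with a toy computation. Below, communicative cost is the probability that two values drawn uniformly at random from the scale receive the same signal, following the definition above; cognitive cost is stubbed as the number of category boundaries along the scale, a crude stand-in for change complexity (Aksentijevic & Gibson, 2012) assumed here purely for illustration, as is the weighting scheme.

```python
import itertools

def communicative_cost(language, scale):
    """Probability that two distinct random values get the same signal
    and may therefore be confused."""
    pairs = list(itertools.product(scale, repeat=2))
    confused = sum(1 for x, y in pairs if x != y and language[x] == language[y])
    return confused / len(pairs)

def cognitive_cost(language, scale):
    """Number of category boundaries: a toy proxy for change complexity."""
    return sum(1 for x, y in zip(scale, scale[1:]) if language[x] != language[y])

def total_cost(language, scale, w):
    """Weighted tradeoff between communication (w) and cognition (1 - w)."""
    return (w * communicative_cost(language, scale)
            + (1 - w) * cognitive_cost(language, scale))

scale = [0, 1, 2, 3, 4]
monotone = {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'b'}      # one boundary
non_monotone = {0: 'a', 1: 'b', 2: 'a', 3: 'b', 4: 'a'}  # four boundaries

# Both languages have equal-confusability cells, so the same communicative
# cost; the monotone one wins on cognitive cost alone.
print(total_cost(monotone, scale, 0.5) < total_cost(non_monotone, scale, 0.5))
```

With cell sizes held fixed, the two toy languages tie on communicative cost (8/25 each), so the monotone language is cheaper overall for any weighting that gives cognition nonzero weight, loosely mirroring the abstract's claim for a range of divisions of labour.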