It is widely accepted that speakers exhibit considerable gradience, both aggregate and individual, in their acceptability judgements for sentences (strings) of their language. Gradience is pervasive in judgements of syntactic, semantic, and phonological well-formedness; Lau, Clark, and Lappin (2014) present detailed experimental evidence for gradience in syntactic acceptability.

There are at least three ways in which gradience can be accommodated in linguistic theory. The first involves constructing a theory of grammar that assigns degrees of well-formedness to sentences on the basis of their formal properties. Although linguists have made several tentative proposals of this kind (for example, Chomsky 1965, 1986), these have not been developed in a comprehensive and systematic way, and so they cannot deal with the full range of gradient phenomena that natural languages exhibit.

A second strategy is to develop a formal grammar that provides a recursive binary set-membership condition enumerating all and only the sentences, and their possible structures, of a language. This strategy explains gradience as an effect of external processing and performance mechanisms, rather than of linguistic competence. It offers a natural division of labour between different aspects of representation and interpretation, and it has tended to be the default option in the theory of formal grammar.

A third approach characterises linguistic knowledge as probabilistic in nature, representing it as an enriched language model. This view has achieved increasing acceptance among computational linguists and cognitive scientists. It entails that gradience is intrinsic to linguistic competence, and so it does not require performance mechanisms to explain the gradient nature of linguistic phenomena.

Both of the latter two views of linguistic knowledge have advantages and weaknesses.
In this talk I will compare these views, and I will consider some of the arguments that have been advanced for each of them.
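To make the contrast concrete, the sketch below (my own illustration, not material from the talk) shows the simplest kind of probabilistic model: a bigram language model with add-one smoothing over a toy corpus. Where a binary set-membership grammar can only answer "in the language" or "out", such a model assigns every word string a graded probability, so acceptability falls on a continuum. The corpus and sentences are invented for the example.

```python
# Toy bigram language model with add-one (Laplace) smoothing.
# Every string receives a nonzero probability, so the model yields a
# graded ranking of sentences rather than a binary membership decision.
from collections import Counter

corpus = [
    "the dog barked",
    "the cat slept",
    "the dog slept",
    "a cat barked",
]

# Pad each sentence with start/end markers.
tokens = [["<s>"] + s.split() + ["</s>"] for s in corpus]
vocab = {w for sent in tokens for w in sent}
bigrams = Counter((w1, w2) for sent in tokens for w1, w2 in zip(sent, sent[1:]))
contexts = Counter(w for sent in tokens for w in sent[:-1])

def score(sentence: str) -> float:
    """Smoothed bigram probability of a word string."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= (bigrams[(w1, w2)] + 1) / (contexts[w1] + len(vocab))
    return p

# An attested word order scores higher than a scrambled one, but both
# receive nonzero probability: gradience, not a binary cut.
print(score("the dog slept") > score("slept dog the"))  # True
```

On this view the gradient judgements are read directly off the competence model's probability distribution, which is precisely why no separate performance mechanism is needed to derive them.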