There’s a seldom-discussed concept in software development called cyclomatic complexity: a metric that indicates how complex a program or unit of code is.
It’s not discussed very often because, well, it’s really not exciting. You can get VS Code plugins that measure cyclomatic complexity, and they can be somewhat helpful because cyclomatic complexity is a formula. The number it gives you correlates directly with the likelihood of errors, the difficulty of maintaining the code, and the difficulty of reading it.
But, quite interestingly, there’s no formula for measuring complexity in CSS because CSS is not like any typical programming language.
So, buckle up, buttercups. I want to present a way to measure complexity in CSS. Because if we can measure CSS complexity, we can identify code that’s likely to have bugs, be hard to maintain, and be hard to understand.
Specificity is an Existing Way to Measure Complexity
CSS offers one way to quantitatively measure complexity, and that’s specificity. The gist of how it works is this:
- There are four categories:
  - Id
  - Class
  - Type (sometimes called element)
  - No value (includes `*`, `>`, `+`, `~`)
- In a selector, each simple selector of a given category adds a value of 1 to that category
- The sum of values in each category is the selector’s specificity

Each category (outside of No value) adds a value of 1 to its own category:
```css
#header-123 {} /* One Id    1,0,0 */
.header {}     /* One class 0,1,0 */
header {}      /* One type  0,0,1 */
* {}           /* No value  0,0,0 */
```
Each category of computed specificity is the sum of all selectors in that category:
```css
#header-123 #title-321 {} /* Two Ids           2,0,0 */
.header .title {}         /* Two classes       0,2,0 */
header h1 {}              /* Two Types         0,0,2 */
* * {}                    /* Two ... no... values 0,0,0 */
```
And therefore overall specificity is not a single number, but three categories:
```css
#header-123 .title {} /* Id + class   1,1,0 */
.header h1 {}         /* class + type 0,1,1 */
#header h1 {}         /* Id + type    1,0,1 */
.header * {}          /* Class        0,1,0 */
```
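For illustration, this is roughly what that calculation looks like in code. It’s a toy sketch, not a spec-compliant parser: the function name and its tokenizing regex are my own invention, and it ignores functional pseudo-classes (like `:not()`), comma-separated selector lists, and escaped characters:

```javascript
// Toy specificity calculator: returns [ids, classes, types] for a
// selector string. NOT spec-compliant; for illustration only.
function specificity(selector) {
  let ids = 0, classes = 0, types = 0;
  // Combinators and the universal selector contribute no value.
  const compounds = selector.split(/[\s>+~]+/).filter(s => s && s !== "*");
  for (const compound of compounds) {
    // Break each compound selector into its simple selectors.
    const simples = compound.match(/#[\w-]+|\.[\w-]+|\[[^\]]*\]|:+[\w-]+|^[a-zA-Z][\w-]*/g) || [];
    for (const simple of simples) {
      if (simple.startsWith("#")) ids++;                 // Id
      else if (/^(\.|\[|:[^:])/.test(simple)) classes++; // class, attribute, pseudo-class
      else types++;                                      // type (element)
    }
  }
  return [ids, classes, types];
}

specificity("#header-123 .title"); // → [1, 1, 0]
specificity(".header h1");         // → [0, 1, 1]
specificity("* *");                // → [0, 0, 0]
```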
Specificity Fails to Adequately Measure Complexity
Specificity is an algorithm used to help the browser determine which styles should apply to an element. More specifically, it’s a conflict resolution algorithm.
Specificity makes no consideration of targets
If there are two conflicting styles that apply to an element, the browser will rely on specificity to determine the winner.
Let’s assume three units of markup:
```html
<section id=header-123></section>
<div class=header></div>
<header></header>
```
And now let’s evaluate three CSS rulesets:
```css
#header-123 {font-size: 2em} /* 1,0,0 */
.header {font-size: 3em}     /* 0,1,0 */
header {font-size: 4em}      /* 0,0,1 */
```
In such a case, all three rulesets are equally complex: each one places a single value in a single category, and they target different elements, producing zero side effects.
Markup creates complexity that specificity can’t identify
Suppose a unit of markup became the target of all three rulesets:
```html
<header id=header-123 class=header></header>
```
Now we have an actual textbook definition of complexity here, because the styles for this element are composed of many parts. But specificity cannot communicate this to us. Specificity can only tell us why it’s going to have a `font-size` of `2em`.
Specificity shows no awareness of its context within a “logic path”
Hear me out here:

Media queries are a kind of logic path. They’re `if` conditions that determine when certain rulesets will apply:
```css
@media screen {
  #header { font-size: 2em; }
}

@media screen and (min-width: 768px) {
  header { font-size: 2em; }
}
```
It doesn’t matter that one selector’s specificity is 0,0,1 and the other’s is 1,0,0. Despite the fact that the selectors are equally complex, the rules will not equally apply. And, BTW, `@media` is not the only at-rule in CSS. There are a lot.
What about CSS layers? A layer isn’t a logic path, so to speak, because its rules always apply regardless of being in a path. But presence within a layer controls position within the cascade, which affects whether the rule is overwritten. And specificity doesn’t reflect this, either.
Specificity doesn’t communicate the kind of markup dependence
All four of these selectors have the same specificity of 0,0,2:
```css
header h1 {}   /* h1 must be a descendant of header */
header > h1 {} /* h1 must be a direct descendant (child) of header */
header ~ h1 {} /* h1 must come somewhere after header in the same container */
header + h1 {} /* h1 must come immediately after header in the same container */
```
- The first two rules apply when the target is a descendant of some parent
- The latter two rules apply when the target shares a parent
- A match for Rule 2 is also a match for Rule 1, but a match for Rule 1 may not also be a match for Rule 2
- A match for Rule 4 is also a match for Rule 3, but a match for Rule 3 may not also be a match for Rule 4
Specificity fails to communicate that two of these rules are more likely to break than the other two. Combinators add a kind of complexity.
Specificity doesn’t communicate the side-effects of functional pseudo-class selectors
I’ve already written about the highly problematic `:not()` selector before, so I don’t want to rehash that. But it’s worth emphasizing again that it introduces side effects anytime it has arguments of unequal specificity. And the same thing happens with `:has()` and `:is()`, too:

```css
:not(#header, .header) {} /* 1,0,0 — every element on the page except #header and .header receives this style */
:has(#header, .header) {} /* 1,0,0 — every element containing #header or .header receives this style */
:is(#header, .header) {}  /* 1,0,0 — every element matching #header or .header receives this style */
```
Your browser dev tools will tell you the correct specificity. Chances are nearly 100% that you didn’t want `.header` to have the specificity of an id. But that’s what happened, and you didn’t know it.
If specificity isn’t complexity, what even is complexity in CSS?
I’m going to posit that CSS complexity is just three things:
- Expectations of the document
- Expectations of state
- Expectations of the device
How CSS makes document expectations complex
So, here’s the thing about CSS: selectors are matched from right to left. The right-most part is what receives the style. Everything to its left is just an increasingly complex definition of how to find it.
This means that all of these rules are equally complex because of the assumptions they place on the document:
```css
#page-123 #header #title {} /* 3, 0, 0 */
body header .title {}       /* 0, 1, 2 */
.page .header h1 {}         /* 0, 2, 1 */
```
All three of those CSS rules require three elements, which could actually be just this markup right here:
```html
<body id=page-123 class=page>
  <header id=header class=header>
    <h1 id=title class=title></h1>
  </header>
</body>
```
The implication, then, is that if you’re going to use specificity to tell you about complexity, you need to add up the values of each specificity category, and that sum gives you the measure of complexity.
By the way, this will hold true even if you use structural pseudo-class selectors or linguistic selectors:
- `:root`
- `:first-child`, `:last-child`
- `:dir()`, `:lang()`
Therefore these three selectors apply equal expectations on markup:
```css
header h1:first-child {}       /* 0, 1, 2 */
#page-123 h1:nth-of-type(1) {} /* 1, 1, 1 */
:root:lang(en) .title {}       /* 0, 3, 0 */
```
Take note that you can calculate this simply by adding the category values.
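In code, that addition is the whole trick. A minimal sketch, assuming you already have the specificity triple from your dev tools or any specificity calculator (`documentExpectations` is a name I’m making up here):

```javascript
// Sketch: a selector's document expectations are just the sum of its
// three specificity categories.
const documentExpectations = ([ids, classes, types]) => ids + classes + types;

// The three selectors above each place three expectations on the markup:
documentExpectations([0, 1, 2]); // header h1:first-child       → 3
documentExpectations([1, 1, 1]); // #page-123 h1:nth-of-type(1) → 3
documentExpectations([0, 3, 0]); // :root:lang(en) .title       → 3
```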
How CSS makes state expectations complex
CSS allows you to not only target elements based on document structure, but also on things the user is doing. Or has done. Or will do. There’s a whole slew of different kinds of state we can target:
- Element state: `:fullscreen`, `:modal`
- Input: `:enabled`, `:checked`
- Location: `:link`, `:target`
- Resource state: `:playing`, `:paused`
- Time-dimensional: `:current`, `:future`
- User action: `:hover`, `:active`
We can do the same thing we did with document expectations and add computed category specificities together and see that these all have equal complexity:
```css
ul:hover a:hover {}                    /* 0, 2, 2 */
#form-123:focus-within input:hover {}  /* 1, 2, 1 */
a.cta:visited:focus {}                 /* 0, 3, 1 */
```
How CSS makes device expectations complex
This is where any and all at-rules come into play. Whether it’s a media query, an `@supports` rule, or a layer, each one contributes to the likelihood that a selector will or won’t apply.
Both of these statements make the same number of expectations on the device:
```css
@supports (display: flex) {}
@media screen {}
```
And that means that these queries are equally complex in their expectations:

```css
@media screen and (min-width: 320px) and (max-width: 480px) {} /* 3 */
@media screen and (min-width: 481px) and (max-width: 768px) {} /* 3 */

@supports (display: grid) {
  @media screen and (min-width: 768px) {} /* 3 */
}
```
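A sketch of that counting, as a hypothetical helper. It assumes conditions are joined with the `and` keyword and ignores `or`, `not`, and comma-separated query lists:

```javascript
// Sketch: count the expectations an at-rule prelude places on the
// device: one for the at-rule itself, plus one per additional
// `and`-joined condition.
function deviceExpectations(prelude) {
  return prelude.split(/\band\b/).length;
}

deviceExpectations("screen and (min-width: 320px) and (max-width: 480px)"); // → 3
deviceExpectations("(display: flex)");                                      // → 1
```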
How can we measure complexity in selectors?
First sum all the specificities
Look at all the specificities in all the categories and add them together. We’ll let DocumentExpectations be the total number of expectations placed on the document and state.
```css
#page-123 .header .title a:hover {}           /* 1 + 3 + 1 = 5 */
.header h1.title a:visited {}                 /* 0 + 3 + 2 = 5 */
#form-123 .fields input[type=text]:invalid {} /* 1 + 3 + 1 = 5 */
```
Then add weight for structure expectations placed on the document
We want to count up the structural document expectations that specificity misses: the combinators (`+`, `~`, `>`). That puts the combinators on equal footing with any `:nth-child` selectors we’ve already identified.
Additionally, while specificity already incorporates the arguments of `:has()`, `:is()`, `:not()`, and `:where()`, it hasn’t yet accounted for how those also put expectations on document structure. So we’re just going to add up how many of those we see, too.
So, let’s let StructuralExpectations = TotalCombinators + TotalFunctionalPseudoSelectors.

Our formula is now DocumentExpectations + StructuralExpectations:
```css
.header .title:first-child {} /* (0 + 3 + 0) + (0) = 3 */
.header > .title {}           /* (0 + 2 + 0) + (1) = 3 */
header + section ~ * {}       /* (0 + 0 + 2) + (1 + 1) = 4 */
.header :is(.title > *) {}    /* (0 + 2 + 0) + (1 + 1) = 4 */
.header h2:not(.title) {}     /* (0 + 2 + 1) + (0 + 1) = 4 */
```
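As a sketch, counting these is simple string matching. This toy function (my own naming) would miscount a `>` inside an attribute selector’s value, so treat it as an illustration only:

```javascript
// Sketch: StructuralExpectations = TotalCombinators + TotalFunctionalPseudoSelectors
function structuralExpectations(selector) {
  // Count the combinators >, +, ~ (the descendant space adds nothing extra).
  const combinators = (selector.match(/[>+~]/g) || []).length;
  // Count the functional pseudo-class selectors.
  const functionalPseudos = (selector.match(/:(?:is|not|has|where)\(/g) || []).length;
  return combinators + functionalPseudos;
}

structuralExpectations(".header .title:first-child"); // → 0
structuralExpectations("header + section ~ *");       // → 2
structuralExpectations(".header :is(.title > *)");    // → 2
```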
Finally, add device expectations
After we’ve computed the expectations on document and state, we incorporate the expectations on the device. For this, we’ll add the number of at-rules that wrap the selector, plus the number of extra conditions placed on each rule (basically, any usage of a logical operator like `and` or `not`).
This means our formula becomes (DocumentExpectations + StructuralExpectations + TotalAtRules).
The implication, therefore, is that these rules are equal in complexity:
```css
@media screen and (min-width: 320px) and (max-width: 480px) { /* 1 at-rule + 2 additional conditions */
  .header a:hover {} /* (0 + 2 + 1) + (0) + (1 + 1 + 1) = 6 */
}

@media (min-width: 768px) {
  header > .title a:hover {} /* (0 + 2 + 2) + (1) + (1) = 6 */
}

header > h1 > a:hover {} /* (0 + 1 + 3) + (1 + 1) + (0) = 6 */

@supports (display: grid) { /* 1 at-rule */
  @media screen and (min-width: 768px) and (max-width: 1024px) { /* 1 at-rule + 2 additional conditions */
    .header h1 {} /* (0 + 1 + 1) + (0 + 0) + (1 + 3) = 6 */
  }
}
```
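Pulling it all together, a per-rule calculation could look something like this sketch (`selectorComplexity` and its input shape are hypothetical; the three inputs are assumed to be pre-computed by the steps above):

```javascript
// Sketch of the full formula for a single rule:
// complexity = DocumentExpectations + StructuralExpectations + TotalAtRules
function selectorComplexity({ specificity, structural, atRules }) {
  const [ids, classes, types] = specificity;
  return (ids + classes + types) + structural + atRules;
}

// `.header a:hover` inside `@media screen and (min-width: 320px) and (max-width: 480px)`:
selectorComplexity({ specificity: [0, 2, 1], structural: 0, atRules: 3 }); // → 6

// `header > h1 > a:hover` with no wrapping at-rules:
selectorComplexity({ specificity: [0, 1, 3], structural: 2, atRules: 0 }); // → 6
```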
The Cyclomatic Complexity of a CSS Selector is a Formula that Adds Three-ish things:
DocumentExpectations + StructuralExpectations + TotalAtRules
This number tells us:
- How much is expected from the markup
- How much is expected of the state of the element
- The device conditions in which it may (not) apply
The higher the number goes, the harder it is for a rule to apply. Which means:
- More device conditions we need to test
- More user-caused conditions we need to test
- More variations of markup we must account for
So, yeah.
There it is.
Cyclomatic Complexity of a CSS Selector. A stylesheet’s complexity would then be the average complexity of all of its rulesets.
The next step would be to write a parser that can compute this for you. Any takers?
Update:
It me. I’m the taker. I started working on a CSS complexity analyzer.