The Fifth Circle

There's an idea that hortensis differs from nemoralis in the radius of the 5th band - wide in hortensis, tighter in nemoralis - sort of like this:

Here, I cherrypicked the specimens to make the difference easily visible. These things vary. But I mean - just look at them:

These aren't cherrypicked and the difference is pretty obviously there. Right?
But no. Apparently some people think differently. Hannah J. Jackson, Jenny Larsson and Angus Davison (hi @angus10) co-authored a paper, https://onlinelibrary.wiley.com/doi/full/10.1002/ece3.7517, on the positions of bands on Cepaea snails, and not only did they fail to replicate the effect, they found a weak but significant effect in the opposite direction:

What on Earth is that supposed to mean? How? Why?
Whatever. I'll do my own analysis. I'll go to Wikipedia and grab all the Cepaea photos from the H. Zell image gallery here and here, all very nicely and conveniently photographed in the same position, and I'll measure this sort of thing:

I'll measure a and b, and then look at how the ratio a/b is distributed by species.
(It wasn't immediately obvious to me, but this was a bit dumb: it would have been better to measure the diameters of the band and the shell instead of the radii, since that removes the subjectivity of locating the center. I didn't think of it until I had already measured a bunch of shells, and I wasn't going to redo all that work.)
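The bookkeeping itself is trivial. Here's a minimal Python sketch of the ratio computation; the pixel measurements below are made up for illustration, not my actual data:

```python
# Hypothetical pixel measurements, NOT the real dataset:
# a = radius of the 5th band, b = radius of the shell.
# The ratio a/b is dimensionless, so the photo scale cancels out
# and shells photographed at different zoom levels stay comparable.
shells = [
    ("hortensis", 143.0, 236.0),
    ("nemoralis", 118.0, 241.0),
]

ratios = [(species, round(a / b, 3)) for species, a, b in shells]
print(ratios)
```

The same scale-cancellation argument would apply to diameters, which is why measuring diameters would have been strictly better: same invariance, no center-finding step.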

Anyway, after I did the measuring I got myself this dataset here. I excluded 3 alleged hortensis from the Pyrenees that were in all likelihood misidentified nemoralis (spoiler: this exclusion doesn't make much difference to the final analysis). This is what I got, an approximation of the distribution of the band radius as a fraction of the shell radius, by species:

Visibly pretty different, and consistent with the idea that hortensis has wider bands. Let's do statistics, to be sure. I wasn't sure which statistical test would be best here for rejecting the null hypothesis that both distributions have the same mean. There's Welch's t-test, which assumes both distributions are normal (but not that they have the same variance) and tests H0: mean_1 = mean_2; the normality assumption is a bit dubious here. And there's the Mann–Whitney U test, which tests the H0 that the two distributions are identical, with essentially no distributional assumptions. Let's do both. I pasted the data into these calculator things:
https://www.statskingdom.com/150MeanT2uneq.html
https://www.statskingdom.com/170median_mann_whitney.html
and got p = 0.00008822 for the slightly dubious Welch test and p = 0.0001173 for the Mann–Whitney test. Which is pretty much what you'd expect: yes, the distributions are different, and the means aren't the same. I'm not sure which tests Jackson et al. used for this particular comparison in their paper.
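Instead of the online calculators, the same two tests can be run with scipy. A hedged sketch on made-up ratio values (these are NOT my measured numbers; plug in the real dataset to reproduce the p-values above):

```python
# Two tests of "hortensis band ratios differ from nemoralis band ratios",
# run on hypothetical a/b values for illustration only.
from scipy.stats import ttest_ind, mannwhitneyu

hortensis = [0.62, 0.58, 0.65, 0.60, 0.63, 0.59, 0.61, 0.64]
nemoralis = [0.48, 0.52, 0.50, 0.47, 0.53, 0.49, 0.51, 0.46]

# Welch's t-test: equal_var=False drops the equal-variance assumption,
# but normality of both samples is still assumed.
t_stat, t_p = ttest_ind(hortensis, nemoralis, equal_var=False)

# Mann-Whitney U: rank-based, no normality assumption.
u_stat, u_p = mannwhitneyu(hortensis, nemoralis, alternative="two-sided")

print(f"Welch p = {t_p:.4g}, Mann-Whitney p = {u_p:.4g}")
```

With samples this cleanly separated both tests come out very small, mirroring the calculator results.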

So yeah, the result replicates on photos stolen from Wikipedia. What does that mean? Why did the paper find the opposite result, and a highly significant one, too?

Some things about the Jackson paper: first, as far as I understand, they excluded shells with merged bands. An understandable move if you want to measure band positions, but what if that's exactly where this effect was hiding? Maybe it only manifests when the bands merge? Second, a whole lot of the Cepaea in their dataset (over 200) came from the Pyrenees and were dead-collected. Were they accurately ID'd? The Pyrenees are an evil place where you may well find white-lipped nemoralis. Did that distort the results? (Still, though - enough to make them significant in the opposite direction?) To be fair, you can also criticize the Zell dataset my numbers are based on: it was never meant to be representative, rather the opposite - it was assembled to showcase various rare banding patterns. You could argue that distorts things. I don't think so, though. Ultimately, more than the p-values, I trust just looking at picture 2. Still curious about the paper!

Posted on August 24, 2024 07:53 PM by tasty_y
