No new casualties today, but I noticed immediately that the Mourning Dove carcass had been removed. Closer inspection revealed that it had been scavenged from its original location, with the remains scattered near the base of the building about 5 m away.
So what is scavenging rate all about, anyway?
The idea is that our detection of dead birds (or anything else) is imperfect. We can collect data and report that, for example, 50 birds died at a building. That estimate can only be a minimum, however. Our raw counts underestimate the true number of casualties because our detection can never exceed 100% but can fall far below it. Birds can collide yet manage to flutter away and die outside our search area. Some might be difficult to see against the substrate on which they land. Most importantly, some will be removed before we arrive to find them. Cats, rats, opossums, raccoons, crows, and other scavengers tend to be abundant in the urban and suburban areas where most window collision research takes place, and they can often remove a carcass before the investigator arrives onsite to conduct a survey.
For example, assume that the removal rate (whether by scavengers, human maintenance crews, etc.) is 25%. This means that, at best, the investigator can expect to encounter only 75% of the casualties. That raw count of 50 dead birds? The detection-corrected number is closer to 50/0.75 ≈ 67 dead birds.
Does that matter, though? I struggle to see the relevance of the removal rate for any given study. Is there some magic number of casualties that serves as a threshold for conservation action? Are there people for whom 50 dead birds wouldn’t register as important but 67 would? For comparing mortality rates among sites where removal rate might vary, we assume it is important to determine a separate removal rate for each site, but is it? Imagine 50 dead birds at our site with a high removal rate of 25% compared to 50 dead birds at a site with a low removal rate of 5%. That’d be 67 compared to 50/0.95 ≈ 53. So? Would we really be concerned about 67 dead birds at one building but not 53 at another?
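The back-calculation in these scenarios is nothing more than dividing the raw count by the encounter probability. A minimal sketch in Python (the function name is my own, not from any collision-monitoring package):

```python
def corrected_count(raw_count, removal_rate):
    """Adjust a raw carcass count for removal before detection.

    If a fraction `removal_rate` of carcasses disappears before the
    survey, the surveyor encounters only (1 - removal_rate) of the
    true casualties, so divide to back-calculate the total.
    """
    return raw_count / (1 - removal_rate)

# 50 carcasses at a high-removal site (25%) vs. a low-removal site (5%)
high = corrected_count(50, 0.25)  # ~66.7, i.e., about 67 birds
low = corrected_count(50, 0.05)   # ~52.6, i.e., about 53 birds
```

The point of putting it in code is how little the answer changes: a five-fold difference in removal rate moves the corrected estimate by only about a quarter.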
My final concern is the false sense of security that comes with having determined “the” removal rate. These rates vary widely across space and time. We’re kidding ourselves to think that we’re improving our estimates of collision mortality by adjusting raw counts with a detection probability that is itself a moving target.
In my study, I’ve conducted approximately 86 removal trials over the past several years. On average, a carcass lasts about 10.5 days on the ground before it is removed, and I conduct a survey every 1.5 days. That gives me 10.5/1.5 = 7 opportunities to find a dead bird before it is removed. Ergo, removal rate is hardly noticeable in my study. What’s more, scavenging and removal are not the same thing. It is often the case – as with today’s Mourning Dove – that the carcass is scavenged but evidence remains. The Mourning Dove died on April 5th and was scavenged on the 15th: 10 days. The remaining bones and feathers, however, might still be here weeks from now. On multiple occasions, I have found evidence of scavenging within 24 hours of my previous survey; for example, I check one morning and find feathers that weren’t there the day before. I refer to these as “day 0” removals, but the feathers remain as evidence of the casualty for days or weeks after the event. The longest I have had feathers or other remains persist in evidence is more than 90 days.
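Those repeated chances compound. If each survey independently detects a persisting carcass with some per-survey probability p (a hypothetical parameter here, not something I have measured), the chance of finding it at least once before removal is 1 − (1 − p)^n. A sketch under that independence assumption:

```python
def prob_found(persistence_days, survey_interval_days, p_per_survey):
    """Probability a carcass is found at least once before removal,
    assuming an independent detection attempt at each survey."""
    n_opportunities = persistence_days / survey_interval_days
    return 1 - (1 - p_per_survey) ** n_opportunities

# My averages: 10.5 days of persistence, a survey every 1.5 days -> 7 chances.
# Even a modest 50% per-survey detection compounds to ~99% overall:
overall = prob_found(10.5, 1.5, 0.5)
```

With seven tries, removal would have to be dramatic, or per-survey detection dismal, before it meaningfully eroded the count.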
So I see scavenging and removal rates – and detection rates in general – as red herrings in our monitoring of collision mortality. Unless they are part of a well-controlled design to compare, for example, mortality between two facades of the same building, there’s not much to gain from collecting data to estimate such rates. There are, however, potential costs. Many avocational birders and conservationists collect collision data opportunistically, and their presumed lack of methodological rigor is said to limit the use of their data for serious analysis. I maintain that those data are perhaps far more useful than we presume, and that we undervalue them because of an ill-defined obsession with detection correction as a study’s ticket to the club of legitimacy.