Monday 24 November 2008

Kendall et al.'s Grizzly work.

Well, it was a bit later than I expected, but Kendall et al.'s paper on the non-invasive genetic sampling of grizzly bears in Glacier National Park is finally out in this month's issue of the Journal of Wildlife Management. I've had some time to digest the actual paper now - I didn't blog about it right away because I wanted to pore over it - and I'm very satisfied with the resulting publication. This project cost American taxpayers about 5 million dollars, and every penny of it was well spent.

As I discussed earlier, one of the biggest issues in managing threatened or endangered wildlife is getting an accurate count of just how many individuals remain. Some species are easier to count than others; for example, one of my graduate students just got back from helping with a moose count somewhere out in the Fortymile country. The same graduate student also works on Sitka black-tailed deer, which we can't survey from the air because of the tree cover. How are we supposed to manage a species if we don't even know this basic information? The honest answer is that we can't, at least not very effectively.

Not long ago, as genetic methods became more common, people began talking about using DNA to fingerprint individuals. From early on, we had been using DNA to fingerprint species, relying on portions of DNA unique to a species to distinguish it from others. That approach remains very successful: we've used it to discover that what we thought was a single species was actually two, and we've used it to find the opposite - that two different-looking critters are just different ends of one species. But by then, the technology had come far enough that identifying individuals was both possible and economical.

The trick was using the hyper-variable regions of the genome I've talked about before, called `microsatellites.` These bits of DNA don't do anything; they just drift along, becoming common or uncommon, growing longer or shorter. The variation comes from copy errors when DNA replicates - sort of like the noise around a xerox, except this is a xerox of a xerox of a xerox to the nth degree. Because they're so variable, the odds that two individuals would share the same set of microsatellites by chance are very low once you look at enough of them.
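Just to put a rough number on that, here's a back-of-the-envelope sketch in Python of the "probability of identity" - the chance that two unrelated animals would share the same multilocus genotype. The locus names and allele frequencies below are made up purely for illustration; they are not the paper's loci or statistics.

```python
# Back-of-the-envelope sketch: probability that two unrelated bears share the
# same multilocus microsatellite genotype by chance (the "probability of
# identity", PI). Allele frequencies below are invented for illustration only.
from itertools import combinations

def locus_pi(freqs):
    """PI at one locus for unrelated individuals: sum(p_i^4) + sum_{i<j} (2*p_i*p_j)^2."""
    homo = sum(p ** 4 for p in freqs)
    hetero = sum((2 * p * q) ** 2 for p, q in combinations(freqs, 2))
    return homo + hetero

# Hypothetical allele frequencies at a handful of loci (each set sums to 1).
loci = {
    "locus_A": [0.40, 0.30, 0.20, 0.10],
    "locus_B": [0.50, 0.25, 0.15, 0.10],
    "locus_C": [0.35, 0.35, 0.20, 0.10],
    "locus_D": [0.45, 0.30, 0.25],
    "locus_E": [0.30, 0.25, 0.25, 0.20],
    "locus_F": [0.60, 0.20, 0.20],
}

overall = 1.0
for name, freqs in loci.items():
    pi = locus_pi(freqs)
    overall *= pi            # loci assumed independent, so per-locus PIs multiply
    print(f"{name}: PI = {pi:.3f}")

print(f"Multilocus PI = {overall:.2e}")  # tiny number = chance matches are rare
```

Even with these invented frequencies, half a dozen moderately variable loci push the chance of a random match down to roughly one in fifty thousand, which is why a small panel of microsatellites is enough to tell individual bears apart.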

Let's move over to Glacier National Park, and set the scene there. In 1975, grizzly bears in the lower 48 were listed as threatened under the Endangered Species Act, amid concerns over their dwindling population. Around the Glacier NP area, there was a long history of harvesting grizzlies to protect livestock. Because of the listing, a hunting quota was set for these animals and gradually reduced, and hunting was finally terminated in 1991.

Sightings in Glacier NP had increased after the elimination of the harvest, but the counts were sporadic, often lacked even internal consistency, and couldn't be used to estimate population size. Further, there wasn't any data from the areas surrounding GNP that could support even sweeping generalizations. Kendall et al. wanted to use microsatellites to get a count of how many bears there are, relying on the fact that bears tend to leave little bits of themselves wherever they go: they rub against trees, and hair gets snagged on fences as the bears meander about doing bear-like things.

Now, when you're doing density estimates by trapping (a technique used primarily for small animals), you need something close to a random trapping pattern, so you don't bias your estimates by piling up 'captures' in a few locations. For example, if I trapped exclusively beside a water hole on the Serengeti, I would catch a large number of unique animals every day and falsely conclude that densities were very high, since I'd have many hits in a small area. Instead, you need to cast your net over areas where you wouldn't get many individuals too, expanding your coverage to get a truer approximation of natural densities.
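To make the water-hole problem concrete, here's a toy simulation with made-up numbers - nothing to do with Kendall et al.'s actual analysis - comparing a naive captures-per-trapped-area estimate from one attractive trap site against an estimate from traps spread over a representative slice of the study area.

```python
# Toy simulation (not from the paper): why clustering all your traps at one
# attractive spot inflates a naive captures-per-area density estimate.
import random

random.seed(1)

AREA_KM2 = 100.0          # 10 km x 10 km study area
N_ANIMALS = 50            # true population -> true density 0.5 animals / km^2
true_density = N_ANIMALS / AREA_KM2

# Each animal gets a home-range centre scattered over the area.
centres = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_ANIMALS)]

# Scenario 1: all traps packed into 1 km^2 around a water hole.
# Because nearly every animal comes in to drink, assume most are captured there.
waterhole_captures = sum(1 for _ in centres if random.random() < 0.9)
naive_waterhole = waterhole_captures / 1.0   # captures per km^2 of trapped area

# Scenario 2: traps spread over a representative 25 km^2 quadrant of the area;
# an animal is captured if its home-range centre falls inside that quadrant.
slice_captures = sum(1 for x, y in centres if x < 5 and y < 5)
naive_slice = slice_captures / 25.0

print(f"true density:        {true_density:.2f} animals / km^2")
print(f"water-hole estimate: {naive_waterhole:.2f} animals / km^2 (wildly inflated)")
print(f"spread-out estimate: {naive_slice:.2f} animals / km^2 (close to truth)")
```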

Back to our bears, you can probably guess the problem: the fences are going to be concentrated in some areas and non-existent in others. There'll be fences along the edges of the park, but very few inside. So the first step in doing a more rigorous analysis was distributing the `traps` (in this case, hair snags) more evenly, across a grid of 8-km-square cells. They baited the snags with a variety of scent lures, changing the composition often enough to keep the bears from becoming habituated to them.
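Something like the following sketch captures the spirit of that layout. The grid dimensions and coordinates here are entirely invented; the point is simply "one snag site per grid cell" rather than "snags wherever the fences happen to be."

```python
# Sketch of the grid idea (details invented): divide the study area into
# 8 km x 8 km cells and drop one hair-snag site at a random spot in each cell,
# so sampling effort is spread evenly instead of following the fence lines.
import random

random.seed(42)
CELL_KM = 8.0
CELLS_X, CELLS_Y = 5, 4   # a made-up 40 km x 32 km study area

snag_sites = []
for i in range(CELLS_X):
    for j in range(CELLS_Y):
        x = i * CELL_KM + random.uniform(0, CELL_KM)   # random point within the cell
        y = j * CELL_KM + random.uniform(0, CELL_KM)
        snag_sites.append((round(x, 1), round(y, 1)))

print(f"{len(snag_sites)} snag sites, one per cell, e.g. {snag_sites[:3]}")
```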

On a rolling basis, they would go out into the field to collect hairs from the hair snags, and they would opportunistically collect hairs from rub trees. The rub trees were still non-randomly distributed - which means those data can only be used in limited ways, and you can't make any statements about habitat use by the bears. Similarly, it seems the baited hair snags were placed mostly in preferred habitat. So a number of the assumptions of random sampling are still violated, but combining the two sources let Kendall et al. get around many of their problems with capture efficiency - the rate at which actual animals are detected.
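The payoff of decent capture efficiency is that the recaptures - bears whose hair turns up in more than one sample type - let you estimate how many bears you never detected at all. Kendall et al. use much fancier mark-recapture models than this, but the simplest two-sample version (Lincoln-Petersen with Chapman's correction, with numbers I've invented) shows the logic.

```python
# Simplest possible illustration of mark-recapture logic (Lincoln-Petersen with
# Chapman's correction). The real analysis uses more sophisticated models; the
# numbers below are invented just to show the idea.

def chapman_estimate(n1, n2, m):
    """
    n1 = individuals identified in session 1 (e.g. baited hair snags)
    n2 = individuals identified in session 2 (e.g. rub-tree collections)
    m  = individuals seen in BOTH sessions (the 'recaptures')
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical genotyping results: 120 bears at snags, 90 at rub trees, 45 in both.
n1, n2, m = 120, 90, 45
n_hat = chapman_estimate(n1, n2, m)
detected = n1 + n2 - m          # bears we actually have hair from
print(f"bears detected at least once: {detected}")
print(f"estimated total population:   {n_hat:.0f}")  # includes bears never sampled
```

The fewer recaptures you get relative to total detections, the shakier that extrapolation becomes, which is why capture efficiency matters so much.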

That's enough for now. Next time, I'll talk about what Kendall et al. actually found, and what the management implications are.

Kendall, K., Stetz, J., Roon, D., Waits, L., Boulanger, J., and Paetkau, D. (2008) Grizzly Bear Density in Glacier National Park, Montana. Journal of Wildlife Management, vol. 72(8), pp. 1693-1705.
