Smile! And say ‘conservation’

3.2 million wildlife photos, analysed by a deep learning system, could be a ‘game-changer’ for wildlife ecology

Cape Town bureau chief

Millions of animals roaming east Africa’s Serengeti have pushed the boundaries of artificial intelligence and wildlife ecology.
Images of the animals, collected by motion-sensing camera traps, were used to train a form of computational intelligence called deep neural networks to identify, count and describe 48 species in their natural habitat.
The 3.2 million images — labelled over several years by 50,000 human volunteers — were analysed by the system with an accuracy of 96.6%, the same level achieved by the crowdsourced volunteers.
The photographs came from Snapshot Serengeti, which has large numbers of camera traps in Tanzania, and a new paper in the Proceedings of the National Academy of Sciences reports how the deep learning technique can automate classification of 99.3% of them.

“This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyse the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behaviour into ‘big data’ sciences,” said senior author Jeff Clune, from the University of Wyoming in the US.
“This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems.”
Deep neural networks are loosely inspired by how animal brains see and understand the world. They require vast amounts of training data which must be accurately labelled.
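For readers curious what this looks like in practice, the sketch below shows the general approach: fine-tuning a pretrained convolutional neural network on labelled camera-trap images sorted into one folder per species. The folder name, the ResNet-18 backbone and the training settings here are illustrative assumptions, not the authors’ published pipeline, which is described in the PNAS paper.

```python
# Minimal sketch (not the authors' pipeline): fine-tune a pretrained CNN
# to classify camera-trap images arranged one folder per species.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: camera_trap_images/<species_name>/<image>.jpg
dataset = datasets.ImageFolder("camera_trap_images", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

# Start from a network pretrained on ImageNet and replace its final layer
# with one output per species (Snapshot Serengeti labels 48 species).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# One pass over the labelled images; a real project would train for many
# epochs and hold out a validation set to measure accuracy.
model.train()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The accurately labelled volunteer classifications are what make this kind of training possible: the network only learns to recognise species because every image it sees during training already carries a species label.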
Craig Packer, from the University of Minnesota, who heads Snapshot Serengeti, said: “When I told Jeff Clune we had 3.2 million labelled images, he stopped in his tracks.

“Our citizen scientists have done phenomenal work, but we needed to speed up the process to handle ever greater amounts of data. The deep learning algorithm is amazing and far surpassed my expectations. This is a game-changer for wildlife ecology.”

Snapshot Serengeti founder Ali Swanson, from the University of Oxford in the UK, said very few of the hundreds of camera trap projects around the world were able to classify their photographs to extract data.
“That means that much of the knowledge in these important data sets remains untapped. Although projects are increasingly turning to citizen science for image classification, we’re starting to see it take longer and longer to label each batch of images as the demand for volunteers grows. We believe deep learning will be key in alleviating the bottleneck for camera-trap projects: the effort of converting images into usable data.”
Another Snapshot Serengeti leader, Margaret Kosmala from Harvard University in the US, said: “We estimate that the deep learning technology pipeline we describe would save more than eight years of human labelling effort for each additional three million images. That is a lot of valuable volunteer time that can be redeployed to help other projects.”

And first author Sadegh Norouzzadeh said there was much more to come from the deep learning technology the team had developed.
“We expect that as more people research how to improve deep learning for this application and publish their datasets, the sky’s the limit. It is exciting to think of all the different ways this technology can help with our important scientific and conservation missions,” he said.
