Towards Dense Object Tracking in a 2D Honeybee Hive

Katarzyna Bozek, Laetitia Hebert, Alexander S. Mikheyev, Greg J. Stephens

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

From human crowds to cells in tissue, the detection and efficient tracking of multiple objects in dense configurations is an important and unsolved problem. In the past, limitations of image analysis have restricted studies of dense groups to tracking a single or subset of marked individuals, or to coarse-grained group-level dynamics, all of which yield incomplete information. Here, we combine convolutional neural networks (CNNs) with the model environment of a honeybee hive to automatically recognize all individuals in a dense group from raw image data. We create new, adapted individual labeling and use the segmentation architecture U-Net with a loss function dependent on both object identity and orientation. We additionally exploit temporal regularities of the video recording in a recurrent manner and achieve near human-level performance while reducing the network size by 94% compared to the original U-Net architecture. Given our novel application of CNNs, we generate extensive problem-specific image data in which labeled examples are produced through a custom interface with Amazon Mechanical Turk. This dataset contains over 375,000 labeled bee instances across 720 video frames at 2 FPS, representing an extensive resource for the development and testing of tracking methods. We correctly detect 96% of individuals with a location error of ~7% of a typical body dimension and an orientation error of 12°, approximating the variability of human raters. Our results provide an important step towards efficient image-based dense object tracking by allowing for the accurate determination of object location and orientation across time-series image data within one network architecture.
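The loss the abstract describes, coupling object identity with orientation, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: it supposes a per-pixel foreground/background probability plus a per-pixel (sin θ, cos θ) orientation target, combines binary cross-entropy with a foreground-masked squared error, and uses a hypothetical weighting parameter `w_orient`.

```python
import numpy as np

def combined_loss(seg_pred, seg_true, angle_pred, angle_true, w_orient=1.0):
    """Per-pixel loss joining object-identity segmentation and orientation.

    seg_pred   : (H, W) predicted foreground probabilities in (0, 1)
    seg_true   : (H, W) binary ground-truth mask (1 = bee body pixel)
    angle_pred : (H, W, 2) predicted (sin, cos) of body orientation per pixel
    angle_true : (H, W, 2) ground-truth (sin, cos), meaningful on foreground only
    """
    eps = 1e-7
    # Binary cross-entropy for the identity (foreground vs. background) channel.
    bce = -(seg_true * np.log(seg_pred + eps)
            + (1 - seg_true) * np.log(1 - seg_pred + eps)).mean()
    # Squared error on the orientation vector, counted on foreground pixels only,
    # so background pixels (where orientation is undefined) contribute nothing.
    sq = ((angle_pred - angle_true) ** 2).sum(axis=-1)
    orient = (sq * seg_true).sum() / (seg_true.sum() + eps)
    return bce + w_orient * orient
```

Predicting (sin, cos) rather than a raw angle avoids the 0°/360° wrap-around discontinuity; the masking reflects that orientation is only defined where a bee is present.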

Original language: English
Title of host publication: Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
Publisher: IEEE Computer Society
Pages: 4185-4193
Number of pages: 9
ISBN (Electronic): 9781538664209
DOIs: 10.1109/CVPR.2018.00440
Publication status: Published - 14 Dec 2018
Event: 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 - Salt Lake City, United States
Duration: 18 Jun 2018 - 22 Jun 2018

Publication series

Name: IEEE/CVF Conference on Computer Vision and Pattern Recognition

Conference

Conference: 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
Country: United States
City: Salt Lake City
Period: 18/06/18 - 22/06/18

Bibliographical note

15 pages, including supplementary figures. 1 supplemental movie available as an ancillary file

Keywords

  • cs.CV
  • q-bio.QM
  • stat.ML

Cite this

Bozek, K., Hebert, L., Mikheyev, A. S., & Stephens, G. J. (2018). Towards Dense Object Tracking in a 2D Honeybee Hive. In Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 (pp. 4185-4193). [8578538] (IEEE/CVF Conference on Computer Vision and Pattern Recognition). IEEE Computer Society. https://doi.org/10.1109/CVPR.2018.00440
@inproceedings{40e1840a48ba4449909ebd509a9105d7,
title = "Towards Dense Object Tracking in a 2D Honeybee Hive",
abstract = "From human crowds to cells in tissue, the detection and efficient tracking of multiple objects in dense configurations is an important and unsolved problem. In the past, limitations of image analysis have restricted studies of dense groups to tracking a single or subset of marked individuals, or to coarse-grained group-level dynamics, all of which yield incomplete information. Here, we combine convolutional neural networks (CNNs) with the model environment of a honeybee hive to automatically recognize all individuals in a dense group from raw image data. We create new, adapted individual labeling and use the segmentation architecture U-Net with a loss function dependent on both object identity and orientation. We additionally exploit temporal regularities of the video recording in a recurrent manner and achieve near human-level performance while reducing the network size by 94{\%} compared to the original U-Net architecture. Given our novel application of CNNs, we generate extensive problem-specific image data in which labeled examples are produced through a custom interface with Amazon Mechanical Turk. This dataset contains over 375,000 labeled bee instances across 720 video frames at 2 FPS, representing an extensive resource for the development and testing of tracking methods. We correctly detect 96{\%} of individuals with a location error of ~ 7{\%} of a typical body dimension, and orientation error of 12{\textdegree}, approximating the variability of human raters. Our results provide an important step towards efficient image-based dense object tracking by allowing for the accurate determination of object location and orientation across time-series image data efficiently within one network architecture.",
keywords = "cs.CV, q-bio.QM, stat.ML",
author = "Katarzyna Bozek and Laetitia Hebert and Mikheyev, {Alexander S.} and Stephens, {Greg J.}",
note = "15 pages, including supplementary figures. 1 supplemental movie available as an ancillary file",
year = "2018",
month = "12",
day = "14",
doi = "10.1109/CVPR.2018.00440",
language = "English",
series = "IEEE/CVF Conference on Computer Vision and Pattern Recognition",
publisher = "IEEE Computer Society",
pages = "4185--4193",
booktitle = "Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018",
address = "United States",

}

TY - GEN

T1 - Towards Dense Object Tracking in a 2D Honeybee Hive

AU - Bozek, Katarzyna

AU - Hebert, Laetitia

AU - Mikheyev, Alexander S.

AU - Stephens, Greg J.

N1 - 15 pages, including supplementary figures. 1 supplemental movie available as an ancillary file

PY - 2018/12/14

Y1 - 2018/12/14

N2 - From human crowds to cells in tissue, the detection and efficient tracking of multiple objects in dense configurations is an important and unsolved problem. In the past, limitations of image analysis have restricted studies of dense groups to tracking a single or subset of marked individuals, or to coarse-grained group-level dynamics, all of which yield incomplete information. Here, we combine convolutional neural networks (CNNs) with the model environment of a honeybee hive to automatically recognize all individuals in a dense group from raw image data. We create new, adapted individual labeling and use the segmentation architecture U-Net with a loss function dependent on both object identity and orientation. We additionally exploit temporal regularities of the video recording in a recurrent manner and achieve near human-level performance while reducing the network size by 94% compared to the original U-Net architecture. Given our novel application of CNNs, we generate extensive problem-specific image data in which labeled examples are produced through a custom interface with Amazon Mechanical Turk. This dataset contains over 375,000 labeled bee instances across 720 video frames at 2FPS, representing an extensive resource for the development and testing of tracking methods. We correctly detect 96% of individuals with a location error of ~ 7% of a typical body dimension, and orientation error of 12°, approximating the variability of human raters. Our results provide an important step towards efficient image-based dense object tracking by allowing for the accurate determination of object location and orientation across time-series image data efficiently within one network architecture.

AB - From human crowds to cells in tissue, the detection and efficient tracking of multiple objects in dense configurations is an important and unsolved problem. In the past, limitations of image analysis have restricted studies of dense groups to tracking a single or subset of marked individuals, or to coarse-grained group-level dynamics, all of which yield incomplete information. Here, we combine convolutional neural networks (CNNs) with the model environment of a honeybee hive to automatically recognize all individuals in a dense group from raw image data. We create new, adapted individual labeling and use the segmentation architecture U-Net with a loss function dependent on both object identity and orientation. We additionally exploit temporal regularities of the video recording in a recurrent manner and achieve near human-level performance while reducing the network size by 94% compared to the original U-Net architecture. Given our novel application of CNNs, we generate extensive problem-specific image data in which labeled examples are produced through a custom interface with Amazon Mechanical Turk. This dataset contains over 375,000 labeled bee instances across 720 video frames at 2FPS, representing an extensive resource for the development and testing of tracking methods. We correctly detect 96% of individuals with a location error of ~ 7% of a typical body dimension, and orientation error of 12°, approximating the variability of human raters. Our results provide an important step towards efficient image-based dense object tracking by allowing for the accurate determination of object location and orientation across time-series image data efficiently within one network architecture.

KW - cs.CV

KW - q-bio.QM

KW - stat.ML

UR - http://www.scopus.com/inward/record.url?scp=85062825617&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85062825617&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2018.00440

DO - 10.1109/CVPR.2018.00440

M3 - Conference contribution

T3 - IEEE/CVF Conference on Computer Vision and Pattern Recognition

SP - 4185

EP - 4193

BT - Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018

PB - IEEE Computer Society

ER -
