3rd Funding Call: Yonghuai Liu
Edge Hill University
DeepEarNet: Accurate Segmentation and Measurement of Cereal Grain Spikes Directly in Point Clouds Using Latest Deep Learning and Domain Knowledge.
Total Fund Requested – £24,639
Background: Wheat is a major cereal crop and is globally important as a staple source of nutrients for around 40% of the world’s population (Giraldo et al., 2019), with more than 700 million tonnes of grain produced annually (FAO report 2020: http://www.fao.org/worldfoodsituation/csdb/en/). There is widespread interest in estimating the number of ears per unit area (Ferrante et al., 2017), as this is a key yield component. Computer vision can estimate this and determine yield from crop images (Tan et al., 2020). However, accuracy and usability leave much room for improvement, and most techniques can only count the number of ears, not measure their shape and size. Therefore, manual data collection, involving visual inspection of the standing crop, is still the gold standard for the validation of semi-automatic (Tan et al., 2020) and automatic methods (Sadeghi-Tehran et al., 2019). This process is labour-intensive, error-prone and time-consuming. The proposed approach aims to address this in a novel way by advancing state-of-the-art image processing and computer vision techniques with an effective user interface to facilitate high-throughput measurement of ears, or the equivalent structures, in a variety of cereals.
Currently, 2D image analysis is the main approach used to estimate ear number (Sadeghi-Tehran et al., 2019; Tan et al., 2020; Ma et al., 2020; Fernandez-Gallego et al., 2019). To facilitate research on ear counting, the Global Wheat Head Detection (GWHD) dataset was recently established (David et al., 2020), comprising 4,700 high-resolution RGB images with 190,000 labelled wheat heads. More than 300,000 ears are available for a competition in May 2021 (http://www.global-wheat.com/2020-challenge/). All these approaches use single-view images, which are subject to challenges such as occlusion, perspective distortion, variable lighting, and inaccurate perception of geometric structure and shape, all of which occur widely during the growth of many crops. Consequently, these techniques usually achieve limited accuracy, have limited applicability, and are sensitive to noise in the data.
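In the 2D pipelines above, the final ear count typically reduces to counting detector outputs above a confidence threshold. A minimal sketch of that last step (the detection tuples and threshold are hypothetical, not the GWHD format or any published method):

```python
# Hypothetical detector output: one (x, y, w, h, confidence) tuple per
# candidate ear in a single-view RGB image.
detections = [
    (12, 30, 40, 55, 0.91),
    (80, 25, 38, 60, 0.87),
    (150, 40, 35, 50, 0.42),  # low-confidence candidate, likely background
]

def count_ears(detections, threshold=0.5):
    """Count detections whose confidence meets or exceeds the threshold."""
    return sum(1 for *_box, conf in detections if conf >= threshold)

print(count_ears(detections))  # 2 of the 3 candidates pass the threshold
```

Note that a count alone carries no shape or size information, which is the limitation of such pipelines highlighted above.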
Ear counting using 3D point clouds. To address the drawbacks of 2D image-based approaches, 3D point clouds have recently been explored; these provide more comprehensive geometric/texture information, resulting in improved accuracy. However, analysing such point clouds remains challenging due to the lack of structure among the points and contamination by imaging noise and background. To date, few techniques have been developed for counting ears in 3D point clouds (Paulus et al., 2013; Velumani et al., 2017; Wang, Mohan et al., 2020), collected using either laser scanners (Paulus et al., 2013) or LiDAR (Velumani et al., 2017). However, these techniques rely on hand-crafted features, projections or voxelisation of 3D point clouds, involve heuristic rules and parameters, and are sensitive to specular reflection from the points of interest. Although they improve accuracy over 2D image-based approaches, these techniques are usually not robust, lose information, or are limited in applicability.
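The information loss from voxelisation criticised above can be illustrated with a minimal sketch (using NumPy; the point cloud and voxel size are hypothetical, not taken from any of the cited methods): collapsing an unordered point cloud onto a regular grid discards all within-voxel geometry.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map an unordered (N, 3) point cloud onto a regular voxel grid.

    All points falling in the same voxel collapse to one occupied cell,
    so fine within-voxel geometry (e.g. an ear's surface detail) is
    discarded -- the information loss noted in the text.
    """
    # Integer voxel index for each point, relative to the cloud's minimum corner.
    indices = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
    # Unique occupied voxels; counts record how many points each voxel absorbed.
    occupied, counts = np.unique(indices, axis=0, return_counts=True)
    return occupied, counts

# Hypothetical toy cloud: 1000 random points in a 1 m cube, 10 cm voxels.
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
voxels, counts = voxelize(cloud, voxel_size=0.1)
print(voxels.shape[0], "occupied voxels from", cloud.shape[0], "points")
```

Deep networks that consume raw points directly, as proposed here, avoid this discretisation step and the heuristic choice of voxel size altogether.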