2nd Call for Funding: Eric Ober – NIAB
Project Title
In-field 3D imaging for high resolution morphometric phenotyping in wheat
Total Fund Requested: £24,772
Project Summary
There are a number of phenotyping tools currently available that are capable of producing high resolution 3D images of plant features such as the ears of a wheat plant and the individual spikelets within those structures. Quantitative data on the morphology of these key components of the plant that contribute to yield are of interest to plant breeders, biologists and geneticists. However, to be of relevance to breeders, plants must be measured in their natural growing environment in the field, and this is where many imaging methods do not function well. Translation of phenotyping methods established in controlled laboratory or glasshouse conditions to the field is a significant bottleneck to the application of imaging to high-throughput field phenotyping. We propose to tackle some of these challenges by building on preliminary results showing some measure of success using a structured light laser scanning camera to quantify physical attributes of the wheat ear in field plots. Specific technical challenges include ambient light, movement of plants in wind, obtaining sufficient image resolution for accurate segmentation, feature detection and automated measurement, and developing robust algorithms that work for a range of plant types. The proposed work will help support an early career scientist at NPL who contributed to the preliminary study. Results from the study will advance capabilities and knowledge for field phenotyping, and help foster collaborations between the fields of engineering, physics and plant biology.
The phenotyping need
Breeding for yield improvement fundamentally depends on accurate phenotyping of early-generation genetic materials for yield potential, but this has to be done early in the breeding cycle when only small quantities of seed are available. Thus, in the case of wheat, the breeder assesses the physical appearance of the ears (spikes) on the main tillers of an individual plant. Desirable for selection are ears that have a larger number of spikelets, with full grains in a larger number of florets in each spikelet, and no sign of fertility issues such as blind grain sites (aborted grains) (Rawson, 1970; Toyota et al., 2001). Opposite phenotypes are discarded from further crosses. In a commercial programme, tens of thousands of ears are examined in each cycle, and thus the assessments must be rapid. However, this requires the ‘breeder’s eye’, a skill learned from years of experience, and it is not foolproof. It is a subjective and qualitative assessment, but it has worked for thousands of years of wheat domestication. Now, however, breeding must be accelerated, and yield gains must occur more quickly using smarter tools to complement the traditional methods. High-throughput phenotyping for wheat traits can thus be achieved by automating the process of identifying key features and extracting relevant quantitative information from wheat crop images. We propose to develop such a tool, and have made significant early progress, but important technical challenges stand in the way of moving this forward to application in practice. The PhenomUK funding will help us solve these technical challenges.
Current state-of-the-art imaging systems
High resolution images of wheat ears have been obtained by many research groups around the world. Some of the best images, which show the detailed 3D morphology of each grain within the ear, have been obtained with micro-computed tomography. Understandably, this technology is not suited to high-throughput phenotyping in the field. Most plant traits of interest to breeders are influenced by genetics and by the growing environment, and thus breeders focus on how plants perform in realistic field situations rather than controlled environments. Other imaging methods, such as 3D time-of-flight (TOF) cameras, LiDAR, or imaging platforms that rotate plants to obtain composite 3D images from fixed-position high resolution RGB cameras, work well in controlled environments. As with micro-computed tomography, however, the logistical challenges of translating these methods to field-grown material make them unlikely to be cost-effective solutions.
A prototype high resolution imaging system for field phenotyping
The NPL team have used a Photoneo PhoXi 3D structured light laser scanning (SLLS) camera to obtain 3D images in the lab and field (Figs. 1, 2). Using parallel structured light, the PhoXi XL camera captures up to 3.2 million 3D points per scan in 1-2 s, at a throughput of 16 million points per second. Under good lighting conditions, the resolution is sufficient to extract quantitative measures of the length, width and volume of individual wheat ears in the image, and to count individual spikelets (Fig. 1). This already provides four times more quantitative information than traditional manual visual assessments. However, use of the camera for practical field phenotyping requires substantial further development. In preliminary work, measurements were made early in the day or late in the evening when the sun is low in the sky, because practically every camera technology struggles in direct sunlight. Placing a shroud over the phenotyping platform to block some of the sun for daytime measurements helped somewhat, but high levels of ambient light diminished the signal, leading to poorer resolution. LiDAR is less sensitive to ambient light, but other issues, such as adjusting for plant movement during scans, make field-based LiDAR solutions for high resolution imaging problematic.
Another relatively low-cost method to obtain 3D images of field plots was developed by NPL using a multi-stereo imaging system of 18 cameras on the mobile phenotyping platform. High resolution RGB 2D images from each camera were stitched together to form a 3D image, which was processed using ML algorithms to extract wheat crop traits. This system has a faster image capture rate, performs better in sunlight, and is more suitable for mass deployment due to its lower cost. In the proposed work, this technique would also be employed to capture field data and to fuse its 3D output with the SLLS 3D data.
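To indicate the kind of processing involved in the stereo step, the sketch below shows one way to recover a 3D point cloud from a single rectified stereo pair using OpenCV's semi-global block matching. It is a minimal illustration only, not the NPL pipeline: the file names, the reprojection matrix Q from calibration, and the matcher parameters are assumptions that would need tuning for canopy imagery.

```python
# Minimal sketch: recover a 3D point cloud from one rectified stereo pair.
# Assumes the pair is already rectified and that the 4x4 reprojection matrix Q
# is available from stereo calibration; file names are illustrative only.
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)
Q = np.load("reprojection_Q.npy")  # disparity-to-depth matrix from calibration

# Semi-global block matching; parameters would need tuning for wheat canopies.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed point

# Reproject valid disparities to 3D; point clouds from all 18 cameras could then
# be transformed into a common frame and merged.
points = cv2.reprojectImageTo3D(disparity, Q)
cloud = points[disparity > 0].reshape(-1, 3)
print(cloud.shape)
```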
NPL also has a ground-based hyperspectral camera system covering the VIS-NIR-SWIR bands, which will be utilised to capture high resolution 2D images of the wheat plots. These hyperspectral data cubes could then be converted into hyperspectral 3D models using photogrammetry. This would give additional insights, and the additional features generated could feed into training the ML networks for better segmentation and classification of the wheat heads.
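As a simple illustration of the extra spectral features that could be derived from such cubes and attached to the 3D models, the sketch below computes a per-pixel vegetation index and stacks it with raw band reflectances. The cube shape, file name and band indices are hypothetical; the real system would select bands appropriate to its VIS-NIR-SWIR coverage.

```python
# Illustrative sketch only: derive simple per-pixel spectral features (e.g. NDVI)
# from a hyperspectral cube so they can be appended as extra channels for the ML
# networks. Cube layout and band indices are assumptions, not measured values.
import numpy as np

cube = np.load("wheat_plot_cube.npy")   # assumed shape (rows, cols, bands), reflectance
RED_BAND, NIR_BAND = 60, 120            # hypothetical band indices

red = cube[:, :, RED_BAND].astype(np.float32)
nir = cube[:, :, NIR_BAND].astype(np.float32)
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)

# Stack candidate features per pixel; these maps could be projected onto the
# photogrammetric 3D model so each point carries spectral attributes.
features = np.dstack([ndvi, red, nir])
print(features.shape)
```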
The SLLS data were previously processed using in-house Matlab vision algorithms. These look for densities of points to identify shapes in the point cloud that differentiate stems from ears. The density-based DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm was well suited to the task and performed better than the popular centroid-based K-means approach (Thompson et al., 2019).
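For readers unfamiliar with the approach, the following is a minimal sketch of density-based segmentation of an SLLS point cloud, written in Python with scikit-learn's DBSCAN rather than the in-house Matlab code. The file name and the eps and min_samples values are illustrative and would need tuning to the scanner's point spacing.

```python
# Hedged sketch of DBSCAN segmentation of an SLLS point cloud.
# eps is in the cloud's units (here assumed to be metres); values are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.load("slls_scan.npy")   # (N, 3) array of x, y, z coordinates

db = DBSCAN(eps=0.01, min_samples=30).fit(points)
labels = db.labels_                 # -1 marks points rejected as noise

clusters = [points[labels == k] for k in set(labels) if k != -1]
print(f"{len(clusters)} candidate ear/stem clusters, "
      f"{np.sum(labels == -1)} points rejected as noise")
```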
After segmentation of the point cloud using the DBSCAN clustering algorithm and refinement of the clusters using least-squares data fitting, the feature extraction toolbox was run to quantify crop parameters such as height, ear width and ear length. Height estimates were accurate (scaled RMSE 1.2%) and estimates of ear width were reasonably good (scaled RMSE 25.2%), but ear length estimates were poor because the current algorithms underestimate it owing to occlusion of features in the point cloud.
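The sketch below illustrates the idea behind this feature extraction step: fit the principal axis of a segmented ear cluster by least squares (via SVD) and report its extent along and across that axis as length and width. It is a hedged illustration continuing the DBSCAN sketch above, not the project's own toolbox.

```python
# Hedged sketch of per-cluster feature extraction: least-squares principal axis
# of an ear cluster via SVD, with length along the axis and width across it.
import numpy as np

def ear_dimensions(cluster: np.ndarray):
    """cluster: (N, 3) points belonging to one segmented wheat ear."""
    centred = cluster - cluster.mean(axis=0)
    # Right singular vectors give the least-squares principal directions.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    along = centred @ vt[0]    # coordinates along the ear axis
    across = centred @ vt[1]   # coordinates across the ear
    return along.max() - along.min(), across.max() - across.min()

# Example with the clusters from the DBSCAN step:
# for c in clusters:
#     length, width = ear_dimensions(c)
```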
Incorrect segmentation and classification of wheat heads from the stems also produced incorrect wheat head counts when the algorithm was applied to the dense and highly occluded outdoor environment. At maturity, bending wheat heads also generated incorrect length and volume estimates due to improper shape fitting.
As an improvement, machine learning algorithms (e.g. using a fast CNN) trained on a relatively small number of annotated 3D images were developed, and these currently show 90% correct identification in a crowded image of around 100 to 200 wheat ears that includes many overlapping features. However, in the dense crops typical of field plots, solutions are still needed to improve the speed and accuracy of the current algorithms, and larger annotated 3D datasets of wheat heads need to be generated to train the networks for better accuracy.
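As a rough indication of the detector fine-tuning involved, the sketch below adapts a torchvision Faster R-CNN model for a two-class (background plus wheat ear) detection task. It is a hedged sketch under stated assumptions, not the project's implementation: the original work uses annotated 3D data, whereas this example assumes 2D images or projections, a hypothetical WheatEarDataset-style data loader, and torchvision >= 0.13 for the weights argument.

```python
# Hedged sketch: fine-tune a torchvision Faster R-CNN detector for wheat-head
# detection (background + ear = 2 classes). Data loading is assumed, not shown.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_model(num_classes: int = 2):
    # Pretrained backbone + detection head; weights="DEFAULT" needs torchvision >= 0.13.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_model()
optimiser = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# Training loop outline: images is a list of CxHxW tensors, targets a list of
# dicts with "boxes" (Nx4) and "labels" (N,) per image (data_loader is assumed).
# for images, targets in data_loader:
#     loss_dict = model(images, targets)   # returns a dict of losses in train mode
#     loss = sum(loss_dict.values())
#     optimiser.zero_grad()
#     loss.backward()
#     optimiser.step()
```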
Currently, the data captured in the field are post-processed in the lab, usually within a day. However, we eventually aim to perform real-time data processing using cloud computing, and to introduce 5G capability for faster data transfer rates and automation of image capture.
The knowledge and datasets from the project will be useful to researchers developing high-throughput phenotyping systems.
Important technical challenges and how we propose to address them using this grant
- Lighting under field conditions:
  - Conduct a rigorous comparison of imaging using a light shroud, imaging during low-angle incident solar radiation, and night-time captures.
- Improving image resolution:
  - Explore fusion of the 3D data from multi-stereo imaging with the SLLS data (a minimal registration sketch is given after this list).
  - Fuse LiDAR, hyperspectral imaging and SLLS data to obtain the best resolution of point cloud data.
- Improving algorithms for automated feature detection and quantitative analysis:
  - Develop routines that can filter out occluded ears from the segmentation clusters prior to sub-feature extraction, such as ear width and length.
  - Further develop ML-based algorithms by incorporating the additional features extracted from hyperspectral 3D data.
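As referenced in the fusion item above, the sketch below shows one way such data fusion could start: registering a multi-stereo point cloud to the SLLS point cloud with point-to-point ICP in Open3D and merging the aligned clouds. The file names, correspondence threshold and identity initialisation are illustrative assumptions; in practice a coarse global alignment would normally precede ICP.

```python
# Hedged sketch of the multi-stereo / SLLS fusion step: point-to-point ICP
# registration followed by a simple merge. Parameters are illustrative only.
import numpy as np
import open3d as o3d

slls = o3d.io.read_point_cloud("slls_plot.ply")            # target (higher resolution)
stereo = o3d.io.read_point_cloud("multistereo_plot.ply")   # source to be aligned

threshold = 0.02   # max correspondence distance in the clouds' units (e.g. metres)
result = o3d.pipelines.registration.registration_icp(
    stereo, slls, threshold, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# Apply the estimated transform and concatenate the co-registered clouds.
fused = slls + stereo.transform(result.transformation)
print(result.fitness, result.inlier_rmse)
```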