Plant Feature Extraction from 3D Point Clouds Workshop
1st July 2021, 10:30am – 3:30pm (free to attend)
Abstracts for contributed talks are invited. Please send abstracts to email@example.com; the deadline is 14 May 2021.
CALL FOR PAPERS
Submission deadline (1-page abstract): 14 May 2021
Notification of acceptance: 28 May 2021
3D imaging is increasingly being used in crop imaging, driven in large part by the challenge of high-throughput phenotyping (identification of effects on plant structure and function resulting from genotypic differences and environmental conditions). This workshop will focus on the general challenge of feature extraction from 3D imaging, such as point clouds, bringing together those working on challenges of this type in different applications, including crop imaging. The workshop will provide the opportunity to discover together how cutting-edge computer vision approaches find application in crop imaging.
Specific topics of interest include, but are not limited to, the following:
- generic methods for extraction of features from 3D imaging, including deep learning
- advances in segmentation, tracking, detection, reconstruction and identification methods for 3D imaging that address unsolved plant phenotyping problems
- advances in feature extraction and related 3D imaging computer vision tasks that address challenges in other applications
Invited Speakers:
- Dr Gert Kootstra (Wageningen University and Research) “3D digital plant phenotyping”
- Dr Nick Pears (University of York) “A tour of deep learning on 3D images”
In addition to the invited speakers, there will be contributed talks and breakout discussions. There will also be an opportunity to share posters using Jamboard.
Further Information and Submission Guidelines:
Please send abstracts by the deadline to Claire.Hayes@nottingham.ac.uk, indicating whether you would prefer an oral or poster presentation. Further information about the workshop is available at: ?
We look forward to you joining us!
Organisers:
Andrew Thompson (National Physical Laboratory, UK), Andrew.Thompson@npl.co.uk
Tony Pridmore (University of Nottingham, UK), Tony.Pridmore@nottingham.ac.uk
PROGRAMME
Date: 1st July 2021
Venue: Remotely on PhenomUK Zoom
11:00-11:10 Welcome and introduction (Andrew Thompson)
11:10-11:50 Nick Pears, University of York
- A tour of deep learning on 3D images
11:50-12:05 David Rousseau
- Semantic segmentation of 3D point clouds of leaf-off apple orchards
12:05-12:20 Fuli Wang
- Dimension fitting of wheat spikes based on the unsupervised algorithm in dense 3D point clouds
12:20-12:35 Morteza Ghahremani
- Trait analysis of plants in 3D space
12:35-13:30 Lunch break
13:30-14:10 Gert Kootstra, Wageningen University and Research
- 3D digital plant phenotyping
14:10-14:25 Bo Li
- Quantitative strawberry and potato tuber phenotyping by 3D imaging
14:25-14:40 Haolin Pan
- Towards Accurate 3D Registration of Growing Plants
14:40-15:10 Panel discussion
KEYNOTE SPEAKER INFORMATION
3D digital plant phenotyping
Gert Kootstra, Wageningen University and Research, The Netherlands
Abstract: In order to get a better grip on plant breeding, plant scientists need to understand the interaction of the genotype, the environment and the resulting phenotype. With next-generation sequencing methods, a vast amount of genotypic data has become available. Phenotyping, on the other hand, is still mainly a manual process, with human experts grading and measuring the plants. In order to tackle this phenotyping bottleneck, research is focussed on digital plant phenotyping, with the aim of developing automatic methods to measure a range of plant traits. In particular, many people work on image-processing methods to extract plant traits from camera images. The 3D aspects of plants, however, cannot be estimated well from 2D images. In this presentation, I will therefore discuss methods for 3D image acquisition and processing of the 3D data.
A tour of deep learning on 3D images
Nick Pears, Department of Computer Science, University of York, UK
Abstract: Within the last decade, deep learning techniques have revolutionised Computer Vision, bringing about step-changes in performance on large, diverse and challenging image and video datasets. Computer vision tasks include scene segmentation, object recognition, biometrics, gesture recognition and human activity tracking – indeed, any mainstream vision application that you care to mention has a state-of-the-art solution built on deep learning techniques, provided that there is enough training data or augmentable training data. This success largely lies in the ability of Convolutional Neural Nets (CNNs) to exploit local spatial similarities in the same way over the global image. However, standard 2D images lie on a regular grid-like structure, making them highly amenable to encoding via convolution and pooling operations. In contrast, a 3D point cloud is an unordered set of points that is significantly more difficult to handle. In this talk, we outline the development of deep learning techniques that are applicable to 3D images. This starts with the early approaches that used either 3D-to-2D projections or voxels to achieve grid-like structures, and highlights the pros and cons of such approaches. We then discuss the landmark approach of PointNet, which allows for permutation-invariance of the input point set. Further derivatives of PointNet are discussed, and we also cover alternative competitive deep-learning-based approaches, such as Dynamic Graph CNNs (DGCNNs) and transformer networks.
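The permutation-invariance idea behind PointNet mentioned above can be illustrated with a minimal toy sketch (not the speaker's code, and far simpler than a real PointNet: random untrained weights stand in for learned ones). The key structure is a shared per-point MLP followed by a symmetric aggregation (max pooling), so the encoding of a point cloud is unchanged when the points are reordered:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shared per-point weights (random here; learned in practice).
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 32))

def encode(points: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud to a 32-d global feature vector."""
    h = np.maximum(points @ W1, 0.0)  # same MLP applied to every point (ReLU)
    h = np.maximum(h @ W2, 0.0)
    # Symmetric max-pool over the point axis: output is order-independent.
    return h.max(axis=0)

cloud = rng.normal(size=(100, 3))       # a random point cloud
shuffled = cloud[rng.permutation(100)]  # same points, different order

# The global feature is identical for any ordering of the input points.
assert np.allclose(encode(cloud), encode(shuffled))
```

Because max (like sum or mean) is symmetric in its arguments, no ordering of the point set can change the pooled feature; this is what lets such networks consume raw, unordered point clouds without voxelisation or 2D projection.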