Plant Feature Extraction from 3D Point Clouds Workshop

July 1 @ 10:30 am - 3:30 pm

Free

Plant Feature Extraction from 3D Point Clouds Workshop, 1 July 2021, 10:30am – 3:30pm

Abstracts for contributed talks are invited. Please send abstracts to enquiries@phenomuk.net; the deadline is 14 May 2021.

CALL FOR PAPERS

Submission due (1-page abstract): 14 May 2021

Notification of acceptance: 28 May 2021

 

3D imaging is increasingly being used in the context of crop imaging, driven in large part by the challenge of high throughput phenotyping (identification of effects on plant structure and function resulting from genotypic differences and environmental conditions). This workshop will focus upon the general challenge of feature extraction from 3D imaging such as point clouds, bringing together those working on challenges of this type in different applications, including crop imaging. The workshop will provide the opportunity to discover together how cutting edge computer vision approaches find application in crop imaging.

Specific topics of interest include, but are not limited to, the following:

  • generic methods for extraction of features from 3D imaging including deep learning
  • advances in segmentation, tracking, detection, reconstruction and identification methods for 3D imaging which address unsolved plant phenotyping problems
  • advances in feature extraction and related 3D imaging computer vision tasks which address challenges in other applications

Invited Speakers:

  • Dr Gert Kootstra (Wageningen University and Research) “3D digital plant phenotyping”
  • Dr Nick Pears (University of York) “A tour of deep learning on 3D images”

In addition to the invited speakers, there will be contributed talks and breakout discussions. There will also be an opportunity to share posters using Jamboard.

Further Information and Submission Guidelines:

Please send abstracts by the deadline to Claire.Hayes@nottingham.ac.uk, indicating whether your preference is for an oral or a poster presentation.

We look forward to you joining us!

Workshop Organizers:

Andrew Thompson (National Physical Laboratory, UK), Andrew.Thompson@npl.co.uk

Tony Pridmore (University of Nottingham, UK), Tony.Pridmore@nottingham.ac.uk

 

Itinerary

 

10:30am – Welcome by Dr Andrew Thompson

10:40am – 1st Keynote speaker – Dr Gert Kootstra, Wageningen University and Research

  • 3D digital plant phenotyping

11:10am – Contributed talks

12:30pm – Lunch

1:30pm – 2nd Keynote speaker – Dr Nick Pears, University of York

  • A tour of deep learning on 3D images

2:10pm – Contributed talks

3:10pm – Breakout groups for discussion

3:30pm – End of meeting

KEYNOTE SPEAKER INFORMATION

 

3D digital plant phenotyping

Gert Kootstra, Wageningen University and Research, The Netherlands

Abstract: In order to get a better grip on plant breeding, plant scientists need to understand the interaction of the genotype, the environment and the resulting phenotype. With next-generation sequencing methods, a vast amount of genotypic data has become available. Phenotyping, on the other hand, is still mainly a manual process, with human experts grading and measuring the plants. In order to tackle this phenotyping bottleneck, research is focussed on digital plant phenotyping, with the aim of developing automatic methods to measure a range of plant traits. In particular, many people work on image-processing methods to extract plant traits from camera images. The 3D aspects of plants, however, cannot be estimated well from 2D images. In this presentation, I will therefore discuss methods for 3D image acquisition and processing of the 3D data.
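To make the idea of extracting plant traits from 3D data concrete, here is a minimal, purely illustrative sketch. It is not the speaker's method: the function name and the two toy traits (height from the vertical extent, canopy spread from the horizontal bounding box) are assumptions, and it presumes the pot and ground have already been segmented away.

```python
import numpy as np

def extract_traits(points):
    """Toy trait extraction from an (N, 3) point cloud.

    Assumes the z-axis points up and the cloud contains a single
    plant, with pot/ground points already removed (hypothetical
    preprocessing not shown).
    """
    z = points[:, 2]
    height = z.max() - z.min()          # vertical extent as plant height
    # Crude canopy-spread proxy: extent of the axis-aligned
    # bounding box in the horizontal plane.
    spread_x = np.ptp(points[:, 0])
    spread_y = np.ptp(points[:, 1])
    return {"height": height, "spread": max(spread_x, spread_y)}
```

Real pipelines replace these bounding-box proxies with segmentation of individual organs (leaves, stems) before measuring, which is exactly where the computer-vision challenges discussed at the workshop arise.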

 

A tour of deep learning on 3D images

Nick Pears, Department of Computer Science, University of York, UK

 

Abstract: Within the last decade, deep learning techniques have revolutionised computer vision, bringing about step-changes in performance on large, diverse and challenging image and video datasets. Computer vision tasks include scene segmentation, object recognition, biometrics, gesture recognition and human activity tracking – indeed, any mainstream vision application that you care to mention has a state-of-the-art solution built on deep learning techniques, provided that there is enough training data or augmentable training data. This success largely lies in the ability of Convolutional Neural Networks (CNNs) to exploit local spatial similarities in the same way across the global image. However, standard 2D images lie on a regular grid-like structure, making them highly amenable to encoding via convolution and pooling operations. In contrast, a 3D point cloud is an unordered set of points that is significantly more difficult to handle. In this talk, we outline the development of deep learning techniques that are applicable to 3D images. This starts with the early approaches that used either 3D-to-2D projections or voxels to achieve grid-like structures, and highlights the pros and cons of such approaches. We then discuss the landmark approach of PointNet, which allows for permutation invariance of the input point set. Further derivatives of PointNet are discussed, as are alternative competitive deep-learning-based approaches such as Dynamic Graph CNNs (DGCNNs) and transformer networks.
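The permutation-invariance idea mentioned in the abstract can be sketched in a few lines: apply the same function to every point independently, then aggregate with a symmetric operation such as max pooling, so that reordering the points cannot change the result. The weights below are random stand-ins for a learned shared MLP; this is a sketch of the principle, not an implementation of PointNet.

```python
import numpy as np

def pointnet_global_feature(points, W, b):
    """Order-invariant global feature in the spirit of PointNet.

    points: (N, 3) point cloud; W: (3, d) weights; b: (d,) bias.
    The shared per-point map plus symmetric pooling means the
    output does not depend on the ordering of the points.
    """
    per_point = np.maximum(points @ W + b, 0.0)  # shared "MLP" + ReLU, applied per point
    return per_point.max(axis=0)                 # symmetric aggregation: max pooling

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))
W = rng.normal(size=(3, 16))
b = rng.normal(size=16)

f1 = pointnet_global_feature(cloud, W, b)
f2 = pointnet_global_feature(cloud[rng.permutation(128)], W, b)
assert np.allclose(f1, f2)  # identical feature for any point ordering
```

This contrasts with the grid-based approaches discussed earlier in the talk, where a fixed voxel or pixel ordering is imposed before convolution can be applied.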

 

 
