3D Acquired Research Dataset (3d-ard)

The 3D-Acquired Research Dataset (3D-ARD) is a research-oriented collection of complex, large-scale 3D objects and scenes, acquired and processed using state-of-the-art hardware and software. Its goal is to bridge the gap between computer scientists (who develop acquisition and processing tools) and practitioners (who generate and work daily with these data) by providing high-quality data representative of practitioners' real usage.

For each item of the collection (e.g. an object, a building), we provide:

  • raw data, acquired using different modalities (laser scanning, photogrammetry, micro-tomography),
  • cleaned and reconstructed point clouds and meshes,
  • modeled version(s) of the item, produced by professional artists under expert supervision.

Stable and efficient differential estimators on oriented point clouds

Thibault Lejemble (1), David Coeurjolly (2), Loïc Barthe (1), Nicolas Mellado (1).
(1) CNRS, IRIT, Université de Toulouse, France.
(2) CNRS, LIRIS, Université de Lyon, France.

Computer Graphics Forum, Eurographics Symposium on Geometry Processing (SGP), 2021.


Point clouds are now ubiquitous in computer graphics and computer vision. Differential properties of the point-sampled surface, such as principal curvatures, are important to estimate in order to locally characterize the scanned shape. To approximate the surface from unstructured points equipped with normal vectors, we rely on the Algebraic Point Set Surfaces (APSS) [GG07], for which we provide convergence and stability proofs for the mean curvature estimator. Using an integral invariant viewpoint, this first contribution links the algebraic sphere regression involved in the APSS algorithm to several surface derivatives of different orders. As a second contribution, we propose an analytic method to compute the shape operator and its principal curvatures from the fitted algebraic sphere. We compare our method to the state of the art with several convergence and robustness tests performed on a synthetic sampled surface. Experiments show that our curvature estimations are more accurate and stable while being faster to compute than previous methods. Our differential estimators are easy to implement, have a small memory footprint, and only require a single range-neighbors query per estimation. Their highly parallelizable nature makes them appropriate for processing large acquired data, as we show in several real-world experiments.
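
As an illustration of the underlying machinery, the sketch below fits the classical APSS algebraic sphere (Guennebaud and Gross 2007) to the oriented neighbors of a query point and reads a mean curvature estimate off the fitted sphere. It is a minimal NumPy version of the baseline fit only; the corrected, stability-proven estimators and the principal curvature computation introduced in the paper are not reproduced, and all function and variable names are ours.

import numpy as np

def apss_mean_curvature(x, points, normals, radius):
    """Estimate the mean curvature at x by fitting an algebraic sphere
    (classical APSS oriented-sphere fit) to the points within `radius`.
    Returns (kappa, unit_normal) or None if the fit is degenerate."""
    x = np.asarray(x, dtype=float)
    d = np.linalg.norm(points - x, axis=1)
    mask = d < radius
    if mask.sum() < 6:
        return None
    p, n = points[mask], normals[mask]
    # Smooth, compactly supported weights, as commonly used with APSS.
    w = (1.0 - (d[mask] / radius) ** 2) ** 2

    sw  = w.sum()
    sp  = (w[:, None] * p).sum(axis=0)
    sn  = (w[:, None] * n).sum(axis=0)
    spp = (w * np.einsum('ij,ij->i', p, p)).sum()
    spn = (w * np.einsum('ij,ij->i', p, n)).sum()

    denom = spp - sp.dot(sp) / sw
    if abs(denom) < 1e-12:
        return None
    # Closed-form fit of the scalar field s(q) = uc + ul.q + uq q.q
    # whose gradient matches the input normals in the least-squares sense.
    uq = 0.5 * (spn - sp.dot(sn) / sw) / denom
    ul = (sn - 2.0 * uq * sp) / sw
    uc = -(ul.dot(sp) + uq * spp) / sw

    # Pratt normalization: afterwards the sphere curvature is simply 2*uq.
    norm = np.sqrt(max(ul.dot(ul) - 4.0 * uc * uq, 1e-12))
    uq, ul = uq / norm, ul / norm
    grad = ul + 2.0 * uq * x          # gradient of s at x = fitted normal
    return 2.0 * uq, grad / np.linalg.norm(grad)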


Differential estimations computed with our stable estimators on a large point cloud with normals (2.5M points). Zoom on: (a) the initial point cloud, (b) our corrected normal vectors, (c) mean curvature, (d,e) principal curvatures, and (f) principal curvature directions.

PCEDNet: A Lightweight Neural Network for Fast and Interactive Edge Detection in 3D Point Clouds

Chems-Eddine Himeur, Thibault Lejemble, Thomas Pellegrini, Mathias Paulin, Loic Barthe, Nicolas Mellado. CNRS, IRIT, Université de Toulouse, France.

ACM Transactions on Graphics, Volume 41, Issue 1, February 2022, Article No. 10, pp. 1–21


In recent years, Convolutional Neural Networks (CNN) have proven to be efficient analysis tools for processing point clouds, e.g., for reconstruction, segmentation and classification. In this paper, we focus on the classification of edges in point clouds, where both edges and their surroundings are described. We propose a new parameterization adding to each point a set of differential information on its surrounding shape reconstructed at different scales. These parameters, stored in a Scale-Space Matrix (SSM), provide well-suited information from which an adequate neural network can learn the description of edges and use it to efficiently detect them in acquired point clouds. After successfully applying a multi-scale CNN on SSMs for the efficient classification of edges and their neighborhood, we propose a new lightweight neural network architecture outperforming the CNN in learning time, processing time and classification capabilities. Our architecture is compact, requires small learning sets, is very fast to train and classifies millions of points in seconds.
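
To make the parameterization more concrete, here is a small, hypothetical sketch of a per-point multi-scale descriptor matrix in the spirit of the SSM. The paper fills the matrix with differential quantities obtained from surface reconstructions at each scale; this stand-in uses simple covariance (PCA) features instead, so it only illustrates the data layout fed to the network, not the actual features.

import numpy as np
from scipy.spatial import cKDTree

def scale_space_matrix(points, index, radii, tree=None):
    """Per-point multi-scale descriptor matrix: one row of shape features per
    scale.  The paper's SSM stores differential quantities from multi-scale
    surface fits; covariance (PCA) features are used here as a stand-in."""
    tree = tree if tree is not None else cKDTree(points)
    x = points[index]
    rows = []
    for r in radii:
        nb = points[tree.query_ball_point(x, r)]
        if len(nb) < 4:
            rows.append([0.0, 0.0, 0.0])
            continue
        cov = np.cov((nb - nb.mean(axis=0)).T)
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
        # Linearity, planarity and surface variation: edges and corners leave
        # a characteristic signature in how these evolve across scales.
        rows.append([(l1 - l2) / l1, (l2 - l3) / l1, l3 / (l1 + l2 + l3)])
    return np.asarray(rows)            # shape: (len(radii), 3)

The resulting matrix, one row per scale, is what a compact classifier in the spirit of PCEDNet would consume, either flattened or as a 2D input.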

Teaser

Three examples of edge detection in point clouds by our PCEDNet neural network. It handles both:
(a) the imperfect edges of large-scale scans (here 12 million vertices) subject to irregular sampling and noise, detecting both sharp (in red) and smoother (in yellow) edges in a few minutes (here less than 6), and
(b) accurate CAD data, on which it can focus on sharp edges if desired, in a few seconds for this model.
(c) Our network can also be trained in a few seconds to detect edges following the edge definition provided by a user through interactive model annotation. We show two annotations corresponding to different user expectations. Most of the processing is precomputed, and at runtime the edges of this model are classified in less than a second.

Bibtex

@article{10.1145/3481804, 
    author = {Himeur, Chems-Eddine and Lejemble, Thibault and Pellegrini, Thomas and Paulin, Mathias and Barthe, Loic and Mellado, Nicolas}, 
    title = {PCEDNet: A Lightweight Neural Network for Fast and Interactive Edge Detection in 3D Point Clouds}, 
    year = {2021}, 
    issue_date = {February 2022}, 
    publisher = {Association for Computing Machinery}, 
    address = {New York, NY, USA}, 
    volume = {41}, 
    number = {1}, 
    issn = {0730-0301}, 
    url = {https://doi.org/10.1145/3481804}, 
    doi = {10.1145/3481804}, 
    journal = {ACM Trans. Graph.}, 
    month = nov, 
    articleno = {10}, 
    numpages = {21}, 
    keywords = {Point clouds processing, neural networks, edge detection, datasets, energy efficiency, low resource computing} 
}

Seed point set-based building roof extraction from airborne LiDAR point clouds using a top-down strategy

Jie Shao (1,3), Wuming Zhang (1,2), Aojie Shen (4), Nicolas Mellado (3), Shangshu Cai (4), Lei Luo (5), Nan Wang (6), Guangjian Yan (4), Guoqing Zhou (7)

(1) School of Geospatial Engineering and Science, Sun Yat-Sen University, China
(2) Southern Marine Science and Engineering Guangdong Laboratory, China
(3) IRIT, CNRS, University of Toulouse, France
(4) State Key Laboratory of Remote Sensing Science, Beijing Normal University, China
(5) Key Laboratory of Digital Earth Science, Chinese Academy of Sciences, Beijing, China
(6) School of Geography and Tourism, Anhui Normal University, China
(7) Guangxi Key Laboratory of Spatial Information and Geomatics, Guilin University of Technology, China

Automation in Construction, Volume 126, Elsevier, 2021


Building roof extraction from airborne laser scanning point clouds is significant for building modeling. Common methods adopt a bottom-up strategy that requires a ground filtering process first, and the subsequent region-growing process based on a single seed point easily causes oversegmentation. This paper proposes a novel method to extract roofs. A top-down strategy based on cloth simulation is first used to detect seed point sets with semantic information; then, roof seed points are extracted, instead of a single seed point, for region-growing segmentation. The proposed method is validated on three point cloud datasets that contain different types of roofs and building footprints. The results show that the top-down strategy directly extracts roof seed point sets, that most roofs are extracted by the region-growing algorithm based on the seed point sets, and that the total errors of roof extraction in the test areas are 0.65%, 1.07%, and 1.45%. The proposed method simplifies the workflow of roof extraction, reduces oversegmentation, and determines roofs in advance based on the semantic seed point sets, which suggests a practical solution for rapid roof extraction.
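
The following sketch illustrates the region-growing step started from seed point sets rather than from a single seed, which is the part of the pipeline that reduces oversegmentation. The top-down, cloth-simulation-based extraction of the seed sets is not shown and the seed sets are assumed to be given; parameter values and names are illustrative only.

import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def grow_roofs(points, normals, seed_sets, radius=1.0, angle_deg=10.0):
    """Region growing started from *sets* of roof seed points (one set per
    roof, e.g. obtained top-down by cloth simulation) instead of a single
    seed.  Returns a per-point label, -1 meaning 'not assigned to a roof'."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cos_thr = np.cos(np.radians(angle_deg))
    for label, seeds in enumerate(seed_sets):
        queue = deque(int(s) for s in seeds if labels[s] == -1)
        labels[list(queue)] = label
        while queue:
            i = queue.popleft()
            # Grow to neighbors whose normal is close to the current point's.
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1 and abs(normals[i] @ normals[j]) > cos_thr:
                    labels[j] = label
                    queue.append(j)
    return labels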

Bibtex

@article{SHAO2021103660,
title = {Seed point set-based building roof extraction from airborne LiDAR point clouds using a top-down strategy},
journal = {Automation in Construction},
volume = {126},
pages = {103660},
year = {2021},
issn = {0926-5805},
doi = {https://doi.org/10.1016/j.autcon.2021.103660},
url = {https://www.sciencedirect.com/science/article/pii/S0926580521001114},
author = {Jie Shao and Wuming Zhang and Aojie Shen and Nicolas Mellado and Shangshu Cai and Lei Luo and Nan Wang and Guangjian Yan and Guoqing Zhou}
}

Persistence Analysis of Multi-scale Planar Structure Graph in Point Clouds

Thibault Lejemble (1), Claudio Mura (2), Loïc Barthe (1), Nicolas Mellado (1).
(1) CNRS, IRIT, Université de Toulouse, France.
(2) Department of Informatics, University of Zurich

Computer Graphics Forum (Eurographics 2020)


Modern acquisition techniques generate detailed point clouds that sample complex geometries. For instance, we are able to produce millimeter-scale acquisitions of whole buildings. Processing and exploring geometrical information within such point clouds requires scalability, robustness to acquisition defects and the ability to model shapes at different scales. In this work, we propose a new representation that enriches point clouds with a multi-scale planar structure graph. We define the graph nodes as regions computed with planar segmentations at increasing scales, and the graph edges connect regions that are similar across scales. Connected components of the graph define the planar structures present in the point cloud within a scale interval. With this information, any point is associated with one or several planar structures existing at different scales. We then use topological data analysis to filter the graph and provide the most prominent planar structures.

Our representation naturally encodes a large range of information. We show how to efficiently extract geometrical details (e.g. tiles of a roof), arrangements of simple shapes (e.g. steps and mean ramp of a staircase), and large-scale planar proxies (e.g. walls of a building) and present several interactive tools to visualize, select and reconstruct planar primitives directly from raw point clouds. The effectiveness of our approach is demonstrated by an extensive evaluation on a variety of input data, as well as by comparing against state-of-the-art techniques and by showing applications to polygonal mesh reconstruction.
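
A minimal sketch of the graph construction is given below, assuming the per-scale planar segmentations have already been computed (the paper derives them from multi-scale fits; any planar segmentation returning per-point region labels would do for this illustration). Nodes are (scale, region) pairs, edges connect overlapping regions at consecutive scales, and the scale interval spanned by a connected component serves as its persistence. Names and thresholds are our own.

import numpy as np
import networkx as nx

def planar_structure_graph(labels_per_scale, min_overlap=0.5):
    """Multi-scale planar structure graph.  `labels_per_scale[s][i]` is the
    planar region id of point i at scale s (or -1 if unsegmented).  Nodes are
    (scale, region) pairs; edges connect regions at consecutive scales whose
    point sets overlap.  Connected components are the planar structures, and
    the scale interval they span acts as their persistence."""
    g = nx.Graph()
    for s, labels in enumerate(labels_per_scale):
        for r in np.unique(labels[labels >= 0]):
            g.add_node((s, int(r)), pts=frozenset(np.flatnonzero(labels == r)))
    for s in range(len(labels_per_scale) - 1):
        for a in [n for n in g if n[0] == s]:
            for b in [n for n in g if n[0] == s + 1]:
                pa, pb = g.nodes[a]['pts'], g.nodes[b]['pts']
                if len(pa & pb) / len(pa | pb) > min_overlap:
                    g.add_edge(a, b)
    persistence = [(comp, max(n[0] for n in comp) - min(n[0] for n in comp) + 1)
                   for comp in nx.connected_components(g)]
    return g, persistence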


Starting from an input point cloud equipped with normal vectors, our approach extracts meaningful planar components describing the geometry at multiple scales. Using persistence analysis, we offer the user several ways to interactively explore, visualize and reconstruct the input data. The user can for instance generate planar reconstructions at arbitrary scales, select planar components by sketching directly on the point cloud, and/or find similar planar components.

Bibtex

@article{https://doi.org/10.1111/cgf.13910,
author = {Lejemble, T. and Mura, C. and Barthe, L. and Mellado, N.},
title = {Persistence Analysis of Multi-scale Planar Structure Graph in Point Clouds},
journal = {Computer Graphics Forum},
volume = {39},
number = {2},
pages = {35-50},
keywords = {CCS Concepts, • Computing methodologies → Point-based models; Shape analysis},
doi = {https://doi.org/10.1111/cgf.13910},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.13910},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.13910},
year = {2020}
}

SLAM-aided forest plot mapping combining terrestrial and mobile laser scanning

Jie Shao (1,2), Wuming Zhang (3,4), Nicolas Mellado (2), Nan Wang (5), Shuangna Jin (1), Shangshu Cai (1), Lei Luo (6), Thibault Lejemble (2), Guangjian Yan (1).
(1) State Key Laboratory of Remote Sensing Science, Beijing Normal University, Beijing, China
(2) IRIT, CNRS, University of Toulouse, France
(3) School of Geospatial Engineering and Science, Sun Yat-Sen University, China
(4) Southern Marine Science and Engineering Guangdong Laboratory, China
(5) School of Remote Sensing and Information Engineering, Wuhan University, China
(6) Key Laboratory of Digital Earth Science, Chinese Academy of Sciences, China

ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 163, May 2020


Precise structural information collected from plots is significant in the management of and decision-making regarding forest resources. Currently, laser scanning is widely used in forestry inventories to acquire three-dimensional (3D) structural information. There are three main data-acquisition modes in ground-based forest measurements: single-scan terrestrial laser scanning (TLS), multi-scan TLS and multi-single-scan TLS. Nevertheless, each of these modes causes specific difficulties for forest measurements. Due to occlusion effects, the single-scan TLS mode provides scans for only one side of the tree. The multi-scan TLS mode overcomes occlusion problems, however, at the cost of longer acquisition times, more human labor and more effort in data preprocessing. The multi-single-scan TLS mode decreases the workload and occlusion effects but lacks the complete 3D reconstruction of forests. These problems in TLS methods are largely avoided with mobile laser scanning (MLS); however, the geometrical peculiarity of forests (e.g., similarity between tree shapes, placements, and occlusion) complicates the motion estimation and reduces mapping accuracy.

Therefore, this paper proposes a novel method combining single-scan TLS and MLS for forest 3D data acquisition. We use single-scan TLS data as a reference onto which we register MLS point clouds, so that they fill in the parts missed by the single-scan TLS data. To register MLS point clouds on the reference, we extract virtual feature points that sample the centerlines of tree stems and propose a new optimization-based registration framework. In contrast to previous MLS-based studies, the proposed method sufficiently exploits the natural geometric characteristics of trees. We demonstrate the effectiveness, robustness, and accuracy of the proposed method on three datasets, from which we extract structural information. The experimental results show that the omission of tree stem data caused by a single scan can be compensated for by the MLS data, and that the time of the field measurement is much less than that of the multi-scan TLS mode. In addition, single-scan TLS data provide strong global constraints for MLS-based forest mapping, which allows low mapping errors to be achieved, e.g., less than 2.0 cm mean errors in both the horizontal and vertical directions.
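
As a rough illustration of the registration ingredients, the sketch below samples "virtual feature points" along a stem by taking per-slice centroids, and aligns matched centerline points with a closed-form rigid (Kabsch/SVD) fit. The paper's optimization-based registration framework is considerably richer (it jointly handles many stems and does not assume known correspondences); this is only a minimal stand-in with names of our own choosing.

import numpy as np

def stem_centerline_points(stem_points, bin_height=0.5):
    """Sample 'virtual feature points' along one tree stem: slice the stem
    into horizontal bins and keep the centroid of each slice as a crude
    centerline sample (the paper fits the centerline more carefully)."""
    z = stem_points[:, 2]
    bins = np.floor((z - z.min()) / bin_height).astype(int)
    return np.array([stem_points[bins == b].mean(axis=0) for b in np.unique(bins)])

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping matched MLS
    centerline points onto the reference TLS ones: dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:           # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, cd - r @ cs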

Bibtex

@article{SHAO2020214,
title = {SLAM-aided forest plot mapping combining terrestrial and mobile laser scanning},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
volume = {163},
pages = {214-230},
year = {2020},
issn = {0924-2716},
doi = {https://doi.org/10.1016/j.isprsjprs.2020.03.008},
url = {https://www.sciencedirect.com/science/article/pii/S0924271620300782},
author = {Jie Shao and Wuming Zhang and Nicolas Mellado and Nan Wang and Shuangna Jin and Shangshu Cai and Lei Luo and Thibault Lejemble and Guangjian Yan},
keywords = {Forest mapping, LiDAR, SLAM, Single-scan TLS, MLS},
}

Single Scanner BLS System for Forest Plot Mapping

Jie Shao (1), Wuming Zhang (2), Nicolas Mellado (3), Shuangna Jin (1), Lei Luo, Shangshu Cai (1), Lingbo Yang, Guangjian Yan (1), Guoqing Zhou.
(1) BNU – Beijing Normal University
(2) ICJ – Institut Camille Jordan
(3) IRIT, CNRS, University of Toulouse, France

IEEE Transactions on Geoscience and Remote Sensing, Vol. 59, Feb. 2021


The 3D information collected from sample plots is significant for forest inventories. Terrestrial laser scanning (TLS) has been demonstrated to be an effective tool for data acquisition in forest plots. Although TLS achieves precise measurements, multiple scans are usually necessary to collect more detailed data, which generally requires more time in scan preparation and field data acquisition. In contrast, mobile laser scanning (MLS) is being increasingly utilized in mapping due to its mobility. However, the geometrical peculiarity of forests introduces challenges. In this article, a test backpack-based MLS system, i.e., backpack laser scanning (BLS), is designed for forest plot mapping without a global navigation satellite system/inertial measurement unit (GNSS-IMU). To achieve accurate matching, this article proposes to combine line and point features to compute the transformation, where the line features are derived from trunk skeletons. Then, a scan-to-map matching strategy is proposed to correct positional drift. Finally, this article evaluates the effectiveness and the mapping accuracy of the proposed method on forest sample plots. The experimental results indicate that the proposed method achieves accurate forest plot mapping with the BLS; meanwhile, compared to existing methods, it exploits the geometric attributes of the trees and reaches a lower mapping error, with mean errors and root mean square errors in the horizontal and vertical directions of less than 3 cm in the plots.
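
To illustrate what scan-to-map matching buys over scan-to-scan odometry, here is a deliberately simplified sketch in which each incoming scan is registered against the map accumulated so far with a basic point-to-point ICP. The paper's method additionally matches line features derived from trunk skeletons and is tuned for forest scenes; none of that is reproduced here, and the functions below are illustrative only.

import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(src, dst, iters=30, max_dist=1.0):
    """Minimal point-to-point ICP (closest point + SVD), returning R, t such
    that R @ src_i + t approaches its closest point in dst."""
    r, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        d, j = tree.query(cur, distance_upper_bound=max_dist)
        keep = np.isfinite(d)
        if keep.sum() < 3:
            break
        a, b = cur[keep], dst[j[keep]]
        ca, cb = a.mean(axis=0), b.mean(axis=0)
        u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
        ri = vt.T @ u.T
        if np.linalg.det(ri) < 0:      # guard against reflections
            vt[-1] *= -1
            ri = vt.T @ u.T
        ti = cb - ri @ ca
        cur = cur @ ri.T + ti
        r, t = ri @ r, ri @ t + ti
    return r, t

def scan_to_map(first_scan, scans):
    """Scan-to-map matching: register each incoming scan against the map
    accumulated so far, which limits the drift of scan-to-scan odometry."""
    mapped = [first_scan]
    for scan in scans:
        r, t = icp_rigid(scan, np.vstack(mapped))
        mapped.append(scan @ r.T + t)
    return np.vstack(mapped)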


Bibtex

@article{9118969,
  author={Shao, Jie and Zhang, Wuming and Mellado, Nicolas and Jin, Shuangna and Cai, Shangshu and Luo, Lei and Yang, Lingbo and Yan, Guangjian and Zhou, Guoqing},
  journal={IEEE Transactions on Geoscience and Remote Sensing}, 
  title={Single Scanner BLS System for Forest Plot Mapping}, 
  year={2021},
  volume={59},
  number={2},
  pages={1675-1685},
  doi={10.1109/TGRS.2020.2999413}
}