Automatic shape adjustment at joints for the implicit skinning

Olivier Hachette, Florian Canezin, Rodolphe Vaillant, Nicolas Mellado, Loic Barthe.
CNRS, IRIT, Université de Toulouse, France.

Elsevier Computer&Graphics (proc. of SMI 2021)


Implicit skinning is an interactive, geometric skinning method for skeleton-based animations that produces plausible deformations at joints while resolving skin self-collisions. Although only a few user interactions are required to parameterize it adequately, some effort still has to be spent on editing the shapes at the joints.

In this work, we introduce a dedicated optimization framework that automatically adjusts the shape of the surfaces generating the deformations at joints when those joints are rotated during an animation. The approach fits directly into the implicit skinning pipeline and has no impact on the algorithm's performance during animation. Starting from the partition of the mesh representing the animated character, we propose a dedicated hole-filling algorithm based on a particle system and power-crust meshing. We then introduce a procedure that optimizes the shape of the filled mesh when it rotates at the joint level. This automatically generates plausible skin deformations when joints are rotated, without any extra user editing.


Overview of our approach. (a) Input mesh and its associated skeleton. (b) Model segmentation with respect to the skinning weights. (c) Mesh parts resulting from the input mesh segmentation. (d) 0.5-iso-surfaces interpolating the mesh parts of the forearm in blue and the hand in yellow. (e) Bulging shape generated at the wrist joint when parts are rotated. (f) Closure of the mesh parts following the 0.5-iso-surfaces and pre-fairing of the extremity shapes. (g) Iterative shape optimization removing the bulge when limbs rotate. (h) Post-fairing generating the final shape for rigging the model. (i) Result of the implicit skinning applied on our optimized shapes at joints.
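Step (b) of the pipeline above segments the model with respect to its skinning weights. As a rough illustration of that idea only (the data layout and function names here are assumptions, not the paper's implementation), each vertex can be assigned to the bone that influences it most:

```python
# Sketch: partition mesh vertices by their dominant skinning weight,
# as in steps (b)-(c) of the pipeline overview. Hypothetical data
# layout: one row of weights per vertex, one column per bone.
import numpy as np

def segment_by_weights(weights: np.ndarray) -> np.ndarray:
    """Assign each vertex to the bone with the largest skinning weight.

    weights: (n_vertices, n_bones) array whose rows sum to ~1.
    Returns an (n_vertices,) array of bone indices.
    """
    return np.argmax(weights, axis=1)

def mesh_parts(labels: np.ndarray, n_bones: int):
    """Group vertex indices into one mesh part per bone."""
    return [np.flatnonzero(labels == b) for b in range(n_bones)]

# Toy example: 4 vertices influenced by 2 bones (e.g. forearm, hand).
w = np.array([[0.90, 0.10],
              [0.60, 0.40],
              [0.30, 0.70],
              [0.05, 0.95]])
labels = segment_by_weights(w)   # dominant bone per vertex
parts = mesh_parts(labels, 2)    # vertex index sets per bone
```

In the actual pipeline, each such part is then closed with the hole-filling algorithm before the joint-level shape optimization runs on it.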

Bibtex

@article{HACHETTE2021,
title = {Automatic shape adjustment at joints for the implicit skinning},
journal = {Computers & Graphics},
year = {2021},
issn = {0097-8493},
doi = {10.1016/j.cag.2021.10.018},
url = {https://www.sciencedirect.com/science/article/pii/S0097849321002296},
author = {Olivier Hachette and Florian Canezin and Rodolphe Vaillant and Nicolas Mellado and Loïc Barthe},
keywords = {Shape deformation, Geometric modeling, Skinning},
}
Dynamic Decals: Pervasive Freeform Interfaces Using Constrained Deformable Graphical Elements

Aziz Niyazov, Nicolas Mellado, Loic Barthe and Marcos Serrano.
CNRS, IRIT, Université de Toulouse, France.

ACM ISS 2021 (to appear)


Pervasive interfaces can present relevant information anywhere in our environment, and they are thus challenged by the non-rectilinearity of the display surface (e.g. a circular table) and by the presence of objects that can partially occlude the interface (e.g. a book or a cup on the table). To tackle this problem, we propose a novel solution based on two core contributions: the decomposition of the interface into deformable graphical units, called Dynamic Decals, and the control of their position and behaviour by a constraint-based approach. Our approach dynamically deforms the interface when needed while minimizing the impact on its visibility and layout properties. To do so, we extend previous work on implicit deformations, proposing and experimentally validating functions that define different decal shapes and new deformers that model decal deformations when they collide. We then interactively optimize the decal placements according to the interface geometry and their interrelations. Relations are modeled as constraints, and the interface evolution results from a minimization problem that is easy and efficient to solve. Our approach is validated by a user study showing that, compared to two baselines, Dynamic Decals is an aesthetically pleasing interface that preserves visibility, layout and aesthetic properties.
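The constraints-as-minimization idea can be illustrated with a minimal sketch: each decal is pulled toward a preferred anchor while a quadratic penalty pushes overlapping decals apart, and gradient descent resolves the layout. The energy terms, weights, and function names below are illustrative assumptions, not the paper's actual formulation:

```python
# Sketch of a constraint-based layout solve: attachment energy
# ||x_i - a_i||^2 plus an overlap penalty w * max(0, r_i + r_j - d_ij)^2,
# minimized by plain gradient descent. Illustrative only.
import numpy as np

def solve_layout(pos, anchors, radii, overlap_w=10.0, steps=400, lr=0.02):
    pos = pos.astype(float).copy()
    n = len(pos)
    for _ in range(steps):
        grad = 2.0 * (pos - anchors)              # attachment constraint
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d) + 1e-9
                pen = radii[i] + radii[j] - dist
                if pen > 0.0:                     # decals overlap
                    g = -2.0 * overlap_w * pen * d / dist
                    grad[i] += g                  # push i away from j
                    grad[j] -= g
        pos -= lr * grad                          # descend the energy
    return pos

anchors = np.array([[0.0, 0.0], [0.5, 0.0]])      # desired positions
radii = np.array([1.0, 1.0])                      # decal radii
out = solve_layout(anchors.copy(), anchors, radii)
```

With these weights the two decals settle close to their summed radii apart while their midpoint stays near the anchors' midpoint; a stiffer overlap weight trades attachment accuracy for stricter non-overlap.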

Stable and efficient differential estimators on oriented point clouds

Thibault Lejemble (1), David Coeurjolly (2), Loïc Barthe (1), Nicolas Mellado (1).
(1) CNRS, IRIT, Université de Toulouse, France.
(2) CNRS, LIRIS, Université de Lyon, France.

Computer Graphics Forum.
Eurographics Symposium on Geometry Processing (SGP)


Point clouds are now ubiquitous in computer graphics and computer vision. Differential properties of the point-sampled surface, such as principal curvatures, are important to estimate in order to locally characterize the scanned shape. To approximate the surface from unstructured points equipped with normal vectors, we rely on the Algebraic Point Set Surfaces (APSS) [GG07], for which we provide convergence and stability proofs for the mean curvature estimator. Using an integral invariant viewpoint, this first contribution links the algebraic sphere regression involved in the APSS algorithm to several surface derivatives of different orders. As a second contribution, we propose an analytic method to compute the shape operator and its principal curvatures from the fitted algebraic sphere. We compare our method to the state of the art with several convergence and robustness tests performed on a synthetic sampled surface. Experiments show that our curvature estimates are more accurate and stable while being faster to compute than those of previous methods. Our differential estimators are easy to implement, have a small memory footprint, and only require a single range-neighbor query per estimation. Their highly parallelizable nature makes them appropriate for processing large acquired data, as we show in several real-world experiments.


Differential estimations computed with our stable estimators on a large point cloud with normals (2.5M points). Zoom on: (a) the initial point cloud, (b) our corrected normal vectors, (c) mean curvature, (d,e) principal curvatures, and (f) principal curvature directions.
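The core object behind this estimator, the algebraic sphere s(x) = u0 + u_l . x + uq ||x||^2 fitted to oriented points, can be sketched with a plain unweighted least-squares fit; the actual APSS estimator of the paper uses locally weighted fits and a more careful derivation, so the code below is only an illustration of where the mean curvature comes from (inverse radius of the fitted sphere):

```python
# Sketch: fit an algebraic sphere s(x) = u0 + u_l.x + uq||x||^2 to
# oriented points by least squares (s(p_i) ~ 0 and grad s(p_i) ~ n_i),
# then read the mean curvature off as the signed inverse radius.
# Unweighted, illustrative variant; not the paper's APSS estimator.
import numpy as np

def fit_algebraic_sphere(points, normals):
    """Return (u0, u_l, uq) from a joint position/normal least squares."""
    n = len(points)
    A = np.zeros((4 * n, 5))
    b = np.zeros(4 * n)
    # Rows enforcing s(p_i) = 0.
    A[:n, 0] = 1.0
    A[:n, 1:4] = points
    A[:n, 4] = np.einsum('ij,ij->i', points, points)
    # Rows enforcing grad s(p_i) = u_l + 2 uq p_i = n_i, per coordinate.
    for j in range(3):
        rows = slice(n * (j + 1), n * (j + 2))
        A[rows, 1 + j] = 1.0
        A[rows, 4] = 2.0 * points[:, j]
        b[rows] = normals[:, j]
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u[0], u[1:4], u[4]

def mean_curvature(u0, ul, uq, eps=1e-12):
    """Signed inverse radius of the fitted sphere (0 for a planar fit)."""
    if abs(uq) < eps:
        return 0.0
    r2 = np.dot(ul, ul) / (4.0 * uq * uq) - u0 / uq
    return np.sign(uq) / np.sqrt(max(r2, eps))

# Sanity check: points on a sphere of radius 2 -> mean curvature ~ 1/2.
rng = np.random.default_rng(0)
p = rng.normal(size=(200, 3))
p = 2.0 * p / np.linalg.norm(p, axis=1, keepdims=True)
nrm = p / 2.0                        # outward unit normals
kappa = mean_curvature(*fit_algebraic_sphere(p, nrm))
```

On exact sphere samples the fit is exact, which is what the paper's convergence analysis generalizes to noisy, locally weighted neighborhoods.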
Seed point set-based building roof extraction from airborne LiDAR point clouds using a top-down strategy

Jie Shao (1,3), Wuming Zhang (1,2), Aojie Shen (4), Nicolas Mellado (3), Shangshu Cai (4), Lei Luo (5), Nan Wang (6), Guangjian Yan (4), Guoqing Zhou (7)

(1) School of Geospatial Engineering and Science, Sun Yat-Sen University, China
(2) Southern Marine Science and Engineering Guangdong Laboratory, China
(3) IRIT, CNRS, University of Toulouse, France
(4) State Key Laboratory of Remote Sensing Science, Beijing Normal University, China
(5) Key Laboratory of Digital Earth Science, Chinese Academy of Sciences, Beijing, China
(6) School of Geography and Tourism, Anhui Normal University, China
(7) Guangxi Key Laboratory of Spatial Information and Geomatics, Guilin University of Technology, China

Automation in Construction, Elsevier, 2021, 126


Building roof extraction from airborne laser scanning point clouds is important for building modeling. The common method adopts a bottom-up strategy that first requires a ground filtering process, and the subsequent region growing based on a single seed point easily causes over-segmentation. This paper proposes a novel method to extract roofs. A top-down strategy based on cloth simulation is first used to detect seed point sets with semantic information; roof seed point sets are then extracted, instead of a single seed point, for region-growing segmentation. The proposed method is validated on three point cloud datasets that contain different types of roofs and building footprints. The results show that the top-down strategy directly extracts roof seed point sets, that most roofs are extracted by the region-growing algorithm based on these seed point sets, and that the total errors of roof extraction in the test areas are 0.65%, 1.07%, and 1.45%. The proposed method simplifies the workflow of roof extraction, reduces over-segmentation, and determines roofs in advance from the semantic seed point sets, which suggests a practical solution for rapid roof extraction.
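The key difference from single-seed growing is that the flood fill starts from a whole set of roof seeds at once. A minimal sketch of that step (the point layout, thresholds, and brute-force neighbor search are illustrative assumptions, not the authors' implementation):

```python
# Sketch: region growing from a *set* of seed points rather than one.
# A neighbor joins the region when it is within `radius` in the
# horizontal plane and its height differs by less than `dz_max`.
# Brute-force neighbor search for brevity; illustrative only.
from collections import deque
import numpy as np

def grow_region(points, seed_idx, radius=1.5, dz_max=0.3):
    """Flood-fill from every seed simultaneously; returns a boolean mask."""
    labels = np.zeros(len(points), dtype=bool)
    labels[seed_idx] = True
    queue = deque(seed_idx)
    while queue:
        i = queue.popleft()
        d = np.linalg.norm(points[:, :2] - points[i, :2], axis=1)
        near = (d < radius) & ~labels
        ok = near & (np.abs(points[:, 2] - points[i, 2]) < dz_max)
        for j in np.flatnonzero(ok):
            labels[j] = True
            queue.append(j)
    return labels

# Toy scene: a flat 10x10 roof at z = 5 next to ground points at z = 0.
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
roof = np.c_[xs.ravel(), ys.ravel(), np.full(100, 5.0)]
ground = np.c_[xs.ravel() + 20.0, ys.ravel(), np.zeros(100)]
pts = np.vstack([roof, ground])
mask = grow_region(pts, seed_idx=[0, 50, 99])   # several roof seeds
```

Because the growth starts from many seeds spread over the roof, one missed or poorly placed seed no longer splits the roof into fragments, which is the over-segmentation the paper's seed point sets are designed to avoid.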

Bibtex

@article{SHAO2021103660,
title = {Seed point set-based building roof extraction from airborne LiDAR point clouds using a top-down strategy},
journal = {Automation in Construction},
volume = {126},
pages = {103660},
year = {2021},
issn = {0926-5805},
doi = {10.1016/j.autcon.2021.103660},
url = {https://www.sciencedirect.com/science/article/pii/S0926580521001114},
author = {Jie Shao and Wuming Zhang and Aojie Shen and Nicolas Mellado and Shangshu Cai and Lei Luo and Nan Wang and Guangjian Yan and Guoqing Zhou}
}