Publications

2017

Constrained Palette-Space Exploration

Nicolas Mellado; David Vanderhaeghe; Charlotte Hoarau; Sidonie Christophe; Mathieu Bredif; Loic Barthe
ACM Transactions on Graphics (SIGGRAPH 2017).
[ abstract ] [ bibtex ] [ paper ] [ supplementaries ] [ project page ]

Mesh Simplification With Curvature Error Metric

Céline Michaud; Nicolas Mellado; Mathias Paulin
Eurographics Poster (2017).
[ abstract ] [ bibtex ] [ short paper ] [ poster ]

2016

Map Style Formalization: Rendering Techniques Extension for Cartography

Sidonie Christophe; Bertrand Dumenieu; Jérémie Turbet; Charlotte Hoarau; Nicolas Mellado; Jérémie Ory; Hugo Loi; Antoine Masse; Benoit Arbelot; Romain Vergne; Mathieu Brédif; Thomas Hurtut; Joëlle Thollot; David Vanderhaeghe
Non-Photorealistic Animation and Rendering (NPAR, production paper) (2016).
[ abstract ] [ bibtex ] [ preprint ] [ supplementaries ] [ project page ]

2015

Relative scale estimation and 3D registration of multi-modal geometry using Growing Least Squares

Nicolas Mellado; Matteo Dellepiane; Roberto Scopigno
IEEE Transactions on Visualization and Computer Graphics (2016).
[ abstract ] [ bibtex ] [ preprint ] [ video ] [ slides (SGP 2016) ] [ doi ]

RAPter: Rebuilding Man-made Scenes with Regular Arrangements of Planes

Aron Monszpart; Nicolas Mellado; Gabriel J. Brostow; Niloy J. Mitra
ACM Transactions on Graphics (SIGGRAPH 2015).
[ abstract ] [ bibtex ] [ project page ]

Light Transport Editing with Ray Portals

Thomas Subileau; Nicolas Mellado; David Vanderhaeghe; Mathias Paulin
Computer Graphics International 2015.

RayPortals: A Light Transport Editing Framework (extended version)

Thomas Subileau; Nicolas Mellado; David Vanderhaeghe; Mathias Paulin
The Visual Computer (2015).
[ abstract ] [ bibtex ] [ preprint ]


2014

MCGraph: Multi-criterion representation for scene understanding

Moos Hueting*; Aron Monszpart*; Nicolas Mellado
SIGGRAPH Asia 2014 Workshop on Indoor Scene Understanding: Where Graphics Meets Vision.
(* joint first authors)
[ abstract ] [ bibtex ] [ project page ]

Adaptive multi-scale analysis for point-based surface editing

Georges Nader; Gael Guennebaud; Nicolas Mellado
Pacific Graphics (2014).
[ abstract ] [ bibtex ] [ video ] [ project page ]

Computational Design and Construction of Notch-free Reciprocal Frame Structures

Nicolas Mellado; Peng Song; Xiaoqi Yan; Chi-Wing Fu; Niloy J. Mitra
Advances in Architectural Geometry (2014).
[ abstract ] [ bibtex ] [ video ] [ project page ]

Super 4PCS: Fast Global Pointcloud Registration via Smart Indexing

Nicolas Mellado; Dror Aiger; Niloy J. Mitra
Best paper award
12th Symposium on Geometry Processing, 2014.
[ abstract ] [ bibtex ] [ video ] [ project page ]

The Revealing Flashlight: Interactive spatial augmented reality for detail exploration of cultural heritage artifacts

Brett Ridel; Patrick Reuter; Jeremy Laviole; Nicolas Mellado; Xavier Granier; Nadine Couture
ACM Journal on Computing and Cultural Heritage, Special Issue on “Interacting with the past”
Selected as a notable article in computing in 2014 by ACM ThinkLoud Computing Reviews.
[ abstract ] [ bibtex ] [ preprint ] [ project page ]


2013

La Lampe torche magique : Une interface tangible pour l'inspection géométrique d'objets en réalité augmentée spatiale

Brett Ridel; Patrick Reuter; Jeremy Laviole; Nicolas Mellado; Xavier Granier; Nadine Couture
IHM’13
[ abstract ] [ bibtex ] [ preprint ]

Screen-Space Curvature for Production-Quality Rendering and Compositing

Nicolas Mellado; Pascal Barla; Gael Guennebaud; Patrick Reuter; Gregory Duquesne
ACM SIGGRAPH 2013 Talks.
[ talk abstract (html) (pdf) ] [ bibtex ] [ additional results ] [ slides + video ]  [ Modo plugin ]


2012

Analysis of 3D objects at multiple scales: application to shape matching

Nicolas Mellado
PhD Thesis
[ abstract ] [ bibtex ] [ manuscript (revised version) ]

Growing Least Squares for the Analysis of Manifolds in Scale-Space

Nicolas Mellado; Pascal Barla; Gaël Guennebaud; Patrick Reuter; Christophe Schlick
CGF 2012 (Proc. of Symposium on Geometry Processing)
[ abstract ] [ bibtex ] [ preprint ] [ slides (pdf) ] [ code ] [ project page ]


2010

Semi-automatic geometry-driven reassembly of fractured archeological objects

Nicolas Mellado; Patrick Reuter; Christophe Schlick
VAST 2010
[ abstract ] [ bibtex ] [ paper ]

Constrained Palette-Space Exploration


Color palettes are widely used by artists to define the colors of artworks and explore color designs. In general, artists select the colors of a palette by following a set of rules, e.g., contrast or relative luminance. Existing interactive palette exploration tools explore palette spaces under limited constraints defined as geometric configurations in color space, e.g., harmony rules on the color wheel. Palette search algorithms sample palettes from color relations learned from an input dataset; however, they do not support interactive user edits and palette refinement.

In this work we introduce a new, versatile formulation enabling the creation of constraint-based interactive palette exploration systems. Our technical contribution is a graph-based palette representation, from which we define palette exploration as a minimization problem that can be solved efficiently and provides real-time feedback. Based on our formulation, we introduce two interactive palette exploration strategies: constrained palette exploration and, for the first time, constrained palette interpolation. We demonstrate the performance of our approach on various application cases and evaluate how it helps users find trade-offs between concurrent constraints.
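
As a loose illustration of this kind of constraint-based formulation (not the paper's actual graph representation or solver), the toy script below treats a palette as a set of RGB colors, pins some of them to user edits, and enforces pairwise luminance-contrast targets with a few gradient steps; all names and weights are hypothetical.

import numpy as np

# Toy palette "graph": nodes are RGB colors in [0, 1]; edges carry a target
# luminance contrast. Sketch of a constraint-based palette optimization only.
LUMA = np.array([0.2126, 0.7152, 0.0722])

def energy_grad(palette, pinned, edges, w_pin=10.0, w_con=1.0):
    # Gradient of  w_pin * sum ||p_i - pin_i||^2
    #            + w_con * sum (luma(p_i) - luma(p_j) - target_ij)^2
    grad = np.zeros_like(palette)
    for i, target in pinned.items():
        grad[i] += 2.0 * w_pin * (palette[i] - target)
    for i, j, dl in edges:
        diff = LUMA @ palette[i] - LUMA @ palette[j] - dl
        grad[i] += 2.0 * w_con * diff * LUMA
        grad[j] -= 2.0 * w_con * diff * LUMA
    return grad

def explore(palette, pinned, edges, steps=500, lr=0.01):
    p = palette.copy()
    for _ in range(steps):
        p -= lr * energy_grad(p, pinned, edges)
        np.clip(p, 0.0, 1.0, out=p)   # stay in displayable RGB
    return p

palette = np.array([[0.9, 0.2, 0.2], [0.2, 0.6, 0.9], [0.8, 0.8, 0.3]])
pinned = {0: np.array([1.0, 0.5, 0.0])}   # the user drags color 0 towards orange
edges = [(0, 1, 0.3), (1, 2, -0.2)]       # desired luminance contrasts
print(explore(palette, pinned, edges))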

Mesh Simplification With Curvature Error Metric


Progressive mesh algorithms aim at computing levels of detail from a highly detailed mesh. Many of these algorithms are based on a mesh decimation technique that generates a coarse triangulation while optimizing a particular metric minimizing the distance to the original shape. However, these metrics do not robustly handle high-curvature regions, sharp features, boundaries or noise. We propose a novel error metric, based on algebraic spheres as a measure of the curvature of the mesh, to preserve curvature along the simplification process. This metric is compact, does not require extra input from the user, and is as simple to implement as a conventional quadric error metric.

@inproceedings {egp.20171040,
 booktitle = {EG 2017 - Posters},
 editor = {Pierre Benard and Daniel Sykora},
 title = {{Mesh Simplification With Curvature Error Metric}},
 author = {Michaud, Céline and Mellado, Nicolas and Paulin, Mathias},
 year = {2017},
 publisher = {The Eurographics Association},
 ISSN = {1017-4656},
 DOI = {10.2312/egp.20171040}
}

Map Style Formalization: Rendering Techniques Extension for Cartography


Cartographic design requires controllable methods and tools to produce maps that are adapted to users' needs and preferences. The formalized rules and constraints for cartographic representation come mainly from the conceptual framework of graphic semiology. Most current Geographical Information Systems (GIS) rely on the Styled Layer Descriptor and Semiology Encoding (SLD/SE) specifications, which provide an XML schema describing the styling rules to be applied to geographic data to draw a map. Although this formalism is relevant for most usages in cartography, it fails to describe complex cartographic and artistic styles. In order to overcome these limitations, we propose an extension of the existing SLD/SE specifications to manage extended map stylizations, by means of controllable expressive methods. Inspired by artistic and cartographic sources (Cassini maps, mountain maps, artistic movements, etc.), we propose to integrate into our system three main expressive methods: linear stylization, patch-based region filling and vector texture generation. We demonstrate how our pipeline makes it possible to personalize map rendering with expressive methods in several examples.

@inproceedings {Christophe-2016,
booktitle = {Non-Photorealistic Animation and Rendering},
editor = {Pierre Bénard and Holger Winnemöller},
title = {{Map Style Formalization: Rendering Techniques Extension for Cartography}},
author = {Christophe, Sidonie and Duménieu, Bertrand and Turbet, Jérémie and Hoarau, Charlotte and Mellado, Nicolas and Ory, Jérémie and Loi, Hugo and Masse, Antoine and Arbelot, Benoit and Vergne, Romain and Brédif, Mathieu and Hurtut, Thomas and Thollot, Joëlle and Vanderhaeghe, David},
year = {2016},
publisher = {The Eurographics Association},
ISSN = {-},
ISBN = {978-3-03868-002-4},
DOI = {10.2312/exp.20161064}
}

Relative scale estimation and 3D registration of multi-modal geometry using Growing Least Squares


The advent of low cost scanning devices and the improvement of multi-view stereo techniques have made the acquisition of 3D geometry ubiquitous. Data gathered from different devices, however, result in large variations in detail, scale, and coverage. Registration of such data is essential before visualizing, comparing and archiving them.
However, state-of-the-art methods for geometry registration cannot be directly applied due to intrinsic differences between the models, e.g. sampling, scale, noise. In this paper we present a method for the automatic registration of multi-modal geometric data, i.e. acquired by devices with different properties (e.g. resolution, noise, data scaling). The method uses a descriptor based on Growing Least Squares, and is robust to noise, variation in sampling density, details, and enables scale-invariant matching. It allows not only the measurement of the similarity between the geometry surrounding two points, but also the estimation of their relative scale. As it is computed locally, it can be used to analyze large point clouds composed of millions of points.
We implemented our approach in two registration procedures (assisted and automatic) and applied them successfully to a number of synthetic and real cases. We show that using our method, multi-modal models can be automatically registered, regardless of their differences in noise, detail, scale, and unknown relative coverage.

@ARTICLE{Mellado2015-multimodal,
    author={Mellado, Nicolas and Dellepiane, Matteo and Scopigno, Roberto},
    journal={Visualization and Computer Graphics, IEEE Transactions on},
    title={Relative scale estimation and 3D registration of multi-modal geometry using Growing Least Squares},
    year={2015},
    volume={PP},
    number={99},
    pages={1-1},
    doi={10.1109/TVCG.2015.2505287},
    ISSN={1077-2626},
    month={},
}

RAPter: Rebuilding Man-made Scenes with Regular Arrangements of Planes


With the proliferation of acquisition devices, gathering massive volumes of 3D data is now easy. Processing such large masses of pointclouds, however, remains a challenge. This is particularly a problem for raw scans with missing data, noise, and varying sampling density. In this work, we present a simple, scalable, yet powerful data reconstruction algorithm. We focus on reconstructing man-made scenes as regular arrangements of planes (RAP), thereby selecting both local plane-based approximations and their global inter-plane relations. We propose a novel selection formulation to directly balance between data fitting and the simplicity of the resulting arrangement of extracted planes. The main technical contribution is a formulation that allows less-dominant orientations to retain their internal regularity without being overwhelmed and regularized by the dominant scene orientations. We evaluate our approach on a variety of complex 2D and 3D pointclouds, and demonstrate the advantages over existing alternative methods.

@article{Monszpart2015-Rapter,
 author = {Monszpart, Aron and Mellado, Nicolas and Brostow, Gabriel J. and Mitra, Niloy J.},
 title = {RAPter: Rebuilding Man-made Scenes with Regular Arrangements of Planes},
 journal = {ACM Trans. Graph.},
 issue_date = {August 2015},
 volume = {34},
 number = {4},
 month = jul,
 year = {2015},
 issn = {0730-0301},
 pages = {103:1--103:12},
 articleno = {103},
 numpages = {12},
 url = {http://doi.acm.org/10.1145/2766995},
 doi = {10.1145/2766995},
 acmid = {2766995},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {RANSAC, pointcloud, reconstruction, regular arrangement, scene understanding},
} 

RayPortals: A Light Transport Editing Framework


Physically based rendering, using the path-space formulation of global illumination, has become a standard technique for high-quality computer-generated imagery. Nonetheless, being able to control and edit the resulting picture so that it corresponds to the artist's vision is still a tedious trial-and-error process. We show how the manipulation of light transport translates into the path-space integral formulation of the rendering equation. We introduce portals as a path-space manipulation tool to edit and control renderings, and show how our editing tool unifies and extends previous work on lighting editing. Portals allow the artist to precisely control the final aspect of the image without modifying either the scene geometry or the lighting setup. Given the placement of two geometric handles and a simple path selection filter, portals capture specific light paths and teleport them through 3D space. We implement portals in major path-based algorithms (Photon Mapping, Progressive Photon Mapping and Bi-directional Path Tracing) and demonstrate the wide range of control this technique allows on various lighting effects, from low-frequency color bleeding to high-frequency caustics as well as view-dependent reflections.
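
The portal mechanism can be pictured with a small sketch (hypothetical names, and only the geometric teleportation step, not the paper's path-space integral formulation): a portal is a pair of oriented disks, and a ray that hits the input disk is re-expressed in the output disk's frame and continues from there.

import numpy as np

# A portal handle: an oriented disk with origin o, orthonormal frame R
# (columns: tangent, bitangent, normal) and radius r.
class Disk:
    def __init__(self, origin, frame, radius):
        self.o = np.asarray(origin, float)
        self.R = np.asarray(frame, float)
        self.r = float(radius)

def hit_disk(disk, ro, rd):
    # Ray/disk intersection; returns the hit point or None.
    n = disk.R[:, 2]
    denom = rd @ n
    if abs(denom) < 1e-8:
        return None
    t = ((disk.o - ro) @ n) / denom
    if t <= 0:
        return None
    p = ro + t * rd
    return p if np.linalg.norm(p - disk.o) <= disk.r else None

def teleport(src, dst, ro, rd):
    # Rays missing the input disk are left untouched; rays hitting it are
    # re-emitted from the output disk with the same local position/direction.
    p = hit_disk(src, ro, rd)
    if p is None:
        return ro, rd
    local_p = src.R.T @ (p - src.o)
    local_d = src.R.T @ rd
    return dst.o + dst.R @ local_p, dst.R @ local_d

a = Disk([0, 0, 0], np.eye(3), 1.0)
b = Disk([5, 0, 3], np.eye(3), 1.0)
print(teleport(a, b, np.array([0.2, 0.1, -2.0]), np.array([0.0, 0.0, 1.0])))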

@article{Subileau2015-RayPortals,
    author={Subileau, Thomas and Mellado, Nicolas and Vanderhaeghe, David and Paulin, Mathias},
    year={2015},
    issn={0178-2789},
    journal={The Visual Computer},
    doi={10.1007/s00371-015-1163-2},
    title={RayPortals: a light transport editing framework},
    url={http://dx.doi.org/10.1007/s00371-015-1163-2},
    publisher={Springer Berlin Heidelberg},
    keywords={Rendering; Global illumination; Editing; Manipulation; Physically based},
    pages={1-10},
    language={English}
}

MCGraph: Multi-criterion representation for scene understanding


The field of scene understanding endeavours to extract a broad range of information from 3D scenes. Current approaches exploit one or at most a few different criteria (e.g., spatial, semantic, functional information) simultaneously for analysis. We argue that to take scene understanding to the next level of performance, we need to take into account many different, and possibly previously unconsidered, types of knowledge simultaneously. A unified representation for this type of processing is as yet missing. In this work we propose MCGraph: a unified multi-criterion data representation for understanding and processing of large-scale 3D scenes. Scene abstraction and prior knowledge are kept separated, but highly connected. For this purpose, primitives (i.e., proxies) and their relationships (e.g., contact, support, hierarchical) are stored in an abstraction graph, while the different categories of prior knowledge necessary for processing are stored separately in a knowledge graph. These graphs complement each other bidirectionally, and are processed concurrently. We illustrate our approach by expressing previous techniques using our formulation, and present promising avenues of research opened up by using such a representation. We also distribute a set of MCGraph annotations for a small number of NYU2 scenes, to be used as ground truth multi-criterion abstractions.
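
The separation between scene abstraction and prior knowledge can be sketched with two plain dictionaries (illustrative only; the identifiers below are hypothetical and this is not the released annotation format):

# Abstraction graph: geometric proxies and their relations.
abstraction_graph = {
    "nodes": {
        "p0": {"proxy": "plane",  "label": "floor"},
        "p1": {"proxy": "cuboid", "label": "table"},
        "p2": {"proxy": "cuboid", "label": "mug"},
    },
    "edges": [
        ("p1", "p0", "support"),   # table supported by floor
        ("p2", "p1", "support"),   # mug supported by table
        ("p2", "p1", "contact"),
    ],
}

# Knowledge graph: prior knowledge of different criteria, kept separate.
knowledge_graph = {
    "nodes": {
        "furniture":  {"criterion": "semantic"},
        "graspable":  {"criterion": "functional"},
        "horizontal": {"criterion": "spatial"},
    },
    "edges": [("graspable", "furniture", "usually_on")],
}

# Bidirectional cross-links connecting scene proxies to prior knowledge.
cross_links = [("p1", "furniture"), ("p2", "graspable"), ("p0", "horizontal")]

def knowledge_for(proxy_id):
    # Query the priors attached to a proxy through the cross-links.
    return [k for p, k in cross_links if p == proxy_id]

print(knowledge_for("p2"))   # -> ['graspable']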

@inproceedings{Hueting2014-MCGraph,
    author = {Hueting, Moos and Monszpart, Aron and Mellado, Nicolas},
    title = {MCGraph: Multi-criterion Representation for Scene Understanding},
    booktitle = {SIGGRAPH Asia 2014 Indoor Scene Understanding Where Graphics Meets Vision},
    series = {SA '14},
    year = {2014},
    isbn = {978-1-4503-3242-2},
    location = {Shenzhen, China},
    pages = {3:1--3:9},
    articleno = {3},
    numpages = {9},
    url = {http://doi.acm.org/10.1145/2670291.2670292},
    doi = {10.1145/2670291.2670292},
    acmid = {2670292},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {multi-criteria, scene abstraction, scene understanding},
}

Adaptive multi-scale analysis for point-based surface editing


This paper presents a tool that enables the direct editing of surface features in large point-clouds or meshes. This is made possible by a novel multi-scale analysis of unstructured point-clouds that automatically extracts the number of relevant features together with their respective scale all over the surface. Then, combining this ingredient with an adequate multi-scale decomposition allows us to directly enhance or reduce each feature in an independent manner. Our feature extraction is based on the analysis of the scale variations of locally fitted surface primitives combined with unsupervised learning techniques. Our tool may be applied either globally or locally, and millions of points are handled in real-time. The resulting system enables users to accurately edit complex geometries with minimal interaction.

@article{Nader:2014:AMA:2771634.2771652,
 author = {Nader, G. and Guennebaud, G. and Mellado, N.},
 title = {Adaptive Multi-scale Analysis for Point-based Surface Editing},
 journal = {Comput. Graph. Forum},
 issue_date = {October 2014},
 volume = {33},
 number = {7},
 month = oct,
 year = {2014},
 issn = {0167-7055},
 pages = {171--179},
 numpages = {9},
 url = {http://dx.doi.org/10.1111/cgf.12485},
 doi = {10.1111/cgf.12485},
 acmid = {2771652},
 publisher = {The Eurographics Association \& John Wiley \& Sons, Ltd.},
 address = {Chichester, UK},
 keywords = {Categories and Subject Descriptors according to ACM CCS, I.3.3 [Computer Graphics]: Picture/Image Generation-Line and curve generation},
} 

Computational Design and Construction of Notch-free Reciprocal Frame Structures


A reciprocal frame (RF) is a self-standing 3D structure typically formed by a complex grillage created as an assembly of simple atomic RF-units, which are in turn made up of three or more sloping rods forming individual units. While RF-structures are attractive given their simplicity, beauty, and ease of deployment, creating such structures is difficult and cumbersome. In this work, we present an interactive computational framework for designing and assembling RF-structures around a 3D reference surface. Targeting notch-free assemblies, wherein individual rods or sticks are simply tied together, we focus on simplifying both the process of exploring the space of aesthetic designs and the actual assembly process. By providing computational support to simplify the design and assembly process, our tool enables novice users to interactively explore a range of design variations, and assists them in constructing the final RF-structure design. We use the proposed framework to design a range of RF-structures of varying complexity and also physically construct a selection of the models.

@inproceedings{Mellado2014-RF,
	author={Nicolas Mellado and Peng Song and Xiaoqi Yan and Chi-Wing Fu and Niloy J. Mitra},
	title={Computational Design and Construction of Notch-free Reciprocal Frame Structures},
	pages={181-198},
	booktitle={Advances in Architectural Geometry 2014},
	editor={Philippe Block and others},
	publisher={Springer Verlag},
	year={2014},
}

Super 4PCS: Fast Global Pointcloud Registration via Smart Indexing


Data acquisition in large-scale scenes regularly involves accumulating information across multiple scans. A common approach is to locally align scan pairs using the Iterative Closest Point (ICP) algorithm (or its variants), but this requires static scenes and small motion between scan pairs. This prevents accumulating data across multiple scan sessions and/or different acquisition modalities (e.g., stereo, depth scans). Alternatively, one can use a global registration algorithm allowing scans to be in arbitrary initial poses. The state-of-the-art global registration algorithm, 4PCS, however has a quadratic time complexity in the number of data points. This vastly limits its applicability to acquisition of large environments. We present Super 4PCS for global pointcloud registration that is optimal, i.e., runs in linear time (in the number of data points) and is also output sensitive in the complexity of the alignment problem based on the (unknown) overlap across scan pairs. Technically, we map the algorithm as an ‘instance problem’ and solve it efficiently using a smart indexing data organization. The algorithm is simple, memory-efficient, and fast. We demonstrate that Super 4PCS results in significant speedup over alternative approaches and allows unstructured efficient acquisition of scenes at scales previously not possible. Complete source code and datasets are available for research use at http://geometry.cs.ucl.ac.uk/projects/2014/super4PCS/.
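
The key speed-up over 4PCS is extracting point pairs at a prescribed distance without a quadratic search. The routine below only illustrates that flavor of indexing with a uniform grid (a simplification with hypothetical names; the actual algorithm rasterizes spheres over a grid and also filters pairs by normals and colors):

import numpy as np
from collections import defaultdict

def pairs_at_distance(points, dist, eps):
    # Return index pairs (i, j), i < j, with | ||pi - pj|| - dist | <= eps.
    # A uniform grid with cell size dist + eps guarantees that any valid pair
    # lies in the same or an adjacent cell, so each point only inspects its
    # 27 neighboring cells instead of all other points.
    cell = dist + eps
    keys = [tuple(k) for k in np.floor(points / cell).astype(int)]
    grid = defaultdict(list)
    for i, k in enumerate(keys):
        grid[k].append(i)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                            for dy in (-1, 0, 1)
                            for dz in (-1, 0, 1)]
    pairs = []
    for i, k in enumerate(keys):
        for off in offsets:
            nk = (k[0] + off[0], k[1] + off[1], k[2] + off[2])
            for j in grid.get(nk, []):
                if j <= i:
                    continue
                if abs(np.linalg.norm(points[i] - points[j]) - dist) <= eps:
                    pairs.append((i, j))
    return pairs

pts = np.random.rand(2000, 3)
print(len(pairs_at_distance(pts, 0.25, 0.01)))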

@inproceedings{Mellado2014-Super4PCS,
 author = {Mellado, Nicolas and Aiger, Dror and Mitra, Niloy J.},
 title = {Super 4PCS Fast Global Pointcloud Registration via Smart Indexing},
 booktitle = {Proceedings of the Symposium on Geometry Processing},
 series = {SGP '14},
 year = {2014},
 location = {Cardiff, United Kingdom},
 pages = {205--215},
 numpages = {11},
 url = {http://dx.doi.org/10.1111/cgf.12446},
 doi = {10.1111/cgf.12446},
 acmid = {2855610},
 publisher = {Eurographics Association},
 address = {Aire-la-Ville, Switzerland, Switzerland},
} 

The Revealing Flashlight: Interactive spatial augmented reality for detail exploration of cultural heritage artifacts


Cultural heritage artifacts often contain details that are difficult to distinguish due to aging effects such as erosion. We propose the revealing flashlight, a new interaction and visualization technique in spatial augmented reality that helps to reveal the detail of such artifacts. We locally and interactively augment a physical artifact by projecting an expressive 3D visualization that highlights its features, based on an analysis of its previously acquired geometry at multiple scales. Our novel interaction technique simulates and improves the behavior of a flashlight: according to 6-degree-of-freedom input, we adjust the numerous parameters involved in the expressive visualization - in addition to specifying the location to be augmented. This makes advanced 3D analysis accessible to the greater public with an everyday gesture, by naturally combining the inspection of the real object and the virtual object in a co-located interaction and visualization space.

The revealing flashlight can be used by archeologists, for example, to help decipher inscriptions in eroded stones, or by museums to let visitors interactively discover the geometric details and meta-information of cultural artifacts. We confirm its effectiveness, ease-of-use and ease-of-learning in an initial preliminary user study and through the feedback from two public exhibitions.
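
A very small sketch of the flashlight-style mapping (hypothetical parameter names; the real system drives a full expressive-rendering pipeline): the 6-DOF pose yields a position and direction, from which the spot to augment and the visualization intensity are derived.

import numpy as np

def flashlight_params(flash_pos, flash_dir, surface_points,
                      cone_deg=20.0, near=0.2, far=1.5):
    # Spot = points inside a cone around the flashlight direction;
    # intensity grows as the flashlight gets closer to the surface.
    d = np.asarray(flash_dir, float)
    d = d / np.linalg.norm(d)
    to_pts = surface_points - np.asarray(flash_pos, float)
    dist = np.linalg.norm(to_pts, axis=1)
    cos_angle = (to_pts @ d) / np.maximum(dist, 1e-9)
    in_spot = cos_angle >= np.cos(np.radians(cone_deg))
    intensity = np.clip((far - dist) / (far - near), 0.0, 1.0)
    return in_spot, intensity

pts = np.random.rand(1000, 3)                      # stand-in for the scanned surface
mask, inten = flashlight_params([0.5, 0.5, -1.0], [0, 0, 1], pts)
print(int(mask.sum()), float(inten.max()))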

@article{Ridel:2014,
    author = {Ridel, Brett and Reuter, Patrick and Laviole, Jeremy and Mellado, Nicolas and Couture, Nadine and Granier, Xavier},
    title = {The Revealing Flashlight: Interactive Spatial Augmented Reality for Detail Exploration of Cultural Heritage Artifacts},
    journal = {J. Comput. Cult. Herit.},
    issue_date = {July 2014},
    volume = {7},
    number = {2},
    month = jun,
    year = {2014},
    issn = {1556-4673},
    pages = {6:1--6:18},
    articleno = {6},
    numpages = {18},
    url = {http://doi.acm.org/10.1145/2611376},
    doi = {10.1145/2611376},
    acmid = {2611376},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {Spatial interaction techniques, expressive visualization, spatial augmented reality, tangible interaction},
}

La Lampe torche magique : Une interface tangible pour l'inspection géométrique d'objets en réalité augmentée spatiale


Spatial Augmented Reality (SAR) enriches real-world objects by overlaying digital information onto them with video projectors. It holds strong potential for introducing new interaction techniques, because co-locating the rendering space and the interaction space in the real world lets us build on spontaneous habits such as direct interaction with the hands. We propose the "Lampe torche magique" (magic flashlight), a new six-degree-of-freedom interaction designed to improve the visual analysis of a real object by selectively overlaying digital information through SAR. The interaction relies on a threefold flashlight metaphor: the region to inspect is determined by the light spot, the inspection angle by the direction of the flashlight, and the intensity of the visualization by the distance between the flashlight and the "illuminated" object. Thanks to a prior 3D digitization of the object and a multi-scale geometric analysis of its surface, we augment the real object with an expressive visualization that highlights its details, such as curvatures, at different scales and from different angles. A first exploratory user study on an Egyptian stele bearing an inscription barely visible to the naked eye shows that our technique improves legibility without losing the link between the real object and the abstract information.

@inproceedings{Ridel:2013,
    title = {{La Lampe torche magique : Une interface tangible pour l'inspection g{\'e}om{\'e}trique d'objets en r{\'e}alit{\'e} augment{\'e}e spatiale}},
    author = {Ridel, Brett and Reuter, Patrick and Laviole, Jeremy and Mellado, Nicolas and Granier, Xavier and Couture, Nadine},
    language = {French},
    booktitle = {{25{\`e}me conf{\'e}rence francophone sur l'Interaction Homme-Machine, IHM'13}},
    publisher = {ACM},
    address = {Bordeaux, France},
    organization = {AFIHM},
    audience = {international },
    doi = {10.1145/2534903.2534906 },
    year = {2013},
    month = Jul,
} 

Screen-Space Curvature for Production-Quality Rendering and Compositing

@inproceedings{Mellado:2013:SSC,
 author = {Mellado, Nicolas and Barla, Pascal and Guennebaud, Gael and Reuter, Patrick and Duquesne, Gregory},
 title = {Screen-space Curvature for Production-quality Rendering and Compositing},
 booktitle = {ACM SIGGRAPH 2013 Talks},
 series = {SIGGRAPH '13},
 year = {2013},
 isbn = {978-1-4503-2344-4},
 location = {Anaheim, California},
 pages = {42:1--42:1},
 articleno = {42},
 numpages = {1},
 url = {http://doi.acm.org/10.1145/2504459.2504512},
 doi = {10.1145/2504459.2504512},
 acmid = {2504512},
 publisher = {ACM},
 address = {New York, NY, USA},
} 

Analysis of 3D objects at multiple scales: application to shape matching


Over the last decades, the evolution of acquisition techniques has made detailed 3D objects, represented as huge point sets composed of millions of vertices, commonplace. The complexity of the involved data often requires analyzing them to extract and characterize pertinent structures, which are potentially defined at multiple scales. Among the wide variety of methods proposed to analyze digital signals, scale-space analysis is today a standard for the study of 2D curves and images. However, its adaptation to 3D data leads to instabilities and requires connectivity information, which is not directly available when dealing with point sets.

In this thesis, we present a new multi-scale analysis framework that we call the Growing Least Squares (GLS). It consists of a robust local geometric descriptor that can be evaluated on point sets at multiple scales using an efficient second-order fitting procedure. We propose to analytically differentiate this descriptor to extract continuously the pertinent structures in scale-space. We show that this representation and the associated toolbox define an efficient way to analyze 3D objects represented as point sets at multiple scales. To this end, we demonstrate its relevance in various application scenarios.

A challenging application is the analysis of acquired 3D objects coming from the Cultural Heritage field. In this thesis, we study a real-world dataset composed of the fragments of the statues that surrounded the legendary Alexandria Lighthouse. In particular, we focus on the problem of fractured object reassembly, consisting of a few fragments (up to about ten), but with missing parts due to erosion or deterioration. We propose a semi-automatic formalism to combine both the archaeologists' knowledge and the accuracy of geometric matching algorithms during the reassembly process. We use it to design two systems, and we show their efficiency in concrete cases.

@phdthesis{Mellado:2013:PhDThesis,
  title={Analysis of 3D objects at multiple scales: application to shape matching},
  author={Mellado, Nicolas},
  year={2013},
  school={Universit{\'e} Sciences et Technologies-Bordeaux I}
} 

Update (December 22nd, 2015): fixed typos in Equation 2.21 and Appendix A. Thanks to Gaël Guennebaud and Émilie Guy for spotting these bugs.

Growing Least Squares for the Analysis of Manifolds in Scale-Space


We present a novel approach to the multi-scale analysis of point-sampled manifolds of co-dimension 1. It is based on a variant of Moving Least Squares, whereby the evolution of a geometric descriptor at increasing scales is used to locate pertinent locations in scale-space, hence the name "Growing Least Squares". Compared to existing scale-space analysis methods, our approach is the first to provide a continuous solution in the space and scale dimensions, without requiring any parametrization, connectivity or uniform sampling. An important implication is that we identify multiple pertinent scales for any point on a manifold, a property that had not yet been demonstrated in the literature. In practice, our approach exhibits improved robustness to changes of input, and is easily implemented in a parallel fashion on the GPU. We compare our method to state-of-the-art scale-space analysis techniques and illustrate its practical relevance in a few application scenarios.
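
To give a concrete feel for the idea, the sketch below fits an algebraic sphere to a growing neighborhood and tracks the inverse of the fitted radius across scales. This is a deliberate simplification (positions only, a Coope-style linear fit, a curvature proxy instead of the full GLS descriptor) with hypothetical names; the paper fits oriented spheres and differentiates the descriptor analytically in scale.

import numpy as np

def fit_sphere(points):
    # Algebraic sphere fit: solve |x|^2 = 2 c.x + (r^2 - |c|^2) in least squares.
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(max(sol[3] + center @ center, 1e-12))
    return center, radius

def multi_scale_curvature(points, query, scales):
    # Curvature proxy (1 / fitted radius) at a query point for a growing
    # sequence of neighborhood radii; NaN where the neighborhood is too small.
    out = []
    for s in scales:
        nbrs = points[np.linalg.norm(points - query, axis=1) < s]
        if len(nbrs) < 5:
            out.append(float("nan"))
            continue
        _, r = fit_sphere(nbrs)
        out.append(1.0 / r)
    return out

# Noisy samples of a unit sphere: the proxy should approach 1 as the scale grows.
p = np.random.randn(5000, 3)
p = p / np.linalg.norm(p, axis=1, keepdims=True) + 0.005 * np.random.randn(5000, 3)
print(multi_scale_curvature(p, p[0], scales=[0.1, 0.2, 0.4, 0.8]))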

@article{Mellado:2012:GLS,
 author = {Mellado, Nicolas and Guennebaud, Gaël and Barla, Pascal and Reuter, Patrick and Schlick, Christophe},
 title = {Growing Least Squares for the Analysis of Manifolds in Scale-Space},
 journal = {Comp. Graph. Forum},
 issue_date = {August 2012},
 volume = {31},
 number = {5},
 month = aug,
 year = {2012},
 issn = {0167-7055},
 pages = {1691--1701},
 numpages = {11},
 url = {http://dx.doi.org/10.1111/j.1467-8659.2012.03174.x},
 doi = {10.1111/j.1467-8659.2012.03174.x},
 acmid = {2346805},
 publisher = {John Wiley \& Sons, Inc.},
 address = {New York, NY, USA},
} 

Semi-automatic geometry-driven reassembly of fractured archeological objects


3D laser scanning of broken cultural heritage content is becoming increasingly popular, resulting in large collections of detailed fractured archeological 3D objects that have to be reassembled virtually. In this paper, we present a new semi-automatic reassembly approach for the pairwise matching of fragments, which makes it possible to take into account both the archeologist's expertise and the power of automatic geometry-driven matching algorithms. Our semi-automatic reassembly approach is based on a real-time interaction loop: an expert user steadily specifies approximate initial relative positions and orientations between two fragments by means of a bimanual tangible user interface. These initial poses are continuously corrected and validated in real-time by an algorithm based on the Iterative Closest Point (ICP): the potential contact surface of the two fragments is identified by efficiently pruning insignificant areas of a pair of bounding sphere hierarchies, combined with a k-d tree for closest vertex queries. The locally optimal relative pose for the best match is robustly estimated by taking into account the distance of the closest vertices as well as their normals. We provide feedback to the user through a visual representation of the locally optimal best match and its associated error. Our first results on a concrete dataset show that our system is capable of assisting an expert user in real-time during the pairwise matching of downsampled 3D fragments.
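
The correction loop can be pictured with a plain point-to-point ICP step (an illustrative sketch with hypothetical names, using a stock k-d tree and an SVD-based rigid fit rather than the paper's bounding-sphere pruning and normal-aware matching):

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Least-squares rotation R and translation t with R @ src_i + t ~ dst_i (Kabsch).
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_refine(frag_a, frag_b, iters=30):
    # Refine fragment B against fragment A, starting from the approximate
    # pose suggested by the user through the tangible interface.
    tree = cKDTree(frag_a)
    moved = frag_b.copy()
    for _ in range(iters):
        _, idx = tree.query(moved)    # closest vertex on A for each vertex of B
        R, t = best_rigid_transform(moved, frag_a[idx])
        moved = moved @ R.T + t
    return moved

ang = 0.15
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0,            0,           1]])
a = np.random.rand(1000, 3)
b = a @ Rz.T + np.array([0.05, 0.0, 0.02])    # slightly misaligned copy of a
print(np.abs(icp_refine(a, b) - a).max())     # residual should be close to 0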

@inproceedings{Mellado:2010:VAST,
 author = {Mellado, Nicolas and Reuter, Patrick and Schlick, Christophe},
 title = {Semi-automatic Geometry-driven Reassembly of Fractured Archeological Objects},
 booktitle = {Proceedings of the 11th International Conference on Virtual Reality, Archaeology and Cultural Heritage},
 series = {VAST'10},
 year = {2010},
 isbn = {978-3-905674-29-3},
 location = {Paris, France},
 pages = {33--38},
 numpages = {6},
 url = {http://dx.doi.org/10.2312/VAST/VAST10/033-038},
 doi = {10.2312/VAST/VAST10/033-038},
 acmid = {2384531},
 publisher = {Eurographics Association},
 address = {Aire-la-Ville, Switzerland, Switzerland},
}