Delaunay triangulations and large data sets


The Delaunay triangulation and its dual Voronoi diagram [Okabe et al. 2000] have been ubiquitous in geometry processing since algorithms for them first appeared in the 1970s [Frederick et al. 1970; Shamos and Hoey 1975]. The Delaunay triangulation (or tetrahedralization) of a set of points has the property that the circumscribing circle of every triangle, or the circumscribing sphere of every tetrahedron, encloses no point in the set. Many surveys of Delaunay triangulations are available: see Fortune [1992] for mathematical properties, Su and Drysdale [1995] for a summary of two-dimensional algorithms and their behavior in practice, and Liu and Snoeyink [2005] for a survey and comparison of five three-dimensional implementations.
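
To make the defining property concrete, here is a minimal Python sketch (ours, not drawn from any of the cited implementations) of the standard 2D incircle test: for a counterclockwise triangle (a, b, c), the determinant below is positive exactly when d lies strictly inside the circumscribing circle. A triangulation is Delaunay when this test fails for every triangle and every other input point. Robust implementations evaluate the determinant with exact or adaptive arithmetic rather than the plain floating point shown here.

def in_circumcircle(a, b, c, d):
    # True if d lies strictly inside the circumcircle of triangle (a, b, c),
    # assuming a, b, c are given in counterclockwise order.
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0.0  # floating point only; real code needs robust arithmetic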

Because of its simplicity, we implemented Lawson's [1977] incremental insertion algorithm as modified and extended to any dimension by Bowyer [1981] and Watson [1981]. Clarkson and Shor [1989] were the first to show that incremental algorithms can run in optimal time in any dimension if the points are inserted in random order. Nearly all modern three-dimensional implementations use incremental insertion, with various strategies for point location (determining where each new point should be inserted).
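
The following is a simplified sketch of a single insertion step in the Bowyer-Watson style described above; it is illustrative only, not the implementation used in this paper. It reuses the in_circumcircle predicate from the previous sketch, assumes the new point falls inside the current triangulation (e.g. inside an enclosing super-triangle), and omits point location and robust predicates.

def ccw(a, b, c):
    # True if a, b, c make a counterclockwise (left) turn.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0

def bowyer_watson_insert(triangles, p):
    # Cavity: all triangles whose circumcircle contains the new point p.
    bad = [t for t in triangles if in_circumcircle(t[0], t[1], t[2], p)]
    # Boundary of the cavity: edges belonging to exactly one cavity triangle.
    seen = {}
    for a, b, c in bad:
        for e in ((a, b), (b, c), (c, a)):
            key = frozenset(e)
            seen[key] = None if key in seen else e
    # Delete the cavity and retriangulate it by joining p to each boundary edge.
    for t in bad:
        triangles.remove(t)
    for e in seen.values():
        if e is not None:
            u, v = e
            triangles.append((u, v, p) if ccw(u, v, p) else (v, u, p))
    return triangles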

Our 2D in-core standard for comparison is the divide-and-conquer algorithm [Shamos and Hoey 1975] as implemented in Triangle [Shewchuk 1996], which runs in optimal O(n log n) time and is the fastest in practice (if the data structures fit in main memory).

Recent papers address the problem of computing Delaunay triangulations too large to fit in memory. Blandford et al. [2005] describe data structures for dynamically maintaining compressed triangulations in two or three dimensions, thereby increasing the size of triangulation that fits in memory by a factor of three to five. For larger triangulations, researchers turn to disk storage. Unfortunately, the randomness that makes incremental insertion fast distributes data structures randomly through memory, with no spatial coherence, so the virtual memory thrashes as soon as the physical memory is exhausted. Amenta, Choi, and Rote [2003] address this problem (in any dimension) by choosing a point insertion order that has strong spatial coherence, but retains just enough randomness to preserve the proof of optimal running time. They call this order a biased randomized insertion order (BRIO). By using a BRIO, they increase substantially the size of triangulation they can construct with a fixed main memory and a large virtual memory.
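
The sketch below shows one way a BRIO might be generated; it is our illustrative reading of the idea, with names of our own choosing, not code from Amenta, Choi, and Rote. Coin flips assign points to rounds whose sizes roughly double from the first round inserted to the last, and each round is sorted by some spatially coherent key (the spatial_key argument) before insertion; the Z-order key sketched after the next paragraph is one such key.

import random

def brio_order(points, spatial_key):
    # Assign each point to a round by fair coin flips: the round inserted last
    # holds about half the points, the one before it a quarter, and so on.
    rounds = {}
    for p in points:
        r = 0
        while random.random() < 0.5:
            r += 1
        rounds.setdefault(r, []).append(p)
    # Insert the small rounds first and the largest round last, ordering each
    # round along a space-filling curve (or any other spatially coherent key).
    order = []
    for r in sorted(rounds, reverse=True):
        order.extend(sorted(rounds[r], key=spatial_key))
    return order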

As an unexpected bonus, their method speeds up point location so much that their implementation triangulates most real-world three-dimensional point sets in linear time, after an O(n log n) sorting step whose hidden constant factor is tiny. Buchin [2005] proves that incremental insertion, coupled with similar randomization and point location based on space-filling curves and bucketing, runs in O(n) time on certain random point sets.
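
One common way to obtain such a space-filling-curve order is a Z-order (Morton) key, sketched below. This is an illustration only; the cited implementations may use other curves (Hilbert curves, for instance, have somewhat better locality), and the bounding-box parameters here are assumptions of the sketch.

def morton_key(p, xmin=0.0, ymin=0.0, extent=1.0, bits=16):
    # Quantize the point onto a 2^bits x 2^bits grid over an assumed bounding
    # box, then interleave the bits of the two grid coordinates.
    side = 1 << bits
    x = max(0, min(int((p[0] - xmin) / extent * side), side - 1))
    y = max(0, min(int((p[1] - ymin) / extent * side), side - 1))
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # even bit positions from x
        key |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions from y
    return key

With these pieces, order = brio_order(points, spatial_key=morton_key) yields a lightly randomized but spatially coherent sequence that could be fed to incremental insertion.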

Agarwal, Arge, and Yi [2005] have designed and implemented an external memory algorithm for constructing constrained Delaunay triangulations in the plane, with theoretical bounds on the number of disk block accesses their algorithm performs. They use a divide-and-conquer approach in which a small random sample of the points breaks the problem up into small subproblems, which are triangulated in-core by Triangle. Their algorithm uses no complicated external data structures (not even B-trees) and is akin to streaming, but it does many read passes over the points. Our streaming implementation outperforms their external memory implementation strikingly (see Section 6), but we have not yet implemented support for constrained Delaunay triangulations.

 

