Vantage-point tree

Summary

A vantage-point tree (or VP tree) is a metric tree that segregates data in a metric space by choosing a position in the space (the "vantage point") and partitioning the data points into two parts: those points that are nearer to the vantage point than a threshold, and those points that are not. By recursively applying this procedure to partition the data into smaller and smaller sets, a tree data structure is created where neighbors in the tree are likely to be neighbors in the space.[1]

One generalization is called a multi-vantage-point tree (or MVP tree): a data structure for indexing objects from large metric spaces for similarity search queries. It uses more than one point to partition each level.[2][3]

History

Peter Yianilos claimed that the vantage-point tree was discovered independently by him and by Jeffrey Uhlmann.[1] However, Uhlmann published this method before Yianilos, in 1991.[4] Uhlmann called the data structure a metric tree; the name VP-tree was proposed by Yianilos. Vantage-point trees have been generalized to non-metric spaces using Bregman divergences by Nielsen et al.[5]

This recursive partitioning process is similar to that of a k-d tree, but uses circular (or spherical, hyperspherical, etc.) rather than rectilinear partitions. In two-dimensional Euclidean space, this can be visualized as a series of circles segregating the data.

The vantage-point tree is particularly useful in dividing data in a non-standard metric space into a metric tree.

Understanding a vantage-point tree

The way a vantage-point tree stores data can be represented by a circle.[6] Each node of the tree contains an input point (the vantage point) and a radius. All the left descendants of a given node are the points inside the circle, and all the right descendants are the points outside it. The tree itself does not need to know any other information about what is being stored; all it needs is a distance function that satisfies the properties of a metric space.[6]
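The construction can be sketched in a few lines of Python. This is a minimal sketch, assuming points are tuples of coordinates and Euclidean distance is the metric; the names VPNode, build_vp_tree and distance are illustrative rather than taken from the cited papers. Each node picks a random vantage point and uses the median distance to the remaining points as its radius.

    import math
    import random

    class VPNode:
        def __init__(self, point, threshold, inside, outside):
            self.point = point          # the vantage point
            self.threshold = threshold  # radius of the partitioning circle
            self.inside = inside        # subtree of points with d(point, p) < threshold
            self.outside = outside      # subtree of points with d(point, p) >= threshold

    def distance(a, b):
        # Any function satisfying the metric-space axioms may be used here.
        return math.dist(a, b)

    def build_vp_tree(points):
        if not points:
            return None
        points = list(points)
        vantage = points.pop(random.randrange(len(points)))  # choose a vantage point
        if not points:
            return VPNode(vantage, 0.0, None, None)
        dists = [distance(vantage, p) for p in points]
        threshold = sorted(dists)[len(dists) // 2]           # median distance as radius
        inside = [p for p, d in zip(points, dists) if d < threshold]
        outside = [p for p, d in zip(points, dists) if d >= threshold]
        return VPNode(vantage, threshold,
                      build_vp_tree(inside), build_vp_tree(outside))

Choosing the median distance as the threshold keeps the two subtrees roughly balanced, which is what gives the tree its logarithmic depth.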

Searching through a vantage-point tree

A vantage-point tree can be used to find the nearest neighbor of a point x. The search algorithm is recursive. At any given step we are working with a node of the tree that has a vantage point v and a threshold distance t. The point of interest x will be some distance d from the vantage point v. If d is less than t, the algorithm is applied recursively to the subtree of the node that contains the points closer to the vantage point than the threshold t; otherwise it recurses into the subtree that contains the points farther from the vantage point than the threshold t. If the recursive call finds a neighboring point n whose distance to x is less than |t − d|, then searching the other subtree of this node cannot help, and the discovered point n is returned. Otherwise, the other subtree also needs to be searched recursively.
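The pruning rule described above can be written as a short recursive function. This is a minimal sketch, assuming the VPNode structure and distance function from the construction sketch above; the name nearest_neighbor is illustrative.

    def nearest_neighbor(node, query, best=None, best_dist=float("inf")):
        if node is None:
            return best, best_dist
        d = distance(query, node.point)
        if d < best_dist:                        # the vantage point itself is a candidate
            best, best_dist = node.point, d
        # Descend first into the side of the threshold on which the query lies.
        if d < node.threshold:
            near, far = node.inside, node.outside
        else:
            near, far = node.outside, node.inside
        best, best_dist = nearest_neighbor(near, query, best, best_dist)
        # The other subtree can only contain a closer point if the best
        # distance found so far is at least |threshold - d|.
        if best_dist >= abs(node.threshold - d):
            best, best_dist = nearest_neighbor(far, query, best, best_dist)
        return best, best_dist

For example, nearest_neighbor(build_vp_tree(points), (0.5, 0.5)) returns the stored point closest to (0.5, 0.5) together with its distance.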

A similar approach works for finding the k nearest neighbors of a point x. In the recursion, the other subtree is searched for the remaining k − k′ nearest neighbors of the point x whenever only k′ (< k) of the nearest neighbors found so far have distance less than |t − d|.
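A sketch of the k-nearest-neighbor variant, again assuming the structures above, can keep the current candidates in a max-heap of size k and visit the other subtree only while fewer than k candidates lie within |t − d| of the query; the name k_nearest is illustrative.

    import heapq

    def k_nearest(node, query, k, heap=None):
        if heap is None:
            heap = []                            # max-heap of (-distance, point) pairs
        if node is None:
            return heap
        d = distance(query, node.point)
        if len(heap) < k:
            heapq.heappush(heap, (-d, node.point))
        elif d < -heap[0][0]:
            heapq.heapreplace(heap, (-d, node.point))
        if d < node.threshold:
            near, far = node.inside, node.outside
        else:
            near, far = node.outside, node.inside
        k_nearest(near, query, k, heap)
        # Search the other side unless all k candidates found so far are
        # strictly closer to the query than |threshold - d|.
        if len(heap) < k or -heap[0][0] >= abs(node.threshold - d):
            k_nearest(far, query, k, heap)
        return heap

The heap finally holds the k nearest stored points, each paired with the negation of its distance to the query.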

Advantages of a vantage-point tree

  1. Instead of mapping the domain objects to multidimensional points before the index is built, the index is built directly from the distances between objects.[6] This avoids pre-processing steps.
  2. Updating a vantage-point tree is relatively easy compared with the FastMap approach. With FastMap, after data are inserted or deleted, the mapping eventually has to be recomputed from scratch; this rescanning is expensive, and it is hard to know in advance when it will be needed.[6]
  3. Distance-based methods are flexible. They are "able to index objects that are represented as feature vectors of a fixed number of dimensions."[6]

Complexity

The time cost to build a vantage-point tree is approximately O(n log n). For each element, the tree is descended by log n levels to find its placement. However, there is also a constant factor k, where k is the number of vantage points per tree node.[3]

The time cost to search a vantage-point tree to find a single nearest neighbor is O(log n). There are log n levels, each involving k distance calculations, where k is the number of vantage points (elements) at that position in the tree.

The time cost to search a vantage-point tree for a range, which may be the most important attribute, can vary greatly depending on the specifics of the algorithm used and parameters. Brin's paper[3] gives the result of experiments with several vantage point algorithms with various parameters to investigate the cost, measured in number of distance calculations.

The space cost for a vantage-point tree is approximately O(n). Each element is stored, and each vantage point in a non-leaf node requires pointers to its descendant subtrees. (See Brin[3] for details of one implementation choice; the parameter for the number of elements at each node plays a factor.)

With n points there are O(n2) pairwise distances between points. However, the creation of a vantage-point tree requires that only O(n log n) distances be calculated explicitly, and a search requires only O(log n) distance calculations. For example, if x and y are points and it is known that the distance d(x, y) is small then any point z that is far from x will also necessarily be almost as far from y because the metric space's triangle inequality gives d(y, z) ≥ d(x, z) − d(x, y).

References

  1. ^ a b Yianilos (1993). Data structures and algorithms for nearest neighbor search in general metric spaces. Fourth annual ACM-SIAM symposium on Discrete algorithms. Society for Industrial and Applied Mathematics Philadelphia, PA, USA. pp. 311–321.
  2. ^ Bozkaya, Tolga; Ozsoyoglu, Meral (September 1999). "Indexing Large Metric Spaces for Similarity Search Queries". ACM Trans. Database Syst. 24 (3): 361–404. doi:10.1145/328939.328959. ISSN 0362-5915. S2CID 6486308.
  3. ^ a b c Brin, Sergey (Sep 1995). "Near Neighbor Search in Large Metric Spaces". VLDB '95 Proceedings of the 21st International Conference on Very Large Data Bases. Zurich, Switzerland: Morgan Kaufmann Publishers Inc.: 574–584. ISBN 9781558603790.
  4. ^ Uhlmann, Jeffrey (1991). "Satisfying General Proximity/Similarity Queries with Metric Trees". Information Processing Letters. 40 (4): 175–179. doi:10.1016/0020-0190(91)90074-r.
  5. ^ Nielsen, Frank (2009). "Bregman vantage point trees for efficient nearest Neighbor Queries". Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). IEEE. pp. 878–881.
  6. ^ a b c d e Fu, Ada Wai-chee; Polly Mei-shuen Chan; Yin-Ling Cheung; Yiu Sang Moon (2000). "Dynamic vp-tree indexing for n-nearest neighbor search given pair-wise distances". The VLDB Journal — The International Journal on Very Large Data Bases. Springer-Verlag New York, Inc. Secaucus, NJ, USA. pp. 154–173. Retrieved 2012-10-02.

External links

  • Understanding VP Trees