
In computer science, a **search algorithm** is an algorithm designed to solve a search problem.^{[1]} Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values.

While the search problems described above and web search are both problems in information retrieval, they are generally studied as separate subfields and are solved and evaluated differently. Web search problems are generally focused on filtering and finding documents highly relevant to human queries. Classic search algorithms are evaluated on how fast they can find a solution, and whether the solution found is optimal. Though information retrieval algorithms must be fast, the quality of ranking, and whether good results have been left out and bad results included, is more important.

The appropriate search algorithm often depends on the data structure being searched, and may also include prior knowledge about the data. Search algorithms can be made faster or more efficient by specially constructed database structures, such as search trees, hash maps, and database indexes.^{[2]}^{[full citation needed]}^{[3]}

Search algorithms can be classified based on their mechanism of searching into three types of algorithms: linear, binary, and hashing. Linear search algorithms check every record for the one associated with a target key in a linear fashion.^{[4]} Binary, or half-interval, searches repeatedly target the center of the search structure and divide the search space in half. Comparison search algorithms improve on linear searching by successively eliminating records based on comparisons of the keys until the target record is found, and can be applied on data structures with a defined order.^{[5]} Digital search algorithms work based on the properties of digits in data structures by using numerical keys.^{[6]} Finally, hashing directly maps keys to records based on a hash function.^{[7]}
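
The simplest of these mechanisms, linear search, can be sketched in a few lines of Python (names are illustrative):

```python
def linear_search(records, key):
    """Scan records in order; return the index of the first record equal
    to key, or -1 if key is absent.

    In the worst case every record is checked, i.e. O(n) comparisons.
    """
    for i, record in enumerate(records):
        if record == key:
            return i
    return -1
```

Because it assumes nothing about the order of the records, linear search works on any sequence, but it cannot skip any of them.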

Algorithms are often evaluated by their computational complexity, or maximum theoretical run time. Binary search functions, for example, have a maximum complexity of *O*(log *n*), or logarithmic time. In simple terms, the maximum number of operations needed to find the search target is a logarithmic function of the size of the search space.
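
To make the logarithmic bound concrete, here is a minimal iterative binary search over a sorted list (a Python sketch; the function name is illustrative):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining search space, so at most
    O(log n) comparisons are needed -- the logarithmic bound above.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1    # target can only lie in the upper half
        else:
            hi = mid - 1    # target can only lie in the lower half
    return -1
```

Note that the precondition — the input must already be sorted — is exactly the kind of prior knowledge about the data that the previous paragraphs describe.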

Specific applications of search algorithms include:

- Problems in combinatorial optimization, such as:
  - The vehicle routing problem, a form of shortest path problem
  - The knapsack problem: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
  - The nurse scheduling problem
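
As an illustration, the 0/1 variant of the knapsack problem (each item used at most once) can be solved by dynamic programming; a Python sketch with illustrative names:

```python
def knapsack_01(weights, values, capacity):
    """Maximum total value of a subset of items with total weight <= capacity."""
    # best[w] = best value achievable with total weight at most w
    best = [0] * (capacity + 1)
    for weight, value in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]
```

This runs in O(n · capacity) time, pseudo-polynomial in the numeric size of the limit, which is why the general problem remains hard.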

- Problems in constraint satisfaction, such as:
  - The map coloring problem
  - Filling in a sudoku or crossword puzzle

- In game theory and especially combinatorial game theory, choosing the best move to make next (such as with the minimax algorithm)
- Finding a combination or password from the whole set of possibilities
- Factoring an integer (an important problem in cryptography)
- Optimizing an industrial process, such as a chemical reaction, by changing the parameters of the process (like temperature, pressure, and pH)
- Retrieving a record from a database
- Finding the maximum or minimum value in a list or array
- Checking to see if a given value is present in a set of values

Algorithms for searching virtual spaces are used in the constraint satisfaction problem, where the goal is to find a set of value assignments to certain variables that will satisfy specific mathematical equations and inequalities. They are also used when the goal is to find a variable assignment that will maximize or minimize a certain function of those variables. Algorithms for these problems include the basic brute-force search (also called "naïve" or "uninformed" search), and a variety of heuristics that try to exploit partial knowledge about the structure of this space, such as linear relaxation, constraint generation, and constraint propagation.
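
A brute-force ("uninformed") search simply enumerates every candidate assignment; a small Python sketch for the map coloring problem mentioned above (region and color names are illustrative):

```python
from itertools import product

def brute_force_coloring(regions, borders, colors):
    """Try every assignment of colors to regions; return the first one in
    which no two bordering regions share a color, or None if none exists.

    Enumerates len(colors) ** len(regions) assignments -- exponential,
    which is why heuristics such as constraint propagation matter.
    """
    for assignment in product(colors, repeat=len(regions)):
        coloring = dict(zip(regions, assignment))
        if all(coloring[a] != coloring[b] for a, b in borders):
            return coloring
    return None
```

A single violated constraint rules out an entire assignment, which is the partial knowledge that smarter methods exploit to prune the space instead of enumerating it.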

An important subclass is formed by the local search methods, which view the elements of the search space as the vertices of a graph, with edges defined by a set of heuristics applicable to the case, and which scan the space by moving from item to item along the edges, for example according to the steepest descent or best-first criterion, or in a stochastic search. This category includes a great variety of general metaheuristic methods, such as simulated annealing, tabu search, A-teams, and genetic programming, that combine arbitrary heuristics in specific ways. In contrast, global search methods are applicable when the search space is not limited and all aspects of the given network are available to the entity running the search algorithm.^{[8]}
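
A minimal local-search loop in the steepest-descent style: from the current state, move to the best neighbor until no neighbor improves. This is a generic sketch; `neighbors` and `cost` are problem-specific callables assumed here for illustration:

```python
def hill_climb(start, neighbors, cost):
    """Greedy local search: repeatedly move to the lowest-cost neighbor.

    Stops at a local optimum, which need not be the global one --
    metaheuristics such as simulated annealing exist to escape these.
    """
    current = start
    while True:
        candidates = neighbors(current)
        if not candidates:
            return current
        best = min(candidates, key=cost)
        if cost(best) >= cost(current):
            return current      # no neighbor improves: local optimum
        current = best
```

For example, minimizing `(x - 3) ** 2` over the integers with neighbors `x - 1` and `x + 1` walks straight to `x = 3`.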

This class also includes various tree search algorithms, that view the elements as vertices of a tree, and traverse that tree in some special order. Examples of the latter include the exhaustive methods such as depth-first search and breadth-first search, as well as various heuristic-based search tree pruning methods such as backtracking and branch and bound. Unlike general metaheuristics, which at best work only in a probabilistic sense, many of these tree-search methods are guaranteed to find the exact or optimal solution, if given enough time. This is called "completeness".
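
Depth-first and breadth-first search differ only in the order in which unexplored vertices are removed from the frontier — a stack versus a queue. A compact Python sketch of both over an adjacency-list graph (the dictionary representation is an illustrative choice):

```python
from collections import deque

def dfs(graph, start):
    """Depth-first traversal order (explicit stack), visiting each vertex once."""
    stack, seen, order = [start], {start}, []
    while stack:
        node = stack.pop()
        order.append(node)
        # reversed() so children are visited in their listed order
        for nxt in reversed(graph.get(node, [])):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return order

def bfs(graph, start):
    """Breadth-first traversal order (queue), visiting each vertex once."""
    queue, seen, order = deque([start]), {start}, []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order
```

On a tree both are exhaustive and complete: given enough time, every vertex is reached.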

Another important sub-class consists of algorithms for exploring the game tree of multiple-player games, such as chess or backgammon, whose nodes consist of all possible game situations that could result from the current situation. The goal in these problems is to find the move that provides the best chance of a win, taking into account all possible moves of the opponent(s). Similar problems occur when humans or machines have to make successive decisions whose outcomes are not entirely under one's control, such as in robot guidance or in marketing, financial, or military strategy planning. This kind of problem — combinatorial search — has been extensively studied in the context of artificial intelligence. Examples of algorithms for this class are the minimax algorithm, alpha–beta pruning, and the A* algorithm and its variants.
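
The minimax idea on an explicit game tree: the maximizing player assumes the opponent always replies with the value-minimizing move. A Python sketch over a nested-list tree (an illustrative representation; real programs generate moves lazily and prune with alpha–beta):

```python
def minimax(node, maximizing):
    """Value of a game-tree node.

    Leaves are numeric evaluations of a position; internal nodes are
    lists of child positions. Players alternate max and min levels.
    """
    if not isinstance(node, list):              # leaf: static evaluation
        return node
    values = (minimax(child, not maximizing) for child in node)
    return max(values) if maximizing else min(values)
```

For the two-ply tree `[[3, 5], [2, 9]]`, the opponent would answer the first move with 3 and the second with 2, so the maximizer's best guaranteed value is 3.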

The name "combinatorial search" is generally used for algorithms that look for a specific sub-structure of a given discrete structure, such as a graph, a string, a finite group, and so on. The term combinatorial optimization is typically used when the goal is to find a sub-structure with a maximum (or minimum) value of some parameter. (Since the sub-structure is usually represented in the computer by a set of integer variables with constraints, these problems can be viewed as special cases of constraint satisfaction or discrete optimization; but they are usually formulated and solved in a more abstract setting where the internal representation is not explicitly mentioned.)

An important and extensively studied subclass are the graph algorithms, in particular graph traversal algorithms, for finding specific sub-structures in a given graph — such as subgraphs, paths, circuits, and so on. Examples include Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbour algorithm, and Prim's algorithm.
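
Of these, Dijkstra's algorithm is perhaps the most widely used; a priority-queue sketch in Python (graph representation is an illustrative choice, and edge weights must be non-negative):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph given as
    {vertex: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue                     # stale queue entry, skip it
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float('inf')):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return dist
```

With a binary heap this runs in O((V + E) log V), and unreachable vertices simply never appear in the result.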

Another important subclass of this category are the string searching algorithms, which search for patterns within strings. Famous examples include the Boyer–Moore and Knuth–Morris–Pratt algorithms, as well as several algorithms based on the suffix tree data structure.
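
A sketch of Knuth–Morris–Pratt in Python: a precomputed failure table lets the scan resume mid-pattern after a mismatch instead of re-reading text characters:

```python
def kmp_search(text, pattern):
    """Indices of all occurrences of pattern in text (Knuth-Morris-Pratt).

    Runs in O(len(text) + len(pattern)) time, versus O(n * m) for the
    naive character-by-character comparison.
    """
    if not pattern:
        return []
    # failure[i]: length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    failure = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = failure[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = failure[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = failure[k - 1]           # allow overlapping matches
    return matches
```

Boyer–Moore takes the opposite tack, comparing from the end of the pattern so that mismatches can skip many text positions at once.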

In 1953, American statistician Jack Kiefer devised Fibonacci search, which can be used to find the maximum of a unimodal function and has many other applications in computer science.
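
A sketch of Kiefer's scheme, assuming `f` is unimodal on `[a, b]` (function and parameter names are illustrative): probe points are placed at Fibonacci ratios so that after each interval reduction one of the two function values can be reused.

```python
def fibonacci_search_max(f, a, b, n=30):
    """Approximate the maximizer of a unimodal f on [a, b] using n
    Fibonacci-ratio interval reductions; only one new evaluation of f
    is needed per reduction."""
    fib = [1, 1]
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    x1 = a + (fib[n - 1] / fib[n + 1]) * (b - a)
    x2 = a + (fib[n] / fib[n + 1]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n, 1, -1):
        if f1 < f2:                      # maximum lies in [x1, b]
            a = x1
            x1, f1 = x2, f2              # reuse the surviving probe
            x2 = a + (fib[k - 1] / fib[k]) * (b - a)
            f2 = f(x2)
        else:                            # maximum lies in [a, x2]
            b = x2
            x2, f2 = x1, f1
            x1 = a + (fib[k - 2] / fib[k]) * (b - a)
            f1 = f(x1)
    return (a + b) / 2
```

After n steps the bracketing interval has shrunk by a factor of roughly the (n+1)-th Fibonacci number, which is optimal among comparison-based interval methods.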

There are also search methods designed for quantum computers, like Grover's algorithm, that are theoretically faster than linear or brute-force search even without the help of data structures or heuristics. While the ideas and applications behind quantum computers are still largely theoretical, studies have been conducted with algorithms like Grover's that accurately replicate the hypothetical physical versions of quantum computing systems.^{[9]}

Search algorithms used in a search engine such as Google rank the relevant search results based on a myriad of important factors.^{[10]} Search engine optimization (SEO) is the process by which a site is tuned to work in conjunction with the search algorithm so that it organically gains more traction, attention, and clicks. This can go as far as attempting to exploit the search engine's algorithm to favor a specific search result more heavily, and the strategy revolving around SEO has become incredibly important and relevant in the business world.^{[10]}

- Backward induction
- Content-addressable memory – Special type of computer memory used in certain very-high-speed searching applications
- Dual-phase evolution – Process that drives self-organization within complex adaptive systems
- Linear search problem
- No free lunch in search and optimization – Average solution cost is the same with any method
- Recommender system – Information filtering system to predict users' preferences; such systems also use statistical methods to rank results in very large data sets
- Search engine (computing) – System to help searching for information
- Search game – Two-person zero-sum game
- Selection algorithm – An algorithm for finding the kth smallest number in a list or array
- Solver – Software for a class of mathematical problems
- Sorting algorithm – Algorithm that arranges lists in order, necessary for executing certain search algorithms
- Web search engine

Categories:

- Category:Search algorithms

1. Davies, Dave (May 25, 2020). "How Search Engine Algorithms Work: Everything You Need to Know". *Search Engine Journal*. Retrieved 27 March 2021.
2. Beame & Fich 2002, p. 39.
3. Knuth 1998, §6.5 ("Retrieval on Secondary Keys").
4. Knuth 1998, §6.1 ("Sequential Searching").
5. Knuth 1998, §6.2 ("Searching by Comparison of Keys").
6. Knuth 1998, §6.3 ("Digital Searching").
7. Knuth 1998, §6.4 ("Hashing").
8. Hunter, A.H.; Pippenger, Nicholas (4 July 2013). "Local versus global search in channel graphs". *Networks: An International Journal*. arXiv:1004.2526.
9. López, G V; Gorin, T; Lara, L (26 February 2008). "Simulation of Grover's quantum search algorithm in an Ising-nuclear-spin-chain quantum computer with first- and second-nearest-neighbour couplings". *Journal of Physics B: Atomic, Molecular and Optical Physics*. **41** (5): 055504. arXiv:0710.3196. Bibcode:2008JPhB...41e5504L. doi:10.1088/0953-4075/41/5/055504. S2CID 18796310.
10. Baye, Michael; De los Santos, Babur; Wildenbeest, Matthijs (2016). "Search Engine Optimization: What Drives Organic Traffic to Retail Sites?". *Journal of Economics & Management Strategy*. **25**: 6–31. doi:10.1111/jems.12141. S2CID 156960693.

- Knuth, Donald (1998). *Sorting and Searching*. The Art of Computer Programming. Vol. 3 (2nd ed.). Reading, MA: Addison-Wesley Professional.

- Beame, Paul; Fich, Faith E. (2002-08-01). "Optimal Bounds for the Predecessor Problem and Related Problems". *Journal of Computer and System Sciences*. **65** (1): 38–72. doi:10.1006/jcss.2002.1822.

- Uninformed Search Project at the Wikiversity.