Abstract
Many interactions between searching agents and their elusive targets are composed of a succession of steps, whether in the context of immune systems, predation or counterterrorism. In the simplest case, a two-step process starts with a search-and-hide phase, also called a hide-and-seek phase, followed by a round of pursuit–escape. Our aim is to link these two processes, usually analysed separately and with different models, in a single game-theoretic context. We define a matrix game in which a searcher looks at a fixed number of discrete locations only once each, searching for a hider, which can escape with varying probabilities according to its location. The value of the game is the overall probability of capture after k looks. The optimal search and hiding strategies are described. If a searcher looks only once into any of the locations, an optimal hider chooses its hiding place so as to make all locations equally attractive. This optimal strategy remains true as long as the number of looks is below an easily calculated threshold; however, above this threshold, the optimal position for the hider is the one where it has the highest probability of escaping once spotted.
1. Introduction
Many biological interactions between a hider and a searcher consist of a succession of hide-and-seek bouts, also called search-and-hide bouts, followed by pursuit–evasion sequences. For example, at the physiological scale, immune cells patrol the body and destroy invaders once identified; at the ecological scale, there are long sequences of interactions between game animals and large predators, as commonly broadcast in wildlife documentaries. Quantitative descriptions of the exact position and behaviour of the two protagonists, including a detailed map of the arena in which the interaction takes place, are however exceedingly rare in the scientific literature. The focus is indeed generally on the escape strategies of the prey or on the foraging strategy of the predator, but very seldom on both. Hunting cheetahs are a good example [1]: we now know much about the kinematics and biomechanics of the predator during the chase, but information regarding the prey's whereabouts is nearly non-existent, and information about the terrain in which the interaction occurs is systematically absent. The same applies to other predator–prey interactions with very different animals in very different environments: for penguins chasing krill, for example [2]. The lack of comprehensive data for the major variables means that the attack strategies of predators cannot be truly understood in the only relevant context, that of the escape strategies developed by prey, and vice versa [3,4].
Game theory is the tool of choice for analysing the strategies of both predator and prey, in particular two of its subdisciplines: search theory and search games. Search theory [5], which deals with the optimal way of looking for a hidden object with no opponent, has recently again proved its enormous power and relevance to society; after 2 years of unsuccessful searching, application of this theory enabled the wreckage of Air France flight AF 447 to be found in the Atlantic within one week [6]. The two-sided search problems of hide-and-seek situations that we all know from our childhood are the realm of search games. These situations are common in military and anti-terror activity and also in many other fields, especially predator–prey situations, as described by Gal [7]. Pursuit–evasion situations, for example an aircraft or a missile trying to intercept another aircraft, were modelled and solved by Isaacs in his classic book [8] as a new type of game: differential games. The book also introduced continuous search games in its last chapter. These games were subsequently developed by Gal [9] and updated by Alpern & Gal [10]. Search theory in general, and search games in particular, have stimulated much research, with applications to computer science (e.g. [11]), economics and biology, culminating in several recent books [12,13]. The two types of models, pursuit–evasion and hide–search models, were developed separately from the 1960s onwards. The main emphasis in pursuit–evasion games has been to find minimax controls, such as accelerations or angles, which influence the trajectories of the players, each of them usually being aware of the movement of the other player. In hide–search games, the players have, by contrast, little information about the movement of the other player, and the main emphasis is to find optimal probabilistic choices of strategies for the players.
The search space considered in our work is a set of discrete locations. An optimal one-sided (i.e. no opponent) search for an object hidden in one of several locations with different characteristics is a classical problem solved by Blackwell, as reported by [14], and extended by Ross [15,16] to an optimal search and stop problem. The classical result was also obtained by Gittins [17] within the context of his well-known Gittins index. Gittins and Roberts analysed this model as a search game, the solution of which is complicated even for only two locations [18,19]. In this game, the hider would like to choose the hiding probabilities so as to make both locations equally attractive for the searcher, but this is a difficult task for an immobile hider. For example, if the two locations were equally attractive for the searcher at the first step but the hider was not found at that step, then the posterior probability would have made the unvisited location distinctly more favourable to visit next. Thus, the hider cannot make both locations equally attractive all the time. What the hider can achieve is an appropriate balance so that during the search process both locations will be as close to equally attractive as possible. Discrete search games of this and other types have also been analysed in [20–23].
Analysing real predator–prey strategies using elements of game theory as described above has been carried out on a few occasions. It has been applied more to defence by prey than to attacks by predators [24,25]. The vast majority of studies in this area deal with pursuit–evasion games in a featureless environment, from hunting dragonflies and bats in the air [26,27] to sighthound dogs on land [28], and fish and copepods in water [29]. Pursuit–evasion games, in their pure form, assume that complete information is available to both antagonists; both have full visibility, for example. Most behavioural studies on predator attack strategies deal with this kind of game, as reviewed by Alpern et al. [30]. Search games in their pure form assume no knowledge of the other party's whereabouts. However, partial information is sometimes available, in which case the game is referred to as a ‘noisy’ search game. The information enables one or both players to ‘hear’ the other and gain partial information about the other's movement pattern, but without full information about its location [30–32]. Sit-and-wait strategies, which have been well studied for predators [33,34], are another form of search game in which the predator has partial information about the prey, whereas the prey has virtually no information about the predator's position and movements. Two biological systems, one involving parasitic wasps attacking leaf-mining moths and the other involving wolf spiders attacking wood crickets, have been analysed as search games [35,36]. There is, however, still no model encompassing both types of game, search games and pursuit–evasion games.
We have combined these two types of game, which until now have been developed separately owing to differing assumptions about the level of information available to the two players; we thus developed a single model to represent the complex interactions as they unfold in reality. We use a succession of a search-and-hide phase followed by a pursuit–escape phase, based on multiple locations, and identify the optimal strategies for both players. The hider has a varying probability of being captured depending on where it is. The search effort determines which of the two types of game makes the largest contribution to the ultimate capture probability. We define a matrix game in which a searcher looks at a fixed number of discrete locations only once each, searching for a hider, which can escape with varying probabilities according to its location. The value of the game is the overall probability of capture after k looks. The optimal search and hiding strategies are described. If a searcher looks only once into any of the locations, an optimal hider chooses its hiding place randomly so as to make all locations equally attractive. This optimal strategy remains true as long as the number of looks is below an easily calculated threshold; however, above this threshold, the optimal strategy for the hider is to choose the fixed location with the highest probability of escaping once spotted. Above the threshold, the emphasis thus shifts from the search-and-hide phase to the pursuit–escape phase. An extension to a random number of looks and to repeated games is proposed, and the implications in terms of giving-up time by the searcher are considered.
2. Searching among heterogeneous locations
We analyse the following model. A predator (searcher) looks for a prey (hider) in a search space consisting of n locations. The hider chooses a location and the searcher inspects k different locations, where k is a parameter of the game (the ‘giving-up time’ for the continuous version). If the predator visits a location i where the prey is hiding, capture is not certain but occurs with probability p_{i}, and the game ends. This probability can originate from the second phase of the game (pursuit–evasion) and represents the probability of capture in that phase.
We assume n locations as possible hiding places for the prey. For each location, i = 1, 2, … , n, the probability of capture is p_{i}. For convenience, we assume that the locations are ordered so that p_{1} ≤ p_{2} ≤ … ≤ p_{n}.
The number of (different) locations, k, that the searcher inspects before it gets ‘tired’ is a parameter of the problem, 1 ≤ k ≤ n.
Hiding strategy (mixed): a vector of hiding probabilities (h_{1}, h_{2}, … , h_{n}), where h_{i} is the probability that the hider hides at location i. Thus, h_{i} ≥ 0 and Σ_{i=1}^{n} h_{i} = 1.
Search strategy: a pure search strategy is a set {S_{1}, S_{2}, … , S_{k}} of k different locations that are visited.
A mixed search strategy is a probabilistic choice of these sets.
We will use an equivalent definition for a mixed search strategy.
Definition 2.1.
A mixed search strategy is a vector of probabilities R = (r_{1}, r_{2}, … , r_{n}), where r_{i} is the probability that the searcher visits location i during the k visits, so that 0 ≤ r_{i} ≤ 1 and Σ_{i=1}^{n} r_{i} = k. (2.1)
For any vector R, a possible way to construct the corresponding mixed search strategy is the following simple algorithm.

— Arrange n non-overlapping intervals, L_{1}, L_{2}, … , L_{n}, where L_{i} is associated with location i and has length r_{i}, in the line segment [1, k + 1]. These intervals will be denoted L-intervals.

— Take a sample of one observation, θ, from the uniform distribution in [0, 1].

— Construct the following k points in the line segment [1, k + 1]: {j + θ, j = 1, 2, … , k}, and for each j = 1, 2, … , k pick the L-interval containing the point j + θ. The k locations associated with these L-intervals will be visited by the searcher.
We now show that for each location i = 1, 2, … , n, the probability that the interval L_{i} (i.e. location i) will be chosen by the algorithm is indeed r_{i}. Let L_{i} = [j + a, j + a + r_{i}], where 0 ≤ a ≤ 1, for some j, and consider two cases.
If a + r_{i} ≤ 1, then location i will be chosen if and only if a ≤ θ ≤ a + r_{i}, which occurs with probability r_{i}.
If a + r_{i} > 1, then location i will be chosen if and only if a ≤ θ ≤ 1 or 0 ≤ θ ≤ a + r_{i} − 1, which also occurs with probability r_{i}.
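The three steps above translate into a short program; the following is a minimal Python sketch (the function name `draw_visits` and the empirical check of the marginals are ours; locations are 0-indexed in the code):

```python
import bisect
import random

def draw_visits(r, rng=random):
    """Systematic-sampling implementation of a mixed search strategy:
    returns k distinct locations such that location i is visited with
    marginal probability r[i].  Requires 0 <= r[i] <= 1 and that
    sum(r) equals the integer k."""
    k = round(sum(r))
    # Lay the L-intervals end to end on [1, k + 1]; interval i is
    # [starts[i], starts[i] + r[i]).
    starts, s = [], 1.0
    for ri in r:
        starts.append(s)
        s += ri
    theta = rng.random()          # one uniform draw drives all k picks
    visits = []
    for j in range(1, k + 1):
        # index of the L-interval containing the point j + theta
        visits.append(bisect.bisect_right(starts, j + theta) - 1)
    return visits

# Empirical check of the marginals for a vector r summing to k = 2.
r = [0.5, 0.5, 0.6, 0.4]
rng = random.Random(0)
N = 20_000
counts = [0] * len(r)
for _ in range(N):
    for i in draw_visits(r, rng):
        counts[i] += 1
freqs = [c / N for c in counts]   # should be close to r
```

Because consecutive points are spaced exactly one unit apart and every L-interval has length at most one, no interval can contain two points, so the k visited locations are always distinct.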
This algorithm is in the spirit of systematic sampling [37] and Monte Carlo simulation techniques. An alternative approach, suggested by Steve Alpern, is to use the fact that the set of all R is convex and compact so that any point in this set is a convex combination of the extreme points of the set. However, this direction leads to a large linear programming problem with about n^{k} variables for producing the corresponding mixed search strategy.
The value of the game (the probability of capture) is denoted by v_{k}.
Let us first consider a hiding strategy h*, which makes all the locations equally attractive to search for k = 1: h*_{i} = λ/p_{i}, i = 1, 2, … , n. (2.2)
By (2.2), λ satisfies Σ_{i=1}^{n} λ/p_{i} = 1, i.e. λ = 1/(Σ_{i=1}^{n} 1/p_{i}). (2.3)
The following two statements are easy to prove:

— for k = 1, the optimal hiding strategy is h* and v = λ and

— for k = n, the optimal hiding strategy is h_{1} = 1 and v = p_{1}.
In general, if k < p_{1}/λ, then the optimal hiding strategy is h* and v_{k} = kλ. We will show that the hider can make all the locations equally attractive to the searcher and that the searcher can make all the locations equally attractive for hiding, i.e. for each i, the searcher visits i with probability proportional to 1/p_{i}. We now present a formal proof of this result.
Consider a search plan that involves visiting each location i with probability r_{i} = kλ/p_{i}.
This search strategy guarantees kλ because for each hiding location i, the probability of capture is p_{i} × r_{i} = kλ.
However, the hiding strategy h* guarantees a probability of capture of at most kλ against any pure search strategy S_{1}, S_{2}, … , S_{k} because for each visit S_{i} the probability of capture is h*_{S_{i}}p_{S_{i}} = λ, so the probability of capture by one of these visits is the sum of the individual probabilities, i.e. kλ.
The above optimal strategies are unique: if r_{j} < kλ/p_{j} for some j, then the hider can achieve a capture probability r_{j}p_{j} < kλ by hiding at j. Hence, an optimal search strategy has to satisfy r_{i} ≥ kλ/p_{i} for all i, which implies equality by (2.1).
Similarly, if h_{j} > λ/p_{j} for some j, then the searcher can get more than kλ by using a vector R with r_{i} = (k − d)λ/p_{i} for all i ≠ j and r_{j} = (k − d)λ/p_{j} + d, where d is a small positive number. Hence, an optimal hiding strategy has to satisfy h_{i} ≤ λ/p_{i} for all i, which implies equality.
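The first deviation argument can be illustrated numerically; a minimal sketch using the hypothetical values p = (0.5, 0.5, 1, 1) and k = 2 (locations are 0-indexed, so location 0 here is location 1 in the text):

```python
p = [0.5, 0.5, 1.0, 1.0]
k = 2
lam = 1.0 / sum(1.0 / pi for pi in p)    # 1/6, so k * lam = 1/3
r_opt = [k * lam / pi for pi in p]       # (2/3, 2/3, 1/3, 1/3)

# If the searcher undershoots at location 0, i.e. r_0 < k * lam / p_0,
# the hider responds by hiding there and the capture probability
# drops below the value k * lam.
r_dev = list(r_opt)
r_dev[0] = 0.6                           # below 2/3
capture_if_hide_at_0 = r_dev[0] * p[0]   # 0.3 < 1/3
```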
If k ≥ p_{1}/λ, then the hider can guarantee a capture probability of at most p_{1} by always hiding at location 1. The optimality of this strategy is proved as follows:
The searcher can guarantee at least p_{1} by choosing r_{1} = 1 ≤ kλ/p_{1} and r_{i} ≥ min(kλ/p_{i}, 1) for all 2 ≤ i ≤ n. This is possible because Σ_{i=1}^{n} min(kλ/p_{i}, 1) ≤ Σ_{i=1}^{n} kλ/p_{i} = k.
As kλ ≥ p_{1}, a response of hiding at any location i ≥ 2 leads to a probability of capture ≥ p_{1}.
It is easy to see that for k > p_{1}/λ, optimal search strategies are not unique.
So we have proved:
Theorem 2.2.
If k < p_{1}/λ, then the optimal search strategy satisfies r_{i} = kλ/p_{i} and the optimal hiding strategy satisfies h_{i} = λ/p_{i} for all i = 1, 2, … , n.
If k ≥ p_{1}/λ, then an optimal search strategy satisfies r_{1} = 1 and r_{i} ≥ min(kλ/p_{i}, 1) for i = 2, … , n, and an optimal hiding strategy satisfies h_{1} = 1.
The value of the game satisfies v_{k} = min(kλ, p_{1}).
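The quantities in theorem 2.2 can be computed directly from p and k; a minimal Python sketch (the function name and the convention of returning no search vector above the threshold, where it is not unique, are ours):

```python
def optimal_strategies(p, k):
    """Optimal strategies of theorem 2.2, assuming the capture
    probabilities are sorted, p[0] <= p[1] <= ... <= p[n-1],
    and 1 <= k <= len(p)."""
    lam = 1.0 / sum(1.0 / pi for pi in p)    # eq. (2.3)
    if k < p[0] / lam:
        # below the threshold: every location is made equally attractive
        h = [lam / pi for pi in p]           # hiding probabilities
        r = [k * lam / pi for pi in p]       # search marginals
        v = k * lam
    else:
        # above the threshold: the hider commits to the safest location;
        # the optimal search vector is not unique, so none is returned
        h = [1.0] + [0.0] * (len(p) - 1)
        r = None
        v = p[0]
    return h, r, v
```

For example, with p = (0.5, 0.5, 1, 1) and k = 2 this returns h = (1/3, 1/3, 1/6, 1/6) and value 1/3.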
We now present a simple example that illustrates the result.
Example 2.3.
Assume n = 4 and that the p_{i} are (0.5, 0.5, 1, 1).
By (2.3), λ = 1/6 and p_{1}/λ = 3, so
for k = 1, 2, the optimal hiding strategy is h* = (1/3, 1/3, 1/6, 1/6), and
for k = 4, an optimal hiding strategy is to hide at location number 1 (or 2).
For k = 3, both types of strategies are optimal. This is because p_{1}/λ is an integer.
The value v_{k} for k = 1, 2, 3 and 4 is 1/6, 1/3, 1/2 and 1/2, respectively.
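These values follow from the equal attractiveness of the locations under h*; a brute-force check in Python (the enumeration of pure plans is ours):

```python
from itertools import combinations

p = [0.5, 0.5, 1.0, 1.0]
lam = 1.0 / sum(1.0 / pi for pi in p)    # 1/6, from eq. (2.3)
h_star = [lam / pi for pi in p]          # (1/3, 1/3, 1/6, 1/6)

values = []
for k in range(1, 5):
    # capture probability of every pure k-look plan against h*;
    # each visited location contributes h*_i * p_i = lam, so all
    # plans capture with probability exactly k * lam
    captures = [sum(h_star[i] * p[i] for i in S)
                for S in combinations(range(4), k)]
    values.append(min(k * lam, p[0]))    # value v_k of the game
```

The list `values` reproduces 1/6, 1/3, 1/2 and 1/2 for k = 1, 2, 3 and 4.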
Figure 1 describes the behaviour of the value as a function of k: the value increases linearly with k but becomes constant as soon as the threshold is reached. This behaviour is true in general, as stated in theorem 2.2.
3. Discussion
We start the discussion by commenting on two major assumptions in our model, the fixed number of looks and the lack of repetitions, and propose some modifications to increase the broadness of the model. The final paragraph puts the work within the larger context of behavioural ecology.
The searcher's strategy assumes that the number of looks into potential hiding sites, k, is fixed and known to both players. In reality, k might be a variable. The problem of k being a random variable with a distribution known only to the searcher is more complex and requires further analysis. If k < p_{1}/λ with high probability, then h* is optimal and v = kλ, whereas if k > p_{1}/λ with high probability, then hiding at location number 1 is optimal and v = p_{1}. However, the optimal strategy can, in general, be different from the strategies described above. For example, suppose the p_{i} are (0.5, 0.51, 1) and k is a random variable which can be either 1 or 3, each with a probability of 50%. Hiding according to h* is bad for k = 3, whereas hiding only at 1 is bad for k = 1. The optimal hiding strategy involves hiding at either 1 or 2 with a probability of about 1/2 at each of these locations and hiding at 3 with a small probability.
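The failure of both canonical strategies in this example can be checked by brute force, assuming the searcher learns the realised k and best-responds to the hiding vector; the candidate vector `mixed` below is our rough guess at the strategy just described (locations 0-indexed):

```python
from itertools import combinations

# Hypothetical data from the example above: three locations with
# p = (0.5, 0.51, 1) and k equal to 1 or 3 with probability 1/2 each.
p = [0.5, 0.51, 1.0]
ks = [1, 3]

def expected_capture(h):
    """Expected capture probability of the hiding vector h when the
    searcher knows h and the realised k and best-responds to both."""
    best = lambda k: max(sum(h[i] * p[i] for i in S)
                         for S in combinations(range(len(p)), k))
    return sum(best(k) for k in ks) / len(ks)

lam = 1.0 / sum(1.0 / pi for pi in p)
h_star = [lam / pi for pi in p]   # optimal when k is small
pure_1 = [1.0, 0.0, 0.0]          # optimal when k is large
# Roughly 1/2 at locations 1 and 2, a small probability at location 3.
mixed = [0.5, 0.49, 0.01]
```

Evaluating the three candidates shows that `mixed` yields a lower guaranteed capture probability than either h* or hiding purely at location 1.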
The second assumption, which sets our work apart from other work on searching different sites, is the absence of repetition. In all of the models of searching several locations described in the introduction, there is generally some probability of the searcher visiting the location at which the hider hides but not seeing it. Thus, the searcher may benefit from going back to a visited location more than once. By contrast, the assumption in the model we developed here is that the searcher always finds the hider if it is hiding at a visited location; however, the searcher does not necessarily catch the hider. Thus, each location is visited at most once. The search plan consists of a random sequence of sites to visit, which is set before the game starts; there is no update during the game itself. Its dynamical nature stems therefore from the two-phase succession of hide–search and pursuit–evasion phases. In reality, most interactions contain several such successions in series, and our model is the simplest prototype of such interactions. Fabre [38], for example, described in vivid detail the intermingled successions of hunting bouts by dauber wasps locating and pursuing their spider prey. The spider drops onto the ground from the flower it is sitting on and thereby escapes for a while. The wasp, even though disoriented at first, does not give up and starts exploring the surroundings with great care, sometimes at a substantial distance from the flower, before relocating its prey. If found again, the spider runs as fast as possible in an attempt to escape once more. The hunt ends sooner or later, usually with a wasp's sting in the spider's body. Our model can be extended to cover such cases as a repeated game with a constant sum of payoffs [39]. Let us suppose that a searcher is looking for a hider in a larger region consisting of distinct clumps of locations, called patches in the behavioural ecology literature.
During the k looks among the locations within a single patch, one of three events can occur. First, if the searcher does not find the hider, then the game ends with zero payoff to the searcher and a payoff of one to the hider. Second, if the searcher finds the hider and catches it, then the game ends with a payoff of one to the searcher and a payoff of zero to the hider. Finally, if the searcher finds the hider but the hider escapes to another patch, then the process restarts.
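If the three per-patch outcome probabilities are fixed from round to round, the overall capture probability solves a simple renewal equation; a minimal sketch (the probabilities `q_catch` and `q_escape` are hypothetical placeholders, not quantities derived in the text):

```python
def overall_capture(q_catch, q_escape):
    """Overall probability P that the searcher eventually captures the
    hider when each patch round ends in capture (q_catch), escape to a
    fresh patch (q_escape), or a clean getaway (the remainder).
    P solves the renewal equation P = q_catch + q_escape * P."""
    assert 0.0 <= q_catch and 0.0 <= q_escape < 1.0
    assert q_catch + q_escape <= 1.0
    return q_catch / (1.0 - q_escape)
```

For instance, with a 0.3 chance of capture and a 0.2 chance of escape to a new patch per round, the searcher eventually captures the hider with probability 0.3/0.8 = 0.375.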
In our model, the hider's optimal strategy is simple because it consists of a single choice. If the number of looks is small, the hider should make all locations equally attractive. If it is large, it should hide at the location with the smallest probability of capture once spotted. The strategy of the searcher is more complex, because it consists of a plan composed of looking into a random sequence of locations selected by a rather involved selection mechanism, as described in theorem 2.2. An optimal searcher should not waste time by looking more often than needed to reach the threshold beyond which the capture probability stays constant. The increase in the chance of capturing the hider is linear before that threshold, and the overall relationship thus approaches the law of diminishing returns, well known in optimal foraging theory in behavioural ecology [40–42]. The deeper relationship between our minimax results based on game theory and the one-sided search theory developed for stationary prey items will be considered in a sequel work.
 Received January 19, 2014.
 Accepted February 20, 2014.
 © 2014 The Author(s) Published by the Royal Society. All rights reserved.