SEARCH MODELS OF A MOVING TARGET
Adversarial submarine activity in the Atlantic has steadily intensified over the past few years. Moreover, strategic adversaries have developed sophisticated, stealthy submarines that are much more difficult to locate. This heightened activity, coupled with advanced platforms, has allowed the United States' adversaries to challenge its dominance in the underwater domain. Although extensive research has been performed on optimized search strategies using Bayesian search methods, most methodologies in the open literature focus on searching for stationary objects rather than on a Blue submarine searching for a moving Red submarine. Thus motivated, we develop a model of an enemy submarine whose goal is to avoid detection. As search effort is expended, a posterior probability distribution for the enemy submarine's location is calculated from the negative search results. We present a methodology for finding a search pattern that attempts to maximize the probability of detection in a Bayesian framework utilizing Markovian properties. Specifically, we study three different running-window methods: a simple network optimization model, a network optimization model that replans the entire route after every time period, and a dynamic program that looks only two time periods ahead. This project was funded in part by the NPS Naval Research Program. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
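The negative-information update described above can be sketched in a few lines. This is a minimal illustration, assuming a small one-dimensional grid of cells, a nearest-neighbor drift model for the Red submarine, and a glimpse detection probability of 0.8; none of these numbers or names come from the report itself.

```python
# Illustrative Bayesian update of the target-location belief after each
# unsuccessful search (all parameters below are assumptions, not the paper's).
N = 10                                   # search cells along a track
belief = [1.0 / N] * N                   # uniform prior over cells
P_DETECT = 0.8                           # glimpse detection prob. in searched cell

def transition_row(i):
    """Target drifts to an adjacent cell or stays, uniformly at random."""
    nbrs = [j for j in (i - 1, i, i + 1) if 0 <= j < N]
    return {j: 1.0 / len(nbrs) for j in nbrs}

def negative_update(belief, cell):
    """Posterior given the searched cell came up empty (Bayes' rule)."""
    post = list(belief)
    post[cell] *= (1.0 - P_DETECT)       # discount for the negative result
    z = sum(post)
    return [x / z for x in post]

def markov_step(belief):
    """Propagate the belief one time period through the target's motion model."""
    out = [0.0] * N
    for i, b in enumerate(belief):
        for j, pr in transition_row(i).items():
            out[j] += b * pr
    return out

for t in range(5):
    cell = max(range(N), key=lambda i: belief[i])   # myopic: most likely cell
    belief = markov_step(negative_update(belief, cell))
```

The myopic most-likely-cell rule here is only a placeholder; the report's contribution is precisely the routing models that replace it.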
Area Search for a Moving Target
When a target has an a priori existence in an area A and a fraction phi of the area is searched, there are well-known expressions for the detection probability when the target is stationary. In this paper the detection probability is worked out for the more important case in which the target is in motion. It must be assumed, however, that the target remains in the area in which it has a priori existence, by making suitable changes in its direction of motion. The detection probability depends on the ratio of the speeds of the target and the searcher in a complex way. The computation in general requires a computer programme, but analytical expressions can be derived approximately for phi << 1. The calculated probability is less than phi, which is the detection probability for continuous search for a stationary target, and more than the value for a random search.
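For reference, the two stationary-target benchmarks that bracket the paper's moving-target probability can be written down directly from classical search theory (phi = 0.2 below is an arbitrary illustrative value):

```python
import math

# Stationary-target benchmarks for a searched area fraction phi:
phi = 0.2
p_exhaustive = phi                   # systematic sweep of a fraction phi
p_random = 1.0 - math.exp(-phi)      # Koopman's random-search formula

# The abstract states the moving-target probability lies strictly between
# the random-search value and phi.
```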
Line Search for an Oblivious Moving Target
Consider search on an infinite line involving an autonomous robot starting at the origin of the line and an oblivious moving target at initial distance d from it. The robot can change direction and move anywhere on the line with constant maximum speed 1, while the target moves along the line with constant speed v but is unable to change its speed or direction. The goal is for the robot to catch up to the target in as little time as possible. The classic case where v = 0 and the target's initial distance d is unknown to the robot is the well-studied "cow-path problem". Alpern and Gal gave an optimal algorithm for the case where a target with unknown initial distance is moving away from the robot with known speed v. In this paper we design and analyze search algorithms for the remaining possible knowledge situations, namely, when d and v are known, when v is known but d is unknown, when d is known but v is unknown, and when both d and v are unknown. Furthermore, for each of these knowledge models we consider separately the case where the target is moving away from the origin and the case where it is moving toward the origin. We design algorithms and analyze competitive ratios for all eight cases above. The resulting competitive ratios are shown to be optimal when the target is moving towards the origin, as well as when v is known and the target is moving away from the origin.
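The v = 0 special case mentioned above (the cow-path problem) is easy to simulate. The sketch below implements the classic doubling strategy under the usual assumptions of unit robot speed and a stationary target at unknown distance and side; its competitive ratio never exceeds the well-known optimal bound of 9.

```python
def doubling_search_time(d, side):
    """Time for the doubling (cow-path) strategy to reach a stationary
    target at distance d > 0 on the given side (+1 or -1), assuming unit
    robot speed so that travelled distance equals elapsed time."""
    t, pos, step, cur = 0.0, 0.0, 1.0, +1
    while True:
        if cur == side and step >= d:
            # the target lies on this leg: walk straight to it and stop
            return t + abs(side * d - pos)
        t += abs(cur * step - pos)    # walk to the next turning point
        pos = cur * step
        step *= 2.0                   # double the excursion each turn
        cur = -cur                    # and switch sides
```

For example, a target at distance 3 on the positive side is reached at time 9, against an offline optimum of 3.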
On optimal search for a moving target
The work of this thesis is concerned with the following problem and its derivatives. Consider the problem of searching for a target which moves randomly between n sites, the movement being modelled by an n-state Markov chain. One of the sites is searched at each time t = 1, 2, … until the target is found. Associated with each search of site i is an overlook probability a_i and a cost C_i. Our aim is to determine the policy that finds the target with minimal expected cost. Notably, in the two-site case we examine the conjecture that, letting p denote the probability that the target is at site 1, an optimal policy can be defined in terms of a threshold probability P* such that site 1 is searched if and only if p ≥ P*. We show this conjecture to be correct (i) for general C_1 ≠ C_2 when the overlook probabilities a_i are small, and (ii) for general a_i for a large range of transition laws for the movement. We also derive some properties of the optimal policy for the problem on n sites in the no-overlook case, and for the case where each site has the same a_i and C_i. We also examine related problems, such as ones in which we have the ability to divide the available search resources between different regions, and a couple of machine-replacement problems.
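The conjectured threshold policy and the accompanying belief update can be sketched directly. The values for the overlook probabilities a_i, costs C_i, transition law, and threshold P* below are illustrative assumptions, not taken from the thesis.

```python
# Illustrative two-site model (all numbers are assumptions):
a = [0.2, 0.3]                   # a_i: prob. of overlooking the target at site i
C = [1.0, 1.5]                   # C_i: cost of one search of site i
M = [[0.7, 0.3], [0.4, 0.6]]     # M[i][j] = P(target moves from site i to j)
P_STAR = 0.55                    # hypothetical threshold probability P*

def threshold_policy(p):
    """Search site 1 iff P(target at site 1) >= P*; sites are 0-indexed,
    so return value 0 means 'site 1'."""
    return 0 if p >= P_STAR else 1

def update_belief(p, searched):
    """Bayes update after an unsuccessful search, then one Markov step.
    `p` is the current probability that the target is at site 1."""
    q = [p, 1.0 - p]
    q[searched] *= a[searched]            # target was overlooked w.p. a_i
    z = q[0] + q[1]
    q = [x / z for x in q]                # renormalize
    return q[0] * M[0][0] + q[1] * M[1][0]   # target moves
```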
Optimal Search for a Moving Target with the Option to Wait
We investigate the problem in which an agent has to find an object that moves between two locations according to a discrete Markov process (see Pollock, 1970). At every period, the agent has three options: searching left, searching right, and waiting. We assume that waiting is costless whereas searching is costly. Waiting can be useful because it may induce a more favorable probability distribution over the two locations in the next period. We find an essentially unique (nearly) optimal strategy, and prove that it is characterized by two thresholds (as conjectured by Weber, 1986). We show, moreover, that it can never be optimal to search the location with the lower probability of containing the object. The latter result is far from obvious and is in clear contrast with the example in Ross (1983) for the model without waiting. We also analyze the case of multiple agents. This makes the problem more strategic, since the agents not only compete against time but also against each other in finding the object. We find different kinds of subgame-perfect equilibria, possibly containing strategies that are not optimal in the one-agent case. We compare the various equilibria in terms of cost-effectiveness.
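The single-agent model above can be explored numerically with value iteration over a discretized belief space. The sketch below is an assumption-laden illustration: it uses made-up transition probabilities, assumes perfect detection, and adds a tiny positive waiting cost purely so that value iteration converges from zero (in the paper, waiting is exactly costless).

```python
# Value iteration for the two-location search-or-wait model (illustrative).
M = [[0.8, 0.2], [0.3, 0.7]]    # M[i][j] = P(object moves i -> j); 0 = left
search_cost = 1.0               # cost of one search
wait_cost = 0.01                # tiny regularizing cost (assumption, see above)
grid = [i / 100 for i in range(101)]   # belief p = P(object at left location)

def nearest(p):
    return min(grid, key=lambda g: abs(g - p))

def after_miss(p, searched):
    """Belief after an unsuccessful (perfect-detection) search, then a move."""
    q = [p, 1.0 - p]
    q[searched] = 0.0                       # perfect detection: object not there
    z = q[0] + q[1]
    q = [x / z for x in q] if z > 0 else [0.5, 0.5]
    return q[0] * M[0][0] + q[1] * M[1][0]

# Precompute successor beliefs on the grid.
succ_wait = {p: nearest(p * M[0][0] + (1 - p) * M[1][0]) for p in grid}
succ = {(p, s): nearest(after_miss(p, s)) for p in grid for s in (0, 1)}

V = {p: 0.0 for p in grid}
for _ in range(200):
    V = {p: min(wait_cost + V[succ_wait[p]],               # wait
                search_cost + (1 - p) * V[succ[(p, 0)]],   # search left
                search_cost + p * V[succ[(p, 1)]])         # search right
         for p in grid}
```

When the belief is certain (p = 0 or p = 1), a single search suffices, so the value at the endpoints equals one search cost.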
Searching with Measurement Dependent Noise
Consider a target moving with a constant velocity on a unit-circumference
circle, starting from an arbitrary location. To acquire the target, any region
of the circle can be probed for its presence, but the associated measurement
noise increases with the size of the probed region. We are interested in the
expected time required to find the target to within some given resolution and
error probability. For a known velocity, we characterize the optimal tradeoff
between time and resolution (i.e., maximal rate), and show that in contrast to
the case of constant measurement noise, measurement-dependent noise incurs a
multiplicative gap between adaptive search and non-adaptive search. Moreover,
our adaptive scheme attains the optimal rate-reliability tradeoff. We further
show that for optimal non-adaptive search, accounting for an unknown velocity
incurs a factor of two in rate.
Comment: Information Theory Workshop (ITW) 201
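The qualitative effect of measurement-dependent noise can be illustrated with a toy model. Everything below, including the linear noise law, the static target, and the majority-vote repetition, is an assumption made for illustration; it is not the paper's scheme or its rate analysis.

```python
import random

random.seed(0)

def noisy_probe(lo, hi, x_star):
    """Ask 'is the target in [lo, hi)?'; the answer flips with a probability
    that grows linearly with the probed region's size (assumed noise law)."""
    flip = 0.05 + 0.3 * (hi - lo)
    truth = lo <= x_star < hi
    return truth if random.random() > flip else not truth

def search(x_star, resolution=1 / 64, votes=15):
    """Adaptive bisection with majority voting to fight the noise: larger
    early probes are noisier, so each probe is repeated `votes` times."""
    lo, hi = 0.0, 1.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        yes = sum(noisy_probe(lo, mid, x_star) for _ in range(votes))
        if yes > votes // 2:
            hi = mid          # majority says: left half
        else:
            lo = mid          # majority says: right half
    return (lo + hi) / 2
```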
A competitive search game with a moving target
We introduce a discrete-time search game in which two players compete to find an invisible object first. The object moves according to a time-varying Markov chain on finitely many states. The players are active in turns. At each period, the active player chooses a state. If the object is there, then he finds the object and wins. Otherwise, the object moves and the game enters the next period. We show that this game admits a value, and for any error term epsilon > 0, each player has a pure (subgame-perfect) epsilon-optimal strategy. Interestingly, a 0-optimal strategy does not always exist. We derive results on the analytic and structural properties of the value and the epsilon-optimal strategies. We devote special attention to the important time-homogeneous case, where we show that (subgame-perfect) optimal strategies exist if the Markov chain is irreducible and aperiodic.
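A myopic version of the game is easy to simulate: both players probe the state that is currently most likely under the common posterior. The chain and numbers below are illustrative assumptions, and greedy play is generally not the epsilon-optimal strategy whose existence the paper proves.

```python
# Toy simulation of the alternating search game under myopic (greedy) play.
P = [[0.1, 0.6, 0.3],
     [0.5, 0.2, 0.3],
     [0.3, 0.3, 0.4]]            # irreducible, aperiodic 3-state chain (assumed)
belief0 = [1 / 3, 1 / 3, 1 / 3]  # common prior over the object's location

def play_greedy(belief, P, horizon=50):
    """Return P(the first-moving player finds the object within `horizon`)."""
    win1, alive = 0.0, 1.0       # `alive` = prob. the object is still unfound
    for turn in range(horizon):
        s = max(range(len(belief)), key=lambda i: belief[i])
        hit = belief[s]
        if turn % 2 == 0:        # even turns belong to player 1
            win1 += alive * hit
        alive *= 1.0 - hit
        cond = list(belief)
        cond[s] = 0.0            # condition on the failed probe
        z = sum(cond)
        cond = [x / z for x in cond]
        belief = [sum(cond[i] * P[i][j] for i in range(len(P)))
                  for j in range(len(P))]      # the object moves one step
    return win1

w = play_greedy(belief0, P)
```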
Toward Smart Moving Target Defense for Linux Container Resiliency
This paper presents ESCAPE, an informed moving target defense mechanism for
cloud containers. ESCAPE models the interaction between attackers and their
target containers as a "predator searching for a prey" search game. Live
migration of Linux-containers (prey) is used to avoid attacks (predator) and
failures. The entire process is guided by a novel host-based
behavior-monitoring system that seamlessly monitors containers for indications
of intrusions and attacks. To evaluate ESCAPE's effectiveness, we simulated the
attack avoidance process based on a mathematical model mimicking the
prey-vs-predator search game. Simulation results show high container survival
probabilities with minimal added overhead.
Comment: Published version is available on IEEE Xplore at
http://ieeexplore.ieee.org/document/779685