231 research outputs found

    Machine Learning for Ad Publishers in Real Time Bidding

    Multiple-Target Tracking in Complex Scenarios

    In this dissertation, we develop computationally efficient algorithms for multiple-target tracking (MTT) in complex scenarios. For each of these scenarios, we develop measurement and state-space models, and then exploit the structure in these models to propose efficient tracking algorithms. In addition, we address design issues such as sensor selection and resource allocation.

    First, we consider MTT when the targets themselves are moving in a time-varying multipath environment. We develop a sparse measurement model that allows us to exploit the inherent joint delay-Doppler diversity offered by the environment. We then reformulate MTT as a block-support recovery problem using the sparse measurement model. We exploit the structure of the dictionary matrix to develop a computationally efficient block-support recovery algorithm (and thereby a multiple-target tracking algorithm) under the assumption that the channel state describing the time-varying multipath environment is known. We also derive an upper bound on the overall probability of wrongly identifying the support of the sparse signal. We then relax the assumption that the channel state is known and develop a new particle filter, the Multiple Rao-Blackwellized Particle Filter (MRBPF), to jointly estimate both the target and the channel states. We also compute the posterior Cramér-Rao bound (PCRB) on the estimates of the target and the channel states, and use the PCRB to find a suitable subset of antennas to be used for transmission in each tracking interval, as well as the power transmitted by these antennas.

    Second, we consider the problem of tracking an unknown number and types of targets using a multi-modal sensor network. In a multi-modal sensor network, different quantities associated with the same state are measured by sensors of different kinds, so an efficient method is needed to combine the diverse information measured by each sensor. We first develop a Hierarchical Particle Filter (HPF) to estimate the unknown state from the multi-modal measurements for a special class of problems that can be modeled hierarchically. We then cast our tracking problem in this hierarchical form and use the proposed HPF for joint initiation, termination, and tracking of multiple targets. The multi-modal data consist of measurements collected from a radar, an infrared camera, and a human scout. We also propose a unified framework for multi-modal sensor management that comprises sensor selection (SS), resource allocation (RA), and data fusion (DF). Our approach is inspired by the trading behavior of economic agents in commercial markets: we model the sensors and the sensor manager as economic agents, and the interaction among them as a double-sided market with both consumers and producers. We propose an iterative double-auction mechanism for computing the equilibrium of such a market and relate the equilibrium point to the solutions of SS, RA, and DF.

    Third, we address the MTT problem in the presence of the data-association ambiguity that arises due to clutter. Data association is the problem of assigning a measurement to each target. We treat data association and state estimation as separate subproblems. We develop a game-theoretic framework for data association in which we model each tracker as a player and the set of measurements as its strategies. We develop utility functions for each player and then use a regret-based learning algorithm to find the correlated equilibrium of this game. The game-theoretic approach allows us to associate measurements to all the targets simultaneously. We then run particle filtering on the reduced-dimensional state of each target independently.
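    To make the regret-based learning step concrete, the sketch below implements Hart and Mas-Colell's regret-matching procedure for a two-player game, with each player standing in for a tracker choosing among candidate measurements; the empirical distribution of joint play converges to the set of correlated equilibria. The payoff matrices and all names here are illustrative stand-ins, not the utility functions developed in the dissertation.

```python
import numpy as np

def regret_matching_ce(u0, u1, n_rounds=5000, seed=0):
    """Hart & Mas-Colell regret matching for a two-player game.

    u0[j, k], u1[j, k]: payoffs to player 0 and player 1 when player 0 plays j
    and player 1 plays k (illustrative stand-ins for tracker utilities).
    Returns the empirical distribution of joint play, which converges to the
    set of correlated equilibria as n_rounds grows.
    """
    rng = np.random.default_rng(seed)
    n0, n1 = u0.shape
    # mu is chosen large enough that switching probabilities sum to at most 1.
    mu = [2.0 * n0 * max(np.abs(u0).max(), 1e-9),
          2.0 * n1 * max(np.abs(u1).max(), 1e-9)]
    # D[i][j, k]: cumulative regret of player i for having played j instead of k.
    D = [np.zeros((n0, n0)), np.zeros((n1, n1))]
    counts = np.zeros((n0, n1))
    a = [rng.integers(n0), rng.integers(n1)]

    for t in range(1, n_rounds + 1):
        counts[a[0], a[1]] += 1
        # Update regrets given the realized joint action.
        D[0][a[0], :] += u0[:, a[1]] - u0[a[0], a[1]]
        D[1][a[1], :] += u1[a[0], :] - u1[a[0], a[1]]
        # Each player switches away from its last action with probability
        # proportional to positive average regret, and stays otherwise.
        for i, n in enumerate((n0, n1)):
            j = a[i]
            p = np.maximum(D[i][j], 0.0) / (t * mu[i])
            p[j] = 0.0
            p[j] = max(1.0 - p.sum(), 0.0)
            a[i] = rng.choice(n, p=p / p.sum())
    return counts / counts.sum()

# Toy example: two trackers sharing two candidate measurements, where picking
# different measurements is mutually preferable (hypothetical payoffs).
u0 = np.array([[0.0, 1.0], [1.0, 0.2]])
u1 = np.array([[0.0, 1.0], [1.0, 0.2]])
print(regret_matching_ce(u0, u1))
```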

    First IJCAI International Workshop on Graph Structures for Knowledge Representation and Reasoning (GKR@IJCAI'09)

    The development of effective techniques for knowledge representation and reasoning (KRR) is a crucial aspect of successful intelligent systems. Different representation paradigms, as well as their use in dedicated reasoning systems, have been extensively studied in the past. Nevertheless, new challenges, problems, and issues have emerged in the context of knowledge representation in Artificial Intelligence (AI), involving the logical manipulation of increasingly large information sets (see, for example, the Semantic Web, bioinformatics, and so on). Improvements in storage capacity and performance of computing infrastructure have also affected the nature of KRR systems, shifting their focus towards representational power and execution performance. KRR research is therefore faced with the challenge of developing knowledge representation structures optimized for large-scale reasoning. This new generation of KRR systems includes graph-based knowledge representation formalisms such as Bayesian Networks (BNs), Semantic Networks (SNs), Conceptual Graphs (CGs), Formal Concept Analysis (FCA), CP-nets, and GAI-nets, all of which have been successfully used in a number of applications. The goal of this workshop is to bring together researchers involved in the development and application of graph-based knowledge representation formalisms and reasoning techniques.

    Path Data in Marketing: An Integrative Framework and Prospectus for Model Building

    Many data sets, from different and seemingly unrelated marketing domains, all involve paths—records of consumers' movements in a spatial configuration. Path data contain valuable information for marketing researchers because they describe how consumers interact with their environment and make dynamic choices. As data collection technologies improve and researchers continue to ask deeper questions about consumers' motivations and behaviors, path data sets will become more common and will play a more central role in marketing research. To guide future research in this area, we review the previous literature, propose a formal definition of a path (in a marketing context), and derive a unifying framework that allows us to classify different kinds of paths. We identify and discuss two primary dimensions (characteristics of the spatial configuration and the agent) as well as six underlying subdimensions. Based on this framework, we cover a range of important operational issues that should be taken into account as researchers begin to build formal models of path-related phenomena. We close with a brief look into the future of path-based models, and a call for researchers to address some of these emerging issues.
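    As a rough illustration only (the paper's formal definition of a path is more general, and the field names below are hypothetical), path data of the kind discussed above can be stored as an ordered sequence of timestamped positions within a spatial configuration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PathPoint:
    timestamp: float               # seconds since the start of the trip/session
    location: Tuple[float, float]  # coordinates in the spatial configuration
    zone: str = ""                 # e.g., a store zone or web page (hypothetical attribute)

@dataclass
class Path:
    agent_id: str                          # the consumer (agent) being tracked
    configuration: str                     # e.g., "supermarket" or "website"
    points: List[PathPoint] = field(default_factory=list)

    def duration(self) -> float:
        """Total elapsed time along the path."""
        return self.points[-1].timestamp - self.points[0].timestamp if self.points else 0.0

# Example: a short in-store path for one shopper.
p = Path(agent_id="shopper-17", configuration="supermarket", points=[
    PathPoint(0.0, (0.0, 0.0), "entrance"),
    PathPoint(45.0, (12.5, 3.0), "produce"),
    PathPoint(130.0, (20.0, 8.0), "checkout"),
])
print(p.duration())  # 130.0
```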

    Imitation learning for combinatorial optimization and contact tracing

    The field of Imitation Learning (IL) has seen significant progress in recent years as researchers have applied this machine learning technique to domains such as robotics, self-driving cars, healthcare, and game playing. Each domain has contributed to the advancement of the field by developing and applying new methods to the problems specific to that domain. In this thesis, we focus on IL in two domains that pose their own unique challenges. The first application involves learning to imitate a highly accurate heuristic for mixed-integer linear programming (MILP) solvers, which, although precise, is impractical because of its computational cost. The second application involves the development of an IL framework to accurately predict the infectiousness of individuals through a smartphone application built on the newly developed Proactive Contact Tracing (PCT) framework, which overcomes the limitations of conventional contact tracing methods. We design our IL frameworks based on the dynamics of a manageable environment (e.g., a simulator), with the goal of transferring the learned models to larger, unseen environments. Developing these frameworks requires addressing several challenges: incorporating domain-specific inductive biases, ensuring that models are robust to distribution shifts, and designing models that are lightweight enough for deployment. By addressing these challenges, we hope to contribute not only to the advancement of IL but also to the domains in which it is applied, bringing new and improved solutions to these fields.

    Specifically, to imitate the expert heuristic of MILP solvers, we identified and addressed two key shortcomings of the existing IL framework. First, the proposed Graph Neural Networks (GNNs) are highly accurate but computationally expensive, and their runtime performance degrades in the absence of GPUs; this setting arises in practice because MILP solvers are CPU-only. To address this, we proposed novel architectures that trade off the expressivity of GNNs against the inexpensive computations of multi-layer perceptrons, along with training protocols that make the models robust to distribution shifts. Models trained with these techniques achieved up to a 26% improvement in runtime. Second, the existing framework cannot capture the dependence between the observations used to train the GNNs. Our research revealed a "lookback" phenomenon that occurs frequently in the expert heuristic: the best decision at a child node is often the second-best decision at its parent node. We proposed a new loss function that incorporates this phenomenon and imitates the heuristic more accurately (a rough sketch of one possible formulation follows this abstract), yielding models with up to a 15% improvement in running time.

    Finally, during the COVID-19 pandemic, nations around the world faced a dilemma between opening up the economy and prioritizing saving lives, and digital contact tracing applications emerged in response. To avoid violating user privacy, however, most apps relied on a quarantine-or-not interface with little intelligence about the notification recipient's level of risk. This approach led to alert fatigue, making users less likely to follow recommendations. To overcome these issues while maintaining user privacy and supporting sophisticated risk estimation models, we proposed the Proactive Contact Tracing (PCT) framework. Our framework repurposes user communication to carry information about estimated risk in "risk messages". These messages, along with personal information (e.g., medical history or symptoms), are fed into a risk estimation model whose output is in turn sent as risk messages to other users. Depending on the estimated risk, graded notifications (e.g., exercise caution or avoid unnecessary behavior) are shown to users. Using an agent-based model (ABM) and a simple, interpretable rule-based risk estimator, we demonstrated that rule-based PCT offers a better trade-off between economic and public-health outcomes than existing apps. In follow-up work, we turned to deep learning to design the risk estimation model. While reinforcement learning would have been ideal, the computationally expensive ABM precludes its use; instead, we employed an imitation learning framework to train deep learning models, specifically several variants of set transformers. We also used domain randomization, collecting observations from several random instantiations of the ABM, to ensure that the models were robust to assumptions baked into the ABM, and we used iterative training to keep the models robust to auto-induced distribution shifts. Overall, we showed that deep-learning-based PCT outperforms rule-based PCT. To finalize our proposal, we suggest an iterative procedure for app deployment and ABM calibration to bridge the gap from the ABM to real-world deployment.
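    The following sketch shows one plausible way to fold the lookback phenomenon described above into an imitation-learning loss; the actual loss proposed in the thesis may be formulated differently, and the function and parameter names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def lookback_bc_loss(parent_logits, parent_target, child_target, alpha=0.5):
    """Behavioral-cloning loss with a hypothetical lookback auxiliary term.

    parent_logits: (batch, n_candidates) branching scores at the parent node
    parent_target: (batch,) expert's choice at the parent node
    child_target:  (batch,) expert's choice at the corresponding child node
    alpha:         assumed weight of the auxiliary term
    """
    # Standard imitation term: match the expert's decision at the parent.
    bc = F.cross_entropy(parent_logits, parent_target)

    # Lookback term: for samples where the child's choice differs from the
    # parent's, mask out the parent's expert choice and ask the remaining
    # scores to rank the child's choice first ("second-best at the parent").
    valid = child_target != parent_target
    if valid.any():
        masked = parent_logits[valid].clone()
        masked.scatter_(1, parent_target[valid].unsqueeze(1), float("-inf"))
        lookback = F.cross_entropy(masked, child_target[valid])
    else:
        lookback = parent_logits.new_zeros(())

    return bc + alpha * lookback

# Toy usage with random data (3 candidates per node, batch of 4).
logits = torch.randn(4, 3)
parent = torch.tensor([0, 1, 2, 0])
child = torch.tensor([1, 1, 0, 2])
print(lookback_bc_loss(logits, parent, child))
```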

    Mining diverse consumer preferences for bundling and recommendation
