
    The CGEM-IT: An Upgrade for the BESIII Experiment

    The BESIII experiment has been collecting data since 2009 at the e+e− collider BEPCII in Beijing, a charm-τ factory characterized by high statistics and high precision. The discovery of exotic charmonium-like states and the still open questions in low-energy QCD led to an extension of the experimental program, with several upgrades. This review focuses on the CGEM-IT, the innovative solution proposed to replace the current, aging inner tracker. It consists of three coaxial cylindrical triple-GEM detectors and will be the first cylindrical GEM detector operating inside a 1 T magnetic field with analogue readout. For this purpose, a dedicated mixed-signal ASIC for the readout of CGEM-IT signals and FPGA-based electronics for data processing have been developed. The simultaneous measurement of both the ionization charge and the time distribution enables three reconstruction algorithms, which cope with the asymmetry of the electron avalanche in the magnetic field and with non-orthogonal incident tracks. The CGEM-IT will not only restore the design efficiency but also improve secondary vertex reconstruction and radiation tolerance. The gas mixture and gain settings were chosen to optimize the position resolution, reaching ∼130 µm in the transverse plane and better than 350 µm along the beam direction. This review presents the innovative construction, readout, and software solutions employed to achieve the design goals, as well as the experimental measurements performed during the development and commissioning of the CGEM-IT.
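The abstract mentions reconstruction algorithms that exploit the measured ionization charge on the readout strips. As a minimal illustration of one standard technique for GEM strip readout (a charge-centroid estimate; the function name and values here are hypothetical, not the collaboration's actual code):

```python
def charge_centroid(strip_positions, charges):
    """Charge-centroid position estimate for a strip detector:
    the charge-weighted mean of the fired strip positions.
    A common baseline reconstruction method for GEM readout."""
    total = sum(charges)
    if total <= 0:
        raise ValueError("no collected charge")
    return sum(x * q for x, q in zip(strip_positions, charges)) / total
```

For a symmetric charge cluster the estimate falls on the central strip; asymmetric avalanches (e.g. in a magnetic field, as described above) shift the centroid, which is why complementary time-based algorithms are also used.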

    Bayesian Active Malware Analysis

    We propose a novel technique for Active Malware Analysis (AMA), formalized as a Bayesian game between an analyzer agent and a malware agent, focusing on the decision-making strategy for the analyzer. In our model, the analyzer performs an action on the system to trigger the malware into showing a malicious behavior, i.e., activating its payload. The formalization is built upon the link between malware families and the notion of types in Bayesian games. A key point is the design of the utility function, which reflects the amount of uncertainty on the type of the adversary after the execution of an analyzer action. This allows us to devise an algorithm that plays the game with the aim of minimizing the entropy of the analyzer's belief at every stage of the game in a myopic fashion. Empirical evaluation indicates that our approach results in a significant improvement in both learning speed and classification score when compared to other state-of-the-art AMA techniques.
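The core loop described above, maintaining a belief over malware types and myopically choosing the action that minimizes expected posterior entropy, can be sketched as follows. This is a hedged simplification, not the paper's implementation; the type names, actions, and likelihood tables are hypothetical:

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief over malware types."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def update(belief, likelihood, action, obs):
    """Bayes update: P(type | action, obs) ∝ P(obs | type, action) P(type)."""
    post = {t: likelihood[t][action][obs] * p for t, p in belief.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

def myopic_action(belief, likelihood, actions, observations):
    """Pick the action minimizing the expected entropy of the
    posterior belief (one-step, i.e. myopic, lookahead)."""
    def expected_entropy(a):
        total = 0.0
        for o in observations:
            # predictive probability of observing o under action a
            p_o = sum(likelihood[t][a][o] * p for t, p in belief.items())
            if p_o > 0:
                total += p_o * entropy(update(belief, likelihood, a, o))
        return total
    return min(actions, key=expected_entropy)
```

With two candidate families and an uninformative "idle" action, the planner selects the action whose observations actually discriminate between the types.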

    Agent Behavioral Analysis Based on Absorbing Markov Chains

    We propose a novel technique to identify known behaviors of intelligent agents acting within uncertain environments. We employ Markov chains to represent the observed behavioral models of the agents, and we formulate the problem as a classification task. In particular, we propose to use the long-term transition probabilities of moving between states of the Markov chain as features. Additionally, we transform our models into absorbing Markov chains, enabling the use of standard techniques to compute such features. The empirical evaluation considers two scenarios: the identification of given strategies in classical games, and the detection of malicious behaviors in malware analysis. Results show that our approach can provide informative features to successfully identify known behavioral patterns. In more detail, we show that focusing on the long-term transition probabilities makes it possible to diminish the error introduced by noisy states and transitions that may be present in an observed behavioral model. We pay particular attention to the case of noise that may be intentionally introduced by a target agent to deceive an observer agent.
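The standard computation the abstract alludes to is the absorption-probability matrix of an absorbing Markov chain, B = (I − Q)⁻¹R in canonical form, where Q holds transient-to-transient and R transient-to-absorbing transitions. A minimal sketch (illustrative only; the paper's feature-extraction pipeline is more involved):

```python
import numpy as np

def absorption_probabilities(P, absorbing):
    """Given a row-stochastic transition matrix P and the indices of its
    absorbing states, return B where B[i, j] is the long-term probability
    that the i-th transient state is eventually absorbed in the j-th
    absorbing state, via the canonical-form result B = (I - Q)^{-1} R."""
    n = P.shape[0]
    transient = [i for i in range(n) if i not in set(absorbing)]
    Q = P[np.ix_(transient, transient)]   # transient -> transient block
    R = P[np.ix_(transient, absorbing)]   # transient -> absorbing block
    N = np.linalg.inv(np.eye(len(transient)) - Q)  # fundamental matrix
    return N @ R
```

On the classic gambler's-ruin chain this reproduces the textbook absorption probabilities, and the rows of B are exactly the kind of long-term features described above.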

    POMP: Pomcp-based Online Motion Planning for active visual search in indoor environments

    In this paper we focus on the problem of learning an optimal policy for Active Visual Search (AVS) of objects in known indoor environments with an online setup. Our POMP method uses as input the current pose of an agent (e.g. a robot) and an RGB-D frame. The task is to plan the next move that brings the agent closer to the target object. We model this problem as a Partially Observable Markov Decision Process solved by a Monte-Carlo planning approach. This allows us to make decisions on the next moves by iterating over the known scenario at hand, exploring the environment and searching for the object at the same time. Unlike the current state of the art in Reinforcement Learning, POMP does not require extensive and expensive (in time and computation) labelled data, making it very agile at solving AVS in small and medium real scenarios. We only require the floor map of the environment, information that is usually available or can easily be extracted from a single a priori exploration run. We validate our method on the publicly available AVD benchmark, achieving an average success rate of 0.76 with an average path length of 17.1, performing close to the state of the art but without any training needed. Additionally, we show experimentally the robustness of our method when the quality of the object detection goes from ideal to faulty.
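The training-free idea of choosing the next move by simulating on the known floor map can be illustrated with a toy Monte-Carlo planner: score each legal move by random rollouts over the map and keep the best. This is a deliberately simplified, hypothetical stand-in for the POMCP machinery POMP actually uses (no belief tree, no UCB):

```python
import random

def plan_next_move(pose, target, free_cells, n_rollouts=200, horizon=30, seed=0):
    """Toy Monte-Carlo planner on a known floor map (set of free grid
    cells): for each legal move from `pose`, run random-walk rollouts
    and pick the move whose rollouts reach `target` most often."""
    rng = random.Random(seed)
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def neighbours(cell):
        return [(cell[0] + dx, cell[1] + dy) for dx, dy in moves
                if (cell[0] + dx, cell[1] + dy) in free_cells]

    def rollout(start):
        cell = start
        for _ in range(horizon):
            if cell == target:
                return 1          # rollout found the object
            nxt = neighbours(cell)
            if not nxt:
                return 0          # dead end
            cell = rng.choice(nxt)
        return 0

    best, best_score = None, -1.0
    for move in moves:
        first = (pose[0] + move[0], pose[1] + move[1])
        if first not in free_cells:
            continue              # move blocked by the map
        score = sum(rollout(first) for _ in range(n_rollouts)) / n_rollouts
        if score > best_score:
            best, best_score = move, score
    return best
```

Replacing the uniform rollout policy with an informed one, and the flat move scoring with a search tree over belief states, is what distinguishes full POMCP from this sketch.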

    POMP++: Pomcp-based Active Visual Search in unknown indoor environments

    In this paper we focus on the problem of learning online an optimal policy for Active Visual Search (AVS) of objects in unknown indoor environments. We propose POMP++, a planning strategy that introduces a novel formulation on top of the classic Partially Observable Monte Carlo Planning (POMCP) framework, to allow training-free online policy learning in unknown environments. We present a new belief-reinvigoration strategy which makes it possible to use POMCP with a dynamically growing state space to address the online generation of the floor map. We evaluate our method on two public benchmark datasets, AVD, acquired by real robotic platforms, and Habitat ObjectNav, rendered from real 3D scene scans, achieving the best success rate with an improvement of more than 10% over the state-of-the-art methods.
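A particle-based belief over agent/object states can only be reinvigorated when the state space grows, which is the situation the abstract describes when new map cells are discovered online. One plausible minimal form of such a step (a hypothetical simplification, not the POMP++ algorithm itself) replaces a fraction of the particles with particles drawn from the newly discovered states:

```python
import random

def reinvigorate(particles, new_cells, mix=0.2, seed=0):
    """Sketch of a belief-reinvigoration step: when previously unknown
    map cells are discovered, swap a fraction `mix` of the particle
    belief for particles drawn uniformly from the new cells, so the
    dynamically growing state space is represented without retraining."""
    rng = random.Random(seed)
    n_new = int(len(particles) * mix)
    kept = particles[:len(particles) - n_new]
    injected = [rng.choice(list(new_cells)) for _ in range(n_new)]
    return kept + injected
```

The key design point is that the belief stays a fixed-size particle set while its support expands with the map.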

    Active Android malware analysis: an approach based on stochastic games

    Active Malware Analysis focuses on learning the behaviors and the intentions of a malicious piece of software by interacting with it in a safe environment. The process can be formalized as a stochastic game involving two agents, a malware sample and an analyzer, that interact with opposite objectives: the malware sample tries to hide its behavior, while the analyzer aims at gaining as much information on the malware sample as possible. Our goal is to design a software agent that interacts with malware and extracts information on its behavior by learning a policy. We can then analyze different malware policies by using standard clustering approaches. In more detail, we propose a novel method to build malware models that can be used as input to the stochastic game formulation. We empirically evaluate our method on real Android malware, showing that our approach can group malware belonging to the same families and identify the presence of possible sub-groups within such families.
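The final clustering step the abstract mentions, grouping learned malware policies, could in its simplest form represent each policy as a fixed-length feature vector (e.g. action frequencies) and apply a standard algorithm such as k-means. The sketch below is a generic k-means, not the paper's clustering pipeline:

```python
def kmeans(vectors, k=2, iters=20):
    """Minimal k-means over policy feature vectors (tuples of floats),
    initialized from the first k vectors. Returns the list of clusters."""
    centroids = list(vectors[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            # assign each vector to its nearest centroid (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))
            clusters[i].append(v)
        # recompute centroids; keep the old one if a cluster emptied
        centroids = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return clusters
```

On well-separated policy vectors this recovers the family structure; detecting sub-groups within a family would amount to re-clustering inside each cluster with a larger k.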