
    Approximate Computing Survey, Part II: Application-Specific & Architectural Approximation Techniques and Applications

    The challenging deployment of compute-intensive applications from domains such as Artificial Intelligence (AI) and Digital Signal Processing (DSP) forces the computing systems community to explore new design approaches. Approximate Computing appears as an emerging solution, allowing designers to tune the quality of results in order to improve energy efficiency and/or performance. This radical paradigm shift has attracted interest from both academia and industry, resulting in significant research on approximation techniques and methodologies at different design layers (from system down to integrated circuits). Motivated by the wide appeal of Approximate Computing over the last 10 years, we conduct a two-part survey to cover key aspects (e.g., terminology and applications) and review the state-of-the-art approximation techniques from all layers of the traditional computing stack. In Part II of our survey, we classify and present the technical details of application-specific and architectural approximation techniques, which both target the design of resource-efficient processors/accelerators and systems. Moreover, we present a detailed analysis of the application spectrum of Approximate Computing and discuss open challenges and future directions. Comment: Under review at ACM Computing Surveys.
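
    To make the paradigm concrete, here is a minimal sketch of one well-known software-level approximation technique, loop perforation, which trades result quality for work performed; it is a generic illustration of the idea, not a technique taken from the survey itself:

```python
import numpy as np

def mean_exact(x):
    """Exact mean over all elements."""
    return x.sum() / x.size

def mean_perforated(x, stride=4):
    """Approximate mean via loop perforation: sample only every
    `stride`-th element, cutting work by roughly 1/stride at the
    cost of a bounded accuracy loss on well-behaved inputs."""
    sample = x[::stride]
    return sample.sum() / sample.size

x = np.random.rand(1_000_000)
print(mean_exact(x), mean_perforated(x))  # close, but not equal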

    Sequential Monte Carlo Optimisation for Air Traffic Management

    This report shows that a significant reduction in fuel use could be achieved by the adoption of 'free flight' type trajectories in the Terminal Manoeuvring Area (TMA) of an airport, under the control of an algorithm which optimises the trajectories of all the aircraft within the TMA simultaneously while maintaining safe separation. We propose the real-time use of Monte Carlo optimisation in the framework of Model Predictive Control (MPC) as the trajectory planning algorithm. Implementation on a Graphics Processing Unit (GPU) allows the exploitation of the parallelism inherent in Monte Carlo methods, which results in solution speeds high enough to allow real-time use. We demonstrate the solution of very complicated scenarios with both arrival and departure aircraft, in three dimensions, in the presence of a stochastic wind model and non-convex safe-separation constraints. We evaluate our algorithm on flight data obtained in the London Gatwick Airport TMA, and show that fuel savings of about 30% can be obtained. We also demonstrate the flexibility of our approach by adding noise-reduction objectives to the problem and observing the resulting modifications to arrival and departure trajectories.
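
    A minimal, hypothetical sketch of the receding-horizon Monte Carlo idea follows; the dynamics, fuel model, and separation test are toy stand-ins, not the report's models:

```python
import numpy as np

def mc_mpc_step(state, n_samples=1000, horizon=10, seed=None):
    """One receding-horizon step: sample random control sequences,
    simulate each under toy dynamics, score fuel use plus a
    separation penalty, and return the first control of the best."""
    rng = np.random.default_rng(seed)
    best_cost, best_seq = np.inf, None
    for _ in range(n_samples):            # samples are independent -> GPU-friendly
        controls = rng.normal(0.0, 1.0, size=(horizon, state.size))
        cost, s = 0.0, state.copy()
        for u in controls:
            s = s + 0.1 * u               # placeholder dynamics model
            cost += 0.1 * float(u @ u)    # fuel-use proxy
            if np.linalg.norm(s) < 1.0:   # placeholder safe-separation test
                cost += 1e6               # heavily penalise violations
        if cost < best_cost:
            best_cost, best_seq = cost, controls
    return best_seq[0]                    # apply only the first control, then re-plan

u0 = mc_mpc_step(np.array([5.0, 5.0, 1.0]))
print(u0)
```

    Because every sampled control sequence is evaluated independently, the inner loop maps naturally onto parallel GPU threads, which is what makes real-time operation plausible.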

    Novel Alert Visualization: The Development of a Visual Analytics Prototype for Mitigation of Malicious Insider Cyber Threats

    Cyber insider threat is one of the most difficult risks to mitigate in organizations. However, innovative, validated visualizations that help cyber analysts decipher and react to detected anomalies have not been reported in the literature or in industry. Attacks caused by malicious insiders can cause millions of dollars in losses to an organization. Though there have been advances in Intrusion Detection Systems (IDSs) over the last three decades, traditional IDSs do not specialize in identifying anomalies caused by insiders. There is also a profuse amount of data presented to cyber analysts when deciphering big data and reacting to data breach incidents using complex information systems. Information visualization is pertinent to the identification and mitigation of malicious cyber insider threats. The main goal of this study was to develop and validate, using Subject Matter Experts (SMEs), an executive insider threat dashboard visualization prototype. Using the developed prototype, an experimental study was conducted to assess the perceived effectiveness of enhancing the analysts' interface when complex data correlations are presented to mitigate malicious insider cyber threats. Dashboard-based visualization techniques can give full visibility of network progress and problems in real time, especially within complex and stressful environments. For instance, in an Emergency Room (ER), four main vital signs are used for urgent patient triage. Cybersecurity vital signs can similarly give cyber analysts clear focal points during high-severity issues. Pilots must expeditiously reference the Heads Up Display (HUD), which presents only key indicators, to make critical decisions during unwarranted deviations or an immediate threat. Current dashboard-based visualization techniques have yet to be fully validated within the field of cybersecurity. This study developed a visualization prototype based on SME input gathered via the Delphi method. SMEs validated the perceived effectiveness of several different types of the developed visualization dashboard. Quantitative analysis of SMEs' perceived effectiveness, via self-reported value and satisfaction data, was performed, along with qualitative analysis of feedback provided during the experiments with the prototype. This study identified critical cyber visualization variables and visualization techniques, which were then used to develop QUICK.v™, a prototype to be used when mitigating potentially malicious cyber insider threats. The perceived effectiveness of QUICK.v™ was then validated. Insights from this study can aid organizations in enhancing cybersecurity dashboard visualizations by depicting only critical cybersecurity vital signs.

    Computer vision based navigation for spacecraft proximity operations

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 219-226). The use of computer vision for spacecraft relative navigation and proximity operations within an unknown environment is an enabling technology for a number of future commercial and scientific space missions. This thesis presents three first steps towards a larger research initiative to develop and mature these technologies. The first step is the design and development of a "flight-traceable" upgrade to the Synchronized Position Hold Engage Reorient Experimental Satellites, known as the SPHERES Goggles. This upgrade enables experimental research and maturation of computer vision based navigation technologies on the SPHERES satellites. The second step is the development of an algorithm for vision based relative spacecraft navigation that uses a fiducial marker with the minimum number of known point correspondences. An experimental evaluation of this algorithm is presented that determines an upper bound on the accuracy and precision of this system. The third step towards vision based relative navigation in an unknown environment is a preliminary investigation into the computational issues associated with high performance embedded computing. The computational characteristics of vision based relative navigation algorithms are discussed along with the requirements that they impose on computational hardware. A trade study is performed which compares a number of different commercially available hardware architectures to determine which would provide the best computational performance per unit of electrical power. By Brent Edward Tweddle. S.M.
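
    The point-correspondence idea can be illustrated with a generic perspective-n-point (PnP) solve in OpenCV; this is a standard sketch under assumed marker geometry and camera intrinsics, not the algorithm developed in the thesis:

```python
import numpy as np
import cv2

# 3D corners of a square fiducial marker in the marker frame (metres) -- assumed geometry.
object_points = np.array([[-0.05, -0.05, 0.0],
                          [ 0.05, -0.05, 0.0],
                          [ 0.05,  0.05, 0.0],
                          [-0.05,  0.05, 0.0]], dtype=np.float64)

# Detected 2D pixel locations of those corners -- stand-in measurements.
image_points = np.array([[310.0, 250.0],
                         [330.0, 251.0],
                         [329.0, 270.0],
                         [311.0, 269.0]], dtype=np.float64)

# Pinhole intrinsics (focal lengths and principal point) -- assumed calibration.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Recover the marker pose relative to the camera from the minimal set
# of four known point correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```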

    Efficient configuration space construction and optimization

    The configuration space is a fundamental concept that is widely used in algorithmic robotics. Many applications in robotics, computer-aided design, and related areas can be reduced to computational problems in terms of configuration spaces. In this dissertation, we address three main computational challenges related to configuration spaces: 1) how to efficiently compute an approximate representation of high-dimensional configuration spaces; 2) how to efficiently perform geometric, proximity, and motion planning queries in high-dimensional configuration spaces; and 3) how to model uncertainty in configuration spaces represented by noisy sensor data. We present new configuration space construction algorithms based on machine learning and geometric approximation techniques. These algorithms perform collision queries on many configuration samples. The collision query results are used to compute an approximate representation of the configuration space, which quickly converges to the exact configuration space. We highlight the efficiency of our algorithms for penetration depth computation and instance-based motion planning. We also present parallel GPU-based algorithms to accelerate the performance of optimization and search computations in configuration spaces. In particular, we design efficient GPU-based parallel k-nearest neighbor and parallel collision detection algorithms and use these algorithms to accelerate motion planning. In order to extend configuration space algorithms to handle noisy sensor data arising from real-world robotics applications, we model the uncertainty in the configuration space by formulating the collision probabilities for noisy data. We use these algorithms to perform reliable motion planning for the PR2 robot. Doctor of Philosophy.
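
    A minimal sketch of the sampling-and-learning construction described above; the two-dimensional joint space and the collision test are placeholders, and the classifier choice is illustrative:

```python
import numpy as np
from sklearn.svm import SVC

def in_collision(q):
    """Placeholder collision query: treat a disc in joint space as the
    obstacle region (a real checker would test the robot geometry
    against workspace obstacles)."""
    return np.linalg.norm(q - np.array([1.0, -0.5])) < 0.8

# 1) Sample configurations and label them with collision queries.
rng = np.random.default_rng(0)
samples = rng.uniform(-np.pi, np.pi, size=(2000, 2))
labels = np.array([in_collision(q) for q in samples])

# 2) Fit a classifier as an approximate implicit C-space representation;
#    with more samples the decision boundary converges to the true one.
model = SVC(kernel="rbf", gamma=2.0).fit(samples, labels)

# 3) Cheap approximate collision queries, usable inside a planner.
q_test = np.array([[0.9, -0.4], [-2.0, 2.0]])
print(model.predict(q_test))  # expected: [ True False]
```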

    Diffusion MRI analysis: robust and efficient microstructure modeling

    Diffusion MRI (dMRI) allows for investigating the structure of the human brain. This is useful both for scientific brain research and for medical diagnosis. Since raw dMRI data are not directly interpretable by humans, we use mathematical models to convert the raw data into something interpretable. These models can be computed using multiple computational methods, each with a different trade-off between accuracy, robustness, and efficiency. In this thesis we studied several computational methods for their usability and efficiency in dMRI modeling. In the end, we provide the reader with methodological recommendations for dMRI modeling and a high-performance, GPU-enabled dMRI computing platform that incorporates all of these recommendations.
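
    As one concrete example of such a model, the classic diffusion tensor can be fit by linear least squares on the log-signal, ln(S_i/S_0) = -b g_i^T D g_i; the protocol and data below are synthetic, and the thesis covers a much wider range of models and solvers:

```python
import numpy as np

# Gradient directions (unit vectors) and a single b-value -- synthetic protocol.
rng = np.random.default_rng(1)
g = rng.normal(size=(30, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)
b = 1000.0  # s/mm^2

# Design matrix for the 6 unique tensor elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
X = -b * np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                          2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]])

# Synthesise signal from a known tensor, then fit by ordinary least squares.
D_true = np.array([1.7e-3, 0.3e-3, 0.3e-3, 0.0, 0.0, 0.0])
S0 = 1.0
S = S0 * np.exp(X @ D_true)
y = np.log(S / S0)
D_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(D_fit, D_true))  # True: the fit is exact in the noise-free case
```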

    Spatiotemporal anomaly detection: streaming architecture and algorithms

    Includes bibliographical references. 2020 Summer. Anomaly detection is the science of identifying one or more rare or unexplainable samples or events in a dataset or data stream. The field of anomaly detection has been extensively studied by mathematicians, statisticians, economists, engineers, and computer scientists. One open research question remains the design of distributed cloud-based architectures and algorithms that can accurately identify anomalies in previously unseen, unlabeled, streaming, multivariate spatiotemporal data. With streaming data, time is of the essence, and insights are perishable. Real-world streaming spatiotemporal data originate from many sources, including mobile phones, supervisory control and data acquisition (SCADA) enabled devices, the internet-of-things (IoT), distributed sensor networks, and social media. Baseline experiments are performed on four (4) non-streaming, static multivariate anomaly detection datasets using unsupervised offline traditional machine learning (TML) and unsupervised neural network techniques. Multiple architectures, including autoencoders, generative adversarial networks, convolutional networks, and recurrent networks, are adapted for experimentation. Extensive experimentation demonstrates that neural networks produce superior detection accuracy over TML techniques. These same neural network architectures can be extended to process unlabeled spatiotemporal streaming data using online learning. Space and time relationships are further exploited to provide additional insights and increased anomaly detection accuracy. A novel domain-independent architecture and set of algorithms called the Spatiotemporal Anomaly Detection Environment (STADE) is formulated. STADE is based on a federated learning architecture. STADE streaming algorithms are based on geographically unique, persistently executing neural networks using online stochastic gradient descent (SGD). STADE is designed to be pluggable, meaning that alternative algorithms may be substituted or combined to form an ensemble. STADE incorporates a Stream Anomaly Detector (SAD) and a Federated Anomaly Detector (FAD). The SAD executes at multiple locations on streaming data, while the FAD executes at a single server and identifies global patterns and relationships among the site anomalies. Each STADE site streams anomaly scores to the centralized FAD server for further spatiotemporal dependency analysis and logging. The FAD is based on recent advances in DNN-based federated learning. A STADE testbed is implemented to facilitate globally distributed experimentation using low-cost, commercial cloud infrastructure provided by Microsoft™. STADE testbed sites are situated in the cloud within each continent: Africa, Asia, Australia, Europe, North America, and South America. Communication occurs over the commercial internet. Three STADE case studies are investigated. The first case study processes commercial air traffic flows, the second processes global earthquake measurements, and the third processes social media (i.e., Twitter™) feeds. These case studies confirm that STADE is a viable architecture for the near-real-time identification of anomalies in streaming data originating from (possibly) computationally disadvantaged, geographically dispersed sites. Moreover, the addition of the FAD provides enhanced anomaly detection capability. Since STADE is domain-independent, these findings can be easily extended to additional application domains and use cases.
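
    Stripped of the federated machinery, the core streaming idea can be sketched as an online model whose prediction residual is the anomaly score; the linear predictor below is a hypothetical stand-in for the neural networks used in STADE:

```python
import numpy as np

class OnlineAnomalyScorer:
    """Streaming anomaly scoring via online SGD: predict each incoming
    sample from the previous one, report the prediction residual as the
    anomaly score, then take one gradient step. A linear model stands in
    for the DNNs described in the dissertation."""

    def __init__(self, dim, lr=0.01):
        self.W = np.zeros((dim, dim))
        self.prev = None
        self.lr = lr

    def score_and_update(self, x):
        if self.prev is None:
            self.prev = x
            return 0.0
        err = x - self.W @ self.prev                   # prediction residual
        score = float(np.linalg.norm(err))             # score to stream to the server
        self.W += self.lr * np.outer(err, self.prev)   # one SGD step on squared error
        self.prev = x
        return score

scorer = OnlineAnomalyScorer(dim=3)
for t in range(100):
    x = np.sin(0.1 * t) * np.ones(3)   # regular stream
    if t == 70:
        x += 5.0                       # injected anomaly
    s = scorer.score_and_update(x)
    if s > 2.0:                        # flag large residuals
        print(f"t={t}: anomaly score {s:.2f}")
```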

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports; 7551)

    ReCoSoC is intended to be an annual meeting exposing and discussing gathered expertise as well as state-of-the-art research on SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account the emerging techniques and architectures exploring the synergy between flexible on-chip communication and system reconfigurability.