228 research outputs found

    Improving Desktop System Security Using Compartmentalization

    Compartmentalizing access to content, be it websites accessed in a browser or documents and applications accessed outside the browser, is an established method for protecting information integrity [12, 19, 21, 60]. Compartmentalization solutions change the user experience, introduce performance overhead, and provide varying degrees of security. Striking a balance between usability and security is not an easy task: if the usability aspects are neglected or sacrificed in favor of more security, the resulting solution will have a hard time being adopted by end users. Usability is affected by factors including (1) the generality of the solution in supporting various applications, (2) the type of changes required, (3) the performance overhead introduced by the solution, and (4) how much of the user experience is preserved. Security is affected by factors including (1) the attack surface of the compartmentalization mechanism, and (2) the security decisions offloaded to the user. This dissertation evaluates existing solutions based on the above factors and presents two novel compartmentalization solutions that are arguably more practical than their existing counterparts. The first solution, called FlexICon, is an attractive alternative in the design space of compartmentalization solutions on the desktop. FlexICon allows for the creation of a large number of containers with a small memory footprint and low disk overhead. This is achieved by using lightweight virtualization based on Linux namespaces. FlexICon uses two mechanisms to reduce user mistakes: (1) a trusted file dialog for selecting files and launching them in the appropriate containers, and (2) a secure URL redirection mechanism that detects the user’s intent and opens the URL in the proper container. FlexICon also provides a language to specify the access constraints that should be enforced by the various containers. The second solution, called Auto-FBI, deals with web-based attacks by creating multiple instances of the browser and providing mechanisms for switching between the browser instances. The prototype implementation for Firefox and Chrome uses system call interposition to control the browser’s network access. Auto-FBI can be ported to other platforms easily due to its simple design and the ubiquity of system call interposition methods on all major desktop platforms.
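
    The abstract names Linux namespaces as the lightweight virtualization mechanism behind FlexICon's containers. As a rough illustration of that general technique only (FlexICon's actual implementation is not published here; the command and flags below are illustrative), the sketch launches an application inside fresh namespaces via the util-linux unshare tool:

```python
# Illustrative sketch (not FlexICon's code): run an application inside
# fresh Linux namespaces via unshare(1), giving it an isolated view of
# mounts, PIDs, the network, and IPC at a fraction of a VM's cost.
import subprocess

def launch_in_container(cmd):
    return subprocess.run(
        ["unshare",
         "--mount", "--pid", "--net", "--ipc",  # new namespaces
         "--fork",             # child must be forked into the new PID ns
         "--map-root-user"]    # unprivileged use via a user namespace
        + cmd,
        check=True,
    )

# e.g., open a browser instance in its own throwaway container
launch_in_container(["firefox", "--no-remote"])
```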

    Architectures and GPU-Based Parallelization for Online Bayesian Computational Statistics and Dynamic Modeling

    Recent work demonstrates that coupling Bayesian computational statistics methods with dynamic models can facilitate the analysis of complex systems associated with diverse time series, including those involving social and behavioural dynamics. Particle Markov Chain Monte Carlo (PMCMC) methods constitute a particularly powerful class of Bayesian methods combining aspects of batch Markov Chain Monte Carlo (MCMC) and the sequential Monte Carlo method of Particle Filtering (PF). PMCMC can flexibly combine theory-capturing dynamic models with diverse empirical data. Online machine learning is a subcategory of machine learning in which algorithms execute sequentially and incrementally as new data arrive, updating results and predictions as the sequence of available data grows. While many machine learning and statistical methods have been adapted to online execution, PMCMC is one of the many methods whose compatibility with and adaptation to online learning remains unclear. In this thesis, I propose a data-streaming solution supporting PF and PMCMC methods with dynamic epidemiological models and demonstrate several successful applications. By constructing an automated, easy-to-use streaming system, analytic applications and simulation models gain access to arriving real-time data, shortening the time gap between data and the resulting model-supported insight. The well-defined architecture design emerging from the thesis would substantially expand traditional simulation models' potential by allowing such models to be offered as continually updated services. Contingent on sufficiently fast execution time, simulation models within this framework can consume the incoming empirical data in real time and generate informative predictions on an ongoing basis as new data points arrive. In a second line of work, I investigate the platform's flexibility and capability by extending this system to support a powerful class of PMCMC algorithms with dynamic models while ameliorating such algorithms' traditionally severe performance limitations. Specifically, this work designed and implemented a GPU-enabled parallel version of a PMCMC method with dynamic simulation models. The resulting codebase has enabled researchers to adapt their models to state-of-the-art statistical inference methods and to ensure that the computation-heavy PMCMC method can perform significant sampling between the successive arrivals of new data points. Investigating this method's impact with several realistic PMCMC application examples showed that GPU-based acceleration allows for up to 160x speedup compared to a corresponding CPU-based version not exploiting parallelism. The GPU-accelerated PMCMC and the streaming processing system complement each other, jointly providing researchers with a powerful toolset to greatly accelerate learning and secure additional insight from the high-velocity data increasingly prevalent within social and behavioural spheres. The design philosophy applied supports a platform with broad generalizability and potential for ready future extensions. The thesis discusses common barriers and difficulties in designing and implementing such systems and offers solutions to solve or mitigate them.
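
    Since the thesis centers on PF and PMCMC, a minimal bootstrap particle filter step may help fix ideas: it is the sequential Monte Carlo kernel that PMCMC embeds inside an MCMC loop, and in a streaming setting it runs once per arriving observation. The transition and likelihood functions below are generic placeholders, not the thesis's epidemiological models:

```python
# Minimal bootstrap particle filter step in NumPy -- the sequential
# Monte Carlo building block that PMCMC wraps inside an MCMC loop.
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, y, transition, likelihood):
    # Propagate each particle through the stochastic dynamic model.
    particles = transition(particles, rng)
    # Reweight by the likelihood of the newly arrived observation y.
    weights = weights * likelihood(y, particles)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    ess = 1.0 / np.sum(weights ** 2)
    if ess < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Usage with a toy random-walk state and Gaussian observations:
transition = lambda x, rng: x + rng.normal(0.0, 0.5, size=x.shape)
likelihood = lambda y, x: np.exp(-0.5 * (y - x) ** 2)
particles, weights = rng.normal(size=1000), np.full(1000, 1e-3)
for y in [0.3, 0.5, 0.9]:          # stream of arriving observations
    particles, weights = pf_step(particles, weights, y, transition, likelihood)
```

    A PMCMC wrapper (particle marginal Metropolis-Hastings) would additionally accumulate the mean of the unnormalized weights at each step; the product of those means is the marginal-likelihood estimate used in the acceptance ratio.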

    Fast Protection-Domain Crossing in the CHERI Capability-System Architecture

    Capability Hardware Enhanced RISC Instructions (CHERI) supplement the conventional memory management unit (MMU) with instruction-set architecture (ISA) extensions that implement a capability system model in the address space. CHERI can also underpin a hardware-software object-capability model for scalable application compartmentalization that can mitigate broader classes of attack. This article describes ISA additions to CHERI that support fast protection-domain switching, not only in terms of low cycle count, but also in terms of efficient memory sharing with mutual distrust. The authors propose ISA support for sealed capabilities, hardware-assisted checking during protection-domain switching, a lightweight capability flow-control model, and fast register clearing, while retaining the flexibility of a software-defined protection-domain transition model. They validate this approach through a full-system experimental design, including ISA extensions, a field-programmable gate array prototype (implemented in Bluespec SystemVerilog), and a software stack including an OS (based on FreeBSD), compiler (based on LLVM), software compartmentalization model, and open-source applications. This work is part of the CTSRD and MRC2 projects sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL), under contracts FA8750-10-C-0237 and FA8750-11-C-0249. We also acknowledge the Engineering and Physical Sciences Research Council (EPSRC) REMS Programme Grant [EP/K008528/1], the EPSRC Impact Acceleration Account [EP/K503757/1], EPSRC/ARM iCASE studentship [13220009], Microsoft studentship [MRS2011-031], the Isaac Newton Trust, the UK Higher Education Innovation Fund (HEIF), Thales E-Security, and Google, Inc. This is the author accepted manuscript. The final version of the article can be found at: http://ieeexplore.ieee.org/document/7723791
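
    The sealed-capability mechanism at the heart of the fast domain switch can be sketched abstractly. The toy model below is an invented illustration of the concept as the abstract describes it, not the CHERI ISA itself: sealing binds a capability to an object type (otype), and a protection-domain crossing succeeds only when the presented code and data capabilities carry matching types.

```python
# Toy model of CHERI-style sealed capabilities (simplified semantics,
# invented types -- not the actual ISA).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Capability:
    base: int
    length: int
    otype: Optional[int] = None      # None means unsealed

def seal(cap: Capability, otype: int) -> Capability:
    # A sealed capability is immutable and non-dereferenceable until
    # unsealed during a domain crossing with a matching object type.
    assert cap.otype is None, "capability is already sealed"
    return Capability(cap.base, cap.length, otype)

def invoke(code_cap: Capability, data_cap: Capability) -> Capability:
    # Model of the hardware-assisted check on a domain switch: code
    # and data capabilities must be sealed with the same otype,
    # otherwise the protection-domain transition is refused.
    if code_cap.otype is None or code_cap.otype != data_cap.otype:
        raise PermissionError("otype mismatch: domain crossing refused")
    # On success the hardware would unseal both, clear non-argument
    # registers, and transfer control to the callee domain.
    return Capability(data_cap.base, data_cap.length)
```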

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007).
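
    For readers unfamiliar with the model classes benchmarked in the review, a clock-driven leaky integrate-and-fire neuron is the simplest case: the membrane potential is integrated on a fixed time grid and reset whenever it crosses threshold. The parameter values below are illustrative, not the review's benchmark settings:

```python
# Clock-driven (fixed-step) simulation of one leaky integrate-and-fire
# neuron with a constant input drive; parameters are illustrative.
dt, t_max = 0.1, 100.0                                   # ms
tau, v_rest, v_th, v_reset = 10.0, -70.0, -54.0, -70.0   # ms, mV
drive = 20.0                                             # input, mV

v, spikes = v_rest, []
for step in range(int(t_max / dt)):
    # forward-Euler update of dv/dt = (v_rest - v + drive) / tau
    v += (dt / tau) * (v_rest - v + drive)
    if v >= v_th:                    # threshold crossing
        spikes.append(step * dt)     # record spike time (ms)
        v = v_reset                  # reset membrane potential
print(f"{len(spikes)} spikes in {t_max:.0f} ms")
```

    An event-driven strategy would instead compute the exact threshold-crossing time analytically and jump from spike event to spike event, avoiding the fixed time grid entirely.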

    Memory Subsystems for Security, Consistency, and Scalability

    In response to the continuous demand for the ability to process ever larger datasets, as well as discoveries in next-generation memory technologies, researchers have been vigorously studying memory-driven computing architectures that allow data-intensive applications to access enormous amounts of pooled non-volatile memory. As applications interact with increasing numbers of components and datasets, existing systems struggle to efficiently enforce the principle of least privilege for security. While non-volatile memory can retain data even after a power loss and allows for large main-memory capacity, programmers have to bear the burden of maintaining the consistency of program memory for fault tolerance, as well as handling huge datasets with traditional yet expensive memory-management interfaces for scalability. Today’s computer systems have become too sophisticated for existing memory subsystems to handle many design requirements. In this dissertation, we introduce three memory subsystems to address challenges in terms of security, consistency, and scalability. Specifically, we propose SMVs to provide threads with fine-grained control over access privileges for a partially shared address space for security, NVthreads to allow programmers to easily leverage non-volatile memory with automatic persistence for consistency, and PetaMem to enable memory-centric applications to freely access memory beyond the traditional process boundary with support for memory isolation and crash recovery for security, consistency, and scalability.
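
    The consistency burden described here comes down to ordering: an update must never become durable halfway. As a file-based stand-in for the persistence logging that systems like NVthreads automate (this sketches the general write-ahead technique, not NVthreads' actual API), the example below records a redo log before applying an update in place:

```python
# General write-ahead (redo) logging sketch -- not NVthreads' API.
# A crash before step 3 either finds an intact log (redo on recovery)
# or no log at all; the data file is never seen half-written.
import json, os

def durable_update(path, updates):
    log_path = path + ".log"
    with open(log_path, "w") as log:     # 1. record intent in the log
        json.dump(updates, log)
        log.flush()
        os.fsync(log.fileno())           #    force log to stable media
    data = {}
    if os.path.exists(path):
        with open(path) as f:
            data = json.load(f)
    data.update(updates)
    with open(path + ".tmp", "w") as f:  # 2. apply out of place
        json.dump(data, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(path + ".tmp", path)      #    atomic rename into place
    os.remove(log_path)                  # 3. commit: discard the log

durable_update("account.json", {"balance": 42})
```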

    Advancing computational biophysics with Virtual Reality

    Computational models are powerful tools for exploring the properties of complex biological systems. In computational neuroscience, allowing easy computational exploration and visualization of these models is crucial for the progress of the field. In recent years, Virtual Reality hardware and visualization systems have become more affordable, and this opens a window of opportunity for visualization services. The current major problem of 3D visualization concerns usability (i.e., navigation and selection). In this dissertation, we hypothesize that replacing 3D with VR will (1) overcome the usability issues mentioned and eventually (2) boost user effectiveness regarding concerns of the field of study (neuroscience). To evaluate the results of the work developed under this dissertation, a two-part experiment will be carried out in which a group of individuals must perform a set of predetermined tasks and evaluate their experience using 3D in the first part and VR in the second. Besides the self-evaluation of the experience, data such as completion time and task correctness will also be used to quantify the effectiveness of the visualization method. Given the aforementioned experiment, a prototype of a (web-based) application with Virtual Reality visualization shall be developed. The 3D visualization will be provided by a web-based, open-source framework called Geppetto. Each of the decisions made in the development of the prototype will be properly analyzed in this document, as well as the scientific literature that serves as its basis where necessary. Besides the study of Virtual Reality itself, standard methods for the visualization of (neuro)scientific information will also be analyzed. The proposed solution seeks to constitute a solid and sufficiently generic work base to be applied not only in the scope of neuroscience, but also in several other contexts where visualization through VR might be successful.

    Detection and Diagnosis of Memory Leaks in Web Applications

    Memory leaks -- the existence of unused memory on the heap of applications -- result in low performance and may, in the worst case, cause applications to crash. The migration of application logic to the client side of modern web applications and the use of JavaScript as the main language for client-side development have made memory leaks in JavaScript an issue for web applications. Significant portions of modern web applications are executed in the client browser, with the server acting only as a data store. Client-side web applications communicate with the server asynchronously, remaining on the same web page during their lifetime. Thus, even minor memory leaks can eventually lead to excessive memory usage, negatively affecting user-perceived response time and possibly causing page crashes. This thesis demonstrates the existence of memory leaks in the client side of large and popular web applications, and develops prototype tools to solve this problem. The first approach taken to address memory leaks in web applications is to detect, diagnose, and fix them during application development. This approach prevents such leaks from happening by finding and removing their causes. To achieve this goal, this thesis introduces LeakSpot, a tool that creates a runtime heap model of JavaScript applications by modifying web-application code in a browser-agnostic way to record object allocations, accesses, and references created on objects. LeakSpot reports the locations in the code that allocate leaked objects, i.e., leaky allocation sites. It also identifies accumulation sites, which are the points in the program where references are created on objects but are not removed, e.g., the points where objects are added to a data structure but never removed. To facilitate debugging and fixing the code, LeakSpot narrows down the space that must be searched to find the cause of the leaks in two ways: first, it refines the list of leaky allocation sites and reports those allocation sites that are the main cause of the leaks; second, for every leaked object, LeakSpot reports all the locations in the program that create a reference to that object. To confirm its usefulness and efficacy experimentally, LeakSpot is used to find and fix memory leaks in JavaScript benchmarks and open-source web applications. In addition, the potential causes of the leaks in large and popular web applications are identified. The performance overhead of LeakSpot in large and popular web applications is also measured, which indirectly demonstrates the scalability of LeakSpot. The second approach taken to address memory leaks assumes memory leaks may still be present after development. This approach aims to reduce the effects of leaked memory at runtime and improve the memory efficiency of web applications by removing leaked objects or triggering garbage collection early, using a new tool, MemRed. MemRed automatically detects excessive use of memory at runtime and then takes actions to reduce memory usage. It detects excessive use of memory by tracking the size of all objects on the heap. If excessive usage is detected, MemRed applies recovery actions to reduce the overall size of the heap and hide the effects of excessive memory usage from users. MemRed is implemented as an extension for the Chrome browser. Evaluation demonstrates the effectiveness of MemRed in reducing the memory usage of web applications. 
    In summary, the first tool provided in this thesis, LeakSpot, can be used by developers to find and fix memory leaks in JavaScript applications. Using both tools improves the experience of web-application users.
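
    LeakSpot itself instruments JavaScript and its tooling is not reproduced here, but the core idea of attributing live heap objects to leaky allocation sites has a compact analogue in Python's standard tracemalloc module, shown as a language-agnostic illustration:

```python
# Analogy to LeakSpot's "leaky allocation sites" using Python's
# standard tracemalloc module: snapshot the heap and rank allocation
# sites by the live memory they still hold.
import tracemalloc

tracemalloc.start(25)          # keep up to 25 frames per allocation

retained = []                  # stands in for a forgotten cache/listener list
def handler():
    retained.append(bytearray(1024))   # grows on every "event": a leak

for _ in range(1000):
    handler()

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)                # top entries are candidate leaky sites
```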

    Scalable visual analytics over voluminous spatiotemporal data

    Visualization is a critical part of modern data analytics. This is especially true of interactive and exploratory visual analytics, which encourages speedy discovery of trends, patterns, and connections in data by allowing analysts to rapidly change what data is displayed and how it is displayed. Unfortunately, the explosion of data production in recent years has led to problems of scale, as storage, processing, querying, and visualization have struggled to keep pace with data volumes. Visualization of spatiotemporal data poses unique challenges, thanks in part to high dimensionality in the input feature space, interactions between features, and the production of voluminous, high-resolution outputs. In this dissertation, we address challenges associated with supporting interactive, exploratory visualization of voluminous spatiotemporal datasets and the underlying phenomena. This requires the visualization of millions of entities and of changes to these entities as the spatiotemporal phenomena unfold. The rendering and propagation of spatiotemporal phenomena must be both accurate and timely. Key contributions of this dissertation include: (1) the temporal and spatial coupling of spatially localized models to enable the visualization of phenomena at far greater geospatial scales; (2) the ability to directly compare and contrast diverging spatiotemporal outcomes that arise from multiple exploratory "what-if" queries; and (3) the computational framework required to support an interactive user experience in a heavily resource-constrained environment. We additionally provide support for collaborative and competitive exploration with multiple synchronized clients.

    Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure

    Simulating the brain-body-environment trinity in closed loop is an attractive proposal for investigating how perception, motor activity and interactions with the environment shape brain activity, and vice versa. The relevance of this embodied approach, however, hinges entirely on the modeled complexity of the various simulated phenomena. In this article, we introduce a software framework that is capable of simulating large-scale, biologically realistic networks of spiking neurons embodied in a biomechanically accurate musculoskeletal system that interacts with a physically realistic virtual environment. We deploy this framework on the high performance computing resources of the EBRAINS research infrastructure and we investigate the scaling performance by distributing computation across an increasing number of interconnected compute nodes. Our architecture is based on requested compute nodes as well as persistent virtual machines; this provides a high-performance simulation environment that is accessible to multi-domain users without expert knowledge, with a view to enabling users to instantiate and control simulations at custom scale via a web-based graphical user interface. Our simulation environment, entirely open source, is based on the Neurorobotics Platform developed in the context of the Human Brain Project, and the NEST simulator. We characterize the capabilities of our parallelized architecture for large-scale embodied brain simulations through two benchmark experiments, investigating the effects of scaling compute resources on performance defined in terms of experiment runtime, brain instantiation time, and simulation time. The first benchmark is based on a large-scale balanced network, while the second is a multi-region embodied brain simulation consisting of more than a million neurons and a billion synapses. Both benchmarks clearly show how scaling compute resources improves the aforementioned performance metrics in a near-linear fashion. The second benchmark in particular is indicative of both the potential and the limitations of a highly distributed simulation in terms of a trade-off between computation speed and resource cost. Our simulation architecture is being prepared to be accessible to everyone as an EBRAINS service, thereby offering a community-wide tool with a unique workflow that should provide momentum to the investigation of closed-loop embodiment within the computational neuroscience community. This work was supported by the European Union's Horizon 2020 Framework Programme (grants 785907, 945539 and 800858) and by MEXT (hp200139, hp210169; KAKENHI grant no. 17H06310).
