
    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained so that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
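    As a small illustration of the redesigned data model and service-based plugin architecture described above, the sketch below opens an N-dimensional Dataset and invokes an op from Python via the separately distributed pyimagej package. The file name is a placeholder, and the exact service calls may differ across ImageJ2 versions.

```python
# A minimal sketch using pyimagej (pip install pyimagej); "example.tif"
# is a placeholder, and exact service signatures may vary by version.
import imagej

# Initialize an ImageJ2 gateway (fetches the ImageJ2 distribution on first use).
ij = imagej.init()

# The redesigned data model: images load as N-dimensional Datasets,
# not fixed 2D or 3D images.
dataset = ij.io().open("example.tif")
print(dataset.numDimensions(), "dimensions")

# Plugins, ops, and scripts are services on the same gateway; for example,
# an op computing the mean intensity over the whole dataset:
mean = ij.op().stats().mean(dataset)
print("mean intensity:", mean.getRealDouble())
```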

    Model Driven Tool Interoperability in Practice

    Model Driven Engineering (MDE) advocates the use of models, metamodels, and model transformations to revisit some of the classical operations in software engineering. MDE has mostly been used with success in forward and reverse engineering (for software development and better maintenance, respectively). Supporting system interoperability is a third important area of applicability for MDE. The particular case of tool interoperability is currently receiving a lot of interest. In this paper, we describe some experiments in this area that have been performed in the context of open source modeling efforts. Taking stock of these achievements, we propose a general framework in which various tools are associated with implicit or explicit metamodels. One of the interesting properties of such an organization is that it allows designers to start a software engineering activity with an informal, lightweight tool and to carry it out later in a more complete or formal context. We analyze such situations and discuss the advantages of using MDE to build a general tool interoperability framework.
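    As a concrete, deliberately tiny illustration of the pivot-metamodel idea described above, the sketch below bridges two hypothetical tools through a shared "issue" metamodel. It is not the AMMA/ATL implementation, and all names are invented.

```python
# A minimal sketch of the pivot-metamodel idea (not the actual AMMA/ATL
# tooling): each tool's data is treated as a model, and interoperability
# is achieved by transforming models through a shared pivot metamodel.
# All field and tool names below are illustrative.

def bugtracker_to_pivot(bug):
    """Transformation: implicit bug-tracker 'metamodel' -> pivot issue metamodel."""
    return {"id": bug["bug_id"], "title": bug["short_desc"], "open": bug["status"] != "CLOSED"}

def pivot_to_spreadsheet(issue):
    """Transformation: pivot issue metamodel -> row in a spreadsheet tool."""
    return [issue["id"], issue["title"], "yes" if issue["open"] else "no"]

# Chaining transformations through the pivot lets N tools interoperate with
# 2N transformations instead of N*(N-1) point-to-point converters.
bug = {"bug_id": 42, "short_desc": "Crash on save", "status": "OPEN"}
print(pivot_to_spreadsheet(bugtracker_to_pivot(bug)))
```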

    Connecting the Dots: An Assessment of Cyber-risks in Networked Building and Municipal Infrastructure Systems

    The buildings and city streets we walk down are changing. Driven by various data-driven use cases, there is increased interest in networking and integrating lighting and other building systems (e.g., heating, ventilation, and air conditioning (HVAC), security, scheduling) that were previously not internet-facing, and equipping them with sensors that collect information about their environment and the people who inhabit it. These data-enabled systems can potentially deliver improved occupant and resident experiences and help meet the U.S. Department of Energy (DOE) national energy and carbon reduction goals. Networking devices that were not designed to be connected, however, is not without its challenges. This paper explores tools available to system designers and integrators that facilitate a cybersecurity landscape assessment, or more specifically the identification of threats, vulnerabilities, and adversarial behaviors that could be used against these networked systems. Such assessments can help stakeholders shift security prioritization proactively toward the beginning of the development process.

    INRIA-ATLAS Response to the MDA Tool Capabilities OMG RFI

    A response from the INRIA-ATLAS team to the OMG Request For Information (RFI) named "MDA Tool Capabilities". In the past years, the INRIA ATLAS Group has been building an MDA tool bench named AMMA (ATLAS Model Management Architecture). This document discusses the main characteristics and overall vision of this platform in the context of the OMG MDA Tool Capabilities RFI. In the following pages, we provide an overall description of what MDA tool capabilities mean for the ATLAS Group. We show, within this response, how our overall Model-Driven Engineering (MDE) vision and implemented platform answer the different RFI questions. We also highlight the various MDA tool-specific needs and requirements we have already identified, even though some are not yet fully addressed by the current version of our platform. From an organizational point of view, we have tried to follow as much as possible the logical sequence of the questions proposed in the RFI; however, in many cases we have answered several questions at once. Our goal is not to answer all of the questions exhaustively but rather to cover all the different requirement areas.

    A monitoring and threat detection system using stream processing as a virtual function for big data

    The late detection of security threats causes a significant increase in the risk of irreparable damage, disabling any defense attempt. As a consequence, fast real-time threat detection is mandatory for security guarantees. In addition, Network Function Virtualization (NFV) provides new opportunities for efficient and low-cost security solutions. We propose a fast and efficient threat detection system based on stream processing and machine learning algorithms. The main contributions of this work are: i) a novel monitoring and threat detection system based on stream processing; ii) two datasets, the first a synthetic security dataset containing both legitimate and malicious traffic, and the second a week of real traffic from a telecommunications operator in Rio de Janeiro, Brazil; iii) a data pre-processing algorithm, comprising a normalization algorithm and an algorithm for fast feature selection based on the correlation between variables; iv) a virtualized network function on an open-source platform that provides a real-time threat detection service; v) near-optimal sensor placement through a proposed heuristic that strategically positions a minimum number of sensors in the network infrastructure; and, finally, vi) a greedy algorithm that allocates a sequence of virtual network functions on demand.
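    The correlation-based fast feature selection in contribution iii) can be sketched as follows. This is an illustration of the general idea, not the authors' exact algorithm; the redundancy threshold and the column names are assumptions.

```python
# A minimal sketch of correlation-based feature selection: normalize the
# features, then drop any feature highly correlated with one already kept.
# Threshold and column names are assumptions, not the paper's values.
import pandas as pd

def select_features(df: pd.DataFrame, label: str, redundancy_thresh: float = 0.95) -> list[str]:
    features = [c for c in df.columns if c != label]
    # z-score normalization (the pre-processing step the abstract mentions)
    norm = (df[features] - df[features].mean()) / df[features].std(ddof=0)
    corr = norm.corr().abs()
    kept: list[str] = []
    for f in features:
        if all(corr.loc[f, k] < redundancy_thresh for k in kept):
            kept.append(f)
    return kept

# Example with synthetic traffic-like records (illustrative column names):
df = pd.DataFrame({
    "pkts": [10, 20, 30, 40], "bytes": [1000, 2000, 3000, 4000],  # perfectly correlated
    "dur": [0.1, 0.5, 0.2, 0.9], "label": [0, 0, 1, 1],
})
print(select_features(df, "label"))  # ['pkts', 'dur']: 'bytes' dropped as redundant
```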

    Bypassing Modern CPU Protections With Function-Oriented Programming

    Over the years, code reuse attacks such as return-oriented programming (ROP) and jump-oriented programming (JOP) have been a primary means of gaining execution on a system via buffer overflow, memory corruption, and code flow hijacking vulnerabilities. However, new CPU-level protections have introduced a variety of hurdles. ARM has designed the “Pointer Authentication” and “Branch Target Identification” mechanisms to handle the authentication of memory addresses and pointers, and Intel has followed with its Shadow Stack and Indirect Branch Targeting mechanisms, collectively known as Control-Flow Enforcement Technology. As intended, these protections make it nearly impossible to use traditional code reuse methods such as ROP and JOP. The inclusion of these new protections has nonetheless left gaps in the system's security in which function-based code reuse attacks are still possible. This research demonstrates a novel approach that uses Function-Oriented Programming (FOP) as a technique in such environments. The design and creation of the “FOP Mythoclast” tool to identify FOP gadgets within Intel and ARM environments demonstrates not only a proof of concept (PoC) for FOP but further cements its ability to thrive in diverse constrained environments. Additionally, the demonstration of FOP within the Linux kernel showcases its ability to excel in complex, real-world situations. This research concludes with potential solutions for mitigating FOP without adversely affecting system performance.
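    To make the notion of a FOP gadget concrete, the sketch below applies one plausible heuristic using the Capstone disassembler: under coarse-grained control-flow protections, whole functions reached through legitimate calls remain valid targets, so short functions that end in a return and have a useful side effect are candidate gadgets. This is not the FOP Mythoclast tool, and the selection criteria are simplified assumptions.

```python
# A minimal, illustrative FOP-gadget heuristic (not the FOP Mythoclast tool).
# Requires the capstone package; the byte string is toy data.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

md = Cs(CS_ARCH_X86, CS_MODE_64)

def is_fop_candidate(code: bytes, addr: int, max_insns: int = 8) -> bool:
    insns = list(md.disasm(code, addr))
    # Candidate gadgets are whole functions: they must end with a return.
    if not insns or insns[-1].mnemonic != "ret":
        return False
    # A short function containing a memory write (destination operand is a
    # memory reference) is a classic "arbitrary write" building block.
    writes_memory = any(
        i.mnemonic == "mov" and i.op_str.split(",")[0].strip().endswith("]")
        for i in insns
    )
    return len(insns) <= max_insns and writes_memory

# mov qword ptr [rdi], rsi ; ret  -- a typical write gadget
gadget = b"\x48\x89\x37\xc3"
print(is_fop_candidate(gadget, 0x401000))  # True
```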

    A Dynamic Allocation Mechanism for Network Slicing as-a-Service

    In my thesis, I explore the design of a market mechanism to allocate resources for network slicing as-a-Service in a socially efficient way. Network slicing is a novel usage concept for the upcoming 5G network standard, allowing isolated and customized virtual networks to operate on top of a larger physical 5G network. By providing network slices as-a-Service, where the users of a network slice do not own any of the underlying resources, a larger range of use cases can be catered to. My market mechanism is a novel amalgamation of existing mechanism design solutions from economics with the nascent computer science literature on the technical aspects of network slicing and the underlying network virtualization concepts. The existing literature in computer science focuses on the operative aspects of network slicing, while the economics literature is incompatible with the unique problems network slicing poses as a market. In this thesis, I bring these two strands of literature together to create a functional allocation mechanism for the network slice market. The resulting mechanism is split into three phases. The first phase allows for bidder input into the network slices they bid for, overcoming a trade-off between market efficiency and tractability and making truthful valuation Bayes-Nash optimal. The second phase allocates resources to bidders based on a modified VCG mechanism that forms the market's multiple, non-identical resources into packages based on bidders' Quality of Service demands; the allocation is optimized to be socially efficient. The third phase re-allocates the vacant resources of entitled network slices according to a Generalized Second-Price auction, while allowing these resources to be returned to the entitled slices without service interruption. As a whole, the mechanism is designed to allocate resources, as far as possible, to the users who create the greatest value from them, and it successfully does so.
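    The VCG payment rule underlying the second phase can be illustrated with a deliberately simplified example over a single divisible resource pool; the thesis's actual mechanism is a modified, package-forming variant, and the bidders and numbers below are invented.

```python
# A minimal sketch of the standard VCG rule: allocate to maximize reported
# welfare, and charge each winner the externality it imposes on the others.
# Single-resource simplification and all values are assumptions.
from itertools import combinations

def best_welfare(bids, capacity, exclude=None):
    """Max total value over subsets of single-minded bidders fitting capacity."""
    names = [b for b in bids if b != exclude]
    best, best_set = 0, ()
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            demand = sum(bids[b][0] for b in subset)
            value = sum(bids[b][1] for b in subset)
            if demand <= capacity and value > best:
                best, best_set = value, subset
    return best, best_set

# bidder -> (resource units demanded, reported value)
bids = {"A": (6, 10), "B": (5, 8), "C": (5, 7)}
welfare, winners = best_welfare(bids, capacity=10)  # B and C win
for w in winners:
    others_with_w = welfare - bids[w][1]
    others_without_w, _ = best_welfare(bids, 10, exclude=w)
    # VCG price: welfare others would get without w, minus what they get with w.
    print(w, "pays", others_without_w - others_with_w)  # B pays 3, C pays 2
```

    Charging the externality rather than the bid is what makes truthful reporting optimal, the property the first phase aims to preserve in the package setting.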

    MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH

    Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. Beyond the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that developers discover in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. The main goal of this research is therefore to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part mines the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves computer architecture research and verification processes, as shown by the case studies and experiments that have been conducted.
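    The kind of checker the FOLCSL translator synthesizes can be illustrated with a hand-written equivalent for a single invariant over an event trace; the event names and the invariant itself are illustrative, not taken from this work.

```python
# A minimal sketch of an event-trace invariant checker, analogous to what
# a FOLCSL-style translator would synthesize. Invariant checked (invented,
# micro-architecture flavored): forall x: dispatch(x) implies a prior fetch(x).
def check_trace(trace):
    fetched = set()
    for step, (event, insn_id) in enumerate(trace):
        if event == "fetch":
            fetched.add(insn_id)
        elif event == "dispatch" and insn_id not in fetched:
            return f"violation at step {step}: dispatch({insn_id}) without fetch"
    return "all invariants respected"

# An event trace as a simulator might emit it: (event, instruction id) pairs.
trace = [("fetch", 1), ("dispatch", 1), ("dispatch", 2), ("fetch", 2)]
print(check_trace(trace))  # violation at step 2: dispatch(2) without fetch
```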