
    Tactical ISR/C2 Integration with AI/ML Augmentation

    NPS NRP Project Presentation. NAVPLAN 2021 specifies Distributed Maritime Operations (DMO) with a tactical grid that connects distributed nodes and provides processing at the tactical edge, including Artificial Intelligence/Machine Learning (AI/ML), in support of Expeditionary Advanced Base Operations (EABO) and Littoral Operations in a Contested Environment (LOCE). Joint All-Domain Command and Control (JADC2) is the concept for sensor integration. However, Intelligence, Surveillance and Reconnaissance (ISR) and Command and Control (C2) hardware and software have yet to be fully defined, their tools integrated, and their configurations tested. This project evaluates options for integrating ISR and C2 into a Common Operational Picture (COP) with AI/ML for decision support on tactical clouds, in support of DMO, EABO, LOCE and JADC2 objectives. Sponsors: Commander, Naval Surface Forces (CNSF); U.S. Fleet Forces Command (USFF); Chief of Naval Operations (CNO). This research is supported by funding from the Naval Postgraduate School, Naval Research Program (PE 0605853N/2098). https://nps.edu/nrp Approved for public release. Distribution is unlimited.

    Simplifying Internet of Things (IoT) Data Processing Work ow Composition and Orchestration in Edge and Cloud Datacenters

    Ph.D. Thesis. The Internet of Things (IoT) allows the creation of virtually infinite connections into a global array of distributed intelligence. Identifying a suitable configuration of devices, software and infrastructure in the context of user requirements is fundamental to successfully delivering IoT applications. However, the design, development, and deployment of IoT applications are complex due to numerous challenges. For instance, reconciling IoT application users' subjective and objective opinions with IoT workflow instances remains a challenge for the design of a more holistic approach. Moreover, the complexity of IoT applications has increased considerably due to the heterogeneous nature of the Edge/Cloud services utilised to lower latency in data transformation and increase reusability. To address the composition and orchestration of IoT applications in cloud and edge environments, this thesis first presents IoT-CANE (Context Aware Recommendation System), a high-level unified IoT resource configuration recommendation system which embodies a unified conceptual model capturing configuration, constraint and infrastructure features of Edge/Cloud together with IoT devices. Second, I present an IoT workflow composition system (IoTWC) that allows IoT users to pipeline their workflows with proposed IoT workflow activity abstract patterns. IoTWC leverages the analytic hierarchy process (AHP) to compose multi-level IoT workflows that satisfy the requirements of any IoT application. In addition, users are provided with recommended IoT workflow configurations via an AHP-based multi-level composition framework. The proposed IoTWC is validated on a user case study to evaluate the coverage of IoT workflow activity abstract patterns, and on a real-world scenario for smart buildings. Last, I propose a fault-tolerant automated deployment framework for IoT which takes the IoT workflow plan from IoTWC and deploys it in a multi-cloud edge environment with a fault-tolerance mechanism. The efficiency and effectiveness of the proposed fault-tolerant system are evaluated in a real-time waterflooding data monitoring and management application.
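The core ranking step of the analytic hierarchy process (AHP) that IoTWC leverages can be sketched briefly. The following is a minimal illustration, not the thesis's implementation: the pairwise judgement values and the "workflow configuration" alternatives are hypothetical, and the geometric-mean method is one common approximation of the AHP priority vector.

```python
import math

def ahp_weights(matrix):
    """Approximate the AHP priority vector via the geometric-mean method."""
    n = len(matrix)
    # Geometric mean of each row of the pairwise comparison matrix...
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    # ...normalised so the priorities sum to 1.
    return [g / total for g in gmeans]

# Hypothetical pairwise judgements comparing three IoT workflow
# configurations on a single criterion (say, end-to-end latency):
# A is moderately preferred to B (3) and strongly preferred to C (5).
pairwise = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # [0.637, 0.258, 0.105]
```

In a multi-level composition, weights computed this way at each level would be combined down the hierarchy to score complete workflow plans.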

    Resilience Engineering (Engenharia de Resiliência)

    This thesis presents a study of the discipline of Chaos Engineering and its approaches, which help to verify the correct behaviour of a system and to discover new information about it through chaos experiments such as shutting down a machine or simulating latency in the network connections between applications. The case study was carried out at the company Mindera, to verify and improve the failure resilience of a client's project. Initially the project sat at the first levels of the Chaos Maturity Model, and it was necessary to increase its sophistication and adoption by conducting experiments to test and improve its resilience. The cloud environment the project uses, and its architecture, are explained to contextualise the components that the experiments use and test. Different alternatives for testing disaster recovery plans are compared, as are the differences between using a test environment and the production environment. The value of carrying out experiments for the client project is described, along with the identification of its value proposition. The different chaos tools are then analysed using the TOPSIS method. The four experiments performed test the system's resilience to the failure of a database's primary node, the impact of latency in the network connections between different components, the system's reaction to the exhaustion of a machine's physical resources, and finally the system's overall resilience in the face of a server failure. After execution, the experiments were evaluated by company experts, who classified them as important for the project. A problem was found in the latency-injection experiment; after the application's code was changed, the system's reaction was positive and the number of responses increased.
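The TOPSIS method used here to compare chaos tools ranks alternatives by their relative closeness to an ideal solution. The sketch below is a generic TOPSIS implementation with a hypothetical decision matrix; the tool names, criteria and scores are illustrative, not taken from the thesis.

```python
import math

def topsis(matrix, weights, benefit):
    """Score alternatives by closeness to the ideal solution (TOPSIS)."""
    # Vector-normalise each criterion column, then apply criterion weights.
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    weighted = [[w * v / n for v, w, n in zip(row, weights, norms)]
                for row in matrix]
    wcols = list(zip(*weighted))
    # Ideal solution takes the best value per criterion; anti-ideal the worst.
    ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
    anti = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]
    return [math.dist(r, anti) / (math.dist(r, ideal) + math.dist(r, anti))
            for r in weighted]

# Hypothetical scores for three chaos tools on three criteria:
# maturity (benefit), community (benefit), setup effort (cost).
matrix = [
    [8, 9, 3],   # tool A
    [6, 7, 2],   # tool B
    [9, 5, 6],   # tool C
]
scores = topsis(matrix, weights=[0.5, 0.3, 0.2], benefit=[True, True, False])
best = max(range(len(scores)), key=scores.__getitem__)  # tool A here
```

A score near 1 means the alternative is close to the ideal and far from the anti-ideal, which is how the preferred chaos tool would be selected.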

    Single-click to data insights: transaction replication and deployment automation made simple for the cloud age

    In this report we present our initial work on making the MonetDB column-store analytical database ready for cloud deployment. Standing in the new space between research and industry, we have tried to combine approaches from both worlds. We detail how we use modern technologies and tools to automate the building of virtual machine images for cloud, datacentre and desktop use. We also explain our solution for asynchronous transaction replication in MonetDB. The report concludes with how this all ties together with our efforts to make MonetDB ready for the age where high-performance data analytics is available in a single click.
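The essence of asynchronous replication, as opposed to synchronous schemes, is that a commit on the primary never waits for the replica; committed transactions are shipped via a log and replayed later. The toy sketch below illustrates that idea only; all names are invented here and MonetDB's actual mechanism differs in detail.

```python
import queue
import threading

log = queue.Queue()   # stands in for the shipped transaction log
replica = {}          # the replica's (toy) key-value state

def primary_commit(txn):
    """Commit on the primary, then ship the txn asynchronously."""
    log.put(txn)      # enqueue and return; never wait for the replica

def replica_apply():
    """Replica side: drain the log and replay transactions in order."""
    while True:
        txn = log.get()
        if txn is None:          # shutdown sentinel
            break
        replica.update(txn)      # replay the transaction
        log.task_done()

worker = threading.Thread(target=replica_apply)
worker.start()
primary_commit({"x": 1})
primary_commit({"y": 2})
log.put(None)
worker.join()
print(replica)  # {'x': 1, 'y': 2}
```

The trade-off this models is the usual one: commits are fast and the replica is eventually consistent, so a primary crash can lose transactions that were committed but not yet shipped.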

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, the fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL seems to be the right technology to fulfil its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings and possible use cases of the big data file formats available for Hadoop, which is the foundation of most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.

    Containerization in Cloud Computing: performance analysis of virtualization architectures

    The growing adoption of the cloud is strongly influenced by the emergence of technologies aimed at improving the development and deployment processes of enterprise-grade applications. The goal of this thesis is to analyse one of these solutions, called "containerization", and to evaluate in detail how this technology can be adopted in cloud infrastructures as an alternative to complementary solutions such as virtual machines. Until now, the traditional virtual machine model has been the predominant solution on the market. The significant architectural difference that containers offer has led to this technology's rapid adoption, since it greatly improves resource management and sharing and guarantees significant improvements in the provisioning of individual instances. In this thesis, containerization is examined from both the infrastructure and the application point of view. For the former, performance is analysed by comparing LXD, Docker and KVM as hypervisors for the OpenStack cloud infrastructure; the latter concerns the development of enterprise-grade applications that must be deployed on a set of distributed servers. In that case, higher-level services such as orchestration are needed, so the performance of the following solutions is compared: Kubernetes, Docker Swarm, Apache Mesos and Cattle.

    Context constraint integration and validation in dynamic web service compositions

    System architectures that cross organisational boundaries are usually implemented with Web service technologies due to their inherent interoperability benefits. With increasing flexibility requirements, such as on-demand service provision, a dynamic approach to service architecture focusing on composition at runtime is needed. The possibility of technical faults, as well as violations of functional and semantic constraints, requires a comprehensive notion of context that captures composition-relevant aspects. Context-aware techniques are consequently required to support constraint validation for dynamic service composition. We present techniques to respond to problems occurring during the execution of dynamically composed Web services implemented in WS-BPEL. A notion of context, covering physical and contractual faults and violations, is used to safeguard composed service executions dynamically. Our aim is to present an architectural framework from an application-oriented perspective, addressing practical considerations of a technical framework.
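The general pattern of validating contractual constraints around a composed service invocation can be sketched in a few lines. This is a language-agnostic illustration of runtime pre/postcondition checking, not the WS-BPEL framework described in the abstract; the service, the constraints and the `ConstraintViolation` type are all hypothetical.

```python
class ConstraintViolation(Exception):
    """Raised when a contractual context constraint is violated."""

def validated(pre, post):
    """Wrap a service call with context-constraint checks."""
    def wrap(service):
        def call(ctx, *args):
            # Precondition drawn from the execution context.
            if not pre(ctx):
                raise ConstraintViolation("precondition violated")
            result = service(ctx, *args)
            # Postcondition on the service reply.
            if not post(ctx, result):
                raise ConstraintViolation("postcondition violated")
            return result
        return call
    return wrap

# Hypothetical composed service: callers must be authenticated (a
# contractual constraint) and every reply must carry a price.
@validated(pre=lambda ctx: ctx.get("authenticated", False),
           post=lambda ctx, r: "price" in r)
def quote_service(ctx, item):
    return {"item": item, "price": 9.99}

print(quote_service({"authenticated": True}, "book"))
```

A violation raised here would be the hook at which a recovery strategy (retry, substitute service, compensation) could be attached in a dynamic composition.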