
    Challenge Token Based Security for Hybrid Clouds

    Cloud computing has become an essential part of web technology, and its rapid growth makes it worthwhile for companies to invest in the cloud. As the number of clouds grows, inter-cloud communication becomes necessary, since the concepts of multi-cloud and hybrid cloud are also spreading quickly. With this rapid growth, more and more challenges are arising in the field of cloud computing, and many researchers are focusing on cloud-oriented problems. With the emergence of cloud computing, the terms "hybrid topology" and "hybrid deployment" are becoming increasingly common: a "hybrid cloud" is a group of clouds in which different cloud deployments are joined into one connected cluster. Another area of research focuses on communication between a cloud and a non-cloud computing system. Hybrid cloud computing mainly concerns the operation of data centers where different software is installed over large and growing data stores to provide information to the users of the system. Techniques for hybrid cloud security can be built around the encryption and decryption of data and key-based security algorithms, which are mainly oriented towards authentication and authorization, as in wired and wireless networks. One such mechanism is to share a challenge text between the clouds before the actual communication starts, for authentication. Work done in this area to date has been oriented towards other techniques for securing communication between two or more clouds in a hybrid cloud
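The abstract only states that a challenge text is shared between clouds before communication begins. A minimal sketch of one common way to realize such a challenge-response handshake is an HMAC over a random nonce with a pre-shared key; the key distribution step and the use of HMAC-SHA256 are assumptions, not details from the paper.

```python
import hashlib
import hmac
import os

# Assumed: the two clouds already hold a pre-shared key (how it is
# distributed is outside this sketch).
SHARED_KEY = os.urandom(32)

def issue_challenge() -> bytes:
    """Cloud A generates a fresh random challenge (nonce)."""
    return os.urandom(16)

def answer_challenge(challenge: bytes, key: bytes) -> bytes:
    """Cloud B proves knowledge of the shared key by keying an HMAC."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Cloud A recomputes the HMAC and compares in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = answer_challenge(challenge, SHARED_KEY)
assert verify(challenge, response, SHARED_KEY)
```

Because a fresh nonce is issued per session, a captured response cannot be replayed against a later challenge.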

    TSKY: a dependable middleware solution for data privacy using public storage clouds

    Dissertation submitted for the degree of Master in Informatics Engineering. This dissertation aims to take advantage of the virtues offered by Internet-based cloud data storage systems, proposing a solution that avoids security issues by combining different providers' offerings in a vision of cloud-of-clouds storage and computing. The solution, the TSKY System (or Trusted Sky), is implemented as a middleware system featuring a set of components designed to establish and enhance conditions for security, privacy, reliability and availability of data, with these conditions being secured and verifiable by the end-user, independently of each provider. These components implement cryptographic tools, including threshold and homomorphic cryptographic schemes, combined with encryption, replication, and dynamic indexing mechanisms. The solution provides data management and distribution functions over data kept in different storage clouds, not necessarily trusted, improving and ensuring resilience and security guarantees against Byzantine faults and attacks. The generic approach of the TSKY system model and its implemented services are evaluated in the context of a Trusted Email Repository System (TSKY-TMS System). The TSKY-TMS system is a prototype that uses the base TSKY middleware services to store mailboxes and email messages in a cloud-of-clouds
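The threshold schemes mentioned above rest on the idea that a secret can be split into n shares, held by different storage clouds, such that any k shares reconstruct it while fewer reveal nothing. The sketch below illustrates that idea with classic Shamir secret sharing over a prime field; the field size and function names are assumptions for illustration, not TSKY's actual schemes or parameters.

```python
import random

# A Mersenne prime large enough for small integer secrets.
PRIME = 2**127 - 1

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# One share per storage cloud: 5 clouds, any 3 suffice.
shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789
```

With this layout, up to two providers can fail or act maliciously without the data becoming unrecoverable, which matches the Byzantine-fault resilience goal stated in the abstract.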

    Selected Papers from the First International Symposium on Future ICT (Future-ICT 2019) in Conjunction with 4th International Symposium on Mobile Internet Security (MobiSec 2019)

    The International Symposium on Future ICT (Future-ICT 2019), in conjunction with the 4th International Symposium on Mobile Internet Security (MobiSec 2019), was held on 17–19 October 2019 in Taichung, Taiwan. The symposium provided academic and industry professionals an opportunity to discuss the latest issues and progress in advancing smart applications based on future ICT and the related security concerns. The symposium aimed to publish high-quality papers strictly related to the various theories and practical applications concerning advanced smart applications, future ICT, and the related communications and networks. It was expected that the symposium and its publications would trigger further related research and technology improvements in this field

    An Overview of Data Storage in Cloud Computing

    Cloud computing is a functional paradigm that is evolving and making IT utilization easier by the day for consumers. Cloud computing offers standardized applications to users online, in a manner that can be accessed regularly. Such applications can be accessed by as many persons as permitted within an organisation, without worrying about the maintenance of the applications. The Cloud also provides a channel to design and deploy user applications, including their storage space and databases, without concern for the underlying operating system. The application can run without consideration for on-premise infrastructure. The Cloud also makes massive storage available both for data and databases. Storage of data on the Cloud is one of the core activities in Cloud computing. Storage utilizes infrastructure spread across several geographical locations and makes use of the internet, virtualization, encryption and other technologies to ensure the security of data. This paper presents the state of the art from the literature available on Cloud storage. The study was executed by means of a review of the available literature. It examines present trends in the area of Cloud storage and provides a guide for future research. The objective of this paper is to answer the question: what are the current trends and developments in Cloud storage? The expected result of this review is the identification of trends in Cloud storage that can benefit prospective Cloud researchers, users and even providers

    ARCHITECTURE FOR A CBM+ AND PHM CENTRIC DIGITAL TWIN FOR WARFARE SYSTEMS

    The Department of the Navy’s continued progression from time-based maintenance to condition-based maintenance plus (CBM+) shows the importance of increasing operational availability (Ao) across fleet weapon systems. This capstone uses the concept of digital efficiency from a digital twin (DT) combined with a three-dimensional (3D) direct metal laser melting printer as the physical host on board a surface vessel. The DT provides an agnostic conduit for combining model-based systems engineering with digital analysis for real-time prognostic health monitoring while improving predictive maintenance. With the DT at the forefront of prioritized research and development, the 3D printer combines the value of additive manufacturing with complex systems in dynamic shipboard environments. To demonstrate that the DT possesses parallel abilities for improving both the physical host’s Ao and the end-goal mission, this capstone develops a DT architecture and a high-level model. The model focuses on specific printer components (deionized [DI] water level, DI water conductivity, air filters, and the laser motor drive system) to demonstrate the DT’s inherent effectiveness towards CBM+. To embody the system-of-systems analysis for printer suitability and performance, more components should be evaluated and combined with the ship’s environmental data. Additionally, this capstone recommends the use of DTs as a nexus into more complex weapon systems while using a deeper level of design of experiments. Outstanding Thesis. Civilian, Department of the Navy; Commander, United States Navy. Approved for public release. Distribution is unlimited
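The kind of component-level condition monitoring the model applies to the printer can be sketched as simple threshold bands checked against telemetry. All limits and parameter names below are invented for illustration; the capstone's actual model and thresholds are not given in the abstract.

```python
# Hypothetical operating bands for the printer components named above
# (DI water level, DI water conductivity, air filter, laser motor drive).
# Every limit here is an assumption, chosen only to illustrate the idea.
LIMITS = {
    "di_water_level_pct":  (40.0, 100.0),  # refill below 40 %
    "di_water_conduct_uS": (0.0, 1.0),     # flag above 1.0 µS/cm
    "air_filter_dp_kPa":   (0.0, 2.5),     # replace above 2.5 kPa
    "laser_motor_temp_C":  (0.0, 60.0),    # inspect above 60 °C
}

def assess(reading: dict[str, float]) -> list[str]:
    """Return a maintenance flag for every reading outside its band."""
    flags = []
    for name, value in reading.items():
        lo, hi = LIMITS[name]
        if not lo <= value <= hi:
            flags.append(f"{name}: {value} outside [{lo}, {hi}]")
    return flags

telemetry = {"di_water_level_pct": 35.0, "di_water_conduct_uS": 0.4,
             "air_filter_dp_kPa": 2.8, "laser_motor_temp_C": 48.0}
print(assess(telemetry))  # two components need attention
```

A DT would evaluate such checks continuously against live sensor feeds rather than a single snapshot, and feed the flags into the CBM+ maintenance pipeline.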

    A Differentiated Proposal of a Three-Dimension I/O Performance Characterization Model Focusing on Storage Environments

    The I/O bottleneck remains a central issue in high-performance environments. Cloud computing, high-performance computing (HPC) and big data environments share many underlying difficulties in delivering data at the rate requested by high-performance applications. This increases the possibility of bottlenecks throughout the application-feeding process, caused by the hardware devices at the bottom of the storage system layer. In recent years, many researchers have proposed solutions to improve the I/O architecture, considering different approaches. Some take advantage of hardware devices while others focus on sophisticated software approaches. However, due to the complexity of dealing with high-performance environments, creating solutions to improve I/O performance in both software and hardware is challenging and gives researchers many opportunities. Classifying these improvements along different dimensions allows researchers to understand how the improvements have been built over the years and how the field progresses. It also allows future efforts to be directed to research topics that have developed at a lower rate, balancing the overall development process. This research presents a three-dimension characterization model for classifying research works on I/O performance improvements for large-scale storage computing facilities. The classification model can also be used as a guideline framework to summarize research, providing an overview of the current scenario. We also used the proposed model to perform a systematic literature mapping covering ten years of research on I/O performance improvements in storage environments. This study classified hundreds of distinct works, identifying which hardware, software, and storage systems received the most attention over the years, which elements were most often proposed, and where these elements were evaluated.
In order to justify the importance of this model and the development of solutions that target I/O performance improvements, we evaluated a subset of these improvements using a real and complete experimentation environment, Grid5000. Analysis over different scenarios using a synthetic I/O benchmark demonstrates how the throughput and latency parameters behave when performing different I/O operations using distinct storage technologies and approaches
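A synthetic I/O benchmark of the kind described above boils down to timing fixed-size block operations and deriving throughput and latency from the measurements. The sketch below is a minimal probe of that idea for sequential writes to a local temp file; the block size, file size, and use of `fsync` are assumptions, not the authors' Grid5000 configuration.

```python
import os
import tempfile
import time

BLOCK = 1 << 20   # 1 MiB per write
BLOCKS = 64       # 64 MiB total
payload = os.urandom(BLOCK)

# Create a scratch file on the storage under test (here: the temp dir).
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

# Sequential write: record per-block latency, then derive throughput.
latencies = []
with open(path, "wb") as f:
    for _ in range(BLOCKS):
        t0 = time.perf_counter()
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # force the block out to the storage device
        latencies.append(time.perf_counter() - t0)

total = sum(latencies)
print(f"throughput:   {BLOCKS * BLOCK / total / 2**20:.1f} MiB/s")
print(f"mean latency: {1000 * total / BLOCKS:.2f} ms/block")
os.unlink(path)
```

Repeating the same loop with different block sizes, access patterns (sequential vs. random offsets) and target devices is what lets such a benchmark expose how throughput and latency trade off across storage technologies.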

    Logico-linguistic semantic representation of documents

    The knowledge behind the gigantic pool of available data remains largely unextracted. Techniques such as ontology design, RDF representations, and hypernym extraction have been used to represent this knowledge. However, the combination of logic (first-order predicate logic, FOPL) and linguistics (semantics) has not been explored in depth for this purpose. Search engines struggle to extract specific answers to queries in the absence of structured domain knowledge. The present paper deals with the design of a formalism to extract and represent knowledge from data in a consistent format. The combined application of logic and linguistics greatly eases, and increases the precision of, knowledge translation from natural language. The results clearly indicate the effectiveness of the developed knowledge extraction and representation methodology, providing intelligence to machines for efficient analysis of data. The methodology helps machines deliver precise results in an efficient manner