
    Girt by sea: understanding Australia’s maritime domains in a networked world

    This study aims to provide the background, language and context necessary for an informed understanding of the challenges and dilemmas faced by those responsible for the efficacy of Australia's maritime domain awareness system. Against a rapidly changing region dominated by the rise of China, India and, closer to home, Indonesia, Australia's approaches to understanding its maritime domains will be influenced by strategic factors and diplomatic judgements as well as operational imperatives. Australia's alliance relationship with the United States and its relationships with regional neighbours may be expected to have a profound impact on the strength of the information-sharing and interoperability regimes on which so much of Australia's maritime domain awareness depends. The purpose of this paper is twofold. First, it seeks to explain in plain English some of the principles, concepts and terms that maritime domain awareness practitioners grapple with on a daily basis. Second, it points to a series of challenges that governments face in deciding how to spend scarce tax dollars to deliver a maritime domain awareness system that is necessary and sufficient for the protection and promotion of Australia's national interests.

    Cloud transactions and caching for improved performance in clouds and DTNs

    In distributed transactional systems deployed over massively decentralized cloud servers, access policies are typically replicated. Interdependencies and inconsistencies among policies need to be addressed, as they can affect performance, throughput and accuracy. Several stringent levels of policy consistency constraints and enforcement approaches to guarantee the trustworthiness of transactions on cloud servers are proposed. We define a look-up table to store policy versions and a Tree-Based Consistency approach to maintain a tree structure of the servers. By integrating the look-up table with the tree-based consistency approach, we propose an enhanced version of the Two-Phase Validation Commit (2PVC) protocol, integrated with the Paxos commit protocol, with reduced or nearly equal performance overhead and no loss of accuracy or precision. A new caching scheme is also proposed that targets military/defense applications of Delay-Tolerant Networks (DTNs), where the data to be cached follow entirely different priority levels. In these applications, data popularity can be defined not only by request frequency but also by importance, such as who created and ranked the points of interest in the data, and when and where it was created; higher-rank data tied to a specific location may matter more even if it is requested less often than popular lower-priority data. Thus, our caching scheme is designed with these different requirements in mind for DTN networks in defense applications. The performance evaluation shows that our caching scheme reduces the overall access latency, cache misses and cache memory usage when compared to existing caching schemes --Abstract, page iv
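The priority-aware eviction idea from the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's actual scheme: the class name, the 0.7 weighting, and the linear blend of rank and frequency are all assumptions. The point is that eviction considers both the creator-assigned rank and the observed request frequency, so important but rarely requested defense data survives in the cache:

```python
from dataclasses import dataclass

@dataclass
class CacheItem:
    key: str
    rank: int      # importance assigned at creation (higher = more important)
    freq: int = 0  # request frequency observed so far

class PriorityAwareCache:
    """Evicts the item with the lowest blended score of rank and frequency.
    (Illustrative sketch; the weighting is an assumption, not the paper's.)"""

    def __init__(self, capacity, rank_weight=0.7):
        self.capacity = capacity
        self.rank_weight = rank_weight
        self.items = {}

    def score(self, item):
        # weighted blend: high-rank data survives even with low frequency
        return self.rank_weight * item.rank + (1 - self.rank_weight) * item.freq

    def get(self, key):
        item = self.items.get(key)
        if item:
            item.freq += 1
        return item

    def put(self, key, rank):
        if key in self.items:
            return
        if len(self.items) >= self.capacity:
            victim = min(self.items.values(), key=self.score)
            del self.items[victim.key]
        self.items[key] = CacheItem(key, rank)
```

With a capacity of two, a low-rank item is evicted before a high-rank one even after several requests, illustrating the rank-over-frequency trade-off the abstract describes.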

    Developing a Cloud Computing Framework for University Libraries

    Our understanding of the security challenges university libraries face when storing research output on the cloud is inadequate and incomplete. Existing research has mostly focused on profit-oriented organizations. To address this limitation within the university environment, this paper unravels the data/information security concerns of cloud storage services within university libraries. Given the changes occurring in libraries, the paper serves to inform users and library managers that traditional approaches have not guaranteed the security of research output. The paper builds upon the work of Shaw and the cloud storage security framework, which links aspects of cloud security and helps explain why university libraries move research output into cloud infrastructure and how the cloud service is made more secure. Specifically, the paper examines the existing storage carriers/media for storing research output and the risks associated with cloud storage services for university libraries. The paper partly fills this gap through a case-study examination of reports from university libraries in two African countries (Ghana and Uganda) on research output and cloud storage security. The paper argues that, in storing university research output on the cloud, libraries must consider the security of content, the resilience of librarians, the determination of access levels, and enterprise cloud storage platforms. An interview instrument is used to collect qualitative data from librarians, and thematic content analysis is used to analyze the research data. Significantly, the results show that copyright law infringement, unauthorized data access, policy issues, insecurity of content, cost, and the lack of interoperable cloud standards were the major risks associated with cloud storage services.
It is expected that university libraries pay more attention to the security/confidentiality of content, the resilience of librarians, the determination of access levels, and enterprise cloud storage platforms to enhance the cloud security of research output. The paper contributes to the field of knowledge by developing a framework that supports an approach to understanding security in cloud storage. It also enables actors in the library profession to understand the makeup and measures of security issues in cloud storage. The empirical evidence presented makes clear that university libraries have migrated research output into cloud infrastructure as an alternative for the continued storage, maintenance and access of information.

    It's about THYME: On the design and implementation of a time-aware reactive storage system for pervasive edge computing environments

    This work was partially supported by Fundacao para a Ciencia e a Tecnologia (FCT-MCTES) through project DeDuCe (PTDC/CCI-COM/32166/2017), NOVA LINCS UIDB/04516/2020, and grant SFRH/BD/99486/2014; and by the European Union through project LightKone (grant agreement n. 732505).
    Nowadays, smart mobile devices generate huge amounts of data in all sorts of gatherings. Much of that data has localized and ephemeral interest, but can be of great use if shared among co-located devices. However, mobile devices often experience poor connectivity, leading to availability issues if application storage and logic are fully delegated to a remote cloud infrastructure. In turn, the edge computing paradigm pushes computations and storage beyond the data center, closer to end-user devices where data is generated and consumed, enabling the execution of certain components of edge-enabled systems directly and cooperatively on edge devices. In this article, we address the challenge of supporting reliable and efficient data storage and dissemination among co-located wireless mobile devices without resorting to centralized services or network infrastructures. We propose THYME, a novel time-aware reactive data storage system for pervasive edge computing environments that exploits synergies between the storage substrate and the publish/subscribe paradigm. We present the design of THYME and elaborate a three-fold evaluation, through an analytical study and both simulation and real-world experimentation, characterizing the scenarios best suited for its use. The evaluation shows that THYME allows the notification and retrieval of relevant data with low overhead and latency, and also with low energy consumption, proving to be a practical solution in a variety of situations.
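The coupling of storage with publish/subscribe that THYME exploits can be sketched with a simplified model. Everything here is an illustrative assumption rather than THYME's actual API: items carry a validity window, and a subscription is notified both of future publications and of already-stored items whose time windows overlap its scope, which is what makes the store "reactive":

```python
class TimeAwareStore:
    """Sketch of a time-aware reactive store (simplified assumption,
    not THYME's real interface). Items and subscriptions both carry
    [start, end] time scopes; a match requires tag equality and
    overlapping windows."""

    def __init__(self):
        self.items = []  # (tag, payload, start, end)
        self.subs = []   # (tag, start, end, callback)

    def publish(self, tag, payload, start, end):
        # store the item, then push it to every overlapping subscription
        self.items.append((tag, payload, start, end))
        for s_tag, s_start, s_end, cb in self.subs:
            if s_tag == tag and start <= s_end and s_start <= end:
                cb(payload)

    def subscribe(self, tag, start, end, callback):
        self.subs.append((tag, start, end, callback))
        # reactive part: also deliver already-stored matching items
        for i_tag, payload, i_start, i_end in self.items:
            if i_tag == tag and i_start <= end and start <= i_end:
                callback(payload)
```

A subscriber arriving late still receives earlier items whose windows overlap its scope, while items published outside the window are filtered out, capturing the "localized and ephemeral interest" the abstract describes.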

    FM 30-16, Technical Intelligence, 28 February 1969.

    This manual defines technical intelligence and explains the technical intelligence process. It briefly discusses the top-level Army technical intelligence production organizations, which at that time were the Foreign Science and Technology Center, the Missile Intelligence Directorate of the US Army Missile Command, and the Medical Intelligence Office of the Office of the Surgeon General of the Army. It explains technical intelligence activities and planning in US forces in the field. Considerable attention is given to explaining the proper procedures for the recovery and evacuation of foreign equipment and documents. The appendices contain an extensive list of references, the categories of technical intelligence, and an example of a technical intelligence plan.

    An overview of the Copernicus C4I architecture

    The purpose of this thesis is to provide the reader with an overview of the U.S. Navy's Copernicus C4I Architecture. The acronym "C4I" emphasizes the intimate relationship between command, control, communications and intelligence, as well as their significance to the modern-day warrior. Never in the history of the U.S. Navy has the importance of an extremely flexible C4I architecture been made more apparent than in the last decade. Included are discussions of the Copernicus concept, its command and control doctrine, its architectural goals and components, and Copernicus-related programs. Also included is a discussion of joint service efforts and the initiatives being conducted by the U.S. Marine Corps, the U.S. Air Force and the U.S. Army. Finally, a discussion of the Copernicus Phase I Requirements Definition Document's compliance with the acquisition process as required by DoD Instruction 5000.2 is presented.
    http://archive.org/details/overviewofcopern00dear
    Lieutenant, United States Navy. Approved for public release; distribution is unlimited.

    SDSF : social-networking trust based distributed data storage and co-operative information fusion.

    As of 2014, about 2.5 quintillion bytes of data were created each day, and 90% of the data in the world had been created in the previous two years alone. This data can be stored on external hard drives, on unused space in peer-to-peer (P2P) networks, or, in the currently more popular approach, in the Cloud. When users store their data in the Cloud, the entire data set is exposed to the administrators of the services, who can view and possibly misuse it. With the growing popularity and usage of Cloud storage services like Google Drive and Dropbox, concerns about privacy and security are increasing. Searching this distributed stored data for content or documents, given the rate of data generation, is a big challenge. Information fusion is used to extract information based on the user's query and to combine the data to learn useful information. This problem is challenging when the data sources are distributed and heterogeneous in nature and the trustworthiness of the documents may vary. This thesis proposes two innovative solutions to these problems. First, to remedy the security and privacy situation of stored data, we propose an innovative Social-based Distributed Data Storage and Trust based co-operative Information Fusion Framework (SDSF). The main objective is to create a framework that assists in providing a secure storage system while not overloading a single system, using a P2P-like approach. This framework allows users to share storage resources among friends and acquaintances without compromising security or privacy, while enjoying all the benefits that Cloud storage offers. The system fragments the data and encodes it to store it securely on the unused storage capacity of the data owner's friends' resources. The system thus gives the user centralized control over the selection of peers to store the data.
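The fragment-and-encode step above relies on erasure coding. A minimal sketch follows, using a single XOR parity fragment as a stand-in for the thesis's full erasure-coding scheme; the function names and the one-fragment fault-tolerance limit are illustrative assumptions. The key property is that a lost fragment can be rebuilt from the surviving peers:

```python
from functools import reduce

def fragment_with_parity(data: bytes, k: int):
    """Split data into k equal fragments plus one XOR parity fragment.
    (Minimal stand-in for real erasure coding: tolerates one lost fragment.)"""
    size = -(-len(data) // k)  # ceiling division
    frags = [data[i * size:(i + 1) * size].ljust(size, b'\x00') for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags, parity

def recover(frags, parity, lost_index):
    """Rebuild the fragment at lost_index by XORing the parity
    with all surviving fragments."""
    survivors = [f for i, f in enumerate(frags) if i != lost_index] + [parity]
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
```

Each friend's device would hold one fragment, so no single peer (or administrator) sees the whole file, and the redundancy covers a device going offline.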
Secondly, to retrieve the stored distributed data, the proposed system performs the fusion from distributed sources as well. The technique uses several algorithms to ensure the correctness of the query used to retrieve and combine the data, improving information fusion accuracy and efficiency when combining heterogeneous, distributed and massive data on the Cloud for time-critical operations. We demonstrate that the retrieved documents are genuine when trust scores are also used while retrieving the data sources. The thesis makes several research contributions. First, we implement Social Storage using erasure coding. Erasure coding fragments the data, encodes it, and, through the introduction of redundancy, resolves issues resulting from device failures. Second, we exploit the inherent concept of trust embedded in social networks to determine the nodes and build a secure network where the fragmented data should be stored, since a social network consists of friends, family and acquaintances. The trust between friends and the availability of their devices allow the user to make an informed choice about where the information should be stored, using 'k' optimal paths. Third, for the retrieval of this distributed stored data, we propose information fusion on distributed data using a combination of Enhanced N-grams (to ensure the correctness of the query), Semantic Machine Learning (to extract documents based on context rather than just a bag of words, while also considering the trust score) and Map Reduce (NSM) algorithms. Lastly, we evaluate the performance of SDSF's distributed storage using erasure coding, identify social storage providers based on trust, and evaluate their trustworthiness. We also evaluate the performance of our information fusion algorithms in distributed storage systems.
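The trust-aware retrieval idea can be illustrated with a small sketch. This is not the thesis's NSM formulation: the `retrieve` function, the bigram overlap measure, and the `alpha` blend are all assumptions made for illustration. It shows only the core intuition that ranking combines query relevance (here, n-gram overlap) with the source's trust score, so equally relevant documents from more trusted sources rank higher:

```python
def ngrams(text, n=2):
    """Return the set of word n-grams in a text (simple relevance signal)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def retrieve(query, docs, trust, alpha=0.6, n=2):
    """Rank documents by a blend of n-gram overlap with the query and the
    source's trust score. (Illustrative sketch; alpha and the linear blend
    are assumptions, not the thesis's exact algorithm.)"""
    q = ngrams(query, n)
    scored = []
    for doc_id, text in docs.items():
        overlap = len(q & ngrams(text, n)) / len(q) if q else 0.0
        score = alpha * overlap + (1 - alpha) * trust.get(doc_id, 0.0)
        scored.append((score, doc_id))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]
```

When two documents match the query equally well, the one from the more trusted source is returned first, which is the property the thesis uses to argue that retrieved documents are genuine.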
Thus, using the SDSF framework, the system implements the beneficial features of P2P networks and Cloud storage while avoiding the pitfalls of both. The multi-layered encryption ensures that all other users, including the system administrators, cannot decode the stored data. The application of the NSM algorithm improves the effectiveness of fusion, since a large number of genuine documents are retrieved for fusion.

    Current and future efforts to vary the level of detail for the common operational picture

    The Joint Staff developed the C4I for the Warrior Concept in 1992, which stated that the warrior needs a fused, real-time, true representation of the battlespace. To help accomplish this vision, the Global Command and Control System was created. It provides the Common Operational Picture described above, but only down to the level of the Unified Commander. This thesis is a comprehensive report that reviews the situational awareness systems currently available to the commander, along with current and future efforts to bring a common operational picture to all levels of command. The detailed discussions of these systems will help students and researchers in the Joint C4I curriculum at the Naval Postgraduate School develop a better understanding of the difficulties in getting a true common operational picture to all services at all levels.
    http://archive.org/details/currentfutureeff00hage
    Lieutenant, United States Navy. Approved for public release; distribution is unlimited.