
    A framework for the development of Android mobile electronic prescription transfer applications in compliance with security requirements mandated by the Australian healthcare industry

    This thesis investigates mobile electronic transfer of prescription (ETP) in compliance with the security requirements mandated by the Australian healthcare industry and proposes a framework for the development of an Android mobile electronic prescription transfer application. Based upon the findings and knowledge gained from constructing this framework, a second framework is then derived for assessing Android mobile ETP applications for their security compliance. The centralised exchange model-based ETP solution currently used in the Australian healthcare industry is an expensive solution for on-going use. With challenges such as an aging population and the rising burden of chronic disease, the cost of the current ETP solution's operational infrastructure is certain to rise in the future. In an environment where it is increasingly beneficial for patients to engage in and manage their own information and subsequent care, the current solution fails to offer the patient direct access to their electronic prescription information. It also fails to incorporate features that would dramatically improve the quality of the patient's care and safety, e.g. alerts for drug allergies, harmful dosages and script expiration. Over a decade old, the current ETP solution was essentially designed and built to meet legislative and regulatory requirements, with avoiding change as its highest priority. With little, if any, provision for future growth and innovation, it was not designed to cater to the evolving needs of the ETP process. This research identifies the gaps in the current ETP implementation (i.e. dependency on infrastructure, significant on-going cost and limited availability of the patient's medication history) and proposes a framework for building a secure mobile ETP solution on the Android mobile operating system platform to address those gaps. The literature review examined the significance of ETP for the nation's larger initiative to provide an improved and more maintainable healthcare system. It also revealed the stance of each jurisdiction, from legislative and regulatory perspectives, on transitioning to a fully electronic ETP solution, and identified the regulatory mandates of each jurisdiction for ETP as well as the security standards by which the current ETP implementation is governed so as to conform to those mandates. The literature review essentially established how the Australian healthcare industry's various prescription-related legislations and regulations are constructed, and the complexity of this construction for ETP. The jurisdictional regulatory mandates identified in the literature review translate into a set of security requirements, which establish the basis of the guiding framework for the development of a security-compliant Android mobile ETP application. A number of experiments were conducted focusing on the native security features of the Android operating system, as well as wireless communication technologies such as NFC and Bluetooth, in order to propose an alternative mobile ETP solution with security assurance comparable to the current ETP implementation. The employment of such a proof-of-concept prototype, coupled with a series of iterative experiments, strengthens the validity and practicality of the proposed framework.
The first experiment successfully proved that the Android operating system has sufficient encryption capabilities, in compliance with the security mandates, to secure electronic prescription information from the data-at-rest perspective. The second experiment indicated that using NFC technology to implement the alternative transfer mechanism for exchanging electronic prescription information between ETP participating devices is not practical. The next iteration of the experimentation, using Bluetooth technology, proved that it can be utilised as an alternative electronic prescription transfer mechanism to the current Internet-based approach. These experimental outcomes constituted a partial but sufficient proof-of-concept prototype for this research. Extensive document analysis and iterative experiments showed that the framework constructed by this research can guide the development of an alternative mobile ETP solution with security assurance comparable to the current solution and better access to the patient's medication history. This alternative would have no operational dependence upon infrastructure and its associated on-going cost to the nation's healthcare expenditure. In addition, use of this mobile ETP alternative has the potential to change the public's perception (i.e. acceptance from regulatory and security perspectives) of mobile healthcare solutions, thereby paving the way for further innovation and future enhancements in eHealth.
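As a rough illustration of the data-at-rest protection examined in the first experiment, the sketch below encrypts a prescription payload with AES-GCM using the standard javax.crypto API that Android exposes. The key size, cipher mode and payload format are assumptions made here for illustration, not necessarily those used in the thesis; on a real device the key would be generated in and never leave the hardware-backed Android Keystore.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

/** Illustrative only: encrypts a prescription payload at rest with AES-GCM. */
public class PrescriptionVault {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);                         // 256-bit key, assumed sufficient for the mandate
        SecretKey key = kg.generateKey();     // on Android this key would live in the Keystore

        byte[] iv = new byte[12];             // 96-bit IV, as recommended for GCM
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(
                "scriptId=123;drug=amoxicillin;dose=500mg".getBytes(StandardCharsets.UTF_8));

        // Stored blob would combine the IV and ciphertext; printed here for the demo.
        System.out.println(Base64.getEncoder().encodeToString(ciphertext));
    }
}
```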

    An introduction to the history of European computer science and technology

    (essays and documents)

    Strategic cost management in a global supply chain

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. Includes bibliographical references (p. 100). In the face of an economic downturn, cost has become a focal point of supply chain management. Cost management is increasingly being recognized as a vital core competency needed for survival. As companies transition from being vertically integrated to pursuing increasingly outsourced manufacturing strategies, modeling and monitoring the total cost of manufacturing products has become crucial, and complicated. In the context of the automated test equipment industry, this thesis explores the impact of outsourcing on product cost and cost management practices. It examines prevailing cost management practices with reference to design and procurement, as well as methods to leverage information technology and re-engineer business processes to manage "spend" effectively and efficiently. It surveys capabilities that are available through software and examines cost-benefit tradeoffs that have to be addressed in selecting such systems. By Venkatesh G. Rao. S.M., M.B.A.

    Hardware/software architectures for iris biometrics

    Nowadays, the need to identify users of facilities and services has become quite important, not only to determine who accesses a system and/or service, but also to determine which privileges should be granted to each user. For achieving such identification, Biometrics is emerging as a technology that provides a high level of security, as well as being convenient and comfortable for the citizen. Most biometric systems are based on computer solutions, where the identification process is performed by servers or workstations, whose cost and processing time make them not feasible for some situations. However, Microelectronics can provide a suitable solution without the need for complex and expensive computer systems. Microelectronics is a subfield of Electronics and, as the name suggests, is related to the study, development and/or manufacturing of electronic components, i.e. integrated circuits (ICs). We have focused our research on a specific field of Microelectronics: hardware/software co-design. This technique is widely used for developing specialised devices with high computational cost. It relies on using both hardware and software in an effective way, thus obtaining a device that is faster than a software-only solution, or smaller than a device that uses dedicated hardware for all the processes. The question of how to obtain an effective solution for Biometrics is addressed by considering all the different aspects of these systems. In this Thesis, we have made two important contributions, both related to Iris Biometrics: the first for a verification system based on an ID token, and the second a search engine for massive recognition systems. The first contribution is a biometric system architecture proposal based on ID tokens in a distributed system. In this contribution, we specify some considerations to be taken into account in the system and describe the different functionalities of the elements which form it, such as the central servers and/or the terminals. The terminal's only function is to acquire the initial raw biometric data, which is transmitted to the token under cryptographic security, where the entire biometric process is performed. The ID token architecture is based on hardware/software co-design. The proposed architecture, independent of the biometric modality, divides the biometric process between hardware and software in order to achieve functionality and performance beyond that of existing tokens. This partitioning considers not only the decrease in computational time that hardware can provide, but also the reduction of area and power consumption, the increase in security levels and the effects on recognition performance across the whole design. To prove the proposal, we have implemented an ID token based on Iris Biometrics following these premises. We have developed the modules of an iris algorithm on both hardware and software platforms to obtain the results needed for an effective combination of the two. We have also studied different alternatives for solving the partitioning problem in hardware/software co-design, based on genetic algorithms, simulated annealing and tabu search, with the results pointing to tabu search as the fastest algorithm for this purpose. Finally, with all the data obtained, we have derived different architectures according to different constraints. We have presented architectures where time is the major requirement, obtaining 30% less processing time than an all-software solution.
Likewise, another solution has been proposed which provides lower area and power consumption. When considering recognition performance as the most important constraint, two architectures have been presented: one which also tries to minimize the processing time and another which reduces hardware area and power consumption. With regard to security, we have also shown two architectures considering time and hardware area as secondary requirements. Finally, we have presented an ultimate architecture where all these factors are considered together. These architectures have allowed us to study how hardware improves security against authentication attacks, how performance is influenced by the lack of floating-point operations in hardware modules, and how hardware reduces processing time while software reduces hardware area and power consumption. The other singular contribution is the development of a search engine for massive identification schemes, where time is a major constraint as the comparison must be performed over millions of users. We have proposed two implementations: a centralized architecture, where the memories are connected to the microprocessor although the comparison is performed by a dedicated hardware co-processor, and a second approach, where the memory driver is connected directly to the hardware co-processor. This last architecture has shown us the importance of a correct connection between the elements used when time is a major requirement. A graphical representation of the different aspects covered in this Thesis is presented in Fig. 1, where the relation between the different topics studied can be seen. The two main topics, Biometrics and hardware/software co-design, have been studied, covering several of their aspects, such as the different biometric modalities, with a focus on Iris Biometrics, and the security related to these systems. Hardware/software co-design has been studied by presenting different design alternatives and by identifying the most suitable configuration for ID tokens. All the data obtained from this analysis has allowed us to offer two main proposals: the first focuses on the development of a fast search engine device, and the second combines all the factors related to both fields with regard to ID tokens, where different aspects have been combined in its hardware/software design. Both approaches have been implemented to show the feasibility of our proposal. Finally, conclusions and further work are presented as a result of the investigation carried out in this thesis.
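To make the hardware/software partitioning search mentioned above concrete, the following is a hypothetical, self-contained tabu-search sketch: modules with assumed software time, hardware time and hardware area costs are iteratively flipped between the two platforms to minimize total execution time under an area budget, with recently flipped modules kept tabu. The costs, tenure and budget are invented for illustration and do not come from the thesis.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical tabu-search sketch for hardware/software partitioning (not the thesis code). */
public class TabuPartition {
    // Assumed per-module costs: execution time in software, in hardware, and hardware area.
    static final double[] SW_TIME = {9, 7, 12, 4, 15};
    static final double[] HW_TIME = {3, 2,  5, 1,  6};
    static final double[] HW_AREA = {4, 3,  6, 2,  8};
    static final double AREA_BUDGET = 12;      // arbitrary area constraint
    static final int TABU_TENURE = 3;

    static double cost(boolean[] inHw) {
        double time = 0, area = 0;
        for (int i = 0; i < inHw.length; i++) {
            time += inHw[i] ? HW_TIME[i] : SW_TIME[i];
            if (inHw[i]) area += HW_AREA[i];
        }
        return area > AREA_BUDGET ? Double.MAX_VALUE : time;   // infeasible partitions rejected
    }

    public static void main(String[] args) {
        int n = SW_TIME.length;
        boolean[] current = new boolean[n];                    // start with everything in software
        boolean[] best = current.clone();
        double bestCost = cost(best);
        Deque<Integer> tabu = new ArrayDeque<>();

        for (int iter = 0; iter < 100; iter++) {
            int bestMove = -1;
            double bestMoveCost = Double.MAX_VALUE;
            for (int i = 0; i < n; i++) {                      // evaluate flipping each module
                if (tabu.contains(i)) continue;                // skip recently used (tabu) moves
                current[i] = !current[i];
                double c = cost(current);
                if (c < bestMoveCost) { bestMoveCost = c; bestMove = i; }
                current[i] = !current[i];                      // undo trial flip
            }
            if (bestMove < 0) break;                           // no feasible non-tabu move left
            current[bestMove] = !current[bestMove];            // commit the best move found
            tabu.addLast(bestMove);
            if (tabu.size() > TABU_TENURE) tabu.removeFirst();
            if (bestMoveCost < bestCost) { bestCost = bestMoveCost; best = current.clone(); }
        }
        System.out.println("best time = " + bestCost);
    }
}
```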

    Private cloud computing platforms. Analysis and implementation in a Higher Education Institution

    The constant evolution of the Internet and its increasing use, and the consequent reliance of private and public activities upon it, with a strong impact on their survival, has given rise to an emerging technology. Through cloud computing, it is possible to abstract users from the layers below the business, focusing only on what is most important to manage, with the advantage of being able to grow (or shrink) resources as needed. The cloud paradigm arises from the need to optimise IT resources and is an emergent, rapidly expanding technology. In this regard, after a study of the most common cloud platforms and a case study of the technologies currently implemented at the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of the University of Porto, an evolution of that implementation is proposed in order to address certain requirements in the context of cloud computing. Distributions produced specifically for the implementation of private clouds are available today, and their configuration has been greatly simplified. Nevertheless, for a well-implemented architecture to be viable, in terms of hardware, network, security, efficiency and effectiveness, the required infrastructure must be considered as a whole. An in-depth multidisciplinary study of all the topics surrounding this technology is intrinsically tied to the architecture of a cloud system; otherwise the result is a deficient system. A broader view is needed, beyond the necessary equipment and the software used, one that effectively weighs the implementation costs, including the human resources specialised in the various areas involved. The construction of a new data centre, the result of joining the buildings of the Institute of Biomedical Sciences of Abel Salazar and the Faculty of Pharmacy of the University of Porto, made it possible to share technological resources. Given the existing infrastructure, which is fully scalable and based on an approach of growth and virtualisation, the implementation of a private cloud is considered, since the existing resources are perfectly adaptable to this emerging reality. The virtualisation technology adopted, as well as the corresponding hardware (storage and processing), was designed around an implementation based on XEN Server; considering the heterogeneity of the server park and the ideology of the available technologies (open and proprietary), an approach distinct from the existing Microsoft-based implementation is studied.
Given the nature of the institution, and depending on the resources required and the approach taken, the development of a private cloud may take into account integration with public clouds (for example Google Apps), and the possible solutions to adopt may be based on open and/or paid technologies (or both). The ultimate objective of this work is to review the technologies currently in use and to identify potential solutions which, together with the current infrastructure, can provide a private cloud service. The work begins with a concise explanation of the cloud concept, comparing it with other forms of computing, presenting its characteristics, reviewing its history, and explaining its layers, deployment models and architectures. Next, in the state-of-the-art chapter, the main cloud computing platforms are covered, focusing on Microsoft Azure, Google Apps, Cloud Foundry, Delta Cloud and OpenStack. Other emerging platforms are also covered, providing a broader view of the technological solutions currently available. After the state of the art, a particular case study is addressed: the implementation of the IT scenario of the new building of the two organic units of the University of Porto, the Institute of Biomedical Sciences Abel Salazar and the Faculty of Pharmacy, and its private cloud architecture using shared resources. The case study is followed by a suggested evolution of the implementation, using cloud computing technologies in order to meet the necessary requirements and to integrate and streamline the existing infrastructure.

    On the performance of helper data template protection schemes

    The use of biometrics looks promising as it is already being applied in electronic passports, ePassports, on a global scale. Because the biometric data has to be stored as a reference template on either a central or personal storage device, its wide-spread use introduces new security and privacy risks such as (i) identity fraud, (ii) cross-matching, (iii) irrevocability and (iv) leaking sensitive medical information. Mitigating these risks is essential to obtain the acceptance from the subjects of the biometric systems and therefore facilitating the successful implementation on a large-scale basis. A solution to mitigate these risks is to use template protection techniques. The required protection properties of the stored reference template according to ISO guidelines are (i) irreversibility, (ii) renewability and (iii) unlinkability. A known template protection scheme is the helper data system (HDS). The fundamental principle of the HDS is to bind a key with the biometric sample with use of helper data and cryptography, as such that the key can be reproduced or released given another biometric sample of the same subject. The identity check is then performed in a secure way by comparing the hash of the key. Hence, the size of the key determines the amount of protection. This thesis extensively investigates the HDS system, namely (i) the theoretical classification performance, (ii) the maximum key size, (iii) the irreversibility and unlinkability properties, and (iv) the optimal multi-sample and multi-algorithm fusion method. The theoretical classification performance of the biometric system is determined by assuming that the features extracted from the biometric sample are Gaussian distributed. With this assumption we investigate the influence of the bit extraction scheme on the classification performance. With use of the theoretical framework, the maximum size of the key is determined by assuming the error-correcting code to operate on Shannon's bound. We also show three vulnerabilities of HDS that affect the irreversibility and unlinkability property and propose solutions. Finally, we study the optimal level of applying multi-sample and multi-algorithm fusion with the HDS at either feature-, score-, or decision-level.
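A minimal sketch of the helper-data principle described above, using a toy 3x repetition code in place of a real error-correcting code: at enrolment a key is encoded and XORed with the binarised biometric features to form the helper data, and only the helper data and a hash of the key are stored; at verification a noisy sample is used to reproduce the key, and the hashes are compared. The bit strings and the code are invented for illustration and are not the scheme analysed in the thesis.

```java
import java.security.MessageDigest;
import java.util.Arrays;

/** Toy fuzzy-commitment sketch of the helper-data principle (3x repetition code, illustrative only). */
public class HelperDataDemo {
    static int[] encode(int[] key) {                 // repeat each key bit three times
        int[] cw = new int[key.length * 3];
        for (int i = 0; i < key.length; i++) Arrays.fill(cw, 3 * i, 3 * i + 3, key[i]);
        return cw;
    }
    static int[] decode(int[] cw) {                  // majority vote corrects one flipped bit per group
        int[] key = new int[cw.length / 3];
        for (int i = 0; i < key.length; i++)
            key[i] = (cw[3 * i] + cw[3 * i + 1] + cw[3 * i + 2]) >= 2 ? 1 : 0;
        return key;
    }
    static int[] xor(int[] a, int[] b) {
        int[] r = new int[a.length];
        for (int i = 0; i < a.length; i++) r[i] = a[i] ^ b[i];
        return r;
    }
    static byte[] hash(int[] bits) throws Exception {
        byte[] b = new byte[bits.length];
        for (int i = 0; i < bits.length; i++) b[i] = (byte) bits[i];
        return MessageDigest.getInstance("SHA-256").digest(b);
    }
    public static void main(String[] args) throws Exception {
        int[] key       = {1, 0, 1, 1};                       // secret key bound at enrolment
        int[] enrolBits = {1,1,0, 0,1,0, 1,0,1, 1,1,1};       // binarised biometric features (assumed)
        int[] helper    = xor(encode(key), enrolBits);        // helper data, stored with hash(key)
        byte[] pseudoId = hash(key);

        int[] probeBits = {1,1,0, 0,0,0, 1,0,1, 0,1,1};       // noisy sample: a few bits flipped
        int[] recovered = decode(xor(helper, probeBits));     // reproduce the key from helper + probe
        System.out.println("match = " + Arrays.equals(hash(recovered), pseudoId));
    }
}
```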

    Privacy-aware Biometric Blockchain based e-Passport System for Automatic Border Control

    In the mid-1990s, World Wide Web technology first stepped into our lives. Now, 30 years later, widespread internet access and established computing technology are bringing embodied real life into the Metaverse through digital twins. The Internet is not only blurring the concept of physical distance, but also blurring the edge between the real and the virtual world. Another breakthrough in computing is the blockchain, which shifts the root of trust attached to a system administrator to the computational power of the system. Furthermore, its favourable properties, such as an immutable time-stamped transaction history and atomic smart contracts, trigger the development of decentralized autonomous organizations (DAOs). Combining the above two, this thesis presents a privacy-aware biometric Blockchain based e-passport system for automatic border control (ABC), which aims to improve the efficiency of existing ABC systems. Specifically, by constructing a border control Metaverse DAO, the border control workload can be autonomously self-executed by atomic smart contracts as transactions and then immutably recorded on the Blockchain. Moreover, to digitize border crossing documentation, a biometric Blockchain based e-passport system (BBCVID) is created to generate an immutable real-world identity digital twin in the border control Metaverse DAO through Blockchain and biometric identity authentication. That is to say, by digitizing border crossing documentation and automating both biometric identity authentication and border crossing documentation verification, our proposal is able to significantly improve existing border control efficiency. Through system simulation and performance evaluation with Hyperledger Caliper, the proposed system turns out to improve existing border control efficiency by about 3.5 times on average. Moreover, the dynamic digital twin constructed by BBCVID enables computing techniques such as machine learning and big data analysis to be applied to real-world entities, which has huge potential to create more value by constructing smarter ABC systems.
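As a loose illustration of the immutable, time-stamped transaction history the proposal relies on, the sketch below appends border-crossing events to a simple hash-chained ledger in which each entry commits to its predecessor. It is a plain-Java toy rather than the BBCVID system evaluated with Hyperledger Caliper in the thesis, and all identifiers are invented.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HexFormat;
import java.util.List;

/** Minimal hash-chained ledger sketch of immutable border-crossing records (illustrative only). */
public class CrossingLedger {
    record Entry(String prevHash, String payload, String hash) {}
    private final List<Entry> chain = new ArrayList<>();

    private static String sha256(String s) throws Exception {
        return HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-256").digest(s.getBytes()));
    }

    /** Appends a crossing event; each entry commits to the previous one, so history cannot be rewritten silently. */
    public void record(String passportIdHash, String gate, long timestamp) throws Exception {
        String prev = chain.isEmpty() ? "GENESIS" : chain.get(chain.size() - 1).hash();
        String payload = passportIdHash + "|" + gate + "|" + timestamp;
        chain.add(new Entry(prev, payload, sha256(prev + payload)));
    }

    public static void main(String[] args) throws Exception {
        CrossingLedger ledger = new CrossingLedger();
        ledger.record(sha256("P<GBRDOE<<JANE"), "gate-07", System.currentTimeMillis());
        ledger.record(sha256("P<FRADUPONT<<LUC"), "gate-02", System.currentTimeMillis());
        System.out.println(ledger.chain);
    }
}
```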

    Towards unified local and distributed programming through the use of active containers and asynchronous references

    In the domain of distributed systems, several projects focus on mobile code in order to enhance the performance of parallel applications (mobile threads), to ease the development of applications (mobile agents) or to guarantee security (smart cards). In this context, we show how mobile agent systems have gradually disappeared in favor of asynchronous execution frameworks. We present a new abstraction, called an active container, originating from a pi-calculus model of a mobile agent system, which appears to be a base layer on top of which distributed applications can be designed. A Java implementation of this abstraction raised a problem related to the management of concurrency in applications, distributed or not. We therefore describe the notion of an asynchronous reference, our solution to this problem, which allows execution concurrency in an application to be expressed quite simply. Our Java implementation of this concept eases the development of multithreaded and parallel applications, avoiding the problematic use of threads through the exclusive use of a single paradigm: method invocation. An invocation can be synchronous, asynchronous, local or remote. Our work is available under the open-source LGPL licence within an operational and documented framework called Mandala, which is briefly described.
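The following is a hypothetical Java sketch of the asynchronous-reference idea: a wrapper that turns ordinary method calls on a target object into synchronous or asynchronous invocations backed by an executor, so the application never manipulates threads directly. It only illustrates the concept and is not Mandala's actual API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

/** Hypothetical "asynchronous reference": calls on the wrapped object become async invocations. */
public class AsyncRef<T> {
    private final T target;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public AsyncRef(T target) { this.target = target; }

    /** Asynchronous invocation: returns immediately with a future result. */
    public <R> CompletableFuture<R> call(Function<T, R> method) {
        return CompletableFuture.supplyAsync(() -> method.apply(target), executor);
    }

    /** Synchronous invocation: same paradigm, but the caller waits for the result. */
    public <R> R callSync(Function<T, R> method) {
        return call(method).join();
    }

    public void shutdown() { executor.shutdown(); }

    public static void main(String[] args) {
        AsyncRef<StringBuilder> ref = new AsyncRef<>(new StringBuilder());
        ref.call(sb -> sb.append("hello "));                  // fire-and-forget style append
        String result = ref.callSync(sb -> sb.append("world").toString());
        System.out.println(result);                           // prints "hello world"
        ref.shutdown();                                       // release the demo executor
    }
}
```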

    Preserving individuals' privacy through Personal Data Management Systems

    Riding the wave of smart disclosure initiatives and new privacy-protection regulations, the Personal Cloud paradigm is emerging through a myriad of solutions offered to users to let them gather and manage their whole digital life. On the bright side, this opens the way to novel value-added services when crossing multiple sources of data of a given person or crossing the data of multiple people. Yet this paradigm shift towards user empowerment raises fundamental questions with regard to the appropriateness of the functionalities and the data management and protection techniques offered by existing solutions to lay users. Our work addresses these questions on three levels. First, we review, compare and analyze personal cloud alternatives in terms of the functionalities they provide and the threat models they target. From this analysis, we derive a general set of functionality and security requirements that any Personal Data Management System (PDMS) should consider. We then identify the challenges of implementing such a PDMS and propose a preliminary design for an extensive and secure PDMS reference architecture satisfying the considered requirements. Second, we focus on personal computations for a specific hardware PDMS instance (i.e., a secure token with NAND Flash mass storage). In this context, we propose a scalable embedded full-text search engine to index large document collections and manage tag-based access control policies. Third, we address the problem of collective computations in a fully-distributed architecture of PDMSs. We discuss the system and security requirements and propose protocols to enable distributed query processing with strong security guarantees against an attacker controlling many colluding corrupted nodes.
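As an illustration of combining full-text indexing with tag-based access control inside a PDMS, the toy sketch below builds an inverted index and filters query results by the caller's granted tags. The data model and API are assumptions made here for illustration, not the embedded engine proposed in the thesis.

```java
import java.util.*;

/** Toy inverted index with tag-based access filtering, illustrating the kind of
 *  embedded search an on-device PDMS might expose (illustrative only). */
public class TaggedIndex {
    private final Map<String, Set<Integer>> postings = new HashMap<>();  // term -> document ids
    private final Map<Integer, Set<String>> docTags  = new HashMap<>();  // document id -> access tags

    public void addDocument(int docId, String text, Set<String> tags) {
        docTags.put(docId, tags);
        for (String term : text.toLowerCase().split("\\W+"))
            postings.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
    }

    /** Returns only documents whose tags intersect the caller's granted tags. */
    public List<Integer> search(String term, Set<String> grantedTags) {
        List<Integer> result = new ArrayList<>();
        for (int docId : postings.getOrDefault(term.toLowerCase(), Set.of()))
            if (!Collections.disjoint(docTags.get(docId), grantedTags))
                result.add(docId);
        return result;
    }

    public static void main(String[] args) {
        TaggedIndex index = new TaggedIndex();
        index.addDocument(1, "blood test results March", Set.of("health"));
        index.addDocument(2, "bank statement March", Set.of("finance"));
        System.out.println(index.search("march", Set.of("health")));   // prints [1] only
    }
}
```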

    Disruptive Technologies with Applications in Airline & Marine and Defense Industries

    Disruptive Technologies with Applications in Airline, Marine, and Defense Industries is our fifth textbook in a series covering the world of Unmanned Vehicle Systems Applications & Operations On Air, Sea, and Land. The authors have expanded their purview beyond the UAS / CUAS / UUV systems that we have written extensively about in our previous four textbooks. Our new title shows our concern for the emergence of Disruptive Technologies and how they apply to the Airline, Marine and Defense industries. Emerging technologies are technologies whose development, practical applications, or both are still largely unrealized, such that they are figuratively emerging into prominence from a background of nonexistence or obscurity. A disruptive technology is one that displaces an established technology and shakes up the industry, or a ground-breaking product that creates a completely new industry. That is what our book is about. The authors think we have found technology trends that will replace the status quo or disrupt conventional technology paradigms. The authors have collaborated to write some explosive chapters in Book 5: Advances in Automation & Human Machine Interface; Social Media as a Battleground in Information Warfare (IW); a robust cyber-security alternative / replacement for the popular Blockchain Algorithm and a clean solution for Ransomware; advanced sensor technologies used by UUVs for munitions characterization, assessment, and classification, and countering hostile use of UUVs against U.S. capital assets in the South China Seas; challenging the status quo and debunking climate change fraud with verifiable facts; exploding our minds with nightmare technologies that, if they come to fruition, may do more harm than good; Propulsion and Fuels: Disruptive Technologies for Submersible Craft Including UUVs; challenging the ammunition industry by grassroots use of recycled metals; the changing landscape of UAS regulations and drone privacy; and finally, detailing Bioterrorism Risks, Biodefense, Biological Threat Agents, and the need for advanced sensors to detect these attacks.
https://newprairiepress.org/ebooks/1038/thumbnail.jp