8 research outputs found

    Securing instant messages with hardware-based cryptography and authentication in browser extension

    Get PDF
    Instant Messaging (IM) provides near-real-time communication between users and has proven to be a valuable tool for internal communication in companies and for general-purpose interaction among people. IM systems and their supporting protocols, however, must consider security aspects to guarantee the messages' authenticity, confidentiality, and integrity. In this paper, we present a solution for integrating hardware-based public key cryptography into Converse.js, an open-source IM client for browsers enabled with the Extensible Messaging and Presence Protocol (XMPP). The proposal is developed as a plugin for Converse.js, thus overriding the original functions of the client, and as a browser extension that is triggered by the plugin and is responsible for calling the encryption and decryption services for each sent and received message. This integrated artifact allowed the experimental validation of the proposal, providing authenticity of IM users with digital certificates and protection of IM messages with hardware-based cryptography. Results also show that the proposed system is resistant to adversarial attacks against confidentiality and integrity, and that it is secure under cryptographic tests such as the Hamming distance and the NIST SP800-22 suite.
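    The Hamming-distance evaluation mentioned above can be illustrated with a small avalanche-style check. This is a minimal sketch, not the paper's artifact: SHA-256 stands in for the hardware-based encryption service, and the 0.5 target ratio is the usual expectation for a strong primitive.

```python
import hashlib

def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avalanche_ratio(message: bytes, bit_index: int = 0) -> float:
    """Flip one bit of the input and measure the fraction of output bits
    that change; for a strong primitive this should be close to 0.5."""
    flipped = bytearray(message)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)
    digest_a = hashlib.sha256(message).digest()
    digest_b = hashlib.sha256(bytes(flipped)).digest()
    return hamming_distance(digest_a, digest_b) / (len(digest_a) * 8)

ratio = avalanche_ratio(b"instant message payload")
print(f"avalanche ratio: {ratio:.3f}")
```

    The same check would apply to the hardware encryption output: a ratio far from 0.5 would signal a weak diffusion property.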

    AMORIS project - mobile application and command and control center on an IoT network to support solidarity actions to counter COVID-19 and other outbreaks

    Get PDF
    The AMORIS Project aims to foster solidarity actions among members of the community of the Universidade de Brasília and its surroundings, with regional and national reach, through an initiative named Sistema UnB Solidária. The project comprises the development of a mobile application operating under an Internet of Things (IoT) paradigm, together with a command and control (C&C) center for monitoring, coordination, and integration, enabling people to carry out solidarity actions in various situations, such as medical help, community security actions, cases of personal hardship, and educational support.

    Proposal of a system architecture based on neural OCR for the recovery and indexing of paleographic writings from the 16th to the 19th century

    No full text
    Master's dissertation—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2008. This work proposes a system architecture for the automatic processing and recognition of the text of paleographic documents, using OCR (Optical Character Recognition) based on artificial neural networks. The proposed system targets the transcription of documents written in paleographic scripts from the 16th to the 19th century, originating from colonial Brazil and digitized from the originals held at the Overseas Archive (Arquivo Ultramarino) in Lisbon, one of the achievements of the Projeto Resgate of the Brazilian Ministry of Culture. The architecture includes modules for segmenting the digitized document images, analyzing the segments with the OCR in an attempt to recognize the text, training the OCR while building a dictionary of recognized words, and storing the text transcribed from the document images. To evaluate the architecture, a software prototype was developed that lets the user manually segment a document image, train a simple OCR, and use it to extract some text information from the digitized paleographic document. We conclude that the proposed architecture is functional, although deeper development is still needed in the segmentation of the documents and in the recognition of 16th- to 19th-century paleographic scripts.
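    The four modules of the proposed architecture can be sketched as a toy pipeline. Everything below is illustrative: the class and method names are hypothetical, simple string matching stands in for the neural OCR, and the dictionary correction mirrors the training module described in the abstract.

```python
import difflib

class PaleographyTranscriber:
    """Hypothetical skeleton of the architecture's four modules:
    segmentation, OCR recognition, dictionary training, and storage."""

    def __init__(self):
        self.dictionary = set()   # words confirmed by the operator
        self.transcripts = {}     # storage module: doc_id -> word list

    def segment(self, page):
        # Stand-in for image segmentation: each element is one word region.
        return list(page)

    def train(self, confirmed_words):
        # Operator-confirmed transcriptions grow the word dictionary.
        self.dictionary.update(w.lower() for w in confirmed_words)

    def recognize(self, raw_guess):
        # A real OCR would emit a noisy guess from pixels; here we correct
        # the guess against the learned dictionary, as the abstract describes.
        match = difflib.get_close_matches(
            raw_guess.lower(), self.dictionary, n=1, cutoff=0.7)
        return match[0] if match else raw_guess.lower()

    def transcribe(self, doc_id, page):
        words = [self.recognize(w) for w in self.segment(page)]
        self.transcripts[doc_id] = words
        return words
```

    Here the dictionary doubles as an error-correction resource, which is one plausible reading of why the training module builds a dictionary of recognized words.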

    Tensor-based framework with model order selection and high accuracy factor decomposition for time-delay estimation in dynamic multipath scenarios

    Get PDF
    Global Navigation Satellite Systems (GNSS) are crucial for applications that demand very accurate positioning. Tensor-based time-delay estimation methods, such as CPD-GEVD, DoA/KRF, and SECSI, combined with the GPS3 L1C signal, are capable of significantly mitigating the positioning degradation caused by multipath components. However, although these schemes require an estimated model order, they assume that the number of multipath components is constant. In GNSS applications, the number of multipath components is time-varying in dynamic scenarios. Thus, in this paper, we propose a tensor-based framework with model order selection and high accuracy factor decomposition for time-delay estimation in dynamic multipath scenarios. Our proposed approach exploits the estimates of the model order for each slice by grouping the data tensor slices into sub-tensors to provide high accuracy factor decomposition. We further enhance the proposed approach by incorporating tensor-based Multiple Denoising (MuDe).
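    The slice-grouping step described above can be sketched as follows. This is a minimal illustration under assumed inputs: the per-slice model-order estimates are taken as given, and plain Python lists stand in for the data tensor's frontal slices.

```python
def group_slices_by_order(slices, orders):
    """Group contiguous data-tensor slices that share the same estimated
    model order into sub-tensors (here a sub-tensor is simply a list of
    slices; a real implementation would stack arrays along the time axis)."""
    groups, start = [], 0
    for k in range(1, len(orders) + 1):
        # Close the current group when the order changes or the data ends.
        if k == len(orders) or orders[k] != orders[start]:
            groups.append((orders[start], slices[start:k]))
            start = k
    return groups

# Six time slices whose estimated number of multipath components varies.
slices = [f"slice{i}" for i in range(6)]
orders = [2, 2, 3, 3, 3, 2]
sub_tensors = group_slices_by_order(slices, orders)
```

    Each resulting sub-tensor then has a constant model order, so a factor decomposition with a fixed number of components can be applied to it.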

    Understanding Data Breach from a Global Perspective: Incident Visualization and Data Protection Law Review

    No full text
    Data breaches result in the loss of personal, health, and financial information that is crucial, sensitive, and private. A breach is a security incident in which personal and sensitive data are exposed to unauthorized individuals, with the potential to raise several privacy concerns. As an example, the breach at the French newspaper Le Figaro exposed approximately 7.4 billion records that included full names, passwords, and e-mail and physical addresses. To reduce the likelihood and impact of such breaches, it is fundamental to strengthen security efforts against this type of incident and, for that, it is first necessary to identify patterns in its occurrence, primarily related to the number of data records leaked, the affected geographical region, and its regulatory aspects. To advance the discussion in this regard, we study a dataset comprising 428 worldwide data breaches between 2018 and 2019, providing a visualization of the related statistics, such as the most affected countries, the predominant economic sector targeted in different countries, and the median number of records leaked per incident in different countries, regions, and sectors. We then discuss the data protection regulation in effect in each country covered by the dataset, correlating key elements of the legislation with the statistical findings. As a result, we have identified an extensive disclosure of medical records in India and of government data in Brazil in that time range. Based on the analysis and visualization, we find some interesting insights that researchers have seldom focused on before, and it is apparent that the real dangers of data leaks go beyond what is ordinarily imagined. Finally, this paper contributes to the discussion regarding data protection laws and compliance in the event of data breaches, supporting, for example, the decision process for choosing where to store data in the cloud.
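    The per-country and per-sector median statistics described above can be sketched with a small grouping routine. The figures below are illustrative stand-ins, not values from the paper's 428-breach dataset (only the Le Figaro total mentioned above is reused).

```python
from collections import defaultdict
from statistics import median

# Toy breach records: (country, sector, records_leaked).
breaches = [
    ("India",  "medical",    2_000_000),
    ("India",  "medical",    350_000),
    ("Brazil", "government", 1_200_000),
    ("Brazil", "government", 90_000),
    ("France", "media",      7_400_000_000),
]

def median_leaked_by(key_index):
    """Median number of leaked records grouped by one key column
    (0 = country, 1 = sector), mirroring the paper's visualizations."""
    groups = defaultdict(list)
    for row in breaches:
        groups[row[key_index]].append(row[2])
    return {key: median(counts) for key, counts in groups.items()}

per_sector = median_leaked_by(1)
per_country = median_leaked_by(0)
```

    The median is the natural summary here because a single outlier such as the Le Figaro incident would dominate a per-sector mean.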

    ICT Governance and Management Macroprocesses of a Brazilian Federal Government Agency

    No full text
    The process of identifying and managing Information and Communication Technology (ICT) risks has become a concern and a challenge for public and private organizations. In this context, risk management methodologies within the Brazilian Federal Public Administration have become indispensable for helping the managers of these organizations in decision making, especially in the distribution of public funds, the elaboration of public policies focused on transparency, and social actions involving indemnities and social benefits, among others. In addition, the various ICT projects controlled by the public administration need a methodology for managing their ICT resources. In this article, we present the Governance and Risk Management methodology used to model the macroprocesses of the Administrative Council for Economic Defense (CADE). The proposed methodology applied a risk management process aligned with the ISO 31000 standard. This alignment was necessary for mapping CADE's risk events, regardless of their complexity. The modeled ICT risk processes will support the organization's managers in decision making and may be used or customized by any other organization of the Brazilian Federal Public Administration.
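    The kind of risk analysis such a methodology performs can be illustrated with a minimal risk-register sketch in the spirit of ISO 31000's analysis step. The 1-5 scales, thresholds, and example events below are assumptions for illustration, not taken from CADE's actual process.

```python
from dataclasses import dataclass

@dataclass
class RiskEvent:
    """One risk-register entry; level = likelihood x impact is a common
    (illustrative) way to score risks on 1-5 ordinal scales."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def level(self) -> int:
        return self.likelihood * self.impact

    def rating(self) -> str:
        # Hypothetical thresholds for treatment priority.
        if self.level >= 15:
            return "high"
        if self.level >= 8:
            return "medium"
        return "low"

register = [
    RiskEvent("ICT budget misallocation", 3, 5),
    RiskEvent("Legacy system outage", 4, 2),
]
prioritized = sorted(register, key=lambda r: r.level, reverse=True)
```

    Sorting the register by level is the step that directly supports managers' decision making: treatment effort goes to the highest-scoring events first.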

    Development and Evaluation of an Intelligence and Learning System in Jurisprudence Text Mining in the Field of Competition Defense

    No full text
    A jurisprudence search system makes available to its users the set of decisions issued by public bodies, whose recurring interpretations serve as a way of understanding the law. Through the similarity of legal decisions, jurisprudence provides grounds for stability, uniformity, and some predictability in the analysis of a case to be decided. This paper presents a proposed solution architecture for the jurisprudence search system of the Brazilian Administrative Council for Economic Defense (CADE), with a view to building and expanding the knowledge generated regarding the economic defense of competition, in support of the agency's core procedural business activities. We conducted a literature review and a survey to investigate the characteristics and functionalities of the jurisprudence search systems used by Brazilian public administration agencies. Our findings revealed that the prevailing technologies among Brazilian agencies for developing jurisprudence search systems are the Java programming language and Apache Solr as the main indexing engine. Around 87% of the jurisprudence search systems use machine learning classification. On the other hand, the systems make little use of other artificial intelligence and morphological construction techniques. No agency participating in the survey claimed to use an ontology to handle structured and unstructured data from different sources and formats.
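    The core of a jurisprudence search system can be illustrated with a toy inverted index. This is a sketch only: the class, document identifiers, and decision texts are hypothetical, and a production deployment (such as the Apache Solr engines reported in the survey) layers ranking, stemming, and faceting on top.

```python
from collections import defaultdict

class JurisprudenceIndex:
    """Minimal boolean-AND search over decision texts via an inverted
    index, the data structure at the heart of engines like Solr."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids
        self.decisions = {}               # doc id -> full text

    def add(self, doc_id, text):
        self.decisions[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        """Return the ids of decisions containing every query term."""
        terms = query.lower().split()
        if not terms:
            return set()
        result = set(self.postings.get(terms[0], set()))
        for term in terms[1:]:
            result &= self.postings.get(term, set())
        return result

idx = JurisprudenceIndex()
idx.add("AC-001", "merger approved with restrictions in the fuel market")
idx.add("AC-002", "cartel conduct fine in the fuel distribution sector")
hits = idx.search("fuel market")
```

    The morphological and ontology techniques the survey found lacking would plug in exactly here: normalizing inflected legal terms before indexing, and mapping query concepts onto related terms.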