
    The Effect of Using Histogram Equalization and Discrete Cosine Transform on Facial Keypoint Detection

    This study investigates the effect of Histogram Equalization and the Discrete Cosine Transform (DCT) on facial keypoint detection, which can be applied to 3D facial reconstruction in face recognition. Four combinations of methods, comprising Histogram Equalization and the removal of low-frequency coefficients using the DCT, were tested with five feature detectors: SURF, Minimum Eigenvalue, Harris-Stephens, FAST, and BRISK. Test data were obtained from the Head Pose Image and ORL databases, and the results were evaluated using the F-score. The highest F-score for the Head Pose Image dataset, 0.140, was achieved by combining DCT and Histogram Equalization with the SURF feature detector. The highest F-score for the ORL database, 0.33, was achieved by combining DCT and Histogram Equalization with the BRISK feature detector.
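    The preprocessing pipeline the abstract describes can be sketched in plain NumPy: equalize the intensity histogram, then transform to the DCT domain, zero the low-frequency coefficients, and invert. This is an illustrative reconstruction, not the authors' code; the `cutoff` parameter and the orthonormal DCT construction are assumptions, and the feature detectors (SURF, BRISK, etc.) would then be applied to the resulting image with a library such as OpenCV.

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def remove_low_freq(img, cutoff=8):
    """Zero the lowest-frequency 2D DCT coefficients, then invert.
    The cutoff size is an illustrative choice, not the paper's value."""
    d0, d1 = dct_matrix(img.shape[0]), dct_matrix(img.shape[1])
    coeffs = d0 @ img.astype(float) @ d1.T
    coeffs[:cutoff, :cutoff] = 0.0  # low frequencies sit in the top-left corner
    out = d0.T @ coeffs @ d1
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

    Chaining `hist_equalize` and `remove_low_freq` before keypoint detection reproduces the spirit of the "DCT & Histogram Equalization" combination the study evaluates.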

    ARCHANGEL: Tamper-proofing Video Archives using Temporal Content Hashes on the Blockchain

    We present ARCHANGEL, a novel distributed-ledger-based system for assuring the long-term integrity of digital video archives. First, we describe a novel deep network architecture for computing compact temporal content hashes (TCHs) from audio-visual streams with durations of minutes or hours. Our TCHs are sensitive to accidental or malicious content modification (tampering) but invariant to the codec used to encode the video. This is necessary due to the curatorial requirement for archives to format-shift video over time to ensure future accessibility. Second, we describe how the TCHs (and the models used to derive them) are secured via a proof-of-authority blockchain distributed across multiple independent archives. We report on the efficacy of ARCHANGEL within the context of a trial deployment in which the national government archives of the United Kingdom, Estonia and Norway participated. Comment: Accepted to CVPR Blockchain Workshop 201
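    The idea of a compact, tamper-sensitive temporal content hash can be illustrated with a SimHash-style sketch: pool per-segment audio-visual feature vectors over time, project them onto fixed random hyperplanes, and keep the signs. ARCHANGEL's actual TCHs come from a learned deep network; the pooling, projection, and bit-length choices below are purely illustrative.

```python
import numpy as np

def temporal_content_hash(features, bits=64, seed=0):
    # Pool per-segment features over time, project onto fixed random
    # hyperplanes, and keep the signs: a SimHash-style binary code.
    # Small (codec-like) perturbations rarely flip a sign, so the code
    # is stable, while real content edits shift many bits at once.
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], bits))
    code = (features.mean(axis=0) @ planes) > 0
    return ''.join('1' if b else '0' for b in code)
```

    Comparing two hashes by Hamming distance then gives a crude tamper test, standing in for the learned similarity the paper describes.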

    A Design Concept for a Tourism Recommender System for Regional Development

    Despite existing tourism infrastructure and software, the development of tourism is hampered by a lack of information support covering the various aspects of travel planning. This paper highlights the demand for integrating various approaches and methods to develop a universal tourism information recommender system for building individual tourist routes. The objective of this study is to propose a concept for a universal information recommender system for building a personalized tourist route. The developed design concept for such a system involves a procedure for data collection and preparation for tourism product synthesis; a methodology for forming a tourism product according to user preferences; and the main stages of the methodology's implementation. To collect and store information from real travelers, this paper proposes using elements of blockchain technology to ensure information security. A model that specifies the key elements of a tourist route planning process is presented. This article can serve as a reference and knowledge base for digital business system analysts, system designers, and digital tourism business implementers, supporting better digital business system design and implementation in the tourism sector.

    Image Restoration Effect on DCT High Frequency Removal and Wiener Algorithm for Detecting Facial Key Points

    This study investigates the effect of Histogram Equalization and the Discrete Cosine Transform (DCT) on facial keypoint detection, which can be applied to 3D facial reconstruction in face recognition. Four combinations of methods, comprising Histogram Equalization and the removal of low-frequency coefficients using the DCT, were tested with five feature detectors: SURF, Minimum Eigenvalue, Harris-Stephens, FAST, and BRISK. Test data were obtained from the Head Pose Image and ORL databases, and the results were evaluated using the F-score. The highest F-score for the Head Pose Image dataset, 0.140, was achieved by combining DCT and Histogram Equalization with the SURF feature detector. The highest F-score for the ORL database, 0.33, was achieved by combining DCT and Histogram Equalization with the BRISK feature detector.

    Decentralized Federated Learning: Fundamentals, State-of-the-art, Frameworks, Trends, and Challenges

    In the last decade, Federated Learning (FL) has gained relevance for training collaborative models without sharing sensitive data. Since its inception, Centralized FL (CFL) has been the most common approach in the literature, where a central entity creates a global model. However, a centralized approach leads to increased latency due to bottlenecks, heightened vulnerability to system failures, and trustworthiness concerns affecting the entity responsible for creating the global model. Decentralized Federated Learning (DFL) emerged to address these concerns by promoting decentralized model aggregation and minimizing reliance on centralized architectures. However, despite the work done in DFL, the literature has not (i) studied the main aspects differentiating DFL and CFL; (ii) analyzed DFL frameworks to create and evaluate new solutions; or (iii) reviewed application scenarios using DFL. Thus, this article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators. Additionally, it explores existing mechanisms to optimize critical DFL fundamentals. Then, the most relevant features of current DFL frameworks are reviewed and compared. After that, the most widely used DFL application scenarios are analyzed, identifying solutions based on the fundamentals and frameworks previously defined. Finally, the evolution of existing DFL solutions is studied to provide a list of trends, lessons learned, and open challenges.
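    The core difference from CFL, aggregation without a central server, can be sketched as a synchronous gossip round in which each node mixes its model parameters with those of its topology neighbours. Real DFL frameworks use weighted mixing matrices, asynchronous updates, and secure channels; this uniform-averaging version is only a minimal illustration.

```python
import numpy as np

def gossip_round(params, adjacency):
    """One synchronous round of decentralized aggregation: each node
    replaces its parameters with the average of its own and its
    neighbours' (a simple gossip scheme; no central coordinator)."""
    new = []
    for i in range(len(params)):
        neigh = [j for j, a in enumerate(adjacency[i]) if a] + [i]
        new.append(np.mean([params[j] for j in neigh], axis=0))
    return new
```

    On a regular topology such as a ring, repeated rounds drive every node toward the global average, which is the consensus property decentralized aggregation relies on.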

    Fotofacesua: sistema de gestão fotográfica da Universidade de Aveiro

    Nowadays, automation is present in essentially every computational system. With the rise of Machine Learning algorithms over the years, the need for human intervention in such systems has dropped considerably. Nevertheless, in universities, companies, and even governmental institutions, some systems have not yet been automated. One such case is profile photo management, which still requires human intervention to check whether a submitted image follows the institution's mandatory set of criteria. FotoFaces is a system for updating the profile photos of collaborators at the University of Aveiro: it allows a collaborator to submit a new photo and, automatically, through a set of image processing algorithms, decides whether the photo meets a set of predefined criteria. One of the main advantages of this system is that it can be used in any institution and adapted to different needs simply by changing the algorithms or criteria considered. This Dissertation describes improvements implemented in the existing system, as well as new features in terms of the available algorithms. The main contributions to the system are sunglasses detection, hat detection, and background analysis. For the first two, it was necessary to create and label a new database to train, validate, and test a deep transfer learning network used to detect sunglasses and hats. In addition, several tests were performed varying the network's parameters and applying machine learning and pre-processing techniques to the input images. Finally, the background analysis consists of the implementation and testing of two existing algorithms from the literature, one low-level and the other based on deep learning.
Overall, the results obtained in improving the existing algorithms, together with the performance of the new image processing modules, allowed the creation of a more robust (improved production-version algorithms) and more versatile (new algorithms added to the system) profile photo update system. Mestrado em Engenharia Eletrónica e Telecomunicações (Master's in Electronic Engineering and Telecommunications).
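    The criteria-checking flow described above can be sketched as a simple pipeline that runs each validation algorithm in turn and collects the failures, so a submitter gets actionable feedback. The check names and signatures are illustrative only, not FotoFaces' actual API.

```python
def photo_meets_criteria(image, checks):
    """Run each image check (e.g. sunglasses detection, hat detection,
    background analysis) and collect the names of the ones that fail.
    `checks` maps a criterion name to a predicate over the image."""
    failures = [name for name, check in checks.items() if not check(image)]
    return (len(failures) == 0, failures)
```

    Swapping an entry in `checks` is all it takes to adapt the pipeline to another institution's criteria, which mirrors the adaptability claim in the abstract.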

    New Waves of IoT Technologies Research – Transcending Intelligence and Senses at the Edge to Create Multi Experience Environments

    The next wave of Internet of Things (IoT) and Industrial Internet of Things (IIoT) brings new technological developments that incorporate radical advances in Artificial Intelligence (AI), edge computing, new sensing capabilities, stronger security protection and autonomous functions, accelerating progress towards IoT systems that can self-develop, self-maintain and self-optimise. The emergence of hyper-autonomous IoT applications with enhanced sensing, distributed intelligence, edge processing and connectivity, combined with human augmentation, has the potential to power the transformation and optimisation of industrial sectors and to change the innovation landscape. This chapter reviews the most recent advances in the next wave of the IoT, looking not only at the technology enabling the IoT but also at the platform and smart-data aspects that will bring intelligence, sustainability, dependability and autonomy, and will support human-centric solutions.

    A Distributed Audit Trail for the Internet of Things

    Sharing Internet of Things (IoT) data over open-data platforms and digital data marketplaces can reduce infrastructure investments, improve sustainability by reducing the required resources, and foster innovation. However, due to the inability to audit the authenticity, integrity, and quality of IoT data, third-party data consumers cannot assess the trustworthiness of received data. It is therefore challenging to use IoT data obtained from third parties for quality-relevant applications. To overcome this limitation, the IoT data must be auditable. Distributed Ledger Technology (DLT) is a promising approach for building auditable systems. However, existing solutions do not integrate authenticity, integrity, data quality, and location into an all-encompassing auditable model, focusing instead on specific aspects of auditability. This thesis aims to provide a distributed audit trail that makes the IoT auditable and enables sharing of IoT data between multiple organizations for quality-relevant applications. To this end, we designed and evaluated the Veritaa framework, which comprises the Graph of Trust (GoT) as a distributed audit trail and a DLT to immutably store the transactions that build the GoT. The contributions of this thesis are summarized as follows. First, we designed and evaluated the GoT, a DLT-based Distributed Public Key Infrastructure (DPKI) with a signature store. Second, we designed a Distributed Calibration Certificate Infrastructure (DCCI) based on the GoT, which makes quality-relevant maintenance information of IoT devices auditable. Third, we designed an Auditable Positioning System (APS) to make positions in the IoT auditable. Finally, we designed a Location Verification System (LVS) to verify location claims and prevent physical-layer attacks against the APS. All these components are integrated into the GoT and build the distributed audit trail.
We implemented a real-world testbed to evaluate the proposed distributed audit trail. This testbed comprises several custom-built IoT devices, connectable over Long Range Wide Area Network (LoRaWAN) or Long-Term Evolution Category M1 (LTE Cat M1), and a Bluetooth Low Energy (BLE)-based Angle of Arrival (AoA) positioning system. All these low-power devices can manage their identity and secure their data on the distributed audit trail using the IoT client of the Veritaa framework. The experiments suggest that a distributed audit trail is feasible and secure, and that the low-power IoT devices are capable of performing the required cryptographic functions. Furthermore, the energy overhead introduced by making the IoT auditable is limited and reasonable for quality-relevant applications.
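    Although Veritaa anchors its Graph of Trust on a DLT, the basic auditability property (any later modification of a stored record is detectable) can be illustrated with a toy hash-chained trail. The record layout below is an assumption for illustration, not the framework's actual data model.

```python
import hashlib
import json

def append_record(trail, payload):
    """Append a record that commits to the previous record's hash,
    so tampering with any earlier entry breaks the chain."""
    prev = trail[-1]['hash'] if trail else '0' * 64
    body = {'prev': prev, 'payload': payload}
    body['hash'] = hashlib.sha256(
        json.dumps({'prev': prev, 'payload': payload}, sort_keys=True).encode()
    ).hexdigest()
    trail.append(body)
    return trail

def verify(trail):
    """Recompute every hash and check the chain links."""
    prev = '0' * 64
    for entry in trail:
        h = hashlib.sha256(
            json.dumps({'prev': entry['prev'], 'payload': entry['payload']},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry['prev'] != prev or entry['hash'] != h:
            return False
        prev = entry['hash']
    return True
```

    A DLT replaces the single list with replicated, consensus-protected storage, which is what lets multiple independent organizations trust the same trail.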

    Survey on 6G Frontiers: Trends, Applications, Requirements, Technologies and Future Research

    Emerging applications such as the Internet of Everything, Holographic Telepresence, collaborative robots, and space and deep-sea tourism are already highlighting the limitations of existing fifth-generation (5G) mobile networks. These limitations concern data rate, latency, reliability, availability, processing, connection density and global coverage, spanning ground, underwater and space. The sixth generation (6G) of mobile networks is expected to burgeon in the coming decade to address these limitations. The development of the 6G vision, applications, technologies and standards has already become a popular research theme in academia and industry. In this paper, we provide a comprehensive survey of current developments towards 6G. We highlight the societal and technological trends that initiate the drive towards 6G. Emerging applications that realize the demands raised by these driving trends are discussed subsequently. We also elaborate on the requirements necessary to realize the 6G applications. Then we present the key enabling technologies in detail. We also outline current research projects and activities, including standardization efforts, towards the development of 6G. Finally, we summarize lessons learned from state-of-the-art research and discuss technical challenges that shed new light on future research directions towards 6G.

    Control layer security: a new security paradigm for cooperative autonomous systems

    Autonomous systems often cooperate to ensure safe navigation. Embedded within the centralised or distributed coordination mechanisms are a set of observations, unobservable states, and control variables. Security of data transfer between autonomous systems is crucial for safety, and both cryptography and physical-layer security methods have been used to secure communication surfaces, each with its own drawbacks and dependencies. Here, we show for the first time a new wireless Control Layer Security (CLS) mechanism. CLS exploits mutual physical states between cooperative autonomous systems to generate cipher keys. These mutual states are chosen to be observable to legitimate users but not sufficiently observable to eavesdroppers, thereby enhancing the resulting secure capacity. The CLS cipher keys can encrypt data without key exchange or a common key pool and offer very low information leakage. As such, the security of digital data channels now depends on physical-state estimation rather than wireless channel estimation. This protects the estimation process from wireless jamming and from dependency on channel entropy. We review for the first time the signal processing techniques used for hidden-state estimation and key generation, and the performance of CLS in different case studies. Engineering and Physical Sciences Research Council (EPSRC): EP/V026763/
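    The quantize-then-derive idea behind CLS key generation can be sketched as follows: both cooperating systems coarsely quantize their shared physical-state estimate and hash it into a symmetric key, so small estimation differences between legitimate parties still yield the same key. The quantization step and key length here are assumptions; the actual CLS scheme involves hidden-state estimation and reconciliation as described in the paper.

```python
import hashlib
import numpy as np

def key_from_state(state, n_bits=128):
    """Derive a symmetric key from a shared physical-state estimate
    (positions, velocities, ...) by coarse quantization followed by
    hashing.  Illustrative sketch only; step size 0.1 is an assumption."""
    q = np.round(np.asarray(state, dtype=float) * 10).astype(int)
    digest = hashlib.sha256(q.tobytes()).hexdigest()
    return digest[: n_bits // 4]  # each hex character carries 4 bits
```

    An eavesdropper who cannot estimate the mutual state to within the quantization step lands in a different cell and derives a completely different key, which is the intuition behind the claimed secure capacity.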