14,080 research outputs found

    Security and Privacy Problems in Voice Assistant Applications: A Survey

    Voice assistant applications have become ubiquitous nowadays. The two models that provide their most important functions in real-life applications (e.g., Google Home, Amazon Alexa, and Siri) are Automatic Speech Recognition (ASR) models and Speaker Identification (SI) models. According to recent studies, security and privacy threats have also emerged with the rapid development of the Internet of Things (IoT). The security issues studied include attack techniques against the machine learning models and other hardware components widely used in voice assistant applications. The privacy issues include information theft at the technical level and privacy breaches at the policy level. Voice assistant applications take a steadily growing market share every year, yet their privacy and security issues continue to cause substantial economic losses and to endanger users' sensitive personal information. A comprehensive survey outlining a categorization of current research on the security and privacy problems of voice assistant applications is therefore needed. This paper summarizes and assesses five kinds of security attacks and three types of privacy threats reported in papers published at top-tier conferences in the cyber security and voice domains.

    Joint Activity Detection, Channel Estimation, and Data Decoding for Grant-free Massive Random Access

    In the massive machine-type communication (mMTC) scenario, a large number of devices with sporadic traffic need to access the network on limited radio resources. While grant-free random access has emerged as a promising mechanism for massive access, its potential has not been fully realized. In particular, the common sparsity pattern in the received pilot and data signal has been ignored in most existing studies, and auxiliary information from channel decoding has not been utilized for user activity detection. This paper endeavors to develop advanced receivers in a holistic manner for joint activity detection, channel estimation, and data decoding. In particular, a turbo receiver based on the bilinear generalized approximate message passing (BiG-AMP) algorithm is developed. In this receiver, all the received symbols are utilized to jointly estimate the channel state, user activity, and soft data symbols, which effectively exploits the common sparsity pattern. Meanwhile, the extrinsic information from the channel decoder assists the joint channel estimation and data detection. To reduce complexity, a low-cost side information-aided receiver is also proposed, in which the channel decoder provides side information to update the estimate of whether each user is active. Simulation results show that the turbo receiver reduces the activity detection, channel estimation, and data decoding errors effectively, while the side information-aided receiver notably outperforms the conventional method with relatively low complexity.
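
    As a hedged illustration of the sparse activity-detection problem these receivers address (not the paper's BiG-AMP turbo receiver), the toy sketch below detects active users by correlating the received pilot signal with each user's pilot sequence and keeping the strongest responses; all dimensions and the noise level are illustrative assumptions.

```python
# Toy matched-filter activity detection for grant-free access.
# Illustrative baseline only; the paper's receiver is BiG-AMP-based.
import numpy as np

rng = np.random.default_rng(0)
N, L, K = 100, 40, 10          # total users, pilot length, active users
# Complex Gaussian pilot matrix, one column per user.
P = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
active = rng.choice(N, K, replace=False)
h = np.zeros(N, complex)       # sparse effective channel vector
h[active] = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
y = P @ h + noise              # received pilot signal

stat = np.abs(P.conj().T @ y)  # matched-filter statistic per user
detected = np.sort(np.argsort(stat)[-K:])
print(np.sort(active), detected)
```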

    Reinforcement Learning-based User-centric Handover Decision-making in 5G Vehicular Networks

    The advancement of 5G technologies and vehicular networks opens a new paradigm for Intelligent Transportation Systems (ITS) in safety and infotainment services in urban and highway scenarios. Connected vehicles are vital for enabling massive data sharing and supporting such services. Consequently, a stable connection is compulsory to transmit data across the network successfully. The new 5G technology introduces more bandwidth, stability, and reliability, but it faces a low communication range, suffering from more frequent handovers and connection drops. The shift from the base station-centric view to the user-centric view helps to cope with the smaller communication range and ultra-density of 5G networks. In this thesis, we propose a series of strategies to improve connection stability through efficient handover decision-making. First, we propose M-FiVH, a modified probabilistic approach aimed at reducing 5G handovers and enhancing network stability. Next, an adaptive learning approach employs Connectivity-oriented SARSA Reinforcement Learning (CO-SRL) for user-centric Virtual Cell (VC) management to enable efficient handover (HO) decisions. Following that, a user-centric Factor-distinct SARSA Reinforcement Learning (FD-SRL) approach combines a time series-oriented LSTM with adaptive SRL for VC and HO management by considering both historical and real-time data. The direction of vehicular movement, high mobility, network load, road traffic conditions, and signal strength from cellular transmission towers vary over time and cannot always be predicted. Our proposed approaches maintain stable connections by selecting appropriately sized VCs and managing HOs efficiently, thereby reducing the number of HOs. Realistic simulations demonstrated that M-FiVH, CO-SRL, and FD-SRL successfully reduce the number of HOs and the average cumulative HO time. We provide an analysis and comparison of several approaches and demonstrate that our proposed approaches perform better in terms of network connectivity.
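
    The SARSA update at the core of this family of approaches can be sketched as below; the state encoding (e.g., a signal-strength bucket), the action set, and all hyperparameters are illustrative assumptions, not the thesis's CO-SRL/FD-SRL design.

```python
# Minimal tabular SARSA sketch for a handover decision.
# State/action encoding, reward, and hyperparameters are illustrative.
import numpy as np

n_states, n_actions = 8, 3   # e.g., RSS buckets x {stay, HO to cell A, HO to cell B}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(1)

def choose(s):
    # epsilon-greedy policy over the current Q table
    return int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())

def sarsa_step(s, a, reward, s_next):
    # on-policy update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))
    a_next = choose(s_next)
    Q[s, a] += alpha * (reward + gamma * Q[s_next, a_next] - Q[s, a])
    return a_next
```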

    Performance Analysis and Comparison of Non-ideal Wireless PBFT and RAFT Consensus Networks in 6G Communications

    Due to advantages in security and privacy, blockchain is considered a key enabling technology to support 6G communications. Practical Byzantine Fault Tolerance (PBFT) and RAFT are seen as the most applicable consensus mechanisms (CMs) in blockchain-enabled wireless networks. However, previous studies on PBFT and RAFT rarely consider the channel performance of the physical layer, such as path loss and channel fading, resulting in conclusions that are far from real networks. Additionally, 6G communications will widely deploy high-frequency signals such as terahertz (THz) and millimeter wave (mmWave), while the performance of PBFT and RAFT remains unknown when these signals are transmitted in wireless PBFT or RAFT networks. It is therefore urgent to study the performance of non-ideal wireless PBFT and RAFT networks with THz and mmWave signals, so that PBFT and RAFT can play their role in the 6G era. In this paper, we study and compare the performance of THz and mmWave signals in non-ideal wireless PBFT and RAFT networks, considering Rayleigh fading and close-in free space (FS) reference distance path loss. Performance is evaluated by five metrics: consensus success rate, latency, throughput, reliability gain, and energy consumption. We also derive the maximum distance between two nodes below which consensus inevitably succeeds, which we name the active distance of CMs. The results not only characterize the performance of non-ideal wireless PBFT and RAFT networks, but also provide important references for the future transmission of THz and mmWave signals in PBFT and RAFT networks.
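
    As a hedged illustration of the channel model named above, the sketch below combines close-in free-space reference distance path loss (1 m reference) with Rayleigh fading and estimates, by Monte Carlo, the probability that a consensus message arrives above an SNR threshold; the path loss exponent, powers, and threshold are illustrative assumptions, not the paper's parameters.

```python
# CI path loss + Rayleigh fading: Monte Carlo message-success probability.
# All numeric parameters are illustrative assumptions.
import numpy as np

def ci_path_loss_db(d_m, f_ghz, ple):
    # close-in model with 1 m reference: FSPL(1 m, f) + 10*n*log10(d)
    return 32.4 + 20 * np.log10(f_ghz) + 10 * ple * np.log10(d_m)

def success_prob(d_m, f_ghz=28.0, ple=2.0, ptx_dbm=20.0,
                 noise_dbm=-90.0, snr_th_db=5.0, trials=100_000):
    rng = np.random.default_rng(2)
    fading_db = 10 * np.log10(rng.exponential(1.0, trials))  # Rayleigh power
    snr_db = ptx_dbm - ci_path_loss_db(d_m, f_ghz, ple) - noise_dbm + fading_db
    return (snr_db >= snr_th_db).mean()

print(success_prob(50.0))   # e.g., an mmWave link at 50 m
```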

    Underwater optical wireless communications in turbulent conditions: from simulation to experimentation

    Underwater optical wireless communication (UOWC) is a technology that aims to apply high speed optical wireless communication (OWC) techniques to the underwater channel. UOWC has the potential to provide high speed links over relatively short distances as part of a hybrid underwater network, along with radio frequency (RF) and underwater acoustic communications (UAC) technologies. However, there are some difficulties involved in developing a reliable UOWC link, chiefly the complexity of the channel. The main focus throughout this thesis is to develop a greater understanding of the effects of the UOWC channel, especially underwater turbulence. This understanding is developed from basic theory through simulation and experimental studies in order to gain a holistic understanding of turbulence in the UOWC channel. This thesis first presents a method of modelling optical underwater turbulence through simulation that allows it to be examined in conjunction with absorption and scattering. In a stationary channel, this turbulence induced scattering is shown to cause an increase in both spatial and temporal spreading at the receiver plane. It is also demonstrated, using the technique presented, that the relative impact of turbulence on a received signal is lower in a highly scattering channel, showing an in-built resilience of these channels. Received intensity distributions are presented, confirming that fluctuations in received power from this method follow the commonly used log-normal fading model. The impact of turbulence, as measured using this new modelling framework, on link performance in terms of maximum achievable data rate and bit error rate is also investigated. Following that, experimental studies are presented comparing the relative impact of turbulence induced scattering on coherent and non-coherent light propagating through water, and the relative impact of turbulence in different water conditions. It is shown that the scintillation index increases with increasing temperature inhomogeneity in the underwater channel, and that a light beam from a non-coherent source has greater resilience to the turbulence induced by temperature inhomogeneity. These results will help researchers to simulate realistic channel conditions when modelling a light emitting diode (LED) based intensity modulation with direct detection (IM/DD) UOWC link. Finally, a comparison of different modulation schemes in still and turbulent water conditions is presented. Using an underwater channel emulator, it is shown that pulse position modulation (PPM) and subcarrier intensity modulation (SIM) have an inherent resilience to turbulence induced fading, with SIM achieving higher data rates under all conditions. The signal processing technique termed pair-wise coding (PWC) is applied to SIM in underwater optical wireless communications for the first time. The performance of PWC is compared with the state-of-the-art bit and power loading optimisation algorithm. Using PWC, a maximum data rate of 5.2 Gbps is achieved in still water conditions.
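
    A minimal sketch of the log-normal fading model confirmed above, together with the scintillation index used to quantify turbulence strength; the log-amplitude standard deviation is an illustrative value, and the empirical index is checked against the closed form exp(4*sigma^2) - 1.

```python
# Log-normal intensity fading and scintillation index, a minimal sketch.
# sigma_x (log-amplitude std) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)
sigma_x = 0.2
# Choose the mean so that E[I] = 1 (normalised received intensity).
chi = rng.normal(-sigma_x**2, sigma_x, 1_000_000)   # log-amplitude samples
I = np.exp(2 * chi)                                  # intensity I = exp(2*chi)

scint_index = I.var() / I.mean()**2     # sigma_I^2 = <I^2>/<I>^2 - 1
print(scint_index, np.exp(4 * sigma_x**2) - 1)       # empirical vs theory
```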

    Interference mitigation in LiFi networks

    Due to the increasing demand for wireless data, the radio frequency (RF) spectrum has become a very limited resource. Alternative approaches are under investigation to support the future growth in data traffic and next-generation high-speed wireless communication systems. Techniques such as massive multiple-input multiple-output (MIMO), millimeter wave (mmWave) communications and light-fidelity (LiFi) are being explored. Among these technologies, LiFi is a novel bi-directional, high-speed and fully networked wireless communication technology. However, inter-cell interference (ICI) can significantly restrict the system performance of LiFi attocell networks. This thesis focuses on interference mitigation in LiFi attocell networks. The angle diversity receiver (ADR) is one solution for addressing ICI and enabling frequency reuse in LiFi attocell networks. With its high concentration gain and narrow field of view (FOV), the ADR is very beneficial for interference mitigation. However, the optimum structure of the ADR has not been investigated, which motivates us to propose optimum ADR structures in order to fully exploit the performance gain. The impact of random device orientation and diffuse link signal propagation is taken into consideration. A performance comparison between select best combining (SBC) and maximum ratio combining (MRC) is carried out under different noise levels. In addition, the double source (DS) system, in which each LiFi access point (AP) consists of two sources transmitting the same information signals with opposite polarity, is proven to outperform the single source (SS) system under certain conditions. Then, to overcome issues around ICI, random device orientation, and link blockage, hybrid LiFi/WiFi networks (HLWNs) are considered. In this thesis, dynamic load balancing (LB) considering handover in HLWNs is studied. The orientation-based random waypoint (ORWP) mobility model is considered to provide a more realistic framework for evaluating the performance of HLWNs. Based on the low-pass filtering effect of the LiFi channel, we first propose an orthogonal frequency division multiple access (OFDMA)-based resource allocation (RA) method for LiFi systems. An enhanced evolutionary game theory (EGT)-based LB scheme with handover in HLWNs is also proposed. Finally, due to its high directivity and narrow beams, a vertical-cavity surface-emitting laser (VCSEL) array transmission system is proposed to mitigate ICI. In order to support mobile users, two beam activation methods are proposed. Beam activation based on the corner-cube retroreflector (CCR) achieves low power consumption and almost-zero delay, allowing real-time beam activation for high-speed users. The mechanism based on the omnidirectional transmitter (ODTx) is suitable for low-speed users and is very robust to random orientation.
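
    The two combining schemes compared above can be contrasted with a toy calculation: in a noise-limited channel, SBC keeps only the best photodiode branch while MRC attains the sum of the per-branch SNRs; the branch SNR values here are illustrative assumptions, not the thesis's results.

```python
# SBC vs MRC post-combining SNR in a noise-limited channel (toy values).
import numpy as np

def sbc_snr(branch_snrs):
    # select best combining: use only the strongest photodiode
    return max(branch_snrs)

def mrc_snr(branch_snrs):
    # maximum ratio combining: branch SNRs add in the noise-limited case
    return sum(branch_snrs)

snrs = [4.0, 1.5, 0.2]   # linear per-photodiode SNRs (illustrative)
print(f"SBC: {10 * np.log10(sbc_snr(snrs)):.2f} dB, "
      f"MRC: {10 * np.log10(mrc_snr(snrs)):.2f} dB")
```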

    Educating Sub-Saharan Africa: Assessing Mobile Application Use in a Higher Learning Engineering Programme

    In the institution where I teach, insufficient laboratory equipment for engineering education pushed students to learn via mobile phones or devices. Using mobile technologies to learn and practise is not the issue; the more important question lies in finding out where and how students use mobile tools for learning. Through the lens of Kearney et al.’s (2012) pedagogical model, using authenticity, personalisation, and collaboration as constructs, this case study adopts a mixed-method approach to investigate students’ mobile learning activities and their experiences of what works and what does not. Four questions arise from the over-arching research question, ‘How do students studying at a University in Nigeria perceive mobile learning in electrical and electronic engineering education?’ The first three questions are answered from qualitative interview data analysed using thematic analysis. The fourth question investigates students’ collaborations on two mobile social networks using social network and message analysis. The study found how students’ mobile learning relates to the real-world practice of engineering, explained ways of adapting to and overcoming the limitations of mobile tools, and described the nature of the collaborations that students naturally adopted when learning in mobile social networks. It found that mobile engineering learning can possibly be located in an offline mobile zone, and it demonstrates that the effectiveness of mobile learning in the mobile social environment can be investigated by examining users’ interactions. The study shows how mobile learning personalisation that leads to impactful engineering learning can be achieved, shows how to manage most interface and technical challenges associated with mobile engineering learning, and provides a new guide for educators on where and how mobile learning can be harnessed. Finally, it reveals how engineering education can be successfully implemented through mobile tools.

    Coverage measurements of NB-IoT technology

    Abstract. The narrowband internet of things (NB-IoT) is a cellular radio access technology that provides seamless connectivity to wireless IoT devices with low latency, low power consumption, and long-range coverage. For long-range coverage, NB-IoT offers a coverage enhancement (CE) mechanism achieved by repeating the transmission of signals. Good network coverage is essential to reduce the battery usage and power consumption of IoT devices, while poor network coverage increases the number of repetitions in transmission, which causes high power consumption of IoT devices. The primary objective of this work is to determine the network coverage of NB-IoT technology under the University of Oulu’s 5G test network (5GTN) base station. In this thesis work, measurement results on key performance indicators such as reference signal received power (RSRP), reference signal received quality (RSRQ), received signal strength indicator (RSSI), and signal to interference plus noise ratio (SINR) are reported. The goal of the measurements is to find the NB-IoT signal strength at different locations served by 5GTN cells configured with different parameters, e.g., Tx power levels and antenna tilt angles. The signal strength of NB-IoT technology has been measured at different places under the 5GTN base station in Oulu, Finland. Drive tests have been conducted to measure the signal strength of NB-IoT technology using the Quectel BG96 module, the Qualcomm kDC-5737 dongle and Keysight Nemo Outdoor software. The results show the values of RSRP, RSRQ, RSSI, and SINR at different locations within several kilometres of the 5GTN base stations. These values indicate the performance of the network and are used to assess the performance of network services to end-users. In this work, the overall performance of the network has been checked to verify whether it meets good signal levels and provides good network coverage. Relevant details of NB-IoT technology, the theory behind signal coverage, and comparisons with the measurement results are also discussed to check the relevance of the results.
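
    For orientation, the reported quantities are related by the standard LTE-family definition RSRQ = N * RSRP / RSSI, where N is the number of resource blocks in the measurement bandwidth (a single 180 kHz carrier for NB-IoT, so N = 1); the small helper below uses illustrative values, not the thesis's drive-test data.

```python
# RSRQ from RSRP and RSSI (dB-domain form of RSRQ = N*RSRP/RSSI).
# n_rb = 1 reflects NB-IoT's single-PRB carrier; inputs are illustrative.
import math

def rsrq_db(rsrp_dbm, rssi_dbm, n_rb=1):
    return 10 * math.log10(n_rb) + rsrp_dbm - rssi_dbm

print(rsrq_db(rsrp_dbm=-95.0, rssi_dbm=-85.0))   # -> -10.0 dB
```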

    Systems Methods for Analysis of Heterogeneous Glioblastoma Datasets Towards Elucidation of Inter-Tumoural Resistance Pathways and New Therapeutic Targets

    This PhD thesis describes an endeavour to compile the literature on key Glioblastoma molecular mechanisms into a directed network following Disease Maps standards, to analyse its topology, and to compare the results with quantitative analyses of multi-omics datasets, in order to investigate Glioblastoma resistance mechanisms. The work also integrated the implementation of good data management practices and procedures.
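
    A hedged sketch of the kind of topology analysis such a directed disease map supports: ranking nodes by degree and betweenness centrality with networkx. The toy edges below are placeholder interactions, not the thesis's curated map.

```python
# Toy directed interaction network and two topology metrics.
# Edges are illustrative placeholders, not curated Disease Maps content.
import networkx as nx

g = nx.DiGraph([("EGFR", "PI3K"), ("PI3K", "AKT"), ("AKT", "mTOR"),
                ("PTEN", "PI3K"), ("TP53", "MDM2"), ("MDM2", "TP53")])

print(sorted(g.out_degree(), key=lambda kv: -kv[1]))  # hub-like nodes
print(nx.betweenness_centrality(g))                   # bottleneck nodes
```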

    Delay measurements in a live 5G cellular network

    Abstract. 5G networks have many important properties, including increased bandwidth, increased data throughput, high reliability, high network density, and low latency. This thesis concentrates on the low latency attribute of the 5G Standalone (SA) and 5G Non-Standalone (NSA) modes. One of the most critical considerations in 5G is to have a low latency network for various delay-sensitive applications, such as remote diagnostics and surgery in healthcare, self-driving cars, industrial factory automation, and live audio productions in the music industry. To meet the low latency standards, 5G therefore employs various retransmission algorithms and techniques, a new frame structure with multiple subcarrier spacings (SCS) and time slots, and a new cloud-native core. For the low latency measurements, a test setup is built: a video is sent from the 5G User Equipment (UE) to a multimedia server deployed at the University of Oulu 5G test network (5GTN) edge server. The University of Oulu 5GTN operates in both NSA and SA modes. Delay is measured in both the downlink and the uplink direction with the Qosium tool. When calculating millisecond-level transmission delays, clock synchronization is essential; therefore, the Precision Time Protocol daemon (PTPd) service is run on both the sending and receiving machines. The tests comply with the specifications developed at the University of Oulu 5GTN for both the SA and the NSA mode. When the delay measurement findings were compared between the two deployment modes, it was observed that a direct comparison was not appropriate. The primary reason is that in the 5GTN, the NSA and SA modes have entirely different data routing paths and configurations, and the author did not have sufficient resources to make the required architectural changes.
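
    A minimal sketch of the measurement principle relied on above: with sender and receiver clocks synchronised (here via PTPd), a one-way delay can be computed from a timestamp carried in the packet. This is an illustrative stand-in, not the Qosium tool, and the port number is a hypothetical choice.

```python
# One-way delay from a sender timestamp, assuming PTP-synchronised clocks.
# Illustrative stand-in for the Qosium measurements; PORT is an assumption.
import socket
import struct
import time

PORT = 50007  # hypothetical measurement port

def send_probe(dst_ip):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(struct.pack("!d", time.time()), (dst_ip, PORT))

def receive_probe():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    data, _ = s.recvfrom(64)
    (t_sent,) = struct.unpack("!d", data)
    print(f"one-way delay: {(time.time() - t_sent) * 1e3:.3f} ms")
```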