51 research outputs found

    Defenses Against Perception-Layer Attacks on IoT Smart Furniture for Impaired People

    [EN] The Internet of Things (IoT) is becoming highly supportive of innovative technological solutions for assisting impaired people. Some of these IoT solutions are still in a prototyping phase that ignores possible attacks and the corresponding security defenses. This article proposes a learning-based approach for defending against perception-layer attacks performed on specific sensor types in smart furniture for impaired people. The approach analyses time series by means of the dynamic time warping algorithm for calculating similarity, together with a novel detector for identifying anomalies. It is illustrated by defending against simulated perception-layer magnetic attacks on a smart cupboard with door magnetic sensors. The results show that the proposed approach properly identifies these attacks, with an accuracy of about 95.5% per day. This work was supported in part by the research project Utilisation of IoT and Sensors in Smart Cities for Improving Quality of Life of Impaired People under Grant 52-2020; in part by Ciudades Inteligentes Totalmente Integrales, Eficientes y Sostenibles (CITIES), funded by the Programa Iberoamericano de Ciencia y Tecnologia para el Desarrollo (CYTED) under Grant 518RT0558; in part by Diseno Colaborativo Para La Promocion Del Bienestar En Ciudades Inteligentes Inclusivas under Grant TIN2017-88327-R, funded by the Spanish Council of Science, Innovation and Universities from the Spanish Government; and in part by the Ministerio de Economia y Competitividad in the Programa Estatal de Fomento de la Investigacion Cientifica y Tecnica de Excelencia, Subprograma Estatal de Generacion de Conocimiento, under Grant TIN2017-84802-C2-1-P. Nasralla, MM.; García-Magariño, I.; Lloret, J. (2020). Defenses Against Perception-Layer Attacks on IoT Smart Furniture for Impaired People. IEEE Access. 8:119795-119805. https://doi.org/10.1109/ACCESS.2020.3004814
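The DTW-plus-anomaly-detector idea described in this abstract can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the baseline windows, the magnetic readings, and the threshold value are all hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def is_anomalous(window, baselines, threshold):
    """Flag a sensor window whose minimum DTW distance to every
    known-benign baseline window exceeds a threshold."""
    score = min(dtw_distance(window, b) for b in baselines)
    return score > threshold

# Hypothetical benign door-sensor patterns (binary magnetic readings)
baselines = [np.array([0, 0, 1, 1, 1, 0, 0], dtype=float),
             np.array([0, 1, 1, 1, 1, 1, 0], dtype=float)]
normal = np.array([0, 0, 1, 1, 0, 0, 0], dtype=float)
attack = np.array([1, 0, 1, 0, 1, 0, 1], dtype=float)  # erratic spoofed signal

print(is_anomalous(normal, baselines, threshold=2.0))  # → False
print(is_anomalous(attack, baselines, threshold=2.0))  # → True
```

The warping step is what makes this more robust than a plain Euclidean comparison: a benign door opening that is slightly slower or faster than the baseline still scores near zero.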

    A Repository of Method Fragments for Agent-Oriented Development of Learning-Based Edge Computing Systems

    [EN] The upcoming avenue of IoT, with its massively generated data, makes it very hard to train centralized machine-learning systems in real time. This problem can be addressed with learning-based edge computing systems, in which learning is performed in a distributed way on the nodes. In particular, this work focuses on developing multi-agent systems for implementing learning-based edge computing systems. The diversity of methodologies in agent-oriented software engineering reflects the complexity of developing multi-agent systems, and dividing the development processes into method fragments facilitates the application and study of agent-oriented methodologies. In this line of research, this work proposes a database for implementing a repository of method fragments, considering the development of learning-based edge computing systems and the information recommended by the FIPA technical committee. The repository makes method fragments from different methodologies available, and computerizes certain metrics and queries over the existing method fragments. This work compares the performance of several combinations of dimensionality reduction methods and machine learning techniques (i.e., support vector regression, k-nearest neighbors, and multi-layer perceptron neural networks) in a simulator of a learning-based edge computing system for estimating profits and customers. The authors acknowledge the PSU Smart Systems Engineering Lab, the project "Utilisation of IoT and sensors in smart cities for improving quality of life of impaired people" (ref. 52-2020), CYTED (ref. 518RT0558), and the Spanish Council of Science, Innovation and Universities (TIN2017-88327-R). García-Magariño, I.; Nasralla, MM.; Lloret, J. (2021). A Repository of Method Fragments for Agent-Oriented Development of Learning-Based Edge Computing Systems. IEEE Network. 35(1):156-162. https://doi.org/10.1109/MNET.011.2000296
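The kind of comparison this abstract describes can be sketched with scikit-learn, pairing a dimensionality reduction step with each of the three named regressors. PCA is used here as a stand-in reduction method, and the features and "profit" target are synthetic stand-ins; the paper's actual simulator and data are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in data: 10 node features -> a "profit"-like target
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)

models = {
    "SVR": SVR(),
    "kNN": KNeighborsRegressor(n_neighbors=5),
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
results = {}
for name, model in models.items():
    # Scale, reduce dimensionality (PCA as a stand-in), then regress
    pipe = make_pipeline(StandardScaler(), PCA(n_components=5), model)
    results[name] = cross_val_score(pipe, X, y, cv=3, scoring="r2").mean()
    print(f"{name}: mean R^2 = {results[name]:.3f}")
```

Swapping the PCA step or the final estimator in the pipeline is how such combinations would typically be enumerated and compared.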

    Using machine learning advances to unravel patterns in subject areas and performances of university students with special educational needs and disabilities (MALSEND): a conceptual approach

    Universities and colleges in the UK welcome almost 30,000 disabled students each year. Research shows that the dropout rate from education in the EU is 31.5% for disabled students, much higher than the 12.3% for non-disabled students. Supporting young students who require special educational needs in pursuing higher education is an ambitious and necessary step that needs to be adopted by tertiary education providers worldwide. We propose MALSEND, a project aiming to develop a platform based on machine and human intelligence to understand learning disability patterns in higher education. The platform will analyse datasets from universities in previous years and will help to discover any trends in subject areas and performance among autistic students, dyslexic students, or students with attention deficit hyperactivity disorder (ADHD), among others. Analysing variables such as students' courses, modules, performances, and other engagement indices will give new insights into research questions, career advice, and institutional policy making. This paper describes the activities of the development phases of this concept.

    An intelligent fuzzy logic-based content and channel aware downlink scheduler for scalable video over OFDMA wireless systems

    The recent advancements of wireless technology and applications make downlink scheduling and resource allocation an important research topic. In this paper, we consider the problem of downlink scheduling for multi-user scalable video streaming over OFDMA channels. The video streams are precoded using a scalable video coding (SVC) scheme. We propose a fuzzy logic-based scheduling algorithm, which prioritises transmission to different users by considering video content and channel conditions. Furthermore, a novel analytical model and a new performance metric have been developed for the performance analysis of the proposed scheduling algorithm. The obtained results show that the proposed algorithm outperforms content-blind/channel-aware scheduling algorithms with a gain of as much as 19% in terms of the number of supported users. The proposed algorithm allows for a fairer allocation of resources among users across the entire sector coverage, enhancing video quality at the cell edges while minimising the degradation for users closer to the base station.
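Content- and channel-aware fuzzy prioritisation of the kind described can be illustrated with a toy Mamdani-style sketch. The membership shapes, rule base, and crisp priority levels below are hypothetical stand-ins, not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def priority(snr_db, frame_importance):
    """Fuzzy priority of a user's next video frame, combining channel
    quality (SNR in dB) and content importance (0..1)."""
    good_channel = tri(snr_db, 5, 20, 35)    # hypothetical membership shapes
    poor_channel = tri(snr_db, -10, 0, 15)
    important = frame_importance
    unimportant = 1.0 - frame_importance
    # Rule strengths (min for AND), each mapped to a crisp priority level
    rules = [
        (min(good_channel, important), 1.0),   # good channel & key frame -> high
        (min(good_channel, unimportant), 0.6),
        (min(poor_channel, important), 0.4),
        (min(poor_channel, unimportant), 0.1),
    ]
    # Weighted-average defuzzification
    num = sum(w * p for w, p in rules)
    den = sum(w for w, p in rules)
    return num / den if den else 0.0

users = [(25.0, 0.9), (8.0, 0.9), (25.0, 0.2)]  # (SNR dB, importance)
ranked = sorted(users, key=lambda u: priority(*u), reverse=True)
print(ranked)
```

A scheduler would then serve resource blocks in `ranked` order each transmission time interval, so a user with a strong channel and an important frame is served first.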

    Multilayer perceptron neural network-based QoS-aware, content-aware and device-aware QoE prediction model : a proposed prediction model for medical ultrasound streaming over small cell networks

    This paper presents a QoS-aware, content-aware and device-aware non-intrusive medical QoE (m-QoE) prediction model over small cell networks. The proposed prediction model utilises a Multilayer Perceptron (MLP) neural network to predict m-QoE. It also acts as a platform to maintain and optimise acceptable diagnostic quality through a device-aware adaptive video streaming mechanism. The proposed model is trained on input variables such as QoS metrics, content features, and display device characteristics, to produce an output value in the form of m-QoE (i.e. MOS) for unseen data. The efficiency of the proposed model is validated through subjective tests carried out by medical experts. The prediction accuracy, obtained via the correlation coefficient and Root Mean Square Error (RMSE), indicates that the proposed model succeeds in measuring m-QoE close to the visual perception of the medical experts. Furthermore, we address the following two main research questions: (1) How significant is ultrasound video content type in determining m-QoE? (2) How much of a role do screen size and device resolution play in medical experts' diagnostic experience? The former is answered through the content classification of ultrasound video sequences based on their spatio-temporal features, by including these features in the proposed prediction model and validating their significance through medical experts' subjective ratings. The latter is answered by conducting a novel subjective experiment with the ultrasound video sequences across multiple devices.
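The shape of such an MLP-based m-QoE predictor can be sketched with scikit-learn. The feature set, the synthetic MOS rule, and all numeric ranges below are hypothetical stand-ins for illustration; the paper's real training data comes from medical experts' subjective ratings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical records: [bitrate_kbps, packet_loss_%, spatial_info,
# temporal_info, screen_inches, resolution_height] -> MOS (1..5)
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.uniform(200, 4000, n),          # bitrate (QoS)
    rng.uniform(0, 5, n),               # packet loss (QoS)
    rng.uniform(20, 120, n),            # spatial information (content)
    rng.uniform(5, 60, n),              # temporal information (content)
    rng.choice([5.5, 10.0, 13.3], n),   # screen size (device)
    rng.choice([720, 1080, 2160], n),   # resolution (device)
])
# Synthetic MOS: higher bitrate and lower loss -> higher score (stand-in rule)
mos = np.clip(1 + 4 * (X[:, 0] / 4000) * (1 - X[:, 1] / 10), 1, 5)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=3000, random_state=0))
model.fit(X, mos)
sample = [[2500, 0.5, 60, 30, 10.0, 1080]]
print(f"Predicted m-QoE (MOS): {model.predict(sample)[0]:.2f}")
```

In an adaptive streaming loop, the predicted MOS for each candidate bitrate/device pair would drive the rate selection, keeping the stream above the diagnostically acceptable threshold.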

    Swarm of UAVs for Network Management in 6G: A Technical Review

    Fifth-generation (5G) cellular networks have led to the implementation of beyond-5G (B5G) networks, which are capable of incorporating autonomous services for swarms of unmanned aerial vehicles (UAVs). They provide capacity expansion strategies to address massive connectivity issues and guarantee ultra-high throughput and low latency, especially in extreme or emergency situations where network density, bandwidth, and traffic patterns fluctuate. On the one hand, 6G technology integrates AI/ML, IoT, and blockchain to establish ultra-reliable, intelligent, secure, and ubiquitous UAV networks. On the other hand, 6G networks rely on new enabling technologies, such as new air interface and transmission technologies and a unique network design, posing new challenges for swarms of UAVs. Keeping these challenges in mind, this article focuses on the security and privacy, intelligence, and energy-efficiency issues faced by swarms of UAVs operating in 6G mobile networks. In this state-of-the-art review, we integrate blockchain and AI/ML with UAV networks utilizing the 6G ecosystem. The key findings are then presented, and potential research challenges are identified. We conclude the review by shedding light on future research in this emerging field.

    Features of mobile apps for people with autism in a post covid-19 scenario: current status and recommendations for apps using AI

    The new ‘normal’ defined during the COVID-19 pandemic has forced us to re-assess how people with special needs, such as those with Autism Spectrum Disorder (ASD), thrive in these unprecedented conditions. These changing and challenging conditions have prompted us to revisit the use of telehealth services to improve the quality of life of people with ASD. This study aims to identify mobile applications that suit the needs of such individuals. The work focuses on identifying features of a number of highly-rated mobile applications (apps) designed to assist people with ASD, specifically those features that use Artificial Intelligence (AI) technologies. In this study, 250 mobile apps were retrieved using keywords such as autism, autism AI, and autistic. Of the 250 apps, 46 were identified after filtering out irrelevant apps based on defined elimination criteria covering the apps' common user groups: people with ASD, medical staff, and non-medically trained people interacting with people with ASD. To review common functionalities and features, 25 apps were downloaded and deconstructed, analysing features such as eye tracking, facial expression analysis, use of 3D cartoons, haptic feedback, engaging interfaces, text-to-speech, use of Applied Behaviour Analysis therapy, and Augmentative and Alternative Communication techniques, among others. As a result, software developers and healthcare professionals can consider the identified features when designing future support tools for autistic people. This study hypothesises that, by studying these current features, recommendations can be made on how existing applications for people with ASD could be enhanced using AI for (1) progress tracking, (2) personalised content delivery, (3) automated reasoning, (4) image recognition, and (5) Natural Language Processing (NLP). The paper follows the PRISMA methodology, a set of recommendations for reporting systematic reviews and meta-analyses.

    An investigation into the roles of chlorides and sulphate salts on the performance of low salinity injection in sandstone reservoirs : experimental approach

    Numerous studies have been carried out to ascertain the mechanisms of low salinity and smart water flooding techniques for improved oil recovery. The focus has often been on brine composition and, specifically, the cationic content in sandstone reservoirs. Given the importance of salt composition and concentration, tweaking the active ions responsible for the fluid-rock equilibrium brings into effect numerous mechanisms of displacement that have been extensively debated. This experimental study, however, was carried out to evaluate the extent of the roles of chloride- and sulphate-based brines in improved oil recovery. To this end, 70,000 ppm sulphate- and chloride-based brines were prepared to simulate formation water, and 5,000 ppm brines of the same species were prepared as low salinity displacement fluids. A core flooding process was used to simulate the displacement of oil using four native sandstone core samples, obtained from the Burgan oil field in Kuwait, at operating conditions of 1500 psig and 50 °C. The core samples were injected with the 70,000 ppm chloride and sulphate brines and subsequently flooded with their 5,000 ppm counterparts in a forced imbibition process. Separate evaluations of the chloride- and sulphate-based brines were carried out to investigate the displacement efficiency of each brine species. The results showed that, in both the high and low salinity displacement tests, the sulphate brine gave better recovery of up to 89% of the initial oil saturation (Soi). Several mechanisms of displacement were observed to be responsible for the improved recovery during sulphate brine displacement. IFT measurement experiments also confirmed a reduction in IFT at test conditions between the sulphate brine and oil, and visual inspection of the effluent showed a degree of emulsification of oil and brines. Changes in pH were observed in the low salinity floods, and negligible changes were noticed in the high salinity floods. These results provide an insight into the roles of chloride and sulphate ions in the design of smart “designer” water and low salinity injection scenarios.

    Video quality and QoS-driven downlink scheduling for 2D and 3D video over LTE networks

    In recent years, cellular operators throughout the world have observed a rapid increase in the number of mobile broadband subscribers. Similarly, the amount of traffic per subscriber is growing rapidly, in particular with the emergence of advanced mobile phones, smartphones, and real-time services (such as 2D and 3D video, IP telephony, etc.). Long-Term Evolution (LTE) is a technology capable of providing high data rates for multimedia applications through its IP-based framework. The Third Generation Partnership Project (3GPP) LTE and its subsequent evolution, LTE-Advanced (LTE-A), are the latest standards in the series of mobile telecommunication systems, and they have already been deployed in developed countries. The 3GPP standard has left the scheduling approaches unstandardised, which enables the proposal of standard-compatible solutions to enhance Quality of Service (QoS) and Quality of Experience (QoE) performance in multi-user wireless network scenarios. The main objective of this PhD project was the design and evaluation of LTE downlink scheduling strategies for the efficient transmission of multi-user 2D and 3D video and multiple traffic classes over error-prone and bandwidth-limited wireless communication channels. The strategies developed aim to maximise and balance QoS among users and to improve QoE at the receiver end. Following a review and a novel taxonomy of existing content-aware and content-unaware downlink scheduling algorithms, together with a network-centric and user-centric performance evaluation, this thesis proposes a novel QoS-driven downlink scheduling approach for 2D and 3D video and multiple traffic classes over LTE wireless systems. Moreover, the thesis explores the quality of 3D video over LTE wireless networks through network-centric and user-centric performance evaluation of existing and proposed scheduling algorithms. Admission control is also proposed, considering the different LTE bandwidth sizes, in order to achieve high system resource utilisation and deliver high 2D and 3D video quality for LTE users. The thesis introduces the transmission of 3D video over a modelled LTE wireless network. The channel is modelled via Gilbert-Elliott (GE) parameters which represent real statistics of an LTE wireless channel. The results of subjective and objective assessments of the 3D video sequences are provided for different levels of wireless impairments.
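The Gilbert-Elliott channel mentioned in this abstract is a two-state Markov model that reproduces the bursty losses of a real wireless link. A minimal simulation sketch follows; the transition probabilities and per-state loss rates here are hypothetical, not the thesis's fitted LTE statistics.

```python
import random

def gilbert_elliott(n_packets, p_gb, p_bg, loss_good=0.001, loss_bad=0.3, seed=42):
    """Simulate packet losses over a two-state Gilbert-Elliott channel.
    p_gb: P(good -> bad) per packet; p_bg: P(bad -> good) per packet.
    Per-state loss rates are hypothetical stand-ins."""
    rng = random.Random(seed)
    state_good = True
    losses = []
    for _ in range(n_packets):
        # State transition, then a loss draw at the current state's rate
        if state_good:
            if rng.random() < p_gb:
                state_good = False
        elif rng.random() < p_bg:
            state_good = True
        loss_p = loss_good if state_good else loss_bad
        losses.append(rng.random() < loss_p)
    return losses

losses = gilbert_elliott(100_000, p_gb=0.05, p_bg=0.4)
rate = sum(losses) / len(losses)
print(f"Average packet loss rate: {rate:.3%}")
```

The stationary probability of the bad state is p_gb / (p_gb + p_bg) ≈ 0.11 here, so the long-run loss rate lands near 0.11 × 0.3 + 0.89 × 0.001 ≈ 3.4%, with losses clustered in bursts rather than spread uniformly.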