
    Reliable indoor optical wireless communication in the presence of fixed and random blockers

    The rapid advancement of smartphones has led to exponential growth in the number of internet users, which is expected to reach 71% of the global population by the end of 2027. This in turn has given rise to demand for wireless data and internet devices that are capable of providing energy-efficient, reliable data transmission and high-speed wireless data services. Light-fidelity (LiFi), one of the optical wireless communication (OWC) technologies, is envisioned as a promising solution to accommodate these demands. However, the indoor LiFi channel is highly environment-dependent and is influenced by several crucial factors (e.g., the presence of people and furniture, random device orientation of users and the limited field of view (FOV) of optical receivers) that may contribute to blockage of the line-of-sight (LOS) link. In this thesis, it is investigated whether deep learning (DL) techniques can effectively learn the distinct features of the indoor LiFi environment in order to provide superior performance compared to conventional channel estimation techniques (e.g., minimum mean square error (MMSE) and least squares (LS)). This advantage is particularly evident when access to real-time channel state information (CSI) is restricted, and it comes at the cost of collecting large, meaningful datasets to train the DL neural networks and of the offline training time. Two DL-based schemes are designed for signal detection and resource allocation; the proposed methods are shown to perform close to the optimal conventional schemes and to deliver substantial gains in terms of bit-error ratio (BER) and throughput, especially in more realistic or complex indoor environments. Performance analysis of LiFi networks under the influence of fixed and random blockers is essential, and efficient solutions capable of diminishing the blockage effect are required. In this thesis, a CSI acquisition technique for a reconfigurable intelligent surface (RIS)-aided LiFi network is proposed to significantly reduce the dimension of the decision variables required for RIS beamforming. Furthermore, it is shown that several RIS attributes such as shape, size, height and distribution play important roles in improving network performance. Finally, a performance analysis for an RIS-aided realistic indoor LiFi network is presented. The proposed RIS configuration shows outstanding performance in reducing the network outage probability under the effects of blockages, random device orientation, limited receiver FOV, furniture and user behavior. Establishing a LOS link that achieves uninterrupted wireless connectivity in a realistic indoor environment can be challenging. In this thesis, an analysis of link blockage is presented for an indoor LiFi system considering fixed and random blockers. In particular, novel analytical frameworks for the coverage probability of single-source and multi-source setups are derived. Using the proposed analytical frameworks, link blockages of the indoor LiFi network are carefully investigated, and it is shown that the incorporation of multiple sources and RIS can significantly reduce the LOS coverage blockage probability in indoor LiFi systems.
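    For context, the LOS blockage and FOV discussion above builds on the standard Lambertian LOS channel-gain model commonly used in indoor OWC analyses. The block below restates that textbook model; it is not quoted from the thesis, and the symbols follow the usual conventions rather than the author's notation.

```latex
% Standard Lambertian LOS channel gain for an indoor OWC/LiFi link
% (textbook model; not taken from the thesis itself).
H_{\mathrm{LOS}} =
\begin{cases}
\dfrac{(m+1)\,A_{\mathrm{pd}}}{2\pi d^{2}}\,\cos^{m}(\phi)\,T_{s}(\psi)\,g(\psi)\,\cos(\psi), & 0 \le \psi \le \Psi_{\mathrm{FOV}},\\[4pt]
0, & \psi > \Psi_{\mathrm{FOV}},
\end{cases}
\qquad
m = \frac{-\ln 2}{\ln\!\big(\cos\Phi_{1/2}\big)}
```

    Here A_pd is the photodetector area, d the transmitter-receiver distance, φ the angle of irradiance, ψ the angle of incidence, T_s and g the optical filter and concentrator gains, and Φ_1/2 the LED semi-angle. The hard cutoff at the receiver FOV is precisely why LOS blockage, random device orientation and a limited FOV dominate indoor LiFi performance.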

    Strategy Tripod Perspective on the Determinants of Airline Efficiency in A Global Context: An Application of DEA and Tobit Analysis

    The airline industry is vital to contemporary civilization since it is a key player in the globalization process: linking regions, fostering global commerce, promoting tourism and aiding economic and social progress. However, there has been little study of the link between the operational environment and airline efficiency. Investigating the amalgamation of institutions, organisations and strategic decisions is critical to understanding how airlines operate efficiently. This research employs the strategy tripod perspective to investigate the efficiency of a global airline sample using a non-parametric linear programming method (data envelopment analysis [DEA]). Using a Tobit regression, the bootstrapped DEA efficiency change scores are further regressed to determine the drivers of efficiency. The strategy tripod is employed to assess the impact of institutions, industry and resources on airline efficiency: institutions are measured by global indices of destination attractiveness; industry by competition, jet fuel and business model; and resources by the number of full-time employees, alliances, ownership and connectivity. The first part of the study uses panel data from 35 major airlines, collected from their annual reports for the period 2011 to 2018, and country attractiveness indices from global indicators. The second part of the research involves a qualitative data collection approach and semi-structured interviews with experts in the field to evaluate the impact of COVID-19 on the first part’s significant findings. The main findings reveal that airlines operate at a highly competitive level regardless of their competition intensity or origin. Furthermore, the unpredictability of the environment complicates airline operations. The efficiency drivers of an airline are partially determined by its type of business model, its degree of cooperation and how fuel cost is managed. Trade openness has a negative influence on airline efficiency. COVID-19 has upended the airline industry, forcing airlines to reconsider their business models and continuously increase cooperation. Human resources, sustainability and alternative fuel sources are critical to airline survival. Finally, this study provides some evidence for the practicality of the strategy tripod and hints at the need for a broader approach in the study of international strategies.
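    The first-stage efficiency scores described above come from a standard DEA linear program. The snippet below is a minimal sketch of an input-oriented CCR model solved per airline (decision-making unit); the thesis's actual input/output choices, the bootstrapping and the Tobit second stage are not reproduced here, and all function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency score (theta) of DMU k.

    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    theta = 1 means DMU k lies on the efficient frontier.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Inputs:  sum_j lambda_j * x_ij <= theta * x_ik   (one row per input i)
    A_inputs = np.hstack([-X[[k]].T, X.T])
    # Outputs: sum_j lambda_j * y_rj >= y_rk           (one row per output r)
    A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy example: 3 airlines, 2 inputs (e.g. employees, fuel), 1 output (e.g. RPK).
X = np.array([[100.0, 50.0], [120.0, 40.0], [90.0, 70.0]])
Y = np.array([[300.0], [330.0], [250.0]])
print([ccr_efficiency(X, Y, k) for k in range(len(X))])
```

    In the two-stage design described in the abstract, such scores (after bootstrapping) would then be regressed on the institutional, industry and resource variables via Tobit, since efficiency scores are bounded.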

    The AddACO: A bio-inspired modified version of the ant colony optimization algorithm to solve travel salesman problems

    The Travel Salesman Problem (TSP) consists in finding the minimal-length closed tour that connects the entire group of nodes of a given graph. We propose to solve such a combinatorial optimization problem with the AddACO algorithm: it is a version of the Ant Colony Optimization method characterized by a modified probabilistic law at the basis of the exploratory movement of the artificial insects. In particular, the ant decisional rule is here set to be a linear convex combination of competing behavioral stimuli and therefore has an additive form (hence the name of our algorithm), rather than the canonical multiplicative one. The AddACO intends to address two conceptual shortcomings that characterize classical ACO methods: (i) the population of artificial insects is in principle allowed to simultaneously minimize/maximize all migratory guidance cues (which is implausible from a biological/ecological point of view) and (ii) a given edge of the graph has a null probability of being explored if at least one of the movement stimuli is there equal to zero, regardless of the intensity of the others (which in principle reduces the exploratory potential of the ant colony). Three possible variants of our method are then specified: the AddACO-V1, which includes pheromone trail and visibility as insect decisional variables, and the AddACO-V2 and the AddACO-V3, which in turn add random effects and inertia, respectively, to the two classical migratory stimuli. The three versions of our algorithm are tested on benchmark middle-scale TSP instances, in order to assess their performance and to find their optimal parameter setting. The best performing variant is finally applied to large-scale TSPs, compared to the naive Ant-Cycle Ant System proposed by Dorigo and colleagues, and evaluated in terms of quality of the solutions, computational time, and convergence speed. The aim is in fact to show that the proposed transition probability, along with its conceptual advantages, is competitive from a performance perspective, i.e., that it does not reduce the exploratory capacity of the ant population w.r.t. the canonical one (at least in the case of the selected TSPs). A theoretical study of the asymptotic behavior of the AddACO is given in the appendix of the work, whose concluding section contains some hints for further improvements of our algorithm, also with a view to its application to other optimization problems.
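    To make the additive-versus-multiplicative distinction concrete, the sketch below contrasts the canonical Ant System transition rule with one plausible reading of the convex-combination rule described above. The exact normalisation used by the AddACO is not given in the abstract, so the additive_rule shown here is an assumption for illustration only.

```python
import numpy as np

def multiplicative_rule(tau, eta, alpha=1.0, beta=2.0):
    """Canonical Ant System transition probabilities over the feasible moves."""
    w = (tau ** alpha) * (eta ** beta)
    return w / w.sum()

def additive_rule(tau, eta, weight=0.5):
    """Linear convex combination of normalised stimuli (AddACO-style reading
    of the abstract; the actual normalisation is an assumption)."""
    tau_hat = tau / tau.sum()
    eta_hat = eta / eta.sum()
    w = weight * tau_hat + (1.0 - weight) * eta_hat
    return w / w.sum()

# Three candidate edges: pheromone levels tau and visibilities eta
# (e.g. inverse distances). The first edge carries no pheromone.
tau = np.array([0.0, 1.0, 2.0])
eta = np.array([0.5, 0.2, 0.1])
print(multiplicative_rule(tau, eta))  # first entry is exactly 0
print(additive_rule(tau, eta))        # first entry stays positive
```

    The toy example highlights shortcoming (ii): an edge whose pheromone level is zero gets zero probability under the multiplicative rule, but remains reachable under the additive one.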

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Optimal decision of a disaster relief network equilibrium model

    Frequent natural disasters challenge relief network efficiency. This paper introduces a stochastic relief network with limited path capacity, develops an equilibrium model based on cumulative prospect theory, and formulates it as a stochastic variational inequality problem to enhance emergency response and resource allocation efficiency. Using an NCP function, the Lagrange function and random variables, the model dynamically monitors disasters, enabling rational resource allocation for quick decision-making. Compared to traditional methods, our model significantly improves resource scheduling and reduces disaster response costs. Through a random network example, we validate the model's effectiveness in aiding intelligent decision-making for relief plans and resource allocation optimization.
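    For readers unfamiliar with the NCP reformulation mentioned above: complementarity conditions of the form a ≥ 0, b ≥ 0, ab = 0 are typically recast as equations through an NCP function. A common textbook choice is the Fischer-Burmeister function shown below; the abstract does not specify which NCP function the paper actually uses.

```latex
% Fischer-Burmeister NCP function (textbook example; the paper's exact
% choice of NCP function is not stated in the abstract).
\varphi_{\mathrm{FB}}(a,b) = \sqrt{a^{2}+b^{2}} - a - b,
\qquad
\varphi_{\mathrm{FB}}(a,b) = 0 \;\Longleftrightarrow\; a \ge 0,\ b \ge 0,\ ab = 0
```

    Applying such a function componentwise to the optimality conditions of the variational inequality turns the equilibrium problem into a system of semismooth equations that standard solvers can handle.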

    Application of a digital twin for highway tunnels based on multi-sensor and information fusion

    Due to the harsh environment of highway tunnels and frequent breakdowns of various detection sensors and surveillance devices, the operational management of highway tunnels lacks effective data support. This paper analyzes the characteristics of operational surveillance data in highway tunnels, proposes a multimodal information fusion method based on CNN–LSTM–attention, and designs and develops a digital twin for highway tunnel operations. The system addresses shortcomings of existing operation control systems, such as insufficient development and coordination of their technical architecture, weak information service capabilities, insufficient data application capabilities, and a lack of intelligent decision-making and control capabilities. The developed system achieves data-driven, closed-loop management of “accurate perception–risk assessment–decision warning–emergency management” for highway tunnel operations. The engineering demonstration application underscores the system’s capacity to enhance tunnel traffic safety, diminish tunnel management costs, and elevate tunnel driving comfort.
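    The CNN–LSTM–attention fusion mentioned above follows a common pattern: convolutions extract local features from the multivariate sensor windows, an LSTM models the temporal dynamics, and an attention layer weights the time steps before a prediction head. The sketch below shows one minimal way to wire this up; layer sizes, the attention variant and all names are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class CnnLstmAttention(nn.Module):
    """Minimal CNN-LSTM-attention sketch for multivariate sensor sequences."""

    def __init__(self, n_sensors, n_classes, hidden=64):
        super().__init__()
        # 1-D convolution extracts local patterns across the sensor channels.
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM models longer-range temporal dependencies.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        # Simple additive attention pools the LSTM states into one vector.
        self.score = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                                  # x: (batch, time, n_sensors)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, time, 32)
        out, _ = self.lstm(h)                              # (batch, time, hidden)
        attn = torch.softmax(self.score(out), dim=1)       # (batch, time, 1)
        context = (attn * out).sum(dim=1)                  # (batch, hidden)
        return self.head(context)
```

    In a tunnel digital twin, a model of this kind would be trained on the fused monitoring streams (traffic, environment, equipment status) to score operational risk; the paper's actual inputs and outputs are described only at this high level in the abstract.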

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the plannability of production capacities. In particular, unplanned failures during production times cause high costs, unplanned downtimes and possibly additional collateral damage. Predictive Maintenance addresses this by trying to predict a possible failure and its cause early enough that its prevention can be prepared and carried out in time. In order to predict malfunctions and failures, the industrial plant with its characteristics, as well as its wear and ageing processes, must be modelled. Such modelling can be done by replicating the plant's physical properties; however, this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models from data and offer an alternative, especially when very complex and non-linear behaviour is evident. In order for models to make predictions, as much data as possible about the condition of a plant, its environment and production planning is needed. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing: intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which it is transmitted, places high demands on the data processing systems. If a participating system wants to perform live analyses on the incoming data streams, it must be able to process the incoming data at least as fast as the continuous data stream delivers it; if this is not the case, the system falls further and further behind in its processing and thus in its analyses. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case, or if the processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become an important criterion. This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where the runtime behaviour and the resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, two hypotheses presented in this thesis emerged: (a) distributing complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and (b) giving a neural cell a deeper internal structure leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout named the Sliced Long Short-Term Memory Neural Network (SlicedLSTM) was developed; the SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture. Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of modules of aircraft gas turbines. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data. For the specific application and the data used, the results show that the SlicedLSTM delivers faster processing times with similar result accuracy and thus clearly outperforms the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
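    The abstract describes the SlicedLSTM only at the level of its two hypotheses, so the snippet below is merely one way to interpret the "distribute complexity into simpler subnetworks" idea: the feature dimension is split into slices, each handled by a small independent LSTM whose final states are concatenated for the prediction head. Class and parameter names are illustrative; this is not the author's implementation.

```python
import torch
import torch.nn as nn

class SlicedLSTMSketch(nn.Module):
    """Illustrative sketch: the input features are split into slices, each
    processed by a small independent LSTM; the final states are concatenated.
    This is an interpretation of the thesis description, not its SlicedLSTM."""

    def __init__(self, n_features, n_slices, hidden_per_slice, n_outputs):
        super().__init__()
        assert n_features % n_slices == 0
        self.slice_size = n_features // n_slices
        self.sub_lstms = nn.ModuleList(
            [nn.LSTM(self.slice_size, hidden_per_slice, batch_first=True)
             for _ in range(n_slices)]
        )
        self.head = nn.Linear(n_slices * hidden_per_slice, n_outputs)

    def forward(self, x):                       # x: (batch, time, n_features)
        slices = torch.split(x, self.slice_size, dim=2)
        # The sub-LSTMs share no state, so they could run in parallel.
        last_states = [lstm(s)[0][:, -1, :]     # last time step of each slice
                       for lstm, s in zip(self.sub_lstms, slices)]
        return self.head(torch.cat(last_states, dim=1))
```

    Because the sub-networks are independent, they can be evaluated concurrently, which is the kind of parallelisability gain hypothesis (a) targets, at the cost of the slicing and concatenation overhead the thesis mentions.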
