
    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models; highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and the modeling of users' perception (i.e., Quality of Experience, or QoE); and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects of, and reviews the most relevant contributions to, the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge of this research area and to approach the challenges behind ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute those experiences.

    Planning and dynamic spectrum management in heterogeneous mobile networks with QoE optimization

    Radio and network planning and optimisation are continuous processes that do not end once the network has been launched. To achieve the best trade-offs, especially between quality and cost, operators make use of several coverage and capacity enhancement methods. The research in this thesis proposes methods such as the implementation of cell zooming and Relay Stations (RSs) with dynamic sleep modes, and Carrier Aggregation (CA), for coverage and capacity enhancement. Initially, a survey of ubiquitous mesh network implementation scenarios is presented and an updated characterisation of requirements for services and applications is proposed. Performance targets for the key parameters (delay, delay variation, information loss, and throughput) have been addressed for all types of services. Furthermore, with the increased competition, a mobile operator's success depends not only on how good the offered Quality of Service (QoS) is, but also on whether it meets the end user's expectations, i.e., the Quality of Experience (QoE). In this context, a model for the mapping between QoS parameters and QoE has been proposed for multimedia traffic. The planning and optimisation of fixed Worldwide Interoperability for Microwave Access (WiMAX) networks with RSs, in conjunction with cell zooming, has been addressed. The challenging case of a propagation measurement-based scenario in the hilly region of Covilhã has been considered. A cost/revenue function has been developed that takes into account the cost of building and maintaining the infrastructure with the use of RSs. This part of the work also investigates the energy efficiency and economic implications of using power saving modes for RSs in conjunction with cell zooming. Assuming that the RSs can be switched off, or zoomed out to zero, in periods when the traffic exchange is low, such as nights and weekends, it has been shown that energy consumption may be reduced while cellular coverage and capacity, as well as economic performance, may be improved. An integrated Common Radio Resource Management (iCRRM) entity is proposed that implements inter-band CA by scheduling between two Long Term Evolution – Advanced (LTE-A) Component Carriers (CCs). Considering the bandwidths available in Portugal, the 800 MHz and 2.6 GHz CCs have been considered, with mobile video traffic addressed. Extensive simulations show that the proposed multi-band schedulers exceed the capacity of LTE systems without CA. The results show a clear improvement of the QoS, QoE, and economic trade-off with CA.
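    The abstract does not reproduce the proposed QoS-to-QoE mapping itself. Purely as a rough illustration of what such a mapping can look like, the sketch below follows the widely cited IQX-style hypothesis of an exponential relationship between QoE and QoS degradation; the function name, the coefficients, and the choice of packet loss and delay as inputs are illustrative assumptions, not the thesis's model.

```python
import math

def mos_from_qos(packet_loss_pct: float, delay_ms: float,
                 beta: float = 0.08, gamma: float = 0.004) -> float:
    """Illustrative QoS-to-QoE mapping (hypothetical coefficients).

    IQX-style relationship: the Mean Opinion Score (MOS) decays
    exponentially as QoS degradation (loss, delay) grows, from a
    ceiling of 5 (excellent) towards a floor of 1 (bad).
    """
    degradation = beta * packet_loss_pct + gamma * delay_ms
    return 1.0 + 4.0 * math.exp(-degradation)

if __name__ == "__main__":
    for loss, delay in [(0.0, 20), (2.0, 80), (10.0, 300)]:
        print(f"loss={loss:4.1f}%  delay={delay:3d} ms  ->  "
              f"MOS={mos_from_qos(loss, delay):.2f}")
```

    In a real study the coefficients would be fitted to subjective ratings of the multimedia traffic under test rather than chosen by hand.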

    PERFORMANCE STUDY FOR CAPILLARY MACHINE-TO-MACHINE NETWORKS

    Communication technologies are witnessing a wide and rapid spread of wireless machine-to-machine (M2M) communications, which enable data transfer among devices without human intervention. Capillary M2M networks are a candidate for providing reliable M2M connectivity. In this thesis, we propose a wireless network architecture that aims at supporting a wide range of M2M applications (both real-time and non-real-time) with an acceptable Quality of Service (QoS) level. The architecture uses capillary gateways to reduce the number of devices communicating directly with a cellular network such as LTE. Moreover, the proposed architecture reduces the traffic load on the cellular network by providing capillary gateways with dual wireless interfaces: one interface is connected to the cellular network, whereas the other communicates with the intended destination via a WiFi-based mesh backbone for cost-effectiveness. We study the performance of the proposed architecture with the aid of the ns-2 simulator. An M2M capillary network is simulated in different scenarios by varying multiple factors that affect system performance. The simulation results measure average packet delay and packet loss to evaluate the QoS of the proposed architecture. Our results reveal that the proposed architecture can satisfy the required level of QoS with a low traffic load on the cellular network. It also outperforms both a cellular-based and a WiFi-based capillary M2M network. This implies a low cost of operation for the service provider while meeting a high-bandwidth service level agreement. In addition, we investigate how the proposed architecture behaves under different factors, such as the number of capillary gateways, the application traffic rates, the number of backbone routers with different routing protocols, the number of destination servers, and the data rates provided by the LTE and WiFi technologies. Furthermore, the simulation results show that the proposed architecture remains reliable in terms of packet delay and packet loss even with a large number of nodes and high application traffic rates.
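    The two QoS metrics the simulations report are straightforward to compute from per-packet logs. The sketch below is a minimal, simulator-agnostic illustration of that computation; the record layout (packet id mapped to timestamps) is an assumption for the example, not the ns-2 trace format.

```python
from typing import Dict, Tuple

def qos_metrics(sent: Dict[int, float],
                received: Dict[int, float]) -> Tuple[float, float]:
    """Average end-to-end delay (s) and packet-loss ratio.

    `sent` maps packet id -> send timestamp; `received` maps
    packet id -> receive timestamp for packets that arrived.
    """
    delays = [received[pid] - sent[pid] for pid in sent if pid in received]
    avg_delay = sum(delays) / len(delays) if delays else float("nan")
    loss_ratio = 1.0 - len(delays) / len(sent) if sent else float("nan")
    return avg_delay, loss_ratio

# Example: four packets sent, packet 2 lost in the mesh backbone.
sent = {0: 0.00, 1: 0.10, 2: 0.20, 3: 0.30}
received = {0: 0.05, 1: 0.16, 3: 0.38}
delay, loss = qos_metrics(sent, received)
print(f"avg delay = {delay * 1000:.1f} ms, loss = {loss:.0%}")
```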

    Dense Point-Cloud Representation of a Scene using Monocular Vision

    We present a three-dimensional (3-D) reconstruction system designed to support various autonomous navigation applications. The system focuses on the 3-D reconstruction of a scene using only a single moving camera. Utilizing video frames captured at different points in time allows us to determine the depths of a scene, so the system can construct a point-cloud model of its unknown surroundings. We present the step-by-step methodology and analysis used in developing the 3-D reconstruction technique. We present a reconstruction framework that generates a primitive point cloud, computed from feature matching and depth-triangulation analysis. To densify the reconstruction, we utilize optical-flow features to create an extremely dense representation model. As a third algorithmic modification, we introduce a preprocessing step of nonlinear single-image super-resolution; with this addition, the depth accuracy of the point cloud, which relies on precise disparity measurement, increases significantly. Our final contribution is a postprocessing step that filters noise points and mismatched features, yielding the complete dense point-cloud representation (DPR) technique. We measure the success of DPR by evaluating visual appeal, density, accuracy, and computational expense, and compare it with two state-of-the-art techniques.
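    This is not the authors' DPR implementation, but a minimal sketch of the depth-triangulation step on which the primitive point cloud rests, assuming the camera intrinsics and the relative pose between two frames have already been recovered from the matched features. In the densification stage described above, the matched pixels would come from dense optical flow rather than sparse feature matching.

```python
import numpy as np
import cv2

def triangulate_pairs(K, R, t, pts1, pts2):
    """Triangulate matched pixels from two views of a moving camera.

    K      : 3x3 camera intrinsics
    R, t   : rotation and translation of the second view w.r.t. the first
    pts1/2 : Nx2 arrays of matched pixel coordinates
    Returns an Nx3 array of 3-D points, i.e. a primitive point cloud.
    """
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first view at the origin
    P2 = K @ np.hstack([R, t.reshape(3, 1)])           # second, displaced view
    X = cv2.triangulatePoints(P1, P2,
                              pts1.T.astype(np.float64),
                              pts2.T.astype(np.float64))
    return (X[:3] / X[3]).T                            # dehomogenize to Nx3
```

    In a full monocular pipeline, R and t would typically come from cv2.recoverPose applied to the essential matrix estimated from the same feature matches.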

    3D multiple description coding for error resilience over wireless networks

    Mobile communications have gained growing interest from customers and service providers alike in the last one to two decades. Visual information is used in many application domains, such as remote health care, video-on-demand, broadcasting, and video surveillance. To enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has earned significant interest from the research community in recent years, owing to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is unacceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade video quality severely. There are several ways to mitigate the effects of such transmission errors; one technique widely used in international video coding standards is error resilience. The motivation behind this research work is that existing schemes for 2D colour video compression, such as MPEG, JPEG, and H.263, cannot be applied to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth-demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts, and network congestion. Given a maximum bit-rate budget for representing the 3D scene, the bit-rate allocation between texture and depth information should be optimised so that rendering distortion and losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer a better QoE to end users. This research work aims at enhancing the error resilience of compressed 3D video transmitted over mobile channels, using Multiple Description Coding (MDC), in order to improve the user's QoE. Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing, rating people's perception of 3D video under error-free and error-prone conditions through a carefully designed bespoke questionnaire.
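    The abstract does not spell out the particular MDC scheme investigated. Purely to illustrate the MDC principle it builds on, the sketch below implements the classic temporal-splitting variant: odd and even frames travel as two independent descriptions, either of which decodes alone at reduced quality, while both together restore the full sequence. The frame representation and the concealment rule are illustrative assumptions.

```python
from typing import List, Optional

def split_descriptions(frames: List[str]):
    """Temporal MDC: odd/even frames form two independent descriptions."""
    return frames[0::2], frames[1::2]

def reconstruct(d0: Optional[List[str]], d1: Optional[List[str]], n: int):
    """Merge whichever descriptions arrived; conceal missing frames
    with a zero-order hold (repeat the last received frame)."""
    out: List[Optional[str]] = [None] * n
    if d0 is not None:
        out[0::2] = d0
    if d1 is not None:
        out[1::2] = d1
    last: Optional[str] = None
    for i, frame in enumerate(out):
        if frame is None:
            out[i] = last            # simple error concealment
        else:
            last = frame
    return out

frames = [f"F{i}" for i in range(6)]
d0, d1 = split_descriptions(frames)
print(reconstruct(d0, d1, 6))    # both arrive: full frame rate
print(reconstruct(d0, None, 6))  # description 1 lost: halved temporal detail
```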

    A Survey on Cellular-connected UAVs: Design Challenges, Enabling 5G/B5G Innovations, and Experimental Advancements

    As an emerging field of aerial robotics, Unmanned Aerial Vehicles (UAVs) have gained significant research interest within the wireless networking research community. As soon as national legislation allows UAVs to fly autonomously, we will see swarms of UAVs populating the sky of our smart cities to accomplish different missions: parcel delivery, infrastructure monitoring, event filming, surveillance, tracking, and more. The UAV ecosystem can benefit from existing 5G/B5G cellular networks, which can be exploited in different ways to enhance UAV communications. Because of the inherent characteristics of UAVs, namely flexible mobility in 3D space, autonomous operation, and intelligent placement, these smart devices cater to a wide range of wireless applications and use cases. This work presents an in-depth exploration of the integration synergies between 5G/B5G cellular systems and UAV technology, where the UAV is integrated as a new aerial User Equipment (UE) into existing cellular networks. In this integration, UAVs perform the role of flying users within cellular coverage and are thus termed cellular-connected UAVs (a.k.a. UAV-UE, drone-UE, 5G-connected drone, or aerial user). The main focus of this work is an extensive study of the integration challenges, along with key 5G/B5G technological innovations and ongoing efforts in design prototyping and field trials corroborating cellular-connected UAVs. This study highlights recent progress with respect to 3GPP standardization and emphasizes the socio-economic concerns that must be accounted for before this promising technology can be successfully adopted. Various open problems paving the path to future research opportunities are also discussed.

    Explainable Artificial Intelligence for Image Segmentation and for Estimation of Optical Aberrations

    State-of-the-art machine learning methods such as convolutional neural networks (CNNs) are frequently employed in computer vision. Despite their high performance on unseen data, CNNs are often criticized for lacking transparency, that is, for providing very limited, if any, information about the internal decision-making process. In some applications, especially in healthcare, such transparency of algorithms is crucial for end users, as trust in diagnosis and prognosis is important not only for the satisfaction and potential adherence of patients, but also for their health. Explainable artificial intelligence (XAI) aims to open up this "black box," often perceived as a cryptic and inconceivable algorithm, to increase understanding of the machines' reasoning. XAI is an emerging field, and techniques for making machine learning explainable are becoming increasingly available. XAI for computer vision mainly focuses on image classification, whereas interpretability in other tasks remains challenging. Here, I examine explainability in computer vision beyond image classification, namely in semantic segmentation and 3D multitarget image regression. This thesis consists of five chapters. In Chapter 1 (Introduction), the background of artificial intelligence (AI), XAI, computer vision, and optics is presented, and definitions of the terminology for XAI are proposed. Chapter 2 focuses on explaining the predictions of U-Net, a CNN commonly used for semantic image segmentation, and variations of this architecture. To this end, I propose the gradient-weighted class activation mapping for segmentation (Seg-Grad-CAM) method, based on the well-known Grad-CAM method for explainable image classification. In Chapter 3, I present the application of deep learning to the estimation of optical aberrations in microscopy biodata, by identifying the present Zernike aberration modes and their amplitudes. A CNN-based approach, PhaseNet, can accurately estimate monochromatic aberrations in images of point light sources; I extend this method to objects of complex shapes. In Chapter 4, an approach for explainable 3D multitarget image regression is reported. First, I visualize how the model differentiates the aberration modes using the local interpretable model-agnostic explanations (LIME) method adapted for 3D image classification. Then I "explain," using LIME modified for multitarget 3D image regression (Image-Reg-LIME), the outputs of the regression model for estimation of the amplitudes. In Chapter 5, the results are discussed in a broader context. The contribution of this thesis is the development of explainability methods for semantic segmentation and 3D multitarget image regression of optical aberrations. The research opens the door for further enhancement of AI's transparency.
    Outline: 1 Introduction (essential definitions; explainable artificial intelligence; computer vision; optics; thesis overview). 2 Explainable Image Segmentation (related work; methods: CAM, Grad-CAM, U-Net, Seg-Grad-CAM; data and results: Circles, TextureMNIST, Cityscapes; applications; conclusions). 3 Estimation of Aberrations (related work; methods: PhaseNet, data generators, retrieval of noise parameters, restoration via deconvolution; data and results: astrocytes, fluorescent beads, Drosophila embryo, neurons). 4 Explainable Multitarget Image Regression (related work; methods: LIME, superpixel algorithms, LIME for 3D image classification, Image-Reg-LIME; results: classification of aberrations, explainable regression of aberrations). 5 Conclusions and Outlook.
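    Seg-Grad-CAM is defined fully in Chapter 2 of the thesis; the sketch below captures only the core idea as the abstract describes it: Grad-CAM's scalar class score is replaced by the target class's logits summed over a pixel region of interest, after which the usual gradient-weighted channel averaging applies. The hook mechanics, layer choice, and tensor shapes are my assumptions for a generic PyTorch segmentation model, not the thesis's code.

```python
import torch
import torch.nn.functional as F

def seg_grad_cam(model, image, target_class, roi_mask, layer):
    """Sketch of the Seg-Grad-CAM idea for a segmentation CNN.

    image    : (1, C_in, H, W) input tensor
    roi_mask : (H, W) binary mask selecting the pixels to explain
    layer    : the convolutional layer whose activations are weighted
    """
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = model(image)                        # (1, n_classes, H, W)
    score = (logits[0, target_class] * roi_mask).sum()  # scalar over the ROI
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    A, dA = acts[0], grads[0]                    # (1, K, h, w) each
    weights = dA.mean(dim=(2, 3), keepdim=True)  # global-average-pool gradients
    cam = F.relu((weights * A).sum(dim=1))       # weighted channel sum + ReLU
    return F.interpolate(cam[None], size=image.shape[-2:],
                         mode="bilinear", align_corners=False)[0, 0]
```

    Setting roi_mask to a single pixel explains one prediction; setting it to a whole predicted region explains the segment, which is what distinguishes this from per-image Grad-CAM.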

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The proceedings of the MAVEBA Workshop, held every two years, collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images as a support to clinical diagnosis and the classification of vocal pathologies.
