1,203 research outputs found

    A network tomography approach for traffic monitoring in smart cities

    Get PDF
    Traffic monitoring enables many of the urban planning and management activities required by a Smart City. This thesis therefore proposes a network tomography-based approach that can be applied to road networks to achieve a cost-efficient, flexible, and scalable monitor deployment. Thanks to the algebraic formulation of network tomography, the selection of monitoring intersections can be expressed with a matrix whose rows represent paths between two intersections and whose columns represent links in the road network. Because the goal of the algorithm is to provide a cost-efficient, minimum-error, and high-coverage monitor set, the problem can be translated into an optimization problem over a matroid, which can be solved efficiently by a greedy algorithm. In addition, the approach handles noisy measurements and measurement-to-path matching. The approach achieves low error and 90% coverage with only 20% of nodes selected as monitors on a downtown San Francisco, CA topology --Abstract, page iv
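    A minimal sketch of the matroid-greedy idea behind selecting measurement paths (and hence the monitoring intersections at their endpoints), assuming a binary path-link matrix and a simple path budget; the rank-based independence test and variable names are illustrative, not the thesis implementation:

```python
import numpy as np

def greedy_path_selection(path_link_matrix, budget):
    """Greedily pick measurement paths over the linear (matric) matroid:
    a candidate row is kept only if it raises the rank of the selected
    submatrix, i.e. it makes additional road links identifiable.
    Illustrative sketch; the thesis's cost and coverage terms are omitted.
    """
    chosen, basis = [], np.empty((0, path_link_matrix.shape[1]))
    for i, row in enumerate(path_link_matrix):
        if len(chosen) >= budget:
            break
        trial = np.vstack([basis, row])
        if np.linalg.matrix_rank(trial) > np.linalg.matrix_rank(basis):
            chosen.append(i)
            basis = trial
    return chosen

# Toy example: 4 candidate paths over 3 road links (1 = path uses the link).
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])   # duplicate of the first path, so it is skipped
print(greedy_path_selection(A, budget=4))   # -> [0, 1, 2]
```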

    A Network Tomography Approach for Traffic Monitoring in Smart Cities

    Get PDF
    Traffic monitoring is a key enabler for several planning and management activities of a Smart City. However, traditional techniques are often not cost efficient, flexible, and scalable. This paper proposes an approach to traffic monitoring that does not rely on probe vehicles, nor requires vehicle localization through GPS. Conversely, it exploits just a limited number of cameras placed at road intersections to measure car end-to-end traveling times. We model the problem within the theoretical framework of network tomography, in order to infer the traveling times of all individual road segments in the road network. We specifically deal with the potential presence of noisy measurements, and the unpredictability of vehicles paths. Moreover, we address the issue of optimally placing the monitoring cameras in order to maximize coverage, while minimizing the inference error, and the overall cost. We provide extensive experimental assessment on the topology of downtown San Francisco, CA, USA, using real measurements obtained through the Google Maps APIs, and on realistic synthetic networks. Our approach provides a very low error in estimating the traveling times over 95% of all roads even when as few as 20% of road intersections are equipped with cameras
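    A toy illustration of the tomographic inference step described above, assuming a known routing matrix and a plain least-squares estimator; the paper's actual estimator, noise handling, and path-matching logic are not reproduced here:

```python
import numpy as np

# Rows of A mark which road segments each camera-to-camera path traverses,
# y holds measured end-to-end traveling times (seconds), and x recovers
# per-segment times. The matrix is deliberately rank-deficient to show that
# some segments may remain unidentifiable from the chosen paths.
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)
y = np.array([70.0, 55.0, 65.0, 80.0])   # illustrative noisy measurements

x, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("rank:", rank, "estimated segment times:", np.round(x, 1))
```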

    Binary Independent Component Analysis with OR Mixtures

    Full text link
    Independent component analysis (ICA) is a computational method for separating a multivariate signal into subcomponents, assuming the mutual statistical independence of the non-Gaussian source signals. The classical ICA framework usually assumes linear combinations of independent sources over the field of real numbers R. In this paper, we investigate binary ICA for OR mixtures (bICA), which can find applications in many domains including medical diagnosis, multi-cluster assignment, Internet tomography, and network resource management. We prove that bICA is uniquely identifiable under the disjunctive generation model, and propose a deterministic iterative algorithm to determine the distribution of the latent random variables and the mixing matrix. The inverse problem of inferring the values of the latent variables is also considered, along with noisy measurements. We conduct an extensive simulation study to verify the effectiveness of the proposed algorithm and present examples of real-world applications where bICA can be applied. Comment: Manuscript submitted to IEEE Transactions on Signal Processing
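    A small sketch of the disjunctive (OR-mixture) generation model that bICA assumes, with illustrative dimensions and activation probabilities; this only shows how observations are generated, not the paper's identification algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each observation is the element-wise OR of the rows of a binary mixing
# matrix G selected by the active latent sources: x_tj = OR_i (y_ti AND g_ij).
n_sources, n_obs_dims, n_samples = 3, 6, 5
G = rng.integers(0, 2, size=(n_sources, n_obs_dims))      # binary mixing matrix
Y = rng.random((n_samples, n_sources)) < [0.3, 0.5, 0.2]  # latent Bernoulli sources

X = (Y.astype(int) @ G > 0).astype(int)                    # OR mixture of active rows
print(X)
```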

    Practical Network Tomography

    Get PDF
    In this thesis, we investigate methods for the practical and accurate localization of Internet performance problems. The methods we propose belong to the field of network loss tomography, that is, they infer the loss characteristics of links from end-to-end measurements. The existing versions of the problem of network loss tomography are ill-posed, hence, tomographic algorithms that attempt to solve them resort to making various assumptions, and as these assumptions do not usually hold in practice, the information provided by the algorithms might be inaccurate. We argue, therefore, for tomographic algorithms that work under weak, realistic assumptions. We first propose an algorithm that infers the loss rates of network links from end-to-end measurements. Inspired by previous work, we design an algorithm that gains initial information about the network by computing the variances of links' loss rates and by using these variances as an indication of the congestion level of links, i.e., the more congested the link, the higher the variance of its loss rate. Its novelty lies in the way it uses this information – to identify and characterize the maximum set of links whose loss rates can be accurately inferred from end-to-end measurements. We show that our algorithm performs significantly better than the existing alternatives, and that this advantage increases with the number of congested links in the network. Furthermore, we validate its performance by using an "Internet tomographer" that runs on a real testbed. Second, we show that it is feasible to perform network loss tomography in the presence of "link correlations," i.e., when the losses that occur on one link might depend on the losses that occur on other links in the network. More precisely, we formally derive the necessary and sufficient condition under which the probability that each set of links is congested is statistically identifiable from end-to-end measurements even in the presence of link correlations. In doing so, we challenge one of the popular assumptions in network loss tomography, specifically, the assumption that all links are independent. The model we propose assumes we know which links are most likely to be correlated, but it does not assume any knowledge about the nature or the degree of their correlations. In practice, we consider that all links in the same local area network or the same administrative domain are potentially correlated, because they could be sharing physical links, network equipment, or even management processes. Finally, we design a practical algorithm that solves "Congestion Probability Inference" even in the presence of link correlations, i.e., it infers the probability that each set of links is congested even when the losses that occur on one link might depend on the losses that occur on other links in the network. We model Congestion Probability Inference as a system of linear equations where each equation corresponds to a set of paths. Because it is infeasible to consider an equation for each set of paths in the network, our algorithm finds the maximum number of linearly independent equations by selecting particular sets of paths based on our theoretical results. 
On the one hand, the information provided by our algorithm is less than that provided by the existing alternatives that infer either the loss rates or the congestion statuses of links, i.e., we only learn how often each set of links is congested, as opposed to how many packets were lost at each link, or to which particular links were congested when. On the other hand, this information is more useful in practice because our algorithm works under assumptions weaker than those required by the existing alternatives, and we experimentally show that it is accurate under challenging network conditions such as non-stationary network dynamics and sparse topologies
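    A minimal sketch of how one right-hand-side entry of such a linear system could be estimated from measurement snapshots, assuming per-path loss-rate samples and a simple "lossless means good" rule; the thesis's actual estimator and equation-selection procedure are not reproduced here:

```python
import numpy as np

def path_set_good_probability(path_loss_snapshots, path_set, loss_threshold=0.0):
    """Empirical probability, across measurement snapshots, that all paths in
    path_set are simultaneously congestion-free.

    path_loss_snapshots: (n_snapshots, n_paths) array of per-path loss rates.
    """
    good = path_loss_snapshots[:, path_set] <= loss_threshold   # per-path "good" flags
    return np.mean(np.all(good, axis=1))                        # all paths good at once

snapshots = np.array([[0.0, 0.1, 0.0],
                      [0.0, 0.0, 0.0],
                      [0.2, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])
print(path_set_good_probability(snapshots, path_set=[0, 2]))    # -> 0.75
```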

    Augmented Reality Based Surgical Navigation of Complex Pelvic Osteotomies

    Full text link
    Joëlle Ackermann, Florentin Liebmann, Armando Hoch, Jess G. Snedeker, Mazda Farshad, Stefan Rahm, Patrick O. Zingg and Philipp Fürnstahl. Appl. Sci. 2021, 11(3), 1228; https://doi.org/10.3390/app11031228. Augmented reality (AR)-based surgical navigation may offer new possibilities for safe and accurate surgical execution of complex osteotomies. In this study we investigated the feasibility of navigating the periacetabular osteotomy of Ganz (PAO), known as one of the most complex orthopedic interventions, on two cadaveric pelves under realistic operating room conditions. Preoperative planning was conducted on computed tomography (CT)-reconstructed 3D models using an in-house developed software, which allowed creating cutting plane objects for planning of the osteotomies and reorientation of the acetabular fragment. An AR application was developed comprising point-based registration, motion compensation and guidance for osteotomies as well as fragment reorientation. Navigation accuracy was evaluated on CT-reconstructed 3D models, resulting in an error of 10.8 mm for osteotomy starting points and 5.4° for osteotomy directions. The reorientation errors were 6.7°, 7.0° and 0.9° for the x-, y- and z-axis, respectively. Average postoperative error of the LCE angle was 4.5°. Our study demonstrated that the AR-based execution of complex osteotomies is feasible. Fragment realignment navigation needs further improvement, although it is more accurate than the state of the art in PAO surgery
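    For context, point-based registration of the kind mentioned above is commonly implemented as a least-squares rigid alignment of corresponding point sets (Kabsch/Umeyama); the sketch below is a generic version of that step and is not taken from the study's implementation:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid (rotation + translation) alignment of corresponding
    3D point sets via SVD. Returns R, t such that target ≈ R @ source + t."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Example: recover a known rotation/translation from 4 corresponding points.
rng = np.random.default_rng(1)
src = rng.random((4, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tgt = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R_est, t_est = rigid_register(src, tgt)
print(np.round(R_est, 3), np.round(t_est, 3))
```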

    Du placement des services à la surveillance des services dans les réseaux 5G et post-5G

    Get PDF
    5G and beyond-5G (B5G) networks are expected to accommodate a plethora of network services with diverse requirements on a single physical infrastructure. Hence, the "one-size-fits-all" paradigm that characterized the 4th generation of wireless networks is no longer suitable. Leveraging the recent advent of Network Function Virtualization (NFV) and Software-Defined Networking (SDN), Network Slicing (NS) is considered one of the key enablers of this paradigm shift. NS enables the coexistence of heterogeneous services by partitioning the physical infrastructure into a set of virtual networks (the slices), each running a particular service; it also offers more flexibility and agility in business operations. Despite these advantages, NS raises several technical challenges. The placement of network slices is one of them: it is known in the literature as the Virtual Network Embedding Problem (VNEP) and is NP-hard. The first part of this thesis therefore investigates the potential of Deep Reinforcement Learning (DRL) and Graph Neural Networks (GNNs) to solve the network slice placement problem and overcome the limitations of existing methods. Two approaches are considered. The first aims to learn automatically how to solve the VNEP: instead of constraining the topology of the physical infrastructure or extracting features manually, we formulate the task as a reinforcement learning problem and use a graph convolutional neural architecture to learn how to find an optimal solution. In the second, instead of training a DRL agent from scratch to find the optimal solution, a process that may result in unsafe training, we train it to reduce the optimality gap of existing heuristics; the motivation behind this contribution is to ensure safety during the training of the DRL agent. Placement is not the only challenge raised by NS. Once the slices are placed, monitoring their status becomes a priority for both slice tenants and providers in order to ensure that Service Level Agreements (SLAs) are not violated. In the second part of this thesis, we propose to leverage machine learning techniques and network tomography to monitor the network slices. Network Tomography (NT) is a set of methods that infer unmeasured network metrics from end-to-end measurements between monitors. We focus on two main challenges: inferring slice metrics from end-to-end measurements between monitors, and placing the monitors efficiently. For the inference, we model the task as a multi-output regression problem, which we solve using neural networks; we train on synthetic data to increase the diversity of the training data and avoid overfitting, and we use transfer learning to cope with changes in the monitored slices or in the topology on which they are placed. Regarding the monitor placement problem, we consider a special case where only cycle probes are allowed. Probing cycles have a significant advantage over regular paths since the probe's source is also its destination, which reduces synchronization problems. We formulate the problem as a variant of the Minimum Set Cover problem.
Owing to its complexity, we introduce a standalone solution based on GNNs and genetic algorithms to find a trade-off between the quality of the monitor placement and the cost to achieve it
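    For illustration, the Minimum Set Cover view of monitor placement can be made concrete with the classic greedy heuristic below; the thesis instead combines GNNs with a genetic algorithm, and the input format here is hypothetical:

```python
def greedy_set_cover(universe, candidate_cycles):
    """Greedy baseline for the set-cover formulation: repeatedly pick the
    candidate monitor whose probing cycles cover the most still-uncovered links.

    universe: set of link identifiers to cover
    candidate_cycles: dict mapping a monitor location to the set of links its
                      probing cycles can traverse (hypothetical input format)
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidate_cycles, key=lambda m: len(candidate_cycles[m] & uncovered))
        gain = candidate_cycles[best] & uncovered
        if not gain:                      # remaining links cannot be covered
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

links = {1, 2, 3, 4, 5}
cycles = {"m1": {1, 2}, "m2": {2, 3, 4}, "m3": {4, 5}}
print(greedy_set_cover(links, cycles))    # -> (['m2', 'm1', 'm3'], set())
```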

    Advanced photonic and electronic systems WILGA 2018

    Get PDF
    The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists for young scientists for two decades. It traditionally gathers around 400 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. Wilga is a very good digest of Ph.D. work carried out at technical universities in electronics and photonics, as well as information sciences, throughout Poland and some neighboring countries. Publishing patronage over Wilga is kept by the Elektronika technical journal (SEP), IJET, and Proceedings of SPIE. The latter worldwide editorial series publishes more than 200 papers from Wilga annually. Wilga 2018 was the XLII edition of this meeting. The following topical tracks were distinguished: photonics, electronics, information technologies, and system research. The article is a digest of selected works presented during the Wilga 2018 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445; WILGA 2018 works were published in Proc. SPIE vol. 10808

    Exploring the Use of Audible Sound in Bone Density Diagnostic Devices

    Get PDF
    Osteoporosis is a medical condition in which there is a progressive degradation of bone tissue that correlates with a characteristic decrease in bone density (BD). It is estimated that osteoporosis affects over 200 million people globally and is responsible for 8.9 million fractures annually. Populations at risk for developing osteoporosis include post-menopausal women, diabetic patients, and the elderly, representing a large population within the state of Maine. Current densitometric and sonometric devices used to monitor BD include quantitative computed tomography (QCT), dual-energy x-ray absorptiometry (DXA), and quantitative ultrasound (QUS). All methods are expensive and, in the cases of QCT and DXA, patients are exposed to small, frequent doses of ionizing radiation. While these methods can effectively measure BD, they are critically limited for applications in rural healthcare because they are cost-prohibitive to rural medical facilities and to patients who require routine screening. The diversity of at-risk patient populations, together with the expense and invasiveness of current BD devices, drives the need for a rapid, low-cost, and non-invasive approach to monitoring BD. The present work explores audible sound as a potential solution that could safely and effectively measure BD by minimizing cost drivers and increasing device simplicity to improve availability. The current prototype aims to measure calcaneal (heel) BD using audible sound and time delay spectroscopy (TDS). To assess the feasibility of such a device, iterative prototypes were constructed and evaluated, a relative sensitivity analysis was performed, and testing of critical device components was completed. The testing assessed the device's ability to measure the frequency and phase of a signal, the coupling force applied at the patient-device interface, and the geometries of a test material. The relative sensitivity analysis supported the use of audible sound in this application. The testing showed the device can measure the frequency and phase of a signal and the geometries of a test material, while design changes are required to measure the coupling force. With the indicated improvements, the device is ready for testing materials that share similar material properties with bone
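    As a generic illustration of the phase-measurement capability mentioned above, the sketch below estimates the phase (and hence time delay) of a received audible tone by I/Q demodulation against the transmitted reference; the tone frequency, sample rate, and noise level are assumptions, not the prototype's parameters:

```python
import numpy as np

fs = 48_000            # sample rate (Hz), illustrative
f0 = 2_000             # audible test tone (Hz), illustrative
t = np.arange(0, 0.05, 1 / fs)
true_delay = 2.1e-4    # seconds
received = np.sin(2 * np.pi * f0 * (t - true_delay)) + 0.05 * np.random.randn(t.size)

# I/Q demodulation against the transmitted reference tone.
i_comp = np.mean(received * np.sin(2 * np.pi * f0 * t))   # in-phase component
q_comp = np.mean(received * np.cos(2 * np.pi * f0 * t))   # quadrature component
phase = np.arctan2(q_comp, i_comp)                        # phase shift (radians)
delay = -phase / (2 * np.pi * f0)                         # recovered modulo one tone period
print(f"estimated delay: {delay*1e6:.1f} µs (true {true_delay*1e6:.1f} µs)")
```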

    Adaptive Loss Inference Using Unicast End-to-End Measurements

    Get PDF
    We address the problem of inferring link loss rates from unicast end-to-end measurements on the basis of network tomography. Because measurement probes incur additional traffic overhead, most tomography-based approaches perform the inference by collecting measurements only on selected paths. However, all previous approaches select paths offline, which inevitably misses many potentially identifiable links whose loss rates should be unbiasedly determined. Furthermore, if element failures exist, an appreciable number of the selected paths may become unavailable. In this paper, we propose an adaptive loss inference approach in which the paths are selected sequentially depending on previous measurement results. In each round, we compute the loss rates of links that can be unbiasedly determined from the current measurements and remove them from the system. Meanwhile, we locate the most probable failures based on the current measurement outcomes to avoid selecting unavailable paths in subsequent rounds. In this way, all identifiable and potentially identifiable links can be determined unbiasedly using only 20% of all available end-to-end measurements. Extensive simulations against a previous classical approach strongly confirm the promising performance of our proposed approach
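    A minimal sketch of one building block of such a round, assuming the usual linear (log-domain) loss model: a link is identifiable from the currently measured paths iff its unit vector lies in the row space of the routing matrix. The paper's unbiasedness criterion, failure localization, and path-selection strategy are not reproduced here:

```python
import numpy as np

def identifiable_links(A, tol=1e-9):
    """Return indices of links whose (log) loss rates are uniquely determined
    by the current path measurements: link i is identifiable iff the unit
    vector e_i can be written as a linear combination of the rows of A."""
    n_links = A.shape[1]
    identifiable = []
    for i in range(n_links):
        e_i = np.zeros(n_links)
        e_i[i] = 1.0
        w, *_ = np.linalg.lstsq(A.T, e_i, rcond=None)    # best combination of rows
        if np.linalg.norm(A.T @ w - e_i) < tol:           # e_i reachable from rows of A
            identifiable.append(i)
    return identifiable

# Two measured paths over 3 links: links 0 and 1 are identifiable, link 2 is not.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0]])
print(identifiable_links(A))   # -> [0, 1]
```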