
    A Network Tomography Approach for Traffic Monitoring in Smart Cities

    Traffic monitoring is a key enabler for several planning and management activities of a Smart City. However, traditional techniques are often not cost-efficient, flexible, or scalable. This paper proposes an approach to traffic monitoring that neither relies on probe vehicles nor requires vehicle localization through GPS. Instead, it exploits only a limited number of cameras placed at road intersections to measure car end-to-end traveling times. We model the problem within the theoretical framework of network tomography in order to infer the traveling times of all individual road segments in the road network. We specifically deal with the potential presence of noisy measurements and with the unpredictability of vehicle paths. Moreover, we address the issue of optimally placing the monitoring cameras so as to maximize coverage while minimizing the inference error and the overall cost. We provide an extensive experimental assessment on the topology of downtown San Francisco, CA, USA, using real measurements obtained through the Google Maps APIs, and on realistic synthetic networks. Our approach yields a very low error in estimating the traveling times of over 95% of all roads, even when as few as 20% of road intersections are equipped with cameras.
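    The core inference step described above can be pictured as solving a noisy linear system in which each monitored end-to-end travel time is the sum of the unknown per-segment times along that path. The sketch below is written against this summary rather than the authors' code, and uses a made-up four-segment network with placeholder timings.

        # A minimal sketch of the network-tomography idea described above, not the
        # authors' implementation: end-to-end travel times measured between camera
        # pairs are modeled as sums of unknown per-segment travel times, and the
        # segment times are recovered by linear inversion of noisy measurements.
        import numpy as np

        # Routing matrix A: one row per monitored path, one column per road segment;
        # A[i, j] = 1 if path i traverses segment j.
        A = np.array([
            [1, 1, 0, 0],   # path 1 uses segments 0 and 1
            [0, 1, 1, 0],   # path 2 uses segments 1 and 2
            [0, 0, 1, 1],   # path 3 uses segments 2 and 3
            [1, 0, 0, 1],   # path 4 uses segments 0 and 3
            [1, 1, 1, 0],   # path 5 uses segments 0, 1 and 2
        ], dtype=float)

        true_segment_times = np.array([30.0, 45.0, 60.0, 25.0])  # seconds (made up)

        # End-to-end measurements = path sums plus camera/matching noise.
        rng = np.random.default_rng(0)
        measured = A @ true_segment_times + rng.normal(0.0, 2.0, size=A.shape[0])

        # Least-squares inversion estimates the per-segment travel times.
        estimated, *_ = np.linalg.lstsq(A, measured, rcond=None)
        print("estimated segment times:", np.round(estimated, 1))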

    A network tomography approach for traffic monitoring in smart cities

    Traffic monitoring enables many of the urban planning and management activities required by a Smart City. This thesis therefore proposes a network tomography-based approach for road networks that achieves a cost-efficient, flexible, and scalable monitor deployment. Thanks to the algebraic formulation of network tomography, the selection of monitoring intersections can be expressed through a matrix whose rows represent paths between pairs of intersections and whose columns represent links in the road network. Because the goal is a monitor set that is cost-efficient, minimizes inference error, and provides high coverage, the selection can be cast as an optimization problem over a matroid, which a greedy algorithm solves efficiently. The approach also handles noisy measurements and measurement-to-path matching. On a downtown San Francisco, CA topology, it achieves low error and 90% coverage with only 20% of the nodes selected as monitors.
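    The monitor-selection step lends itself to a greedy sketch. The toy example below abstracts away the matroid machinery of the thesis and simply adds, at each step, the intersection that covers the most still-uncovered road segments; the coverage sets and budget are invented for illustration.

        # A toy sketch of greedy monitor selection in the spirit of the formulation
        # summarized above; all data are made up.
        # intersections -> set of road segments observable if a camera is placed there
        coverage = {
            "A": {1, 2, 3},
            "B": {3, 4},
            "C": {4, 5, 6},
            "D": {1, 6},
            "E": {2, 5},
        }
        budget = 2  # e.g. only a small fraction of intersections may host a camera

        selected, covered = [], set()
        for _ in range(budget):
            # pick the intersection adding the most not-yet-covered segments
            best = max(coverage,
                       key=lambda v: len(coverage[v] - covered) if v not in selected else -1)
            if not coverage[best] - covered:
                break  # nothing new can be covered
            selected.append(best)
            covered |= coverage[best]

        print("monitors:", selected, "covered segments:", sorted(covered))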

    Activity Analysis: Finding Explanations for Sets of Events

    Automatic activity recognition is the computational process of analysing visual input and reasoning about detections to understand the performed events. In all but the simplest scenarios, an activity involves multiple interleaved events, some related and others independent; the activity in a car park or at a playground would typically include many events. This research assumes that the possible events, and any constraints between them, can be defined for the given scene. Analysing the activity should thus recognise a complete and consistent set of events; this is referred to as a global explanation of the activity. By seeking a global explanation that satisfies the activity’s constraints, infeasible interpretations can be avoided and ambiguous observations may be resolved. An activity’s events and any natural constraints are defined using a grammar formalism. Attribute Multiset Grammars (AMG) are chosen because they allow defining hierarchies as well as attribute rules and constraints. When used for recognition, detectors are employed to gather a set of detections, and parsing the set of detections with the AMG provides a global explanation. To find the best parse tree given a set of detections, a Bayesian network models the probability distribution over the space of possible parse trees; heuristic and exhaustive search techniques are proposed to find the maximum a posteriori global explanation. The framework is tested on two activities: the activity at a bicycle rack, and around a building entrance. The first case study involves people locking bicycles onto a bicycle rack and picking them up later. The best global explanation for all detections gathered during the day resolves local ambiguities caused by occlusion or clutter, and intensive testing on five full days showed that global analysis achieves higher recognition rates. The second case study tracks people, and any objects they are carrying, as they enter and exit a building entrance; the global explanation recovers the complete sequence of a person entering and exiting multiple times.
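    As a rough illustration of the global-explanation idea, and not of the thesis framework itself, the snippet below enumerates consistent labelings of a pair of detections, scores each combination with hypothetical detector likelihoods, and keeps the maximum a posteriori assignment subject to a simple ordering constraint.

        # Minimal illustration: pick the best consistent global explanation for a
        # set of detections. Detections, labels, scores and the constraint are all
        # invented examples, not the AMG/Bayesian-network machinery of the thesis.
        from itertools import product

        detections = ["d1", "d2"]
        labels = ["drop_bike", "pick_bike", "pass_by"]

        # P(label | detection) from hypothetical low-level detectors.
        likelihood = {
            "d1": {"drop_bike": 0.6, "pick_bike": 0.1, "pass_by": 0.3},
            "d2": {"drop_bike": 0.2, "pick_bike": 0.5, "pass_by": 0.3},
        }

        def consistent(assignment):
            # Example global constraint: a pick-up must be preceded by a drop-off.
            if assignment[1] == "pick_bike" and assignment[0] != "drop_bike":
                return False
            return True

        best, best_score = None, 0.0
        for assignment in product(labels, repeat=len(detections)):
            if not consistent(assignment):
                continue
            score = 1.0
            for det, lab in zip(detections, assignment):
                score *= likelihood[det][lab]
            if score > best_score:
                best, best_score = assignment, score

        print("global explanation:", dict(zip(detections, best)),
              "score:", round(best_score, 3))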

    Uncertainty modeling in higher dimensions

    Modern design problems impose several major tasks on the engineer. 1) The design must provide the intended functionalities. 2) It must be optimal with respect to a given design objective. 3) Finally, the design must be safeguarded against uncertain perturbations, which must not cause it to fail. These tasks are united in the problem of robust design optimization, which calls for computational methods that combine uncertainty modeling and design optimization. Methods for uncertainty modeling face some fundamental challenges: the computational effort must not exceed certain limits, and unjustified assumptions must be avoided as far as possible. The most critical issues, however, concern the handling of incomplete stochastic information and of high dimensionality. While the low-dimensional case is well studied and several methods exist for handling incomplete information, only very few techniques are available in higher dimensions. Imprecision and lack of sufficient information cause severe difficulties, but the situation is not hopeless. This dissertation shows how to reduce the high-dimensional case to a one-dimensional one by means of the potential clouds formalism. Using a potential function, this enables a worst-case analysis on confidence regions of relevant scenarios. The confidence regions are woven into a design optimization problem as safety constraints. In this way, uncertainty modeling and design optimization interact, which also permits a posteriori adaptive updating of the uncertainty information. Finally, we apply the approach in two case studies of 24 and 34 dimensions, respectively.
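    A heavily simplified sketch of the potential-clouds workflow summarized above: a scalar potential maps high-dimensional uncertain scenarios to one dimension, a confidence region is cut from a sample at a chosen level, and a design is then chosen against the worst case inside that region. The potential function, the objective, and all numbers are illustrative assumptions, not the dissertation's models.

        # Toy worst-case design under a sampled confidence region; everything here
        # is a placeholder for the dissertation's potential-clouds machinery.
        import numpy as np

        rng = np.random.default_rng(1)
        scenarios = rng.normal(size=(1000, 24))          # 24 uncertain parameters

        def potential(x):
            # crude potential: squared distance from the nominal scenario
            return np.sum(x**2, axis=-1)

        # 95% confidence region = scenarios whose potential lies below the 0.95 quantile
        level = np.quantile(potential(scenarios), 0.95)
        region = scenarios[potential(scenarios) <= level]

        def objective(design, scenario):
            # hypothetical performance measure of a scalar design variable
            return (design - 1.0)**2 + 0.1 * scenario[:, 0]

        # worst case over the confidence region, evaluated on a coarse design grid
        designs = np.linspace(0.0, 2.0, 21)
        worst = [objective(d, region).max() for d in designs]
        best_design = designs[int(np.argmin(worst))]
        print("robust design choice:", best_design)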

    Economic Networks: Theory and Computation

    This textbook is an introduction to economic networks, intended for students and researchers in the fields of economics and applied mathematics. The textbook emphasizes quantitative modeling, with the main underlying tools being graph theory, linear algebra, fixed point theory and programming. The text is suitable for a one-semester course, taught either to advanced undergraduate students who are comfortable with linear algebra or to beginning graduate students. Textbook homepage: https://quantecon.github.io/book-networks/intro.htm

    Learning from samples using coherent lower previsions

    This thesis's main subject is deriving, proposing, and studying predictive and parametric inference models that are based on the theory of coherent lower previsions. One important side subject also appears: obtaining and discussing extreme lower probabilities. In the chapter ‘Modeling uncertainty’, I give an introductory overview of the theory of coherent lower previsions, also called the theory of imprecise probabilities, and of its underlying ideas. This theory allows us to give a more expressive, and more cautious, description of uncertainty. The overview is original in the sense that, more than other introductions, it starts from the intuitive theory of coherent sets of desirable gambles. In the chapter ‘Extreme lower probabilities’, I show how to obtain the most extreme forms of uncertainty that can be modeled using lower probabilities. Every other state of uncertainty describable by lower probabilities can be formulated in terms of these extreme ones. The importance of the results I obtain and extensively discuss in this area is currently mostly theoretical. The chapter ‘Inference models’ treats learning from samples drawn from a finite, categorical space. My most basic assumption about the sampling process is that it is exchangeable, for which I give a novel definition in terms of desirable gambles. My investigation of the consequences of this assumption leads to some important representation theorems: uncertainty about (in)finite sample sequences can be modeled entirely in terms of category counts (frequencies). I build on this to give an elucidating derivation from first principles of two popular inference models for categorical data, the predictive imprecise Dirichlet-multinomial model and the parametric imprecise Dirichlet model, and I apply these models to game theory and to learning Markov chains. In the last chapter, ‘Inference models for exponential families’, I enlarge the scope to non-categorical exponential-family sampling models; examples are normal sampling and Poisson sampling. I first thoroughly investigate exponential families and the related conjugate parametric and predictive previsions used in classical Bayesian inference models based on conjugate updating. These previsions serve as a basis for the new imprecise-probabilistic inference models I propose. Compared to the classical Bayesian approach, mine allows us to be much more cautious when expressing what we know about the sampling model; this caution is reflected in the behavior (conclusions drawn, predictions made, decisions taken) based on these models. Finally, I show how the proposed inference models can be used for classification with the naive credal classifier.
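    For the imprecise Dirichlet(-multinomial) model mentioned above, the predictive probability of each category is bounded by letting the prior vary over its full set, giving the interval [n_k/(N+s), (n_k+s)/(N+s)]. The small numerical sketch below uses invented counts and an illustrative choice of the hyperparameter s.

        # Numerical sketch of imprecise Dirichlet model predictive bounds:
        # instead of a single predictive probability per category, one obtains a
        # lower and an upper probability. Counts and s are illustrative choices.
        counts = {"a": 6, "b": 3, "c": 1}   # observed category counts
        s = 2.0                              # IDM hyperparameter (prior strength)
        N = sum(counts.values())

        for k, n_k in counts.items():
            lower = n_k / (N + s)            # most pessimistic prior for category k
            upper = (n_k + s) / (N + s)      # most optimistic prior for category k
            print(f"P(next = {k}) in [{lower:.3f}, {upper:.3f}]")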

    A Markovian approach to the analysis and optimization of a portfolio of credit card accounts

    Master's thesis (Master of Engineering)

    Network Function Virtualization in Dynamic Networks: A Stochastic Perspective

    This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record.
    As a key enabling technology for 5G network softwarization, Network Function Virtualization (NFV) provides an efficient paradigm to optimize network resource utility for the benefit of both network providers and users. However, the inherent network dynamics and uncertainties of 5G infrastructure, resources, and applications are slowing down the further adoption of NFV in many emerging networking applications. Motivated by this, this paper investigates the network utility degradation that arises when implementing NFV in dynamic networks, and designs a proactive NFV solution from a fully stochastic perspective. Unlike existing deterministic NFV solutions, which assume given network capacities and/or static service quality demands, this paper explicitly integrates knowledge of influential network variations into a two-stage stochastic resource utilization model. By exploiting the hierarchical decision structure of this problem, a distributed computing framework with two-level decomposition is designed to facilitate a distributed implementation of the proposed model in large-scale networks. The experimental results demonstrate that the proposed solution not only improves network performance by a factor of 3 to 5, but also effectively reduces the risk of service quality violation.
    The work of Xiangle Cheng is partially supported by the China Scholarship Council for his study at the University of Exeter. This work is also partially supported by the UK EPSRC project (Grant No. EP/R030863/1).
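    The two-stage stochastic flavor of the model can be illustrated with a bare-bones sample-average sketch: a first-stage capacity decision is fixed before the uncertainty resolves, and a second-stage shortfall penalty is averaged over demand scenarios. The demands, costs, and penalty below are invented and stand in for the paper's NFV-specific quantities.

        # Bare-bones two-stage stochastic resource decision via sample averaging;
        # this illustrates the modeling style, not the paper's actual NFV model.
        import numpy as np

        rng = np.random.default_rng(7)
        demand_scenarios = rng.uniform(50, 150, size=500)   # uncertain service demand

        capacity_cost = 1.0      # cost per unit of reserved (first-stage) capacity
        violation_penalty = 5.0  # cost per unit of unmet demand (second-stage recourse)

        def expected_cost(capacity):
            shortfall = np.maximum(demand_scenarios - capacity, 0.0)
            return capacity_cost * capacity + violation_penalty * shortfall.mean()

        # first-stage decision chosen by brute force over a grid
        grid = np.arange(0, 201)
        best = min(grid, key=expected_cost)
        print("reserved capacity:", best,
              "expected cost:", round(expected_cost(best), 2))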