Economic regulation for multi-tenant infrastructures
Large-scale computing infrastructures need scalable and efficient resource allocation mechanisms to fulfill the requirements of their participants and applications while the system as a whole is regulated to work efficiently. Computational markets provide efficient allocation mechanisms that aggregate information from multiple sources in large, dynamic, and complex systems where no single source holds complete information. They have proven successful at matching resource demand with resource supply in the presence of selfish, multi-objective, utility-optimizing users and selfish, profit-optimizing providers. However, global infrastructure metrics that may not directly affect participants of the computational market still need to be addressed; these are known as economic externalities, such as load balancing or energy efficiency.
In this thesis, we point out the need to address these economic externalities, and we design and evaluate appropriate regulation mechanisms from different perspectives on top of existing economic models, incorporating a wider range of objective metrics not otherwise considered. Our main contributions are threefold. First, we propose a taxation mechanism that addresses the resource congestion problem, effectively improving the balance of load among resources when correlated economic preferences are present; second,
we propose a game-theoretic model with complete information to derive an algorithm that helps resource providers scale resource supply up and down so that energy-related costs can be reduced; and third, we relax our previous assumption of complete information on the resource provider's side and design an incentive-compatible mechanism that encourages users to truthfully report their resource requirements, effectively assisting providers in making energy-efficient allocations while providing users with a dynamic allocation mechanism.
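As an illustration of the flavor of the first contribution, the following is a minimal Python sketch of a utilization-proportional taxation rule. The function name, data shapes, and the linear tax form are illustrative assumptions, not the thesis's exact mechanism.

def taxed_prices(base_prices, utilizations, tax_rate=0.5):
    # Effective price = market price plus a tax proportional to utilization,
    # so congested resources become relatively more expensive.
    return [p * (1.0 + tax_rate * u) for p, u in zip(base_prices, utilizations)]

# A utility-maximizing user then picks the cheapest effective resource:
prices = taxed_prices([1.0, 1.0, 1.0], [0.9, 0.5, 0.1])
best = min(range(len(prices)), key=prices.__getitem__)  # index 2, the least loaded

Under such a rule, price-sensitive users naturally migrate toward less-loaded resources, which is precisely the load-balancing externality the taxation mechanism targets.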
Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources
Over the last few years, there has been notable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya.
In particular, in this thesis, we conceptualize cities as a collection of individual buildings. Hence, we focus on the individual processing of one structure at a time, rather than on the larger-scale processing of urban environments.
Nowadays, there is a wide diversity of digitization technologies, and the choice of the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
- Time-of-flight (terrestrial and aerial LiDAR).
- Photogrammetry (street-level, satellite, and aerial imagery).
- Human-edited vector data (cadastre and other map sources).
Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort.
Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery. A dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. Another advantage of this approach is that it captures high-quality color data, although the resulting geometric information is usually of lower quality.
In this thesis, we analyze in-depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings. These are terrestrial LiDAR for geometric information and street-level imagery for color information.
Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources, combining them when possible, to produce models that can be inspected in real time. Our research has focused on the following contributions:
- Effective and feature-preserving simplification of massive point clouds (a sketch follows this list).
- Developing normal estimation algorithms explicitly designed for LiDAR data.
- Low-stretch panoramic representation for point clouds.
- Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
- Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
- Efficient and faithful visualization of massive point clouds using image-based techniques.
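As a concrete illustration of the first contribution above, here is a minimal, hedged Python sketch of feature-aware point cloud simplification: points are bucketed into a uniform voxel grid, flat voxels are collapsed to their centroids, and voxels with high normal variation are kept at full density. The voxel size, threshold, and use of normal spread as a feature signal are illustrative assumptions, not the thesis's algorithm.

import numpy as np

def simplify(points, normals, voxel=0.1, feature_thresh=0.2):
    # Bucket points into voxels and group them by voxel id.
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    kept = []
    for v in range(inverse.max() + 1):
        idx = np.where(inverse == v)[0]
        # For unit normals, a mean-normal length near 1 indicates a flat region.
        spread = 1.0 - np.linalg.norm(normals[idx].mean(axis=0))
        if spread > feature_thresh:
            kept.append(points[idx])                              # keep features dense
        else:
            kept.append(points[idx].mean(axis=0, keepdims=True))  # collapse flat voxel
    return np.vstack(kept)

points = np.random.rand(10000, 3)
normals = np.tile([0.0, 0.0, 1.0], (10000, 1))  # toy flat scene: heavy decimation
print(simplify(points, normals).shape)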
Resource Management in Cloud and Big Data Systems
Cloud computing is a paradigm shift in computing, where services are offered and acquired on demand in a cost-effective way. These services are often virtualized, and they can handle the computing needs of big data analytics. The ever-growing demand for cloud services arises in many areas including healthcare, transportation, energy systems, and manufacturing. However, cloud resources such as computing power, storage, energy, dollars for infrastructure, and dollars for operations are limited. Effective use of the existing resources raises several fundamental challenges that place cloud resource management at the heart of the cloud providers' decision-making process. One of these challenges faced by the cloud providers is to provision, allocate, and price the resources such that their profit is maximized and the resources are utilized efficiently. In addition, executing large-scale applications in clouds may require resources from several cloud providers. Another challenge when processing data-intensive applications is minimizing their energy costs. Electricity used in US data centers in 2010 accounted for about 2% of total electricity used nationwide. In addition, the energy consumed by the data centers is growing at over 15% annually, and energy costs make up about 42% of the data centers' operating costs. Therefore, it is critical for the data centers to minimize their energy consumption when offering services to customers. In this Ph.D. dissertation, we address these challenges by designing, developing, and analyzing mechanisms for resource management in cloud computing systems and data centers. The goal is to allocate resources efficiently while optimizing a global performance objective of the system (e.g., maximizing revenue, maximizing social welfare, or minimizing energy). We improve the state of the art in both methodologies and applications. As for methodologies, we introduce novel resource management mechanisms based on mechanism design, approximation algorithms, cooperative game theory, and hedonic games. These mechanisms can be applied to cloud virtual machine (VM) allocation and pricing, cloud federation formation, and energy-efficient computing. In this dissertation, we outline our contributions and possible directions for future research in this field
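To make the VM allocation problem tangible, here is a minimal sketch of greedy allocation by bid density (bid per unit of requested capacity), a standard approximation-algorithm building block for such problems. The request format and the single-dimensional capacity are simplifying assumptions; the dissertation's mechanisms are more general.

def greedy_allocate(requests, capacity):
    # requests: list of (user, demand, bid); capacity: total units available.
    # Sort users by bid density and admit greedily while capacity remains.
    winners = []
    for user, demand, bid in sorted(requests, key=lambda r: r[2] / r[1], reverse=True):
        if demand <= capacity:
            capacity -= demand
            winners.append(user)
    return winners

# Example: two of three requests fit into 10 units of capacity.
print(greedy_allocate([("a", 4, 8.0), ("b", 6, 9.0), ("c", 5, 5.0)], 10))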
The Inter-cloud meta-scheduling
Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at the realization of scalable resource provisioning that enables a diversity of cloud user requirements to be handled efficiently. This study's contribution is the inter-cloud performance optimization of job executions using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimal schemes, and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically interlinked clouds. This is achieved through a model that includes a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability, and Service-Allocation algorithms. These, along with optimal resource-management schemes, provide the novel functionality of the ICMS: message exchanging implements the job distribution method, VM deployment offers the VM management features, and the local resource management system handles the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while handling the heterogeneity of different clouds through advanced service-level-agreement coordination. Experimental results are encouraging, as the proposed ICMS model improves the performance of service distribution for a variety of criteria such as service execution times, makespan, turnaround times, utilization levels, and energy consumption rates for various inter-cloud entities, e.g., users, hosts, and VMs. For example, ICMS improves the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimal schemes achieves a 9% improvement for the same configurations. The whole experimental platform is implemented in the inter-cloud Simulation toolkit (SimIC) developed by the author, which is a discrete event simulation framework
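A minimal sketch of the scheduling flow these algorithms describe, under assumed data structures (the Cloud class and its fields are illustrative placeholders, not part of ICMS or SimIC):

class Cloud:
    def __init__(self, name, free_vms, avg_turnaround):
        self.name, self.free_vms, self.avg_turnaround = name, free_vms, avg_turnaround

def meta_schedule(request_vms, clouds):
    available = [c for c in clouds if c.free_vms >= request_vms]  # Service-Availability
    if not available:
        return None                                               # reject or queue
    target = min(available, key=lambda c: c.avg_turnaround)       # Service-Distribution
    target.free_vms -= request_vms                                # Service-Allocation
    return target.name

clouds = [Cloud("eu-1", 8, 12.0), Cloud("us-1", 2, 7.5)]
print(meta_schedule(3, clouds))  # -> "eu-1" (us-1 lacks capacity)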
Learning-based stereo matching for 3D reconstruction
Stereo matching has been widely adopted for 3D reconstruction of real-world scenes and has enormous applications in the fields of computer graphics, vision, and robotics. Being an ill-posed problem, estimating accurate disparity maps is a challenging task. However, humans rely on binocular vision to perceive 3D environments and can estimate 3D information more rapidly and robustly than many active and passive sensors that have been developed. One of the reasons is that the human brain can utilize prior knowledge to understand the scene and to infer the most reasonable depth hypothesis even when visual cues are lacking. Recent advances in machine learning have shown that the brain's discrimination power can be mimicked using deep convolutional neural networks. Hence, it is worth investigating how learning-based techniques can be used to enhance stereo matching for 3D reconstruction.
Toward this goal, a sequence of techniques was developed in this thesis: a novel disparity filtering approach that selects accurate disparity values by analyzing the corresponding cost volumes using 3D neural networks; a robust semi-dense stereo matching algorithm that utilizes two neural networks for computing matching cost and performing confidence-based filtering; a novel network structure that learns global smoothness constraints and directly performs multi-view stereo matching based on global information; and finally a point cloud consolidation method that uses a neural network to reproject noisy data generated by multi-view stereo matching under different viewpoints. Qualitative and quantitative comparisons with existing works demonstrate the respective merits of the presented techniques
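As a hedged illustration of the first technique, the following PyTorch sketch scores each cell of a matching-cost volume with a small 3D convolutional network and masks out low-confidence disparities. Layer sizes and the 0.5 threshold are placeholders, not the thesis's architecture.

import torch
import torch.nn as nn

class CostVolumeFilter(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, cost_volume):           # (B, 1, D, H, W) matching costs
        return self.net(cost_volume)           # per-cell confidence in [0, 1]

volume = torch.rand(1, 1, 32, 64, 64)          # toy cost volume
confidence = CostVolumeFilter()(volume)
disparity = volume.argmin(dim=2)                # winner-take-all disparities
reliable = confidence.max(dim=2).values > 0.5   # keep confident pixels only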
Climbing Up Cloud Nine: Performance Enhancement Techniques for Cloud Computing Environments
With the transformation of cloud computing technologies from an attractive trend to a business reality, the need is more pressing than ever for efficient cloud service management tools and techniques. As cloud technologies continue to mature, the service model, resource allocation methodologies, energy efficiency models, and general service management schemes are not yet saturated. The burden of making all of this work falls on cloud providers. Economies of scale and the ability to leverage existing infrastructure and a large workforce are clear positives, but beyond that point operation is far from straightforward. Performance and service delivery still depend on the providers' algorithms and policies, which affect all operational areas.
With that in mind, this thesis tackles a set of the more critical challenges faced by cloud providers with the purpose of enhancing cloud service performance and reducing providers' costs. This is done by exploring innovative resource allocation techniques and developing novel tools and methodologies in the context of cloud resource management, power efficiency, high availability, and solution evaluation.
Optimal and suboptimal solutions to the resource allocation problem in cloud data centers, from both the computational and the network sides, are proposed. Next, a deep dive into the energy efficiency challenge in cloud data centers is presented. Consolidation-based and non-consolidation-based solutions containing a novel dynamic virtual machine idleness prediction technique are proposed and evaluated. An investigation of the problem of simulating cloud environments follows. Available simulation solutions are comprehensively evaluated, and a novel design framework for cloud simulators covering multiple variations of the problem is presented. Moreover, the challenge of evaluating the performance of cloud resource management solutions in terms of high availability is addressed. An extensive framework is introduced for designing high-availability-aware cloud simulators, and a prominent cloud simulator (GreenCloud) is extended to implement it. Finally, the evaluation of real cloud application scenarios is demonstrated using the new tool.
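One plausible form of dynamic VM idleness prediction can be sketched as an exponentially weighted moving average of observed idle gaps, which estimates how long a VM will stay idle so a consolidation policy can decide whether parking or migrating it pays off. This is a generic estimator, not the thesis's technique; the 0.3 smoothing factor is an assumption.

class IdlenessPredictor:
    def __init__(self, alpha=0.3):
        self.alpha, self.estimate = alpha, None

    def observe(self, idle_seconds):
        # Blend the newest idle interval into the running estimate.
        if self.estimate is None:
            self.estimate = idle_seconds
        else:
            self.estimate = self.alpha * idle_seconds + (1 - self.alpha) * self.estimate
        return self.estimate

p = IdlenessPredictor()
for gap in [120, 90, 300, 60]:
    forecast = p.observe(gap)   # updated after each observed idle interval
print(round(forecast, 1))       # smoothed prediction of the next idle gap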
The primary argument made in this thesis is that the proposed resource allocation and simulation techniques can serve as the basis for effective solutions that mitigate performance and cost challenges faced by cloud providers pertaining to resource utilization, energy efficiency, and client satisfaction
Affect-Preserving Visual Privacy Protection
The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observation in various educational and medical settings. Videos collected for such applications are considered protected health information under privacy laws in many countries. Visual privacy protection techniques, such as blurring or object removal, can be used to mitigate privacy concerns, but they also obliterate important visual cues of affect and social behavior that are crucial for the target applications. In this dissertation, we propose to balance privacy protection and the utility of the data by preserving privacy-insensitive information, such as pose and expression, which is useful in many applications involving visual understanding.
The Intellectual Merits of the dissertation include a novel framework for visual privacy protection by manipulating the facial images and body shapes of individuals, which: (1) is able to conceal the identity of individuals; (2) provides a way to preserve the utility of the data, such as expression and pose information; and (3) balances the utility of the data and the capacity of the privacy protection.
The Broader Impacts of the dissertation focus on the significance of privacy protection for visual data and the inadequacy of current privacy-enhancing technologies in preserving affect and behavioral attributes of visual content, which are highly useful for behavior observation in educational and medical settings. The work in this dissertation represents one of the first attempts to achieve both goals simultaneously
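For context, the conventional baseline the dissertation argues against can be sketched in a few lines: blurring a face region with OpenCV conceals identity but equally destroys the expression cues the target applications need. The toy frame and face coordinates below are placeholders, not data from the dissertation.

import numpy as np
import cv2

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # toy video frame
x, y, w, h = 100, 60, 80, 80                                       # assumed face box
face = frame[y:y + h, x:x + w]
frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (31, 31), 0)      # hides identity and affect alike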
Doctor of Philosophy
Shape analysis is a well-established tool for processing surfaces. It is often a first step in performing tasks such as segmentation, symmetry detection, and finding correspondences between shapes. Shape analysis is traditionally employed on well-sampled surfaces whose geometry and topology are precisely known. When the surface takes the form of a point cloud containing nonuniform sampling, noise, and incomplete measurements, traditional shape analysis methods perform poorly. Although one may first perform reconstruction on such a point cloud prior to shape analysis, if the reconstructed geometry and topology are far from the true surface, this can have an adverse impact on the subsequent analysis. Furthermore, for triangulated surfaces containing noise, thin sheets, and poorly shaped triangles, existing shape analysis methods can be highly unstable. This thesis explores methods of shape analysis applied directly to such defect-laden shapes. We first study the problem of surface reconstruction in order to better understand the types of point clouds for which reconstruction methods encounter difficulties. To this end, we have devised a benchmark for surface reconstruction, establishing a standard for measuring reconstruction error. We then develop a new method for consistently orienting the normals of such challenging point clouds by using a collection of harmonic functions intrinsically defined on the point cloud. Next, we develop a new shape analysis tool that is tolerant to imperfections, by constructing distances directly on the point cloud, defined as the likelihood of two points belonging to a mutually common medial ball, and apply this to segmentation and reconstruction. We extend this distance measure to define a diffusion process on the point cloud, tolerant to missing data, which is used for matching incomplete shapes undergoing nonrigid deformation. Lastly, we have developed an intrinsic method for multiresolution remeshing of a poor-quality triangulated surface via spectral bisection
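To illustrate the kind of error measure such a reconstruction benchmark requires, here is a minimal sketch of a symmetric Chamfer-style distance between point samples of the reconstruction and the reference shape. This generic metric is an assumption for illustration, not necessarily the benchmark's exact measure.

import numpy as np
from scipy.spatial import cKDTree

def reconstruction_error(recon_pts, reference_pts):
    d_ab = cKDTree(reference_pts).query(recon_pts)[0]   # recon -> reference
    d_ba = cKDTree(recon_pts).query(reference_pts)[0]   # reference -> recon
    return d_ab.mean() + d_ba.mean()                     # symmetric mean distance

a = np.random.rand(1000, 3)
print(reconstruction_error(a + 0.01, a))  # small perturbation -> small error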
Model-based joint bit allocation between geometry and color for video-based 3D point cloud compression
In video-based 3D point cloud compression, the quality of the reconstructed 3D point cloud depends on both the geometry and color distortions. Finding an optimal allocation of the total bitrate between the geometry coder and the color coder is a challenging task due to the large number of possible solutions. To solve this bit allocation problem, we first propose analytical distortion and rate models for the geometry and color information. Using these models, we formulate the joint bit allocation problem as a constrained convex optimization problem and solve it with an interior point method. Experimental results show that the rate-distortion performance of the proposed solution is close to that obtained with exhaustive search, but at only 0.66% of its time complexity
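As an illustration of the formulation (not the paper's fitted models), the following sketch minimizes a convex two-term distortion model subject to a total-rate constraint using SciPy's interior-point-style trust-constr solver; the exponential rate-distortion curves and their constants are assumptions.

import numpy as np
from scipy.optimize import minimize, LinearConstraint

R_total = 10.0  # total bit budget to split between geometry and color

def total_distortion(r):
    # Assumed convex rate-distortion models: D(R) = a * exp(-b * R).
    r_geom, r_color = r
    return 4.0 * np.exp(-0.6 * r_geom) + 2.5 * np.exp(-0.4 * r_color)

res = minimize(
    total_distortion,
    x0=[R_total / 2, R_total / 2],
    method="trust-constr",                        # interior-point-style solver
    constraints=[LinearConstraint([[1.0, 1.0]], 0.0, R_total)],
    bounds=[(0.0, R_total), (0.0, R_total)],
)
r_geom, r_color = res.x                           # optimal split of the budget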
- …