800 research outputs found

    Colorado water, November/December 2018

    Newsletter of the Colorado Water Center, devoted to highlighting water research and activities at CSU and throughout Colorado. Theme: Irrigation innovation and technology.

    Technology Trends and Opportunities for Construction Industry and Lifecycle Management

    Master's thesis in Offshore Technology: Industrial Asset Management. The purpose of the report is to highlight methods that can make it easier for the construction industry, and industry in general, to benefit from new technology. The report is intended as a reference to technological solutions and techniques that can streamline workflows for multiple tasks in planning, design, and operation and maintenance management. The problems focused on are how to:
    ‱ simplify the procurement and tracing of documentation,
    ‱ optimize building stages, design, and Life Cycle Management (LCM), and
    ‱ provide interaction between disciplines and employees using different software.
    The scientific platform is based on literature on technology trends; some history and trends in digital technology are presented. Definitions of roles and general terms related to documentation are derived from Norsk Standard and interpreted on this basis. The report charts the use of individual software packages and the technical setup of digital tools within CAD engineering (Computer Aided Design), HDS technology (High Definition Surveying), and gaming technology, combined with cloud services to support the planning, design, and management of building stages, and later the Life Cycle Management of facilities and businesses' ERP systems (Enterprise Resource Planning), as well as the use of Robotic Process Automation (RPA) and Artificial Intelligence (AI) for document-control tasks. The report finds that several suppliers provide services and products accessible through the web. Setup and implementation will require some work and knowledge from businesses and organizations, but the gain largely seems to justify the resources spent, particularly through IoT (Internet of Things) interactions, cloud services, and freely downloadable applications, which may be considered a paradigm shift with respect to the issues addressed in the report. New platforms for the engineering phases support Building Information Modeling (BIM) processes, using algorithmic editors to translate data between computer programs without the need for programming expertise; these streamline workflows, reduce re-creation of data, support interaction between software at various user levels, and use AI add-ons to optimize design in CAD engineering. Mobile devices such as phones and tablets, which support several of the solutions and products presented, are widely accessible, and it seems natural to assume that the vast majority of people are familiar with smartphone applications from daily use. The resources required to implement the presented solutions have not been assessed in this report; some of the equipment presented can be regarded as relatively expensive, so an investment analysis would be sensible. The trend, however, shows continuing price drops and increasing availability, while the user interfaces of both software and digital equipment keep improving. The conclusion is that the construction industry, as well as Facility Management (FM), in both the public and private sector, can have much to gain from using the technology and techniques presented in the report.
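    As a small, purely hypothetical illustration of the kind of document-control automation the report alludes to (RPA-style tracing of documentation), the sketch below scans a project folder, reads revision codes from file names, and reports missing or outdated documents against a required register. The file-naming scheme, folder name, and register are invented for this example and are not taken from the report.

```python
# Hypothetical document-control sketch: walk a project folder, read revision
# codes from file names (e.g. "P-1023_rev03.pdf"), and flag documents from a
# required register that are missing or outdated. Naming scheme and register
# are assumptions for illustration only.

import re
from pathlib import Path

REVISION_PATTERN = re.compile(r"(?P<doc>[A-Z]-\d+)_rev(?P<rev>\d+)\.pdf$", re.IGNORECASE)

def scan_register(folder):
    """Return the highest revision found for each document number."""
    register = {}
    for path in Path(folder).glob("*.pdf"):
        match = REVISION_PATTERN.search(path.name)
        if match:
            doc, rev = match.group("doc"), int(match.group("rev"))
            register[doc] = max(rev, register.get(doc, -1))
    return register

def check(required, register):
    """required: dict of document number -> minimum revision expected."""
    for doc, min_rev in sorted(required.items()):
        have = register.get(doc)
        if have is None:
            print(f"{doc}: MISSING")
        elif have < min_rev:
            print(f"{doc}: outdated (rev{have:02d} < rev{min_rev:02d})")
        else:
            print(f"{doc}: ok (rev{have:02d})")

if __name__ == "__main__":
    # Example register: document numbers and the minimum revision expected.
    check({"P-1023": 3, "P-2044": 1}, scan_register("./project_docs"))
```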

    Book of Abstracts: 8th International Conference on Smart Energy Systems


    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. It is then arguably required to have a seamless interaction of High Performance Computing with Modelling and Simulation in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Effizienz in Cluster-Datenbanksystemen - Dynamische und ArbeitslastberĂŒcksichtigende Skalierung und Allokation

    Database systems have been vital in all forms of data processing for a long time. In recent years, the amount of processed data has been growing dramatically, even in small projects. Nevertheless, database management systems tend to be static in terms of size and performance, which makes scaling a difficult and expensive task. Because of performance and, especially, cost advantages, more and more installed systems have a shared-nothing cluster architecture. Due to the massive parallelism of the hardware, programming paradigms from high-performance computing are being carried over to data processing, and database research struggles to keep up with this trend. A key feature of traditional database systems is to provide transparent access to the stored data. This introduces data dependencies and increases system complexity and inter-process communication. Therefore, many developers trade this feature for better scalability. However, explicitly managing the data distribution and data flow requires a deep understanding of the distributed system and reduces the possibilities for automatic and autonomic optimization. In this thesis we present an approach for database system scaling and allocation that features good scalability even though it keeps the data distribution transparent.
    The first part of this thesis analyzes the challenges and opportunities for self-scaling database management systems in cluster environments. Scalability is a major concern of Internet-based applications. Access peaks that overload the application are a financial risk, so systems are usually configured to be able to process peaks at any given moment. As a result, server systems often have a very low utilization. In distributed systems the efficiency can be increased by adapting the number of nodes to the current workload. We propose a processing model and an architecture that allow efficient self-scaling of cluster database systems.
    In the second part we consider different allocation approaches. To increase efficiency, we present a workload-aware, query-centric model. The approach is formalized; optimal and heuristic algorithms are presented. The algorithms optimize the data distribution for local query execution and balance the workload according to the query history. We present different query classification schemes for different forms of partitioning. The approach is evaluated for OLTP- and OLAP-style workloads, and it is shown that variants of the approach scale well for both fields of application.
    The third part of the thesis considers benchmarks for large, adaptive systems. First, we present a data generator for cloud-sized applications. Due to its architecture, the data generator can easily be extended and configured. A key feature is the high degree of parallelism that makes linear speedup possible for arbitrary numbers of nodes. To simulate systems with user interaction, we have analyzed a production online e-learning management system. Based on our findings, we present a model for workload generation that considers the temporal dependency of user interaction.
    German abstract (translated): Database systems have long been the foundation of all kinds of information processing. In recent years, the volume of data has grown dramatically, even in small projects. Nevertheless, many database systems are static with respect to their capacity and processing speed, which makes scaling laborious and expensive. Because of their good performance and, above all, for cost reasons, more and more systems have a shared-nothing architecture, i.e. they consist of independent, loosely coupled compute nodes. Since this design principle exhibits a very high degree of parallelism, programming paradigms from classical high-performance computing are increasingly being used for information processing, a trend that poses major challenges for database research. One of the fundamental properties of traditional database systems is transparent access to the stored data, which allows users to access the data independently of its internal organization. The resulting independence leads to dependencies in the data and increases the complexity of the systems and of the communication between individual processes. Therefore, many developers sacrifice transparency for better scalability. This decision means that the data organization and the data flow must be handled explicitly, which limits the possibilities for automatic and autonomic optimization of the system. The approach to scaling and allocation presented in this thesis preserves transparent access and is distinguished by its full automatability and very good scalability. The first part of the dissertation addresses the challenges and opportunities for self-scaling database management systems operated on computer clusters. Good scalability is a necessary property for applications accessible over the Internet; access peaks that overload the application represent a financial risk. Systems are therefore configured so that they can handle potential load peaks at any time, which usually leads to a very low average utilization of the underlying systems. One way to counter this inefficiency is to adapt the number of compute nodes in use to the current load. This dissertation presents a model and an architecture for query processing that make it possible to scale database systems on cluster computers easily and efficiently. The second part of the thesis covers different options for data distribution. To increase efficiency, a model is used that takes the load distribution in the query stream into account. The approach is formalized, and optimal as well as heuristic solutions are presented. The presented algorithms optimize the data distribution for local execution of all queries and balance the load across the compute nodes. Different kinds of query classification are introduced, leading to different forms of partitioning. The approach is evaluated for both online transaction processing and online data analysis; the evaluation shows that it scales very well for both fields. The last part of the thesis presents various techniques for benchmarking large, adaptive systems. First, a data generation approach is shown that makes it possible to generate very large amounts of data in a fully parallel fashion. To simulate the user interaction of online systems, a production e-learning system was analyzed, and based on this analysis a model for workload generation was built that takes the temporal dependencies of user interaction into account.
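    As a rough illustration of the workload-aware, query-centric allocation idea summarized above, the sketch below greedily co-locates partitions that are frequently queried together and balances the accumulated query load across nodes. The data layout, scoring rule, and function names are assumptions made for this example; they are not the algorithms from the thesis.

```python
# Hypothetical workload-aware allocation heuristic: place partitions that are
# queried together on the same node (to favour local query execution) while
# balancing the accumulated query load across nodes.

from collections import defaultdict

def allocate(query_history, partitions, num_nodes):
    """query_history: list of (set_of_partitions, frequency) pairs."""
    co_access = defaultdict(int)   # how often a pair of partitions is queried together
    load = defaultdict(int)        # total query frequency per partition
    for parts, freq in query_history:
        for p in parts:
            load[p] += freq
        parts = sorted(parts)
        for i, a in enumerate(parts):
            for b in parts[i + 1:]:
                co_access[(a, b)] += freq

    assignment = {}                # partition -> node
    node_load = [0] * num_nodes
    # Place heavily used partitions first.
    for p in sorted(partitions, key=lambda x: -load[x]):
        def score(node):
            # Reward co-location with partitions already on this node,
            # penalise load imbalance.
            colo = sum(co_access[tuple(sorted((p, q)))]
                       for q, n in assignment.items() if n == node)
            return colo - node_load[node]
        best = max(range(num_nodes), key=score)
        assignment[p] = best
        node_load[best] += load[p]
    return assignment

# Example: two partitions that are always queried together end up on one node.
history = [({"orders", "customers"}, 100), ({"logs"}, 40)]
print(allocate(history, ["orders", "customers", "logs"], 2))
```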

    Convergence of Intelligent Data Acquisition and Advanced Computing Systems

    This book is a collection of published articles from the Sensors Special Issue on "Convergence of Intelligent Data Acquisition and Advanced Computing Systems". It includes extended versions of the conference contributions from the 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS’2019), Metz, France, as well as external contributions

    Structural issues and energy efficiency in data centers

    Mención Internacional en el título de doctor (International Doctorate Mention). With the rise of cloud computing, data centers have come to play a central role in today's Internet. Despite this relevance, they are probably still far from their zenith, given the ever-increasing demand for content to be stored in and distributed by the cloud, the need for computing power, and the larger and larger amounts of data being analyzed by top companies such as Google, Microsoft, or Amazon. However, it is not all a bed of roses: having a data center entails two major issues. Data centers are terribly expensive to build, and they consume huge amounts of power, which also makes them terribly expensive to maintain. For this reason, cutting down the cost of building data centers and increasing their energy efficiency (and hence reducing their carbon footprint) has been one of the hottest research topics in recent years. In this thesis we propose different techniques that can have an impact on both the building and maintenance costs of data centers of any size, from small-scale to large flagship data centers.
    The first part of the thesis is devoted to structural issues. We start by analyzing the bisection (band)width of a topology, of product graphs in particular, a useful parameter for comparing and choosing among different data center topologies. In the same part we describe the problem of deploying the servers in a data center as a Multidimensional Arrangement Problem (MAP) and propose a heuristic to reduce the deployment and wiring costs.
    We target energy efficiency in data centers in the second part of the thesis. We first propose a method to reduce the energy consumption of the data center network: rate adaptation. Rate adaptation is based on the idea of energy proportionality and aims to consume power on network devices proportionally to the load on their links. Our analysis shows that rate adaptation alone may achieve average energy savings on the order of 30-40%, and up to 60%, depending on the network topology. We continue by characterizing the power requirements of a data center server, given that, in order to properly increase the energy efficiency of a data center, we first need to understand how energy is being consumed. We present an exhaustive empirical characterization of the power requirements of multiple components of data center servers, namely the CPU, the disks, and the network card. To do so, we devise different experiments to stress these components, taking into account the multiple available frequencies as well as the fact that we are working with multicore servers. In these experiments, we measure their energy consumption and identify their optimal operational points. Our study shows that the curve defining the minimal power consumption of the CPU, as a function of the load in Active Cycles Per Second (ACPS), is neither concave nor purely convex; it does, however, have a clearly superlinear dependence on the load. We also validate the accuracy of the model derived from our characterization by running different Hadoop applications in diverse scenarios, obtaining an error below 4.1% on average.
    The last topic we study is the Virtual Machine Assignment problem (VMA), i.e., optimizing how virtual machines (VMs) are assigned to physical machines (PMs) in data centers. Our optimization target is to minimize the power consumed by all the PMs, considering that power consumption depends superlinearly on the load. We study four different VMA problems, depending on whether the number of PMs and their capacity are bounded or not. We study their complexity and perform an offline and online analysis of these problems. The online analysis is complemented with simulations that show that the online algorithms we propose consume substantially less power than other state-of-the-art assignment algorithms.
    Programa Oficial de Doctorado en Ingeniería Telemåtica (Official Doctoral Programme in Telematics Engineering). Thesis committee: Joerg Widmer (chair), José Manuel Moya Fernåndez (secretary), Shmuel Zak (member).
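    To give a feel for why superlinear power consumption makes virtual machine assignment non-trivial, the sketch below uses a hypothetical convex power curve and places each VM on the physical machine whose total power would increase the least. The power model, capacity handling, and greedy rule are assumptions for illustration; they are not the algorithms analyzed in the thesis.

```python
# Illustrative greedy VM-to-PM assignment under an assumed superlinear power
# model: each VM goes to the machine with the smallest increase in power.

def power(load):
    # Assumed power curve: idle cost plus a convex (superlinear) load term.
    return 0.0 if load == 0 else 100.0 + 0.5 * load ** 1.5

def assign(vm_loads, num_pms, capacity):
    pm_load = [0.0] * num_pms
    placement = []
    for vm in vm_loads:
        candidates = [i for i in range(num_pms) if pm_load[i] + vm <= capacity]
        if not candidates:
            raise RuntimeError("no physical machine has enough free capacity")
        # Pick the PM whose total power consumption grows the least.
        best = min(candidates,
                   key=lambda i: power(pm_load[i] + vm) - power(pm_load[i]))
        pm_load[best] += vm
        placement.append(best)
    return placement, sum(power(l) for l in pm_load)

placement, total_power = assign([30, 20, 50, 10], num_pms=2, capacity=100)
print(placement, round(total_power, 1))
```

    With an idle cost in the assumed curve, the greedy rule first consolidates VMs and only spreads them once the superlinear term dominates, which illustrates the consolidation-versus-spreading trade-off that such power models create.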
    • 

    corecore