Quality of Service Aware Data Stream Processing for Highly Dynamic and Scalable Applications
Huge amounts of georeferenced data streams arrive daily at data stream management systems deployed to serve highly scalable and dynamic applications. There are innumerable ways in which those loads can be exploited to gain deep insights in various domains. Decision makers require interactive visualization of such data in the form of maps and dashboards for decision making and strategic planning. Data streams normally exhibit fluctuation and oscillation in arrival rates and skewness; these are the two predominant factors that greatly impact the overall quality of service. Data stream management systems must therefore be attuned to those factors, in addition to the spatial shape of the data, which may exaggerate their negative impact. Current systems do not natively support services with quality guarantees for dynamic scenarios, leaving the handling of those logistics to the user, which is challenging and cumbersome. Three workloads are predominant for any data stream: batch processing, scalable storage and stream processing. In this thesis, we have designed a quality-of-service-aware system, SpatialDSMS, that comprises several subsystems covering those loads and any mixed load that results from intermixing them. Most importantly, we have natively incorporated quality-of-service optimizations for processing avalanches of geo-referenced data streams in highly dynamic application scenarios. This has been achieved transparently on top of the codebases of emerging de facto standard best-in-class representatives, thus relieving users in the presentation layer from having to reason about those services. Instead, users express their queries with quality goals, and our system optimizer compiles them down into query plans with embedded quality guarantees, leaving logistics handling to the underlying layers.
We have developed standards-compliant prototypes for all the subsystems that constitute SpatialDSMS
Redesign of the Quizzito administration console
In the context of the competitive online educational landscape, we sought to deliver a platform that would not only meet but surpass the evolving expectations of users, providing them with a seamless and rewarding learning experience. Our project showcases how innovative design, effective user testing, and the deployment of modern development technologies can transform an educational platform to better serve the needs of its users and facilitate their growth and learning journeys.
Our project aimed to address the challenges faced by the existing educational platform, Quizzito, which seeks to make learning an engaging and rewarding experience. By redesigning the user interface and enhancing the overall user experience, we aspired to not only retain current users but also attract new ones. The development process was carefully guided by the feedback collected during the mock-up phase, ensuring that user expectations were met and exceeded. Using modern technologies like Vue.js for the front end and Laravel for the back end, we streamlined the platform's functionality while also optimizing performance and responsiveness. Data storage, a critical component of any educational platform, was handled efficiently through SQL.
Scalable and responsive real time event processing using cloud computing
PhD Thesis. Cloud computing provides the potential for scalability and adaptability in a cost-effective manner. However, achieving scalability for real-time applications requires low response times. Many applications demand good performance and low response time, which must be matched with dynamic resource allocation. Real-time processing requirements can also be characterized by unpredictable rates of incoming data streams and dynamic outbursts of data. This raises the issue of processing data streams across multiple cloud computing nodes. This research analyzes possible methodologies for processing real-time data in which applications can be structured as multiple event processing networks and partitioned over the set of available cloud nodes. The approach is based on queuing theory principles to encompass cloud computing. The transformation of raw data into useful outputs occurs in various stages of processing networks that are distributed across multiple computing nodes in a cloud. A set of valid options is created to understand the response time requirements of each application. Under a given valid set of conditions that meet the response time criteria, multiple instances of event processing networks are distributed across the cloud nodes. A generic methodology to scale up and scale down the event processing networks in accordance with the response time criteria is defined. Real-time applications that support sophisticated decision support mechanisms need to comply with response time criteria consisting of interdependent data flow paradigms, making it harder to improve performance. Consideration is given to ways of reducing latency and improving the response time and throughput of real-time applications by distributing the event processing networks across multiple computing nodes.
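The queueing-theoretic scaling decision described above can be sketched as follows. This is an illustrative outline, not the thesis's actual methodology: it models each event processing network instance as an M/M/1 queue with the load split evenly, and the arrival rate, service rate, and response-time target in the example are made-up numbers.

```python
def response_time(arrival_rate, service_rate, nodes):
    """Mean response time of an M/M/1 queue per node, with the
    incoming stream split evenly across `nodes` instances."""
    per_node = arrival_rate / nodes
    if per_node >= service_rate:
        return float("inf")  # unstable: the queue grows without bound
    return 1.0 / (service_rate - per_node)

def required_nodes(arrival_rate, service_rate, target, max_nodes=64):
    """Smallest number of event-processing-network instances whose
    predicted mean response time meets the target (scale-up/scale-down
    then means moving to this number from the current one)."""
    for n in range(1, max_nodes + 1):
        if response_time(arrival_rate, service_rate, n) <= target:
            return n
    raise RuntimeError("target unattainable within max_nodes")

# Hypothetical numbers: 900 events/s arriving, each node serves 500/s,
# and the application requires a mean response time below 0.05 s.
n = required_nodes(arrival_rate=900.0, service_rate=500.0, target=0.05)
print(n)
```

Re-evaluating `required_nodes` as the measured arrival rate changes gives the scale-up and scale-down signal against the response-time criterion.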
Engineering Self-Adaptive Collective Processes for Cyber-Physical Ecosystems
The pervasiveness of computing and networking is creating significant opportunities for building valuable socio-technical systems. However, the scale, density, heterogeneity, interdependence, and QoS constraints of many target systems pose severe operational and engineering challenges. Beyond individual smart devices, cyber-physical collectives can provide services or solve complex problems by leveraging a “system effect” while coordinating and adapting to context or environment change. Understanding and building systems exhibiting collective intelligence and autonomic capabilities represent a prominent research goal, partly covered, e.g., by the field of collective adaptive systems. Therefore, drawing inspiration from and building on the long-time research activity on coordination, multi-agent systems, autonomic/self-* systems, spatial computing, and especially on the recent aggregate computing paradigm, this thesis investigates concepts, methods, and tools for the engineering of possibly large-scale, heterogeneous ensembles of situated components that should be able to operate, adapt and self-organise in a decentralised fashion. The primary contribution of this thesis consists of four main parts. First, we define and implement an aggregate programming language (ScaFi), internal to the mainstream Scala programming language, for describing collective adaptive behaviour, based on field calculi. Second, we conceive of a “dynamic collective computation” abstraction, also called aggregate process, formalised by an extension to the field calculus, and implemented in ScaFi. Third, we characterise and provide a proof-of-concept implementation of a middleware for aggregate computing that enables the development of aggregate systems according to multiple architectural styles. 
Fourth, we apply and evaluate aggregate computing techniques in edge computing scenarios, and characterise a design pattern, called Self-organising Coordination Regions (SCR), that supports adjustable, decentralised decision-making and activity in dynamic environments.
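The field-based, self-stabilising coordination underlying aggregate computing can be illustrated with a toy simulation. This is a hypothetical sketch, not ScaFi's API: each node repeatedly recomputes a gradient (hop distance to the nearest source) from its neighbours' values, the pattern ScaFi expresses with its rep/nbr constructs.

```python
import math

def gradient(neighbours, sources, rounds=None):
    """Self-stabilising hop-count gradient: every node repeatedly takes
    min(neighbour value + 1), while source nodes clamp to 0.
    `neighbours` maps a node id to the ids of its adjacent nodes."""
    field = {n: 0.0 if n in sources else math.inf for n in neighbours}
    rounds = rounds or len(neighbours)  # network diameter is a safe bound
    for _ in range(rounds):
        field = {
            n: 0.0 if n in sources
            else min((field[m] + 1 for m in neighbours[n]), default=math.inf)
            for n in neighbours
        }
    return field

# A line topology 0-1-2-3 with node 0 as the only source.
topo = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(gradient(topo, sources={0}))
```

The same repeated local-minimisation update self-heals after topology or source changes, which is what makes gradient-like fields a building block for decentralised patterns such as SCR.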
Austrian High-Performance-Computing meeting (AHPC2020)
This booklet is a collection of abstracts presented at the AHPC conference.
OpenCMP: An Open-Source Computational Multiphysics Package
Computational multiphysics offers a safe, inexpensive, and rapid alternative to direct experimentation, but there remain barriers to its widespread use.
One such barrier is the generation of conformal meshes of simulation domains, which is the primary approach to spatial discretization used in multiphysics simulations. Generation of these meshes is time intensive, non-deterministic, and often requires manual user intervention. For complex domain geometries, there is also a competition between domain-conforming mesh elements, element and mesh quality within the domain, and simulation stability.
A second barrier is lack of easy access to computational multiphysics software based on the finite element method, which enables high-order spatial discretization at the cost of implementation complexity. Most computational multiphysics software is instead based on the finite volume method, which is relatively simple to implement but inherently low-order in its spatial discretization, requiring relatively high densities of mesh elements for adequate numerical accuracy. Higher mesh densities correspond to smaller element scales, however, resulting in stability issues for convection-dominated simulations. Finite element method-based computational multiphysics software packages do exist; however, they are either closed-source or require extensive user skills in a broad range of areas, including continuum mechanics, applied math, and computational science.
This thesis presents OpenCMP, a new open-source computational multiphysics package. OpenCMP implements the diffuse interface method, which allows even complex geometries to be meshed with nonconforming structured grids, improving simulation stability and sometimes speed. OpenCMP is built on the popular finite element library NGSolve, and offers both a simple user interface for running standard models and the ability for experienced users to easily add new models. It has been validated on common benchmark problems and used to extend the diffuse interface method to simulations with moving domains.
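The core idea of the diffuse interface method, replacing an explicit mesh-conforming boundary with a smooth phase field on a structured grid, can be sketched as below. This is an illustrative construction only, not OpenCMP's actual formulation; the grid size, circle geometry, and interface width are made-up parameters.

```python
import math

def phase_field(nx, ny, cx, cy, r, eps):
    """Diffuse-interface indicator on a uniform nx-by-ny grid over [0,1]^2:
    ~1 inside the circle (cx, cy, r), ~0 outside, varying smoothly over a
    band of width ~eps instead of a sharp, mesh-conforming boundary."""
    phi = []
    for j in range(ny):
        row = []
        for i in range(nx):
            x, y = i / (nx - 1), j / (ny - 1)
            d = math.hypot(x - cx, y - cy) - r  # signed distance to the circle
            row.append(0.5 * (1.0 - math.tanh(d / eps)))
        phi.append(row)
    return phi

# 41x41 grid, circle of radius 0.3 centred in the domain, interface width 0.05.
phi = phase_field(41, 41, 0.5, 0.5, 0.3, 0.05)
```

In a diffuse-interface simulation, a field like this multiplies terms of the governing equations so that boundary conditions are imposed weakly over the thin band, letting a simple structured grid stand in for a conformal mesh of the geometry.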
Load shedding in network monitoring applications
Monitoring and mining real-time network data streams are crucial operations for managing and operating data networks. The information that network operators desire to extract from the network traffic is of different size, granularity and accuracy depending on the measurement task (e.g., relevant data for capacity planning and intrusion detection are very different). To satisfy these different demands, a new class of monitoring systems is emerging to handle multiple and arbitrary monitoring applications.
Such systems must inevitably cope with the effects of continuous overload situations due to the large volumes, high data rates and bursty nature of the network traffic. These overload situations can severely compromise the accuracy and effectiveness of monitoring systems, when their results are most valuable to network operators.
In this thesis, we propose a technique called load shedding as an effective and low-cost alternative to over-provisioning in network monitoring systems.
It allows these systems to efficiently handle overload situations in the presence of multiple, arbitrary and competing monitoring applications. We present the design and evaluation of a predictive load shedding scheme that can shed excess load in the face of extreme traffic conditions and maintain the accuracy of the monitoring applications within bounds defined by end users, while ensuring a fair allocation of computing resources to non-cooperative applications.
The main novelty of our scheme is that it considers monitoring applications as black boxes, with arbitrary (and highly variable) input traffic and processing cost. Without any explicit knowledge of the application internals, the proposed scheme extracts a set of features from the traffic streams to build an on-line prediction model of the resource requirements of each monitoring application, which is used to anticipate overload situations and control the overall resource usage by sampling the input packet streams. This way, the monitoring system preserves a high degree of flexibility, increasing the range of applications and network scenarios where it can be used.
Since not all monitoring applications are robust against sampling, we then extend our load shedding scheme to support custom load shedding methods defined by end users, in order to provide a generic solution for arbitrary monitoring applications. Our scheme allows the monitoring system to safely delegate the task of shedding excess load to the applications and still guarantee fairness of service with non-cooperative users.
We implemented our load shedding scheme in an existing network monitoring system and deployed it in a research ISP network. We present experimental evidence of the performance and robustness of our system with several concurrent monitoring applications during long-lived executions and using real-world traffic traces.
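The predictive scheme described above can be caricatured as follows. This is a deliberately simplified sketch, not the thesis's system: the single traffic feature (packet count), the EWMA cost model, and the CPU-cycle numbers are illustrative stand-ins for the richer feature extraction and prediction model the work actually builds.

```python
class LoadShedder:
    """Per-application processing-cost prediction via an EWMA, plus a
    global packet sampling rate chosen so predicted load fits the budget."""

    def __init__(self, capacity, alpha=0.2):
        self.capacity = capacity   # CPU cycles available per measurement interval
        self.alpha = alpha         # EWMA smoothing factor
        self.cost_per_pkt = {}     # learned cost model, one entry per application

    def observe(self, app, packets, measured_cost):
        """Update the cost model after running `app` on `packets` packets
        at a measured total cost (treating the application as a black box)."""
        if packets == 0:
            return
        est = measured_cost / packets
        prev = self.cost_per_pkt.get(app, est)
        self.cost_per_pkt[app] = self.alpha * est + (1 - self.alpha) * prev

    def sampling_rate(self, expected_packets):
        """Fraction of the input stream to keep so that the predicted
        total cost of all applications stays within capacity."""
        predicted = sum(c * expected_packets for c in self.cost_per_pkt.values())
        if predicted <= self.capacity:
            return 1.0                      # no overload predicted: shed nothing
        return self.capacity / predicted    # shed just enough load

shedder = LoadShedder(capacity=1000.0)
shedder.observe("flow-stats", packets=100, measured_cost=50.0)   # 0.5 cycles/pkt
shedder.observe("ids", packets=100, measured_cost=150.0)         # 1.5 cycles/pkt
rate = shedder.sampling_rate(expected_packets=1000)
```

Because the model is learned purely from observed (traffic, cost) pairs, the monitoring applications remain black boxes, which is the key property the scheme relies on.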
Optimisation of a water company’s waste pumping asset base with a focus on energy reduction
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Water companies use a significant quantity of electricity for the operation of their clean and wastewater assets. Rising energy prices have led to higher energy bills within the water companies, which has increased operating costs. Thus, improvements in demand-side energy management are needed to increase efficiency and reduce costs, which forms the premise for this research project.
Thames Water Utilities Ltd has identified that improvements in demand-side energy management are required and is currently researching various methods to reduce energy consumption. One initiative included the upgrade of a variety of site telemetry assets. By deploying these new telemetry assets, Thames Water Utilities Ltd is better able to liberate asset data and thus make informed decisions on how to control and optimise the target sites, which is where this research project has seen further opportunities. This enhanced telemetry and SCADA infrastructure will enable research to further develop an intelligent integrated system that tackles pump scheduling and process control with an emphasis on energy management.
The use of modern techniques, such as artificial intelligence, to optimise the network operation is gradually gaining traction. The balance between implementing new technology (with the benefits it may bring) and reluctance to change from the incumbent operating model will always provide challenges in the technology adoption agenda.
The main work of this research project included the physical surveying of a wastewater hydraulic catchment, inclusive of all wet well dimensions, lidar overlays, and pump electrical power characteristics. These survey results were then programmed into the company's hydraulic model to enable a higher degree of accuracy in the modelling, as well as enabling electrical power as a measurable output. From here, the model was optimised, focussing on electrical energy as an output variable for reduction.
The research concluded that electrical energy consumption over time can be reduced using the aforementioned strategies, and as such recommends further work to move from the model environment to physical architecture. It does so with the key message that risk tolerances on water levels must be pre-agreed with hydraulic specialists prior to deployment.
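The kind of optimisation the project performs, choosing when to run pumps so that energy cost is minimised while the wet well never overflows, can be sketched as a search over on/off schedules across tariff periods. Everything below is hypothetical: the inflow profile, tariff prices, well capacity, and pump ratings are invented for illustration, and a real catchment model would be far richer than this single-well toy.

```python
from itertools import product

def simulate(schedule, inflow, tariff, power, pump_rate, cap):
    """Run one wet well through a day of equal slots. `schedule` holds one
    pump on/off decision per slot. Returns the total energy cost, or None
    if the well overflows (schedule infeasible against the level limit)."""
    level, cost = 0.0, 0.0
    for on, q_in, price in zip(schedule, inflow, tariff):
        level += q_in                       # sewage arriving this slot (m^3)
        if on:
            level = max(0.0, level - pump_rate)
            cost += power * price           # kWh used in the slot * tariff
        if level > cap:
            return None                     # breached the agreed level limit
    return cost

def best_schedule(inflow, tariff, power, pump_rate, cap):
    """Exhaustively pick the cheapest feasible schedule (fine for the
    short horizons of this toy; real schedulers use smarter optimisers)."""
    feasible = (
        (simulate(s, inflow, tariff, power, pump_rate, cap), s)
        for s in product([False, True], repeat=len(inflow))
    )
    return min((c, s) for c, s in feasible if c is not None)

# Four slots: steady inflow, expensive peak tariff in the middle two slots.
inflow = [4.0, 4.0, 4.0, 4.0]          # m^3 arriving per slot
tariff = [0.10, 0.30, 0.30, 0.10]      # price per kWh in each slot
cost, sched = best_schedule(inflow, tariff, power=10.0, pump_rate=8.0, cap=10.0)
```

The optimiser shifts pumping out of the peak-tariff slots while respecting the well's capacity, which mirrors the thesis's conclusion that pre-agreed level risk tolerances bound how far scheduling can chase cheap energy.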
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to provide better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. Seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.