EVEREST IST-2002-00185: D23: final report
Public deliverable of the European project EVEREST. This deliverable constitutes the final report of the project IST-2002-001858 EVEREST. After the project's successful completion, this document first summarizes the context, goals, and approach of the project. It then presents a concise summary of the major goals and results, and highlights the most valuable lessons derived from the project work. A list of deliverables and publications is included in the annex. Postprint (published version)
Cloud Cost Optimization: A Comprehensive Review of Strategies and Case Studies
Cloud computing has revolutionized the way organizations manage their IT
infrastructure, but it has also introduced new challenges, such as managing
cloud costs. This paper explores various techniques for cloud cost
optimization, including cloud pricing analysis and strategies for resource
allocation. Real-world case studies of these techniques are presented, along
with a discussion of their effectiveness and key takeaways. The analysis
conducted in this paper reveals that organizations can achieve significant cost
savings by adopting cloud cost optimization techniques. Additionally, future
research directions are proposed to advance the state of the art in this
important field.
An Edge-Cloud based Reference Architecture to support cognitive solutions in Process Industry
Process Industry is one of the leading sectors of the world economy, characterized however by intense environmental impact and very high energy consumption. Despite a traditionally low innovation pace in PI, in recent years a strong worldwide push towards the dual objective of improving the efficiency of plants and the quality of products, while significantly reducing electricity consumption and CO2 emissions, has gained momentum. Digital Technologies (namely Smart Embedded Systems, IoT, Data, AI and Edge-to-Cloud Technologies) are enabling drivers for a Twin Digital-Green Transition, as well as foundations for human-centric, safe, comfortable and inclusive workplaces. Currently, digital sensors in plants produce a large amount of data, which in most cases constitutes only potential rather than real value for Process Industry, often locked in closed proprietary systems and seldom exploited. Digital technologies, with process modelling and simulation via digital twins, can build a bridge between the physical and the virtual worlds, bringing innovation with great efficiency and drastic reduction of waste. In accordance with the guidelines of Industrie 4.0, this work proposes a modular and scalable Reference Architecture, based on open-source software, which can be implemented in both brownfield and greenfield scenarios. The ability to distribute processing between the edge, where the data are created, and the cloud, where the greatest computational resources are available, facilitates the development of integrated digital solutions with cognitive capabilities. The reference architecture is being validated in three pilot plants, paving the way to the development of integrated planning solutions, with scheduling and control of the plants, optimizing the efficiency and reliability of the supply chain, and balancing energy efficiency.
SYSTEMS FOR THE MOBILITY OF USERS AND APPLICATIONS IN WIRED AND WIRELESS NETWORKS
The words mobility and network are found together in many contexts. The issue alone of modeling geographical user mobility in wireless networks has countless applications. Depending on one's background, the concept is investigated with very different tools and aims.
Moreover, the last decade also saw a growing interest in code mobility, i.e. the possibility for software applications (or parts thereof) to migrate and keep working on different devices and environments. A notable real-life and successful application is distributed computing, which under certain hypotheses can obviate the need for expensive supercomputers. The general rationale is splitting a very demanding computing task into a large number of independent sub-problems, each addressable by limited-power machines, weakly connected (typically through the Internet, the quintessence of a wired network).
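The split-compute-merge rationale described above can be sketched as follows. This is an illustrative example, not code from the thesis; real deployments ship each chunk to a remote machine, and the thread pool here merely stands in for those weakly connected workers (all names are illustrative).

```python
# Sketch: split one demanding task into independent sub-problems,
# farm them out to limited-power workers, and merge partial results.
from concurrent.futures import ThreadPoolExecutor

def subproblem(chunk):
    """One independent sub-task: here, a sum of squares over a chunk."""
    return sum(x * x for x in chunk)

def split(data, n_parts):
    """Partition the input into n_parts contiguous, roughly equal chunks."""
    k, r = divmod(len(data), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        end = start + k + (1 if i < r else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

def distribute(data, n_workers=4):
    """Fan the sub-problems out to weakly coupled workers, then merge."""
    with ThreadPoolExecutor(n_workers) as pool:
        partials = pool.map(subproblem, split(data, n_workers))
    return sum(partials)
```

The key property, as in the thesis's broadcast-monitoring cluster, is that the sub-problems share no state, so the merge step is a trivial reduction.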
Following these lines of thought, we organized this thesis in two distinct and independent parts:
Part I
It deals with audio fingerprinting, with special emphasis on the application of broadcast monitoring and on implementation aspects. Although the problem is tackled from many sides, one of the most prominent difficulties is the high computing power required for the task. We thus devised and operated a distributed-computing solution, which is described in detail. Tests were conducted on the computing cluster available at the Department of Engineering of the University of Ferrara.
Part II
It focuses instead on wireless networks. Even if the approach is quite general, the stress is on WiFi networks. More specifically, we tried to evaluate how mobile users' experience can be improved. Two tools are considered. In the first place, we wrote a packet-level simulator and used it to estimate the impact of pricing strategies in allocating the bandwidth resource, finding out the need for such solutions. Secondly, we developed a high-level simulator whose results strongly motivate a deeper study of user cooperation in the selection of the "best" access point, when many are available. We also propose one such policy.
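A selection policy of the kind studied above might weigh signal quality against current load rather than simply joining the strongest signal. The sketch below is purely illustrative (the class, field names, and the linear scoring rule are assumptions, not the thesis's policy):

```python
# Hypothetical access-point selection: trade off received signal
# strength against how many clients the AP already serves.
from dataclasses import dataclass

@dataclass
class AccessPoint:
    name: str
    signal_dbm: float   # received signal strength (less negative = better)
    clients: int        # stations currently associated

def score(ap, alpha=1.0):
    """Score an AP: strong signal rewarded, each extra client penalized."""
    return ap.signal_dbm - alpha * ap.clients

def select_ap(aps):
    """Choose the AP with the best signal/load trade-off."""
    return max(aps, key=score)
```

With such a rule, a lightly loaded AP with a moderate signal can beat a congested AP with a stronger one, which is the intuition behind cooperative AP selection.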
Parallel Combining: Benefits of Explicit Synchronization
A parallel batched data structure is designed to process synchronized batches of operations on the data structure using a parallel program. In this paper, we propose parallel combining, a technique that implements a concurrent data structure from a parallel batched one. The idea is that we explicitly synchronize concurrent operations into batches: one of the processes becomes a combiner which collects concurrent requests and initiates a parallel batched algorithm involving the owners (clients) of the collected requests. Intuitively, the cost of synchronizing the concurrent calls can be compensated by running the parallel batched algorithm.
We validate the intuition via two applications. First, we use parallel combining to design a concurrent data structure optimized for read-dominated workloads, taking a dynamic graph data structure as an example. Second, we use a novel parallel batched priority queue to build a concurrent one. In both cases, we obtain performance gains over state-of-the-art algorithms.
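The combining idea above can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: the batch is applied sequentially here for brevity, whereas in parallel combining the combiner launches a parallel batched algorithm involving the request owners (all names are illustrative).

```python
# Sketch: one process becomes the combiner, collects concurrent
# requests into an explicit batch, and serves the whole batch at once.
import threading

class CombiningCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()         # held by the current combiner
        self._pending = []                    # published, not-yet-served requests
        self._pending_lock = threading.Lock()

    def add(self, delta):
        req = {"delta": delta, "done": threading.Event(), "result": None}
        with self._pending_lock:
            self._pending.append(req)         # publish the request
        while not req["done"].is_set():
            if self._lock.acquire(blocking=False):
                try:
                    self._combine()           # I became the combiner
                finally:
                    self._lock.release()
            else:
                req["done"].wait(timeout=0.001)  # someone else is combining
        return req["result"]

    def _combine(self):
        # Collect every concurrent request into one synchronized batch...
        with self._pending_lock:
            batch, self._pending = self._pending, []
        # ...and serve the batch in a single pass over the structure.
        for r in batch:
            self._value += r["delta"]
            r["result"] = self._value
            r["done"].set()
```

The synchronization cost of forming the batch is paid once per batch, which is the intuition the paper validates: it can be amortized by the (parallel) batched algorithm that serves it.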
Transiency-driven Resource Management for Cloud Computing Platforms
Modern distributed server applications are hosted on enterprise or cloud data centers that provide computing, storage, and networking capabilities to these applications. These applications are built on the implicit assumption that the underlying servers will be stable and normally available, barring occasional faults. In many emerging scenarios, however, data centers and clouds only provide transient, rather than continuous, availability of their servers. Transiency in modern distributed systems arises in many contexts, such as green data centers powered by renewable intermittent sources, and cloud platforms that provide lower-cost transient servers which can be unilaterally revoked by the cloud operator.
Transient computing resources are increasingly important, and existing fault-tolerance and resource management techniques are inadequate for transient servers because applications typically assume continuous resource availability. This thesis presents research in distributed systems design that treats transiency as a first-class design principle. I show that combining transiency-specific fault-tolerance mechanisms with resource management policies suited to application characteristics and requirements can yield significant cost and performance benefits. These mechanisms and policies have been implemented and prototyped as part of software systems, which allow a wide range of applications, such as interactive services and distributed data processing, to be deployed on transient servers, and can reduce cloud computing costs by up to 90%.
This thesis makes contributions to four areas of computer systems research: transiency-specific fault-tolerance, resource allocation, abstractions, and resource reclamation. For reducing the impact of transient server revocations, I develop two fault-tolerance techniques that are tailored to transient server characteristics and application requirements. For interactive applications, I build a derivative cloud platform that masks revocations by transparently moving application-state between servers of different types. Similarly, for distributed data processing applications, I investigate the use of application level periodic checkpointing to reduce the performance impact of server revocations. For managing and reducing the risk of server revocations, I investigate the use of server portfolios that allow transient resource allocation to be tailored to application requirements.
Finally, I investigate how resource providers (such as cloud platforms) can provide transient resource availability without revocation, by looking into alternative resource reclamation techniques. I develop resource deflation, wherein a server's resources are fractionally reclaimed, allowing the application to continue execution albeit with fewer resources. Resource deflation generalizes revocation, and the deflation mechanisms and cluster-wide policies can yield both high cluster utilization and low application performance degradation.
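The contrast between revocation and deflation can be made concrete with a toy policy. The proportional rule below is an assumption for illustration only, not the thesis's mechanism (which operates on real VM resources, not a dict of numbers):

```python
# Sketch: fractional reclamation ("deflation") shrinks every
# allocation proportionally, so no application is revoked outright.
def deflate(allocations, reclaim):
    """Free `reclaim` units of capacity by shrinking each allocation
    by the same factor, instead of revoking one server entirely."""
    total = sum(allocations.values())
    if reclaim >= total:
        raise ValueError("cannot reclaim more than is allocated")
    factor = (total - reclaim) / total
    return {vm: alloc * factor for vm, alloc in allocations.items()}
```

For example, reclaiming 8 units from allocations of 8, 8, and 16 shrinks them to 6, 6, and 12: every application keeps running, degraded, rather than one being killed.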
Marshall Space Flight Center Research and Technology Report 2019
Today, our calling to explore is greater than ever before, and here at Marshall Space Flight Center we make human deep space exploration possible. A key goal for Artemis is demonstrating and perfecting capabilities on the Moon for technologies needed for humans to get to Mars. This year's report features 10 of the Agency's 16 Technology Areas, and I am proud of Marshall's role in creating solutions for so many of these daunting technical challenges. Many of these projects will lead to sustainable in-space architecture for human space exploration that will allow us to travel to the Moon, on to Mars, and beyond. Others are developing new scientific instruments capable of providing an unprecedented glimpse into our universe. NASA has led the charge in space exploration for more than six decades, and through the Artemis program we will help build on our work in low Earth orbit and pave the way to the Moon and Mars. At Marshall, we leverage the skills and interest of the international community to conduct scientific research, develop and demonstrate technology, and train international crews to operate farther from Earth for longer periods of time than ever before: first at the lunar surface, then on to our next giant leap, human exploration of Mars. While each project in this report seeks to advance new technology and challenge conventions, it is important to recognize the diversity of activities and people supporting our mission. This report not only showcases the Center's capabilities and our partnerships, it also highlights the progress our people have achieved in the past year. These scientists, researchers and innovators are why Marshall and NASA will continue to be a leader in innovation, exploration, and discovery for years to come.
An O(1) time complexity software barrier
Technical report. As network latency rapidly approaches thousands of processor cycles and multiprocessor systems become larger and larger, the primary factor in determining a barrier algorithm's performance is the number of serialized network latencies it requires. All existing barrier algorithms require at least O(log N) round trip message latencies to perform a single barrier operation on an N-node shared memory multiprocessor. In addition, existing barrier algorithms are not well tuned in terms of how they interact with modern shared memory systems, which leads to an excessive number of message exchanges to signal barrier completion. The contributions of this paper are threefold. First, we identify and quantitatively analyze the performance deficiencies of conventional barrier implementations when they are executed on real (non-idealized) hardware. Second, we propose a queue-based barrier algorithm that has effectively O(1) time complexity as measured in round trip message latencies. Third, by exploiting a hardware write-update (PUT) mechanism for signaling, we demonstrate how carefully matching the barrier implementation to the way that modern shared memory systems operate can improve performance dramatically. The resulting optimized algorithm costs only one round trip message latency to perform a barrier operation across N processors. Using a cycle-accurate execution-driven simulator of a future-generation SGI multiprocessor, we show that the proposed queue-based barrier outperforms conventional barrier implementations based on load-linked/store-conditional instructions by a factor of 5.43 (on 4 processors) to 93.96 (on 256 processors).
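The structure of the queue-based idea can be sketched as follows. This is a hedged illustration, not the paper's code: each processor announces arrival by writing its own private slot (no contended shared counter), and one designated node releases all waiters with a single write, the software analogue of the hardware write-update (PUT) broadcast. Python threads show only the structure, not the latency argument; the sense-reversing flag allows reuse.

```python
# Sketch: per-processor arrival slots plus a single release word.
import time

class QueueBarrier:
    def __init__(self, n):
        self.n = n
        self.arrived = [0] * n   # one private arrival slot per processor
        self.release = 0         # the single word all waiters spin on

    def wait(self, my_id, sense):
        self.arrived[my_id] = sense              # announce arrival privately
        if my_id == 0:
            # The designated node scans the queue of arrival slots...
            while any(a != sense for a in self.arrived):
                time.sleep(0)                    # yield while waiting
            # ...then one single write releases every waiter at once
            # (in hardware, a PUT pushes this update to all spinners).
            self.release = sense
        else:
            while self.release != sense:
                time.sleep(0)
```

Because arrival writes go to distinct locations and release is one write, the critical path is a single round trip rather than O(log N) of them, which is the paper's central claim.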
A Survey on Energy-Efficient Strategies in Static Wireless Sensor Networks
A comprehensive analysis of energy-efficient strategies in static Wireless Sensor Networks (WSNs) that are not equipped with any energy-harvesting modules is conducted in this article. First, a novel generic mathematical definition of Energy Efficiency (EE) is proposed, which simultaneously takes into consideration the acquisition rate of valid data, the total energy consumption, and the network lifetime of WSNs. To the best of our knowledge, this is the first time that the EE of WSNs is mathematically defined. The energy consumption characteristics of each individual sensor node and of the whole network are expounded at length. Accordingly, the concepts concerning EE, namely the Energy-Efficient Means, the Energy-Efficient Tier, and the Energy-Efficient Perspective, are proposed. Subsequently, the relevant energy-efficient strategies proposed from 2002 to 2019 are tracked and reviewed. Specifically, they are classified into five categories: the Energy-Efficient Media Access Control protocol, the Mobile Node Assistance Scheme, the Energy-Efficient Clustering Scheme, the Energy-Efficient Routing Scheme, and the Compressive Sensing-based Scheme. Both the basic principle and the evolution of each category are elaborated in detail. Finally, further analysis of the categories is made and the related conclusions are drawn. To be specific, the interdependence among the categories, and the relationships between each category and the Energy-Efficient Means, the Energy-Efficient Tier, and the Energy-Efficient Perspective, are analyzed in detail. In addition, the specific applicable scenarios for each category and the relevant statistical analysis are detailed; the proportion and the number of citations for each category are illustrated in a statistical chart. Lastly, the existing opportunities and challenges facing WSNs in the context of new computing paradigms, and feasible future directions concerning EE, are pointed out.
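The abstract names the three factors entering the EE definition but does not reproduce the formula itself. Purely as an illustration of how such a composite metric could combine them (this is an assumption, not the survey's actual definition), one plausible shape rewards valid-data rate and lifetime and penalizes energy spent:

```latex
% Illustrative only -- not the survey's definition.
% R_v: acquisition rate of valid data, T_life: network lifetime,
% E_tot: total energy consumption.
\mathrm{EE} \;=\; \frac{R_{v}\cdot T_{\mathrm{life}}}{E_{\mathrm{tot}}}
```

Any metric of this shape increases when the network delivers more valid data or lives longer for the same energy budget, which matches the trade-off the survey describes.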