38 research outputs found

    Space-Efficient Predictive Block Management

    Get PDF
    With growing disk and storage capacities, tracking all blocks in a system requires a daunting amount of metadata. In previous work, we demonstrated a system software effort in the area of predictive data grouping for reducing power and latency on hard disks. The structures used, very similar to prior efforts in prefetching and prefetch caching, track access successor information at the block level, keeping a fixed number of immediate successors per block. While these structures provide powerful predictive expansion capabilities and require less metadata than many previous strategies, the question of how much data is actually required remains a growing concern. In this paper, we present a novel method of storing equivalent information, SESH, a Space Efficient Storage of Heredity. This method exploits the high degree of block-level predictability observed in a number of workload trace sets to reduce the overall metadata storage by up to 99% without any loss of information. As a result, we are able to provide a predictive tool that is adaptive, accurate, and robust in the face of workload noise, for a tiny fraction of the metadata cost previously anticipated; in some cases, the required size drops from 12 gigabytes to less than 150 megabytes.
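
    As a rough illustration of the per-block successor metadata the abstract describes, the sketch below keeps a fixed number of immediate successors for each block; the class name, eviction policy, and parameters are illustrative assumptions, not the SESH encoding itself.

```python
from collections import OrderedDict

class SuccessorTable:
    """Per-block successor metadata: for each block, remember up to
    `max_successors` of the blocks most recently observed right after it.
    (Illustrative sketch only; SESH compresses this information further.)"""

    def __init__(self, max_successors=4):
        self.max_successors = max_successors
        self.table = {}          # block id -> OrderedDict of successor ids
        self.last_block = None   # the previously accessed block

    def record_access(self, block):
        if self.last_block is not None:
            succ = self.table.setdefault(self.last_block, OrderedDict())
            succ.pop(block, None)         # refresh recency if already known
            succ[block] = True
            if len(succ) > self.max_successors:
                succ.popitem(last=False)  # evict the oldest successor
        self.last_block = block

    def predict(self, block):
        """Blocks observed immediately after `block`, oldest first."""
        return list(self.table.get(block, ()))

# Example: feed a short access trace, then query a prediction.
t = SuccessorTable(max_successors=2)
for b in [10, 11, 12, 10, 11, 13]:
    t.record_access(b)
print(t.predict(11))   # -> [12, 13]
```

    Predictive expansion would then amount to following these successor lists transitively outward from a recently accessed block.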

    BADANIA REDUKCJI OPÓŹNIEŃ SERWERA WWW (Research on Reducing Web Server Latency)

    Get PDF
    This paper investigates the characteristics of web server response delay in order to understand and analyze optimization techniques for reducing latency. The latency behavior of a multi-process Apache HTTP server was analyzed for different thread counts and various workloads. The results indicate that an insufficient number of threads handling the concurrent requests of clients is responsible for increased latency under various loads. The problem can be solved by modifying the web server configuration, which reduces the response time.
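
    To illustrate the effect the paper describes (too few worker threads inflating response times under concurrent load), here is a small, self-contained Python sketch that uses a thread pool as a stand-in for Apache's worker threads; the request cost and thread counts are arbitrary illustration values, not measurements from the paper.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(start):
    """Stand-in for serving one HTTP request: 50 ms of simulated work."""
    time.sleep(0.05)
    return time.perf_counter() - start   # response time incl. queueing delay

def mean_response_time(workers, concurrent_requests=64):
    """Mean response time for a burst of requests served by `workers` threads."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        times = list(pool.map(handle_request, [start] * concurrent_requests))
    return sum(times) / len(times)

for workers in (4, 16, 64):
    ms = mean_response_time(workers) * 1000
    print(f"{workers:3d} worker threads -> mean response time {ms:6.1f} ms")
```

    With 64 concurrent requests and only 4 workers, most requests spend far longer waiting for a free thread than being served, which mirrors the latency growth reported for an under-provisioned server configuration.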

    ELDC: An Artificial Neural Network Based Energy-Efficient and Robust Routing Scheme for Pollution Monitoring in WSNs

    Full text link
    The range of applications of Wireless Sensor Networks (WSNs) is increasing continuously despite the serious constraints on sensor nodes' resources such as storage, processing capacity, communication range, and energy. The main issues in WSNs are energy consumption and the delay in relaying data to the sink node. This becomes extremely important when deploying a large number of nodes, as in the case of industrial pollution monitoring. We propose an artificial neural network based energy-efficient and robust routing scheme for WSNs called ELDC. In this technique, the network is trained on a huge data set containing almost all scenarios to make it more reliable and adaptive to the environment. Additionally, it uses a group-based methodology to increase the lifespan of the overall network, where groups may have different sizes. An artificial neural network provides efficient threshold values for the selection of a group's CN and cluster head based on the backpropagation technique, and allows intelligent, efficient, and robust group organization. Thus, our proposed technique is highly energy-efficient and capable of increasing sensor nodes' lifetime. Simulation results show that it outperforms the LEACH protocol by 42 percent, and other state-of-the-art protocols by more than 30 percent. Mehmood, A.; Lv, Z.; Lloret, J.; Umar, M.M. (2020). ELDC: An Artificial Neural Network Based Energy-Efficient and Robust Routing Scheme for Pollution Monitoring in WSNs. IEEE Transactions on Emerging Topics in Computing, 8(1):106-114. https://doi.org/10.1109/TETC.2017.2671847
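
    The abstract does not give the network architecture, so the following is only a hypothetical sketch of the general idea: a tiny feed-forward network (whose weights would, in ELDC, come from offline backpropagation training) scores each node on features such as residual energy, distance to the sink, and neighbour count, and the highest-scoring node in a group is elected cluster head. The feature set, layer sizes, and random weights are illustrative assumptions, not the ELDC design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights; in ELDC these would be learned offline with backpropagation.
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def score(features):
    """Forward pass of a tiny MLP mapping node features to a suitability score."""
    h = np.tanh(features @ W1 + b1)
    z = h @ W2 + b2
    return float(1.0 / (1.0 + np.exp(-z[0])))

def elect_cluster_head(group):
    """Return the id of the node whose feature vector scores highest."""
    return max(group, key=lambda node_id: score(group[node_id]))

# Example group: normalized residual energy, distance to sink, neighbour count.
group = {
    "n1": np.array([0.80, 0.60, 0.50]),
    "n2": np.array([0.30, 0.20, 0.90]),
    "n3": np.array([0.95, 0.85, 0.40]),
}
print("elected cluster head:", elect_cluster_head(group))
```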

    Air Force Institute of Technology Research Report 2009

    Get PDF
    This report summarizes the research activities of the Air Force Institute of Technology’s Graduate School of Engineering and Management. It describes research interests and faculty expertise; lists student theses/dissertations; identifies research sponsors and contributions; and outlines the procedures for contacting the school. Included in the report are: faculty publications, conference presentations, consultations, and funded research projects. Research was conducted in the areas of Aeronautical and Astronautical Engineering, Electrical Engineering and Electro-Optics, Computer Engineering and Computer Science, Systems and Engineering Management, Operational Sciences, Mathematics, Statistics and Engineering Physics.

    EFFECTIVE GROUPING FOR ENERGY AND PERFORMANCE: CONSTRUCTION OF ADAPTIVE, SUSTAINABLE, AND MAINTAINABLE DATA STORAGE

    Get PDF
    The performance gap between processors and storage systems has been increasingly critical over the years. Yet the performance disparity remains, and further, storage energy consumption is rapidly becoming a new critical problem. While smarter caching and predictive techniques do much to alleviate this disparity, the problem persists, and data storage remains a growing contributor to latency and energy consumption. Attempts have been made at data layout maintenance, or intelligent physical placement of data, yet in practice, basic heuristics remain predominant. Problems that early studies sought to solve via layout strategies were proven to be NP-Hard, and data layout maintenance today remains more art than science. With unknown potential and a domain inherently full of uncertainty, layout maintenance persists as an area largely untapped by modern systems. But uncertainty in workloads does not imply randomness; access patterns have exhibited repeatable, stable behavior. Predictive information can be gathered, analyzed, and exploited to improve data layouts. Our goal is a dynamic, robust, sustainable predictive engine, aimed at improving existing layouts by replicating data at the storage device level. We present a comprehensive discussion of the design and construction of such a predictive engine, including workload evaluation, where we present and evaluate classical workloads as well as our own highly detailed traces collected over an extended period. We demonstrate significant gains through an initial static grouping mechanism, and compare against an optimal grouping method of our own construction, and further show significant improvement over competing techniques. We also explore and illustrate the challenges faced when moving from static to dynamic (i.e., online) grouping, and provide motivation and solutions for addressing these challenges. These challenges include metadata storage, appropriate predictive collocation, online performance, and physical placement. We reduced the metadata needed by several orders of magnitude, reducing the required volume from more than 14% of total storage down to less than 12%. We also demonstrate how our collocation strategies outperform competing techniques. Finally, we present our complete model and evaluate a prototype implementation against real hardware. This model was demonstrated to be capable of reducing device-level accesses by up to 65%.
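
    As a minimal sketch of what trace-driven static grouping can look like, the code below counts how often pairs of blocks are accessed close together in a trace and then greedily collocates frequently co-accessed blocks into bounded-size groups; the window size, group size, and greedy heuristic are illustrative assumptions, not the grouping algorithm evaluated in this work.

```python
from collections import Counter

def co_access_counts(trace, window=8):
    """Count how often each pair of blocks appears within `window` accesses."""
    counts = Counter()
    for i in range(len(trace)):
        for j in range(i + 1, min(i + window, len(trace))):
            a, b = trace[i], trace[j]
            if a != b:
                counts[tuple(sorted((a, b)))] += 1
    return counts

def greedy_groups(trace, group_size=4, window=8):
    """Greedily merge the most frequently co-accessed blocks into groups."""
    counts = co_access_counts(trace, window)
    group_of = {}   # block -> group id
    groups = {}     # group id -> set of blocks
    next_id = 0
    for (a, b), _ in counts.most_common():
        for blk in (a, b):
            if blk not in group_of:
                group_of[blk] = next_id
                groups[next_id] = {blk}
                next_id += 1
        ga, gb = group_of[a], group_of[b]
        if ga != gb and len(groups[ga]) + len(groups[gb]) <= group_size:
            for blk in groups[gb]:       # merge group gb into ga
                group_of[blk] = ga
            groups[ga] |= groups.pop(gb)
    return list(groups.values())

trace = [1, 2, 3, 1, 2, 4, 9, 9, 3, 1, 2, 4]
print(greedy_groups(trace, group_size=3))
```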

    Optimal Allocation of Interconnecting Links in Cyber-Physical Systems: Interdependence, Cascading Failures and Robustness

    Full text link
    We consider a cyber-physical system consisting of two interacting networks, i.e., a cyber-network overlaying a physical-network. It is envisioned that these systems are more vulnerable to attacks, since node failures in one network may result (due to the interdependence) in failures in the other network, causing a cascade of failures that would potentially lead to the collapse of the entire infrastructure. The robustness of interdependent systems against this sort of catastrophic failure hinges heavily on the allocation of the (interconnecting) links that connect nodes in one network to nodes in the other network. In this paper, we characterize the optimum inter-link allocation strategy against random attacks in the case where the topology of each individual network is unknown. In particular, we analyze the "regular" allocation strategy that allots exactly the same number of bi-directional inter-network links to all nodes in the system. We show, both analytically and experimentally, that this strategy yields better performance (from a network resilience perspective) than all possible strategies, including strategies using random allocation, unidirectional inter-links, etc. Comment: 13 pages, 6 figures. To appear in the Special Issue of IEEE Transactions on Parallel and Distributed Systems on Cyber-Physical Systems, 201
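
    The sketch below is a heavily simplified illustration of why inter-link allocation matters, not the paper's model: it ignores intra-network topology and giant-component requirements, allocates k bidirectional inter-links per node either regularly or at random, removes a random fraction of nodes in one network, and iterates a mutual-support rule until the cascade stops. All parameters are arbitrary.

```python
import random

def allocate(n, k, regular, seed=0):
    """Build bidirectional inter-links between networks A and B (n nodes each).
    Returns supports_a[i] = B-nodes linked to A-node i, and supports_b likewise."""
    rnd = random.Random(seed)
    supports_a = {i: set() for i in range(n)}
    supports_b = {i: set() for i in range(n)}
    if regular:
        # "regular" allocation: every node gets exactly k inter-links (round-robin).
        for i in range(n):
            for j in range(k):
                partner = (i + j) % n
                supports_a[i].add(partner)
                supports_b[partner].add(i)
    else:
        # random allocation: same total number of links, endpoints chosen uniformly.
        for _ in range(n * k):
            i, partner = rnd.randrange(n), rnd.randrange(n)
            supports_a[i].add(partner)
            supports_b[partner].add(i)
    return supports_a, supports_b

def surviving_fraction(n, k, fail_frac, regular, seed=0):
    """Randomly fail `fail_frac` of A-nodes, then cascade: a node stays up only
    if at least one of its inter-linked supports in the other network is up."""
    rnd = random.Random(seed + 1)
    supports_a, supports_b = allocate(n, k, regular, seed)
    alive_a = {i for i in range(n) if rnd.random() > fail_frac}
    alive_b = set(range(n))
    while True:
        new_b = {i for i in alive_b if supports_b[i] & alive_a}
        new_a = {i for i in alive_a if supports_a[i] & new_b}
        if new_a == alive_a and new_b == alive_b:
            break
        alive_a, alive_b = new_a, new_b
    return (len(alive_a) + len(alive_b)) / (2 * n)

for regular in (True, False):
    frac = surviving_fraction(n=4000, k=2, fail_frac=0.5, regular=regular)
    print("regular" if regular else "random ", round(frac, 3))
```

    Even in this toy setting, random allocation leaves some nodes with no inter-links at all, so more of the system collapses than under the regular allocation, in line with the paper's conclusion.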

    Requirements of an Integrated Formal Method for Intelligent Swarms

    Get PDF
    NASA is investigating new paradigms for future space exploration, heavily focused on the (still) emerging technologies of autonomous and autonomic systems [47, 48, 49]. Missions that rely on multiple, smaller, collaborating spacecraft, analogous to swarms in nature, are being investigated to supplement and complement traditional missions that rely on one large spacecraft [16]. The small spacecraft in such missions would each be able to operate on their own to accomplish a part of a mission, but would need to interact and exchange information with the other spacecraft to successfully execute the mission.

    Asymptotically Optimal Size-Interval Task Assignments

    Get PDF
    Size-based routing provides robust strategies to improve the performance of computer and communication systems with highly variable workloads because it is able to isolate small jobs from large ones in a static manner. The basic idea is that each server is assigned all jobs whose sizes belong to a distinct and continuous interval. In the literature, dispatching rules of this type are referred to as SITA (Size Interval Task Assignment) policies. Despite their evident benefits, the problem of finding a SITA policy that minimizes the overall mean (steady-state) waiting time is known to be intractable. In particular, it is not clear when it is preferable to balance or unbalance server loads and, in the latter case, how. In this paper, we provide an answer to these questions in the celebrated limiting regime where the system capacity grows linearly with the system demand to infinity. Within this framework, we prove that the minimum mean waiting time achievable by a SITA policy necessarily converges to the mean waiting time achieved by SITA-E, the SITA policy that equalizes server loads, provided that servers are homogeneous. However, within the set of SITA policies, we also show that SITA-E can perform arbitrarily badly if servers are heterogeneous. In this case we prove that there exist exactly C! asymptotically optimal policies, where C denotes the number of server types, and all of them are linked to the solution of a single strictly convex optimization problem. It turns out that the mean waiting time achieved by any such asymptotically optimal policy does not depend on how job-size intervals are mapped to servers. Our theoretical results are validated by numerical simulations with realistic parameters and suggest that the above insights are also accurate in small systems composed of a few servers, i.e., ten.
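
    As a concrete sketch of the load-equalizing idea behind SITA-E (for homogeneous servers), the code below takes a sample of job sizes and picks interval boundaries so that each of the C size intervals carries roughly the same total work; the empirical-quantile construction and the Pareto example are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def sita_e_cutoffs(job_sizes, num_servers):
    """Split the job-size axis into `num_servers` consecutive intervals that
    each carry (approximately) the same total work, i.e. equalized loads."""
    sizes = np.sort(np.asarray(job_sizes, dtype=float))
    load = np.cumsum(sizes)                      # cumulative work by size rank
    targets = load[-1] * np.arange(1, num_servers) / num_servers
    idx = np.searchsorted(load, targets)         # first job pushing past each target
    return sizes[idx]                            # interior cutoffs x_1 < ... < x_{C-1}

# Example with heavy-tailed (Pareto-like) job sizes and C = 4 servers.
rng = np.random.default_rng(42)
jobs = rng.pareto(1.5, size=200_000) + 1.0       # shape 1.5 => highly variable sizes
cuts = sita_e_cutoffs(jobs, num_servers=4)
print("cutoffs:", np.round(cuts, 2))

# Check: each interval should carry roughly 1/C = 25% of the total work.
edges = np.concatenate(([0.0], cuts, [np.inf]))
total = jobs.sum()
for lo, hi in zip(edges[:-1], edges[1:]):
    share = jobs[(jobs >= lo) & (jobs < hi)].sum() / total
    print(f"[{lo:8.2f}, {hi:8.2f})  load share = {share:.3f}")
```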

    Energy Efficiency Adaptation for Multihop Routing in Wireless Sensor Networks

    Get PDF