Classification of building design information
The more widely used classification/coding systems for building elements and components, such as CI/SfB and UDC, were developed to classify documents. A classification/coding system for use with computer-aided design has to be able to convey detailed information about the features and properties of components.
Previous studies of the use of information in the construction industry, in particular the CACCI Reports, have examined the logical structure of design operations and how this influences the structure of a corresponding information system. This thesis also examines the traditional roles of the participants in the design team and demonstrates that these roles modify the ideal structure.
A number of existing classification systems, together with the theory of classification, are analysed to identify the desirable features of a practical classification system.
The CACCI Report proposed the development of a national commodity file. In the section outlining a possible classification system, it is argued that the function of a national commodity file could be replaced by a three-level classification/code, with responsibility for information divided between the manufacturer, the trade sector organisation, and the design team; in each case, responsibility rests with the participant most concerned.
Examples are provided of an individual participant's use of the proposed system and how the system would be used by several participants.
In the absence of a national system, it is suggested that the proposed system would allow teams of designers to proceed with the development of a database for computer-aided design.
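The proposed three-level classification/code could be represented, for illustration only, along the following lines. All names, level labels, and example values here are hypothetical, not the thesis's actual coding scheme:

```python
from dataclasses import dataclass

# Hypothetical three-level component code: each level is maintained by a
# different participant, mirroring the division of responsibility proposed
# above (trade sector organisation / manufacturer / design team).
@dataclass(frozen=True)
class ComponentCode:
    trade_sector: str   # broad element class, maintained by the trade sector body
    manufacturer: str   # product line, maintained by the manufacturer
    design_team: str    # project-specific variant, maintained by the design team

    def __str__(self) -> str:
        return f"{self.trade_sector}.{self.manufacturer}.{self.design_team}"

# Example: a made-up code for an internal door set.
code = ComponentCode(trade_sector="31", manufacturer="ACME-D40", design_team="D-07")
print(code)  # 31.ACME-D40.D-07
```

Splitting the code into participant-owned levels is what lets each participant update only the portion of the information for which they are responsible.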
Overview of Caching Mechanisms to Improve Hadoop Performance
In today's distributed computing environments, large amounts of data are generated from different sources at high velocity, rendering the data difficult to capture, manage, and process within existing relational databases. Hadoop is a tool for storing and processing large datasets in parallel across a cluster of machines in a distributed environment. Hadoop brings many benefits, such as flexibility, scalability, and high fault tolerance; however, it faces challenges in terms of data access time, I/O operations, and duplicate computations, resulting in extra overhead, resource wastage, and poor performance. Many researchers have used caching mechanisms to tackle these challenges, for example by improving data access time, enhancing the data locality rate, removing repetitive calculations, reducing the number of I/O operations, decreasing job execution time, and increasing resource efficiency. In the current study, we provide a comprehensive overview of caching strategies for improving Hadoop performance. Additionally, a novel classification is introduced based on cache utilization. Using this classification, we analyze the impact on Hadoop performance and discuss the advantages and disadvantages of each group. Finally, a novel hybrid approach called Hybrid Intelligent Cache (HIC), which combines the benefits of two methods from different groups, H-SVM-LRU and CLQLMRS, is presented. Experimental results show that our hybrid method achieves an average improvement of 31.2% in job execution time.
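As a point of reference for the replacement policies surveyed, the LRU baseline that several of these approaches extend can be sketched in a few lines. This is a generic illustration, not any of the surveyed implementations:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")            # "a" becomes most recently used
cache.put("c", 3)         # capacity exceeded: "b" is evicted
print(cache.get("b"))     # None
print(cache.get("a"))     # 1
```

The surveyed strategies differ mainly in how they replace or augment the eviction decision in `put`, e.g. with learned predictions of future reuse.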
SEH: Size Estimate Hedging for Single-Server Queues
For a single-server system, Shortest Remaining Processing Time (SRPT) is an optimal size-based policy. In this paper, we discuss scheduling a single-server system when exact information about the jobs' processing times is not available. When the SRPT policy uses estimated processing times, the underestimation of large jobs can significantly degrade performance. We propose a simple heuristic, Size Estimate Hedging (SEH), that uses only jobs' estimated processing times for scheduling decisions. A job's priority is increased dynamically according to an SRPT rule until it is determined to be underestimated, at which time its priority is frozen. Numerical results suggest that SEH performs well when estimation errors are not unreasonably large.
Linearized Data Center Workload and Cooling Management
With the current high levels of energy consumption in data centers, reducing power consumption by even a small percentage is beneficial. We propose a framework for thermal-aware workload distribution in a data center to reduce cooling power consumption. The framework includes linearizing the general optimization problem and proposing a heuristic to approximate the solution of the resulting Integer Linear Programming (ILP) problems. We first define a general nonlinear power optimization problem that includes several cooling parameters, heat recirculation effects, and constraints on server temperatures. We then study a linearized version of the problem, which is easier to analyze. As an energy-saving scenario and as a proof of concept for our approach, we also consider the possibility that the red-line temperature for idle servers is higher than that for busy servers. For the resulting ILP problem, we propose a heuristic for intelligent rounding of the fractional solution. Through numerical simulations, we compare our heuristics with two baseline algorithms, and we evaluate the performance of the solution of the linearized system on the original system. The results show that the proposed approach can reduce cooling power consumption by more than 30 percent compared to the case of continuous utilizations and a single red-line temperature.
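The general shape of rounding a fractional (LP-relaxation) workload assignment can be illustrated as follows. This is a deliberately simple stand-in, not the paper's heuristic: it floors the fractional solution and hands the leftover tasks to servers with the lowest assumed heat-recirculation coefficient, preserving the total workload:

```python
import math

def round_assignment(fractional, heat_coeff):
    """Round a fractional per-server task assignment to integers while
    preserving the total number of tasks; leftover tasks go to the servers
    with the smallest (hypothetical) heat-recirculation coefficients."""
    total = round(sum(fractional))
    floors = [math.floor(x) for x in fractional]
    leftover = total - sum(floors)
    # hand out the leftover tasks to the "coolest" servers first
    order = sorted(range(len(fractional)), key=lambda i: heat_coeff[i])
    for i in order[:leftover]:
        floors[i] += 1
    return floors

frac = [2.4, 1.7, 0.9]   # fractional LP solution (5 tasks in total)
heat = [0.3, 0.1, 0.5]   # hypothetical recirculation coefficients
print(round_assignment(frac, heat))  # [3, 2, 0]
```

An "intelligent" rounding heuristic of the kind the abstract describes would additionally re-check the temperature constraints after each incremental assignment.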
Inactive or moderately active human promoters are enriched for inter-individual epialleles
Hadoop-Oriented SVM-LRU (H-SVM-LRU): An Intelligent Cache Replacement Algorithm to Improve MapReduce Performance
Modern applications can generate large amounts of data from different sources at high velocity, a combination that is difficult to store and process with traditional tools. Hadoop is a framework for the parallel processing of large amounts of data in a distributed environment; however, various challenges can lead to poor performance. Two particular issues that can limit performance are the high access time of I/O operations and the recomputation of intermediate data; in combination, these issues can result in resource wastage. In recent years, there have been attempts to overcome these problems by using caching mechanisms. Because cache space is limited, it is crucial to use this space efficiently and to avoid cache pollution, i.e., retaining data in the cache that will not be used in the future. We propose Hadoop-oriented SVM-LRU (H-SVM-LRU) to improve Hadoop performance. For this purpose, we use an intelligent cache replacement algorithm, SVM-LRU, that combines the well-known LRU mechanism with a machine learning algorithm, SVM, to classify cached data into two groups based on their future usage. Experimental results show a significant decrease in execution time as a result of an increased cache hit ratio, leading to a positive impact on Hadoop performance.
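The two-group idea can be sketched as an eviction policy that consults a pluggable reuse predictor before falling back to LRU order. The predictor here is a plain function standing in for the trained SVM; this is an illustration of the policy shape, not the H-SVM-LRU implementation:

```python
from collections import OrderedDict

class ClassAwareLRU:
    """Sketch of an SVM-LRU-style policy: a pluggable classifier predicts
    whether a cached item will be reused; items predicted 'no reuse' are
    evicted first, with plain LRU order as the fallback."""
    def __init__(self, capacity, will_be_reused):
        self.capacity = capacity
        self.will_be_reused = will_be_reused  # stand-in for the trained SVM
        self.data = OrderedDict()             # insertion order = LRU order

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            # prefer a predicted-unused victim; otherwise evict the LRU entry
            victim = next((k for k in self.data if not self.will_be_reused(k)),
                          next(iter(self.data)))
            del self.data[victim]

cache = ClassAwareLRU(2, will_be_reused=lambda k: k != "tmp")
cache.put("a", 1); cache.put("tmp", 2); cache.put("b", 3)
print(list(cache.data))  # ['a', 'b'] -- "tmp" evicted despite being recent
```

Evicting predicted-unused items first is precisely what counters cache pollution: recency alone would have kept "tmp" and evicted "a".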
Maximizing throughput in zero-buffer tandem lines with dedicated and flexible servers
For tandem queues with no buffer spaces and both dedicated and flexible servers, we study how flexible servers should be assigned to maximize throughput. When there is one flexible server and two stations, each with a dedicated server, we completely characterize the optimal policy. We use the insights gained from applying the Policy Iteration algorithm to systems with three, four, and five stations to devise heuristics for systems of arbitrary size. These heuristics are verified by numerical analysis. We also discuss the throughput improvement when, for a given server assignment, dedicated servers are changed to flexible servers.
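The baseline against which such improvements are measured can be illustrated with a minimal simulation of a saturated two-station zero-buffer tandem line with only dedicated exponential servers (no flexible server). The recursion below is a standard blocking-after-service construction, not the paper's model:

```python
import random

def tandem_throughput(n_jobs, rate1, rate2, seed=0):
    """Simulate a saturated two-station zero-buffer tandem line with one
    dedicated exponential server per station; a job finishing at station 1
    blocks it until station 2 is free.  Returns the long-run throughput."""
    rng = random.Random(seed)
    f2_prev = 0.0   # departure time of the previous job from station 2
    start1 = 0.0    # time the current job enters station 1
    for _ in range(n_jobs):
        f1 = start1 + rng.expovariate(rate1)  # finish service at station 1
        start2 = max(f1, f2_prev)             # wait (block) while station 2 busy
        f2_prev = start2 + rng.expovariate(rate2)
        start1 = start2                       # station 1 freed when the job moves on
    return n_jobs / f2_prev

print(round(tandem_throughput(100_000, 1.0, 1.0), 2))  # close to 2/3 for equal rates
```

With equal unit rates the simulated throughput approaches 2/3, the known value for this saturated two-station exponential line; a flexible server raises throughput by reducing the time a station sits blocked or starved.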