Runtime Scheduling, Allocation, and Execution of Real-Time Hardware Tasks onto Xilinx FPGAs Subject to Fault Occurrence
This paper describes a novel way to exploit the computation capabilities delivered by modern Field-Programmable Gate Arrays (FPGAs), not only towards higher performance but also towards improved reliability. Computation-specific pieces of circuitry are dynamically scheduled and allocated to different resources on the chip by a set of novel algorithms, which are described in detail in this article. These algorithms consider most of the technological constraints present in modern partially reconfigurable FPGAs, as well as spontaneously occurring faults and emerging permanent damage in the silicon substrate of the chip. In addition, the algorithms address other important aspects such as communication and synchronization among the different computations, whether carried out concurrently or at different times. The effectiveness of the proposed algorithms is tested by means of a wide range of synthetic simulations, and, notably, a proof-of-concept implementation on real FPGA hardware is outlined.
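The abstract does not reproduce the algorithms themselves, but the core idea of fault-aware runtime placement can be illustrated with a deliberately simplified model. Everything below (the `Region` and `Scheduler` classes and the first-fit policy) is an invented sketch, not the authors' method:

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """One partially reconfigurable region of the FPGA (illustrative model)."""
    name: str
    slices: int              # resource capacity of the region
    faulty: bool = False     # set when permanent damage is detected
    task: str | None = None  # task currently placed here, if any

@dataclass
class Scheduler:
    regions: list[Region] = field(default_factory=list)

    def mark_faulty(self, name: str) -> str | None:
        """Flag a region as permanently damaged; return any displaced task."""
        for r in self.regions:
            if r.name == name:
                r.faulty = True
                displaced, r.task = r.task, None
                return displaced
        return None

    def place(self, task: str, slices_needed: int) -> Region | None:
        """First-fit placement that skips faulty and occupied regions."""
        for r in self.regions:
            if not r.faulty and r.task is None and r.slices >= slices_needed:
                r.task = task
                return r
        return None  # caller must queue or reject the task

# A task displaced by a fault is simply re-placed on a healthy region.
sched = Scheduler([Region("PR0", 400), Region("PR1", 600)])
sched.place("fir_filter", 350)      # lands on PR0
victim = sched.mark_faulty("PR0")   # PR0 develops a permanent fault
sched.place(victim, 350)            # "fir_filter" relocated to PR1
```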
A Survey on Cache Management Mechanisms for Real-Time Embedded Systems
© ACM, 2015. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Computing Surveys 48, 2 (November 2015), http://doi.acm.org/10.1145/2830555

Multicore processors are being used extensively in real-time systems, mainly because of the demand for increased computing power. However, multicore processors have shared resources that affect the predictability of real-time systems, and predictability is key to correctly estimating the worst-case execution time of tasks. One of the main sources of unpredictability in a multicore processor is the cache memory hierarchy. Recently, many research works have proposed different techniques to deal with caches in multicore processors in the context of real-time systems. Nevertheless, a review and categorization of these techniques is still an open topic and would be very useful for the real-time community. In this article, we present a survey of cache management techniques for real-time embedded systems, from the first studies in the field in 1990 up to the latest research published in 2014. We categorize the main research works and provide a detailed comparison in terms of similarities and differences. We also identify key challenges and discuss future research directions.
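The survey itself is the contribution, but one classic technique in this space, cache partitioning by page coloring, is compact enough to illustrate. The sketch below is not taken from the article, and the cache parameters are assumed for the example:

```python
# Page coloring: physical pages that map to disjoint groups of cache sets
# get different "colors". Giving each real-time task pages of its own
# color partitions a shared cache in software, with no hardware support.

PAGE_SIZE = 4096            # bytes (assumed)
CACHE_SIZE = 2 * 1024**2    # 2 MiB shared last-level cache (assumed)
ASSOCIATIVITY = 16          # 16-way set-associative (assumed)

# Number of distinct colors = (cache size per way) / page size.
NUM_COLORS = CACHE_SIZE // (ASSOCIATIVITY * PAGE_SIZE)   # -> 32 here

def page_color(phys_addr: int) -> int:
    """Color of the physical page containing phys_addr."""
    page_frame = phys_addr // PAGE_SIZE
    return page_frame % NUM_COLORS

def allocate_page(free_frames: list[int], color: int) -> int:
    """Hand a task a free physical page frame of the requested color."""
    for i, frame in enumerate(free_frames):
        if frame % NUM_COLORS == color:
            return free_frames.pop(i)
    raise MemoryError(f"no free page of color {color}")
```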
Software Coherence in Multiprocessor Memory Systems
Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to keep threads and the memory they access as near one another as possible. Typically, this involves putting memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. Coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software-maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. The analysis finds that in properly built systems, software-maintained coherence can perform comparably to, or even better than, hardware-maintained coherence. The architectural features necessary for software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.
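The mechanics of software-maintained coherence are easier to picture with a toy example. The following single-writer/multiple-reader directory is an invented illustration (all names and structure are assumptions); a real system would drive these handlers from page-protection traps, which is why the abstract calls out a small page size and a fast trap mechanism:

```python
from enum import Enum, auto

class State(Enum):
    INVALID = auto()
    SHARED = auto()     # one or more read-only copies exist
    EXCLUSIVE = auto()  # exactly one writable copy exists

class SoftwareDirectory:
    """Toy per-page directory for software-maintained coherence.
    On real hardware, read/write faults on protected pages would
    trap into handlers that call these methods."""

    def __init__(self):
        self.state: dict[int, State] = {}
        self.holders: dict[int, set[int]] = {}

    def read_fault(self, page: int, cpu: int) -> None:
        # Downgrade any exclusive holder to shared, then add a reader.
        if self.state.get(page) == State.EXCLUSIVE:
            self._downgrade(page)
        self.state[page] = State.SHARED
        self.holders.setdefault(page, set()).add(cpu)

    def write_fault(self, page: int, cpu: int) -> None:
        # Invalidate all other copies before granting write access.
        for other in self.holders.get(page, set()) - {cpu}:
            self._invalidate(page, other)
        self.state[page] = State.EXCLUSIVE
        self.holders[page] = {cpu}

    def _downgrade(self, page: int) -> None:
        pass  # write back dirty data, mark the holder's mapping read-only

    def _invalidate(self, page: int, cpu: int) -> None:
        pass  # unmap the page on that cpu so its next access traps
```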
Study and optimization of the memory management in Memcached
Over the years the Internet has become more popular than ever, and web applications like Facebook and Twitter are gaining more users. This results in users generating more and more data, which has to be managed efficiently, because access speed is an important factor nowadays: a user will wait no more than three seconds for a web page to load before abandoning the site. In-memory key-value stores like Memcached and Redis are used to speed up web applications by speeding up access to the data, decreasing the number of accesses to the slower data storage. The first deployment of Memcached, on LiveJournal's website, showed that by using 28 instances of Memcached on ten unique hosts, caching the most popular 30 GB of data, a hit rate of around 92% can be achieved, reducing the number of accesses to the database and reducing the response time considerably.
Not all objects in a cache take the same time to recompute, so this research studies and presents a new cost-aware memory management scheme that is easy to integrate into a key-value store; the approach is implemented in Memcached. The new memory management and cache give priority to key-value pairs that take longer to recompute. Instead of replacing Memcached's replacement structure and its policy, we simply add a new segment to each structure that is capable of storing the more costly key-value pairs. Apart from this new segment in each replacement structure, we created a new dynamic cost-aware rebalancing policy in Memcached, which gives more memory to storing the more costly key-value pairs.
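The abstract does not spell out the data structures, but the idea of a protected segment for costly items next to an ordinary LRU queue can be sketched as follows; the `cost_threshold` parameter and the eviction order are assumptions of this illustration, not details of the Memcached implementation:

```python
from collections import OrderedDict

class CostAwareCache:
    """Two-segment cache: an ordinary LRU queue plus a protected segment
    for key-value pairs that are expensive to recompute (a sketch of the
    idea described above, not Memcached's actual slab/LRU code)."""

    def __init__(self, lru_capacity: int, costly_capacity: int,
                 cost_threshold: float):
        self.lru = OrderedDict()     # key -> (value, cost); evicted first
        self.costly = OrderedDict()  # key -> (value, cost); protected
        self.lru_capacity = lru_capacity
        self.costly_capacity = costly_capacity
        self.cost_threshold = cost_threshold

    def get(self, key):
        for seg in (self.lru, self.costly):
            if key in seg:
                seg.move_to_end(key)  # refresh recency on a hit
                return seg[key][0]
        return None

    def put(self, key, value, cost: float):
        # Costly items go to the protected segment, everything else to LRU.
        seg = self.costly if cost >= self.cost_threshold else self.lru
        seg[key] = (value, cost)
        # Each segment evicts only its own least-recently-used entries,
        # so cheap items can never push out costly ones.
        while len(self.lru) > self.lru_capacity:
            self.lru.popitem(last=False)
        while len(self.costly) > self.costly_capacity:
            self.costly.popitem(last=False)

    def rebalance(self, shift: int):
        """Crude analogue of the dynamic rebalancing policy described
        above: move capacity toward the costly segment."""
        self.costly_capacity += shift
        self.lru_capacity = max(0, self.lru_capacity - shift)
```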
With the implementation of our approaches, we offer a prototype that can be used to study the effect of recomputation cost on caching system performance. In addition, in certain scenarios we were able to improve the user's access latency and the total recomputation cost of the key-value pairs stored in the system.
Efficient runtime placement management for high performance and reliability in COTS FPGAs
Designing high-performance, fault-tolerant multisensory electronic systems for hostile environments such as nuclear plants and outer space within the constraints of cost, power and flexibility is challenging. Issues such as ionizing radiation, extreme temperature and ageing can lead to faults in the electronics of these systems. In addition, the remote nature of these environments demands a level of flexibility and autonomy in their operation. The standard practice of using specially hardened electronic devices for such systems is not only very expensive but also offers limited flexibility.
This thesis proposes novel techniques that promote the use of Commercial Off-The-Shelf (COTS) reconfigurable devices to meet the challenges of high-performance systems for hostile environments. Reconfigurable hardware such as Field-Programmable Gate Arrays (FPGAs) offers a unique combination of flexibility and high performance. The flexibility offered through features such as dynamic partial reconfiguration (DPR) can be harnessed not only to achieve cost-effective designs, as a smaller area can be used to execute multiple tasks, but also to improve the reliability of a system, as a circuit in one portion of the device can be physically relocated to another portion when a fault occurs. However, to harness this potential for high performance and reliability in a cost-effective manner, novel runtime management tools are required. Most runtime support tools for reconfigurable devices are based on ideal models which do not adequately consider the limitations of realistic FPGAs, in particular modern FPGAs, which are increasingly heterogeneous. Specifically, these tools lack efficient mechanisms for ensuring, in a reliable manner, a high utilization of FPGA resources, including the FPGA area, the configuration port, and the clocking resources.
To ensure high utilization of reconfigurable device area, placement management is a key aspect of these tools. This thesis presents novel techniques for the management of hardware task placement on COTS reconfigurable devices for high performance and reliability. To this end, it addresses design-time issues that affect efficient hardware task placement, with a focus on reliability. It also presents techniques to maximize the utilization of the FPGA area at runtime, including techniques to minimize fragmentation. Fragmentation creates unusable areas as a result of the dynamic placement of tasks and the heterogeneity of the resources on the chip. Moreover, this thesis also presents an efficient task reuse mechanism to improve the availability of the internal configuration infrastructure of the FPGA for critical responsibilities like error mitigation. The task reuse scheme, unlike previous approaches, also improves the utilization of the chip area by offering defragmentation.
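The abstract does not define its fragmentation measure, so as a purely illustrative stand-in, here is a toy metric on a one-dimensional model of FPGA columns showing why scattered free space is less usable than contiguous free space:

```python
def fragmentation(columns: list[int]) -> float:
    """Toy fragmentation metric on a 1-D model of FPGA columns
    (1 = occupied, 0 = free). 0.0 means all free space is one
    contiguous run; values near 1.0 mean it is scattered into
    slivers too small to host a task."""
    free_runs, run = [], 0
    for occupied in columns:
        if occupied:
            if run:
                free_runs.append(run)
            run = 0
        else:
            run += 1
    if run:
        free_runs.append(run)
    total_free = sum(free_runs)
    if total_free == 0:
        return 0.0
    return 1.0 - max(free_runs) / total_free

# Same free capacity (6 and 4 columns), very different usability:
print(fragmentation([0, 0, 0, 0, 1, 1, 0, 0]))  # gaps of 4 and 2 -> 0.33
print(fragmentation([0, 1, 0, 1, 0, 1, 0, 1]))  # four 1-wide gaps -> 0.75
```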
Task relocation, which involves changing the physical location of circuits, is a technique for both error mitigation and high performance. Hence, this thesis also provides a functionality-based relocation mechanism for increasing the number of locations to which tasks can be relocated on heterogeneous FPGAs. As tasks are relocated, clock networks need to be routed to them; a reliability-aware technique for routing clock networks to tasks after placement is therefore also proposed.
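Functionality-based relocation, as described above, admits more target sites than exact-layout matching because it only asks whether a region can satisfy the task's resource requirements by type. A minimal sketch, with invented resource counts:

```python
from collections import Counter

# A task's footprint and each region's contents as resource multisets
# (CLBs, BRAMs, DSPs). Exact-layout relocation requires identical
# column patterns; a functionality-based check only requires that the
# region covers each resource type, which admits more target locations
# on a heterogeneous fabric. All numbers are invented for illustration.

def compatible(task_needs: Counter, region_has: Counter) -> bool:
    """True if the region offers at least the task's count of each resource."""
    return all(region_has[kind] >= n for kind, n in task_needs.items())

task = Counter({"CLB": 120, "BRAM": 4, "DSP": 2})
regions = {
    "R0": Counter({"CLB": 150, "BRAM": 4, "DSP": 2}),
    "R1": Counter({"CLB": 200, "BRAM": 2, "DSP": 8}),  # too few BRAMs
    "R2": Counter({"CLB": 130, "BRAM": 6, "DSP": 2}),
}
targets = [name for name, res in regions.items() if compatible(task, res)]
print(targets)  # ['R0', 'R2'] -> two candidate relocation sites, not one
```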
Finally, this thesis offers a prototype implementation and characterization of a placement management system (PMS) which integrates the aforementioned techniques. The performance of most of the proposed techniques is evaluated using data processing tasks from a NASA JPL spectrometer application. The results show that the proposed techniques have the potential to improve the reliability and performance of applications in hostile environments compared to state-of-the-art techniques. The task optimization technique presented provides a better capacity to circumvent permanent faults on COTS FPGAs than state-of-the-art approaches (48.6% more errors were circumvented for the JPL spectrometer application). The proposed task reuse scheme saves approximately 29% of configuration time, which frees up the internal configuration interface for more error mitigation operations. In addition, the proposed PMS has a worst-case latency of less than 50% of that of state-of-the-art runtime placement systems, while maintaining the same level of placement quality and resource overhead.