Tree Parity Machine Rekeying Architectures
The necessity of securing communication between hardware components in
embedded systems becomes increasingly important with regard to the secrecy of
data, particularly in its commercial use. We suggest a low-cost (i.e. small
logic-area) solution for flexible security levels and short key lifetimes. The
basis is an approach for symmetric key exchange using the synchronisation of
Tree Parity Machines. Fast successive key generation enables a key exchange
within a few milliseconds, given realistic communication channels with a
limited bandwidth. For demonstration we evaluate characteristics of a
standard-cell ASIC design realisation as an IP core in 0.18-micrometer
CMOS technology.
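The synchronisation-based key exchange described above can be illustrated with a minimal software model of two Tree Parity Machines trained on common public inputs. The parameters K, N, L, the seed, and the loop bound below are illustrative assumptions, not the configuration of the described ASIC:

```python
import numpy as np

# Minimal Tree Parity Machine (TPM) synchronisation sketch.
# K hidden units, N inputs per unit, integer weights bounded in [-L, L];
# these values are illustrative, not those of the IP-core design.
K, N, L = 3, 4, 3
rng = np.random.default_rng(42)

class TPM:
    def __init__(self):
        self.w = rng.integers(-L, L + 1, size=(K, N))

    def output(self, x):
        # sigma_k = sign of the k-th hidden unit's local field (0 maps to -1);
        # the public output tau is the parity (product) of the sigmas
        self.sigma = np.where((self.w * x).sum(axis=1) > 0, 1, -1)
        return int(self.sigma.prod())

    def hebbian(self, x, tau):
        # Hebbian rule: only hidden units that agree with tau are updated,
        # and weights stay clipped to [-L, L]
        for k in range(K):
            if self.sigma[k] == tau:
                self.w[k] = np.clip(self.w[k] + tau * x[k], -L, L)

a, b = TPM(), TPM()
for _ in range(100_000):
    x = rng.choice([-1, 1], size=(K, N))    # public random input vector
    ta, tb = a.output(x), b.output(x)
    if ta == tb:                            # learn only on agreeing outputs
        a.hebbian(x, ta)
        b.hebbian(x, tb)
    if np.array_equal(a.w, b.w):            # identical weights = shared key
        break

key = a.w.flatten()
```

Only the inputs and the single-bit outputs cross the channel; once the weight matrices coincide, both parties hold the same key material, and repeating the procedure yields the fast successive rekeying the abstract refers to.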
Fault-tolerant sub-lithographic design with rollback recovery
Shrinking feature sizes and energy levels coupled with high clock rates and decreasing node capacitance lead us into a regime where transient errors in logic cannot be ignored. Consequently, several recent studies have focused on feed-forward spatial redundancy techniques to combat these high transient fault rates. To complement these studies, we analyze fine-grained rollback techniques and show that they can offer lower spatial redundancy factors with no significant impact on system performance for fault rates up to one fault per device per ten million cycles of operation (Pf = 10^-7) in systems with 10^12 susceptible devices. Further, we concretely demonstrate these claims on nanowire-based programmable logic arrays. Despite expensive rollback buffers and general-purpose, conservative analysis, we show the area overhead factor of our technique is roughly an order of magnitude lower than a gate level feed-forward redundancy scheme
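A toy model may help illustrate the fine-grained rollback idea: a step whose result is corrupted by a transient fault is simply replayed from the last committed state, trading a small number of extra cycles for correctness. The fault probability and per-step checkpoint granularity below are illustrative assumptions, not the paper's nanowire-PLA parameters:

```python
import random

# Toy simulation of fine-grained rollback recovery (illustrative only).
# Each step's result is checked before commit; on a detected transient
# fault the result is discarded and the step re-executes from the last
# committed (checkpointed) state.
random.seed(1)
P_FAULT = 1e-3                  # per-step fault probability (exaggerated for demo)

def run(n_steps):
    state = 0                   # committed state (the rollback point)
    cycles = 0                  # total cycles, including re-executions
    done = 0
    while done < n_steps:
        cycles += 1
        result = state + 1              # the computation for this step
        if random.random() < P_FAULT:   # fault detected before commit:
            continue                    # roll back by discarding the result
        state = result                  # commit: this is the new checkpoint
        done += 1
    return state, cycles

state, cycles = run(10_000)
overhead = cycles / 10_000 - 1  # fraction of cycles spent on replay
```

Because faults are rare, the temporal overhead stays tiny while no spatially redundant copy of the logic is needed, which is the intuition behind the lower area-overhead factors reported above.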
Online and Offline BIST in IP-Core Design
This article presents an online and offline built-in self-test architecture implemented as an SRAM intellectual-property core for telecommunication applications. The architecture combines fault-latency reduction, code-based fault detection, and architecture-based fault avoidance to meet reliability constraints.
Treatment algorithm for infants diagnosed with spinal muscular atrophy through newborn screening
Spinal muscular atrophy (SMA) is an autosomal recessive disease characterized by the degeneration of alpha motor neurons in the spinal cord, leading to muscular atrophy. SMA is caused by deletions or mutations in the survival motor neuron 1 gene (SMN1). In humans, a nearly identical copy gene, SMN2, is present. Because SMN2 has been shown to decrease disease severity in a dose-dependent manner, SMN2 copy number is predictive of disease severity.
To develop a treatment algorithm for SMA-positive infants identified through newborn screening based upon SMN2 copy number.
A working group of 15 SMA experts participated in a modified Delphi process, moderated by a neutral third-party expert, to develop treatment guidelines.
The overarching recommendation is that all infants with two or three copies of SMN2 should receive immediate treatment (n = 13). For those infants for whom immediate treatment is not recommended, guidelines were developed that outline the appropriate screens and tests to be used to determine the timing of treatment initiation.
The identification of SMA-affected infants via newborn screening presents an unprecedented opportunity to achieve maximal therapeutic benefit through pre-symptomatic administration of treatment. The recommendations provided here are intended to help formulate treatment guidelines for infants who test positive during the newborn screening process.
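The copy-number-based triage rule stated in the abstract can be encoded as a small decision function. This is a hypothetical sketch of the recommendation as summarised above (two or three SMN2 copies mean immediate treatment), not clinical guidance, and the wording of the non-immediate branch is an assumption:

```python
# Hypothetical encoding of the abstract's triage rule; NOT clinical guidance.
def recommendation(smn2_copies: int) -> str:
    # Per the working group's overarching recommendation, infants with
    # two or three SMN2 copies receive immediate treatment
    if smn2_copies in (2, 3):
        return "immediate treatment"
    # For all others, timing is set by the recommended screens and tests
    return "monitor: timing determined by recommended screens and tests"
```

Such an encoding captures only the headline rule; the actual guidelines additionally specify which screens and tests drive the timing for the monitored group.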
FPGA Based Data Read-Out System of the Belle 2 Pixel Detector
The upgrades of the Belle experiment and the KEKB accelerator aim to increase
the data set of the experiment by a factor of 50. This will be achieved by
increasing the luminosity of the accelerator which requires a significant
upgrade of the detector. A new pixel detector based on DEPFET technology will
be installed to handle the increased reaction rate and provide better vertex
resolution. One of the features of the DEPFET detector is a long integration
time of 20 µs, which increases detector occupancy up to 3%. The detector
will generate about 2 GB/s of data. An FPGA-based two-level read-out system,
the Data Handling Hybrid, was developed for the Belle 2 pixel detector. The
system consists of 40 read-out and 8 controller modules. All modules are built
in the µTCA form factor using a Xilinx Virtex-6 FPGA and can utilize up to 4 GB
DDR3 RAM. The system was successfully tested in the beam test at DESY in
January 2014. The functionality and the architecture of the Belle 2 Data
Handling Hybrid system as well as the performance of the system during the beam
test are presented in the paper.
Comment: Transactions on Nuclear Science, Proceedings of the 19th Real Time Conference, Preprint
Cloud based testing of business applications and web services
This paper deals with testing applications built on the principles of cloud computing. It aims to describe options for testing business software in clouds (cloud testing). It identifies the needs for cloud testing tools, including multi-layer testing, service level agreement (SLA) based testing, large-scale simulation, and on-demand test environments. In a cloud-based model, ICT services are distributed and accessed over networks such as an intranet or the internet; large data centers deliver resources on demand as a service, eliminating the need for investments in dedicated hardware, software, or data center infrastructure. Businesses can apply these new technologies in the context of intellectual capital management to lower costs and increase competitiveness and earnings. Based on a comparison of testing tools and techniques, the paper further investigates future trends in the research and development of cloud-based testing tools. To the authors' knowledge, this comparison and classification of testing tools addresses a new area and has not been done before.