24 research outputs found
Reconfigurable elliptic curve cryptography
Elliptic Curve Cryptosystems (ECC) have been proposed as an alternative to other established public key cryptosystems such as RSA (Rivest-Shamir-Adleman). ECC provides more security per bit than other known public key schemes based on the discrete logarithm problem. Smaller key sizes result in faster computations, lower power consumption, and memory and bandwidth savings, making ECC a fast, flexible and cost-effective solution for providing security in constrained environments. Implementing ECC on a reconfigurable platform combines the speed, security and concurrency of hardware with the flexibility of the software approach.
This work proposes a generic architecture for an elliptic curve cryptosystem on a Field Programmable Gate Array (FPGA) that performs an elliptic curve scalar multiplication in 1.16 milliseconds for GF(2^163), which is considerably faster than most other documented implementations. One benefit of the proposed processor architecture is that it is easily reprogrammable to use different algorithms and is adaptable to any field order. Through reconfiguration, the arithmetic unit can also be optimized for different area/speed requirements. The underlying mathematics uses binary extension fields of the form GF(2^n) and a polynomial basis for the representation of the field elements. A significant gain in performance is obtained by using projective coordinates for the points on the curve during the computation process.
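As a rough sketch of the field layer this abstract describes, the snippet below implements GF(2^n) polynomial-basis multiplication in Python, with bits of an integer standing for polynomial coefficients. The choice of irreducible polynomial is an assumption: the paper does not name one, so the standard NIST pentanomial for GF(2^163) is used here.

```python
# Minimal sketch of GF(2^n) polynomial-basis arithmetic, the kind of
# field operation underlying elliptic curve scalar multiplication.
# ASSUMPTION: the NIST pentanomial x^163 + x^7 + x^6 + x^3 + 1 is used
# as the field polynomial; the paper does not state its choice.

MOD = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1
N = 163

def gf2_mul(a, b, mod=MOD, n=N):
    """Carry-less multiply of a and b, reduced modulo the field polynomial.

    Bit i of an operand is the coefficient of x^i; reduction happens
    eagerly whenever the running product reaches degree n.
    """
    acc = 0
    while b:
        if b & 1:          # current bit of b set: add (XOR) shifted a
            acc ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:   # degree reached n: reduce by the modulus
            a ^= mod
    return acc
```

In hardware this bit-serial loop maps naturally to a shift-and-XOR datapath, which is one reason polynomial-basis GF(2^n) arithmetic is attractive on FPGAs.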
Instruction-Level Abstraction (ILA): A Uniform Specification for System-on-Chip (SoC) Verification
Modern Systems-on-Chip (SoC) designs are increasingly heterogeneous and
contain specialized semi-programmable accelerators in addition to programmable
processors. In contrast to the pre-accelerator era, when the ISA played an
important role in verification by enabling a clean separation of concerns
between software and hardware, verification of these "accelerator-rich" SoCs
presents new challenges. From the perspective of hardware designers, there is a
lack of a common framework for the formal functional specification of
accelerator behavior. From the perspective of software developers, there exists
no unified framework for reasoning about software/hardware interactions of
programs that interact with accelerators. This paper addresses these challenges
by providing a formal specification and high-level abstraction for accelerator
functional behavior. It formalizes the concept of an Instruction Level
Abstraction (ILA), developed informally in our previous work, and shows its
application in modeling and verification of accelerators. This formal ILA
extends the familiar notion of instructions to accelerators and provides a
uniform, modular, and hierarchical abstraction for modeling software-visible
behavior of both accelerators and programmable processors. We demonstrate the
applicability of the ILA through several case studies of accelerators (for
image processing, machine learning, and cryptography), and a general-purpose
processor (RISC-V). We show how the ILA model facilitates equivalence checking
between two ILAs, and between an ILA and its hardware finite-state machine
(FSM) implementation. Further, this equivalence checking supports accelerator
upgrades using the notion of ILA compatibility, similar to processor upgrades
using ISA compatibility.
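To make the idea of extending "instructions" to accelerators concrete, here is a toy rendering in Python. The class and function names are hypothetical illustrations, not the paper's formalism: an accelerator is modeled as a set of instructions, each a decode predicate plus an update function over software-visible state, and two ILAs are compared by co-simulation on the same commands.

```python
# Toy ILA sketch (hypothetical names, not the paper's actual formalism).
# An "instruction" pairs a decode predicate with a state-update function
# over software-visible state.

class Instruction:
    def __init__(self, name, decode, update):
        self.name, self.decode, self.update = name, decode, update

class ILA:
    def __init__(self, instructions):
        self.instructions = instructions

    def step(self, state, cmd):
        for ins in self.instructions:
            if ins.decode(cmd):
                return ins.update(state, cmd)
        return state  # no instruction fires: state is unchanged

def equivalent_on(ila_a, ila_b, states, cmds):
    """Bounded equivalence check by co-simulation -- a lightweight
    stand-in for the formal equivalence checking the paper describes."""
    return all(ila_a.step(s, c) == ila_b.step(s, c)
               for s in states for c in cmds)

# Two accumulator accelerators whose ADD instructions are written
# differently but behave identically.
clr = Instruction("CLR", lambda c: c[0] == "CLR", lambda s, c: 0)
spec = ILA([Instruction("ADD", lambda c: c[0] == "ADD",
                        lambda s, c: s + c[1]), clr])
impl = ILA([Instruction("ADD", lambda c: c[0] == "ADD",
                        lambda s, c: c[1] + s), clr])
```

The same instruction-by-instruction decomposition is what makes checking an ILA against an FSM implementation, or against an upgraded ILA, modular rather than monolithic.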
Specifying and Solving Robust Empirical Risk Minimization Problems Using CVXPY
We consider robust empirical risk minimization (ERM), where model parameters
are chosen to minimize the worst-case empirical loss when each data point
varies over a given convex uncertainty set. In some simple cases, such problems
can be expressed in an analytical form. In general the problem can be made
tractable via dualization, which turns a min-max problem into a min-min
problem. Dualization requires expertise and is tedious and error-prone. We
demonstrate how CVXPY can be used to automate this dualization procedure in a
user-friendly manner. Our framework allows practitioners to specify and solve
robust ERM problems with a general class of convex losses, capturing many
standard regression and classification problems. Users can easily specify any
complex uncertainty set that is representable via disciplined convex
programming (DCP) constraints.
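The dualization the abstract mentions can be sanity-checked in one simple case without any modeling framework. For the hinge loss with an l-infinity box of radius rho around each feature vector, the inner maximization has the closed form [1 - y*x.w + rho*||w||_1]_+; the NumPy sketch below (an illustration, not the paper's CVXPY interface) verifies that identity by brute force over the corners of the box.

```python
import itertools
import numpy as np

# Sanity check of the dualization identity for one simple robust ERM
# case: hinge loss, with each feature vector x perturbed by u subject
# to ||u||_inf <= rho. Names and setup here are illustrative.

def worst_case_hinge_brute(w, x, y, rho):
    """Worst-case hinge loss by enumerating the corners of the box.
    The inner objective is linear in u, so the maximum is attained
    at a corner."""
    best = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=len(x)):
        u = rho * np.array(signs)
        best = max(best, max(0.0, 1.0 - y * np.dot(x + u, w)))
    return best

def worst_case_hinge_dual(w, x, y, rho):
    """Closed form obtained by dualizing the inner maximization:
    max over u of -y*u.w equals rho * ||w||_1 when |y| = 1."""
    return max(0.0, 1.0 - y * np.dot(x, w) + rho * np.abs(w).sum())
```

For general convex losses and uncertainty sets no such closed form exists, which is exactly the tedium the abstract says the CVXPY-based framework automates.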
Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19
IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19.
OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19.
DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022).
INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days.
MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes.
RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively).
CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes.
TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
ELECTROMYOGRAPHY BASED HAND CONTROL SIGNALS: A REVIEW
ABSTRACT Electromyography (EMG) is the analytical study of the electrical activity produced by skeletal muscles. EMG is an example of modern human-computer interaction and can be used in the fields of medicine and engineering. This paper discusses the different types of EMG, namely surface EMG (sEMG, also called surface scanning EMG) and intramuscular/indwelling (needle and fine-wire) EMG; the electrical noise and other factors that affect EMG signals; and the different techniques used to extract these signals, along with a comparison of those techniques. We also discuss the variety of applications in which EMG signals can be used.
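As an illustration of the signal-extraction side of this survey, the snippet below computes a few time-domain features commonly used on a windowed EMG signal. The specific features (MAV, RMS, waveform length, zero crossings) are standard choices in the EMG literature, not taken from this particular paper, and the threshold value is an assumption.

```python
import numpy as np

# Common time-domain EMG features over one analysis window.
# ASSUMPTION: the feature set and the zero-crossing noise threshold
# are illustrative defaults, not values from the reviewed paper.

def emg_features(x, zc_threshold=0.01):
    x = np.asarray(x, dtype=float)
    mav = np.mean(np.abs(x))            # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))      # root mean square amplitude
    diffs = np.diff(x)
    wl = np.sum(np.abs(diffs))          # waveform length
    # zero crossings, counted only when the jump exceeds a noise floor
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(diffs) > zc_threshold))
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": int(zc)}
```

Feature vectors like this one are what downstream classifiers (for prosthetic control or gesture recognition) typically consume instead of the raw signal.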
Human-Computer Interface using Gestures based on Neural Network (International Journal of Electronics and Computer Science Engineering, ISSN 2277-1956)
Abstract- Gestures are powerful tools for non-verbal communication. Human-computer interface (HCI) is a growing field that reduces the complexity of interaction between human and machine, in which gestures are used for conveying information or controlling the machine. In the present paper, static hand gestures are utilized for this purpose. The paper presents a novel technique for recognizing hand gestures, i.e. the A-Z alphabets, the 0-9 numbers and 6 additional control signals (for keyboard and mouse control), by extracting various features of the hand, creating a feature vector table and training a neural network. The proposed work has a recognition rate of 99%.
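The feature-vector-to-neural-network pipeline described above can be sketched end to end in a few lines. The toy data, two-class setup and network size below are illustrative stand-ins, not the paper's 42-symbol configuration: a one-hidden-layer network is trained by full-batch gradient descent on synthetic feature vectors.

```python
import numpy as np

# Toy sketch of the abstract's pipeline: feature vectors in, neural
# network classifier out. Data and dimensions are illustrative only.

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))               # 60 samples, 4 hand features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # two toy gesture classes
Y = np.eye(2)[y]                           # one-hot targets

W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)

losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                         # hidden layer
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                # softmax
    losses.append(-np.mean(np.sum(Y * np.log(P + 1e-12), axis=1)))
    G = (P - Y) / len(X)                             # output gradient
    GH = (G @ W2.T) * (1 - H ** 2)                   # backprop to hidden
    W2 -= 0.5 * (H.T @ G); b2 -= 0.5 * G.sum(axis=0)
    W1 -= 0.5 * (X.T @ GH); b1 -= 0.5 * GH.sum(axis=0)

accuracy = float(np.mean(P.argmax(axis=1) == y))
```

Scaling this sketch to 42 output classes is a matter of widening the output layer and feature vector; the training loop is unchanged.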
Recommended from our members
Efficient predictive analysis for detecting nondeterminism in multi-threaded programs
Determinism is often a desired property of multithreaded programs. A multithreaded program is said to be deterministic if, for a given input, different thread interleavings result in the same system state in the execution of the program. This, in turn, requires that different interleavings preserve the values read by each read operation. A related but less strict condition is for the program to be race-free: a deterministic program is race-free, but the converse may not be true. Much work has been done on static analysis of programs to detect races and nondeterminism. However, static analysis can be expensive and may not complete for large programs in reasonable time. In contrast, predictive analysis techniques take a given program trace and explore other possible interleavings that may violate a given property; here, the property of interest is determinism. Predictive analysis can be sound, but it is not complete, as it is limited to a specific set of program runs. Nonetheless, it is of interest because it offers greater scalability than static analysis. This work presents a predictive analysis method for detecting nondeterminism in multi-threaded programs. Potential cases of nondeterminism are checked by constructing a causality graph from the thread events and confirming that it is acyclic. On average, one graph is analyzed per potential case of nondeterminism in each benchmark, keeping the method efficient. We demonstrate its application on benchmark Java and C/C++ programs.
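The acyclicity check at the heart of this method is standard cycle detection on a directed graph. The sketch below (with a hypothetical event encoding, not the paper's exact construction) takes causality edges derived from program order and a candidate reordering; a cycle means the candidate interleaving is infeasible.

```python
# Cycle detection on a causality graph by depth-first search.
# Nodes stand for thread events; edges for happens-before constraints.
# The event naming scheme is hypothetical, for illustration only.

def has_cycle(nodes, edges):
    """Return True iff the directed graph (nodes, edges) has a cycle."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)

    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on stack / done
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for m in adj[n]:
            if color[m] == GRAY:   # back edge: cycle found
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)
```

DFS runs in time linear in events plus edges, which is consistent with the efficiency claim of roughly one graph analyzed per potential nondeterminism case.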
Toward Formalizing a Validation Methodology Using Simulation Coverage
The biggest obstacle in the formal verification of large designs is their very large state spaces, which cannot be handled even by techniques such as implicit state space traversal. The only viable solution in most cases is validation by functional simulation. Unfortunately, this has the drawbacks of high computational requirements, due to the large number of test vectors needed, and the lack of adequate coverage measures to characterize the quality of a given test set. To overcome these limitations, there has been recent interest in hybrid techniques that combine the strengths of formal verification and simulation. Formal verification-based techniques are used on a test model (usually much smaller than the design) to derive a set of functional test vectors, which are then used for design validation through simulation. The test set generated typically satisfies some coverage measure on the test model. Recent research has proposed the use of state or transition coverage. However, no effor..
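Transition coverage, one of the measures mentioned above, is simple to compute on a small test-model FSM: simulate each test sequence and report the fraction of FSM transitions exercised. The sketch below is illustrative (the FSM encoding and names are hypothetical, not from the paper).

```python
# Transition coverage of a test set on a test-model FSM.
# The FSM is a dict mapping (state, input) -> next state; the encoding
# is a hypothetical illustration, not the paper's formulation.

def transition_coverage(fsm, start, test_sequences):
    """Fraction of FSM transitions exercised by the test sequences."""
    hit = set()
    for seq in test_sequences:
        s = start
        for inp in seq:
            nxt = fsm[(s, inp)]
            hit.add((s, inp, nxt))   # record the exercised transition
            s = nxt
    return len(hit) / len(fsm)
```

A hybrid flow would use formal techniques to generate sequences driving this number to 1.0 on the test model, then replay those vectors on the full design in simulation.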