The development of an interim generalized gate logic software simulator
A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of the self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of IGGLOSS is its high-speed simulation: 9.5 × 10^6 gates/CPU sec for nonfaulted circuits and 4.4 × 10^6 gates/CPU sec for faulted circuits on a VAX 11/780 host computer.
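For readers unfamiliar with this style of tool, the core loop can be illustrated with a toy Python sketch: inject a random single stuck-at fault into a small gate-level netlist, apply test patterns, and record how long the fault stays latent. The three-gate netlist and all helper names below are illustrative, not taken from IGGLOSS.

```python
# Toy stochastic estimation of self-test coverage and fault-detection
# latency: inject random stuck-at faults, compare faulted vs. good output.
import random

# netlist: gate name -> (op, input names); dict order is topological here
NETLIST = {
    "n1": ("AND", ("a", "b")),
    "n2": ("OR",  ("n1", "c")),
    "out": ("NAND", ("n2", "a")),
}
OPS = {
    "AND":  lambda x, y: x & y,
    "OR":   lambda x, y: x | y,
    "NAND": lambda x, y: 1 - (x & y),
}

def simulate(inputs, fault=None):
    """Zero-delay evaluation; fault = (net, stuck_value) or None."""
    values = dict(inputs)
    if fault and fault[0] in values:        # stuck-at on a primary input
        values[fault[0]] = fault[1]
    for net, (op, ins) in NETLIST.items():
        values[net] = OPS[op](values[ins[0]], values[ins[1]])
        if fault and fault[0] == net:       # stuck-at on a gate output
            values[net] = fault[1]
    return values["out"]

def detection_latency(fault, patterns):
    """Number of patterns applied until the outputs diverge; None if latent."""
    for latency, pattern in enumerate(patterns, start=1):
        if simulate(pattern, fault) != simulate(pattern):
            return latency
    return None

random.seed(0)
nets = ["a", "b", "c"] + list(NETLIST)
patterns = [{k: random.randint(0, 1) for k in ("a", "b", "c")} for _ in range(32)]
trials = [detection_latency((random.choice(nets), random.randint(0, 1)), patterns)
          for _ in range(1000)]
detected = [t for t in trials if t is not None]
print(f"coverage = {len(detected) / len(trials):.2%}, "
      f"mean latency = {sum(detected) / len(detected):.1f} patterns")
```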
Measurement of fault latency in a digital avionic miniprocessor
The results of fault injection experiments utilizing a gate-level emulation of the central processor unit of the Bendix BDX-930 digital computer are presented. The failure-detection coverage of comparison-monitoring and of a typical avionics CPU self-test program was determined. The specific tasks and experiments included: (1) inject randomly selected gate-level and pin-level faults and emulate six software programs, using comparison-monitoring to detect the faults; (2) based upon the derived empirical data, develop and validate a model of fault latency that will forecast a software program's detecting ability; (3) given a typical avionics self-test program, inject randomly selected faults at both the gate level and the pin level and determine the proportion of faults detected; (4) determine why faults were undetected; (5) recommend how the emulation can be extended to multiprocessor systems such as SIFT; and (6) determine the proportion of faults detected by a uniprocessor BIT (built-in test) irrespective of self-test.
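Task (2), forecasting a program's detecting ability from empirical latencies, can be illustrated with a deliberately simple model. The exponential form below is an assumption for illustration only; it is not the latency model derived in the paper, and the latency values are made up.

```python
# Hypothetical sketch of fitting a fault-latency model to empirical
# detection times, then forecasting coverage as a function of run time.
import math

latencies = [3, 1, 7, 2, 14, 4, 1, 9, 5, 2]   # made-up detection latencies (ms)

# Assumed model: P(detected by time t) = 1 - exp(-t / tau). For fully
# observed exponential data the MLE of tau is just the sample mean.
tau = sum(latencies) / len(latencies)

def predicted_coverage(t):
    return 1.0 - math.exp(-t / tau)

for t in (1, 5, 10, 50):
    print(f"forecast coverage after {t:>2} ms: {predicted_coverage(t):.1%}")
```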
Feasibility study for a generalized gate logic software simulator
Unit-delay simulation, event-driven simulation, zero-delay simulation, simulation techniques, 2-valued versus multivalued logic, network initialization, gate operations and alternate network representations, parallel versus serial mode simulation, fault modelling, extension to multiprocessor systems, and simulation timing are discussed. Functional-level networks, gate-equivalent circuits, the prototype BDX-930 network model, fault models, and the identification of detected faults for BGLOSS are discussed. Preprocessor tasks, postprocessor tasks, executive tasks, and a library of BLISS-coded macros for GGLOSS are also discussed.
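Of the techniques listed, event-driven unit-delay simulation is worth a concrete sketch: only gates whose inputs changed are re-evaluated, and each evaluation schedules its fan-out one time unit later. The netlist and helper names below are illustrative, not drawn from the feasibility study.

```python
# Toy event-driven, unit-delay gate simulation: re-evaluate only the
# fan-out of signals that actually changed, one time unit per gate.
from collections import defaultdict

NETLIST = {"n1": ("AND", ("a", "b")), "out": ("OR", ("n1", "c"))}
OPS = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y}

# fan-out map: signal -> gates that read it
FANOUT = defaultdict(list)
for gate, (_, ins) in NETLIST.items():
    for sig in ins:
        FANOUT[sig].append(gate)

def simulate(initial, changes):
    """initial: starting signal values; changes: {signal: new_value} at t=0."""
    values = dict(initial)
    events = defaultdict(dict)          # time -> {signal: value}
    events[0].update(changes)
    t = 0
    while any(time >= t for time in events):
        for sig, val in events.pop(t, {}).items():
            if values.get(sig) == val:
                continue                # no change: suppress the event
            values[sig] = val
            for gate in FANOUT[sig]:
                op, ins = NETLIST[gate]
                new = OPS[op](values[ins[0]], values[ins[1]])
                events[t + 1][gate] = new   # unit delay per gate
        t += 1
    return values

start = {"a": 0, "b": 1, "c": 0, "n1": 0, "out": 0}
print(simulate(start, {"a": 1}))        # a rises: n1 updates at t=1, out at t=2
```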
Efficient Comparison of Massive Graphs Through The Use Of 'Graph Fingerprints'
The problem of how to compare empirical graphs is an area of great interest within the field of network science. The ability to compare graphs accurately but efficiently has a significant impact in such areas as temporal graph evolution, anomaly detection, and protein comparison. The comparison problem is compounded when working with graphs containing millions of anonymous, i.e. unlabelled, vertices and edges. Comparison of two or more graphs is highly computationally expensive. Thus, reducing a graph to a much smaller feature set, called a fingerprint, that accurately captures the essence of the graph would be highly desirable. Such an approach would have potential applications outside of graph comparison, especially in the area of machine learning. This paper introduces a feature-extraction-based approach for the efficient comparison of large, topologically similar, but order-varying, unlabelled graph datasets. The approach acts by producing a 'Graph Fingerprint' which represents both vertex-level and global-level topological features from a graph. The approach is shown to be efficient when comparing graphs which are highly topologically similar but order varying. The approach scales linearly with the size and complexity of the graphs being fingerprinted.
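As a rough illustration of the fingerprinting idea, the sketch below summarises an unlabelled graph as a fixed-length vector of global and vertex-level topological features using networkx. The particular feature set is an assumption for illustration; the paper's actual 'Graph Fingerprint' features may differ.

```python
# Toy graph fingerprint: a fixed-length vector of topological features,
# so that expensive graph-to-graph comparison becomes cheap vector comparison.
import statistics
import networkx as nx

def graph_fingerprint(G: nx.Graph) -> list[float]:
    degrees = [d for _, d in G.degree()]
    clustering = list(nx.clustering(G).values())
    return [
        G.number_of_nodes(),
        G.number_of_edges(),
        nx.density(G),
        statistics.mean(degrees),
        statistics.pstdev(degrees),
        max(degrees),
        statistics.mean(clustering),             # average clustering coefficient
        nx.degree_assortativity_coefficient(G),  # global mixing pattern
    ]

# Two topologically similar graphs of different order: their fingerprints
# can be compared directly in feature space.
g1 = nx.barabasi_albert_graph(1_000, 3, seed=1)
g2 = nx.barabasi_albert_graph(5_000, 3, seed=2)
print([round(x, 3) for x in graph_fingerprint(g1)])
print([round(x, 3) for x in graph_fingerprint(g2)])
```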
Data Quality Assessment and Anomaly Detection Via Map / Reduce and Linked Data: A Case Study in the Medical Domain
Recent technological advances in modern healthcare have led to the ability to collect a vast wealth of patient monitoring data. This data can be utilised for patient diagnosis, but it also holds potential for use within medical research. However, these datasets often contain errors which limit their value to medical research, with one study finding error rates ranging from 2.3% to 26.9% in a selection of medical databases. Previous methods for automatically assessing data quality normally rely on threshold rules, which are often unable to correctly identify errors, as further complex domain knowledge is required. To combat this, a semantic web based framework has previously been developed to assess the quality of medical data. However, early work, based solely on traditional semantic web technologies, revealed that they are either unable or inefficient at scaling to the vast volumes of medical data. In this paper we present a new method for storing and querying medical RDF datasets using Hadoop Map / Reduce. This approach exploits the inherent parallelism found within RDF datasets and queries, allowing us to scale with both dataset and system size. Unlike previous solutions, this framework uses highly optimised (SPARQL) joining strategies, intelligent data caching and the use of a super-query to enable the completion of eight distinct SPARQL lookups, comprising over eighty distinct joins, in only two Map / Reduce iterations. Results are presented comparing both Jena and a previous Hadoop implementation, demonstrating the superior performance of the new methodology. The new method is shown to be five times faster than Jena and twice as fast as the previous approach.
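The parallelism being exploited can be seen in miniature below: a single MapReduce-style join over RDF triples, matching a SPARQL pattern such as { ?patient :hasReading ?r . ?r :value ?v }. The vocabulary and the in-memory map/shuffle/reduce driver are hypothetical stand-ins for the Hadoop machinery.

```python
# Toy MapReduce join over RDF triples: triples sharing a join key are
# routed to the same reducer, which emits the joined rows.
from collections import defaultdict

triples = [
    ("patient1", ":hasReading", "r1"),
    ("patient2", ":hasReading", "r2"),
    ("r1", ":value", "97.1"),
    ("r2", ":value", "412.0"),   # implausible reading: a candidate anomaly
]

def map_phase(s, p, o):
    # Emit key/value pairs keyed by the reading id (the join variable ?r).
    if p == ":hasReading":
        yield o, ("patient", s)
    elif p == ":value":
        yield s, ("value", o)

def reduce_phase(key, values):
    patients = [v for tag, v in values if tag == "patient"]
    vals = [v for tag, v in values if tag == "value"]
    for patient in patients:
        for v in vals:
            yield patient, key, v

groups = defaultdict(list)
for t in triples:
    for key, value in map_phase(*t):
        groups[key].append(value)       # shuffle: group by join key

for key, values in groups.items():
    for row in reduce_phase(key, values):
        print(row)                      # (patient, reading, value) join results
```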
Integral equations for simple fluids in a general reference functional approach
The integral equations for the correlation functions of an inhomogeneous fluid mixture are derived using a functional Taylor expansion of the free energy around an inhomogeneous equilibrium distribution. The system of equations is closed by the introduction of a reference functional for the correlations beyond second order in the density difference from the equilibrium distribution. Explicit expressions are obtained for the energies required to insert particles of the fluid mixture into the inhomogeneous system. The approach is illustrated by the determination of the equation of state of a simple, truncated Lennard-Jones fluid and the analysis of the behavior of this fluid near a hard wall. The wall-fluid integral equation exhibits complete drying, and the corresponding coexisting densities are in good agreement with those obtained from the standard (Maxwell) construction applied to the bulk fluid. Self-consistency of the approach is examined by analyzing the virial/compressibility routes to the equation of state and the Gibbs-Duhem relation for the bulk fluid, and the contact density sum rule and the Gibbs adsorption equation for the hard wall problem. For the bulk fluid, we find good self-consistency for stable states outside the critical region. For the hard wall problem, the Gibbs adsorption equation is fulfilled very well near phase coexistence, where the adsorption is large. For the contact density sum rule, we find some deviations near coexistence due to a slight disagreement between the coexisting density for the gas phase obtained from the Maxwell construction and from complete drying at the hard wall.
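For orientation, the first step of such a reference functional approach can be sketched in generic density-functional notation (an illustration, not necessarily the paper's exact equations): expand the excess free energy around the equilibrium profile and gather everything beyond second order into a reference functional.

```latex
% Generic second-order functional Taylor expansion of the excess free
% energy around the equilibrium profile \rho^0_i (standard DFT notation;
% an illustrative sketch, not the paper's exact equations).
\begin{align*}
  F^{\mathrm{ex}}[\{\rho_i\}]
    ={}& F^{\mathrm{ex}}[\{\rho^0_i\}]
       - k_{\mathrm{B}}T \sum_i \int \mathrm{d}\mathbf{r}\,
         c^{(1)}_i(\mathbf{r})\, \Delta\rho_i(\mathbf{r}) \\
       & - \frac{k_{\mathrm{B}}T}{2} \sum_{i,j} \iint
         \mathrm{d}\mathbf{r}\, \mathrm{d}\mathbf{r}'\,
         c^{(2)}_{ij}(\mathbf{r},\mathbf{r}')\,
         \Delta\rho_i(\mathbf{r})\, \Delta\rho_j(\mathbf{r}')
       + F^{\mathrm{ref}}[\{\Delta\rho_i\}]
\end{align*}
```

Here $\Delta\rho_i = \rho_i - \rho^0_i$, and $c^{(1)}_i$, $c^{(2)}_{ij}$ are the one- and two-body direct correlation functions of the equilibrium state; the system of integral equations closes once an explicit reference functional $F^{\mathrm{ref}}$ is chosen for the terms beyond second order.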
Deep Topology Classification: A New Approach for Massive Graph Classification
The classification of graphs is a key challenge within the many scientific fields that use graphs to represent data, and it is an active area of research. Graph classification can be critical in identifying and labelling unknown graphs within a dataset and has seen application across many scientific fields. Graph classification poses two distinct problems: the classification of elements within a graph and the classification of the entire graph. Whilst there is considerable work on the first problem, the efficient and accurate classification of massive graphs into one or more classes has, thus far, received less attention. In this paper we propose the Deep Topology Classification (DTC) approach for global graph classification. DTC extracts both global and vertex-level topological features from a graph to create a highly discriminative representation in feature space. A deep feed-forward neural network is designed and trained to classify these graph feature vectors. This approach is shown to be over 99% accurate at discerning graph classes across two datasets. Additionally, it is shown to be more accurate than current state-of-the-art approaches in both binary and multi-class graph classification tasks.
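A minimal sketch of such a pipeline is given below: one fixed-length topological feature vector per graph, fed to a deep feed-forward classifier, here in PyTorch. The layer sizes, feature count, two-class setup, and random stand-in data are assumptions for illustration; the paper's network and feature set may differ.

```python
# Toy global graph classification: feature vector per graph -> MLP -> class.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 32, 2   # e.g. global + aggregated vertex-level features

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, N_CLASSES),   # logits; softmax is applied inside the loss
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data: one fingerprint-style feature vector per graph.
X = torch.randn(256, N_FEATURES)
y = torch.randint(0, N_CLASSES, (256,))

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

pred = model(X).argmax(dim=1)
print(f"training accuracy: {(pred == y).float().mean():.2%}")
```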
A 13-hour laboratory school study of lisdexamfetamine dimesylate in school-aged children with attention-deficit/hyperactivity disorder
Background: Lisdexamfetamine dimesylate (LDX) is indicated for the treatment of attention-deficit/hyperactivity disorder (ADHD) in children 6 to 12 years of age and in adults. In a previous laboratory school study, LDX demonstrated efficacy 2 hours postdose with duration of efficacy through 12 hours. The current study further characterizes the time course of effect of LDX.
Methods: Children aged 6 to 12 years with ADHD were enrolled in a laboratory school study. The multicenter study consisted of open-label dose optimization of LDX (30, 50, 70 mg/d; 4 weeks) followed by a randomized, placebo-controlled, 2-way crossover phase (1 week each). Efficacy measures included the SKAMP (deportment [primary] and attention [secondary]) and PERMP (attempted/correct) scales (secondary), measured predose and at 1.5, 2.5, 5, 7.5, 10, 12, and 13 hours postdose. Safety measures included treatment-emergent adverse events (AEs), physical examination, vital signs, and ECGs.
Results: A total of 117 subjects were randomized and 111 completed the study. Compared with placebo, LDX demonstrated significantly greater efficacy at each postdose time point (1.5 to 13.0 hours), as measured by the SKAMP deportment and attention scales and PERMP (P < .005). The most common treatment-emergent AEs during dose optimization were decreased appetite (47%), insomnia (27%), headache (17%), irritability (16%), upper abdominal pain (16%), and affect lability (10%), which were less frequent in the crossover phase (6%, 4%, 5%, 1%, 2%, and 0%, respectively).
Conclusion: In school-aged children (6 to 12 years) with ADHD, the efficacy of LDX was maintained from the first time point (1.5 hours) through the last time point assessed (13.0 hours). LDX was generally well tolerated, resulting in typical stimulant AEs.
Trial registration: Official Title: A Phase IIIb, Randomized, Double-Blind, Multi-Center, Placebo-Controlled, Dose-Optimization, Cross-Over, Analog Classroom Study to Assess the Time of Onset of Vyvanse (Lisdexamfetamine Dimesylate) in Pediatric Subjects Aged 6-12 With Attention-Deficit/Hyperactivity Disorder. ClinicalTrials.gov Identifier: NCT00500149 http://clinicaltrials.gov/ct2/show/NCT00500149