585 research outputs found
Static stability of the Space Station solar array FASTMast structure
The combined loads test of the 3-Bay FASTMast marks the end of the Lewis Research Center (LeRC) effort to characterize the behavior of the principal Space Station solar array support structure. The primary objective of this test and analysis effort was to develop a method to predict structural stability failure modes under flight-like applied loads. The report opens with a brief historical perspective on the hardware design development and the evolution of the FASTMast structural stability problem. Once an understanding of the solution process has been established, test and analysis details are presented and related to the postulated failure theories. The combined load test series subjected the structure to a combination of transverse, moment, and torsion loads similar to those expected in the service environment. Nonlinear finite element (FE) models were developed and large displacement analyses were performed to support the test effort and failure mode predictions. Details of the test configuration as well as test and analysis results are presented. The results were then reviewed to establish whether they validly and successfully support the failure mode assessments. Finally, study conclusions are drawn and recommendations for safe operation of the FASTMast structure are presented for consideration.
Source Code Classification for Energy Efficiency in Parallel Ultra Low-Power Microcontrollers
The analysis of source code through machine learning techniques is an increasingly explored research topic aiming at adding smartness to the software toolchain so that modern architectures are exploited in the best possible way. In the case of low-power, parallel embedded architectures, this means finding the configuration, for instance in terms of the number of cores, that leads to minimum energy consumption. Depending on the kernel to be executed, identifying the energy-optimal scaling configuration is not trivial. While recent work has focused on general-purpose systems to learn and predict the best execution target in terms of the execution time of a snippet of code or kernel (e.g. offloading an OpenCL kernel to a multicore CPU or GPU), in this work we focus on static compile-time features to assess whether they can be successfully used to predict the minimum energy configuration on PULP, an ultra-low-power architecture featuring an on-chip cluster of RISC-V processors. Experiments show that using machine learning models on the source code to automatically select the best energy scaling configuration is viable and has the potential to be used in the context of automatic system configuration for energy minimisation.
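As a concrete illustration of this idea, the following minimal sketch shows how a classifier could map static compile-time features of a kernel to an energy-optimal core count. The feature set, the model choice, and all values shown are illustrative assumptions, not the pipeline or data used in the paper.

```python
# Minimal sketch (assumptions, not the paper's pipeline): predict the
# energy-optimal core configuration of a kernel from static compile-time features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row holds hypothetical static features for one kernel, e.g.
# [arithmetic ops, memory ops, branches, loop depth, function calls].
X = np.array([
    [120,  40,  8, 2, 1],
    [ 30, 150,  4, 1, 0],
    [200,  20, 16, 3, 2],
    [ 45,  90,  6, 2, 1],
])
# Label: number of active cores that minimised measured energy (hypothetical).
y = np.array([8, 1, 8, 2])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict the configuration for an unseen kernel described by its static features.
new_kernel = np.array([[80, 60, 10, 2, 1]])
print("predicted core count:", model.predict(new_kernel)[0])
```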
Benchmarking a many-core neuromorphic platform with an MPI-based DNA sequence matching algorithm
SpiNNaker is a neuromorphic globally asynchronous locally synchronous (GALS) multi-core architecture designed for simulating spiking neural networks (SNNs) in real time. Several studies have shown that neuromorphic platforms allow flexible and efficient simulations of SNNs by exploiting the efficient communication infrastructure optimised for transmitting small packets across the many cores of the platform. However, the effectiveness of neuromorphic platforms in executing massively parallel general-purpose algorithms, while promising, is still to be explored. In this paper, we present a parallel DNA sequence matching algorithm implemented with the MPI programming paradigm and ported to the SpiNNaker platform. In our implementation, all cores available on the board are configured to execute in parallel an optimised version of the Boyer-Moore (BM) algorithm. Exploiting this application, we benchmarked the SpiNNaker platform in terms of scalability and synchronisation latency. Experimental results indicate that the SpiNNaker parallel architecture allows a linear performance increase with the number of cores used and shows better scalability than a general-purpose multi-core computing platform.
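The scheme described above can be sketched in ordinary MPI terms: the sequence is partitioned across ranks (with an overlap of pattern length minus one so boundary matches are not lost), each rank runs a Boyer-Moore style scan on its slice, and a reduction collects the match count. This is a simplified, hedged sketch using mpi4py on a conventional cluster, not the SpiNNaker port itself; the placeholder sequence and pattern are illustrative.

```python
# Hedged sketch (not the SpiNNaker port): MPI-parallel pattern counting with a
# Boyer-Moore bad-character scan on each rank's slice of the sequence.
from mpi4py import MPI

def boyer_moore_count(text, pattern):
    """Count occurrences of `pattern` in `text` (bad-character heuristic only)."""
    last = {c: i for i, c in enumerate(pattern)}   # last index of each character
    m, n, count, s = len(pattern), len(text), 0, 0
    while s <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            count += 1
            s += 1
        else:
            s += max(1, j - last.get(text[s + j], -1))
    return count

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

pattern = "GATTACA"                                            # illustrative query
sequence = "ACGTTGATTACAGT" * 100_000 if rank == 0 else None   # placeholder data
sequence = comm.bcast(sequence, root=0)

chunk = len(sequence) // size
start = rank * chunk
# Overlap slices by len(pattern) - 1 so matches spanning a boundary are counted
# exactly once (each rank counts only matches *starting* inside its own chunk).
end = len(sequence) if rank == size - 1 else (rank + 1) * chunk + len(pattern) - 1

local = boyer_moore_count(sequence[start:end], pattern)
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("total matches:", total)
```

Run with, for example, `mpirun -np 4 python bm_mpi.py`; the per-rank slice overlap is the detail that keeps the distributed count identical to a sequential scan.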
Virtual Environment for Next Generation Sequencing Analysis
Next Generation Sequencing technology, on the one hand, allows a more accurate analysis, and, on the other hand, increases the amount of data to process. A new protocol for sequencing the messenger RNA in a cell, known as RNA-Seq, generates millions of short sequence fragments in a single run. These fragments, or reads, can be used to measure levels of gene expression and to identify novel splice variants of genes. The proposed solution is a distributed architecture consisting of a Grid Environment and a Virtual Grid Environment, in order to reduce processing time by making the system scalable and flexible.
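The enabling step for such a distributed architecture is partitioning the read data into independent units of work that can be dispatched to grid nodes. The sketch below shows one plausible way to do this with a FASTQ file; the file names and chunk count are hypothetical and are not taken from the paper.

```python
# Hedged sketch: split a FASTQ file of reads into independent chunks, each of
# which could be submitted as a separate grid job. Names and counts are illustrative.
import itertools

def split_fastq(path, n_chunks, prefix="chunk"):
    """Distribute reads from `path` round-robin into `n_chunks` FASTQ files."""
    outputs = [open(f"{prefix}_{i}.fastq", "w") for i in range(n_chunks)]
    try:
        with open(path) as fastq:
            for record_index in itertools.count():
                record = [fastq.readline() for _ in range(4)]  # 4 lines per read
                if not record[0]:          # end of file reached
                    break
                outputs[record_index % n_chunks].writelines(record)
    finally:
        for f in outputs:
            f.close()

# e.g. split_fastq("sample_reads.fastq", n_chunks=16) before submitting 16 jobs
```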
A first-order lumped parameters model of electrohydraulic actuators for low-inertia rotating systems with dry friction
In aerospace engineering, there are several control systems affected by dry friction that are characterized by low inertia and high working frequencies. For these systems, it is possible to use a downgraded, first-order dynamic model to represent their behaviour properly without running into numerical problems that would be harmful to the solution itself and would require high computational power, which means more weight, cost, and complexity. Yet the effect of dry friction can still be accounted for accurately using a new algorithm based on the Coulomb friction model applied to the downgraded, first-order dynamic model. In this paper, the downgraded first-order model is applied to an electrohydraulic servomechanism with its PID control unit, hydraulic motor, electrohydraulic servo-valve, and applied load. These components represent a classic airplane actuator system. The downgraded model is compared to the second-order one, highlighting the pros and cons of the reduction process, with particular attention to the effect of dry friction for reversible and irreversible actuators.
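To make the idea concrete, the following minimal sketch shows one way a Coulomb (dry) friction term with a simple stick/slip check can be embedded in a first-order actuator speed model and integrated with forward Euler. The structure, parameter names, and values are assumptions made for illustration; this is not the algorithm developed in the paper.

```python
# Hedged sketch (illustrative assumptions, not the paper's algorithm):
# first-order actuator speed dynamics with Coulomb friction and a stick condition.
import math

def simulate(t_end=1.0, dt=1e-4,
             tau=0.02,        # first-order time constant [s] (assumed)
             gain=10.0,       # steady-state speed per unit drive (assumed)
             t_coulomb=0.3,   # Coulomb friction level, normalised (assumed)
             command=lambda t: math.sin(2 * math.pi * 5 * t)):
    """Return (time, speed) samples of the reduced-order actuator model."""
    speed, history = 0.0, []
    for k in range(int(t_end / dt)):
        t = k * dt
        drive = command(t)                              # normalised driving effort
        if abs(speed) < 1e-6 and abs(drive) <= t_coulomb:
            speed = 0.0                                 # stick: friction balances drive
        else:
            # Slip: Coulomb friction opposes the motion (or the drive when starting).
            friction = t_coulomb * math.copysign(1.0, speed if speed else drive)
            # Forward Euler step of the first-order lag toward the corrected target.
            speed += dt / tau * (gain * (drive - friction) - speed)
        history.append((t, speed))
    return history

trace = simulate()
print("final speed:", round(trace[-1][1], 4))
```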
Adhesive restoration of endodontically treated premolars: influence of posts on cuspal deflection
To determine, by means of a non-destructive experimental procedure, the effectiveness of adhesive restorations in reducing the cuspal deflection of endodontically treated premolars, with or without root canal fiber posts.
MATERIALS AND METHODS:
The cuspal deflection of ten sound, intact maxillary premolars was evaluated. A loading device induced deformation by applying an axial force (ranging from 98 to 294 N) on the occlusal surface of the teeth while laser sensors registered the amount of deflection. Once tested, the teeth were endodontically treated and the marginal ridges were removed. The teeth were randomly divided into two groups and restored with: group 1) dual-curing adhesive, flowable composite, and microhybrid composite; group 2) the same materials combined with a root canal glass fiber post and composite cement. The cuspal deflection test was repeated with the same protocol after the restorative procedures, allowing a direct comparison of the same samples. Statistical analysis was performed using ANOVA at a significance level of 0.05.
RESULTS:
Different average cuspal deflection was detected in the two groups: composite resin with post insertion resulted in lower deformation compared with composite alone. Mean deflection ranged from 3.43 to 12.17 μm in intact teeth, from 14.42 to 26.93 μm in group 1, and from 15.35 to 20.39 μm in group 2. ANOVA found significant differences (p = 0.02).
CONCLUSION:
Bonded composite restorations with fiber posts may be more effective than composite alone in reducing cuspal deflection in endodontically treated premolars in which the marginal ridges have been lost.
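For readers who want to reproduce the kind of statistical comparison described above, the following hedged sketch runs a one-way ANOVA on two groups of cuspal deflection readings with SciPy. The values are hypothetical placeholders, not the study's measurements.

```python
# Hedged sketch: one-way ANOVA comparing deflection between two restoration groups.
# The readings below are hypothetical placeholders (micrometres), not study data.
from scipy.stats import f_oneway

group1_deflection = [14.4, 18.2, 22.7, 26.9, 20.1]   # composite only (hypothetical)
group2_deflection = [15.4, 16.8, 18.9, 20.4, 17.2]   # composite + fiber post (hypothetical)

f_stat, p_value = f_oneway(group1_deflection, group2_deflection)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")   # significant at the 0.05 level if p < 0.05
```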
Up-regulation of prostaglandin biosynthesis by leukotriene C4 in elicited mice peritoneal macrophages activated with lipopolysaccharide/interferon-gamma
Leukotrienes (LT) and prostaglandins (PG) are proinflammatory mediators generated by the conversion of arachidonic acid via the 5-lipoxygenase (5-LO) and cyclooxygenase (COX) pathways. It has long been proposed that inhibition of 5-LO could enhance the COX pathway, leading to increased PG generation. We have found that in in vitro models of inflammation, such as elicited mouse peritoneal macrophages activated with lipopolysaccharide (LPS)/interferon-γ (IFN-γ), deletion of the gene encoding 5-LO, or inhibition of the enzyme activity, corresponded to a negative modulation of the COX pathway. Moreover, exogenously added LTC4, but not LTD4, LTE4, or LTB4, was able to increase PG production in stimulated cells from 5-LO wild-type and knockout mice. LTC4 was not able to induce COX-2 expression by itself but rather potentiated the action of LPS/IFN-γ through extracellular signal-regulated kinase-1/2 activation, as demonstrated by the use of a specific mitogen-activated protein kinase (MAPK) kinase inhibitor. The LT-induced increase in PG generation, as well as MAPK activation, depended on a specific ligand-receptor interaction, as demonstrated by the use of a cys-LT1 receptor antagonist, although a direct action of the antagonist itself on PG generation cannot be excluded. Thus, the balance between COX and 5-LO metabolites could be of great importance in controlling macrophage functions and, consequently, inflammation and tumor promotion.
Reverse Engineering of TopHat: Splice Junction Mapper for Improving Computational Aspect
TopHat is a fast splice junction mapper for Next Generation Sequencing analysis, a technology for functional genomic research. Next Generation Sequencing technology allows more accurate analysis while increasing the amount of data to process, which opens new challenges in terms of the development of tools and computational infrastructures. We present a solution that covers both software and hardware aspects: the first, after a reverse engineering phase, provides an improvement of the TopHat algorithm that makes it parallelizable; the second is the implementation of a hybrid infrastructure combining grid and virtual grid computing. Moreover, the system provides a multi-sample environment and is able to process samples automatically, in a way that is totally transparent to the user.
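The parallelisation and multi-sample ideas summarised above can be illustrated with a small scheduling sketch: independent read chunks (or samples) are mapped concurrently across worker processes. The `align_chunk` function below is a hypothetical placeholder for the splice-junction mapping step; it is not TopHat's actual interface, and the file names are illustrative.

```python
# Hedged sketch: fan independent chunks/samples out over worker processes.
# `align_chunk` is a hypothetical stand-in for the real mapping step.
from concurrent.futures import ProcessPoolExecutor

def align_chunk(chunk_path):
    # Placeholder: in a real pipeline this would invoke the mapper on one chunk.
    return f"{chunk_path}: aligned"

chunks = [f"chunk_{i}.fastq" for i in range(8)]   # produced by a prior split step

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(align_chunk, chunks):
            print(result)
```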