Compression and strength behaviour of viscose/polypropylene nonwoven fabrics
Compression and strength properties of viscose/polypropylene nonwoven fabrics have been studied. Compression behavior of the nonwoven samples (sample compressibility, sample thickness loss & sample compressive resilience) has been analyzed considering the magnitude of applied pressure, fabric weight, fabric thickness, and the porosity of the samples. Based on the calculated porosity of the samples, pore compression behavior (pore compressibility, porosity loss & pore compressive resilience) is determined. Equations for the determination of pore compressibility, porosity loss, and pore compressive resilience are established. Tensile strength and elongation as well as bursting strength and ball traverse elongation are also determined. The results show that both the sample compression behavior and the pore compression behavior depend on the magnitude of applied pressure. At a high level of applied pressure, a sample with higher compressibility has lower sample compressive resilience. Differences between the investigated samples have also been registered in pore compressibility and porosity loss, but not in pore compressive resilience. The sample with higher fabric weight, higher thickness, and lower porosity shows lower sample compressibility, pore compressibility, sample thickness loss, porosity loss, and tensile elongation, but higher tensile strength, bursting strength, and ball traverse elongation.
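The quantities named above follow standard textile definitions; as a minimal sketch (the formulas are the usual ones for nonwovens, and the sample values and fibre density below are invented for illustration, not data from the paper):

```python
# Standard nonwoven porosity and compression quantities, sketched in
# pure Python. All numeric inputs here are illustrative assumptions.

def porosity(mass_g_m2, thickness_mm, fibre_density_g_cm3):
    """Porosity = 1 - (fabric bulk density / fibre density)."""
    bulk = mass_g_m2 / (thickness_mm * 1000.0)  # fabric bulk density, g/cm^3
    return 1.0 - bulk / fibre_density_g_cm3

def compressibility(t0_mm, tp_mm):
    """Thickness loss under pressure relative to initial thickness, %."""
    return 100.0 * (t0_mm - tp_mm) / t0_mm

def compressive_resilience(t_rec_mm, t0_mm, tp_mm):
    """Fraction of lost thickness recovered after pressure removal, %."""
    return 100.0 * (t_rec_mm - tp_mm) / (t0_mm - tp_mm)

# e.g. a 150 g/m^2, 2.0 mm sample of 1.5 g/cm^3 fibre:
p = porosity(150.0, 2.0, 1.5)        # 0.95
c = compressibility(2.0, 1.5)        # 25.0 %
r = compressive_resilience(1.9, 2.0, 1.5)  # 80.0 %
```

The same definitions, applied to the thicknesses before and after pressure, give the pore-level quantities when combined with the porosity formula.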
Quality of clothing fabrics in terms of their comfort properties
The quality of various woven clothing fabrics with respect to their comfort properties, such as electro-physical properties, air permeability, and compression properties, has been studied. Fabrics are produced from cotton and cotton/polyester fibre blends in plain, twill, satin, and basket weaves. Results show that cotton fabrics have lower values of volume resistivity, air permeability, and compressive resilience, but higher values of effective relative dielectric permeability and compressibility, as compared to fabrics produced from cotton/PES fibre blends. Regression analysis shows a strong linear correlative relationship between the air permeability and the porosity of the woven fabrics, with a very high coefficient of linear correlation (0.9807). It is also observed that comfort properties are determined by the structure of the woven fabrics (raw material composition, type of weave) as well as by the fabrics' surface condition. Findings of the studies have been used for estimating the quality of woven fabrics in terms of their comfort properties by application of a ranking method. It is concluded that the group of cotton fabrics exhibits better comfort quality as compared to the group of cotton/PES blend fabrics.
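The reported linear correlation between air permeability and porosity corresponds to an ordinary least-squares fit; a minimal pure-Python sketch (the porosity/permeability pairs are invented illustrative data, not measurements from the study):

```python
# Ordinary least-squares line fit and Pearson correlation coefficient.
# The data points below are invented for illustration only.
import math

def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)  # Pearson correlation coefficient
    return slope, intercept, r

porosity = [0.60, 0.65, 0.70, 0.75, 0.80]          # fractional porosity
permeability = [120.0, 150.0, 178.0, 210.0, 240.0]  # e.g. mm/s
slope, intercept, r = linfit(porosity, permeability)
# a nearly linear relation yields r close to 1
```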
Analysing Astronomy Algorithms for GPUs and Beyond
Astronomy depends on ever-increasing computing power. Processor clock-rates have plateaued, and increased performance now appears in the form of additional processor cores on a single chip. This poses significant challenges to the astronomy software community. Graphics Processing Units (GPUs), now capable of general-purpose computation, exemplify both the difficult learning curve and the significant speedups exhibited by massively-parallel hardware architectures. We present a generalised approach to tackling this paradigm shift, based on the analysis of algorithms. We describe a small collection of foundation algorithms relevant to astronomy and explain how they may be used to ease the transition to massively-parallel computing architectures. We demonstrate the effectiveness of our approach by applying it to four well-known astronomy problems: Hogbom CLEAN, inverse ray-shooting for gravitational lensing, pulsar dedispersion, and volume rendering. Algorithms with well-defined memory access patterns and high arithmetic intensity stand to receive the greatest performance boost from massively-parallel architectures, while those that involve a significant amount of decision-making may struggle to take advantage of the available processing power.
Comment: 10 pages, 3 figures, accepted for publication in MNRAS
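Pulsar dedispersion, one of the four case studies, illustrates the kind of regular memory-access pattern the abstract describes: every frequency channel is shifted by a known delay and the channels are summed, with no data-dependent branching. A minimal pure-Python sketch (channel count, delays, and data are illustrative assumptions, not a physical dispersion-measure model):

```python
# Brute-force incoherent dedispersion: shift each frequency channel by
# its dispersion delay (in whole samples) and sum across channels.
# The inner loops are independent, which is what makes this pattern
# map well onto massively-parallel hardware.

def dedisperse(channels, delays):
    """channels: equal-length sample lists, one per frequency band.
    delays: integer sample delay for each channel."""
    n = len(channels[0])
    out = [0.0] * n
    for chan, d in zip(channels, delays):
        for t in range(n):
            src = t + d
            if src < n:
                out[t] += chan[src]
    return out

# A pulse that arrives progressively later in lower-frequency channels:
chans = [
    [0, 0, 1, 0, 0, 0],  # highest frequency, delay 0
    [0, 0, 0, 1, 0, 0],  # delayed by 1 sample
    [0, 0, 0, 0, 1, 0],  # delayed by 2 samples
]
ts = dedisperse(chans, [0, 1, 2])
# after dedispersion the pulse aligns and sums at t = 2
```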
Refactoring GrPPI: Generic Refactoring for Generic Parallelism in C++
Funding: EU Horizon 2020 project, TeamPlay (https://www.teamplay-xh2020.eu), Grant Number 779882; UK EPSRC Discovery, Grant Number EP/P020631/1; and Madrid Regional Government, CABAHLA-CM (ConvergenciA Big dAta-Hpc: de Los sensores a las Aplicaciones), Grant Number S2018/TCS-4423.
The Generic Reusable Parallel Pattern Interface (GrPPI) is a very useful abstraction over different parallel pattern libraries, allowing the programmer to write generic patterned parallel code that can easily be compiled to different backends such as FastFlow, OpenMP, Intel TBB, and C++ threads. However, rewriting legacy code to use GrPPI still involves code transformations that can be highly non-trivial, especially for programmers who are not experts in parallelism. This paper describes software refactorings to semi-automatically introduce instances of GrPPI patterns into sequential C++ code, as well as safety-checking static analysis mechanisms that verify that introducing patterns into the code does not introduce concurrency-related bugs such as race conditions. We demonstrate the refactorings and safety-checking mechanisms on four simple benchmark applications, showing that we are able to obtain, with little effort, GrPPI-based parallel versions that achieve good speedups (comparable to those of manually produced parallel versions) using different pattern backends.
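The refactoring described, turning an explicit sequential loop into a backend-agnostic pattern instance, can be sketched by analogy. GrPPI itself is C++; the Python sketch below only mirrors the idea of expressing a computation as a map pattern whose execution backend is chosen independently (all names here are illustrative, not GrPPI's API):

```python
# Analogy only: a sequential loop refactored into a generic "map"
# pattern with swappable execution backends, mirroring the
# backend-agnostic style of GrPPI patterns.
from concurrent.futures import ThreadPoolExecutor

def seq_map(fn, xs):           # "sequential backend"
    return [fn(x) for x in xs]

def pool_map(fn, xs):          # "parallel backend" (thread pool)
    with ThreadPoolExecutor() as ex:
        return list(ex.map(fn, xs))

def scale(x):
    return 2 * x

data = [1, 2, 3, 4]

# Before refactoring: an explicit loop over the data.
out_loop = []
for x in data:
    out_loop.append(scale(x))

# After refactoring: the same computation as a map pattern; the
# backend can be swapped without touching the computation itself.
assert seq_map(scale, data) == pool_map(scale, data) == out_loop
```

The safety checks described in the paper would, in this analogy, verify that `scale` has no loop-carried dependencies before the loop is rewritten.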
SignS: a parallelized, open-source, freely available, web-based tool for gene selection and molecular signatures for survival and censored data
Background: Censored data are increasingly common in many microarray studies that attempt to relate gene expression to patient survival. Several new methods have been proposed in the last two years. Most of these methods, however, are not available to biomedical researchers, leading to many re-implementations from scratch of ad hoc, and suboptimal, approaches to survival data.
Results: We have developed SignS (Signatures for Survival data), an open-source, freely available, web-based tool and R package for gene selection, building molecular signatures, and prediction with survival data. SignS implements four methods which, according to existing reviews, perform well and, being of very different natures, offer complementary approaches. We use parallel computing via MPI, leading to large decreases in user waiting time. Cross-validation is used to assess predictive performance and stability of solutions, the latter an issue of increasing concern given that there are often several solutions with similar predictive performance. Biological interpretation of results is enhanced because genes and signatures in models can be sent to other freely available on-line tools for examination of PubMed references, GO terms, and KEGG and Reactome pathways of selected genes.
Conclusion: SignS is the first web-based tool for survival analysis of expression data, and one of the very few with biomedical researchers as target users. SignS is also one of the few bioinformatics web-based applications to make extensive use of parallelization, including fault tolerance and crash recovery. Because of its combination of implemented methods, use of parallel computing, code availability, and links to additional databases, SignS is a unique tool and will be of immediate relevance to biomedical researchers, biostatisticians, and bioinformaticians.
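The cross-validation loop at the heart of such a tool partitions the samples into folds, fits on all but one fold, and evaluates on the held-out fold; this is also the part that parallelizes naturally across folds. A minimal pure-Python sketch of the fold construction (SignS itself is an R package; nothing here is its actual code):

```python
# Minimal k-fold cross-validation index generator. Each sample lands
# in exactly one test fold; the per-fold fit/evaluate work is
# independent, which is why it parallelizes well (e.g. via MPI).

def kfold_indices(n, k):
    """Yield (train, test) index lists for k roughly equal folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield sorted(train), sorted(test)

splits = list(kfold_indices(10, 5))
# Every sample appears in exactly one test fold:
all_test = sorted(i for _, test in splits for i in test)
assert all_test == list(range(10))
```

Stability of solutions, as discussed above, can then be gauged by comparing the gene sets selected across the k training folds.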
Impact of clinical phenotypes on management and outcomes in European atrial fibrillation patients: a report from the ESC-EHRA EURObservational Research Programme in AF (EORP-AF) General Long-Term Registry
Background: Epidemiological studies in atrial fibrillation (AF) illustrate that clinical complexity increases the risk of major adverse outcomes. We aimed to describe European AF patients' clinical phenotypes and analyse their differential clinical course. Methods: We performed a hierarchical cluster analysis based on Ward's method and squared Euclidean distance using 22 binary clinical variables, identifying the optimal number of clusters. We investigated differences in clinical management, use of healthcare resources, and outcomes in a cohort of European AF patients from a Europe-wide observational registry. Results: A total of 9363 patients were available for this analysis. We identified three clusters: Cluster 1 (n = 3634; 38.8%), characterized by older patients and prevalent non-cardiac comorbidities; Cluster 2 (n = 2774; 29.6%), characterized by younger patients with a low prevalence of comorbidities; and Cluster 3 (n = 2955; 31.6%), characterized by prevalent cardiovascular risk factors/comorbidities. Over a mean follow-up of 22.5 months, Cluster 3 had the highest rate of cardiovascular events, all-cause death, and the composite outcome (combining the previous two) compared to Cluster 1 and Cluster 2 (all P < .001). An adjusted Cox regression showed that, compared to Cluster 2, Cluster 3 (hazard ratio (HR) 2.87, 95% confidence interval (CI) 2.27–3.62; HR 3.42, 95% CI 2.72–4.31; HR 2.79, 95% CI 2.32–3.35) and Cluster 1 (HR 1.88, 95% CI 1.48–2.38; HR 2.50, 95% CI 1.98–3.15; HR 2.09, 95% CI 1.74–2.51) showed a higher risk for the three outcomes, respectively. Conclusions: In European AF patients, three main clusters were identified, differentiated by the presence of comorbidities. Both the non-cardiac and the cardiac comorbidity clusters were associated with an increased risk of major adverse outcomes.
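The clustering ingredients named in the Methods, squared Euclidean distance over binary clinical variables and Ward's merge criterion, can be sketched briefly. The patient vectors below are invented illustrations (the study used 22 binary variables per patient), and the functions are textbook definitions, not the authors' code:

```python
# Squared Euclidean distance between binary comorbidity vectors, and
# Ward's criterion for the cost of merging two clusters. Patient
# vectors here are invented; the study used 22 binary variables.

def sq_euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ward_cost(ci, cj, ni, nj):
    """Increase in within-cluster variance when merging clusters with
    centroids ci, cj and sizes ni, nj (Ward's minimum-variance rule)."""
    return (ni * nj) / (ni + nj) * sq_euclidean(ci, cj)

patient_a = [1, 0, 1, 1, 0]
patient_b = [1, 1, 0, 1, 0]
patient_c = [0, 0, 0, 0, 0]
# a and b share more comorbidities, so they are closer than a and c:
assert sq_euclidean(patient_a, patient_b) < sq_euclidean(patient_a, patient_c)
```

Agglomerative clustering repeatedly merges the pair of clusters with the smallest `ward_cost` until the chosen number of clusters (here, three) remains.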
Application instrumentation for performance analysis and tuning with focus on energy efficiency
Profiling and tuning of parallel applications is an essential part of HPC. Analysis and elimination of application hot spots can be performed using many available tools, which also provide resource-consumption measurements for instrumented parts of the code. Since complex applications show different behavior in each part of the code, it is essential to be able to insert instrumentation to analyse these parts. Because each performance analysis or autotuning tool can bring different insights into an application's behavior, it is valuable to analyze and optimize an application using a variety of them. We present our on-request-inserted shared C/C++ API for the most common open-source HPC performance analysis tools, which simplifies the process of manual instrumentation. Besides manual instrumentation, profiling libraries provide different methods of instrumentation. Of these, binary patching is the most universal mechanism, and it greatly improves the user-friendliness and robustness of a tool. We provide an overview of the most commonly used binary patching tools and describe a workflow for using them to implement a binary instrumentation tool for any profiler or autotuner. We have also evaluated the minimum overhead of manual and binary instrumentation.
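Manual instrumentation of the kind described usually amounts to named start/stop markers wrapped around hot regions of code. The abstract concerns a shared C/C++ API; the sketch below only illustrates the same region-marking idea in Python, with all names chosen for illustration:

```python
# Illustrative region-based instrumentation: a context manager that
# accumulates wall-clock time per named region, the same start/stop
# pattern a manual C/C++ instrumentation API exposes.
import time
from contextlib import contextmanager

timings = {}  # region name -> accumulated seconds

@contextmanager
def region(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + (time.perf_counter() - start)

# Wrap a hot spot:
with region("solver"):
    total = sum(i * i for i in range(100_000))
```

Binary patching achieves the same effect without source edits, by rewriting the compiled binary to call the start/stop hooks at function entry and exit.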
Golden Gate: Bridging the resource-efficiency gap between ASICs and FPGA prototypes
We present Golden Gate, an FPGA-based simulation tool that decouples the timing of an FPGA host platform from that of the target RTL design. In contrast to previous work in static time-multiplexing of FPGA resources, Golden Gate employs the Latency-Insensitive Bounded Dataflow Network (LI-BDN) formalism to decompose the simulator into subcomponents, each of which may be independently and automatically optimized. This structure allows Golden Gate to support a broad class of optimizations that improve resource utilization by implementing FPGA-hostile structures over multiple cycles, while the LI-BDN formalism ensures that the simulator still produces bit- and cycle-exact results. To verify that these optimizations are implemented correctly, we also present LIME, a model-checking tool that provides a push-button flow for checking whether optimized subcomponents adhere to an associated correctness specification, while also guaranteeing forward progress. Finally, we use Golden Gate to generate a cycle-exact simulator of a multi-core SoC, where we reduce LUT utilization by up to 26% by coercing multi-ported, combinationally read memories into simulation models backed by time-multiplexed block RAMs, enabling us to simulate 50% more cores on a single FPGA.
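The memory optimization mentioned at the end, serving a multi-ported memory from a single-ported RAM over several host cycles, can be modelled abstractly. This toy Python model is only an illustration of the time-multiplexing idea under assumed names; it is not Golden Gate's implementation:

```python
# Toy model of time-multiplexing: one target-cycle 2-port read is
# served by a single-ported backing store over two host cycles.
# Class and method names are illustrative assumptions.

class TimeMultiplexedMem:
    def __init__(self, data):
        self.data = list(data)   # single-ported backing storage
        self.host_cycles = 0     # host cycles consumed so far

    def read2(self, addr_a, addr_b):
        """Simulate one target cycle of a 2-read-port memory,
        spending two host cycles on the single physical port."""
        a = self.data[addr_a]; self.host_cycles += 1
        b = self.data[addr_b]; self.host_cycles += 1
        return a, b

mem = TimeMultiplexedMem([10, 20, 30, 40])
# Functionally identical to a true 2-port read, at 2 host cycles each:
assert mem.read2(0, 3) == (10, 40)
assert mem.host_cycles == 2
```

The latency-insensitive (LI-BDN) framing is what lets such a multi-cycle model replace the original structure without perturbing the simulated target-cycle results.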