
    Effectiveness of the common method in balancing exhaust ventilation systems

    Dampers are used to adjust the distribution of airflows in duct systems. The common (i.e., most commonly used) method is to adjust each damper in turn so that the airflow through its branch equals the desired level. Typically, the airflow through each branch duct is estimated from the centerline velocity pressure. To test the effectiveness of that approach, dampers were adjusted on a seven-branch, full-sized experimental duct system. After adjusting the dampers for a given condition, the percent excess airflow (%Qexcess) for the system was estimated as the flow above the ideal fan airflow, i.e., the fan airflow that would exist if the fan speed were reduced until the lowest ratio of airflow to airflow goal among the branches was unity. The lower the percent excess airflow, the more nearly balanced the system. The results varied with the level of the target airflows. The excess airflow was least (5.3%) for the low-airflow targets. The excess for the moderate targets (8.5%) was greater than the excess for the perfect targets (6.56%). These values were much higher than the mean value of 2.13% found by Dodrill (2004) on the same system for the SPh Goal Ratio method proposed by Guffey (2005). As expected, the measured system pressure increased as the dampers were choked down to achieve the targets.
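
    A back-of-the-envelope sketch of the %Qexcess metric as described above (the function and branch numbers are illustrative, and the linear fan-law scaling of flow with fan speed is my assumption, not stated in the abstract):

        # Sketch of the percent-excess-airflow metric. Assumes airflows scale
        # linearly with fan speed (fan law), so dividing by the smallest
        # flow/goal ratio models slowing the fan until the worst-served
        # branch exactly meets its goal.

        def percent_excess_airflow(flows, goals):
            """flows, goals: per-branch airflows after damper adjustment."""
            ratios = [q / g for q, g in zip(flows, goals)]
            scale = 1.0 / min(ratios)        # slow fan so lowest ratio = 1
            total = scale * sum(flows)       # total flow at the reduced speed
            ideal = sum(goals)               # fan flow if balance were perfect
            return 100.0 * (total - ideal) / ideal

        # Hypothetical seven-branch system with equal targets:
        print(percent_excess_airflow(
            flows=[510, 498, 530, 505, 490, 520, 515],
            goals=[500, 500, 500, 500, 500, 500, 500]))   # ~4.0%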

    Harnessing the Power of Many: Extensible Toolkit for Scalable Ensemble Applications

    Many scientific problems require multiple distinct computational tasks to be executed in order to achieve a desired solution. We introduce the Ensemble Toolkit (EnTK) to address the challenges of scale, diversity, and reliability that such ensemble applications pose. We describe the design and implementation of EnTK, characterize its performance, and integrate it with two distinct exemplar use cases: seismic inversion and adaptive analog ensembles. We perform nine experiments, characterizing EnTK overheads, strong and weak scalability, and the performance of the two use-case implementations, at scale and on production infrastructures. We show how EnTK meets the following general requirements: (i) dedicated abstractions to support the description and execution of ensemble applications; (ii) support for execution on heterogeneous computing infrastructures; (iii) efficient scalability up to O(10^4) tasks; and (iv) fault tolerance. We discuss the novel computational capabilities that EnTK enables and the scientific advantages arising therefrom. We propose EnTK as an important addition to the suite of tools in support of production scientific computing.
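
    The "dedicated abstractions" mentioned in requirement (i) are EnTK's documented pipeline-stage-task (PST) model. A minimal sketch of an ensemble of tasks expressed in that model follows; exact attribute names and the resource description vary between EnTK versions, and the hostname/port values are placeholders:

        # Minimal EnTK-style ensemble sketch (PST model); attribute names
        # follow the radical.entk documentation but may differ by version.
        from radical.entk import Pipeline, Stage, Task, AppManager

        s = Stage()
        for i in range(16):                  # one task per ensemble member
            t = Task()
            t.executable = '/bin/echo'
            t.arguments = ['member %d' % i]
            s.add_tasks(t)

        p = Pipeline()
        p.add_stages(s)

        appman = AppManager(hostname='localhost', port=5672)  # placeholders
        appman.resource_desc = {'resource': 'local.localhost',
                                'walltime': 10,
                                'cpus': 4}
        appman.workflow = set([p])
        appman.run()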

    U.S.-Based Global Intellectual Property Creation: An Analysis

    Summarizes an analysis of U.S.-origin applications in the international Patent Cooperation Treaty database, with a focus on where innovation is occurring: in which states, in which companies and universities, and in which technical areas.

    High-throughput Binding Affinity Calculations at Extreme Scales

    Resistance to chemotherapy and molecularly targeted therapies is a major factor limiting the effectiveness of cancer treatment. In many cases, resistance can be linked to genetic changes in target proteins, either pre-existing or evolutionarily selected during treatment. Key to overcoming this challenge is an understanding of the molecular determinants of drug binding. Using multi-stage pipelines of molecular simulations, we can gain insights into the binding free energy and the residence time of a ligand, which can inform both stratified and personalized treatment regimes and drug development. To support the scalable, adaptive, and automated calculation of binding free energies on high-performance computing resources, we introduce the High-throughput Binding Affinity Calculator (HTBAC). HTBAC uses a building-block approach in order to attain both workflow flexibility and performance. We demonstrate close to perfect weak scaling to hundreds of concurrent multi-stage binding affinity calculation pipelines. This permits a rapid time-to-solution that is essentially independent of the calculation protocol, the size of the candidate ligands, and the number of ensemble simulations. As such, HTBAC advances the state of the art of binding affinity calculations and protocols.
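
    The execution pattern the abstract describes, many concurrent pipelines, each a fixed sequence of simulation stages per ligand, can be sketched generically. This is not HTBAC's actual API; the stage names, ligand labels, and thread-pool execution are illustrative stand-ins for the building-block pattern:

        # Generic illustration of many concurrent multi-stage pipelines;
        # NOT the HTBAC API, just the execution pattern it describes.
        from concurrent.futures import ThreadPoolExecutor

        STAGES = ['minimize', 'equilibrate', 'production', 'analysis']

        def run_stage(ligand, stage):
            # Placeholder for launching one simulation task for this ligand.
            return f'{ligand}:{stage}:done'

        def pipeline(ligand):
            # Stages run in order within a pipeline...
            return [run_stage(ligand, s) for s in STAGES]

        ligands = [f'ligand-{i:03d}' for i in range(100)]
        with ThreadPoolExecutor(max_workers=32) as pool:
            # ...while the pipelines themselves run concurrently.
            results = list(pool.map(pipeline, ligands))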

    See Through the Fog: Curriculum Learning with Progressive Occlusion in Medical Imaging

    In recent years, deep learning models have revolutionized medical image interpretation, offering substantial improvements in diagnostic accuracy. However, these models often struggle with challenging images in which critical features are partially or fully occluded, a common scenario in clinical practice. In this paper, we propose a novel curriculum learning-based approach to train deep learning models to handle occluded medical images effectively. Our method progressively introduces occlusion, starting from clear, unobstructed images and gradually moving to images with increasing occlusion levels. This ordered learning process, akin to human learning, allows the model to first grasp simple, discernible patterns and subsequently build upon this knowledge to understand more complicated, occluded scenarios. Furthermore, we present three novel occlusion synthesis methods, namely Wasserstein Curriculum Learning (WCL), Information Adaptive Learning (IAL), and Geodesic Curriculum Learning (GCL). Our extensive experiments on diverse medical image datasets demonstrate substantial improvements in model robustness and diagnostic accuracy over conventional training methodologies.
    Comment: 20 pages, 3 figures, 1 table
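
    The core idea, occlusion that grows with training progress, admits a simple sketch. The linear schedule and random square masks below are my assumptions for illustration; they are not the paper's WCL/IAL/GCL synthesis methods:

        # Illustrative progressive-occlusion curriculum (assumed linear
        # schedule and square masks; not the paper's WCL/IAL/GCL methods).
        import numpy as np

        def occlude(image, fraction, rng):
            """Zero out a random square covering `fraction` of the area."""
            h, w = image.shape[:2]
            side = int(np.sqrt(fraction) * min(h, w))
            if side == 0:
                return image
            y = rng.integers(0, h - side + 1)
            x = rng.integers(0, w - side + 1)
            out = image.copy()
            out[y:y + side, x:x + side] = 0
            return out

        rng = np.random.default_rng(0)
        n_epochs = 20
        for epoch in range(n_epochs):
            fraction = 0.5 * epoch / (n_epochs - 1)   # 0% -> 50% occluded
            batch = rng.random((8, 64, 64))           # stand-in image batch
            occluded = np.stack([occlude(im, fraction, rng) for im in batch])
            # ... train the model on `occluded` at this difficulty level ...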

    SYNTHESIS, MOLECULAR MODELING, AND QUANTITATIVE STRUCTURE–ACTIVITY RELATIONSHIP STUDIES OF UNDEC-10-ENEHYDRAZIDE DERIVATIVES AS ANTIMICROBIAL AGENTS

    Objective: In recent years, the increasing frequency and severity of antimicrobial resistance to different antimicrobial agents demand new remedies for the treatment of infections. Therefore, in this study, a series of undec-10-enehydrazide derivatives was synthesized and screened for in vitro activity against selected pathogenic microbial strains.
    Methods: The synthesis of the intermediate and target compounds was performed by standard procedures. The synthesized compounds were screened for antimicrobial activity by the tube dilution method. A molecular docking study of the synthesized derivatives was also performed to find out their interaction with the target site of β-ketoacyl-acyl carrier protein synthase III (FabH; PDB ID: 3IL7). Quantitative structure–activity relationship (QSAR) studies were also performed to correlate antimicrobial activity with structural properties of the synthesized molecules.
    Results: Antimicrobial screening showed that compound 8, having a benzylidene moiety with methoxy groups at the meta and para positions, and compound 16, having a 3-chloro-2-(3-fluorophenyl)-4-oxoazetidine moiety, were the most potent. QSAR studies revealed the importance of the Randic topology parameter (R) in describing the antimicrobial activity of the synthesized derivatives. The molecular docking study indicated hydrophobic interaction of the deeply inserted aliphatic side chain of the ligand with FabH. The N-atoms of the hydrazide moiety interact with Ala246 and Asn247 through H-bonding. The m- and p-methoxy groups form H-bonds with water and the side chain of Arg36, respectively.
    Conclusion: Compound 8, having a benzylidene moiety with methoxy groups at the meta and para positions, and compound 16, having a 3-chloro-2-(3-fluorophenyl)-4-oxoazetidine moiety, were found to be the most potent antibacterial and antifungal compounds, respectively.
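
    For reference, the Randic topology parameter highlighted by the QSAR results is the Randic connectivity index: the sum over bonds of 1/sqrt(deg(u)*deg(v)) on the hydrogen-suppressed molecular graph. A minimal sketch (the butane example is mine, not from the paper):

        # Randic connectivity index of a molecular graph.
        import math
        import networkx as nx

        def randic_index(G):
            return sum(1.0 / math.sqrt(G.degree(u) * G.degree(v))
                       for u, v in G.edges())

        # n-butane's carbon skeleton as a simple path graph:
        butane = nx.path_graph(4)
        print(randic_index(butane))   # 1/sqrt(2) + 1/2 + 1/sqrt(2) ~= 1.914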

    Transcending Grids: Point Clouds and Surface Representations Powering Neurological Processing

    In healthcare, accurately classifying medical images is vital, but conventional methods often hinge on medical data with a consistent grid structure, which may restrict their overall performance. Recent medical research has focused on tweaking architectures to attain better performance without giving due consideration to the representation of the data. In this paper, we present a novel approach for transforming grid-based data into higher-dimensional representations, leveraging unstructured point cloud data structures. We first generate a sparse point cloud from an image by integrating pixel color information as spatial coordinates. Next, we construct a hypersurface composed of points based on the image dimensions, with each smooth section within this hypersurface symbolizing a specific pixel location. Polygonal face construction is achieved using an adjacency tensor. Finally, a dense point cloud is generated by densely sampling the constructed hypersurface, with a focus on regions of higher detail. The effectiveness of our approach is demonstrated on a publicly accessible brain tumor dataset, achieving significant improvements over existing classification techniques. This methodology allows the extraction of intricate details from the original image, opening up new possibilities for advanced image analysis and processing tasks.
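
    The first step, lifting a grid image into a point cloud with intensity as an extra coordinate, can be sketched as follows. The thresholding and (x, y, intensity) layout are my assumptions for illustration; the hypersurface and adjacency-tensor steps are omitted:

        # Minimal sketch: lift a grayscale image grid into a sparse 3-D
        # point cloud with pixel intensity as the third coordinate.
        import numpy as np

        def image_to_point_cloud(image, threshold=0.1):
            """Return an (N, 3) array of (x, y, intensity) points."""
            ys, xs = np.nonzero(image > threshold)   # keep informative pixels
            return np.column_stack([xs, ys, image[ys, xs]]).astype(float)

        image = np.random.default_rng(1).random((64, 64))  # stand-in image
        cloud = image_to_point_cloud(image)
        print(cloud.shape)   # (N, 3), one row per retained pixel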