
    Generalized Points-to Graphs: A New Abstraction of Memory in the Presence of Pointers

    Flow- and context-sensitive points-to analysis is difficult to scale: top-down approaches repeatedly analyze the same procedure, while bottom-up approaches have lacked procedure-summary abstractions that scale while preserving precision. We propose a novel abstraction called the Generalized Points-to Graph (GPG), which views points-to relations as memory updates and generalizes them using counts of indirection levels, leaving the unknown pointees implicit. This allows us to construct GPGs as compact representations of bottom-up procedure summaries in terms of memory updates and the control flow between them. Their compactness is ensured by the following optimizations: strength reduction reduces the indirection levels, redundancy elimination removes redundant memory updates and minimizes control flow (without over-approximating data dependence between memory updates), and call inlining enhances the opportunities for these optimizations. We devise novel operations and data flow analyses for these optimizations. Our quest for scalability of points-to analysis leads to the following insight: the real killer of scalability in program analysis is not the amount of data but the amount of control flow that it may be subjected to in search of precision. The effectiveness of GPGs lies in the fact that they discard as much control flow as possible without losing precision (i.e., they preserve data dependence without over-approximation). This is why GPGs remain very small even for main procedures that contain the effect of the entire program, and it allows our implementation to scale to 158 kLoC of C programs.
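
    The abstract does not spell out the encoding, but the key idea, memory updates generalized by indirection-level counts with pointees left implicit, can be pictured as edges of the form x -(i,j)-> y, where i and j count the indirections applied to x and y. The Python below is a toy sketch of such an edge, our own illustrative encoding rather than the authors' implementation:

        # Toy sketch (our own encoding, not the paper's code): a GPG-style
        # edge records indirection levels instead of concrete pointees.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class GPGEdge:
            source: str   # variable being updated
            target: str   # variable supplying the value
            i: int        # indirection level on source (1 = x, 2 = *x, ...)
            j: int        # indirection level on target (0 = &y, 1 = y, ...)

        x_gets_addr_y = GPGEdge("x", "y", 1, 0)   # x = &y
        star_x_gets_y = GPGEdge("x", "y", 2, 1)   # *x = y: the pointee of x
                                                  # stays implicit, keeping
                                                  # the summary compact
        print(x_gets_addr_y, star_x_gets_y)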

    Flow and Segregation of Granular Materials during Heap Formation

    Segregation during the flow of granular materials is important from an industrial point of view. Granular materials segregate during flow owing to their physical properties (such as size, shape, and density), and considerable work has been done on granular segregation over the past two decades. This chapter is divided into three parts. In the first part, a review of work done on heap formation is presented. Experimental work on heap formation by intermittent feeding is reported in the second part. The system used is a simplified model for the feeding of raw materials to a blast furnace, which is widely used for the manufacture of iron and steel. Experiments were carried out in a 2-D system, with steel balls of 1 and 2 mm diameter used as model granular materials. Image analysis with an in-house computer code is used to detect the position of each particle; the accuracy and efficiency of the technique were found to be adequate for the spherical 1 and 2 mm steel balls used in all the cases studied. The chapter ends with concluding remarks.
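
    The in-house detection code is not described in the abstract; as a rough illustration, spherical steel balls in a 2-D image are commonly located with a Hough circle transform, sketched here with OpenCV (the file name, pixel scale, and radii are placeholder assumptions):

        # Illustrative sketch only; the chapter's in-house code is not public.
        import cv2

        img = cv2.imread("heap_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
        img = cv2.medianBlur(img, 5)  # suppress sensor noise before detection

        # Radii and minDist are in pixels and depend on the (assumed) image scale
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=8,
                                   param1=100, param2=20,
                                   minRadius=4, maxRadius=12)
        if circles is not None:
            for x, y, r in circles[0]:
                print(f"particle at ({x:.0f}, {y:.0f}), radius {r:.0f} px")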

    Reaching Community Through School Going Children for Sickle Cell Disease in Zankhvav Village of Surat District, Western India

    Objectives: The objective of the study was to explore a method of reaching the community via school-going adolescents for screening tribal people for Sickle Cell Disease. Methodology: A cross-sectional study was conducted on a total of 1023 school-going adolescents screened for Sickle Cell Disease by the Dithionite Tube Turbidity (DTT) test, with subsequent hemoglobin electrophoresis for confirmation among those testing positive on DTT. They were then advised to get their family members tested for the same. Results: 24.41% of school-going adolescents tested positive by the DTT test. 152 family members of the positive school children came forward for testing, of whom 53.94% tested positive on the DTT test. Of those who tested positive, 61% belonged to the Vasava caste and 36.6% to the Chaudhari caste, followed by the Gamit and Dhodhia Patel castes. On subsequent hemoglobin electrophoresis, 52.63% of the family members were found to be positive for Sickle Cell Trait and 1.97% for Sickle Cell Disease. Conclusions: School-going children can be an effective medium for approaching the community for Sickle Cell Disease screening.

    Augmented flow and reduced clogging of particles passing through small apertures by addition of fine grains

    The effect of adding a second, finer species on the flow and clogging of particles passing through an orifice is investigated. The flow rate of the main species is measured for various parameters: the mass ratio of the big species, the particle size ratio, and the orifice diameter. We show that when fine grains are added to the system, the flow rate of the larger species can be increased and its clogging significantly reduced. In particular, we were able to flow (without clogging) the big species through an orifice only 1.5 times its particle diameter. This allows for applications such as the alignment of particles in a narrow tube without clogging. A simple state diagram is presented to describe the clogging transition for these binary mixtures. The experimental results are compared with various existing models for the flow rate of binary mixtures.
    Fil: Gharat, Sandip. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Patagonia Confluencia; Argentina. Universidad Nacional de La Pampa. Facultad de Ciencias Exactas y Naturales. Departamento de Física; Argentina.
    Fil: Pugnaloni, Luis Ariel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Patagonia Confluencia; Argentina. Universidad Nacional de La Pampa. Facultad de Ciencias Exactas y Naturales. Departamento de Física; Argentina.
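
    The abstract does not name the flow-rate models that were compared; a standard baseline for granular discharge through an orifice is the Beverloo correlation, W = C ρ_b √g (D − k d)^{5/2}. The sketch below uses typical fitted constants as assumptions, not values from the paper:

        import math

        def beverloo_rate(rho_b, D, d, C=0.58, k=1.5, g=9.81):
            """Mass flow rate W [kg/s] through an orifice of diameter D [m]
            for particles of diameter d [m] and bulk density rho_b [kg/m^3].
            C and k are empirical; these are typical assumed values."""
            return C * rho_b * math.sqrt(g) * (D - k * d) ** 2.5

        # Example: 2 mm grains, bulk density ~4600 kg/m^3, 6 mm orifice
        print(beverloo_rate(rho_b=4600, D=6e-3, d=2e-3))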

    Star Classification: A Deep Learning Approach for Identifying Binary and Exoplanet Stars

    We present a novel approach for classifying stars as binary systems or exoplanet hosts using deep learning techniques. Our method applies feature extraction, a wavelet transformation, and a neural network to the light curves of stars to achieve high-accuracy results. We have also compiled a dataset of binary and exoplanet stars for training and validation by cross-matching observations from multiple space-based telescopes with catalogs of known binary and exoplanet stars. Applying the wavelet transformation to the light curves reduced the number of data points and improved the training time. Our algorithm has shown strong performance, with a test accuracy of 81.17%. This method can be applied to large datasets from current and future space-based telescopes, providing an efficient and accurate way of classifying stars. (Comment: in review at MNRAS.)
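
    The abstract does not give the wavelet settings; one common way to shrink a light curve before training is to keep only the coarse approximation coefficients of a discrete wavelet transform, as in this PyWavelets sketch (the wavelet family and decomposition level are assumptions):

        import numpy as np
        import pywt

        flux = np.random.default_rng(0).normal(1.0, 0.01, 4096)  # stand-in light curve

        coeffs = pywt.wavedec(flux, wavelet="db4", level=4)  # discrete wavelet transform
        approx = coeffs[0]  # coarse approximation, roughly 4096 / 2**4 samples
        print(len(flux), "->", len(approx), "samples fed to the classifier")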

    Estimation of redshift and associated uncertainty of Fermi/LAT extra-galactic sources with Deep Learning

    With the advancement of technology, machine learning-based analytical methods have pervaded nearly every discipline of modern research. In particular, a number of methods have been employed to estimate the redshift of gamma-ray loud active galactic nuclei (AGN), a class of supermassive black hole systems known for their intense multi-wavelength emission and violent variability. Determining the redshifts of AGNs is essential for measuring their distances, which in turn informs our understanding of the structure of the nearby universe. However, the task involves a number of challenges, such as the need for meticulous follow-up observations across multiple wavelengths and astronomical facilities. In this study, we employ a simple yet effective deep learning model, with a single hidden layer of 64 neurons and a dropout of 0.25 in the hidden layer, on a sample of AGNs with known redshifts from the latest AGN catalog, 4LAC-DR3, obtained from Fermi-LAT. We utilize their spectral, spatial, and temporal properties to robustly predict the redshifts of AGNs and to quantify the associated uncertainties, by modifying the model using two different variational inference methods. We achieve a correlation coefficient of 0.784 on the test set with the frequentist model, and 0.777 and 0.778 with the two variational-inference variants; when used to make predictions on the samples with unknown redshifts, the models yield mean predictions of 0.421, 0.415, and 0.393, with standard deviations of 0.258, 0.246, and 0.207, respectively. (Comment: in review at MNRAS; comments are appreciated.)
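
    The stated architecture (one hidden layer of 64 units with dropout 0.25) is straightforward to reproduce; the feature count below and the use of Monte Carlo dropout as one way to obtain predictive uncertainty are our assumptions, not the paper's published code:

        import numpy as np
        import tensorflow as tf

        n_features = 10  # placeholder for the spectral/spatial/temporal inputs

        inputs = tf.keras.Input(shape=(n_features,))
        hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
        hidden = tf.keras.layers.Dropout(0.25)(hidden, training=True)  # dropout stays
                                                                       # on at inference
        outputs = tf.keras.layers.Dense(1)(hidden)  # predicted redshift
        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")

        # Repeated stochastic forward passes give a mean prediction and a spread
        X = np.random.rand(5, n_features).astype("float32")  # dummy batch
        preds = np.stack([model(X, training=True).numpy() for _ in range(100)])
        print(preds.mean(axis=0).ravel(), preds.std(axis=0).ravel())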

    A real-world evidence study to evaluate the efficacy of software-driven digital therapeutics on major adverse cardiovascular events, vitals, adherence to medication and lifestyle changes among patients with coronary artery disease or post-coronary interventions

    Background: Coronary artery disease (CAD), a leading cause of cardiovascular disease (CVD) mortality worldwide, is a major health concern in India owing to its high prevalence. Acute coronary syndrome (ACS) is a prevalent form of CAD that requires prompt treatment. Digital therapeutics (DTx) is an emerging field that employs remote monitoring and behavioural change to manage disease, with promising outcomes in ACS and post-percutaneous coronary intervention (PCI) patients. This study evaluates the efficacy of a software-driven DTx intervention in enhancing outcomes for CAD patients. Methods: This pilot, single-centred, prospective, real-world evidence cohort study evaluates the effectiveness of a software-driven therapeutic intervention (LYFE) in patients with ACS and/or post-PCI. The study enrolled 30 patients over a 3-month follow-up period from October to November 2022. The main outcomes measured were changes in blood pressure, heart rate, medication adherence, the incidence of major adverse cardiovascular events (MACE), all-cause readmission, and lifestyle adherence at 1 and 3 months. Results: The mean age of the patients was 53.2±12.1 years; 27 (93%) were male and 2 (7%) were female. The mean BMI was 26.3±5.0. The mean reductions in systolic blood pressure (SBP) and diastolic blood pressure (DBP) at 3 months were 7.8±10.9 mmHg (p=0.001) and 3.7±5.7 mmHg (p=0.002), respectively, both statistically significant. Twenty-five (83.3%) patients had controlled blood pressure at 3 months. 27 (90%) patients were adherent to medication and physically active, while 3 (10%) were inactive throughout the study period. No CVD death or major bleeding event was reported. Conclusions: DTx improved medication adherence and blood pressure control in CAD/ACS and post-PCI patients during the study period.

    An Image Processing approach to identify solar plages observed at 393.37 nm by Kodaikanal Solar Observatory

    Solar plages are bright chromospheric features observed in Ca II K photographic observations of the Sun. They are regions of high magnetic field concentration and thus tracers of the Sun's magnetic activity, and they are among the most important features for studying the long-term variability of the Sun, since Ca II K spectroheliograms have been recorded for more than a century. However, detecting plages in century-long databases is a non-trivial task and needs significant human resources if done manually. Hence, in this study, we propose an image-processing algorithm that can identify solar plages in Ca II K photographic observations. The proposed algorithm is applied to archival data from the Kodaikanal Solar Observatory. To ensure that the algorithm works irrespective of noise level, brightness, and other image properties, we randomly draw samples of images from the data archive to test it.
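
    The algorithm itself is not detailed in the abstract; a typical recipe for extracting bright features from full-disc solar images is background flattening followed by thresholding, sketched here with scikit-image (the blur scale, threshold choice, and size cutoff are assumptions):

        import numpy as np
        from skimage import filters, measure, morphology

        img = np.random.rand(512, 512)  # stand-in for a Ca II K spectroheliogram

        background = filters.gaussian(img, sigma=50)  # large-scale intensity trend
        flat = img - background                       # flatten limb darkening, vignetting

        mask = flat > filters.threshold_otsu(flat)        # bright-pixel candidates
        mask = morphology.remove_small_objects(mask, 64)  # drop noise specks

        for region in measure.regionprops(measure.label(mask)):
            print("plage candidate at", region.centroid, "area", region.area, "px")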

    A Fuzzy Logic Approach for Best Crop Selection

    The agricultural sector faces challenges in selecting suitable crops because of the complexity and uncertainty of environmental factors. By using fuzzy techniques, we improve the accuracy of crop selection compared to traditional statistical methods. Statistical approaches often rely on precise numerical thresholds, whereas fuzzy logic excels at capturing the vagueness of linguistic terms through the standard intersection. Considering parameters such as temperature, soil compatibility, water requirement, production cost, and profit enables farmers to take a context-aware approach to crop selection. We first calculate the membership grade of each variable, then take the minimum (standard intersection) of the fuzzy sets across all parameters, and finally prefer the crop with the maximum resulting index, as sketched below. An index obtained this way reflects each crop's worst-case suitability and can be trusted more than a single statistical score.
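
    A minimal sketch of that procedure, with made-up membership grades as placeholders:

        # Membership grades in [0, 1] for (temperature, soil, water, cost,
        # profit); the numbers are placeholders, not calibrated values.
        memberships = {
            "wheat": [0.8, 0.9, 0.7, 0.6, 0.5],
            "rice":  [0.6, 0.7, 0.9, 0.4, 0.7],
            "maize": [0.9, 0.6, 0.5, 0.8, 0.6],
        }

        # Standard intersection: a crop is only as suitable as its worst parameter
        index = {crop: min(grades) for crop, grades in memberships.items()}
        best = max(index, key=index.get)
        print(index, "->", best)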

    Gamma-ray Blazar Classification using Machine Learning with Advanced Weight Initialization and Self-Supervised Learning Techniques

    Machine learning has emerged as a powerful tool in the field of gamma-ray astrophysics. Its algorithms can distinguish between different source types, such as blazars and pulsars, and help uncover new insights into the high-energy universe. The Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope has significantly advanced our understanding of the Universe. The instrument has detected a large number of gamma-ray emitting sources, among which a significant number of objects have been identified as active galactic nuclei (AGN). The sample is primarily composed of blazars; however, more than one-third of these sources are either of unknown class or lack a definite association with a low-energy counterpart. In this work, we employ multiple machine learning algorithms to classify the sources based on their other physical properties. In particular, we utilize smart weight-initialization techniques and self-supervised learning to classify blazars into BL Lacertae objects (BL Lac) and flat spectrum radio quasars (FSRQ). The core advantages of the algorithm are its simplicity, its use of a minimal number of features, and its easy deployment owing to a small number of parameters, without compromising performance. The model predicts that, of the 1115 sources of uncertain type in the 4FGL-DR3 catalog, 820 can be classified as BL Lacs and 295 as FSRQs. (Comment: in review at MNRAS; comments are appreciated.)
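
    The exact features, layer sizes, and initializer are not given in the abstract; the sketch below shows the general shape of such a compact classifier, with He initialization standing in for the "smart" weight initialization (all sizes and feature names are assumptions):

        import tensorflow as tf

        n_features = 8  # e.g. spectral index, variability index, flux ratios (assumed)

        model = tf.keras.Sequential([
            tf.keras.Input(shape=(n_features,)),
            tf.keras.layers.Dense(16, activation="relu",
                                  kernel_initializer="he_normal"),  # assumed initializer
            tf.keras.layers.Dense(1, activation="sigmoid"),  # P(FSRQ); 1 - P for BL Lac
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()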