
    Cryogenic test of gravitational inverse square law below 100-micrometer length scales

    The inverse-square law is a hallmark of theories of gravity, impressively demonstrated from astronomical scales down to sub-millimeter scales, yet we do not have a complete quantized theory of gravity applicable at the shortest distance scales. Problems within modern physics such as the hierarchy problem, the cosmological constant problem, and the strong CP problem in the Standard Model motivate a search for new physics. Theories such as large extra dimensions, ‘fat gravitons,’ and the axion, proposed to solve these problems, can result in a deviation from the gravitational inverse-square law below 100 μm and are thus testable in the laboratory. We have conducted a sub-millimeter test of the inverse-square law at 4.2 K. To minimize Newtonian errors, the experiment employed a near-null source: a disk with a large diameter-to-thickness ratio. Two test masses, also disk-shaped, were positioned on the two sides of the source mass at a nominal distance of 280 μm. As the source was driven sinusoidally, the response of the test masses was sensed through a superconducting differential accelerometer. By symmetry, any deviation from the inverse-square law would appear as a violation signal at the second harmonic of the source frequency. We improved the design of the experiment significantly over an earlier version by separating the source mass suspension from the detector housing and by making the detector a true differential accelerometer. We identified the residual gas pressure as an error source and developed ways to overcome the problem. During the experiment we further identified the two dominant sources of error: magnetic cross-talk and electrostatic coupling. Using cross-talk cancellation and residual balancing, these were reduced to the level of the limiting random noise. No deviation from the inverse-square law was found within the experimental error (2σ) down to a length scale λ = 100 μm at a coupling strength |α| ≤ 2. Extra dimensions were searched for down to a length scale of 78 μm (|α| ≤ 4). We have also proposed modifications to the current experimental design, in the form of a new tantalum source mass and additional accelerometers, to achieve an amplifier-noise-limited sensitivity.
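
    For context, the limits quoted above on the coupling constant α and the length scale λ are conventionally expressed with respect to a Yukawa-type modification of the Newtonian potential. The parameterization below is this standard assumed form; it is not written out in the abstract itself.

```latex
% Standard Yukawa parameterization assumed for the (alpha, lambda) limits quoted above:
% alpha is the strength of the new interaction relative to gravity, lambda its range.
V(r) = -\frac{G\, m_1 m_2}{r}\,\bigl(1 + \alpha\, e^{-r/\lambda}\bigr)
```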

    A comparative study of the consistent and simplified finite element analyses of eigenvalue problems

    The classical displacement method of finite element analysis of eigenvalue problems requires the use of consistent and conforming elements. However, simpler approaches based on relaxing the consistency condition of the element descriptions, such as the lumped inertia force method, are also found to yield satisfactory results. In this paper we make a comparative study of the consistent and simplified approaches with reference to four representative problems. In the simplified approach studied here, the contribution of the straining modes in the derivation of the mass and geometric stiffness matrices is neglected, which simplifies their derivation substantially. The results indicate that this simplification introduces only small errors in the eigenvalues.
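
    The contrast between the consistent and the simplified (lumped) formulations can be made concrete with a small numerical sketch. The example below is not taken from the paper; it solves the axial vibration of a fixed-free bar with two-node linear elements and compares the first natural frequency obtained with consistent versus lumped element mass matrices against the exact value (the material and geometry values are purely illustrative).

```python
# Consistent vs. lumped mass matrices for the axial vibration of a fixed-free bar
# (illustrative sketch only; values and discretization are hypothetical).
import numpy as np
from scipy.linalg import eigh

E, rho, A, L = 210e9, 7800.0, 1e-4, 1.0      # Young's modulus, density, area, length
n_el = 10                                    # number of two-node linear elements
le = L / n_el

k_e = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])            # element stiffness
m_cons = (rho * A * le / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])   # consistent mass
m_lump = (rho * A * le / 2.0) * np.eye(2)                            # lumped mass

def assemble(m_e):
    """Assemble global K and M and apply the fixed boundary condition at node 0."""
    n = n_el + 1
    K, M = np.zeros((n, n)), np.zeros((n, n))
    for e in range(n_el):
        dofs = np.ix_([e, e + 1], [e, e + 1])
        K[dofs] += k_e
        M[dofs] += m_e
    return K[1:, 1:], M[1:, 1:]

omega_exact = (np.pi / 2.0) * np.sqrt(E / rho) / L   # exact first natural frequency (rad/s)
for name, m_e in (("consistent", m_cons), ("lumped", m_lump)):
    K, M = assemble(m_e)
    omega_1 = np.sqrt(eigh(K, M, eigvals_only=True)[0])
    print(f"{name:10s} omega_1 error: {100 * (omega_1 - omega_exact) / omega_exact:+.3f} %")
```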

    Assessment of accuracies of finite eigenvalues

    This article does not have an abstract

    CD4 count at the time of presentation in newly diagnosed HIV patients in a tertiary care hospital in South India: implications for the programme

    Background: A lower CD4 count at initiation of antiretroviral therapy (ART) can have a significant negative impact on subsequent disease progression and mortality among HIV patients. Hence, the authors assessed the CD4 count at the time of diagnosis and the factors associated with a lower CD4 count among newly diagnosed HIV cases. Methods: A prospective observational study was conducted in a single integrated counseling and testing center affiliated with a medical college and hospital in Andhra Pradesh. All newly diagnosed HIV cases in the setting between January and December 2017 were included. The CD4 count was assessed as per the 2015 national guidelines for enumeration of CD4. Results: The final analysis included 125 participants. The mean CD4 count at diagnosis was 276.51±228.37. Only 19 (15.20%) people had a CD4 count >500, 47 (37.60%) had a count between 200 and 500, and 59 (47.20%) had a count <200. Only 20% had appropriate knowledge of treatment. Among the study population, 43 (34.70%) had symptomatic conditions attributed to HIV infection, and 44 (35.50%) had an AIDS-defining illness at the time of diagnosis. Only 3 (2.40%) had undergone voluntary counseling and testing. Although male gender, poor educational status, having more sexual partners, and poor knowledge related to HIV diagnosis and treatment were associated with higher odds of a low CD4 count (<200), none of the associations were statistically significant. Conclusions: The mean CD4 count was low, and almost half of the newly diagnosed cases had a low CD4 count (<200) at the time of diagnosis. There is a strong need to intensify efforts to close the gaps in screening for early diagnosis, to maximize the benefits of HAART and to stop the spread of the infection.

    Performance Issues on K-Mean Partitioning Clustering Algorithm

    In data mining, cluster analysis is one of the challenging fields of research. Cluster analysis is also called data segmentation. Clustering is the process of grouping data objects such that all objects in the same group are similar and objects in other groups are dissimilar. Many categories of cluster analysis algorithms are present in the literature. Partitioning methods are among the efficient clustering methods, in which the database is partitioned into groups through an iterative relocation procedure. K-means is a widely used partitioning method. In this paper, we present the k-means algorithm and its mathematical calculations for each step in detail, using simple data sets. This is useful for understanding the performance of the algorithm. We also executed the k-means algorithm on the same data set using the data mining tool Weka Explorer. The tool displays the final cluster points but does not show the intermediate steps. In this paper, we present the calculations and results of each step, which should be helpful to readers who want to follow the step-by-step process. We also discuss performance issues of the k-means algorithm for further extension.
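
    As a concrete illustration of the kind of step-by-step calculation the paper walks through, the following minimal sketch (not the paper's worked example and not the Weka implementation) alternates the two standard k-means steps, assignment of each point to its nearest centroid and recomputation of each centroid as the mean of its assigned points, on a small made-up data set.

```python
# Minimal k-means sketch (illustrative only; the data set and k are made up).
import numpy as np

def kmeans(X, k, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()  # initial centroids
    for _ in range(n_iter):
        # Assignment step: label each point with the index of its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

X = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 4.0],
              [5.0, 7.0], [3.5, 5.0], [4.5, 5.0], [3.5, 4.5]])
centroids, labels = kmeans(X, k=2)
print("final centroids:\n", centroids)
print("cluster labels:", labels)
```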

    A Novel Hybrid Optimization With Ensemble Constraint Handling Approach for the Optimal Materialized Views

    The data warehouse is extremely challenging to work with, as doing so necessitates a significant investment of both time and space. As a result, it is essential to enable rapid data processing in order to cut down on the time needed to respond to queries sent to the warehouse. One of the significant approaches to solving this problem effectively is view materialization. It is extremely unlikely that all of the views that can be derived from the data will ever be materialized. As a result, view subsets need to be selected intelligently in order to enable rapid processing of queries coming from a variety of locations. The proposed model addresses the materialized view selection problem and is based on ensemble constraint handling techniques (ECHT). In order to optimize the problem, the constraints are handled through a self-adaptive penalty, the epsilon (ε) parameter, and stochastic ranking. To make a quicker and more accurate selection of queries from the data warehouse, the proposed model implements an innovative algorithm known as the constrained hybrid Ebola with COATI optimization (CHECO) algorithm. To compute the best possible fitness, the objectives of query processing cost, response cost, and maintenance cost are each defined. The top views are selected by the CHECO algorithm based on whether the defined fitness requirements are met. Finally, the proposed model is compared to existing models in order to validate the performance improvement in terms of a variety of performance metrics.
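
    The fitness structure described above (cost objectives combined with constraint handling) can be sketched with a simple penalty-based formulation. This is not the paper's CHECO algorithm: the cost terms, the storage constraint, and the random-search baseline below are illustrative assumptions, shown only to demonstrate how a penalty-style constraint handler folds a violation into the objective used to rank candidate view subsets.

```python
# Penalty-based fitness for materialized view selection (illustration only;
# NOT the paper's CHECO algorithm). The cost terms and the storage budget are
# hypothetical stand-ins for the objectives named in the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_views = 12
query_cost  = rng.uniform(10, 100, n_views)  # cost of answering queries when a view is NOT materialized
maint_cost  = rng.uniform(1, 20, n_views)    # cost of maintaining a view that IS materialized
view_size   = rng.uniform(1, 10, n_views)    # storage consumed by each materialized view
storage_cap = 30.0                           # total storage budget (the constraint)

def fitness(selection, penalty_weight=50.0):
    """selection: boolean vector, True = materialize the view. Lower is better."""
    sel = np.asarray(selection, dtype=bool)
    cost = query_cost[~sel].sum() + maint_cost[sel].sum()
    violation = max(0.0, view_size[sel].sum() - storage_cap)  # constraint violation
    return cost + penalty_weight * violation                  # static penalty (stand-in for ECHT)

# Crude random-search baseline: keep the best of 200 random candidate selections.
best = min((rng.random(n_views) < 0.5 for _ in range(200)), key=fitness)
print("selected views:", np.flatnonzero(best).tolist(), "fitness:", round(fitness(best), 2))
```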