3,895 research outputs found

    Trajectory Shaping Study for Various Ballistic Missile Scenarios

    As the world continues to change and threatening countries begin to develop weapons capable of reaching all corners of the Earth, it becomes necessary for the U.S. to find new ways of protecting its sovereignty, its people, and its borders. The purpose of this research is to determine a more effective way of neutralizing ballistic missile warheads before they reenter Earth’s atmosphere and to simulate a scenario showing this possibility in action. The ballistic missile code is written in MATLAB and initialized in the Entry Analysis Tool for Exploration Missions (EATEM), a tool originally designed for the analysis of a lander entering the Martian atmosphere, developed by Shaun Deacon in pursuit of his M.S. in Aerospace Engineering at Embry-Riddle Aeronautical University. The ballistic missile scenario is designed to show how a ballistic missile may be used to intercept an opposing ballistic missile: the interceptor launches a few minutes after the hostile launch has been detected and initiates an intercept trajectory as it ascends through the atmosphere and into orbit.
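
    The simulation itself is not part of this abstract; purely for intuition, the following is a minimal Python sketch of a drag-free ballistic arc over a flat Earth under constant gravity. Every parameter here is hypothetical, and it stands in for none of the MATLAB/EATEM modeling described above.

        # Minimal ballistic-arc sketch (hypothetical; not the EATEM/MATLAB code above).
        # Point mass over a flat, airless Earth: constant gravity, no drag or thrust.
        import math

        G0 = 9.81  # m/s^2, sea-level gravity

        def ballistic_arc(speed, angle_deg, dt=0.1):
            """Integrate a simple ballistic trajectory until impact; return (range, time of flight)."""
            vx = speed * math.cos(math.radians(angle_deg))
            vy = speed * math.sin(math.radians(angle_deg))
            x = y = t = 0.0
            while y >= 0.0:
                x += vx * dt
                vy -= G0 * dt
                y += vy * dt
                t += dt
            return x, t

        rng, tof = ballistic_arc(speed=3000.0, angle_deg=45.0)
        print(f"range ~ {rng / 1000:.0f} km, time of flight ~ {tof:.0f} s")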

    The Development of a Performance Assessment Methodology for Activity Based Intelligence: A Study of Spatial, Temporal, and Multimodal Considerations

    Activity Based Intelligence (ABI) is the derivation of information from a series of individual actions, interactions, and transactions recorded over a period of time, usually in motion imagery and/or full motion video. Due to the growth of unmanned aerial systems technology and the preponderance of mobile video devices, more interest has developed in analyzing people's actions and interactions in these video streams. Currently, only visually subjective quality metrics exist for determining the utility of these data in detecting specific activities. One common misconception is that ABI boils down to a simple resolution problem: more pixels and higher frame rates are better. Increasing resolution simply provides more data, not necessarily more information. As part of this research, an experiment was designed and performed to address this assumption. Nine sensors spanning four modalities were placed on top of the Chester F. Carlson Center for Imaging Science in order to record a group of participants executing a scripted set of activities. The multimodal characteristics include data from the visible, long-wave infrared, multispectral, and polarimetric regimes. The activities the participants performed were scripted to cover a wide range of spatial and temporal interactions (i.e., walking, jogging, and a group sporting event). As with any large data acquisition, only a subset of this data was analyzed for this research: specifically, a walking object-exchange scenario and a simulated RPG event. In order to analyze these data, several steps of preparation occurred: the data were spatially and temporally registered; the individual modalities were fused; a tracking algorithm was implemented; and an activity detection algorithm was applied. To develop a performance assessment for these activities, a series of spatial and temporal degradations were performed. Upon completion of this work, the ground-truth ABI dataset will be released to the community for further analysis.
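
    As a rough illustration of the spatial and temporal degradations used in such a performance assessment, here is a minimal Python sketch that coarsens a video cube in resolution and frame rate. The array shapes are hypothetical, and the study's actual pipeline (registration, fusion, tracking, activity detection) is far more involved.

        # Hypothetical sketch of spatial/temporal degradation for performance assessment.
        import numpy as np

        def degrade(frames: np.ndarray, spatial_factor: int, temporal_factor: int) -> np.ndarray:
            """Downsample a video cube of shape (T, H, W) in space and time.

            spatial_factor:  keep every Nth pixel in each spatial dimension
            temporal_factor: keep every Nth frame (lower frame rate)
            """
            return frames[::temporal_factor, ::spatial_factor, ::spatial_factor]

        video = np.random.rand(300, 480, 640)           # 300 frames of 480x640 imagery
        degraded = degrade(video, spatial_factor=4, temporal_factor=2)
        print(video.shape, "->", degraded.shape)        # (300, 480, 640) -> (150, 120, 160)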

    Impact of polarization on the intrinsic cosmic microwave background bispectrum

    We compute the cosmic microwave background (CMB) bispectrum induced by the evolution of the primordial density perturbations, including for the first time both temperature and polarization using a second-order Boltzmann code. We show that including polarization can increase the signal-to-noise by a factor of 4 with respect to temperature alone. We find the expected signal-to-noise for this intrinsic bispectrum to be S/N = 3.8, 2.9, 1.6, and 0.5 for an ideal experiment with an angular resolution of ℓ_max = 3000, the proposed CMB surveys PRISM and COrE, and Planck’s polarized data, respectively; the bulk of this signal comes from E-mode polarization and from squeezed configurations. We discuss how CMB lensing is expected to reduce these estimates, as it suppresses the bispectrum for squeezed configurations and contributes to the noise in the estimator. We find that the presence of the intrinsic bispectrum will bias a measurement of primordial non-Gaussianity of local type by f_NL^intr = 0.66 for an ideal experiment with ℓ_max = 3000. Finally, we verify the robustness of our results by recovering the analytic approximation for the squeezed-limit bispectrum in the general polarized case.
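
    For context, forecasts of this kind typically rate the detectability of a bispectrum with the standard Fisher signal-to-noise summed over multipole triplets. A schematic temperature-only form (not the paper's full polarized expression, which also sums over T/E combinations with the full covariance) is

        % Schematic Fisher signal-to-noise for a CMB bispectrum B_{l1 l2 l3}:
        \left(\frac{S}{N}\right)^2 =
          \sum_{2 \le \ell_1 \le \ell_2 \le \ell_3}^{\ell_{\max}}
          \frac{B_{\ell_1 \ell_2 \ell_3}^2}
               {\Delta_{\ell_1 \ell_2 \ell_3}\, C_{\ell_1} C_{\ell_2} C_{\ell_3}}

    where \Delta_{\ell_1 \ell_2 \ell_3} equals 1, 2, or 6 when none, two, or all three multipoles coincide.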

    Using APOGEE Wide Binaries to Test Chemical Tagging with Dwarf Stars

    Stars of a common origin are thought to have similar, if not nearly identical, chemistry. Chemical tagging seeks to exploit this fact to identify Milky Way subpopulations through their unique chemical fingerprints. In this work, we compare the chemical abundances of dwarf stars in wide binaries to test the abundance consistency of stars of a common origin. Our sample of 31 wide binaries is identified from a catalog produced by cross-matching APOGEE stars with UCAC5 astrometry, and we confirm the fidelity of this sample with precision parallaxes from Gaia DR2. For as many as 14 separate elements, we compare the abundances between components of our wide binaries, finding they have very similar chemistry (typically within 0.1 dex). This level of consistency is closer than can be expected from stars with different origins (which show typical abundance differences of 0.3-0.4 dex within our sample). For the best-measured elements, Fe, Si, K, Ca, Mn, and Ni, these differences are reduced to 0.05-0.08 dex when selecting pairs of dwarf stars with similar temperatures. Our results suggest that APOGEE dwarf stars may currently be used for chemical tagging at the level of ~0.1 dex, or at the level of ~0.05 dex when restricting to the best-measured elements in stars of similar temperatures. Larger wide binary catalogs may provide calibration sets, in complement to open cluster samples, for ongoing spectroscopic surveys.
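
    A minimal sketch of the comparison described above, with made-up numbers: per-element abundance differences between the two components of each wide binary. The values are hypothetical; real APOGEE/ASPCAP abundances would replace the random draws in practice.

        # Hypothetical sketch: per-pair, per-element abundance differences.
        import numpy as np

        elements = ["Fe", "Si", "K", "Ca", "Mn", "Ni"]
        # abundances[pair, component, element] in dex (e.g. [X/H]); random stand-ins here
        abundances = np.random.normal(0.0, 0.2, size=(31, 2, len(elements)))

        delta = np.abs(abundances[:, 0, :] - abundances[:, 1, :])
        for name, med in zip(elements, np.median(delta, axis=0)):
            print(f"median |delta [{name}/H]| between components: {med:.2f} dex")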

    Quantitative and Qualitative Analysis of Dynamic Cavernosographies in Erectile Dysfunction due to Venous Leakage

    Of 521 patients with erectile dysfunction in whom a multidisciplinary approach was used, 145 (27.8%) showed venous leakage as a (concomitant) etiology of the impotence on dynamic cavernosography. The maintenance flow rate corresponded well with the response to a standardized intracavernosal injection of vasoactive drugs (p < 0.05) in patients with venous leakage. The maintenance flow increased with age in men with secondary impotence. It was not statistically different in patients with or without concomitant arterial insufficiency (p = 0.19). Fifty-one of 145 patients (35.2%) presented pathologic cavernosal drainage via a single venous system; 94/145 (64.8%) showed a combined venous leakage. The type of leakage corresponded neither to the maintenance flow nor to the response to intracavernosal injections. Our findings show that standardized intracavernosal testing and Doppler have a high predictive value for the status of the venous occlusive system. Exact evaluation of the type of leakage can be made only by bidimensional cavernosography.

    Predicting invasive aspergillosis in haematology patients by combining clinical and genetic risk factors with early diagnostic biomarkers

    The incidence of invasive aspergillosis (IA) in high-risk haematology populations is relatively low (79.1% in patients with four or more factors. Using a risk threshold of 50%, pre-emptive therapy would have been prescribed in 8.4% of the population.

    Erectile dysfunction due to ectopic penile vein

    A total of 86/260 patients with erectile dysfunction had venous leakage as a (joint) etiology. In 5 of the 86 patients, cavernosography showed pathologic cavernosal drainage only via an ectopic penile vein into the femoral vein. After ligation of this pathologic draining vessel, 4 of the 5 patients regained spontaneous erectability. One patient, with pathologic bulbocavernosus reflex latencies, needed intracavernosal injection of vasoactive drugs for full rigidity.

    A Dataset and Analysis of Open-Source Machine Learning Products

    Machine learning (ML) components are increasingly incorporated into software products, yet developers face challenges in transitioning from ML prototypes to products. Academic researchers struggle to propose solutions to these challenges and to evaluate interventions because they often do not have access to closed-source ML products from industry. In this study, we define and identify open-source ML products, curating a dataset of 262 repositories from GitHub, to facilitate further research and education. As a start, we explore six broad research questions related to different development activities and report 21 findings from a sample of 30 ML products from the dataset. Our findings reveal a variety of development practices and architectural decisions surrounding different types and uses of ML models that offer ample opportunities for future research innovations. We also find very little evidence of industry best practices such as model testing and pipeline automation within the open-source ML products, which leaves room for further investigation to understand their potential impact on development and the eventual end-user experience of these products.
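
    As a loose illustration of what identifying open-source ML products can involve, here is a hypothetical one-rule heuristic in Python that flags cloned repositories declaring an ML framework dependency. The paper's actual definition and curation process is considerably more careful than this.

        # Hypothetical heuristic: flag repositories whose requirements.txt declares
        # an ML framework. Assumes a local directory "repos" of cloned repositories.
        from pathlib import Path

        ML_LIBRARIES = {"tensorflow", "torch", "keras", "scikit-learn", "xgboost"}

        def looks_like_ml_product(repo_dir: Path) -> bool:
            """True if the repository declares an ML framework dependency."""
            req = repo_dir / "requirements.txt"
            if not req.is_file():
                return False
            deps = {line.split("==")[0].strip().lower()
                    for line in req.read_text().splitlines()}
            return bool(deps & ML_LIBRARIES)

        candidates = [d for d in Path("repos").iterdir() if looks_like_ml_product(d)]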

    Overcoming the compression limit of the individual sequence (zero-order empirical entropy) using the Set Shaping Theory

    Given the importance of the claim, we want to begin with the following consideration: this claim comes more than a year after the article "Practical applications of Set Shaping Theory in Huffman coding", which reports the program that carried out a data compression experiment in which the coding limit NH0(S) of a single sequence was called into question. We waited so long because, before making a claim of this type, we wanted to be sure of the consistency of the result. All this time the program has been public; anyone could download it, modify it, and independently obtain the reported results. In this period, many information theory experts have tested the program and agreed to help us; we thank these people for the time they dedicated to us and for their precious advice. Given a sequence S of i.i.d. random variables with symbols belonging to an alphabet A, the parameter NH0(S) (the zero-order empirical entropy multiplied by the length of the sequence) is considered the average coding limit of the symbols of the sequence S through a uniquely decipherable and instantaneous code. Our experiment calling this limit into question is the following: a sequence S is generated randomly and uniformly, the value NH0(S) is calculated, the sequence S is transformed into a new sequence f(S), longer but with symbols belonging to the same alphabet, and finally f(S) is encoded using Huffman coding. By generating a statistically significant number of sequences, we find that the average length of the encoded sequence f(S) is less than the average value of NH0(S). In this way, a result is obtained which is incompatible with the meaning given to NH0(S).
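
    To make the quantities being compared concrete, here is a minimal Python sketch that computes NH0(S) and the Huffman-coded length of a sequence. The shaping transform f(S) is the contribution of Set Shaping Theory itself and is not reproduced here; this sketch only shows the two terms on either side of the comparison.

        # Sketch of NH0(S) and the Huffman-coded length of a sequence; the shaping
        # transform f(S) from the paper's program is deliberately not reproduced.
        import heapq
        import math
        from collections import Counter

        def nh0(seq: str) -> float:
            """Zero-order empirical entropy of seq multiplied by its length."""
            n = len(seq)
            return -sum(c * math.log2(c / n) for c in Counter(seq).values())

        def huffman_coded_length(seq: str) -> int:
            """Bits needed to code seq with a Huffman code built from its own frequencies."""
            counts = Counter(seq)
            if len(counts) == 1:                      # degenerate one-symbol alphabet
                return len(seq)
            # heap entries: (weight, tiebreak id, {symbol: code length})
            heap = [(c, i, {s: 0}) for i, (s, c) in enumerate(counts.items())]
            heapq.heapify(heap)
            next_id = len(heap)
            while len(heap) > 1:
                w1, _, d1 = heapq.heappop(heap)
                w2, _, d2 = heapq.heappop(heap)
                merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
                heapq.heappush(heap, (w1 + w2, next_id, merged))
                next_id += 1
            code_lengths = heap[0][2]
            return sum(counts[s] * code_lengths[s] for s in counts)

        S = "ABABCACBBACCABAB"
        print(f"NH0(S) = {nh0(S):.2f} bits; Huffman-coded length = {huffman_coded_length(S)} bits")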