
    Theoretical modeling of propagation of magneto-acoustic waves in magnetic regions below sunspots

    Full text link
    We use 2D numerical simulations and the eikonal approximation to study properties of MHD waves traveling below the solar surface through the magnetic structure of sunspots. We consider a series of magnetostatic models of sunspots of different magnetic field strengths, from 10 Mm below the photosphere to the low chromosphere. The purpose of these studies is to quantify the effect of the magnetic field on local helioseismology measurements by modeling waves excited by sub-photospheric sources. Time-distance propagation diagrams and wave travel times are calculated for models of various field strengths and compared to the non-magnetic case. The results clearly indicate that the observed time-distance helioseismology signals in sunspot regions correspond to fast MHD waves. The slow MHD waves form a distinctly different pattern in the time-distance diagram, which has not been detected in observations. The numerical results are in good agreement with the solution in the short-wavelength (eikonal) approximation, providing its validation. The frequency dependence of the travel times is in good qualitative agreement with observations. Comment: accepted by Ap
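    The eikonal picture above reduces wave propagation to ray travel times, t = ∫ ds/v. As a minimal sketch (not the paper's magnetostatic sunspot models), the snippet below integrates travel time at a uniform fast-mode speed, using hypothetical sound and Alfvén speeds, and illustrates why a magnetic region shortens fast-wave travel times:

    ```python
    import numpy as np

    # Sketch: travel time of a fast MHD wave in the eikonal (ray) approximation,
    # integrating dt = ds / v along a horizontal path through a uniform medium.
    # The sound-speed and Alfven-speed values are hypothetical placeholders.
    def fast_speed(c_s, v_a):
        """Fast-mode speed bound sqrt(c_s^2 + v_A^2) (perpendicular propagation)."""
        return np.sqrt(c_s**2 + v_a**2)

    def travel_time(distance_km, n=1000, c_s=7.0, v_a=0.0):
        """Approximate travel time (s) over distance_km at uniform speeds (km/s)."""
        ds = distance_km / n
        v = fast_speed(c_s, v_a)          # uniform medium -> constant ray speed
        return np.sum(np.full(n, ds / v))

    # A magnetised region (v_a > 0) yields shorter fast-wave travel times,
    # qualitatively matching travel-time reductions measured under sunspots.
    t_quiet = travel_time(30_000, c_s=7.0, v_a=0.0)   # non-magnetic case
    t_spot  = travel_time(30_000, c_s=7.0, v_a=3.0)   # magnetised case
    ```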

    QML-Morven: A Novel Framework for Learning Qualitative Models

    Get PDF
    Publisher PDF

    Real-Time Virtual Pathology Using Signal Analysis and Synthesis

    Get PDF
    This dissertation discusses the modeling and simulation (M&S) research in the area of real-time virtual pathology using signal analysis and synthesis. The goal of this research is to contribute to the M&S research area of generating simulated outputs of medical diagnostic tools to supplement the training of medical students with human patient role players. To become clinically competent physicians, medical students must become skilled in the areas of doctor-patient communication, eliciting the patient's history, and performing the physical exam. The use of Standardized Patients (SPs), individuals trained to realistically portray patients, has become common practice. SPs provide the medical student with a means to learn in a safe, realistic setting, while providing a way to reliably test students' clinical skills. The range of clinical problems an SP can portray, however, is limited. SPs are usually healthy individuals with few or no abnormal physical findings. Some SPs have been trained to simulate physical abnormalities, such as breathing through one lung and voluntarily increasing blood pressure. But there are many abnormalities that SPs cannot simulate. The research encompassed developing methods and algorithms to be incorporated into the previous work of McKenzie et al. [1]–[3] for simulating abnormal heart sounds in a Standardized Patient (SP), which may be utilized in a modified electronic stethoscope. The methods and algorithms are specific to the real-time modeling of human body sounds through modifying the sounds from a real person with various abnormalities. The main focus of the research involved applying methods from tempo and beat analysis of acoustic musical signals to heart signal analysis, specifically in detecting the heart rate and heartbeat locations. In addition, the research included an investigation and selection of an adaptive noise cancellation filtering method to separate heart sounds from lung sounds.
A model was developed to use a heart/lung sound signal as input to efficiently and accurately separate the heart sound and lung sound signals, characterize the heart sound signal when appropriate, replace the heart or lung sound signal with a reference pathology signal containing an abnormality such as a crackle or murmur, and then recombine the original heart or lung sound signal with the modified pathology signal for presentation to the student. After completion of the development of the model, the model was validated. The validation included both a qualitative assessment and a quantitative assessment. The qualitative assessment drew on the visual and auditory analysis of subject matter experts (SMEs), and the quantitative assessment utilized simulated data to verify key portions of the model.
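    The tempo-analysis idea above can be sketched simply: treat the heart-sound envelope like a musical onset signal and read the beat period off its autocorrelation. The synthetic pulse train and sample rate below are assumptions standing in for a real phonocardiogram envelope, not the dissertation's actual pipeline:

    ```python
    import numpy as np

    # Sketch: estimating heart rate from a sound envelope by autocorrelation,
    # in the spirit of tempo/beat analysis of acoustic musical signals.
    def estimate_rate_bpm(envelope, fs):
        """Return the dominant periodicity of `envelope` (sampled at fs Hz) in beats/min."""
        x = envelope - envelope.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # one-sided autocorrelation
        # search only lags corresponding to plausible heart rates (40-200 bpm)
        lo, hi = int(fs * 60 / 200), int(fs * 60 / 40)
        lag = lo + np.argmax(ac[lo:hi])
        return 60.0 * fs / lag

    fs = 200                                  # envelope sample rate (Hz), an assumption
    t = np.arange(0, 10, 1 / fs)
    bpm_true = 72
    # Sharp periodic pulses stand in for first/second heart sound onsets.
    envelope = (np.sin(2 * np.pi * bpm_true / 60 * t) > 0.95).astype(float)
    ```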

    Quantitative analysis of incorrectly-configured bogon-filter detection

    Get PDF
    Copyright © 2008 IEEE. Newly announced IP addresses (from previously unused IP blocks) are often unreachable. It is common for network operators to filter out address space which is known to be unallocated ("bogon" addresses). However, as allocated address space changes over time, these bogons might become legitimately announced prefixes. Unfortunately, some ISPs still do not configure their bogon filters via lists published by the Regional Internet Registries (RIRs). Instead, they choose to manually configure filters. Therefore it would be desirable to test whether filters block legitimate address space before it is allocated to ISPs and/or end users. Previous work has presented a methodology that aims at detecting such wrongly configured filters, so that ISPs can be contacted and asked to update their filters. This paper extends the methodology by providing a more formal algorithm for finding such filters, and the paper quantitatively assesses the performance of this methodology. Jon Arnold, Olaf Maennel, Ashley Flavel, Jeremy McMahon, Matthew Rougha
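    The failure mode described above boils down to a prefix overlap test against a stale list. A minimal sketch, using Python's standard `ipaddress` module and an invented two-entry bogon list (not any real RIR publication), looks like this:

    ```python
    import ipaddress

    # Sketch: the core check behind a bogon filter - does an announced prefix
    # overlap a prefix still listed as unallocated? The list below is a stale,
    # hypothetical example of a manually configured filter.
    stale_bogons = [ipaddress.ip_network(p) for p in ("10.0.0.0/8", "102.0.0.0/8")]

    def is_filtered(prefix, bogon_list):
        """True if `prefix` overlaps an entry in `bogon_list` (announcement dropped)."""
        net = ipaddress.ip_network(prefix)
        return any(net.subnet_of(b) or b.subnet_of(net) for b in bogon_list)

    # If a registry later allocates space out of 102.0.0.0/8, an operator who
    # never refreshed this manual filter still drops the new announcements -
    # exactly the misconfiguration the paper's methodology tries to detect.
    ```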

    Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding

    Full text link
    This work addresses the problem of semantic scene understanding under dense fog. Although considerable progress has been made in semantic scene understanding, it is mainly related to clear-weather scenes. Extending recognition methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 16 images with dense fog. Our experiments show that 1) our fog simulation slightly outperforms a state-of-the-art competing simulation with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly by leveraging unlabeled real foggy data. The datasets and code are publicly available. Comment: final version, ECCV 201
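    Synthetic fog generation of this kind builds on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmittance t(x) = exp(−β·d(x)), where the attenuation coefficient β controls fog density; a light-to-dense curriculum corresponds to gradually increasing β. The sketch below illustrates that model on toy arrays, with illustrative values that are not taken from the paper:

    ```python
    import numpy as np

    # Sketch: standard optical fog model. A clear pixel J is blended toward the
    # airlight A according to scene depth d and attenuation coefficient beta.
    def add_fog(clear, depth, beta, airlight=0.9):
        """Blend a clear image toward the airlight according to per-pixel depth."""
        t = np.exp(-beta * depth)              # per-pixel transmittance in (0, 1]
        return clear * t + airlight * (1.0 - t)

    clear = np.full((4, 4), 0.2)               # toy uniform "image"
    depth = np.full((4, 4), 50.0)              # metres, illustrative
    light = add_fog(clear, depth, beta=0.005)  # light fog (curriculum start)
    dense = add_fog(clear, depth, beta=0.02)   # dense fog (curriculum end)
    ```

    Larger β pushes every pixel further toward the airlight, which is what makes dense fog harder and motivates adapting in steps rather than in one jump.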

    Combining qualitative and quantitative reasoning to support hazard identification by computer

    Get PDF
    This thesis investigates the proposition that use must be made of quantitative information to control the reporting of hazard scenarios in automatically generated HAZOP reports. HAZOP is a successful and widely accepted technique for identification of process hazards. However, it requires an expensive commitment of time and personnel near the end of a project. Use of a HAZOP emulation tool before conventional HAZOP could speed up the examination of routine hazards, or identify deficiencies in the design of a plant. Qualitative models of process equipment can efficiently model fault propagation in chemical plants. However, purely qualitative models lack the representational power to model many constraints in real plants, resulting in indiscriminate reporting of failure scenarios. In the AutoHAZID computer program, qualitative reasoning is used to emulate HAZOP. Signed directed graph (SDG) models of equipment are used to build a graph model of the plant. This graph is searched to find links between faults and consequences, which are reported as hazardous scenarios associated with process variable deviations. However, factors not represented in the SDG, such as the fluids in the plant, often affect the feasibility of scenarios. Support for the qualitative model system, in the form of quantitative judgements to assess the feasibility of certain hazards, was investigated and is reported here. This thesis also describes the novel "Fluid Modelling System" (FMS) which now provides this quantitative support mechanism in AutoHAZID. The FMS allows the attachment of conditions to SDG arcs. Fault paths are validated by testing the conditions along their arcs. Infeasible scenarios are removed. In the FMS, numerical limits on process variable deviations have been used to assess the sufficiency of a given fault to cause any linked consequence. In a number of case studies, use of the FMS in AutoHAZID has improved the focus of the automatically generated HAZOP results.
This thesis describes qualitative model-based methods for identifying process hazards by computer, in particular AutoHAZID. It identifies a range of problems where the purely qualitative approach is inadequate and demonstrates how such problems can be tackled by selective use of quantitative information about the plant or the fluids in it. The conclusion is that quantitative knowledge is required to support the qualitative reasoning in hazard identification by computer.
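The mechanism described above, searching an SDG for fault-to-consequence paths and pruning those whose arc conditions fail, can be sketched as a small graph search. The node names, conditions, and context below are invented for illustration; they are not AutoHAZID's actual representation:

```python
from collections import deque

# Sketch: fault propagation search over a signed directed graph where each
# arc may carry a feasibility condition (the FMS idea). A path is reported
# only if every condition along it holds in the given context.
def feasible_paths(graph, fault, consequence, context):
    """BFS from `fault` to `consequence`; drop paths whose arc conditions fail."""
    queue, results = deque([[fault]]), []
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == consequence:
            results.append(path)
            continue
        for nxt, condition in graph.get(node, []):
            if nxt in path:                      # avoid cycles
                continue
            if condition is None or condition(context):
                queue.append(path + [nxt])
    return results

# Arcs are (target, condition). The condition prunes the overheat scenario
# unless the process fluid is actually flammable.
graph = {
    "pump_failure": [("no_flow", None)],
    "no_flow": [("overheat", lambda ctx: ctx["fluid_flammable"]),
                ("low_level", None)],
}
paths = feasible_paths(graph, "pump_failure", "overheat",
                       {"fluid_flammable": False})   # infeasible -> no report
```

With the condition failing, the spurious scenario is silently dropped instead of cluttering the generated HAZOP report, which is exactly the focusing effect the case studies measure.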