
    Interpretation of Solar Magnetic Field Strength Observations

    This study, based on longitudinal Zeeman-effect magnetograms and spectral line scans, investigates how solar surface magnetic field measurements depend on the spectral line used and on the way the line is sampled, in order to estimate, from magnetograms of the Mt. Wilson 150-foot tower synoptic program (MWO), the magnetic flux emerging above the solar atmosphere and penetrating to the corona. We have compared the synoptic program λ5250 Å line of Fe I to the Fe I line at λ5233 Å, since this latter line has a broad shape with a profile that is nearly linear over a large portion of its wings. The present study uses five pairs of sampling points on the λ5233 Å line. We recommend adopting the field determined with a line-bisector method, with a sampling point as close as possible to the line core, as the best estimate of the emergent photospheric flux. The combination of the line profile measurements and the cross-correlation of fields measured simultaneously with λ5250 Å and λ5233 Å yields a formula for the scale factor 1/δ that multiplies the MWO synoptic magnetic fields. The new calibration shows that magnetic fields measured by the MDI system on the SOHO spacecraft are equal to 0.619 ± 0.018 times the true value at a center-to-limb position of 30 degrees. Berger and Lites (2003) found this factor to be 0.64 ± 0.013 based on a comparison with the Advanced Stokes Polarimeter. Comment: Accepted by Solar Physics.
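    A minimal sketch of how such a multiplicative calibration would be applied in practice. The 0.619 factor at a center-to-limb position of 30 degrees is quoted from the abstract; the function name and the idea of correcting a single pixel value are illustrative assumptions, not the paper's procedure.

```python
def calibrate_field(b_measured_gauss: float, scale_factor: float) -> float:
    """Estimate the true field from a measurement known to be underestimated.

    If an instrument reports scale_factor * B_true, the correction is a simple
    division, i.e. multiplying by 1/delta in the paper's notation.
    """
    return b_measured_gauss / scale_factor


# Example: an MDI pixel reporting 100 G near a center-to-limb position of
# 30 degrees would correspond to roughly 100 / 0.619 ~ 162 G of true field.
print(calibrate_field(100.0, 0.619))
```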

    An Updated Algorithm Integrated With Patient Data for the Differentiation of Atypical Nevi From Early Melanomas: the idScore 2021

    Introduction: It is well known that multiple patient-related risk factors contribute to the development of cutaneous melanoma (MM), including demographic, phenotypic and anamnestic factors. Objectives: We aimed to investigate which MM risk factors were relevant to incorporate into a risk-scoring, classifier-based clinico-dermoscopic algorithm. Methods: This retrospective study was performed on a monocentric dataset of 374 atypical melanocytic skin lesions sharing equivocal dermoscopic features, excised in the suspicion of malignancy. Standardized dermoscopic images of 258 atypical nevi (aN) and 116 early melanomas (eMM) were collected along with objective lesional data (i.e., maximum diameter, specific body site and body area) and 7 dermoscopic data. All cases were combined with a series of 10 MM risk factors, including demographic (2), phenotypic (5) and anamnestic (3) ones. Results: The proposed iDScore 2021 algorithm is composed of 9 variables: age, skin phototype I/II, personal/family history of MM, maximum diameter, location on the lower extremities (thighs/legs/ankles/back of the feet) and 4 dermoscopic features (irregular dots and globules, irregular streaks, blue-gray peppering, blue-white veil). The algorithm assigns each lesion a score from 0 to 18; it reached an area under the ROC curve of 92% and, with a score threshold ≥ 6, a sensitivity (SE) of 98.2% and a specificity (SP) of 50.4%, surpassing the experts in SE (+13%) and SP (+9%). Conclusions: An integrated checklist combining multiple anamnestic data with selected relevant dermoscopic features can be useful in the differential diagnosis and management of eMM and aN exhibiting equivocal features.
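    A toy sketch of how an additive clinico-dermoscopic score of this kind could be organised. The nine variables, the 0-18 range and the score threshold of 6 are taken from the abstract; the individual point weights and cut-offs below are NOT the published iDScore 2021 values, they are placeholders chosen only so that the maximum total is 18.

```python
from dataclasses import dataclass


@dataclass
class Lesion:
    age_years: int
    phototype_i_ii: bool
    personal_or_family_history_mm: bool
    max_diameter_mm: float
    on_lower_extremities: bool          # thighs/legs/ankles/back of the feet
    irregular_dots_globules: bool
    irregular_streaks: bool
    blue_gray_peppering: bool
    blue_white_veil: bool


def idscore_like(lesion: Lesion) -> int:
    """Additive 0-18 risk score; all weights are hypothetical placeholders."""
    score = 0
    score += 5 if lesion.age_years >= 65 else 3 if lesion.age_years >= 45 else 0
    score += 1 if lesion.phototype_i_ii else 0
    score += 2 if lesion.personal_or_family_history_mm else 0
    score += 4 if lesion.max_diameter_mm >= 10 else 2 if lesion.max_diameter_mm >= 5 else 0
    score += 2 if lesion.on_lower_extremities else 0
    dermoscopic = (lesion.irregular_dots_globules, lesion.irregular_streaks,
                   lesion.blue_gray_peppering, lesion.blue_white_veil)
    score += sum(dermoscopic)  # one point per dermoscopic feature present
    return score


lesion = Lesion(70, True, False, 8.0, True, True, False, True, False)
score = idscore_like(lesion)
print(score, "-> excise/refer" if score >= 6 else "-> follow-up")
```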

    Comparison of reflectance confocal microscopy and line-field optical coherence tomography for the identification of keratinocyte skin tumours

    Background: Reflectance confocal microscopy (RCM) and line-field confocal optical coherence tomography (LC-OCT) are non-invasive imaging devices that can help in the clinical diagnosis of actinic keratosis (AK) and cutaneous squamous cell carcinoma (SCC). No studies are available comparing these two technologies for the identification of the different features of keratinocyte skin tumours. Objectives: To compare RCM and LC-OCT findings in AK and SCC. Methods: A retrospective multicenter study was conducted. Tumours were imaged with RCM and LC-OCT devices before surgery, and the diagnosis was confirmed by histological examination. LC-OCT and RCM criteria for AK/SCC were identified, and their presence/absence was evaluated in all study lesions. Gwet's AC1 concordance index was calculated to compare RCM and LC-OCT. Results: We included 52 patients with 33 AKs and 19 SCCs. Irregular epidermis was visible in most tumours, with a good degree of agreement between RCM and LC-OCT (Gwet's AC1 0.74). Parakeratosis, dyskeratotic keratinocytes and both linear dilated and glomerular vessels were better visualized with LC-OCT than with RCM (p < 0.001). Erosion/ulceration was identified with both methods in more than half of the cases, with a good degree of agreement (Gwet's AC1 0.62). Conclusions: Our results suggest that both LC-OCT and hand-held RCM can help clinicians in the identification of AK and SCC, providing an in vivo and non-invasive identification of an irregular epidermis. LC-OCT proved to be more effective in identifying parakeratosis, dyskeratotic keratinocytes and vessels in this series.
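    A minimal sketch of Gwet's AC1 agreement coefficient for two raters and a binary present/absent criterion, the statistic used above to compare RCM and LC-OCT readings. The encoding (1 = feature present, 0 = absent) and the example data are illustrative assumptions.

```python
def gwet_ac1(ratings_a: list[int], ratings_b: list[int]) -> float:
    """Gwet's AC1 for two raters scoring the same lesions as 0/1."""
    n = len(ratings_a)
    if n == 0 or n != len(ratings_b):
        raise ValueError("need two equal-length, non-empty rating lists")
    # observed agreement
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement for two categories: 2 * pi * (1 - pi), where pi is the
    # mean prevalence of "present" across the two raters
    pi = (sum(ratings_a) / n + sum(ratings_b) / n) / 2
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)


# Example: two devices agreeing on 9 of 10 lesions gives AC1 of about 0.80
print(round(gwet_ac1([1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
                     [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]), 2))
```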

    14th Conference on Harmonisation within Atmospheric Dispersion Modelling for Regulatory Purposes - 2-6

    Abstract: In regional emission inventories, the road transport contribution is often evaluated with a top-down methodology, based on large-scale variables (for example, fuel sold or consumed) that are then disaggregated to smaller scales using proxy variables (for example, resident population or registered fleet). While this approach is the most effective for the compilation of national emission databases, since it yields a geographically complete and methodologically homogeneous set of information, when it comes to assessing air quality and evaluating measures in alternative scenarios a bottom-up strategy can prove preferable. In the latter case, the amount of pollutants released by traffic networks is estimated starting from site-specific data such as traffic flow, vehicle speed, vehicle categories and local fleet technical features (fuel supply, weight, etc.). The bottom-up approach is clearly affected by the amount and quality of the information available, but it allows emissions to be estimated with greater spatial and temporal detail. In this paper, we present a comparison of the results obtained by applying the two approaches to evaluate road transport emissions in the metropolitan area of Torino, where 1 350 000 inhabitants travel on average 8.5 km a day. Traffic flow data are available on an hourly basis on a network consisting of 5125 links, part of a larger road monitoring system. On-site surveys have been used to differentiate vehicle categories on highway, rural and urban roads. Even though both methodologies are based on COPERT IV, the quantitative comparison of estimated traffic emissions highlights differences that are not negligible for some pollutants. In order to fully assess the pros and cons of the two methodologies, the emissions have been used to feed an air quality modelling system, based on the Eulerian chemical transport model FARM, on a square grid (51 × 51 km²) with 1 km horizontal resolution, using a regional simulation on a 4 km grid as one-way boundary conditions. The results show a good description of the spatial pattern of pollution caused by the road network, treated as a line source in the bottom-up approach, and demonstrate the appropriateness of such an approach when proposed measures for the abatement of traffic-related pollution need to be assessed. Introduction: Directive 2008/50/EC, in accordance with the previous European Directives, requires Member States to develop air quality plans in order to comply with air quality standards in the most effective and time-saving way. A reliable and updated estimation of emission sources is thus a crucial factor in the definition of effective air quality plans and related abatement measures.
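    A minimal sketch of the bottom-up idea described above: emissions on a single road link are obtained from the traffic flow, the link length and a speed- and category-dependent emission factor (COPERT IV factors in the paper). The emission-factor values below are made-up placeholders, not COPERT IV data, and the function and variable names are illustrative.

```python
# Hypothetical emission factors in g per vehicle-km at a nominal urban speed;
# a real bottom-up inventory would interpolate COPERT IV speed curves here.
HYPOTHETICAL_EF_G_PER_VKM = {
    ("passenger_car", "NOx"): 0.45,
    ("heavy_duty", "NOx"): 5.2,
}


def link_emissions_g(flow_veh_per_h: float, length_km: float,
                     category: str, pollutant: str) -> float:
    """Hourly emissions (grams) on one link of the traffic network."""
    ef = HYPOTHETICAL_EF_G_PER_VKM[(category, pollutant)]
    return flow_veh_per_h * length_km * ef


# Example: 1200 cars per hour on a 0.8 km urban link
print(link_emissions_g(1200, 0.8, "passenger_car", "NOx"))  # 432.0 g/h
```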

    Search for Short-Term Periodicities in the Sun's Surface Rotation: A Revisit

    The power spectral analyses of the Sun's surface equatorial rotation rate, determined from the Mt. Wilson daily Doppler velocity measurements during the period 3 December 1985 to 5 March 2007, suggest the existence of 7.6-year, 2.8-year, 1.47-year, 245-day, 182-day and 158-day periodicities in the surface equatorial rotation rate during the period before 1996. However, there is no variation of any kind in the more accurately measured data during the period after 1995. That is, the aforementioned periodicities in the data from before 1996 may be artifacts of the uncertainties in those data arising from the frequent changes in the instrumentation of the Mt. Wilson spectrograph. On the other hand, the temporal behavior of most activity phenomena during cycles 22 (1986-1996) and 23 (after 1997) is considerably different. Therefore, the presence of the aforementioned short-term periodicities during the last cycle and their absence in the current cycle may, in principle, be real temporal behavior of the solar rotation during these cycles. Comment: 11 pages, 6 figures, accepted for publication in Solar Physics.
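    A minimal sketch of the kind of periodicity search described above: a Lomb-Scargle periodogram of an unevenly sampled daily rotation-rate series, scanned over trial periods from tens of days to a few years. The synthetic data, the noise level and the injected 158-day period are purely illustrative.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# ~21 years of daily sampling with gaps (weather, instrument changes, ...)
t_days = np.sort(rng.choice(np.arange(7700), size=5000, replace=False)).astype(float)
rotation_rate = (0.02 * np.sin(2 * np.pi * t_days / 158.0)      # injected signal
                 + 0.05 * rng.standard_normal(t_days.size))     # noise

periods = np.linspace(30.0, 3000.0, 4000)   # trial periods in days
ang_freqs = 2 * np.pi / periods             # lombscargle expects angular frequencies
power = lombscargle(t_days, rotation_rate - rotation_rate.mean(), ang_freqs,
                    normalize=True)

print(f"strongest peak near {periods[np.argmax(power)]:.0f} days")
```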

    Surprising Sun

    Important revisions of the solar model ingredients have appeared after 35 years of intense work, which has led to an excellent agreement between solar models and solar neutrino detections. We first show that the updated CNO composition suppresses the anomalous position of the Sun in the known galactic enrichment. The following law, He/H = 0.075 + 44.6 O/H (in number fraction), is now compatible with all the indicators. We then examine the existing discrepancies between the standard model and solar observations (seismic and neutrino) and suggest some directions of investigation to solve them. We update our predicted neutrino fluxes using the recent composition, new nuclear reaction rates and seismic models as the most representative of the central plasma properties. We get (5.31 ± 0.6) × 10^6 cm^-2 s^-1 for the total ^8B neutrinos, and 66.5 SNU and 2.76 SNU for the gallium and chlorine detectors, all in remarkable agreement with the detected values, including neutrino oscillations for the last two. We conclude that the acoustic modes and detected neutrinos see the same Sun, but that clear discrepancies in solar modelling encourage further observational and theoretical efforts. Comment: 4 pages, 3 figures. Submitted to Phys. Rev. Lett.
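    A quick numerical illustration of the enrichment law quoted above, He/H = 0.075 + 44.6 O/H (by number). The O/H value used here, about 4.9e-4, is a rough modern photospheric solar estimate taken as an illustrative assumption, not a number from the abstract.

```python
def he_over_h(o_over_h: float) -> float:
    """Helium-to-hydrogen number ratio from the quoted enrichment law."""
    return 0.075 + 44.6 * o_over_h


# Illustrative solar-like oxygen abundance: O/H ~ 4.9e-4 by number
print(he_over_h(4.9e-4))  # ~ 0.097 helium atoms per hydrogen atom
```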

    The quest for the solar g modes

    Solar gravity modes (or g modes) -- oscillations of the solar interior for which buoyancy acts as the restoring force -- have the potential to provide unprecedented inference on the structure and dynamics of the solar core, inference that is not possible with the well observed acoustic modes (or p modes). The high amplitude of the g-mode eigenfunctions in the core and the evanescence of the modes in the convection zone make the modes particularly sensitive to the physical and dynamical conditions in the core. Owing to the existence of the convection zone, the g modes have very low amplitudes at photospheric levels, which makes the modes extremely hard to detect. In this paper, we review the current state of play regarding attempts to detect g modes. We review the theory of g modes, including theoretical estimation of the g-mode frequencies, amplitudes and damping rates. Then we go on to discuss the techniques that have been used to try to detect g modes. We review results in the literature, and finish by looking to the future, and the potential advances that can be made -- from both data and data-analysis perspectives -- to give unambiguous detections of individual g modes. The review ends by concluding that, at the time of writing, there is indeed a consensus amongst the authors that there is currently no undisputed detection of solar g modes. Comment: 71 pages, 18 figures, accepted by Astronomy and Astrophysics Review.
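    For orientation, the asymptotic theory summarised in reviews of this kind predicts that high-order g modes of a given degree are roughly equally spaced in period. A standard form of this relation, stated here as background rather than as a result of the paper, is

\[
P_{n,\ell} \simeq \Delta P_\ell \,(n + \epsilon_g), \qquad
\Delta P_\ell = \frac{2\pi^2}{\sqrt{\ell(\ell+1)}}
\left( \int_{r_1}^{r_2} N \,\frac{\mathrm{d}r}{r} \right)^{-1},
\]

    where N is the buoyancy (Brunt-Väisälä) frequency, the integral runs over the radiative interior where the modes propagate, and \epsilon_g is a slowly varying phase term.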