
    TAXATION-FEDERAL ESTATE AND GIFT TAXATION-POWERS OF APPOINTMENT ACT OF 1951

    It is the purpose of this comment to trace the history of Congress' treatment of powers of appointment and to examine the latest legislative pronouncement, the Powers of Appointment Act of 1951.

    Design, validation and dissemination of an undergraduate assessment tool using SimMan® in simulated medical emergencies

    Background: Increasingly, medical students are taught acute medicine using whole-body simulator manikins. Aim: We aimed to design, validate and make widely available two simple assessment tools for use with the Laerdal SimMan® with final-year students. Methods: We designed two scenarios with criterion-based checklists focused on the assessment and management of two medical emergencies. Members of faculty critiqued the assessments for face validity, and the checklists were revised. We assessed three groups of different experience levels: Foundation Year 2 doctors, and third-year and final-year medical students. Differences between groups were analysed, and internal consistency and inter-rater reliability were calculated. A generalisability analysis was conducted using scenario and rater as facets in the design. Results: A maximum of two items were removed from either checklist following the initial survey. Significantly different scores for the three experience groups were reported for both scenarios, and inter-rater reliability was high (> 0.90). Internal consistency was poor (alpha < 0.5). The generalisability study results suggest that four cases would provide reliable discrimination between final-year students. Conclusions: These assessments proved easy to administer, and we have gone some way towards demonstrating construct validity and reliability. We have made the material available on a simulator website to enable others to reproduce these assessments.

    Exploring Spatial Patterns of Virginia Tornadoes Using Kernel Density and Space-Time Cube Analysis (1960-2019)

    This study evaluates the spatiotemporal patterns of Virginia tornadoes using the National Weather Service Storm Prediction Center's Severe Weather GIS (SVRGIS) database. In addition to descriptive statistics, the analysis employs kernel density estimation for spatial pattern analysis and space-time cubes to visualize the spatiotemporal frequency of tornadoes and potential trends. Most of the 726 tornadoes between 1960 and 2019 occurred in Eastern Virginia, along the Piedmont and Coastal Plain. Consistent with other literature, both the number of tornadoes and the number of tornado days have increased in Virginia. While 80% of the tornadoes occurred during the warm season, tornadoes occurred in every month, including two deadly tornadoes in January and February. Over the 60-year period, a total of 28 people were killed in the Commonwealth. Most tornado activity took place in the afternoon and early evening hours, drawing attention to the temporal variability of risk and vulnerability. Spatial analysis results identify significant, non-random clusters of tornado activity and increasing temporal frequency. While this study improves weather-related literacy and addresses a need in the Commonwealth, more research is necessary to further evaluate the synoptic and mesoscale mechanisms of Virginia tornadoes.
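    The kernel density step described above can be sketched as follows. This is a hedged illustration only: the touchdown coordinates are synthetic stand-ins (a Gaussian cloud loosely placed in eastern Virginia), not SVRGIS records, and the bandwidth is an arbitrary choice.

```python
import numpy as np

def gaussian_kde_grid(points, grid, bandwidth):
    """Evaluate an isotropic Gaussian kernel density estimate.

    points: (n, 2) event coordinates; grid: (m, 2) evaluation locations.
    Returns the estimated density at each grid location.
    """
    diff = grid[:, None, :] - points[None, :, :]            # (m, n, 2)
    sq = (diff ** 2).sum(axis=-1) / (2.0 * bandwidth ** 2)  # squared distances
    norm = len(points) * 2.0 * np.pi * bandwidth ** 2       # 2-D Gaussian normalizer
    return np.exp(-sq).sum(axis=1) / norm                   # (m,)

rng = np.random.default_rng(0)
# synthetic touchdown points clustered like an eastern hotspot (illustrative)
points = rng.normal(loc=[-77.0, 37.5], scale=[0.5, 0.4], size=(200, 2))

# evaluate the density on a coarse lon/lat grid for mapping
gx, gy = np.meshgrid(np.linspace(-78.5, -75.5, 40), np.linspace(36.5, 38.5, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = gaussian_kde_grid(points, grid, bandwidth=0.3)
print(density.shape)  # (1600,)
```

    In practice a tool such as the space-time cube would bin these densities by year as well, so that both the spatial clusters and the increasing temporal frequency reported above become visible in one structure.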

    I Wonder What Will William Tell

    https://digitalcommons.library.umaine.edu/mmb-vp/1731/thumbnail.jp

    Sparse Coding for Hyperspectral Images Using Random Dictionary and Soft Thresholding

    Many techniques have been developed recently for classification of hyperspectral images (HSI), including support vector machines (SVMs), neural networks and graph-based methods. To achieve good classification performance, a good feature representation of the HSI is essential. Many feature extraction algorithms have been developed, such as principal component analysis (PCA) and independent component analysis (ICA). Sparse coding has recently shown state-of-the-art performance in many applications, including image classification. In this paper, we present a feature extraction method for HSI data motivated by a recently developed sparse coding based image representation technique. Sparse coding consists of a dictionary learning step and an encoding step. In the learning step, we compared two different methods, L1-penalized sparse coding and random selection, for dictionary learning. In the encoding step, we utilized a soft-threshold activation function to obtain feature representations for HSI. We applied the proposed algorithm to an HSI dataset collected at the Kennedy Space Center (KSC) and compared our results with those obtained by a recently proposed method, the supervised locally linear embedding weighted k-nearest-neighbor (SLLE-WkNN) classifier. We achieved better performance on this dataset in terms of overall accuracy with a random dictionary. We conclude that this simple feature extraction framework might lead to more efficient HSI classification systems.
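    A minimal sketch of the random-dictionary encoding described above. The atom count, threshold value and the input spectra are illustrative assumptions, and the soft threshold is written in the common one-sided max(z − α, 0) form; the authors' exact implementation may differ.

```python
import numpy as np

def soft_threshold_features(X, D, alpha):
    """Encode spectra as soft-thresholded correlations with dictionary atoms.

    X: (n_pixels, n_bands) pixel spectra; D: (n_atoms, n_bands) dictionary.
    """
    Z = X @ D.T                        # correlations with each atom
    return np.maximum(Z - alpha, 0.0)  # one-sided soft threshold -> sparse features

rng = np.random.default_rng(0)
n_bands, n_atoms = 176, 64  # 176 usable KSC bands; atom count is a guess

# "random selection" variant sketched as normalized random atoms
D = rng.normal(size=(n_atoms, n_bands))
D /= np.linalg.norm(D, axis=1, keepdims=True)

X = rng.normal(size=(100, n_bands))  # stand-in for 100 pixel spectra
F = soft_threshold_features(X, D, alpha=0.25)
print(F.shape)  # (100, 64)
```

    The resulting sparse, non-negative features would then be fed to an ordinary classifier; the point of the abstract is that even this cheap encoding, with no dictionary optimization, was competitive.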

    Toward Automatic Subpixel Registration of Unmanned Airborne Vehicle Images

    Many applications, such as computer vision and especially super-resolution (SR), require registering images to subpixel accuracy; the estimated subpixel shifts are crucial in the reconstruction and restoration of SR images. In our work, an optical sensor mounted on an unmanned airborne vehicle (UAV) captures a set of images that contain sufficient overlapping area to reconstruct an SR image. Due to wind, the UAV may encounter rotational effects such as yaw, pitch and roll, which can distort the acquired and processed images with shear, tilt or perspective distortions. In this paper we propose a hybrid algorithm to register these UAV images to subpixel accuracy before feeding them to an SR reconstruction step. Our algorithm consists of two steps. The first step uses the scale-invariant feature transform (SIFT) to correct the distorted images. Because the resultant images are not registered to subpixel precision, the second step registers the images using a fast Fourier transform (FFT) based method that is both efficient and robust to moderate noise and lens optical blur. Our FFT-based method reduces the dimensionality of the Fourier matrix of the cross-correlation and uses a forward and backward search to obtain an accurate estimate of the subpixel shifts. We discuss the relation between the dimensionality reduction factors and the image shifts, and propose criteria that can be used to select these factors optimally. Finally, we compare our approach with other subpixel techniques in terms of efficiency and computational speed.
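    The FFT-based step can be illustrated with a generic phase-correlation sketch. Note this is the textbook formulation on a toy integer shift, not the authors' reduced-dimensionality variant; subpixel refinement would additionally interpolate around the correlation peak.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the translation of image b relative to image a.

    Uses the normalized cross-power spectrum; the inverse FFT peaks
    at the (circular) shift between the two images.
    """
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                     # keep phase, drop magnitude
    corr = np.real(np.fft.ifft2(F))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks past the midpoint back to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

a = np.zeros((64, 64))
a[20:30, 20:30] = 1.0                  # toy scene: a bright square
b = np.roll(a, (3, -5), axis=(0, 1))   # shift b by (+3, -5) pixels
print(phase_correlation_shift(a, b))   # (3, -5)
```

    Reducing the dimensionality of this Fourier matrix before the peak search, as the abstract describes, trades some of this exhaustive computation for speed.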

    Fast Stochastic Wiener Filter for Super-Resolution Image Restoration with Information Theoretic Visual Quality Assessment

    Super-resolution (SR) refers to reconstructing a single high-resolution (HR) image from a set of subsampled, blurred and noisy low-resolution (LR) images. The reconstructed image suffers from degradations such as blur, aliasing, photo-detector noise, and registration and fusion error. A Wiener filter can be used to remove artifacts and enhance the visual quality of the reconstructed images. In this paper, we introduce a new fast stochastic Wiener filter for SR reconstruction and restoration that can be implemented efficiently in the frequency domain. Our derivation depends on the continuous-discrete-continuous (CDC) model, which represents most of the degradations encountered during the image-gathering and image-display processes. We incorporate a new parameter that accounts for LR image registration and fusion errors. We also sped up the filter by constraining it to work on small patches of the images. Besides this, we introduce two figures of merit, information rate and maximum realizable fidelity, which can be used to assess the visual quality of the resultant images. Simulations and experimental results demonstrate that the derived Wiener filter, implemented efficiently in the frequency domain, can reduce aliasing, blurring and noise and produce a sharper reconstructed image. Quantitative assessment using the proposed figures coincides with the qualitative visual assessment. Finally, we evaluated our filter against other SR techniques, and its results were very competitive.
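    A hedged sketch of the frequency-domain idea: this is the classical Wiener deconvolution W = H* / (|H|² + NSR), not the paper's CDC-model stochastic filter or its patch-based speed-up, and the scene, point-spread function and noise level below are made up.

```python
import numpy as np

def wiener_deconvolve(g, h, nsr):
    """Restore image g blurred by PSF h, in the frequency domain.

    nsr is the assumed (flat) noise-to-signal power ratio, which
    regularizes frequencies where the blur transfer function is small.
    """
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)                 # zero-padded PSF spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)       # classical Wiener filter
    return np.real(np.fft.ifft2(W * G))

rng = np.random.default_rng(0)
f = np.zeros((64, 64))
f[24:40, 24:40] = 1.0                             # toy scene
h = np.ones((5, 5)) / 25.0                        # 5x5 box-blur PSF

# simulate degradation: circular blur plus photo-detector noise
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))
g += 0.01 * rng.normal(size=g.shape)

f_hat = wiener_deconvolve(g, h, nsr=1e-2)
print(f_hat.shape)  # (64, 64)
```

    The paper's filter additionally folds registration/fusion error into the denominator term and works patchwise for speed; the structure above is only the common starting point.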

    Molecular dynamics of flows in the Knudsen regime

    Novel technological applications often involve fluid flows in the Knudsen regime, in which the mean free path is comparable to the system size. We use molecular dynamics simulations to study the transition between the dilute-gas and dense-fluid regimes as the fluid density is increased. (Comment: REVTeX, 15 pages, 4 EPS figures, to appear in Physica)
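    For context, the Knudsen number this abstract refers to can be estimated with the standard hard-sphere mean-free-path formula; the gas parameters below are generic air-like values chosen for illustration, not taken from the paper.

```python
import math

def knudsen_number(T, p, d, L):
    """Kn = lambda / L, with hard-sphere mean free path
    lambda = k_B * T / (sqrt(2) * pi * d**2 * p)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    mfp = k_B * T / (math.sqrt(2) * math.pi * d ** 2 * p)
    return mfp / L

# air-like molecules (d ~ 0.37 nm) at ambient conditions in a 1-micron channel
kn = knudsen_number(T=300.0, p=101325.0, d=3.7e-10, L=1e-6)
print(round(kn, 3))  # ~0.067: transitional flow, neither continuum nor free-molecular
```

    Flows with Kn of order 0.1 to 10 are exactly the regime where continuum (Navier-Stokes) descriptions break down and particle-based methods such as molecular dynamics become attractive.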

    Starlike Sets, Inverse Starlike Sets, and a Generalization of Convexity

    Mathematics