
    Laser Based Mid-Infrared Spectroscopic Imaging – Exploring a Novel Method for Application in Cancer Diagnosis

    A number of biomedical studies have shown that mid-infrared spectroscopic images can provide both morphological and biochemical information that can be used for the diagnosis of cancer. Whilst this technique has shown great potential, it has yet to be employed by the medical profession. By replacing the conventional broadband thermal source employed in modern FTIR spectrometers with high-brightness, broadly tuneable laser-based sources (QCLs and OPGs), we aim to overcome one of the main obstacles to the transfer of this technology to the medical arena: the poor signal-to-noise ratios obtained at high spatial resolutions and short image acquisition times. In this thesis we take the first steps towards developing the optimum experimental configuration, the data processing algorithms, and the spectroscopic image contrast and enhancement methods needed to utilise these high-intensity laser-based sources. We show that a QCL system is better suited than an OPG system to providing numerical absorbance values (biochemical information), primarily due to the QCL's pulse stability. We also discuss practical protocols for the application of spectroscopic imaging to cancer diagnosis and present results from our laser-based spectroscopic imaging experiments on oesophageal cancer tissue.

    Locating the LCROSS Impact Craters

    The Lunar CRater Observation and Sensing Satellite (LCROSS) mission impacted a spent Centaur rocket stage into a permanently shadowed region near the lunar south pole. The Shepherding Spacecraft (SSC) separated ~9 hours before impact and performed a small braking maneuver in order to observe the Centaur impact plume, looking for evidence of water and other volatiles, before impacting itself. This paper describes the registration of imagery of the LCROSS impact region from the mid- and near-infrared cameras onboard the SSC, as well as from the Goldstone radar. We compare the Centaur impact features, positively identified in the first two and consistent with a feature in the third, which are interpreted as a 20 m diameter crater surrounded by a 160 m diameter ejecta region. The images are registered to Lunar Reconnaissance Orbiter (LRO) topographical data, which allows determination of the impact location. This location is compared with the impact location derived from ground-based tracking and propagation of the spacecraft's trajectory, and with locations derived from two hybrid imagery/trajectory methods. The four methods give a weighted average Centaur impact location of -84.6796°, -48.7093°, with a 1σ uncertainty of 115 m along latitude and 44 m along longitude, just 146 m from the target impact site. Meanwhile, the trajectory-derived SSC impact location is -84.719°, -49.61°, with a 1σ uncertainty of 3 m along the Earth vector and 75 m orthogonal to it, 766 m from the target location and 2.803 km south-west of the Centaur impact. We also detail the Centaur impact angle and SSC instrument pointing errors. Six high-level LCROSS mission requirements are shown to be met by wide margins. We hope that these results facilitate further analyses of the LCROSS experiment data and follow-up observations of the impact region. Accepted for publication in Space Science Reviews; 24 pages, 9 figures.
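    Combining the four independent location estimates above into a single position with a quoted 1σ uncertainty amounts to an inverse-variance weighted average. A minimal sketch of that calculation (not the authors' code; the coordinate values and per-method uncertainties below are hypothetical placeholders):

```python
# Inverse-variance weighted average of one coordinate estimated by
# several independent location methods. Values are hypothetical.

def weighted_mean(values, sigmas):
    """Combine independent estimates by inverse-variance weighting."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    sigma = (1.0 / total) ** 0.5  # 1-sigma uncertainty of the combination
    return mean, sigma

# Hypothetical latitude estimates (degrees) from four location methods,
# each with its own 1-sigma uncertainty in degrees.
lats = [-84.6790, -84.6801, -84.6795, -84.6798]
sigmas = [0.004, 0.003, 0.005, 0.004]

lat, lat_sigma = weighted_mean(lats, sigmas)
print(f"combined latitude: {lat:.4f} deg +/- {lat_sigma:.4f} deg")
```

Methods with smaller uncertainties dominate the average, and the combined uncertainty is always smaller than the best single method's.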

    Towards multiple 3D bone surface identification and reconstruction using few 2D X-ray images for intraoperative applications

    This article discusses a possible method to use a small number, e.g. five, of conventional 2D X-ray images to reconstruct multiple 3D bone surfaces intraoperatively. Each bone's edge contours in the X-ray images are automatically identified. Sparse 3D landmark points of each bone are automatically reconstructed by pairing the 2D X-ray images. The reconstructed landmark point distribution on a surface is approximately optimal, covering the main characteristics of the surface. A statistical shape model, the dense point distribution model (DPDM), is then used to fit the reconstructed optimal landmark vertices and reconstruct a full surface of each bone separately. The reconstructed surfaces can then be visualised and manipulated by surgeons or used by surgical robotic systems.
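    The shape-model fitting step can be illustrated as a linear least-squares fit of shape-mode coefficients to the sparse landmarks, after which the full dense surface follows from the fitted coefficients. The sketch below assumes a simple linear point-distribution model with randomly generated modes and hypothetical landmark indices; it is not the authors' DPDM implementation:

```python
import numpy as np

# Linear point-distribution model: surface = mean + sum_i c_i * mode_i.
# All shapes, modes, and landmark indices here are hypothetical.
rng = np.random.default_rng(0)
n_points, n_modes = 200, 5
mean_shape = rng.normal(size=(n_points, 3))
modes = rng.normal(size=(n_modes, n_points, 3))  # principal variation modes

# Suppose only a few landmark points were reconstructed from the X-rays.
landmark_idx = np.array([3, 40, 77, 120, 180])
true_coeffs = np.array([1.0, -0.5, 0.2, 0.0, 0.3])
landmarks = mean_shape[landmark_idx] + np.tensordot(
    true_coeffs, modes[:, landmark_idx, :], axes=1)

# Least-squares fit: which mode coefficients best explain the landmarks?
A = modes[:, landmark_idx, :].reshape(n_modes, -1).T  # (3*|idx|, n_modes)
b = (landmarks - mean_shape[landmark_idx]).ravel()
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# Reconstruct the full dense surface from the fitted coefficients.
full_surface = mean_shape + np.tensordot(coeffs, modes, axes=1)
```

With more landmark coordinates than modes, the system is overdetermined and the fit is stable even when individual landmarks are noisy.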

    Prediction model of alcohol intoxication from facial temperature dynamics based on K-means clustering driven by evolutionary computing

    Alcohol intoxication is a significant phenomenon affecting many social areas, including work procedures and car driving. Among its side effects, alcohol changes the facial thermal distribution, which may enable the contactless identification and classification of alcohol-intoxicated people. We adopted a multiregional segmentation procedure to identify and classify symmetrical facial features that reliably reflect the facial-temperature variations while subjects are drinking alcohol. Such a model can objectively track alcohol intoxication in the form of a facial temperature map. In this paper, we propose a segmentation model based on a clustering algorithm driven by a modified version of the Artificial Bee Colony (ABC) evolutionary optimization, with the goal of extracting facial temperature features from infrared (IR) images. This model allows for the definition of symmetric clusters, identifying facial temperature structures corresponding with intoxication. The ABC algorithm serves as an optimization process, finding the cluster distribution with which the clustering method best approximates the individual areas linked with gradual alcohol intoxication. We analyzed a set of twenty volunteers whose IR images were taken to capture the process of alcohol intoxication. The proposed method performs multiregional segmentation, classifying the individual spatial temperature areas into segmentation classes. Besides modelling single IR images, the method allows for dynamic tracking of the alcohol-temperature features throughout intoxication, from the sober state up to the maximum observed intoxication level.
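    The clustering-plus-evolutionary-search idea can be sketched in simplified form: cluster 1-D temperature values into segmentation classes while refining the cluster centres with a bee-colony-style random search. This is an illustration only, not the authors' implementation; the full ABC colony is reduced to a single perturb-and-keep loop, and all temperatures and parameters are hypothetical:

```python
import random

def sse(temps, centres):
    """Sum of squared distances of each sample to its nearest centre."""
    return sum(min((t - c) ** 2 for c in centres) for t in temps)

def abc_like_clustering(temps, k=3, iters=200, seed=1):
    """Cluster temperatures into k classes, refining centres by random search."""
    rng = random.Random(seed)
    centres = rng.sample(temps, k)  # initial "food sources"
    best = sse(temps, centres)
    for _ in range(iters):
        # "Employed bee" step: perturb one centre, keep it if it improves.
        cand = list(centres)
        i = rng.randrange(k)
        cand[i] += rng.uniform(-0.5, 0.5)
        cost = sse(temps, cand)
        if cost < best:
            centres, best = cand, cost
    labels = [min(range(k), key=lambda j: abs(t - centres[j])) for t in temps]
    return centres, labels

temps = [33.1, 33.4, 34.8, 35.0, 36.2, 36.4, 33.2, 35.1]  # hypothetical deg C
centres, labels = abc_like_clustering(temps, k=3)
```

A real ABC run maintains a population of candidate centre sets with employed, onlooker, and scout phases; the single-candidate loop above keeps only the accept-if-better core of that search.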

    Multimodal segmentation of lifelog data

    A personal lifelog of visual and audio information can be very helpful as a human memory augmentation tool. The SenseCam, a passive wearable camera, used in conjunction with an iRiver MP3 audio recorder, will capture over 20,000 images and 100 hours of audio per week. If used constantly, this would soon build up to a substantial collection of personal data. To gain real value from this collection it is important to automatically segment the data into meaningful units or activities. This paper investigates the optimal combination of data sources to segment personal data into such activities. Five data sources were logged and processed to segment a collection of personal data, namely: image processing on captured SenseCam images; audio processing on captured iRiver audio data; and processing of the temperature, white light level, and accelerometer sensors onboard the SenseCam device. The results indicate that a combination of the image, light and accelerometer sensor data segments our collection of personal data better than a combination of all five data sources. The accelerometer sensor is good for detecting when the user moves to a new location, while the image and light sensors are good for detecting changes in wearer activity within the same location, as well as detecting when the wearer socially interacts with others.
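    The multimodal fusion described above can be sketched as a weighted combination of per-sensor change scores, with an activity boundary declared wherever the fused score peaks. All sensor readings, weights, and the threshold below are hypothetical placeholders, not the paper's actual features:

```python
# Fuse per-sensor change scores to segment a lifelog stream into activities.

def change_scores(series):
    """Absolute change between consecutive readings, normalised to [0, 1]."""
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    peak = max(diffs) or 1.0
    return [d / peak for d in diffs]

def segment_boundaries(sensors, weights, threshold=0.7):
    """Return indices where the weighted, fused change score is high."""
    n = len(next(iter(sensors.values())))
    fused = [0.0] * (n - 1)
    for name, series in sensors.items():
        for i, s in enumerate(change_scores(series)):
            fused[i] += weights[name] * s
    return [i + 1 for i, f in enumerate(fused) if f >= threshold]

# Hypothetical synchronised streams: image dissimilarity, light level, motion.
sensors = {
    "image": [0.1, 0.1, 0.9, 0.2, 0.2],
    "light": [300, 310, 40, 42, 41],
    "accel": [0.2, 0.3, 1.5, 0.4, 0.3],
}
weights = {"image": 0.4, "light": 0.3, "accel": 0.3}
print(segment_boundaries(sensors, weights))  # -> [2]
```

Tuning the per-sensor weights corresponds to the paper's finding that some source combinations (image, light, accelerometer) segment better than using every source equally.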

    Project RISE: Recognizing Industrial Smoke Emissions

    Industrial smoke emissions pose a significant concern to human health. Prior works have shown that using Computer Vision (CV) techniques to identify smoke as visual evidence can influence the attitude of regulators and empower citizens to pursue environmental justice. However, existing datasets are not of sufficient quality nor quantity to train the robust CV models needed to support air quality advocacy. We introduce RISE, the first large-scale video dataset for Recognizing Industrial Smoke Emissions. We adopted a citizen science approach to collaborate with local community members to annotate whether a video clip has smoke emissions. Our dataset contains 12,567 clips from 19 distinct views from cameras that monitored three industrial facilities. These daytime clips span 30 days over two years, including all four seasons. We ran experiments using deep neural networks to establish a strong performance baseline and reveal smoke recognition challenges. Our survey study reports community feedback, and our data analysis reveals opportunities for integrating citizen scientists and crowd workers into the application of Artificial Intelligence for social good. Technical report.