
    Segmentation and Fracture Detection in CT Images for Traumatic Pelvic Injuries

    In recent decades, advances in technology have allowed more types and greater quantities of medical data to be collected. These data contain a large amount of significant, clinically critical information, and highly efficient, automated computational methods are urgently needed to process and analyze them in order to provide physicians with recommendations and predictions for diagnostic decisions and treatment planning. Traumatic pelvic injury is a severe yet common injury in the United States, often caused by motor vehicle accidents or falls. The information contained in pelvic Computed Tomography (CT) images is essential for assessing the severity and prognosis of traumatic pelvic injuries. Each pelvic CT scan includes a large number of slices, and each slice contains a large quantity of data that cannot be analyzed thoroughly and accurately by simple visual inspection at the desired accuracy and speed. Hence, a computer-assisted pelvic trauma decision-making system is needed to assist physicians in making accurate diagnostic decisions and determining treatment plans in a short period of time. Pelvic bone segmentation is a vital step in analyzing pelvic CT images and assisting physicians with diagnostic decisions in traumatic pelvic injuries. In this study, a new hierarchical segmentation algorithm is proposed to automatically extract multiple-level bone structures using a combination of anatomical knowledge and computational techniques. First, morphological operations, image enhancement, and edge detection are performed for preliminary bone segmentation. The proposed algorithm then uses a template-based best-shape-matching method that makes the segmentation process entirely automatic. This is followed by the proposed Registered Active Shape Model (RASM) algorithm, which extracts pelvic bone tissue using training models that are more robust than those of the standard ASM algorithm. In addition, a novel hierarchical initialization process for RASM is proposed to address the main shortcoming of the standard ASM, namely its high sensitivity to initialization. Two measures are defined to quantify segmentation accuracy: Mean Distance and Mis-segmented Area. Successful segmentation results indicate the effectiveness and robustness of the proposed algorithm, and segmentation performance is also compared against the Snake method. A cross-validation process is designed to demonstrate the effectiveness of the training models, and 3D pelvic bone models are built after the bone structures are segmented from consecutive 2D CT slices. Automatic and accurate detection of fractures in the segmented bones can help physicians assess the severity of a patient's injuries. The extraction of fracture features (such as the presence and location of fractures) and the measurement of fracture displacement are vital for assisting physicians in making faster and more accurate decisions. In this project, after bone segmentation, fracture detection is performed using a hierarchical algorithm based on wavelet transformation, adaptive windowing, boundary tracing, and masking. A quantitative measure of fracture severity based on pelvic CT scans is also defined and explored. The results are promising, demonstrating that the proposed method is not only capable of automatically detecting both major and minor fractures, but also has potential for clinical application.
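
    To illustrate the preliminary segmentation stage (morphological operations, enhancement, and edge detection), here is a minimal Python sketch using OpenCV. It assumes an 8-bit windowed CT slice; the threshold and kernel values are illustrative placeholders, not values from the study.

```python
import cv2
import numpy as np

def preliminary_bone_segmentation(ct_slice: np.ndarray):
    """Rough bone mask from one CT slice (8-bit grayscale assumed)."""
    # Contrast enhancement so bone stands out from soft tissue.
    enhanced = cv2.equalizeHist(ct_slice)
    # Bone is the brightest tissue in CT; keep high intensities.
    _, mask = cv2.threshold(enhanced, 200, 255, cv2.THRESH_BINARY)
    # Morphological opening/closing removes speckle and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Edge detection outlines the candidate bone regions.
    edges = cv2.Canny(mask, 50, 150)
    return mask, edges
```

    The mask would then seed the template matching and RASM stages described above; those model-based steps are specific to the study and are not sketched here.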

    Underwater Localization in Complex Environments

    The ability of an autonomous underwater vehicle (AUV) to localize itself in a complex environment, and to extract relevant features from it, is of crucial importance for successful navigation. This task is particularly challenging in underwater environments because signals from global positioning systems and other radio-frequency sources attenuate rapidly and suffer from scattering and reflection, so filtering processes are required. A complex environment is defined here as a scenario containing objects detached from the walls; such an object can have a certain variability of orientation, so its position is not always known. Example scenarios are a harbour, a tank, or even a dam reservoir, where there are walls and, within those walls, an AUV may need to localize itself with respect to the other vehicles in the area and position itself relative to one of them in order to observe, analyse, or scan it. Autonomous vehicles employ many different types of sensors for localization and for perceiving their environment, and they depend on on-board computers to perform autonomous driving tasks. This dissertation addresses a concrete problem: locating a cable suspended in the water column in a known region of the sea and navigating with respect to it. Although the cable's position in the world is well known, the cable's dynamics make its exact location uncertain. Therefore, for the vehicle to localize itself relative to the cable so that the cable can be inspected, localization must be based on optical and acoustic sensors. This study explores the processing and analysis of optical and acoustic images, acquired by a camera and a mechanically scanned imaging sonar (MSIS) respectively, in order to extract relevant environmental features that enable the estimation of the vehicle's location. The points of interest extracted from each sensor feed a position estimator implemented as an Extended Kalman Filter (EKF), which estimates the position of the cable; the filter's feedback is in turn used to improve the interest-point extraction processes.
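
    The following minimal sketch shows the shape of the EKF fusion step described above, assuming a 2D cable position state, a near-constant-position motion model, and direct position fixes from the two sensors; all matrices and noise values are illustrative, not taken from the dissertation.

```python
import numpy as np

class CableEKF:
    """Toy EKF fusing camera and sonar position fixes of the cable."""
    def __init__(self):
        self.x = np.zeros(2)          # estimated cable position [x, y]
        self.P = np.eye(2) * 10.0     # state covariance (high initial doubt)
        self.Q = np.eye(2) * 0.01     # process noise: slow cable drift

    def predict(self):
        # Constant-position model: state unchanged, uncertainty grows.
        self.P += self.Q

    def update(self, z, R):
        # Direct position measurement from either sensor, so H = I.
        H = np.eye(2)
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H.T + R              # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x += K @ y
        self.P = (np.eye(2) - K @ H) @ self.P

ekf = CableEKF()
ekf.predict()
ekf.update(np.array([1.2, -0.4]), R=np.eye(2) * 0.5)  # camera fix
ekf.update(np.array([1.1, -0.3]), R=np.eye(2) * 1.0)  # noisier sonar fix
```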

    A new framework for sign language alphabet hand posture recognition using geometrical features through artificial neural network (part 1)

    Hand pose tracking is essential in sign languages. Automatic recognition of performed hand signs enables a number of applications, in particular helping people with speech impairment communicate with hearing people. The proposed framework, called ASLNN, is a new hand posture recognition technique for the American sign language alphabet based on a neural network that operates on geometrical features extracted from the hand. The user's hand is captured by a three-dimensional depth-based sensor camera, and the hand is then segmented using depth analysis. The proposed system is called depth-based geometrical sign language recognition (DGSLR). DGSLR adopts a simple hand segmentation approach that can also be reused in other segmentation applications. The proposed geometrical feature extraction improves recognition accuracy because the features are invariant to hand orientation, in contrast to the discrete cosine transform and moment invariants. The experiments demonstrate that combining the extracted features improves accuracy rates. An artificial neural network is then used to produce the desired outcomes. ASLNN is proficient at hand posture recognition, achieving accuracy of up to 96.78%; further details are given in the authors' follow-up paper in this journal.
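
    As a rough illustration of the pipeline (orientation-invariant geometric features feeding a neural network), here is a sketch in Python. The feature set is hypothetical (fingertip-to-palm distances normalized by palm width, which are scale- and orientation-invariant); the paper's exact DGSLR features and network are not reproduced, and the training data below is a random stand-in.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def geometric_features(fingertips: np.ndarray, palm_center: np.ndarray,
                       palm_width: float) -> np.ndarray:
    """Distances from palm center to each fingertip, normalized by
    palm width, so the vector is invariant to rotation and scale."""
    dists = np.linalg.norm(fingertips - palm_center, axis=1)
    return dists / palm_width

# Stand-in dataset: 200 random "hands", 26 alphabet classes.
rng = np.random.default_rng(0)
X = np.array([geometric_features(rng.random((5, 2)) * 100,
                                 np.array([50.0, 50.0]), 40.0)
              for _ in range(200)])
y = rng.integers(0, 26, 200)

# Small feed-forward network, one feature vector per posture.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)
print(clf.predict(X[:3]))
```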

    Real-time people tracking in a camera network

    Visual tracking is fundamental to the recognition and analysis of human behaviour. In this thesis we present an approach to track several subjects using multiple cameras in real time. The tracking framework employs a numerical Bayesian estimator, also known as a particle filter, which has been developed for parallel implementation on a Graphics Processing Unit (GPU). In order to integrate multiple cameras into a single tracking unit we represent the human body by a parametric ellipsoid in a 3D world. The elliptical boundary can be projected rapidly, several hundred times per subject per frame, onto any image for comparison with the image data within a likelihood model. By adding variables that encode visibility and persistence to the state vector, we tackle the problems of distraction and short-period occlusion. However, subjects may also disappear for longer periods due to blind spots between the cameras' fields of view. To recognise a desired subject after such a long period, we add coloured texture to the ellipsoid surface, which is learnt and retained during the tracking process. This texture signature improves the recall rate from 60% to 70-80% compared to state-only data association. Compared to a standard Central Processing Unit (CPU) implementation, a significant speed-up is achieved.
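
    The skeleton of one particle-filter step is sketched below, assuming the state is a subject's 3D position and stubbing out the multi-camera ellipsoid likelihood, which is the thesis-specific part. The motion model, particle count, and dummy likelihood are illustrative assumptions only.

```python
import numpy as np

def likelihood(particle, images):
    # Placeholder: the thesis projects the parametric ellipsoid into
    # every camera view and scores it against the image data.
    return np.exp(-np.sum(particle ** 2))  # dummy score

def pf_step(particles, weights, images, motion_std=0.05):
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    # Update: reweight each particle by the multi-camera likelihood.
    weights = weights * np.array([likelihood(p, images) for p in particles])
    weights /= weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    n = len(weights)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)

particles = np.random.normal(0, 1, (500, 3))
weights = np.full(500, 1 / 500)
particles, weights = pf_step(particles, weights, images=None)
```

    On a GPU, the per-particle likelihood evaluations in the update step are independent, which is what makes the parallel implementation effective.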

    Hand gesture recognition using Kinect.

    Hand gesture recognition (HGR) is an important research topic because some situations require silent communication with sign languages. Computational HGR systems assist silent communication and help people learn a sign language. In this thesis, a novel method for contact-less HGR using Microsoft Kinect for Xbox is described, and a real-time HGR system is implemented with Microsoft Visual Studio 2010. Two different scenarios for HGR are provided: the Popular Gesture set with nine gestures, and the Numbers set with nine gestures. The system allows users to select a scenario, and it is able to detect hand gestures made by users, to identify fingers, to recognize the meanings of gestures, and to display the meanings and pictures on screen. The accuracy of the HGR system is from 84% to 99% with single-hand gestures, and from 90% to 100% if both hands perform the same gesture at the same time. Because the depth sensor of the Kinect is an infrared camera, lighting conditions, signers' skin colors and clothing, and background have little impact on the performance of this system. The accuracy and the robustness make this system a versatile component that can be integrated into a variety of applications in daily life.
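
    A common way to implement the depth-based hand segmentation and finger identification described above is depth thresholding followed by convexity-defect analysis; the Python sketch below shows that approach with OpenCV. The distance band and defect-depth threshold are illustrative assumptions, not the thesis's values.

```python
import cv2
import numpy as np

def count_fingers(depth_mm: np.ndarray, near=500, far=800) -> int:
    """Count extended fingers in a Kinect depth frame (millimetres)."""
    # Keep only pixels within the expected hand distance band.
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)   # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep convexity defects correspond to valleys between fingers.
    deep = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20)  # defect depth in pixels
    return min(deep + 1, 5)
```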

    Enabling Seamless Access to Digital Graphical Contents for Visually Impaired Individuals via Semantic-Aware Processing

    Vision is one of the main sources through which people obtain information from the world, but unfortunately, visually impaired people are partially or completely deprived of this type of information. With the help of computer technologies, people with visual impairment can independently access digital textual information by using text-to-speech and text-to-Braille software. In general, however, a major barrier remains for people who are blind: accessing graphical information independently, in real time, without the help of sighted people. In this paper, we propose a novel multi-level and multi-modal approach to this challenging and practical problem. The key idea is semantic-aware visual-to-tactile conversion through semantic image categorization and segmentation, and semantic-driven image simplification. An end-to-end prototype system was built based on this approach. We present the details of the approach and the system, report sample experimental results on realistic data, and compare our approach with current typical practice.
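
    As a conceptual sketch of the visual-to-tactile direction only, the Python snippet below reduces a graphic to a clean, touch-readable line drawing. The paper's semantic categorization and segmentation stages are collapsed here into a single generic edge-and-cleanup pass; category-specific simplification rules would plug in at that point.

```python
import cv2
import numpy as np

def simplify_for_tactile(image_bgr: np.ndarray) -> np.ndarray:
    """Reduce a graphic to bold strokes suitable for tactile rendering."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Smooth texture while preserving object boundaries.
    gray = cv2.bilateralFilter(gray, 9, 75, 75)
    edges = cv2.Canny(gray, 50, 150)
    # Thicken strokes so they remain distinguishable by touch.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.dilate(edges, kernel, iterations=1)
```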