
    Classical methods of left ventricular contour extraction and preprocessing of echocardiographic images: a review

    A main objective of digital processing of echocardiographic images is to improve the signal-to-noise ratio of the video images acquired from the ultrasound equipment, along with contour extraction to obtain cardiac parameters. We present a review and comparison of the methods proposed in the current literature for both noise removal and contour extraction in echocardiographic images. It is shown that classical methods do not render good contours and that a different approach to contour extraction algorithms is needed.
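
    As an illustration of the kind of classical preprocessing the review surveys, the sketch below (our illustration, not taken from the paper; the 3x3 window and pure-Python loop are simplifying choices) applies a median filter, a standard classical step for suppressing speckle noise before contour extraction:

```python
# Illustrative sketch: 3x3 median filtering, a classical speckle-suppression
# step often applied to ultrasound images before contour extraction.
import numpy as np

def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2D image (border pixels left untouched)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

# A single bright speckle in a flat region is removed by the median:
noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0
assert median_filter_3x3(noisy)[2, 2] == 0.0
```

    Unlike mean filtering, the median discards the outlier entirely instead of smearing it into neighboring pixels, which is why it preserves edges better, one reason it appears so often in classical pipelines.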

    New Robust Obstacle Detection System Using Color Stereo Vision

    Intelligent transportation systems (ITS) are divided into intelligent infrastructure systems and intelligent vehicle systems. Intelligent vehicle systems are typically classified into three categories, namely 1) Collision Avoidance Systems; 2) Driver Assistance Systems; and 3) Collision Notification Systems. Obstacle detection is one of the crucial tasks for Collision Avoidance Systems and Driver Assistance Systems. Obstacle detection systems use vehicle-mounted sensors to detect obstructions, such as other vehicles, bicyclists, pedestrians, road debris, or animals, in a vehicle's path and alert the driver. Obstacle detection systems are intended to help drivers see farther and therefore have more time to react to road hazards. These systems also give drivers a larger visibility area when visibility conditions are reduced, such as at night or in fog, snow, or rain. Obstacle detection systems process data acquired from one or several sensors: radar Kruse et al. (2004), lidar Gao & Coifman (2006), monocular vision Lombardi & Zavidovique (2004), stereo vision Franke (2000) Bensrhair et al. (2002) Cabani et al. (2006b) Kogler et al. (2006) Woodfill et al. (2007), and vision fused with active sensors Gern et al. (2000) Steux et al. (2002) Mobus & Kolbe (2004) Zhu et al. (2006) Alessandretti et al. (2007) Cheng et al. (2007). It is clear now that most obstacle detection systems cannot work without vision. Typically, vision-based systems consist of cameras that provide gray-level images. When visibility conditions are reduced (night, fog, twilight, tunnel, snow, rain), vision systems are almost blind, and obstacle detection systems become less robust and reliable. To deal with the problem of reduced visibility conditions, infrared or color cameras can be used. Thermal imaging cameras were initially used by the military. Over the last few years, these systems have become accessible to the commercial market, and can be found in select 2006 BMW cars.
For example, vehicle headlight systems provide between 75 and 140 meters of moderate illumination; at 90 km per hour this leaves less than 4 seconds to react to hazards. With PathFindIR PathFindIR (n.d.) (a commercial system), a driver can have more than 15 seconds. Other systems still in the research stage assist drivers in detecting pedestrians Xu & Fujimura (2002) Broggi et al. (2004) Bertozzi et al. (2007). Color is suited to various visibility conditions and various environments. In Betke et al. (2000) and Betke & Nguyen (1998), Betke et al. demonstrated that the tracking o

    Rough or Noisy? Metrics for Noise Estimation in SfM Reconstructions

    Structure from Motion (SfM) can produce highly detailed 3D reconstructions, but distinguishing real surface roughness from reconstruction noise and geometric inaccuracies has always been a difficult problem. Existing commercial SfM solutions achieve noise removal through a combination of aggressive global smoothing and reliance on the reconstructed texture for smaller details, which is a subpar solution when the results are used for surface inspection. Other noise estimation and removal algorithms do not take advantage of all the additional data connected with SfM. We propose a number of geometrical and statistical metrics for noise assessment, based on both the reconstructed object and the capturing camera setup. We test the correlation of each metric with the presence of noise on reconstructed surfaces and demonstrate that classical supervised learning methods, trained with these metrics, can distinguish between noise and roughness with an accuracy above 85%, with an additional 5–6% of performance coming from the capturing-setup metrics. Our proposed solution can easily be integrated into existing SfM workflows, as it does not require more image data or additional sensors. Finally, as part of the testing, we create an image dataset for SfM from a number of objects of varying shapes and sizes, which is available online together with ground-truth annotations.
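
    One plausible instance of a geometrical metric of the kind described above is the residual of a local plane fit. The sketch below is our own illustration, not the paper's exact metric: it measures how far a patch of reconstructed 3D points deviates from its best-fit plane, a quantity that grows with both genuine roughness and reconstruction noise.

```python
import numpy as np

def plane_fit_residual(points):
    """RMS distance of 3D points to their total-least-squares best-fit plane.
    The plane normal is the right singular vector with the smallest
    singular value of the centered point cloud."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    distances = centered @ normal        # signed point-to-plane distances
    return float(np.sqrt(np.mean(distances ** 2)))

flat  = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
bumpy = [[0, 0, 0], [1, 0, 0.5], [0, 1, -0.5], [1, 1, 0.3]]
assert plane_fit_residual(flat) < 1e-9
assert plane_fit_residual(bumpy) > 0.01
```

    A metric like this on its own cannot separate noise from roughness, which is exactly why the paper combines several such object-side metrics with capturing-setup metrics in a supervised classifier.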

    The evaluation of entropy-based algorithm towards the production of closed-loop edge

    This research concerns the common problem of edge detection, which suffers from producing disjointed and incomplete edges, leading to misdetection of visual objects. Entropy-based algorithms have the potential to solve this problem by classifying which object each pixel in an image belongs to. Hence, this paper aims to evaluate the performance of entropy-based algorithms in producing the closed-loop edges that represent the formation of object boundaries. The research utilizes the concept of entropy to sense the uncertainty of a pixel's membership in the existing objects, in order to classify the pixel as edge or object. Six entropy-based algorithms are evaluated: the optimum entropy based on the Shannon formula, the optimum relative entropy based on Kullback-Leibler divergence, the maximum of the optimum entropy neighbour, the minimum of the optimum relative-entropy neighbour, the thinning of the optimum entropy neighbour, and the thinning of the optimum relative-entropy neighbour. An experiment compares the developed algorithms against Canny as a benchmark, employing five performance parameters: the average number of detected objects, the average number of detected edge pixels, the average size of detected objects, the ratio of edge pixels per object, and the average of the ten biggest sizes. The experiment shows that the entropy-based algorithms significantly improve the production of closed-loop edges, and the optimum relative-entropy neighbour based on Kullback-Leibler divergence is the most desirable approach among them because it produces bigger closed-loop edges on average. This finding suggests that entropy-based algorithms are a strong choice for supporting edge-based segmentation. The effectiveness of entropy in the segmentation task remains to be addressed in further research.
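
    The entropy computation at the core of these algorithms can be sketched as follows. This is a minimal illustration of the Shannon formula applied to a pixel's neighbourhood, not any of the six evaluated algorithms, which refine this idea considerably:

```python
import math
from collections import Counter

def window_entropy(window):
    """Shannon entropy (bits) of the gray-level distribution in a pixel window.
    Homogeneous windows (inside one object) have low entropy; windows that
    straddle an object boundary mix two populations and score higher."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

inside  = [10, 10, 10, 10, 10, 10, 10, 10, 10]      # object interior
on_edge = [10, 10, 10, 200, 200, 200, 10, 200, 10]  # mixed: edge candidate

assert window_entropy(inside) == 0.0
assert window_entropy(on_edge) > 0.9
```

    Thresholding such an uncertainty measure is what lets an entropy-based detector classify each pixel as edge or object, the membership question the abstract describes.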

    A Simulated shape recognition system using feature extraction

    A simulated shape recognition system using feature extraction was built as an aid for designing robot vision systems. The simulation allows the user to study the effects of image resolution and feature selection on the performance of a vision system that tries to identify unknown 2-D objects. Performance issues that can be studied include identification accuracy and recognition speed as functions of resolution and of the size and makeup of the feature set. Two approaches to feature selection were studied, as was a nearest-neighbor classification algorithm based on Mahalanobis distances. Using a pool of ten objects and twelve features, the system was tested through studies of hypothetical visual recognition tasks.
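
    The nearest-neighbor classification step based on Mahalanobis distances can be sketched as follows. This is a minimal illustration; the 2-D feature vectors and class statistics below are invented for demonstration, whereas the actual system used twelve features over ten objects:

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance between feature vector x and a class mean,
    i.e. Euclidean distance rescaled by the class's inverse covariance."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(d @ cov_inv @ d))

def classify(x, class_stats):
    """Nearest-neighbor rule: pick the class whose mean is closest in
    Mahalanobis distance. class_stats maps name -> (mean, inverse covariance)."""
    return min(class_stats, key=lambda c: mahalanobis(x, *class_stats[c]))

# Two toy shape classes with invented 2-D features (say, area and perimeter).
cov_inv = np.linalg.inv(np.array([[4.0, 0.0], [0.0, 1.0]]))
stats = {"square": (np.array([10.0, 12.0]), cov_inv),
         "disk":   (np.array([10.0, 16.0]), cov_inv)}
assert classify([10.0, 13.0], stats) == "square"
assert classify([10.0, 15.5], stats) == "disk"
```

    Scaling by the inverse covariance means a feature with large natural spread (here the first one) counts for less, which is what makes Mahalanobis distance preferable to plain Euclidean distance when features have very different variances.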

    Cloud-Edge suppression for visual outdoor navigation

    Hoffmann A, Möller R. Cloud-Edge suppression for visual outdoor navigation. Robotics. 2017;6(4):38. Outdoor environments pose multiple challenges for the visual navigation of robots, such as changing illumination conditions, seasonal changes, dynamic environments, and non-planar terrain. Illumination changes are mostly caused by the movement of the Sun and by changing cloud cover. Moving clouds are themselves also a dynamic aspect of a visual scene. The changing cloud cover poses a particular problem for visual homing algorithms, which compute the direction to a previously visited place by comparing the current view with a snapshot taken at that place, since cloud movements do not correspond to movements of the camera and thus constitute misleading information. We propose an edge-filtering method operating on linearly transformed RGB channels which reliably detects edges in the ground region of the image while suppressing edges in the sky region. To fulfill this criterion, the factors for the linear transformation of the RGB channels are optimized systematically with respect to this special requirement. Furthermore, we test the proposed linear transformation with an existing visual homing algorithm (MinWarping) and show that the performance of the visual homing method is significantly improved compared to the use of edge-filtering methods on alternative color information.
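
    The idea of a linear RGB transformation tuned to suppress sky edges can be sketched as follows. The weights and pixel values below are contrived purely for illustration; the paper obtains its transformation factors by systematic optimization rather than by hand:

```python
def transform(rgb, weights=(1.0, -1.2, -0.6)):
    """Map an (R, G, B) pixel to a scalar via a linear combination of channels.
    These example weights are chosen so that whitish clouds and blue sky
    land on similar values, weakening cloud edges, while ground colors stay
    separated. The real method optimizes such weights systematically."""
    r, g, b = rgb
    return weights[0] * r + weights[1] * g + weights[2] * b

sky          = (80, 120, 220)    # blue sky
cloud        = (230, 235, 240)   # bright cloud
ground_grass = (60, 140, 50)
ground_rock  = (120, 110, 100)

# Edge strength as the contrast between adjacent regions after the transform:
sky_edge    = abs(transform(cloud) - transform(sky))
ground_edge = abs(transform(ground_rock) - transform(ground_grass))
assert ground_edge > sky_edge
```

    An edge filter run on the transformed channel then naturally responds in the ground region and stays quiet in the sky region, which is the property the homing algorithm needs.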

    Edge Direction Confidence Estimation for Improvement of Hough Accumulation

    The Hough transform silhouette identification method requires consistency of edge direction when identifying similar silhouettes. Many gradient operators used in Hough preprocessing require a thresholding and a non-maximum suppression routine to aid the localization process. These routines may delete edges or cause edge fragmentation. These anomalies degrade Hough performance, since silhouettes are not extracted accurately, and they reduce correct localization in the Hough accumulator. Noise and sampling errors can be removed by several enhancement routines presented here: mean, median, symmetric nearest neighbor, high-pass, and low-pass filters. An edge detection process is presented which produces a directional image and a confidence image, allowing subsequent image analysis to determine whether, and to what degree, the detected original edge orientation is accurate. The orientation confidence is produced by comparing a 7-by-7 operator with the Compass Gradient operator. This gives the Hough process the ability to modify the position of accumulation, thereby improving the Hough localization process.
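
    The way a directional image and a confidence image can steer Hough accumulation might be sketched as follows. This is an illustrative simplification, not the paper's method: each edge pixel votes only for the single line orientation implied by its gradient direction, weighted by its confidence, instead of voting across all orientations:

```python
import math
from collections import defaultdict

def hough_lines(edges, n_theta=36, rho_step=1.0, conf_threshold=0.5):
    """Line Hough accumulation guided by per-pixel direction and confidence.
    `edges` is a list of (x, y, direction, confidence) tuples. Low-confidence
    pixels are skipped, and each remaining pixel casts one confidence-weighted
    vote at the theta implied by its gradient direction."""
    acc = defaultdict(float)
    for x, y, direction, conf in edges:
        if conf < conf_threshold:
            continue
        theta = direction % math.pi                       # fold to [0, pi)
        t_idx = int(round(theta / math.pi * n_theta)) % n_theta
        theta_q = t_idx * math.pi / n_theta               # quantized theta
        rho = x * math.cos(theta_q) + y * math.sin(theta_q)
        acc[(t_idx, round(rho / rho_step))] += conf
    return acc

# A vertical line x = 5: its gradient points along +x, so direction = 0.
edges = [(5, y, 0.0, 0.9) for y in range(10)]
acc = hough_lines(edges)
assert max(acc, key=acc.get) == (0, 5)   # theta = 0, rho = 5
```

    Restricting each pixel's votes this way concentrates the accumulator peak and reduces spurious intersections, which is the kind of localization improvement the confidence image enables.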

    Multicast routing protocols and architectures in mobile ad-hoc wireless networks

    The basic philosophy of personal communication services is to provide user-to-user, location-independent communication services. Emerging group-communication wireless applications, such as multipoint data dissemination and multiparty conferencing tools, have made the design and development of efficient multicast techniques in mobile ad-hoc networking environments a necessity, not just a desire. Multicast protocols in mobile ad-hoc networks have been an area of active research for the past few years. In this dissertation, protocols and architectures for supporting multicast services in mobile ad-hoc wireless networks are proposed, analyzed, and evaluated. In the first chapter, the activities and recent advances in this work-in-progress area are summarized by identifying the main issues and challenges that multicast protocols face in mobile ad-hoc networking environments and by surveying several existing multicast protocols. A classification of the current multicast protocols is presented, the functionality of the individual existing protocols is discussed, and a qualitative comparison of their characteristics is provided according to several distinct features and performance parameters. In the second chapter, a novel mobility-based clustering strategy that facilitates the support of multicast routing and mobility management in mobile ad-hoc networks is presented. In the proposed structure, mobile nodes are organized into non-overlapping clusters with adaptive, variable sizes according to their respective mobility. The proposed mobility-based clustering (MBC) approach uses a combination of both physical and logical partitions of the network (i.e., geographic proximity and functional relations between nodes, such as mobility patterns). In the third chapter, an entropy-based modeling framework for supporting and evaluating stability in mobile ad-hoc wireless networks is proposed.
The basic motivation for the proposed modeling approach stems from the commonality observed between location uncertainty in mobile ad-hoc wireless networks and the concept of entropy. In the fourth chapter, a Mobility-based Hybrid Multicast Routing (MHMR) protocol suitable for mobile ad-hoc networks is proposed. MHMR uses the MBC algorithm as its underlying structure. The main features the proposed protocol introduces are the following: a) mobility-based clustering and a group-based hierarchical structure, to effectively support stability and scalability; b) a group-based (limited) mesh structure and forwarding-tree concepts, to combine the robustness of mesh topologies, which provide limited redundancy, with the efficiency of tree forwarding; and c) a combination of proactive and reactive concepts, which provides the low route-acquisition delay of proactive techniques and the low overhead of reactive methods. In the fifth chapter, an architecture for supporting geomulticast services with high message-delivery accuracy in mobile ad-hoc wireless networks is presented. Geomulticast is a specialized location-dependent multicasting technique in which messages are multicast to specific user groups within a specific zone. An analytical framework used to evaluate the various geomulticast architectures and protocols is also developed and presented. The last chapter concludes the dissertation.
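
The link between entropy and location uncertainty can be illustrated with a small sketch. The binning and normalization below are our own illustrative choices, not the dissertation's exact formulation: a node pair whose relative motion is predictable yields a low-entropy sample distribution (a stable link), while erratic relative motion yields high entropy.

```python
import math
from collections import Counter

def mobility_entropy(relative_speeds, bin_size=1.0):
    """Normalized Shannon entropy of a node pair's relative-speed samples.
    Values near 0 indicate predictable relative motion (a stable link);
    values near 1 indicate high location uncertainty."""
    bins = Counter(int(s // bin_size) for s in relative_speeds)
    n = len(relative_speeds)
    if n <= 1:
        return 0.0
    h = -sum((c / n) * math.log(c / n) for c in bins.values())
    return h / math.log(n)   # normalize by the maximum possible entropy

stable   = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1]   # nearly constant relative motion
unstable = [0.5, 3.2, 7.9, 1.4, 9.6, 5.1]   # erratic relative motion

assert mobility_entropy(stable) < mobility_entropy(unstable)
```

A clustering or routing layer could prefer neighbors with low mobility entropy when building clusters or multicast meshes, which is the intuition behind using entropy as a stability measure.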

    Automated flaw detection method for X-ray images in nondestructive evaluation

    Private, government and commercial sectors of the manufacturing world are plagued with imperfect materials, defective components, and aging assemblies that continuously infiltrate the products and services provided to the public. Increasing awareness of public safety and economic stability has caused the manufacturing world to search deeper for a solution to identify these mechanical weaknesses and thereby reduce their impact. The areas of digital image and signal processing have benefited greatly from the technological advances in computer hardware and software capabilities and the development of new processing methods resulting from extensive research in information theory, artificial intelligence, pattern recognition and related fields. These new processing methodologies and capabilities are laying a foundation of knowledge that empowers the industrial and academic community to boldly address this problem and begin designing and building better products and systems for tomorrow

    MobiMed: Framework for Rapid Application Development of Medical Mobile Apps

    In the medical field, images obtained from high-definition cameras and other medical imaging systems are an integral part of medical diagnosis. The analysis of these images is usually performed by physicians, who sometimes need to spend long hours reviewing the images before they are able to come up with a diagnosis and decide on a course of action. In this dissertation we present a framework for computer-aided analysis of medical imagery via the use of an expert system. While this problem has been discussed before, we consider a system based on mobile devices. Since the release of the iPhone in 2007, the popularity of mobile devices has increased rapidly and our lives have become more reliant on them. This popularity and the ease of development of mobile applications have now made it possible to perform on these devices many of the image analyses that previously required a personal computer. All of this has opened the door to a whole new set of possibilities and freed physicians from their reliance on desktop machines. The approach proposed in this dissertation aims to capitalize on these newfound opportunities by providing a framework for the analysis of medical images that physicians can utilize from their mobile devices, thus removing their reliance on desktop computers. We also provide an expert system to aid in the analysis and to advise on the selection of medical procedures. Finally, we allow for other mobile applications to be developed by providing a generic mobile application development framework that brings other applications into the mobile domain. In this dissertation we outline our work leading toward the development of the proposed methodology and the remaining work needed to find a solution to the problem.
In order to make this difficult problem tractable, we divide it into three parts: the development of a user interface modeling language and tooling, the creation of a game-development modeling language and tooling, and the development of a generic mobile application framework. To make the problem more manageable, we will narrow the initial scope to the hair transplant and glaucoma domains.