
    Finite element model set-up of colorectal tissue for analyzing surgical scenarios

    Finite Element Analysis (FEA) has found extensive application in the medical field, for example in soft-tissue simulation. In particular, colorectal simulations can be used to understand the interaction with surrounding tissues, or with instruments used in surgical procedures. Although several works have considered the small displacements that result from forces exerted on adjacent tissues, applying FEA to colorectal surgical scenarios remains a challenge. This work therefore provides a sensitivity analysis of three geometric models, with different bioengineering tasks in mind. To this end, a set of simulations has been performed using three mechanical models: Linear Elastic, Hyper-Elastic with a Mooney-Rivlin material model, and Hyper-Elastic with a Yeoh material model.
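For reference, the two hyper-elastic material models named in this abstract are commonly written with the following strain-energy density functions (standard textbook forms, not taken from the paper itself), where the deviatoric strain invariants and material constants are denoted as usual:

```latex
W_{\text{Mooney-Rivlin}} = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3)
\qquad
W_{\text{Yeoh}} = \sum_{i=1}^{3} C_{i0}\,(\bar{I}_1 - 3)^{i}
```

Here $\bar{I}_1$ and $\bar{I}_2$ are the first and second deviatoric invariants of the left Cauchy-Green deformation tensor, and $C_{10}$, $C_{01}$, $C_{i0}$ are material constants fitted to experimental data.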

    3D Object Recognition and Facial Identification Using Time-averaged Single-views from Time-of-flight 3D Depth-Camera

    We report here on feasibility experiments for 3D object recognition and facial identification of persons from single views of real depth images, acquired with an “off-the-shelf” 3D time-of-flight depth camera. Our methodology is the following: for each person or object, we perform 2 independent recordings, one used for learning and the other for testing. For each recorded frame, a 3D mesh is computed by simple triangulation from the filtered depth image. The feature we use for recognition is the normalized histogram of the directions of the normal vectors to the 3D-mesh facets. We treat each training frame as a separate example, and training is done with a multilayer perceptron with 1 hidden layer. For our 3D facial identification experiments, 3 different persons were used, and we obtained a global correct rank-1 recognition rate of up to 80%, measured on test frames from an independent 3D video. For our 3D object recognition experiment, we considered 3 different objects, obtained a correct single-frame recognition rate of 95%, and checked that the method is quite robust to variation in the distance from the depth camera to the object. These first experiments show that 3D object recognition or 3D face identification with a time-of-flight 3D camera seems feasible, despite the high level of noise in the real depth images obtained.
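The recognition feature described above, a normalized histogram of facet-normal directions, can be sketched in a few lines of NumPy (a minimal version; the azimuth/elevation bin layout and bin count are my assumptions, not the paper's):

```python
import numpy as np

def normal_direction_histogram(vertices, faces, bins=16):
    """Normalized histogram of facet-normal directions for a triangle mesh.

    vertices: (V, 3) float array; faces: (F, 3) integer array of vertex indices.
    Normal directions are binned by azimuth and elevation into bins x bins cells.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)                    # facet normals (unnormalized)
    n /= np.linalg.norm(n, axis=1, keepdims=True)     # unit normals
    az = np.arctan2(n[:, 1], n[:, 0])                 # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(n[:, 2], -1.0, 1.0))       # elevation in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(az, el, bins=bins,
                                range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return (hist / hist.sum()).ravel()                # normalized feature vector
```

Because the histogram is normalized and pose-dependent only through the normal directions, it yields a fixed-length feature vector suitable as input to a multilayer perceptron.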

    People detection in nuclear plants by video processing for safety purpose

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose-rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, the Argonauta nuclear research reactor. Cameras have been installed within Argonauta’s room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detecting, segmenting and tracking people in video. This first paper reports on people segmentation in video using background subtraction, by two different approaches: frame differencing, and blind signal separation based on independent component analysis. Results are discussed, along with perspectives for further work.
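The first of the two segmentation approaches, frame differencing, can be sketched as follows (a minimal NumPy version; the threshold value is an assumption, and real systems would add morphological clean-up):

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Binary motion mask via frame differencing.

    prev_frame, curr_frame: grayscale uint8 images of equal shape.
    Returns a uint8 mask with 255 where the absolute per-pixel
    difference exceeds the threshold, else 0.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8) * 255
```

The cast to int16 avoids uint8 wrap-around when the current pixel is darker than the previous one.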

    An Exploration of Methods for Classifying Air-Written Letters from the Spanish Alphabet

    The ability to recognize human activity, especially air-writing, is an interesting challenge, as one could identify any letter from many languages. I intend to investigate this problem of air-writing, but with the added twist of including the following letters from the Spanish alphabet: Á, É, Í, Ó, Ú, Ü, and Ñ. With this new alphabet, I set out to see which kinds of classifiers work best and on which kinds of data, since letters can be represented in multiple ways. My tracking system consists of a regular camera and a subject who draws with a brightly colored marker (green in my experiments). The tracker follows the marker via the hue, saturation, and intensity (HSI) color space: it thresholds the HSI image on a certain hue range, identifies the edges in the resulting mask image, and computes the minimum enclosing circle of the set of edges. With this, the subject can draw letters, pressing a key to draw one letter at a time. I used the Python programming language, as well as the OpenCV library, to implement my design. The classifiers I employed are dynamic time warping, k-nearest neighbors, nearest centroid, and support vector machine. Dynamic time warping classifies letters based on their time-series representations; k-nearest neighbors and nearest centroid classify letters based on the means of each x and y component time series; and the support vector machine classifies letters based on their 28x28 image representations. My total dataset size was 3,630 samples, of which 2,640 were used for training and 990 for testing. After testing, dynamic time warping achieved 58.69% accuracy, k-nearest neighbors 48.79%, nearest centroid 47.98%, and the support vector machine 97.17%. When only the English letters were considered, the accuracies improved by about 2%. Although I believe more data and analysis are needed for a firmer conclusion, the image representation seems a good one to consider when classifying a vast array of letters, and potentially other kinds of characters.
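The colour-thresholding step of the tracker described above can be sketched roughly as follows (NumPy only; the hue range chosen for green, and the use of a centroid as a cheap stand-in for OpenCV's minimum enclosing circle, are my simplifications):

```python
import numpy as np

def hue_mask(hsv, lo=40, hi=80, s_min=60, v_min=60):
    """Mask pixels whose hue falls in [lo, hi] (OpenCV-style H in 0..179).

    hsv: (H, W, 3) uint8 image in hue/saturation/value order.
    Saturation and value floors reject grey and dark pixels.
    """
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((h >= lo) & (h <= hi) & (s >= s_min) & (v >= v_min)).astype(np.uint8)

def marker_center(mask):
    """Centroid (x, y) of the mask pixels, a stand-in for the centre of the
    minimum enclosing circle; returns None when no marker pixel is found."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

Tracking the returned centre frame by frame yields the x and y time series that the dynamic-time-warping and nearest-neighbour classifiers operate on.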

    FUSION FRAMEWORK FOR VIDEO EVENT RECOGNITION

    This paper presents a multisensor fusion framework for video activity recognition based on statistical reasoning and Dempster-Shafer (D-S) evidence theory. Specifically, the framework combines the computation of events' uncertainty against a trained database with a fusion method based on conflict management between evidences. Our framework aims to build a multisensor fusion architecture for event recognition by combining sensors, dealing with conflicting recognitions, and improving their performance. Within a complex event hierarchy, the Primitive State is chosen as the target event in the framework. An RGB camera and an RGB-D camera are used to recognise a person's basic activities in the scene. The main advantages of the proposed framework are that, first, it makes it easy to add more possible events into the system, with a complete structure for handling uncertainty; and second, inference in Dempster-Shafer theory resembles human perception and is well suited to uncertainty and conflict management with incomplete information. Cross-validation on real-world data (10 persons) is carried out using the proposed framework, and the evaluation shows promising results: the fusion approach achieves an average sensitivity of 93.31% and an average precision of 86.7%. These results are better than those obtained when only one camera is used, encouraging further research on combining more sensors and more events, as well as on optimizing the parameters of the framework.
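Dempster's rule of combination, the core operation behind the D-S fusion described above, can be sketched generically as follows (the event labels in the test mass functions are hypothetical, not taken from the paper):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Masses of intersecting focal elements are multiplied and accumulated;
    mass assigned to empty intersections is the conflict, normalized away.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                 # evidence in total disagreement
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}  # renormalized masses
```

The normalization by 1 - conflict is exactly the conflict-management step the abstract refers to; heavily conflicting sensors inflate the renormalization and signal disagreement.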

    Feature-based tracking of multiple people for intelligent video surveillance.

    Intelligent video surveillance is the process of performing surveillance tasks automatically with a computer vision system. It involves detecting and tracking people in a video sequence and understanding their behavior. This thesis addresses the problem of detecting and tracking multiple moving people against an unknown background. We have proposed a feature-based framework for tracking, which requires feature extraction and feature matching. We consider color, size, blob bounding box and motion information as features of people. In our feature-based tracking system, we propose using the Pearson correlation coefficient to match feature vectors with temporal templates. The occlusion problem is solved by histogram backprojection. Our tracking system is fast and free from assumptions about human structure. We have implemented our tracking system using Visual C++ and OpenCV and tested it on real-world images and videos. Experimental results suggest that our tracking system achieves good accuracy and can process videos at 10-15 fps.
    Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2006 .A42. Source: Masters Abstracts International, Volume: 45-01, page: 0347. Thesis (M.Sc.)--University of Windsor (Canada), 2006.
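Matching a feature vector against temporal templates with the Pearson correlation coefficient, as the thesis proposes, can be sketched like this (the template store and feature layout are my assumptions; the thesis's actual feature vectors combine color, size, bounding box and motion):

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length feature vectors."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a = a - a.mean()                     # center both vectors
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def match_track(feature, templates):
    """Assign a detected blob's feature vector to the track whose temporal
    template it correlates with best. templates: dict of track id -> vector."""
    return max(templates, key=lambda tid: pearson(feature, templates[tid]))
```

Because Pearson correlation is invariant to affine scaling of the vectors, it tolerates global brightness or size changes better than raw Euclidean distance would.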

    Evaluation of manual work in a virtual environment using a digital human model

    Nowadays, ergonomics evaluation almost necessarily requires a finished product or a physical prototype. Prototype manufacturing is a resource-consuming process, so ergonomics evaluation is often left to the later stages of product development. However, making changes becomes more expensive the further the process proceeds, so discovering targets for improvement in earlier phases is a significant advantage. Virtual techniques make it possible to build virtual models of both products and environments, and thereby bring ergonomics evaluation into the early stages of the design process. The purpose of this Master's thesis is to develop a system for evaluating manual work in a virtual environment (VE) using a digital human model (DHM). The thesis studies virtual environment technologies and systematic postural analysis methods. An application is developed in which a human model is driven by motion capture and coupled with an automated RULA analysis. The validity of the system is then examined through user testing, and the results given by the application are compared with those of a manual RULA analysis. Development of the system is not fully completed in this work, owing to stability problems that remain in the motion capture. Apart from motion capture, however, the system is found to work correctly. The stability of the human model is thus the most important target for further development; once it works properly, the system will be usable in future projects.

    Identifying the area of a computer screen being looked at, based on blur

    When it comes to understanding a person's behaviour, gaze is an important source of information. Analysing the behaviour of consumers or criminals, or even certain cognitive states, involves interpreting where a person looks in a scene over time. There is a real need to identify the area of a screen, or of any other medium, that a user is looking at. To do this, human vision composes several images in order to understand the three-dimensional relationships between the objects and the scene. 3D perception of a real scene therefore relies on several images. But what happens when there is only a single image?

    Pedestrian detection and tracking using stereo vision techniques

    Automated pedestrian detection, counting and tracking have received significant attention from the computer vision community of late. Many of the person detection techniques described so far in the literature work well in controlled environments, such as laboratory settings with a small number of people. This allows various assumptions to be made that simplify this complex problem. The performance of these techniques, however, tends to deteriorate when they are presented with unconstrained environments, where pedestrian appearances, numbers, orientations, movements, occlusions and lighting conditions violate these convenient assumptions. Recently, 3D stereo information has been proposed as a means to overcome some of these issues and to guide pedestrian detection. This thesis presents such an approach, whereby, after obtaining robust 3D information via a novel disparity estimation technique, pedestrian detection is performed via a 3D point clustering process within a region-growing framework. This clustering process avoids hard thresholds by using biometrically inspired constraints and a number of plan-view statistics. The pedestrian detection technique requires no external training and is able to robustly handle challenging real-world unconstrained environments from various camera positions and orientations. In addition, this thesis presents a continuous detect-and-track approach, with additional kinematic constraints and explicit occlusion analysis, to obtain robust temporal tracking of pedestrians over time. These approaches are experimentally validated on challenging datasets consisting of both synthetic data and real-world sequences gathered from a number of environments. In each case, the techniques are evaluated using both 2D and 3D ground-truth methodologies.
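A plan-view, region-growing clustering of 3D points, in the spirit of the detection stage described above, could be sketched as follows (the grid cell size and minimum-cells criterion are simple placeholders for the paper's biometrically inspired constraints and plan-view statistics):

```python
import numpy as np
from collections import deque

def plan_view_clusters(points, cell=0.1, min_cells=3):
    """Cluster 3D points by region-growing over an occupied plan-view grid.

    points: (N, 3) array of (x, y, z) positions; the plan view discretizes
    the ground-plane coordinates x and z into square cells of side `cell`.
    Returns a list of clusters, each an array of point indices.
    """
    ij = np.floor(points[:, [0, 2]] / cell).astype(int)
    cells = {}
    for idx, key in enumerate(map(tuple, ij)):       # bucket points per cell
        cells.setdefault(key, []).append(idx)
    seen, clusters = set(), []
    for start in cells:
        if start in seen:
            continue
        queue, members = deque([start]), []
        seen.add(start)
        while queue:                                  # grow over 8-neighbours
            c = queue.popleft()
            members.append(c)
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    nb = (c[0] + di, c[1] + dj)
                    if nb in cells and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        if len(members) >= min_cells:                 # reject tiny blobs
            clusters.append(np.concatenate([cells[m] for m in members]))
    return clusters
```

Each returned cluster corresponds to one pedestrian candidate; per-cluster plan-view statistics (footprint, height) would then be checked against human proportions.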