
    Boosting minimalist classifiers for blemish detection in potatoes

    This paper introduces novel methods for detecting blemishes in potatoes using machine vision. After segmentation of the potato from the background, a pixel-wise classifier is trained to detect blemishes using features extracted from the image. A very large set of candidate features, based on statistical information relating to the colour and texture of the region surrounding a given pixel, is first extracted. Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating between blemishes and non-blemishes. With this approach, different features can be selected for different potato varieties, while also handling the natural variation in fresh produce due to different seasons, lighting conditions, etc. The results show that the method is able to build “minimalist” classifiers that optimise detection performance at low computational cost. In experiments, minimalist blemish detectors were trained for both white and red potato varieties, achieving 89.6% and 89.5% accuracy, respectively.

    Visual detection of blemishes in potatoes using minimalist boosted classifiers

    This paper introduces novel methods for detecting blemishes in potatoes using machine vision. After segmentation of the potato from the background, a pixel-wise classifier is trained to detect blemishes using features extracted from the image. A very large set of candidate features, based on statistical information relating to the colour and texture of the region surrounding a given pixel, is first extracted. Then an adaptive boosting algorithm (AdaBoost) is used to automatically select the best features for discriminating between blemishes and non-blemishes. With this approach, different features can be selected for different potato varieties, while also handling the natural variation in fresh produce due to different seasons, lighting conditions, etc. The results show that the method is able to build “minimalist” classifiers that optimise detection performance at low computational cost. In experiments, blemish detectors were trained for both white and red potato varieties, achieving 89.6% and 89.5% accuracy, respectively.
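
    To make the boosting step concrete, here is a minimal, hypothetical sketch (not the authors' code; the synthetic data, feature count and scikit-learn usage are assumptions). Training AdaBoost with depth-1 decision trees means each boosting round splits on a single feature, so the set of features the stumps use is the selected "minimalist" subset:

        # Minimal sketch: AdaBoost with decision stumps as a feature selector
        # for pixel-wise blemish classification. Data is synthetic; shapes and
        # parameters are assumptions, not the paper's.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        n_pixels, n_features = 5000, 200   # many candidate colour/texture features
        X = rng.normal(size=(n_pixels, n_features))
        y = rng.integers(0, 2, size=n_pixels)  # 0 = non-blemish, 1 = blemish

        # Depth-1 trees ("stumps") split on one feature each, so every boosting
        # round effectively selects one discriminative feature.
        clf = AdaBoostClassifier(
            estimator=DecisionTreeClassifier(max_depth=1),  # sklearn >= 1.2
            n_estimators=20,  # few weak learners -> a "minimalist" classifier
        )
        clf.fit(X, y)

        selected = sorted({stump.tree_.feature[0] for stump in clf.estimators_})
        print(f"{len(selected)} features selected:", selected)

    At run time only the selected features need to be computed for each pixel, which is what keeps the computational cost of such a detector low.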

    Development of new intelligent autonomous robotic assistant for hospitals

    Continuous technological development in modern societies has increased the quality of life and average life-span of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity to develop new, autonomous, assistive robots to help alleviate this extra workload. The research question explored the extent to which a prototypical robotic platform can be created and how it may be implemented in a hospital environment with the aim of assisting the hospital staff with daily tasks, such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor-control and kinematics solutions have been examined in detail. Next, a new method has been proposed for assessing the intrinsic properties of different flooring types, using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment has been addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, thus optimising the robot's ability to update its path plan in real time in a dynamic environment. Lastly, the problem of detecting gaze at long distances has been addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing. The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots in hospital environments.
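
    As a rough illustration of the second contribution, vibration-based floor-type classification (everything below is an assumption; the thesis's actual sensors, features and model are not reproduced here), short accelerometer windows could be summarised by their spectral band energies and fed to an off-the-shelf classifier:

        # Hypothetical sketch: classify flooring type from vibration windows.
        # Band-energy features and the random forest are illustrative choices.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def band_energies(window, n_bands=8):
            """Energy of the magnitude spectrum in equal-width frequency bands."""
            spectrum = np.abs(np.fft.rfft(window)) ** 2
            return np.array([b.sum() for b in np.array_split(spectrum, n_bands)])

        rng = np.random.default_rng(1)
        windows = rng.normal(size=(300, 256))   # fake accelerometer windows
        labels = rng.integers(0, 3, size=300)   # three floor types

        X = np.stack([band_energies(w) for w in windows])
        clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
        print("predicted floor type:", clf.predict(X[:1]))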

    Real-Time Body Pose Recognition Using 2D or 3D Haarlets

    This article presents a novel approach to markerless real-time pose recognition in a multi-camera setup. Body pose is retrieved using example-based classification based on Haar wavelet-like features to allow for real-time pose recognition. Average Neighborhood Margin Maximization (ANMM) is introduced as a powerful new technique to train Haar-like features. The rotation-invariant approach is implemented both for 2D classification based on silhouettes and for 3D classification based on visual hulls.
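
    The core of ANMM can be sketched compactly: each sample pulls its nearest same-class neighbours together (compactness) and pushes nearby samples of other classes apart (scatterness), and the projection is given by the top eigenvectors of the difference of the two matrices. The numpy sketch below follows the general ANMM formulation, not this article's implementation; the neighbourhood sizes and data are arbitrary:

        # Minimal numpy sketch of ANMM: project onto the top eigenvectors of
        # the scatterness-minus-compactness matrix. Generic formulation only.
        import numpy as np

        def anmm(X, y, n_components, k_homo=5, k_hetero=5):
            n, d = X.shape
            S = np.zeros((d, d))  # scatterness: nearby samples of other classes
            C = np.zeros((d, d))  # compactness: nearby samples of the same class
            for i in range(n):
                diff = X - X[i]
                dist = np.linalg.norm(diff, axis=1)
                same = np.where((y == y[i]) & (np.arange(n) != i))[0]
                other = np.where(y != y[i])[0]
                homo = same[np.argsort(dist[same])[:k_homo]]
                hetero = other[np.argsort(dist[other])[:k_hetero]]
                if len(homo):
                    C += diff[homo].T @ diff[homo] / len(homo)
                if len(hetero):
                    S += diff[hetero].T @ diff[hetero] / len(hetero)
            # Top eigenvectors of (S - C) maximise the average neighborhood margin.
            vals, vecs = np.linalg.eigh(S - C)
            return vecs[:, np.argsort(vals)[::-1][:n_components]]

        rng = np.random.default_rng(2)
        X = rng.normal(size=(100, 20))
        y = rng.integers(0, 4, size=100)   # e.g. four body-pose classes
        W = anmm(X, y, n_components=3)
        print("projected shape:", (X @ W).shape)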

    Elderly Fall Detection by Sensitive Features Based on Image Processing and Machine Learning

    The world's elderly population is growing every year, and falls are one of the major dangers that threaten them. This paper offers a trained model for fall detection to help older people live comfortably and independently at home. The purpose of this paper is to investigate appropriate methods for detecting falls by analysing the motion and shape characteristics of the human body. Several machine learning techniques have been proposed for automatic fall detection. The research reported in this paper detects a moving object using a background subtraction algorithm with a single camera. The next step is to extract features that describe the human shape well and that distinguish falls from daily activities. These features are based on motion, changes in human shape, the diameters of an ellipse fitted around the person, and the temporal position of the head. The features extracted from the human mask are then fed into various machine learning classifiers for fall detection. Experimental results showed the efficiency and reliability of the proposed method, with a fall detection rate of 81% on the UR Fall Detection dataset.
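
    A hypothetical OpenCV sketch of the described feature-extraction step follows (the OpenCV calls are real, but the thresholds and the exact feature set are assumptions, not the paper's implementation): the background is subtracted, an ellipse is fitted to the largest silhouette, and its orientation, elongation and head position become classifier inputs.

        # Hypothetical sketch: background subtraction + ellipse-based shape
        # features for fall detection. Thresholds are illustrative only.
        import cv2
        import numpy as np

        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

        def frame_features(frame):
            """Return (angle, elongation, head_y) of the largest moving blob."""
            mask = subtractor.apply(frame)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            blob = max(contours, key=cv2.contourArea)
            if cv2.contourArea(blob) < 500 or len(blob) < 5:  # fitEllipse needs 5+ pts
                return None
            (cx, cy), axes, angle = cv2.fitEllipse(blob)
            elongation = max(axes) / max(min(axes), 1e-6)
            head_y = blob[:, 0, 1].min()   # topmost point of the silhouette
            return angle, elongation, head_y

    Differences of these features across consecutive frames (a sudden change in ellipse orientation, or a rapid drop of the head position) would then be fed to a classifier to separate falls from daily activities.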

    Development of artificial neural network-based object detection algorithms for low-cost hardware devices

    The human brain is the most complex, powerful and versatile learning machine ever known. Consequently, many scientists of various disciplines are fascinated by its structures and information-processing methods. Given the quality and quantity of the information extracted from the sense of sight, imagery is one of the main information channels used by humans. However, the massive amount of video footage generated nowadays makes it difficult to process those data fast enough manually. Thus, computer vision systems represent a fundamental tool for extracting information from digital images, as well as a major challenge for scientists and engineers. This thesis's primary objective is automatic foreground object detection and classification through digital image analysis, using artificial neural network-based techniques specifically designed and optimised for deployment on low-cost hardware devices. This objective is complemented by the development of methods for estimating individuals' movements, using unsupervised learning and artificial neural network-based models. These objectives have been addressed through research work presented in the four publications supporting this thesis. The first was published in the ICAE journal in 2018 and consists of a neural network-based movement detection system for Pan-Tilt-Zoom (PTZ) cameras deployed on a Raspberry Pi board. The second was published at the WCCI conference in 2018 and consists of a deep learning-based automatic video surveillance system for PTZ cameras deployed on low-cost hardware. The third was published in the ICAE journal in 2020 and consists of an anomalous foreground object detection and classification system for panoramic cameras, based on deep learning and supported by low-cost hardware. Finally, the fourth was published at the WCCI conference in 2020 and consists of an algorithm for estimating individuals' positions, based on a novel neural network model for environments with forbidden regions, named "Forbidden Regions Growing Neural Gas".

    Tracking and modeling focus of attention in meetings [online]

    This thesis addresses the problem of tracking the focus of attention of people. In particular, a system to track the focus of attention of participants in meetings is developed. Obtaining knowledge about a person's focus of attention is an important step towards a better understanding of what people do, how and with what or whom they interact, or to what they refer. In meetings, focus of attention can be used to disambiguate the addressees of speech acts, to analyze interaction and for indexing of meeting transcripts. Tracking a user's focus of attention also greatly contributes to the improvement of human-computer interfaces, since it can be used to build interfaces and environments that become aware of what the user is paying attention to or with what or whom he is interacting. The direction in which people look, i.e., their gaze, is closely related to their focus of attention. In this thesis, we estimate a subject's focus of attention based on his or her head orientation. While the direction in which someone looks is determined by head orientation and eye gaze, relevant literature suggests that head orientation alone is a sufficient cue for the detection of someone's direction of attention during social interaction. We present experimental results from a user study and from several recorded meetings that support this hypothesis. We have developed a Bayesian approach to model at whom or what someone is looking based on his or her head orientation. To estimate head orientations in meetings, the participants' faces are automatically tracked in the view of a panoramic camera, and neural networks are used to estimate their head orientations from pre-processed images of their faces. Using this approach, the focus-of-attention target of subjects could be correctly identified 73% of the time in a number of evaluation meetings with four participants. In addition, we have investigated whether a person's focus of attention can be predicted from other cues. Our results show that focus of attention is correlated with who is speaking in a meeting, and that it is possible to predict a person's focus of attention based on the information of who is talking or was talking before a given moment. We have trained neural networks to predict at whom a person is looking, based on information about who was speaking. Using this approach we were able to predict who is looking at whom with 63% accuracy on the evaluation meetings, using only information about who was speaking. We show that by using both head orientation and speaker information to estimate a person's focus, the accuracy of focus detection can be improved compared to using just one of the modalities. To demonstrate the generality of our approach, we have built a prototype system for focus-aware interaction with a household robot and other smart appliances in a room, using the developed components for focus-of-attention tracking. In the demonstration environment, a subject could interact with a simulated household robot, a speech-enabled VCR, or with other people in the room, and the recipient of the subject's speech was disambiguated based on the user's direction of attention.
    Summary (translated from the German): This thesis deals with the automatic estimation and tracking of the focus of attention of people in meetings. Determining a person's focus of attention is very important for understanding and automatically analysing meeting records: it can reveal, for example, who addressed whom at a given moment, or who was listening to whom. Automatic focus-of-attention estimation can furthermore be used to improve human-machine interfaces. An important cue for the direction of a person's attention is the person's head orientation, so a method for estimating people's head orientations was developed. Artificial neural networks were used which receive pre-processed images of a person's head as input and compute an estimate of the head orientation as output. On images of new persons, i.e. persons whose images were not contained in the training set, the trained networks achieved a mean error of nine to ten degrees for the estimation of horizontal and vertical head orientation. Furthermore, a probabilistic approach for determining attention targets is presented, in which a Bayesian approach is used to determine the a-posteriori probabilities of different attention targets given a person's observed head orientations. The developed approaches were evaluated on several meetings with four to five participants. A further contribution of this work is an investigation of the extent to which the meeting participants' gaze direction can be predicted from who is currently speaking. A method was developed that uses neural networks to estimate a person's focus based on a short history of speaker constellations. We show that combining the image-based and the speaker-based estimates of the focus of attention yields a clearly improved estimate. Overall, this work presents the first system for automatically tracking the attention of people in a meeting room. The developed approaches and methods can also be used to determine people's attention in other domains, in particular for the control of computerised, interactive environments; this is demonstrated with an example application.
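
    The Bayesian model at the heart of this work lends itself to a compact sketch: if each candidate focus target is associated with a Gaussian over observed head pan angles, Bayes' rule yields a posterior over targets for each observation. The numbers below are invented for illustration; the thesis's actual parameterisation is not reproduced here.

        # Illustrative sketch of the Bayesian focus-of-attention model:
        # posterior over focus targets given an observed head pan angle.
        # Target means, spreads and priors are invented values.
        import numpy as np
        from scipy.stats import norm

        targets = ["person_A", "person_B", "person_C"]
        means = np.array([-30.0, 0.0, 40.0])   # expected head pan per target (deg)
        stds = np.array([10.0, 10.0, 10.0])    # per-target angular spread
        prior = np.array([1 / 3, 1 / 3, 1 / 3])

        def focus_posterior(observed_pan_deg):
            """p(target | head pan) via Bayes' rule with Gaussian likelihoods."""
            likelihood = norm.pdf(observed_pan_deg, loc=means, scale=stds)
            unnorm = likelihood * prior
            return unnorm / unnorm.sum()

        for name, p in zip(targets, focus_posterior(-25.0)):
            print(f"{name}: {p:.2f}")   # person_A should dominate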
