8 research outputs found

    MICROHEMODYNAMIC PARAMETERS OF CORTICAL SUBSTANCE OF KIDNEYS AT EXPERIMENTAL HYDRONEPHROSIS

    Get PDF
    Research objective. Under conditions of unilateral ureteral occlusion, the development of microhemodynamic disorders of the renal cortex was studied both on the damaged side and on the contralateral (opposite) side. Materials and methods. We studied the state of microhemodynamics in the cortical region of both kidneys during the development of unilateral hydronephrosis. The experiments were conducted on 90 outbred adult males (48 experimental, 36 sham-operated, 6 intact). Results. It was established that the disorders on the side of occlusion depend on the duration of obstruction of the ureteral lumen, which determines the development of tissue hypoxia and impairs the trophism of the renal parenchyma. Conclusion. Functional disorders of the microvasculature develop at the early stages of the experiment (on the 1st, 3rd, and 5th days), and irreversible changes in the renal parenchyma appear at the later stages (on the 14th and 30th days). The contralateral kidney responds to the loss of function of the occluded kidney by adaptive restructuring of its microhemocirculatory system, manifested by dilatation of the microvessels and an increase in the linear velocity of blood flow in them, which supports its hyperfunction.

    Yüz analizine dayalı derin öğrenme tabanlı bir ilgi tespit sisteminin gerçekleştirilmesi

    Get PDF
    Made available in full text in accordance with the law amending the Higher Education Law published in Official Gazette No. 30352, dated 06.03.2018, and the directive of 18.06.2018 on the collection, organization, and opening to access of graduate theses in electronic form. In marketing research, one of the most exciting, innovative, and promising trends is the quantification of customer interest. The customer satisfaction survey, a traditional approach to quantifying customer interest, has come to be considered an intrusive method in recent years. Recording customer interest through a salesperson who observes customers' behavior while they watch advertisements or shop is another approach; however, this task requires specific skills from every salesperson, and each observer may interpret customer behavior differently, so the results may not be objective. Consequently, there is a critical need for non-intrusive, objective, and quantitative tools for monitoring customer interest. This thesis presents a deep learning-based system for monitoring customer behavior, specifically for the detection of interest. The proposed system first measures customer attention through head pose estimation.
For those customers whose heads are oriented toward the advertisement or the product of interest, the system further analyzes their facial expressions and reports customers' interest. The proposed system starts by detecting frontal face poses; facial components important for facial expression recognition are then segmented and an iconized face image is generated; finally, facial expressions are analyzed using the confidence values of the iconized face image combined with the raw facial images. This approach fuses local part-based features with holistic facial information for robust facial expression recognition. The system also tracks human faces across video frames by labeling the faces. The facial expressions of each customer are stored for a certain period of time; at the end of this period, a result indicating whether the customer is interested in the product or advertisement is reported. With the proposed processing pipeline, head pose estimation and facial expression recognition are possible using a basic imaging device such as a webcam. The proposed pipeline can also be used to monitor the emotional response of focus groups to various ideas, pictures, sounds, words, and other stimuli.
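    The two-stage logic described in this abstract (a head-pose gate, followed by expression aggregation only for attending faces) can be illustrated with a small sketch. The Python snippet below is a minimal illustration, not the thesis's implementation; the thresholds, expression labels, and helper names (attending, update, interested) are assumptions, and real head-pose and expression models would supply the per-frame inputs.

```python
from collections import defaultdict, deque

# Hypothetical thresholds and labels; the thesis's actual values and models are not given.
YAW_PITCH_LIMIT_DEG = 20.0                    # head counts as "oriented toward the display" within this range
POSITIVE_EXPRESSIONS = {"happy", "surprise"}  # expressions treated as signs of interest
WINDOW = 30                                   # number of recent frames kept per tracked face

expression_history = defaultdict(lambda: deque(maxlen=WINDOW))

def attending(yaw_deg, pitch_deg):
    """Stage 1: head-pose gate - only near-frontal faces go to expression analysis."""
    return abs(yaw_deg) <= YAW_PITCH_LIMIT_DEG and abs(pitch_deg) <= YAW_PITCH_LIMIT_DEG

def update(face_id, yaw_deg, pitch_deg, expression_label):
    """Stage 2: accumulate per-face expressions over time for attending faces."""
    if attending(yaw_deg, pitch_deg):
        expression_history[face_id].append(expression_label)

def interested(face_id, min_ratio=0.5):
    """Report interest if enough of the stored expressions are positive."""
    hist = expression_history[face_id]
    if not hist:
        return False
    positive = sum(1 for e in hist if e in POSITIVE_EXPRESSIONS)
    return positive / len(hist) >= min_ratio

# Toy usage with made-up per-frame outputs (face_id, yaw, pitch, expression):
for frame in [("c1", 5, -3, "happy"), ("c1", 8, 2, "neutral"), ("c2", 60, 0, "happy")]:
    update(*frame)
print(interested("c1"), interested("c2"))
```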

    Driver head pose estimation using efficient descriptor fusion

    No full text
    Great interest is focused on driver assistance systems that use the head pose as an indicator of the visual focus of attention and the mental state. Head pose estimation is a technique for deducing head orientation relative to the camera view and can be performed by model-based or appearance-based approaches. Model-based approaches use a geometrical face model usually obtained from facial features, whereas appearance-based techniques use the whole face image characterized by a descriptor and generally treat pose estimation as a classification problem. Appearance-based methods are faster and better suited to discrete pose estimation. However, their performance depends strongly on the head descriptor, which should be chosen carefully in order to reduce the information about identity and lighting contained in the face appearance. In this paper, we propose an appearance-based discrete head pose estimation method aiming to determine the driver's attention level from monocular visible-spectrum images, even if the facial features are not visible. Explicitly, we first propose a novel descriptor resulting from the fusion of the four most relevant orientation-based head descriptors, namely steerable filters, the histogram of oriented gradients (HOG), Haar features, and an adapted version of the speeded-up robust features (SURF) descriptor. Second, in order to derive a compact, relevant, and consistent subset of the descriptor's features, a comparative study is conducted on several well-known feature selection algorithms. Finally, the obtained subset is passed to the classification stage, performed by a support vector machine (SVM), to learn head pose variations. As we show in experiments with the public database (Pointing'04) as well as with our real-world sequence, our approach describes the head with high accuracy and provides robust estimation of the head pose compared to state-of-the-art methods.
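    The overall pattern in this abstract is describe, select, then classify. The sketch below shows that pattern in Python under loose assumptions: only the HOG component is computed (steerable-filter, Haar, and SURF features would be concatenated in the same way), SelectKBest stands in for whichever feature selection algorithm the paper's comparison would favor, and the images and pose labels are synthetic placeholders, so this is not the paper's descriptor or parameters.

```python
import numpy as np
from skimage.feature import hog
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def describe(face_img):
    # Stand-in for the fused descriptor: here only HOG is computed; the other
    # orientation-based descriptors would be concatenated onto this vector.
    return hog(face_img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Synthetic 64x64 "face" images and discrete pose labels, just to make the sketch runnable.
images = rng.random((40, 64, 64))
poses = rng.integers(0, 3, size=40)  # e.g. 0 = left, 1 = frontal, 2 = right

X = np.array([describe(img) for img in images])

# Feature selection keeps a compact, relevant subset of the descriptor's features.
selector = SelectKBest(f_classif, k=100).fit(X, poses)
X_reduced = selector.transform(X)

# The SVM then learns the discrete head-pose classes from the reduced descriptor.
clf = SVC(kernel="rbf").fit(X_reduced, poses)
print(clf.predict(X_reduced[:5]))
```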

    Vision-based Driver State Monitoring Using Deep Learning

    Get PDF
    Road accidents cause thousands of injuries and losses of lives every year, ranking among the leading causes of death. More than 90% of traffic accidents are caused by human error [1], including sight obstruction, failure to spot danger through inattention, speeding, expectation errors, and other reasons. In recent years, driver monitoring systems (DMS) have been rapidly studied and developed for use in commercial vehicles to prevent car crashes caused by human error. A DMS is a vehicle safety system that monitors the driver's attention and warns if necessary. Such a system may contain multiple modules that detect the most accident-related human factors, such as drowsiness and distraction. Typical DMS approaches seek driver distraction cues either from vehicle acceleration and steering (vehicle-based approach), driver physiological signals (physiological approach), or driver behaviours (behavioural approach). Behavioural driver state monitoring has numerous advantages over its vehicle-based and physiological counterparts, including fast responsiveness and non-intrusiveness. In addition, the recent breakthrough in deep learning enables high-level action and face recognition, expanding driver monitoring coverage and improving model performance. This thesis presents CareDMS, a behavioural driver monitoring system using deep learning methods. CareDMS consists of driver anomaly detection and classification, gaze estimation, and emotion recognition. Each approach is developed with state-of-the-art deep learning solutions to address the shortcomings of current DMS functionalities. Combined with a classic drowsiness detection method, CareDMS thoroughly covers three major types of distraction: physical (hands off the steering wheel), visual (eyes off the road ahead), and cognitive (mind off driving). There are numerous challenges in behavioural driver state monitoring. Current driver distraction detection methods either lack detailed distraction classification or fail to generalize to unknown driver anomalies. This thesis introduces a novel two-phase proposal and classification network architecture. It can flag all forms of distracted driving and recognize driver actions simultaneously, which provides downstream DMS modules with important information for warning-level customization. Next, gaze estimation for driver monitoring is difficult because drivers tend to make severe head movements while driving. This thesis proposes a video-based neural network that jointly learns head pose and gaze dynamics. The design significantly reduces per-head-pose gaze estimation performance variance compared to benchmarks. Furthermore, emotional driving, such as road rage and sadness, can seriously impact driving performance. However, individuals have varied emotional expressions, which makes vision-based emotion recognition a challenging task. This work proposes an efficient and versatile multimodal fusion module that effectively fuses facial expressions and the human voice for emotion recognition. Clear advantages are demonstrated compared to using a single modality. Finally, the driver state monitoring system CareDMS converts the output of each functionality into a specific driver status measurement and integrates the various measurements into the driver's level of alertness.
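    The emotion-recognition component above fuses facial expression and voice cues. As a rough illustration of the general idea only (the thesis's actual fusion module and dimensions are not given here), the following PyTorch sketch shows a simple concatenation-based fusion head over two modality embeddings; every dimension and the emotion count are assumptions.

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Minimal concatenation-fusion sketch: project each modality, concatenate, classify.
    Dimensions and the number of emotion classes are placeholder assumptions."""
    def __init__(self, face_dim=512, voice_dim=128, hidden=256, n_emotions=7):
        super().__init__()
        self.face_proj = nn.Linear(face_dim, hidden)
        self.voice_proj = nn.Linear(voice_dim, hidden)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden, n_emotions),
        )

    def forward(self, face_feat, voice_feat):
        # Concatenate the projected face and voice embeddings, then classify the emotion.
        fused = torch.cat([self.face_proj(face_feat), self.voice_proj(voice_feat)], dim=-1)
        return self.classifier(fused)

# Toy forward pass with random tensors standing in for face and audio embeddings.
model = ConcatFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 7])
```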

    Tiefen-basierte Bestimmung der Kopfposition und -orientierung im Fahrzeuginnenraum

    Get PDF

    Extraction et analyse des caractéristiques faciales : application à l'hypovigilance chez le conducteur

    Get PDF
    Studying facial features has attracted increasing attention in both the academic and industrial communities. Indeed, these features convey nonverbal information that plays a key role in human communication. Moreover, they are very useful for enabling human-machine interaction. Therefore, the automatic study of facial features is an important task for various applications including robotics, human-machine interfaces, behavioral science, clinical practice, and monitoring of the driver's state. In this thesis, we focus our attention on monitoring the driver's state through the analysis of facial features. This problem attracts universal interest because of the increasing number of road accidents, a large part of which is caused by deterioration in the driver's vigilance level, known as hypovigilance. We can distinguish three hypovigilance states. The first and most critical one is drowsiness, which is manifested by an inability to stay awake and is characterized by microsleep intervals of 2-6 seconds. The second one is fatigue, which is defined by the increasing difficulty of maintaining a task and is characterized by a large number of yawns. The third and last one is inattention, which occurs when attention is diverted from the driving activity and is characterized by maintaining the head pose in a non-frontal direction. The aim of this thesis is to propose facial-feature-based approaches for identifying driver hypovigilance. The first approach was proposed to detect drowsiness by identifying microsleep intervals through eye state analysis. The second one was developed to identify fatigue by detecting yawning through mouth analysis. Since no public hypovigilance database is available, we acquired and annotated our own database, representing different subjects simulating hypovigilance under real lighting conditions, to evaluate the performance of these two approaches. Next, we developed two driver head pose estimation approaches to detect the driver's inattention and also to determine the vigilance level even when the facial features (eyes and mouth) cannot be analyzed because of non-frontal head positions. We evaluated these two estimators on the public database Pointing'04. Then, we acquired and annotated a driver head pose database to evaluate our estimators under real driving conditions.
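    The drowsiness criterion stated above (microsleep intervals of 2-6 seconds of eye closure) reduces to a simple temporal rule once a per-frame eye state is available. The sketch below is a minimal illustration under assumed parameters (frame rate, thresholds, function name), not the thesis's detector; a real system would obtain the closed/open labels from an eye-state classifier.

```python
# Minimal microsleep-interval detector over a sequence of per-frame eye states.
# Assumptions (not from the thesis): 25 fps video and boolean closed/open labels per frame.
FPS = 25
MIN_MICROSLEEP_S, MAX_MICROSLEEP_S = 2.0, 6.0

def microsleep_intervals(eye_closed, fps=FPS):
    """Return (start_s, end_s) spans where the eyes stay closed for 2-6 seconds."""
    intervals, run_start = [], None
    for i, closed in enumerate(list(eye_closed) + [False]):  # sentinel closes a trailing run
        if closed and run_start is None:
            run_start = i
        elif not closed and run_start is not None:
            duration = (i - run_start) / fps
            if MIN_MICROSLEEP_S <= duration <= MAX_MICROSLEEP_S:
                intervals.append((run_start / fps, i / fps))
            run_start = None
    return intervals

# Toy example: 3 seconds of closed eyes embedded in an otherwise open-eyed sequence.
states = [False] * 50 + [True] * 75 + [False] * 50
print(microsleep_intervals(states))  # [(2.0, 5.0)]
```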