13 research outputs found

    CoMapping: Efficient 3D-Map Sharing Methodology for Decentralized cases

    Get PDF
International audience. CoMapping is a framework to efficiently manage, share, and merge 3D map data between mobile robots. Its main objective is to implement collaborative mapping for outdoor environments. The framework is structured in two stages. In the first, the Pre-Local Mapping stage, each robot builds a real-time pre-local map of its environment using laser rangefinder data, drawing on low-cost GPS information only in certain situations. In the second, the Local Mapping stage, the robots share their pre-local maps and merge them in a decentralized way to improve their new maps, now called local maps. An experimental study of decentralized cooperative 3D mapping is presented, with tests conducted on three intelligent cars equipped with LiDAR and GPS receivers in urban outdoor scenarios. We also discuss the performance of the cooperative system in terms of map alignment.
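The decentralized merging described above hinges on rigidly aligning overlapping 3D maps. The abstract does not specify the alignment algorithm, so as an illustration only, here is a minimal sketch of one standard building block, Kabsch/Procrustes alignment of paired 3D points (the function name and the paired-correspondence assumption are mine, not the paper's):

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigid transform (R, t) minimizing ||R @ P + t - Q|| for paired points.

    P, Q: (3, N) arrays of corresponding 3D points from two maps.
    """
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

In a full map-merging pipeline, such correspondences would come from an ICP-style nearest-neighbor search, with this closed-form step repeated to convergence.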

    CoMapping: Multi-robot Sharing and Generation of 3D-Maps applied to rural and urban scenarios

    Get PDF
International audience. We present an experimental study on the generation of large 3D maps using our CoMapping framework. The framework takes a collaborative approach to efficiently manage, share, and merge maps between vehicles. The main objective of this work is to perform cooperative mapping in urban and rural environments without continuous GPS service. The study is split into two stages: Pre-Local and Local. In the first stage, each vehicle builds a pre-local map of its surroundings in real time using laser-based measurements, then relocates the map in a global coordinate system using only the low-cost GPS data from the first instant of map construction. In the second stage, vehicles share their pre-local maps, then align and merge them in a decentralized way to generate larger, more consistent maps, called local maps. To evaluate the performance of the whole cooperative system in terms of map alignment, tests were conducted with three cars equipped with LiDAR and GPS receivers in urban outdoor scenarios on the École Centrale Nantes campus and in rural environments.
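Relocating a pre-local map into a global frame from a single GPS fix amounts to anchoring the map's local metric coordinates at that fix. The paper does not give its projection; a common small-area approximation (equirectangular; the function name and constants are illustrative, not from the paper) is:

```python
import numpy as np

EARTH_R = 6_378_137.0  # WGS-84 equatorial radius, metres

def gps_to_enu(lat_deg, lon_deg, lat0_deg, lon0_deg):
    """East/north offsets in metres of (lat, lon) from the anchor fix
    (lat0, lon0), using an equirectangular approximation that is
    adequate over a few kilometres."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    lat0, lon0 = np.radians(lat0_deg), np.radians(lon0_deg)
    east = (lon - lon0) * np.cos(lat0) * EARTH_R
    north = (lat - lat0) * EARTH_R
    return east, north
```

The map built in the vehicle frame can then be translated so its origin sits at the east/north position of the first GPS fix.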

    Reconocimiento de actividades humanas por medio de extracción de características y técnicas de inteligencia artificial: una revisión

    Get PDF
Context: In recent years, the recognition of human activities has become an area of constant exploration in different fields. This article presents a literature review focused on the different types of human activities and on information acquisition devices for activity recognition. It also delves into elderly fall detection via computer vision, using feature extraction methods and artificial intelligence techniques. Methodology: This manuscript was prepared following the criteria of the document review and analysis (RAD) methodology, dividing the research process into the heuristics and hermeneutics of the information sources. In total, 102 research works were referenced, providing a picture of the current state of human activity recognition. Results: The analysis of the proposed techniques for recognizing human activities shows the importance of efficient fall detection. Although the techniques described in this article currently yield positive results, their study environments are controlled, which does not contribute to real progress in the field. Conclusions: It would be highly valuable to report results from studies in environments closer to reality, so it is essential to focus research on building databases of real falls of adults, or falls in uncontrolled environments.
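Many of the surveyed sensor-based approaches reduce fall detection to thresholding the accelerometer magnitude: a near-free-fall dip followed shortly by an impact spike. A toy sketch of that idea (thresholds and window length are illustrative, not taken from any surveyed work):

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def detect_fall(acc, free_fall_thr=0.5 * G, impact_thr=2.5 * G, window=20):
    """Flag a fall when a near-free-fall dip is followed, within `window`
    samples, by an impact spike. `acc` is an (N, 3) accelerometer trace."""
    mag = np.linalg.norm(acc, axis=1)
    dips = np.where(mag < free_fall_thr)[0]
    for i in dips:
        if np.any(mag[i:i + window] > impact_thr):
            return True
    return False
```

The review's point stands even for this sketch: thresholds tuned on controlled, staged falls rarely transfer to uncontrolled environments, which is why real-fall databases matter.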

    Image restoration by adaptive wavelet thresholding method

    No full text
Doğan, Mustafa (Dogus Author) -- Conference full title: 2013 21st Signal Processing and Communications Applications Conference, SIU 2013; Haspolat; Turkey; 24 April 2013 through 26 April 2013. Image enhancement and restoration methods are essential in many fields, such as medical imaging and radar imaging systems. In the literature, there are many studies and different approaches to image enhancement and restoration. In this paper, several noise models that degrade images are studied, and the restoration performance of the Wiener filter, the median filter, the mean filter, and a proposed method based on adaptive wavelet thresholding is compared on images degraded by these noise models.
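The proposed method is based on adaptive wavelet thresholding. As a simplified illustration of the general technique only (not the authors' exact method), here is one-level Haar denoising with Donoho-Johnstone soft thresholding on a 1D signal:

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink coefficients toward zero: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(signal):
    """One-level Haar wavelet denoising with the universal threshold
    t = sigma * sqrt(2 ln n); sigma is estimated from the detail
    coefficients via the median absolute deviation."""
    x = np.asarray(signal, dtype=float)
    n = len(x) // 2 * 2
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745    # robust noise estimate
    t = sigma * np.sqrt(2 * np.log(n))
    d = soft_threshold(d, t)
    out = np.empty(n)                        # inverse Haar transform
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

A 2D image version applies the same shrinkage to the detail subbands of a multi-level wavelet decomposition; "adaptive" schemes choose the threshold per subband rather than globally.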

    Vision-Based Traffic Sign Detection and Recognition Systems: Current Trends and Challenges

    Get PDF
The automatic traffic sign detection and recognition (TSDR) system is an important research area in the development of advanced driver assistance systems (ADAS). Investigations into vision-based TSDR have received substantial interest in the research community, mainly motivated by three factors: detection, tracking, and classification. During the last decade, a substantial number of techniques have been reported for TSDR. This paper provides a comprehensive survey of traffic sign detection, tracking, and classification. The details of the algorithms, methods, and their specifications for detection, tracking, and classification are investigated and summarized in tables along with the corresponding key references. A comparative study of each section is provided to evaluate TSDR data, performance metrics, and their availability. Current issues and challenges of the existing technologies are illustrated, with brief suggestions and a discussion of the future progress of driver assistance system research. This review will hopefully lead to increasing efforts towards the development of future vision-based TSDR systems. Document type: Article
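The detection stage of many surveyed TSDR pipelines begins with color segmentation to propose candidate sign regions. A deliberately minimal sketch of such a first stage (thresholds are illustrative; real systems typically work in HSV and follow this with shape filtering and a classifier):

```python
import numpy as np

def red_sign_mask(img, r_min=0.45, dominance=1.5):
    """Boolean mask of strongly red pixels in an RGB image with values
    in [0, 1]: red channel high and clearly dominating green and blue.
    Connected regions of the mask become detection candidates."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r > r_min) & (r > dominance * g) & (r > dominance * b)
```

Analogous masks for blue and yellow cover the other common sign color families; tracking then stabilizes the candidates across frames before classification.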

    Mapeamento de qualidade de experiência (QOE) através de qualidade de serviço (QOS) focado em bases de dados distribuídas

    Get PDF
Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science, Florianópolis, 2017. The lack, until now, of a congruent conceptualization of quality of service (QoS) for databases (DBs) was the factor that drove this thesis. Defining QoS as a simple check of whether a node is at risk of failure due to memory over-commitment, as some commercial systems did at the time of this thesis's bibliometric survey, is an oversimplification of such a complex concept. Other studies that claim to deal with this concept are not mathematically precise and lack concrete, usable definitions, making their application or even verification infeasible. With the focus on distributed databases (DDBs), the conceptualization developed here is also at least partially compatible with non-distributed DB models. The new QoS definitions are then used to handle the correlated concept of quality of experience (QoE) in a system-level approach focused on QoS completeness. Although QoE is a multidimensional concept that is hard to measure, the focus is kept on a measurable approach, so that DDB systems can perform self-evaluation. The idea of self-evaluation arises from the need to identify problems amenable to self-correction. With QoS statistically well defined, behavior and behavioral trends can be analyzed in order to predict future states, allowing correction to begin, by statistical prediction, before the system reaches unexpected states. The general objective of this thesis is thus to define QoS and QoE metrics focused on DDBs, under the hypothesis that QoE can be defined statistically in terms of QoS for system-level purposes; both concepts are new to DDBs as exact, measurable metrics. With these concepts defined, an architectural recovery model is presented and tested to demonstrate the results of using the defined metrics for behavioral prediction.
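The behavioral-prediction idea, extrapolating a statistically defined QoS metric so correction can start before an unexpected state is reached, can be sketched with a simple linear trend (the thesis's actual metrics are richer; the function name and thresholds here are illustrative):

```python
import numpy as np

def predict_breach(samples, threshold, horizon):
    """Fit a linear trend to recent QoS samples (e.g. response latency)
    and return True if the value extrapolated `horizon` steps ahead
    exceeds `threshold` -- a cue to start corrective action early."""
    t = np.arange(len(samples))
    slope, intercept = np.polyfit(t, samples, 1)
    forecast = intercept + slope * (len(samples) - 1 + horizon)
    return forecast > threshold
```

The point of the prediction is lead time: a breach flagged several steps ahead leaves room for the architectural recovery model to act while the system is still in an acceptable state.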

    Algorithms for Image Analysis in Traffic Surveillance Systems

    Get PDF
The presence of various surveillance systems in many areas of modern society is indisputable, and video surveillance systems are the most visible. This thesis mainly describes a novel algorithm for vision-based estimation of parking lot occupancy, along with the closely related topic of pre-processing images captured under harsh conditions. The developed algorithms have practical application in parking guidance systems, which are increasingly popular. One part of this work also contributes to the area of computer graphics known as direct volume rendering (DVR).
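At its simplest, vision-based parking-space classification reduces to measuring texture in each space's image patch: empty pavement is nearly uniform, while a parked car adds edges. A toy sketch in that spirit (not the thesis's algorithm; the threshold is illustrative):

```python
import numpy as np

def occupied(patch, edge_thr=0.08):
    """Classify one parking-space patch (2D grayscale array in [0, 1])
    as occupied when its mean gradient magnitude is high: an empty
    space is mostly uniform pavement, a car adds edges."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.hypot(gx, gy).mean()) > edge_thr
```

The pre-processing under harsh conditions that the thesis addresses (glare, rain, low light) matters precisely because it stabilizes such texture measures before classification.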

    Smart Sensor Technologies for IoT

    Get PDF
The recent development of wireless networks and devices has led to novel services that will use wireless communication on a new level. Much effort and many resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent a new trend in mobile services: a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services use information about the position of the user or the mobile device. The position of mobile devices is often obtained from the Global Navigation Satellite System (GNSS) chips integrated into all modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative solutions for position estimation should be investigated and implemented in IoT applications. This Special Issue, “Smart Sensor Technologies for IoT”, aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of smart sensor technologies for IoT.
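One widely used GNSS alternative in this space is radio-based positioning: converting received signal strength (RSSI) to range with a log-distance path-loss model, then trilaterating against anchors at known positions. A minimal sketch (model parameters are illustrative, not from any paper in the issue):

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, n=2.0):
    """Log-distance path-loss model: range in metres from RSSI, where
    tx_power_dbm is the expected RSSI at 1 m and n the path-loss exponent."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

def trilaterate(anchors, dists):
    """Least-squares 2D position from >= 3 anchor positions (N, 2) and
    ranges (N,), linearizing the circle equations against the first anchor."""
    sq = np.sum(anchors ** 2, axis=1)
    A = 2 * (anchors[1:] - anchors[0])
    b = dists[0] ** 2 - dists[1:] ** 2 + sq[1:] - sq[0]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

In practice the RSSI-derived ranges are noisy, so filtering (e.g. a Kalman filter) is usually layered on top of this least-squares estimate.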

    Kamu hizmetlerinde veri madenciliği : Çözüm masası verileri temelinde bir araştırma

    Get PDF
Management information systems used by governments and public agencies are crucial for extracting latent information patterns from the big data generated by developing smart technologies, and for providing decision support on future decisions to policy makers and public managers. Technology-based studies in public administration generally address the theoretical and practical dimensions of "e-government", while data mining applications are usually studied with private-sector data in disciplines such as management information systems, computer science, and business administration. This study examines data mining from the perspective of both public administration and management information systems. In the empirical part of the study (the fourth chapter), a data mining process is carried out on public data, unlike the general tendency in the literature. The help desk data of the Kocaeli Metropolitan Municipality is used; data preprocessing and classification are implemented with the "Weka Machine Learning" tool. The help desk data is analyzed using machine learning algorithms such as Naive Bayes, Support Vector Machine, K-Nearest Neighbors, and Decision Trees, and the findings are visualized with the business intelligence application "Tableau". 
It was concluded that while awareness of big data technologies and data mining has grown in recent years among governments and public agencies in Turkey, and big data appears in agencies' strategic plans, applications and projects are still few, with data mining actively used in very few agencies. In essence, our study shows that careful and accurate preprocessing of raw, unstructured data has a direct impact on the accuracy of machine learning algorithms. Finally, big data and data mining applications can be used effectively by public agencies to enhance government operations, to provide effective and efficient public services, and to improve the quality of public policy-making. Data mining is not merely a software tool containing numerical methods; it is a process comprising methods, techniques, and applications designed for the problem at hand, one that models the relationships, rules, and patterns in that problem. This thesis is also important in showing, through a generic model, that data mining can be adopted in public services.
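Of the algorithms applied to the help-desk data, Naive Bayes is simple enough to sketch end to end. The following toy multinomial Naive Bayes ticket router illustrates the idea (categories and vocabulary are invented for illustration; the thesis used Weka's implementations, not this code):

```python
import math
from collections import Counter

class MultinomialNB:
    """Minimal multinomial Naive Bayes with Laplace smoothing, the kind
    of classifier used to route help-desk tickets by category."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        # Log prior from class frequencies.
        self.prior = {c: math.log(labels.count(c) / len(labels))
                      for c in self.classes}
        # Per-class word counts over whitespace-tokenized documents.
        self.counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc.split())
        self.vocab = {w for cnt in self.counts.values() for w in cnt}
        return self

    def predict(self, doc):
        def score(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return self.prior[c] + sum(
                math.log((self.counts[c][w] + 1) / total)  # Laplace smoothing
                for w in doc.split() if w in self.vocab)
        return max(self.classes, key=score)
```

As the thesis stresses, the accuracy of any such classifier depends heavily on the preprocessing of the raw, unstructured ticket text before tokens ever reach the model.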

    The Role of Transient Vibration of the Skull on Concussion

    Get PDF
Concussion is a traumatic brain injury, usually caused by a direct or indirect blow to the head, that affects brain function. The maximum mechanical impedance of brain tissue occurs at 450±50 Hz and may be affected by the skull's resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex: the skull deforms and vibrates, like a bell, for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there has been little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient skull vibration in concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Because of bone's weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures. One of the most important components of successful segmentation is high-quality ground-truth labels; we therefore introduce a deep learning framework for skull segmentation in which the ground-truth labels are created from CT imaging using the standard tessellation language (STL). Since the brain region will matter for future work, we also explore a new way of initializing a convolutional neural network (CNN) with orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important requirement for further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models. 
To perform this task, the skull must be precisely aligned in all anatomical planes. We therefore introduce a 2D/3D technique to align the facial skeleton, initially developed for automatically calculating the craniofacial symmetry midline. The 2D version introduced the concept of using cephalometric landmarks and manual image-grid alignment to construct the training dataset; this concept was then extended to a 3D version in which the coronal and transverse planes are aligned using a CNN approach. As alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created using the Frankfort plane as a framework. Finally, the resonant frequencies of multiple skulls are assessed to determine how skull resonant-frequency vibrations propagate into brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Theories will then be proposed about the relation between skull geometry, such as shape and thickness, and vibration with brain tissue injury, which may result in concussive injury.
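The modal analysis step, extracting natural frequencies from mass and stiffness, reduces to the generalized eigenproblem K v = ω² M v. A miniature sketch of that computation (a finite-element model of the skull supplies much larger M and K matrices; this is not the author's code):

```python
import numpy as np

def natural_frequencies(M, K):
    """Undamped natural frequencies (Hz) of a discrete system from the
    generalized eigenproblem K v = omega^2 M v, via a Cholesky reduction
    to a standard symmetric eigenproblem. M, K are symmetric (M positive
    definite) mass and stiffness matrices."""
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    w2 = np.linalg.eigvalsh(Linv @ K @ Linv.T)   # omega^2, ascending
    return np.sqrt(np.clip(w2, 0, None)) / (2 * np.pi)
```

Comparing the lowest few frequencies returned by such an analysis against the 450±50 Hz impedance peak of brain tissue is exactly the kind of check the third objective describes.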