A Survey on Ear Biometrics
Recognizing people by their ears has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some of the problems associated with other non-contact biometrics, such as face recognition; second, the ear is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Although current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It offers an up-to-date review of the existing literature, revealing the current state of the art not only for those who are working in this area but also for those who might exploit this approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available to researchers.
Virtual 3D Reconstruction of Archaeological Pottery Using Coarse Registration
The 3D reconstruction of objects has not only improved the visualisation of digitised objects; it has also helped researchers to actively carry out archaeological pottery studies. Reconstructing pottery is significant in archaeology but remains a challenging task for practitioners. For one, excavated pottery is rarely complete enough to provide exhaustive and useful information, so archaeologists attempt to reconstruct it with the available tools and methods. It is also challenging to apply existing reconstruction approaches in archaeological documentation. This limitation makes it difficult to carry out studies within a reasonable time. Hence, interest has shifted to developing new ways of reconstructing archaeological artefacts with new techniques and algorithms.
Therefore, this study focuses on providing interventions that ease the challenges encountered in reconstructing archaeological pottery. It applies a data acquisition approach that uses a 3D laser scanner to acquire point cloud data that clearly capture the geometric and radiometric properties of the object's surface. The acquired data are processed to remove noise and outliers before undergoing a coarse-to-fine registration strategy, which involves detecting and extracting keypoints from the point clouds and estimating descriptors from them. Correspondences are then estimated between point pairs, leading to a pairwise and global registration of the acquired point clouds.
The distinctive strength of the approach taken in this thesis is its flexibility, which follows from the nature of the data acquired and improves the efficiency, robustness and accuracy of the approach. The findings show that real 3D datasets can yield good results when used with the right tools; high-resolution lenses and accurate calibration help to produce accurate results. While the registration accuracy attained in the study lies between 0.08 and 0.14 mean squared error for the data used, further studies are needed to validate this result. The results obtained are nonetheless useful for further studies in 3D pottery reassembly.
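Once correspondences between point pairs are available, the pairwise step of such a coarse-to-fine registration reduces to a least-squares rigid alignment. The sketch below shows the generic Kabsch/SVD solution to that sub-problem; it is not the thesis's actual implementation, and the function name and toy data are illustrative.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping corresponding
    source points onto destination points (Kabsch method)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Centre both point sets on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # SVD of the cross-covariance matrix gives the optimal rotation;
    # the sign term guards against a reflection.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: recover a known 30-degree rotation about z plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
pts = np.random.default_rng(0).random((100, 3))
dst = pts @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(pts, dst)
mse = np.mean(np.sum(((pts @ R_est.T + t_est) - dst) ** 2, axis=1))
```

In a full pipeline this alignment would be applied per view pair, with the global registration then distributing residual error across all views.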
Robotic Technologies for High-Throughput Plant Phenotyping: Contemporary Reviews and Future Perspectives
Phenotyping plants is an essential component of any effort to develop new crop varieties. As plant breeders seek to increase crop productivity and produce more food for the future, the amount of phenotype information they require will also increase. Traditional plant phenotyping relying on manual measurement is laborious, time-consuming, error-prone, and costly. Plant phenotyping robots have emerged as a high-throughput technology to measure the morphological, chemical and physiological properties of large numbers of plants. Several robotic systems have been developed to fulfill different phenotyping missions. In particular, robotic phenotyping has the potential to enable efficient monitoring of changes in plant traits over time, both in controlled environments and in the field. The operation of these robots can be challenging as a result of the dynamic nature of plants and agricultural environments. Here we discuss developments in phenotyping robots, the challenges that have been overcome, and others that remain outstanding. In addition, some prospective applications of phenotyping robots are presented. We optimistically anticipate that autonomous and robotic systems will make great leaps forward in the next 10 years, advancing plant phenotyping research into a new era.
A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms
In this paper a review is presented of the research on eye gaze estimation techniques and applications, which has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, head-mounted, automotive and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome from this review is the realization of a need to develop standardized methodologies for performance evaluation of gaze tracking systems and to achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed.
Comment: 25 pages, 13 figures, Accepted for publication in IEEE Access in July 201
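Gaze tracking accuracy is conventionally reported as angular error in degrees of visual angle, one of the quantities any standardized evaluation methodology would need to pin down. A minimal sketch of that metric follows (the function name and example vectors are illustrative, not from the survey):

```python
import math

def gaze_angular_error(gaze_vec, target_vec):
    """Angular error in degrees between an estimated 3D gaze direction
    and the true direction towards the fixation target."""
    dot = sum(g * t for g, t in zip(gaze_vec, target_vec))
    norm_g = math.sqrt(sum(g * g for g in gaze_vec))
    norm_t = math.sqrt(sum(t * t for t in target_vec))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_a = max(-1.0, min(1.0, dot / (norm_g * norm_t)))
    return math.degrees(math.acos(cos_a))
```

Averaging this error over a grid of on-screen targets is the usual basis for comparing trackers across platforms.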
Divergent evolution of protein conformational dynamics in dihydrofolate reductase.
Molecular evolution is driven by mutations, which may affect the fitness of an organism and are then subject to natural selection or genetic drift. Analysis of primary protein sequences and tertiary structures has yielded valuable insights into the evolution of protein function, but little is known about the evolution of functional mechanisms, protein dynamics and the conformational plasticity essential for activity. We characterized the atomic-level motions across divergent members of the dihydrofolate reductase (DHFR) family. Despite structural similarity, Escherichia coli and human DHFRs use different dynamic mechanisms to perform the same function, and human DHFR cannot complement DHFR-deficient E. coli cells. Identification of the primary-sequence determinants of flexibility in DHFRs from several species allowed us to propose a likely scenario for the evolution of functionally important DHFR dynamics, following a pattern of divergent evolution that is tuned by the cellular environment.
3D machine vision system for robotic weeding and plant phenotyping
The need for chemical free food is increasing and so is the demand for a larger supply to feed the growing global population. An autonomous weeding system should be capable of differentiating crop plants and weeds to avoid contaminating crops with herbicide or damaging them with mechanical tools. For the plant genetics industry, automated high-throughput phenotyping technology is critical to profiling seedlings at a large scale to facilitate genomic research. This research applied 2D and 3D imaging techniques to develop an innovative crop plant recognition system and a 3D holographic plant phenotyping system.
A 3D time-of-flight (ToF) camera was used to develop a crop plant recognition system for broccoli and soybean plants. The developed system overcame previously unsolved problems caused by occluded canopy and illumination variation. Both 2D and 3D features were extracted and used for the plant recognition task. Broccoli and soybean recognition algorithms were developed based on the characteristics of the plants. In field experiments, detection rates of over 88.3% and 91.2% were achieved for broccoli and soybean plants, respectively. The detection algorithm also reached a speed of over 30 frames per second (fps), making it applicable to robotic weeding operations.
Apart from applying 3D vision to plant recognition, a 3D-reconstruction-based phenotyping system was also developed for holographic 3D reconstruction and physical trait parameter estimation for corn plants. In this application, precise alignment of multiple 3D views is critical to the 3D reconstruction of a plant. Previously published research highlighted the need for high-throughput, high-accuracy, and low-cost 3D phenotyping systems capable of holographic plant reconstruction and characterization of plant-morphology-related traits. This research contributed to the realization of such a system by integrating a low-cost 2D camera, a low-cost 3D ToF camera, and a chessboard-pattern beacon array to track the 3D camera's position and attitude, thus accomplishing precise 3D point cloud registration from multiple views. Specifically, algorithms for beacon target detection, camera pose tracking, and spatial relationship calibration between the 2D and 3D cameras were developed. The phenotypic data obtained by this novel 3D-reconstruction-based phenotyping system were validated against instrument and manual measurements, showing that the system achieved measurement accuracy of more than 90% in most cases, with an average processing time of less than five seconds per plant.
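Once the 3D camera's pose has been recovered from the beacon array, registering the views amounts to mapping each point cloud into a common world frame and stacking the results. A minimal sketch, assuming the pose is available as a rotation matrix and translation vector (the function name and toy poses are illustrative, not the authors' code):

```python
import numpy as np

def register_view(points_cam, R_wc, t_wc):
    """Map points measured in the camera frame into the world frame,
    given the camera pose (rotation R_wc, translation t_wc) recovered
    from beacon tracking."""
    return points_cam @ R_wc.T + t_wc

# Merging two views taken from different (known) camera poses.
view_a = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
view_b = np.array([[0.0, 0.0, 1.0]])
R_a, t_a = np.eye(3), np.zeros(3)                # first pose: identity
R_b, t_b = np.eye(3), np.array([0.0, 0.0, 0.5])  # second pose: shifted 0.5 m
merged = np.vstack([register_view(view_a, R_a, t_a),
                    register_view(view_b, R_b, t_b)])
```

The accuracy of the merged cloud then depends directly on the accuracy of the pose tracking, which is why the beacon detection and 2D/3D calibration steps matter.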
Multi-Modal 3D Object Detection in Autonomous Driving: a Survey
In the past few years, we have witnessed the rapid development of autonomous driving. However, achieving full autonomy remains a daunting task due to the complex and dynamic driving environment. As a result, self-driving cars are equipped with a suite of sensors for robust and accurate environment perception. As the number and type of sensors keep increasing, combining them for better perception is becoming a natural trend. So far, there has been no in-depth review that focuses on multi-sensor-fusion-based perception. To bridge this gap and motivate future research, this survey is devoted to reviewing recent fusion-based 3D detection deep learning models that leverage multiple sensor data sources, especially cameras and LiDARs. We first introduce the background of popular sensors for autonomous cars, including their common data representations as well as the object detection networks developed for each type of sensor data. Next, we discuss some popular datasets for multi-modal 3D object detection, with a special focus on the sensor data included in each dataset. Then we present in-depth reviews of recent multi-modal 3D detection networks by considering three aspects of the fusion: fusion location, fusion data representation, and fusion granularity. After this detailed review, we discuss open challenges and point out possible solutions. We hope that our review can help researchers to embark on investigations in the area of multi-modal 3D object detection.
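A building block common to the camera–LiDAR fusion models surveyed is projecting LiDAR points into the image plane via the extrinsic and intrinsic calibration, so that point features can be paired with image features. A minimal pinhole-model sketch (the matrix names and toy calibration values are illustrative):

```python
import numpy as np

def project_lidar_to_image(pts_lidar, T_cam_lidar, K):
    """Project LiDAR points (N x 3) into pixel coordinates: transform
    into the camera frame with the 4x4 extrinsics, keep points in front
    of the camera, then apply the 3x3 pinhole intrinsics."""
    homo = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])
    pts_cam = (T_cam_lidar @ homo.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0          # drop points behind the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide
    return uv, in_front

# Toy calibration: camera and LiDAR frames coincide.
K = np.array([[100.0,   0.0, 50.0],
              [  0.0, 100.0, 50.0],
              [  0.0,   0.0,  1.0]])
T = np.eye(4)
uv, mask = project_lidar_to_image(np.array([[1.0, 0.0, 10.0]]), T, K)
```

Where fusion happens relative to this projection (raw points, features, or detections) is essentially the "fusion location" axis the survey uses to organize the literature.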
Plants Detection, Localization and Discrimination using 3D Machine Vision for Robotic Intra-row Weed Control
Weed management is vitally important in crop production systems. However, conventional herbicide-based weed control can lead to negative environmental impacts, and manual weed control is laborious and impractical for large-scale production. Robotic weeding offers the possibility of controlling weeds precisely, particularly weeds growing close to or within crop rows. The fusion of two-dimensional textural images and three-dimensional spatial images to recognize and localize crop plants at different growth stages was investigated. Images of different crop plants at different growth stages, with weeds, were acquired. Feature extraction algorithms were developed, and different features were extracted and used to train plant and background classifiers, which also addressed the problems of canopy occlusion and leaf damage. The efficacy and accuracy of the proposed classification methods were then demonstrated by experiments. So far, the algorithms have only been developed and tested for broccoli and lettuce. For broccoli plants, the crop plant detection true positive rate was 93.1% and the false discovery rate was 1.1%, with an average crop-plant-localization error of 15.9 mm. For lettuce plants, the crop plant detection true positive rate was 92.3% and the false discovery rate was 4.0%, with an average crop-plant-localization error of 8.5 mm. The results show that 3D-imaging-based plant recognition algorithms are effective and reliable for crop/weed differentiation.
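The two figures reported per crop, true positive rate and false discovery rate, are simple functions of the detection counts. For reference (the counts below are illustrative, not the study's raw data):

```python
def detection_rates(tp, fp, fn):
    """True positive rate (recall) and false discovery rate from
    true-positive, false-positive and false-negative counts."""
    tpr = tp / (tp + fn)   # detected crop plants / all crop plants present
    fdr = fp / (tp + fp)   # wrong detections / all detections made
    return tpr, fdr

tpr, fdr = detection_rates(tp=90, fp=10, fn=10)
```

A high TPR with a low FDR is what matters for weeding: missed crop plants risk being sprayed or cut, while false detections leave weeds untreated.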
Emerging Linguistic Functions in Early Infancy
This paper presents results from experimental studies on early language acquisition in infants and attempts to interpret the experimental results within the framework of the Ecological Theory of Language Acquisition (ETLA) recently proposed by Lacerda et al. (2004a). From this perspective, the infant's first steps in the acquisition of the ambient language are seen as a consequence of the infant's general capacity to represent sensory input and of the infant's interaction with other actors in its immediate ecological environment. On the basis of available experimental evidence, it is argued that ETLA offers a productive alternative to traditional descriptive views of the language acquisition process by presenting an operative model of how early linguistic function may emerge through interaction.
Spatio-temporal action localization with Deep Learning
Master's dissertation in Informatics Engineering.
The system that detects and identifies human activities is named human action recognition. In the video approach, human activity is classified into four different categories, depending on the complexity of the steps and the number of body parts involved in the action: gestures, actions, interactions, and activities. This makes it challenging for video-based human action recognition to capture valuable and discriminative features, because of the variations of the human body. Deep learning techniques have therefore provided practical applications in multiple fields of signal processing, usually surpassing traditional signal processing at scale.
Recently, several applications, namely surveillance, human-computer interaction, and content-based video retrieval, have studied the detection and recognition of violence. In recent years there has been rapid growth in the production and consumption of a wide variety of video data due to the popularization of high-quality and relatively low-priced video devices; smartphones and digital cameras have contributed greatly to this. At the same time, about 300 hours of video are uploaded to YouTube every minute. Along with the growing production of video data, new technologies such as video captioning, video question answering, and video-based activity/event detection are emerging every day. From the video input data, human activity detection indicates which activity is contained in the video and locates the regions in the video where the activity occurs.
This dissertation conducted an experiment to identify and detect violence with spatial action localization, adapting a public dataset for this purpose. The idea was to use an annotated dataset for general action recognition and adapt it for violence detection only.