Feature Matching in Iris Recognition System using MATLAB
An iris recognition system provides secure human authentication in biometric technology. The system consists of five stages: image acquisition, iris segmentation, iris normalization, feature encoding, and feature matching. In image acquisition, an eye image is captured from the CASIA database; the image must have good quality and high resolution for the subsequent steps. In iris segmentation, the iris region is detected using the Hough transform and Canny edge detection, isolating the iris from the eye image. In normalization, the segmented iris is converted from a circular region into a rectangular region using a polar transform. In feature encoding, the normalized iris is encoded into a binary bit format using Gabor filters. In feature matching, the encoded iris template is compared against the iris templates of the database eye images, and a matching score is generated using the Hamming distance and the Euclidean distance; the result is decided from this matching score. This project is developed using the Image Processing Toolbox of MATLAB.
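As a rough illustration of the matching stage described above, the sketch below compares two binary iris codes with a fractional Hamming distance, using a noise mask to exclude occluded bits. The code length, the NumPy representation, and the 0.32 decision threshold are illustrative assumptions (0.32 is a commonly cited threshold in the iris-recognition literature), not the paper's actual implementation.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    Bits flagged as noise (eyelids, eyelashes, reflections) in either
    mask are excluded from the comparison.
    """
    valid = mask_a & mask_b                   # bits usable in both codes
    disagreeing = (code_a ^ code_b) & valid   # XOR finds mismatched bits
    return disagreeing.sum() / valid.sum()

# Toy example: 2048-bit codes, as a Gabor-filter encoder might produce.
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048).astype(bool)
probe = enrolled.copy()
flipped = rng.choice(2048, size=100, replace=False)
probe[flipped] ^= True                        # simulate 100 noisy bits
mask = np.ones(2048, dtype=bool)              # assume no occlusion here

hd = hamming_distance(enrolled, probe, mask, mask)
print(f"Hamming distance: {hd:.3f}")
print("Match" if hd < 0.32 else "No match")   # 0.32: assumed threshold
```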
Edge Intelligence with Light Weight CNN Model for Surface Defect Detection in Manufacturing Industry
Surface defect identification is essential for maintaining and improving the quality of industrial products. However, numerous environmental factors, including reflection, radiance, lighting, and material, affect the detection process and considerably increase the difficulty of detecting surface defects. Deep learning, a branch of artificial intelligence, can detect surface defects in the industrial sector, but conventional deep learning techniques require expensive GPUs to support the massive computations involved in defect detection. This research proposes CondenseNetV2, a lightweight CNN-based model that performs well on microscopic defect inspection and can be operated on low-frequency edge devices. It provides sufficient feature extraction with little computational overhead by reusing the existing Sparse Feature Reactivation module. The training data are subjected to data augmentation techniques, and the hyper-parameters of the proposed model are fine-tuned with transfer learning. The model was tested extensively on two real datasets while running on an edge device (NVIDIA Jetson Xavier NX SOM). The experimental results confirm that the proposed model can efficiently detect faults in a real-world environment while diagnosing them reliably and robustly.
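The training recipe named in the abstract (data augmentation plus transfer learning with a fine-tuned head) follows a standard pattern; the sketch below shows that pattern in PyTorch. MobileNetV3-Small stands in for CondenseNetV2, which has no torchvision implementation, and the dataset path, class count, and hyper-parameters are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 6  # assumed number of surface-defect categories

# Data augmentation on the training split, as the abstract describes.
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# "defects/train" is a placeholder ImageFolder layout, one folder per class.
train_ds = datasets.ImageFolder("defects/train", transform=train_tfms)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Transfer learning: start from ImageNet weights, replace the classifier.
model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, NUM_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one fine-tuning epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```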
Gesture Recognition for Enhancing Human Computer Interaction
Gesture recognition is critical in human-computer communication. A plethora of current technological developments build on it, including the biometric authentication we see all the time in our smartphones. Hand-gesture interaction, a common human-computer interface in which we control our devices by presenting our hands in front of a webcam, can benefit people of diverse backgrounds. Efforts in human-computer interfaces include voice assistance, virtual mouse implementation with voice commands, fingertip recognition, and hand-motion tracking based on images in live video. Human-computer interaction (HCI), particularly vision-based gesture and object recognition, is becoming increasingly important. Hence, we designed and developed a system for monitoring fingers using extreme learning-based hand gesture recognition techniques. Extreme learning helps interpret hand gestures quickly and with improved accuracy, which is highly useful in domains like healthcare, financial transactions, and global business.
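An extreme learning machine, as referenced above, trains a single-hidden-layer network by fixing random input weights and solving the output weights in closed form, which is why it is fast. Below is a minimal NumPy sketch on synthetic stand-ins for gesture feature vectors; the sizes, features, and labels are invented for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for hand-gesture feature vectors (e.g., landmark
# coordinates): 500 samples, 40 features, 5 gesture classes.
X = rng.normal(size=(500, 40))
y = rng.integers(0, 5, 500)
T = np.eye(5)[y]  # one-hot targets

# 1. Random, untrained hidden layer (the "extreme" part of ELM).
n_hidden = 200
W = rng.normal(size=(40, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # hidden-layer activations

# 2. Output weights by regularized least squares: a closed-form solve
#    instead of backpropagation, which is what makes ELM training fast.
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

# Predict: same random projection, then argmax over class scores.
pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```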
Object Sub-Categorization and Common Framework Method using Iterative AdaBoost for Rapid Detection of Multiple Objects
Object detection and tracking in real time has numerous applications and benefits in fields such as surveillance and crime detection. Gaining useful information from real-time road scenes is called Traffic Scene Perception (TSP). TSP consists of three subtasks: detecting objects of interest, recognizing the detected objects, and tracking the moving objects. Although the results of object recognition and tracking are valuable, detecting a particular object of interest is of higher value in any real-time scenario. Prevalent systems develop a unique detector for each of these subtasks, each relying on different features, which is time consuming and involves many redundant operations. Hence, this paper proposes a common framework using an enhanced AdaBoost algorithm that examines all dense features only once, increasing detection speed substantially. An object sub-categorization strategy is proposed to capture the intra-class variance of objects and further boost generalisation performance. We demonstrate the efficiency of the proposed framework on three detection applications: traffic sign detection, car detection, and bike detection. On numerous benchmark data sets, the framework delivers performance competitive with state-of-the-art techniques.
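For context on the boosting machinery the abstract builds on, here is vanilla AdaBoost with decision stumps on synthetic data: each round reweights the training samples so the next weak learner focuses on previous mistakes. The paper's enhanced, shared-feature variant is not reproduced here; everything below is a generic illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary detection problem with labels in {-1, +1}.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
y = 2 * y - 1

n_rounds = 25
weights = np.full(len(y), 1 / len(y))
stumps, alphas = [], []

for _ in range(n_rounds):
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=weights)
    pred = stump.predict(X)
    err = weights[pred != y].sum()
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # stump's vote weight
    # Re-weight: misclassified samples gain weight for the next round.
    weights *= np.exp(-alpha * y * pred)
    weights /= weights.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Final detector: sign of the weighted sum of stump votes.
scores = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("training accuracy:", (np.sign(scores) == y).mean())
```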
Measurement of the Bottom-Strange Meson Mixing Phase in the Full CDF Data Set
We report a measurement of the bottom-strange meson mixing phase \beta_s using the time evolution of B0_s -> J/\psi (-> \mu+ \mu-) \phi (-> K+ K-) decays in which the quark-flavor content of the bottom-strange meson is identified at production. This measurement uses the full data set of proton-antiproton collisions at sqrt(s) = 1.96 TeV collected by the Collider Detector experiment at the Fermilab Tevatron, corresponding to 9.6 fb-1 of integrated luminosity. We report confidence regions in the two-dimensional space of \beta_s and the B0_s decay-width difference \Delta\Gamma_s, and measure \beta_s in [-\pi/2, -1.51] U [-0.06, 0.30] U [1.26, \pi/2] at the 68% confidence level, in agreement with the standard model expectation. Assuming the standard model value of \beta_s, we also determine \Delta\Gamma_s = 0.068 +- 0.026 (stat) +- 0.009 (syst) ps-1 and the mean B0_s lifetime, \tau_s = 1.528 +- 0.019 (stat) +- 0.009 (syst) ps, which are consistent and competitive with determinations by other experiments. Comment: 8 pages, 2 figures, Phys. Rev. Lett. 109, 171802 (2012)
Neurodevelopmental disorders in children aged 2-9 years: Population-based burden estimates across five regions in India.
BACKGROUND: Neurodevelopmental disorders (NDDs) compromise the development and attainment of full social and economic potential at individual, family, community, and country levels. Paucity of data on NDDs slows down policy and programmatic action in most developing countries despite a perceived high burden. METHODS AND FINDINGS: We assessed 3,964 children (with almost equal numbers of boys and girls distributed in 2-<6 and 6-9 year age categories) identified from five geographically diverse populations in India using a cluster sampling technique (probability proportionate to population size). These were from the North-Central, i.e., Palwal (N = 998; all rural, 16.4% non-Hindu, 25.3% from scheduled caste/tribe [SC-ST] [these are considered underserved communities who are eligible for affirmative action]); North, i.e., Kangra (N = 997; 91.6% rural, 3.7% non-Hindu, 25.3% SC-ST); East, i.e., Dhenkanal (N = 981; 89.8% rural, 1.2% non-Hindu, 38.0% SC-ST); South, i.e., Hyderabad (N = 495; all urban, 25.7% non-Hindu, 27.3% SC-ST); and West, i.e., North Goa (N = 493; 68.0% rural, 11.4% non-Hindu, 18.5% SC-ST). All children were assessed for vision impairment (VI), epilepsy (Epi), neuromotor impairments including cerebral palsy (NMI-CP), hearing impairment (HI), speech and language disorders, autism spectrum disorders (ASDs), and intellectual disability (ID). Furthermore, 6-9-year-old children were also assessed for attention deficit hyperactivity disorder (ADHD) and learning disorders (LDs). We standardized sample characteristics as per the Census of India 2011 to arrive at district-level and all-sites-pooled estimates. Site-specific prevalence of any of seven NDDs in 2-<6 year olds ranged from 2.9% (95% CI 1.6-5.5) to 18.7% (95% CI 14.7-23.6), and of any of nine NDDs in 6-9-year-old children, from 6.5% (95% CI 4.6-9.1) to 18.5% (95% CI 15.3-22.3). Two or more NDDs were present in 0.4% (95% CI 0.1-1.7) to 4.3% (95% CI 2.2-8.2) of the younger age category and 0.7% (95% CI 0.2-2.0) to 5.3% (95% CI 3.3-8.2) of the older age category. All-sites-pooled estimates for NDDs were 9.2% (95% CI 7.5-11.2) and 13.6% (95% CI 11.3-16.2) in children of the 2-<6 and 6-9 year age categories, respectively, without significant difference according to gender, rural/urban residence, or religion; almost one-fifth of these children had more than one NDD. The pooled prevalence estimates increased by up to three percentage points when adjusted for national rates of stunting or low birth weight (LBW). HI, ID, speech and language disorders, Epi, and LDs were the common NDDs across sites. Upon risk modelling, noninstitutional delivery, history of perinatal asphyxia, neonatal illness, postnatal neurological/brain infections, stunting, LBW/prematurity, and the older age category (6-9 years) were significantly associated with NDDs. The study sample underrepresented stunting and LBW and had a 15.6% refusal rate; these factors could contribute to underestimation of the true NDD burden in our population. CONCLUSIONS: The study identifies NDDs in children aged 2-9 years as a significant public health burden for India. HI prevalence was higher than, and ASD prevalence comparable to, the published global literature. Most risk factors for NDDs were modifiable and amenable to public health interventions.
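For readers wanting to reproduce the style of estimate reported above, the sketch below computes a prevalence with a Wilson 95% confidence interval for a simple random sample. The study's actual design (cluster sampling, census standardization, pooling across sites) requires survey-weighted methods, and the counts here are invented.

```python
from statsmodels.stats.proportion import proportion_confint

# Invented counts: 91 of 990 children screening positive for any NDD.
cases, n = 91, 990
prevalence = cases / n
low, high = proportion_confint(cases, n, alpha=0.05, method="wilson")
print(f"prevalence {prevalence:.1%} (95% CI {low:.1%}-{high:.1%})")
```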
Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)
In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
Multi Objective Optimization Test Case Selection for Non Dominated Sorting Genetic Algorithm (NSGA-II)
Regression test selection is performed to reduce the number of test cases in the test suite. The multi-objective evolutionary algorithm (MOEA) approach reduces computational complexity and removes the need for a sharing parameter. In this work, the non-dominated sorting based multi-objective evolutionary algorithm NSGA-II is applied to these difficulties. A fast non-dominated sorting procedure ranks the combined parent and child populations, and selection operators create the offspring for the next generation. NSGA-II is used to minimize execution cost while maximizing statement coverage of the selected test suite. The proposed NSGA-II finds better solutions on all problems compared with the elitist multi-objective evolutionary algorithm.
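To make the two objectives concrete: each candidate test suite can be scored by execution cost and statement coverage, and NSGA-II's non-dominated sorting ranks candidates into Pareto fronts. Below is a naive sketch of that sorting step on invented (cost, -coverage) pairs; it omits the "fast" bookkeeping of the real algorithm, as well as crowding distance, tournament selection, and elitist recombination.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one.

    Both objectives are minimized, so coverage is negated: (cost, -coverage).
    """
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_sort(points):
    """Return Pareto fronts as lists of indices (front 0 is best)."""
    fronts, remaining = [], set(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Invented candidate suites: (execution cost in seconds, -statement coverage).
suites = [(120, -0.91), (80, -0.85), (200, -0.95), (80, -0.70), (150, -0.91)]
for rank, front in enumerate(nondominated_sort(suites)):
    print(f"front {rank}: {[suites[i] for i in front]}")
```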