
    Fingerprint Database Enhancement by Applying Interpolation and Regression Techniques for IoT-based Indoor Localization

    Most applied indoor localization is based on distance and fingerprint techniques. The distance-based technique converts specific parameters to a distance, while the fingerprint technique stores the parameters in a fingerprint database. Widely used Internet of Things (IoT) technologies, e.g., Wi-Fi and ZigBee, provide the localization parameter, i.e., the received signal strength indicator (RSSI). The fingerprint technique has advantages over the distance-based method, as it uses the parameter directly and achieves better accuracy. However, its disadvantage is the burden of database construction in terms of complexity and cost. Several solutions, e.g., interpolation, image-based, and machine learning (ML)-based methods, have been proposed to enhance fingerprint methods, but they are complex and have been evaluated only in a single environment or in simulation. This paper proposes applying classical interpolation and regression to create a synthetic fingerprint database using only a relatively sparse RSSI dataset. We use bilinear and polynomial interpolation and polynomial regression techniques to create the synthetic database and apply our methods in 2D and 3D environments. Using the synthetic database, we obtain an accuracy improvement of 0.2 m for 2D and 0.13 m for 3D. Adding the synthetic database can tackle the sparsity issue and makes offline fingerprint database construction less burdensome. Doi: 10.28991/esj-2021-SP1-012
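
    As an illustration of the database-densification step described above, the following sketch (our own, not the paper's code; the grid size, the synthetic path-loss expression, and the degree-2 polynomial are illustrative assumptions) generates synthetic RSSI fingerprints from a sparse survey by scattered linear interpolation and by polynomial regression:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Sparse survey: RSSI (dBm) of one AP measured at a few reference points.
# The log-distance expression below is only a stand-in for real measurements.
ref_xy = rng.uniform(0, 10, size=(25, 2))                 # (x, y) in metres
rssi = -40 - 20 * np.log10(1 + np.linalg.norm(ref_xy - 5, axis=1))

# Dense grid on which the synthetic fingerprint database is generated.
gx, gy = np.meshgrid(np.linspace(0, 10, 41), np.linspace(0, 10, 41))

# 1) Scattered linear interpolation between surveyed points
#    (NaN outside the convex hull of the survey).
synthetic_interp = griddata(ref_xy, rssi, (gx, gy), method="linear")

# 2) Degree-2 polynomial regression fitted by least squares.
def design(x, y):
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

coef, *_ = np.linalg.lstsq(design(*ref_xy.T), rssi, rcond=None)
synthetic_poly = (design(gx.ravel(), gy.ravel()) @ coef).reshape(gx.shape)
```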

    Improvement of Position Estimation Accuracy Using Multiple Access Points in Terminal Position Estimation based on Position Fingerprint

    In this paper, we consider a position estimation method based on the position fingerprint of radio propagation characteristics. We propose a new method in which multiple Access Points (APs) are used to improve estimation performance, and we experimentally evaluate the accuracy of the proposed method.
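
    A minimal sketch of the fingerprint-matching idea behind this approach (our illustration, not the paper's method): each fingerprint is a vector of RSSI values with one entry per AP, so adding APs lengthens the vector and tends to make the nearest-fingerprint match more discriminative. The function name and the weighted k-NN rule are assumptions:

```python
import numpy as np

def estimate_position(db_rssi, db_xy, query_rssi, k=3):
    """Weighted k-nearest-neighbour match in RSSI fingerprint space.

    db_rssi    : (N, A) RSSI of N reference points over A access points
    db_xy      : (N, 2) coordinates of the reference points
    query_rssi : (A,)   RSSI vector observed by the terminal
    """
    d = np.linalg.norm(db_rssi - query_rssi, axis=1)   # fingerprint distance
    idx = np.argsort(d)[:k]                            # k best matches
    w = 1.0 / (d[idx] + 1e-9)                          # closer matches weigh more
    return (w[:, None] * db_xy[idx]).sum(axis=0) / w.sum()
```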

    Learning based forensic techniques for source camera identification

    In recent years, multimedia forensics has received rapidly growing attention. One challenging problem in multimedia forensics is source camera identification, the goal of which is to identify the source of a multimedia object, such as a digital image or video. Sensor pattern noise (SPN), produced by the imaging sensor, has been proven effective for source camera identification. Precisely speaking, conventional SPN-based source camera identification has two application models: verification and identification. In the past decade, significant progress has been achieved in SPN-based source camera verification and identification. However, there are still many cases requiring solutions beyond the capabilities of current methods. In this thesis, we considered and addressed two commonly seen but less studied problems. The first problem is source camera verification with reference SPNs corrupted by scene details. The most significant limitation of using SPN for source camera identification is that the SPN can be seriously contaminated by scene details. Most existing methods assume that contamination from scene details occurs only in query images and not in reference images. To address this issue, we propose a measurement based on the combination of local image entropy and brightness to evaluate the quality of the SPN contained in different image blocks. Based on this measurement, a context-adaptive reference SPN estimator is proposed to address the problem of reference images contaminated by scene details. The second problem we considered relates to the high computational complexity of using SPN in source camera identification, which is caused by the high dimensionality of SPN. In order to improve identification efficiency without degrading accuracy, we propose an effective feature extraction algorithm based on the concept of PCA denoising to extract a small set of components from the original noise residual, which tends to carry most of the information of the true SPN signal. To further improve the performance of this framework, two enhancement methods are introduced. The first takes advantage of the label information of the reference images so as to better separate different classes and further reduce the dimensionality. The second is an extension based on Candid Covariance-free Incremental PCA that incrementally updates the feature extractor as new images are received, so that there is no need to re-run training every time an image is added to the database. Moreover, an ensemble method based on the random subspace method and majority voting is proposed in the context of source camera identification to tackle the performance degradation of the PCA-based feature extraction method caused by unwanted interferences in the training set. The proposed algorithms are evaluated on the challenging Dresden image database, and experimental results confirm their effectiveness.
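
    The thesis builds on the standard SPN pipeline, which the following hedged sketch outlines (all names are ours, and the Gaussian filter is only a stand-in for the wavelet denoiser commonly used in SPN work): a noise residual is extracted per image, residuals from the same camera are averaged into a reference SPN, and a query residual is matched by normalised correlation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Residual = image minus its denoised version (greyscale float array)."""
    return img - gaussian_filter(img, sigma=1.0)

def reference_spn(images):
    """Average the residuals of many low-detail images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalised cross-correlation used as the matching score."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

# Verification: accept if ncc(noise_residual(query), ref_spn) > threshold.
```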

    Analyzing complex data using domain constraints

    Data-driven research approaches are becoming increasingly popular in a growing number of scientific disciplines. While a data-driven research approach can yield superior results, generating the required data can be very costly. This frequently leads to small and complex data sets, in which it is impossible to rely on volume alone to compensate for all shortcomings of the data. To counter this problem, other reliable sources of information must be incorporated. In this work, domain knowledge, as a particularly reliable type of additional information, is used to inform data-driven analysis methods. This domain knowledge is represented as constraints on the possible solutions, which the presented methods can use to guide their analysis. The work focuses on spatial constraints as a particularly common type of constraint, but the proposed techniques are general enough to be applied to other types of constraints. In this thesis, new methods using domain constraints for data-driven science applications are discussed. These methods have applications in feature evaluation, route database repair, and Gaussian mixture modeling of spatial data. The first application focuses on feature evaluation. The presented method receives two representations of the same data: one as the intended target and the other for investigation. It calculates a score indicating how much the two representations agree. A presented application uses this technique to compare a reference attribute set with different subsets to determine the importance and relevance of individual attributes. A second technique analyzes route data for constraint compliance. The presented framework allows the user to specify constraints and possible actions to modify the data. The method then uses these inputs to generate a version of the data that agrees with the constraints while otherwise reducing the impact of the modifications as much as possible. Two extensions of this scheme are presented: an extension to continuously valued costs, which are minimized, and an extension to constraints involving more than one moving object. Another addressed application area is the modeling of multivariate measurement data measured at spatially distributed locations. The spatial information recorded with the data can be used as the basis for constraints. This thesis presents multiple approaches to building a model of this kind of data while complying with spatial constraints. The first approach is an interactive tool that allows domain scientists to generate a model of the data that complies with their knowledge about the data. The second is a Monte Carlo approach, which generates a large number of possible models, tests them for compliance with the constraints, and returns the best one. The final two approaches are based on the EM algorithm and use different ways of incorporating the constraint information into their models. At the end of the thesis, two applications of the generated models are presented: prediction of the origin of samples and the visual representation of the extracted models on a map. These tools can be used by domain scientists to augment their tried and tested tools. The developed techniques are applied to a real-world data set collected in the archaeobiological research project FOR 1670 (Transalpine mobility and cultural transfer) of the German Research Foundation.
The data set contains isotope ratio measurements of samples discovered at archaeological sites in the Alps region of central Europe. Using the presented data analysis methods, the data is analyzed to answer relevant domain questions. In a first application, the attributes of the measurements are analyzed for their relative importance and their ability to predict the spatial location of samples. Another presented application is the reconstruction of potential migration routes between the investigated sites. Then spatial models are built using the presented modeling approaches. Univariate outliers are determined and used to predict locations based on the generated models, and these predictions are cross-referenced with the recorded origins. Finally, maps of the isotope distribution in the investigated regions are presented. The described methods and demonstrated analyses show that domain knowledge can be used to formulate constraints that inform the data analysis process to yield valid models from relatively small data sets and support domain scientists in their analyses.
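
    The Monte Carlo approach lends itself to a compact illustration. The sketch below (our reading, not the thesis code; the region-membership constraint and the trial count are placeholders) fits many randomly initialised Gaussian mixtures and keeps the best-scoring one whose component means satisfy the spatial constraint:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def constrained_gmm(X, n_components, in_region, trials=200, seed=0):
    """X: (N, D) data; in_region: callable testing whether a mean is allowed."""
    rng = np.random.RandomState(seed)
    best, best_ll = None, -np.inf
    for _ in range(trials):
        gm = GaussianMixture(n_components=n_components,
                             random_state=rng.randint(1 << 30)).fit(X)
        if not all(in_region(m) for m in gm.means_):
            continue                      # violates the spatial constraint
        ll = gm.score(X)                  # mean log-likelihood of the data
        if ll > best_ll:
            best, best_ll = gm, ll
    return best                           # None if no candidate complied
```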

    Automated dental identification: A micro-macro decision-making approach

    Identification of deceased individuals based on dental characteristics is receiving increased attention, especially with the large volume of victims encountered in mass disasters. In this work we consider three important problems in automated dental identification beyond the basic approach of tooth-to-tooth matching. The first problem is the automatic classification of teeth into incisors, canines, premolars, and molars as part of creating a data structure that guides tooth-to-tooth matching, thus avoiding illogical comparisons that inefficiently consume the limited computational resources and may also mislead the decision-making. We tackle this problem using principal component analysis and string matching techniques. We reconstruct the segmented teeth using the eigenvectors of the image subspaces of the four tooth classes, and then select the tooth class that achieves the least energy discrepancy between the novel tooth and its approximation. We exploit tooth neighborhood rules in validating tooth classes and hence assign each tooth a number corresponding to its location in a dental chart. Our approach achieves 82% tooth labeling accuracy on a large test dataset of bitewing films. Because dental radiographic films capture projections of distinct teeth, and often multiple views of each of the distinct teeth, in the second problem we look for a scheme that exploits tooth multiplicity to achieve more reliable match decisions when comparing the dental records of a subject and a candidate match. Hence, we propose a hierarchical fusion scheme that utilizes both aspects of tooth multiplicity for improving tooth-level (micro) and case-level (macro) decision-making. We achieve a genuine accept rate in excess of 85%. In the third problem we study the performance limits of dental identification due to the capabilities of the features. We consider two types of features used in dental identification, namely tooth contours and appearance features. We propose a methodology for determining the number of degrees of freedom possessed by a feature set, as a figure of merit, based on modeling joint distributions using copulas under less stringent assumptions on the dependence between feature dimensions. We also offer workable approximations of this approach.
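
    The eigen-reconstruction idea of the first problem can be sketched as follows (a hypothetical illustration; the image vectorisation and component count are assumptions): one PCA subspace is trained per tooth class, and a query tooth is assigned to the class whose subspace reconstructs it with the least energy discrepancy.

```python
import numpy as np
from sklearn.decomposition import PCA

CLASSES = ["incisor", "canine", "premolar", "molar"]

def train_subspaces(training, n_components=20):
    """training: dict mapping class name -> (N, D) matrix of vectorised teeth."""
    return {c: PCA(n_components=n_components).fit(X)
            for c, X in training.items()}

def classify(tooth_vec, subspaces):
    """Assign the class whose eigenspace best reconstructs the tooth."""
    errors = {}
    for c, pca in subspaces.items():
        recon = pca.inverse_transform(pca.transform(tooth_vec[None, :]))
        errors[c] = np.linalg.norm(tooth_vec - recon[0])  # energy discrepancy
    return min(errors, key=errors.get)
```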

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Space-partitioning with cascade-connected ANN structures for positioning in mobile communication systems

    The world around us is getting more connected with each passing day – new portable devices employ wireless connections to various networks wherever one might be. Location-aware computing has become an important part of telecommunication services and industry. For this reason, research efforts on new and improved localisation algorithms are constantly being performed. Thus far, satellite positioning systems have achieved the highest popularity and penetration for global position estimation. In spite of numerous investigations aimed at enabling these systems to procure the position equally well in indoor and outdoor environments, this is still a task to be completed. The research work presented herein aimed at improving state-of-the-art positioning techniques through the use of two highly popular mobile communication systems: WLAN and public land mobile networks (PLMN). These systems already have widely deployed network structures (coverage) and a vast number of (inexpensive) mobile clients, so using them for additional positioning purposes is rational and logical. First, positioning in WLAN systems was analysed and elaborated. The indoor test-bed, used for verifying the models' performance, covered an area of almost 10,000 m². It was chosen carefully so that positioning could be thoroughly explored. The measurement campaigns performed therein covered the whole test-bed environment and gave insight into the location-dependent parameters available in WLAN networks. Further analysis of the data led to the development of positioning models based on artificial neural networks (ANNs). The best single ANN model obtained a 9.26 m average distance error and a 7.75 m median distance error. The novel positioning model structure, consisting of cascade-connected ANNs, improved those results to 8.14 m and 4.57 m, respectively. To adequately compare the proposed techniques with other well-known research techniques, the environment positioning error parameter was introduced. This parameter makes it possible to take the size of the test environment into account when comparing the accuracy of indoor positioning techniques. Concerning PLMN positioning, an in-depth analysis of the available system parameters and signalling protocols produced a positioning algorithm capable of fusing received signal strength parameters from multiple systems and multiple operators. Knowing that most areas are covered by signals from more than one network operator, and often by more than one system per operator, the great practical value of this novel algorithm is easy to see. In addition, an extensive drive-test measurement campaign, covering more than 600 km in the central areas of Belgrade, was performed. Using this algorithm and applying the single ANN models to the recorded measurements, a 59 m average distance error and a 50 m median distance error were obtained. Moreover, positioning in an indoor environment was verified, and the degradation of performance due to cross-environment model use was reported: a 105 m average distance error and a 101 m median distance error. When applying the new cascade-connected ANN structure model, distance errors were reduced to 26 m and 2 m for the average and median distance errors, respectively. The obtained positioning accuracy was shown to be good enough for the implementation of a broad scope of location-based services using the existing, commonly available, deployed infrastructure.
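
    A minimal sketch of the cascade idea (our interpretation, with invented network sizes, not the thesis implementation): a first ANN picks a partition of the space from the RSSI vector, and a per-partition ANN trained only on data from that region regresses the coordinates.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

def train_cascade(rssi, xy, partition_labels):
    """rssi: (N, A) fingerprints; xy: (N, 2) positions; labels: (N,) regions."""
    gate = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
    gate.fit(rssi, partition_labels)                  # stage 1: space partition
    experts = {}
    for p in np.unique(partition_labels):
        mask = partition_labels == p                  # stage 2: one ANN per region
        experts[p] = MLPRegressor(hidden_layer_sizes=(32,),
                                  max_iter=2000).fit(rssi[mask], xy[mask])
    return gate, experts

def locate(gate, experts, query_rssi):
    p = gate.predict(query_rssi[None, :])[0]          # pick the partition
    return experts[p].predict(query_rssi[None, :])[0]  # regress (x, y)
```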

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary area between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible through rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology, and applications of pattern recognition.

    Palm Vein Verification Using Multiple Features and Locality Preserving Projections

    Biometrics is defined as identifying people by their physiological characteristics, such as iris pattern, fingerprint, and face, or by some aspects of their behavior, such as voice, signature, and gesture. Considerable attention has been drawn to these issues during the last several decades, and many biometric systems for commercial applications have been successfully developed. Recently, the vein pattern biometric has become increasingly attractive for its uniqueness, stability, and noninvasiveness. A vein pattern is the physical distribution structure of the blood vessels underneath a person's skin. The palm vein pattern forms a dense, ganglion-like network comprising a huge number of vessels. The layout of the palm vein vessels stays in the same location for the whole life, and its pattern is unique to each individual. In our work, a matched filter method is proposed for palm vein image enhancement. New palm vein feature extraction methods are proposed: a global feature based on wavelet coefficients and locality preserving projections (WLPP), and a local feature based on local binary pattern variance and locality preserving projections (LBPV_LPP). Finally, a nearest neighbour matching method is proposed to verify the test palm vein images. The experimental results show that the EER of the proposed method is 0.1378%.
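
    Locality preserving projections (LPP), the dimensionality-reduction step shared by both proposed features, can be sketched as follows (a generic LPP implementation under illustrative neighbourhood and kernel parameters, not the authors' code); it would be applied to the wavelet or LBPV feature vectors of the enhanced palm vein images:

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=10, k=5, t=1.0):
    """X: (N, D) feature matrix. Returns a (D, n_components) projection."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]              # k nearest neighbours
    W = np.zeros((n, n))
    for i in range(n):
        W[i, knn[i]] = np.exp(-d2[i, knn[i]] / t)         # heat-kernel weights
    W = np.maximum(W, W.T)                                # symmetrise the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                             # graph Laplacian
    # Smallest generalized eigenvectors of  X^T L X a = lambda X^T D X a.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])           # regularise B
    _, vecs = eigh(A, B)                                  # ascending eigenvalues
    return vecs[:, :n_components]
```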