AGH (Akademia Górniczo-Hutnicza) University of Science and Technology: Journals
    2083 research outputs found

    Process of Fingerprint Authentication using Cancelable Biohashed Template

    Template protection using cancelable biometrics prevents data loss and the hacking of stored templates by providing considerable privacy and security. Hashing and salting techniques are used to build resilient systems. The salted-password method protects passwords against several types of attack, namely brute-force, dictionary, and rainbow-table attacks. Salting adds random data to the input of a hash function to ensure a unique output; hashing salts are speed bumps on an attacker's road to breaching user data. This research proposes a contemporary two-factor authenticator called biohashing. The biohashing procedure is implemented by repeated inner products between a key from a pseudo-random number generator and the fingerprint features, which form a network of minutiae. Cancelable template authentication used at a fingerprint-based sales counter accelerates the payment process. The fingerhash is the code produced after applying biohashing to the fingerprint: a binary string obtained by choosing each bit according to the sign of the projection relative to a preset threshold. The experiment is carried out on the benchmark FVC 2002 DB1 dataset. Authentication accuracy is found to be nearly 97%. Comparison with state-of-the-art approaches yields promising results.
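    The projection-and-threshold step described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure; the orthonormalization step, bit length, and threshold value are assumptions.

```python
import numpy as np

def biohash(features, key_seed, n_bits=64, threshold=0.0):
    """Sketch of biohashing: project a fingerprint feature vector onto
    pseudo-random directions derived from a user key, then binarize each
    projection by its sign relative to a preset threshold.
    n_bits and threshold are illustrative choices, not the paper's."""
    rng = np.random.default_rng(key_seed)  # the user key seeds the PRNG
    # Orthonormalized pseudo-random projection matrix (token-specific)
    basis, _ = np.linalg.qr(rng.standard_normal((len(features), n_bits)))
    projections = features @ basis         # repeated inner products
    return (projections > threshold).astype(np.uint8)  # fingerhash bits

feats = np.random.default_rng(0).standard_normal(128)  # stand-in features
code = biohash(feats, key_seed=42)
```

    Because the projection basis depends on the key, a compromised fingerhash can be revoked by issuing a new key, which is the "cancelable" property.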

    Survey on the Most Current Image Processing Methods in Huntington's Disease Diagnostics and Progression Assessment

    Huntington's disease (HD) is a rare, incurable neurodegenerative disorder in which fast, non-invasive diagnosis targeted at the patient's condition plays a crucial role. Modern medicine combines various scientific areas, such as computing, medicine, and biomedical engineering. This survey focuses on the most recent image processing methods applied not only to diagnosing HD but also to assessing the severity of its progression, as a contribution to the effort to prolong patients' lives and improve their quality.

    Machine Learning based Event Reconstruction for the MUonE Experiment

    As currently operating high-energy physics experiments produce a huge amount of data, new methods of fast and efficient event reconstruction are necessary to handle the immense load. Storing the unprocessed data is not feasible, forcing experiments to process the data online with algorithms of the quality normally reserved for offline analysis, but within strict time constraints. In the MUonE experiment, machine-learning-based event reconstruction techniques are being implemented and tested in order to provide efficient online data reduction and to maximize the statistical power of the final physics measurement.

    The Most Current Solutions using Virtual-Reality-Based Methods in Cardiac Surgery -- A Survey

    There is a widespread belief that VR technologies can provide controlled, multi-sensory, interactive 3D stimulus environments that engage patients in interventions and measure, record, and motivate the required human performance. To investigate the state of the art and the associated applications, we carried out a careful review of six leading medical and technical bibliometric databases. Despite the apparent popularity of VR use in cardiac surgery, only 47 articles published between 2002 and 2022 met the inclusion criteria. They indicate that VR-based solutions in cardiac surgery are useful both for medical specialists and for the patients themselves. The new lifestyle required of cardiac surgery patients is easier to adopt thanks to VR-based educational and motivational tools. However, it is necessary to develop these tools further and to compare their effectiveness with Augmented Reality (AR). For the aforementioned reasons, interdisciplinary collaboration between scientists, clinicians, and engineers is necessary.

    Square grid Path Planning for Mobile Anchor-Based Localization in Wireless Sensor Networks

    Localization provides all sensor nodes with their geographical positions. Mobile anchor-based localization in WSNs uses a mobile anchor equipped with GPS that travels along a predetermined path. At each specified beacon point, it broadcasts its current known position to help sensor nodes with unknown locations estimate their own positions. In this paper, we analyze the determination of beacon points based on a square grid. We propose an improved path planning model named Union-curve, which incorporates all beacon points of five previously developed paths, namely SCAN, HILBERT, S-type, Z-curve, and Σ-Scan, on the commonly used square-grid decomposition of the area. Unknown sensor nodes estimate their positions using two techniques, APT and WCWCL-RSSI. Simulation results show that the proposed model has higher accuracy, with a significantly lower error rate than the other models. In addition, it guarantees maximum coverage with a lower path resolution value.
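    A weighted-centroid position estimate of the kind WCWCL-RSSI builds on can be sketched as follows. The RSSI-to-weight model below is an illustrative assumption, not the paper's exact formula: stronger received signals (nearer beacons) simply get larger weights.

```python
import numpy as np

def weighted_centroid(beacon_positions, rssi_dbm, exponent=1.0):
    """Sketch of weighted-centroid localization: the unknown node
    averages the positions broadcast by the mobile anchor at its beacon
    points, weighting nearer (stronger-RSSI) beacons more heavily.
    The dBm-to-weight conversion and exponent are illustrative."""
    rssi = np.asarray(rssi_dbm, dtype=float)
    # Convert dBm readings to linear power; stronger signal -> larger weight
    weights = (10.0 ** (rssi / 10.0)) ** exponent
    beacons = np.asarray(beacon_positions, dtype=float)
    return weights @ beacons / weights.sum()

# Three beacon points on a square grid, with the first much closer (-40 dBm)
pos = weighted_centroid([(0, 0), (10, 0), (0, 10)], [-40, -60, -60])
```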

    Mesh Compression Algorithm for Geometrical Coordinates in Computational Meshes

    The application of advanced mesh-based methods, including the adaptive finite element method, is impossible without the theoretical elaboration and practical realization of a model for the organization and functionality of the computational mesh. One of the most basic mesh functionalities is storing and providing geometrical coordinates for vertices and other mesh entities. A new algorithm for this task, based on on-the-fly recreation of coordinates, was developed. The conducted tests prove that, for selected cases, it can be orders of magnitude faster than the naive approach or other similar algorithms.
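    The idea of recreating coordinates on the fly instead of storing them can be illustrated for the simplest case, a uniform structured grid. The paper's algorithm is more general; the index layout and spacings here are assumptions.

```python
def vertex_coords(index, nx, ny, dx=1.0, dy=1.0, dz=1.0):
    """Sketch of on-the-fly coordinate recreation: for a structured grid
    with nx x ny vertices per layer and uniform spacing, the three
    coordinates of any vertex are a pure function of its index, so no
    per-vertex storage is needed. Illustrative only."""
    i = index % nx            # column within a row
    j = (index // nx) % ny    # row within a layer
    k = index // (nx * ny)    # layer
    return (i * dx, j * dy, k * dz)
```

    Trading three stored floats per vertex for a few integer operations is what makes such a scheme cache-friendly and, for selected cases, faster than a lookup.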

    Stacked Denoising Autoencoder Based Parkinson’s Disease Classification using Improved Pigeon-Inspired Optimization Algorithm

    One of the most common neurological conditions caused by gradual brain degeneration is Parkinson's disease (PD). Although this condition has no known cure, early detection and therapy can help patients improve their quality of life. Medical images, used to control, manage, and treat diseases, are an essential part of a patient's health record. However, in computer-based diagnostics, disease classification is a difficult task. To address this problem, this paper introduces a stacked denoising autoencoder (SDA) for Parkinson's disease classification. The main aim is to derive an optimal feature selection design for effective PD classification. An Improved Pigeon-Inspired Optimization (IPIO) algorithm is introduced to enhance the performance of the classifier. The classification result is thus improved by the optimal features, which also increase the sensitivity, accuracy, and specificity of the medical image diagnosis. The proposed scheme is implemented in Python and compared with traditional feature selection models and other classification approaches. The experimental outcomes show that the proposed method yields a superior classification of PD compared with the current state-of-the-art methods.
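    A single denoising-autoencoder layer, the building block a stacked model like the one above is assembled from, can be sketched in plain NumPy. Layer sizes, the corruption level, tied weights, and the learning rate are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """One denoising-autoencoder layer: corrupt the input, encode it,
    and train the weights to reconstruct the clean input (squared loss,
    tied encoder/decoder weights). Hyperparameters are illustrative."""
    def __init__(self, n_in, n_hidden, lr=0.1, noise=0.3):
        self.W = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.lr, self.noise = lr, noise

    def step(self, x):
        # Corrupt the input by randomly zeroing features
        x_noisy = x * (rng.random(x.shape) > self.noise)
        h = sigmoid(x_noisy @ self.W + self.b)        # encode
        x_hat = sigmoid(h @ self.W.T + self.c)        # decode (tied W)
        err = x_hat - x                               # reconstruction error
        d_out = err * x_hat * (1 - x_hat)             # decoder delta
        d_hid = (d_out @ self.W) * h * (1 - h)        # encoder delta
        # Gradient of W collects both the encoder and decoder terms
        self.W -= self.lr * (np.outer(x_noisy, d_hid) + np.outer(d_out, h))
        self.c -= self.lr * d_out
        self.b -= self.lr * d_hid
        return 0.5 * np.sum(err ** 2)

x = np.linspace(0.1, 0.9, 16)                # toy input in (0, 1)
da = DenoisingAutoencoder(n_in=16, n_hidden=8)
losses = [da.step(x) for _ in range(100)]
```

    In a stacked model, each trained layer's hidden code becomes the input of the next layer; the IPIO feature selection operates upstream of such a classifier.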

    Hybrid implementation of the fastICA algorithm for high-density EEG using the capabilities of the Intel architecture and CUDA programming

    High-density electroencephalographic (EEG) systems are used in the study of the human brain and its underlying behaviors. However, working with EEG data requires a well-cleaned signal, which is often achieved through independent component analysis (ICA) methods. The calculation time of such algorithms grows with the amount of data. This article presents a hybrid implementation of the fastICA algorithm that uses parallel programming techniques (libraries and extensions of Intel processors and CUDA programming), resulting in a significant acceleration of execution time on selected architectures.
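    The computational kernel such an implementation parallelizes is the fastICA fixed-point iteration. A one-unit NumPy sketch, using the common tanh nonlinearity (an assumption about the authors' choice), looks like this:

```python
import numpy as np

def fastica_one_unit(X, max_iter=200, tol=1e-6, seed=0):
    """Sketch of the one-unit fastICA fixed-point iteration.
    X (channels x samples) must already be centered and whitened.
    Update rule: w+ = E[X g(w.X)] - E[g'(w.X)] w, with g = tanh."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wx = w @ X                                   # project all samples
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:          # converged up to sign
            return w_new
        w = w_new
    return w
```

    The per-iteration cost is dominated by the dense products over all samples, which is exactly the part that benefits from vectorized Intel extensions or CUDA kernels.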

    Efficient Selection Methods in Evolutionary Algorithms

    Evolutionary algorithms mimic elements of the theory of evolution. The survival of individuals and the possibility of producing offspring play a huge role in the process of natural evolution. This process, called natural selection, eliminates poor population members and gives good ones the possibility to develop. The evolutionary algorithm, an instance of evolution in the computer environment, also requires a selection method: a computer version of natural selection. The widely used standard selection methods applied in evolutionary algorithms are usually derived from nature and favor competition, randomness, and some kind of "fight" among individuals. But the computer environment is quite different from nature: computer populations of individuals are usually small, so they easily suffer from premature convergence to local extremes. To avoid this drawback, computer selection methods must have different features than natural selection; their randomness, fight, and competition should be controlled or influenced to operate to the desired extent. Several new methods of individual selection are proposed in this work: several kinds of mixed selection, an interval selection, and a taboo selection. The advantages of incorporating them into the evolutionary algorithm are shown on examples based on the maximum α-clique search problem and the traditional TSP, in comparison with tournament selection (traditionally considered very efficient), proportional (roulette) selection (considered ineffective), and similar classical methods.
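    The two classical baselines mentioned, tournament and proportional (roulette) selection, can be sketched as follows; the proposed mixed, interval, and taboo selections are not reproduced here.

```python
import random

def tournament_select(population, fitness, k=2, rng=random):
    """Standard tournament selection: sample k individuals uniformly at
    random and keep the fittest. Larger k means stronger selection
    pressure, hence faster (possibly premature) convergence."""
    contestants = rng.sample(range(len(population)), k)
    return population[max(contestants, key=lambda i: fitness[i])]

def roulette_select(population, fitness, rng=random):
    """Fitness-proportional (roulette) selection: each individual is
    chosen with probability proportional to its fitness (assumed
    positive). Spin the wheel, then walk the cumulative sums."""
    total = sum(fitness)
    pick, acc = rng.uniform(0, total), 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]   # guard against floating-point round-off
```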

    Using Deep Neural Networks to Improve the Precision of Fast-Sampled Particle Timing Detectors

    Measurements from particle timing detectors are often affected by the time-walk effect caused by statistical fluctuations in the charge deposited by passing particles. The constant fraction discriminator (CFD) algorithm is frequently used to mitigate this effect, both in test setups and in running experiments such as the CMS-PPS system at CERN's LHC. The CFD is simple and effective but does not leverage all voltage samples in a time series. Its performance could be enhanced with deep neural networks, which are commonly used for time series analysis, including computing particle arrival times. We evaluated various neural network architectures using data acquired at the test beam facility of the DESY-II synchrotron, where a precise MCP (MicroChannel Plate) detector was installed in addition to the PPS diamond timing detectors. The MCP measurements were used as a reference to train the networks and to compare the results with the standard CFD method. Ultimately, we improved the timing precision by 8% to 23%, depending on the detector's readout channel. The best results were obtained with a UNet-based model, which outperformed classical convolutional networks and the multilayer perceptron.
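    The CFD baseline can be sketched as follows. This is a simplified software variant that thresholds at a fixed fraction of the measured peak and interpolates linearly between samples; the fraction value is illustrative, and real detectors tune it per channel.

```python
import numpy as np

def cfd_arrival_time(samples, dt, fraction=0.3):
    """Sketch of a constant fraction discriminator on a sampled pulse:
    find where the rising edge crosses `fraction` of the peak amplitude,
    using linear interpolation between the two bracketing samples.
    Because the threshold scales with the pulse height, the crossing
    time is insensitive to amplitude (time-walk) to first order."""
    samples = np.asarray(samples, dtype=float)
    peak = samples.argmax()
    threshold = fraction * samples[peak]
    # Last sample on the rising edge that is still below the threshold
    below = np.nonzero(samples[:peak + 1] < threshold)[0]
    i = below[-1]
    t = i + (threshold - samples[i]) / (samples[i + 1] - samples[i])
    return t * dt

# Triangular toy pulse sampled at unit intervals
t_arrival = cfd_arrival_time([0, 1, 2, 3, 4, 3, 2, 1, 0], dt=1.0, fraction=0.5)
```

    A neural network, by contrast, can use every voltage sample of the waveform rather than just the two bracketing the crossing, which is the headroom the article exploits.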


    AGH (Akademia Górniczo-Hutnicza) University of Science and Technology: Journals is based in Poland