11 research outputs found

    A Study on Security Mechanism of Civil Air Defense and Disaster Warning Control System based on CDMA Wireless Access

    Due to the use of wireless transmission and open networks, mobile communications face enormous security threats. This study focuses on the security mechanisms of a civil air defense and disaster warning control system based on CDMA wireless access. The working principles and processes of authentication and data encryption are presented in detail. We further propose and develop a novel hybrid cryptosystem combining AES and ECC for this control system, in order to achieve both the convenience of a public-key cryptosystem and the efficiency of a symmetric-key cryptosystem. Providing high security, encryption efficiency, and simple key management, the proposed cryptographic approach can meet the security and real-time requirements of data transmission in the wireless access control system.
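The hybrid scheme described above can be sketched in miniature: a public-key exchange establishes a shared session key, which then drives a fast symmetric cipher for the bulk alert data. In this illustrative Python sketch, plain Diffie-Hellman over integers stands in for ECC and a SHA-256 keystream stands in for AES; the parameters and names are assumptions for illustration only, and the construction is not secure for real use.

```python
import hashlib
import secrets

# Toy stand-ins (assumptions, not the paper's primitives): classic
# Diffie-Hellman over integers replaces ECC key agreement, and a
# SHA-256-based keystream replaces AES. Illustrative only, not secure.
P = 2**127 - 1   # small Mersenne prime; a real system would use an ECC curve
G = 3            # public generator

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, peer_pub):
    # Both sides derive the same symmetric session key from the DH secret.
    secret = pow(peer_pub, priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def stream_xor(key, data):
    # Hash-counter keystream XORed with the data (encrypts and decrypts).
    out = bytearray()
    for i, block in enumerate(data[j:j + 32] for j in range(0, len(data), 32)):
        pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(block, pad))
    return bytes(out)

# Warning centre and terminal each generate a keypair and exchange publics.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
k1 = shared_key(a_priv, b_pub)
k2 = shared_key(b_priv, a_pub)
assert k1 == k2  # both ends now hold the same symmetric session key

alert = b"AIR DEFENSE TEST ALERT"
ciphertext = stream_xor(k1, alert)
assert stream_xor(k2, ciphertext) == alert  # symmetric decryption round-trips
```

The point of the hybrid structure is that the slow public-key step runs once per session, while every subsequent alert message uses only the fast symmetric cipher.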

    E-INVIGILATION OF E-ASSESSMENTS

    E-learning, and particularly distance-based learning, is becoming an increasingly important mechanism for education. A leading Virtual Learning Environment (VLE) reports a user base of 70 million students and 1.2 million teachers across 7.5 million courses. Whilst e-learning has introduced flexibility and remote/distance-based learning, some aspects of course delivery still rely upon traditional approaches. The most significant of these is examinations. The inability to provide invigilation remotely has restricted the types of assessment offered, with exams and in-class tests proving difficult to validate. Students are still required to attend physical testing centres to ensure strict examination conditions are applied. Whilst research has begun to propose solutions in this respect, they fundamentally fail to provide the integrity required. This thesis seeks to research and develop an e-invigilator that provides continuous and transparent invigilation of the individual undertaking an electronic exam or test. Analysis of existing e-invigilation solutions has shown that the suggested approaches to minimising cheating behaviour during online tests vary widely; they suffer from a wide range of weaknesses and lack an implementation achieving continuous and transparent authentication with appropriate security restrictions. To this end, the most transparent biometric approaches are identified for incorporation into an appropriate solution whilst maintaining security beyond the point of entry. Given the existing issues of intrusiveness and point-of-entry user authentication, a complete architecture has been developed that maintains student convenience while providing effective identity verification throughout the test, rather than merely at the beginning.
It also provides continuous system-level monitoring to prevent cheating, as well as a variety of management-level functionalities for creating and managing assessments, including a prioritised and usable interface that enables academics to quickly verify and check cases of possible cheating. The research includes a detailed discussion of the architecture requirements, components, and complete design at the core of the system, which captures, processes, and monitors students in a completely controlled e-test environment. To highlight the ease of use and lightweight nature of the system, a prototype was developed, employing student face recognition as the most transparent multimodal (2D and 3D) biometric, together with novel security features based on eye tracking, head movements, speech recognition, and multiple-face detection, to enable a robust and flexible e-invigilation approach. An experiment (Experiment 1) was conducted with the developed prototype involving 51 participants, focusing mainly on the usability of the system under normal use. The FRR of the 51 legitimate participants was 0 for every participant in the 2D mode; in the 3D mode it was 0 for 45 of them and below 0.096 for the remaining six. Consequently, across all 51 participants the average FRR was 0 in 2D facial recognition mode and 0.048 in 3D mode. Furthermore, to evaluate the robustness of the approach against targeted misuse, three participants were tasked with a series of scenarios mapping to typical misuse (Experiment 2). The FAR was 0.038 in the 2D mode and 0 in the 3D mode. The results of both experiments support the feasibility, security, and applicability of the suggested system.
Finally, a series of scenario-based evaluations, involving three separate stakeholder groups, namely experts, academics (qualitative surveys), and students (a quantitative and qualitative survey), has also been used to provide a comprehensive evaluation of the effectiveness of the proposed approach. The vast majority of the interview/feedback outcomes can be considered positive, constructive, and valuable. The respondents agree with the idea of continuous and transparent authentication in e-assessments, as it is vital for ensuring solid and convenient security beyond the point of entry. The outcomes also support the feasibility and practicality of the approach, as well as the efficiency of the system management via well-designed and smart interfaces. The Higher Committee for Education Development in Iraq (HCED).
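The error rates reported above follow the standard biometric definitions: the false rejection rate (FRR) is the fraction of genuine attempts wrongly rejected, and the false acceptance rate (FAR) is the fraction of impostor attempts wrongly accepted. A minimal sketch, with illustrative counts that are not the study's data:

```python
# FRR: genuine users wrongly rejected; FAR: impostors wrongly accepted.
# The counts below are illustrative, not the thesis's experimental data.

def frr(genuine_rejected, genuine_total):
    return genuine_rejected / genuine_total

def far(impostor_accepted, impostor_total):
    return impostor_accepted / impostor_total

# e.g. 2 of 50 genuine face samples rejected, 1 of 80 impostor samples accepted
assert frr(2, 50) == 0.04
assert far(1, 80) == 0.0125
```

Lowering one rate typically raises the other, which is why the thesis reports both per recognition mode.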

    Numerical and Evolutionary Optimization 2020

    This book emerged from the 8th International Workshop on Numerical and Evolutionary Optimization (NEO) and collects papers at the intersection of the two research areas covered by the workshop: numerical optimization and evolutionary search techniques. While focused on the design of fast and reliable methods lying across these two paradigms, the resulting techniques are applicable to a broad class of real-world problems, such as pattern recognition, routing, energy, production lines, prediction, and modeling, among others. This volume is intended to serve as a useful reference for mathematicians, engineers, and computer scientists exploring current issues and solutions emerging from these mathematical and computational methods and their applications.

    Computational Intelligence in Healthcare

    This book is a printed edition of the Special Issue Computational Intelligence in Healthcare that was published in Electronic

    Computational Intelligence in Healthcare

    The volume of patient health data has been estimated to have reached 2,314 exabytes by 2020. Traditional data analysis techniques are unsuitable for extracting useful information from such a vast quantity of data, so intelligent data analysis methods that combine human expertise with computational models for accurate and in-depth analysis are necessary. The technological revolution and medical advances made possible by combining vast quantities of available data, cloud computing services, and AI-based solutions can provide expert insight and analysis on a mass scale and at relatively low cost. Computational intelligence (CI) methods, such as fuzzy models, artificial neural networks, evolutionary algorithms, and probabilistic methods, have recently emerged as promising tools for the development and application of intelligent systems in healthcare practice. CI-based systems can learn from data and evolve according to changes in the environment, taking into account the uncertainty characterising health data, including omics, clinical, sensor, and imaging data. The use of CI in healthcare can improve the processing of such data to develop intelligent solutions for prevention, diagnosis, treatment, and follow-up, as well as for the analysis of administrative processes. The present Special Issue on computational intelligence for healthcare is intended to show the potential and the practical impact of CI techniques in challenging healthcare applications.
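As a minimal illustration of one of the CI methods named above, the following sketch shows a (1+1) evolution strategy adapting a single parameter by mutation and selection. The toy objective and all settings are illustrative assumptions, not an example from the book.

```python
import random

# Minimal (1+1) evolution strategy minimising a toy "model error" surface,
# sketching how an evolutionary algorithm iteratively adapts a solution.
# Objective and hyperparameters are illustrative assumptions.

def objective(x):
    # toy error surface with its minimum at x = 3
    return (x - 3.0) ** 2

random.seed(0)                              # reproducible run
x, best = 0.0, objective(0.0)
for _ in range(2000):
    candidate = x + random.gauss(0, 0.5)    # mutate the current solution
    err = objective(candidate)
    if err < best:                          # selection: keep improvements only
        x, best = candidate, err

assert best < 0.1                           # converged close to the optimum
```

Real CI pipelines in healthcare replace the toy objective with a model's error on clinical data, but the mutate-evaluate-select loop is the same.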

    Customizable Feature based Design Pattern Recognition Integrating Multiple Techniques

    Recovering design information from legacy applications is a complex, expensive, quite challenging, and time-consuming task, owing to the ever-increasing complexity of software and the advent of modern technologies. With the growing demand for maintenance of legacy systems that can cope with the latest technologies and new business requirements, the reuse of artifacts from existing legacy applications for new developments has become vital for the software industry. Because of constant evolution in the architecture of legacy systems, their documentation is often incomplete, inconsistent, and obsolete, and does not provide enough information about the structure of these systems.
Mostly, the source code is the only reliable source of information for recovering artifacts from legacy systems. Extraction of design artifacts from the source code of existing legacy systems supports program comprehension, maintenance, code refactoring, reverse engineering, redocumentation, and reengineering methodologies. The objective of the approach used in this thesis is to recover design information from legacy code, with particular focus on the recovery of design patterns. Design patterns are key artifacts for recovering design decisions from legacy source code. Patterns have been extensively tested in different applications, and reusing them yields quality software at reduced cost and development time. Different techniques, methodologies, and tools have been used in the past to recover patterns from legacy applications; each technique recovers patterns with different precision and recall rates owing to differing specifications and implementations of the same pattern. The approach used in this thesis is based on customizable and reusable feature types, which use static and dynamic parameters to define variant pattern definitions. Each feature type allows the user to switch between multiple searching techniques (SQL queries, regular expressions, and source code parsers), which are used to match features of patterns with source code artifacts. The technique focuses on detecting variants of different design patterns by using static, dynamic, and semantic analysis techniques. The integrated use of SQL queries, source code parsers, regular expressions, and annotations improves the precision and recall of pattern extraction from different legacy systems. The approach introduces new semantics for annotations in the source code of legacy applications, which reduce the search space and the time needed to detect patterns. The prototypical implementation of the approach, called UDDPRT, is used to recognize different design patterns from the source code of multiple languages (Java, C/C++, C#).
The prototype is flexible and customizable, so that even a novice user can change the SQL queries and regular expressions for detecting implementation variants of design patterns. The approach achieved significant improvements in the precision and recall of pattern extraction in experiments on a number of open source systems taken as baselines for comparison.
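One of the searching techniques named above, regular expressions, can be sketched as a feature matcher: two hypothetical regex "feature types" below flag a Singleton candidate in a Java snippet. The patterns and the source snippet are illustrative assumptions, not UDDPRT's actual query definitions.

```python
import re

# Hypothetical feature types for Singleton detection: a private constructor
# and a private static instance field. Illustrative only.
java_source = """
public class Registry {
    private static Registry instance;
    private Registry() { }
    public static Registry getInstance() {
        if (instance == null) { instance = new Registry(); }
        return instance;
    }
}
"""

# Feature 1: a private no-argument constructor (method named like the class)
private_ctor = re.compile(r"private\s+(\w+)\s*\(\s*\)")
# Feature 2: a private static field holding the single instance
static_instance = re.compile(r"private\s+static\s+(\w+)\s+instance\b")

ctor = private_ctor.search(java_source)
inst = static_instance.search(java_source)
# Both features matched and name the same class -> Singleton candidate
assert ctor and inst and ctor.group(1) == inst.group(1) == "Registry"
```

Combining several such feature matches, rather than relying on one, is what lets a feature-based approach tolerate implementation variants of the same pattern.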

    Investigations on chemometric approaches for diagnostic applications utilizing various combinations of spectral and image data types

    In the presented work, several data fusion and machine learning approaches were explored within the frame of combining data from various measurement techniques in biomedical applications. For each of the measurement techniques used in this work, the data was analyzed by means of machine learning. Prior to applying these machine learning algorithms, a specific preprocessing pipeline for each type of data had to be established. These pipelines made it possible to standardize the data and to decrease sample-to-sample variations which originate from the instability of devices or small deviations in the sample preparation or measurement routine. The preprocessed data sets were used for various analyses of biological samples. Separate data analyses were performed for microscopic images, Raman spectra, and SERS data. However, this work mainly focused on the application of data fusion methods for the analysis of biological tissues and cells. To do so, different data fusion pipelines were constructed for each task, depending on the data structure. Both low-level (centralized) and high-level (distributed) data fusion approaches were tested and investigated in this work. To demonstrate centralized and distributed data fusion, two examples were implemented for tissue investigation. In both examples, a combination of Raman spectroscopic and MALDI spectrometric data was analyzed. One example demonstrated centralized data fusion for the analysis of the chemical composition of a mouse brain section, and the other employed distributed data fusion for liver cancer detection. Other data fusion examples were demonstrated for cell-based analysis. It was demonstrated that leukocyte cell subtype identification can be improved by a centralized data fusion of Raman spectroscopic data and morphological features obtained from microscopic images of stained cells.
The last example presented in this work demonstrated a sepsis diagnostic pipeline based on the combination of Raman spectroscopic data and biomarkers. Besides the measured values, the demographic information of the patient was included in the analysis process to account for non-disease-related variations. During the construction of the data fusion pipelines, issues such as unbalanced data contribution, missing values, and variations unrelated to the investigated responses were encountered. To resolve these issues, data weighting, missing data imputation, and the introduction of additional responses were employed. For further improvement of analysis reliability, the data fusion pipelines and data processing routine were adjusted for each study in this work. As a result, the most suitable data fusion approach was found for every example, and the combination of machine learning methods with data fusion approaches was demonstrated to be a powerful tool for data analysis in biomedical applications.
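The two fusion strategies investigated in this work can be contrasted in a toy sketch: centralized (low-level) fusion concatenates feature vectors before a single classifier, while distributed (high-level) fusion classifies each modality separately and combines the decisions. The nearest-centroid classifier and the numbers below are illustrative assumptions, not data from the study.

```python
# Toy contrast of centralized vs. distributed data fusion with two
# "modalities". All values are illustrative assumptions.

def nearest_centroid(train, labels, sample):
    # Mean feature vector per class, then pick the closest class.
    best, best_d = None, float("inf")
    for c in sorted(set(labels)):
        rows = [x for x, y in zip(train, labels) if y == c]
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        d = sum((a - b) ** 2 for a, b in zip(centroid, sample))
        if d < best_d:
            best, best_d = c, d
    return best

spectra = [[1.0, 0.1], [0.9, 0.2], [0.2, 1.0], [0.1, 0.9]]  # modality 1
images  = [[0.8], [0.9], [0.1], [0.2]]                       # modality 2
labels  = ["healthy", "healthy", "tumour", "tumour"]
test_spec, test_img = [0.85, 0.15], [0.75]

# Centralized (low-level) fusion: concatenate features, classify once.
fused_train = [s + m for s, m in zip(spectra, images)]
central = nearest_centroid(fused_train, labels, test_spec + test_img)

# Distributed (high-level) fusion: classify per modality, then combine votes.
votes = [nearest_centroid(spectra, labels, test_spec),
         nearest_centroid(images, labels, test_img)]
distributed = max(set(votes), key=votes.count)

assert central == distributed == "healthy"
```

Centralized fusion lets the classifier exploit cross-modality correlations, while distributed fusion tolerates modalities with very different scales or missing blocks, which is why the work selects the approach per task.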

    XXIII Congreso Argentino de Ciencias de la Computación - CACIC 2017 : Libro de actas

    Papers presented at the XXIII Congreso Argentino de Ciencias de la Computación (CACIC), held in the city of La Plata from 9 to 13 October 2017, organized by the Red de Universidades con Carreras en Informática (RedUNCI) and the Facultad de Informática of the Universidad Nacional de La Plata (UNLP).

    XX Workshop de Investigadores en Ciencias de la Computación - WICC 2018 : Libro de actas

    Proceedings of the XX Workshop de Investigadores en Ciencias de la Computación (WICC 2018), held at the Facultad de Ciencias Exactas y Naturales y Agrimensura of the Universidad Nacional del Nordeste on 26 and 27 April 2018. Red de Universidades con Carreras en Informática (RedUNCI).