
    Cybersecurity: Past, Present and Future

    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has given rise to new technologies and environments such as cloud computing, smart devices, the IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human and AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has great potential to improve the role of AI in cybersecurity.
    Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    Machine Learning assisted Digital Twin for event identification in electrical power system

    The challenges of stable operation of the electrical power system are increasing as the grid infrastructure shifts from centralized energy supply based on fossil fuels towards sustainable energy generation. In Germany, the expansion of renewable energy sources (RES) is being pushed forward rapidly as part of the climate policy goals of CO₂ reduction and the phase-out of nuclear power. Due to their non-linear power-electronic switching, RES plants introduce harmonic oscillations into the power grid, which not only increases the complexity of the grid but also affects the stability of the system. These developments make stable operation, the reduction of outages, and the management of variations in the electrical power system more difficult. The emergence of the Digital Twin in the power system offers an opportunity to overcome these challenges. A Digital Twin is a digital information model that accurately represents the state of every asset in a physical system. It can be used not only to monitor operating states with actionable insights into physical components to drive optimized operation, but also to generate abundant data through simulations that respect the design limits of the physical system.
    This work first addresses the origin of the Digital Twin concept and how it can be utilized to optimize power grid operation. To this end, perspectives on dynamic state estimation, monitoring of the operating state, anomaly detection, and related applications of the Digital Twin in the power grid are specified. The implementation of these applications is based on lifecycle management. Within the Digital Twin lifecycle scheme, three essential procedures lead from modeling the Digital Twin to its application: a parameterization process for modeling the Digital Twin, data generation through Digital Twin simulation, and application of a machine learning algorithm for anomaly detection. The reliability of the Digital Twin parameterization and of the event identification is validated by means of numerical case studies. For this purpose, online and offline algorithms for parameterizing the Digital Twin are investigated. In this work, a CIGRÉ-based reference grid is parameterized against reference measurement data to model the Digital Twin; besides the synchronous machine, converter-based infeed, exciter, and turbine, the converter controllers are also included in the parameterization process. After validation of the Digital Twin, numerous simulations are carried out for data generation, and each event is captured as a "fingerprint" in the Digital Twin data. The machine learning algorithm is then trained on the simulated Digital Twin data, and the identification results are validated and evaluated through the case studies.
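    The event-identification workflow described above (generating labeled event "fingerprints" by Digital Twin simulation, then training a machine learning algorithm on the simulated data) could be sketched as follows. All names, feature dimensions, and event classes here are illustrative assumptions, not the thesis's actual setup, and a simple nearest-centroid classifier stands in for the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for Digital Twin simulation output: each event class
# yields a characteristic "fingerprint" feature vector (e.g. windowed
# voltage/frequency features); classes and dimensions are illustrative.
def simulate_event(event_type, n_features=16):
    offset = {"normal": 0.0, "line_fault": 1.5, "load_step": -1.5}[event_type]
    return offset + rng.normal(scale=0.5, size=n_features)

events = ["normal", "line_fault", "load_step"]

# "Training" on abundant simulated data: one fingerprint centroid per class
centroids = {e: np.mean([simulate_event(e) for _ in range(200)], axis=0)
             for e in events}

def identify(measurement):
    # Classify a new measurement by its nearest fingerprint centroid
    return min(events, key=lambda e: np.linalg.norm(measurement - centroids[e]))

print(identify(simulate_event("line_fault")))
```

    As in the thesis's lifecycle scheme, the same three steps appear: parameterized model, simulated data generation, and evaluation of the trained recognizer on fresh (here, freshly simulated) events.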

    Sensor-Based Activity Recognition and Performance Assessment in Climbing: A Review

    In the past decades, a number of technological developments have made it possible to continuously collect various types of sport activity data in an unobtrusive way. Machine learning and analytical methods have been applied to streams of sensor data to predict the sport activity being performed as well as to calculate key performance indicators. In that scenario, researchers have become interested in leveraging pervasive information technologies for sport climbing, thus allowing, in day-to-day climbing practice, the realization of systems for automatic assessment of a climber's performance, detection of injury risk factors, and virtual coaching. This article surveys recent research on the recognition of climbing activities and the evaluation of climbing performance indicators, where data have been acquired with accelerometers, cameras, force sensors, and other types of sensors. We describe the main types of sensors and equipment adopted for data acquisition, the techniques used to extract relevant features from sensor data, and the methods that have been proposed to identify the activities performed by a climber and to calculate key performance indicators. We also present a classification taxonomy of climbing activities and of climbing performance indicators, with the aim of unifying the existing work and facilitating the comparison of methods. Moreover, open problems that call for new approaches and solutions are discussed. We conclude that there is considerable scope for further work, particularly in the application of recognition techniques to problems involving various climbing activities. We hope that this survey will assist in the translation of research effort into intelligent environments that climbers will benefit from.
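    As a concrete illustration of the sliding-window pipeline common to the surveyed works (sensor stream, windowed feature extraction, activity classification), the sketch below segments a hypothetical tri-axial accelerometer stream and labels each window with a toy threshold rule. The sampling rate, window length, and threshold are assumptions for illustration only; real systems would use a trained classifier rather than a fixed rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tri-axial accelerometer stream at 50 Hz (units: g):
# 4 s of quiet "resting" followed by 4 s of vigorous "climbing" movement.
fs = 50
signal = rng.normal(scale=0.1, size=(8 * fs, 3))
signal[4 * fs:] += rng.normal(scale=0.6, size=(4 * fs, 3))

def window_features(stream, win):
    """Slide a non-overlapping window over the stream and extract simple
    statistical features per window: mean and std of acceleration magnitude."""
    mag = np.linalg.norm(stream, axis=1)
    return np.array([(mag[s:s + win].mean(), mag[s:s + win].std())
                     for s in range(0, len(mag) - win + 1, win)])

feats = window_features(signal, win=2 * fs)  # 2 s windows

# Toy threshold rule standing in for a trained classifier:
# high movement energy (std of the magnitude) suggests active climbing.
labels = ["climbing" if std > 0.3 else "resting" for _, std in feats]
print(labels)
```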

    Empowering Patient Similarity Networks through Innovative Data-Quality-Aware Federated Profiling

    Continuous monitoring of patients involves collecting and analyzing sensory data from a multitude of sources. To overcome communication overhead, ensure data privacy and security, reduce data loss, and maintain efficient resource usage, the processing and analytics are moved close to where the data are located (e.g., the edge). However, data quality (DQ) can be degraded because of imprecise or malfunctioning sensors, dynamic changes in the environment, transmission failures, or delays. It is therefore crucial to monitor data quality and detect problems as early as possible, so that they do not mislead clinical judgment and lead to the wrong course of action. In this article, a novel approach called federated data quality profiling (FDQP) is proposed to assess the quality of data at the edge. FDQP is inspired by federated learning (FL) and serves as a condensed document, or guide, for node data quality assurance. A formal FDQP model is developed to capture the quality dimensions specified in the data quality profile (DQP). The proposed approach uses federated feature selection to improve classifier precision and to rank features based on criteria such as feature value, outlier percentage, and missing-data percentage. Extensive experiments were conducted using a fetal dataset split across different edge nodes, with a set of carefully chosen scenarios to evaluate the proposed FDQP model. The results demonstrate that FDQP improves data quality and, in turn, the accuracy of the federated patient similarity network (FPSN)-based machine learning models.
    Our profiling algorithm uses lightweight profile exchange instead of full data processing at the edge, which improves efficiency while achieving good data quality. Overall, FDQP is an effective method for assessing data quality in the edge computing environment, and we believe that the proposed approach can be applied to scenarios beyond patient monitoring.
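    A minimal sketch of the profiling idea, assuming per-feature missing-data and outlier ratios as the quality dimensions (the paper's actual DQP dimensions and ranking criteria are richer): each edge node computes a compact profile of its local data and shares only that profile for federation, not the raw readings. All data and thresholds below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sensor readings at one edge node (rows: samples, cols: features);
# NaN marks missing readings, and a few gross outliers are injected.
data = rng.normal(loc=70.0, scale=5.0, size=(500, 3))
data[rng.random(data.shape) < 0.05] = np.nan
data[rng.random(data.shape) < 0.02] *= 3

def quality_profile(X, z_thresh=3.0):
    """Build a lightweight per-feature data quality profile:
    missing-data ratio and outlier ratio (|z-score| > z_thresh)."""
    profile = []
    for j in range(X.shape[1]):
        col = X[:, j]
        vals = col[~np.isnan(col)]
        z = (vals - vals.mean()) / vals.std()
        profile.append({"feature": j,
                        "missing": float(np.isnan(col).mean()),
                        "outliers": float((np.abs(z) > z_thresh).mean())})
    return profile

# Only this compact profile is exchanged between nodes, not the raw data;
# features are ranked by their combined quality issues.
profile = quality_profile(data)
ranked = sorted(profile, key=lambda p: p["missing"] + p["outliers"])
for p in ranked:
    print(f"feature {p['feature']}: missing={p['missing']:.1%}, "
          f"outliers={p['outliers']:.1%}")
```

    Exchanging a few ratios per feature instead of thousands of raw samples is what keeps the federation step lightweight, mirroring the profile-exchange design described above.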