Information actors beyond modernity and coloniality in times of climate change: A comparative design ethnography on the making of monitors for sustainable futures in Curaçao and Amsterdam, between 2019-2022
In his dissertation, Mr. Goilo develops a theoretical framework for an Anthropology of Information. The study compares information in the context of modernity in Amsterdam and of coloniality in Curaçao through the process of making monitors, and develops five ways of understanding how information can act towards sustainable futures. It also argues that the two contexts, modernity and coloniality, have been in informational symbiosis for centuries, a symbiosis that now produces negative informational side effects in the age of the Anthropocene. By exploring this modernity-coloniality symbiosis of information, the author shows how scholars, policymakers, and data analysts can address the historical and structural roots of contemporary global inequities in the production and distribution of information. Ultimately, the five theses propose conditions for the collective production of knowledge towards a more sustainable planet.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments
In industrial production environments, equipment maintenance has a decisive influence on costs and on the plannability of production capacities. Unplanned failures during production times in particular cause high costs, unplanned downtimes, and possibly additional collateral damage. Predictive Maintenance addresses this problem: it tries to predict a possible failure and its cause early enough that its prevention can be prepared and carried out in time. In order to predict malfunctions and failures, the industrial plant, with its characteristics as well as its wear and ageing processes, must be modelled. Such modelling can be done by replicating the plant's physical properties. However, this is very complex and requires extensive expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models from data and offer an alternative, especially when the behaviour is highly complex and non-linear.
In order for models to make predictions, as much data as possible is needed about the condition of a plant, its environment, and its production planning. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing. Intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which data is transmitted, place high demands on the data processing systems. If a participating system wants to perform live analyses on the incoming data streams, it must be able to process the incoming data at least as fast as the continuous data stream delivers it. If this is not the case, the system falls further and further behind in processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially if they use complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case, or if the processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become an important criterion.
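The keep-up requirement described above can be illustrated with a small simulation (an illustrative sketch, not part of the thesis): with a single consumer processing items first-come first-served, a backlog grows with the length of the stream as soon as the per-item processing time exceeds the stream's inter-arrival interval.

```python
def backlog_after(n_items: int, arrival_interval: float, processing_time: float) -> int:
    """Count stream items still unprocessed at the moment the last item arrives.

    Hypothetical single-consumer model: items arrive at fixed intervals and
    are processed one at a time, first-come first-served.
    """
    free_at = 0.0          # time at which the processor becomes free again
    finish_times = []
    for i in range(n_items):
        arrival = i * arrival_interval
        start = max(free_at, arrival)   # wait for the item and for the processor
        free_at = start + processing_time
        finish_times.append(free_at)
    last_arrival = (n_items - 1) * arrival_interval
    return sum(f > last_arrival for f in finish_times)
```

With processing twice as slow as the arrival rate, roughly half of a 100-item stream is still queued when the last item arrives; with processing twice as fast, only the item currently in flight remains.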
This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where the runtime behaviour and the resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, the two hypotheses presented in this thesis emerged: a) the distribution of complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) a deeper internal structure of a neural cell leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout named Sliced Long Short-Term Memory Neural Network (SlicedLSTM) was developed. The SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture.
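The first hypothesis, distributing complexity into simpler subnetworks, can be sketched as follows (a minimal NumPy illustration with plain tanh recurrent cells and random weights, not the actual SlicedLSTM architecture): the feature dimension of a multivariate time series is split into slices, each slice is processed by a small independent recurrent subnetwork (which could run in parallel), and the resulting hidden states are concatenated.

```python
import numpy as np

def simple_rnn(seq, w_x, w_h):
    """Minimal tanh RNN over a (time, features) sequence; returns the final hidden state."""
    h = np.zeros(w_h.shape[0])
    for x in seq:
        h = np.tanh(x @ w_x + h @ w_h)
    return h

def sliced_forward(seq, feature_slices, rng, hidden=4):
    """Run one small, independent recurrent subnetwork per feature slice
    (these could execute in parallel) and concatenate the final states.

    Weights are random here purely for illustration; a real model would
    train them, and the thesis's SlicedLSTM uses LSTM cells, not this
    simple tanh cell."""
    outputs = []
    for idx in feature_slices:
        sub = seq[:, idx]                                  # (time, slice_features)
        w_x = rng.standard_normal((sub.shape[1], hidden)) * 0.1
        w_h = rng.standard_normal((hidden, hidden)) * 0.1
        outputs.append(simple_rnn(sub, w_x, w_h))
    return np.concatenate(outputs)
```

Each subnetwork only sees a fraction of the input features and carries a small hidden state, which is the sense in which the per-unit complexity is reduced.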
Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in the form of laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of modules of aircraft gas turbines. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data.
As a result, it could be shown for the specific application and the data used that the SlicedLSTM delivers faster processing times with similar result accuracy, and thus clearly outperforms the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
Evaluation Methodologies in Software Protection Research
Man-at-the-end (MATE) attackers have full control over the system on which
the attacked software runs, and try to break the confidentiality or integrity
of assets embedded in the software. Both companies and malware authors want to
prevent such attacks. This has driven an arms race between attackers and
defenders, resulting in a plethora of different protection and analysis
methods. However, it remains difficult to measure the strength of protections
because MATE attackers can reach their goals in many different ways and a
universally accepted evaluation methodology does not exist. This survey
systematically reviews the evaluation methodologies of papers on obfuscation, a
major class of protections against MATE attacks. For 572 papers, we collected
113 aspects of their evaluation methodologies, ranging from sample set types
and sizes, over sample treatment, to performed measurements. We provide
detailed insights into how the academic state of the art evaluates both the
protections and analyses thereon. In summary, there is a clear need for better
evaluation methodologies. We identify nine challenges for software protection
evaluations, which represent threats to the validity, reproducibility, and
interpretation of research results in the context of MATE attacks.
Automated Mapping of Adaptive App GUIs from Phones to TVs
With the increasing interconnection of smart devices, users often desire to
adopt the same app on quite different devices for identical tasks, such as
watching the same movies on both their smartphones and TV.
However, the significant differences in screen size, aspect ratio, and
interaction styles make it challenging to adapt Graphical User Interfaces
(GUIs) across these devices.
Although there are millions of apps available on Google Play, only a few
thousand are designed to support smart TV displays.
Existing techniques to map a mobile app GUI to a TV either adopt a responsive
design, which struggles to bridge the substantial gap between phone and TV, or
use mirror apps for improved video display, which require hardware support and
extra engineering effort.
Instead of developing another app for supporting TVs, we propose a
semi-automated approach to generate corresponding adaptive TV GUIs, given the
phone GUIs as the input.
Based on our empirical study of GUI pairs for TV and phone in existing apps,
we synthesize a list of rules for grouping and classifying phone GUIs,
converting them to TV GUIs, and generating dynamic TV layouts and source code
for the TV display.
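As a toy illustration of such conversion rules (hypothetical dimensions and field names; the actual rule set also covers grouping, classification, and source-code generation), one basic rule, remapping widget bounds from a portrait phone screen to a landscape TV screen, might look like:

```python
# Assumed, illustrative screen dimensions in pixels.
PHONE_W, PHONE_H = 1080, 1920   # portrait phone
TV_W, TV_H = 1920, 1080         # landscape TV

def adapt_widget(widget):
    """Scale a phone widget's bounds into TV coordinates.

    widget: dict with x, y, w, h in phone pixels (hypothetical schema).
    A real tool would also regroup widgets and emit layout source code;
    this sketch only shows the coordinate-remapping rule."""
    sx, sy = TV_W / PHONE_W, TV_H / PHONE_H
    return {
        "x": round(widget["x"] * sx),
        "y": round(widget["y"] * sy),
        "w": round(widget["w"] * sx),
        "h": round(widget["h"] * sy),
    }
```

A full-screen phone widget maps to a full-screen TV widget; partial widgets are stretched horizontally and compressed vertically, which is why the real rules also regroup content rather than scaling alone.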
Our tool is not only beneficial to developers but also to GUI designers, who
can further customize the generated GUIs for their TV app development.
An evaluation and user study demonstrate the accuracy of our generated GUIs
and the usefulness of our tool.
Comment: 30 pages, 15 figures
A Closer Look into Recent Video-based Learning Research: A Comprehensive Review of Video Characteristics, Tools, Technologies, and Learning Effectiveness
People increasingly use videos on the Web as a source for learning. To
support this way of learning, researchers and developers are continuously
developing tools, proposing guidelines, analyzing data, and conducting
experiments. However, it is still not clear what characteristics a video should
have to be an effective learning medium. In this paper, we present a
comprehensive review of 257 articles on video-based learning for the period
from 2016 to 2021. One of the aims of the review is to identify the video
characteristics that have been explored by previous work. Based on our
analysis, we suggest a taxonomy which organizes the video characteristics and
contextual aspects into eight categories: (1) audio features, (2) visual
features, (3) textual features, (4) instructor behavior, (5) learner
activities, (6) interactive features (quizzes, etc.), (7) production style, and
(8) instructional design. Also, we identify four representative research
directions: (1) proposals of tools to support video-based learning, (2) studies
with controlled experiments, (3) data analysis studies, and (4) proposals of
design guidelines for learning videos. We find that the most explored
characteristics are textual features followed by visual features, learner
activities, and interactive features. Text of transcripts, video frames, and
images (figures and illustrations) are most frequently used by tools that
support learning through videos. The learner activity is heavily explored
through log files in data analysis studies, and interactive features have been
frequently scrutinized in controlled experiments. We complement our review by
contrasting research findings that investigate the impact of video
characteristics on the learning effectiveness, report on tasks and technologies
used to develop tools that support learning, and summarize trends of design
guidelines for producing learning videos.
Chatbots for Modelling, Modelling of Chatbots
Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 28-03-202
Development and validation of innovative sequencing tools for the fast and efficient detection of plant viruses
Plant viruses are a major cause of crop losses and decreased agricultural productivity worldwide. Rapid and accurate detection of plant viruses is essential for the implementation of effective control measures. Traditional methods of plant virus detection, such as serological and molecular assays, often have very good performance criteria, but they are targeted: they do not detect new viruses or divergent strains of known viruses.
Overall, developing and validating innovative sequencing tools for the fast and efficient detection of plant viruses has therefore gained considerable momentum. Indeed, high-throughput sequencing (HTS) tests followed by bioinformatic analyses can detect several viruses at once (including novel ones) and then characterise their genomes. This very high inclusivity allows better monitoring of agricultural pest presence than traditional methods. In addition, the sensitivity of HTS viral detection is theoretically higher than that of molecular and serological tests, meaning that low-level infections can be traced more efficiently.
HTS tests nevertheless have several drawbacks: their price, their high technical requirements, and the cross-contamination of sequences between samples. The cost of viral detection by sequencing is higher than that of traditional methods, but the gap is shrinking over time as HTS becomes more and more affordable. More technical skills are required to sequence and analyse a sample for virus detection, but the laboratory and bioinformatic protocols are becoming simpler and easier to learn and apply. Cross-contamination between samples is a recurrent phenomenon that challenges the operational activities of laboratories aiming to detect plant pests. The high sensitivity of HTS has a drawback here, as it makes cross-contamination an even more pressing issue than with traditional methods.
Cross-contamination is probably one of the main issues when using HTS for viral detection. Indeed, if an unexpected transfer of genetic material happens between two samples in the laboratory, a virus from one sample can be sequenced in the other. Since sequencing sensitivity is high, HTS is more prone to detect this cross-contaminating virus. That may lead to a false-positive virus detection (the virus really is present in the bioinformatic data) even though it was not present in the plant. The specificities of HTS technologies (high sensitivity and high inclusivity, but with complex laboratory and bioinformatics steps) make their validation difficult compared to traditional tests. Therefore, this thesis describes a side-by-side comparison between traditional tests and HTS technologies for virus indexing of a Musa germplasm collection. In addition, an alien control (a specific type of external control) has been used for the first time to
monitor cross-contamination in HTS. Furthermore, a newly described alien-based filter algorithm, called Cont-ID, has been developed and applied to find the most appropriate limit of detection for accurate virus detection, taking into account the risk of false negatives and false positives. That way, the confidence in the detection prediction can be high enough to be considered for use in plant virus diagnosis.
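The idea behind an alien-based contamination filter can be sketched as follows (a simplified illustration, not the actual Cont-ID algorithm): the alien control, spiked into one sample, should appear in other samples only through cross-contamination, so its observed leak rate gives a baseline below which a virus detection in a sample is plausibly a contaminant rather than a true infection.

```python
def is_likely_contamination(virus_reads: int, sample_total: int,
                            alien_leak_rate: float, factor: float = 1.0) -> bool:
    """Flag a virus detection as probable cross-contamination.

    Simplified, illustrative rule (not the actual Cont-ID algorithm):
    compare the virus's relative abundance in the sample against the
    leak rate observed for the alien control, scaled by a safety factor.

    virus_reads:      reads mapped to the virus in the tested sample
    sample_total:     total reads in the tested sample
    alien_leak_rate:  fraction of alien-control reads observed in samples
                      where the alien was never spiked
    """
    relative_abundance = virus_reads / sample_total
    return relative_abundance <= factor * alien_leak_rate
```

A detection well above the alien baseline passes the filter; one at or below it is flagged, trading some false negatives at very low titres for fewer false positives.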
As written above, HTS technologies can also characterise the genome of the detected viruses. Through variant analysis, the different virus variants can be highlighted. A performance test was conducted to better understand the difficulties and thereby improve the characterisation of variants.
This thesis has therefore addressed several drawbacks potentially limiting the use of HTS technologies for plant virus detection and genome characterisation. It has delivered several milestones contributing to the wider and more reliable application of these technologies for plant virus detection. Overall, it has reinforced their high potential for improving the control and management of plant virus diseases.
Transition 2.0: Re-establishing Constitutional Democracy in EU Member States
The central question of Transition 2.0 is this: what (and how) may a new government do to re-establish constitutional democracy, as well as repair membership within the European Union, without breaching the European rule of law? This volume demonstrates that EU law and international commitments impose constraints but also offer tools and assistance for facilitating the way back after rule-of-law and democratic backsliding. The various contributions explore the constitutional, legal, and social framework of 'Transition 2.0'.
Cybersecurity: Past, Present and Future
The digital transformation has created a new digital space known as
cyberspace. This new cyberspace has improved the workings of businesses,
organizations, governments, society as a whole, and day to day life of an
individual. With these improvements come new challenges, and one of the main
challenges is security. The security of the new cyberspace is called
cybersecurity. Cyberspace has created new technologies and environments such as
cloud computing, smart devices, IoTs, and several others. To keep pace with
these advancements in cyber technologies there is a need to expand research and
develop new cybersecurity methods and tools to secure these domains and
environments. This book is an effort to introduce the reader to the field of
cybersecurity, highlight current issues and challenges, and provide future
directions to mitigate or resolve them. The main specializations of
cybersecurity covered in this book are software security, hardware security,
the evolution of malware, biometrics, cyber intelligence, and cyber forensics.
We must learn from the past, evolve our present and improve the future. Based
on this objective, the book covers the past, present, and future of these main
specializations of cybersecurity. The book also examines the upcoming areas of
research in cyber intelligence, such as hybrid augmented and explainable
artificial intelligence (AI). Human and AI collaboration can significantly
increase the performance of a cybersecurity system. Interpreting and explaining
machine learning models, i.e., explainable AI, is an emerging field of study and
has a lot of potential to improve the role of AI in cybersecurity.
Comment: Author's copy of the book published under ISBN: 978-620-4-74421-