The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, allowing
users, scholars, and entrepreneurs to gain an in-depth understanding of the
Metaverse ecosystem and identify their opportunities for contribution.
The Viability and Potential Consequences of IoT-Based Ransomware
With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested.
As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, so that a ransom note could be delivered reliably. Next, the viability of using "bricking" as a method of ransom was evaluated, such that devices could be remotely disabled unless the victim made a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed.
For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion based ransomware, several methods were created to extract information from data sources that can be commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim.
Overall, IoT-based ransomware has been shown to be not only viable but also highly damaging to both IoT devices and their users. While IoT-based ransomware is still very uncommon "in the wild", the techniques demonstrated in this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attacking techniques discovered in this PhD research.
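One countermeasure direction hinted at above can be sketched concretely: detecting the kind of malicious filesystem modifications the persistence techniques rely on, by comparing file hashes against a known-good manifest. The manifest format and the file contents below are illustrative assumptions, not artifacts from the thesis.

```python
# Sketch of a filesystem-integrity countermeasure against persistence
# techniques: compare file hashes to a known-good manifest.
# The manifest format and file contents are illustrative assumptions.
import hashlib
import os
import tempfile

def file_sha256(path):
    """Hash a file in chunks so large firmware images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_tampering(manifest):
    """manifest maps path -> expected sha256; return every path that is
    missing or whose current contents no longer match."""
    return [path for path, expected in manifest.items()
            if not os.path.exists(path) or file_sha256(path) != expected]

# Demo on a temporary file standing in for a protected config file.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"original firmware settings")
    protected = tf.name
manifest = {protected: file_sha256(protected)}
print(detect_tampering(manifest))   # []: untouched file passes the check
with open(protected, "wb") as f:    # simulate a malicious modification
    f.write(b"attacker persistence hook")
print(detect_tampering(manifest))   # the tampered path is flagged
```

A real deployment would store the manifest in write-protected storage, since an attacker with filesystem access could otherwise rewrite it.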
Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules
We target the problem of automatically synthesizing proofs of semantic
equivalence between two programs made of sequences of statements. We represent
programs using abstract syntax trees (AST), where a given set of
semantics-preserving rewrite rules can be applied on a specific AST pattern to
generate a transformed and semantically equivalent program. In our system, two
programs are equivalent if there exists a sequence of applications of these
rewrite rules that rewrites one program into the other. We propose a
neural network architecture based on a transformer model to generate proofs of
equivalence between program pairs. The system outputs a sequence of rewrites,
and the validity of the sequence is checked simply by verifying that it can be
applied. If the neural network produces no valid sequence, the system reports
the programs as non-equivalent, ensuring by design that no programs are
incorrectly reported as equivalent. Our system is fully implemented for a given
grammar which can represent straight-line programs with function calls and
multiple types. To efficiently train the system to generate such sequences, we
develop an original incremental training technique, named self-supervised
sample selection. We extensively study the effectiveness of this novel training
approach on proofs of increasing complexity and length. Our system, S4Eq,
achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent
programs. Comment: 30 pages including appendix.
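The by-design soundness described above can be illustrated with a minimal, hypothetical proof checker: a proposed rewrite sequence is accepted only if every rule applies in order and the result equals the target program. The tuple-based ASTs and the "commute"/"fold" rules are toy inventions, far simpler than the actual S4Eq grammar.

```python
# Toy proof checker: tuple ASTs and "commute"/"fold" rules are invented
# here and are far simpler than S4Eq's grammar and rewrite rules.

def rule_commute(ast):
    """a + b -> b + a (applies only at the root of an addition)."""
    if isinstance(ast, tuple) and ast[0] == "add":
        return ("add", ast[2], ast[1])
    return None

def rule_fold(ast):
    """Fold a constant addition: ("add", 1, 2) -> 3."""
    if (isinstance(ast, tuple) and ast[0] == "add"
            and isinstance(ast[1], int) and isinstance(ast[2], int)):
        return ast[1] + ast[2]
    return None

RULES = {"commute": rule_commute, "fold": rule_fold}

def check_sequence(src, dst, seq):
    """Mirror the by-design soundness: an unverifiable sequence is
    rejected, so no programs can be wrongly reported as equivalent."""
    ast = src
    for name in seq:
        ast = RULES[name](ast)
        if ast is None:      # rule did not apply: invalid proof
            return False
    return ast == dst

print(check_sequence(("add", "x", "y"), ("add", "y", "x"), ["commute"]))  # True
print(check_sequence(("add", 1, 2), 3, ["fold"]))                         # True
print(check_sequence(("add", "x", "y"), 3, ["fold"]))                     # False
```

Note that the checker never has to trust the neural network: a wrong sequence simply fails to apply or fails the final comparison.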
Corporate Social Responsibility: the institutionalization of ESG
Understanding the impact of Corporate Social Responsibility (CSR) on firm performance, as it relates to industries reliant on technological innovation, is a complex and perpetually evolving challenge. To investigate this topic thoroughly, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand as an essentially independent empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis explores how the evolution of CSR into its modern, quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social-impact mitigation. Overall, this work contributes to the literature by filling gaps in our understanding of the nature of the impact that ESG has on firm performance, particularly from a management perspective.
Countermeasures for the majority attack in blockchain distributed systems
Blockchain technology is considered one of the most important computing paradigms since the Internet, owing to its unique characteristics that make it ideal for recording, verifying, and managing information from different transactions. Despite this, Blockchain faces various security problems, one of the most important being the 51% or majority attack. In this attack, one or more miners take control of at least 51% of the hash power or computational capacity of a network, so that a miner can arbitrarily manipulate and modify the information recorded in this technology. This work focused on designing and implementing strategies for detecting and mitigating majority (51%) attacks in a distributed Blockchain system, based on characterizing the behavior of miners. To achieve this, the Hash Rate / Share of Bitcoin and Ethereum miners was analyzed and evaluated, followed by the design and implementation of a consensus protocol to control the computational power of miners. Subsequently, Machine Learning models were explored and evaluated for detecting Cryptojacking malware. (Doctoral thesis, Doctor en Ingeniería de Sistemas y Computación.)
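The hash-share monitoring step described above can be sketched in a few lines: estimate each miner's share of computational power from recent block authorship and flag anyone approaching the 51% majority. The pool names and the 0.40 alert level are invented for illustration and do not reflect the thesis's actual consensus protocol.

```python
# Illustrative sketch (not the thesis's protocol): estimate hash-rate
# shares from recent block authorship and flag miners nearing 51%.
# Pool names and the 0.40 alert level are invented.
from collections import Counter

def hash_share(recent_block_miners):
    """Estimate hash-rate shares over a sliding window of block authors."""
    counts = Counter(recent_block_miners)
    total = len(recent_block_miners)
    return {miner: n / total for miner, n in counts.items()}

def detect_majority_risk(recent_block_miners, alert_at=0.40):
    """Flag miners whose share exceeds the alert level, so a consensus
    rule can throttle their blocks before 0.51 is ever reached."""
    return {m: s for m, s in hash_share(recent_block_miners).items()
            if s >= alert_at}

window = ["poolA"] * 45 + ["poolB"] * 30 + ["poolC"] * 25
print(detect_majority_risk(window))  # {'poolA': 0.45}
```

Block authorship is only a proxy for hash rate, so a practical monitor would also have to account for colluding identities that split one miner's power across several addresses.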
iDML: Incentivized Decentralized Machine Learning
With the rising emergence of decentralized and opportunistic approaches to
machine learning, end devices are increasingly tasked with training deep
learning models on-device, using crowd-sourced data that they collect
themselves. These approaches are desirable from a resource consumption
perspective and also from a privacy preservation perspective. When the devices
benefit directly from the trained models, the incentives are implicit:
contributing devices are rewarded with the availability of the higher-accuracy
model that results from collaboration. However, explicit
incentive mechanisms must be provided when end-user devices are asked to
contribute their resources (e.g., computation, communication, and data) to a
task performed primarily for the benefit of others, e.g., training a model for
a task that a neighbor device needs but the device owner is uninterested in. In
this project, we propose a novel blockchain-based incentive mechanism for
completely decentralized and opportunistic learning architectures. We leverage
a smart contract not only for providing explicit incentives to end devices to
participate in decentralized learning, but also to create a fully decentralized
mechanism to inspect and reflect on the behavior of the learning architecture.
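As a rough, hypothetical illustration of such an explicit incentive rule (not the proposed smart contract, whose logic the abstract does not detail), one can model rewards that pay devices in proportion to the accuracy improvement their contribution yields:

```python
# Hypothetical off-chain model of the incentive logic such a smart
# contract might encode (class name and reward rule are invented):
# devices earn tokens in proportion to the accuracy gain they contribute.

class IncentiveLedger:
    def __init__(self, reward_per_point=10):
        self.balances = {}            # device -> token balance
        self.best_accuracy = 0.0      # best shared-model accuracy so far
        self.reward_per_point = reward_per_point  # tokens per 1% gain

    def submit_update(self, device, new_accuracy):
        """Pay a device only if its update improves the shared model,
        making the incentive explicit and verifiable by all peers."""
        gain = new_accuracy - self.best_accuracy
        if gain <= 0:
            return 0
        self.best_accuracy = new_accuracy
        reward = round(gain * 100) * self.reward_per_point
        self.balances[device] = self.balances.get(device, 0) + reward
        return reward

ledger = IncentiveLedger()
print(ledger.submit_update("deviceA", 0.80))  # 800: +80 accuracy points
print(ledger.submit_update("deviceB", 0.78))  # 0: no improvement, no pay
```

Tying payment to a verifiable improvement is what makes the incentive explicit rather than implicit: a device that trains for a neighbor's task is compensated even when it never uses the model itself.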
Associated Random Neural Networks for Collective Classification of Nodes in Botnet Attacks
Botnet attacks are a major threat to networked systems because of their
ability to turn the network nodes that they compromise into additional
attackers, leading to the spread of high volume attacks over long periods. The
detection of such Botnets is complicated by the fact that multiple network IP
addresses will be simultaneously compromised, so that Collective Classification
of compromised nodes, in addition to the already available traditional methods
that focus on individual nodes, can be useful. Thus, this work introduces a
collective Botnet attack classification technique that operates on traffic from
an n-node IP network with a novel Associated Random Neural Network (ARNN) that
identifies the nodes which are compromised. The ARNN is a recurrent
architecture that incorporates two mutually associated, interconnected and
architecturally identical n-neuron random neural networks, which act
simultaneously as mutual critics to decide which of the n nodes have been
compromised. A novel gradient descent learning algorithm is presented for the
ARNN, and is shown to operate effectively both with
conventional off-line training from prior data, and with on-line incremental
training that requires no prior off-line learning. Real data from a 107-node
packet network, comprising over 700,000 packets, is used to evaluate the ARNN,
showing that it provides accurate predictions. Comparisons with other
well-known state-of-the-art methods on the same training and testing datasets
show that the ARNN offers significantly better performance.
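The mutual-critic idea can be caricatured in a few lines. The toy iteration below is not the ARNN's actual recurrent equations or learning rule, just an invented sketch of two identical scorers that each blend a node's traffic evidence with the other scorer's current opinion:

```python
# Toy caricature of the mutual-critic idea, not the ARNN's actual
# recurrent equations or learning rule: two identical scorers each blend
# per-node traffic evidence with the other scorer's current opinion.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mutual_critics(features, coupling=0.5, iters=10):
    """features[i] summarizes node i's traffic (higher = more suspicious);
    return a per-node compromised/clean decision both critics agree on."""
    n = len(features)
    a = [0.5] * n  # critic A's scores, initially undecided
    b = [0.5] * n  # critic B's scores, initially undecided
    for _ in range(iters):
        a = [sigmoid(features[i] + coupling * (b[i] - 0.5)) for i in range(n)]
        b = [sigmoid(features[i] + coupling * (a[i] - 0.5)) for i in range(n)]
    return [(a[i] + b[i]) / 2 > 0.5 for i in range(n)]

# Node 2 shows strongly anomalous traffic.
print(mutual_critics([-2.0, -1.5, 3.0, -0.5]))  # [False, False, True, False]
```

The point of the coupling term is that each scorer's output is conditioned on its counterpart's, so the final decision reflects agreement between the two rather than a single network's verdict.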
Digital Inclusion of the Farming Sector Using Drone Technology
Agriculture continues to be the primary source of income for most rural people in developing economies. The world's economy also relies strongly on agricultural products, which account for a large share of its exports. Despite its growing importance, agriculture still struggles to meet demand due to crop failures caused by bad weather conditions and unmanaged insect problems. As a result, the quality and quantity of agricultural products are occasionally affected, reducing farm income. Crop failure could be predicted ahead of time, and preventative measures could be taken, by combining conventional farming practices with contemporary technologies such as agri-drones to address the difficulties plaguing the agricultural sector. Drones are unmanned aerial vehicles used for imaging, soil and crop surveillance, and a variety of other purposes in agriculture. Drone technology is now emerging as a technology for large-scale applications in agriculture. Although the technology is still in its infancy in developing nations, numerous researchers and businesses are working to make it easily accessible to the farming community to boost agricultural productivity.
Acute appendicitis in childhood and adolescence: new diagnostic methods for the pre-therapeutic differentiation of histopathological entities to support conservative treatment strategies
The background of the studies summarized here is the current evidence suggesting that clinically uncomplicated, histopathologically phlegmonous appendicitis and clinically complicated, histopathologically gangrenous appendicitis are independent entities. These can be assigned to different treatment options (conservative vs. surgical). Against this background, one aim of the work was to investigate how the forms of acute appendicitis in childhood and adolescence can be distinguished before treatment begins.
Both laboratory diagnostics (P1 and P2) and ultrasound (P3) reveal differences between patients with uncomplicated, phlegmonous and complicated (gangrenous and perforating) appendicitis. On their own, however, these findings cannot yet provide sufficient decision certainty, owing to their limited discriminatory power. Applying artificial intelligence methods to examiner-independent diagnostic parameters (P4) further increased the predictive accuracy for acute appendicitis. A differential gene expression analysis (P5) yielded interesting results regarding the distinct pathomechanisms of the two inflammatory entities. In a proof-of-concept study, the previously described artificial intelligence methods were applied to the gene expression data (P6), demonstrating in a model that the entities can in principle be differentiated using the new method.
A medium-term goal is to define a biomarker signature whose predictive power derives from a computer algorithm, enabling rapid treatment decisions. Ideally, this biomarker signature should be safe, objective, and easy to determine, and should offer greater diagnostic certainty than current diagnostics based on history, examination, laboratory analysis, and ultrasound.
The long-term goal of follow-up studies is to identify a biomarker signature with the best possible predictive power. For routine clinical diagnostics, PCR-based point-of-care devices are conceivable, using a limited number of primers for a biomarker signature with high predictive power. The resulting biomarker would derive its informative value from an easy-to-use computer algorithm. The combination of gene expression analysis with artificial intelligence methods can thus form the basis of a new diagnostic instrument for reliably distinguishing different appendicitis entities.
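As a purely illustrative sketch of the envisioned algorithmic biomarker signature (the gene names, weights, and threshold are invented, not taken from the studies), a PCR-based point-of-care device could apply a fixed linear rule learned offline:

```python
# Purely illustrative sketch: gene names, weights, and the decision
# threshold are invented, not taken from the studies (P1-P6). A
# point-of-care device could apply such a fixed rule learned offline.
WEIGHTS = {"geneA": 1.2, "geneB": -0.8, "geneC": 0.5}
BIAS = -0.3

def classify(expression):
    """expression maps gene -> normalized expression level; returns the
    predicted appendicitis entity."""
    score = BIAS + sum(w * expression.get(g, 0.0) for g, w in WEIGHTS.items())
    return "complicated" if score > 0 else "uncomplicated"

print(classify({"geneA": 1.0, "geneB": 0.2, "geneC": 0.1}))  # complicated
print(classify({"geneA": 0.0, "geneB": 1.0, "geneC": 0.0}))  # uncomplicated
```

A fixed rule of this kind is attractive for point-of-care use because it is fast, deterministic, and auditable, in line with the stated goal of a safe, objective, and easy-to-determine signature.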