
    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth review oriented toward both academia and industry is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development, and for each of these components we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, allowing users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and find their opportunities for contribution.
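    To make the survey's layered structure concrete, the sketch below models the three-layer pipeline ecosystem as a small Python data structure. The layer names come from the abstract; the component and technology lists are illustrative placeholders, not the survey's exact taxonomy.

    from dataclasses import dataclass, field

    @dataclass
    class Layer:
        """One layer of the surveyed Metaverse pipeline ecosystem."""
        name: str
        components: list = field(default_factory=list)
        enablers: list = field(default_factory=list)

    # Layer names follow the abstract; the lists are placeholders.
    pipeline = [
        Layer("Infrastructure",
              ["computing", "networking", "communications", "hardware"],
              ["AI", "Security & Privacy", "Blockchain"]),
        Layer("Environment digitization",
              ["3D reconstruction", "digital assets"],
              ["AI", "Business"]),
        Layer("User interactions",
              ["VR/AR interfaces", "avatars", "monetization"],
              ["Ethics", "Social", "Blockchain"]),
    ]

    for layer in pipeline:
        print(f"{layer.name}: {', '.join(layer.components)}")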

    The Viability and Potential Consequences of IoT-Based Ransomware

    With the increased threat of ransomware and the substantial growth of the Internet of Things (IoT) market, there is significant motivation for attackers to carry out IoT-based ransomware campaigns. In this thesis, the viability of such malware is tested. As part of this work, various techniques that could be used by ransomware developers to attack commercial IoT devices were explored. First, methods that attackers could use to communicate with the victim were examined, such that a ransom note could be reliably sent to a victim. Next, the viability of using "bricking" as a method of ransom was evaluated, such that devices could be remotely disabled unless the victim makes a payment to the attacker. Research was then performed to ascertain whether it was possible to remotely gain persistence on IoT devices, which would improve the efficacy of existing ransomware methods and provide opportunities for more advanced ransomware to be created. Finally, after successfully identifying a number of persistence techniques, the viability of privacy-invasion-based ransomware was analysed. For each assessed technique, proofs of concept were developed. A range of devices -- with various intended purposes, such as routers, cameras and phones -- were used to test the viability of these proofs of concept. To test communication hijacking, devices' "channels of communication" -- such as web services and embedded screens -- were identified, then hijacked to display custom ransom notes. During the analysis of bricking-based ransomware, a working proof of concept was created, which was then able to remotely brick five IoT devices. After analysing the storage design of an assortment of IoT devices, six different persistence techniques were identified, which were then successfully tested on four devices, such that malicious filesystem modifications would be retained after the device was rebooted. When researching privacy-invasion-based ransomware, several methods were created to extract information from data sources that can be commonly found on IoT devices, such as nearby WiFi signals, images from cameras, or audio from microphones. These were successfully implemented in a test environment such that ransomable data could be extracted, processed, and stored for later use to blackmail the victim. Overall, IoT-based ransomware has not only been shown to be viable but also highly damaging to both IoT devices and their users. While the use of IoT ransomware is still very uncommon "in the wild", the techniques demonstrated within this work highlight an urgent need to improve the security of IoT devices to avoid the risk of IoT-based ransomware causing havoc in our society. Finally, during the development of these proofs of concept, a number of potential countermeasures were identified, which can be used to limit the effectiveness of the attacking techniques discovered in this PhD research.
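    The persistence findings suggest one of the defensive checks the thesis motivates: detecting filesystem modifications that survive a reboot. Below is a minimal integrity-check sketch; the watched paths are hypothetical examples of writable locations on an IoT device, not paths named in the thesis.

    import hashlib
    import os

    # Hypothetical watch list: writable locations where persistence
    # techniques of the kind studied here would leave modifications.
    WATCH_PATHS = ["/etc/init.d", "/overlay", "/data/local"]

    def snapshot(paths):
        """Hash every file under the watched paths."""
        digests = {}
        for root_path in paths:
            for dirpath, _, filenames in os.walk(root_path):
                for name in filenames:
                    full = os.path.join(dirpath, name)
                    try:
                        with open(full, "rb") as f:
                            digests[full] = hashlib.sha256(f.read()).hexdigest()
                    except OSError:
                        pass  # skip device files and permission errors
        return digests

    def diff(baseline, current):
        """Report files added, removed, or changed since the baseline."""
        added = current.keys() - baseline.keys()
        removed = baseline.keys() - current.keys()
        changed = {p for p in baseline.keys() & current.keys()
                   if baseline[p] != current[p]}
        return added, removed, changed

    A baseline snapshot would be taken in a known-good state and stored on read-only media; comparing it against a fresh snapshot at each boot reveals the kind of persistent filesystem modifications demonstrated in the thesis.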

    ENABLING EFFICIENT FLEET COMPOSITION SELECTION THROUGH THE DEVELOPMENT OF A RANK HEURISTIC FOR A BRANCH AND BOUND METHOD

    In the foreseeable future, autonomous mobile robots (AMRs) will become a key enabler for increasing productivity and flexibility in material handling in warehousing facilities, distribution centers, and manufacturing systems. The objective of this research is to develop and validate parametric models of AMRs, develop a ranking heuristic using a physics-based algorithm within the framework of the Branch and Bound method, integrate the ranking algorithm into a Fleet Composition Optimization (FCO) tool, and finally conduct simulations under various scenarios to verify the suitability and robustness of the developed tool in a factory equipped with AMRs. Kinematic-based equations are used for computing both energy and time consumption. Multivariate linear regression, a data-driven method, is used for designing the ranking heuristic. The results indicate that the unique physical structure and parameters of each robot are the main factors contributing to differences in energy and time consumption. An improvement in computation time was achieved when comparing heuristic-based search against non-heuristic-based search. This research is expected to significantly improve the current nested fleet composition optimization tool by reducing computation time without sacrificing optimality. From a practical perspective, greater efficiency in reducing energy and time costs can be achieved. Ford Motor Company; No embargo; Academic Major: Aerospace Engineering
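    As a rough illustration of how a rank heuristic plugs into Branch and Bound for fleet composition, the sketch below runs a best-first search over how many robots of each type to deploy. The robot parameters and the hand-written bound are invented for the example; in the thesis, a regression-based, physics-informed model plays the ranking role.

    import heapq

    # Illustrative robot types: (name, throughput per robot, cost per robot).
    # Values are invented for this sketch, not taken from the thesis.
    ROBOTS = [("A", 4.0, 10.0), ("B", 3.0, 6.0), ("C", 1.0, 1.5)]
    DEMAND = 10.0        # required total material-handling throughput
    MAX_PER_TYPE = 5

    def rank(partial_cost, remaining_demand):
        """Optimistic cost-to-go used to order branches. A learned model
        (e.g. multivariate linear regression over robot energy/time
        parameters, as in the thesis) could replace this bound."""
        cheapest = min(cost / thr for _, thr, cost in ROBOTS)
        return partial_cost + max(remaining_demand, 0.0) * cheapest

    def branch_and_bound():
        best_cost, best_fleet = float("inf"), None
        # Best-first search: the heap is ordered by the rank heuristic.
        heap = [(rank(0.0, DEMAND), 0.0, DEMAND, 0, [])]
        while heap:
            bound, cost, need, i, fleet = heapq.heappop(heap)
            if bound >= best_cost:
                continue                  # prune: cannot beat the incumbent
            if need <= 0:
                best_cost, best_fleet = cost, fleet
                continue
            if i == len(ROBOTS):
                continue                  # demand unmet on this branch
            name, thr, unit_cost = ROBOTS[i]
            for n in range(MAX_PER_TYPE + 1):
                child_cost = cost + n * unit_cost
                child_need = need - n * thr
                child_bound = rank(child_cost, child_need)
                if child_bound < best_cost:
                    heapq.heappush(heap, (child_bound, child_cost, child_need,
                                          i + 1, fleet + [(name, n)]))
        return best_cost, best_fleet

    print(branch_and_bound())

    The better the rank heuristic approximates the true cost-to-go, the earlier good incumbents are found and the more aggressively the search prunes, which is where the reported computation-time savings come from.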

    Shellfish Stocks and Fisheries Review 2022: an assessment of selected stocks

    This review presents information on the status of selected shellfish stocks in Ireland. In addition, data on the fleet and landings of shellfish species (excluding Nephrops and mussels) are presented. The intention of this annual review is to present stock assessment and management advice for shellfisheries that may be subject to new management proposals or where scientific advice is required in relation to assessing the environmental impact of shellfish fisheries, especially in areas designated under European Directives. The review reflects the recent work of the Marine Institute (MI) in the biological assessment of shellfish fisheries and their interaction with the environment. The information and advice presented here for shellfish is complementary to that presented in the MI Stock Book on demersal and pelagic fisheries. Separate treatment of shellfish is warranted as their biology and distribution, the assessment methods that can be applied to them, and the system under which they are managed all differ substantially from demersal and pelagic stocks. Shellfish stocks are not generally assessed by the International Council for the Exploration of the Sea (ICES), and although they come under the competency of the Common Fisheries Policy they are generally not regulated by EU TAC and, in the main, other than crab and scallop, are distributed inside the national 12 nm fisheries limit. Management of these fisheries is within the competency of the Department of Agriculture, Food and the Marine (DAFM). A co-operative management framework, introduced by the Governing Department and BIM in 2005 (Anon 2005) and under which a number of fishery management plans were developed, was replaced in 2014 by the National and Regional Inshore Fisheries Forums (NIFF, RIFFs). These bodies are consultative forums, the members of which are representative of the inshore fisheries sector and other stakeholder groups. The National forum (NIFF) provides a structure with which each of the regional forums can interact with each other and with the Marine Agencies, DAFM and the Minister. Management of oyster fisheries is the responsibility of the Department of Environment, Climate and Communications, implemented through Inland Fisheries Ireland (IFI). In many cases, however, management responsibility for oysters is devolved through Fishery Orders or Aquaculture licences to local co-operatives. The main customers for this review are DAFM, RIFFs, NIFF and the other Departments and Authorities listed above. EMFAF; Government of Ireland

    Countermeasures for the majority attack in blockchain distributed systems

    Blockchain technology is considered one of the most important computing paradigms since the Internet, owing to the unique characteristics that make it ideal for recording, verifying, and managing information from different kinds of transactions. Despite this, Blockchain faces various security problems, among which the 51% attack, or majority attack, is one of the most important. In this attack, one or more miners take control of at least 51% of the hash power or computing power of a network, at which point a miner can arbitrarily manipulate and modify the information recorded on the chain. This work focused on designing and implementing strategies to detect and mitigate majority attacks (51% attacks) in a Blockchain distributed system, based on characterizing the behavior of miners. To this end, the hash rate/share of miners of the Bitcoin and Ethereum cryptocurrencies was analyzed and evaluated, followed by the design and implementation of a consensus protocol to control the computing power of miners. Subsequently, Machine Learning models were explored and evaluated to detect Cryptojacking-type malicious software. Doctorate: Doctor in Systems and Computing Engineering
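    A minimal sketch of one detection strategy in the spirit of this work: estimate each miner's share of the network hash rate from recently mined blocks and flag anyone approaching the majority threshold. The block-attribution data source and the warning threshold are assumptions for illustration.

    from collections import Counter

    MAJORITY_THRESHOLD = 0.51

    def hash_share(blocks_by_miner):
        """Estimate each miner's hash-rate share from recent blocks."""
        total = sum(blocks_by_miner.values())
        return {m: n / total for m, n in blocks_by_miner.items()}

    def flag_majority_risk(blocks_by_miner, warn_at=0.40):
        shares = hash_share(blocks_by_miner)
        for miner, share in sorted(shares.items(), key=lambda kv: -kv[1]):
            if share >= MAJORITY_THRESHOLD:
                yield miner, share, "MAJORITY: can rewrite history"
            elif share >= warn_at:
                yield miner, share, "WARNING: approaching majority"

    # Example: the last 100 blocks attributed to mining pools.
    recent = Counter({"pool_a": 52, "pool_b": 30, "pool_c": 18})
    for miner, share, msg in flag_majority_risk(recent):
        print(f"{miner}: {share:.0%} {msg}")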

    Application of Multichannel Active Vibration Control in a Multistage Gear Transmission System

    Gears are among the most important components of rotating machinery and power transmission devices. When gears engage in meshing transmission, vibration occurs due to factors such as gear machining errors, meshing stiffness, and meshing impact. The traditional FxLMS algorithm, a common active vibration control algorithm, has been widely studied and applied to active vibration control of gear transmission systems in recent years. However, it struggles to achieve good convergence speed and convergence precision at the same time. This paper proposes a variable-step-size multichannel FxLMS algorithm based on the sampling function, which accelerates convergence in the initial stage of iteration, improves convergence accuracy in the steady-state adaptive stage, and makes the modified algorithm more robust. Simulations verify the effectiveness of the algorithm. An experimental platform for active vibration control of the secondary gear transmission system is built: a piezoelectric actuator is installed on an additional gear shaft to form an active structure and is equipped with a signal acquisition system and a control system. The proposed variable-step-size multichannel FxLMS algorithm is verified experimentally. The experimental results show that the proposed algorithm converges more accurately than the traditional FxLMS algorithm, with convergence accuracy improved by up to 123%.
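    For orientation, here is a single-channel sketch of a variable-step-size FxLMS update (the paper's algorithm is multichannel; one channel keeps the code short). The step-size schedule below, built from the sampling function Sa(x) = sin(x)/x, is our guess at the idea: a large error yields a step near mu_max for fast initial convergence, while a small error yields a step near zero for accurate steady-state adaptation. The paper's exact rule may differ.

    import numpy as np

    def sa(x):
        """Sampling function Sa(x) = sin(x)/x (np.sinc(t) = sin(pi t)/(pi t))."""
        return np.sinc(x / np.pi)

    def vss_fxlms(x, d, s_hat, L=32, mu_max=0.01, beta=50.0):
        """x: reference (e.g. gear-mesh vibration), d: disturbance at the
        error sensor, s_hat: FIR estimate of the secondary path."""
        w = np.zeros(L)                        # adaptive controller weights
        xf = np.convolve(x, s_hat)[:len(x)]    # filtered-x reference
        y_buf = np.zeros(len(s_hat))           # recent controller outputs
        e = np.zeros(len(x))
        for n in range(L, len(x)):
            xv = x[n-L+1:n+1][::-1]            # newest-first input vector
            y = w @ xv                         # control signal
            y_buf = np.roll(y_buf, 1)
            y_buf[0] = y
            e[n] = d[n] - s_hat @ y_buf        # residual at the error sensor
            mu = mu_max * (1.0 - abs(sa(beta * e[n])))   # variable step size
            w += mu * e[n] * xf[n-L+1:n+1][::-1]         # FxLMS update
        return w, e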

    Consent and the Construction of the Volunteer: Institutional Settings of Experimental Research on Human Beings in Britain during the Cold War

    This study challenges the primacy of consent in the history of human experimentation and argues that privileging the cultural frameworks adds nuance to our understanding of the construction of the volunteer in the period 1945 to 1970. Historians and bio-ethicists have argued that medical ethics codes have marked out the parameters of using people as subjects in medical scientific research and that the consent of the subjects was fundamental to their status as volunteers. However, the temporality of the creation of medical ethics codes means that they need to be understood within their historical context. That medical ethics codes arose from a specific historical context rather than a concerted and conscious determination to safeguard the well-being of subjects needs to be acknowledged. The British context of human experimentation is under-researched and there has been even less focus on the cultural frameworks within which experiments took place. This study demonstrates, through a close analysis of the Medical Research Council's Common Cold Research Unit (CCRU) and the government's military research facility, the Chemical Defence Experimental Establishment, Porton Down (Porton), that the 'volunteer' in human experiments was a subjective entity whose identity was specific to the institution which recruited and made use of the subject. By examining representations of volunteers in the British press, the rhetoric of the government's collectivist agenda becomes evident and this fed into the institutional construction of the volunteer at the CCRU. In contrast, discussions between Porton scientists, staff members, and government officials demonstrate that the use of military personnel in secret chemical warfare experiments was far more complex. Conflicting interests of the military, the government and the scientific imperative affected how the military volunteer was perceived.

    Network Transmission Flags Data Affinity-based Classification by K-Nearest Neighbor

    This research is concerned with the data generated during a network transmission session, with the aim of understanding how to extract value from that data. Instead of comparing all of the transmission flags of a session at the same time, this paper conceptualizes the influence of each transmission flag on network-aware applications by comparing the flags one by one in terms of their impact on the application during the transmission session. The K-nearest neighbor (KNN) classifier was used because it is a simple distance-based learning algorithm that remembers earlier training samples and is suitable for relating the various flags to their effect on application protocols by comparing each new sample with the K nearest points to make a decision. We used transmission session datasets obtained from Kaggle for IP flows, with 87 features and 3,577,296 instances, picked 13 features from the datasets, and ran them through KNN. RapidMiner was used for the study, and the results of the experiments revealed that the KNN-based model was not only significantly more accurate in categorizing data, but also significantly more efficient due to decreased processing costs.
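    A minimal scikit-learn sketch of this setup: classify flows by flag-derived features with a K-nearest-neighbor model. The file path, column names, and k are placeholders, not the paper's exact choices (the paper selected 13 features and ran its experiments in RapidMiner).

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv("ip_flows.csv")   # hypothetical path to the dataset
    FLAG_FEATURES = ["syn_count", "ack_count", "fin_count",
                     "rst_count", "psh_count", "urg_count"]  # illustrative
    X, y = df[FLAG_FEATURES], df["app_protocol"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    # KNN compares raw distances, so feature scaling matters.
    model = make_pipeline(StandardScaler(),
                          KNeighborsClassifier(n_neighbors=5))
    model.fit(X_train, y_train)
    print("accuracy:", model.score(X_test, y_test))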

    Reduction of Petri net maintenance modeling complexity via Approximate Bayesian Computation

    This paper is part of the ENHAnCE ITN project (https://www.h2020-enhanceitn.eu/) funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 859957. The authors would like to thank the Lloyd's Register Foundation (LRF), a charitable foundation in the U.K. helping to protect life and property by supporting engineering-related education, public engagement, and the application of research. The authors gratefully acknowledge the support of these organizations, which have enabled the research reported in this paper. The accurate modeling of engineering systems and processes using Petri nets often results in complex graph representations that are computationally intensive, limiting the potential of this modeling tool in real-life applications. This paper presents a methodology to properly define the optimal structure and properties of a reduced Petri net that mimics the output of a reference Petri net model. The methodology is based on Approximate Bayesian Computation, used to infer the plausible values of the parameters of the reduced model in a rigorous probabilistic way. The method also provides a numerical measure of the level of approximation of the reduced model structure, thus allowing the selection of the optimal reduced structure among a set of potential candidates. The suitability of the proposed methodology is illustrated using an illustrative example and a system reliability engineering case study, showing satisfactory results. The results also show that the method allows flexible reduction of the structure of the complex Petri net model taken as reference, and provides numerical justification for the choice of the reduced model structure. European Commission 859957; Lloyd's Register Foundation (LRF)
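    The core inference loop can be pictured as plain ABC rejection sampling, sketched below. The paper's actual distance function, priors, and Petri net simulator are not reproduced; the acceptance rate is used here only as a rough proxy for a numerical measure of how well a candidate reduced structure approximates the reference model.

    import numpy as np

    def abc_reduce(simulate_reduced, reference_output, prior_sampler,
                   distance, eps, n_draws=10_000, rng=None):
        """ABC rejection sampling for a candidate reduced Petri net.

        simulate_reduced(theta) -> output of the candidate reduced model
        reference_output        -> output of the full reference model
        prior_sampler(rng)      -> one draw of the reduced model parameters
        distance(a, b)          -> scalar mismatch between two outputs
        """
        rng = rng or np.random.default_rng()
        accepted = []
        for _ in range(n_draws):
            theta = prior_sampler(rng)
            if distance(simulate_reduced(theta), reference_output) <= eps:
                accepted.append(theta)
        # The accepted draws approximate the posterior over the reduced
        # model's parameters; the acceptance rate scores the structure.
        return np.array(accepted), len(accepted) / n_draws

    Running this loop for each candidate reduced structure and comparing the resulting scores gives one way to justify numerically the choice of one reduced structure over another, in the spirit of the paper's selection procedure.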