15,833 research outputs found

    Vision- and tactile-based continuous multimodal intention and attention recognition for safer physical human-robot interaction

    Full text link
    Employing skin-like tactile sensors on robots enhances both the safety and usability of collaborative robots by adding the capability to detect human contact. Unfortunately, simple binary tactile sensors alone cannot determine the context of the human contact -- whether it is a deliberate interaction or an unintended collision that requires safety manoeuvres. Many published methods classify discrete interactions using more advanced tactile sensors or by analysing joint torques. Instead, we propose to augment the intention recognition capabilities of simple binary tactile sensors by adding a robot-mounted camera for human posture analysis. Different interaction characteristics, including touch location, human pose, and gaze direction, are used to train a supervised machine learning algorithm to classify whether a touch is intentional or not with an F1-score of 86%. We demonstrate that multimodal intention recognition is significantly more accurate than monomodal analyses with the collaborative robot Baxter. Furthermore, our method can also continuously monitor interactions that fluidly change between intentional and unintentional by gauging the user's attention through gaze. If a user stops paying attention mid-task, the proposed intention and attention recognition algorithm can activate safety features to prevent unsafe interactions. We also employ a feature reduction technique that reduces the number of inputs to five to achieve a more generalized low-dimensional classifier. This simplification both reduces the amount of training data required and improves real-world classification accuracy. It also renders the method potentially agnostic to the robot and touch sensor architectures while achieving a high degree of task adaptability. Comment: 11 pages, 8 figures, preprint under review
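    The abstract does not spell out the classifier or the exact feature encoding, so the following is only a minimal sketch of the kind of supervised intentional-vs-unintentional touch classification described, assuming a generic scikit-learn classifier and five invented low-dimensional features in place of the paper's actual inputs and recorded data.

```python
# Hypothetical sketch only: feature names, labelling rule, and classifier choice are
# placeholders, not the paper's actual setup; synthetic data stands in for recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 500
# Five illustrative inputs (the paper reduces its feature set to five):
# touch_x, touch_y, gaze-to-touch angle, torso lean, hand speed
X = rng.normal(size=(n, 5))
# Toy labelling rule: a touch counts as intentional when gaze roughly points at the contact
y = (np.abs(X[:, 2]) < 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("F1 on synthetic data:", f1_score(y_test, clf.predict(X_test)))
```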

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    Full text link
    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the Metaverse when it is properly developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and directions. Existing surveys on the Metaverse focus only on a specific aspect and discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, and academic and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. Also, for each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our presented study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive and allows users, scholars, and entrepreneurs to get an in-depth understanding of the Metaverse ecosystem to find their opportunities and potential for contribution

    Success factors in IT Outsourcing

    Get PDF
    Abstract. To survive and respond to the ever-changing business world, companies are seeking new ways to concentrate on and improve their core competencies, as well as improve their competitive position in the market. Companies are exploring how to exploit the core competencies of other companies. The goals of the partnership might differ depending on its scope and might include one or many of the following: cost reduction, access to higher quality service, and access to technology and/or know-how. Even though the first IT outsourcing deals were made around 30 years ago by Eastman Kodak and General Dynamics and the area has been studied quite heavily, companies still seem to find it difficult to realise the desired benefits. As IT outsourcing is a widely used option in the business world and the results are not firm, I feel the topic is still relevant to study. The research question for the study is: “What factors affect the success of an IT outsourcing relationship?” The research question is answered through the literature review. From the literature review, eleven high-level success factors can be identified; in some cases, factors are combined. The success factors are Cost and Quality, Trust, Alignment to business strategy, Culture, Communication, Contracts, Strategic Partnership, Governance, Management support, Infrastructure, and Know-how. How important each individual factor is in a given outsourcing engagement depends on the nature of the partnership. The theoretical implications are very limited, but the practical implications regarding communication, trust and governance should be considered when companies enter IT outsourcing partnerships. Putting an emphasis on setting up proper governance functions and on people who are good at communicating with the other party will pay the effort back in the success of the relationship

    Technical Dimensions of Programming Systems

    Get PDF
    Programming requires much more than just writing code in a programming language. It is usually done in the context of a stateful environment, by interacting with a system through a graphical user interface. Yet, this wide space of possibilities lacks a common structure for navigation. Work on programming systems fails to form a coherent body of research, making it hard to improve on past work and advance the state of the art. In computer science, much has been said and done to allow comparison of programming languages, yet no similar theory exists for programming systems; we believe that programming systems deserve a theory too. We present a framework of technical dimensions which capture the underlying characteristics of programming systems and provide a means for conceptualizing and comparing them. We identify technical dimensions by examining past influential programming systems and reviewing their design principles, technical capabilities, and styles of user interaction. Technical dimensions capture characteristics that may be studied, compared and advanced independently. This makes it possible to talk about programming systems in a way that can be shared and constructively debated rather than relying solely on personal impressions. Our framework is derived using a qualitative analysis of past programming systems. We outline two concrete ways of using our framework. First, we show how it can analyze a recently developed novel programming system. Then, we use it to identify an interesting unexplored point in the design space of programming systems. Much research effort focuses on building programming systems that are easier to use, accessible to non-experts, moldable and/or powerful, but such efforts are disconnected. They are informal, guided by the personal vision of their authors and thus are only evaluable and comparable on the basis of individual experience using them. By providing foundations for more systematic research, we can help programming systems researchers to stand, at last, on the shoulders of giants
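    As a rough illustration of how such a framework can make systems comparable, the sketch below encodes two systems as positions along a handful of dimensions and lists where they clearly diverge; the dimension names and scores are placeholders, not the paper's actual catalogue of technical dimensions.

```python
# Illustrative data structure only: dimension names and ordinal scores are invented
# for this sketch and do not reproduce the framework's actual dimensions.
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    name: str
    dimensions: dict = field(default_factory=dict)  # dimension name -> ordinal position 0-4

def differing_dimensions(a: SystemProfile, b: SystemProfile) -> list:
    """Return the dimensions on which two programming systems clearly diverge."""
    shared = a.dimensions.keys() & b.dimensions.keys()
    return sorted(k for k in shared if abs(a.dimensions[k] - b.dimensions[k]) >= 2)

spreadsheet = SystemProfile("Spreadsheet", {"liveness": 4, "notation_uniformity": 1, "composability": 2})
unix_c = SystemProfile("UNIX + C", {"liveness": 1, "notation_uniformity": 3, "composability": 4})
print(differing_dimensions(spreadsheet, unix_c))  # ['composability', 'liveness', 'notation_uniformity']
```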

    Impact of Population Based Indoor Residual Spraying with and without Mass Drug Administration with Dihydroartemisinin-Piperaquine on Malaria Prevalence in a High Transmission Setting: A Quasi-Experimental Controlled Before-and-After Trial in Northeastern Uganda

    Get PDF
    Background: Declines in malaria burden in Uganda have slowed. Modelling predicts that indoor residual spraying (IRS) and mass drug administration (MDA), when co-timed, have synergistic impact. This study investigated the additional protective impact, if any, of population-based MDA on malaria prevalence when added to IRS, as compared with IRS alone and with standard of care (SOC). Methods: The 32-month quasi-experimental controlled before-and-after trial enrolled an open cohort of residents of Katakwi District in northeastern Uganda (46,765 individuals at the 1st enumeration and 52,133 at the 4th). Consented participants were assigned to three arms based on residential subcounty at study start: MDA+IRS, IRS, SOC. IRS with pirimiphos-methyl and MDA with dihydroartemisinin-piperaquine were delivered in 4 co-timed campaign-style rounds 8 months apart. The primary endpoint was population prevalence of malaria, estimated by 6 cross-sectional surveys, starting at baseline and preceding each subsequent round. Results: Comparing malaria prevalence in the MDA+IRS and IRS-only arms over all 6 surveys (intention-to-treat analysis), roughly every 6 months post-intervention, a geostatistical model found a significant additional 15.5% (95% confidence interval (CI): [13.7%, 17.5%], Z = 9.6, p = 5e−20) decrease in the adjusted odds ratio (aOR) due to MDA for all ages, a 13.3% reduction in children under 5 (95% CI: [10.5%, 16.8%], Z = 4.02, p = 5e−5), and a 10.1% reduction in children 5–15 (95% CI: [8.5%, 11.8%], Z = 4.7, p = 2e−5). Residents of all ages in the MDA+IRS arm had an overall 80.1% reduction (95% CI: [80.0%, 83.0%], p = 0.0001) in odds of qPCR-confirmed malaria compared with SOC residents. Secondary difference-in-difference analyses comparing surveys at different timepoints to baseline showed aORs (MDA+IRS vs IRS) of qPCR positivity between 0.28 and 0.66 (p < 0.001). Of three serious adverse events, one (nonfatal) was considered related to study medications. Limitations include the initial non-random assignment of study arms, the single large cluster per arm, and the lack of an MDA-only arm, which was considered to violate equipoise. Conclusions: Despite being assessed at long time points 5–7 months post-round, MDA plus IRS provided significant additional protection from malaria infection over IRS alone. Randomized trials of MDA in large areas undergoing IRS are recommended, as well as cohort studies of impact on incidence
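    As a simplified, non-spatial illustration of the secondary difference-in-difference idea only (the trial's primary analysis used a geostatistical model), the sketch below fits a logistic model of qPCR positivity with an arm-by-period interaction on synthetic data; exponentiating the interaction coefficient gives the DiD adjusted odds ratio.

```python
# Hedged sketch: synthetic data and a plain logit stand in for the trial's actual
# geostatistical analysis; variable names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "mda_irs": rng.integers(0, 2, n),   # 1 = MDA+IRS arm, 0 = IRS-only arm
    "post": rng.integers(0, 2, n),      # 1 = survey round after interventions began
    "age_years": rng.integers(1, 60, n),
})
# Synthetic outcome: an extra drop in odds of positivity in the MDA+IRS arm post-intervention
logit = -0.4 - 0.5 * df["post"] - 0.8 * df["mda_irs"] * df["post"]
df["qpcr_pos"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = smf.logit("qpcr_pos ~ mda_irs * post + age_years", data=df).fit(disp=0)
print("DiD aOR:", np.exp(model.params["mda_irs:post"]))  # interaction term on the odds scale
```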

    A Design Science Research Approach to Smart and Collaborative Urban Supply Networks

    Get PDF
    Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Being a heterogeneous field, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness. A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence applications in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence utilisation in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense. Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice

    Corporate Social Responsibility: the institutionalization of ESG

    Get PDF
    Understanding the impact of Corporate Social Responsibility (CSR) on firm performance as it relates to industries reliant on technological innovation is a complex and perpetually evolving challenge. To thoroughly investigate this topic, this dissertation will adopt an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to essentially be a standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis explores how the evolution of CSR into its modern quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their non-reporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this contributes to the literature by filling gaps in the nature of the impact that ESG has on firm performance, particularly from a management perspective
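    For the second hypothesis, a hedged sketch of the kind of firm-performance regression implied (a long-term metric regressed on an ESG score with simple controls) is shown below; the variable names, controls, and synthetic data are placeholders rather than the dissertation's actual specification.

```python
# Illustrative only: invented cross-section of firms and controls; not the dissertation's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
firms = pd.DataFrame({
    "esg_score": rng.uniform(0, 100, n),   # hypothetical composite ESG rating
    "log_assets": rng.normal(9, 1, n),     # firm size control
    "leverage": rng.uniform(0, 0.8, n),
})
# Synthetic ROA with a small positive ESG association, mirroring the stated finding
firms["roa"] = 0.02 + 0.0004 * firms["esg_score"] - 0.03 * firms["leverage"] + rng.normal(0, 0.05, n)

ols = smf.ols("roa ~ esg_score + log_assets + leverage", data=firms).fit()
print(ols.params["esg_score"], ols.pvalues["esg_score"])
```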

    The Great Green Wall Initiative in Mali - Country Review

    Get PDF

    High-performance and Scalable Software-based NVMe Virtualization Mechanism with I/O Queues Passthrough

    Full text link
    NVMe (Non-Volatile Memory Express) is an industry standard for solid-state drives (SSDs) that has been widely adopted in data centers. NVMe virtualization is crucial in cloud computing as it allows virtualized NVMe devices to be used by virtual machines (VMs), thereby improving the utilization of storage resources. However, traditional software-based solutions offer flexibility but often come at the cost of performance degradation or high CPU overhead. On the other hand, hardware-assisted solutions offer high performance and low CPU usage, but their adoption is often limited by the need for special hardware support or the requirement for new hardware development. In this paper, we propose LightIOV, a novel software-based NVMe virtualization mechanism that achieves high performance and scalability without consuming valuable CPU resources and without requiring special hardware support. LightIOV can support thousands of VMs on each server. The key idea behind LightIOV is NVMe hardware I/O queue passthrough, which enables VMs to directly access the I/O queues of NVMe devices, thus eliminating virtualization overhead and providing near-native performance. Results from our experiments show that LightIOV provides performance comparable to VFIO, achieving 97.6%-100.2% of VFIO's IOPS. Furthermore, in high-density VM environments, LightIOV achieves 31.4% lower latency than SPDK-Vhost when running 200 VMs, and a 27.1% improvement in OPS in real-world applications
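    LightIOV itself operates at the hypervisor and NVMe driver level; purely as a conceptual sketch of the bookkeeping behind I/O queue passthrough (dedicating hardware submission/completion queue pairs to individual VMs so their I/O bypasses the host software stack), one might model the assignment as below. Names and structure are assumptions for illustration, not the paper's implementation.

```python
# Conceptual toy model only: real queue passthrough also maps doorbells and queue
# memory into the guest; here we only track which VM owns which hardware queue pair.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class QueuePair:
    qid: int                        # hardware submission/completion queue pair ID
    owner_vm: Optional[str] = None

class QueuePassthroughManager:
    def __init__(self, num_hw_queue_pairs: int):
        # Queue pair 0 (the admin queue) stays with the host.
        self.pool = [QueuePair(qid) for qid in range(1, num_hw_queue_pairs)]

    def attach(self, vm: str, count: int) -> List[int]:
        """Reserve `count` idle hardware queue pairs for a VM and return their IDs."""
        free = [q for q in self.pool if q.owner_vm is None][:count]
        if len(free) < count:
            raise RuntimeError("not enough idle hardware I/O queue pairs")
        for q in free:
            q.owner_vm = vm
        return [q.qid for q in free]

    def detach(self, vm: str) -> None:
        for q in self.pool:
            if q.owner_vm == vm:
                q.owner_vm = None

mgr = QueuePassthroughManager(num_hw_queue_pairs=128)
print(mgr.attach("vm-0", 4))  # queue pairs handed directly to vm-0, e.g. [1, 2, 3, 4]
```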

    Conversion of Legal Agreements into Smart Legal Contracts using NLP

    Full text link
    A Smart Legal Contract (SLC) is a specialized digital agreement comprising natural language and computable components. The Accord Project provides an open-source SLC framework containing three main modules: Cicero, Concerto, and Ergo. Currently, lawyers, programmers, and clients must work together with great effort to create a usable SLC using the Accord Project. This paper proposes a pipeline that automates the SLC creation process with several Natural Language Processing (NLP) models to convert law contracts into the Accord Project's Concerto model. After evaluating the proposed pipeline, we found that our NER pipeline detects CiceroMark from Accord Project template text with an accuracy of 0.8. Additionally, our Question Answering method can extract one-third of the Concerto variables from the template text. We also delve into some limitations and possible future research for the proposed pipeline. Finally, we describe a web interface enabling users to build SLCs. This interface leverages the proposed pipeline to convert text documents into Smart Legal Contracts using NLP models. Comment: 7 pages, Companion Proceedings of the ACM Web Conference 2023 (WWW '23 Companion), April 30-May 4, 2023, Austin, TX, US
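    As a rough illustration of the Question Answering step alone, the snippet below asks an off-the-shelf extractive QA model to pull a candidate value for one Concerto-style variable out of contract text; the model name, question, and contract wording are placeholders, not the models or prompts used in the paper.

```python
# Hypothetical sketch of variable extraction via extractive QA; the chosen model and
# question are assumptions for illustration only.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

contract_text = (
    "The Buyer shall pay the Seller a late delivery penalty of 2% of the contract "
    "value for each week of delay, capped at 10% of the total contract value."
)
result = qa(question="What is the penalty percentage per week of delay?",
            context=contract_text)
print(result["answer"], result["score"])  # candidate value to fill a Concerto field
```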