16,431 research outputs found

    In Pursuit of Experience: The Authentic Documentation of Experience in Beat Generation Literature

    Get PDF
    Throughout their lives, the authors of the Beat Generation sought an escape from the conformity of mid-century American life in favour of fresh, thrilling experiences to influence their writing. The writers of the Beat Generation developed writing methods that authentically document their real-life experiences, and this thesis therefore examines the documentary nature of the literature that came out of this generation. The first section explores Beat literature as memoir, arguing that Kerouac's prose is based on his own first-hand experience recollected after the event. This section also argues that, owing to its fast pace and lack of revision, the Spontaneous Prose Method can be used by authors as a form suited to the authentic documentation of experience. The second chapter looks at the use of transcription methods to document a moment, or specific event, written during the experience. This chapter compares Gary Snyder's Riprap and Cold Mountain Poems, Ginsberg's 'Wichita Vortex Sutra', and Kerouac's Blues Poems as poetry that authentically portrays a moment of experience to the reader. The final chapter explores the more experimental methods of documentation and asks whether any authenticity was lost to experimentation. It also explores the influence of the Beats' drug use on the content and form of the literature.

    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    Full text link
    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control, and highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning, and control. It also discusses the potential benefits of integrating AI, soft robotics, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field. Comment: Preprint, to appear in the Journal of Field Robotics.
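    Read as a processing chain, the components listed above (perception, motion planning, grasping/cutting, control) suggest a simple selective-harvesting loop. The sketch below is purely illustrative and not from the paper: every class and function name is hypothetical, and real SHR systems are considerably more involved.

        from dataclasses import dataclass
        from typing import List, Optional

        # Hypothetical data container; real SHR systems track much richer state.
        @dataclass
        class FruitDetection:
            position: tuple   # (x, y, z) in the robot base frame
            ripeness: float   # 0.0 (unripe) .. 1.0 (fully ripe)

        def perceive(rgbd_frame) -> List[FruitDetection]:
            """Perception stage: detect fruits and estimate ripeness (placeholder)."""
            raise NotImplementedError

        def plan_motion(target: FruitDetection) -> Optional[list]:
            """Motion-planning stage: return a joint trajectory to the target, or None if unreachable."""
            raise NotImplementedError

        def grasp_and_cut(trajectory: list) -> bool:
            """Grasping/cutting stage: execute the trajectory and detach the fruit."""
            raise NotImplementedError

        def harvest_cycle(rgbd_frame, ripeness_threshold: float = 0.8) -> int:
            """One selective-harvesting cycle: only sufficiently ripe fruits are picked."""
            harvested = 0
            for fruit in perceive(rgbd_frame):
                if fruit.ripeness < ripeness_threshold:
                    continue  # selectivity: skip unripe fruit
                trajectory = plan_motion(fruit)
                if trajectory is not None and grasp_and_cut(trajectory):
                    harvested += 1
            return harvested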

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    Full text link
    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, and academia- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we also examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and to find their opportunities and potential for contribution.

    Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference

    Full text link
    We propose Conditional Adapter (CoDA), a parameter-efficient transfer learning method that also improves inference efficiency. CoDA generalizes beyond standard adapter approaches to enable a new way of balancing speed and accuracy using conditional computation. Starting with an existing dense pretrained model, CoDA adds sparse activation together with a small number of new parameters and a lightweight training phase. Our experiments demonstrate that the CoDA approach provides an unexpectedly efficient way to transfer knowledge. Across a variety of language, vision, and speech tasks, CoDA achieves a 2x to 8x inference speed-up compared to the state-of-the-art Adapter approach, with moderate to no accuracy loss and the same parameter efficiency.
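    As a rough illustration of the conditional-computation idea described in the abstract, the sketch below (PyTorch-style, with hypothetical module names and hyperparameters, not the released CoDA code) routes only a fraction of tokens through a small adapter while the remaining tokens bypass it unchanged; the routed fraction is the knob that trades accuracy for inference speed.

        import torch
        import torch.nn as nn

        class ConditionalAdapterSketch(nn.Module):
            """Illustrative sketch of a conditional adapter (not the official CoDA code).

            A lightweight router scores each token; only the top-k tokens pass
            through the small adapter MLP, while all other tokens are left
            unchanged, so the heavy computation is spent on a subset of tokens.
            """

            def __init__(self, d_model: int, bottleneck: int = 64, keep_ratio: float = 0.25):
                super().__init__()
                self.router = nn.Linear(d_model, 1)         # per-token routing score
                self.down = nn.Linear(d_model, bottleneck)   # adapter down-projection
                self.up = nn.Linear(bottleneck, d_model)     # adapter up-projection
                self.keep_ratio = keep_ratio

            def forward(self, hidden: torch.Tensor) -> torch.Tensor:
                # hidden: (batch, seq_len, d_model), output of a frozen pretrained layer
                batch, seq_len, d_model = hidden.shape
                k = max(1, int(seq_len * self.keep_ratio))

                scores = self.router(hidden).squeeze(-1)     # (batch, seq_len)
                topk = scores.topk(k, dim=-1).indices        # indices of routed tokens

                index = topk.unsqueeze(-1).expand(-1, -1, d_model)
                selected = torch.gather(hidden, 1, index)    # (batch, k, d_model)
                adapted = self.up(torch.relu(self.down(selected)))

                # Add the adapter output back only at the routed positions.
                out = hidden.clone()
                out.scatter_add_(1, index, adapted)
                return out

    The hard top-k routing here is only one plausible mechanism; the paper's own routing and training procedure may differ.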

    ADS_UNet: A Nested UNet for Histopathology Image Segmentation

    Full text link
    The UNet model consists of fully convolutional network (FCN) layers arranged as contracting encoder and upsampling decoder maps. Nested arrangements of these encoder and decoder maps give rise to extensions of the UNet model, such as UNet^e and UNet++. Other refinements include constraining the outputs of the convolutional layers to discriminate between segment labels when trained end to end, a property called deep supervision. This reduces feature diversity in these nested UNet models despite their large parameter space. Furthermore, for texture segmentation, pixel correlations at multiple scales contribute to the classification task; hence, explicit deep supervision of shallower layers is likely to enhance performance. In this paper, we propose ADS_UNet, a stage-wise additive training algorithm that incorporates resource-efficient deep supervision in shallower layers and takes performance-weighted combinations of the sub-UNets to create the segmentation model. We provide empirical evidence on three histopathology datasets to support the claim that the proposed ADS_UNet reduces correlations between constituent features and improves performance while being more resource efficient. We demonstrate that ADS_UNet outperforms state-of-the-art Transformer-based models by 1.08 and 0.6 points on the CRAG and BCSS datasets, respectively, yet requires only 37% of the GPU consumption and 34% of the training time required by Transformers. Comment: To be published in Expert Systems With Applications.
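    The "performance-weighted combinations of the sub-UNets" can be pictured with a minimal sketch like the one below. It is illustrative only: the paper's actual weighting scheme may differ, and a softmax over per-stage validation scores is simply one plausible choice made here for concreteness.

        import torch

        def combine_sub_unets(logits_per_stage, stage_scores):
            """Illustrative performance-weighted combination of sub-UNet outputs.

            logits_per_stage: list of tensors (batch, classes, H, W), one per sub-UNet stage.
            stage_scores: list of floats, e.g. per-stage validation Dice/IoU, used here
                          as a stand-in for the paper's performance weights.
            """
            scores = torch.tensor(stage_scores, dtype=torch.float32)
            weights = torch.softmax(scores, dim=0)   # normalise weights to sum to 1
            combined = sum(w * logits for w, logits in zip(weights, logits_per_stage))
            return combined                          # final segmentation logits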

    One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era

    Full text link
    OpenAI has recently released GPT-4 (a.k.a. ChatGPT Plus), which is demonstrated to be one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users, with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various aspects. According to Google Scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which will be a significant milestone for the development of AGI. Comment: A Survey on ChatGPT and GPT-4, 29 pages. Feedback is appreciated.

    Assessing the potential of golf among university students to leverage SDG 3 in Planbelas: a consulting project

    Get PDF
    Bologna Master's in Management. This consulting project was executed under the partnership of the ISEG School of Economics and Planbelas, with the main goal of addressing Planbelas' main concern: the profitability potential of Belas' new plots of land. To break the case down, the project focused on a key component, the assessment of the potential of golf among university students to leverage SDG 3 in Planbelas. To address this issue, both an internal and an external analysis of Belas were carried out, comprising a SWOT analysis and Porter's Five Forces, making it possible to assess this potential and better understand the current status of Belas. The methodology of the project encompassed both interviews and surveys, with the interviews being semi-structured. The surveys were used solely to support the data already obtained from the interviews; no deeper analysis of them was conducted. The data were analysed to make new observations and provide more comprehensive insight into the consulting project. The analysis reinforces the position that Belas targeting SDG 3 and using its golf course to promote itself could also prove beneficial to university students. Golf, which offers both physical and mental benefits, would give students a chance not only to socialise but also to lead a healthy lifestyle. Belas would therefore be promoting golf and a healthy lifestyle, as well as socialisation.

    Economia colaborativa (Collaborative Economy)

    Get PDF
    The importance of analysing the main legal challenges posed by the collaborative economy, given the implications of the paradigm shifts in business models and in the parties involved, is indisputable, and it corresponds to the need to strengthen the legal certainty of these practices, which foster economic growth and social well-being. The Research Centre for Justice and Governance (JusGov) assembled a multidisciplinary team that, in addition to legal scholars, includes researchers from other fields, such as economics and management, drawn from JusGov's various groups (with particular participation from researchers in the E-TEC group, State, Enterprise and Technology) and from other prestigious national and international institutions, to develop a project in this domain. Its aim is to identify the legal problems raised by the collaborative economy and to assess whether solutions for them already exist, while also reflecting on whether changes should be introduced or whether entirely new regulation is needed. The results of this research are presented in this volume, with which we hope to encourage continued debate on this topic. This work is funded by national funds through FCT (Fundação para a Ciência e a Tecnologia, I.P.) under funding UID/05749/202

    Wav2code: Restore Clean Speech Representations via Codebook Lookup for Noise-Robust ASR

    Full text link
    Automatic speech recognition (ASR) has achieved remarkable success thanks to recent advances in deep learning, but it usually degrades significantly under real-world noisy conditions. Recent works introduce speech enhancement (SE) as a front-end to improve speech quality, which has proved effective but may not be optimal for downstream ASR because of the speech distortion problem. Building on this, the latest works combine SE with the currently popular self-supervised learning (SSL) paradigm to alleviate distortion and improve noise robustness. Despite their effectiveness, the speech distortion caused by conventional SE still cannot be completely eliminated. In this paper, we propose a self-supervised framework named Wav2code to implement a generalized SE without distortions for noise-robust ASR. First, in the pre-training stage, clean speech representations from an SSL model are used to look up a discrete codebook via nearest-neighbor feature matching; the resulting code sequence is then exploited to reconstruct the original clean representations, so that they are stored in the codebook as a prior. Second, during fine-tuning, we propose a Transformer-based code predictor to accurately predict clean codes by modeling the global dependency of the input noisy representations, which enables the discovery and restoration of high-quality clean representations without distortions. Furthermore, we propose an interactive feature fusion network to combine the original noisy and the restored clean representations, considering both fidelity and quality and yielding even more informative features for downstream ASR. Finally, experiments on both synthetic and real noisy datasets demonstrate that Wav2code can alleviate the speech distortion and improve ASR performance under various noisy conditions, resulting in stronger robustness. Comment: 12 pages, 7 figures, submitted to IEEE/ACM TASLP.
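    The pre-training step described above, looking up a discrete codebook via nearest-neighbor feature matching, can be sketched roughly as follows. This is an assumption-laden illustration rather than the Wav2code implementation; in the actual framework a Transformer code predictor is additionally trained during fine-tuning to predict these codes from noisy inputs.

        import torch

        def codebook_lookup(features: torch.Tensor, codebook: torch.Tensor):
            """Nearest-neighbor codebook lookup (minimal illustration, not the Wav2code code).

            features: (batch, time, dim) SSL speech representations.
            codebook: (num_codes, dim) learned discrete code entries.
            Returns the per-frame code indices and the quantized (restored) representations.
            """
            flat = features.reshape(-1, features.size(-1))          # (batch*time, dim)
            dists = torch.cdist(flat, codebook)                     # distance to every code entry
            codes = dists.argmin(dim=-1).view(features.shape[:2])   # (batch, time) nearest code
            quantized = codebook[codes]                             # (batch, time, dim) restored features
            return codes, quantized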