720 research outputs found

    Generative AI


    Energy storage design and integration in power systems by system-value optimization

    Energy storage can play a crucial role in decarbonising power systems by balancing power and energy in time. Wider power system benefits arising from these balancing technologies include reduced grid expansion, renewable curtailment, and average electricity costs. However, with the proliferation of new energy storage technologies, it becomes increasingly difficult to identify which technologies are economically viable and how to design and integrate them effectively. Using large-scale energy system models of Europe, the dissertation shows that relying solely on the Levelized Cost of Storage (LCOS) metric for technology assessments can mislead, and that traditional system-value methods raise important questions about how to assess multiple energy storage technologies. Further, the work introduces a new complementary system-value assessment method, the market-potential method, which provides a systematic deployment analysis for assessing multiple storage technologies under competition. However, integrating energy storage in system models can lead to an unintended storage cycling effect, which occurs in approximately two-thirds of models and significantly distorts results. The thesis finds that traditional approaches to dealing with the issue, such as multi-stage optimization or mixed-integer linear programming, are either ineffective or computationally inefficient. A new approach is suggested that only requires appropriate model parameterization with variable costs while keeping the model convex, reducing the risk of misleading results. In addition, to enable energy storage assessments and energy system research around the world, the thesis extends the geographical scope of an existing European open-source model to global coverage. The newly built energy system model ‘PyPSA-Earth’ is demonstrated and validated in Africa. Using PyPSA-Earth, the thesis assesses for the first time the system value of 20 energy storage technologies across multiple scenarios in a representative future power system in Africa. The results offer insights into approaches for assessing multiple energy storage technologies under competition in large-scale energy system models. In particular, the dissertation addresses extreme cost uncertainty through a comprehensive scenario tree and finds that, apart from lithium and hydrogen, only seven energy storage technologies are optimization-relevant. The work also discovers that a heterogeneous storage design can increase power system benefits and that some energy storage technologies are more important than others. Finally, in contrast to traditional methods that consider only a single energy storage technology, the thesis finds that optimizing multiple energy storage options can significantly reduce total system costs, by up to 29%. The presented research findings can inform decision-making processes for the sizing, integration, and deployment of energy storage systems in decarbonized power systems, contributing to a paradigm shift in scientific methodology and advancing efforts towards a sustainable future.
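
    The parameterization fix can be made concrete with a toy model. The sketch below is not the dissertation's PyPSA-based model; it is a minimal convex dispatch LP, with illustrative prices and storage parameters, showing how a small variable cost on charging and discharging makes simultaneous charge/discharge (the cycling artifact) strictly suboptimal while the problem stays a convex LP.

```python
# Minimal sketch, not the dissertation's PyPSA-based model: a convex storage
# dispatch LP in which a small variable cost on charging and discharging makes
# the "storage cycling" artifact (simultaneous charge and discharge) strictly
# suboptimal. All prices and storage parameters are illustrative.
# Requires: pip install pulp
import pulp

T = 4
price = [50.0, 0.0, 0.0, 60.0]  # zero-price hours are where cycling can appear
eta_c, eta_d = 0.9, 0.9         # charge / discharge efficiency
e_max, p_max = 10.0, 5.0        # energy (MWh) and power (MW) limits
eps = 0.1                       # small variable cost per MWh of throughput

m = pulp.LpProblem("storage_dispatch", pulp.LpMinimize)
c = pulp.LpVariable.dicts("charge", range(T), lowBound=0, upBound=p_max)
d = pulp.LpVariable.dicts("discharge", range(T), lowBound=0, upBound=p_max)
e = pulp.LpVariable.dicts("soc", range(T), lowBound=0, upBound=e_max)

# Cost of charging minus discharge revenue, plus the throughput penalty.
# Without eps, charging and discharging at once is cost-neutral in zero-price
# hours (a degenerate optimum a solver may return); with eps it is penalized.
m += pulp.lpSum(price[t] * (c[t] - d[t]) + eps * (c[t] + d[t]) for t in range(T))

for t in range(T):
    prev = e[t - 1] if t > 0 else 0.0
    m += e[t] == prev + eta_c * c[t] - (1.0 / eta_d) * d[t]  # energy balance

m.solve(pulp.PULP_CBC_CMD(msg=False))
for t in range(T):
    print(t, c[t].value(), d[t].value(), e[t].value())
```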

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data, and the advancement of algorithms, AI has become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities it offers and the challenges it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics. Although AI was initially allowed to develop largely without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book therefore brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics, and the law.

    “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy

    Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges, many of them ethical and legal, and has the potential for both positive and negative impacts on organisations, society, and individuals. Offering multidisciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT’s capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and to enhance business activities such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and the consequences of biases, misuse, and misinformation. Opinion is split, however, on whether ChatGPT’s use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying the skills, resources, and capabilities needed to handle generative AI; examining the biases of generative AI attributable to training datasets and processes; exploring the business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess the accuracy of text produced by generative AI; and uncovering the ethical and legal issues involved in using generative AI across different contexts.

    Defining Safe Training Datasets for Machine Learning Models Using Ontologies

    Machine Learning (ML) models have been gaining popularity in recent years in a wide variety of domains, including safety-critical ones. While ML models have shown high accuracy in their predictions, they are still considered black boxes, meaning that developers and users do not know how the models make their decisions. While this is merely a nuisance in some domains, it makes ML models difficult to trust in safety-critical domains. To fully utilize ML models in safety-critical domains, there needs to be a method for improving trust in their safety and accuracy without human experts checking each decision. This research proposes a method to increase trust in ML models used in safety-critical domains by ensuring the safety and completeness of each model’s training dataset. Since most of a model’s complexity is built through training, ensuring the safety of the training dataset can help increase trust in the safety of the model. The method proposed in this research uses a domain ontology and an image quality characteristic ontology to validate the domain completeness and image quality robustness of a training dataset. This research also presents an experiment as a proof of concept for the method, in which ontologies are built for the emergency road vehicle domain.
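
    As a rough illustration of the validation idea, and not the thesis' actual implementation, the sketch below enumerates the classes of a domain ontology and checks that the training labels cover each class with a minimum number of images; the ontology file, IRIs, and threshold are all hypothetical.

```python
# Hedged sketch of the validation idea, not the thesis' implementation:
# enumerate the classes of a domain ontology and check that the training
# labels cover each class with a minimum number of images. The ontology
# file, IRIs, and threshold below are hypothetical.
# Requires: pip install rdflib
from collections import Counter
from rdflib import Graph, BNode, RDF, OWL

g = Graph()
g.parse("emergency_road_vehicles.owl", format="xml")  # hypothetical ontology

# Named classes only; blank nodes stem from restrictions, not domain concepts.
classes = {c for c in g.subjects(RDF.type, OWL.Class) if not isinstance(c, BNode)}

# Hypothetical labels: one ontology class IRI per training image.
dataset_labels = [
    "http://example.org/erv#Ambulance",
    "http://example.org/erv#FireEngine",
    # ... one entry per image in the training set
]

MIN_PER_CLASS = 50  # illustrative completeness threshold
counts = Counter(dataset_labels)
missing = sorted(str(c) for c in classes if counts[str(c)] < MIN_PER_CLASS)
if missing:
    print("Training dataset is incomplete for:", missing)
else:
    print("Domain completeness check passed.")
```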

    InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation

    This paper introduces InternVid, a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations for multimodal understanding and generation. The InternVid dataset contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed descriptions totalling 4.1B words. Our core contribution is to develop a scalable approach to autonomously build a high-quality video-text dataset with large language models (LLMs), thereby showcasing its efficacy in learning video-language representations at scale. Specifically, we utilize a multi-scale approach to generate video-related descriptions. Furthermore, we introduce ViCLIP, a video-text representation learning model based on ViT-L. Trained on InternVid via contrastive learning, this model demonstrates leading zero-shot action recognition and competitive video retrieval performance. Beyond basic video understanding tasks like recognition and retrieval, our dataset and model have broad applications. They are particularly beneficial for generating interleaved video-text data for learning a video-centric dialogue system, advancing video-to-text and text-to-video generation research. These proposed resources provide a tool for researchers and practitioners interested in multimodal video understanding and generation.
    Comment: Data and code: https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVi
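
    For readers unfamiliar with how such video-text representations are trained, the following is a generic sketch of the symmetric contrastive (CLIP-style) objective that models like ViCLIP optimize; it is not the authors' code, and the batch size and embedding dimension are arbitrary.

```python
# Generic sketch of the symmetric contrastive (CLIP-style) objective that
# video-text models such as ViCLIP optimize; not the authors' code. Batch
# size and embedding dimension below are arbitrary.
# Requires: pip install torch
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    """video_emb, text_emb: (batch, dim) embeddings of paired clips/captions."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                    # pairwise similarities
    targets = torch.arange(len(v), device=v.device)   # true pairs on diagonal
    # Cross-entropy in both directions: video-to-text and text-to-video.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))

# Random embeddings stand in for encoder outputs in this sketch.
print(contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)).item())
```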

    Design of an E-learning system using semantic information and cloud computing technologies

    Humanity currently faces many difficult problems that threaten human life and survival, and everyone can be affected by them, directly or indirectly. Education is a key solution to most of them. In this thesis we make use of current technologies to enhance and ease the learning process. We have designed an e-learning system based on semantic information and cloud computing, together with other technologies that contribute to improving the educational process and raising students’ attainment. The design was built after extensive research into useful technologies, their types, and examples of actual systems previously discussed by other researchers. In addition to the proposed design, an algorithm was implemented to identify the topics found in large textual educational resources; it was tested and proved efficient compared with other methods. The algorithm can extract the main topics from textual learning resources, link related resources, and generate interactive dynamic knowledge graphs, and it accomplishes these tasks accurately and efficiently even for large books. We used Wikipedia Miner, TextRank, and Gensim within the algorithm, and its accuracy was evaluated against Gensim, showing a large improvement. Augmenting the system design with the implemented algorithm produces many useful services for improving the learning process, such as: automatically identifying the main topics of large textual learning resources and connecting them to well-defined concepts from Wikipedia; enriching current learning resources with semantic information from external sources; providing students with browsable, dynamic, interactive knowledge graphs; and making use of learning groups to encourage students to share their learning experiences and feedback with other learners.
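
    To make the TextRank component concrete, here is a minimal, generic sketch; the thesis' pipeline also combines Wikipedia Miner and Gensim, so this illustrates only the TextRank idea, with an illustrative window size and example text.

```python
# Minimal, generic TextRank-style keyword extraction; the thesis' pipeline
# also uses Wikipedia Miner and Gensim, so this only illustrates the
# TextRank component. Window size and example text are illustrative.
# Requires: pip install networkx
import re
import networkx as nx

def textrank_keywords(text, window=4, top_k=5):
    words = re.findall(r"[a-z]+", text.lower())
    g = nx.Graph()
    # Link words that co-occur within a sliding window of each other.
    for i, w in enumerate(words):
        for u in words[i + 1:i + window]:
            if u != w:
                g.add_edge(w, u)
    scores = nx.pagerank(g)  # TextRank = PageRank on the co-occurrence graph
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(textrank_keywords(
    "Semantic e-learning systems link learning resources to concepts, and "
    "interactive knowledge graphs let students browse resources by concept."))
```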

    Improving approaches to material inventory management in construction industry in the UK

    Materials used in construction constitute a major proportion of the total cost of construction projects. An important factor of great concern that adversely affects construction projects is the location and tracking of materials, which normally arrive in bulk with minimal identification. Modern wireless technologies such as Radio Frequency Identification (RFID) and Personal Digital Assistants (PDAs), together with approaches such as Just-in-Time (JIT) delivery, are inadequately integrated into project management systems for easier and faster materials management and tracking and for overcoming human error. This research focuses on improving approaches to material inventory management in the UK construction industry through the formulation of an RFID-based materials management and tracking process for construction projects. A review of the existing literature identified many challenges and problems in material inventory management on construction projects, such as supply delays, shortages, price fluctuations, wastage and damage, and insufficient storage space. Six construction projects were selected as exploratory case studies, and cross-case analysis was used to investigate approaches to material inventory management practices: problems, implementation of ICT, and the potential for using emerging wireless technologies and systems (such as RFID and PDAs) for materials tracking. Findings showed similar problems of storage constraints and logistics across most of the projects. The synthesis of good practices pointed to an RFID-facilitated materials tracking system to make material handling easier, quicker, and more efficient with less paperwork. It was also recommended that Information and Communication Technology (ICT) tools be implemented to integrate plant, labour, and materials into one system. The findings from the case studies and the literature review were used to formulate a process for real-time material tracking using RFID that can improve material inventory management in the UK construction industry. Testing and validation helped shape a process that is useful, functional, and acceptable for possible development into a process system. Finally, the research achievements and contributions to knowledge, along with its limitations, were discussed, and some suggestions for further research were outlined.
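
    As a toy illustration of what such a real-time tracking process consumes and produces, and not the process formulated in this research, the sketch below models RFID read events from gate or handheld readers updating a site material inventory; the tag IDs, locations, and data model are all hypothetical.

```python
# Toy illustration only, not the process formulated in the research: RFID
# read events from gate or handheld readers update a site material
# inventory in real time. Tag IDs and locations are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MaterialInventory:
    # tag_id -> (material description, last known location, last seen time)
    items: dict = field(default_factory=dict)

    def register(self, tag_id, description):
        self.items[tag_id] = (description, "unregistered", None)

    def on_rfid_read(self, tag_id, reader_location):
        """Called whenever a reader reports seeing a tag."""
        desc = self.items.get(tag_id, ("unknown material",))[0]
        self.items[tag_id] = (desc, reader_location, datetime.now())

inv = MaterialInventory()
inv.register("E200-001", "steel rebar bundle")
inv.on_rfid_read("E200-001", "site-gate-2")  # delivery passes the gate reader
print(inv.items["E200-001"])
```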

    Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey

    Large language models (LLMs) have significantly advanced the field of natural language processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of applications. However, directly applying LLMs to solve sophisticated problems in specific domains meets many hurdles, caused by the heterogeneity of domain data, the sophistication of domain knowledge, the uniqueness of domain objectives, and the diversity of constraints (e.g., various social norms, cultural conformity, religious beliefs, and ethical standards in the domain applications). Domain specialization techniques are key to making large language models disruptive in many applications. Specifically, to overcome these hurdles, there has been a notable increase in research and practice on the domain specialization of LLMs in recent years. This emerging field of study, with its substantial potential for impact, necessitates a comprehensive and systematic review to summarize and guide ongoing work in this area. In this article, we present a comprehensive survey of domain specialization techniques for large language models, an emerging direction critical for large language model applications. First, we propose a systematic taxonomy that categorizes LLM domain-specialization techniques by the level of access to the LLM and summarizes the framework for all the subcategories as well as their relations and differences to each other. Second, we present an extensive taxonomy of critical application domains that can benefit dramatically from specialized LLMs, discussing their practical significance and open challenges. Last, we offer our insights into the current research status and future trends in this area.
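
    One family of techniques such taxonomies cover for black-box LLMs is augmenting prompts with retrieved domain knowledge. The sketch below is a generic, hypothetical illustration of that idea using TF-IDF retrieval; the domain documents, query, and downstream LLM call are placeholders, not any particular system from the survey.

```python
# Generic, hypothetical illustration of one black-box specialization
# technique: retrieval-augmented prompting, where domain documents are
# retrieved and prepended to the query before calling an unmodified LLM.
# Documents, query, and the LLM call are placeholders.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

domain_docs = [  # hypothetical domain knowledge snippets
    "Regulation X requires clinical AI systems to log every inference.",
    "Class B device software must undergo unit and integration testing.",
    "Post-market surveillance reports are due annually for class C devices.",
]

def build_specialized_prompt(query, k=2):
    vec = TfidfVectorizer().fit(domain_docs + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(domain_docs))[0]
    top = sorted(range(len(domain_docs)), key=lambda i: -sims[i])[:k]
    context = "\n".join(domain_docs[i] for i in top)
    return f"Answer using this domain context:\n{context}\n\nQuestion: {query}"

prompt = build_specialized_prompt("What testing does class B software need?")
print(prompt)  # in a real pipeline this would be sent to the LLM
```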

    Beyond Quantity: Research with Subsymbolic AI

    How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches to subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?