471 research outputs found

    Abstractive Text Summarization for Resumes With Cutting Edge NLP Transformers and LSTM

    Text summarization is a fundamental task in natural language processing that aims to condense large amounts of textual information into concise and coherent summaries. With the exponential growth of content and the need to extract key information efficiently, text summarization has gained significant attention in recent years. In this study, the performance of an LSTM and of the pre-trained T5, Pegasus, BART, and BART-Large models was evaluated on open-source datasets (XSum, CNN/Daily Mail, Amazon Fine Food Reviews, and News Summary) and on a purpose-built resume dataset. The resume dataset comprises 75 resumes and covers information such as language, education, experience, personal information, and skills. The primary objective of this research was to summarize resume text. The various approaches, including the LSTM, the pre-trained models, and the fine-tuned models, were assessed on the resume dataset. The BART-Large model fine-tuned on the resume dataset gave the best performance.
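The abstract above compares summarization models but does not say how performance was scored; ROUGE is the usual metric for such comparisons. The following is a minimal, illustrative sketch of ROUGE-1 F1 (unigram overlap), an assumption on my part rather than the study's actual evaluation code.

```python
from collections import Counter

# Hypothetical sketch: ROUGE-1 F1, a common metric for scoring a model
# summary against a reference. The abstract does not name its metric.

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1 between a reference and a candidate summary."""
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    if not ref_tokens or not cand_tokens:
        return 0.0
    # Overlap counts are clipped by how often each token appears in the reference.
    ref_counts = Counter(ref_tokens)
    cand_counts = Counter(cand_tokens)
    overlap = sum(min(ref_counts[t], cand_counts[t]) for t in cand_counts)
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A candidate that drops one reference word scores below 1.0:
print(rouge1_f1("five years of python experience", "five years python experience"))
```

In practice a study like this would report ROUGE-1/2/L from a standard package rather than a hand-rolled function; the sketch only shows the shape of the computation.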

    D7.5 FIRST consolidated project results

    The FIRST project commenced in January 2017 and concluded in December 2022, including a 24-month suspension due to the COVID-19 pandemic. Throughout the project, we delivered seven technical reports, conducted three workshops on Key Enabling Technologies for Digital Factories in conjunction with CAiSE (in 2019, 2020, and 2022), produced a number of PhD theses, and published over 56 papers (plus a number of submitted journal papers). The purpose of this deliverable is to provide an updated account of the findings from our previous deliverables and publications. It compiles the original deliverables with the revisions necessary to accurately reflect the final scientific outcomes of the project.

    Strategies for Improving Supply Chain Management in United Nations Peacekeeping Missions

    Managers of the United Nations' humanitarian operations are under intense pressure to deliver their programs efficiently: 75% of supply chains experience disruptions, and supply chains account for 60% to 80% of expenses, amid limited funding and increasing scrutiny by member states. Humanitarian operations are inextricably linked to the performance of a supply chain. Therefore, if supply chain managers in the United Nations (UN) fail to understand and adopt dynamic capabilities, they can experience operational underperformance, eroding the trust of financial supporters. Grounded in dynamic capability theory, the purpose of this qualitative multiple-case study was to explore strategies that executive supply chain managers of the UN use to leverage operational efficiencies in a peacekeeping program. The research participants comprised nine UN staff members in leadership positions who had successfully developed and implemented strategies resulting in operational efficiencies. Data were collected from semi-structured interviews and relevant public organizational documents. Three themes emerged during data analysis: analytical, innovation, and knowledge management capabilities; effective supply chain leadership; and risk management. These yielded several strategies for operationalizing humanitarian aid more efficiently. A key recommendation is the application of analytical, innovative, and technological capabilities together with effective leadership that fosters accountability, change management, collaboration, knowledge sharing, and partnerships. In conclusion, an efficient supply chain can help the UN meet its global sustainability goals, improving social well-being and helping achieve a sustainable future for everybody.

    Mediators Metadata Management Services: An Implementation Using GOA++ System

    The main contribution of this work is the development of a Metadata Manager to interconnect heterogeneous and autonomous information sources in a flexible, extensible, and transparent way. Interoperability at the semantic level is achieved using an integration layer, structured hierarchically, based on the concept of Mediators. The services of a Mediator Metadata Manager (MMM) are specified and implemented using functions based on the Outlines of GOA++. The MMM services are available in the form of a GOA++ API and can be accessed remotely via CORBA or through local API calls.
    Sociedad Argentina de Informática e Investigación Operativa
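The mediator idea described in this abstract can be sketched in a few lines: a mediator presents one uniform query interface over heterogeneous sources and merges their answers. All names below (`Source`, `Mediator`, `query`) are hypothetical illustrations; the actual MMM is built on GOA++ functions and exposed via CORBA, not shown here.

```python
# Illustrative sketch of the mediator pattern, not the MMM implementation.

class Source:
    """A wrapped information source holding its own local metadata records."""
    def __init__(self, name, records):
        self.name = name
        self.records = records  # list of dicts, one per metadata record

    def query(self, key, value):
        return [r for r in self.records if r.get(key) == value]

class Mediator:
    """Forwards one query to every registered source and merges the results,
    tagging each record with the source it came from."""
    def __init__(self, sources):
        self.sources = sources

    def query(self, key, value):
        results = []
        for src in self.sources:
            for rec in src.query(key, value):
                results.append({"source": src.name, **rec})
        return results

m = Mediator([
    Source("catalog_a", [{"author": "Silva", "title": "GOA++ Outlines"}]),
    Source("catalog_b", [{"author": "Silva", "title": "Mediator Services"}]),
])
print(len(m.query("author", "Silva")))  # both sources contribute a record
```

The hierarchical layer mentioned in the abstract would correspond to mediators registered as sources of higher-level mediators, which this flat sketch omits.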

    Information Technology's Role in Global Healthcare Systems

    Over the past few decades, modern information technology has made a significant impact on people’s daily lives worldwide. In the field of health care and prevention, there has been a progressive penetration of assistive health services such as personal health records, supporting apps for chronic diseases, or preventive cardiological monitoring. In 2020, the range of personal health services appeared to be almost unmanageable, accompanied by a multitude of different data formats and technical interfaces. The exchange of health-related data between different healthcare providers or platforms may therefore be difficult or even impossible. In addition, health professionals are increasingly confronted with medical data that were not acquired by themselves, but by an algorithmic “black box”. Further, externally recorded data tend to be incompatible with the data models of classical healthcare information systems. From the individual’s perspective, digital services allow for the monitoring of their own health status. However, such services can also overwhelm their users, especially elderly people, with too many features or barely comprehensible information. It therefore seems highly relevant to examine whether such “always at hand” services exceed the digital literacy levels of average citizens. In this context, this reprint presents innovative, health-related applications or services emphasizing the role of user-centered information technology, with a special focus on one of the aforementioned aspects.

    Exploiting general-purpose background knowledge for automated schema matching

    The schema matching task is an integral part of the data integration process. It is usually the first step in integrating data. Schema matching is typically very complex and time-consuming; it is therefore largely carried out by humans. One reason for the low degree of automation is the fact that schemas are often defined with deep background knowledge that is not itself present within the schemas. Overcoming the problem of missing background knowledge is a core challenge in automating the data integration process. In this dissertation, the task of matching semantic models, so-called ontologies, with the help of external background knowledge is investigated in depth in Part I. Throughout this thesis, the focus lies on large, general-purpose resources, since domain-specific resources are rarely available for most domains. Besides new knowledge resources, this thesis also explores new strategies to exploit such resources. A technical base for the development and comparison of matching systems is presented in Part II. The framework introduced here allows for simple and modularized matcher development (with background knowledge sources) and for extensive evaluations of matching systems. One of the largest structured sources of general-purpose background knowledge are knowledge graphs, which have grown significantly in size in recent years. However, exploiting such graphs is not trivial. In Part III, knowledge graph embeddings are explored, analyzed, and compared, and multiple improvements to existing approaches are presented. In Part IV, numerous concrete matching systems which exploit general-purpose background knowledge are presented. Furthermore, exploitation strategies and resources are analyzed and compared. This dissertation closes with a perspective on real-world applications.
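The core idea of this abstract, that labels which look nothing alike can still be matched via an external knowledge resource, can be illustrated with a toy example. Everything below is a hypothetical sketch: the "background resource" is a tiny synonym table, whereas the dissertation uses large knowledge graphs and their embeddings.

```python
# Illustrative sketch of background-knowledge-assisted schema matching.
# SYNONYMS stands in for a general-purpose knowledge resource; it is an
# invented toy table, not a resource from the dissertation.

SYNONYMS = {
    "person": {"human", "individual"},
    "automobile": {"car", "vehicle"},
}

def related(a: str, b: str) -> bool:
    """True if two labels match directly or via the background resource."""
    a, b = a.lower(), b.lower()
    if a == b:
        return True
    return b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

def match_schemas(left, right):
    """Return label pairs from the two schemas judged equivalent."""
    return [(l, r) for l in left for r in right if related(l, r)]

# Neither pair matches by string equality alone; the background
# knowledge is what bridges the vocabulary gap.
print(match_schemas(["Person", "Automobile"], ["Human", "Car"]))
```

Real systems replace the lookup table with similarity in an embedding space learned from a knowledge graph, so that "relatedness" is graded rather than binary, but the role of the external resource is the same.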