
    Deep Learning Based Multi-Label Text Classification of UNGA Resolutions

    The main goal of this research is to produce useful software for the United Nations (UN) that could help speed up the process of classifying UN documents according to the Sustainable Development Goals (SDGs), in order to monitor progress at the global level in fighting poverty, discrimination, and climate change. Human labeling of UN documents would be a daunting task given the size of the affected corpus, so automatic labeling must be adopted, at least as the first step of a multi-phase process, to reduce the overall effort of cataloguing and classification. Deep Learning (DL) is nowadays one of the most powerful tools of state-of-the-art (SOTA) AI for this task, but it very often comes at the cost of an expensive and error-prone preparation of a training set. For multi-label classification of domain-specific text, it would seem that DL cannot be adopted effectively without a sufficiently large domain-specific training set. In this paper, we show that this is not always true. We propose a novel method that is able, through statistics like TF-IDF, to exploit pre-trained SOTA DL models (such as the Universal Sentence Encoder) without any need for traditional transfer learning or any other expensive training procedure. We show the effectiveness of our method in a legal context by classifying UN Resolutions according to their most related SDGs.
    Comment: 10 pages, 10 figures, accepted paper at ICEGOV 202
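    The general idea behind such training-free classification (scoring each document against vectors built from the textual descriptions of the labels) can be sketched as follows. This is a minimal illustration only: plain TF-IDF stands in for the pre-trained Universal Sentence Encoder, and the SDG snippets and threshold are invented for the example, not taken from the paper's actual pipeline.

```python
# Training-free multi-label classification: embed label descriptions and
# documents in one vector space, then keep every label whose cosine
# similarity to the document clears a threshold. Plain TF-IDF stands in
# here for a pre-trained sentence encoder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative label descriptions (stand-ins for the official SDG texts)
sdg_labels = {
    "SDG 1":  "end poverty in all its forms everywhere",
    "SDG 13": "take urgent action to combat climate change and its impacts",
}

documents = [
    "Resolution urging member states to take action on climate change.",
]

# Fit a single vector space over label descriptions and documents together
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(sdg_labels.values()) + documents)
label_vecs = matrix[: len(sdg_labels)]
doc_vecs = matrix[len(sdg_labels):]

# Multi-label decision: every label above the threshold is assigned
scores = cosine_similarity(doc_vecs, label_vecs)
threshold = 0.1  # illustrative cut-off
for doc, row in zip(documents, scores):
    predicted = [name for name, s in zip(sdg_labels, row) if s >= threshold]
    print(doc, "->", predicted)
```

Because no model parameters are fitted to labeled examples, the same scheme works on any corpus for which textual label descriptions exist; swapping the TF-IDF vectors for sentence-encoder embeddings is a drop-in change.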

    From Words to Images Through Legal Visualization

    One of the common characteristics of legal documents is the absolute preponderance of text and their specific domain language, whose complexity can result in impenetrability for those who have no legal expertise. In some experiments, visual communication has been introduced into legal documents to make their meaning clearer and more intelligible, whilst visualizations have also been generated automatically from semantically-enriched legal data. As part of ongoing research that aims to create user-friendly privacy terms by integrating graphical elements and Semantic Web technologies, the process of creation and interpretation of visual legal concepts will be discussed. The analysis of current approaches to this subject represents the point of departure for proposing an empirical methodology inspired by interaction and human-centered design practices.

    The economic impact of moderate stage Alzheimer's disease in Italy: Evidence from the UP-TECH randomized trial

    Background: There is consensus that dementia is the most burdensome disease for modern societies. Few cost-of-illness studies have examined the complexity of the Alzheimer's disease (AD) burden, considering at the same time health and social care, cash allowances, informal care, and out-of-pocket expenditure by families. Methods: This is a comprehensive cost-of-illness study based on the baseline data from a randomized controlled trial (UP-TECH) enrolling 438 patients with moderate AD and their primary caregivers living in the community. Results: The societal burden of AD, composed of public, patient, and informal care costs, was about €20,000/yr. Of this, the cost borne by the public sector was €4,534/yr. The main driver of public cost was the national cash-for-care allowance (€2,324/yr), followed by drug prescriptions (€1,402/yr). Out-of-pocket expenditure predominantly concerned the cost of private care workers. The value of informal care peaked at €13,590/yr. Socioeconomic factors do not influence AD public cost, but do affect the level of out-of-pocket expenditure. Conclusion: The burden of AD reflects the structure of Italian welfare. Families predominantly manage AD patients. Public expenditure goes mostly to drugs and cash-for-care benefits. From a State perspective in the short term, the advantage of these care arrangements is clear compared to the cost of residential care. However, if caregivers are not adequately supported, savings may soon be offset by the higher risk of caregiver morbidity and mortality produced by high burden and stress. The study has been registered on the website www.clinicaltrials.org (Trial Registration number: NCT01700556). Copyright © International Psychogeriatric Association 2015.

    Socioeconomic Predictors of the Employment of Migrant Care Workers by Italian Families Assisting Older Alzheimer's Disease Patients: Evidence from the Up-Tech Study

    Background: The availability of family caregivers of older people is decreasing in Italy as the number of migrant care workers (MCWs) hired by families increases. There is little evidence on the influence of socioeconomic factors on the employment of MCWs. Method: We analyzed baseline data from 438 older people with moderate Alzheimer's disease (AD) and their family caregivers enrolled in the Up-Tech trial. We used bivariate analysis and multilevel regressions to investigate the association between independent variables - education, social class, and the availability of a care allowance - and three outcomes - employment of an MCW, hours of care provided by the primary family caregiver, and hours provided by the family network (primary and other family caregivers). Results: The availability of a care allowance and the educational level were independently associated with employing MCWs. A significant interaction between education and care allowance was found, suggesting that more educated families are more likely to spend the care allowance to hire an MCW. Discussion: Socioeconomic inequalities negatively influenced access both to private care and to the care allowance, leading disadvantaged families to directly provide more assistance to AD patients. Care allowance entitlement needs to be reformed in Italy and in countries with similar long-term care and migration systems. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved.

    Modeling data complexity in public history and cultural heritage

    The publication by Galleries, Libraries, Archives and Museums of metadata about their collections is fundamental for the creation of our shared digital cultural heritage. Yet, we notice, these digital collections are, on one hand, of little use to scholars (because of the inconsistent quality of the published records) and, on the other hand, fail to attract the interest of the general public (because of their dry content). These problems are exacerbated by the current move towards public history, where citizens are no longer just passive actors but play an active role in contributing, maintaining, and curating historical records, leading some to question the trustworthiness of collections to which non-scholars can contribute. The core issue behind all these problems is, we argue, a (doomed) search for objectivity, often caused by the fact that data models ignore the derivative and stratified nature of cultural objects and allow only one point of view to be expressed. In turn, this forces the publication of bowdlerized records and removes any venue for the expression of disagreement and differing opinions. We propose an approach named contexts to solve these issues. The adoption of contexts makes it possible to support multiple points of view inside the same dataset, allowing not only multiple scholars to provide their own, possibly contrasting, points of view, but also making it possible to incorporate additions, corrections, and more complex kinds of commentary from citizens without compromising the trustworthiness of the whole dataset.

    Embedding semantic annotations within texts: the FRETTA approach

    In order to make semantic assertions about the text content of a document, we need a mechanism to identify and organize the text structures of the document itself. Such a mechanism would closely resemble a document-oriented markup language, but would be free of the classical constraints of an embedded markup language, having no limitations imposed by the sequentiality, containment, or contiguity of text fragments. In past years we developed EARMARK, our OWL proposal for expressing arbitrary semantic annotations about the structure and the text content of a document. In this paper we describe FRETTA, our mechanism for rendering arbitrary EARMARK annotations (including non-sequential, non-hierarchical, and non-contiguous ones) in XML, bringing into a unifying framework a half dozen syntactic tricks used in the literature to handle overlapping structures in a strictly hierarchical language.
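    One of the classic syntactic tricks for overlap that such a unifying framework must handle is the milestone technique: a range that cannot nest inside the XML tree is serialized as a pair of empty start/end marker elements instead of a containing element. A minimal sketch, using illustrative element and attribute names that are not FRETTA's actual vocabulary:

```python
# Milestone trick: serialize possibly overlapping character ranges over a
# text as empty start/end marker elements, so the result stays well-formed
# XML even when the ranges could not nest as real container elements.
# Element/attribute names here are illustrative, not FRETTA's own syntax.
from xml.sax.saxutils import escape

def serialize_with_milestones(text, ranges):
    """ranges: list of (id, start, end) character offsets; may overlap."""
    events = []  # (offset, order, tag): at equal offsets, ends precede starts
    for rid, start, end in ranges:
        events.append((start, 1, f'<ann-start ref="{rid}"/>'))
        events.append((end, 0, f'<ann-end ref="{rid}"/>'))
    out, pos = [], 0
    for offset, _, tag in sorted(events):
        out.append(escape(text[pos:offset]))  # text up to the marker
        out.append(tag)
        pos = offset
    out.append(escape(text[pos:]))            # trailing text
    return "<doc>" + "".join(out) + "</doc>"

# Two annotations that overlap and therefore cannot nest as real elements
xml = serialize_with_milestones("one two three", [("a", 0, 7), ("b", 4, 13)])
print(xml)
```

The empty markers carry the range identity in an attribute, so a consumer can reconstruct the original (overlapping) annotation set while any off-the-shelf XML parser still accepts the document.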

    Multi-layer markup and ontological structures in Akoma Ntoso

    The XML documents that represent legal resources contain information and legal knowledge that belong to many distinct conceptual layers. This paper shows how the Akoma Ntoso standard keeps these layers well separated while providing ontological structures on top of them. Additionally, this paper illustrates how Akoma Ntoso allows multiple interpretations, provided by different agents, over the same set of texts and concepts, and how current semantic technologies can use these interpretations to reason over the underlying legal texts.

    Long-term preservation of legal resources

    Over the last decade, large-scale electronic collections of legal documents have become increasingly widespread in public administrations, especially in those entitled to provide the official and legal publication of legal resources. While the original purpose of these huge document bases was, basically, to produce a digital counterpart of their traditional representation on paper, new and challenging requirements are now starting to arise: not only supporting legal drafting, law-making workflows, and consolidated versions of the law, which are well managed by Akoma Ntoso, but also long-term preservation, semantic analysis, and ontological characterization. In this presentation we discuss how Akoma Ntoso copes with these new challenges, with particular regard to the long-term preservation of legal resources.