
    Automatic text summarisation of case law using GATE with ANNIE and SUMMA plug-ins

    Legal reasoning and judicial verdicts in many legal systems depend heavily on case law. The ever-increasing volume of case law makes comprehending the case law relevant to a legal matter cumbersome for legal practitioners, which invariably stifles their efficiency. Legal reasoning and judicial verdicts would therefore be easier and faster if case law were available in an abridged form that preserves its original meaning. This paper used the General Information Extraction System Architecture approach and integrated Natural Language Processing, annotation, and Information Extraction tools to develop a software system that performs automatic extractive text summarisation of Nigerian Supreme Court case law. The summarised case law, reduced to about 20% of the original length, was evaluated for semantic preservation and shown to be 83% reliable. Keywords: case law, text summarisation, text engineering, text annotation, text extraction
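
    The pipeline described above scores and extracts existing sentences rather than generating new text. As a rough illustration of that idea only (not the authors' GATE/ANNIE/SUMMA implementation), the sketch below keeps roughly the top 20% of sentences ranked by average word frequency; the sentence-splitting heuristic and the scoring function are assumptions made purely for illustration.

```python
# Minimal sketch of frequency-based extractive summarisation, keeping roughly
# 20% of the sentences (the compression ratio reported in the abstract).
# Not the GATE/ANNIE/SUMMA pipeline used by the authors; an illustrative stand-in.
import re
from collections import Counter


def summarise(text: str, ratio: float = 0.2) -> str:
    # Naive sentence splitting on sentence-final punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    # Score each sentence by the average corpus frequency of its words.
    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    keep = max(1, int(len(sentences) * ratio))
    top = sorted(sentences, key=score, reverse=True)[:keep]
    # Re-emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)


if __name__ == "__main__":
    sample = ("The court held that the appeal lacked merit. "
              "The appellant failed to file within the time allowed. "
              "Costs were awarded to the respondent. "
              "The judgment of the lower court was affirmed.")
    print(summarise(sample, ratio=0.25))
```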

    Analysing and Visualizing Tweets for U.S. President Popularity

    In our society we are continually immersed in a stream of information (opinions, preferences, comments, etc.), and Twitter shows how users react, in real time and with interest, to news and events they attend or take part in. In this context it becomes essential to have appropriate tools to analyze and extract the data and information hidden in the large number of tweets. Social networks are an unrivalled source of information in terms of the amount and variety of information that can be extracted from them. We propose an approach to analyze, with the help of automated tools, comments and opinions taken from social media in a real-time environment. We developed a software system in R based on a Bayesian approach to text categorization, with the aim of identifying the sentiments expressed in tweets posted on the Twitter social platform. Analyzing the sentiment spread across social networks makes it possible to identify free thoughts, expressed authentically. In particular, we analyze the sentiments related to U.S. President popularity and also visualize the tweets on a map. This enables an additional analysis of people's real-time reactions by associating the reaction of each person who posted a tweet with their real-time position in the United States. Finally, we provide a visualization based on the geographical analysis of the sentiments of the users who posted the tweets.
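
    The classification step described above is a Bayesian text categorizer. The authors built their system in R; the sketch below illustrates the same general idea as a naive Bayes sentiment classifier in Python with scikit-learn, where the tiny labelled tweet sample and the positive/negative label set are invented for illustration and are not the paper's data or code.

```python
# Illustrative naive Bayes text categorization for tweet sentiment.
# The training examples and labels below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled tweets.
tweets = [
    "Great speech tonight, very inspiring",
    "Proud of the progress being made",
    "Terrible decision, completely out of touch",
    "Disappointed by the latest announcement",
]
labels = ["positive", "positive", "negative", "negative"]

# Bag-of-words features fed into a multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, labels)

# Classify new, unseen tweets.
print(model.predict(["What an inspiring announcement",
                     "Such a terrible speech"]))
```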

    Propuesta de estudio del campo semántico de los libros electrónicos en Twitter [Proposed study of the semantic field of electronic books on Twitter]

    Social networks have transformed the Web into a repository from which very diverse information can be extracted. Twitter is one of the best known and has had one of the greatest increases in number of users in recent years. Content analysis of its messages provides valuable information about the authors of the tweets, the relationships between followers and those they follow, etc., as well as about the messages that set trends. In this paper, this kind of research is applied to the semantic field of electronic books.

    Automating the Semantic Mapping between Regulatory Guidelines and Organizational Processes

    The mapping of regulatory guidelines to organizational processes is an important aspect of a regulatory compliance management system, and automating this mapping can greatly improve the overall compliance process. There is existing research on mapping between different kinds of entities, such as ontology mapping, sentence similarity, semantic similarity and regulation-requirement mapping; however, there has not been adequate research on automating the mapping between regulatory guidelines and organizational processes. In this paper, we explain how Natural Language Processing and Semantic Web technologies can be applied in this area. In particular, we explain how the structures of the regulation ontology and the process ontology can be exploited to compute the similarity between a regulatory guideline and a process. Our methodology is validated using a case study in the pharmaceutical industry, which has shown promising results.
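
    The core of the approach is a similarity computation between a regulatory guideline and a process that exploits the two ontologies. As a minimal, hypothetical sketch (not the paper's actual algorithm), the snippet below represents a guideline and a process as sets of ontology concept labels and uses Jaccard overlap with a threshold to decide whether they map; the concept sets, the measure and the threshold are all assumptions made for illustration.

```python
# Hypothetical concept-overlap similarity between a regulatory guideline and
# an organisational process. The paper's method uses the structure of its
# regulation and process ontologies; this is only an illustrative stand-in.
def jaccard(a: set[str], b: set[str]) -> float:
    # Ratio of shared concepts to all concepts mentioned by either side.
    return len(a & b) / len(a | b) if a | b else 0.0


# Hypothetical ontology concepts annotated on a guideline and on a process.
guideline_concepts = {"batch_record", "approval", "quality_unit", "retention"}
process_concepts = {"batch_record", "review", "approval", "archiving"}

similarity = jaccard(guideline_concepts, process_concepts)
print(f"guideline-process similarity: {similarity:.2f}")

# A simple threshold turns the score into a mapping decision.
THRESHOLD = 0.3  # assumed value for illustration
print("mapped" if similarity >= THRESHOLD else "not mapped")
```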

    Semantic framework for regulatory compliance support

    Regulatory Compliance Management (RCM) is a management process that an organization implements to conform to regulatory guidelines. Two processes that contribute towards automating RCM are: (i) extraction of meaningful entities from regulatory text and (ii) mapping of regulatory guidelines to organisational processes. These processes help keep the RCM system up to date as regulatory guidelines change. The update process is still manual, since there has been comparatively little research in this direction. Semantic Web technologies are potential candidates for making the update process automatic. There are stand-alone frameworks that use Semantic Web technologies such as Information Extraction, Ontology Population, Similarity and Ontology Mapping; however, the integration of these approaches into semantic compliance management has not yet been explored. Considering these two processes as crucial constituents, the aim of this thesis is to automate the processes of RCM. It proposes a framework called RegCMantic, which is designed and developed in two main phases. The first part of the framework extracts regulatory entities from regulatory guidelines; extracting meaningful entities from the guidelines helps in relating them to organisational processes. The framework identifies the document components and extracts entities from them using four components: (i) a parser, (ii) definition terms, (iii) ontological concepts and (iv) rules. The parser breaks a sentence down into useful segments, and the extraction is carried out by applying the definition terms, ontological concepts and rules to those segments. The extracted entities are core entities, such as subject, action and obligation, and aux-entities, such as time, place, purpose, procedure and condition. The second part of the framework relates regulatory guidelines to organisational processes. It uses a mapping algorithm that considers three types of entities in the regulatory domain (regulation topic, core entities and aux-entities) and two types of entities in the process domain (subject and action). Using these entities, it computes an aggregation of three similarity scores: a topic-score, a core-score and an aux-score. The aggregate similarity score determines whether a regulatory guideline is related to an organisational process. The RegCMantic framework is validated through the development of a prototype system, which implements a case study involving regulatory guidelines governing the pharmaceutical industry in the UK. The evaluation of the case-study results has shown improved accuracy in extracting regulatory entities and in relating regulatory guidelines to organisational processes. This research has contributed to extracting meaningful entities from regulatory guidelines provided as unstructured text and to mapping regulatory guidelines to organisational processes semantically.
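
    The mapping decision in RegCMantic rests on aggregating a topic-score, a core-score and an aux-score. The sketch below illustrates one plausible form of that aggregation, a weighted sum compared against a threshold; the weights, the linear combination and the threshold are illustrative assumptions, not the thesis's actual parameters.

```python
# Illustrative aggregation of the three similarity scores described in the
# abstract. Weights and threshold are assumed values, not RegCMantic's own.
from dataclasses import dataclass


@dataclass
class SimilarityScores:
    topic: float  # similarity between the regulation topic and the process
    core: float   # similarity of core entities (subject, action, obligation)
    aux: float    # similarity of aux-entities (time, place, purpose, ...)


def aggregate(s: SimilarityScores,
              w_topic: float = 0.3,
              w_core: float = 0.5,
              w_aux: float = 0.2) -> float:
    # Weighted sum of the three partial similarity scores.
    return w_topic * s.topic + w_core * s.core + w_aux * s.aux


def is_related(s: SimilarityScores, threshold: float = 0.6) -> bool:
    # A guideline is considered related to a process when the aggregate
    # similarity exceeds the threshold.
    return aggregate(s) >= threshold


if __name__ == "__main__":
    scores = SimilarityScores(topic=0.8, core=0.7, aux=0.4)
    print(f"aggregate={aggregate(scores):.2f}, related={is_related(scores)}")
```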