21 research outputs found

    A Big-Data-Analytics Framework for Supporting Logistics Problems in Smart-City Environments

    Container delivery management is a widely studied problem. Typically, it concerns the movement of containers by truck from ships to factories or wholesalers and vice versa. As interest in shipping goods by container is increasing, and delivery points can be far from railways in various areas of interest, it is important to evaluate techniques for managing container transport that spans several days. The time horizon considered is a whole working week, rather than a single day as in classical drayage problems. Truck fleet management companies are typically interested in such optimization, as they plan how to match their trucks to incoming transportation orders. This planning is relevant for both strategic and operational considerations, as the price of a transportation order depends strictly on how it is fulfilled. It is worth noting that, from a mathematical point of view, this is an NP-hard problem. In this paper, a Decision Support System for managing the tasks to be assigned to each truck of a fleet is presented, in order to optimize the number of transportation orders fulfilled in a week. The proposed system implements a hybrid optimization algorithm capable of improving on the performance typically reported in the literature. The proposed heuristic implements a hybrid genetic algorithm that generates chains of consecutive orders that can be executed by a single truck. Moreover, it uses an assignment algorithm to evaluate the optimal solution over the selected order chains.
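    The abstract describes the approach only at a high level. The Python sketch below illustrates the general chain-then-assign idea under stated assumptions: the week length, the fitness function, and a mutation-only evolutionary loop standing in for the full hybrid genetic algorithm are all illustrative, and names such as decode_into_chains are hypothetical rather than taken from the paper.

# Minimal sketch of the chain-then-assign idea (not the authors' code): an
# evolutionary loop evolves an ordering of transportation orders, the decoder
# greedily cuts the ordering into week-feasible chains, and a simple assignment
# step gives each truck at most one chain.
import random

WEEK_HOURS = 5 * 8          # assumed one-week horizon of working hours
random.seed(0)

def decode_into_chains(order_sequence, durations):
    """Cut a permutation of orders into consecutive chains a truck can run in a week."""
    chains, current, used = [], [], 0
    for order in order_sequence:
        if used + durations[order] > WEEK_HOURS:
            chains.append(current)
            current, used = [], 0
        current.append(order)
        used += durations[order]
    if current:
        chains.append(current)
    return chains

def fitness(order_sequence, durations, n_trucks):
    """Orders fulfilled when each truck executes at most one chain (longest chains first)."""
    chains = decode_into_chains(order_sequence, durations)
    chains.sort(key=len, reverse=True)          # simple assignment: best chains to the fleet
    return sum(len(c) for c in chains[:n_trucks])

def genetic_search(durations, n_trucks, pop_size=30, generations=200):
    """Mutation-only evolutionary loop, standing in for the paper's hybrid GA."""
    orders = list(durations)
    population = [random.sample(orders, len(orders)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda s: fitness(s, durations, n_trucks), reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    best = max(population, key=lambda s: fitness(s, durations, n_trucks))
    return best, fitness(best, durations, n_trucks)

if __name__ == "__main__":
    durations = {f"order_{i}": random.randint(2, 12) for i in range(40)}  # hours per order
    _, fulfilled = genetic_search(durations, n_trucks=5)
    print("orders fulfilled in the week:", fulfilled)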

    Methods and tools for analysis and management of risks and regulatory compliance in the healthcare sector: the Hospital at Home – HaH

    Changing or creating a new organization means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural risks and legal risks. The former are related to the risks of errors that may occur during processes, while the latter are related to the compliance of processes with regulations. Managing the risks therefore implies proposing changes to the processes that lead to the desired result: an optimized process. In order to manage a company and optimize it in the best possible way, the organizational aspect, risk management and legal compliance should not only be taken into account, but should all be analyzed simultaneously with the aim of finding the right balance that satisfies them all. This is exactly the aim of this thesis: to provide methods and tools to balance these three characteristics. To enable this type of optimization, ICT support is used. This work is not intended to be a computer science or law thesis but an interdisciplinary thesis. Most of the work done so far is vertical and confined to a specific domain. The particularity and aim of this thesis is not so much to carry out an in-depth analysis of a particular aspect, but rather to combine several important aspects, normally analyzed separately, which nevertheless affect and influence each other. In order to carry out this kind of interdisciplinary analysis, the knowledge bases of both areas were involved and the combination and collaboration of experts in the various fields was necessary. Although the methodology described is generic and can be applied to all sectors, a particular use case was chosen to show its application. The case study considered is a new type of healthcare service that allows patients with acute conditions to be hospitalized at home. This provides the possibility to perform experiments using a real hospital database.

    Methods and tools for analysis and management of risks and regulatory compliance in the healthcare sector: the hospital at home – HaH

    Changing or creating an organisation means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural and legal risks. The former are related to the risks of errors that may occur during processes, while the latter are related to the compliance of processes with regulations. Managing the risks implies proposing changes to the processes that lead to the desired result: an optimised process. In order to manage a company and optimise it in the best possible way, the organisational aspect, risk management and legal compliance should not only be taken into account, but should all be analysed simultaneously with the aim of finding the right balance that satisfies them all. This is the aim of this thesis: to provide methods and tools to balance these three characteristics. To enable this type of optimisation, ICT support is used. This work is not a thesis in computer science or law, but rather an interdisciplinary thesis. Most of the work done so far is vertical and confined to a specific domain. The particularity and aim of this thesis is not to carry out an in-depth analysis of a particular aspect, but rather to combine several important aspects, normally analysed separately, which nevertheless affect and influence each other. In order to carry out this kind of interdisciplinary analysis, the knowledge bases of both areas were involved and the combination and collaboration of experts in the various fields was necessary. Although the methodology described is generic and can be applied to all sectors, the case study considered is a new type of healthcare service that allows patients with acute conditions to be hospitalised at home. This provides the possibility to perform experiments using a real hospital database.

    An empirically-based framework for ontology modularization

    Modularity is increasingly used as an approach to the information overload problem in ontologies. It eases cognitive complexity for humans and computational complexity for machines. The current literature on modularity focuses mainly on techniques, tools, and evaluation metrics. However, ontology developers still face difficulty in selecting the correct technique for specific applications, and the current tools for modularity are not sufficient. These issues stem from a lack of theory about the modularisation process. Several researchers have proposed a framework for modularity to solve this problem, but it had not been realised until now. In this article, we survey the existing literature to identify and populate dimensions of modules, experimentally evaluate and characterise 189 existing modules, and create a framework for modularity based on these results. The framework guides the ontology developer throughout the modularisation process. We evaluate the framework with a use case for the Symptom ontology.

    Borders and conflicts in the Mediterranean Basin

    The aim of this book is to re-read the history of the Mediterranean basin through a double interpretative key: sea and border. In order to avoid possible misunderstandings it is worth clarifying the sense of these two words, as the sea – and the Mediterranean in particular – is itself a border. The starting point is to re-read the past of the Mediterranean by moving the observation point from the land to the sea. This perspective aims to carry out a de-construction and to dismantle, step by step, the consolidated ethnocentrism of our analyses, in the sense given to the word by William Graham Sumner (1906). It aims, in other words, to avoid looking at the members, structure, culture and history of local groups other than one's own with reference to one's own values, habits and rules, as this interpretation of the other inevitably leads to overestimating one's own culture and devaluing that of others. Secondly, after this first approach, it is possible to build an interpretative model able to recognize what is different from one's own culture – the alien, the stranger – not as an enemy, but simply as “different”. This process has a dedicated place: the border, namely the place where diversities come into contact and where contamination is accepted; a territory where what is different does not frighten, as otherness is lived as an opportunity, contamination is an occasion for growth, and the hybrid is the rule.

    An ontology-driven unifying metamodel of UML Class Diagrams, EER, and ORM2

    Software interoperability and application integration can be realized by using their respective conceptual data models, which may be represented in different conceptual data modeling languages. Such modeling languages seem similar, yet are known to be distinct. Several translations between subsets of the languages' features exist, but there is no unifying framework that respects most language features of the static structural components and constraints. We aim to fill this gap. To this end, we designed a common and unified ontology-driven metamodel of the static, structural components and constraints that unifies ER, EER, UML Class Diagrams v2.4.1, and ORM and ORM2, such that each one is a proper fragment of the consistent metamodel. The paper also presents notable insights into the relatively few common entities and constraints, an analysis of roles, relationships, and attributes, and a discussion of other modeling motivations. We describe two practical use cases of the metamodel: a quantitative assessment of the entities of 30 models in ER/EER, UML, and ORM/ORM2, and a qualitative evaluation of inter-model assertions.

    Analyzing Tweets For Predicting Mental Health States Using Data Mining And Machine Learning Algorithms

    Tweets are usually the outcome of people's feelings on various topics. Twitter allows users to post casual and emotional thoughts and share them in real time. Around 20% of U.S. adults use Twitter. Using word-frequency and singular value decomposition methods, we identified the behavior of individuals through their tweets. We graded depressive and anti-depressive keywords using tweet time-series, time-window, and time-stamp methods. We have collected around four million tweets since 2018. A parameter, the Depressive Index, is computed using the F1 score and the Matthews correlation coefficient (MCC) to indicate the depressive level. A framework showing the Depressive Index and the Happiness Index is prepared with time, location, and keywords, and delivers F1 score, MCC, and CI values. COVID-19 changed the routines of most people's lives and affected mental health. We studied the tweets and compared them with COVID-19 growth. The Happiness Index from our work is compared with the World Happiness Report for Georgia, New York, and Sri Lanka. An interactive framework is prepared to analyze the tweets, depict the Happiness Index, and compare it. Bad words in tweets are analyzed, and a map showing the Happiness Index is computed for all US states and compared with WalletHub data. We add tweets continuously to a framework that delivers an atlas of maps based on the Happiness Index, and make these maps available for further study. We forecasted tweets with real-time data. Our tweet results and the WHO COVID-19 reports follow a similar pattern. A new moving average method is presented; this process gives accurate results at peaks of the function and improves the error percentage. An interactive GUI portal computes the Happiness Index, Depressive Index, and feel-good factors, predicts keywords, and prepares a Happiness Index map. We plan to create a public web portal to make these results available to users. Upon completion of the proposed GUI application, users will be able to obtain the Happiness Index, Depressive Index values, Happiness map, and keyword predictions for the desired dates and geographical locations instantaneously.
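    The abstract names the F1 score and MCC but does not give the index formula. The Python sketch below shows one plausible keyword-frequency reading of a Depressive Index together with standard F1/MCC computations; the keyword lists, the index definition, and all names are assumptions for illustration, not the authors' definitions.

# Minimal sketch: a keyword-frequency "depressive index" plus F1 and Matthews
# correlation coefficient computed from binary labels. Keyword lists and the
# index formula are illustrative assumptions.
import math

DEPRESSIVE = {"sad", "hopeless", "lonely", "tired", "depressed"}
ANTI_DEPRESSIVE = {"happy", "grateful", "excited", "hopeful", "joy"}

def depressive_index(tweets):
    """Fraction of keyword hits that are depressive, in [0, 1]."""
    dep = sum(w in DEPRESSIVE for t in tweets for w in t.lower().split())
    anti = sum(w in ANTI_DEPRESSIVE for t in tweets for w in t.lower().split())
    return dep / (dep + anti) if dep + anti else 0.0

def f1_and_mcc(y_true, y_pred):
    """F1 score and Matthews correlation coefficient from binary labels."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return f1, mcc

if __name__ == "__main__":
    tweets = ["feeling sad and tired today", "so happy and grateful", "lonely night again"]
    print("depressive index:", round(depressive_index(tweets), 2))
    print("F1, MCC:", f1_and_mcc([1, 0, 1], [1, 0, 0]))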

    A foundation for ontology modularisation

    There has been great interest in realising the Semantic Web. Ontologies are used to define Semantic Web applications. Ontologies have grown to be large and complex, to the point where they cause cognitive overload for humans, in understanding and maintaining them, and for machines, in processing and reasoning over them. Furthermore, building ontologies from scratch is time-consuming and not always necessary. Prospective ontology developers could consider reusing existing ontologies of good quality. However, an entire large ontology is not always required for a particular application; only a subset of the knowledge may be relevant. Modularity deals with simplifying an ontology for a particular context, or by structure, into smaller ontologies, thereby preserving the contextual knowledge. There are a number of benefits to modularising an ontology, including simplified maintenance and machine processing, as well as collaborative efforts whereby work can be shared among experts. Modularity has been successfully applied to a number of different ontologies to improve usability and manage complexity. However, problems exist for modularity that have not been satisfactorily addressed. Currently, modularity tools generate large modules that do not exclusively represent the context. Partitioning tools, which ought to generate disjoint modules, sometimes create overlapping modules. These problems arise from a number of issues: different module types have not been clearly characterised, it is unclear what the properties of a 'good' module are, and it is unclear which evaluation criteria apply to specific module types. In order to solve the problem, a number of theoretical aspects have to be investigated. It is important to determine which ontology module types are the most widely used and to characterise each such type by distinguishing properties. One must identify the properties that a 'good' or 'usable' module meets.

    In this thesis, we investigate these problems with modularity systematically. We begin by identifying dimensions for modularity to define its foundation: use case, technique, type, property, and evaluation metric. Each dimension is populated with sub-dimensions as fine-grained values. The dimensions are used to create an empirically based framework for modularity by classifying a set of ontologies with them, which results in dependencies among the dimensions. The formal framework can be used to guide the user in modularising an ontology and as a starting point in the modularisation process. To address module quality, new and existing metrics were implemented in a novel tool, TOMM, and an experimental evaluation with a set of modules was performed, resulting in dependencies between the metrics and module types. These dependencies can be used to determine whether a module is of good quality. To address the shortcomings of existing modularity techniques, we created five new algorithms to improve on the current tools and techniques and experimentally evaluated them. The algorithms of the resulting tool, NOMSA, perform as well as those of other tools for most performance criteria. Of NOMSA's five algorithms, two generate modules of good quality when compared with the expected dependencies of the framework, while the modules of the remaining three correspond to some of the expected values of the metrics for the ontology set in question.

    Solving these problems with modularity resulted in a formal foundation for modularity which comprises: an exhaustive set of modularity dimensions with dependencies between them, a framework for guiding the modularisation process and annotating modules, a way to measure the quality of modules using the novel TOMM tool with its new and existing evaluation metrics, the SUGOI tool for module management, which has been investigated for module interchangeability, and an implementation of new algorithms to fill the gaps left by insufficient tools and techniques.
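    The abstract does not list the metrics themselves. The Python sketch below shows two generic structural module-quality metrics, relative size and cohesion, assumed purely for illustration; they are not necessarily the metrics implemented in TOMM, and all names and data are hypothetical.

# Minimal sketch of two illustrative module-quality metrics over a set-based
# view of an ontology: relative size (share of terms kept) and cohesion
# (share of touching axioms that stay entirely inside the module).
def relative_size(module_terms, ontology_terms):
    """Share of the source ontology's terms kept in the module."""
    return len(module_terms) / len(ontology_terms)

def cohesion(module_terms, axioms):
    """Fraction of axioms (term pairs) touching the module whose both ends stay inside it."""
    touching = [a for a in axioms if a[0] in module_terms or a[1] in module_terms]
    internal = [a for a in touching if a[0] in module_terms and a[1] in module_terms]
    return len(internal) / len(touching) if touching else 1.0

if __name__ == "__main__":
    ontology = {"Symptom", "Pain", "Fever", "Disease", "Treatment"}
    axioms = [("Pain", "Symptom"), ("Fever", "Symptom"), ("Disease", "Treatment")]
    module = {"Symptom", "Pain", "Fever"}
    print("relative size:", relative_size(module, ontology))
    print("cohesion:", cohesion(module, axioms))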

    Executing collaborative business processes on blockchain: a system

    Nowadays, organizations are pressed to collaborate in order to take advantage of their complementary capabilities and to provide best-of-breed products and services to their customers. To do so, organizations need to manage business processes that span beyond their organizational boundaries. Such processes are called collaborative business processes. One of the main roadblocks to implementing collaborative business processes is the lack of trust between the participants. Blockchain provides a decentralized ledger that cannot be tampered with and that supports the execution of programs called smart contracts. These features make it possible to execute collaborative processes between untrusted parties without relying on a central authority. However, implementing collaborative business processes directly on top of such low-level blockchain elements is cumbersome, error-prone, and requires specialized skills. In contrast, established Business Process Management Systems (BPMSs) provide convenient abstractions for the rapid development of process-oriented applications. This thesis addresses the problem of automating the execution of collaborative business processes on top of blockchain technology in a way that takes advantage of the trust-enhancing capabilities of this technology while offering the development convenience of traditional BPMSs. The thesis also addresses the question of how to support scenarios in which new parties may be onboarded at runtime and in which parties need the flexibility to change the default routing logic of the business process. We explore architectural approaches and modelling concepts, formulating design principles and requirements that are implemented in a novel blockchain-based BPMS named CATERPILLAR. The CATERPILLAR system supports two methods to implement, execute, and monitor blockchain-based processes: compiled and interpreted. It also supports two mechanisms for controlled flexibility, whereby participants can collectively decide on updating the process during its execution and on granting and revoking parties' access rights.
    https://www.ester.ee/record=b536494
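    To make the "interpreted" method more concrete, the Python sketch below shows a language-agnostic reading of the idea: the process model is stored as data, and a contract-like object checks, for every task, that it is enabled and that the caller is the participant bound to it. CATERPILLAR itself uses Solidity smart contracts on Ethereum; this sketch, its class, and all names are illustrative assumptions rather than the system's actual interface.

# Minimal sketch of an interpreted process execution with access control and a
# simple controlled-flexibility rule; the ledger is stood in for by an
# append-only log.
class ProcessInterpreter:
    def __init__(self, flow, bindings, start_task):
        self.flow = flow            # task -> next task(s): the routing logic stored as data
        self.bindings = bindings    # task -> participant allowed to execute it
        self.enabled = {start_task}
        self.log = []               # append-only history, standing in for the ledger

    def execute(self, task, caller):
        """Execute a task only if it is enabled and the caller is bound to it."""
        if task not in self.enabled:
            raise ValueError(f"{task} is not enabled")
        if self.bindings[task] != caller:
            raise PermissionError(f"{caller} is not bound to {task}")
        self.enabled.remove(task)
        self.enabled.update(self.flow.get(task, []))
        self.log.append((task, caller))

    def update_binding(self, task, new_party, approvals, participants):
        """Controlled flexibility: rebind a task only if all participants approve."""
        if approvals >= len(participants):
            self.bindings[task] = new_party

if __name__ == "__main__":
    flow = {"submit_order": ["approve_order"], "approve_order": ["ship_goods"]}
    bindings = {"submit_order": "buyer", "approve_order": "supplier", "ship_goods": "carrier"}
    proc = ProcessInterpreter(flow, bindings, "submit_order")
    proc.execute("submit_order", "buyer")
    proc.execute("approve_order", "supplier")
    print(proc.log)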

    Exploiting general-purpose background knowledge for automated schema matching

    The schema matching task is an integral part of the data integration process. It is usually the first step in integrating data. Schema matching is typically very complex and time-consuming. It is therefore, for the most part, carried out by humans. One reason for the low degree of automation is the fact that schemas are often defined with deep background knowledge that is not itself present within the schemas. Overcoming the problem of missing background knowledge is a core challenge in automating the data integration process. In this dissertation, the task of matching semantic models, so-called ontologies, with the help of external background knowledge is investigated in depth in Part I. Throughout this thesis, the focus lies on large, general-purpose resources, since domain-specific resources are rarely available for most domains. Besides new knowledge resources, this thesis also explores new strategies to exploit such resources. A technical base for the development and comparison of matching systems is presented in Part II. The framework introduced here allows for simple and modularized matcher development (with background knowledge sources) and for extensive evaluation of matching systems. Among the largest structured sources of general-purpose background knowledge are knowledge graphs, which have grown significantly in size in recent years. However, exploiting such graphs is not trivial. In Part III, knowledge graph embeddings are explored, analyzed, and compared, and multiple improvements to existing approaches are presented. In Part IV, numerous concrete matching systems which exploit general-purpose background knowledge are presented. Furthermore, exploitation strategies and resources are analyzed and compared. This dissertation closes with a perspective on real-world applications.
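    The dissertation's systems are not spelled out in the abstract. The Python sketch below illustrates the general embedding-based matching idea only: each schema or ontology concept is represented by a vector (here hard-coded toy vectors; in practice they would come from a knowledge graph embedding model), and concepts from the two sides are aligned by cosine similarity above a threshold. All data, names, and the threshold are illustrative assumptions.

# Minimal sketch of embedding-based schema/ontology matching: align source and
# target concepts whose embedding vectors have high cosine similarity.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def match(source_emb, target_emb, threshold=0.8):
    """Return (source, target, similarity) pairs whose embeddings are similar enough."""
    correspondences = []
    for s, su in source_emb.items():
        best = max(target_emb, key=lambda t: cosine(su, target_emb[t]))
        sim = cosine(su, target_emb[best])
        if sim >= threshold:
            correspondences.append((s, best, round(sim, 3)))
    return correspondences

if __name__ == "__main__":
    source = {"Author": [0.9, 0.1, 0.0], "Paper": [0.1, 0.8, 0.2]}
    target = {"Writer": [0.85, 0.15, 0.05], "Article": [0.05, 0.9, 0.1], "Venue": [0.0, 0.2, 0.9]}
    print(match(source, target))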