
    Automating correctness verification of artifact-centric business process models

Context: The artifact-centric methodology has emerged over the last few years as a new paradigm to support business process management. In this paradigm, business processes are described from the point of view of the artifacts that are manipulated during the process. Objective: One of the research challenges in this area is verifying the correctness of this kind of business process model, where the model is formed of various artifacts that interact with one another. Method: In this paper, we propose a fully automated approach for verifying the correctness of artifact-centric business process models, taking into account that the state (lifecycle) and the values of each artifact (numerical data described by pre- and postconditions) influence the values and the states of the others. The lifecycles of the artifacts and the numerical data they manage are modeled using the Constraint Programming paradigm, an Artificial Intelligence technique. Results: Two correctness notions for artifact-centric business process models are distinguished (reachability and weak termination), and novel verification algorithms are developed to check them. The algorithms are complete: neither false positives nor false negatives are generated. Moreover, the algorithms offer precise diagnosis of the detected errors, indicating the execution that causes the error and the point where the lifecycle gets stuck. Conclusion: To the best of our knowledge, this paper presents the first verification approach for artifact-centric business process models that integrates pre- and postconditions, which define the behavior of the services, with numerical data verification when the model is formed of more than one artifact. The approach can detect errors not detectable with other approaches. Funding: Ministerio de Educación y Ciencia TIN2009-1371
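
    The paper's own algorithms are not reproduced here; as a minimal sketch of how a Constraint Programming model can encode an artifact lifecycle with a numerical precondition, the Python snippet below (using the python-constraint library; the states, transition relation, and amount condition are all hypothetical examples, not the paper's model) checks reachability of a target state by asking the solver for a witness execution.

```python
# Minimal reachability check for a single-artifact lifecycle, sketched as a
# CSP with the python-constraint solver (pip install python-constraint).
# States, transitions, and the numeric condition are illustrative assumptions.
from constraint import Problem

STATES = ["created", "invoiced", "paid"]

problem = Problem()
# One variable per lifecycle step, plus the numerical data the artifact carries.
problem.addVariable("state0", STATES)
problem.addVariable("state1", STATES)
problem.addVariable("amount", list(range(0, 1001)))

# Lifecycle constraint: the only allowed transitions.
problem.addConstraint(lambda s0: s0 == "created", ["state0"])
problem.addConstraint(
    lambda s0, s1: (s0, s1) in {("created", "invoiced"), ("invoiced", "paid")},
    ["state0", "state1"],
)
# Service precondition on the numeric data: invoicing requires a positive amount.
problem.addConstraint(lambda s1, a: s1 != "invoiced" or a > 0, ["state1", "amount"])

# Reachability of the target state: any solution is a witness execution;
# no solution means the lifecycle gets stuck before reaching it.
problem.addConstraint(lambda s1: s1 == "invoiced", ["state1"])
print(problem.getSolution() is not None)
```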

    Corporate influence and the academic computer science discipline. [4: CMU]

Prosopographical work on the four major centers for computer research in the United States has now been conducted, raising significant questions about the independence of so-called computer science.

    Novel and powerful 3D adaptive crisp active contour method applied in the segmentation of CT lung images

The World Health Organization estimates that 300 million people have asthma and 210 million have Chronic Obstructive Pulmonary Disease (COPD), and, according to the WHO, COPD will become the third major cause of death worldwide by 2030. Computational vision systems are commonly used in pulmonology to address the task of image segmentation, which is essential for accurate medical diagnoses. Segmentation defines the regions of the lungs in CT images of the thorax that must be further analyzed by the system or by a specialist physician. This work proposes a novel and powerful technique, named the 3D Adaptive Crisp Active Contour Method (3D ACACM), for the segmentation of CT lung images. The method starts with a sphere within the lung to be segmented, which is deformed by forces acting on it towards the lung borders. This process is performed iteratively in order to minimize an energy function associated with the 3D deformable model used. In the experimental assessment, the 3D ACACM is compared against three approaches commonly used in this field: automatic 3D Region Growing, the level-set algorithm based on coherent propagation, and semi-automatic segmentation by an expert using the 3D OsiriX toolbox. When applied to 40 CT scans of the chest, the 3D ACACM achieved an average F-measure of 99.22%, demonstrating its superiority in segmenting lungs in CT images.
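
    The 3D ACACM implementation itself is not shown here; as an analogous 2D illustration of the same iterative idea (a closed contour deformed to minimize an energy function until it settles on the object border), the sketch below uses scikit-image's classic active contour on a synthetic blob standing in for a lung region. The image, the initial circle (mirroring the paper's initial sphere placed within the lung), and the parameter values are all illustrative assumptions.

```python
# Analogous 2D sketch of iterative active-contour energy minimization,
# using scikit-image's classic snake; not the paper's 3D ACACM.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic image: a smoothed bright disc standing in for a lung region.
img = np.zeros((200, 200))
rr, cc = np.ogrid[:200, :200]
img[(rr - 100) ** 2 + (cc - 100) ** 2 < 60 ** 2] = 1.0
img = gaussian(img, sigma=3)

# Initial contour: a circle placed inside the object.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 40 * np.sin(s), 100 + 40 * np.cos(s)])

# alpha/beta weight the internal (smoothness) energy, gamma is the step size;
# the solver iterates until the contour stops moving appreciably.
snake = active_contour(img, init, alpha=0.015, beta=10.0, gamma=0.001)
print(snake.shape)  # (200, 2): the deformed contour points on the border
```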

    Living Innovation Laboratory Model Design and Implementation

The Living Innovation Laboratory (LIL) is an open and reusable way for multidisciplinary researchers to remotely control resources and co-develop user-centered projects. In the past few years, several papers about LIL have been published that attempt to discuss and define its model and architecture. Three characteristics of LIL are widely acknowledged: user-centeredness, co-creation, and context awareness, which distinguish it from test platforms and other innovation approaches. Its existing model consists of five phases: initialization, preparation, formation, development, and evaluation. Goal Net is a goal-oriented methodology for formalizing a process. In this thesis, Goal Net is adopted to derive a detailed and systematic methodology for LIL. The LIL Goal Net Model breaks the five phases of LIL into more detailed steps. Big data, crowdsourcing, crowdfunding, and crowdtesting take place at suitable steps to realize UUI, MCC, and PCA throughout the innovation process in LIL 2.0. It can serve as a guideline for any company or organization to develop a project in the form of an LIL 2.0 project. To prove the feasibility of the LIL Goal Net Model, it was applied to two real cases: one project is a Kinect game and the other an Internet product. Both were transformed to LIL 2.0 successfully, based on the LIL Goal Net methodology. The two projects were evaluated by phenomenography, a qualitative research method that studies human experiences and their relations in the hope of finding better ways to improve them. The positive results of the phenomenographic study showed that the new generation of LIL has advantages in terms of effectiveness and efficiency. Comment: This is a book draft
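
    The five-phase structure can be made concrete with a toy sketch (not the thesis's Goal Net formalism; only the phase names come from the abstract, everything else is hypothetical): the phases are modeled as a linear goal sequence, and a project's recorded steps are validated against it.

```python
# Toy sketch of the five LIL phases as a linear goal sequence; the Goal Net
# formalism in the thesis is richer, this only illustrates the ordering idea.
PHASES = ["initialization", "preparation", "formation", "development", "evaluation"]
TRANSITIONS = dict(zip(PHASES, PHASES[1:]))  # hypothetical linear ordering

def validate_project(steps):
    """Check that a project's recorded steps follow the phase ordering."""
    for current, nxt in zip(steps, steps[1:]):
        if TRANSITIONS.get(current) != nxt:
            raise ValueError(f"illegal transition {current} -> {nxt}")
    return True

print(validate_project(PHASES))  # True: the canonical five-phase walk
```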

    Predicting Hackathon Outcomes Using Machine Learning (Data Analytics)

Over the past two decades, hackathons have continued to increase in importance and frequency. Winning hackathon competitions can increase the visibility of winning teams and benefit participants in terms of future job opportunities, personal development, and finding potential investors for a project. Based on an existing dataset covering around 2000 hackathons and more than 60000 projects over a period of 5 years, gathered from the Devpost hackathon platform, this study uses Data Analysis and Machine Learning techniques to identify the aspects of hackathon teams that improve their chances of winning. This thesis is an attempt to address the gap in hackathon outcome prediction and to demonstrate the importance of different project features by presenting findings from a large-scope dataset. The applied techniques outline a framework for approaching the Machine Learning process on a brand-new classification problem, addressing the particular difficulties of the problem and the needs of the desired outcome. Naive Bayes, Logistic Regression, and Random Forest were selected because they are widely used in similar classification tasks, while XGBoost was chosen since, in recent years, it has given state-of-the-art performance on different Data Science problems. Besides that, the main focus was on project feature extraction and feature selection for better prediction. The developed classifiers are shown to outperform a common-sense rule-based baseline.
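
    As a hedged sketch of the model comparison described above (not the thesis's actual pipeline; the feature matrix, labels, and hyperparameters are placeholders for the real Devpost features), the snippet below cross-validates the four named classifiers plus a majority-class baseline using scikit-learn and xgboost.

```python
# Sketch of the classifier comparison on a hypothetical feature matrix X
# (project features) and binary labels y (won / did not win).
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from xgboost import XGBClassifier  # pip install xgboost

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                         # stand-in project features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # stand-in "won" label

models = {
    "baseline (majority class)": DummyClassifier(strategy="most_frequent"),
    "naive_bayes": GaussianNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "xgboost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```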

    Power-law distribution aware trust prediction

Trust prediction, which aims to predict the trust relations between users in a social network, is key to helping users discover reliable information. Many trust prediction methods have been proposed based on the low-rank assumption of a trust network. However, one typical property of trust networks is that the trust relations follow a power-law distribution, i.e., few users are trusted by many other users, while most tail users have few trustors. Due to these tail users, the fundamental low-rank assumption made by existing methods is seriously violated and becomes unrealistic. In this paper, we propose a simple yet effective method to address the problem of the violated low-rank assumption. Instead of discovering the low-rank component of the trust network alone, we simultaneously learn a sparse component of the trust network to describe the tail users. With both the learned low-rank and sparse components, the trust relations in the whole network can be better captured. Moreover, the transitive closure structure of the trust relations is also integrated into our model. We then derive an effective iterative algorithm to infer the parameters of our model, along with a proof of correctness. Extensive experimental results on real-world trust networks demonstrate the superior performance of our proposed method over the state of the art.
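
    The paper's exact solver (with the transitive-closure term and its correctness proof) is not shown above; as a generic sketch of the low-rank plus sparse split it builds on, the robust-PCA-style alternating shrinkage below decomposes a toy trust matrix T into a low-rank component L and a sparse component S that absorbs the deviating entries. The thresholds, iteration count, and toy data are all assumptions.

```python
# Generic low-rank + sparse decomposition by alternating shrinkage;
# a simplified robust-PCA-style heuristic, not the paper's algorithm.
import numpy as np

def shrink(M, tau):
    """Entrywise soft-thresholding (promotes sparsity)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_shrink(M, tau):
    """Singular-value soft-thresholding (promotes low rank)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lowrank_plus_sparse(T, lam=0.5, mu=1.0, n_iter=200):
    """Approximate T by L (low rank) + S (sparse) via alternating updates."""
    L = np.zeros_like(T)
    S = np.zeros_like(T)
    for _ in range(n_iter):
        L = svd_shrink(T - S, 1.0 / mu)   # fit the bulk of the network
        S = shrink(T - L, lam / mu)       # absorb the deviating entries
    return L, S

# Toy trust matrix: a rank-1 "bulk" plus a handful of sparse outlier relations.
rng = np.random.default_rng(0)
T = np.outer(rng.random(100), rng.random(100))
idx = rng.integers(0, 100, size=(30, 2))
T[idx[:, 0], idx[:, 1]] += 3.0
L, S = lowrank_plus_sparse(T)
print(np.linalg.matrix_rank(L, tol=1e-2), int((np.abs(S) > 1e-3).sum()))
```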