17 research outputs found

    Investigation of Modelling of Dynamic Business Processes

    The concept of process has become a fundamental tool in the everyday life of organizations and enterprises. It helps achieve organizational goals and determines the management of information systems (IS). One important problem in information systems research is how to adapt and align IS processes to changes in the enterprise/organizational environment and in customer needs and requirements. The adaptation process has long been integrated across several domains in order to achieve given goals and desired results. When the goal of a process is connected to the goals of the enterprise, the process is called a business process (BP). BPs are interrelated with respect to their structure and functions, which define their static and dynamic aspects. Most research effort has focused on static BP problems, i.e., on process structure. However, the traditional approach applied to implementing BPs in today's IS no longer covers the dynamically changing business environment, and adapting to that environment is an actual need of the enterprise. For this reason, this Ph.D. dissertation aims to cover BP-related issues broadly, paying special attention to their dynamic aspect, and introduces a modern, new perspective in order to define the proposed approach precisely. Earlier studies only mentioned the notion of the dynamic aspect of processes and dealt with a few related conceptual areas without defining it precisely; researchers in the same field likewise did not identify the various elements that affect the dynamic behavior of processes, nor the process components that are essential during BP execution (at runtime). Our investigation contributes to the BP field by providing a clear and comprehensive definition of the concepts related to dynamic behavior and by conceptually capturing the dynamic aspect of processes.
The latter helps in understanding how a BP functions at runtime. Furthermore, we examined the factors influencing dynamic behavior and their changes, as well as the elements or components affected by those changes. While some researchers concentrated on structure, they neglected when the elements of a BP changed and how they were modified. For this reason, we examined this area thoroughly, identified the various elements (components) that change factors can affect, and studied how the process adapts to this effect. BP modeling describes how a process works and defines all activities of the BP model in order to support understanding. The dissertation defines a taxonomy of the negative and positive properties (advantages and disadvantages) of existing BP models. These similarities helped us find, in the representation codes and functionalities, syntactic code elements that proved useful in transformations between models. Model-to-model transformation is a vast field, and many conversion methods have been described, but the model pairs we employed (BP Execution Language (BPEL) and Finite State Machines (FSM), as well as FSM and Hypergraph) had never been chosen before. Comparing models can also lead to a suitable coupling and integration of different models, in order to find ways to verify models that have not been formally approved. We introduced the hypergraph concept into this modeling area, which enables the use of graph algorithms, linear algebra, and the latest data science methods. This operationalized model supports model checking even after changes have been incorporated into running instances of dynamic processes. We implemented several representations based on this concept using various matrices and graph representation forms.
In future work, we may use the hypergraph and its models to verify either the different models of processes or the correctness of dynamically changed processes, introducing numerous new verification methods into this field. Implementing our approach facilitates the use of many tools for checking the process and certain of its properties by exploiting the hypergraph representation.
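The hypergraph view of a process described above can be illustrated with a minimal sketch. This is not the dissertation's actual model; the activity and data-object names are invented for illustration. Each activity is a vertex, each hyperedge groups the activities sharing a data object, and the incidence matrix opens the door to linear-algebra checks.

```python
# Minimal sketch of a hypergraph view of a business process.
# Vertices are activities; each hyperedge groups activities that
# share a data object. All names are illustrative assumptions.

def incidence_matrix(vertices, hyperedges):
    """Return the |V| x |E| 0/1 incidence matrix as nested lists."""
    return [[1 if v in e else 0 for e in hyperedges] for v in vertices]

activities = ["receive_order", "check_stock", "ship", "invoice"]
edges = [
    {"receive_order", "check_stock"},   # share the order document
    {"check_stock", "ship"},            # share the stock record
    {"ship", "invoice"},                # share the delivery note
]

M = incidence_matrix(activities, edges)

# A simple structural check enabled by the matrix form: every
# activity takes part in at least one hyperedge (no all-zero row).
isolated = [a for a, row in zip(activities, M) if not any(row)]
print(M)
print(isolated)  # [] -> every activity is connected
```

On top of such a matrix, standard graph algorithms and matrix methods (connectivity, spectral analysis) can be applied without any process-specific machinery.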

    Some Placement Techniques of Test Components Inspired by Fog Computing Approaches

    In this work we are interested in placing test components for the Internet of Things (IoT) and Smart Cities. Our work is inspired by similar works aiming at the placement of application components on Fog computational nodes. First, we give an overview of the decision variables to consider. Then, we define several types of constraints that may be included in the placement problem. Moreover, we list a set of possible objective functions to maximize or minimize. Finally, we propose some algorithms and techniques taken from the literature to solve the considered Test Component Placement Problem (TCPP).
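One family of placement techniques mentioned above can be sketched as a capacity-constrained greedy assignment. This is only an illustration of the problem shape; the node names, CPU capacities, and least-loaded objective are invented assumptions, not the paper's actual formulation.

```python
# Greedy sketch of test component placement: assign each test
# component to the feasible fog node with the lowest current load.
# Capacities, demands, and names are illustrative assumptions.

def place(components, nodes):
    """components: {name: cpu_demand}; nodes: {name: cpu_capacity}.
    Returns {component: node}; raises if a component cannot fit."""
    load = {n: 0 for n in nodes}
    placement = {}
    # Place the most demanding components first (a classic greedy order).
    for comp, demand in sorted(components.items(), key=lambda kv: -kv[1]):
        feasible = [n for n in nodes if load[n] + demand <= nodes[n]]
        if not feasible:
            raise ValueError(f"no node can host {comp}")
        best = min(feasible, key=lambda n: load[n])  # least-loaded node
        placement[comp] = best
        load[best] += demand
    return placement

nodes = {"fog1": 4, "fog2": 4}
components = {"tc_a": 3, "tc_b": 2, "tc_c": 1}
plan = place(components, nodes)
print(plan)
```

Swapping the objective (e.g. minimizing network latency instead of balancing load) only changes the `min(...)` key, which is why such greedy schemes adapt easily to the different objective functions the paper enumerates.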

    Towards a Runtime Standard-Based Testing Framework for Dynamic Distributed Information Systems

    In this work, we are interested in testing dynamic distributed information systems. That is, we consider a decentralized information system that can evolve over time. For this purpose, we propose a runtime standard-based test execution platform. The latter is built upon the standardized TTCN-3 testing language. The proposed platform ensures the execution of test cases at runtime. Moreover, it considers both structural and behavioral adaptations of the system under test. In addition, it is equipped with a test isolation layer that minimizes the risk of interference between business and testing processes. The platform also generates a minimal subset of test scenarios to execute after each adaptation. Finally, it proposes an optimal strategy to place the TTCN-3 test components among the system's execution nodes.

    Operationalizing and automating data governance

    The ability to cross data from multiple sources represents a competitive advantage for organizations. Yet, the governance of the data lifecycle, from the data sources to valuable insights, is largely performed in an ad-hoc or manual manner. This is especially concerning in scenarios where tens or hundreds of continuously evolving data sources produce semi-structured data. To overcome this challenge, we develop a framework for operationalizing and automating data governance. For the former, we propose a zoned data lake architecture and a set of data governance processes that allow the systematic ingestion, transformation, and integration of data from heterogeneous sources, in order to make them readily available to business users. For the latter, we propose a set of metadata artifacts that allow the automatic execution of data governance processes, addressing a wide range of data management challenges. We showcase the usefulness of the proposed approach on a real-world use case stemming from a collaborative project with the World Health Organization for the management and analysis of data about Neglected Tropical Diseases. Overall, this work contributes to facilitating organizations' adoption of data-driven strategies through a cohesive framework operationalizing and automating data governance. This work was partly supported by the DOGO4ML project, funded by the Spanish Ministerio de Ciencia e Innovación under project PID2020-117191RB-I00/AEI/10.13039/501100011033. Sergi Nadal is partly supported by the Spanish Ministerio de Ciencia e Innovación, as well as the European Union - NextGenerationEU, under project FJC2020-045809-I/AEI/10.13039/501100011033.
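The metadata-driven automation idea can be sketched minimally: a small metadata artifact describes each source, and one generic process moves records between lake zones by interpreting it. The zone names, metadata fields, and source name below are illustrative assumptions, not the paper's actual artifacts.

```python
# Sketch of metadata-driven ingestion through data lake zones.
# A declarative metadata artifact drives one generic process; all
# names and fields here are illustrative assumptions.

metadata = {
    "source": "who_ntd_feed",                      # hypothetical source
    "format": "json",
    "rename": {"ctry": "country", "cs": "cases"},  # schema mapping
}

def ingest(record, meta):
    """Landing -> trusted zone: apply the renames declared in metadata."""
    return {meta["rename"].get(k, k): v for k, v in record.items()}

landing = [{"ctry": "BR", "cs": 12}, {"ctry": "IN", "cs": 30}]
trusted = [ingest(r, metadata) for r in landing]
print(trusted)
```

Because the mapping lives in metadata rather than code, onboarding a new or evolved source means editing the artifact, not the pipeline, which is the essence of automating governance at scale.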

    Strategies for Managing Linked Enterprise Data

    Data, information, and knowledge have become key assets of our 21st-century economy. As a result, data and knowledge management have become key tasks with regard to sustainable development and business success. Often, knowledge is not explicitly represented; it resides in the minds of people or is scattered among a variety of data sources. Knowledge is inherently associated with semantics that convey its meaning to a human or machine agent. The Linked Data concept facilitates the semantic integration of heterogeneous data sources. However, we still lack an effective knowledge integration strategy applicable to enterprise scenarios, one that balances large amounts of data stored in legacy information systems and data lakes against tailored domain-specific ontologies that formally describe real-world concepts. In this thesis we investigate strategies for managing linked enterprise data, analyzing how actionable knowledge can be derived from enterprise data by leveraging knowledge graphs. Actionable knowledge provides valuable insights, supports decision makers with clear, interpretable arguments, and keeps its inference processes explainable. The benefits of employing actionable knowledge and a coherent management strategy for it span from a holistic semantic representation layer of enterprise data, i.e., representing numerous data sources as one consistent, integrated knowledge source, to unified interaction mechanisms with other systems that are able to effectively and efficiently leverage such actionable knowledge. Several challenges have to be addressed on different conceptual levels in pursuit of this goal, i.e., means for representing knowledge, semantic integration of raw data sources and subsequent knowledge extraction, communication interfaces, and implementation. To tackle those challenges, we present the concept of Enterprise Knowledge Graphs (EKGs) and describe their characteristics and advantages compared to existing approaches.
We study each challenge with regard to using EKGs and demonstrate their efficiency. In particular, EKGs are able to reduce the semantic data integration effort when processing large-scale heterogeneous datasets. Then, having built a consistent logical integration layer that hides heterogeneity behind the scenes, EKGs unify query processing and enable effective communication interfaces for other enterprise systems. The achieved results allow us to conclude that strategies for managing linked enterprise data based on EKGs exhibit reasonable performance, comply with enterprise requirements, and ensure integrated data and knowledge management throughout the data life cycle.
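The unified-query idea behind an EKG can be sketched with a toy triple set that integrates two heterogeneous legacy sources behind one lookup interface. The entities, predicates, and source tables below are invented for illustration, not an actual EKG schema.

```python
# Sketch: an Enterprise Knowledge Graph as a set of triples that
# integrates two heterogeneous sources behind one query interface.
# All entities and predicates are illustrative assumptions.

crm_rows = [("acme", "Acme Corp")]    # hypothetical legacy CRM table
erp_rows = [("acme", "ERP-0042")]     # hypothetical legacy ERP table

# Semantic integration: both sources map onto one graph.
triples = set()
triples |= {(cid, "hasName", name) for cid, name in crm_rows}
triples |= {(cid, "hasErpId", eid) for cid, eid in erp_rows}

def query(subject):
    """Unified lookup across what used to be separate systems."""
    return {p: o for s, p, o in triples if s == subject}

print(query("acme"))
```

A consumer of this layer never sees which facts came from which system, which is what the thesis means by a consistent logical integration layer with heterogeneity behind the scenes.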

    Conjunto de heurísticas de usabilidade para avaliação de aplicações móveis em smartphones

    Master's dissertation (mestrado), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2019. The innovations proposed by the mobile phone market have grown steadily in recent years, along with the increasing complexity of the hardware, operating systems, and applications available in this market. These changes bring new usability-related challenges that need to be considered during the application development process, as new forms of user-application interaction increasingly require behavioral adaptation. In this situation, usability is an important issue, which depends on usability factors such as the User, their characteristics and skills, the Task they intend to achieve, and also the Context of Use in which the user and the application are inserted. This dissertation presents a systematic literature review aiming to identify the usability heuristics used in academia and/or industry within the context of mobile applications. Based on the research results, this dissertation contributes a new proposal of a set of usability heuristics focused on mobile applications on smartphones, one that takes into account the "User", the "Task" to be performed, and mainly the "Context of Use" in which they fit as usability factors, Cognitive Load as an important usability attribute, and usability sub-heuristics, which aim to facilitate the understanding of the proposed guidelines and are among the main differentials presented. The components of this set are detailed in a model that was evaluated through two heuristic evaluations, which allowed improvements to be incorporated into the proposal.
Thus, a set of 13 usability heuristics and 183 sub-heuristics was proposed, which yielded better results in the heuristic evaluations. The proposal enabled the experts to find a greater number of usability problems, mostly of greater severity, compared to the proposal of Inostroza et al. As possible future work, further evaluations may be carried out to assess the proposal, including more experts in the field and applying the set of heuristics to a larger number of applications from different categories.

    Distribution-based Regression for Count and Semi-Bounded Data

    Data mining techniques have been successfully utilized in applications across significant fields, including pattern recognition, computer vision, medical research, etc. With the wealth of data generated every day, there is a lack of practical analysis tools to discover hidden relationships and trends. Among all statistical frameworks, regression has proven to be one of the strongest tools for prediction. The complexity of data, which is unfavorable for most models, is a considerable challenge in prediction. The ability of a model to perform accurately and efficiently is extremely important. Thus, a model must be selected that fits the data well, such that learning from previous data is efficient and highly accurate. This work is motivated by the limited number of regression analysis tools for multivariate count data in the literature. We propose two regression models for count data based on flexible distributions, namely the multinomial Beta-Liouville and the multinomial scaled Dirichlet, and evaluate them on the problem of disease diagnosis. Performance is measured by the accuracy of the prediction, which depends on the nature and complexity of the dataset. Our results show the efficiency of the two proposed regression models: the prediction performance of both models is competitive with other regression approaches previously used for count data and with the best results in the literature. Then, we propose three regression models for positive vectors based on flexible distributions for semi-bounded data, namely the inverted Dirichlet, inverted generalized Dirichlet, and inverted Beta-Liouville. The efficiency of these models is tested via real-world applications, including software defect prediction, spam filtering, and disease diagnosis. Our results show that the performance of the three proposed regression models is better than that of other commonly used regression models.
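To make the count-regression setting concrete, here is a standard baseline sketch: a plain Poisson regression with a log link fit by gradient ascent. This is NOT the multinomial Beta-Liouville or scaled Dirichlet models proposed in the thesis, only the kind of conventional count model they are compared against; the data are synthetic.

```python
# Baseline sketch for count-data regression: a Poisson GLM with a
# log link, log E[y] = a + b*x, fit by maximizing the Poisson
# log-likelihood with gradient ascent. Synthetic illustrative data.
import math

def fit_poisson(xs, ys, lr=0.01, steps=5000):
    """Gradient ascent on the Poisson log-likelihood."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(a + b * x)   # model's expected count
            ga += y - mu               # d loglik / d a
            gb += (y - mu) * x         # d loglik / d b
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b

# Counts generated from rate exp(0.5 + 0.8*x), rounded to integers,
# so the fitted coefficients should land close to (0.5, 0.8).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [round(math.exp(0.5 + 0.8 * x)) for x in xs]
a, b = fit_poisson(xs, ys)
print(round(a, 2), round(b, 2))
```

The thesis's contribution replaces the Poisson assumption with more flexible distributions (Beta-Liouville, scaled Dirichlet families) that better handle overdispersed multivariate counts, while the regression structure stays conceptually similar.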

    Exploiting general-purpose background knowledge for automated schema matching

    The schema matching task is an integral part of the data integration process. It is usually the first step in integrating data. Schema matching is typically very complex and time-consuming, and it is therefore, for the most part, carried out by humans. One reason for the low degree of automation is the fact that schemas are often defined with deep background knowledge that is not itself present within the schemas. Overcoming the problem of missing background knowledge is a core challenge in automating the data integration process. In this dissertation, the task of matching semantic models, so-called ontologies, with the help of external background knowledge is investigated in depth in Part I. Throughout this thesis, the focus lies on large, general-purpose resources, since domain-specific resources are rarely available for most domains. Besides new knowledge resources, this thesis also explores new strategies to exploit such resources. A technical base for the development and comparison of matching systems is presented in Part II. The framework introduced here allows for simple and modularized matcher development (with background knowledge sources) and for extensive evaluations of matching systems. One of the largest structured sources of general-purpose background knowledge is knowledge graphs, which have grown significantly in size in recent years. However, exploiting such graphs is not trivial. In Part III, knowledge graph embeddings are explored, analyzed, and compared, and multiple improvements to existing approaches are presented. In Part IV, numerous concrete matching systems that exploit general-purpose background knowledge are presented. Furthermore, exploitation strategies and resources are analyzed and compared. This dissertation closes with a perspective on real-world applications.
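One way background knowledge enters a matcher is via embedding similarity: schema elements are compared through vectors obtained from an external resource. The sketch below uses hand-made toy vectors in place of real knowledge graph embeddings, and its threshold rule is an invented simplification of what actual matching systems do.

```python
# Sketch of embedding-based schema matching: elements from two
# schemas are paired when the cosine similarity of their background-
# knowledge embeddings clears a threshold. Vectors are toy values,
# not real knowledge graph embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

emb = {  # toy embeddings standing in for a trained embedding model
    "person":  [0.90, 0.10, 0.00],
    "human":   [0.85, 0.15, 0.05],
    "invoice": [0.00, 0.20, 0.95],
}

def match(src_terms, tgt_terms, threshold=0.9):
    """Return (src, tgt) pairs whose similarity clears the threshold."""
    return [(s, t) for s in src_terms for t in tgt_terms
            if cosine(emb[s], emb[t]) >= threshold]

pairs = match(["person"], ["human", "invoice"])
print(pairs)
```

The payoff is that "person" and "human" match despite sharing no characters, because the external resource, not the schema text, carries the knowledge that they are related.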

    Extração e evolução de linhas de produtos de software usando Delta-Oriented Programming : um relato de experiência

    Master's dissertation (mestrado), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2019. Delta-Oriented Programming (DOP) is a flexible and modular approach to Software Product Line (SPL) implementation. Since 2010, the year the approach was proposed, several papers about DOP have been published. However, after conducting a systematic literature mapping study to analyze the real implications of the technique, it was noted that few of these studies rigorously evaluated the aspects related to the evolution of delta-oriented SPLs. Therefore, this work reports the implications of using this approach from three different perspectives: (i) extracting and evolving an Android application into an SPL using DOP; (ii) the characterization of safe and partially safe delta-oriented evolution scenarios through the templates existing in the literature; and (iii) an analysis regarding the change-impact and modularity properties of the technique during its evolution process. The results showed that, although the technique has greater adherence to the open-closed principle, its use may not be appropriate if the main interest is the modular evolution of product-line features; moreover, the technique is currently still limited to Java development because of the lack of plugins or tools that support other programming languages. This work was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
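The core DOP idea, deriving a product by applying delta modules to a core product, can be sketched outside Java. The feature names and operations below are invented for illustration; real DOP tooling works on Java classes, which is precisely the limitation the study points out.

```python
# Sketch of Delta-Oriented Programming: a product is derived by
# applying delta modules (add/remove/modify operations) to a core
# product. Features and method bodies are illustrative assumptions.

core = {"play": "play basic audio"}  # core product: method name -> body

def apply_delta(product, delta):
    out = dict(product)
    for name, body in delta.get("adds", {}).items():
        out[name] = body
    for name in delta.get("removes", []):
        del out[name]
    for name, body in delta.get("modifies", {}).items():
        out[name] = body                     # replaces the original
    return out

# Delta module activated when the "playlist" feature is selected.
d_playlist = {"adds": {"next_track": "advance playlist"},
              "modifies": {"play": "play current playlist entry"}}

def derive(core, deltas, features):
    product = dict(core)
    for feature, delta in deltas.items():
        if feature in features:              # application condition
            product = apply_delta(product, delta)
    return product

product = derive(core, {"playlist": d_playlist}, {"playlist"})
print(product)
```

Evolving the SPL means editing or adding delta modules rather than the core, which is where the open-closed adherence noted in the results comes from, and also where cross-delta change propagation can undermine modularity.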