
    Report on the 6th ADBIS’2002 conference

    The 6th East European Conference ADBIS 2002 was held on September 8–11, 2002 in Bratislava, Slovakia. It was organised by the Slovak University of Technology (in particular, its Faculty of Electrical Engineering and Information Technology) in Bratislava in co-operation with ACM SIGMOD, the Moscow ACM SIGMOD Chapter, and the Slovak Society for Computer Science. The call for papers attracted 115 submissions from 35 countries. The international program committee, consisting of 43 researchers from 21 countries, selected 25 full papers and 4 short papers for a monograph volume published by Springer-Verlag. Besides these 29 regular papers, the volume also includes 3 invited papers presented at the conference as invited lectures. Additionally, 20 papers were selected for the Research Communications volume. The authors of the accepted papers come from 22 countries on 4 continents, indicating the truly international recognition of the ADBIS conference series. The conference had 104 registered participants from 22 countries and included invited lectures, tutorials, and regular sessions. This report describes the goals of the conference and summarizes the issues discussed during the sessions.

    Hangout: Redes sociais e o cruzamento de campos no contexto organizacional

    This article presents the results of an exploratory study carried out within the scope of a broader project (work in progress) that investigates the use of social networks and aims to question their role, relevance, potential and limitations in organizations. Starting from Bourdieu's concept of field, we identify social spaces and informational spaces that intersect in organizational contexts. Scientific laboratories were chosen as the experimental setting because they are knowledge-intensive organizations whose core activity is the creation, sharing and use of knowledge, and the social network LinkedIn was chosen because it is the most widely used online professional network worldwide.

    Applying the UML and the Unified Process to the Design of Data Warehouses

    The design, development and deployment of a data warehouse (DW) is a complex, time-consuming and failure-prone task. This is mainly due to the different aspects taking part in a DW architecture, such as the data sources, the processes responsible for Extracting, Transforming and Loading (ETL) data into the DW, the modeling of the DW itself, the specification of data marts from the data warehouse, and the design of end-user tools. In recent years, different models, methods and techniques have been proposed to provide partial solutions covering these different aspects of a data warehouse. Nevertheless, none of these proposals addresses the whole development process of a data warehouse in an integrated and coherent manner using the same notation for modeling its different parts. In this paper, we propose a data warehouse development method, based on the Unified Modeling Language (UML) and the Unified Process (UP), which addresses the design and development of both the data warehouse back-stage and front-end. We use the extension mechanisms (stereotypes, tagged values and constraints) provided by the UML and properly extend it in order to accurately model the different parts of a data warehouse (such as the data sources, the ETL processes or the DW itself) using the same notation. To the best of our knowledge, our proposal provides a seamless method for developing data warehouses. Finally, we apply our approach to a case study to show its benefits. This work has been partially supported by the METASIGN project (TIN2004-OO779) from the Spanish Ministry of Education and Science, by the DADASMECA project (GV05/220) from the Valencia Government, and by the DADS (PBC-05-QI 2-2) project from the Regional Science and Technology Ministry of Castilla-La Mancha (Spain).
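    The abstract above describes a UML-profile style of modeling in which stereotypes distinguish the parts of a DW (data sources, ETL processes, facts, dimensions). Purely as a loose illustration of that idea, and not the authors' actual profile or notation, the Python sketch below tags hypothetical classes with stereotype-like labels and adds a toy ETL step; every name in it (ProductDim, SalesFact, etl_load) is invented.

# Hypothetical sketch: stereotype-like markers for a tiny dimensional model
# plus a toy ETL step. Names and structure are illustrative only; the paper
# defines its profile with UML stereotypes, tagged values and constraints.
from dataclasses import dataclass
from typing import Dict, List


def stereotype(name):
    """Attach a '<<stereotype>>'-style label to a class, mimicking a UML profile."""
    def wrap(cls):
        cls.__stereotype__ = name
        return cls
    return wrap


@stereotype("Dimension")
@dataclass
class ProductDim:
    product_id: int
    category: str


@stereotype("Fact")
@dataclass
class SalesFact:
    product_id: int   # reference into ProductDim
    amount: float     # measure


def etl_load(source_rows: List[Dict]) -> List[SalesFact]:
    """Toy 'ETL process': extract raw rows, transform their types, load fact objects."""
    return [SalesFact(product_id=int(r["pid"]), amount=float(r["amt"]))
            for r in source_rows if r.get("amt") is not None]


if __name__ == "__main__":
    facts = etl_load([{"pid": "1", "amt": "9.5"}, {"pid": "2", "amt": None}])
    print([type(obj).__stereotype__ for obj in (ProductDim(1, "toys"), *facts)])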

    Supporting discretionary decision-making with information technology

    A number of increasingly sophisticated technologies are now being used to support complex decision-making in a range of contexts. This paper reports on a project undertaken to provide decision support in discretionary legal domains by referring to a recently created model that involves the interplay and weighting of relevant rule-based and discretionary factors used in a decision-making process. The case study used in the modelling process is the Criminal Jurisdiction of the Victorian Magistrates' Court (Australia), where the handing down of an appropriate custodial or non-custodial sentence requires the consideration of many factors. Tools and techniques used to capture relevant expert knowledge and to display it both as a paper model and as an online prototype application are discussed. Models of sentencing decision-making with rule-based and discretionary elements are presented and analyzed. The paper concludes by discussing the benefits and disadvantages of such technology and considers some potentially appropriate uses of the model and the web-based prototype application.
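    To make the "interplay and weighting of relevant rule-based and discretionary factors" concrete, here is a minimal sketch of how such a weighted-factor model might combine the two kinds of input. The factor names, weights and threshold are invented for illustration and do not reproduce the sentencing model or prototype described in the paper.

# Minimal sketch of a weighted-factor decision model combining rule-based and
# discretionary elements. Factors, weights and the threshold are invented for
# illustration; they do not reproduce the sentencing model described above.
from typing import Dict

RULE_FACTORS = {"prior_convictions": 0.4, "offence_severity": 0.5}           # hypothetical
DISCRETIONARY_FACTORS = {"remorse_shown": -0.3, "hardship_to_family": -0.2}   # hypothetical


def custodial_score(case: Dict[str, float]) -> float:
    """Weighted sum over both factor groups; factor values are normalised to [0, 1]."""
    score = 0.0
    for factors in (RULE_FACTORS, DISCRETIONARY_FACTORS):
        for name, weight in factors.items():
            score += weight * case.get(name, 0.0)
    return score


def recommend(case: Dict[str, float], threshold: float = 0.35) -> str:
    """Map the score to a purely illustrative custodial / non-custodial suggestion."""
    return "custodial" if custodial_score(case) >= threshold else "non-custodial"


if __name__ == "__main__":
    example = {"prior_convictions": 0.8, "offence_severity": 0.9,
               "remorse_shown": 1.0, "hardship_to_family": 0.5}
    print(recommend(example), round(custodial_score(example), 2))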

    Engineering adaptive web applications

    [no abstract]

    New Fundamental Technologies in Data Mining

    The progress of data mining technology and its broad public popularity establish the need for a comprehensive text on the subject. The book series entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. In addition to a thorough treatment of each topic, the two books present useful hints and strategies for solving the problems discussed in the individual chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaborations and hence lead to significant developments in the field of data mining.

    Políticas de Copyright de Publicações Científicas em Repositórios Institucionais: O Caso do INESC TEC

    The progressive transformation of scientific practices, driven by the development of new Information and Communication Technologies (ICT), has made it possible to increase access to information, gradually moving towards an opening of the research cycle. In the long term, this opening will help to overcome an adversity faced by researchers: the existence of barriers, whether geographical or financial, that limit the conditions of access. Although scientific production is largely dominated by big commercial publishers, and is therefore subject to the rules they impose, the Open Access movement, whose first public declaration, the Budapest Declaration (BOAI), dates from 2002, proposes significant changes that benefit both authors and readers. The movement has been gaining importance in Portugal since 2003, when the first institutional repository at the national level was created. Institutional repositories emerged as a tool for disseminating an institution's scientific production, with the aim of opening up research results both before publication and the peer-review process itself (preprints) and after it (postprints), and consequently increasing the visibility of the work carried out by a researcher and the respective institution.
    The study presented here, based on an analysis of the copyright policies of the most relevant scientific publications of INESC TEC, showed not only that publishers increasingly adopt policies that allow the self-archiving of publications in institutional repositories, but also that considerable awareness-raising work remains to be done, not only among researchers but also within the institution and society as a whole. The production of a set of recommendations, including the implementation of an institutional policy that encourages the self-archiving of publications produced in the institutional context in the repository, serves as a starting point for a greater appreciation of the scientific production of INESC TEC.

    Solving heterogeneity for a successful service market

    This PhD thesis was written in the context of a new software development paradigm called On-The-Fly Computing, which is based on the idea of specialized service markets called On-The-Fly (OTF) markets. OTF markets have different properties, and their participants use different modeling techniques for service engineering. These differences result in heterogeneity and complicate the execution of automated market operations such as service matching, because service specifications cannot be compared with each other automatically. This thesis proposes a solution to cope with this heterogeneity and thereby foster the success of OTF markets and the OTF Computing paradigm. To achieve the comparability of specifications in an OTF market, a formal intermediate representation called a core language is introduced. Automated market operations are defined on a core language that optimally supports their execution in the respective market. The first contribution of this thesis is the approach Language Optimizer (LOpt), which supports the systematic design of a service specification language that is optimal for the execution of automated market operations in a given OTF market. LOpt uses a comprehensive core language covering structural, behavioral, and non-functional service properties and configures this language based on formalized market properties and a knowledge base containing configuration expertise.
    The second contribution of this thesis is the application of the Model Transformation By-Example technique to enable market actors without language-design expertise to define transformations from their proprietary specification languages to the optimal core language. The approach generates transformations based on example mappings between concrete service specifications in the two languages provided by market actors, applying the idea of genetic algorithms.
    by Svetlana Arifulina, M.Sc.; Thesis supervisors: Prof. Dr. Gregor Engels and Jun.-Prof. Dr. Heiko Hamann; Date of defense: 08.12.2016; Universität Paderborn, Dissertation, 201
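    As a rough illustration of the core-language idea described above, and not of LOpt or the thesis's actual core language, the sketch below normalises two invented proprietary service descriptions into a shared intermediate representation so that a simple market operation (naive signature matching) can compare them.

# Rough sketch of the core-language idea: two made-up proprietary service
# descriptions are normalised into a shared intermediate representation so that
# a market operation (here, naive matching) can compare them. All formats and
# names are invented; this is not the core language or LOpt from the thesis.
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class CoreSpec:
    """Hypothetical intermediate representation: operation name plus typed signature."""
    operation: str
    inputs: Tuple[str, ...]
    outputs: Tuple[str, ...]


def from_provider_format(desc: dict) -> CoreSpec:
    """Transform a made-up provider description into the core representation."""
    return CoreSpec(desc["op"], tuple(desc["in"]), tuple(desc["out"]))


def from_requester_format(text: str) -> CoreSpec:
    """Transform a made-up requester format 'name: in -> out' into the core representation."""
    name, sig = text.split(":")
    ins, outs = (part.split() for part in sig.split("->"))
    return CoreSpec(name.strip(), tuple(ins), tuple(outs))


def matches(request: CoreSpec, offer: CoreSpec) -> bool:
    """Naive automated market operation: exact signature match on the common representation."""
    return (request.operation == offer.operation
            and request.inputs == offer.inputs
            and request.outputs == offer.outputs)


if __name__ == "__main__":
    offer = from_provider_format({"op": "convert", "in": ["pdf"], "out": ["txt"]})
    request = from_requester_format("convert: pdf -> txt")
    print(matches(request, offer))   # True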