
    Effects of automatically updated database documentation on the work tasks of IS professionals and end-users

    Abstract. This study analysed the effects of automatically generated database documentation on the work tasks of IS professionals and end-users. The effects were analysed using a case study approach in one Finnish software company that provides a customer information system to its customer companies. The availability of consistent database documentation was a new situation both for the IS professionals in the case company and for the end-users in the customer companies. The case company wished to understand how the documentation would affect the work tasks of these two stakeholder groups. Database documentation is used for database design and analysis tasks. It is created during the development phase of an IS and maintained afterwards. Yet the documentation can stagnate, and its maintenance is commonly considered a burden for IS professionals. Database reverse-engineering allows automatic generation of database documentation, eliminating the need for manual maintenance. The research data was gathered using two separate questionnaires, one for the end-users and one for the IS professionals. Both qualitative and quantitative data were gathered. The questionnaires' open questions were analysed using content analysis, and the closed questions were analysed using descriptive statistics. According to the findings, database documentation supports both IS professionals and end-users in their work tasks. Database documentation was considered important for the efficiency of their work. Documentation helps database users understand the database structure and prevents false interpretations. It also allows users to plan their work better and to complete tasks in fewer increments. Additionally, documentation enables a greater degree of independent work and encourages employees to familiarize themselves with new parts of the database and information system. Consistent, up-to-date database documentation has positive effects on the work tasks of both IS professionals and end-users. Lower workload, better understanding of the system, and fewer false interpretations indicate that software companies would benefit from documenting their databases in more detail and more consistently, for example using database reverse-engineering. Consistent database documentation benefits all database users.
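    As a rough illustration of how reverse-engineering can replace manual upkeep, the sketch below regenerates Markdown documentation directly from a database catalogue. It is a minimal sketch only: it uses SQLite's catalogue so the demo is self-contained, and the example table is hypothetical; a production system would query its real DBMS (e.g. its information_schema) instead.

```python
# Minimal sketch: regenerate database documentation from the live schema
# instead of maintaining it by hand. Uses SQLite's catalogue so the demo
# is self-contained; the "customer" table below is hypothetical.
import sqlite3

def document_database(conn: sqlite3.Connection) -> str:
    """Render Markdown documentation for every table in the database."""
    lines = ["# Database documentation (auto-generated)"]
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    for (table,) in tables:
        lines += [f"\n## {table}", "",
                  "| column | type | nullable | default |",
                  "|--------|------|----------|---------|"]
        for _cid, name, ctype, notnull, default, _pk in conn.execute(
            f"PRAGMA table_info({table})"
        ):
            lines.append(
                f"| {name} | {ctype} | {'no' if notnull else 'yes'} | {default} |"
            )
    return "\n".join(lines)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
    )
    print(document_database(conn))
```

    Because the output is derived from the catalogue itself, regenerating it on every schema change keeps the documentation consistent by construction, which is the property the study credits for the reduced maintenance burden.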

    Integrative Science for Integrative Management

    Also published in print.

    The importance of finding the right data: a case study of healthcare operations improvement projects

    The utilization of data in healthcare improvement projects is currently a very topical subject. Several public and private companies have shown the value of utilizing data to improve operational efficiency. Not all datasets are, however, equally useful; an understanding of data quality is therefore required to ensure correct decision-making. Currently, two streams of literature exist to guide improvement teams: the literature on operational improvement, e.g. through methods such as Total Quality Management, Lean, and Six Sigma, and the literature on data quality. From the point of view of an improvement project team, a linkage between these two streams of literature is missing. This paper aims to bridge that gap by helping healthcare improvement teams assess whether data quality is sufficient to support decision-making. The academic framework illustrates how the viewpoint on data quality has shifted from an intrinsic focus in the 1970s, to fitness for use in the 1990s, and finally to the specifics of new trends, such as big data and unstructured data, from the 2010s onwards. Using the case study method, the findings were developed by observing an improvement project in a private Finnish healthcare company. Together with the project team, I went through an iterative process with five steps, each of which was guided by a distinctive new set of data. Finally, the actual improvement was gained by gathering the data manually: a dataset which was highly relevant for the end users, but likely to be intrinsically less robust than the previous datasets. In conclusion, the current data quality literature can offer only modest guidance for improvement teams in choosing the right dataset. Instead, a new model for data quality in healthcare operational improvement was created. The model suggests that teams should first consider whether the dataset is relevant to the goal of the improvement project. After that, the improvement team should consider whether the dataset can add value toward reaching that goal. After these two steps, the other key data quality attributes, grouped into the following four dimensions, come into play: accessibility, intrinsic, representational, and contextual quality.
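    A minimal sketch of the model's assessment order follows; the class and field names, the gating logic, and the example scores are illustrative assumptions, not taken from the thesis. It shows the two gate questions (relevance, then added value) being applied before the four quality dimensions are scored.

```python
# Illustrative sketch of the proposed assessment order: two gate
# questions first, then the four data quality dimensions. All names,
# the gating logic, and the example scores are hypothetical.
from dataclasses import dataclass

@dataclass
class DatasetAssessment:
    relevant_to_goal: bool   # step 1: does the dataset address the project goal?
    adds_value: bool         # step 2: does it move the project forward?
    accessibility: float     # remaining dimensions, each scored 0..1
    intrinsic: float
    representational: float
    contextual: float

    def verdict(self) -> str:
        if not self.relevant_to_goal:
            return "reject: not relevant to the improvement goal"
        if not self.adds_value:
            return "reject: no added value toward the goal"
        mean = (self.accessibility + self.intrinsic
                + self.representational + self.contextual) / 4
        return f"usable (mean dimension score {mean:.2f})"

# E.g. a manually gathered dataset: highly relevant and valuable,
# but intrinsically weaker than system-generated data.
print(DatasetAssessment(True, True, 0.9, 0.5, 0.8, 0.9).verdict())
```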

    Business value of XBRL to the financial report receivers in Finland

    Objectives of the Study

    The main objective of this study is to identify the factors that may affect the business value of the XBRL standard to financial report receivers in Finland. The other objective is to explore the role of network effects in the creation of that value for the financial report receivers. The study is exploratory, as the goal is to discover the potential business value of a technological standard that had not been widely implemented in Finland at the time of writing. This thesis sets the basis for further studies on the business value of XBRL, if and when the standard is implemented in the case organizations.

    Academic background and methodology

    The academic background includes the theories of the productivity paradox, network effects, and various theories on the business value of IT. The theoretical framework is built on the TOE framework and on the model of sources of value creation in e-business. The methodology comprises the collection of qualitative data through semi-structured face-to-face interviews with representatives of the case organizations and a subsequent cross-case analysis.

    Findings and conclusions

    The findings confirm the proposition presented in the theoretical part of the thesis. Results also show that the most important environmental factor in the potential business value of XBRL at the moment is network effects, as the EU integration of public organizations deepens. Efficiency and interoperability were the two business value components with the potential to influence companies to adopt XBRL and yield significant benefits from it. On the other hand, the results reveal the smaller significance of the technology usage factor and of the value-added services business value component.

    Defining and Measuring Diplomatic Influence

    This review found no sources of straightforward indicators for use in measuring diplomatic influence. The literature and evaluations found recommend the use of tailor-made evaluations to account for “differences in diplomatic settings, diplomatic activities and policy fields”. They hinge on developing a theory of change alongside questions and evaluation criteria that are context-specific. They rely on assessing intermediate goals as a ‘proxy’ for long-term influence, which cannot be measured directly, and on causal contributions (contributed to a result) rather than causal attributions (caused a result). It was also frequently noted that programme designers tend to design programmes to support diplomatic influence without specific and measurable objectives, because influencing processes are by nature non-linear. In these cases, evaluations will be correspondingly unable to provide specific and measurable indicators of achievement. FCDO (Foreign, Commonwealth and Development Office)

    BlogForever D3.2: Interoperability Prospects

    This report evaluates the interoperability prospects of the BlogForever platform. To this end, existing interoperability models are reviewed; a Delphi study is conducted to identify crucial aspects of interoperability for web archives and digital libraries; technical interoperability standards and protocols are reviewed for their relevance to BlogForever; a simple approach for considering interoperability in specific usage scenarios is proposed; and a tangible approach is presented for developing a succession plan that would allow a reliable transfer of content from the current digital archive to other digital repositories.
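    The deliverable's contents are only summarized above, but a short sketch may clarify the kind of protocol such a review covers. OAI-PMH is a standard widely used for interoperability between digital repositories; whether BlogForever adopted it specifically is not asserted here, and the endpoint URL below is hypothetical.

```python
# Minimal sketch of harvesting one page of records over OAI-PMH, a
# protocol widely used for repository interoperability. The endpoint
# URL is hypothetical; this does not assert what D3.2 recommends.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

def list_record_titles(base_url: str) -> list[str]:
    """Issue a ListRecords request and return the Dublin Core titles."""
    query = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    with urlopen(f"{base_url}?{query}") as response:
        tree = ET.parse(response)
    return [title.text or "" for title in tree.iter(f"{DC}title")]

if __name__ == "__main__":
    for title in list_record_titles("https://archive.example.org/oai"):
        print(title)
```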

    Flexible virtual learning environments: a schema-driven approach using semantic web concepts

    Flexible e-learning refers to an intelligent educational mechanism that focuses on simulating and improving traditional education as far as possible on the Web by integrating various electronic approaches, technologies, and equipment. This mechanism aims to promote the personalized development and management of e-learning Web services and applications. The main value of this method is that it provides a high degree of pedagogical individualization for students and staff. The thesis mainly studied three problems in meeting the practical requirements of users in education. The first question is how a range of teaching styles (e.g. command and guided discovery) can be supported. The second is how a variety of instructional processes can be authored. The third is how these processes can be controlled by learners and educators according to their personalized needs during the execution of instruction. In this research, the main technical problems of current virtual learning environments (VLEs) were analyzed by investigating existing e-learning approaches and technologies. Next, a schema-driven approach was created using Semantic Web concepts and relevant standards. This method can support users' individualized operations in Web-based education. A flexible e-learning system based on the approach was then designed and implemented to map a range of extensive didactic paradigms. Finally, a case study was completed to evaluate the research results. The main findings of the assessment were that the flexible VLE implemented a range of teaching styles and supported the personalized creation and control of educational processes.
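    As an illustration of the schema-driven idea, teaching styles and instructional processes can be declared in an RDF vocabulary that the VLE interprets at run time. The sketch below uses the rdflib library, and all class and property names are invented for this example; they are not the thesis's actual schema.

```python
# Illustrative RDF vocabulary for a schema-driven VLE (requires rdflib).
# All class and property names are invented for this sketch.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/vle#")

g = Graph()
g.bind("ex", EX)

# Declare teaching styles as classes the engine can interpret.
for style in ("CommandStyle", "GuidedDiscoveryStyle"):
    g.add((EX[style], RDF.type, RDFS.Class))
    g.add((EX[style], RDFS.subClassOf, EX.TeachingStyle))

# An instructional process authored against the schema, with a flag
# allowing learners to control its execution.
g.add((EX.IntroUnit, RDF.type, EX.GuidedDiscoveryStyle))
g.add((EX.IntroUnit, RDFS.label, Literal("Introductory unit")))
g.add((EX.IntroUnit, EX.learnerControllable, Literal(True)))

print(g.serialize(format="turtle"))
```

    Because both the styles and the processes live in data rather than code, new didactic paradigms can be added by extending the schema, without modifying the system itself.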

    The First 25 Years of the Bled eConference: Themes and Impacts

    The Bled eConference is the longest-running themed conference associated with the Information Systems discipline. The focus throughout its first quarter-century has been the application of electronic tools, migrating progressively from Electronic Data Interchange (EDI) via Inter-Organisational Systems (IOS) and eCommerce to encompass all aspects of the use of networking facilities in industry and government, and more recently by individuals, groups and society as a whole. This paper reports on an examination of the conference titles and of the titles and abstracts of the 773 refereed papers published in the Proceedings since 1995. This identified a long and strong focus on categories of electronic business and corporate perspectives, which has broadened in recent years to encompass the democratic, the social and the personal. The conference's impacts extend well beyond the papers and their thousands of citations and tens of thousands of downloads. Other impacts have included innovative forms of support for the development of large numbers of graduate students, and the many international research collaborations that have been conceived and developed in a beautiful lake-side setting in Slovenia.