
    DATA QUALITY IN FINANCIAL PLANNING - AN EMPIRICAL ASSESSMENT BASED ON BENFORD'S LAW

    Planning processes play an important role in almost any business scenario. In particular, prompted by the financial crisis, financial planning as a foundation for liquidity management has received extraordinary attention. Its quality and reliability are usually ensured by the use of information systems. Besides process efficiency, a key factor in liquidity management is the quality of the delivered planning data. More recently, business intelligence measures to increase data quality, for instance realized through decision support services, have found their way into the planning process. In this paper, we lay the foundation for including digital analyses of reported financial planning numbers in automated decision support services. Our contribution is twofold: First, based on a large and representative data set from a renowned multinational enterprise, we empirically show that financial planning numbers exhibit a characteristic digit distribution, namely Benford's Law. Second, we investigate whether decision support services that incorporate intelligence based on Benford's Law are appropriate for increasing financial planning data quality. This question is tackled via analyses that relate detailed properties of the delivered data to Benford's Law, as a prerequisite for the integration of automated decision support services into business intelligence systems.
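    The kind of digital analysis referred to above can be illustrated with a minimal sketch: Benford's Law predicts a leading-digit frequency of P(d) = log10(1 + 1/d), and reported planning figures can be compared against that expectation. The function names and sample figures below are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter

def benford_expected(digit: int) -> float:
    """Expected relative frequency of leading digit d under Benford's Law."""
    return math.log10(1 + 1 / digit)

def first_digit(value: float) -> int:
    """Return the leading (most significant) digit of a non-zero number."""
    s = str(abs(value)).lstrip("0.")
    return int(s[0])

def benford_deviation(figures):
    """Compare observed first-digit frequencies against Benford's Law.

    Assumption: `figures` is an iterable of non-zero planning numbers.
    Returns a dict mapping each digit 1-9 to (observed, expected) frequencies.
    """
    digits = [first_digit(x) for x in figures if x != 0]
    counts = Counter(digits)
    n = len(digits)
    return {d: (counts.get(d, 0) / n, benford_expected(d)) for d in range(1, 10)}

if __name__ == "__main__":
    # Hypothetical reported planning figures, for illustration only
    sample = [1230.5, 187.0, 2045.9, 310.2, 1999.0, 4721.3, 150.8, 98.4, 1175.0]
    for d, (obs, exp) in benford_deviation(sample).items():
        print(f"digit {d}: observed {obs:.2f}, expected {exp:.2f}")
```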

    Applying tropos to socio-technical system design and runtime configuration

    Recent trends in Software Engineering have introduced the importance of reconsidering the traditional idea of software design as a socio-technical problem, where human agents are an integral part of the system along with hardware and software components. Design and runtime support for Socio-Technical Systems (STSs) requires appropriate modeling techniques and non-traditional infrastructures. Agent-oriented software methodologies are natural solutions to the development of STSs, since both humans and technical components are conceptualized and analyzed as part of the same system. In this paper, we illustrate a number of Tropos features that we believe are fundamental to support the development and runtime reconfiguration of STSs. In particular, we focus on two critical design issues: risk analysis and location variability. We show how they are integrated and used within a planning-based approach to support the designer in evaluating and choosing the best design alternative. Finally, we present a generic framework to develop self-reconfigurable STSs.

    Virtual reality simulation for the optimization of endovascular procedures : current perspectives

    Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics, dynamic imaging, and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management, and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes.

    Y2K Interruption: Can the Doomsday Scenario Be Averted?

    The management philosophy until recent years has been to replace workers with computers, which are available 24 hours a day, need no benefits, no insurance, and never complain. But as the year 2000 approached, along with it came the fear of the millennium bug, generally known as Y2K, and the computers threatened to strike! Y2K, though an abbreviation of year 2000, generally refers to the computer glitches associated with the year 2000. Computer companies, in order to save memory and money, adopted a voluntary standard at the beginning of the computer era that all computers automatically convert any year designated by two digits, such as 99, into 1999 by prepending the digits 19. This saved an enormous amount of memory, and thus money, because large databases containing birth dates or other dates only needed to store the last two digits, such as 65 or 86. But it also created a built-in flaw that could make computers inoperable from January 2000. The problem is that most of these old computers are programmed to convert 00 (for the year 2000) into 1900 and not 2000. The trouble could therefore arise when the systems had to deal with dates outside the 1900s. In 2000, for example, a programme that calculates the age of a person born in 1965 will subtract 65 from 00 and get -65. The problem is most acute in mainframe systems, but that does not mean PCs, UNIX, and other computing environments are trouble free. Any computer system that relies on date calculations must be tested, because the Y2K or millennium bug arises from a potential for “date discontinuity”, which occurs when the time expressed by a system, or any of its components, does not move in consonance with real time. Though attention has been focused on the potential problems linked with the change from 1999 to 2000, date discontinuity may occur at other times in and around this period.
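    A minimal sketch of the two-digit date arithmetic described above (the function names and the windowing remediation shown are illustrative assumptions, not taken from the text): an age computed from two-digit years is correct in 1999 but goes negative in 2000.

```python
def age_two_digit(birth_year_2d: int, current_year_2d: int) -> int:
    """Age calculation as legacy systems did it, using only the last two digits."""
    return current_year_2d - birth_year_2d

def age_windowed(birth_year_2d: int, current_year_4d: int, pivot: int = 30) -> int:
    """One common Y2K remediation, 'date windowing': map two-digit years onto a
    century using a pivot (here, 00-29 -> 2000s, 30-99 -> 1900s)."""
    century = 2000 if birth_year_2d < pivot else 1900
    return current_year_4d - (century + birth_year_2d)

# Born in 1965, age checked in 1999 and again in 2000
print(age_two_digit(65, 99))    # 34  -- correct
print(age_two_digit(65, 0))     # -65 -- the flaw described above
print(age_windowed(65, 2000))   # 35  -- after windowing remediation
```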

    Working towards an Improved Monitoring Infrastructure to support Disaster Management, Humanitarian Relief and Civil Security

    In this paper, experiences and results gathered at the German Remote Sensing Data Center (DFD) in the context of the European Initiative on Global Monitoring for Environment and Security (GMES) are reported. It is described how data flows, analysis methods, and information networks can be improved to allow better and faster access to remote sensing data and information in order to support the management of crisis situations. This refers to all phases of a crisis or disaster situation, including preparedness, response, and recovery. Beyond the infrastructure and information flow elements, example cases of different crisis situations in the context of natural disasters, humanitarian relief activities, and civil security are discussed. These build on the experiences gained through very active participation in the Network of Excellence on Global Monitoring for Stability and Security (GMOSS) and the GMES Service Element RESPOND, which focuses on humanitarian relief support, through supporting the International Charter on Space and Major Disasters, and through close links to national, European, and international entities concerned with civil human security. It is suggested to further develop the network of national and regional centres of excellence in this context in order to improve local, regional, and global monitoring capacities. Only when optimum interoperability and information flow are achieved between systems and data providers on the one hand and decision makers on the other can efficient monitoring and analysis capacities be established successfully.