
    Research issues in real-time database systems

    Today's real-time systems are characterized by the management of large volumes of data. Efficient database management algorithms for accessing and manipulating data are required to satisfy the timing constraints of the supported applications. Real-time database systems constitute a new research area that investigates ways of applying database technology to real-time systems. Managing real-time information through a database system requires the integration of concepts from both real-time systems and database systems. New criteria need to be developed to incorporate the timing constraints of real-time applications into database design issues such as transaction/query processing, data buffering, and CPU and I/O scheduling. This paper provides a basic understanding of the issues in real-time database systems and introduces the research efforts in this area. Different approaches to various problems of real-time database systems are briefly described, and possible future research directions are discussed.
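
    As a minimal sketch of the kind of deadline-aware scheduling the survey covers, the snippet below implements earliest-deadline-first (EDF) transaction scheduling, one well-known policy, and aborts transactions whose deadlines have already passed (a late result is worthless to a real-time application). The class and function names are hypothetical, not taken from the paper.

```python
import heapq
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Transaction:
    deadline: float                                   # absolute deadline (epoch seconds)
    name: str = field(compare=False)
    work: Callable[[], None] = field(compare=False)   # transaction body

class EDFScheduler:
    """Earliest-deadline-first: always run the pending transaction whose
    deadline is nearest, and abort any transaction that has missed it."""
    def __init__(self):
        self.ready = []                               # min-heap keyed on deadline

    def submit(self, txn: Transaction) -> None:
        heapq.heappush(self.ready, txn)

    def run(self) -> None:
        while self.ready:
            txn = heapq.heappop(self.ready)
            if time.time() > txn.deadline:
                print(f"abort {txn.name}: deadline missed")
                continue
            txn.work()

sched = EDFScheduler()
sched.submit(Transaction(time.time() + 0.5, "update-sensor", lambda: print("commit")))
sched.run()
```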

    Y2K Interruption: Can the Doomsday Scenario Be Averted?

    The management philosophy until recent years had been to replace workers with computers, which are available 24 hours a day, require no benefits or insurance, and never complain. But as the year 2000 approached, along with it came the fear of the millennium bug, generally known as Y2K, and the computers threatened to strike! Y2K, though an abbreviation of year 2000, generally refers to the computer glitches associated with that year. Early in the computer era, computer companies, in order to save memory and money, adopted a voluntary standard under which computers automatically convert any year designated by two digits, such as 99, into 1999 by prefixing the digits 19. This saved an enormous amount of memory, and thus money, because large databases containing birth dates or other dates needed to store only the last two digits, such as 65 or 86. But it also created a built-in flaw that could make computers inoperable from January 2000: most of these old systems are programmed to convert 00 (for the year 2000) into 1900, not 2000. Trouble therefore arises whenever such systems must deal with dates outside the 1900s. In 2000, for example, a programme that calculates the age of a person born in 1965 will subtract 65 from 00 and get -65. The problem is most acute in mainframe systems, but that does not mean PCs, UNIX, and other computing environments are trouble-free. Any computer system that relies on date calculations must be tested, because the millennium bug stems from the potential for “date discontinuity”, which occurs when the time expressed by a system, or any of its components, does not move in consonance with real time. Though attention has been focused on the potential problems linked with the change from 1999 to 2000, date discontinuity may occur at other times in and around this period.
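
    To make the failure mode concrete, here is a small sketch reproducing the article's arithmetic, together with a "windowing" remediation of the kind commonly applied at the time; the pivot value of 30 is an illustrative assumption, not from the article.

```python
def age_two_digit(birth_yy: int, current_yy: int) -> int:
    """Age computed the legacy way: on the last two digits of the year,
    with the century '19' implicitly prefixed to both."""
    return current_yy - birth_yy

print(age_two_digit(65, 99))   # 34  -- correct in 1999
print(age_two_digit(65, 0))    # -65 -- the article's year-2000 example

def expand_year(yy: int, pivot: int = 30) -> int:
    """Windowing fix: two-digit years below the pivot are read as 20xx,
    the rest as 19xx. The pivot of 30 is an assumption for illustration."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(0) - expand_year(65))   # 35 -- correct age in 2000
```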

    Bitemporal Support for Business Process Contingency Management

    Modern organisations are increasingly moving from traditional monolithic business systems to environments in which more and more tasks are outsourced to third-party providers. Processes must therefore operate in an open and dynamic environment in which the management of time plays a crucial role. Handling time, however, remains a challenging issue yet to be fully addressed. Traditional processing systems consider business events in a single time dimension only and are unable to handle bitemporal events: events in two time dimensions. Recently, backend systems have started to provide increased support for handling bitemporal events, but these enhanced capabilities have not been carried through to business process management systems. In this paper, we consider the possible relationships between the bitemporal properties of events, and we show how these relationships affect a business process. In addition, we demonstrate how bitemporal events can be handled to prevent certain undesired effects on the business process.
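
    As a concrete illustration (not the paper's formalism), the two dimensions are conventionally called valid time, when a fact holds in the business world, and transaction time, when the system recorded it. A minimal sketch of a bitemporal record and the canonical "as-of" query:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class BitemporalFact:
    value: str
    valid_from: date     # when the fact is true in the business world
    valid_to: date
    recorded_from: date  # when the system learned of (or revised) the fact
    recorded_to: date

def as_of(facts, valid_on: date, known_on: date):
    """What did we believe on known_on about the state of affairs on valid_on?"""
    return [f for f in facts
            if f.valid_from <= valid_on < f.valid_to
            and f.recorded_from <= known_on < f.recorded_to]
```

    Retroactive corrections are then expressed by closing the old record's recorded_to and inserting a new record, which leaves the full audit trail intact.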

    UAS Service Supplier Specification

    Within the Unmanned Aircraft Systems (UAS) Traffic Management (UTM) system, the UAS Service Supplier (USS) is a key component. The USS serves several functions. At a high level, these include: bridging communication between UAS Operators and the Flight Information Management System (FIMS); supporting planning of UAS operations; assisting strategic deconfliction of the UTM airspace; providing information support to UAS Operators during operations; and helping UAS Operators meet their formal requirements. This document provides the minimum set of requirements for a USS; successful demonstration of satisfying the requirements described herein will be a prerequisite for recognition as a USS within UTM. To ensure various desired qualities (security, fairness, availability, efficiency, maintainability, etc.), this specification relies on references to existing public specifications whenever possible.
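
    As an illustration of one listed function, strategic deconfliction at its core amounts to checking that reserved 4-D operation volumes do not intersect. The sketch below is hypothetical and is not the data model defined by the specification:

```python
from dataclasses import dataclass

@dataclass
class OperationVolume:
    """A 4-D block of airspace reserved by a UAS operation: a horizontal
    bounding box, an altitude band, and a time window."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float
    alt_min_m: float
    alt_max_m: float
    t_start: float   # epoch seconds
    t_end: float

def _overlaps(a_lo: float, a_hi: float, b_lo: float, b_hi: float) -> bool:
    return a_lo < b_hi and b_lo < a_hi

def conflicts(a: OperationVolume, b: OperationVolume) -> bool:
    """Two volumes conflict only if they intersect in all four dimensions."""
    return (_overlaps(a.lat_min, a.lat_max, b.lat_min, b.lat_max)
            and _overlaps(a.lon_min, a.lon_max, b.lon_min, b.lon_max)
            and _overlaps(a.alt_min_m, a.alt_max_m, b.alt_min_m, b.alt_max_m)
            and _overlaps(a.t_start, a.t_end, b.t_start, b.t_end))
```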

    Autonomous mission planning and scheduling: Innovative, integrated, responsive

    Autonomous mission scheduling, a new concept for NASA ground data systems, is a decentralized and distributed approach to scientific spacecraft planning, scheduling, and command management. Systems and services are provided that enable investigators to operate their own instruments. In autonomous mission scheduling, separate nodes exist for each instrument, and one or more operations nodes exist for the spacecraft. Each node is responsible for its own operations, including planning, scheduling, and commanding, and for resolving conflicts with other nodes. One or more database servers accessible to all nodes enable each to share mission and science planning, scheduling, and commanding information. The architecture for autonomous mission scheduling is based upon a realistic mix of state-of-the-art and emerging technology and services, e.g., high-performance individual workstations, high-speed communications, client-server computing, and relational databases. The concept is particularly suited to the smaller, less complex missions of the future.
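
    A toy sketch of the coordination pattern described here: each node owns its own schedule, but a shared server lets a node detect conflicts over common resources before committing. All names are hypothetical stand-ins, not from the paper.

```python
class ScheduleServer:
    """Stand-in for the shared database server: nodes publish reservations
    here and check for conflicts over common resources (e.g., the downlink)."""
    def __init__(self):
        self.slots = []   # (node, resource, start, end)

    def request(self, node: str, resource: str, start: float, end: float):
        for other, res, s, e in self.slots:
            if res == resource and start < e and s < end:
                return False, other   # conflict: caller must renegotiate
        self.slots.append((node, resource, start, end))
        return True, None

server = ScheduleServer()
print(server.request("spectrometer-node", "downlink", 10, 20))  # (True, None)
print(server.request("camera-node", "downlink", 15, 25))        # (False, 'spectrometer-node')
```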

    Real estate market activity in Slovenia in 2000-2006

    This paper examines a particular aspect of the Slovenian real estate market that is still developing: real estate market activity. Only two decades ago, Slovenia still had a socialist, planned economy, so there is little tradition in either the real estate market itself or the analysis of that market. The former only started to develop with the transition to a market-oriented economy at the beginning of the 1990s, and significant progress was observed in the second half of the 1990s owing to the favourable economic development of the country. In our research, we focused on real estate market development in the 2000-2006 period, which was marked by major changes in legislation and other institutional frameworks referring, directly or indirectly, to the field of real estate and real property. The development of the real estate market in Slovenia was examined for the given period on the basis of the available market data, acquired from the Tax Administration of the Republic of Slovenia; the development of real estate market activity is analysed by statistical region and type of real estate. The results show the general development of the Slovenian real estate market over the period and, in particular, the influence of institutional and legal factors on real estate market activity.

    Interoperable Transactions - A Structured Approach

    Exceptions in Information Systems

    The concept of exception has been defined in diverse ways. We relate exceptions to computational transactions and to control constructs. Our view of a transaction is very broad, and we consider transactional exceptions to be instances of undefined function values. By giving different interpretations to "undefined" we arrive at a classification of transactional exceptions. Our primary interest is in information systems, i.e., in database transactions and in processes that consist of such transactions. In the database context we show that liberal treatment of exceptions is simpler than total quality management for consistency based on a set of constraints. We refer to the control operations that link transactions into processes as actions. Actions tend to be time-related, and time Petri nets provide actions with semantics. The time Petri net representation indicates where exceptions can arise. We also consider high-level monitors for the detection of exceptions. Although our emphasis is on the detection of exceptions, their handling is also discussed.
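
    A minimal sketch of the central idea, that a transactional exception is an "undefined" value which can be classified by how the undefinedness is interpreted; the three categories below are illustrative, not the paper's taxonomy.

```python
from enum import Enum, auto

class Undefined(Enum):
    """Different readings of an undefined transaction result."""
    NO_RESULT = auto()   # e.g., the lookup key is absent
    NOT_YET = auto()     # the result is not available at this time
    VIOLATION = auto()   # the result would break an integrity constraint

def lookup_balance(accounts: dict, acct: str):
    """A transaction that returns either a value or a classified exception
    instead of failing outright (a liberal treatment of exceptions)."""
    if acct not in accounts:
        return Undefined.NO_RESULT
    balance = accounts[acct]
    if balance < 0:
        return Undefined.VIOLATION
    return balance

result = lookup_balance({"a1": 100}, "a2")
if isinstance(result, Undefined):
    print(f"exception detected: {result.name}")   # handling is a separate concern
```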

    Adaptive Performance and Power Management in Distributed Computing Systems

    The complexity of distributed computing systems has raised two unprecedented challenges for system management. First, customers need assurance that their required service-level agreements, such as response time and throughput, will be met. Second, system power consumption must be controlled to avoid system failures caused by power capacity overload, or by overheating due to increasingly high server density. Unfortunately, most existing work either relies on open-loop estimations based on off-line profiled system models, or evolves in an ad hoc fashion requiring exhaustive iterations of tuning and testing, or oversimplifies the problem by ignoring the coupling between different system characteristics (i.e., between response time and throughput, or among the power consumption of different servers). As a result, the majority of previous work lacks rigorous guarantees on performance and power consumption, and may degrade overall system performance. In this thesis, we extensively study adaptive performance/power management and power-efficient performance management for distributed computing systems such as information dissemination systems, power grid management systems, and data centers, by proposing Multiple-Input-Multiple-Output (MIMO) control and hierarchical designs based on feedback control theory. For adaptive performance management, we design an integrated solution that controls both average response time and CPU utilization, achieving bounded response time for high-priority information and maximized throughput in an example information dissemination system. In addition, we design a hierarchical control solution that guarantees the deadlines of real-time tasks in power grid computing by grouping the tasks according to their characteristics. For adaptive power management, we design MIMO optimal control solutions for power control at the cluster and server levels, and a hierarchical solution for large-scale data centers. Our MIMO control design captures the coupling among different system characteristics, while our hierarchical design coordinates controllers at different levels. For power-efficient performance management, we present a two-layer coordinated management solution for virtualized data centers. Experimental results from both physical testbeds and simulations demonstrate that all the solutions outperform state-of-the-art management schemes by significantly improving overall system performance.
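
    A rough sketch of the MIMO feedback idea: one integral controller drives two actuators from two measured outputs, so the coupling between response time and power is handled in a single gain matrix rather than in two independent loops. The signals and gains here are placeholders; a real design would derive K from an identified system model rather than hand-picking it.

```python
import numpy as np

class MimoIntegralController:
    """Couples two outputs (response time, power) to two actuators
    (CPU frequency, admission rate) through one 2x2 integral gain matrix."""
    def __init__(self, setpoints, K, u0):
        self.r = np.asarray(setpoints, dtype=float)  # targets: [resp_time_s, power_w]
        self.K = np.asarray(K, dtype=float)          # integral gain matrix (placeholder)
        self.u = np.asarray(u0, dtype=float)         # actuators: [freq_ghz, req_per_s]

    def step(self, measured):
        e = self.r - np.asarray(measured, dtype=float)  # control error on both outputs
        self.u = self.u + self.K @ e                    # integral action couples the loops
        return self.u

ctl = MimoIntegralController(setpoints=[0.2, 150.0],
                             K=[[-0.5, 0.002], [5.0, -0.05]],
                             u0=[2.0, 100.0])
print(ctl.step([0.3, 160.0]))   # adjusts frequency and admission rate together
```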

    Information Outlook, May 1997

    Volume 1, Issue 5