
    A rapid prototyping/artificial intelligence approach to space station-era information management and access

    Applications of rapid prototyping and artificial intelligence techniques to problems associated with Space Station-era information management systems are described. In particular, the work centers on issues related to: (1) intelligent man-machine interfaces applied to scientific data user support, and (2) the requirement that intelligent information management systems (IIMS) be able to efficiently process metadata updates concerning the types of data handled. The advanced IIMS represents functional capabilities driven almost entirely by the needs of potential users. The volume of scientific data projected to be generated in the Space Station era is likely to be significantly greater than the volume currently processed and analyzed. Information about scientific data must be presented clearly, concisely, and with support features that allow users at all levels of expertise efficient and cost-effective data access. Additionally, mechanisms for more efficient IIMS metadata update processing must be addressed. The work reported covers the following IIMS design aspects: IIMS data and metadata modeling, including the automatic updating of IIMS-contained metadata; user-system interface considerations, including significant problems associated with remote access, user profiles, and on-line tutorial capabilities; and the development of an IIMS query and browse facility, including the capability to deal with spatial information. A working prototype has been developed and is being enhanced.
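
    The automatic metadata updating described above can be illustrated with a minimal sketch; all names here (MetadataCatalog, register_dataset, DataTypeEntry) are hypothetical and not taken from the system in the abstract, assuming only a catalog keyed by data type that extends its own metadata when a dataset of an unseen type, or one with new attributes, arrives:

        # Hypothetical illustration of automatic metadata updating in an
        # IIMS-style catalog; not the actual system described above.
        from dataclasses import dataclass, field

        @dataclass
        class DataTypeEntry:
            """Metadata the catalog keeps about one type of scientific data."""
            name: str
            attributes: set = field(default_factory=set)
            dataset_count: int = 0

        class MetadataCatalog:
            def __init__(self):
                self._types = {}

            def register_dataset(self, data_type, attributes):
                # A dataset of an unseen type extends the catalog's own
                # metadata instead of requiring a manual schema change.
                entry = self._types.setdefault(data_type, DataTypeEntry(data_type))
                entry.attributes.update(attributes)  # merge newly observed fields
                entry.dataset_count += 1

            def browse(self, data_type):
                return self._types.get(data_type)

        catalog = MetadataCatalog()
        catalog.register_dataset("ocean_color", {"sensor", "swath_km"})
        catalog.register_dataset("ocean_color", {"sensor", "cloud_fraction"})
        print(catalog.browse("ocean_color"))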

    Bridging the gap between algorithmic and learned index structures

    Index structures such as B-trees and Bloom filters are the well-established petrol engines of database systems. However, these structures do not fully exploit patterns in the data distribution. To address this, researchers have suggested using machine learning models as electric engines that can entirely replace index structures. Such a paradigm shift in data system design, however, opens many unsolved design challenges: more research is needed to understand the theoretical guarantees and to design efficient support for insertion and deletion. In this thesis, we adopt a different position: index algorithms are good enough, and instead of going back to the drawing board to fit data systems with learned models, we should develop lightweight hybrid engines that build on the benefits of both algorithmic and learned index structures. The indexes that we suggest provide the theoretical performance guarantees and updatability of algorithmic indexes while using position prediction models to leverage the data distribution and thereby improve the performance of the index structure. We investigate the potential for minimal modifications to algorithmic indexes such that they can leverage the data distribution in the way learned indexes do. In this regard, we propose and explore the use of helping models that boost classical index performance using techniques from machine learning. Our suggested approach inherits performance guarantees from its algorithmic baseline index while considering the data distribution to improve performance considerably. We study single-dimensional range indexes, spatial indexes, and stream indexing, and show that the suggested approach yields range indexes that outperform the algorithmic indexes and have performance comparable to read-only, fully learned indexes, and hence can be reliably used as a default index structure in a database engine. In addition, we consider the updatability of the indexes and suggest solutions for updating the index, notably when the data distribution changes drastically over time (e.g., when indexing data streams). In particular, we propose a specific learning-augmented index for indexing a sliding window with timestamps in a data stream. Additionally, we highlight the limitations of learned indexes for low-latency lookup on real-world data distributions. To tackle this issue, we suggest adding an algorithmic enhancement layer to a learned model to correct the prediction error with a small memory latency. This approach enables efficient modelling of the data distribution and resolves the local biases of a learned model at the cost of roughly one memory lookup.
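
    The combination of a position prediction model with an algorithmic correction layer can be illustrated with a short sketch; this is an assumption-laden illustration (a crude linear model stands in for whatever learned model the thesis actually uses, and HybridRangeIndex is a hypothetical name), not the thesis's implementation:

        import bisect

        class HybridRangeIndex:
            """Sketch of a model-assisted sorted-array index: a linear model
            predicts the position of a key, and a bounded local search
            corrects the prediction, preserving worst-case guarantees."""

            def __init__(self, keys):
                self.keys = sorted(keys)
                n = len(self.keys)
                lo, hi = self.keys[0], self.keys[-1]
                # Fit a minimal linear model: position ~ slope * key + intercept.
                self.slope = (n - 1) / (hi - lo) if hi != lo else 0.0
                self.intercept = -self.slope * lo
                # Record the worst prediction error so lookups can bound the search.
                self.max_err = max(
                    abs(self._predict(k) - i) for i, k in enumerate(self.keys)
                )

            def _predict(self, key):
                return int(self.slope * key + self.intercept)

            def lookup(self, key):
                n = len(self.keys)
                guess = min(max(self._predict(key), 0), n - 1)
                lo = max(guess - self.max_err, 0)
                hi = min(guess + self.max_err + 1, n)
                # Algorithmic correction layer: search only inside the error band.
                i = bisect.bisect_left(self.keys, key, lo, hi)
                if i < n and self.keys[i] == key:
                    return i
                return None

        idx = HybridRangeIndex([3, 8, 21, 34, 55, 89, 144])
        print(idx.lookup(55))  # 4: position of 55 in the sorted order
        print(idx.lookup(4))   # None: key absent

    Because lookups only ever search within the recorded maximum error band around the predicted position, the sketch keeps the logarithmic worst case of binary search while benefiting from the model when the data distribution is close to linear.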

    Intra-Domain Pathlet Routing

    Internal routing inside an ISP network is the foundation for many services that generate revenue from the ISP's customers. Fine-grained control of the paths taken by network traffic once it enters the ISP's network is therefore a crucial means to achieve a top-quality offer and, equally important, to enforce SLAs. Many widespread network technologies and approaches (most notably, MPLS) offer limited (e.g., with RSVP-TE), tricky (e.g., with OSPF metrics), or no control over internal routing paths. On the other hand, recent advances in the research community are a good starting point to address this shortcoming, but they miss elements that would enable their applicability in an ISP's network. We extend pathlet routing by introducing a new control plane for internal routing that has the following qualities: it is designed to operate in the internal network of an ISP; it enables fine-grained management of network paths with suitable configuration primitives; it is scalable, because routing changes are only propagated to the portion of the network that is affected by them; it supports independent configuration of specific network portions without the need to know the configuration of the whole network; it is robust thanks to the adoption of multipath routing; it supports the enforcement of QoS levels; it is independent of the specific data plane used in the ISP's network; and it can be deployed incrementally and coexist nicely with other control planes. Besides formally introducing the algorithms and messages of our control plane, we propose an experimental validation in the simulation framework OMNeT++, which we use to assess the effectiveness and scalability of our approach.
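
    Pathlet routing composes end-to-end paths from reusable path fragments; the independent-configuration property above follows from each fragment being defined locally. A minimal sketch of the idea, with hypothetical names that do not reflect the paper's actual control-plane messages:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Pathlet:
            """A path fragment: an ingress node, an egress node, and the
            router-level hops it traverses inside the ISP network."""
            src: str
            dst: str
            hops: tuple

        def compose(pathlets):
            """Concatenate pathlets into an end-to-end path, checking that
            each fragment starts where the previous one ends."""
            path = [pathlets[0].src]
            for p in pathlets:
                if p.src != path[-1]:
                    raise ValueError(f"{p.src}->{p.dst} does not extend {path[-1]}")
                path.extend(p.hops)
                path.append(p.dst)
            return path

        # Two pathlets covering different network portions, each of which
        # could be configured independently by its own operator.
        a = Pathlet("ingress_gw", "core1", ("r1", "r2"))
        b = Pathlet("core1", "egress_gw", ("r5",))
        print(compose([a, b]))  # ['ingress_gw', 'r1', 'r2', 'core1', 'r5', 'egress_gw']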

    Effectively Learning Spatial Indices


    Training high performance skills using above real-time training

    The Above Real-Time Training (ARTT) concept is a unique approach to training high performance skills. ARTT refers to a training paradigm that places the operator in a simulated environment that functions at faster than normal time. Such a paradigm departs from the intuitive, but not often supported, belief that the best practice environment is the training environment with the highest fidelity. The approach is hypothesized to provide greater 'transfer value' per simulation trial by incorporating training techniques and instructional features into the simulator. These techniques allow individuals to acquire critical skills faster and with greater retention. ARTT also allows an individual trained in 'fast time' to operate in what appears to be a more confident state when the same task is performed in a real-time environment. Two related experiments are discussed. The findings appear to be consistent with previous findings that show positive effects of task variation during training. Moreover, ARTT has merit in improving or maintaining transfer while sharply reducing training time. There are indications that the effectiveness of ARTT varies as a function of task content and possibly task difficulty. Other implications of ARTT are discussed along with future research directions.
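
    The core mechanism, running the simulated task at a fixed speed-up relative to the wall clock, can be sketched as follows; the compression factor of 1.5 and all names here are illustrative assumptions, not values or code from the experiments:

        import time

        def run_training_trial(step_fn, sim_duration_s, time_compression=1.5, dt=0.02):
            """Drive a simulation whose internal clock advances
            `time_compression` times faster than the wall clock.

            step_fn(sim_time, dt) advances the simulated task by one step.
            """
            sim_time = 0.0
            while sim_time < sim_duration_s:
                step_fn(sim_time, dt)
                sim_time += dt
                # Sleep less than dt so simulated time outruns wall-clock time.
                time.sleep(dt / time_compression)

        # Example: a trivial step function; a real trial would render the
        # simulated environment and record operator inputs here.
        run_training_trial(lambda t, dt: None, sim_duration_s=0.2)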