162 research outputs found

    The Development of Unique Focal Planes for High-Resolution Suborbital and Ground-Based Exploration

    Get PDF
    abstract: The development of new ultraviolet/visible/infrared (UV/Vis/IR) astronomical instrumentation that uses novel imaging approaches and increases the accessibility of observing time for more research groups is essential for rapid innovation within the community. Unique focal planes that are rapid-prototyped, low cost, and high resolution are key. This dissertation discusses the emergent designs of three unique focal planes, each designed for a different astronomical platform: suborbital balloon, suborbital rocket, and ground-based observatory. The balloon-based payload is a hexapod-actuated focal plane, the HExapod Resolution-Enhancement SYstem (HERESY), which uses tip-tilt motion to increase angular resolution by removing jitter. The suborbital rocket imaging payload is a Jet Propulsion Laboratory (JPL) delta-doped charge-coupled device (CCD) packaged to survive the rigors of launch and image far-ultraviolet (FUV) spectra. The ground-based observatory payload is a star-centroid-tracking modification of the balloon version of HERESY for the tip-tilt correction of atmospheric turbulence. The design, construction, verification, and validation of each focal plane payload is discussed in detail. For HERESY's balloon implementation, pointing error data from the Stratospheric Terahertz Observatory (STO) Antarctic balloon mission was used to build an experimental lab setup demonstrating that the hexapod can eliminate jitter in flight-like conditions. For the suborbital rocket focal plane, a harsh set of unit-level tests to ensure the payload could survive launch and space conditions, as well as the characterization and optimization of the JPL detector, are detailed. Finally, a modification that co-mounts a fast-read detector on the HERESY focal plane, for use on ground-based observatories, intended to reduce atmospherically induced tip-tilt error through the centroid tracking of bright natural guide stars, is described.
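The centroid-tracking step behind such tip-tilt correction can be illustrated with a minimal sketch (the function names and the simple intensity-weighted model are assumptions for illustration, not the dissertation's implementation): the star's centroid is computed on each frame, and its offset from a reference position is the tip-tilt error fed to the actuator.

```python
def centroid(frame):
    """Intensity-weighted centroid (row, col) of a 2-D image given as lists."""
    total = sum(sum(row) for row in frame)
    r = sum(i * sum(row) for i, row in enumerate(frame)) / total
    c = sum(j * v for row in frame for j, v in enumerate(row)) / total
    return r, c

def tip_tilt_error(frame, ref):
    """Offset of the star centroid from a reference position (row, col)."""
    r, c = centroid(frame)
    return r - ref[0], c - ref[1]

# A bright "star" centered at (1, 2) in a 3x4 frame: zero error vs. that reference.
frame = [[0, 0, 1, 0],
         [0, 1, 4, 1],
         [0, 0, 1, 0]]
print(tip_tilt_error(frame, (1.0, 2.0)))  # (0.0, 0.0)
```

A real guider would run this per frame on a fast-read detector and command the hexapod to null the error.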

    High-level real-time programming in Java

    Full text link
    Real-time systems have reached a level of complexity beyond the scaling capability of the low-level or restricted languages traditionally used for real-time programming. While Metronome garbage collection has made it practical to use Java to implement real-time systems, many challenges remain for the construction of complex real-time systems, some specific to the use of Java and others simply due to the change in scale of such systems. The goal of our research is the creation of a comprehensive Java-based programming environment and methodology for the creation of complex real-time systems. Our goals include construction of a provably correct real-time garbage collector capable of providing worst-case latencies of 100 µs and of scaling from sensor nodes up to large multiprocessors; specialized programming constructs that retain the safety and simplicity of Java, and yet provide sub-microsecond latencies; the extension of Java's "write once, run anywhere" principle from functional correctness to timing behavior; online analysis and visualization that aids in the understanding of complex behaviors; and a principled probabilistic analysis methodology for bounding the behavior of the resulting systems. While much remains to be done, this paper describes the progress we have made towards these goals.
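The paper's emphasis on worst-case (rather than average) latency can be sketched with a toy measurement harness (the function name and approach are illustrative assumptions, unrelated to the authors' tooling): the quantity of interest is the maximum observed pause over many runs of an operation.

```python
import time

def max_pause(op, iterations=1000):
    """Toy harness: return the worst observed latency of `op` in seconds,
    reflecting a worst-case rather than average-case view of timing."""
    worst = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        op()
        worst = max(worst, time.perf_counter() - start)
    return worst

# Example: worst observed cost of allocating a small list.
print(f"worst pause: {max_pause(lambda: [0] * 64) * 1e6:.1f} µs")
```

Note that an empirical maximum only lower-bounds the true worst case, which is why the paper argues for provable bounds and probabilistic analysis.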

    Satellite-tracking and Earth dynamics research programs

    Get PDF
    The Arequipa station obtained a total of 31,989 quick-look range observations on 719 passes in the six months. Data were acquired from Metsahovi, San Fernando, Kootwijk, Wettzell, Grasse, Simosato, Graz, Dodaira, and Herstmonceux. Work progressed on the setup of SAO-1. Discussions were also initiated with the Israelis on the relocation of SAO-3 to a site in southern Israel in FY 1984. Arequipa and the cooperating stations continued to track LAGEOS at highest priority for polar motion and Earth rotation studies, and for other geophysical investigations, including crustal dynamics, earth and ocean tides, and the general development of precision orbit determination. SAO completed the revisions to its field software as part of its recent upgrading program. With cesium standards, Omega receivers, and other timekeeping aids, the station was able to maintain a timing accuracy of better than plus or minus 6 to 8 microseconds.

    Resource Management in Multimedia Networked Systems

    Get PDF
    Error-free multimedia data processing and communication includes providing guaranteed services such as those of the traditional telephone. A set of problems has to be solved and handled at the control-management level of the host and underlying network architectures. In this paper we discuss 'resource management' at the host and network level, and the cooperation between the two to achieve globally guaranteed transmission and presentation services, i.e., end-to-end guarantees. The emphasis is on 'network resources' (e.g., bandwidth, buffer space) and 'host resources' (e.g., CPU processing time) which need to be controlled in order to satisfy the Quality of Service (QoS) requirements set by the users of the multimedia networked system. The control of the specified resources involves three actions: (1) properly allocate resources (end-to-end) during multimedia call establishment, so that traffic can flow according to the QoS specification; (2) control resource allocation during the multimedia transmission; (3) adapt to changes when degradation of system components occurs. These actions imply the necessity of: (a) new services, such as admission services, at the hosts and intermediate network nodes; (b) new protocols for establishing connections which satisfy QoS requirements along the path from sender to receiver(s), such as a resource reservation protocol; (c) new control algorithms for delay, rate, and error control; (d) new resource monitoring protocols for reporting system changes, such as a resource administration protocol; (e) new adaptive schemes for dynamic resource allocation to respond to system changes; and (f) new architectures at the hosts and switches to accommodate the resource management entities. This article gives an overview of services, mechanisms, and protocols for resource management as outlined above.
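Action (1), end-to-end allocation at call establishment, reduces to an admission test along the path. A minimal sketch (the class, link model, and numbers are illustrative assumptions, not the paper's mechanism): a call is admitted only if every link on its path has enough spare bandwidth, and the reservation is then made on all of them.

```python
class AdmissionController:
    """Toy end-to-end admission test: admit a call only if every link on
    its path can still carry the requested bandwidth (hypothetical model)."""
    def __init__(self, link_capacity):
        self.capacity = dict(link_capacity)        # link -> total bandwidth
        self.reserved = {l: 0 for l in link_capacity}

    def admit(self, path, bandwidth):
        if any(self.reserved[l] + bandwidth > self.capacity[l] for l in path):
            return False                           # reject: QoS would be violated
        for l in path:
            self.reserved[l] += bandwidth          # reserve along the whole path
        return True

ac = AdmissionController({"A-B": 10, "B-C": 5})
print(ac.admit(["A-B", "B-C"], 4))  # True: both links have room
print(ac.admit(["A-B", "B-C"], 3))  # False: B-C would exceed its capacity
```

Rejecting at establishment time is what lets the remaining actions (policing and adaptation) assume the admitted load is feasible.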

    Fixed-Priority Memory-Centric Scheduler for COTS-Based Multiprocessors

    Get PDF
    Memory-centric scheduling attempts to guarantee temporal predictability on commercial-off-the-shelf (COTS) multiprocessor systems to exploit their high performance for real-time applications. Several solutions proposed in the real-time literature have hardware requirements that are not easily satisfied by modern COTS platforms, such as hardware support for strict memory partitioning or the presence of scratchpads. However, even without such hardware support, it is possible to design an efficient memory-centric scheduler. In this article, we design, implement, and analyze a memory-centric scheduler for deterministic memory management on COTS multiprocessor platforms without any hardware support. Our approach uses fixed-priority scheduling and proposes a global "memory preemption" scheme to boost real-time schedulability. The proposed scheduling protocol is implemented in the Jailhouse hypervisor and the Erika real-time kernel. Measurements of the scheduler overhead demonstrate the applicability of the proposed approach, and schedulability experiments show a 20% gain in schedulability when compared to contention-based and static fair-share approaches.
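The global "memory preemption" idea can be sketched as a simple arbitration rule (the function, the single-memory-phase model, and the priority encoding are illustrative assumptions, not the article's protocol): at most one task executes its memory phase at a time, and a higher-priority request preempts a lower-priority holder.

```python
def grant_memory(requests, holder=None, holder_prio=None):
    """Toy global memory arbitration: among pending (task, priority) requests
    (lower number = higher priority), the best one wins the single memory
    phase, preempting a lower-priority current holder."""
    if not requests:
        return holder
    best = min(requests, key=lambda r: r[1])
    if holder is None or best[1] < holder_prio:
        return best[0]          # grant, possibly preempting the holder
    return holder               # holder keeps the memory phase

print(grant_memory([("t1", 3), ("t2", 1)]))                    # 't2'
print(grant_memory([("t1", 3)], holder="t2", holder_prio=1))   # 't2' keeps it
```

Serializing memory phases this way trades some parallelism for a deterministic bound on memory interference, which is what improves schedulability versus free-for-all contention.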

    Building Internet caching systems for streaming media delivery

    Get PDF
    Proxies have been widely and successfully used to cache static Web objects fetched by a client so that subsequent clients requesting the same objects can be served directly from the proxy instead of from distant sources, thus reducing the server's load, the network traffic, and the client response time. However, with the dramatic increase of streaming media objects on the Internet, existing proxies cannot deliver them efficiently due to their large sizes and clients' real-time requirements.

    In this dissertation, we design, implement, and evaluate cost-effective and high-performance proxy-based Internet caching systems for streaming media delivery. Addressing the conflicting performance objectives for streaming media delivery, we first propose an efficient segment-based streaming media proxy system model. This model has guided us to design a practical streaming proxy, called Hyper-Proxy, aimed at delivering streaming media data to clients with minimum playback jitter and a small startup latency, while achieving high caching performance. Second, we have implemented Hyper-Proxy by leveraging the existing Internet infrastructure; Hyper-Proxy enables streaming service on common Web servers. Evaluation of Hyper-Proxy in both global Internet and local network environments shows it can provide satisfactory streaming performance to clients while maintaining good cache performance. Finally, to further improve streaming delivery efficiency, we propose a group of Shared Running Buffer (SRB) based proxy caching techniques to effectively utilize the proxy's memory. SRB algorithms can significantly reduce the media server's and proxy's load and network traffic, and relieve the bottlenecks of disk bandwidth and network bandwidth.

    The contributions of this dissertation are threefold: (1) we have studied several critical performance trade-offs and provided insights into Internet media content caching and delivery, and our understanding further leads us to establish an effective streaming system optimization model; (2) we have designed and evaluated several efficient algorithms to support Internet streaming content delivery, including segment caching, segment prefetching, and memory locality exploitation for streaming; (3) having addressed several system challenges, we have successfully implemented a real streaming proxy system and deployed it in a large industrial enterprise.
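The segment-caching idea can be sketched in a few lines (the class, the fixed prefix policy, and the segment naming are illustrative assumptions, not Hyper-Proxy's actual design): only the first segments of each media object are kept at the proxy, so playback starts locally with low latency while later segments stream from the origin.

```python
class SegmentCache:
    """Toy segment-based proxy cache: cache only the first `prefix_segments`
    segments of each object; later segments always come from the origin."""
    def __init__(self, prefix_segments=2):
        self.prefix = prefix_segments
        self.cache = {}                 # (obj, seg) -> data

    def get(self, obj, seg, fetch_origin):
        key = (obj, seg)
        if key in self.cache:
            return self.cache[key], "hit"
        data = fetch_origin(obj, seg)
        if seg < self.prefix:           # only prefix segments are cached
            self.cache[key] = data
        return data, "miss"

cache = SegmentCache(prefix_segments=2)
origin = lambda obj, seg: f"{obj}:{seg}"
cache.get("movie", 0, origin)           # first request: miss, then cached
print(cache.get("movie", 0, origin))    # ('movie:0', 'hit')
print(cache.get("movie", 5, origin))    # ('movie:5', 'miss') beyond the prefix
```

Caching only prefixes is the classic way to trade proxy storage for startup latency; prefetching the next non-cached segment just before it is needed extends the same idea.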

    Teaching in conditions of difficult knowledge transfer due to the state of emergency caused by the pandemic

    Get PDF
    Introduction/purpose: This paper presents the transformation of the current, classical approach to teaching. Online platforms enable students with and without disabilities to follow classes without hindrance during the lecture period, and after the lecture they can review the video and presentation materials. The main advantage of this way of teaching is the possibility of attending classes from any location and on any device; the only requirement is an Internet connection. Methods: Full integration with the already existing Faculty Information System has been performed. The paper describes a new approach to teaching and illustrates the expected benefits of online teaching. The platforms used in this integration are Microsoft Azure, Microsoft Office 365 Admin, Microsoft Teams, Microsoft Stream, and Microsoft SharePoint. Results: Testing with students showed that introducing a system for online teaching directly improves the quality of teaching. Conclusion: Considering all the results, it can be concluded that the transition to online teaching gives attendees a comprehensive transfer of knowledge as well as the ability to re-listen to lectures. This model can be used for an unlimited number of users in all institutions, regardless of whether those institutions' field of activity is educational.

    User-Centric Quality of Service Provisioning in IP Networks

    Get PDF
    The Internet has become the preferred transport medium for almost every type of communication, continuing to grow both in terms of the number of users and of delivered services. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, either through explicit resource reservation or through traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This current use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, thus highlighting the need for a QoS solution reflecting the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness.

    This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach was to enable a QoS-optimised experience for each Internet user, and not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real-network ISP infrastructures. The results show that for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with Best-Effort Internet, traditional Diffserv, and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
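The user-centric allocation idea can be sketched as follows (the function, the proportional-share model, and the P2P cap value are illustrative assumptions, not the CAPS algorithm): bandwidth shares follow the user's own traffic mix rather than static class priorities, with unresponsive P2P traffic capped and its excess redistributed to the user's other services.

```python
def allocate(user_profile, capacity, p2p_cap=0.3):
    """Toy user-centric allocation: split `capacity` in proportion to the
    user's observed per-service demand, capping the 'p2p' share and
    redistributing the excess to the user's remaining services."""
    total = sum(user_profile.values())
    shares = {s: d / total for s, d in user_profile.items()}
    if shares.get("p2p", 0) > p2p_cap:
        excess = shares["p2p"] - p2p_cap
        shares["p2p"] = p2p_cap
        others = [s for s in shares if s != "p2p"]
        for s in others:
            shares[s] += excess / len(others)
    return {s: round(share * capacity, 2) for s, share in shares.items()}

# Half the demand is P2P, so it is capped and the excess helps VoIP and video.
print(allocate({"voip": 1, "video": 4, "p2p": 5}, capacity=10))
# {'voip': 2.0, 'video': 5.0, 'p2p': 3.0}
```

Because the split is recomputed from each user's profile, no service class is privileged a priori, which is the contrast the thesis draws with static Diffserv policies.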

    Realtime garbage collection in the JamaicaVM 3.0

    Full text link
    This paper provides an overview of the realtime garbage collector used by the RTSJ Java Virtual Machine JamaicaVM. Particular emphasis is placed on the improvements made in release 3.0 of JamaicaVM. The JamaicaVM garbage collector is incremental to an extreme extent: single incremental steps of the garbage collector correspond to scanning only 32 bytes of memory and have a worst-case execution time on the order of one microsecond. The JamaicaVM garbage collector uses automatic pacing, making the system easier to configure than a garbage collector with explicit pacing, which requires information on the application's allocation rate. The recent improvements of the garbage collector presented in this paper include support for automatic heap expansion; reduction of the memory overhead of the garbage collector's internal structures; and significant performance optimisations such as a faster write barrier and a sweep phase that does not need to touch the objects and therefore reduces the number of cache misses caused during sweep.
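The bounded-work incremental marking described above can be sketched in miniature (the function, the work-unit budget, and the object-graph encoding are illustrative assumptions, not JamaicaVM's implementation): each step processes at most a fixed budget of work, so the mutator's pause per step stays tiny regardless of heap size.

```python
def gc_step(gray, marked, graph, budget=32):
    """Toy incremental mark step: process at most `budget` work units
    (JamaicaVM bounds each step at roughly 32 bytes scanned), then
    return True if more steps are still needed."""
    work = 0
    while gray and work < budget:
        obj = gray.pop()
        if obj in marked:
            continue
        marked.add(obj)                   # mark the object...
        gray.extend(graph.get(obj, []))   # ...and queue its children
        work += 1
    return bool(gray)

graph = {"root": ["a", "b"], "a": ["c"], "b": [], "c": []}
gray, marked = ["root"], set()
while gc_step(gray, marked, graph, budget=2):
    pass                                  # mutator work would interleave here
print(sorted(marked))  # ['a', 'b', 'c', 'root']
```

Spreading the mark phase over many tiny steps is what turns one long stop-the-world pause into many bounded micro-pauses; automatic pacing then decides how often these steps must run.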

    Multi-core devices for safety-critical systems: a survey

    Get PDF
    Multi-core devices are envisioned to support the development of next-generation safety-critical systems, enabling the on-chip integration of functions of different criticality. This integration provides multiple potential system-level benefits such as cost, size, power, and weight reduction. However, safety certification becomes a challenge and several fundamental safety technical requirements must be addressed, such as temporal and spatial independence, reliability, and diagnostic coverage. This survey provides a categorization and overview, at different device abstraction levels (nanoscale, component, and device), of selected key research contributions that support compliance with these fundamental safety requirements.

    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grant TIN2015-65316-P, the Basque Government under grant KK-2019-00035, and the HiPEAC Network of Excellence. The Spanish Ministry of Economy and Competitiveness has also partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC-2013-14717).