
    Evolution of the discrete cosine transform using genetic programming

    Compression of two-dimensional data is important for the efficient transmission, storage and manipulation of images. The most common technique used for lossy image compression relies on fast application of the Discrete Cosine Transform (DCT). The cosine transform has been heavily researched, and many efficient methods have been determined and successfully applied in practice; this paper presents a novel method for evolving a DCT algorithm using genetic programming. We show that it is possible to evolve a very close approximation to a 4-point transform. In theory, an 8-point transform could also be evolved using the same technique.
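    For reference, the exact 4-point DCT-II that such an evolved approximation would target can be written directly. The sketch below is the naive O(N²) form (not one of the fast factorizations the abstract alludes to), with illustrative names:

```python
import math

def dct4(x):
    """Naive 4-point DCT-II (unnormalized): X_k = sum_n x_n * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)  # expected to be 4 here
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A constant signal has all its energy in the DC coefficient:
print(dct4([1.0, 1.0, 1.0, 1.0]))  # → [4.0, ~0, ~0, ~0]
```

An evolved program would be judged by how closely its outputs match this reference transform across sample inputs.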

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.

    Semantic multimedia remote display for mobile thin clients

    Current remote display technologies for mobile thin clients convert practically all types of graphical content into sequences of images rendered by the client. Consequently, important information concerning the content semantics is lost. The present paper goes beyond this bottleneck by developing a semantic multimedia remote display. The principle consists of representing the graphical content as a real-time interactive multimedia scene graph. The underlying architecture features novel components for scene-graph creation and management, as well as for user interactivity handling. The experimental setup considers the Linux X Window System and BiFS/LASeR multimedia scene technologies on the server and client sides, respectively. The implemented solution was benchmarked against currently deployed solutions (VNC and Microsoft RDP), considering text editing and WWW browsing applications. The quantitative assessments demonstrate: (1) visual quality expressed by seven objective metrics, e.g., PSNR values between 30 and 42 dB or SSIM values larger than 0.9999; (2) downlink bandwidth gain factors ranging from 2 to 60; (3) real-time user event management expressed by network round-trip time reduction by factors of 4-6 and by uplink bandwidth gain factors from 3 to 10; (4) feasible CPU activity, larger than in the RDP case but reduced by a factor of 1.5 with respect to VNC-HEXTILE.
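    The PSNR figures quoted above follow the standard definition, 10·log10(peak²/MSE). A minimal sketch in pure Python, assuming 8-bit pixel values flattened to equal-length sequences (function and parameter names are illustrative, not from the paper):

```python
import math

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a degraded image."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Reference vs. a slightly degraded copy; MSE = 50 gives roughly 31.1 dB,
# i.e. near the lower end of the 30-42 dB range reported in the abstract.
print(psnr([10, 20], [10, 30]))
```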

    Telescience Testbed Pilot Program

    The Telescience Testbed Pilot Program is developing initial recommendations for requirements and design approaches for the information systems of the Space Station era. During this quarter, drafting of the final reports of the various participants was initiated. Several drafts are included in this report as university technical reports.

    Telescience testbed pilot program, volume 2: Program results

    Space Station Freedom and its associated labs, coupled with the availability of new computing and communications technologies, have the potential for significantly enhancing scientific research. A Telescience Testbed Pilot Program (TTPP) was established to develop the experience base needed to deal with issues in the design of the future information system of the Space Station era. The testbeds represented four scientific disciplines (astronomy and astrophysics, earth sciences, life sciences, and microgravity sciences) and studied issues in payload design, operation, and data analysis. This volume, part of a three-volume set containing the results of the TTPP, presents the integrated results. Background on the program and highlights of the program results are provided, and the various testbed experiments and the programmatic approach are summarized. The results are first presented on a discipline-by-discipline basis, highlighting the lessons learned for each discipline, and are then integrated across disciplines, summarizing the lessons learned overall.

    Communication satellites: Guidelines for a strategic plan

    To maintain and augment the leadership that the United States has enjoyed, and to ensure that the nation is investing sufficiently and wisely toward this purpose, NASA prepared a strategic plan for satellite communications research and development. Guidelines and recommendations were generated for a NASA plan to support this objective and for the conduct of a communication satellite research and development program over the next 25 years. The guidelines are briefly summarized.

    Creating a Relational Distributed Object Store

    In and of itself, data storage has apparent business utility. But when we can convert data to information, the utility of stored data increases dramatically. It is the layering of relation atop the data mass that is the engine for such conversion. Frank relation amongst discrete objects sporadically ingested is rare, making the process of synthesizing such relation all the more challenging, but the challenge must be met if we are ever to see an equivalent business value for unstructured data as we already have with structured data. This paper describes a novel construct, referred to as a relational distributed object store (RDOS), that seeks to solve the twin problems of how to persistently and reliably store petabytes of unstructured data while simultaneously creating and persisting relations amongst billions of objects.
    Comment: 12 pages, 5 figures

    Lifeworld Inc.: and what to do about it

    Can we detect changes in the way that the world turns up as they turn up? This paper makes such an attempt. The first part of the paper argues that a wide-ranging change is occurring in the ontological preconditions of Euro-American cultures, based in reworking what and how an event is produced. Driven by the security-entertainment complex, the aim is to mass produce phenomenological encounter: Lifeworld Inc., as I call it. Swimming in a sea of data, such an aim requires the construction of just enough authenticity over and over again. In the second part of the paper, I go on to argue that this new world requires a different kind of social science, one that is experimental in its orientation, just as Lifeworld Inc. is, but with a mission to provoke awareness in untoward ways in order to produce new means of association. Only thus, or so I argue, can social science add to the world we are now beginning to live in.

    VIS: the visible imager for Euclid

    Euclid-VIS is a large-format visible imager for the ESA Euclid space mission in their Cosmic Vision program, scheduled for launch in 2019. Together with the near-infrared imaging within the NISP instrument, it forms the basis of the weak lensing measurements of Euclid. VIS will image in a single r+i+z band from 550-900 nm over a field of view of ~0.5 deg². By combining 4 exposures with a total of 2240 sec, VIS will reach V=24.5 (10σ) for sources with extent ~0.3 arcsec. The image sampling is 0.1 arcsec. VIS will provide deep imaging with a tightly controlled and stable point spread function (PSF) over a wide survey area of 15,000 deg² to measure the cosmic shear from nearly 1.5 billion galaxies to high levels of accuracy, from which the cosmological parameters will be measured. In addition, VIS will also provide a legacy imaging dataset with an unprecedented combination of spatial resolution, depth and area covering most of the extragalactic sky. Here we present the results of the study carried out by the Euclid Consortium during the Euclid Definition phase.
    Comment: 10 pages, 6 figures