    Lessons learned from the design of a mobile multimedia system in the Moby Dick project

    Recent advances in wireless networking technology and the exponential development of semiconductor technology have engendered a new paradigm of computing, called personal mobile computing or ubiquitous computing. This offers a vision of the future with a much richer and more exciting set of architecture research challenges than extrapolations of current desktop architectures. In particular, these devices will have limited battery resources, will handle diverse data types, and will operate in environments that are insecure and dynamic and that vary significantly in time and location. The research performed in the MOBY DICK project is about designing such a mobile multimedia system. This paper discusses the approach taken in the MOBY DICK project to solve some of these problems, discusses its contributions, and assesses what was learned from the project.

    Robotics and IoT: Interdisciplinary Applied Research in the RIoT Zone

    Robotics and the Internet of Things are intrinsically multi-disciplinary subjects that investigate the interaction between the physical and the cyber worlds and how they impact society. As a result, they demand careful consideration not only of digital and analog technologies but also of the human element. The “RIoT Zone” brings together disparate people and ideas to address a human-centric form of intelligence we call “intuitive autonomy”. This talk will describe human/robot interaction and the programming of robots by human demonstration from the perspectives of Engineering Technology, Computer Information Technology, Industrial Engineering, and Psychology.

    6G White Paper on Machine Learning in Wireless Communication Networks

    The focus of this white paper is on machine learning (ML) in wireless communications. 6G wireless communication networks will be the backbone of the digital transformation of societies by providing ubiquitous, reliable, and near-instant wireless connectivity for humans and machines. Recent advances in ML research have enabled a wide range of novel technologies such as self-driving vehicles and voice assistants. Such innovation is possible as a result of the availability of advanced ML models, large datasets, and high computational power. On the other hand, the ever-increasing demand for connectivity will require substantial innovation in 6G wireless networks, and ML tools will play a major role in solving problems in the wireless domain. In this paper, we provide an overview of how ML is expected to impact wireless communication systems. We first give an overview of the ML methods that have the highest potential to be used in wireless networks. Then, we discuss the problems that can be solved by using ML in various layers of the network, such as the physical layer, medium access layer, and application layer. Zero-touch optimization of wireless networks using ML is another interesting aspect discussed in this paper. Finally, at the end of each section, important research questions that the section aims to answer are presented.
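    To make the physical-layer case concrete, the sketch below (our own illustration, not taken from the white paper) trains a tiny softmax classifier in plain numpy to demap noisy QPSK symbols, learning from data the decision regions that a hand-designed demapper encodes by construction. The constellation, SNR, and training hyperparameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # QPSK constellation: 4 unit-power symbols (illustrative mapping).
        constellation = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)

        def make_batch(n, snr_db=10.0):
            """Random symbols passed through an AWGN channel."""
            labels = rng.integers(0, 4, size=n)
            noise_std = 10 ** (-snr_db / 20) / np.sqrt(2)   # per-dimension std
            rx = constellation[labels] + noise_std * (
                rng.standard_normal(n) + 1j * rng.standard_normal(n))
            return np.stack([rx.real, rx.imag], axis=1), labels  # features: I, Q

        # Tiny softmax (multinomial logistic) classifier trained by gradient descent.
        W = np.zeros((2, 4)); b = np.zeros(4)
        for step in range(500):
            x, y = make_batch(256)
            logits = x @ W + b
            logits -= logits.max(axis=1, keepdims=True)
            p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
            p[np.arange(len(y)), y] -= 1.0      # cross-entropy gradient w.r.t. logits
            W -= 0.1 * x.T @ p / len(y)
            b -= 0.1 * p.mean(axis=0)

        x_test, y_test = make_batch(10000)
        acc = (np.argmax(x_test @ W + b, axis=1) == y_test).mean()
        print(f"learned demapper symbol accuracy at 10 dB SNR: {acc:.3f}")

    At 10 dB SNR the learned decision regions closely match the ideal quadrant boundaries, so the accuracy approaches that of the classical demapper; the point of ML in real systems is that the same data-driven recipe still works when the channel has impairments no closed-form demapper models.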

    "Going back to our roots": second generation biocomputing

    Researchers in the field of biocomputing have, for many years, successfully "harvested and exploited" the natural world for inspiration in developing systems that are robust, adaptable and capable of generating novel and even "creative" solutions to human-defined problems. However, in this position paper we argue that the time has now come for a reassessment of how we exploit biology to generate new computational systems. Previous solutions (the "first generation" of biocomputing techniques), whilst reasonably effective, are crude analogues of actual biological systems. We believe that a new, inherently inter-disciplinary approach is needed for the development of the emerging "second generation" of bio-inspired methods. This new modus operandi will require much closer interaction between the engineering and life sciences communities, as well as a bidirectional flow of concepts, applications and expertise. We support our argument by examining, in this new light, three existing areas of biocomputing (genetic programming, artificial immune systems and evolvable hardware), as well as an emerging area (natural genetic engineering) which may provide useful pointers as to the way forward. Comment: Submitted to the International Journal of Unconventional Computing.
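    For readers unfamiliar with the "first generation" techniques being critiqued, the sketch below is a minimal genetic algorithm of our own (a close relative of the genetic programming the authors discuss): a population of bitstrings evolves toward an all-ones target via tournament selection, one-point crossover, and mutation. The problem, rates, and sizes are illustrative; the crude selection-plus-variation loop is exactly the kind of loose biological analogue the paper argues should be superseded.

        import random

        random.seed(1)
        GENOME_LEN, POP_SIZE, GENERATIONS = 40, 60, 120

        def fitness(genome):
            """OneMax: count of 1-bits (a toy human-defined problem)."""
            return sum(genome)

        def tournament(pop):
            """Pick the fitter of two randomly chosen individuals."""
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
               for _ in range(POP_SIZE)]
        for gen in range(GENERATIONS):
            nxt = []
            while len(nxt) < POP_SIZE:
                p1, p2 = tournament(pop), tournament(pop)
                cut = random.randrange(1, GENOME_LEN)        # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [bit ^ (random.random() < 0.01) for bit in child]  # mutate
                nxt.append(child)
            pop = nxt
            if fitness(max(pop, key=fitness)) == GENOME_LEN:
                print(f"all-ones genome found at generation {gen}")
                break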

    Design and management of image processing pipelines within CPS: Acquired experience towards the end of the FitOptiVis ECSEL Project

    Cyber-Physical Systems (CPSs) are dynamic and reactive systems interacting with processes, the environment and, sometimes, humans. They are often distributed, with sensors and actuators, and are characterized as smart, adaptive, predictive, and able to react in real time. Image- and video-processing pipelines are a prime source of environmental information for such systems, allowing them to make better decisions according to what they see. Therefore, in FitOptiVis, we are developing novel methods and tools to integrate complex image- and video-processing pipelines. FitOptiVis aims to deliver a reference architecture for describing and optimizing quality and resource management for imaging and video pipelines in CPSs, both at design time and at run time. The architecture is concretized in low-power, high-performance, smart components, and in methods and tools for combined design-time and run-time multi-objective optimization and adaptation within system and environment constraints.
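    The run-time quality/resource trade-off targeted here can be illustrated with a deliberately small sketch: a hypothetical pipeline stage that lowers its working quality when it overruns its frame budget and restores it when there is slack. The class name, parameters, and adaptation policy are our own illustrative assumptions, not the FitOptiVis reference architecture.

        import time

        class AdaptiveStage:
            """Toy video-pipeline stage trading image quality for latency.

            Loosely in the spirit of run-time quality management; the
            policy and thresholds here are illustrative only.
            """
            def __init__(self, deadline_s=0.033):       # ~30 fps frame budget
                self.deadline_s = deadline_s
                self.scale = 1.0                        # current quality setpoint

            def process(self, frame, kernel):
                start = time.perf_counter()
                out = kernel(frame, self.scale)         # user-supplied processing
                elapsed = time.perf_counter() - start
                if elapsed > self.deadline_s:           # overran: degrade quality
                    self.scale = max(0.25, self.scale / 2)
                elif elapsed < 0.5 * self.deadline_s:   # ample slack: recover
                    self.scale = min(1.0, self.scale * 2)
                return out

        # Usage: stage = AdaptiveStage(); out = stage.process(frame, my_filter)
        # where my_filter(frame, scale) downsamples by `scale` before processing.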

    Synopsis of an engineering solution for a painful problem: Phantom Limb Pain

    This paper is a synopsis of a recently proposed solution for treating patients who suffer from Phantom Limb Pain (PLP). The underpinning approach of this research and development project is based on an extension of “mirror box” therapy, which has had some promising results in pain reduction. An outline is provided of an immersive, individually tailored environment that gives the patient a virtually realised limb presence as a means of pain reduction. The virtual 3D holographic environment is meant to produce immersive, engaging and creative environments and tasks that encourage and maintain patients’ interest, an important consideration for two of the more challenging populations under study (over-60s and war veterans). It is hoped that the system will reduce PLP by more than 3 points on an 11-point Visual Analog Scale (VAS), since a reduction of less than 3 points could be attributed to distraction alone.

    Towards High-Frequency Tracking and Fast Edge-Aware Optimization

    This dissertation advances the state of the art for AR/VR tracking systems by increasing the tracking frequency by orders of magnitude and proposes an efficient algorithm for the problem of edge-aware optimization. AR/VR is a natural way of interacting with computers, where the physical and digital worlds coexist. We are on the cusp of a radical change in how humans perform and interact with computing. Humans are sensitive to small misalignments between the real and the virtual world, so tracking at kilohertz frequencies becomes essential. Current vision-based systems fall short, as their tracking frequency is implicitly limited by the frame rate of the camera. This thesis presents a prototype system which can track at frequencies orders of magnitude higher than state-of-the-art methods, using multiple commodity cameras. The proposed system exploits characteristics of the camera traditionally considered as flaws, namely rolling shutter and radial distortion. The experimental evaluation shows the effectiveness of the method for various degrees of motion. Furthermore, edge-aware optimization is an indispensable tool in the computer vision arsenal for accurate filtering of depth data and image-based rendering, which is increasingly being used for content creation and geometry processing for AR/VR. As applications increasingly demand higher resolution and speed, there is a need to develop methods that scale accordingly. This dissertation proposes such an edge-aware optimization framework, which is efficient, accurate, and scales well algorithmically; these desirable traits are not found jointly in the state of the art. The experiments show the effectiveness of the framework in a multitude of computer vision tasks such as computational photography and stereo. Comment: PhD thesis.
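    The central trick of treating rolling shutter as a feature has a compact intuition: a rolling-shutter sensor exposes each image row at a slightly different instant, so a feature detected on row r carries its own sub-frame timestamp. A minimal sketch of that row-time relation, with illustrative constants rather than the thesis' actual implementation:

        def row_timestamp(frame_start_s, row, num_rows=1080, readout_s=0.010):
            """Time at which a rolling-shutter sensor sampled a given row.

            With a 10 ms readout over 1080 rows, consecutive rows are ~9.3 us
            apart, so one 30 fps camera yields ~1080 distinct measurement
            times per frame instead of one -- the effect exploited to push
            tracking far beyond the nominal frame rate.
            """
            return frame_start_s + (row / num_rows) * readout_s

        # A feature seen on row 540 of the frame starting at t = 2.000 s
        # was actually observed ~5 ms later:
        print(row_timestamp(2.000, 540))   # -> 2.005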

    Demonstrating Quantum Error Correction that Extends the Lifetime of Quantum Information

    The remarkable discovery of Quantum Error Correction (QEC), which can overcome the errors experienced by a bit of quantum information (qubit), was a critical advance that gives hope for eventually realizing practical quantum computers. In principle, a system that implements QEC can actually pass a "break-even" point and preserve quantum information for longer than the lifetime of its constituent parts. Reaching the break-even point, however, has thus far remained an outstanding and challenging goal. Several previous works have demonstrated elements of QEC in NMR, ions, nitrogen vacancy (NV) centers, photons, and superconducting transmons. However, these works primarily illustrate the signatures or scaling properties of QEC codes rather than test the capacity of the system to extend the lifetime of quantum information over time. Here we demonstrate a QEC system that reaches the break-even point by suppressing the natural errors due to energy loss for a qubit logically encoded in superpositions of coherent states, or cat states, of a superconducting resonator. Moreover, the experiment implements a full QEC protocol by using real-time feedback to encode, monitor naturally occurring errors, decode, and correct. As measured by full process tomography, the enhanced lifetime of the encoded information is 320 microseconds without any post-selection. This is 20 times greater than that of the system's transmon, over twice as long as an uncorrected logical encoding, and 10% longer than the highest quality element of the system (the resonator's 0, 1 Fock states). Our results illustrate the power of novel, hardware-efficient qubit encodings over traditional QEC schemes. Furthermore, they advance the field of experimental error correction from confirming the basic concepts to exploring the metrics that drive system performance and the challenges in implementing a fault-tolerant system.
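    The encode / measure-syndrome / correct cycle that the experiment automates with real-time feedback can be made concrete with the simplest textbook example: the three-qubit bit-flip repetition code, simulated below in numpy. This is our own illustration of the generic QEC cycle, not the cat-code protocol of the paper (which protects against photon loss in a resonator rather than bit flips).

        import numpy as np

        # Textbook 3-qubit bit-flip code: |0>_L = |000>, |1>_L = |111>.
        I2 = np.eye(2)
        X = np.array([[0.0, 1.0], [1.0, 0.0]])

        def kron3(a, b, c):
            return np.kron(np.kron(a, b), c)

        X_OPS = [kron3(X, I2, I2), kron3(I2, X, I2), kron3(I2, I2, X)]

        def encode(alpha, beta):
            """Logical state alpha|000> + beta|111>."""
            psi = np.zeros(8, dtype=complex)
            psi[0b000], psi[0b111] = alpha, beta
            return psi

        def stabilizers(psi):
            """Z1Z2 and Z2Z3 parities; definite (+/-1) after one X error."""
            i = int(np.flatnonzero(np.abs(psi) > 1e-12)[0])
            b = [(i >> s) & 1 for s in (2, 1, 0)]      # bits of qubits 1, 2, 3
            return (-1) ** (b[0] ^ b[1]), (-1) ** (b[1] ^ b[2])

        def correct(psi):
            """Decode the syndrome and undo the indicated single bit flip."""
            lookup = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}
            qubit = lookup[stabilizers(psi)]
            return psi if qubit is None else X_OPS[qubit] @ psi

        psi = encode(0.6, 0.8)
        corrupted = X_OPS[1] @ psi                 # a bit flip on the middle qubit
        print(np.allclose(correct(corrupted), psi))  # -> True: state recovered

    The key property, shared by the cat-code experiment, is that the syndrome reveals which error occurred without revealing (and thus destroying) the encoded superposition itself.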