    Interference Suppression in Massive MIMO VLC Systems

    The focus of this dissertation is the development and evaluation of methods and principles to mitigate interference in multiuser visible light communication (VLC) systems using several transmitters. All components of such a massive multiple-input multiple-output (MIMO) system are considered and transformed into a communication system model, with particular attention to the hardware requirements of different modulation schemes. By analyzing all steps in the communication process, the inter-channel interference between users is identified as the most critical aspect. Several methods of suppressing this kind of interference, i.e. of splitting the MIMO channel into parallel single channels, are discussed, and a novel active LCD-based interference suppression principle at the receiver side is introduced as the main aspect of this work. This technique enables dynamic adaptation of the physical channel: compared to solely software-based or static approaches, the LCD interference suppression filter achieves adaptive channel separation without altering the characteristics of the transmitter lights. This is especially advantageous in dual-use scenarios with illumination requirements. Additionally, external interferers, such as natural light or the transmitter light sources of neighboring cells in a multicell setting, can be suppressed without requiring any control over them. Each user's LCD filter is placed in front of the corresponding photodetector and configured so that only light from the desired transmitters reaches the detector: the appropriate pixels are set to transparent, while light from unwanted transmitters remains blocked. The effectiveness of this method is tested and benchmarked against zero-forcing (ZF) precoding in different scenarios and applications by numerical simulation, and is also verified experimentally in a large MIMO VLC testbed created specifically for this purpose.
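    The zero-forcing baseline used for comparison can be illustrated with a toy model. Everything below is invented for illustration: the channel matrix values are random placeholders, and real intensity-modulated VLC links additionally require nonnegative signals, which this sketch ignores.

```python
import numpy as np

# Hypothetical 4x4 MIMO VLC channel: H[i, j] is the optical gain from
# LED transmitter j to photodetector i (values are random placeholders).
rng = np.random.default_rng(0)
H = 0.5 * np.eye(4) + 0.1 * rng.random((4, 4))

# Zero-forcing precoder: pre-multiplying the data by the channel's
# (pseudo-)inverse splits the MIMO channel into parallel sub-channels.
W = np.linalg.pinv(H)

s = np.array([1.0, 0.0, 1.0, 1.0])  # per-user data symbols
y = H @ (W @ s)                     # received signals after precoding

print(np.allclose(y, s))            # True: inter-channel interference removed
```

    The LCD filter described above achieves the same channel separation physically, by blocking unwanted light before detection, rather than mathematically at the transmitter.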

    Flat panel display signal processing

    Televisions (TVs) have shown considerable technological progress since their introduction almost a century ago. Starting out as small, dim and monochrome screens in wooden cabinets, TVs have evolved to large, bright and colorful displays in plastic boxes. It took until the turn of the century, however, for the TV to become like a ‘picture on the wall’. This happened when the bulky Cathode Ray Tube (CRT) was replaced with thin and light-weight Flat Panel Displays (FPDs), such as Liquid Crystal Displays (LCDs) or Plasma Display Panels (PDPs). However, the TV system and transmission formats are still strongly coupled to the CRT technology, whereas FPDs use very different principles to convert the electronic video signal to visible images. These differences result in image artifacts that the CRT never had, but at the same time provide opportunities to improve FPD image quality beyond that of the CRT. This thesis presents an analysis of the properties of flat panel displays, their relation to image quality, and video signal processing algorithms to improve the quality of the displayed images. To analyze different types of displays, the display signal chain is described using basic principles common to all displays. The main function of a display is to create visible images (light) from an electronic signal (video), requiring display chain functions such as the opto-electronic effect, spatial and temporal addressing and reconstruction, and color synthesis. The properties of these functions are used to describe CRTs, LCDs, and PDPs, showing that these displays perform the same functions, using different implementations. These differences have a number of consequences that are further investigated in this thesis. Spatial and temporal aspects, corresponding to ‘static’ and ‘dynamic’ resolution respectively, are covered in detail.
Moreover, video signal processing is an essential part of the display signal chain for FPDs, because the display format will in general no longer match the source format. In this thesis, it is investigated how specific FPD properties, especially those related to spatial and temporal addressing and reconstruction, affect the video signal processing chain. A model of the display signal chain is presented, and applied to analyze FPD spatial properties in relation to static resolution. In particular, the effect of the color subpixels, which enable color image reproduction in FPDs, is analyzed. The perceived display resolution is strongly influenced by the color subpixel arrangement. When taken into account in the signal chain, this improves the perceived resolution on FPDs, which clearly outperform CRTs in this respect. The cause and effect of this improvement, also for alternative subpixel arrangements, are studied using the display signal model. However, the resolution increase cannot be achieved without video processing. This processing is efficiently combined with image scaling, which is always required in the FPD display signal chain, resulting in an algorithm called ‘subpixel image scaling’. A comparison of the effects of subpixel scaling on several subpixel arrangements shows that the largest increase in perceived resolution is found for two-dimensional subpixel arrangements. FPDs outperform CRTs with respect to static resolution, but not with respect to ‘dynamic resolution’, i.e. the perceived resolution of moving images. Life-like reproduction of moving images is an important requirement for a TV display, but the temporal properties of FPDs cause artifacts in moving images (‘motion artifacts’) that are not found in CRTs. A model of the temporal aspects of the display signal chain is used to analyze dynamic resolution and motion artifacts on several display types, in particular LCD and PDP.
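    The subpixel-scaling idea above can be sketched with a hypothetical one-dimensional example: on an RGB-stripe panel the three color channels sit at different horizontal positions, so each channel can be resampled at its own offset instead of at the shared pixel center. The 1/3-pixel offsets and the edge signal below are illustrative, not the thesis's algorithm.

```python
import numpy as np

def edge(x):
    """Hypothetical 1-D luminance edge: ramps from 0 to 1 over x in [0, 1]."""
    return np.clip(x, 0.0, 1.0)

pixels = np.arange(4, dtype=float)

# Conventional scaling: all three channels sampled at the pixel center.
conventional = {c: edge(pixels) for c in "RGB"}

# Subpixel scaling: per-channel sampling offsets of -1/3, 0, +1/3 pixel.
offsets = {"R": -1/3, "G": 0.0, "B": +1/3}
subpixel = {c: edge(pixels + d) for c, d in offsets.items()}

# The edge transition is now resolved at 1/3-pixel granularity in luminance:
print(conventional["R"][1], subpixel["R"][1], subpixel["B"][1])
```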
Furthermore, video signal processing algorithms are developed that can reduce motion artifacts and increase the dynamic resolution. The occurrence of motion artifacts is explained by the fact that the human visual system tracks moving objects. This converts temporal effects on the display into perceived spatial effects that can appear in very different ways. The analysis shows how addressing mismatches in the chain cause motion-dependent misalignment of image data, e.g. resulting in the ‘dynamic false contour’ artifact in PDPs. Also, non-ideal temporal reconstruction results in ‘motion blur’, i.e. a loss of sharpness of moving images, which is typical for LCDs. The relation between motion blur, dynamic resolution, and temporal properties of LCDs is analyzed using the display signal model in the temporal (frequency) domain. The concepts of temporal aperture, motion aperture and temporal display bandwidth are introduced, which enable characterization of motion blur in a simple and direct way. This is applied to compare several motion blur reduction methods, based on modified display design and driving. This thesis further describes the development of several video processing algorithms that can reduce motion artifacts. It is shown that the motion of objects in the image plays an essential role in these algorithms, i.e. they require motion estimation and compensation techniques. In LCDs, video processing for motion artifact reduction involves a compensation for the temporal reconstruction characteristics of the display, leading to the ‘motion compensated inverse filtering’ algorithm. The display chain model is used to analyze this algorithm, and several methods to increase its performance are presented. In PDPs, motion artifact reduction can be achieved with ‘motion compensated subfield generation’, for which an advanced algorithm is presented.
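    The link between temporal aperture and motion blur described above admits a simple quantitative sketch: an eye tracking an object moving at v pixels per frame integrates the display's hold interval, turning the hold time into a spatial blur of roughly v times the duty cycle. The numbers below are illustrative.

```python
# Hold-type motion blur in one line of arithmetic: eye tracking converts
# the display's temporal aperture into a spatial box blur of width
# v * duty_cycle pixels.

def motion_blur_extent(speed_px_per_frame: float, duty_cycle: float) -> float:
    """Perceived blur width in pixels on a hold-type display."""
    return speed_px_per_frame * duty_cycle

v = 8.0                              # object speed, pixels per frame
print(motion_blur_extent(v, 1.0))    # full-hold LCD: 8.0 px of blur
print(motion_blur_extent(v, 0.25))   # strobed/scanning backlight: 2.0 px
```

    This is why shortening the temporal aperture (display design and driving) and motion compensated inverse filtering (video processing) both attack the same artifact.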

    Space station data system analysis/architecture study. Task 2: Options development DR-5. Volume 1: Technology options

    The second task in the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make key design/programmatic decisions. This volume identifies the preferred options in the technology category and characterizes these options with respect to performance attributes, constraints, cost, and risk. The technology category includes advanced materials, processes, and techniques that can be used to enhance the implementation of SSDS design structures. The specific areas discussed are mass storage, including space and ground on-line storage and off-line storage; man/machine interface; data processing hardware, including flight computers and advanced/fault tolerant computer architectures; and software, including data compression algorithms, on-board high level languages, and software tools. Also discussed are artificial intelligence applications and hard-wired communications.

    Recent advances in the hardware architecture of flat display devices

    Thesis (Master)--Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2007. Includes bibliographical references (leaves 115-117). Text in English; abstract in Turkish and English. xiii, 133 leaves.
    This thesis describes the processing board hardware design for flat panel displays with integrated digital reception, and explains the associated design challenges in detail. The thesis also includes a brief explanation of flat panel technology and processing blocks. Explanations of the building blocks of TVs and flat panel displays are given before the design stage for better understanding. The hardware design of the processing board is investigated in two major steps: schematic design and layout design. The first step of the schematic design is the system-level block diagram. The schematic diagram is the detailed application-level hardware design, and the layout is the implementation level of the design. The system-level, application-level, and implementation-level hardware design of the TV processing board is described in detail. Design challenges, considerations, and solutions are defined in advance for flat panel displays.

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed in mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, on the order of months or years, with the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations, without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also contain small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations with energy efficiency, by relaxing the precision, the accuracy, or the reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g.
I/O, memory or data transfers). Second, in order to fulfill its promises and become widespread in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems in which the precision or reliability of operations (and consequently their energy consumption) can be dynamically tuned at runtime, rather than “static” solutions, in which the output quality is fixed at design-time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often relevant contributors to the total energy in sensor nodes and mobile systems respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption. More specifically, the contribution of this thesis is divided into three parts. In a first body of work, the design of EQ scalable modules for processing hardware data paths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction.
These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are re-thought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, which exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement, by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display.
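    As a rough illustration of how a lossy differential bus encoding can trade bounded error for reduced bus activity, consider the generic sketch below. This is not the thesis's Approximate Differential Encoding or Serial-T0: the quantization rule and reconstruction scheme are invented for illustration.

```python
# Slowly varying sensor samples yield small deltas; coarsely quantizing
# each delta bounds the error (residual < step) while keeping most
# transmitted bits zero, which reduces switching activity on a serial bus.

def encode(samples, step=4):
    out, prev = [], 0
    for s in samples:
        delta = (s - prev) // step * step   # quantized difference
        out.append(delta)
        prev += delta                       # track the decoder's view
    return out

def decode(deltas):
    vals, prev = [], 0
    for d in deltas:
        prev += d
        vals.append(prev)
    return vals

samples = [100, 102, 105, 103, 110]
deltas = encode(samples)
recon = decode(deltas)
print(deltas)                                            # mostly small values
print([abs(a - b) < 4 for a, b in zip(samples, recon)])  # bounded error
```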
For each of these three topics, results show that the aforementioned goal of building EQ scalable systems compatible with existing best practices and ready for integration into commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy versus quality tradeoff.
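    The OLED energy-quality knob described above can be sketched with a common simplification: power in an emissive panel grows roughly with total pixel luminance, so uniformly dimming an image by a factor alpha saves power in proportion while increasing the distance from the original. The linear power model and the plain global scaling below are assumptions for illustration, not the thesis's optimized transformations.

```python
import numpy as np

def scale_image(img: np.ndarray, alpha: float) -> np.ndarray:
    """Globally dim a [0, 1] grayscale frame by factor alpha."""
    return np.clip(img * alpha, 0.0, 1.0)

def power_proxy(img: np.ndarray) -> float:
    return float(img.sum())                 # luminance-proportional model

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a - b) ** 2))

img = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # toy grayscale frame
for alpha in (1.0, 0.8, 0.6):
    out = scale_image(img, alpha)
    # Each step down in alpha buys power savings at a quality cost.
    print(alpha, power_proxy(out) / power_proxy(img), mse(img, out))
```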

    Evaluating the energy consumption and the energy savings potential in ICT backbone networks


    Platforms for prototyping minimally invasive instruments

    The introduction of new technologies in medicine is often an issue because there are many stages to go through, from the idea to approval by ethical committees and mass production. This work covers the first steps of the development of a medical device, dealing with the tools that can help reduce the time needed to produce a laboratory prototype. These tools can involve electronics and software for the creation of a “universal” hardware platform that can be used for many robotic applications, adapting only a few components to the specific scenario. The platform is created by setting up a traditional computer with an operating system and acquisition channels aimed at opening the system toward the real environment. On this platform, algorithms can be implemented rapidly, making it possible to assess the feasibility of an idea. This approach lets the designer concentrate on the application rather than on the selection of the appropriate hardware electronics every time a new project starts. The first part gives an overview of the existing instruments for minimally invasive interventions available as commercial or research products. An introduction to hardware electronics is presented, with the requirements and the specific characteristics needed for a robotic application. The second part focuses on specific projects in MIS. The first project concerns the study and development of a lightweight hand-held robotic instrument for laparoscopy, motivated by the lack of dexterous hand-held laparoscopic instruments. The second project concerns the study and presentation of a prototype robotic endoscope with enhanced resolution. The third project concerns the development of a system able to detect the inspiration and expiration phases of breathing. The aim is to evaluate the weariness of the surgeon, since breathing can be related to fatigue.

    An Integrated Control and Data Acquisition System for Pharmaceutical Capsule Inspection

    Pharmaphil Inc. manufactures two-part gelatin capsules for the pharmaceutical industry. Their current method of quality control is manual inspection of every carton of capsules prior to shipment. In today's modern manufacturing world, more efficient and cost-effective means of quality control exist. It is Pharmaphil's desire to develop a custom machine vision system to replace manual inspection, with a potential opportunity in the capsule manufacturing quality control market. In collaboration with the Electrical and Computer Engineering Department at the University of Windsor, a novel system was developed to achieve this goal. The objective was to develop a system capable of inspecting 1000 capsules per minute with the ability to detect holes, cracks, dents, bubbles, double caps and incorrect colour or size. Using an antiquated machine vision system for capsule inspection from the mid-nineties as a base, a modern inspection system was developed that performed faster and more thorough inspections. To minimize the overall system cost as well as to increase flexibility, a fully custom design was undertaken. The resulting system follows a traditional machine vision architecture whose main components are an image acquisition component, a processing unit and machine control. The designed system uses custom USB 2.0 cameras to acquire images, a standard desktop PC to process image data and a custom machine control board to perform machine control and timing. The system operates with four identical quadrants working in parallel to increase throughput. The final system provided a proof-of-concept for the approach taken. The machine control and image acquisition components of the system yielded a maximum throughput of 1200 capsules per minute. After incorporating image inspection, the final result was a system capable of inspecting capsules at a rate of about 800 capsules per minute with high accuracy.
With optimizations, the system throughput can be further improved. The findings from the development of the prototype system provide an excellent basis from which the first-generation commercial unit can be designed.
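    The stated throughputs imply a simple per-quadrant timing budget. The sketch below only rearranges the numbers given above: the four-quadrant split comes from the text, and the rest is arithmetic.

```python
# Time available to handle one capsule in one quadrant, given a total
# system throughput spread evenly across parallel quadrants.

def per_capsule_budget_ms(capsules_per_min: float, quadrants: int = 4) -> float:
    """Per-capsule, per-quadrant time budget in milliseconds."""
    per_quadrant_rate = capsules_per_min / quadrants   # capsules/min/quadrant
    return 60_000.0 / per_quadrant_rate

print(per_capsule_budget_ms(1200))  # 200.0 ms: acquisition/control ceiling
print(per_capsule_budget_ms(800))   # 300.0 ms: with image inspection included
```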

    Data Acquisition Applications

    Data acquisition systems have numerous applications. This book has a total of 13 chapters and is divided into three sections: Industrial applications, Medical applications and Scientific experiments. The chapters are written by experts from around the world. The targeted audience includes professionals who are designers or researchers in the field of data acquisition systems; faculty members and graduate students could also benefit from the book.