
    AROMA: Automatic Generation of Radio Maps for Localization Systems

    WLAN localization has recently become an active research field. Due to the wide deployment of WLANs, WLAN localization provides ubiquitous coverage and adds to the value of the wireless network by providing the locations of its users without any additional hardware. However, WLAN localization systems usually require constructing a radio map, which is a major barrier to their deployment. The radio map stores information about the signal strength from different signal strength streams at selected locations in the site of interest. Typical construction of a radio map involves measurements and calibration, making it a tedious and time-consuming operation. In this paper, we present the AROMA system, which automatically constructs accurate active and passive radio maps for both device-based and device-free WLAN localization systems. AROMA has three main goals: high accuracy, low computational requirements, and minimum user overhead. To achieve high accuracy, AROMA uses 3D ray tracing enhanced with the uniform theory of diffraction (UTD) to model the electric field behavior and the human shadowing effect. AROMA also automates a number of routine tasks, such as importing building models and automatically sampling the area of interest, to reduce the user's overhead. Finally, AROMA uses a number of optimization techniques to reduce the computational requirements. We present our system architecture and describe the details of its different components that allow AROMA to achieve its goals. We evaluate AROMA in two different testbeds. Our experiments show that the predicted signal strength differs from the measurements by a maximum average absolute error of 3.18 dBm, achieving a maximum localization error of 2.44 m for both the device-based and device-free cases. (Comment: 14 pages, 17 figures)
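
    As a rough illustration of what such a radio map contains (a sketch only, using a simple log-distance path-loss model rather than AROMA's UTD-enhanced 3D ray tracing), the snippet below samples a site on a regular grid and stores one predicted signal-strength value per access point at each location; the access-point positions, transmit power and path-loss constants are made-up values.

```python
import numpy as np

# Hypothetical access points: (x, y) position in metres and transmit power in dBm.
ACCESS_POINTS = [((0.0, 0.0), 15.0), ((20.0, 0.0), 15.0), ((10.0, 15.0), 15.0)]

PATH_LOSS_EXPONENT = 3.0   # assumed indoor propagation exponent
REF_LOSS_DB = 40.0         # assumed loss at the 1 m reference distance


def predicted_rss(tx_dbm, distance_m):
    """Log-distance path-loss model: a crude stand-in for ray tracing + UTD."""
    d = max(distance_m, 1.0)  # clamp to the reference distance
    return tx_dbm - REF_LOSS_DB - 10.0 * PATH_LOSS_EXPONENT * np.log10(d)


def build_radio_map(width_m, height_m, step_m=1.0):
    """Sample the site on a regular grid; each cell stores one RSS value per AP."""
    xs = np.arange(0.0, width_m + step_m, step_m)
    ys = np.arange(0.0, height_m + step_m, step_m)
    radio_map = {}
    for x in xs:
        for y in ys:
            rss = [predicted_rss(tx, np.hypot(x - ax, y - ay))
                   for (ax, ay), tx in ACCESS_POINTS]
            radio_map[(float(x), float(y))] = rss
    return radio_map


if __name__ == "__main__":
    rm = build_radio_map(20.0, 15.0, step_m=2.0)
    print(len(rm), "grid points,", len(ACCESS_POINTS), "RSS streams per point")
```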

    Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    The quality of modern astronomical data, the power of modern computers and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. The combination of these technological advancements has created a new ability to make color astronomical images, and in many ways has led to a new philosophy of how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. This philosophy is necessary because most datasets do not correspond to the wavelength range of sensitivity of the human eye. The use of visual grammar, defined as the elements which affect the interpretation of an image, can maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage viewers and keep them interested for a longer period of time. The use of these techniques can result in a striking image that effectively conveys the science within the image, to scientists and to the public. (Comment: 104 pages, 38 figures, submitted to A)
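
    As a minimal sketch of the layering metaphor described above (not the authors' actual workflow or software), the snippet below stretches each grayscale dataset, tints it with an assigned colour and blends the layers additively; the stretch function, colours and synthetic data are placeholders.

```python
import numpy as np

def stretch(data, percentile=99.5):
    """Arcsinh stretch scaled to [0, 1]; softens bright cores, lifts faint detail."""
    scaled = np.arcsinh(np.clip(data, 0, None))
    top = np.percentile(scaled, percentile)
    return np.clip(scaled / max(top, 1e-12), 0.0, 1.0)

def compose(layers):
    """Additively blend (dataset, RGB colour) layers into one colour image."""
    rgb = np.zeros(layers[0][0].shape + (3,))
    for data, colour in layers:
        rgb += stretch(data)[..., None] * np.asarray(colour)
    return np.clip(rgb, 0.0, 1.0)

if __name__ == "__main__":
    # Placeholder arrays standing in for, e.g., narrow-band exposures.
    h_alpha = np.random.gamma(2.0, 1.0, (256, 256))
    oiii = np.random.gamma(2.0, 1.0, (256, 256))
    image = compose([(h_alpha, (1.0, 0.2, 0.2)),   # red-ish tint
                     (oiii,   (0.2, 0.6, 1.0))])   # blue-ish tint
    print(image.shape, image.min(), image.max())
```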

    Is Captain Kirk a natural blonde? Do X-ray crystallographers dream of electron clouds? Comparing model-based inferences in science with fiction

    Scientific models share one central characteristic with fiction: their relation to the physical world is ambiguous. It is often unclear whether an element in a model represents something in the world or is an artifact of model building. Fiction, too, can resemble our world to varying degrees. However, we assign a different epistemic function to scientific representations. As artifacts of human activity, how do scientific representations allow us to make inferences about real phenomena? In response to this concern, philosophers of science have started analyzing scientific representations in terms of fictionalization strategies. Many arguments center on a dyadic relation between the model and its target system, focusing on structural resemblances and "as if" scenarios. This chapter takes a different approach. It looks more closely at model building to analyze the interpretative strategies dealing with the representational limits of models. How do we interpret ambiguous elements in models? Moreover, how do we determine the validity of model-based inferences to information that is not an explicit part of a representational structure? I argue that the problem of ambiguous inference emerges from two features of representations, namely their hybridity and incompleteness. To distinguish between fictional and non-fictional elements in scientific models, my suggestion is to look at the integrative strategies that link a particular model to other methods in an ongoing research context. To exemplify this idea, I examine protein modeling through X-ray crystallography as a pivotal method in biochemistry

    In-situ defect detection systems for R2R flexible PV films

    The atomic layer deposition (ALD) technique is used to apply a thin (40-100 nm) barrier coating of Al2O3 on polymer substrates for flexible PV cells, to minimise and control the degradation caused by water vapour ingress. However, defects appearing on the film surfaces during Al2O3 ALD growth have been shown to contribute significantly to the deterioration of PV module efficiency and lifespan [1]. In order to improve the process yield and product efficiency, it is desirable to develop an inspection system that can detect transparent barrier film defects in the production line during film processing. Off-line detection of defects in transparent PV barrier films is difficult and time consuming. Consequently, implementing an accurate in-situ defect inspection system in the production environment is even more challenging, since the requirements on positioning, fast measurement, long-term stability and robustness against environmental disturbance are demanding. For in-situ R2R defect inspection systems, the following conditions need to be satisfied by the inspection tools. Firstly, the measurement must be fast and involve no physical contact with the inspected film surface. Secondly, the measurement system must be robust against environmental disturbance during inspection. Finally, the system should have sub-micrometre lateral resolution and nanometre vertical resolution in order to be able to distinguish defects on the film surface. Optical interferometry techniques have the potential to be used as a solution for such applications; however, they are extremely sensitive to environmental noise such as mechanical vibration, air turbulence and temperature drift. George [2] reported that a single-shot interferometry system, "FlexCam", developed by 4D Technology is currently being used to detect defects in PV barrier films manufactured by R2R technology. It is robust against environmental disturbances, but it has a limited vertical range, which is restricted by the phase ambiguity of phase-shift interferometry. This vertical measurement range (a few hundred nanometres) is far less than the typical vertical extent of defects (a few micrometres up to a few tens of micrometres), so it is not possible to detect the majority of defects in R2R flexible PV barrier films
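
    A quick back-of-the-envelope check of the phase-ambiguity limit mentioned above: in reflection-mode phase-shift interferometry, a 2π phase wrap corresponds to a height step of λ/2, so that is the unambiguous vertical range. The short sketch below assumes a 633 nm source purely for illustration and compares that range with micrometre-scale defect heights.

```python
# Back-of-the-envelope check of the phase-ambiguity limit for single-wavelength
# phase-shift interferometry in reflection: one 2*pi wrap corresponds to a
# height step of lambda/2, which bounds the unambiguous vertical range.

WAVELENGTH_NM = 633.0          # assumed HeNe-class source, for illustration only
unambiguous_range_nm = WAVELENGTH_NM / 2.0

defect_heights_um = [1.0, 5.0, 20.0]   # defect scale quoted in the abstract
for h_um in defect_heights_um:
    wraps = (h_um * 1000.0) / unambiguous_range_nm
    print(f"{h_um:5.1f} um defect spans ~{wraps:5.1f} phase-wrap intervals "
          f"(unambiguous range = {unambiguous_range_nm:.0f} nm)")
```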

    Statistical Analysis in Art Conservation Research

    Evaluates all components of data analysis and shows that statistical methods in conservation are vastly underutilized. Also offers specific examples of possible improvements

    The Allen Telescope Array: The First Widefield, Panchromatic, Snapshot Radio Camera for Radio Astronomy and SETI

    The first 42 elements of the Allen Telescope Array (ATA-42) are beginning to deliver data at the Hat Creek Radio Observatory in Northern California. Scientists and engineers are actively exploiting all of the flexibility designed into this innovative instrument for simultaneously conducting surveys of the astrophysical sky and searching for distant technological civilizations. This paper summarizes the design elements of the ATA, the cost savings made possible by the use of COTS components, and the cost/performance trade-offs that eventually enabled this first snapshot radio camera. The fundamental scientific program of this new telescope is varied and exciting; some of the first astronomical results will be discussed. (Comment: Special Issue of Proceedings of the IEEE: "Advances in Radio Telescopes", Baars, J., Thompson, R., D'Addario, L., eds., 2009, in press)

    Information recovery in the biological sciences : protein structure determination by constraint satisfaction, simulation and automated image processing

    Regardless of the field of study or particular problem, any experimental science always poses the same question: "What object or phenomenon generated the data that we see, given what is known?" In the field of 2D electron crystallography, data is collected from a series of two-dimensional images, formed either as a result of diffraction-mode imaging or TEM-mode real imaging. The resulting dataset is acquired strictly in the Fourier domain, either as coupled amplitudes and phases (in TEM mode) or as amplitudes alone (in diffraction mode). In either case, data are received from the microscope as a series of CCD frames or scanned negatives, which generally require a significant amount of pre-processing in order to be useful. Traditionally, processing the large volume of data collected from the microscope was the time-limiting factor in protein structure determination by electron microscopy. Data must initially be collected either on film negatives, which in turn must be developed and scanned, or from CCDs of sizes typically no larger than 2096x2096 (though larger models are in operation). In either case, data are finally ready for processing as 8-bit, 16-bit or (in principle) 32-bit grey-scale images. Regardless of data source, the foundation of all crystallographic methods is the presence of a regular Fourier lattice. Two-dimensional cryo-electron microscopy of proteins introduces special challenges, as multiple crystals may be present in the same image, producing in some cases several independent lattices. Additionally, scanned negatives typically have a rectangular region marking the film number and other details of image acquisition that must be removed prior to processing. If the edges of the images are not down-tapered, vertical and horizontal "streaks" will be present in the Fourier transform of the image, arising from the high-resolution discontinuities between the opposite edges of the image. These streaks can overlap with lattice points that fall close to the vertical and horizontal axes and disrupt both the information they contain and the ability to detect them. Lastly, SpotScanning (Downing, 1991) is a commonly used process whereby circular discs are individually scanned in an image. The large-scale regularity of the scanning pattern produces a low-frequency lattice which can interfere and overlap with any protein crystal lattices. We introduce a series of methods, packaged into 2dx (Gipson et al., 2007), which simultaneously address these problems, automatically detecting accurate crystal lattice parameters for a majority of images. Further, a template is described for the automation of all subsequent image processing steps on the road to a fully processed dataset. The broader picture of image processing is one of reproducibility. The lattice parameters, for instance, are only one of hundreds of parameters which must be determined or provided and subsequently stored and accessed in a regular way during image processing. Numerous steps, from correct CTF and tilt-geometry determination to the final stages of symmetrization and optimal image recovery, must be performed sequentially and repeatedly for hundreds of images. The goal in such a project is then to automatically process as significant a portion of the data as possible and to reduce unnecessary, repetitive data entry by the user. Here also, 2dx (Gipson et al., 2007), the image processing package designed to automatically process individual 2D TEM images, is introduced.
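
    To make the edge down-tapering step mentioned above concrete, here is a minimal sketch (not 2dx code) that rolls the image borders smoothly toward the mean with a raised-cosine ramp before taking the Fourier transform, suppressing the axial streaks caused by the wrap-around discontinuity; the taper width and synthetic test image are arbitrary choices.

```python
import numpy as np

def edge_taper(image, border=32):
    """Roll the image edges smoothly to the mean with a raised-cosine ramp,
    removing the wrap-around discontinuity that produces axial streaks in the FFT."""
    img = image.astype(float) - image.mean()
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(border) / border))  # 0 -> 1
    wx = np.ones(img.shape[1])
    wy = np.ones(img.shape[0])
    wx[:border], wx[-border:] = ramp, ramp[::-1]
    wy[:border], wy[-border:] = ramp, ramp[::-1]
    return img * np.outer(wy, wx)

if __name__ == "__main__":
    # Synthetic "micrograph": a noisy 2D lattice-like pattern.
    y, x = np.mgrid[0:512, 0:512]
    img = np.sin(0.3 * x) * np.sin(0.2 * y) + 0.1 * np.random.randn(512, 512)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(edge_taper(img))))
    print("peak of tapered spectrum:", spectrum.max())
```
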
The 2dx package focuses on reliability, ease of use and automation to produce the finished results necessary for a full three-dimensional reconstruction of the protein in question. Once individual 2D images have been processed, they contribute to a larger, project-wide 3-dimensional dataset. Several challenges exist in processing this dataset, besides simply the organization of results and project-wide parameters. In particular, though tilt geometry, relative amplitude scaling and absolute orientation are in principle known (or obtainable from an individual image), errors, uncertainties and heterogeneous data types produce a 3D dataset with many parameters to be optimized. 2dx_merge (Gipson et al., 2007) is the follow-up to the first release of 2dx, which originally processed only individual images. Based on the guiding principles of the earlier release, 2dx_merge focuses on ease of use and automation. The result is a fully qualified 3D structure determination package capable of turning hundreds of electron micrograph images, nearly completely automatically, into a full 3D structure. Most of the processing performed in the 2dx package is based on the excellent suite of programs collectively termed the MRC package (Crowther et al., 1996). Extensions to this suite and alternative algorithms continue to play an essential role in image processing as computers become faster and as advancements are made in the mathematics of signal processing. In this capacity, an alternative procedure to generate a 3D structure from processed 2D images is presented. This algorithm, entitled "Projective Constraint Optimization" (PCO), leverages prior information, such as symmetry and the fact that the protein is bound in a membrane, to extend the normal boundaries of resolution. In particular, traditional methods (Agard, 1983) make no attempt to account for the "missing cone", a vast, unsampled region in 3D Fourier space arising from specimen tilt limitations in the microscope. Provided sufficient data, PCO simultaneously refines the dataset, accounting for error, while attempting to fill this missing cone. Though PCO provides a near-optimal 3D reconstruction from the data, depending on the initial data quality and the amount of prior knowledge there may be a host of solutions, and more importantly pseudo-solutions, that are more or less consistent with the provided dataset. Trying to find a global best fit for the known information and data can be a daunting mathematical challenge; to this end, the use of meta-heuristics is addressed. Specifically, in the case of many pseudo-solutions, so long as a suitably defined error metric can be found, quasi-evolutionary swarm algorithms can be used to search the solution space, sharing data as they go. Given sufficient computational power, such algorithms can dramatically reduce the search time for global optima for a given dataset. Once the structure of a protein has been determined, many questions often remain about its function. Questions about the dynamics of a protein, for instance, are often not readily interpretable from structure alone. To this end, an investigation into computationally optimized structural dynamics is described. Here, in order to find the most likely path a protein might take through "conformation space" between two conformations, a graphics processing unit (GPU)-optimized program and set of libraries was written, speeding up this calculation by a factor of 30.
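
A hedged illustration of the constraint-based idea behind missing-cone filling (a toy alternating-projection scheme, not the PCO algorithm described above): the measured Fourier samples are kept where available, a real-space support constraint is enforced, and the unmeasured wedge is gradually filled in; the object, support and wedge geometry below are synthetic.

```python
import numpy as np

def fill_missing_region(measured_ft, missing_mask, support, n_iter=200):
    """Toy alternating projections: keep measured Fourier data where available,
    enforce a real-space support constraint, and let the unmeasured region
    (standing in for a "missing cone") be filled in by the constraint."""
    estimate = np.where(missing_mask, 0.0, measured_ft)
    for _ in range(n_iter):
        real_space = np.fft.ifft2(estimate).real * support    # real-space constraint
        ft = np.fft.fft2(real_space)
        estimate = np.where(missing_mask, ft, measured_ft)    # Fourier-space constraint
    return estimate

if __name__ == "__main__":
    # Synthetic object with compact support.
    obj = np.zeros((64, 64))
    obj[20:44, 20:44] = np.random.rand(24, 24)
    support = obj > 0
    ft_true = np.fft.fft2(obj)
    ky = np.fft.fftfreq(64)[:, None]
    kx = np.fft.fftfreq(64)[None, :]
    missing = np.abs(ky) > 2 * np.abs(kx)      # crude wedge standing in for the cone
    recovered = fill_missing_region(ft_true, missing, support)
    err = np.linalg.norm((recovered - ft_true)[missing]) / np.linalg.norm(ft_true[missing])
    print(f"relative Fourier error inside the missing wedge: {err:.3f}")
```
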
The GPU tools and methods developed here serve as a conceptual template for how GPU coding was applied to other aspects of the work presented here, as well as for GPU programming generally. The final portion of the thesis takes an apparent step in reverse, presenting a dramatic, yet highly predictive, simplification of a complex biological process. Kinetic Monte Carlo simulations idealize thousands of proteins as interacting agents governed by a set of simple rules (e.g. react/dissociate), offering highly accurate insights into the large-scale cooperative behavior of proteins. This work demonstrates that, for many applications, structure, dynamics or even general knowledge of a protein may not be necessary for a meaningful biological story to emerge. Additionally, even in cases where structure and function are known, such simulations can help to answer the biological question in its entirety, from structure to dynamics to ultimate function
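
As a sketch of the agent-style kinetic Monte Carlo idealization described above (not the thesis's simulation), the following Gillespie-type loop evolves populations under two rules, react (A + B -> AB) and dissociate (AB -> A + B); the rate constants and initial counts are illustrative only.

```python
import random

# Toy kinetic Monte Carlo (Gillespie-style) with two rules: A + B -> AB (react)
# and AB -> A + B (dissociate). Rate constants are illustrative only.
K_REACT = 1e-3       # per A-B pair per unit time
K_DISSOCIATE = 0.05  # per complex per unit time

def simulate(n_a=1000, n_b=1000, n_ab=0, t_end=200.0, seed=1):
    random.seed(seed)
    t = 0.0
    while t < t_end:
        rate_react = K_REACT * n_a * n_b
        rate_diss = K_DISSOCIATE * n_ab
        total = rate_react + rate_diss
        if total == 0.0:
            break
        # Exponentially distributed waiting time to the next event.
        t += random.expovariate(total)
        if random.random() < rate_react / total:
            n_a, n_b, n_ab = n_a - 1, n_b - 1, n_ab + 1   # react
        else:
            n_a, n_b, n_ab = n_a + 1, n_b + 1, n_ab - 1   # dissociate
    return n_a, n_b, n_ab

if __name__ == "__main__":
    print("final counts (A, B, AB):", simulate())
```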

    Diffractive art practices: computation and the complicated entanglements between mainstream contemporary art and new media art

    We engage with Karen Barad’s notion of diffraction (2007) to re-evaluate the relations between mainstream contemporary art (MCA) and new media art (NMA) that have been discussed for many years as part of a somewhat contentious debate. Our diffractive reading highlights both large and small but consequential differences between these art practices. We do not smooth over the tensions highlighted in earlier discussions of NMA and MCA. Instead we use Barad’s term ‘entanglement’ to suggest that there are generative ‘entanglements’, as well as productive differences, between these practices. We extend the debate by considering which differences matter, for whom (artists, gallerists, scientists) and how these differences emerge through material-discursive intra-actions. We argue for a new term, diffractive art practices, and suggest that such art practices move beyond the bifurcation of NMA and MCA to partially reconfigure the practices between art, computation and humanities.

    Fully automated urban traffic system

    The replacement of the driver with an automatic system that could perform the functions of guiding and routing a vehicle with a human's capability of responding to changing traffic demands was discussed. The problem was divided into four technological areas: guidance, routing, computing, and communications. It was determined that the latter three areas were being developed independently of any need for fully automated urban traffic. A guidance system that would meet the system requirements was not being developed but was technically feasible

    Applications of optical processing for improving ERTS data, volume 1

    Application of optically diagnosed noise information toward the development of filtering subroutines for improvement of digital sensing data tape quality - Vol. 1