
    Visual Intelligence and the Terminator

    What would it take to replicate the human visual system in synthetic hardware? What software models can we use to implement the mammalian visual system? The goal of our research is a neuromorphic vision system capable of categorizing, tracking, and maintaining a visual memory of tens of targets. Applications of such a system include smart phones, computers, robotics, autonomous cars, and smart appliances, to name a few.

    A novel expression cassette for the efficient visual selection of transformed tissues in florists' chrysanthemum (Chrysanthemum morifolium Ramat.).

    Constructs carrying visual reporter genes coupled with efficient promoters could facilitate the process of identification and selection of stable transformants in recalcitrant crops. Here, a novel construct utilizing a ribulose-1,5-bisphosphate carboxylase (RbcS) promoter combined with the green fluorescent protein (GFP) reporter gene to initiate very high expression of GFP in florists' chrysanthemum (Chrysanthemum morifolium Ramat.) is described. Based on this expression cassette, a new regeneration protocol using leaf discs as explants was developed for the Agrobacterium-mediated transformation of Chrysanthemum genotype ‘1581’, and a transformation efficiency of 7% was obtained. The expression of two different GFP constructs targeted to either cytosol or plastids was compared in transgenic lines. Both GFP constructs were expressed at such a high level that the green fluorescence dominated red fluorescence in the leaf tissues, allowing easy observation and microdissection of transformed tissues even without a GFP filter. Under normal light, plants with GFP targeted to plastids had a light green phenotype deriving from the high GFP expression. Quantitative reverse transcription PCR analysis showed that the plastid-targeted construct with intron had significantly higher steady-state transcript levels of GFP mRNA. This novel expression cassette may allow direct visual selection of transformed tissues independent of antibiotic selection in a wide range of plant species.

    The June 2012 transit of Venus. Framework for interpretation of observations

    On 5/6 June 2012, ground-based observers have the last opportunity of the century to watch the passage of Venus across the solar disk from Earth. Venus transits have traditionally provided unique insight into the Venus atmosphere through the refraction halo that appears at the planet's outer terminator near ingress/egress. Much more recently, Venus transits have attracted renewed interest because the technique of transits is being successfully applied to the characterization of extrasolar planet atmospheres. The current work investigates theoretically the interaction of sunlight and the Venus atmosphere through the full range of transit phases, as observed from Earth and from a remote distance. Our model predictions quantify the relevant atmospheric phenomena, thereby assisting the observers of the event in the interpretation of measurements and the extrapolation to the exoplanet case. Our approach relies on the numerical integration of the radiative transfer equation, and includes refraction, multiple scattering, atmospheric extinction and solar limb darkening, as well as an up-to-date description of the Venus atmosphere. We produce synthetic images of the planet terminator during ingress/egress that demonstrate the evolving shape, brightness and chromaticity of the halo. Guidelines are offered for the investigation of the planet's upper haze from vertically-unresolved photometric measurements. In this respect, the comparison with measurements from the 2004 transit appears encouraging. We also show integrated lightcurves of the Venus/Sun system at various phases during transit and calculate the respective Venus-Sun integrated transmission spectra. The comparison of the model predictions to those for a Venus-like planet free of haze and clouds (and therefore a closer terrestrial analogue) complements the discussion and sets the conclusions into a broader perspective. Comment: 14 pages; 14 figures; Submitted on 02/06/2012; A&A, accepted for publication on 30/08/2012
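    As a toy illustration of one ingredient of such a transit model, the sketch below numerically integrates the flux of a quadratically limb-darkened solar disk occulted by an opaque planetary disk. The limb-darkening coefficients and radius ratio are illustrative placeholders, and the paper's full treatment additionally includes refraction, multiple scattering, and atmospheric extinction, which this minimal sketch omits.

```python
import numpy as np

def transit_lightcurve(rp_rs, b, xs, u1=0.4, u2=0.26, n=501):
    """Relative flux of a limb-darkened star occulted by an opaque disk.

    rp_rs : planet/star radius ratio (illustrative value, not Venus's)
    b     : impact parameter, in stellar radii
    xs    : array of planet x-positions along the chord, in stellar radii
    u1,u2 : quadratic limb-darkening coefficients (illustrative values)
    n     : pixel-grid resolution across the stellar disk
    """
    # Pixel grid covering the stellar disk.
    ax = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(ax, ax)
    r2 = X**2 + Y**2
    on_star = r2 <= 1.0
    # mu = cos(emergence angle); quadratic limb-darkening law I(mu).
    mu = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))
    I = np.where(on_star, 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2, 0.0)
    total = I.sum()
    flux = np.empty(len(xs), dtype=float)
    for i, x in enumerate(xs):
        # Pixels hidden behind the opaque planetary disk at this position.
        blocked = on_star & ((X - x) ** 2 + (Y - b) ** 2 <= rp_rs**2)
        flux[i] = (total - I[blocked].sum()) / total
    return flux
```

    Because the disk center is brighter than the limb, a central crossing produces a transit slightly deeper than the bare area ratio (rp/rs)^2, which is the geometric effect limb darkening introduces into the lightcurves discussed above.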

    Laminar Cortical Dynamics of Visual Form and Motion Interactions During Coherent Object Motion Perception

    How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? A 3D FORMOTION model specifies how 3D boundary representations, which separate figures from backgrounds within cortical area V2, capture motion signals at the appropriate depths in MT; how motion signals in MT disambiguate boundaries in V2 via MT-to-V1-to-V2 feedback; how sparse feature tracking signals are amplified; and how a spatially anisotropic motion grouping process propagates across perceptual space via MT-MST feedback to integrate feature-tracking and ambiguous motion signals to determine a global object motion percept. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. non-rigid appearance of rotating ellipses. Air Force Office of Scientific Research (F49620-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (BCS-02-35398, SBE-0354378); Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624)

    TinkerCell: Modular CAD Tool for Synthetic Biology

    Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. An application named TinkerCell has been created in order to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models that are constructed using these parts can be analyzed using various C and Python programs that are hosted by TinkerCell via an extensive C and Python API. TinkerCell supports the notion of modules, which are networks with interfaces. Such modules can be connected to each other, forming larger modular networks. Because TinkerCell associates parameters and equations in a model with their respective part, parts can be loaded from databases along with their parameters and rate equations. The modular network design can be used to exchange modules as well as test the concept of modularity in biological systems. The flexible modeling framework along with the C and Python API allows TinkerCell to serve as a host to numerous third-party algorithms. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at www.tinkercell.com. Comment: 23 pages, 20 figures
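    The part/module design described above can be sketched in a few lines of Python. This is a toy illustration of the idea (parts with defining attributes, modules as networks with named interfaces that can be wired together), not TinkerCell's actual C/Python API; every class, field, and part name here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """A biological part defined by a set of attributes
    (e.g. sequence, rate constants)."""
    name: str
    kind: str                                      # e.g. "promoter", "coding sequence"
    attributes: dict = field(default_factory=dict)

@dataclass
class Module:
    """A network of parts exposing named interface points; modules
    can be connected into larger modular networks."""
    name: str
    parts: list = field(default_factory=list)
    interfaces: dict = field(default_factory=dict)  # label -> Part

def connect(upstream: Module, downstream: Module, out_label: str, in_label: str):
    """Join two modules by identifying an output interface of one
    with an input interface of the other."""
    return (upstream.interfaces[out_label], downstream.interfaces[in_label])

# Usage: a toy two-module network (all names illustrative).
p = Part("pTet", "promoter", {"strength": 0.5})
g = Part("GFP", "coding sequence", {"k_translation": 2.0})
sensor = Module("sensor", parts=[p], interfaces={"out": p})
reporter = Module("reporter", parts=[g], interfaces={"in": g})
link = connect(sensor, reporter, "out", "in")
```

    Because each parameter lives on its part rather than in a flat global list, swapping one module for another carries the associated rate constants along with it, which is the exchangeability property the abstract emphasizes.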

    An analysis of the Venera 8 measurements

    Analysis of the Venera 8 measurements yielded equatorial morning terminator horizontal and vertical winds which are similar to the winds obtained from the Venera 7 measurements. The lower boundary of the horizontal retrograde 4-day wind is defined by a 50-60% decrease in wind speed in the vicinity of 44 km, and a retrograde wind plateau of 15 to 40 m/s extends from 40 km down to the vicinity of 18 km, where the winds decrease rapidly to the order of 0.1 m/s near the surface. Updrafts of 2 to 5 m/s exist in the vicinity of 20 to 30 km and are apparently associated with a slightly superadiabatic lapse rate. The temperature lapse rate, surface radius, surface topography, and atmospheric structure are discussed.

    Neural Models of Motion Integration, Segmentation, and Probabilistic Decision-Making

    How do brain mechanisms carry out motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer along its trajectory. Form and motion processes are needed to accomplish this using feedforward and feedback interactions both within and across cortical processing streams. All the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and probabilistic decision making in parietal cortex in response to random dot displays. National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    Apollo 10 visual tracking test results

    Real-time tests of acquisition, tracking, and photography of the Apollo 10 spacecraft.