    Patterning of the C. elegans 1° vulval lineage by RAS and Wnt pathways

    In C. elegans, the descendants of the 1° vulval precursor cell (VPC) establish a fixed spatial pattern of two different cell fates: E-F-F-E. The two inner granddaughters attach to the somatic gonadal anchor cell (AC) and generate four vulF cells, while the two outer granddaughters produce four vulE progeny. zmp-1::GFP, a molecular marker that distinguishes these two fates, is expressed in vulE cells, but not vulF cells. We demonstrate that a short-range AC signal is required to ensure that the pattern of vulE and vulF fates is properly established. In addition, signaling between the inner and outer 1° VPC descendants, as well as intrinsic polarity of the 1° VPC daughters, is involved in the asymmetric divisions of the 1° VPC daughters and the proper orientation of the outcome. Finally, we provide evidence that RAS signaling is used during this new AC signaling event, while the Wnt receptor LIN-17 appears to mediate signaling between the inner and outer 1° VPC descendants.

    The C. elegans LIM homeobox gene lin-11 specifies multiple cell fates during vulval development

    LIM homeobox family members regulate a variety of cell fate choices during animal development. In C. elegans, mutations in the LIM homeobox gene lin-11 have previously been shown to alter the cell division pattern of a subset of the 2° lineage vulval cells. We demonstrate multiple functions of lin-11 during vulval development. We examined the fate of vulval cells in lin-11 mutant animals using five cellular markers and found that lin-11 is necessary for the patterning of both 1° and 2° lineage cells. In the absence of lin-11 function, vulval cells fail to acquire correct identity and inappropriately fuse with each other. The expression pattern of lin-11 reveals dynamic changes during development. Using a temporally controlled overexpression system, we show that lin-11 is initially required in vulval cells for establishing the correct invagination pattern. This process involves asymmetric expression of lin-11 in the 2° lineage cells. Using a conditional RNAi approach, we show that lin-11 regulates vulval morphogenesis. Finally, we show that LDB-1, a NLI/Ldb1/CLIM2 family member, interacts physically with LIN-11, and is necessary for vulval morphogenesis. Together, these findings demonstrate that temporal regulation of lin-11 is crucial for wild-type vulval patterning.

    High resolution cathodoluminescence hyperspectral imaging of surface features in InGaN/GaN multiple quantum well structures

    InGaN/GaN multiple quantum wells (MQWs) have been studied using cathodoluminescence hyperspectral imaging with high spatial resolution. Variations in peak emission energies and intensities across trench-like features and V-pits on the surface of the MQWs are investigated. The MQW emission from the region inside trench-like features is red-shifted by approximately 45 meV and more intense than that from the surrounding planar regions of the sample, whereas emission from the V-pits is blue-shifted by about 20 meV and relatively weaker. By applying this technique to the studied nanostructures, it is possible to investigate energy and intensity shifts on a 10 nm length scale.

    A Software Retina for Egocentric & Robotic Vision Applications on Mobile Platforms

    We present work in progress to develop a low-cost, highly integrated camera sensor for egocentric and robotic vision. Our underlying approach is to address current limitations of image analysis by Deep Convolutional Neural Networks, such as the requirement to learn simple scale and rotation transformations, which contribute to the large computational demands of training and to the opaqueness of the learned structure, by applying structural constraints based on known properties of the human visual system. We propose to apply a version of the retino-cortical transform to reduce the dimensionality of the input image space by a factor of x100, and to map the input spatially so that rotations and scale changes are transformed into spatial shifts. By reducing the input image size, and therefore the learning requirements, accordingly, we aim to develop a compact, lightweight egocentric and robot vision sensor using a smartphone as the target platform.
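    The retino-cortical mapping described in this abstract is closely related to a log-polar transform, under which a rotation about the fixation point becomes a circular shift along the angular axis (and a scale change a shift along the radial axis). A minimal sketch of that property follows; the grid sizes and the test scene are invented for illustration and are not taken from the paper:

```python
import math

# Hypothetical log-polar ("retina-like") grid parameters, chosen for the demo.
N_THETA, N_R = 16, 12        # angular and radial sample counts
R_MIN, R_MAX = 1.0, 32.0     # foveal and peripheral radii

def scene(x, y):
    """Continuous synthetic scene: an off-centre Gaussian blob."""
    return math.exp(-((x - 6.0) ** 2 + (y - 2.0) ** 2) / 20.0)

def log_polar_sample(f):
    """Sample f on a log-polar grid: radii spaced exponentially, angles uniformly."""
    grid = []
    for i in range(N_R):
        r = R_MIN * (R_MAX / R_MIN) ** (i / (N_R - 1))
        grid.append([f(r * math.cos(2 * math.pi * j / N_THETA),
                       r * math.sin(2 * math.pi * j / N_THETA))
                     for j in range(N_THETA)])
    return grid

def rotated(f, phi):
    """The scene rotated about the origin (the 'fovea') by angle phi."""
    return lambda x, y: f(x * math.cos(phi) + y * math.sin(phi),
                          -x * math.sin(phi) + y * math.cos(phi))

orig = log_polar_sample(scene)
rot = log_polar_sample(rotated(scene, 2 * math.pi / N_THETA))  # one grid step

# In log-polar space the rotation is a pure circular shift along the angular
# axis; a scale change would likewise become a shift along the radial axis.
shifted = [row[-1:] + row[:-1] for row in orig]
err = max(abs(a - b) for r1, r2 in zip(rot, shifted) for a, b in zip(r1, r2))
print(err < 1e-9)  # True
```

    Because rotations and scale changes reduce to shifts in this representation, a downstream classifier need not learn them explicitly, which is the data-reduction argument the abstract makes.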

    A Biologically Motivated Software Retina for Robotic Sensors for ARM-Based Mobile Platform Technology

    A key issue in designing robotics systems is the cost of an integrated camera sensor that meets the bandwidth/processing requirements of many advanced robotics applications, especially lightweight applications such as visual surveillance or SLAM in autonomous aerial vehicles. There is currently much work going on to adapt smartphones to provide complete robot vision systems, as the smartphone is so exquisitely integrated, providing camera(s), inertial sensing, sound I/O and excellent wireless connectivity. Mass-market production makes this a very low-cost platform, and manufacturers, from quadrotor drone suppliers to makers of children's toys such as the Meccanoid robot [5], employ a smartphone to provide a vision/control system [7,8]. Accordingly, many research groups are attempting to optimise image analysis, computer vision and machine learning libraries for the smartphone platform. However, current approaches to robot vision remain highly demanding for mobile processors such as the ARM, and while a number of algorithms have been developed, these are very stripped down, i.e. highly compromised in function or performance. For example, the semi-dense visual odometry implementation of [1] operates on images of only 320x240 pixels. In our research we have been developing biologically motivated foveated vision algorithms based on a model of the mammalian retina [2], potentially 100 times more efficient than their conventional counterparts. Accordingly, vision systems based on the foveated architectures found in mammals also have the potential to reduce bandwidth and processing requirements by about x100 - it has been estimated that our brains would weigh ~60 kg if we were to process all our visual input at uniform high resolution.
We have reported a foveated visual architecture [2,3,4] that implements a functional model of the retina-visual cortex to produce feature vectors that can be matched/classified using conventional methods, or indeed could be adapted to employ Deep Convolutional Neural Nets for the classification/interpretation stage. Given the above processing/bandwidth limitations, a viable way forward would be to perform off-line learning and implement the forward recognition path on the mobile platform, returning simple object labels, or sparse hierarchical feature symbols, and gaze control commands to the host robot vision system and controller. We are now at the early stages of investigating how best to port our foveated architecture onto an ARM-based smartphone platform. To achieve the required levels of performance, we propose to port and optimise our retina model to the mobile ARM processor architecture in conjunction with its integrated GPU. We will then be in a position to provide a foveated smart vision system on a smartphone with the advantage of processing speed gains and bandwidth optimisations. Our approach will be to develop efficient parallelising compilers, and perhaps to propose new processor architectural features to support this approach to computer vision, e.g. efficient processing of hexagonally sampled foveated images. Our current goal is to have a foveated system running in real time on at least a 1080p input video stream to serve as a front-end robot sensor for tasks such as general-purpose object recognition and reliable dense SLAM using a commercial off-the-shelf smartphone. Initially this system would communicate a symbol stream to conventional hardware performing back-end visual classification/interpretation, although simple object detection and recognition tasks should be possible on-board the device.
We propose that, as in Nature, foveated vision is the key to achieving the necessary data reduction to be able to implement complete visual recognition and learning processes on the smartphone itself.

    Implementation of a color-capable optofluidic microscope on a RGB CMOS color sensor chip substrate

    We report the implementation of a color-capable on-chip lensless microscope system, termed the color optofluidic microscope (color OFM), and demonstrate imaging of double-stained Caenorhabditis elegans with lacZ gene expression at a light intensity of about 10 mW/cm^2.

    Single integrated device for optical CDMA code processing in dual-code environment

    We report on the design, fabrication and performance of a matching integrated optical CDMA encoder-decoder pair based on holographic Bragg reflector technology. Simultaneous encoding/decoding operation of two multiple wavelength-hopping time-spreading codes was successfully demonstrated and shown to support two error-free OCDMA links at OC-24. A double-pass scheme was employed in the devices to enable the use of longer code lengths.

    A River Valley Segment Classification of Michigan Streams Based on Fish and Physical Attributes

    Water resource managers are frequently interested in river and stream classification systems to generalize stream conditions and establish management policies over large spatial scales. We used fish assemblage data from 745 river valley segments to develop a two-level, river valley segment-scale classification system of rivers and streams throughout Michigan. Regression tree analyses distinguished 10 segment types based on mean July temperature and network catchment area and 26 segment types when channel gradient was also considered. Nonmetric multidimensional scaling analyses suggested that fish assemblages differed among segment types but were only slightly influenced by channel gradient. Species that were indicative of specific segment types generally had habitat requirements that matched segment attributes. A test of classification strength using fish assemblage data from an additional 77 river valley segments indicated that the classification system performed significantly better than random groupings of river valley segments. Our classification system for river valley segments overcomes several weaknesses of the classifications previously used in Michigan, and our approach may prove beneficial for developing classifications elsewhere.
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/141625/1/tafs1621.pd
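    The regression-tree analysis described in this abstract repeatedly partitions segments on predictor thresholds (e.g. mean July temperature) so that segments within each group are as similar as possible. A minimal sketch of the core operation, a single CART-style split search, is shown below; the toy data are invented for illustration and are not the study's 745-segment data set:

```python
# Illustrative one-level regression-tree split search (CART building block).
# The temperatures and responses below are hypothetical toy values.

def sse(values):
    """Sum of squared errors of values around their mean."""
    if not values:
        return 0.0
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def best_split(xs, ys):
    """Find the threshold on one predictor that minimises the pooled SSE."""
    best_t, best_cost = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # a split must leave segments on both sides
        cost = sse(left) + sse(right)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Toy example: cold- and warm-water segments separate cleanly on temperature.
july_temp = [14.0, 15.0, 16.0, 21.0, 22.0, 23.0]   # mean July temperature, °C
response  = [0.1, 0.2, 0.1, 0.9, 1.0, 0.8]         # e.g. an assemblage score
threshold, cost = best_split(july_temp, response)
print(threshold)  # 16.0 -> split between cold (<=16) and warm (>16) segments
```

    A full regression-tree implementation applies this search recursively across all candidate predictors (here, temperature, network catchment area, and channel gradient) to grow trees like the 10- and 26-type classifications reported in the abstract.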