517 research outputs found

    On-board processing satellite network architecture and control study

    Get PDF
    The market for telecommunications services needs to be segmented into user classes having similar transmission requirements and hence similar network architectures. Use of the following transmission architectures was considered: satellite-switched TDMA; TDMA up, TDM down; scanning (hopping) beam TDMA; FDMA up, TDM down; satellite-switched MF/TDMA; and switching hub earth stations with double-hop transmission. A candidate network architecture will be selected that comprises multiple-access subnetworks optimized for each user class, interconnects the subnetworks by means of a baseband processor, and optimizes the marriage of interconnection and access techniques. An overall network control architecture will be provided to serve the needs of the baseband and satellite-switched RF interconnected subnetworks. The results of the studies will be used to identify the elements of network architecture and control that require the greatest degree of technology development to realize an operational system, specified in terms of the requirements of the enabling technology, the difference from currently available technology, and an estimate of the development effort needed to achieve an operational system. The results obtained for each of these tasks are presented.

    A Decade of Neural Networks: Practical Applications and Prospects

    Get PDF
    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to deal intelligently and adaptively with the complex, fuzzy, and often ill-defined world around us remains largely unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight the benefits of neural networks in real-world applications compared with conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

    ZyON: Enabling Spike Sorting on APSoC-Based Signal Processors for High-Density Microelectrode Arrays

    Get PDF
    Multi-Electrode Arrays and High-Density Multi-Electrode Arrays of sensors are key instruments in neuroscience research. Such devices are evolving to provide ever-increasing temporal and spatial resolution, paving the way to unprecedented results in understanding the behaviour of neuronal networks and interacting with them. In some experimental settings, however, in-place low-latency processing of the sensor data acquired by the arrays is required. This calls for high-performance embedded computing platforms capable of processing the stream of samples produced by the acquisition front-end in real time to extract higher-level information. Previous work has demonstrated that Field-Programmable Gate Array and All-Programmable System-On-Chip devices are suitable target technologies for implementing real-time processors of High-Density Multi-Electrode Array data. However, the approaches available in the literature can process only a limited number of channels or are designed to execute only the first steps of the neural signal processing chain. In this work, we propose an All-Programmable System-On-Chip based implementation capable of sorting the neural spikes acquired by the sensors, associating the shape of each spike with a specific firing neuron. Our system, implemented on a Xilinx Z7020 All-Programmable System-On-Chip, executes on-line spike sorting on up to 5500 acquisition channels, 43x more than state-of-the-art alternatives, at an 18 kHz acquisition frequency. We present an experimental study on a commonly used reference dataset, using on-line refinement of the sorting clusters to improve accuracy up to 82%, with only 4% degradation with respect to off-line analysis.
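
    The abstract does not give implementation details of the sorting pipeline. As an illustration of the general flow it describes (detecting spikes on a channel, extracting their waveforms, and assigning each to a cluster with on-line refinement), the following is a minimal Python sketch. The function names, window length, threshold, and nearest-centroid assignment with a learning-rate update are assumptions made here for illustration, not the ZyON hardware design.

    import numpy as np

    # Minimal, illustrative spike-sorting flow for one channel: threshold
    # detection, waveform extraction, and nearest-centroid assignment with
    # an on-line centroid update (a stand-in for cluster refinement).
    # Parameter values are arbitrary placeholders.

    FS = 18_000          # sampling frequency (Hz), as quoted in the abstract
    WIN = 32             # samples kept after each detected spike

    def detect_spikes(trace, threshold):
        """Return sample indices where the signal crosses the threshold upward."""
        above = trace > threshold
        return np.where(above[1:] & ~above[:-1])[0] + 1

    def sort_channel(trace, centroids, threshold, lr=0.05):
        """Assign each detected spike to the nearest centroid, refining it on-line."""
        labels = []
        for idx in detect_spikes(trace, threshold):
            if idx + WIN > len(trace):
                break
            waveform = trace[idx:idx + WIN]
            dists = np.linalg.norm(centroids - waveform, axis=1)
            k = int(np.argmin(dists))
            centroids[k] += lr * (waveform - centroids[k])  # on-line refinement
            labels.append((idx, k))
        return labels

    # Usage on synthetic data: one channel, three initial centroids.
    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 0.1, FS)              # one second of noise
    centroids = rng.normal(0.0, 0.5, (3, WIN))
    print(sort_channel(trace, centroids, threshold=0.35))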

    Pruning random resistive memory for optimizing analogue AI

    Full text link
    The rapid advancement of artificial intelligence (AI) has been marked by large language models exhibiting human-like intelligence. However, these models also present unprecedented challenges to energy consumption and environmental sustainability. One promising solution is to revisit analogue computing, a technique that predates digital computing and exploits emerging analogue electronic devices, such as resistive memory, which features in-memory computing, high scalability, and nonvolatility. However, analogue computing still faces the same challenges as before: programming nonidealities and expensive programming due to the underlying device physics. Here, we report a universal solution: software-hardware co-design using structural-plasticity-inspired edge pruning to optimize the topology of a randomly weighted analogue resistive memory neural network. Software-wise, the topology of a randomly weighted neural network is optimized by pruning connections rather than precisely tuning resistive memory weights. Hardware-wise, we reveal the physical origin of the programming stochasticity using transmission electron microscopy and leverage it for the large-scale, low-cost implementation of an overparameterized random neural network containing high-performance sub-networks. We implemented the co-design on a 40 nm 256K resistive memory macro, observing 17.3% and 19.9% accuracy improvements in image and audio classification on the FashionMNIST and Spoken Digits datasets, as well as a 9.8% (2%) improvement in PR (ROC) for image segmentation on the DRIVE dataset. This is accompanied by 82.1%, 51.2%, and 99.8% improvements in energy efficiency thanks to analogue in-memory computing. By embracing the intrinsic stochasticity and in-memory computing, this work may remove the biggest obstacle facing analogue computing systems and thus unleash their immense potential for next-generation AI hardware.
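
    The abstract's central idea, optimizing the topology of a fixed randomly weighted network by pruning edges instead of tuning weights, can be sketched in software. The toy Python example below keeps a random weight matrix frozen and searches only over a binary mask; the greedy accuracy-driven pruning rule, layer sizes, and synthetic data are placeholders assumed here, not the pruning criterion or the resistive-memory mapping used in the paper.

    import numpy as np

    # Illustrative sketch of topology optimization on a randomly weighted
    # single-layer network: the weights stay fixed at their random values
    # and only a binary mask (the topology) is searched.

    rng = np.random.default_rng(1)

    N_IN, N_OUT, N_SAMPLES = 16, 4, 200
    W = rng.normal(0.0, 1.0, (N_IN, N_OUT))          # fixed random weights
    X = rng.normal(0.0, 1.0, (N_SAMPLES, N_IN))      # toy inputs
    y = rng.integers(0, N_OUT, N_SAMPLES)            # toy labels

    def accuracy(mask):
        """Classification accuracy of the masked random network."""
        logits = X @ (W * mask)
        return float(np.mean(np.argmax(logits, axis=1) == y))

    def greedy_prune(mask, budget):
        """Remove `budget` edges, each time dropping the edge whose removal
        hurts accuracy the least (or helps it the most)."""
        for _ in range(budget):
            best_edge, best_acc = None, -1.0
            for i, j in zip(*np.nonzero(mask)):
                mask[i, j] = 0.0
                acc = accuracy(mask)
                if acc > best_acc:
                    best_edge, best_acc = (i, j), acc
                mask[i, j] = 1.0
            mask[best_edge] = 0.0
        return mask

    mask = np.ones_like(W)
    print("dense accuracy :", accuracy(mask))
    mask = greedy_prune(mask, budget=8)
    print("pruned accuracy:", accuracy(mask))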

    Self-organized aggregation without computation

    Get PDF
    This paper presents a solution to the problem of self-organized aggregation of embodied robots that requires no arithmetic computation. The robots have no memory and are equipped with one binary sensor, which informs them whether or not there is another robot in their line of sight. It is proven that the sensor needs to have a sufficiently long range; otherwise aggregation cannot be guaranteed, irrespective of the controller used. The optimal controller is found by performing a grid search over the space of all possible controllers. With this controller, robots rotate on the spot when they perceive another robot, and move backwards along a circular trajectory otherwise. This controller is proven to always aggregate two simultaneously moving robots in finite time, an upper bound for which is provided. Simulations show that the controller also consistently aggregates at least 1000 robots into a single cluster. Moreover, in 30 experiments with 40 physical e-puck robots, 98.6% of the robots aggregated into one cluster. The results obtained have profound implications for the implementation of multi-robot systems at scales where conventional approaches to sensing and information processing are no longer applicable.
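
    Since the abstract states the controller's behaviour explicitly (rotate on the spot when another robot is in the line of sight, otherwise move backwards along a circular trajectory), it can be written down as a memoryless mapping from the binary sensor reading to wheel velocities. The Python sketch below assumes a differential-drive robot such as the e-puck; the specific velocity values are illustrative placeholders, not the optimal parameters found by the paper's grid search.

    # Memoryless controller: binary sensor reading -> (left, right) wheel speeds.
    def controller(sees_robot: bool) -> tuple[float, float]:
        if sees_robot:
            return (-1.0, 1.0)    # rotate on the spot
        return (-1.0, -0.4)       # move backwards along a circular arc

    # Usage: one control step per sensor reading.
    for reading in (False, True, False):
        print(reading, controller(reading))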

    NASA Space Engineering Research Center for VLSI systems design

    Get PDF
    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductor (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.