    The recursive neural network

    This paper describes a special type of dynamic neural network called the Recursive Neural Network (RNN). The RNN is a single-input single-output nonlinear dynamical system with three subnets: a nonrecursive subnet and two recursive subnets. The nonrecursive subnet feeds current and previous input samples through a multi-layer perceptron with second order input units (SOMLP) [9]. In a similar fashion, the two recursive subnets feed back previous output signals through SOMLPs. The outputs of the three subnets are summed to form the overall network output. The purpose of this paper is to describe the architecture of the RNN, to derive a learning algorithm for the network based on a gradient search, and to provide some examples of its use. The work in this paper is an extension of previous work on the RNN [10]. In previous work the RNN contained only two subnets, a nonrecursive subnet and a recursive subnet. Here we have added a second recursive subnet. In addition, both of the subnets in the previous RNN had linear input units; here all three of the subnets have second order input units. In many cases this allows the RNN to solve problems more efficiently, that is, with a smaller overall network. In addition, the use of the RNN for inverse modeling and control was never fully developed in the past. Here, for the first time, we derive the complete learning algorithm for the case where the RNN is used in the general model-following configuration. This configuration includes the following as special cases: system modeling, nonlinear filtering, inverse modeling, nonlinear prediction, and control.
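    As a rough illustration of this architecture (a minimal sketch, not the authors' implementation), the forward pass below assumes each subnet is a one-hidden-layer SOMLP whose input vector is augmented with its pairwise products, that both recursive subnets see the same window of delayed outputs, and that hidden units are tanh; the tap counts, layer sizes, and nonlinearity are all assumptions.

```python
import numpy as np

def second_order_features(x):
    """Augment a vector with its pairwise products (second order input units)."""
    x = np.asarray(x, dtype=float)
    pairs = np.outer(x, x)[np.triu_indices(len(x))]
    return np.concatenate([x, pairs])

class SOMLP:
    """One-hidden-layer perceptron over second order input features."""
    def __init__(self, n_in, n_hidden, rng):
        n_feat = n_in + n_in * (n_in + 1) // 2
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_feat))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.1, size=n_hidden)

    def __call__(self, x):
        h = np.tanh(self.W1 @ second_order_features(x) + self.b1)
        return self.w2 @ h

class RecursiveNN:
    """Nonrecursive subnet on current/past inputs plus two recursive
    subnets on past outputs; the three subnet outputs are summed."""
    def __init__(self, n_taps, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.n_taps = n_taps
        self.input_net = SOMLP(n_taps, n_hidden, rng)
        self.feedback_net1 = SOMLP(n_taps, n_hidden, rng)
        self.feedback_net2 = SOMLP(n_taps, n_hidden, rng)

    def run(self, u):
        x_hist = np.zeros(self.n_taps)  # current + previous input samples
        y_hist = np.zeros(self.n_taps)  # previous output samples
        out = []
        for u_k in u:
            x_hist = np.roll(x_hist, 1); x_hist[0] = u_k
            y = (self.input_net(x_hist)
                 + self.feedback_net1(y_hist)
                 + self.feedback_net2(y_hist))
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            out.append(float(y))
        return out
```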

    NASA Near Earth Network (NEN), Deep Space Network (DSN) and Space Network (SN) Support of CubeSat Communications

    There has been a historical trend to increase capability while driving down the Size, Weight and Power (SWAP) of satellites, and that trend continues today. Because of their low launch and development costs, small satellites, including systems conforming to the CubeSat specification, are enabling new concepts and capabilities for science investigations across multiple fields of interest to NASA. NASA scientists and engineers across many of NASA's Mission Directorates and Centers are developing exciting CubeSat concepts and welcome potential partnerships for CubeSat endeavors. From a communications and tracking point of view, small satellites including CubeSats are a challenge to coordinate because of existing small spacecraft constraints, such as limited SWAP and attitude control, low power, and the potentially high number of operational spacecraft. The NASA Space Communications and Navigation (SCaN) Program's Near Earth Network (NEN), Deep Space Network (DSN) and Space Network (SN) are customer-driven organizations that provide comprehensive communications services for space assets, including data transport between a mission's orbiting satellite and its Mission Operations Center (MOC). The NASA NEN consists of multiple ground antennas. The SN consists of a constellation of geosynchronous (Earth orbiting) relay satellites, named the Tracking and Data Relay Satellite System (TDRSS). The DSN currently makes available 13 antennas at its three tracking stations located around the world for interplanetary communication. The presentation will analyze how well these space communication networks are positioned to support the emerging small satellite and CubeSat market. Recognizing the potential support, the presentation will review the basic capabilities of the NEN, DSN and SN in the context of small satellites and will present information about NEN-, DSN- and SN-compatible flight radios and antenna development activities at the Goddard Space Flight Center (GSFC) and across industry. The presentation will review concepts on how the SN multiple access capability could help locate CubeSats and provide a low-latency early warning system. It will also present how the DSN is evolving to maximize use of its assets for interplanetary CubeSats. The critical spectrum-related topics of available and appropriate frequency bands, licensing, and coordination will be reviewed. Other key considerations, such as standardization of radio frequency interfaces and flight and ground communications hardware systems, will be addressed, as such standardization may reduce the time and cost required to obtain frequency authorization and perform compatibility and end-to-end testing. Examples of standardization that exist today are the NASA NEN, DSN and SN systems, which have published users' guides and defined frequency bands for high data rate communication, as well as conformance to CCSDS standards. The workshop session will also seek input from the workshop participants to better understand the needs of small satellite systems and to identify key development activities and operational approaches necessary to enhance communication and navigation support using NASA's NEN, DSN and SN.

    Learning a Class of Large Finite State Machines with a Recurrent Neural Network

    One of the issues in any learning model is how it scales with problem size. Neural networks have not been immune to scaling issues. We show that a dynamically-driven discrete-time recurrent network (DRNN) can learn rather large grammatical inference problems when the strings of a finite memory machine (FMM) are encoded as temporal sequences. FMMs are a subclass of finite state machines which have a finite memory, that is, a finite order of inputs and outputs. The DRNN that learns the FMM is a neural network that maps directly from the sequential machine implementation of the FMM. It has feedback only from the output and not from any hidden units; an example is the recurrent network of Narendra and Parthasarathy. (FMMs that have zero order in the feedback of outputs are called definite memory machines and are analogous to time-delay or finite impulse response neural networks.) Due to their topology, these DRNNs are at least as powerful as any sequential machine implementation of an FMM and should be capable of representing any FMM. We choose to learn particular FMMs: specifically, FMMs that have a large number of states (simulations are for 256 and 512 state FMMs) but have minimal order, relatively small depth and little logic when the FMM is implemented as a sequential machine. Simulations of the number of training examples versus generalization performance and FMM extraction size show that the number of training samples necessary for perfect generalization is less than that necessary to completely characterize the FMM to be learned. This is in a sense a best-case learning problem, since any arbitrarily chosen FMM with a minimal number of states would have much more order and string depth and would most likely require more logic in its sequential machine implementation. (Also cross-referenced as UMIACS-TR-94-94.)
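    As a rough sketch of this output-feedback topology (a NARX-style network in the spirit of Narendra and Parthasarathy; the single tanh hidden layer, sigmoid output, window lengths, and binary encoding below are illustrative assumptions, not details from the report):

```python
import numpy as np

def init_drnn(n, m, n_hidden, seed=0):
    """Random weights for a one-hidden-layer DRNN whose only feedback
    is from the output: the input is the last n input symbols and the
    last m output values (no feedback from hidden units)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(n_hidden, n + m))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(scale=0.5, size=n_hidden)
    return W1, b1, w2

def run_drnn(params, bits, n, m):
    """Feed a binary string through the DRNN as a temporal sequence,
    feeding each output back into the input window."""
    W1, b1, w2 = params
    u_win, y_win = np.zeros(n), np.zeros(m)
    ys = []
    for u in bits:
        u_win = np.roll(u_win, 1); u_win[0] = u
        h = np.tanh(W1 @ np.concatenate([u_win, y_win]) + b1)
        y = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # sigmoid output unit
        y_win = np.roll(y_win, 1); y_win[0] = y
        ys.append(float(y))
    return ys

# Example: run_drnn(init_drnn(3, 2, 8), [1, 0, 1, 1], 3, 2)
```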

    Product Unit Learning

    Product units provide a method of automatically learning the higher-order input combinations required for the efficient synthesis of Boolean logic functions by neural networks. Product units also have a higher information capacity than sigmoidal networks. However, this activation function has not received much attention in the literature. A possible reason is that one encounters some problems when using standard backpropagation to train networks containing these units. This report examines these problems and evaluates the performance of three training algorithms on networks of this type. Empirical results indicate that the error surface of networks containing product units has more local minima than that of corresponding networks with summation units. For this reason, a combination of local and global training algorithms was found to provide the most reliable convergence. We then investigate how 'hints' can be added to the training algorithm. By extracting a common frequency from the input weights and training this frequency separately, we show that convergence can be accelerated. A constructive algorithm is then introduced which adds product units to a network as required by the problem. Simulations show that for the same problems this method creates a network with significantly fewer neurons than those constructed by the tiling and upstart algorithms. In order to compare their performance with other transfer functions, product units were implemented as candidate units in the Cascade Correlation (CC) system [Fahlman90]. Using these candidate units resulted in smaller networks which trained faster than when any of the standard (three sigmoidal types and one Gaussian) transfer functions were used. This superiority was confirmed when a pool of candidate units with four different nonlinear activation functions, which had to compete for addition to the network, was used. Extensive simulations showed that for the problem of implementing random Boolean logic functions, product units are always chosen above any of the other transfer functions. (Also cross-referenced as UMIACS-TR-95-80.)
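    For illustration, a product unit computes a weighted product rather than a weighted sum of its inputs. The sketch below uses the standard exp-log evaluation with a cosine phase factor for negative inputs; this particular treatment of negative bases is a common choice assumed here, not necessarily the one used in the report.

```python
import numpy as np

def product_unit(x, w):
    """y = prod_i x_i ** w_i, evaluated as exp(sum_i w_i * ln|x_i|)
    times cos(pi * sum of w_i over negative inputs), which keeps the
    output real when some inputs are negative."""
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    magnitude = np.exp(np.sum(w * np.log(np.abs(x) + 1e-12)))
    phase = np.pi * np.sum(w * (x < 0))
    return magnitude * np.cos(phase)

# With inputs coded as {-1, +1} and unit weights, a single product unit
# computes the parity (generalized XOR) of its inputs, a higher-order
# function that no single summation unit can represent:
print(product_unit([-1, +1, -1], [1, 1, 1]))  # ~ +1.0, the parity of the inputs
```

    The cosine factor makes the unit's response oscillatory in the weights, which is plausibly consistent with the error-surface local minima and the frequency-extraction hint discussed in the report.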

    Performance of On-Line Learning Methods in Predicting Multiprocessor Memory Access Patterns

    Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit, but they are often plagued by undesirable reconfiguration time that is primarily due to control latency, the time the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PU's job is to anticipate memory accesses and thereby reduce control configuration time, the major component of the control latency. Three on-line prediction techniques were tested: 1) a Markov predictor, 2) a linear predictor, and 3) a time-delay neural network (TDNN) predictor. Each was used to learn and predict repetitive memory access patterns for three typical parallel processing applications: the 2-D relaxation algorithm, matrix multiplication, and the Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide needed memory access paths before they were requested. As expected, different predictors performed best on different applications; however, the TDNN produced the best overall results. (Also cross-referenced as UMIACS-TR-96-59.)
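    As a minimal illustration of the first of these techniques, an on-line first-order Markov predictor can be implemented as a table of successor counts; this sketch shows the general idea and is an assumption, not the report's actual table organization.

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    """On-line first-order Markov predictor: predicts the most frequent
    successor of the current memory access, updating counts as it goes."""
    def __init__(self):
        self.successors = defaultdict(Counter)  # access -> Counter of next accesses
        self.last = None

    def update_and_predict(self, access):
        if self.last is not None:
            self.successors[self.last][access] += 1  # record observed transition
        self.last = access
        counts = self.successors[access]
        return counts.most_common(1)[0][0] if counts else None

# On a repetitive access pattern the table converges after one period:
p = MarkovPredictor()
for a in [0, 1, 2, 0, 1, 2, 0, 1]:
    print(p.update_and_predict(a))  # None, None, None, then 1, 2, 0, 1, 2
```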

    TOI-2196 b: Rare planet in the hot Neptune desert transiting a G-type star

    Funding: C.M.P., M.F., I.G., and J.K. gratefully acknowledge the support of the Swedish National Space Agency (DNR 65/19, 174/18, 177/19, 2020-00104). L.M.S. and D.G. gratefully acknowledge financial support from the CRT foundation under Grant No. 2018.2323 “Gaseous or rocky? Unveiling the nature of small worlds”. P.K. acknowledges support from grant LTT-20015. E.G. acknowledges the support of the Thüringer Ministerium für Wirtschaft, Wissenschaft und Digitale Gesellschaft. J.S.J. gratefully acknowledges support by FONDECYT grant 1201371 and from the ANID BASAL projects ACE210002 and FB210003. H.J.D. acknowledges support from the Spanish Research Agency of the Ministry of Science and Innovation (AEI-MICINN) under grant PID2019-107061GBC66, DOI: 10.13039/501100011033. D.D. acknowledges support from the TESS Guest Investigator Program grants 80NSSC21K0108 and 80NSSC22K0185. M.E. acknowledges the support of the DFG priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets" (HA 3279/12-1). K.W.F.L. was supported by Deutsche Forschungsgemeinschaft grant RA714/14-1 within the DFG Schwerpunkt SPP 1992, Exploring the Diversity of Extrasolar Planets. N.N. acknowledges support from JSPS KAKENHI Grant Number JP18H05439 and JST CREST Grant Number JPMJCR1761. M.S.I.P. is funded by NSF.

    The hot Neptune desert is a region hosting a small number of short-period Neptunes in the radius-instellation diagram. Highly irradiated planets are usually either small (R ≲ 2 R⊕) and rocky or gas giants with radii of ≳ 1 RJ. Here, we report on the intermediate-sized planet TOI-2196 b (TIC 372172128.01) on a 1.2 day orbit around a G-type star (V = 12.0, [Fe/H] = 0.14 dex) discovered by the Transiting Exoplanet Survey Satellite in sector 27. We collected 41 radial velocity measurements with the HARPS spectrograph to confirm the planetary nature of the transit signal and to determine the mass. The radius of TOI-2196 b is 3.51 ± 0.15 R⊕, which, combined with the mass of 26.0 ± 1.3 M⊕, results in a bulk density of 3.31 (+0.51/−0.43) g cm−3. Hence, the radius implies that this planet is a sub-Neptune, although its density is twice that of Neptune. A significant trend in the HARPS radial velocity measurements points to the presence of a distant companion, with lower limits on the period and mass of 220 days and 0.65 MJ, respectively, assuming zero eccentricity. The short period of planet b implies a high equilibrium temperature of 1860 ± 20 K for zero albedo and isotropic emission. This places the planet in the hot Neptune desert, joining a group of very few planets in this parameter space discovered in recent years. These planets suggest that the hot Neptune desert may be divided into two parts for planets with equilibrium temperatures of ≳ 1800 K: a hot sub-Neptune desert devoid of planets with radii of ≈ 1.8−3 R⊕ and a sub-Jovian desert for radii of ≈ 5−12 R⊕. More planets in this parameter space are needed to further investigate this finding. Planetary interior structure models of TOI-2196 b are consistent with a H/He atmosphere mass fraction between 0.4% and 3%, with a mean value of 0.7%, on top of a rocky interior. We estimated the amount of mass the planet might have lost at a young age and find that, while the mass loss could have been significant, the planet's character has not changed: it was born a small volatile-rich planet and remains one at present.
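    For reference, the quoted zero-albedo, isotropic-emission equilibrium temperature follows the standard relation below (a sketch of the usual definition; T_eff, R⋆, and a denote the stellar effective temperature, stellar radius, and orbital semi-major axis, and A_B the Bond albedo, none of whose values are given in this abstract):

```latex
T_{\mathrm{eq}} = T_{\mathrm{eff}}\,\sqrt{\frac{R_\star}{2a}}\,\left(1 - A_{\mathrm{B}}\right)^{1/4},
\qquad\text{so for } A_{\mathrm{B}} = 0:\quad
T_{\mathrm{eq}} = T_{\mathrm{eff}}\,\sqrt{\frac{R_\star}{2a}}.
```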