
    Regulation of South China Sea throughflow by pressure difference

    Author Posting. © American Geophysical Union, 2016. This article is posted here by permission of American Geophysical Union for personal use, not for redistribution. The definitive version was published in Journal of Geophysical Research: Oceans 121 (2016): 4077–4096, doi:10.1002/2015JC011177.

    Sea Surface Height (SSH) data from the European Centre for Medium-Range Weather Forecasts Ocean Reanalysis System 4 (ECMWF-ORAS4) are used to determine the pressure difference associated with variability of the South China Sea ThroughFlow (SCSTF) from 1958 to 2007. Two branches of the SCSTF are examined: the Karimata-Sunda Strait ThroughFlow (KSSTF) and the Mindoro Strait ThroughFlow (MSTF). Using the ensemble empirical mode decomposition (EEMD) method, the time series of pressure difference and volume transport are decomposed into intrinsic mode functions and trend functions capturing variability on different time scales. The pressure difference agrees with the KSSTF volume transport on the decadal time scale, while for the MSTF the pressure difference varies with the volume transport on the interannual time scale. Separating the dynamic height difference into thermal and haline terms shows that, for the KSSTF, more than half of the dynamic height difference (32 cm) is due to the thermal contribution, while the remainder (23 cm) is due to the haline contribution. For the MSTF, the dynamic height difference (29 cm) is primarily due to the thermal contribution (26 cm).

    This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (grant XDA11010304), the National Natural Science Foundation of China (grant numbers 41306015 and 41476013), and the Independent Research Project Program of the State Key Laboratory of Tropical Oceanography (grant LTOZZ1603).
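    As a concrete illustration of the decomposition step, the sketch below applies EEMD to a synthetic monthly transport series using the open-source PyEMD package (an assumed implementation choice; the paper does not specify its software, and the series here is illustrative, not the ORAS4 data).

        # Minimal EEMD sketch with the PyEMD package (pip install EMD-signal).
        import numpy as np
        from PyEMD import EEMD

        rng = np.random.default_rng(0)
        t = np.arange(600) / 12.0                          # 50 years, monthly
        transport = (np.sin(2 * np.pi * t / 4.0)           # interannual (~4 yr) cycle
                     + 0.5 * np.sin(2 * np.pi * t / 15.0)  # decadal-scale cycle
                     + 0.02 * t                            # slow trend
                     + 0.3 * rng.standard_normal(t.size))  # noise

        eemd = EEMD(trials=100)         # ensemble size for noise-assisted averaging
        imfs = eemd.eemd(transport, t)  # rows: IMFs, fastest to slowest, plus residual

        for k, imf in enumerate(imfs):
            print(f"IMF {k}: std = {imf.std():.3f}")
        # The slower IMFs and the residual carry the decadal variability and trend;
        # the faster IMFs carry the interannual signal, mirroring the separation of
        # time scales used in the paper.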

    Optimal pricing strategies for capacity leasing based on time and volume usage in telecommunication networks

    In this study, we use a monopoly pricing model to examine optimal pricing strategies for "pay-per-time", "pay-per-volume", and "pay-per time and volume" leasing of data networks. Traditionally, network capacity distribution includes short- or long-term leasing of bandwidth and/or usage time. Each consumer can choose volume-based, connection-time-based, or combined pricing. When customers choose connection-time-based pricing, their optimal behavior is to utilize the bandwidth capacity fully, which can cause bursts of network congestion. Offering the pay-per-volume scheme also allows the excess capacity to be leased to other potential customers acting as network providers. However, volume-based strategies decrease consumers' interest and usage, because the optimal behavior of customers who choose the pay-per-volume scheme is generally to send only just enough bytes for time-fixed tasks (such as real-time applications); this degrades task quality and in turn creates an opportunity cost. A hybrid pay-per-time-and-volume scheme lets customers combine the advantages of both pricing strategies while minimizing the disadvantages of each, because consumers generally have both time-fixed and size-fixed tasks, such as batch data transactions. However, such a complex pricing policy may confuse and deter consumers. We therefore examine two questions: (i) what benefits, if any, does the hybrid time-and-volume scheme offer the network provider? and (ii) does offering this scheme affect the market size? The main contribution of this study is to show that combined time-and-volume pricing is a viable and often preferable alternative to time-only and/or volume-only offerings for a large number of customers, and that judicious use of such a pricing policy is profitable for the network provider.
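    The intuition behind the hybrid scheme can be made concrete with a toy cost comparison. All prices and task parameters below are hypothetical; the paper derives optimal prices from a monopoly model rather than fixing them.

        # Toy comparison of the three leasing schemes for a mixed workload.
        P_TIME = 2.0      # pure scheme: price per hour connected
        P_VOL = 0.5       # pure scheme: price per GB transferred
        P_TIME_H = 0.8    # hybrid: discounted per-hour price
        P_VOL_H = 0.2     # hybrid: discounted per-GB price

        # (label, hours connected, GB transferred): one time-fixed task
        # (e.g. a real-time application) and one size-fixed task (e.g. a
        # batch data transaction).
        tasks = [("time-fixed", 3.0, 6.0), ("size-fixed", 1.0, 10.0)]

        def cost(scheme, hours, gb):
            if scheme == "time":
                return P_TIME * hours
            if scheme == "volume":
                return P_VOL * gb
            return P_TIME_H * hours + P_VOL_H * gb  # hybrid

        for scheme in ("time", "volume", "hybrid"):
            total = sum(cost(scheme, h, g) for _, h, g in tasks)
            print(f"{scheme:>6}: total cost = {total:.2f}")
        # time = 8.00, volume = 8.00, hybrid = 6.40: with both task types in
        # the workload, a suitably priced hybrid tariff can undercut both pure
        # schemes, which is the intuition behind the paper's result.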

    Logics of Finite Hankel Rank

    We discuss the Feferman-Vaught Theorem in the setting of abstract model theory for finite structures. We look at sum-like and product-like binary operations on finite structures and their Hankel matrices. We show the connection between Hankel matrices and the Feferman-Vaught Theorem. The largest logic known to satisfy a Feferman-Vaught Theorem for product-like operations is CFOL, first-order logic with modular counting quantifiers; for sum-like operations it is CMSOL, the corresponding monadic second-order logic. We discuss whether there are maximal logics satisfying Feferman-Vaught Theorems for finite structures.

    Comment: Appeared in YuriFest 2015, held in honor of Yuri Gurevich's 75th birthday. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-23534-9_1
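    The notion of the Hankel matrix of a property and a binary operation can be illustrated with a small computation. The sketch below (an illustrative construction, not taken from the paper) uses the property "the word length is divisible by 3" with concatenation as the sum-like operation and checks that the resulting Hankel matrix has finite rank.

        # Hankel matrix H[u][v] = phi(u ∘ v) for phi = "length divisible by 3"
        # and ∘ = concatenation; finite rank reflects recognizability.
        import itertools
        import numpy as np

        def phi(w):
            return int(len(w) % 3 == 0)

        # All binary words up to length 4 serve as row/column indices.
        words = [""]
        for n in range(1, 5):
            words += ["".join(p) for p in itertools.product("ab", repeat=n)]

        H = np.array([[phi(u + v) for v in words] for u in words])
        print("Hankel matrix rank:", np.linalg.matrix_rank(H))  # prints 3
        # Entries depend only on (|u| mod 3, |v| mod 3), so the rank is 3.
        # The paper asks for which logics all definable properties have
        # Hankel matrices of finite rank in this sense.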

    Rule Based System for Diagnosing Wireless Connection Problems Using SL5 Object

    Indoor wireless networking via Wi-Fi is in increasingly wide use, spanning Wi-Fi-enabled devices such as smart mobiles, game consoles, security systems, tablet PCs, and smart TVs, so the demand for Wi-Fi connections has grown rapidly. Rule-based systems are an established way of applying human expertise to many challenging problems. In this paper, a rule-based system was designed and developed to diagnose wireless connection problems and reach a precise decision about the cause of each problem. The SL5 Object expert system language was used to implement the rule base. An evaluation of the rule-based system was carried out to test its accuracy, and the results were promising.
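    Since SL5 Object is not widely available, the sketch below reconstructs the flavor of such a rule base in Python; the individual rules are illustrative examples, not the rules from the paper.

        # Illustrative forward-chaining diagnosis rules (first match wins).
        RULES = [
            (lambda s: not s["adapter_enabled"],
             "Wireless adapter is disabled: enable it in the OS settings."),
            (lambda s: not s["ssid_visible"],
             "Network not found: check router power and SSID broadcast."),
            (lambda s: not s["authenticated"],
             "Authentication failed: verify the Wi-Fi password."),
            (lambda s: not s["has_ip"],
             "No IP address assigned: check the router's DHCP settings."),
            (lambda s: not s["internet_reachable"],
             "Connected locally but no internet: check the WAN/ISP link."),
        ]

        def diagnose(symptoms):
            for condition, diagnosis in RULES:
                if condition(symptoms):
                    return diagnosis
            return "No problem detected."

        print(diagnose({"adapter_enabled": True, "ssid_visible": True,
                        "authenticated": True, "has_ip": False,
                        "internet_reachable": False}))
        # -> "No IP address assigned: check the router's DHCP settings."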

    Abnormal connectional fingerprint in schizophrenia: a novel network analysis of diffusion tensor imaging data

    The graph theoretical analysis of structural magnetic resonance imaging (MRI) data has received a great deal of interest in recent years as a way to characterize the organizational principles of brain networks and their alterations in psychiatric disorders, such as schizophrenia. However, the characterization of networks in clinical populations can be challenging, since the comparison of connectivity between groups is influenced by several factors, such as the overall number of connections and the structural abnormalities of the seed regions. To overcome these limitations, the current study employed the whole-brain analysis of connectional fingerprints in diffusion tensor imaging data obtained at 3 T from chronic schizophrenia patients (n = 16) and healthy, age-matched control participants (n = 17). Probabilistic tractography was performed to quantify the connectivity of 110 brain areas. The connectional fingerprint of a brain area represents the set of relative connection probabilities to all its target areas and is, hence, less affected by overall white and gray matter changes than absolute connectivity measures. After detecting brain regions with abnormal connectional fingerprints through similarity measures, we tested each of their relative connection probabilities between groups. We found altered connectional fingerprints in schizophrenia patients consistent with a dysconnectivity syndrome. While the medial frontal gyrus showed only reduced connectivity, the connectional fingerprints of the inferior frontal gyrus and the putamen mainly contained relatively increased connection probabilities to areas in the frontal, limbic, and subcortical regions. These findings are in line with previous studies that reported abnormalities in striatal–frontal circuits in the pathophysiology of schizophrenia, highlighting the potential utility of connectional fingerprints for the analysis of anatomical networks in the disorder.
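    The fingerprint construction itself is straightforward to express in code. The sketch below uses synthetic connectivity counts in place of the study's probabilistic tractography output, and cosine similarity as an example similarity measure (the study's exact measure may differ).

        # Convert absolute connection counts into relative connection
        # probabilities (fingerprints), then compare groups per region.
        import numpy as np

        rng = np.random.default_rng(1)
        n_regions = 110  # as in the study's parcellation
        counts_patient = rng.poisson(50.0, (n_regions, n_regions)) * 1.0
        counts_control = rng.poisson(50.0, (n_regions, n_regions)) * 1.0

        def fingerprints(counts):
            np.fill_diagonal(counts, 0.0)
            return counts / counts.sum(axis=1, keepdims=True)  # rows sum to 1

        fp_p = fingerprints(counts_patient)
        fp_c = fingerprints(counts_control)

        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

        # Regions whose patient fingerprint deviates most from the control one
        sim = np.array([cosine(fp_p[i], fp_c[i]) for i in range(n_regions)])
        print("most dissimilar regions:", np.argsort(sim)[:5])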

    The ATLAS SCT grounding and shielding concept and implementation

    This paper presents a complete description of Virgo, the French-Italian gravitational wave detector. The detector, built at Cascina, near Pisa (Italy), is a very large Michelson interferometer with 3 km-long arms. Following a presentation of the physics requirements, which lead to the specifications for the construction of the detector, a detailed description of all its elements is given. These include the civil engineering infrastructure, a huge ultra-high vacuum (UHV) chamber (about 6000 cubic metres), all of the optical components, including high-quality mirrors and their seismic isolation suspensions, and all of the electronics required to control the interferometer and detect the signal. The expected performance of these elements is given, leading to an overall sensitivity curve as a function of the incoming gravitational wave frequency. This description represents the detector as built and used in the first data-taking runs. Improvements in different parts have been and continue to be performed, leading to better sensitivities; these will be detailed in a forthcoming paper.

    Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

    A fundamental aspect of learning in biological neural networks is the plasticity property, which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules for learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined, interpretable types, which are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.

    Comment: Evolutionary Computation Journal
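    A minimal version of the evolutionary loop can be sketched as follows. The rule encoding, the discrete value set, and the toy task (learning AND online, with the target treated as a local feedback signal) are simplifications for illustration; the paper evolves rules for foraging and prey-predator scenarios.

        # Evolving a discretely encoded local learning rule with a GA.
        import numpy as np

        rng = np.random.default_rng(2)
        VALUES = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # discrete search space
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        Y = np.array([0.0, 0.0, 0.0, 1.0])               # toy task: logical AND

        def fitness(rule, eta=0.1, epochs=25):
            # dw = eta * (A*x*y_t + B*x*y + C*x + D): local terms built from
            # presynaptic x, postsynaptic y, and a feedback signal y_t.
            A, B, C, D = rule
            w = np.zeros(2)
            for _ in range(epochs):                      # online lifetime learning
                for x, y_t in zip(X, Y):
                    y = float(w @ x > 0.5)               # postsynaptic output
                    w += eta * (A * x * y_t + B * x * y + C * x + D)
            return float(((X @ w > 0.5) == Y).mean())

        pop = VALUES[rng.integers(0, 5, (30, 4))]        # 30 random rules
        for gen in range(40):
            scores = np.array([fitness(r) for r in pop])
            elite = pop[np.argsort(scores)[-10:]]        # keep the 10 best
            children = elite[rng.integers(0, 10, size=20)].copy()
            genes = rng.integers(0, 4, 20)               # mutate one gene each
            children[np.arange(20), genes] = VALUES[rng.integers(0, 5, 20)]
            pop = np.vstack([elite, children])

        best = max(pop, key=fitness)
        print("best rule (A, B, C, D):", best, "accuracy:", fitness(best))
        # A = 1, B = -1, C = D = 0 recovers the perceptron-style rule
        # dw = eta * x * (y_t - y), which solves this toy task perfectly.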

    Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks

    A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, in which learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing neuromodulatory chemicals within an ANN that can modulate (i.e., up- or down-regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation (1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which (2) produces functional modules for each subtask, and (3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging but important problem of catastrophic forgetting.
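    One way to make the mechanism concrete is the sketch below: neurons sit at 2D coordinates, a point source releases a modulatory chemical, and its Gaussian-diffused concentration gates per-connection Hebbian learning rates. The geometry, kernel, and thresholds are illustrative assumptions rather than the paper's exact model.

        # Diffusion-based neuromodulation: concentration gates local learning.
        import numpy as np

        rng = np.random.default_rng(3)
        pos = rng.uniform(0, 1, (20, 2))    # 2D coordinates of 20 neurons

        def modulation(source, sigma=0.15):
            """Diffused concentration at each neuron from a point source."""
            d2 = ((pos - source) ** 2).sum(axis=1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def modulated_hebbian_step(activity, source, eta=0.05):
            m = modulation(source)          # per-neuron concentration
            gate = np.minimum.outer(m, m)   # a connection learns only if both
                                            # endpoints are near the release site
            return eta * gate * np.outer(activity, activity)

        act = rng.uniform(0, 1, 20)
        dW_A = modulated_hebbian_step(act, source=np.array([0.2, 0.2]))
        dW_B = modulated_hebbian_step(act, source=np.array([0.8, 0.8]))
        print("connections updated by both tasks:",
              int(np.sum((dW_A > 1e-4) & (dW_B > 1e-4))))
        # Releasing the chemical at different sites for different subtasks
        # confines weight changes to nearly disjoint regions, i.e. task-specific
        # localized learning and functional modules.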