
    Dimension reduction for linear separation with curvilinear distances

    Any high-dimensional data set in its original raw form may contain clearly classifiable clusters that are difficult to identify in the high-dimensional representation. By reducing the dimensionality it may be possible to apply a simple classification technique to extract this cluster information whilst retaining the overall topology of the data set. The supervised method presented here takes a high-dimensional data set consisting of multiple clusters, uses curvilinear distance as the relation between points, and projects into a lower dimension according to this relationship. This representation allows linear separation of the non-separable high-dimensional cluster data and classification to a cluster of any subsequent unseen data point drawn from the same higher-dimensional space.
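    The abstract does not specify the projection algorithm; as a rough illustration of the general idea, curvilinear (graph geodesic) distances can be computed over a minimum spanning tree and then embedded with classical MDS. The function below is a hypothetical sketch of this family of methods, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

    def curvilinear_embed(X, n_components=2):
        """Embed X using curvilinear (geodesic-over-MST) distances + classical MDS."""
        D = squareform(pdist(X))                 # pairwise Euclidean distances
        mst = minimum_spanning_tree(D)           # sparse MST of the distance graph
        G = shortest_path(mst, directed=False)   # curvilinear distances along the tree
        n = len(X)
        J = np.eye(n) - np.ones((n, n)) / n      # double-centering matrix
        B = -0.5 * J @ (G ** 2) @ J              # classical MDS Gram matrix
        w, V = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:n_components] # keep the largest eigenvalues
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
    ```

    Points far apart along the data manifold stay far apart in the projection even when their straight-line distance is small, which is what makes the clusters linearly separable in the low-dimensional view.
    
    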

    Adjusting the Ground Truth Annotations for Connectivity-Based Learning to Delineate

    Deep learning-based approaches to delineating 3D structures depend on accurate annotations to train the networks. Yet, in practice, people, no matter how conscientious, have trouble precisely delineating in 3D and on a large scale, in part because the data is often hard to interpret visually and in part because 3D interfaces are awkward to use. In this paper, we introduce a method that explicitly accounts for annotation inaccuracies. To this end, we treat the annotations as active contour models that can deform themselves while preserving their topology. This enables us to jointly train the network and correct potential errors in the original annotations. The result is an approach that boosts the performance of deep networks trained with potentially inaccurate annotations.

    Knowledge management system on flow and water quality modeling

    Author name used in this publication: K. W. Chau. 2001-2002, academic research: refereed publication in a refereed journal. Accepted manuscript.

    Underwater Robots Part II: Existing Solutions and Open Issues

    This paper constitutes the second part of a general overview of underwater robotics. The first part is titled "Underwater Robots Part I: current systems and problem pose". The works referenced as (Name*, year) have already been cited in the first part of the paper, and the details of those references can be found in Section 7 of Part I. The mathematical notation used in this paper is defined in Section 4 of Part I.

    ABSense: Sensing Electromagnetic Waves on Metasurfaces via Ambient Compilation of Full Absorption

    Metasurfaces (MSs) constitute effective media for manipulating and transforming impinging EM waves. Related studies have explored a series of impactful MS capabilities and applications in sectors such as wireless communications, medical imaging and energy harvesting. A key gap in the existing body of work is that the attributes of the EM waves to be controlled (e.g., direction, polarity, phase) are assumed to be known in advance. The present work proposes a practical solution to the EM wave sensing problem using the intelligent and networked MS counterparts, the HyperSurfaces (HSFs), without requiring dedicated field sensors. A nano-network embedded within the HSF iterates over the possible MS configurations, finding the one that fully absorbs the impinging EM wave and hence maximizes the energy distribution within the HSF. Using a distributed consensus approach, the nano-network then matches the found configuration to the most probable EM wave traits via a static lookup table that can be created during HSF manufacturing. Realistic simulations demonstrate the potential of the proposed scheme. Moreover, we show that the proposed workflow is the first-of-its-kind embedded EM compiler, i.e., an autonomic HSF that can translate high-level EM behavior objectives to the corresponding low-level EM actuation commands. Comment: Publication: Proceedings of ACM NANOCOM 2019. This work was funded by the European Union via the Horizon 2020: Future Emerging Topics call (FETOPEN), grant EU736876, project VISORSURF (http://www.visorsurf.eu).
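    The sensing loop described above can be paraphrased in a few lines: sweep the configurations, pick the one maximizing absorbed energy, and map it to wave traits through the precomputed table. The names, the energy model and the table below are hypothetical stand-ins; the paper's actual distributed nano-network protocol is far more involved.

    ```python
    def sense_wave(configurations, absorbed_energy, lookup_table):
        """Sweep MS configurations, pick the best absorber, and map it to
        the most probable EM wave traits via a precomputed lookup table."""
        best = max(configurations, key=absorbed_energy)
        return lookup_table[best]

    # Toy usage: configuration 2 absorbs the most energy, and the table says
    # that corresponds to a wave arriving at 30 degrees with vertical polarity.
    table = {0: ("10deg", "H"), 1: ("20deg", "H"), 2: ("30deg", "V")}
    energy = {0: 0.2, 1: 0.5, 2: 0.9}.get
    traits = sense_wave([0, 1, 2], energy, table)
    ```

    The design choice worth noting is that all the EM expertise lives in the manufacturing-time lookup table, so the runtime logic on the nano-network stays trivially simple.
    
    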

    Discharge estimation combining flow routing and occasional measurements of velocity

    A new procedure is proposed for estimating river discharge hydrographs during flood events, using water level data at a single gauged site together with 1-D shallow water modelling and occasional maximum surface flow velocity measurements. A one-dimensional diffusive hydraulic model is used to route the recorded stage hydrograph through the channel reach, with a zero-diffusion downstream boundary condition. Based on synthetic tests concerning a broad prismatic channel, a "suitable" reach length is chosen so as to minimize the effect of the approximated downstream boundary condition on the estimation of the upstream discharge hydrograph. Manning's roughness coefficient is calibrated using occasional instantaneous surface velocity measurements during the rising limb of the flood, which are used to estimate instantaneous discharges by adopting a two-dimensional velocity distribution model over the flow area. Several historical events recorded at three gauged sites along the upper Tiber River, where reliable rating curves are available, have been used for validation. The outcomes of the analysis can be summarized as follows: (1) the criterion adopted for selecting the "suitable" channel length, based on synthetic test studies, proved reliable in field applications at the three gauged sites; indeed, for each event a downstream reach length of no more than 500 m was sufficient for good performance of the hydraulic model, enabling a drastic reduction in river cross-section data; (2) the procedure for calibrating Manning's roughness coefficient achieved high performance in discharge estimation using only the observed water levels and occasional measurements of maximum surface flow velocity during the rising limb of the flood.
    Indeed, errors in the peak discharge magnitude, for the optimal calibration, did not exceed 5% for any of the events observed in the three investigated gauged sections, while the Nash-Sutcliffe efficiency was, on average, greater than 0.95. The proposed procedure therefore lends itself well to (1) the extrapolation of the rating curve beyond the range of velocity measurements, and (2) discharge estimation in different cross sections during the same flood event using occasional surface flow velocity measurements carried out, for instance, by hand-held radar sensors.
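    As a toy illustration of the roughness-calibration step only (not the paper's 1-D diffusive routing model), Manning's equation can be inverted against an occasional discharge estimate by a simple grid search. All hydraulic values and the grid bounds below are illustrative assumptions.

    ```python
    import numpy as np

    def manning_discharge(n, A, R, S):
        """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2)."""
        return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(S)

    def calibrate_n(Q_obs, A, R, S, n_grid=np.linspace(0.01, 0.1, 500)):
        """Pick the roughness coefficient n minimizing |Q(n) - Q_obs|."""
        errors = [abs(manning_discharge(n, A, R, S) - Q_obs) for n in n_grid]
        return n_grid[int(np.argmin(errors))]
    ```

    Because discharge is monotone in n for fixed geometry and slope, a single occasional discharge estimate (from the surface velocity measurement) is enough to pin down the roughness for the reach.
    
    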

    Non-Gaussian Hybrid Transfer Functions: Memorizing Mine Survivability Calculations

    Hybrid algorithms and models have received significant interest in recent years and are increasingly used to solve real-world problems. Departing from existing methods of radial basis transfer function construction, this study proposes a novel nonlinear-weight hybrid algorithm involving non-Gaussian radial basis transfer functions. The speed and simplicity of the non-Gaussian type are combined with the accuracy of the radial basis function to produce a fast and accurate on-the-fly model for the survivability of emergency mine rescue operations; that is, the survivability under all conditions is precalculated and used to train the neural network. The proposed hybrid uses a genetic algorithm as the learning method, performing parameter optimization within an integrated analytic framework to improve network efficiency. Finally, the network parameters, including mean iteration, standard variation, standard deviation, convergence time, and optimized error, are evaluated using the mean squared error. The results demonstrate that the hybrid model is able to reduce computational complexity, increase robustness and optimize its parameters. This novel hybrid model shows outstanding performance and is competitive with existing models.

    Activities of the Research Institute for Advanced Computer Science

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing in support of the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high-performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

    Application of constrained optimisation techniques in electrical impedance tomography

    A constrained optimisation technique is described for the reconstruction of temporal resistivity images. The approach solves the inverse problem by optimising a cost function under constraints, in the form of normalised boundary potentials. Mathematical models have been developed for two different data collection methods for the chosen criterion. Both of these models express the reconstructed image in terms of one-dimensional (1-D) Lagrange multiplier functions. The reconstruction problem becomes one of estimating these 1-D functions from the normalised boundary potentials. These models are based on a cost criterion of minimising the variance between the reconstructed resistivity distribution and the true resistivity distribution. The methods presented in this research extend the algorithms previously developed for X-ray systems. Computational efficiency is enhanced by exploiting the structure of the associated system matrices. The structure of the system matrices was preserved in the Electrical Impedance Tomography (EIT) implementations by applying a weighting, due to the non-linear current distribution, during the backprojection of the Lagrange multiplier functions. In order to obtain the best possible reconstruction it is important to consider the effects of noise in the boundary data. This is achieved by using a fast algorithm which matches the statistics of the error in the approximate inverse of the associated system matrix with the statistics of the noise error in the boundary data. This yields the optimum solution with the available boundary data. Novel approaches have been developed to produce the Lagrange multiplier functions. Two alternative methods are given for the design of VLSI implementations of hardware accelerators to improve computational efficiency. These accelerators are designed to implement parallel geometries and are modelled using a verification description language to assess their performance capabilities.
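    The Lagrange-multiplier formulation mentioned above can be illustrated in miniature: minimising ||x||^2 subject to Ax = b gives x = A^T λ, where λ solves (A A^T) λ = b, so the image is literally expressed in terms of the multipliers. This is a generic minimum-norm sketch under standard assumptions (full-row-rank A), not the thesis's EIT-specific cost function or weighting.

    ```python
    import numpy as np

    def minimum_norm_reconstruction(A, b):
        """Solve min ||x||^2 subject to A x = b via Lagrange multipliers:
        stationarity of the Lagrangian gives x = A^T lam, (A A^T) lam = b."""
        lam = np.linalg.solve(A @ A.T, b)  # 1-D vector of Lagrange multipliers
        return A.T @ lam                    # image backprojected from multipliers
    ```

    Note that only the small system A A^T (one row/column per boundary measurement) is solved, which is why estimating the low-dimensional multiplier functions is cheaper than estimating the image directly.
    
    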

    Sparse Similarity and Network Navigability for Markov Clustering Enhancement

    Markov clustering (MCL) is an effective unsupervised pattern recognition algorithm for data clustering in high-dimensional feature space that simulates stochastic flows on a network of sample similarities to detect the structural organization of clusters in the data. However, it presents two main drawbacks: (1) its community detection performance in complex networks has remained far from state-of-the-art methods such as Infomap and Louvain, and (2) it has never been generalized to deal with data nonlinearity. In this work both aspects, although closely related, are treated as separate issues and addressed as such. Regarding community detection, a field that falls under the umbrella of network science, the crucial issue is to convert the unweighted network topology into a "smart enough" pre-weighted connectivity that adequately steers the stochastic flow procedure behind Markov clustering. Here a conceptual innovation is introduced and discussed, focusing on how to leverage network latent geometry notions in order to design similarity measures for pre-weighting the adjacency matrix used in Markov clustering community detection. The results demonstrate that the proposed strategy improves Markov clustering significantly, to the extent that it is often close to the performance of current state-of-the-art methods for community detection. These findings emerge considering both synthetic "realistic" networks (with known ground-truth communities) and real networks (with community metadata), even when the real network connectivity is corrupted by noise artificially induced by missing or spurious links. Regarding the nonlinearity aspect, the development of algorithms for unsupervised pattern recognition by nonlinear clustering is a notable problem in data science.
    Minimum Curvilinearity (MC) is a principle that approximates nonlinear sample distances in the high-dimensional feature space by curvilinear distances, which are computed as transversal paths over the minimum spanning tree and then stored in a kernel. Here, a nonlinear MCL algorithm termed MC-MCL is proposed, which is the first nonlinear kernel extension of MCL and exploits Minimum Curvilinearity to enhance the performance of MCL on real and synthetic high-dimensional data with underlying nonlinear patterns. Furthermore, improvements in the design of the so-called MC-kernel, obtained by applying base modifications to better approximate the hidden geometry of the data, have been evaluated with positive outcomes. Thus, different nonlinear MCL versions are compared with baseline and state-of-the-art clustering methods, including DBSCAN, K-means, affinity propagation, density peaks, and deep clustering. As a result, the design of a suitable nonlinear kernel provides a valuable framework for estimating nonlinear distances when the kernel is applied in combination with MCL. Indeed, nonlinear MCL variants outperform classical MCL and even state-of-the-art clustering algorithms on different nonlinear datasets. This dissertation discusses these enhancements and the generalized understanding of how network geometry plays a fundamental role in designing algorithms based on network navigability.
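    For reference, the core MCL iteration that the pre-weighting and the MC-kernel feed into alternates expansion and inflation on a column-stochastic matrix. The dense toy version below omits pruning and the kernel construction itself; the support-based cluster readout is one common convention, assumed here for illustration.

    ```python
    import numpy as np

    def mcl(A, inflation=2.0, iters=30):
        """Basic Markov clustering on an adjacency matrix: alternate expansion
        (matrix squaring) with inflation (elementwise power followed by
        column re-normalization) until the flow matrix stabilizes."""
        M = A + np.eye(len(A))            # add self-loops
        M = M / M.sum(axis=0)             # make columns stochastic
        for _ in range(iters):
            M = M @ M                     # expansion: spread flow along walks
            M = M ** inflation            # inflation: sharpen strong flows
            M = M / M.sum(axis=0)
        # nodes whose columns share the same support belong to one cluster
        supports = {tuple(np.flatnonzero(M[:, j] > 1e-6)) for j in range(len(M))}
        return sorted(supports)
    ```

    Replacing the 0/1 entries of A with MC-kernel similarities is exactly the "pre-weighting" lever the dissertation argues over: the iteration is unchanged, but the stochastic flow is steered by the latent geometry.
    
    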