    Applications of Soft Computing in Mobile and Wireless Communications

    Soft computing is a synergistic combination of artificial intelligence methodologies for modeling and solving real-world problems that are either impossible or too difficult to model mathematically. Conventional modeling techniques demand rigor, precision and certainty, which carry a computational cost. Soft computing instead uses computation, reasoning and inference to reduce this cost by exploiting tolerance for imprecision, uncertainty, partial truth and approximation. Beyond the computational savings, soft computing is an excellent platform for autonomic computing, owing to its roots in artificial intelligence. Wireless communication networks involve considerable uncertainty and imprecision due to stochastic processes such as the escalating number of access points, constantly changing propagation channels, sudden variations in network load and the random mobility of users. This reality has fuelled numerous applications of soft computing techniques in mobile and wireless communications. This paper reviews applications of the core soft computing methodologies in mobile and wireless communications.
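
    To make the flavour of such applications concrete, the sketch below shows a toy fuzzy-logic handover decision of the kind soft computing is applied to in wireless networks. It is purely illustrative and not drawn from the paper: the membership functions, thresholds and rule weighting are invented for the example.

        # Hypothetical illustration (not from the paper under review): a minimal
        # fuzzy-logic handover decision. All membership functions, weights and
        # thresholds below are invented for the sketch.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b over the interval [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def handover_score(rssi_dbm, load_pct):
            """Blend 'signal is weak' and 'cell is congested' into a score in [0, 1]."""
            weak_signal = tri(rssi_dbm, -110, -95, -80)   # strongest near -95 dBm
            high_load = tri(load_pct, 50, 85, 110)        # strongest near 85% load
            # Simple rule aggregation: either condition alone can trigger a handover.
            return max(weak_signal, 0.7 * high_load)

        if __name__ == "__main__":
            for rssi, load in [(-70, 30), (-96, 40), (-85, 90)]:
                score = handover_score(rssi, load)
                decision = "hand over" if score > 0.5 else "stay"
                print(f"RSSI {rssi} dBm, load {load}%: score {score:.2f} -> {decision}")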

    Unstructured mesh algorithms for aerodynamic calculations

    The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies with respect to complex geometries, adaptive meshing capabilities and parallel processing are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed, including mesh generation, mesh adaptivity, solution algorithms, convergence acceleration and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.

    Coarse-grained reconfigurable array architectures

    Coarse-Grained Reconfigurable Array (CGRA) architectures accelerate the same inner loops that benefit from the high ILP support in VLIW architectures. By executing non-loop code on other cores, however, CGRAs can focus on such loops and execute them more efficiently. This chapter discusses the basic principles of CGRAs and the wide range of design options available to a CGRA designer, covering a large number of existing CGRA designs. The impact of different options on flexibility, performance and power efficiency is discussed, as well as the need for compiler support. The ADRES CGRA design template is studied in more detail as a use case to illustrate the need for design space exploration, for compiler support and for manual fine-tuning of source code.

    Parallel and Distributed Performance of a Depth Estimation Algorithm

    Expanding dataset sizes and the increasing complexity of processing algorithms have led to consideration of parallel and distributed implementations. The rationale for distributing the computational load may be to thin-provision computational resources, to accelerate the data processing rate, or to reuse already available but otherwise idle computational resources efficiently. Whatever the rationale, an efficient solution of this type brings with it questions of data distribution, job partitioning, reliability and robustness. This paper addresses the first two of these questions in the context of a local cluster-computing environment. Using the CHRT depth estimator, it considers active and passive data distribution and their effect on data throughput, focusing mainly on the compromises required to keep inter-node communication requirements minimal. As a metric, it considers the overall computation time for a given dataset (i.e., the time lag that a user would experience) and shows that although significant speedups can be obtained through relatively simple modifications to the algorithm, there are limits to the parallelism that can be achieved efficiently, and that a balance between inter-node parallelism (multiple nodes running in parallel) and intra-node parallelism (multiple threads within one node) is required for the most efficient utilization of available resources.
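
    As a hedged illustration of the intra-node side of that balance, the sketch below partitions a synthetic set of soundings into tiles and processes them with a process pool on a single node. It is not the CHRT estimator: the per-tile median is a stand-in workload, and distributing whole batches of tiles to other machines (the inter-node case) would happen before this step.

        # Hypothetical sketch (not the CHRT implementation): static tiling of a
        # dataset plus intra-node parallelism via a process pool. The per-tile
        # "estimator" is a placeholder workload.

        from multiprocessing import Pool
        import statistics

        def estimate_tile_depth(tile):
            """Placeholder depth estimate for one tile: the median of its soundings."""
            return statistics.median(tile)

        def partition(soundings, tile_size):
            """Static partitioning keeps inter-node communication low: each tile is sent once."""
            return [soundings[i:i + tile_size] for i in range(0, len(soundings), tile_size)]

        if __name__ == "__main__":
            soundings = [10.0 + 0.01 * i for i in range(10_000)]   # fake bathymetric soundings
            tiles = partition(soundings, tile_size=500)
            with Pool(processes=4) as pool:                        # intra-node parallelism
                depths = pool.map(estimate_tile_depth, tiles)
            print(f"{len(depths)} tile estimates, first few: {depths[:3]}")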

    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to ensure that the system will meet design objectives. This report describes an investigation of the methods, tools and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures, using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed to identify the necessary system models, how those models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and the capabilities required of both individual tools and an integrated toolset were identified.

    A parallel and distributed genetic-based learning classifier system with application in human electroencephalographic signal classification

    University of Technology, Sydney, Faculty of Engineering. Genetic-based Learning Classifier Systems have been proposed as a competent technology for the classification of medical data sets. What is not known about this class of system is twofold. First, how does a Learning Classifier System (LCS) perform when applied to the single-step classification of multiple-channel, noisy, artefact-inclusive human EEG signals acquired from many participants? Second, and more importantly, how does the learning classifier system perform when it incorporates migration strategies, inspired by multi-deme, coarse-grained Parallel Genetic Algorithms (PGA), to provide parallel and distributed classifier migration? This research investigates these open questions and concludes, subject to the considerations herein, that these approaches can provide competitive classification performance for such applications. We performed a preliminary examination and implementation of a parallel genetic algorithm and a hybrid local-search PGA using experimental methods; parallelisation and the incorporation of classical local search into a genetic algorithm are well-known means of increasing performance, and we examine both. Furthermore, inspired by the significant improvements in convergence velocity and solution quality provided by the multi-deme, coarse-grained Parallel Genetic Algorithm, we incorporate the method into a learning classifier system with the aim of providing parallel and distributed classifier migration. As a result, a new learning classifier system (pXCS) is proposed that improves classification accuracy, achieves increased learning rates and significantly reduces the classifier population during learning. It is compared to the extended Learning Classifier System (XCS) and several state-of-the-art non-evolutionary classifiers in the single-step classification of noisy, artefact-inclusive human EEG signals derived from mental-task experiments conducted with ten human participants. We also conclude that establishing an appropriate migration strategy is an important determinant of pXCS learning and classification performance: an inappropriate migration rate, frequency or selection:replacement scheme can reduce performance, and we document the factors associated with this. Furthermore, we conclude that EEG segment size and representation both have a significant influence on classification performance; in effect, determining an appropriate representation of the raw EEG signal is as important as the classification method itself. This research allows us to further explore and incorporate pXCS-evolved classifiers derived from multi-channel human EEG signals as an interface for the control of devices such as a powered wheelchair, and in brain-computer interface (BCI) applications.
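
    The sketch below illustrates the multi-deme, coarse-grained migration scheme that pXCS borrows from parallel genetic algorithms: several populations evolve independently and periodically exchange their best individuals. It is not pXCS itself; the bit-string genomes, ones-counting fitness, ring topology and parameter values are placeholders chosen only to expose the migration rate, frequency and selection:replacement choices discussed above.

        # Hypothetical sketch (not pXCS): an island-model / multi-deme GA with
        # periodic migration. Genomes, fitness and all parameters are placeholders.

        import random

        GENOME_LEN, DEME_SIZE, NUM_DEMES = 32, 20, 4
        MIGRATION_INTERVAL, MIGRANTS = 5, 2          # migration frequency and rate

        def fitness(genome):
            """Toy objective: count of ones (a real deme would hold LCS classifiers)."""
            return sum(genome)

        def new_deme():
            return [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(DEME_SIZE)]

        def evolve(deme):
            """One generation: tournament selection, one-point crossover, bit-flip mutation."""
            nxt = []
            while len(nxt) < DEME_SIZE:
                a, b = (max(random.sample(deme, 3), key=fitness) for _ in range(2))
                cut = random.randrange(1, GENOME_LEN)
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:
                    i = random.randrange(GENOME_LEN)
                    child[i] ^= 1
                nxt.append(child)
            return nxt

        def migrate(demes):
            """Ring topology: each deme's best individuals replace the next deme's worst."""
            for i, deme in enumerate(demes):
                best = sorted(deme, key=fitness, reverse=True)[:MIGRANTS]
                target = demes[(i + 1) % NUM_DEMES]
                target.sort(key=fitness)                  # worst individuals first
                target[:MIGRANTS] = [g[:] for g in best]  # selection:replacement scheme

        if __name__ == "__main__":
            demes = [new_deme() for _ in range(NUM_DEMES)]
            for gen in range(1, 41):
                demes = [evolve(d) for d in demes]
                if gen % MIGRATION_INTERVAL == 0:
                    migrate(demes)
            print("best fitness:", max(fitness(g) for d in demes for g in d))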

    The Drift Chambers of the NOMAD Experiment

    We present a detailed description of the drift chambers used as an active target and a tracking device in the NOMAD experiment at CERN. The main characteristics of these chambers are a large area, a self-supporting structure made of light composite materials, and a low cost. A spatial resolution of 150 microns has been achieved with a single-hit efficiency of 97%. Comment: 42 pages, 26 figures.