
    Nature-inspired soft robotics: On artificial cilia and magnetic locomotion

    Inspired by micro-organisms in nature, researchers have envisioned micro-scale soft robots that work inside the human body for therapeutic drug delivery, minimally invasive surgery, or diagnostic biochemical sensing. Creating such robots is challenging because of their small size, the viscous environment they operate in, and their soft constituent materials. In addition, their mechanisms of operation differ markedly from those of the conventional rigid macro-scale robots we are familiar with. In this PhD project, we focused on the computational analysis and design of micro-scale soft robots. Working closely with experimental groups, we studied artificial cilia and micro-swimmers that can realize particle manipulation, fluid transport, fluid mixing, or magnetic locomotion. Various cilia systems are considered, including soft inflatable cilia that can be controlled individually and programmable magnetic cilia featuring phase shifts and collective metachronal patterns. We also analyze micro-swimmers that are soft and adaptive in confined spaces. Driven by different external magnetic fields, the swimmer's motion can be switched among undulation crawling, undulation swimming, and helical crawling. Using computational modeling, we analyze the transport mechanisms of the soft robots and study the effect of different parameters to provide guidelines for designing the robots for specific applications. By studying the physical mechanisms of micro-organisms in nature, we not only understand their functional behaviour more clearly but also open the possibility of biomimetic design of soft robotic cilia and micro-swimmers.
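
    To make the idea of a metachronal pattern concrete, here is a minimal sketch, not taken from the thesis, of how a fixed phase lag between neighbouring magnetic cilia turns individual sinusoidal beats into a travelling wave along the array; the waveform, amplitude and phase lag are illustrative assumptions.

    ```python
    import numpy as np

    def cilia_beat_angles(t, n_cilia=10, phase_lag=np.pi / 6,
                          amplitude=np.pi / 4, omega=2 * np.pi):
        """Tilt angle of each cilium at time t.

        Every cilium beats sinusoidally; the constant phase lag between
        neighbours makes the beat crest travel along the array, i.e. a
        metachronal wave.
        """
        indices = np.arange(n_cilia)
        return amplitude * np.sin(omega * t - phase_lag * indices)

    # Sample a few instants of one beat period: the crest moves along the array.
    for t in np.linspace(0.0, 1.0, 5):
        print(f"t={t:.2f}  angles(deg)={np.degrees(cilia_beat_angles(t)).round(1)}")
    ```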

    The Impact of Novel Computing Architectures on Large-Scale Distributed Web Information Retrieval Systems

    Web search engines are the most popular means of interaction with the Web. Realizing a search engine that scales to the size of the Web presents many challenges. Fast crawling technology is needed to gather the Web documents. Indexing has to process hundreds of gigabytes of data efficiently. Queries have to be handled quickly, at a rate of thousands per second. As a solution, services within a datacenter are built up from clusters of common homogeneous PCs. However, Information Retrieval (IR) has to face issues raised by the growing amount of Web data, as well as the growing number of users. In response to these issues, cost-effective specialized hardware is available nowadays. In our opinion, this hardware is ideal for migrating distributed IR systems to computer clusters comprising heterogeneous processors in order to meet their need for computing power. Toward this end, we introduce K-model, a computational model to properly evaluate algorithms designed for such hardware. We study the impact of K-model rules on algorithm design. To assess the benefits of K-model, we compare the complexity of solutions designed with our techniques against that of existing ones. Although the competing solutions are, in theory, more efficient than ours, K-model proves its worth empirically: our solutions have been shown to be faster than the state-of-the-art implementations.

    Methods and design issues for next generation network-aware applications

    Networks are becoming an essential component of modern cyberinfrastructure, and this work describes methods of designing distributed applications for high-speed networks to improve application scalability, performance and capabilities. As the amount of data generated by scientific applications continues to grow, applications should be designed to use parallel, distributed resources and high-speed networks to handle and process it. For scalable application design, developers should move away from the current component-based approach and instead implement an integrated, non-layered architecture in which applications can use specialized low-level interfaces. The main focus of this research is interactive, collaborative visualization of large datasets. This work describes how a visualization application can be improved by using distributed resources and high-speed network links to interactively visualize tens of gigabytes of data and handle terabyte datasets while maintaining high quality. The application supports interactive frame rates, high resolution and collaborative visualization, and sustains remote I/O bandwidths of several Gbps (up to 30 times faster than local I/O). Motivated by the distributed visualization application, this work also investigates remote data access systems. Because wide-area networks may have high latency, the remote I/O system uses an architecture that effectively hides it. Five remote data access architectures are analyzed, and the results show that an architecture combining bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, which also supports high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system. Transport protocols are compared to understand which protocol can best utilize high-speed network connections, concluding that a rate-based protocol is the best solution, being 8 times faster than standard TCP. An HD-based remote teaching application experiment is conducted, illustrating the potential of network-aware applications in a production environment. Future research areas are presented, with emphasis on network-aware optimization, execution and deployment scenarios.
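
    As a rough illustration of the bulk-plus-pipeline idea, the sketch below keeps several large remote reads in flight while earlier chunks are processed, so wide-area latency overlaps with local computation; the chunk size, pipeline depth and fetch_chunk function are hypothetical and not taken from the system described above.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    CHUNK_SIZE = 8 * 1024 * 1024   # bulk requests of 8 MiB; illustrative value
    PIPELINE_DEPTH = 4             # number of reads kept in flight

    def fetch_chunk(offset):
        """Hypothetical remote read of CHUNK_SIZE bytes starting at offset."""
        raise NotImplementedError("replace with the actual remote I/O call")

    def pipelined_read(total_size, process):
        """Fetch a remote file chunk by chunk while processing earlier chunks."""
        offsets = range(0, total_size, CHUNK_SIZE)
        with ThreadPoolExecutor(max_workers=PIPELINE_DEPTH) as pool:
            # Up to PIPELINE_DEPTH reads run concurrently, so network latency
            # is hidden behind the processing of already-delivered chunks.
            for chunk in pool.map(fetch_chunk, offsets):
                process(chunk)
    ```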

    Distributed Iterative Graph Processing Using NoSQL with Data Locality

    A tremendous amount of data is generated every day from a wide range of sources such as social networks, sensors, and application logs. Among them, graph data is one type that represents valuable relationships between various entities. Analytics of large graphs has become an essential part of business processes and scientific studies because it leads to deep and meaningful insights into the related domain based on the connections between various entities. However, the optimal processing of large-scale iterative graph computations is very challenging due to issues such as fault tolerance, high memory requirements, parallelization, and scalability. Most contemporary systems focus either on keeping the entire graph in memory and minimizing disk access or on processing the graph completely on a single node with a centralized disk system. GraphMap is one of the state-of-the-art scalable and efficient out-of-core, disk-based iterative graph processing systems; it focuses on using secondary storage and optimizing I/O access. In this thesis, we investigate two new extensions to the existing out-of-core NoSQL-based distributed iterative graph processing system: 1) intra-worker data locality and 2) mincut-based partitioning. We design an intra-worker data-locality scheme that moves the computation towards the data rather than the other way around. This locality implementation yields a significant performance improvement of up to 39%. Similarly, we use the mincut-based graph partitioning technique to distribute the graph data uniformly across the workers for parallelization so that the inter-worker communication volume is minimized. Through extensive experiments, we also show that the mincut-based partitioning technique can lead to improper parallelization due to sub-optimal load balancing.
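
    As a small illustration of why a mincut-style partitioning reduces inter-worker traffic (the graph and worker assignments below are made up, not from the thesis), every edge whose endpoints land on different workers carries at least one message per superstep, so the edge cut approximates the communication volume.

    ```python
    def communication_volume(edges, assignment):
        """Count edges whose endpoints are placed on different workers.

        In a vertex-centric superstep each such cut edge carries at least
        one inter-worker message, so a smaller cut means less network traffic.
        """
        return sum(1 for u, v in edges if assignment[u] != assignment[v])

    # Two triangles joined by a single bridge edge.
    edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
    hash_partition = {v: v % 2 for v in range(1, 7)}      # naive split
    mincut_like = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}    # keeps triangles together
    print(communication_volume(edges, hash_partition))    # 5 cut edges
    print(communication_volume(edges, mincut_like))       # 1 cut edge
    ```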

    Generating Strong Diversity of Opinions: Agent Models of Continuous Opinion Dynamics

    Opinion dynamics is the study of how opinions in a group of individuals change over time. A long-standing goal of opinion dynamics modelers has been to find a social science-based model that generates strong diversity: smooth, stable, possibly multi-modal distributions of opinions. This research lays the foundations for and develops such a model. First, a taxonomy is developed to precisely describe agent schedules in an opinion dynamics model. The importance of scheduling is shown with applications to generalized forms of two models. Next, the meta-contrast influence field (MIF) model is defined. It is rooted in self-categorization theory and improves on the existing meta-contrast model by providing a properly scaled, continuous influence basis. Finally, the MIF-Local Repulsion (MIF-LR) model is developed and presented. This augments the MIF model with a formulation of uniqueness theory. The MIF-LR model generates strong diversity. An application of the model shows that partisan polarization can be explained by increased non-local social ties enabled by communications technology.
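
    The sketch below is not the MIF-LR formulation itself, only a minimal illustration of the two ingredients the abstract combines: bounded-confidence attraction towards sufficiently similar opinions and a local repulsion from near-identical ones (a uniqueness-like effect). All parameter values and the update rule are illustrative assumptions.

    ```python
    import random

    ATTRACT_RANGE = 0.3   # neighbours this close pull the agent towards them
    REPEL_RANGE = 0.05    # near-identical neighbours push the agent away
    STEP = 0.05

    def update(opinions):
        """One synchronous update of continuous opinions in [0, 1]."""
        new = []
        for i, x in enumerate(opinions):
            pull = 0.0
            for j, y in enumerate(opinions):
                if i == j:
                    continue
                d = y - x
                if abs(d) < REPEL_RANGE:
                    pull -= d          # local repulsion: move away from clones
                elif abs(d) < ATTRACT_RANGE:
                    pull += d          # bounded-confidence attraction
            new.append(min(1.0, max(0.0, x + STEP * pull / len(opinions))))
        return new

    opinions = [random.random() for _ in range(50)]
    for _ in range(200):
        opinions = update(opinions)
    print(sorted(round(x, 2) for x in opinions))
    ```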

    Performance Improvement of Distributed Computing Framework and Scientific Big Data Analysis

    Analysis of Big Data to gain better insights has been the focus of researchers in the recent past. Traditional desktop computers or database management systems may not be suitable for efficient and timely analysis, due to the requirement of massive parallel processing. Distributed computing frameworks are being explored as a viable solution. For example, Google proposed MapReduce, which is becoming a de facto computing architecture for Big Data solutions. However, scheduling in MapReduce is coarse-grained and remains a challenge for improvement. For the MapReduce scheduler configured over distributed clusters, we identify two issues: data locality disruption and random assignment of non-local map tasks. We propose a network-aware scheduler that extends the existing rack awareness. Tasks are scheduled in the order of node, rack, and any other rack within the same cluster to achieve cluster-level data locality. The issue of randomly assigned non-local map tasks is handled by enhancing the scheduler to consider network parameters, such as delay, bandwidth and packet loss between remote clusters. As part of Big Data analysis in computational biology, we consider two major data-intensive applications: indexing genome sequences and de novo assembly. Both of these applications deal with the massive amounts of data generated by DNA sequencers. We developed a scalable algorithm to construct sub-trees of a suffix tree in parallel to address the huge memory requirements of indexing the human genome. For de novo assembly, we propose the Parallel Giraph-based Assembler (PGA) to address the challenges associated with assembling large genomes on commodity hardware. PGA uses the de Bruijn graph to represent the data generated by sequencers. Huge memory demands and performance expectations are addressed by developing parallel algorithms based on the distributed graph-processing framework Apache Giraph.
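
    A minimal sketch (data structures and scoring are assumptions, not the implemented scheduler) of the locality order described above: prefer a node holding the input split, then a free node in the same rack, then any free node in the same cluster, and only then a node in a remote cluster ranked by measured network parameters.

    ```python
    def pick_node(replicas, free_nodes, topology, remote_metrics):
        """Choose a node for a map task, preferring cluster-level data locality.

        replicas: nodes holding a replica of the task's input split
        free_nodes: nodes that currently have a free task slot
        topology: dict node -> (cluster, rack)
        remote_metrics: dict cluster -> {'delay', 'bandwidth', 'loss'}
        """
        local = [n for n in replicas if n in free_nodes]
        if local:                                            # 1. node-local
            return local[0]
        racks = {topology[n] for n in replicas}
        same_rack = [n for n in free_nodes if topology[n] in racks]
        if same_rack:                                        # 2. rack-local
            return same_rack[0]
        clusters = {topology[n][0] for n in replicas}
        same_cluster = [n for n in free_nodes if topology[n][0] in clusters]
        if same_cluster:                                     # 3. same cluster
            return same_cluster[0]

        # 4. remote cluster: prefer low delay and loss, high bandwidth
        def cost(cluster):
            m = remote_metrics[cluster]
            return m['delay'] * (1 + m['loss']) / m['bandwidth']

        best = min(remote_metrics, key=cost)
        remote = [n for n in free_nodes if topology[n][0] == best]
        return remote[0] if remote else None
    ```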

    Development and clinical application of assessment measures to describe and quantify intra-limb coordination during walking in normal children and children with cerebral palsy

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.
    This thesis investigates coordination of the lower limb joints within the limb during walking. The researcher was motivated by her clinical experience as a paediatric physiotherapist. She observed that the pattern of lower limb coordination differed between normal children and those with cerebral palsy. Many of the currently used interventions did not appear to influence this patterning. As a precursor to evaluating the effectiveness of treatments in modifying coordination, a tool to measure coordination was required. The researcher initially investigated qualitative and then quantitative methods of measuring within-limb coordination. A technique was developed that used the relative angular velocity of two joints to determine when the joints were in-phase, anti-phasic or in stasis. The phasic parameters of hip/knee, knee/ankle and hip/ankle joint coordination were quantified. There were some significant differences between normal children and children with cerebral palsy. Asymmetry of these phasic parameters was identified, with children with cerebral palsy being more asymmetrical than normal children. The clinical utility of this technique was tested by comparing two groups of children before and after two surgical procedures. This showed some significant differences in phasic parameters between pre- and post-operative data for one procedure. Low sample sizes mean that further work is required to confirm these findings. Data from this work has been used to calculate sample sizes to give an a priori power of 0.8, and further research is proposed and potential applications discussed. It is hoped that this technique will raise awareness of abnormal intra-limb coordination and allow therapists to identify key interactions between joints that need to be facilitated during walking training.
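
    A minimal sketch of the kind of classification the technique describes: from sampled joint angles, the angular velocities of two joints are compared and each instant is labelled in-phase (velocities in the same direction), anti-phasic (opposite directions), or stasis (one or both joints nearly still). The stasis threshold and function names are illustrative assumptions, not values from the thesis.

    ```python
    import numpy as np

    STASIS_THRESHOLD = 5.0  # deg/s; illustrative, not the thesis value

    def phase_labels(hip_deg, knee_deg, sample_rate_hz):
        """Label each sample of a gait cycle as in-phase, anti-phasic or stasis."""
        hip_vel = np.gradient(np.asarray(hip_deg)) * sample_rate_hz
        knee_vel = np.gradient(np.asarray(knee_deg)) * sample_rate_hz
        labels = []
        for hv, kv in zip(hip_vel, knee_vel):
            if abs(hv) < STASIS_THRESHOLD or abs(kv) < STASIS_THRESHOLD:
                labels.append("stasis")
            elif hv * kv > 0:
                labels.append("in-phase")      # both joints move in the same direction
            else:
                labels.append("anti-phasic")   # joints move in opposite directions
        return labels
    ```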

    Securing the Next Generation Web

    With the ever-increasing digitalization of society, the need for secure systems is growing. While some security features, like HTTPS, are popular, securing web applications and the clients we use to interact with them remains difficult.

    To secure web applications we focus on both the client side and the server side. For the client side, mainly web browsers, we analyze how new security features might solve one problem but introduce new ones. We show this by performing a systematic analysis of the new Content Security Policy (CSP) directive navigate-to. In our research, we find that it does introduce new vulnerabilities, for which we recommend countermeasures. We also create AutoNav, a tool capable of automatically suggesting navigation policies for this directive. Finding server-side vulnerabilities in a black-box setting, where there is no access to the source code, is challenging. To improve this, we develop novel black-box methods for automatically finding vulnerabilities. We accomplish this by identifying key challenges in web scanning and combining the best of previous methods. Additionally, we leverage SMT solvers to further improve the coverage and vulnerability detection rate of scanners.

    In addition to browsers, browser extensions also play an important role in the web ecosystem. These small programs, e.g. ad blockers and password managers, have powerful APIs and access to sensitive user data like browsing history. By systematically analyzing the extension ecosystem we find new static and dynamic methods for detecting both malicious and vulnerable extensions. In addition, we develop a method for detecting malicious extensions based solely on the metadata of downloads over time. We also analyze new attack vectors introduced by Google's new vehicle OS, Android Automotive, which is based on Android with the addition of vehicle APIs. Our analysis results in new attacks pertaining to safety, privacy, and availability. Furthermore, we create AutoTame, which is designed to analyze third-party apps for vehicles for the vulnerabilities we found.
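
    For context, a minimal sketch of how a navigation policy such as those AutoNav suggests might be delivered: an HTTP response carrying a Content-Security-Policy header with the experimental navigate-to directive, served here from Python's built-in HTTP server. The allowed origins are placeholders, and the exact policy is an assumption rather than one produced by AutoNav.

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Placeholder policy: only allow navigations to the page's own origin
    # and to a hypothetical partner site.
    POLICY = "navigate-to 'self' https://partner.example"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<a href='https://evil.example'>navigation restricted by policy</a>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Security-Policy", POLICY)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()
    ```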

    Methods to Improve Applicability and Efficiency of Distributed Data-Centric Compute Frameworks

    The success of modern applications depends on the insights they collect from their data repositories. Data repositories for such applications currently exceed exabytes and are rapidly increasing in size, as they collect data from varied sources: web applications, mobile phones, sensors and other connected devices. Distributed storage and data-centric compute frameworks have been invented to store and analyze these large datasets. This dissertation focuses on extending the applicability and improving the efficiency of distributed data-centric compute frameworks.