
    Fluidic logic for the control and design of soft robotic systems

    The state of the art in robotics is machines that do jobs. These jobs can be automated routine procedures that deliver services with minimal human intervention, or the building, maintenance, and removal of infrastructure in areas too dangerous for humans to enter. Such tasks are open areas of research that can have a net-positive impact on society. We build robots from subsystems of hard components and rigid links stacked in a physical hierarchy: building blocks of transistors on printed circuit boards, integrated to control motors and end effectors. The hardness of these systems means we can predict the motion and trajectory of robots; however, the constrained environments they require limit where robots can be used. Suppose we would like a robot to interact with a human. In that case, rigid materials and control systems may be incompatible with this unconstrained environment. Soft robots represent a change in thinking about the dominant materials and control methods of robotic systems. Soft roboticists use soft materials, compliant joints with variable stiffness, and systems that deform in interaction with the environment. Rather than motors, soft robots use air or other fluids to inflate and deflate chambers, making them move and grasp. The design heuristic in soft robotics combines simple elements to create more complex systems. In this hierarchical architecture there is a one-to-one mapping of control hardware to actuators, yielding systems capable of an increasingly diverse range of movements and actions. Nevertheless, as soft robots become even more capable, we will reach practical limits in size and control. This thesis explores the interdependence of architecture and control, moving beyond the current design heuristic to increase the capability of soft robots.
An ideal control system in a soft robot has a small number of hardware outputs controlling a large number of actuators. Such an architecture could improve our ability to implement desired motions and behaviours to perform valuable tasks and move towards increased autonomy in soft robotics. This problem is reminiscent of the mechanical analogue systems developed in the 20th century for numerical ballistic calculations. A solution using an abstract system of logic and philosophy ultimately led to the invention of the transistor and the electronics hierarchy of transistors on printed circuit boards and integrated systems in computers and robotics today. In this study, I use a fluidic transistor primitive to build memory elements based on logic gates and combinational logic to control arrays of actuators. The contributions of this thesis include the following: (i) a perspective on the current paradigm in soft robotic architecture and the scaling problem of control schemes in soft robots; (ii) the uses of stacking and hierarchy as a design principle in soft robots; (iii) the applications of sequential logic and memory for multi-state automata soft robots; (iv) a description of design dependencies for fluidic systems for medium-to-large-scale integration. In summary, I address the significant challenge of soft robotic control and design, moving beyond the limitations of the control architecture toward autonomy using a fluidic architecture. I move through the levels of automata theory from combinational logic to sequential circuits and finite-state machines using fluidic transistors. My studies may help lay the foundations of a fluidic hardware description language for building large-scale integrated fluidic circuits in soft robotics design.
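The step this abstract describes, from combinational gates to memory, can be illustrated with a minimal sketch. This is a hypothetical logic-level model, not the thesis's fluidic hardware: a set-reset latch built from two cross-coupled NOR gates, the classic way a universal gate becomes a one-bit memory element.

```python
# Hypothetical sketch: an SR latch from cross-coupled NOR gates, the
# combinational-to-sequential step the abstract describes for fluidic
# transistors. Gate and function names are illustrative only.

def nor(a: int, b: int) -> int:
    """Two-input NOR, a universal gate a fluidic transistor can realise."""
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int) -> int:
    """Settle the cross-coupled NOR pair; return the new stored state q."""
    for _ in range(4):              # iterate until the feedback loop settles
        q_bar = nor(s, q)
        q = nor(r, q_bar)
    return q

q = 0
q = sr_latch(1, 0, q)   # set: actuator chamber held inflated -> q == 1
q = sr_latch(0, 0, q)   # hold: state retained with no control input
q = sr_latch(0, 1, q)   # reset: chamber released -> q == 0
```

Because the latch holds its state with both inputs low, one control line can address many such elements in sequence, which is the essence of reducing hardware outputs per actuator.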

    Adaptive Intelligent Systems for Extreme Environments

    As embedded processors become more powerful, a growing number of embedded systems equipped with artificial intelligence (AI) algorithms are being used in radiation environments to perform routine tasks and reduce radiation risk for human workers. Because of their low price, commercial off-the-shelf devices and components are becoming increasingly popular for making such tasks affordable. At the same time, harsh environments present new challenges: improving radiation tolerance, supporting multiple AI tasks, and delivering power efficiency in embedded systems. Three pieces of research work are completed in this thesis: 1) a fast simulation method for analysing single event effects (SEEs) in integrated circuits, 2) a self-refresh scheme to detect and correct bit-flips in random access memory (RAM), and 3) a hardware AI system with dynamic hardware accelerators and AI models for increased flexibility and efficiency. Variances in the physical parameters of a practical implementation, such as the nature of the particle, the linear energy transfer, and circuit characteristics, can have a large impact on final simulation accuracy, which significantly increases the complexity and cost of transistor-level simulation workflows and makes SEE simulation of large-scale circuits difficult. Therefore, in the first research work, a new SEE simulation scheme is proposed to offer a fast, cost-efficient method for evaluating and comparing the performance of large-scale circuits subject to the effects of radiation particles. The advantages of transistor-level and hardware description language (HDL) simulations are combined to produce accurate SEE digital error models for rapid error analysis in large-scale circuits. Under the proposed scheme, time-consuming back-end steps are skipped, and SEE analysis for large-scale circuits can be completed in just a few hours.
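The abstract's idea of lifting SEE analysis from the transistor level to the logic level can be sketched in miniature. In this hypothetical Monte Carlo illustration (not the thesis's actual tool), transistor-level physics is abstracted into a per-node upset probability, and errors are injected directly into a gate-level model of a full adder to estimate the observable error rate quickly.

```python
import random

# Hypothetical sketch: transistor-level SEE models reduced to a per-node
# upset probability p_upset, with faults injected at the logic level so a
# circuit can be analysed without slow transistor-level back-end steps.

def full_adder(a, b, cin):
    """Fault-free reference model."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def full_adder_with_see(a, b, cin, p_upset, rng):
    """Same adder, but each internal node may flip with probability p_upset."""
    flip = lambda v: v ^ 1 if rng.random() < p_upset else v
    axb = flip(a ^ b)                      # intermediate XOR node
    s = flip(axb ^ cin)                    # sum node
    cout = flip((a & b) | (cin & axb))     # carry-out node
    return s, cout

rng = random.Random(0)
trials, errors = 10_000, 0
for _ in range(trials):
    a, b, cin = (rng.randint(0, 1) for _ in range(3))
    if full_adder(a, b, cin) != full_adder_with_see(a, b, cin, 0.01, rng):
        errors += 1
print(f"observed SEE error rate: {errors / trials:.4f}")
```

Scaling the same injection idea over an HDL netlist, rather than re-simulating every transistor, is what makes hour-scale analysis of large circuits plausible.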
In high-radiation environments, bit-flips in RAM not only occur but can also accumulate, and typical error mitigation methods cannot handle high error rates at low hardware cost. In the second work, an adaptive scheme combining error-correcting codes and refresh techniques is proposed to correct errors and mitigate error accumulation in extreme radiation environments. The scheme continuously refreshes the data in RAM so that errors do not accumulate. Furthermore, because the proposed design shares the same ports as the user module without changing the timing sequence, it can easily be applied to systems whose hardware modules are designed with fixed read and write latency. Implementing intelligent systems with constrained hardware resources is a challenge. In the third work, an adaptive hardware resource management system for multiple AI tasks in harsh environments was designed. Inspired by the "refreshing" concept of the second work, we utilise a key feature of FPGAs, partial reconfiguration, to improve the reliability and efficiency of the AI system. More importantly, this feature provides the capability to manage hardware resources for deep learning acceleration. In the proposed design, on-chip hardware resources are dynamically managed to improve the flexibility, performance, and power efficiency of deep learning inference systems. The deep learning processing units provided by Xilinx are used to perform multiple AI tasks simultaneously, and experiments show significant improvements in power efficiency across a wide range of scenarios with different workloads. To further improve system performance, the reconfiguration concept was extended and an adaptive deep learning (DL) software framework was designed. This framework provides a significant level of adaptability for various deep learning algorithms on an FPGA-based edge computing platform.
To meet the specific accuracy and latency requirements of the running applications and operating environments, the platform can dynamically update hardware and software (e.g., processing pipelines) to achieve better cost, power, and processing efficiency than a static system.
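The scrubbing idea in the second work, continuously passing memory words through an error-correcting code so single flips are repaired before a second flip lands in the same word, can be illustrated with a small sketch. Hamming(7,4) here is a stand-in assumption, not necessarily the code the real design uses.

```python
# Hypothetical illustration of ECC-based memory scrubbing: each refresh
# pass recomputes the Hamming(7,4) syndrome and corrects any single
# bit-flip, so radiation-induced errors cannot accumulate between passes.

def encode(nibble: int) -> list[int]:
    """Encode 4 data bits as a 7-bit Hamming codeword [p1,p2,d0,p3,d1,d2,d3]."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def scrub(word: list[int]) -> list[int]:
    """Recompute the syndrome; flip the indicated bit if one error is present."""
    syndrome = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            syndrome ^= pos
    if syndrome:                      # non-zero syndrome = flipped bit position
        word[syndrome - 1] ^= 1
    return word

ram = [encode(n) for n in range(16)]
ram[5][2] ^= 1                   # radiation event: one bit of word 5 flips
ram = [scrub(w) for w in ram]    # refresh pass silently corrects it
assert ram[5] == encode(5)
```

Running the scrub in the gaps of normal traffic, sharing the RAM ports without altering read/write latency, is the property the abstract highlights for drop-in integration.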

    Framework for developing large-scale metal additive manufacturing (MAM) systems and procedures

    Additive manufacturing (AM) is regarded as one of the most disruptive technologies of this era. However, the technology is still evolving, even as new applications continue to emerge, thus presenting an opportunity to manage disruptive changes more efficaciously whilst strategically shaping critical innovation efforts. Accordingly, this study focusses on supplementary manufacturing requirements management (SMRM). Presently, maintenance challenges, such as the reworking of serviceable infrastructures and other similarly demanding industrial endeavours, contribute to higher costs, inefficiencies, and waste, with technical and other limitations, including disparate supply circumstances and demand patterns, further compounding these issues. Underpinned by SMRM applications, the grounded theory (GT) method was applied to construct a framework for developing large-scale metal AM (MAM) systems and procedures. Case data were derived from the development and commissioning of a unique system and its embodied concepts, predominantly enabled by commercial off-the-shelf (COTS) solutions, and from subsequent testing and validation of the resulting open architecture (OA) bulk AM (BAM) platform. Empirical and numerical investigations were facilitated by an industrial titanium (Ti-6Al-4V) aerospace component, which demonstrated the suitability and effectiveness of the BAM platform for specific SMRM operations. While the physical and mechanical properties of the derived BAM materials were characteristic and within range of referenced ASTM standards, disparities between the predicted and quantified effects of reprocessing operations on the component necessitate further investigation. The main output of this study is a GT framework, which identifies six strategic developmental themes for more adoptable, compliant, functional, operable, systemical, and adaptable MAM solutions.
Important priorities for improving interrelated system functions, procedures, and performance were also defined, alongside an original technical perspectives management concept for navigating complex requirements in an evolving technology landscape via the documentation, clarification, validation, and prioritisation of key elements that significantly impact planned innovations.

    Cost effective robotics in the nuclear industry
