Trends and challenges in neuroengineering: toward "Intelligent" neuroprostheses through brain-"brain inspired systems" communication
Future technologies aiming at restoring and enhancing organ function will rely intimately on near-physiological and energy-efficient communication between living and artificial biomimetic systems. Interfacing brain-inspired devices with the real brain is at the forefront of this emerging field, with the term "neurobiohybrids" indicating all systems in which such an interaction is established. We argue that achieving "high-level" communication and functional synergy between natural and artificial neuronal networks in vivo will allow the development of a heterogeneous world of neurobiohybrids, which will include "living robots" but will also embrace "intelligent" neuroprostheses for the augmentation of brain function. The societal and economic impact of intelligent neuroprostheses is likely to be strong, as they will offer novel therapeutic perspectives for a number of diseases and will go beyond classical pharmaceutical schemes. However, they will unavoidably raise fundamental ethical questions about the intermingling of man and machine and, more specifically, about how deeply implanted "intelligent" artificial systems should be allowed to affect brain processing. Following this perspective, we provide the reader with insights on ongoing developments and trends in the field of neurobiohybrids. We also address the topic from a "community building" perspective, showing through a quantitative bibliographic analysis how scientists working on the engineering of brain-inspired devices and brain-machine interfaces are increasing their interactions. We foresee that this trend is the prelude to a formidable technological and scientific revolution in brain-machine communication and to the opening of new avenues for restoring, or even augmenting, brain function for therapeutic purposes.
Tools and Technologies for Enabling Characterisation in Synthetic Biology
Synthetic Biology represents a movement to utilise biological organisms for novel applications through the use of rigorous engineering principles. These principles rely on a solid and thorough understanding of the underlying biological components and functions (relevant to the application). In order to achieve this understanding, reliable behavioural and contextual information is required (more commonly known as characterisation data). Focussing on lowering the barrier of entry for current research facilities to regularly and easily perform characterisation assays will directly improve the communal knowledge base for Synthetic Biology and enable the further application of rational engineering principles.
Whilst characterisation remains a fundamental principle of Synthetic Biology research, the high time costs, subjective measurement protocols, and ambiguous data analysis specifications deter the regular performance of characterisation assays. Vitally, this prevents the valid application of many of the key Synthetic Biology processes that have been derived to improve research yield (with regard to solving application problems) and directly prevents the intended goal of addressing the ad hoc nature of modern research from being realised.
Designing new technologies and tools to facilitate rapid 'hands-off' characterisation assays for research facilities will improve the uptake of characterisation within the research pipeline. To achieve this, two core problem areas that limit current characterisation attempts in conventional research were identified. The primary aim of this investigation was therefore to overcome these two core problems and so promote regular characterisation.
The first issue identified as preventing the regular use of characterisation assays was the user-intensive set of methodologies and technologies available to researchers. There is currently no standardised characterisation equipment for assaying samples, and the methodologies are heavily dependent on the researcher and their application for successful and complete characterisation. This study proposed a novel high-throughput solution capable of low-cost, concurrent, and rapid characterisation of simple biological DNA elements. By combining in vitro transcription-translation with microfluidics, a potent solution to the characterisation problem was proposed: exploiting a completely in vitro approach along with the excellent control afforded by microfluidic technologies, a prototype platform for high-throughput characterisation was developed.
The second issue identified was the lack of flexible, versatile software designed specifically for the data-handling needs that are quickly arising within the characterisation speciality. The lack of general solutions in this area is problematic because of the increasing amount of data that is both required and generated for characterisation output to be considered rigorous and of value. To alleviate this issue, a novel framework for laboratory data handling was developed that employs a plugin strategy for data submission and analysis. Employing a plugin strategy improves the shelf life of data-handling software by allowing it to grow with the needs of the speciality. Another advantage of this strategy is the increased likelihood that well-documented processing and analysis standards will emerge and be available to all researchers. Finally, the software provided a powerful and flexible data storage schema that allowed all currently conceivable characterisation data types to be stored in a well-documented manner.
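The plugin strategy described above lends itself to a small registry pattern. The following is a minimal, hypothetical Python sketch of that design; the class, plugin and format names are illustrative assumptions and are not taken from the thesis's actual software.

```python
# Minimal sketch of a plugin-based data-handling framework (illustrative only;
# names and structure are assumptions, not the thesis's implementation).
from typing import Any, Callable, Dict, List, Tuple

class DataHandlingFramework:
    """Core framework that knows nothing about specific assay formats."""

    def __init__(self) -> None:
        self._submission_plugins: Dict[str, Callable[[str], Any]] = {}
        self._analysis_plugins: Dict[str, Callable[[Any], Any]] = {}

    def register_submission(self, fmt: str, parser: Callable[[str], Any]) -> None:
        # New data formats are supported by registering a parser,
        # not by modifying the framework itself.
        self._submission_plugins[fmt] = parser

    def register_analysis(self, name: str, analysis: Callable[[Any], Any]) -> None:
        self._analysis_plugins[name] = analysis

    def submit(self, fmt: str, raw: str) -> Any:
        return self._submission_plugins[fmt](raw)

    def analyse(self, name: str, data: Any) -> Any:
        return self._analysis_plugins[name](data)

# Example plugins: a CSV-style fluorescence time course and a mean analysis.
def parse_fluorescence_csv(raw: str) -> List[Tuple[float, float]]:
    rows = [line.split(",") for line in raw.strip().splitlines()[1:]]
    return [(float(t), float(v)) for t, v in rows]

def mean_fluorescence(series: List[Tuple[float, float]]) -> float:
    return sum(v for _, v in series) / len(series)

framework = DataHandlingFramework()
framework.register_submission("fluorescence_csv", parse_fluorescence_csv)
framework.register_analysis("mean", mean_fluorescence)

series = framework.submit("fluorescence_csv", "time,rfu\n0,10.0\n60,12.5\n120,15.0")
print(framework.analyse("mean", series))  # 12.5
```

The design point, as in the thesis, is that support for new data types and analyses grows by adding plugins rather than by changing the core software.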
The two solutions identified within this study increase the range of enabling tools and technologies available to researchers within Synthetic Biology, which in turn will increase the uptake of regular characterisation. Consequently, this will potentially improve the lateral transfer of knowledge between research projects and reduce the need to perform ad hoc experiments to investigate facets of the fundamental biological components being utilised.
Developing Resilient and Expandable Adaptive Capacity Arbitration Algorithms for Future WCDMA (UMTS) Wireless Systems
The objective of this research paper is to tackle the emerging challenges associated with resource management in future WCDMA (UMTS) wireless systems by presenting resilient and expandable adaptive capacity arbitration algorithms. The escalating demands for wireless communication necessitate effective resource allocation that ensures optimal performance and user satisfaction. Accordingly, this paper introduces innovative algorithms designed to dynamically distribute resources based on user requirements, channel conditions, and Quality of Service (QoS) preferences. Through an extensive analysis of the pertinent literature, this work identifies the limitations inherent in the resource allocation strategies currently applied in WCDMA/UMTS systems. The proposed algorithms place a strong emphasis on achieving resilience by taking into account interference, uncertainties, and evolving network conditions. Furthermore, the algorithms are designed to address scalability concerns so that they can efficiently handle a growing number of users and devices. The approach involves developing these algorithms and then evaluating their performance comprehensively using simulation tools. The results indicate that the proposed adaptive capacity arbitration algorithms outperform existing methods in terms of throughput, latency, and resource utilization. These findings suggest that the algorithms have the potential to greatly enhance the efficiency and reliability of future wireless systems. In short, this paper contributes to the field of wireless communication by presenting innovative adaptive capacity arbitration algorithms specifically tailored to WCDMA/UMTS wireless systems. With their demonstrated robustness and scalability, these algorithms hold significant promise for improving resource management within wireless networks, thereby paving the way for better connectivity and enhanced user experiences. Future research could explore the practical application of these algorithms in real-world contexts and improve their efficiency under different network conditions.
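As an illustration of what "adaptive capacity arbitration" can mean in practice, here is a deliberately simplified Python sketch that shares a cell's capacity budget among users in proportion to QoS weight and channel quality. It is not the paper's algorithm; the weighting scheme, parameter names and the fixed capacity budget are assumptions made purely for illustration.

```python
# Hypothetical sketch of QoS- and channel-aware capacity arbitration
# (illustrative only; not the algorithm proposed in the paper).
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class User:
    name: str
    qos_weight: float       # higher = more demanding service class
    channel_quality: float  # 0..1, e.g. a normalised channel-quality estimate
    demand_kbps: float      # requested rate

def arbitrate(users: List[User], cell_capacity_kbps: float) -> Dict[str, float]:
    """Split capacity in proportion to qos_weight * channel_quality, never
    granting more than a user asked for; leftover capacity is redistributed
    among still-unsatisfied users."""
    allocation = {u.name: 0.0 for u in users}
    remaining = cell_capacity_kbps
    pending = list(users)
    while pending and remaining > 1e-9:
        total_score = sum(u.qos_weight * u.channel_quality for u in pending)
        still_pending = []
        for u in pending:
            share = remaining * (u.qos_weight * u.channel_quality) / total_score
            grant = min(share, u.demand_kbps - allocation[u.name])
            allocation[u.name] += grant
            if allocation[u.name] < u.demand_kbps - 1e-9:
                still_pending.append(u)
        remaining = cell_capacity_kbps - sum(allocation.values())
        pending = still_pending
    return allocation

users = [
    User("voice", qos_weight=3.0, channel_quality=0.9, demand_kbps=64),
    User("video", qos_weight=2.0, channel_quality=0.6, demand_kbps=384),
    User("data",  qos_weight=1.0, channel_quality=0.8, demand_kbps=2048),
]
print(arbitrate(users, cell_capacity_kbps=1500))
```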
Technological coalescence, recombinant innovation and future work; Artificial Intelligence (AI) and the workforce
Ecological determinants of smart home ecosystems: A coopetition framework
DESIGN OF MOBILE DATA COLLECTOR BASED CLUSTERING ROUTING PROTOCOL FOR WIRELESS SENSOR NETWORKS
Wireless Sensor Networks (WSNs), consisting of hundreds or even thousands of nodes, can be used for a multitude of applications such as warfare intelligence or environmental monitoring. A typical WSN node has a limited and usually irreplaceable power source, and the efficient use of the available power is of utmost importance to ensure the maximum lifetime of each WSN application. Each of the nodes needs to transmit and communicate sensed data to an aggregation point for use by higher-layer systems. Data and message transmission among nodes collectively consume the largest amount of energy available in WSNs. The network routing protocols ensure that every message reaches the destination and have a direct impact on the number of transmissions needed to deliver messages successfully. To this end, the transmission protocol within the WSN should be scalable, adaptable and optimized to consume the least possible amount of energy, to suit different network architectures and application domains. The inclusion of mobile nodes in WSN deployments proves to be detrimental to protocol performance in terms of node energy efficiency and reliable message delivery. This thesis proposes a novel Mobile Data Collector based clustering routing protocol for WSNs that combines a cluster-based hierarchical architecture with a three-tier multi-hop routing strategy from cluster heads to the base station with the help of a Mobile Data Collector (MDC) for inter-cluster communication. In addition, the Mobile Data Collector based routing protocol is compared with the Low Energy Adaptive Clustering Hierarchy routing protocol and A Novel Application Specific Network Protocol for Wireless Sensor Networks. The protocol is designed with the following in mind: minimize the energy consumption of sensor nodes, resolve communication-hole issues, maintain data reliability, and finally reach a trade-off between energy efficiency and latency in terms of end-to-end and channel-access delays. Simulation results have shown that the Mobile Data Collector based clustering routing protocol for WSNs could be easily implemented in environmental applications where the energy efficiency of sensor nodes, network lifetime and data reliability are major concerns.
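To make the three-tier idea concrete, the following is a minimal, hypothetical Python sketch: ordinary nodes report to their nearest cluster head, cluster heads buffer the data, and a Mobile Data Collector visits the cluster heads on a simple nearest-neighbour tour before returning to the base station. The node layout, the first-order radio-energy constants and the tour heuristic are illustrative assumptions, not the thesis's actual protocol parameters.

```python
# Hypothetical sketch of three-tier collection in a clustered WSN with a
# Mobile Data Collector (node -> cluster head -> MDC -> base station).
# Radio constants follow the common first-order model; all values are illustrative.
import math

E_ELEC = 50e-9    # J/bit, electronics energy per bit
E_AMP = 100e-12   # J/bit/m^2, free-space amplifier energy
PACKET_BITS = 2000

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tx_energy(bits, d):
    # Energy to transmit `bits` over distance `d` (free-space radio model).
    return bits * (E_ELEC + E_AMP * d ** 2)

# Tier 1: ordinary nodes send one packet each to their nearest cluster head.
nodes = [(10, 12), (14, 80), (75, 20), (82, 85), (40, 45)]
cluster_heads = [(20, 20), (80, 80)]

node_energy = 0.0
members = {ch: 0 for ch in cluster_heads}
for n in nodes:
    ch = min(cluster_heads, key=lambda c: dist(n, c))
    node_energy += tx_energy(PACKET_BITS, dist(n, ch))
    members[ch] += 1

# Tier 2: the MDC leaves the base station and visits cluster heads on a
# nearest-neighbour tour; each cluster head only transmits its buffered
# packets over a short hop when the MDC is nearby.
base_station = (0, 0)
unvisited = list(cluster_heads)
pos, tour_length = base_station, 0.0
while unvisited:
    nxt = min(unvisited, key=lambda c: dist(pos, c))
    tour_length += dist(pos, nxt)
    unvisited.remove(nxt)
    pos = nxt
tour_length += dist(pos, base_station)  # Tier 3: the MDC returns to the base station

MDC_HOP_M = 5.0  # assumed short cluster-head-to-MDC hop distance
ch_energy = sum(tx_energy(PACKET_BITS * count, MDC_HOP_M)
                for count in members.values())

print(f"node Tx energy: {node_energy:.2e} J, cluster-head Tx energy: {ch_energy:.2e} J")
print(f"MDC tour length: {tour_length:.1f} m")
```

The intuition the sketch captures is that cluster heads avoid long-range transmissions to the base station because the MDC physically carries the data, at the cost of the collection latency implied by the tour.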
Analysis and implementation of distributed algorithms for multi-robot systems
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 159-166). Distributed algorithms for multi-robot systems rely on network communications to share information. However, the motion of the robots changes the network topology, which affects the information presented to the algorithm. For an algorithm to produce accurate output, robots need to communicate rapidly enough to keep the network topology correlated to their physical configuration. Infrequent communications will cause most multi-robot distributed algorithms to produce less accurate results, and cause some algorithms to stop working altogether. The central theme of this work is that algorithm accuracy, communications bandwidth, and physical robot speed are related. This thesis has three main contributions: First, I develop a prototypical multi-robot application and computational model, propose a set of complexity metrics to evaluate distributed algorithm performance on multi-robot systems, and introduce the idea of the robot speed ratio, a dimensionless measure of robot speed relative to message speed in networks that rely on multi-hop communication. The robot speed ratio captures key relationships between communications bandwidth, mobility, and algorithm accuracy, and can be used at design time to trade off between them. I use this speed ratio to evaluate the performance of existing distributed algorithms for multi-hop communication and navigation. Second, I present a definition of boundaries in multi-robot systems, and develop new distributed algorithms to detect and characterize them. Finally, I define the problem of dynamic task assignment, and present four distributed algorithms that solve this problem, each representing a different trade-off between accuracy, running time, and communication resources. All the algorithms presented in this work are provably correct under ideal conditions and produce verifiable real-world performance. They are self-stabilizing and robust to communications failures, population changes, and other errors. All the algorithms were tested on a swarm of 112 robots. by James Dwight McLurkin, IV. Ph.D.
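The abstract does not give the exact formula for the robot speed ratio, so the Python sketch below only illustrates the dimensionless-ratio idea under an assumed definition: physical robot speed divided by the effective speed at which a message crosses the multi-hop network (hop distance over per-hop latency). Both the definition and the numbers are assumptions for illustration, not values from the thesis.

```python
# Illustrative computation of a robot-speed-to-message-speed ratio.
# Assumed definition: robot speed divided by the effective message speed
# (hop distance / per-hop latency). A small ratio means the network topology
# can track the physical configuration; a large ratio means it lags behind.

def robot_speed_ratio(robot_speed_mps: float,
                      hop_distance_m: float,
                      per_hop_latency_s: float) -> float:
    message_speed_mps = hop_distance_m / per_hop_latency_s
    return robot_speed_mps / message_speed_mps

# Example: robots at 0.25 m/s, 1 m hops, 0.1 s per hop (all assumed values).
ratio = robot_speed_ratio(robot_speed_mps=0.25, hop_distance_m=1.0,
                          per_hop_latency_s=0.1)
print(f"robot speed ratio = {ratio:.3f}")  # 0.025: topology stays well correlated
```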
Visual Analysis Algorithms for Embedded Systems
Visual search systems are very popular applications, but their on-line versions in 3G wireless environments suffer from network constraints, such as unstable or limited bandwidth, that introduce latency in query delivery and significantly degrade the user experience. An alternative is to exploit the ability of the newest mobile devices to perform heterogeneous activities, not only capturing but also processing images. Visual feature extraction and compression can be performed on on-board Graphics Processing Units (GPUs), making smartphones capable of detecting a generic object exactly (matching) or of performing a classification task.
The latest trends in visual search have resulted in dedicated efforts in MPEG standardization, namely the MPEG CDVS (Compact Descriptor for Visual Search) standard. CDVS is an ISO/IEC standard used to extract a compressed descriptor.
As regards classification, in recent years neural networks have gained considerable importance and have been applied to several domains. This thesis focuses on the use of deep neural networks to classify images by means of deep learning.
Implementing visual search algorithms and deep-learning-based classification on embedded environments is not a mere code-porting activity. Recent embedded devices, such as development boards equipped with GPGPUs, provide powerful but limited resources. GPU architectures fit particularly well because they allow many operations to be executed in parallel, following the SIMD (Single Instruction Multiple Data) paradigm. Nonetheless, good design choices are necessary to make the best use of the available hardware and memory.
For visual search, following the MPEG CDVS standard, the contribution of this thesis is an efficient feature-computation phase and a parallel CDVS detector, completely implemented on embedded devices supporting the OpenCL framework. Algorithmic choices and implementation details that target the intrinsic characteristics of the selected embedded platforms are presented and discussed. Experimental results on several GPUs show that the GPU-based solution is up to 7× faster than the CPU-based one. This speed-up opens new visual search scenarios that exploit entirely on-board, real-time computation with no data transfer.
As regards the use of deep convolutional neural networks for off-line image classification, their computational and memory requirements are huge, which is an issue on embedded devices. Most of the complexity derives from the convolutional layers and in particular from the matrix multiplications they entail. The contribution of this thesis is a self-contained implementation of image classification providing the common layers used in neural networks. The approach relies on a heterogeneous CPU-GPU scheme that performs convolutions in the transform domain. Experimental results show that the heterogeneous scheme described in this thesis achieves a 50× speed-up over the CPU-only reference and outperforms a GPU-based reference by 2×, while cutting power consumption by nearly 30%.
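The "convolutions in the transform domain" mentioned above rest on the convolution theorem: an element-wise product in the frequency domain corresponds to a convolution in the spatial domain. The NumPy sketch below shows the idea on a single channel; it does not reproduce the thesis's heterogeneous CPU-GPU OpenCL implementation, and the array sizes are arbitrary.

```python
# Minimal single-channel illustration of convolution via the FFT
# (the convolution theorem), checked against direct spatial convolution.
import numpy as np
from scipy.signal import convolve2d  # direct convolution, used as the reference

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
kernel = rng.standard_normal((5, 5))

# Direct (spatial-domain) full convolution.
direct = convolve2d(image, kernel, mode="full")

# Transform-domain convolution: zero-pad both operands to the full output
# size, multiply their FFTs element-wise, then transform back.
out_shape = (image.shape[0] + kernel.shape[0] - 1,
             image.shape[1] + kernel.shape[1] - 1)
fft_image = np.fft.rfft2(image, out_shape)
fft_kernel = np.fft.rfft2(kernel, out_shape)
via_fft = np.fft.irfft2(fft_image * fft_kernel, out_shape)

print(np.allclose(direct, via_fft))  # True: both paths compute the same result
```

The appeal on constrained hardware is that the element-wise frequency-domain product replaces the large spatial-domain matrix multiplications, which is the cost the abstract identifies as dominant.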
Integration of Deep Learning and Extended Reality Technologies in Construction Engineering and Management: A Mixed Review Method
- …