
    Driving the Network-on-Chip Revolution to Remove the Interconnect Bottleneck in Nanoscale Multi-Processor Systems-on-Chip

    The sustained demand for faster, more powerful chips has been met by chip manufacturing processes that allow increasing numbers of computation units to be integrated onto a single die. The result, especially in the embedded domain, is often called a SYSTEM-ON-CHIP (SoC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnect. With the number of on-chip blocks presently in the tens and quickly approaching the hundreds, the question of how best to provide on-chip communication resources is keenly felt. NETWORKS-ON-CHIP (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they offer a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they rely on also help minimize wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline, and several main areas require deep investigation for NoCs to become viable solutions:
    • The NoC architecture must strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
    • Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
    • NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters; design tools are needed to prune this space and pick the best solutions (a pruning sketch follows this abstract).
    • Given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
    This dissertation performs a design space exploration of network-on-chip architectures, in order to point out the trade-offs associated with the design of each individual network building block and with the design of the network topology overall. The exploration is preceded by a comparative analysis of state-of-the-art interconnect fabrics, both against one another and against early network-on-chip prototypes. The ultimate objective is to highlight the key advantages that NoC realizations provide over state-of-the-art communication infrastructures, and the challenges that must be overcome to make this new interconnect technology a reality. Among the latter, technology-related challenges are emerging that call for dedicated design techniques at all levels of the design hierarchy, in particular leakage power dissipation and the containment of process variations and their effects. These objectives were achieved by means of a NoC simulation environment for cycle-accurate modelling and simulation, and by means of a back-end facility for studying the physical implementation effects of NoCs. Overall, all the results provided by this work have been validated on actual silicon layouts.
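
    The design-space pruning called for above can be illustrated with a small sketch: given candidate NoC configurations scored on latency, area and power, keep only the Pareto-optimal ones. This is a minimal illustration under our own assumptions, not the dissertation's actual tool; the topologies and metric values are made-up placeholders.

```python
# Hypothetical sketch of NoC design-space pruning via Pareto filtering.
# All configurations and numbers are illustrative placeholders.
from typing import NamedTuple

class NoCConfig(NamedTuple):
    topology: str
    latency_cycles: float  # average packet latency
    area_mm2: float        # silicon area
    power_mw: float        # average power

def dominates(a: NoCConfig, b: NoCConfig) -> bool:
    """True if a is no worse than b on every metric and better on at least one."""
    ma = (a.latency_cycles, a.area_mm2, a.power_mw)
    mb = (b.latency_cycles, b.area_mm2, b.power_mw)
    return all(x <= y for x, y in zip(ma, mb)) and ma != mb

def pareto_front(configs):
    """Keep only configurations not dominated by any other candidate."""
    return [c for c in configs
            if not any(dominates(other, c) for other in configs)]

candidates = [
    NoCConfig("4x4 mesh", 12.0, 1.8, 95.0),
    NoCConfig("fat tree",  9.0, 2.6, 130.0),
    NoCConfig("ring",     18.0, 1.1, 60.0),
    NoCConfig("2x8 mesh", 14.0, 1.9, 100.0),  # dominated by the 4x4 mesh
]
print(pareto_front(candidates))  # the 2x8 mesh is pruned away
```

    In a real flow, the metric values would come from cycle-accurate simulation and the physical back-end rather than being hand-written constants.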

    Dynamically reconfigurable architecture for embedded computer vision systems

    The objective of this research work is to design, develop and implement a new architecture that integrates on the same chip all the processing levels of a complete Computer Vision system, so that execution is efficient without compromising power consumption while keeping cost low. For this purpose, an analysis and classification of the mathematical operations and algorithms commonly used in Computer Vision is carried out, together with an in-depth review of the image processing capabilities of current-generation hardware devices. This makes it possible to determine the requirements and the key aspects of an efficient architecture. A representative set of algorithms is employed as a benchmark to evaluate the proposed architecture, which is implemented on an FPGA-based system-on-chip. Finally, the prototype is compared to other related approaches in order to determine its advantages and weaknesses.
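
    As a hypothetical illustration of the benchmarking idea above, the sketch below times one representative low-level vision operation (a Sobel-style 2D convolution) in software, the kind of baseline a hardware prototype could be measured against. The kernel, image size and repeat count are assumptions, not the thesis's benchmark suite.

```python
# Hypothetical micro-benchmark for a representative vision kernel.
import time
import numpy as np
from scipy.signal import convolve2d

def benchmark(op, image, repeats=10):
    """Return the mean wall-clock time of op(image) over several runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        op(image)
    return (time.perf_counter() - start) / repeats

image = np.random.rand(512, 512).astype(np.float32)   # placeholder input
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
edge_filter = lambda img: convolve2d(img, sobel_x, mode="same")
print(f"mean time: {benchmark(edge_filter, image) * 1e3:.2f} ms")
```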

    Detecting and Learning Out-of-Distribution Data in the Open world: Algorithm and Theory

    This thesis makes considerable contributions to machine learning in open-world scenarios, where systems face previously unseen data and contexts. Traditional machine learning models are usually trained and tested within a fixed, known set of classes, a condition known as the closed-world setting. While this assumption works in controlled environments, it falls short in real-world applications, where new classes or categories of data can emerge dynamically and unexpectedly. To address this, our research investigates two intertwined steps essential for open-world machine learning: Out-of-Distribution (OOD) Detection and Open-world Representation Learning (ORL). OOD detection focuses on identifying instances from unknown classes that fall outside the model's training distribution, reducing the risk of overly confident, erroneous predictions on unfamiliar inputs. Moving beyond OOD detection, ORL extends the model's capabilities to not only detect unknown instances but also learn from and incorporate knowledge about these new classes. By delving into these open-world learning problems, this thesis contributes both algorithmic solutions and theoretical foundations, paving the way for machine learning models that are not only performant but also reliable in the face of the evolving complexities of the real world. (Ph.D. thesis)
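
    To make the OOD-detection problem statement concrete, the sketch below implements one standard baseline, maximum softmax probability (MSP) scoring: an input whose top softmax probability is low is flagged as likely out-of-distribution. This is background context under our own assumptions (the threshold and logits are made-up), not the thesis's algorithms.

```python
# Minimal MSP (maximum softmax probability) OOD-scoring sketch.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Higher score = more confidence the input is in-distribution."""
    return softmax(logits).max(axis=-1)

def flag_ood(logits, threshold=0.7):
    """Flag inputs whose top softmax probability falls below the threshold."""
    return msp_score(logits) < threshold

logits = np.array([[4.0, 0.5, 0.2],    # confidently in-distribution
                   [1.1, 1.0, 0.9]])   # near-uniform -> likely OOD
print(flag_ood(logits))  # [False  True]
```

    Flagged inputs are the ones the model should abstain on rather than classify overconfidently.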

    Methodologies and Toolflows for the Predictable Design of Reliable and Low-Power NoCs

    There is today an unmistakable need to evolve design methodologies and toolflows for Network-on-Chip based embedded systems. In particular, the quest for low power is a more-than-ever urgent dilemma. Modern circuits feature billions of transistors, and neither power management techniques nor battery capacity can keep up with the ever higher integration capability of digital devices. Moreover, power concerns come together with the design issues of modern nanoscale silicon technology. On one hand, system failure rates are expected to increase exponentially at every technology node unless integrated circuit wear-out failure mechanisms are compensated for; however, error detection and/or correction mechanisms have a non-negligible impact on network power. On the other hand, to meet stringent time-to-market deadlines, the design cycle of such a distributed and heterogeneous architecture must not be prolonged by unnecessary design iterations. Overall, there is a clear need to better discriminate among reliability strategies and interconnect topology solutions upfront, by ranking designs on power metrics. In this thesis, we tackle this challenge by proposing power-aware design technologies. Finally, we consider the most aggressive and disruptive methodology for embedded systems with ultra-low power constraints: migrating the basic NoC building blocks to an asynchronous (clockless) design style. We address this challenge by delivering a standard-cell design methodology and mainstream CAD toolflows, thereby partially relaxing the requirement of using asynchronous blocks only as hard macros.
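
    The reliability-versus-power tension above can be made concrete with a textbook example: a single-error-correcting Hamming(7,4) code protects every 4 data bits with 3 extra parity bits, i.e. 75% more wires and switching activity on each protected flit. This sketch is our own illustration, not the thesis's error-control scheme.

```python
# Textbook Hamming(7,4) single-error correction, illustrating the
# redundancy (and hence power) cost of on-chip error correction.

def hamming74_encode(d):
    """d: 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword -> 4 data bits, correcting any single-bit error."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

cw = hamming74_encode([1, 0, 1, 1])
cw[5] ^= 1                           # inject a single-bit fault in transit
assert hamming74_decode(cw) == [1, 0, 1, 1]
```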

    JTIT

    Quarterly journal

    Amelioration of prenatal alcohol effects by environmental enrichment in a mouse model of FASD

    Maternal alcohol consumption during pregnancy results in a spectrum of behavioural and cognitive deficits collectively known as Fetal Alcohol Spectrum Disorders (FASD). Currently, little is known about whether and how the external environment may modulate these deficits. I have used C57BL/6 mice to study this interaction between prenatal alcohol exposure and the postnatal environment. Alcohol exposure during synaptogenesis produces high levels of anxiety-like traits and decreased memory performance. Alcohol-exposed mice (and matched unexposed controls) were placed in 'environmentally enriched' conditions of voluntary exercise, physical activity and cognitive stimulation to ascertain the effects of a positive postnatal environment. The results show that environmental enrichment ameliorates the anxiety-like behaviour and memory deficits of alcohol-exposed mice. However, this recovery is incomplete, indicative of the long-lasting, potentially permanent damage prenatal alcohol exposure inflicts on the developing brain. In follow-up studies, I have uncovered gene expression changes in the hippocampus that are associated with behavioural and cognitive amelioration. To accomplish this, I used mouse hippocampal RNA for microarray and RNA-Seq analyses. My results identify several key genes and molecular pathways associated with synaptic and structural plasticity, neurogenesis, long-term potentiation and angiogenesis. The behavioural and molecular results of this project represent a novel finding in the field of FASD research. The genes and pathways uncovered offer a possible route to understanding FASD, and they are potential targets when formulating behavioural and pharmacological rehabilitative therapies.

    A survey of the application of soft computing to investment and financial trading


    Network Features in Complex Applications

    The aim of this thesis is to show the potential of Graph Theory and Network Science applied in real-case scenarios. Indeed, there is a gap in the state of the art in combining mathematical theory with more practical applications, such as helping Law Enforcement Agencies (LEAs) conduct their investigations, or Deep Learning techniques that enable Artificial Neural Networks (ANNs) to work more efficiently. In particular, three main case studies were considered on which to evaluate the effectiveness of Social Network Analysis (SNA) tools: (i) criminal network analysis, (ii) network resilience, and (iii) ANN topology.

    We addressed two typical problems in dealing with criminal networks: (i) how to efficiently slow down information spreading within a criminal organisation through prompt and targeted investigative operations by LEAs, and (ii) what the impact of missing data is during LEA investigations. In the first case, we identified the appropriate centrality metric to effectively identify the criminals to be arrested, showing that by neutralising only 5% of the top-ranking affiliates, network connectivity dropped by 70% (a sketch of this experiment follows this abstract). In the second case, we simulated the missing-data problem by pruning some criminal networks, removing nodes or links, and compared the pruned networks against the originals using four graph-similarity metrics. We found that only a limited error (i.e., a 30% difference from the real network) arises when, for example, some wiretaps are missing. On the other hand, it is crucial to investigate suspects in a timely fashion, since excluding suspects from an investigation may lead to significant errors (i.e., an 80% difference).

    Next, we defined a new approach for simulating network resilience with a probabilistic failure model. While the classical node-removal approach always succeeds, that assumption is not realistic; we therefore defined models simulating the scenario in which nodes resist removal. Once we had identified the centrality metric that, on average, causes the greatest damage to the connectivity of the networks under scrutiny, we compared our outcomes against the classical node-removal approach, ranking nodes according to the same centrality metric, which confirmed our intuition.

    Lastly, we adopted SNA techniques to analyse ANNs. We moved a step beyond earlier works: not only did our experiments confirm the efficiency gains from training sparse ANNs, but they also further exploited sparsity through a better-tuned algorithm, featuring increased speed at a negligible accuracy loss. We focused on the role of the parameter used to fine-tune the training phase of sparse ANNs; our intuition was that this step can be avoided, since the accuracy loss is negligible and, as a consequence, the execution time is significantly reduced. Network Science algorithms that preserve sparsity in ANNs are thus a promising direction for accelerating their training. All these studies pave the way for a range of unexplored possibilities for an effective use of Network Science at the service of society.

    PhD Scholarship (Data Science Research Centre, University of Derby)
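
    The targeted-removal experiment above can be sketched as follows: rank nodes by a centrality metric, remove the top 5%, and measure the connectivity drop via the relative size of the largest connected component. The graph model, the choice of betweenness centrality, and the networkx pipeline are our own illustrative assumptions, not the thesis's exact setup.

```python
# Hedged sketch of centrality-guided network dismantling.
import networkx as nx

def connectivity(g):
    """Fraction of nodes in the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

def targeted_removal(g, fraction=0.05):
    """Remove the top `fraction` of nodes ranked by betweenness centrality."""
    g = g.copy()
    ranked = sorted(nx.betweenness_centrality(g).items(),
                    key=lambda kv: kv[1], reverse=True)
    k = max(1, int(fraction * g.number_of_nodes()))
    g.remove_nodes_from(node for node, _ in ranked[:k])
    return g

g = nx.barabasi_albert_graph(200, 2, seed=42)  # stand-in for a criminal network
before, after = connectivity(g), connectivity(targeted_removal(g))
print(f"connectivity: {before:.2f} -> {after:.2f}")
```

    On the real criminal networks studied, the abstract reports that removing 5% of top-ranked affiliates in this fashion cut connectivity by roughly 70%.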