
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    CROSSTALK-RESILIENT CODING FOR HIGH DENSITY DIGITAL RECORDING

    Increasing the track density in magnetic recording systems is very difficult due to inter-track interference (ITI) caused by the magnetic field of adjacent tracks. This work presents a two-track partial-response class 4 magnetic channel with linear and symmetrical ITI, and explores modulation codes, signal processing methods, and error correction codes to mitigate the effects of ITI. Recording codes were investigated, and a new class of two-dimensional run-length limited recording codes is described. The new class of codes controls the type of ITI and has been found to be about 10% more resilient to ITI than conventional run-length limited codes. A new adaptive trellis is also described that adaptively compensates for the effect of ITI; it has been found to give gains of up to 5 dB in signal-to-noise ratio (SNR) at 40% ITI. The new class of codes was also found to be about 10% more resilient to ITI than conventional recording codes when decoded with the new trellis. Error correction coding methods were applied, and the use of Low-Density Parity-Check (LDPC) codes was investigated. It was found that at high SNR, conventional codes could perform as well as the new modulation codes in a combined modulation and error correction coding scheme. Results suggest that high-rate LDPC codes can mitigate the effect of ITI; however, the decoders have convergence problems beyond 30% ITI.
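The run-length constraint underlying such recording codes is easy to state in code. As a minimal sketch (the abstract does not specify the new two-dimensional class, so this shows only the conventional one-dimensional (d, k) constraint that the work compares against):

```python
def is_rll_valid(bits, d, k):
    """Check the conventional (d, k) run-length-limited constraint:
    consecutive 1s must be separated by at least d zeros, and no run
    of zeros may exceed k (including runs before the first 1 and
    after the last 1)."""
    run = 0          # length of the current run of zeros
    seen_one = False
    for b in bits:
        if b == 1:
            if seen_one and run < d:
                return False   # 1s too close together
            seen_one = True
            run = 0
        else:
            run += 1
            if run > k:
                return False   # zero run too long
    return True
```

A two-dimensional code of the kind described would additionally constrain bit patterns across adjacent tracks to control the type of ITI; that cross-track constraint is not reproduced here.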

    Securing Fisheye State Routing Algorithm Against Data Packet Dropping by Malicious Nodes in MANET.

    Mobile Ad Hoc Networks (MANETs) are an emerging area of research in the communication network world. As a MANET is infrastructure-less, its network topology is dynamic and arbitrary, so a new set of networking strategies must be implemented to provide efficient end-to-end communication. These networks have immense application in various fields such as disaster management, sensor networks, and the battlefield. Many routing protocols have been proposed for MANETs, among which the Fisheye State Routing (FSR) protocol scales well in large networks. Security in a MANET is very difficult to incorporate without degrading the performance of the protocol. A performance comparison of different routing protocols is given here, and this research narrows down to the security-related issues associated with FSR. Attacks on a MANET can be broadly divided into two types: active attacks and passive attacks. The proposed scheme deals with minimizing passive attacks that cause dropping of data packets by selfish or malicious nodes. The idea is based on modifying the traditional Dijkstra's algorithm, which computes the shortest routes from a source to all destinations. The original FSR algorithm takes the link cost between two nodes as 1 if one node is within the radio range of the other. In our proposed scheme, the weight is assigned depending on the number of times the next node has behaved maliciously or selfishly. We propose a scheme that uses a two-hop timestamp method to detect malicious nodes, and Dijkstra's shortest-path algorithm is modified to recompute the optimal paths to each destination, thereby minimizing data packet dropping by malicious nodes in the network.
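The modified shortest-path computation can be sketched as follows. This is an illustrative reading of the weighting idea only: the node names and the misbehaviour-count interface are hypothetical, and the two-hop timestamp detection that produces those counts is not modelled here.

```python
import heapq

def trust_weighted_paths(adj, misbehaviour, source):
    """Dijkstra over a MANET graph where each link costs
    1 + (observed misbehaviour count of the next-hop node),
    instead of FSR's uniform link cost of 1.
    adj maps node -> iterable of neighbours;
    misbehaviour maps node -> count (0 if absent)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj.get(u, ()):
            nd = d + 1 + misbehaviour.get(v, 0)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With this weighting, routes naturally steer around nodes that have repeatedly dropped packets: a neighbour with 5 recorded misbehaviours costs as much as a 6-hop detour through well-behaved nodes.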

    Doctor of Philosophy

    Communication surpasses computation as the power and performance bottleneck in forthcoming exascale processors. Scaling has made transistors cheap, but on-chip wires have grown more expensive in terms of both latency and energy. Therefore, the need for low-energy, high-performance interconnects is highly pronounced, especially for long-distance communication. In this work, we examine two aspects of the global signaling problem. The first part of the thesis focuses on a high-bandwidth asynchronous signaling protocol for long-distance communication. Asynchrony among intellectual property (IP) cores on a chip has become necessary in a System-on-Chip (SoC) environment. Traditional asynchronous handshaking protocols suffer a loss of throughput due to the added latency of sending the acknowledge signal back to the sender. We demonstrate a method that supports end-to-end communication across links with arbitrarily large latency, without limiting the bandwidth, so long as line variation can be reliably controlled. We also evaluate the energy and latency improvements that result from the design choices made available by this protocol. The use of transmission lines as a physical interconnect medium shows promise for deep submicron technologies. In our evaluations, we observe a lower energy footprint, as well as vastly reduced wire latency, for transmission line interconnects. We approach this problem from two sides. Using field solvers, we investigate the physical design choices to determine the optimal way to implement these lines for a given back-end-of-line (BEOL) stack. We also approach the problem from a system designer's viewpoint, looking at ways to optimize the lines for different performance targets. This work analyzes the advantages and pitfalls of implementing asynchronous channel protocols for communication over long distances. Finally, the innovations resulting from this work are applied to a network-on-chip design example and the resulting power-performance benefits are reported.

    Democratizing machine learning

    Machine learning artifacts are increasingly embedded in society, often in the form of automated decision-making processes. One major reason for this, along with methodological improvements, is the increasing accessibility of data, but also of machine learning toolkits that open up machine learning methodology to non-experts. The core focus of this thesis is exactly this: democratizing access to machine learning in order to enable a wider audience to benefit from its potential. The contributions in this manuscript stem from several different areas within this broader field. A major section is dedicated to automated machine learning (AutoML), with the goal of abstracting away the tedious task of obtaining an optimal predictive model for a given dataset. This process mostly consists of finding said optimal model, often through hyperparameter optimization, while the user in turn only selects the appropriate performance metric(s) and validates the resulting models.
This process can be improved or sped up by learning from previous experiments. Three such methods are presented in this thesis: one aims to obtain a fixed set of candidate hyperparameter configurations that likely contains good solutions for any new dataset, and two use dataset characteristics to propose new configurations. The thesis furthermore presents a collection of the required experiment metadata and shows how such metadata can be used for the development of, and as a test bed for, new hyperparameter optimization methods. The pervasion of ML-derived models in many aspects of society simultaneously calls for increased scrutiny of how such models shape society and of any biases they exhibit. Therefore, this thesis presents an AutoML tool that allows incorporating fairness considerations into the search for an optimal model. This requirement for fairness simultaneously poses the question of whether we can reliably estimate a model's fairness, which is studied in a further contribution in this thesis. Since access to machine learning methods also heavily depends on access to software and toolboxes, several contributions in the form of software are part of this thesis. The mlr3pipelines R package allows for embedding models in so-called machine learning pipelines that include the pre- and postprocessing steps often required in machine learning and AutoML. The mlr3fairness R package, on the other hand, enables users to audit models for potential biases as well as reduce those biases through different debiasing techniques. One such technique, multi-calibration, is published as a separate software package, mcboost.
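The portfolio idea, learning a fixed set of configurations once and then simply evaluating all of them on any new dataset, can be sketched generically. The `evaluate` interface and configuration format below are illustrative assumptions, not the thesis's actual API:

```python
def best_from_portfolio(portfolio, evaluate):
    """Evaluate every configuration in a fixed, pre-learned portfolio
    on a new dataset and return (loss, config) for the best one.
    `portfolio` is a list of hyperparameter dicts; `evaluate` maps a
    configuration to a loss to minimise (e.g. cross-validated error)."""
    scored = [(evaluate(cfg), cfg) for cfg in portfolio]
    scored.sort(key=lambda t: t[0])  # sort by loss only
    return scored[0]
```

The appeal of this approach is that the expensive part (choosing which configurations belong in the portfolio) happens once, offline, over many previous experiments; applying it to a new dataset costs only one model fit per portfolio entry.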

    Design of Energy-Efficient A/D Converters with Partial Embedded Equalization for High-Speed Wireline Receiver Applications

    As the data rates of wireline communication links increase, channel impairments such as skin effect, dielectric loss, fiber dispersion, reflections, and crosstalk become more pronounced. This warrants more interest in analog-to-digital converter (ADC)-based serial link receivers, as they allow for more complex and flexible back-end digital signal processing (DSP) relative to binary or mixed-signal receivers. Utilizing this back-end DSP allows for complex digital equalization and more bandwidth-efficient modulation schemes, while also displaying reduced process/voltage/temperature (PVT) sensitivity. Furthermore, these architectures offer straightforward design translation and can directly leverage the area and power scaling offered by new CMOS technology nodes. However, the power consumption of the ADC front-end and the subsequent digital signal processing is a major issue. Embedding partial equalization inside the front-end ADC can potentially lower the complexity of the back-end DSP and/or decrease the ADC resolution requirement, resulting in a more energy-efficient receiver. This dissertation presents efficient implementations of multi-GS/s time-interleaved ADCs with partial embedded equalization. The first prototype details a 6b 1.6 GS/s ADC with a novel embedded redundant-cycle 1-tap DFE structure in 90 nm CMOS. The other two prototypes describe more complex 6b 10 GS/s ADCs with efficiently embedded feed-forward equalization (FFE) and decision feedback equalization (DFE) in 65 nm CMOS. Leveraging a time-interleaved successive approximation ADC architecture, new structures for embedded DFE and FFE are proposed with low power/area overhead. Measurement results over FR4 channels verify the effectiveness of the proposed embedded equalization schemes. A comparison of the fabricated prototypes against state-of-the-art general-purpose ADCs in a similar speed/resolution range shows comparable performance, while the proposed architectures additionally include embedded equalization.

    Equalization Architectures for High Speed ADC-Based Serial I/O Receivers

    The growth in worldwide network traffic due to the rise of cloud computing and wireless video consumption has required servers and routers to support increased serial I/O data rates over legacy channels with significant frequency-dependent attenuation. For these high-loss channel applications, ADC-based high-speed links are being considered due to their ability to enable powerful digital signal processing (DSP) algorithms for equalization and symbol detection. Relative to mixed-signal equalizers, digital implementations offer robustness to process, voltage, and temperature (PVT) variations, are easier to reconfigure, and can leverage CMOS technology scaling in a straightforward manner. Despite these advantages, ADC-based receivers are generally more complex and have higher power consumption than mixed-signal receivers. The ensuing digital equalization can also consume a significant amount of power, comparable to the ADC contribution. Novel techniques to reduce complexity and improve power efficiency, both for the ADC and for the subsequent digital equalization, are necessary. This dissertation presents efficient modeling and implementation approaches for ADC-based serial I/O receivers. A statistical modeling framework is developed that is able to capture ADC-related errors, including quantization noise, INL/DNL errors, and time-interleaving mismatch errors. A novel 10 GS/s hybrid ADC-based receiver, which combines both embedded and digital equalization, is then presented. Leveraging a time-interleaved asynchronous successive approximation ADC architecture, a new structure for a 3-tap embedded FFE inside the ADC with low power/area overhead is used. In addition, a dynamically-enabled digital 4-tap FFE + 3-tap DFE equalizer architecture is introduced, which uses reliable symbol detection to achieve remarkable savings in digital equalization power. Measurement results over several FR4 channels verify the accuracy of the modeling approach and the effectiveness of the proposed receiver. A comparison of the fabricated prototype against state-of-the-art ADC-based receivers shows the ability of the proposed architecture to compensate for the highest-loss channel while achieving the best power efficiency among the compared works.
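The FFE + DFE combination that both of these ADC-based receiver theses rely on can be illustrated with a toy symbol-rate model. The tap values and the 2-level (NRZ) slicer below are illustrative assumptions; a real receiver adapts its taps (e.g. via LMS) and operates on time-interleaved ADC samples rather than clean floats.

```python
def ffe_dfe_equalize(samples, ffe_taps, dfe_taps, threshold=0.0):
    """Toy NRZ receiver: a feed-forward FIR filter (FFE) over the
    received samples, followed by decision feedback (DFE) that
    subtracts post-cursor ISI of previously decided symbols."""
    decisions = []
    fb = [0.0] * len(dfe_taps)  # past decisions, most recent first
    n = len(ffe_taps)
    for i in range(len(samples)):
        # FFE: FIR over the current and previous samples
        y = sum(ffe_taps[j] * (samples[i - j] if i - j >= 0 else 0.0)
                for j in range(n))
        # DFE: cancel ISI contributed by past decisions
        y -= sum(c * d for c, d in zip(dfe_taps, fb))
        d = 1.0 if y > threshold else -1.0  # NRZ slicer
        decisions.append(d)
        fb = [d] + fb[:-1]
    return decisions
```

For a channel with a single post-cursor tap (h = [1, 0.4]), a 1-tap DFE with coefficient 0.4 exactly cancels the ISI as long as past decisions are correct, which is why DFE handles high-loss channels without amplifying noise the way a linear equalizer would.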

    Técnicas alternativas para amplificação de Raman em telecomunicações (Alternative techniques for Raman amplification in telecommunications)

    Doctoral programme in Physics. The present work is based on Raman fiber amplifiers and their applications in modern fiber communication systems. Specific topics were addressed, namely the spatial simulation of Raman fiber amplifiers; gain enlargement and equalization; the use of hybrid amplification approaches through the association of Raman amplifiers with Erbium-doped fiber amplifiers (EDFAs); and transient effects on optical amplifier gain. The work is based on theoretical models, with the obtained results validated experimentally.
    Among the main contributions, we highlight: (i) the development of an efficient simulator for Raman fiber amplifiers that supports backward and bidirectional pumping architectures in a wavelength-division multiplexing (WDM) context; (ii) the implementation of an algorithm, combining a genetic algorithm with the Nelder-Mead method, that allocates pump signals to enlarge and equalize the gain; (iii) the assessment of hybrid amplification solutions using Raman amplifiers and EDFAs in the context of passive optical networks, namely WDM/TDM-PON with extension to the C+L spectral bands; and (iv) the assessment and characterization of transient effects in optical amplifiers under bursty/packet traffic, and the development of mitigation solutions based on optical clamping techniques.
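The pump-allocation problem in (ii) is a global optimization over pump powers with gain flatness as the objective. A minimal stdlib-only sketch of the same two-phase idea, where random sampling stands in for the genetic algorithm and a shrinking local perturbation stands in for the Nelder-Mead refinement (the `gain_model` function mapping pump powers to per-channel gains is a hypothetical interface):

```python
import random

def flatten_gain(gain_model, bounds, iters=200, seed=0):
    """Minimise gain ripple (max - min gain across WDM channels)
    over pump settings.  Phase 1: random global sampling inside
    `bounds` (stand-in for the genetic algorithm).  Phase 2:
    shrinking random perturbations around the incumbent (stand-in
    for Nelder-Mead local refinement)."""
    rng = random.Random(seed)

    def ripple(p):
        g = gain_model(p)
        return max(g) - min(g)

    # Phase 1: global exploration
    best = min((tuple(rng.uniform(lo, hi) for lo, hi in bounds)
                for _ in range(iters)), key=ripple)
    # Phase 2: local refinement with a shrinking step size
    step = [0.1 * (hi - lo) for lo, hi in bounds]
    for _ in range(iters):
        cand = tuple(min(max(b + rng.uniform(-s, s), lo), hi)
                     for b, s, (lo, hi) in zip(best, step, bounds))
        if ripple(cand) < ripple(best):
            best = cand
        else:
            step = [0.9 * s for s in step]
    return best
```

The division of labour mirrors the hybrid in the thesis: the global phase avoids getting trapped near a single local optimum of the highly non-convex Raman gain profile, while the local phase polishes the best candidate cheaply.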

    Final report on Assessment of the candidate Projects of Energy Community Interest (PECI) and Projects for Mutual Interest (PMI)

    Following its advisory work on the selection of the first PECI list in 2013, the consortium of REKK and DNV GL again supported the selection of the second PECI list in 2016. The consortium developed a project assessment methodology and then evaluated the submitted infrastructure projects on the basis of the approved methodology. The assessment consisted of a preliminary screening followed by a modelling and indicator-calculation phase. In the preliminary screening we verified that the submitted projects met the general and specific criteria of the version of EU Regulation 347/2013 adopted by the Energy Community, and we validated the submitted project data. The 31 projects that met the preliminary criteria (12 electricity network, 17 gas network, and 1 oil infrastructure) were then assessed. For the electricity and gas network infrastructure projects, the methodology comprised two steps. In the first step, we carried out a modelling-based cost-benefit analysis of the projects and computed their societal net present value; this indicator carried the largest weight in the analysis (60%). In the second step, further indicators were defined for the non-monetisable benefits (security of supply, project maturity, contribution to market competition, etc.); each indicator was scored on a 1-5 scale, and multiplying the scores by the weights yielded each project's total score. Based on the resulting project ranking and detailed sensitivity analyses, the group of member-state representatives made its decision on the preliminary PECI/PMI list.
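The two-step weighted scoring described above can be sketched as follows. The indicator names and scores are illustrative; the report specifies only that the societal NPV indicator carried a 60% weight and that the remaining indicators were scored on a 1-5 scale.

```python
def score_projects(projects, weights):
    """Weighted multi-criteria ranking: each project is a dict of
    indicator -> score (e.g. NPV and 1-5 scaled indicators, already
    normalised to a common scale); `weights` maps indicator -> weight
    and must sum to 1.  Returns (project, total) pairs, best first."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    totals = {name: sum(weights[k] * scores[k] for k in weights)
              for name, scores in projects.items()}
    return sorted(totals.items(), key=lambda t: t[1], reverse=True)
```

Because a single indicator carries 60% of the weight, the ranking is dominated by the cost-benefit result, with the qualitative indicators acting as tie-breakers; this is also why the sensitivity analyses mentioned in the report matter for the final list.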