Radio Measurements of Cosmic Rays at the South Pole
Ultra-high-energy cosmic rays, which trigger massive particle cascades (extensive air showers) in the Earth's atmosphere, can be measured at ground level with detector arrays. Among the various detectors in use, radio antennas have gained importance over the last decade, as they offer a unique way to study these air showers. The radio emission, which arises during the development of the air shower mainly through the deflection of the electrons and positrons of the particle cascade in the Earth's magnetic field, carries information about the type of particle that initiated the shower. In particular, radio antennas, along with fluorescence telescopes, can reconstruct the position of the maximum of the air-shower development (Xmax). This reconstructed parameter depends on the type of the primary cosmic-ray nucleus that initiated the air shower. Knowledge of the cosmic-ray type in turn contributes to a better understanding of the acceleration processes of astrophysical sources in our universe.
The IceCube Neutrino Observatory at the geographic South Pole is a multi-purpose detector that can detect both astrophysical neutrinos and air showers, in particular with its surface detector, IceTop. To improve IceTop as a cosmic-ray detector and to mitigate the effects of snow accumulation, a hybrid detector of elevatable scintillation panels and radio antennas is to be installed in the coming years. This sub-detector will consist of 32 stations, each comprising 8 scintillation panels and 3 antennas, and will cover an area of 1 km². The radio antennas use a higher frequency band than has been customary so far, 70 to 350 MHz instead of 30 to 80 MHz. The first complete prototype of a hybrid station was commissioned in January 2020. This thesis covers the hardware of the prototype station and the future planned stations, the commissioning of the prototype station's data, and a method for energy and Xmax reconstruction developed on the basis of measured events and Monte Carlo simulations.
In particular, a structure for elevating the antennas above the snow was designed, built, field-tested, and produced, together with a radio front-end board for the analog pre-processing of the signal received by the antennas. The calibration of the other radio signal-chain components at various temperatures achieves an amplitude uncertainty of only 3.9%, well below the required uncertainty of 10% for the radio signal chain. The functioning of the detectors was confirmed by analyzing the radio background using the developed radio data-analysis chain. A total of 121 air showers were detected, 5 of which were also detected by the other detectors. Sixteen air showers were used to develop the first energy and Xmax reconstruction method for the radio component of the detector extension.
This reconstruction method builds on the state of the art for radio detectors. An analysis of the influence of the radio background on the signal was carried out. The commonly used chi-square minimization is then replaced by a log-likelihood minimization with a parameterization of the noise, and it is shown that this technique works with the measured data. Furthermore, for the same reconstructed events, the high-frequency band with only the three antennas of the prototype station achieves significantly better accuracy than the traditional low-frequency band. Once the complete detector is finished, the expected reconstruction accuracy is estimated to be 15 g/cm² for Xmax and better than 10% for the energy.
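The noise treatment described in this abstract can be illustrated with a minimal sketch: a Gaussian log-likelihood whose width depends on the expected signal, so that, unlike a plain chi-square, the noise parameterization influences the fit. The exponential lateral-distribution model, the noise model, and all numbers below are illustrative assumptions, not the actual models used in the thesis.

```python
import numpy as np

# Hypothetical lateral-distribution model: the signal amplitude falls off
# exponentially with distance r from the shower axis.
def model_amplitude(r, a0, r0):
    return a0 * np.exp(-r / r0)

def neg_log_likelihood(params, r, measured, noise_sigma):
    """Gaussian negative log-likelihood with a parameterized noise level.

    Unlike a plain chi-square, the log(sigma) term is kept, so a
    signal-dependent noise model changes where the minimum lies.
    """
    a0, r0 = params
    mu = model_amplitude(r, a0, r0)
    sigma = noise_sigma(mu)
    return np.sum(0.5 * ((measured - mu) / sigma) ** 2 + np.log(sigma))

# Illustrative noise parameterization: a floor plus a signal-dependent part.
noise = lambda mu: 0.05 + 0.1 * mu

# Three antennas, as in the prototype station; small perturbations stand in
# for measured noise.
r = np.array([30.0, 80.0, 150.0])
measured = model_amplitude(r, 1.0, 100.0) + np.array([0.02, -0.01, 0.01])

# A coarse grid search over (a0, r0) stands in for a real minimizer.
grid_a0 = np.linspace(0.5, 1.5, 101)
grid_r0 = np.linspace(50.0, 150.0, 101)
best = min(((neg_log_likelihood((a, b), r, measured, noise), a, b)
            for a in grid_a0 for b in grid_r0))
print(f"a0={best[1]:.2f}, r0={best[2]:.1f}")  # recovers roughly (1.0, 100)
```

In a real reconstruction the grid search would be replaced by a proper minimizer and the noise parameterization would be derived from the measured radio background, as the abstract describes.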
Trademarks in an Algorithmic World
According to the sole normative foundation for trademark protection—“search costs” theory—trademarks transmit useful information to consumers, enabling an efficient marketplace. The marketplace, however, is in the midst of a fundamental change. Increasingly, retail is virtual, marketing is data-driven, and purchasing decisions are automated by AI. Predictive analytics are changing how consumers shop. Search costs theory no longer accurately describes the function of trademarks in this marketplace. Consumers now have numerous digital alternatives to trademarks that more efficiently provide them with increasingly accurate product information. Just as store shelves are disappearing from consumers’ retail experience, so are trademarks disappearing from their product search. Consumers may want to buy a product where the brand is the essential feature of the product such that the brand is the product, but they no longer need the assistance of a trademark to find the product.
By reflexively continuing to protect trademarks in the name of search costs theory, courts give only lip service to consumer interests without questioning whether trademarks are fulfilling any useful information function. In many cases, trademarks may actually misinform consumers by masking the identity of the producer or its distanced relationship with the trademark owner. Without having deliberately decided to do so, trademark law is now protecting “brands as property” without any supportive normative rationale. Removing the veil of search costs theory will enable courts to consider whether trademark protection is justified in particular cases.
Deployment of Deep Neural Networks on Dedicated Hardware Accelerators
Deep Neural Networks (DNNs) have established themselves as powerful tools for
a wide range of complex tasks, for example computer vision or natural language
processing. DNNs are notoriously demanding on compute resources and as a
result, dedicated hardware accelerators for all use cases are developed. Different
accelerators provide solutions from hyper scaling cloud environments for the
training of DNNs to inference devices in embedded systems. They implement
intrinsics for complex operations directly in hardware. A common example
are intrinsics for matrix multiplication. However, there exists a gap between
the ecosystems of applications for deep learning practitioners and hardware
accelerators. How DNNs can efficiently utilize the specialized hardware
intrinsics is still mainly determined by human hardware and software experts.
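How such a matrix-multiplication intrinsic is typically consumed can be sketched as follows, assuming a hypothetical fixed-size TILE×TILE intrinsic; the tiling scheme and sizes are a generic illustration, not any particular accelerator's API.

```python
import numpy as np

TILE = 4  # hypothetical fixed size of the hardware matmul intrinsic

def intrinsic_matmul(a, b):
    """Stand-in for a hardware intrinsic that multiplies TILE x TILE tiles."""
    assert a.shape == (TILE, TILE) and b.shape == (TILE, TILE)
    return a @ b

def tiled_matmul(a, b):
    """Computes a larger matmul purely out of intrinsic-sized tiles.

    All dimensions are assumed to be multiples of TILE (padding omitted).
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2 and m % TILE == 0 and n % TILE == 0 and k % TILE == 0
    out = np.zeros((m, n))
    for i in range(0, m, TILE):
        for j in range(0, n, TILE):
            # Accumulate partial products along the reduction dimension.
            for p in range(0, k, TILE):
                out[i:i+TILE, j:j+TILE] += intrinsic_matmul(
                    a[i:i+TILE, p:p+TILE], b[p:p+TILE, j:j+TILE])
    return out

a = np.arange(64, dtype=float).reshape(8, 8)
b = np.eye(8)
print(np.allclose(tiled_matmul(a, b), a @ b))  # → True
```

Writing such a loop nest, and the data layout that feeds it, by hand for every operator and accelerator is precisely the expert effort the abstract refers to.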
Methods to automatically utilize hardware intrinsics in DNN operators are a
subject of active research. Existing literature often works with
transformation-driven approaches, which aim to establish a sequence of
program rewrites and data-layout transformations such that the hardware
intrinsic can be used to compute the operator. However, the complexity of
this task has not yet been explored, especially for less frequently used
operators like Capsule Routing. Not only is the implementation of DNN
operators with intrinsics challenging; their optimization on the target
device is also difficult. Hardware-in-the-loop tools are often used for this
problem. They use latency measurements of implementation candidates to find
the fastest one. However, specialized accelerators can have memory and
programming limitations, so that not every arithmetically correct
implementation is a valid program for the accelerator. These invalid
implementations can lead to unnecessarily long optimization times.
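A minimal sketch of hardware-in-the-loop candidate selection with validity filtering; the scratchpad constraint and the cost model below are invented stand-ins for a real accelerator's limitations and measured latencies.

```python
# Hypothetical accelerator constraint: only tile sizes whose footprint fits
# a 256-element scratchpad are valid programs on the device.
SCRATCHPAD = 256

def is_valid(tile_m, tile_n):
    return tile_m * tile_n <= SCRATCHPAD

def measure_latency(tile_m, tile_n):
    """Stand-in for a hardware-in-the-loop latency measurement; a synthetic
    cost model replaces the real device here."""
    return 1.0 / (tile_m * tile_n) + 0.001 * (tile_m + tile_n)

candidates = [(m, n) for m in (4, 8, 16, 32) for n in (4, 8, 16, 32)]

# Curating the initial set: invalid programs are filtered out *before* any
# device time is spent measuring them.
valid = [c for c in candidates if is_valid(*c)]
best = min(valid, key=lambda c: measure_latency(*c))
print(best)  # → (8, 8)
```

With a real device, each `measure_latency` call is expensive, so discarding invalid candidates up front directly shortens the time-to-solution, which is the effect the abstract reports.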
This work investigates the complexity of transformation-driven processes to
automatically embed hardware intrinsics into DNN operators. It is explored
with a custom, graph-based intermediate representation (IR). While operators
like Fully Connected Layers can be handled with reasonable effort, increasing
operator complexity or advanced data-layout transformations can lead to scaling issues.
Building on these insights, this work proposes a novel method to embed
hardware intrinsics into DNN operators. It is based on a dataflow analysis.
The dataflow embedding method allows the exploration of how intrinsics and
operators match without explicit transformations. From the results it can derive
the data layout and program structure necessary to compute the operator with
the intrinsic. A prototype implementation for a dedicated hardware accelerator
demonstrates state-of-the-art performance for a wide range of convolutions,
while being agnostic to the data layout. For some operators in the benchmark,
the presented method can also generate alternative implementation strategies
to improve hardware utilization, resulting in a geo-mean speed-up of ×2.813
while reducing the memory footprint. Lastly, by curating the initial set of
possible implementations for the hardware-in-the-loop optimization, the median
time-to-solution is reduced by a factor of ×2.40. At the same time, the
possibility of prolonged searches due to a bad initial set of implementations
is reduced, improving the optimization’s robustness by ×2.35.
25th Annual Computational Neuroscience Meeting: CNS-2016
Abstracts of the 25th Annual Computational Neuroscience Meeting: CNS-2016,
Seogwipo City, Jeju-do, South Korea, 2–7 July 2016.
Examining university student satisfaction and barriers to taking online remote exams
Recent years have seen a surge in the popularity of online exams at universities, due to the greater convenience and flexibility they offer both students and institutions. Driven by the dearth of empirical data on distance-learning students' satisfaction levels and the difficulties they face when taking online exams, a survey of 562 students at The Open University (UK) was conducted to gain insights into their experiences with this type of exam. Satisfaction was reported with the environment and the exams themselves, while work commitments and technical difficulties presented the greatest barriers. Gender, race, and disability were also associated with different levels of satisfaction and barriers. This study adds to the growing body of research into online exams, demonstrating how this type of exam can still have a substantial effect even on students experienced with online learning systems and technologies.
Transforming our World through Universal Design for Human Development
An environment, or any building, product, or service in it, should ideally be designed to meet the needs of all those who wish to use it. Universal Design is the design and composition of environments, products, and services so that they can be accessed, understood, and used to the greatest extent possible by all people, regardless of their age, size, ability, or disability. It creates products, services, and environments that meet people’s needs. In short, Universal Design is good design.
This book presents the proceedings of UD2022, the 6th International Conference on Universal Design, held from 7–9 September 2022 in Brescia, Italy. The conference is targeted at professionals and academics interested in the theme of universal design as related to the built environment and the wellbeing of users, but also covers mobility and urban environments, knowledge and information transfer, bringing together research knowledge and best practice from all over the world. The book contains 72 papers from 13 countries, grouped into 8 sections and covering topics including the design of inclusive natural environments and urban spaces, communities, neighborhoods and cities; housing; healthcare; mobility and transport systems; and universally-designed learning environments, work places, cultural and recreational spaces. One section is devoted to universal design and cultural heritage, which had a particular focus at this edition of the conference.
The book reflects the professional and disciplinary diversity represented in the UD movement, and will be of interest to all those whose work involves inclusive design.
Understanding Quantum Technologies 2022
Understanding Quantum Technologies 2022 is a creative-commons ebook that
provides a unique 360-degree overview of quantum technologies, from science and
technology to geopolitical and societal issues. It covers quantum physics
history, quantum physics 101, gate-based quantum computing, quantum computing
engineering (including quantum error corrections and quantum computing
energetics), quantum computing hardware (all qubit types, including quantum
annealing and quantum simulation paradigms, history, science, research,
implementation and vendors), quantum enabling technologies (cryogenics, control
electronics, photonics, components fabs, raw materials), quantum computing
algorithms, software development tools and use cases, unconventional computing
(potential alternatives to quantum and classical computing), quantum
telecommunications and cryptography, quantum sensing, quantum technologies
around the world, quantum technologies societal impact and even quantum fake
sciences. The main audience comprises computer science engineers, developers and IT
specialists as well as quantum scientists and students who want to acquire a
global view of how quantum technologies work, and particularly quantum
computing. This version is an extensive update to the 2021 edition published in
October 2021. Comment: 1132 pages, 920 figures, Letter format.
Quantification and segmentation of breast cancer diagnosis: efficient hardware accelerator approach
The eccentric area of a mammography image is a measurement of the breast
density percentage. The technical challenge of quantification in radiology
leads to misinterpretation in screening. Feedback from society, institutions,
and industry shows that quantification and segmentation frameworks have
rapidly become the primary methodologies for structuring and interpreting
digital mammogram images. Segmentation clustering algorithms have setbacks
with overlapping clusters, proportion, and multidimensional scaling to map
and leverage the data. In combination, mammogram quantification creates a
long-standing focus area. The proposed algorithm must reduce complexity,
target iteratively distributed data points, and merge the cluster-centroid
boosting into a single updating process to evade the large storage
requirement. The mammogram database's initial test segment is critical for
evaluating performance and determining the Area Under the Curve (AUC) to
align with medical policy. In addition, a new image clustering algorithm
anticipates the need for large-scale serial and parallel processing. There is
no solution on the market, and it is necessary to implement communication
protocols between devices. Exploiting and targeting hardware task utilization
will further extend the prospect of improvement in the cluster. Benchmarking
their resources and performance is required. Finally, the medical imperatives
cluster was objectively validated using qualitative and quantitative
inspection. The proposed method should overcome the technical challenges that
radiologists face.
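The idea of merging the centroid update into a single pass with running sums, avoiding storage of full per-cluster point lists, can be sketched with a minimal k-means; the data, cluster count, and iteration budget below are illustrative assumptions, not the abstract's actual algorithm.

```python
def kmeans_single_update(points, centroids, iters=10):
    """Minimal k-means where assignment and centroid update are merged into
    a single pass using running sums, so no per-cluster point lists are
    stored — only one (sum, count) accumulator per cluster."""
    for _ in range(iters):
        sums = [[0.0, 0.0] for _ in centroids]
        counts = [0] * len(centroids)
        for x, y in points:
            # Assign to the nearest centroid and accumulate immediately.
            ci = min(range(len(centroids)),
                     key=lambda i: (x - centroids[i][0]) ** 2
                                 + (y - centroids[i][1]) ** 2)
            sums[ci][0] += x
            sums[ci][1] += y
            counts[ci] += 1
        centroids = [(sums[i][0] / counts[i], sums[i][1] / counts[i])
                     if counts[i] else centroids[i]
                     for i in range(len(centroids))]
    return centroids

# Two obvious blobs; the centroids converge near (0, 0) and (10, 10).
pts = [(0.1, 0.2), (-0.2, 0.1), (0.0, -0.1),
       (10.1, 9.9), (9.8, 10.2), (10.0, 10.1)]
print(kmeans_single_update(pts, [(1.0, 1.0), (8.0, 8.0)]))
```

The accumulator-based update is what keeps the storage requirement flat as the image data grows, which is the property the abstract argues for.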
Introduction to Development Engineering
This open access textbook introduces the emerging field of Development Engineering and its constituent theories, methods, and applications. It is both a teaching text for students and a resource for researchers and practitioners engaged in the design and scaling of technologies for low-resource communities. The scope is broad, ranging from the development of mobile applications for low-literacy users to hardware and software solutions for providing electricity and water in remote settings. It is also highly interdisciplinary, drawing on methods and theory from the social sciences as well as engineering and the natural sciences. The opening section reviews the history of “technology-for-development” research, and presents a framework that formalizes this body of work and begins its transformation into an academic discipline. It identifies common challenges in development and explains the book’s iterative approach of “innovation, implementation, evaluation, adaptation.” Each of the next six thematic sections focuses on a different sector: energy and environment; market performance; education and labor; water, sanitation and health; digital governance; and connectivity. These thematic sections contain case studies from landmark research that directly integrates engineering innovation with technically rigorous methods from the social sciences. Each case study describes the design, evaluation, and/or scaling of a technology in the field and follows a single form, with common elements and discussion questions, to create continuity and pedagogical consistency. Together, they highlight successful solutions to development challenges, while also analyzing the rarely discussed failures. The book concludes by reiterating the core principles of development engineering illustrated in the case studies, highlighting common challenges that engineers and scientists will face in designing technology interventions that sustainably accelerate economic development. 
Development Engineering provides, for the first time, a coherent intellectual framework for attacking the challenges of poverty and global climate change through the design of better technologies. It offers the rigorous discipline needed to channel the energy of a new generation of scientists and engineers toward advancing social justice and improved living conditions in low-resource communities around the world.