4,915 research outputs found
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, that radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed to tune the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium
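The scanning mechanism can be sketched numerically: for a leaky-wave antenna fed by a dielectric-filled guide, the main beam sits near sin θ = β/k₀, and tuning the LC permittivity moves β. A minimal illustration with a hypothetical 28 GHz design, guide width, and LC permittivity range (illustrative values, not those of the antenna in this work):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def beam_angle_deg(eps_r: float, f: float, a: float) -> float:
    """Main-beam angle from broadside of an ideal leaky-wave antenna fed
    by a dielectric-filled rectangular guide of broad-wall width `a`
    (TE10 mode): beta = sqrt(eps_r*k0^2 - (pi/a)^2), sin(theta) = beta/k0."""
    k0 = 2.0 * math.pi * f / C0
    beta_sq = eps_r * k0**2 - (math.pi / a) ** 2
    if not 0.0 < beta_sq < k0**2:
        raise ValueError("mode is below cutoff or not a fast wave")
    return math.degrees(math.asin(math.sqrt(beta_sq) / k0))

# At a fixed frequency, biasing the LC changes eps_r and steers the beam:
for eps_r in (2.5, 2.9, 3.3):  # hypothetical LC tuning range
    theta = beam_angle_deg(eps_r, f=28e9, a=3.4e-3)
    print(f"eps_r = {eps_r}: beam at {theta:.1f} deg from broadside")
```

A larger LC permittivity raises β, pushing the beam away from broadside, which is why DC biasing alone scans the beam at a fixed frequency.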
Multimodal spatio-temporal deep learning framework for 3D object detection in instrumented vehicles
This thesis presents the utilization of multiple modalities, such as image and lidar, to incorporate spatio-temporal information from sequence data into deep learning architectures for 3D object detection in instrumented vehicles. The race to autonomy in instrumented vehicles or self-driving cars has stimulated significant research in developing advanced driver-assistance system (ADAS) technologies related explicitly to perception systems. Object detection plays a crucial role in perception systems by providing spatial information to its subsequent modules; hence, accurate detection is a significant task supporting autonomous driving. The advent of deep learning in computer vision applications and the availability of multiple sensing modalities such as 360° imaging, lidar, and radar have led to state-of-the-art 2D and 3D object detection architectures. Most current state-of-the-art 3D object detection frameworks consider a single-frame reference. However, these methods do not utilize temporal information associated with the objects or scenes from the sequence data. Thus, the present research hypothesizes that multimodal temporal information can contribute to bridging the gap between 2D and 3D metric space by improving the accuracy of deep learning frameworks for 3D object estimation. First, the thesis presents an analysis of multimodal data representations and the selection of hyper-parameters using public datasets such as KITTI and nuScenes, with Frustum-ConvNet as a baseline architecture. Secondly, an attention mechanism was employed along with a convolutional LSTM to extract spatio-temporal information from sequence data, improving 3D estimations and helping the architecture focus on salient lidar point cloud features. Finally, various fusion strategies are applied to fuse the modalities and temporal information into the architecture to assess their effect on performance and computational complexity.
Overall, this thesis has established the importance and utility of multimodal systems for refined 3D object detection and proposed a complex pipeline incorporating spatial, temporal, and attention mechanisms to improve specific- and general-class accuracy, demonstrated on key autonomous driving datasets
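The temporal-attention idea summarized above can be illustrated in miniature: weight the frames of a sequence by their relevance to a query (e.g. the current frame) before fusing them. This is a generic scaled dot-product attention sketch, not the thesis's actual Frustum-ConvNet/ConvLSTM pipeline; shapes and features are invented:

```python
import numpy as np

def temporal_attention_fusion(seq_feats: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Fuse per-frame features (T, D) into one vector with scaled
    dot-product attention against a query vector (D,), so frames that
    resemble the query (e.g. the current frame) get higher weight."""
    T, D = seq_feats.shape
    scores = seq_feats @ query / np.sqrt(D)   # (T,) similarity scores
    w = np.exp(scores - scores.max())
    w /= w.sum()                              # softmax attention weights
    return w @ seq_feats                      # weighted sum, shape (D,)

rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 16))  # 5 frames of invented feature vectors
fused = temporal_attention_fusion(frames, query=frames[-1])
print(fused.shape)  # (16,)
```

In a real detector the per-frame vectors would come from lidar/image backbones, and the fused vector would feed the 3D box regression head.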
Artificial Intelligence, Robots, and Philosophy
This book is a collection of all the papers published in the special issue “Artificial Intelligence, Robots, and Philosophy,” Journal of Philosophy of Life, Vol.13, No.1, 2023, pp.1-146. The authors discuss a variety of topics such as science fiction and space ethics, the philosophy of artificial intelligence, the ethics of autonomous agents, and virtuous robots. Through their discussions, readers are able to think deeply about the essence of modern technology and the future of humanity. All papers were invited and completed in spring 2020, though because of the Covid-19 pandemic and other problems, the publication was delayed until this year. I apologize to the authors and potential readers for the delay. I hope that readers will enjoy these arguments on digital technology and its relationship with philosophy. ***
Contents***
Introduction
: Descartes and Artificial Intelligence;
Masahiro Morioka***
Isaac Asimov and the Current State of Space Science Fiction
: In the Light of Space Ethics;
Shin-ichiro Inaba***
Artificial Intelligence and Contemporary Philosophy
: Heidegger, Jonas, and Slime Mold;
Masahiro Morioka***
Implications of Automating Science
: The Possibility of Artificial Creativity and the Future of Science;
Makoto Kureha***
Why Autonomous Agents Should Not Be Built for War;
István Zoltán Zárdai***
Wheat and Pepper
: Interactions Between Technology and Humans;
Minao Kukita***
Clockwork Courage
: A Defense of Virtuous Robots;
Shimpei Okamoto***
Reconstructing Agency from Choice;
Yuko Murakami***
Gushing Prose
: Will Machines Ever be Able to Translate as Badly as Humans?;
Rossa Ó Muireartaigh***
Resource Management in Mobile Edge Computing for Compute-intensive Application
With current and future mobile applications (e.g., healthcare, connected vehicles, and smart grids) becoming increasingly compute-intensive for many mission-critical use cases, the energy and computing capacities of embedded mobile devices are proving insufficient to handle all in-device computation. To address the energy and computing shortages of mobile devices, mobile edge computing (MEC) has emerged as a major distributed computing paradigm. Compared to traditional cloud-based computing, MEC integrates network control, distributed computing, and storage to deliver customizable, fast, reliable, and secure edge services that are closer to the user and data sites. However, the diversity of applications and the variety of user-specified requirements (viz., latency, scalability, availability, and reliability) add complications to the system and application optimization problems in terms of resource management. In this dissertation, we aim to develop the customized and intelligent placement and provisioning strategies needed to handle edge resource management problems in several challenging use cases. i) First, we propose an energy-efficient framework to address the resource allocation problem of generic compute-intensive applications, such as Directed Acyclic Graph (DAG) based applications. We design partial task offloading and server selection strategies with the purpose of minimizing the transmission cost. Our experiment and simulation results indicate that partial task offloading provides considerable energy savings, especially for resource-constrained edge systems. ii) Secondly, to address the dynamism of edge environments, we propose solutions that integrate Dynamic Spectrum Access (DSA) and Cooperative Spectrum Sensing (CSS) with fine-grained task offloading schemes. We show the high efficiency of the proposed strategy in capturing dynamic channel states and enforcing intelligent channel sensing and task offloading decisions.
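The partial-offloading trade-off in item i) reduces, in its simplest form, to choosing what fraction of a task to ship to the edge. The energy model and all constants below are hypothetical simplifications (a single device, a linearly divisible task, overlapping compute and transfer, negligible edge execution time), not the framework proposed in the thesis:

```python
def best_offload_fraction(cycles, data_bits, f_local, kappa, rate, p_tx, deadline):
    """Grid-search the offloaded fraction x in [0, 1] minimizing device energy
        E(x) = kappa * f_local**2 * (1 - x) * cycles   # local CPU energy
             + p_tx * (x * data_bits / rate)           # uplink radio energy
    subject to a deadline; local compute and uplink transfer are assumed
    to run in parallel, and edge execution time is ignored."""
    best = None
    for i in range(101):
        x = i / 100
        t_local = (1 - x) * cycles / f_local
        t_tx = x * data_bits / rate
        if max(t_local, t_tx) > deadline:
            continue
        energy = kappa * f_local**2 * (1 - x) * cycles + p_tx * t_tx
        if best is None or energy < best[1]:
            best = (x, energy)
    return best

# Invented device/channel parameters: 1 GHz CPU, 1e-27 switched-capacitance
# coefficient, 8 Mb of task data over a 10 Mb/s uplink at 0.5 W transmit power.
x_opt, e_min = best_offload_fraction(
    cycles=1e9, data_bits=8e6, f_local=1e9, kappa=1e-27,
    rate=1e7, p_tx=0.5, deadline=1.0)
print(f"offload {x_opt:.0%} of the task, device energy {e_min:.2f} J")
```

With these numbers transmission is cheaper than computing locally, so full offload wins; shrink the uplink rate or raise the transmit power and the optimum shifts toward partial or fully local execution.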
iii) Finally, application-specific long-term optimization frameworks are proposed for two representative applications: a) multi-view 3D reconstruction and b) Deep Neural Network (DNN) inference. To eliminate redundant and unnecessary reconstruction processing, we introduce key-frame and resolution selection incorporated with task assignment, quality prediction, and pipeline parallelization. The proposed framework is able to provide a flexible balance between reconstruction time and quality satisfaction. For DNN inference, a joint resource allocation and DNN partitioning framework is proposed. The outcomes of this research seek to help the distributed computing, smart applications, and data-intensive science communities build effective, efficient, and robust MEC environments
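The DNN-partitioning decision in item iii-b) reduces, at its simplest, to choosing a layer at which to cut the network between device and edge. A toy sketch with invented per-layer latencies and activation sizes, not a profile from the thesis:

```python
def best_partition(t_dev, t_edge, out_size, bandwidth):
    """Pick the cut index k: layers [0, k) run on-device, the tensor
    crossing the cut (out_size[k]; out_size[0] is the raw input) is
    uplinked, and layers [k, N) run on the edge. k = 0 is full offload,
    k = N is fully local (no transfer)."""
    N = len(t_dev)
    best_k, best_t = 0, float("inf")
    for k in range(N + 1):
        transfer = out_size[k] / bandwidth if k < N else 0.0
        total = sum(t_dev[:k]) + transfer + sum(t_edge[k:])
        if total < best_t:
            best_k, best_t = k, total
    return best_k, best_t

# Invented 4-layer profile: times in ms, sizes in KB, uplink in KB/ms.
split_k, split_t = best_partition(
    t_dev=[30.0, 25.0, 25.0, 20.0],   # per-layer latency on the device
    t_edge=[3.0, 2.5, 2.5, 2.0],      # per-layer latency on the edge server
    out_size=[600.0, 300.0, 50.0, 40.0],
    bandwidth=10.0)
print(f"cut at index {split_k}: end-to-end {split_t:.1f} ms")
```

The cut typically lands where intermediate activations become small: later layers are cheap to ship but early ones are cheap to skip, and the search balances the two against the uplink bandwidth.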
Corporate Social Responsibility: the institutionalization of ESG
Understanding the impact of Corporate Social Responsibility (CSR) on firm performance as it relates to industries reliant on technological innovation is a complex and perpetually evolving challenge. To thoroughly investigate this topic, this dissertation adopts an economics-based structure to address three primary hypotheses. This structure allows each hypothesis to stand as an essentially standalone empirical paper, unified by an overall analysis of the nature of the impact that ESG has on firm performance. The first hypothesis explores how the evolution of CSR into its modern quantified iteration, ESG, has led to the institutionalization and standardization of the CSR concept. The second hypothesis fills gaps in the existing literature testing the relationship between firm performance and ESG by finding that the relationship is significantly positive in long-term, strategic metrics (ROA and ROIC) and that there is no correlation in short-term metrics (ROE and ROS). Finally, the third hypothesis states that if a firm has a long-term strategic ESG plan, as proxied by the publication of CSR reports, then it is more resilient to damage from controversies. This is supported by the finding that pro-ESG firms consistently fared better than their counterparts in both financial and ESG performance, even in the event of a controversy. However, firms with consistent reporting are also held to a higher standard than their nonreporting peers, suggesting a higher-risk, higher-reward dynamic. These findings support the theory of good management, in that long-term strategic planning is both immediately economically beneficial and serves as a means of risk management and social impact mitigation. Overall, this contributes to the literature by filling gaps in the nature of the impact that ESG has on firm performance, particularly from a management perspective
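The second hypothesis is, at its core, a regression of long-term performance metrics on ESG scores. A toy illustration on synthetic data (the coefficients and noise are invented, and the dissertation's actual tests would use real firm panels with controls):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
esg = rng.uniform(0, 100, n)                        # invented ESG scores
roa = 0.02 + 0.0004 * esg + rng.normal(0, 0.01, n)  # assumed positive link

X = np.column_stack([np.ones(n), esg])              # constant + ESG score
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)      # OLS fit
resid = roa - X @ beta
# Standard error of the slope, for a simple t-test of significance:
se = np.sqrt(resid @ resid / (n - 2) * np.linalg.inv(X.T @ X)[1, 1])
print(f"ESG slope on ROA: {beta[1]:.5f} (t = {beta[1] / se:.1f})")
```

A significantly positive slope on a long-term metric like ROA, paired with an insignificant one for ROE or ROS, is the pattern the second hypothesis reports.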
Redefining Community in the Age of the Internet: Will the Internet of Things (IoT) generate sustainable and equitable community development?
There is a problem so immense in our built world that it is often not fully realized. This problem is the disconnection between humanity and the physical world. In an era of limitless data and information at our fingertips, buildings, public spaces, and landscapes are divided from us due to their physical nature. Compared with the intense flow of information from our online world driven by the beating engine of the internet, our physical world is silent. This lack of connection not only has consequences for sustainability but also for how we perceive and communicate with our built environment in the modern age. A possible solution to bridge the gap between our physical and online worlds is a technology known as the Internet of Things (IoT). What is IoT? How does it work? Will IoT change the concept of the built environment for a participant within it, and in doing so enhance the dynamic link between humans and place? And what are the implications of IoT for privacy, security, and data for the public good? Lastly, we will identify the most pressing issues existing in the built environment by conducting and analyzing case studies from Pomona College and California State University, Northridge. By analyzing IoT in the context of case studies we can assess its viability and value as a tool for sustainability and equality in communities across the world
Patenting Genetic Information
The U.S. biotechnology industry got its start and grew to maturity over roughly three decades, beginning in the 1980s. During this period genes were patentable, and many gene patents were granted. University researchers performed basic research—often funded by the government—and then patented the genes they discovered with the encouragement of the Bayh-Dole Act, which sought to encourage practical applications of basic research by allowing patents on federally funded inventions and discoveries. At that time, when a researcher discovered the function of a gene, she could patent it such that no one else could work with that gene in the laboratory without a license. She had no right, however, to control genes in nature, including in human bodies. Universities licensed their researchers’ patents to industry, which brought in significant revenue for further research. University researchers also used gene patents as the basis for obtaining funding for start-up enterprises spun out of university labs. It was in this environment that many of today’s biotechnology companies started. In 2013, the Supreme Court held that naturally occurring genes could no longer be patented. This followed a 2012 decision that disallowed patents on many diagnostic processes. These decisions significantly changed the intellectual property protections in the biotechnology industry. Nevertheless, the industry has continued to grow and thrive. This Article investigates two questions. First, if some form of exclusive rights still applied to genes, would the biotech industry be even more robust, with more new entrants in addition to thriving, well-established companies? Second, does the current lack of protection for gene discoveries incentivize keeping such discoveries secret for the many years that it can take to develop a therapeutic based thereon—to the detriment of patients who could benefit from knowledge of the genetic associations, even before a treatment is developed?
The Article concludes by analyzing what protection for discovering genetic associations, if any, will most increase social welfare
DIN Spec 91345 RAMI 4.0 compliant data pipelining: An approach to support data understanding and data acquisition in smart manufacturing environments
Today, data scientists in the manufacturing domain are confronted with a set of challenges associated with data acquisition as well as data processing, including the extraction of valuable information to support both the work of the manufacturing equipment and the manufacturing processes behind it.
One essential aspect related to data acquisition is the pipelining, including various communication standards, protocols, and technologies to save and transfer heterogeneous data. These circumstances make it hard to understand, find, access, and extract data from the sources depending on use cases and applications.
In order to support this data pipelining process, this thesis proposes the use of a semantic model. The selected semantic model should be able to describe smart manufacturing assets themselves as well as to provide access to their data along their life-cycle.
As a matter of fact, there are many research contributions in smart manufacturing that have already produced reference architectures, such as RAMI 4.0, or standards for semantic meta-data description and asset classification. This research builds upon these outcomes and introduces a novel semantic-model-based data pipelining approach that uses the Reference Architecture Model for Industry 4.0 (RAMI 4.0) as its basis, with the smart manufacturing domain as the exemplary use case, to enable easy exploration, understanding, discovery, selection, and extraction of data
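The proposed semantic-model-based lookup can be caricatured in a few lines: describe each asset along the RAMI 4.0 axes, then let the pipeline query by axis instead of hard-coding sources. Field names, values, and endpoints here are illustrative, not the DIN SPEC 91345 vocabulary:

```python
# Each asset description is tagged along the three RAMI 4.0 axes
# (hierarchy level, life-cycle/value-stream phase, architecture layer);
# all names and endpoints below are invented for illustration.
ASSETS = [
    {"id": "press-01", "hierarchy": "Field Device",
     "lifecycle": "Production/Maintenance", "layer": "Asset",
     "data": {"protocol": "OPC UA", "endpoint": "opc.tcp://press-01:4840"}},
    {"id": "press-01-type", "hierarchy": "Product",
     "lifecycle": "Development", "layer": "Information",
     "data": {"protocol": "file", "endpoint": "cad/press-01.step"}},
]

def find(assets, **criteria):
    """Filter asset descriptions by any combination of axis values, so a
    data pipeline can discover its sources instead of hard-coding them."""
    return [a for a in assets
            if all(a.get(k) == v for k, v in criteria.items())]

# e.g. "all data sources from the production/maintenance phase":
hits = find(ASSETS, lifecycle="Production/Maintenance")
print([a["data"]["endpoint"] for a in hits])
```

A full implementation would express the same descriptions as an ontology (e.g. in RDF) so that queries can also follow relations between assets, but the discovery-by-description pattern is the same.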
- …