Personalized Resource Allocation in Wireless Networks: An AI-Enabled and Big Data-Driven Multi-Objective Optimization
The design and optimization of wireless networks have mostly been based on
strong mathematical and theoretical modeling. Nonetheless, as novel
applications emerge in the era of 5G and beyond, unprecedented levels of
complexity will be encountered in the design and optimization of the network.
As a result, the use of Artificial Intelligence (AI) is envisioned for wireless
network design and optimization due to the flexibility and adaptability it
offers in solving extremely complex problems in real-time. One of the main
future applications of AI is enabling user-level personalization for numerous
use cases. AI will revolutionize the way we interact with computers in which
computers will be able to sense commands and emotions from humans in a
non-intrusive manner, making the entire process transparent to users. By
leveraging this capability, and accelerated by the advances in computing
technologies, wireless networks can be redesigned to enable the personalization
of network services to the user level in real-time. While current wireless
networks are being optimized to achieve a predefined set of quality
requirements, the personalization technology advocated in this article is
supported by an intelligent big data-driven layer designed to micro-manage the
scarce network resources. This layer provides the intelligence required to
decide the necessary service quality that achieves the target satisfaction
level for each user. Due to their dynamic and flexible design, personalized
networks are expected to achieve unprecedented improvements in optimizing two
conflicting objectives in wireless networks: saving resources and improving
user satisfaction levels.
Trade-offs among cost, integration, and segregation in the human connectome
The human brain structural network is thought to be shaped by the optimal trade-off between cost and efficiency. However, most studies of this problem have focused only on the trade-off between cost and global efficiency (i.e., integration) and have overlooked the efficiency of segregated processing (i.e., segregation), which is essential for specialized information processing. Direct evidence on how trade-offs among cost, integration, and segregation shape the human brain network remains lacking. Here, adopting local efficiency and modularity as segregation factors, we used a multiobjective evolutionary algorithm to investigate this problem. We defined three trade-off models: a trade-off between cost and integration (Dual-factor model), and trade-offs among cost, integration, and segregation, with segregation measured by either local efficiency or modularity (two Tri-factor models). Among these, synthetic networks with an optimal trade-off among cost, integration, and modularity (Tri-factor model [Q]) showed the best performance. They had a high recovery rate of structural connections and optimal performance in most network features, especially in segregated processing capacity and network robustness. The morphospace of this trade-off model could further capture the variation of individual behavioral/demographic characteristics in a domain-specific manner. Overall, our results highlight the importance of modularity in the formation of the human brain structural network and provide new insights into the original cost-efficiency trade-off hypothesis.
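As a rough illustration of the objectives involved, the sketch below evaluates cost, integration, and the two segregation measures named in the abstract on a candidate graph using networkx. Treating edge density as the cost term is a simplifying assumption for the sketch, not the study's anatomical wiring cost.

```python
# Sketch: evaluating the three objective families on a candidate network.
# Assumes networkx; "cost" here is simply edge density, a stand-in for the
# wiring cost used in the study.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def objectives(G: nx.Graph):
    cost = nx.density(G)                   # to be minimized
    integration = nx.global_efficiency(G)  # to be maximized
    local_eff = nx.local_efficiency(G)     # segregation, variant 1
    communities = greedy_modularity_communities(G)
    modularity = nx.algorithms.community.modularity(G, communities)  # segregation, variant 2
    return cost, integration, local_eff, modularity

# Toy candidate network standing in for a synthetic connectome.
G = nx.watts_strogatz_graph(60, 6, 0.1, seed=0)
print(objectives(G))
```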
Learning-based generative representations for automotive design optimization
In automotive design optimizations, engineers intuitively look for suitable representations of CAE models that can be used across different optimization problems. Determining a suitable compact representation of 3D CAE models facilitates faster search and optimization of 3D designs. Therefore, to support novice designers in the automotive design process, we envision a cooperative design system (CDS) which learns the experience embedded in past optimization data and is able to provide assistance to the designer while performing an engineering design optimization task. The research in this thesis addresses different aspects that can be combined to form a CDS framework.
First, based on a survey of deep learning techniques, a point cloud variational autoencoder (PC-VAE) is adapted from the literature, extended, and evaluated as a shape-generative model in design optimizations. The performance of the PC-VAE is verified with respect to state-of-the-art architectures. The PC-VAE is capable of generating a continuous low-dimensional search space for 3D designs, which further supports the generation of novel realistic 3D designs through interpolation and sampling in the latent space. In general, when designing a 3D car, engineers need to consider multiple structural or functional performance criteria of the design. Hence, in the second step, the latent representations of the PC-VAE are evaluated for generating novel designs satisfying multiple criteria and user preferences. A seeding method is proposed to provide a warm start to the optimization process and improve convergence time. Further, to replace expensive simulations for performance estimation in an optimization task, surrogate models are trained to map each latent representation of an input 3D design to its respective geometric and functional performance measures. However, the performance of the PC-VAE is less consistent due to the additional regularization of the latent space.
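A minimal sketch of the latent-space interpolation described above, assuming a trained encoder and decoder are available as black boxes; the PC-VAE's actual architecture and training are not reproduced here, and the encode/decode functions are hypothetical placeholders.

```python
# Sketch: generating intermediate designs by linear interpolation in a
# learned latent space, as described for the PC-VAE. The encode/decode
# callables are hypothetical placeholders for the trained model.
import numpy as np

def interpolate_designs(encode, decode, shape_a, shape_b, steps=5):
    """Decode a sequence of latent codes blended between two designs."""
    z_a, z_b = encode(shape_a), encode(shape_b)
    designs = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b   # convex combination of latent codes
        designs.append(decode(z))
    return designs
```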
Thirdly, to better understand which distinct region of the input 3D design is learned by a particular latent variable of the PC-VAE, a new deep generative model (Split-AE) is proposed, which extends the existing autoencoder architecture. The Split-AE learns input 3D point cloud representations and generates two sets of latent variables for each 3D design. The first set of latent variables, referred to as content, represents the overall underlying structure of the 3D shape, which discriminates it across other semantic shape categories. The second set, referred to as style, represents the unique shape parts of the input 3D shape and allows grouping of shapes into shape classes. The reconstruction and latent-variable disentanglement properties of the Split-AE are compared with other state-of-the-art architectures. In a series of experiments, it is shown that for given input shapes, the Split-AE is capable of generating the content and style variables, which gives the flexibility to transfer and combine style features between different shapes. Thus, the Split-AE is able to disentangle features with minimal supervision and helps in generating novel shapes that are modified versions of existing designs.
Lastly, to demonstrate the application of the initially envisioned CDS, two interactive systems were developed to assist designers in exploring design ideas. In the first CDS framework, the latent variables of the PC-VAE are integrated with a graphical user interface. This framework enables the designer to explore designs while taking into account the data-driven knowledge and different performance measures of 3D designs. The second interactive system aims to guide designers towards their design targets, for which past human experiences of performing 3D design modifications are captured and learned using a machine learning model. The trained model is then used to guide (novice) engineers and designers by predicting the next design modification step based on the currently applied changes.
Multicriteria Optimization Techniques for Understanding the Case Mix Landscape of a Hospital
Various medical and surgical units operate in a typical hospital, and to treat
their patients these units compete for infrastructure such as operating rooms
(OR) and ward beds. How that competition is regulated affects the capacity and
output of a hospital. This article considers the impact of treating different
patient case mixes (PCM) in a hospital. As each case mix has an economic
consequence and a unique profile of hospital resource usage, this consideration
is important. To better understand the case mix landscape and to identify those
which are optimal from a capacity utilisation perspective, an improved
multicriteria optimization (MCO) approach is proposed. As there are many
patient types in a typical hospital, the task of generating an archive of
non-dominated (i.e., Pareto optimal) case mixes is computationally challenging.
To generate a better archive, an improved parallelised epsilon constraint
method (ECM) is introduced. Our parallel random corrective approach is
significantly faster than prior methods and is not restricted to evaluating
points on a structured uniform mesh. As such we can generate more solutions.
The application of KD-Trees is another new contribution. We use them to perform
proximity testing and to store the high dimensional Pareto frontier (PF). For
generating, viewing, navigating, and querying an archive, the development of a
suitable decision support tool (DST) is proposed and demonstrated.
Comment: 38 pages, 17 figures, 11 tables
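To illustrate the KD-tree contribution, the sketch below performs proximity testing before admitting a point to a high-dimensional Pareto archive. It uses SciPy's cKDTree; since that structure is static, the tree is rebuilt after each accepted insertion, and the spacing tolerance is purely illustrative rather than taken from the article.

```python
# Sketch: KD-tree proximity test for a high-dimensional Pareto archive.
# SciPy's cKDTree is static, so the tree is rebuilt after each accepted
# insertion; the minimum spacing below is an illustrative assumption.
import numpy as np
from scipy.spatial import cKDTree

class ParetoArchive:
    def __init__(self, min_spacing=1e-3):
        self.points = []            # objective vectors of archived case mixes
        self.tree = None
        self.min_spacing = min_spacing

    def try_insert(self, p):
        p = np.asarray(p, dtype=float)
        if self.tree is not None:
            dist, _ = self.tree.query(p)
            if dist < self.min_spacing:   # too close to an archived point
                return False
        self.points.append(p)
        self.tree = cKDTree(np.vstack(self.points))
        return True
```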
VI Congreso Científico de Jóvenes en Diseño de Experimentos y Ciencia de Datos
The Institute of Data Science and Artificial Intelligence (DATAI) at the University of Navarra is organizing the “VI Scientific Congress of Young Researchers in Experimental Design and Data Science (JEDE 6)” along with the III DATAI Scientific Conference, to be held on June 5th, 6th, and 7th, 2023 in Pamplona (University of Navarra). The previous five JEDE meetings took place in Toledo in 2010, San Cristóbal de la Laguna in 2012, Pamplona in 2014, Salamanca in 2017, and Almería in 2021. Young Spanish and foreign researchers, many of them from Latin America, attended these meetings. The main objective of this congress is the exchange of knowledge and experiences among young researchers in experimental design and data science from Spanish universities, as well as professionals in these fields, through active participation in a gathering specifically tailored to them. Presentations will take place in a relaxed and receptive environment, with the presence of national and international experts who will encourage scientific debate.
SoC-based FPGA architecture for image analysis and other highly demanding applications
Nowadays, the development of algorithms focuses on performance-efficient and energy-efficient computations. Technologies such as the field programmable gate array (FPGA) and the FPGA-based system on chip (FPGA/SoC) have shown their ability to accelerate intensive computing applications while saving power, owing to their capability for high parallelism and architectural reconfiguration.
Currently, existing design cycles for FPGA/SoC are time-consuming, owing to the complexity of the architecture. Therefore, to bridge the gap between applications and FPGA/SoC architectures and to obtain efficient hardware designs for image analysis and other highly demanding applications using high-level synthesis (HLS) tools, two complementary strategies are considered: ad-hoc techniques and performance estimation.
Regarding ad-hoc techniques, three highly demanding applications were accelerated through HLS tools: a pulse shape discriminator for cosmic rays, automatic pest classification, and re-ranking for information retrieval, emphasizing the benefits obtained when such applications are combined with compression techniques when targeting FPGA/SoC devices.
Furthermore, a comprehensive performance estimator for hardware acceleration is proposed in this thesis to effectively predict resource utilization and latency for FPGA/SoC, building a bridge between the application and architectural domains. The tool integrates analytical models for performance prediction and a design space explorer (DSE) engine that provides high-level insights to hardware developers, composed of two independent sub-engines: a DSE based on single-objective optimization and a DSE based on evolutionary multi-objective optimization.
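A toy illustration of the single-objective DSE idea: exhaustive search over two HLS knobs against an analytical latency/resource model. The model formulas and resource budget below are invented for illustration and do not come from the thesis.

```python
# Sketch: single-objective design-space exploration over an analytical
# model, loosely in the spirit of the thesis's DSE engine. The latency and
# resource formulas are illustrative assumptions, not the thesis's models.
def latency(unroll, pipeline):
    iters = 1024 // unroll                    # loop trip count after unrolling
    return iters * (1 if pipeline else 4)     # cycles per iteration, toy model

def luts(unroll, pipeline):
    return 500 * unroll + (2000 if pipeline else 0)   # toy resource cost

best = None
for unroll in (1, 2, 4, 8, 16):
    for pipeline in (False, True):
        if luts(unroll, pipeline) > 20000:    # resource budget constraint
            continue
        cand = (latency(unroll, pipeline), unroll, pipeline)
        if best is None or cand < best:       # keep the lowest-latency design
            best = cand

print("best (latency, unroll, pipeline):", best)
```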
Multi-objective Optimization of Space-Air-Ground Integrated Network Slicing Relying on a Pair of Central and Distributed Learning Algorithms
As an attractive enabling technology for next-generation wireless
communications, network slicing supports diverse customized services in the
global space-air-ground integrated network (SAGIN) with diverse resource
constraints. In this paper, we dynamically consider three typical classes of
radio access network (RAN) slices, namely high-throughput slices, low-delay
slices and wide-coverage slices, under the same underlying physical SAGIN. The
throughput, the service delay and the coverage area of these three classes of
RAN slices are jointly optimized in a non-scalar form by considering the
distinct channel features and service advantages of the terrestrial, aerial and
satellite components of SAGINs. A joint central and distributed multi-agent
deep deterministic policy gradient (CDMADDPG) algorithm is proposed for solving
the above problem to obtain the Pareto optimal solutions. The algorithm first
determines the optimal virtual unmanned aerial vehicle (vUAV) positions and the
inter-slice sub-channel and power sharing by relying on a centralized unit.
Then it optimizes the intra-slice sub-channel and power allocation, and the
virtual base station (vBS)/vUAV/virtual low earth orbit (vLEO) satellite
deployment in support of three classes of slices by three separate distributed
units. Simulation results verify that the proposed method approaches the
Pareto-optimal exploitation of multiple RAN slices, and outperforms the
benchmarkers.
Comment: 19 pages, 14 figures, journal
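The Pareto-optimality criterion underlying the slice trade-off can be illustrated with a minimal non-dominated filter over (throughput, delay, coverage) tuples. This sketch shows the selection criterion only, not the CDMADDPG algorithm itself, and the sample values are invented.

```python
# Sketch: non-dominated filtering of slice configurations scored by
# (throughput, delay, coverage). Throughput and coverage are maximized and
# delay is minimized, so delay is negated into a common "higher is better"
# convention. Illustrates the Pareto criterion, not CDMADDPG itself.
def pareto_front(configs):
    def score(c):
        thr, delay, cov = c
        return (thr, -delay, cov)

    front = []
    for c in configs:
        dominated = any(
            all(a >= b for a, b in zip(score(o), score(c)))
            and score(o) != score(c)
            for o in configs
        )
        if not dominated:
            front.append(c)
    return front

# Invented sample configurations: (Mbps, ms, coverage fraction).
configs = [(100, 20, 0.6), (80, 5, 0.6), (100, 20, 0.5), (60, 30, 0.9)]
print(pareto_front(configs))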
Can social norms explain long-term trends in alcohol use? Insights from inverse generative social science
Social psychological theory posits entities and mechanisms that attempt to explain observable differences in behavior. For example, dual process theory suggests that an agent's behavior is influenced by intentional (arising from reasoning involving attitudes and perceived norms) and unintentional (i.e., habitual) processes. In order to pass the generative sufficiency test as an explanation of alcohol use, we argue that the theory should be able to explain notable patterns in alcohol use that exist in the population, e.g., the distinct differences in drinking prevalence and average quantities consumed by males and females. In this study, we further develop and apply inverse generative social science (iGSS) methods to an existing agent-based model of the dual process theory of alcohol use. Using iGSS, implemented within a multi-objective grammar-based genetic program, we search through the space of model structures to identify whether a single parsimonious model can best explain both male and female drinking, or whether separate and more complex models are needed. Focusing on alcohol use trends in New York State, we identify an interpretable model structure that achieves high goodness-of-fit for both male and female drinking patterns simultaneously, and which also validates successfully against reserved trend data. This structure offers a novel interpretation of the role of norms in formulating drinking intentions, but the structure's theoretical validity is questioned by its suggestion that individuals with low autonomy would act against perceived descriptive norms. Improved evidence on the distribution of autonomy in the population is needed to understand whether this finding is substantive or is a modeling artefact.
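To give a flavor of the structure search involved, the sketch below randomly derives candidate intention formulas from a tiny grammar, the kind of search space a grammar-based genetic program explores. The grammar, variable names, and bounds are toy assumptions, not the study's actual model.

```python
# Sketch: random derivation from a tiny grammar of "drinking intention"
# formulas, illustrating the model-structure space that a grammar-based
# genetic program searches. Grammar and variables are toy assumptions.
import random

GRAMMAR = {
    "<expr>": [["<term>"], ["<term>", "+", "<expr>"]],
    "<term>": [["attitude"], ["norm"], ["habit"],
               ["<coef>", "*", "<term>"]],
    "<coef>": [["0.5"], ["1.0"], ["2.0"]],
}

def derive(symbol="<expr>", depth=0):
    if symbol not in GRAMMAR:
        return symbol                      # terminal symbol, emit as-is
    rules = GRAMMAR[symbol]
    # Past a depth bound, always take the first (shortest) rule so that
    # derivations terminate.
    rule = rules[0] if depth > 4 else random.choice(rules)
    return " ".join(derive(s, depth + 1) for s in rule)

random.seed(1)
for _ in range(3):
    print(derive())   # e.g. "2.0 * norm + attitude"
```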
Advanced VLBI Imaging
Very Long Baseline Interferometry (VLBI) is an observational technique developed in astronomy for combining multiple radio telescopes into a single virtual instrument with an effective aperture reaching up to many thousand kilometers and enabling measurements at highest angular resolutions. The celebrated examples of applying VLBI to astrophysical studies include detailed, high-resolution images of the innermost parts of relativistic outflows (jets) in active galactic nuclei (AGN) and recent pioneering observations of the shadows of supermassive black holes (SMBH) in the center of our Galaxy and in the galaxy M87.
Despite these and many other proven successes of VLBI, analysis and imaging of VLBI data remain difficult, owing in part to the fact that VLBI imaging inherently constitutes an ill-posed inverse problem. Historically, this problem has been addressed in radio interferometry by the CLEAN algorithm, a matching-pursuit inverse modeling method developed in the early 1970s and since then established as the de facto standard approach for imaging VLBI data.
In recent years, the constantly increasing demand for improving quality and fidelity of interferometric image reconstruction has resulted in several attempts to employ new approaches, such as forward modeling and Bayesian estimation, for application to VLBI imaging.
While the current state-of-the-art forward modeling and Bayesian techniques may outperform CLEAN in terms of accuracy, resolution, robustness, and adaptability, they also tend to require more complex structures and longer computation times, and rely on extensive fine-tuning of a larger number of non-trivial hyperparameters. This leaves ample room for further searches for potentially more effective imaging approaches, and it provides the main motivation for this dissertation and its particular focus on the need to unify algorithmic frameworks and to study VLBI imaging from the perspective of inverse problems in general.
In pursuit of this goal, and based on an extensive qualitative comparison of the existing methods, this dissertation comprises the development, testing, and first implementations of two novel concepts for improved interferometric image reconstruction. The concepts combine the known benefits of current forward modeling techniques, develop more automatic and less supervised algorithms for image reconstruction, and realize them within two different frameworks.
The first framework unites multiscale imaging algorithms in the spirit of compressive sensing with a dictionary adapted to the uv-coverage and its defects (DoG-HiT, DoB-CLEAN). We extend this approach to dynamical imaging and polarimetric imaging. The core components of this framework are realized in a multidisciplinary and multipurpose software MrBeam, developed as part of this dissertation.
The second framework employs a multiobjective genetic evolutionary algorithm (MOEA/D) for the purpose of achieving fully unsupervised image reconstruction and hyperparameter optimization.
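A minimal sketch of the decomposition idea behind MOEA/D, applied to a toy two-objective problem rather than the dissertation's imaging objectives: each weight vector defines a Tchebycheff scalarized subproblem, and neighboring subproblems exchange solutions.

```python
# Sketch: the decomposition principle of MOEA/D on a toy bi-objective
# problem (not the dissertation's imaging pipeline). Each weight vector
# defines a Tchebycheff subproblem; neighbors share improving solutions.
import numpy as np

rng = np.random.default_rng(0)

def F(x):                                   # toy objectives on [0, 1]
    return np.array([x**2, (x - 1.0)**2])

N = 20                                      # number of subproblems
W = np.stack([np.linspace(0, 1, N), 1 - np.linspace(0, 1, N)], axis=1)
X = rng.random(N)                           # one solution per subproblem
z = np.min([F(x) for x in X], axis=0)       # ideal-point estimate

def tcheby(fx, w):
    return np.max(w * np.abs(fx - z))       # Tchebycheff scalarization

for _ in range(200):
    i = rng.integers(N)
    neighbors = [j for j in range(N) if abs(j - i) <= 2]
    parent = X[rng.choice(neighbors)]
    child = np.clip(parent + rng.normal(0.0, 0.1), 0.0, 1.0)  # mutation
    fc = F(child)
    z = np.minimum(z, fc)                   # update ideal point
    for j in neighbors:                     # replace worse neighbors
        if tcheby(fc, W[j]) < tcheby(F(X[j]), W[j]):
            X[j] = child

print(sorted(round(float(x), 2) for x in X))  # spread along the Pareto set
```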
These new methods are shown to outperform the existing methods in various metrics such as angular resolution, structural sensitivity, and degree of supervision. We demonstrate the great potential of these new techniques with selected applications to frontline VLBI observations of AGN jets and SMBH.
In addition to improving the quality and robustness of image reconstruction, DoG-HiT, DoB-CLEAN and MOEA/D also provide such novel capabilities as dynamic reconstruction of polarimetric images on minute time-scales, or near-real time and unsupervised data analysis (useful in particular for application to large imaging surveys).
The techniques and software developed in this dissertation are of interest for a wider range of inverse problems as well. This includes such diverse fields as Ly-alpha tomography (where we improve estimates of the thermal state of the intergalactic medium), the cosmographic search for dark matter (where we improve forecasted bounds on ultralight dilatons), medical imaging, and solar spectroscopy.
Evolutionary Reinforcement Learning: A Survey
Reinforcement learning (RL) is a machine learning approach that trains agents
to maximize cumulative rewards through interactions with environments. The
integration of RL with deep learning has recently resulted in impressive
achievements in a wide range of challenging tasks, including board games,
arcade games, and robot control. Despite these successes, there remain several
crucial challenges, including brittle convergence properties caused by
sensitive hyperparameters, difficulties in temporal credit assignment with long
time horizons and sparse rewards, a lack of diverse exploration, especially in
continuous search space scenarios, difficulties in credit assignment in
multi-agent reinforcement learning, and conflicting objectives for rewards.
Evolutionary computation (EC), which maintains a population of learning agents,
has demonstrated promising performance in addressing these limitations. This
article presents a comprehensive survey of state-of-the-art methods for
integrating EC into RL, referred to as evolutionary reinforcement learning
(EvoRL). We categorize EvoRL methods according to key research fields in RL,
including hyperparameter optimization, policy search, exploration, reward
shaping, meta-RL, and multi-objective RL. We then discuss future research
directions in terms of efficient methods, benchmarks, and scalable platforms.
This survey serves as a resource for researchers and practitioners interested
in the field of EvoRL, highlighting the important challenges and opportunities
for future research. With the help of this survey, researchers and
practitioners can develop more efficient methods and tailored benchmarks for
EvoRL, further advancing this promising cross-disciplinary research field.
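As a concrete example of one EvoRL family the survey covers, the sketch below runs a minimal population-based evolution strategy for policy search. The "environment" is a toy quadratic tracking task standing in for a real RL benchmark, and all hyperparameters are illustrative.

```python
# Sketch: a minimal (mu, lambda)-style evolution strategy for policy
# search, one of the EvoRL families surveyed. The episode return is a toy
# stand-in for a rollout: reward is higher near a hidden target policy.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.5, -0.3, 0.8])        # hidden optimal policy weights

def episode_return(theta):
    return -np.sum((theta - target) ** 2)  # toy fitness, maximized at target

pop_size, elite, sigma = 50, 10, 0.1
population = rng.normal(0.0, 1.0, size=(pop_size, 3))

for gen in range(100):
    fitness = np.array([episode_return(p) for p in population])
    parents = population[np.argsort(fitness)[-elite:]]       # select elites
    children = parents[rng.integers(elite, size=pop_size)]   # resample parents
    population = children + rng.normal(0.0, sigma, population.shape)  # mutate

best = population[np.argmax([episode_return(p) for p in population])]
print("best policy weights:", np.round(best, 2))
```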