Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large qualitative and quantitative multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a fast and simple g-code generator based on commercially-available software. The resulting design method and rules were experimentally validated through a series of case studies, involving both design and physical testing of the designs, at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
Efficient inference in the transverse field Ising model
In this paper we introduce an approximate method to solve the quantum cavity
equations for transverse field Ising models. The method relies on a projective
approximation of the exact cavity distributions of imaginary time trajectories
(paths). A key feature, novel in the context of similar algorithms, is the
explicit separation of the classical and quantum parts of the distributions.
Numerical simulations show accurate results in comparison with the sampled
solution of the cavity equations, the exact diagonalization of the Hamiltonian
(when possible) and other approximate inference methods in the literature. The
computational complexity of this new algorithm scales linearly with the
connectivity of the underlying lattice, enabling the study of highly connected
networks, such as those often encountered in quantum machine learning problems.
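For reference, the transverse-field Ising model treated here has the standard Hamiltonian (the notation below, couplings $J_{ij}$ on the lattice edges and transverse field $\Gamma$, is the conventional one and is not taken from the abstract):

```latex
H \;=\; -\sum_{\langle i j \rangle} J_{ij}\,\sigma^{z}_{i}\sigma^{z}_{j}
\;-\; \Gamma \sum_{i} \sigma^{x}_{i}
```

The classical part of the cavity distributions involves the $\sigma^{z}$ couplings, while the transverse field $\Gamma$ generates the quantum dynamics along the imaginary-time paths.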
Rehabilitation Exercise Repetition Segmentation and Counting using Skeletal Body Joints
Physical exercise is an essential component of rehabilitation programs that
improve quality of life and reduce mortality and re-hospitalization rates. In
AI-driven virtual rehabilitation programs, patients complete their exercises
independently at home, while AI algorithms analyze the exercise data to provide
feedback to patients and report their progress to clinicians. To analyze
exercise data, the first step is to segment it into consecutive repetitions.
There has been a significant amount of research performed on segmenting and
counting the repetitive activities of healthy individuals using raw video data,
which raises concerns regarding privacy and is computationally intensive.
Previous research on patients' rehabilitation exercise segmentation relied on
data collected by multiple wearable sensors, which are difficult to use at home
by rehabilitation patients. Compared to healthy individuals, segmenting and
counting exercise repetitions in patients is more challenging because of the
irregular repetition duration and the variation between repetitions. This paper
presents a novel approach for segmenting and counting the repetitions of
rehabilitation exercises performed by patients, based on their skeletal body
joints. Skeletal body joints can be acquired through depth cameras or computer
vision techniques applied to RGB videos of patients. Various sequential neural
networks are designed to analyze the sequences of skeletal body joints and
perform repetition segmentation and counting. Extensive experiments on three
publicly available rehabilitation exercise datasets, KIMORE, UI-PRMD, and
IntelliRehabDS, demonstrate the superiority of the proposed method compared to
previous methods. The proposed method enables accurate exercise analysis while
preserving privacy, facilitating the effective delivery of virtual
rehabilitation programs.
Comment: 8 pages, 1 figure, 2 tables
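The segmentation-and-counting step above can be illustrated with a minimal sketch. The paper's sequential neural networks are replaced here by a simple smoothing-plus-peak-detection baseline on a 1-D joint trajectory; the window size, prominence threshold, and synthetic signal are all illustrative assumptions, not the paper's method.

```python
# Minimal sketch: count exercise repetitions from a 1-D skeletal-joint
# trajectory by smoothing the signal and detecting prominent local maxima
# (one peak per repetition). Window and threshold values are assumptions.

def moving_average(signal, window=5):
    """Smooth a 1-D sequence with a simple centered moving average."""
    half = window // 2
    return [
        sum(signal[max(0, i - half):i + half + 1])
        / len(signal[max(0, i - half):i + half + 1])
        for i in range(len(signal))
    ]

def count_repetitions(signal, min_prominence=0.5):
    """Count local maxima that rise at least `min_prominence` above the
    recent minimum -- shallow jitter peaks are rejected."""
    smooth = moving_average(signal)
    count = 0
    for i in range(1, len(smooth) - 1):
        if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]:
            left_min = min(smooth[max(0, i - 15):i])
            if smooth[i] - left_min >= min_prominence:
                count += 1
    return count
```

Because the detector keys on prominent peaks rather than a fixed period, it tolerates the irregular repetition durations that make patient data harder than healthy-subject data.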
RAPID: Enabling Fast Online Policy Learning in Dynamic Public Cloud Environments
Resource sharing between multiple workloads has become a prominent practice
among cloud service providers, motivated by demand for improved resource
utilization and reduced cost of ownership. Effective resource sharing, however,
remains an open challenge due to the adverse effects that resource contention
can have on high-priority, user-facing workloads with strict Quality of Service
(QoS) requirements. Although recent approaches have demonstrated promising
results, those works remain largely impractical in public cloud environments
since workloads are not known in advance and may only run for a brief period,
thus prohibiting offline learning and significantly hindering online learning.
In this paper, we propose RAPID, a novel framework for fast, fully-online
resource allocation policy learning in highly dynamic operating environments.
RAPID leverages lightweight QoS predictions, enabled by
domain-knowledge-inspired techniques for sample efficiency and bias reduction,
to decouple control from conventional feedback sources and guide policy
learning at a rate orders of magnitude faster than prior work. Evaluation on a
real-world server platform with representative cloud workloads confirms that
RAPID can learn stable resource allocation policies in minutes, as compared
with hours in prior state-of-the-art, while improving QoS by 9.0x and
increasing best-effort workload performance by 19-43%.
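The fully-online learning loop described above can be sketched in miniature with an epsilon-greedy bandit over discrete resource allocations. The action set, reward shape, and exploration rate below are illustrative assumptions; RAPID itself uses lightweight QoS predictions rather than this simple bandit.

```python
# Minimal sketch of fully-online resource allocation learning: an
# epsilon-greedy bandit chooses a core allocation, observes a QoS-derived
# reward, and updates its estimates -- no offline training phase required.

import random

class OnlineAllocator:
    def __init__(self, actions, epsilon=0.2, seed=0):
        self.actions = list(actions)            # candidate core allocations
        self.epsilon = epsilon                  # exploration rate
        self.totals = {a: 0.0 for a in self.actions}
        self.counts = {a: 0 for a in self.actions}
        self.rng = random.Random(seed)

    def choose(self):
        """Explore with probability epsilon, otherwise exploit the best
        observed mean reward (untried actions are tried first)."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.totals[a] / self.counts[a]
                   if self.counts[a] else float("inf"))

    def update(self, action, reward):
        """Feed back an observed reward (e.g. a QoS headroom estimate)."""
        self.totals[action] += reward
        self.counts[action] += 1
```

With a reward that peaks at some allocation, the policy stabilizes within a few hundred decisions, which is the regime (minutes, not hours) the abstract emphasizes.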
Communicating Actor Automata -- Modelling Erlang Processes as Communicating Machines
Brand and Zafiropulo's notion of Communicating Finite-State Machines (CFSMs)
provides a succinct and powerful model of message-passing concurrency, based
around channels. However, a major variant of message-passing concurrency is not
readily captured by CFSMs: the actor model. In this work, we define a variant
of CFSMs, called Communicating Actor Automata, to capture the actor model of
concurrency as provided by Erlang: with mailboxes, from which messages are
received according to repeated application of pattern matching. Furthermore,
this variant of CFSMs supports dynamic process topologies, capturing common
programming idioms in the context of actor-based message-passing concurrency.
This gives a new basis for modelling, specifying, and verifying Erlang
programs. We also consider a class of CAAs that give rise to freedom from race
conditions.
Comment: In Proceedings PLACES 2023, arXiv:2304.0543
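The mailbox semantics described above, where messages are received by scanning the mailbox for the first match rather than always taking the head of a channel, can be sketched as follows. The `Mailbox` class and predicate-based patterns are illustrative stand-ins for Erlang's pattern matching, not the paper's formal automata.

```python
# Minimal sketch of Erlang-style selective receive: the mailbox is scanned
# in arrival order and the FIRST message matching any given pattern is
# removed, leaving earlier non-matching messages queued for later receives.

from collections import deque

class Mailbox:
    def __init__(self):
        self.queue = deque()

    def send(self, message):
        self.queue.append(message)

    def receive(self, *predicates):
        """Return the first queued message satisfying any predicate, or
        None if nothing matches (a real actor would block and wait here)."""
        for i, msg in enumerate(self.queue):
            if any(p(msg) for p in predicates):
                del self.queue[i]
                return msg
        return None
```

This out-of-order removal is exactly what distinguishes the actor model's mailboxes from the FIFO channels of classical CFSMs.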
Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules
We target the problem of automatically synthesizing proofs of semantic
equivalence between two programs made of sequences of statements. We represent
programs using abstract syntax trees (AST), where a given set of
semantics-preserving rewrite rules can be applied on a specific AST pattern to
generate a transformed and semantically equivalent program. In our system, two
programs are equivalent if there exists a sequence of application of these
rewrite rules that leads to rewriting one program into the other. We propose a
neural network architecture based on a transformer model to generate proofs of
equivalence between program pairs. The system outputs a sequence of rewrites,
and the validity of the sequence is simply checked by verifying it can be
applied. If no valid sequence is produced by the neural network, the system
reports the programs as non-equivalent, ensuring by design no programs may be
incorrectly reported as equivalent. Our system is fully implemented for a given
grammar which can represent straight-line programs with function calls and
multiple types. To efficiently train the system to generate such sequences, we
develop an original incremental training technique, named self-supervised
sample selection. We extensively study the effectiveness of this novel training
approach on proofs of increasing complexity and length. Our system, S4Eq,
achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent
programs.
Comment: 30 pages including appendix
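The proof-checking step described above (a proof is valid iff every rewrite in the sequence applies and the result equals the target) can be sketched as follows. The nested-tuple AST encoding and the toy commutativity rule are illustrative assumptions, not S4Eq's grammar or rule set.

```python
# Minimal sketch of checking a rewrite-sequence proof of program equivalence:
# each rule is tried at the root, then recursively in subtrees; the proof is
# valid only if every rule applies and the final AST equals the target.

def apply_rule(expr, rule):
    """Return `expr` with `rule` applied at the first position where it
    matches, or None if the rule applies nowhere."""
    result = rule(expr)
    if result is not None:
        return result
    if isinstance(expr, tuple):
        for i, sub in enumerate(expr):
            rewritten = apply_rule(sub, rule)
            if rewritten is not None:
                return expr[:i] + (rewritten,) + expr[i + 1:]
    return None

def check_proof(source, target, rules):
    """Verify that applying `rules` in order rewrites `source` into `target`."""
    current = source
    for rule in rules:
        current = apply_rule(current, rule)
        if current is None:
            return False          # some rewrite failed to apply
    return current == target
```

Because validity is established by replaying the rewrites, an invalid neural-network output is simply rejected, which is how the system guarantees no false equivalence claims by design.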
BRAMAC: Compute-in-BRAM Architectures for Multiply-Accumulate on FPGAs
Deep neural network (DNN) inference using reduced integer precision has been
shown to achieve significant improvements in memory utilization and compute
throughput with little or no accuracy loss compared to full-precision
floating-point. Modern FPGA-based DNN inference relies heavily on the on-chip
block RAM (BRAM) for model storage and the digital signal processing (DSP) unit
for implementing the multiply-accumulate (MAC) operation, a fundamental DNN
primitive. In this paper, we enhance the existing BRAM to also compute MAC by
proposing BRAMAC (Compute-in-BRAM Architectures for Multiply-Accumulate). BRAMAC supports
2's complement 2- to 8-bit MAC in a small dummy BRAM array using a hybrid
bit-serial & bit-parallel data flow. Unlike previous compute-in-BRAM
architectures, BRAMAC allows read/write access to the main BRAM array while
computing in the dummy BRAM array, enabling both persistent and tiling-based
DNN inference. We explore two BRAMAC variants: BRAMAC-2SA (with 2 synchronous
dummy arrays) and BRAMAC-1DA (with 1 double-pumped dummy array).
BRAMAC-2SA/BRAMAC-1DA can boost the peak MAC throughput of a large Arria-10
FPGA by 2.6x/2.1x, 2.3x/2.0x, and 1.9x/1.7x for 2-bit, 4-bit, and 8-bit
precisions, respectively, at the cost of a 6.8%/3.4% increase in the FPGA core
area. By adding BRAMAC-2SA/BRAMAC-1DA to a state-of-the-art tiling-based DNN
accelerator, an average speedup of 2.05x/1.7x and 1.33x/1.52x can be achieved
for AlexNet and ResNet-34, respectively, across different model precisions.
Comment: 11 pages, 13 figures, 3 tables, FCCM conference 202
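The bit-serial side of the hybrid data flow above can be illustrated in software: the weight is consumed one bit per step, with shift-and-add partial sums, and the 2's-complement MSB carries negative weight. This is a functional sketch of the arithmetic only; the BRAM array organization, signedness conventions, and 8-bit width are illustrative assumptions.

```python
# Minimal sketch of a bit-serial multiply-accumulate: weights are processed
# LSB first over `bits` steps, activations are handled bit-parallel, and
# partial sums are shifted and accumulated. Signed values use 2's complement.

def bit_serial_mac(activations, weights, bits=8):
    """Compute sum(a * w) with each weight consumed one bit per step."""
    acc = 0
    for step in range(bits):
        partial = 0
        for a, w in zip(activations, weights):
            w_unsigned = w & ((1 << bits) - 1)   # 2's-complement encoding
            bit = (w_unsigned >> step) & 1
            if bit:
                # the MSB of a 2's-complement number has negative weight
                partial += -a if step == bits - 1 else a
        acc += partial << step
    return acc
```

In hardware this trades latency (one cycle per weight bit) for a much smaller multiplier, which is why lower precisions see the larger throughput gains quoted above.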
Path integrals and stochastic calculus
Path integrals are a ubiquitous tool in theoretical physics. However, their
use is sometimes hindered by the lack of control on various manipulations --
such as performing a change of the integration path -- one would like to carry
out in the light-hearted fashion that physicists enjoy. Similar issues arise in
the field of stochastic calculus, which we review to prepare the ground for a
proper construction of path integrals. At the level of path integration, and in
arbitrary space dimension, we not only report on existing Riemannian
geometry-based approaches that render path integrals amenable to the standard
rules of calculus, but also bring forth new routes, based on a fully
time-discretized approach, that achieve the same goal. We illustrate these
various definitions of path integration on simple examples such as the
diffusion of a particle on a sphere.
Comment: 96 pages, 4 figures. New title, expanded introduction and additional
references. Version accepted in Advances in Physics
Semi-supervised detection of structural damage using Variational Autoencoder and a One-Class Support Vector Machine
In recent years, Artificial Neural Networks (ANNs) have been introduced in
Structural Health Monitoring (SHM) systems. A semi-supervised method with a
data-driven approach allows the ANN training on data acquired from an undamaged
structural condition to detect structural damages. In standard approaches,
after the training stage, a decision rule is manually defined to detect
anomalous data. However, this process can be automated using machine learning
methods whose performance is maximised with hyperparameter optimization
techniques. The paper proposes a semi-supervised method with a
data-driven approach to detect structural anomalies. The methodology consists
of: (i) a Variational Autoencoder (VAE) to approximate undamaged data
distribution and (ii) a One-Class Support Vector Machine (OC-SVM) to
discriminate different health conditions using damage-sensitive features
extracted from the VAE's signal reconstruction. The method is applied to a
scale steel structure tested in nine damage scenarios by the IASC-ASCE
Structural Health Monitoring Task Group.
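The two-stage pipeline above (learn the undamaged-data distribution, then flag departures from it) can be sketched compactly. To stay dependency-free, the VAE is replaced by a fixed moving-average "reconstruction" and the OC-SVM by a threshold learned from healthy data only; both substitutions are deliberate simplifications of the paper's method, and all signal values below are synthetic.

```python
# Minimal sketch of semi-supervised damage detection: fit a reconstruction
# model and a decision threshold on UNDAMAGED data only, then flag signals
# whose reconstruction error exceeds the threshold as anomalous.

def reconstruction_error(signal, window=3):
    """Mean squared error between a signal and its smoothed reconstruction."""
    half = window // 2
    recon = [sum(signal[max(0, i - half):i + half + 1])
             / len(signal[max(0, i - half):i + half + 1])
             for i in range(len(signal))]
    return sum((s - r) ** 2 for s, r in zip(signal, recon)) / len(signal)

def fit_threshold(healthy_signals, margin=3.0):
    """Learn an anomaly threshold from undamaged-condition data only."""
    errors = [reconstruction_error(s) for s in healthy_signals]
    mean = sum(errors) / len(errors)
    return margin * mean if mean > 0 else margin

def is_damaged(signal, threshold):
    return reconstruction_error(signal) > threshold
```

The key property mirrored here is that no damaged-condition data is needed for training, which is what makes the approach semi-supervised and practical for real structures.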
Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review
Globally, the external Internet is increasingly being connected to the
contemporary industrial control system. As a result, there is an immediate need
to protect the network from several threats. The key infrastructure of
industrial activity may be protected from harm by using an intrusion detection
system (IDS), a preventive measure mechanism, to recognize new kinds of
dangerous threats and hostile activities. The most recent artificial
intelligence (AI) techniques used to create IDS in many kinds of industrial
control networks are examined in this study, with a particular emphasis on
IDS-based deep transfer learning (DTL). The latter can be seen as a type of
information fusion that merges and/or adapts knowledge from multiple domains to
enhance performance on the target task, particularly when labeled data in the
target domain is scarce. Publications issued after 2015 were taken into
account. The selected publications fall into three categories: DTL-only and
IDS-only papers, covered in the introduction and background, and DTL-based IDS
papers, which form the core of this review.
Researchers will be able to have a better grasp of the current state of DTL
approaches used in IDS in many different types of networks by reading this
review paper. Other useful information is also covered, such as the datasets
used, the type of DTL employed, the pre-trained network, the IDS techniques,
the evaluation metrics (including accuracy/F-score and false alarm rate (FAR)),
and the improvement gained. The algorithms and methods used in several studies
that clearly illustrate the principles of each DTL-based IDS subcategory are
also presented to the reader.
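The transfer-learning pattern that recurs across the surveyed IDS papers, freeze a feature extractor learned on a data-rich source domain and fit only a small classifier head on the scarce labeled target data, can be sketched as follows. The fixed random-projection "pretrained" extractor and the toy traffic features are illustrative assumptions, not any specific system from the review.

```python
# Minimal sketch of the deep-transfer-learning pattern: a frozen feature
# extractor (stand-in for pretrained source-domain layers) feeds a small
# logistic-regression head that is fine-tuned on a handful of target samples.

import math
import random

def pretrained_extractor(x, seed=0):
    """Stand-in for frozen pretrained layers: a fixed random ReLU projection."""
    rng = random.Random(seed)
    weights = [[rng.uniform(-1, 1) for _ in x] for _ in range(4)]
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in weights]

def fine_tune_head(samples, labels, lr=0.1, epochs=300):
    """Fit only the classifier head (logistic regression) on target data."""
    feats = [pretrained_extractor(x) for x in samples]
    w = [0.0] * len(feats[0])
    b = 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                            # gradient of the log-loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = pretrained_extractor(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
```

Only the head's few parameters are trained, which is why this style of DTL remains usable when labeled target-domain intrusion data is scarce.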