Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
To support the design of MPDSMs under fracture conditions, several new design tools were developed and refined: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large qualitative and quantitative multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a fast and simple g-code generator based on commercially available software. The resulting design method and rules were experimentally validated through a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, the results of the project were distilled into a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
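The raster-trace generation that a simple g-code generator performs can be sketched in a few lines. This is an illustrative toy (function name, extrusion model, and feed rate are all assumptions, not the dissertation's tool): it emits a back-and-forth raster of extrusion moves for one rectangular layer.

```python
def raster_layer(width, height, bead_width, z, feed=1800):
    """Emit G-code moves for a back-and-forth raster of one rectangular
    layer: travel to the row start, then extrude across the row."""
    lines = [f"G1 Z{z:.3f} F{feed}"]
    y, direction, e = 0.0, 1, 0.0
    while y <= height + 1e-9:
        x0, x1 = (0.0, width) if direction > 0 else (width, 0.0)
        lines.append(f"G1 X{x0:.3f} Y{y:.3f} F{feed}")           # travel move
        e += width                                # schematic extrusion amount
        lines.append(f"G1 X{x1:.3f} Y{y:.3f} E{e:.3f} F{feed}")  # extrude
        y += bead_width
        direction *= -1
    return lines

layer = raster_layer(20.0, 10.0, bead_width=0.4, z=0.2)
```

Designing the element layout, in this picture, amounts to choosing the trace pattern (here a plain raster) per layer within the process limits.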
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state-of-the-art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control. It also
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning, and control. It also discusses
the potential benefits of integrating AI, soft robotics, and data-driven
methods to enhance the performance and robustness of SHR systems. Finally, the
paper identifies several open research questions in the field and highlights
the need for further research and development efforts to advance SHR
technologies to meet the challenges of global food production. Overall, this
paper provides a starting point for researchers and practitioners interested in
developing SHRs and highlights the need for more research in this field. Comment: Preprint, to appear in the Journal of Field Robotics
Entanglement in the full state vector of boson sampling
The full state vector of boson sampling is generated by passing S single
photons through beam splitters of M modes. The initial Fock state is expressed
with generalized coherent states, and an exact application of the unitary
evolution becomes possible. Due to the favorable polynomial scaling in M, we
can investigate Rényi entanglement entropies for moderate particle and huge
mode numbers. We find (almost) Rényi-index-independent symmetric Page curves
with maximum entropy at equal partition. Furthermore, the maximum entropy as a
function of mode index saturates as a function of M in the collision-free
subspace case. The asymptotic value of the entropy increases linearly with S.
Furthermore, we show that the build-up of the entanglement leads to a cusp at
subsystem size equal to S in the asymmetric entanglement curve. The maximum
entanglement is reached surprisingly early before the mode population is
distributed over the whole system.
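The symmetric Page curve described above can be reproduced in miniature without the paper's generalized-coherent-state machinery: for a dense random pure state of a few qubits, compute the Rényi-2 entropy of every bipartition and observe the peak at the equal partition. This sketch is an illustration of the entropy curve itself, not of the boson-sampling method.

```python
import numpy as np

def renyi2(psi, n, m):
    """Rényi-2 entropy S2 = -ln Tr(rho^2) of the first m qubits of an
    n-qubit pure state psi."""
    mat = psi.reshape(2**m, 2**(n - m))
    rho = mat @ mat.conj().T                 # reduced density matrix
    return -np.log(np.trace(rho @ rho).real)

rng = np.random.default_rng(0)
n = 8
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

# Page-like curve: zero at the empty/full partitions, maximal near n/2
curve = [renyi2(psi, n, m) for m in range(n + 1)]
```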
Towards Advantages of Parameterized Quantum Pulses
The advantages of quantum pulses over quantum gates have attracted increasing
attention from researchers. Quantum pulses offer benefits such as flexibility,
high fidelity, scalability, and real-time tuning. However, while there are
established workflows and processes to evaluate the performance of quantum
gates, there has been limited research on profiling parameterized pulses and
providing guidance for pulse circuit design. To address this gap, our study
proposes a set of design spaces for parameterized pulses, evaluating these
pulses based on metrics such as expressivity, entanglement capability, and
effective parameter dimension. Using these design spaces, we demonstrate that
parameterized pulses outperform gate circuits in both duration and performance,
enabling high-performance quantum computing. Our proposed design space for
parameterized pulse circuits has shown promising results in quantum chemistry
benchmarks. Comment: 11 figures, 4 tables
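One of the metrics named above, entanglement capability, is commonly estimated as the Meyer-Wallach measure Q averaged over sampled parameter values. A minimal sketch for a toy two-qubit gate-level ansatz (RY rotations plus a CNOT, built directly in numpy; the paper works at the pulse level, which this does not attempt):

```python
import numpy as np

def ry(t):
    """Single-qubit RY rotation matrix."""
    return np.array([[np.cos(t/2), -np.sin(t/2)],
                     [np.sin(t/2),  np.cos(t/2)]])

CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=float)

def state(theta):
    """|psi(theta)> = CNOT (RY(t0) x RY(t1)) |00> -- a toy 2-qubit ansatz."""
    psi0 = np.kron(ry(theta[0])[:, 0], ry(theta[1])[:, 0])  # columns act on |0>
    return CNOT @ psi0

def meyer_wallach(psi):
    """Meyer-Wallach measure Q = 2(1 - mean single-qubit purity)."""
    m = psi.reshape(2, 2)
    purities = [np.trace(r @ r).real for r in (m @ m.conj().T, m.T @ m.conj())]
    return 2 * (1 - np.mean(purities))

# Entanglement capability: Q averaged over uniformly sampled parameters
rng = np.random.default_rng(1)
Q = np.mean([meyer_wallach(state(rng.uniform(0, 2*np.pi, 2)))
             for _ in range(2000)])
```

At theta = (pi/2, 0) the ansatz produces a Bell state (Q = 1); at theta = (0, 0) a product state (Q = 0).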
Geometry of Rounding: Near Optimal Bounds and a New Neighborhood Sperner's Lemma
A partition $\mathcal{P}$ of $\mathbb{R}^d$ is called a $(k,\varepsilon)$-secluded partition if, for every $\vec{p} \in \mathbb{R}^d$, the ball $\overline{B}_{\infty}(\varepsilon, \vec{p})$ intersects at most $k$ members of $\mathcal{P}$. A goal in designing such secluded partitions is to minimize $k$ while making $\varepsilon$ as large as possible. This partition problem has connections to a diverse range of topics, including deterministic rounding schemes, pseudodeterminism, replicability, as well as Sperner/KKM-type results.
In this work, we establish near-optimal relationships between $k$ and $\varepsilon$. We show that, for any bounded-measure partition and for any $d \geq 1$, it must be that $k \geq (1+2\varepsilon)^d$. Thus, when $k$ is restricted to $\mathrm{poly}(d)$, it follows that $\varepsilon \in O\!\left(\frac{\ln d}{d}\right)$. This bound is tight up to log factors, as it is known that there exist secluded partitions with $k = d+1$ and $\varepsilon = \frac{1}{2d}$. We also provide new constructions of secluded partitions that work for a broad spectrum of $k$ and $\varepsilon$ parameters. Specifically, we prove that, for any $f : \mathbb{N} \to \mathbb{N}$, there is a secluded partition with $k(d) = (f(d)+1)^{\lceil d/f(d) \rceil}$ and $\varepsilon(d) = \frac{1}{2f(d)}$. These new partitions are optimal up to $O(\log d)$ factors for various choices of $k$ and $\varepsilon$. Based on the lower bound result, we establish a new neighborhood version of Sperner's
lemma over hypercubes, which is of independent interest. In addition, we prove
a no-free-lunch theorem about the limitations of rounding schemes in the
context of pseudodeterministic/replicable algorithms
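The definitions can be made concrete with the simplest secluded partition: the integer unit-cube grid in $\mathbb{R}^d$, for which an $L_\infty$ ball of radius $\varepsilon < 1/2$ meets at most $2^d$ cells, consistent with the $k \geq (1+2\varepsilon)^d$ lower bound. A small Monte Carlo sketch (not a construction from the paper) counts intersected cells empirically:

```python
import numpy as np

def cells_intersected(p, eps):
    """Count axis-aligned unit grid cells [v, v+1)^d met by the closed
    L-infinity ball of radius eps around p: along each axis the ball spans
    cells floor(p_i - eps) .. floor(p_i + eps)."""
    lo = np.floor(p - eps).astype(int)
    hi = np.floor(p + eps).astype(int)
    return int(np.prod(hi - lo + 1))

rng = np.random.default_rng(0)
d, eps = 4, 0.3
counts = [cells_intersected(rng.uniform(0, 10, d), eps) for _ in range(10000)]
k = max(counts)   # empirical k for the grid partition at this eps
```

The grid achieves large $\varepsilon$ at the cost of exponential $k$; the constructions in the abstract trade between the two regimes.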
Self-Supervised Learning to Prove Equivalence Between Straight-Line Programs via Rewrite Rules
We target the problem of automatically synthesizing proofs of semantic
equivalence between two programs made of sequences of statements. We represent
programs using abstract syntax trees (AST), where a given set of
semantics-preserving rewrite rules can be applied on a specific AST pattern to
generate a transformed and semantically equivalent program. In our system, two
programs are equivalent if there exists a sequence of application of these
rewrite rules that leads to rewriting one program into the other. We propose a
neural network architecture based on a transformer model to generate proofs of
equivalence between program pairs. The system outputs a sequence of rewrites,
and the validity of the sequence is simply checked by verifying it can be
applied. If no valid sequence is produced by the neural network, the system
reports the programs as non-equivalent, ensuring by design no programs may be
incorrectly reported as equivalent. Our system is fully implemented for a given
grammar which can represent straight-line programs with function calls and
multiple types. To efficiently train the system to generate such sequences, we
develop an original incremental training technique, named self-supervised
sample selection. We extensively study the effectiveness of this novel training
approach on proofs of increasing complexity and length. Our system, S4Eq,
achieves 97% proof success on a curated dataset of 10,000 pairs of equivalent
programs. Comment: 30 pages including appendix
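The "check by applying" guarantee above is simple enough to sketch: given a term, a set of rewrite rules, and a proposed proof (a sequence of rule applications at positions), the verifier just replays the sequence and rejects on any failure, so non-equivalent programs can never be reported equivalent. The rules and term encoding below are illustrative, not the grammar of S4Eq.

```python
# Terms are nested tuples (op, args...); a rule fires or returns None.
def rule_add_comm(t):
    if isinstance(t, tuple) and t[0] == "+":
        return ("+", t[2], t[1])          # x + y -> y + x

def rule_mul_one(t):
    if isinstance(t, tuple) and t[0] == "*" and t[2] == 1:
        return t[1]                       # x * 1 -> x

RULES = {"add_comm": rule_add_comm, "mul_one": rule_mul_one}

def apply_at(term, path, rule):
    """Apply `rule` at the subterm addressed by `path` (tuple indices,
    1 = first argument); return the rewritten term or None on mismatch."""
    if not path:
        return RULES[rule](term)
    i = path[0]
    sub = apply_at(term[i], path[1:], rule)
    if sub is None:
        return None
    return term[:i] + (sub,) + term[i+1:]

def check_proof(lhs, rhs, proof):
    """Replay (rule, path) steps; equivalence holds only if every step
    applies and the final term equals rhs -- no false positives by design."""
    t = lhs
    for rule, path in proof:
        t = apply_at(t, path, rule)
        if t is None:
            return False
    return t == rhs

lhs = ("+", ("*", "a", 1), "b")           # (a * 1) + b
rhs = ("+", "b", "a")                     # b + a
proof = [("mul_one", [1]), ("add_comm", [])]
```

A neural model proposes `proof`; only this cheap replay decides whether the claim is accepted.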
Full trajectory optimizing operator inference for reduced-order modeling using differentiable programming
Accurate and inexpensive Reduced Order Models (ROMs) for forecasting
turbulent flows can facilitate rapid design iterations and thus prove critical
for predictive control in engineering problems. Galerkin projection based
Reduced Order Models (GP-ROMs), derived by projecting the Navier-Stokes
equations on a truncated Proper Orthogonal Decomposition (POD) basis, are
popular because of their low computational costs and theoretical foundations.
However, the accuracy of traditional GP-ROMs degrades over long time prediction
horizons. To address this issue, we extend the recently proposed Neural
Galerkin Projection (NeuralGP) data driven framework to
compressibility-dominated transonic flow, considering a prototypical problem of
a buffeting NACA0012 airfoil governed by the full Navier-Stokes equations. The
algorithm maintains the form of the ROM-ODE obtained from the Galerkin
projection; however, the coefficients are learned directly from the data using
gradient descent facilitated by differentiable programming. This blends the
strengths of the physics driven GP-ROM and purely data driven neural
network-based techniques, resulting in a computationally cheaper model that is
easier to interpret. We show that the NeuralGP method minimizes a more rigorous
full trajectory error norm compared to a linearized error definition optimized
by the calibration procedure. We also find that while both procedures stabilize
the ROM by displacing the eigenvalues of the linear dynamics matrix of the
ROM-ODE to the complex left half-plane, the NeuralGP algorithm adds more
dissipation to the trailing POD modes resulting in its better long-term
performance. The results presented highlight the superior accuracy of the
NeuralGP technique compared to the traditional calibrated GP-ROM method.
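The coefficient-learning step can be sketched in miniature: fit the matrix of a linear ROM-ODE to a full trajectory by gradient descent on the trajectory-wide error. Everything here is an illustrative stand-in (a 2-state damped oscillator instead of the NACA0012 POD system, and finite-difference gradients instead of the automatic differentiation the paper uses):

```python
import numpy as np

def rollout(A, x0, dt, steps):
    """Integrate the ROM-ODE dx/dt = A x with forward Euler."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (A @ xs[-1]))
    return np.array(xs)

def traj_loss(A, data, x0, dt):
    """Full-trajectory squared error: every snapshot contributes,
    rather than a linearized one-step residual."""
    return np.sum((rollout(A, x0, dt, len(data) - 1) - data) ** 2)

# Synthetic "truth": a lightly damped oscillator standing in for the ROM-ODE
A_true = np.array([[0.0, 1.0], [-1.0, -0.1]])
dt, steps, x0 = 0.05, 60, np.array([1.0, 0.0])
data = rollout(A_true, x0, dt, steps)

A = A_true.copy()
A[0, 1] += 0.3                      # mis-calibrated coefficient to recover
lr, h = 1e-3, 1e-6
loss0 = traj_loss(A, data, x0, dt)
loss = loss0
for _ in range(300):
    g = np.zeros_like(A)            # finite-difference gradient (AD stand-in)
    for i in range(2):
        for j in range(2):
            Ap = A.copy()
            Ap[i, j] += h
            g[i, j] = (traj_loss(Ap, data, x0, dt) - loss) / h
    trial = traj_loss(A - lr * g, data, x0, dt)
    if trial < loss:                # crude backtracking keeps descent stable
        A, loss, lr = A - lr * g, trial, lr * 1.2
    else:
        lr *= 0.5
```

Optimizing this full-trajectory norm, rather than a one-step linearization, is the distinction the abstract draws between NeuralGP and calibration.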
Countermeasures for the majority attack in blockchain distributed systems
Blockchain technology is considered one of the most important computing paradigms since the Internet, owing to unique characteristics that make it ideal for recording, verifying, and managing information about different transactions. Despite this, Blockchain faces several security problems, one of the most important being the 51% or majority attack, in which one or more miners take control of at least 51% of the hash power or compute in a network, allowing a miner to arbitrarily manipulate and modify the information recorded in this technology. This work focused on designing and implementing strategies for detecting and mitigating majority (51%) attacks in a distributed Blockchain system, based on characterizing the behavior of miners. To achieve this, the hash rate/share of miners of the Bitcoin and Ethereum cryptocurrencies was analyzed and evaluated, followed by the design and implementation of a consensus protocol to control the computing power of miners. Subsequently, Machine Learning models for detecting cryptojacking-type malicious software were explored and evaluated. Doctoral dissertation, Doctor en Ingeniería de Sistemas y Computación.
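The hash-share characterization step can be sketched as a sliding-window monitor over who mined recent blocks: a miner's share of the last N blocks is a rough proxy for its share of hash power. This is a hypothetical illustration, not the protocol developed in the thesis.

```python
from collections import Counter, deque

def majority_monitor(block_miners, window=100, threshold=0.51):
    """Flag a miner when its share of the last `window` mined blocks reaches
    `threshold` (checked each time that miner mines a block) -- a block-share
    proxy for hash-rate concentration."""
    recent = deque(maxlen=window)
    counts = Counter()
    alerts = []
    for height, miner in enumerate(block_miners):
        if len(recent) == window:
            counts[recent[0]] -= 1          # evict the oldest block's miner
        recent.append(miner)
        counts[miner] += 1
        if len(recent) == window and counts[miner] / window >= threshold:
            alerts.append((height, miner, counts[miner] / window))
    return alerts

# poolA mines the first 30 blocks outright, then alternates with poolB
blocks = ["poolA"] * 30 + ["poolB", "poolA"] * 40
alerts = majority_monitor(blocks, window=50)
```

A mitigation layer could react to such alerts, e.g., by a consensus rule that discounts blocks from an over-threshold miner.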
Regret Distribution in Stochastic Bandits: Optimal Trade-off between Expectation and Tail Risk
We study the trade-off between expectation and tail risk for regret
distribution in the stochastic multi-armed bandit problem. We fully
characterize the interplay among three desired properties for policy design:
worst-case optimality, instance-dependent consistency, and light-tailed risk.
We show how the order of expected regret exactly affects the decaying rate of
the regret tail probability for both the worst-case and instance-dependent
scenario. A novel policy is proposed to characterize the optimal regret tail
probability for any regret threshold. Concretely, for any given $\alpha$ and
$\beta$, our policy achieves a worst-case expected regret of
$\tilde{O}(T^{\alpha})$ (we call it $\alpha$-optimal) and an instance-dependent
expected regret of $\tilde{O}(T^{\beta})$ (we call it $\beta$-consistent), while
enjoying a probability of incurring an $\tilde{O}(T^{\delta})$ regret
($\delta \geq \alpha$ in the worst-case scenario and $\delta \geq \beta$ in the
instance-dependent scenario) that decays exponentially with a polynomial $T$
term. Such a decay rate is proved to be best achievable. Moreover, we discover
an intrinsic gap of the optimal tail rate under the instance-dependent scenario
between whether the time horizon is known a priori or not. Interestingly,
when it comes to the worst-case scenario, this gap disappears. Finally, we
extend our proposed policy design to (1) a stochastic multi-armed bandit
setting with non-stationary baseline rewards, and (2) a stochastic linear
bandit setting. Our results reveal insights on the trade-off between regret
expectation and regret tail risk for both worst-case and instance-dependent
scenarios, indicating that more sub-optimality and inconsistency leave space
for more light-tailed risk of incurring a large regret, and that knowing the
planning horizon in advance can make a difference in alleviating tail risks.
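The expectation-versus-tail trade-off is easy to see in simulation with a deliberately tail-heavy baseline (explore-then-commit, not the paper's policy): most runs pay only the small exploration cost, but a constant fraction commits to the wrong arm and incurs near-linear regret. All parameters below are illustrative.

```python
import numpy as np

def etc_pseudo_regret(rng, T=1000, m=10, means=(0.55, 0.45)):
    """Explore-then-commit on a two-armed Bernoulli bandit: pull each arm m
    times, commit to the empirical best. Returns the pseudo-regret
    gap * (number of pulls of the suboptimal arm 1)."""
    gap = means[0] - means[1]
    samples = [rng.random(m) < mu for mu in means]      # Bernoulli pulls
    commit = int(np.argmax([s.mean() for s in samples]))
    bad_pulls = m + (T - 2 * m) * (commit == 1)
    return gap * bad_pulls

rng = np.random.default_rng(0)
regrets = np.array([etc_pseudo_regret(rng) for _ in range(300)])
# Bimodal regret distribution: ~gap*m on good runs, ~gap*T on bad commits,
# i.e., a light median but a heavy tail -- the tension the abstract studies.
```

Policies in the paper are designed so that this tail probability decays exponentially rather than sitting at a constant level.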