Smart and Safe packaging
In line with the latest innovations in the packaging field, this joint project aims at implementing new and innovative micro- and nanoparticles for the development of active and intelligent packaging solutions dedicated to food and medical packaging applications. More specifically, the project combines two major developments, both of which fall within the scope of active and intelligent packaging: an antibacterial packaging solution and smart gas sensors. The antibacterial strategy combines two active materials, silver nanowires and cellulose nanofibrils, to prepare antibacterial surfaces. Their formulation as an ink and their deposition have been studied in depth for different surface deposition processes, including coating and screen-printing. The resulting surfaces display strong antibacterial activity against both Gram-positive and Gram-negative bacteria, as well as properties attractive for active packaging applications, such as high retained transparency and enhanced barrier properties. In the second strategy, gas sensors were prepared using a combination of copper benzene-1,3,5-tricarboxylate metal-organic framework and carbon-graphene materials deposited on flexible screen-printed electrodes. The easy-to-produce, optimized sensors exhibit good performance in both ammonia and humidity sensing, demonstrating the versatility of this approach and its potential to be adapted to different target applications. The results of this project lead to innovative solutions that can meet the challenges raised by the packaging industry.
Development of an electrochemical micromachining (μECM) machine
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Electrochemical machining (ECM), and especially electrochemical micromachining (μECM), has become an attractive area of research because the process leaves no defective layer after machining and because there is growing demand for better surface integrity in micro applications such as microfluidic systems and stress-free drilled holes in the automotive and aerospace sectors. Electrochemical machining is a non-conventional machining process based on the phenomenon of electrolysis. The process requires maintaining a small gap, the interelectrode gap (IEG), between the anode (workpiece) and the cathode (tool-electrode) in order to achieve acceptable machining results (i.e. accuracy and high aspect ratio with appropriate material removal rate and efficiency).

This work presents the design of a next-generation μECM machine for the automotive, aerospace, medical and metrology sectors. It has three axes of motion (X, Y and Z) and a spindle allowing the tool-electrode to rotate during machining. The linear slides for each axis use air bearings with linear DC brushless motors and 2 nm resolution encoders for ultra-precise motion. The control system is based on the Power PMAC motion controller from Delta Tau. The electrolyte tank is located at the rear of the machine and allows the electrolyte to be changed quickly. A pulse power supply unit (PSU) and a special control algorithm have been implemented. The pulse power supply provides not only ultra-short pulses (50 ns) but also plus and minus biases as well as polarity-switching functionality, fulfilling the requirements of tool preparation by reversed ECM on the machine. Moreover, the PSU is equipped with ultrafast overcurrent protection which prevents the tool-electrode from being damaged in the event of short circuits.

Two process control algorithms were developed: one is based on fuzzy logic, and the other adapts the feed rate according to the position and time at which short circuits are detected. The developed machine is capable not only of drilling micro holes in hard-to-machine materials but also of machining micro-styli and micro-needles for the metrology (micro-CMM) and medical sectors. This work also presents drilling trials performed with the machine using an orbiting tool. Machining experiments were also carried out using electrolytes made of a combination of HCl and NaNO3 aqueous solutions. The developed machine was used to fabricate micro tools out of 170 μm WC-Co alloy shafts via micro electrochemical turning and to drill deep holes via μECM in disks made of 18NiCr6 alloy. The results suggest that this process can be used in industrial applications for hard-to-machine materials. The author also suggests that the developed machine can be used to manufacture micro-probes and micro-tools for metrology and micro-manufacturing purposes.

Brunel University; European Commission
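The feed-rate-adapting control algorithm mentioned above can be illustrated in outline. This is a minimal sketch, not the machine's actual controller: the retract distance, feed limits, and recovery gain are illustrative assumptions, as is the idea of halving the feed on each detected short circuit.

```python
# Hypothetical sketch of a short-circuit-adaptive feed-rate controller for
# ECM drilling. All gains and limits below are illustrative assumptions.

class FeedRateController:
    def __init__(self, nominal_feed_um_s=1.0, min_feed_um_s=0.1,
                 retract_um=2.0, recovery_gain=1.05):
        self.feed = nominal_feed_um_s      # current feed rate (um/s)
        self.nominal = nominal_feed_um_s   # target feed rate to recover toward
        self.min_feed = min_feed_um_s      # never slow below this
        self.retract = retract_um          # back-off distance on a short
        self.gain = recovery_gain          # gradual feed recovery factor
        self.position_um = 0.0             # tool position along the feed axis

    def step(self, dt_s, short_circuit):
        """Advance one control cycle; returns the new tool position (um)."""
        if short_circuit:
            # Retract to reopen the interelectrode gap and halve the feed.
            self.position_um -= self.retract
            self.feed = max(self.min_feed, self.feed * 0.5)
        else:
            # No short detected: feed forward, slowly recover toward nominal.
            self.position_um += self.feed * dt_s
            self.feed = min(self.nominal, self.feed * self.gain)
        return self.position_um

ctrl = FeedRateController(retract_um=2.0)
ctrl.step(1.0, short_circuit=False)  # advance 1.0 um at nominal feed
ctrl.step(1.0, short_circuit=True)   # retract 2.0 um, halve the feed
print(ctrl.step(1.0, short_circuit=False))  # resume at the reduced feed
```

The design point is that a short circuit carries positional and temporal information: retracting reopens the gap immediately, while the reduced feed rate prevents the tool from re-entering the same unstable gap condition.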
Pottery and Glass in Byzantium
Though pottery and glass are in some ways related, it is not clear that they share sufficiently similar conditions of manufacture, diffusion, or use for these aspects to be discussed in conjunction. Pottery appears to have been used in greater quantity, or is at least found more frequently, and, while glass could well have been a luxury product, pottery practically never was in the Byzantine world. In addition, research into pottery is further advanced than research into glass.
On the Statistics and Predictability of Go-Arounds
This paper takes an empirical approach to identifying operational factors at busy airports that may precede go-around maneuvers. Using four years of data from San Francisco International Airport, we begin our investigation with a statistical approach to investigate which features of airborne operations, ground operations (e.g., number of inbound aircraft, number of aircraft taxiing from the gate), or weather are most likely to fluctuate, relative to nominal operations, in the minutes immediately preceding a missed approach. We analyze these findings in terms of their implications for current airport operations and discuss how the antecedent factors may affect NextGen. Finally, as a means to assist air traffic controllers, we draw upon techniques from the machine learning community to develop a preliminary alert system for go-around prediction.

United States. National Aeronautics and Space Administration (Grant NNX08AY52A)
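The statistical screening step the abstract describes, looking for features that fluctuate relative to nominal operations in the minutes before a missed approach, can be sketched as a simple deviation test. This is an illustrative sketch only: the feature names, the toy data, and the 3-sigma threshold are assumptions, not the paper's actual method or values.

```python
# Hypothetical sketch: flag minutes whose operational features deviate
# strongly from their nominal distribution. Features and threshold are
# illustrative assumptions, not taken from the paper.
from statistics import mean, stdev

def nominal_stats(history):
    """Per-feature (mean, stdev) computed over nominal minutes."""
    cols = list(zip(*history))
    return [(mean(c), stdev(c)) for c in cols]

def anomalous_features(stats, sample, k=3.0):
    """Indices of features more than k standard deviations from nominal."""
    return [i for i, ((m, s), x) in enumerate(zip(stats, sample))
            if s > 0 and abs(x - m) > k * s]

def go_around_alert(stats, sample, k=3.0):
    """Raise an alert when any feature deviates beyond k sigma."""
    return bool(anomalous_features(stats, sample, k))

# Toy nominal data: (inbound aircraft, aircraft taxiing from gate, wind in kt)
history = [(8, 4, 10), (9, 5, 9), (7, 4, 11), (8, 5, 10), (9, 4, 9)]
stats = nominal_stats(history)
print(go_around_alert(stats, (8, 5, 10)))    # typical minute -> False
print(go_around_alert(stats, (16, 12, 28)))  # unusual minute -> True
```

A learned classifier, as the abstract's machine-learning alert system implies, would replace the fixed threshold with weights fit to labeled go-around events, but the input representation would be the same per-minute feature vector.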
Stabilizing the Psychological Dynamics of People in a Crowd
This thesis investigates the use of control theory as a means to study and ultimately control the psychological dynamics of people in a crowd. Gustave Le Bon's suggestibility theory, a well-known account of collective behaviour, is used to develop a discrete-time nonlinear model of psychological crowd behaviour that, consistent with suggestibility theory, is open-loop unstable. As a first attempt to stabilize the dynamics, linear observer-based output-feedback techniques and a collection of simple nonlinear control strategies are pursued. The poor performance of these schemes motivates an agent-oriented control strategy in which authoritative figures, termed control agents, are interspersed within the crowd and, in a manner similar to feedback linearization, use knowledge of the system dynamics to issue signals that propagate through the crowd and drive specific components of the state to zero. It is shown that if these states are chosen judiciously, then a collection of other state signals are, themselves, zero. This realization is used to develop a stability result for a simple crowd structure, and this result is, in turn, used as a template for similar results for crowds of greater complexity. Simulations verify the functionality of the reported schemes, and the advantages of using multiple control agents, instead of a single control agent, are emphasized. While the mathematical study of complex social phenomena, including crowds, is accompanied by an assortment of unique challenges, the main conclusion of this thesis is that control theory is a potentially powerful framework for studying the underlying dynamics at play in such systems.
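The feedback-linearization idea the abstract invokes can be shown on a generic toy system. This is not the thesis's Le Bon-based crowd model, whose equations are not reproduced here; it is only a minimal illustration of a control agent using knowledge of unstable nonlinear dynamics to cancel them and drive a state component to zero.

```python
# Generic illustration (not the thesis's crowd model): a control agent with
# full knowledge of the dynamics cancels them exactly, so the open-loop
# unstable state is driven to zero in one controlled step.

def open_loop(x, a=1.5):
    # Unstable nonlinear map: with no control input, |x| grows.
    return a * x + 0.1 * x * x

def control_agent(x, a=1.5):
    # Feedback-linearizing signal: the exact negative of the dynamics.
    return -(a * x + 0.1 * x * x)

x = 2.0
for _ in range(5):
    x = open_loop(x) + control_agent(x)
print(x)  # 0.0: the controlled state is pinned at the origin
```

In the thesis's setting the cancellation is less direct, since control agents only influence the crowd state through signals that propagate between neighbours, which is why the choice of which state components to zero matters.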
Multiagent Maximum Coverage Problems: The Trade-off Between Anarchy and Stability
The price of anarchy and the price of stability are two well-studied performance metrics that seek to characterize the inefficiency of equilibria in distributed systems. The distinction between these two performance metrics centers on the equilibria they focus on: the price of anarchy characterizes the quality of the worst-performing equilibria, while the price of stability characterizes the quality of the best-performing equilibria. While much of the literature focuses on these metrics from an analysis perspective, in this work we consider them from a design perspective. Specifically, we focus on the setting where a system operator is tasked with designing local utility functions to optimize these performance metrics in a class of games termed covering games. Our main result characterizes a fundamental trade-off between the price of anarchy and the price of stability in the form of a fully explicit Pareto frontier. Within this setup, optimizing the price of anarchy comes directly at the expense of the price of stability (and vice versa). Our second result demonstrates how a system operator could incorporate an additional piece of system-level information into the design of the agents' utility functions to breach these limitations and improve the system's performance. This valuable piece of system-level information pertains to the performance of the worst-performing agent in the system.

Comment: 14 pages, 4 figures
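The two metrics can be made concrete on a toy covering game. The game below, the element values, and the marginal-contribution utility rule are illustrative assumptions, not the paper's construction; the sketch just enumerates pure Nash equilibria and reads off both ratios.

```python
# Illustrative toy covering game (not the paper's construction): two agents
# each select one set of elements; welfare is the total value covered.
from itertools import product

values = {"a": 1.0, "b": 1.0, "c": 0.6}        # element values (assumed)
actions = [[{"a"}, {"b"}], [{"b"}, {"c"}]]      # each agent's choice of sets

def welfare(profile):
    covered = set().union(*profile)
    return sum(values[e] for e in covered)

def utility(i, profile):
    # Marginal-contribution utility: value agent i covers that no one else does.
    others = [s for j, s in enumerate(profile) if j != i]
    covered_others = set().union(*others)
    return sum(values[e] for e in profile[i] - covered_others)

def is_nash(profile):
    for i, acts in enumerate(actions):
        for alt in acts:
            dev = list(profile)
            dev[i] = alt
            if utility(i, dev) > utility(i, profile) + 1e-12:
                return False
    return True

profiles = list(product(*actions))
opt = max(welfare(p) for p in profiles)
equilibria = [p for p in profiles if is_nash(p)]
poa = min(welfare(p) for p in equilibria) / opt  # worst equilibrium vs optimum
pos = max(welfare(p) for p in equilibria) / opt  # best equilibrium vs optimum
print(poa, pos)  # here the worst equilibrium attains 80% of optimal welfare
```

In this toy instance both ({a},{b}) and ({b},{c}) are equilibria, so the price of anarchy (0.8) and the price of stability (1.0) differ; the paper's design question is how the choice of utility rule trades one ratio against the other.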