Entropy in Image Analysis III
Image analysis can be applied to rich and varied scenarios; the aim of this young research field is therefore not merely to mimic the human visual system. Image analysis is one of the main methods by which computers interpret visual data today, and it represents a body of knowledge that, thanks to artificial intelligence, they may eventually manage in a fully unsupervised manner. The articles published in this book clearly point towards such a future.
Efficient Multi-Objective NeuroEvolution in Computer Vision and Applications for Threat Identification
Concealed threat detection is at the heart of critical security systems designed to ensure public safety. Current methods for threat identification and detection are primarily manual, but there is a growing drive to automate the process. Problematically, developing computer vision models capable of operating in a wide range of settings, such as those arising in threat detection, is a challenging task involving multiple (and often conflicting) objectives.
Automated machine learning (AutoML) is a flourishing field which endeavours to discover and optimise models and hyperparameters autonomously, providing an alternative to classic, effort-intensive hyperparameter search. However, existing approaches typically show significant downsides, such as (1) high computational cost and greediness in resources, (2) limited (or absent) scalability to custom datasets, (3) inability to provide competitive alternatives to expert-designed and heuristic approaches, and (4) consideration of only a single objective. Moreover, most existing studies focus on standard classification tasks and thus cannot address a plethora of problems in threat detection and, more broadly, in a wide variety of compelling computer vision scenarios.
This thesis leverages state-of-the-art convolutional autoencoders and semantic segmentation (Chapter 2) to develop effective multi-objective AutoML strategies for neural architecture search. These strategies are designed for threat detection and provide insights into some quintessential computer vision problems. To this end, the thesis first introduces two new models: a practical Multi-Objective Neuroevolutionary approach for Convolutional Autoencoders (MONCAE, Chapter 3) and a Resource-Aware model for Multi-Objective Semantic Segmentation (RAMOSS, Chapter 4). Notably, these approaches reached state-of-the-art results using a fraction of the computational resources required by competing systems (0.33 GPU days compared to 3150), while allowing multiple objectives (e.g., performance and number of parameters) to be optimised simultaneously. This drastic speed-up was made possible by coupling neuroevolution algorithms with a new heuristic technique termed Progressive Stratified Sampling. The presented methods are evaluated on a range of benchmark datasets and then applied to several threat detection problems, outperforming previous attempts at balancing multiple objectives.
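The multi-objective selection underlying such approaches can be illustrated with a toy Pareto-dominance filter. This is a hypothetical sketch, not the actual MONCAE/RAMOSS code: a candidate dominates another if it is no worse on every objective (here, error rate and parameter count, both minimised) and strictly better on at least one.

```python
def dominates(a, b):
    """True if candidate a Pareto-dominates b (minimising every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only non-dominated candidates, preserving input order."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical (error, parameter-count) pairs for four architectures.
archs = [(0.10, 5e6), (0.12, 1e6), (0.08, 9e6), (0.12, 2e6)]
front = pareto_front(archs)  # (0.12, 2e6) is dominated by (0.12, 1e6)
```

A neuroevolutionary loop would then breed the next generation from this front rather than from a single best model.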
The final chapter of the thesis focuses on threat detection, exploiting these two models and novel components. It first presents a new modification of specialised proxy scores to be embedded in RAMOSS, enabling a further drastic acceleration of the AutoML process while maintaining state-of-the-art performance (above 85% precision on SIXray). This approach yielded a new automatic evolutionary Multi-objEctive method for cOncealed Weapon detection (MEOW), which outperforms state-of-the-art models for threat detection on key datasets: a gold-standard benchmark (SIXray) and a security-critical, proprietary dataset.
Finally, the thesis shifts the focus from neural architecture search to identifying the most representative data samples. Specifically, the Multi-objectIve Core-set Discovery through evolutionAry algorithMs in computEr vision approach (MIRA-ME) showcases how the neural architecture search techniques developed in the previous chapters can be adapted to operate on the data space. MIRA-ME offers supervised and unsupervised ways to select maximally informative, compact sets of images via dataset compression. This operation can further offset the computational cost (above 90% compression) with minimal sacrifice in performance (less than 5% for MNIST and less than 13% for SIXray).
Overall, this thesis proposes novel model- and data-centred approaches towards a more widespread use of AutoML as an optimal tool for architecture and coreset discovery. With the presented and future developments, the work suggests that AutoML can effectively operate in real-time and performance-critical settings such as threat detection, even fostering interpretability by uncovering more parsimonious optimal models. More widely, these approaches have the potential to provide effective solutions to challenging computer vision problems that are nowadays typically considered infeasible for AutoML.
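The idea of selecting a compact yet maximally informative subset can be illustrated with a simple greedy k-center coreset. This is a hypothetical sketch of the general coreset idea, not MIRA-ME itself, which uses multi-objective evolutionary search:

```python
import math

def farthest_point_coreset(points, k):
    """Greedy k-center selection: repeatedly add the point farthest from
    the current coreset, so the subset spreads over the whole dataset."""
    coreset = [points[0]]
    while len(coreset) < k:
        nxt = max(points, key=lambda p: min(math.dist(p, c) for c in coreset))
        coreset.append(nxt)
    return coreset

# Toy 2-D "images": two clusters plus an outlier; the coreset
# covers both clusters and the outlier with only 3 samples.
pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
print(farthest_point_coreset(pts, 3))
```

Training on such a subset trades a small accuracy loss for a large reduction in data volume, which is the trade-off MIRA-ME optimises explicitly.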
Analysis and Improvement of the Quality of Multimedia Web Services and Their Implementation on Computers and on FPGAs
Résumé: Web services, born of technological advances in computer networks and in fixed and portable telecommunication devices, occupy a central place in people's everyday lives. The growing demand for multimedia Web services (MWS) in particular increases the load on Internet networks, service providers and Web servers. This load is mainly due to the fact that high-quality MWS require large transfer rates and packet sizes. The quality of service (by definition, as perceived by the user) is influenced by several performance factors, such as processing time, propagation delay, response time, image resolution and compression efficiency.
The work described in this thesis is motivated by the continually growing demand for new MWS and by the need to maintain and improve the quality of these services. We first consider the quality of service (QoS) of MWS implemented on computers, such as desktops and laptops. We begin by studying compatibility aspects in order to obtain MWS that run satisfactorily on different platforms. We then study the QoS of MWS implemented according to two different approaches, the SOAP protocol and the RESTful style, focusing in particular on the compression ratio, one of the factors influencing QoS.
After considering MWS implemented on computers from several angles, we turn to the QoS of MWS implemented on FPGAs. We carry out a study and an implementation that identify the advantages of implementing MWS on FPGAs.
The contributions fall into six parts, as follows:
1. We introduce methods for creating, i.e. designing and implementing, MWS on heterogeneous software platforms in different environments such as Windows, OS X and Solaris. One of our objectives is to propose an approach for adding new MWS while guaranteeing compatibility across platforms, in the sense that we identify the options allowing us to offer a rich and varied set of MWS able to run on the different platforms.
2. We identify a list of relevant parameters influencing the QoS of MWS implemented according to the SOAP protocol and the REST style.
3. We develop an analysis environment to quantify the impact of each identified parameter on the QoS of MWS, considering MWS implemented according to both the SOAP protocol and the REST style. The QoS obtained with SOAP and REST are compared objectively; to ease the comparison, the same range of images (from the SOAP analysis) and the same software platforms were reused.
4. We develop an analysis procedure that determines a correlation between the dimensions of an image and the appropriate compression ratio. The results confirm this thesis's own finding that the compression ratio can be optimised when the image's dimensions have the following property: the ratio of length to width equals the golden ratio found in nature. Three libraries were used, namely JPEG, JPEG2000 and DjVu.
5. In a part complementary to the four preceding ones, which concern MWS on computers, we study the design and implementation of MWS on FPGAs. We justify the choice of FPGAs by identifying their advantages over two other options: computers and ASICs. To confirm several of the identified advantages, a high-QoS, high-performance MWS is created on an FPGA using free design tools, open-source code and a method based solely on HDL. Our approach will ease the addition of further MWS management and orchestration modules.
6. We update and adapt the open-source code and documentation of the Ethernet IP Core module for communication between the FPGA and the Ethernet port on the Nexys3 board, which eases the implementation of MWS on that board.
Abstract: Web services, which are the outcome of the technological advancements in IT networks
and hand-held mobile devices for telecommunications, occupy an important role in our
daily life. The increasing demand for multimedia Web services (MWS), in particular,
augments the load on the Internet, on service providers and Web servers. This load
is mainly due to the fact that the high-quality multimedia Web services necessitate
high data transfer rates and considerable payload sizes. The quality of service (QoS,
by definition as it is perceived by the user) is influenced by several factors, such as
processing time, propagation delay, response time, image resolution and compression
efficacy.
The research work in this thesis is motivated by the persistent demand for new MWS and the need to maintain and improve the QoS. Firstly, we focus on the QoS of MWS
when they are implemented on desktop and laptop computers. We start with studying
the compatibility aspects in order to obtain MWS functioning satisfactorily on different
platforms. Secondly, we study the QoS for MWS implemented according to the SOAP
protocol and the RESTful style. In particular, we study the compression rate, which is
one of the pertinent factors influencing the QoS.
Thirdly, after studying MWS implemented on computers, we proceed to study the QoS of MWS implemented in hardware, in particular on FPGAs. We thus carried out a comprehensive study, together with implementations, that demonstrates and compares the advantages of MWS on FPGAs.
The contributions of this thesis can be summarised as follows:
1. We introduce methods of design and implementation of MWS on heterogeneous
platforms, such as Windows, OS X and Solaris. One of our objectives is to
propose an approach that facilitates the integration of new MWS while assuring
the compatibility amongst involved platforms. This means that we identify the
options that enable offering a set of rich and various MWS that can run on different
platforms.
2. We determine a list of relevant parameters that influence the QoS of MWS.
3. We build an analysis environment that quantifies the impact of each parameter on the QoS of MWS implemented according to both the SOAP protocol and the RESTful style. The QoS for SOAP and REST are objectively compared. The analysis was conducted on a large and varied set of images, giving a realistic picture of the behaviour of real MWS.
4. We develop an analysis procedure to determine the correlation between the aspect ratio of an image and its compression ratio. Our results confirm that the compression ratio can be improved and optimised when the aspect ratio of an image is close to the golden ratio, which occurs in nature. Three libraries of compression schemes have been used, namely: JPEG, JPEG2000 and DjVu.
5. Complementary to the four contributions mentioned above, which concern the
MWS on computers, we study also the design and implementation of MWS on
FPGA. This is justified by the numerous advantages that are offered by FPGAs,
compared to the other technologies such as computers and ASICs. In order to
highlight the advantages of implementing MWS on FPGA, we developed on FPGA
a MWS of high performance and high level of QoS. To achieve our goal, we utilised
freely available design utilities, open-source code and a method based only on
HDL. This approach is adequate for future extensions and add-on modules for MWS orchestration.
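The golden-ratio observation in contribution 4 can be sketched numerically. The helper below is hypothetical, not code from the thesis; it simply measures how far an image's aspect ratio lies from the golden ratio, the quantity the correlation analysis relates to the achievable compression ratio:

```python
GOLDEN_RATIO = (1 + 5 ** 0.5) / 2  # phi, approximately 1.618

def golden_ratio_deviation(width, height):
    """Distance of an image's aspect ratio (long side / short side)
    from the golden ratio; smaller values suggest a more favourable
    shape for compression, per the thesis's correlation."""
    ratio = max(width, height) / min(width, height)
    return abs(ratio - GOLDEN_RATIO)

# Hypothetical image dimensions: 1618x1000 sits almost exactly on the
# golden ratio, while a square image is far from it (deviation ~0.62).
print(golden_ratio_deviation(1618, 1000))
print(golden_ratio_deviation(1000, 1000))
```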
Temperature Dependence and Touch Sensitivity of Electrical Transport in Novel Nanocomposite Printable Inks
Printed electronics is an established industry allowing the production of electronic components such as resistors, and more complex structures such as solar cells, from functional inks. Composites, mixtures of two or more materials with different physical and/or chemical properties that combine to create a new material with properties differing from those of its constituent parts, have been important in areas such as the textile and automotive industries, and are significant in printed electronics as inks for printed circuit components and for touch and vapour sensors. Here, the functional performance and physical behaviour of two screen-printable multi-component nanocomposite inks, formulated for touch-pressure sensing applications, are investigated. Each comprises a proprietary mixture of electrically conducting and insulating nanoparticles dispersed in an insulating polymer binder; one ink is opaque and the other transparent. The opaque ink has a complex surface structure consisting of a homogeneous dispersion of nanoparticles, whereas the transparent ink's structure is characterised by large aggregates of nanoparticles distributed through the printed layer. Temperature-dependent electrical transport measurements under a range of compressive loadings reveal similar non-linear behaviour in both inks, with some hysteresis observed, and this behaviour is linked to the inks' structures. A physical model comprising a combination of linear and non-linear conduction contributions, with the linear term attributed to direct connections between conductive particles and the non-linear term to field-assisted quantum tunnelling, has been developed and used successfully to describe the underpinning physical processes behind the unique electrical functionality of the opaque ink and, to a lesser extent, the transparent ink.
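The two-channel conduction picture can be sketched as follows. The abstract does not give the model's exact functional form or parameters, so both are illustrative assumptions here; a sinh term is a common approximation for field-assisted tunnelling conduction:

```python
import math

def ink_current(v, g_lin=1e-4, a_tun=1e-6, b_tun=3.0):
    """Toy two-channel conduction model (illustrative parameters only):
    a linear ohmic term for direct particle-to-particle contacts plus
    a sinh term, a standard form for field-assisted tunnelling."""
    return g_lin * v + a_tun * math.sinh(b_tun * v)

# At low bias the linear term dominates; at high bias the tunnelling
# term takes over, reproducing the measured non-linear I-V behaviour.
for v in (0.1, 1.0, 5.0):
    print(f"V={v:4.1f} V  I={ink_current(v):.3e} A")
```

Fitting the relative weights of the two terms to measured I-V curves is what lets such a model separate contact conduction from tunnelling conduction in each ink.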
Doctor of Philosophy dissertation
Congenital heart defects are classes of birth defects that affect the structure and function of the heart. These defects are attributed to the abnormal or incomplete development of the fetal heart during the first few weeks following conception. The overall detection rate of congenital heart defects during routine prenatal examination is low, which is attributed to the insufficient number of trained personnel in many local health centers, where many cases of congenital heart defects go undetected. This dissertation presents a system to identify congenital heart defects in order to improve pregnancy outcomes and increase detection rates. The system was developed, and its performance assessed, in identifying the presence of ventricular defects (congenital heart defects that affect the size of the ventricles) using four-dimensional fetal echocardiographic images. The designed system consists of three components: 1) a fetal heart location estimation component, 2) a fetal heart chamber segmentation component, and 3) a detection component that detects congenital heart defects from the segmented chambers. The location estimation component is used to isolate the fetal heart in any four-dimensional fetal echocardiographic image. It uses a hybrid region-of-interest extraction method that is robust to the speckle-noise degradation inherent in all ultrasound images. The location estimation method's performance was analyzed on 130 four-dimensional fetal echocardiographic images by comparison with manually identified fetal heart regions of interest. The location estimation method showed good agreement with the manually identified standard on four quantitative indexes: the Jaccard index, the Sørensen-Dice index, sensitivity and specificity. The average values of these indexes were 80.70%, 89.19%, 91.04%, and 99.17%, respectively.
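The four overlap indexes used to evaluate the location estimation can be computed directly from voxel sets. This is a toy sketch with a hypothetical 1-D "image"; the actual evaluation compares 4-D regions of interest:

```python
def overlap_indexes(pred, truth, universe):
    """Jaccard, Sørensen-Dice, sensitivity and specificity for two
    binary regions given as sets of voxel coordinates within a
    known universe of voxels."""
    tp = len(pred & truth)              # voxels correctly included
    fp = len(pred - truth)              # voxels wrongly included
    fn = len(truth - pred)              # voxels wrongly excluded
    tn = len(universe) - tp - fp - fn   # voxels correctly excluded
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return jaccard, dice, sensitivity, specificity

# Toy 1-D image of 10 voxels: truth occupies 0-4, prediction 1-5.
universe = set(range(10))
truth = set(range(0, 5))
pred = set(range(1, 6))
print(overlap_indexes(pred, truth, universe))
```

Note that Dice is always at least as large as Jaccard for the same masks, which is consistent with the reported 89.19% versus 80.70% averages.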
The fetal heart chamber segmentation component uses velocity vector field estimates computed on the frames of a four-dimensional image to identify the fetal heart chambers. The velocity vector fields are computed using a histogram-based optical flow technique, formulated on local image characteristics to reduce the effect of speckle noise and nonuniform echogenicity on the estimates. Features based on the velocity vector field estimates, voxel brightness/intensity values, and voxel Cartesian coordinate positions were extracted and used with a kernel k-means algorithm to identify the individual chambers. The segmentation method's performance was evaluated on 130 images from 31 patients by comparing the segmentation results with manually identified fetal heart chambers. Evaluation was based on the Sørensen-Dice index, the absolute volume difference and the Hausdorff distance, with per-patient average values of 69.92%, 22.08%, and 2.82 mm, respectively. The detection component uses the volumes of the identified fetal heart chambers to flag the possible occurrence of hypoplastic left heart syndrome, a type of congenital heart defect. An empirical volume threshold, defined on the relative ratio of adjacent fetal heart chamber volumes obtained manually, is used in the detection process. The performance of the detection procedure was assessed by comparison with a set of images with a confirmed diagnosis of hypoplastic left heart syndrome and a control group of normal fetal hearts. Of the 130 images considered, 18 of 20 (90%) fetal hearts were correctly detected as having hypoplastic left heart syndrome, and 84 of 110 (76.36%) fetal hearts in the control group were correctly detected as normal. These results show that the detection system performs better than the overall detection rate for congenital heart defects, which is reported to be between 30% and 60%.
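The volume-ratio detection rule can be sketched as follows. The 0.6 threshold and the chamber volumes are illustrative assumptions only, since the dissertation derives its threshold empirically from manually measured chamber volumes:

```python
def flag_hlhs(left_ventricle_vol, right_ventricle_vol, threshold=0.6):
    """Flag possible hypoplastic left heart syndrome (HLHS) when the
    left ventricle is disproportionately small relative to the right.
    The threshold value here is hypothetical, not the thesis's."""
    return (left_ventricle_vol / right_ventricle_vol) < threshold

# Hypothetical chamber volumes in millilitres.
print(flag_hlhs(0.8, 2.0))  # markedly small left ventricle -> flagged
print(flag_hlhs(1.9, 2.0))  # comparable volumes -> not flagged
```

Using a ratio of adjacent chamber volumes rather than absolute volumes makes the rule insensitive to overall heart size, which varies with gestational age.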