A Design Science Research Approach to Smart and Collaborative Urban Supply Networks
Urban supply networks face increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Because the field is heterogeneous, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness.
A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence in recent years has led to a wide range of applications across many domains. However, the potential of artificial intelligence in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become ever more collaborative, complex, and dynamic as interactions in business processes involving information technologies have intensified.
Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.
Open-source high-performance software packages for direct and inverse solving of horizontal capillary flow
This work introduces Fronts, a set of open-source numerical software packages for nonlinear horizontal capillary-driven flow problems in unsaturated porous media governed by the Richards equation. The software uses the Boltzmann transformation to solve such problems in semi-infinite domains. The scheme adopted by Fronts allows it to be faster and easier to use than other tools, and to provide continuous functions for all involved fields. The software is capable of solving problems that appear in hydrology, but also in other domains of interest such as paper-based microfluidics. As the first known open-source implementation to adopt this approach, Fronts has been validated against analytical solutions as well as existing software, achieving remarkable results in terms of computational cost and numerical precision, and is meant to aid the study and modeling of capillary flow. Fronts can be freely downloaded and installed, and offers a friendly environment for new users with its complete documentation and tutorial cases.
Cited as: Gerlero, G. S., Berli, C. L. A., Kler, P. A. Open-source high-performance software packages for direct and inverse solving of horizontal capillary flow. Capillarity, 2023, 6(2): 31-40. https://doi.org/10.46690/capi.2023.02.0
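The Boltzmann reduction that Fronts exploits can be illustrated with a minimal sketch (illustrative only, not the Fronts API): for a constant diffusivity D, the horizontal Richards problem on a semi-infinite domain has a closed-form profile in the similarity variable o = x/√t, so profiles at different times collapse onto a single curve.

```python
import math

# Sketch (not the Fronts API): with a *constant* diffusivity D, the
# horizontal capillary-flow problem reduces, via the Boltzmann variable
# o = x / sqrt(t), to an ODE whose solution is an erfc profile.
def theta(x, t, D=1e-3, theta_i=0.1, theta_b=0.5):
    """Water content theta(x, t) for the semi-infinite constant-D problem,
    with initial content theta_i and boundary content theta_b at x = 0."""
    o = x / math.sqrt(t)  # Boltzmann similarity variable
    return theta_i + (theta_b - theta_i) * math.erfc(o / (2.0 * math.sqrt(D)))

# Self-similarity: profiles at different times collapse onto one curve in o.
assert abs(theta(0.01, 1.0) - theta(0.02, 4.0)) < 1e-12
# Boundary condition recovered at x = 0 (erfc(0) = 1).
assert abs(theta(0.0, 1.0) - 0.5) < 1e-12
```

The self-similarity is exactly what makes the transformation useful: the PDE in (x, t) collapses to an ODE in o alone, which Fronts solves for general nonlinear D(θ).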
Anuário Científico da Escola Superior de Tecnologia da Saúde de Lisboa - 2021
It is with great pleasure that we present the latest (11th) edition of the Anuário Científico of the Escola Superior de Tecnologia da Saúde de Lisboa. As a higher education institution, we are committed to promoting and encouraging scientific research in all areas of knowledge covered by our mission. This publication aims to disseminate all the scientific output produced by the professors, researchers, students, and non-teaching staff of ESTeSL during 2021. This yearbook thus reflects the hard and dedicated work of our community, which has committed itself to producing high-quality scientific content shared with society in the form of books, book chapters, articles published in national and international journals, abstracts of oral communications and posters, as well as the results of first- and second-cycle degree work. The content of this publication therefore covers a wide variety of topics, from fundamental themes to studies of practical application in specific health contexts, reflecting the plurality and diversity of areas that define, and make unique, ESTeSL. We believe that scientific research is a fundamental axis for the development of society, which is why we encourage our students to engage in research activities and evidence-based practice from the beginning of their studies at ESTeSL. This publication, the largest ever, is an example of the success of those efforts, and we are very proud to share the results and discoveries of our researchers with the scientific community and the general public. We hope this yearbook inspires and motivates other students, health professionals, teachers, and collaborators to continue exploring new ideas and contributing to the advancement of science and technology in the body of knowledge of the areas that make up ESTeSL.
We thank everyone involved in producing this yearbook and wish you an inspiring and enjoyable read.
Application of advanced fluorescence microscopy and spectroscopy in live-cell imaging
Since its inception, fluorescence microscopy has been a key source of discoveries in cell biology. Advances in fluorophores, labeling techniques, and instrumentation have made fluorescence microscopy a versatile quantitative tool for studying dynamic processes and interactions both in vitro and in living cells. In this thesis, I apply quantitative fluorescence microscopy techniques in live-cell environments to investigate several biological processes. To study Gag processing in HIV-1 particles, fluorescence lifetime imaging microscopy and single-particle tracking are combined to follow nascent HIV-1 virus particles during assembly and release on the plasma membrane of living cells. Proteolytic release of eCFP embedded in the Gag lattice of immature HIV-1 virus particles results in a characteristic increase in its fluorescence lifetime. Gag processing and rearrangement can be detected in individual virus particles using this approach. In another project, a robust method for quantifying Förster resonance energy transfer in live cells is developed to allow direct comparison of live-cell FRET experiments between laboratories. Finally, I apply image fluctuation spectroscopy to study protein behavior in a variety of cellular environments. Image cross-correlation spectroscopy is used to study the oligomerization of CXCR4, a G-protein-coupled receptor on the plasma membrane. With raster image correlation spectroscopy, I measure the diffusion of histones in the nucleoplasm and heterochromatin domains of the nuclei of early mouse embryos. The lower diffusion coefficient of histones in the heterochromatin domain supports the conclusion that heterochromatin forms a liquid phase-separated domain. The wide range of topics covered in this thesis demonstrates that fluorescence microscopy is not just an imaging tool but also a powerful instrument for the quantification and elucidation of dynamic cellular processes.
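The lifetime-based readout described above can be made concrete with a small sketch (the numbers below are illustrative, not measurements from the thesis): FRET efficiency follows from the donor lifetime measured with and without a nearby acceptor, so a rise in donor lifetime upon Gag proteolysis corresponds to a drop in energy transfer.

```python
def fret_efficiency(tau_da, tau_d):
    """FRET efficiency from donor fluorescence lifetimes:
    tau_da = donor lifetime with acceptor present,
    tau_d  = unquenched donor lifetime (no acceptor)."""
    return 1.0 - tau_da / tau_d

# Illustrative numbers only: a quenched donor (1.8 ns) versus the same
# fluorophore unquenched (2.4 ns) gives 25% transfer efficiency.
e_quenched = fret_efficiency(1.8, 2.4)   # 0.25
e_released = fret_efficiency(2.3, 2.4)   # lifetime recovered -> low FRET
assert e_quenched > e_released
```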
Image classification over unknown and anomalous domains
A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting.
Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each.
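A minimal numerical sketch of the problem (illustrative, not the thesis's module): when a batch mixes two visual modes with very different statistics, a single batch-wide mean and variance misrepresents both, whereas normalizing each mode with its own sample statistics recovers comparable, centred features.

```python
import statistics

def batch_norm(xs, eps=1e-5):
    """Standard normalization: one mean/std over the whole (mixed) batch."""
    mu, sd = statistics.fmean(xs), statistics.pstdev(xs)
    return [(x - mu) / (sd + eps) for x in xs]

def per_mode_norm(xs, modes, eps=1e-5):
    """Mode-aware normalization: separate sample statistics per visual mode."""
    out = [0.0] * len(xs)
    for m in set(modes):
        idx = [i for i, mode in enumerate(modes) if mode == m]
        mu = statistics.fmean(xs[i] for i in idx)
        sd = statistics.pstdev(xs[i] for i in idx)
        for i in idx:
            out[i] = (xs[i] - mu) / (sd + eps)
    return out

# Two modes with very different scales (e.g. cartoons vs. photos):
xs, modes = [0.0, 1.0, 10.0, 11.0], ["cartoon", "cartoon", "photo", "photo"]
mixed = batch_norm(xs)               # all values pulled toward the global mean
per_mode = per_mode_norm(xs, modes)  # each mode centred and scaled on its own
```

With per-mode statistics, both modes normalize to roughly [-1, 1], while the batch-wide version leaves the two groups on opposite ends of the range.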
While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so.
In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. While recent ideas have focused on developing self-supervised solutions for the one-class setting, in this thesis new methods based on transfer learning are formulated. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems recently proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
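One way to see why transfer helps the one-class setting is the nearest-neighbour view: score a query by its distance to the features of normal training data, where in practice the features would come from a frozen pretrained network. A minimal sketch with plain vectors (the feature extractor is omitted; everything here is illustrative):

```python
import math

def anomaly_score(query, normal_features, k=2):
    """Score = mean distance to the k nearest 'normal' training features.
    In a transfer-based detector, both query and normal_features would be
    embeddings from a pretrained network; here they are plain 2-D vectors."""
    dists = sorted(math.dist(query, f) for f in normal_features)
    return sum(dists[:k]) / k

normal = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
# An in-distribution query scores low; a far-away (anomalous) one scores high.
assert anomaly_score((0.05, 0.05), normal) < anomaly_score((5.0, 5.0), normal)
```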
Foundations for programming and implementing effect handlers
First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling.
This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers, as well as exploring the expressive power of effect handlers.
The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. Each offers a different approach to manipulating the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction, I implement the essence of a small UNIX-style operating system complete with multi-user environment, time-sharing, and file I/O.
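The deep/shallow distinction can be sketched outside the thesis's calculus. In the illustrative Python model below (the encoding is mine, not the thesis's implementation), a computation is a tree of operation requests; a deep handler folds over the tree, re-installing itself in every continuation, while a shallow handler performs a single case split and hands back the rest of the tree unhandled.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:            # leaf: the computation finished with a value
    value: Any

@dataclass
class Op:                # node: an operation request with a continuation
    name: str
    arg: Any
    k: Callable[[Any], Any]  # maps the operation's result to the rest of the tree

def deep_handle(ret, ops, comp):
    """Deep handler: a fold -- the handler is re-applied inside the continuation."""
    if isinstance(comp, Return):
        return ret(comp.value)
    resume = lambda x: deep_handle(ret, ops, comp.k(x))
    return ops[comp.name](comp.arg, resume)

def shallow_handle(ret, ops, comp):
    """Shallow handler: a case split -- the continuation is returned unhandled."""
    if isinstance(comp, Return):
        return ret(comp.value)
    return ops[comp.name](comp.arg, comp.k)

# Example effect: 'get' reads an ambient value supplied by the handler.
prog = Op("get", None, lambda x: Op("get", None, lambda y: Return(x + y)))
ops = {"get": lambda _, k: k(21)}
deep_result = deep_handle(lambda v: v, ops, prog)      # handles both 'get's -> 42
leftover = shallow_handle(lambda v: v, ops, prog)      # second 'get' left unhandled
assert deep_result == 42
assert isinstance(leftover, Op)
```

A parameterised handler would extend `deep_handle` with an extra state argument threaded through `resume`, matching the description above.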
The second strand studies continuation-passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow, and parameterised effect handlers in the same environment. The CPS translation is obtained through a series of refinements of a basic first-order CPS translation for a fine-grain call-by-value language into an untyped language. Each refinement moves toward a more intensional representation of continuations, eventually arriving at the notion of generalised continuation, which admits simultaneous support for deep, shallow, and parameterised handlers. The initial refinement adds support for deep handlers by representing stacks of continuations and handlers as a curried sequence of arguments. The image of the resulting translation is not properly tail-recursive, meaning some function application terms do not appear in tail position. To rectify this, the CPS translation is refined once more to obtain an uncurried representation of stacks of continuations and handlers. Finally, the translation is made higher-order in order to contract administrative redexes at translation time. The generalised continuation representation is used to construct an abstract machine that provides simultaneous support for deep, shallow, and parameterised effect handlers.
The third strand explores the expressiveness of effect handlers. First, I show that deep, shallow, and parameterised notions of handlers are interdefinable by way of typed macro-expressiveness, which provides a syntactic notion of expressiveness that affirms the existence of encodings between handlers but provides no information about the computational content of the encodings. Second, using a semantic notion of expressiveness, I show that for a class of programs a programming language with first-class control (e.g. effect handlers) admits asymptotically faster implementations than are possible in a language without first-class control.
AIUCD 2022 - Proceedings
The eleventh edition of the National Conference of the AIUCD (Associazione di Informatica Umanistica) is titled Culture digitali. Intersezioni: filosofia, arti, media (Digital Cultures. Intersections: Philosophy, Arts, Media). The title explicitly calls for methodological and theoretical reflection on the interrelation between digital technologies, the information sciences, the philosophical disciplines, the world of the arts, and cultural studies.
Systems Methods for Analysis of Heterogeneous Glioblastoma Datasets Towards Elucidation of Inter-tumoural Resistance Pathways and New Therapeutic Targets
This PhD thesis describes an effort to compile the literature on key glioblastoma molecular mechanisms into a directed network following Disease Maps standards, to analyse its topology, and to compare the results with quantitative analyses of multi-omics datasets in order to investigate glioblastoma resistance mechanisms. The work also integrated the implementation of data management good practices and procedures.
Graphical scaffolding for the learning of data wrangling APIs
In order for students across the sciences to avail themselves of modern data streams, they must first know how to wrangle data: how to reshape ill-organised tabular data into another format, and how to do this programmatically, in languages such as Python and R. Despite the cross-departmental demand and the ubiquity of data wrangling in analytical workflows, research on how to optimise its instruction has been minimal. Although data wrangling as a programming domain presents distinctive challenges - characterised by on-the-fly syntax lookup and code example integration - it also presents opportunities. One such opportunity is that tabular data structures are easily visualised. To leverage the inherent visualisability of data wrangling, this dissertation evaluates three types of graphics that could be employed as scaffolding for novices: subgoal graphics, thumbnail graphics, and parameter graphics. Using a specially built e-learning platform, this dissertation documents a multi-institutional, randomised, and controlled experiment that investigates the pedagogical effects of these graphics. Our results indicate that the graphics are well received, that subgoal graphics boost the completion rate, and that thumbnail graphics improve navigability within a command menu. We also obtained several non-significant results, and indications that parameter graphics are counter-productive. We discuss these findings in the context of general scaffolding dilemmas, and how they fit into a wider research programme on data wrangling instruction.
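A concrete instance of the kind of task studied here (a generic sketch, not code from the dissertation's platform) is the wide-to-long reshape, a canonical wrangling subgoal that graphics can depict as a table transformation:

```python
def melt(rows, id_col, value_cols):
    """Reshape wide tabular data (one column per measurement) into long
    format (one row per measurement) -- a canonical wrangling subgoal."""
    tidy = []
    for row in rows:
        for col in value_cols:
            tidy.append({id_col: row[id_col],
                         "variable": col,
                         "value": row[col]})
    return tidy

# One wide row with two measurement columns becomes two long rows.
wide = [{"sample": "A", "t0": 1.0, "t1": 2.0}]
result = melt(wide, "sample", ["t0", "t1"])
assert result == [{"sample": "A", "variable": "t0", "value": 1.0},
                  {"sample": "A", "variable": "t1", "value": 2.0}]
```

In R or pandas this is a single library call (`pivot_longer`, `DataFrame.melt`); the subgoal structure the graphics scaffold is the same either way.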
Principles of Massively Parallel Sequencing for Engineering and Characterizing Gene Delivery
The advent of massively parallel sequencing and synthesis technologies has ushered in a new paradigm of biology, where high-throughput screening of billions of nucleic acid molecules and production of libraries of millions of genetic mutants are now routine in labs and clinics. During my Ph.D., I worked to develop data analysis and experimental methods that take advantage of the scale of this data, while making the minimal assumptions necessary for deriving value from their application. My Ph.D. work began with the development of software and principles for analyzing deep mutational scanning data of libraries of engineered AAV capsids. By looking not only at the top variant in a round of directed evolution, but at a broad distribution of the variants and their phenotypes, we were able to identify AAV variants with an enhanced ability to transduce specific cells in the brain after intravenous injection. I then shifted to better understanding the phenotypic profile of these engineered variants. To that end, I turned to single-cell RNA sequencing to identify, with high resolution, the delivery profile of these variants in all cell types present in the cortex of a mouse brain. I began by developing infrastructure and tools for dealing with the data analysis demands of these experiments. Then, by delivering an engineered variant to the animal, I was able to use the single-cell RNA sequencing profile, coupled with a sequencing readout of the delivered genetic cargo present in each cell type, to define the variant’s tropism across the full spectrum of cell types in a single step. To increase the throughput of this experimental paradigm, I then worked to develop a multiplexing strategy for delivering up to 7 engineered variants in a single animal and obtaining the same high-resolution readout for each variant in a single experiment.
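The distribution-level analysis described above can be sketched as a variant-enrichment computation (a generic sketch with hypothetical variant names, not the thesis's software): each variant is scored by the log-ratio of its frequency after selection to its frequency in the input library.

```python
import math

def enrichment(input_counts, selected_counts):
    """Log2 enrichment per variant: frequency after selection divided by
    frequency in the input library -- a common deep-mutational-scanning
    summary. Variants absent after selection are omitted."""
    n_in = sum(input_counts.values())
    n_sel = sum(selected_counts.values())
    return {v: math.log2((selected_counts.get(v, 0) / n_sel)
                         / (input_counts[v] / n_in))
            for v in input_counts if selected_counts.get(v, 0) > 0}

# Hypothetical read counts: 'AAV-A' expands under selection, 'AAV-B' shrinks.
scores = enrichment({"AAV-A": 500, "AAV-B": 500},
                    {"AAV-A": 900, "AAV-B": 100})
assert scores["AAV-A"] > 0 > scores["AAV-B"]
```

Scoring the whole distribution rather than only the top variant is what allows ranking many capsids against each other within one round of selection.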
Finally, to take a step towards translation to human diagnostics, I leveraged the tools I built for scaling single-cell RNA sequencing studies and worked to develop a protocol for obtaining single-cell immune profiles from low volumes of self-collected blood. This study enabled repeat sampling in a short period of time, and revealed an incredible richness in individual variability and time-of-day dependence of human immune gene expression. Together, my Ph.D. work provides strategies for employing massively parallel sequencing and synthesis for new biological applications, and builds towards a future paradigm where personalized, high-resolution sequencing might be coupled with modular, customized gene therapy delivery.