On the Mechanism of Building Core Competencies: a Study of Chinese Multinational Port Enterprises
This study explores how Chinese multinational port enterprises (MNPEs) build
their core competencies. Core competencies are a firm's special capabilities and the
sources of its sustainable competitive advantage (SCA) in the marketplace, and the
concept has led to extensive research and debate. However, few studies have examined
the mechanisms of building core competencies in the context of Chinese MNPEs.
Accordingly, answers were sought to three research questions:
1. What are the core competencies of the Chinese MNPEs?
2. What are the mechanisms that the Chinese MNPEs use to build their core
competencies?
3. What are the paths that the Chinese MNPEs pursue to build their resource bases?
The study adopted a multiple-case study design, focusing on the mechanism of
building core competencies through the lens of the resource-based view (RBV). Five
leading Chinese MNPEs and three industry associations were purposively selected as
case companies.
The study revealed three main findings. First, it identified three generic core
competencies possessed by the case companies: innovation in business models and
operations, utilisation of technologies, and acquisition of strategic resources. Second,
it developed a conceptual framework, the Mechanism of Building Core Competencies
(MBCC), defined as a process of change in a firm's collective learning about the
effective and efficient utilisation of its resources in response to critical events.
Third, it proposed three paths to build core competencies: enhancing collective
learning, selecting sustainable processes, and building the resource base.
The study contributes to the knowledge of core competencies and RBV in three ways:
(1) presenting three generic core competencies of the Chinese MNPEs, (2) proposing
a new conceptual framework to explain how Chinese MNPEs build their core
competencies, (3) suggesting a solid anchor point (MBCC) to explain the links among
resources, core competencies, and SCA. The findings set benchmarks for the Chinese
logistics industry and provide guidelines for building core competencies.
Gasificação direta de biomassa para produção de gás combustível (Direct gasification of biomass for fuel gas production)
The excessive consumption of fossil fuels to satisfy the world's needs for
energy and commodities has led to the emission of large amounts of greenhouse
gases in recent decades, contributing significantly to the greatest
environmental threat of the 21st century: climate change. The answer to this
man-made disaster is not simple and is possible only if distinct stakeholders
and governments cooperate and work together. This is mandatory if we want to
move to a more sustainable economy based on renewable materials, with energy
provided by perpetual natural sources (e.g., wind, solar). In this regard,
biomass can play a main role as an adjustable, renewable feedstock that allows
the replacement of fossil fuels in various applications, and conversion by
gasification provides the necessary flexibility for that purpose. In fact,
fossil fuels are just biomass that underwent extreme pressure and heat for
millions of years. Furthermore, biomass is a resource that, if not used or
managed, increases wildfire risk. Consequently, we also have an obligation to
valorise and use this resource.
In this work, new scientific knowledge was obtained to support the
development of direct (air) gasification of biomass in bubbling fluidized bed
reactors, with the aim of producing a fuel gas with suitable properties to
replace natural gas in industrial gas burners. This is the first step towards
the integration and development of gasification-based biorefineries, which
will produce a diverse range of value-added products from biomass and compete
with current petrochemical refineries. In this regard, solutions for improving
raw producer gas quality and process efficiency parameters were defined and
analyzed. First, the addition of superheated steam as a primary measure
allowed the H2 concentration and H2/CO molar ratio of the producer gas to be
increased without compromising the stability of the process. However, this
measure mainly showed potential for the direct (air) gasification of
high-density biomass (e.g., pellets), owing to the need for char accumulation
in the reactor bottom bed to sustain char-steam reforming reactions. Second,
the addition of refuse-derived fuel to the biomass feedstock enhanced the
gasification products, revealing itself as a highly promising strategy for the
economic viability and environmental benefits of future gasification-based
biorefineries, given the high availability and low cost of wastes.
Nevertheless, integrated techno-economic and life cycle analyses must be
performed to fully characterize the process. Third, the application of
low-cost catalysts as a primary measure showed potential by improving producer
gas quality (e.g., H2 and CO concentrations, lower heating value) and process
efficiency parameters with distinct solid materials; in particular, the
application of concrete, synthetic fayalite, and wood pellet chars showed
promising results. Finally, the economic viability of integrating direct (air)
biomass gasification processes into the pulp and paper industry was also
demonstrated, although the determined parameters are still unattractive to
potential investors. In this context, government policies and appropriate
economic instruments are of major relevance to increasing the implementation
of such projects.
This work was funded by The Navigator Company and by National Funds through
the Fundação para a Ciência e a Tecnologia (FCT). Programa Doutoral em
Engenharia da Refinação, Petroquímica e Químic
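Two of the producer-gas quality measures mentioned above, the H2/CO molar ratio and the lower heating value (LHV), can be estimated directly from the gas composition. The sketch below uses approximate literature LHVs per component and a hypothetical composition, not measured data from this work:

```python
# Estimate producer-gas LHV as a volume-weighted sum of component LHVs.
# Component values (MJ/Nm3) are typical literature figures; the example
# composition is hypothetical, not a measurement from the thesis.
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}  # MJ/Nm3, approximate

def producer_gas_lhv(fractions):
    """fractions: volume fractions of combustible species (balance is inert)."""
    return sum(fractions.get(k, 0.0) * v for k, v in LHV.items())

gas = {"H2": 0.12, "CO": 0.16, "CH4": 0.04}  # balance N2/CO2
print(round(producer_gas_lhv(gas), 2), "MJ/Nm3")  # 4.74 MJ/Nm3
print(round(gas["H2"] / gas["CO"], 2))            # H2/CO molar ratio: 0.75
```

Inert species (N2, CO2) contribute nothing to the sum, which is why only combustible fractions appear in the lookup table.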
Extractive desulfurization of fuel oils using ionic liquids
The sulfur content of transportation fuels derived from high-sulfur crude oil must be reduced by desulfurization. Traditional desulfurization methods require harsh reaction conditions and are not very effective at removing refractory sulfur compounds such as benzothiophene (BT), dibenzothiophene (DBT), and 4,6-dimethyldibenzothiophene (4,6-DMDBT). Alternative methods, such as ionic liquid (IL)-mediated desulfurization, are both effective and environmentally friendly. Extractants suitable for desulfurization must be recyclable, insoluble in oil, selective for sulfur-containing compounds, and eco-friendly; pyridinium-based ILs offer these properties. Therefore, the primary objectives of this thesis were to: (1) investigate the properties of N-butylpyridinium tetrafluoroborate ([BPy][BF₄]) and N-carboxymethylpyridinium hydrogen sulfate ([CH₂COOHPy][HSO₄]); (2) understand the effects of reaction parameters (temperature, volume ratio, oxidant dosage, quantity of sulfur compound extracted, etc.) on desulfurization efficiency; (3) clarify the interactions between ILs and sulfur compounds; and (4) investigate the recycling and regeneration of ILs. Experimental results showed that the desulfurization efficiency of [BPy][BF₄] increased with temperature and oxidant dosage and declined with the IL-to-fuel volume ratio. At 30 °C and a 1:1 ratio of IL to model fuel, [BPy][BF₄] could remove up to 79% of DBT in 80 min in the presence of the oxidant H₂O₂. [CH₂COOHPy][HSO₄] was found to be more effective, removing up to 99.9% of DBT in the presence of H₂O₂ within 40 min at 25 °C and a 1:1 ratio of IL to model fuel. The recycled [CH₂COOHPy][HSO₄] lost only marginal effectiveness after eight recycling cycles. It was also found that the effectiveness of both ILs was lower in real diesel than in model fuels.
Computational density functional theory (DFT) structural analysis revealed two types of possible π-π interactions between [BPy][BF₄] and DBT/DBTO₂, resulting in the formation of complexes with different geometries. [CH₂COOHPy][HSO₄] exhibits similar potential π-π interactions with DBT/DBTO₂. Moreover, both ILs follow the same oxidative desulfurization mechanism, involving π-π interactions and hydrogen bonds.
Cultivating Agrobiodiversity in the U.S.: Barriers and Bridges at Multiple Scales
The diversity of crops grown in the United States (U.S.) is declining, causing agricultural landscapes to become more and more simplified. This trend raises concern about the loss of important plant, insect, and animal species, as well as the pollution and degradation of the environment. Through three separate but related studies, this dissertation addresses the need to increase the diversity of U.S. agricultural landscapes, particularly by diversifying the type and number of crops grown. The first study uses multiple openly accessible datasets on agricultural land use and policy to document and visualize change over recent decades. Through this, I show that U.S. agriculture has gradually become more specialized in the crops grown, that crop production is heavily concentrated in certain areas, and that crop diversity is continuing to decline. Meanwhile, federal agricultural policy, while having become more influential over how U.S. agriculture operates, incentivizes this specialization. The second study uses nonlinear statistical modeling to identify and compare the social, political, and ecological factors that best predict crop diversity across nine U.S. regions. Climate, prior land use, and farm inputs best predict diversity across regions, but regions differ in which factors matter most, indicating that patterns at the regional scale both constrain and enable further diversification. Finally, the third study relied on interviews with farmers and key informants in southern Idaho's Magic Valley, a cluster of eight counties known to be agriculturally diverse. The interviews gauge what farmers are currently doing to manage crop diversity (the present) and how they imagine alternative landscapes (the imaginary).
We found that farmers in the Magic Valley manage current diversity mainly through cover cropping and diverse crop rotations, but daily struggles and political barriers make experimenting with and imagining alternative landscapes difficult and unlikely. Together, these three studies provide an integrated view of how and why U.S. agricultural landscapes simplify or diversify, as well as the barriers and bridges along such pathways of diversification.
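Crop diversity of the kind these studies track is commonly quantified with the Shannon diversity index; the sketch below (with hypothetical county acreages) shows how specialization lowers it. The index itself is standard, but its use here is an illustration, not necessarily the dissertation's exact metric:

```python
import math

def shannon_diversity(acres_by_crop):
    """Shannon diversity index H = -sum(p_i * ln p_i), where p_i is the
    share of harvested area planted to crop i; higher H means a more
    diverse crop mix."""
    total = sum(acres_by_crop.values())
    return -sum((a / total) * math.log(a / total)
                for a in acres_by_crop.values() if a > 0)

# hypothetical counties: one specialized, one evenly diversified
specialized = {"corn": 900, "soy": 100}
diversified = {"corn": 250, "soy": 250, "barley": 250, "beans": 250}
print(shannon_diversity(specialized) < shannon_diversity(diversified))  # True
```

For an even split over n crops the index reaches its maximum, ln(n), so the four-crop county above scores ln(4) ≈ 1.39 against roughly 0.33 for the corn-dominated one.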
Investigating and mitigating the role of neutralisation techniques on information security policies violation in healthcare organisations
Healthcare organisations today rely heavily on Electronic Medical Records systems (EMRs), which have become highly crucial IT assets that require significant security efforts to safeguard patients’ information. Individuals who have legitimate access to an organisation’s assets to perform their day-to-day duties but intentionally or unintentionally violate information security policies can jeopardise their organisation’s information security efforts and cause significant legal and financial losses. In the information security (InfoSec) literature, several studies emphasised the necessity to understand why employees behave in ways that contradict information security requirements but have offered widely different solutions. In an effort to respond to this situation, this thesis addressed the gap in the information security academic research by providing a deep understanding of the problem of medical practitioners’ behavioural justifications to violate information security policies and then determining proper solutions to reduce this undesirable behaviour. Neutralisation theory was used as the theoretical basis for the research. This thesis adopted a mixed-method research approach that comprises four consecutive phases, and each phase represents a research study that was conducted in light of the results from the preceding phase. The first phase of the thesis started by investigating the relationship between medical practitioners’ neutralisation techniques and their intention to violate information security policies that protect a patient’s privacy. A quantitative study was conducted to extend the work of Siponen and Vance [1] through a study of the Saudi Arabia healthcare industry. The data was collected via an online questionnaire from 66 Medical Interns (MIs) working in four academic hospitals. 
The study found that six neutralisation techniques significantly contribute to the justifications of the MIs for hypothetically violating information security policies: (1) appeal to higher loyalties, (2) defence of necessity, (3) the metaphor of the ledger, (4) denial of responsibility, (5) denial of injury, and (6) condemnation of the condemners. The second phase of this research used a series of semi-structured interviews with IT security professionals in one of the largest academic hospitals in Saudi Arabia to explore the environmental factors that motivated the medical practitioners to evoke various neutralisation techniques. The results revealed that social, organisational, and emotional factors all stimulated the behavioural justifications to breach information security policies. During these interviews, it became clear that the IT department needed to ensure that security policies fit the daily tasks of the medical practitioners by providing alternative solutions to ensure the effectiveness of those policies. Based on these interviews, the objective of the following two phases was to improve the effectiveness of InfoSec policies against the use of behavioural justification by engaging the end users in the modification of existing policies via a collaborative writing process. Those two phases were conducted in the UK and Saudi Arabia to determine whether the collaborative writing process could produce a more effective security policy that balanced the security requirements with daily business needs, thus leading to a reduction in the use of neutralisation techniques to violate security policies. The overall result confirmed that the involvement of the end users via a collaborative writing process positively improved the effectiveness of the security policy in mitigating individual behavioural justifications, showing that the process is a promising way to enhance security compliance.
Robustness against adversarial attacks on deep neural networks
While deep neural networks have been successfully applied in several different domains, they are vulnerable to artificially crafted perturbations in data. Moreover, these perturbations have been shown to be transferable: the same perturbation can fool different models. In response to this problem, many robust learning approaches have emerged. Adversarial training is regarded as the mainstream approach to enhancing the robustness of deep neural networks with respect to norm-constrained perturbations. However, adversarial training requires a large number of perturbed examples (e.g., over 100,000 for the MNIST dataset) before robustness is considerably enhanced, which is problematic given the large computational cost of generating attacks. Developing computationally efficient approaches that retain robustness against norm-constrained perturbations remains a challenge in the literature.
In this research we present two novel robust training algorithms based on Monte-Carlo Tree Search (MCTS) [1] to enhance robustness under norm-constrained perturbations [2, 3]. The first algorithm searches for potential candidates with the Scale-Invariant Feature Transform (SIFT) method and makes decisions with MCTS [2]. The second algorithm adopts a Decision Tree Search (DTS) method to accelerate the search process while maintaining efficiency [3]. Our overarching objective is to provide computationally efficient approaches for training deep neural networks that are robust against perturbations in data. We illustrate the robustness of these algorithms by studying their resistance to adversarial examples on the MNIST and CIFAR10 datasets. For MNIST, the results showed an average training effort saving of 21.1% compared to Projected Gradient Descent (PGD) and 28.3% compared to the Fast Gradient Sign Method (FGSM). For CIFAR10, we obtained an average efficiency improvement of 9.8% compared to PGD and 13.8% compared to FGSM. The results suggest that the two methods introduced here are not only robust to norm-constrained perturbations but also efficient during training.
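The norm-constrained perturbations referred to here can be illustrated with the baseline FGSM attack used in the comparisons; this is a generic NumPy sketch (the toy input, gradient, and epsilon value are illustrative, not the thesis's experimental setup):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """Fast Gradient Sign Method: shift each input coordinate by epsilon
    in the direction that increases the loss, producing a perturbation
    bounded by epsilon in the L-infinity norm."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixel values in a valid range

# toy example: a flat "image" and a synthetic loss gradient
x = np.full(4, 0.5)
grad = np.array([0.2, -0.7, 0.0, 1.3])
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
print(x_adv)  # each coordinate moved by at most epsilon
```

Because only the sign of the gradient is used, the attack needs a single backward pass, which is exactly why iterative methods such as PGD (repeated small FGSM steps with projection) cost so much more during adversarial training.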
Regarding the transferability of defences, our experiments [4] reveal that across different network architectures, across a variety of attack methods from white-box to black-box, and across datasets including MNIST and CIFAR10, our algorithms outperform other state-of-the-art methods, e.g., PGD and FGSM. Furthermore, the derived attacks and robust models obtained within our framework are reusable, in the sense that the same norm-constrained perturbations can facilitate robust training across different networks. Lastly, we investigate intra-technique and cross-technique transferability and their relation to factors ranging from adversarial strength to network capacity. The results suggest that known attacks on the resulting models are less transferable than attacks on models trained with other state-of-the-art algorithms.
Our results suggest that exploiting these tree search frameworks can result in significant improvements in the robustness of deep neural networks while saving computational cost on robust training. This paves the way for several future directions, both algorithmic and theoretical, as well as numerous applications to establish the robustness of deep neural networks with increasing trust and safety.
Optimizing transcriptomics to study the evolutionary effect of FOXP2
The field of genomics was established with the sequencing of the human genome, a pivotal achievement that has allowed us to address various questions in biology from a unique perspective. One question in particular, that of the evolution of human speech, has gripped philosophers, evolutionary biologists, and now genomicists. However, little is known of the genetic basis that allowed humans to evolve the ability to speak. Of the few genes implicated in human speech, one of the most studied is FOXP2, which encodes for the transcription factor Forkhead box protein P2 (FOXP2). FOXP2 is essential for proper speech development and two mutations in the human lineage are believed to have contributed to the evolution of human speech. To address the effect of FOXP2 and investigate its evolutionary contribution to human speech, one can utilize the power of genomics, more specifically gene expression analysis via ribonucleic acid sequencing (RNA-seq).
To this end, I first contributed to developing mcSCRB-seq, a highly sensitive, powerful, and efficient single-cell RNA-seq (scRNA-seq) protocol. Having emerged as a central method for studying cellular heterogeneity and identifying cellular processes, scRNA-seq was a powerful genomic tool but lacked the sensitivity and cost-efficiency of more established protocols. By systematically evaluating each step of the process, I helped find that the addition of polyethylene glycol increased sensitivity by enhancing the cDNA synthesis reaction. This, along with other optimizations, resulted in a sensitive and flexible protocol that is cost-efficient and ideal in many research settings.
A primary motivation driving the extensive optimization of single-cell transcriptomics has been the generation of cellular atlases, which aim to identify and characterize all of the cells in an organism. As such efforts are carried out by a variety of research groups using a number of different RNA-seq protocols, I contributed to an effort to benchmark and standardize scRNA-seq methods. This not only identified methods that may be ideal for cell atlas creation, but also highlighted optimizations that could be integrated into existing protocols.
Using mcSCRB-seq as a foundation, together with the findings from the scRNA-seq benchmarking, I helped develop prime-seq, a sensitive, robust, and, most importantly, affordable bulk RNA-seq protocol. Bulk RNA-seq was frequently overlooked during the efforts to optimize and establish single-cell techniques, even though the method is still extensively used to analyze gene expression. Introducing early barcoding and reducing library generation costs kept prime-seq cost-efficient, while basing it on single-cell methods ensured that it would be a sensitive and powerful technique. I helped verify this by benchmarking it against TruSeq-generated data and helped test its robustness by generating prime-seq libraries from over seventeen species. These optimizations resulted in a final protocol well suited for investigating gene expression in comprehensive, high-throughput studies.
Finally, I utilized prime-seq in order to develop a comprehensive gene expression atlas to study the function of FOXP2 and its role in speech evolution. I used previously generated mouse models: a knockout model containing one non-functional Foxp2 allele and a humanized model, which has a variant Foxp2 allele with two human-specific mutations. To study the effect globally across the mouse, I helped harvest eighteen tissues which were previously identified to express FOXP2. By then comparing the mouse models to wild-type mice, I helped highlight the importance of FOXP2 within lung development and the importance of the human variant allele in the brain.
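The knockout-versus-wild-type comparisons described here come down to per-gene expression contrasts; below is a minimal log2 fold-change sketch with hypothetical genes and counts, not the actual atlas data:

```python
import math

# Toy differential-expression comparison between a knockout and a
# wild-type sample. Gene names and normalized counts are hypothetical.
wild_type = {"Foxp2": 120.0, "GeneA": 40.0, "GeneB": 75.0}
knockout  = {"Foxp2": 60.0,  "GeneA": 42.0, "GeneB": 150.0}

def log2_fold_change(ko, wt, pseudo=1.0):
    """log2((ko + pseudo) / (wt + pseudo)) per gene; the pseudo-count
    avoids taking the log of zero for undetected genes."""
    return {g: math.log2((ko[g] + pseudo) / (wt[g] + pseudo)) for g in wt}

lfc = log2_fold_change(knockout, wild_type)
# Foxp2 expression halves in the knockout: log2(61/121) is about -0.99
print({g: round(v, 2) for g, v in lfc.items()})
```

Real pipelines add normalization, replicates, and statistical testing on top of this contrast, but the fold change is the quantity being compared across the eighteen tissues.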
Both mcSCRB-seq and prime-seq have already been used and published in numerous studies addressing a variety of biological and biomedical questions. Additionally, my work on FOXP2 not only provides a thorough expression atlas, but also a detailed and cost-efficient plan for undertaking similar studies on other genes of interest. Lastly, the studies on FOXP2 in this work lay the foundation for future investigations of the role of FOXP2 in modulating learning behavior, and thereby affecting human speech.
Information Flow Guided Synthesis
Compositional synthesis relies on the discovery of assumptions, i.e., restrictions on the behavior of the remainder of the system that allow a component to realize its specification.
In order to avoid losing valid solutions, these assumptions should be necessary conditions for realizability. However, because there are typically many different behaviors that realize the same specification, necessary behavioral restrictions often do not exist.
In this paper, we introduce a new class of assumptions for compositional synthesis, which we call information flow assumptions. Such assumptions capture an essential aspect of distributed computing, because components often need to act upon information that is available only in other components. The presence of a certain flow of information is therefore often a necessary requirement, while
the actual behavior that establishes the information flow is unconstrained.
In contrast to behavioral assumptions, which are properties of individual computation traces, information flow assumptions are hyperproperties, i.e., properties of sets of traces. We present a method for the automatic derivation of information flow assumptions from a temporal logic specification of the system. We then provide a technique for the automatic synthesis of component implementations based on information flow assumptions. This provides a new compositional approach to the synthesis of distributed systems. We report on encouraging first experiments with the approach, carried out with the BoSyHyper synthesis tool.
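The distinction between trace properties and hyperproperties can be made concrete with a toy check; the traces and the noninterference-style condition below are illustrative, not the paper's actual specifications:

```python
# A behavioral (trace) property is checked one trace at a time; a
# hyperproperty, such as an information-flow condition, must inspect
# the whole set of traces at once.

def always_nonneg(trace):
    """Trace property: every output in this single trace is non-negative."""
    return all(out >= 0 for out in trace["out"])

def output_determined_by_low_input(traces):
    """Hyperproperty (noninterference-style): any two traces that agree
    on the low-security input also agree on the output, i.e. nothing
    flows from the high input to the output."""
    for t1 in traces:
        for t2 in traces:
            if t1["low"] == t2["low"] and t1["out"] != t2["out"]:
                return False
    return True

traces = [
    {"low": 0, "high": 0, "out": [1]},
    {"low": 0, "high": 1, "out": [1]},  # high input does not change output
    {"low": 1, "high": 0, "out": [2]},
]
print(all(always_nonneg(t) for t in traces))   # True
print(output_determined_by_low_input(traces))  # True
```

Note that the hyperproperty cannot be evaluated on any single trace in isolation, which is exactly why behavioral assumptions are too weak to express a required flow of information.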
The Relationship Between Climate Change and Food Insecurity In Sub-Saharan Africa
Magister Artium (Development Studies) - MA(DVS)
According to the research conducted for this thesis, climate change has the
potential to be a hazard to food security not only in South Africa but across
most of Sub-Saharan Africa. The threat concerns food distribution and
consumption, as well as agricultural productivity. Food security is impacted
by the global warming driven by climate change, which affects the supply of
food, its accessibility, how it is utilized, and whether people can afford it.
The only way to mitigate the dangers is through an integrated policy approach
that protects fertile land from the effects of global warming. The key point
presented here is that Sub-Saharan Africa has all the resources necessary to
adapt to climate change and secure food supplies; nevertheless, it is critical
that the hazards various agricultural products face because of global warming
are first recognized. However, many emerging countries face significant
challenges owing to a lack of robust institutions, making policy changes
difficult. The influence on food security will be significant and can be
broken down into three categories: availability, access, and use. Systematic
peer-reviewed literature reviews of climate change and food security research
were undertaken using the realist review approach as the methodology for this
study. In order to alleviate the region's acute food insecurity, adaptation
approaches were thoroughly investigated. This relates to development
challenges, where adaptation is necessary to mitigate negative effects and
improve the population's ability to participate in development processes.
Finances are also a concern for poorer countries such as South Africa, because
there is a disparity between the cost of adaptation and government subsidies.
The remedy could come in the form of technological interventions that make
food systems less vulnerable to these dangers.