Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and the refinement of a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials was developed from the results of this project.
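To make the element-layout idea concrete, the following is a minimal sketch of the kind of raster-trace g-code a layer-by-layer layout generator emits. This is not the dissertation's tool; all parameters (bead width, feed rate, extrusion factor, layer height) are invented for illustration.

```python
# Hedged sketch: emit G-code for one layer of parallel raster traces, the
# deposited-element primitive described above. Parameters are illustrative.

def raster_layer(x0, y0, width, height, bead_w, z, feed=1800, e_per_mm=0.05):
    """Yield G-code lines for a back-and-forth raster fill of a rectangle."""
    lines = [f"G1 Z{z:.2f} F300"]          # move to layer height
    e = 0.0                                 # cumulative extrusion
    y = y0
    left_to_right = True
    while y <= y0 + height + 1e-9:
        x_start, x_end = (x0, x0 + width) if left_to_right else (x0 + width, x0)
        lines.append(f"G0 X{x_start:.2f} Y{y:.2f}")        # travel move
        e += abs(x_end - x_start) * e_per_mm               # extrude along trace
        lines.append(f"G1 X{x_end:.2f} Y{y:.2f} E{e:.4f} F{feed}")
        y += bead_w                                        # step to next bead
        left_to_right = not left_to_right                  # alternate direction
    return lines

gcode = raster_layer(0, 0, 20, 10, bead_w=0.5, z=0.2)
```

Designing the layout then amounts to choosing, per region, the trace orientation and spacing within the process limits.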
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state-of-the-art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control. It also
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning and control. The paper also discusses
the potential benefits of integrating AI, soft robotics, and data-driven
methods to enhance the performance and robustness of SHR systems. Finally, the
paper identifies several open research questions in the field and highlights
the need for further research and development efforts to advance SHR
technologies to meet the challenges of global food production. Overall, this
paper provides a starting point for researchers and practitioners interested in
developing SHRs and highlights the need for more research in this field.
Preprint: to appear in the Journal of Field Robotics.
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using the virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, allowing
users, scholars, and entrepreneurs to gain an in-depth understanding of the
Metaverse ecosystem and to identify their opportunities for contribution.
Genetic population structure of the precious coral Corallium japonicum in the Northwest Pacific
Population sizes of the Japanese red coral Corallium japonicum have been severely affected by poaching and overfishing. Although genetic structure and connectivity patterns are considered important parameters for conservation strategies, few studies have focused on the population genetics of C. japonicum in the Northwest Pacific. We therefore examined the genetic population structure of C. japonicum in this region using restriction-site-associated DNA sequencing (RAD-seq), which identifies genome-wide single-nucleotide polymorphisms (SNPs) and can reveal detailed within-species genetic variation. Using the variable SNP loci identified from this analysis, we evaluated the population-level genetic diversity and patterns of gene flow among multiple populations of C. japonicum around Japan. The genetic analysis showed that gene flow is widely maintained across the geographic range examined in this study, but in combination with larval dispersal simulations it revealed several populations that were genetically distinct from the others, suggesting geographically limited gene flow. This information will be useful for designing effective management schemes for C. japonicum, which is under threat from overfishing
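Population differentiation of the kind described here is commonly summarized with statistics such as Wright's F_ST over SNP allele frequencies. The sketch below is illustrative only, not the authors' RAD-seq pipeline; the allele frequencies are made up.

```python
# Hedged sketch: Wright's F_ST = (H_T - H_S) / H_T for biallelic SNPs,
# given alternate-allele frequencies in two populations (toy values).

def fst(p1, p2):
    """Mean F_ST over loci; p1, p2 are lists of alt-allele frequencies."""
    vals = []
    for a, b in zip(p1, p2):
        p_bar = (a + b) / 2
        h_t = 2 * p_bar * (1 - p_bar)              # expected het., pooled
        h_s = (2*a*(1-a) + 2*b*(1-b)) / 2          # mean within-pop het.
        if h_t > 0:                                # skip monomorphic loci
            vals.append((h_t - h_s) / h_t)
    return sum(vals) / len(vals)

# identical populations -> F_ST = 0; fixed differences -> F_ST = 1
print(fst([0.3, 0.5], [0.3, 0.5]))  # 0.0
print(fst([0.0, 1.0], [1.0, 0.0]))  # 1.0
```

Low F_ST between sites is consistent with maintained gene flow; elevated F_ST flags the genetically distinct populations.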
Intelligent Control Schemes for Maximum Power Extraction from Photovoltaic Arrays under Faults
Investigation of the power output from PV arrays under different fault conditions is essential for enhancing the performance of a photovoltaic system under all operating conditions. Significant reduction in power output can occur during various PV faults such as module disconnection, bypass diode failure, bridge fault, and short-circuit fault under non-uniform shading conditions. These faults may cause several peaks in the characteristic curve of PV arrays, which can lead to failure of the MPPT control strategy. The impact of a fault can also differ depending on the type of PV array, making control of the system more complex. Therefore, selecting suitable PV arrays together with an effective control design is necessary for maximum power output from a PV system. To this end, this paper presents a comparative study of two intelligent control schemes, i.e., fuzzy logic (FL) and particle swarm optimization (PSO), against a conventional control scheme known as perturb and observe (P&O) for power extraction from a PV system. The comparative analysis is based on the performance of the control strategies under several faults and two types of PV modules, i.e., monocrystalline and thin-film PV arrays. Numerical analysis of complex fault scenarios, such as multiple faults under partial shading, has also been performed. Unlike previous literature, this study reveals the performance of FL-, PSO-, and P&O-based MPPT strategies in tracking maximum peak power during multiple severe fault conditions while considering the accuracy and tracking speed of the control techniques. A thorough analysis along with in-depth quantitative data is presented, confirming the superiority of intelligent control techniques under multiple faults and different PV types
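The baseline P&O algorithm compared here can be sketched in a few lines. The quadratic power curve below is a toy stand-in for a real PV characteristic; under partial shading the true curve has multiple peaks and plain P&O can lock onto a local one, which is the failure mode motivating the FL and PSO schemes.

```python
# Hedged sketch of perturb-and-observe (P&O) MPPT on a toy P-V curve.

def pv_power(v):
    """Toy single-peak P-V curve with its maximum power point at V = 30."""
    return max(0.0, 100 - (v - 30) ** 2 * 0.2)

def perturb_and_observe(v0, step=0.5, iters=200):
    """Perturb the operating voltage; reverse direction when power drops."""
    v, p = v0, pv_power(v0)
    direction = 1
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:                # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe(v0=20.0)   # converges near V = 30
```

At steady state the operating point oscillates around the peak by one step; with several peaks, the converged point depends on the starting voltage, hence the interest in global searchers like PSO.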
Thermodynamic Assessment and Optimisation of Supercritical and Transcritical Power Cycles Operating on CO2 Mixtures by Means of Artificial Neural Networks
Feb 21, 2022 to Feb 24, 2022, San Antonio, TX, United States
Closed supercritical and transcritical power cycles operating on carbon dioxide have proven to be a promising technology for power generation and, as such, they are being researched by numerous international projects today. Despite the advantageous features of these cycles, which enable very high efficiencies in intermediate-temperature applications, the major shortcoming of the technology is a strong dependence on ambient temperature: in order to perform compression near the CO2 critical point (31 °C), low ambient temperatures are needed. This is particularly challenging in Concentrated Solar Power applications, typically found in hot, semi-arid locations.
To overcome this limitation, the SCARABEUS project explores the idea of blending raw carbon dioxide with small amounts of certain dopants in order to shift the critical temperature of the resulting working fluid to higher values, hence enabling gaseous compression near the critical point or even liquid compression regardless of a high ambient temperature. Different dopants have been studied within the project so far (i.e. C6F6, TiCl4 and SO2) but the final selection will have to account for trade-offs between thermodynamic performance, economic metrics and system reliability.
Bearing all this in mind, the present paper deals with the development of a non-physics-based model, built with Artificial Neural Networks (ANNs) in Matlab's Deep Learning Toolbox, that enables SCARABEUS system optimisation without running the detailed and extremely time-consuming thermal models developed with Thermoflex and Matlab software.
In the first part of the paper, the candidate dopants and cycle layouts are presented and discussed, and a thorough description of the ANN training methodology is provided, along with all the main assumptions and hypotheses made.
In the second part of the manuscript, the results confirm that the ANN is a reliable tool capable of successfully reproducing the detailed Thermoflex model, estimating the cycle thermal efficiency with a Root Mean Square Error lower than 0.2 percentage points. Furthermore, the great advantage of the proposed Artificial Neural Network is demonstrated by the huge reduction in computational time, up to 99 % lower than that of the detailed model. Finally, the high flexibility and versatility of the ANN is shown by applying the tool in different scenarios and estimating cycle thermal efficiency for a great variety of boundary conditions.
Unión Europea H2020-81498
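The surrogate-model idea can be illustrated with a tiny network trained against a stand-in for the detailed model. Everything below is an assumption for illustration: the made-up efficiency-vs-ambient-temperature function replaces the Thermoflex model, and the pure-Python finite-difference training replaces Matlab's Deep Learning Toolbox.

```python
import math, random

# Toy "detailed model": cycle efficiency vs ambient temperature (invented shape).
def detailed_model(t_amb):
    return 0.50 - 0.0004 * (t_amb - 15.0) ** 2

xs = list(range(0, 41, 2))                                # ambient T in deg C
data = [((t - 20.0) / 20.0, detailed_model(t)) for t in xs]  # scaled inputs

H = 4                                                     # hidden tanh units
random.seed(1)
params = [random.uniform(-0.5, 0.5) for _ in range(3 * H + 1)]

def predict(p, x):
    """1-input, H-hidden-unit, 1-output feedforward net."""
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[-1]
    return sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(H)) + b2

def mse(p):
    return sum((predict(p, x) - y) ** 2 for x, y in data) / len(data)

loss0 = mse(params)
lr, eps = 0.1, 1e-5
for _ in range(500):                      # finite-difference gradient descent
    grad = []
    for k in range(len(params)):
        q = params[:]; q[k] += eps; up = mse(q)
        q[k] -= 2 * eps; dn = mse(q)
        grad.append((up - dn) / (2 * eps))
    params = [p - lr * g for p, g in zip(params, grad)]
loss1 = mse(params)                        # far below the initial loss
```

Once trained, evaluating the surrogate costs a handful of arithmetic operations per query, which is the source of the reported ~99 % reduction in optimisation time.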
Computed tomography-based assessment of aortic valve calcification and its association with complications after transcatheter aortic valve implantation (TAVI)
Background: Severe aortic valve calcification (AVC) has generally been recognized as a key factor in the occurrence of adverse events after transcatheter aortic valve implantation (TAVI). To date, however, a consensus on a standardized calcium detection threshold for aortic valve calcium quantification in contrast-enhanced computed tomography angiography (CTA) is still lacking. The present thesis aimed at comparing two different approaches for quantifying AVC in CTA scans based on their predictive power for adverse events and survival after a TAVI procedure.
Methods: The extensive dataset of this study included 198 characteristics for each of the 965 prospectively included patients who had undergone TAVI between November 2012 and December 2019 at the German Heart Center Berlin (DHZB). AVC quantification in CTA scans was performed at a fixed Hounsfield Unit (HU) threshold of 850 HU (HU 850 approach) and at a patient-specific threshold, where the HU threshold was set by multiplying the mean luminal attenuation of the ascending aorta by 2 (+100 % HUAorta approach). The primary endpoint of this study consisted of a combination of post-TAVI outcomes (paravalvular leak ≥ mild, implant-related conduction disturbances, 30-day mortality, post-procedural stroke, annulus rupture, and device migration). The Akaike information criterion was used to select variables for the multivariable regression model. Multivariable analysis was carried out to determine the predictive power of the investigated approaches.
Results: Multivariable analyses showed that a fixed threshold of 850 HU (calcium volume cut-off 146 mm³) was unable to predict the composite clinical endpoint post-TAVI (OR=1.13, 95 % CI 0.87 to 1.48, p=0.35). In contrast, the +100 % HUAorta approach (calcium volume cut-off 1421 mm³) enabled independent prediction of the composite clinical endpoint post-TAVI (OR=2, 95 % CI 1.52 to 2.64, p=9.2×10⁻⁷). No significant difference in the Kaplan-Meier survival analysis was observed for either of the approaches.
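The two detection rules compared here reduce to simple arithmetic over the CTA voxels. The sketch below uses made-up HU values and voxel volume purely for illustration; it is not the thesis's quantification software.

```python
# Hedged sketch of the two calcium-detection thresholds compared above.

def fixed_threshold():
    return 850.0                               # HU 850 approach

def patient_specific_threshold(mean_aortic_hu):
    """+100 % HU_Aorta approach: twice the mean luminal attenuation."""
    return 2.0 * mean_aortic_hu

def calcium_volume(voxel_hu, voxel_vol_mm3, threshold):
    """Sum the volume of voxels at or above the detection threshold."""
    return sum(voxel_vol_mm3 for hu in voxel_hu if hu >= threshold)

# Toy data: mean luminal attenuation 400 HU -> patient threshold 800 HU.
voxels = [300, 820, 900, 1200, 760]            # invented aortic-valve voxel HUs
thr = patient_specific_threshold(400.0)        # 800.0
vol = calcium_volume(voxels, 0.5, thr)         # 3 voxels x 0.5 mm^3 = 1.5 mm^3
```

The resulting volume is then compared against the approach-specific cut-off (146 mm³ or 1421 mm³) when predicting the composite endpoint.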
Conclusions: The patient-specific calcium detection threshold +100 % HUAorta is more predictive of the post-TAVI adverse events included in the combined clinical endpoint than the fixed HU 850 approach. For the +100 % HUAorta approach, a calcium volume cut-off of 1421 mm³ of the aortic valve had the highest predictive value.
Acute appendicitis in children and adolescents: new diagnostic methods for the pretherapeutic differentiation of histopathological entities to support conservative treatment strategies
The studies summarized here were motivated by current evidence suggesting that clinically uncomplicated, histopathologically phlegmonous appendicitis and clinically complicated, histopathologically gangrenous appendicitis are independent entities, which can be assigned to different treatment options (conservative vs. surgical). Against this background, one aim of the work was to investigate how the forms of acute appendicitis in children and adolescents can be distinguished before therapy begins.
Differences between patients with uncomplicated, phlegmonous appendicitis and those with complicated (gangrenous and perforating) appendicitis can be demonstrated both in laboratory diagnostics (P1 and P2) and in ultrasound (P3). On their own, however, these differences do not provide sufficient discriminatory power for confident treatment decisions. Applying artificial-intelligence methods to examiner-independent diagnostic parameters (P4) further increased the predictive accuracy for acute appendicitis. A differential gene expression analysis (P5) yielded interesting results regarding the distinct pathomechanisms of the two inflammatory entities. In a proof-of-concept study, the previously described artificial-intelligence methods were applied to the gene expression data (P6), demonstrating in a model that the entities can in principle be differentiated with the new method.
A medium-term goal is to define a biomarker signature whose predictive power derives from a computer algorithm, enabling rapid treatment decisions. Ideally, this biomarker signature should be safe, objective, and easy to determine, and should offer higher diagnostic certainty than the current workup of history, examination, laboratory analysis, and ultrasound.
The long-term goal of follow-up studies is to identify a biomarker signature with the best possible predictive power. For routine clinical diagnostics, PCR-based point-of-care devices are conceivable, using a limited number of primers for a biomarker signature with high predictive power; the resulting biomarker would derive its diagnostic value from an easy-to-use computer algorithm. The combination of gene expression analysis with artificial-intelligence methods can thus form the basis of a new diagnostic instrument for the reliable differentiation of appendicitis entities.
Norwegian raw cow's milk: a source of zoonotic pathogens?
The worldwide emerging trend of eating "natural", unprocessed foods also
applies to beverages. According to Norwegian legislation, all milk must be
pasteurized before commercial sale, but drinking milk that has not been
heat-treated is gaining popularity. Scientists warn against this trend and
highlight the risk of contracting disease from milk-borne microorganisms.
To examine the potential risks associated with drinking
unpasteurized milk in Norway, milk- and environmental samples were
collected from dairy farms located in south-east of Norway. The samples
were analyzed for the presence of specific zoonotic pathogens: Listeria
monocytogenes, Campylobacter spp., and Shiga toxin-producing Escherichia
coli (STEC). Cattle are known to be healthy carriers of these pathogens, and
Campylobacter spp. and STEC have a low infectious dose, meaning that
infection can be established by ingesting a small number of bacterial cells.
L. monocytogenes causes listeriosis, one of the most severe foodborne
zoonotic diseases, which has a high fatality rate. All three pathogens have
caused milk-borne disease outbreaks worldwide, including in Norway.
During this work, we observed that the prevalence of the three examined
bacteria was high in the environment at the sampled farms. In addition, 7%
of the milk filters were contaminated by STEC, 13% by L. monocytogenes, and
4% by Campylobacter spp. Four of the STEC isolates detected were
eae-positive, a trait associated with the capability to cause severe human
disease. One of the eae-positive STEC isolates was collected from a milk
filter, which strongly indicates that Norwegian raw milk may contain
potentially pathogenic STEC.
To further assess the possibility of becoming ill from STEC after consuming
raw milk, we examined the growth of the four eae-positive STEC isolates in raw milk at different temperatures. All four isolates seemed to have the ability to multiply in raw milk at 8°C, and one isolate showed significant growth after 72 hours. Incubation at 6°C seemed to reduce the number of bacteria during the
first 24 hours, after which the decline stopped. These findings highlight the
importance of stable refrigerator temperatures, preferably < 4°C, for storage
of raw milk.
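Growth observations like these rest on standard exponential-growth arithmetic. The sketch below uses invented plate counts, not the study's data, to show how a specific growth rate and doubling time follow from two counts.

```python
import math

# Hedged sketch: exponential-growth arithmetic from two plate counts,
# assuming N(t) = N0 * exp(mu * t). Counts below are made up.

def growth_rate(n0, n1, hours):
    """Specific growth rate mu (per hour) between counts n0 and n1."""
    return math.log(n1 / n0) / hours

def doubling_time(mu):
    """Time for the population to double at rate mu."""
    return math.log(2) / mu

mu = growth_rate(1e2, 1e4, 72.0)   # 100 -> 10,000 CFU/mL over 72 h (invented)
td = doubling_time(mu)             # roughly 10.8 h at these toy numbers
```

A negative mu (n1 < n0) describes the initial die-off seen at 6°C; the point of cold storage below 4°C is to keep mu at or below zero.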
The L. monocytogenes isolates collected during this study showed genetic
similarities to isolates collected from urban and rural environmental
locations, but different clones were predominant in agricultural
environments compared with clinical and food environments. The results also
indicate that the same clone can persist on a farm over time, and that milk
can be contaminated by L. monocytogenes clones present in the farm
environment.
Despite testing small volumes (25 mL) of milk, we were able to isolate both
STEC and Campylobacter spp. directly from raw milk. Three percent of the
bulk tank milk and teat milk samples were contaminated by Campylobacter
spp., and one STEC isolate was recovered from bulk tank milk. L.
monocytogenes was detected neither in bulk tank milk nor in teat milk samples.
Agricultural developments over the past decades have led to larger
production units and new food safety challenges. Dairy cattle production in
Norway is currently transitioning from tie-stall housing with conventional
pipeline milking systems to modern loose-housing systems with robotic
milking. The occurrence of the three pathogens in this project was higher in
samples collected from farms with loose housing than in those with
tie-stall housing.
Pasteurization of cow's milk is a risk-reducing measure that protects
consumers from microbial pathogens, and in most EU countries commercial
distribution of unpasteurized milk is legally restricted. Together, the
results presented in this thesis show that animal housing may influence the
level of pathogenic bacteria in raw milk, and that ingestion of Norwegian
raw cow's milk may expose consumers to pathogenic bacteria that can cause
severe disease, especially in children, the elderly, and persons with
underlying conditions. The results also highlight the importance of storing raw milk at low
temperatures between milking and consumption.
Big Tech and research funding: A bibliometric approach
Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics
Technology companies have radically transformed our daily lives in recent years through the wide use of the internet. While transforming our lives, these companies have also grown even bigger and more powerful, not only financially but also in terms of computing power and data. Although much research has examined the influence of large digital economy players (Big Tech) in different fields, the academic influence of these companies is little understood. Drawing on 130,000 academic papers for which there is evidence of Big Tech support, the present work applies bibliometric approaches (to the metadata) and text mining techniques (to the contents) to shed light on the outcomes of this relationship. In particular, we consider research funding (direct strategies) and conference sponsorships (indirect strategies) to empirically explore this relatively unexplored side of Big Tech's influence on contemporary society. A key limitation in developing the analysis was the scarcity of prior work exploring the connections between digital platforms and the scientific enterprise. Several results come to light from this perspective; one finding is that, among the research supported by Big Tech companies, there is a large gap between the number of outputs concerned with technical topics (such as machine learning or artificial intelligence) and those addressing reflexive (say, ethical or environmental) dimensions of innovation, with the latter being very small. These findings may stimulate further inquiries into identifying the possible risks, if any, generated by the direct and indirect financial support that corporate informational giants provide to academia.
The causes and consequences of this non-market activity by companies with such market power may require further attention and research.
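The technical-vs-reflexive split described above can be approximated with simple keyword matching over abstracts. The sketch below is an assumption-laden illustration, not the thesis's text-mining pipeline; the keyword lists and sample abstracts are invented.

```python
# Hedged sketch: label papers "technical" or "reflexive" by keyword counts.
# Lexicons and sample texts are illustrative only.
TECH = {"machine learning", "artificial intelligence", "neural", "optimization"}
REFLEXIVE = {"ethics", "ethical", "environmental", "fairness", "accountability"}

def classify(abstract):
    text = abstract.lower()
    t = sum(k in text for k in TECH)          # technical keyword hits
    r = sum(k in text for k in REFLEXIVE)     # reflexive keyword hits
    return "technical" if t > r else "reflexive" if r > t else "mixed"

papers = [
    "A neural approach to machine learning benchmarks",
    "Ethical and environmental dimensions of AI deployment",
    "Optimization of transformer inference",
]
counts = {}
for p in papers:
    label = classify(p)
    counts[label] = counts.get(label, 0) + 1   # tally labels per corpus
```

Aggregating such labels over a funded-paper corpus yields the kind of gap (many technical outputs, few reflexive ones) the work reports, though real pipelines would use topic models rather than fixed lexicons.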