Effects of ovarian fluid on sperm velocity in Arctic charr (Salvelinus alpinus)
A number of studies in externally fertilizing fish species provide evidence for an effect of ovarian fluid on sperm motility characteristics such as duration of forward motility, velocity or percentage of motile sperm cells. Yet, because of variation among females in the quality of their ovarian fluid, such effects might differ between individuals. Additionally, ovarian fluid from different females could also be expected to affect each ejaculate differently, resulting in cryptic female choice. In this study on Arctic charr (Salvelinus alpinus), the velocity of sperm from several males was measured in the diluted ovarian fluid of several females according to a fully balanced crossing design. This design allowed us to estimate variation among females in the effect of their ovarian fluid on the velocity of sperm from different males, and to detect variation among males in the ability of their sperm to swim in ovarian fluid. Sperm velocity was estimated by computer-assisted sperm analysis. Average velocity was found to vary among females, with some females consistently yielding higher velocity measurements in their ovarian fluid, and among males, indicating that some males had overall faster sperm in ovarian fluid than others. Moreover, variation in sperm velocity was shown to depend on individual female-male interactions. Our results document that females vary in the effect of their ovarian fluid on sperm velocity and that their ovarian fluid may stimulate sperm velocity according to individual characteristics of males. This latter result suggests a potential mechanism for cryptic female choice.
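The fully balanced crossing design described above lends itself to a two-way analysis with male, female and their interaction as factors. Below is a minimal sketch of such an analysis, assuming a hypothetical CSV of CASA velocity measurements with columns male, female and velocity; the study's actual variance-component analysis may differ.

    # Minimal sketch: two-way ANOVA with a male x female interaction for CASA velocity data.
    # File name and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    data = pd.read_csv("sperm_velocity.csv")  # columns: male, female, velocity

    # Male and female IDs are categorical factors in a fully balanced crossed design.
    model = ols("velocity ~ C(male) * C(female)", data=data).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    print(anova_table)  # main effects of male and female, plus the male:female interaction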
Optimal operating strategies of the micro-CHP for improved interaction between the electrical and thermal demand and supply
The Research Centre on Zero Emission Buildings (ZEB) has the goal of eliminating the greenhouse gas emissions associated with all phases of building development and use. This is to be achieved through more sustainable building construction and more efficient energy use. The Norwegian government has a similar goal of making zero energy buildings the standard by 2020, which has prompted thorough investigation of technological solutions that can help achieve these goals. In a net-ZEB perspective, combined heat and power (CHP) is considered a potential energy supply solution for buildings. CHP is seen as an emerging technology with the potential to reduce primary energy consumption and the associated greenhouse gas emissions through concurrent production of electricity and heat from the same fuel. However, since the thermal output of CHP is substantially larger than the electrical output, the potential offered by CHP systems depends on their suitable integration with the thermal demand of the building. In this thesis, a simulation model is used to investigate the performance of a CHP system compared to a conventional gas boiler system in a multi-family building that complies with the Norwegian building code, TEK10. Different operational strategies are applied to the CHP model to investigate its optimal integration in domestic dwellings. The simulation results indicate that the CHP system gives primary energy savings under all operational strategies, but operating the system in thermal-demand-following mode yields the greatest savings. Applying load management resulted in further savings, and the fuel efficiency increased to 75.1% on a higher heating value (HHV) basis. The CHP device is also more capable of covering the electricity demand as peaks are shaved, which implies that CHP is better suited for buildings with stable electricity and heat demand. Electricity-demand-following operation, however, resulted in poorer primary energy savings, and the corresponding CHP efficiency decreased due to poorer heat recovery and frequent part-load operation. Using renewable upgraded biogas as fuel in thermal-following mode resulted in the highest primary energy savings: primary energy consumption was reduced by 34.3%, and the corresponding system efficiency based on primary energy was 70.7% on an HHV basis. From an environmental perspective, the CHP system is found to be more favorable when the CO2 emission factor for electricity is high. This is due to the reduction in electricity imports from the grid and the grid electricity partly substituted by the electricity exported from the CHP system. The greatest reduction in grid imports was seen when the CHP device was set to follow the electrical demand of the building without restriction on thermal surplus. The CHP was then able to cover 88.27% of the electricity demand, but the system efficiency decreased as significant amounts of heat were wasted due to overproduction. The highest amount of exports was seen when load management was implemented in thermal-demand-following mode, representing 76.61% of the produced electricity. Using the current CO2 emission factor for the UCPTE electricity mix, a reduction in CO2 emissions was seen for all CHP configurations. The use of renewable fuel resulted in the greatest savings, with emissions reduced by 71.91% compared to the gas boiler. The use of natural gas as fuel resulted in significantly lower savings.
The best case achieved a 26.58% reduction compared to the reference system. When using the net-ZEB definition, only the CHP fuelled by renewable fuel achieved CO2 savings. This calls into question the environmental viability of today's CHP systems, as the CO2 emission factor for electricity is expected to decrease over the coming years due to an expected increase in the use of renewable fuels. Further research should therefore be done to enable efficient CHP technology based on renewable fuels, which would decrease emissions significantly and make CHP more competitive.
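To make the comparison concrete, the sketch below outlines the kind of primary energy accounting the thesis performs: a CHP case credited for exported electricity versus a reference case consisting of a gas boiler plus grid imports, with the fuel efficiency evaluated on an HHV basis. All figures, primary energy factors and function names are hypothetical placeholders, not values from the thesis.

    # Minimal sketch: primary energy comparison of a CHP system vs. a gas boiler reference.
    # All numbers and factor values are hypothetical.

    def primary_energy_chp(fuel_in_kwh, el_exported_kwh, pef_fuel=1.1, pef_el=2.5):
        """Primary energy of the CHP case: fuel input minus a credit for exported electricity."""
        return fuel_in_kwh * pef_fuel - el_exported_kwh * pef_el

    def primary_energy_reference(heat_demand_kwh, el_demand_kwh,
                                 boiler_eff=0.90, pef_fuel=1.1, pef_el=2.5):
        """Primary energy of the reference case: gas boiler for heat, grid imports for electricity."""
        return heat_demand_kwh / boiler_eff * pef_fuel + el_demand_kwh * pef_el

    def chp_fuel_efficiency_hhv(el_out_kwh, heat_recovered_kwh, fuel_in_kwh_hhv):
        """Total fuel efficiency on a higher-heating-value basis."""
        return (el_out_kwh + heat_recovered_kwh) / fuel_in_kwh_hhv

    # Example with made-up annual figures for a multi-family building:
    pe_ref = primary_energy_reference(heat_demand_kwh=60_000, el_demand_kwh=30_000)
    pe_chp = primary_energy_chp(fuel_in_kwh=110_000, el_exported_kwh=8_000)
    print(f"Primary energy savings: {1 - pe_chp / pe_ref:.1%}")
    print(f"Fuel efficiency (HHV): {chp_fuel_efficiency_hhv(25_000, 58_000, 110_000):.1%}")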
Distribution of warble flies in different districts, sexes and age classes of reindeer. Significance for treatment
Dynamics of energy and carbon emissions in residential building stocks: The role of solutions for multi-family houses and apartment blocks
Three building typologies are analyzed in this report: apartment blocks built before 1956, apartment blocks built in the period 1956-1970, and apartment blocks built in the period 1971-1980. A literature study of typical dwellings through time is carried out, and typical apartments from each of the time periods are defined. The model used to calculate the buildings' energy need for space heating and domestic hot water is based on the TABULA methodology, but is constructed as an energy balance model that uses the principles of material flow analysis. This model is used to calculate the energy need before and after renovation. For each time period two building states are analyzed: the original state and the historically refurbished state. This is done because a large share of the buildings built before 1980 has already undergone some form of renovation, and the energy saving potential of implementing new energy efficiency measures in these partly refurbished buildings is smaller than for the same building types in original state. A life cycle costing model based on the principles of net present value is used to calculate the economic outcome of each renovation package analyzed in this Master's thesis. A scenario model, which uses inputs from the segmented building stock model (see chapter 3.3.1) and the energy model (see chapter 3.1), is used to estimate the future energy need for space heating and domestic hot water for the part of the Norwegian dwelling stock analyzed in this report. The energy reduction potential for improving a typical building constructed before 1956 from original state to TEK10 level is 68% for space heating. Improving it further to passive house level gives a reduction potential of 81%, which shows that these buildings have a major improvement potential. However, only a minority (16%) of the apartment blocks from this period are in original state, so a more realistic reduction potential is that from the historically refurbished state to TEK10 or passive house level. The reduction potential is then 46% for a TEK10 refurbishment and 67% for a passive house refurbishment. For the two other building typologies the general pattern is that the energy savings decrease as the quality of the building in original and historically refurbished state improves. Apartment blocks built between 1971 and 1980 have the lowest saving potential, since their quality before new renovation is high; this also makes these building types less economically efficient for various renovation projects. In general, almost all renovations are shown to be cost-effective for apartment blocks built before 1956 and between 1956 and 1970 in original state, as these building types have the highest energy use before renovation. Improving the building envelope to TEK10 or passive house level, as well as installing air-to-air heat pumps as a supplementary measure, is found to be profitable for all the building types analyzed over a period of 36 years. Installation of a balanced ventilation system is only estimated to be profitable for apartment blocks built before 1956 and between 1956 and 1970 in original state. However, when upgrading the building envelope to passive house level it is recommended to install a balanced ventilation system to ensure satisfactory air quality (Thomsen & Berge, 2012).
Since there is a high willingness to pay for comfort, it is anticipated that installation of a balanced ventilation system combined with a passive house envelope upgrade is realistic for all building types, even though the net present value is up to 400 NOK/m2 BRA higher than for the base case (no energy-related upgrades to the building).
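The life cycle costing referred to above comes down to a standard net present value calculation over the analysis period. Below is a minimal sketch assuming hypothetical investment costs, energy savings, energy prices and discount rate; only the 36-year analysis period is taken from the abstract.

    # Minimal sketch of the net-present-value comparison for a renovation package.
    # Investment cost, energy saving, energy price and discount rate are placeholders.

    def npv(investment_nok, annual_energy_saving_kwh, energy_price_nok_per_kwh=1.0,
            discount_rate=0.04, horizon_years=36):
        """NPV of a renovation package: discounted energy-cost savings minus the upfront investment."""
        annual_saving_nok = annual_energy_saving_kwh * energy_price_nok_per_kwh
        discounted = sum(annual_saving_nok / (1 + discount_rate) ** t
                         for t in range(1, horizon_years + 1))
        return discounted - investment_nok

    # Example: upgrading a pre-1956 apartment block from original state to TEK10 level
    # (all figures made up for illustration).
    print(f"NPV: {npv(investment_nok=3_000_000, annual_energy_saving_kwh=250_000):,.0f} NOK")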
Human Factor Design of a Control Room for Fleet Management using Underwater Drones in the O&G Industry - Use case scenarios for remote operations of UIDs criticizing functional allocation in ISO 11064.
Using underwater drones for inspection operations and tasks on subsea facilities has proven useful for the oil and gas industry. The drones are more efficient and safer than human divers, but expensive. Today the drones are connected to a top-site vessel that is exposed to the harsh weather and forces found at sea, as well as constrained by contracts and limited availability. Placing docking stations on the seabed makes it possible to remove the support vessel and operate the drones remotely from an onshore control room, with the docking stations also serving as power supply and data transfer points. Placing a fleet of UIDs on the seabed to perform inspection operations, working as janitors by looking for anomalies and patterns on the installed equipment and instruments, opens new opportunities for the sector while reducing costs and improving HSE.
Designing a control room for this use is a complex problem. This project therefore centers the design around human factors engineering and ISO 11064 - Ergonomic design of control centres, focusing on function allocation between human and machine. A functional analysis, performed with a qualitative method based on 10 interviews, has identified several necessary functions and systems for the control room and its support functions, as well as design specifications. The analysis has also shown that the three use cases in focus have different needs, temporal conditions and levels of autonomy, which affect the required functions and systems as well as how the functions are allocated between human and machine.
The results, in light of ISO 11064-1, show that a dynamic allocation of functions is needed to achieve the necessary flexibility and scalability in the system(s). This is because the drones will have different levels of autonomy when performing the inspection operations and tasks in the three work processes, and the human-machine teaming must be optimized for this. The study also shows that transferring data underwater between the drones and the control room is challenging, and that the data the control room operator(s) need in real time to control the fleet safely must be prioritized. The interviews further show that there are two main concepts for the structural organization of the control room. They differ primarily in who is responsible for control and/or observation of the drones, since proprietary control rooms are a possibility. This also affects the data streams and routing.
With the increased focus on digitalization in the oil and gas industry, along with integrated and remote operations, we see that a control room for a fleet of UIDs performing inspection operations at subsea installations would be a good contribution to the needs of the industry. However, further development of the technology is needed, especially regarding autonomy and building enough trust in the system for the drones to operate autonomously. The technology also needs to mature and become sufficiently robust.
Transformer Pre-Trained Language Models and Active Learning for Improved Blocking Performance in Entity Matching
Entity matching (EM) aims to reduce the entropy between two different data sources by identifying which records refer to the same real-world entity. Typically, proposed blocking approaches require substantial human expert involvement and/or a large amount of labeled data, which is often unavailable in EM applications, making it hard to obtain useful blocking models. In this work, we propose TopKDAL, a deep learning-based approach targeting a low-resource setting through a combination of active learning (AL) with pre-trained transformer language models (TPLM). TPLMs are a promising approach towards hands-off blocking, as they provide semantically meaningful sentence embeddings and the ability to learn where to pay attention between records, thereby unveiling similarities between entities. We incorporate active learning to select informative examples for fine-tuning a TPLM and to cope with labeled data scarcity. In this way, we investigate how to reduce the required labeling effort while maintaining model accuracy and blocking performance.
Experiments on five EM benchmark datasets show the effectiveness of TopKDAL with respect to pairs completeness (PC), reduction rate (RR), and running time. We found that active learning strategies yield better results with an order of magnitude fewer labeled examples compared to a supervised baseline trained on all available data. TopKDAL demonstrates the best performance with Imbalanced-Partition-2 and Balanced-Uncertainty. Balanced-Uncertainty starts from a balanced initial training set, which helps kick-start the active learning performance and reduces the risk of cold-start problems; however, extra implementation overhead is required to unlock the potential of such a balanced starting strategy. Towards mitigating biases, Random-P/N yields competitive performance against the more advanced query sampling strategies when it is initially trained on an imbalanced training set. Our proposed TopKDAL requires no design decisions from a human, and features are learned from the data. Fine-tuning of hyperparameters is still recommended to optimize model performance.
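As an illustration of the blocking step and the two evaluation metrics named above, here is a minimal sketch using a generic pre-trained sentence encoder and top-K nearest-neighbour candidate generation. The model name, K, and the uncertainty-sampling helper are placeholders and not necessarily the models or settings used by TopKDAL; the active learning fine-tuning loop itself is not reproduced here.

    # Minimal sketch: embedding-based top-K blocking, pairs completeness (PC) and
    # reduction rate (RR), plus a simple uncertainty-sampling helper.
    # Model name, K and all identifiers are illustrative placeholders.
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.neighbors import NearestNeighbors

    def top_k_blocking(records_a, records_b, k=10, model_name="all-MiniLM-L6-v2"):
        model = SentenceTransformer(model_name)
        emb_a = model.encode(records_a, normalize_embeddings=True)
        emb_b = model.encode(records_b, normalize_embeddings=True)
        nn = NearestNeighbors(n_neighbors=k, metric="cosine").fit(emb_b)
        _, idx = nn.kneighbors(emb_a)
        # Candidate set: each record in A paired with its K nearest neighbours in B.
        return {(i, int(j)) for i, row in enumerate(idx) for j in row}

    def pairs_completeness(candidates, true_matches):
        return len(candidates & true_matches) / len(true_matches)

    def reduction_rate(candidates, n_a, n_b):
        return 1 - len(candidates) / (n_a * n_b)

    def select_uncertain(pair_match_probs, budget=50):
        """Pick the candidate pairs whose predicted match probability is closest to 0.5."""
        probs = np.asarray(pair_match_probs)
        return np.argsort(np.abs(probs - 0.5))[:budget]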
Effect of Different Polymorphs of SiO2 on the Reaction between SiO2 and SiC in Si Production
The aim of this work has been to investigate the phase transformations in silica (SiO2) during heating to the target temperature, and the effect of SiO2 polymorphs on the reduction reaction between SiO2 and silicon carbide (SiC) in silicon production. This is done as a step towards a better understanding and improvement of furnace operation in the silicon process. Earlier studies have found no difference in the effect of quartz and cristobalite on the reduction reaction; the effect of amorphous silica is therefore of particular interest in this study. The goals of this thesis have been investigated by performing:
Non-isothermal furnace experiments with different heating rates, to study the phase transformations in SiO2 during heating to target temperatures of 1700-1900 °C.
Isothermal furnace experiments with different heating rates to 1700-1900 °C, to study the effect of SiO2 polymorphs on the rate of the reduction reaction between SiO2 and SiC.
Pellets of Quartz, Qz20 and SiC have been heated, and XRD analyses using the internal standard method showed that several SiO2 phase transformations occurred before the target temperature was reached. Both the target temperature and the heating rate affect the amounts of the SiO2 phases formed during heating. Amorphous silica forms both from quartz and from softened/molten SiO2, and the amount of amorphous silica increased with increasing target temperature. Different heating rates to 1800 °C gave a greater variation in the amount of amorphous silica than different heating rates to 1700 °C. Morphology images showed no visual differences between the different SiO2 phases.
The different SiO2 polymorphs had no effect on the reduction reaction between SiO2 and SiC. Conversion was measured from the weight loss of the pellets, and the method used is found to give reliable results. The conversion increased with increasing temperature from 1700 to 1900 °C. Morphology images showed that the remaining pellets after furnace experiments at 1700 °C had started to melt, while at 1800 °C the remaining samples were completely melted. Melting of SiO2 would be expected to give a higher reaction rate, as it increases the contact area between the reactants.
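The weight-loss-based conversion can be illustrated roughly as follows, assuming the overall reduction reaction 2 SiO2 + SiC = 3 SiO(g) + CO(g), so that all reacted mass leaves the pellet as gas. The thesis may define conversion on a different basis, and the pellet masses below are made up.

    # Rough sketch: conversion estimated from measured pellet weight loss,
    # assuming 2 SiO2 + SiC = 3 SiO(g) + CO(g) (all reacted mass gasified).
    M_SIO2, M_SIC = 60.08, 40.10  # g/mol

    def conversion_from_weight_loss(m_sio2_g, m_sic_g, weight_loss_g):
        """Fraction of the limiting reactant consumed, inferred from the measured weight loss."""
        n_sio2, n_sic = m_sio2_g / M_SIO2, m_sic_g / M_SIC
        # Extent of reaction (mol SiC reacted) at full conversion of the limiting reactant.
        extent_max = min(n_sic, n_sio2 / 2)
        mass_loss_per_mol_sic = 2 * M_SIO2 + M_SIC  # everything reacted leaves as SiO and CO gas
        return weight_loss_g / (extent_max * mass_loss_per_mol_sic)

    # Example with a hypothetical pellet composition and weight loss:
    print(f"{conversion_from_weight_loss(m_sio2_g=7.5, m_sic_g=2.5, weight_loss_g=4.0):.2f}")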
Visualizing Digital Identity
We cannot see what our digital identity looks like, or view what it says about our behavior, feelings, or actions that are stored online. The data collected by surveillance from our smartphones builds a fragmented and distorted reflection of ourselves. This stockpile of our data, and the mirrored self it creates, has become an increasingly valuable commodity that fuels the surveillance economy. While the existence of these datasets is not always obvious, they are attainable by request, whereas the reflection of ourselves they create is a heavily guarded secret. The digital identity created by this collection of data is explored through visual communication design. Datasets collected by applications such as Google, Spotify and Instagram were acquired from my smartphone. With this information I explore whether design methodology can be applied to create a representation of my digital identity, and what critical reflections these representations can reveal about it. New processes were explored through research through design. In this process, the disconnections between concepts within the topics of digital surveillance and digital identity are analyzed. The scope of the project and the visualizations move through cycles of simplification as they narrow towards a place where a reflection can be made. The visualizations draw on methodology from Surrealism and Discursive Design in their process and reflection. This work recounts my research through design and explores visualization of digital identity using design as a medium for understanding. The visualizations move along a timeline through a series of iterative explorations, each containing its own context, process, and reflection. The iterative exploration moves towards understanding the digital identity by approaching the subject from multiple perspectives and executions of design. A theme of a distorted and fragmented reflection appears as the visualizations evolve through iteration. The resulting understanding of a digital identity may never be finalized, but the discourse invited through the exploration becomes subjective reflections. This understanding of our fragmented and distorted reflections of our digital identity opens possibilities for further research.
Slag and its effect on Si and FeSi production
Silicon (Si) is one of the most useful elements on earth and is popular due to its semiconductor characteristics and its abundance in the form of quartz (SiO2). Both metallurgical Si and ferrosilicon (FeSi) are produced industrially by carbothermic reduction of quartz in a submerged arc furnace. When this work started in 2017, the knowledge about slag in industrial Si and FeSi furnaces was limited. Understanding the slag and how it behaves and reacts in the furnace is important to avoid slag accumulating in the furnaces and to maintain a good flow of materials through them. The amount of slag is one of the main differences between furnaces and is believed to be an important factor in furnace productivity. Additionally, knowledge about the slag present in the furnaces is an important step towards identifying the different zones and materials in the furnaces.
This study is divided into three parts. Part 1 focuses on slag from industrial Si and FeSi furnaces, part 2 investigates the impurities in the quartz, which later contribute to slag formation in the furnace, and part 3 studies the main remedy used today for removing accumulated slag from industrial furnaces: dissolving calcium oxide (CaO) in the slag to reduce its viscosity.
Slag from different zones in six different Si and FeSi furnaces collected during excavations, tapped slag from three different furnaces and slag from the charge surface of two FeSi furnaces during operation are the basis for the industrial part of this work. Accumulated slag is typically found along the furnace walls, sometimes extending all the way up to the charge top, and in a thick layer at the furnace bottom, which the metal must pass to exit the tap-hole. Both the accumulated slag and the tapped slag mainly contain SiO2, CaO and Al2O3. In the accumulated slag samples, it is found that the slag towards the furnace wall in the higher parts generally has a higher SiO2 content compared to the slag accumulated at the furnace bottom. The possible explanation suggested in this study for both the existence of slag in this area, and the increased SiO2 content in the slag, is a high crater pressure that pushes the slag towards the furnace walls and upwards.
Furthermore, the slag above the tapping channel generally has a higher SiO2 content than the slag below it. This is believed to be due to differences in density, as slags with higher SiO2 content have lower densities. No significant differences were found in the composition of the slag between different tap-holes within the same furnace. Visually, the zones around the tapping channel appear similar. Next to the Si flow is a bright green layer, 5-15 cm thick, consisting mainly of SiO2-CaO-Al2O3 slag and smaller silicon carbide (SiC) particles. Following the green layer is a dark grey layer consisting mainly of SiO2-CaO-Al2O3 slag and larger SiC particles. The green and grey colors seem to depend on the size of the SiC particles rather than on the composition of the slag.
The tapped slags were found to be liquid during tapping at a temperature of 1800 °C, except for the solid SiC particles present in the slag. The main differences between the normal tapped slag and the slag reported as high-viscosity slag are the increased amount of SiO2 in the slag and the presence of undissolved SiO2 and condensates in the samples. These SiO2 areas are former quartz that has melted but not fully dissolved in the slag. The amount of slag, the amount of solid SiC in the liquid slag and the viscosity of the slag are three of the main factors that influence the flow of slag through the tap-hole.
Impurities in quartz affect both the properties of the SiO2 during heating to elevated temperatures and the composition of the slag in the furnaces. The impurities and properties of six different quartz types suited for Si and FeSi production were studied. It is found that an increased amount of impurities lowers the softening temperature and shortens the melting time. It is also found that SiO2 dissolves in the impurities as the temperature increases.
Crack formation in quartz during heating was found to occur mainly in two temperature intervals, ~300-600 °C and ~1300-1600 °C. Cracks formed from 300 °C originate from an uneven SiO2 surface, from activity in the form of volume or color changes in the impurity areas, or from fluid inclusion cavities, while cracks occurring from 1300 °C are believed to be due to the volume increase and the phase transformation from quartz to cristobalite. The degree of cracking also differs between the quartz types. No correlation could be found between the amount of cracks and fines formation, nor between the crack formation and the impurity composition.
CaO in the form of lime (CaCO3) is commonly added as a flux in Si and FeSi production to lower the viscosity of the slag, which is beneficial for ensuring a good flow of materials through the furnace. The dissolution of CaO in three different compositions of SiO2-CaO-Al2O3 slag was investigated at temperatures between 1500 and 1600 °C. It is found that the dissolution of CaO into the slag is fast. Increasing the CaO content of the slag from 15-21% to 25-30% gives a significant initial reduction in viscosity.
During the dissolution process, a boundary layer containing 35-42% CaO formed between the CaO particle and the slag, which corresponded to the phases CaO·Al2O3·2SiO2 or 2CaO·Al2O3·SiO2 in this study.
Two models were investigated to determine the dissolution rate of the three slags. In the first model, the CaO particle is assumed to be a smooth shrinking sphere and the rate is controlled by the chemical reaction rate. The second model assumes that the rate is controlled by mass transport and depends on the diffusion of CaO through a boundary layer on the surface of the CaO particle. Both models gave similar results, and a proportional relationship between the rate constants and the viscosities was obtained. The diffusion coefficients were found to be on the order of 10^-6 cm2/s.
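Written in standard shrinking-particle form, and assuming the two models correspond to the textbook formulations (the exact equations used in the thesis may differ), they can be sketched as:

    % Reaction-controlled shrinking sphere: constant rate of radius decrease
    \frac{dr}{dt} = -k_c
      \quad\Rightarrow\quad
    1 - (1 - X)^{1/3} = \frac{k_c}{r_0}\, t

    % Mass-transport control: dissolution limited by diffusion of CaO
    % through a boundary layer of thickness \delta on the particle surface
    \frac{dr}{dt} = -\frac{k_m}{\rho_{\mathrm{CaO}}}\left( C_{\mathrm{sat}} - C_{\mathrm{bulk}} \right),
      \qquad
    k_m \approx \frac{D}{\delta}

Here X is the dissolved fraction of the CaO particle, r_0 its initial radius, rho_CaO its density, C_sat and C_bulk the CaO concentrations at the particle surface and in the bulk slag, D the diffusion coefficient (on the order of 10^-6 cm2/s, as reported above) and delta the boundary-layer thickness.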
Designing Chatbots for Crises: A Case Study Contrasting Potential and Reality
Chatbots are becoming ubiquitous technologies, and their popularity and adoption are rapidly spreading. The potential of chatbots in engaging people with digital services is fully recognised. However, the reputation of this technology with regard to usefulness and real impact remains rather questionable, and studies that evaluate how people perceive and utilise chatbots are generally lacking. During the last Kenyan elections, we deployed a chatbot on Facebook Messenger to help people submit reports of violence and misconduct experienced at the polling stations. Even though the chatbot was visited more than 3,000 times, there was a clear mismatch between the users' perception of the technology and its design. In this paper, we analyse the user interactions and content generated through this application and discuss the challenges and directions for designing more effective chatbots.
