
    Degeneracy of the b-boundary in General Relativity

    The b-boundary construction by B. Schmidt is a general way of providing a boundary to a manifold with connection. However, it has been shown to have undesirable topological properties. C. J. S. Clarke gave a result showing that for space-times, non-Hausdorffness is to be expected in general, but the argument contains some errors. We show that under somewhat different conditions on the curvature, the b-boundary will be non-Hausdorff, and illustrate the degeneracy by applying the conditions to some well-known exact solutions of general relativity.
    Comment: 25 pages, AMS-LaTeX v1.2, AMSFonts, submitted to Commun. Math. Phys.

    On Imprisoned Curves and b-length in General Relativity

    This paper is concerned with two themes: imprisoned curves and the b-length functional. In an earlier paper by the author, it was claimed that an endless incomplete curve partially imprisoned in a compact set admits an endless null geodesic cluster curve. Unfortunately, the proof was flawed. We give an outline of the problem and remedy the situation by providing a proof by different methods. Next, we obtain some results concerning the structure of b-length neighbourhoods, which give a clue to how the geometry of a spacetime is encoded in the pseudo-orthonormal frame bundle equipped with the b-metric. We also show that a previous result by the author, proving total degeneracy of a b-boundary fibre in some cases, does not apply to imprisoned curves. Finally, we correct some results in the literature linking the b-lengths of general curves in the frame bundle with the b-lengths of the corresponding horizontal curves.
    Comment: 26 pages, 7 figures, LaTeX 2e with AMSLaTeX 1.2 and AMSFonts, submitted to J. Math. Phys.

    The Geometry of the Frame Bundle over Spacetime

    One of the known mathematical descriptions of singularities in General Relativity is the b-boundary, which is a way of attaching endpoints to inextendible endless curves in a spacetime. The b-boundary of a manifold M with connection is constructed by forming the Cauchy completion of the frame bundle LM equipped with a certain Riemannian metric, the b-metric G. We study the geometry of (LM,G) as a Riemannian manifold in the case when the connection is the Levi-Civita connection of a Lorentzian metric g on M. In particular, we give expressions for the curvature and discuss the isometries and the geodesics of (LM,G) in relation to the geometry of (M,g).
    Comment: 14 pages, no figures, LaTeX 2e with AMSLaTeX 1.2 and AMSFonts, submitted to J. Math. Phys.
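    For orientation, the b-metric referred to above can be written down explicitly. In Schmidt's construction (standard in the literature, though inner-product and sign conventions vary between authors), with the R^n-valued solder form and the gl(n)-valued connection form on LM:

```latex
% Schmidt's b-metric G on the frame bundle LM (convention-dependent):
% \theta is the solder form, \omega the connection form.
G(X, Y) = \langle \theta(X), \theta(Y) \rangle
        + \langle \omega(X), \omega(Y) \rangle,
\qquad X, Y \in T_u LM,
% where the brackets denote fixed Euclidean inner products on
% \mathbb{R}^n and \mathfrak{gl}(n). The b-boundary of M is obtained
% from the Cauchy completion of (LM, G).
```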

    Shock Waves in Plane Symmetric Spacetimes

    We consider Einstein's equations coupled to the Euler equations in plane symmetry, with compact spatial slices and constant mean curvature time. We show that for a wide variety of equations of state and a large class of initial data, classical solutions break down in finite time. The key mathematical result is a new theorem on the breakdown of solutions of systems of balance laws. We also show that an extension of the solution is possible if the spatial derivatives of the energy density and the velocity are bounded, indicating that the breakdown is really due to the formation of shock waves.
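    The mechanism behind such finite-time breakdown can be illustrated with the simplest balance law, Burgers' equation (a standard textbook example, not the coupled Einstein-Euler system of the paper): characteristics carry the initial data at speed u, so where the initial profile is decreasing they cross and a shock forms at t* = -1 / min u0'.

```python
import math

# Illustrative only: shock formation for Burgers' equation u_t + u u_x = 0.
# Characteristics x(t) = x0 + u0(x0) * t first cross at t* = -1 / min u0'(x0).
def breakdown_time(u0_prime, xs):
    m = min(u0_prime(x) for x in xs)
    # No shock if the initial profile is non-decreasing.
    return -1.0 / m if m < 0 else float("inf")

# For u0(x) = -sin(x), u0'(x) = -cos(x) has minimum -1 (at x = 0), so t* = 1.
xs = [i * 2 * math.pi / 1000 for i in range(1000)]
t_star = breakdown_time(lambda x: -math.cos(x), xs)
```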

    Dual-energy imaging in stroke: feasibility of dual-layer detector cone-beam computed tomography

    Background: Dual-energy computed tomography (DECT) is increasingly available and used in the standard diagnostic setting of ischemic stroke patients. For stroke patients with suspected large vessel occlusion, cone-beam computed tomography (CBCT) in the interventional suite could be an alternative to CT to shorten door-to-thrombectomy time. This approach could potentially lead to improved patient outcomes. However, image quality in CBCT is typically limited by artifacts and poor differentiation between gray and white matter. A dual-layer detector CBCT (DL-CBCT) system could be used to separate photon energy spectra, with the potential to increase the visibility of clinically relevant features and to acquire additional information. Purpose: Paper I evaluated how a range of DECT virtual monoenergetic images (VMI) impacts the identification of early ischemic changes, compared to conventional polyenergetic CT images. Paper II characterized the performance of a novel DL-CBCT system with regard to clinically relevant imaging features. Papers III and IV investigated whether DL-CBCT VMIs are sufficient for stroke diagnosis in the interventional suite, compared to reference standard CT. Methods: Paper I was a retrospective single-center study including consecutive patients presenting with acute ischemic stroke caused by an occlusion of the intracranial internal carotid artery or proximal middle cerebral artery. Automated Alberta Stroke Program Early Computed Tomography Score (ASPECTS) results from conventional images and 40-120 keV VMI were generated and compared to reference standard CT ASPECTS. In paper II, a prototype dual-layer detector was fitted into a commercial interventional C-arm CBCT system to enable dual-energy acquisitions. Metrics for spatial resolution, noise and uniformity were gathered. Clinically relevant tissue and iodine substitutes were characterized in terms of effective atomic numbers and electron densities.
Iodine quantification was performed and virtual non-contrast (VNC) images were evaluated. VMIs were reconstructed and used for CT number estimation and evaluation of contrast-to-noise ratios (CNR) in relevant tissue pairings. In papers III and IV, a prospective single-center study enrolled consecutive participants with ischemic or hemorrhagic stroke on CT. In paper III, hemorrhage detection accuracy, ASPECTS accuracy, and subjective and objective image quality were evaluated on non-contrast DL-CBCT 75 keV VMI and compared to reference standard CT. In paper IV, intracranial arterial segment vessel visibility and artifacts were evaluated on intravenous DL-CBCT angiography (DL-CBCTA) 70 keV VMI and compared to CT angiography (CTA). In both papers III and IV, non-inferiority was determined by the exact binomial test with a one-sided lower performance boundary set to 80% (98.75% CI). Main results: In paper I, 24 patients were included. 70 keV VMI had the highest region-based ASPECTS accuracy (0.90), sensitivity (0.82) and negative predictive value (0.94), whereas 40 keV VMI had the lowest accuracy (0.77), sensitivity (0.34) and negative predictive value (0.80). In paper II, the prototype and commercial CBCT had similar spatial resolution and noise using the same standard reconstruction. For all tissue substitutes, the mean accuracy was 98.2% (SD 1.2%) in effective atomic number and 100.3% (SD 0.9%) in electron density. Iodine quantification had a mean difference of -0.1 (SD 0.5) mg/ml compared to the true concentrations. For VNC images, iodine substitutes with blood averaged 43.2 HU, blood only 44.8 HU, and iodine substitutes with water 2.6 HU. A noise-suppressed dataset showed a CNR peak at 40 keV VMI and a low at 120 keV VMI. In the same dataset without noise suppression, the CNR peak was seen at 70 keV VMI and the low at 120 keV VMI. CT numbers of various clinically relevant objects generally matched the calculated CT numbers over a wide range of VMIs.
In paper III, 27 participants were included. One reader missed a small hemorrhage; however, all hemorrhages were detected in the majority analysis (100% accuracy, CI lower boundary 86%, p=0.002). The ASPECTS majority analysis had 90% accuracy (CI lower boundary 85%, p<0.001); sensitivity was 66% (individual readers 67%, 69% and 76%) and specificity was 97% (97%, 96% and 89%). Subjective and objective image quality metrics were inferior to CT. In paper IV, 21 participants had matched image sets. After excluding examinations with scan issues, all readers considered DL-CBCTA non-inferior to CTA (CI boundaries 93%, 84% and 80%, respectively) when assessing arteries relevant in candidates for intracranial thrombectomy. Artifacts were more prevalent than with CTA. Conclusions: In paper I, automated 70 keV VMI ASPECTS had the highest diagnostic accuracy, sensitivity and negative predictive value overall. Different VMI energy levels impact the identification of early ischemic changes on DECT. In paper II, the DL-CBCT prototype system showed technical metrics comparable to a commercial CBCT system, while offering dual-energy capability. The dual-energy images indicated a consistent ability to separate and characterize clinically relevant tissues, blood and iodine. Thus, the DL-CBCT system could find utility in the diagnostic setting. In paper III, non-contrast DL-CBCT 75 keV VMI showed hemorrhage detection and ASPECTS accuracy non-inferior to CT. However, image quality was inferior to CT, and visualization of small subarachnoid hemorrhages after treatment remains a challenge. In the same stroke cohort, paper IV showed non-inferior vessel visibility for DL-CBCTA 70 keV VMI compared to CTA under certain conditions. Specifically, the prototype system had a long scan time and was not capable of bolus tracking, which resulted in scan issues. After excluding participants with such issues, DL-CBCTA 70 keV VMI was found non-inferior to CTA.
In summary, the findings of this thesis indicate that DL-CBCT may be sufficient for stroke assessment in the interventional suite, with the potential to bypass CT in patients eligible for thrombectomy. However, issues related to the prototype system and the visualization of small hemorrhages highlight the need for further development.
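    The non-inferiority analysis described above (exact binomial test against a one-sided 80% performance boundary) can be sketched as follows. The numbers plugged in are the paper III hemorrhage-detection figures quoted above; the function is a generic exact test, not the thesis's actual analysis code.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# H0: true accuracy <= 0.80 vs H1: accuracy > 0.80.
# Majority analysis in paper III: 27 of 27 hemorrhages detected.
p_value = binom_sf(27, 27, 0.80)  # = 0.80**27, roughly 0.002
```

Since all 27 cases were detected, the p-value reduces to 0.80^27, consistent with the reported p=0.002.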

    Diabetes Mellitus Modelling Based on Blood Glucose Measurements

    Insulin-Dependent Diabetes Mellitus (IDDM) is a chronic disease characterized by the inability of the pancreas to produce sufficient amounts of insulin. To cover the deficiency, 4-6 insulin injections have to be taken daily. The aim of this insulin therapy is to maintain normoglycemia, i.e. a blood glucose level between 4 and 7 mmol/L. Different approaches are used to determine the amount and timing of these injections; mostly, qualitative and semi-quantitative models and reasoning are used to design such a therapy. In this Master's thesis an attempt is made to show how system identification and automatic control perspectives may be used to estimate quantitative models. Such models can then be used to design optimal insulin regimens. The system was divided into three subsystems: the insulin subsystem, the glucose subsystem and the insulin/glucose interaction. The insulin subsystem aims to describe the absorption of injected insulin from the subcutaneous depots, and the glucose subsystem the absorption of glucose from the gut following a meal. These subsystems were modelled using compartment models and models proposed in the literature. Several black-box and grey-box models describing the insulin/glucose interaction have been developed and analysed. These models have been fitted to real data monitored by an IDDM patient. Many difficulties typical of biomedical systems were encountered during the modelling: non-uniform and scarce sampling, time-varying dynamics and severe non-linearities, among others. None of the proposed models were able to describe the system accurately. However, all the linear models shared some dynamics, and there are grounds to suspect that these dynamics are essential parts of the true system. More research has to be undertaken, primarily to investigate the non-linear nature of the system and to see whether variables other than glucose flux and insulin absorption are important for the dynamics of the system.
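    A compartment model of the kind described above can be sketched as two coupled first-order ODEs, with a subcutaneous depot feeding a plasma compartment, integrated here with forward Euler. The rate constants and dose are hypothetical placeholders, not the thesis's fitted values.

```python
# Illustrative two-compartment insulin absorption model:
#   dx1/dt = -k1 * x1            (subcutaneous depot empties)
#   dx2/dt =  k1 * x1 - k2 * x2  (plasma insulin fills, then is cleared)
def simulate(dose, k1=0.025, k2=0.02, dt=1.0, t_end=300.0):
    x1, x2 = dose, 0.0
    plasma = []
    for _ in range(int(t_end / dt)):
        # Tuple assignment evaluates both right-hand sides with old values.
        x1, x2 = x1 + dt * (-k1 * x1), x2 + dt * (k1 * x1 - k2 * x2)
        plasma.append(x2)
    return plasma

curve = simulate(10.0)  # plasma level rises from zero, peaks, then decays
```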

    Diabetes Mellitus Glucose Prediction by Linear and Bayesian Ensemble Modeling

    Diabetes Mellitus is a chronic disease of impaired blood glucose control due to degraded or absent endogenous insulin production or utilization. For the affected, this in many cases implies relying on insulin injections and blood glucose measurements in order to keep the blood glucose level within acceptable limits. The risks of developing short- and long-term complications, due to both too high and too low blood glucose concentrations, are manifold, and the glucose dynamics are generally not easy for the affected individual to fully comprehend, resulting in poor glucose control. To reduce the burden this implies for the patient and society, in terms of physiological and monetary costs, different technical solutions based on closed- or semi-closed-loop blood glucose control have been suggested. To this end, this thesis investigates simplified linear and merged models of glucose dynamics for the purpose of short-term prediction, developed within the EU FP7 DIAdvisor project. These models could, e.g., be used in a decision support system to alert the user of future low and high glucose levels and, when implemented in a control framework, to suggest proactive actions. The simplified models were evaluated on 47 patient data records from the first DIAdvisor trial. Qualitatively physiologically correct responses were imposed, and model-based prediction up to two hours ahead, specifically for low blood glucose detection, was evaluated. The glucose-raising and glucose-lowering effects of meals and insulin were estimated, together with the clinically relevant carbohydrate-to-insulin ratio. The model was further expanded to include the blood-to-interstitial lag, and tested on one patient data set. Finally, a novel algorithm for merging multiple prediction models was developed and validated on both artificial data and 12 datasets from the second DIAdvisor trial.
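    One simple way to merge several prediction models is sketched below: each model is weighted inversely to its recent squared prediction error. This is a generic hypothetical scheme for illustration only, not the novel DIAdvisor merging algorithm.

```python
# Merge point predictions (mmol/L) from several glucose models by weighting
# each model with the inverse of its recent squared error (hypothetical scheme).
def merge(predictions, recent_errors, eps=1e-9):
    weights = [1.0 / (e * e + eps) for e in recent_errors]
    total = sum(weights)
    return sum(w / total * p for w, p in zip(weights, predictions))

# Three models predict 6.1, 7.0 and 5.5; the first has the smallest recent
# error, so the merged prediction stays closest to it.
merged = merge([6.1, 7.0, 5.5], [0.3, 1.2, 0.6])
```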

    Distribution of candidate genes for experimentally induced arthritis in rats

    Background: Rat models are frequently used to link genomic regions to experimentally induced arthritis in quantitative trait locus (QTL) analyses. To facilitate the search for candidate genes within such regions, we have previously developed an application (CGC) that uses weighted keywords to rank genes based on their descriptive text. In this study, CGC is used for analyzing the localization of candidate genes from two viewpoints: distribution over the rat genome and functional connections between arthritis QTLs.
    Methods: To investigate if candidate genes identified by CGC are more likely to be found inside QTLs, we ranked 2403 genes genome-wide in rat. The number of genes within different ranges of CGC scores localized inside and outside QTLs was then calculated. Furthermore, we investigated if candidate genes within certain QTLs share similar functions, and if these functions could be connected to genes within other QTLs. Based on references between genes in OMIM, we created connections between genes in QTLs identified in two distinct rat crosses. In this way, QTL pairs with one QTL from each cross that share an unexpectedly high number of gene connections were identified. The genes that were found to connect a pair of QTLs were then functionally analysed using a publicly available classification tool.
    Results: Out of the 2403 genes ranked by the CGC application, 1160 were localized within QTL regions. No difference was observed between highly and lowly rated genes. Hence, highly rated candidate genes for arthritis seem to be distributed randomly inside and outside QTLs. Furthermore, we found five pairs of QTLs that shared a significantly high number of interconnected genes. When functionally analyzed, most genes connecting two QTLs could be included in a single functional cluster. Thus, the functional connections between these genes could very well be involved in the development of an arthritis phenotype.
    Conclusions: From the genome-wide CGC search, we conclude that candidate genes for arthritis in rat are randomly distributed between QTL and non-QTL regions. We do, however, find certain pairs of QTLs that share a large number of functionally connected candidate genes, suggesting that these QTLs contain a number of genes involved in similar functions contributing to the arthritis phenotype.
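    The inside/outside-QTL bookkeeping used in the genome-wide count can be sketched as a simple interval-containment test. The chromosome names, coordinates and genes below are made up for illustration; they are not from the study.

```python
# Hypothetical QTL intervals: (chromosome, start bp, end bp).
QTLS = [("chr4", 10_000_000, 35_000_000), ("chr10", 5_000_000, 20_000_000)]

def inside_qtl(chrom, pos):
    """A gene counts as inside a QTL if its position lies within the interval."""
    return any(c == chrom and lo <= pos <= hi for c, lo, hi in QTLS)

# Hypothetical ranked genes: (name, chromosome, position).
genes = [("geneA", "chr4", 12_500_000),
         ("geneB", "chr4", 40_000_000),
         ("geneC", "chr10", 6_000_000)]
n_inside = sum(inside_qtl(c, p) for _, c, p in genes)  # 2 of 3 fall inside a QTL
```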

    A CA perspective on kills and deaths in Counter-Strike: Global Offensive video game play

    The interest in studying multiplayer video game play has been growing since the mid-2000s. This is in part due to growing interest in games that are part of eSports settings, such as Counter-Strike: Global Offensive (CS:GO), which is one of the main games within eSports and the video game studied in this paper. Studies of multiplayer video game play from a conversation analysis (CA) participant perspective appear to be scarce, although they are steadily becoming a legitimate topic in ethnomethodological conversation analytical (EMCA) studies. EMCA studies have mostly focused on aspects around the screen, and on how physically present players interact and draw upon resources both on- and off-screen. Some studies have taken the CA perspective further and blur the on-/off-screen dichotomy to better understand on-screen actions as social actions worthy of study. The aim of this article is to describe and gain new understanding of how participants socially organize their game play, with a focus on sequentiality and accountability connected to “kills” (K) and “deaths” (D) in CS:GO. The social organizational structure of game play connected to K- and D-events in CS:GO can be described as a set of “rules” that participants orient to. In short, these rules appear to encompass communication efficiency: K-events are more often other-topicalized, and D-events are more often self-topicalized; spectating provides more sequential and temporal space for topicalization; and D-events are oriented to as more problematic events in need of further negotiation. In and through describing the social organization connected to K- and D-events from a participant's perspective, it becomes evident that “killing” and “dying” in-game are not oriented to in a literal fashion. They are oriented to as frequent events that are basic parts of the game's mechanics and of playing the game to win or lose.

    Exploring Peer Mentoring and Learning Among Experts and Novices in Online in-Game Interactions

    Becoming a competent player of online games involves complex processes and networks of online and offline life where the player is socialized into social norms and expectations. An important aspect of what constitutes gamers' learning trajectories is guidance from experienced players. Games are public spheres where learning is social and distributed, and where players are often enabled to learn new and advanced competencies. However, there is little educational research on how these competencies are cultivated and employed within a competitive gaming scene. In the current paper, we analyze the mentor-apprentice relationship between an expert and a novice in the multiplayer FPS CS:GO within an eSports and educational context. By assuming a dialogic approach to meaning making, we examine how novices and experts uphold and talk the relationship into being, and how peer teaching and learning manifest in the in-game interaction. The ethnographic data was collected in collaboration with a vocational school with an eSports program in Finland in 2017-2018. Students (aged 17-18, all male) playing CS:GO shared screen recordings of their matches and took part in interviews. The participants play in two different teams. Here, we focus on Martin (expert) and John (novice) from team one. Martin was the highest-ranked team member, something his team members are aware of and make relevant in interviews and in-game interactions. This position seems to provide him authority and leadership within the team. In the interviews, Martin aligns with being the leader and repeatedly mentions that he coached John to become part of the team. This relationship is also evident in the in-game data, where Martin, together with the rest of the team, often provides feedback and support for John. The learning appears to be about how to become competent in the game, and there are strong indications of other aspects of learning that relate to sociality and leadership.