
    Polyfluoro-Pyridyl Glycosyl donors

    Carbohydrates are one of the most structurally and functionally diverse classes of naturally occurring compounds, and it is well established that they play an essential role in a vast array of biological processes. The synthesis of stereochemically defined oligosaccharides by a series of glycosylation processes, involving the reaction between a glycosyl donor and acceptor, is of paramount importance in synthetic carbohydrate chemistry and glycobiology. However, despite the importance of glycosylation chemistry and the development of sophisticated methodologies, no general, stereoselective strategy has been universally adopted for the synthesis of oligosaccharides. In this thesis we present the synthesis and function of a novel family of glycosyl donors in which fluorinated pyridine systems are utilised as the leaving group. In systems of this type it proved possible to 'tune' the glycosylation capability of the donor via variation of the substituents present on the pyridine ring and the type of Lewis acid activator used. The formation of a glycosidic bond with control of the stereochemistry at the anomeric centre is usually difficult. Interestingly, glycosyl donor systems of this type provide a high degree of stereoselectivity, with diastereomeric excesses in the region of 80 to 98%. It has been determined that polyfluoro-pyridyl glycosyl donors do not react via the established SN1 glycosylation process but via a unique SN2 process, which gives rise to the high degree of stereoselectivity observed.
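    The diastereomeric excess range quoted above follows directly from the ratio of the two anomers formed. A quick illustration of the standard formula, using hypothetical product ratios rather than data from the thesis:

```python
def diastereomeric_excess(major, minor):
    """de (%) = 100 * (major - minor) / (major + minor)."""
    return 100.0 * (major - minor) / (major + minor)

# A 99:1 anomeric ratio corresponds to 98% de, and a 90:10 ratio to
# 80% de -- bracketing the 80-98% range reported above.
print(diastereomeric_excess(99, 1))   # 98.0
print(diastereomeric_excess(90, 10))  # 80.0
```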

    Automated Identification and Reconstruction of YouTube Video Access

    YouTube is one of the most popular video-sharing websites on the Internet, allowing users to upload, view and share videos with other users all over the world. YouTube contains many different types of videos, from homemade sketches to instructional and educational tutorials, and therefore attracts a wide variety of users with different interests. The majority of YouTube visits are perfectly innocent, but there may be circumstances where YouTube video access is related to a digital investigation, e.g. viewing instructional videos on how to perform potentially unlawful actions or how to make unlawful articles. When a user accesses a YouTube video through their browser, certain digital artefacts relating to that video access may be left on their system in a number of different locations. However, there has been very little research published in the area of YouTube video artefacts. The paper discusses the identification of some of the artefacts that are left by the Internet Explorer web browser on a Windows system after accessing a YouTube video. The information that can be recovered from these artefacts can include the video ID, the video name and possibly a cached copy of the video itself. In addition to identifying the artefacts that are left, the paper also investigates how these artefacts can be brought together and analysed to infer specifics about the user's interaction with the YouTube website, for example whether the video was searched for or visited as a result of a suggestion after viewing a previous video. The result of this research is a Python-based prototype that will analyse a mounted disk image, automatically extract the artefacts related to YouTube visits and produce a report summarising the YouTube video accesses on a system.
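    A minimal sketch of the kind of artefact parsing described: pulling a video ID out of a URL recovered from browser history or cache records. The URL patterns here are the standard public YouTube forms; the prototype's actual parsing rules are not given in the abstract.

```python
import re

# YouTube video IDs are 11 characters drawn from [A-Za-z0-9_-]. They
# appear as a ?v= query parameter on watch URLs, and in youtu.be and
# /embed/ paths -- all URL forms that can surface in recovered history
# and cache records on a disk image.
_PATTERNS = [
    re.compile(r"[?&]v=([A-Za-z0-9_-]{11})"),
    re.compile(r"youtu\.be/([A-Za-z0-9_-]{11})"),
    re.compile(r"/embed/([A-Za-z0-9_-]{11})"),
]

def extract_video_id(url):
    """Return the first YouTube video ID found in a URL, or None."""
    for pattern in _PATTERNS:
        match = pattern.search(url)
        if match:
            return match.group(1)
    return None

print(extract_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))
```

In practice a tool like the prototype described would run this kind of match over every URL-bearing record it extracts, then correlate the IDs against cached page titles and video files.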

    TraceGen: user activity emulation for digital forensic test image generation

    Digital forensic test images are commonly used across a variety of digital forensic use cases including education and training, tool testing and validation, proficiency testing, malware analysis, and research and development. Using real digital evidence for these purposes is often not viable or permissible, especially when factoring in the ethical and in some cases legal considerations of working with individuals' personal data. Furthermore, when using real data it is not usually known what actions were performed when, i.e., what was the ‘ground truth’. The creation of synthetic digital forensic test images typically involves an arduous, time-consuming process of manually performing a list of actions, or following a ‘story’, to generate artefacts in a subsequently imaged disk. Besides the manual effort and time needed to execute the relevant actions in the scenario, there is often little room to build a realistic volume of non-pertinent wear-and-tear or ‘background noise’ on the suspect device, meaning the resulting disk images are inherently limited and to a certain extent simplistic. This work presents the TraceGen framework, an automated system focused on the emulation of user actions to create realistic and comprehensive artefacts in an auditable and reproducible manner. The framework consists of a series of actions contained within scripts that are executed both externally and internally to a target virtual machine. These actions use existing automation APIs to emulate a real user's behaviour on a Windows system to generate realistic and comprehensive artefacts. These actions can be quickly scripted together to form complex stories or to emulate wear-and-tear on the test image. In addition to the development of the framework, evaluation is also performed in terms of the ability to produce background artefacts at scale, and also the realism of the artefacts compared with their human-generated counterparts.
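    The "story" idea above can be sketched as an ordered, timestamped list of actions that a driver replays against the target VM. The action names, parameters, and scheduling below are purely illustrative, not TraceGen's actual API:

```python
import datetime

# Each entry is (offset in seconds, action name, parameters). A story is an
# ordered list that a driver process would replay against the target VM via
# automation APIs, interleaving pertinent actions with background noise.
STORY = [
    (0,    "boot_vm",      {}),
    (60,   "open_browser", {"url": "https://example.com"}),
    (300,  "create_file",  {"path": r"C:\Users\test\notes.txt",
                            "content": "meeting at 3pm"}),
    (900,  "idle",         {"minutes": 15}),   # wear-and-tear / noise
    (1800, "shutdown_vm",  {}),
]

def schedule(story, start):
    """Attach absolute timestamps so the replay is auditable and reproducible."""
    return [(start + datetime.timedelta(seconds=offset), name, params)
            for offset, name, params in story]

for when, name, params in schedule(STORY, datetime.datetime(2024, 1, 1, 9, 0)):
    print(when.isoformat(), name, params)
```

Keeping the story as data rather than ad hoc manual steps is what makes the resulting image's ground truth auditable: the schedule itself is the record of what was done and when.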

    ChatGPT for Digital Forensic Investigation: The Good, The Bad, and The Unknown

    The disruptive application of ChatGPT (GPT-3.5, GPT-4) to a variety of domains has become a topic of much discussion in the scientific community and society at large. Large Language Models (LLMs), e.g., BERT, Bard, Generative Pre-trained Transformers (GPTs), LLaMA, etc., have the ability to take instructions, or prompts, from users and generate answers and solutions based on very large volumes of text-based training data. This paper assesses the impact and potential impact of ChatGPT on the field of digital forensics, specifically looking at its latest pre-trained LLM, GPT-4. A series of experiments are conducted to assess its capability across several digital forensic use cases including artefact understanding, evidence searching, code generation, anomaly detection, incident response, and education. Across these topics, its strengths and risks are outlined and a number of general conclusions are drawn. Overall, this paper concludes that while there are some potential low-risk applications of ChatGPT within digital forensics, many are either unsuitable at present, since the evidence would need to be uploaded to the service, or they require sufficient knowledge of the topic being asked of the tool to identify incorrect assumptions, inaccuracies, and mistakes. However, to an appropriately knowledgeable user, it could act as a useful supporting tool in some circumstances.

    Microsoft Teams and team performance in the COVID-19 pandemic within an NHS Trust Community Service in North-West England

    Purpose: This study aims to evaluate the impact that the introduction of Microsoft Teams has had on team performance in response to the COVID-19 pandemic within a National Health Service (NHS) Community Service.
    Design/methodology/approach: Microsoft Teams was rolled out across the NHS over a period of four days, partly in response to the need for social distancing. This case study reviews how becoming a virtual team affected team performance, the role Microsoft Teams played in supporting staff to work in higher virtuality, what elements underpin a successful virtual team, and how these results correlate with the technology acceptance model (Davis, 1985).
    Findings: The findings indicate that Teams made a positive impact on the team at a time of heightened clinical pressures and working in unfamiliar environments without the supportive benefits of face-to-face contact with colleagues, in terms of incidental knowledge sharing and health and well-being.
    Originality/value: Further developments are needed to make virtual meetings more accessible for introverted colleagues, support asynchronous communication, address training needs, and support leaders to adapt and operate in higher virtuality.

    Exploring Halo Substructure with Giant Stars IV: The Extended Structure of the Ursa Minor Dwarf Spheroidal

    We present a large-area photometric survey of the Ursa Minor dSph. We identify UMi giant star candidates extending to ~3 deg from the center of the dSph. Comparison to previous catalogues of stars within the tidal radius of UMi suggests that our photometric luminosity classification is 100% accurate. Over a large fraction of the survey area, blue horizontal branch (BHB) stars associated with UMi can also be identified. The spatial distributions of the UMi giant stars and the BHB stars are remarkably similar, and a large fraction of both samples of stars is found outside the tidal radius of UMi. An isodensity contour map of the stars within the tidal radius of UMi reveals two morphological peculiarities: (1) the highest density of dSph stars is offset from the center of symmetry of the outer isodensity contours; (2) the overall shape of the outer contours appears S-shaped. We find that previously determined King profiles with ~50' tidal radii do not fit the distribution of our UMi stars well. A King profile with a larger tidal radius produces a reasonable fit; however, a power law with index -3 provides a better fit for radii > 20'. The existence of UMi stars at large distances from the core of the galaxy, the peculiar morphology of the dSph within its tidal radius, and the shape of its surface density profile all suggest that UMi is evolving significantly due to the tidal influence of the Milky Way. However, the photometric data on UMi stars alone do not allow us to determine whether the candidate extratidal stars are now unbound or whether they remain bound to the dSph within an extended dark matter halo. (Abridged) Comment: accepted by AJ, 32 pages, 15 figures, emulateapj5 style.
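    The contrast between a King profile and a power law is the key point of the fitting discussion: the empirical King (1962) surface-density law truncates to zero at the tidal radius, while a power law keeps falling smoothly. A sketch with illustrative parameters (not the paper's fitted values):

```python
import math

def king_profile(r, r_c, r_t, k=1.0):
    """Empirical King (1962) surface density:
    sigma(r) = k * [ (1 + (r/r_c)^2)^(-1/2) - (1 + (r_t/r_c)^2)^(-1/2) ]^2,
    taken as zero beyond the tidal radius r_t."""
    if r >= r_t:
        return 0.0
    term = 1.0 / math.sqrt(1.0 + (r / r_c) ** 2)
    edge = 1.0 / math.sqrt(1.0 + (r_t / r_c) ** 2)
    return k * (term - edge) ** 2

def power_law(r, index=-3.0, k=1.0):
    """Power-law profile of the kind found to fit better at radii > 20'."""
    return k * r ** index

# Beyond r_t the King profile is exactly zero, while the power law still
# predicts stars -- which is why extratidal detections discriminate models.
for r in (10.0, 30.0, 60.0):
    print(r, king_profile(r, r_c=15.0, r_t=50.0), power_law(r))
```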

    Noise2Recon: Enabling Joint MRI Reconstruction and Denoising with Semi-Supervised and Self-Supervised Learning

    Deep learning (DL) has shown promise for faster, high-quality accelerated MRI reconstruction. However, supervised DL methods depend on extensive amounts of fully-sampled (labeled) data and are sensitive to out-of-distribution (OOD) shifts, particularly low signal-to-noise ratio (SNR) acquisitions. To alleviate this challenge, we propose Noise2Recon, a model-agnostic, consistency training method for joint MRI reconstruction and denoising that can use both fully-sampled (labeled) and undersampled (unlabeled) scans in semi-supervised and self-supervised settings. With limited or no labeled training data, Noise2Recon outperforms compressed sensing and deep learning baselines, including supervised networks, augmentation-based training, fine-tuned denoisers, and self-supervised methods, and matches the performance of supervised models trained with 14x more fully-sampled scans. Noise2Recon also outperforms all baselines, including state-of-the-art fine-tuning and augmentation techniques, among low-SNR scans and when generalizing to other OOD factors, such as changes in acceleration factors and different datasets. Augmentation extent and loss weighting hyperparameters had negligible impact on Noise2Recon compared to supervised methods, which may indicate increased training stability. Our code is available at https://github.com/ad12/meddlr
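    The consistency-training idea can be summarised as penalising disagreement between the model's reconstruction of a scan and of a noise-augmented copy of it, added on top of the supervised loss on the scarce labeled scans. A schematic sketch with a stand-in "model" and an illustrative loss weight — not the paper's actual network, noise model, or weighting:

```python
import random

random.seed(0)

def model(x):
    # Stand-in reconstruction "network": any deterministic map works here.
    return [0.5 * v for v in x]

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

def consistency_loss(x_unlabeled, noise_std=0.1):
    """Penalise disagreement between recon of a scan and of a noisy copy."""
    noisy = [v + random.gauss(0.0, noise_std) for v in x_unlabeled]
    return mse(model(x_unlabeled), model(noisy))

def supervised_loss(x_labeled, target):
    return mse(model(x_labeled), target)

# Semi-supervised objective: supervised term on labeled scans plus a
# weighted consistency term on unlabeled scans (0.1 is illustrative).
x_lab = [random.gauss(0, 1) for _ in range(8)]
y_lab = [random.gauss(0, 1) for _ in range(8)]
x_unlab = [random.gauss(0, 1) for _ in range(8)]
total = supervised_loss(x_lab, y_lab) + 0.1 * consistency_loss(x_unlab)
print(total)
```

The unlabeled term is what lets undersampled scans contribute training signal without any fully-sampled reference.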

    Molecular Recognition of Insulin by a Synthetic Receptor

    The discovery of molecules that bind tightly and selectively to desired proteins continues to drive innovation at the interface of chemistry and biology. This paper describes the binding of human insulin by the synthetic receptor cucurbit[7]uril (Q7) in vitro. Isothermal titration calorimetry and fluorescence spectroscopy experiments show that Q7 binds to insulin with an equilibrium association constant of 1.5 × 10⁶ M⁻¹ and with 50−100-fold selectivity versus proteins that are much larger but lack an N-terminal aromatic residue, and with >1000-fold selectivity versus an insulin variant lacking the N-terminal phenylalanine (Phe) residue. The crystal structure of the Q7·insulin complex shows that binding occurs at the N-terminal Phe residue and that the N-terminus unfolds to enable binding. These findings suggest that site-selective recognition is based on the properties inherent to a protein terminus, including the unique chemical epitope presented by the terminal residue and the greater freedom of the terminus to unfold, like the end of a ball of string, to accommodate binding. Insulin recognition was predicted accurately from studies on short peptides and exemplifies an approach to protein recognition by targeting the terminus.
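    For context, an association constant of 1.5 × 10⁶ M⁻¹ corresponds to a standard binding free energy of roughly −35 kJ/mol at 298 K via ΔG° = −RT ln Ka. A quick check using standard thermodynamics (this calculation is ours, not from the paper):

```python
import math

R = 8.314    # molar gas constant, J/(mol*K)
T = 298.15   # temperature, K

def binding_free_energy(Ka):
    """Standard binding free energy, dG = -R*T*ln(Ka), in kJ/mol."""
    return -R * T * math.log(Ka) / 1000.0

dG = binding_free_energy(1.5e6)
print(round(dG, 1))  # about -35.3 kJ/mol
```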