7,238 research outputs found

    The first stars: formation, properties, and impact

    Full text link
    The first generation of stars, often called Population III (or Pop III), forms from metal-free primordial gas at redshifts of 30 and below. These stars dominate the cosmic star formation history until redshifts of 15 to 20, at which point the formation of metal-enriched Pop II stars takes over. We review current theoretical models for the formation, properties, and impact of Pop III stars, and discuss existing and future observational constraints. Key takeaways from this review include the following: (1) Primordial gas is highly susceptible to fragmentation, and Pop III stars form as members of small clusters with a logarithmically flat mass function. (2) Feedback from massive Pop III stars plays a central role in regulating subsequent star formation, but major uncertainties remain regarding its immediate impact. (3) In extreme conditions, supermassive Pop III stars can form, reaching masses of several times 10^5 Msun. Their remnants may be the seeds of the supermassive black holes observed in high-redshift quasars. (4) Direct observations of Pop III stars in the early Universe remain extremely challenging. Indirect constraints from the global 21cm signal or gravitational waves are more promising. (5) Stellar archeological surveys allow us to constrain both the low-mass and the high-mass ends of the Pop III mass distribution. Observations suggest that most massive Pop III stars end their lives as core-collapse supernovae rather than as pair-instability supernovae. Comment: To appear in Annual Review of Astronomy and Astrophysics (75 pages, 14 figures, 600 references)
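    As a side note on takeaway (1), "logarithmically flat" has a compact mathematical meaning; a minimal sketch of the notation (the equivalence below is standard usage, not a result specific to this review):

```latex
% A logarithmically flat mass function: equal numbers of stars
% per logarithmic interval (per decade) of mass M.
\frac{dN}{d\log M} \simeq \text{const.}
\quad\Longleftrightarrow\quad
\frac{dN}{dM} \propto M^{-1}
```

    Under this form, a Pop III cluster would contain comparable numbers of stars in, e.g., the 1-10 Msun and 10-100 Msun ranges.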

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    Full text link
    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences enabled by virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when it is properly developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on specific aspects and disciplines and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, and academic- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we also examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date, and it allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and to identify their opportunities and potential contributions.

    Study of myocardial reverse remodeling through proteomic analysis of the myocardium and the pericardial fluid

    Get PDF
    Valve replacement remains the standard therapeutic option for aortic stenosis patients, aiming to abolish pressure overload and trigger myocardial reverse remodeling. However, despite the immediate hemodynamic benefit, not all patients show complete regression of myocardial hypertrophy, leaving them at higher risk of adverse outcomes, such as heart failure. The current comprehension of the biological mechanisms underlying an incomplete reverse remodeling is far from complete. Furthermore, definitive prognostic tools and ancillary therapies to improve the outcome of patients undergoing valve replacement are missing. To help bridge these gaps, a combined myocardial (phospho)proteomics and pericardial fluid proteomics approach was followed, taking advantage of human biopsies and pericardial fluid collected during surgery, whose origin promised a wealth of molecular information. From the over 1800 and 750 proteins identified, respectively, in the myocardium and in the pericardial fluid of aortic stenosis patients, a total of 90 dysregulated proteins were detected. Gene annotation and pathway enrichment analyses, together with discriminant analysis, are compatible with a scenario of increased pro-hypertrophic gene expression and protein synthesis, defective ubiquitin-proteasome system activity, proclivity to cell death (potentially fed by complement activity and other extrinsic factors, such as death receptor activators), acute-phase response, immune system activation, and fibrosis. Specific validation of some targets through immunoblot techniques and correlation with clinical data pointed to the complement C3 β chain, Muscle Ring Finger protein 1 (MuRF1), and the dual-specificity Tyr-phosphorylation regulated kinase 1A (DYRK1A) as potential markers of an incomplete response. In addition, kinase prediction from phosphoproteome data suggests that the modulation of casein kinase 2, the family of IκB kinases, glycogen synthase kinase 3, and DYRK1A may help improve the outcome of patients undergoing valve replacement. In particular, functional studies with DYRK1A+/- cardiomyocytes show that this kinase may be an important target to treat cardiac dysfunction, given that mutant cells presented a different response to stretch and a reduced ability to develop force (active tension). This study opens many avenues in post-aortic valve replacement reverse remodeling research. In the future, gain-of-function and/or loss-of-function studies with isolated cardiomyocytes or with animal models of aortic banding-debanding will help disclose the efficacy of targeting the surrogate therapeutic targets. In addition, clinical studies in larger cohorts will bring definitive proof of the prognostic value of complement C3, MuRF1, and DYRK1A.

    Mathematical models to evaluate the impact of increasing serotype coverage in pneumococcal conjugate vaccines

    Get PDF
    Of the over 100 serotypes of Streptococcus pneumoniae, only 7 were included in the first pneumococcal conjugate vaccine (PCV). While the PCV reduced disease incidence, in part because of a herd immunity effect, a replacement effect was observed whereby disease was increasingly caused by serotypes not included in the vaccine. Dynamic transmission models can account for these effects to describe post-vaccination scenarios, whereas economic evaluations can enable decision-makers to compare vaccines of increasing valency for implementation. This thesis has four aims. First, to explore the limitations and assumptions of published pneumococcal models and the implications for future vaccine formulation and policy. Second, to conduct a trend analysis assembling all the available evidence for serotype replacement in Europe, North America, and Australia to characterise invasive pneumococcal disease (IPD) caused by vaccine-type (VT) and non-vaccine-type (NVT) serotypes. The motivation behind this is to assess the patterns of relative abundance in IPD cases pre- and post-vaccination, to examine country-level differences in relation to the vaccines employed over time since introduction, and to assess the growth of the replacement serotypes in comparison with the serotypes targeted by the vaccine. The third aim is to use a Bayesian framework to estimate serotype-specific invasiveness, i.e. the rate of invasive disease given carriage. This is useful for dynamic transmission modelling, as transmission occurs through carriage but the majority of serotype-specific pneumococcal data comes from active disease surveillance. It also helps address whether serotype replacement reflects serotypes that are intrinsically more invasive, or whether the invasiveness of a given serotype varies between locations. Finally, the last aim of this thesis is to estimate the epidemiological and economic impact of increasing serotype coverage in PCVs using a dynamic transmission model. Together, the results highlight that, though there are key parameter uncertainties that merit further exploration, divergence in serotype replacement and inconsistencies in invasiveness at the country level may make a universal PCV suboptimal.
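    For intuition on the third aim, invasiveness can be framed as a Poisson rate per unit of carriage exposure, which admits a simple conjugate Bayesian estimate. The sketch below is illustrative only (toy numbers and a Gamma-Poisson model chosen for exposition, not the thesis's actual framework):

```python
import numpy as np

# Toy Bayesian estimate of serotype invasiveness: the rate of invasive
# disease given carriage. Model: cases_s ~ Poisson(nu_s * carriers_s * t),
# with a conjugate Gamma(a0, b0) prior on nu_s, so the posterior is
# Gamma(a0 + cases_s, b0 + carriers_s * t). All figures are made up.
serotypes = ["19A", "7F", "6B"]               # hypothetical serotypes
cases     = np.array([120, 80, 15])           # observed IPD case counts
carriers  = np.array([5e4, 1e4, 8e4])         # estimated person-carriers
t         = 1.0                               # years of surveillance
a0, b0    = 1.0, 1.0                          # weakly informative prior

rng = np.random.default_rng(0)
for s, a, b in zip(serotypes, a0 + cases, b0 + carriers * t):
    draws = rng.gamma(a, 1.0 / b, 10_000)     # posterior samples of nu_s
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{s}: invasiveness ~ {a / b:.2e} per carrier-year "
          f"(95% CrI {lo:.2e} to {hi:.2e})")
```

    A higher posterior rate for a non-vaccine type than for the types it replaced would be one signature of replacement by intrinsically more invasive serotypes.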

    Examining the Impact of Personal Social Media Use at Work on Workplace Outcomes

    Get PDF
    A noticeable shift is underway in today’s multi-generational workforce. As younger employees propel digital workforce transformation and embrace technology adoption in the workplace, organisations need to show they are forward-thinking in their digital transformation strategies, and the emerging integration of social media in organisations is reshaping internal communication strategies in a bid to improve corporate reputations and foster employee engagement. However, the impact of personal social media use on psychological and behavioural workplace outcomes is still debatable, with contrasting results in the literature identifying both positive and negative effects on workplace outcomes among organisational employees. This study examines this debate through the lens of social capital theory and studies personal social media use at work using the distinct variables of social use, cognitive use, and hedonic use. A quantitative analysis of data from 419 organisational employees in Jordan using PLS-SEM reveals that personal social media use at work is a double-edged sword, as its impact differs by usage type. First, the social use of personal social media at work reduces job burnout, turnover intention, presenteeism, and absenteeism; it also increases job involvement and organisational citizenship behaviour. Second, the cognitive use of personal social media at work increases job involvement, organisational citizenship behaviour, and employee adaptability, and decreases presenteeism and absenteeism; however, it also increases job burnout and turnover intention. Finally, the hedonic use of personal social media at work carries only negative effects, increasing job burnout and turnover intention. This study contributes to managerial understanding by showing the impact of different types of personal social media usage. It recommends that organisations not limit employee access to personal social media during work time, but rather focus on raising awareness of the negative effects of excessive usage on employee well-being and encourage low to moderate use of personal social media at work, along with other personal and work-related online interaction associated with positive workplace outcomes. It also clarifies the need for further research in regions, such as the Middle East, with distinct cultural and socio-economic contexts.

    Elasto-plastic deformations within a material point framework on modern GPU architectures

    Get PDF
    Plastic strain localization is an important process on Earth. It strongly influences the mechanical behaviour of natural processes, such as fault mechanics, earthquakes or orogeny. At a smaller scale, a landslide is a striking example of elasto-plastic deformation. Such behaviour spans from pre-failure mechanisms to post-failure propagation of the unstable material. To fully resolve the landslide mechanics, the selected numerical methods should be able to efficiently address a wide range of deformation magnitudes. Accurate and performant numerical modelling requires substantial computational resources. Mesh-free numerical methods such as the material point method (MPM) or smoothed-particle hydrodynamics (SPH) are particularly computationally expensive when compared with mesh-based methods, such as the finite element method (FEM) or the finite difference method (FDM). Still, mesh-free methods are particularly well-suited to numerical problems involving large elasto-plastic deformations, but their computational efficiency must first be improved in order to tackle complex three-dimensional problems, i.e., landslides. As such, this research work attempts to alleviate the computational cost of the material point method by using the most recent graphics processing unit (GPU) architectures available. GPUs are many-core processors originally designed to refresh screen pixels (e.g., for computer games) independently. This allows GPUs to deliver massive parallelism when compared to central processing units (CPUs). To do so, this research work first investigates code prototyping in a high-level language, e.g., MATLAB. This makes it possible to implement vectorized algorithms and to benchmark two-dimensional numerical results against analytical solutions and/or experimental results in an affordable amount of time. Afterwards, a low-level language, CUDA C, is used to efficiently implement a GPU-based solver, ep2-3De v1.0, which can resolve three-dimensional problems in a reasonable amount of time. This part takes advantage of the massive parallelism of modern GPU architectures. In addition, a first attempt at multi-GPU parallel computing is made to increase performance even further and to address the on-chip memory limitation. Finally, this GPU-based solver is used to investigate three-dimensional granular collapses and is compared with experimental evidence obtained in the laboratory. This research work demonstrates that the material point method is well suited to resolve small to large elasto-plastic deformations. Moreover, the computational efficiency of the method can be dramatically increased using modern GPU architectures, which allow fast, performant and accurate three-dimensional modelling of landslides, provided that the on-chip memory limitation is alleviated with an appropriate parallel strategy.
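    To make the "vectorized algorithms" step concrete, here is a minimal sketch in NumPy (standing in for the MATLAB prototype; a hypothetical 1D particle-to-grid mass projection with linear shape functions, not code from ep2-3De v1.0):

```python
import numpy as np

# Vectorized particle-to-grid (P2G) mass projection in 1D with linear
# shape functions: each material point splits its mass between the two
# nearest grid nodes. np.add.at performs the scatter-add that a serial
# loop over particles would do; on a GPU the same pattern maps to one
# thread per particle with atomic adds on the grid.
dx    = 0.1                                  # grid spacing
nodes = np.arange(0.0, 1.0 + dx, dx)         # grid node coordinates
xp    = np.random.uniform(0.0, 1.0, 10_000)  # particle positions
mp    = np.full_like(xp, 1e-3)               # particle masses

base = np.floor(xp / dx).astype(int)         # left node of each particle
frac = xp / dx - base                        # offset within the cell
mass = np.zeros_like(nodes)
np.add.at(mass, base,     mp * (1.0 - frac)) # weight to the left node
np.add.at(mass, base + 1, mp * frac)         # weight to the right node

assert np.isclose(mass.sum(), mp.sum())      # projection conserves mass
```

    The same scatter pattern is where GPU implementations spend much of their effort, since concurrent atomic updates to shared grid nodes are the main source of contention.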

    Study of IPFS as a content distribution protocol in vehicular networks

    Get PDF
    Over the last few years, vehicular ad-hoc networks (VANETs) have been the focus of great progress, due to the interest in autonomous vehicles and in distributing content not only between vehicles, but also to the Cloud. Performing a download/upload to/from a vehicle typically requires a cellular connection, but the costs associated with mobile data transfers across hundreds or thousands of vehicles quickly become prohibitive. A VANET allows these costs to be several orders of magnitude lower - while carrying the same large volumes of data - because it is strongly based on communication between the vehicles (the nodes of the network) and the infrastructure. The InterPlanetary File System (IPFS) is a protocol for storing and distributing content, in which information is addressed by its content instead of its location. It was created in 2014 and seeks to connect all computing devices with the same system of files, comparable to a BitTorrent swarm exchanging Git objects. It has been tested and deployed in wired networks, but never in an environment where nodes have intermittent connectivity, such as a VANET. This work focuses on understanding IPFS, on how and whether it can be applied to the vehicular network context, and on comparing it with other content distribution protocols. In this dissertation, IPFS was first tested in a small, controlled network to assess its applicability to VANETs, and issues such as long neighbor discovery times and poor hashing performance were addressed. To compare IPFS with other protocols (such as Veniam's proprietary solution or BitTorrent) in a relevant way and at a large scale, an emulation platform was created. The tests in this emulator used vehicular mobility and connectivity logs from different times of the day, with a variable number of files and file sizes. Emulated results show that IPFS is on par with Veniam's custom protocol built specifically for V2V, and that it greatly outperforms BitTorrent regarding neighbor discovery and data transfers. An analysis of IPFS's performance in a real scenario was also conducted, using a subset of STCP's vehicular network in Oporto, with the support of Veniam. Results from these tests show that IPFS can be used as a content dissemination protocol, proving it is up to the challenge posed by a constantly changing network topology and achieving throughputs of up to 2.8 MB/s, values similar to, or in some cases better than, those of Veniam's proprietary solution.
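    The property that makes IPFS attractive under intermittent connectivity is content addressing: an object's identifier is derived from its bytes, so any peer holding the same bytes can serve and verify it. A minimal sketch of the principle (plain SHA-256 hex in place of IPFS's real multihash/CID encoding):

```python
import hashlib

# Toy content-addressed store: objects are keyed by a hash of their
# bytes rather than by a location. Real IPFS CIDs add multihash and
# multicodec framing on top of the same idea.
store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()   # address derived from content
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    # Self-verifying: re-hashing detects tampering, which is what lets
    # untrusted peers (e.g., passing vehicles) serve cached content.
    assert hashlib.sha256(data).hexdigest() == cid
    return data

cid = put(b"road segment map tile")
print(cid, get(cid))
```

    Because the address never depends on which node holds the data, a vehicle can fetch a block from whichever neighbor happens to be in range, which is exactly the situation in a VANET.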

    Empowering People Living with Dementia Through Designing

    Get PDF
    The ‘wicked problem’ (Rittel and Webber, 1973) of dementia is a leading global healthcare concern. The prevalence of diagnosis is increasing significantly and correlates with longer life expectancy (Spijker and Macinnes, 2013). The UK has an estimated 850,000 people living with dementia (PLWD), for whom the greatest burden of care is placed on loved ones and privately funded approaches (Alzheimer Society, 2015). The result can be hugely challenging for the person diagnosed with dementia and their loved ones, leading to further ill-health (Marriot, 2009). The Prime Minister’s challenge on dementia (2012) has encouraged the development of multi-faceted responses and interventions to deliver improvements in care and research. As a result, designers have been encouraged to become skilled specialists engaged in thinking differently about dementia and its associated problems. This research explores co-design (Scrivener, 2005) with people living with dementia in order to understand their complex problems, and to propose and shape interventions or solutions that can alleviate pressures including social isolation, stress, infantilisation, and a sense of hopelessness (Kitwood, 1990). Through fifteen projects delivered within a series of co-design workshops, the research explores the empowerment of PLWD through their own advocacy. The research shows how co-design can be an enduring process that stimulates new behaviours and memories whilst building resilience and keeping people active in society. Ultimately, it asks how common practices of co-design can change hierarchy and ownership in order to transform design done ‘to’ or ‘for’ PLWD into integrated projects done ‘with’ and ‘by’ them. The results propose that people living with dementia can maintain significant efficacy in shaping lived experiences, making decisions, building relationships, and producing impactful designs. The resultant projects and processes support their right to make decisions and to develop their own prowess through meaningful, deeply involved, and astutely delivered designs.

    Optimization of performance and energy efficiency in massively parallel systems

    Get PDF
    Heterogeneous systems are becoming increasingly relevant due to their performance and energy efficiency capabilities, and they are present in all types of computing platforms, from embedded devices and servers to HPC nodes in large data centers. Their complexity means that they are usually programmed under the task paradigm and the host-device programming model. This strongly penalizes accelerator utilization and system energy consumption, and it also makes applications harder to adapt. Co-execution allows all devices to compute the same problem simultaneously, cooperating to consume less time and energy. However, programmers must handle all device management, workload distribution, and code portability between systems, which significantly complicates programming. This thesis offers contributions to improve performance and energy efficiency in these massively parallel systems. The proposals address generally conflicting objectives: usability and programmability are improved while ensuring greater system abstraction and extensibility, and at the same time performance, scalability, and energy efficiency are increased. To achieve this, two runtime systems with completely different approaches are proposed. EngineCL, focused on OpenCL and with a high-level API, provides an extensible modular system and favors maximum compatibility between all types of devices. Its versatility allows it to be adapted to environments for which it was not originally designed, including applications with time-constrained executions and molecular dynamics HPC simulators, such as the one used in an international research center. Considering industrial trends and emphasizing professional applicability, CoexecutorRuntime provides a flexible C++/SYCL-based system that adds co-execution support to the oneAPI technology. This runtime brings programmers closer to the problem domain, enabling the exploitation of dynamic adaptive strategies that improve efficiency in all types of applications. Funding: This PhD has been supported by the Spanish Ministry of Education (FPU16/03299 grant) and the Spanish Science and Technology Commission under contracts TIN2016-76635-C2-2-R and PID2019-105660RB-C22. This work has also been partially supported by the Mont-Blanc 3: European Scalable and Power Efficient HPC Platform based on Low-Power Embedded Technology project (G.A. No. 671697) from the European Union’s Horizon 2020 Research and Innovation Programme (H2020 Programme). Some activities have also been funded by the Spanish Science and Technology Commission under contract TIN2016-81840-REDT (CAPAP-H6 network). The work of Chapter 4, Integration II: Hybrid programming models, was partially performed under the Project HPC-EUROPA3 (INFRAIA-2016-1-730897), with the support of the EC Research Innovation Action under the H2020 Programme. In particular, the author gratefully acknowledges the support of the SPMT Department of the High Performance Computing Center Stuttgart (HLRS).
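    As a sketch of what co-execution entails at its simplest, the snippet below shows an illustrative throughput-proportional static split of an iteration space between devices (a toy example; the EngineCL and CoexecutorRuntime schedulers are more sophisticated and also support dynamic adaptive strategies):

```python
# Toy throughput-proportional partitioning: give each device a share of
# the iteration space proportional to its measured throughput, so all
# devices finish at roughly the same time.
def split_range(n_items: int, throughputs: dict[str, float]) -> dict[str, range]:
    total = sum(throughputs.values())
    chunks, start = {}, 0
    for i, (dev, tp) in enumerate(throughputs.items()):
        # The last device takes the remainder to avoid rounding gaps.
        end = (n_items if i == len(throughputs) - 1
               else start + round(n_items * tp / total))
        chunks[dev] = range(start, end)
        start = end
    return chunks

# e.g., a GPU measured 4x faster than the CPU on a calibration run:
print(split_range(1_000_000, {"cpu": 1.0, "gpu": 4.0}))
# -> {'cpu': range(0, 200000), 'gpu': range(200000, 1000000)}
```

    A dynamic variant would re-measure throughput per chunk and rebalance on the fly, which is the kind of adaptive strategy the thesis exploits.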