
    THE WHYS OF CONSPICUOUS CONSUMPTION: VEBLEN AND EVOLUTIONARY PSYCHOLOGY

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Centro Sócio-Econômico, Economics. Beyond the logic of securing subsistence, this investigation suggests that the acquisition of goods carries a supra-economic meaning, going beyond merely guaranteeing the minimum required for survival. Treating consumption from an evolutionary perspective, as done by Veblen (1980) and Miller (2010), it concludes that consumption's signaling role is fundamental to understanding its mental significance, which is not readily apparent. The various interpretations of consumption in economics link it, empirically, to income. In doing so, economists hold that the consumption decision is tied, to a greater or lesser degree, to concepts such as the marginal propensity to consume, disposable income, permanent income, intertemporal income, and so on. As a result, this reinforces the idea that consumption is a variable explained mostly through objective categories, restricting the analysis to the material domain. Such approaches set aside the question of why consumption exists as a phenomenon, failing to give due attention to its underlying reasons. Accordingly, an approach based on evolutionary psychology is proposed to address consumption and its determinants.

    Deep learning-based quantum algorithms for solving nonlinear partial differential equations

    Partial differential equations frequently appear in the natural sciences and related disciplines. Solving them is often challenging, particularly in high dimensions, due to the "curse of dimensionality". In this work, we explore the potential for enhancing a classical deep learning-based method for solving high-dimensional nonlinear partial differential equations with suitable quantum subroutines. First, with near-term noisy intermediate-scale quantum computers in mind, we construct architectures employing variational quantum circuits and classical neural networks in conjunction. While the hybrid architectures show equal or worse performance than their fully classical counterparts in simulations, they may still be of use in very high-dimensional cases or if the problem is of a quantum mechanical nature. Next, we identify the bottlenecks imposed by Monte Carlo sampling and the training of the neural networks. We find that quantum-accelerated Monte Carlo methods offer the potential to speed up the estimation of the loss function. In addition, we identify and analyse the trade-offs when using quantum-accelerated Monte Carlo methods to estimate the gradients with different methods, including a recently developed backpropagation-free forward gradient method. Finally, we discuss the usage of a suitable quantum algorithm for accelerating the training of feed-forward neural networks. Hence, this work provides different avenues with the potential for polynomial speedups for deep learning-based methods for nonlinear partial differential equations. Comment: 48 pages, 17 figures.
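The backpropagation-free forward gradient mentioned above has a compact core: sample a random direction v, compute the directional derivative of the loss along v, and use (∇f·v)v as an unbiased estimate of the gradient. A minimal sketch, assuming a finite-difference directional derivative and a toy quadratic loss; none of this is the paper's actual implementation:

```python
import random

def jvp(f, x, v, eps=1e-6):
    """Directional derivative of f at x along v via central differences."""
    xp = [xi + eps * vi for xi, vi in zip(x, v)]
    xm = [xi - eps * vi for xi, vi in zip(x, v)]
    return (f(xp) - f(xm)) / (2 * eps)

def forward_gradient(f, x, n_samples=2000, seed=0):
    """Unbiased gradient estimate: average of (grad(f).v) v with v ~ N(0, I)."""
    rng = random.Random(seed)
    d = len(x)
    g = [0.0] * d
    for _ in range(n_samples):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        dfv = jvp(f, x, v)          # scalar directional derivative
        for i in range(d):
            g[i] += dfv * v[i] / n_samples
    return g

def sphere(x):
    """Toy quadratic loss with known gradient 2x."""
    return sum(xi * xi for xi in x)

g = forward_gradient(sphere, [1.0, -2.0, 3.0])
# g approximates the true gradient [2.0, -4.0, 6.0]
```

Averaging over many directions trades extra function evaluations for variance reduction; in the paper's setting the directional derivative would come from forward-mode automatic differentiation rather than finite differences.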

    Data-independent acquisition improves quantitative cross-linking mass spectrometry

    Quantitative cross-linking mass spectrometry (QCLMS) reveals structural detail on altered protein states in solution. On its way to becoming a routine technology, QCLMS could benefit from data-independent acquisition (DIA), which generally enables greater reproducibility than data-dependent acquisition (DDA) and increased throughput over targeted methods. Therefore, here we introduce DIA to QCLMS by extending a widely used DIA software, Spectronaut, to also accommodate cross-link data. A mixture of seven proteins cross-linked with bis[sulfosuccinimidyl] suberate (BS3) was used to evaluate this workflow. Out of the 414 identified unique residue pairs, 292 (70%) were quantifiable across triplicates with a coefficient of variation (CV) of 10%, with manual correction of peak selection and boundaries for PSMs in the lower quartile of individual CV values. This compares favorably to DDA, where we quantified cross-links across triplicates with a CV of 66%, for a single protein. We found DIA-QCLMS to be capable of detecting changing abundances of cross-linked peptides in complex mixtures, despite the ratio compression encountered when increasing sample complexity through the addition of E. coli cell lysate as matrix. In conclusion, the DIA software Spectronaut can now be used in cross-linking, and DIA is indeed able to improve QCLMS.
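The precision figures above are coefficients of variation across replicate injections. A minimal sketch of that calculation; the function name and the example intensities are hypothetical, not data from the study:

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation (%) of replicate quantities:
    sample standard deviation divided by the mean, times 100."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical triplicate intensities for one residue pair
cv = cv_percent([100.0, 110.0, 90.0])  # 10.0
```

A cross-link would count as quantifiable at the reported 10% CV level if cv_percent over its triplicate intensities stays at or below that bound.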

    Impact of the horizontal resolution on the simulation of extremes

    The simulation of extremes using climate models is still a challenging task. Currently, the model grid horizontal resolution of state-of-the-art regional climate models (RCMs) is about 11-25 km, which may still be too coarse to represent local extremes realistically. In this study we use dynamically downscaled ERA-40 reanalysis data of the RCM COSMO-CLM at 18 km resolution, downscale it dynamically further to 4.5 km and finally to 1.3 km to investigate the impact of the horizontal resolution on extremes. Extremes are estimated as return levels for the 2, 5 and 10-year return periods using 'peaks-over-threshold' (POT) models. Daily return levels are calculated for precipitation and maximum 2 m temperature in summer as well as precipitation and 2 m minimum temperature in winter. The results show that CCLM is able to capture the spatial and temporal structure of the observed extremes, except for summer precipitation extremes. Furthermore, the spatial variability of the return levels increases with resolution. This effect is more distinct in the case of temperature extremes due to a higher correlation with the better resolved orography. This dependency increases with increasing horizontal resolution. In comparison to observations, the spatial variability of temperature extremes is better simulated at a resolution of 1.3 km, but the return levels are cold-biased in summer and warm-biased in winter. Regarding precipitation, the spatial variability improves as well, although the return levels were slightly overestimated in summer by all CCLM simulations. In summary, the results indicate that an increase of the horizontal resolution of CCLM does have a significant effect on the simulation of extremes and that impact models and assessment studies may benefit from such high-resolution model output.
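Return levels in a peaks-over-threshold analysis follow from the fitted generalized Pareto distribution: with threshold u, scale σ, shape ξ and an average of λ threshold exceedances per year, the m-year return level is u + (σ/ξ)((λm)^ξ − 1), with the ξ → 0 limit u + σ·ln(λm). A sketch of that formula with made-up parameter values, not parameters fitted to the study's data:

```python
import math

def pot_return_level(u, sigma, xi, lam, m):
    """m-year return level for a peaks-over-threshold (GPD) model.

    u     -- threshold (e.g. mm of daily precipitation)
    sigma -- GPD scale parameter
    xi    -- GPD shape parameter
    lam   -- mean number of threshold exceedances per year
    m     -- return period in years
    """
    if abs(xi) < 1e-9:                       # exponential/Gumbel limit
        return u + sigma * math.log(lam * m)
    return u + (sigma / xi) * ((lam * m) ** xi - 1.0)

# Hypothetical fit: u = 30 mm, sigma = 8, xi = 0.1, 5 exceedances/year
z10 = pot_return_level(30.0, 8.0, 0.1, 5.0, 10.0)  # 10-year daily return level
```

A positive shape parameter gives heavy-tailed (unbounded) extremes, typical for precipitation; temperature extremes usually fit with ξ ≤ 0.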

    Machine learning classification of OARSI-scored human articular cartilage using magnetic resonance imaging

    Summary. Objective: The purpose of this study is to evaluate the ability of machine learning to discriminate between magnetic resonance images (MRI) of normal and pathological human articular cartilage obtained under standard clinical conditions. Method: An approach to MRI classification of cartilage degradation is proposed using pattern recognition and multivariable regression, in which image features from MRIs of histologically scored human articular cartilage plugs were computed using weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHRM). The WND-CHRM method was first applied to several clinically available MRI scan types to perform binary classification of normal and osteoarthritic osteochondral plugs based on the Osteoarthritis Research Society International (OARSI) histological system. In addition, the image features computed from WND-CHRM were used to develop a multiple linear least-squares regression model for classification and prediction of an OARSI score for each cartilage plug. Results: The binary classification of normal and osteoarthritic plugs yielded results of limited quality, with accuracies between 36% and 70%. However, multiple linear least-squares regression successfully predicted OARSI scores and classified plugs with accuracies as high as 86%. The present results improve upon the previously reported accuracy of classification using average MRI signal intensities and parameter values. Conclusion: MRI features detected by WND-CHRM reflect cartilage degradation status as assessed by OARSI histologic grading. WND-CHRM is therefore of potential use in the clinical detection and grading of osteoarthritis.
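The multiple linear least-squares step can be illustrated with ordinary least squares via the normal equations (XᵀX)β = Xᵀy, solved here in pure Python; the feature matrix and scores below are synthetic stand-ins for WND-CHRM image features and OARSI grades, not data from the study:

```python
def fit_linear(X, y):
    """Ordinary least squares via normal equations; an intercept column is added.
    Returns [intercept, coef_1, ..., coef_p]."""
    A = [[1.0] + list(row) for row in X]
    n, p = len(A), len(A[0])
    # Build the normal equations (A'A) beta = A'y
    M = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = M[r][col] / M[col][col]
            for c in range(col, p):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coef = [0.0] * p
    for i in reversed(range(p)):
        coef[i] = (b[i] - sum(M[i][j] * coef[j] for j in range(i + 1, p))) / M[i][i]
    return coef

def predict(coef, row):
    return coef[0] + sum(c * x for c, x in zip(coef[1:], row))

# Synthetic data generated from score = 1 + 2*f1 + 3*f2 (exact fit)
X = [[0, 0], [1, 0], [0, 1], [2, 1], [1, 2]]
y = [1.0, 3.0, 4.0, 8.0, 9.0]
coef = fit_linear(X, y)
```

To classify rather than score, the continuous prediction would be rounded or thresholded onto the discrete OARSI grades, which is presumably how per-plug classification accuracy was obtained.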

    Magnetically Induced Current Densities in Toroidal Carbon Nanotubes

    Molecular structures of toroidal carbon nanotubes (TCNTs) have been constructed and optimized at the density functional theory (DFT) level. The TCNT structures have been constrained by using point groups with high symmetry. TCNTs consisting of only hexagons (polyhex) with armchair, chiral, and zigzag structures as well as TCNTs with pentagons and heptagons have been studied. The employed method for constructing general polyhex TCNTs is discussed. Magnetically induced current densities have been calculated using the gauge-including magnetically induced currents (GIMIC) method. The strength of the magnetically induced ring currents has been obtained by integrating the current density passing a plane cutting the ring of the TCNT. The main pathways of the current density have been identified by visualizing the current density. The calculations show that the strength of the diatropic ring current of polyhex TCNTs with an armchair structure generally increases with the size of the TCNT, whereas TCNTs with a zigzag structure sustain very weak diatropic ring currents. Some of the TCNTs with pentagons and heptagons sustain a strong diatropic ring current, whereas other TCNT structures with pentagons and heptagons sustain paratropic ring currents that are, in most cases, relatively weak. We discuss the reasons for the different behaviors of the current density of the seemingly similar TCNTs.
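The ring-current strength described above is the flux of the current density through a plane cutting the TCNT ring. As a numerical sketch, here is a two-dimensional trapezoidal rule over such a rectangular cut plane; the integrand j is a placeholder for the normal component of the GIMIC current density, not the GIMIC code itself:

```python
def ring_current_strength(j, width, height, nx=200, ny=200):
    """Integrate j(x, y) over a width x height cut plane with the
    2-D trapezoidal rule (edge points get half weight, corners a quarter)."""
    dx, dy = width / nx, height / ny
    total = 0.0
    for i in range(nx + 1):
        for k in range(ny + 1):
            w = (0.5 if i in (0, nx) else 1.0) * (0.5 if k in (0, ny) else 1.0)
            total += w * j(i * dx, k * dy)
    return total * dx * dy

# Placeholder integrand: a uniform unit current density over a 2 x 3 plane
strength = ring_current_strength(lambda x, y: 1.0, 2.0, 3.0)
```

In GIMIC the result is conventionally reported in nA/T per unit external field; a positive integrated flux corresponds to a diatropic ring current and a negative one to a paratropic current.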

    Cost-effective generation of precise label-free quantitative proteomes in high-throughput by microLC and data-independent acquisition

    Quantitative proteomics is key for basic research, but needs improvements to satisfy an increasing demand for large sample series in diagnostics, academia and industry. A switch from nanoflow-rate to microflow-rate chromatography can improve throughput and reduce costs. However, concerns about undersampling and coverage have so far hampered its broad application. We used a QTOF mass spectrometer of the penultimate generation (TripleTOF 5600), converted a nanoLC system into a microflow platform, and adapted a SWATH regime for large sample series by implementing retention time- and batch-correction strategies. From 3 μg to 5 μg of unfractionated tryptic digests obtained from proteomics-typical amounts of starting material, microLC-SWATH-MS quantifies up to 4000 human or 1750 yeast proteins in an hour or less. In the acquisition of 750 yeast proteomes, retention times varied between 2% and 5%, and the typical peptide was quantified with 5-8% signal variation in replicates, and below 20% in samples acquired over a five-month period. Providing precise quantities without being dependent on the latest hardware, our study demonstrates that the combination of microflow chromatography and data-independent acquisition strategies has the potential to overcome current bottlenecks in academia and industry, enabling the cost-effective generation of precise quantitative proteomes at large scale.
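The batch-correction strategy itself is not spelled out in the abstract; a common baseline for long acquisition series is median centering of (log) intensities per batch, sketched here with hypothetical values rather than the study's method or data:

```python
from statistics import median

def median_center(batches):
    """Align batches by subtracting each batch's median log-intensity
    and adding back the global median, removing constant per-batch offsets."""
    all_vals = [v for batch in batches for v in batch]
    g = median(all_vals)
    return [[v - median(batch) + g for v in batch] for batch in batches]

# Two hypothetical batches with a systematic 10-unit offset between them
corrected = median_center([[1.0, 2.0, 3.0], [11.0, 12.0, 13.0]])
```

After centering, both batches share the same median, so downstream peptide-level comparisons are no longer dominated by when a sample was acquired.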

    Successful treatment of metastatic uveal melanoma with ipilimumab and nivolumab after severe progression under tebentafusp: a case report

    Metastatic uveal melanoma (UM) is a rare form of melanoma, differing from cutaneous melanoma in etiology, prognosis, driver mutations, pattern of metastases and its poor response rate to immune checkpoint inhibitors (ICI). Recently, a bispecific gp100 peptide-HLA-directed CD3 T cell engager, tebentafusp, has been approved for the treatment of HLA-A*02:01 metastatic or unresectable UM. While the treatment regimen is complex, with weekly administrations and close monitoring, the response rate is limited. Few data exist on combined ICI in UM after previous progression on tebentafusp. In this case report, we present a patient with metastatic UM who first suffered extensive progression under treatment with tebentafusp but subsequently had an excellent response to combined ICI. We discuss possible interactions that could explain responsiveness to ICI after pretreatment with tebentafusp in advanced UM.

    VLA Imaging of H i-bearing Ultra-Diffuse Galaxies from the ALFALFA Survey

    Ultra-diffuse galaxies have generated significant interest due to their large optical extents and low optical surface brightnesses, which challenge galaxy formation models. Here we present resolved synthesis observations of 12 H i-bearing ultra-diffuse galaxies (HUDs) from the Karl G. Jansky Very Large Array (VLA), as well as deep optical imaging from the WIYN 3.5-meter telescope at Kitt Peak National Observatory. We present the data processing and images, including total intensity H i maps and H i velocity fields. The HUDs show ordered gas distributions and evidence of rotation, important prerequisites for the detailed kinematic models in Mancera Piña et al. (2019b). We compare the H i and stellar alignment and extent, and find the H i extends beyond the already extended stellar component and that the H i disk is often misaligned with respect to the stellar one, emphasizing the importance of caution when approaching inclination measurements for these extreme sources. We explore the H i mass-diameter scaling relation, and find that although the HUDs have diffuse stellar populations, they fall along the relation, with typical global H i surface densities. This resolved sample forms an important basis for more detailed study of the H i distribution in this extreme extragalactic population.
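The H i mass-diameter scaling relation referenced above is usually written as log10(D_HI) = a·log10(M_HI) + b, with a slope near 0.5, so the H i diameter grows roughly as the square root of the H i mass. A sketch using commonly quoted coefficients a ≈ 0.506 and b ≈ −3.293 (D_HI in kpc, M_HI in solar masses); treat these numbers as assumptions for illustration, not values taken from this paper:

```python
import math

def hi_diameter_kpc(m_hi_solar, slope=0.506, intercept=-3.293):
    """Predicted H i diameter (kpc, at a surface density of ~1 Msun/pc^2)
    from the H i mass-size relation log10 D = slope*log10 M + intercept.
    Coefficient values are assumed, not from the paper."""
    return 10.0 ** (slope * math.log10(m_hi_solar) + intercept)

# A hypothetical HUD with 10^9 solar masses of H i
d = hi_diameter_kpc(1e9)  # roughly 18 kpc under the assumed coefficients
```

Because a galaxy's position relative to this relation encodes its mean H i surface density, a sample falling on the relation, as the HUDs do, implies typical global H i surface densities despite the diffuse stellar light.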