
    MiniTeste: uma ferramenta ágil para aplicação de avaliações personalizadas

    The use of technology can strengthen methodologies that foster more active student participation. For large classes, such technological resources become indispensable. In this article, we present the MiniTeste tool for administering and managing short, individualized assessments, on paper or on a mobile device. We discuss how its features can enable, in the Brazilian context, assessment dynamics based on short, frequent conceptual tests and on interaction among students, allowing real-time adaptation to the different response scenarios of the class. MiniTeste was developed to support the adoption, in large classes, of a combination of two methodologies, namely the flipped classroom and peer instruction, and it can also be used in many other contexts.

    Standard operating procedures (SOP) in experimental stroke research: SOP for middle cerebral artery occlusion in the mouse

    Systematic reviews have found quantitative evidence that low study quality may have introduced a bias into preclinical stroke research. Monitoring, auditing, and standard operating procedures (SOPs) are already key elements of quality control in randomized clinical trials and will hopefully be widely adopted by preclinical stroke research in the near future. Increasingly, funding bodies and review boards overseeing animal experiments are taking a proactive stance and demand auditable quality control measures in preclinical research. Every good quality control system is based on its SOPs. This article introduces the concept of quality control and presents an SOP in experimental stroke research.

    A study of the relationship between the poetry and criticism of Ezra Pound 1908-1920

    From the preface: The purpose of this thesis is exposition rather than criticism. Pound's position in the hierarchy of the 'New Criticism' would provide an extremely interesting subject; but I have rather tried to outline the standards which he has laid down as being central in the technics of good poetry and to show how closely he has adhered to them in his own verse. I have limited the period to be discussed because all of the essential principles which he employs in his writing after 1920 are discernible in the body of his work published before that date.

    The Complete Calibration of the Color-Redshift Relation (C3R2) Survey: Survey Overview and Data Release 1

    A key goal of the Stage IV dark energy experiments Euclid, LSST and WFIRST is to measure the growth of structure with cosmic time from weak lensing analysis over large regions of the sky. Weak lensing cosmology will be challenging: in addition to highly accurate galaxy shape measurements, statistically robust and accurate photometric redshift (photo-z) estimates for billions of faint galaxies will be needed in order to reconstruct the three-dimensional matter distribution. Here we present an overview of and initial results from the Complete Calibration of the Color-Redshift Relation (C3R2) survey, designed specifically to calibrate the empirical galaxy color-redshift relation to the Euclid depth. These redshifts will also be important for the calibrations of LSST and WFIRST. The C3R2 survey is obtaining multiplexed observations with Keck (DEIMOS, LRIS, and MOSFIRE), the Gran Telescopio Canarias (GTC; OSIRIS), and the Very Large Telescope (VLT; FORS2 and KMOS) of a targeted sample of galaxies most important for the redshift calibration. We focus spectroscopic efforts on under-sampled regions of galaxy color space identified in previous work in order to minimize the number of spectroscopic redshifts needed to map the color-redshift relation to the required accuracy. Here we present the C3R2 survey strategy and initial results, including the 1283 high confidence redshifts obtained in the 2016A semester and released as Data Release 1.
    Comment: Accepted to ApJ. 11 pages, 5 figures. Redshifts can be found at http://c3r2.ipac.caltech.edu/c3r2_DR1_mrt.tx

    Joint Hybrid Precoder and Combiner Design for mmWave Spatial Multiplexing Transmission

    Millimeter-wave (mmWave) communications have been considered a key technology for future 5G wireless networks because of their orders-of-magnitude wider bandwidth than current cellular bands. In this paper, we consider the problem of codebook-based joint analog-digital hybrid precoder and combiner design for spatial multiplexing transmission in a mmWave multiple-input multiple-output (MIMO) system. We propose to jointly select the analog precoder and combiner pair for each data stream successively, aiming at maximizing the channel gain while suppressing the interference between different data streams. After all analog precoder/combiner pairs have been determined, we can obtain the effective baseband channel. Then, the digital precoder and combiner are computed based on the obtained effective baseband channel to further mitigate the interference and maximize the sum-rate. Simulation results demonstrate that our proposed algorithm exhibits prominent advantages in combating interference between different data streams and offers a satisfactory performance improvement compared to existing codebook-based hybrid beamforming schemes.
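The successive selection described in this abstract can be sketched numerically. The sketch below is a simplified illustration, not the paper's exact algorithm: the dimensions, the DFT-style codebooks, and the projection-based interference-suppression step are all assumptions chosen to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not values from the paper):
Nt, Nr, Ns = 16, 8, 2  # transmit antennas, receive antennas, data streams

# Toy narrowband channel and DFT-style codebooks of analog beamforming vectors.
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
F = np.exp(2j * np.pi * np.outer(np.arange(Nt), np.arange(Nt)) / Nt) / np.sqrt(Nt)
W = np.exp(2j * np.pi * np.outer(np.arange(Nr), np.arange(Nr)) / Nr) / np.sqrt(Nr)

# Successively pick the analog precoder/combiner pair with the largest
# effective channel gain, then project out that direction before the next pick.
Hres = H.copy()
f_idx, w_idx = [], []
for _ in range(Ns):
    G = np.abs(W.conj().T @ Hres @ F)       # |w^H H f| for every codebook pair
    i, j = np.unravel_index(np.argmax(G), G.shape)
    w_idx.append(i); f_idx.append(j)
    w, f = W[:, i:i+1], F[:, j:j+1]
    # One simple interference-suppression choice (the paper's update may differ).
    Hres = Hres - (w @ w.conj().T) @ Hres @ (f @ f.conj().T)

F_RF, W_RF = F[:, f_idx], W[:, w_idx]
Heff = W_RF.conj().T @ H @ F_RF             # effective baseband channel

# Digital stage: SVD-based precoder/combiner on the small effective channel.
U, s, Vh = np.linalg.svd(Heff)
F_BB, W_BB = Vh.conj().T[:, :Ns], U[:, :Ns]
print("effective stream gains:", np.round(s[:Ns], 3))
```

The two-stage structure mirrors the abstract: analog pairs fix the effective baseband channel, after which the low-dimensional digital stage handles residual inter-stream interference.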

    Cosmological Horizons, Uncertainty Principle and Maximum Length Quantum Mechanics

    The cosmological particle horizon is the maximum measurable length in the Universe. The existence of such a maximum observable length scale implies a modification of the quantum uncertainty principle. Thus, due to the non-locality of quantum mechanics, the global properties of the Universe could produce a signature on the behaviour of local quantum systems. A Generalized Uncertainty Principle (GUP) that is consistent with the existence of such a maximum observable length scale $l_{max}$ is $\Delta x \Delta p \geq \frac{\hbar}{2}\,\frac{1}{1-\alpha \Delta x^2}$, where $\alpha = l_{max}^{-2} \simeq (H_0/c)^2$ ($H_0$ is the Hubble parameter and $c$ is the speed of light). In addition to the existence of a maximum measurable length $l_{max}=\frac{1}{\sqrt{\alpha}}$, this form of GUP also implies the existence of a minimum measurable momentum $p_{min}=\frac{3\sqrt{3}}{4}\hbar\sqrt{\alpha}$. Using an appropriate representation of the position and momentum quantum operators, we show that the spectrum of the one-dimensional harmonic oscillator becomes $\bar{\mathcal{E}}_n = 2n+1+\lambda_n \bar{\alpha}$, where $\bar{\mathcal{E}}_n \equiv 2E_n/\hbar\omega$ is the dimensionless, properly normalized $n^{th}$ energy level, $\bar{\alpha}$ is a dimensionless parameter with $\bar{\alpha} \equiv \alpha\hbar/m\omega$, and $\lambda_n \sim n^2$ for $n \gg 1$ (we show the full form of $\lambda_n$ in the text). For a typical vibrating diatomic molecule and $l_{max}=c/H_0$, we find $\bar{\alpha} \sim 10^{-77}$; therefore, for such a system, this effect is beyond the reach of current experiments. However, this effect could be more important in the early universe and could produce signatures in the primordial perturbation spectrum induced by quantum fluctuations of the inflaton field.
    Comment: 11 pages, 7 figures. The Mathematica file that was used for the production of the figures may be downloaded from http://leandros.physics.uoi.gr/maxlenqm
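As a rough order-of-magnitude check of the scale quoted in this abstract, the sketch below evaluates the dimensionless parameter defined there, alpha_bar = alpha*hbar/(m*omega) with alpha = (H0/c)^2. The molecular mass and vibrational frequency are illustrative round numbers (assumptions, not values from the paper), so the result is only expected to land within a few orders of magnitude of the quoted 10^-77; either way it confirms the effect is astronomically small.

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
H0 = 2.2e-18             # s^-1 (roughly 67 km/s/Mpc)

alpha = (H0 / c) ** 2    # l_max^{-2} with l_max = c / H0

# Illustrative diatomic-molecule values (assumptions):
m = 1.0e-26              # kg, typical reduced mass
omega = 1.0e14           # rad/s, typical vibrational frequency

alpha_bar = alpha * hbar / (m * omega)
print(f"alpha = {alpha:.2e} m^-2, alpha_bar = {alpha_bar:.2e}")
```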

    Is "Better Data" Better than "Better Data Miners"? (On the Benefits of Tuning SMOTE for Defect Prediction)

    We report and fix an important systematic error in prior studies that ranked classifiers for software analytics. Those studies did not (a) assess classifiers on multiple criteria and did not (b) study how variations in the data affect the results. Hence, this paper applies (a) multi-criteria tests while (b) fixing the weaker regions of the training data (using SMOTUNED, a self-tuning version of SMOTE). This approach leads to dramatically large improvements in software defect prediction. When applied in a 5*5 cross-validation study of 3,681 JAVA classes (containing over a million lines of code) from open source systems, SMOTUNED increased AUC and recall by 60% and 20%, respectively. These improvements are independent of the classifier used to predict quality. The same pattern of improvement was observed when SMOTE and SMOTUNED were compared against the most recent class imbalance technique. In conclusion, for software analytics tasks like defect prediction, (1) data pre-processing can be more important than classifier choice, (2) ranking studies are incomplete without such pre-processing, and (3) SMOTUNED is a promising candidate for pre-processing.
    Comment: 10 pages + 2 references. Accepted to International Conference of Software Engineering (ICSE), 201
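To make the SMOTE-tuning idea concrete, here is a minimal from-scratch sketch. The interpolation step follows standard SMOTE; the "tuning" loop is a heavily simplified stand-in for SMOTUNED, whose real version tunes SMOTE's parameters against classifier performance. The toy data, the candidate k values, and the proxy scoring objective are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def smote(X_min, n_new, k=5, rng=rng):
    """Minimal SMOTE: synthesize n_new minority samples by interpolating
    each picked sample toward one of its k nearest minority neighbors."""
    n = len(X_min)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]       # k nearest neighbors per sample
    base = rng.integers(0, n, size=n_new)
    pick = nbrs[base, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))               # interpolation fraction in [0, 1)
    return X_min[base] + gap * (X_min[pick] - X_min[base])

# Toy imbalanced data: 200 majority vs 20 minority points (assumption).
X_maj = rng.normal(0.0, 1.0, size=(200, 2))
X_min = rng.normal(2.5, 0.5, size=(20, 2))

# SMOTUNED-style idea, simplified: treat k as a knob and score each setting
# instead of hard-coding SMOTE's defaults. The scoring below is only a proxy
# (how tightly synthetic points cluster around the minority mean).
best_k, best_score = None, -np.inf
for k in (1, 3, 5, 7):
    X_syn = smote(X_min, n_new=180, k=k)
    score = -np.mean(np.abs(X_syn - X_min.mean(0)))
    if score > best_score:
        best_k, best_score = k, score
print("balanced minority size:", len(X_min) + 180, "chosen k:", best_k)
```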

    Discussion on "Sparse graphs using exchangeable random measures" by F. Caron and E. B. Fox

    In this discussion we contribute to the analysis of the GGP model as compared to the Erdos-Renyi (ER) and preferential attachment (AB) models, using measures such as the number of connected components, the global clustering coefficient, the assortativity coefficient, and the share of nodes in the core.
    Comment: 2 pages, 1 figure
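Two of the summary statistics mentioned in this discussion can be computed from scratch in a few lines. The sketch below does so on a small toy undirected graph (the edge list is illustrative, not data from the paper): connected components via depth-first search, and the global (transitivity) clustering coefficient as 3 * triangles / connected triples.

```python
from itertools import combinations

# Toy undirected graph: a triangle {0,1,2}, a pendant node 3, and a pair {4,5}.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (4, 5)]
nodes = {v for e in edges for v in e}
adj = {v: set() for v in nodes}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

# Number of connected components via depth-first search.
seen, components = set(), 0
for v in nodes:
    if v not in seen:
        components += 1
        stack = [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u] - seen)

# Global clustering coefficient: 3 * (closed triangles) / (connected triples).
triangles = sum(1 for a, b, c in combinations(sorted(nodes), 3)
                if b in adj[a] and c in adj[a] and c in adj[b])
triples = sum(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes)
clustering = 3 * triangles / triples if triples else 0.0
print(components, clustering)  # → 2 0.6
```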

    Magnetic field generation in finite beam plasma system

    For finite systems, boundaries can introduce remarkable novel features. A well-known example is the Casimir effect [1, 2] observed in quantum electrodynamic systems. In classical systems, too, novel effects associated with finite boundaries have been observed, for example the surface plasmon mode [3] that appears when the plasma has a finite extension. In this work, a novel instability associated with the finite transverse size of a beam flowing through a plasma system is shown to exist. This instability leads to distinct characteristic features of the magnetic field that gets generated. For example, in contrast to the well-known unstable Weibel mode of a beam-plasma system, which generates magnetic field at the skin-depth scale, this instability generates magnetic field at the scale length of the transverse beam dimension [4]. The existence of this new instability is demonstrated by analytical arguments and by simulations conducted with a variety of Particle-In-Cell (PIC) codes (e.g. OSIRIS, EPOCH, PICPSI). Two-fluid simulations have also been conducted, which confirm the observations. Furthermore, laboratory experiments on laser-plasma systems also provide evidence of such an instability mechanism at work.

    Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case

    The analysis in Part I revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization when gradient noise is present. These algorithms are used when the risk functions are non-smooth and involve non-differentiable components. They have long been recognized as slowly converging methods. However, it was revealed in Part I that the rate of convergence becomes linear for stochastic optimization problems, with the error iterate converging at an exponential rate $\alpha^i$ to within an $O(\mu)$-neighborhood of the optimizer, for some $\alpha \in (0,1)$ and small step-size $\mu$. The conclusion was established under weaker assumptions than the prior literature and, moreover, several important problems (such as LASSO, SVM, and Total Variation) were shown to satisfy these weaker assumptions automatically (but not the previously used conditions from the literature). These results revealed that sub-gradient learning methods have more favorable behavior than originally thought when used to enable continuous adaptation and learning. The results of Part I were exclusive to single-agent adaptation. The purpose of the current Part II is to examine the implications of these discoveries when a collection of networked agents employs subgradient learning as their cooperative mechanism. The analysis will show that, despite the coupled dynamics that arise in a networked scenario, the agents are still able to attain linear convergence in the stochastic case; they are also able to reach agreement within $O(\mu)$ of the optimizer.
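The single-agent behavior this abstract builds on can be illustrated with a minimal constant step-size stochastic subgradient run. The sketch below minimizes the non-smooth risk E|x - d| (whose minimizer is the median of d); the target, noise level, step-size, and iteration count are all illustrative assumptions. The iterate closes in on the optimizer and then hovers in a small neighborhood whose size shrinks with the step-size mu, the qualitative behavior described above; the networked multi-agent case of Part II is not modeled here.

```python
import random

random.seed(0)

# Stochastic subgradient descent on the non-smooth risk E|x - d|,
# where samples d are drawn around d0 (all values illustrative).
d0, mu, x = 3.0, 0.01, -10.0
for i in range(5000):
    d = d0 + random.gauss(0.0, 0.5)   # noisy sample of the target
    g = 1.0 if x > d else -1.0        # subgradient of |x - d| at x
    x -= mu * g                       # constant step-size update
print(f"final iterate: {x:.3f} (optimizer is {d0})")
```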