378 research outputs found
An Application of Clustering Analysis to International Private Indebtedness
This paper presents a clustering procedure that combines Kohonen's Self-Organizing Feature Map (SOFM) with statistical schemes. The idea is to cluster the data in two stages: run the SOFM and then minimize the segmentation dispersion. The advantages of the proposed procedure are illustrated through a synthetic experiment and a real macroeconomic problem. The procedure is then used to explore the relationship between private indebtedness and some macroeconomic variables commonly used to measure macroeconomic performance. The experiences of thirty-nine countries in the early nineties are analyzed. The procedure outperformed other clustering techniques at identifying groups of countries that are consistent from both the economic and statistical viewpoints, and it found similarities between countries in their levels of private indebtedness when these are considered alongside well-accepted measures of macroeconomic performance.
Keywords: vector quantization, clustering, Self-Organizing Feature Map, macroeconomic performance, private indebtedness.
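The two-stage idea can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's exact method: a tiny one-dimensional SOFM compresses the observations into prototype vectors, and a second statistical stage (here plain k-means over the prototypes, a common way to minimize within-cluster dispersion) produces the final segmentation. All function names, parameter values, and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_sofm(X, n_units=16, epochs=40, lr0=0.5, sigma0=2.0):
    # stage 1: a small 1-D SOFM; W holds one prototype vector per map unit
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    pos = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-2
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # best-matching unit
            h = np.exp(-((pos - bmu) ** 2) / (2 * sigma ** 2))   # neighborhood kernel
            W += lr * h[:, None] * (x - W)
    return W

def cluster_prototypes(W, k=3, iters=50):
    # stage 2: k-means over the prototypes, minimizing within-cluster dispersion
    C = [W[0]]
    for _ in range(k - 1):  # farthest-point initialization keeps the groups apart
        d = np.min([((W - c) ** 2).sum(axis=1) for c in C], axis=0)
        C.append(W[d.argmax()])
    C = np.array(C)
    for _ in range(iters):
        lab = np.argmin(((W[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([W[lab == j].mean(axis=0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab

# synthetic stand-in for the 39 countries: three groups in 4 indicators
X = np.vstack([rng.normal(m, 0.3, size=(13, 4)) for m in (0.0, 2.0, 4.0)])
W = train_sofm(X)
proto_lab = cluster_prototypes(W)
# each observation inherits the cluster of its best-matching prototype
obs_lab = proto_lab[np.argmin(((X[:, None] - W[None]) ** 2).sum(-1), axis=1)]
print(obs_lab)
```

On well-separated synthetic groups the observation labels recover the three groups; the paper's real contribution lies in the statistical dispersion criterion of the second stage, which this sketch only approximates with k-means.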
The econometrics of randomly spaced financial data: a survey
This paper provides an introduction to the problem of modeling randomly spaced longitudinal data. Although point process theory was developed mostly in the sixties and early seventies, only in the nineties did this field of probability theory attract the attention of researchers working in financial econometrics. The large increase observed since then in the number of different classes of econometric models for financial duration data has been mostly due to the increased availability of both trade-by-trade data from equity markets and daily default and rating migration data from credit markets. This paper provides an overview of the main econometric models available in the literature for dealing with what is sometimes called tick data. Additionally, a synthesis of the basic theory underlying these models is presented. Finally, a new theorem dealing with the identifiability of latent intensity factors from point process data is introduced, together with a heuristic proof.
Keywords: tick data, financial duration models, point processes, migration models.
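The survey covers many classes of duration models; as one minimal, hedged illustration of the kind of model it discusses, the sketch below simulates an ACD(1,1) process in the spirit of Engle and Russell's autoregressive conditional duration model. Parameter values and the function name are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_acd(n, omega=0.2, alpha=0.1, beta=0.7):
    # ACD(1,1): duration x_i = psi_i * eps_i with i.i.d. unit-exponential eps_i,
    # and conditional mean psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}
    psi = omega / (1 - alpha - beta)  # start at the unconditional mean duration
    x = np.empty(n)
    for i in range(n):
        x[i] = psi * rng.exponential()
        psi = omega + alpha * x[i] + beta * psi
    return x

durations = simulate_acd(10_000)
# persistence (alpha + beta close to 1) produces the clustering of short
# inter-trade durations typical of bursts of trading activity
print(durations.mean())  # fluctuates around omega / (1 - alpha - beta) = 1.0
```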
Comparison of goodness-of-fit tests for multinomial distributions
Advisor: Euclydes Custodio de Lima Filho. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Ciência da Computação. Abstract: not informed. Degree: Master in Statistics.
Deep learning techniques applied to skin lesion classification: a review
Skin cancer is one of the most common cancers in the world. The most dangerous type is melanoma, which can be lethal if not treated early. However, diagnosing skin lesions can be a difficult task. Deep learning techniques have therefore been explored for the diagnosis of skin lesions, given their effectiveness in extracting features from and classifying input data. In this work, we present a review of the latest approaches that apply deep learning techniques to the skin lesion classification task. In addition, we introduce some datasets used for training and validating the models, describing their characteristics and specificities, as well as popular pre-processing steps and skin lesion segmentation approaches. Finally, we comment on the effectiveness of the proposed models.
"It is the economy, companheiro!": an empirical analysis of Lula's re-election based on municipal data
This paper discusses the reasons that led to Lula's 2006 re-election. Spatial analysis methods revealed that, contrary to 2002, the President received more votes in the less developed municipalities of Brazil. The econometric results cast doubt on analyses that attribute total responsibility for the re-election to the Bolsa Família Programme. Lula's electoral success resulted from changes in the labor market, low inflation, and an export boom, which reduced inequality and improved the real wages of the Brazilian poor.
PRODUCTION COSTS OF DAIRY FARMING IN THE SOUTHERN REGION OF MINAS GERAIS
This study presents the production costs of dairy milk in the southern region of the state of Minas Gerais and identifies the economic cost indexes that most influence decisions made by milk producers. The research is based on the theory of costs; data on 12 milk production units in the southern region of Minas Gerais were gathered from March 2000 to February 2001, characterizing a multi-case study. The economic analysis showed that milk prices were below average total cost but above average variable cost. Even though profits may be negative in this case, as long as variable cost is covered the profit-maximizing decision is to continue production, so that not all of the fixed costs are lost. The research shows that expenses on variable resources, such as cattle feed and labor, represent the greater portion of the final cost of milk. The fixed-cost items that most affected the cost of milk production in the south of Minas Gerais were machinery and equipment.
Keywords: production costs, milk, south of Minas Gerais.
Learning from failure: a case study on creative problem solving
Second International Conference on Leadership, Technology and Innovation Management
This research is aimed at improving the creative problem solving (CPS) facilitation process through case analysis, by which we try to learn even from failure. With the goal of increasing efficiency by reducing session time, and also for theoretical reasons, a four-step model was designed, comprising the stages of objective-finding, problem-definition, action-planning and the action itself.
Following these adaptations, our research involved an organisation that enabled us to bring managers and volunteers to work on a project. The organisation is the only private museum in the Algarve region of Portugal; it is involved in regional culture and, despite competent management, faces serious financial difficulties. A team of 22 people was established, representing both immediate and remote geographical communities, cultural organisations, and representatives of innovative projects related to the hospitality industry. From the interventions and the follow-up procedures, we learned that some project failures could have been prevented by more thorough team facilitation, considering the team size, and by better handling of the client's ownership of the problem. The analyses and conclusions allowed the development of principles that will be applied in future interventions, giving rise to improvements in the facilitation process and carrying important implications for developing collaboration between organizations.
Team composition and the handling of client-team relationships seem to be promising areas for research, given their potential impact on a project's effectiveness as to its final results for the organization considered.
Constraining Representations Yields Models That Know What They Don't Know
A well-known failure mode of neural networks is that they may confidently return erroneous predictions. Such unsafe behaviour is particularly frequent when the use case differs slightly from the training context, and/or in the presence of an adversary. This work presents a novel direction to address these issues in a broad, general manner: imposing class-aware constraints on a model's internal activation patterns. Specifically, we assign to each class a unique, fixed, randomly-generated binary vector, hereafter called a class code, and train the model so that its cross-depth activation patterns predict the appropriate class code according to the input sample's class. The resulting predictors are dubbed Total Activation Classifiers (TAC); TACs may either be trained from scratch, or used with negligible cost as a thin add-on on top of a frozen, pre-trained neural network. The distance between a TAC's activation pattern and the closest valid code acts as an additional confidence score, besides that of the default unTAC'ed prediction head. In the add-on case, the original neural network's inference head is completely unaffected (so its accuracy remains the same), but we now have the option to use TAC's own confidence and prediction when determining which course of action to take in a hypothetical production workflow. In particular, we show that TAC strictly improves the value derived from models allowed to reject/defer. We provide further empirical evidence that TAC works well on multiple types of architectures and data modalities, and that it is at least as good as state-of-the-art alternative confidence scores derived from existing models.
Comment: CR version published at ICLR 202
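The scoring side of the idea is easy to sketch. The snippet below is a hypothetical illustration only, not the paper's training procedure (which learns to map cross-depth activations to the codes): given an already-binarized activation pattern, the nearest class code yields a prediction, and the distance to it yields a confidence score. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, code_len = 10, 32

# one unique, fixed, randomly-generated binary code per class
codes = rng.integers(0, 2, size=(n_classes, code_len))

def tac_score(pred_bits):
    # Hamming distance from the (binarized) activation pattern to every code;
    # the nearest code gives the prediction, its distance the confidence
    d = np.abs(codes - pred_bits).sum(axis=1)
    cls = int(d.argmin())
    confidence = 1.0 - d.min() / code_len  # near a valid code => confident
    return cls, confidence

# in-distribution pattern: class 3's code with two bits flipped
pattern = codes[3].copy()
pattern[:2] ^= 1
cls, conf = tac_score(pattern)

# out-of-distribution pattern: random bits, far from every class code
ood_cls, ood_conf = tac_score(rng.integers(0, 2, size=code_len))
print(cls, conf, ood_conf)
```

A pattern that nearly matches a valid code is scored confidently, while an arbitrary pattern sits far from all codes and receives a low score, which is what makes the distance usable for reject/defer decisions.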
Reliability of reflectance measures in passive filters
Measurements of optical reflectance in passive filters impregnated with a reactive chemical solution may be transformed to ozone concentrations via a calibration curve and constitute a low cost alternative for environmental monitoring, mainly to estimate human exposure. Given the possibility of errors caused by exposure bias, it is common to consider sets of m filters exposed during a certain period to estimate the latent reflectance on n different sample occasions at a certain location. Mixed models with sample occasions as random effects are useful to analyze data obtained under such setups. The intra-class correlation coefficient of the mean of the m measurements is an indicator of the reliability of the latent reflectance estimates. Our objective is to determine m in order to obtain a pre-specified reliability of the estimates, taking possible outliers into account. To illustrate the procedure, we consider an experiment conducted at the Laboratory of Experimental Air Pollution, University of São Paulo, Brazil (LPAE/FMUSP), where sets of m = 3 filters were exposed during 7 days on n = 9 different occasions at a certain location. The results show that the reliability of the latent reflectance estimates for each occasion obtained under homoskedasticity is k(m) = 0.74. A residual analysis suggests that the within-occasion variance for two of the occasions should be different from the others. A refined model with two within-occasion variance components was considered, yielding k(m) = 0.56 for these occasions and k(m) = 0.87 for the remaining ones. To guarantee that all estimates have a reliability of at least 80% we require measurements on m = 10 filters on each occasion. (C) 2014 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
Funding: INAIRA - Instituto Nacional de Avaliacao Integrada de Risco Ambiental; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq: 15/2008, 308613/2011-2); Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP: 2008/57717-6).
Affiliations: Univ São Paulo, Inst Math & Stat, BR-05508 São Paulo, Brazil; Univ São Paulo, Sch Med, BR-05508 São Paulo, Brazil; Universidade Federal de São Paulo, São Paulo, Brazil.
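The abstract's numbers are consistent with the usual Spearman-Brown relation for the reliability of a mean of m parallel measurements, k(m) = m*rho / (1 + (m-1)*rho), where rho is the single-filter intra-class correlation. Assuming that form (the function names below are mine, not the paper's), inverting k(3) = 0.56 for the noisier occasions and solving for the smallest m reaching 80% reliability reproduces the reported m = 10:

```python
import math

def icc_from_km(k_m, m):
    # invert the Spearman-Brown relation k(m) = m*rho / (1 + (m-1)*rho)
    return k_m / (m - (m - 1) * k_m)

def filters_needed(rho, target):
    # smallest m with k(m) >= target
    return math.ceil(target * (1 - rho) / (rho * (1 - target)))

rho = icc_from_km(0.56, 3)     # single-filter ICC on the noisier occasions
m = filters_needed(rho, 0.80)
print(round(rho, 3), m)        # m = 10 filters, matching the abstract
```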