
    Small Solar Wind Transients 1995 - 2014: Properties, Modeling, and Effects on the Magnetosphere

    Using case studies and statistical analysis of a large data sample, we investigate the properties of small solar wind transients (STs) and discuss their modeling. The observations are from the Wind and STEREO spacecraft. By small we mean a duration of 0.5-12 hours. We do not restrict ourselves to magnetic flux ropes. Having arrived at a definition that extends previous work, we apply an automated algorithm to search for STs. In one chapter we focus on the solar activity minimum years 2007-2009. We find an average ST duration of ~4.3 hours, with 75% lasting less than 6 hours. A major difference from large-scale transients (i.e., ICMEs) in the same solar minimum is that low proton temperature (Tp) is not a robust signature of STs, opposite to the trend in ICMEs. Further, the plasma beta (electrons + protons) is ~1, so force-free modeling of flux rope STs may not be appropriate. We then examine a much wider sample covering almost two solar cycles (1995-2014). After Alfvénic fluctuations are removed, we obtain about 2000 STs. We find that their occurrence frequency has a two-fold dependence: it is (i) correlated strongly with slow solar wind speeds, and (ii) anti-correlated with solar activity, as monitored by the sunspot number. As regards (i), over 80% of STs occur in the slow wind (< 450 km/s). The anti-correlation with solar cycle activity is contrary to what is observed for ICMEs. Most STs convect with the ambient solar wind. Studying the normalized expansion parameter, we conclude that many STs do not expand at all, i.e., they are static structures. Only ~5% of STs show enhanced iron charge states. We also find that the plasma beta of STs depends on the solar activity level, being << 1 at solar maximum and of order 1 or more at solar minimum. Thus non-force-free models should be used in solar minimum years, while force-free models could be used at solar maximum. Motivated by these results, we then explore ST modeling with static, non-force-free methods, using two analytical models (with circular and elliptical cross-sections, respectively) and one numerical model (Grad-Shafranov reconstruction). We illustrate and compare results for 8 examples of flux rope STs. The two analytical models give fairly similar results. We show that our non-force-free models can fit the data as well as, or better than, the force-free model. Grad-Shafranov reconstruction shows that small flux ropes tend to have elliptical cross-sections. Finally, we address some aspects of the disturbances in the Earth's magnetosphere, focusing on substorms and storms. We find substorm occurrence to be relatively common during passages of STs at Earth: ~47% of STs of duration 1-5 hours were associated with substorms, a conclusion reached in other studies but here valid over a much larger data set. Further, about 3% of these STs were associated with geomagnetic storms.
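
    The force-free description that the abstract argues against when beta ~ 1 is typically the constant-alpha (Lundquist) flux rope. As a point of reference, here is a minimal Python sketch of that field, assuming the standard Bessel-function form; B0 and alpha are illustrative parameters, not fits from this work.

    # Constant-alpha force-free (Lundquist) flux rope field -- a minimal
    # reference sketch, not the thesis's fitting code.
    import numpy as np
    from scipy.special import j0, j1  # Bessel functions of the first kind

    def lundquist_field(r, B0=10.0, alpha=2.405):
        # Axial (B_z) and azimuthal (B_phi) components at normalized radius r.
        # With alpha ~ 2.405, the first zero of J0, the axial field vanishes
        # at the rope boundary r = 1.
        return B0 * j0(alpha * r), B0 * j1(alpha * r)

    r = np.linspace(0.0, 1.0, 5)
    B_z, B_phi = lundquist_field(r)
    for ri, bz, bphi in zip(r, B_z, B_phi):
        print(f"r={ri:.2f}  B_z={bz:+6.2f} nT  B_phi={bphi:+6.2f} nT")

    A non-force-free model instead balances the Lorentz force against the plasma pressure gradient, which is why it is preferred when beta is of order 1.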

    Behavioral analysis of anisotropic diffusion in image processing

    DOI: 10.1109/83.541424. In this paper, we analyze the behavior of the anisotropic diffusion model of Perona and Malik (1990). The main idea is to express the anisotropic diffusion equation as arising from a certain optimization problem, so that its behavior can be analyzed through the shape of the corresponding energy surface. We show that anisotropic diffusion is the steepest descent method for solving an energy minimization problem. We demonstrate that an anisotropic diffusion is well posed when there exists a unique global minimum for the energy functional, and that the ill-posedness of a certain anisotropic diffusion is caused by its energy functional having an infinite number of global minima that are dense in the image space. We give a sufficient condition for an anisotropic diffusion to be well posed and a necessary and sufficient condition for it to be ill posed due to dense global minima. The mechanism of smoothing and edge enhancement in anisotropic diffusion is illustrated through a particular orthogonal decomposition of the diffusion operator into two parts: one that diffuses tangentially to the edges and therefore acts as an anisotropic smoothing operator, and one that flows normally to the edges and thus acts as an enhancement operator.
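
    To make the scheme concrete, the following is a minimal sketch of the Perona-Malik explicit iteration with the exponential conductance g(s) = exp(-(s/K)^2); the parameter values are illustrative, and a periodic border is used for brevity rather than the replicated border common in practice.

    # Perona-Malik anisotropic diffusion: explicit four-neighbour scheme.
    import numpy as np

    def perona_malik(img, n_iter=20, K=0.1, dt=0.2):
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / K) ** 2)   # conductance: ~0 across strong edges
        for _ in range(n_iter):
            # differences to the four neighbours (periodic border via roll)
            dN = np.roll(u, -1, axis=0) - u
            dS = np.roll(u,  1, axis=0) - u
            dE = np.roll(u, -1, axis=1) - u
            dW = np.roll(u,  1, axis=1) - u
            # dt <= 0.25 keeps the explicit update stable
            u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
        return u

    smoothed = perona_malik(np.random.rand(64, 64))

    Each update is a discrete steepest-descent step on an energy functional, which is exactly the view the paper takes when analyzing well- and ill-posedness.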

    Towards Certain Fixes with Editing Rules and Master Data

    A variety of integrity constraints have been studied for data cleaning. While these constraints can detect the presence of errors, they fall short of guiding us to correct the errors. Indeed, data repairing based on these constraints may not find certain fixes that are absolutely correct and, worse, may introduce new errors when repairing the data. We propose a method for finding certain fixes, based on master data, a notion of certain regions, and a class of editing rules. A certain region is a set of attributes that are assured correct by the users. Given a certain region and master data, editing rules tell us what attributes to fix and how to update them. We show how the method can be used in data monitoring and enrichment. We develop techniques for reasoning about editing rules, to decide whether they lead to a unique fix and whether they are able to fix all the attributes in a tuple, relative to master data and a certain region. We also provide an algorithm to identify minimal certain regions, such that a certain fix is warranted by editing rules and master data as long as one of the regions is correct. We experimentally verify the effectiveness and scalability of the algorithm.
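
    As a concrete illustration, here is a hedged sketch of a single editing rule in Python; the schema (zip, city) and the rule itself are hypothetical, not taken from the paper.

    # One hypothetical editing rule: if the tuple's "zip" lies in the certain
    # region (assured correct by the user) and matches a master tuple, copy
    # the master's "city" value into the tuple -- a certain fix.
    master_data = [
        {"zip": "EH8 9AB", "city": "Edinburgh"},
        {"zip": "NW1 2DB", "city": "London"},
    ]

    def apply_editing_rule(t, certain_region, master):
        if "zip" not in certain_region:   # rule fires only on assured attributes
            return t, False
        for m in master:
            if m["zip"] == t["zip"]:
                fixed = dict(t, city=m["city"])
                return fixed, fixed != t
        return t, False

    t = {"zip": "EH8 9AB", "city": "Edinburh"}   # typo in city
    fixed, changed = apply_editing_rule(t, {"zip"}, master_data)
    print(fixed, changed)   # {'zip': 'EH8 9AB', 'city': 'Edinburgh'} True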

    Optimal dividend and capital injection under spectrally positive Markov additive models

    This paper studies De Finetti's optimal dividend problem with capital injection under spectrally positive Markov additive models. Based on the dynamic programming principle, we first study an auxiliary singular control problem with a final payoff at an exponential random time. The double barrier strategy is shown to be optimal, and the optimal barriers are characterized in analytical form using fluctuation identities of spectrally positive Lévy processes. We then transform the original problem under spectrally positive Markov additive models into an equivalent series of local optimization problems with the final payoff at the regime-switching time. The optimality of the regime-modulated double barrier strategy is confirmed for the original problem using results from the auxiliary problem and a fixed point argument for recursive iterations.
    Keywords: spectrally positive Lévy process, regime switching, De Finetti's optimal dividend, capital injection, double barrier strategy, singular control
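
    The paper's results are analytical, but the strategy itself is easy to simulate. Below is a hedged Monte Carlo sketch for a single regime (no switching): the surplus is a spectrally positive Lévy process approximated by a downward drift plus positive compound Poisson jumps, dividends reflect it at an upper barrier b, and capital injections at proportional cost phi reflect it at 0. All parameter values are illustrative.

    # Double barrier strategy under a drift-plus-positive-jumps surplus:
    # pay out the overflow above b, inject capital (cost phi > 1) below 0,
    # and discount both cash flows at rate q.
    import numpy as np

    rng = np.random.default_rng(0)

    def barrier_value(b, x0=1.0, c=1.5, lam=1.0, jump_mean=1.2,
                      phi=1.2, q=0.05, T=20.0, dt=0.01, n_paths=200):
        n = int(T / dt)
        total = 0.0
        for _ in range(n_paths):
            x, v = x0, 0.0
            for k in range(n):
                disc = np.exp(-q * k * dt)
                x -= c * dt                          # downward drift
                if rng.random() < lam * dt:          # positive jump (income)
                    x += rng.exponential(jump_mean)
                if x > b:                            # dividend: overflow above b
                    v += disc * (x - b); x = b
                if x < 0:                            # costly capital injection
                    v -= disc * phi * (0.0 - x); x = 0.0
            total += v
        return total / n_paths

    for b in (0.5, 1.0, 2.0):                        # compare candidate barriers
        print(b, round(barrier_value(b), 3))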

    Improving data quality: data consistency, deduplication, currency and accuracy

    Data quality is one of the key problems in data management. An unprecedented amount of data has been accumulated and has become a valuable asset of organizations. The value of the data relies greatly on its quality. However, data is often dirty in real life: it may be inconsistent, duplicated, stale, inaccurate or incomplete, which reduces its usability and increases the cost of business. Hence the need to improve data quality, which comprises five central issues: data consistency, data deduplication, data currency, data accuracy and information completeness. This thesis presents the results of our work on the first four issues: data consistency, deduplication, currency and accuracy.

    The first part of the thesis investigates incremental verification of data consistency in distributed data. Given a distributed database D, a set S of conditional functional dependencies (CFDs), the set V of violations of the CFDs in D, and updates ΔD to D, the problem is to find, with minimum data shipment, the changes ΔV to V in response to ΔD. Although the problems are intractable, we show that they are bounded: there exist algorithms to detect errors such that their computational cost and data shipment are both linear in the size of ΔD and ΔV, independent of the size of the database D. Such incremental algorithms are provided for both vertically and horizontally partitioned data, and we show that the algorithms are optimal.

    The second part of the thesis studies the interaction between record matching and data repairing. Record matching, the main technique underlying data deduplication, aims to identify tuples that refer to the same real-world object, and repairing is to make a database consistent by fixing errors in the data using constraints. These are treated as separate processes in most data cleaning systems, based on heuristic solutions. However, our studies show that repairing can effectively help us identify matches, and vice versa. To capture the interaction, we propose a uniform framework that seamlessly unifies repairing and matching operations, to clean a database based on integrity constraints, matching rules and master data.

    The third part of the thesis presents our study of finding certain fixes that are absolutely correct for data repairing. Data repairing methods based on integrity constraints are normally heuristic, and they may not find certain fixes. Worse still, they may even introduce new errors when attempting to repair the data, which is unacceptable when repairing critical data such as medical records, in which a seemingly minor error often has disastrous consequences. We propose a framework and an algorithm to find certain fixes, based on master data, a class of editing rules and user interactions. A prototype system is also developed.

    The fourth part of the thesis introduces inferring data currency and consistency for conflict resolution, where data currency aims to identify the current values of entities, and conflict resolution is to combine tuples that pertain to the same real-world entity into a single tuple and resolve conflicts, which is also an important issue for data deduplication. We show that data currency and consistency help each other in resolving conflicts. We study a number of associated fundamental problems, and develop an approach for conflict resolution by inferring data currency and consistency.

    The last part of the thesis reports our study of data accuracy on the longstanding relative accuracy problem, which is to determine, given tuples t1 and t2 that refer to the same entity e, whether t1[A] is more accurate than t2[A], i.e., whether t1[A] is closer than t2[A] to the true value of the A attribute of e. We introduce a class of accuracy rules and an inference system with a chase procedure to deduce relative accuracy, and we study the related fundamental problems. We also propose a framework and algorithms for inferring accurate values with user interaction.
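
    To make the first part concrete, here is a hedged sketch of what detecting violations of a single CFD might look like; the CFD ("for UK tuples, zip determines city") and the schema are hypothetical, and the thesis's actual contribution is the incremental, distributed version of this check.

    # Find pairs of tuples violating the CFD: country = 'UK' implies zip -> city.
    def cfd_violations(rows):
        seen = {}          # zip -> (city, index of first UK tuple with that zip)
        violations = []
        for i, r in enumerate(rows):
            if r["country"] != "UK":     # the pattern restricts the CFD to UK
                continue
            if r["zip"] in seen and seen[r["zip"]][0] != r["city"]:
                violations.append((seen[r["zip"]][1], i))
            else:
                seen.setdefault(r["zip"], (r["city"], i))
        return violations

    rows = [
        {"country": "UK", "zip": "EH8", "city": "Edinburgh"},
        {"country": "UK", "zip": "EH8", "city": "London"},   # violation
        {"country": "US", "zip": "EH8", "city": "Boston"},   # pattern not matched
    ]
    print(cfd_violations(rows))   # [(0, 1)]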

    In whom do we trust? Critical success factors impacting intercultural communication in multicultural project teams

    Trust is a significant enabler of intercultural communication in project teams. Researchers and practitioners therefore need to know which factors might enhance trust in intercultural communication. Contributing to the still limited number of studies in the field of intercultural communication for multicultural project teams, this research theoretically analyzes and empirically investigates the enablers of trust for intercultural communication, focusing on emotional intelligence, empathy, interaction, and transparency. Using a field sample of 117 experienced project managers working in multicultural project teams, we find that interaction and transparency significantly and positively influence trust in intercultural communication, and that empathy marginally and positively influences trust, while emotional intelligence exerts no significant effect. These results provide novel theoretical and empirical insights with practical implications for project managers, and the findings suggest directions for further theoretical work.

    DMCS: Density Modularity based Community Search

    Community search, i.e., finding a connected subgraph (a community) that contains given query nodes in a social network, is a fundamental problem. Most existing community search models focus only on the internal cohesiveness of a community. However, a high-quality community often has high modularity, meaning dense connections inside the community and sparse connections to nodes outside it. In this paper, we conduct a pioneering study on searching for a community with high modularity. We point out that while modularity has been widely used in community detection (without query nodes), it has, surprisingly, not been adopted for community search, and its application to community search (with query nodes) brings new challenges. We address these challenges by designing a new graph modularity function named density modularity. To the best of our knowledge, this is the first work on the community search problem using graph modularity. Community search based on density modularity, termed DMCS, is to find a community in a social network that contains all the query nodes and has high density modularity. We prove that the DMCS problem is NP-hard. To solve DMCS efficiently, we present new algorithms that run in time log-linear in the graph size. We conduct extensive experimental studies on real-world and synthetic networks, which offer insights into the efficiency and effectiveness of our algorithms. In particular, our algorithm achieves up to 8.5 times higher accuracy in terms of NMI than baseline algorithms.
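
    The paper's density modularity function is not reproduced in this abstract, so the following sketch uses an illustrative stand-in: the classic single-community modularity contribution normalized by community size. It shows the shape of objective a DMCS-style search would score candidates with, not the paper's exact definition.

    # Score one candidate community: internal-edge fraction minus the
    # configuration-model expectation, divided by community size.
    import networkx as nx

    def density_modularity_like(G, community):
        m = G.number_of_edges()
        S = set(community)
        e_in = G.subgraph(S).number_of_edges()       # edges inside the community
        deg_sum = sum(d for _, d in G.degree(S))     # total degree of members
        return (e_in / m - (deg_sum / (2 * m)) ** 2) / len(S)

    G = nx.karate_club_graph()
    print(density_modularity_like(G, [0, 1, 2, 3, 7, 13]))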