Quantitative weighted estimates for Rubio de Francia's Littlewood--Paley square function
We consider Rubio de Francia's Littlewood--Paley square function
associated with an arbitrary family of intervals in with finite
overlap. Quantitative weighted estimates are obtained for this operator.
The linear dependence on the characteristic of the weight turns
out to be sharp for , whereas the sharpness in the range
remains an open question. Weighted weak-type estimates at the endpoint
are also provided. The results arise as a consequence of a sparse domination
shown for these operators, obtained by suitably adapting ideas from
Benea (2015) and Culiuc et al. (2016).

Comment: 18 pages. Revised version
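For reference, the square function studied above is usually written as follows; this is the standard formulation for a family $\mathcal{I}$ of intervals with finite overlap (a sketch of the usual definition, not text quoted from the paper), where $S_I$ denotes the frequency projection onto the interval $I$:

\[
  S_{\mathcal{I}} f(x) = \Bigl( \sum_{I \in \mathcal{I}} \bigl| S_I f(x) \bigr|^{2} \Bigr)^{1/2},
  \qquad \widehat{S_I f} = \chi_I \, \widehat{f}.
\]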
FLASH: Randomized Algorithms Accelerated over CPU-GPU for Ultra-High Dimensional Similarity Search
We present FLASH (\textbf{F}ast \textbf{L}SH \textbf{A}lgorithm for
\textbf{S}imilarity search accelerated with \textbf{H}PC), a similarity search
system for ultra-high dimensional datasets on a single machine, that does not
require similarity computations and is tailored for high-performance computing
platforms. By leveraging an LSH-style randomized indexing procedure and
combining it with several principled techniques, such as reservoir sampling,
recent advances in one-pass minwise hashing, and count-based estimation, we
reduce the computational and parallelization costs of similarity search, while
retaining sound theoretical guarantees.
We evaluate FLASH on several real, high-dimensional datasets from different
domains, including text, malicious URLs, click-through prediction, and social
networks. Our experiments shed new light on the difficulties associated
with datasets having several million dimensions. Current state-of-the-art
implementations either fail on the presented scale or are orders of magnitude
slower than FLASH. FLASH is capable of computing an approximate k-NN graph,
from scratch, over the full webspam dataset (1.3 billion nonzeros) in less than
10 seconds. Computing a full k-NN graph in less than 10 seconds on the webspam
dataset using brute-force () would require at least 20 teraflops. We
provide CPU and GPU implementations of FLASH for replicability of our results.
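To make the indexing idea concrete, the following is a minimal Python sketch of an LSH-style index whose buckets are maintained by reservoir sampling and whose queries are answered by count-based voting. It is illustrative only: the class, function, and parameter names are ours (not FLASH's API), and the hashing is deliberately simplified.

import random
from collections import defaultdict

def minhash_signature(features, num_hashes, seed=0):
    # Simplified minwise hashing of a sparse feature set (illustrative only).
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(num_hashes)]
    return [min(hash((salt, f)) & 0xFFFFFFFF for f in features) for salt in salts]

class ReservoirLSHIndex:
    # Toy LSH index: every bucket keeps a fixed-size reservoir of item ids,
    # so memory stays bounded even for heavily skewed buckets.
    def __init__(self, num_tables=8, reservoir_size=32):
        self.num_tables = num_tables
        self.reservoir_size = reservoir_size
        self.tables = [defaultdict(list) for _ in range(num_tables)]
        self.seen = [defaultdict(int) for _ in range(num_tables)]

    def add(self, item_id, features):
        for t, key in enumerate(minhash_signature(features, self.num_tables)):
            bucket = self.tables[t][key]
            self.seen[t][key] += 1
            if len(bucket) < self.reservoir_size:
                bucket.append(item_id)
            else:
                j = random.randrange(self.seen[t][key])  # reservoir sampling step
                if j < self.reservoir_size:
                    bucket[j] = item_id

    def query(self, features, k=10):
        # Count-based estimation: rank candidates by how many buckets they share
        # with the query; no explicit similarity computation is performed.
        votes = defaultdict(int)
        for t, key in enumerate(minhash_signature(features, self.num_tables)):
            for item_id in self.tables[t].get(key, ()):
                votes[item_id] += 1
        return sorted(votes, key=votes.get, reverse=True)[:k]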
FP-tree and COFI Based Approach for Mining of Multiple Level Association Rules in Large Databases
In recent years, discovery of association rules among itemsets in a large
database has been described as an important database-mining problem. The
problem of discovering association rules has received considerable research
attention and several algorithms for mining frequent itemsets have been
developed. Many algorithms have been proposed to discover rules at a single
concept level. However, mining association rules at multiple concept levels may
lead to the discovery of more specific and concrete knowledge from data. The
discovery of multiple-level association rules is useful in many applications.
In most studies of multiple-level association rule mining, the database is
scanned repeatedly, which reduces the efficiency of the mining process. In this
research paper, a new method for discovering multilevel association rules is
proposed. It is based on the FP-tree structure and uses a co-occurrence
frequent item (COFI) tree to find frequent items in a multilevel concept
hierarchy.

Comment: Pages, IEEE format, International Journal of Computer Science and
Information Security, IJCSIS, Vol. 7, No. 2, February 2010, USA. ISSN
1947-5500, http://sites.google.com/site/ijcsis
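As a toy illustration of mining at multiple concept levels (a generic level-wise sketch in Python, not the paper's FP-tree/COFI implementation; the item encoding, transactions, and minimum support are invented for the example):

from collections import Counter

# Each item carries its concept-hierarchy path, e.g. ("food", "milk", "2% milk");
# truncating the path yields the corresponding item at a higher concept level.
transactions = [
    [("food", "milk", "2% milk"), ("food", "bread", "wheat bread")],
    [("food", "milk", "skim milk"), ("food", "bread", "wheat bread")],
    [("food", "milk", "2% milk")],
]

def frequent_items(transactions, level, min_support):
    # Count items truncated to the given level and keep those meeting min_support.
    counts = Counter()
    for t in transactions:
        counts.update({item[:level] for item in t})
    return {item for item, count in counts.items() if count >= min_support}

# Level-wise pass: frequent items are found at progressively deeper concept levels.
for level in (1, 2, 3):
    print(level, frequent_items(transactions, level, min_support=2))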
Control-structure interaction in a free beam
A simple energy approach to the problem of control-structure interaction in large space structures is presented. For the illustrative case of a free-free beam, the vibrational energy imparted during operation of constant, step, and pulsed thrusters is found in nondimensional closed form. Then, based on a parametric study, suggestions are made on the choice of parameters that minimize control-structure interaction. The study of this simple system provides physical insight and understanding for more complex systems.
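As a sketch of the kind of modal-energy expression such an energy approach yields (this is the standard result for an undamped mode excited from rest, stated here as background rather than the paper's particular nondimensional closed form): a thruster force $F(t)$ acting at a point $x_0$ of the beam for $0 \le t \le T$ leaves the $n$-th elastic mode, with mode shape $\phi_n$, modal mass $m_n$, and natural frequency $\omega_n$, carrying the vibrational energy

\[
  E_n = \frac{\phi_n(x_0)^2}{2\,m_n}\,
        \Bigl| \int_0^{T} F(\tau)\, e^{\,i\omega_n \tau}\, d\tau \Bigr|^{2},
\]

so the energy imparted by constant, step, or pulsed thrust depends on how much of the force's spectrum falls on the elastic modal frequencies.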
Integrating diversity into the medical curriculum
In the field of medical education, diversity refers to the presence and representation of persons from a wide range of personal backgrounds, experiences, and characteristics across the student community, faculty members, and employees of the institution. The scope of diversity in medical education is immense, and it plays a vital role in creating an effective learning environment. Once students are exposed to a diverse group of fellow students and patients during their undergraduate training, there is a significant improvement in cultural competence, which is crucial to the mission of delivering patient-centered care. Considering the merits of diversity in the medical curriculum for medical students, there is an indispensable need to take specific measures to ensure that diversity is integrated into the curriculum, as this will also ensure the delivery of equitable and culturally competent medical care. Just as it is important to ensure the integration of diversity into the medical curriculum, equal importance must be given to measuring the various initiatives that have been taken to promote diversity in medical education. In conclusion, diversity in medical education is the need of the hour for creating a fruitful learning environment for medical students. This calls for measures to integrate diversity into the medical curriculum and, subsequently, for strategies and indicators to measure and monitor the progress of diversity initiatives in medical institutions.
Feasibility analysis of design for remanufacturing in bearing using hybrid Fuzzy-TOPSIS and Taguchi optimization
The tremendous advancement in technology, productivity, and standard of living has come at the cost of environmental deterioration and increased energy and raw material consumption. In this regard, remanufacturing is a viable option to reduce energy usage, carbon footprint, and raw material usage. In this manuscript, we use computational intelligence techniques to assess the feasibility of remanufacturing for roller bearings. We collected used N308 bearings from five different Indian cities. Using Fuzzy-TOPSIS, we found that roundness, surface roughness, and weight play a vital role in design for remanufacturing of roller bearings, whereas change in diameter, change in thickness, and change in width showed minimal influence. We also used Taguchi analysis to reassess the problem. The roundness of the inner and outer races was found to be the most influential parameter in deciding whether a bearing should be selected for remanufacturing. The results suggest that the bearing designer should design the bearing so that the roundness of both races is controlled during manufacturing. However, in the Taguchi analysis the weight of the rollers was found to have the least influence. Overall, the predictions of the Taguchi analysis were similar to those of the Fuzzy-TOPSIS analysis.
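For illustration, here is a minimal Python sketch of the (crisp) TOPSIS ranking step that underlies a Fuzzy-TOPSIS analysis; the decision matrix, criteria, and weights below are invented for the example and are not the paper's data.

import numpy as np

# Rows = candidate used bearings, columns = criteria
# (e.g. roundness error, surface roughness, weight loss); lower is better here.
scores = np.array([
    [0.02, 0.8, 1.2],
    [0.05, 1.6, 0.9],
    [0.03, 1.1, 1.5],
])
weights = np.array([0.5, 0.3, 0.2])        # assumed criteria weights
benefit = np.array([False, False, False])  # all three treated as cost criteria

v = (scores / np.linalg.norm(scores, axis=0)) * weights  # weighted normalized matrix
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # ideal solution
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # anti-ideal solution

d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)   # higher = better remanufacturing candidate

print(np.argsort(-closeness))              # ranking of the candidate bearings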
Existence and optimal regularity theory for weak solutions of free transmission problems of quasilinear type via Leray-Lions method
We study existence and regularity of weak solutions for the following PDE:

\[
  -\operatorname{div}\bigl(A(x,u)\,|\nabla u|^{p-2}\nabla u\bigr) = f(x,u) \quad \text{in } B_1,
\]

where and . Under the ellipticity assumption that , $A_{\pm}\in C(\O)$ and
$f_{\pm}\in L^N(\O)$, we prove that under appropriate conditions the PDE above
admits a weak solution in which is also for every with precise estimates. Our
method relies on techniques similar to those developed by Caffarelli to treat
viscosity solutions of fully nonlinear PDEs (cf. \cite{C89}). Other key
ingredients in our proofs are the $\mathcal{T}_{a,b}$ operator (introduced in
\cite{MS22}) and the Leray-Lions method (cf. \cite{BM92}, \cite{MT03}).
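The definitions elided from the abstract presumably follow the usual free transmission structure, in which the diffusion coefficient and the source term switch across the positivity and negativity sets of the solution. The display below is our reconstruction of that standard setup, stated as an assumption rather than quoted from the paper:

\[
  A(x,u) = A_+(x)\,\chi_{\{u>0\}} + A_-(x)\,\chi_{\{u\le 0\}},
  \qquad
  f(x,u) = f_+(x)\,\chi_{\{u>0\}} + f_-(x)\,\chi_{\{u\le 0\}},
\]

with $A_\pm$ bounded between two positive constants, which is the ellipticity assumption referred to above.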