    Optimal paths on the road network as directed polymers

    We analyze the statistics of the shortest and fastest paths on the road network between randomly sampled end points. To a good approximation, these optimal paths are found to be directed, in that their lengths (at large scales) are linearly proportional to the absolute (straight-line) distance between the end points. This motivates comparisons to universal features of directed polymers in random media. There are similarities in the scalings of fluctuations in length/time and of transverse wanderings, but also important distinctions in the scaling exponents, likely due to long-range correlations in geographic and man-made features. At short scales the optimal paths are not directed, owing to circuitous excursions governed by a fat-tailed (power-law) probability distribution. Comment: 5 pages, 7 figures.
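The measurement described above can be mimicked on synthetic data. The Python sketch below is not the authors' pipeline: it samples random end-point pairs on a stand-in network (a random geometric graph with Euclidean edge weights), computes shortest weighted paths, and compares them with the straight-line separation. The graph, its parameters, and the networkx-based setup are illustrative assumptions.

```python
# Minimal sketch: compare shortest-path length to straight-line distance
# between randomly sampled end points on a synthetic stand-in for a road network.
import math
import random
import networkx as nx

random.seed(0)
G = nx.random_geometric_graph(2000, radius=0.05, seed=0)   # stand-in "road network"
pos = nx.get_node_attributes(G, "pos")
for u, v in G.edges():
    G[u][v]["length"] = math.dist(pos[u], pos[v])          # Euclidean edge weights

nodes = list(G.nodes())
for _ in range(20):
    a, b = random.sample(nodes, 2)
    try:
        path_len = nx.shortest_path_length(G, a, b, weight="length")
    except nx.NetworkXNoPath:
        continue                                           # graph may be disconnected
    euclid = math.dist(pos[a], pos[b])
    # For "directed" optimal paths, path_len / euclid settles to a constant at large scales.
    print(f"straight-line {euclid:.3f}  path {path_len:.3f}  ratio {path_len / euclid:.2f}")
```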

    Brownian motion of a charged particle driven internally by correlated noise

    We give an exact solution to the generalized Langevin equation of motion of a charged Brownian particle in a uniform magnetic field that is driven internally by an exponentially correlated stochastic force. A strong-dissipation regime is described in which the ensemble-averaged fluctuations of the velocity exhibit transient oscillations that arise from memory effects. We also calculate generalized diffusion coefficients describing the transport of these particles and briefly discuss how they are affected by the magnetic field strength and the correlation time. Our asymptotic results are extended to the general case of internal driving by correlated Gaussian stochastic forces with finite autocorrelation times. Comment: 10 pages, 4 figures with subfigures, RevTeX; v2: revised.
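As a rough numerical companion to the abstract above, here is a hedged Python sketch that integrates a simplified model: Markovian friction plus a Lorentz force and an exponentially correlated (Ornstein-Uhlenbeck) driving force, rather than the generalized Langevin equation with a memory kernel solved in the paper. All parameter values, and the single-trajectory diffusion estimate, are illustrative assumptions.

```python
# Simplified Langevin integration for a charged particle in a field B along z,
# driven by exponentially correlated noise with correlation time tau.
import numpy as np

rng = np.random.default_rng(0)
m, q, B, gamma = 1.0, 1.0, 2.0, 0.5      # mass, charge, field, friction (illustrative)
tau, sigma = 0.5, 1.0                    # noise correlation time and strength
dt, steps = 1e-3, 200_000

v = np.zeros(2)                          # velocity in the plane perpendicular to B
F = np.zeros(2)                          # exponentially correlated driving force
r = np.zeros(2)                          # position
msd = 0.0

for _ in range(steps):
    # Ornstein-Uhlenbeck update: <F(t) F(t')> decays as exp(-|t - t'| / tau)
    F += (-F / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
    lorentz = q * B * np.array([v[1], -v[0]])   # q v x B for B along z
    v += ((-gamma * v + lorentz + F) / m) * dt
    r += v * dt
msd = r @ r

# Crude effective-diffusion estimate from one trajectory's mean-squared displacement
# (an ensemble average over many trajectories would be used in practice).
print("D_eff ~", msd / (4 * steps * dt))
```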

    Even faster elastic-degenerate string matching via fast matrix multiplication

    An elastic-degenerate (ED) string is a sequence of n sets of strings of total length N, which was recently proposed to model a set of similar sequences. The ED string matching (EDSM) problem is to find all occurrences of a pattern of length m in an ED text. The EDSM problem has recently received some attention in the combinatorial pattern matching community, and an O(nm^1.5 √(log m) + N)-time algorithm is known [Aoyama et al., CPM 2018]. The standard assumption in prior work on this question is that N is substantially larger than both n and m, and thus we would like to have a linear dependency on the former. Under this assumption, the natural open problem is whether we can decrease the 1.5 exponent in the time complexity, similarly as in the related (but, to the best of our knowledge, not equivalent) word break problem [Backurs and Indyk, FOCS 2016]. Our starting point is a conditional lower bound for the EDSM problem. We use the popular combinatorial Boolean matrix multiplication (BMM) conjecture, stating that there is no truly subcubic combinatorial algorithm for BMM [Abboud and Williams, FOCS 2014]. By designing an appropriate reduction, we show that a combinatorial algorithm solving the EDSM problem in O(nm^(1.5−ε) + N) time, for any ε > 0, refutes this conjecture. Of course, the notion of combinatorial algorithms is not clearly defined, so our reduction should be understood as an indication that decreasing the exponent requires fast matrix multiplication. Two standard tools used in algorithms on strings are string periodicity and the fast Fourier transform. Our main technical contribution is that we successfully combine these tools with fast matrix multiplication to design a non-combinatorial O(nm^1.381 + N)-time algorithm for EDSM. To the best of our knowledge, we are the first to do so.
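For readers unfamiliar with the problem, the following Python sketch is a straightforward baseline (far slower than the O(nm^1.381 + N) algorithm above): it scans the ED text segment by segment while maintaining the set of pattern prefixes matched so far. The function name, the simplification of reporting only the segment where an occurrence ends, and the small example are illustrative only.

```python
def edsm_baseline(pattern, ed_text):
    """Report indices of the segments (sets of strings) in which an occurrence
    of `pattern` ends. Simple baseline, not the paper's algorithm."""
    m = len(pattern)
    occurrences = set()
    active = set()    # lengths p such that pattern[:p] has been matched exactly
                      # up to the boundary before the current segment
    for i, segment in enumerate(ed_text):
        new_active = set()
        for s in segment:
            if s == "":                 # empty string: partial matches pass through
                new_active |= active
                continue
            if pattern in s:            # occurrence fully inside a single string
                occurrences.add(i)
            for p in active:            # try to extend each partial match pattern[:p]
                rest = m - p
                if len(s) >= rest:
                    if s.startswith(pattern[p:]):
                        occurrences.add(i)
                elif pattern[p:p + len(s)] == s:
                    new_active.add(p + len(s))
            # a suffix of s that is a proper prefix of the pattern starts a new match
            for k in range(1, min(len(s), m - 1) + 1):
                if s.endswith(pattern[:k]):
                    new_active.add(k)
        active = new_active
    return sorted(occurrences)

# "GAA" spans the segments at indices 1 and 2 ("GA" + "A"), ending in segment 2.
print(edsm_baseline("GAA", [{"AC", "G"}, {"TG", "GA"}, {"A", "C"}]))  # -> [2]
```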

    Bidirectional string anchors: A new string sampling mechanism

    The minimizers sampling mechanism is a popular mechanism for string sampling, introduced independently by Schleimer et al. [SIGMOD 2003] and by Roberts et al. [Bioinf. 2004]. Given two positive integers w and k, it selects the lexicographically smallest length-k substring in every fragment of w consecutive length-k substrings (in every sliding window of length w+k-1). Minimizers samples are approximately uniform, locally consistent, and computable in linear time. Although they do not have good worst-case guarantees on their size, they are often small in practice, and they have thus been successfully employed in several string processing applications. Two main disadvantages of the minimizers sampling mechanism are: first, it does not have good guarantees on the expected size of its samples for every combination of w and k; and, second, indexes constructed over its samples do not have good worst-case guarantees for on-line pattern searches. To alleviate these disadvantages, we introduce bidirectional string anchors (bd-anchors), a new string sampling mechanism. Given a positive integer ℓ, our mechanism selects the lexicographically smallest rotation in every length-ℓ fragment (in every sliding window of length ℓ). We show that bd-anchors samples are also approximately uniform, locally consistent, and computable in linear time. In addition, our experiments …
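To make the two mechanisms concrete, here is a short Python sketch of both samplers in their naive quadratic form (the linear-time constructions are not reproduced here); leftmost tie-breaking and the example string are assumptions.

```python
def minimizers(s, w, k):
    """Start positions of the lexicographically smallest k-mer in every window
    of w consecutive k-mers (i.e., every window of length w + k - 1)."""
    sample = set()
    for i in range(len(s) - (w + k - 1) + 1):
        window_kmers = [(s[j:j + k], j) for j in range(i, i + w)]
        sample.add(min(window_kmers)[1])      # ties broken by leftmost position
    return sorted(sample)

def bd_anchors(s, ell):
    """Start positions (in s) of the lexicographically smallest rotation of
    every length-ell fragment of s."""
    sample = set()
    for i in range(len(s) - ell + 1):
        frag = s[i:i + ell]
        best = min(range(ell), key=lambda r: frag[r:] + frag[:r])
        sample.add(i + best)
    return sorted(sample)

text = "abaabababbabbb"          # illustrative input
print(minimizers(text, w=4, k=3))
print(bd_anchors(text, ell=6))
```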

    All-pairs suffix/prefix in optimal time using Aho-Corasick space

    The all-pairs suffix/prefix (APSP) problem is a classic problem in computer science with many applications in bioinformatics. Given a set {S1, …, Sk} of k strings of total length n, we are asked to find, for each string Si, i ∈ [1,k], its longest suffix that is a prefix of string Sj, for all j ≠ i, j ∈ [1,k]. Several algorithms running in the optimal O(n + k^2) time for solving APSP are known. All of these algorithms are based on suffix sorting and thus require Ω(n) space in any case. We consider the parameterized version of the APSP problem, denoted by ℓ-APSP, in which we are asked to output only the pairs whose suffix/prefix overlap is of length at least ℓ. We give an algorithm for solving ℓ-APSP that runs in the optimal O(n + |OUTPUTℓ|) time using O(n) space, where OUTPUTℓ is the set of output pairs. Our algorithm is thus optimal for the APSP problem as well, by setting ℓ = 0. Notably, our algorithm is fundamentally different from all optimal algorithms solving the APSP problem: it does not rely on sorting the suffixes of all input strings but on a novel traversal of the Aho-Corasick machine, and it thus requires space linear in the size of the machine.
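The problem statement can be illustrated with a tiny quadratic-time baseline; this is only a reference implementation of the definition, not the Aho-Corasick-based algorithm of the paper, and the function names and example strings are hypothetical.

```python
def longest_suffix_prefix(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for length in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def ell_apsp(strings, ell):
    """Naive l-APSP: report (i, j, overlap) for all i != j with overlap >= ell."""
    result = []
    for i, a in enumerate(strings):
        for j, b in enumerate(strings):
            if i == j:
                continue
            overlap = longest_suffix_prefix(a, b)
            if overlap >= ell:
                result.append((i, j, overlap))
    return result

# Suffix "cab" of S0 is a prefix of S1; suffix "abc" of S2 is a prefix of S0.
print(ell_apsp(["abcab", "cabxy", "xyabc"], ell=2))  # -> [(0, 1, 3), (1, 2, 2), (2, 0, 3)]
```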

    New hybrid materials with porphyrin-ferrocene and porphyrin-pyrene covalently linked to single-walled carbon nanotubes.

    Novel porphyrin derivatives bearing additional pyrene or ferrocene units as light-harvesting antenna systems were synthesized and fully characterized. Following a covalent functionalization approach for single-walled carbon nanotubes (SWCNTs), stable SWCNT suspensions in common organic solvents were produced. Subsequently, the resulting porphyrin-pyrene and porphyrin-ferrocene dyads were incorporated onto the nanotubes' backbone, yielding donor-donor-acceptor hybrids. The resulting hybrid materials were soluble in common organic solvents and were characterized using micro-Raman, ATR-IR, UV-Vis and photoluminescence spectroscopy, transmission electron microscopy, thermogravimetric analysis and electrochemistry. Photoluminescence quenching of the porphyrin emission was detected in both hybrid materials, suggesting the potential of these materials for photoelectrochemical cells.

    Accountable Algorithms

    Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for IRS audit, grant or deny immigration visas, and more. The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decisionmakers and often fail when applied to computers instead. For example, how do you judge the intent of a piece of software? Because automated decision systems can return potentially incorrect, unjustified, or unfair results, additional approaches are needed to make such systems accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness. We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the issues analyzing code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it discloses private information or permits tax cheats or terrorists to game the systems determining audits or security screening. The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities—subtler and more flexible than total transparency—to design decisionmaking algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of automated decisions, but also—in certain cases—the governance of decisionmaking in general. The implicit (or explicit) biases of human decisionmakers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterward. The technological tools introduced in this Article apply widely. They can be used in designing decisionmaking processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decisionmakers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society. Part I of this Article provides an accessible and concise introduction to foundational computer science techniques that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decisions or the processes by which the decisions were reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. 
In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how automated decisionmaking may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly, in Part IV, we propose an agenda to further synergistic collaboration between computer science, law, and policy to advance the design of automated decision processes for accountability.
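As a purely illustrative companion to Part I's "declare before use, verify afterward" idea, the Python sketch below shows a hash-based commitment to a decision rule, one simple building block in this space. It is not the Article's toolkit (which relies on more powerful tools such as zero-knowledge proofs), and the policy string is hypothetical.

```python
# Toy sketch: a decisionmaker publishes a commitment to its decision rule before
# any decisions are made; an auditor later verifies the revealed rule against it.
import hashlib
import secrets

def commit(policy_source: bytes):
    """Return (digest, salt): the digest is published up front, the salt kept for audit."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + policy_source).hexdigest()
    return digest, salt

def verify(digest: str, salt: bytes, revealed_policy: bytes) -> bool:
    """Check that the revealed rule matches the previously announced commitment."""
    return hashlib.sha256(salt + revealed_policy).hexdigest() == digest

policy = b"if score >= 0.8: approve else: deny"   # hypothetical decision rule
d, s = commit(policy)
print(verify(d, s, policy))                       # True: the announced rule was the one applied
```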

    Equality of opportunity and educational achievement in Portugal

    Portugal has one of the highest levels of income inequality in Europe, and low wages and unemployment are concentrated among low-skill individuals. Education is an important determinant of inequality. However, there are large differences in the educational attainment of different individuals in the population, and the sources of these differences emerge early in the life-cycle, when families play a central role in individual development. We estimate that most of the variance of school achievement at age 15 is explained by family characteristics; observed school inputs explain very little of adolescent performance. Children of highly educated parents benefit from rich cultural environments in the home and become highly educated adults. Education policy needs to be innovative: (1) it needs to explicitly recognize the fundamental long-run role of families in child development; and (2) it needs to acknowledge the failure of traditional input-based policies.

    Microbiological quality of Brazil nuts milk submitted to different dehulling methods.

    Bertholletia excelsa Bonpl., popularly known as the Brazil nut, is considered one of the noblest species of the Amazon rainforest and is found throughout this territory. Its fruit has high economic value owing to its use in both human and animal feeding, containing about 60 to 70% lipids, rich in polyunsaturated fatty acids, and 15 to 20% protein. Brazil nuts have many uses, and the "milk" extracted from the nuts is usually consumed pure and used by the natives as a typical food. Because of the high content of unsaturated fatty acids in their composition, the nuts are very perishable: oxidative processes reduce the nutritional value and produce a rancid smell and flavor, leading to a low-quality product, in addition to susceptibility to colonization by pathogenic microorganisms when poorly handled. The process for obtaining Brazil nut milk involves the stages of dehulling of the nuts, extraction, separation of the insoluble residue, formulation and packaging. The most common form of dehulling is manual. The objective of this work was to evaluate the microbiological quality of Brazil nuts and of the milk obtained through manual or NaOH dehulling. Counts of aerobic bacteria on plate count agar (PCA), total coliforms and E. coli, the presence of Salmonella sp., and counts of molds and yeasts on Potato Dextrose Agar (PDA) were determined.