
The real part (red) and imaginary part (blue) of the Riemann zeta function along the critical line Re(s) = 1/2.
Image credit:
User:Army1987 
The Riemann hypothesis, first formulated by
Bernhard Riemann in 1859, is one of the most famous unsolved problems in mathematics. It has remained open for well over a century, despite attracting concentrated efforts from many outstanding mathematicians.
The Riemann hypothesis is a
conjecture about the distribution of the
zeros of the
Riemann zeta function ζ(s). The Riemann zeta function is defined for all
complex numbers s ≠ 1. It has zeros at the negative even integers (i.e. at s = −2, s = −4, s = −6, ...). These are called the trivial zeros. The Riemann hypothesis is concerned with the nontrivial zeros, and states that:
 The real part of any nontrivial zero of the Riemann zeta function is ½
Thus the nontrivial zeros should lie on the so-called critical line ½ + it, with t a
real number and i the
imaginary unit. The Riemann zeta function along the critical line is sometimes studied in terms of the
Z-function, whose real zeros correspond to the zeros of the zeta function on the critical line.
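As a numerical illustration, ζ(s) can be approximated anywhere in the half-plane Re(s) > 0 via the alternating Dirichlet eta function η(s) = Σ (−1)^(k−1) k^(−s), accelerated with P. Borwein's published coefficient scheme. The sketch below (function name and parameter choices are illustrative, not canonical) evaluates ζ on the critical line near the first nontrivial zero at t ≈ 14.1347:

```python
from math import factorial

def zeta(s, n=50):
    """Approximate the Riemann zeta function for Re(s) > 0, s != 1,
    using P. Borwein's algorithm for the Dirichlet eta function
    eta(s) = sum_{k>=1} (-1)**(k-1) / k**s, then
    zeta(s) = eta(s) / (1 - 2**(1 - s))."""
    # Borwein's coefficients d_0 <= d_1 <= ... <= d_n (grow like (3+sqrt(8))**n / 2)
    d, acc = [], 0.0
    for i in range(n + 1):
        acc += factorial(n + i - 1) * 4.0 ** i / (factorial(n - i) * factorial(2 * i))
        d.append(n * acc)
    # accelerated partial sum of the eta series
    eta = sum((-1) ** k * (d[k] - d[n]) / (k + 1) ** s for k in range(n))
    eta /= -d[n]
    return eta / (1 - 2 ** (1 - s))

# |zeta(1/2 + i t)| should vanish at the first nontrivial zero
# t = 14.134725141734693...
print(abs(zeta(0.5 + 14.134725141734693j)))  # close to 0
```

With n = 50 the truncation error is far below double precision for moderate |t|; sanity checks such as ζ(2) = π²/6 ≈ 1.6449 come out to near machine accuracy.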
The Riemann hypothesis is one of the most important open problems in contemporary mathematics; a $1,000,000 prize has been offered by the
Clay Mathematics Institute for a proof. Most mathematicians believe the Riemann hypothesis to be true. (
J. E. Littlewood and
Atle Selberg have been reported as skeptical, though Selberg's skepticism, if he had any, waned from his younger days. In a 1989 paper, he suggested that an analogue of the hypothesis should hold for a much wider class of functions, the
Selberg class.)
Simpson's paradox (also known as the Yule–Simpson effect) states that an observed
association between two
variables can reverse when considered at separate levels of a third variable (or, conversely, that the association can reverse when separate groups are combined). Shown here is an illustration of the paradox for
quantitative data. In the graph, the overall association between X and Y is negative (as X increases, Y tends to decrease when all of the data are considered, as indicated by the negative slope of the dashed line); but when the blue and red points are considered separately (two levels of a third variable, color), the association between X and Y is positive in each subgroup (positive slopes on the blue and red lines; note that the effect in real-world data is rarely this extreme). The paradox is named after British statistician
Edward H. Simpson, who first described it in 1951 (in the context of
qualitative data), although similar effects had been mentioned by
Karl Pearson (and co-authors) in 1899 and by
Udny Yule in 1903. One famous real-life instance of Simpson's paradox occurred in the
UC Berkeley gender-bias case of the 1970s, in which the university was sued for
gender discrimination because it had a higher admission rate for male applicants to its graduate schools than for female applicants (and the effect was
statistically significant). The effect was reversed, however, when the data was split by department: most departments showed a small but significant bias in favor of women. The explanation was that women tended to apply to competitive departments with low rates of admission even among qualified applicants, whereas men tended to apply to less-competitive departments with high rates of admission among qualified applicants. (Note that splitting by department was a more appropriate way of looking at the data, since it is individual departments, not the university as a whole, that admit graduate students.)
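Both forms of the paradox described above can be reproduced in a few lines. The numbers below are hypothetical, chosen only to exhibit the reversal (they are not the actual Berkeley figures, and the helper names are illustrative):

```python
def slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Quantitative form: each subgroup trends upward, yet the pooled data
# trend downward (cf. the blue/red illustration above).
blue_x, blue_y = [1, 2, 3], [8, 9, 10]   # within-group slope +1
red_x,  red_y  = [6, 7, 8], [1, 2, 3]    # within-group slope +1
pooled = slope(blue_x + red_x, blue_y + red_y)   # negative overall slope

# Categorical form: hypothetical (admitted, applied) counts per
# department and sex, chosen so the reversal appears.
counts = {
    "dept A": {"men": (80, 100), "women": (18, 20)},   # women 90% > men 80%
    "dept B": {"men": (10, 50),  "women": (45, 180)},  # women 25% > men 20%
}
rate = lambda admitted, applied: admitted / applied
overall = {
    sex: sum(d[sex][0] for d in counts.values())
         / sum(d[sex][1] for d in counts.values())
    for sex in ("men", "women")
}
# Women lead in every department, yet men lead in the aggregate:
# overall == {'men': 0.6, 'women': 0.315}
```

The reversal arises exactly as in the Berkeley explanation: most female applicants sit in the low-admission-rate department, so pooling weights the groups unevenly.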