Saturday, June 29, 2013
Friday, June 28, 2013
Solomonoff Universal Induction
In an earlier post (Kolmogorov, Solomonoff, and de Finetti) I linked to a historical article on the problem of induction. Here's an even better one, which gives a very clear introduction to Solomonoff Induction.
A Philosophical Treatise of Universal Induction

Of course, properties such as Turing machine independence and other key results are asymptotic in nature (only in the limit of very long sequences of data does it cease to matter exactly which reference Turing machine you choose to define program length). When it comes to practical implementations, the devil is in the details! You can think of the Solomonoff Universal Prior as a formalization of the a priori assumption that the information in our Universe is highly compressible (i.e., there are underlying simple algorithms, the laws of physics, governing its evolution). See also Information, information processing and black holes. From the paper:
Samuel Rathmanner, Marcus Hutter
Understanding inductive reasoning is a problem that has engaged mankind for thousands of years. This problem is relevant to a wide range of fields and is integral to the philosophy of science. It has been tackled by many great minds ranging from philosophers to scientists to mathematicians, and more recently computer scientists. In this article we argue the case for Solomonoff Induction, a formal inductive framework which combines algorithmic information theory with the Bayesian framework. Although it achieves excellent theoretical results and is based on solid philosophical foundations, the requisite technical knowledge necessary for understanding this framework has caused it to remain largely unknown and unappreciated in the wider scientific community. The main contribution of this article is to convey Solomonoff induction and its related concepts in a generally accessible form with the aim of bridging this current technical gap. In the process we examine the major historical contributions that have led to the formulation of Solomonoff Induction as well as criticisms of Solomonoff and induction in general. In particular we examine how Solomonoff induction addresses many issues that have plagued other inductive systems, such as the black ravens paradox and the confirmation problem, and compare this approach with other recent approaches.
... The formalization of Solomonoff induction makes use of concepts and results from computer science, statistics, information theory, and philosophy. It is interesting that the development of a rigorous formalization of induction, which is fundamental to almost all scientific inquiry, is a highly multidisciplinary undertaking, drawing from these various areas. Unfortunately this means that a high level of technical knowledge from these various disciplines is necessary to fully understand the technical content of Solomonoff induction. This has restricted a deep understanding of the concept to a fairly small proportion of academia which has hindered its discussion and hence progress.

Here are some nice informal comments by Solomonoff himself.
... Every major contribution to the foundations of inductive reasoning has been a contribution to understanding rational thought. Occam explicitly stated our natural disposition towards simplicity and elegance. Bayes inspired the school of Bayesianism which has made us much more aware of the mechanics behind our belief system. Now, through Solomonoff, it can be argued that the problem of formalizing optimal inductive inference is solved.
Being able to precisely formulate the process of (universal) inductive inference is also hugely significant for general artificial intelligence. Obviously reasoning is synonymous with intelligence, but true intelligence is a theory of how to act on the conclusions we make through reasoning. It may be argued that optimal intelligence is nothing more than optimal inductive inference combined with optimal decision making. Since Solomonoff provides optimal inductive inference and decision theory solves the problem of choosing optimal actions, they can therefore be combined to produce intelligence. ... [ Do we really need Solomonoff? Did Nature make use of his Universal Prior in producing us? It seems like cheaper tricks can produce "intelligence" ;) ]
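For readers who want the formula behind all this, the Solomonoff prior has a compact standard definition (textbook notation, not a quote from the paper):

```latex
% Solomonoff universal prior: the prior probability of a binary
% string x is the total weight of all programs p that make a fixed
% universal prefix Turing machine U output something beginning with x.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

Here ℓ(p) is the length of program p in bits, so short programs (simple hypotheses) dominate the sum: Occam's razor made quantitative. Changing the reference machine U shifts -log2 M(x) only by an additive constant, which is exactly the asymptotic machine independence noted above.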
Thursday, June 27, 2013
Wednesday, June 26, 2013
Kolmogorov, Solomonoff, and de Finetti
This is a nice historical article that connects a number of key figures in the history of probability and information theory. See also Frequentists and Bayesians, Jaynes and Bayes, and On the Origin of Probability in Quantum Mechanics.
Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov

Excerpt below from the paper. I doubt Nature (evolution) uses Solomonoff's Universal Prior (determined by minimum length programs), as it is quite expensive to compute. I think our priors and heuristics are much more specialized and primitive.
John J. McCall DOI: 10.1111/j.0026-1386.2004.00190.x
ABSTRACT This paper compares the solutions to “the induction problem” by Kolmogorov, de Finetti, and Solomonoff. Brief sketches of the intellectual history of de Finetti and Kolmogorov are also composed. Kolmogorov's contributions to information theory culminated in his notion of algorithmic complexity. The development of algorithmic complexity was inspired by information theory and randomness. Kolmogorov's best-known contribution was the axiomatization of probability in 1933. Its influence on probability and statistics was swift, dramatic, and fundamental. However, Kolmogorov was not satisfied by his treatment of the frequency aspect of his creation. This in time gave rise to Kolmogorov complexity. De Finetti, on the other hand, had a profound vision early in his life which was encapsulated in his exchangeability theorem. This insight simultaneously resolved a fundamental philosophical conundrum—Hume's problem, and provided the bricks and mortar for de Finetti's constructive probabilistic theory. Most of his subsequent research involved extensions of his representation theorem. De Finetti was against determinism and celebrated quantum theory, while Kolmogorov was convinced that in every seemingly indeterministic manifestation there lurked a hidden deterministic mechanism. Solomonoff introduced algorithmic complexity independently of Kolmogorov and Chaitin. Solomonoff's motivation was firmly focused on induction. His interest in induction was to a marked extent sparked by Keynes’ 1921 seminal book. This interest in induction has never faltered, remaining prominent in his most recent research. The decisive connection between de Finetti and Kolmogorov was their lifelong interest in the frequency aspect of induction. Kolmogorov's solution to the problem was algorithmic complexity. De Finetti's solution to his frequency problem occurred early in his career with the discovery of the representation theorem.
In this paper, we try to explain these solutions and mention related topics which captured the interest of these giants.
... There are a host of similarities and equivalences joining concepts like Shannon entropy, Kolmogorov complexity, maximum entropy, Bayes etc. Some of these are noted. In short, the psychological aspects of probability and induction emphasized by de Finetti, Ramsey and Keynes may eventually emerge from a careful and novel neuroscientific study. In this study, neuronal actors would interact as they portray personal perception and memories. In this way, these versatile neuronal actors comprise the foundation of psychology.
... In comparing de Finetti and Kolmogorov one becomes entangled in a host of controversial issues. de Finetti was an indeterminist who championed a subjective inductive inference. Kolmogorov sought determinism in even the most chaotic natural phenomena and reluctantly accepted a frequency approach to statistics, an objective science. In the 1960s Kolmogorov questioned his frequency position and developed algorithmic complexity, together with Solomonoff and Chaitin. With respect to the frequency doubts, he would have been enlightened by de Finetti’s representation theorem. The KCS research remains an exciting research area in probability, statistics and computer science. It has raised a whole series of controversial issues. It appears to challenge both the frequency and Bayesian schools: the former by proclaiming that the foundations of uncertainty are to be found in algorithms, information and combinatorics, rather than probability; the latter by replacing the Bayes prior, which differs across individuals in accord with their distinctive beliefs, with a universal prior applicable to all and drained of subjectivity.
Sunday, June 23, 2013
HDL-C heritability from whole genomes: common variants dominate
One of the advantages of whole genomes vs microarrays is that you can examine the impact of rare variants; in this study 25M variants (SNVs) were used, as opposed to the usual 1M or so SNPs. The authors examine HDL-C level, which has heritability of 47-76%. They find that common variants account for almost all of the heritability, with rare variants (MAF < 1%) accounting for perhaps 10-20 percent as much as common variants.
Whole-genome sequence-based analysis of high-density lipoprotein cholesterol (Nature Genetics) (Supplement)
We describe initial steps for interrogating whole-genome sequence data to characterize the genetic architecture of a complex trait, levels of high-density lipoprotein cholesterol (HDL-C). We report whole-genome sequencing and analysis of 962 individuals from the Cohorts for Heart and Aging Research in Genetic Epidemiology (CHARGE) studies. From this analysis, we estimate that common variation contributes more to heritability of HDL-C levels than rare variation, and screening for Mendelian variants for dyslipidemia identified individuals with extreme HDL-C levels. Whole-genome sequencing analyses highlight the value of regulatory and non-protein-coding regions of the genome in addition to protein-coding regions.
Saturday, June 22, 2013
Android Dreams
These videos will be very interesting to Blade Runner fans. In the first, Dick talks about
"... the problem of differentiating an authentic human being from the reflexmachine which I call an android... The word android is a metaphor for someone who is physiologically human but psychologically ... nonhuman. I got interested in this when I was doing research for Man in the High Castle [excellent alternative history novel in which the Japanese and Germans won WWII] and I was studying the Nazi mentality. I discovered that although these people were highly intelligent they were definitely deficient in some kind of ... appropriate affect or appropriate emotions."Hence the "affect Turing test" used on androids in Blade Runner / Do Androids Dream of Electric Sheep.
A lovely Sean Young, at the beginning of her movie career, talks about the challenges of playing an android.
The love scene between Rachael and Deckard, uncut.
Rutger Hauer: "Harrison is the villain." I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I've watched C-beams glitter in the dark near the Tannhauser gate. All those moments will be lost in time like tears in rain.
WDIST and PLINK
News from BGI Cognitive Genomics.
31 May 2013: We have started the process of returning genetic data to our first round of volunteers. Everyone who was sequenced will be contacted within the next few weeks.
We are also starting public testing of our new bioinformatics tool: WDIST, an increasingly complete rewrite of PLINK designed for tomorrow's large datasets, developed by Christopher Chang with support from the NIH-NIDDK's Laboratory of Biological Modeling and others. It uses a streaming strategy to reduce memory requirements, and executes many of PLINK's slowest functions, including identity-by-state/identity-by-descent computation, LD-based pruning of marker sets, and association analysis max(T) permutation tests, over 100x (and sometimes even over 1000x) as quickly. Some newer calculations, such as the GCTA relationship matrix, are also supported. We have developed several novel algorithms, including a fast Fisher's exact test (2x2/2x3) which comfortably handles contingency tables with entries in the millions (try our browser demo!). Software engineers can see more details on our WDIST core algorithms page, and download the GPLv3 source code from our GitHub repository.
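For readers curious what the browser demo is computing, here is a minimal pure-Python sketch of a two-sided Fisher's exact test on a 2x2 table, done by brute-force enumeration of the hypergeometric distribution. This illustrates the statistic only; WDIST's fast algorithm for million-scale cell counts is a different, much cleverer computation, and the function name here is my own.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed the observed table's.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # probability of the table whose upper-left cell is x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    eps = 1e-12  # tolerance for floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + eps)

# A perfectly balanced table is maximally unsurprising (p near 1),
# while perfect separation of 10 observations is very unlikely.
print(fisher_exact_2x2(1, 1, 1, 1))
print(fisher_exact_2x2(5, 0, 0, 5))   # 2/252, about 0.0079
```

Enumeration like this is fine for teaching but scales terribly; handling cell counts in the millions, as WDIST claims to, requires log-space probabilities and early termination of the sum.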
Friday, June 21, 2013
Ken Wilson, dead at 77
Wilson was a hero to many, many theoretical physicists, including me. Wilson's father did his PhD with Linus Pauling, Wilson with Murray Gell-Mann, both at Caltech (see Defining Merit). To Wilson we owe much of our modern understanding of renormalization, effective field theory, phase transitions, lattice quantum field theory, and, of course, the renormalization group.
NYTimes: ... His colleagues hailed Dr. Wilson as a legend who had changed how theoretical physicists went about their work, especially in particle physics, the study of the elementary and fundamental constituents of nature. He was also a pioneer in using computers and then supercomputers to study the properties of quarks, the building blocks of protons and neutrons.

From Wilson's 1982 Nobel Lecture:
“He’s a giant in theoretical physics,” said Frank Wilczek, a Nobelist at the Massachusetts Institute of Technology, calling his work “quite profound.”
Steven Weinberg, a Nobel winner at the University of Texas at Austin, said, “Ken Wilson was one of a very small number of physicists who changed the way we all think, not just about specific phenomena, but about a vast range of different phenomena.”
Kenneth Geddes Wilson was born on June 8, 1936, in Waltham, Mass., the first of three children of Edgar and Emily Buckingham Wilson. His father was a chemist at Harvard. His mother had been a physics graduate student before marrying. One grandfather was an engineering professor at M.I.T. and the other the speaker of the Tennessee House of Representatives.
Kenneth Wilson entered Harvard at 16, majored in math and was the Ivy League mile champion. He obtained his Ph.D. at the California Institute of Technology under the legendary theorist Murray Gell-Mann, then did postdoctoral studies at Harvard as a junior fellow that included a year at CERN, the European nuclear research organization in Geneva. He joined Cornell as a physics professor in 1963.
... From the start, Dr. Wilson was drawn to difficult problems that could take years to solve, said Kurt Gottfried, a Cornell colleague. One such problem was phase transitions, the passage from water to steam or atoms lining up to make a magnet. At the critical point — the temperature at which the change happens — orderly behavior breaks down, but theorists had few clues to how to calculate what was happening.
Dr. Wilson realized that the key to the problem was that fluctuations were happening on all scales at once — from the jostling and zooming of individual atoms to the oscillations of the entire system — something conventional theory could not handle.
At the heart of Dr. Wilson’s work was an abstruse mathematical apparatus known as the renormalization group, which had been conceived by his thesis adviser, Dr. Gell-Mann, and Francis Low in 1954. They had pointed out that fundamental properties of particles and forces varied depending on the scale over which they are measured.
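The renormalization group idea described here is usually summarized in a single equation (standard notation, not from the obituary): the coupling "constant" g becomes a function of the scale μ at which it is measured, flowing according to

```latex
% Gell-Mann--Low / renormalization group flow equation:
% how the effective coupling g changes with the energy scale \mu.
\mu \, \frac{dg}{d\mu} = \beta(g)
```

Fixed points, where β(g) = 0, describe physics that looks the same at every magnification; Wilson's insight was to apply this machinery to the scale-invariant fluctuations at a critical point.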
Dr. Wilson realized that such “scaling” was intrinsic to the problems in phase transitions. In a series of papers in the early 1970s, building on the work of Michael Fisher and Benjamin Widom at Cornell and Leo Kadanoff, then at the University of Chicago, he applied the renormalization idea to show how the critical phenomena could be solved by dividing the problem up into simpler pieces, so that what was happening at the melting point, for example, could be considered on one scale at a time.
The results showed that many seemingly unrelated systems — from magnets to liquids — could exhibit the same characteristic behavior as they approached the critical point. The concept proved to be of wide relevance in physics and was cited by the Royal Swedish Academy of Sciences in presenting the Nobel.
Dr. Wilson went on to apply the same divideandconquer strategy to quantum field theory, the mathematical language that underlies the study of the most elementary particles and fundamental forces in nature. The theory was plagued by such vexing issues as infinities and other mathematical absurdities when physicists tried to calculate something like the mass of an electron. A method had been developed to work around these anomalies, but many physicists worried that they were just sweeping a fatal flaw in physics under the rug and that, in the words of Dr. Wilczek, “quantum field theory was doomed.”
Dr. Wilson’s new technique banished the infinities for good, putting the theory on a sounder footing. As the Caltech physicist John Preskill put it in a blog post, “Wilson changed that.”
Dr. Wilson’s ideas played a major role in the development of quantum chromodynamics, the branch of quantum theory that describes the behavior of quarks and the gluons that stick them together to form protons and neutrons. In 1974, in order to solve the equations of this theory numerically and gain a more precise understanding of this process, he invented a digitized version of the theory called lattice gauge theory, in which space is imagined as a kind of finely resolved jungle gym where every intersection of the bars represents a point in spacetime.
... When I entered graduate school at California Institute of Technology, in 1956, the default for the most promising students was to enter elementary particle theory, the field in which Murray Gell-Mann, Richard Feynman, and Jon Mathews were all engaged. I rebelled briefly against this default, spending a summer at the General Atomic Corp. working for Marshall Rosenbluth on plasma physics and talking with S. Chandrasekhar who was also at General Atomic for the summer. After about a month of work I was ordered to write up my results, as a result of which I swore to myself that I would choose a subject for research where it would take at least five years before I had anything worth writing about. Elementary particle theory seemed to offer the best prospects of meeting this criterion and I asked Murray for a problem to work on.

From Wilson's Nobel biographical entry:
... In 1960 I turned in a thesis to Cal Tech containing a mishmash of curious calculations. I was already a Junior Fellow at Harvard. In 1962 I went to CERN for a year. ... By 1963 it was clear that the only subject I wanted to pursue was quantum field theory applied to strong interactions. I rejected S matrix theory because the equations of S matrix theory, even if one could write them down, were too complicated and inelegant to be a theory; in contrast the existence of a strong coupling approximation as well as a weak coupling approximation to fixed source meson theory helped me believe that quantum field theory might make sense. As far as strong interactions were concerned, all that one could say was that the theories one could write down, such as pseudoscalar meson theory, were obviously wrong. No one had any idea of a theory that could be correct. One could make these statements even though no one had the foggiest notion how to solve these theories in the strong coupling domain.
... When I entered graduate school, I had carried out the instructions given to me by my father and had knocked on both Murray Gell-Mann’s and Feynman’s doors, and asked them what they were currently doing. Murray wrote down the partition function for the three dimensional Ising model and said it would be nice if I could solve it (at least that is how I remember the conversation). Feynman’s answer was “nothing”.
... My very strong desire to work in quantum field theory did not seem likely to lead to quick publications; but I had already found out that I seemed to be able to get jobs even if I didn’t publish anything so I did not worry about ‘publish or perish’ questions.
... This work showed me that a renormalization group transformation, whose purpose was to eliminate an energy scale or a length scale or whatever from a problem, could produce an effective interaction with arbitrarily many coupling constants, without being a disaster. The renormalization group formalism based on fixed points could still be correct, and furthermore one could hope that only a small finite number of these couplings would be important for the qualitative behavior of the transformations, with the remaining couplings being important only for quantitative computations. In other words the couplings should have an order of importance, and for any desired but given degree of accuracy only a finite subset of the couplings would be needed. In my model the order of importance was determined by orders in the expansion in powers of l/L. ...
... My schooling took place in Wellesley, Woods Hole, Massachusetts (second, third/fourth grades in two years), Shady Hill School in Cambridge, Mass. (from fifth to eighth grade), ninth grade at the Magdalen College School in Oxford, England, and tenth and twelfth grades (skipping the eleventh) at the George School in eastern Pennsylvania. Before the year in England I had read about mathematics and physics in books supplied by my father and his friends. I learned the basic principle of calculus from Mathematics and the Imagination by Kasner and Newman, and went off to work through a calculus text, until I got stuck in a chapter on involutes and evolutes. Around this time I decided to become a physicist. Later (before entering college) I remember working on symbolic logic with my father; he also tried, unsuccessfully, to teach me group theory. I found high school dull. In 1952 I entered Harvard. I majored in mathematics, but studied physics (both by intent), participated in the Putnam Mathematics competition, and ran the mile for the track team (and cross-country as well). I began research, working summers at the Woods Hole Oceanographic Institution, especially for Arnold Arons (then based at Amherst).

IIRC Gell-Mann was twice nominated for the Society of Fellows and twice rejected! Was there ever a bigger mistake in personnel selection? :) On the other hand, John Bardeen (JF '35) twice won the Nobel Prize in physics (once for the transistor, once for superconductivity), so the selection process must have something going for it! See these slides from a talk by Howard Georgi for some more details about theoretical physics at Harvard in the 1970s.
My graduate studies were carried out at the California Institute of Technology. I spent two years in the Kellogg Laboratory of nuclear physics, gaining experimental experience while taking theory courses; I then worked on a thesis for Murray Gell-Mann. While at Cal Tech I talked a lot with Jon Mathews, then a junior faculty member; he taught me how to use the Institute's computer; we also went on hikes together. I spent a summer at the General Atomic Company in San Diego working with Marshall Rosenbluth in plasma physics. Another summer Donald Groom (then a fellow graduate student) and I hiked the John Muir Trail in the Sierra Nevada from Yosemite Park to Mt. Whitney. After my third year I went off to Harvard to be a Junior Fellow while Gell-Mann went off to Paris. During the first year of the fellowship I went back to Cal Tech for a few months to finish my thesis. There was relatively little theoretical activity at Harvard at the time; I went often to M.I.T. to use their computer and eat lunch with the M.I.T. theory group, led by Francis Low.
Thursday, June 20, 2013
Beanbags and causal variants
Not only do these results implicate common causal variants as the source of heritability in disease susceptibility, but they also suggest that gene-gene (epistasis) and gene-environment interactions are of limited impact. Both the genetic and environmental backgrounds for a particular allele vary across Eurasia, so replicability puts an upper limit on their influence. See also Epistasis vs Additivity.
High Trans-ethnic Replicability of GWAS Results Implies Common Causal Variants (PLOS Genetics)

Figure 7 from the paper shows the strong correlation (with slope = 1) between odds ratios in East Asian and European discovery samples. (Click for larger version.)
Genome-wide association studies (GWAS) have detected many disease associations. However, the reported variants tend to explain small fractions of risk, and there are doubts about issues such as the portability of findings over different ethnic groups or the relative roles of rare versus common variants in the genetic architecture of complex disease. Studying the degree of sharing of disease-associated variants across populations can help in solving these issues. We present a comprehensive survey of GWAS replicability across 28 diseases. Most loci and SNPs discovered in Europeans for these conditions have been extensively replicated using peoples of European and East Asian ancestry, while the replication with individuals of African ancestry is much less common. We found a strong and significant correlation of Odds Ratios across Europeans and East Asians, indicating that underlying causal variants are common and shared between the two ancestries. Moreover, SNPs that failed to replicate in East Asians map into genomic regions where Linkage Disequilibrium patterns differ significantly between populations. Finally, we observed that GWAS with larger sample sizes have detected variants with weaker effects rather than with lower frequencies. Our results indicate that most GWAS results are due to common variants. In addition, the sharing of disease alleles and the high correlation in their effect sizes suggest that most of the underlying causal variants are shared between Europeans and East Asians and that they tend to map close to the associated marker SNPs.
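To make the Figure 7 comparison concrete, here is a small sketch of the quantity being correlated: the allelic odds ratio at a SNP, computed from case/control allele counts. The counts below are entirely made up for illustration.

```python
from math import log

def odds_ratio(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Allelic odds ratio for the alternate allele from case/control allele counts."""
    return (case_alt / case_ref) / (ctrl_alt / ctrl_ref)

# Hypothetical allele counts at one SNP in two ancestry groups.
or_eur = odds_ratio(450, 550, 400, 600)  # European discovery sample
or_eas = odds_ratio(230, 270, 200, 300)  # East Asian discovery sample

# The paper compares odds ratios across ancestries; shared common
# causal variants predict points scattered around the line y = x.
print(log(or_eur), log(or_eas))
```

If a causal variant is shared and its effect size is the same in both populations, the two log odds ratios agree up to sampling noise, which is why a slope-1 correlation across many SNPs is evidence for shared common causal variants.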
Long live "beanbag genetics"! :)
A Defense of Beanbag Genetics
JBS Haldane
My friend Professor Ernst Mayr, of Harvard University, in his recent book Animal Species and Evolution [1], which I find admirable, though I disagree with quite a lot of it, has the following sentences on page 263.
The Mendelian was apt to compare the genetic contents of a population to a bag full of colored beans. Mutation was the exchange of one kind of bean for another. This conceptualization has been referred to as “beanbag genetics”. Work in population and developmental genetics has shown, however, that the thinking of beanbag genetics is in many ways quite misleading. To consider genes as independent units is meaningless from the physiological as well as the evolutionary viewpoint....

In another place [2] Mayr made a more specific challenge. He stated that Fisher, Wright, and I “have worked out an impressive mathematical theory of genetical variation and evolutionary change. But what, precisely, has been the contribution of this mathematical school to evolutionary theory, if I may be permitted to ask such a provocative question?” “However,” he continued in the next paragraph, “I should perhaps leave it to Fisher, Wright, and Haldane to point out what they consider their major contributions.” While Mayr may certainly ask this question, I may not answer it at Cold Spring Harbor, as I have been officially informed that I am ineligible for a visa for entering the United States†. Fisher is dead, but when alive preferred attack to defense. Wright is one of the gentlest men I have ever met, and if he defends himself, will not counterattack. This leaves me to hold the fort, and that by writing rather than speech.
Now, in the first place I deny that the mathematical theory of population genetics is at all impressive, at least to a mathematician. On the contrary, Wright, Fisher, and I all made simplifying assumptions which allowed us to pose problems soluble by the elementary mathematics at our disposal, and even then did not always fully solve the simple problems we set ourselves. Our mathematics may impress zoologists but do not greatly impress mathematicians. Let me give a simple example. ...
Wednesday, June 19, 2013
Sinica podcast
I'm always looking for good podcasts to listen to in the car or when I'm exercising. Here's one I've been enjoying a lot in the last week:
Sinica podcast
For some reason the podcasts are distributed along with language lessons on iTunes: Popup Chinese (iTunes).
Here's the most recent (June 14) episode:
This week on Sinica, Kaiser Kuo and Jeremy Goldkorn are delighted to host Matthew Niederhauser. A photographer focusing on urban development in China, Matthew has been published in various journals including The New Yorker, National Geographic, The New York Times Magazine, Le Monde, and Time Magazine among others. As Jeremy describes it, his visuals of Beijing and other parts of modern China capture the feeling of a society that operates on a scale both "beautiful and horrifying" at the same time. Matthew is currently working on a project documenting China's megablock urban development project. But all of his work is stunning.

Some Niederhauser photographs: (see also Futurismo)
Sunday, June 16, 2013
China 3.0
This report from the European Council on Foreign Relations aims to give Western readers a sense of the debate about China's future among its policy and intellectual elite. The funny thing about China is that even the elites have no idea where it's going. This is not unrelated to the mini-boom in real estate in large cities on the US west coast and in NYC. On the other hand, one of the economists in the survey writes confidently about continued 8% GDP growth rates and a Chinese economy in 2030 that will be twice as large as that of the US.
[ Mark Leonard: What does the new China think? ] ... The Chinese like to think of history progressing in 30-year cycles. They think of China 1.0 as the years of Mao Zedong, which lasted from 1949 to 1978, when China had a planned economy, a Leninist political system, and a foreign policy of spreading global revolution. China 2.0 was the China that began with Deng Xiaoping in 1978 and spanned a generation until the financial crisis of 2008. Deng’s economic policy – launched under the label of “socialism with Chinese characteristics” – was defined by export-led growth backed up by “financial repression”. Deng’s political agenda was characterised by the quest for stability and elite consensus in the wake of the Tiananmen massacre. And his foreign-policy outlook was about creating a peaceful environment for China’s development by quietly amassing power and keeping a low profile.
Since the global financial meltdown of 2008, China has been facing a crisis of success as each of the three goals of Deng’s era – affluence, stability, and power – is seen as the source of new problems. François Godement has characterised it as a success trap: the incredible achievements of the past have built up a powerful constituency for each of the policies of the Deng era but sticking to them now runs the risk of being self-defeating. Incredible as it might seem, some intellectuals have started to talk of the Hu–Wen era, which delivered an average of 10 percent annual growth, as a “lost decade” because much-needed reforms were not made. China 3.0 will be defined by a quest for solutions to these three crises.
[ Zhao Jing  "Michael Anti" ] ... smart censorship hasn’t stopped the Chinanet from developing into a genuine public sphere – a “battlefield” for public opinion and a nightmare for some Chinese officials. China’s 300 million microbloggers – equivalent to the entire population of the United States – constitute a powerful force. For example, the authorities’ attempt to cover up a train crash in Wenzhou in southern China in July 2011 caused huge anger among Chinese netizens. In the first five days after the train crash, 10 million people posted criticisms of the government on social media – something that had never happened before in China. This year, the former railways minister was sacked and sentenced to 10 years in prison.
... So why is Chinese social networking booming despite the censorship? Part of the reason is the Chinese language. Posts on Twitter and Twitter clones such as Weibo are limited to 140 characters. In English that comes to about 20 words or a sentence with a short link – in effect, a headline. But in Chinese you can write a whole paragraph or tell a whole story in 140 characters. One Chinese tweet is equal to 3.5 English tweets. In some ways, Weibo (which means “microblog” in Chinese) is more like Facebook than Twitter. As far as the Chinese are concerned, if something is not on Weibo, it does not exist.
The Chinanet is changing the way people in China think and live. It has given the voiceless a channel to make their voices heard. In the past, China had a petition system – a remedy outside the judicial system that allowed ordinary people to bypass corrupt local officials and appeal directly to the central authorities. But if you have a lot of people going to Beijing, it increases the risk of a revolution. In recent years, many people going to Beijing have been sent back or even thrown into black jails. But now we have Weibo – an alternative way for people to petition the government from their mobile phones.
Some of these complaints are picked up by reporters, professors, or celebrities. The most popular microblogger in China, Yao Chen, has about 21 million followers – almost like a national television station. So, despite censorship, Weibo has given 300 million Chinese people a real chance to talk to each other every day. In fact, it’s the first time there has been a real public sphere in China.
See also Is there a China model? (“performance legitimacy”), and these arguments by venture capitalist and PRC apologist Eric X. Li:
TED blog: ...
1. Adaptability: Political scientists say that one-party systems are incapable of self-correction. Li counters this with the fact that the Party has self-corrected dramatically in the last 64 years, more than any other country in recent memory. The Party’s policies encompassed land collectivization, the Great Leap Forward, the Cultural Revolution, Deng Xiaoping’s market reforms, and Jiang Zemin opening Party membership to private businesspeople — “something unimaginable during Mao’s rule.” And the Party self-corrects in dramatic fashion. New rules get enacted to correct past mistakes, such as term limits with mandatory retirement ages. We also often hear that China is in dire need of political reform, but Li argues this is rhetoric — even if critics don’t see the reform they want to see, political reforms have never stopped. Chinese society is unrecognizable today as compared to 30 years ago. In fact, Li says, “I would venture to suggest that the Party is the world’s leading expert in political reform.”
2. Meritocracy: Another assumption is that one-party rule leads to a closed political system in which power gets concentrated in the hands of the few, leading to bad governance and corruption. Li argues that actually, the Party is one of the most meritocratic political institutions in the world. Only one fifth of Politburo members come from privileged backgrounds, and in the Central Committee of more than 300, the percentage is even smaller. This is thanks to a body little known to Westerners — the Party’s Organization Department system that guides candidates through integrated career tracks for Chinese officials, recruiting college graduates into entry-level positions and promoting them through the ranks, including high officialdom — a process requiring up to three decades. While patronage plays a role, merit is the underlying driver, says Li. “Within this system,” Li says, “and this is not a put-down – merely a statement of fact: George W. Bush and Barack Obama, before running for president, would not have made small-county chief in China’s system.”
3. Legitimacy: Westerners assume that multi-party elections with universal suffrage are the only source of legitimacy. When asked how the Party justifies legitimacy, Li asks, “How about competency?” He notes that when the Party took over in 1949, China was mired in civil war and foreign aggression, and its average life expectancy was 41. Today, it’s the second largest economy in the world, an industrial powerhouse, and its people live in increasing prosperity. Pew Research polls of public attitudes suggest consistently that citizens are highly satisfied with how the country and nation are progressing. A recently released Financial Times survey suggests that 93% of China’s Generation Y are optimistic about their country’s future. Says Li: “If this isn’t legitimacy, I don’t know what is.” Contrast this, he suggests, to the dismal performance of many electoral democracies around the world: “Governments get elected and then fall below approval a few months later and stay there or fall until the next election. Democracy is becoming a perpetual cycle of ‘elect and regret.’”
Of course, Li concedes the country faces enormous challenges: pollution, population, food safety, and on the political front, corruption, which is widespread and undermines moral legitimacy. But the argument that the one-party system causes corruption doesn’t hold water. According to the Transparency International index of corruption, China has recently ranked between 70 and 80 among 170 countries and is moving up, while India, the largest electoral democracy in the world, is at 95 and dropping.
Erdos and Tao
Paul Erdos and Terence Tao in 1985. Tao would have been 10 years old or so. (From Tao's G+ feed.) Tao is possibly SMPY's most famous alumnus, although I am not 100% sure.
Erdos slang:
Children were referred to as "epsilons" (because in mathematics, particularly calculus, an arbitrarily small positive quantity is commonly denoted by the Greek letter ε)
Women were "bosses"
Men were "slaves"
People who stopped doing mathematics had "died"
People who physically died had "left"
Alcoholic drinks were "poison"
Music was "noise"
People who had married were "captured"
People who had divorced were "liberated"
To give a mathematical lecture was "to preach"
To give an oral exam to a student was "to torture" him/her.
Friday, June 14, 2013
Spy vs Spy
You'd have to be very naive to think that national intelligence agencies don't have dedicated hacking and information security penetration operations. In fact, if the US lacked this capability our spymasters would be derelict in their duty. Most of the complaining about foreign hacking or signals intelligence is just playing to (the dumb or naive part of) the domestic audience.
It was always amusing to play spot the Fed at Def Con ;)
The manpower necessary to practice traditional SIGINT can be found in well-defined places: you need people with CS, EE, Physics and Math backgrounds. For crypto you need very smart guys with math ability. But hacking/cracking involves a certain obsessive-compulsive personality component: you have to focus really hard on ugly bits of (often poorly designed) code and immerse yourself in the inelegant details. There's also an associated anti-authoritarian streak, which clashes with the nature of government service. So it's challenging for the spooks to recruit and retain hacker/cracker talent. The suits coexist uneasily with the "wild type" found at places like Def Con. (Did I ever mention I almost accepted a summer job offer from the Institute for Defense Analysis after I graduated from Caltech? That's yet another story ...)
Here's something about TAO ("Tailored Access Operations"!), within the NSA.
Foreign Policy: ... By the time Obama became president of the United States in January 2009, TAO had become something akin to the wunderkind of the U.S. intelligence community. "It's become an industry unto itself," a former NSA official said of TAO at the time. "They go places and get things that nobody else in the IC [intelligence community] can."
Given the nature and extraordinary political sensitivity of its work, it will come as no surprise that TAO has always been, and remains, extraordinarily publicity shy. Everything about TAO is classified top secret codeword, even within the hyper-secretive NSA. Its name has appeared in print only a few times over the past decade, and the handful of reporters who have dared inquire about it have been politely but very firmly warned by senior U.S. intelligence officials not to describe its work for fear that it might compromise its ongoing efforts. According to a senior U.S. defense official who is familiar with TAO's work, "The agency believes that the less people know about them [TAO] the better."
The word among NSA officials is that if you want to get promoted or recognized, get a transfer to TAO as soon as you can. The current head of the NSA's SIGINT Directorate, Teresa Shea, 54, got her current job in large part because of the work she did as chief of TAO in the years after the 9/11 terrorist attacks, when the unit earned plaudits for its ability to collect extremely hard-to-come-by information during the latter part of George W. Bush's administration. We do not know what the information was, but sources suggest that it must have been pretty important to propel Shea to her position today. But according to a recently retired NSA official, TAO "is the place to be right now."
There's no question that TAO has continued to grow in size and importance since Obama took office in 2009, which is indicative of its outsized role. In recent years, TAO's collection operations have expanded from Fort Meade to some of the agency's most important listening posts in the United States. There are now mini-TAO units operating at the huge NSA SIGINT intercept and processing centers at NSA Hawaii at Wahiawa on the island of Oahu; NSA Georgia at Fort Gordon, Georgia; and NSA Texas at the Medina Annex outside San Antonio, Texas; and within the huge NSA listening post at Buckley Air Force Base outside Denver.
The problem is that TAO has become so large and produces so much valuable intelligence information that it has become virtually impossible to hide it anymore. The Chinese government is certainly aware of TAO's activities. The "mountains of data" statement by China's top Internet official, Huang Chengqing, is clearly an implied threat by Beijing to release this data. Thus it is unlikely that President Obama pressed President Xi too hard at the Sunnylands summit on the question of China's cyber-espionage activities. As any high-stakes poker player knows, you can only press your luck so far when the guy on the other side of the table knows what cards you have in your hand.
Tuesday, June 11, 2013
The ratchet of power
I voted twice for Obama, and always despised Bush-Cheney. But I can't disagree with Cheney's remarks below.
New Yorker: After Barack Obama was elected to his first term as President but before he took the oath of office, Vice-President Dick Cheney gave an exit interview to Rush Limbaugh. Under George W. Bush, Cheney was the architect, along with his legal counsel, David Addington, of a dramatic expansion of executive authority—a power grab that Obama criticized, fiercely, on the campaign trail, and promised to “reverse.” But when Limbaugh inquired about this criticism, Cheney swatted it aside, saying, “My guess is that, once they get here and they’re faced with the same problems we deal with every day, they will appreciate some of the things we’ve put in place.”

See also Making Alberto Gonzales Look Good.
Sunday, June 09, 2013
If you can't fix it you've got to stand it
I have to give the Economist editors credit for the cojones to print a Brokeback Mountain cover with Obama and Xi: If you can't fix it you've got to stand it.
Compare the assumptions I made in some 2004 calculations to the widget below.
Horizons of truth
I'm putting these links and quotes here for my future reference. Sorry if this post seems disjointed and confusing. The Kanamori link below is a nice historical description of Paul Cohen and his work on the Continuum Hypothesis. Amusingly, Cohen once wrote
Cohen and Set Theory (Kanamori)
Skolem and pessimism about proof in mathematics (Cohen)
The discovery of forcing (Cohen)
Cohen on discovering Godel's Incompleteness Theorem as a graduate student. (From My interaction with Kurt Godel, reprinted in Cohen's Set Theory and the Continuum Hypothesis.)
[Cohen:] ... Even if the formalist position is adopted, in actual thinking about mathematics one can have no intuition unless one assumes that models exist and that the structures are real.

[Kanamori:] Cohen then returned to the bedrock of number theory and gave as an example the twin primes conjecture as beyond the reach of proof. “Is it not very likely that, simply as a random set of numbers, the primes do satisfy the hypothesis, but there is no logical law that implies this?” [But weak twin primes has been proved!]
So, let me say that I will ascribe to Skolem a view, not explicitly stated by him, that there is a reality to mathematics, but axioms cannot describe it. Indeed one goes further and says that there is no reason to think that any axiom system can adequately describe it.
... I still had a feeling of skepticism about Godel's work, but skepticism mixed with awe and admiration.

What is my attitude toward foundational work in mathematics, logic and set theory? The nature of proof and rigor? See this earlier post on the relation between physics and mathematics (GC = Gregory Chaitin).
I can say my feeling was roughly this: How can someone thinking about logic in almost philosophical terms discover a result that had implications for Diophantine equations? ... I closed the book and tried to rediscover the proof, which I still feel is the best way to understand things. I totally capitulated. The Incompleteness Theorem was true, and Godel was far superior to me in understanding the nature of mathematics.
Although the proof was basically simple, when stripped to its essentials I felt that its discoverer was above me and other mere mortals in his ability to understand what mathematics, and even human thought for that matter, really was. From that moment on, my regard for Godel was so high that I almost felt it would be beyond my wildest dreams to meet him and discover for myself how he thought about mathematics and the fount from which his deep intuition flowed. I could imagine myself as a clever mathematician solving difficult problems, but how could I emulate a result of the magnitude of the Incompleteness Theorem? There it stood, in splendid isolation and majesty, not allowing any kind of completion or addition because it answered the basic questions with such finality.
... Let's recall David Deutsch's 1982 statement:
The reason why we find it possible to construct, say, electronic calculators, and indeed why we can perform mental arithmetic, cannot be found in mathematics or logic. The reason is that the laws of physics "happen" to permit the existence of physical models for the operations of arithmetic such as addition, subtraction and multiplication.
Does this apply to mathematics too? ...
GC: But mathematicians shouldn't think they can replace physicists: There's a beautiful little 1943 book on Experiment and Theory in Physics by Max Born where he decries the view that mathematics can enable us to discover how the world works by pure thought, without substantial input from experiment.
CC: What about set theory? Does this have anything to do with physics?
GC: I think so. I think it's reasonable to demand that set theory has to apply to our universe. In my opinion it's a fantasy to talk about infinities or Cantorian cardinals that are larger than what you have in your physical universe. And what's our universe actually like?
a finite universe?
discrete but infinite universe (ℵ0)?
universe with continuity and real numbers (ℵ1)?
universe with higher-order cardinals (≥ ℵ2)?
Does it really make sense to postulate higher-order infinities than you have in your physical universe? Does it make sense to believe in real numbers if our world is actually discrete? Does it make sense to believe in the set {0, 1, 2, ...} of all natural numbers if our world is really finite?
CC: Of course, we may never know if our universe is finite or not. And we may never know if at the bottom level the physical universe is discrete or continuous...
GC: Amazingly enough, Cris, there is some evidence that the world may be discrete, and even, in a way, two-dimensional. There's something called the holographic principle, and something else called the Bekenstein bound. These ideas come from trying to understand black holes using thermodynamics. The tentative conclusion is that any physical system only contains a finite number of bits of information, which in fact grows as the surface area of the physical system, not as the volume of the system as you might expect, whence the term "holographic." ...
CC: We seem to have concluded that mathematics depends on physics, haven't we? But mathematics is the main tool to understand physics. Don't we have some kind of circularity?
GC: Yeah, that sounds very bad! But if math is actually, as Imre Lakatos termed it, quasi-empirical, then that's exactly what you'd expect. And as you know Cris, for years I've been arguing that information-theoretic incompleteness results inevitably push us in the direction of a quasi-empirical view of math, one in which math and physics are different, but maybe not as different as most people think. As Vladimir Arnold provocatively puts it, math and physics are the same, except that in math the experiments are a lot cheaper!
CC: In a sense the relationship between mathematics and physics looks similar to the relationship between metamathematics and mathematics. The incompleteness theorem puts a limit on what we can do in axiomatic mathematics, but its proof is built using a substantial amount of mathematics!
GC: What do you mean, Cris?
CC: Because mathematics is incomplete, but incompleteness is proved within mathematics, metamathematics is itself incomplete, so we have a kind of unending uncertainty in mathematics. This seems to be replicated in physics as well: Our understanding of physics comes through mathematics, but mathematics is as certain (or uncertain) as physics, because it depends on the physical laws of the universe where mathematics is done, so again we seem to have unending uncertainty. Furthermore, because physics is uncertain, you can derive a new form of uncertainty principle for mathematics itself...
GC: Well, I don't believe in absolute truth, in total certainty. Maybe it exists in the Platonic world of ideas, or in the mind of God (I guess that's why I became a mathematician), but I don't think it exists down here on Earth where we are. Ultimately, I think that that's what incompleteness forces us to do, to accept a spectrum, a continuum, of possible truth values, not just black and white absolute truth.
In other words, I think incompleteness means that we have to also accept heuristic proofs, the kinds of proofs that George Pólya liked, arguments that are rather convincing even if they are not totally rigorous, the kinds of proofs that physicists like. Jonathan Borwein and David Bailey talk a lot about the advantages of that kind of approach in their two-volume work on experimental mathematics. Sometimes the evidence is pretty convincing even if it's not a conventional proof. For example, if two real numbers calculated for thousands of digits look exactly alike...
CC: It's true, Greg, that even now, a century after Gödel's birth, incompleteness remains controversial. I just discovered two recent essays by important mathematicians, Paul Cohen and Jack Schwartz.* Have you seen these essays?
*P. J. Cohen, ``Skolem and pessimism about proof in mathematics,'' Phil. Trans. R. Soc. A (2005) 363, 2407-2418; J. T. Schwartz, ``Do the integers exist? The unknowability of arithmetic consistency,'' Comm. Pure & Appl. Math. (2005) LVIII, 1280-1286.
GC: No.
CC: Listen to what Cohen has to say:
``I believe that the vast majority of statements about the integers are totally and permanently beyond proof in any reasonable system.''
And according to Schwartz,
``truly comprehensive search for an inconsistency in any set of axioms is impossible.''
GC: Well, my current model of mathematics is that it's a living organism that develops and evolves, forever. That's a long way from the traditional Platonic view that mathematical truth is perfect, static and eternal.
CC: What about Einstein's famous statement that
``Insofar as mathematical theorems refer to reality, they are not sure, and insofar as they are sure, they do not refer to reality.''
Still valid?
GC: Or, slightly misquoting Pablo Picasso, theories are lies that help us to see the truth!
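For reference, the two bounds Chaitin mentions in the exchange above (the holographic principle and the Bekenstein bound) are usually stated as follows. These are standard textbook formulas, not part of the quoted interview:

```latex
% Bekenstein bound: the information I (in bits) contained in a physical
% system of radius R and total energy E is finite:
I \;\le\; \frac{2\pi R E}{\hbar c \ln 2}

% Holographic bound: the entropy S of a region is limited by its boundary
% area A in Planck units (\ell_P^2 = G\hbar/c^3), not by its volume:
S \;\le\; \frac{k_B\, A}{4\,\ell_P^2}
```

Both bounds are finite for any bounded region with finite energy, which is what licenses Chaitin's claim that a physical system holds only finitely many bits.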
Friday, June 07, 2013
Weak Meat Strong Eat
弱肉強食
These characters mean "Weak Meat Strong Eat": the weak are meat for the strong. It's a Chinese (and Japanese) saying. Sometimes it is even translated as "Survival of the fittest"!
One of the repeated themes in Cloud Atlas is: The weak are meat the strong do eat. David Mitchell, the author, lived in Japan for many years and has a Japanese wife. I suspect he learned this phrasing from the Japanese.
My favorite subplot in Cloud Atlas is the Orison of Sonmi 451. Hmm... what is psychogenomics? (See also Dune.)
Cloud Atlas: ... sourced his supply of psychogenomics theses from an obscure tech institute in Baikal. The original author of my xpostgrad’s work was a production zone immigrant named Yusouf Suleiman. Xtremists were killing genomicists in Siberia at that time, and Suleiman and three of his professors were blown up by a car bomb. Baikal being Baikal, Suleiman’s research languished in obscurity for ten years until it was sold on. The agent liaised with contacts at Papa Song Corp to instream Suleiman’s ascension neuroformula to our Soap. Yoona939 was the prime specimen; I was a modified backup. If all that sounds unlikely, Hae-Joo added, I should remember that most of science’s holy grails are discovered by accident, in unxpected places.

Some video of Suleiman (identified by caption) appears in the movie in the scene in which Hae-Joo educates Sonmi about fabricants and genetic engineering. Suleiman is lecturing in front of a board covered with equations, no snail shells in sight.
Tuesday, June 04, 2013
Morgan Freeman on physics and physicists
Freeman is the host of the Science Channel show Through the Wormhole. (Thanks to a dude at PIMCO for sending me this video :)
I think I'm the only physicist on the show who actually went through a wormhole, in a VW bug no less :)
Sunday, June 02, 2013
All that is left is the wind in the pines
The FT's Asia editor has lunch with Japan scholar Donald Keene (shown above with Yukio Mishima and Hiroshi Akutagawa). Keene has a keen intelligence and fine literary and historical sensibilities. He recently became a Japanese citizen at the age of 89. I highly recommend his memoir Chronicles of My Life: An American in the Heart of Japan. See also Japanese Diaries and Yukio Mishima. (Via Gwern.)
Financial Times: ... I steer Keene back more than 70 years to when, as an 18-year-old, he came across a translation of The Tale of Genji in the Astor Hotel in New York. At the time, Keene was studying French and Greek literature at Columbia University, having won a scholarship to study there at the age of 16. He bought the book because, at 59 cents, the epic story, written 1,100 years ago, contained more words per dollar than any book in the store. That was how the love affair began.
... “He was an extraordinary person,” says Keene, who knew Mishima well. They had met, symbolically enough, outside Tokyo’s Kabukiza theatre in 1954, and had gone to see plays together. Keene had translated one of Mishima’s modern Noh plays.
“He died, as you know, at the age of 45, leaving at least 45 stacked volumes of novels, plays, criticism, poetry.” Mishima slit his belly after leading a failed, and farcical, coup to restore the emperor’s power but Keene thinks he committed suicide because he was passed over for the Nobel Prize. During the 1964 Tokyo Olympics, Mishima had written Keene a letter with the line, “I envy the athletes who know if they are first, second or third.” Keene says: “That was all he said but I knew exactly what he meant.” The irony was that Kawabata, who did win the Nobel Prize, also committed suicide because of the pressure of living up to his new reputation.
As our coffee arrives, I mention the writer Jun Takami (1907-1965), one of whose plays ends with the line, “All that is left is the wind in the pines.” Keene gives me an ecstatic look. “Yes, it’s the most marvellous end to any play. There’s nothing on the stage at all, nothing but the pine.” He shakes his head at the sadness and the beauty. “Oh, what a stroke of genius that was. I think of it now and I’ve got shivers going down my spine.”
One passage in Takami’s diaries was written in 1944 during the wartime bombing of Tokyo when he was trying to get his mother to safety in the countryside. At Ueno station “everybody is quiet, everybody’s just moving slowly and no one is trying to get ahead of anybody else,” says Keene. “And Takami thought, ‘I want to live with these people. I want to die with these people’. And that is what I [Keene] thought in January, when I was in hospital. ‘I want to live with these people. I want to die with these people.’”
Keene is in a kind of reverie by now, lost in the personal and literary wanderings of a lifetime. “I knew Takami Jun,” he says, using the Japanese name order. “He was a very elegant man. The last time I saw him, he was wearing a white suit and he was surrounded by about seven or eight young women. And he was smiling.” Keene’s eyes are moist. He is staring past me or through me. The restaurant is still quite empty but Keene has flooded it with the memories of people, mostly long dead. He stands to leave and is helped up the narrow stairs to the city above. Down in the basement, I am left at the empty table. There is nothing, not even the wind in the pines.
Lore
This is a beautiful but difficult movie. You can watch it right now on Google Play. Interview with director Cate Shortland.
See also Bitter Defeat.
Saturday, June 01, 2013
The genetics of humanness
Roughly speaking, modern humans differ from chimpanzees with probability 0.01 at a particular base in the genome, from neanderthals with probability 0.003, and from each other with probability 0.001 (this final number varies by about 15% depending on ancestral population). The neanderthal research is particularly interesting in that we will eventually be able to determine the specific genetic changes that make modern humans different from (smarter than?) neanderthals. Certain regions in the genome, known as HARs (Human Accelerated Regions), are conserved in mammals such as mice, dogs, chimpanzees, even neanderthals, but show rapid recent changes in humans. It's reasonable to suspect that these regions are doing interesting things ...
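To put these per-base probabilities in perspective, here is a back-of-the-envelope sketch (my own arithmetic, under the simplifying assumption that sites differ independently at the quoted rates, ignoring indels and structural variation):

```python
# Expected number of differing bases between two genomes, assuming a fixed
# independent per-base difference probability. The probabilities are the
# rough figures quoted above; the genome size is the usual ~3 billion bp.

GENOME_SIZE = 3_000_000_000  # roughly 3 billion base pairs

p_diff = {
    "human vs chimpanzee": 0.01,
    "human vs neanderthal": 0.003,
    "human vs human": 0.001,
}

expected = {pair: GENOME_SIZE * p for pair, p in p_diff.items()}

for pair, n in expected.items():
    print(f"{pair}: ~{n:,.0f} differing bases")
# human vs chimpanzee: ~30 million differing bases
# human vs neanderthal: ~9 million
# human vs human: ~3 million
```

The human-vs-human figure of roughly 3 million differing sites is in line with the commonly quoted count of a few million SNP differences between two unrelated individuals.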
See also this recent paper: Analysis of Human Accelerated DNA Regions Using Archaic Hominin Genomes (PLoS).
 rationality (3)
 risk preference (3)
 search (3)
 sec (3)
 sivs (3)
 society generale (3)
 thailand (3)
 alibaba (2)
 assortative mating (2)
 bear stearns (2)
 bruce springsteen (2)
 charles babbage (2)
 cheng ting hsu (2)
 cloning (2)
 david mamet (2)
 digital books (2)
 donald mackenzie (2)
 drones (2)
 eliot spitzer (2)
 empire (2)
 exchange rates (2)
 freddie (2)
 gaussian copula (2)
 heinlein (2)
 industrial revolution (2)
 james watson (2)
 ltcm (2)
 magic (2)
 mating (2)
 mba (2)
 mccain (2)
 monkeys (2)
 national character (2)
 nicholas metropolis (2)
 no holds barred (2)
 offices (2)
 oligarchs (2)
 palin (2)
 population structure (2)
 prisoner's dilemma (2)
 simulation (2)
 skidelsky (2)
 socgen (2)
 sprints (2)
 supercomputers (2)
 systemic risk (2)
 variance (2)
 virtual reality (2)
 abx (1)
 anathem (1)
 andrew lo (1)
 antikythera mechanism (1)
 athens (1)
 atlas shrugged (1)
 ayn rand (1)
 bay area (1)
 beats (1)
 book search (1)
 bunnie huang (1)
 car dealers (1)
 carlos slim (1)
 catastrophe bonds (1)
 cdos (1)
 ces 2008 (1)
 chance (1)
 children (1)
 cochranharpending (1)
 cpi (1)
 david x. li (1)
 dick cavett (1)
 dolomites (1)
 drugs (1)
 dune (1)
 eharmony (1)
 epidemics (1)
 escorts (1)
 faces (1)
 fads (1)
 favorite posts (1)
 fiber optic cable (1)
 francis crick (1)
 frauds (1)
 gary brecher (1)
 gizmos (1)
 greece (1)
 greenspan (1)
 hypocrisy (1)
 igon value (1)
 iit (1)
 inflation (1)
 information asymmetry (1)
 iphone (1)
 jack kerouac (1)
 jaynes (1)
 jfk (1)
 john dolan (1)
 john kerry (1)
 john paulson (1)
 john searle (1)
 john tierney (1)
 jonathan littell (1)
 las vegas (1)
 lawyers (1)
 lehman auction (1)
 les bienveillantes (1)
 lowell wood (1)
 lse (1)
 mcgeorge bundy (1)
 mexico (1)
 michael jackson (1)
 mickey rourke (1)
 migration (1)
 mit (1)
 money:tech (1)
 myron scholes (1)
 netwon institute (1)
 networks (1)
 newton institute (1)
 nfl (1)
 oliver stone (1)
 phil gramm (1)
 philanthropy (1)
 philip greenspun (1)
 portfolio theory (1)
 power laws (1)
 randomness (1)
 recession (1)
 sales (1)
 singapore (1)
 skype (1)
 standard deviation (1)
 star wars (1)
 starship troopers (1)
 students today (1)
 teleportation (1)
 tierney lab blog (1)
 tomonaga (1)
 twitter (1)
 tyler cowen (1)
 ussr (1)
 venice (1)
 violence (1)
 virtual meetings (1)
 war nerd (1)
 wealth effect (1)