Thursday, September 29, 2016

DJI Mavic



$750 without controller (use your phone) or $1000 with DJI controller. Weighs about 1.5 lbs.

New Yorker: Practice Doesn't Make Perfect (Zach Hambrick, MSU Psychology)

MSU Psychology Professor Zach Hambrick is featured in this New Yorker article about Nature vs Nurture. How far the pendulum has swung since the naive days of Malcolm Gladwell's Outliers and its credulous embrace of Anders Ericsson's nurturist claims. David Lubinski and SMPY also make an appearance.
New Yorker: Practice Doesn't Make Perfect

... So how much did practice actually explain? In a 2014 meta-analysis that looked specifically at the relationship between deliberate practice and performance in music, games like chess, sports, education, and other professions, Hambrick and his team found a relationship that was even more complex than they had expected. For some things, like games, practice explained about a quarter of variance in expertise. For music and sports, the explanatory power accounted for about a fifth. But for education and professions like computer science, military-aircraft piloting, and sales, the effect ranged from small to tiny. For all of these professions, you obviously need to practice, but natural abilities matter more.

What’s more, the explanatory power of practice fell even further when Hambrick took exact level of expertise into account. In sports—one of the areas in which deliberate practice seems to make the most difference—it turned out that the more advanced the athlete, the less of a role practice plays. Training an average athlete for a set number of hours yields far more results than training an élite athlete, which, in turn, yields greater results than training a super-élite athlete. Put differently, someone like me is going to improve a great deal with even a few hundred hours of training. But within an Olympic team tiny differences in performance are unlikely to be the result of training: these athletes train together, with the same coach, day in and day out. Those milliseconds come from somewhere else. Some may be due to the fact that genetic differences can account for some of the response to training. ...

So where else, exactly, do performance differences come from? While Hambrick’s work has been focussed more explicitly on practice and genetics, David Lubinski, a professor of psychology at Vanderbilt University, has been approaching the question from a slightly different angle: through what’s called the Study of Mathematically Precocious Youth (smpy), a longitudinal study of the lives of students who, by the age of thirteen, had scored in the top one per cent of mathematical-reasoning ability and were then selected to take part in an enriched educational environment. (The study, co-directed for many years by Lubinski and his wife, Vanderbilt’s education-school dean, Camilla Benbow, was described in detail in a recent article in Nature.) It’s a crucial supplement to work like Hambrick’s; the data you get from close observation of the same sample and the same individuals over time can answer questions other approaches can’t. “What kinds of practice are more effective? What approaches are more effective for some people than others?” Hambrick asks. “We need all the pieces to the puzzle to maximize people’s potential. Lubinski’s work on mathematically precocious youth is an essential piece.”

MSU Autonomous Vehicle at World Mobility Forum in Detroit

This is a strategic area of research for Michigan. One of our professors demonstrated a deep neural network trained to pick individual human pedestrians out from 10-20 fps video taken as a car navigates the MSU campus. If you've ever tried to drive across our campus in the middle of the day, with students trying to get to class or lab or dorm, you'll understand what a difficult computational task this is!


Tuesday, September 27, 2016

Steel Man (opposite of Straw Man) Rationality

I often tell people on my team who are arguing for a particular position to also be ready and able to summarize the best argument against their position.

This might be something to keep in mind while thinking about the election and the debate last night.
Steel man: Sometimes the term "steel man" is used to refer to a position's or argument's improved form. A straw man is a misrepresentation of someone's position or argument that is easy to defeat: a "steel man" is an improvement of someone's position or argument that is harder to defeat than their originally stated position or argument.

John Stuart Mill: "He who knows only his own side of the case knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side, if he does not so much as know what they are, he has no ground for preferring either opinion..."
While the questions below may have right and wrong answers, I doubt the analysis leading to those answers is in any way simple and I doubt that most people with strong opinions on them could give the strongest arguments either for or against their own position.

Has global trade helped the average American over the last 30 years? Link.

When elites and the larger population disagree on a policy issue, which group generally gets their way? Link.

Can a nation with a $20 trillion national debt and large trade deficit maintain an aggressive military and foreign policy stance? Link.

At what threshold of economic productivity does a net "taker" become a "maker"? What is the threshold relative to the current minimum wage? Link.

Should the US accept more low-skill immigrants? What are the long term consequences? Link.

Is the total economic return to university education, either to the individual or to the nation, positive regardless of the ability level of the individual? Link.

Is the US winning or losing the cyberwar? Are China and Russia also listening to Merkel's cell phone conversations? Are PCs in Chinese and Russian government ministries actually secure and better maintained than ours? Link.

Is it possible that much of economic growth in future decades depends on technical innovations in areas such as quantum physics, machine learning, AI, genomics, advanced materials, robotics that only the top few percent of the population are cognitively equipped to understand? Has this already been true for some time? Link.


WARNING: THESE VIDEOS MAY CONTAIN TRIGGERING OR OFFENSIVE MATERIAL. THEY HAVE SOMETHING FOR EVERYONE.





Thursday, September 22, 2016

Annals of Reproducibility in Science: Social Psychology and Candidate Gene Studies

Andrew Gelman offers a historical timeline for the reproducibility crisis in Social Psychology, along with some juicy insight into the "one funeral at a time" manner in which academic science often advances.
OK, that was a pretty detailed timeline. But here’s the point. Almost nothing was happening for a long time, and even after the first revelations and theoretical articles you could still ignore the crisis if you were focused on your research and other responsibilities. ...

Then, all of a sudden, the world turned upside down.

If you’d been deeply invested in the old system, it must be pretty upsetting to think about change. Fiske is in the position of someone who owns stock in a failing enterprise, so no wonder she wants to talk it up. The analogy’s not perfect, though, because there’s no one for her to sell her shares to. What Fiske should really do is cut her losses, admit that she and her colleagues were making a lot of mistakes, and move on. She’s got tenure and she’s got the keys to PPNAS, so she could do it. Short term, though, I guess it’s a lot more comfortable for her to rant about replication terrorists and all that.

... Why do I go into all this detail? Is it simply mudslinging? Fiske attacks science reformers, so science reformers slam Fiske? No, that’s not the point. The issue is not Fiske’s data processing errors or her poor judgment as journal editor; rather, what’s relevant here is that she’s working within a dead paradigm. A paradigm that should’ve been dead back in the 1960s when Meehl was writing on all this, but which in the wake of Simonsohn, Button et al., Nosek et al., is certainly dead today. It’s the paradigm of the open-ended theory, of publication in top journals and promotion in the popular and business press, based on “p less than .05” results obtained using abundant researcher degrees of freedom. It’s the paradigm of the theory that in the words of sociologist Jeremy Freese, is “more vampirical than empirical—unable to be killed by mere data.”

... In her article that was my excuse to write this long post, Fiske expresses concerns for the careers of her friends, careers that may have been damaged by public airing of their research mistakes. Just remember that, for each of these people, there may well be three other young researchers who were doing careful, serious work but then didn’t get picked for a plum job or promotion because it was too hard to compete with other candidates who did sloppy but flashy work that got published in Psych Science or PPNAS. It goes both ways. ...
An old-timer who has seen it all before comments.
ex-social psychologist says:
September 21, 2016 at 5:36 pm

Former professor of social psychology here, now happily retired after an early buyout offer. If not so painful, it would almost be funny at how history repeats itself: This is not the first time there has been a “crisis” in social psychology. In the late 1960s and early 1970s there was much hand-wringing over failures of replication and the “fun and games” mentality among researchers; see, for example, Gergen’s 1973 article “Social psychology as history” in JPSP, 26, 309-320, and Ring’s (1967) JESP article, “Experimental social psychology: Some sober questions about some frivolous values.” It doesn’t appear that the field ever truly resolved those issues back when they were first raised–instead, we basically shrugged, said “oh well,” and went about with publishing by any means necessary.

I’m glad to see the renewed scrutiny facing the field. And I agree with those who note that social psychology is not the only field confronting issues of replicability, p-hacking, and outright fraud. These problems don’t have easy solutions, but it seems blindingly obvious that transparency and open communication about the weaknesses in the field–and individual studies–is a necessary first step. Fiske’s strategy of circling the wagons and adhering to a business-as-usual model is both sad and alarming.

I took early retirement for a number of reasons, but my growing disillusionment with my chosen field was certainly a primary one.
Geoffrey Miller also contributes
Geoffrey Miller says:
September 21, 2016 at 8:43 pm

There’s also a political/ideological dimension to social psychology’s methodological problems.

For decades, social psych advocated a particular kind of progressive, liberal, blank-slate ideology. Any new results that seemed to support this ideology were published eagerly and celebrated publicly, regardless of their empirical merit. Any results that challenged it (e.g. by showing the stability or heritability of individual differences in intelligence or personality) were rejected as ‘genetic determinism’, ‘biological reductionism’, or ‘reactionary sociobiology’.

For decades, social psychologists were trained, hired, promoted, and tenured based on two main criteria: (1) flashy, counter-intuitive results published in certain key journals whose editors and reviewers had a poor understanding of statistical pitfalls, (2) adherence to the politically correct ideology that favored certain kinds of results consistent with a blank-slate, situationist theory of human nature, and derogation of any alternative models of human nature (see Steven Pinker’s book ‘The blank slate’).

Meanwhile, less glamorous areas of psychology such as personality, evolutionary, and developmental psychology, intelligence research, and behavior genetics were trundling along making solid cumulative progress, often with hugely greater statistical power and replicability (e.g. many current behavior genetics studies involve tens of thousands of twin pairs across several countries). But do a search for academic positions in the APS job ads for these areas, and you’ll see that they’re not a viable career path, because most psych departments still favor the kind of vivid but unreplicable results found in social psych and cognitive neuroscience.

So, we’re in a situation where the ideologically-driven, methodologically irresponsible field of social psychology has collapsed like a house of cards … but nobody’s changed their hiring, promotion, or tenure priorities in response. It’s still fairly easy to make a good living doing bad social psychology. It’s still very hard to make a living doing good personality, intelligence, behavior genetic, or evolutionary psychology research.

In the title of this post I mention Candidate Gene Studies. Forget, for the moment, about goofy Social Psychology experiments conducted on undergraduates. Much more money was wasted in the early 21st century on under-powered genomics studies that looked for gene-trait associations using small samples. Researchers, overconfident in their vaunted biological or biochemical intuition, performed studies using p < 0.05 thresholds that produced (ultimately false) associations between candidate genes and a variety of traits. According to Ioannidis, almost none of these results replicate (more). When I first became aware of GWAS almost a decade ago, the field was in disarray, with some journals still publishing results at the p < 0.05 threshold, while others had adopted the corrected "genome-wide significance" threshold p < 5E-08 = 0.05 / 1E06 (a Bonferroni correction for roughly 1E06 independent SNPs)! The latter results routinely replicate, as expected.
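To make the multiple-testing arithmetic concrete, here is a minimal sketch (Python; the numbers are illustrative) of why the nominal threshold fails at genome scale:

# Why p < 0.05 fails at GWAS scale: Bonferroni-style correction
n_tests = int(1e6)        # roughly 1E06 independent common SNPs tested
alpha_nominal = 0.05      # per-test threshold used in candidate gene studies
alpha_gwas = alpha_nominal / n_tests  # = 5E-08, "genome-wide significance"

# Expected number of false positives if no SNP is truly associated:
print(n_tests * alpha_nominal)  # 50000.0 -- a flood of spurious associations
print(n_tests * alpha_gwas)     # 0.05 -- less than one expected in ~20 scans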

Clearly, many researchers fundamentally misunderstood basic statistics, or at least were grossly overconfident in their priors for no good reason. But as of today, genomics has corrected its practices, and although no one wants to dwell on the 5+ years' worth of non-replicable published results, science is at least moving forward. I hope Social Psychology and other problematic areas (such as biomedical research) can self-correct their practices as genomics has.

See also One funeral at a time?


Bonus Feature!
Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature

Denes Szucs, John PA Ioannidis
doi: http://dx.doi.org/10.1101/071530

We have empirically assessed the distribution of published effect sizes and estimated power by extracting more than 100,000 statistical records from about 10,000 cognitive neuroscience and psychology papers published during the past 5 years. The reported median effect size was d=0.93 (inter-quartile range: 0.64-1.46) for nominally statistically significant results and d=0.24 (0.11-0.42) for non-significant results. Median power to detect small, medium and large effects was 0.12, 0.44 and 0.73, reflecting no improvement through the past half-century. Power was lowest for cognitive neuroscience journals. 14% of papers reported some statistically significant results, although the respective F statistic and degrees of freedom proved that these were non-significant; p value errors positively correlated with journal impact factors. False report probability is likely to exceed 50% for the whole literature. In light of our findings the recently reported low replication success in psychology is realistic and worse performance may be expected for cognitive neuroscience.
From the paper. FRP = False Report Probability = the probability that the null hypothesis is true when we get a statistically significant finding.
... In all, the combination of low power, selective reporting and other biases and errors that we have documented in this large sample of papers in cognitive neuroscience and psychology suggest that high FRP are to be expected in these fields. The low reproducibility rate seen for psychology experimental studies in the recent Open Science Collaboration (Nosek et al. 2015a) is congruent with the picture that emerges from our data. Our data also suggest that cognitive neuroscience may have even higher FRP rates, and this hypothesis is worth evaluating with focused reproducibility checks of published studies. Regardless, efforts to increase sample size, and reduce publication and other biases and errors are likely to be beneficial for the credibility of this important literature.
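Given the definition of FRP above, the ">50%" claim follows from Bayes' rule once you fix the significance level, the power, and the prior probability that a tested effect is real. A minimal sketch (Python; the 10% prior is my assumption, not a number from the paper):

# FRP = P(null is true | statistically significant result), via Bayes' rule
def false_report_probability(alpha, power, prior_true):
    # alpha: significance threshold, i.e., P(significant | null is true)
    # power: P(significant | effect is real)
    # prior_true: assumed fraction of tested effects that are real
    p_sig_real = power * prior_true
    p_sig_null = alpha * (1 - prior_true)
    return p_sig_null / (p_sig_real + p_sig_null)

# Using the paper's median power for medium effects (0.44), alpha = 0.05,
# and an assumed 10% prior of true effects:
print(false_report_probability(0.05, 0.44, 0.10))  # ~0.51, i.e., over 50%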

Wednesday, September 21, 2016

The death of Sol (Soylent Green)



I first saw Soylent Green on television when I was a kid. (CBS Late Night Movie of the Week, or something like that :-) It was terrifying -- the sweaty dystopian desperation, the riot scenes, but most of all Sol's final moments at the euthanasia center.
Backstage: ... it is unlikely that an actor will ever give a last performance with the stunning emotional resonance of Edward G. Robinson’s work in 1973’s “Soylent Green.”

The film is an environmental parable and, along with “Planet of the Apes,” could be considered the granddaddy of the “dystopian future” genre. But in 1973, the notion of a world destroyed by pollution, overpopulation, and food shortages was frightening and fresh. The fact that every ill depicted in “Soylent Green” (set in the then-distant world of 2022) is actually coming to pass has only made the film seem prescient.

Robinson plays Sol Roth, the partner, friend, and father figure to Charlton Heston’s Detective Thorn, a cop tasked with saving the world. As a man who remembers the Earth’s beauty before it was compromised, Robinson’s Sol symbolizes nothing less than humanity itself, and is given scene after scene where he conveys the wonder and longing for a world that exists only in his memory. These moments—Robinson remembering, for example, how food used to taste when he was young—are both chilling and touching. He breaks your heart, but Robinson is never sentimental.

That Robinson was dying of cancer during filming makes his rich performance even more psychologically intricate. In real life, Robinson died two weeks after filming ended—and he dies in the film itself. “Soylent Green” imagines Sol Roth’s elective euthanasia scene as the final word in personalized shopping. After making his AV preferences for his final journey, Roth is escorted to a large room where he lies on a gurney, imbibes some sort of (presumably) fatal drink, and begins what can only be described as one of the most poetic and powerful death scenes in film history.

As the soundtrack plays an assortment of elegiac Beethoven and Tchaikovsky, Sol watches a 1973 version of an IMAX screen project the breathtaking beauty of the vanished world: sunsets, birds, oceans, plains, flowers. Director Richard Fleischer gives the actor a series of wonderful close-ups, and what Robinson is able to convey with only his eyes is stunning in both its precision and economy. In these close-ups, the actor is able to wordlessly communicate a great many things: the brilliance of the late silent period; the scope of his personal struggles; the totality of his expansive body of work—all with an incredibly light, unmannered touch.

Though he was inexplicably overlooked for an Oscar nomination for “Soylent Green” (and, indeed, for his entire career), he was awarded a richly deserved posthumous Oscar for lifetime achievement. Here’s to you, Mr. Robinson.
See also Soylent is for People.

The death of Fermi


From Stan Ulam's Adventures of a Mathematician. More Ulam. More Fermi.
His illness progressed rapidly. I went to Chicago to visit him. In the hospital I found him sitting up in bed with tubes in the veins of his arms. But he could talk. Seeing me, he smiled as I came in and said: "Stan, things are coming to an end." It is impossible for me to describe how shattering it was to hear this sentence. I tried to keep composed, made a feeble attempt at a joke, then for about an hour we talked about many subjects, and all along he spoke with serenity, and under the circumstances really a superhuman calm.

He mentioned that Teller had visited him the previous day, and joked that he had "tried to save his soul." Normally it is the priest who wants to save the soul of the dying man; Fermi put it the other way round, alluding to the public hullabaloo about Teller and the H-bomb. Perhaps their conversation had an effect, for shortly after Fermi died Teller published an article entitled "The Work of Many People," toning down the assertions of Shepley and Blair. During my visit to Fermi Laura dropped in and I was amazed at the ordinary nature of their conversation about some household appliance.

We talked on and I remember his saying that he believed he had already done about two-thirds of his life's work, no matter how long he might have lived. He added that he regretted a little not having become more involved in public affairs. It was very strange to hear him evaluating his own activity—from the outside, as it were. Again I felt that he achieved this super-objectivity through sheer will power. ...

... Then half seriously I raised the question whether in a thousand years so much progress will be made that it may be possible to reconstruct people who had lived earlier by tracing the genes of the descendants, collecting all the characteristics that make up a person and reconstructing them physically. Fermi agreed, but he added: "How about the memory? How will they put back in the brain all the memories which are the makeup of any given individual?" This discussion now seems rather unreal and even weird, and it was partly my fault to have put us on such a subject, but at the time it came quite naturally from his super-detachment about himself and death.

I paid him one more visit, this time with Metropolis; when we came out of his room I was moved to tears. Only Plato's account of the death of Socrates could apply to the scene, and paraphrasing some of the words of Krito I told Nick, "That now was the death of one of the wisest men known."

Fermi died shortly after.




Lunch with Fermi at Los Alamos. Feynman is looking at the camera.



Ulam with Feynman and von Neumann.


See also Passing the Torch.
From Fermi Remembered. The insightful biographical sketch at the beginning of the book, by Emilio Segrè, includes details of Fermi's early (self-)education and entry into the Scuola Normale Superiore.

Murray Gell-Mann: When Fermi lay dying in Billings Hospital, I realized how much I cared for this brilliant, funny, difficult man. I was on leave in the East, and I invited Frank (C.N.) Yang to come with me to Chicago to see him. When we got to the bedside, Enrico kept telling us not to be downcast. "It is not so bad," he said. He told of a Catholic priest who had visited him and whom he had had to comfort. And Frank reminded me a few years ago of what Enrico said when we left, never to see him again. "Now, it is up to you."

Friday, September 16, 2016

Genomic prediction of adult life outcomes using SNP genotypes


Genomic prediction of adult life outcomes using SNP genotypes is very close to becoming a reality. This was discussed in an earlier post, The Tipping Point. The previous post, Prenatal and pre-implantation genetic diagnosis (Nature Reviews Genetics), describes how genotyping informs the Embryo Selection Problem that arises in In Vitro Fertilization (IVF).

The Adult-Attainment factor in the figure above is computed using inputs such as occupational prestige, income, assets, social welfare benefit use, etc. See Supplement, p.3. The polygenic score is computed using estimated SNP effect sizes from the SSGAC GWAS on educational attainment (i.e., a simple linear model).
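A polygenic score of this kind is nothing more than a weighted sum over SNPs: each genotype (minor-allele count 0, 1, or 2) is multiplied by its GWAS-estimated effect size, and the products are added up. A minimal sketch (Python; random stand-in weights, not the actual SSGAC estimates):

import numpy as np

rng = np.random.default_rng(0)
n_snps = 10_000
betas = rng.normal(0.0, 0.01, n_snps)   # toy effect sizes, NOT real GWAS values
genotypes = rng.integers(0, 3, n_snps)  # one person's minor-allele counts (0/1/2)

raw_score = float(np.dot(betas, genotypes))
# In practice the score is standardized against a reference population, so an
# individual (or embryo) can be described in SD units (e.g., a -2 SD outlier):
# z = (raw_score - population_mean) / population_sd
print(raw_score)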

A genetic test revealing that a specific embryo is, say, a -2 or -3 SD outlier on the polygenic score would probably give many parents pause, in light of the results in the figure above. The accuracy of this kind of predictor will grow with GWAS sample size in coming years.

Via Professor James Thompson. See also discussion by Stuart Ritchie.
The Genetics of Success: How Single-Nucleotide Polymorphisms Associated With Educational Attainment Relate to Life-Course Development

Psychological Science 2016, Vol. 27(7) 957–972
DOI: 10.1177/0956797616643070

A previous genome-wide association study (GWAS) of more than 100,000 individuals identified molecular-genetic predictors of educational attainment. We undertook in-depth life-course investigation of the polygenic score derived from this GWAS using the four-decade Dunedin Study (N = 918). There were five main findings. First, polygenic scores predicted adult economic outcomes even after accounting for educational attainments. Second, genes and environments were correlated: Children with higher polygenic scores were born into better-off homes. Third, children’s polygenic scores predicted their adult outcomes even when analyses accounted for their social-class origins; social-mobility analysis showed that children with higher polygenic scores were more upwardly mobile than children with lower scores. Fourth, polygenic scores predicted behavior across the life course, from early acquisition of speech and reading skills through geographic mobility and mate choice and on to financial planning for retirement. Fifth, polygenic-score associations were mediated by psychological characteristics, including intelligence, self-control, and interpersonal skill. Effect sizes were small. Factors connecting DNA sequence with life outcomes may provide targets for interventions to promote population-wide positive development.

Thursday, September 15, 2016

Prenatal and pre-implantation genetic diagnosis (Nature Reviews Genetics)


An IVF cycle can produce multiple viable embryos (e.g., 5-10 for younger mothers). This leads to an inevitable Embryo Selection Problem. Genomic advances allow for better-informed selection, raising complex ethical issues.
Prenatal and pre-implantation genetic diagnosis

Nature Reviews Genetics 17, 643–656 (2016) doi:10.1038/nrg.2016.97
Published online 15 September 2016

The past decade has seen the development of technologies that have revolutionized prenatal genetic testing; that is, genetic testing from conception until birth. Genome-wide single-cell arrays and high-throughput sequencing analyses are dramatically increasing our ability to detect embryonic and fetal genetic lesions, and have substantially improved embryo selection for in vitro fertilization (IVF). Moreover, both invasive and non-invasive mutation scanning of the genome are helping to identify the genetic causes of prenatal developmental disorders. These advances are changing clinical practice and pose novel challenges for genetic counseling and prenatal care.
From the paper:
Whole-genome analysis of pre-implantation embryos provides information about not only the disorder tested for, but the whole genomic make-up of the embryo. This not only allows for improved selection, but also provides information on genetic variants that are associated with several non-health-related traits. These prospects raise difficult ethical questions. Some people may see this as the slippery slope towards the ‘designer child’ (REF. 136), whereas a different perspective is that it enables prospective parents and professionals to take into account the welfare of the future child. Following the principle of procreative beneficence, it is common practice to rank embryos and select the embryo with the highest chance of resulting in a healthy individual137. This raises questions as to whether prospective parents have the right to select for the best embryo and how to define ‘best’, especially in the context of genome-wide analysis.

...

With further technological improvements and increasing success rates, prenatal and pre-implantation diagnosis of genetic disorders will become commonplace, and with increasing public acceptance a continued growth in their implementation can be anticipated. This implementation, in turn, will reduce the frequency of rare severe inherited genetic diseases. Increasingly, more common genetic variants causing late-onset disorders (for example, BRCA1 and BRCA2) or recessive disorders (for example, cystic fibrosis) could also be selected against and will eventually become rare. In the future, new diagnostic technologies will not only provide a tool to give parents the option of an informed choice, but they will also lead towards fetal personalized medicine ...

Truth and Remembrance at ASEAN: Duterte remarks

What did Philippine President Duterte really say at the recent ASEAN meeting?
Asia Times: Truth and Duterte in media crosshairs

... An actual listen to the full press conference is enlightening in terms of Duterte’s issues with the United States.

At the 6:40 mark, Duterte goes off on a Reuters reporter who, in Duterte’s view, accepts the premise that he needs to answer questions President Obama and others might raise on extrajudicial killings and human rights issues in the drug war.

Duterte is infuriated because in his view the United States is devoid of the moral stature to question him on human rights, given its bloody history of “Moro pacification” in Duterte’s homeland of Mindanao.

CNN helpfully (or hopelessly) glossed the human cost of the US intervention for its readers as a matter of about 600 dead:

Duterte was referring to the US’s history as a colonial power in the Philippines, and specifically to one infamous massacre in the southern Philippines — the 1906 Battle of Bud Dajo — in which hundreds of Filipinos, including women and children, were killed.

Actually, he wasn’t, which CNN would have discovered if they had listened past Duterte’s first agitated reference to his fuller statement about “600” at the ten-minute mark. Duterte is referring to 600,000 dead, not 600. Even more shockingly, Duterte’s number is actually one of the more conservative estimates (the upper end is 1.4 million) of Moro deaths at the hand of the US military.

Yes, American friends, Duterte is referring to one of the most brutal and shameful chapters in the history of American imperialism, the brutal subjugation of the Muslim population of Philippines’ Mindanao over 30 years of formal war and informal counterinsurgency from 1898 into the 1920s.

Mindanao is where the United States first applied the savage lessons of its Indian war to counterinsurgency in Asia—including massacre of civilians, collective punishment, and torture. Waterboarding entered the US military toolkit in Mindanao, as immortalized on the May 22, 1902 front cover of Life magazine.

And the war never ended. After the Philippines shed its colonial status, the Manila Roman Catholic establishment continued the war with US help. Today, the Philippines is locked in a cycle of negotiation and counterinsurgency between the central government and the Moro Islamic Liberation Front (MILF) —a cycle that Duterte as president hopes to bring to its conclusion with a negotiated peace settlement.

This is not ancient history to Duterte, who emphatically stated in his press conference that the reason Mindanao is “on the boil” today is because of the historical crimes of the United States.

Duterte has additional reasons for his choler.

As I wrote previously at Asia Times, Duterte suspects US spooks of orchestrating a deadly series of bombings in his home city of Davao in 2002, with the probable motive of creating a pretext for the central government to declare martial law on Mindanao to fight the MILF. The 2002 Davao bombings form the foundation of Duterte’s alienation from the United States and his resistance to US-Philippine joint exercises on Mindanao, as he declared upon the assumption of his presidency.

And, though it hasn’t received a lot of coverage in the United States, last week, on September 2, another bomb ripped through a marketplace in Davao, killing fourteen people. It was suspected of being part of an assassination plot against Duterte, who was in town at the time, and the Communist Party of the Philippines (which is also engaged in peace talks with Duterte) accused the United States of being behind it.

...

At the ASEAN gathering in Laos, Duterte apparently tried to explain the roots of his indignation ... :

“The Philippine president showed a picture of the killings of American soldiers in the past and the president said: ‘This is my ancestor they killed. Why now we are talking about human rights,'” an Indonesian delegate said. The Philippines was an American colony from 1898 to 1946.

The delegate described the atmosphere in the room as “quiet and shocked.”
As I wrote here (substitute Filipinos for Chinese below):
Most Chinese are incredulous that European colonialists and imperialists, many inhabiting the lands of indigenous people exterminated or displaced only a few centuries ago, would think to assume the moral high ground.
Someone quipped that a transcript of Duterte's remarks might be mistaken for something written by Howard Zinn or a post-colonial theorist. Obama, of all US Presidents, is most likely to understand Duterte's perspective. Obama initially responded by calling Mr. Duterte a “colorful guy” :-)

Saturday, September 10, 2016

Speed, Balding, et al.: "for a wide range of traits, common SNPs tag a greater fraction of causal variation than is currently appreciated"

I recently blogged about a nice lecture by David Balding at the 2015 MLPM (Machine Learning for Personalized Medicine) Summer School: Machine Learning for Personalized Medicine: Heritability-based models for prediction of complex traits. In that talk he discussed some results concerning heritability estimation and potential improvements over GCTA. A new preprint on bioRxiv has the details:
Re-evaluation of SNP heritability in complex human traits

Doug Speed, Na Cai, The UCLEB Consortium, Michael Johnson, Sergey Nejentsev, David Balding
http://dx.doi.org/10.1101/074310

SNP heritability, the proportion of phenotypic variance explained by SNPs, has been estimated for many hundreds of traits, and these estimates are being used to explore genetic architecture and guide future research. To estimate SNP heritability requires strong assumptions about how heritability is distributed across the genome, but the assumptions in current use have not been thoroughly tested. By analyzing imputed data for 42 human traits, we empirically derive an improved model for heritability estimation. It is commonly assumed that the expected heritability of a SNP does not depend on its allele frequency; we instead identify a more realistic relationship which reflects that heritability tends to decrease with minor allele frequency. Two methods for estimating SNP heritability, GCTA and LDAK, make contrasting assumptions about how heritability varies with linkage disequilibrium; we demonstrate that the model used by LDAK better reflects the properties of real data. Additionally, we show how genotype certainty can be incorporated in the heritability model; this enables the inclusion of poorly-imputed SNPs, which can capture substantial extra heritability. Our revised method typically results in substantially higher estimates of SNP heritability: for example, across 19 traits (mainly diseases), the estimates based on common SNPs (minor allele frequency >0.01) are on average 40% (SD 3) higher than those obtained using original GCTA, and 25% (SD 2) higher than those from the recently-proposed extension GCTA-LDMS. We conclude that for a wide range of traits, common SNPs tag a greater fraction of causal variation than is currently appreciated. When we also include rare SNPs (minor allele frequency <0.01), we find that across 23 quantitative traits, estimates of SNP heritability increase by on average 29% (SD 12), and that rare SNPs tend to contribute about half the heritability of common SNPs.
In contrast to GCTA, which assumes the same Gaussian effect-size distribution for every SNP, this paper allows the effect-size variance to depend on a local linkage disequilibrium weight w_j, as well as a SNP quality score r_j. (See equation 1 of the paper.) The intuition behind w_j is that if n SNPs in a small region are all highly correlated, they are likely all proxies for the same causal variant, and one would overcount its contribution by assigning nearly equal effects to each of them. Instead, the method proposed in this paper (roughly) splits the effect size among the SNPs (Figure 1 below). The model also allows the effect-size distribution to depend on the MAF of SNP j: SNPs at lower frequency in the population contribute less to heritability than under the GCTA default assumption.
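Schematically, the ingredients of equation 1 can be put into a few lines (Python; a sketch of the model's form with illustrative values, not the paper's estimates; α is the MAF-scaling parameter, and α = -1 recovers GCTA's assumption that expected heritability is independent of allele frequency):

def expected_h2_weight(maf, ld_weight, quality, alpha=-0.25):
    # maf: minor allele frequency f_j
    # ld_weight: w_j, downweights SNPs in regions of high LD
    # quality: r_j, genotype/imputation certainty in [0, 1]
    return ld_weight * quality * (maf * (1.0 - maf)) ** (1.0 + alpha)

# Five perfectly correlated SNPs tagging one causal variant: the LDAK-style
# weighting splits the signal (w_j = 1/5 each) rather than counting it 5 times.
print(expected_h2_weight(0.3, ld_weight=0.2, quality=1.0))
# GCTA-style: full weight, MAF-independent (alpha = -1 gives exponent zero).
print(expected_h2_weight(0.3, ld_weight=1.0, quality=1.0, alpha=-1.0))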



The resulting heritability estimates tend to be higher than those from GCTA, so if this method is an improvement (as the authors argue), the amount of missing heritability is even smaller than GCTA analyses suggest.

Supplement Figure 21 (p.26) provides yet more criticism of Kumar et al., a paper we discussed previously here. [Kumar, S., Feldman, M., Rehkopf, D. & Tuljapurkar, S. Limitations of GCTA as a solution to the missing heritability problem, PNAS 113, E61–E70 (2016).]

Friday, September 09, 2016

Defense Science Board report on Autonomous Systems


US DOD Defense Science Board report on Autonomy (autonomous systems).
... This report provides focused recommendations to improve the future adoption and use of autonomous systems.

... While difficult to quantify, the study concluded that autonomy—fueled by advances in artificial intelligence—has attained a ‘tipping point’ in value. Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike. The study therefore concluded that DoD must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.

... The primary intellectual foundation for autonomy stems from artificial intelligence (AI), the capability of computer systems to perform tasks that normally require human intelligence (e.g., perception, conversation, decisionmaking). Advances in AI are making it possible to cede to machines many tasks long regarded as impossible for machines to perform. ...

Countering adversary use of autonomy (p.42)

As has become clear in the course of the study, the technology to enable autonomy is largely available anywhere in the world and can—both at rest and in motion—provide significant advantage in many areas of military operations. Thus, it should not be a surprise when adversaries employ autonomy against U.S. forces. Preparing now for this inevitable adversary use of autonomy is imperative.

This situation is similar to the potential adversary use of cyber and electronic warfare. For years, it has been clear that certain countries could, and most likely would, develop the technology and expertise to use cyber and electronic warfare against U.S. forces. Yet most of the U.S. effort focused on developing offensive cyber capabilities without commensurate attention to hardening U.S. systems against attacks from others. Unfortunately, in both domains, that neglect has resulted in DoD spending large sums of money today to “patch” systems against potential attacks. The U.S. must heed the lessons from these two experiences and deal with adversary use of autonomy now.

While many policy and political issues surround U.S. use of autonomy, it is certainly likely that many potential adversaries will have less restrictive policies and CONOPs governing their own use of autonomy, particularly in the employment of lethal autonomy. Thus, expecting a mirror image of U.S. employment of autonomy will not fully capture the adversary potential.

The potential exploitations the U.S. could face include low observability throughout the entire spectrum from sound to visual light, the ability to swarm with large numbers of low-cost vehicles to overwhelm sensors and exhaust the supply of effectors, and maintaining both endurance and persistence through autonomous or remotely piloted vehicles.

...

The U. S. will face a wide spectrum of threats with varying kinds of autonomous capabilities across every physical domain—land, sea, undersea, air, and space—and in the virtual domain of cyberspace as well.

Figure 9 (photo on left) is a small rotary-wing drone sold on the Alibaba web site for $400. The drone is made of carbon fiber; uses both GPS and inertial navigation; has autonomous flight control; and provides full motion video, a thermal sensor, and sonar ranging. It is advertised to carry a 1 kg payload with 18 minutes endurance.

Figure 9 (photo on right) shows a much higher-end application of autonomy, a UUV currently being used by China. Named the Haiyan, in its current configuration it can carry a multiple sensor payload, cruise up to 7 kilometers per hour (4 knots), range to 1,000 kilometers, reach a depth of 1,000 meters, and endure for 30 days. Undersea testing was initiated in mid-2014. The unit can carry multiple sensors and be outfitted to serve a wide variety of missions, from anti-submarine surveillance to anti-surface warfare, underwater patrol, and mine sweeping. The combat potential and applications are clear.

Harvard to Release Six Years of Admissions Data for Lawsuit

This amounts to "comprehensive data" on almost 200k applicants! I imagine the legal team could use some good data scientists...
Crimson: Harvard to Release Six Years of Admissions Data for Lawsuit

Harvard must produce “comprehensive data” from six full admissions cycles for use in the pending admissions lawsuit between the University and anti-affirmative action group Students for Fair Admissions following a court order filed Tuesday.

Students for Fair Admissions launched the lawsuit in 2014, alleging that the University’s admissions process discriminates against Asian American applicants by setting quotas. ...
See also 20 years @15 percent: does Harvard discriminate against Asian-Americans? and much more.

Does this graph look like "soft quotas" to you?

Wednesday, September 07, 2016

Daimler investing 500 million euros in drone delivery (video)



WSJ: Daimler to Work With Matternet to Develop Delivery Van Drones

Auto maker investing $562.75 million to design electric vans that can host aerial deliveries

Daimler AG said on Wednesday it would join with U.S. startup Matternet to develop drones for its delivery vans and invest €500 million ($562.7 million) over the next five years in designing electric, networked vans.

Daimler, the maker of Mercedes-Benz cars and trucks, acquired a minority stake in Menlo Park, Calif.-based Matternet as part of the partnership, a spokeswoman said. Daimler’s overall investment in the initiative, called adVANce, will go to vehicle digitization, automation, robotics and mobility solutions technologies.

“We are looking beyond the vehicle to the whole value chain and the entire environment of our clients,” said van division chief Volker Mornhinweg. The goal is to turn vans into “intelligent, interconnected data centers,” he said.

SMPY in Nature


No evidence of diminishing returns in the far tail of the cognitive ability distribution.
How to raise a genius: lessons from a 45-year study of super-smart children (Nature)

A long-running investigation of exceptional children reveals what it takes to produce the scientists who will lead the twenty-first century.

Tom Clynes 07 September 2016

On a summer day in 1968, professor Julian Stanley met a brilliant but bored 12-year-old named Joseph Bates. The Baltimore student was so far ahead of his classmates in mathematics that his parents had arranged for him to take a computer-science course at Johns Hopkins University, where Stanley taught. Even that wasn't enough. Having leapfrogged ahead of the adults in the class, the child kept himself busy by teaching the FORTRAN programming language to graduate students.

Unsure of what to do with Bates, his computer instructor introduced him to Stanley, a researcher well known for his work in psychometrics — the study of cognitive performance. To discover more about the young prodigy's talent, Stanley gave Bates a battery of tests that included the SAT college-admissions exam, normally taken by university-bound 16- to 18-year-olds in the United States.

Bates's score was well above the threshold for admission to Johns Hopkins, and prompted Stanley to search for a local high school that would let the child take advanced mathematics and science classes. When that plan failed, Stanley convinced a dean at Johns Hopkins to let Bates, then 13, enrol as an undergraduate.

Stanley would affectionately refer to Bates as “student zero” of his Study of Mathematically Precocious Youth (SMPY), which would transform how gifted children are identified and supported by the US education system. As the longest-running current longitudinal survey of intellectually talented children, SMPY has for 45 years tracked the careers and accomplishments of some 5,000 individuals, many of whom have gone on to become high-achieving scientists. The study's ever-growing data set has generated more than 400 papers and several books, and provided key insights into how to spot and develop talent in science, technology, engineering, mathematics (STEM) and beyond.

...

At the start, both the study and the centre were open to young adolescents who scored in the top 1% on university entrance exams. Pioneering mathematicians Terence Tao and Lenhard Ng were one-percenters, as were Facebook's Mark Zuckerberg, Google co-founder Sergey Brin and musician Stefani Germanotta (Lady Gaga), who all passed through the Hopkins centre.

“Whether we like it or not, these people really do control our society,” says Jonathan Wai, a psychologist at the Duke University Talent Identification Program in Durham, North Carolina, which collaborates with the Hopkins centre. Wai combined data from 11 prospective and retrospective longitudinal studies2, including SMPY, to demonstrate the correlation between early cognitive ability and adult achievement. “The kids who test in the top 1% tend to become our eminent scientists and academics, our Fortune 500 CEOs and federal judges, senators and billionaires,” he says.

Such results contradict long-established ideas suggesting that expert performance is built mainly through practice — that anyone can get to the top with enough focused effort of the right kind. SMPY, by contrast, suggests that early cognitive ability has more effect on achievement than either deliberate practice or environmental factors such as socio-economic status.

...

The study's first four cohorts range from the top 3% to the top 0.01% in their SAT scores. The SMPY team added a fifth cohort of the leading mathematics and science graduate students in 1992 to test the generalizability of the talent-search model for identifying scientific potential.

“I don't know of any other study in the world that has given us such a comprehensive look at exactly how and why STEM talent develops,” says Christoph Perleth, a psychologist at the University of Rostock in Germany who studies intelligence and talent development.

...

Monday, September 05, 2016

A secret map of the world (Venkatesh Rao / Ribbonfarm)

This is Venkatesh Rao's conceptual map of the world (as seen from Silicon Valley / the internet). Details in the video and this blog post.



In case you can't make out all the features on the map, here is a hi-res version. See also this other map.

Some places of note:

Isle of Deep Learning
Isle of Physics
Moldbug's Lair
Alt-Right Hills
Dark Enlightenment Volcano
Paleo Crossing
Satoshi Mines
Secret Cloud Empire of Amazon
Fjords of Sisu
Algomonopolia (Google, Facebook, ...)
a16z Unicorn Hunting Ground
Lean Startup Town
SJW Cathedral
Manosphere Tar Pit
Global Bro-Science Laboratory
NSA
Academia
Efficient Market Temple
Graveyard of Boomer Dreams
Ghost of Industrial Past

If these memes are unfamiliar, you need to spend more time on the internet or in the Bay Area :-)


World's fastest supercomputer: Sunway TaihuLight (41k nodes, 11M cores)



Jack Dongarra, professor at UT Knoxville, discusses the strengths and weaknesses of the Sunway TaihuLight, currently the world's fastest supercomputer. The fastest US supercomputer, Titan (#3 in the world), is at Oak Ridge National Lab, near UTK. More here and here.

MSU's latest HPC cluster would be ranked ~150 in the world.
Top 500 Supercomputers in the world

Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is the No. 1 system with 93 petaflop/s (Pflop/s) on the Linpack benchmark. The system has 40,960 nodes, each with one SW26010 processor, for a combined total of 10,649,600 computing cores. Each SW26010 processor is composed of 4 MPEs and 4 CPE clusters of 64 cores each (a total of 260 cores), 4 Memory Controllers (MC), and a Network on Chip (NoC) connected to the System Interface (SI). Each of the four MPE-CPE-MC groups has access to 8GB of DDR3 memory. The system is based on processors exclusively designed and built in China. The Sunway TaihuLight is almost three times as fast and three times as efficient as Tianhe-2, the system it displaces from the number one spot. The peak power consumption under load (running the HPL benchmark) is 15.371 MW, or 6 Gflops/W. This allows the TaihuLight system to hold one of the top spots on the Green500 in terms of the Performance/Power metric. [ IIRC, these processors are inspired by the old Digital Alpha chips that I used to use... ]

...

The number of systems installed in China has increased dramatically to 167, compared to 109 on the last list. China is now at the No. 1 position as a user of HPC. Additionally, China now is at No. 1 position in the performance share thanks to the big contribution of the systems at No. 1 and No. 2.

The number of systems installed in the USA declines sharply and is now at 165 systems, down from 199 in the previous list. This is the lowest number of systems installed in the U.S. since the list was started 23 years ago.

...

The U.S., the leading consumer of HPC systems since the inception of the TOP500 lists, is now second for the first time, after China, with 165 of the 500 systems. China leads both the systems and performance categories now, thanks to the No. 1 and No. 2 systems and a surge in industrial and research installations registered over the last few years. The European share (105 systems, compared to 107 last time) has fallen and is now lower than the dominant Asian share of 218 systems, up from 173 in November 2015.

Dominant countries in Asia are China with 167 systems (up from 109) and Japan with 29 systems (down from 37).

In Europe, Germany is the clear leader with 26 systems followed by France with 18 and the UK with 12 systems.
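The headline TaihuLight numbers quoted above are internally consistent, as a quick check shows (Python):

# Sanity check of the quoted TaihuLight figures
nodes = 40_960
cores_per_processor = 260           # 4 MPEs + 4 x 64-core CPE clusters
print(nodes * cores_per_processor)  # 10649600 cores, as quoted

linpack = 93e15         # 93 Pflop/s on the Linpack benchmark
power_w = 15.371e6      # 15.371 MW under HPL load
print(linpack / power_w / 1e9)  # ~6.05 Gflops/W, matching the quoted ~6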

Thursday, September 01, 2016

Balance of power in the western Pacific (Hugh White ANU video)

More Hugh White (see The Pivot and American Statecraft in Asia). In the first video below, @38 min, he gives what I consider a realistic assessment of the current and near-future military balance of power in the Pacific, including the fact that both sides have significant uncertainty in their evaluation of opponent capability. If you just have a few minutes, that part is worth a listen.

Defeating A2AD (Anti-Access Area Denial) requires ASB (Air Sea Battle), which is highly escalatory. See also A2AD fait accompli?
Hugh White AO is Professor of Strategic Studies at the Australian National University. His work focuses primarily on Australian strategic and defence policy, Asia-Pacific security issues, and global strategic affairs especially as they influence Australia and the Asia-Pacific. He has served as an intelligence analyst with the Office of National Assessments, as a journalist with the Sydney Morning Herald, as a senior adviser on the staffs of Defence Minister Kim Beazley and Prime Minister Bob Hawke, and as a senior official in the Department of Defence, where from 1995 to 2000 he was Deputy Secretary for Strategy and Intelligence, and as the first Director of the Australian Strategic Policy Institute (ASPI). In the 1970s he studied philosophy at Melbourne and Oxford Universities.





In the second video, see the syllogism expressed @25 min. @44 min: surface ships are toast.
