Thursday, October 26, 2017

The Physicist and the Neuroscientist: A Tale of Two Connectomes



This is a video of an excellent talk on the human connectome by neuroscientist Bobby Kasthuri of Argonne National Lab and the University of Chicago. (You can see me sitting on the floor in the corner :-)

The story below is for entertainment purposes only. No triggering of biologists is intended.
The Physicist and the Neuroscientist: A Tale of Two Connectomes

Steve burst into Bobby's lab, a small metal box under one arm. Startled, Bobby nearly knocked over his Zeiss electron microscope.

I've got it! shouted Steve. My former student at DeepBrain sent me one of their first AGIs. It's hot out of their 3D neuromorphic chip printer.

This is the thing that talks and understands quantum mechanics? asked Bobby.

Yes, if I just plug it in. He tapped the box -- This deep net has 10^10 connections! Within spitting distance of our brains, but much more efficient. They trained it in their virtual simulator world. Some of the algos are based on my polytope paper from last year. It not only knows QM, it understands what you mean by "How much is that doggie in the window?" :-)

Has anyone mapped the connections?

Sort of, I mean the strengths and topology are determined by the training and algos... It was all done virtually. Printed into spaghetti in this box.

We've got to scan it right away! My new rig can measure 10^5 connections per second!

What for? It's silicon spaghetti. It works how it works, but we created it! Specific connections... that's like collecting postage stamps.

No, but we need to UNDERSTAND HOW IT WORKS!

...

Why don't you just ask IT? thought Steve, as he left Bobby's lab.
More Bobby, with more hair.

Wednesday, October 25, 2017

AlphaGo Zero: algorithms over data and compute



AlphaGo Zero was trained entirely through self-play -- no data from human play was used. The resulting program is the strongest Go player ever by a large margin, and is extremely efficient in its use of compute (running on only 4 TPUs).
Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0.
Rapid progress from a random initial state is rather amazing, but perhaps something we should get used to given that:

1. Deep Neural Nets are general enough to learn almost any function (i.e., any high-dimensional mathematical function), no matter how complex
2. The optimization process is (close to) convex
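
For intuition about how far pure self-play can go, below is a toy, runnable sketch in the same spirit (my illustration, not DeepMind's method): tabular value learning for the game of Nim, starting from random play and improving only from games played against itself. AlphaGo Zero replaces the value table with a deep network and the greedy move choice with Monte Carlo Tree Search, but the feedback loop -- play yourself, then learn from the outcome -- is the same.

```python
import random

V = {}                 # state (stones left) -> value for the player to move
EPS, LR = 0.1, 0.3     # exploration rate, learning rate

def moves(n):
    """Legal moves in this Nim variant: take 1-3 stones from a pile of n."""
    return [m for m in (1, 2, 3) if m <= n]

def pick(n):
    """Epsilon-greedy: mostly leave the opponent the worst-valued position."""
    if random.random() < EPS:
        return random.choice(moves(n))
    return min(moves(n), key=lambda m: V.get(n - m, 0.0))

def self_play_game(start=15):
    """Play one game against itself, then update V from the final outcome."""
    traj, n, player = [], start, 0
    while n > 0:
        traj.append((n, player))
        n -= pick(n)
        player ^= 1
    winner = player ^ 1            # whoever took the last stone wins
    for state, p in traj:
        target = 1.0 if p == winner else -1.0
        V[state] = V.get(state, 0.0) + LR * (target - V.get(state, 0.0))

for _ in range(20_000):
    self_play_game()

# Optimal play leaves the opponent a multiple of 4; self-play discovers this
# from random beginnings: typically prints {5: 1, 6: 2, 7: 3}.
print({n: min(moves(n), key=lambda m: V.get(n - m, 0.0)) for n in (5, 6, 7)})
```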

A widely discussed AI mystery: how do human babies manage to learn (language, intuitive physics, theory of mind) so quickly and with relatively limited training data? AlphaGo Zero's impressive results are highly suggestive in this context -- the right algorithms make a huge difference.

It seems certain that great things are coming in the near future...

Sunday, October 22, 2017

Steven Weinberg: What's the matter with quantum mechanics?



In this public lecture Weinberg explains the problems with the two predominant interpretations of quantum mechanics, which he refers to as Instrumentalist (e.g., Copenhagen) and Realist (e.g., Many Worlds). The term "interpretation" may be misleading because what is ultimately at stake is the nature of physical reality. Both interpretations have serious problems, but the problem with Realism (in Weinberg's view, and my own) is not the quantum multiverse, but rather the origin of probability within deterministic Schrodinger evolution. Instrumentalism is, of course, ill-defined nutty mysticism 8-)
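
For readers who want the Realist's problem in symbols (standard textbook material, not from the lecture itself): Schrodinger evolution is linear and deterministic, yet measurement outcomes are irreducibly probabilistic, and deriving the second fact from the first is precisely what remains unresolved.

```latex
% Deterministic, linear evolution of the quantum state:
i\hbar \frac{\partial}{\partial t} \lvert \psi(t) \rangle = H \lvert \psi(t) \rangle
% ...yet measuring an observable with eigenstates |phi_n> yields outcome n
% with probability given by the Born rule, which is postulated, not derived:
P(n) = \lvert \langle \varphi_n \vert \psi(t) \rangle \rvert^2
```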

Physicists will probably want to watch this at 1.5x or 2x speed. The essential discussion is at roughly 22-40min, so it's only a 10-minute investment of your time. These slides explain in pictures.

See also Weinberg on Quantum Foundations, where I wrote:
It is a shame that very few working physicists, even theoreticians, have thought carefully and deeply about quantum foundations. Perhaps Weinberg's fine summary will stimulate greater awareness of this greatest of all unresolved problems in science.
and quoted Weinberg:
... today there is no interpretation of quantum mechanics that does not have serious flaws. 
Posts on this blog related to the Born Rule, etc., and two of my papers:
The measure problem in many worlds quantum mechanics

On the origin of probability in quantum mechanics

Dynamical theories of wavefunction collapse are necessarily non-linear generalizations of Schrodinger evolution, which lead to problems with locality.

Among those who take the Realist position seriously: Feynman and Gell-Mann, Schwinger, Hawking, and many more.

Thursday, October 19, 2017

Talking Ta-Nehisi Coates, Seriously?



Glenn Loury is Merton P. Stoltz Professor of the Social Sciences, Department of Economics, Brown University. John McWhorter is Associate Professor of English and Comparative Literature at Columbia University, where he teaches linguistics, American studies, philosophy, and music history.
Loury (@19min): "He's a good writer but not a deep thinker, and he's being taken seriously as if he was a deep thinker... he's talented I mean there's not any doubt about that but the actual analytical content of the argument, there are gaping holes in it..."
On the dangers of Identity Politics:
Loury (@21min): Coates' immersion in a racialist conception of American society ... everything through the lens of race ... is the mirror image, or the flip side, of a white nationalist conception of American society in which everything is viewed in terms of race. Williams in the review includes extensive reportage from his interview of Richard Spencer, the white nationalist leader ... and has Spencer saying back to him, in effect: I'm glad that people are eating up Ta-Nehisi, 'cause I'm glad that they're taking it in, because it's a thoroughly racialized conception. It's racial essentialism at its utmost, and that primes them: they really believe in race, these liberals who are reading Coates, and that means I can flip them, says Richard Spencer. The day will come, given their belief in race -- I can persuade them that they're white. Coates wants them to regret and lament and eschew the fact that they're white. Richard Spencer dreams of a day in which, seeing themselves as white, they'll get tired of hating themselves and flip over to the side of being proud ...
I've been reading Coates for years, since he was a relatively unknown writer at The Atlantic. Here are very good Longform Podcast interviews which explore his early development: 2015, 2014, 2012.

Mentioned in the discussion: Thomas Chatterton Williams, New York Times, How Ta-Nehisi Coates Gives Whiteness Power.

More links.

Tuesday, October 17, 2017

Super Green Smoothie


Frozen Spinach (see picture)
Handful of frozen blueberries
Small handful of nuts (pecans, almonds, etc.)
1/2 scoop protein powder
1-2 cups milk (or 1+1 milk and water)

Makes 2 large glasses of nutritious green super smoothie. Give the other one to your spouse or kid or roommate, or just use half the recipe  :-)

Rinse out the blender container immediately with warm water for easy clean up.

Most of the volume is spinach, so calorie density is low, while antioxidant and nutritional content is high.

Smoothie diet: drink one glass (~250 calories, 20g protein), wait 15 minutes, and all hunger will vanish for 90+ minutes.



(Photo quality is meh because I took these with a $40 Moto E (Android) phone I have been experimenting with. Over Xmas last year I researched cheap Android phones for my kids. There are lots of very good devices for ~$100 or less. The carrier / data costs dwarf the cost of the handset.)

Monday, October 09, 2017

Blade Runner 2049: Demis Hassabis (DeepMind) interviews director Villeneuve



Hassabis refers to AI in the original Blade Runner, but it is apparent from the sequel that replicants are merely genetically engineered humans. AI appears in Blade Runner 2049 in the form of Joi. There seems to be widespread confusion, including in the movie itself, about whether to think about replicants as robots (i.e., hardware) with "artificial" brains, or simply superhumans engineered (by manipulation of DNA and memories) to serve as slaves. The latter, while potentially very alien psychologically (detectable by Voight-Kampff machine, etc.), presumably have souls like ours. (Hassabis refers to Rutger Hauer's decision to have Roy Batty release the dove when he dies as symbolic of Batty's soul escaping from his body.)

Dick himself seems a bit imprecise in his use of the term android (hardware or wet bioware?) in this context. "Electric" sheep? In a bioengineered android brain that is structurally almost identical to a normal human's?

Q&A at 27min is excellent -- concerning the dispute between Ridley Scott and Harrison Ford as to whether Deckard is a replicant, and how Villeneuve handled it, inspired by the original Dick novel.

Addendum: Blade Runner, meet Alien

The Tyrell-Weyland connection

Robots (David, of the Alien prequel Prometheus) vs. genetically engineered slaves (replicants) with false memories



Saturday, October 07, 2017

Information Theory of Deep Neural Nets: "Information Bottleneck"



This talk discusses, in terms of information theory, how the hidden layers of a deep neural net (thought of as a Markov chain) create a compressed (coarse-grained) representation of the input information. To date, the success of neural networks has been a mainly empirical phenomenon, lacking a theoretical framework that explains how and why they work so well.
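
For the curious, here is a minimal sketch (my own, not Tishby's code) of how the talk's "information plane" coordinates can be estimated empirically: discretize a hidden layer's activations into bins, then form plug-in estimates of I(X;T) and I(T;Y) from joint histograms. The data below are random stand-ins, so both estimates come out near zero; with activations from a real trained layer they trace the compression/prediction trade-off the talk describes.

```python
import numpy as np

def mutual_information(a, b):
    """Plug-in estimate of I(A;B) in bits from paired discrete samples."""
    n, joint, pa, pb = len(a), {}, {}, {}
    for x, y in zip(a, b):
        joint[(x, y)] = joint.get((x, y), 0) + 1
        pa[x] = pa.get(x, 0) + 1
        pb[y] = pb.get(y, 0) + 1
    return sum((c / n) * np.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in joint.items())

rng = np.random.default_rng(0)
X = rng.integers(0, 16, size=10_000)           # toy input identities (4 bits)
Y = rng.integers(0, 2, size=10_000)            # toy binary labels
acts = rng.normal(size=10_000)                 # stand-in hidden-unit activations
T = np.digitize(acts, np.linspace(-2, 2, 8))   # coarse-grain into bins

print("I(X;T) =", mutual_information(X, T))    # compression of the input
print("I(T;Y) =", mutual_information(T, Y))    # information about the label
```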

At ~44min someone asks how networks "know" to construct (local) feature detectors in the first few layers. I'm not sure I followed Tishby's answer but it may be a consequence of the hierarchical structure of the data, not specific to the network or optimization.
Naftali (Tali) Tishby נפתלי תשבי

Physicist, professor of computer science and computational neuroscientist
The Ruth and Stan Flinkman professor of Brain Research
Benin School of Engineering and Computer Science
Edmond and Lilly Safra Center for Brain Sciences (ELSC)
Hebrew University of Jerusalem, 96906 Israel

I work at the interfaces between computer science, physics, and biology which provide some of the most challenging problems in today’s science and technology. We focus on organizing computational principles that govern information processing in biology, at all levels. To this end, we employ and develop methods that stem from statistical physics, information theory and computational learning theory, to analyze biological data and develop biologically inspired algorithms that can account for the observed performance of biological systems. We hope to find simple yet powerful computational mechanisms that may characterize evolved and adaptive systems, from the molecular level to the whole computational brain and interacting populations.
Another Tishby talk on this subject.

Tuesday, October 03, 2017

A Gentle Introduction to Neural Networks



"A gentle introduction to the principles behind neural networks, including backpropagation. Rated G for general audiences."

This is very well done. If you have a quantitative background you can watch it at 1.5x or 2x speed, I think :-)

A bit more on the history of backpropagation and convexity: why is the error function convex, or nearly so?
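
If you want those mechanics in code, here is a minimal sketch (mine, not from the video): one hidden layer with sigmoid activations trained on XOR by explicit backpropagation -- a forward pass, a chain-rule backward pass, and plain gradient descent. Note that this tiny problem's error surface is not actually convex, which is what makes the convexity question above interesting.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)    # 2 inputs -> 4 hidden units
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)    # 4 hidden -> 1 output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss, chain rule layer by layer)
    dy = (y - Y) * y * (1 - y)          # gradient at output pre-activation
    dh = (dy @ W2.T) * h * (1 - h)      # propagated to hidden pre-activation
    # Gradient descent updates
    W2 -= 0.5 * h.T @ dy;  b2 -= 0.5 * dy.sum(axis=0)
    W1 -= 0.5 * X.T @ dh;  b1 -= 0.5 * dh.sum(axis=0)

print(np.round(y.ravel(), 2))  # should approach [0, 1, 1, 0]
```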
