Sunday, 27 March 2016

SNO - Neutrinos, Dark Matter and the Sun

We went to a lecture the other day (March 21, 2016) at the University of Alberta, by Dr. Art McDonald of Queen’s University. Dr. McDonald has been a major player in the SNO (Sudbury Neutrino Observatory) and SNOLAB observatories, which are neutrino detectors (and more) located 2 miles underground, near Sudbury, Ontario. He and various colleagues recently shared the Nobel Prize in Physics for their work, which showed that neutrinos can change their “flavour”, and thus have mass. Dr. McDonald was born and raised in Cape Breton, Nova Scotia, and his career has spanned many decades at various highly regarded universities, Princeton for one. He is now a professor emeritus at Queen’s in Kingston, Ontario.

The University of Alberta was an appropriate venue for such a lecture, as it has a long history of collaboration in these projects. Among other things, the U of A fabricated many key parts of the equipment. It is also currently involved in a new venture in the same underground lab, the PICO dark matter search. 


Neutrinos are sub-atomic particles, long thought to be massless, though they are now known to have a small mass. They are very abundant – billions pass through one’s fingernail (about a square centimeter) every second. But they have very little interaction with regular matter. It is said that a neutrino could pass through a lead wall one light year thick, with only a 50% chance of interacting with another particle. Basically, they have to hit an atomic nucleus or electron “head-on” to interact with regular matter. The probability of this happening is vanishingly small, since most matter consists almost entirely of empty space.

  The Big Bang theory says that they were among the first particles created, shortly after the event that started things off. They are also created in various nuclear reactions, including the fusion reactions that power the sun. Some other radioactive processes, such as beta decay, result in neutrinos, and they are released in huge quantities during supernovas. Depending on the process that created them, they can come in three “flavours”: the electron neutrino, the muon neutrino, and the tau neutrino. This becomes central to the solution of the “solar neutrino problem”, which is a key reason why Dr. McDonald and his team won the Nobel.
It should be noted that neutrino research could have practical benefits, in terms of developing working fusion reactors, which could provide power in the future.

Neutrinos, SNO and the Sun

As noted above, the nuclear reactions that create the energy we receive from the sun also produce neutrinos. These particles are needed to balance the nuclear equations, and they carry away some of the energy from the proton-proton chain (a somewhat complex series of fusion reactions). Early detection apparatus (e.g. the Homestake mine observatory) showed that there were not as many neutrinos coming out of the sun as expected – only about a third to a half.
Some explanations of the deficit involved major changes to our models of the sun’s structure and the associated solar reactions – for example, different pressures and temperatures. But various other measurements (e.g. helioseismology) supported the original models.
Another possibility was that solar neutrinos changed their flavour during the journey from sun to Earth. This theory of neutrino oscillation was originally proposed in the 1950’s. It stated that neutrinos could change their flavour (oscillate), due to certain quantum mechanical effects. This also implied that neutrinos had mass, which the standard model, in its original form, did not predict – originally, neutrinos were thought to be stable and unchangeable. The theory of neutrino oscillation was further elaborated over the next couple of decades.
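For the mathematically inclined, the usual two-flavour approximation from the standard physics literature (not from the lecture itself) writes the oscillation probability as:

```latex
% Two-flavour neutrino oscillation probability (standard approximation):
% \theta is the mixing angle, \Delta m^2 the mass-squared difference,
% L the distance travelled, and E the neutrino energy.
P(\nu_e \rightarrow \nu_\mu)
  = \sin^2(2\theta)\,
    \sin^2\!\left( \frac{1.27\, \Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}
                        {E\,[\mathrm{GeV}]} \right)
```

The key point is that the oscillating term vanishes if Δm² = 0, so observing flavour change directly implies that at least one neutrino mass is non-zero.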
The SNO observatory was designed to detect all three flavours of neutrino, and in 2001 it did detect the expected total number of neutrinos, about equally distributed among the three flavours. The first observations at Homestake could only detect electron neutrinos, and that explained the discrepancy. The conclusion, therefore, was that oscillation does occur and neutrinos have mass (though an extremely small mass, perhaps 1/500000 that of an electron). That’s the key work that earned Dr. McDonald the Nobel, along with Takaaki Kajita of Japan, who made a concurrent discovery with the Super-Kamiokande Observatory in Japan.

SNO itself

First off, let’s get this out of the way – “there’s no business like SNO business”. Well, strictly speaking there are a few underground observatories around the world, but it is a small number.

The reason for going two miles underground in a working mine is to get away from radiation that would confound the results of the experiment. Mainly, that means shielding from cosmic rays and their by-products. A very low level of radioactivity is required – so low that the type of paint used for some of the equipment became a problem. This concerned a little yellow submarine that allowed access inside the detector; it had to be changed from yellow to grey, as the yellow paint was too radioactive.
Basically, the detector was a huge (6 meter radius) acrylic sphere containing 1000 tons of heavy water, surrounded by an array of photomultiplier tubes that registered the nuclear reactions that were the signature of neutrinos. Heavy water was key to the design, as the interactions involved deuterium (hydrogen with both a proton and a neutron), which heavy water contains in far more than the normal amount. The detector used $300 million worth of the stuff, lent to the project by Atomic Energy of Canada. The mining corporation INCO provided the space in the mine.

  The detector could discriminate among the three flavours of neutrino, due to the differing details of the resulting energy levels, directions, and locations of detection. Note that these interactions didn’t occur very often – maybe about once per day. The detector’s sensitivity has been compared to “seeing a candle on the moon”.
The working area had to be kept immaculately clean, rather like the rooms in which computer chips are made. Obviously that posed some problems, two miles underground, in a working mine. Dr. McDonald noted that his mother once visited the facility, and was impressed with how clean they managed to keep it.


Since the original experiment, SNO has morphed into a larger facility, SNOLAB. This has a number of experiments running, including some that are involved in the search for dark matter.
Dark matter is what it says it is – matter that we know exists but can’t yet directly detect. We know it exists from a number of arguments:
  • Galaxy rotation is such that galaxies shouldn’t be stable over long periods of time, unless some unseen matter was providing the necessary gravitational attraction to hold the galaxy together.
  • Similarly for galaxy clusters. They ought not to remain bound as long as it appears that they are, so there must be unseen mass involved there as well.
  • Cosmological “big bang” theories imply that there is a lot more mass in the universe than we can see.
A leading candidate for dark matter is the class of theorized particles known as WIMPs (weakly interacting massive particles). Experiments at SNOLAB are looking for these particles via the recoil reactions produced by their rare interactions, which could be differentiated from “regular matter” interactions. There have also been attempts to produce these particles directly at CERN, though there have been no conclusive positive results.

The PICO Bubble Chamber experiment is another one at SNOLAB; it is looking for certain recoil reactions that would be consistent with WIMPs. The U of Alberta is involved in this project as well.

The Nobel Prize Ceremony

Dr. McDonald closed off the talk with some photos of the 2015 Nobel ceremony and some anecdotes on this event. Many of the experiment participants were able to attend. The Nobel ceremonies are very elaborate and go on for quite a few days. It sounded fun but exhausting. The Swedish royal family were “just folks”, according to the professor.
By the way, SNO and SNOLAB have had many illustrious visitors, including Stephen Hawking, who, curiously enough, has not won the Nobel. He is pictured here with Dr. McDonald, who noted that Hawking was extremely patient, brilliant, and has a wicked sense of humour.
Also of note is that the observatory has been featured in Robert J. Sawyer’s SF novels (Hominids and companion books), and the solar neutrino problem was a key plot element in Arthur C. Clarke’s “The Songs of Distant Earth”.
A note on the photos: Most are from the SNOLAB site, via Google Images.
And it is only fair that I mention one of our Dodecahedron Books SF novels, since I can’t be expected to blog entirely without self-interest :). Book one of the Witches’ Stones series (Rescue from the Planet of the Amartos) includes some references to dark matter, and a neutron star plays a pivotal role in one action scene. Neutron stars, of course, are the result of supernovas, which produce incredible numbers of neutrinos. Plus, there’s a nice neutron star on the cover, as well as a rather fetching heroine. You should buy it and read it, if only for the neutrinos. :)

Thursday, 24 March 2016

Helena Puumala's Easter story "Where the Apple Falls", free for the Easter weekend on Amazon

Helena Puumala's Easter short story "Where the Apple Falls", will be free for the Easter weekend on Amazon.  To be precise, that would be the five day period from Thursday March 24 to Monday March 28, 2016. 
This short story (approx. 6500 words) focuses on the complex and somewhat troubled relations between children, parents, and grandparents.  It also revolves around the mysterious forces of the universe, including the various notions of the divine held by the people in the story, which sometimes conflict, much as they do in the world in general.  An Easter service and a freshly planted apple tree draw the parties together, over one fateful Easter weekend.

The story is the middle story of a 3 story holiday cycle,  set in a Northern Ontario lake community, that explores some spiritual and family themes, concerning conflict, forgiveness, acceptance and love.  The cycle begins with "The Boathouse Christ" (Halloween), continues in "Where the Apple Falls" (Easter) and concludes with "A Christmas Miracle at the Lake" (Christmas).  Naturally, any of the 3 stories can be read and enjoyed on its own,  but next week, they will be bundled together into one convenient volume.

Monday, 21 March 2016

Go Match - AlphaGo versus Lee Sedol

Computer Game Research and the U of A

We went to a lecture the other day (March 14, 2016, which also happened to be Pi Day, which is a nice math-related coincidence) about the match between human Go expert Lee Sedol and the computer program AlphaGo. It was presented by Prof. Martin Mueller of the University of Alberta. The U of A has long been involved in AI (artificial intelligence) research via games. In fact, about half of the team who programmed AlphaGo had U of A backgrounds (alumni, researchers, post-docs, etc.), so it was a very appropriate venue for such a talk. That includes the AlphaGo team lead, who got his PhD at the U of A.
The U of A has also been involved in computer chess (since the 1970’s), checkers (Professor Schaeffer ‘solved’ checkers recently), and poker. The Go research goes back 30 years or more. Why have the U of A and other universities used games to do research into AI?
  • They’re fun
  • They’re well defined
  • It’s reasonably easy to measure your progress.

The Game

First off, I haven’t played the game of Go myself, so this description is necessarily sketchy and possibly wrong in some details. But, anyway:
  • Go is played on various sized boards, but at the highest level the board is a 19 by 19 grid (chess is 8 by 8). The two sides, black and white, each have a supply of stones sufficient to fill the board (181 black and 180 white).
  • This results in a possible game space (permutation space, possible moves, etc.) that is much larger than the number of particles in the observable universe.
  • The rules are simple – surround the opposition and claim territory, but the strategies are subtle and complex. A numeric score is calculated, based on territory and captures, and that determines the winner.
  • Until recently, computers were rather weak, intermediate level at best. But AlphaGo was a big leap, and now plays at the highest levels, as the match showed.
  • There are many hundreds of human professionals, who play 40+ hours per week and make their living that way, including Lee Sedol, who has played from the age of 12 until his current age, mid-30s. So, he has played thousands of games, though AlphaGo has played millions (more on that later).
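As a rough check on that game-space claim (my own back-of-envelope arithmetic, not a figure from the lecture), even a crude upper bound on board positions – each of the 361 points being empty, black, or white – already dwarfs the usual ~10^80 estimate for particles in the observable universe:

```python
# Crude upper bound on Go board positions: each of the 361 points on a
# 19x19 board is empty, black, or white. This ignores the legality rules,
# so the true count of legal positions is somewhat smaller, but still vast.
board_points = 19 * 19
positions = 3 ** board_points

# A commonly quoted rough estimate for particles in the observable universe.
particles = 10 ** 80

print(positions > particles)   # True
print(len(str(positions)))     # the upper bound has about 173 digits
```

So even a search that examined a trillion positions per second would not make a dent in the full game tree, which is why pruning the search space (discussed below) is essential.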

The Match

Note that these comments are based on the professor’s descriptions of the games, along with a bit of commentary from the Wiki entry. Professor Mueller is a high level Go player himself, so his analysis should be good.
  • Game 1
    • Lee “tested” the program with some aggressive moves, but the program responded well.
    • The program countered with some unusual and unexpected moves.
    • Each side made some critical errors, but the program won.
  • Game 2
    • Lee played it safe.
    • The computer got creative, played an unexpected game.
    • Generally considered a masterful game by AlphaGo, who won.
  • Game 3
    • Lee changed his strategy, with strong early moves.
    • Computer responded perfectly to this strategy.
    • Almost a perfect, flawless game by AlphaGo, who won.
    • Many described the program’s play as “almost scary”.
  • Game 4
    • Lee tried a new strategy.
    • He played a “great move” (unconventional), that confused the program.
    • It presented many interconnected threats, that seemed to overwhelm the computer. It didn’t seem to have the computing power to respond appropriately.
    • The program seemed to flail for about 10 moves, and dug itself into a hole.
    • Lee played it safe after that and won.
    • The human race rejoiced.
  • Game 5
    • This actually came the night after the lecture, but the news media reported that the game was extremely close, about as close as a Go game can be.
    • AlphaGo made an early “bad mistake” but clawed back for the win, in “deep overtime”.
Lee noted that he felt like the games were close, and the match could have gone either way. The fact that the computer didn’t tire, or experience emotional highs and lows, seemed to be an advantage, in his opinion. He also felt that he played better when having the opening move.

A Brief Look at the History of Man-Machine Matches

Professor Mueller gave a brief overview of the history of matches between humans and computer programs. These matches have always been an important part of computing science research, at the U of A and at many other universities.
  • Chess – has always been a big part of the research program. Humans prevailed until about 1997, when Kasparov was defeated by IBM’s Deep Blue. Since then, there have been many other matches, with the best programs tending to win, though not necessarily dominate the best human champions.
  • Backgammon – this game has an element of luck, but it also has a significant strategy element, that computer programs using neural nets have optimized, in order to play at or above the best human players. Programs are “nearly perfect” now, according to Professor Mueller.
  • Checkers – the U of A has a long involvement with checkers, via the program Chinook. This program beat or equalled the best humans, in the mid-1990s. Since then, the U of A’s Jonathan Schaeffer has used Chinook to “solve” checkers (in 2007), proving that the best a human player can do against Chinook is to play to a draw.
  • Othello – a computer program mastered this game in 1997. It is now widely acknowledged that computer programs can play significantly better than humans.
  • Poker – this is now a focus of U of A research. In 2007, the U of A Polaris program became the first program to win an important match with humans. Heads-up limit Texas hold ’em is now considered essentially solved (2015 article in Science).
  • Go – in 2009 a U of A program won against a top human on a 9 by 9 board. But, until AlphaGo, a computer needed a substantial handicap when playing against human beings. As we now know, that’s no longer the case.

The Science

Before looking at how AlphaGo makes decisions, it is worth reminding ourselves of how humans do so:
  • They consider alternatives.
  • They consider consequences of various alternatives.
  • They evaluate the options accordingly.
  • They choose the best option, under the circumstances.
AlphaGo does something like this, using Monte Carlo search strategies, deep neural networks and reinforcement learning, via simulations. My description of this process will be a combination of what was said in the talk, what I read in the paper in Nature (Mastering the game of Go with deep neural networks and tree search, Jan 28, 2016) and what I made of it all. Not being an expert in most of the technologies being used, that could result in some misunderstandings.
For the record, most of my experience is in the more traditional statistical methods, such as multiple regression analysis, logistic regression, ANOVA and the like, with the odd foray into newer methods such as decision trees. As noted above, the AlphaGo program utilizes the so-called “machine-learning” methods, primarily.
AlphaGo’s basic strategy can be broken down into:
    • This is a Monte Carlo tree search, something along the lines of a Decision Trees analysis.
    • The branches of the tree search can proliferate enormously, giving incredibly huge trees.
    • Thus, AlphaGo has to refine its search space, choosing the most promising paths early on (i.e. the shortest paths that score the most “points”).
    • Sometimes evaluation can be easy for a computer program, as it is in real life for human beings. But more often, it is “hard”, if the problem is complex. This is very familiar to us from our everyday human experience, as well.
    • So, as the search progresses, the current search path has to be evaluated. A sort of evaluation function is needed for this, one that can look at a Go board search path and estimate the probability of a win from that.
    • AlphaGo’s evaluation function is built up mostly from simulations that it has done, with sub-programs playing millions of games against each other, utilizing slightly different strategies (reinforcement learning via self-play).
    • It can then base its evaluation function on the results of all of those games. Since it has played millions of games to conclusions, it can estimate the likelihood of various board configurations, choosing paths that are likely to ultimately result in a win.
    • Utilizing the experience of all these games also helps it narrow the search space down to a much smaller, more realistic subset than a comprehensive search would require.
    • It has long been assumed that this is what humans do when playing games like Go – through extensive experience and a particular mental aptitude for the game in question, they can narrow the search space down to the paths that are most relevant.
    • It should be noted that the program has first been trained with the tens of thousands of games among the best humans, to begin the reinforcement learning. The simulations noted above come after that initial training (supervised learning).
    • Note that the paper is still somewhat vague on the details of the evaluation function. It seems clear, though, that the accuracy and predictive power of the “value network” have been enhanced enormously by the self-play learning that the millions of simulated games gave it.
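To make the tree-search idea concrete, here is a minimal Monte Carlo tree search (plain UCT with random playouts) applied to a toy counting game. This is only an illustrative sketch of the general technique, not AlphaGo’s actual algorithm, which layers the policy and value networks on top of a search like this:

```python
import math
import random

# Toy game: players alternately add 1, 2, or 3 to a running total;
# whoever brings the total to exactly 21 wins.
TARGET = 21

def legal_moves(total):
    return [m for m in (1, 2, 3) if total + m <= TARGET]

class Node:
    def __init__(self, total, player, parent=None, move=None):
        self.total = total
        self.player = player      # player to move at this node (0 or 1)
        self.parent = parent
        self.move = move          # the move that led to this node
        self.children = []
        self.wins = 0             # wins for the player who moved INTO this node
        self.visits = 0
        self.untried = legal_moves(total)

def ucb1(parent, child, c=1.4):
    # Exploitation (win rate) plus exploration bonus.
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def rollout(total, player):
    # Random playout to the end of the game; returns the winner.
    while True:
        total += random.choice(legal_moves(total))
        if total == TARGET:
            return player
        player = 1 - player

def mcts(total, player, iterations=2000):
    root = Node(total, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb1(node, ch))
        # 2. Expansion: add one untried move as a new child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.total + m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new state.
        if node.total == TARGET:
            winner = 1 - node.player   # the player who just moved has won
        else:
            winner = rollout(node.total, node.player)
        # 4. Backpropagation: credit wins back up the path.
        while node is not None:
            node.visits += 1
            if winner != node.player:  # a win for the player who moved into node
                node.wins += 1
            node = node.parent
    # Choose the most-visited move.
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(1)
print(mcts(18, 0))  # from 18 the winning move is 3 (reaching 21 immediately)
```

Even this stripped-down version shows the four phases – selection, expansion, simulation, backpropagation – that the Nature paper builds on; AlphaGo replaces the random playouts and uniform move choice with its trained networks.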
It should also be noted that the AlphaGo program benefitted from:
  • Plenty of high speed hardware, consisting of thousands of cores (CPUs and GPUs).
  • Millions of simulations.
  • Much testing and fine-tuning by human programmers and AI experts.
  • I should also note that only in a computing science lecture could a phrase like “reinforcement through self-play” be said without cracking an ironic smile.
One issue with neural net algorithms that is noted in the literature is the “black box” problem. They take input signals, send them through a series of intervening neural layers, and ultimately come up with output signals. The nets can be altered via various algorithms (back propagation, etc.), and the configurations that produce the desired effect on the output variable of interest are reinforced and selected. This can produce a very good predictive model, but it is still a black box: you know which nets worked, but you don’t necessarily know why. This is in contrast to a more traditional prediction method like regression, whereby you know which variables had the greatest effect in your predictive model, and can therefore understand something of the real-world processes behind it.
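To illustrate that contrast with a made-up toy example (mine, not anything from the talk): with ordinary least squares the fitted coefficients are directly readable, so you can see at a glance which inputs drive the prediction – something the weights buried inside a trained neural net don’t give you:

```python
import numpy as np

# Hypothetical data: the outcome depends strongly on the first input,
# weakly on the second, and not at all on the third.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Ordinary least squares fit: the coefficients ARE the model, so its
# behaviour can be read off and reasoned about directly.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))  # approximately [3.0, 0.5, 0.0]
```

A neural net trained on the same data might predict just as well, but nothing in its weight matrices would tell you, as plainly as those three numbers do, that the first input is what matters.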
For what it’s worth, my overall take on the match is that AlphaGo’s main advantage was that it had the benefit of playing millions of games, thereby creating a much richer evaluation function than other programs, or any human, can manage. For a human to build up a similar database would take multiple hundreds of lifetimes.
Nonetheless, it is interesting that Lee Sedol managed to confuse and beat AlphaGo in one game, and essentially play to a draw in another, even though he had played perhaps ten thousand games, at most (a game a day for twenty years would be about 6500 games). So, there must still be an “X-factor” going on in the human neural net, that hasn’t yet been discovered. Some of us like to think that will continue to be the case, indefinitely :).

The Future

It is hoped that the scientific advances in producing programs like AlphaGo can be used for more general and pressing real-world problems. Basically, any problem that can be modelled in similar way as the Go problem could yield to these methods. That would include any complex problem that involved an “easy” problem statement, a solution that can be kick-started by expert human knowledge, and then fine-tuned via millions of iterations of self-play simulations.
Since AlphaGo is a product of a Google-owned company, applications like self-driving cars come to mind. Others mentioned were general image processing, speech recognition, medical imaging diagnostics, and drug research. It should be borne in mind, though, that many real-world problems don’t have these features, so the “AlphaGo method” is not a general-purpose one.
One issue that Professor Mueller brought up, was the movement of AI research from the university sector to the corporate sector. This can be a concern, as the corporate sector has much more lavish funding, at least for areas that have a high profit potential. Will academic research be left in the dust? It’s hard to say - the corporate world still needs the trained personnel that universities provide, as well as the ability to research problems that have no obvious profit potential. So, academic research will probably remain vital, but it might mean that the collaboration between universities and corporations will grow, with all of the attendant problems and opportunities that would entail.

Here's XKCD, with a comment on computer-human matchups.
 Obligatory Star Trek reference: Computers will never beat us at Fizbin.

Tuesday, 15 March 2016

Golden Tree Bailing on Postmedia

The word on the street this week is that Golden Tree Asset Management, the hedge fund that owns a big stake in Postmedia, wants to sell its share.  This doesn’t seem like a good sign for Postmedia, the National Post, or the affiliated papers.  Selling the shares probably indicates that Golden Tree is trying to monetize what it can, before bankruptcy.  Shareholders come behind bondholders and other debtors in a bankruptcy proceeding, so it’s better for them to extract some value now, rather than get caught holding the empty bag, come Chapter 11 (or whatever the Canadian legal equivalent is).

Print media is in trouble all over.  For example, Britain’s Independent is now independent of its paper edition, having fallen back to a web-only format in the past month or so.  But Postmedia is a special case.  It has huge debts (half a billion dollars plus), and it is doubtful whether restructuring will ever be sufficient to generate the sort of cash flow needed to keep it afloat for long.

There is a certain irony there.  The National Post has made a living (such as it was, as there were very few profitable years) scolding governments and the general public for their profligate ways, in terms of debt.  Meanwhile, management buyouts, takeovers and related shenanigans resulted in the company being saddled with a huge and impossible debt.

 So what will the result be?  I hope that the local papers can be saved (or re-booted) by local people.  Otherwise, a lot of big cities will be without a daily paper if and when Postmedia folds.  That would be a shame.  There is still a lot of good in the local newspaper concept.

I will continue to get the local Postmedia paper (the Edmonton Journal) until it goes away, even though the political cartoonist’s anti-Notley and anti-Trudeau ways can grate on me.  If a new Journal rises from the ashes, I will probably give it a try.  I suppose that we will find out, one way or another, pretty soon.