Brian Micklethwait's Blog

In which I continue to seek part time employment as the ruler of the world.

Category archive: Science

Thursday May 19 2016

Another French picture, but this time taken in Paris, by my friend Antoine Clarke (to whom thanks):

image

That would be La Defense, unless I am much mistaken, that being Paris’s new Big Thing district.

I cropped that photo slightly, to moderate that leaning-inwards effect you get when you point a camera upwards at tall buildings.

image

The email that brought the above snap to my desk, earlier this month, was entitled “warmer than when you were here last”.  When I last visited Paris, it was indeed very, very cold, so cold that water features became ice features (see the first picture there).

Today, Antoine sent me another photo, also suffering somewhat from leaning-inwards syndrome, and also cropped by me, more than somewhat.  See right.

Mostly what I think about Antoine’s most recent picture is: What an amazing crane!  So very tall, and so very thin.  It’s amazing it even stays up, let alone manages to accomplish anything.  I don’t remember cranes like that existing a generation ago, but maybe that’s merely because no towers that high were being built in London.  Not that Antoine’s crane is in London.  It is somewhere in America, but where, I do not know.

I just did a bit of googling for books about cranes, and if my googling is anything to go by, books about construction cranes and their history are a lot thinner on the ground than are construction cranes.  When you consider how many tons of books have been written about the buildings that construction cranes construct, it is surprising that so little is written about the mighty machines without which such construction would be impossible.

It reminds me of the analogous profusion of books on the history of science, and the comparative neglect of the history of scientific instruments.

As I think I have written before, one major defect of my blog-posting software is that I do not get an accurate picture of how the final blog posting will look, and in this case, whether there is enough verbiage on the left hand side of this tall thin picture of a tall thin crane, to prevent the picture of the tall thin crane impinging upon the posting below.  Hence this somewhat verbose and superfluous paragraph, which may not even have been necessary, but I can’t now tell.

Thursday April 07 2016

I am in the habit of denouncing the notion that science is a precondition for technology (and therefore needs to be paid for by the government).  The tendency is for technological gadgetry to lead science, and often to correct science, by defying it and proving with its success that the relevant science needs to be redone.

But there is another even more direct way in which technology leads science.  Here is yet another excerpt from Steven Johnson’s The Invention of Air (pp. 73-77).  Click on the illustration, which I found here and which is the illustration in the book at that point in the text, to get it properly visible:

The study of air itself had only begun to blossom as a science in the past century, with Robert Boyle’s work on the compression and expansion of air in the late 1600s, and Black’s more recent work on carbon dioxide. Before Boyle and Black, there was little reason to think there was anything to investigate: the world was filled with stuff – people, animals, planets, sprigs of mint – and then there was the nothingness between all the stuff. Why would you study nothingness when there was such a vast supply of stuff to explain? There wasn’t a problem in the nothingness that needed explaining.  A cycle of negative reinforcement arose: the lack of a clear problem kept the questions at bay, and the lack of questions left the problems as invisible as the air itself. As Priestley once wrote of Newton, “[he] had very little knowledge of air, so he had few doubts concerning it.”

So the question is: Where did the doubts come from? Why did the problem of air become visible at that specific point in time?  Why were Priestley, Boyle, and Black able to see the question clearly enough to begin trying to answer it?  There were 800 million human beings on the planet in 1770, every single one of them utterly dependent on air.  Why Priestley, Boyle, and Black over everyone else?

One way to answer that question is through the lens of technological history. They were able to explore the problem because they had new tools.  The air pumps designed by Otto von Guericke and Boyle (the latter in collaboration with his assistant, Robert Hooke, in the mid-1600s) were as essential to Priestley’s lab in Leeds as the electrical machines had been to his Warrington investigations. It was almost impossible to do experiments without being able to move air around in a controlled manner, just as it was impossible to explore electricity without a reliable means of generating it.

In a way, the air pump had enabled the entire field of pneumatic chemistry in the seventeenth century by showing, indirectly, that there was something to study in the first place. If air was simply the empty space between things, what was there to investigate? But the air pump allowed you to remove all the air from a confined space, and thus create a vacuum, which behaved markedly differently from common air, even though air and absence of air were visually indistinguishable. Bells wouldn’t ring in a vacuum, and candles were extinguished. Von Guericke discovered that a metal sphere composed of two parts would seal tightly shut if you evacuated the air between them. Thus the air pump not only helped justify the study of air itself, but also enabled one of the great spectacles of early Enlightenment science.

The following engraving shows the legendary demonstration of the Magdeburg Sphere, which von Guericke presented before Ferdinand III to much amazement: two eight-horse teams attempt – and, spectacularly, fail – to separate the two hemispheres that have been sealed together by the force of a vacuum.

image

When we think of technological advances powering scientific discovery, the image that conventionally comes to mind is a specifically visual one: tools that expand the range of our vision, that let us literally see the object of study with new clarity, or peer into new levels of the very distant, the very small. Think of the impact that the telescope had on early physics, or the microscope on bacteriology. But new ways of seeing are not always crucial to discovery. The air pump didn’t allow you to see the vacuum, because of course there was nothing to see; but it did allow you to see it indirectly in the force that held the Magdeburg Sphere together despite all that horsepower. Priestley was two centuries too early to see the molecules bouncing off one another in his beer glasses. But he had another, equally important, technological breakthrough at his disposal: he could measure those molecules, or at least the gas they collectively formed. He had thermometers that could register changes in temperature (plus, crucially, a standard unit for describing those changes). And he had scales for measuring changes in weight that were a thousand times more accurate than the scales da Vinci built three centuries earlier.

This is a standard pattern in the history of science: when tools for measuring increase their precision by orders of magnitude, new paradigms often emerge, because the newfound accuracy reveals anomalies that had gone undetected. One of the crucial benefits of increasing the accuracy of scales is that it suddenly became possible to measure things that had almost no weight. Black’s discovery of fixed air, and its perplexing mixture with common air, would have been impossible without the state-of-the-art scales he employed in his experiments. The whole inquiry had begun when Black heated a quantity of “magnesia alba,” and discovered that it lost a minuscule amount of weight in the process - a difference that would have been imperceptible using older scales. The shift in weight suggested that something was escaping from the magnesia into the air. By then running comparable experiments, heating a wide array of substances, Black was able to accurately determine the weight of carbon dioxide, and consequently prove the existence of the gas. It weighs, therefore it is.
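
The arithmetic behind “it weighs, therefore it is” is easy enough to redo.  If you simplify magnesia alba to plain magnesium carbonate (my simplification, not Johnson’s - the real stuff is a basic carbonate), heating drives off carbon dioxide, MgCO3 → MgO + CO2, and the missing weight is the weight of the escaped gas.  A back-of-envelope sketch in Python:

    # Molar masses in g/mol, rounded: Mg 24.3, C 12.0, O 16.0.
    MGCO3 = 24.3 + 12.0 + 3 * 16.0   # 84.3 g/mol of magnesium carbonate
    CO2 = 12.0 + 2 * 16.0            # 44.0 g/mol of carbon dioxide

    sample = 1.000                   # grams of carbonate heated
    loss = sample * CO2 / MGCO3      # grams driven off as gas
    print(f"mass lost as CO2: {loss:.3f} g")   # about 0.522 g

Half the sample, near enough, vanishes into the air: a difference a good scale registers at once, and a crude one never sees.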

Wednesday April 06 2016

I am continuing to read, with huge pleasure, Steven Johnson’s book about Joseph Priestley, The Invention of Air.  Here’s another good bit (pp. 58-61):

With the university system languishing amid archaic traditions, and corporate R&D labs still on the distant horizon, the public space of the coffeehouse served as the central hub of innovation in British society. How much of the Enlightenment do we owe to coffee? Most of the epic developments in England between 1650 and 1800 that still warrant a mention in the history textbooks have a coffeehouse lurking at some crucial juncture in their story.  The restoration of Charles II, Newton’s theory of gravity, the South Sea Bubble – they all came about, in part, because England had developed a taste for coffee, and a fondness for the kind of informal networking and shoptalk that the coffeehouse enabled.  Lloyd’s of London was once just Edward Lloyd’s coffeehouse, until the shipowners and merchants started clustering there, and collectively invented the modern insurance company.  You can’t underestimate the impact that the Club of Honest Whigs had on Priestley’s subsequent streak, precisely because he was able to plug in to an existing network of relationships and collaborations that the coffeehouse environment facilitated.  Not just because there were learned men of science sitting around the table – more formal institutions like the Royal Society supplied comparable gatherings – but also because the coffeehouse culture was cross-disciplinary by nature, the conversations freely roaming from electricity, to the abuses of Parliament, to the fate of dissenting churches.

The rise of coffeehouse culture influenced more than just the information networks of the Enlightenment; it also transformed the neurochemical networks in the brains of all those newfound coffee-drinkers.  Coffee is a stimulant that has been clinically proven to improve cognitive function - particularly for memory-related tasks - during the first cup or two. Increase the amount of “smart” drugs flowing through individual brains, and the collective intelligence of the culture will become smarter, if enough people get hooked.  Create enough caffeine-abusers in your society and you’ll be statistically more likely to launch an Age of Reason. That may itself sound like the self-justifying fantasy of a longtime coffee-drinker, but to connect coffee plausibly to the Age of Enlightenment you have to consider the context of recreational drug abuse in seventeenth-century Europe.  Coffee-drinkers are not necessarily smarter, in the long run, than those who abstain from caffeine. (Even if they are smarter for that first cup.) But when coffee originally arrived as a mass phenomenon in the mid-1600s, it was not seducing a culture of perfect sobriety.  It was replacing alcohol as the daytime drug of choice. The historian Tom Standage writes in his ingenious A History of the World in Six Glasses:

The impact of the introduction of coffee into Europe during the seventeenth century was particularly noticeable since the most common beverages of the time, even at breakfast, were weak “small beer” and wine .... Those who drank coffee instead of alcohol began the day alert and stimulated, rather than relaxed and mildly inebriated, and the quality and quantity of their work improved .... Western Europe began to emerge from an alcoholic haze that had lasted for centuries.

Emerging from that centuries-long bender, armed with a belief in the scientific method and the conviction, inherited from Newtonian physics, that simple laws could be unearthed beneath complex behavior, the networked, caffeinated minds of the eighteenth century found themselves in a universe that was ripe for discovery. The everyday world was teeming with mysterious phenomena – animals, plants, rocks, weather – that had never before been probed with the conceptual tools of the scientific method.  This sense of terra incognita also helps explain why Priestley could be so innovative in so many different disciplines, and why Enlightenment culture in general spawned so many distinct paradigm shifts.  Amateur dabblers could make transformative scientific discoveries because the history of each field was an embarrassing lineage of conjecture and superstition.  Every discipline was suddenly new again.

Wednesday March 16 2016

I am reading Steven Johnson’s book, The Invention of Air, which is about the life and career of Joseph Priestley.

Early on (pp. 10-12) there is a delightful bit concerning Benjamin Franklin, and his early investigations into the Gulf Stream:

In 1769, the Customs Board in Boston made a formal complaint to the British Treasury about the speed of letters arriving from England.  (Indeed, regular transatlantic correspondents had long noticed that letters posted from America to Europe tended to arrive more promptly than letters sent the other direction.) As luck would have it, the deputy postmaster general for North America was in London when the complaint arrived - and so the British authorities brought the issue to his attention, in the hope that he might have an explanation for the lag.  They were lucky in another respect: the postmaster in question happened to be Benjamin Franklin.

Franklin would ultimately turn that postal mystery into one of the great scientific breakthroughs of his career: a turning point in our visualization of the macro patterns formed by ocean currents.  Franklin was well prepared for the task.  As a twenty-year-old, traveling back from his first voyage to London in 1726, he had recorded notes in his journal about the strange prevalence of “gulph weed” in the waters of the North Atlantic.  In a letter written twenty years later he had remarked on the slower passage westward across the Atlantic, though at the time he supposed it was attributable to the rotation of the Earth.  In a 1762 letter he alluded to the way “the waters mov’d away from the North American Coast towards the coasts of Spain and Africa, whence they get again into the Power of the Trade Winds, and continue the Circulation.” He called that flow the “gulph stream.”

When the British Treasury came to him with the complaint about the unreliable mail delivery schedules, Franklin was quick to suspect that the “gulph stream” would prove to be the culprit.  He consulted with a seasoned New England mariner, Timothy Folger, and together they prepared a map of the Gulf Stream’s entire path, hoping that “such Chart and directions may be of use to our Packets in Shortning their Voyages.” The Folger/Franklin map ...

image

… was the first known chart to show the full trajectory of the Gulf Stream across the Atlantic. But the map was based on anecdotal evidence, mostly drawn from the experience of New England-based whalers.  And so in his voyage from England back to America in 1775, Franklin took detailed measurements of water temperatures along the way, and detected a wide but shallow river of warm water, often carrying those telltale weeds from tropical regions.  “I find that it is always warmer than the sea on each side of it, and that it does not sparkle in the night,” he wrote.  In 1785, at the ripe old age of seventy-nine, he sent a long paper that included his data and the Folger map to the French scientist Alphonsus le Roy.  Franklin’s paper on “sundry Maritime Observations,” as he modestly called it, delivered the first empirical proof of the Gulf Stream’s existence.

I added that map in the middle of that quote, which I found here.  (I love the internet.)

Until now, I knew nothing of this Gulf Stream story.  The reason I knew nothing of this Gulf Stream story is that I know very little about eighteenth century history of any sort.  This book by Johnson looks like it will be a pain-free way to start correcting that.

Saturday February 27 2016

Here:

Six years ago I submitted a paper for a panel, “On the Absence of Absences” that was to be part of an academic conference later that year - in August 2010. Then, and now, I had no idea what the phrase “absence of absences” meant. The description provided by the panel organizers, printed below, did not help. The summary, or abstract, of the proposed paper was pure gibberish, as you can see below. I tried, as best I could within the limits of my own vocabulary, to write something that had many big words but which made no sense whatsoever. I not only wanted to see if I could fool the panel organizers and get my paper accepted, I also wanted to pull the curtain on the absurd pretensions of some segments of academic life. To my astonishment, the two panel organizers - both American sociologists - accepted my proposal and invited me to join them at the annual international conference of the Society for Social Studies of Science to be held that year in Tokyo.

I wonder what Hemingway would have made of “On the Absence of Absences”.  (Hemingway, for those not inclined to follow links, is a programme to make your writing clearer.)

Presumably someone has also written a program which churns out this kind of drivel automatically.  Google google.

Yes:

The creators of the automatic nonsense generator, Jeremy Stribling, Dan Aguayo and Maxwell Krohn, have made the SCIgen program free to download. And scientists have been using it in their droves.

At the moment, this sort of drivel just marches on.  This is because people who oppose the drivel have to convince the drivellers to stop, which is hard.  And, being opposed to drivel, they usually have better things to do with their time.  The trick is somehow to reverse the burden of proof, to put the drivellers in the position, en masse, of having to convince the rest of us that their drivel is not drivel.  At that point, they find that they have no friends, only public contempt.  Everybody, including them, thinks that it is drivel.  And nobody thinks it worth bothering to even try to prove otherwise.
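
SCIgen, for what it is worth, is said to work from a hand-written context-free grammar: start with a sentence template, and keep replacing placeholders with randomly chosen expansions until only words are left.  A toy sketch of the idea in Python - the grammar here is my own invention, not SCIgen’s:

    import random

    # A toy context-free grammar: each key is a placeholder, each value
    # a list of possible expansions it can be replaced with.
    GRAMMAR = {
        "SENTENCE": [
            "We argue that NP must VP.",
            "Clearly, NP cannot VP without NP.",
            "The absence of NP serves only to VP.",
        ],
        "NP": [
            "the ADJ hermeneutics of absence",
            "a ADJ discourse of NP",  # recursion keeps it suitably florid
            "the ADJ subaltern paradigm",
        ],
        "VP": [
            "problematise NP",
            "interrogate the ADJ epistemology of NP",
        ],
        "ADJ": ["post-structural", "liminal", "performative", "dialectical"],
    }

    def expand(token):
        """Recursively replace placeholders until only plain words remain."""
        trailing = ""
        while token and token[-1] in ".,":  # keep punctuation out of the lookup
            trailing = token[-1] + trailing
            token = token[:-1]
        if token in GRAMMAR:
            production = random.choice(GRAMMAR[token])
            token = " ".join(expand(t) for t in production.split())
        return token + trailing

    for _ in range(3):
        print(expand("SENTENCE"))

Run it a few times and out comes endlessly varied, grammatical, meaningless prose, which is the whole trick.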

Tuesday January 05 2016

3D printing is not the replacement of factories by homes.  It is manufacturing in factories, only more so.  Making stuff is not, as of now, getting less skilled.  It is getting more skilled ...:

Most ceramic 3D printing uses complex techniques to deposit layers of the material on top of each other, and as a result have to use materials with relatively low melting points. The techniques can also only be used to create fairly simple shapes.

But a team from HRL Laboratories in Malibu, California, has developed what they call a pre-ceramic resin, which they can 3D print much like regular polymers into complex shapes. The process, known as stereolithography, fuses a powder of silicon carbide ceramics using UV light. Once the basic shape is printed, it can be heat-treated at 1,800°F to turn the pre-ceramic resin into a regular ceramic object. In fact, this is the first time silicon carbide ceramics have ever been 3D printed.

… which is very good news for the rich world economies.

Says a commenter:

So 2016 opens with YAAI3DP (Yet Another Advance In 3D Printing).  At some point all these breakthroughs are going to add up and utterly transform manufacturing.

The way he then goes on to say that it will transform manufacturing is that we may eventually get stuff made whenever and wherever we want it made.  In homes and shopping malls, in other words.  Maybe eventually.  In the meantime, cleverer stuff is getting made in the same old places, and then transported to where it is needed.

When I transport blogged, one of the constant themes I found myself noticing was how people regularly thought that transport would be done away with, but it never was.  The main notion was that people would communicate so well that they’d never want to meet face-to-face.  Now, it is being speculated that stuff will be made so cleverly that it will be makable anywhere.  Maybe so, but that isn’t now the smart way to do it, and it probably never will be.

Thursday December 10 2015

From Rob Fisher, who knows my interest in 3D printing, incoming email entitled:

Scientists 3D print ‘live’ blood vessels

Quote:

It’s no longer a rare feat to 3D print blood vessels. Printing vessels that act like the real deal, however, has been tricky… until now. Lawrence Livermore researchers have successfully 3D printed blood vessels that deliver nutrients and self-assemble like they would in a human body. The key is to print an initial structure out of cells and other organic material, and then to augment it with bio ink and other body-friendly materials. With enough time, everything joins up and behaves naturally.

Right now, the actual structures don’t bear much resemblance to what you’d find in a person - you get a “spaghetti bowl” of vessels. Scientists hope to organize these vessels the way they exist in nature, though. If that happens, you could one day see artificial tissue samples and even transplants that are about as realistic as you can get.

A while back, I worked out that 3D printing was going to be just as huge as everyone is saying, but that it was not going to get “domestic”, in the manner of black-and-white laser printers for instance, in the foreseeable future (with the possible exception of certain kinds of food preparation).  3D printing is a vast range of specialist manufacturing techniques, and it will, for that foreseeable future, be used by people who already make specialist stuff by other and clumsier means, or who would like to make particular specialist stuff for the first time, of the sort that only 3D printing can do.  See the quoted verbiage above.

This is why I receive emails from Google about failing 3D printing companies along with other emails about successful 3D printing activities, mostly by already existing companies.  3D printing is best done by people who already know a hell of a lot about something else, which they can then get 3D printed.  Like: blood vessels.

The principal economic consequence of 3D printing will be to provide an abundance of jobs for people everywhere, but especially among the workers of the rich world, who, during the last few decades, have been famously deprived of many of their jobs by the workers of the poor world.

Prediction/guess.  Because of things like 3D printing, schools in the rich world will soon become (are already becoming?) a bit more successful, back towards what they were like in the 1950s.  This is because, as in the 1950s, there will again be an economic future for everyone in the rich countries, the way there has not been for the last few decades.  For the last few decades, in the rich countries, only the geeks (in computers) and the alpha-male super-jocks (in such things as financial services (and in a tiny few cases in sports)) and posh kids (whose parents motivate them to work hard no matter what (this is a circular definition (posh kids are the ones motivated by their parents))) have had proper futures to look forward to.  (These three categories overlap.) Accordingly, they have been the only ones paying proper attention in school.  The rest have not been able to see enough point to it.

My spell of education blogging taught me, among many things, that when it comes to schools being successful, teacher quality is absolutely not the only variable.  Good teachers can get bad results, if the kids just can’t be doing with it.  Bad teachers can preside over good results, if parents and helpers-out, paid or unpaid, supply good supplementary teaching after regular school, or if the kids are highly motivated and determined to learn despite their crappy teachers.

The one exception to the rule about 3D printers not becoming meaningfully domestic is that they have a big future as educational toys, training kids to go into the bouncing-back manufacturing sector.

Thursday December 03 2015

I’ve been reading more of Matt Ridley’s The Evolution of Everything, from which a previous excerpt can be found here.  It continues to be very good.  In this bit, Ridley discusses the relationship between genetic and cultural evolution:

What sparked the human revolution in Africa?  It is an almost impossibly difficult question to answer, because of the very gradual beginning of the process: the initial trigger may have been very small. The first stirrings of different tools in parts of east Africa seem to be up to 300,000 years old, so by modern standards the change was happening with glacial slowness.  And that’s a clue.  The defining feature is not culture, for plenty of animals have culture, in the sense of traditions that are passed on by learning.  The defining feature is cumulative culture - the capacity to add innovations without losing old habits.  In this sense, the human revolution was not a revolution at all, but a very, very slow cumulative change, which steadily gathered pace, accelerating towards today’s near-singularity of incessant and multifarious innovation.

It was cultural evolution. I think the change was kicked off by the habit of exchange and specialisation, which feeds upon itself - the more you exchange, the more value there is in specialisation, and vice versa - and tends to breed innovation.  Most people prefer to think it was language that was the cause of the change.  Again, language would build upon itself: the more you can speak the more there is to say.  The problem with this theory, however, is that genetics suggests Neanderthals had already undergone the linguistic revolution hundreds of thousands of years earlier - with certain versions of genes related to languages sweeping through the species.  So if language was the trigger, why did the revolution not happen earlier, and to Neanderthals too?  Others think that some aspect of human cognition must have been different in these first ‘behaviourally modern humans’: forward planning, or conscious imitation, say.  But what caused language, or exchange, or forethought, to start when and where it did?

Almost everybody answers this question in biological terms: a mutation in some gene, altering some aspect of brain structure, gave our ancestors a new skill, which enabled them to build a culture that became cumulative.  Richard Klein, for instance, talks of a single genetic change that ‘fostered the uniquely modern ability to adapt to a remarkable range of natural and social circumstance’.  Others have spoken of alterations in the size, wiring and physiology of the human brain to make possible everything from language and tool use to science and art.  Others suggest that a small number of mutations, altering the structure or expression of developmental regulatory genes, were what triggered a cultural explosion.  The evolutionary geneticist Svante Pääbo says: ‘If there is a genetic underpinning to this cultural and technological explosion, as I’m sure there is ...’

I am not sure there is a genetic underpinning. Or rather, I think they all have it backwards, and are putting the cart before the horse.  I think it is wrong to assume that complex cognition is what makes human beings uniquely capable of cumulative cultural evolution.  Rather, it is the other way around.  Cultural evolution drove the changes in cognition that are embedded in our genes.  The changes in genes are the consequences of cultural changes.  Remember the example of the ability to digest milk in adults, which is unknown in other mammals, but common among people of European and east African origin. The genetic change was a response to the cultural change. This happened about 5,000-8,000 years ago. The geneticist Simon Fisher and I argued that the same must have been true for other features of human culture that appeared long before that.  The genetic mutations associated with facilitating our skill with language - which show evidence of ‘selective sweeps’ in the past few hundred thousand years, implying that they spread rapidly through the species - were unlikely to be the triggers that caused us to speak; but were more likely the genetic responses to the fact that we were speaking.  Only in a language-using animal would the ability to use language more fluently be an advantage.  So we will search in vain for the biological trigger of the human revolution in Africa 200,000 years ago, for all we will find is biological responses to culture.  The fortuitous adopting of a habit, through force of circumstance, by a certain tribe might have been enough to select for genes that made the members of that tribe better at speaking, exchanging, planning or innovating.  In people, genes are probably the slaves, not the masters, of culture.

Sunday November 29 2015

I have begun reading Matt Ridley’s latest book, The Evolution of Everything.  Early signs: brilliant.  I especially liked this bit (pp. 7-10), about modern ideas in the ancient world:

A ‘skyhook’ is an imaginary device for hanging an object from the sky.  The word originated in a sarcastic remark by a frustrated pilot of a reconnaissance plane in the First World War, when told to stay in the same place for an hour: ‘This machine is not fitted with skyhooks,’ he replied.  The philosopher Daniel Dennett used the skyhook as a metaphor for the argument that life shows evidence of an intelligent designer.  He contrasted skyhooks with cranes - the first impose a solution, explanation or plan on the world from on high; the second allow solutions, explanations or patterns to emerge from the ground up, as natural selection does.

The history of Western thought is dominated by skyhooks, by devices for explaining the world as the outcome of design and planning.  Plato said that society worked by imitating a designed cosmic order, a belief in which should be coercively enforced.  Aristotle said that you should look for inherent principles of intentionality and development - souls - within matter. Homer said gods decided the outcome of battles. St Paul said that you should behave morally because Jesus told you so. Mohamed said you should obey God’s word as transmitted through the Koran.  Luther said that your fate was in God’s hands.  Hobbes said that social order came from a monarch, or what he called ‘Leviathan’ - the state. Kant said morality transcended human experience.  Nietzsche said that strong leaders made for good societies.  Marx said that the state was the means of delivering economic and social progress. Again and again, we have told ourselves that there is a top-down description of the world, and a top-down prescription by which we should live.

But there is another stream of thought that has tried and usually failed to break through. Perhaps its earliest exponent was Epicurus, a Greek philosopher about whom we know very little.  From what later writers said about his writings, we know that he was born in 341 BC and thought (as far as we can tell) that the physical world, the living world, human society and the morality by which we live all emerged as spontaneous phenomena, requiring no divine intervention nor a benign monarch or nanny state to explain them.  As interpreted by his followers, Epicurus believed, following another Greek philosopher, Democritus, that the world consisted not of lots of special substances including spirits and humours, but simply of two kinds of thing: voids and atoms.  Everything, said Epicurus, is made of invisibly small and indestructible atoms, separated by voids; the atoms obey the laws of nature and every phenomenon is the result of natural causes.  This was a startlingly prescient conclusion for the fourth century BC.

Unfortunately Epicurus’s writings did not survive.  But three hundred years later, his ideas were revived and explored in a lengthy, eloquent and unfinished poem, De Rerum Natura (Of the Nature of Things), by the Roman poet Titus Lucretius Carus, who probably died in mid-stanza around 49 BC, just as dictatorship was looming in Rome.  Around this time, in Gustave Flaubert’s words, ‘when the gods had ceased to be, and Christ had not yet come, there was a unique moment in history, between Cicero and Marcus Aurelius when man stood alone’.  Exaggerated maybe, but free thinking was at least more possible then than before or after.  Lucretius was more subversive, open-minded and far-seeing than either of those politicians (Cicero admired, but disagreed with, him).  His poem rejects all magic, mysticism, superstition, religion and myth.  It sticks to an unalloyed empiricism.

As the Harvard historian Stephen Greenblatt has documented, a bald list of the propositions Lucretius advances in the unfinished 7,400 hexameters of De Rerum Natura could serve as an agenda for modernity.  He anticipated modern physics by arguing that everything is made of different combinations of a limited set of invisible particles, moving in a void. He grasped the current idea that the universe has no creator, Providence is a fantasy and there is no end or purpose to existence, only ceaseless creation and destruction, governed entirely by chance.  He foreshadowed Darwin in suggesting that nature ceaselessly experiments, and those creatures that can adapt and reproduce will thrive.  He was with modern philosophers and historians in suggesting that the universe was not created for or about human beings, that we are not special, and there was no Golden Age of tranquillity and plenty in the distant past, but only a primitive battle for survival.  He was like modern atheists in arguing that the soul dies, there is no afterlife, all organised religions are superstitious delusions and invariably cruel, and angels, demons or ghosts do not exist.  In his ethics he thought the highest goal of human life is the enhancement of pleasure and the reduction of pain.

Thanks largely to Greenblatt’s marvellous book The Swerve, I have only recently come to know Lucretius, and to appreciate the extent to which I am, and always have been without knowing it, a Lucretian/Epicurean.  Reading his poem in A.E. Stallings’s beautiful translation in my sixth decade is to be left fuming at my educators.  How could they have made me waste all those years at school plodding through the tedious platitudes and pedestrian prose of Jesus Christ or Julius Caesar, when they could have been telling me about Lucretius instead, or as well?  Even Virgil was writing partly in reaction to Lucretius, keen to re-establish respect for gods, rulers and top-down ideas in general. Lucretius’s notion of the ceaseless mutation of forms composed of indestructible substances - which the Spanish-born philosopher George Santayana called the greatest thought that mankind has ever hit upon - has been one of the persistent themes of my own writing.  It is the central idea behind not just physics and chemistry, but evolution, ecology and economics too.  Had the Christians not suppressed Lucretius, we would surely have discovered Darwinism centuries before we did.

Tuesday November 10 2015

Indeed:

image

I’ve not been out much lately, but last Friday night I got to see Perry and Adriana’s new version of indoors.  That was the best photo I took, of a drying up cloth.

Click on that to see Adriana’s trousers, of the sort that are presumably threatening all the time to get tighter.

Saturday October 31 2015

It seems that I am not the only one reminiscing about photos taken nearly a decade ago.  The Atlantic is now doing this, with the help of NASA and its Cassini orbiter, and the Cassini orbiter’s presumably now rather obsolete camera:

Saturn’s sixth-largest moon, Enceladus (504 kilometers or 313 miles across), is the subject of much scrutiny, in large part due to its spectacular active geysers and the likelihood of a subsurface ocean of liquid water. NASA’s Cassini orbiter has studied Enceladus, along with the rest of the Saturnian system, since entering orbit in 2004. Studying the composition of the ocean within is made easier by the constant eruptions of plumes from the surface, and on October 28, Cassini will be making its deepest-ever dive through the ocean spray from Enceladus - passing within a mere 30 miles of the icy surface. Collected here are some of the most powerful and revealing images of Enceladus made by Cassini over the past decade, with more to follow from this final close flyby as they arrive.

Here is a picture of Enceladus taken on June 10th 2006:

image

That is picture number 25, or rather, a horizontal slice of it.

Beyond Enceladus and Saturn’s rings, Titan, Saturn’s largest moon, is ringed by sunlight passing through its atmosphere. Enceladus passes between Titan and Cassini ...

That’s right.  Those two horizontal, ever so slightly converging white lines are the edge of the Rings of Saturn.

Picture number 10 is even more horizontalisable:

image

A pair of Saturn’s moons appear insignificant compared to the immensity of the planet in this Cassini spacecraft view. Enceladus, the larger moon, is visible as a small sphere, while tiny Epimetheus (70 miles, or 113 kilometers across) appears as a tiny black speck on the far left of the image, just below the thin line of the rings.

That one was taken on November 4th 2011.

My thanks, for the second time in as many days, to 6k for pointing me to these amazing images.

Friday October 16 2015

More Dezeenery:

“Modern buildings, exemplified by the Eiffel Tower or the Golden Gate Bridge, are incredibly light and weight-efficient by virtue of their architectures,” commented Bill Carter, manager of the Architected Materials Group at HRL.

image

“We are revolutionising lightweight materials by bringing this concept to the materials level and designing their architectures at the nano- and micro-scales,” he added.

In the new film released by Boeing earlier this month, HRL research scientist Sophia Yang describes the metal as “the world’s lightest material”, and compares its 99.9 per cent air structure to the composition of human bones – rigid on the outside, but with an open cellular composition inside that keeps them lightweight.
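
Some back-of-envelope arithmetic on that “99.9 per cent air” claim, assuming a nickel-phosphorus lattice like HRL’s earlier microlattice (my assumption - the film does not say):

    AIR_FRACTION = 0.999        # from the Boeing film
    NICKEL_DENSITY = 8900.0     # kg/m^3, roughly, for a nickel-phosphorus alloy
    AIR_DENSITY = 1.2           # kg/m^3 at sea level

    # The solid 0.1 per cent does almost all the weighing.
    lattice = (1 - AIR_FRACTION) * NICKEL_DENSITY
    print(f"lattice: {lattice:.1f} kg/m^3")   # about 8.9
    print(f"air:     {AIR_DENSITY:.1f} kg/m^3")

Call it a few times the density of the air it displaces, which is why the publicity photos show the stuff resting on a dandelion.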

All of which has obvious applications to airplanes:

Although the aerospace company hasn’t announced definite plans to use the microlattice, the film suggests that Boeing has been investigating possible applications for the material in aeroplanes, where it could be used for wall or floor panels to save weight and make aircraft more fuel efficient.

And it surely won’t stop with wall and floor panels.

These are the days of miracle and wonder.

Saturday September 05 2015

One of the many fine things about the internet – and in particular that great internet business, Amazon – is that you can now easily get hold of books that seem interesting, even if they were published a decade and a half ago.  Steven Johnson’s book, Emergence, for instance.  This was published in 2001.  I think it was some Amazon robot system that reckoned I might like it ("lots of people who bought this book you just bought also bought this one").  And I read some Amazon reviews, or whatever, and I did like it, or at least the sound of it, and I duly sent off for it.  (I paid £0.01 plus postage.) And now I’m reading it.

Chapter one of Emergence is entitled “The Myth of the Ant Queen”.  Here is the part of that chapter that describes the research then being done by Deborah Gordon, into ants:

At the heart of Gordon’s work is a mystery about how ant colonies develop, a mystery that has implications extending far beyond the parched earth of the Arizona desert to our cities, our brains, our immune systems - and increasingly, our technology.  Gordon’s work focuses on the connection between the microbehavior of individual ants and the overall behavior of the colonies themselves, and part of that research involves tracking the life cycles of individual colonies, following them year after year as they scour the desert floor for food, competing with other colonies for territory, and - once a year - mating with them.  She is a student, in other words, of a particular kind of emergent, self-organizing system.

Dig up a colony of native harvester ants and you’ll almost invariably find that the queen is missing.  To track down the colony’s matriarch, you need to examine the bottom of the hole you’ve just dug to excavate the colony: you’ll find a narrow, almost invisible passageway that leads another two feet underground, to a tiny vestibule burrowed out of the earth. There you will find the queen.  She will have been secreted there by a handful of ladies-in-waiting at the first sign of disturbance.  That passageway, in other words, is an emergency escape hatch, not unlike a fallout shelter buried deep below the West Wing.

But despite the Secret Service-like behavior, and the regal nomenclature, there’s nothing hierarchical about the way an ant colony does its thinking. “Although queen is a term that reminds us of human political systems,” Gordon explains, “the queen is not an authority figure. She lays eggs and is fed and cared for by the workers.  She does not decide which worker does what.  In a harvester ant colony, many feet of intricate tunnels and chambers and thousands of ants separate the queen, surrounded by interior workers, from the ants working outside the nest and using only the chambers near the surface.  It would be physically impossible for the queen to direct every worker’s decision about which task to perform and when.” The harvester ants that carry the queen off to her escape hatch do so not because they’ve been ordered to by their leader; they do it because the queen ant is responsible for giving birth to all the members of the colony, and so it’s in the colony’s best interest - and the colony’s gene pool - to keep the queen safe. Their genes instruct them to protect their mother, the same way their genes instruct them to forage for food. In other words, the matriarch doesn’t train her servants to protect her, evolution does.

Popular culture trades in Stalinist ant stereotypes - witness the authoritarian colony regime in the animated film Antz - but in fact, colonies are the exact opposite of command economies.  While they are capable of remarkably coordinated feats of task allocation, there are no Five-Year Plans in the ant kingdom.  The colonies that Gordon studies display some of nature’s most mesmerizing decentralized behavior: intelligence and personality and learning that emerges from the bottom up.

I’m still gazing into the latticework of plastic tubing when Gordon directs my attention to the two expansive white boards attached to the main colony space, one stacked on top of the other and connected by a ramp.  (Imagine a two-story parking garage built next to a subway stop.) A handful of ants meander across each plank, some porting crumblike objects on their back, others apparently just out for a stroll. If this is the Central Park of Gordon’s ant metropolis, I think, it must be a workday.

Gordon gestures to the near corner of the top board, four inches from the ramp to the lower level, where a pile of strangely textured dust - littered with tiny shells and husks - presses neatly against the wall.  “That’s the midden,” she says. “It’s the town garbage dump.” She points to three ants marching up the ramp, each barely visible beneath a comically oversize shell. “These ants are on midden duty: they take the trash that’s left over from the food they’ve collected - in this case, the seeds from stalk grass - and deposit it in the midden pile.”

Gordon takes two quick steps down to the other side of the table, at the far end away from the ramp. She points to what looks like another pile of dust. “And this is the cemetery.” I look again, startled.  She’s right: hundreds of ant carcasses are piled atop one another, all carefully wedged against the table’s corner.  It looks brutal, and yet also strangely methodical.

I know enough about colony behavior to nod in amazement. “So they’ve somehow collectively decided to utilize these two areas as trash heap and cemetery,” I say. No individual ant defined those areas, no central planner zoned one area for trash, the other for the dead. “It just sort of happened, right?”

Gordon smiles, and it’s clear that I’ve missed something. “It’s better than that,” she says. “Look at what actually happened here: they’ve built the cemetery at exactly the point that’s furthest away from the colony. And the midden is even more interesting: they’ve put it at precisely the point that maximizes its distance from both the colony and the cemetery. It’s like there’s a rule they’re following: put the dead ants as far away as possible, and put the midden as far away as possible without putting it near the dead ants.” I have to take a few seconds to do the geometry myself, and sure enough, the ants have got it right. I find myself laughing out loud at the thought: it’s as though they’ve solved one of those spatial math tests that appear on standardized tests, conjuring up a solution that’s perfectly tailored to their environment, a solution that might easily stump an eight-year-old human.  The question is, who’s doing the conjuring?

It’s a question with a long and august history, one that is scarcely limited to the collective behavior of ant colonies.  We know the answer now because we have developed powerful tools for thinking about - and modeling - the emergent intelligence of self-organizing systems, but that answer was not always so clear.  We know now that systems like ant colonies don’t have real leaders, that the very idea of an ant “queen” is misleading. But the desire to find pacemakers in such systems has always been powerful - in both the group behavior of the social insects, and in the collective human behavior that creates a living city.
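
That midden rule is concrete enough to compute.  Here is a little sketch - mine, not Johnson’s or Gordon’s - which reads “far from both” as maximising the distance to the nearer of the two, and grid-searches a board measured in arbitrary units:

    import itertools

    def dist(a, b):
        """Euclidean distance between two grid points."""
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    # A coarse grid stands in for the board; the colony sits at one corner.
    W, H = 20, 10
    points = list(itertools.product(range(W + 1), range(H + 1)))
    colony = (0, 0)

    # Cemetery: the point farthest from the colony.
    cemetery = max(points, key=lambda p: dist(p, colony))

    # Midden: the point whose nearer neighbour (colony or cemetery)
    # is as far away as possible - far from both, per the rule.
    midden = max(points, key=lambda p: min(dist(p, colony), dist(p, cemetery)))

    print("cemetery:", cemetery)   # the opposite corner
    print("midden:  ", midden)     # out on an edge, well away from both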

Wednesday June 10 2015

I continue to photo white vans.  The poshest white van so far is one I photoed today.  Here’s the basic photo:

image

But, this being a posh enterprise, the graphics are a bit thin and polite, and my photo doesn’t help.  So here’s a close-up of what it is:

image

And here are the services they offer.

Earlier in the day, I also photoed this white van, which also seemed rather posh:

image

Again, for the same sorts of reasons, here’s a close-up of what it is:

image

But, although “piano people” suggests people who play pianos, or at the very least tune them, all that these piano people do is move them from place to place, carefully.

There really are a lot of white vans out there.

Tuesday March 24 2015

I’ve been reading Paul Kennedy’s Engineers of Victory, which is about how WW2 was won, by us good guys.  Kennedy, like many others, identifies the Battle of the Atlantic as the allied victory which made all the other victories over Germany by the Anglo-American alliance possible.  I agree with the Amazon reviewers who say things like “good overview, not much engineering”.  But this actually suited me quite well.  At least I now know what I want to know more about the engineering of.  And thanks to Kennedy, I certainly want to know more about how centimetric radar was engineered.

Centimetric radar was even more of a breakthrough, arguably the greatest. HF-DF might have identified a U-boat’s radio emissions 20 miles from the convoy, but the corvette or plane dispatched in that direction still needed to locate a small target such as a conning tower, perhaps in the dark or in fog.  The giant radar towers erected along the coast of southeast England to alert Fighter Command of Luftwaffe attacks during the Battle of Britain could never be replicated in the mid-Atlantic, simply because the structures were far too large.  What was needed was a miniaturized version, but creating one had defied all British and American efforts for basic physical and technical reasons: there seemed to be no device that could hold the power necessary to generate the microwave pulses needed to locate objects much smaller than, say, a squadron of Junkers bombers coming across the English Channel, yet still made small enough to be put on a small escort vessel or in the nose of a long-range aircraft.  There had been early air-to-surface vessel (ASV) sets in Allied aircraft, but by 1942 the German Metox detectors provided the U-boats with early warning of them.  Another breakthrough was needed, and by late spring of 1943 that problem had been solved with the steady introduction of 10-centimeter (later 9.1-centimeter) radar into Allied reconnaissance aircraft and even humble Flower-class corvettes; equipped with this facility, they could spot a U-boat’s conning tower miles away, day or night.  In calm waters, the radar set could even pick up a periscope. From the Allies’ viewpoint, the additional beauty of it was that none of the German systems could detect centimetric radar working against them.

Where did this centimetric radar come from?  In many accounts of the war, it simply “pops up”; Liddell Hart is no worse than many others in noting, “But radar, on the new 10cm wavelength that the U-boats could not intercept, was certainly a very important factor.” Hitherto, all scientists’ efforts to create miniaturized radar with sufficient power had failed, and Doenitz’s advisors believed it was impossible, which is why German warships were limited to a primitive gunnery-direction radar, not a proper detection system.  The breakthrough came in spring 1940 at Birmingham University, in the labs of Mark Oliphant (himself a student of the great physicist Ernest Rutherford), when the junior scientists John Randall and Harry Boot, working in a modest wooden building, finally put together the cavity magnetron.

This saucer-sized object possessed an amazing capacity to detect small metal objects, such as a U-boat’s conning tower, and it needed a much smaller antenna for such detection.  Most important of all, the device’s case did not crack or melt because of the extreme energy exuded.  Later in the year important tests took place at the Telecommunications Research Establishment on the Dorset coast.  In midsummer the radar picked up an echo from a man cycling in the distance along the cliff, and in November it tracked the conning tower of a Royal Navy submarine steaming along the shore. Ironically, Oliphant’s team had found their first clue in papers published sixty years earlier by the great German physicist and engineer Heinrich Hertz, who had set out the original theory for a metal casement sturdy enough to hold a machine sending out very large energy pulses.  Randall had studied radio physics in Germany during the 1930s and had read Hertz’s articles during that time.  Back in Birmingham, he and another young scholar simply picked up the raw parts from a scrap metal dealer and assembled the device.

Almost inevitably, development of this novel gadget ran into a few problems: low budgets, inadequate research facilities, and an understandable concentration of most of Britain’s scientific efforts at finding better ways of detecting German air attacks on the home islands. But in September 1940 (at the height of the Battle of Britain, and well before the United States formally entered the war) the Tizard Mission arrived in the United States to discuss scientific cooperation.  This mission brought with it a prototype cavity magnetron, among many other devices, and handed it to the astonished Americans, who quickly recognized that this far surpassed all their own approaches to the miniature-radar problem.  Production and test improvements went into full gear, both at Bell Labs and at the newly created Radiation Laboratory (Rad Lab) at the Massachusetts Institute of Technology.  Even so, there were all sorts of delays - where could they fit the equipment and operator in a Liberator?  Where could they install the antennae? - so it was not until the crisis months of March and April 1943 that squadrons of fully equipped aircraft began to join the Allied forces in the Battle of the Atlantic.

Soon everyone was clamoring for centimetric radar - for the escorts, for the carrier aircraft, for gunnery control on the battleships.  The destruction of the German battle cruiser Scharnhorst off the North Cape on Boxing Day 1943, when the vessel was first shadowed by the centimetric radar of British cruisers and then crushed by the radar-controlled gunnery of the battleship HMS Duke of York, was an apt demonstration of the value of a machine that initially had been put together in a Birmingham shed.  By the close of the war, American industry had produced more than a million cavity magnetrons, and in his Scientists Against Time (1946) James Baxter called them “the most valuable cargo ever brought to our shores” and “the single most important item in reverse lease-lend.” As a small though nice bonus, the ships using it could pick out life rafts and lifeboats in the darkest night and foggiest day.  Many Allied and Axis sailors were to be rescued this way.
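
The “centimetric” in centimetric radar is the wavelength, and antenna size scales with wavelength, which is why 10 centimetres fitted in an aircraft’s nose and the earlier metre-scale sets struggled.  Converting to frequencies (the metric-wavelength figure for the early ASV sets is my rough assumption; the centimetric figures are from the excerpt):

    C = 3.0e8  # speed of light, m/s

    for name, wavelength_m in [
        ("early ASV (metric)", 1.7),    # assumed: the ASV Mk II sat near 1.7 m
        ("cavity magnetron", 0.10),     # the 10 cm sets
        ("later sets", 0.091),          # 9.1 cm
    ]:
        freq = C / wavelength_m        # frequency in Hz
        print(f"{name:20s} {wavelength_m * 100:6.1f} cm -> {freq / 1e9:5.2f} GHz")

An order-of-magnitude jump in frequency, which is what let a set small enough for a Liberator’s nose resolve something as small as a conning tower.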