Brian Micklethwait's Blog
In which I continue to seek part-time employment as the ruler of the world.
Category archive: Science
3D printing is not the replacement of factories by homes. It is manufacturing in factories, only more so. Making stuff is not, as of now, getting less skilled. It is getting more skilled …:
Most ceramic 3D printing uses complex techniques to deposit layers of the material on top of each other, and as a result has to use materials with relatively low melting points. The techniques can also only be used to create fairly simple shapes.
But a team from HRL Laboratories in Malibu, California, has developed what they call a pre-ceramic resin, which they can 3D print much like regular polymers into complex shapes. The process, known as stereolithography, fuses a powder of silicon carbide ceramics using UV light. Once the basic shape is printed, it can be heat-treated at 1,800°F to turn the pre-ceramic resin into a regular ceramic object. In fact, this is the first time silicon carbide ceramics have ever been 3D printed.
… which is very good news for the rich world economies.
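For those who, like me, think in Celsius, that 1,800°F is just under 1,000°C - pottery-kiln territory rather than anything exotic. The conversion, sketched in Python:

```python
# Convert the quoted firing temperature from Fahrenheit to Celsius.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

print(round(fahrenheit_to_celsius(1800)))  # 982 degrees Celsius
```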
Says a commenter:
So 2016 opens with YAAI3DP (Yet Another Advance In 3D Printing). At some point all these breakthroughs are going to add up and utterly transform manufacturing.
The way he then goes on to say that it will transform manufacturing is that we may eventually get stuff made whenever and wherever we want it made. In homes and shopping malls, in other words. Maybe eventually. In the meantime, cleverer stuff is getting made in the same old places, and then transported to where it is needed.
When I was transport blogging, one of the constant themes I found myself noticing was how people regularly thought that transport would be done away with, but it never was. The main notion was that people would communicate so well that they’d never want to meet face-to-face. Now, it is being speculated that stuff will be made so cleverly that it will be makable anywhere. Maybe so, but that isn’t now the smart way to do it, and it probably never will be.
From Rob Fisher, who knows my interest in 3D printing, incoming email entitled:
It’s no longer a rare feat to 3D print blood vessels. Printing vessels that act like the real deal, however, has been tricky… until now. Lawrence Livermore researchers have successfully 3D printed blood vessels that deliver nutrients and self-assemble like they would in a human body. The key is to print an initial structure out of cells and other organic material, and then to augment it with bio ink and other body-friendly materials. With enough time, everything joins up and behaves naturally.
Right now, the actual structures don’t bear much resemblance to what you’d find in a person - you get a “spaghetti bowl” of vessels. Scientists hope to organize these vessels the way they exist in nature, though. If that happens, you could one day see artificial tissue samples and even transplants that are about as realistic as you can get.
A while back, I worked out that 3D printing was going to be just as huge as everyone is saying, but that it was not going to get “domestic”, in the manner of, say, black-and-white laser printers, in the foreseeable future (with the possible exception of certain kinds of food preparation). 3D printing is a vast range of specialist manufacturing techniques, and it will, for that foreseeable future, be used by people who already make specialist stuff by other and clumsier means, or who would like to make particular specialist stuff for the first time, of the sort that only 3D printing can do. See the quoted verbiage above.
This is why I receive emails from Google about failing 3D printing companies along with other emails about successful 3D printing activities, mostly by already existing companies. 3D printing is best done by people who already know a hell of a lot about something else, which they can then get 3D printed. Like: blood vessels.
The principal economic consequence of 3D printing will be to provide an abundance of jobs for people everywhere, but especially among the workers of the rich world, who, during the last few decades, have been famously deprived of many of their jobs by the workers of the poor world.
Prediction/guess. Because of things like 3D printing, schools in the rich world will soon become (are already becoming?) a bit more successful, back towards what they were like in the 1950s. This is because, as in the 1950s, there will again be an economic future for everyone in the rich countries, the way there has not been for the last few decades. For the last few decades, in the rich countries, only the geeks (in computers) and the alpha-male super-jocks (in such things as financial services (and in a tiny few cases in sports)) and posh kids (whose parents motivate them to work hard no matter what (this is a circular definition (posh kids are the ones motivated by their parents))) have had proper futures to look forward to. (These three categories overlap.) Accordingly, they have been the only ones paying proper attention in school. The rest have not been able to see enough point to it.
My spell of education blogging taught me, among many things, that when it comes to schools being successful, teacher quality is absolutely not the only variable. Good teachers can get bad results, if the kids just can’t be doing with it. Bad teachers can preside over good results, if parents and helpers-out, paid or unpaid, supply good supplementary teaching after regular school, or if the kids are highly motivated and determined to learn despite their crappy teachers.
The one exception to the rule about 3D printers not becoming meaningfully domestic is that they have a big future as educational toys, training kids to go into the bouncing-back manufacturing sector.
I’ve been reading more of Matt Ridley’s The Evolution of Everything, from which a previous excerpt can be found here. It continues to be very good. In this bit, Ridley discusses the relationship between genetic and cultural evolution:
What sparked the human revolution in Africa? It is an almost impossibly difficult question to answer, because of the very gradual beginning of the process: the initial trigger may have been very small. The first stirrings of different tools in parts of east Africa seem to be up to 300,000 years old, so by modern standards the change was happening with glacial slowness. And that’s a clue. The defining feature is not culture, for plenty of animals have culture, in the sense of traditions that are passed on by learning. The defining feature is cumulative culture - the capacity to add innovations without losing old habits. In this sense, the human revolution was not a revolution at all, but a very, very slow cumulative change, which steadily gathered pace, accelerating towards today’s near-singularity of incessant and multifarious innovation.
It was cultural evolution. I think the change was kicked off by the habit of exchange and specialisation, which feeds upon itself - the more you exchange, the more value there is in specialisation, and vice versa - and tends to breed innovation. Most people prefer to think it was language that was the cause of the change. Again, language would build upon itself: the more you can speak the more there is to say. The problem with this theory, however, is that genetics suggests Neanderthals had already undergone the linguistic revolution hundreds of thousands of years earlier - with certain versions of genes related to languages sweeping through the species. So if language was the trigger, why did the revolution not happen earlier, and to Neanderthals too? Others think that some aspect of human cognition must have been different in these first ‘behaviourally modern humans’: forward planning, or conscious imitation, say. But what caused language, or exchange, or forethought, to start when and where it did?
Almost everybody answers this question in biological terms: a mutation in some gene, altering some aspect of brain structure, gave our ancestors a new skill, which enabled them to build a culture that became cumulative. Richard Klein, for instance, talks of a single genetic change that ‘fostered the uniquely modern ability to adapt to a remarkable range of natural and social circumstance’. Others have spoken of alterations in the size, wiring and physiology of the human brain to make possible everything from language and tool use to science and art. Others suggest that a small number of mutations, altering the structure or expression of developmental regulatory genes, were what triggered a cultural explosion. The evolutionary geneticist Svante Pääbo says: ‘If there is a genetic underpinning to this cultural and technological explosion, as I’m sure there is …’
I am not sure there is a genetic underpinning. Or rather, I think they all have it backwards, and are putting the cart before the horse. I think it is wrong to assume that complex cognition is what makes human beings uniquely capable of cumulative cultural evolution. Rather, it is the other way around. Cultural evolution drove the changes in cognition that are embedded in our genes. The changes in genes are the consequences of cultural changes. Remember the example of the ability to digest milk in adults, which is unknown in other mammals, but common among people of European and east African origin. The genetic change was a response to the cultural change. This happened about 5,000-8,000 years ago. The geneticist Simon Fisher and I argued that the same must have been true for other features of human culture that appeared long before that. The genetic mutations associated with facilitating our skill with language - which show evidence of ‘selective sweeps’ in the past few hundred thousand years, implying that they spread rapidly through the species - were unlikely to be the triggers that caused us to speak; but were more likely the genetic responses to the fact that we were speaking. Only in a language-using animal would the ability to use language more fluently be an advantage. So we will search in vain for the biological trigger of the human revolution in Africa 200,000 years ago, for all we will find is biological responses to culture. The fortuitous adopting of a habit, through force of circumstance, by a certain tribe might have been enough to select for genes that made the members of that tribe better at speaking, exchanging, planning or innovating. In people, genes are probably the slaves, not the masters, of culture.
I have begun reading Matt Ridley’s latest book, The Evolution of Everything. Early signs: brilliant. I especially liked this bit (pp. 7-10), about modern ideas in the ancient world:
A ‘skyhook’ is an imaginary device for hanging an object from the sky. The word originated in a sarcastic remark by a frustrated pilot of a reconnaissance plane in the First World War, when told to stay in the same place for an hour: ‘This machine is not fitted with skyhooks,’ he replied. The philosopher Daniel Dennett used the skyhook as a metaphor for the argument that life shows evidence of an intelligent designer. He contrasted skyhooks with cranes - the first impose a solution, explanation or plan on the world from on high; the second allow solutions, explanations or patterns to emerge from the ground up, as natural selection does.
The history of Western thought is dominated by skyhooks, by devices for explaining the world as the outcome of design and planning. Plato said that society worked by imitating a designed cosmic order, a belief in which should be coercively enforced. Aristotle said that you should look for inherent principles of intentionality and development - souls - within matter. Homer said gods decided the outcome of battles. St Paul said that you should behave morally because Jesus told you so. Mohamed said you should obey God’s word as transmitted through the Koran. Luther said that your fate was in God’s hands. Hobbes said that social order came from a monarch, or what he called ‘Leviathan’ - the state. Kant said morality transcended human experience. Nietzsche said that strong leaders made for good societies. Marx said that the state was the means of delivering economic and social progress. Again and again, we have told ourselves that there is a top-down description of the world, and a top-down prescription by which we should live.
But there is another stream of thought that has tried and usually failed to break through. Perhaps its earliest exponent was Epicurus, a Greek philosopher about whom we know very little. From what later writers said about his writings, we know that he was born in 341 BC and thought (as far as we can tell) that the physical world, the living world, human society and the morality by which we live all emerged as spontaneous phenomena, requiring no divine intervention nor a benign monarch or nanny state to explain them. As interpreted by his followers, Epicurus believed, following another Greek philosopher, Democritus, that the world consisted not of lots of special substances including spirits and humours, but simply of two kinds of thing: voids and atoms. Everything, said Epicurus, is made of invisibly small and indestructible atoms, separated by voids; the atoms obey the laws of nature and every phenomenon is the result of natural causes. This was a startlingly prescient conclusion for the fourth century BC.
Unfortunately Epicurus’s writings did not survive. But three hundred years later, his ideas were revived and explored in a lengthy, eloquent and unfinished poem, De Rerum Natura (Of the Nature of Things), by the Roman poet Titus Lucretius Carus, who probably died in mid-stanza around 49 BC, just as dictatorship was looming in Rome. Around this time, in Gustave Flaubert’s words, ‘when the gods had ceased to be, and Christ had not yet come, there was a unique moment in history, between Cicero and Marcus Aurelius when man stood alone’. Exaggerated maybe, but free thinking was at least more possible then than before or after. Lucretius was more subversive, open-minded and far-seeing than either of those politicians (Cicero admired, but disagreed with, him). His poem rejects all magic, mysticism, superstition, religion and myth. It sticks to an unalloyed empiricism.
As the Harvard historian Stephen Greenblatt has documented, a bald list of the propositions Lucretius advances in the unfinished 7,400 hexameters of De Rerum Natura could serve as an agenda for modernity. He anticipated modern physics by arguing that everything is made of different combinations of a limited set of invisible particles, moving in a void. He grasped the current idea that the universe has no creator, Providence is a fantasy and there is no end or purpose to existence, only ceaseless creation and destruction, governed entirely by chance. He foreshadowed Darwin in suggesting that nature ceaselessly experiments, and those creatures that can adapt and reproduce will thrive. He was with modern philosophers and historians in suggesting that the universe was not created for or about human beings, that we are not special, and there was no Golden Age of tranquillity and plenty in the distant past, but only a primitive battle for survival. He was like modern atheists in arguing that the soul dies, there is no afterlife, all organised religions are superstitious delusions and invariably cruel, and angels, demons or ghosts do not exist. In his ethics he thought the highest goal of human life is the enhancement of pleasure and the reduction of pain.
Thanks largely to Greenblatt’s marvellous book The Swerve, I have only recently come to know Lucretius, and to appreciate the extent to which I am, and always have been without knowing it, a Lucretian/Epicurean. Reading his poem in A.E. Stallings’s beautiful translation in my sixth decade is to be left fuming at my educators. How could they have made me waste all those years at school plodding through the tedious platitudes and pedestrian prose of Jesus Christ or Julius Caesar, when they could have been telling me about Lucretius instead, or as well? Even Virgil was writing partly in reaction to Lucretius, keen to re-establish respect for gods, rulers and top-down ideas in general. Lucretius’s notion of the ceaseless mutation of forms composed of indestructible substances - which the Spanish-born philosopher George Santayana called the greatest thought that mankind has ever hit upon - has been one of the persistent themes of my own writing. It is the central idea behind not just physics and chemistry, but evolution, ecology and economics too. Had the Christians not suppressed Lucretius, we would surely have discovered Darwinism centuries before we did.
I’ve not been out much lately, but last Friday night I got to see Perry and Adriana’s new version of indoors. That was the best photo I took, of a drying up cloth.
Click on that to see Adriana’s trousers, of the sort that are presumably threatening all the time to get tighter.
It seems that I am not the only one reminiscing about photos taken nearly a decade ago. The Atlantic is now doing this, with the help of NASA and its Cassini orbiter, and the Cassini orbiter’s presumably now rather obsolete camera:
Saturn’s sixth-largest moon, Enceladus (504 kilometers or 313 miles across), is the subject of much scrutiny, in large part due to its spectacular active geysers and the likelihood of a subsurface ocean of liquid water. NASA’s Cassini orbiter has studied Enceladus, along with the rest of the Saturnian system, since entering orbit in 2004. Studying the composition of the ocean within is made easier by the constant eruptions of plumes from the surface, and on October 28, Cassini will be making its deepest-ever dive through the ocean spray from Enceladus - passing within a mere 30 miles of the icy surface. Collected here are some of the most powerful and revealing images of Enceladus made by Cassini over the past decade, with more to follow from this final close flyby as they arrive.
Here is a picture of Enceladus taken on June 10th 2006:
That is picture number 25, or rather, a horizontal slice of it.
Beyond Enceladus and Saturn’s rings, Titan, Saturn’s largest moon, is ringed by sunlight passing through its atmosphere. Enceladus passes between Titan and Cassini ...
That’s right. Those two horizontal, ever so slightly converging white lines are the edge of the Rings of Saturn.
Picture number 10 is even more horizontalisable:
A pair of Saturn’s moons appear insignificant compared to the immensity of the planet in this Cassini spacecraft view. Enceladus, the larger moon, is visible as a small sphere, while tiny Epimetheus (70 miles, or 113 kilometers across) appears as a tiny black speck on the far left of the image, just below the thin line of the rings.
That one was taken on November 4th 2011.
“Modern buildings, exemplified by the Eiffel Tower or the Golden Gate Bridge, are incredibly light and weight-efficient by virtue of their architectures,” commented Bill Carter, manager of the Architected Materials Group at HRL.
“We are revolutionising lightweight materials by bringing this concept to the materials level and designing their architectures at the nano- and micro-scales,” he added.
In the new film released by Boeing earlier this month, HRL research scientist Sophia Yang describes the metal as “the world’s lightest material”, and compares its 99.9 per cent air structure to the composition of human bones – rigid on the outside, but with an open cellular composition inside that keeps them lightweight.
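To get a feel for what “99.9 per cent air” means for weight, here is a back-of-envelope sum. The article doesn’t name the alloy, so the nickel figure below is my assumption (the original HRL microlattice was a nickel-phosphorus material):

```python
# Effective density of a lattice that is 99.9 per cent air.
# The solid phase is assumed to be roughly bulk nickel; the article
# does not specify the alloy, so treat this as illustrative only.
solid_density = 8900   # kg/m^3, bulk nickel (assumption)
air_fraction = 0.999   # "99.9 per cent air", per the article

effective_density = (1 - air_fraction) * solid_density
print(f"{effective_density:.1f} kg/m^3")  # 8.9 kg/m^3
```

Call it 9 kilograms per cubic metre, lighter than typical styrofoam, which is what all the fuss is about.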
All of which has obvious applications to airplanes:
Although the aerospace company hasn’t announced definite plans to use the microlattice, the film suggests that Boeing has been investigating possible applications for the material in aeroplanes, where it could be used for wall or floor panels to save weight and make aircraft more fuel efficient.
And it surely won’t stop with wall and floor panels.
These are the days of miracle and wonder.
One of the many fine things about the internet – and in particular that great internet business, Amazon – is that you can now easily get hold of books that seem interesting, even if they were published a decade and a half ago. Steven Johnson’s book, Emergence, for instance. This was published in 2001. I think it was some Amazon robot system that reckoned I might like it ("lots of people who bought this book you just bought also bought this one"). And I read some Amazon reviews, or whatever, and I did like it, or at least the sound of it, and I duly sent off for it. (I paid £0.01 plus postage.) And now I’m reading it.
Chapter one of Emergence is entitled “The Myth of the Ant Queen”. Here is the part of that chapter that describes the research then being done by Deborah Gordon, into ants:
At the heart of Gordon’s work is a mystery about how ant colonies develop, a mystery that has implications extending far beyond the parched earth of the Arizona desert to our cities, our brains, our immune systems - and increasingly, our technology. Gordon’s work focuses on the connection between the microbehavior of individual ants and the overall behavior of the colonies themselves, and part of that research involves tracking the life cycles of individual colonies, following them year after year as they scour the desert floor for food, competing with other colonies for territory, and - once a year - mating with them. She is a student, in other words, of a particular kind of emergent, self-organizing system.
Dig up a colony of native harvester ants and you’ll almost invariably find that the queen is missing. To track down the colony’s matriarch, you need to examine the bottom of the hole you’ve just dug to excavate the colony: you’ll find a narrow, almost invisible passageway that leads another two feet underground, to a tiny vestibule burrowed out of the earth. There you will find the queen. She will have been secreted there by a handful of ladies-in-waiting at the first sign of disturbance. That passageway, in other words, is an emergency escape hatch, not unlike a fallout shelter buried deep below the West Wing.
But despite the Secret Service-like behavior, and the regal nomenclature, there’s nothing hierarchical about the way an ant colony does its thinking. “Although queen is a term that reminds us of human political systems,” Gordon explains, “the queen is not an authority figure. She lays eggs and is fed and cared for by the workers. She does not decide which worker does what. In a harvester ant colony, many feet of intricate tunnels and chambers and thousands of ants separate the queen, surrounded by interior workers, from the ants working outside the nest and using only the chambers near the surface. It would be physically impossible for the queen to direct every worker’s decision about which task to perform and when.” The harvester ants that carry the queen off to her escape hatch do so not because they’ve been ordered to by their leader; they do it because the queen ant is responsible for giving birth to all the members of the colony, and so it’s in the colony’s best interest - and the colony’s gene pool - to keep the queen safe. Their genes instruct them to protect their mother, the same way their genes instruct them to forage for food. In other words, the matriarch doesn’t train her servants to protect her, evolution does.
Popular culture trades in Stalinist ant stereotypes - witness the authoritarian colony regime in the animated film Antz - but in fact, colonies are the exact opposite of command economies. While they are capable of remarkably coordinated feats of task allocation, there are no Five-Year Plans in the ant kingdom. The colonies that Gordon studies display some of nature’s most mesmerizing decentralized behavior: intelligence and personality and learning that emerges from the bottom up.
I’m still gazing into the latticework of plastic tubing when Gordon directs my attention to the two expansive white boards attached to the main colony space, one stacked on top of the other and connected by a ramp. (Imagine a two-story parking garage built next to a subway stop.) A handful of ants meander across each plank, some porting crumblike objects on their back, others apparently just out for a stroll. If this is the Central Park of Gordon’s ant metropolis, I think, it must be a workday.
Gordon gestures to the near corner of the top board, four inches from the ramp to the lower level, where a pile of strangely textured dust - littered with tiny shells and husks - presses neatly against the wall. “That’s the midden,” she says. “It’s the town garbage dump.” She points to three ants marching up the ramp, each barely visible beneath a comically oversize shell. “These ants are on midden duty: they take the trash that’s left over from the food they’ve collected - in this case, the seeds from stalk grass - and deposit it in the midden pile.”
Gordon takes two quick steps down to the other side of the table, at the far end away from the ramp. She points to what looks like another pile of dust. “And this is the cemetery.” I look again, startled. She’s right: hundreds of ant carcasses are piled atop one another, all carefully wedged against the table’s corner. It looks brutal, and yet also strangely methodical.
I know enough about colony behavior to nod in amazement. “So they’ve somehow collectively decided to utilize these two areas as trash heap and cemetery,” I say. No individual ant defined those areas, no central planner zoned one area for trash, the other for the dead. “It just sort of happened, right?”
Gordon smiles, and it’s clear that I’ve missed something. “It’s better than that,” she says. “Look at what actually happened here: they’ve built the cemetery at exactly the point that’s furthest away from the colony. And the midden is even more interesting: they’ve put it at precisely the point that maximizes its distance from both the colony and the cemetery. It’s like there’s a rule they’re following: put the dead ants as far away as possible, and put the midden as far away as possible without putting it near the dead ants.” I have to take a few seconds to do the geometry myself, and sure enough, the ants have got it right. I find myself laughing out loud at the thought: it’s as though they’ve solved one of those spatial math problems that appear on standardized tests, conjuring up a solution that’s perfectly tailored to their environment, a solution that might easily stump an eight-year-old human. The question is, who’s doing the conjuring?
It’s a question with a long and august history, one that is scarcely limited to the collective behavior of ant colonies. We know the answer now because we have developed powerful tools for thinking about - and modeling - the emergent intelligence of self-organizing systems, but that answer was not always so clear. We know now that systems like ant colonies don’t have real leaders, that the very idea of an ant “queen” is misleading. But the desire to find pacemakers in such systems has always been powerful - in both the group behavior of the social insects, and in the collective human behavior that creates a living city.
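Reading that, I had to try the geometry for myself, in code rather than in my head. Here is a minimal brute-force sketch of the two rules as Johnson states them; the toy grid, the colony placed in one corner, and the “maximise the smaller of the two distances” objective are my assumptions for illustration, not anything from Gordon’s lab:

```python
# Brute-force the two placement rules: cemetery as far as possible from
# the colony, then midden as far as possible from both. The grid, the
# corner colony and the min-distance objective are all assumptions.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

grid = [(x / 20, y / 20) for x in range(21) for y in range(21)]
colony = (0.0, 0.0)  # colony entrance in one corner

# Rule 1: put the dead ants as far away as possible.
cemetery = max(grid, key=lambda p: dist(p, colony))

# Rule 2: put the midden as far as possible from colony AND cemetery.
midden = max(grid, key=lambda p: min(dist(p, colony), dist(p, cemetery)))

print("cemetery:", cemetery)  # (1.0, 1.0) - the opposite corner
print("midden:  ", midden)    # (0.0, 1.0) - a third corner, clear of both
```

Sure enough, the cemetery lands in the far corner and the midden in a third corner, which matches the corner-hugging arrangement Johnson describes on Gordon’s boards.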
I continue to photo white vans. The poshest white van so far is one I photoed today. Here’s the basic photo:
But, this being a posh enterprise, the graphics are a bit thin and polite, and my photo doesn’t help. So here’s a close-up of what it is:
And here are the services they offer.
Earlier in the day, I also photoed this white van, which also seemed rather posh:
Again, for the same sorts of reasons, here’s a close-up of what it is:
But, although “piano people” suggests people who play pianos, or at the very least tune them, all that these piano people do is move them from place to place, carefully.
There really are a lot of white vans out there.
I’ve been reading Paul Kennedy’s Engineers of Victory, which is about how WW2 was won, by us good guys. Kennedy, like many others, identifies the Battle of the Atlantic as the allied victory which made all the other victories over Germany by the Anglo-American alliance possible. I agree with the Amazon reviewers who say things like “good overview, not much engineering”. But this actually suited me quite well. At least I now know what I want to know more about the engineering of. And thanks to Kennedy, I certainly want to know more about how centimetric radar was engineered.
Centimetric radar was even more of a breakthrough, arguably the greatest. HF-DF might have identified a U-boat’s radio emissions 20 miles from the convoy, but the corvette or plane dispatched in that direction still needed to locate a small target such as a conning tower, perhaps in the dark or in fog. The giant radar towers erected along the coast of southeast England to alert Fighter Command of Luftwaffe attacks during the Battle of Britain could never be replicated in the mid-Atlantic, simply because the structures were far too large. What was needed was a miniaturized version, but creating one had defied all British and American efforts for basic physical and technical reasons: there seemed to be no device that could hold the power necessary to generate the microwave pulses needed to locate objects much smaller than, say, a squadron of Junkers bombers coming across the English Channel, yet still made small enough to be put on a small escort vessel or in the nose of a long-range aircraft. There had been early air-to-surface vessel (ASV) sets in Allied aircraft, but by 1942 the German Metox detectors provided the U-boats with early warning of them. Another breakthrough was needed, and by late spring of 1943 that problem had been solved with the steady introduction of 10-centimeter (later 9.1-centimeter) radar into Allied reconnaissance aircraft and even humble Flower-class corvettes; equipped with this facility, they could spot a U-boat’s conning tower miles away, day or night. In calm waters, the radar set could even pick up a periscope. From the Allies’ viewpoint, the additional beauty of it was that none of the German systems could detect centimetric radar working against them.
Where did this centimetric radar come from? In many accounts of the war, it simply “pops up”; Liddell Hart is no worse than many others in noting, “But radar, on the new 10cm wavelength that the U-boats could not intercept, was certainly a very important factor.” Hitherto, all scientists’ efforts to create miniaturized radar with sufficient power had failed, and Doenitz’s advisors believed it was impossible, which is why German warships were limited to a primitive gunnery-direction radar, not a proper detection system. The breakthrough came in spring 1940 at Birmingham University, in the labs of Mark Oliphant (himself a student of the great physicist Ernest Rutherford), when the junior scientists John Randall and Harry Boot, working in a modest wooden building, finally put together the cavity magnetron.
This saucer-sized object possessed an amazing capacity to detect small metal objects, such as a U-boat’s conning tower, and it needed a much smaller antenna for such detection. Most important of all, the device’s case did not crack or melt because of the extreme energy exuded. Later in the year important tests took place at the Telecommunications Research Establishment on the Dorset coast. In midsummer the radar picked up an echo from a man cycling in the distance along the cliff, and in November it tracked the conning tower of a Royal Navy submarine steaming along the shore. Ironically, Oliphant’s team had found their first clue in papers published sixty years earlier by the great German physicist and engineer Heinrich Hertz, who had set out the original theory for a metal casement sturdy enough to hold a machine sending out very large energy pulses. Randall had studied radio physics in Germany during the 1930s and had read Hertz’s articles during that time. Back in Birmingham, he and another young scholar simply picked up the raw parts from a scrap metal dealer and assembled the device.
Almost inevitably, development of this novel gadget ran into a few problems: low budgets, inadequate research facilities, and an understandable concentration of most of Britain’s scientific efforts at finding better ways of detecting German air attacks on the home islands. But in September 1940 (at the height of the Battle of Britain, and well before the United States formally entered the war) the Tizard Mission arrived in the United States to discuss scientific cooperation. This mission brought with it a prototype cavity magnetron, among many other devices, and handed it to the astonished Americans, who quickly recognized that this far surpassed all their own approaches to the miniature-radar problem. Production and test improvements went into full gear, both at Bell Labs and at the newly created Radiation Laboratory (Rad Lab) at the Massachusetts Institute of Technology. Even so, there were all sorts of delays - where could they fit the equipment and operator in a Liberator? Where could they install the antennae? - so it was not until the crisis months of March and April 1943 that squadrons of fully equipped aircraft began to join the Allied forces in the Battle of the Atlantic.
Soon everyone was clamoring for centimetric radar - for the escorts, for the carrier aircraft, for gunnery control on the battleships. The destruction of the German battle cruiser Scharnhorst off the North Cape on Boxing Day 1943, when the vessel was first shadowed by the centimetric radar of British cruisers and then crushed by the radar-controlled gunnery of the battleship HMS Duke of York, was an apt demonstration of the value of a machine that initially had been put together in a Birmingham shed. By the close of the war, American industry had produced more than a million cavity magnetrons, and in his Scientists Against Time (1946) James Baxter called them “the most valuable cargo ever brought to our shores” and “the single most important item in reverse lease-lend.” As a small though nice bonus, the ships using it could pick out life rafts and lifeboats in the darkest night and foggiest day. Many Allied and Axis sailors were to be rescued this way.
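A footnote on the word “centimetric”: wavelength converts to frequency via f = c/λ, so these sets were working at around 3 GHz, and the shorter the wavelength, the smaller the antenna and the smaller the object it can resolve - hence conning towers and, in calm water, periscopes. The arithmetic:

```python
# Wavelength to frequency (f = c / wavelength) for the radar sets above.
c = 299_792_458  # speed of light, m/s

for wavelength_cm in (10, 9.1):
    f_ghz = c / (wavelength_cm / 100) / 1e9
    print(f"{wavelength_cm} cm -> {f_ghz:.2f} GHz")  # 3.00 GHz and 3.29 GHz
```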
For all his joie de vivre, Jardine is a master drone builder and pilot whose skills have produced remarkable footage for shows like Australian Top Gear, the BBC’s Into the Volcano, and a range of music videos. His company Aerobot sells camera-outfitted drones, including custom jobs that require unique specifications like, say, the capacity to lift an IMAX camera. From a sprawling patch of coastline real estate in Queensland, Australia, Jardine builds, tests, and tweaks his creations; the rural tranquility is conducive to a process that may occasionally lead to unidentified falling objects.
Simply put, if you’ve got a drone flying challenge, Jardine is your first call.
So, Mr Jardine is now flying his flying robots over volcanoes. There are going to be lots of calls to have these things entirely banned, but they are just too useful for that to happen.
When I was a kid and making airplanes out of balsa wood and paper, powered with rubber band propellers, I remember thinking that such toys were potentially a lot more than mere toys. I’m actually surprised at how long it has taken for this to be proved right.
What were the recent developments that made useful drones like Jardine’s possible? Is it down to the power-to-weight ratio of the latest mini-engines? I tried googling “why drones work”, but all I got were arguments saying that it’s good to use drones to kill America’s enemies, not explanations of why they are now usable for such missions.
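My own guess at the arithmetic that matters, for what it is worth: a multirotor flies only if its motors’ combined thrust comfortably exceeds its weight, and modern brushless motors plus lithium batteries pushed that ratio well past one. Every figure in this sketch is an illustrative guess, not a spec from any of Jardine’s machines:

```python
# Hover check for a hypothetical camera quadcopter. All figures are
# illustrative guesses, not measurements from any real drone.
motor_count = 4
thrust_per_motor_g = 800   # grams of thrust per motor, full throttle (assumed)
all_up_weight_g = 1200     # frame, battery, camera, electronics (assumed)

ratio = motor_count * thrust_per_motor_g / all_up_weight_g
print(f"thrust-to-weight: {ratio:.2f}")  # ~2.67
```

A ratio like that leaves enough spare thrust to climb, fight the wind and lift a camera. My balsa-and-rubber-band jobs never had margins remotely like it.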
Incoming from Michael J:
Katy Perry and dancing Nazi sharks. I guess this is why you stay up for the Superbowl.
Actually I missed KP’s half time performance, but I have it on one of my various TV hard disks. I did stay up until the Superbowl ended, but I found myself only giving it about a third of my attention.
I did tune in at the end. That bizarre catch was fun. But the game ended the way it did because, at any rate in the opinion of all the commentators, the Seattle Seahawks made a horrible mistake. ("I cannot believe that call!") Truly great games are won because of something wonderful, not something horrible. In an ideal world, you want the losers thinking, not: “Oh Shit, What Were We Thinking?!?!? We’ll have nightmares about that for the rest of our lives.” You want them thinking: “Well, there was nothing we could have done about that.” And the winners can spend the rest of their lives remembering that they did it, not that the other guys did it for them.
And then this morning there was this:
6 1 6 . 6 6 | . 4 W 4 W 1 | 1 . 1wd 6 6 6
That’s the last three overs of the England Second Eleven’s batting effort against the South Africa Second Eleven. I love how you can now follow these bizarrely obscure games. Ben Stokes, who has been having a rough time of it of late, is the one hitting six of those seven sixes at the end, and finishing on 151 not out (off 86 balls), out of 378-6. Perhaps someone in the England First Eleven (recently crushed by Australia in a triangular warm-up tournament) will get hurt during the forthcoming World Cup, and Stokes will be inserted into their team. Such is the romance of sport.
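For anyone baffled by that shorthand: each symbol is one ball. Here is a minimal decoder, with the symbol meanings being my reading of the notation (a dot is a dot ball, W a wicket, “1wd” one run for a wide):

```python
# Total the runs in the ball-by-ball shorthand quoted above.
# Symbol meanings are my reading: '.' = dot ball (no run), 'W' = wicket,
# a bare digit = runs scored, '1wd' = one extra conceded for a wide.
overs = "6 1 6 . 6 6 | . 4 W 4 W 1 | 1 . 1wd 6 6 6"

total = 0
for ball in overs.replace("|", " ").split():
    if ball in (".", "W"):
        continue                 # no runs off a dot ball or a wicket
    elif ball.endswith("wd"):
        total += int(ball[:-2])  # wide: extras to the batting side
    else:
        total += int(ball)       # runs off the bat

print(total)  # 54 runs from the last three overs
```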
Finally, here is a piece by cricket boffin Ed Smith, about how having fun is very important. Because of fun, Alexander Fleming discovered penicillin, etc. But the real reason for fun is that having fun is fun. It’s articles like this that cause insane parents to send their children to Fun Classes.
I shouldn’t mock. It’s a good piece. And fun is what this blog here is mostly about.
Another Bit from a Book, and once again I accompany it with a warning that this Bit could vanish at any moment, for the reasons described in this earlier posting.
This particular Bit is from The Rational Optimist by Matt Ridley (pp. 255-258):
Much as I love science for its own sake, I find it hard to argue that discovery necessarily precedes invention and that most new practical applications flow from the minting of esoteric insights by natural philosophers. Francis Bacon was the first to make the case that inventors are applying the work of discoverers, and that science is the father of invention. As the scientist Terence Kealey has observed, modern politicians are in thrall to Bacon. They believe that the recipe for making new ideas is easy: pour public money into science, which is a public good, because nobody will pay for the generation of ideas if the taxpayer does not, and watch new technologies emerge from the downstream end of the pipe. Trouble is, there are two false premises here: first, science is much more like the daughter than the mother of technology; and second, it does not follow that only the taxpayer will pay for ideas in science.
It used to be popular to argue that the European scientific revolution of the seventeenth century unleashed the rational curiosity of the educated classes, whose theories were then applied in the form of new technologies, which in turn allowed standards of living to rise. China, on this theory, somehow lacked this leap to scientific curiosity and philosophical discipline, so it failed to build on its technological lead. But history shows that this is back-to-front. Few of the inventions that made the industrial revolution owed anything to scientific theory.
It is, of course, true that England had a scientific revolution in the late 1600s, personified in people like Harvey, Hooke and Halley, not to mention Boyle, Petty and Newton, but their influence on what happened in England’s manufacturing industry in the following century was negligible. Newton had more influence on Voltaire than he did on James Hargreaves. The industry that was transformed first and most, cotton spinning and weaving, was of little interest to scientists and vice versa. The jennies, gins, frames, mules and looms that revolutionised the working of cotton were invented by tinkering businessmen, not thinking boffins: by ‘hard heads and clever fingers’. It has been said that nothing in their designs would have puzzled Archimedes.
Likewise, of the four men who made the biggest advances in the steam engine - Thomas Newcomen, James Watt, Richard Trevithick and George Stephenson - three were utterly ignorant of scientific theories, and historians disagree about whether the fourth, Watt, derived any influence from theory at all. It was they who made possible the theories of the vacuum and the laws of thermodynamics, not vice versa. Denis Papin, their French-born forerunner, was a scientist, but he got his insights from building an engine rather than the other way round. Heroic efforts by eighteenth-century scientists to prove that Newcomen got his chief insights from Papin’s theories proved wholly unsuccessful.
Throughout the industrial revolution, scientists were the beneficiaries of new technology, much more than they were the benefactors. Even at the famous Lunar Society, where the industrial entrepreneur Josiah Wedgwood liked to rub shoulders with natural philosophers like Erasmus Darwin and Joseph Priestley, he got his best idea - the ‘rose-turning’ lathe - from a fellow factory owner, Matthew Boulton. And although Benjamin Franklin’s fertile mind generated many inventions based on principles, from lightning rods to bifocal spectacles, none led to the founding of industries.
So top-down science played little part in the early years of the industrial revolution. In any case, English scientific virtuosity dries up at the key moment. Can you name a single great English scientific discovery of the first half of the eighteenth century? It was an especially barren time for natural philosophers, even in Britain. No, the industrial revolution was not sparked by some deus ex machina of scientific inspiration. Later science did contribute to the gathering pace of invention and the line between discovery and invention became increasingly blurred as the nineteenth century wore on. Thus only when the principles of electrical transmission were understood could the telegraph be perfected; once coal miners understood the succession of geological strata, they knew better where to sink new mines; once benzene’s ring structure was known, manufacturers could design dyes rather than serendipitously stumble on them. And so on. But even most of this was, in Joel Mokyr’s words, ‘a semi-directed, groping, bumbling process of trial and error by clever, dexterous professionals with a vague but gradually clearer notion of the processes at work’. It is a stretch to call most of this science, however. It is what happens today in the garages and cafes of Silicon Valley, but not in the labs of Stanford University.
The twentieth century, too, is replete with technologies that owe just as little to philosophy and to universities as the cotton industry did: flight, solid-state electronics, software. To which scientist would you give credit for the mobile telephone or the search engine or the blog? In a lecture on serendipity in 2007, the Cambridge physicist Sir Richard Friend, citing the example of high-temperature superconductivity - which was stumbled upon in the 1980s and explained afterwards - admitted that even today scientists’ job is really to come along and explain the empirical findings of technological tinkerers after they have discovered something.
The inescapable fact is that most technological change comes from attempts to improve existing technology. It happens on the shop floor among apprentices and mechanicals, or in the workplace among the users of computer programs, and only rarely as a result of the application and transfer of knowledge from the ivory towers of the intelligentsia. This is not to condemn science as useless. The seventeenth-century discoveries of gravity and the circulation of the blood were splendid additions to the sum of human knowledge. But they did less to raise standards of living than the cotton gin and the steam engine. And even the later stages of the industrial revolution are replete with examples of technologies that were developed in remarkable ignorance of why they worked. This was especially true in the biological world. Aspirin was curing headaches for more than a century before anybody had the faintest idea of how. Penicillin’s ability to kill bacteria was finally understood around the time bacteria learnt to defeat it. Lime juice was preventing scurvy centuries before the discovery of vitamin C. Food was being preserved by canning long before anybody had any germ theory to explain why it helped.
This article confirms not one but two of my medical prejudices, which is double nice. Experts have their uses, one of which is to tell you that you have been right all along about something they’ve only just discovered.
The article is about artificial sweeteners, and this is how it ends:
What does this all mean?
1. Our gut bacteria matters a lot. Some guts can withstand artificial sugars well and others can’t. It stands to reason that, as we learn more about the uniqueness of our own microbiome, those of us who want to lose weight would be well served by diets that are tailored to the way our body and its biomic mini-me processes sugar.
2. Artificial sweeteners are pervasive and some people still can lose weight and enhance their health while consuming them. But since we now know that, on balance, they seem to be more bad than good, moderating how much we consume might be smart, too.
3. The study suggests that if people replace artificial sugars with real sugars or cut it out, their biomes could change in a way that contributes to the restoration of normal glucose tolerance over time, all other things being equal.
So, artificial sweeteners have a tendency to be very bad for you. That’s prejudice of mine number one. But, they may not be bad for you because, and this is prejudice of mine number two, people vary, physically. There is not just the one way of being healthy. There are a minimum of several, and what is harmless or even beneficial for you and to those like you may be very bad for other sorts of people.
The basic reason I came to think that artificial sweeteners might be bad for me was, to begin with, pure rationalisation of the fact that I have always thought that they taste disgusting, compared to sugar. “Diet” stuff, as a general rule, tasted, to me, horrible compared to regular stuff. In particular, Diet Coke tasted like that pink liquid they make you gargle with at the dentist. I started out believing that Diet Coke is bad for you because I wanted it to be, and I wanted the Regular Coke that I have always chosen when coking up to be less bad. But the more I thought about that early frisson of (literally) distaste, the more I came to believe that my at first merely wishful thinking actually did make some sense. Sugar really is somewhat more natural than most sweeteners, or so I assume, and we are more likely to be creatures that can handle sugar, even if not in the quantities that life now offers.
Plus, about five years ago, my niece told me that aspartame (which she said is an evil chemical used to make evil non-sugar) is evil. Rubbish, says Big Aspartame. But I reckon, for some people, it is evil.
While rootling around in the www like it was about 2003, I found this piece, dating from 2009, which was all about this apparently pretty but otherwise unremarkable abstract picture:
In case you don’t already know what is going on here, the big story here is that the blue bits and the green bits are the same colour. What colour your eyes see something as depends on the other colours in the immediate vicinity.
The writer linked to above found this graphic here, which you can too if you do a bit of scrolling down.
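If you would rather check the claim than squint at the picture, a minimal sketch with the Pillow library does it. The filename and the two coordinate pairs are placeholders: pick one pixel from an apparently blue region and one from an apparently green region in your own copy of the image:

```python
# Sample one pixel from each apparently different-coloured region and
# compare them. 'illusion.png' and both coordinates are placeholders.
from PIL import Image

img = Image.open("illusion.png").convert("RGB")
blue_looking = img.getpixel((50, 60))     # a "blue" pixel (placeholder)
green_looking = img.getpixel((200, 180))  # a "green" pixel (placeholder)

print(blue_looking, green_looking)
print("same colour:", blue_looking == green_looking)
```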
If you saw this around 2009, or something similar around 2003, then apologies for the repetition. That early period of blogging, just after 2000, will always seem to me like a fleeting golden age, when everything of this sort was being discovered and passed on for the very first time. Because we could. Before, we couldn’t. Now, we could. But now (as in now), most of this sort of trivia has been in circulation for a decade, and it lacks the impact it once had. We bloggers must find new things to say, to cover for the fact that blogging itself is no longer new. This is not a bad thing.