Brian Micklethwait's Blog
In which I continue to seek part-time employment as the ruler of the world.
Category archive: Science
Earlier, in 2014, I posted another bit from a Matt Ridley book, that time from The Rational Optimist. I entitled that posting Matt Ridley on how technology leads science and how that means that the state need not fund science.
Here is another Matt Ridley book bit on this same subject of how technology leads science, this time from The Evolution of Everything (pp. 135-137):
Technology comes from technology far more often than from science. And science comes from technology too. Of course, science may from time to time return the favour to technology. Biotechnology would not have been possible without the science of molecular biology, for example. But the Baconian model with its one-way flow from science to technology, from philosophy to practice, is nonsense. There’s a much stronger flow the other way: new technologies give academics things to study.
An example: in recent years it has become fashionable to argue that the hydraulic fracturing technology that made the shale-gas revolution possible originated in government-sponsored research, and was handed on a plate to industry. A report by California’s Breakthrough Institute noted that microseismic imaging was developed by the federal Sandia National Laboratory, and ‘proved absolutely essential for drillers to navigate and site their boreholes’, which led Nick Steinsberger, an engineer at Mitchell Energy, to develop the technique called ‘slickwater fracking’.
To find out if this was true, I spoke to one of hydraulic fracturing’s principal pioneers, Chris Wright, whose company Pinnacle Technologies reinvented fracking in the late 1990s in a way that unlocked the vast gas resources in the Barnett shale, in and around Fort Worth, Texas. Utilised by George Mitchell, who was pursuing a long and determined obsession with getting the gas to flow out of the Barnett shale to which he had rights, Pinnacle’s recipe - slick water rather than thick gel, under just the right pressure and with sand to prop open the fractures through multi-stage fracturing - proved revolutionary. It was seeing a presentation by Wright that persuaded Mitchell’s Steinsberger to try slickwater fracking. But where did Pinnacle get the idea? Wright had hired Norm Wapinski from Sandia, a federal laboratory. But who had funded Wapinski to work on the project at Sandia? The Gas Research Institute, an entirely privately funded gas-industry research coalition, whose money came from a voluntary levy on interstate gas pipelines. So the only federal involvement was to provide a space in which to work. As Wright comments: ‘If I had not hired Norm from Sandia there would have been no government involvement.’ This was just the start. Fracking still took many years and huge sums of money to bring to fruition as a workable technology. Most of that was done by industry. Government laboratories beat a path to Wright’s door once he had begun to crack the problem, offering their services and their public money to his efforts to improve fracking still further, and to study just how fractures propagate in rocks a mile beneath the surface. They climbed on the bandwagon, and got some science to do as a result of the technology developed in industry - as they should. But government was not the wellspring.
As Adam Smith, looking around the factories of eighteenth-century Scotland, reported in The Wealth of Nations: ‘a great part of the machines made use of in manufactures ... were originally the inventions of common workmen’, and many improvements had been made ‘by the ingenuity of the makers of the machines’. Smith dismissed universities even as a source of advances in philosophy. I am sorry to say this to my friends in academic ivory towers, whose work I greatly value, but if you think your cogitations are the source of most practical innovation, you are badly mistaken.
The internet is fighting back against … cats!
Cats are colonizers: this is what they do. They have colonized the internet just as they have colonized so many other habitats, always with the help of humans. This is the lesson of Cat Wars: The Devastating Consequences of a Cuddly Killer, a new book by conservation scientist Peter P. Marra and travel writer Chris Santella. From remote islands in the Pacific to the marshes of Galveston Bay, Cat Wars traces the various ways in which felines have infiltrated new landscapes, inevitably sowing death and devastation wherever they go.
Perhaps the most famous case of genocide-by-cat is that of the remote Stephens Island in New Zealand. Before the end of the 19th century, it was home to a unique species: the Stephens Island wren. One of only a few species of flightless songbirds, the wren ran low to the ground, looking more like a mouse than a bird. After a lighthouse was built on the island in 1894, a small human settlement was established; and with humans, invariably, come pets. At some point a pregnant cat, brought over from the mainland, escaped and roamed wild. The island’s wrens, unused to facing such a skillful predator, were no match for the feral cats that spread throughout the island. Within a year, the Stephens Island wren was extinct. It would take another 30 years to eradicate the feral cats.
This is not an isolated incident. Cats have contributed to species decline and habitat reduction in dozens of other cases. Because they’re so cute and beloved, we have little conception of — and little incentive to find out — how much damage cats are doing to our environment. When researcher Scott Loss tallied up the number of animals killed by North American housecats in a single year, the results were absolutely staggering: between 6.3 and 22.3 billion mammals, between 1.3 and 4 billion birds, between 95 and 299 million amphibians, and between 258 and 822 million reptiles.
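Those ranges are wide, so for a sense of scale, here is a quick midpoint sum of Loss’s figures. The arithmetic is my own illustration, not from the book:

```python
# Midpoint estimates of annual kills by North American housecats,
# from the ranges quoted above (all figures in millions of animals).
ranges_millions = {
    "mammals": (6_300, 22_300),
    "birds": (1_300, 4_000),
    "amphibians": (95, 299),
    "reptiles": (258, 822),
}

midpoints = {k: (lo + hi) / 2 for k, (lo, hi) in ranges_millions.items()}
total = sum(midpoints.values())

for animal, mid in midpoints.items():
    print(f"{animal}: ~{mid:,.0f} million per year")
print(f"total at the midpoints: ~{total / 1000:.1f} billion per year")
```

Even taking the middle of each range, that is something like seventeen or eighteen billion animals a year.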
Most books that get multiple reviews on Amazon get around four stars out of five, on average, because most of the reviews are from admirers and there are just a few from detractors. This book gets a star average of one and a bit.
Another French picture, but this time taken in Paris, by my friend Antoine Clarke (to whom thanks):
That would be La Defense, unless I am much mistaken, that being Paris’s new Big Thing district.
I cropped that photo slightly, to moderate that leaning-inwards effect you get when you point a camera upwards at tall buildings.
The email that brought the above snap to my desk, earlier this month, was entitled “warmer than when you were here last”. When I last visited Paris, it was indeed very, very cold, so cold that water features became ice features (see the first picture there).
Today, Antoine sent me another photo, also suffering somewhat from leaning-inwards syndrome, and also cropped by me, more than somewhat. See right.
Mostly what I think about Antoine’s most recent picture is: What an amazing crane! So very tall, and so very thin. It’s amazing it even stays up, let alone manages to accomplish anything. I don’t remember cranes like that existing a generation ago, but maybe that’s merely because no towers that high were being built in London. Not that Antoine’s crane is in London. It is somewhere in America, but where, I do not know.
I just did a bit of googling for books about cranes, and if my googling is anything to go by, books about construction cranes and their history are a lot thinner on the ground than are construction cranes. When you consider how many tons of books have been written about the buildings that construction cranes construct, it is surprising that so little is written about the mighty machines without which such construction would be impossible.
It reminds me of the analogous profusion of books on the history of science, and the comparative neglect of the history of scientific instruments.
As I think I have written before, one major defect of my blog-posting software is that I do not get an accurate picture of how the final blog posting will look, and in this case, whether there is enough verbiage on the left hand side of this tall thin picture of a tall thin crane, to prevent the picture of the tall thin crane impinging upon the posting below. Hence this somewhat verbose and superfluous paragraph, which may not even have been necessary, but I can’t now tell.
I am in the habit of denouncing the notion that science is a precondition for technology (and therefore needs to be paid for by the government). The tendency is for technological gadgetry to lead science, and often to correct science, by defying it and proving with its success that the relevant science needs to be redone.
But there is another even more direct way in which technology leads science. Here is yet another excerpt from Steven Johnson’s The Invention of Air (pp. 73-77). Click on the illustration, which I found here and which is the illustration in the book at that point in the text, to get it properly visible:
The study of air itself had only begun to blossom as a science in the past century, with Robert Boyle’s work on the compression and expansion of air in the late 1600s, and Black’s more recent work on carbon dioxide. Before Boyle and Black, there was little reason to think there was anything to investigate: the world was filled with stuff – people, animals, planets, sprigs of mint – and then there was the nothingness between all the stuff. Why would you study nothingness when there was such a vast supply of stuff to explain? There wasn’t a problem in the nothingness that needed explaining. A cycle of negative reinforcement arose: the lack of a clear problem kept the questions at bay, and the lack of questions left the problems as invisible as the air itself. As Priestley once wrote of Newton, “[he] had very little knowledge of air, so he had few doubts concerning it.”
So the question is: Where did the doubts come from? Why did the problem of air become visible at that specific point in time? Why were Priestley, Boyle, and Black able to see the question clearly enough to begin trying to answer it? There were 800 million human beings on the planet in 1770, every single one of them utterly dependent on air. Why Priestley, Boyle, and Black over everyone else?
One way to answer that question is through the lens of technological history. They were able to explore the problem because they had new tools. The air pumps designed by Otto von Guericke and Boyle (the latter in collaboration with his assistant, Robert Hooke, in the mid-1600s) were as essential to Priestley’s lab in Leeds as the electrical machines had been to his Warrington investigations. It was almost impossible to do experiments without being able to move air around in a controlled manner, just as it was impossible to explore electricity without a reliable means of generating it.
In a way, the air pump had enabled the entire field of pneumatic chemistry in the seventeenth century by showing, indirectly, that there was something to study in the first place. If air was simply the empty space between things, what was there to investigate? But the air pump allowed you to remove all the air from a confined space, and thus create a vacuum, which behaved markedly differently from common air, even though air and absence of air were visually indistinguishable. Bells wouldn’t ring in a vacuum, and candles were extinguished. Von Guericke discovered that a metal sphere composed of two parts would seal tightly shut if you evacuated the air between them. Thus the air pump not only helped justify the study of air itself, but also enabled one of the great spectacles of early Enlightenment science.
The following engraving shows the legendary demonstration of the Magdeburg Sphere, which von Guericke presented before Ferdinand III to much amazement: two eight-horse teams attempt – and, spectacularly, fail – to separate the two hemispheres that have been sealed together by the force of a vacuum.
When we think of technological advances powering scientific discovery, the image that conventionally comes to mind is a specifically visual one: tools that expand the range of our vision, that let us literally see the object of study with new clarity, or peer into new levels of the very distant, the very small. Think of the impact that the telescope had on early physics, or the microscope on bacteriology. But new ways of seeing are not always crucial to discovery. The air pump didn’t allow you to see the vacuum, because of course there was nothing to see; but it did allow you to see it indirectly in the force that held the Magdeburg Sphere together despite all that horsepower. Priestley was two centuries too early to see the molecules bouncing off one another in his beer glasses. But he had another, equally important, technological breakthrough at his disposal: he could measure those molecules, or at least the gas they collectively formed. He had thermometers that could register changes in temperature (plus, crucially, a standard unit for describing those changes). And he had scales for measuring changes in weight that were a thousand times more accurate than the scales da Vinci built three centuries earlier.
This is a standard pattern in the history of science: when tools for measuring increase their precision by orders of magnitude, new paradigms often emerge, because the newfound accuracy reveals anomalies that had gone undetected. One of the crucial benefits of increasing the accuracy of scales is that it suddenly became possible to measure things that had almost no weight. Black’s discovery of fixed air, and its perplexing mixture with common air, would have been impossible without the state-of-the-art scales he employed in his experiments. The whole inquiry had begun when Black heated a quantity of “magnesia alba,” and discovered that it lost a minuscule amount of weight in the process - a difference that would have been imperceptible using older scales. The shift in weight suggested that something was escaping from the magnesia into the air. By then running comparable experiments, heating a wide array of substances, Black was able to accurately determine the weight of carbon dioxide, and consequently prove the existence of the gas. It weighs, therefore it is.
With the university system languishing amid archaic traditions, and corporate R&D labs still on the distant horizon, the public space of the coffeehouse served as the central hub of innovation in British society. How much of the Enlightenment do we owe to coffee? Most of the epic developments in England between 1650 and 1800 that still warrant a mention in the history textbooks have a coffeehouse lurking at some crucial juncture in their story. The restoration of Charles II, Newton’s theory of gravity, the South Sea Bubble – they all came about, in part, because England had developed a taste for coffee, and a fondness for the kind of informal networking and shoptalk that the coffeehouse enabled. Lloyd’s of London was once just Edward Lloyd’s coffeehouse, until the shipowners and merchants started clustering there, and collectively invented the modern insurance company. You can’t underestimate the impact that the Club of Honest Whigs had on Priestley’s subsequent streak, precisely because he was able to plug in to an existing network of relationships and collaborations that the coffeehouse environment facilitated. Not just because there were learned men of science sitting around the table – more formal institutions like the Royal Society supplied comparable gatherings – but also because the coffeehouse culture was cross-disciplinary by nature, the conversations freely roaming from electricity, to the abuses of Parliament, to the fate of dissenting churches.
The rise of coffeehouse culture influenced more than just the information networks of the Enlightenment; it also transformed the neurochemical networks in the brains of all those newfound coffee-drinkers. Coffee is a stimulant that has been clinically proven to improve cognitive function - particularly for memory-related tasks - during the first cup or two. Increase the amount of “smart” drugs flowing through individual brains, and the collective intelligence of the culture will become smarter, if enough people get hooked. Create enough caffeine-abusers in your society and you’ll be statistically more likely to launch an Age of Reason. That may itself sound like the self-justifying fantasy of a longtime coffee-drinker, but to connect coffee plausibly to the Age of Enlightenment you have to consider the context of recreational drug abuse in seventeenth-century Europe. Coffee-drinkers are not necessarily smarter, in the long run, than those who abstain from caffeine. (Even if they are smarter for that first cup.) But when coffee originally arrived as a mass phenomenon in the mid-1600s, it was not seducing a culture of perfect sobriety. It was replacing alcohol as the daytime drug of choice. The historian Tom Standage writes in his ingenious A History of the World in Six Glasses:
The impact of the introduction of coffee into Europe during the seventeenth century was particularly noticeable since the most common beverages of the time, even at breakfast, were weak “small beer” and wine .... Those who drank coffee instead of alcohol began the day alert and stimulated, rather than relaxed and mildly inebriated, and the quality and quantity of their work improved .... Western Europe began to emerge from an alcoholic haze that had lasted for centuries.
Emerging from that centuries-long bender, armed with a belief in the scientific method and the conviction, inherited from Newtonian physics, that simple laws could be unearthed beneath complex behavior, the networked, caffeinated minds of the eighteenth century found themselves in a universe that was ripe for discovery. The everyday world was teeming with mysterious phenomena – animals, plants, rocks, weather – that had never before been probed with the conceptual tools of the scientific method. This sense of terra incognita also helps explain why Priestley could be so innovative in so many different disciplines, and why Enlightenment culture in general spawned so many distinct paradigm shifts. Amateur dabblers could make transformative scientific discoveries because the history of each field was an embarrassing lineage of conjecture and superstition. Every discipline was suddenly new again.
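The Magdeburg demonstration a few paragraphs back is easy to put a rough number on: the evacuated hemispheres are held together by atmospheric pressure acting over their circular cross-section, F = P·πr². A quick sketch, assuming a sphere of about half a metre in diameter (a commonly cited figure, not stated in the excerpt):

```python
import math

# Force needed to pull apart evacuated hemispheres: atmospheric
# pressure acting over the circular cross-section, F = P * pi * r^2.
P_ATM = 101_325        # standard atmospheric pressure, in pascals
diameter_m = 0.5       # assumed size of von Guericke's sphere

radius_m = diameter_m / 2
force_n = P_ATM * math.pi * radius_m ** 2

print(f"holding force: ~{force_n / 1000:.1f} kN "
      f"(~{force_n / 9.81:,.0f} kgf)")
```

That comes to roughly twenty kilonewtons, or about two tonnes of pull, which goes some way to explaining why even two eight-horse teams failed.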
I am reading Steven Johnson’s book, The Invention of Air, which is about the life and career of Joseph Priestley.
Early on (pp. 10-12) there is a delightful bit concerning Benjamin Franklin, and his early investigations into the Gulf Stream:
In 1769, the Customs Board in Boston made a formal complaint to the British Treasury about the speed of letters arriving from England. (Indeed, regular transatlantic correspondents had long noticed that letters posted from America to Europe tended to arrive more promptly than letters sent the other direction.) As luck would have it, the deputy postmaster general for North America was in London when the complaint arrived - and so the British authorities brought the issue to his attention, in the hope that he might have an explanation for the lag. They were lucky in another respect: the postmaster in question happened to be Benjamin Franklin.
Franklin would ultimately turn that postal mystery into one of the great scientific breakthroughs of his career: a turning point in our visualization of the macro patterns formed by ocean currents. Franklin was well prepared for the task. As a twenty-year-old, traveling back from his first voyage to London in 1726, he had recorded notes in his journal about the strange prevalence of “gulph weed” in the waters of the North Atlantic. In a letter written twenty years later he had remarked on the slower passage westward across the Atlantic, though at the time he supposed it was attributable to the rotation of the Earth. In a 1762 letter he alluded to the way “the waters mov’d away from the North American Coast towards the coasts of Spain and Africa, whence they get again into the Power of the Trade Winds, and continue the Circulation.” He called that flow the “gulph stream.”
When the British Treasury came to him with the complaint about the unreliable mail delivery schedules, Franklin was quick to suspect that the “gulph stream” would prove to be the culprit. He consulted with a seasoned New England mariner, Timothy Folger, and together they prepared a map of the Gulf Stream’s entire path, hoping that “such Chart and directions may be of use to our Packets in Shortning their Voyages.” The Folger/Franklin map ...
… was the first known chart to show the full trajectory of the Gulf Stream across the Atlantic. But the map was based on anecdotal evidence, mostly drawn from the experience of New England-based whalers. And so in his voyage from England back to America in 1775, Franklin took detailed measurements of water temperatures along the way, and detected a wide but shallow river of warm water, often carrying those telltale weeds from tropical regions. “I find that it is always warmer than the sea on each side of it, and that it does not sparkle in the night,” he wrote. In 1785, at the ripe old age of seventy-nine, he sent a long paper that included his data and the Folger map to the French scientist Alphonsus le Roy. Franklin’s paper on “sundry Maritime Observations,” as he modestly called it, delivered the first empirical proof of the Gulf Stream’s existence.
I added that map in the middle of that quote, which I found here. (I love the internet.)
Until now, I knew nothing of this Gulf Stream story. The reason I knew nothing of this Gulf Stream story is that I know very little about eighteenth century history of any sort. This book by Johnson looks like it will be a pain-free way to start correcting that.
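Franklin’s test - the stream “is always warmer than the sea on each side of it” - amounts to flagging readings that stand well above the rest of the transect. That idea is simple enough to sketch in a few lines. The temperature readings here are invented, and the threshold is my own choice:

```python
from statistics import median

# Invented sample of sea-surface temperatures (deg F) taken at
# regular intervals along an Atlantic crossing.
readings = [58, 59, 58, 70, 72, 71, 60, 59]

def warm_stream_indices(temps, threshold=5):
    """Return indices whose temperature stands `threshold` degrees
    above the median of the whole transect -- a crude version of
    Franklin's 'warmer than the sea on each side of it' test."""
    baseline = median(temps)
    return [i for i, t in enumerate(temps) if t - baseline > threshold]

print(warm_stream_indices(readings))  # the warm ribbon sits at indices 3..5
```

With modern data you would do this against thousands of readings, but the principle is exactly Franklin’s: the stream announces itself as a band of anomalously warm water.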
Six years ago I submitted a paper for a panel, “On the Absence of Absences”, that was to be part of an academic conference later that year - in August 2010. Then, and now, I had no idea what the phrase “absence of absences” meant. The description provided by the panel organizers, printed below, did not help. The summary, or abstract, of the proposed paper was pure gibberish, as you can see below. I tried, as best I could within the limits of my own vocabulary, to write something that had many big words but which made no sense whatsoever. I not only wanted to see if I could fool the panel organizers and get my paper accepted, I also wanted to pull back the curtain on the absurd pretensions of some segments of academic life. To my astonishment, the two panel organizers - both American sociologists - accepted my proposal and invited me to join them at the annual international conference of the Society for Social Studies of Science to be held that year in Tokyo.
I wonder what Hemingway would have made of “On the Absence of Absences”. (Hemingway, for those not inclined to follow links, is a programme to make your writing clearer.)
Presumably someone has also written a program which churns out this kind of drivel automatically. Google google.
The creators of the automatic nonsense generator, Jeremy Stribling, Dan Aguayo and Maxwell Krohn, have made the SCIgen program free to download. And scientists have been using it in their droves.
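SCIgen works by expanding a context-free grammar: start from a sentence-level symbol and keep replacing placeholders with randomly chosen alternatives until only plain words remain. A toy sketch of the idea, with a much smaller grammar that I have invented for illustration (it is not SCIgen’s actual grammar):

```python
import random

# A toy context-free grammar: each UPPERCASE nonterminal expands to
# one randomly chosen alternative, recursively, until only words remain.
GRAMMAR = {
    "SENTENCE": [["We", "VERB", "the", "ADJ", "NOUN", "of", "NOUN2", "."]],
    "VERB": [["problematize"], ["interrogate"], ["deconstruct"]],
    "ADJ": [["liminal"], ["hegemonic"], ["discursive"]],
    "NOUN": [["materiality"], ["spatiality"], ["praxis"]],
    "NOUN2": [["absence"], ["alterity"], ["the subaltern"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for part in rng.choice(GRAMMAR[symbol]):
        words.extend(expand(part, rng))
    return words

rng = random.Random(2010)
print(" ".join(expand("SENTENCE", rng)))
```

A couple of dozen rules in this style, plus a bibliography generator, is enough to produce whole conference-ready papers, which is essentially what SCIgen’s authors did.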
At the moment, this sort of drivel just marches on. This is because people who oppose the drivel have to convince the drivellers to stop, which is hard. And, being opposed to drivel, they usually have better things to do with their time. The trick is somehow to reverse the burden of proof, to put the drivellers in the position, en masse, of having to convince the rest of us that their drivel is not drivel. At that point, they find that they have no friends, only public contempt. Everybody, including them, thinks that it is drivel. And nobody thinks it worth bothering to even try to prove otherwise.
3D printing is not the replacement of factories by homes. It is manufacturing in factories only more so. Making stuff is not, as of now, getting less skilled. It is getting more skilled ...:
Most ceramic 3D printing uses complex techniques to deposit layers of the material on top of each other, and as a result has to use materials with relatively low melting points. The techniques can also only be used to create fairly simple shapes.
But a team from HRL Laboratories in Malibu, California, has developed what they call a pre-ceramic resin, which they can 3D print much like regular polymers into complex shapes. The process, known as stereolithography, fuses a powder of silicon carbide ceramics using UV light. Once the basic shape is printed, it can be heat-treated at 1,800°F to turn the pre-ceramic resin into a regular ceramic object. In fact, this is the first time silicon carbide ceramics have ever been 3D printed.
… which is very good news for the rich world economies.
Says a commenter:
So 2016 opens with YAAI3DP (Yet Another Advance In 3D Printing). At some point all these breakthroughs are going to add up and utterly transform manufacturing.
The way he then goes on to say that it will transform manufacturing is that we may eventually get stuff made whenever and wherever we want it made. In homes and shopping malls, in other words. Maybe eventually. In the meantime, cleverer stuff is getting made in the same old places, and then transported to where it is needed.
When I transport-blogged, one of the constant themes I found myself noticing was how people regularly thought that transport would be done away with, but it never was. The main notion was that people would communicate so well that they’d never want to meet face-to-face. Now, it is being speculated that stuff will be made so cleverly that it will be makable anywhere. Maybe so, but that isn’t now the smart way to do it, and it probably never will be.
From Rob Fisher, who knows my interest in 3D printing, incoming email entitled:
It’s no longer a rare feat to 3D print blood vessels. Printing vessels that act like the real deal, however, has been tricky… until now. Lawrence Livermore researchers have successfully 3D printed blood vessels that deliver nutrients and self-assemble like they would in a human body. The key is to print an initial structure out of cells and other organic material, and then to augment it with bio ink and other body-friendly materials. With enough time, everything joins up and behaves naturally.
Right now, the actual structures don’t bear much resemblance to what you’d find in a person - you get a “spaghetti bowl” of vessels. Scientists hope to organize these vessels the way they exist in nature, though. If that happens, you could one day see artificial tissue samples and even transplants that are about as realistic as you can get.
A while back, I worked out that 3D printing was going to be just as huge as everyone is saying, but that it was not going to get “domestic”, in the manner of black-and-white laser printers for instance, in the foreseeable future (with the possible exception of certain kinds of food preparation). 3D printing is a vast range of specialist manufacturing techniques, and it will, for that foreseeable future, be used by people who already make specialist stuff by other and clumsier means, or who would like to make particular specialist stuff for the first time, of the sort that only 3D printing can do. See the quoted verbiage above.
This is why I receive emails from Google about failing 3D printing companies along with other emails about successful 3D printing activities, mostly by already existing companies. 3D printing is best done by people who already know a hell of a lot about something else, which they can then get 3D printed. Like: blood vessels.
The principal economic consequence of 3D printing will be to provide an abundance of jobs for people everywhere, but especially among the workers of the rich world, who, during the last few decades, have been famously deprived of many of their jobs by the workers of the poor world.
Prediction/guess. Because of things like 3D printing, schools in the rich world will soon become (are already becoming?) a bit more successful, back towards what they were like in the 1950s. This is because, as in the 1950s, there will again be an economic future for everyone in the rich countries, the way there has not been for the last few decades. For the last few decades, in the rich countries, only the geeks (in computers) and the alpha-male super-jocks (in such things as financial services (and in a tiny few cases in sports)) and posh kids (whose parents motivate them to work hard no matter what (this is a circular definition (posh kids are the ones motivated by their parents))) have had proper futures to look forward to. (These three categories overlap.) Accordingly, they have been the only ones paying proper attention in school. The rest have not been able to see enough point to it.
My spell of education blogging taught me, among many things, that when it comes to schools being successful, teacher quality is absolutely not the only variable. Good teachers can get bad results, if the kids just can’t be doing with it. Bad teachers can preside over good results, if parents and helpers-out, paid or unpaid, supply good supplementary teaching after regular school, or if the kids are highly motivated and determined to learn despite their crappy teachers.
The one exception to the rule about 3D printers not becoming meaningfully domestic is that they have a big future as educational toys, training kids to go into the bouncing-back manufacturing sector.
I’ve been reading more of Matt Ridley’s The Evolution of Everything, from which a previous excerpt can be found here. It continues to be very good. In this bit, Ridley discusses the relationship between genetic and cultural evolution:
What sparked the human revolution in Africa? It is an almost impossibly difficult question to answer, because of the very gradual beginning of the process: the initial trigger may have been very small. The first stirrings of different tools in parts of east Africa seem to be up to 300,000 years old, so by modern standards the change was happening with glacial slowness. And that’s a clue. The defining feature is not culture, for plenty of animals have culture, in the sense of traditions that are passed on by learning. The defining feature is cumulative culture - the capacity to add innovations without losing old habits. In this sense, the human revolution was not a revolution at all, but a very, very slow cumulative change, which steadily gathered pace, accelerating towards today’s near-singularity of incessant and multifarious innovation.
It was cultural evolution. I think the change was kicked off by the habit of exchange and specialisation, which feeds upon itself - the more you exchange, the more value there is in specialisation, and vice versa - and tends to breed innovation. Most people prefer to think it was language that was the cause of the change. Again, language would build upon itself: the more you can speak the more there is to say. The problem with this theory, however, is that genetics suggests Neanderthals had already undergone the linguistic revolution hundreds of thousands of years earlier - with certain versions of genes related to languages sweeping through the species. So if language was the trigger, why did the revolution not happen earlier, and to Neanderthals too? Others think that some aspect of human cognition must have been different in these first ‘behaviourally modern humans’: forward planning, or conscious imitation, say. But what caused language, or exchange, or forethought, to start when and where it did?
Almost everybody answers this question in biological terms: a mutation in some gene, altering some aspect of brain structure, gave our ancestors a new skill, which enabled them to build a culture that became cumulative. Richard Klein, for instance, talks of a single genetic change that ‘fostered the uniquely modern ability to adapt to a remarkable range of natural and social circumstance’. Others have spoken of alterations in the size, wiring and physiology of the human brain to make possible everything from language and tool use to science and art. Others suggest that a small number of mutations, altering the structure or expression of developmental regulatory genes, were what triggered a cultural explosion. The evolutionary geneticist Svante Pääbo says: ‘If there is a genetic underpinning to this cultural and technological explosion, as I’m sure there is …’
I am not sure there is a genetic underpinning. Or rather, I think they all have it backwards, and are putting the cart before the horse. I think it is wrong to assume that complex cognition is what makes human beings uniquely capable of cumulative cultural evolution. Rather, it is the other way around. Cultural evolution drove the changes in cognition that are embedded in our genes. The changes in genes are the consequences of cultural changes. Remember the example of the ability to digest milk in adults, which is unknown in other mammals, but common among people of European and east African origin. The genetic change was a response to the cultural change. This happened about 5,000-8,000 years ago. The geneticist Simon Fisher and I argued that the same must have been true for other features of human culture that appeared long before that. The genetic mutations associated with facilitating our skill with language - which show evidence of ‘selective sweeps’ in the past few hundred thousand years, implying that they spread rapidly through the species - were unlikely to be the triggers that caused us to speak; but were more likely the genetic responses to the fact that we were speaking. Only in a language-using animal would the ability to use language more fluently be an advantage. So we will search in vain for the biological trigger of the human revolution in Africa 200,000 years ago, for all we will find is biological responses to culture. The fortuitous adopting of a habit, through force of circumstance, by a certain tribe might have been enough to select for genes that made the members of that tribe better at speaking, exchanging, planning or innovating. In people, genes are probably the slaves, not the masters, of culture.
I have begun reading Matt Ridley’s latest book, The Evolution of Everything. Early signs: brilliant. I especially liked this bit (pp. 7-10), about modern ideas in the ancient world:
A ‘skyhook’ is an imaginary device for hanging an object from the sky. The word originated in a sarcastic remark by a frustrated pilot of a reconnaissance plane in the First World War, when told to stay in the same place for an hour: ‘This machine is not fitted with skyhooks,’ he replied. The philosopher Daniel Dennett used the skyhook as a metaphor for the argument that life shows evidence of an intelligent designer. He contrasted skyhooks with cranes - the first impose a solution, explanation or plan on the world from on high; the second allow solutions, explanations or patterns to emerge from the ground up, as natural selection does.
The history of Western thought is dominated by skyhooks, by devices for explaining the world as the outcome of design and planning. Plato said that society worked by imitating a designed cosmic order, a belief in which should be coercively enforced. Aristotle said that you should look for inherent principles of intentionality and development - souls - within matter. Homer said gods decided the outcome of battles. St Paul said that you should behave morally because Jesus told you so. Mohamed said you should obey God’s word as transmitted through the Koran. Luther said that your fate was in God’s hands. Hobbes said that social order came from a monarch, or what he called ‘Leviathan’ - the state. Kant said morality transcended human experience. Nietzsche said that strong leaders made for good societies. Marx said that the state was the means of delivering economic and social progress. Again and again, we have told ourselves that there is a top-down description of the world, and a top-down prescription by which we should live.
But there is another stream of thought that has tried and usually failed to break through. Perhaps its earliest exponent was Epicurus, a Greek philosopher about whom we know very little. From what later writers said about his writings, we know that he was born in 341 BC and thought (as far as we can tell) that the physical world, the living world, human society and the morality by which we live all emerged as spontaneous phenomena, requiring no divine intervention nor a benign monarch or nanny state to explain them. As interpreted by his followers, Epicurus believed, following another Greek philosopher, Democritus, that the world consisted not of lots of special substances including spirits and humours, but simply of two kinds of thing: voids and atoms. Everything, said Epicurus, is made of invisibly small and indestructible atoms, separated by voids; the atoms obey the laws of nature and every phenomenon is the result of natural causes. This was a startlingly prescient conclusion for the fourth century BC.
Unfortunately Epicurus’s writings did not survive. But three hundred years later, his ideas were revived and explored in a lengthy, eloquent and unfinished poem, De Rerum Natura (Of the Nature of Things), by the Roman poet Titus Lucretius Carus, who probably died in mid-stanza around 49 BC, just as dictatorship was looming in Rome. Around this time, in Gustave Flaubert’s words, ‘when the gods had ceased to be, and Christ had not yet come, there was a unique moment in history, between Cicero and Marcus Aurelius when man stood alone’. Exaggerated maybe, but free thinking was at least more possible then than before or after. Lucretius was more subversive, open-minded and far-seeing than either of those politicians (Cicero admired, but disagreed with, him). His poem rejects all magic, mysticism, superstition, religion and myth. It sticks to an unalloyed empiricism.
As the Harvard historian Stephen Greenblatt has documented, a bald list of the propositions Lucretius advances in the unfinished 7,400 hexameters of De Rerum Natura could serve as an agenda for modernity. He anticipated modern physics by arguing that everything is made of different combinations of a limited set of invisible particles, moving in a void. He grasped the current idea that the universe has no creator, Providence is a fantasy and there is no end or purpose to existence, only ceaseless creation and destruction, governed entirely by chance. He foreshadowed Darwin in suggesting that nature ceaselessly experiments, and those creatures that can adapt and reproduce will thrive. He was with modern philosophers and historians in suggesting that the universe was not created for or about human beings, that we are not special, and there was no Golden Age of tranquillity and plenty in the distant past, but only a primitive battle for survival. He was like modern atheists in arguing that the soul dies, there is no afterlife, all organised religions are superstitious delusions and invariably cruel, and angels, demons or ghosts do not exist. In his ethics he thought the highest goal of human life is the enhancement of pleasure and the reduction of pain.
Thanks largely to Greenblatt’s marvellous book The Swerve, I have only recently come to know Lucretius, and to appreciate the extent to which I am, and always have been without knowing it, a Lucretian/Epicurean. Reading his poem in A.E. Stallings’s beautiful translation in my sixth decade is to be left fuming at my educators. How could they have made me waste all those years at school plodding through the tedious platitudes and pedestrian prose of Jesus Christ or Julius Caesar, when they could have been telling me about Lucretius instead, or as well? Even Virgil was writing partly in reaction to Lucretius, keen to re-establish respect for gods, rulers and top-down ideas in general. Lucretius’s notion of the ceaseless mutation of forms composed of indestructible substances - which the Spanish-born philosopher George Santayana called the greatest thought that mankind has ever hit upon - has been one of the persistent themes of my own writing. It is the central idea behind not just physics and chemistry, but evolution, ecology and economics too. Had the Christians not suppressed Lucretius, we would surely have discovered Darwinism centuries before we did.
I’ve not been out much lately, but last Friday night I got to see Perry and Adriana’s new version of indoors. That was the best photo I took, of a drying up cloth.
Click on that to see Adriana’s trousers, of the sort that are presumably threatening all the time to get tighter.
It seems that I am not the only one reminiscing about photos taken nearly a decade ago. The Atlantic is now doing this, with the help of NASA and its Cassini orbiter, and the Cassini orbiter’s presumably now rather obsolete camera:
Saturn’s sixth-largest moon, Enceladus (504 kilometers or 313 miles across), is the subject of much scrutiny, in large part due to its spectacular active geysers and the likelihood of a subsurface ocean of liquid water. NASA’s Cassini orbiter has studied Enceladus, along with the rest of the Saturnian system, since entering orbit in 2004. Studying the composition of the ocean within is made easier by the constant eruptions of plumes from the surface, and on October 28, Cassini will be making its deepest-ever dive through the ocean spray from Enceladus - passing within a mere 30 miles of the icy surface. Collected here are some of the most powerful and revealing images of Enceladus made by Cassini over the past decade, with more to follow from this final close flyby as they arrive.
Here is a picture of Enceladus taken on June 10th 2006:
That is picture number 25, or rather, a horizontal slice of it.
Beyond Enceladus and Saturn’s rings, Titan, Saturn’s largest moon, is ringed by sunlight passing through its atmosphere. Enceladus passes between Titan and Cassini ...
That’s right. Those two horizontal, ever so slightly converging white lines are the edge of the Rings of Saturn.
Picture number 10 is even more horizontalisable:
A pair of Saturn’s moons appear insignificant compared to the immensity of the planet in this Cassini spacecraft view. Enceladus, the larger moon, is visible as a small sphere, while tiny Epimetheus (70 miles, or 113 kilometers across) appears as a tiny black speck on the far left of the image, just below the thin line of the rings.
That one was taken on November 4th 2011.
“Modern buildings, exemplified by the Eiffel Tower or the Golden Gate Bridge, are incredibly light and weight-efficient by virtue of their architectures,” commented Bill Carter, manager of the Architected Materials Group at HRL.
“We are revolutionising lightweight materials by bringing this concept to the materials level and designing their architectures at the nano- and micro-scales,” he added.
In the new film released by Boeing earlier this month, HRL research scientist Sophia Yang describes the metal as “the world’s lightest material”, and compares its 99.9 per cent air structure to the composition of human bones – rigid on the outside, but with an open cellular composition inside that keeps them lightweight.
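The “99.9 per cent air” claim implies the material weighs a thousandth of what the solid metal would. A back-of-the-envelope sketch (my own arithmetic, not Boeing’s or HRL’s; the bulk density figure is an assumption, since the article names no alloy):

```python
# Effective density of a microlattice that is mostly empty space:
# effective = (1 - air_fraction) * bulk_density.
# Assuming a nickel-phosphorus-like alloy at roughly 8 g/cm^3.
bulk_density = 8.0    # g/cm^3, assumed bulk metal density
air_fraction = 0.999  # from the quoted "99.9 per cent air"

effective_density = (1 - air_fraction) * bulk_density
print(round(effective_density, 3))  # 0.008 g/cm^3, i.e. about 8 mg/cm^3
```

At that sort of density the lattice sits well below styrofoam, which is what makes the wall- and floor-panel idea below plausible.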
All of which has obvious applications to airplanes:
Although the aerospace company hasn’t announced definite plans to use the microlattice, the film suggests that Boeing has been investigating possible applications for the material in aeroplanes, where it could be used for wall or floor panels to save weight and make aircraft more fuel efficient.
And it surely won’t stop with wall and floor panels.
These are the days of miracle and wonder.
One of the many fine things about the internet – and in particular that great internet business, Amazon – is that you can now easily get hold of books that seem interesting, even if they were published a decade and a half ago. Steven Johnson’s book, Emergence, for instance. This was published in 2001. I think it was some Amazon robot system that reckoned I might like it ("lots of people who bought this book you just bought also bought this one"). And I read some Amazon reviews, or whatever, and I did like it, or at least the sound of it, and I duly sent off for it. (I paid £0.01 plus postage.) And now I’m reading it.
Chapter one of Emergence is entitled “The Myth of the Ant Queen”. Here is the part of that chapter that describes the research then being done by Deborah Gordon, into ants:
At the heart of Gordon’s work is a mystery about how ant colonies develop, a mystery that has implications extending far beyond the parched earth of the Arizona desert to our cities, our brains, our immune systems - and increasingly, our technology. Gordon’s work focuses on the connection between the microbehavior of individual ants and the overall behavior of the colonies themselves, and part of that research involves tracking the life cycles of individual colonies, following them year after year as they scour the desert floor for food, competing with other colonies for territory, and - once a year - mating with them. She is a student, in other words, of a particular kind of emergent, self-organizing system.
Dig up a colony of native harvester ants and you’ll almost invariably find that the queen is missing. To track down the colony’s matriarch, you need to examine the bottom of the hole you’ve just dug to excavate the colony: you’ll find a narrow, almost invisible passageway that leads another two feet underground, to a tiny vestibule burrowed out of the earth. There you will find the queen. She will have been secreted there by a handful of ladies-in-waiting at the first sign of disturbance. That passageway, in other words, is an emergency escape hatch, not unlike a fallout shelter buried deep below the West Wing.
But despite the Secret Service-like behavior, and the regal nomenclature, there’s nothing hierarchical about the way an ant colony does its thinking. “Although queen is a term that reminds us of human political systems,” Gordon explains, “the queen is not an authority figure. She lays eggs and is fed and cared for by the workers. She does not decide which worker does what. In a harvester ant colony, many feet of intricate tunnels and chambers and thousands of ants separate the queen, surrounded by interior workers, from the ants working outside the nest and using only the chambers near the surface. It would be physically impossible for the queen to direct every worker’s decision about which task to perform and when.” The harvester ants that carry the queen off to her escape hatch do so not because they’ve been ordered to by their leader; they do it because the queen ant is responsible for giving birth to all the members of the colony, and so it’s in the colony’s best interest - and the colony’s gene pool - to keep the queen safe. Their genes instruct them to protect their mother, the same way their genes instruct them to forage for food. In other words, the matriarch doesn’t train her servants to protect her, evolution does.
Popular culture trades in Stalinist ant stereotypes - witness the authoritarian colony regime in the animated film Antz - but in fact, colonies are the exact opposite of command economies. While they are capable of remarkably coordinated feats of task allocation, there are no Five-Year Plans in the ant kingdom. The colonies that Gordon studies display some of nature’s most mesmerizing decentralized behavior: intelligence and personality and learning that emerges from the bottom up.
I’m still gazing into the latticework of plastic tubing when Gordon directs my attention to the two expansive white boards attached to the main colony space, one stacked on top of the other and connected by a ramp. (Imagine a two-story parking garage built next to a subway stop.) A handful of ants meander across each plank, some porting crumblike objects on their back, others apparently just out for a stroll. If this is the Central Park of Gordon’s ant metropolis, I think, it must be a workday.
Gordon gestures to the near corner of the top board, four inches from the ramp to the lower level, where a pile of strangely textured dust - littered with tiny shells and husks - presses neatly against the wall. “That’s the midden,” she says. “It’s the town garbage dump.” She points to three ants marching up the ramp, each barely visible beneath a comically oversize shell. “These ants are on midden duty: they take the trash that’s left over from the food they’ve collected - in this case, the seeds from stalk grass - and deposit it in the midden pile.”
Gordon takes two quick steps down to the other side of the table, at the far end away from the ramp. She points to what looks like another pile of dust. “And this is the cemetery.” I look again, startled. She’s right: hundreds of ant carcasses are piled atop one another, all carefully wedged against the table’s corner. It looks brutal, and yet also strangely methodical.
I know enough about colony behavior to nod in amazement. “So they’ve somehow collectively decided to utilize these two areas as trash heap and cemetery,” I say. No individual ant defined those areas, no central planner zoned one area for trash, the other for the dead. “It just sort of happened, right?”
Gordon smiles, and it’s clear that I’ve missed something. “It’s better than that,” she says. “Look at what actually happened here: they’ve built the cemetery at exactly the point that’s furthest away from the colony. And the midden is even more interesting: they’ve put it at precisely the point that maximizes its distance from both the colony and the cemetery. It’s like there’s a rule they’re following: put the dead ants as far away as possible, and put the midden as far away as possible without putting it near the dead ants.” I have to take a few seconds to do the geometry myself, and sure enough, the ants have got it right. I find myself laughing out loud at the thought: it’s as though they’ve solved one of those spatial math problems that appear on standardized tests, conjuring up a solution that’s perfectly tailored to their environment, a solution that might easily stump an eight-year-old human. The question is, who’s doing the conjuring?
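The placement rule Gordon describes is simple enough to write down. A toy sketch (my own illustration, not Gordon’s model): treat the board as a grid of candidate spots, put the cemetery at the point farthest from the colony entrance, then put the midden at the point that maximizes its combined distance from both colony and cemetery.

```python
import itertools
import math

def farthest(points, anchors):
    """Return the point maximizing the sum of distances to all anchors."""
    return max(points, key=lambda p: sum(math.dist(p, a) for a in anchors))

# Model the board as an 11x11 grid of spots; colony entrance at one corner.
board = list(itertools.product(range(11), range(11)))
colony = (0, 0)

# Rule 1: dead ants as far from the colony as possible.
cemetery = farthest(board, [colony])       # the diagonally opposite corner

# Rule 2: midden as far as possible from both colony and cemetery.
midden = farthest(board, [colony, cemetery])

print(cemetery)  # (10, 10)
print(midden)    # one of the two remaining corners, e.g. (0, 10)
```

No ant computes this, of course; the point of Johnson’s chapter is that local rules followed by individuals produce this global geometry without any planner.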
It’s a question with a long and august history, one that is scarcely limited to the collective behavior of ant colonies. We know the answer now because we have developed powerful tools for thinking about - and modeling - the emergent intelligence of self-organizing systems, but that answer was not always so clear. We know now that systems like ant colonies don’t have real leaders, that the very idea of an ant “queen” is misleading. But the desire to find pacemakers in such systems has always been powerful - in both the group behavior of the social insects, and in the collective human behavior that creates a living city.