Brian Micklethwait's Blog
In which I continue to seek part time employment as the ruler of the world.
Category archive: Bits from books
I’ve been reading Paul Kennedy’s Engineers of Victory, which is about how WW2 was won, by us good guys. Kennedy, like many others, identifies the Battle of the Atlantic as the allied victory which made all the other victories over Germany by the Anglo-American alliance possible. I agree with the Amazon reviewers who say things like “good overview, not much engineering”. But this actually suited me quite well. At least I now know what I want to know more about the engineering of. And thanks to Kennedy, I certainly want to know more about how centimetric radar was engineered.
Centimetric radar was even more of a breakthrough, arguably the greatest. HF-DF might have identified a U-boat’s radio emissions 20 miles from the convoy, but the corvette or plane dispatched in that direction still needed to locate a small target such as a conning tower, perhaps in the dark or in fog. The giant radar towers erected along the coast of southeast England to alert Fighter Command of Luftwaffe attacks during the Battle of Britain could never be replicated in the mid-Atlantic, simply because the structures were far too large. What was needed was a miniaturized version, but creating one had defied all British and American efforts for basic physical and technical reasons: there seemed to be no device that could hold the power necessary to generate the microwave pulses needed to locate objects much smaller than, say, a squadron of Junkers bombers coming across the English Channel, yet still made small enough to be put on a small escort vessel or in the nose of a long-range aircraft. There had been early air-to-surface vessel (ASV) sets in Allied aircraft, but by 1942 the German Metox detectors provided the U-boats with early warning of them. Another breakthrough was needed, and by late spring of 1943 that problem had been solved with the steady introduction of 10-centimeter (later 9.1-centimeter) radar into Allied reconnaissance aircraft and even humble Flower-class corvettes; equipped with this facility, they could spot a U-boat’s conning tower miles away, day or night. In calm waters, the radar set could even pick up a periscope. From the Allies’ viewpoint, the additional beauty of it was that none of the German systems could detect centimetric radar working against them.
Where did this centimetric radar come from? In many accounts of the war, it simply “pops up”; Liddell Hart is no worse than many others in noting, “But radar, on the new 10cm wavelength that the U-boats could not intercept, was certainly a very important factor.” Hitherto, all scientists’ efforts to create miniaturized radar with sufficient power had failed, and Doenitz’s advisors believed it was impossible, which is why German warships were limited to a primitive gunnery-direction radar, not a proper detection system. The breakthrough came in spring 1940 at Birmingham University, in the labs of Mark Oliphant (himself a student of the great physicist Ernest Rutherford), when the junior scientists John Randall and Harry Boot, working in a modest wooden building, finally put together the cavity magnetron.
This saucer-sized object possessed an amazing capacity to detect small metal objects, such as a U-boat’s conning tower, and it needed a much smaller antenna for such detection. Most important of all, the device’s case did not crack or melt because of the extreme energy exuded. Later in the year important tests took place at the Telecommunications Research Establishment on the Dorset coast. In midsummer the radar picked up an echo from a man cycling in the distance along the cliff, and in November it tracked the conning tower of a Royal Navy submarine steaming along the shore. Ironically, Oliphant’s team had found their first clue in papers published sixty years earlier by the great German physicist and engineer Heinrich Hertz, who had set out the original theory for a metal casement sturdy enough to hold a machine sending out very large energy pulses. Randall had studied radio physics in Germany during the 1930s and had read Hertz’s articles during that time. Back in Birmingham, he and another young scholar simply picked up the raw parts from a scrap metal dealer and assembled the device.
Almost inevitably, development of this novel gadget ran into a few problems: low budgets, inadequate research facilities, and an understandable concentration of most of Britain’s scientific efforts at finding better ways of detecting German air attacks on the home islands. But in September 1940 (at the height of the Battle of Britain, and well before the United States formally entered the war) the Tizard Mission arrived in the United States to discuss scientific cooperation. This mission brought with it a prototype cavity magnetron, among many other devices, and handed it to the astonished Americans, who quickly recognized that this far surpassed all their own approaches to the miniature-radar problem. Production and test improvements went into full gear, both at Bell Labs and at the newly created Radiation Laboratory (Rad Lab) at the Massachusetts Institute of Technology. Even so, there were all sorts of delays - where could they fit the equipment and operator in a Liberator? Where could they install the antennae? - so it was not until the crisis months of March and April 1943 that squadrons of fully equipped aircraft began to join the Allied forces in the Battle of the Atlantic.
Soon everyone was clamoring for centimetric radar - for the escorts, for the carrier aircraft, for gunnery control on the battleships. The destruction of the German battle cruiser Scharnhorst off the North Cape on Boxing Day 1943, when the vessel was first shadowed by the centimetric radar of British cruisers and then crushed by the radar-controlled gunnery of the battleship HMS Duke of York, was an apt demonstration of the value of a machine that initially had been put together in a Birmingham shed. By the close of the war, American industry had produced more than a million cavity magnetrons, and in his Scientists Against Time (1946) James Baxter called them “the most valuable cargo ever brought to our shores” and “the single most important item in reverse lease-lend.” As a small though nice bonus, the ships using it could pick out life rafts and lifeboats in the darkest night and foggiest day. Many Allied and Axis sailors were to be rescued this way.
Here (pp. 143-5) is how Peter Thiel, in Zero to One, explains the difference between humans and computers, and how they complement one another in doing business together:
To understand the scale of this variance, consider another of Google’s computer-for-human substitution projects. In 2012, one of their supercomputers made headlines when, after scanning 10 million thumbnails of YouTube videos, it learned to identify a cat with 75% accuracy. That seems impressive - until you remember that an average four-year-old can do it flawlessly. When a cheap laptop beats the smartest mathematicians at some tasks but even a supercomputer with 16,000 CPUs can’t beat a child at others, you can tell that humans and computers are not just more or less powerful than each other - they’re categorically different.
The stark differences between man and machine mean that gains from working with computers are much higher than gains from trade with other people. We don’t trade with computers any more than we trade with livestock or lamps. And that’s the point: computers are tools, not rivals.
Thiel then writes about how he learned about the above truths when he and his pals at Paypal solved one of their biggest problems:
In mid-2000 we had survived the dot-com crash and we were growing fast, but we faced one huge problem: we were losing upwards of $10 million to credit card fraud every month. Since we were processing hundreds or even thousands of transactions per minute, we couldn’t possibly review each one - no human quality control team could work that fast.
So we did what any group of engineers would do: we tried to automate a solution. First, Max Levchin assembled an elite team of mathematicians to study the fraudulent transfers in detail. Then we took what we learned and wrote software to automatically identify and cancel bogus transactions in real time. But it quickly became clear that this approach wouldn’t work either: after an hour or two, the thieves would catch on and change their tactics. We were dealing with an adaptive enemy, and our software couldn’t adapt in response.
The fraudsters’ adaptive evasions fooled our automatic detection algorithms, but we found that they didn’t fool our human analysts as easily. So Max and his engineers rewrote the software to take a hybrid approach: the computer would flag the most suspicious transactions on a well-designed user interface, and human operators would make the final judgment as to their legitimacy. Thanks to this hybrid system - we named it “Igor,” after the Russian fraudster who bragged that we’d never be able to stop him - we turned our first quarterly profit in the first quarter of 2002 (as opposed to a quarterly loss of $29.3 million one year before).
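The division of labour Thiel describes - software does the high-volume triage, human operators make the final call on flagged cases - can be sketched in a few lines. This is purely an illustrative toy, not PayPal’s system: the scoring rule, field names and threshold here are all invented, but they show the shape of such a hybrid pipeline.

```python
# Toy sketch of a hybrid "machine flags, human decides" pipeline,
# in the spirit of the Igor system described above. The heuristic
# and thresholds are invented for illustration only.

def suspicion_score(txn):
    """Toy heuristic: large amounts and brand-new accounts look riskier."""
    score = 0.0
    if txn["amount"] > 1000:
        score += 0.5
    if txn["account_age_days"] < 7:
        score += 0.4
    return score

def triage(transactions, threshold=0.6):
    """Machine pass: auto-approve low scores, queue the rest for humans."""
    approved, review_queue = [], []
    for txn in transactions:
        if suspicion_score(txn) >= threshold:
            review_queue.append(txn)   # surfaced to a human operator
        else:
            approved.append(txn)       # the machine handles the bulk
    return approved, review_queue

txns = [
    {"id": 1, "amount": 25, "account_age_days": 400},
    {"id": 2, "amount": 5000, "account_age_days": 2},
]
approved, queued = triage(txns)
```

The point of the design, as Thiel tells it, is that neither half works alone: the software cannot adapt to an adaptive enemy, and no human team can read thousands of transactions a minute, but the machine can shrink the problem to a size humans can judge.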
There then follow these sentences.
The FBI asked us if we’d let them use Igor to help detect financial crime. And Max was able to boast, grandiosely but truthfully, that he was “the Sherlock Holmes of the Internet Underground.”
The answer was yes.
Thus did the self-declared libertarian Peter Thiel, who had founded Paypal in order to replace the dollar with a free market currency, switch to another career, as a servant of the state, using government-collected data to chase criminals. But that’s another story.
Here is another bit from a book which I found particularly interesting, having just purchased and started to read the book in question.
In the Preface of A Great and Terrible King: Edward I and the Forging of Britain, Marc Morris writes that the first question everyone asks is: Was that Edward the Confessor? No: the Confessor came much earlier, before the Norman Conquest. Question number two is more interesting, because it has a more interesting answer. It concerns evidence:
The second question that has usually been put to me concerns the nature of the evidence for writing the biography of a medieval king, and specifically its quantity. In general, people tend to presume that there can’t be very much, and imagine that I must spend my days poking around in castle muniment rooms, looking for previously undiscovered scraps of parchment. Sadly, they are mistaken. The answer I always give to the question of how much evidence is: more than one person could look at in a lifetime. From the early twelfth century, the kings of England began to keep written accounts of their annual expenditure, and by the end of the century they were keeping a written record of almost every aspect of royal government. Each time a royal document was issued, be it a grand charter or a routine writ, a copy was dutifully entered on to a large parchment roll. Meanwhile, in the provinces, the king’s justices kept similar rolls to record the proceedings of the cases that came before his courts. Miraculously, the great majority of these documents have survived, and are now preserved in the National Archives at Kew near London. Some of them, when unrolled, extend to twenty or thirty feet. And their number is legion: for the thirteenth century alone, it runs to tens of thousands. Mercifully for the medieval historian, the most important have been transcribed and published, but even this printed matter would be enough to line the walls of an average-sized front room with books. Moreover, the quantity is increased by the inclusion of non-royal material. Others besides the king were keeping records during Edward I’s day. Noblemen also drew up financial accounts, issued charters and wrote letters; monks did the same, only in their case the chances of such material surviving were much improved by their membership of an institution. Monks, in addition, continued to do as they had always done, and kept chronicles, and these too provide plenty to keep the historian busy.
To take just the most obvious example from the thirteenth century, the monk of St Albans called Matthew Paris composed a chronicle, the original parts of which cover the quarter century from 1234 to 1259. In its modern edition it runs to seven volumes.
I say all this merely to demonstrate how much there is to know about our medieval ancestors, and not to pretend that I have in some way managed to scale this mountain all by myself. For the most part I have not even had to approach the mountain at all, for this book is grounded on the scholarly work of others. Nevertheless, even the secondary material for a study of Edward I presents a daunting prospect. At a conservative estimate, well over a thousand books and articles have been published in the last hundred years that deal with one aspect or another of the king’s reign. For scholarly works on the thirteenth century as a whole, that figure would have to be multiplied many times over.
Another Bit from a Book, and once again I accompany it with a warning that this Bit could vanish at any moment, for the reasons described in this earlier posting.
This particular Bit is from The Rational Optimist by Matt Ridley (pp. 255-258):
Much as I love science for its own sake, I find it hard to argue that discovery necessarily precedes invention and that most new practical applications flow from the minting of esoteric insights by natural philosophers. Francis Bacon was the first to make the case that inventors are applying the work of discoverers, and that science is the father of invention. As the scientist Terence Kealey has observed, modern politicians are in thrall to Bacon. They believe that the recipe for making new ideas is easy: pour public money into science, which is a public good, because nobody will pay for the generation of ideas if the taxpayer does not, and watch new technologies emerge from the downstream end of the pipe. Trouble is, there are two false premises here: first, science is much more like the daughter than the mother of technology; and second, it does not follow that only the taxpayer will pay for ideas in science.
It used to be popular to argue that the European scientific revolution of the seventeenth century unleashed the rational curiosity of the educated classes, whose theories were then applied in the form of new technologies, which in turn allowed standards of living to rise. China, on this theory, somehow lacked this leap to scientific curiosity and philosophical discipline, so it failed to build on its technological lead. But history shows that this is back-to-front. Few of the inventions that made the industrial revolution owed anything to scientific theory.
It is, of course, true that England had a scientific revolution in the late 1600s, personified in people like Harvey, Hooke and Halley, not to mention Boyle, Petty and Newton, but their influence on what happened in England’s manufacturing industry in the following century was negligible. Newton had more influence on Voltaire than he did on James Hargreaves. The industry that was transformed first and most, cotton spinning and weaving, was of little interest to scientists and vice versa. The jennies, gins, frames, mules and looms that revolutionised the working of cotton were invented by tinkering businessmen, not thinking boffins: by ‘hard heads and clever fingers’. It has been said that nothing in their designs would have puzzled Archimedes.
Likewise, of the four men who made the biggest advances in the steam engine - Thomas Newcomen, James Watt, Richard Trevithick and George Stephenson - three were utterly ignorant of scientific theories, and historians disagree about whether the fourth, Watt, derived any influence from theory at all. It was they who made possible the theories of the vacuum and the laws of thermodynamics, not vice versa. Denis Papin, their French-born forerunner, was a scientist, but he got his insights from building an engine rather than the other way round. Heroic efforts by eighteenth-century scientists to prove that Newcomen got his chief insights from Papin’s theories proved wholly unsuccessful.
Throughout the industrial revolution, scientists were the beneficiaries of new technology, much more than they were the benefactors. Even at the famous Lunar Society, where the industrial entrepreneur Josiah Wedgwood liked to rub shoulders with natural philosophers like Erasmus Darwin and Joseph Priestley, he got his best idea - the ‘rose-turning’ lathe - from a fellow factory owner, Matthew Boulton. And although Benjamin Franklin’s fertile mind generated many inventions based on principles, from lightning rods to bifocal spectacles, none led to the founding of industries.
So top-down science played little part in the early years of the industrial revolution. In any case, English scientific virtuosity dries up at the key moment. Can you name a single great English scientific discovery of the first half of the eighteenth century? It was an especially barren time for natural philosophers, even in Britain. No, the industrial revolution was not sparked by some deus ex machina of scientific inspiration. Later science did contribute to the gathering pace of invention and the line between discovery and invention became increasingly blurred as the nineteenth century wore on. Thus only when the principles of electrical transmission were understood could the telegraph be perfected; once coal miners understood the succession of geological strata, they knew better where to sink new mines; once benzene’s ring structure was known, manufacturers could design dyes rather than serendipitously stumble on them. And so on. But even most of this was, in Joel Mokyr’s words, ‘a semi-directed, groping, bumbling process of trial and error by clever, dexterous professionals with a vague but gradually clearer notion of the processes at work’. It is a stretch to call most of this science, however. It is what happens today in the garages and cafes of Silicon Valley, but not in the labs of Stanford University.
The twentieth century, too, is replete with technologies that owe just as little to philosophy and to universities as the cotton industry did: flight, solid-state electronics, software. To which scientist would you give credit for the mobile telephone or the search engine or the blog? In a lecture on serendipity in 2007, the Cambridge physicist Sir Richard Friend, citing the example of high-temperature superconductivity - which was stumbled upon in the 1980s and explained afterwards - admitted that even today scientists’ job is really to come along and explain the empirical findings of technological tinkerers after they have discovered something.
The inescapable fact is that most technological change comes from attempts to improve existing technology. It happens on the shop floor among apprentices and mechanicals, or in the workplace among the users of computer programs, and only rarely as a result of the application and transfer of knowledge from the ivory towers of the intelligentsia. This is not to condemn science as useless. The seventeenth-century discoveries of gravity and the circulation of the blood were splendid additions to the sum of human knowledge. But they did less to raise standards of living than the cotton gin and the steam engine. And even the later stages of the industrial revolution are replete with examples of technologies that were developed in remarkable ignorance of why they worked. This was especially true in the biological world. Aspirin was curing headaches for more than a century before anybody had the faintest idea of how. Penicillin’s ability to kill bacteria was finally understood around the time bacteria learnt to defeat it. Lime juice was preventing scurvy centuries before the discovery of vitamin C. Food was being preserved by canning long before anybody had any germ theory to explain why it helped.
As discussed in this earlier posting, here is a chunk of Frisby, from his book Bitcoin: The Future of Money? (pp. 197-201 – the chunk entitled “Beware the hype cycle"). And for the reasons stated in that earlier posting, this posting might rather suddenly disappear, so if you feel inclined to read it, do so now. And then when you have, buy the book and tell me that you have done this in the comments, because this might cheer up any passing authors or publishers:
There is a cycle that a new technology passes through as it goes from conception to widespread adoption. The research company Gartner has dubbed it the ‘hype cycle’. It has five phases: the technology trigger, the peak of inflated expectations, the trough of disillusionment, the slope of enlightenment and the plateau of productivity.
In the first phase the new technology is invented. There is research and development and some early investment is found. The first products are brought to market. They are expensive and will need a lot of improvement, but they find some early users. The technology clearly has something special about it and people start getting excited. This is the ‘technology trigger’. The internet in the early 1990s is a good example.
As this excitement grows, we move into the second phase. The media start talking about this amazing new technology. Speculative money piles in. All sorts of new companies spring up to operate in this new sector. Many of them are just chasing hot money and have no real product to offer. They are sometimes fraudulent. This new technology is going to change the world. The possibilities are endless. We’re going to cure diseases. We’re going to solve energy problems. We’re going to build houses on the moon. This is the ‘peak of inflated expectations’. This was the internet in 2000.
But at some point, the needle of reality punctures the bubble of expectation, and we move into the third phase. Actually, this technology might not be quite as good as we thought it was; it’s going to take a lot of work to get it right and to make it succeed on a commercial scale. A great deal of not particularly rewarding hard work, time and investment lies ahead. Forget the ideas men – now we need the water-carriers. Suddenly, the excitement has gone.
Negative press starts to creep in. Now there are more sellers than buyers. Investment is harder to come by. Many companies start going bust. People are losing money. The hype cycle has reversed and we have descended into the ‘trough of disillusionment’. This was the internet between 2000 and 2003.
But now that the hot money has left, we can move into phase four. The incompetent or fraudulent companies have died. The sector has been purged. Most of those that remain are serious players. Investors now demand better practice and the survivors deliver it. They release the second and third generation products, and they work quite well. More and more people start to use the technology and it is finally finding mainstream adoption. This was the internet in 2004. It climbed the ‘Slope of Enlightenment’, the fourth phase of the hype cycle, and entered the ‘Plateau of Productivity’ - phase five - which is where the likes of Google, Amazon and eBay are today.
Of course, cycles like this are arbitrary. Reality is never quite so simple. But it’s easy to make the case that crypto-currencies in late 2013 reached a ‘peak of inflated expectations’.
Perhaps it was not the peak. Perhaps it wasn’t Bitcoin’s dotcom 2000 moment – just a peak on a larger journey up. Many Bitcoin companies, for example, are not even listed on the stock market. Greater manias could lie ahead.
But it’s also easy to make the case that it was the peak of inflated expectations. In the space of three or four years, Bitcoin went from an understated mention on an obscure mailing list to declarations that it was not only going to become the preferred money system of the world, but also the usurper of the existing world order. At $1,000 a coin, some early adopters had made a million times their original investment. Speculators marvelled at the colossal amount of money they were making. The media were crazy for it. Bitcoin was discussed all over television.
It caught the imagination of the left, the right and the in-between. Computer boffins marvelled at the impossibly resilient code. Economists and libertarians marvelled at the politics of a money without government or border. There were early adopters, from the tech savvy to the black markets (black markets are usually quick to embrace new technology - pornography was the first business sector to actually make money on the internet, for example).
Every Tom, Dick and Harry you met under the age of 30 with an interest in IT was involved in some Bitcoin start-up or other. Either that or he was designing some new alt currency - some altcoins were rising at over a thousand per cent per day. ‘Banks, governments, they’re irrelevant now,’ these upstarts declared.
I suggest that in late 2013 we hit the peak of the hype cycle - the peak of inflated expectations. Now Bitcoin is somewhere in the ‘trough of disillusionment,’ just like the internet in 2001. The price has fallen. There have been thefts. Some of the companies involved have gone bankrupt.
The challenge now is for all those start-ups to make their product or service work. They have to take Bitcoin from a great idea and a technology that works to something with much wider ‘real world’ use. They have to find investment and get more and more people to start using the coins. This is a long process.
There are many who will disagree with this interpretation. And, with investment, it is dangerous to have rigid opinions – I reserve the right to change my mind as events unfold.
From time to time I like to stick bits from books up here, usually quite short, but sometimes quite long.
With the short bits, there is no legal or moral problem. Fair use, etc. But with the longer bits, there might be a problem. Here’s how I operate. I put up whatever bit it is that I think deserves to be made much of, on the clear understanding that it might disappear at any moment. Because, if anyone associated with the book I have got my chosen bit from complains and says please remove it, I will do so, immediately.
Many might think that such persons would be being rather silly. I mean, what better way could there be to reach potential readers of the entire book in question than for readers of a blog, and a blog written by someone who already likes the book, to get to read a relatively small chunk of it? Win-win, surely. Because of course, I only put up big chunks of writing if I approve of what the chunks say.
But what if a publisher is trying to insist on the principle that copyright damn well means what it says? Such a publisher might want to proclaim, and to be seen to proclaim, a zero-tolerance attitude to the copying of bigger-than-small bits of any of its books. Even if that particular book might be assisted by this particular recycled chunk being here, the larger principle might feel far more significant to the publisher. That principle being: if we allow this, where will it stop?
And I get that. As I say, if any publisher or author did complain, for these kinds of reasons or for any other, then I would get it, and the bit from the book in question would at once vanish from this blog. So far, I’ve had no such complaints. Which could just be because they reckon this blog to be too insignificant to be worth risking a fight with. They wouldn’t have a fight, but they might have a rule about letting sleeping puppies, like this one, lie.
Whatever. All I am saying here is that if I put up a big bit of a book, and anyone connected to that big bit cries foul, then the big bit will immediately vanish from here, with no grumbling, or worse, self-righteous campaigning, attempts to mobilise other bloggers, etc. etc.
Think of all this as an example of Rule Utilitarianism. And I am myself a Rule Utilitarian. My libertarian beliefs are not the absurd claim that libertarianism is inscribed into the very physical fabric of the universe, an inherent fact of life itself, which we humans either recognise or fail to recognise, but which is there anyway. Tell that to the spider I just squashed into the pavement on my way home to write this. No, I like libertarianism because it works. Libertarianism is a set of fairly simple rules which all we humans either choose to live by or choose not to live by. If we choose to live by these rules, life is good, happy, comfortable, and it gets better and better. If we don’t live by such rules, life goes to shit and stays there.
And here comes the Rule Utilitarian bit. Even if this particular bit of thieving, by the government or just by some bod like you or me, is very insignificant, and even if what the government or the bod like you or me wants to spend its or his or her ill-gotten gains on is wonderful, absolutely wonderful, my rule says: No. Not allowed. Don’t get into complicated discussions about just how little thieving is too little to be bothering about, or just how noble a noble project has to be for it to be noble enough to be financed by a spot of thieving, because that way lies the slippery slope we are now on, where the government gobbles up at least half of everything, to very little benefit for anyone other than itself. Stick to the rule. No thieving, no matter how petty its scale or how noble its supposed object.
So, I get Rule Utilitarianism. And if any publisher decides to inflict his Rule Utilitarianism, in the manner described above, upon me, I would get that, and act accordingly.
What got me wanting to spell all this out is that I have recently been reading Dominic Frisby’s excellent Bitcoin book, and I find myself wanting to put bits of it up here, quite longish bits. And in general, having just followed the link at the top of this and read some of them, I feel that postings of this sort are among the better things that I do here, and I want to do more of them. But, to all of the bits from books that will follow, I want to attach the above-mentioned caveat about how the verbiage that follows may vanish without warning, and a link to this posting is the way to summarise what is going on in my head without me banging on for however many paragraphs there are here.
There I was, lying in the bath, listening to Radio 3. Some music had ended, and I was now being subjected to a programme which I do not usually listen to, called Words and Music. And I heard the actor Jim Broadbent saying these words, by Michel de Montaigne:
I take the first subject that chance offers. They are all equally good to me. And I never plan to develop them completely. For I do not see the whole of anything. (Nor do those who promise to show it to us.) Of a hundred members and faces that each thing has, I take one, sometimes only to lick it, sometimes to brush the surface, sometimes to pinch it to the bone. I give it a stab, not as wide, but as deep as I know how. And most often, I like to take them from some unaccustomed point of view. Scattering a word here, there another, samples separated from their context, dispersed, without a plan and without a promise, I am not bound to make something of them, or to adhere to them myself, without varying when I please, and giving myself up to doubt and uncertainty, and my ruling quality, which is ignorance.
Sounds like a blogger, doesn’t he? A blogger, that is to say, like me. Especially where he says “without a promise”. I keep saying that. Above all there is that “this is what it is and if you don’t like it you know just what you can do about it” vibe that so many bloggers give off. With Montaigne, we are arriving at that first moment in history when writing and publishing new stuff had become easy. Not as easy as it is when you blog, but a whole lot easier than it had been.
I transcribed the above quote from Broadbent’s reading of it. The punctuation is somewhat uncertain, and at one point assertively creative on my part. I added some brackets, around what is clearly a diversion from his main line of thought to which he immediately returns. It’s a sideswipe at others and it is then forgotten.
Such is the wonder that is the internet that I had little difficulty in tracking down the quote. It is near the beginning of Montaigne’s essay entitled “Of Democritus and Heraclitus”, in volume three of his essays.
The BBC used a more recent translation, which I much prefer the sound of, it being less antique and long-winded. And if Montaigne himself was also antique and long-winded, then I still prefer intelligibility to stylistic accuracy.
LATER: More about Montaigne, also emphasising the modern social media angle, here.
I have already quoted a couple of interesting bits from Bill Bryson’s excellent book, At Home. I have now finished reading this, but just before I did, I encountered some interesting stuff about paint (pp. 453-5):
When paints became popular, people wanted them to be as vivid as they could possibly be made. The restrained colours that we associate with the Georgian period in Britain, or Colonial period in America, are a consequence of fading, not decorative restraint. In 1979, when Mount Vernon began a programme of repainting the interiors in faithful colours, ‘people came and just yelled at us’, Dennis Pogue, the curator, told me with a grin when I visited. ‘They told us we were making Mount Vernon garish. They were right - we were. But that’s just because that’s the way it was. It was hard for a lot of people to accept that what we were doing was faithful restoration.
‘Even now paint charts for Colonial-style paints virtually always show the colours from the period as muted. In fact, colours were actually nearly always quite deep and sometimes even startling. The richer a colour you could get, the more you tended to be admired. For one thing, rich colours generally denoted expense, since you needed a lot of pigment to make them. Also, you need to remember that often these colours were seen by candlelight, so they needed to be more forceful to have any kind of impact in muted light.’
The effect is now repeated at Monticello, where several of the rooms are of the most vivid yellows and greens. Suddenly George Washington and Thomas Jefferson come across as having the decorative instincts of hippies. In fact, however, compared with what followed they were exceedingly restrained.
When the first ready-mixed paints came on to the market in the second half of the nineteenth century, people slapped them on with something like wild abandon. It became fashionable not just to have powerfully bright colours in the home, but to have as many as seven or eight colours in a single room.
If we looked closely, however, we would be surprised to note that two very basic colours didn’t exist at all in Mr Marsham’s day: a good white and a good black. The brightest white available was a rather dull off-white, and although whites improved through the nineteenth century, it wasn’t until the 1940s, with the addition of titanium dioxide to paints, that really strong, lasting whites became available. The absence of a good white paint would have been doubly noticeable in early New England, for the Puritans not only had no white paint but didn’t believe in painting anyway. (They thought it was showy.) So all those gleaming white churches we associate with New England towns are in fact a comparatively recent phenomenon.
Also missing from the painter’s palette was a strong black. Permanent black paint, distilled from tar and pitch, wasn’t popularly available until the late nineteenth century. So all the glossy black front doors, railings, gates, lampposts, gutters, downpipes and other fittings that are such an elemental feature of London’s streets today are actually quite recent. If we were to be thrust back in time to Dickens’s London, one of the most startling differences to greet us would be the absence of black painted surfaces. In the time of Dickens, almost all ironwork was green, light blue or dull grey.
Famously, the rise of the Modern Movement in Architecture was triggered by, among many other things, a revulsion against the excesses of Victorian-era decoration, especially architectural decoration. Decoration became mechanised, and thus both much more common and much less meaningful. What did all this mechanised decoration prove, what did it mean, when you could thrash it out with no more difficulty than you could erect a plain wall?
What the above Bryson quote strongly suggests, at any rate to me, is that something rather similar happened with colour.
Why is the overwhelming atmosphere of Modernist architecture and architectural propaganda so very monochrome, still? Part of the answer is that people had only recently learned how to do monochrome. Monochrome looked modern, from about 1900-ish onwards, because it was modern. Monochrome was the latest thing. Colour, meanwhile, had become much cheaper and had been used with garish nouveau riche excess, and there was a reaction to that also, just as there was to excessive decoration.