Proof of the Pythagorean theorem:
Dark matter:
Chemical reactions, fire, and photosynthesis:
How trains stay on their tracks and go through turns:
And just for fun, the broken window fallacy:
Wednesday, January 26, 2011
Saturday, January 22, 2011
Tech roundup 01/22/11
Atomic clock on a chip for about $1,500. Accurate and independent clocks improve secure synchronous protocols; in other words, they can help securely determine the order in which events occur on the Internet, wireless, and other networks while minimizing dependence on trusted third parties like GPS. The technology nicely complements secure timestamping (see e.g. here, here, and here), which can leave an unforgeable record of the ordering of events and of the times at which specific data or documents existed.
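To illustrate the ordering idea, here is a minimal hash-chain sketch of linked timestamping in Python. This is my own toy illustration, not any particular service's protocol; real timestamping schemes add signed receipts, aggregation trees, and widely published anchors, and all names and parameters below are illustrative choices. The point is simply that each entry commits to the previous one, so the recorded order of events cannot be silently rewritten without changing every later hash.

```python
# Toy linked-timestamping sketch (illustrative only, not a real service's protocol):
# each entry commits to the previous entry's hash, fixing an order of events.
import hashlib, json, time

def link(prev_hash, data):
    """Return a chain entry committing to `data` and to the previous entry."""
    entry = {"prev": prev_hash, "data": data, "time": time.time()}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

chain = [link("0" * 64, "genesis")]
for event in ["doc A existed", "doc B existed", "A transferred to B"]:
    chain.append(link(chain[-1]["hash"], event))

def verify(chain):
    """Recompute every hash and back-link; tampering anywhere breaks the chain."""
    for i, e in enumerate(chain):
        body = {k: v for k, v in e.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()
                          ).hexdigest() != e["hash"]:
            return False
        if i > 0 and e["prev"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))  # True; altering an earlier entry (without rewriting all later ones) makes this False
```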
Bitcoin, an implementation of the bit gold idea (and another example of where the order of events is important), continues to be popular.
It is increasingly being recognized that there are many "squishy" areas where scientific methods don't work as well as they do in hard sciences like physics and chemistry: psychology, significant portions of medicine, and ecology, and I'd add the social sciences, climate, and nutrition. These areas are often hopelessly infected with subjective judgments about results, so it's not too surprising that when the collective judgments change about what constitutes, for example, the "health" of a mind, body, society, or ecosystem, the "results" of experiments defined in terms of these judgments change as well. See also "The Trouble With Science".
Flat sats (as I like to call them) may help expand our mobility in the decades ahead. Keith Lofstrom proposes fabricating an entire portion of a phased array communications satellite -- solar cells, radios, electronics, computation, etc. -- on a single silicon wafer. Tens of thousands or more of these, each nearly a foot wide, may be launched on a single small rocket. If they're thin enough, orientation and orbit can be maintained using light pressure (like a solar sail). Medium-term application: phased array broadcast of TV or data allows much smaller ground antennas, perhaps even satellite TV and (mostly downlink) Internet in your phone, iPad, or laptop. Long-term: lack of need for structure to hold together an array of flat sats may bring down the cost of solar power in space to the point that we can put the power-hungry server farms of Internet companies like Google, Amazon, Facebook, etc. in orbit. Biggest potential problem: large numbers of these satellites may both create and be vulnerable to micrometeors and other space debris.
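A rough back-of-the-envelope way to see why a phased array permits smaller ground antennas (my own sketch, not part of Lofstrom's proposal): if N elements each radiate power P and their signals are phased to add coherently toward the receiver, the field amplitudes add, so the peak power density goes roughly as

\[ S_{\mathrm{coherent}} \propto N^{2} P \qquad \text{vs.} \qquad S_{\mathrm{incoherent}} \propto N P , \]

and for a fixed link budget that extra factor of N can be traded for a correspondingly smaller receiving aperture on the ground.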
Introduction to genetic programming, a powerful evolutionary machine learning technique that can invent new electronic circuits, rediscover Kepler's laws from orbital data in seconds, and much more, as long as it has fairly complete and efficient simulations of the environment it is inventing or discovering in.
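As a concrete, heavily simplified illustration of the idea, here is a minimal mutation-only tree-based genetic programming sketch in Python that tries to rediscover Kepler's third law, T = a^(3/2), from a handful of (semi-major axis, period) data points. All helper names, constants, and parameters are my own illustrative choices; a serious GP system would add crossover, larger populations, and bloat control.

```python
# Minimal mutation-only tree GP sketch (illustrative toy, not Koza's system):
# evolve an expression in semi-major axis `a` that predicts orbital period T.
import random, math, operator

# (semi-major axis in AU, orbital period in years) for a few planets
DATA = [(0.387, 0.241), (0.723, 0.615), (1.0, 1.0),
        (1.524, 1.881), (5.203, 11.862), (9.537, 29.457)]

def pdiv(x, y):                       # protected division
    return x / y if abs(y) > 1e-9 else 1.0

FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2),
         (pdiv, 2), (math.sqrt, 1)]
TERMS = ['a', 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    f, arity = random.choice(FUNCS)
    return (f,) + tuple(random_tree(depth - 1) for _ in range(arity))

def evaluate(tree, a):
    if tree == 'a':
        return a
    if not isinstance(tree, tuple):
        return tree
    f, args = tree[0], [evaluate(t, a) for t in tree[1:]]
    try:
        return f(*args)
    except (ValueError, OverflowError):  # e.g. sqrt of a negative number
        return 1e9

def fitness(tree):                    # sum of squared errors against the data
    try:
        err = sum((evaluate(tree, a) - T) ** 2 for a, T in DATA)
    except OverflowError:
        return 1e18
    return err if math.isfinite(err) else 1e18

def mutate(tree):                     # replace a random subtree
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    i = random.randrange(1, len(tree))
    return tree[:i] + (mutate(tree[i]),) + tree[i + 1:]

pop = [random_tree() for _ in range(300)]
for gen in range(60):
    pop.sort(key=fitness)
    survivors = pop[:100]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(200)]
best = min(pop, key=fitness)
print(fitness(best), best)            # good runs approximate sqrt(a*a*a) = a**1.5
```

Note that the entire "environment" the algorithm learns from is the data and evaluation code it is given; it can discover nothing that the fitness function does not already measure or simulate.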
Exploration for underwater gold mining is underway. See also "Mining the Vast Deep."
Monday, January 17, 2011
"The Singularity"
A number of my friends have gotten wrapped up in a movement dubbed "The Singularity." We now have a "Singularity Institute", a NASA-sponsored "Singularity University", and so on as leading futurist organizations. I tend to let my disagreements on these kinds of esoteric futurist visions slide, but I've been thinking about my agreements and disagreements with this movement for long enough that I shall now share them.
One of the basic ideas behind this movement -- that computers can help make themselves smarter, and that for a time this may produce growth that looks exponential, or even super-exponential in some dimensions, and far faster than today's -- is by no means off the wall. Indeed, computers have been helping improve future versions of themselves at least since the first compiler and circuit design software was invented. But "the Singularity" itself is an incoherent and, as the capitalization suggests, basically religious idea, as well as a nifty concept for marketing AI research to investors who like very high-risk, high-reward bets.
The "for a time" bit is crucial. There is as Feynman said "plenty of room at the bottom" but it is by no means infinite given actually demonstrated physics. That means all growth curves that look exponential or more in the short run turn over and become S-curves or similar in the long run, unless we discover physics that we do not now know, as information and data processing under physics as we know it are limited by the number of particles we have access to, and that in turn can only increase in the long run by at most a cubic polynomial (and probably much less than that, since space is mostly empty).
Rodney Brooks thus calls the Singularity "a period" rather than a single point in time, but if so then why call it a singularity?
As for "the Singularity" as a point past which we cannot predict, the stock market is by this definition an ongoing, rolling singularity, as are most aspects of the weather, and many quantum events, and many other aspects of our world and society. And futurists are notoriously bad at predicting the future anyway, so just what is supposed to be novel about an unpredictable future?
The Singularitarian notion of an all-encompassing or "general" intelligence flies in the face of how our modern economy, with its extreme specialization, works. We have been implementing human intelligence in computers in little bits and pieces at a time, and this has been going on for centuries. First arithmetic (with mechanical calculators), then bitwise Boolean logic (from the early 20th century, with vacuum tubes), then accounting formulae and linear algebra (the big mainframes of the 1950s and 60s), then typesetting (Xerox PARC, Apple, Adobe, etc.), and so on have each gone through their own periods of exponential and even super-exponential growth. But it is these particular operations, not intelligence in general, that exhibit such growth.
At the start of the 20th century, doing arithmetic in one's head was one of the main signs of intelligence. Today machines do quadrillions of additions and subtractions for each one done in a human brain, and this rarely bothers or even occurs to us. And the same extreme division of labor that gives us modern technology also means that AI has taken, and will continue to take, the form of these hyper-idiot, hyper-savant, and hyper-specialized machine capabilities. Even if there were such a thing as a "general intelligence," the specialized machines would soundly beat it in the marketplace. It would be very far from a close contest.
Another way to look at the limits of this hypothetical general AI is to look at the limits of machine learning. I've worked extensively with evolutionary algorithms and other machine learning techniques. These are very promising but are also extremely limited without accurate and complete simulations of an environment in which to learn. So for example in evolutionary techniques the "fitness function" involves, critically, a simulation of electric circuits (if evolving electric circuits), of some mechanical physics (if evolving simple mechanical devices or discovering mechanical laws), and so on.
These techniques can only learn things about the real world to the extent such simulations accurately reflect the real world, but except for extremely simple situations (e.g. rediscovering the formulae for Kepler's laws from orbital data, which a modern computer with the appropriate learning algorithm can now do in seconds) the simulations are usually woefully incomplete, usually rendering the results useless. For example, John Koza, after about 20 years of working on genetic programming, has discovered about that many useful inventions with it, largely involving easily simulable aspects of electronic circuits. And "meta-GP", genetic programming that is supposed to evolve its own GP-implementing code, is useless because we can't simulate future runs of GP without actually running them. So these evolutionary techniques, and other machine learning techniques, are often interesting and useful, but the severely limited ability of computers to simulate most real-world phenomena means that no runaway is in store -- just potentially much more incremental improvements, which will be much greater in simulable arenas and much smaller in others, and which will advance only as the accuracy and completeness of our simulations slowly improves.
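To make the dependence on simulation explicit, here is a toy sketch of my own (not Koza's work, and every name and constant is an illustrative assumption): a simple evolutionary search whose fitness function is nothing but a physics simulation. It "discovers" the classic 45-degree launch angle for maximum projectile range, but only because the simulator ignores air resistance; the result is exactly as good, and as limited, as the simulation behind it.

```python
# Toy evolutionary search whose fitness is entirely defined by a simulation
# (illustrative sketch only): evolve a launch angle maximizing projectile range.
import random, math

def simulate_range(angle_deg, v0=30.0, dt=0.01, g=9.81):
    """Drag-free projectile simulator: returns horizontal distance traveled."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
        if y <= 0.0:
            return x

def fitness(angle_deg):
    return simulate_range(angle_deg)   # fitness is whatever the simulator says

pop = [random.uniform(0.0, 90.0) for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                 # keep the best, mutate them into children
    pop = parents + [min(90.0, max(0.0, p + random.gauss(0, 3.0)))
                     for p in parents for _ in range(4)]
print(round(max(pop, key=fitness), 1))  # converges near 45 degrees -- correct only in a vacuum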
The other problem with tapping into computer intelligence -- and there is indeed, after a century of computers, quite a bit of very useful but very alien intelligence there to tap into -- is the problem of getting information from human minds to computers and vice versa. Despite all the sensory inputs we can attach to computers these days, and the vast stores of human knowledge like Wikipedia that one can feed to them, almost all such data is nearly meaningless to a computer. Think Helen Keller, but with most of her sense of touch removed on top of all her other tragedies. Similarly, humans have an extremely tough time deciphering the output of most software unless it is heavily massaged. We humans have huge informational bottlenecks between each other, but these hardly compare to the bottlenecks between ourselves and the hyper-idiot/hyper-savant aliens in our midst, our computers. As a result, the vast majority of programmer time is spent working on user interfaces and munging data rather than on the internal workings of programs.
Nor does the human mind, as flexible as it is, exhibit much in the way of some universal general intelligence. Many machines and many other creatures are capable of sensory, information-processing, and output feats that the human mind is quite incapable of. So even if we in some sense had a full understanding of the human mind (and it is information theoretically impossible for one human mind to fully understand even one other human mind), or could somehow faithfully "upload" a human mind to a computer (another entirely conjectural operation, which may require infeasible simulations of chemistry), we would still not have "general" intelligence, again if such a thing even exists.
That's not to say that the wide variety of techniques that go under the rubric "AI" are not or will not be highly useful; they may even lead to accelerated economic growth as computers help make themselves smarter. But these advances will turn into S-curves as they approach physical limits, and the idea that this growth, or these many and varied intelligences, is in any nontrivial way "singular" is very wrong.
Sunday, January 16, 2011
Making a toaster the hard way
(h/t Andrew Chamberlain)
For more on the economics at work here, see my essay Polynesians vs. Adam Smith.