tag:blogger.com,1999:blog-17908317.post4410296457945154984..comments2024-03-28T03:15:14.875-07:00Comments on Unenumerated: Pascal's scamsNick Szabohttp://www.blogger.com/profile/16820399856274245684noreply@blogger.comBlogger40125tag:blogger.com,1999:blog-17908317.post-84649895383411498742017-09-15T08:44:23.395-07:002017-09-15T08:44:23.395-07:00Eventually the unexpected will happen.
But that d...Eventually the unexpected will happen. <br />But that does not mean every hypothesis will come true. <br />Principles of black swan events and being anti fragile in our<br />preparations still seem to apply. Having a first aid kit in your<br />house may never need to be used. But if a scenario arises <br />where it does, that $40 investment could save a life.BenWhttps://www.blogger.com/profile/10473875268305532108noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-53681183963211551342012-07-23T12:20:41.472-07:002012-07-23T12:20:41.472-07:00@nick: Wise choice.
With the AI box nonsense - w...@nick: Wise choice. <br /><br />With the AI box nonsense - why one would even bother building AI so general that it will want out of the box... I thought that this was about a boxed mind upload or other neural network derived AI.<br /><br />Generally this singularity/rationalism community has a very strong belief in the powers of their reason (as per the philosophy of rationalism), and they mix up rationalism and rationality. <br /><br /><br />When talking of the AGI that others allegedly would create, they point out how bad it would be and how it would kill mankind.<br /><br />Whenever you point out to them that e.g. an algorithm which solves formally defined problems is very practically useful (and can be tackled directly without any further problems of defining what we want), they try to convince you that it is much less useful than their notion of AGI that wants real things, having forgotten the previous statement, or being unable to see how generally useful something like mathematics is.<br /><br />It's true 1984 doublethink over there. Almost everyone with any sense has evaporated, while the very few people with a clue (Wei Dai for example) read their own sense into the scriptures, arguing something sensible but entirely orthogonal to the nonsense of the whole group.Dmytryhttps://www.blogger.com/profile/03329960438673340983noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-2141837388669769102012-07-22T22:37:57.468-07:002012-07-22T22:37:57.468-07:00This comment has been removed by a blog administrator.nicknoreply@blogger.comtag:blogger.com,1999:blog-17908317.post-27511133926950769082012-07-22T17:42:22.785-07:002012-07-22T17:42:22.785-07:00Alexander, I hadn't quite realized just how fa...Alexander, I hadn't quite realized just how far from reality they had traveled down their imaginary rabbit hole. I hope in the future I'll be finding more constructive things to do than trying to engage them. 
Still, I applaud you for your research into them and hopefully the result will be saving some people from such mind-wasting drivel.nicknoreply@blogger.comtag:blogger.com,1999:blog-17908317.post-40740081414067607062012-07-22T12:27:24.091-07:002012-07-22T12:27:24.091-07:00One of the key pieces of evidence they like to cite...One of the key pieces of evidence they like to cite is something they call the <a href="http://yudkowsky.net/singularity/aibox/" rel="nofollow">AI-Box Experiment</a>. It is an unpublished, unreproducible chat between Eliezer Yudkowsky and a few strangers where he played the role of an AI trying to convince the gatekeeper, played by the other person, to release it into the world. <br /><br />One of the rules, which basically allows the AI to assert anything, is as follows:<br /><br /><i>"The Gatekeeper must remain engaged with the AI and may not disengage by setting up demands which are impossible to simulate. For example, if the Gatekeeper says "Unless you give me a cure for cancer, I won't let you out" the AI can say: "Okay, here's a cure for cancer" and it will be assumed, within the test, that the AI has actually provided such a cure. Similarly, if the Gatekeeper says "I'd like to take a week to think this over," the AI party can say: "Okay. (Test skips ahead one week.) Hello again.""</i>Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-84627251367965204952012-07-22T12:15:19.830-07:002012-07-22T12:15:19.830-07:00@Nick Szabo
Not sure if you saw the movie Eagle E...@Nick Szabo<br /><br />Not sure if you saw the movie <a href="http://en.wikipedia.org/wiki/Eagle_Eye" rel="nofollow">Eagle Eye</a>, but it nicely highlights how those people probably perceive your law commentary. <br /><br />You have to realize the following when arguing with those people.<br /><br />Namely their premises, which are very roughly:<br /><br />1.) There will soon be a superhuman intelligence. <br />2.) It will be a master of science and social engineering. <br />2.b.) It will be able to take over the world by earning money over the internet, control people by making phone calls and sending emails, and ordering equipment to invent molecular nanotechnology. <br />3.) It will interpret its goals in a completely verbatim way without wanting to refine those goals. <br />4.) It will want to protect those goals and in doing so take over the world and the universe.<br />5.) Any goal short of a full-fledged coherent extrapolation of the volition of all of humanity will lead to the destruction of human values either as a side-effect or because humans are perceived to be a security risk. <br /><br />It all sounds incredibly crazy, naive and absurd. But as long as you don't argue against those premises they will just conjecture arbitrary amounts of intelligence, i.e. magic, to "refute" any of your more specific arguments.Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-82124029775659233192012-07-21T21:31:49.744-07:002012-07-21T21:31:49.744-07:00Here was my reply:
(1) I'd rephrase the above...Here was my reply:<br /><br />(1) I'd rephrase the above [comment I posted under my "The Singularity" post on this blog, reposted to Less Wrong by Wei Dai] to say that computer security is among the two most important things one can study with regard to this alleged threat [the robot apocalypse, assuming one wanted to take it seriously].<br /><br />(2) The other important thing is law. Law is the "offensive approach to the problem of security" in the sense I suspect you mean it (unless you mean something more like the military). Law is very highly evolved, the work of millions of people as smart or smarter than Yudkowsky over more than a millennium, and tested empirically against the real world of real agents with a real diversity of values every day. It's not something you can ever come close to competing with by a philosophy invented from scratch.<br /><br />(3) I stand by my comment that "AGI" and "friendliness" are hopelessly anthropomorphic, infeasible, and/or vague.<br /><br />(4) Computer "goals" are only usefully studied against actual algorithms, or clearly defined mathematical classes of algorithms, not vague and imaginary concepts. Perhaps you can make some progress by, for example, advancing the study of postconditions, which seem to be the closest analog to goals in the software engineering world. 
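To make the postcondition idea concrete, here is a minimal sketch (all names here — `with_postcondition`, `broken_sort`, and so on — are hypothetical illustrations, not an established API or anything from the original discussion):

```python
# Sketch: a function declares a postcondition; callers check it and
# ignore output from software that has violated its contract.

def with_postcondition(post):
    """Wrap a function so its result is returned with a pass/fail flag."""
    def wrap(fn):
        def checked(*args, **kwargs):
            result = fn(*args, **kwargs)
            # Evaluate the declared postcondition against the result and inputs.
            return (result, post(result, *args, **kwargs))
        return checked
    return wrap

def is_sorted_permutation(result, data):
    # Postcondition for a sort: the output is exactly the input, ordered.
    return result == sorted(data)

@with_postcondition(is_sorted_permutation)
def broken_sort(data):
    return data[:2]  # buggy: drops elements, so the postcondition fails

result, ok = broken_sort([3, 1, 2])
if not ok:
    result = None  # downstream code discards output that violated the contract
```

The point of the sketch is only that a "goal" like "produce a sorted permutation" is checkable mechanically, so other software can refuse to consume output that fails the check.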
One can imagine a world where postconditions are always checked, for example, and other software ignores the output of software that has violated one of its postconditions.nicknoreply@blogger.comtag:blogger.com,1999:blog-17908317.post-51541920379492226382012-07-21T12:12:40.576-07:002012-07-21T12:12:40.576-07:00Wei Dai addressed you directly in his latest post...Wei Dai addressed you directly in his <a href="http://lesswrong.com/r/discussion/lw/dq9/work_on_security_instead_of_friendliness/" rel="nofollow">latest post</a> at lesswrong.com, asking: <i>Work on Security Instead of Friendliness?</i><br /><br />Here is my reply:<br /><br />For the sake of the argument I am going to assume that Nick Szabo shares the views of the Singularity Institute that at some point there will exist a vastly superhuman intelligence that cares to take over the world, which, as far as I can tell, he doesn’t.<br /><br />The question is not how to make systems perfectly secure but just secure enough to prevent such an entity from easily acquiring massive resources. Resources it needs to create, e.g., new computational substrates or build nanotech assemblers etc., everything that is necessary to take over the world.<br /><br />In his post, Wei Dai claims that it is 2012 and that <i>“malware have little trouble with the latest operating systems and their defenses.”</i><br /><br />This is true and yet not relevant. 
What can be done with modern malware is nowhere near sufficient for what an AI will have to do to take over the world.<br /><br />An AI won’t just have to be able to wreak havoc but take <i>specific</i> and <i>directed</i> control of highly volatile social and industrial institutions and processes.<br /><br />It doesn’t take anywhere near perfect security to detect and defend against such large-scale intrusions.Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-2045303523324663832012-07-20T16:27:23.890-07:002012-07-20T16:27:23.890-07:00Speaking of comparative intelligence, here's a...Speaking of comparative intelligence, here's a fun little debate about whether to use humans, monkeys, or robots to pick coconuts. Apparently monkeys can climb ten times as many trees in a day as humans, but they have a hard time telling whether a coconut is ripe. Not clear on what parts of the task the robots do better or worse:<br /><br />http://ibnlive.in.com/news/coconut-plucking-is-monkey-business-in-kerala/253500-62-126.htmlnicknoreply@blogger.comtag:blogger.com,1999:blog-17908317.post-82455919204848021252012-07-20T14:59:17.170-07:002012-07-20T14:59:17.170-07:00humans are a conglomerate of many specialized tech...<i>humans are a conglomerate of many specialized techniques rather than something that is similar to AIXI.<br /></i><br /><br />A good thing, too, because "AIXI" (i.e. Solomonoff induction, a generalized model of machine learning) is uncomputable. Even learning with no guarantees is harder than cracking public key crypto in the general case. To make learning efficient, you need to have some already existing information about the particular environment being learned, or similarly a learning algorithm that is specialized for that environment. The more relevant information you have, the better adapted the algorithm, the easier it is to learn the rest. 
Which gives us an economy of different learners specialized to different kinds of environments.<br /><br />As for humans, we are certainly quite good at some important mental tasks, but there are also tasks we're relatively quite bad at. Exact memories typically being among them. Birds that can remember where in a landscape thousands of nuts are buried, the long-term memories of seals, animals with "photographic memories", etc., each in their own way, have specialized mental abilities our unaided brains don't typically have. And of course computers can memorize trillions of numbers, on top of being able to do arithmetic billions to trillions of times faster, and many other tasks that require boolean logic, arithmetic, and the like. And I'm not even counting the many sensory inputs our unaided brains lack, which before the advent of technological sensory modes caused us to miss out on most of the information available in our environment.<br /><br />None of that stops humans from still having the best overall package of specialties, especially with the help of the technology we create, like those computers. But there is no magical "general intelligence" that we possess and other entities do not -- just a different package wrapped up in a brain with a higher brain/body ratio, hands for our brains to create that technology, and language that allows us to form some sophisticated social relationships that are very different from those of other animals. A very good bundle of specialties, but missing some pieces we'd really now love to have (like the memories of computers), and certainly not "general intelligence", which ranges from the astronomically inefficient to the uncomputable.nicknoreply@blogger.comtag:blogger.com,1999:blog-17908317.post-2699293118662447262012-07-20T02:57:45.090-07:002012-07-20T02:57:45.090-07:00Wei_Dai wrote (@lesswrong):
"Given the benef...Wei_Dai wrote (@lesswrong):<br /><br /><i>"Given the benefits of specialization, how do you explain the existence of general intelligence (i.e. humans)? Why weren't all the evolutionary niches that humans current occupy already taken by organisms with more specialized intelligence?<br /><br />My explanation is that generalized algorithms may be less efficient than specialized algorithms when specialized algorithms are available, but inventing specialized algorithm is hard (both for us and for evolution) so often specialized algorithms simply aren't available. You don't seem to have responded to this line of argument..."</i><br /><br />As far as I can tell you already answered that by saying that humans are a conglomerate of many specialized techniques rather than something that is similar to AIXI.<br /><br />The confusion behind Wei_Dai's question is inherent in the vagueness of the concept of an agent.<br /><br />Humans are not agents in the same sense that a corporation, a country or humanity as a whole is not an agent. <br /><br />What is usually referred to as an agent is an expected utility maximizer, a rigid consequentialist that acts in a way that is globally and timelessly instrumentally useful.<br /><br />Which is not how effects of evolution work or how any computationally limited decision maker could possibly work.<br /><br />It takes complex values, motivations, a society of minds and its cultural evolution to yield behavior approximating agency. <br /><br />Complex values are the cornerstone of diversity, which in turn enables creativity and drives the exploration of various conflicting routes. A singleton with a stable utility-function lacks the feedback provided by a society of minds and its cultural evolution.<br /><br />You need to have various different agents with different utility-functions around to get the necessary diversity that can give rise to enough selection pressure. 
A “singleton” won’t be able to predict the actions of new and improved versions of itself by just running sandboxed simulations. Not just because of logical uncertainty but also because it is computationally intractable to predict the real-world payoff of changes to its decision procedures.<br /><br />You need complex values to give rise to the necessary drives to function in a complex world. You can’t just tell an AI to protect itself. What would that even mean? What changes are illegitimate? What constitutes “self”? Those are all unsolved problems that are just assumed to be solvable when talking about risks from AI.<br /><br />An AI with simple values will simply lack the creativity, due to a lack of drives, to pursue the huge spectrum of research that a society of humans does pursue. Which will allow an AI to solve some well-defined narrow problems, but it will be unable to make use of the broad range of synergetic effects of cultural evolution. Cultural evolution is a result of the interaction of a wide range of utility-functions.Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-32238241770224441632012-07-19T17:06:20.144-07:002012-07-19T17:06:20.144-07:00Do you believe that if, at some point in the future, w...<i>Do you believe that if, at some point in the future, we combine our expert systems, tools, into a coherent framework of agency, this will be of no advantage to the resulting agent that is large enough for it to pose a risk to humanity?</i><br /><br />Stated this way, I don't even buy the "if" part -- combining software systems is a difficult engineering problem that gets much harder as the total lines of code increase. I've seen large software systems and "coherent" is not how I'd describe them. 
:-) Software is going to become more fragmented, not more combined.<br /><br />And of course as you state I do indeed believe that to the extent such combined software is feasible it would not pose any kinds of problems that specialized software had not posed long before and to a far greater degree.nicknoreply@blogger.comtag:blogger.com,1999:blog-17908317.post-85106605942861355662012-07-19T12:26:50.577-07:002012-07-19T12:26:50.577-07:00@Nick Szabo
Do you believe that if, at some point...@Nick Szabo<br /><br />Do you believe that if, at some point in the future, we combine our expert systems, tools, into a coherent framework of agency, this will be of no advantage to the resulting agent that is large enough for it to pose a risk to humanity?<br /><br />That's what I inferred from skimming over your conversation over at lesswrong.Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-5958788655392156262012-07-18T23:01:09.847-07:002012-07-18T23:01:09.847-07:00Yeah, Nick, it was sort of just plain old, mundane...Yeah, Nick, it was sort of just plain old, mundane t-shirt printing technology!<br /><br />But, in its very large (economically) and very sophisticated (complexity) way, it was a blast.<br /><br />I worked for Intel from 1974 to 1986. I then did other things.<br /><br />But as a holder of Intel stock, and someone still interested in the technology, I follow what Intel, Samsung, TSMC, the leaders in the chip industry today, are doing.<br /><br />It bears very little resemblance to the Thinking Institutes which pontificate about whether step-and-repeat cameras are Dangers to Our Existence or What Laws do We Need to Head Off the AI Takeover.<br /><br />Good to see you blogging again. I would blog, maybe, except I did this for about 50% of my waking hours from 1992 to around 2002. 
Then I lessened my mailing list activity and got back to some more core stuff.<br /><br />(Certainly the "your great thoughts in 40 characters or less" current situation is really a disaster.)<br /><br />--Tim May, Corralitos, CATimhttps://www.blogger.com/profile/15001173941283883747noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-38346204276996386012012-07-18T22:33:20.469-07:002012-07-18T22:33:20.469-07:00Tim, great comments on the actual history of compu...Tim, great comments on the actual history of computing -- far better information there than in a library full of speculations about the future. So it turns out that the future of computing depended much more on t-shirt printing tech than on self-replicating diamondoid nanobots. Put that into your Bayesian pipes and puff out some numbers. :-) <br /><br />Alexander, for most of your questions, even putting an answer into words, much less numbers, would be an exercise in false precision. I don't know what the "t-shirt printing" technology of the future will be (of course the futurists who pretend to know such things don't know either, so I'm in good company). Generally speaking, though, most of those things are already partially happening now, but won't happen anywhere close to completely before culture has radically changed in a myriad of ways, by which time such questions will have become largely moot, shocking as they may seem to us today.nicknoreply@blogger.comtag:blogger.com,1999:blog-17908317.post-17035340644568780492012-07-18T21:29:22.570-07:002012-07-18T21:29:22.570-07:00Another result of what I once dubbed "Rapture...Another result of what I once dubbed "Rapture of the Future" is that it makes it hard to work on all that boring, short-term, intense stuff....like debugging a chip, or installing a new ion implant machine, that sort of boring grunge. <br /><br />I'm being facetious. 
But I saw it a bunch of times back 20 years ago when a lot of the cool new stuff was first getting wide exposure on the then-emerging mailing lists. (Extropians, Cryonics-l, Cyberia-l, Cypherpunks...).<br /><br />A bunch of folks were doing menial work at bookstores (one worked at the libertarian bookstore in SF, another was an unemployed philosopher). It was a lot easier to think about what "rules for Jupiter-sized brains" should be than to learn, as one of your blog commenters put it, what a partial derivative is and how to use it.<br /><br />I can recall one of the leading philosophers of nanotechnology--no, neither Drexler nor Merkle--saying at a nanotech discussion in 1992-3 that "This entire valley (Silicon) will be gone in 20 years!"<br /><br />Whoops.<br /><br />But the effect of the intoxication of the Rapture of the Future, either the positive or the negative sides of what Nick is calling Pascal's scams, can be debilitating.<br /><br />Frankly, I'm pretty glad that I "came up" during an era where so much tweeting and blogging and e-mailing and "bottle rockets being fired off" were not distracting me. To learn what I needed to learn to later do something useful (and financially rewarding) I had to buckle down and work. And at my job, the focus was on fairly short-term milestones, not grand visions of the future.<br /><br />Believe me, one of my boss's boss's bosses was Gordon Moore. And I can say he was no blue sky dreamer. And his observation had a lot more to do with saying "This is what we've seen in the last 5-6 years," along with some solid comments on the obvious "t-shirt printing" side of things. By this I mean that photolithography is a lot like silk-screening a t-shirt, and a lot of the jump from 100 transistors per die (chip) to 10,000 to a million, etc. had to do with precision optics, precision stepper motors....essentially printing images at higher and higher resolutions. 
Some things came from device physics (I worked on some of this), some things came from better CAD tools, but most of "Moore's Law" comes from photolithography. Even today, with the newest "step and repeat" systems costing $25 million EACH. Those are some expensive cameras!<br /><br />Not much room for Rapture-a-tarians in this environment.<br /><br />And a problem with a lot of these "planning" groups (the Usual Suspects) is that too many of the members want to be the Big Thinkers and authors of the policy papers.<br /><br />--Tim May, Corralitos, CATimhttps://www.blogger.com/profile/15001173941283883747noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-59881132159803714362012-07-18T13:27:59.529-07:002012-07-18T13:27:59.529-07:00Alexander:
And you learnt of those from where?
...Alexander:<br /><br />And you learnt of those from where? <br /><br />I'm using Bayes' theorem in my work. Can't say about Solomonoff induction, it being non-computable.<br /><br />Bayes' theorem, combined with Solomonoff induction and the halting problem/undecidability, is, if anything, a proof that you cannot find the probability of a theory being true.<br /><br />Instead, alternative methods have to be used when no prior is available.<br /><br />For instance, if I commit to believe that a hypothesis is true if an experiment with a one in a billion false positive rate has confirmed that the hypothesis is true, then the probability of me ending up with a belief in this false hypothesis is at most one in a billion. Given the cost of experiments and the costs of invalid beliefs of either kind, as well as the number of hypotheses being tested, the optimal standard for evidence (under worst-case assumptions) can be chosen, all without EVER pulling a number out of your ass and calling it a probability. This comes complete with resistance to various forms of Pascal's wager, especially given that in the event that you are being offered a wager, there is significant probability of needing the money in the future for some other wager that would come with a proof.Dmytryhttps://www.blogger.com/profile/03329960438673340983noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-54457169460030480642012-07-18T12:14:22.272-07:002012-07-18T12:14:22.272-07:00What are you calling "state of the art" ...<i>What are you calling "state of the art"? The informal "formalizations" originating from the same Eliezer Yudkowsky?</i><br /><br />Bayes’ Theorem, the expected utility formula, and Solomonoff induction.Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-36110218788142552002012-07-18T11:58:39.334-07:002012-07-18T11:58:39.334-07:00Alexander:
> The current state of the art form...Alexander:<br /><br />> The current state of the art formalization of technical rationality does indicate that a rational decision maker should rely less on empirical evidence than "traditional rationalists"/scientists tend to do and rather take logical implications much more seriously.<br /><br />What are you calling "state of the art"? The informal "formalizations" originating from the same Eliezer Yudkowsky? What he is relying on has nothing to do with logic and as a truth-finding method ranks somewhere near medieval scholasticism. Half the arguments rely on 'look, there's this possible consequence and we can't imagine other possibilities so it must be true', and the other half repackages the general difficulty of building and motivating AI as the difficulty of making survivable AI.Dmytryhttps://www.blogger.com/profile/03329960438673340983noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-20660534306248182802012-07-18T07:44:31.624-07:002012-07-18T07:44:31.624-07:00Alexander:
> The current state of the art form...Alexander:<br /><br />> The current state of the art formalization of technical rationality does indicate that a rational decision maker should rely less on empirical evidence than "traditional rationalists"/scientists tend to do and rather take logical implications much more seriously.<br /><br />According to Eliezer &co, who have an obvious monetary motivation, and who do not promote logical implications, but instead something that on the surface sounds vaguely reasonable but is about as effective a truth-finding method as medieval scholasticism.<br /><br />You do not have reason to describe it as state of the art. There has been no great success attributable to the informal 'formalizations' you refer to.<br /><br />While the logical implications certainly have to be followed, the scholastic implications as conjectured by Eliezer, certainly should not.<br /><br />Furthermore, from visiting lesswrong you may have a massively skewed view of decision making under uncertainty. There is a method that does not rely on made-up priors. If I commit to believe in a hypothesis if it passes a test that has a one in a million chance of a false positive, I have at most a one in a million chance of believing in an invalid hypothesis; after considering the number of hypotheses you deal with, the cost of invalid belief, and the cost of tests, an appropriate testing strategy can be devised with a well-defined worst-case performance, without ever making up a prior out of thin air. 
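The worst-case bound described above is just linearity of expectation: even if every hypothesis tested happens to be false, the expected number of false beliefs is at most (number of tests) x (false-positive rate). A small numerical sketch (the cost figures and the logarithmic test-cost model are made-up illustrations, not anything from the comment):

```python
import math

def expected_false_beliefs(n_tests, p_false_positive):
    # Worst case: every hypothesis tested is false; linearity of expectation
    # bounds the expected number of false beliefs by n_tests * p.
    return n_tests * p_false_positive

def total_worst_case_cost(n_tests, p, cost_per_test, cost_false_belief):
    # Assume (purely for illustration) that a stricter test costs more to run,
    # growing logarithmically in 1/p, e.g. by repeating an independent check.
    test_cost = cost_per_test * math.log10(1 / p)
    return (n_tests * test_cost
            + expected_false_beliefs(n_tests, p) * cost_false_belief)

# Testing a million hypotheses at a one-in-a-million false-positive rate
# bounds the expected number of false beliefs by about 1 -- no prior needed.
bound = expected_false_beliefs(1_000_000, 1e-6)

# Choose the evidence standard minimizing worst-case cost for made-up costs:
candidates = [1e-3, 1e-6, 1e-9]
best_p = min(candidates,
             key=lambda p: total_worst_case_cost(1000, p, 1.0, 1e6))
```

With these illustrative numbers the intermediate standard wins: the loosest test incurs too many expected false beliefs, while the strictest pays more in testing than it saves.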
<br /><br />The 'technical rationality' as described on lesswrong is as much worthy of 'state of the art' title as Hubbard's dianetics or Keith Raniere's NXIVM.Dmytryhttps://www.blogger.com/profile/03329960438673340983noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-55134359348414613322012-07-18T03:49:21.260-07:002012-07-18T03:49:21.260-07:00Regarding the questions of my previous comment and...Regarding the questions of my previous comment and the probability estimates they ask for. I agree with you that such estimates can be very misleading. I don't expect numerical estimates. I'd just be curious about your general opinion with respect to those questions.<br /><br />It is actually one of the topics I am most confused about, namely the pros and cons of making up numerical probability estimates.<br /><br />I do believe that using Bayes’ rule when faced with data from empirical experiments, or goats behind doors in a gameshow, is the way to go.<br /><br />But I fear that using formal methods to evaluate informal evidence might lend your beliefs an improper veneer of respectability and in turn make them appear to be more trustworthy than your intuition. For example, using formal methods to evaluate something like AI risks might cause dramatic overconfidence.<br /><br />Bayes’ rule only tells us by how much we should increase our credence given certain input. But given most everyday life circumstances the input is often conditioned and evaluated by our intuition. Which means that using Bayes’ rule to update on evidence does emotionally push the use of intuition onto a lower level. 
In other words, using Bayes’ rule to update on evidence that is vague (the conclusions being highly error prone), and given probabilities that are being filled in by intuition, might simply disguise the fact that you are still using your intuition after all, while making you believe that you are not.<br /><br />Even worse, using formal methods on informal evidence might introduce additional error sources.Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-66782490499335730502012-07-18T03:02:30.281-07:002012-07-18T03:02:30.281-07:00If you wonder what AI researchers think, some time...If you wonder what AI researchers think, some time ago I conducted a Q&A style interview series with a bunch of people, all but one of them actual AI researchers:<br /><br /><a href="http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI" rel="nofollow">http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI</a><br /><br />@Nick Szabo<br /><br />I would love for you to answer the questions as well :-)<br /><br />1.) Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?<br /><br />2.) Once we build AI that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?<br /><br />3.) Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?<br /><br />4.) 
What probability do you assign to the possibility of an AI with initially roughly professional human-level competence (or better, perhaps unevenly) at general reasoning (including science, mathematics, engineering and programming) to self-modify its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?<br /><br />5.) How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?<br /><br />6.) What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created)Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-78209848988948543592012-07-18T02:53:37.602-07:002012-07-18T02:53:37.602-07:00One correction regarding my last comment. Position...One correction regarding my last comment. Position 12 is not a verbatim quote by Eliezer Yudkowsky. Here is what he actually wrote:<br /><br /><i>[…] I would be asking for more people to make as much money as possible if they’re the sorts of people who can make a lot of money and can donate a substantial amount fraction, never mind all the minimal living expenses, to the Singularity Institute.<br /><br />This is crunch time. This is crunch time for the entire human species. […] and it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. 
I think that if you’re actually just going to sort of confront it, rationally, full-on, then you can’t really justify trading off any part of that intergalactic civilization for any intrinsic thing that you could get nowadays […]<br /><br />[…] having seen that intergalactic civilization depends on us, in one sense, all you can really do is try not to think about that, and in another sense though, if you spend your whole life creating art to inspire people to fight global warming, you’re taking that ‘forgetting about intergalactic civilization’ thing much too far.</i><br /><br />— <a href="http://vimeo.com/8586168" rel="nofollow">Video Q&A</a> with Eliezer YudkowskyAlexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-36760726590888189462012-07-18T01:55:20.048-07:002012-07-18T01:55:20.048-07:00With asteroid detection, there is a mechanism to determine where we have established that a dangerous asteroid isn't. We can asymptotically approach 100% certainty that there is no dangerous asteroid within any given volume. You could easily administer a $10bn bounty, payable to the first person or group who identifies an asteroid of some determined size that will collide with Earth, and a $1tn bounty to the group that manages to change its trajectory. Those bounties are clearly worthwhile to issue, even at those absurd amounts. The low chance of payoff might result in few people trying to earn them, but that's a different issue.<br /><br />Right now the SI has not adequately defined Friendly AI to the point where they could offer a prize for solving the problem. 
If they can't offer a definition robust enough for a neutral party to resolve the dispute that will occur when somebody claims to have solved the AI problem but the SI disagrees, then they haven't defined the problem well enough to post a prize.Danhttps://www.blogger.com/profile/04068405479657933619noreply@blogger.comtag:blogger.com,1999:blog-17908317.post-3985877365081918852012-07-18T01:48:27.437-07:002012-07-18T01:48:27.437-07:00<i>Some people who call themselves Bayesians sometimes or even often tend to confuse probability estimates with actual evidence, obsessing with the estimates and ignoring the actual evidence (or lack thereof).</i><br /><br />Which is basically the gist of most disagreement between the kind of people associated with the Singularity Institute and those whom they call "traditional rationalists".<br /><br />The current state-of-the-art formalization of technical rationality does indicate that a rational decision maker should rely less on empirical evidence than "traditional rationalists"/scientists tend to do, and rather take logical implications much more seriously.<br /><br />My personal disagreement with them is mainly that I go a step further in rejecting the extent to which logical implications bear on decision making. Like them, I wouldn't give money to a Pascal's mugger. But I consider certain attitudes towards AI risks and existential risks to be a case of Pascal's mugging. Which they don't.<br /><br />As far as I can tell, the whole sequence of posts that Eliezer Yudkowsky wrote on the many-worlds interpretation of quantum mechanics was meant to convey what you are rejecting, namely that probabilistic beliefs should be taken more seriously. 
And further that the implied invisible, that which logically follows from any given empirical evidence, shouldn't be discounted completely.<br /><br />I might be wrong here, as I am not too familiar with the sequences.<br /><br />I don't disagree with them, except that I think the associated model uncertainty is doomed to be too drastic to take any given conclusions too seriously. More <a href="http://kruel.co/2012/06/25/rationality-implications/" rel="nofollow">here</a>.<br /><br />In short (excerpt of my post):<br /><br />You might argue that I would endorse position 12 [1] if NASA told me that there was a 20% chance of an extinction-sized asteroid hitting Earth and that they need money to deflect it. I would indeed. But that seems like a completely different scenario to me.<br /><br />I would have to be able to assign more than an 80% probability to AI being an <i>existential</i> risk to endorse that position. I would further have to be <i>highly</i> confident that we will have to face that risk <i>within this century</i> and that the model uncertainty associated with my estimates is <i>low</i>.<br /><br />That intuition stems from the fact that any estimates regarding AI risks are <i>very</i> likely to be wrong, whereas in the example case of an asteroid collision one could be <i>much</i> more confident in the 20% estimate, as the latter is based on <i>empirical evidence</i> while the former is inference-based and therefore much more <b>error prone</b>.<br /><br />I don’t think that the evidence allows anyone to take position 12, or even 11, and be even slightly confident about it.<br /><br />I am also highly skeptical about using the expected value of a galactic civilization to claim otherwise, because that reasoning will ultimately make you privilege unlikely high-utility outcomes over much more probable theories that are based on empirical evidence.<br /><br />[1] Position 12: <i>This is crunch time. This is crunch time for the entire human species. 
And it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.</i> (quote by Eliezer Yudkowsky)Alexander Kruelhttps://www.blogger.com/profile/01642702020137086489noreply@blogger.com
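[Editorial sketch] Kruel's objection in the thread above, that expected-value arguments over astronomical stakes let intuition-chosen probabilities dominate the decision, can be illustrated numerically. The model and every figure below are hypothetical, chosen only to show the sensitivity; none come from any commenter.

```python
# Minimal sketch of the "Pascal's mugging" sensitivity discussed in the thread.
# All numbers are hypothetical illustrations, not anyone's actual estimates.

def expected_value(p_risk: float, utility_at_stake: float, cost: float) -> float:
    """Expected net value of paying `cost` to avert a loss of
    `utility_at_stake` that occurs with probability `p_risk`."""
    return p_risk * utility_at_stake - cost

GALACTIC_UTILITY = 1e20  # stand-in for the "intergalactic civilization" stake
COST = 1e9               # stand-in mitigation cost

# Two priors that no empirical measurement can distinguish, yet the
# astronomical stake makes them flip the decision:
print(expected_value(1e-15, GALACTIC_UTILITY, COST))  # negative: don't pay
print(expected_value(1e-10, GALACTIC_UTILITY, COST))  # positive: pay
```

Because the stake is astronomical, the sign of the expected value, and hence the decision, hinges entirely on a prior that intuition supplies and evidence cannot check; an empirically grounded figure like the 20% asteroid estimate in Kruel's comment has no such fragility.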