|Monday, April 2nd, 2007|
|Wednesday, May 10th, 2006|
In response to this post
There are a few interesting points in here that I've never heard before...
One is the idea that God presumably had the ability to create beings of equal or greater intelligence than his own. Of course, in Christian doctrine, God is all that can ever be, so creating entities more intelligent than himself is a paradox in the same category as "can he create a boulder he cannot lift?" To play Devil's (God's?) Advocate, I might point out that perhaps God intentionally created less intelligent beings so that their requirements for amusement would be smaller and therefore easier to fulfill. But clearly God does not seem to be focused on our amusement, but more frequently on our torment... so I guess that argument falls flat. And he is supposed to be all-powerful in any case, so managing more intelligent beings would not be a challenge for him. I was often told that we were created in God's "image", but what similarity of image or otherwise could there be between an omnipotent being and ourselves?
Rather than talk about intelligence exclusively, I would specify intelligence coupled to good ends. Perhaps because connotations of goodness, reasonableness, and compromise are wrapped up in the definition of "intelligence", we can say "intelligence is the greatest" and get away with it, but intelligence can also be defined more simply as "the ability to formulate plans to achieve goals". This goal-independent definition of intelligence is probably more common than the way you are using it here.
Gardner did a really good job of presenting the infinite unknowable in a poetic way. What I think is interesting is that, as we gain greater control over our environments, the more complex mark of intelligence will make itself seen much more frequently than the simpler marks of intelligence-free circumstance, making it all the more difficult for any one mind to grasp precisely what will happen next. But this difficulty, of course, is something we will embrace as a welcome challenge once we get past that initial feeling of unease that accompanies all uncertain change.
One thing that inspired me when I was exposed to image boards last year was the idea of transferring more data at once than is possible with words - a picture is worth a thousand words and so forth. Today at Google's press conference they announced a partnership with a company that does advanced, animated visualizations of demographic data - I forget the name. Anyway, it was really interesting because it shows how image-based as well as text-based advancements can be revolutionary in terms of data transfer between humans. In fact, increases in person-to-person bandwidth could very well be the root causes of the vast majority of historical chain reactions.
The point you bring up about hard-wired faith is a good one... if a tendency towards faith is really hard-wired and unavoidable, then it makes sense to turn it towards something constructive (human beings, intelligence, exploration) rather than stifle it entirely (curmudgeonly atheism).
My belief summary:
As a child I was religious but also always had an interest in the sciences. With a scientific tendency to classify everything, I made up the idea that Heaven had layers which existed in alternate dimensions. Dante and others seem to have been able to do this with Heaven and Hell and still be called Christian, which is surprising to me because these divisions are not specified in the Bible.
When I was 12 I began to read non-fiction voraciously and shortly dropped my religious faith. I was very interested in space exploration and frequently fantasized about the possibility of a machine that could turn back my biological clock without adverse effects.
When I was 14 I accepted secular humanism as a formal philosophy and also started turning towards the idea of using advanced manufacturing to create enough material abundance that the conventional currency system could be dissolved.
In my teenage years I began to see that in many cases money was just a placeholder that partially represented skills, knowledge, and abilities, so I felt a little less negative about it, but still wanted to see universal abundance so that more for one person didn't have to mean less for another.
When I was 17 I discovered transhumanism, which I thought to be an interesting crossroads of innovative ideas, some of which I agreed with and others I didn't. In any case I was welcomed by many dozens of adults in the online world who appreciated my enthusiasm and precociousness, who also formed an excellent group to bounce ideas off of. I met scientists, engineers, artists, VCs, etc., and began to see the value in consulting others even if it feels like you have it all figured out.
When I was around 18 I met Eli and adopted a much more complicated belief system that centered on the idea of creating smarter intelligence technologically. It also deeply involves the question, "what would Good look like if it were a software program?" These lines of action and inquiry reach to the very roots of what it means to be human. They invoke dozens of scientific disciplines to produce partial answers that still need improvement to this day. I helped Eli build an organization devoted to tackling these issues.
Now the organization is as big as a small company, built by the work of hundreds of people, but needs to grow larger if we want these questions answered and these goals attained.
...and that's why we're setting up shop at Stanford this weekend to bring the discussion to the next level of mainstream acceptance and intellectual legitimacy!
|Thursday, August 26th, 2004|
I've been reading a lot about Soviet Gulags (hard labor camps) lately. I recently finished "The Gulag Archipelago" and "The Gulag Archipelago Two", by Aleksandr I. Solzhenitsyn. The first book isn't even about the prison camps specifically, but the events, mostly political and legal, that led up to their use. The second book is about the bowels of hell: the worst humanitarian disaster ever, the 40 or so years of hard labor by millions of people whose collective misery outclasses the Holocaust thousands of times over. Many of them were set to work on projects whose actual value was totally questionable, like a huge canal from the White Sea to the Baltic Sea.
From 1917 to 1958, about 60 million people died through starvation, freezing, torture, being shot, being torn apart by attack dogs, being worked to death, etc. Never has such a large group of human beings passed into the depths of insanity as in Communist Russia. Thank Providence my family ran away to China when the time was right. They brushed cheeks with the greatest evil in human history.
So, how did those Soviets make the prisoners really suffer? The strategy was threefold:
1) The work brigade. Prisoners worked in groups of 20 or so people each, with shared fates. Bombarded by Communist propaganda constantly, separated from their families, moved from place to place at the whim of the higher-ups, work brigades had to fulfill large quotas or they would not eat. Set against each other by the horrible conditions, ridiculous sentences, mass deaths, freezing cold, and back-breaking work, the work brigade was its own worst enemy. In situations like these, who even needs overseers?
2) A differentiated ration pot. Work harder, faster, put your heart into it! The amount of food you receive at the end of the day depends on how much work you get done. The construction project in Moscow is being held up because there are no bricks! This work is being carried out on the initiative and orders of the Advocate of the People, the Great Strategist, Comrade Stalin. Competition between brigades, projects, camps, he who works the most gets the best rations! In truth, those who broke the work records most enthusiastically would be those who worked themselves to death most quickly.
3) Two groups of bosses. Bosses managed work brigades in shifts. Brigades would be switched from camp to camp, project to project, while the slave-driving "Chekas" usually got to stay in one place. Since each brigade had at least two bosses, there were often two independent sets of demands, punishments, threats, and rules. What circumstances could be more confusing and soul-crushing? There were stories of entire brigades dying under the same bosses. The prisoners' lives were worth nothing; a Cheka casually emptying his pistol into a work brigade was widely considered acceptable behavior.
As you can see, life in the Soviet prison camps of the twenties, thirties, forties, and fifties wasn't very fun. The Gulag system became a positive feedback cycle of human suffering; as more people were sent to the camps, the country became increasingly dependent on their production out of economic necessity. Anyway, be thankful for your present circumstances and put all your energy and money into minimizing the probability that such suffering (or anything approaching it) ever happens again. (Right now there is a nonzero probability that Communist China will take over the world in the next century, and if they reach the right technologies before we do, the ex-free world will be helpless to defend itself.)
|Friday, July 23rd, 2004|
Quote of the Day, by journalist John Robert Marlow, regarding nanotechnology policy:
"If relinquishment is not attainable (or, as most would hold, desirable), an active defense of some kind will be required. Because the consequences of a nanoevent anywhere on the globe can be so utterly dire, a comprehensive and global defense system offers the only possible means of containment, and will be necessary as a failsafe measure even if cooperative international nanodevelopment agreements are in-place and working. [...] Friendly Artificial Intelligence has enormous potential to ensure humanity's safe transition to a world containing nanotech."
|Monday, July 19th, 2004|
|Call for Collaborators|
I am looking for someone to collaborate with on writing projects. I want to take up larger and more serious writing projects: projects that are well thought-out, thoroughly researched, with strong citations and considerable detail; projects that convey genuinely new and useful ideas related to important issues in science and technology. The only problem with taking up such projects is that they are a large load to bear single-handedly. But if I were to team up with a person on the same wavelength as me, with exceptional intelligence, writing experience, and an independent drive to produce high-quality work, then the two of us could accomplish a lot more than we would have working alone. Listed below are some of the projects I have at various stages of completion, sitting on my hard drive and gathering dust:

Technological Singularity Forecasting:
A very long document surveying futurism, expert systems, robotics, nanotechnology, computational neuroscience, Singularity activism, AGI theory, and various other areas in an effort to estimate when the creation of smarter-than-human intelligence will become technologically feasible. The conclusion I've tentatively reached is 2005-2015, with 2020 as a far upper bound. Systems theorist John Smart has a well-written page on the topic of Singularity forecasting. Smart forecasts a Singularity in 2060, plus or minus 20 years, yet Drexler and others forecast molecular manufacturing in 2010-2020, which would rapidly lead to computers with computational power far in excess of the human brain. Creating general artificial intelligence would then just be a matter of implementing a class of algorithm that corresponds to a computable solution to the mathematical problem of general intelligence, within the constraints inherent to the hardware. This would happen months or years after the nanotech revolution, not decades.

Technological Singularity FAQ:
There is still not a definitive Technological Singularity FAQ. The void persists, just waiting to be filled. A properly written and advertised Singularity FAQ could bring in a decent amount of traffic and inform people of the basics necessary to grasp at least an outline of the concept. There is an abundance of faulty Singularity definitions and interpretations floating around, and any effort to coax order out of that chaos seems like a wise idea to me. Like the Foresight Institute in the domain of nanotechnology, the Singularity Institute benefits from a "founder effect" in the domain of the Singularity, something worth taking full advantage of.

History of Friendly AI:
"Friendly AI is one of the critical links on humanity's road to the future. At some point in the relatively near future, enough computing power will exist that a near-human or transhuman Artificial Intelligence is theoretically possible. At some point thereafter, Artificial Intelligence will be created, and the actions and choices an AI makes will have significant impact on the world. As a field of study, 'Friendly AI' is the theoretical knowledge needed to understand goals and choices in artificial minds, and the engineering knowledge needed to create cognitive content, design features, and cognitive architectures that result in benevolence."

Future Shock FAQ:
Originally suggested by Stanley Pecavar, a mechanical engineer at Sun and one of my financial supporters: an in-depth analysis of future shock and its distinct levels would be useful. I rewrote and improved "Future Shock Level Analysis" earlier this year, but the document fails to analyze the shock levels in enough detail, and fails to provide detailed suggestions for "moving" from one shock level to the next. The document is largely agnostic about migration between shock levels, in order to maintain an objective point of view. But a new document that actively suggests upward motion on the shock level scale might be an interesting idea.

The Future of a Civilization with Nanotechnology:
Full-fledged molecular manufacturing (MNT) will probably be here sometime within the next decade. "Nanotechnology" is supposedly a hot public topic, but few people seem to understand the technical side of nanotech or the true extent of its implications. The original founders of the field, such as Eric Drexler, are screaming their heads off, but the US Government and its National Nanotech Initiative have a bizarre political agenda which encourages clouding the original meaning of the term "nanotechnology". Anyway, MNT will result in the rapid plummeting of manufacturing costs, energy costs, research costs, computing costs, military costs, civil costs, and so on. If the benefits of MNT are widely available soon after its initial invention, then the result will probably converge towards one of two attractors: either an anarchistic free-for-all or a well-regulated singleton scenario. This document discusses the capabilities of nanotech and the extended consequences of those capabilities in the social, economic, political, environmental, and scientific spheres.
That's it for now. Mail me if you're interested in any of the above.
In the past month or two I've done a bunch of assorted writing, mostly on Singularity-related topics:
I did the short piece "Deconstructing Asimov's Laws", a criticism of Asimov's Laws. The article was briefly quoted on KurzweilAI.net.

"Technological Singularity Survey". For those who are interested, I put this up about a week ago. Consider taking the survey and sending me your results; I'm curious.

"A Concise Introduction to Heuristics and Biases". Important facts about human cognitive biases that you need to know.

"10 Simple Ways to Help the Technological Singularity". 10 ways that an average person can help the effort to create transhuman intelligence. It makes me feel silly to spell it out in such detail, but if people need such specific suggestions, then fine.

"Achieving the Technological Singularity". More Singularity propaganda. This is now the 7th time I've introduced the Singularity on my personal website. Not too long; takes about 10 minutes max to read. Please read and send in your comments.

"Intelligent People with Interesting Ideas". This page features my primary intellectual influences. If they aren't on that page, then they probably haven't influenced me that much. Part of my rationale for publishing this page was to increase my Google ranking for the search term "intelligent people", I must confess. (Hey, it brings in traffic.)

"Who Are Technological Singularity Activists", my overlong, poorly structured introduction to Singularity activists, has been updated somewhat. It includes a few new images and new wording. Even if it is choppy in parts, I still think it's worth reading.

"World Peace Through World Domination". After seeing this phrase in an anime, I just had to create a page with this title. Humorous. I really doubt anyone can actually achieve world peace through world domination in the short run, but I'd take world domination over world destruction any day.
And finally, I'm still looking for donors. Invest in me, and you'll get your money's worth.
|Sunday, July 18th, 2004|
The Singularity Institute's web project timed to coincide with the release of "I, Robot" went live a few days ago. We immediately got slashdotted. Traffic has been very good. Read the press release here. Thanks to everyone who participated, especially Michael Roy Ames, Christian Rovner, Josh Yotty, and Tyler Emerson. This initiative should bring in some new donors and programmer candidates, both of which we really need to continue our growth.