Always books
Aug. 27th, 2003 10:26 pm

I don't have anything really worth saying. Am actually feeling useless and uninteresting. I think I'm just tired. *yawn*
Today was Tom's first day of high school. Scary. My little brother is a ninth grader! And Mr. Arlt is gone. *shocked* I'm glad I finished before he left. Science would not have been the same without him. And Tom did not get the Evil Dark Lord for Algebra. Oh well.
Oh, actually there is something I can ramble about. I started reading Prey by Michael Crichton last night. It is about nanotechnology. The inside flap of the jacket says, "In the Nevada desert, an experiment has gone horribly wrong. A cloud of nanoparticles--micro-robots--has escaped from the laboratory. This cloud is self-sustaining and self-reproducing. It is intelligent and learns from experience. For all practical purposes, it is alive."
That creeps me out.
Now, Crichton is an extremely intelligent man. He knows a lot of stuff. All of his books are based on some fact somewhere, some truth. In his introduction he says that "[nanotechnology] is the quest to build man-made machinery of extremely small size, on the order of 100 nanometers, or a hundred billionths of a meter. Such machines would be about 1,000 times smaller than the diameter of a human hair." His introduction to the book also explains some of the origins of nanotechnology and where it is going/expected to go in the future. The goal is to design organisms that will, for all intents and purposes, be "alive". They will be called artificial because they are man-made.
I have one thought on this. Just because we have the power to do something does not mean it is a good idea. Producing artificial organisms that can take care of themselves, learn from their past actions, self-reproduce, and generally make "conscious" decisions is a bad idea. In theory maybe it isn't. In theory it's probably a darn good idea. But a lot of things look good "in theory". (The design of the Titanic was darn good in theory.) But if we give machines the power to really decide things, then we open up the possibility of not being able to control those decisions. And then we have a bunch of amazingly microscopic machines to worry about. The power to do something does not mean we should do it.
I'm a little past page 120 and there are 364 pages in the book. So far nothing has really been explained. Hopefully something will be explained soon. I want to know what in tarnation is going on!