Analysis: The future of 'amorphous computing'
(IDG) -- Peer-to-peer distributed computing and clusters are two recurring hot topics in the Linux world. What'll it be like, though, when those technologies truly take root, and we each have not two or ten external processors working for us, but a thousand, or a million?
That's the sort of question Harold Abelson, Gerald Jay Sussman, and eight more junior coauthors, listed alphabetically, address in their 1999 MIT memorandum, "Amorphous Computing" (reprinted in the Communications of the ACM earlier this year). Among their conclusions: we'll need new programming models to exploit processors that are individually unreliable and communicate over unreliable channels. It'll be worth it, though, because the marginal cost of each additional processor will be under a penny, and the right kind of design and engineering will give us unprecedented computational power.
Amorphous computing is sometimes called swarm computing to emphasize that a collective result emerges from individual microlevel behaviors, much as a relocating bee or ant colony acts with surprising coherence. Amorphous computing builds on research into distributed computing models like Jini. It presumably will be fueled by nanotechnology research, and will ultimately provide the intelligence that controls nanotechnology products. And it might well be built with calculating biological molecules like those proposed by "Amorphous Computing" coauthor Tom Knight.
What will amorphous computing look like? Advanced fabrication techniques will synthesize processing elements so cheaply that they might be delivered in a paint or wrap. We'll "install" a thousand low-power processors at a time. We're unlikely to program them with the traditional, deterministic, barely-above-assembler languages we now use. Instead, we'll set up the kinds of systems that seem to work well for beehives or schools of fish: individual processors will operate with a few simple rules such as "follow your neighbors, mostly" and "jiggle around occasionally and see if you bump into a better solution." It won't be fatal if a few processors don't work to specification, or if noise in the environment degrades interprocess communications. The collective will still be able to achieve reliable answers from its unreliable parts.
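The two rules above can be sketched in a few lines. This is a toy illustration, not anything from the memorandum: a hypothetical swarm of simple "processors" searches for the minimum of a function, each one mostly copying its best-performing neighbors and occasionally jiggling at random, with a fraction of the processors simply dead. The collective still homes in on a good answer.

```python
import random

def objective(x):
    """The quantity the swarm collectively minimizes (minimum at x = 3)."""
    return (x - 3.0) ** 2

random.seed(1)
N = 200
positions = [random.uniform(-10, 10) for _ in range(N)]
# Unreliable parts: roughly 20% of processors never work at all.
alive = [random.random() > 0.2 for _ in range(N)]

for step in range(300):
    for i in range(N):
        if not alive[i]:
            continue
        # Each processor sees only a few random neighbors, not the whole swarm.
        neighbors = random.sample(range(N), 5)
        best = min((positions[j] for j in neighbors if alive[j]),
                   key=objective, default=positions[i])
        # Rule 1: "follow your neighbors, mostly."
        positions[i] += 0.3 * (best - positions[i])
        # Rule 2: "jiggle around occasionally and see if you bump into
        # a better solution."
        if random.random() < 0.1:
            positions[i] += random.gauss(0, 0.5)

survivors = [positions[i] for i in range(N) if alive[i]]
estimate = min(survivors, key=objective)
print(estimate)
```

No processor knows the goal or coordinates globally; the answer emerges from local imitation plus noise, and it survives the dead fifth of the swarm.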
Research into animal physiology also suggests that this approach can work. Insects seem to coordinate their six legs and four wings not with a sophisticated master algorithm, but rather with simple, largely autonomous local programs that move each part. The teamwork that leads to efficient movement emerges from the interaction of simple elements.
Abelson and Sussman are famous for their classic text, Structure and Interpretation of Computer Programs, as well as their related work on the Scheme language, program verification, and other computational methods. Does it make sense for people who've invested so much in theories of formal correctness now to focus on inherently indeterminate calculations? Several columns planned for the next months will show that, in fact, there are deep connections between these areas.
All the elements necessary for amorphous computing seem within grasp. It's exciting to imagine what it'll be like to paint more efficient fuel injector controls onto an automobile engine, or put enough processor power into an ear-bud so that a radio receiver can learn on its own to scan for music likely to please its owner.
© 2001 Cable News Network. All Rights Reserved.