David M. Berube [1]
Cite as: Berube, D.M., 'Decision Ethics and Emergent Technologies: The Case of Nanotechnology', European Journal of Law and Technology, Vol. 3, No. 1, 2012.

Discussions of emerging technologies are steeped in hyperbole. Promises of abundance are paired with assurances of safety, and predictions of doom are frequently made by the prognosticators. In an address to the National Association of Science Writers meeting in New York on 16 September 1954, Lewis L. Strauss, chairman of the Atomic Energy Commission, forecast that nuclear power would be too cheap to meter. [2] In 1964 Marshall McLuhan predicted that television would alter man's central nervous system. It would 'end the consumer phase of American culture' and change city life so that 'New York will become a Disneyland, a pleasure dome'. [3] Newsweek's Clifford Stoll used contrarian hyperbole when writing about the Internet in 1995: 'The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works'. [4]
At best, futurology is a pseudo-science, especially when discussing technology. Situating oneself at a point in time and space that would enable predictive validity is nothing more than an exercise in counterfactual thinking. While an interesting game and an academic exercise, prolepsis makes for great storytelling, though it seems to lack predictive validity. Epicurus explained prolepsis as precognition and Cicero called it praenotiones. Epicurean philosophy is about attaining the happy, tranquil life (ataraxia), the absence of pain (aponia), and living a self-sufficient life surrounded by friends. [5] While prolepsis may not serve as a methodology for planning our lives and society's future, it is a plan for self-discovery. Unfortunately, critics of emerging science and technology and futurists are asked to participate not in an exercise in self-discovery but rather in prognostication for the early stages of regulation.
Thinking about the future is what we do on a daily basis. Of course, anyone putting down in print what they expect the world to be decades, centuries, or millennia from now is imprudent and a bit reckless. To quote Yogi Berra, 'It's tough to make predictions, especially about the future'. Consequently, the following does not speculate about where technology may be a decade or more from now. Instead, it was written to provoke thought. At best, it is a guide for living cooperatively with emergent technology. This is especially important for technology since we can never predict how technologies will ultimately be used. The world is populated by very smart people who find new applications for discoveries on a daily basis. In addition, applications can be repurposed, which further complicates predictability. A simple example: all-terrain vehicles (ATVs). While designed for off-road activities, they are more often than not found on roads. Even as their use has decreased over the last few years, new interactions and behaviors, such as on-road ATV racing events, continue to emerge from these unanticipated uses. Such interactions and behaviors complicate how we can predict the gamut of applications associated with a new technology as well as its potential risk signatures and its environmental health and safety footprints.
Science and technology are systemically complex. Weaver defined complexity as the degree of difficulty in predicting the properties of a system, and emerging technology suffers from both disorganized and organized complexity. [6] Disorganized complexity is a function of attempting to concatenate information without a meta-understanding of how each data point relates to another. Statistical techniques such as simple averaging are used to tame disorganized complexity. Organized complexity, on the other hand, involves dealing simultaneously with a sizable number of factors that are interrelated into a whole.
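A minimal sketch of the averaging point, assuming nothing beyond standard Python and an invented noisy system: any single observation from a disorganized system is unpredictable, yet the mean of many observations is stable, which is the sense in which statistical techniques 'tame' disorganized complexity.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def noisy_reading():
    """One unpredictable observation from a hypothetical noisy system."""
    return random.gauss(mu=100.0, sigma=25.0)

single = noisy_reading()                       # tells us little on its own
sample = [noisy_reading() for _ in range(10_000)]
average = sum(sample) / len(sample)            # stable, predictable aggregate

print(f"one reading: {single:.1f}")
print(f"mean of 10,000 readings: {average:.1f}")   # converges toward 100
```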
Another phenomenon associated with technology is its emergent nature. [7] [8] [9] Almost by definition, emergence is unpredictable. By intention or accident, emergent technology is often very different from what was expected. Interactions between applications of the technology and the world around it produce high levels of uncertainty. Consequently, definitive claims regarding an emerging technology's environmental health and safety are unreliable. This is especially true of technologies that are platforms, working as tools upon which improvements to other technologies are made. Sometimes two or more platform technologies come together to produce yet another technology. Recent experiences with biotechnology and nanotechnology have demonstrated this phenomenon. Nanotechnology is less about building nano-things and more about using nanotechnology as an enabling technology. For example, nano-biotechnology or bio-nanotechnology is an area of scientific and technological opportunity that applies the tools and processes of nanotechnology to build devices for studying and interacting with biosystems.
When nanotechnology was first touted we heard the word 'revolution'. While it is difficult to track the source of this claim, K. Eric Drexler's 'Engines of Creation', which appeared in 1986, is the most likely source in the popular literature. [10] However, claims associated with the revolutionary potential of nanoscience were not limited to futurists and science fiction devotees. MIT's former president, Charles M. Vest, among many others, made statements of this sort.
'The gathering nanotechnology revolution will eventually make possible a huge leap in computing power, vastly stronger yet much lighter materials, advances in medical technologies, as well as devices and processes with much lower energy and environmental costs. Nanotechnology may well rival the development of the transistor or telecommunications in its ultimate impact'. [11]

That frame for nanotechnology was not as effective as proponents intended. Truth be told, nanoscience has been evolutionary rather than revolutionary. More than a decade later we are still waiting for a shattering breakthrough.
Recently, people have been talking more about 'Green Nanotechnology', [12] a substitute frame in the marketing of nanoscience (marketing science is not inherently a bad thing). Nanotechnology has been associated with natural resource conservation, especially substitution, and wild-eyed claims about its remediative potential appear regularly in the media. Cleaning up oil spills, removing contaminants from drinking water, even creative solutions to climate change are just a few of the environmental claims associated with nanoscience. While some of these will come to pass, others will not. The most important question to ask about 'Green Nano' is whether it is more a rhetorical flourish and a marketing strategy than a description.
Correlative to these claims are worries about uncertain implications. How will the environment and those dependent on it react to the presence of engineered nanoparticles? The level of uncertainty is tremendously challenging. Whether or not the government or other entities have the capacity to protect us, calls for some sort of pre-emptive policy making resound. Unsurprisingly, when regulation of uncertain technologies is discussed we find the Precautionary Principle referenced. While there is a range of options available to address environmental health and safety (EHS), three dominate the debate: laissez faire (leave well enough alone), proceed with caution (regulate when there is evidence), and precaution (uncertainty constitutes evidence for regulation). Unfortunately, the proponents of 'precaution' bundled 'regulation under uncertainty' with liberal political theory. As such, moderates and conservative ideologues have rejected their proscriptions regardless of the strength of their arguments.
This article offers another direction for this debate. When I wrote 'Nano-Hype' in 2006, I was working with a team of Science and Technology Studies academics who represented a diverse set of disciplines. [13] Those from philosophy and ethics had an important influence on my thinking. Their influence, and critical research into how the public interfaces with nanoscience, has led me to ask some of the bigger questions about nanotechnology. Emerging technologies, including nanotechnology, may demand a new decision ethics. Given the dire warnings associated with phenomena such as nanotechnology, post facto apologia is inappropriate. Given space restrictions, this article draws from experiences in nanoscience; since much is known about nanoscience and nanotechnology, they are an appropriate subject against which to test the following.
Brenda Almond, co-founder of the Society for Applied Philosophy, defines applied ethics as 'the philosophical examination, from a moral standpoint, of particular issues in private and public life that are matters of moral judgment'. [14] It is thus a term used to describe attempts to use philosophical methods to identify the morally correct course of action in various fields of human life. Decision ethics is a subset of applied ethics associated with making moral decisions. The following offers some observations about how decision making may be more moral and draws from some of the intrinsic characteristics of nanotechnology as an emergent technology.
Emergence involves complex systems arising from an integrative process built on a multiplicity of interactions. Emerging technologies have two definitions: (1) prominent technologies and (2) technologies that are the product of emergence. While most people use the first definition, this article examines the second.
Nanoscience, despite early hyperbole about its revolutionary nature, evolved from advances in chemistry, material sciences, and instrumentation. Indeed, many colleagues of mine find little new about nanotechnology per se. As material scientists and chemists became better able to work on the nanoscale, thanks to a set of important inventions, we uncovered characteristics of nanoscience that are both blessings and curses. For example, its size and surface features make it ideally suited for personalized medicine but simultaneously may impact parts of the human and non-human ecology in unintended ways. Nanotechnology is a platform technology, and we expect it will allow us to make better alloys, polymers, and the like, as well as to make products with superior characteristics, whether baseball bats or desalination systems.
At the turn of the 21st century, nanoscience was mostly about nanoparticles, and most applications involved incorporating nanoparticles into or onto other materials, though there were notable exceptions, especially in nanotube research. In discussions about the future of nanotechnology multiple acronyms surfaced, such as NBIC, which stands for nanotechnology, biotechnology, information technology, and cognitive sciences. [15] Other acronyms include GNR (Genetics, Nanotechnology, and Robotics), [16] GRIN (Genetics, Robotics, Information, and Nanotechnology), GRAIN (Genetics, Robotics, Artificial Intelligence, and Nanotechnology), [17] and BANG (Bits, Atoms, Neurons, and Genes). [18] The spate of acronyms is used for varied rhetorical purposes, but fundamental to all of them is the sense that technologies build with and on each other, effecting change.
These ten principles are drawn from two decades of following developments in biotechnology and nanotechnology as a social scientist of science, a journalist, a funded researcher, a blogger, an academic in science and technology studies, a consultant in emerging technologies, and an author. They are introduced to begin a much needed debate on regulating emerging technologies of all sorts, though as the body of this paper indicates, it might be prudent to take a close look at the newly developing field of synthetic biology.
The discipline of foresight is grossly underdeveloped. The algorithms we use to determine risk in analysis and management demand data, and emergence is rife with uncertainty. Given the low levels of predictability, we have underinvested in designing new models for sound risk analysis. Trying to insert imprecise data into old algorithms that demand specific data is doomed to failure. As such, we need formulas that allow high levels of uncertainty to be expressed, as the sketch below suggests.
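One way to read the call for formulas that express uncertainty is to replace single-point inputs with distributions. A minimal, hypothetical sketch in Python (the probability and consequence ranges are invented for illustration, not drawn from any nanomaterial dataset): instead of computing risk as probability times consequence from precise values, sample both from wide intervals and report the spread rather than a single number.

```python
import random

random.seed(1)

# Hypothetical, wide-interval inputs standing in for the imprecise data
# that emerging technologies actually provide.
P_LOW, P_HIGH = 1e-5, 1e-2      # assumed plausible range for probability of harm
C_LOW, C_HIGH = 1e3, 1e7        # assumed plausible range for consequence (arbitrary units)

def sampled_risk():
    """One draw of risk = probability x consequence from the uncertain ranges."""
    p = random.uniform(P_LOW, P_HIGH)
    c = random.uniform(C_LOW, C_HIGH)
    return p * c

draws = sorted(sampled_risk() for _ in range(100_000))
p05, p50, p95 = draws[5_000], draws[50_000], draws[95_000]

# Reporting an interval keeps the uncertainty visible instead of hiding it
# behind a single 'expected' value.
print(f"risk, 5th-95th percentile: {p05:.1f} to {p95:.1f} (median {p50:.1f})")
```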
When Weaver discussed science and complexity, he discussed time and space and the limitations they present when resolving truly complex phenomena. Because of the extensive number of variables in a complex system, some become discounted. Generally, when conservative and freethinking variables are tested against each other, the middle fades away. We tend to focus on polar opposite values (all good, all bad) rather than the nuanced values found in the middle. As Weaver explained, we are tempted to oversimplify and say that a methodology went from one extreme to the other, leaving untouched a great middle region in which lies a very substantial number of relevant variables. [6]
In a future-anchored counterfactual, the antecedent and the consequent are both false, hence its truth value as a foresight tool must lie elsewhere, if it exists at all. Goodman [19] suggested conjoining a counterfactual conditional with a set of conditions (contextualizing it). Of course, determining the context given the lack of data on the future produces a status quo bias. We predict the future as an extension of the past, which has led to some highly peculiar and inaccurate projections. [20]
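The logical point can be stated more formally. A sketch in standard notation, paraphrasing rather than quoting Goodman: read as a material conditional, a future-anchored counterfactual with a false antecedent is vacuously true, which is why its value as a foresight tool must come from the background conditions we conjoin with it.

```latex
% A future-anchored counterfactual read as a material conditional:
% with a false antecedent A, the conditional is vacuously true.
\[
  A \rightarrow C \quad \text{is true whenever } A \text{ is false.}
\]
% Goodman's suggestion, paraphrased: evaluate the counterfactual by whether C
% follows from A conjoined with a set of cotenable background conditions S
% (the context), which is exactly what we lack data to specify about the future.
\[
  (A \wedge S) \rightarrow C
\]
```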
Anyone with a rudimentary understanding of mathematics knows that an infinite impact multiplied by anything but zero will result in an infinite value. Low-probability, high-consequence (LPHC) claims, while not sound objectively, are very persuasive to some audiences. The mental shortcuts used by less informed publics favor hyperbolic claims of all sorts, including LPHC claims. According to Suskind, an LPHC strategy he calls the 'one percent doctrine' was at the core of America's anti-terrorism policy under the Bush Administration. [21] The underlying premise is that terrorist threats with even a one percent probability must be treated as certainties. This led to a foreign policy that found support in false claims about weapons of mass destruction, enhanced interrogation methods, and rendition.
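The arithmetic behind the 'one percent doctrine' is ordinary expected-value reasoning, sketched here in standard notation rather than taken from any source cited above: once the consequence term is allowed to be unbounded, no non-zero probability can discount it.

```latex
% Expected loss as probability times consequence.
\[
  \mathbb{E}[\text{loss}] = p \cdot C
\]
% If the consequence is treated as effectively infinite, any non-zero probability
% yields an unbounded expected loss -- which is how a threat with a one percent
% probability comes to be handled as if it were a certainty.
\[
  \lim_{C \to \infty} p \cdot C = \infty \quad \text{for every } p > 0
\]
```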
Balancing LP and HC is difficult because they are psychological antagonists.
'High magnitude means you should take precautions. Low probability means you should shrug the problem off. So most of us have trouble focusing simultaneously and equally on the two. We pick one. If we choose to focus on magnitude, then even a vanishingly unlikely scenario is unacceptable, because it's so awful. If we choose to focus on probability, then even the prospect of worldwide catastrophe is tolerable, because it's such a long shot'. [22]
Worst case scenarios are built on this model: [23] nuclear winter, pandemics, asteroid collisions, omnivorous robots devouring the planet, death panels, and so on. Worst case scenarios, like Frankenstein's monster, can also be unpredictably destructive, undermining both preparedness and the very values we seek to promote. They can distort debate, limit options, rationalize human rights abuses, and undermine equality and social justice. [24]
Technologies that risk irreversible damage should be subject to heightened scrutiny. Yet why an impact may be irreversible may have less to do with any intrinsic quality of the impact than with our analytical weaknesses. What seems irreversible today may be reversible tomorrow, and there is a continuum of cases with different degrees of difficulty in reversing. Sunstein talks about this phenomenon when he posits his 'irreversible harm precautionary principle' and the 'special steps' needed to ameliorate such harms. [23]
Arguments built on claims of irreversibility are highly suspect. Arguments of this ilk begin with a false assumption of knowing enough to make the claim when the argument in fact rests on a lack of knowledge as its backing. Making a decision based upon what we do not know is not progressive, and this is the primary complaint posed against a hard reading of the precautionary principle. [25] In addition, arguers and respondents using irreversibility as an argument know they are closing down rather than opening up debate about a technology. Rather than discussing a technology in net terms, within multiple contexts, in its myriad variants, and so forth, we are left with a conservative bias toward rejecting it on its face, by definition. This becomes especially problematic when the impacts affect future generations. While it may be true that future generations may experience events that cancel an impact, convert an impact event into a net beneficial event, or repurpose the impact event for some greater social good, these phenomena only justify some discounting of impacts across multiple generations. While we cannot predict the future accurately, it is unfair to assume future generations will discover solutions to the problems we leave behind after we are gone.
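A worked illustration of the discounting point, assuming a standard exponential discount model with a placeholder rate rather than anything proposed in the sources above: even a modest rate shrinks far-future impacts dramatically, which is why 'some discounting' is easy to justify while relying on future generations to undo the harm is not.

```latex
% Present value of an impact X occurring t years from now at discount rate r.
\[
  PV = \frac{X}{(1+r)^{t}}
\]
% With a placeholder rate of r = 3\%, an impact felt 100 years from now
% is weighted at roughly 5\% of its face value today.
\[
  \frac{1}{(1.03)^{100}} \approx 0.052
\]
```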
New products tend to gravitate to markets frequented by those with high levels of disposable income. Fundamental to capitalism, industry develops product lines for which there is a profitable market. In addition, young technologies carry a research and development surcharge of sorts that is passed along to consumers. This is why prescription drugs and computer upgrades are so expensive until their generic versions are produced by the reverse engineers. This is particularly true for emerging technologies such as biotechnology and nanotechnology, especially bionanotechnology and nanomedicine. Research and development is expensive, and public support for fundamental R&D does not provide enough of an incentive to develop humanitarian product lines. As such, some of the first products are luxury goods and elective medical procedures.
Altruistic claims associated with benefits to marginal populations are often used as rhetorical appeals to suspend inquiry into side-effects and collateral concerns by widening the net-benefits base: millions of poor and destitute people are added on as potential markets. In addition, production and fabrication in a global economy tend to settle where wages are lower, if not lowest, and where the EHS footprint is easily ignored by those who buy the finished products. For example, an alleged nanoparticle event surfaced in China. While the research design and data have been belittled by toxicologists, the event speaks to this concern: Song reported the deaths of Chinese women from respiratory distress in a plant that produced carbon nanotubes (CNTs). [26] While CNTs may be used in making better baseball bats and golf driver shafts, these products may offer little to factory workers in impoverished settings.
There are always less attractive alternatives to new technologies, such as the old technologies they would replace. Steam-powered looms of the 19th century replaced crafts-persons and artisans who worked out of their homes rather than in a factory. Reversion to bygone eras is possible. While a new Luddism is not suggested, one must ponder how much of an EHS footprint is justified if there are alternatives with a smaller footprint and a little less efficiency. Too often those who produce a product have such a strong interest in the intellectual property and development costs that they become closed-minded to alternatives. They fail to apply the type of multi-criteria analysis that might yield a better product with some minor tweaking of the development and production processes. Alternative analysis is what Rittel and Webber called a wicked problem. [27]
An idealized planning system would involve altruistic participants concerned solely with the public good rather than profit. Given the unlikelihood of planners turning decision making over to the inexpert, Rittel and Webber suggested an on-going, cybernetic process of governance that incorporates systematic procedures for continuously searching out goals, identifying problems, forecasting uncontrollable contextual changes, inventing alternative strategies, tactics, and time-sequenced changes, stimulating alternative and plausible action sets and their consequences, and so forth. [27] Unfortunately, motivations to engage in this type of oversight are not the natural outgrowth of a laissez-faire system.
The claim that technological progress is inevitable has been made throughout human history: if we don't make it, someone else will, and that someone is much less scrupulous. In defense parlance, we need to build biological weapons in order to build defenses against their use. While there are individuals with a clearly defined sense of the public good, technology is institutionally driven, and an institutional public good may be at odds with an individual's sense of the public good. Though it is difficult to develop much in secret these days, breakthroughs are kept below the radar in business and industry to protect intellectual property and maximize licensing returns. Given clandestine industrial research and development activities, claims of inevitability are more rhetorically driven than data driven.
While historical events like the launch of Sputnik are referenced as cases in point, by and large inevitability claims tend to be used to suspend thoughtful reflection and critical analysis of problematic phenomena of all sorts, especially emerging technology. According to Katz-Hyman and Krepon, in their critique of space weaponry, historical inevitability is a heavily freighted and much contested concept. Every historical chapter contains its unique passages that are read and weighted differently by whoever reports the history, and the 'historical record' usually contains many blank pages reflecting unanswered questions. We also know that historical parallels can be forced and made to conform to policy preferences; policy advocates who employ this line of argument usually majored in other subjects. Historical determinism can therefore be a flawed and dangerous enterprise. [28] Claims of inevitability are made from some vantage point in the present with inadequate information and powered by speculation. If history is any guide, we should heed the warning made by Bernard Brodie:
'History is at best an imperfect guide to the future, but when imperfectly understood and interpreted it is a menace to sound judgment.' [29]
Regulation entails a laundry list of options, from outright bans to complaint-instigated reactions. The former freezes development and the latter does little for the complainant beyond compensation. With high levels of uncertainty, new regulatory models need to be considered, ranging from self-regulation prodded on by insurance and liability to more recognized formulas like control banding. [30] Banding offers opportunities if we perceive it as an interim solution. Control banding strategies offer simplified solutions for controlling worker exposures to constituents found in the workplace in the absence of firm toxicological and exposure data. These strategies may be particularly useful in nanotechnology applications, considering the overwhelming level of uncertainty over what nanomaterials and nanotechnologies present as potential work-related health risks, what about these materials might lead to adverse toxicological activity, how risk related to these might be assessed, and how to manage these issues in the absence of this information. [31] As more and more information surfaces, the banding strategy responds far more organically than traditional rule-based regulations.
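A minimal sketch of the control-banding idea, with invented band labels and an invented lookup matrix rather than the actual scoring scheme of the pilot tool in [31]: qualitative hazard and exposure bands are combined through a table to yield a control regime, and as better data arrive a material simply moves to a different cell rather than forcing a rewrite of the rule.

```python
# Illustrative control-banding lookup. Band labels, matrix, and recommendations
# are hypothetical placeholders, not the scheme used by Paik, Zalk & Swuste [31].
HAZARD_BANDS = ["low", "medium", "high", "unknown"]     # severity of the material
EXPOSURE_BANDS = ["rare", "occasional", "frequent"]     # likelihood/extent of exposure

# Rows: hazard band; columns: exposure band. An 'unknown' hazard defaults to the
# most protective row, so uncertainty itself triggers caution.
CONTROL_MATRIX = {
    "low":     ["general ventilation", "general ventilation", "engineering controls"],
    "medium":  ["general ventilation", "engineering controls", "containment"],
    "high":    ["engineering controls", "containment", "seek specialist advice"],
    "unknown": ["containment", "seek specialist advice", "seek specialist advice"],
}

def control_band(hazard: str, exposure: str) -> str:
    """Map qualitative hazard and exposure bands to a recommended control regime."""
    if hazard not in HAZARD_BANDS or exposure not in EXPOSURE_BANDS:
        raise ValueError("unrecognized band")
    return CONTROL_MATRIX[hazard][EXPOSURE_BANDS.index(exposure)]

# A nanomaterial with no firm toxicology data but frequent handling:
print(control_band("unknown", "frequent"))   # -> "seek specialist advice"
```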
This direction is wholly consistent with the White House policy statement on nanotechnology and nanomaterials. [32] The statement asserts nanotechnology is neither intrinsically benign nor harmful and demands an evidence-based philosophy on regulation. This 'best-science' approach discourages knee-jerk reactionary decision making. [33]
This argument claims scruples at home will drive self-serving and irresponsible developers abroad. Apart from denying society the possible benefits, it may lead to the constitution of 'technological paradises', i.e. zones without regulatory frameworks where research is carried out and is open to possible misuse. [34] If we are led to be suspicious of the business and industrial communities because they are more interested in profit than safety, then we must remain aware that working to protect public safety here at home can have predictably disastrous consequences for others even less able to protect themselves. There are many places on the planet where EHS concerns are secondary or non-existent. In a globalized world there is always some place somewhere to make products cheaply. In addition, regulation increases reporting, and that process alone could be sufficiently onerous to foster technological paradises outside the reach of American regulatory power.
Some, like Lux Research, have argued technological paradises will only provide breathing room and companies will find their EHS problems following them. [35] However, the developing world has tended to avoid regulation and has no track record of greening per se; international agreements have not been able to guarantee fair labor and it is doubtful they can ensure EHS; and there are many ways for a corporation to insulate itself from liability when using a supplier. On the other hand, some industries prefer a regulatory environment because it provides predictability. Indeed, the more stringent the regulations, the more likely it is that large, established transnational industries can stranglehold start-ups and consume them. Regulation can be a powerful concentration and monopoly device.
Nanotechnologies, like any other type of technology, cannot be separated from the socio-economic and political context in which they are generated, commercialized and utilized. Seldom do promoters of new technologies fail to play the altruism card: 'With advances in technology come huge advances for the poor and downtrodden.' While this is not inherently false, trickle-down economics has found a home in technology theory. The last to benefit from technological advances are those least able to afford them. The more fundamental impacts and implications of the diffusion of nanotechnologies for the poor, however, are due to their insertion in current economic and social tendencies. These tendencies use technological innovation as a direct means of profit and the concentration of wealth; solving the problems of the poor is, at best, an indirect political consequence. [36]
Over the past thirty years, the world has seen the rapid development of technologies such as microelectronics, information technologies, biotechnologies and telecommunications. But this technological advancement, with its applications in almost every sector of production and the creation of new sectors, has not helped to bridge the poverty gap. [36] Indeed, Foladori and Invernizzi outlined two of the more formidable challenges facing the developing world: water and energy. These problems move far beyond the manufacture of nanomaterials and nanodevices. They claim:
'many of the companies have a narrow technical vision of the relationship between society and technology, and believe that utilizing nanotechnologies to benefit the poor is merely a problem of political will and ethics'. [36]
This is a rather simplistic assessment by anyone's metrics. New technologies that can impact employment are very expensive when first introduced, and that tech-tax does not dissipate for some time. For example, personal computers, while much less expensive than they were a decade ago, are still beyond the means of most people in the developing world. Most of the control over intellectual property, such as patents, resides in the developed world. Product substitution may impact employment in the natural resource industries, which are highly concentrated in developing economies.
While I am a huge fan of holding people and institutions accountable, I have grown weary of groups of individuals who appoint themselves protectors of the public trust and rail against technology because it is technology. Public interest groups, like lobbyists, seldom represent public sentiment. Ninety-one percent of elites in public interest groups identified themselves as liberal, while only 25 percent of the American public self-identify as liberals. [37] Almost all of them think they represent the public interest, but the issue here is sentiment, not interest. Almost any action can be framed as being in the public interest, though a much smaller subset actually reflects public sentiment. Public interest groups and NGOs tend to absorb a lot of the gaze from critics of technology. Controversially, they are self-appointed and have as many vested interests as those they criticize. NGOs want to function as a third estate much like the traditional press, enabling minority voices to be heard. Unfortunately, much like the press, they need reputation and revenue, which can shift their stance.
An interesting analysis of the performance of NGOs can be gleaned from their role in the GMO debates. One critic characterized the NGOs in this debate as 'antibiotechnology zealots continuing to wage their campaigns of propaganda and vandalism'. [38] Others were kinder.
'Some critics argue that governments are reacting to NGO campaigns, rather than scientifically demonstrated environmental and health risks, which illustrates the power of "unsubstantiated scare mongering" by NGO opponents of genetic engineering'. [38]
In general, critics argue some NGOs in the GMO debate do not represent the public because they fail to account for the needs of the developing world, overestimate potential environmental and public health risks, and are inconsistent in their regulatory concerns. [39] The GMO debate was dominated by Northern NGOs whose greater access to regulators, highly educated representatives, and large funding sources produced agendas too focused on the relative luxuries of food safety and biodiversity loss, the issues that mobilize NGO supporters and secure funding for NGOs in more developed countries. [40] Some NGOs are excellent at what they do, while others are just sensational. In the nanotechnology world, groups like Environmental Defense and Greenpeace challenged early claims about all things nanotechnological until they determined this working space was not optimal for who they were and what they sought to accomplish. This left the field open for the likes of the ETC Group and Friends of the Earth-Australia to tell their fanciful, sometimes shamelessly inaccurate, tales about globalization and industrial greed.
Other naysayers are lone wolves. They tend to be critics of technology, such as nanotechnology, but bring to the subject a miscellany of interests including anti-globalization and technophobia. They work out of their homes and garages posting complaints on their blogs and web sites. Instead of dealing with the essential nature of nanotechnology they dabble around the edges of the civilizing process of technology. Unfortunately, many of these observations can be made of many of the proponents as well, a subject that cannot be adequately dealt with in this limited space. It will have to wait for another day, probably another book. [13]
Nanoscience and nanotechnology have been associated with claims of great prospects. However, breakthroughs that are truly profound and paradigm-threatening have not happened as predicted. Amid all the claims and counterclaims, portents and promises, trying to separate accuracies from hyperbole remains problematic. Given the complexity of the task, we may need to re-examine the sensibilities we bring to the undertaking at hand. Rethinking how we understand emerging technologies, while not the solution to all our concerns, may open new approaches that help us comprehend and examine the formidable questions about technologies that seem to have lives of their own.
This work was supported by a grant from the National Science Foundation (NSF 0809470, Nanotechnology Interdisciplinary Research Team: Intuitive Toxicology and Public Engagement). All opinions expressed within are the author's and do not necessarily reflect those of the National Science Foundation or North Carolina State University.
[1] North Carolina State University, Raleigh, North Carolina, USA. The author has a doctorate from New York University and is a full professor in a Department of Communication, where he teaches risk, science, and technology communication. He also teaches in a Science and Technology Studies program and an Environmental Studies program. He has written many articles and chapters on the social dimensions of nanotechnology, received grants from the National Science Foundation to research and criticize the public understanding of nanotechnology, and in 2006 wrote Nano-Hype: The Truth Behind the Nanotechnology Buzz.
[2] Bodansky, D. (1996, 2004). Nuclear Energy: Principles, Practices, and Prospects. NY: Springer-Verlag. 32.
[3] McLuhan, M. (2005). Critical Evaluations in Cultural Theory. Geneske, G., Ed. NY: Routledge. 155.
[4] Stoll, C. (1995). "The Internet? Bah!" Newsweek. February 27. http://www.newsweek.com/1995/02/26/the-internet-bah.html. Accessed July 1, 2011.
[5] van Sijl, C. (2003). "Prolepsis according to Epicurus and the Stoa: English summary of master thesis." November. http://www.phil.uu.nl/~claartje/summ.pdf. Accessed July 1, 2011.
[6] Weaver, W. (1948). "Science and Complexity". American Scientist 36 (4): 536. Accessed July 6, 2012.
[7] Lewes, GH. (1875). Problems of Life and Mind (First Series) 2. London: Trubner.
[8] Bedau, MA & Humphreys, P. (2008). Emergence: Contemporary Readings in Philosophy and Science. NY: MIT Press.
[9] Adner, R. & Levinthal, DA. (2002). "The emergence of emerging technologies." California Management Review. 45:1. Fall. http://faculty.insead.edu/adner/research/cmr-Emergence%20of%20Emerging%20Technologies.pdf. Accessed July 1, 2011.
[10] Drexler, KE. (1986). Engines of Creation. NY: Anchor Books.
[11] NNI Endorsements. (2008). http://www.nsf.gov/crssprgm/nano/reports/endorse.jsp. July 10. Accessed July 1, 2011.
[12] Schmidt, K. (2007). Green Nanotechnology: It's Easier than You Think. Washington, DC: Project on Emerging Nanotechnologies. April. http://www.nanotechproject.org/file_download/files/GreenNano_PEN8.pdf. Accessed July 1, 2011.
[13] Berube, DM. (2006). Nano-Hype: The Truth Behind the Nanotechnology Buzz. Amherst, NY: Prometheus Books.
[14] Almond, B. (1995). Introducing Applied Ethics. NY: Wiley-Blackwell.
[15] Converging Technologies (NBIC). (n.dat.) Nano2Life: Bringing Nanotechnologies to Life. http://www.nano2life.org/content.php?id=22. Accessed July 1, 2011.
[16] Joy, B. (2000). Why the future doesn't need us. Wired. 8:04. April. http://www.wired.com/wired/archive/8.04/joy.html. Accessed July 1, 2011.
[17] ETC Group. (2003). The Big Down: From Genomes to Atoms, Atomtech: Technologies Converging at the Nano-scale. January. http://www.etcgroup.org/upload/publication/171/01/thebigdown.pdf. Accessed July 1, 2011.
[18] ETC Group. (2003). The Little BANG Theory. February 6. http://www.etcgroup.org/upload/publication/169/01/combang2003.pdf . Accessed July 1, 2011.
[19] Goodman, N. (1973). Fact, Fiction, and Forecast. 3rd Edition. New York: Bobbs-Merrill Co., Inc.
[20] Gardner, D. (2010). Future Babble: Why Expert Predictions Fail and Why We Believe Them Anyway. Toronto: McClelland & Stewart.
[21] Suskind, R. (2006). The One Percent Doctrine: Deep Inside America's Pursuit of its Enemies Since 9/11. NY: Simon & Schuster.
[22] Sandman, PM. (2004). Worst case scenarios. The Peter Sandman Risk Communication Website. http://www.psandman.com/col/birdflu.htm. August 28. Accessed July 5, 2011.
[23] Sunstein, CR. (2007). Worst-Case Scenarios. Cambridge, MA: Harvard UP.
[24] Annas, GJ. (2010) Worst Case Bioethics: Death, Disaster, and Public Health. NY: Oxford UP.
[25] Morris, J. (2000). Rethinking Risk and the Precautionary Principle. Woburn, MA: Butterworth-Heinemann.
[26] Song, Y., Li, X. & Du, X. (2009). Exposure to nanoparticles is related to pleural effusion, pulmonary fibrosis and granuloma. European Respiratory Journal. Vol. 34, pp. 559-567.
[27] Rittel, H. & Webber, M. (1973). "Dilemmas in a General Theory of Planning," pp. 155-169, Policy Sciences, Vol. 4, Elsevier Scientific Publishing Company, Inc., Amsterdam. [Reprinted in N. Cross (ed.), Developments in Design Methodology, J. Wiley & Sons, Chichester, 1984, pp. 135-144.]
[28] Katz-Hyman, M. & Krepon, M. (2003). Assurance or Space Dominance? The Case Against Weaponizing Space. Washington, DC: Henry L. Stimson Center, April. http://www.stimson.org/images/uploads/research-pdfs/spacefront.pdf. Accessed July 7, 2011.
[29] Brodie, B. (1946). The Absolute Weapon, NY: Harcourt, Brace, and Co.
[30] Berube, DM. (2006). Regulating nanoscience: A proposal and a response to J. Clarence Davies. Nanotechnology Law & Business. December. 485-506.
[31] Paik, SY., Zalk, DM. & Swuste, P. (2008). Application of a pilot control banding tool for risk level assessment and control of nanoparticle exposures. Ann. Occup. Hyg. 52(6). 419-428.
[32] Holdren, JP., Sunstein, CR. & Siddiqui, IA. (2011). Policy Principles for the U.S. Decision-Making Concerning Regulation and Oversight of Applications of Nanotechnology and Nanomaterials June 9. http://www.whitehouse.gov/sites/default/files/omb/inforeg/for-agencies/nanotechnology-regulation-and-oversight-principles.pdf. Accessed July 8, 2012.
[33] Monica, Jr., JC. (2011). White House Issues Nanotechnology EHS Policy Statement. Nanotechnology Law Report. June. http://www.nanolawreport.com/2011/06/articles/white-house-issues-nanotechnology-ehs-policy-statement/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+NanotechnologyLawReport+%28Nanotechnology+Law+Report%29#axzz1PQMnwcXT. Accessed July 8, 2011.
[34] Commission of the European Communities. (2004). Communication from the Commission: Towards a European strategy for nanotechnology December 5. http://ec.europa.eu/nanotechnology/pdf/nano_com_en.pdf. Accessed July 8, 2011.
[35] Holman, M. (2006). Taking Action on Nanotech Environmental, Health, and Safety Risks. May. http://ethics.iit.edu/NanoEthicsBank/node/710. Accessed July 8, 2011.
[36] Foladori, G. & Invernizzi, N. (2005). Nanotechnology in its socio-economic context. Science Studies. 18(2). 67-73.
[37] Lerner, R., Nagai, A.K. & Rothman, S. (1996). "General Social Survey", American Elites. New Haven: Yale UP.
[38] Borlaug, NE. (2000). Ending World Hunger: The Promise of Biotechnology and the Threat of Antiscience Zealotry, 124 Plant Physiology 487, 488, available at http://www.agbioworld.org/listarchive/view.php?2id=1059.
[39] Teel, J. (2002). Rapporteur's Summary of the Deliberative Forum: Have NGOs Distorted or Illuminated the Benefits and Hazards of Genetically Modified Organisms? Colorado Journal of International Environmental Law and Policy. 13. Winter. Accessed Lexis-Nexis.
[40] Tripp, R. (2000). Overseas Development Institute, GMOs and NGOs: Biotechnology, The Policy Process, and the Presentation of Evidence, 60 Nat. Resources Persp.