THE FUTURIST magazine's Top Ten Forecasts for 2009 and beyond
Cover story from December 2008 edition of The Futurist magazine
OUTLOOK 2009: More sex, fewer antidepressants; more transparency online, less privacy in real life. These are among the more than 70 forecasts for your changing world in the World Future Society's latest roundup.
Forecast # 1: Everything you say and do will be recorded by 2030. By the late 2010s, ubiquitous unseen nanodevices will provide seamless communication and surveillance among all people everywhere. Humans will have nanoimplants, facilitating interaction in an omnipresent network. Everyone will have a unique Internet Protocol (IP) address. Since nano storage capacity is almost limitless, all conversation and activity will be recorded and recoverable. — Gene Stephens, “Cybercrime in the Year 2025,” THE FUTURIST July-Aug 2008.
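A hedged aside on the "unique IP address for everyone" claim: in raw address-space terms, IPv6 (which the forecast does not name, but which is the obvious candidate) already makes this trivial. The population figure below is an assumed, rough projection, not from the article.

```python
# IPv6 offers a 128-bit address space.
ipv6_addresses = 2 ** 128

# Assumed rough world-population projection for 2030 (not from the article).
world_population_2030 = 8_500_000_000

# Even so, each person could claim an astronomical number of addresses.
addresses_per_person = ipv6_addresses // world_population_2030
```

The bottleneck for the forecast, in other words, is nanodevice deployment and surveillance policy, not address scarcity.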
Forecast #2: Bioviolence will become a greater threat as the technology becomes more accessible. Emerging scientific disciplines (notably genomics, nanotechnology, and other microsciences) could pave the way for a bioattack. Bacteria and viruses could be altered to increase their lethality or to evade antibiotic treatment.— Barry Kellman, “Bioviolence: A Growing Threat,” THE FUTURIST May-June 2008.
Forecast #3: The car's days as king of the road will soon be over. More powerful wireless communication that reduces demand for travel, flying delivery drones to replace trucks, and policies to restrict the number of vehicles owned in each household are among the developments that could thwart the automobile's historic dominance over the environment and culture. If current trends were to continue, the world would have to make way for a total of 3 billion vehicles on the road by 2025. — Thomas J. Frey, “Disrupting the Automobile's Future,” THE FUTURIST, Sep-Oct 2008.
Forecast #4: Careers, and the college majors that prepare students for them, are becoming more specialized. An increase in unusual college majors may foretell the growth of unique new career specialties.
Instead of simply majoring in business, more students are beginning to explore niche majors such as sustainable business, strategic intelligence, and entrepreneurship.
Other unusual majors that are capturing students' imaginations: neuroscience and nanotechnology, computer and digital forensics, and comic book art. Scoff not: The market for comic books and graphic novels in the United States has grown 12% since 2006. —THE FUTURIST, World Trends & Forecasts, Sep-Oct 2008.
Forecast #5: There may not be world law in the foreseeable future, but the world's legal systems will be networked. The Global Legal Information Network (GLIN), a database of local and national laws for more than 50 participating countries, will grow to include more than 100 countries by 2010. The database will lay the groundwork for a more universal understanding of the diversity of laws between nations and will create new opportunities for peace and international partnership. — Joseph N. Pelton, "Toward a Global Rule of Law: A Practical Step Toward World Peace," THE FUTURIST Nov-Dec 2007.
Forecast #6: The race for biomedical and genetic enhancement will — in the twenty-first century — be what the space race was in the previous century. Humanity is ready to pursue biomedical and genetic enhancement, says UCLA professor Gregory Stock, and the money is already being invested. But, he says, “We'll also fret about these things — because we're human, and it's what we do.” — Gregory Stock quoted in THE FUTURIST, Nov-Dec 2007.
Forecast #7: Professional knowledge will become obsolete almost as quickly as it's acquired. An individual's professional knowledge is becoming outdated at a much faster rate than ever before. Most professions will require continuous instruction and retraining. Rapid changes in the job market and work-related technologies will necessitate job education for almost every worker. At any given moment, a substantial portion of the labor force will be in job retraining programs. — Marvin J. Cetron and Owen Davies, "Trends Shaping Tomorrow's World, Part Two," THE FUTURIST May-June 2008.
Forecast #8: Urbanization will hit 60% by 2030. As more of the world's population lives in cities, rapid development to accommodate them will make existing environmental and socioeconomic problems worse. Epidemics will be more common due to crowded dwelling units and poor sanitation. Global warming may accelerate due to higher carbon dioxide output and loss of carbon-absorbing plants. — Marvin J. Cetron and Owen Davies, “Trends Shaping Tomorrow's World,” THE FUTURIST Mar-Apr 2008.
Forecast #9: The Middle East will become more secular while religious influence in China will grow. Popular support for religious government is declining in places like Iraq, according to a University of Michigan study. The researchers report that in 2004 only one-fourth of respondents polled believed that Iraq would be a better place if religion and politics were separated. By 2007, that proportion was one-third. Separate reports reveal a countertrend in China. — World Trends & Forecasts, THE FUTURIST Nov-Dec 2007.
Forecast #10: Access to electricity will reach 83% of the world by 2030. Electrification has expanded around the world, from 40% connected in 1970 to 73% in 2000, and may reach 83% of the world's people by 2030. Electricity is fundamental to raising living standards and access to the world's products and services. Impoverished areas such as Sub-Saharan Africa still have low rates of electrification; Uganda is just 3.7% electrified. — Andy Hines, “Global Trends in Culture, Infrastructure, and Values,” Sep-Oct 2008.
More About Singularity
Here is a very interesting 80-minute Ken Humbs film, Building Gods (Rough Cut), made in 2006, featuring artificial intelligence researchers Hugo de Garis and Kevin Warwick, and philosopher Nick Bostrom.
And also below is a new 44-minute Next World video, Future of Intelligence, featuring Ray Kurzweil [who popularized the term "singularity," discussed here alongside Joel Garreau, Vernor Vinge and Bruce Sterling], Guido Jouret, Hiroshi Ishiguro, Stephen Jacobsen, Rod Humble, Seth Goldstein, Dave Evans, Michel Parent, Stephane Aubarbier, Jeff Kleiser, Marthin De Beer, Steve Kieron, Marie Hattar, Brian Conte, James Kuffner, and Kevin Warwick.
Illustrative table of Who's Who in Singularity: Click here for a large version of this chart [PDF format].
Signs of the Singularity
By Vernor Vinge
This is part of IEEE Spectrum's SPECIAL REPORT: THE SINGULARITY
I think it's likely that with technology we can in the fairly near future create or become creatures of more than human intelligence. Such a technological singularity would revolutionize our world, ushering in a posthuman epoch. If it were to happen a million years from now, no big deal. So what do I mean by “fairly near” future? In my 1993 essay, “The Coming Technological Singularity,” I said I'd be surprised if the singularity had not happened by 2030. I'll stand by that claim, assuming we avoid the showstopping catastrophes—things like nuclear war, superplagues, climate crash—that we properly spend our anxiety upon.
In that event, I expect the singularity will come as some combination of the following:
The AI Scenario: We create superhuman artificial intelligence (AI) in computers.
The IA Scenario: We enhance human intelligence through human-to-computer interfaces—that is, we achieve intelligence amplification (IA).
The Biomedical Scenario: We directly increase our intelligence by improving the neurological operation of our brains.
The Internet Scenario: Humanity, its networks, computers, and databases become sufficiently effective to be considered a superhuman being.
The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being.
The essays in this issue of IEEE Spectrum use similar definitions for the technological singularity but variously rate the notion from likely to totally bogus. I'm going to respond to arguments made in these essays and also mine them for signs of the oncoming singularity that we might track in the future.
Philosopher Alfred Nordmann criticizes the extrapolations used to argue for the singularity. Using trends for outright forecasting is asking for embarrassment. And yet there are a couple of trends that at least raise the possibility of the technological singularity. The first is a very long-term trend, namely Life's tendency, across aeons, toward greater complexity. Some people see this as unstoppable progress toward betterment. Alas, one of the great insights of 20th-century natural science is that Nature can be the harshest of masters. What we call progress can fail. Still, in the absence of a truly terminal event (say, a nearby gamma-ray burst or another collision such as made the moon), the trend has muddled along in the direction we call forward. From the beginning, Life has had the ability to adapt for survival via natural selection of heritable traits. That computational scheme brought Life a long way, resulting in creatures that could reason about survival problems. With the advent of humankind, Life had a means of solving many problems much faster than natural selection.
In the last few thousand years, humans have begun the next step, creating tools to support cognitive function. For example, writing is an off-loading of memory function. We're building tools—computers, networks, database systems—that can speed up the processes of problem solving and adaptation. It's not surprising that some technology enthusiasts have started talking about possible consequences. Depending on our inventiveness—and our artifacts' inventiveness—there is the possibility of a transformation comparable to the rise of human intelligence in the biological world. Even if the singularity does not happen, we are going to have to put up with singularity enthusiasms for a long time.
Get used to it.
In recent decades, the enthusiasts have been encouraged by an enabling trend: the exponential improvement in computer hardware as described by Moore's Law, according to which the number of transistors per integrated circuit doubles about every two years. At its heart, Moore's Law is about inventions that exploit one extremely durable trick: optical lithography to precisely and rapidly emplace enormous numbers of small components. If the economic demand for improved hardware continues, it looks like Moore's Law can continue for some time—though eventually we'll need novel component technology (perhaps carbon nanotubes) and some new method of high-speed emplacement (perhaps self-assembly). But what about that economic demand? Here is the remarkable thing about Moore's Law: it enables improvement in communications, embedded logic, information storage, planning, and design—that is, in areas that are directly or indirectly important to almost all enterprise. As long as the software people can successfully exploit Moore's Law, the demand for this progress should continue.
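Moore's Law as stated here is simple compounding, which can be made concrete with a small sketch. The starting figures in the example are assumed for illustration (they are not from the essay); the function itself just encodes "doubling every two years."

```python
def moores_law(start_count, years, doubling_period=2.0):
    """Project a transistor count forward under a fixed doubling period (years)."""
    return start_count * 2 ** (years / doubling_period)

# Assumed illustrative figures: ~2,300 transistors (the 1971 Intel 4004)
# projected forward 37 years, to the 2008 date of this essay.
projected_2008 = moores_law(2_300, 2008 - 1971)
```

The striking property is the one Vinge points at: every 20 years of the trend multiplies capacity by roughly a thousand (2^10), which is why modest-sounding doubling periods carry such weight in singularity arguments.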
The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly”
Roboticist Hans Moravec may have been the first to draw a numerical connection between computer hardware trends and artificial intelligence. Writing in 1988, Moravec took his estimate of the raw computational power of the brain together with the rate of improvement in computer power and projected that by 2010 computer hardware would be available to support roughly human levels of performance. There are a number of reasonable objections to this line of argument. One objection is that Moravec may have radically underestimated the computational power of neurons. But even if his estimate is a few orders of magnitude too low, that will only delay the transition by a decade or two—assuming that Moore's Law holds.
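The "delayed by a decade or two" claim is itself just Moore's-Law arithmetic, and is worth spelling out: an underestimate by a factor F costs log2(F) extra doublings. A minimal sketch:

```python
import math

def delay_years(error_factor, doubling_period=2.0):
    """Years of additional Moore's-Law doubling needed to absorb an
    underestimate of the brain's computing power by error_factor."""
    return math.log2(error_factor) * doubling_period

# Three orders of magnitude too low -> roughly two decades of delay,
# which is the figure Vinge gestures at.
delay = delay_years(1_000)
```

This is why the objection about neuron complexity weakens rather than kills the timeline argument, so long as one grants the exponential trend itself.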
Another roboticist, Rodney Brooks, suggests in this issue that computation may not even be the right metaphor for what the brain does. If we are profoundly off the mark about the nature of thought, then this objection could be a showstopper. But research that might lead to the singularity covers a much broader range than formal computation. There is great variety even in the pursuit of pure AI. In the next decade, those who credit Moravec's timeline begin to expect results. Interestingly powerful computers will become cheap enough for a thousand research groups to bloom. Some of these researchers will pursue the classic computational tradition that Brooks is doubting—and they may still carry the day. Others will be working on their own abstractions of natural mind functions—for instance, the theory that Christof Koch and Giulio Tononi discuss in their article. Some (very likely Moravec and Brooks himself) will be experimenting with robots that cope with many of the same issues that, for animals, eventually resulted in minds that plan and feel. Finally, there will be pure neurological researchers, modeling increasingly larger parts of biological brains in silico. Much of this research will benefit from improvements in our tools for imaging brain function and manipulating small regions of the brain.
But despite Moravec's estimate and all the ongoing research, we are far short of putting the hardware together successfully. In his essay, Brooks sets several intermediate challenges. Such goals can help us measure the progress that is being made. More generally, it would be good to have indicators and counterindicators to watch for. No single one would prove the case for or against the singularity, but together they would be an ongoing guide for our assessment of the matter. Among the counterindicators (events arguing against the likelihood of the singularity) would be debacles of overweening software ambition: events ranging from the bankruptcy of a major retailer upon the failure of its new inventory management system to the defeat of network-centric war fighters by a transistor-free light infantry. A tradition of such debacles could establish limits on application complexity—independent of any claims about the power of the underlying hardware.
There are many possible positive indicators. The Turing Test—whether a human judge communicating by text alone can distinguish a computer posing as human from a real human—is a subtle but broad indicator. Koch and Tononi propose a version of the Turing Test for machine consciousness in which the computer is presented a scene and asked to “extract the gist of it” for evaluation by a human judge. One could imagine restricted versions of the Turing Test for other aspects of Mind, such as introspection and common sense.
As with past computer progress, the achievement of some goals will lead to interesting disputes and insights. Consider two of Brooks's challenges: manual dexterity at the level of a 6‑year‑old child and object-recognition capability at the level of a 2-year‑old. Both tasks would be much easier if objects in the environment possessed sensors and effectors and could communicate. For example, the target of a robot's hand could provide location and orientation data, even URLs for specialized manipulation libraries. Where the target has effectors as well as sensors, it could cooperate in the solution of kinematics issues. By the standards of today, such a distributed solution would clearly be cheating. But embedded microprocessors are increasingly widespread. Their coordinated presence may become the assumed environment. In fact, such coordination is much like relationships that have evolved between living things.
There are more general indicators. Does the distinction between neurological and AI researchers continue to blur? Does cognitive biomimetics become a common source of performance improvement in computer applications? From an entirely different direction, consider economist Robin Hanson's “shoreline” metaphor for the boundary between those tasks that can be done by machines and those that can be done only by human beings. Once upon a time, there was a continent of human-only tasks. By the end of the 1900s, that continent had become an archipelago. We might recast much of our discussion in terms of the question, “Is any place on the archipelago safe from further inundation?” Perhaps we could track this process with an objective economic index—say, wages divided by world product. However much human wealth and welfare may increase, a sustained decline in the ratio of wages to world product would argue a decline in the human contribution to the economy.
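Hanson's proposed index — wages divided by world product — is easy to operationalize, and a sketch makes the "sustained decline" criterion precise. The data in the usage comments are assumed, illustrative numbers, not real statistics.

```python
def wage_share_series(wages_by_year, product_by_year):
    """Return {year: total wages / world product} for each year in both series."""
    return {y: wages_by_year[y] / product_by_year[y]
            for y in wages_by_year if y in product_by_year}

def sustained_decline(series):
    """True if the index falls monotonically across the sampled years."""
    vals = [series[y] for y in sorted(series)]
    return all(later < earlier for earlier, later in zip(vals, vals[1:]))

# Illustrative (assumed) numbers: wages grow, but world product grows faster,
# so the human share of the economy shrinks even as wages rise.
example = wage_share_series({2000: 30, 2010: 33}, {2000: 50, 2010: 60})
```

The point of the design is Vinge's caveat: absolute human wealth can rise while the ratio falls, so only the ratio, not the wage level, tracks the "inundation of the archipelago."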
Machine/network life-forms will be faster, more labile, and more varied than what we see in biology. Digital Gaia is a hint of how alien the possibilities are
Some indicators relate different areas of technological speculation. In his essay, physicist Richard A.L. Jones critiques molecular nanotechnology (MNT). Even moderate success with MNT could support Moore's Law long enough to absorb a number of order-of-magnitude errors in our estimates of the computing power of the brain. At the same time, some of the advanced applications that K. Eric Drexler describes—things like cell-repair machines—depend on awesome progress with software. Thus, while success with MNT probably does not need the technological singularity (or vice versa), each would be a powerful indicator for the other.
Several of the essays discuss the plausibility of mind uploads and consequent immortality for “our digitized psyches,” ideas that have recently appeared in serious nonfiction, most notably Ray Kurzweil's The Singularity Is Near. As with nanotechnology, such developments aren't prerequisites for the singularity. On the other hand, the goal of enhancing human intelligence through human-computer interfaces (the IA Scenario) is both relevant and in view. Today a well-trained person with a suitably provisioned computer can look very smart indeed. Consider just a slightly more advanced setup, in which an Internet search capability plus math and modeling systems are integrated with a head‑up display. The resulting overlays could give the user a kind of synthetic intuition about his or her surroundings. At a more intimate but still noninvasive level, DARPA's Cognitive Technology Threat Warning System is based on the idea of monitoring the user's mental activities and feeding the resulting analysis back to the user as a supplement to his or her own attention. And of course there are the researchers working with direct neural connections to machines. Larger numbers of implanted connections may allow selection for effective subsets of connections. The human and the machine sides can train to accommodate each other.
To date, research on neural prostheses has mainly involved hearing, vision, and communication. Prostheses that could restore any cognitive function would be a very provocative indicator. In his essay, John Horgan discusses neural research, including that of T.W. Berger, into prostheses for memory function. In general, Horgan and I reach very different conclusions, but I don't think we have much disagreement about the facts; Horgan cites them to show how distant today's technology is from anything like the singularity—and I am saying, “Look here, these are the sorts of things we should track going forward, as signs of progress toward the singularity (or not).”
The Biomedical Scenario—directly improving the functioning of our own brains—has a lot of similarities to the IA Scenario, though computers would be only indirectly involved, in support of bioinformatics. In the near future, drugs for athletic ability may be only a small problem compared with drugs for intellect. If these mind drugs are not another miserable fad of uppers and downers, if they enable real improvements to memory and creativity, that would be a strong indicator for this scenario. Much further out—for both logistical and ethical reasons—is the possibility of embryo optimization and germ-line engineering. Biomedical enhancement, even the extreme varieties, probably does not scale very well; however, it might help biological minds maintain some influence over other progress.
Brooks suggests that the singularity might happen—and yet we might not notice. Of the scenarios I mentioned at the beginning of this essay, I think a pure Internet Scenario—where humanity plus its networks and databases become a superhuman being—is the most likely to leave room to argue about whether the singularity has happened or not. In this future, there might be all-but-magical scientific breakthroughs. The will of the people might manifest itself as a seamless transformation of demand and imagination into products and policy, with environmental and geopolitical disasters routinely finessed. And yet there might be no explicit evidence of a superhuman player.
A singularity arising from networks of embedded microprocessors—the Digital Gaia Scenario—would probably be less deniable, if only because of the palpable strangeness of the everyday world: reality itself would wake up. Though physical objects need not be individually sapient, most would know what they are, where they are, and be able to communicate with their neighbors (and so potentially with the world). Depending on the mood of the network, the average person might notice a level of convenience that simply looks like marvelously good luck. The Digital Gaia would be something beyond human intelligence, but nothing like human. In general, I suspect that machine/network life-forms will be faster, more labile, and more varied than what we see in biology. Digital Gaia is a hint of how alien the possibilities are.
In his essay, Hanson focuses on the economics of the singularity. As a result, he produces spectacular insights while avoiding much of the distracting weirdness. And yet weirdness necessarily leaks into the latter part of his discussion (even leaving Digital Gaia possibilities aside). AI at the human level would be a revolution in our worldview, but we can already create human-level intelligences; it takes between nine months and 21 years, depending on whom you're talking to. The consequences of creating human-level artificial intelligence would be profound, but it would still be explainable to present-day humans like you and me.
But what happens a year or two after that? The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”
For most of us, the hard part is believing that machines could ever reach parity. If that does happen, then the development of superhuman performance seems very likely—and that is the singularity. In its simplest form, this might be achieved by “running the processor clock faster” on machines that were already at human parity. I call such creatures “weakly superhuman,” since they should be understandable if we had enough time to analyze their behavior. Assuming Moore's Law muddles onward, minds will become steadily smarter. Would economics still be an important driver? Economics arises from limitations on resources. Personally, I think there will always be such limits, if only because Mind's reach will always exceed its grasp. However, what is scarce for the new minds and how they deal with that scarcity will be mostly opaque to us.
The period when economics could help us understand the new minds might last decades, perhaps corresponding to what Brooks describes as “a period, not an event.” I'd characterize such a period as a soft takeoff into the singularity. Toward the end, the world would be seriously strange from the point of view of unenhanced humans.
A soft takeoff might be as gentle as changes that humanity has encountered in the past. But I think a hard takeoff is possible instead: perhaps the transition would be fast. One moment the world is like 2008, perhaps more heavily networked. People are still debating the possibility of the singularity. And then something...happens. I don't mean the accidental construction that Brooks describes. What I'm thinking of would probably be the result of intentional research, perhaps a group exploring the parameter space of their general theory. One of their experiments finally gets things right. The result transforms the world—in just a matter of hours. A hard takeoff into the singularity could resemble a physical explosion more than it does technological progress.
If the singularity happens, the world passes beyond human ken
I base the possibility of hard takeoff partly on the known potential of rapid malcode (remember the Slammer worm?) but also on an analogy: the most recent event of the magnitude of the technological singularity was the rise of humans within the animal kingdom. Early humans could effect change orders of magnitude faster than other animals could. If we succeed in building systems that are similarly advanced beyond us, we might experience a similar incredible runaway.
Whether the takeoff is hard or soft, the world beyond the singularity contains critters who surpass natural humans in just the ability that has so empowered us: intelligence. In human history, there have been a number of radical technological changes: the invention of fire, the development of agriculture, the Industrial Revolution. One might reasonably apply the term singularity to these changes. Each has profoundly transformed our world, with consequences that were largely unimagined beforehand. And yet those consequences could have been explained to earlier humans. But if the transformation discussed in this issue of Spectrum occurs, the world will become intrinsically unintelligible to the likes of us. (And that is why “singularity,” as in “black hole singularity of physics,” is the cool metaphor here.) If the singularity happens, we are no longer the apex of intellect. There will be superhumanly intelligent players, and much of the world will be to their design. Explaining that to one of us would be like trying to explain our world to a monkey.
Both Horgan and Nordmann express indignation that singularity speculation distracts from the many serious, real problems facing society. This is a reasonable position for anyone who considers the singularity to be bogus, but some form of the point should also be considered by less skeptical persons: if the singularity happens, the world passes beyond human ken. So isn't all our singularity chatter a waste of breath? There are reasons, some minor, some perhaps very important, for interest in the singularity. The topic has the same appeal as other great events in natural history (though I am more comfortable with such changes when they are at a paleontological remove). More practically, the notion of the singularity is simply a view of progress that we can use—along with other, competing, views—to interpret ongoing events and revise our local planning. And finally: if we are in a soft takeoff, then powerful components of superintelligence will be available well before any complete entity. Human planning and guidance could help avoid ghastliness, or even help create a world that is too good for us naturals to comprehend.
Horgan concludes that “the singularity is a religious rather than scientific vision.” Brooks is more mellow, seeing “commonalities with religious beliefs” in many enthusiasts' ideas. I argue against Horgan's conclusion, but Brooks's observation is more difficult to dispute. If there were no other points to discuss, then those commonalities would be a powerful part of the skeptics' position. But there are other, more substantive arguments on both sides of the issue.
And of course, the spirituality card can be played against both skeptics and enthusiasts: Consciousness, intelligence, self-awareness, emotion—even their definitions have been debated since forever, by everyone from sophomores to great philosophers. Now, because of our computers, the applications that we are attempting, and the tools we have for observing the behavior of living brains, there is the possibility of making progress with these mysteries. Some of the hardest questions may be ill-posed, but we should see a continuing stream of partial answers and surprises. I expect that many successes will still be met by reasonable criticism of the form “Oh, but that's not really what intelligence is about” or “That method of solution is just an inflexible cheat.” And yet for both skeptics and enthusiasts, this is a remarkable process. For the skeptic, it's a bit like subtractive sculpture, where step-by-step, each partial success is removing more dross, closing in on the ineffable features of Mind—a rather spiritual prospect! Of course, we may remove and remove and find that ultimately we are left with nothing but a pile of sand—and devices that are everything we are, and more. If that is the outcome, then we've got the singularity.
About the Author
VERNOR VINGE, who wraps up this issue, first used the term singularity to refer to the advent of superhuman intelligence while on a panel at the annual conference of the Association for the Advancement of Artificial Intelligence in 1982. Three of his books—A Fire Upon the Deep (1992), A Deepness in the Sky (1999), and Rainbows End (2006)—won the Hugo Award for best science-fiction novel of the year. From 1972 to 2000, Vinge taught math and computer science at San Diego State University.
Joel de Rosnay Talking About Web 4.0 [Symbiotic Web]
Les quatre web de Joel de Rosnay, du 1.0 au 4.0 [Joel de Rosnay's four webs, from 1.0 to 4.0]
Maybe this Web 4.0 could look like this:
Holographic Interface - round interface - Ringo from Ivan Tihienko on Vimeo.
The Future of the Web
Or will my next blog be Twistori-like, a sort of FriendFeed lifestream in perpetual motion, automatically scrolling... endlessly flowing online.
If my life is transferred into an online scrolling lifestream matrix soon, I wonder how I will be able to manage all this information while following others' feeds, without losing too much of my "productive content creation" time?
Already I can hardly manage my FriendFeed lifestream because I'm overloaded by too many feeds from people I consider interesting.
Is the future of the Web [or of online information systems] above all in aggregation and filtering?
Below is an interesting poll from a New York Times article, published a few months ago, about how people are spending [wasting] their office time.
According to this poll, if we could decrease our online distraction time (or "interruptions" such as reading mail, following others' lifestream feeds, chatting, etc.) and the time we spend searching for information, we could increase our "productive" time by 43%.
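A back-of-the-envelope sketch of what a figure like 43% implies. The hours below are assumed for illustration (the article does not give the underlying breakdown); the function simply expresses reclaimed time as a percentage of current productive time.

```python
def productivity_gain(productive_hours, reclaimed_hours):
    """Percent increase in productive time if the reclaimed hours
    (interruptions + searching) were converted into productive work."""
    return 100 * reclaimed_hours / productive_hours

# Assumed breakdown: of an 8-hour day, 5.6 h are productive and 2.4 h
# go to interruptions and searching; reclaiming them yields ~43%.
gain = productivity_gain(5.6, 2.4)
```

Whatever the poll's real breakdown, the arithmetic shows why even a modest slice of reclaimed hours reads as a dramatic percentage gain.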
And even if aggregation and filtering systems do not eliminate our need for distraction, at least we will feel a little less lost and overloaded when gathering all this streaming online information.
The question is: whom can we authorize to aggregate and filter for us?
To be really reliable, this machine must not only simulate our own value system [reflect our need for security] but also be able to push us beyond our intellectual safe zone [make us curious and able to accept alterity].
Conference in London on October 29, 2008: "Are we bored with Debord?"
"The Autumn 2008 season of Conversations hosted by Rethinking Cities opens with Alan James Bullion posing the question: "Are we bored with Debord? What can we derive from his concept of drift in the modern city?"
It is forty years since Debord's “Society of the Spectacle” helped spark civil unrest in the streets of Paris.
As the leader of the political art movement known as the Situationists in the early 1960s, Guy Debord was a proponent of the 'dérive': to walk through the city was to understand it and its class struggles.
Alan Bullion is the Lib Dem parliamentary candidate for Sevenoaks.
This Conversation will take place on the evening of Wednesday 29 October 2008 at the Royal Commonwealth Society, Northumberland Avenue, London.
Click here to register for this or other Conversations
Even if Debord's ideas were shaped in a different historical context, I think this topic is not at all anachronistic and that we can still learn a lot from him.
Read The Society of the Spectacle by Guy Debord online, translated into English here, or in French [original text] here, or take a look at these short videos from a documentary about Guy Debord's Situationist movement, and you will see why.
Situationist International - Part 1 of 3
Situationist International - Part 2 of 3
Situationist International - Part 3 of 3
"The first stage of the economy’s domination of social life brought about an evident degradation of being into having — human fulfillment was no longer equated with what one was, but with what one possessed.
The present stage, in which social life has become completely dominated by the accumulated productions of the economy, is bringing about a general shift from having to appearing — all “having” must now derive its immediate prestige and its ultimate purpose from appearances.
At the same time all individual reality has become social, in the sense that it is shaped by social forces and is directly dependent on them. Individual reality is allowed to appear only if it is not actually real. " Guy Debord : The Society of the Spectacle
Incurable American Optimism
Interviewed by Mark Molaro, American professor and media expert Paul Levinson talks here about the state, influence, and future of the new media.
Levinson is the author of "Digital McLuhan" and "The Soft Edge" and has appeared in countless media venues from PBS to Fox to offer his insight on media issues.
In this video Levinson discusses the current exponential rise of new media and what Marshall McLuhan would think of the digital age we live and now create in.
I wonder if this kind of 100% favorable and enthusiastic discourse about Web 2.0 and new media is possible in Europe.
Watching this video makes me think of Jean Baudrillard's "hyper-information age" and once again I was reminded of Baudrillard's distinction between the American and European way of thinking:
"Seen from America, and by American intellectuals [Susan Sontag], the disavowal of reality in European cultures, and particularly in French theory, is merely the 'metaphysical' spite of no longer being master of that reality, and the manifestation, at once arrogant and ironic, of that impotence.
And this is doubtless true.
But vice versa: is not this commitment to reality, this 'affirmative thinking', among the Americans the naive and ideological expression of the fact that, by virtue of their power, they hold the monopoly on reality?
We certainly live in the ridiculous nostalgia of glory [of history, of culture], but they live in the ridiculous illusion of performance." Jean Baudrillard, Cool Memories V [translated from the French]
http://holychic.blogspot.com/2008/07/incurable-american-optimism.html
Arnaud Pagès exhibit in Issue Gallery next week
Arnaud Pagès wearing a piece of his street wear collection
Mobile Trends 2008
Presentation "Mobile 2.0 - what is it and why should you care?" by Rudy De Waele at Plugg Conference in Brussels on March 19, 2008
A deep dive into the future of mobile with Rudy De Waele, one of the world's most renowned mobile strategists, featuring a look at historical and upcoming trends, insights on potential revenue models and the industry's leading protagonists.
Mobile and Wireless Trends for 2008
by Rudy De Waele:
1. Google's Android and the Open Handset Alliance will definitely take off in 2008. While the iPhone is probably doing the best job embracing mobile and web convergence, the Apple OS is still a closed system used by a rather small market segment. Nokia's Nseries - though all remarkable devices - didn't produce any breakthrough Symbian OS changes last year, and Symbian is still too buggy to go mass-market - I don't see my sister or father performing a device software update; which leaves the opportunity for Google and the Open Handset Alliance to get the new Linux-based operating system Android onto several cutting-edge smartphones before year-end. Mobile OS, a truly competitive space in 2008!
2. The Rise of the Mobile Social Networks. M:Metrics released some promising data mid-2007 on the rise of the Mobile Social Networks. With the big social media networks all going mobile in 2007 (Facebook, MySpace, YouTube and Bebo, …), this trend will continue to rise in 2008, sustained by more flat rate introductions on different markets.
3. Apple will be seriously attacked by the music industry on its own, once disruptive, iTunes business model. 2008 will be the year of the further downfall of DRM and the rise of watermarked audio files. Sony BMG is planning to drop DRM - the last of the Big Four record labels, after Warner Music Group, Universal Music Group, and EMI Music, to throw in the towel on digital rights management. The end of DRM might embolden a host of new online download venues initiated by the Big Four in their search for a successful digital strategy. Note also the rise of new business models (!) giving away DRM-free, ad-supported music downloads, like the recently founded Rcrd Lbl by Peter Rojas. Read my DRM Free at Last! post for a recent overview and links to previous posts on this topic.
4. Telefonica will introduce the 3G iPhone. To be announced at Mobile World Congress in Barcelona in February?
5. The return of Location-Based Services. Since Nokia introduced the Nseries N95 with built-in GPS, Location-Based Services are becoming exciting again. A new wave of mobile services and applications built on the location of the user (cell-ID and/or GPS) will see the light this year, driven by the open Google Maps API and Flickr's geotagged photo function. Read also my early 2005 coverage of the services formerly known as MoSoSos.
6. First iPhone competitors coming to market. Nokia will introduce a serious competitor for the iPhone. It has the hardware manufacturing intelligence and knowledge to come up with its own multi-touch screen interface. The biggest challenge for Nokia (and other manufacturers) will be to keep the OS user experience as simple as the iPhone's. Expect some great innovative devices from HTC too in 2008! (Check out the HTC Touch Dual.)
7. Mobile video blogging starting to take off. Though still used mainly by early adopters, mobile video blogging tools such as Kyte.tv mobile are already doing a great job, with Floobs and KaZiVu also looking very promising (both still in beta), not to forget YouTube Mobile. All eyes will be on Seesmic, however, which has the right start-up vibe - instigated daily by its impressive, experienced shareholders (and web 2.0 icons) and its very active beta-tester community. Imagining Seesmic being used on your mobile phone is easy; the challenges for Seesmic are to bypass the complex technical issues and deliver on its great idea.
8. Mobile search, as already predicted last year, will continue to be one of the most important and most used mobile applications. I keep this one in my list, adding that some new players might disrupt the big search market players, who have not yet figured out the real mobile search issues such as accuracy, context, relevance, latency, and the correct display of local and niche results.
9. PRM (Personal Rights Management) and privacy policies and procedures will be high on the agenda for every enterprise and every conscious connected individual. There was already talk of this among the connected crowds at LeWeb3; opening up the social graph might appear cool in your social media community, but it has to be done right! As a starter, check out Dataportability.org and watch Robert Scoble explaining his recent portability issues with Facebook.
10. Twitter and the breakthrough of the ultimate mobile presence tool. Yes, Twitter is the ultimate mobile presence tool, since it's the easiest to use (through SMS and mobile web access) and the most accurate way to stay connected at any time from anywhere… Jaiku definitely has a richer client, but Twitter is the most easily integrated into most of your social networks; check out MoodBlast, which can simultaneously update multiple chat clients and web-service presence tools. 2008 will also see the rise of lifestreaming apps like Tumblr, which is surprisingly simple on the web and looks great on your mobile phone.
Saturday, 23 Feb.2008 : New Gallery Launches
Galerie e.l Bannwarth,
68, rue Julien Lacroix, 75020 Paris, Métro Belleville