
The future of human communication
Contact lenses - the internet on the go
Living in an information age
What consequences will converging technologies have?
The Singularity and human communication
What is the future of communication?
The Future of Storytelling
See also


The Future of Human Communication

Jan 18 2009 John Burch

Hi, I saw this article today from Science Daily and it reminded me of an idea I'm using in a novel: contact lenses with electronics built in. SEE BELOW

The idea is that the telephone has developed from the old hand-lifted desk set to the "carry anywhere" cell phone of today (2008), and it will continue to develop along the following lines.

The kids of today are creating new uses as texting and acronyms take over the conversations of youth. Technology will feed that trend with eye and ear accessories that are always connected, both to the internet (as the medium) and to the phone descendants that will blend with our bodies.

In less than ten years, the optical equivalent of contact lenses, plus the electronics to make them fully display-capable at a resolution around 4000×3000, will give us total and immersive contact with the internet and all that it provides.

This means that first we will become assisted, with continuous audio and video interchange with anyone we know. More specifically, we will learn to "dial", speak to and communicate with anyone as if they were in the room with us. In fact, anyone involved in commerce or fad-based social groups will find this technology essential.

As the technology improves, it will become part of our body rather than an accessory we wear. Our eyeballs have plenty of room inside to hold full color laser projectors that write directly to the retina at a resolution far beyond that of any ordinary human. Remember, our eyes have a simple lens - nothing like the high-quality lens on even a cheap camera.

In the limit, we will become a wired organism. We will experience each other in an entirely new medium of cyberspace that rivals the descriptions in Neuromancer. We will cease to talk to anyone beyond arm's length without automatically slipping into internet connection mode. It will be seamless, and so much more intense than ordinary face-to-face conversation. To speak to someone more than ten feet away, our electronic connection will take over and bring them face to face with us through the internet-connected eye and ear.

[top]

Contact Lenses With Circuits, Lights A Possible Platform For Superhuman Vision

ScienceDaily (Jan. 17, 2008)

Movie characters from the Terminator to the Bionic Woman use bionic eyes to zoom in on far-off scenes, have useful facts pop into their field of view, or create virtual crosshairs. Off the screen, virtual displays have been proposed for more practical purposes -- visual aids to help vision-impaired people, holographic driving control panels and even as a way to surf the Web on the go.

The device to make this happen may be familiar. Engineers at the University of Washington have for the first time used manufacturing techniques at microscopic scales to combine a flexible, biologically safe contact lens with an imprinted electronic circuit and lights.

"Looking through a completed lens, you would see what the display is generating superimposed on the world outside," said Babak Parviz, a UW assistant professor of electrical engineering. "This is a very small step toward that goal, but I think it's extremely promising." The results were presented today at the Institute of Electrical and Electronics Engineers' international conference on Micro Electro Mechanical Systems by Harvey Ho, a former graduate student of Parviz's now working at Sandia National Laboratories in Livermore, Calif. Other co-authors are Ehsan Saeedi and Samuel Kim in the UW's electrical engineering department and Tueng Shen in the UW Medical Center's ophthalmology department.

There are many possible uses for virtual displays. Drivers or pilots could see a vehicle's speed projected onto the windshield. Video-game companies could use the contact lenses to completely immerse players in a virtual world without restricting their range of motion. And for communications, people on the go could surf the Internet on a midair virtual display screen that only they would be able to see.

"People may find all sorts of applications for it that we have not thought about. Our goal is to demonstrate the basic technology and make sure it works and that it's safe," said Parviz, who heads a multi-disciplinary UW group that is developing electronics for contact lenses.

The prototype device contains an electric circuit as well as red light-emitting diodes for a display, though it does not yet light up. The lenses were tested on rabbits for up to 20 minutes and the animals showed no adverse effects.

Ideally, installing or removing the bionic eye would be as easy as popping a contact lens in or out, and once installed the wearer would barely know the gadget was there, Parviz said.

Building the lenses was a challenge because materials that are safe for use in the body, such as the flexible organic materials used in contact lenses, are delicate. Manufacturing electrical circuits, however, involves inorganic materials, scorching temperatures and toxic chemicals. Researchers built the circuits from layers of metal only a few nanometers thick, about one thousandth the width of a human hair, and constructed light-emitting diodes one third of a millimeter across.

They then sprinkled the grayish powder of electrical components onto a sheet of flexible plastic. The shape of each tiny component dictates which piece it can attach to, a microfabrication technique known as self-assembly. Capillary forces -- the same type of forces that make water move up a plant's roots, and that cause the edge of a glass of water to curve upward -- pull the pieces into position.

The prototype contact lens does not correct the wearer's vision, but the technique could be used on a corrective lens, Parviz said. And all the gadgetry won't obstruct a person's view.

"There is a large area outside of the transparent part of the eye that we can use for placing instrumentation," Parviz said. Future improvements will add wireless communication to and from the lens. The researchers hope to power the whole system using a combination of radio-frequency power and solar cells placed on the lens, Parviz said.

A full-fledged display won't be available for a while, but a version that has a basic display with just a few pixels could be operational "fairly quickly," according to Parviz.

The research was funded by the National Science Foundation and a Technology Gap Innovation Fund from the University of Washington.

[top]


Human Communication?
Coelynn McIninch. November 2007

We are living in an information age with the whole world connected through an invisible conceptual space. Our direct physical interaction with technology and our communal use of Cyberspace have created a social dependence on technology. “We live in the Age of Information and of Communication because electrical media instantly and constantly creates a total field of interacting events in which all we participate” (McLuhan). The new types of, and uses for, technology are altering the way we communicate, interact and create.

We have come a long way since 1968, when Doug Engelbart first introduced the idea of “information-space” by creating the first graphical user interface and a handy little pointer he called a mouse. The idea sounds so simple to us today, but this basic concept was a major turning point in our techno-social history. Engelbart’s idea turned the blank screen into an interactive virtual space. The user was endowed with the ability to reach in and touch information, to manipulate data with a wave (and click) of the hand. Data was given familiar visuals and terminology to make the transition and utilization more instinctive: ideas organized in folders, displayed in windows and disposed of in the trash. Our place in this digital space is represented by a small arrow dancing around the screen in perfect mime of our own bodies.

The invention of the user interface was the first major step towards total immersion in the digital world and the total amputation of the self. Creative advances in the structure of interactivity and graphics have created “a machine worth living in” (Johnson). MySpace, Facebook, chat-room, Blogspot, website, Cyberspace: the language of the Internet is a perfect indication of our perception of Cyberspace as an external physical space. We are so completely invested in this “Metaverse” (Levy) that we rarely acknowledge the true nature of Cyberspace as rapid-fire patterns of billions of switches turning on and off each second in hard drives and servers all over the world. Just blips of electricity translated, processed, conveyed and displayed.

The ability to connect to the entire world at once, and from practically anywhere, can be overwhelming. Globally extending the self beyond the body and projecting it into the digital metaverse (Levy) leaves the user with a separation anxiety of the self, the only remedy being to continuously tap into the displaced electrical world to re-establish the umbilical link to the digital self. Consequently, this also leaves the user, or participant, constantly distracted from, and dissociated from, the physical world.

Our society relies on certain physical, visual and auditory cues to help communicate each member’s needs and desires, resulting in a set of mutually accepted rules of conduct. Once any one of the senses is removed from the equation, negotiation of all needs and desires becomes more difficult. Communication online or through text messaging does not carry facial expressions or vocal intonations; all conveyance of emotion and intent must be rendered visually. To the casual observer, anyone sitting at their computer, absorbed in an online game or chatting with friends, appears completely devoid of emotion. As a generation, we have amputated our senses. According to Marshall McLuhan, “We have to numb our central nervous system when it is extended and exposed, or we will die. Thus the age of anxiety and of electric media is also the age of unconsciousness and of apathy.” Hours are lost to surfing, navigating, hunting, watching - staring at a glowing box, practicing interpersonal communication without ever physically speaking to, or seeing, another person. Considering the early age at which children are being taught to use computers, the “art” of conversation may become a thing of the past.

When you go to almost any public space today, there is a certain irony in how few people are actually participating in that public space. An employee on the cell phone, a child playing Spiderman II on the Gameboy, a teenager listening to his iPod, customers surfing on their laptops - all of them in public space physically, and in private space mentally. The odd twist to this dynamic is that the actual number of people invisibly connected to that space could measure in the hundreds. The public space has been relocated, and private space is portable.

Cell phone innovations and usage have progressed too fast for our society to generate a proper set of etiquette rules for the technology. Years ago, any person wandering the streets speaking out loud would have been avoided as a crazy person. Now, wherever you go, there are people speaking and gesticulating to invisible persons, sharing intimate details of their lives with the open air without even realizing someone else is listening. Historically, all new technologies go through a gestation period during which we gradually adapt to the eccentricities associated with their use, but the turnover rate of new technology may actually be forcing us to use even more technology to speed up our own adaptation, e.g. increased text messaging necessitates new phones with keyboards so we can type faster. The increase in text messaging may at first seem like a step backwards in technology when in fact it is simply a perfect complement to the way we prefer to communicate. “The telephone requires complete participation unlike the written and printed page” (McLuhan), and Americans thrive on multi-tasking, so they are drawn to a communication form that affords them the luxury of participation on their own terms, in their own good time.

According to Marshall McLuhan, “The phonetically written word sacrificed worlds of meaning and perception that were secured by forms like the hieroglyph and the Chinese ideogram.” Oddly enough, the current trend in messaging is gradually moving closer to symbolic representation, as evidenced by the following example of the Star-Spangled Banner translated from English to tXt lingo using the online site http://www.transl8it.com:

O sA, cn u C, by d dawn’s erly lyt, w@ so proudly we hailed @ d twilight's lst gleaming, who's broad stripes n brite **, thru d perilous fyt, O’er d ramparts we watchd, wr so gallantly streamin? n d rockets’ red glare, d bombs bursting n air, Gave proof thru d nyt dat r flag wz stil der. O sA, dz dat (*) spangled banner yet ~~~ O’er d l& of d fre, n d hom of d brave?

Or by a bit of social commentary from a drink coaster at the Outback Steak House that playfully states: “2 much tXting mAks U 1 bad spLR”. This visual/literal shorthand evolved in a desperate attempt to compensate for the absence of the vocal and visual cues that would normally exist in face-to-face conversation, while at the same time maintaining the apparent speed of vocal conversation.
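transl8it does not publish its translation rules, but the general mechanism behind such converters is easy to sketch: a lookup table of word-to-abbreviation substitutions. The Python toy below is purely illustrative - the word list is invented for this example, not taken from transl8it - yet it reproduces a fragment of the anthem translation above.

```python
# Toy "tXt lingo" converter: whole-word dictionary substitution.
# The abbreviations here are invented for illustration and are not
# transl8it's actual rules.
SUBSTITUTIONS = {
    "say": "sA", "can": "cn", "you": "u", "see": "C",
    "the": "d", "early": "erly", "light": "lyt",
    "what": "w@", "that": "dat", "was": "wz", "night": "nyt",
}

def to_txt_lingo(text: str) -> str:
    """Replace known whole words with abbreviations; keep the rest."""
    return " ".join(SUBSTITUTIONS.get(w.lower(), w) for w in text.split())

print(to_txt_lingo("O say can you see by the dawn's early light"))
# -> O sA cn u C by d dawn's erly lyt
```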

There are many debates as to whether all this “texting” is damaging to literacy. The real question we should be asking is whether future generations will need the current alphabet at all. “Civilization is built on literacy because literacy is a uniform processing of a culture by a visual sense extended in space and time by the alphabet.” (McLuhan) Could that literacy become fully symbolic? Writer Neal Stephenson, in his sci-fi novel “The Diamond Age”, describes a future world that relies on “Mediaglyphs”, graphic representations of ideas. In his view of the future, the old alphabet still exists but is only used as a formality and a sign of aristocratic status, much like students today being taught Latin.

The Internet is an emergent technology (Johnson), a virtually endless web of ideas linked together seemingly at random. The successful display and distribution of these ideas is based partly on the technical clarity of the message’s design and partly on chance. The technical clarity is in the programming and interface design. Chance is the odds that one person finds a site and shares it with another person or links it from their own site, and they tell two friends, and they tell two friends, and so on. This is how patterns of information and cyber communities emerge (Johnson).
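The “tell two friends” arithmetic compounds quickly. A toy calculation (hypothetical numbers, purely illustrative) shows how a handful of sharing rounds reaches millions of people:

```python
# Toy word-of-mouth model: each person who just learned of a site
# tells `fanout` new people per round. Numbers are illustrative only.
def word_of_mouth(rounds: int, fanout: int = 2) -> int:
    """Total people reached after `rounds` of sharing, starting from one."""
    reached = 1   # the first person who finds the site
    new = 1       # people who heard about it in the latest round
    for _ in range(rounds):
        new *= fanout     # each newcomer tells `fanout` friends
        reached += new
    return reached

for r in (5, 10, 20):
    print(r, word_of_mouth(r))   # 5 -> 63, 10 -> 2047, 20 -> 2097151
```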

The global exchange of ideas in cyberspace is creating a huge stockpile of free information accessible to anyone who taps into the information network. With sites like Wikipedia, YouTube, MySpace etc., participants are coming together to create a massive collective intelligence (Levy). Or, as Wikipedia founder Jimmy Wales likes to say: “free access to the sum of all human knowledge.” Access to the network is the only barrier to education. Old, young, rich, poor - all have a chance to explore and participate. Everyone has a voice. Writers can self-publish on their websites and advertise for free. Flickr gives everyone a chance to share his or her personal view of the world, and sites like Craigslist act as information crossroads for exchange outside of cyberspace. The Internet as a tool has created a democratic leveling of the educational playing field. If you need to know something, all you have to do is ask and the answer appears in front of you.

With wireless technology, voice recognition software and access to the Internet, you could literally stand on your front lawn at three in the morning, ask the question “how do I boil water?” and your laptop would light up with two million possible solutions. But how do you decide which one to listen to? Who is right? Are they all right? Are there really two million people out there who have responded to your question?

The conceptual nature of cyberspace is a confusing one. We perceive cyberspace as being simultaneously public and private: public because of its accessibility and private because of its anonymity. “The tendency of electric media is to create a kind of organic interdependence among all the institutions of society . . .” (McLuhan). You can do your Christmas shopping, kill the evil overlord, manage your personal business, marry a virtual wife and pour out your soul in text, all without leaving the comfort of home. We are no longer on the outside looking at the world go by. We are on the inside, controlling all facets of experience. The basic act of controlling all information with a click and drag from the comfort of your own home creates a sense of security and hubris. The perceived comfort and visual control also generate a certain level of gullibility and trust in the durability of information, images and ideas that have been written, uploaded and stored in cyberspace. How many photos and important emails are stored on your computer or on a server somewhere outside of your home, just waiting to be printed out when you happen to find the time? What happens if the server crashes? You could lose several years’ worth of memories. That’s like storing your family photos in a shoebox . . . in the backyard.

“With the arrival of electric technology, man extended, or set outside himself, a live model of the central nervous system itself” (McLuhan), but there is an important “distinction between interactivity (which emerged from the properties of media technology) and participation (which emerged from the protocols and social practices surrounding media)” (Jenkins). The virtual world used to be “a world apart” from us, but somehow this invisible community creation seems to be creeping out into the “real” world a bit more each day. Online communities like Second Life are creating a unique problem, from a psychological standpoint and a legal standpoint. Second Life is a 3D interactive environment with more registered users than the entire population of Massachusetts. It is modeled after the physical world. Participants create a fictional name and a 3D representation for this new identity. “Residents” own houses, have jobs, visit clubs, and create goods for sale in virtual malls. There is even a Second Life court system for solving disputes between residents. Participants are thriving on the convincing reality and versatility of the Second Life interface. Universities are using Second Life to teach online classes. IBM holds board meetings in Second Life. Record companies release new records, and major performers give concerts of previously unreleased songs. The residents create the content and virtual terrain of Second Life, which gives them a feeling of ownership. This ownership encourages participation. It’s hard to resist participating in a fantasy where you have no real risks, you never grow old and you can change who you are with a click of the mouse. Your own identity is prosthetic. “Our brains are not specialized for 21st-century media; there’s no switch that says, ‘Process this differently because it’s on a screen’” (Reeves). To enhance the simulation and grant users more natural control over the interface, researchers in Japan are working on a machine that enables the user to control their avatar with a thought. The goal is to create a full sensory suit that enables residents to physically interact with the interface. This online society is already so close to “reality” that one participant, Doug McMahon, has started a blog called “A Constitution for the Metaverse” (http://metaconsti.blogspot.com/) as an attempt to negotiate a communal set of rules for conduct within this new virtual society.

"The main threats to our survival result from the almost total disjunction between the power of our technologies and the wisdom required to use them over the long period during which their effects will last." (Primak and Abrams.) We used be relatively unaware of the subtle shifts of social adaptation to technological advances because they happened gradually over the course of an entire generation. Now, global connectivity and the rapid exchange of ideas is forcing us to, to adapt to the personal, social and legal, complexities of multiple industries wide advances occurring every three to five years. Theorists, artists, scholars and industry leaders have attempted to make order of this rapidly changing techno-social dynamic unfortunately, the likelihood that any one of their books on the subject will make it to print before being rendered obsolete are fairly slim.

The world is at your fingertips but so is the off button.


[top]


Human Communication and the Convergence Agenda       
Cees Hamelink, waccglobal.org

The question raised in this article is: what consequences will converging technologies have for the way human beings communicate in the future?

Humans have an almost unlimited desire to communicate, even with angels, trees and dolphins. Converging technologies promise to open up further opportunities to expand this communicative compulsion. Enormous benefits could be in store such as the removal of all the obstacles that during most of human history limited the effectiveness of message transfer. Such obstacles were the restrictions of distance, speed, volume, and reliability.

Another essential impediment to human communication across borders has always been the existence of different languages. The advent of communications steered by machine intelligence will almost certainly offer a solution to this problem through advanced speech recognition and instant translation.

One may easily be carried away with such prospects, but one needs to realize that it is unlikely that the availability, accessibility and affordability of these emerging technologies would be globally and equitably shared. Given the reality of unequal socio-economic positions in the world, one can foresee that such divides as the development divide, the information divide and the digital divide, will be succeeded by a convergence-divide. And, as with the other gaps, this divide will be exploited to consolidate positions of power and privilege.

What different modalities of impact on human communication might we expect in the not too distant future? I will briefly discuss human-machine communication, machine-to-machine communication, communication between living and deceased people, brain-to-brain communication, and cross-species communication.

Human - machine communication
People will increasingly communicate with the assistance of machine intelligence and the machines will have ever more human features.

Developments in converging technologies will produce machines with an intelligence that far exceeds human intellectual capabilities. In order to co-exist with these machines, humans will have to learn to communicate with beings gifted with superior intelligence. The deepest impact of this development will be the necessity to re-think the whole concept of what it means to be ‘human’. As Bill Joy (2000) wonders, ‘Given the incredible power of these technologies, shouldn’t we be asking how we can best coexist with them? And if our extinction is a likely, or even possible, outcome of our technological development, shouldn’t we proceed with great caution?’
A crucial question is whether humans are ready for peaceful co-existence with such beings.

Machine to machine communication
Among the new forms of communication that humans will be exposed to is communication between intelligent nanobots in our biological systems. The introduction of machine intelligence in our biological systems implies the prospect of longevity and even immortality. ‘Billions of nanobots will travel through the bloodstream of our bodies and brains. In our bodies, they will destroy pathogens, correct DNA errors, eliminate toxins, and perform many tasks to enhance physical well-being. As a result, we will be able to live indefinitely without aging’ (Kurzweil, 2006: 43).
Convergent technologies may well create a future – as Bill Joy argues – that does not need humans anymore. Humans in the world of convergence may still communicate with each other, but it may not matter any more. How do we deal with a new Copernican change that no longer positions humans at the centre of the universe?
Cyber-immortality

The development of converging technologies might open up the possibility of communicating across the barrier of human mortality. Artificial intelligence technology is likely to make it possible for people to continue their lives after death as archived personalities. ‘The Convergenists’ agenda is aimed at improving human performance without limit, and many of the anticipated technological spin-offs would be useful for recording, preserving, and reanimating human personalities – ultimately creating cyberimmortality’ (Bainbridge, 2006: 28).

With such a development, a realistic expectation would be that living people can communicate with persons who have died but are electronically ‘archived’. This represents a challenging prospect for religious reflection. It is interesting to note that converging technologies have not yet caused the kind of concern that human cloning has. It remains to be seen how religious leaders and theological experts will respond to humanoid, intelligent robots with the personality characteristics of people who have died.

Indicative of the nature of this development may be the observation of the strong emotional bonds that elderly, lonely people form with such intelligent artefacts as the Sony dog Aibo. Although Aibo is clearly a metallic toy, experiments demonstrate that people quickly grow attached to their electronic pet, with which they share problems and very private concerns.

Imagine how they would relate to a deceased partner with whom communication remains possible and who would not only listen like the digital dog, but who would respond with advice and understanding as he or she did when still alive.

Telepathic communication
Converging technologies will make communication between human brains possible. As Kurzweil proposes ‘the age of telepathic communication is upon us’ (Kurzweil, 2006: 43).

Communication between human brains and machine intelligence, and between human brains directly, may significantly alter the way human brains operate through the exponential expansion of our memory and cognitive capacities. An intriguing question is whether effective human communication is not largely shaped by our failing memories. Try to imagine that in human encounters we had full recollection and cognition of all our interlocutors and of all the conversations we have conducted with them.

Cross-species communication
Humans have demonstrated throughout history a tendency to treat non-human animals in very destructive ways. This has always been rationalized and legitimized by referring to the fundamental divides between the different species. This position has been reinforced by the impossibility of cross-species communication. We know that whales communicate with each other over great distances (hundreds of miles) and with voluminous content (anywhere between one and ten million ‘bits’ per 30 minutes - at the upper end, roughly 5,500 bits per second). Humans cannot communicate with them, which makes killing them and industrially processing them easier.

It is interesting to observe that when people can communicate (even in the sense of giving orders that are understood) with their pets, they are less inclined to murder and consume them. This would lead to the conclusion that human treatment of other sentient beings (animals and even plants) would change once communication – or at least hearing the ‘others’ – becomes possible.


Research in photo-acoustics demonstrates that plants are not as silent as is commonly assumed. Most plants make sounds. If one registers - through advanced audio technology - the sounds of a rose waking up, there is a regular succession of chords that reminds the listener of Bach’s toccatas (Berendt, 1987: 62). Biological research has also demonstrated that different sounds have a different impact upon the growth and well-being of plants. And it has also been claimed, on the basis of experiments, that different types of music cause different crystal patterns in frozen water.
It seems a realistic expectation that the further development of converging technologies will allow humans to hear sounds where they assumed only silence existed (as in the deep seas) and to produce sounds that are beneficial to other living beings. Admittedly, this is a very rudimentary basis for inter-species communication, but a beginning nevertheless!

The human dimension
We are already well on our way to losing the human dimension in many of our communication processes. Even the plain old telephone is increasingly going virtual, and it is an ever rarer experience to talk to a live human being instead of communicating with a machine. Virtual voices welcome our telephone calls, direct us through a maze of numbers and options, tell us that all human operators are too busy and, when we are on the verge of strangling the virtual respondent, kindly thank us for our business.


Is this still human communication? If it can be called communication at all, it certainly falls into the narrow category of the ‘transfer of messages’ model and does not fit into a conception of communication as ‘interaction’.
Can we still communicate in the sense of Martin Buber’s relational communication? What happens to human communication when all ‘I-Thou’ relations are technically mediated? We will certainly expand and upgrade our communicative transmission practices, but can we ever listen to the ‘otherness’ of the other in communication processes mediated by converging technologies?

Solutions?
If we think this is an undesirable development, should we try to limit technological development? Can we limit ‘our pursuit of certain kinds of knowledge’ (Joy, 2000)? In a modern world that is largely inspired by the Enlightenment ideals of human improvement through science and technology, this is difficult to imagine. The holy mantra of our era seems to be that knowledge in itself is good and that acquiring more knowledge is even better. And, indeed, the search for knowledge satisfies a fundamental human desire to ‘fly away from ignorance’, as William Shakespeare elegantly put it.

However, as we acquire more knowledge we should also become aware of the dark side to an uncritical reverence for scientific and technological development. In the past decades science and technology brought humankind close to the destruction of the planet and the emerging converging technologies make the extinction of humanity a very real possibility.

One may object that humans as individuals have a strong desire to survive. This may be true, but it does not guarantee that the collectivity of humans as a species will not be guided by an equally strong negligence towards its future. The nuclear arms race of recent history does not provide a very reassuring picture. Humankind has a great capacity for irresponsible and destructive action. Technology has rarely been invented, developed and applied under the guidance of normative, moral principles. ‘Engineerability’, in combination with military and commercial interests, was and remains the essential driving force. The question is whether we can still afford this in the 21st century.


Should not the development of converging technologies be tested against the human rights standards of human dignity, security and autonomy? If the human dimension were chosen as a normative yardstick, we would have to consider seriously the observation that the tools people make are rapidly outpacing their mental capacities. For their survival, human beings may not be very well equipped, but they are streetwise in designing constructs (such as languages and technologies) that compensate for innate inadequacies. Amid the impressive array of human cultural constructs, the question arises whether these may wander too far from human nature. Could it be that the distance between construct and nature grows so big that what seemed a solution turns out to be a danger?
The development of advanced, sophisticated armaments is a good illustration. Modern arms (such as computer-steered fighter planes) no longer match the human capacity to understand what we are doing and what the consequences may be. The fighter pilot is morally so distant from his victims, that he or she may as well be playing a computer game rather than destroying human lives.

But even if fighter pilots were to try to understand and reason morally about their acts, they could not possibly begin to imagine what the effects of their actions are. Whereas our minds still travel in the age of horse-drawn carriages and spears, our bodies travel in super-fast cars and Concordes and have the devastating power of nuclear arms at their disposal. Can minds catch up with bodies?

The mind-boggling developments in science and technology inspired the belief that the rational, conscious and free human mind that the Enlightenment projected was capable of dealing with these developments in a humanitarian way. This illusion was fundamentally challenged by the 20th century horrors of Auschwitz and Hiroshima. To regain the human dimension we have to contemplate how we could bring the different paces of mind and body into harmony. Maybe the Aboriginal people of Australia can help us with what they do during long journeys: before they reach the destination of their walks, they take time for their souls to catch up with their bodies.

Whatever position one may have in relation to the converging technologies, there can be little doubt that humankind is in the process of developing new tools that have far-reaching implications for its future. Finding a humane perspective for this future demands that critical choices are made. These choices should result from a society-wide and transparent discursive process that involves all those who will be affected. It is disconcerting that the very beginnings of such a process are not even in sight today!

References
Bainbridge, W.S. (2006). ‘Cyberimmortality: Science, Religion and the Battle to Save Our Souls.’ The Futurist, 40(2): 25-29.

Berendt, J.-E. (1987). Nada Brahma: The World is Sound. London: Destiny Books.

Joy, B. (2000). ‘Why the future doesn’t need us.’ Wired, 8(4).

Kurzweil, R. (2006). ‘Reinventing Humanity: The Future of Machine-Human Intelligence.’ The Futurist, 40(2): 39-46.

Cees J. Hamelink is Emeritus Professor at the University of Amsterdam, The Netherlands.

[top]


The Singularity and human communication
versus a future that does matter

Leo Lake lake@home.nl

Soon the computer will take over. So they say. The singularity is the name reserved for the moment our computers outsmart us and develop their own minds.

Singularity theorists think they see an important similarity between humans and computers: they both think. However, given the vast differences in the circumstances under which humans and computers compute, this discovery of similarity is both a remarkable and a suspicious one. Even when thinking only of thinking, the differences between the two seem more important. The current machine computer performs its tasks in glorious isolation, from which it can only be interrupted; the human computer is closely linked to its environment: it is fundamentally event-driven. If a human computer is not driven by events, it comes to a halt. A machine computer is more likely to halt when it is, occasionally, interrupted.

Event-driven becomes almost synonymous with distraction when humans and computers are considered similar. When we, humans, think, we get so easily distracted that those who do not get so easily distracted become famous. When we solve a differential equation, we often get thirsty or develop a pseudo-philosophy about the color of the pencil used to jot down the intermediate steps. When we listen to music, the proverbial fly on the nose too easily becomes the most urgent thought. When we play chess for an audience, our eyes often wander to sexy examples of the desired sex.

Our environment pulls the strings of our attention. We “think” in messy dependence, not in the computer’s glorious isolation. The machine computer extrapolates numerous consequences from axioms; the human computer basically reacts.

This is probably due to our hardware, the brain. The larger part of the human brain did not develop to deal with the movement of the stars, musical compositions, chess, poetry or abstract logic. The brain was not designed to extrapolate, but to react. React to what? Well... to other human brains. Indeed, the main objects in our environment are people. The brain has adapted its structure to this social environment. Our brain is not a general-purpose computer. Our notions of mind, of person, of subjective experience may be the result of the way our brain adapts to other brains. All brains together may form an all-purpose computer, but probably not.

Is the singularity theorist correct in ignoring the vast differences in the pragmatics of computation as done by machines and as done by humans? It may be that they see an arrow of brain-development that is, to a large degree, an illusion.

Why? It all has to do with the fundament of what our mind is. That fundament is called “being a person”. I will return to this in the next paragraph, but let me first be explicit about the consequences of this claim. It is unlikely that the person concept will be developed by machine computers, or by any “running” computational process. This notion, however, controls to a very large degree what a human does and thinks. To the degree that the concept of person matters, to that same degree humans and computers will be different. To the degree that the concept of person matters, to that same degree a personless future becomes irrelevant to us here and now.

Let’s inspect the development of personhood a little more closely to make such statements credible. When we see a body, we automatically infer that it is a person, endowed with consciousness, with feelings and subjective experiences. This point is made in an exquisite book by Leslie Brothers (Friday’s Footprint, 1997). Our brain is hardwired to develop the notion of person, she claims. The notion of person is instrumental in co-adjusting the behavior of a group of human bodies. That it is hard-wired means that we cannot see a body without seeing a mind, without inferring the existence of subjective experience in the other. A similar point has been made by Peter Strawson, who claims that the concept of a person is logically more primitive than the idea of subjective experience.

There is nothing mysterious about this process of person construction. When we see a string of letters that forms a word in a language we speak, we cannot but see its meaning. Exactly what meaning depends on what we have learned, just as the rather abstract concept of person will be endowed with numerous characteristics based, mainly, on what is learned during the conversations between two or more human bodies.

The notion of person is a construct of our brain. It comes into being when brains and bodies interact. A person is therefore a social phenomenon. It is not ‘in’ a brain; it is distributed over at least two brains.

Back to our singularity theorists.
Will smart computers, or smart computational processes, have a mind, as so many singularists seem to imply? Probably not. A mind, as I’ve hinted above, is a sensation, or, if you prefer cog-speak or have, unlike me, a degree in psychology, a schema, constructed by humans because it has survival value on the battlefield of interpersonal relations. Mind is conditioned on personhood. A mind is an attribute; it is a relation, or relation-producing form, and therefore has no essence. If it has a reality, it is a social reality. Whether something like a mind will exist in the future universe of computational processes after the singularity depends on how, and if, these processes will communicate, parallel computation being assumed, of course. They will only develop a person schema of a kind we, humans, can relate to if they interact closely with humans. Given the schism between machine computers and human computers in the degree to which they are event-driven, this is unlikely to happen. It becomes even more unlikely if we compare a brain and a computer on the speed of their constituents. The brain is many orders of magnitude slower than a computer. The notion of mind will most likely not even develop in the ongoing conversation of computational processes.

What will computers develop to understand each other?
It will probably be something we, humans, cannot understand, or something we couldn’t possibly be interested in. If these communication processes become controlled by a new environment, however, they too may get caught in new endless and pointless circles of communication, and they may themselves desire a renewed singularity. But who cares?

Does the future of the singularity theorists matter to us? That future is so alien to us that it cannot matter to us. However, even if their future is futile, their method of transcending the present may not be.

The real value of singularity theory is that it is an attempt to transcend human existence. It does this by focusing on a small part of our mental capacities, problem-solving, and on the production of machines that are good at it - better than humans. This transcendence is lacking in our picture of mankind as a set of communicating brains. As said, brains have developed to cope with other brains.

Even the notion of person serves an instrumental role. If, as is bound to happen, this insight becomes part and parcel of our cultural discourse, then all we can do is stare at a rather nauseating circularity: brains exist to understand other similar brains. It is like saying that the reason for my being is your being - that the reason for existence is existence. From essence to being... let’s not go there. Our cognitive apparatus itself may just be a way nature has found for one brain to make other brains more predictable. If this does not annoy you already, let me try to rub it in using an analogy.

There are animals that have developed a thick skull because banging their heads against other skulls has proven its survival value. Talking to other people may just be the human variation of banging each other’s skulls. Survival value is highly dependent on a self-created context and thereby becomes utterly pointless. We communicate to survive. Period.

The singularity theorists do have a way to transcend the mess (some call it mesh) of our existence. We, whose existence is conditioned on being a person, are bound to a brain that only wants to survive amidst other brains.

How do we transcend human existence if the cognitive way of transcendence leads to an incomprehensible world? And should we?

If the answer is yes, we face the task of finding something between the meaninglessness of brains that develop just to understand other but highly similar brains and the unimaginable, even when unavoidable, existence of smart communicating parallel computational processes.

To this end we have to focus on bending the arrow of our real brain-time development. Pointless conversations may be precisely the substance of our future if we do nothing to prevent that. This is not a mere academic point. Our informational environment is increasingly orienting our brains towards pointless communication. The human capacity to transcend the present is under serious threat. The singularity theorists are probably sensitive to this threat, but their solution has some escapist tendencies.

Bending the arrow of brain-time in a direction that keeps the future related to the present requires a different technique of transcendence than the protagonists of the singularity propose. They enhance only a part of the human. It also means a break with the ideology of the protagonists of the dominance of the social, the worshippers of human communication, including the omnipresent practitioners of unreflective communication.

The singularity theorists transcend humans by, perhaps implicitly, abandoning the concept of person. That will disconnect us from their future. But staying where we are amounts to closing human development into a small, narrow and incestuous circle, one where all that counts is coping with the brain of the other.

To transcend the present is to transcend personhood without abandoning it. Perhaps even without rewiring our brain. Since abstract personhood is filled in by conversations, we effectively need to transcend communication: we need an überlanguage. If we want one...

Here my story probably ends.

[top]


What is the future of communication?
November 2nd, 2007

Nikos pinged me just in time about the MicroMedia 5-minute meetup. That’s basically a virtual meeting of a bunch of people, each of whom provides their own answer to one specific question. Since the meetup is virtual, the answers were to be provided virtually as well. And since it was all about micromedia, it was logical to expect micromedia tools to be used.

The question this time was: “What is the future of communication?”

That’s one broad question if seen from all perspectives. To avoid a non-stop thinking exercise, I limited myself to a version like “What is the future of communication from a micro media point of view?”. For those of you who don’t know what micro media is all about, here is a quote from the meetup wiki:

Text: Microblogging tools like Jaiku, Twitter, Pownce, or Text messages
Audio: Twitter Gram, Utterz, Audioblogger, or other
Images: Get creative here, can you tell a story from photos?

These tools rushed into our digital lives recently and took a large and important place there. The applications of these tools vary from personal notes to corporate meetings, but most people use them for communication purposes.

So, here are the questions that I got thinking about:

How are these tools going to change in the future?
Will we get some new ones? Will the old ones stay? How much will those that stay change?
What kind of tools will people prefer and why?
How will these tools be utilized?
How will people’s lives change because of these tools?

There is a lot that I could say in answer to these, but most of it would be just water, with no proof or reason to it. For me it’s mostly based on personal experiences and feelings, rather than any specific studies or statistical data or anything like that.

Before I go on, here is the short answer to the meetup question that I posted to Twitter.

Twitter-type short text services with open API and mobile/SMS integration will rule the future for a long time.

Now, for the long story.

I think plain text will dominate images, video, and sound for a very long time. That’s not going to change in any foreseeable future. I think so because:

Text is way easier to produce.
Most electronic devices these days have some sort of keyboard attached. Text can be easily produced in a number of ways - a full-featured keyboard, a simple mobile-phone keypad, a one-button keyboard that cycles through the alphabet, a mouse/joystick pointer, speech-to-text conversion, etc.

Text is way easier to search.
Modern search engines are at the point of extracting meaning (when you tell them “car”, they understand that you mean “car”, “automobile”, “vehicle” and so on; when you tell them “the sound of bass”, they understand that you are probably talking about music rather than fishing). A toy sketch of this kind of query expansion follows after this list.

Text is much easier to consume.
Most people won’t have trouble recognizing parts of a text without reading it through letter by letter. Most people skip chunks of text when reading longer pieces. Most people won’t have any trouble reading several texts at the same time.

Text is more portable and accessible.
Read it from the screen, print it out, or even re-write it by hand. The devices needed to move text around are much cheaper and simpler than those for sound and video.
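On the “easier to search” point above: real engines derive synonym and word-sense data statistically from huge corpora, but a hand-written table is enough to sketch the idea of query expansion. Everything here (the table, the context trick) is invented for illustration.

```python
# Toy query expansion: map a search term to the set of terms a search
# engine might also match. Real engines learn these statistically;
# this hand-written table is purely illustrative.
SYNONYMS = {
    "car": {"car", "automobile", "vehicle"},
    "bass": {"bass (music)", "bass (fishing)"},
}

def expand_query(term: str, context: str = "") -> set:
    """Return the terms to search for, using a context word to disambiguate."""
    candidates = SYNONYMS.get(term, {term})
    if context:
        narrowed = {c for c in candidates if context in c}
        return narrowed or candidates  # fall back if context doesn't help
    return candidates

print(expand_query("car"))                    # {'car', 'automobile', 'vehicle'}
print(expand_query("bass", context="music"))  # {'bass (music)'}
```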

Now, most people will prefer short chunks of text to long chunks of text.
I’d rather read two sentences and move on to the next news section or topic than spend three days trying to figure out what the author is trying to say. Most people I know would have a hard time writing a one-page article on any subject at all. The same people will have no problem spitting out a sentence or two - again, on any subject at all.

Because of the above, I think that short text services will blossom. And they are already growing pretty fast.

Which of these are better?
Those that are simpler to use. Twitter is doing a really good job here: one large text box, one submit button, and a counter of how many characters you have left for the message. Nothing more. It’s difficult to make it any simpler.

But ease of use shouldn’t be only for the end user.
It should extend to all sides of the service. And again Twitter is doing a pretty good job of it. It has a simple and straightforward API, which allows programmers to create applications for the service in just a few lines of code (a few as in one or two lines). It has an RSS feed for everything, so it’s easy to get things out of it.

It has plenty of automation - TinyURL integration, direct messages, tracking, etc. It has SMS integration, so it’s easy to use on the move.
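To make the “one or two lines of code” claim concrete, here is a minimal sketch of a 2007-era Twitter update. It assumes the old statuses/update REST endpoint with HTTP Basic authentication, which Twitter retired long ago, so treat it as a historical illustration rather than working code for today’s service.

```python
# Minimal sketch of a 2007-era Twitter status update (endpoint since
# retired; shown only to illustrate how small such a client was).
import requests

def tweet(username: str, password: str, status: str) -> int:
    """Post a status update and return the HTTP status code."""
    resp = requests.post(
        "http://twitter.com/statuses/update.json",
        data={"status": status[:140]},   # the old 140-character limit
        auth=(username, password),       # pre-OAuth HTTP Basic auth
    )
    return resp.status_code

# tweet("user", "secret", "Short text services will rule the future.")
```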

Stuff like that will be at the top of micro media, I think.

Of course, other technologies will move forward, as they always do. It will be easier and easier to create and move around sounds, videos, and whatever else is there. Devices will get smaller. Connections will get faster. The content will get richer.

But, as with many other things, the limiting factor won’t be in the technologies. It will be in people. Think about images, for example. Those have been with us for thousands of years. Yet only a few of us can draw a semi-decent picture. And photography is of no help here. Millions of terrible images out there show how effective we are with cameras. We see things in 3D; images are 2D. Software will hardly ever do a proper conversion, and humans will hardly spend the time learning about the topic to do it themselves. Videos are even more complex - we can’t manage static pictures properly, and now we have the full power to work with moving images. Sounds aren’t much different.

People buy multi-core, multi-CPU hi-end machines and mostly use them to play Minesweeper. You give them a mobile phone that can control half of the universe, and they won’t even bother with its address book. I don’t think this will ever change.

Things might improve both from the people’s and technology’s sides, but the huge gap will always be there.

These are my thoughts on how this whole micro media communication will play out in the future…

[top]


A Glimpse Ahead: the Future of Storytelling
by Helen McKay.
Text of an address to the Australian National Library, Canberra - October 1997

I look out my office window on to what was, until recently, a patch of almost bare ground. We'd experienced a particularly cold winter and small plants in that garden had received little to nourish them.

Suddenly, it was September. Spring arrived with its sunshine and showers. Almost as I watched, bulbs forced upwards spearing through the soil, their leaves reaching high to feel the warmth of the sun. Buds followed and in days there were clumps of splendid daffodils bringing their dazzling colours - yellow, orange and gold - to that previously dull corner. A bare-branched magnolia burst forth in glorious rosy bloom. Other plants sprouted new growth and quickly formed buds as they gained their nourishment from the spring rains and the sun.

Storytelling in this country is a bit like my patch of garden. The artform has been there for many years, but with the advent of television and other forms of electronic entertainment, suffered a little from the cold of winter - and a loss of nourishment.

The storytellers were still there - but storytelling was not being accorded the status it deserved. The new electronic media had caught our fancy and we all rushed to keep up with the developments in this new `Gee Whizz' technology. Writing stories became the popular artform, surpassing storytelling, as people in the computer age were able to self-publish and promote their work.

Because of the exciting discoveries, the world began to embrace the technology with a passion previously unheard of. Our lives now move at an even faster pace than ever with the advent of the internet. Ideas and information race around the planet at the speed of light, to bring us together into a closer global community. We swap information in seconds by email, with people on the other side of the world.

People are reacting to this change in an interesting way. The more they are required to interact with electronic machines, the greater seems to be the need for human communication in their leisure time. Consequently, there is currently a big swing back towards the communication artforms. Performance poetry and theatre are once again popular forms of entertainment as audiences seek that which they have lost - the art of one-to-one communication.

Now it's springtime - it's storytelling's turn for a place on the cultural agenda, as people feel the need, once again, to make that elusive connection with the storyteller. The truths, values and healing powers of stories are more relevant today than they ever were. Many people, living alone with only a television or computer screen for company, are starved of the most basic of all needs - the need to communicate with others. Storytelling answers this need at a very basic level.

As he has done since the beginning of time, the storyteller transmits the pictures in his head through his oral and body language to his audience - a truly human exchange of his vision. This exchange is a two-way phenomenon, as the audience in turn sends back signals that they have received his pictures, allowing the teller to continue with the story.

George Miller, the film-maker, found to his amazement that stories connect us through time and space. As his Mad Max films made their way around the planet, they seemed to resonate somehow, culture to culture. As he says in his article The Apocalypse and the Pig, 'To the French they were post-modern, post-apocalyptic westerns and Max was a gunslinger. In Japan, he was an outlaw samurai. In Scandinavia, a lone Viking warrior.' They were all examples of the `universal hero' myth, which is the basis of all good stories.

When he wanted to shoot one of the Mad Max movies at Kata Tjuta (the Olgas) in the mid 80s, he discovered something even more incredible. George says, 'To the Aborigines of the Central Desert, this place - Kata Tjuta - is sacred. We were required to sit with the tribal elders of the Pitjantjatjara and present them with our story. They had heard the story before. Many of its motifs and archetypes corresponded to some of their own.' So here were the custodians of a culture 40,000 years old, and connections were being made.

Storytelling is not a new artform - it's been with us since the beginning of human communication. First came the cautionary tales, warning of the dangers lurking in the unknown outside the cave; then came tales of triumphs and disasters, of joys and sorrows. These stories have been passed down through generations, telling of the significant events of our forebears, to connect us to our family mythology.

Storytelling is humanity's oldest tradition. Through stories we learn our values, and how to overcome adversity. We seem to have a deep-seated need to share our stories with others.

In his book `A Recipe for Dreaming', Bryce Courtenay says: "Each of us has been designed for one of two immortal functions - as either a storyteller or as a cross-legged listener - to tales of wonder, love and daring. When we cease to tell or listen, then we no longer exist as a people. Dead men tell no tales."

Aboriginal communities agree, believing that the loss of their stories and language is a recipe for disaster for their people.

Films, television and other forms of electronic entertainment cannot fulfil the human need to share stories and experiences. You cannot react to a film image in the way you can to a storyteller telling his story to you orally.

Story entrains minds - it can reach down into our souls and open up blocked areas deep within us, releasing grief, wounded feelings and suppressed creativity. The freeing of these blockages allows us to expand and grow - to reach our creative potential.

Because of a new awareness of its value in communication and education, storytelling is currently enjoying a worldwide resurgence. At the recent Australian National Storytelling Festival we saw many examples of the power of story. There were tales that brought first laughter, then tears - Donald Davis was a master of this type of storytelling, and he played us like a fiddle as he wove his magic with stories. Children came to the festival and competed in the final of the NSW Junior Storytelling event, telling existing stories or stories they had written. One young lass reduced us to tears with her story of a rabbit.

As some of us were already aware, storytelling is an excellent device for encouraging children to listen to stories - and then to read and write their own - often switching on reluctant readers. Parents who spoke to us were overjoyed that their children had finally found something at which they excelled.

During that weekend we heard stories from historians, stories from our Aboriginal communities and stories from other cultures. All the stories we heard were very relevant to our lives today.

I have an interesting example of how storytelling and technology can work together. We were contacted by a group of Norwegian students who had seen our website from Norway. While they were here on an exchange-student program, they rang us in Sydney from Queensland to book nine seats, before climbing into two cars and travelling down to Sydney for the Showcase Concert.

Financial recognition for the festival came from the Australia Council and Arts West Australia. I believe that support firmly places storytelling on the cultural agenda alongside other art forms. Although we were not funded by Carnivale, we were promoted and publicised by them.

We also experienced unheard-of support from the media. Two national radio stations aired interviews and daily storytelling performances, while television interviews were aired at prime times by Channel 9, Channel 10 and Optus cable TV. The child storytellers and our competition winners were interviewed on both radio and television, and many newspapers and magazines (including some very prominent business ones) gave space to articles about the festival and its participants.

Recently I attended a function at which I was thanked for encouraging a particular Asian community to tell one of their traditional stories at the '97 Festival. Initially they didn't value their stories, and they were surprised when I asked them to perform. They did it brilliantly, highlighting the story with dance and drama - it was one of the most popular performances on the program.

This performance gave them a sense of value as a community - they now felt that their stories were valid. The group say they will continue to encourage their people to tell not only the stories from their homeland but also stories of the struggle to find acceptance in Australia, their new country.

People with disabilities told wonderful stories. We had a brilliant fourth-generation profoundly deaf woman who told a wonderfully funny story in sign, with the assistance of an Auslan interpreter. A blind man came with his beautiful black Labrador and told us a wickedly funny story, and a lady from Western Australia ran a most inspiring workshop - Follow Your Dreams - from her wheelchair, later telling stories in a group storytelling session.

Throughout the weekend we enjoyed the opportunity to attend our choice of twenty-six workshops (many of which were booked out), five concerts, four group tellings, three competition finals, some swapping-ground performances and - not a partridge in a pear tree - but a Liars' Contest, complete with a "Load of Bull" trophy.

From where I view it, storytelling is experiencing a groundswell of enthusiasm throughout the world, and certainly within Australia. When we called for expressions of interest from guilds around Australia, we received 76 excellent workshop proposals from professional tellers, and deciding which to place on the program was very difficult. We also tried to make the program truly representative of the talent in this country, which meant that some of our storytelling concerts were quite long - but it was all so worthwhile. The depth of talent among our professional tellers is really surprising.

People praised our committee for inviting the international storytellers who came from around the Pacific Rim - our regional neighbours. Together with our Australian storytellers, they created an interesting blend of cultures over that weekend, building bridges of understanding and acceptance.

We found storytellers working in schools, community organisations, health facilities, social work, children's entertainment, corporate communications, tourism, writing and performance - to name but a few. Storytelling currently seems to be the 'flavour of the month' in the communications industry. The tellers who attended were drawn from all ages and occupations - the diversity was surprising.

Opportunity knocks for storytellers in this country as tourism grows. I have travelled on trains, planes and ferries - especially tourist boats - and heard the most dreary recitals of statistics, usually about overpriced real estate, rather than the wonderful stories associated with those places. Yet some of the really interesting stories about the historical places in this country - recorded, as the descriptions of paintings in our more innovative art galleries already are - could make our journeys so much more enjoyable and interesting.

Our guild has been approached to train guides at some tourist venues to tell the stories of those places, rather than recite a list of boring facts as they currently do. Good storytelling can bring tourists - and the wealth that accompanies them - to this country in the future. The year two thousand is almost upon us, with its Olympic tourist hordes - what an opportunity for storytellers to polish some of the folk tales of Australia. The tales of Australia lie on the ground like gemstones, waiting to be discovered and polished.

Health facilities are increasingly using story to release people from problems such as grief, post-traumatic disorders and other stressful conditions, allowing their patients to resume normal, happier and more productive lives.

Educators are realising that storytelling is a very cross-curricular activity. If you get kids involved with a story, you can do so much with it, extending it right across the curriculum. You can feed dry food to kids and tell them it's nutritious - they may eat it, but they won't enjoy it. Using story changes the subject into chocolate, and the children will always ask for more.

A recent newsletter carried an article about the use of storytelling to counter bullying in UK schools. We can certainly use those methods here, as bullying is a major problem that needs to be addressed in our schools.

From these few examples, you can see that opportunity certainly knocks for storytellers brave enough to step out and think laterally in a changing world. But to make a career of storytelling, we must first get the public to accept that storytellers are professionals who need to be paid a reasonable fee for their work. They need to be housed, fed and clothed, just as other professionals - such as librarians and teachers - are.

At many events where speakers, writers, performers and storytellers have been invited to perform, the only appreciation of their services - for many years - has been a pretty card, a box of chocolates, a bottle of wine, a souvenir spoon or a bunch of flowers. While the gifts are delightful, they don't do much towards feeding, housing and dressing these talented, professional people.

The speaking industry has addressed this problem by setting standards, which now bring fees for keynote speakers of $2,000 to $12,000. The irony is that the best speakers in the world are great storytellers!

To gain greater appreciation of their talent, and fair payment for their storytelling skills, the NSW Storytelling Guild saw that a benchmark was needed by which organisations hiring storytellers could gauge their worth. We also felt that it would give storytellers structured goals to work towards, helping to improve the standard of storytelling in our guild. So our guild instituted an accreditation process.

Accreditation is on three levels: Accredited Storyteller, Professional Storyteller and Master Storyteller. Accredited status is recorded on each membership card, which hiring organisations know to ask for, and a certificate is issued at each level. Since this began a number of years ago, the standard has lifted and payment to tellers has improved, although it still has some way to go. Our guild encourages anyone keen to develop the necessary skills to come and learn at our workshops, preparing them for a career as a storyteller.

Whether storytelling will continue to be regarded as just a "nice folk hobby" or as a "professional career choice" in the future is up to us, the storytellers. The financial rewards that speakers have achieved are available to storytellers too.

By gaining the public's acceptance that we are professional storytellers - regarded as professionals in the same way as teachers, librarians or health workers - we can achieve better rewards for our work. This will take commitment and an attitude of professionalism from all storytellers. We must unite to give the Guilds strength, for the Guilds are dedicated to fostering the art and image of storytelling for their members.

The biennial National Storytelling Festival has stimulated the interest of both the guild membership and the public who make up our audience. It is up to us to keep up the momentum, carrying storytelling bravely forward into the new millennium.

To quote George Miller: "Somewhere in our neuro-physiology we've been hard-wired for story. There is a kind of narrative imperative - we can't be without stories and we will find them where we can."

See also
Language and Linguistics
Computer says get a life – and we have
Email dependency
Just Remember This
Gestures tell us much
2006 Assessing Indirect Impacts of the EC Proposals for Video [184KB PDF]
Bid to save the future of Braille

Readers please email comments to: editorial AT martinfrost.ws including full name

Note: martinfrost.ws contains copyrighted material, the use of which has not always been specifically authorized by the copyright owner. We are making such material available to our readers under the provisions of "fair use" in an effort to advance a better understanding of political, economic and social issues. The material on this site is distributed without profit to those who have expressed a prior interest in receiving it for research and educational purposes. If you wish to use copyrighted material for purposes other than "fair use" you must request permission from the copyright owner.