ArcNews Online

Fall 2007
Living Inside Networks of Knowledge

By Nick Chrisman

Nearly every article on technological change begins by saying that recent changes are unprecedented. As I begin this essay about new directions and choices, I remember the overblown prose of the manual for a 1974 data conversion program. It began: "Recent years have witnessed the upsurge . . ." After 33 years, the upsurge becomes just a matter of daily life. Been there; time to break the habit.

Instead of saying that the present is different, I am going to argue that networks have always been important, just not very clearly identified as powerful elements. Around 1974, I started working on a computer at Harvard that had a freezer-sized box to connect it to other computers across the continent. It was node "9" on the ARPANet. This box extended our e-mail to the dozen or so other network boxes, but e-mail was pretty selective in those days. In my practical experience, it took another 13 years before I could reliably expect to contact a colleague through e-mail. By 1986, in planning for AUTO-CARTO 8, I could reach most of the authors and reviewers through e-mail, with a bit of care in how it was sent. Each network needed special addressing; for example, British addresses were inverted (uk.ac.bristol and not bristol.ac.uk as it is now). Still, it was possible to reach the community. The lesson is that a network of communication has to become nearly universal before it supplants the prior technology.

I am not going to spend any more time talking about the early days of the pre-Internet, since they have little bearing on the bigger future revolutions that have already begun. Am I exaggerating? What can be bigger than the planetary communication system that has emerged in the past decade? The Internet was not unprecedented. Connecting a significant portion of the world's population to an integrated network of communication is something our society has done over and over again. The telegraph system was one such system. From its inception in the mid-19th century, the telegraph provided light-speed communications from place to place. It remained centralized, and the last mile involved boys on bicycles, but the overall increase in speed was enormous. The telegraph was followed by the telephone, bringing the equipment right into each house. In a sober analysis, the Internet, as most people use it, simply makes another transition in the details of the connection. The network technology offers some new possibilities, but we have barely begun to figure them out. The real trouble is that as each new technology emerges, the first reaction is to use it to implement the previous technology, only a little bit faster or cheaper. Our conceptual models have not evolved as fast as our infrastructure.

In the world of GIS, we are still living out the original dreams of the 1960s. An institution would spend great time and effort to develop a geographic information system. Note that the term is singular. It implies one integrated system, a centralized one, built by experts to respond to specific needs. There is some vague hope that others will beat a path to the door of the big centralized system. If one of these users wants the data, they will be offered 1974 technology: the File Transfer Protocol (FTP) to take a copy. FTP has survived virtually unchanged for more than 30 years. Now implemented as a Web-based portal in the guise of a download, this looks modern and sophisticated, but it leads to the most horrible duplication and proliferation of unsynchronized data holdings. We have a worldwide communication network, but we are still managing it with some elements of the telegraph mentality of centralization. Somehow the official-looking professional presence of a clearinghouse inspires confidence, even if the business model fails to grasp how the world has changed.

In the movement to build "spatial data infrastructures" as a new form of activity, it is rather curious that a key message of the original work by Barbara Petchenik and colleagues at the National Research Council has been forgotten. Her point was that we already had a spatial data infrastructure, one that needed to be rethought and reengineered. The simple transfer from one medium to another preserved the institutional structure that needed to be overhauled. In place of the one-stop shop metaphor, we should be expecting to hear from many sources. In place of relying on a single integrator to produce the safety of a 1960s unitary GIS, we should learn to live with multiple sources and conflicting viewpoints.

The geographic technology that challenges the old ways of thinking is not simply the communication backbone of the Internet. The new world goes under various terms: distributed sensor networks, sensor webs, and some other buzzwords. Let's paint a picture of what these networks mean in a nested scenario. In my textbook Exploring Geographic Information Systems (Wiley: 1997, 2002), I start out with a simple case of geographic measurement: a stream gauge (or a tide gauge).

At a particular place, whose position is established by other means, a float rides up and down on the water's surface. A recording device can capture the height of the water at a given time. But then what happens? In the old days, a guy drove up in a pickup truck, changed the roll of paper, and drove it back to the office. There are a lot of hidden steps to make the basic measurement accessible. We have to include all those procedures of inscription, reinscription, digitizing, and storage before we make a stream gauge functional. As the technology changes, someone comes up with the bright idea of installing a communication link. It could be a telephone or a wireless link of some sort. The motivation of the processing agency that sent out the guy in the pickup would be to save labor costs, reduce the time lag in processing, and make a host of other improvements. A computer would probably be installed to manage the sensor and the communications, but the command from the central authority would still be "send all your data." The computer simply replaces the roll of paper. What a waste!

The computer at our stream gauge becomes a part of a distributed sensor web when we expect it to actually do some work, not just act as a roll of paper in the old arrangement. Linked by a communication network that does not simply act as a star, feeding data into the maw of the all-knowing centralized database, our stream gauge can communicate with other stream gauge installations to determine the water levels at other locations. An event like a flash flood could be detected in the field as it happens, rather than waiting for the rolls of paper to be processed at the central office (weeks later). After all, the information is driven by the water levels, not the acts of humans to recode the data and run the analysis. These agents in the field will of course be looking for whatever their programmers foresee. Detecting a flash flood requires some idea of the hydrological network, the neighborhood in which the sensor is deployed. Rising water levels upstream propagate downstream at a specific time lag that depends on slope and distance along the channel. These details can be captured, and deviations above some threshold reported. Ah! Reported to whom?
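To make that field-side check concrete, here is a minimal sketch in Python of the logic a gauge computer might run. Everything in it is an assumption made for illustration: the Reading class, the fixed wave speed standing in for slope and channel geometry, and the single threshold are invented, not drawn from any real gauge network.

from dataclasses import dataclass

@dataclass
class Reading:
    timestamp: float     # seconds since some shared epoch
    level_m: float       # water level in meters

def expected_lag_s(channel_distance_m, wave_speed_m_s):
    # Travel time for a rise to propagate downstream; the assumed wave speed
    # stands in for the combined effect of slope and channel geometry.
    return channel_distance_m / wave_speed_m_s

def flash_flood_suspected(upstream_history, local_now, lag_s, threshold_m):
    # Compare the current local level with the upstream level observed one
    # lag ago; a large positive difference suggests a surge is on its way.
    target_time = local_now.timestamp - lag_s
    past = min(upstream_history, key=lambda r: abs(r.timestamp - target_time))
    return (past.level_m - local_now.level_m) > threshold_m

# Example: a neighbor 6,000 m upstream and an assumed 2 m/s wave speed
# give a lag of 3,000 seconds, or 50 minutes.
lag = expected_lag_s(6000.0, 2.0)
history = [Reading(0.0, 1.2), Reading(600.0, 2.9)]
now = Reading(600.0 + lag, 1.3)
print(flash_flood_suspected(history, now, lag, threshold_m=1.0))   # True

The point is only that the comparison can happen at the gauge, with nothing more than a neighbor's recent history and a clock.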

The agency with the pickup trucks that stock the rolls of paper might still exercise control over its equipment. This institution's survival depends on guarding its role as custodian of the stream gauges. But this would be somewhat like expecting the telegraph boy on his bicycle to deliver our Web pages on strips of yellow paper. It would make more sense to give the computer at the gauge more of a role. It holds the archive of water levels over time; why ship it off somewhere else? The issue becomes "bandwidth"—the capacity of the network connection, which is influenced by power supply as well as the communication link. Rather than sending in a dump of water-level data and waiting for it to be integrated at some control center, the neighboring gauge computers could share their recent water readings and provide a value-added product, such as alerts of impending floods to subscribers or relevant parties (dam operators, kayak clubs, and downstream residences).
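A sketch of that inverted arrangement, with every name invented for the purpose, might look like the following: the gauge keeps its own archive and pushes short alerts to whoever has subscribed, instead of shipping raw dumps to a control center.

from typing import Callable, List, Tuple

class GaugeNode:
    def __init__(self, name: str):
        self.name = name
        self.archive: List[Tuple[float, float]] = []        # (timestamp, level_m)
        self.subscribers: List[Callable[[str], None]] = []

    def record(self, timestamp: float, level_m: float) -> None:
        # The archive stays at the gauge; nothing is shipped off by default.
        self.archive.append((timestamp, level_m))

    def subscribe(self, notify: Callable[[str], None]) -> None:
        # Dam operators, kayak clubs, or downstream residents register here.
        self.subscribers.append(notify)

    def alert(self, message: str) -> None:
        # Push a short, value-added alert rather than a raw data dump.
        for notify in self.subscribers:
            notify(f"[{self.name}] {message}")

# Usage: a downstream resident subscribes and receives a warning.
gauge = GaugeNode("upper-fork")
gauge.subscribe(lambda msg: print("Alert:", msg))
gauge.record(0.0, 2.7)
gauge.alert("Level rising faster than the seasonal norm; crest expected in about three hours.")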

This sketch of a revised business model for simple sensors inverts the old hierarchy. The old GIS looks like a telegraph business with its bicycle messengers. But like the anarchic and turbulent world of Web 2.0, it is not clear how we make the transition to the world of distributed sensor networks. There is a lot of programming to be done, and business models to be shredded by the competition. The sensors we currently have around the city and the environment are much more complicated than a simple float in a pipe. We have video cameras pointed at every public place. But when London needed to trace backpack bombers, they resorted to brute force: people poring over videotape for hours, looking for recognizable faces. In George Orwell's 1984, the cameras enforced the state's will, but that 1949 novel's author had people behind the screens. If it takes one police officer to watch each citizen, the overhead costs are pretty high. And, as always, who watches the watchers?

From his observation of the observers in Paris, French sociologist Bruno Latour found that each agency has its particular reason for being and hence its own manner of observation. The watchers do not see everything, just as we do not expect our stream gauge to record passing moose. Sensors fulfill a particular purpose and measure within a framework that the equipment imposes. An optical camera captures little at night unless the scene is properly lit. And the pixel-by-pixel measurements of gray are still not really what any user wants. The images require substantial processing to recognize a specific person—or a moose, for that matter; however, this trick is no longer the wild dreaming of a sci-fi writer.

Just as the Internet grew in a given historical setting, the distributed sensor network of the future will emerge from the little bits we already have. It will not get integrated and coherent until somebody makes the effort and has the access. I do not doubt that it can be done technically, but such a revolution will destabilize many existing institutions. There will be growing pains, resistance, and the usual shortsightedness.

As long as the current distribution of geographic power revolves around being a gatekeeper, a custodian of data, the potential of the distributed sensor network is diminished. What is required is an escape from the "Prisoner's Dilemma." [Note: This dilemma comes from game theory: many situations are structured to disfavor cooperation.]
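For readers who have not met the reference, a toy version of the dilemma in a data-sharing setting runs as follows; the payoff numbers are invented purely to show the structure, not to describe any real agency.

# Two agencies each choose to "share" or "hoard" their data holdings.
PAYOFFS = {                         # (payoff to A, payoff to B)
    ("share", "share"): (3, 3),     # pooled data benefits both
    ("share", "hoard"): (0, 4),     # the hoarder free-rides on the sharer
    ("hoard", "share"): (4, 0),
    ("hoard", "hoard"): (1, 1),     # the familiar world of duplicated holdings
}

def best_response(opponent_choice):
    # Agency A's best reply, given what agency B does.
    return max(("share", "hoard"),
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Whatever B does, A does better by hoarding -- hence the dilemma, even though
# mutual sharing leaves both agencies better off than mutual hoarding.
assert best_response("share") == "hoard"
assert best_response("hoard") == "hoard"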

And there are glimmers of hope in this regard. In the tightest of information economies, there are "Free Data Movements." Institutions can be motivated by their original mandate—protect the environment—to cut loose from the habits of centuries forced on them by the processing technologies of the past. Old habits die slowly, but there is some movement.

The biggest trend that will support the conversion of the data economy will come from the human—not technical—side. Knowledge networks have escaped from the hierarchical structure. Citizens are making their own maps, integrating their own evaluations of the world they inhabit. Yes, some of this began as user ratings of motels and restaurants, but it is a start. Each new social networking Web site (YouTube, Facebook, Wikipedia, and so on) may appear to be a simple craze, but collectively, these sites amass the power to address the pressing issues of the world as readily as the popularity of rock stars.

In the GIS community, the movement was first heard under the title of Digital Earth—the idea that libraries of information could be referenced by location as a special kind of content index. The term also tied in a real-time camera pointed at the Earth from orbit. Although Al Gore did not invent the Internet, his name and office were used to validate the Digital Earth vision. Geoweb is perhaps a better term for the technical trick of searching for content based on location. Certainly the emphasis on spatial search is the key to Google Earth and Microsoft's Virtual Earth. Yet these initiatives miss the social side of networking. One of the key elements of the technology is the empowerment of citizens to produce their own spatial information, then to present it publicly. This overthrows the specialist, centralized model of decades past.
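The technical trick is easy to caricature in a few lines of Python: tag each piece of content with a coordinate, then answer queries with a bounding box. The sample content and function names below are invented for illustration; real geoweb services layer spatial indexes, projections, and relevance ranking on top of this bare idea.

# Content items tagged with a location: (latitude, longitude, title).
CONTENT = [
    (46.81, -71.21, "Photo of the Quebec City waterfront"),
    (48.86, 2.35, "Restaurant review, Paris"),
    (46.77, -71.28, "Citizen report: flooded bike path"),
]

def spatial_search(south, west, north, east):
    # Return every item whose coordinates fall inside the bounding box.
    return [title for lat, lon, title in CONTENT
            if south <= lat <= north and west <= lon <= east]

# A box around Quebec City finds the local items and none of the Paris content.
print(spatial_search(46.5, -71.5, 47.0, -71.0))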

Knowledge networks do not have their origin in Web technology. Scholars and specialists have developed tools like journals, conferences, and peer review over the centuries. Some of these tools are attuned to the exigencies of printing or face-to-face meetings, but each has evolved to a new hybrid form. While some people focus on the wiki movement as a way to decentralize knowledge, that kind of work remains at the level of the encyclopedia, a rather superficial one.

The collective problems of the planet also require the concerted efforts of the science community. In my role as scientific director of the Geomatics for Informed Decisions (GEOIDE) Network, which links geomatics research across Canada, I have come to see the power of reorganizing our scientific expectations, of giving greater room for interdisciplinary collaboration. GEOIDE is funded under the Canadian Networks of Centres of Excellence (NCE) program; the idea is to build a community of interest that includes user communities in the research process from the start. Rather than talking about "technology transfer"—a process that implies that the user does not matter until the research is finished—the NCEs engage in knowledge translation as an active process as researchers advance in collaboration with partners from industry, government, and other community participants. A few countries in the world have taken similar steps, each attuned to their particular background and history. I can point to the Cassini group in France, which has reconstituted itself as the SIGMA Groupe de Recherche and will continue to find new administrative ways to carry on useful networking. Its next phase may be under the title Géoide à la française. In Australia, the Cooperative Research Centre–Spatial Information (CRC-SI) has built a strong linkage between industry and the research community. In the Netherlands, RGI (Space for Geo-Information) has an ambitious program of research to result in direct benefits to citizens and the economy. These groups, nine of them in all, have begun to share their experiences, a long and complex process that began last year in Banff, Alberta, Canada. New groups have emerged since then; the network structure quickly accommodates them. In the end, I expect to see that these collaborations will provide the firm foundation for a knowledge network to understand the complex interactions that constitute the world in which humankind must learn to prosper sustainably.

Knowledge networks happen at a finer scale than national ones too. Each reader should think about how they already communicate in a network of interactions, locally and in their professional roles. How do we decide what is trustworthy information? Do we do our own tests, or do we trust another person or institution? How can we be sure the guy with the pickup did not switch rolls of paper between two stream gauges? It clearly saves a lot of effort once we can fully trust the work of others, but that trust should not be handed out without careful consideration. Some of the community wants to install a closed shop, using licensing to decide who can work with GIS. The problem is that these groups want to legislate away the breadth and diversity of the current user community. It is no time to restrict access to the tools of GIS; the tool is out of that stage anyway, firmly in the realm of the whole population.

The distributed sensor webs will mix up humans and robotic sensors in a new and complex set of interactions. Trust will become a more and more important commodity, one that we will learn new ways to validate.

About the Author

From 1972 to 1982, Nick Chrisman was a researcher at the Harvard Laboratory for Computer Graphics and Spatial Analysis. He is now professor of geomatic sciences at Université Laval in Québec City, Canada, and is scientific director of GEOIDE. He is the author of Charting the Unknown: How Computer Mapping at Harvard Became GIS, published in 2006 by Esri Press, and the textbook Exploring Geographic Information Systems, published by John Wiley & Sons, 1997 and 2002.

More Information

For more information, contact Dr. Nick Chrisman (e-mail: nicholas.chrisman@geoide.ulaval.ca).
