In 2011, ArcWatch asked me to briefly speculate on the future of GIS. I offered five thoughts: GIS will be adapted to the indoor environment, we will have the ability to know where everything is, GIS will benefit enormously from the emerging Internet of Things, GIS will be increasingly real time, and the GIS of the future will express multiple views of geography.
Four years later, all five are developing rapidly, with new research and new products emerging all the time. But the pace of change in GIS is faster than ever before, and today several exciting new developments are on the horizon. So by way of an update, here are four GIS-related topics that are very much on my mind at the start of 2015. But first, I offer a couple of caveats. All discussion of the future is speculation, especially in a technologically advanced field like GIS. I still feel bad about coediting a two-volume, thousand-plus-page, state-of-the-art review of GIS in 1991 and completely failing to mention the Internet. Moreover, what follows is my own personal speculation. I write it partly because it is fun to think about the future and partly to stimulate others to offer their own ideas.
Any discussion of GIS is bound to touch on the issue of privacy, because the technology has enormous power to capture, store, and analyze personal information. Some years ago, my daughter, who teaches transportation engineering at a university in the United States, took a group of students to Toronto, Canada. They passed through immigration and customs, checked into a hotel, and then began to explore the city, taking pictures of things that interested them such as delivery trucks and streetcars. Toronto happened to be hosting an international economic summit at the time, and security was tight. A few minutes after the group returned to the hotel, there was a knock on the door: the Canadian Security Intelligence Service wanted to ask them a few questions. We can only speculate about what this investigation required, in the way of security cameras; face-recognition software; and linking of hotel, airline, and immigration records. Of course, the group may simply have been observed and followed by a suspicious agent. Either way, the power of modern surveillance technology is mind-boggling.
In essence, the privacy problem boils down to control of data: what control do you have over data about yourself? You decide, for example, when to share personal information via social media, but vast amounts of other, equally personal information about you is in the hands of corporations and government agencies. It is fragmented and notoriously difficult to correct when it is wrong. Geoffrey Jacquez, Professor of Geography at the University at Buffalo in New York, calls this the “Balkanization of the quantified self.”
Suppose you decided to seek a better alternative by collecting and managing your own personal data in a systematic way and deciding who should have access to what and under what circumstances. You might capture a 3D representation of your house, a useful asset if you ever wanted to sell the house or inventory the house’s contents for insurance purposes. If you were an academic, you might compile a database of all your papers, presentations, and class notes. To keep track of possible exposure to environmental hazards in your life, you might build a database of your own travels and the places you have lived. Such information might be very useful to a health professional in the event that you agreed to share it—either because you yourself have recently developed a health problem or because the health professional would like to reference your case in his or her research.
Most tempting, perhaps, is the potential to recognize and realize the economic value of such data. Why surrender information about our buying habits to vendors when we use credit cards, rather than collecting, managing, and perhaps selling it ourselves?
Much media attention has been lavished recently on big data and some of its successes. Big data is, of course, not just big; it’s more correctly identified with at least three characteristics, often termed the three Vs:
- Volume—The data is available in greater volumes than we have previously been able to handle comfortably.
- Variety—Today it is normally possible to find multiple sources of relevant data on any problem.
- Velocity—Many of these sources may be gathering data in near real time.
Big data involves networks of sensors monitoring the planet, as well as crowdsourced data from citizens, but the data is rarely subject to carefully controlled sampling or quality assurance.
Large volumes of data are nothing new to GIS. Landsat began acquiring data in the early 1970s in volumes that were then far above our ability to fully exploit them. Today the video images being captured by the thousands of surveillance cameras deployed around London, England, and other large cities amount to a petabyte-scale computing problem. Variety and velocity, however, are a different matter. In the past, our geographic information has been carefully assembled and synthesized by experts working for agencies such as the National Geospatial-Intelligence Agency (NGA) or the United States Geological Survey (USGS). Big data requires an entirely new set of tools for integration and synthesis, in effect making useful and reliable data out of a morass of disparate observations. Velocity is also new, since GIS evolved in an environment of maps that were designed to be valid for as long as possible and only show comparatively stable features such as mountains, rivers, and roads.
So why the fuss? If we could solve these issues, what would we gain? The big data success stories are all about prediction—tomorrow’s Dow Jones Industrial Average or the result of an election—which is why the idea has attracted such attention in industry and government. Big data in GIS will also be about prediction: not about when but about where (and sometimes when too). Spatial prediction can answer questions like
- Where will this hurricane track in the next week?
- What will the value be five years from now of this house I’m thinking of buying?
- Where will the heaviest impacts of this year’s flu season be?
- Where should we locate the next store in our retail chain?
So big data has a lot to offer GIS but also poses plenty of challenges for the GIScience research community.
Space and Place
GIS is spatial, using coordinates to represent positions and geometries and supporting functions that make use of those coordinates to measure distances, slopes, and areas. But humans don’t think in terms of coordinates and don’t have the means in their own brains to compute distances, directions, and other properties, which is, of course, why GIS is so vital and successful. On the other hand, humans think a lot about named places, from the scale of continents to the rooms in their houses. They store associations of these places in their memories and share them in conversations. They know about the hierarchical relationships between named places (Seattle is in the State of Washington, the Pacific Northwest, the Puget Sound Lowlands, and the region of Cascadia) without being able to perform point-in-polygon operations in their heads.
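The point-in-polygon operation mentioned above—deciding whether a coordinate pair falls inside a named region—is a staple of GIS. A minimal sketch of the standard ray-casting approach follows; the bounding quadrilateral for Washington State is a crude illustrative stand-in, not real boundary data:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count how many polygon edges a ray from
    (x, y) toward +infinity crosses; an odd count means inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A rough bounding quadrilateral for Washington State (lon, lat),
# for illustration only.
washington = [(-124.8, 45.5), (-116.9, 45.5), (-116.9, 49.0), (-124.8, 49.0)]
seattle = (-122.3, 47.6)
print(point_in_polygon(*seattle, washington))  # True
```

Humans resolve "Seattle is in Washington" instantly from memory; a GIS must run a computation like this one against explicit boundary geometry.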
Today we have numerous tools for linking places to spaces. Gazetteers and point-of-interest databases give us the coordinates of named features, though they fall apart in dealing with large features and features without well-defined boundaries. For example, try asking Google to find a route from “Colorado” to “Wyoming” (or asking ArcGIS Online for the location of the “Mississippi River”). In both cases, the gazetteers used by these systems assign only a single pair of coordinates to these large, extended features. So your trip from Colorado to Wyoming will be assumed to start and end at the geometric centroids of these states, and you will be told that part of your route is closed in winter.
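The limitation described above can be made concrete with a toy gazetteer that, like the systems in the example, reduces each named feature to a single representative point. The structure and coordinates here are hypothetical, chosen only to illustrate the problem:

```python
# A toy gazetteer: each named feature collapsed to one representative
# point (lon, lat), as in the routing example above. Coordinates are
# approximate centroids, for illustration only.
gazetteer = {
    "Colorado": (-105.5, 39.0),
    "Wyoming": (-107.5, 43.0),
    "Mississippi River": (-90.2, 32.3),  # one point for a 2,000-plus-mile river
}

def locate(place):
    """Look up a place name; extended features collapse to a single point."""
    return gazetteer.get(place)

# A route "from Colorado to Wyoming" built on such a gazetteer can only
# connect the two centroids; the long boundary the states share is invisible.
print(locate("Colorado"))
print(locate("Mississippi River"))
```

Representing extended features honestly would require polygons or polylines rather than points, plus routing logic that understands "anywhere inside Colorado" as a valid origin.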
GIS has long had a reputation for being difficult to learn and use, which is why we need GIS courses at the university level to train our GIS professionals. Things have improved markedly in recent years, with point-and-click interfaces, story maps, and ArcGIS’s extensive online help, but even today GIS is not as easy to use as it might be. One reason may be that GIS doesn’t reason like humans, so humans have to learn a new way of communicating with GIS. What if we could build a technology that helped humans think the way they inherently do? What would be its benefits? First, it would allow us to tap the vast resources of place-based knowledge that humans carry around with them but have no technology-based way of sharing or compiling. Second, it would enable the development of a whole new set of functions based on place rather than space, such as automatically generated sketch maps that sacrifice planimetric accuracy in the interests of clarity and usefulness. Third, it would build better bridges between the spatial and “platial” worlds, greatly shortening the GIS learning curve.
So What’s It All About, Anyway?
The world we know and love and call GIS has grown by many orders of magnitude since Roger Tomlinson coined the acronym in the 1960s. Terms have proliferated, and today students can take courses in geoinformatics, geomatics, geographic information science, or spatial information systems and encounter much of the same basic content. The G in GIS has been decoded as global and geospatial instead of the original geographic (or geographical), and the S as science, services, and studies instead of system. But hunt for a central term that describes what all these have in common, and what do we have? Geospatial is perhaps the strongest contender, but it’s an adjective: “What do you do?” “I do geospatial. . .er. . .stuff.”
In the fourth edition of our textbook Geographic Information Science and Systems, which will be published later this year, Paul Longley, David Maguire, David Rhind, and I argue that almost all systems are integrated across networks, and with cloud GIS, the entire Internet is quickly becoming one vast GIS. We argue that what holds the field together and forms its essential core is the information—geographic information—and the processes used to capture, represent, store, analyze, model, archive, and more generally use it. We discuss GI databases; GI science; GI software; GI professionals and, of course, GI itself. We would be equally comfortable with geoinformation, and perhaps geodata when dealing with raw observations.
Why is this important to the future of the field? Because it’s critical to understand how this field has expanded, and why many experts who have joined it in recent years may never have identified it as GIS and may not even know how to decode the acronym.
As I wrote at the outset, I have fun thinking about the future of GIS, and I hope you do too. If you have ideas, comments, or suggestions I’d be happy to read them. Please e-mail me.
Read Goodchild’s 2011 ArcWatch article Looking Forward: Five Thoughts on the Future of GIS.