Category Archives: Web-GIS

GeoDjango: Standing Up A GeoJSON Web-Service

The models are complete. The database is loaded with some test tabular and spatial data. We’re pushing out HTML representations of attribute data using GeoDjango’s standard templating functions. Now, the focus moves to visualizing these features’ geometries in a spatial context. Just as Django provides a QuerySet, GeoDjango provides a GeoQuerySet. When paired with a spatially-enabled database (e.g. PostGIS, SpatiaLite, etc.), the GeoQuerySet provides functionality for querying data using a series of spatial filters, in addition to tabular filters. As a point of reference, the GeoDjango docs have great tables giving a blow-by-blow comparison of different spatial databases, displaying each available spatial lookup and GeoQuerySet method. Take note: PostGIS is the clear winner in terms of functionality 😉

Why GeoJSON?

From the perspective of exporting data, GeoDjango supports a number of formats. The GeoQuerySet methods can represent your model’s geometry column as GeoHash, GeoJSON, GML, KML, or SVG. Of all these serialization formats, I’ve found KML to be the most frequently used amongst GeoDjango users. Illustrative of this, three of the four functions in django.contrib.gis.shortcuts have to do with KML/KMZ. That’s awesome, but where is the love for GeoJSON?

KML can be easily consumed by OpenLayers, the king of open source web mapping viewers. But some of the new kids, e.g. Leaflet and Polymaps, look to favor GeoJSON over KML as an input for dynamically rendered data, and don’t consume KML out-of-the-box. That being said, if you want KML, this fork of Leaflet looks like it will work for you. In my particular project, I’m interested in using Leaflet, so GeoJSON was the way to go.

Later on, I’d like to do some speed comparisons, rendering the same featureset using OpenLayers, represented as both KML and GeoJSON, but that’s for the future. I’m wondering if OpenLayers will handle the JSON object faster than KML’s XML? JSON is just JavaScript after all.

The Problem

The GeoDjango GeoQuerySet API has built in methods to handle the serialization and de-serialization of a result set’s geometries into different formats. The problem is that these methods only wrap the geometries of a result set. For display in a web mapping application, like leaflet, I want to have access to both the geometry in the format of my choosing, as well as the supplementary attributes (name, type, etc.) which provide context for that geometry.

For example, asking for the GeoJSON representation of a given feature through Django’s shell, like this:

# Import models from the company application
from company.models import *
# Create a GeoQuerySet from the primary key, return GeoJSON
qs = Boundary.objects.filter(pk=1).geojson()
# Print the GeoJSON representation of the geometry
print qs[0].geojson

Will produce a GeoJSON object like this:

{
    "type": "...",  // matches the model's geometry type
    "coordinates": [
        ...
    ]  // Truncated vertices
}

As shown in the example above, the geometries are returned, but not the tabular attributes associated with that feature. Looking at the GeoJSON spec, there are multiple ‘type’ values which an object can be constrained by. Using GeoDjango’s geojson() method will produce a type matching the geometry declared in the associated GeoDjango model (point, line, polygon, etc.). The distinction here is that I’d like to return a GeoJSON object of type ‘Feature’ or ‘FeatureCollection’. These types require an additional ‘properties’ member, which can store tabular attributes. From the spec:

A feature object must have a member with the name “properties”. The value of the properties member is an object (any JSON object or a JSON null value).

So, the trick now is to dynamically create a GeoJSON object which contains both a populated geometry and a populated properties member.
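Conceptually, the wrapping is straightforward: take the geometry string that geojson() hands back, parse it, and nest it inside a Feature alongside a dict of attributes. A minimal sketch in plain Python (the geometry and attribute values here are illustrative, not pulled from the actual database):

```python
import json

def to_feature(geojson_geom, properties):
    """Wrap a bare GeoJSON geometry string and a dict of
    attributes into a GeoJSON Feature object."""
    return {
        "type": "Feature",
        "geometry": json.loads(geojson_geom),
        "properties": properties,
    }

# A geometry string as returned by GeoQuerySet.geojson()
# (hypothetical values)
geom = '{"type": "Point", "coordinates": [-122.97, 47.97]}'
feature = to_feature(geom, {"name": "Port Gamble"})
print(json.dumps(feature))
```

This is the shape of the problem; the next section covers a library that does the same thing without the hand-rolling.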

The fix (vectorformats)

In order to create a fully populated GeoJSON object, we need to bring in some extra assistance. Some quick searching brought me to this stack exchange response, from Chris Schmidt. Chris’ vectorformats package handles the serialization and de-serialization of a variety of formats, including Django Querysets and GeoJSON. From the project homepage:

The vectorformats library is designed to make it easy to serialize content from any source to any source within Python. Think of it as a “poor man’s OGR” – a pure Python implementation of transforming features to and from various formats (largely XML based).

Installing vectorformats is as easy as:

$ sudo easy_install vectorformats

From there, as outlined in the above referenced post, it’s only a matter of adding a few lines into your GeoDjango app’s view function.

# Using the vectorformats module, return a GeoJSON FeatureCollection
# for a given boundary ID.
from django.http import HttpResponse
from vectorformats.Formats import Django, GeoJSON
from company.models import Boundary

def boundary_detail(request, boundary_id):
    boundary_detail = Boundary.objects.filter(pk=boundary_id)
    djf = Django.Django(geodjango='geom', properties=['name'])
    geoj = GeoJSON.GeoJSON()
    s = geoj.encode(djf.decode(boundary_detail))
    return HttpResponse(s)

The resulting GeoJSON object, represented as a ‘type’ of ‘FeatureCollection’:

{
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {
                "coordinates": [
                    ...
                ]  // Truncated vertices
            },
            "properties": {
                "name": "Port Gamble"
            }
        }
    ]
}

And there you have it: GeoJSON containing both the geometry and attributes. This output can now be mapped to a URL, creating an endpoint such as ‘{boundary_id}/’. Pass this to your web mapping client, and you’re ready to rock.
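Wiring the view into a URL is the usual Django routine; a sketch of how the URLconf might look (the URL pattern and app paths here are assumptions, shown in the old-style syntax of that era):

```python
# urls.py (sketch -- the pattern and module path are assumptions)
from django.conf.urls.defaults import patterns

urlpatterns = patterns('company.views',
    # e.g. /boundary/1/ returns the FeatureCollection for pk=1
    (r'^boundary/(?P<boundary_id>\d+)/$', 'boundary_detail'),
)
```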

Moving to GeoDjango

I’ve been creating a simple GeoDjango application for managing environmental sampling metadata, and it’s been a lot of fun so far. I’ve had experience working with many different forms of metadata tracking, from spreadsheets, to wikis, to online project management tools. All of them have their ups and downs, and it seems like there is always a dealbreaker with each organizational method.

Spreadsheets are easy to edit, but lack any form of relational structure (two sets of data for the same report? I guess I’ll just put two entries into the same cell).

sometimes spreadsheets just don't work...

Wikis are cool and allow easy access to information, but are (in certain cases) a pain for folks to edit. Take the experience of table creation. DokuWiki, a generic wiki software, requires a series of carefully placed carets and pipes to delineate headers and columns. A common pain comes when adding a value to a new row, and that value exceeds the length of any previous cell. This requires the author to expand the column header and all previously entered rows, re-aligning the pipes and carets. Granted, as a slightly OCD GIS analyst, the sight of a well-crafted text table fills me with no less wonder than when I saw Albert Bierstadt’s “Puget Sound on the Pacific Coast”, but it’s just darn tedious at times. Additionally, as the log of sampling events grows larger, it gets harder to manage. DokuWiki, AFAIK, provides no way to automatically re-sort entire sections of pages, or records in tables, into alphabetical order, which would make searching a particular page much faster as content grows larger and larger.

^ Column One                ^ Column Two                 ^
| Zee 'z' string            | This is a longer string    |
| 2nd longer string of text | 2nd short string           |
| SuperShort                | A string starting with 'A' |

Online project management tools are interesting as well. They allow rapid collaboration between project members, and provide template functionality, allowing status reports on recurring workflows to be easily generated (e.g., create a template for a report, spawn off an instance of the template for each new project). The downsides to these services are that they cost money, they may not provide a normalized way to store data, and (of most interest to myself) they probably don’t allow for the storage/visualization of spatial data.

10 megs free for one user? cool!

In comes GeoDjango. Over the next few posts, I think I’ll record my experiences developing an application that allows the storage of metadata, within the context of environmental sampling efforts. The goal is to provide a web application which stores both tabular and spatial data in a normalized fashion, and easily visualize both in an informative way.

Hey look Ma', it's GeoJSON!

Hand-Rolled Vector Tiles – TileStache

A few weeks ago I found myself surfing the intertubes for instructions on how to serve up some vector tile goodness. That search came up pretty much empty, except for one glimmering thread of hope. The answer: TileStache { <– Imagine that’s a mustache on its side.

TileStache is a Python-based server application that can serve up map tiles based on rendered geographic data.

By design, TileStache can be used to serve up stylish TMS tiles using Mapnik map files, and can also be used to locally cache remote services via proxy. What I’m most interested in, though, is its ability to deploy vector tiles. So what are vector tiles? Think TMS tiles… but replace the image representations of the geometries with GeoJSON. Pretty wild, right? Specifically, the TileStache PostGeoJSON provider can be used to connect TileStache to a PostGIS data source, and return a tile comprised entirely of GeoJSON data.
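A layer of this kind is declared in TileStache’s JSON configuration file. Something like the following sketch; the provider kwargs here are written from memory and should be checked against the PostGeoJSON provider’s documentation, and the connection string and table are placeholders:

```json
{
  "cache": {"name": "Disk", "path": "/tmp/stache"},
  "layers": {
    "boundaries": {
      "provider": {
        "class": "TileStache.Goodies.Providers.PostGeoJSON.Provider",
        "kwargs": {
          "dsn": "dbname=gis user=postgres",
          "query": "SELECT gid, juris_name, the_geom FROM boundaries",
          "id_column": "gid",
          "geometry_column": "the_geom"
        }
      }
    }
  }
}
```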

For example, data from a PostGIS data source can be rendered as an image tile (…/10/16/357.png), like this:

But can also be represented as a vector tile (…/10/16/357.json), like this:

// Subset of a single 256x256 pixel vector tile.
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {
        "type": "MultiPolygon",
        "coordinates": [
          [ [ [-122.973093, 47.969842],
              ...,
              [-122.973093, 47.969842] ] ]  // Truncated vertices
        ]
      },
      "properties": {
        "property_s": "USFS",
        "juris_name": "Olympic National Forest"
      },
      "id": 1280
    }
  ]
}

So what are the advantages of using vector tiles? You can already use OpenLayers’ GeoJSON format reader to populate a vector layer in OL. It’s an issue of size. Highly complex geometries can be large, and requesting all that data at once can be time consuming. Vector tiles approach this problem using the same answer as TMS: only request those sections of data which you need at the time. By requesting only the tiles within the user’s current extent plus a small buffer, the need to download large geometries in one shot is negated. Furthermore, just as TMS tiles can be pre-cached to disk (seeded), so can vector tiles.
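The bookkeeping behind “only the tiles in the current extent” is the standard slippy-map tile scheme, whether the tile payload is a PNG or GeoJSON. A quick sketch of the tile arithmetic (the extent coordinates below are illustrative):

```python
import math

def deg2num(lon, lat, zoom):
    """Slippy-map tile scheme: which tile column/row covers a
    given lon/lat at a given zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Tiles covering a small map extent at zoom 10
x0, y0 = deg2num(-123.1, 48.1, 10)   # upper-left corner
x1, y1 = deg2num(-122.8, 47.8, 10)   # lower-right corner
tiles = [(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)]
print(len(tiles))
```

The client asks for only this handful of .json tiles, instead of the whole dataset at once.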

One example of this is serving up a combined NFS boundary dataset compiled by my good pal, Greg. These boundaries are dense, and displaying them at their full extent and raw level of detail is expensive. But by breaking the vector representations of these geometries up into a standard tile scheme, only those tiles which we need are requested, and only when we need them. As a side note, in addition to tiling, I also simplified the boundaries to promote faster load times at small scales. The least granular vector representations display at the smallest zoom scales, while the highest (raw, unsimplified) level of granularity displays only at the largest zoom scales.

NFS Boundaries Provided By ChopShopGeo

Additionally, using vector representations of geometry rather than cached images allows styling of those geometries on the fly. Polymaps, the only display client I’ve found so far that can consume vector tiles out-of-the-box, renders these tiles as SVG elements. Because of this, unique styling can be applied via CSS; controlling the color, stroke, fill, etc. of each geometry in response to both attributes associated with the geometry (see image below) and user input… ala the Polymaps example page.

USGS real-time gauge stations. Darker dots represent stronger streamflow, lighter dots represent slower flow. You'll have to ignore the fact that I'm symbolizing streamflow without the streams.

The above example converts data from the USGS Instantaneous Values web service (part of the USGS Water Data for the Nation program), returned as a JSON response, to GeoJSON. These data points are then symbolized dynamically using Polymaps. More on that later.
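The conversion itself is just reshaping one JSON structure into another. A hedged sketch of the idea; the input record below mimics, in heavily simplified form, the fields pulled out of a gauge reading (the field names and values are placeholders, not the service’s actual schema):

```python
import json

# One gauge reading, shaped loosely after what gets extracted from
# the USGS JSON response (names and values are placeholders).
reading = {"site": "12200500", "lon": -122.3, "lat": 48.45, "flow_cfs": 11300.0}

def reading_to_feature(r):
    """Convert one gauge reading into a GeoJSON Feature whose
    'properties' a client like Polymaps can style against."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
        "properties": {"site": r["site"], "flow_cfs": r["flow_cfs"]},
    }

fc = {"type": "FeatureCollection", "features": [reading_to_feature(reading)]}
print(json.dumps(fc))
```

With the streamflow value sitting in properties, the CSS-driven styling described above (darker dots for stronger flow) falls out naturally.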


cURL’ing to FeatureServer from PostGIS: Easier Than I Thought

So I’ve finished cutting a draft tileset using mapnik, depicting bus routes in Bellingham, WA. Now that the cartography is well in progress, I’d like to add some interactivity to the map. My first attempt at this will be to utilize MetaCarta (Chris Schmidt)’s FeatureServer. FeatureServer allows one to use standard HTTP verbs to GET representations of data, POST new data, or DELETE existing data. While querying data you can also pass additional URL parameters like a bounding box or attribute to select out a smaller subset of returned representations. I’ll be POST’ing a bus stop dataset to FeatureServer as GeoJSON. Once the data are stored in FeatureServer, I’ll be able to add popups based on a user’s click of a bus stop.

Getting data stored on my local PostGIS install to my remote FeatureServer instance turned out to be a three step process.

Step One: Convert local PostGIS bus stops layer to GeoJSON via OGR

I had originally planned on writing a pg/plsql function to try and output a bash script. The script would cURL each feature individually to my FeatureServer instance. This proved to be way more work than I had expected. What was the solution? OGR, of course. OGR has read/write drivers for both GeoJSON and PostGIS. This allows one to convert an entire dataset to GeoJSON with a single command (see below).

ogr2ogr -f "GeoJSON" ogrstops.json PG:"host=localhost dbname=routing user=postgres password=*** port=5432" "wtastops(the_geom)"

Step Two: Wrap “coordinates” elements in double brackets

When initially trying to cURL the GeoJSON output to FeatureServer, I received an error stating that a bounding box could not be determined for the first geometry in my dataset. After some trial-and-error, I realized that the OGR-output FeatureCollection wrapped each point feature’s geometry in a single set of brackets. This behavior follows the GeoJSON specification for a FeatureCollection, as far as I can tell. However, in order for FeatureServer to consume this dataset, each point feature’s coordinates must be wrapped in a second set of brackets. I used gedit to run the find/replace. Below is an example of a GeoJSON feature which FeatureServer can consume. This individual feature is part of a larger FeatureCollection.

{ "type": "Feature",
  "properties": {
     "POINT_ID": "1000",
     "POINT_NAME": "Fielding at 32nd",
     "SHELTER": "Yes",
     "BENCH": "No" },
  "geometry": {
     "type": "Point",
     "coordinates": [[-122.474490, 48.730021]] }
}
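For a dataset of any size, that find/replace could also be scripted rather than done by hand in gedit. A minimal sketch of the same transformation in Python (file reading/writing is left out, and the sample feature is abbreviated):

```python
import json

def wrap_point_coords(fc):
    """Wrap each Point feature's coordinates in an extra list,
    as FeatureServer expects: [x, y] -> [[x, y]]."""
    for feature in fc['features']:
        geom = feature['geometry']
        if geom['type'] == 'Point':
            geom['coordinates'] = [geom['coordinates']]
    return fc

fc = {"type": "FeatureCollection",
      "features": [{"type": "Feature",
                    "properties": {"POINT_ID": "1000"},
                    "geometry": {"type": "Point",
                                 "coordinates": [-122.474490, 48.730021]}}]}
wrapped = wrap_point_coords(fc)
print(wrapped['features'][0]['geometry']['coordinates'])
```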

Step 3: cURL GeoJSON to FeatureServer

The last step is to actually POST the data to FeatureServer. For that, I used cURL.

curl -d @ogrstops.json

Now that the features have been uploaded, we can view them via FeatureServer as GeoRSS, KML, JSON, GML. Neat!

pgRouting III: PHP + OpenLayers Interface

With the routing database configured and populated, and with Geoserver rendering the WMS, the focus can now shift to designing the actual display and functionality.

The conceptual plan is as follows:

  • Extract the geometry of a user’s click on the map.
  • Pass the extracted geometry to a PHP script, via an HTTP GET request.
  • Use the PHP script to pass the geometry as part of an SQL query against the PostGIS/pgRouting database.
  • Return the geometry from the database as GeoJSON, and deserialize it into an OpenLayers vector layer feature.

The code to extract a user’s clicked coordinates was taken from this OpenLayers example. It was then modified to pass the xy coordinates to a second function, designed to create a URL which will execute a PHP script.

trigger: function(e) {
    var xy = map.getLonLatFromViewPortPx(e.xy);
    executeSQL(xy);
}

Passing the xy variable to the executeSQL() function, we are able to separate out the individual X and Y coordinates, and apply them to their respective parameters in our URL string.

// Build the URL
var json_url = "http://localhost/near_vertex_astar.php?";
json_url += "x=" + escape(xy.lon);
json_url += "&y=" + escape(;

Having constructed the URL, we are now ready to use it to populate an OpenLayers vector layer with data.

// Make a fresh vector layer, pulling features from our script URL
json_layer = new OpenLayers.Layer.Vector("GeoJSON", {
    styleMap: myStyles,
    strategies: [new OpenLayers.Strategy.Fixed()],
    protocol: new OpenLayers.Protocol.HTTP({
        url: json_url,
        format: new OpenLayers.Format.GeoJSON()
    })
});

Alright! So where are we at right now? A user has clicked the map, and that click’s geometry has been extracted and sent to a PHP script on the server for further work. The PHP script will execute SQL against the PostGIS/pgRouting database to do the following:

  • Find the closest vertex in our routing network to the user’s map click. This will be used as a source vertex.
  • Find all fire stations within 5km of the vertex (which have been pre-attributed with the closest vertex on the routing network to their location).
  • Calculate the cost (as defined by total length of the route) from the source vertex to each fire station (really the routing network vertex).
  • Return back as GeoJSON only the geometry for the route with the lowest cost.

Why all the hassle with determining the cost? Can’t you just use PostGIS’ ST_DWithin() function to find the closest fire station to our user’s click and create the route? Well, you could, but it might not always be the shortest route.

Euclidean distance versus Manhattan. Which one is shorter?


This behavior can be represented in the routing network with the example below. Two different routes are generated from the same source vertex, based on the combination of routing algorithm and whether route cost is taken into account. On the left, the Dijkstra algorithm is used to return the route to the closest fire station as the result of an ST_DWithin() query. On the right, the A-Star algorithm is used, and the route costs of all fire stations within a buffer are taken into account. As we can see, a different route and a different station are returned.

Comparing the two search algorithms and cost relationships.
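The distinction is easy to demonstrate outside of PostGIS. In the toy network below (all coordinates and edge costs invented for illustration), the station nearest by Euclidean distance is not the one with the cheapest network route, which is exactly why the cost comparison is worth the hassle:

```python
import heapq
import math

def dijkstra(graph, source, target):
    """Return the least-cost path length from source to target."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == target:
            return d
        if d > dist.get(node, float('inf')):
            continue
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float('inf')

# Toy network: the road to station_a winds through a detour node,
# while station_b has a direct road. All values are illustrative.
coords = {'click': (0, 0), 'station_a': (0, 5), 'station_b': (12, 0)}
graph = {
    'click':     {'detour': 15, 'station_b': 12},
    'detour':    {'click': 15, 'station_a': 15},
    'station_a': {'detour': 15},
    'station_b': {'click': 12},
}

def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

nearest = min(['station_a', 'station_b'],
              key=lambda s: euclid(coords['click'], coords[s]))
cheapest = min(['station_a', 'station_b'],
              key=lambda s: dijkstra(graph, 'click', s))
print(nearest, cheapest)  # nearest as the crow flies vs. cheapest by road
```

Here station_a wins on straight-line distance, but station_b wins on route cost: the same disagreement pictured in the figure above.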

A link to the JS and PHP scripts can be found at the end of this post. This definitely is not the most elegant solution to working with routing, but in terms of an experiment it was a great learning exercise. I’m really excited to dive deeper into PostGIS and pgRouting. The next step in the process will be incorporating OSM data, and adding additional attributes which affect cost (speed limits, one-way streets, etc.).

View the PHP.

View the OL JS.

pgRouting Part II: PostGIS + Geoserver

Since compiling Orkney’s pgRouting extension to PostgreSQL/PostGIS, I’ve decided to try my hand at creating a simple web interface to poke into the database. The current setup is as follows:

  • Display: OpenLayers
  • Renderer: Geoserver (via non-cached WMS)
  • Spatial Backend: PostGIS/pgRouting enabled PostgreSQL
  • Data: Public GIS data from the city of Bellingham, Washington’s GIS department.

For the sake of brevity (but really because TOPP has created some fantastic guides), I won’t go into the specifics of installing all the pieces. Just as an FYI, remember to set your ‘JAVA_HOME’ environment variable, and make sure that you don’t have things trying to use the same port!

The Bellingham data is currently stored in NAD83 State Plane WA North Feet, a typical projection for this area. This projection however, is not part of the EPSG projection set, and as such is not included in a vanilla install of PostGIS.

In order to add this to the collection of spatial reference systems used by my PostGIS install, I went with the ridiculously cool site (A crschmidt, dbsgeo, hobu, and umbrella joint, hah). Navigating to the projection’s page gives me the option to generate an INSERT statement, adding the projection’s info into my database.

To load shapefiles into the PostGIS database, I chose to use the SPIT plugin for QGIS. Loading the data was fairly straightforward. I had an issue with a datefield that was present in the source shapefile, and had to delete the column manually using Open Office Database. I haven’t found a way to delete fields from a shapefile using QGIS.


The SPIT Interface

After uploading the streets data into my PostGIS database, the next step was to transform the geometry into the Web Mercator (900913) projection. This was done using standard PostGIS functions, adding a second geometry column to the existing streets table. The reprojected data was then exported from my staging PostGIS database as a shapefile using the QGIS ‘Save As Shapefile’ tool, and re-imported into my production database (with the routing functions).

With data stored in the web mercator projection, inside of our PostGIS/pgRouting database, the next step was to add the layers to Geoserver. Using Geoserver 2.x, the process included the following steps (all done through the web-admin).

  • Add a new data store pointing to the PostGIS database.
  • Add new layers (resources) which point to the tables of interest in our PostGIS database.

After creating the connections between PostGIS and Geoserver, the creation of WMS services is taken care of, allowing us to roll them into OpenLayers with relative ease.

I guess this got a little off-topic from what I originally wanted to write about. I think I’ll save the actual breakdown of my OL code (taking a user’s map click and using it to calculate a route to the nearest fire station, as determined by Manhattan distance as opposed to Euclidean distance) for another day.

pgRouting On Ubuntu Netbook Remix 9.10

While working through Regina Obe and Leo Hsu’s PostGIS In Action I thought that I’d jump into the world of routing. My plan was to develop a sample application that could be used to plan bicycle routes throughout the city of Seattle. A quick Google search proved that someone has already done it, and done it very well! provides cycling routes using OSM data for many major cities, Seattle included.

Undeterred and inspired, I decided to compile the pgRouting set of tools for PostGIS and give them a whirl.

My primary tutorial for moving through the install and execution of functions came from the 2009 FOSS4G Tokyo & Osaka workshop entitled, “FOSS4G routing with pgRouting tools and OpenStreetMap road data.” Although my installation on Ubuntu Netbook Remix (UNR) 9.10 required a little different setup, this guide definitely got me 99% of the way there.

The majority of my installation woes were caused by the different pathways used on my UNR install of PostgreSQL vs. what are apparently the standard paths.

After attempting to execute cmake to compile pgRouting, I’d be presented with an error stating that the ‘POSTGRESQL_INCLUDE_DIR’ was not found. A locate command pointed me to the correct path for my PostgreSQL installation. By modifying the FindPostgreSQL.cmake file to search for the correct path, I was back in business.

Following the workshop instructions, I then attempted to create the database directly from the terminal, which yielded the following result.

matt@matt-netbook:~$ createdb -U postgres routing
createdb: could not connect to database postgres: could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

After reading the documentation associated with “createdb”, I tried adding the “-h” flag pointing to “localhost”, which solved the problem.

The final error which I ran into had to do with the “$libdir” environment variable. While trying to register the pgRouting functions in my new database, I’d be presented with the following:

psql:/usr/share/postlbs/routing_core.sql:32: ERROR:  could not access file "$libdir/librouting": No such file or directory
psql:/usr/share/postlbs/routing_core.sql:43: ERROR:  could not access file "$libdir/librouting": No such file or directory
psql:/usr/share/postlbs/routing_core.sql:53: ERROR:  could not access file "$libdir/librouting": No such file or directory

Getting impatient at this point (I wanted to route!), I modified the SQL files to reference the explicit path of my PostgreSQL lib directory. Once that was done, I had a working routing database!

Loading the sample data, creating the indexes, and executing the queries was amazingly straightforward. To test visualizing the data, I exported one of the tutorial queries directly into a new table.

SELECT * INTO export
 FROM dijkstra_sp('ways', 10, 20);

The route depicted in red as seen in QGIS.

Just for kicks, I tried exporting the data as GeoJSON and visualizing it via OpenLayers.

The following SQL query aggregates the exported line segments into a single GeoJSON object:

SELECT ST_AsGeoJSON(ST_UNION(the_geom)) AS geom_union
FROM export;

Using the vector-formats OL example, which displays GeoJSON in either EPSG:4326 or EPSG:102113, I was able to visualize the line segment with no problem.

GeoJSON representation of line segment generated using pgRouting, displayed in OpenLayers

Well that’s all for one day. So it looks like the bike riding app is out, but I’m sure that there will be many more interesting ideas for pgRouting that will come to mind as I continue to explore PostGIS.

MetaCarta’s Map Rectifier + ESRI DevSummit Mashup Winner :)

I never knew about the MetaCarta Labs’ Map Rectifier tool, but I expect to be using it more in the future. After uploading an image to the site, a user has full control over the creation of Ground Control Points. The advanced nature of this tool is shown through included RMS error reporting, as well as the choice between multiple transformation algorithms. In addition to uploading your own content, a user has the ability to add GCPs for other users’ uploads as well.

Above: The MetaCarta Interface


What’s really amazing is the ability to directly access rectified images via WMS overlay. Each image hosted on the site is given a unique URL, which we can insert into our favorite web mapping clients.

To try it out, I used the ExtMap mashup framework developed by ArcGIS user alperdincer. This particular application framework was one of the winners at the 2009 ESRI DevSummit, with good reason. I was able to quickly pass in the MetaCarta Labs URL, allowing the ExtMap application to consume and display the WMS layer with ease.

Above: Adding a Service to ExtMap


In addition to WMS layers, we can add in KML/GeoRSS as well as ArcGIS Server Dynamic/Tiled Map Layers.

Above: ExtMap Interface


Easy as pie? Piece of cake? Yes. It’s innovative projects like these that keep pushing me to learn more about web mapping technology. Big thanks go out to crschmidt (who I assume was involved with the project) at MetaCarta and alperdincer for putting together two great products.

On a final note, the MetaCarta Rectifier has the ability to export images as GeoTIFFs, allowing us to consume them in our desktop GIS applications. A quick check in ArcCatalog of the Chernobyl sample image I exported revealed a WGS84 GCS. I can see some really nice workflows combining this tool with tiling programs such as GDAL2Tiles for painless TMS creation.

CloudMade Tile Request Graphics

I just found a neat feature from CloudMade, a heat map showing intensity of their tile requests at each zoom scale. As can be expected, Europe and North America are the definite zones of high activity. It’s also interesting to note the high activity in other regions such as Chile and the Philippines.

Above: CloudMade's Tile Request Overlay


Following the link to the stats JavaScript, it looks like they are using a custom OpenLayers layer class, OpenLayers.Layer.Cloudmade. Sweet!

ArcGIS REST API / OpenLayers / Unit Testing = Fun In The Sun

Until today, I had never truly appreciated the value of unit testing. I recently had the need to bring these ArcGIS REST controls, designed for version 2.6 of OpenLayers, into the current development version. Having no real idea how to get started on this process, I looked to the unit tests as a guide to what needed to be changed. One might ask why this was necessary, when the team over at Avencia just put together a great ArcGIS REST plugin that has made its way into the trunk for the upcoming 2.8 release. The answer is that both plugins do different things well. The older AGSControls can perform ‘Identify’ and ‘Geoprocessing’ operations rather well, while the Avencia plugin does a great job at displaying and querying a subset of a layer resource.

In any event, the Test.AnotherWay suite, used by OpenLayers, provides an easy-to-navigate interface for debugging javascript code.

In two steps I was able to begin the debugging process.

First, I added the unit test for the AGSControl to the ‘list-tests.html’ file located in the ‘tests’ folder of a development version of OpenLayers. This unit test, written by the developer, needs to be manually downloaded and incorporated into the standard series of tests that come with OpenLayers. As we can see from the image below, this particular test was written as an HTML file and placed into the ‘Control’ sub-directory of the ‘tests’ directory.

Adding a link to the unit test for use by Test.AnotherWay


After adding the test, we open ‘run-tests.html’ in the browser and select ‘AgsControl’ from the list. After the test has executed, the results are provided to us. With the red light of failure burning bright, we might think to abandon all hope. We are, however, given the cause and location of the failure, an invaluable clue as to where to start debugging. Time to soldier on.

Above: Unit Test Failure


Using these test results as a road map, even I can eventually debug a plugin. The green light of success offers a reassuring reminder that all is well in the GIS world.

Above: Successful Unit Test


I’ve taken away a few things from this experience. Firstly, I’m again deeply impressed by the time and effort that developers in the Open Source community are putting into these projects. The only reason I could even dream of modifying any of this source code is that the developer of the AGSControls provided such detailed unit tests. These allowed me to wrap my brain around what was going on with the code, and how it could be updated. Taking the time to not only write code, but to also provide tools so that others can understand and modify it with ease, is something I think I’ll always be impressed by. And of course, I’ll be continuing to rely on unit tests as debugging tools as I continue my exploration of JavaScript programming.

In the words of Dave Bouwman, who has a whopping fourteen posts in his tag cloud on the subject:

Unit testing is quite possibly the single best practice for ensuring that your code is bug free (or very nearly bug free!).

His ‘fundamentals’ article provides a great introduction to the subject, and can be read here.