Category Archives: Scripting

PL/pgSQL function to Iterate pgRouting

I’ve been working on a side project using pgRouting to determine the least-cost path across a street network from a given building to the nearest bus stop within Bellingham, WA. It’s one thing to run pgRouting’s built-in functions from a single vertex (building) to another vertex (bus stop)… but another to have a function iterate through all buildings and their closest bus stops.

So began my first experience using PL/pgSQL. The benefit of using the procedural language for PostgreSQL lies in its ability to loop through collections of records easily. I’ve posted my function below. It’s not pretty, but it’s filled with enough notices to tell me where an error occurs, which helped me understand how things were behaving at each step. Here is the basic idea:

  • Loop through a table in which each row has a source and destination vertex
  • Execute the pgRouting function using these two vertices, determining the length of the least-cost path.
  • Populate a field, ‘route_dist’, with the resulting distance.
CREATE OR REPLACE FUNCTION bulk_route_generate() RETURNS void AS $$
DECLARE
 bld_row bld_wtastops_staging5%ROWTYPE;
 dist_calc double precision;
BEGIN
 RAISE NOTICE 'Beginning function';
 FOR bld_row IN SELECT * FROM bld_wtastops_staging5
  WHERE bld_wtastops_staging5.bld_vert_node IS NOT NULL
  AND bld_wtastops_staging5.wtastops_vert_node IS NOT NULL
  -- BEGIN ADDING BUM NODES TO SKIP OVER
  AND bld_wtastops_staging5.bld_vert_node <> 2915
  AND bld_wtastops_staging5.wtastops_vert_node <> 293
  -- ADD START GID, USED ONLY IF BUM NODES EXIST
  --AND bld_wtastops_staging5.bld_gid >= 29200
  ORDER BY bld_wtastops_staging5.bld_gid
 LOOP
  RAISE NOTICE 'Value of wtastops_vert_node is %. The value of bld_vert_node is %.', bld_row.wtastops_vert_node, bld_row.bld_vert_node;
  RAISE NOTICE 'Value of wtastops_gid is %. The value of bld_gid is %.', bld_row.wtastops_gid, bld_row.bld_gid;
  -- STANDARD pgRouting A* FUNCTION
  SELECT SUM(cost) INTO dist_calc FROM shortest_path_astar('
   SELECT gid AS id,
    source::integer,
    target::integer,
    length::double precision AS cost,
    x1, y1, x2, y2
   FROM streets_9102748',
   bld_row.bld_vert_node, bld_row.wtastops_vert_node, false, false);
  RAISE NOTICE 'Value of dist_calc is %.', dist_calc;
  UPDATE bld_wtastops_staging5
   SET route_dist = dist_calc
   WHERE bld_wtastops_staging5.bld_gid = bld_row.bld_gid;
 END LOOP;
-- BAIL OUT ON ERRORS
EXCEPTION
 WHEN CONTAINING_SQL_NOT_PERMITTED THEN
  RAISE NOTICE 'EXCEPTION: value of wtastops_vert_node is %. The value of bld_vert_node is %.', bld_row.wtastops_vert_node, bld_row.bld_vert_node;
END;
$$ LANGUAGE plpgsql;
-- EXECUTE FUNCTION
SELECT bulk_route_generate();

I’m excited about the possibilities that PL/pgSQL offers for manipulating data. I’m sure the above function can be cleaned up quite a bit, too. If I ever have the need to revisit this or similar problems, I’ll be sure to do some serious head-scratching about a better approach!

Here is an image of the resulting data generated using mapnik. Areas from dark green to light green are within a 1/4-mile distance, while areas from yellow to red represent distances increasingly greater than a 1/4 mile. The large checkered areas are where the dataset failed to route. More on that at another time.


the result as seen in mapnik

Nested Tuples and Hours of Patience

I’ve finally got a working python script that does the following:

-Reads each of the vertices of a polyline shapefile using ‘OGR’.

-Runs the vertices through the Douglas-Peucker smoothing algorithm.

-Generates an output text file using ‘stdout’ with the resulting ‘smoothed’ coordinates (in whatever linear unit of measurement the source shapefile happens to be in).

It has taken quite a while to get this far, but it has been an excellent learning experience. The breakthrough for tonight was understanding that the Douglas-Peucker algorithm I downloaded from mappinghacks required a nested tuple as input. Basically, then, I needed to figure out how to translate the extracted vertices from the source shapefile into that tuple object.
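For reference, the translation step boils down to turning raw vertex text into a tuple of (x, y) tuples. A minimal sketch, assuming the vertex text looks like the comma-separated "x y" pairs left over after stripping "LINESTRING()" from ogrinfo output (the function name and input format here are illustrative, not the actual mappinghacks code):

```python
def parse_linestring(text):
    # Turn "x y,x y,..." vertex text into a nested tuple of
    # (x, y) float pairs, the shape a Douglas-Peucker function
    # expecting nested tuples can consume directly.
    return tuple(
        tuple(float(value) for value in pair.split())
        for pair in text.split(",")
    )
```

Each inner tuple is one vertex, so the result can be passed straight to a smoothing function that iterates over coordinate pairs.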

Things started to go bad, really bad, when I attempted to create formatted output files for the extracted vertexes… and re-import them as input files for the DP algorithm. It’s all good now though, and the code looks (a bit) cleaner.

The next step will be to take this output text file and build another shapefile from it…

Here is a link to the heavily commented script (right-click, select ‘save-as’) as it stands thus far.

I’m Obsessed With Douglas-Peucker

The Problem: Web map vector overlays take a really long time to load if there are too many vertices.

The Idea: Use the Douglas-Peucker algorithm to reduce the number of vertices, thereby decreasing load time.

The [Proposed] Solution: Using a version of the Douglas-Peucker algorithm written in Python, take a source shapefile as input and output a smoothed shapefile through the following steps. The script currently works on bare coordinates and, as such, is not ‘spatially’ enabled. The trick is to make it accept shapefiles as an input source.

-Extract each of the vertices using ogrinfo, pipe to grep, and export to an output csv file.
$ ogrinfo -al input_polyline.shp | grep LINESTRING | tr -d "LINESTRING()" > output.csv

-Reformat the vertices [somehow], so that they can be read by the python script.

-Rebuild shapefile from output csv with projection information using GDAL/OGR.

So my plan is essentially to break down the shapefile, smooth it, and build it back up again, using open source tools. We’ll see how far this goes.
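As a rough sketch of the smoothing step itself, a minimal pure-Python Douglas-Peucker implementation (not the mappinghacks version, just an illustration of the algorithm) looks something like this:

```python
import math

def perpendicular_distance(point, start, end):
    # Distance from `point` to the line through `start` and `end`.
    (x, y), (x1, y1), (x2, y2) = point, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    # Keep the vertex farthest from the anchor-floater segment whenever
    # it exceeds the tolerance, and recurse on both halves; otherwise
    # collapse the run down to its two endpoints.
    if len(points) < 3:
        return list(points)
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            index, dmax = i, d
    if dmax > tolerance:
        left = douglas_peucker(points[:index + 1], tolerance)
        right = douglas_peucker(points[index:], tolerance)
        return left[:-1] + right
    return [points[0], points[-1]]
```

The tolerance is in the same linear units as the source coordinates, which is exactly why the shapefile’s unit of measurement matters here.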

Some reading:

Douglas-Peucker Algorithm

Existing GRASS Implementation

Existing ArcGIS Implementation


Python GP Strangeness

I had to explicitly set the workspace, list the ASCII files, and THEN join the two into a single variable for use in the ASCIIToRaster tool. I could have sworn that all I needed to do was pass the ASCII variable right into the ASCIIToRaster tool, and it would automatically carry the file path along with it. Maybe that is only the case when you are working with fields, rather than rasters / feature classes?

#Batch Convert ASCII to ESRI GRID
#Matthew Kenny
#September 14, 2008
#############################################################
# Import the arcgisscripting module
import arcgisscripting

# Create the Geoprocessor object
gp = arcgisscripting.create()

# Set the workspace
gp.workspace = "F:/!South_Kona/DEM_XYZ_Tiled/Area1/Group5/ASCII/"

# Get a list of all the ASCII files in the workspace
asciiList = gp.ListRasters("*DEM*", "*")
ascii = asciiList.next()
print "Starting with " + ascii

# Loop through the list, joining the workspace path to each file name
while ascii:
    # Build the full INPUT path
    inputAscii = gp.workspace + "/" + ascii
    print inputAscii
    # Build the full OUTPUT path
    raster = gp.workspace + "/GRID/" + ascii[:-8] + ".img"
    print raster
    # Execute ASCII to Raster
    gp.ASCIIToRaster_conversion(inputAscii, raster, "INTEGER")
    print "File " + raster + " has been created."
    # next() returns nothing once the list is exhausted, ending the loop
    ascii = asciiList.next()

# Remove the geoprocessor object from memory
del gp