<?xml version="1.0" encoding="UTF-8" ?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" version="2.0"><channel><title>Martin Davis | CrunchyData Blog</title>
<atom:link href="https://www.crunchydata.com/blog/author/martin-davis/rss.xml" rel="self" type="application/rss+xml" />
<link>https://www.crunchydata.com/blog/author/martin-davis</link>
<image><url>https://www.crunchydata.com/build/_assets/martin-davis.png-DFTSNBOD.webp</url>
<title>Martin Davis | CrunchyData Blog</title>
<link>https://www.crunchydata.com/blog/author/martin-davis</link>
<width>1468</width>
<height>1465</height></image>
<description>PostgreSQL experts from Crunchy Data share advice, performance tips, and guides on successfully running PostgreSQL and Kubernetes solutions</description>
<language>en-us</language>
<pubDate>Tue, 30 May 2023 09:00:00 EDT</pubDate>
<dc:date>2023-05-30T13:00:00.000Z</dc:date>
<dc:language>en-us</dc:language>
<sy:updatePeriod>hourly</sy:updatePeriod>
<sy:updateFrequency>1</sy:updateFrequency>
<item><title><![CDATA[ SVG Images from Postgres ]]></title>
<link>https://www.crunchydata.com/blog/svg-images-from-postgis</link>
<description><![CDATA[ We are excited to announce a new set of functions to generate SVGs from Postgres and PostGIS! This makes it really easy to create maps, images, or charts directly from your database. Once you read through these samples, you'll want to start playing with images from your own database. ]]></description>
<content:encoded><![CDATA[ <p><a href=https://postgis.net/>PostGIS</a> excels at storing, manipulating and analyzing geospatial data. At some point it's usually desired to convert raw spatial data into a two-dimensional representation to utilize the integrative capabilities of the human visual cortex. In other words, to see things on a map.<p>PostGIS is a popular backend for mapping technology, so there are many options to choose from to create maps. Data can be rendered to a raster image using a web map server like <a href=https://geoserver.org/>GeoServer</a> or <a href=https://mapserver.org/>MapServer</a>; it can be converted to GeoJSON or vector tiles via servers such as <a href=https://github.com/CrunchyData/pg_featureserv><code>pg_featureserv</code></a> and <a href=https://github.com/CrunchyData/pg_tileserv><code>pg_tileserv</code></a> and then shipped to a Web browser for rendering by a library such as <a href=https://openlayers.org/>OpenLayers</a>, <a href=https://maplibre.org/>MapLibre</a> or <a href=https://leafletjs.com/>Leaflet</a>; or a GIS application such as <a href=https://qgis.org>QGIS</a> can connect to the database and create richly-styled maps from spatial queries.<p>What these options have in common is that they require external tools which need to be installed, configured and maintained in a separate environment. This can introduce unwanted complexity to a geospatial architecture.<p>This post presents a simple way to generate maps entirely within the database, with no external infrastructure required.<h2 id=svg-for-the-win><a href=#svg-for-the-win>SVG for the win</a></h2><p>A great way to display vector data is to use the <dfn>Scalable Vector Graphic</dfn> (<abbr>SVG</abbr>) format. It provides rich functionality for displaying and styling geometric shapes. 
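<p>For example, a minimal hand-written SVG document for a single styled shape might look like this (a sketch for illustration only, not output produced by any particular tool):<pre><code class=language-xml>&lt;svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 10 10">
  &lt;!-- a filled triangle with a thin black outline -->
  &lt;polygon points="1,9 9,9 5,1"
    style="fill:#ff8800; stroke:#000000; stroke-width:0.1" />
&lt;/svg>
</code></pre>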
SVG is widely supported by web browsers and other tools.<p>By including CSS and JavaScript it's possible to add advanced styling, custom popups, dynamic behaviour and interaction with other web page elements.<h2 id=introducing-pg-svg><a href=#introducing-pg-svg>Introducing <code>pg-svg</code></a></h2><p>Generating SVG "by hand" is difficult. It requires detailed knowledge of the <a href=https://www.w3.org/TR/SVG2/>SVG specification</a>, and constructing a complex text format in SQL is highly error-prone. While PostGIS has had the function <a href=https://postgis.net/docs/manual-3.3/ST_AsSVG.html><code>ST_AsSVG</code></a> for years, it only produces the SVG <a href=https://svgwg.org/svg2-draft/paths.html#PathData><strong>path data</strong></a> attribute value. Much more is required to create a fully-styled SVG document.<p>The PL/pgSQL library <a href=https://github.com/dr-jts/pg_svg><code>pg-svg</code></a> solves this problem! It makes it easy to convert PostGIS data into styled SVG documents. The library provides a simple API as a set of PL/pgSQL functions which allow creating an SVG document in a single SQL query. Best of all, it installs as just a set of functions; no other infrastructure is required!<h2 id=example-map-of-us-high-points><a href=#example-map-of-us-high-points>Example map of US high points</a></h2><p>The best way to understand how <code>pg-svg</code> works is to see an example. We'll create an SVG map of the United States showing the highest point in each state. 
The map has the following features:<ul><li>All 50 states are shown, with Alaska and Hawaii transformed to better fit the map.<li>States are labeled, and filled with a gradient.<li>High points are shown at their location by triangles whose color and size indicate the height of the high point.<li>Tooltips provide more information about states and high points.</ul><p>The resulting map looks like this (to see tooltips open the <a href=https://raw.githubusercontent.com/dr-jts/crunchyblog/master/pg-svg/us-highpt.svg>raw image</a>):<p><img alt="a scramble of state names and a map of the US" loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/5672aa43-56f7-4239-10f2-a1d845d6c400/public><p>The SQL query to generate the map is <a href=https://github.com/dr-jts/pg_svg/blob/master/demo/map/us-highpt-svg.sql>here</a>. It can be downloaded and run using <code>psql</code>:<pre><code class=language-shell>psql -A -t -o us-highpt.svg  &#60 us-highpt-svg.sql
</code></pre><p>The SVG output <code>us-highpt.svg</code> can be viewed in any web browser.<h2 id=how-it-works><a href=#how-it-works>How it Works</a></h2><p>Let's break the query down to see how the data is prepared and then rendered to SVG. A dataset of US states in the geodetic coordinate system (WGS84, SRID = 4326) is required. We used the Natural Earth states and provinces data available <a href=https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-1-states-provinces/>here</a>. It is loaded into a table <code>ne.admin_1_state_prov</code> with the following command:<pre><code class=language-shell>shp2pgsql -c -D -s 4326 -i -I ne_10m_admin_1_states_provinces.shp ne.admin_1_state_prov | psql
</code></pre><p>The query uses the SQL <code>WITH</code> construct to organize processing into simple, modular steps. We'll describe each one in turn.<h3 id=select-us-state-features><a href=#select-us-state-features>Select US state features</a></h3><p>First, the US state features are selected from the Natural Earth boundaries table <code>ne.admin_1_state_prov</code>.<pre><code class=language-pgsql>us_state AS (SELECT name, abbrev, postal, geom
  FROM ne.admin_1_state_prov
  WHERE adm0_a3 = 'USA')
</code></pre><h3 id=make-a-us-state-map><a href=#make-a-us-state-map>Make a US state map</a></h3><p>Next, the map is made more compact by realigning the far-flung states of Alaska and Hawaii.<br>This is done using PostGIS <a href=https://postgis.net/docs/manual-3.3/reference.html#Affine_Transformation>affine transformation functions</a>. The states are made more proportionate using <a href=https://postgis.net/docs/manual-3.3/ST_Scale.html><code>ST_Scale</code></a>, and moved closer to the lower 48 using <a href=https://postgis.net/docs/manual-3.3/ST_Translate.html><code>ST_Translate</code></a>. The scaling is done around the location of the state high point, to make it easy to apply the same transformation to the high point feature.<pre><code class=language-pgsql>,us_map AS (SELECT name, abbrev, postal,
    -- transform AK and HI to make them fit map
    CASE WHEN name = 'Alaska' THEN
      ST_Translate(ST_Scale(
        ST_Intersection( ST_GeometryN(geom,1), 'SRID=4326;POLYGON ((-141 80, -141 50, -170 50, -170 80, -141 80))'),
        'POINT(0.5 0.75)', 'POINT(-151.007222 63.069444)'::geometry), 18, -17)
    WHEN name = 'Hawaii' THEN
      ST_Translate(ST_Scale(
        ST_Intersection(geom, 'SRID=4326;POLYGON ((-161 23, -154 23, -154 18, -161 18, -161 23))'),
        'POINT(3 3)', 'POINT(-155.468333 19.821028)'::geometry), 32, 10)
    ELSE geom END AS geom
  FROM us_state)
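  -- Illustration (not part of the query): ST_Scale(geom, factor, origin)
  -- scales about the origin point, and ST_Translate shifts by X/Y offsets:
  --   SELECT ST_AsText( ST_Translate( ST_Scale('POINT(10 10)'::geometry,
  --     'POINT(0.5 0.5)'::geometry, 'POINT(10 10)'::geometry), 5, 0));
  --   -- POINT(15 10): scaling about the point leaves it fixed; translation shifts it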
</code></pre><h3 id=high-points-of-us-states><a href=#high-points-of-us-states>High Points of US states</a></h3><p>Data for the highest point in each state is provided as an inline table of values:<pre><code class=language-pgsql>,high_pt(name, state, hgt_m, hgt_ft, lon, lat) AS (VALUES
 ('Denali',              'AK', 6198, 20320,  -151.007222,63.069444)
,('Mount Whitney',       'CA', 4421, 14505,  -118.292,36.578583)
...
,('Britton Hill',        'FL',  105,   345,  -86.281944,30.988333)
)
</code></pre><h3 id=prepare-high-point-symbols><a href=#prepare-high-point-symbols>Prepare High Point symbols</a></h3><p>The next query does several things:<ul><li>translates the <code>lon</code> and <code>lat</code> location for Alaska and Hawaii high points to match the transformation applied to the state geometry<li>computes the <code>symHeight</code> attribute for the height of the high point triangle symbol<li>assigns a fill color value to each high point based on the height<li>uses <code>ORDER BY</code> to sort the high points by latitude, so that their symbols overlap correctly when rendered</ul><pre><code class=language-pgsql>,highpt_shape AS (SELECT name, state, hgt_ft,
    -- translate high points to match shifted states
    CASE WHEN state = 'AK' THEN lon + 18
      WHEN state = 'HI' THEN lon + 32
      ELSE lon END AS lon,
    CASE WHEN state = 'AK' THEN lat - 17
      WHEN state = 'HI' THEN lat + 10
      ELSE lat END AS lat,
    (2.0 * hgt_ft) / 15000.0 + 0.5 AS symHeight,
    CASE WHEN hgt_ft > 14000 THEN '#ffffff'
         WHEN hgt_ft >  7000 THEN '#aaaaaa'
         WHEN hgt_ft >  5000 THEN '#ff8800'
         WHEN hgt_ft >  2000 THEN '#ffff44'
         WHEN hgt_ft >  1000 THEN '#aaffaa'
                             ELSE '#558800'
    END AS clr
  FROM high_pt ORDER BY lat DESC)
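  -- e.g. Denali (20320 ft): symHeight = (2.0 * 20320) / 15000.0 + 0.5 ≈ 3.21,
  -- clr = '#ffffff'; Britton Hill (345 ft): symHeight ≈ 0.55, clr = '#558800'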
</code></pre><h3 id=generate-svg-elements><a href=#generate-svg-elements>Generate SVG elements</a></h3><p>The previous queries transformed the raw data into a form suitable for rendering.<br>Now we get to see <code>pg-svg</code> in action! The next query generates the SVG text for each output image element, as separate records in a result set called <code>shapes</code>.<p>The SVG elements are generated in the order in which they are drawn - states and labels first, with high-point symbols on top. Let's break it down:<h3 id=svg-for-states><a href=#svg-for-states>SVG for states</a></h3><p>The first subquery produces SVG shapes from the state geometries. The <a href=https://github.com/dr-jts/pg_svg#svgshape><code>svgShape</code></a> function produces an SVG shape element for any PostGIS geometry. It also provides optional parameters supporting other capabilities of SVG. Here <code>title</code> specifies that the state name is displayed as a tooltip, and <code>style</code> specifies the styling of the shape. Styling in SVG can be provided using properties defined in the <a href=https://developer.mozilla.org/en-US/docs/Glossary/CSS><dfn>Cascading Style Sheets</dfn> (<abbr>CSS</abbr>)</a> specification. <code>pg-svg</code> provides the <a href=https://github.com/dr-jts/pg_svg#svgstyle><code>svgStyle</code></a> function to make it easy to specify the names and values of CSS styling properties.<p>Note that the <code>fill</code> property value is a URL instead of a color specifier. This refers to an SVG gradient fill which is defined later.<p>The state geometry is also included in the subquery result set, for reasons which will be discussed below.<pre><code class=language-pgsql>,shapes AS (
  -- State shapes
  SELECT geom, svgShape( geom,
    title => name,
    style => svgStyle(  'stroke', '#ffffff',
                        'stroke-width', 0.1::text,
                        'fill', 'url(#state)',
                        'stroke-linejoin', 'round' ) )
    svg FROM us_map
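    -- svgStyle turns its name/value pairs into a CSS inline style, producing
    -- something like: stroke:#ffffff;stroke-width:0.1;fill:url(#state);stroke-linejoin:round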
</code></pre><h3 id=svg-for-state-labels><a href=#svg-for-state-labels>SVG for state labels</a></h3><p>Labels for state abbreviations are positioned at the point produced by the <code>ST_PointOnSurface</code> function. (Alternatively, <code>ST_MaximumInscribedCircle</code> could be used.) The SVG is generated by the <a href=https://github.com/dr-jts/pg_svg#svgtext><code>svgText</code></a> function, using the specified styling.<pre><code class=language-pgsql>  UNION ALL
  -- State names
  SELECT NULL, svgText( ST_PointOnSurface( geom ), abbrev,
    style => svgStyle(  'fill', '#6666ff', 'text-anchor', 'middle', 'font', '0.8px sans-serif' ) )
    svg FROM us_map
</code></pre><h3 id=svg-for-high-point-symbols><a href=#svg-for-high-point-symbols>SVG for high point symbols</a></h3><p>The high point features are displayed as triangular symbols. We use the convenient <a href=https://github.com/dr-jts/pg_svg#svgpolygon><code>svgPolygon</code></a> function with a simple array of ordinates specifying a triangle based at the high point location, with height given by the previously computed <code>symHeight</code> column. The title is provided for a tooltip, and the styling uses the computed <code>clr</code> attribute as the fill.<pre><code class=language-pgsql>  UNION ALL
  -- High point triangles
  SELECT NULL, svgPolygon( ARRAY[ lon-0.5, -lat, lon+0.5, -lat, lon, -lat-symHeight ],
    title => name || ' ' || state || ' - ' || hgt_ft || ' ft',
    style => svgStyle(  'stroke', '#000000',
                        'stroke-width', 0.1::text,
                        'fill', clr  ) )
    svg FROM highpt_shape
)
</code></pre><h3 id=produce-final-svg-image><a href=#produce-final-svg-image>Produce final SVG image</a></h3><p>The generated shape elements need to be wrapped in an <code>&#60svg></code> document element. This is handled by the <a href=https://github.com/dr-jts/pg_svg#svgdoc><code>svgDoc</code></a> function.<p>The viewable extent of the SVG data needs to be provided by the <code>viewbox</code> parameter. The most common case is to display all of the rendered data. An easy way to determine this is to apply the PostGIS <code>ST_Extent</code> aggregate function to the input data (this is why we included the <code>geom</code> column as well as the <code>svg</code> text column). We can include a border by enlarging the extent using the <code>ST_Expand</code> function. The function <a href=https://github.com/dr-jts/pg_svg#svgviewbox><code>svgViewBox</code></a> converts the PostGIS geometry for the extent into SVG format.<p>We also include a definition for an SVG <a href=https://developer.mozilla.org/en-US/docs/Web/SVG/Element/linearGradient>linear gradient</a> to be used as the fill style for the state features.<pre><code class=language-pgsql>SELECT svgDoc( array_agg( svg ),
    viewbox => svgViewbox( ST_Expand( ST_Extent(geom), 2)),
    def => svgLinearGradient('state', '#8080ff', '#c0c0ff')
  ) AS svg FROM shapes;
</code></pre><p>The output from <code>svgDoc</code> is a <code>text</code> value which can be used anywhere that SVG is supported.<h2 id=more-to-explore><a href=#more-to-explore>More to Explore</a></h2><p>We've shown how the <code>pg-svg</code> SQL function library lets you easily generate map images from PostGIS data right in the database. This can be used as a simple ad-hoc way of visualizing spatial data. Or, it could be embedded in a larger system to automate repetitive map generation workflows.<p>Although SVG is a natural fit for vector data, there may be situations where producing a map as a bitmap (raster) image makes sense.<br>For a way of generating raster maps right in the database see this PostGIS Day 2022 <a href="https://www.youtube.com/watch?v=5Zg8j9X2f-Y">presentation</a>. This would be especially appealing where the map is displaying data stored using <a href=https://postgis.net/docs/manual-3.3/using_raster_dataman.html>PostGIS raster data</a>. It would also be possible to combine vector and raster data into a hybrid SVG/image output.<p>Although we've focused on creating maps of geospatial data, SVG is often used for creating other kinds of graphics. For examples of using it to create geometric and mathematical designs see the <code>pg-svg</code> <a href=https://github.com/dr-jts/pg_svg/tree/master/demo><code>demo</code></a> folder. Here's an image of a Lissajous knot generated by <a href=https://github.com/dr-jts/pg_svg/blob/master/demo/math/lissajous-knot-34-svg.sql>this SQL</a>.<p><img alt="Lissajous Knot" loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/1a48fc23-12fe-45ad-fa94-67c3d97f1500/public><p>You could even use <code>pg-svg</code> to generate charts of non-spatial data (although this would be better handled by a more task-specific API).<p>Let us know if you find <code>pg-svg</code> useful, or if you have ideas for improving it! ]]></content:encoded>
<category><![CDATA[ Spatial ]]></category>
<author><![CDATA[ Martin.Davis@crunchydata.com (Martin Davis) ]]></author>
<dc:creator><![CDATA[ Martin Davis ]]></dc:creator>
<guid isPermalink="false">479e0443dab5893e0d3312ecb16c4e4de29f64bff60f2988c241473801259a44</guid>
<pubDate>Tue, 30 May 2023 09:00:00 EDT</pubDate>
<dc:date>2023-05-30T13:00:00.000Z</dc:date>
<atom:updated>2023-05-30T13:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Temporal Filtering in pg_featureserv with CQL ]]></title>
<link>https://www.crunchydata.com/blog/temporal-filtering-pgfeatureserv-with-cql</link>
<description><![CDATA[ Dive into some examples of temporal filtering in pg_featureserv. ]]></description>
<content:encoded><![CDATA[ <p>In a <a href=https://blog.crunchydata.com/blog/cql-filtering-in-pg_featureserv>previous post</a> we announced the <strong>CQL filtering</strong> capability in <a href=https://github.com/CrunchyData/pg_featureserv><code>pg_featureserv</code></a>. It provides powerful functionality for attribute and <a href=https://blog.crunchydata.com/blog/spatial-filters-in-pg_featureserv-with-cql>spatial</a> querying of data in PostgreSQL and PostGIS.<p>Another important datatype which is often present in datasets is <strong>temporal</strong>. Temporal datasets contain attributes which are dates or timestamps. The CQL standard defines some special-purpose syntax to support temporal filtering. This allows <code>pg_featureserv</code> to take advantage of the extensive capabilities of PostgreSQL for specifying queries against time-valued attributes. This post in the CQL series will show some examples of temporal filtering in <code>pg_featureserv</code>.<h2 id=cql-temporal-filters><a href=#cql-temporal-filters>CQL Temporal filters</a></h2><p>Temporal filtering in CQL is provided using <strong>temporal literals</strong> and <strong>conditions</strong>.<p><strong>Temporal literal</strong> values may be <strong>dates</strong> or <strong>timestamps</strong>:<pre><code class=language-text>2001-01-01
2010-04-23T01:23:45
</code></pre><p><em>Note: The temporal literal syntax is based on an early version of the OGC API <a href=https://portal.ogc.org/files/96288>Filter and CQL standard</a>. The current <a href=https://docs.ogc.org/DRAFTS/21-065.html>draft CQL standard</a> has a different syntax: <code>DATE('1969-07-20')</code> and <code>TIMESTAMP('1969-07-20T20:17:40Z')</code>. It also supports intervals: <code>INTERVAL('1969-07-16', '1969-07-24')</code>. A subsequent version of <code>pg_featureserv</code> will support this syntax as well.</em><p><strong>Temporal conditions</strong> allow time-valued properties and literals to be compared via the standard boolean comparison operators <code>&lt;</code>,<code>></code>,<code>&#60=</code>,<code>>=</code>,<code>=</code>,<code>&#60></code> and the <code>BETWEEN..AND</code> operator:<pre><code class=language-text>start_date >= 2001-01-01
event_time BETWEEN 2010-04-22T06:00 AND 2010-04-23T12:00
</code></pre><p><em>The <a href=https://docs.ogc.org/DRAFTS/21-065.html#_temporal_operators>draft CQL standard</a> provides dedicated temporal operators, such as <code>T_AFTER</code>, <code>T_BEFORE</code>, <code>T_DURING</code>, etc. A future version of <code>pg_featureserv</code> will likely provide these operators.</em><h2 id=publishing-historical-tropical-storm-tracks><a href=#publishing-historical-tropical-storm-tracks>Publishing Historical Tropical Storm tracks</a></h2><p>We'll demonstrate temporal filters using a dataset with a strong time linkage: tracks of tropical storms (or hurricanes). There is a dataset of <strong>Historical Tropical Storm Tracks</strong> available <a href=https://hifld-geoplatform.opendata.arcgis.com/datasets/geoplatform::historical-tropical-storm-tracks>here</a>.<p>The data requires some preparation. It is stored as a set of records of line segments representing 6-hour long sections of storm tracks. To provide simpler querying we will model the data using a single record for each storm, with a line geometry showing the entire track and attributes for the start and end time for the track.<p>The data is provided in Shapefile format. As expected for a worldwide dataset, it is in the WGS84 geodetic coordinate system (lat/long). In PostGIS this common <a href=https://postgis.net/docs/manual-dev/using_postgis_dbmanagement.html#spatial_ref_sys>Spatial Reference System</a> is assigned an identifier (SRID) of 4326.<p>The PostGIS <a href=https://postgis.net/docs/manual-3.3/using_postgis_dbmanagement.html#shp2pgsql_usage><code>shp2pgsql</code></a> utility can be used to load the dataset into a spatial table called <code>trop_storm_raw</code>. 
The <code>trop_storm_raw</code> table is a temporary staging table allowing the raw data to be loaded and made available for the transformation phase of data preparation.<pre><code class=language-shell>shp2pgsql -c -D -s 4326 -i -I -W LATIN1 "Historical Tropical Storm Tracks.shp" public.trop_storm_raw | psql -d database
</code></pre><p>The options used are:<ul><li><code>-c</code> - create a new table<li><code>-D</code> - use PostgreSQL dump format to load the data<li><code>-s</code> - specify the SRID of 4326<li><code>-i</code> - use 32-bit integers<li><code>-I</code> - create a GIST index on the geometry column (this is not strictly necessary, since this is just a temporary staging table)<li><code>-W</code> - specifies the encoding of the input attribute data in the DBF file</ul><p>Next, create the table having the desired data model:<pre><code class=language-pgsql>CREATE TABLE public.trop_storm (
    btid int PRIMARY KEY,
    name text,
    wind_kts numeric,
    pressure float8,
    basin text,
    time_start timestamp,
    time_end timestamp,
    geom geometry(MultiLineString, 4326)
);
</code></pre><p>It's good practice to add comments to the table and columns. These will be displayed in the <code>pg_featureserv</code> Web UI.<pre><code class=language-pgsql>COMMENT ON TABLE public.trop_storm IS 'Historical tropical storm tracks';
COMMENT ON COLUMN public.trop_storm.geom IS 'Storm track LineString';
COMMENT ON COLUMN public.trop_storm.name IS 'Name assigned to storm';
COMMENT ON COLUMN public.trop_storm.btid IS 'Id of storm';
COMMENT ON COLUMN public.trop_storm.wind_kts IS 'Maximum wind speed in knots';
COMMENT ON COLUMN public.trop_storm.pressure IS 'Minimum pressure in millibars';
COMMENT ON COLUMN public.trop_storm.basin IS 'Basin in which storm occurred';
COMMENT ON COLUMN public.trop_storm.time_start IS 'Timestamp of storm start';
COMMENT ON COLUMN public.trop_storm.time_end IS 'Timestamp of storm end';
</code></pre><p>Now the power of SQL can be used to transform the raw data into the simpler data model. The track sections can be combined into single tracks with a start and end time using the following query.<ul><li>The original data represents the track sections as <code>MultiLineString</code>s with single elements. The element is extracted using <a href=https://postgis.net/docs/manual-dev/ST_GeometryN.html><code>ST_GeometryN</code></a> so that the result of aggregating them using <a href=https://postgis.net/docs/manual-dev/ST_Collect.html><code>ST_Collect</code></a> is a <code>MultiLineString</code>, not a <code>GeometryCollection</code>. (An alternative is to aggregate into a GeometryCollection and use <a href=https://postgis.net/docs/manual-dev/ST_CollectionHomogenize.html><code>ST_CollectionHomogenize</code></a> to reduce it to a <code>MultiLineString</code>.)<li>The final <a href=https://postgis.net/docs/manual-dev/ST_Multi.html><code>ST_Multi</code></a> ensures that all tracks are stored as <code>MultiLineStrings</code>, as required by the type constraint on the <code>geom</code> column.<li>the filter condition <code>time_end - time_start &#60 '1 year'::interval</code> removes tracks spanning the International Date Line.</ul><pre><code class=language-pgsql>WITH data AS (
 SELECT btid, name, wind_kts, pressure, basin, geom,
    make_date(year::int, month::int, day::int) + ad_time::time AS obs_time
 FROM trop_storm_raw ORDER BY obs_time
),
tracks AS (
  SELECT btid,
    MAX(name) AS name,
    MAX(wind_kts) AS wind_kts,
    MAX(pressure) AS pressure,
    MAX(basin) AS basin,
    MIN(obs_time) AS time_start,
    MAX(obs_time) AS time_end,
    ST_Multi( ST_LineMerge( ST_Collect( ST_GeometryN(geom, 1)))) AS geom
  FROM data GROUP BY btid
)
INSERT INTO trop_storm
SELECT * FROM tracks WHERE time_end - time_start &#60 '1 year'::interval;
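-- Optional sanity checks (not in the original post): one row per storm,
-- and every track stored as a MultiLineString:
--   SELECT count(*) FROM trop_storm;
--   SELECT DISTINCT GeometryType(geom) FROM trop_storm;  -- MULTILINESTRING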
</code></pre><p>This is a small dataset, and <code>pg_featureserv</code> does not require one, but as per best practice we can create a spatial index on the geometry column:<pre><code class=language-pgsql>CREATE INDEX trop_storm_gix ON public.trop_storm USING GIST ( geom );
</code></pre><p>Once the <code>trop_storm</code> table is created and populated, it can be published in <code>pg_featureserv</code>. Issuing the following request in a browser shows the feature collection in the Web UI:<pre><code class=language-text>http://localhost:9000/collections.html
</code></pre><p><img alt=tropstorm loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/3ec14409-e6ff-4225-34fc-36a5b9444000/public><pre><code class=language-text>http://localhost:9000/collections/public.trop_storm.html
</code></pre><p><img alt=tropstorm loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/e40a9a29-d5c3-4462-40a9-de5dcfd97300/public><p>The dataset can be viewed using <code>pg_featureserv</code>'s built-in map viewer (note that to see all 567 records displayed it is probably necessary to increase the limit on the number of response features):<pre><code class=language-text>http://localhost:9000/collections/public.trop_storm/items.html?limit=1000
</code></pre><p><img alt=publictropstorm loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/77ed25fd-9ace-43e6-e87e-57e0a870c800/public><h2 id=querying-by-time-range><a href=#querying-by-time-range>Querying by Time Range</a></h2><p>That's a lot of storm tracks. It would be easier to visualize a smaller number of tracks. A natural way to subset the data is by querying over a time range. Let's retrieve the storms between the start of 2005 and the end of 2009. This is done by adding a <code>filter</code> parameter with a CQL expression against the dataset temporal property <code>time_start</code> (storms typically do not span the start of years). To query values lying between a range of times it is convenient to use the <code>BETWEEN</code> operator. The filter condition is <code>time_start BETWEEN 2005-01-01 AND 2009-12-31</code>. The full request is:<pre><code class=language-text>http://localhost:9000/collections/public.trop_storm/items.html?filter=time_start BETWEEN 2005-01-01 AND 2009-12-31&#38limit=100
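(outside a browser, the spaces must be URL-encoded, e.g. for curl:)
http://localhost:9000/collections/public.trop_storm/items.html?filter=time_start%20BETWEEN%202005-01-01%20AND%202009-12-31&#38limit=100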
</code></pre><p>Submitting this query produces a result with 68 tracks:<p><img alt=tropstormtracks loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/d587d647-2835-4821-394b-8da483257e00/public><h2 id=querying-by-time-and-space><a href=#querying-by-time-and-space>Querying by Time and Space</a></h2><p>Temporal conditions can be combined with other kinds of filters. For instance, we can execute a spatio-temporal query by using a temporal condition along with a spatial condition. In this example, we query the storms which occurred in 2005 and after in Florida. The temporal condition is expressed as <code>time_start > 2005-01-01</code>.<p>The spatial condition uses the <code>INTERSECTS</code> predicate to test whether the line geometry of a storm track intersects a polygon representing the (simplified) coastline of Florida. The polygon is provided as a geometry literal using WKT. (For more information about spatial filtering with CQL in <code>pg_featureserv</code> see this <a href=https://www.crunchydata.com/blog/spatial-filters-in-pg_featureserv-with-cql>blog post</a>.)<pre><code class=language-text>POLYGON ((-81.4067 30.8422, -79.6862 25.3781, -81.1609 24.7731, -83.9591 30.0292, -85.2258 29.6511, -87.5892 29.9914, -87.5514 31.0123, -81.4067 30.8422))
</code></pre><p><img alt=polygon loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/b4cfc027-d336-4b28-dc42-33be03df9d00/public><p>Putting these conditions together in a boolean expression using <code>AND</code>, the request to retrieve the desired tracks from <code>pg_featureserv</code> is:<pre><code class=language-text>http://localhost:9000/collections/public.trop_storm/items.html?filter=time_start > 2005-01-01 AND INTERSECTS(geom, POLYGON ((-81.4067 30.8422, -79.6862 25.3781, -81.1609 24.7731, -83.9591 30.0292, -85.2258 29.6511, -87.5892 29.9914, -87.5514 31.0123, -81.4067 30.8422)) )&#38limit=100
</code></pre><p>This query produces a result with only 9 tracks, all of which cross Florida:<p><img alt=temporalfiltering loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/a3a899cb-97df-4246-9467-67cf0abae800/public><h2 id=try-it-yourself><a href=#try-it-yourself>Try it yourself!</a></h2><p>CQL temporal filtering is included in the forthcoming <code>pg_featureserv</code> Version 1.3. But you can try it out now by <a href=https://github.com/CrunchyData/pg_featureserv#download>downloading</a> the latest build. Let us know what use cases you find for CQL temporal filtering! Crunchy Data offers full managed PostGIS in the Cloud, with Container apps to run pg_featureserv. <a href=https://www.crunchydata.com/products/crunchy-bridge>Try it today</a>. ]]></content:encoded>
<category><![CDATA[ Spatial ]]></category>
<author><![CDATA[ Martin.Davis@crunchydata.com (Martin Davis) ]]></author>
<dc:creator><![CDATA[ Martin Davis ]]></dc:creator>
<guid isPermalink="false">1eae2910754316b49855ba2bced9ce1d0b73c7d634e1974f12401fe2bc229fc9</guid>
<pubDate>Fri, 10 Feb 2023 10:00:00 EST</pubDate>
<dc:date>2023-02-10T15:00:00.000Z</dc:date>
<atom:updated>2023-02-10T15:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Spatial Filters in pg_featureserv with CQL ]]></title>
<link>https://www.crunchydata.com/blog/spatial-filters-in-pg_featureserv-with-cql</link>
<description><![CDATA[ In this post we'll show some examples of spatial filtering using CQL with pg_featureserv. ]]></description>
<content:encoded><![CDATA[ <p><a href=https://github.com/CrunchyData/pg_featureserv><code>pg_featureserv</code></a> provides access to the powerful spatial database capabilities of <a href=https://postgis.net/>PostGIS</a> and <a href=https://www.postgresql.org/>PostgreSQL</a> via a lightweight web service. To do this, it implements the <a href=https://ogcapi.ogc.org/features/><dfn>OGC API for Features</dfn></a> (<abbr>OAPIF</abbr>) RESTful protocol. OAPIF is part of the <dfn>Open Geospatial Consortium</dfn> (<abbr>OGC</abbr>) <a href=https://ogcapi.ogc.org/#standards>OGC API</a> suite of standards.<p>In a <a href=/blog/cql-filtering-in-pg_featureserv>previous post,</a> we announced an exciting new capability for <code>pg_featureserv</code> : support for <strong>CQL filters</strong>. <abbr>CQL</abbr> (<a href=https://docs.ogc.org/DRAFTS/21-065.html><dfn>Common Query Language</dfn></a>) is another OGC standard that provides the equivalent of SQL <code>WHERE</code> clauses for web queries.<p>As you would expect given the OGC focus on promoting ease-of-access to spatial information, CQL supports spatial filtering. This lets us take advantage of PostGIS' ability to query spatial data very efficiently. In this post we'll show some examples of spatial filtering using CQL with <code>pg_featureserv</code>.<p>The companion vector tile service <a href=https://github.com/CrunchyData/pg_tileserv><code>pg_tileserv</code></a> also supports CQL, and spatial filtering works there as well.<h2 id=cql-spatial-filters><a href=#cql-spatial-filters>CQL Spatial Filters</a></h2><p>Spatial filtering in CQL involves using <strong>spatial predicates</strong> to test a condition on the geometry property of features. 
Spatial predicates include the standard <a href=https://www.ogc.org/standards/sfs><em>OGC Simple Features</em></a> predicates for spatial relationships:<ul><li><code>INTERSECTS</code> - tests whether two geometries intersect<li><code>DISJOINT</code> - tests whether two geometries have no points in common<li><code>CONTAINS</code> - tests whether a geometry contains another<li><code>WITHIN</code> - tests whether a geometry is within another<li><code>EQUALS</code> - tests whether two geometries are topologically equal<li><code>CROSSES</code> - tests whether the geometries cross<li><code>OVERLAPS</code> - tests whether the geometries overlap<li><code>TOUCHES</code> - tests whether the geometries touch</ul><p><code>pg_featureserv</code> also implements the <strong>distance predicate</strong> <code>DWITHIN</code>.<p>Spatial predicates are typically used to compare the feature geometry property against a geometry value. Geometry values are expressed in <a href=https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry><dfn>Well-Known Text</dfn></a> (<abbr>WKT</abbr>):<pre><code class=language-pgsql>POINT (1 2)
LINESTRING (0 0, 1 1)
POLYGON ((0 0, 0 9, 9 0, 0 0))
POLYGON ((0 0, 0 9, 9 0, 0 0),(1 1, 1 8, 8 1, 1 1))
MULTIPOINT ((0 0), (0 9))
MULTILINESTRING ((0 0, 1 1),(1 1, 2 2))
MULTIPOLYGON (((1 4, 4 1, 1 1, 1 4)), ((1 9, 4 9, 1 6, 1 9)))
GEOMETRYCOLLECTION(POLYGON ((1 4, 4 1, 1 1, 1 4)), LINESTRING (3 3, 5 5), POINT (1 5))
ENVELOPE (1, 2, 3, 4)
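
-- Note: WKT coordinates are X Y pairs, i.e. longitude before latitude.
-- PostGIS itself parses the same syntax, so a value can be checked
-- directly in the database, e.g. (illustrative):
--   SELECT ST_GeomFromText('POLYGON ((0 0, 0 9, 9 0, 0 0))', 4326);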
</code></pre><p>By default, the <dfn>coordinate reference system</dfn> (<abbr>CRS</abbr>) of geometry values is <a href=https://en.wikipedia.org/wiki/Geodetic_datum><strong>geodetic</strong></a> (longitude and latitude). If needed, a different CRS can be specified using the <code>filter-crs</code> parameter. (PostGIS supports a large number of standard coordinate reference systems.)<p>Here are some examples of spatial filter conditions:<pre><code class=language-pgsql>INTERSECTS(geom, ENVELOPE(-100, 49, -90, 50) )
CONTAINS(geom, POINT(-100 49) )
DWITHIN(geom, POINT(-100 49), 0.1)
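
-- Each predicate maps to the corresponding PostGIS function in the
-- generated SQL WHERE clause - roughly (illustrative, not the exact SQL):
--   INTERSECTS(geom, POINT(-100 49))    ->  ST_Intersects(geom, 'POINT(-100 49)')
--   DWITHIN(geom, POINT(-100 49), 0.1)  ->  ST_DWithin(geom, 'POINT(-100 49)', 0.1)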
</code></pre><p>Of course, these can be combined with attribute conditions to express real-world queries.<h2 id=publishing-geographic-names><a href=#publishing-geographic-names>Publishing Geographic Names</a></h2><p>For these examples we'll use the U.S. <a href=https://en.wikipedia.org/wiki/Geographic_Names_Information_System>Geographic Names Information System</a> (GNIS) dataset. It contains more than 2 million points for named geographical features. We load this data into a PostGIS spatial table called <code>us.geonames</code>. The point location values are stored in a column called <code>geom</code> of type <a href=https://blog.crunchydata.com/blog/postgis-and-the-geography-type><code>geography</code></a>. (PostGIS allows storing spatial data as either <code>geometry</code> or <code>geography</code>. We'll explain below why <code>geography</code> is the better choice in this case.)<p>We create a spatial index on this column to provide fast spatial querying.<pre><code class=language-pgsql>CREATE TABLE us.geonames (
    id integer PRIMARY KEY,
    name text,
    lat double precision,
    lon double precision,
    type text,
    state text,
    geom geography(Point),
    ts tsvector
);

CREATE INDEX us_geonames_gix ON us.geonames USING GIST ( geom );
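
-- The geography values can be built from the lat/lon columns; note the
-- longitude-first argument order (an illustrative load step):
UPDATE us.geonames
   SET geom = ST_SetSRID(ST_MakePoint(lon, lat), 4326)::geography;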
</code></pre><p>We can now publish the dataset with <code>pg_featureserv</code>, and view query results on the web UI.<p>The service <code>collections</code> page shows the published tables and views (here we have used the <code>Database.TableIncludes</code> configuration setting to publish just the <code>us</code> schema):<pre><code class=language-text>http://localhost:9000/collections.html
</code></pre><p><img alt=pgfs-cql-spatial-collections loading=lazy src=https://fs.hubspotusercontent00.net/hubfs/2283855/pgfs-cql-spatial-collections.png><p>The <code>collections/us.geonames</code> page shows the collection metadata:<pre><code class=language-text>http://localhost:9000/collections/us.geonames.html
</code></pre><p><img alt=pgfs_cql-spatial-usgeonames loading=lazy src=https://fs.hubspotusercontent00.net/hubfs/2283855/pgfs_cql-spatial-usgeonames.png><h2 id=querying-with-intersects><a href=#querying-with-intersects>Querying with <code>INTERSECTS</code></a></h2><p>For this example we'll query water features on the <a href=https://en.wikipedia.org/wiki/San_Juan_Islands>San Juan Islands</a> in the state of Washington, USA. Because there is no GNIS attribute providing region information, we have to use a <strong>spatial filter</strong> to specify the area we want to query. We used <a href=https://www.qgis.org/>QGIS</a> to create a polygon enclosing the islands.<p><img alt=pgfs-cql-spatial-sanjuan-polygon loading=lazy src=https://fs.hubspotusercontent00.net/hubfs/2283855/pgfs-cql-spatial-sanjuan-polygon.png><p>We can convert the polygon to WKT and use it in an <code>INTERSECTS</code> spatial predicate (since we are querying points, <code>WITHIN</code> could be used as well - it produces the same result). To retrieve only water features (Lakes and Reservoirs) we add the condition <code>type IN ('LK','RSV')</code>. The query URL is:<pre><code class=language-text>http://localhost:9000/collections/us.geonames/items.html?filter=type IN ('LK','RSV') AND INTERSECTS(geom,POLYGON ((-122.722 48.7054, -122.715 48.6347, -122.7641 48.6046, -122.7027 48.3885, -123.213 48.4536, -123.2638 48.6949, -123.0061 48.7666, -122.722 48.7054)))
</code></pre><p>The result of the query is a dataset containing 33 GNIS points:<p><img alt=pgfs-cql-spatial-sanjuan-lkrsv loading=lazy src=https://fs.hubspotusercontent00.net/hubfs/2283855/pgfs-cql-spatial-sanjuan-lkrsv.png><h2 id=querying-with-dwithin><a href=#querying-with-dwithin>Querying with <code>DWITHIN</code></a></h2><p>Now we'll show an example of a distance-based spatial filter, using the <code>DWITHIN</code> predicate. This is the reason we loaded the GNIS data as <code>geography</code>.<br><code>DWITHIN</code> tests whether a feature geometry is within a given distance of a geometry value. By using the <code>geography</code> type, we can specify the distance in <strong>metres</strong>, because <code>geography</code> measures distances on the surface of the earth. If we had loaded the dataset using the <code>geometry</code> type, the units would have been degrees (the native units of the geodetic EPSG:4326 coordinate system), which is awkward to use. Also, <code>geography</code> computes the distance correctly, using the <a href=https://en.wikipedia.org/wiki/Great-circle_distance>great-circle distance</a>.<p>Let's query for mountains (<code>type = 'MT'</code>) within 100 kilometres (100,000 metres) of Seattle (lat/long of about 47.6,-122.34 - note that WKT requires this in long-lat order). The query URL is:<pre><code class=language-text>http://localhost:9000/collections/us.geonames/items.html?filter=type = 'MT' AND DWITHIN(geom,POINT(-122.34 47.6),100000)&#38;limit=1000
</code></pre><p>This gives a result showing 695 mountains near Seattle. It's a hilly place!<p><img alt=pgfs-cql-spatial-dwithin-mt loading=lazy src=https://fs.hubspotusercontent00.net/hubfs/2283855/pgfs-cql-spatial-dwithin-mt.png><h2 id=try-it-yourself><a href=#try-it-yourself>Try it yourself!</a></h2><p>CQL filtering will be included in the forthcoming <code>pg_featureserv</code> Version 1.3. But you can try it out now by <a href=https://github.com/CrunchyData/pg_featureserv#download>downloading</a> the latest build. Let us know what use cases you find for CQL spatial filtering!<h2 id=more-to-come><a href=#more-to-come>More to come</a></h2><p><code>pg_featureserv</code> with CQL attribute and spatial filtering provides a highly effective platform for accessing data over the web. And we're continuing to enhance it with new capabilities to unlock even more of the power of PostGIS and PostgreSQL. Stay tuned for posts on advanced query functionality including temporal filtering, geometry transformations, and aggregation. ]]></content:encoded>
<category><![CDATA[ Spatial ]]></category>
<author><![CDATA[ Martin.Davis@crunchydata.com (Martin Davis) ]]></author>
<dc:creator><![CDATA[ Martin Davis ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/spatial-filters-in-pg_featureserv-with-cql</guid>
<pubDate>Tue, 15 Mar 2022 05:00:00 EDT</pubDate>
<dc:date>2022-03-15T09:00:00.000Z</dc:date>
<atom:updated>2022-03-15T09:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ CQL Filtering in pg_featureserv ]]></title>
<link>https://www.crunchydata.com/blog/cql-filtering-in-pg_featureserv</link>
<description><![CDATA[ Recently Crunchy Data added pg_featureserv support for most of CQL2. Here we'll describe the powerful new capability it provides. ]]></description>
<content:encoded><![CDATA[ <p>The goal of <a href=https://github.com/CrunchyData/pg_featureserv><code>pg_featureserv</code></a> is to provide easy and efficient access to <a href=https://postgis.net/>PostGIS</a> from web clients.<p>To do this, it uses the emerging <a href=https://ogcapi.ogc.org/features/><dfn>OGC API for Features</dfn></a> (<abbr>OAPIF</abbr>) RESTful protocol. This is a natural fit for systems which need to query and communicate spatial data. The <a href=https://docs.opengeospatial.org/is/17-069r3/17-069r3.html>core OAPIF</a> specification provides a basic framework for querying spatial datasets, but it has only limited capability for filtering subsets of spatial tables.<p>In particular, it only allows filtering on single attribute values, and it only supports limited spatial filtering via the <code>bbox</code> query parameter (in PostGIS terms, this is equivalent to using the <code>&amp;&amp;</code> operator with a <code>box2d</code>).<p>Of course, PostGIS and PostgreSQL provide much more powerful capabilities to filter data using SQL <code>WHERE</code> clauses. It would be very nice to be able to use these via <code>pg_featureserv</code>. Luckily, the OGC is defining <a href=https://docs.ogc.org/DRAFTS/19-079r1.html>a way to allow filtering</a> via the <strong><dfn>Common Query Language</dfn></strong> (<abbr>CQL</abbr>) which, as the name suggests, is a close match to SQL filtering capabilities. This is being issued under the OGC API suite as <strong>CQL2</strong> (currently in <a href=https://docs.ogc.org/DRAFTS/21-065.html>draft</a>).<p>Recently we added <code>pg_featureserv</code> support for most of CQL2. Here we'll describe the powerful new capability it provides.<h2 id=cql-overview><a href=#cql-overview>CQL Overview</a></h2><p>CQL is a simple language to describe <strong>logical expressions</strong>. A CQL expression applies to values provided by feature properties and constants including numbers, booleans and text values. 
Values can be combined using the arithmetic operators <code>+</code>, <code>-</code>, <code>*</code>, <code>/</code> and <code>%</code> (modulo). Conditions on values are expressed using simple comparisons (<code>&lt;</code>, <code>&gt;</code>, <code>&lt;=</code>, <code>&gt;=</code>, <code>=</code>, <code>&lt;&gt;</code>). Other predicates include:<pre><code class=language-pgsql>prop IN (val1, val2, ...)
prop BETWEEN val1 AND val2
prop IS [NOT] NULL
prop LIKE | ILIKE pattern
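
-- e.g. a real-world condition combining several predicates (illustrative):
continent = 'Europe' AND (pop_est &lt;= 5000000 OR name ILIKE 'mo%')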
</code></pre><p>Conditions can be combined with the boolean operators <code>AND</code>,<code>OR</code> and <code>NOT</code>.<p>You will notice that this is very similar to SQL (probably not a coincidence!). That makes it straightforward to implement, and <strong>easy to use</strong> for us database people.<p>CQL also defines syntax for <strong>spatial</strong> and <strong>temporal</strong> filters. We'll discuss those in a future blog post.<h2 id=filtering-with-cql><a href=#filtering-with-cql>Filtering with CQL</a></h2><p>A CQL expression can be used in a <code>pg_featureserv</code> request in the <code>filter</code> parameter.<br>This is converted to SQL and included in the <code>WHERE</code> clause of the underlying database query. This allows the database to use its query planner and any defined indexes to execute the query efficiently.<p>Here's an example. We'll query the <a href=https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-0-boundary-lines/>Natural Earth admin boundaries</a> dataset, which we've loaded into PostGIS as a spatial table. (See <a href=/blog/crunchy-spatial-querying-spatial-features-with-pg_featureserv>this post</a> for details on how to do this.) We're going to retrieve information about European countries where the population is 5,000,000 or less. The CQL expression for this is <code>continent = 'Europe' AND pop_est &#60= 5000000</code>.<p>Here's the query to get this result:<pre><code class=language-text>http://localhost:9000/collections/ne.countries/items.json?properties=name,pop_est&#38filter=continent%20=%20%27Europe%27%20AND%20pop_est%20%3C=%205000000&#38limit=100
</code></pre><p><em>Note: it's a good idea to URL-encode spaces and special characters.</em><p>This returns a GeoJSON response with 25 features:<p><img alt=pgfs-cql-europe-small-json loading=lazy src=https://fs.hubspotusercontent00.net/hubfs/2283855/pgfs-cql-europe-small-json.png><p>By using the extension <code>html</code> instead of <code>json</code> in the request we can visualize the result in the <code>pg_featureserv</code> UI:<p><img alt=pgfs-cql-europe-small loading=lazy src=https://fs.hubspotusercontent00.net/hubfs/2283855/pgfs-cql-europe-small%20(1).png><h2 id=more-power-fewer-functions><a href=#more-power-fewer-functions>More power, fewer functions</a></h2><p>One of the cool things about <code>pg_featureserv</code> and its companion <a href=https://github.com/CrunchyData/pg_tileserv><code>pg_tileserv</code></a> is the ability to serve data provided by PostgreSQL functions. In a previous post we showed <a href=/blog/using-postgis-functions-in-pg_featureserv>how to use a function to find countries where the name matches a search string</a>. Now we can do this more easily and flexibly by using a CQL filter:<pre><code class=language-text>http://localhost:9000/collections/ne.countries/items.html?properties=name,pop_est&#38;filter=name%20ILIKE%20%27Mo%25%27
</code></pre><p><em>Note that the <code>ILIKE</code> wildcard <code>%</code> must be URL-encoded as <code>%25</code>.</em><p><em><img alt=pgfs-cql-ilike-mo loading=lazy src=https://fs.hubspotusercontent00.net/hubfs/2283855/pgfs-cql-ilike-mo.png></em><p>And the filter can easily include more complex conditions, which are harder to express with a function. But function serving is still powerful for things like <a href=/blog/tile-serving-with-dynamic-geometry>generating spatial data</a> and <a href=/blog/routing-with-postgresql-and-crunchy-spatial>routing</a>.<h2 id=more-to-come-><a href=#more-to-come->More to come...</a></h2><p>We'll publish a blog post on the spatial filtering capabilities of CQL soon, along with some other interesting spatial capabilities in <code>pg_featureserv</code>. CQL support will be <a href=https://github.com/CrunchyData/pg_tileserv/pull/130>rolled out</a> in <a href=https://github.com/CrunchyData/pg_tileserv><code>pg_tileserv</code></a> soon as well. This brings some exciting possibilities for large-scale data visualization!<p>PostgreSQL provides even more powerful expression capabilities than are available in CQL. There are things like string concatenation and functions, the <code>CASE</code> construct for "computed if", and others. What kinds of things would you like to see <code>pg_featureserv</code> support?<h2 id=try-it><a href=#try-it>Try it!</a></h2><p>CQL filtering will be included in the forthcoming <code>pg_featureserv</code> Version 1.3.<br>But you can try it out now by simply <a href=https://github.com/CrunchyData/pg_featureserv#download>downloading</a> the latest build. <a href=https://twitter.com/crunchydata>Let us know</a> what use cases you find for CQL filtering! ]]></content:encoded>
<category><![CDATA[ Spatial ]]></category>
<author><![CDATA[ Martin.Davis@crunchydata.com (Martin Davis) ]]></author>
<dc:creator><![CDATA[ Martin Davis ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/cql-filtering-in-pg_featureserv</guid>
<pubDate>Mon, 07 Mar 2022 04:00:00 EST</pubDate>
<dc:date>2022-03-07T09:00:00.000Z</dc:date>
<atom:updated>2022-03-07T09:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Using PostGIS and pg_featureserv with QGIS ]]></title>
<link>https://www.crunchydata.com/blog/using-postgis-and-pg_featureserv-with-qgis</link>
<description><![CDATA[ Crunchy Data has developed a suite of spatial web services that work natively with PostGIS to expose your data to the web, using industry-standard protocols. ]]></description>
<content:encoded><![CDATA[ <p>My colleague Kat Batuigas recently <a href=/blog/arcgis-feature-service-to-postgis-the-qgis-way>wrote about</a> using the powerful open-source <a href=https://www.qgis.org/en/site/>QGIS</a> desktop GIS to import data into <a href=/blog/topic/spatial>PostGIS</a> from an ArcGIS Feature Service. This is a great first step toward moving your geospatial stack onto the performant, open source platform provided by PostGIS. And there's no need to stop there! <a href=https://www.crunchydata.com/>Crunchy Data</a> has developed a suite of spatial web services that work natively with PostGIS to expose your data to the web, using industry-standard protocols. These include:<ul><li><a href=https://github.com/CrunchyData/pg_tileserv><strong>pg_tileserv</strong></a> - allows mapping spatial data using the <a href=https://github.com/mapbox/vector-tile-spec>MVT</a> vector tile format<li><a href=https://github.com/CrunchyData/pg_featureserv><strong>pg_featureserv</strong></a> - publishes spatial data using the <a href=https://docs.opengeospatial.org/is/17-069r3/17-069r3.html><em>OGC API for Features</em></a> protocol</ul><p>Recent versions of QGIS support using the <em>OGC API for Features</em> (previously known as WFS3) as a vector data source. So it should be able to source data from <code>pg_featureserv</code>.<p>Let's see how it works.<h2 id=load-data-into-postgis><a href=#load-data-into-postgis>Load Data into PostGIS</a></h2><p>To keep things simple we are using a <a href=https://www.crunchydata.com/products/crunchy-bridge>Crunchy Bridge</a> cloud-hosted Postgres/PostGIS instance. For demo purposes we'll load a dataset of British Columbia wildfire perimeter polygons (available for download <a href=https://catalogue.data.gov.bc.ca/dataset/fire-perimeters-current>here</a>). 
The data is provided as a shapefile, so we can use the PostGIS <a href=https://postgis.net/docs/manual-3.1/postgis_usage.html#shp2pgsql_usage><code>shp2pgsql</code></a> utility to load it into a table. (If the data were in another format, we could load it using <a href=https://gdal.org/programs/ogr2ogr.html>ogr2ogr</a>, or use QGIS itself as Kat described.)<p>We use the <code>-c</code> option to have the loader create a table appropriate for the dataset, and the <code>-I</code> option to create a spatial index on it (always a good idea). The data is in the BC-Albers coordinate system, so we specify the SRID with <code>-s 3005</code>. Here we do the load in two steps using an intermediate SQL file; it can also be done in a single command by piping the <code>shp2pgsql</code> output to <code>psql</code>.<pre><code class=language-shell>shp2pgsql -c -D -s 3005 -i -I prot_current_fire_polys.shp bc.wildfire_poly > bc_wf.sql
psql -h p.asdfghjklqwertyuiop12345.db.postgresbridge.com -U postgres &#60 bc_wf.sql
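
# Or as a single step, piping the loader output straight to psql:
shp2pgsql -c -D -s 3005 -i -I prot_current_fire_polys.shp bc.wildfire_poly \
  | psql -h p.asdfghjklqwertyuiop12345.db.postgresbridge.com -U postgres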
</code></pre><p>Using <code>psql</code> we can connect to the database and verify that the table has been created and loaded:<pre><code class=language-pgsql>postgres=# \d bc.wildfire_poly
                                            Table "bc.wildfire_poly"
   Column   |            Type             | Collation | Nullable |                    Default
------------+-----------------------------+-----------+----------+-----------------------------------------------
 gid        | integer                     |           | not null | nextval('bc.wildfire_poly_gid_seq'::regclass)
 objectid   | double precision            |           |          |
 fire_year  | integer                     |           |          |
 fire_numbe | character varying(6)        |           |          |
 version_nu | double precision            |           |          |
 fire_size_ | numeric                     |           |          |
 source     | character varying(50)       |           |          |
 track_date | date                        |           |          |
 load_date  | date                        |           |          |
 feature_co | character varying(10)       |           |          |
 fire_stat  | character varying(30)       |           |          |
 fire_nt_id | integer                     |           |          |
 fire_nt_nm | character varying(50)       |           |          |
 fire_nt_lk | character varying(254)      |           |          |
 geom       | geometry(MultiPolygon,3005) |           |          |
Indexes:
    "wildfire_poly_pkey" PRIMARY KEY, btree (gid)
    "wildfire_poly_geom_idx" gist (geom)

 postgres=# select count(*) from bc.wildfire_poly;
 count
-------
   133
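
postgres=# -- sanity check: the SRID recorded for the geometry column
postgres=# SELECT Find_SRID('bc', 'wildfire_poly', 'geom');
 find_srid
-----------
      3005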
</code></pre><h2 id=serve-postgis-data-with-pg_featureserv><a href=#serve-postgis-data-with-pg_featureserv>Serve PostGIS Data with pg_featureserv</a></h2><p>The Crunchy spatial services are lightweight native applications (written in Go), so it's easy to install them on a local or hosted platform. They can be downloaded as applications, a Docker container, or built from source. See the <a href=https://access.crunchydata.com/documentation/pg_featureserv/1.2.0/installation/installing/>installation guide</a> for details.<p>We're running <code>pg_featureserv</code> in a local environment. To connect it to the Bridge Postgres instance, we specify the database connection information in the <code>config/pg_featureserv.toml</code> file. The connection string can also be specified in an environment variable. See the <a href=https://access.crunchydata.com/documentation/pg_featureserv/1.2.0/installation/configuration/>documentation</a> for more information.<pre><code class=language-toml>[Database]
# Database connection
# postgresql://username:password@host/dbname
DbConnection = "postgres://postgres:password@p.asdfghjklqwertyuiop12345.db.postgresbridge.com:5432/postgres"
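
# The connection can instead be supplied via an environment variable, e.g.:
#   DATABASE_URL=postgres://postgres:password@p.asdfghjklqwertyuiop12345.db.postgresbridge.com:5432/postgres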

</code></pre><p>For ease of use, <code>pg_featureserv</code> provides a browser-based Admin UI. Using this we can see the data table published as a collection:<p><code>http://localhost:9000/collections.html</code><p><img alt=pgfs-home loading=lazy src=https://f.hubspotusercontent00.net/hubfs/2283855/pgfs-home.png><p>We can display the collection metadata:<p><code>http://localhost:9000/collections/bc.wildfire_poly.html</code><p><img alt=pgfs_collection_meta loading=lazy src=https://f.hubspotusercontent00.net/hubfs/2283855/pgfs_collection_meta.png><p>The Admin UI also lets us see the data on a map:<p><code>http://localhost:9000/collections/bc.wildfire_poly/items.html?limit=200</code><p><img alt=pgfs_wildfire_map loading=lazy src=https://f.hubspotusercontent00.net/hubfs/2283855/pgfs_wildfire_map.png><p>The main use of <code>pg_featureserv</code> is to serve feature data via the <dfn>OGC API for Features</dfn> (<abbr>OAPIF</abbr>), which is a RESTful HTTP protocol returning GeoJSON. We can verify that an OAPIF data query works by issuing the following request URL. The response is a GeoJSON document (shown here using the handy JSON display in the Firefox browser):<p><code>http://localhost:9000/collections/bc.wildfire_poly/items.json</code><p><img alt=pgfs_query loading=lazy src=https://f.hubspotusercontent00.net/hubfs/2283855/pgfs_query.png><h2 id=display-pg_featureserv-collection-as-a-qgis-layer><a href=#display-pg_featureserv-collection-as-a-qgis-layer>Display pg_featureserv Collection as a QGIS Layer</a></h2><p>In QGIS, we can create a layer that displays the data coming from the <code>pg_featureserv</code> instance.<p>To do this, under the <strong>Layer</strong> menu choose <strong>Add Layer > Add WFS Layer...</strong>. This displays the Data Source Manager window at the WFS/OGC API-Features tab. Click New to define the connection to the <code>pg_featureserv</code> service. 
The Connection Details dialog lets us enter the following information:<ul><li>We'll use <code>pg_fs</code> as the name of the connection<li>The connection URL is the service home endpoint <a href=http://localhost:9000><code>http://localhost:9000/</code></a><li>The <strong>WFS Version</strong> is <em>OGC API - Features</em><li>We'll specify the <strong>Max. number of features</strong> as 200, since that will allow loading the entire dataset without paging</ul><p><img alt=qgis_dataman_connect loading=lazy src=https://f.hubspotusercontent00.net/hubfs/2283855/qgis_dataman_connect.png><p>When we click Connect we see the collections published by the service (in this demo there is only one):<p><img alt=qgis_ds_list loading=lazy src=https://f.hubspotusercontent00.net/hubfs/2283855/qgis_ds_list.png><p>Now we can select the <code>bc.wildfire_poly</code> collection, and click Add to add it as a layer to the QGIS map layout. The screenshot also shows the result of using the Identify Features tool on the map, showing that all attribute data is loaded as well.<p><img alt=qgis_map loading=lazy src=https://f.hubspotusercontent00.net/hubfs/2283855/qgis_map.png><p>Of course, QGIS is able to connect to PostGIS directly, and provides full query and update capability when doing that. But it can be simpler, more flexible and more secure to expose PostGIS data via a web service. That way, it can be easily accessed by many different tools which might not be able to or allowed to connect to PostGIS directly.<p>We're also exploring ways to be able to run <code>pg_tileserv</code> and <code>pg_featureserv</code> directly in Crunchy Bridge, to provide a turn-key solution for exposing spatial data on the web. If this sounds interesting to you, <a href=https://twitter.com/crunchydata>get in touch</a>! ]]></content:encoded>
<category><![CDATA[ Postgres Tutorials ]]></category>
<category><![CDATA[ Spatial ]]></category>
<author><![CDATA[ Martin.Davis@crunchydata.com (Martin Davis) ]]></author>
<dc:creator><![CDATA[ Martin Davis ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/using-postgis-and-pg_featureserv-with-qgis</guid>
<pubDate>Wed, 21 Jul 2021 05:00:00 EDT</pubDate>
<dc:date>2021-07-21T09:00:00.000Z</dc:date>
<atom:updated>2021-07-21T09:00:00.000Z</atom:updated></item></channel></rss>