<?xml version="1.0" encoding="UTF-8" ?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" version="2.0"><channel><title>CrunchyData Blog</title>
<atom:link href="https://www.crunchydata.com/blog/topic/security/rss.xml" rel="self" type="application/rss+xml" />
<link>https://www.crunchydata.com/blog/topic/security</link>
<image><url>https://www.crunchydata.com/card.png</url>
<title>CrunchyData Blog</title>
<link>https://www.crunchydata.com/blog/topic/security</link>
<width>800</width>
<height>419</height></image>
<description>PostgreSQL experts from Crunchy Data share advice, performance tips, and guides on successfully running PostgreSQL and Kubernetes solutions</description>
<language>en-us</language>
<pubDate>Tue, 25 Mar 2025 11:00:00 EDT</pubDate>
<dc:date>2025-03-25T15:00:00.000Z</dc:date>
<dc:language>en-us</dc:language>
<sy:updatePeriod>hourly</sy:updatePeriod>
<sy:updateFrequency>1</sy:updateFrequency>
<item><title><![CDATA[ Postgres Security Checklist from the Center for Internet Security ]]></title>
<link>https://www.crunchydata.com/blog/postgres-security-checklist-from-the-center-for-internet-security</link>
<description><![CDATA[ The CIS Benchmark for Postgres is a free, community-supported security checklist for Postgres. ]]></description>
<content:encoded><![CDATA[ <p>The Center for Internet Security (CIS) releases security benchmarks covering a wide variety of infrastructure used in modern applications, including databases, operating systems, cloud services, containerized services, and even networking. Since 2016, Crunchy Data has collaborated with CIS to provide this security resource for those deploying Postgres. The output of this collaboration is a checklist folks can follow to improve the security posture of their Postgres deployments.<p>The <a href=https://www.cisecurity.org/benchmark/postgresql>PostgreSQL CIS Benchmark™ for PostgreSQL 17</a> was recently released.<h2 id=the-center-for-internet-security><a href=#the-center-for-internet-security>The Center for Internet Security</a></h2><p>The <a href=https://www.cisecurity.org/>Center for Internet Security</a> (CIS) is a nonprofit organization that collaborates with government and commercial entities to develop best practices for securing IT systems and data. CIS Benchmarks are community-driven and provide configuration recommendations in the form of security checklists. CIS allows public contributions, reviews, and an open discussion forum on the benchmarks to make sure they meet broader community standards.<blockquote><p>The CIS Benchmark for Postgres is a free, community-supported security checklist for Postgres.</blockquote><h2 id=getting-started-with-the-postgres-benchmark><a href=#getting-started-with-the-postgres-benchmark>Getting started with the Postgres benchmark</a></h2><p>The CIS Benchmark for Postgres is a <a href=https://www.cisecurity.org/benchmark/postgresql>freely available PDF</a> for non-commercial use with recommendations alongside Postgres configurations. 
The PDF is 200+ pages of descriptions, rationale, and sample code to verify Postgres configurations.<p>Beyond manual verification, teams standardizing on this benchmark often incorporate these settings into their infrastructure deployment tools. Using infrastructure-as-code tools with the benchmarks ensures that deployments across an organization meet these security specifications.<p>For commercial use of CIS Benchmarks, CIS has memberships and tools to run the benchmarks automatically.<h2 id=what-is-in-the-cis-postgres-benchmark-security-checklist><a href=#what-is-in-the-cis-postgres-benchmark-security-checklist>What is in the CIS Postgres benchmark security checklist?</a></h2><p>The benchmark covers a variety of topics for Postgres deployment and configuration, including:<ul><li>Postgres install and file permission settings<li>Recommended settings for logs<li>User access, role creation, passwords, and authorization<li>Guidance for using key Postgres extensions like pgaudit, set_user, pgcrypto, and pgBackRest</ul><p>The document is very hands-on; in many cases, CIS provides specific scripts to perform the security check. For example, this one looks for a stored PGPASSWORD environment variable, which is something to avoid:<pre><code class=language-bash># grep PGPASSWORD --no-messages /home/*/.{bashrc,profile,bash_profile} 
# grep PGPASSWORD --no-messages /root/.{bashrc,profile,bash_profile} 
# grep PGPASSWORD --no-messages /etc/environment
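# # Related check (our addition, not part of the CIS script): libpq ignores
# # ~/.pgpass unless it is mode 0600 or stricter, so flag copies with any
# # group/other permission bits set
# find /home /root -name .pgpass -perm /077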
</code></pre><p>There are also several statements and queries to help with role and user validation. For example, this SQL builds a rather neat role tree: a view showing every role along with its login access, superuser status, and parent roles:<pre><code class=language-sql>CREATE OR REPLACE VIEW roletree AS WITH RECURSIVE roltree AS (
  SELECT 
    u.rolname AS rolname, 
    u.oid AS roloid, 
    u.rolcanlogin, 
    u.rolsuper, 
    '{}' :: name[] AS rolparents, 
    NULL :: oid AS parent_roloid, 
    NULL :: name AS parent_rolname 
  FROM 
    pg_catalog.pg_authid u 
    LEFT JOIN pg_catalog.pg_auth_members m on u.oid = m.member 
    LEFT JOIN pg_catalog.pg_authid g on m.roleid = g.oid 
  WHERE 
    g.oid IS NULL 
  UNION ALL 
  SELECT 
    u.rolname AS rolname, 
    u.oid AS roloid, 
    u.rolcanlogin, 
    u.rolsuper, 
    t.rolparents || g.rolname AS rolparents, 
    g.oid AS parent_roloid, 
    g.rolname AS parent_rolname 
  FROM 
    pg_catalog.pg_authid u 
    JOIN pg_catalog.pg_auth_members m on u.oid = m.member 
    JOIN pg_catalog.pg_authid g on m.roleid = g.oid 
    JOIN roltree t on t.roloid = g.oid
) 
SELECT 
  r.rolname, 
  r.roloid, 
  r.rolcanlogin, 
  r.rolsuper, 
  r.rolparents 
FROM 
  roltree r 
ORDER BY 
  1;
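-- Example follow-up (our addition, not part of the benchmark): use the
-- view to flag roles that can both log in and act as superuser.
SELECT rolname FROM roletree WHERE rolcanlogin AND rolsuper;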
</code></pre><h2 id=updating-the-benchmark-for-new-postgres-versions><a href=#updating-the-benchmark-for-new-postgres-versions>Updating the benchmark for new Postgres versions</a></h2><p>Crunchy Data helps update the benchmark for every major Postgres version, reviewing which new features should be added to the benchmark and which features users should be wary of.<p>In this latest release, a couple of notable changes were made:<ul><li>Addition of a recommendation for <code>passwordcheck</code><li>Addition of a recommendation for password complexity<li>Revisions to the Logging, Monitoring, and Auditing section</ul><h2 id=final-notes><a href=#final-notes>Final notes</a></h2><p>The CIS benchmark is a fantastic resource for anyone working on Postgres security. If you need an even deeper security resource, we also work with the United States Department of Defense on a <a href=https://www.crunchydata.com/solutions/postgres-stig>Postgres Security Technical Implementation Guide</a> (STIG).<p>Need help with Postgres security? <a href=https://www.crunchydata.com/contact>Contact our team</a>. ]]></content:encoded>
<category><![CDATA[ Security ]]></category>
<author><![CDATA[ Elizabeth.Christensen@crunchydata.com (Elizabeth Christensen) ]]></author>
<dc:creator><![CDATA[ Elizabeth Christensen ]]></dc:creator>
<guid isPermalink="false">7b7e12430ae1f36d21d78b6e83b78230c00fb6ba009c79d04297f10dc55b16b5</guid>
<pubDate>Tue, 25 Mar 2025 11:00:00 EDT</pubDate>
<dc:date>2025-03-25T15:00:00.000Z</dc:date>
<atom:updated>2025-03-25T15:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Crunchy Data PostgreSQL 16 Security Technical Implementation Guide Released by DISA ]]></title>
<link>https://www.crunchydata.com/blog/crunchy-data-postgresql-16-security-technical-implementation-guide-released-by-disa</link>
<description><![CDATA[ Crunchy Data, together with the United States Defense Information Systems Agency (DISA), is pleased to release the newest STIG for Postgres, covering versions 13 through 16. ]]></description>
<content:encoded><![CDATA[ <p>Crunchy Data is pleased to <a href=https://www.crunchydata.com/news/crunchy-data-postgres-16-security-technical-implementation-guide-released-by-disa>announce</a> the publication of the <a href=https://www.crunchydata.com/files/stig/PGSQL_16_STIG_V1R1.pdf>Crunchy Data PostgreSQL 16 Security Technical Implementation Guide</a> (STIG) by the United States Defense Information Systems Agency (DISA). This update covers Postgres versions 13-16; for previous versions of Postgres, see the prior <a href=https://www.crunchydata.com/blog/announcing-the-crunchy-data-postgresql-stig>Crunchy Data Postgres STIG</a>. Crunchy Data has collaborated with DISA on the PostgreSQL STIG since 2017, and this new STIG reflects that ongoing collaboration and Crunchy Data's commitment to providing enhanced security guidance for PostgreSQL as it continues to advance and evolve.<p>Data security continues to be at the forefront of U.S. Department of Defense software and systems development. This DISA STIG complements other DoD initiatives like DevSecOps and container hardening, and is a critical piece in a continuous authorization to operate. Security-conscious customers anywhere can benefit from implementing the STIG controls in their Postgres environments.<p>The security functionality reflected within the Crunchy Data PostgreSQL STIG is provided by 100% open source Postgres, <a href=https://www.crunchydata.com/blog/postgres-the-batteries-included-database>Postgres extensions</a>, and <a href=https://access.crunchydata.com/documentation/>documentation</a>. 
The Crunchy Data PostgreSQL STIG provides security guidance for PostgreSQL (versions 13-16) used in conjunction with certain open source PostgreSQL <a href=https://www.craigkerstiens.com/2019/11/13/postgres-interview-from-art-of-postgresql/>extensions</a> – most notably, <a href=https://github.com/pgaudit/pgaudit>pgaudit</a>.<p>To help PostgreSQL users benefit from the guidance provided in the Crunchy Data PostgreSQL STIG, here is some background information for getting started.<h2 id=what-is-a-disa-stig><a href=#what-is-a-disa-stig>What is a DISA STIG?</a></h2><p>Security Technical Implementation Guides (STIGs) are the configuration standards for United States Department of Defense (DoD) Information Assurance (IA) and IA-enabled devices/systems, published by the United States Defense Information Systems Agency (DISA). Since 1998, DISA has played a critical role in enhancing the security posture of DoD systems by providing the STIGs. The STIGs contain technical guidance to “lock down” information systems and software that might otherwise be vulnerable to a malicious computer attack.<h2 id=is-the-crunchy-data-postgresql-stig-us-government-specific><a href=#is-the-crunchy-data-postgresql-stig-us-government-specific>Is the Crunchy Data PostgreSQL STIG US Government Specific?</a></h2><p>The PostgreSQL STIG is derived from the National Institute of Standards and Technology (NIST) Special Publication (SP) <a href=https://csrc.nist.gov/publications/detail/sp/800-53/rev-4/final>800-53</a>, Revision 4, and related documents. 
While the DISA STIG is intended to provide technical guidance to “lock down” information systems and software used within the DoD, the guidance provided in it is not specific to the DoD and is generally helpful to those interested in securing their PostgreSQL deployments.<h2 id=what-does-the-crunchy-data-postgresql-stig-cover><a href=#what-does-the-crunchy-data-postgresql-stig-cover>What does the Crunchy Data PostgreSQL STIG Cover?</a></h2><p>The DISA STIG document outlines many security rules and discusses how they impact vulnerabilities within the context of the PostgreSQL database. The document covers 35 different standards. The PostgreSQL STIG provides guidance on the configuration of PostgreSQL to address requirements associated with:<ul><li>Auditing<li>Logging<li>Data Encryption at Rest<li>Data Encryption Over the Wire<li>Access Controls<li>Administration<li>Authentication<li>Protecting against SQL Injection</ul><h2 id=how-does-the-crunchy-data-postgresql-stig-work><a href=#how-does-the-crunchy-data-postgresql-stig-work>How does the Crunchy Data PostgreSQL STIG work?</a></h2><p>The PostgreSQL STIG provides a series of Requirements, Checks, and Fixes, where:<ul><li>Requirements state the security requirements for an operating environment.<li>Checks are instructions or commands for verifying compliance with the stated Requirement.<li>Fixes are remediation steps to apply when a Check determines that the system is not in compliance with the stated Requirement.</ul><h2 id=looking-ahead><a href=#looking-ahead>Looking Ahead</a></h2><p>Crunchy Data views the Crunchy Data PostgreSQL STIG as yet another validation of the comprehensive security functionality of PostgreSQL and the accomplishments of the <a href=https://www.postgresql.org/developer/>PostgreSQL Global Development Community</a>. 
The Crunchy Data PostgreSQL STIG demonstrates that open source PostgreSQL is capable of meeting the exacting security requirements of the DoD.<p>We are proud to be part of the team that developed this STIG for PostgreSQL and look forward to working with all of the organizations that have been eagerly awaiting approval of the Crunchy Data PostgreSQL STIG for modern versions of this quality open source relational database.<h2 id=additional-resources><a href=#additional-resources>Additional Resources</a></h2><p><a href=https://www.crunchydata.com/files/stig/PGSQL_16_STIG_V1R1.pdf>Download the Crunchy Data PostgreSQL Security Technical Implementation Guide</a> ]]></content:encoded>
<category><![CDATA[ Security ]]></category>
<author><![CDATA[ Doug.Hunley@crunchydata.com (Doug Hunley) ]]></author>
<dc:creator><![CDATA[ Doug Hunley ]]></dc:creator>
<guid isPermalink="false">3ff5bb675214ffe549b6f2d17af039cd8ac5512d18d8acee1e0f6e10c69152bd</guid>
<pubDate>Tue, 25 Jun 2024 06:00:00 EDT</pubDate>
<dc:date>2024-06-25T10:00:00.000Z</dc:date>
<atom:updated>2024-06-25T10:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ The Vectors of Database Encryption ]]></title>
<link>https://www.crunchydata.com/blog/the-vectors-of-database-encryption</link>
<description><![CDATA[ Keith offers a high level review of the vectors of attack on databases and database encryption types including Data-At-Rest, Data-In-Transit, and Data-In-Use. ]]></description>
<content:encoded><![CDATA[ <p>One of the most requested features by <a href=https://www.crunchydata.com/customers>Crunchy Data customers</a> using modern enterprise database environments is some form of data encryption. However, nailing down exactly what someone means when they say "We need our data encrypted" is often a challenge because the actual requirements are not fully clarified, or even understood. So, before anyone tries to implement database encryption, it is critically important to understand what needs to be encrypted and what benefit is actually gained by the methods that are employed. This blog post is not going to discuss any deep technical implementations of encryption. Instead, let's discuss what vectors of attack any given encryption method mitigates, since that greatly influences which method is effective before you even reach the development or deployment phase.<p>The application of encryption to a database environment can be broken down into three different methods:<ol><li>Data-At-Rest<li>Data-In-Transit<li>Data-In-Use</ol><h3 id=data-at-rest><a href=#data-at-rest>Data-At-Rest</a></h3><p>Data-At-Rest is probably the most talked about and requested method of encryption for databases, so let's talk about that one first. What vectors of attack is this method effective for?<ul><li>Attack vector of concern is when data is not in use<li>Data must remain encrypted at all times while not in use<li>ALL data must be encrypted<li>Physical access to hardware is a perceived threat</ul><p>When is this method of encryption ineffective?<ul><li>Attack vector of concern is a fully compromised host<li>Attack vector of concern is not physical access<li>Attack vector of concern is data transmission</ul><p>The solutions to Data-At-Rest encryption can shed some light on why it is either effective or ineffective based on the above statements. 
The most common solution is full disk encryption, which is completely independent of any RDBMS or application in use. This can be done at either the hardware or software level, and the client accessing the data typically never knows it was encrypted in the first place, rarely having to do anything on their end to encrypt or decrypt it. This is also why a fully compromised host, be it the server where the data is stored or a client accessing that data, completely undermines the protection many people want that encryption for in the first place.<p>Another common method of implementing Data-At-Rest encryption is Transparent Data Encryption (TDE). Similar to full disk encryption, this handles encryption at the filesystem, hardware, or database level, and again the client is completely unaware that encryption is in use. It has benefits and issues similar to full disk encryption; however, TDE at the database level can provide some additional protections where general filesystem-level encryption cannot. Depending on the database-level TDE in play, the data may only be available directly through the database and not from the system level.<p>As of version 14, the community release of PostgreSQL does not have TDE built in, but it is currently under development for a future release. TDE is currently available through <a href=https://www.crunchydata.com/products/hardened-postgres>Crunchy Hardened Postgres</a>.<p>If your main concern is someone walking into your office or data center and walking out with your hard drives or servers, this method can be effective against that. 
However, there are additional issues with Data-At-Rest encryption, and solutions to those issues, so let's move on to discussing the other two methods mentioned above.<h3 id=data-in-transit><a href=#data-in-transit>Data-In-Transit</a></h3><p>One of the easiest methods of encryption to implement, but unfortunately one that is not brought up as often as it should be in discussions of database encryption, is securing the data while it is being transferred to or away from the database. When is this method effective?<ul><li>Attack vector of concern is data visibility during transit<li>Attack vector of concern is local network compromise<li>Attack vector of concern is data transmission over the Internet</ul><p>When is this method ineffective?<ul><li>Attack vector of concern is a fully compromised host<li>Attack vector of concern is physical access<li>Attack vector of concern is data visibility in storage</ul><p>Note that this method is only ineffective against these vectors when Data-In-Transit is the sole encryption method put into place. When combined with Data-At-Rest or Data-In-Use solutions, the physical access and data visibility in storage concerns can be addressed.<p>The most common solution for Data-In-Transit is securing your network traffic with TLS certificate management. Thankfully, almost all modern database servers and clients, including PostgreSQL, have methods for implementing that certificate management.<h3 id=data-in-use><a href=#data-in-use>Data-In-Use</a></h3><p>This final method is the one mentioned least often, but it is actually the most effective method for securing your data in most situations.<ul><li>Attack vector of concern is data visibility inside the database<li>Access to unencrypted data is privilege-based<li>Application controls the encryption/decryption process<li>Only specific items need encryption (e.g., per-column)<li>Can possibly address both Data-At-Rest and Data-In-Transit concerns</ul><p>It can still be ineffective when:<ul><li>Attack vector of concern is a fully compromised client host<li>Attack vector of concern is physical access to the application server</ul><p>Data-In-Use encryption is best handled at the application layer because then the private key is never, at any point, anywhere on the database system. The most common method for doing this is some sort of vaulted credential system where the application requests access to the decryption key at the time it needs the data. The data is then decrypted on the application server, or some server between the database and the client.<p>This can completely mitigate one of the biggest vulnerabilities of Data-At-Rest encryption, since the very nature of that encryption method requires that either the private key or password exist somewhere on the database server to allow the transparent decryption of the data. While in-memory attacks are rare and difficult, they are most certainly still possible. So if that vector of attack is a concern, it can be completely eliminated if the data is never decrypted on the database server itself. And for that reason, it also solves many Data-At-Rest concerns. And while it technically can solve many Data-In-Transit concerns as far as the data itself, it's still best to implement some sort of end-to-end TLS solution, especially if that data is traveling across the Internet.<p>The main areas of concern left for Data-In-Use encryption are compromise of the client or application server. However, these are concerns with Data-At-Rest and Data-In-Transit as well. 
Solving that problem is typically outside the scope of the encryption process itself.<p>While Data-In-Use solutions can take more planning and development time to implement, if the security of your data is paramount, this is by far one of the most effective means of actually keeping your data secure.<h3 id=backups><a href=#backups>Backups</a></h3><p>And lastly, one other place where database encryption is often overlooked is backups. All three methods above need to be addressed for your backups as well.<ul><li>Data-At-Rest<ul><li>Filesystem backups could back up unencrypted versions of files<li>Encrypt the backups independently of the Data-At-Rest system<li><a href=https://pgbackrest.org/>pgBackRest</a> supports S3/Azure/GCS client-side encryption</ul><li>Data-In-Transit<ul><li>Use SSH to transmit both backups and archived WAL files</ul><li>Data-In-Use<ul><li>Filesystem backups of the database server should generally not compromise this encryption method since the data itself is never decrypted on the database server.<li>Logical backups could potentially back up unencrypted data if they use the decryption system to dump out unencrypted versions of the data.</ul></ul><h3 id=conclusion><a href=#conclusion>Conclusion</a></h3><p>There are many different solutions out there for implementing these different methods of encryption. Crunchy Data products on <a href=https://www.crunchydata.com/products/crunchy-bridge>Cloud</a>, <a href=https://www.crunchydata.com/products/crunchy-postgresql-for-kubernetes>Kubernetes</a>, or <a href=https://www.crunchydata.com/products/crunchy-high-availability-postgresql>VMs</a> all offer encryption at-rest and in-transit. 
We also offer <a href=https://www.crunchydata.com/products/hardened-postgres>Crunchy Hardened Postgres</a> with <dfn>Transparent Data Encryption</dfn> (<abbr>TDE</abbr>).<p>We hope this overview of encryption methods for your database helps you when planning out which of those solutions works best for your environment and actually provides the security you are looking to achieve. ]]></content:encoded>
<category><![CDATA[ Security ]]></category>
<author><![CDATA[ Keith.Fiske@crunchydata.com (Keith Fiske) ]]></author>
<dc:creator><![CDATA[ Keith Fiske ]]></dc:creator>
<guid isPermalink="false">c64fd5eb195b257570deee5e25cea1be48bb9476ea6e64e97db3e3ed6bcf8ad4</guid>
<pubDate>Wed, 11 May 2022 11:55:00 EDT</pubDate>
<dc:date>2022-05-11T15:55:00.000Z</dc:date>
<atom:updated>2022-05-11T15:55:00.000Z</atom:updated></item>
<item><title><![CDATA[ Safer Application Users in Postgres ]]></title>
<link>https://www.crunchydata.com/blog/safer-application-users-in-postgres</link>
<description><![CDATA[ Risk management for Postgres. A guide to changing application user permissions so they can't delete your production database. ]]></description>
<content:encoded><![CDATA[ <blockquote><p>We deleted our database.</blockquote><p>Two years ago, on a Friday afternoon around 4pm, a customer opened a support ticket. The customer thought they were running their test suite against a dev environment. In reality, they were running against production. One of the early steps in many test suites is to ensure a clean state:<ol><li><code>DROP</code> all tables or <code>DELETE</code> schemas<li><code>CREATE</code> from scratch</ol><p>With <a href=/blog/database-terminology-explained-postgres-high-availability-and-disaster-recovery>disaster recovery</a> and <a href=https://docs.crunchybridge.com/how-to/point-in-time-recovery/>point-in-time recovery</a> in place, we could roll the database back to any exact moment in the past. So we got the timestamp, they ran the recovery command, and their several-TB database was restored to exactly the moment before the deletion. A stressful Friday afternoon, but no data loss.<p>You might be thinking of the various ways you can prevent this. Set your shell color to red when connected to production. Don't allow public internet access to production. Only allow CI-driven deployment. Here is one more option for you that is great for production risk mitigation: don't allow your production application users to delete data in prod.<h2 id=prevent-application-user-from-deleting-data-in-production><a href=#prevent-application-user-from-deleting-data-in-production>Prevent Application User from Deleting Data in Production</a></h2><p>To prevent an application from deleting data in production, we need to restrict the application user from the following operations:<ul><li><code>DROP TABLE</code><li><code>TRUNCATE TABLE</code></ul><p>The approach requires a mixture of best practices and proper configuration. 
To start, let's define the actors!<h3 id=administrator-user><a href=#administrator-user>Administrator User</a></h3><p>Administrator users are responsible for the creation of database schemas and relations (<dfn>Data Definition Language</dfn>, or <a href=https://www.postgresql.org/docs/current/ddl.html><abbr>DDL</abbr></a>).<p>Let's create an administrator user for the sake of this example:<pre><code class=language-pgsql>CREATE USER admin WITH PASSWORD 'correcthorsebatterystaple' SUPERUSER;
CREATE ROLE

\du admin
           List of roles
 Role name | Attributes | Member of
-----------+------------+-----------
 admin     | Superuser  | {}
</code></pre><h3 id=application-user><a href=#application-user>Application User</a></h3><p>Application users are generally restricted to performing operations on predefined database relations and schemas (<dfn>Data Manipulation Language</dfn>, or <a href=https://www.postgresql.org/docs/current/dml.html><abbr>DML</abbr></a>).<p><code>DROP</code> and <code>TRUNCATE</code> privileges would not be granted to an application user.<p>Production applications should only need privileges to add and update data. A typical production application grows by:<ul><li>Adding new columns to tables<li>Adding new rows<li>Updating records</ul><p>If your application follows the design pattern above, you might not want to give app users the ability to <code>DROP</code>, <code>TRUNCATE</code>, or <code>DELETE</code> from tables.<p>In the following example, we will use the application user named 'myappuser', so let's create them:<pre><code class=language-pgsql>CREATE USER myappuser WITH PASSWORD 'verygoodpasswordstring';
CREATE ROLE
</code></pre><h3 id=create-tables-as-admin><a href=#create-tables-as-admin>Create Tables as Admin</a></h3><p>Now that we have our actors defined, let's set the stage.<p>We should only create production tables as the administrator user. By default, relation creators are relation owners. Only owners and superusers can perform actions such as <code>DROP TABLE</code>. This protects against accidental deletion of data in production tables by application users. Application users cannot drop tables they do not own.<p>Let's make sure we're the appropriate admin before making our production sandbox:<pre><code class=language-pgsql>SELECT current_user;
 current_user
--------------
 admin
(1 row)
</code></pre><p>Go ahead and create a production <code>SCHEMA</code> and <code>GRANT</code> the appropriate permissions:<pre><code class=language-pgsql>CREATE SCHEMA prod;
CREATE SCHEMA

GRANT USAGE ON SCHEMA prod TO myappuser;
GRANT
</code></pre><p>Now we can create a table for our production data and start testing out some concepts:<pre><code class=language-pgsql>CREATE TABLE prod.userdata (col1 integer, col2 text, col3 text);
CREATE TABLE
</code></pre><p>If we log back in as <code>myappuser</code>, we shouldn't be able to drop the table:<pre><code class=language-pgsql>\c postgres myappuser
Password for user myappuser:
You are now connected to database "postgres" as user "myappuser".
postgres=> DROP TABLE prod.userdata;
ERROR:  must be owner of table userdata
</code></pre><h3 id=least-privilege><a href=#least-privilege>Least Privilege</a></h3><p>We've shown how to block <code>DROP TABLE</code> for application users. To prevent deletion of tuples inside a relation, we need to do a bit more work. The application user should only have access to exactly what it needs.<p>To do this, we <code>GRANT</code> only the privileges that the application user needs, as outlined above:<pre><code class=language-pgsql>postgres=> \c postgres admin
Password for user admin:
You are now connected to database "postgres" as user "admin".

GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA prod TO myappuser;
GRANT
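-- Note (our addition): ON ALL TABLES only affects tables that already exist.
-- To give the same privileges on tables admin creates later, default
-- privileges can be altered as well:
ALTER DEFAULT PRIVILEGES FOR ROLE admin IN SCHEMA prod
  GRANT SELECT, INSERT, UPDATE ON TABLES TO myappuser;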
</code></pre><p>Or if you already have some application user created you can <code>REVOKE</code> the unwanted production privileges:<pre><code class=language-pgsql>REVOKE DELETE, TRUNCATE ON ALL TABLES IN SCHEMA prod FROM myappuser;
REVOKE
</code></pre><p>Now our application user cannot delete data:<pre><code class=language-pgsql>\c postgres myappuser
Password for user myappuser:
You are now connected to database "postgres" as user "myappuser".
postgres=> DELETE FROM prod.userdata *;
ERROR:  permission denied for table userdata
postgres=> TRUNCATE TABLE prod.userdata;
ERROR:  permission denied for table userdata
</code></pre><p>Great! We've narrowed down our privileges, but how do we know whether we're missing something?<h3 id=check-access><a href=#check-access>Check Access</a></h3><p>When working with roles and permissions, it is always good to do an access check. There's a nice extension I recommend, <a href=https://github.com/CrunchyData/crunchy_check_access>crunchy_check_access</a>, for walking the full tree of access and permissions.<p>Log in as the admin user and take a look at the privileges we've granted to the application user:<pre><code class=language-pgsql>SELECT base_role, objtype, schemaname, objname, privname FROM all_access() WHERE base_role = 'myappuser' AND schemaname = 'prod';
 base_role | objtype | schemaname | objname  | privname
-----------+---------+------------+----------+----------
 myappuser | schema  | prod       | prod     | USAGE
 myappuser | table   | prod       | userdata | SELECT
 myappuser | table   | prod       | userdata | INSERT
 myappuser | table   | prod       | userdata | UPDATE
(4 rows)
</code></pre><p>It's as simple as that!<h2 id=let-your-application-user-delete-records><a href=#let-your-application-user-delete-records>Let Your Application User Delete Records</a></h2><p>So we've revoked privileges and protected against "accidental" deletion errors in the database, but it is very likely that your application still needs to delete records. Let's look at safer alternative designs for deleting application data.<p>A common pattern in applications is to mark tuples as deleted, rather than deleting them.<p>We can alter the table above to add a timestamp column, named <code>deleted</code>, which has two benefits:<ol><li>Data is never actually deleted, so the issues outlined above are not of concern.<li>We now have a snapshot of records at each moment in time for quick and painless application-level rollback of state.</ol><h3 id=adding-a-deleted-column><a href=#adding-a-deleted-column>Adding a <code>deleted</code> Column</a></h3><p>Assuming we have the production table created already, we can add a <code>deleted</code> column like so:<pre><code class=language-pgsql>ALTER TABLE prod.userdata ADD COLUMN deleted timestamp;
ALTER TABLE
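</code></pre><p>Since the application will usually filter on <code>deleted IS NULL</code>, a partial index keeps those lookups fast while skipping the soft-deleted rows entirely. A sketch (the index name and column choice are our own):<pre><code class=language-pgsql>-- Index only the live rows; soft-deleted rows take up no space in the index
CREATE INDEX userdata_live_idx ON prod.userdata (col1) WHERE deleted IS NULL;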
</code></pre><p>NOTE: The <code>ADD COLUMN</code> statement above takes an ACCESS EXCLUSIVE lock on the table. Adding a nullable column with no default is a quick, catalog-only change, but the lock still has to wait for (and will block) other transactions using the table.<p>Normal table inserts and update operations can still take the same form:<pre><code class=language-pgsql>INSERT INTO prod.userdata VALUES (generate_series(1,10), md5(random()::text), md5(random()::text)) ;
INSERT 0 10
</code></pre><p>We now have the option of updating a row to mark it deleted. Let's say our app wants to delete all records <code>where col1 &lt; 3</code>:<pre><code class=language-pgsql>UPDATE prod.userdata SET deleted = now() WHERE col1 &lt; 3;
UPDATE 2
</code></pre><p>If we want to see all remaining records:<pre><code class=language-pgsql>SELECT * from prod.userdata WHERE deleted IS NULL;
 col1 |               col2               |               col3               | deleted
------+----------------------------------+----------------------------------+---------
    3 | 828748efff06ce5b6f0f8e8931429bd3 | e50fe6654ee497de8ad75746849fba0f |
    4 | 4241511ee0a8f7f76976f0bab43b47f0 | d08e31ba79f972a2983301832ec67b94 |
    5 | 93de032bc9157362593a0259a8558514 | 6cd1639323a0c1a96fb3e781283e19d3 |
    6 | af1e1d81ef68dbd5ac14a0ae55195e2a | a4e500cf2c3ecd24c0a745c42b5af939 |
    7 | bcd0c74ca0d416b3f1b3e7ffda375615 | 361ed5d6bff759df7c138daf4b4b0e1b |
    8 | 35856a2d5b0e5b3e1d3ea4e09f0f88fe | a6d0977908e08626bad8278e965e9315 |
    9 | 43de7e949e9777969248b9b1d751d44e | 196390d618931a8dd3d5473cc23869fa |
   10 | 3fc5661e900a25b96b708f3c22cf1d59 | 2f29a28b25e1a1e25fc10b45fc22bc91 |
(8 rows)
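</code></pre><p>To avoid repeating that filter in every query, the admin user could wrap it in a view and grant the application user access to the view. A sketch (the view name is our own invention):<pre><code class=language-pgsql>-- Present only the live rows to the application
CREATE VIEW prod.userdata_live AS
    SELECT col1, col2, col3 FROM prod.userdata WHERE deleted IS NULL;
GRANT SELECT ON prod.userdata_live TO myappuser;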
</code></pre><p>We can also filter by timestamp. Say we delete more records, this time any of the non-deleted rows <code>WHERE col1 &lt; 6</code>:<pre><code class=language-pgsql>UPDATE prod.userdata SET deleted = now() WHERE deleted IS NULL AND col1 &lt; 6;
UPDATE 3

SELECT * from prod.userdata;
 col1 |               col2               |               col3               |          deleted
------+----------------------------------+----------------------------------+----------------------------
    6 | af1e1d81ef68dbd5ac14a0ae55195e2a | a4e500cf2c3ecd24c0a745c42b5af939 |
    7 | bcd0c74ca0d416b3f1b3e7ffda375615 | 361ed5d6bff759df7c138daf4b4b0e1b |
    8 | 35856a2d5b0e5b3e1d3ea4e09f0f88fe | a6d0977908e08626bad8278e965e9315 |
    9 | 43de7e949e9777969248b9b1d751d44e | 196390d618931a8dd3d5473cc23869fa |
   10 | 3fc5661e900a25b96b708f3c22cf1d59 | 2f29a28b25e1a1e25fc10b45fc22bc91 |
    1 | b4fb51aff93bf865c6bc8c5f32b306cf | 49d37b3934e2c44f20ddd87019bc525e | 2022-02-03 16:30:49.445571
    2 | e53507d91f39905f6bcd193636b13c3d | 66066e4c78a3eb701086391052c19b56 | 2022-02-03 16:30:49.445571
    3 | 828748efff06ce5b6f0f8e8931429bd3 | e50fe6654ee497de8ad75746849fba0f | 2022-02-03 16:34:19.953742
    4 | 4241511ee0a8f7f76976f0bab43b47f0 | d08e31ba79f972a2983301832ec67b94 | 2022-02-03 16:34:19.953742
    5 | 93de032bc9157362593a0259a8558514 | 6cd1639323a0c1a96fb3e781283e19d3 | 2022-02-03 16:34:19.953742
(10 rows)
</code></pre><p>We can now restore state using the timestamp from the last delete:<pre><code class=language-pgsql>SELECT * from prod.userdata WHERE deleted IS NULL OR deleted >= timestamp '2022-02-03 16:34:19.953742';
 col1 |               col2               |               col3               |          deleted
------+----------------------------------+----------------------------------+----------------------------
    6 | af1e1d81ef68dbd5ac14a0ae55195e2a | a4e500cf2c3ecd24c0a745c42b5af939 |
    7 | bcd0c74ca0d416b3f1b3e7ffda375615 | 361ed5d6bff759df7c138daf4b4b0e1b |
    8 | 35856a2d5b0e5b3e1d3ea4e09f0f88fe | a6d0977908e08626bad8278e965e9315 |
    9 | 43de7e949e9777969248b9b1d751d44e | 196390d618931a8dd3d5473cc23869fa |
   10 | 3fc5661e900a25b96b708f3c22cf1d59 | 2f29a28b25e1a1e25fc10b45fc22bc91 |
    3 | 828748efff06ce5b6f0f8e8931429bd3 | e50fe6654ee497de8ad75746849fba0f | 2022-02-03 16:34:19.953742
    4 | 4241511ee0a8f7f76976f0bab43b47f0 | d08e31ba79f972a2983301832ec67b94 | 2022-02-03 16:34:19.953742
    5 | 93de032bc9157362593a0259a8558514 | 6cd1639323a0c1a96fb3e781283e19d3 | 2022-02-03 16:34:19.953742
(8 rows)
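</code></pre><p>The same predicate can drive an actual un-delete, and the owning admin role can eventually purge rows that are long gone. A sketch (the 90-day retention window is just an example):<pre><code class=language-pgsql>-- Un-delete the rows removed in the last batch
UPDATE prod.userdata SET deleted = NULL
 WHERE deleted >= timestamp '2022-02-03 16:34:19.953742';

-- Periodic cleanup, run by the owning admin role (the app user lacks DELETE)
DELETE FROM prod.userdata WHERE deleted < now() - interval '90 days';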
</code></pre><h2 id=safer-application-users-summary><a href=#safer-application-users-summary>Safer Application Users Summary</a></h2><p>We've shown how to mitigate the risk of accidental deletion of production data by:<ol><li>Ensuring administrator users are the object owners<li>Granting application users privileges only for insert and update operations<li>Supporting safer deletion of data with a <code>deleted</code> timestamp column</ol><p>Now we can rest easy knowing our production data is safe from those pesky test scripts!<ul><li>For more information on limiting database user privileges, check out the blog post on <a href=https://blog.crunchydata.com/blog/creating-a-read-only-postgres-user>Creating a Read-Only Postgres User</a>.<li>PostgreSQL's privilege landscape is complicated. There is often more to Least Privilege than meets the eye. For a deeper dive on the complexities, check out the <a href=https://blog.crunchydata.com/blog/postgresql-defaults-and-impact-on-security-part-1>PostgreSQL Defaults and Impact on Security</a> blog series.<li>If you're interested in protecting user data, take a look at the Enhanced RBAC and Superuser Lockdown features of <a href=https://www.crunchydata.com/products/hardened-postgres>Crunchy Hardened PostgreSQL</a>.</ul> ]]></content:encoded>
<category><![CDATA[ Security ]]></category>
<author><![CDATA[ Mike.Palmiotto@crunchydata.com (Mike Palmiotto) ]]></author>
<dc:creator><![CDATA[ Mike Palmiotto ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/safer-application-users-in-postgres</guid>
<pubDate>Mon, 14 Feb 2022 04:00:00 EST</pubDate>
<dc:date>2022-02-14T09:00:00.000Z</dc:date>
<atom:updated>2022-02-14T09:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Secure Permissions for pgBackRest ]]></title>
<link>https://www.crunchydata.com/blog/secure-permissions-for-pgbackrest</link>
<description><![CDATA[ A guide on securing the pgBackRest user for high-security Postgres environments. ]]></description>
<content:encoded><![CDATA[ <p>The <a href=https://www.crunchydata.com/products/crunchy-high-availability-postgresql>pgBackRest</a> tool is a fantastic backup solution for Postgres, with many features including encryption, compression, automatic expiration, PITR, asynchronous archiving, and lots more. By default it runs as the Unix user "postgres" and connects to the database as the "postgres" superuser. In working with one of our finance clients on <a href=https://www.crunchydata.com/products/crunchy-high-availability-postgresql>Crunchy High Availability Postgres</a>, we needed to limit the access of the pgBackRest program for security and compliance on the database cluster. This article describes a method that allows pgBackRest to take backups with the minimum of access rights, for other security-minded Postgres users. Using the <dfn>Principle of Least Privilege</dfn> (<abbr>PoLP</abbr>) we can set up pgBackRest to take backups but not have write access to the database itself.<p>Before we start, it is helpful to know a bit about how pgBackRest operates. There are two main pieces: <abbr>WAL</abbr> (<dfn>write-ahead log</dfn>) archiving and creating backups. For the WAL archiving, the Postgres backend itself will invoke pgBackRest, and move the WAL files to one or more pgBackRest repositories. Because this runs from within Postgres itself, it already has full permissions to the WAL files, so nothing needs to change.<p>When pgBackRest performs a backup, it connects to the Postgres server, gathers some information, tells Postgres to start a backup, then copies the physical files inside the Postgres data directory to one or more pgBackRest repositories. Our goal in this article is to have this backup run by a low-privilege user that can read, but not write, the Postgres data directory. Additionally, the user needs to connect to the database but run only the few functions needed to perform the backup. 
The account we connect as should not be able to read any data from the database!<p>This example focuses on an existing Postgres database that is not using pgBackRest yet. It also assumes that the backup storage is on the same machine as Postgres, but all the information here should be easy to modify for more advanced configurations.<h2 id=overview><a href=#overview>Overview</a></h2><p>The process will be:<ul><li>Create a new user at the Unix level<li>Change group permissions of key directories used by pgBackRest<li>Create a new user at the database level<li>Give this new user the ability to only run a small handful of functions</ul><h3 id=setup><a href=#setup>Setup</a></h3><p>The items below should work with minimal changes if you are modifying an existing Postgres and/or pgBackRest system; for this example we will create a complete standalone system. First step is to install Postgres, create a new database cluster, and then install pgBackRest. These commands are for a Red Hat / CentOS system, so your experience may differ:<pre><code class=language-shell>$ sudo yum install -y postgresql14-server
## Put Postgres utilities such as initdb and pg_ctl into the postgres user's path:
$ echo 'export PATH=/usr/pgsql-14/bin/:$PATH' | sudo -iu postgres tee -a .bash_profile
$ sudo -iu postgres initdb -k
$ sudo yum install -y pgbackrest
</code></pre><h3 id=creating-a-local-user-to-perform-the-backups><a href=#creating-a-local-user-to-perform-the-backups>Creating a local user to perform the backups</a></h3><p>A package install of pgBackRest creates everything owned as either root or as the "postgres" user. We can use this to our advantage by creating a new unprivileged user that, like the "postgres" user, will belong to the "postgres" <strong>group</strong>:<pre><code class=language-shell>sudo useradd pgbackrest --gid postgres --create-home
</code></pre><p>If the backups are going to run from another server, we will need to create a pair of SSH keys. While these are not needed for this particular example, let's create some anyway:<pre><code class=language-shell>sudo -u pgbackrest ssh-keygen -q -t ed25519 -N "" -C "pgbackrest key"
</code></pre><h3 id=adjust-the-postgres-data-directory><a href=#adjust-the-postgres-data-directory>Adjust the Postgres data directory</a></h3><p>The next step is to make sure this new user has read-only access to the entire Postgres data directory, as backups involve copying these files somewhere else. We also are going to make use of the Unix setgid feature to ensure that any files created in the future are also readable by our new "pgbackrest" user. First, let's check what the Postgres data directory permissions are:<pre><code class=language-shell>$ sudo -iu postgres psql -tc 'show data_directory'
/var/lib/pgsql/14/data
</code></pre><pre><code class=language-shell>$ sudo ls -la /var/lib/pgsql/14
total 4
drwx------. 1 postgres postgres 32 Jan 1 13:21 .
drwx------. 1 postgres postgres 32 Jan 1 13:21 ..
drwx------. 20 postgres postgres 4096 Jan 1 13:21 data
</code></pre><p>We want to make this directory, and all directories underneath it, readable and searchable for our new user. We do this by using the chmod command to grant group read (<strong>r</strong>), group search (<strong>x</strong>), and group setgid (<strong>s</strong>) for every <strong><em>directory</em></strong>:<pre><code class=language-shell>sudo find /var/lib/pgsql/14/data  -type d  -exec chmod g+rxs {} \;
</code></pre><p>We also need to make sure that the complete path to the data directory is searchable by anyone in the "postgres" group. While we could apply the chmod above to <code>/var/lib/pgsql/</code>, let's keep the setgid bit limited to the data directory, and apply the read/search changes per directory:<pre><code class=language-shell>sudo chmod g+rx /var/lib/pgsql  /var/lib/pgsql/14
</code></pre><p>That takes care of the directories, but we also need to make sure all the files are group readable:<pre><code class=language-shell>sudo find /var/lib/pgsql/14/data  -type f  -exec chmod g+r {} \;
</code></pre><p>That <strong>"s"</strong> in the g+rxs (also known as the setgid bit) ensures that any files created in those directories will inherit the "postgres" group. This will not take effect for any processes that are already running and happen to have opened a directory. So, unless Postgres is already stopped, it will need restarting, and the permission changes applied again:<pre><code class=language-shell>$ sudo -iu postgres /usr/pgsql-14/bin/pg_ctl restart
## Run these until both return an error from xargs about 'missing operand':
sudo -u pgbackrest find /var/lib/pgsql/14/data -not -readable -type d | xargs -n1 sudo chmod g+rxs
sudo -u pgbackrest find /var/lib/pgsql/14/data -not -readable -type f | xargs -n1 sudo chmod g+r
</code></pre><p>Let's make sure our new backup user can now read, but not write, to the files in the data directory:<pre><code class=language-shell>$ sudo -u pgbackrest cat /var/lib/pgsql/14/data/postmaster.pid
29327
/var/lib/pgsql/14/data
1641332824
5432
/var/run/postgresql
localhost
 16810335 6
ready

$ sudo -u pgbackrest touch /var/lib/pgsql/14/data/postmaster.pid
touch: cannot touch '/var/lib/pgsql/14/data/postmaster.pid': Permission denied
</code></pre><h3 id=adjust-the-pgbackrest-lock-directory><a href=#adjust-the-pgbackrest-lock-directory>Adjust the pgBackRest lock directory</a></h3><p>To prevent multiple backrest processes from stepping on each other's toes, backrest implements a simple locking scheme, involving writing files in a common directory, by default <code>/tmp/pgbackrest</code>. Our new user needs to be able to create files in this directory, and should be able to read files created by others. If pgBackRest has never run, it is possible this directory does not exist yet, so we'll add some code to create it just in case. Then we'll adjust the permissions:<pre><code class=language-shell>sudo mkdir /tmp/pgbackrest/
sudo chown postgres.postgres /tmp/pgbackrest/

sudo chmod g+rwxs /tmp/pgbackrest/
sudo find /tmp/pgbackrest/ -type f -exec chmod g+r {} \;
</code></pre><p>(Savvy users of pgBackRest may wonder about the spool-path. Because those files are not needed for backups, no special permission changes are needed for it.)<h3 id=adjust-the-pgbackrest-configuration-files><a href=#adjust-the-pgbackrest-configuration-files>Adjust the pgBackRest configuration files</a></h3><p>Both the "postgres" user, and this new backup-only user, need to be able to read from the main pgBackRest configuration files, which by default are located in <code>/etc/pgbackrest/</code>, so let's adjust those as well. The new user has no need for write access.<pre><code class=language-shell>## Create this just in case it does not exist:
$ sudo mkdir /etc/pgbackrest/
$ sudo chown postgres.postgres /etc/pgbackrest/

$ sudo find /etc/pgbackrest/  -type d  -exec chmod g+rxs {} \;
$ sudo find /etc/pgbackrest/  -type f  -exec chmod g+r   {} \;

## This file may also be in use, so adjust permissions if needed
## (this file may be root owned and mode 0644)
$ sudo -iu pgbackrest find /etc/pgbackrest.conf -not -readable | xargs sudo chmod g+r
</code></pre><h3 id=adjust-the-pgbackrest-logging-directory><a href=#adjust-the-pgbackrest-logging-directory>Adjust the pgBackRest logging directory</a></h3><p>Finally, this new user needs read and write access to the logging directory for pgBackRest, if file logging is in use:<pre><code class=language-shell>## As before, create if it does not exist:
$ sudo mkdir -p /var/log/pgbackrest
$ sudo chown postgres.postgres /var/log/pgbackrest
$ sudo chmod g+rwxs /var/log/pgbackrest/
$ sudo find /var/log/pgbackrest/ -type f -exec chmod g+wr {} \;
</code></pre><h3 id=adjust-the-pgbackrest-repository><a href=#adjust-the-pgbackrest-repository>Adjust the pgBackRest repository</a></h3><p>The repository is the place where pgBackRest stores its backups, as well as where it stores the WAL files that are created by Postgres. For this article, our repository will be on the same server as Postgres, but the process is very similar if performing backups from a remote server (a better option!). The default location is <code>/var/lib/pgbackrest</code>, so let's tweak the permissions there:<pre><code class=language-shell>sudo find /var/lib/pgbackrest/ -type d -exec chmod g+rwxs {} \;
sudo find /var/lib/pgbackrest/ -type f -exec chmod g+r {} \;
</code></pre><h3 id=create-a-backrest-stanza-if-needed><a href=#create-a-backrest-stanza-if-needed>Create a backrest stanza if needed</a></h3><p>If you don't already have a stanza, create one now:<pre><code class=language-shell>$ echo '[foobar]' | sudo tee -a /etc/pgbackrest/pgbackrest.conf
$ echo 'pg1-path=/var/lib/pgsql/14/data' | sudo tee -a /etc/pgbackrest/pgbackrest.conf
$ echo 'start-fast=y' | sudo tee -a /etc/pgbackrest/pgbackrest.conf
$ sudo -u postgres /bin/pgbackrest stanza-create --stanza foobar
## Make pgbackrest the owner of the backups directory:
$ sudo chown -R pgbackrest /var/lib/pgbackrest/backup
</code></pre><p>We can see the new stanza here with the correct permissions:<pre><code class=language-shell>$ sudo find /var/lib/pgbackrest/ -ls

89111117 0 drwxrws--- 4 postgres postgres 35 Jan 30 00:43 /var/lib/pgbackrest/
32100105 0 drwxr-s--- 3 postgres postgres 20 Jan 30 00:43 /var/lib/pgbackrest/archive
11599111 0 drwxr-s--- 2 postgres postgres 51 Jan 30 00:43 /var/lib/pgbackrest/archive/foobar
11810111 4 -rw-r----- 1 postgres postgres 253 Jan 30 00:43 /var/lib/pgbackrest/archive/foobar/archive.info
43297321 4 -rw-r----- 1 postgres postgres 253 Jan 30 00:43 /var/lib/pgbackrest/archive/foobar/archive.info.copy
14101100 0 drwxr-s--- 3 pgbackrest postgres 20 Jan 30 00:43 /var/lib/pgbackrest/backup
32104101 0 drwxr-s--- 2 pgbackrest postgres 49 Jan 30 00:43 /var/lib/pgbackrest/backup/foobar
11411410 4 -rw-r----- 1 pgbackrest postgres 370 Jan 30 00:43 /var/lib/pgbackrest/backup/foobar/backup.info
5110103 4 -rw-r----- 1 pgbackrest postgres 370 Jan 30 00:43 /var/lib/pgbackrest/backup/foobar/backup.info.copy
</code></pre><p>Note that the new user does not need write access to the WAL files (which exist in the 'archive' directory), so you could tighten that up if you want:<pre><code class=language-shell>sudo find /var/lib/pgbackrest/archive -exec chmod g-w {} \;
</code></pre><h2 id=file-permissions-summary><a href=#file-permissions-summary>File permissions summary</a></h2><p>Here's a summary of all the file permissions we need to set to have a second user perform backups using pgBackRest:<table><thead><tr><th>Item<th>Default<th>Config setting name<th>Group permissions<tbody><tr><td>Backrest repository (archive)<td>/var/lib/pgbackrest/archive<td><a href=https://pgbackrest.org/configuration.html#section-repository/option-repo-path>repo1-path</a><td>Read only<tr><td>Backrest repository (backup)<td>/var/lib/pgbackrest/backup<td><a href=https://pgbackrest.org/configuration.html#section-repository/option-repo-path>repo1-path</a><td>Read and write<tr><td>Configuration files<td>/etc/pgbackrest<td><a href=https://pgbackrest.org/configuration.html#introduction>built-in</a><td>Read only<tr><td>Locking<td>/tmp/pgbackrest<td><a href=https://pgbackrest.org/configuration.html#section-general/option-lock-path>lock-path</a><td>Read and write<tr><td>Logging<td>/var/log/pgbackrest<td><a href=https://pgbackrest.org/configuration.html#section-log/option-log-path>log-path</a><td>Write only*<tr><td>Spool for async WAL push<td>/var/spool/pgbackrest<td><a href=https://pgbackrest.org/configuration.html#section-general/option-spool-path>spool-path</a><td>None<tr><td>Postgres data directory<td>Varies:<code>SHOW data_directory</code><td>pg1-path<td>Read only<tr><td>Postgres logs<td>Varies, often $DATADIR/log<td>N/A<td>Read (not needed but nice to have)</table><p>* Not needed if <a href=https://pgbackrest.org/configuration.html#section-log/option-log-level-file>log-level-file</a> is set to off<h3 id=create-a-regular-postgres-database-user><a href=#create-a-regular-postgres-database-user>Create a regular Postgres database user</a></h3><p>Now that the file permissions are all in place, we need to create an account inside of Postgres itself. We want a regular, non-superuser account that has minimal privileges. 
To keep things simple, we will call this user "backrest".<p>If you are following along, give it a good password. Here's one way to make one:<pre><code class=language-shell>$ dd if=/dev/urandom count=1 status=none | md5sum | awk '{print$1}' | tee mypass
1560a2dff5992750d9748cbda44b4c51
</code></pre><p>Create the new Postgres user, assign it the password generated above, then put that password into the "pgpass" file for the Unix user "pgbackrest":<pre><code class=language-shell>$ sudo -iu postgres createuser backrest --pwprompt
## (enter password twice)
$ echo "*:*:postgres:backrest:$(cat mypass)" | sudo -iu pgbackrest tee -a .pgpass
$ sudo -iu pgbackrest chmod 600 .pgpass
</code></pre><h3 id=restrict-what-this-new-database-user-can-do><a href=#restrict-what-this-new-database-user-can-do>Restrict what this new database user can do</a></h3><p>We only want this new database user to connect to the "postgres" database and nowhere else, so we need to add these lines to the pg_hba.conf file, making sure they appear before any other "local" lines:<pre><code class=language-text>local  postgres  backrest  scram-sha-256
local  all       backrest  reject
</code></pre><p>Alas, there is currently no equivalent to <a href=https://www.postgresql.org/docs/current/sql-altersystem.html>ALTER SYSTEM</a> for the pg_hba.conf file, but a little command-line trickery gets the job done:<pre><code class=language-shell>$ sudo -iu postgres bash -c \
  'sed -i "1i local postgres backrest scram-sha-256 \nlocal all backrest reject" $(psql -Atc "show hba_file")'
$ sudo -iu postgres psql -c 'select pg_reload_conf()'
</code></pre><p>Let's make sure the rules are in there:<pre><code class=language-shell>$ sudo -iu postgres psql -c 'select * from pg_hba_file_rules limit 2'
 line_number | type  |  database  | user_name  | address | netmask |  auth_method  | options | error
-------------+-------+------------+------------+---------+---------+---------------+---------+-------
           1 | local | {postgres} | {backrest} |         |         | scram-sha-256 |         |
           2 | local | {all}      | {backrest} |         |         | reject        |         |
(2 rows)
</code></pre><p>Even though this user can only connect to a single database, let's further limit what it can do by revoking all access to the 'public' schema:<pre><code class=language-shell>sudo -iu postgres psql -c 'revoke all on schema public from backrest'
</code></pre><h3 id=grant-permission-to-the-backup-command><a href=#grant-permission-to-the-backup-command>Grant permission to the 'backup' command</a></h3><p>To be able to run backups, the new database user will need access to one role and two functions:<pre><code class=language-shell>$ sudo -iu postgres psql \
 -c 'grant pg_read_all_settings to backrest' \
 -c 'grant execute on function pg_start_backup to backrest' \
 -c 'grant execute on function pg_stop_backup(bool,bool) to backrest'
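</code></pre><p>We can verify grants like these from a superuser session with the built-in <code>has_function_privilege</code> function. A sketch (this assumes the "backrest" database user created earlier, and the PostgreSQL 14 function signature):<pre><code class=language-pgsql>-- Should return true once the grant is in place
SELECT has_function_privilege('backrest', 'pg_start_backup(text,boolean,boolean)', 'execute');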
</code></pre><p>Our sample database is not archiving WAL files via pgBackRest yet, so let's put that in place now:<pre><code class=language-shell>sudo -iu postgres psql -c "alter system set archive_mode=on"
sudo -iu postgres psql -c "alter system set archive_command='pgbackrest --stanza=foobar archive-push %p'"
sudo -iu postgres /usr/pgsql-14/bin/pg_ctl restart
</code></pre><h3 id=grant-permission-to-the-check-command><a href=#grant-permission-to-the-check-command>Grant permission to the 'check' command</a></h3><p>We can in theory do a backup now, but how about the <a href=https://pgbackrest.org/command.html#command-check>pgBackRest "check" command?</a><pre><code class=language-shell>$ sudo --user pgbackrest -i /bin/pgbackrest --stanza=foobar --log-level-console=detail check
ERROR: [057]: unable to execute query 'select pg_catalog.pg_create_restore_point('pgBackRest Archive Check')::text':
ERROR:  permission denied for function pg_create_restore_point
</code></pre><p>Well, it turns out that the check command, and only the check command, requires permission for two more database functions:<pre><code class=language-shell>$ sudo -iu postgres psql \
 -c 'grant execute on function pg_create_restore_point to backrest' \
 -c 'grant execute on function pg_switch_wal to backrest'
</code></pre><p>The check command is working as expected now:<pre><code class=language-shell>$ sudo -iu pgbackrest  /bin/pgbackrest --stanza foobar  check  --log-level-console=detail
2022-01-01 03:19:22.033 P00   INFO: check command begin 2.36: --exec-id=9921-e64ad021 --log-level-console=detail
                                    --pg1-path=/var/lib/pgsql/14/data --stanza=foobar
2022-01-01 03:19:22.011 P00   INFO: check repo1 configuration (primary)
2022-01-01 03:19:22.005 P00   INFO: check repo1 archive for WAL (primary)
2022-01-01 03:19:22.025 P00   INFO: WAL segment 000000010000000000000002 successfully archived to
                                    '/var/lib/pgbackrest/archive/foobar/14-1/0000000100000000
                                     /000000010000000000000002-6273e062555e65ea850137e743f73fe941746F5A.gz' on repo1
2022-01-01 03:19:22.033 P00   INFO: check command end: completed successfully (1321ms)
</code></pre><h3 id=create-a-backup><a href=#create-a-backup>Create a backup</a></h3><p>Now that the check command is working, let's take our first backup!<pre><code class=language-shell>$ sudo -iu pgbackrest /bin/pgbackrest --stanza=foobar --log-level-console=info backup
2022-01-01 03:19:23.116 P00   INFO: backup command begin 2.36: --exec-id=9996-6beecee3 --log-level-console=info
                                    --pg1-path=/var/lib/pgsql/14/data --stanza=foobar --start-fast
WARN: option 'repo1-retention-full' is not set for 'repo1-retention-full-type=count', the repository may run out of space
      HINT: to retain full backups indefinitely (without warning), set option 'repo1-retention-full' to the maximum.
WARN: no prior backup exists, incr backup has been changed to full
2022-01-01 03:19:23.104 P00   INFO: execute non-exclusive pg_start_backup(): backup begins after the requested immediate checkpoint completes
2022-01-01 03:19:23.101 P00   INFO: backup start archive = 000000010000000000000005, lsn = 0/5000028
2022-01-01 03:19:23.102 P00   INFO: execute non-exclusive pg_stop_backup() and wait for all WAL segments to archive
2022-01-01 03:19:23.117 P00   INFO: backup stop archive = 000000010000000000000005, lsn = 0/5003EE8
2022-01-01 03:19:23.108 P00   INFO: check archive for segment(s) 000000010000000000000005:000000010000000000000005
2022-01-01 03:19:23.108 P00   INFO: new backup label = 20220101-031923F
2022-01-01 03:19:23.116 P00   INFO: full backup size = 25.8MB, file total = 952
2022-01-01 03:19:23.105 P00   INFO: backup command end: completed successfully (4451ms)
2022-01-01 03:19:23.116 P00   INFO: expire command begin 2.36: --exec-id=9996-6beecee3 --log-level-console=info --stanza=foobar
2022-01-01 03:19:23.108 P00   INFO: option 'repo1-retention-archive' is not set - archive logs will not be expired
2022-01-01 03:19:23.101 P00   INFO: expire command end: completed successfully (6ms)
</code></pre><p>Those two warnings are not important for now. Let's check that the backup looks complete by using the <a href=https://pgbackrest.org/command.html#command-info>pgBackRest "info" command</a>:<pre><code class=language-shell>$ sudo -iu pgbackrest /bin/pgbackrest info
stanza: foobar
    status: ok
    cipher: none

    db (current)
        wal archive min/max (14): 000000010000000000000001/000000010000000000000007

        full backup: 20220101-031923F
            timestamp start/stop: 2022-01-01 03:19:23.000  / 2022-01-01 03:19:23.999
            wal start/stop: 000000010000000000000005 / 000000010000000000000005
            database size: 25.8MB, database backup size: 25.8MB
            repo1: backup set size: 3.2MB, backup size: 3.2MB
</code></pre><h2 id=database-permissions-summary><a href=#database-permissions-summary>Database permissions summary</a></h2><p>Here's a summary of all the Postgres database permissions we need to use pgBackRest:<table><thead><tr><th>Item<th>Type<th>Needed for<tbody><tr><td>pg_read_all_settings<td>role<td>backup<tr><td>pg_start_backup<td>function<td>backup<tr><td>pg_stop_backup<td>function<td>backup<tr><td>pg_create_restore_point<td>function<td>check<tr><td>pg_switch_wal<td>function<td>check</table><p>That's it! We used the power of Unix groups and selective EXECUTE privileges on a handful of functions to make a user that can create Postgres backups through the pgBackRest program with the least permissions possible.<h3 id=final-notes><a href=#final-notes>Final notes</a></h3><ul><li>This new user should not do restores - for that, use the "postgres" user.<li>Anything that may create new Postgres clusters (e.g. with <code>initdb</code>) will need to ensure that the new permissions in the data directory are set. For Patroni, a quick shell script attached to the "on_start" hook will suffice.<li>Using S3, GCS, or Azure (which are all supported by pgBackRest) will require further tweaks.<li>If you tie this in with <a href=https://www.crunchydata.com/products/hardened-postgres>TDE</a> (transparent data encryption), then you will have "blind" backups in which the files you are backing up are encrypted and cannot be decrypted by your backup user.<li>For further Least Privilege considerations, ask about the <a href=https://www.crunchydata.com/products/hardened-postgres>Crunchy Hardened</a> Superuser Lockdown feature.</ul> ]]></content:encoded>
<category><![CDATA[ Security ]]></category>
<author><![CDATA[ Greg.Sabino.Mullane@crunchydata.com (Greg Sabino Mullane) ]]></author>
<dc:creator><![CDATA[ Greg Sabino Mullane ]]></dc:creator>
<guid isPermalink="false">https://blog.crunchydata.com/blog/secure-permissions-for-pgbackrest</guid>
<pubDate>Fri, 04 Feb 2022 04:00:00 EST</pubDate>
<dc:date>2022-02-04T09:00:00.000Z</dc:date>
<atom:updated>2022-02-04T09:00:00.000Z</atom:updated></item></channel></rss>