<?xml version="1.0" encoding="UTF-8" ?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" version="2.0"><channel><title>Greg Nokes | CrunchyData Blog</title>
<atom:link href="https://www.crunchydata.com/blog/author/greg-nokes/rss.xml" rel="self" type="application/rss+xml" />
<link>https://www.crunchydata.com/blog/author/greg-nokes</link>
<image><url>https://www.crunchydata.com/build/_assets/greg-nokes.png-SNSP7MYO.webp</url>
<title>Greg Nokes | CrunchyData Blog</title>
<link>https://www.crunchydata.com/blog/author/greg-nokes</link>
<width>199</width>
<height>199</height></image>
<description>PostgreSQL experts from Crunchy Data share advice, performance tips, and guides on successfully running PostgreSQL and Kubernetes solutions</description>
<language>en-us</language>
<pubDate>Thu, 30 May 2024 06:00:00 EDT</pubDate>
<dc:date>2024-05-30T10:00:00.000Z</dc:date>
<dc:language>en-us</dc:language>
<sy:updatePeriod>hourly</sy:updatePeriod>
<sy:updateFrequency>1</sy:updateFrequency>
<item><title><![CDATA[ Data Encryption in Postgres: A Guidebook ]]></title>
<link>https://www.crunchydata.com/blog/data-encryption-in-postgres-a-guidebook</link>
<description><![CDATA[ Greg reviews data encryption methods including solutions for filesystem, data storage, transparent data encryption, and application level encryption. Greg includes pros and cons of each to help you decide on the right method for your data store. ]]></description>
<content:encoded><![CDATA[ <p>When your company has decided it's time to invest in more open source, Postgres is the obvious choice. Managing databases is not new, and you already have established practices and requirements for rolling out a new database. One of the big requirements we frequently help new customers with as they adopt Postgres is data encryption. While the question is simple, there are a few layers to it that determine the right approach for you. Here we'll walk through the pros and cons of each approach and help you identify the right path for your needs.<h2 id=overview-of-at-rest-encryption-methods><a href=#overview-of-at-rest-encryption-methods>Overview of At-Rest Encryption Methods</a></h2><p>Let’s start by defining some terms. There are four primary ways to encrypt your data while it is at rest:<h3 id=os-level-and-filesystem-encryption><a href=#os-level-and-filesystem-encryption>OS-Level and Filesystem Encryption</a></h3><p>Operating system or disk-level encryption protects entire file systems or disks. This method is application-agnostic and offers encryption with minimal overhead.
Think technologies like <code>LUKS</code> in Linux or FileVault in macOS.<p><strong>Pros</strong>:<ul><li>Transparent to applications and the database<li>Simplifies management by applying encryption to the entire storage layer<li>Offloads encryption and decryption processing to the OS<li>Minimal performance and operational impact<li>Widely understood and implemented technology</ul><p><strong>Cons</strong>:<ul><li>Less granular control over specific databases or tables<li>Backups are not encrypted by default<li>Additional overhead is required to ensure encryption keys are properly managed</ul><h3 id=storage-device-encryption><a href=#storage-device-encryption>Storage Device Encryption</a></h3><p>Encryption is directly implemented on storage devices such as hard disk drives or SSDs, which automatically encrypt all of the data written to their storage.<p><strong>Pros</strong>:<ul><li>Suitable for environments with hardware security requirements<li>Minimal performance and operational impact<li>Offloads encryption and decryption processing to the hardware layer</ul><p><strong>Cons</strong>:<ul><li>Less granular control over specific databases or tables<li>Additional overhead is required to ensure encryption keys are properly managed</ul><h3 id=transparent-disk-encryption-tde><a href=#transparent-disk-encryption-tde>Transparent Data Encryption (TDE)</a></h3><p>In the context of Postgres, TDE means offloading encryption and decryption to the Postgres application. TDE encrypts the entire database, its associated backup files, and the transaction log files, using a database encryption key. This process is transparent to applications, meaning they operate without any changes, as the encryption and decryption happen at the database engine level.<p>This introduces some complexity and performance overhead as the database must handle all encryption and decryption tasks. Postgres does not currently have this capability built in.
Generally, TDE in Postgres is accomplished by forking Postgres, applying patches, and rebuilding the forked TDE-enabled version.<p><strong>Pros</strong>:<ul><li>Encryption at the database level</ul><p><strong>Cons</strong>:<ul><li>The database must handle all encryption and decryption for every disk read and write<li>Moving away from TDE can be complex and requires an expensive dump and restore process<li>Risk of total data loss if keys are not accessible<li>Additional overhead is required to ensure encryption keys are properly managed<li>Functionality is not native to Postgres and currently requires a forked version of the code<li>Data in memory is not encrypted</ul><h3 id=application-level-encryption><a href=#application-level-encryption>Application-Level Encryption</a></h3><p>Encryption logic is implemented directly within your application code. This method can impact performance and add complexity but offers the most flexibility. You can use tools like <a href=https://www.crunchydata.com/blog/postgres-pgcrypto><code>pgcrypto</code></a> alongside your application code to encrypt data at the column level, ensuring that your most sensitive data is stored safely. You can create a “dual key” system, where one key unlocks access to the database, and the second key unlocks access to sensitive data stored in the database.
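<p>As a minimal sketch of the column-level approach (the table, column, and key values here are purely illustrative), <code>pgcrypto</code> can encrypt just the sensitive fields:<pre><code class=language-sql>-- one-time setup: enable the pgcrypto extension
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE customers (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email text,
    ssn bytea  -- stored only as ciphertext
);

-- the application supplies the second key at runtime;
-- 'app-held-key' below is a placeholder, not a real key
INSERT INTO customers (email, ssn)
VALUES ('ada@example.com', pgp_sym_encrypt('123-45-6789', 'app-held-key'));

-- only callers holding the key can read the plaintext
SELECT email, pgp_sym_decrypt(ssn, 'app-held-key') AS ssn
FROM customers;
</code></pre><p>Note that when <code>pgcrypto</code> runs inside the database, the key does appear in each query; encrypting in application code before the <code>INSERT</code> keeps the key out of the database entirely.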
The database need never be aware of the second key, as the application uses it to encrypt and decrypt data before it sends it to the database.<p><strong>Pros</strong>:<ul><li>Offloads encryption and decryption processing to the application layer<li>Allows fine-grained encryption down to the field level<li>Enables encryption tailored to the application's specific security needs<li>Provides a level of separation of duties, shielding sensitive data from administrators and other non-application users</ul><p><strong>Cons</strong>:<ul><li>Greatly increases application complexity<li>Additional management overhead is required to ensure encryption keys are properly managed<li>Can hinder database search, sorting, and indexing capabilities</ul><h2 id=how-we-think-about-data-encryption><a href=#how-we-think-about-data-encryption>How We Think About Data Encryption</a></h2><p>First and foremost, are you encrypting your data in flight? That's just table stakes in our mind. When it comes to at-rest data encryption, you need to think about the options available and your real requirements.<h3 id=1-assess-compliance-requirements><a href=#1-assess-compliance-requirements>1. Assess Compliance Requirements</a></h3><p>Understanding regulatory frameworks and internal requirements is crucial in deciding which encryption strategy to implement. Your specific requirements should guide your decision-making process and help you decide which technology is the best fit for your environment.<h3 id=2-evaluate-existing-architecture><a href=#2-evaluate-existing-architecture>2. Evaluate Existing Architecture</a></h3><p>When selecting an encryption strategy, evaluate the existing architecture and available resources. Consider OS support, hardware resources, and storage devices to ensure compatibility and minimal disruption.
Think about any operational burden you may incur.<h3 id=3-balance-complexity-and-security><a href=#3-balance-complexity-and-security>3. Balance Complexity and Security</a></h3><p>Finding the right balance between security and performance is critical. Encrypting highly sensitive data with more intensive methods is justified, but less critical data may be adequately protected by lighter-weight methods. Testing and understanding your requirements can help identify acceptable trade-offs in complexity.<h3 id=4-minimize-management-complexity><a href=#4-minimize-management-complexity>4. Minimize Management Complexity</a></h3><p>Encryption solutions should not overwhelm existing management workflows. Effective key management, version compatibility, and alignment with current security operations are essential to minimize additional management overhead.<h3 id=5-combine-strategies-for-layered-security><a href=#5-combine-strategies-for-layered-security>5. Combine Strategies for Layered Security</a></h3><p>Consider combining encrypted storage with in-database encryption for highly sensitive data. Layered security can provide additional safeguards and may enhance overall data protection.<h2 id=finding-the-right-solution><a href=#finding-the-right-solution>Finding the Right Solution</a></h2><h3 id=for-general-implementations-or-medium-to-high-security-environments><a href=#for-general-implementations-or-medium-to-high-security-environments>For General Implementations or Medium to High Security Environments</a></h3><p>Most implementations fall into this category. Think about OS-level or filesystem encryption for ease of deployment and compatibility across multiple applications. You can leverage tools like tablespaces to target more secure underlying storage systems for tables with more sensitive data.
Design a solution leveraging filesystem, disk, and application-level encryption that addresses your specific needs.<h3 id=for-when-disk-encryption-is-not-an-option><a href=#for-when-disk-encryption-is-not-an-option>For When Disk Encryption Is Not an Option</a></h3><p>If you really require transparent data encryption, evaluate whether storage-level encryption is sufficient. For workloads with stringent data classification requirements that necessitate TDE, ensure that you undertake a proof of concept with clear goals and measurable outcomes. Test and create playbooks for events like a data restore to an off-site database. Ensure that you have built operational expertise in key management, security, and backups.<h2 id=conclusion><a href=#conclusion>Conclusion</a></h2><p>Encrypting your data while it is at rest is essential for data security in Postgres environments. By understanding your needs and balancing various encryption approaches, you can achieve optimal data protection without overcomplicating your workflows. With the right strategy, you can ensure security, compliance, and performance, providing a robust solution tailored to each environment's needs. At Crunchy Data we have deep expertise in helping enterprises and agencies navigate these decisions and design secure solutions that meet their requirements while also keeping maintenance and operational complexity low. We have helped countless organizations navigate these challenges and would be happy to discuss this further with you. Reach out to <a href=mailto:info@crunchydata.com>info@crunchydata.com</a> for more information. ]]></content:encoded>
<category><![CDATA[ Production Postgres ]]></category>
<author><![CDATA[ Greg.Nokes@crunchydata.com (Greg Nokes) ]]></author>
<dc:creator><![CDATA[ Greg Nokes ]]></dc:creator>
<guid isPermaLink="false">2e728b8128c0f4367956b1f6f19996408d56ffcbfe2ca60fad15ddb30f453f9b</guid>
<pubDate>Thu, 30 May 2024 06:00:00 EDT</pubDate>
<dc:date>2024-05-30T10:00:00.000Z</dc:date>
<atom:updated>2024-05-30T10:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Crunchy Postgres for Kubernetes 5.3 Release ]]></title>
<link>https://www.crunchydata.com/blog/crunchy-postgres-for-kubernetes-5.3-release</link>
<description><![CDATA[ Announcing PGO support for Postgres 15, Kubernetes 1.25, IPv6, Helm charts, and more! ]]></description>
<content:encoded><![CDATA[ <p>We are excited to announce the release of <strong><a href=https://www.crunchydata.com/products/crunchy-postgresql-for-kubernetes>Crunchy Postgres for Kubernetes</a> version 5.3</strong>. We have been hard at work on a lot of new features that we cannot wait to get into your hands. You can get started on version 5.3 from our <strong><a href=https://access.crunchydata.com/documentation/postgres-operator/latest>Developer Portal</a></strong> or the <strong><a href=https://www.crunchydata.com/developers/get-started/postgres-operator>getting started tutorial</a></strong>. We have decided to highlight a few of our favorite new features and changes today.<h2 id=hot-off-the-presses-postgres-15-and-kubernetes-125><a href=#hot-off-the-presses-postgres-15-and-kubernetes-125>Hot off the Presses: Postgres 15 and Kubernetes 1.25</a></h2><p>With the latest release of Crunchy Postgres for Kubernetes, we are excited to now natively support Postgres 15. Please note that TimescaleDB and pgAdmin 4 are not currently supported for use with Postgres 15. We are hard at work enabling that functionality for Crunchy Postgres for Kubernetes.<p>We are also excited to announce that Crunchy Postgres for Kubernetes now offers support for Kubernetes 1.25. Be sure to check your apps’ usage of <code>CronJob</code>, <code>PodDisruptionBudget</code>, or <code>PodSecurityPolicy</code> before upgrading; some of these <a href=https://git.k8s.io/kubernetes/CHANGELOG/CHANGELOG-1.25.md#urgent-upgrade-notes>API versions have been removed</a> in Kubernetes 1.25.<p>Some of the notable updates in Kubernetes 1.25 that Crunchy Postgres for Kubernetes now supports include:<ul><li><a href=https://kubernetes.io/docs/concepts/security/pod-security-admission/>Pod Security Admission</a> is stable and Crunchy Postgres for Kubernetes is generally compliant with its baseline policy. 
It is also compliant with the restricted policy in OpenShift.<li><a href=https://kubernetes.io/docs/concepts/architecture/cgroups/>cgroup v2</a> is stable and works with the bundled pgMonitor v4.8.0 container metrics.</ul><h2 id=tls-for-exporter><a href=#tls-for-exporter>TLS for Exporter</a></h2><p>In our ongoing quest to ensure that Crunchy Postgres for Kubernetes remains the standard for securely managing Postgres in Kubernetes, we are hard at work enabling safe and sane defaults and settings everywhere we can. We are excited to announce that in Crunchy Postgres for Kubernetes 5.3 you can now enable full TLS for the Postgres Exporter, the tool that exports metrics from the Postgres pods.<h2 id=helm-charts><a href=#helm-charts>Helm Charts</a></h2><p>We know a lot of our users like using Helm charts to install Crunchy Postgres for Kubernetes, and we are excited to announce that we are now hosting install charts for Crunchy Postgres for Kubernetes 5.3 in our own OCI registry. You can find instructions for using these charts <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/installation/helm/>here</a>.<pre><code class=language-yaml>$ helm show chart oci://registry.developers.crunchydata.com/crunchydata/pgo
Pulled: registry.developers.crunchydata.com/crunchydata/pgo:5.3.0
Digest: sha256:7f50f74b0bb4fde32348af87cc740b79ed7be965f90002bcb2425e27fb02a9e3
apiVersion: v2
appVersion: 5.3.0
description: Installer for PGO, the open source Postgres Operator from Crunchy Data
name: pgo
type: application
version: 5.3.0

$ helm install pgo oci://registry.developers.crunchydata.com/crunchydata/pgo
Pulled: registry.developers.crunchydata.com/crunchydata/pgo:5.3.0
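# confirm the release installed into the current namespace:
$ helm list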
</code></pre><h2 id=ipv6-for-pgbackrest><a href=#ipv6-for-pgbackrest>IPv6 for pgBackRest</a></h2><p>We are excited to announce that IPv6 is now supported for pgBackRest. Users can now successfully deploy Postgres clusters to Kubernetes environments that are configured for IPv6 only! You can find directions for setting that up in the <a href=https://access.crunchydata.com/documentation/postgres-operator/5.3.0/tutorial/backups/>Backup Configuration section</a> of the docs.<h2 id=open-source-contributions><a href=#open-source-contributions>Open Source Contributions</a></h2><p>Of course, a large thank you goes out to our open source contributors who help make the Postgres Operator better for everyone. Here are the highlights of their contributions from the last quarter:<ul><li>JIT is now explicitly disabled for the monitoring user, allowing users to opt into using JIT elsewhere in the database without impacting exporter functionality. Contributed by Kirill Petrov (@<a href=https://github.com/chobostar>chobostar</a>).<li>PGO now logs both stdout and stderr when running a SQL file referenced via <code>spec.databaseInitSQL</code> during database initialization. Contributed by Jeff Martin (@<a href=https://github.com/jmartin127>jmartin127</a>).<li>Limit the monitoring user to local connections using SCRAM authentication. Contributed by Scott Zelenka (@<a href=https://github.com/szelenka>szelenka</a>).<li>Skip a scheduled backup when the prior one is still running. Contributed by Scott Zelenka (@<a href=https://github.com/szelenka>szelenka</a>).</ul><p>If you would like to lend a hand with PGO development, get started by reviewing the <a href=https://github.com/CrunchyData/postgres-operator/blob/master/CONTRIBUTING.md>contributing guidelines</a>.<h2 id=and-more><a href=#and-more>And More…</a></h2><p>We are very excited to bring you this next version of <a href=https://www.crunchydata.com/products/crunchy-postgresql-for-kubernetes>Crunchy Postgres for Kubernetes</a>.
This is just a sampling of the new features and fixes that we have shipped with this version. We hope that you enjoy using it, and as always we value our community's feedback. The full feature notes are available <strong><a href=https://access.crunchydata.com/documentation/postgres-operator/5.3.0/>in our documentation</a></strong>. ]]></content:encoded>
<category><![CDATA[ Kubernetes ]]></category>
<author><![CDATA[ Greg.Nokes@crunchydata.com (Greg Nokes) ]]></author>
<dc:creator><![CDATA[ Greg Nokes ]]></dc:creator>
<guid isPermaLink="false">1baa02b6e2a4226d7112b73c5bfbb3d649327168860b26d0534e7cb7bb7fbf10</guid>
<pubDate>Wed, 21 Dec 2022 10:00:00 EST</pubDate>
<dc:date>2022-12-21T15:00:00.000Z</dc:date>
<atom:updated>2022-12-21T15:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Crunchy Postgres for Kubernetes 5.2 Launch ]]></title>
<link>https://www.crunchydata.com/blog/crunchy-postgres-for-kubernetes-launch-5.2</link>
<description><![CDATA[ Some exciting new features are launching this week with Crunchy Postgres for Kubernetes 5.2. We have a new CLI plugin for kubectl! Plus sidecar apps, new options for streaming replicas, and upgrade options. ]]></description>
<content:encoded><![CDATA[ <p>We are excited to announce the release of <a href=https://www.crunchydata.com/products/crunchy-postgresql-for-kubernetes>Crunchy Postgres for Kubernetes</a> version 5.2. We have been hard at work on a lot of new features we cannot wait to get into your hands. You can get started on version 5.2 from our <a href=https://access.crunchydata.com/documentation/postgres-operator/latest>container portal</a> or the <a href=https://www.crunchydata.com/developers/get-started/postgres-operator>getting started tutorial</a>. We have decided to highlight a few of our favorite features today.<h2 id=command-lines-for-everyone><a href=#command-lines-for-everyone>Command Lines for Everyone</a></h2><p>First, we are very excited to release the first iteration of our CLI for PGO v5, <code>pgo</code>. Our CLI is designed as a <code>kubectl</code> plugin. You can get it <a href=https://github.com/CrunchyData/postgres-operator-client/releases/latest/>here</a> and you can use it today with PGO 5.2. We have taken steps to align our CLI very closely with <code>kubectl</code> for ease of use. <code>pgo</code> also works as a plugin with the OpenShift <code>oc</code> CLI. This ensures that the developer experience matches the rest of Kubernetes.<p>For example, you can now use the following command to create a cluster:<pre><code class=language-bash>pgo create postgrescluster hippo
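# this creates a PostgresCluster custom resource, so standard
# kubectl tooling works on it as well, e.g.:
kubectl get postgrescluster hippo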
</code></pre><p>Or this command to back up a cluster:<pre><code class=language-bash>pgo backup hippo --repoName=repo1
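# afterwards, list the cluster's backups (command availability may
# vary in this preview release; see `pgo --help` for the current set):
pgo show backup hippo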
</code></pre><p>This is a preview release of the CLI, and we will be working hard on adding more commands and workflows. We welcome <a href=https://github.com/CrunchyData/postgres-operator-client/issues>feedback</a> on what you would like to see included.<h2 id=streaming-replication><a href=#streaming-replication>Streaming Replication</a></h2><p>Enabling opinionated and safe streaming replication for Kubernetes workloads was one of our goals in 5.2 to increase the choices for achieving disaster recovery. Streaming replication can be faster than log shipping and does not rely on a single S3 storage area. You can read more about how to turn this feature on in the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/>documentation</a>.<p><img alt="Streaming Replica Architecture" loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/c14a9826-87eb-4918-d29a-14befa766900/public><p>This feature allows new architecture choices for replicas and standbys, with low-latency direct interconnectivity. We now support streaming replication on its own, as well as streaming combined with a cloud-based WAL archive.<h2 id=feature-gates-and-sidecars><a href=#feature-gates-and-sidecars>Feature Gates and Sidecars</a></h2><p>We have added two feature gates related to sidecars. You can now add any container you want to Postgres or PgBouncer pods through the PostgresCluster spec. It can be an observability agent like Datadog or New Relic, a network proxy such as a pooler or service mesh, or an API endpoint like Hasura. The possibilities are endless and we’re excited to see what our users build. We relaxed the <code>runAsNonRoot</code> security setting so you can run more images from your favorite registries. This opens up more custom use cases in a safe and opinionated fashion. This will allow folks on 5.2 additional flexibility to design a system that works well for their needs.
You can read more in the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/guides/configuring-cluster-images/>documentation</a>.<p>We will be working with customers and partners to release documented sidecar patterns in the coming months, and we welcome your <a href=https://github.com/CrunchyData/postgres-operator>pull requests</a> if you want to share.<h2 id=easy-upgrades><a href=#easy-upgrades>Easy Upgrades</a></h2><p>With the 5.x version of PGO, we have focused on making the upgrade process easier. One improvement that we have delivered with 5.2 is the new <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/cluster-management/administrative-tasks>Pause Reconcile</a> feature. This enables customers to pause reconciliation until a convenient time, allowing users to stage changes and execute them when ready.<p><a href=https://access.crunchydata.com/documentation/postgres-operator/latest/upgrade>Upgrading your PGO version</a> requires just a few simple, non-destructive steps. For example, if you are on the 5.1 series, updating to 5.2 can be as easy as:<pre><code class=language-bash>kubectl apply --server-side -k kustomize/install/default
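# then watch the operator roll out; this assumes the default
# "postgres-operator" namespace and "pgo" deployment name from
# the kustomize install
kubectl -n postgres-operator rollout status deployment/pgo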
</code></pre><h2 id=and-more><a href=#and-more>And More…</a></h2><p>We are very excited to bring you this next version of Crunchy Postgres for Kubernetes. This is just a sampling of the new features and fixes that we have shipped with this version. We hope that you enjoy using it, and as always we value our community's feedback. The full feature notes are available <a href=https://access.crunchydata.com/documentation/postgres-operator/5.2.0/>in our documentation</a>.<p>Co-authored with <a href=https://www.crunchydata.com/blog/author/andrew-lecuyer>Andrew L'Ecuyer</a> and Chris Bandy ]]></content:encoded>
<category><![CDATA[ Kubernetes ]]></category>
<author><![CDATA[ Greg.Nokes@crunchydata.com (Greg Nokes) ]]></author>
<dc:creator><![CDATA[ Greg Nokes ]]></dc:creator>
<guid isPermaLink="false">c12bd2d81b2118a88f3316d97507adf672128e9baf8e18d0857364a8f18b4c70</guid>
<pubDate>Fri, 09 Sep 2022 11:00:00 EDT</pubDate>
<dc:date>2022-09-09T15:00:00.000Z</dc:date>
<atom:updated>2022-09-09T15:00:00.000Z</atom:updated></item></channel></rss>