<?xml version="1.0" encoding="UTF-8" ?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" version="2.0"><channel><title>Andrew L'Ecuyer | CrunchyData Blog</title>
<atom:link href="https://www.crunchydata.com/blog/author/andrew-lecuyer/rss.xml" rel="self" type="application/rss+xml" />
<link>https://www.crunchydata.com/blog/author/andrew-lecuyer</link>
<image><url>https://www.crunchydata.com/build/_assets/andrew_lecuyer.png-GEIYNBVF.webp</url>
<title>Andrew L'Ecuyer | CrunchyData Blog</title>
<link>https://www.crunchydata.com/blog/author/andrew-lecuyer</link>
<width>1462</width>
<height>1599</height></image>
<description>PostgreSQL experts from Crunchy Data share advice, performance tips, and guides on successfully running PostgreSQL and Kubernetes solutions</description>
<language>en-us</language>
<pubDate>Tue, 05 Dec 2023 08:00:00 EST</pubDate>
<dc:date>2023-12-05T13:00:00.000Z</dc:date>
<dc:language>en-us</dc:language>
<sy:updatePeriod>hourly</sy:updatePeriod>
<sy:updateFrequency>1</sy:updateFrequency>
<item><title><![CDATA[ Announcing Crunchy Postgres for Kubernetes 5.5 ]]></title>
<link>https://www.crunchydata.com/blog/announcing-crunchy-postgres-for-kubernetes-5-5</link>
<description><![CDATA[ Version 5.5 of Crunchy Postgres for Kubernetes is out and Andrew has an overview of highlights. We have a really cool new pgAdmin set up, streamlined metrics features, updated pgBouncer and more. ]]></description>
<content:encoded><![CDATA[ <p>We're excited to announce the release of Crunchy Postgres for Kubernetes 5.5. Included in this release are great updates to database administration, monitoring, connection pooling and more. Specific highlights include:<ul><li>An updated pgAdmin experience, including the ability to deploy one pgAdmin for use with multiple Postgres clusters<li>Easier installation of the metrics and monitoring tools, along with the added ability to monitor standby clusters<li>Updates to PgBouncer including support for prepared statements and streamlined connectivity to read replicas</ul><p>We have many updates in this version, so see the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/releases/5.5.x>Crunchy Postgres for Kubernetes 5.5 Release Notes</a> for the full list of all changes included.<h2 id=updated-pgadmin><a href=#updated-pgadmin>Updated pgAdmin</a></h2><p>In the 5.5 release, we’ve introduced a new pgAdmin Custom Resource. Designed from the ground up to put pgAdmin at your fingertips where and when needed, this version lets you:<ul><li>Add one or more Postgres clusters to a single pgAdmin deployment<li>Deploy the latest and greatest pgAdmin features &#38; releases, including full Postgres 16 support<li>Integrate with enterprise authentication methods, such as LDAP</ul><p>Here is a quick peek at what a simple pgAdmin deployment looks like using the new Custom Resource:<pre><code class=language-yaml>apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGAdmin
metadata:
  name: rhino-admin
spec:
  dataVolumeClaimSpec:
    accessModes:
      - 'ReadWriteOnce'
    resources:
      requests:
        storage: 1Gi
  serverGroups:
    - name: demand
      postgresClusterSelector:
        matchLabels:
          app: demo
</code></pre><p>From here I simply need to ensure any of the PostgresClusters I want to connect to have the <code>demo</code> label.<pre><code>kubectl label postgrescluster hippo app=demo
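# (Hypothetical follow-up, not from the release notes) to later detach the
# cluster from this pgAdmin deployment, remove the label again:
#   kubectl label postgrescluster hippo app-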
</code></pre><p>That's it! From here, Crunchy Postgres for Kubernetes will handle the rest. This includes configuring a connection to my <code>hippo</code> PostgresCluster within the <code>rhino-admin</code> deployment.<h2 id=enhanced-metrics--monitoring><a href=#enhanced-metrics--monitoring>Enhanced Metrics &#38; Monitoring</a></h2><p>We're always looking for ways to make it easy to get insights from your Postgres clusters, and we’ve added a number of features, including these notable ones:<ul><li>Helm chart for installing metrics and monitoring<li>Easily adding custom queries to the monitor<li>Monitoring standby clusters</ul><p>Additional details on each of these can be found below.<h4 id=helm-support><a href=#helm-support>Helm support</a></h4><p>With version 5.5, we're excited to announce that Helm support is now available for the monitoring stack. This brings monitoring, an important day-2 activity, to a variety of Helm-based workflows. The Helm chart is available via an OCI registry or the examples repo.<pre><code>helm install crunchy \
oci://registry.developers.crunchydata.com/crunchydata/crunchy-monitoring
</code></pre><p>Take a look at our docs for more details:<ul><li><a href=https://access.crunchydata.com/documentation/postgres-operator/4.6.2/installation/other/helm/>Installing Crunchy Postgres for Kubernetes Monitoring Using Helm</a><li><a href=https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/day-two/monitoring>Day 2 Tasks: Monitoring</a></ul><h4 id=custom-queries><a href=#custom-queries>Custom queries</a></h4><p>Crunchy Postgres for Kubernetes monitoring comes with many queries out of the box. You can also create custom queries for additional monitoring of anything within the database itself. In the past, you had to maintain a single large file containing all the queries used. Now you can provide custom queries independently, without having to restate the existing default monitoring queries. You can enable this feature via the following feature gate:<pre><code>PGO_FEATURE_GATES="AppendCustomQueries"
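# For example, assuming the operator Deployment is named "pgo" in the
# "postgres-operator" namespace (adjust for your installation):
#   kubectl set env deployment/pgo -n postgres-operator \
#     PGO_FEATURE_GATES="AppendCustomQueries"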
</code></pre><p>See the following docs for more details: <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/day-two/monitoring#append-your-custom-queries-to-the-defaults>Append Your Custom Queries to the Defaults</a>.<h4 id=monitoring-standby-clusters><a href=#monitoring-standby-clusters>Monitoring Standby Clusters</a></h4><p>Finally, this release provides an important improvement for multi-Kubernetes cluster database architectures. As more databases expand beyond a single cluster, multi-cluster monitoring is also needed. For a PostgresCluster, this means being able to monitor standby clusters. Using version 5.5, you can now set the monitoring password in standby clusters. This, in turn, unlocks the ability to install and use the full monitoring stack with a standby cluster. As a result, you can now get insight into your databases across all your Kubernetes clusters.<h2 id=connection-pooling-updates><a href=#connection-pooling-updates>Connection Pooling Updates</a></h2><p>We're happy to announce updates to <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/basic-setup/connection-pooling>connection pooling</a>. PgBouncer v1.21 is now available in Crunchy Postgres for Kubernetes 5.5. This version of <a href=https://www.crunchydata.com/blog/prepared-statements-in-transaction-mode-for-pgbouncer>PgBouncer introduces support for prepared statements while using transaction mode</a>.<p>As a quick overview, prepared statements are a query efficiency mechanism in Postgres. If you are running the exact same query multiple times, you can create a prepared statement. Once created, you can execute the prepared statement over and over, removing the parsing and preparing steps of the query. This can dramatically increase query performance. 
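<p>As a quick illustration in plain SQL (the table and values here are hypothetical):<pre><code class=language-sql>-- Parse and plan the query once
PREPARE fetch_animal (int) AS
  SELECT name FROM animals WHERE id = $1;

-- Execute it repeatedly, skipping the parse and plan steps each time
EXECUTE fetch_animal(1);
EXECUTE fetch_animal(2);</code></pre><p>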
PgBouncer is an important tool for many of our customers scaling out their database operations, and it recently added support for prepared statements, specifically in transaction mode.<p>Note that by default PgBouncer uses session mode (i.e., <code>pool_mode</code> is set to <code>session</code>). Switching to transaction mode only requires a quick update to your PostgresCluster spec:<pre><code class=language-yaml>spec:
  proxy:
    pgBouncer:
      config:
        global:
          pool_mode: transaction
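          # "session" (the default) pins a server connection to a client for
          # the whole session; "transaction" returns the connection to the
          # pool as soon as each transaction completes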
</code></pre><p>The updates to Crunchy Postgres for Kubernetes connection pooling don't stop there! Version 5.5 also makes connecting to read replicas via PgBouncer easier than ever before. More specifically, DNS names for the replica service are now added to the TLS certificates created for a PostgresCluster. This makes it easy to leverage connection pooling for a variety of use cases involving connectivity to read replicas.<h2 id=upgrading-to-crunchy-postgres-for-kubernetes-55><a href=#upgrading-to-crunchy-postgres-for-kubernetes-55>Upgrading to Crunchy Postgres for Kubernetes 5.5</a></h2><p>Upgrading to Crunchy Postgres for Kubernetes 5.5 is typically as simple as running a single command. For instance, if you installed Crunchy Postgres for Kubernetes using the Kustomize installer available in the <a href=https://github.com/CrunchyData/postgres-operator-examples>Postgres Operator examples repository</a>, you would simply issue the following command:<pre><code>kubectl apply --server-side -k kustomize/install/default
</code></pre><p>For additional upgrade guidance (e.g. if using <a href=https://access.crunchydata.com/documentation/postgres-operator/4.6.2/installation/other/helm/>Helm</a> or various other installation methods), please see the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/upgrade>Crunchy Postgres for Kubernetes Upgrade documentation</a>.<p>Thanks for diving into some of the great new features included in Crunchy Postgres for Kubernetes 5.5. We look forward to discussing this new release and more out in the Crunchy Data <a href=https://discord.gg/ErmzUAmTvy>Discord server</a>. If you haven't done so already, we welcome you to join and continue the conversation. ]]></content:encoded>
<category><![CDATA[ Kubernetes ]]></category>
<author><![CDATA[ Andrew.L'Ecuyer@crunchydata.com (Andrew L'Ecuyer) ]]></author>
<dc:creator><![CDATA[ Andrew L'Ecuyer ]]></dc:creator>
<guid isPermalink="false">6ef5bd100503dcaf4bd1b6be93a47f210f7351396d5b3cf34a6bcea0d9e51394</guid>
<pubDate>Tue, 05 Dec 2023 08:00:00 EST</pubDate>
<dc:date>2023-12-05T13:00:00.000Z</dc:date>
<atom:updated>2023-12-05T13:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Easier Upgrades and Image Management for Postgres in Kubernetes ]]></title>
<link>https://www.crunchydata.com/blog/easier-upgrades-and-image-management-for-postgres-in-kubernetes</link>
<description><![CDATA[ We are showing off some of the newest features in Crunchy PostgreSQL for Kubernetes and related images. Learn how to upgrade Postgres minor and major versions and even pause the upgrade in process. ]]></description>
<content:encoded><![CDATA[ <p>Lukas Fittl recently posted one of his 5 minutes of Postgres videos about his experimentation with different Kubernetes Postgres Operators: <a href=https://pganalyze.com/blog/5mins-postgres-kubernetes-operator-handling-major-version-upgrades>Postgres on Kubernetes, choosing the right operator, and handling major version upgrades</a>. One passage about version updates caught my interest:<blockquote><p>The other article I found interesting was <a href=https://www.crunchydata.com/blog/easy-major-postgresql-upgrades-using-pgo-v51>this post by Andrew from the Crunchy Data team</a>, where he describes how the PGO operator now makes it easy to do major version upgrades. This is actually pretty cool. I think this shows pretty well why an operator can be a lot more sophisticated than a simple pod.</blockquote><p>While Postgres version upgrades are infrequent, they are very necessary. In order for our users and customers to be able to accomplish these vital maintenance operations we have invested considerable thought and work in how to streamline this process. Out of this has come the Minor Versions upgrade process, as well as the Major Version upgrade process that Lukas spoke about.<p>Our goal has been to make the life of someone running <a href=https://www.crunchydata.com/products/crunchy-postgresql-for-kubernetes>Postgres on Kubernetes</a> much easier. 
The upgrade process that we have built into the Operator also helps our users and customers keep their systems up to date with patches and updates without breaking a sweat.<p>We would like to highlight some capabilities within the Operator to further that goal, including the new Pause feature that was introduced in PGO v5.2.0.<h2 id=minor-version-upgrades><a href=#minor-version-upgrades>Minor Version Upgrades</a></h2><p>As shown in the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/guides/configuring-cluster-images>Crunchy Postgres for Kubernetes documentation</a>, performing a minor upgrade of PostgreSQL is as simple as swapping out images in the PostgresCluster spec.<hr><p>The Postgres image is referenced using the <code>spec.image</code> field and looks similar to the following:<pre><code class=language-yaml>spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.2-0
</code></pre><p>Diving into the tag a bit further, you will notice the <code>14.2-0</code> portion. This represents the Postgres minor version (14.2) and the patch number of the release (0). If the patch number is incremented (e.g. <code>14.2-1</code>), this means that the container is rebuilt, but there are no changes to the Postgres version. If the minor version is incremented (e.g. <code>14.3-0</code>), this means that there is a newer bug fix release of Postgres within the container.<p>To update the image, you just need to modify the <code>spec.image</code> field with the new image reference, e.g.<pre><code class=language-yaml>spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.2-1
</code></pre><hr><p>Yet, as simple as this process is, what if it could be even easier? And not only that, but what if you could simplify image configuration for all users, while also streamlining operator upgrades?<p>This post will explore how this can be achieved by using the “related images” feature within Crunchy Postgres for Kubernetes. As you will see, related images can make image management for your various PostgresClusters easier than ever before!<h3 id=defining-related-images><a href=#defining-related-images>Defining Related Images</a></h3><p>When exploring the PostgresCluster API included with Crunchy Postgres for Kubernetes (e.g. via the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/references/crd/>CRD reference</a>), you might have noticed something similar to the following for the various <code>image</code> fields defined within the spec:<table><thead><tr><th>NAME<th>TYPE<th>DESCRIPTION<th>REQUIRED<tbody><tr><td>image<td>string<td>The image name to use for PostgreSQL containers. When omitted, the value comes from an operator environment variable. For standard PostgreSQL images, the format is <code>RELATED_IMAGE_POSTGRES_{postgresVersion}</code>, e.g. <code>RELATED_IMAGE_POSTGRES_13</code>. For PostGIS enabled PostgreSQL images, the format is <code>RELATED_IMAGE_POSTGRES_{postgresVersion}_GIS_{postGISVersion}</code>, e.g. <code>RELATED_IMAGE_POSTGRES_13_GIS_3.1</code>.<td>false</table><p>The example shown above describes the <code>spec.image</code> field that is used to define what image Crunchy Postgres for Kubernetes should use to run PostgreSQL. And starting with the “REQUIRED” column all the way to the right of this table, you’ll notice that <code>image</code> is actually optional.<p>If you’re asking how that’s even possible (after all, Crunchy Postgres for Kubernetes obviously needs to know what image to use for PostgreSQL!), the description actually provides a clue. 
Specifically, by defining a <code>RELATED_IMAGE_POSTGRES_</code> environment variable in the Crunchy <dfn>Postgres Operator</dfn> (<abbr>PGO</abbr>) deployment, it is possible to tell PGO <strong>itself</strong> exactly what images to use!<p>In the case of the <code>image</code> field for PostgreSQL shown above, this means anyone creating a PostgresCluster now simply needs to define a PostgreSQL version using <code>spec.postgresVersion</code>, and PGO will handle the rest. However, note that <code>image</code> still remains available for use in the PostgresCluster spec if/when it is needed (e.g. to override the related image value).<h3 id=available-related_image_-environment-variables><a href=#available-related_image_-environment-variables>Available <code>RELATED_IMAGE_</code> Environment Variables</a></h3><p>While the above example demonstrates how related images can be used for PostgreSQL images, please note that <em>all</em> <code>image</code> fields in the PostgresCluster API have a corresponding <code>RELATED_IMAGE_</code> environment variable.<p>For instance, if you look at the <a href=https://github.com/CrunchyData/postgres-operator-examples/blob/main/kustomize/install/manager/manager.yaml>PGO deployment</a> within the v5.2.0 <a href=https://access.crunchydata.com/documentation/postgres-operator/5.2.0/installation/kustomize/>Kustomize installer</a>, you will see the following:<pre><code class=language-yaml>- name: RELATED_IMAGE_POSTGRES_13
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-13.8-1'
- name: RELATED_IMAGE_POSTGRES_13_GIS_3.0
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-13.8-3.0-1'
- name: RELATED_IMAGE_POSTGRES_13_GIS_3.1
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-13.8-3.1-1'
- name: RELATED_IMAGE_POSTGRES_14
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.5-1'
- name: RELATED_IMAGE_POSTGRES_14_GIS_3.1
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-14.5-3.1-1'
- name: RELATED_IMAGE_POSTGRES_14_GIS_3.2
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-14.5-3.2-1'
- name: RELATED_IMAGE_PGADMIN
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-4.30-4'
- name: RELATED_IMAGE_PGBACKREST
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.40-1'
- name: RELATED_IMAGE_PGBOUNCER
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.17-1'
- name: RELATED_IMAGE_PGEXPORTER
  value: 'registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-5.2.0-0'
</code></pre><h2 id=smooth-updates--upgrades-using-related-images><a href=#smooth-updates--upgrades-using-related-images>Smooth Updates &#38; Upgrades Using Related Images</a></h2><p>Let’s now get back to the minor PostgreSQL upgrade use case discussed in the introduction. As you might recall, a minor PostgreSQL upgrade typically involves updating the <code>spec.image</code> field in the PostgresCluster spec.<p>However, with the proper <code>RELATED_IMAGE_</code> environment variable defined within the PGO deployment (specifically for the latest patch version of PostgreSQL available), this is no longer needed! For instance, if <code>RELATED_IMAGE_POSTGRES_14</code> is configured for a PostgreSQL 14.4 image, it can be updated to v14.5 as follows:<pre><code class=language-bash>kubectl set env -n postgres-operator deployment/pgo \
    RELATED_IMAGE_POSTGRES_14=registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-14.5-0
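# Note: "kubectl set env" updates the Deployment's pod template, which itself
# triggers a rolling restart of the operator so the new value takes effect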
</code></pre><p>And as soon as the above patch is applied and the PGO deployment is restarted, PGO will safely roll out the minor upgrade to all PostgresClusters using PostgreSQL 14. This means PGO administrators can now easily ensure <em>all</em> PostgreSQL clusters using v14 are updated to the latest patched version of PostgreSQL (which includes the latest bug fixes, security fixes, etc.).<p>And while this demonstrates a clear benefit that related images bring to the PostgreSQL minor upgrade process, there are other benefits as well. To provide one more great example, you might have noticed the following in the PGO upgrade documentation when upgrading to PGO v5.1.0+:<hr><p>Relatedly, if you are instead using the RELATED_IMAGE environment variables to set the image values, you would instead check and update these as needed before redeploying PGO.<hr><p>What this is effectively saying is that by using related images, the prerequisite to manually update all <code>crunchy-postgres</code> and <code>crunchy-pgbackrest</code> images <em>prior</em> to the PGO upgrade is no longer even applicable. Simply upgrade PGO, and since all <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/installation/>installers</a> come pre-defined with <code>RELATED_IMAGE_</code> environment variables that include the latest <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/references/components/>component images</a> available, PGO will simply handle the rest!<p>There are many scenarios in which related images are useful. Those managing PostgresClusters can ensure the latest patches, fixes, and security updates are applied and managed via a single, central configuration. 
And those who are provisioning PostgresClusters no longer need to think about what image tags to use.<h2 id=controlling-updates-using-pause><a href=#controlling-updates-using-pause>Controlling Updates Using “Pause”</a></h2><p>Since the <code>RELATED_IMAGE_</code> environment variables in the PGO deployment itself control the various images utilized for all PostgresClusters, this does mean that a <code>RELATED_IMAGE_</code> update could cause many clusters within your environment to roll out the change concurrently, which might not always be desirable. Fortunately, the Crunchy Postgres for Kubernetes <a href=https://access.crunchydata.com/documentation/postgres-operator/5.2.0/releases/5.2.0/>v5.2.0 release</a> provided a way to control rollouts for individual PostgresClusters in the form of a new <a href=https://access.crunchydata.com/documentation/postgres-operator/5.2.0/tutorial/administrative-tasks/#pausing-reconciliation-and-rollout>“pause” capability</a>.<p>Specifically, by setting the <code>paused</code> field in the PostgresCluster spec to <code>true</code>, it is possible to tell PGO to temporarily stand down from reconciling any changes to the PostgresCluster:<pre><code class=language-bash>kubectl patch postgrescluster/hippo -n postgres-operator --type merge \
  --patch '{"spec":{"paused": true}}'
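
# Later, resume reconciliation (any pending changes are then rolled out):
kubectl patch postgrescluster/hippo -n postgres-operator --type merge \
  --patch '{"spec":{"paused": false}}'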
</code></pre><p>This means a new <code>image</code> value set via a <code>RELATED_IMAGE_</code> environment variable will not be applied and rolled out until <code>paused</code> is either removed or set to <code>false</code>. Using this capability, those administering the PGO deployment can control the images used via related images, while those managing the PostgresClusters can control exactly when those updates are applied.<p>For more information on pausing PostgresCluster reconciliation and rollouts, please see the <a href=https://access.crunchydata.com/documentation/postgres-operator/5.2.0/tutorial/administrative-tasks/>Administrative Tasks</a> section of the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/>latest PGO documentation</a>.<h2 id=conclusion><a href=#conclusion>Conclusion</a></h2><p>Lukas noted that,<blockquote><p>The Crunchy Operator clearly has a lot of sophistication built into it, which would give me confidence if I were to deploy it in production.</blockquote><p>We are thrilled to see our hard work around streamlining upgrades out in the wild and are excited for the future of this feature. If you have any questions about our Operator and getting it up and running for yourself, please feel free to <a href=https://www.crunchydata.com/contact>reach out</a>. ]]></content:encoded>
<category><![CDATA[ Kubernetes ]]></category>
<author><![CDATA[ Andrew.L'Ecuyer@crunchydata.com (Andrew L'Ecuyer) ]]></author>
<dc:creator><![CDATA[ Andrew L'Ecuyer ]]></dc:creator>
<guid isPermalink="false">6e0f6f0aac5a87d69cbf01bce395da0deed94b8b5c17f3b19c2cb602975188b3</guid>
<pubDate>Wed, 26 Oct 2022 11:00:00 EDT</pubDate>
<dc:date>2022-10-26T15:00:00.000Z</dc:date>
<atom:updated>2022-10-26T15:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Effective PostgreSQL Cluster Configuration & Management Using PGO v5.1 ]]></title>
<link>https://www.crunchydata.com/blog/effective-postgres-cluster-config-and-management-using-pgo-v5.1</link>
<description><![CDATA[ A review of PGO 5.1 features including rolling database restarts, pod disruption budgets, and manual switchovers/failovers. ]]></description>
<content:encoded><![CDATA[ <p>Modern, production-ready Postgres solutions require quite a bit of sophistication and automation. Changes need to be applied in a uniform and safe way. DevOps and SRE teams need to be in control of system updates while limiting disruption to their users.<p>With the release of <a href=https://access.crunchydata.com/documentation/postgres-operator/v5/>PGO v5.1</a>, we are excited to announce enhancements in each of these areas. Not only does PGO v5.1 now automatically roll out all PostgreSQL configuration changes, but it allows you to protect your running databases against outages due to voluntary disruptions. Additionally, manual switchover and failover support means you can fully control your current cluster topology via your PostgresCluster spec.<p>PGO v5.1 delivers on what it means to be a production-ready PostgreSQL solution by providing the key features needed to configure, manage and protect your production PostgreSQL databases. Let’s take a look at what each of these new features looks like in action.<h2 id=rolling-database-restarts><a href=#rolling-database-restarts>Rolling Database Restarts</a></h2><p>Prior to PGO v5.1, if you changed a PostgreSQL configuration setting that required a database restart, then it was necessary to <a href=https://access.crunchydata.com/documentation/postgres-operator/v5/tutorial/administrative-tasks/>manually trigger a rolling restart</a> to apply that change. As of PGO v5.1, however, this step is no longer required. PGO will now detect and safely roll out configuration changes requiring a database restart.<p>To see this in action, update your PostgresCluster spec with a configuration change that will require a database restart. For this example, we will update <code>shared_buffers</code> to <code>256MB</code>:<pre><code class=language-shell>$ kubectl patch postgrescluster hippo --type=merge -p '{"spec":{"patroni":{"dynamicConfiguration":
{"postgresql":{"parameters":{"shared_buffers": "256MB"}}}}}}'
postgrescluster.postgres-operator.crunchydata.com/hippo patched
</code></pre><p>At this point, not only will PGO apply the desired configuration change, but it will also detect that a database restart is required. It will then restart all PostgreSQL instances in the cluster to automatically rollout the change.<p>Looking at the logs for any PostgreSQL instance in the cluster, you should see the following:<pre><code class=language-shell>$ kubectl logs hippo-instance1-jl7v-0 -c database | grep shared_buffers
2022-04-03 18:23:31,347 INFO: Changed shared_buffers from 16384 to 256MB (restart might be required)
</code></pre><p>And by looking at the PostgreSQL logs you can see that the database was restarted:<pre><code class=language-shell>$ kubectl exec hippo-instance1-jl7v-0 -c database -- bash -c 'cat /pgdata/pg13/log/*.log' | tail -n 5
...
2022-04-03 18:33:09.234 UTC [3188] LOG:  database system was shut down at 2022-04-03 18:33:08 UTC
2022-04-03 18:33:09.276 UTC [3181] LOG:  database system is ready to accept connections
</code></pre><p>You can also see that the setting has been applied via <code>psql</code>:<pre><code class=language-shell>$ kubectl exec hippo-instance1-jl7v-0 -c database -- psql -t -c 'show shared_buffers;'
 256MB
</code></pre><p>This therefore demonstrates how PGO v5.1 makes PostgreSQL configuration easier than ever before.<h2 id=pod-disruption-budgets><a href=#pod-disruption-budgets>Pod Disruption Budgets</a></h2><p>Voluntary disruptions are often a reality in any Kubernetes environment. For instance, someone might drain a node containing a running PostgreSQL database. And when this occurs, it is important that all production databases be protected from an unexpected outage to the fullest extent possible.<p>We are therefore excited to announce support for protection against voluntary cluster disruptions in PGO v5.1. By simply defining more than one replica for an instance set or a PgBouncer deployment, PGO will now automatically create a <dfn>Pod Disruption Budget</dfn> (<abbr>PDB</abbr>) that will ensure at least one Pod for that instance set or PgBouncer deployment remains available when a voluntary disruption occurs. For instance, you can see this in action by defining two replicas for a PgBouncer deployment:<pre><code class=language-shell>$ kubectl patch postgrescluster hippo --type='json' \
-p='[{"op":"replace","path":"/spec/proxy/pgBouncer/replicas","value":2}]'
postgrescluster.postgres-operator.crunchydata.com/hippo patched
</code></pre><pre><code class=language-shell>$ kubectl get pdb
NAME              MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
hippo-pgbouncer   1               N/A               1                     4s
</code></pre><p>You can also set the <code>minAvailable</code> field for any instance set or PgBouncer deployment. This will cause PGO to reconcile a PDB that enforces that the defined number of Pods is always available.<p>For example, the following will ensure at least two Pods are available for an instance set with three replicas:<pre><code class=language-yaml>spec:
  instances:
    - name: instance1
      replicas: 3
      minAvailable: 2
</code></pre><p>Once the above spec is applied, PGO will reconcile a corresponding PDB:<pre><code class=language-shell>$ kubectl get pdb
NAME                  MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
hippo-set-instance1   2               N/A               1                     110s
</code></pre><p>For more information about voluntary disruptions and PDBs, please see the following:<ul><li><a href=https://access.crunchydata.com/documentation/postgres-operator/v5/architecture/high-availability/>PGO High Availability Documentation</a><li><a href=https://kubernetes.io/docs/concepts/workloads/pods/disruptions/>Kubernetes Disruptions Guide</a></ul><h2 id=manual-switchovers--failovers><a href=#manual-switchovers--failovers>Manual Switchovers &#38 Failovers</a></h2><p>While manual switchovers and/or failover are not typically required during normal operations (since the HA system will automatically failover to a replica when needed), there are certain circumstances where this functionality can be beneficial. For instance, you might want to move the primary to an instance that is running on a specific node prior to performing maintenance. Or, you might simply want to test various failover scenarios in your environment. Whatever the use case might be, PGO now has you covered.<p>With PGO v5.1, it is now possible to manually switchover and failover for your PostgreSQL clusters, all via the PostgresCluster spec. For instance, let's say you want to switchover to an instance called hippo-instance1-t97d within your cluster. First, you would add the following to your PostgresCluster spec to define the instance that you want to switchover to:<pre><code class=language-yaml>spec:
  patroni:
    switchover:
      enabled: true
      targetInstance: hippo-instance1-t97d
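      # per the PGO documentation, targetInstance is optional; when omitted,
      # a healthy candidate replica is selected automatically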
</code></pre><p>Then, annotate PostgresCluster to trigger the actual switchover:<pre><code class=language-shell>$ kubectl annotate -n postgres-operator postgrescluster hippo \
  postgres-operator.crunchydata.com/trigger-switchover="$(date)"
postgrescluster.postgres-operator.crunchydata.com/hippo annotated
</code></pre><p>Once PGO detects the annotation, the switchover will occur. This means instance <code>hippo-instance1-t97d</code> will now be promoted to primary. You can verify this by looking for the Pod in the cluster with a value of <code>master</code> for the <code>postgres-operator.crunchydata.com/role</code> label.<pre><code class=language-shell>$ kubectl get pods \
--selector=postgres-operator.crunchydata.com/role=master,postgres-operator.crunchydata.com/cluster=hippo
NAME                     READY   STATUS    RESTARTS   AGE
hippo-instance1-t97d-0   4/4     Running   0          26h
</code></pre><p>As you can see, Pod <code>hippo-instance1-t97d-0</code> now has the master role, which means the switchover was successful.<h2 id=next-steps><a href=#next-steps>Next Steps</a></h2><p>Thank you for taking the time to explore a few of the new features included in PGO v5.1! There is a lot more to get excited about in this release, so I encourage you to see the <a href=https://access.crunchydata.com/documentation/postgres-operator/v5/>PGO documentation</a> for more details. For information about upgrading to PGO v5.1, please see the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/upgrade/>Upgrade Documentation</a>. If you are installing PGO for the first time, please be sure to check out the <a href=https://access.crunchydata.com/documentation/postgres-operator/latest/quickstart/>quickstart</a>. ]]></content:encoded>
<category><![CDATA[ Kubernetes ]]></category>
<author><![CDATA[ Andrew.L'Ecuyer@crunchydata.com (Andrew L'Ecuyer) ]]></author>
<dc:creator><![CDATA[ Andrew L'Ecuyer ]]></dc:creator>
<guid isPermalink="false">cfd6b6c362ea4a0eb77c4e728ecb19d9382eaf45304ccd7a4aa5cacbc38eb87b</guid>
<pubDate>Wed, 01 Jun 2022 16:00:00 EDT</pubDate>
<dc:date>2022-06-01T20:00:00.000Z</dc:date>
<atom:updated>2022-06-01T20:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Seamless pgAdmin 4 Deployments Using PGO v5.1 ]]></title>
<link>https://www.crunchydata.com/blog/seamless-pgadmin-4-deployments-using-pgo-v5.1</link>
<description><![CDATA[ Need a GUI for your Postgres cluster? PGO 5.1 introduces a new feature to let you create a pgAdmin 4 pod alongside your other database services managed by the Kubernetes Operator. ]]></description>
<content:encoded><![CDATA[ <p>Recently, there has been a bit of a debate here at Crunchy Data around SQL editors. While some members of the Crunchy Team such as Elizabeth (<a href=https://mobile.twitter.com/e_g_christensen>@e_g_christensen</a>) prefer pgAdmin 4, others such as Craig (<a href=https://twitter.com/craigkerstiens>@craigkerstiens</a>) prefer using <a href=https://www.craigkerstiens.com/2013/02/13/how-i-work-with-postgres/>psql</a>. And one of the great things about the PostgreSQL ecosystem is that there is no right answer to this debate! Instead, you have choice and flexibility when it comes to finding and using the tools that meet your specific database development and/or management needs.<p>With the release of PGO v5.1 we're excited to announce support for pgAdmin 4. We're bringing yet another choice for developing and managing your PostgreSQL databases to PGO. So for those that have been wanting support for a GUI-based SQL editor, we've now made it dead simple to add one to any PGO-provisioned PostgreSQL cluster. By defining the following in your PostgresCluster spec, not only will PGO provision a pgAdmin 4 deployment and a PostgreSQL user account named <code>rhino</code>, it will also automatically create a pgAdmin 4 user account called <code>rhino@pgo</code>:<pre><code class=language-yaml>spec:
  users:
  - name: rhino
  userInterface:
    pgAdmin:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-4.30-0
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
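      # A Service for pgAdmin can also be configured here via the
      # spec.userInterface.pgAdmin.service.type field, for example:
      # service:
      #   type: LoadBalancer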
</code></pre><p>For instance, with the above spec applied, PGO will reconcile a Deployment to run pgAdmin 4:<pre><code class=language-shell>$ kubectl get pods --selector postgres-operator.crunchydata.com/role=pgadmin
NAME              READY   STATUS    RESTARTS   AGE
hippo-pgadmin-0   1/1     Running   0          48s
</code></pre><p>Additionally, the Secret containing the credentials for <code>rhino</code> can be used to access both the PostgreSQL database itself and pgAdmin 4. For instance, the password shown below for the <code>hippo-pguser-rhino</code> Secret (which is base64-encoded and must be decoded) will allow a user named <code>rhino</code> to access both:<pre><code class=language-shell>$ kubectl get secret hippo-pguser-rhino -o yaml | grep password
password: bkBBSDVSN15sRURGZGo4LXNTdGd0W30v
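# The value is base64-encoded; it can be decoded with the standard base64
# utility (shown here for the example value above):
$ echo 'bkBBSDVSN15sRURGZGo4LXNTdGd0W30v' | base64 -d
n@AH5R7^lEDFdj8-sStgt[}/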
</code></pre><p>To connect to pgAdmin 4, you can port-forward to the pgAdmin 4 Service and then navigate to <a href=http://localhost:5050>http://localhost:5050</a> in a browser (a <a href=https://access.crunchydata.com/documentation/postgres-operator/v5/references/crd/#postgresclusterspecuserinterfacepgadminservice>spec.userInterface.pgAdmin.service.type</a> setting is also available to configure the Service according to your specific needs):<pre><code class=language-shell>$ kubectl port-forward svc/hippo-pgadmin 5050:5050
Forwarding from 127.0.0.1:5050 -> 5050
</code></pre><p><img alt="pgAdmin login screen" loading=lazy src=https://imagedelivery.net/lPM0ntuwQfh8VQgJRu0mFg/6afb0123-2c30-4d7b-e8c3-4ff1e3fa9500/public><p>Please note that each PostgresCluster is assigned its own pgAdmin 4 deployment. Therefore, to deploy pgAdmin 4 for any other PostgreSQL clusters in your environment, simply update the PostgresCluster spec in the same way as the <code>hippo</code> spec above.<p>As you can see, not only does PGO v5.1 now provide a powerful PostgreSQL GUI in the form of pgAdmin 4, it also allows you to easily manage user accounts across both PostgreSQL and pgAdmin 4.<h2 id=managing-postgresql-your-way><a href=#managing-postgresql-your-way>Managing PostgreSQL Your Way</a></h2><p>With a simple update to your PostgresCluster spec you can have a pgAdmin 4 deployment fully up-and-running in no time. Not only that, but the deployment seamlessly integrates with the robust user management capabilities already included in PGO. In fact, when we recently showed off this new feature, many of the reactions were: "That's it? I don't have to do anything else to get it running?".<p>We are therefore excited to provide yet another choice for developing and managing your PostgreSQL databases in PGO v5.1. And not to be one-upped by Crunchy Bridge, which recently rolled out support for pgAdmin 4 as part of the <a href=https://docs.crunchybridge.com/container-apps/>Postgres Container Apps</a> feature, we felt that it was important to also bring this powerful capability to PGO!<p>Thank you for taking the time to see just how easy PGO makes configuring and deploying pgAdmin 4. For a full list of all features and changes included in PGO v5.1, please see the <a href=https://access.crunchydata.com/documentation/postgres-operator/v5/releases/5.1.0/>full release notes</a>. If you are just getting started with PGO for the first time, please see the <a href=https://access.crunchydata.com/documentation/postgres-operator/v5/quickstart/>quickstart</a>.
]]></content:encoded>
<category><![CDATA[ Kubernetes ]]></category>
<author><![CDATA[ Andrew.L'Ecuyer@crunchydata.com (Andrew L'Ecuyer) ]]></author>
<dc:creator><![CDATA[ Andrew L'Ecuyer ]]></dc:creator>
<guid isPermalink="false">76344ea8c6b3f8a22d927e6da139d2dfebbc04009f83b76efac51d5595b343f3</guid>
<pubDate>Thu, 05 May 2022 11:00:00 EDT</pubDate>
<dc:date>2022-05-05T15:00:00.000Z</dc:date>
<atom:updated>2022-05-05T15:00:00.000Z</atom:updated></item>
<item><title><![CDATA[ Easy Postgres Major Version Upgrades Using PGO v5.1 ]]></title>
<link>https://www.crunchydata.com/blog/easy-major-postgresql-upgrades-using-pgo-v51</link>
<description><![CDATA[ Today we’re excited to introduce support for major PostgreSQL upgrades in PGO v5.1. Using the new PGUpgrade API, you can now seamlessly upgrade your clusters across major versions of PostgreSQL. ]]></description>
<content:encoded><![CDATA[ <p>Whether upgrading PGO itself, or upgrading the PostgreSQL databases PGO manages, seamless upgrades should be a core feature for any cloud or Kubernetes-based database solution. As a result, one of the goals when we set out to build version five of PGO, the Postgres Operator from Crunchy Data, was to provide a seamless and user-friendly upgrade experience.<p>Today we’re excited to introduce support for major version PostgreSQL upgrades in PGO v5.1. Using the new PGUpgrade API, you can now seamlessly upgrade your clusters across major versions of PostgreSQL. This means upgrading Postgres is now as easy as submitting a simple custom resource, with PGO handling everything else.<p>Please join me in walking through an example of this powerful new capability, and see just how easy PGO makes the major version upgrade process!<h2 id=create-a-cluster><a href=#create-a-cluster>Create a Cluster</a></h2><p>The first step is to create a PostgresCluster that can then be upgraded.  For this example we will first deploy a PostgreSQL 13 cluster, which we will then upgrade to PostgreSQL 14. A simple PostgreSQL 13 cluster can be created as follows:<pre><code class=language-yaml>$ kubectl create -f - &#60&#60EOF
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-13.6-1
  postgresVersion: 13
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - "ReadWriteOnce"
            resources:
              requests:
                storage: 1Gi
EOF
postgrescluster.postgres-operator.crunchydata.com/hippo created
</code></pre><p>Once the cluster is up and running, add some data to the database. After the major version upgrade completes, we will confirm this data is still present.<p>First, find the name of the primary PostgreSQL instance:<pre><code class=language-shell>$ kubectl get pod -o name -l \
postgres-operator.crunchydata.com/role=master,postgres-operator.crunchydata.com/cluster=hippo
pod/hippo-instance1-5clq-0
</code></pre><p>Then, insert some data using psql.<pre><code class=language-shell>$ kubectl exec -it -c database \
    pod/hippo-instance1-5clq-0 -- psql
psql (13.6)
</code></pre><pre><code class=language-pgsql>CREATE TABLE upgradedata(id int);
CREATE TABLE
INSERT INTO upgradedata(id) SELECT id FROM generate_series(1, 10000) AS id;
INSERT 0 10000
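-- Optionally capture a checksum to re-verify after the upgrade completes
-- (the sum of the integers 1 through 10000 is 50005000):
SELECT count(*), sum(id) FROM upgradedata;
 count |   sum
-------+----------
 10000 | 50005000
(1 row)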
</code></pre><p><strong><em>Note:</em></strong> <em>It is recommended that a backup always be created prior to performing a major PG upgrade.  Please see the</em> <a href=https://access.crunchydata.com/documentation/postgres-operator/v5/tutorial/backups/><em>PGO backup tutorial</em></a> <em>for more information.</em><h2 id=upgrade-the-cluster><a href=#upgrade-the-cluster>Upgrade the Cluster</a></h2><p>With the cluster up and running and some test data inserted, we can now perform the major version PostgreSQL upgrade!  As described above, this will be done using the new PGUpgrade API.<p>To initiate the upgrade process, proceed with creating a PGUpgrade custom resource called “hippo-upgrade”:<pre><code class=language-yaml>$ kubectl create -f - &#60&#60EOF
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGUpgrade
metadata:
  name: hippo-upgrade
spec:
  postgresClusterName: hippo
  fromPostgresVersion: 13
  toPostgresVersion: 14
  image: registry.developers.crunchydata.com/crunchydata/crunchy-upgrade:ubi8-5.1.0-0
EOF
pgupgrade.postgres-operator.crunchydata.com/hippo-upgrade created
</code></pre><p>As you can see, initiating a major PostgreSQL upgrade is as simple as providing the following information in a PGUpgrade spec:<ul><li>The name of the PostgresCluster we want to upgrade<li>The major versions of PostgreSQL we are upgrading from and to<li>The crunchy-upgrade image to use for the upgrade</ul><p>However, at this point you will notice that even though a PGUpgrade custom resource has been created, nothing is occurring within the running cluster.  By inspecting the conditions of the “hippo-upgrade” custom resource, you can see the specific reason why:<pre><code class=language-yaml>$ kubectl describe pgupgrade hippo-upgrade
Name:         hippo-upgrade
Namespace:    postgres-operator
…
Status:
  Conditions:
    Last Transition Time:  2022-03-16T01:37:32Z
    Message:               PostgresCluster instances still running
    Observed Generation:   1
    Reason:                PGClusterNotShutdown
    Status:                False
    Type:                  Progressing
Events:                    &#60none>
</code></pre><p>As the above condition indicates, the upgrade is not progressing because the “hippo” cluster we are attempting to upgrade has not yet been shut down.  This is one way in which PGO allows those managing a cluster to retain complete control over the upgrade process and prevent unexpected outages, even once a PGUpgrade resource for that cluster has been created.<p>At this point we can therefore proceed with shutting down the “hippo” cluster:<pre><code class=language-shell>$ kubectl patch postgrescluster hippo --type=merge \
    -p '{"spec":{"shutdown":true}}'
postgrescluster.postgres-operator.crunchydata.com/hippo patched
</code></pre><p>However, even once the “hippo” cluster has been shut down, the conditions of the “hippo-upgrade” custom resource will still indicate that the upgrade is unable to progress:<pre><code class=language-yaml>$ kubectl describe pgupgrade hippo-upgrade
Name:         hippo-upgrade
Namespace:    postgres-operator
…
Status:
  Conditions:
    Last Transition Time:  2022-03-16T01:37:32Z
    Message:               PostgresCluster hippo lacks annotation for upgrade hippo-upgrade
    Observed Generation:   1
    Reason:                PGClusterMissingRequiredAnnotation
    Status:                False
    Type:                  Progressing
Events:                    &#60none>
</code></pre><p>In this instance the condition is telling us that the “hippo” PostgresCluster is missing the annotation that is needed to initiate the upgrade process.  This is yet another way in which PGO allows the upgrade process to be controlled by those managing a PostgreSQL cluster, even if that cluster is currently shut down when the PGUpgrade custom resource is created.<p>Therefore, as a final step, add the required annotation to the “hippo” PostgresCluster.  The value of the annotation should match the name of the PGUpgrade custom resource, i.e., “hippo-upgrade” for this specific example:<pre><code class=language-shell>$ kubectl annotate postgrescluster hippo  \
    postgres-operator.crunchydata.com/allow-upgrade=hippo-upgrade
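# Once the annotation is applied, the upgrade Job that PGO creates can be
# watched until it completes:
$ kubectl get jobs -n postgres-operator --watch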
</code></pre><p>At this point a Job will be run to upgrade the “hippo” cluster to PostgreSQL v14.  And once the Job completes, we can once again check the conditions for the “hippo-upgrade” PGUpgrade custom resource to determine the status of the upgrade.<pre><code class=language-yaml>$ kubectl describe pgupgrade hippo-upgrade
Name:         hippo-upgrade
Namespace:    postgres-operator
…
Status:
  Conditions:
    Last Transition Time:  2022-03-16T02:17:26Z
    Message:               PostgresCluster hippo is running version 14
    Observed Generation:   1
    Reason:                PGUpgradeCompleted
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2022-03-16T02:19:25Z
    Message:               PostgresCluster hippo is ready to complete upgrade to version 14
    Observed Generation:   1
    Reason:                PGUpgradeSucceeded
    Status:                True
    Type:                  Succeeded
Events:                    &#60none>
</code></pre><p>As you can see above, the <code>Succeeded</code> condition is now true, indicating that the upgrade completed successfully!  We can now start the cluster back up as follows:<pre><code class=language-shell>$ kubectl patch postgrescluster hippo --type "json" -p '[
{"op":"replace","path":"/spec/shutdown","value":false},
{"op":"replace","path":"/spec/postgresVersion","value":14},
{"op":"replace","path":"/spec/image","value":"registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.2-1"}]'
postgrescluster.postgres-operator.crunchydata.com/hippo patched
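# Note: once the upgrade has succeeded, the PGUpgrade resource and the
# allow-upgrade annotation are no longer needed; they can optionally be
# cleaned up (the trailing "-" removes the annotation):
$ kubectl delete pgupgrade hippo-upgrade
$ kubectl annotate postgrescluster hippo \
    postgres-operator.crunchydata.com/allow-upgrade-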
</code></pre><p>Once the cluster is back up and running, we can verify that the data we added above survived the upgrade.  Therefore, once again find the name of the primary Postgres instance:<pre><code class=language-shell>$ kubectl get pod -o name -l \
postgres-operator.crunchydata.com/role=master,postgres-operator.crunchydata.com/cluster=hippo
pod/hippo-instance1-79ff-0
</code></pre><p>Then, verify that the data is present.<pre><code class=language-shell>$ kubectl exec -it -c database \
    pod/hippo-instance1-79ff-0 -- psql
psql (14.2)
Type "help" for help.
</code></pre><pre><code class=language-pgsql>SELECT * FROM upgradedata;
 id    
-------
    1
    2
    3
    4
    5
…
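-- A full count confirms that every row inserted before the upgrade is present:
SELECT count(*) FROM upgradedata;
 count
-------
 10000
(1 row)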
</code></pre><p>And finally, run the following query to check the current version of PostgreSQL:<pre><code class=language-pgsql>SHOW server_version;
 server_version
----------------
 14.2
(1 row)
</code></pre><p>As the above clearly indicates, the cluster is now on PostgreSQL version 14! This confirms that the PGUpgrade API successfully orchestrated and completed a major PostgreSQL upgrade for the “hippo” cluster.<h2 id=major-postgresql-upgrades-made-easy><a href=#major-postgresql-upgrades-made-easy>Major PostgreSQL Upgrades Made Easy</a></h2><p>As the example above demonstrates, PGO now makes upgrading across major versions of PostgreSQL easier than ever before.  For a full list of all the great features included in PGO v5.1, please see the <a href=https://access.crunchydata.com/documentation/postgres-operator/v5/releases/5.1.0/>full release notes</a>. Additionally, if you are not a Crunchy customer but would like to learn more about major PostgreSQL upgrade support in PGO, please feel free to <a href=mailto:info@crunchydata.com>contact us</a> to help answer your questions. ]]></content:encoded>
<category><![CDATA[ Kubernetes ]]></category>
<author><![CDATA[ Andrew.L'Ecuyer@crunchydata.com (Andrew L'Ecuyer) ]]></author>
<dc:creator><![CDATA[ Andrew L'Ecuyer ]]></dc:creator>
<guid isPermalink="false">f65e98efd07991d1b926e0cc4fae8bc6959ed15711a136647ef7a089e7901bfd</guid>
<pubDate>Tue, 26 Apr 2022 05:00:00 EDT</pubDate>
<dc:date>2022-04-26T09:00:00.000Z</dc:date>
<atom:updated>2022-04-26T09:00:00.000Z</atom:updated></item></channel></rss>