
Release notes

This page contains descriptions of Nebari releases.


Release 2024.12.1 - December 13, 2024

NOTE: Support for DigitalOcean has been removed in this release. If you plan to deploy Nebari on DigitalOcean, you first need to independently create a Kubernetes cluster and then use the existing deployment option.
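For reference, a minimal sketch of that workflow, assuming the doctl CLI and the existing-cluster deployment option (the cluster name, region, and node size below are placeholders, and the provider: existing key is our reading of the existing deployment docs):

    # Provision a DigitalOcean Kubernetes cluster yourself (values are illustrative)
    doctl kubernetes cluster create nebari-cluster \
      --region nyc3 \
      --node-pool "name=general;size=s-4vcpu-8gb;count=3"

    # Fetch credentials so kubectl (and Nebari) can reach the cluster
    doctl kubernetes cluster kubeconfig save nebari-cluster

    # Point nebari-config.yaml at the pre-existing cluster (provider: existing),
    # then deploy as usual
    nebari deploy -c nebari-config.yaml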

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.11.1...2024.12.1

Release 2024.11.1 - November 21, 2024 (Hotfix Release)

NOTE: This hotfix addresses several major bugs identified in the 2024.9.1 release. For a detailed overview, please refer to the related discussion at #2798. Users should upgrade directly from 2024.7.1 to 2024.11.1.

What's Changed

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.9.1...2024.11.1

Release 2024.9.1 - September 27, 2024 (Broken Release)

WARNING: This release was later found to have unresolved issues described further in issue 2798. We have marked this release as broken on conda-forge and yanked it on PyPI. One of the bugs prevents any upgrade from 2024.9.1 to 2024.11.1. Users should skip this release entirely and upgrade directly from 2024.7.1 to 2024.11.1.

WARNING: This release changes how group directories are mounted in JupyterLab pods: only groups with specific permissions will have their directories mounted. If you rely on custom group mounts, we strongly recommend running nebari upgrade before updating. This will prompt you to confirm how Nebari should handle your groups—either keep them mounted or allow unmounting. No data will be lost, and you can reverse this anytime.

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.7.1...2024.9.1

Release 2024.7.1 - August 8, 2024

NOTE: Support for DigitalOcean deployments via CLI commands and the related Terraform modules is being deprecated. Although DigitalOcean will no longer be directly supported in future releases, you can still deploy to DigitalOcean infrastructure using the existing deployment option.

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.6.1...2024.7.1

Release 2024.6.1 - June 26, 2024

NOTE: This release includes an upgrade to the kube-prometheus-stack Helm chart, resulting in a newer version of Grafana. When upgrading your Nebari cluster, you will be prompted to have Nebari update some CRDs and delete a DaemonSet on your behalf. If you prefer, you can also run the commands yourself, which will be shown to you. If you have any custom dashboards, you'll also need to back them up by exporting them as JSON, so you can import them after upgrading.
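If you need to back up custom dashboards, a minimal sketch using Grafana's HTTP API is shown below; GRAFANA_URL and GRAFANA_TOKEN are placeholders for your Grafana endpoint and an API token with dashboard read access, and jq is assumed to be available:

    # Export every dashboard as JSON before upgrading
    GRAFANA_URL="https://<your-nebari-domain>/monitoring"   # placeholder
    GRAFANA_TOKEN="<api-token>"                             # placeholder

    mkdir -p grafana-backup
    for uid in $(curl -s -H "Authorization: Bearer $GRAFANA_TOKEN" \
        "$GRAFANA_URL/api/search?type=dash-db" | jq -r '.[].uid'); do
      curl -s -H "Authorization: Bearer $GRAFANA_TOKEN" \
        "$GRAFANA_URL/api/dashboards/uid/$uid" > "grafana-backup/$uid.json"
    done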

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.5.1...2024.6.1

Release 2024.5.1 - May 13, 2024

What's Changed

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.4.1...2024.5.1

Release 2024.4.1 - April 20, 2024

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.3.3...2024.4.1

Release 2024.3.3 - March 27, 2024

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.3.2...2024.3.3

Release 2024.3.2 - March 14, 2024

What's Changed

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.3.1...2024.3.2

Release 2024.3.1 - March 11, 2024

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2024.1.1...2024.3.1

Release 2024.1.1 - January 17, 2024

Feature changes and enhancements

  • Upgrade conda-store to latest version 2024.1.1
  • Add Jhub-Apps
  • Add Jupyterlab-pioneer
  • Minor improvements and bug fixes

Breaking Changes

WARNING: jupyterlab-videochat, retrolab, jupyter-tensorboard, jupyterlab-conda-store and jupyter-nvdashboard are no longer supported in this Nebari version and will be uninstalled.

What's Changed

Full Changelog: https://github.com/nebari-dev/nebari/compare/2023.12.1...2024.1.1

Release 2023.12.1 - December 15, 2023

Feature changes and enhancements

  • Upgrade conda-store to latest version 2023.10.1
  • Minor improvements and bug fixes

Breaking Changes

WARNING: Prefect, ClearML and kbatch were removed in this release and upgrading to this version will result in all of them being uninstalled.

What's Changed

Full Changelog: https://github.com/nebari-dev/nebari/compare/2023.11.1...2023.12.1

Release 2023.11.1 - November 15, 2023

Feature changes and enhancements

  • Upgrade conda-store to latest version 2023.10.1
  • Minor improvements and bug fixes

Breaking Changes

WARNING: Prefect, ClearML and kbatch were removed in this release and upgrading to this version will result in all of them being uninstalled.

What's Changed

Full Changelog: https://github.com/nebari-dev/nebari/compare/2023.10.1...2023.11.1

Release 2023.10.1 - October 20, 2023

This release includes a major refactor that introduces a Pluggy-based extension mechanism, which allows developers to build new stages. This is the initial implementation of the extension mechanism, and we expect the interface to be refined over time. If you're interested in developing your own stage plugin, please refer to our documentation. When you're ready to upgrade, please download this version from either PyPI or conda-forge, run the nebari upgrade -c nebari-config.yaml command, and follow the instructions.
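A minimal sketch of that upgrade flow (the pip version pin and the final deploy step are assumptions; a conda install from conda-forge works equally well):

    # Install the new version, update the config, then redeploy
    pip install --upgrade nebari==2023.10.1
    nebari upgrade -c nebari-config.yaml
    nebari deploy -c nebari-config.yaml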

WARNING: CDS Dashboards was removed in this release and upgrading to this version will result in CDS Dashboards being uninstalled. A replacement dashboarding solution is currently in the works and will be integrated soon.

WARNING: Given the scope of changes in this release, we highly recommend backing up your system before upgrading. Please refer to our Manual Backup documentation for more details.

Feature changes and enhancements

  • Extension Mechanism Implementation in PR 1833
    • This also includes much stricter schema validation.
  • JupyterHub upgraded to 3.1 in PR 1856

Breaking Changes

  • While we have tried our best to avoid breaking changes when introducing the extension mechanism, the scope of the changes is too large for us to confidently say there won't be breaking changes.

WARNING: CDS Dashboards was removed in this release and upgrading to this version will result in CDS Dashboards being uninstalled. A replacement dashboarding solution is currently in the works and will be integrated soon.

WARNING: We will be removing and ending support for ClearML, Prefect, and kbatch in the next release. kbatch has been functionally replaced by Argo-Jupyter-Scheduler. We have seen little interest in ClearML and Prefect in recent years, so removing them makes sense at this point. However, if you wish to continue using them with Nebari, we encourage you to write your own Nebari extension.

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2023.7.2...2023.10.1

Release 2023.7.2 - August 3, 2023

This is a hot-fix release that resolves an issue whereby users in the analyst group are unable to launch their JupyterLab server because the name of the viewer-specific ARGO_TOKEN was mislabeled; see PR 1881 for more details.

What's Changed

Release 2023.7.1 - July 21, 2023

WARNING: CDS Dashboards will be deprecated soon. Nebari 2023.7.1 will be the last release with support for CDS Dashboards integration. A new dashboard sharing mechanism will be added in the near future, but some releases in the interim will not have dashboard sharing capabilities.

WARNING: For those running on AWS, upgrading from previous versions to 2023.7.1 requires a backup. Due to changes made to the VPC (See issue 1884 for details), Terraform thinks it needs to destroy and reprovision a new VPC which causes the entire cluster to be destroyed and rebuilt.

Feature changes and enhancements

  • Addition of Nebari-Workflow-Controller in PR 1741
  • Addition of Argo-Jupyter-Scheduler in PR 1832
  • Make most of the API private

Breaking Changes

  • As mentioned in the above WARNING, clusters running on AWS should perform a manual backup before running the upgrade to the latest version as changes to the AWS VPC will cause the cluster to be destroyed and redeployed.

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2023.5.1...2023.7.1

Release 2023.5.1 - May 5, 2023

Feature changes and enhancements

  • Upgrade Argo-Workflows to version 3.4.4

Breaking Changes

  • The Argo-Workflows version upgrade will result in a breaking change if the existing Kubernetes CRDs are not deleted (see the NOTE below for more details).
  • There is a minor breaking change to the Nebari CLI version shorthand: previously it was nebari -v; now, to align with Python convention, it is nebari -V.

NOTE: After installing Nebari version 2023.5.1, please run nebari upgrade -c nebari-config.yaml to upgrade the nebari-config.yaml. This command will also prompt you to delete a few Kubernetes resources (specifically the Argo-Workflows CRDs and service accounts) before you can upgrade.
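A minimal sketch of that flow is shown below; the exact resource names to delete are printed by the upgrade prompt itself, so the kubectl lines only illustrate how to inspect them:

    # Update the config; the command prompts before touching any resources
    nebari upgrade -c nebari-config.yaml

    # List the Argo-Workflows CRDs present on the cluster
    kubectl get crd | grep argoproj.io

    # Delete only what the upgrade prompt asks for (names are placeholders)
    # kubectl delete crd <crd-name-from-prompt>
    # kubectl delete serviceaccount <name-from-prompt> -n <namespace>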

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2023.4.1...2023.5.1

Release 2023.4.1 - April 12, 2023

NOTE: Nebari requires Kubernetes version 1.23, and DigitalOcean now requires new clusters to run Kubernetes version 1.24. This means that if you are currently running on DigitalOcean, you should be fine, but deploying a new cluster on DigitalOcean is not possible until we upgrade the Kubernetes version (see issue 1622 for more details).

Feature changes and enhancements

  • Upgrades and improvements to conda-store including a new user-interface and greater administrator capabilities.
  • Idle-culler settings can now be configured directly from the nebari-config.yaml.

What's Changed

New Contributors

Full Changelog: https://github.com/nebari-dev/nebari/compare/2023.1.1...2023.4.1

Release 2023.1.1 - January 30, 2023

What's Changed

New Contributors

Release 2022.11.1 - December 1, 2022

What's Changed

New Contributors

Release 2022.10.1 - October 28, 2022

WARNING

The project has recently been renamed from QHub to Nebari. If your deployment is still managed by qhub, performing an in-place upgrade will IRREVOCABLY BREAK your deployment. This will cause you to lose any data stored on the platform, including, but not limited to, NFS (filesystem) data, conda-store environments, Keycloak users and groups, etc. Please back up your data before attempting an upgrade.

Feature changes and enhancements

We are happy to announce the first official release of Nebari (formerly QHub)! This release lays the groundwork for many exciting new features and improvements to come.

This release introduces several important changes which include:

  • a major project name change from QHub to Nebari - PR 1508
  • a switch from the SemVer to CalVer versioning format - PR 1501
  • a new, Typer-based CLI for improved user experience - PR 1443 + PR 1519

Although breaking changes are never fun, the Nebari development team believes these changes are important for the immediate and future success of the project. If you experience any issues or have any questions about these changes, feel free to open an issue on our Github repo.

What's Changed

New Contributors

Note: The following releases (v0.4.5 and lower) were made under the name Quansight/qhub.

Release v0.4.5 - October 14, 2022

Enhancements for this release include:

  • Fix reported bug with Azure deployments due to outdated azurerm provider
  • All dashboard-related conda-store environments are now visible as options for spawning dashboards
  • New Nebari entrypoint
  • New Typer-based CLI for Qhub (available using new entrypoint)
  • Renamed built-in conda-store namespaces and added customization support
  • Updated Traefik version to support the latest Kubernetes API

What's Changed

New Contributors

Migration note

If you are upgrading from a version of Nebari prior to 0.4.5, you will need to manually update your conda-store namespaces to be compatible with the new Nebari version. This is a one-time migration step that will need to be performed after upgrading to continue using the service. Refer to How to migrate base conda-store namespaces for further instructions.

Release v0.4.4 - September 22, 2022

Feature changes and enhancements

Enhancements for this release include:

  • Bump conda-store version to v0.4.11 and enable overrides
  • Fully decouple the JupyterLab, JupyterHub and Dask-Worker images from the main codebase
  • Add support for Python 3.10
  • Add support for Terraform binary download for M1 Mac
  • Add option to supply additional arguments to ingress from qhub-config.yaml
  • Add support for Kubernetes Kind (local)

What's Changed

New Contributors

Release v0.4.3 - July 7, 2022

Feature changes and enhancements

Enhancements for this release include:

  • Integrating Argo Workflow
  • Integrating kbatch
  • Adding cost-estimate CLI subcommand (Infracost)
  • Add panel-serve as a CDS dashboard option
  • Add option to use RetroLab instead of default JupyterLab

What's Changed

New Contributors

Full Changelog: https://github.com/Quansight/qhub/compare/v0.4.1...v0.4.3

Release v0.4.2 - June 8, 2022

Incident postmortem

Bitnami update breaks post v0.4.0 releases

On June 2, 2022, GitHub user @peytondmurray reported issue 1306, stating that he was unable to deploy QHub using either the latest release v0.4.1 or an installation of qhub from main. As verified by @peytondmurray and others, the first qhub deploy halts and complains about two invalid Helm charts missing from the Bitnami index.yaml.

Bitnami's decision to change how long old Helm charts are kept in their index has essentially broken all post-v0.4.0 versions of QHub.

This is a severe bug that will affect any new user who tries to install and deploy QHub with any version less than v0.4.2 and greater than or equal to v0.4.0.

Given the impact and severity of this bug, the team has decided to quickly cut a hotfix.

AWS deployment failing due to old auto-scaler helm chart

On May 27, 2022, GitHub user @tylerpotts reported issue 1302, stating that he was unable to deploy QHub using the latest release v0.4.1 (or installing qhub from main). As described in the original issue, the deployment failed complaining about the deprecated v1beta Kubernetes API. This led to the discovery that we were using an outdated cluster_autoscaler helm chart.

The solution is to update from v1beta to v1 Kubernetes API for the appropriate resources and update the reference to the cluster_autoscaler helm chart.

Given the impact and severity of this bug, the team has decided to quickly cut a hotfix.

Bug fixes

This release is a hotfix for the issues summarized in the following:

What's Changed

  • Update minio, postgresql chart repo location by @iameskild in PR 1308
  • Fix broken AWS, set minimum desired size to 1, enable 0 scaling by @tylerpotts in PR 1304

Release v0.4.1 - May 10, 2022

Feature changes and enhancements

Enhancements for this release include:

  • Add support for pinning the IP address of the load balancer via terraform overrides
  • Upgrade Conda-Store to v0.3.15
  • Add ability to limit JupyterHub profiles based on users/groups

Bug fixes

This release addresses several bugs with a slight emphasis on stabilizing the core services while also improving the end user experience.

What's Changed

  • [BUG] Adding back feature of limiting profiles for users and groups by @costrouc in PR 1169
  • DOCS: Add release notes for v0.4.0 release by @HarshCasper in PR 1170
  • Move ipython config within jupyterlab to docker image with more robust jupyterlab ssh tests by @costrouc in PR 1143
  • Removing custom dask_gateway from qhub and idle_timeout for dask clusters to 30 min by @costrouc in PR 1151
  • Overrides.json now managed by qhub configmaps instead of inside docker image by @costrouc in PR 1173
  • Adding examples to QHub jupyterlab by @costrouc in PR 1176
  • Bump conda-store version to 0.3.12 by @costrouc in PR 1179
  • Fixing concurrency not being specified in configuration by @costrouc in PR 1180
  • Adding ipykernel as default to environment along with ensure conda-store restarted on config change by @costrouc in PR 1181
  • keycloak dev docs by @danlester in PR 1184
  • Keycloakdev2 by @danlester in PR 1185
  • Setting minio storage to by default be same as filesystem size for Conda-Store environments by @costrouc in PR 1188
  • Bump Conda-Store version in Qhub to 0.3.13 by @costrouc in PR 1189
  • Upgrade mrparkers to 3.7.0 by @danlester in PR 1183
  • Mdformat tables by @danlester in PR 1186
  • [ImgBot] Optimize images by @imgbot in PR 1187
  • Bump conda-store version to 0.3.14 by @costrouc in PR 1192
  • Allow terraform init to upgrade providers within version specification by @costrouc in PR 1194
  • Adding missing init files by @costrouc in PR 1196
  • Release 0.3.15 for Conda-Store by @costrouc in PR 1205
  • Profilegroups by @danlester in PR 1203
  • Render .gitignore, black py files by @iameskild in PR 1206
  • Update qhub-dask pinned version by @iameskild in PR 1224
  • Fix env doc links and add corresponding tests by @aktech in PR 1216
  • Update conda-store-environment variable type by @iameskild in PR 1213
  • Update release notes - justification for changes in v0.4.0 by @iameskild in PR 1178
  • Support for pinning the IP address of the load balancer via terraform overrides by @aktech in PR 1235
  • Bump moment from 2.29.1 to 2.29.2 in /tests_e2e by @dependabot in PR 1241
  • Update cdsdashboards to 0.6.1, Voila to 0.3.5 by @danlester in PR 1240
  • Bump minimist from 1.2.5 to 1.2.6 in /tests_e2e by @dependabot in PR 1208
  • output check fix by @Adam-D-Lewis in PR 1244
  • Update panel version to fix jinja2 recent issue by @viniciusdc in PR 1248
  • Add support for terraform overrides in cloud and VPC deployment for Azure by @aktech in PR 1253
  • Add test-release workflow by @iameskild in PR 1245
  • Bump async from 3.2.0 to 3.2.3 in /tests_e2e by @dependabot in PR 1260
  • [WIP] Add support for VPC deployment for GCP via terraform overrides by @aktech in PR 1259
  • Update login instructions for training by @iameskild in PR 1261
  • Add docs for general node upgrade by @iameskild in PR 1246
  • [ImgBot] Optimize images by @imgbot in PR 1264
  • Fix project name and domain at None by @pierrotsmnrd in PR 856
  • Adding name convention validator for QHub project name by @viniciusdc in PR 761
  • Minor doc updates by @iameskild in PR 1268
  • Enable display of Qhub version by @viniciusdc in PR 1256
  • Fix missing region from AWS provider by @viniciusdc in PR 1271
  • Re-enable GPU profiles for GCP/AWS by @viniciusdc in PR 1219
  • Release notes for v0.4.1 by @iameskild in PR 1272

New Contributors

  • @dependabot made their first contribution in PR 1241

Full Changelog

Release v0.4.0.post1 - April 7, 2022

This post-release addresses a few minor bugs and updates the release notes. There are no breaking changes or API changes.

  • Render .gitignore, black py files - PR 1206
  • Update qhub-dask pinned version - PR 1224
  • Update conda-store-environment variable type - PR 1213
  • Update release notes - justification for changes in v0.4.0 - PR 1178
  • Merge spawner and profile env vars to ensure dashboard sharing vars are provided to dashboard servers - PR 1237

Release v0.4.0 - March 17, 2022

WARNING

If you're looking for a stable version of QHub, please consider v0.3.14. Version v0.4.0 has many breaking changes and rough edges that will be resolved in upcoming point releases.

We are happy to announce the release of v0.4.0! This release lays the groundwork for many exciting new features and improvements in the future, stay tuned.

Version v0.4.0 introduced many design changes along with a handful of user-facing changes that require some justification. Unfortunately, as a result of these changes, QHub instances that are upgraded from previous versions to v0.4.0 will irrevocably break.

Until we have a fully functioning backup mechanism, anyone looking to upgrade is highly encouraged to back up their data; see the upgrade docs and, more specifically, the backup docs.

These design changes were considered important enough that the development team felt they were warranted. Below we try to highlight a few of the largest changes and provide justification for them.

  • Replace Terraform's resource targeting with staged Terraform deployments.
    • Justification: using Terraform resource targeting was never an ideal way of handing off outputs from one stage to the next, and Terraform explicitly warns its users that it's only intended to be used "for exceptional situations such as recovering from errors or mistakes".
  • Fully remove cookiecutter as a templating mechanism.
    • Justification: Although cookiecutter has its benefits, we were becoming overly reliant on it as a means of rendering various scripts needed for the deployment. Reading through Terraform scripts with scattered cookiecutter statements was increasingly troublesome and a bit intimidating. Our IDEs are also much happier about this change.
  • Removing users and groups from the qhub-config.yaml and replacing user management with Keycloak.
    • Justification: Up until now, any change to a QHub deployment needed to be made in the qhub-config.yaml, which had the benefit of centralizing configuration. However, it also limited the kinds of user management tasks that could be performed and caused the qhub-config.yaml to balloon in size. Another benefit of removing users and groups from the qhub-config.yaml that deserves highlighting is that user management no longer requires a full redeployment.

Although breaking changes are never fun, we hope the reasons outlined above are encouraging signs that we are working on building a better, more stable, more flexible product. If you experience any issues or have any questions about these changes, feel free to open an issue on our Github repo.

Breaking changes

Explicit user facing changes:

  • Upgrading to v0.4.0 will require a filesystem backup given the scope and size of the current change set.
    • Running qhub upgrade will produce an updated qhub-config.yaml and a JSON file of users that can then be imported into Keycloak (see the sketch after this list).
  • With the addition of Keycloak, QHub will no longer support security.authentication.type = custom.
    • No more users and groups in the qhub-config.yaml.
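A minimal sketch of that upgrade path, assuming the -c flag behaves as it does for the other qhub commands in these notes (the exported users file name is whatever qhub upgrade reports):

    # Produce the updated qhub-config.yaml and a JSON export of users
    qhub upgrade -c qhub-config.yaml

    # Import the exported users JSON into Keycloak via the admin console,
    # then redeploy with the updated configuration
    qhub deploy -c qhub-config.yaml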

Feature changes and enhancements

  • Authentication is now managed by Keycloak.
  • QHub Helm extension mechanism added.
  • Allow JupyterHub overrides in the qhub-config.yaml.
  • New qhub support CLI command to save Kubernetes logs.
  • Updates conda-store UI.

What's Changed


New Contributors

Full Changelog: https://github.com/Quansight/qhub/compare/v0.3.13...v0.4.0

Release 0.3.13 - October 13, 2021

Breaking changes

  • No known breaking changes

Feature changes and enhancements

  • Allow users to specify external Container Registry (#741)
  • Integrate Prometheus and Grafana into QHub (#733)
  • Add Traefik Dashboard (#797)
  • Make ForwardAuth optional for ClearML (#830)
  • Include override configuration for Prefect Agent (#813)
  • Improve authentication type checking (#834)
  • Switch to pydata Sphinx theme (#805)

Bug fixes

  • Add force-destroy command (only for AWS at the moment) (#694)
  • Include namespace in conda-store PVC (#716)
  • Secure ClearML behind ForwardAuth (#721)
  • Fix connectivity issues with AWS EKS via Terraform (#734)
  • Fix conda-store pod eviction and volume conflicts (#740)
  • Update remove_existing_renders to only delete QHub related files/directories (#800)
  • Reduce number of AWS subnets down to 4 to increase the number of available nodes by a factor of 4 (#839)

Release 0.3.11 - May 7, 2021

Breaking changes

Feature changes and enhancements

  • better validation messages on github auto provisioning

Bug fixes

  • removing default values from pydantic schema which caused invalid yaml files to unexpectedly pass validation
  • make kubespawner_override.environment overridable (prior changes were overwritten)

Release 0.3.10 - May 6, 2021

Breaking changes

  • reverting qhub_user default name to jovyan

Feature changes and enhancements

Bug fixes

Release 0.3.9 - May 5, 2021

Breaking changes

Feature changes and enhancements

Bug fixes

  • terraform formatting in cookiecutter for enabling GPUs on GCP

Release 0.3.8 - May 5, 2021

Breaking changes

Feature changes and enhancements

  • creating releases for QHub simplified
  • added an image for overriding the dask-gateway being used

Bug fixes

  • dask-gateway is now properly exposed by default
  • typo in cookiecutter for enabling GPUs on GCP

Release 0.3.7 - April 30, 2021

Breaking changes

Feature changes and enhancements

  • setting /bin/bash as the default terminal

Bug fixes

  • jhsingle-native-proxy added to the base jupyterlab image

Release 0.3.6 - April 29, 2021

Breaking changes

  • simplified the base JupyterLab image so it no longer includes dashboard packages (panel, etc.)

Feature changes and enhancements

  • added emacs and vim as default editors in image
  • added jupyterlab-git and jupyterlab-sidecar since they now support 3.0
  • improvements with qhub destroy cleanly deleting resources
  • allow user to select conda environments for dashboards
  • added command line argument --skip-terraform-state-provision to allow for skipping terraform state provisioning in qhub deploy step
  • qhub init no longer renders the qhub-config.yaml file in alphabetical order
  • allow user to select instance sizes for dashboards

Bug fixes

  • fixed gitlab-ci before_script and after_script
  • fixed jovyan -> qhub_user home directory path issue with dashboards

Release 0.3.5 - April 28, 2021

Breaking changes

Feature changes and enhancements

  • added a --skip-remote-state-provision flag to allow qhub deploy within CI to skip the remote state creation
  • added saner defaults for instance sizes and jupyterlab/dask profiles
  • qhub init no longer renders qhub-config.yaml in alphabetical order
  • set spawn_default_options to False to force the dashboard owner to pick a profile
  • added before_script and after_script keys to ci_cd to allow customization of the CI process

Bug fixes

Release 0.3.4 - April 27, 2021

Breaking changes

Feature changes and enhancements

Bug fixes

  • remaining issues with ci_cd branch not being fully changed

Release 0.3.3 - April 27, 2021

Breaking changes

Feature changes and enhancements

Bug fixes

  • Moved to ruamel as yaml parser to throw errors on duplicate keys
  • fixed a url link error in cds dashboards
  • Azure fixes to enable multiple deployments under one account
  • Terraform formatting issue in acme_server deployment
  • Terraform errors are caught by qhub and return error code


Release 0.3.2 - April 20, 2021

Bug fixes

  • prevent gitlab-ci from freezing on gitlab deployment
  • not all branches were configured via the branch option in ci_cd

Release 0.3.1 - April 20, 2021

Feature changes and enhancements

  • added gitlab support for CI
  • ci_cd field is now optional
  • AWS provider now respects the region set
  • More robust error messages in the CLI around project name and namespace
  • git init default branch is now main
  • branch for CI/CD is now configurable

Bug fixes

  • typo in authenticator_class for custom authentication

Release 0.3.0 - April 14, 2021

Feature changes and enhancements

  • Support for self-signed certificate/secret keys via kubernetes secrets
  • jupyterhub-ssh (ssh and sftp integration) accessible on ports 8022 and 8023, respectively
  • VSCode(code-server) now provided in default image and integrated with jupyterlab
  • Dask Gateway now accessible outside of cluster
  • Moving fully towards traefik as a load balancer with tight integration with dask-gateway
  • Adding ability to specify node selector label for general, user, and worker
  • Ability to specify kube_context for local deployments otherwise will use default
  • Strict schema validation for qhub-config.yaml
  • Terraform binary is auto-installed and version managed by qhub
  • Deploy stage will auto render by default removing the need for render command for end users
  • Support for namespaces with qhub deployments on kubernetes clusters
  • Full JupyterHub theming, including colors.
  • JupyterHub docker image now independent from zero-to-jupyterhub.
  • JupyterLab 3 is now the default user Docker image.
  • Implemented the option to locally deploy QHub, allowing for local testing.
  • Removed the requirement for DNS; authorization is now password-based (no more OAuth requirements).
  • Added option for password-based authentication
  • CI now tests local deployment on each commit/PR.
  • QHub Terraform modules are now pinned to specific git branch via terraform_modules.repository and terraform_modules.ref.
  • Adds support for Azure cloud provider.

Bug fixes

Breaking changes

  • Terraform version is now pinned to specific version
  • The domain attribute in qhub-config.yaml is now the URL for the cluster

Migration guide

  1. Version <version> is in format X.Y.Z
  2. Create release branch release-<version> based off main
  3. Ensure full functionality of QHub; at a minimum this involves ensuring:
     • [ ] GCP, AWS, DO, and local deployment
     • [ ] "Let's Encrypt" successfully provisioned
     • [ ] Dask Gateway functions properly on each
     • [ ] JupyterLab functions properly on each
  4. Increment the version number in qhub/VERSION in format X.Y.Z
  5. Ensure that the version number in qhub/VERSION is used in pinning QHub in the github actions qhub/template/{{ cookiecutter.repo_directory }}/.github/workflows/qhub-ops.yaml in format X.Y.Z
  6. Create a git tag v<version> pointing to the release branch once fully tested and version numbers are incremented (see the command sketch below)
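A minimal sketch of steps 2, 4, and 6 above (the version value is a placeholder; step 5's workflow pin is edited by hand):

    VERSION=X.Y.Z   # placeholder

    # Step 2: create the release branch off main
    git checkout main
    git checkout -b "release-$VERSION"

    # Step 4: bump the version file and commit
    echo "$VERSION" > qhub/VERSION
    git commit -am "Release $VERSION"

    # Step 6: tag the fully tested release branch and push
    git tag "v$VERSION"
    git push origin "release-$VERSION" "v$VERSION"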

Release 0.2.3 - February 5, 2021

Feature changes and enhancements

  • Added conda prerequisites for GUI packages.
  • Added qhub destroy functionality that tears down the QHub deployment.
  • Changed the default repository branch from master to main.
  • Added error message when Terraform parsing fails.
  • Added templates for GitHub issues.

Bug fixes

  • qhub deploy -c qhub-config.yaml no longer prompts unsupported argument for load_config_file.
  • Minor changes on the Step-by-Step walkthrough on the docs.
  • Revamp of README.md to make it concise and highlight Nebari Slurm.

Breaking changes

  • Removed the registry for DigitalOcean.

Thank you for your contributions!

Brian Larsen, Rajat Goyal, Prasun Anand, Rich Signell, and Josef Kellndorfer for the insightful discussions.