What's new in dbt Cloud - November 2024
Nov 07, 2024
Welcome back to our regular installment of "What's new in dbt Cloud" where we recap all the latest innovations landing in dbt since our last announcement back in August. It's been a very busy few months at dbt Labs and we're still coming down from Coalesce 2024, where over 2,000 data enthusiasts joined us in Las Vegas (with another 8,000+ tuning in online) to connect, learn, and get inspired. You can check out the sessions on-demand and be the first to hear all the need-to-know information about next year’s event here.
Alright, on to everything that's new in dbt Cloud!
♾️ Analytics Development Lifecycle
In case you missed it, in September we published a whitepaper about what we believe is the right next step forward to mature analytics practices at organizations of any size: aligning on the analytics development lifecycle (ADLC). We encourage you to read the paper to learn more about how embracing this vendor-agnostic process can help accelerate and improve analytics at your company. You can also check out our new-and-improved website to learn more about how dbt helps teams embrace various stages of the ADLC.
📈 dbt Semantic Layer
Harness the power of the dbt Semantic Layer with new features that make it easier to build, consume, and scale your semantic layer strategy. Centralize metrics, ensure data consistency, and deliver insights more efficiently than ever before. Here’s what’s new:
🪄 Auto-generate semantic models with dbt Copilot (beta): Building and deploying your semantic layer is now faster and simpler. With dbt Copilot, you can automatically generate semantic models, reducing manual work and allowing you to standardize metrics and logic across your organization with just a click. Contact your sales rep if you’re interested in getting involved in the beta.
🌎 Query the semantic layer within the IDE: You can now query metadata, metrics, preview compiled SQL, and run exports directly in your development environment in the Cloud IDE. This makes it easier to build and manage semantic models, speeding up governed data development. And, with parity between the IDE and Cloud CLI, you can choose the environment that works best for your team for a seamless experience.
📊 Microsoft Excel integration: The dbt Semantic Layer integration with Microsoft Excel 365 and Desktop is now generally available! This enables business users to self-serve data from governed metric definitions through a simple drop-down interface query builder directly in Excel. Whether you're in finance, accounting, or any other department that relies on Excel, you can now easily access data directly without needing assistance from the data team.
📅 Custom calendar support in MetricFlow: The dbt Semantic Layer now supports custom calendars in MetricFlow (now available in Preview). This allows you to define and use your own business or fiscal calendars, aligning your metrics and reporting with custom timeframes like 4-4-5 retail calendars or non-standard fiscal years. With this update, your analytics will better reflect your organization’s unique structure, delivering more accurate and relevant insights.
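As a rough sketch of what this could look like (the model and column names below are hypothetical, and the exact property names are described in the dbt docs), a custom granularity can be declared on your time spine model's YAML:

```yaml
# Hypothetical time spine configuration illustrating custom calendar support.
# Model and column names are placeholders, not taken from the announcement.
models:
  - name: all_days
    description: Time spine enriched with a 4-4-5 fiscal calendar column
    time_spine:
      standard_granularity_column: date_day
      custom_granularities:
        - name: fiscal_quarter              # usable as a granularity in metric queries
          column_name: fiscal_quarter_4_4_5 # column holding the custom period
```

Once defined, the custom granularity can be referenced in metric queries just like standard granularities such as day or month.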
📤 Exports improvements: We've enhanced our exports experience with new features like database configuration, limit and order settings, and tagging. In addition to export-specific settings like export_as, schema, and alias, you can now configure the database setting to select the most suitable database for each export. Limit and order settings, previously outlined in the documentation, are now configurable directly in your YAML files. Additionally, tags allow you to run exports based on specific tags, just as they work for models.
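To make the new settings concrete, here is a minimal sketch of a saved query with an export (the saved query name, metric, and target names are hypothetical; the tags placement is an assumption based on how tags work for models):

```yaml
# Hypothetical saved query showing the new per-export settings.
saved_queries:
  - name: weekly_revenue
    query_params:
      metrics: [revenue]
      group_by: [metric_time__week]
    exports:
      - name: weekly_revenue_table
        config:
          export_as: table       # existing export-specific setting
          schema: marts          # existing export-specific setting
          alias: revenue_weekly  # existing export-specific setting
          database: analytics    # new: choose the target database per export
          tags: ['finance']      # assumed placement: run exports by tag, as with models
```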
📈 Embedded analytics repo: Embedded analytics has emerged as a prominent use case for the dbt Semantic Layer. To make it easier to learn how to get started, we introduced a new demo environment where you can explore integrating the dbt Semantic Layer into backend systems for an embedded analytics use case. Dive into how the Jaffle Shop uses the dbt Semantic Layer to deliver personalized sales metrics to independent merchants by dynamically filtering data by store location with the Python SDK.
🔎 dbt Explorer
dbt Explorer is dbt's built-in, automated data catalog. Use it to gain the holistic context and breadcrumbs that enable you to move beyond reactive workflows so you can build, fine-tune, and troubleshoot your pipelines more proactively. Here's what's new:
📊 Auto-exposures for Tableau: Now in Preview, you can automatically populate your dbt lineage with downstream exposures in Tableau (and Power BI to follow). This gives data teams automatic context into how and where models are used, so they can prioritize data work to promote data quality. Coming soon, you can trigger downstream dashboards to automatically refresh as soon as new data is available, giving business stakeholders confidence that they’re always making decisions from the freshest data. Auto-exposures are automatically accounted for throughout dbt Cloud, including in dbt Explorer, scheduled jobs, and CI jobs. Read the docs to learn more.
💡 Model query history: dbt Explorer now surfaces how frequently models are queried, helping data teams focus their time and infrastructure spend on popular data products as well as easing discovery by making analysts aware of widely used data models. Model query history can be viewed in performance charts, as a lineage lens in your DAG (pictured), and as a new column in your list of models. This feature is currently in Preview for Snowflake and BigQuery, with additional platforms coming soon. See the docs to learn more.
🆗 Data health tiles: Now GA in dbt Cloud, you can embed health signals like data quality and freshness within any dashboard, giving your downstream stakeholders at-a-glance confirmation of whether they can trust the data they’re about to use. Users can also navigate back to dbt Explorer with a single click to investigate further. Read the docs to get started.
👍 In-app trust signals: Data health signals aren’t just for downstream dashboards. Now in Preview, health signals are accounted for throughout the in-app dbt Cloud experience, giving users an at-a-glance understanding of whether the dbt resource they’re about to use is fresh, error-free, tested, documented, and more. Read the docs for more.
Develop
dbt Cloud offers multiple, accessible development environments to foster organization-wide data collaboration. Here's what's new:
🖼️ Visual editing experience (beta): With a low-code visual editing experience in dbt Cloud, users can create new or explore existing dbt models using a drag-and-drop interface that compiles directly to SQL. Users have the additional flexibility to jump back and forth between this visual representation of their models and SQL code to dive deeper. The visual editing experience is fully integrated with version control and dbt Explorer, and includes the ability to leverage AI for code generation. Reach out to your account team to get involved in the ongoing beta.
Deploy
dbt makes it easy to validate and safely deploy models into production. Here's what's new:
🤲 Microbatch incremental strategy: The new microbatch strategy allows you to break up large time-series datasets and process smaller batches for faster transformations and improved performance and resiliency for runs. Microbatch is currently available in beta for dbt Cloud versionless and dbt Core v1.9 for BigQuery, Postgres, Snowflake, and Spark with Redshift, Databricks, and Athena coming soon.
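As a sketch of how a model might opt in to the new strategy (the model and column names here are illustrative, not from the announcement), the microbatch configuration can be set in YAML properties:

```yaml
# Sketch of a microbatch incremental configuration; names are hypothetical.
models:
  - name: fct_events_daily
    config:
      materialized: incremental
      incremental_strategy: microbatch
      event_time: event_occurred_at  # column that timestamps each row
      batch_size: day                # process one day of data per batch
      begin: '2024-01-01'            # earliest date to backfill from
```

With this in place, dbt splits the time-series data into per-day batches and processes each independently, so a failed batch can be retried without rerunning the whole table.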
📸 Snapshots improvements: We’ve been hard at work enhancing snapshots in dbt, and we’re excited to introduce a few new improvements. Snapshots can now be configured via YAML files for a cleaner, more consistent setup alongside your models. Additionally, you can now customize the names of meta fields, offering greater flexibility to tailor snapshot metadata to your needs.
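Here is a minimal sketch of a YAML-defined snapshot with customized meta field names (the snapshot, model, and column names are illustrative):

```yaml
# Sketch of a snapshot configured in YAML, with renamed meta fields.
snapshots:
  - name: orders_snapshot
    relation: ref('orders')
    config:
      schema: snapshots
      unique_key: order_id
      strategy: timestamp
      updated_at: updated_at
      snapshot_meta_column_names:   # rename dbt's snapshot metadata columns
        dbt_valid_from: valid_from_at
        dbt_valid_to: valid_to_at
```

Defining snapshots in YAML keeps them alongside your model properties rather than in a separate snapshot block, and the renamed meta columns let the snapshot table match your organization's naming conventions.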
🔄 Advanced CI: Continuous integration in dbt just got even smarter. With the ability to compare changes in CI (now in Preview), each CI job will include a breakdown of the columns and rows that are being added, modified, or removed in your underlying data platform as a result of executing your dbt job. Users can see a summary of these changes inside their PR in Git. This additional context allows data teams to catch any unexpected behavior before code is deployed into production, improving data quality and increasing trust among all collaborators. Read the blog to learn more.
Observability
Catch problems before your stakeholders notice with dbt Cloud's built-in observability features. Here's what's new:
⚠️ Job warn notifications: Now you can get notified via Slack or email if a job run encounters warnings from tests or source freshness checks—in addition to the already available options for notification on job success, failure, or cancellation.
Platform
We’re always making improvements to the dbt Cloud platform to make it more scalable, reliable, and accessible.
🧊 Iceberg table support: dbt now supports the Apache Iceberg table format. This enables data teams to work more efficiently with large-scale data lakes while maintaining the familiar dbt workflow. Support for Athena, Spark, Databricks, Starburst/Trino, and Dremio is GA, and Snowflake support is currently in beta. Iceberg table format support is a critical capability to enable cross-platform dbt Mesh (currently in development).
☁️ Further support for Azure deployments: We launched the ability to deploy dbt Cloud multi-tenant natively on Microsoft Azure in Europe a few months ago, and we’re excited to now share that the US region is joining the fold. This hosting option is in addition to our support for AWS deployments, bringing the same powerful dbt Cloud experience to even more data teams in more regions — regardless of your choice of cloud. Azure multi-tenant support (hosted in both the US and Europe) is currently in Preview for dbt Cloud Enterprise customers.
🔐 MFA enforcement for all users: Multi-factor authentication (MFA) is now required for all dbt Cloud users. The next time a user logs in to dbt Cloud with a username/password, they’ll be required to set up MFA—through SMS, an authenticator app, or a WebAuthn-compliant security key. This new posture will help bolster overall security of your dbt Cloud account. Read the docs to learn more.
🏂 External OAuth for Okta & Entra with Snowflake: Using External OAuth, you can federate data warehouse access for developers using an OAuth Flow with an identity provider. Now, instead of Snowflake acting as the identity provider, you can leverage Okta or Entra ID to authenticate. Available to enterprise customers with Snowflake connections. Read the docs to learn more.
✍️ Sign commits from Cloud IDE: You can now configure a private key from dbt Cloud so GitHub can verify your identity when committing code from the Cloud IDE. This capability is now GA for Enterprise accounts. Read the docs to learn more.
🔌 New adapters: dbt Cloud now integrates with AWS Athena (GA) and Teradata (Preview), enabling more organizations and teams to collaborate on data workflows.
What's next
As always, we're excited to get these new features in your hands and look forward to your feedback. Be sure to join us for our ongoing "One dbt" webinar series where we dive into the latest features powering dbt Cloud. The next one is happening in December and is all about how to enable cross-functional teams with trusted data with cross-platform dbt Mesh. Save your spot here!