
Go fast without sacrificing quality: Everything we announced at dbt Developer Day

dbt Developer Day is in the books! It was a jam-packed virtual event where we showcased new features that help deliver a best-in-class developer experience for the (now, forever, and always) heroes of dbt: data practitioners.

From the new dbt engine, powered by integrating SDF’s SQL comprehension technology into dbt, to the new dbt-native VS Code extension, the GA of dbt Copilot, and some exciting updates to dbt Core, the features we highlighted all enable dbt developers to embrace speed in their workflows, with guardrails to ensure that they aren’t compromising data quality in the process.

Get up to speed on everything we announced below, or catch the replay (and admire the A+ costuming) here.

Good DevEx = better business outcomes

While the surface area of dbt has expanded in recent years to include more stakeholders in the data development workflow, our #1 priority, consistent with our mission, has always been and will always be to keep dbt the best place for analytics engineers and data practitioners to standardize data workflows.

A big part of that means delivering a best-in-class developer experience so you can empower the teams you support (data analysts, business users, and execs alike) to win with data. And spoiler alert: it turns out that what’s good for the developer is good for the organization. Better DevEx has been shown time and again to drive better business outcomes.

In our view, data developers need tooling that empowers them to balance these two seemingly paradoxical objectives: to expedite data workflows, while also ensuring data quality. The features we announced at dbt Developer Day strike that balance:

  • New dbt engine & VS Code extension: Users will enjoy dramatic performance and productivity improvements with this lightning-fast new engine, which gives them faster feedback, supports rapid iteration, and makes it turnkey to validate analytics code as it’s being written.
  • dbt Copilot: Users can reduce cognitive load by leaning on AI to automate mundane tasks like creating code, tests, and documentation. Equally important, this embedded AI can harness your data’s full context to automate the creation of refined, well-governed data products.
  • dbt Core 1.10: The next minor version release for dbt Core includes features that promote faster and safer development to help users stay in their flow, including sample mode and stricter validation.

Get to know the new dbt engine and VS Code extension

An entirely new engine will soon power dbt—one that will make development orders of magnitude faster and considerably more cost-effective.

And because we know many of you prefer to develop in VS Code, we’re also launching an official dbt VS Code extension to bring these improvements to your local development experience.

Screenshot of dbt-native VS Code Extension with IntelliSense capabilities

Faster, smarter, more efficient: Meet the new dbt engine

Earlier this year, we acquired SDF Labs, a next-gen data transformation layer and SQL compiler, and have been hard at work integrating SDF's technology into dbt. The result is a next-generation dbt engine that will fundamentally transform dbt as we know it, now in private beta.

🏁 The implications for developer experience are really exciting:

  • Lightning-fast parse times. Our demo included a 10,000-model project that parsed in less than a second 🤯.
  • Rich IntelliSense that autocompletes SQL keywords and suggests available dbt model and column names as you type
  • Auto-refactor references to model and column names across your project as soon as you make a naming change
  • Instantly click through to go to another model, ref, or macro from inside a model
  • Hover over a model to see the available columns and column types within its schema
  • Preview the data that will be output by a given CTE from within a dbt model

✅ The new engine also automatically helps organizations optimize data warehouse costs:

  • Detect and flag parsing errors (e.g. a missing comma) without hitting the warehouse
  • Detect and flag compilation errors (e.g. a function that doesn't accept the provided parameters or data types), again without hitting the warehouse; see the sketch after this list
  • Soon, dbt will also help you avoid unnecessary model runs, including in CI, with state-aware and column-aware orchestration
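
To make this concrete, here's a minimal, deliberately broken model sketching both kinds of mistakes. The model, columns, and ref are invented for illustration; the point is that the new engine can flag errors like these locally, before any query is sent to the warehouse.

```sql
-- models/stg_orders.sql (hypothetical example)
select
    order_id            -- parsing error: missing comma before the next expression
    count(*) as order_count,

    -- compilation error: date_trunc expects a date or timestamp as its first
    -- argument, so a numeric column doesn't match the function signature
    date_trunc(order_total, month) as order_month

from {{ ref('raw_orders') }}
group by 1
```

Because both issues surface at parse or compile time, no warehouse credits are spent discovering them.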

And the best part? All of these improvements will happen automatically when you run dbt on the new engine.

Bringing the new dbt engine to VS Code

Of course, this new engine will soon power development in the dbt Cloud IDE. We continue to be as committed as ever to ensuring the dbt Cloud IDE is a robust, accessible, and constantly improving dbt development experience.

But we know that many of you do your best work in the familiar local confines of VS Code, and that’s why we’re introducing the official dbt VS Code Extension, now in private beta. This extension is built from the ground up by dbt Labs, for dbt developers—and it’s the best way to take advantage of the new engine while developing locally.

The speed and cost-efficiency unlocks made possible by the new dbt engine are built right in, alongside all the other capabilities you'd expect from a first-class dbt development experience, such as the ability to explore lineage, preview data, and more.

What’s next?

Both the new dbt engine and the VS Code extension are currently in private beta for select dbt Cloud customers. We'll continue to select more participants for the beta over the coming weeks and months and we’ll be in touch if you’re an eligible candidate. In the meantime, dbt Cloud customers and dbt open source users can express interest in the beta program here. We're moving quickly towards broader availability, which is planned for our upcoming dbt Launch Showcase in late May.

Disrupting data engineering (again) with AI: dbt Copilot is GA

We also announced the general availability of dbt Copilot, a native AI assistant that brings context-aware AI to your data workflows so you can deliver higher-quality data faster. In the AI era, speed and quality are essential to staying ahead. dbt Copilot harnesses rich data context (capturing relationships, metadata, lineage, and more) paired with powerful LLMs to automate routine tasks and consistently enforce key ADLC best practices across your dbt projects.

With this GA release, dbt Cloud Enterprise customers can now auto-generate documentation, data tests, semantic models, metric definitions, and inline SQL directly within the dbt Cloud IDE with a simple click of a button and natural language prompts. dbt Copilot also supports Bring Your Own Key (BYOK) for OpenAI or Azure OpenAI, and includes a built-in style guide to ensure consistency.

GIF of dbt Copilot generating automated documentation

Early beta users have shared positive feedback, noting improved documentation coverage, faster formatting, and improved query optimization, saving hours on manual work.

“dbt Copilot has completely changed how we approach documentation and query optimization. Instead of spending hours manually updating models, I can use natural language to generate tests, infer metadata, and enrich our data models with valuable context. The more metadata we add, the better our entire team benefits, from analysts to executives.”
- Cody Mclean, Sr. Data Engineer, Hard Rock Digital

Read the full GA announcement here. If you’re a dbt Cloud Enterprise customer, check out the docs to learn how to get started using dbt Copilot today.

Taking dbt Core 1.10 out for a test drive

The latest version of dbt Core (1.10) is now in beta. It introduces a couple of powerful new capabilities that will make development not just faster, but also safer.

🏎️ Faster: Sample mode

You can now opt to use dbt in sample mode to build just a subset of your data during development or CI, rather than your datasets in full. This allows you to validate outputs while iterating much more rapidly, and to reduce warehouse spend as a result. Sample mode will be particularly helpful if you're dealing with large time-based datasets.

Today, dbt Core 1.10 supports time-based sampling for references to any models or sources with event_time configured. You can use the --sample flag with the dbt run or dbt build commands to specify a trailing time window (such as the last "3 days"). You can also specify a particular historical time range (such as 2025-01-27 to 2025-01-30). More in the docs.
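
For illustration, here's roughly what this looks like end to end. The source and column names are invented, and the exact flag syntax may differ slightly from this sketch, so check the docs for specifics.

```yaml
# models/schema.yml (hypothetical): opt a model into sampling by
# declaring which column represents event time
models:
  - name: web_events
    config:
      event_time: occurred_at
```

```shell
# Build using only the trailing three days of data
dbt run --sample="3 days"

# Or replay a specific historical window
dbt build --sample="2025-01-27 to 2025-01-30"
```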

Additionally, you can set a default sample window at the environment level, so you don’t have to manually pass the --sample flag every run.

Sample mode in dbt 1.10

Thank you to all of the community members who participated in our GitHub discussion and Zoom feedback session to help shape this feature!

🛡️ Safer: Stricter validation

dbt Core 1.10 will also introduce stricter validation for project inputs, preventing common configuration errors and ensuring better reliability.

In prior versions, dbt accepted any configuration input, including typos and misconfigurations, which could lead to silent failures when models were run. For instance, mistakenly entering desciption instead of description would previously go unnoticed. But no more! Soon, dbt will emit a warning for mistakes such as this one.

If you'd like to use a custom configuration, you can still do so, but it will need to be nested within the meta config.
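
As a minimal sketch (the model and custom key below are invented for illustration):

```yaml
# models/schema.yml (hypothetical)
models:
  - name: orders
    # a misspelled key such as 'desciption:' was previously accepted silently;
    # dbt Core 1.10 will warn about unrecognized keys like it
    description: One row per customer order
    config:
      meta:
        # custom, non-standard configuration remains valid when
        # nested within the meta config
        owner_team: finance
```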

Sample mode is currently available in the dbt Core 1.10 beta, as well as for dbt Cloud customers on the Latest release track. Stricter validation is on its way soon; keep an eye out for an upcoming discussion on GitHub!

But wait, there’s more!

A big part of our commitment to data developers is to meet them where they are, across a myriad of data platforms. A few other milestones to share in this regard:

BigQuery

  • We continue to invest in making our BigQuery connector better than ever. We will soon extend support to Python models powered by BigQuery DataFrames for advanced analytics and machine learning on BigQuery.
  • We also now support Workload Identity Federation, so users can authenticate to BigQuery without service account keys.
  • Stay tuned for more announcements on our partnership with Google and BigQuery at Google Next, happening April 7-11 in Las Vegas. Join us!

Teradata

  • We introduced our Teradata adapter in late 2024. After great customer feedback and a rigorous beta program, we’re thrilled to announce that this adapter is now generally available.

Save the date for May 28

We’re excited about how these innovations will uplevel the developer experience in dbt, helping data teams ship data products faster, while always keeping data quality in check. And we’re just getting started. Be sure to register for our next virtual launch event—the annual dbt Launch Showcase happening on May 28—where we’ll introduce even more features designed to empower our users to scale analytics. See you there!
