Changelog

🔧 Fixes

  • Use a hadolint version that works on the latest versions of macOS

  • Fix parsing in new versions of ruff for source files with syntax errors

  • Update links to custom linters and parsers documentation by @amkisko

🔧 Fixes

  • Fix a daemon connection error that could occur on Windows.

  • Fix a daemon connection error that could happen on some machines with restricted TCP permissions.

  • Check mypy output for crashes and report them to the user.

We're excited to introduce a powerful new feature that will help you accelerate your PR iterations: PR Test Summaries!

Imagine this: You're diligently working on a new feature in your branch and ready to submit your Pull Request (PR) for review. You've run your tests locally, and everything looks good. But then you push your code, and—oh no! Some tests you haven’t even touched are failing. This is where our new PR Test Summaries come in!

With this feature, you can quickly identify failing tests and their reasons, as well as pinpoint flaky tests. This way, you’ll know whether the issue lies with your code changes or the tests themselves.

PR Test Summaries

This new feature includes the following:

GitHub PR Comment: Each new PR will receive a comment summarizing the test results, including:

  • Counts for test runs, categorized by result (failed, flaky, skipped, quarantined)

  • Collapsible lists for failed, flaky, and skipped tests

  • For failed and flaky tests: detailed failure reasons and direct links to trace logs for quick investigation

PR Summary View: This new view in our web app gives you access to a comprehensive listing of all test run details:

  • A table showing all test runs, which can be filtered by outcome (passed, failed, or skipped) and by whether they have been quarantined

  • Each test run includes an icon indicating whether the test is known to be flaky, along with the failure reason and further details for investigation

Test Run Detail Panel: You are most likely already familiar with this view; it shows all run details, including trace logs, branch, job run link, duration, and more.

Learn more about PR Test Summaries in the docs

✨ Improvements

  • Add download for python@3.11.9. Specify runtime versions in the runtimes.enabled section of your .trunk/trunk.yaml (see the sketch after this list).

  • Add support for psscriptanalyzer versions 1.23.0 and beyond!
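
For reference, here is a minimal sketch of how pinning that runtime might look in .trunk/trunk.yaml. It is illustrative only and shows just the relevant excerpt, not a complete config:

# .trunk/trunk.yaml (excerpt)
runtimes:
  enabled:
    - python@3.11.9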

🔧 Fixes

  • Fix a rare deadlock that could cause a linter execution to hang indefinitely.

🎁 New

  • New linter: squawk

  • New linter: markdownlint-cli2

  • New tool: dbt-cli

🔧 Fixes

  • Disable incorrect caching for pyright

✨ Improvements

  • Help text improvements.

  • trunk fmt will now use configured verbs when presenting results to the user.

🔧 Fixes

  • Fix a crash in the LSP server when giving diagnostics for empty files.

  • Trunk no longer erroneously follows symlinks to directories when watching the filesystem on Linux, saving on resources.

We’re excited to announce the latest enhancement: Jira integration for managing flaky tests. This update builds on our MVP by streamlining the issue management process within Jira.

With this release, we’re introducing key features designed to simplify and enhance your workflow:

  • Create Jira Issues with One Click: Generate Jira issues directly from the dashboard or test details view. After a quick setup, a single click creates the issue, saving you time and effort.

  • Customize Issue Defaults: Set up default issue types and labels to match your workflow. This ensures that every issue you create aligns with your process.

  • Track Issue Updates: Monitor the status of your Jira issues with ease. A new column on the dashboard shows the current issue status, while the test details page displays status changes in the timeline. These updates help confirm whether completing an issue has resolved the flaky test.

  • Review Work Progress: Use the new tracking features to review progress and identify trends. Filter by status to pinpoint flaky tests that need attention and assess their impact on your team.

Get Started Today

Start the process by clicking "Create Issue" on the dashboard and follow the prompts to link your Jira account with your Jira instance URL, username, and Personal Access Token.

✨ Improvements

  • ruff@>=0.6.0 now runs on Jupyter notebooks by default

🔧 Fixes

  • Fix definition for textproto filetype

  • Fix trivy config files, and don't recommend by default

✨ Improvements

  • Improved output for Markdownlint messages.

  • Improved output for the trunk git hooks.

  • Improved output for trunk when run via a git hook inside VSCode.

  • Reduced memory usage.

  • Remove some overly verbose logging.

🔧 Fixes

  • Fix a rare instance where a linter can hang without timing out.

Merge queues are integral to validating and merging PRs - a critical part of any development process. Minimizing the friction to merge a PR and ensuring the merging process remains fast is essential to remaining productive and shipping code faster.

Gaining an analytical understanding of how PRs move through the merge queue, how much CI time it takes to test them, why PRs frequently fail during testing, and so on can help developers recognize the impact and problems of their merge process.

Today, Trunk Merge Queue introduces a metrics dashboard focused on Merge Queue health and efficiency. Merge Queue has many features and settings, such as batching and changing how many PRs can be tested at once, which can affect the throughput and speed of your queue - these metrics help you validate your decisions when optimizing your queue’s configuration.

  1. “Time in Queue” - how long a PR spends in the queue, from when it enters to when it leaves, whether it fails or succeeds.

  2. “Conclusion Counts” - the count of successfully tested and merged PRs, failures in the queue, and cancellations. A large number of failures or sudden changes in the number of merged or failed PRs can indicate a problem that needs investigating.

  3. “CI Time Testing PRs in the Queue” - how much time is spent in your CI pipelines testing PRs in the queue. A spike in this metric indicates that tests are now taking much longer to run.

    1. If you’re currently using batching, this metric accounts for the fact that all of the PRs in a batch are tested by the same CI run.

  4. “CI Jobs Triggered To Test PRs” - how many jobs are kicked off to test pull requests in the queue. Each job requires a machine to run on - the more jobs there are, the higher the contention for machines and the longer it takes to test PRs.

Currently, both CI metrics work for GitHub Actions, or for any workflows you’re pushing data to CI Analytics for.

More metrics are being planned for the future! If you have any suggestions or questions, head over to our community slack.

✨ Improvements

  • Support additional osv-scanner lockfile types

  • Added tfvars filetype

  • Added svelte filetype

🔧 Fixes

  • Fix vale to run correctly on Windows

  • Fix ruff-nbqa to work on newer versions

  • Fix sqlfluff fix command to not fail in nonformattable cases

Today, we are releasing our first feature to support issue handling related to flaky tests. This MVP feature is designed to streamline reporting and managing flaky tests, saving valuable time and effort.

With this new feature, you can copy the issue title and content generated by Test Analytics, reducing the time needed to gather all necessary information. The auto-generated content includes test filepaths, owners, the time the test turned flaky, the most common failure reasons, and their stack traces.

From the issue handling interface, you can quickly copy detailed issue reports and paste them into your preferred issue tracking system. This automation ensures that all relevant information is accurately captured, facilitating more effective communication and faster resolution of flaky test issues.

Trunk Merge Queue has added more functionality to its public API, allowing it to fit seamlessly into any integration. When you need to build custom integrations, Trunk Merge Queue supports them through its public API and webhooks.

Currently, Trunk Merge Queue supports the following actions through our public API:

  • Submitting or resubmitting a PR to the merge queue

  • Removing a PR from the queue

  • Restarting tests on a PR in the queue that has failed

  • Getting current information on the PR and its status in the merge queue

Have a unique use case these APIs or webhooks do not cover? Let the team know in our community slack!

We are rolling out substantial UX improvements to the dashboard and detail views. These enhancements provide users with quick and easy access to critical information.

Quickly assess flaky test impact

We updated the key metrics managers and engineers are looking at to assess the impact of flaky tests on their organization. Users can see PRs impacted and the failure rate for the selected repository, including trends.

We also learned from previous research that users want to see insights more relevant to them. To support that, we've rolled out features to select the date range (1 day, 7 days, or 14 days), filter tests by teams, and drill down even further through test search.

Quicker access to key test details

Engineers need to quickly get the information needed to verify a flaky test, troubleshoot it, and deploy a fix. To support that, we rolled out a slew of improvements to the test details view.

Users can now effortlessly access the test owner, filepath, links to CI runs, and more. Additionally, the inclusion of common failure reasons and trace logs allows for a deeper understanding of recurring issues, helping you prioritize the most common ones and resolve them more quickly.

CLI 1.22.2

Code Quality

🎁 New

  • Add an option to skip files that are missing compile commands.

✨ Improvements

  • Trunk will now automatically clean up downloads that have not been used in a while.

  • Trunk will now enable eslint version >= 9 if you use the new flat config files.

  • Trunk now allows running Node-based tools from node_modules/.bin instead of hermetically managed versions.

🔧 Fixes

  • Fix Ruby installation on some machines.

🎁 New

  • New linter Vale. Thanks @gewenyu99 for the contribution!

  • New formatter php-cs-fixer. Thanks @matthewbaggett for the contribution!

  • New tools phpunit and paratest. Thanks @matthewbaggett for the contribution!

  • New tool docker-credential-ecr-login

✨ Improvements

  • Official support for eslint@9.x. For more information, see our migration guide.

  • Add missing health checks for several tools

  • Fix error handling for biome fmt

  • Cleanup contributing and style guide

  • More concise bazel filetypes

🔧 Fixes

  • Support more ruby version formats

  • Pin versions for npm-check-pre-push action

  • Fix parsing of trivy terraform issues

Trunk Merge Queue, by default, will now raise draft pull requests in order to test changes submitted to the merge queue! Trunk Merge Queue is now even easier to get started with and no longer requires any setup - simply create a Merge Queue, submit a pull request, and watch it get tested and merged.

Testing pull requests in the Merge Queue by raising draft PRs is especially useful since Trunk Merge Queue automatically syncs with your branch protection settings on GitHub. This means Trunk Merge Queue will use the same set of tests you configured on your GitHub Status Checks, keeping things simple and consistent.

Previously, Trunk Merge Queue would create an isolated branch containing the tip of main, the current changes in your pull request, and the changes of all the PRs ahead of it in the queue. Developers had to merge code changes to configure exactly which tests get triggered when we begin testing a PR.

This approach has its own benefits, but it requires configuring a custom GitHub Actions workflow, which can be very involved - for example, a trigger like:

on:
  push:
    branches:
      - trunk-merge/**
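
To give a sense of what that configuration involves, here is a rough sketch of a complete workflow built around that trigger. The workflow name, job name, runner, and test command are placeholders of our own, not part of Trunk's setup; your real build and test steps would go in place of the final step:

# .github/workflows/merge-queue-tests.yaml (illustrative sketch)
name: Merge queue tests
on:
  push:
    branches:
      - trunk-merge/**
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Replace with your real build and test commands
      - run: ./scripts/run_tests.sh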

Now, you can get started with Trunk Merge Queue right away! This setting is also configurable, so you can always have Trunk Merge Queue create a branch instead if you wish to have more control over testing PRs in the merge queue.

Trunk Merge Queue, through its Parallel Queues feature, can dynamically create new merge queues to test only the pull requests with potential conflicts together. While Parallel Mode can be used by any repo, we provide an easy way to start using Parallel Queues on Bazel repos through our GitHub action, requiring practically no setup.

Today, we’re providing even more first-class support for parallel queues with the release of our GitHub Action for Nx repos. Now, enjoying the benefits of Parallel Merge Queues is easier than ever on your Nx repos - you set up the GitHub action, and Trunk takes care of pulling the relevant information out of your PRs, making sure it only tests together the pull requests that actually have a chance of breaking each other. This allows your merge queue to continue to scale effortlessly, preventing needlessly long lines of pull requests waiting to be tested and merged.

Not using Nx or Bazel? You can still get started with Parallel Queues by using our Public API! Most build tools, like Nx and Bazel, make it easy to pull out information on what sections of code your PRs impact - for more guidance, reach out in our community slack. More GitHub actions supporting other tools like Turborepo are coming soon, making parallel queues as easy to leverage as possible for any build tool.