Plugins 1.6.5
Code Quality
🔧 Fixes
Use a hadolint version that works on the latest versions of macOS
Fix parsing in new versions of ruff for source files with syntax errors
Update links to custom linters and parsers documentation by @amkisko
Hi everyone, we've introduced a new onboarding flow to make it easier to integrate your test framework and CI provider with Trunk.
This flow can also be used to add more test frameworks and CI jobs to existing Trunk Flaky Tests projects. You can access this flow by clicking the Add Test Framework button on the top right of your dashboard.
The onboarding flow has guides for all of the test framework and CI provider combinations from the Trunk documentation. The docs will guide you through the setup process and help you validate your first upload.
The commands provided will have the API token and Trunk org slug automatically populated, so you can set up your repo without circling back to your settings.
If you don't see your test framework or CI provider in the onboarding flow, feel free to reach out to us on Slack.
Try the new add test framework flow in the Trunk Web App.
The Trunk CLI now includes commands for uploading and validating test results for Trunk Flaky Tests.
You can install the CLI in your CI systems like this:
curl -fsSLO --retry 3 https://trunk.io/releases/trunk && chmod +x trunk
The launcher will install the appropriate Trunk CLI binary for your system.
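The first invocation of the launcher performs that download, so you can sanity-check the install step in your CI image before wiring up real jobs. A minimal sketch, assuming you run it from the download directory:

./trunk --help   # first run fetches the platform-specific CLI binary, then prints usage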
You can use the Trunk CLI for Flaky Tests via the trunk flakytests subcommand.
./trunk flakytests upload --junit-paths "test_output.xml" \
  --org-url-slug <TRUNK_ORG_SLUG> \
  --token $TRUNK_API_TOKEN
You can also now use the Trunk CLI to validate JUnit XML files during debugging.
./trunk flakytests validate
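If your XML reports live somewhere else, you can point the validator at them explicitly. A minimal sketch, assuming validate accepts the same --junit-paths glob as upload:

./trunk flakytests validate --junit-paths "test_output.xml"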
Learn more in our Uploader CLI Reference docs.
We're continually refining our UX with the help of our beta users' feedback, and we've made some changes to how Flaky Tests displays key information on the dashboards. As more teams use Trunk to tackle flakiness in their test suites, we've received valuable feedback on which pieces of information are missing and which metrics matter most.
We've revisited the main Flaky Tests dashboard and added a new trend chart to help you better understand the health trends of your tests at a glance.
We've also improved what information is displayed in the test list to help you find high-impact flaky tests more easily, especially when quarantining is enabled.
On the test details page, we revisited the metrics that customers care most about and placed them more prominently. We also tidied up how the quarantining and ticket creation UI elements are displayed for a cleaner, more intuitive appearance.
See these improvements in the Trunk Web App.
🔧 Fixes
Fix a daemon connection error that could occur on Windows.
Fix a daemon connection error that could happen on some machines with restricted TCP permissions.
Check mypy output for crashes and report them to the user.
A common pain point during onboarding is the lack of transparency after the Trunk CLI uploads test results. To help you better understand whether test results have been uploaded successfully and processed, we've added an Uploads tab to the Trunk Flaky Tests dashboard.
The Uploads tab shows you the status of each upload, which is identified by the source branch and the commit hash. You can click on the link under Job Run to view the CI job in your provider.
View your recent uploads in the Trunk Web App.
Flaky Tests now supports the XCResult format output by Swift projects using XCTest. You can now upload .xcresult bundles to Trunk directly, without configuring a JUnit XML reporter. Supporting XCTest directly also means you can include test file paths from .xcresult bundles when reporting to Trunk Flaky Tests. This is especially important for CODEOWNERS support planned for a future release.
You can upload XCResults using the Trunk CLI with the --xcresult-path argument.
./trunk flakytests upload --xcresult-path "/path/to/Test.xcresult"
To learn more, visit our XCTests documentation and Uploader CLI references.
We're excited to introduce a powerful new feature that will help you accelerate your PR iterations: PR Test Summaries!
Imagine this: you're diligently working on a new feature in your branch and are ready to submit your pull request (PR) for review. You've run your tests locally, and everything looks good. But then you push your code, and... oh no! Some tests you haven't even touched are failing. This is where our new PR Test Summaries come in!
With this feature, you can quickly identify failing tests and their reasons, as well as pinpoint flaky tests. This way, you'll know whether the issue lies with your code changes or the tests themselves.
This new feature includes the following:
GitHub PR Comment: Each new PR will receive a comment summarizing the test results, including:
Counts for test runs, categorized by result (failed, flaky, skipped, quarantined)
Collapsible lists for failed, flaky, and skipped tests
For failed and flaky tests: detailed failure reasons and direct links to trace logs for quick investigation
PR Summary View: This new view in our web app gives you access to a comprehensive listing of all test run details:
A table showing all test runs, filterable by outcome (passed, failed, or skipped) and by whether they have been quarantined
Each test run includes an icon indicating whether the test is known to be flaky, the failure reason, and more details to investigate
Test Run Detail Panel: You are most likely already familiar with this view; it shows all run details, including trace logs, branch, job run link, duration, and more.
Learn more about PR Test Summaries in the docs.
🆕 New
New tool: grpcui. It can be used like Postman to explore gRPC servers with a web UI; learn about running tools here.
⨠Improvements
Add download for python@3.11.9. Specify runtime versions in the runtimes.enabled section of your .trunk/trunk.yaml (see the sketch after this list).
Add support for psscriptanalyzer versions 1.23.0 and beyond!
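As referenced above, here is a minimal sketch of pinning the new Python version. The runtimes.enabled key names follow the layout of a typical generated trunk.yaml, and the snippet assumes no runtimes section exists yet (otherwise edit the existing one):

# append a runtimes section pinning Python 3.11.9
cat >> .trunk/trunk.yaml <<'EOF'
runtimes:
  enabled:
    - python@3.11.9
EOF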
We're excited to provide a new way for you to mitigate the negative impact of flaky tests in your repo through quarantining.
Quarantining lets you isolate failures for known flaky tests so they don't fail your CI jobs while still continuing to run them. It looks for known flaky tests and determines which tests should be isolated at runtime, avoiding the code changes usually required to disable flaky tests.
Running flaky tests without disabling them lets you validate your fixes by continuously monitoring their results. This is especially important because most attempted fixes for flaky tests don't reduce the test's flakiness.
We're currently onboarding organizations to try quarantining manually, because quarantining can drastically impact the results of your CI jobs. We want to help you validate quarantining results and educate your team on its behavior during onboarding. To try quarantining, reach out to us on Slack.
Key Impact Metrics: You can see the impact of quarantining on your repo. Understand how many CI jobs with flaky failures were rescued by quarantining and how many engineering hours were saved.
Keep track of quarantined tests: You can continue to monitor the results of quarantined flaky tests through the PR Test Summaries generated by Trunk. You can filter failures by quarantined runs and see the failure reasons at a glance.
Per-test overrides: You can override quarantining for individual tests in your repo to always quarantine or never quarantine. This helps you handle edge cases, such as tests that should never break or tests that are flaky or broken for reasons outside your team's control, like external APIs being down.
Detailed audit trails: You can see the audit trail of quarantine setting changes both in the test's status history and in an overall audit trail in the settings.
Learn more in the Quarantining documentation.
🔧 Fixes
Fix a rare deadlock that could cause a linter execution to hang indefinitely.
🆕 New
New linter: squawk
New linter: markdownlint-cli2
New tool: dbt-cli
🔧 Fixes
Disable incorrect caching for pyright
⨠Improvements
Help text improvements.
trunk fmt will now use configured verbs when presenting results to the user.
🔧 Fixes
Fix a crash in the LSP server when giving diagnostics for empty files.
Trunk no longer erroneously follows symlinks to directories when watching the filesystem on Linux, saving on resources.
We're excited to announce the latest enhancement: Jira integration for managing flaky tests. This update builds on our MVP by streamlining the issue management process within Jira.
With this release, we're introducing key features designed to simplify and enhance your workflow:
Create Jira Issues with One Click: Generate Jira issues directly from the dashboard or test details view. After a quick setup, a single click creates the issue, saving you time and effort.
Customize Issue Defaults: Set up default issue types and labels to match your workflow. This ensures that every issue you create aligns with your process.
Track Issue Updates: Monitor the status of your Jira issues with ease. A new column on the dashboard shows the current issue status, while the test details page displays status changes in the timeline. These updates help confirm whether completing an issue has resolved the flaky test.
Review Work Progress: Use the new tracking features to review progress and identify trends. Filter by status to pinpoint flaky tests that need attention and assess their impact on your team.
Start the process by clicking "Create Issue" on the dashboard and follow the prompts to link your Jira account with your Jira instance URL, username, and Personal Access Token.
⨠Improvements
ruff@>=0.6.0 now runs on Jupyter notebooks by default
🔧 Fixes
Fix definition for the textproto filetype
Fix trivy config files, and no longer recommend it by default
🆕 New
Add a linter definition field that allows a linter to have multi-line messages.
Allow specifying a hint so that Trunk can identify an upstream remote to use other than origin.
⨠Improvements
Improved output for Markdownlint messages.
Improved output for the trunk git hooks.
Improved output for trunk when run via a git hook inside VSCode.
Reduced memory usage.
Remove some overly verbose logging.
🔧 Fixes
Fix a rare instance where a linter can hang without timing out.
Trunk Web App now supports both light and dark mode.
You can enable light mode by navigating to Settings > Appearance > Color Theme.
Merge queues are integral to validating and merging PRs - a critical part of any development process. Minimizing the friction to merge a PR and ensuring the merging process remains fast is essential to remaining productive and shipping code faster.
Gaining an analytical understanding of how PRs are moving through the merge queue, how much CI time it takes to test PRs in the queue, why PRs are frequently failing during testing, and so on, can help developers recognize the impact and problems of their merge process.
Today, Trunk Merge Queues introduces a metrics dashboard focused on merge queue health and efficiency. Merge Queue has many features and settings, such as batching and changing how many PRs can be tested at once, which can affect the throughput and speed of your queue - these metrics help you validate your decisions when optimizing your queue's configuration.
"Time in Queue" - how long a PR spends in the queue, from when it enters to when it leaves, whether it fails or succeeds.
"Conclusion Counts" - counts of PRs that were successfully tested and merged, failed in the queue, or were cancelled. A large number of failures, or sudden changes in the number of merged or failed PRs, can indicate a problem that needs investigating.
"CI Time Testing PRs in the Queue" - how much time is spent in your CI pipelines testing PRs in the queue. A spike in this metric indicates that tests are taking much longer to run.
If you're currently using batching, this metric accounts for the fact that all of the PRs in a batch are tested by the same CI run.
"CI Jobs Triggered To Test PRs" - how many jobs are kicked off to test pull requests in the queue. Each job requires a machine to run on - the more jobs there are, the higher the contention for machines and the longer it takes to test PRs.
Currently, both CI metrics work specifically for GitHub Actions or any workflows you're pushing data to CI Analytics for.
More metrics are planned for the future! If you have any suggestions or questions, head over to our community Slack.
⨠Improvements
Support additional osv-scanner lockfile types
Added tfvars filetype
Added svelte filetype
🔧 Fixes
Fix vale to run correctly on Windows
Fix ruff-nbqa to work on newer versions
Fix sqlfluff fix command to not fail in non-formattable cases
Today, we are releasing our first feature to support issue handling related to flaky tests. This MVP feature is designed to streamline reporting and managing flaky tests, saving valuable time and effort.
With this new feature, you can copy the issue title and content generated by Test Analytics, reducing the time needed to gather all necessary information. The auto-generated content includes test filepaths, owners, the time the test turned flaky, the most common failure reasons, and their stack traces.
From the issue handling interface, you can quickly copy detailed issue reports and paste them into your preferred issue tracking system. This automation ensures that all relevant information is accurately captured, facilitating more effective communication and faster resolution of flaky test issues.