TDP v4.4.4 Release Notes

Release date: 26 March 2026

TetraScience has released Tetra Data Platform (TDP) version 4.4.4. This release provides important security fixes for Tetra Data Apps, several other bug fixes, improved path configuration options for Tetra File-Log Agents, and pipeline notification enhancements.

Here are the details for what's new in TDP v4.4.4.

Notes

Learn about note blocks and what they mean.
📘 NOTE

  • Any blue NOTE blocks indicate helpful considerations, but don't require customer action.

🚧 IMPORTANT

  • Any yellow IMPORTANT note blocks indicate required actions that customers must take to either use a new functionality or enhancement, or to avoid potential issues during the upgrade.

GxP Impact Assessment

All new TDP functionalities go through a GxP impact assessment to determine validation needs for GxP installations.

Enhancements and Bug Fixes generally don't affect Intended Use for validation purposes.

New Functionality

New functionalities are features that weren't previously available in the TDP.

  • There's no new functionality in this release.

Enhancements

Enhancements are modifications to existing functionality that improve performance or usability, but don't alter the function or intended use of the system.

Data Integrations Enhancements

Improved Path Configuration Options for Tetra File-Log Agents

The Agent Details page for Tetra File-Log Agents includes new path configuration side panels that provide clear visual separation from the main Paths table. A New Path or Edit Path side panel now appears when customers select the New Path button or the Edit Path icon, respectively.

This new side panel design provides a more focused editing experience while maintaining visibility of the overall path configuration. The side panels also include safeguards against accidental loss of unsaved changes: a warning prompt appears when users attempt to close a panel that has pending path modifications.

Previously, a path configuration window appeared in the Paths table when customers selected either of these options.

For more information, see Remotely Configure a Tetra File-Log Agent.

(Screenshots: New Path side panel and Edit Path side panel)

Limited Availability and Beta Release Functionality Enhancements

Custom Subject Lines for Custom Pipeline Email Notifications

When configuring custom pipeline email notifications, customers can now specify custom subject lines by using the new SUCCESS SUBJECT and FAILURE SUBJECT fields in the Set Notifications section.

For more information, see Create Custom Pipeline Email Notifications.


Custom Pipeline Email Notifications Label Improvements

Custom pipeline notifications, which are currently available as part of a limited availability release, now provide a more intuitive and consistent way to specify labels: the API's labels parameter accepts labels in dictionary format ({"key": "value"}) instead of list format (["item1", "item2"]).
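As an illustrative sketch of the change, the new dictionary format might be passed as follows. The payload shape and all field names other than labels are hypothetical, not the documented API schema:

```python
# Hypothetical notification request body; only the "labels" parameter
# is taken from the release note, the rest is illustrative.
payload = {
    "labels": {"project": "stability-study", "team": "qc"},  # new dict format
}

# Previous list format, which the labels parameter no longer uses:
legacy_labels = ["project", "team"]

assert isinstance(payload["labels"], dict)
```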

For more information, see Create Custom Pipeline Email Notifications.

Bug Fixes

The following bugs are now fixed.

Data Integrations Bug Fixes

  • Wait Time fields for Archive and Delete operations in the Tetra File-Log Agent configuration would accept decimal values, which could cause commands to fail. The TDP user interface now validates the Wait Time input to make sure that only whole number values are accepted.
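A minimal sketch of the kind of whole-number check now applied; the validator name is hypothetical and not the TDP implementation:

```python
def is_valid_wait_time(value: str) -> bool:
    # Accept only non-negative whole numbers; str.isdigit() rejects
    # decimals, signs, and empty strings.
    return value.isdigit()

assert is_valid_wait_time("30")        # whole numbers are accepted
assert not is_valid_wait_time("2.5")   # decimal values are now rejected
```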

Data Access and Management Bug Fixes

  • This release introduces important security fixes for Tetra Data Apps. Customer account teams are available to discuss the impact on specific deployments.
  • The Download selected files action no longer replaces the following characters with an underscore (_) if they appear before the file extension in file names: /[/\\:*?<>|.]+/gu.
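For reference, a minimal sketch of the previous (now fixed) sanitization behavior, using the character class from the note above; the file name is hypothetical:

```python
import re

# Character class from the release note. The old behavior collapsed runs of
# these characters into a single underscore before the file extension.
pattern = re.compile(r"[/\\:*?<>|.]+")

# Hypothetical file name: ':', '?.', and the extension dot were all replaced.
assert pattern.sub("_", "report:v2?.final.csv") == "report_v2_final_csv"
```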

Data Engineering Bug Fixes

  • Fixed an issue where duplicate records could be introduced to Normalized Lakehouse tables when the underlying data processing jobs that power those tables reached a stuck state or were cancelled and then resumed.

TDP System Administration Bug Fixes

  • The Data Hub alarm notifier now functions correctly after the Node.js runtime upgrade.

Deprecated Features

The following features are now on a deprecation track.

<ORG>__tss__system Tables Deprecation in SQL Search

Starting in TDP v4.5.0, the <ORG>__tss__system database tables used by Health Monitoring App versions 1.0.0 and later will be deprecated and removed from SQL Search. These tables will be hidden from the SQL Search page in the TDP user interface and from any third-party tools, and will no longer receive updates. Data will be migrated to new Lakehouse-based tables that power the Health Monitoring App's dashboards. There are no changes to the Health Monitoring App dashboards' functionality.

Customers who query <ORG>__tss__system directly should do the following:

  1. Review dashboards and scripts that depend on these tables.
  2. Submit a support ticket or contact your customer account leader to discuss alternatives.
  3. Migrate away from direct use of these tables before upgrading to TDP v4.5.0.

For more information, see <ORG>__tss__system Tables Deprecation in SQL Search in the Tetra Product Deprecation Notices.

Known and Possible Issues

The following are known and possible issues for TDP v4.4.4.

Data Integrations Known Issues

  • For new Tetra Agents set up through a Tetra Data Hub and a Generic Data Connector (GDC), Agent command queues aren’t enabled by default. However, the TDP UI still displays the command queue as enabled when it’s deactivated. As a workaround, customers can manually sync the Tetra Data Hub with the TDP. A fix for this issue is in development and testing and is scheduled for a future release.
  • For on-premises standalone Connector deployments that use a proxy, the Connector’s installation script fails when the proxy’s name uses the following format: username:password@hostname. As a workaround, customers should contact their customer account leader to update the Connector’s install script. A fix for this issue is in development and testing and is scheduled for a future release.

Data Harmonization and Engineering Known Issues

  • For customers using proxy servers to access the TDP, Tetraflow pipelines created in TDP v4.3.0 and earlier fail and return a CalledProcessError error. As a workaround, customers should disable any existing Tetraflow pipelines and then enable them again. A fix for this issue is in development and testing and is scheduled for a future release.
  • The legacy ts-sdk put command to publish artifacts for Self-service pipelines (SSPs) returns a successful (0) status code, even if the command fails. As a workaround, customers should switch to using the latest TetraScience Command Line Interface (CLI) and run the ts-cli publish command to publish artifacts instead.
  • IDS files larger than 2 GB are not indexed for search.
  • The Chromeleon IDS (thermofisher_chromeleon) v6 Lakehouse tables aren't accessible through Snowflake Data Sharing. There are more subcolumns in the table’s method column than Snowflake allows, so Snowflake doesn’t index the table. A fix for this issue is in development and testing and is scheduled for a future release.
  • Empty values in Amazon Athena SQL tables display as NULL values in Lakehouse tables.
  • File statuses on the File Processing page can sometimes display differently than the statuses shown for the same files on the Pipelines page in the Bulk Processing Job Details dialog. For example, a file with an Awaiting Processing status in the Bulk Processing Job Details dialog can also show a Processing status on the File Processing page. This discrepancy occurs because each file can have different statuses for different backend services, which can then be surfaced in the TDP at different levels of granularity. A fix for this issue is in development and testing.
  • Logs don’t appear for pipeline workflows that are configured with retry settings until the workflows complete.
  • Files with more than 20 associated documents (high-lineage files) do not have their lineage indexed by default. To identify and re-lineage-index any high-lineage files, customers must contact their CSM to run a separate reconciliation job that overrides the default lineage indexing limit.
  • OpenSearch index mapping conflicts can occur when a client or private namespace creates a backwards-incompatible data type change. For example: If doc.myField is a string in the common IDS and an object in the non-common IDS, then it will cause an index mapping conflict, because the common and non-common namespace documents are sharing an index. When these mapping conflicts occur, the files aren’t searchable through the TDP UI or API endpoints. As a workaround, customers can either create distinct, non-overlapping version numbers for their non-common IDSs or update the names of those IDSs.
  • File reprocessing jobs can sometimes show fewer scanned items than expected when either a health check or out-of-memory (OOM) error occurs, but not indicate any errors in the UI. These errors are still logged in Amazon CloudWatch Logs. A fix for this issue is in development and testing.
  • File reprocessing jobs can sometimes incorrectly show that a job finished with failures when the job actually retried those failures and then successfully reprocessed them. A fix for this issue is in development and testing.
  • File edit and update operations are not supported on metadata and label names (keys) that include special characters. Metadata, tag, and label values can include special characters, but it's recommended that customers use only the approved special characters. For more information, see Attributes.
  • The File Details page sometimes displays an Unknown status for workflows that are either in a Pending or Running status. Output files that are generated by intermediate files within a task script sometimes show an Unknown status, too.
  • Some historical protocols and IDSs are not compatible with the new ids-to-lakehouse data ingestion mechanism. The following protocols and IDSs are known to be incompatible with ids-to-lakehouse pipelines:
    • Protocol: fcs-raw-to-ids < v1.5.1 (IDS: flow-cytometer < v4.0.0)
    • Protocol: thermofisher-quantstudio-raw-to-ids < v5.0.0 (IDS: pcr-thermofisher-quantstudio < v5.0.0)
    • Protocol: biotek-gen5-raw-to-ids v1.2.0 (IDS: plate-reader-biotek-gen5 v1.0.1)
    • Protocol: nanotemper-monolith-raw-to-ids v1.1.0 (IDS: mst-nanotemper-monolith v1.0.0)
    • Protocol: ta-instruments-vti-raw-to-ids v2.0.0 (IDS: vapor-sorption-analyzer-tainstruments-vti-sa v2.0.0)

Data Access and Management Known Issues

  • Data App providers may exhibit the following issues when shared secrets are configured using the Custom provider option:

    • Existing secrets linked to a provider can appear blank or fail to resolve correctly.
    • New secrets created through the Custom provider option on the Providers page can appear as duplicate entries on the Shared Settings page. Do not delete these entries.
    • Upgrading a data app after deleting a provider can return a 503 error. As a workaround, do a hard refresh in your browser and retry the upgrade.

    To avoid these issues when using the Custom provider option, create shared secrets on the Shared Settings page using all-lowercase names, and reference them as existing secrets in the provider rather than creating new secrets directly from the Providers page. Do a hard refresh in your browser after any provider changes, especially deletion. If you encounter duplicate secret entries or secrets that fail to resolve, contact your customer account leader for assistance. A fix for these issues is in development and testing and is scheduled for a future release.

  • When creating a direct-to-lakehouse pipeline (v0.2.0) and selecting an Instant Start (Lambda) option from the Memory Allocation list, the workflow fails and returns a FileNotFoundError: [Errno 2] No such file or directory error. This error is caused by a limitation of the Lambda runtime environment. As a workaround, customers should select a non-Instant Start memory allocation option when configuring a direct-to-lakehouse pipeline. A fix for this issue is in development and testing and is scheduled for a future release.

  • When creating a direct-to-lakehouse pipeline, the pipeline won't create the transform output tables if any line breaks (\n), trailing whitespaces, or other special characters are included in the transform output's schemaIdentifier field.

  • When using the SQL Search page to query a table created from a direct-to-lakehouse pipeline, the Select First 100 Rows functionality sometimes defaults to an invalid query and displays the following error: "COLUMN_NOT_FOUND: line <number>. Column 'col' cannot be resolved or requester is not authorized to access requested resources." As a workaround, customers should adjust their queries to a standard SELECT * or SELECT column_name query, and then choose the Select First 100 Rows option again.
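For illustration, the workaround amounts to issuing an explicit query instead of the generated statement; the table name below is hypothetical:

```python
# Hypothetical Lakehouse table name.
table = "lakehouse_example_table"

# Explicit query to run instead of the generated "Select First 100 Rows" SQL,
# which can reference an unresolvable 'col' column.
query = f'SELECT * FROM "{table}" LIMIT 100'

assert query == 'SELECT * FROM "lakehouse_example_table" LIMIT 100'
```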

  • On the Search (Classic) page, shortcuts created in browse view also appear in collections and as saved searches when they shouldn’t.

  • Saved Searches created on the Search (Classic) page can't be used or saved as Collections on the Search page.

  • Data Apps won’t launch in customer-hosted environments if the private subnets where the TDP is deployed are restricted and don’t have outbound access to the internet. As a workaround, customers should enable the following AWS Interface VPC endpoint in the VPC that the TDP uses: com.amazonaws.<AWS REGION>.elasticfilesystem

  • Data Apps return CORS errors in all customer-hosted deployments. As a workaround, customers should create an AWS Systems Manager (SSM) parameter using the following pattern: /tetrascience/production/ECS/ts-service-data-apps/DOMAIN

    For DOMAIN, enter your TDP URL without the https:// (for example, platform.tetrascience.com).

  • The Data Lakehouse Architecture doesn't support restricted, customer-hosted environments that connect to the TDP through a proxy and have no connection to the internet. A fix for this issue is in development and testing and is scheduled for a future release.

  • On the File Details page, related files links don't work when accessed through the Show all X files within this workflow option. As a workaround, customers should select the Show All Related Files option instead. A fix for this issue is in development and testing and is scheduled for a future release.

  • When customers upload a new file on the Search page by using the Upload File button, the page doesn’t automatically update to include the new file in the search results. As a workaround, customers should refresh the Search page in their web browser after selecting the Upload File button. A fix for this issue is in development and testing and is scheduled for a future TDP release.

  • Queries that return empty string values when run on SQL tables can sometimes return NULL values when run on Lakehouse tables. As a workaround, customers taking part in the Data Lakehouse Architecture EAP should update any SQL queries that specifically look for empty strings to instead look for both empty string and NULL values.
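A minimal sketch of that query adjustment; the helper and column name are hypothetical:

```python
def empty_or_null(column: str) -> str:
    # Broaden an empty-string predicate so it also matches the NULL
    # values that Lakehouse tables can return instead of empty strings.
    return f"({column} = '' OR {column} IS NULL)"

# Before (SQL tables):  WHERE batch_id = ''
# After (Lakehouse):    WHERE (batch_id = '' OR batch_id IS NULL)
assert empty_or_null("batch_id") == "(batch_id = '' OR batch_id IS NULL)"
```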

  • Query DSL queries run on indices in an OpenSearch cluster can return partial search results if the query puts too much compute load on the system. This behavior occurs because the OpenSearch search.default_allow_partial_result setting is configured as true by default. To help avoid this issue, customers should use targeted search indexing best practices to reduce query compute loads. A way to improve visibility into when partial search results are returned is currently in development and testing and scheduled for a future TDP release.

  • Text within the content of a RAW file that contains escape (\) or other special characters may not always be indexed completely in OpenSearch. A fix for this issue is in development and testing, and is scheduled for an upcoming release.

  • If a data access rule is configured as [label] exists > OR > [same label] does not exist, then no file with the defined label is accessible to the Access Group. A fix for this issue is in development and testing and scheduled for a future TDP release.

  • File events aren’t created for temporary (TMP) files, so they’re not searchable. This behavior can also result in an Unknown state for Workflow and Pipeline views on the File Details page.

  • When customers search for labels in the TDP UI’s search bar that include either @ symbols or some unicode character combinations, not all results are always returned.

  • The File Details page displays a 404 error if a file version doesn't comply with the configured Data Access Rules for the user.

TDP System Administration Known Issues

  • The Data user policy doesn’t allow users who are assigned the policy to create saved searches, even though it should grant the required functionality permissions.

  • Limited availability release Data Retention Policies don’t consistently delete data. A fix for this issue is in development and testing and is scheduled for a future release.

  • Failed files in the Data Lakehouse can’t be reprocessed through the Health Monitoring page. Instead, customers should monitor and reprocess failed Lakehouse files by using the Data Reconciliation, File Processing, or Workflow Processing pages.

  • The latest Connector versions incorrectly log the following errors in Amazon CloudWatch Logs:

    • Error loading organization certificates. Initialization will continue, but untrusted SSL connections will fail.
    • Client is not initialized - certificate array will be empty

    These organization certificate errors have no impact and shouldn't be logged as errors. There is no workaround to prevent Connectors from producing these log messages. A fix for this issue is currently in development and testing, and is scheduled for an upcoming release. (Issue #2818)

    To filter out these errors when viewing logs, customers can apply the following CloudWatch Logs Insights query filters when querying log groups:

    fields @timestamp, @message, @logStream, @log
    | filter message != 'Error loading organization certificates. Initialization will continue, but untrusted SSL connections will fail.'
    | filter message != 'Client is not initialized - certificate array will be empty'
    | sort @timestamp desc
    | limit 20
  • If a reconciliation job, bulk edit of labels job, or bulk pipeline processing job is canceled, then the job’s ToDo, Failed, and Completed counts can sometimes display incorrectly.

Upgrade Considerations

During the upgrade, there might be a brief period of downtime when users won't be able to access the TDP user interface and APIs.

After the upgrade, the TetraScience team verifies that the platform infrastructure is working as expected through a combination of manual and automated tests. If any failures are detected, the issues are immediately addressed, or the release can be rolled back. Customers can also verify that TDP search functionality continues to return expected results, and that their workflows continue to run as expected.

For more information about the release schedule, including the GxP release schedule and timelines, see the Product Release Schedule.

For more details on upgrade timing, customers should contact their customer account leader.

Security

TetraScience continually monitors and tests the TDP codebase to identify potential security issues. Various security updates are applied to the following areas on an ongoing basis:

  • Operating systems
  • Third-party libraries

Quality Management

TetraScience is committed to creating quality software. Software is developed and tested following the ISO 9001-certified TetraScience Quality Management System. This system ensures the quality and reliability of TetraScience software while maintaining data integrity and confidentiality.

Other Release Notes

To view other TDP release notes, see Tetra Data Platform Release Notes.