TDP v4.4.0 Release Notes
Release date: 13 November 2025
TetraScience has released its next version of the Tetra Data Platform (TDP), version 4.4.0. This release focuses on making it even easier for customers to manage their deployments, engineer and harmonize their data, and then analyze it to achieve scientific outcomes.
Key updates include the following:
- Labels for adding metadata to Tetra Data Pipelines and Tetra Data Apps
- An improved Tetra Data & AI Workspace user experience
- A new, optional Alert Management page for configuring email alerts for data ingestion infrastructure
- Direct download links for laboratory information management systems (LIMS) and electronic lab notebooks (ELNs)
- Automated Delta Sharing of Lakehouse tables with customer Databricks accounts
- Infrastructure updates to support future AI and Platform UI Services, along with Agent fleet management capabilities
- Access Rules for Tetra Data Pipelines and Tetra Data Apps (limited availability release)
- A new Schema artifact type (limited availability release) to support structured data that's not in Intermediate Data Schema (IDS) format
Here are the details for what's new in TDP v4.4.0.
How to Read These Release Notes
- Read the GxP Impact Assessment to learn which new functionalities require GxP validation, and which ones don’t. New functionalities marked with an asterisk (*) don’t require validation. Enhancements and Bug Fixes don't generally require validation.
- Any blue NOTE blocks indicate helpful considerations, but don’t require customer action.
- Any yellow IMPORTANT note blocks indicate required actions that customers must take to either use a new functionality or enhancement, or to avoid potential issues during the upgrade.
GxP Impact Assessment
All new TDP functionalities go through a GxP impact assessment to determine validation needs for GxP installations.
New Functionality items marked with an asterisk (*) address usability, supportability, or infrastructure issues, and do not affect Intended Use for validation purposes, per this assessment.
Enhancements and Bug Fixes do not generally affect Intended Use for validation purposes.
Items marked as either beta release or limited availability release (previously early adopter program (EAP)) are not validated for GxP by TetraScience. However, customers can use these prerelease features and components in production if they perform their own validation.
New Functionality
New functionalities are features that weren't previously available in the TDP.
Data Access and Management New Functionality
Download URL Links to embed in ELNs and LIMS
To help streamline data access workflows for scientists, customers can now configure Tetra Data Pipelines to generate direct download links in their Laboratory Information Management Systems (LIMS) or Electronic Lab Notebooks (ELNs). When an authenticated user clicks one of these links, the appropriate file downloads directly to the user's chosen location.
To create direct download links in target systems, customers can now create functions in their pipeline scripts that generate the links based on each processed file's `file_id` and `orgSlug`.
For more information, see Link to a File in the TDP. For detailed instructions, see Add TDP Download Links to ELNs and LIMS in the TetraConnect Hub.
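For illustration, a pipeline-script helper might look like the following minimal sketch. The URL pattern here is a placeholder assumption, not the documented link format; see Link to a File in the TDP for the exact structure.

```python
# Minimal sketch of a task-script helper that builds a direct download link.
# ASSUMPTION: the base URL and path segments below are placeholders; confirm
# the documented link format in "Link to a File in the TDP".
def build_download_link(file_id: str, org_slug: str,
                        base_url: str = "https://platform.tetrascience.com") -> str:
    """Return a direct download URL for a processed file."""
    return f"{base_url}/{org_slug}/files/{file_id}/download"

# Example: embed the returned URL in the ELN or LIMS record for the file.
link = build_download_link("<file_id>", "my-org")
```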
Data Harmonization and Engineering New Functionality
Labels for Pipeline Organization and Access Rules
Customers can now assign labels to specific pipelines. The ability to add labels to pipelines provides two key benefits:
- Improved pipeline organization and discoverability through filtering on the Pipeline Manager page
- Access control through Access Groups (available through the limited availability release of Entity Access Rules)
For more information about assigning labels to pipelines, see Set Up and Edit Pipelines and Manage Pipelines.
For more information about configuring pipeline access rules, see Configure Access Rules for an Organization.
Pipeline Label UI Updates
- A new LABELS field on the Pipeline Edit page now provides customers the ability to assign labels to each pipeline.

New Labels field on the Pipeline Edit page
- A new Label dropdown on the Pipeline Manager page can help customers find pipelines based on specific attributes.

New Label dropdown on the Pipeline Manager page
Automated Delta Sharing with Customer Databricks Accounts*
Customers with their own Databricks accounts can now have their Lakehouse databases automatically synced to their Delta Shares from the TDP, rather than working with their Tetra account team to manually set up each Delta Share.
To set up automatic Delta Sharing, customers can now create a new databricks-delta-sharing secret in their TDP organization that uses their Databricks sharing identifier as the secret value. The value should be entered in the following format:
- For a single shared account: `{"target_accounts":["<sharing_identifier>"]}`
- For multiple shared accounts: `{"target_accounts":["<sharing_identifier_1>","<sharing_identifier_2>"]}`
For more information, see Automate Databricks Delta Sharing.

New Databricks Delta Sharing secret formatting
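If customers prefer to generate the secret value programmatically, a minimal sketch follows; the `target_accounts` key comes from the format above, and the identifiers are placeholders from your Databricks account:

```python
import json

# Build the databricks-delta-sharing secret value. Replace the placeholders
# with the sharing identifiers from your Databricks account(s).
sharing_identifiers = ["<sharing_identifier_1>", "<sharing_identifier_2>"]

# json.dumps guarantees the double-quoted, compact format the secret expects.
secret_value = json.dumps({"target_accounts": sharing_identifiers},
                          separators=(",", ":"))
print(secret_value)
# {"target_accounts":["<sharing_identifier_1>","<sharing_identifier_2>"]}
```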
direct-to-lakehouse Pipelines Now Support File Deletion Events
The Direct to Lakehouse functionality now supports file deletion events in addition to the previous append-only behavior. Now, when an input file is deleted, the corresponding data in the output Lakehouse table is deleted automatically, too.
For more information, see Create a direct-to-lakehouse Pipeline.
TDP System Administration New Functionality
New Alert Management Page: Email Alerts for Offline Agents, Pluggable Connectors, and Tetra Hubs
A new, optional Alert Management page helps customers proactively monitor their data ingestion infrastructure. The new page provides options for configuring email notifications to be automatically sent when any of an organization’s deployed Tetra Agents, Pluggable Connectors, and Tetra Hubs go offline.
To use the new Alert Management page, customers must contact their account leader. For more information, see Alert Management.

New Alert Management page and Create Alert dialog
Once the page is activated, a new Operational Intelligence option appears in the left navigation menu, replacing the previous Health Monitoring option.

New Operational Intelligence left navigation menu option
Choosing the new Operational Intelligence option from the left menu provides the option to select from either the Health Monitoring dashboard or the new Alert Management page.

Operational Intelligence menu option dropdown
Beta Release and Limited Availability New Functionality
Access Rules for Pipelines and Data Apps (Limited Availability)
To provide more control over who can modify Tetra Data Pipelines and access Tetra Data Apps, organization administrators can now define metadata-driven access rules for both pipelines and data apps through Access Groups.
There are now two types of Access Rules (previously labeled just Data Access Rules in the TDP user interface) available through Access Groups:
- Entity Access Rules: define access permissions for Tetra Data Pipelines and/or Tetra Data Apps for multiple users based on specific attributes assigned to either the pipeline or the data app.
- Data Access Rules: define data access permissions for multiple users based on specific file attributes.
Customers can configure Entity Access Rules in addition to, and separately from, the existing Data Access Rules. All Access Groups and their associated Access Rules can also be managed through a third-party identity provider (IdP) by using SSO Identity Groups.
To start using entity access rules, customers should contact their account leader. For more information, see Configure Access Rules for an Organization.
New Schema Artifacts (Limited Availability)
Customers can now register their non-Intermediate Data Schema (IDS) tables produced through direct-to-lakehouse pipelines and manage them as governed, discoverable datasets on the platform by using a new Schema artifact type. Available as part of a limited availability release, Schema artifacts are designed to provide the same rich metadata and context as IDS tables.
To start using the new Schema type, customers should contact their account leader. Once activated, customers can access any Schema artifacts through a new Schemas page, accessible through the Artifacts option in the left navigation menu.
Enhancements
Enhancements are modifications to existing functionality that improve performance or usability, but don't alter the function or intended use of the system.
Data Access and Management Enhancements
Improved Tetra Data & AI Workspace UX
To help improve usability, the following UI updates were made to the Tetra Data & AI Workspace page:
- An updated Data & AI Workspace Dashboard tab now includes the following:
- Tiles of all running apps in an organization
- New menu icons in each app tile that open the app's Details page (described later in this section) or allow the app to be removed
- A new Running and Starting status indicator
- A new Open App button that opens the selected app
For more information, see Access the Tetra Data & AI Workspace Page.

New Tetra Data & AI Workspace Dashboard
- A new App Gallery page makes it easier to see what apps are available by adding a search bar and the option to filter apps by namespace.
For more information, see Activate an Embedded Data App.

New App Gallery page
- A new Data App details dialog now appears when an app is selected on the App Gallery page. The dialog shows the app's details, including what it does and any prerequisites for using it.
For more information, see Activate an Embedded Data App.

New app Details dialog on the App Gallery page
- Enabled apps now also have their own dedicated Details page that includes the following:
- Version Management: provides a dropdown to select an app version to use along with an Upgrade button to upgrade to the selected version.
- Overview: shows the README file for the app, including a change log for each app version.
- Requirements: shows any platform requirements.
- Providers: shows any available third-party systems that customers can connect the app to.
- Logs: shows the app's activity logs.
- Attribute Management: allows customers to assign labels to the app for organization and access control.
For more information, see Embedded Data Apps.

New App Details page
- A new Create Your Own App page provides a step-by-step guide to build and publish your own self-service Data App. For more information about creating your own apps, see Self-Service Data Apps in the TetraConnect Hub.
For more information, see Create a Self-Service Data App.

New Create Your Own App page
New Tree Navigation for Viewing Nested Fields on the SQL Search Page
To help make it easier for customers to navigate nested fields in SQL tables, the SQL Search page now includes a new tree navigation option that shows each table’s nested fields.
To use the new tree navigation, customers can run a query on the SQL Search page, and then select any field with nested data in the result set.
For more information, see Query SQL Tables in the TDP.

New tree navigation option on SQL Search page
Data Integrations Enhancements
Remotely Configure Stream Uploads and Scan Interval Settings for Tetra File-Log Agents
Customers can now configure the following settings for Tetra File-Log Agents remotely through the Agents page in the TDP:
- Stream Uploads: When set to Yes, the Agent uploads files directly to Amazon S3 without requiring the file to be stored locally in the Group User System temporary folder. The default setting is No.
- Scan Interval (seconds): Sets how often paths are rescanned for new or changed files. The default is 30 seconds.
For more information, see Remotely Configure a Tetra File-Log Agent.
Download Tetra OpenLab Agent Configuration Settings Remotely
TDP users with either Administrator or Developer policies assigned to their roles can now download Tetra OpenLab Agent configuration (manual.json) files by selecting Download Configuration on the Agents page.
Previously, customers could remotely download configuration files for Tetra File-Log Agents and Tetra Empower Agents only.
For more information, see Download Agent Configuration Settings.
New Caution Message for Refreshing the Agents Page
The Configuration tab on the Agents page now displays the following caution message:
Unsaved changes will be lost if you leave or refresh this page.
For more information, see Create and Edit Agents.
Updated Agent Names in the Create Agent Dialog
The Create Agent dialog now lists available Agents using their official product names. The following Agent names are now updated:
- File-Log Agent (instead of File Log Agent)
- UNICORN Agent (instead of Unicorn Agent)
- User-Defined Agent (instead of User Defined Agent)
For more information, see Create a New Agent.

Updated Create Agent dialog
Tetra Hub Nginx Proxy Settings Update
The latest Tetra Hub's reverse proxy (Nginx) now defaults to `proxy_ssl_server_name on;`, enabling Server Name Indication (SNI) for improved compatibility with modern cloud services and content delivery networks (CDNs).
If you encounter Agent connection issues after upgrading to the latest Tetra Hub version, you can disable this setting by adding `proxy_ssl_server_name off;` to `/etc/hub/nginx` on the Hub host server.
Customers should contact their account leader before modifying Nginx configuration files.
For more information, see Hub Nginx Proxy Configuration.
Simplified Tetra Hub Installation Scripts for Hosts that Use Ubuntu 22.04 or Higher
To help simplify the Tetra Hub installation process and avoid potential issues associated with earlier versions of Ubuntu, installation scripts for new Tetra Hubs no longer require the `--allow-ubuntu-22` flag on host machines that use Ubuntu 22.04 or higher.
To facilitate this change, Ubuntu 18.04 and earlier is no longer supported for new Tetra Hub installations. TetraScience will continue to support existing installations that use Ubuntu 18.04 and earlier.
For more information, see Create and Install a Tetra Hub.
Data Harmonization and Engineering Enhancements
Improved Local Protocol and Task Script Verification for SSPs
To provide immediate feedback on locally developed protocol validity, self-service pipelines (SSPs) now support verifying whether protocols and task scripts conform to valid specifications directly in local development environments.
Previously, protocols for SSPs needed to be deployed to the TDP before they could be verified.
For more information, see Test SSP Artifacts Locally.
Manual File Uploads Can Now be Saved to Target Folders
To help improve data ingestion consistency and data discoverability, customers can now manually upload files to target folders by using a new SELECTED FOLDER option in the Upload File dialog on the Search page. Selecting a target folder for uploads can help customers consistently upload files to the same folder as a specific Agent or Connector’s upload path.
Previously, customers could only upload files to target folders by using the Upload a File (/v1/datalake/upload) API endpoint.
For more information, see Upload a New File.

New Selected Folder field in the Upload File dialog
Upload a File API Endpoint Now Supports Multiple Languages and Paths with Special Characters
The Upload a File (`/v1/datalake/upload`) API endpoint now supports upload paths that use any valid folder name in Windows, macOS, and Linux operating systems. To facilitate this, the endpoint includes a new `path` parameter that provides the option to manually upload files to target folders, rather than the default root folder.
For more information, see Upload a File in the TetraScience API Reference.
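For illustration, a minimal upload request might look like the following sketch. The endpoint and `path` parameter are documented; the base URL, header names, and multipart field name are assumptions to verify against the API reference.

```python
import requests

# ASSUMPTIONS: the base URL, auth header names, and multipart field name below
# are illustrative only; the /v1/datalake/upload endpoint and its "path"
# parameter come from the TetraScience API Reference.
API_URL = "https://api.tetrascience.com/v1/datalake/upload"

# A file name with spaces and special characters, now supported by the endpoint.
with open("results (run #1).csv", "rb") as f:
    response = requests.post(
        API_URL,
        headers={
            "ts-auth-token": "<token>",  # assumed header name
            "x-org-slug": "my-org",      # assumed header name
        },
        params={"path": "hplc/instrument-01"},  # target folder instead of root
        files={"file": f},
    )
response.raise_for_status()
print(response.json())
```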
Updated ids-to-lakehouse Pipeline Default Settings
To help users define lakehouse trigger conditions for specific IDSs more easily, pipelines using the `ids-to-lakehouse` protocol now have new default trigger options:
- File Category is now set to IDS by default
- IDSType is left blank by default
For more information, see Create an ids-to-lakehouse Pipeline.
Improved Lakehouse Normalized Datacubes Tables
The Normalized datacubes tables that customers can choose to create through their `ids-to-lakehouse` pipelines now use a new, optimized table schema that enhances performance and functionality. The updated table schema streamlines data access, improves query performance, and provides a more robust data model for managing data at scale.
This release deprecates the complex `file_metadata` struct column in favor of four new, top-level columns:
- `file_metadata_file_id` (STRING): A unique identifier for each file.
- `file_metadata_ids_path` (STRING): The path of the data source.
- `file_metadata_created_at_timestamp` (BIGINT): The creation timestamp of the file.
- `file_metadata_created_at` (TIMESTAMP): The creation time of the file.
This change is designed to improve data access efficiency by reducing the need for costly joins and nested field lookups.
The original file_metadata column will be retained for backward compatibility, but will no longer be populated with data for new rows. Instead, it will have a NULL value for all newly inserted records. For tables created with this new release, the file_metadata column will always be NULL.
For best performance, customers should update their queries to use one or more of the new top-level columns in their WHERE or ON clauses. For example, instead of querying on `t.file_metadata.file_id`, use `t.file_metadata_file_id`, as shown in the following sketch.
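Here is the same lookup before and after the schema change; the table name and alias are placeholders:

```python
# Deprecated pattern: nested struct lookup. For rows written after this
# release, file_metadata is NULL, so this filter misses new data.
OLD_QUERY = """
SELECT t.*
FROM my_lakehouse_table t
WHERE t.file_metadata.file_id = '<file_id>'
"""

# Preferred pattern: filter on the new top-level column instead.
NEW_QUERY = """
SELECT t.*
FROM my_lakehouse_table t
WHERE t.file_metadata_file_id = '<file_id>'
"""
```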
For more information, see Working with Normalized Lakehouse Tables.
Improved Filtering for Scheduled Protocols
When customers select the SCHEDULED trigger check box on the Pipeline Edit page, the Select Protocol tab now only displays protocols that are compatible with scheduled triggers. Previously, all available protocols would appear, regardless of whether or not they were compatible with scheduled triggers.
For more information, see Select Trigger Conditions.
TDP System Administration Enhancements
New Access Group Entity for Audit Trail
To help customers better track actions performed on Access Groups in their TDP environments, a new Access Groups entity type appears in the Audit Trail.
The following actions are now tracked for each Access Group along with the users that performed them:
- Add users shows when users are added to an Access Group.
- Create shows when an Access Group is created.
- Delete shows when an Access Group is deleted.
- Remove users shows when users are removed from an Access Group.
- Update shows when an Access Group’s access rules are updated.
For more information, see Entities and Logged Actions in the Audit Trail documentation.

New Access Group Audit Trail entity and recorded actions
New Operations Policy Permissions
A new Operations policy provides more granular, read-only access to users who need permissions for troubleshooting purposes, but don't need access to any Administrative functionality.
For more information, see Operations Policy Permissions.
Infrastructure Updates
The following is a summary of the TDP infrastructure changes made in this release. For more information about specific resources, contact your customer account leader.
New and removed resources will be finalized when TDP v4.4.0 is generally available on 13 November 2025.
New AI Runtime Services and Platform UI Runtime Services Infrastructure
TDP v4.4.0 introduces three new backend AWS CloudFormation stacks:
- AI Runtime Services (`ts-ai-platform`): infrastructure designed to help rapidly expand the platform's AI and analytics capabilities by serving and managing AI workflows through the TetraScience API and CLI and by supporting the UI management components. The new AI Runtime Services layer provides the compute, infrastructure provisioning, and execution environment required to run AI inference at scale. AI management capabilities are scheduled for limited availability release later in Q4 2025 and are disabled by default in this release.
- Platform UI Runtime Services (`ts-tetrasphere`): infrastructure designed to help rapidly expand the TDP's functionality by supporting the new frontend Platform UI Services that are scheduled for a future release. The new Platform UI Runtime Services layer enables new AI and user-facing features within the TDP, allowing the platform to expand its functional surface area without requiring full platform upgrades. Platform UI Runtime capabilities are disabled by default in this release.
- API Gateway Services (`ts-gateway`): infrastructure that routes traffic to and from the new runtime services. In future releases (starting with TDP v4.5.0), the `ts-gateway` stack will route traffic for additional TetraScience components as well.
Existing TDP functionality will not be impacted and will be released following the existing TDP release schedule.
Backend runtime services and the upcoming frontend AI Services and Platform UI Services will be regularly updated separately from the TDP, similar to how Tetra Integrations, Artifacts, and Data Apps are currently released. Each new and updated service will go through the established TetraScience Software Development Lifecycle (SDLC).
NOTE
Upcoming frontend services will be placed behind a feature flag, which will be deactivated by default. Organization administrators must approve each new or updated frontend service before it becomes available to users in each TDP environment. Each frontend service will also have its own documentation, including a change log for each service version.
Backend service updates will be released as needed, and won’t impact any customer-facing frontend services.
Artifact Upload Size Limit Increased to 10 GB
The TDP now includes a TetraScience-managed override setting that can support artifact uploads up to 10 GB, such as large Self-Service Data App artifacts.
The default artifact upload size limit is still 1 GB. Customers must contact their account leader to upload artifacts larger than 1 GB.
Agent Fleet Management Infrastructure
This release introduces the infrastructure required for customers to remotely manage Tetra Agents as fleets in the TDP. When this new Agent fleet management functionality is introduced in a future TDP v4.x release, it will make it easier for customers to horizontally scale and automatically load balance their Agents to improve data acquisition throughput and latency.
There are no customer-facing impacts associated with this infrastructure update in TDP v4.4.0.
Bug Fixes
The following bugs are now fixed.
Data Integrations Bug Fixes
- When configuring Tetra File-Log Agent paths in the TDP user interface, the following known issues are now resolved:
- The End Date field can now be the same value as the Start Date field.
- After a filter action is cleared, the Labels field's Value dropdown now displays a correct list of customers' available labels for new scan paths.
- Path configuration options no longer appear to be enabled for Agents that have their queues disabled.
- Install scripts for Standalone Connector deployments now parse proxy details correctly if any are entered by customers.
Data Harmonization and Engineering Bug Fixes
- The Pipeline Edit page now correctly handles pipelines with scheduled trigger conditions configured through the Create New Pipeline (`/v1/pipeline/create`) API endpoint. Previously, the page displayed an incorrect custom trigger and a `Cannot read properties of undefined (reading 'every')` error message for scheduled pipelines created through the API. (#4338)
Data Access and Management Bug Fixes
- When customers create a new Data App provider using a secret that's already in use, the new provider name now populates correctly in the app.
- The Create Data App Provider dialog now accepts secret values that span multiple lines.
Deprecated Features
The following features are now on a deprecation track:
- To create a more seamless search experience for customers, the Search (Classic) page is tentatively planned to be deprecated in TDP v4.5.0 and replaced by the Search page that was introduced in TDP v4.0.0. If any customers' organizations rely on the existing Search (Classic) page, they should contact their customer account leader to ensure all required functionality is migrated to the new user interface.
- To better productize solutions and help more customers, TetraScience will create new artifacts in the `client` namespace only by exception starting November 13, 2025. Instead, all new artifacts created by TetraScience for customers will be deployed to the `common` namespace and will be available to all customers. Support provided by contract will continue for all existing artifacts in the `client` namespace.
For more information about TDP deprecations, see Tetra Product Deprecation Notices.
Known and Possible Issues
The following are known and possible issues for TDP v4.4.0.
Data Integrations Known Issues
- For new Tetra Agents set up through a Tetra Data Hub and a Generic Data Connector (GDC), Agent command queues aren’t enabled by default. However, the TDP UI still displays the command queue as enabled when it’s deactivated. As a workaround, customers can manually sync the Tetra Data Hub with the TDP. A fix for this issue is in development and testing and is scheduled for a future release.
- For on-premises standalone Connector deployments that use a proxy, the Connector's installation script fails when the proxy's name uses the following format: `username:password@hostname`. As a workaround, customers should contact their customer account leader to update the Connector's install script. A fix for this issue is in development and testing and is scheduled for TDP v4.4.1.
Data Harmonization and Engineering Known Issues
- For customers using proxy servers to access the TDP, Tetraflow pipelines created in TDP v4.3.0 and earlier fail and return a `CalledProcessError` error. As a workaround, customers should disable any existing Tetraflow pipelines and then enable them again. A fix for this issue is in development and testing and is scheduled for a future release.
- The legacy `ts-sdk put` command to publish artifacts for self-service pipelines (SSPs) returns a successful (0) status code, even if the command fails. As a workaround, customers should switch to using the latest TetraScience Command Line Interface (CLI) and run the `ts-cli publish` command to publish artifacts instead. A fix for this issue is in development and testing and is scheduled for a future `ts-sdk` release.
- IDS files larger than 2 GB are not indexed for search.
- The Chromeleon IDS (`thermofisher_chromeleon`) v6 Lakehouse tables aren't accessible through Snowflake Data Sharing. There are more subcolumns in the table's `method` column than Snowflake allows, so Snowflake doesn't index the table. A fix for this issue is in development and testing and is scheduled for a future release.
- Empty values in Amazon Athena SQL tables display as `NULL` values in Lakehouse tables.
- File statuses on the File Processing page can sometimes display differently than the statuses shown for the same files on the Pipelines page in the Bulk Processing Job Details dialog. For example, a file with an `Awaiting Processing` status in the Bulk Processing Job Details dialog can also show a `Processing` status on the File Processing page. This discrepancy occurs because each file can have different statuses for different backend services, which can then be surfaced in the TDP at different levels of granularity. A fix for this issue is in development and testing.
- Logs don't appear for pipeline workflows that are configured with retry settings until the workflows complete.
- Files with more than 20 associated documents (high-lineage files) do not have their lineage indexed by default. To identify and re-lineage-index any high-lineage files, customers must contact their CSM to run a separate reconciliation job that overrides the default lineage indexing limit.
- OpenSearch index mapping conflicts can occur when a client or private namespace creates a backwards-incompatible data type change. For example, if `doc.myField` is a string in the common IDS and an object in the non-common IDS, it causes an index mapping conflict, because the common and non-common namespace documents share an index. When these mapping conflicts occur, the files aren't searchable through the TDP UI or API endpoints. As a workaround, customers can either create distinct, non-overlapping version numbers for their non-common IDSs or update the names of those IDSs.
- File reprocessing jobs can sometimes show fewer scanned items than expected when either a health check or out-of-memory (OOM) error occurs, but not indicate any errors in the UI. These errors are still logged in Amazon CloudWatch Logs. A fix for this issue is in development and testing.
- File reprocessing jobs can sometimes incorrectly show that a job finished with failures when the job actually retried those failures and then successfully reprocessed them. A fix for this issue is in development and testing.
- File edit and update operations are not supported on metadata and label names (keys) that include special characters. Metadata, tag, and label values can include special characters, but it’s recommended that customers use the approved special characters only. For more information, see Attributes.
- The File Details page sometimes displays an Unknown status for workflows that are either in a Pending or Running status. Output files that are generated by intermediate files within a task script sometimes show an Unknown status, too.
- Some historical protocols and IDSs are not compatible with the new `ids-to-lakehouse` data ingestion mechanism. The following protocols and IDSs are known to be incompatible with `ids-to-lakehouse` pipelines:
  - Protocol: `fcs-raw-to-ids` < v1.5.1 (IDS: `flow-cytometer` < v4.0.0)
  - Protocol: `thermofisher-quantstudio-raw-to-ids` < v5.0.0 (IDS: `pcr-thermofisher-quantstudio` < v5.0.0)
  - Protocol: `biotek-gen5-raw-to-ids` v1.2.0 (IDS: `plate-reader-biotek-gen5` v1.0.1)
  - Protocol: `nanotemper-monolith-raw-to-ids` v1.1.0 (IDS: `mst-nanotemper-monolith` v1.0.0)
  - Protocol: `ta-instruments-vti-raw-to-ids` v2.0.0 (IDS: `vapor-sorption-analyzer-tainstruments-vti-sa` v2.0.0)
Data Access and Management Known Issues
- On the Search (Classic) page, shortcuts created in browse view also appear in collections and as saved searches when they shouldn't. A fix for this issue is in development and testing and is scheduled for TDP v4.4.1.
- Data Apps won't launch in customer-hosted environments if the private subnets where the TDP is deployed are restricted and don't have outbound access to the internet. As a workaround, customers should enable the following AWS Interface VPC endpoint in the VPC that the TDP uses: `com.amazonaws.<AWS REGION>.elasticfilesystem`
- Data Apps return CORS errors in all customer-hosted deployments. As a workaround, customers should create an AWS Systems Manager (SSM) parameter using the following pattern: `/tetrascience/production/ECS/ts-service-data-apps/DOMAIN`. For `DOMAIN`, enter your TDP URL without the `https://` (for example, `platform.tetrascience.com`). A sketch of this workaround appears after this list.
- The Data Lakehouse Architecture doesn't support restricted, customer-hosted environments that connect to the TDP through a proxy and have no connection to the internet. A fix for this issue is in development and testing and is scheduled for a future release.
- On the File Details page, related files links don't work when accessed through the Show all X files within this workflow option. As a workaround, customers should select the Show All Related Files option instead. A fix for this issue is in development and testing and is scheduled for a future release.
- When customers upload a new file on the Search page by using the Upload File button, the page doesn't automatically update to include the new file in the search results. As a workaround, customers should refresh the Search page in their web browser after selecting the Upload File button. A fix for this issue is in development and testing and is scheduled for a future TDP release.
- Values returned as empty strings when running SQL queries on SQL tables can sometimes return `NULL` values when run on Lakehouse tables. As a workaround, customers taking part in the Data Lakehouse Architecture EAP should update any SQL queries that specifically look for empty strings to instead look for both empty string and `NULL` values.
- Query DSL queries run on indices in an OpenSearch cluster can return partial search results if the query puts too much compute load on the system. This behavior occurs because the OpenSearch `search.default_allow_partial_results` setting is configured as `true` by default. To help avoid this issue, customers should use targeted search indexing best practices to reduce query compute loads. A way to improve visibility into when partial search results are returned is currently in development and testing and is scheduled for a future TDP release.
- Text within the context of a RAW file that contains escape (`\`) or other special characters may not always index completely in OpenSearch. A fix for this issue is in development and testing and is scheduled for an upcoming release.
- If a data access rule is configured as [label] exists > OR > [same label] does not exist, then no file with the defined label is accessible to the Access Group. A fix for this issue is in development and testing and is scheduled for a future TDP release.
- File events aren't created for temporary (TMP) files, so they're not searchable. This behavior can also result in an Unknown state for Workflow and Pipeline views on the File Details page.
- When customers search for labels in the TDP UI's search bar that include either @ symbols or some unicode character combinations, not all results are always returned.
- The File Details page displays a `404` error if a file version doesn't comply with the configured Data Access Rules for the user.
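As a follow-up to the CORS workaround above, the following minimal boto3 sketch shows one way to create the SSM parameter. It assumes the parameter name is the literal pattern from the workaround and that the TDP domain is stored as the parameter value; the region and domain are examples, so confirm this interpretation with your account team before applying it.

```python
import boto3

# ASSUMPTIONS: parameter name is the literal pattern from the workaround, the
# TDP domain is stored as its value, and the region/domain are examples only.
ssm = boto3.client("ssm", region_name="us-east-1")
ssm.put_parameter(
    Name="/tetrascience/production/ECS/ts-service-data-apps/DOMAIN",
    Value="platform.tetrascience.com",  # your TDP URL without https://
    Type="String",
    Overwrite=True,
)
```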
TDP System Administration Known Issues
- The Data user policy doesn't allow users who are assigned the policy to create saved searches, even though it should grant the required functionality permissions. A fix for this issue is scheduled for TDP v4.4.1.
- Limited availability release Data Retention Policies don't consistently delete data. A fix for this issue is in development and testing and is scheduled for a future release.
- Failed files in the Data Lakehouse can't be reprocessed through the Health Monitoring page. Instead, customers should monitor and reprocess failed Lakehouse files by using the Data Reconciliation, File Processing, or Workflow Processing pages.
- The latest Connector versions incorrectly log the following errors in Amazon CloudWatch Logs:
  - `Error loading organization certificates. Initialization will continue, but untrusted SSL connections will fail.`
  - `Client is not initialized - certificate array will be empty`

  These organization certificate errors have no impact and shouldn't be logged as errors. A fix for this issue is currently in development and testing, and is scheduled for an upcoming release. There is no workaround to prevent Connectors from producing these log messages. To filter out these errors when viewing logs, customers can apply the following CloudWatch Logs Insights query filters when querying log groups. (Issue #2818)

  CloudWatch Logs Insights query example for filtering organization certificate errors:

  ```
  fields @timestamp, @message, @logStream, @log
  | filter message != 'Error loading organization certificates. Initialization will continue, but untrusted SSL connections will fail.'
  | filter message != 'Client is not initialized - certificate array will be empty'
  | sort @timestamp desc
  | limit 20
  ```
- If a reconciliation job, bulk edit of labels job, or bulk pipeline processing job is canceled, then the job's ToDo, Failed, and Completed counts can sometimes display incorrectly.
Upgrade Considerations
During the upgrade, there might be a brief downtime when users won't be able to access the TDP user interface and APIs.
After the upgrade, the TetraScience team verifies that the platform infrastructure is working as expected through a combination of manual and automated tests. If any failures are detected, the issues are immediately addressed, or the release can be rolled back. Customers can also verify that TDP search functionality continues to return expected results, and that their workflows continue to run as expected.
For more information about the release schedule, including the GxP release schedule and timelines, see the Product Release Schedule.
For more details on upgrade timing, customers should contact their CSM.
Security
TetraScience continually monitors and tests the TDP codebase to identify potential security issues. Various security updates are applied to the following areas on an ongoing basis:
- Operating systems
- Third-party libraries
Quality Management
TetraScience is committed to creating quality software. Software is developed and tested following the ISO 9001-certified TetraScience Quality Management System. This system ensures the quality and reliability of TetraScience software while maintaining data integrity and confidentiality.
Other Release Notes
To view other TDP release notes, see Tetra Data Platform Release Notes.