TDP v3.6.0 Release Notes
Release date: 2 November 2023 (Last updated: 3 July 2024)
TetraScience has released its next version of the Tetra Data Platform (TDP), version 3.6.0. This release focuses on supporting scientific outcomes by introducing significant performance and usability improvements, such as the following:
- Self-service pipelines (SSPs) are now available in all TDP deployment environments and work with an updated SDK to make it simpler and more secure to build SSPs
- Improved bulk actions for reprocessing pipeline data, editing labels, and reconciling files
- Improved search functionality to make it easier to find data by file path and content
- 2x-4x performance increases across all TDP deployment sizes
- Support for UTF-8 and Kanji characters in file names, contents, and labels
- Low-code DataWeave scripting is now supported in Pipeline configurations
- General availability of Tetra Hub and the Pluggable Connector Framework (previously in beta release), which allow new Tetra Integration functionalities to be released separately from the TDP
- (Beta release) Basic search user experience for scientists
Here are the details for what’s new in TDP v3.6.0.
NOTE
Keep in mind the following:
- Items labeled as New functionality include features that weren’t previously available in the TDP.
- Enhancements are modifications to existing functionality that improve performance or usability, but don't alter the function or intended use of the system.
- Features marked with an asterisk (*) are for usability, supportability, or troubleshooting, and do not affect Intended Use for validation purposes. Beta Release and early adopter program (EAP) features are not suitable for GxP use.
Data Integrations
Tetra Integrations automatically collect scientific data from different instruments and applications and centralize that data in the Tetra Scientific Data Cloud. You can also use them to send data to designated systems.
The following are new functionalities and enhancements introduced for data integrations in TDP v3.6.0.
New Functionality for Data Integrations
Pluggable Connectors are Now Generally Available*
The new Pluggable Connector Framework makes it possible for TetraScience to update and release new Connectors independent of a TDP release. Previously available in beta release only, Pluggable Connectors are now generally available and provide the following benefits:
- An expedited Connector development process through a common Connector framework
- Flexible upgrade deployments, because deployments can happen outside of a TDP version release
- Streamlined health monitoring and troubleshooting options (customers can now use Amazon CloudWatch to track each Connector’s activity logs and performance metrics)
The Tetra KEPServerEX Connector, Tetra AGU SDC Connector, and Tetra HRB Cellario Connector are currently available as Pluggable Connectors.
For more information, see Tetra Connectors.
New Connectors Page for Pluggable Connectors*
A new Connectors page was added to the left navigation menu under Data Sources. The page provides general information about each Pluggable Connector that a customer has created within their organization, including its configuration details and diagnostics.
Each Pluggable Connector also now has a Connector Details page (accessed through the Connectors page) that provides more granular information about individual Connectors. Customers can now edit a Pluggable Connector’s information or change the Connector’s status by using the Connector Details page.
For more information, see Create, Configure, and Update Pluggable Connectors.
Tetra Hub is Now Generally Available*
Tetra Hub is the on-premises connectivity component of the TDP. It facilitates secure data transfer to the Tetra Scientific Data Cloud through components called Connectors. Tetra Hub gives customers the option to release new Hub functionalities or patches without needing to upgrade the entire TDP, which can help reduce overhead and accelerate implementation. Previously available in beta release only, Hub sunsets the use of the Tetra Generic Data Connector (GDC), simplifying deployment. Hub also offers the following benefits that aren’t available in Data Hub (previously Tetra Data Hub):
- Hosts Pluggable Connectors
- Acts as a proxy for Tetra IoT Agents
For more information, see Tetra Hub.
Tetra Hub Can Now Act as a Proxy for Tetra IoT Agents*
Instruments that use the Tetra IoT Agent can now connect to the TDP through an on-premises Tetra Hub. Previously, the Tetra IoT Agent could connect directly to AWS through the Tetra IoT Layer only.
For more information, see Configure a Hub as a Proxy for a Tetra IoT Agent.
Improved Infrastructure Monitoring and Alarms for Tetra Hub*
Infrastructure-level notifications for Tetra Hub v2s are now sent to TetraScience Support by AWS automatically. These alerts contain no sensitive information and indicate a Hub’s state and failure reason only. This information helps the TetraScience team provide timely and effective support.
For more information and a list of the notification types, see Tetra Hub Monitoring and Alarms.
Tetra File-Log Agent Events Are Now Viewable Through the TDP UI
Tetra File-Log Agent events can now be viewed in the new Events Timeline tab on the File Details page, and the new Integration Events tab on the Health Monitoring page.
For more information, see View the File Details Page and Monitor Integration Events.
Tetra File-Log Agent Events Are Now Viewable Through the TetraScience API (Beta Release Endpoints)*
The following APIs also now return information about Tetra File-Log Agent events:
- Get activity events from all Agents—returns events from all Tetra File-Log Agents.
- Get Agent event types—returns a list of event names that a specific Tetra File-Log Agent generates.
The Get activity events from all Agents and Get Agent event types list endpoints are in beta release currently and may require changes in future TDP releases. For more information, customers must contact their CSM.
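As an illustration only, requests to these two beta endpoints could be assembled as sketched below. The base URL, endpoint paths, and query parameter names here are assumptions (the release notes don't publish them); customers should confirm the actual routes with their CSM before relying on them.

```python
# Hypothetical sketch only: the base URL, paths, and parameter names
# below are assumptions, not the documented TetraScience API.
import urllib.parse

BASE_URL = "https://api.tetrascience.com/v1"  # assumed base URL


def agent_events_url(page_size=50, base=BASE_URL):
    """Build a URL for the assumed 'Get activity events from all Agents' endpoint."""
    return f"{base}/agents/events?{urllib.parse.urlencode({'pageSize': page_size})}"


def agent_event_types_url(agent_id, base=BASE_URL):
    """Build a URL for the assumed 'Get Agent event types' endpoint."""
    return f"{base}/agents/{agent_id}/event-types"


print(agent_events_url())
print(agent_event_types_url("fl-agent-01"))
```

Because the endpoints are in beta, treating the routes as configuration (rather than hardcoding them) makes it easier to adjust if they change in a future TDP release.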
Enhancements for Data Integrations
To help improve usability, the following changes were made to the TDP user interface (UI).
Tetra Agents UI Improvements
- When creating or updating an Agent in the TDP, customers can more easily select from pre-existing Hubs (v1 or v2) or create necessary Service Users without leaving the Agent Wizard. Also, the Install Agent page is now named Install Agent Locally. For more information, see Create a New Agent.
- The File Upload API endpoint and the Generic Data Connector (GDC) now support specifying labels when uploading files.
- The Agents page now includes an Enabled filter, which by default only displays enabled Agents. The All and No filter options still allow customers to view deactivated Agents when needed. For more information, see Cloud Configuration of Tetra Agents.
- The Archive files with no checksum option now appears on the Path Configurations pane in the TDP UI. Previously it was available in the Tetra File-Log Agent Management Console and API only. This option is available for Agents version 4.3.2 and higher only. For more information, see Configure Tetra File-Log Agent FileWatcher Service.
- Tetra File-Log Agent paths now have a backslash (`\`) appended to them if one isn’t already present. Adding a backslash to the Agent’s paths ensures consistent behavior with the File-Log Agent Management Console.
Tetra Hub UI Improvements
- Parent proxy settings are now configurable on the Tetra Hub management console only.
Data Harmonization and Engineering
TetraScience provides many Tetra Data models as well as options for creating custom schemas. You can use these schematized representations of common scientific data in pipelines to automate data operations and transformations.
The following are new functionalities and enhancements introduced for data harmonization and engineering in TDP v3.6.0.
New Functionality for Data Harmonization and Engineering
SSPs are Now Generally Available in All TDP Deployment Environments
All TDP deployment environments can now create their own custom, self-service pipelines (SSPs). Previously, only customer-hosted (single tenant) deployments could use SSPs. The new SSP runtime environment also simplifies the way customers define their protocols through a new protocol.yml format.
To start using SSPs in a Tetra-hosted environment, customers must first do the following:
- Upgrade to TDP v3.6.0
- Upgrade to the latest versions of the TetraScience Software Development Kit (SDK 2.0) and TetraScience Command Line Interface (CLI)
- Build custom artifacts for their SSPs by using the latest TetraScience product versions
For more information, see Self-Service Tetra Data Pipelines.
Entity Relationship Diagrams are Now Available for IDSs
On the IDS Details page, a new ERD tab displays an interactive Entity Relationship Diagram that represents the relational schema for an Intermediate Data Schema (IDS). This new IDS view can help customers quickly understand the relationship between their IDSs' associated Athena tables and create more effective SQL queries.
For more information, see View IDSs and Their Details.
Low-Code DataWeave Scripting is Now Supported in Pipeline Configurations*
Customers can now input DataWeave scripts directly into pipeline configurations within the TDP UI. This new functionality provides a standard way to pass parameters between a DataWeave script and a task script.
NOTE
The new DataWeave protocol version isn’t backward compatible. The previous protocol expects a `fileUUID` in a parameter. The new protocol expects a DataWeave script.
Enhancements for Data Harmonization and Engineering
New TetraScience SDK 2.0 Makes it More Secure to Create SSPs
The new TetraScience SDK 2.0 provides more security when creating and using self-service Tetra Data pipelines (SSPs). SDK 2.0 replaces the legacy SDK. Customers should plan on rebuilding and releasing their existing protocols to use the new SDK 2.0 before the legacy one is deprecated.
For more information, see the TetraScience SDK 2.0 Release Notes.
NOTE
Existing SSPs and task scripts built with the previous design will continue to work during the deprecation period. The current estimated earliest deprecation date is Q4 of 2024.
Improved Pipeline Processing Load Performance
The File Processing page now loads 20 times faster than in previous TDP versions.
Connector Artifacts are Now Viewable in the TDP UI
Customers can now view Connector artifacts by using the Artifacts option in the left TDP navigation menu. These artifacts contain the definition, assets, and code for Tetra Connectors.
For more information, see View Connectors and Their Details.
Improved Bulk File Reprocessing
A new Bulk Reprocess button on the File Processing page provides customers the option to quickly reprocess multiple files by the following criteria:
- PIPELINE
- WORKFLOW STATE
- FOR LAST (time range in days)
- HOW MANY FILES
- JOB NAME
For more information, see Create a Bulk Pipeline Process Job.
NOTE
The Scan for Unprocessed Files action on the File Processing page uses a 30-day time period by default. To scan by a more specific or longer time frame, customers can use the Bulk Pipeline Process page.
Notification Emails Now Clearly Indicate Deployment Information
Pipeline notification emails now include the associated organization slug and infrastructure name in the subject line and in the body of the email.
UI and Other Improvements for Data Harmonization and Engineering
To help improve usability, the following changes were made to the TDP UI.
IDS UI Improvements
- On the IDS Details page, the Search Indices option has been removed. Removing this option from the TDP UI safeguards the system from accidental actions. To review indices, customers can contact their CSM.
Data Access and Management
Data in the Tetra Scientific Data Cloud is easily accessible through search in the TDP user interface, TetraScience API, and SQL queries. This harmonized content is standardized to allow comparisons across data sets, easy access from informatics applications, and reuse by advanced analytics and AI/ML.
The following are new functionalities and enhancements introduced for data access and management in TDP v3.6.0.
New Functionality for Data Access and Management
UTF-8 and Kanji Characters are Now Supported*
The TDP now supports UTF-8 and Kanji characters in file names, contents, and labels.
For more information, see Label Formatting.
(EAP) Tetra Snowflake Integration*
The new Tetra Snowflake Integration provides customers the ability to access their Tetra Data directly through Snowflake, in addition to the current TDP SQL interface.
The Tetra Snowflake Integration is available through an early adopter program (EAP) currently and may require changes in future TDP releases. For more information, see Tetra Snowflake Integration.
(Beta Release) New Basic Search for Scientists*
The new Basic Search page is designed so that scientific users can:
- Quickly search for RAW data in the TDP by the following criteria:
- Existing search bar
- File upload date
- Any populated recommended labels
- Existing saved searches (also referred to as Collections)
- Create, update, and manage saved searches
- Download multiple files at once with the new Bulk Download option.
The Basic Search page is in beta release currently and may require changes in future TDP releases. This experience is a non-breaking change and is activated for customers on request only.
For more information, see Basic Search (Beta Release).
Enhancements for Data Access and Management
Improved Discoverability of Search Functionality
- An improved Context Search feature now displays results returned by the TDP’s Search bar based on content in the primary (RAW) and schematized (IDS) versions of files, allowing for more powerful contextual search without metadata, tags, or labels. Now, when customers search for content found in an IDS file through the Search bar, the results show information from that IDS file and its related, source RAW file as well as any related IDS files. This enhancement is available for data that’s processed after the TDP v3.6.0 upgrade only. To apply this enhancement to historical data, customers must reindex the data by reconciling it, or contact their CSM for support. For more information, see Perform a Search on the Search Files Page and Reprocess Files.
- An improved Broad Search feature provides customers the ability to enter a portion of a file path into either the TDP Search bar or the TetraScience `/searchEql` endpoint’s `query_string` to return results, rather than the entire file path. For example, if you were to search for part of a filename, such as `lab123 experiment5`, then a file with the following path would now also be returned in the search results: `/lab123/instrumentB/user1_experiment5_20231212.dat`. This enhancement is available for data that’s processed after the TDP v3.6.0 upgrade only. To apply this enhancement to historical data, customers must either reindex the data by reconciling it, or contact their CSM for support. For more information, see Perform a Search on the Search Files Page and Reprocess Files.
- You can now copy metadata, tags, or labels from an expanded search record and paste the search-ready metadata, tags, and labels string into a search box.
- Nested fields within output (`IDS`) files are now searchable from the Search bar in the TDP UI. For more information, see Nested Types.
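For readers who use the `/searchEql` endpoint, the partial-path Broad Search behavior above can be sketched as a standard Elasticsearch `query_string` body. The field name `filePath` and the exact payload shape are assumptions based on generic Query DSL, not a confirmed TDP contract.

```python
import json


def broad_search_body(partial_path):
    """Assumed Elasticsearch-style body that matches part of a file path,
    as the improved Broad Search now permits (no full path required).
    The 'filePath' field name is an assumption for illustration."""
    return {
        "query": {
            "query_string": {
                "query": partial_path,
                "default_field": "filePath",  # assumed field name
            }
        }
    }


# For example, a query for part of a filename, which should now also
# match /lab123/instrumentB/user1_experiment5_20231212.dat
print(json.dumps(broad_search_body("lab123 experiment5"), indent=2))
```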
Improved EQL Search Functionality
- File paths are now indexed in a way that makes it more efficient and cost-effective to run RAW EQL queries. Now, wildcard prefixes aren't needed for text searches. For more information, see Search by Using Elasticsearch Query DSL.
- When running Query DSL queries, customers can now specify an index URL query string parameter. This new parameter determines the specific indexes that results are returned from, which helps reduce the compute load for each query. For more information, see Target Your Search by Index.
- The scalability of the Search files via Elasticsearch Query Language API endpoint was improved by reducing the resource requirements of requests that return large data sets.
- The Search files via Elasticsearch Query Language API endpoint now returns more specific error codes than the previous 400 status code responses.
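Targeting specific indexes could look like the following sketch. The parameter name `index` comes from the release notes; the base URL and the repeat-parameter encoding are assumptions for illustration.

```python
import urllib.parse


def search_eql_url(indexes, base="https://api.tetrascience.com/v1"):  # assumed base URL
    """Append the 'index' query string parameter once per target index,
    so results are returned only from those indexes (reducing compute load)."""
    query = urllib.parse.urlencode([("index", ix) for ix in indexes])
    return f"{base}/searchEql?{query}"


print(search_eql_url(["raw", "ids"]))
```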
Labels Now Support Backslashes
Label values now also support backslashes (`\`) in addition to letters, numbers, spaces, and the following symbols: plus signs (`+`), dashes (`-`), periods (`.`), and underscores (`_`).
For more information, see Label Formatting.
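The allowed character set can be expressed as a simple validation check. This is an illustrative sketch only, not the TDP's actual validation logic; in particular, real Unicode handling (such as Kanji support) may differ.

```python
import re

# Letters, numbers, underscores (via \w), spaces, plus signs, dashes,
# periods, and (new in v3.6.0) backslashes. Illustrative only; the
# TDP's real label validation may differ.
LABEL_VALUE_RE = re.compile(r"^[\w \+\-\.\\]+$")


def is_valid_label_value(value):
    """Return True if the value uses only the characters listed above."""
    return bool(LABEL_VALUE_RE.match(value))


print(is_valid_label_value(r"assay\run-1.2"))  # backslash now allowed
print(is_valid_label_value("bad|char"))        # pipe is not in the allowed set
```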
Improved File Processing Behavior
Processing a single file can no longer run a pipeline more than once, create duplicate TDP workflows, or send duplicate events to downstream systems.
Improved SQL Query Behavior
Amazon Athena SQL queries no longer return any stale or incorrect data when two workflows that are working on the same IDS file run and complete actions within 20 seconds of one another.
GxP Compliance
Good Manufacturing / Laboratory / Documentation / Machine Learning Practices (GxP) help make sure products such as drugs, medical devices, or active pharmaceutical ingredients are safe, consistent, and high quality. Establishing a universal framework for managing data across R&D and manufacturing operations provides the backbone for these compliance efforts.
The following are new functionalities and enhancements introduced for GxP compliance in TDP v3.6.0.
New Functionality for GxP Compliance
There is no new GxP compliance functionality in this release.
Enhancements for GxP Compliance
Out-of-Scope Entities and Events are Now Removed From the Audit Trail
The following entities and events unrelated to user actions that create, modify, or delete electronic records are now removed from the Part 11 Audit Trail for improved usability:
NOTE
This change does not remove any of the affected entities or events from the system. The entities and events that were removed from the Audit Trail are still logged and remain available upon request. These additional logs will also be made available through an upcoming System Log feature in TDP v4.0.
- Auth Token
- Database Credentials
- Filter Field
- Label
- Metadata
- Service User
- User
- User Setting
- Tag
- Workflow
- GIT Integration
- Task Script Profile
- Task Script Build
- Feature Flag
Also, Pipeline entities no longer require a change reason entry for Reprocess or Submit files for process actions when the Change Reason enabled in Audit Trail setting is activated.
For more information, see Entities and Logged Actions and Enable Change Reason for Audit Trail.
New Entities and Events Added to Audit Trail
The following entities are now part of the Audit Trail for GxP compliance purposes:
- Hubs (for Tetra Hub v2s)
- Pluggable Connector
File entities also now record download events in the Audit Trail.
For more information, see Entities and Logged Actions.
TDP System Administration
By using the TDP system administration features, customers can manage organizations, users, and roles as well as access logs, metrics, alerts and more.
The following are new functionalities and enhancements introduced for TDP system administration in TDP v3.6.0.
New Functionality for TDP System Administration
Organization Certificates for Pluggable Connectors*
The new Organization Certificates feature provides organization administrators the ability to upload their own self-signed Secure Sockets Layer (SSL) certificates to the TDP. After upload, Pluggable Connectors can now trust these self-signed SSL certificates when making requests to HTTPS endpoints that use those certificates for encryption.
For more information, see Manage Self-Signed SSL Certificates for Pluggable Connectors.
New AWS KMS Keys Have Automatic Key Rotation Activated*
All new AWS Key Management Service (AWS KMS) keys now automatically rotate their key material every year (approximately 365 days from their creation). For more information, see Rotating AWS KMS keys in the AWS Documentation.
IMPORTANT
For existing AWS KMS keys that were created before TDP v3.6.0, customers must activate automatic key rotation manually in the TDP by using a ts-admin role. For instructions, see AWS KMS Key Rotation.
New Commands Page*
The new Commands page provides organization administrators the ability to view, search, and check the status of the commands run within their organization. This functionality was previously available through the TetraScience Command Service API endpoints only.
For more information, see Command Service.
New API Endpoints and Search Fields for the Command Service*
The new List command actions (`commands/actions`) endpoint returns a list of all distinct actions (command types) in the system.
The following search fields were also added to the Search commands API endpoint:
- `status`, `action`, and `targetId` now support searching for multiple values (for example, `status=FAILURE&status=SUCCESS`).
- `sortBy` sorts response results by any of the following: `createdAt`, `updatedAt`, or `expiresAt`.
- The following date range options are also now supported: `createdAtBefore`, `createdAtAfter`, `expiresAtBefore`, `expiresAtAfter`, `updatedAtBefore`, and `updatedAtAfter`. These are all inclusive date ranges.
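Combining the new search fields might look like the following sketch. The field names (`status`, `sortBy`, `createdAtAfter`) come from the release notes above; the ISO 8601 date format and the URL assembly are assumptions for illustration.

```python
import urllib.parse


def search_commands_query(statuses=None, sort_by=None, created_after=None):
    """Build a query string for the Search commands endpoint using the
    new multi-value and date-range fields. The ISO 8601 timestamp
    format is an assumption, not a documented requirement."""
    params = []
    for status in statuses or []:
        params.append(("status", status))   # repeated for multi-value search
    if sort_by:
        params.append(("sortBy", sort_by))
    if created_after:
        params.append(("createdAtAfter", created_after))  # inclusive range
    return urllib.parse.urlencode(params)


print(search_commands_query(
    statuses=["FAILURE", "SUCCESS"],
    sort_by="createdAt",
    created_after="2023-11-02T00:00:00Z",
))
```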
Shared Settings Names Now Support Underscores*
Shared settings names now support underscores (`_`). Previously, shared settings names supported dashes (`-`), periods (`.`), and alphanumeric characters only.
For more information, see Add a Shared Setting.
Enhancements for TDP System Administration
Improved File Reconciliation and Health Monitoring
The TDP now provides better visibility into file errors and options for fixing those errors by introducing the following improvements:
- A new top-level Bulk Actions option in the left navigation menu provides quicker access to the following:
- Monitoring Bulk Label Edit jobs (adding or editing labels in bulk)
- Reconciliation Jobs (fixing files)
- Bulk Pipeline Process (file reprocessing)
- An easier way to view related system logs from AWS CloudWatch for errors
- On the Health Monitoring page, customers can now select between specific error codes, selected files, or only failed files when creating a Reconciliation Job (for more information, see File Failures)
- To help ensure previously deleted data isn't reprocessed, pipeline reprocessing now operates on the latest version of an existing file only (for more information, see Retrying and Reprocessing Files)
Connector Usage Is Now Shown on the Shared Settings Page
When a Connector is configured to use a shared setting or secret, the Usage count is now included on the Shared Settings page.
For more information, see Access the Shared Settings Page.
Performance and Scale
TetraScience continually works to improve TDP performance and scalability. The following are performance and scale improvements for TDP v3.6.0.
2x-4x Performance Increases across All TDP Deployment Sizes
Each available `EnvironmentSize` setting for the TDP includes the following performance enhancements for TDP v3.6.0:
- File registration rate increased 2x-4x
- Concurrent workflows and workflow creation rates increased 2x
- Search API (`SearchEql`) request rate increased 4x
For more information, see Deployment Size Options for TDP v3.6.x.
Security
TetraScience continually monitors and tests the TDP codebase to identify potential security issues. Various security updates were applied to the following areas:
- Operating systems
- Third-party libraries
Installation and Deployment
There are multiple TDP deployment options available to customers, each with its own set of system requirements.
The following are new functionalities and enhancements introduced for TDP installation and deployment in TDP v3.6.0.
New Functionality for Installation and Deployment
There is no new functionality for installation and deployment in this release.
Enhancements for Installation and Deployment
There are no new enhancements for installation and deployment in this release.
Bug Fixes
The following customer-reported bugs are now fixed.
Data Integration Bug Fixes
- When editing a user-defined Agent, customers can no longer create a Source Type value that includes unsupported, uppercase letters.
- Customers can now change the DNS used by an L7 proxy on a Connector’s Data Management page. (Issue #1786)
- The System Messages section of the Agent Management Console now accurately displays Disk Usage warnings and errors consistently. (Issue #2753)
- The Data Hub (previously Tetra Data Hub) installation script now detects whether any existing installations of Docker or the AWS Command Line Interface (AWS CLI) are compatible. (Issue #1940 and #2373)
- The Create an agent (`POST /v1/agents`) and Update an agent (`PUT /v1/agents/<agentId>`) API endpoints now return a `400` error code if the `integrationType` parameter is `"api"` and a non-empty `datahubId` parameter is specified. Previously, these endpoints returned a `200` status code and displayed an Agent Not Available message in the TDP UI when this happened.
- The Change agent connector (`PUT /v1/agents/<agentId>/connector`) API endpoint now returns a `400` error code if the `integrationType` parameter is `"api"` and a non-empty `datahubId` parameter is specified. The endpoint also now returns a `400` error code if the `integrationType` parameter is `"datahub"` and a `datahubId` parameter isn't specified. Previously, this endpoint returned a `200` status code and displayed an Agent Not Available message in the TDP UI when either of these configurations happened.
Data Harmonization and Engineering Bug Fixes
- If an IDS’s protocol doesn’t have a README file, the ReadMe tab no longer displays as blank on the IDS Details page for that IDS.
Data Access and Management Bug Fixes
- On the Search Files page, in the Labels & Advanced Filters dialog, the Select Field drop-down list now consistently displays all of the searchable fields within an IDS. (Issue #2643 and #2728)
Deprecated Features
The following features have been deprecated for this release or are now on a deprecation path.
Data Integration Deprecated Features
- The Amazon Simple Storage Service (Amazon S3) metadata `ts_processed_file_type` is now deprecated. Current and previous versions of Agents were incorrectly specifying `file` for all uploads, so future Agent versions will no longer populate this metadata key. The TDP now correctly calculates the value of the `file.type` property based on the actual file path and by ignoring the `ts_processed_file_type` Amazon S3 metadata.
For more information about TDP deprecations, see Tetra Product Deprecation Notices.
Known and Possible Issues
The following are known and possible issues for the TDP v3.6.0 release.
Data Integrations Known Issues
- On the Command Details page, if a command has no response (for example, if the request's status is Pending), the Response section displays the following error: `"ERROR": { "message": "src property must be a valid json object" }`. When this error appears, command processing isn't affected and no action is needed. A fix for this issue is in development and testing and is scheduled for a future TDP release. For more information, see View Command Details.
- When installing a Tetra Hub on a host server that already has an AWS Systems Manager registration key, the Amazon ECS container agent startup fails. An AccessDenied error is then logged in the agent’s Amazon CloudWatch Logs. In TDP v3.6.0, the Hub installer automatically detects the issue and provides instructions to fix it.
- The Tetra Hub installation script doesn’t detect an existing Amazon Elastic Compute Cloud (Amazon EC2) instance role on a host server if there is one. If there is an existing AWS Identity and Access Management (IAM) role, the Hub’s Amazon ECS service will attempt to use it. The Hub’s Amazon ECS instance registration process fails when this happens. A fix for this issue is currently in development and testing for a future TDP v3.6.x patch release. As a workaround, customers can detach the Amazon EC2 IAM role from the Amazon EC2 instance, and then rerun the Hub installation script. For more information, see Why Did the Amazon ECS Instance Registration Process Fail During Hub Installation?
- When installing or rebooting a Tetra Hub, the Hub’s Health status incorrectly displays as CRITICAL for a short time in the TDP UI. After the TDP receives the Hub’s initial metrics and proxy status, the Hub’s status displays as Online. No action is needed, and no alarms or notifications are generated.
- Files uploaded to the TDP by Agents that use a Tetra Hub proxy incorrectly appear in the system with an `'api'` value for their `integrationType`. The files also incorrectly display the following hardcoded API `integrationId`: `'6f166302-df8a-4044-ab4b-7ddd3eefb50b'`. This behavior shouldn’t impact pipelines or data processing. A fix for this issue is currently in development and testing for the TDP v3.6.1 patch release. After this issue is addressed, all new uploaded file versions will have the correct metadata (`integrationType 'datahub'` and `integrationId '$hubId'`).
- The Integration Events tab on the Health Monitoring Dashboard might present a spinner if an Agent is configured with no file path (`filePath`) and hasn't produced any file events (`fileEvents`).
Data Harmonization and Engineering Known Issues
- In Browse view on the Search Files page, the Edit Labels on <#> Searched Files action processes all of an organization’s files in the Data Lake, not just the searched files. A fix for this issue is in development and testing and planned for TDP v4.0.0. List view on the Search Files page is unaffected by this defect.
- Files with more than 20 associated documents (high-lineage files) can cause errors during Elasticsearch indexing and reconciliation. These errors do not impact non-lineage indexing actions.
- Elasticsearch index mapping conflicts can occur when a client or private namespace creates a backwards-incompatible data type change. For example, if `doc.myField` is a string in the common IDS and an object in the non-common IDS, it will cause an index mapping conflict, because the common and non-common namespace documents share an index. When these mapping conflicts occur, the files aren’t searchable through the TDP UI or API endpoints. As a workaround, customers can either create distinct, non-overlapping version numbers for their non-common IDSs or update the names of those IDSs.
- File reprocessing jobs can sometimes show fewer scanned items than expected when either a health check or out-of-memory (OOM) error occurs, but not indicate any errors in the UI. These errors are still logged in Amazon CloudWatch Logs. A fix for this issue is in development and testing.
- File reprocessing jobs can sometimes incorrectly show that a job finished with failures when the job actually retried those failures and then successfully reprocessed them. A fix for this issue is in development and testing.
- On the Pipeline Manager page, pipeline trigger conditions that customers set with a text option must match all of the characters that are entered in the text field. This includes trailing spaces, if there are any.
- File edit and update operations are not supported on metadata and label names (keys) that include special characters. Metadata, tag, and label values can include special characters, but it’s recommended that customers use the approved special characters only. For more information, see Attributes.
- The File Details page sometimes displays an Unknown status for workflows that are either in a Pending or Running status. Output files that are generated by intermediate files within a task script sometimes show an Unknown status, too.
Data Access and Management Known Issues
- File events aren’t created for temporary (TMP) files, so they’re not searchable. This behavior can also result in an Unknown state for Workflow and Pipeline views on the File Details page.
- When customers search for labels that include @ symbols in the TDP UI’s search bar, not all results are always returned.
- When customers search for some unicode character combinations in the TDP UI’s Search bar, not all results are always returned.
- If customers modify an existing collection of search queries by adding a new filter condition from one of the Options modals (Basic, Attributes, Data (IDS) Filters, or RAW EQL), but they don't select the Apply button, the previous, existing query is deleted. To modify the filters for an existing collection, customers must select the Apply button in the Options modal before updating the collection. For more information, see How to Save Collections and Shortcuts.
TDP System Administration Known Issues
- The latest Connector versions incorrectly log the following errors in Amazon CloudWatch Logs:
  - `Error loading organization certificates. Initialization will continue, but untrusted SSL connections will fail.`
  - `Client is not initialized - certificate array will be empty`
These organization certificate errors have no impact and shouldn’t be logged as errors. A fix for this issue is currently in development and testing, and is scheduled for an upcoming release. There is no workaround to prevent Connectors from producing these log messages. To filter out these errors when viewing logs, customers can apply the following CloudWatch Logs Insights query filters when querying log groups. (Issue #2818)
CloudWatch Logs Insights Query Example for Filtering Organization Certificate Errors
fields @timestamp, @message, @logStream, @log
| filter message != 'Error loading organization certificates. Initialization will continue, but untrusted SSL connections will fail.'
| filter message != 'Client is not initialized - certificate array will be empty'
| sort @timestamp desc
| limit 20
- If a reconciliation job, bulk edit of labels job, or bulk pipeline processing job is canceled, then the job’s ToDo, Failed, and Completed counts can sometimes display incorrectly.
Upgrade Considerations
During the upgrade, there might be a brief downtime when users won't be able to access the TDP user interface and APIs. There will also be a one- to three-hour data migration process running in the background. This data migration might cause the File Failures metric on the TDP Health Monitoring Dashboard to be inaccurate until the data migration is complete. No other functionality will be affected.
After the upgrade, the TetraScience team verifies that the platform infrastructure is working as expected through a combination of manual and automated tests. If any failures are detected, the issues are immediately addressed, or the release can be rolled back. Customers can also verify that TDP search functionality continues to return expected results, and that their workflows continue to run as expected.
For more information about the release schedule, including the GxP release schedule and timelines, see the Product Release Schedule.
For more details about the timing of the upgrade, customers should contact their CSM.
Quality Management
TetraScience is committed to creating quality software. Software is developed and tested by using the ISO 9001-certified TetraScience Quality Management system. This system ensures the quality and reliability of TetraScience software while maintaining data confidentiality and integrity.
Other Release Notes
To view other TDP release notes, see Tetra Data Platform (TDP) Release Notes.