TDP v3.6.1 Release Notes

Release date: 21 November 2023 (Last updated: 7 March 2024)

TetraScience has released the next version of the Tetra Data Platform (TDP), version 3.6.1. This release focuses on security updates and resolves several known issues. It also gives customers the ability to specify custom Source Types when they define a pipeline’s trigger conditions in the TDP.

📘

NOTE

Keep in mind the following:

  • Items labeled as New functionality include features that weren’t previously available in the TDP.
  • Enhancements are modifications to existing functionality that improve performance or usability, but don't alter the function or intended use of the system.
  • Features marked with an asterisk (*) are for usability, supportability, or troubleshooting, and do not affect Intended Use for validation purposes. Beta Release features are not suitable for GxP use.

Security

TetraScience continually monitors and tests the TDP codebase to identify potential security issues. Various security updates were applied to the following areas:

  • Operating systems
  • Third-party libraries

Installation and Deployment

There are multiple TDP deployment options available to customers, each with its own set of system requirements.

The following enhancements were introduced for TDP installation and deployment in TDP v3.6.1.

Enhancements for Installation and Deployment

Minor Tetra IoT Layer Database Update

The Amazon Relational Database Service (Amazon RDS) version that supports the Tetra IoT Layer was updated to help improve performance and scalability.

Bug Fixes

The following bugs are now fixed.

Data Integrations Bug Fixes

  • Files uploaded to the TDP by Agents that use a Tetra Hub proxy now consistently include the correct metadata values for integrationType ('datahub') and integrationId ('$hubId'). These values previously appeared in the system with an incorrect integrationType value of 'api'. The files also incorrectly displayed the following hardcoded API integrationId: '6f166302-df8a-4044-ab4b-7ddd3eefb50b'.

Data Harmonization and Engineering Bug Fixes

  • Customers can now specify a custom Source Type value when they define a pipeline’s trigger conditions in the TDP. For more information, see Step 1: Define Trigger Conditions in Set Up and Edit Pipelines.
  • Adding a trigger condition to a new or existing pipeline in the TDP no longer crashes the New Pipeline and Edit Pipeline pages intermittently.

Data Access and Management Bug Fixes

  • On the IDS Details page, in the ERD tab, Entity Relationship Diagrams are no longer created with table names that contain an unnecessary root prefix.

Known and Possible Issues

The following are known and possible issues for TDP v3.6.1.

Data Integrations Known Issues

  • On the Command Details page, if a command has no response (for example, if the request's status is Pending), the Response section displays the following error:

     "ERROR": {
       "message": "src property must be a valid json object"
     }
    

    When this error appears, command processing isn't affected and no action is needed. A fix for this issue is in development and testing and is scheduled for a future TDP release. For more information, see View Command Details.

  • To edit or remove labels from Pluggable Connectors in the TDP, customers might need to refresh the Edit Connector Information page first.

  • When installing a Tetra Hub on a host server that already has an AWS Systems Manager registration key, the Amazon ECS container agent startup fails. An AccessDenied error is then logged in the agent’s Amazon CloudWatch Logs. In TDP v3.6.0, the Hub installer automatically detects the issue and provides instructions to fix it.

  • The Tetra Hub installation script doesn’t detect an existing Amazon Elastic Compute Cloud (Amazon EC2) instance role on a host server. If an existing AWS Identity and Access Management (IAM) role is attached to the instance, the Hub’s Amazon ECS service attempts to use it, and the Hub’s Amazon ECS instance registration process fails. A fix for this issue is currently in development and testing for a future TDP v3.6.x patch release. As a workaround, customers can detach the Amazon EC2 IAM role from the Amazon EC2 instance, and then rerun the Hub installation script. For more information, see Why Did the Amazon ECS Instance Registration Process Fail During Hub Installation?

  • When installing or rebooting a Tetra Hub, the Hub’s Health status incorrectly displays as CRITICAL for a short time in the TDP UI. After the TDP receives the Hub’s initial metrics and proxy status, the Hub’s status displays as Online. No action is needed, and no alarms or notifications are generated.

  • The Integration Events tab on the Health Monitoring Dashboard might display a loading spinner if an Agent is configured with no file path (filePath) and hasn't produced any file events (fileEvents).

Data Harmonization and Engineering Known Issues

  • In Browse view on the Search Files page, the Edit Labels on <#> Searched Files action processes all of an organization’s files in the Data Lake, not just the searched files. A fix for this issue is in development and testing and planned for TDP v4.0.0. List view on the Search Files page is unaffected by this defect.
  • Files with more than 20 associated documents (high-lineage files) can cause errors during Elasticsearch indexing and reconciliation. These errors do not impact non-lineage indexing actions.
  • Elasticsearch index mapping conflicts can occur when a client or private namespace introduces a backwards-incompatible data type change. For example, if doc.myField is a string in the common IDS and an object in a non-common IDS, an index mapping conflict occurs, because common and non-common namespace documents share an index. When these mapping conflicts occur, the affected files aren’t searchable through the TDP UI or API endpoints. As a workaround, customers can either create distinct, non-overlapping version numbers for their non-common IDSs or update the names of those IDSs.
  • File reprocessing jobs can sometimes show fewer scanned items than expected when a health check or out-of-memory (OOM) error occurs, without indicating any errors in the UI. These errors are still logged in Amazon CloudWatch Logs. A fix for this issue is in development and testing.
  • File reprocessing jobs can sometimes incorrectly show that a job finished with failures when the job actually retried those failures and then successfully reprocessed them. A fix for this issue is in development and testing.
  • On the Pipeline Manager page, pipeline trigger conditions that customers set with a text option must exactly match all of the characters entered in the text field, including any trailing spaces.
  • File edit and update operations are not supported on metadata and label names (keys) that include special characters. Metadata, tag, and label values can include special characters, but it’s recommended that customers use the approved special characters only. For more information, see Attributes.
  • The File Details page sometimes displays an Unknown status for workflows that are either in a Pending or Running status. Output files that are generated by intermediate files within a task script sometimes show an Unknown status, too.
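The index mapping conflict described above can be illustrated with a short, self-contained sketch. This is Python for illustration only; the documents, field names, and type-inference logic are hypothetical and are not part of the TDP or Elasticsearch APIs:

```python
# Illustration only: how one field indexed with two incompatible types
# produces a mapping conflict when both documents share an index.

def infer_field_types(doc, prefix=""):
    """Infer a flat {field_path: type_name} mapping from a document."""
    mapping = {}
    for key, value in doc.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            mapping[path] = "object"
            mapping.update(infer_field_types(value, prefix=f"{path}."))
        else:
            mapping[path] = type(value).__name__
    return mapping

def find_conflicts(mapping_a, mapping_b):
    """Return field paths whose inferred types disagree between two mappings."""
    return sorted(
        path
        for path in mapping_a.keys() & mapping_b.keys()
        if mapping_a[path] != mapping_b[path]
    )

# A common-IDS document indexes doc.myField as a string ...
common_doc = {"doc": {"myField": "value"}}
# ... while a non-common IDS document indexes it as an object.
non_common_doc = {"doc": {"myField": {"nested": "value"}}}

conflicts = find_conflicts(
    infer_field_types(common_doc), infer_field_types(non_common_doc)
)
print(conflicts)  # → ['doc.myField']
```

The workaround in the note above avoids this situation by keeping common and non-common IDS versions (or names) distinct, so incompatible mappings never land in the same index.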

Data Access and Management Known Issues

  • File events aren’t created for temporary (TMP) files, so they’re not searchable. This behavior can also result in an Unknown state for Workflow and Pipeline views on the File Details page.
  • When customers search for labels that include @ symbols in the TDP UI’s search bar, not all results are always returned.
  • When customers search for some unicode character combinations in the TDP UI’s Search bar, not all results are always returned.
  • If customers modify an existing collection of search queries by adding a new filter condition from one of the Options modals (Basic, Attributes, Data (IDS) Filters, or RAW EQL), but don't select the Apply button, the existing query is deleted. To modify the filters for an existing collection, customers must select the Apply button in the Options modal before updating the collection. For more information, see How to Save Collections and Shortcuts.

TDP System Administration Known Issues

  • The latest Connector versions incorrectly log the following errors in Amazon CloudWatch Logs:

    • Error loading organization certificates. Initialization will continue, but untrusted SSL connections will fail.
    • Client is not initialized - certificate array will be empty

    These organization certificate errors have no impact and shouldn’t be logged as errors. A fix for this issue is currently in development and testing, and is scheduled for an upcoming release. There is no workaround to prevent Connectors from producing these log messages. To filter out these errors when viewing logs, customers can apply the following CloudWatch Logs Insights query filters when querying log groups. (Issue #2818)

    CloudWatch Logs Insights Query Example for Filtering Organization Certificate Errors

      fields @timestamp, @message, @logStream, @log
      | filter message != 'Error loading organization certificates. Initialization will continue, but untrusted SSL connections will fail.'
      | filter message != 'Client is not initialized - certificate array will be empty'
      | sort @timestamp desc
      | limit 20
  • If a reconciliation job, bulk edit of labels job, or bulk pipeline processing job is canceled, then the job’s ToDo, Failed, and Completed counts can sometimes display incorrectly.
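The Insights query above simply excludes two literal messages. If the list of benign messages grows, the same filter chain can be generated programmatically. A minimal sketch, in Python for illustration only; the helper function is hypothetical and not a TetraScience or AWS tool:

```python
# Illustration only: compose a CloudWatch Logs Insights query string that
# excludes a list of known benign messages. build_insights_query is a
# hypothetical helper, not part of the TDP or any AWS SDK.

BENIGN_MESSAGES = [
    "Error loading organization certificates. Initialization will continue, "
    "but untrusted SSL connections will fail.",
    "Client is not initialized - certificate array will be empty",
]

def build_insights_query(excluded_messages, limit=20):
    """Build an Insights query that filters out each excluded message."""
    lines = ["fields @timestamp, @message, @logStream, @log"]
    lines += [f"| filter message != '{msg}'" for msg in excluded_messages]
    lines += ["| sort @timestamp desc", f"| limit {limit}"]
    return "\n".join(lines)

query = build_insights_query(BENIGN_MESSAGES)
print(query)
```

The resulting string matches the example query above and can be pasted into the CloudWatch Logs Insights query editor (or passed to a query API) against the relevant log groups.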

Upgrade Considerations

During the upgrade, there might be a brief period of downtime during which users can't access the TDP user interface or APIs.

After the upgrade, the TetraScience team verifies that the platform infrastructure is working as expected through a combination of manual and automated tests. If any failures are detected, the issues are immediately addressed, or the release can be rolled back. Customers can also verify that TDP search functionality continues to return expected results, and that their workflows continue to run as expected.

For more information about the release schedule, including the GxP release schedule and timelines, see the Product Release Schedule.

For more details about the timing of the upgrade, customers should contact their CSM.

📘

Quality Management

TetraScience is committed to creating quality software. Software is developed and tested by using the ISO 9001-certified TetraScience Quality Management system. This system ensures the quality and reliability of TetraScience software while maintaining data confidentiality and integrity.

Other Release Notes

To view other TDP release notes, see Tetra Data Platform (TDP) Release Notes.