Convert Tetra Data to Lakehouse Tables
NOTE
The Data Lakehouse Architecture is available to all customers as part of an early adopter program (EAP) and will continue to be updated in future TDP releases. If you are interested in participating in the early adopter program, please contact your customer success manager (CSM).
To automatically create open-format Lakehouse tables from your Tetra Data, contact your customer success manager (CSM) and provide them with the target IDS(s) and versions that you want to backfill data for.
With this information, your CSM or account manager will help you do the following:
- Activate a Steady State process for translating any new data from target IDS versions into IDS Delta Tables, File Info Tables, and File Attribute Tables. These Lakehouse tables mirror the legacy Amazon Athena table structure.
- Backfill your historical Tetra Data from target IDS versions into Lakehouse tables (IDS Delta Tables, File Info Tables, and File Attribute Tables).
After your data is converted to IDS Delta Tables, you can also use Tetraflow pipelines to define and schedule data transformations in familiar SQL and generate custom, use case-specific Lakehouse tables that are optimized for downstream analytics applications.
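As a rough illustration of the kind of transformation a Tetraflow pipeline could run, the following sketch builds a use case-specific table from an IDS Delta Table. All table and column names here (`ids_delta_table_v1`, `analytics_results`, `file_id`, `sample_name`, `peak_area`, `retention_time`) are hypothetical placeholders, not actual TDP schema names; substitute the IDS Delta Table and fields that match your target IDS versions.

```sql
-- Hypothetical sketch only: table and column names are placeholders,
-- not actual TDP schema names.
-- Build a use case-specific Lakehouse table from an IDS Delta Table
-- for downstream analytics.
CREATE TABLE IF NOT EXISTS analytics_results AS
SELECT
  file_id,          -- placeholder: identifier of the source file
  sample_name,      -- placeholder: fields from the target IDS
  peak_area,
  retention_time
FROM ids_delta_table_v1        -- placeholder IDS Delta Table name
WHERE peak_area IS NOT NULL;   -- keep only rows with a reported result
```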
To view Lakehouse tables after you create them, see Query SQL Tables in the TDP. For more information, see Lakehouse Data Access.
IMPORTANT
When querying data in your Lakehouse tables, make sure that you apply Lakehouse data access best practices. Whether you're running SQL queries on the new tables or setting up a Tetraflow pipeline, applying these patterns ensures that downstream datasets use the latest records and that SQL query results return the most current data.
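As a rough sketch of one such pattern, the query below returns only the most recent record per source file by ranking rows within each file. The table and column names (`ids_delta_table_v1`, `file_id`, `modified_at`) are assumptions made for this example; see Lakehouse Data Access for the documented best practices and actual schema details.

```sql
-- Hypothetical sketch: return only the latest record per source file.
-- Table and column names (ids_delta_table_v1, file_id, modified_at) are
-- placeholders; see Lakehouse Data Access for the documented patterns.
WITH ranked AS (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY file_id        -- one group per source file
      ORDER BY modified_at DESC   -- newest record first
    ) AS row_num
  FROM ids_delta_table_v1
)
SELECT *
FROM ranked
WHERE row_num = 1;                -- keep only the most recent row per file
```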