AI Services FAQs

The following are answers to frequently asked questions about TetraScience AI Services. If you can't find an answer to your question, contact your customer account leader.

How do I get access to TetraScience AI Services?

TetraScience AI Services are currently available to Scientific AI Lighthouse (SAIL) Partners only and must be activated in coordination with TetraScience. For more information, contact your customer account leader.

What APIs are available for TetraScience AI Services?

AI Services provide five core APIs that enable AI-driven tasks:

  • Submit an online (synchronous) inference request (/inference/online): Performs real-time ML inference and returns results synchronously.
  • Submit an offline (asynchronous) inference request (/inference): Submits a file-based inference request asynchronously, with support for partial file staging success. Returns a request ID and status URL for tracking progress.
  • Health check (/health): Returns a 200 response if the service is running and supports error simulation for testing.
  • Get inference request status (/inference/{inferenceId}/status): Retrieves the current status and details of an inference request. Includes file-level details and partial success information when applicable.
  • Get metrics for an online (synchronous) inference request (/inference/online/{requestId}/metrics): Retrieves metrics and status information for a completed online (synchronous) inference request. Metrics are stored for tracking and debugging purposes.

For more information about how to call these endpoints, see the TetraScience AI Services User Guides.
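As an illustration of how a client might prepare a call to the online inference endpoint, here is a minimal Python sketch. Only the /inference/online path comes from the API list above; the host name, payload shape, and bearer-token auth scheme are assumptions for illustration.

```python
import json
from urllib import request


def build_online_inference_request(base_url: str, payload: dict, token: str) -> request.Request:
    """Build (but do not send) a POST request for the online (synchronous)
    inference endpoint. The auth header scheme is an assumption."""
    return request.Request(
        url=f"{base_url}/inference/online",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # bearer auth is assumed
        },
        method="POST",
    )
```

Sending the built request with `urllib.request.urlopen` (or swapping in any HTTP client) would then return the synchronous inference result.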

What types of AI workflows are supported?

AI Services support three Scientific AI Workflow types:

  • LLM workflows (online/synchronous): Language model tasks
  • Image Analysis workflows (offline/asynchronous): Scientific instrument image processing
  • Data Analysis workflows (online/synchronous): Data processing and analysis tasks
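The mapping above (LLM and Data Analysis run online, Image Analysis runs offline) can be captured in a small lookup, for example when deciding whether to wait on a response or poll for status. The string identifiers below are hypothetical labels, not official workflow-type names.

```python
# Execution mode for each of the three supported Scientific AI Workflow
# types, per the list above. Key names are hypothetical labels.
WORKFLOW_MODES = {
    "llm": "online",              # synchronous
    "image_analysis": "offline",  # asynchronous
    "data_analysis": "online",    # synchronous
}


def is_synchronous(workflow_type: str) -> bool:
    """Return True if the workflow type runs online (synchronously)."""
    try:
        return WORKFLOW_MODES[workflow_type] == "online"
    except KeyError:
        raise ValueError(f"Unknown workflow type: {workflow_type!r}")
```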

How does the AI Agent system work?

The platform supports dynamic deployment of AI agents that encapsulate task-specific instructions and logic:

  • Category/Alias-based Routing: Route requests to appropriate agents using categories or aliases (for example, Coding Agent and Pipeline Agent).
  • Agent Behavior Configuration: Define agent behavior through instruction templates and knowledge base associations.
  • Agent Metadata Management: Associate metadata with agents for better organization and discovery.
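Category/alias-based routing can be sketched as a registry that resolves a request key to an agent record. This is a toy illustration of the pattern, not the platform's actual implementation; all class and field names are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Minimal agent record: instruction template plus knowledge-base and
    metadata associations, mirroring the capabilities listed above."""
    name: str
    category: str
    instruction_template: str
    knowledge_bases: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)


class AgentRegistry:
    """Resolve a request to an agent by alias first, then by category."""

    def __init__(self):
        self._by_alias = {}
        self._by_category = {}

    def register(self, agent: Agent, aliases=()):
        self._by_category.setdefault(agent.category, agent)
        for alias in aliases:
            self._by_alias[alias] = agent

    def resolve(self, key: str) -> Agent:
        agent = self._by_alias.get(key) or self._by_category.get(key)
        if agent is None:
            raise KeyError(f"No agent registered for {key!r}")
        return agent
```

For example, registering a Coding Agent under the alias "coder" lets callers route by either the alias or the "coding" category.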

What is Knowledge Base Integration for RAG?

You can enable Retrieval-Augmented Generation through the following integrated knowledge base capabilities:

  • Document Upload and Indexing: Upload and index documents to support agent-specific knowledge bases.
  • Hybrid Search: Execute semantic and keyword searches over knowledge bases using OpenSearch or similar engines.
  • Agent-Knowledge Base Association: Link specific knowledge bases to agents for contextual responses.
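Hybrid search combines a keyword-match score with a semantic (vector-similarity) score. The toy sketch below shows the scoring idea only; a real deployment would delegate both to the search engine (for example, OpenSearch), and the weighting scheme here is an assumption.

```python
import math


def keyword_score(query: str, doc: str) -> float:
    """Fraction of query tokens that appear in the document (lexical match)."""
    q = set(query.lower().split())
    d = set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0


def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors (semantic match)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5) -> float:
    """Blend semantic and keyword scores; alpha weights the semantic side."""
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(query, doc)
```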

Can I upload files for inference?

Yes, AI Services support multiple input methods:

  • JSON and text files: Direct upload through the UI for quick predictions
  • Image upload: Single or multiple images from scientific instruments
  • Pipeline integration: Integrate inference into data pipelines using task scripts

How do I track my inference jobs?

You can track inference job progress through the Tetra Data Platform (TDP) Health Monitoring Dashboard.

How does resource scaling work?

Dynamic resource management includes the following:

  • Auto-scaling: Databricks compute resources scale up to meet demand
  • Intelligent maintenance: Resources are kept available between sequential requests
  • Auto-shutdown: Resources scale down after five minutes of inactivity
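The auto-shutdown rule above can be expressed as a simple idle-timeout check. Only the five-minute figure comes from the list; the function and timestamp handling are an illustrative sketch.

```python
# Five minutes of inactivity triggers scale-down, per the policy above.
IDLE_TIMEOUT_SECONDS = 5 * 60


def should_scale_down(last_request_time: float, now: float) -> bool:
    """Return True once the compute resource has been idle for the full
    timeout (timestamps in seconds, e.g. from time.time())."""
    return (now - last_request_time) >= IDLE_TIMEOUT_SECONDS
```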

Can AI Services integrate with existing pipelines?

Yes, AI Services can be integrated into existing Tetra Data Pipelines by specifying endpoints and input data locations in your task scripts.
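For offline (asynchronous) requests, a task script would typically poll the status endpoint until the job finishes. Here is a minimal sketch: only the /inference/{inferenceId}/status path comes from the API list above, while the terminal status names and response shape are assumptions. `fetch_status` stands in for whatever HTTP client the task script uses.

```python
import time


def wait_for_inference(fetch_status, inference_id: str,
                       poll_interval: float = 5.0, max_polls: int = 100) -> dict:
    """Poll the status endpoint until the request reaches a terminal state.

    `fetch_status` is any callable that GETs the given path and returns the
    parsed JSON body as a dict. Terminal status names are assumptions.
    """
    for _ in range(max_polls):
        status = fetch_status(f"/inference/{inference_id}/status")
        if status.get("status") in ("SUCCEEDED", "FAILED", "PARTIAL_SUCCESS"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"Inference {inference_id} did not finish in time")
```

Injecting the HTTP call as a callable keeps the polling logic easy to test and lets the same helper work inside any pipeline task script.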

How secure are TetraScience AI Services?

AI Services extend existing TDP security practices.

What compliance features are available?

TetraScience AI Services include the following compliance capabilities:

  • AI clauses in contracts: Disclaimers and transparency requirements
  • AI workflow observability: Tracking for data and workflow drift detection
  • Comprehensive audit trail: Full traceability including code, hyperparameters, and training data
  • Monitoring and reporting: Operational metrics for usage, performance, and error rates