# StreamNative Lakehouse Tables

StreamNative Lakehouse Tables provide a unified way to expose streaming topics as open table formats, such as Apache Iceberg and Delta Lake, directly within StreamNative Cloud. With Lakehouse Tables, data produced to Pulsar or Kafka-compatible topics is automatically stored in object storage in a transactional, analytics-ready table format. This enables seamless integration between real-time data streams and downstream analytics, AI/ML, and governance systems.

## Overview

StreamNative Lakehouse Tables bridge the gap between streaming and batch systems by converting message data from topics into table-backed datasets. Each Lakehouse Table maintains:
- Metadata (schema, manifest lists, snapshots)
- Data files (Parquet/columnar format)
- Transaction logs (for table evolution)
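This metadata can be pictured as a chain of immutable snapshots, each pointing at the set of data files that make up one table version. Below is a minimal, stdlib-only sketch of that bookkeeping; the class and field names are illustrative, not StreamNative's actual API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Snapshot:
    """One immutable table version: an id plus the data files it covers."""
    snapshot_id: int
    manifest: tuple  # paths of the Parquet data files in this version


class TableMetadata:
    """Illustrative model of what a table's metadata layer tracks."""

    def __init__(self, schema):
        self.schema = schema
        self.snapshots = []  # append-only history, oldest first

    def commit(self, data_files):
        """Create a new snapshot covering all prior files plus the new ones."""
        previous = self.snapshots[-1].manifest if self.snapshots else ()
        snap = Snapshot(len(self.snapshots), previous + tuple(data_files))
        self.snapshots.append(snap)
        return snap

    def current(self):
        return self.snapshots[-1]


meta = TableMetadata(schema={"id": "long", "event": "string"})
meta.commit(["data/part-000.parquet"])
meta.commit(["data/part-001.parquet"])
print(meta.current().manifest)
```

Because the history is append-only, old snapshots stay readable after new commits, which is what makes time travel and incremental reads possible.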
Supported table formats:

- Apache Iceberg
- Delta Lake (Delta 2.0 and above)
Data flows through each table in four steps:

- Ingested from the streaming topic
- Serialized into Parquet files
- Committed to the table as immutable snapshots
- Made available for SQL queries and analytical engines
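The ingest, serialize, commit cycle above can be sketched in a few lines of stdlib Python. This is a simulation, not StreamNative's implementation: files and snapshots live in memory, and JSON stands in for Parquet so the sketch has no dependencies.

```python
import json


class LakehouseWriter:
    """Simplified sketch of the ingest -> serialize -> commit cycle."""

    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.buffer = []     # messages ingested but not yet flushed
        self.files = {}      # stand-in for object storage: path -> bytes
        self.snapshots = []  # committed, immutable table versions

    def ingest(self, message):
        """Step 1: take a message from the streaming topic."""
        self.buffer.append(message)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Steps 2-3: serialize the batch to a file, then commit a snapshot."""
        if not self.buffer:
            return
        path = f"data/part-{len(self.files):05d}.json"  # Parquet in practice
        self.files[path] = json.dumps(self.buffer).encode()
        self.snapshots.append(tuple(self.files))  # snapshot = list of files
        self.buffer = []

    def scan(self):
        """Step 4: a query reads the files in the latest snapshot."""
        rows = []
        for path in self.snapshots[-1]:
            rows.extend(json.loads(self.files[path]))
        return rows


writer = LakehouseWriter(batch_size=2)
for i in range(4):
    writer.ingest({"id": i})
print(writer.scan())
```

Note the key property: readers only ever see committed snapshots, never the in-flight buffer, so a query observes the table as of its last successful commit.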
Tables are stored in cloud object storage, including:

- Amazon S3
- Google Cloud Storage
- Azure Blob Storage
Schema management includes:

- Schema inference from topics
- Backward/forward compatible evolution
- Safe writes with schema enforcement
- Automatic mapping to Iceberg/Delta schemas

Users retain full control over table evolution policies.
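Backward-compatible evolution typically means new optional fields may be added, while removing a field or changing its type is rejected. The sketch below shows such an enforcement check; the rules and the schema representation are simplified for illustration and are not StreamNative's actual policy engine.

```python
def is_backward_compatible(old, new):
    """Return True if data written with `new` can still be read with `old`.

    Simplified rules: every old field must survive with the same type,
    and fields added in `new` must be optional (nullable).
    """
    for name, spec in old.items():
        if name not in new:
            return False  # removing a field breaks existing readers
        if new[name]["type"] != spec["type"]:
            return False  # a type change breaks existing readers
    for name, spec in new.items():
        if name not in old and not spec.get("optional", False):
            return False  # a new required field invalidates older data
    return True


v1 = {"id": {"type": "long"}, "event": {"type": "string"}}
v2 = {**v1, "ts": {"type": "timestamp", "optional": True}}    # safe addition
v3 = {"id": {"type": "string"}, "event": {"type": "string"}}  # retyped `id`

print(is_backward_compatible(v1, v2))  # adding an optional field is allowed
print(is_backward_compatible(v1, v3))  # retyping a field is rejected
```

Schema enforcement on write applies the same idea in the other direction: an incoming batch that does not conform to the table's current schema is rejected before any file is committed.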
Every table provides transactional guarantees:

- ACID transactions
- Snapshot isolation
- Time travel (via historical snapshots)
- Incremental reads
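Time travel falls out of the snapshot model: since every commit is an immutable snapshot with a timestamp, reading "the table as of time T" reduces to picking the latest snapshot committed at or before T. A small sketch (the timestamps and snapshot ids are made up):

```python
import bisect

# Append-only commit history: (commit_time, snapshot_id), sorted by time.
history = [(100, "snap-0"), (200, "snap-1"), (300, "snap-2")]


def snapshot_as_of(ts):
    """Time travel: return the latest snapshot committed at or before `ts`."""
    times = [t for t, _ in history]
    i = bisect.bisect_right(times, ts)
    if i == 0:
        raise ValueError("no snapshot exists at or before this time")
    return history[i - 1][1]


print(snapshot_as_of(250))  # a query at t=250 sees snap-1
print(snapshot_as_of(300))  # a query at an exact commit time sees that commit
```

Incremental reads use the same history the other way around: a consumer remembers the last snapshot it processed and reads only the files added by later commits.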
## Full Interoperability with Data and AI Platforms

Because the tables use open formats, they can be read by:
- Databricks
- Snowflake
- BigQuery Managed Tables
- Apache Spark, Flink, and Trino
- StarTree and Pinot
- DuckDB
- pandas & PyArrow