With traditional cloud object storage, every new data source requires its own endpoint configuration, credentials, and validation before Snowflake can read from it.
This process becomes increasingly inefficient as the number of data sources grows. Attimis OneBucket consolidates all data sources into a single unified namespace bucket. Once Snowflake is connected to Attimis, no new configuration is needed as additional sources are added behind the unified endpoint.
Snowflake performance is influenced by how many storage endpoints it must interact with and where the data physically lives. When data is distributed across multiple storage environments, Snowflake must perform additional work, such as authenticating against each endpoint separately, reading from multiple locations, and combining results across environments.
Attimis removes all these bottlenecks by presenting OneBucket, a unified namespace where all data appears under the same logical bucket. Snowflake reads from one endpoint and performs operations as if all data were stored in one place.
Organizations often spend months migrating data from one location to another to reduce cloud costs or consolidate workloads. These migrations are not only expensive but also disruptive.
Attimis eliminates the need for these migrations: because Attimis exposes all underlying data sources through a single endpoint, Snowflake can query data without relocating it. Attimis enables cost reduction and workload consolidation without moving any data.
Organizations operate under strict data governance requirements that dictate how data must be stored: sensitive data must remain in approved locations. This requirement creates major challenges when teams want to distribute data across multiple environments. For example, under the General Data Protection Regulation (GDPR), companies are legally required to protect the personal data of individuals in the European Union (EU), including restrictions on transferring that data outside the EU. In this case, the traditional approach of data migration might violate governance policies and create GDPR compliance issues.
Attimis provides a secure, unified data access layer that allows platforms like Snowflake to query remote datasets in place, without requiring physical data migration. Because data remains in an approved, governed location, organizations can maintain full compliance with residency requirements and ensure sensitive data stays in its required environment.
Our goal is to demonstrate how Attimis OneBucket makes it easier to connect multiple remote data sources to Snowflake. While Snowflake can ingest data from a wide range of locations, each external source requires its own configuration, credentials, and validation. This leads to unnecessary complexity, slower onboarding, and fragmented data access.
Attimis OneBucket removes these challenges by providing a unified namespace bucket that presents all storage locations through a single endpoint. Instead of configuring Snowflake for each additional cloud or on-prem region, you can simply connect to Attimis OneBucket. Attimis handles all underlying integrations, making new data sources available in Snowflake without additional setup. Because the connection is always through the same Attimis endpoint, Snowflake treats all new locations as already validated. This eliminates repetitive tasks and keeps operations clean and simple.
With Attimis, Snowflake users can get a consistent experience across all datasets, regardless of where the data physically resides.
The integration process begins by creating a Snowflake external volume that connects to the Attimis storage location. An external volume acts as a bridge between Snowflake and Attimis. Because Attimis uses a unified namespace, Snowflake only needs to point to one endpoint, regardless of how many data sources there are behind it.
Attimis exposes standard S3-compatible parameters: a storage endpoint, a bucket name, and an access key/secret key pair. For example:
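(A minimal sketch: the volume name attimis_vol, the credential placeholders, and the endpoint are illustrative; attimis-0000 is the unified bucket name used throughout this walkthrough.)

CREATE OR REPLACE EXTERNAL VOLUME attimis_vol
  STORAGE_LOCATIONS = (
    (
      NAME = 'attimis-onebucket'
      STORAGE_PROVIDER = 'S3COMPAT'                    -- Attimis is addressed as S3-compatible storage
      STORAGE_BASE_URL = 's3compat://attimis-0000/'    -- the single unified namespace bucket
      CREDENTIALS = (AWS_KEY_ID = '<attimis_access_key>' AWS_SECRET_KEY = '<attimis_secret_key>')
      STORAGE_ENDPOINT = '<attimis-endpoint>'          -- hostname of the Attimis endpoint
    )
  );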

Once the external volume is created, a Snowflake external stage can be defined using the same endpoint and credentials. This stage serves as the data access point within Snowflake, allowing data to be loaded directly from files stored in Attimis.
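As a sketch, the stage points at the same bucket through the same endpoint (the stage name attimis_0001 is the one referenced later when loading data; the credential and endpoint values are illustrative placeholders):

CREATE OR REPLACE STAGE attimis_0001
  URL = 's3compat://attimis-0000/'
  ENDPOINT = '<attimis-endpoint>'
  CREDENTIALS = (AWS_KEY_ID = '<attimis_access_key>' AWS_SECRET_KEY = '<attimis_secret_key>');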
Define a Parquet file format in Snowflake for proper interpretation of the staged files. Additionally, use Snowflake's built-in INFER_SCHEMA function to detect column names and data types, aligning the Iceberg table with the structure of the Parquet files in the bucket. For example:
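(A sketch; the format name parquet_format is an illustrative choice. USE_VECTORIZED_SCANNER is enabled because, at the time of writing, Snowflake expects it when loading Parquet into Iceberg tables.)

CREATE OR REPLACE FILE FORMAT parquet_format
  TYPE = PARQUET
  USE_VECTORIZED_SCANNER = TRUE;

-- Preview the column names and types Snowflake detects in the staged Parquet files
SELECT *
FROM TABLE(
  INFER_SCHEMA(
    LOCATION => '@attimis_0001',
    FILE_FORMAT => 'parquet_format'
  )
);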

With the external volume and stage defined, Snowflake can create an Iceberg table that uses the unified Attimis external volume as its storage location. This allows efficient data management with full support for Snowflake's Iceberg table features.
Start by creating the Iceberg table and linking it to the Attimis external volume. For example:
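(A sketch; the column list below is an illustrative NYC-taxi-style schema and in practice would mirror the INFER_SCHEMA output above. attimis_vol is the external volume defined earlier.)

CREATE OR REPLACE ICEBERG TABLE attimis_iceberg (
  vendor_id INT,
  pickup_date DATE,
  trip_distance DOUBLE,
  total_amount DOUBLE
)
  CATALOG = 'SNOWFLAKE'               -- Snowflake acts as the Iceberg catalog (managed table)
  EXTERNAL_VOLUME = 'attimis_vol'     -- the unified Attimis volume
  BASE_LOCATION = 'attimis_iceberg/'; -- path within the bucket for table data and metadata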

This command creates (or replaces) a Snowflake-managed Iceberg table named attimis_iceberg, using the unified Attimis external volume as the underlying storage. The BASE_LOCATION defines where within the bucket the table's data and metadata will live. Once the table is created, populate it by loading data directly from the stage that points to the bucket location. For example:
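(A sketch; MATCH_BY_COLUMN_NAME = CASE_SENSITIVE reflects what Snowflake expects, at the time of writing, when copying Parquet files into Iceberg tables.)

COPY INTO attimis_iceberg
  FROM @attimis_0001
  FILE_FORMAT = (FORMAT_NAME = 'parquet_format')  -- the Parquet format defined earlier
  MATCH_BY_COLUMN_NAME = CASE_SENSITIVE           -- map file columns to table columns by name
  PATTERN = '.*\.parquet';                        -- load only files ending in .parquet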

This command reads all Parquet files from the @attimis_0001 stage (where the files are stored) and loads them into the attimis_iceberg table. The PATTERN = '.*\.parquet' option ensures that only files ending in .parquet are loaded.
Once loaded, the Iceberg table is fully queryable through Snowflake, backed by Attimis's unified storage.
A customer wants to move data from AWS to an on-prem location to reduce costs. Traditionally, this requires physically migrating the data, then configuring new credentials, stages, and validation for the new location before queries can resume.
With Attimis, both the AWS and the on-prem data appear in one unified bucket: no joins, no cross-location transfers. All data is already centralized through a single endpoint, which greatly reduces time and operational effort.
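As an illustrative sketch, a single query against the stage covers files from both environments:

-- One stage, one endpoint: files that physically live in AWS and on-prem
-- are all visible under the same bucket path.
SELECT COUNT(*)
FROM @attimis_0001 (FILE_FORMAT => 'parquet_format', PATTERN => '.*\.parquet');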

In our testing, accessing AWS directly led to an S3 AuthorizationHeaderMalformed error. On the first attempt to run the query, we received the following message:
“S3 returned 'AuthorizationHeaderMalformed' error when accessing the remote file 's3compat://attimis-0000/aws.0y8vX3Lq/data/a1/snow_A4T8fQPuTXM_APi4Lp2Ydxg_0_1_016.parquet'. Please check your credentials.”
To rule out causes, we reviewed our external stage configuration and verified that the region was correctly specified. Although the region of the S3 bucket was correct, Snowflake threw a regional mismatch error on the initial request. After investigating, we found that this kind of transient mismatch error on the first attempt is typically caused by a misconfiguration on Snowflake's side when it initiates the request to AWS S3. The issue was not specific to us and is widely reported by other users; in our case, simply rerunning the query immediately resolved the error, further confirming that the problem was not with our credentials.
This challenge was misleading because the transient nature of the error created a confusing delay. Importantly, this issue does not occur with Attimis: Attimis handles requests consistently, without intermittent AWS mismatch errors, making it more dependable in this scenario.
Another common scenario involves combining structurally identical datasets from different environments. Normally, this requires unions across Iceberg tables, an approach that adds complexity, especially at enterprise scale.
With Attimis, Snowflake queries only one bucket and a single combined table, eliminating multi-location unions entirely.
In this scenario, all the data has the same structure: NYC taxi trips spanning a range of months. A company has older data in AWS and newer data in an on-prem location and wants to create a single unified view of all trips. Without Attimis, it must perform a union across the two locations, querying each separately and then combining the results.
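A sketch of the traditional approach, assuming one Iceberg table per location (the table names taxi_trips_aws and taxi_trips_onprem are hypothetical):

SELECT * FROM taxi_trips_aws      -- trips stored in AWS
UNION ALL
SELECT * FROM taxi_trips_onprem;  -- trips stored on-prem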
Performing the union takes 2 minutes and 30 seconds.

With Attimis, all data stored in the single unified bucket can be consolidated into a single table. Retrieving from this table is much faster than performing unions across multiple locations. This approach simplifies workflows, reduces query complexity, and improves query performance.
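For example, the same unified view reduces to a single table scan:

-- All trips, regardless of physical location, live in one Iceberg table
SELECT * FROM attimis_iceberg;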
Retrieving from this table takes only 1 minute and 30 seconds.
