Estuary

Estuary Flow is a platform for creating real-time data pipelines at scale. It combines the intuitive interface of an ELT service with an event-driven runtime and a variety of open-source connectors.

You can use Flow to migrate your data from Firestore to the Postgres database in your Supabase project. You do this by building a real-time pipeline that captures data from Firestore and materializes (loads) that data to Postgres. Once created, the pipeline backfills all your historical data from Firestore and continues to process new data events in real time.

Prerequisites#

Before you begin, you'll need:

  An Estuary Flow account.

  A Firebase project with the Firestore data you want to migrate.

  A Google service account with read access to your Firestore database, and a generated JSON service account key for that account.

  A Supabase project.
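
Optionally, you can confirm that the service account key works before configuring Flow. The sketch below assumes the google-cloud-firestore Python client is installed and that the key is saved as service-account-key.json (a placeholder filename):

```python
# Quick local check that the JSON service account key can reach Firestore.
# "service-account-key.json" is a placeholder; point it at your own key file.
from google.cloud import firestore

db = firestore.Client.from_service_account_json("service-account-key.json")

# Listing the top-level collections confirms the credentials work and
# previews what Flow will map to collections in Step 1.
for collection in db.collections():
    print(collection.id)
```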

Step 1: Capture your Firestore data#

You'll start by creating a capture, a task in Flow that connects to your data source system: in this case, Firestore. This process will create one or more data collections, backed by a real-time data lake.

  1. Go to the Captures tab of the Flow web app and choose New Capture.

  2. Locate and select the Google Firestore card.

    A form appears with the properties required for a Firestore capture.

  3. Set a name for your capture.

    Click inside the Name field to open a drop-down menu of available prefixes and select one (usually, this will be the name of your organization). Append a unique capture name after the / to create the full name, for example, acmeCo/myFirestoreCapture.

  4. Fill out the required properties for Firestore.

    Database: Flow can autodetect the database name, but you may optionally specify it here. This is helpful if the service account used has access to multiple Firebase projects. Your database name usually follows the format projects/$PROJECTID/databases/(default).

    Credentials: The JSON service account key created per the prerequisites.

  5. Click Next.

    Flow uses the provided configuration to initiate a connection with Firestore. It maps each collection in the Firestore database to a Flow collection.

  6. Optionally, use the Collection Selector to remove any collections you don't need to migrate to Supabase.

  7. Click Save and Publish.

    You'll see a notification when the capture publishes successfully.

    The data currently in your Firestore database has been captured to Flow, and future updates to it will be captured continuously. Optionally, you can record a baseline document count for each Firestore collection now (see the sketch after these steps) to compare against your Supabase tables later.

    Click Materialize Collections to continue.
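
As mentioned above, you can optionally record a baseline document count for each top-level Firestore collection before materializing. This sketch assumes a recent google-cloud-firestore client with support for count() aggregation queries; the key filename is again a placeholder:

```python
# Record a baseline document count per top-level Firestore collection,
# to compare against the Supabase tables after Step 2.
from google.cloud import firestore

db = firestore.Client.from_service_account_json("service-account-key.json")

for collection in db.collections():
    # count() runs a server-side aggregation, so no documents are downloaded.
    result = collection.count().get()
    print(f"{collection.id}: {result[0][0].value} documents")
```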

Step 2: Materialize your collections to Postgres#

Next, you'll add a Postgres materialization to connect the captured collections to tables in your Supabase Postgres database.

  1. On the Create Materialization page, search for and select the PostgreSQL tile.

    A form appears with the properties required for a Postgres materialization.

  2. Choose a unique name for your materialization like you did when naming your capture; for example, acmeCo/mySupabaseMaterialization.

  3. Fill out the required properties for Postgres. You can find most of these in Supabase by going to the Settings section and clicking Database.

    Address: The host and port of your database, in the format <host>:<port>.

    User: Usually, this is postgres.

    Password: The password you set when you created your Supabase project.

  4. Click Next.

    Flow initiates a connection with the database and the Collection Selector expands. It's populated with your collections from Firestore, each mapped to a Postgres table.

  5. For each collection, apply a stricter JSON schema. This ensures that the less-structured Firestore data will be written to a Postgres table in the correct shape.

    In the Collection Selector, choose a collection and click its Specification tab.

    Click Schema Inference. Flow scans the data in your collection and infers a new schema to use for materialization.

    Review the new schema and click Apply Inferred Schema.

  6. Click Save and Publish. You'll see a notification when the materialization publishes successfully.

    Your Firestore collections are copied to tables in Supabase. As long as you leave the capture and materialization running, any changes to the Firestore data will be reflected in Supabase within milliseconds.
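
To spot-check the migration, you can connect to the database with the same address, user, and password you entered above and count the rows in each table. The sketch below uses psycopg2; every connection value shown is a placeholder for your own Supabase settings:

```python
# Spot-check the migrated tables by counting rows in the public schema.
# All connection values are placeholders; use your Supabase project's
# database settings (the same ones you gave Flow).
import psycopg2

conn = psycopg2.connect(
    host="db.YOUR_PROJECT_REF.supabase.co",  # the <host> part of Address
    port=5432,                               # the <port> part of Address
    dbname="postgres",
    user="postgres",
    password="YOUR_DATABASE_PASSWORD",
)

with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT tablename FROM pg_catalog.pg_tables WHERE schemaname = 'public'"
    )
    for (table,) in cur.fetchall():
        # Table names come from your own database, so simple quoting is fine here.
        cur.execute(f'SELECT count(*) FROM public."{table}"')
        print(f"{table}: {cur.fetchone()[0]} rows")

conn.close()
```

You can compare these counts with the Firestore baseline from Step 1; documents written while the backfill was running may shift the numbers slightly.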

Resources#

For more information, visit the Flow docs.