Early access

You use the Postgres connector to synchronize all data, or specific tables, from a Postgres database instance to your Tiger Cloud service in real time. You run the connector continuously, turning Postgres into a primary database with your Tiger Cloud service as a logical replica. This enables you to leverage real-time analytics capabilities on your replica data.

The Postgres connector leverages the well-established Postgres logical replication protocol. By relying on this protocol, it ensures compatibility, familiarity, and a broader knowledge base, making it easier for you to adopt the connector and integrate your data.

You use the Postgres connector for data synchronization, rather than migration. This includes:
  • Copy existing data from a Postgres instance:
    • Copy data at up to 150 GB/hr. You need at least a 4 CPU/16 GB source database, and a 4 CPU/16 GB target Tiger Cloud service.
    • Copy the publication tables in parallel. However, large tables are still copied using a single connection; parallel copying of individual large tables is in the backlog.
    • Forget foreign key relationships. The connector disables foreign key validation during the sync. For example, if a metrics table refers to the id column on the tags table, you can still sync only the metrics table without worrying about their foreign key relationships.
    • Track progress. Postgres exposes COPY progress under pg_stat_progress_copy.
  • Synchronize real-time changes from a Postgres instance.
  • Add and remove tables on demand using the PUBLICATION interface (see the sketch after this list).
  • Enable features such as hypertables, columnstore, and continuous aggregates on your logical replica.
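The publication itself is created and managed by the connector; the statements below are only a sketch of the underlying PUBLICATION interface, with placeholder publication and table names:
  -- List publications on the source to find the one the connector manages
  SELECT pubname FROM pg_publication;

  -- Start syncing an additional table
  ALTER PUBLICATION <publication name> ADD TABLE public.metrics;

  -- Stop syncing a table
  ALTER PUBLICATION <publication name> DROP TABLE public.metrics;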
This source Postgres connector is not yet supported for production use. If you have any questions or feedback, talk to us in #livesync in the Tiger Community.
This page describes the connector managed from Tiger Cloud Console; a self-hosted Postgres connector is also available.

Prerequisites

To follow the steps on this page:
  • Install the Postgres client tools on your sync machine.
  • Ensure that the source Postgres instance and the target Tiger Cloud service have the same extensions installed. The Postgres connector does not create extensions on the target. If a table uses column types from an extension, first create the extension on the target before syncing the table, as shown below.
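For example, you can compare extensions by listing what is installed on each side, then create any missing extension on the target before the sync; the extension name below is only an illustration:
  -- Run on both the source and the target to compare installed extensions
  SELECT extname, extversion FROM pg_extension ORDER BY extname;

  -- On the target, create any extension the synced tables depend on
  CREATE EXTENSION IF NOT EXISTS postgis;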

Limitations

  • The source Postgres instance must be accessible from the Internet. Services hosted behind a firewall or VPC are not supported. This functionality is on the roadmap.
  • Indexes, including the primary key and unique constraints, are not migrated to the target. We recommend that, depending on your query patterns, you create only the necessary indexes on the target.
  • The connector supports only plain Postgres databases as the source. TimescaleDB as a source is not yet supported.
  • The source instance must be running Postgres 13 or later.
  • Schema changes must be coordinated. Make compatible changes to the schema in your Tiger Cloud service first, then make the same changes to the source Postgres instance.
  • Expect WAL volume growth on the source Postgres instance during large table copies.
  • Continuous aggregate invalidation: the connector uses session_replication_role=replica during data replication, which prevents table triggers from firing. This includes the internal triggers that mark continuous aggregates as invalid when underlying data changes. If you have continuous aggregates on your target database, they do not automatically refresh for data inserted during the sync. This limitation only applies to data below the continuous aggregate's materialization watermark, for example backfilled data; new rows synced above the watermark are picked up correctly when the aggregate refreshes. This can lead to:
    • Missing data in continuous aggregates for the sync period.
    • Stale aggregate data.
    • Queries returning incomplete results.
    If the continuous aggregate exists in the source database, best practice is to add it to the connector publication. If it only exists on the target database, manually refresh the continuous aggregate using the force option of refresh_continuous_aggregate.
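    For example, a target-only continuous aggregate can be re-materialized over the backfilled window with the force option (available in recent TimescaleDB versions); the aggregate name and time window below are placeholders:
      -- Recompute buckets below the watermark that the sync back-filled
      CALL refresh_continuous_aggregate('metrics_hourly', '2024-01-01', '2024-02-01', force => true);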

Set your connection string

This variable holds the connection information for the source database. In the terminal on your sync machine, set the following:
export SOURCE="postgres://<user>:<password>@<source host>:<source port>/<db_name>"
Avoid connection strings that route through a connection pooler such as PgBouncer or similar tools; the connector requires a direct connection to the database to function properly.
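For example, with hypothetical host and credentials filled in, followed by a quick check that the direct connection works:
export SOURCE="postgres://postgres:mypassword@198.51.100.10:5432/postgres"

# Confirm the source is reachable over a direct connection
psql $SOURCE -c "SELECT version();"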

Tune your source database

The following steps apply to a source database running on AWS RDS or Aurora. For self-hosted Postgres, set wal_level to logical in postgresql.conf instead of rds.logical_replication; the remaining steps are the same.
  1. Set the rds.logical_replication parameter to 1. In the AWS console, navigate to your RDS instance's parameter group and set rds.logical_replication to 1. This enables logical replication on the RDS instance. After changing this parameter, restart your RDS instance for the change to take effect. You can verify the setting after completing these steps, as shown below.
  2. Create a user for the connector and assign permissions
    1. Create <pg connector username>:
      psql $SOURCE -c "CREATE USER <pg connector username> PASSWORD '<password>'"
      
      You can use an existing user. However, you must ensure that the user has the following permissions.
    2. Grant permissions to create a replication slot:
      psql $SOURCE -c "ALTER ROLE <pg connector username> REPLICATION"
      
    3. Grant permissions to create a publication:
      psql $SOURCE -c "GRANT CREATE ON DATABASE <database name> TO <pg connector username>"
      
    4. Assign the user permissions on the source database:
      psql $SOURCE <<EOF
      GRANT USAGE ON SCHEMA "public" TO <pg connector username>;
      GRANT SELECT ON ALL TABLES IN SCHEMA "public" TO <pg connector username>;
      ALTER DEFAULT PRIVILEGES IN SCHEMA "public" GRANT SELECT ON TABLES TO <pg connector username>;
      EOF
      
      If the tables you are syncing are not in the public schema, grant the user permissions for each schema you are syncing:
      psql $SOURCE <<EOF
      GRANT USAGE ON SCHEMA <schema> TO <pg connector username>;
      GRANT SELECT ON ALL TABLES IN SCHEMA <schema> TO <pg connector username>;
      ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT SELECT ON TABLES TO <pg connector username>;
      EOF
      
    5. On each table you want to sync, make <pg connector username> the owner:
      psql $SOURCE -c 'ALTER TABLE <table name> OWNER TO <pg connector username>;'
      
      You can skip this step if the replicating user is already the owner of the tables.
  3. Enable replication of DELETE and UPDATE operations. For the connector to replicate DELETE and UPDATE operations, enable REPLICA IDENTITY on each table:
    psql $SOURCE -c 'ALTER TABLE <table name> REPLICA IDENTITY FULL;'
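Once these steps are complete, you can verify the configuration from your sync machine. The checks below are a sketch and use the same placeholders as the steps above:
# wal_level reports "logical" after rds.logical_replication is applied and the instance restarts
psql $SOURCE -c "SHOW wal_level;"

# The connector user needs the REPLICATION attribute
psql $SOURCE -c "SELECT rolname, rolreplication FROM pg_roles WHERE rolname = '<pg connector username>';"

# relreplident returns 'f' when REPLICA IDENTITY FULL is set
psql $SOURCE -c "SELECT relname, relreplident FROM pg_class WHERE relname = '<table name>';"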
    

Synchronize data

To sync data from your Postgres database using the Postgres connector:
  1. Connect to your Tiger Cloud service. In Tiger Cloud Console, select the service to sync live data to.
  2. Connect the source database and the target service
    1. Click Connectors > PostgreSQL.
    2. Set the name for the new connector by clicking the pencil icon.
    3. Check the boxes for Set wal_level to logical and Update your credentials, then click Continue.
    4. Enter your database credentials or a connection string, then click Connect to database. This is the connection string for <pg connector username>. The console connects to the source database and retrieves the schema information.
  3. Optimize the data to synchronize in hypertables
    1. In the Select table dropdown, select the tables to sync.
    2. Click Select tables +. The console checks the table schema and, if possible, suggests the column to use as the time dimension in a hypertable.
    3. Click Create Connector. The console starts the connector between the source database and the target service and displays the progress.
  4. Monitor synchronization
    1. To view the amount of data replicated, click Connectors. The diagram in Connector data flow gives you an overview of the connectors you have created, their status, and how much data has been replicated.
    2. To review the syncing progress for each table, click Connectors > Source connectors, then select the name of your connector in the table. You can also query progress directly on the source database, as shown after these steps.
  5. Manage the connector
    1. To edit the connector, click Connectors > Source connectors, then select the name of your connector in the table. You can rename the connector, and add or remove tables for syncing.
    2. To pause a connector, click Connectors > Source connectors, then open the three-dot menu on the right and select Pause.
    3. To delete a connector, click Connectors > Source connectors, then open the three-dot menu on the right and select Delete. You must pause the connector before deleting it.
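You can also observe progress from the source database itself. A sketch, assuming the $SOURCE connection string set earlier:
# Progress of the initial table copy (pg_stat_progress_copy requires Postgres 14 or later)
psql $SOURCE -c "SELECT relid::regclass AS table_name, bytes_processed, bytes_total FROM pg_stat_progress_copy;"

# Replication lag on the connector's slot during ongoing sync
psql $SOURCE -c "SELECT slot_name, active, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS lag FROM pg_replication_slots;"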
And that is it, you are using the Postgres connector to synchronize all the data, or specific tables, from a Postgres database instance to your Tiger Cloud service in real time.