
CLI Reference

Complete reference for every MigratorXpress command-line argument.

Basic Usage

./MigratorXpress [OPTIONS]

MigratorXpress runs as a single command. On Windows, use .\MigratorXpress.exe.

Authentication

| Option | Type | Required | Description |
|---|---|---|---|
| -a, --auth PATH | Path | Yes | Path to the JSON credentials file |

The credentials file contains connection details for every database involved in the migration. Each entry has a unique ID that you reference with other flags. See Credentials File Format below.

Source Database

| Option | Type | Required | Description |
|---|---|---|---|
| --source_db_auth_id ID | String | Yes | Source database identifier in the credentials file |
| --source_db_name NAME | String | Yes | Source database name (e.g., orclpdb1, tpch) |
| --source_schema_name PATTERN | String | No | Source schema name, pattern, or comma-separated list. All schemas if omitted |

Schema selection

The --source_schema_name flag supports three modes:

Single schema:

--source_schema_name SALES

SQL LIKE patterns (v0.6.12+):

# All schemas starting with "app_"
--source_schema_name "app_%"

# Multiple patterns
--source_schema_name "APP_%, TENANT_%"

Exact comma-separated list (v0.6.21+):

# Multiple specific schemas without wildcards
--source_schema_name "tpch_1, tpch_2, tpch_3"

When omitted, MigratorXpress discovers and migrates all visible schemas in the source database.

When using patterns or lists, target schemas are created automatically with the same names as the source schemas. The --target_schema_name flag is only used for single-schema migrations.
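The mode selection can be sketched as follows. This is an illustrative helper, not the tool's actual parsing code: a value containing % is a pattern list, a comma-separated value without wildcards is an exact list, and anything else is a single schema.

```python
def classify_schema_arg(value: str) -> tuple[str, list[str]]:
    """Classify a --source_schema_name value into one of the three
    documented modes. Hypothetical helper for illustration only."""
    # Split on commas and trim surrounding whitespace, as the CLI does.
    parts = [p.strip() for p in value.split(",") if p.strip()]
    if any("%" in p for p in parts):
        return ("pattern", parts)   # SQL LIKE patterns (v0.6.12+)
    if len(parts) > 1:
        return ("list", parts)      # exact comma-separated list (v0.6.21+)
    return ("single", parts)        # one schema name
```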

Target Database

| Option | Type | Required | Description |
|---|---|---|---|
| --target_db_auth_id ID | String | Yes | Target database identifier in the credentials file |
| --target_db_name NAME | String | Yes | Target database name |
| --target_schema_name NAME | String | No | Target schema name. Defaults to source schema name. Not used with patterns |

Supported targets: PostgreSQL and SQL Server. See the compatibility matrix.

Migration Database

| Option | Type | Required | Description |
|---|---|---|---|
| --migration_db_auth_id ID | String | Yes | SQL Server database identifier for migration tracking |

The migration database must be a SQL Server instance. MigratorXpress creates its tracking tables automatically on first use.

Task Control

| Option | Type | Description |
|---|---|---|
| --task_list TASKS... | String(s) | Space-separated list of tasks to execute |

Tasks run in the order specified. Available tasks: translate, create, transfer, diff, copy_pk, copy_ak, copy_fk, or all.

Use all as a shortcut for the full sequence: translate create transfer diff copy_pk copy_ak copy_fk.

# Full migration
--task_list translate create transfer diff

# Schema only
--task_list translate create

# Data only (tables already exist)
--task_list transfer

# Constraints only (after data is loaded)
--task_list copy_pk copy_ak copy_fk

# Everything
--task_list all

Constraint copying limitations

Constraint copying (copy_pk, copy_ak, copy_fk) is currently supported only for Oracle to SQL Server migrations. For other migration paths, create constraints manually on the target database.

Resume Operations

| Option | Type | Description |
|---|---|---|
| -r RUN_ID, --resume RUN_ID | String | Resume a previously interrupted migration using its run ID |

When a migration is interrupted, use --resume with the run ID to pick up from the last completed step. Only tables that did not finish successfully are re-processed.

Since v0.6.22, resume automatically retrieves the original task list and schema names from the migration database. You do not need to specify --task_list or --source_schema_name when resuming.

./MigratorXpress --auth ./credentials.json \
--source_db_auth_id ds_09_ms \
--source_db_name tpch \
--target_db_auth_id ds_04_pg \
--target_db_name tpch \
--migration_db_auth_id ds_02_ms \
--resume 20260203-cf5be0ac-159f-4dad-861e-8b2b6f3868d7

The run ID is displayed in the console output when a migration starts (format: YYYYMMDD-{uuid}). You can also query the migration tracking database to find it.

Run Tagging

| Option | Type | Description |
|---|---|---|
| --project NAME | String | Optional project tag attached to every row this run writes in the migration tracking DB |

Since v0.6.30, a user-supplied --project tag groups runs that belong to the same engagement. The tag is stored alongside run_id on sourcetargetloads, eventlog, and sourcetargetdiff, making all runs for a given project retrievable with a single SQL filter weeks or months later.

./MigratorXpress --auth ./credentials.json \
--source_db_auth_id ds_09_ms --source_db_name tpch \
--target_db_auth_id ds_04_pg --target_db_name tpch \
--migration_db_auth_id ds_02_ms \
--task_list all \
--project bob_bods_oracle

Then, to find every run tagged with that project:

SELECT DISTINCT eventrunid
FROM eventlog
WHERE project = 'bob_bods_oracle';

Validation: 1–64 characters, letters, digits, underscore, or hyphen only (e.g. bob_bods_oracle, cust-2026-q2). Invalid values are rejected at CLI parse time, before any database write.
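The documented rule maps naturally onto a regular expression. The exact pattern the CLI uses is an assumption; this sketch mirrors the stated constraints:

```python
import re

# Assumed equivalent of the documented rule: 1-64 characters,
# letters, digits, underscore, or hyphen only.
PROJECT_TAG_RE = re.compile(r"[A-Za-z0-9_-]{1,64}")

def is_valid_project_tag(tag: str) -> bool:
    """Return True if the tag passes the documented validation rule."""
    return bool(PROJECT_TAG_RE.fullmatch(tag))
```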

Optional and backward compatible: Runs without --project store NULL. Existing tracking databases are auto-upgraded with the new columns on the next startup via idempotent ALTER TABLE … ADD COLUMN statements.

mxp_init also accepts --project for the discovery phase.

Immutable per run_id

On --resume, any --project value on the command line is ignored with a warning, and the original run's tag is inherited from the migration DB. This keeps every row for a given run_id tagged consistently, even across multiple resumes.

FastTransfer

| Option | Type | Default | Description |
|---|---|---|---|
| --fasttransfer_dir_path PATH | Path | Auto-detected | Directory containing the FastTransfer binary |
| -p, --fasttransfer_p INT | Integer | 1 | Parallel streams per large table (Ctid/Rowid/NZDataSlice) |
| --ft_large_table_th INT | Integer | 250,000,000 | Row count threshold for parallel transfer methods |

MigratorXpress delegates all data movement to FastTransfer. FastTransfer >= 0.15.0 is required; the version is validated at startup.

Since v0.6.25, if --fasttransfer_dir_path is omitted, MigratorXpress auto-detects a bundled FastTransfer in the engine/ subdirectory next to the binary. Trial distributions ship with FastTransfer pre-bundled.

# Explicit path
--fasttransfer_dir_path ./FastTransfer_linux-x64_v0.15.0/

# Or omit to use the bundled engine/ directory

Parallel transfer methods

Tables with at least --ft_large_table_th rows use database-specific parallel methods when --fasttransfer_p is greater than 1:

| Source Database | Method | Description |
|---|---|---|
| PostgreSQL | Ctid | Splits by physical tuple ID |
| Oracle | Rowid | Splits by physical ROWID (requires SELECT_CATALOG_ROLE) |
| Netezza | NZDataSlice | Splits by internal data slice distribution |
| SQL Server | Standard | No physical partitioning; parallelism via --n_jobs only |

Performance

| Option | Type | Default | Description |
|---|---|---|---|
| --n_jobs INT | Integer | 1 | Number of tables to process in parallel |
| --compute_nbrows BOOL | Boolean | false | Use COUNT(*) for exact row counts instead of database estimates |
| --without_xid | Flag | false | Disable transaction ID usage for Netezza and Oracle |

--n_jobs controls table-level parallelism (how many tables transfer simultaneously). Combined with --fasttransfer_p, this gives dual-level parallelism:

# 4 tables at once, each large table split into 2 parallel streams
--n_jobs 4 --fasttransfer_p 2

Table Filtering

| Option | Type | Default | Description |
|---|---|---|---|
| -i, --include_tables PATTERN | String | % (all) | Include only tables matching SQL LIKE patterns (comma-separated) |
| -e, --exclude_tables PATTERN | String | None | Exclude tables matching SQL LIKE patterns (comma-separated) |
| -min, --min_rows NROWS | Integer | -1 (disabled) | Exclude tables with fewer rows than this value |
| -max, --max_rows NROWS | Integer | -1 (disabled) | Exclude tables with row count >= this value |

Exclude patterns are applied after include patterns. Filters are pushed to SQL level for performance (v0.6.12+).

# Include only tables starting with "fact_" or "dim_"
-i "fact_%,dim_%"

# Spaces after commas are trimmed automatically
-i "fact_%, dim_%"

# Exclude temporary and test tables
-e "tmp_%,test_%"

# Combine filters
-i "%" -e "tmp_%,log_%" -min 100 -max 10000000
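The filter semantics above (include first, then exclude, then row-count bounds) can be sketched in Python. All names here are hypothetical; the real implementation pushes these filters down into SQL:

```python
import re

def like_to_regex(pattern: str) -> re.Pattern:
    """Translate a SQL LIKE pattern: % matches any run, _ any single char."""
    body = "".join(
        ".*" if c == "%" else "." if c == "_" else re.escape(c)
        for c in pattern
    )
    return re.compile(body + r"\Z", re.IGNORECASE)

def keep_table(name, rows, include="%", exclude=None, min_rows=-1, max_rows=-1):
    """Apply include patterns, then exclude patterns, then row bounds."""
    includes = [like_to_regex(p.strip()) for p in include.split(",")]
    excludes = [like_to_regex(p.strip())
                for p in (exclude or "").split(",") if p.strip()]
    if not any(r.match(name) for r in includes):
        return False
    if any(r.match(name) for r in excludes):
        return False
    if min_rows >= 0 and rows < min_rows:
        return False            # fewer rows than -min
    if max_rows >= 0 and rows >= max_rows:
        return False            # row count >= -max
    return True
```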

Shell quoting

Quote the entire comma-separated string as one argument. Do not quote each pattern separately:

# CORRECT
-i "fact_%,dim_%"

# WRONG — shell interprets as multiple arguments
-i 'fact_%','dim_%'

Migration Options

| Option | Type | Default | Description |
|---|---|---|---|
| --drop_tables_if_exists BOOL | Boolean | false | Drop target tables before creating them |
| --load_mode MODE | String | truncate | truncate: clear target before loading. append: add rows without clearing |
| --migration_db_mode MODE | String | preserve | preserve: keep tracking data (enables resume). truncate: clear data, keep schema. drop: recreate schema |
| --basic_diff | Flag | false | Row/column count comparison only (skip content-level diff) |
| -f, --force | Flag | false | Force migration by cleaning existing tracking data for these schemas |

SQL Server Target Options

These options apply only when the target database is SQL Server.

| Option | Type | Default | Description |
|---|---|---|---|
| --cci_threshold INT | Integer | 1,000,000 | Create Clustered Columnstore Index for tables above this row count |
| --aci_threshold INT | Integer | 100,000,000 | Use archive compression (COLUMNSTORE_ARCHIVE) above this row count |
| --fk_mode MODE | String | disabled | Foreign key enforcement: disabled, untrusted, or trusted |

Columnstore behavior:

  • Tables >= cci_threshold but < aci_threshold: Standard Clustered Columnstore Index
  • Tables >= aci_threshold: CCI with DATA_COMPRESSION = COLUMNSTORE_ARCHIVE
  • Tables < cci_threshold: Standard rowstore
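The threshold logic above can be sketched as follows (a hypothetical helper using the default values from the table):

```python
def index_choice(rows, cci_threshold=1_000_000, aci_threshold=100_000_000):
    """Pick the index strategy per the documented thresholds.

    Illustrative only; the DDL MigratorXpress emits is not shown here.
    """
    if rows >= aci_threshold:
        return "CCI + COLUMNSTORE_ARCHIVE"   # archive compression
    if rows >= cci_threshold:
        return "CCI"                         # standard columnstore
    return "rowstore"                        # below both thresholds
```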

Foreign key modes:

  • disabled (default): FK constraints created with WITH NOCHECK and disabled. Fastest for loading.
  • untrusted: FK constraints created with WITH NOCHECK but left enabled. Enforced for new data only.
  • trusted: Standard FK constraints with full enforcement. Slowest but provides complete referential integrity.
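Under standard SQL Server semantics, the three modes could correspond to DDL like the following sketch. This mapping is an assumption based on how WITH CHECK/WITH NOCHECK and constraint disabling normally work; the statements MigratorXpress actually emits are not published, and the table and constraint names are placeholders:

```python
def fk_clauses(mode: str) -> list[str]:
    """Sketch of the T-SQL each --fk_mode could correspond to."""
    create = ("ALTER TABLE child WITH {check} ADD CONSTRAINT fk "
              "FOREIGN KEY (x) REFERENCES parent (x)")
    if mode == "trusted":
        # Full validation of existing rows; constraint is trusted.
        return [create.format(check="CHECK")]
    # NOCHECK skips validating existing rows; constraint is untrusted.
    stmts = [create.format(check="NOCHECK")]
    if mode == "disabled":
        # Additionally disable enforcement for new rows.
        stmts.append("ALTER TABLE child NOCHECK CONSTRAINT fk")
    return stmts
```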

Oracle Source Options

These options apply only when the source database is Oracle.

| Option | Type | Default | Description |
|---|---|---|---|
| --forced_int_id_prefixes S... | String(s) | None | Space-separated column name prefixes to force as integer type |
| --forced_int_id_suffixes S... | String(s) | None | Space-separated column name suffixes to force as integer type |
| --profiling_sample_pc NUMBER | Integer | 10 | Percentage of rows to sample for Oracle DATE/NUMBER profiling |
| --min_sample_pc_profile NUMBER | Integer | 100,000 | Row count threshold below which tables are profiled in full (no sampling) |
| --p_query NUMBER | Integer | 6 | Inner level of query parallelism for Oracle profiling queries (PARALLEL hint degree) |

Oracle stores many integer values in NUMBER columns without precision. MigratorXpress profiles DATE and NUMBER columns to select optimal target types. Use --forced_int_id_prefixes and --forced_int_id_suffixes to skip profiling for known ID columns:

# Space-separated, case-insensitive matching
--forced_int_id_prefixes ID_ PK_ KEY_ \
--forced_int_id_suffixes _ID _KEY _PK

Sampling behavior:

  • Tables >= min_sample_pc_profile rows: sampled at profiling_sample_pc% using Oracle SAMPLE(n)
  • Tables < min_sample_pc_profile rows: profiled in full
  • Set --profiling_sample_pc 100 to disable sampling
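The sampling decision can be sketched as follows (a hypothetical helper with the documented defaults):

```python
def profiling_plan(rows, profiling_sample_pc=10, min_sample_pc_profile=100_000):
    """Decide between full profiling and sampled profiling.

    Illustrative only; the actual query MigratorXpress builds may differ.
    """
    # Small tables, or sampling disabled at 100%: profile every row.
    if rows < min_sample_pc_profile or profiling_sample_pc >= 100:
        return "full"
    # Otherwise sample with Oracle's SAMPLE(n) clause.
    return f"SAMPLE({profiling_sample_pc})"
```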

Logging Options

| Option | Type | Default | Description |
|---|---|---|---|
| --log_level LEVEL | String | INFO | Log verbosity: DEBUG, INFO, WARNING, ERROR, CRITICAL |
| --log_dir PATH | Path | Current directory | Directory for log files |
| --no_banner | Flag | false | Suppress the startup banner |
| --no_progress | Flag | false | Disable the progress bar |
| --quiet_ft | Flag | false | Suppress FastTransfer subprocess output from console (still logged to file) |

See Logging and Monitoring for details on log format and database logging.

License Management

| Option | Type | Description |
|---|---|---|
| --license TEXT | String | License text (ECDSA signed) |
| --license_file PATH | Path | Path to license file |

License sources (in order of precedence):

  1. --license command-line argument
  2. --license_file command-line argument
  3. MIGRATORXPRESS_LICENSE environment variable
  4. Default location: ~/.migratorxpress/license.key (Linux) or %APPDATA%/MigratorXpress/license.key (Windows)
  5. Trial mode (automatic fallback)
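The precedence walk can be sketched as follows. The function and argument names are hypothetical; only the order and the MIGRATORXPRESS_LICENSE variable name come from the list above:

```python
import os

def resolve_license_source(cli_license=None, cli_license_file=None,
                           env=None, default_path_exists=False):
    """Report which license source wins under the documented precedence."""
    env = env if env is not None else os.environ
    if cli_license:
        return "--license"
    if cli_license_file:
        return "--license_file"
    if env.get("MIGRATORXPRESS_LICENSE"):
        return "MIGRATORXPRESS_LICENSE"
    if default_path_exists:
        return "default file"
    return "trial"          # automatic fallback
```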

Trial mode provides all features without restrictions and is suitable for evaluation.

Utility

| Option | Description |
|---|---|
| --version | Display MigratorXpress version and exit |
| -h, --help | Show help message and exit |

Database Parameters

Connection parameters by database type

| Parameter | Oracle | PostgreSQL | SQL Server | Netezza |
|---|---|---|---|---|
| ds_type | oracle | postgres | mssql | netezza |
| auth_mode | classic | classic or odbc | classic or odbc | classic |
| username | Required | Required | Required | Required |
| password | Required | Required | Required | Required |
| server | Required | Required | Required | Required |
| port | 1521 | 5432 | 1433 | 5480 |
| database | Service name/SID | Database name | Database name | Database name |
| lib_dir | Thick mode only | -- | -- | -- |

Credentials File Format

The credentials file is a JSON object where each key is an identifier you reference on the command line.

{
  "oracle_prod": {
    "ds_type": "oracle",
    "auth_mode": "classic",
    "info": {
      "username": "$env{ORA_USER}",
      "password": "$env{ORA_PASSWORD}",
      "server": "oracle-host.example.com",
      "port": 1521,
      "database": "ORCL"
    }
  },
  "postgres_dev": {
    "ds_type": "postgres",
    "auth_mode": "classic",
    "info": {
      "username": "$env{PG_USER}",
      "password": "$env{PG_PASSWORD}",
      "server": "pg-host.example.com",
      "port": 5432,
      "database": "analytics"
    }
  },
  "ms_mig_log": {
    "ds_type": "mssql",
    "auth_mode": "classic",
    "info": {
      "username": "$env{MIG_USER}",
      "password": "$env{MIG_PASSWORD}",
      "server": "sql-host.example.com",
      "port": 1433,
      "database": "migration_log"
    }
  },
  "netezza_source": {
    "ds_type": "netezza",
    "auth_mode": "classic",
    "info": {
      "username": "$env{NZ_USER}",
      "password": "$env{NZ_PASSWORD}",
      "server": "nz-host.example.com",
      "port": 5480,
      "database": "PROD_DB"
    }
  }
}

Use $env{VAR_NAME} to reference environment variables. An error is raised if the variable is not set. Plain-text values work as well.
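The documented behaviour can be sketched as follows. The helper name and regex are assumptions; the real parser may differ, for example by allowing references embedded inside longer strings:

```python
import os
import re

# Assumed shape of a credentials-file environment reference.
_ENV_REF = re.compile(r"\$env\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_value(value, env=None):
    """Expand a $env{VAR} reference, raising if the variable is unset."""
    env = env if env is not None else os.environ
    if isinstance(value, str):
        m = _ENV_REF.fullmatch(value)
        if m:
            name = m.group(1)
            if name not in env:
                raise KeyError(f"environment variable {name} is not set")
            return env[name]
    return value  # plain-text values pass through unchanged
```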

Authentication Modes

Classic (all databases)

Standard username/password authentication. Works with all supported database types.

{
  "my_db": {
    "ds_type": "postgres",
    "auth_mode": "classic",
    "info": {
      "username": "user",
      "password": "pass",
      "server": "host",
      "port": 5432,
      "database": "dbname"
    }
  }
}

Oracle Thick Mode

For Oracle 11.2 and earlier, set lib_dir to the Oracle Instant Client directory. Later versions use thin mode by default (omit lib_dir).

{
  "oracle_thick": {
    "ds_type": "oracle",
    "auth_mode": "classic",
    "info": {
      "username": "user",
      "password": "pass",
      "server": "host",
      "port": 1521,
      "database": "SID",
      "lib_dir": "/opt/oracle/instantclient_19_8"
    }
  }
}

PostgreSQL ODBC

{
  "pg_odbc": {
    "ds_type": "postgres",
    "auth_mode": "odbc",
    "info": {
      "dsn": "PostgreSQL",
      "username": "user",
      "password": "pass"
    }
  }
}

SQL Server Trusted Authentication

Windows integrated authentication using ODBC:

{
  "mssql_trusted": {
    "ds_type": "mssql",
    "auth_mode": "odbc",
    "info": {
      "trusted": true,
      "instance": "server.domain.com\\INSTANCE",
      "database": "dbname"
    }
  }
}

Connection String

For advanced scenarios, pass a raw connection string:

{
  "mssql_custom": {
    "ds_type": "mssql",
    "auth_mode": "classic",
    "info": {
      "connect_string": "mssql+pymssql://user:pass@host:port/database"
    }
  }
}

Supported Database Types

| Database | ds_type value | Default Port | As Source | As Target | As Migration DB |
|---|---|---|---|---|---|
| Oracle | oracle | 1521 | Yes | No | No |
| PostgreSQL | postgres | 5432 | Yes | Yes | No |
| SQL Server | mssql | 1433 | Yes | Yes | Yes |
| Netezza | netezza | 5480 | Yes | No | No |


Copyright © 2026 Architecture & Performance.