CLI Reference
Complete reference for every MigratorXpress command-line argument.
Basic Usage
./MigratorXpress [OPTIONS]
MigratorXpress runs as a single command. On Windows, use .\MigratorXpress.exe.
Authentication
| Option | Type | Required | Description |
|---|---|---|---|
| -a, --auth PATH | Path | Yes | Path to the JSON credentials file |
The credentials file contains connection details for every database involved in the migration. Each entry has a unique ID that you reference with other flags. See Credentials File Format below.
Source Database
| Option | Type | Required | Description |
|---|---|---|---|
| --source_db_auth_id ID | String | Yes | Source database identifier in the credentials file |
| --source_db_name NAME | String | Yes | Source database name (e.g., orclpdb1, tpch) |
| --source_schema_name PATTERN | String | No | Source schema name, pattern, or comma-separated list. All schemas if omitted |
Schema selection
The --source_schema_name flag supports three modes:
Single schema:
--source_schema_name SALES
SQL LIKE patterns (v0.6.12+):
# All schemas starting with "app_"
--source_schema_name "app_%"
# Multiple patterns
--source_schema_name "APP_%, TENANT_%"
Exact comma-separated list (v0.6.21+):
# Multiple specific schemas without wildcards
--source_schema_name "tpch_1, tpch_2, tpch_3"
When omitted, MigratorXpress discovers and migrates all visible schemas in the source database.
When using patterns or lists, target schemas are created automatically with the same names as the source schemas. The --target_schema_name flag is only used for single-schema migrations.
Target Database
| Option | Type | Required | Description |
|---|---|---|---|
| --target_db_auth_id ID | String | Yes | Target database identifier in the credentials file |
| --target_db_name NAME | String | Yes | Target database name |
| --target_schema_name NAME | String | No | Target schema name. Defaults to source schema name. Not used with patterns |
Supported targets: PostgreSQL and SQL Server. See the compatibility matrix.
Migration Database
| Option | Type | Required | Description |
|---|---|---|---|
| --migration_db_auth_id ID | String | Yes | SQL Server database identifier for migration tracking |
The migration database must be a SQL Server instance. MigratorXpress creates its tracking tables automatically on first use.
Task Control
| Option | Type | Description |
|---|---|---|
| --task_list TASKS... | String(s) | Space-separated list of tasks to execute |
Tasks run in the order specified. Available tasks: translate, create, transfer, diff, copy_pk, copy_ak, copy_fk, or all.
Use all as a shortcut for the full sequence: translate create transfer diff copy_pk copy_ak copy_fk.
# Full migration
--task_list translate create transfer diff
# Schema only
--task_list translate create
# Data only (tables already exist)
--task_list transfer
# Constraints only (after data is loaded)
--task_list copy_pk copy_ak copy_fk
# Everything
--task_list all
Constraint copying (copy_pk, copy_ak, copy_fk) is currently supported only for Oracle to SQL Server migrations. For other migration paths, create constraints manually on the target database.
Resume Operations
| Option | Type | Description |
|---|---|---|
| -r RUN_ID, --resume RUN_ID | String | Resume a previously interrupted migration using its run ID |
When a migration is interrupted, use --resume with the run ID to pick up from the last completed step. Only tables that did not finish successfully are re-processed.
Since v0.6.22, resume automatically retrieves the original task list and schema names from the migration database. You do not need to specify --task_list or --source_schema_name when resuming.
./MigratorXpress --auth ./credentials.json \
--source_db_auth_id ds_09_ms \
--source_db_name tpch \
--target_db_auth_id ds_04_pg \
--target_db_name tpch \
--migration_db_auth_id ds_02_ms \
--resume 20260203-cf5be0ac-159f-4dad-861e-8b2b6f3868d7
The run ID is displayed in the console output when a migration starts (format: YYYYMMDD-{uuid}). You can also query the migration tracking database to find it.
Run Tagging
| Option | Type | Description |
|---|---|---|
| --project NAME | String | Optional project tag attached to every row this run writes in the migration tracking DB |
Since v0.6.30, a user-supplied --project tag groups runs that belong to the same engagement. The tag is stored alongside run_id on sourcetargetloads, eventlog, and sourcetargetdiff, making all runs for a given project retrievable with a single SQL filter weeks or months later.
./MigratorXpress --auth ./credentials.json \
--source_db_auth_id ds_09_ms --source_db_name tpch \
--target_db_auth_id ds_04_pg --target_db_name tpch \
--migration_db_auth_id ds_02_ms \
--task_list all \
--project bob_bods_oracle
Then, to find every run tagged with that project:
SELECT DISTINCT eventrunid
FROM eventlog
WHERE project = 'bob_bods_oracle';
Validation: 1–64 characters, letters, digits, underscore, or hyphen only (e.g. bob_bods_oracle, cust-2026-q2). Invalid values are rejected at CLI parse time, before any database write.
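The validation rule above can be mirrored in a pre-flight check before scripting a run. This is a sketch using grep -E; the is_valid_project helper is illustrative and not part of MigratorXpress:

```shell
# Mirror the documented rule: 1-64 characters, limited to
# letters, digits, underscore, or hyphen.
is_valid_project() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_-]{1,64}$'
}

is_valid_project "bob_bods_oracle" && echo "valid"
is_valid_project "bad tag!" || echo "invalid"
```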
Optional and backward compatible: Runs without --project store NULL. Existing tracking databases are auto-upgraded with the new columns on the next startup via idempotent ALTER TABLE … ADD COLUMN statements.
mxp_init also accepts --project for the discovery phase.
On --resume, any --project value on the command line is ignored with a warning, and the original run's tag is inherited from the migration DB. This keeps every row for a given run_id tagged consistently, even across multiple resumes.
FastTransfer
| Option | Type | Default | Description |
|---|---|---|---|
| --fasttransfer_dir_path PATH | Path | Auto-detected | Directory containing the FastTransfer binary |
| -p, --fasttransfer_p INT | Integer | 1 | Parallel streams per large table (Ctid/Rowid/NZDataSlice) |
| --ft_large_table_th INT | Integer | 250,000,000 | Row count threshold for parallel transfer methods |
MigratorXpress delegates all data movement to FastTransfer. FastTransfer >= 0.15.0 is required; the version is validated at startup.
Since v0.6.25, if --fasttransfer_dir_path is omitted, MigratorXpress auto-detects a bundled FastTransfer in the engine/ subdirectory next to the binary. Trial distributions ship with FastTransfer pre-bundled.
# Explicit path
--fasttransfer_dir_path ./FastTransfer_linux-x64_v0.15.0/
# Or omit to use the bundled engine/ directory
Parallel transfer methods
Tables with row count >= --ft_large_table_th and --fasttransfer_p > 1 use database-specific parallel methods:
| Source Database | Method | Description |
|---|---|---|
| PostgreSQL | Ctid | Splits by physical tuple ID |
| Oracle | Rowid | Splits by physical ROWID (requires SELECT_CATALOG_ROLE) |
| Netezza | NZDataSlice | Splits by internal data slice distribution |
| SQL Server | Standard | No physical partitioning; parallelism via --n_jobs only |
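As a sketch of how these flags combine for an Oracle source (the values are illustrative, not recommendations):

```shell
# Oracle source: split any table over 100,000,000 rows into 4 parallel Rowid streams
--fasttransfer_p 4 --ft_large_table_th 100000000
```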
Performance
| Option | Type | Default | Description |
|---|---|---|---|
| --n_jobs INT | Integer | 1 | Number of tables to process in parallel |
| --compute_nbrows BOOL | Boolean | false | Use COUNT(*) for exact row counts instead of database estimates |
| --without_xid | Flag | false | Disable transaction ID usage for Netezza and Oracle |
--n_jobs controls table-level parallelism (how many tables transfer simultaneously). Combined with --fasttransfer_p, this gives dual-level parallelism:
# 4 tables at once, each large table split into 2 parallel streams
--n_jobs 4 --fasttransfer_p 2
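In the worst case, when every active table exceeds --ft_large_table_th, the number of concurrent data streams is the product of the two flags. A quick sketch:

```shell
# Worst case: n_jobs tables transfer at once, and each large table
# is split into fasttransfer_p parallel streams.
n_jobs=4
fasttransfer_p=2
echo "max concurrent streams: $((n_jobs * fasttransfer_p))"   # prints 8
```

Size both values against the source database's connection and I/O limits, since each stream holds its own connection.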
Table Filtering
| Option | Type | Default | Description |
|---|---|---|---|
| -i, --include_tables PATTERN | String | % (all) | Include only tables matching SQL LIKE patterns (comma-separated) |
| -e, --exclude_tables PATTERN | String | None | Exclude tables matching SQL LIKE patterns (comma-separated) |
| -min, --min_rows NROWS | Integer | -1 (disabled) | Exclude tables with fewer rows than this value |
| -max, --max_rows NROWS | Integer | -1 (disabled) | Exclude tables with row count >= this value |
Exclude patterns are applied after include patterns. Filters are pushed to SQL level for performance (v0.6.12+).
# Include only tables starting with "fact_" or "dim_"
-i "fact_%,dim_%"
# Spaces after commas are trimmed automatically
-i "fact_%, dim_%"
# Exclude temporary and test tables
-e "tmp_%,test_%"
# Combine filters
-i "%" -e "tmp_%,log_%" -min 100 -max 10000000
Quote the entire comma-separated string as one argument. Do not quote each pattern separately:
# CORRECT
-i "fact_%,dim_%"
# WRONG — the shell passes these as two separate arguments
-i "fact_%" "dim_%"
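To see how the shell actually groups these arguments, a small helper can print each delivered argument on its own line (printargs is illustrative, not part of MigratorXpress):

```shell
# Print each argument the shell delivers, one per line in brackets
printargs() { for a in "$@"; do printf '[%s]\n' "$a"; done; }

printargs -i "fact_%,dim_%"    # one pattern string: [-i] [fact_%,dim_%]
printargs -i "fact_%" "dim_%"  # two separate arguments: [-i] [fact_%] [dim_%]
```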
Migration Options
| Option | Type | Default | Description |
|---|---|---|---|
| --drop_tables_if_exists BOOL | Boolean | false | Drop target tables before creating them |
| --load_mode MODE | String | truncate | truncate: clear target before loading. append: add rows without clearing |
| --migration_db_mode MODE | String | preserve | preserve: keep tracking data (enables resume). truncate: clear data, keep schema. drop: recreate schema |
| --basic_diff | Flag | false | Row/column count comparison only (skip content-level diff) |
| -f, --force | Flag | false | Force migration by cleaning existing tracking data for these schemas |
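An illustrative combination for re-running a migration from scratch, assuming boolean options take an explicit value as the BOOL placeholders in the table suggest:

```shell
# Start over: recreate target tables and discard prior tracking rows for these schemas
--drop_tables_if_exists true --force
```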
SQL Server Target Options
These options apply only when the target database is SQL Server.
| Option | Type | Default | Description |
|---|---|---|---|
| --cci_threshold INT | Integer | 1,000,000 | Create Clustered Columnstore Index for tables above this row count |
| --aci_threshold INT | Integer | 100,000,000 | Use archive compression (COLUMNSTORE_ARCHIVE) above this row count |
| --fk_mode MODE | String | disabled | Foreign key enforcement: disabled, untrusted, or trusted |
Columnstore behavior:
- Tables >= cci_threshold but < aci_threshold: standard Clustered Columnstore Index
- Tables >= aci_threshold: CCI with DATA_COMPRESSION = COLUMNSTORE_ARCHIVE
- Tables < cci_threshold: standard rowstore
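For example, to lower both thresholds (illustrative values, not recommendations):

```shell
# Columnstore from 5,000,000 rows, archive compression from 50,000,000 rows
--cci_threshold 5000000 --aci_threshold 50000000
```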
Foreign key modes:
- disabled (default): FK constraints created with WITH NOCHECK and left disabled. Fastest for loading.
- untrusted: FK constraints created with WITH NOCHECK but left enabled. Enforced for new data only.
- trusted: Standard FK constraints with full enforcement. Slowest but provides complete referential integrity.
Oracle Source Options
These options apply only when the source database is Oracle.
| Option | Type | Default | Description |
|---|---|---|---|
| --forced_int_id_prefixes S... | String(s) | None | Space-separated column name prefixes to force as integer type |
| --forced_int_id_suffixes S... | String(s) | None | Space-separated column name suffixes to force as integer type |
| --profiling_sample_pc NUMBER | Integer | 10 | Percentage of rows to sample for Oracle DATE/NUMBER profiling |
| --min_sample_pc_profile NUMBER | Integer | 100,000 | Row count threshold below which tables are profiled in full (no sampling) |
| --p_query NUMBER | Integer | 6 | Inner level of query parallelism for Oracle profiling queries (PARALLEL hint degree) |
Oracle frequently stores integer values in NUMBER columns with no declared precision. MigratorXpress profiles DATE and NUMBER columns to select optimal target types. Use --forced_int_id_prefixes and --forced_int_id_suffixes to skip profiling for known ID columns:
# Space-separated, case-insensitive matching
--forced_int_id_prefixes ID_ PK_ KEY_ \
--forced_int_id_suffixes _ID _KEY _PK
Sampling behavior:
- Tables >= min_sample_pc_profile rows: sampled at profiling_sample_pc% using Oracle SAMPLE(n)
- Tables < min_sample_pc_profile rows: profiled in full
- Set --profiling_sample_pc 100 to disable sampling
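For example, to profile small tables fully while sampling large ones more lightly (illustrative values):

```shell
# Fully profile tables under 1,000,000 rows; sample 5% of rows above that
--min_sample_pc_profile 1000000 --profiling_sample_pc 5
```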
Logging Options
| Option | Type | Default | Description |
|---|---|---|---|
| --log_level LEVEL | String | INFO | Log verbosity: DEBUG, INFO, WARNING, ERROR, CRITICAL |
| --log_dir PATH | Path | Current directory | Directory for log files |
| --no_banner | Flag | false | Suppress the startup banner |
| --no_progress | Flag | false | Disable the progress bar |
| --quiet_ft | Flag | false | Suppress FastTransfer subprocess output from console (still logged to file) |
See Logging and Monitoring for details on log format and database logging.
License Management
| Option | Type | Description |
|---|---|---|
| --license TEXT | String | License text (ECDSA signed) |
| --license_file PATH | Path | Path to license file |
License sources (in order of precedence):
1. --license command-line argument
2. --license_file command-line argument
3. MIGRATORXPRESS_LICENSE environment variable
4. Default location: ~/.migratorxpress/license.key (Linux) or %APPDATA%/MigratorXpress/license.key (Windows)
5. Trial mode (automatic fallback)
Trial mode provides all features without restrictions and is suitable for evaluation.
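For example, to supply the license through the environment instead of a flag (the value shown is a placeholder for real signed license text):

```shell
# Third in the precedence order: environment variable
export MIGRATORXPRESS_LICENSE="<signed license text>"
```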
Utility
| Option | Description |
|---|---|
| --version | Display MigratorXpress version and exit |
| -h, --help | Show help message and exit |
Database Parameters
Connection parameters by database type
| Parameter | Oracle | PostgreSQL | SQL Server | Netezza |
|---|---|---|---|---|
| ds_type | oracle | postgres | mssql | netezza |
| auth_mode | classic | classic or odbc | classic or odbc | classic |
| username | Required | Required | Required | Required |
| password | Required | Required | Required | Required |
| server | Required | Required | Required | Required |
| port | 1521 | 5432 | 1433 | 5480 |
| database | Service name/SID | Database name | Database name | Database name |
| lib_dir | Thick mode only | -- | -- | -- |
Credentials File Format
The credentials file is a JSON object where each key is an identifier you reference on the command line.
{
"oracle_prod": {
"ds_type": "oracle",
"auth_mode": "classic",
"info": {
"username": "$env{ORA_USER}",
"password": "$env{ORA_PASSWORD}",
"server": "oracle-host.example.com",
"port": 1521,
"database": "ORCL"
}
},
"postgres_dev": {
"ds_type": "postgres",
"auth_mode": "classic",
"info": {
"username": "$env{PG_USER}",
"password": "$env{PG_PASSWORD}",
"server": "pg-host.example.com",
"port": 5432,
"database": "analytics"
}
},
"ms_mig_log": {
"ds_type": "mssql",
"auth_mode": "classic",
"info": {
"username": "$env{MIG_USER}",
"password": "$env{MIG_PASSWORD}",
"server": "sql-host.example.com",
"port": 1433,
"database": "migration_log"
}
},
"netezza_source": {
"ds_type": "netezza",
"auth_mode": "classic",
"info": {
"username": "$env{NZ_USER}",
"password": "$env{NZ_PASSWORD}",
"server": "nz-host.example.com",
"port": 5480,
"database": "PROD_DB"
}
}
}
Use $env{VAR_NAME} to reference environment variables. An error is raised if the variable is not set. Plain-text values work as well.
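For example, exporting the variables referenced by the sample file above before launching (the values are placeholders):

```shell
# The $env{...} placeholders resolve from the process environment at startup
export ORA_USER="scott"
export ORA_PASSWORD="tiger"
echo "Oracle user: $ORA_USER"   # prints: Oracle user: scott
```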
Authentication Modes
Classic (all databases)
Standard username/password authentication. Works with all supported database types.
{
"my_db": {
"ds_type": "postgres",
"auth_mode": "classic",
"info": {
"username": "user",
"password": "pass",
"server": "host",
"port": 5432,
"database": "dbname"
}
}
}
Oracle Thick Mode
For Oracle 11.2 and earlier, set lib_dir to the Oracle Instant Client directory. Later versions use thin mode by default (omit lib_dir).
{
"oracle_thick": {
"ds_type": "oracle",
"auth_mode": "classic",
"info": {
"username": "user",
"password": "pass",
"server": "host",
"port": 1521,
"database": "SID",
"lib_dir": "/opt/oracle/instantclient_19_8"
}
}
}
PostgreSQL ODBC
{
"pg_odbc": {
"ds_type": "postgres",
"auth_mode": "odbc",
"info": {
"dsn": "PostgreSQL",
"username": "user",
"password": "pass"
}
}
}
SQL Server Trusted Authentication
Windows integrated authentication using ODBC:
{
"mssql_trusted": {
"ds_type": "mssql",
"auth_mode": "odbc",
"info": {
"trusted": true,
"instance": "server.domain.com\\INSTANCE",
"database": "dbname"
}
}
}
Connection String
For advanced scenarios, pass a raw connection string:
{
"mssql_custom": {
"ds_type": "mssql",
"auth_mode": "classic",
"info": {
"connect_string": "mssql+pymssql://user:pass@host:port/database"
}
}
}
Supported Database Types
| Database | ds_type value | Default Port | As Source | As Target | As Migration DB |
|---|---|---|---|---|---|
| Oracle | oracle | 1521 | Yes | No | No |
| PostgreSQL | postgres | 5432 | Yes | Yes | No |
| SQL Server | mssql | 1433 | Yes | Yes | Yes |
| Netezza | netezza | 5480 | Yes | No | No |
See Also
- Quick Start Guide -- get started in 5 minutes
- Migration Tasks -- task types and execution order
- FastTransfer Configuration -- parallel transfer tuning
- Logging and Monitoring -- log format and database tracking
- Troubleshooting -- common issues and solutions