Get Started

Target Database Support

Data Migrator supports the following target databases, each of which may require prerequisite setup for data loading:

  • PostgreSQL: For PostgreSQL, Data Migrator uses the COPY command for bulk data loading. This mode requires no additional configuration.
  • Microsoft SQL Server: For Microsoft SQL Server (MSSQL), Data Migrator provides three distinct bulk data loading mechanisms:

    • BULK INSERT [default mode]: Data Migrator uses the BULK INSERT statement for MSSQL. This mode requires the bulkadmin server role, which can be assigned via Microsoft SQL Server Management Studio.
    (Screenshot: mssql.png — assigning the bulkadmin server role in SQL Server Management Studio)
    • INSERT INTO: This loading mode is suited to Proof of Concept (POC) work and rapid prototyping. However, it has performance and stability limitations on large projects. It requires no additional configuration.
    • BCP UTILITY: Data Migrator integrates the Bulk Copy Program (bcp) utility to extend its data ingestion capabilities to additional database flavors, such as Amazon RDS for SQL Server. This mode requires the bcp utility to be installed.
  • Oracle: To bulk-load data, Data Migrator uses the Oracle SQL*Loader (sqlldr) utility. No additional Data Migrator configuration is required; however, sqlldr must be properly installed and accessible on the system path of the machine running Data Migrator.
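Whether the optional utilities named above are available can be checked up front. A minimal sketch (the tool names come from the modes above; the check itself is generic):

```shell
# Check that the optional bulk-load utilities are on the PATH before a run:
# bcp is needed only for the MSSQL BCP UTILITY mode, sqlldr only for Oracle.
for tool in bcp sqlldr; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: not found -- install it if you plan to use the matching mode"
  fi
done
```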

Configuration file

Configuration for Data Migrator is managed through two distinct .ini files, enhancing readability and organization:

  • config.ini: Contains one or more configuration steps.
  • all.ini: Contains one or more action steps, including the optional global step.

Each .ini file can define multiple steps, allowing a structured and modular approach to Data Migrator tasks.
A step corresponds to a section in the .ini file.
Each section's title is its id: a type and a name joined by a hyphen (e.g. [type - name]).

The section type must be one of the following:

  • Database: for a configuration step.
  • ExecuteSql: for a creation or deletion step.
  • CSV2DB: for a CSV data loading step.
  • EBCDIC2DB: for an EBCDIC data loading step.
  • ConvertQDDS: for a QDDS data conversion step.
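As an illustration, an action-steps file might contain sections like the following. The step names are made up, and the per-step properties, which are not covered here, are omitted:

```ini
; Hypothetical all.ini -- each section title is "type - name"
[ExecuteSql - createTables]
; creation/deletion step properties go here

[CSV2DB - loadCustomerData]
; CSV loading step properties go here
```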

Notes

  • A section's id should be unique. If several sections share the same id, only the content of the last one is taken into account; however, it is executed at the position of the first one.
  • Steps must specify non-empty values for their properties.
  • If a property is missing from a step, Data Migrator applies its default value instead. Property values can also be set through environment variables.

How it works

Data Migrator establishes connections to databases based on the parameters defined in the database configuration step.
It then performs every enabled action step:

  • Creation or deletion: Executes SQL scripts from the step's input directory.
  • Conversion: Transforms data from the input directory and saves results to the output directory of the step.
  • Loading:
    • The tool loads the data found in the step's input directory into the database.
    • It determines which tables to fill from the table list in sqlModel.json if that file exists; otherwise, the table names are collected from the SQL files.

Under the designated data folder (dataFolder), the data files for each table must be stored in a subfolder named after the table.

(Screenshot: data-folder.png — example data folder layout)
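For example, to load data into two tables named CUSTOMER and ORDERS (table and file names are illustrative), the layout can be prepared like this:

```shell
# Illustrative layout: each table gets a subfolder under the data folder,
# named exactly after the table, holding that table's data files.
mkdir -p dataFolder/CUSTOMER dataFolder/ORDERS
touch dataFolder/CUSTOMER/customer.csv dataFolder/ORDERS/orders.csv
find dataFolder -type f | sort
```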

Return code

The Data Migrator migration process ends with one of the following status codes:

  Code   Description
  0      Success
  1      Failure
  2      Configuration invalid
  3      Database creation error
  4      QDDS conversion error
  5      Data migration error
  6      Database post process error
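In automation, the exit code can be used to branch on the outcome. A sketch using a stub in place of the real binary (which is not assumed to be installed here); exit code 2 simulates an invalid configuration:

```shell
# Stub standing in for the BluageVelocityDataMigrator invocation;
# replace run_migrator with the real command in an actual script.
run_migrator() { return 2; }

rc=0
run_migrator || rc=$?
case "$rc" in
  0) echo "Success" ;;
  2) echo "Configuration invalid" ;;
  *) echo "Migration error (code $rc)" ;;
esac
```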

Prerequisites

  • Java 17 needs to be installed.
  • You need to have access to the Data Migrator S3 bucket. If you don’t have access yet, you can request it via Blu Insights Toolbox.
  • You need minimal IAM policies on your account to download Data Migrator from the S3 bucket. If you don't have such a policy, create one with the information below.
{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Action": [
              "s3:ListBucket"
          ],
          "Resource": [
              "arn:aws:s3:::toolbox-data-migrator"
          ]
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:GetObject"
          ],
          "Resource": [
              "arn:aws:s3:::toolbox-data-migrator/*"
          ]
      }
  ]
}
  • Each empty database must first be created manually.
  • You need to have Docker installed in your environment to run Data Migrator with Docker.

Run with binaries

Installation guide

Toolbox buckets are replicated on the us-east-1 and us-east-2 regions. To use these replicated buckets, append the region to the bucket name: e.g. s3://toolbox-data-migrator-us-east-1. Make sure to adapt your user or role policy accordingly.

  1. Check that the AWS credentials are configured with the AWS account used in your request made in Blu Insights Toolbox.
  2. Download the latest Data Migrator archive using the command aws s3 cp --recursive s3://toolbox-data-migrator/latest LOCAL_PATH
  3. Unpack the archive present in LOCAL_PATH.
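The three steps above can be sketched as a script. LOCAL_PATH is illustrative, and the download is guarded so the sketch degrades gracefully when the AWS CLI is absent or unconfigured:

```shell
# Sketch of the installation steps above (LOCAL_PATH is illustrative).
LOCAL_PATH=./data-migrator-dist
mkdir -p "$LOCAL_PATH"

if command -v aws >/dev/null 2>&1; then
  # Step 2: download the latest release from the Toolbox bucket.
  aws s3 cp --recursive s3://toolbox-data-migrator/latest "$LOCAL_PATH" ||
    echo "download failed -- check credentials and bucket access"
  # Step 3: unpack any downloaded archive.
  for archive in "$LOCAL_PATH"/*.zip; do
    if [ -e "$archive" ]; then unzip -o "$archive" -d "$LOCAL_PATH"; fi
  done
else
  echo "AWS CLI not found -- install and configure it first (step 1)"
fi
```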

How to launch it

To launch the steps migration from the command line, open your favorite shell and type:

BluageVelocityDataMigrator.exe -root [migrationProjectPath] -configurationIni [configurationFilePath];[stepsConfigurationFilePath]

Parameters

  • migrationProjectPath: Absolute path to your migration project.
  • configurationFilePath: Absolute or relative path to your migration configuration file.
  • stepsConfigurationFilePath: Absolute or relative path to your migration steps configuration file.

Relative paths in the command are resolved against the specified root directory.

Options

The following options can be used when launching the migration using the command line:

  • help : Display the help; it includes the options below.
  • root : Root directory for the relative paths in the command line. Usually the path of your migration/reverse project.
  • configurationIni : Load the configuration from .ini files, separated by semicolons.

Run with Docker

Build the docker image

  • Download the latest version of Data Migrator using the command aws s3 cp --recursive s3://toolbox-data-migrator/latest LOCAL_PATH
  • Unpack the Linux edition of the archive (BluageVelocityDataMigrator-linux-xxxxxxxxxxxxxx.zip) present in LOCAL_PATH
  • Move to the unzipped archive folder
  • Run the following commands to prepare the docker image:

     Windows

    xcopy lib .\data-migrator\lib\ /E /I
    copy BluageVelocityDataMigrator.jar .\data-migrator
    copy eula_velocity_february_2020.txt .\data-migrator

    Linux/Mac

    mkdir data-migrator
    cp -r lib ./data-migrator/lib
    cp BluageVelocityDataMigrator.jar ./data-migrator
    cp eula_velocity_february_2020.txt ./data-migrator
  • Run the command to build the Docker image:

    docker build -t data-migrator .

How to launch it

To launch the steps migration with the Docker image, open your favorite shell and type:

docker run --rm -v [migrationProjectPath]:/home data-migrator -root /home -configurationIni [configurationFilePath];[stepsConfigurationFilePath]

Parameters

  • migrationProjectPath: Absolute path to your migration project.
  • configurationFilePath: Relative path to your migration configuration file.
  • stepsConfigurationFilePath: Relative path to your migration steps configuration file.
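A concrete invocation might look like the following (the project path and .ini file names are illustrative). Note that the semicolon-separated .ini list should be quoted so the shell does not treat the semicolon as a command separator; the call is guarded so the sketch degrades gracefully when Docker or the image is unavailable:

```shell
# Illustrative paths; the project is mounted at /home inside the container,
# so both .ini paths are given relative to the project root.
PROJECT=/opt/projects/migration-demo
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v "$PROJECT":/home data-migrator \
    -root /home -configurationIni "config/config.ini;config/all.ini" ||
    echo "docker run failed (has the data-migrator image been built?)"
else
  echo "docker not found -- install Docker first"
fi
```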