Using Tools in Aginity Pro

We are adding tools to Aginity Pro to make you more efficient at your job. One example is the Data Import wizard, which is currently available for Redshift.

Using the Data Import Wizard

The Data Import wizard is a tool that helps you take a local data file, upload it to a temporary location on the database server (whether in the cloud or local), and then load the data into a table.
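
Conceptually, the wizard automates a two-phase pattern: stage the file in a temporary location, then bulk-load it into a table. The following is a minimal sketch of that shape, not the wizard's actual implementation; the file, bucket, and key names are placeholders.

    import boto3

    # Phase 1: stage the local file in a temporary cloud location
    # (file, bucket, and key names here are placeholders).
    boto3.client("s3").upload_file("sales.csv", "my-staging-bucket", "uploads/sales.csv")

    # Phase 2: the database bulk-loads from the staged copy into a table,
    # e.g. via Redshift's COPY command (sketched in full under Step 6 below).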

Note

As of December 2019, only uploads to Redshift are supported. Support for other database platforms will be added over the next few releases.

To start the data importer, go to the Tools menu as shown below.

../../../_images/ag_tools_start.png

Choosing a File - Step 1

When you start the wizard, you will be presented with a six-step process in which you identify a file to upload, describe the behavior you want, and then load the data into a table. On the first screen, the following activities are performed.

  1. Pick the database connection you will upload data to.
  2. Add a local file using the Add CSV file button.
  3. Choose a delimiter used to separate fields in the file.
  4. If you select the With header check box, the parser assumes the first row in the file is a header; otherwise, it treats the first row as data.
  5. You can click Update preview to show the first few rows in the file (a sketch of the equivalent parsing options follows below).
../../../_images/ag_tools_import_page1.png
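
These parsing choices behave like the options of a typical CSV reader. As a rough illustration only, here is a hypothetical pandas equivalent of the delimiter, header, and preview settings; the file name and delimiter are placeholders.

    import pandas as pd

    df = pd.read_csv(
        "sales.csv",
        sep=";",        # 3. the delimiter that separates fields
        header=0,       # 4. With header checked: row 1 supplies column names
        # header=None,  #    With header unchecked: row 1 is treated as data
    )
    print(df.head())    # 5. comparable to Update preview: show the first few rows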

When complete, choose Next.

On the next screen you will provide security credentials specific to the database you are uploading to. Refer to the sections below for specific information on each supported platform.

Redshift Data Upload - Step 2

When working with Amazon Redshift, the file you upload is first moved to an S3 bucket of your choosing before being loaded into a Redshift table. You must provide the following items.

  1. Your Access Key
  2. Your Secret Key
  3. The Region Endpoint
  4. Once you have entered the first three, you can choose the S3 bucket used to store the data (see the sketch after this list).
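
In API terms, these three values are what an S3 client needs in order to connect and list the buckets available for staging. A hedged boto3 sketch follows; every credential value below is a placeholder, and you should never hard-code real keys in source files.

    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",      # 1. your Access Key (placeholder)
        aws_secret_access_key="...",      # 2. your Secret Key (placeholder)
        region_name="us-east-1",          # 3. the region endpoint
    )
    # 4. With valid credentials, the bucket list can be populated:
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])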

The second section of the Redshift screen lets you control the behavior of the file you upload to S3.

  1. If you select the check box labeled Leave the intermediate CSV file on the cloud after importing data, a copy of the file will remain on S3.
  2. You can enter a filename of your choice for the file.
  3. If you want the file compressed in S3, you can choose from the NONE, ZIP, or GZIP options.
  4. You can control what happens if the file already exists: either overwrite it or abort the process (see the sketch after this list).
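
These options correspond to common S3 upload patterns. Here is a hypothetical sketch of the GZIP and abort-if-exists behaviors; the file, bucket, and key names are placeholders, and the wizard's actual logic may differ.

    import gzip
    import shutil

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket, key = "my-staging-bucket", "uploads/sales.csv.gz"  # 2. your chosen filename

    # 3. GZIP option: compress the local file before staging it.
    with open("sales.csv", "rb") as src, gzip.open("sales.csv.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    # 4. Overwrite-or-abort: check whether the key already exists on S3.
    try:
        s3.head_object(Bucket=bucket, Key=key)
        raise SystemExit(f"s3://{bucket}/{key} already exists; aborting")
    except ClientError:
        pass  # key not found, so it is safe to upload
    s3.upload_file("sales.csv.gz", bucket, key)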

See also

Please refer to the AWS Bucket Naming conventions article for proper naming standards.

../../../_images/ag_tools_import_page2.png

Choosing Columns and Rows - Step 3

In Step 3, you can set a numeric limit on both the number of columns to parse and the number of rows to load. This step is valuable if you have very wide and/or very deep files and want to work with fewer columns or rows.

Hint

With very large files, it is helpful to load just the first 100 rows to ensure everything works, rather than letting a full load run for an hour only to find out it failed (the sketch below shows the equivalent trial load).

../../../_images/ag_tools_import_page3.png
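
The column and row limits behave like the slicing options of a CSV reader. As a hypothetical pandas equivalent of the trial load suggested above (the file name is a placeholder):

    import pandas as pd

    # Trial load: first 10 columns and first 100 rows of a wide file.
    df = pd.read_csv(
        "wide_file.csv",
        usecols=range(10),  # parse only the first 10 columns
        nrows=100,          # load only the first 100 rows
    )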

Choosing Columns to Include - Step 4

This step may seem redundant with the column limit in the prior step, but it lets you see more about the columns. Say you have 200 columns and you know you only want the first 10: in Step 3 you choose columns 1 to 10, and in Step 4 you can preview them and determine the order in which to load them, as the sketch below illustrates.

../../../_images/ag_tools_import_page4.png
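
As a loose analogy (a hypothetical pandas sketch with placeholder column names), selecting columns and setting their order are separate operations, which is why this step exists alongside the limit in Step 3:

    import pandas as pd

    # usecols selects columns but does not reorder them;
    # an explicit selection afterwards sets the output order.
    df = pd.read_csv("sales.csv", usecols=["region", "order_date", "amount"])
    df = df[["amount", "region", "order_date"]]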

Configuring Output Columns - Step 5

The next step lets you control each output column's name, data type, and nullability.

../../../_images/ag_tools_import_page5.png
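
These settings map naturally onto the column definitions of a table's DDL. A hedged sketch of how name, type, and nullability combine; the column specs below are placeholders, not DDL generated by the wizard.

    # Hypothetical column specs: (name, data type, nullable).
    columns = [
        ("order_id",   "INTEGER",       False),
        ("order_date", "DATE",          True),
        ("amount",     "DECIMAL(10,2)", True),
    ]
    defs = ",\n  ".join(
        f"{name} {dtype}{'' if nullable else ' NOT NULL'}"
        for name, dtype, nullable in columns
    )
    print(f"CREATE TABLE public.sales (\n  {defs}\n);")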

Setting Output Details and Loading - Step 6

In Step 6 you supply the database, schema, and table name you will load to, along with other behaviors to be followed during the load.

../../../_images/ag_tools_import_page6.png
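
Behind this final screen, the load into Redshift amounts to a COPY from the staged S3 file into the fully qualified target table. A hedged psycopg2 sketch follows; the connection details, table, bucket, and credentials are placeholders, and the wizard may issue different statements.

    import psycopg2

    conn = psycopg2.connect(host="example.redshift.amazonaws.com", port=5439,
                            dbname="analytics", user="loader", password="secret")
    with conn, conn.cursor() as cur:
        # Optional pre-load behavior, e.g. replacing existing rows:
        # cur.execute("TRUNCATE public.sales;")
        cur.execute("""
            COPY public.sales
            FROM 's3://my-staging-bucket/uploads/sales.csv'
            CREDENTIALS 'aws_access_key_id=AKIA...;aws_secret_access_key=...'
            CSV IGNOREHEADER 1;
        """)  # add GZIP to the COPY options if the staged file was compressed
    conn.close()

Staging on S3 first matches Redshift's recommended bulk-load path, COPY, which is far faster than issuing row-by-row INSERT statements.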