Posts Tagged ‘shell’

Run Data Pipelines Taskrunner jar forever on Existing Resources

Written by mannem. Posted in Data Pipelines

When you run the Data Pipelines TaskRunner on your own resources, as described in the AWS documentation, it does not exit back to the shell: it keeps running in the foreground and ties up your terminal.

To detach the TaskRunner from the terminal, use nohup:

nohup java -jar /home/ec2-user/TaskRunner.jar --config /home/ec2-user/credentials.json --workerGroup=group-emr --region=us-west-2 --logUri=s3://mannem.ne/foldername &

An alternative is to use screen/tmux/byobu, which keeps the shell session running independently of the terminal.
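For example, with screen, something like the following should work (the session name taskrunner is just an assumption):

screen -dmS taskrunner java -jar /home/ec2-user/TaskRunner.jar --config /home/ec2-user/credentials.json --workerGroup=group-emr --region=us-west-2 --logUri=s3://mannem.ne/foldername

You can reattach later with screen -r taskrunner to check on the TaskRunner.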


Now, if you want to run this TaskRunner every time your machine boots, the approach depends on the OS distribution of your machine.

On Amazon Linux, which is based on Red Hat (RHEL):

Create a script like taskrunner-bootup

Put your script in /etc/init.d/, owned by root and executable. At the top of the script, include a chkconfig directive. Such a script can start the TaskRunner Java application as ec2-user; a sketch is included after the commands below. As root, you can then use chkconfig to enable or disable the script at startup:

chkconfig --list taskrunner-bootup
chkconfig --add taskrunner-bootup
and you can use service taskrunner-bootup start/stop.
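A minimal sketch of such an init script (the runlevels in the chkconfig directive, the log path, and the stop logic are assumptions you may want to adjust):

#!/bin/bash
# chkconfig: 2345 99 10
# description: Starts the Data Pipelines TaskRunner at boot

case "$1" in
  start)
    # run the TaskRunner as ec2-user, detached from the boot shell
    su - ec2-user -c "nohup java -jar /home/ec2-user/TaskRunner.jar --config /home/ec2-user/credentials.json --workerGroup=group-emr --region=us-west-2 --logUri=s3://mannem.ne/foldername > /home/ec2-user/taskrunner.log 2>&1 &"
    ;;
  stop)
    # stop any running TaskRunner process
    pkill -f TaskRunner.jar
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac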

You can also use cloud-init if you wish.

If this is an AMI, you can use a user-data script, which only runs during the first boot.

Here are some solutions for other distributions:

Ubuntu : http://www.askubuntu.com/a/228313
Debian : http://www.cyberciti.biz/tips/linux-how-to-run-a-command-when-boots-up.html

Running complex queries on Redshift with Data-pipelines

Written by mannem. Posted in Data Pipelines, Redshift

Sometimes the AWS Data-Pipelines SqlActivity may not support complex queries. This is because SqlActivity passes the script to the JDBC executeStatement call (as a prepared statement), and the script is expected to be idempotent. So here is an alternative way to run psql/SQL commands using Data-Pipelines.

Suppose you have the following SQL command, which generates an INSERT statement:

select 'insert into event_data_' ||to_char(current_date,'yyyymm')|| '_v2 select * from stage_event_data_v2 where event_time::date >= '''||to_char(current_date, 'YYYY-MM')||'-01'' and event_time::date <= '''||last_day(current_date)||''';';

and it should output,

insert into event_data_201511_v2 select * from stage_event_data_v2 where event_time::date >= '2015-11-01' and event_time::date <= '2015-11-30';

This is a valid command in psql and can be executed successfully from SQL workbenches and the psql shell.

But executing the above command with Data-pipelines will throw an error:

ERROR processing query/statement. Error: Parsing failed

This is because the script appears to change when it is executed (it is not idempotent).

If you have complex Redshift commands and wish to perform operations against Redshift that involve custom logic, you can instead write a program in your favorite language and run it using ShellCommandActivity. This is a perfectly valid way of interacting with Redshift.

There are several ways to do this. I am including a shell script and its Data-pipeline template as a reference here.

Sample shell command:
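A minimal sketch of such a script, assuming Arg 1 is the SQL text to run and Arg 2 is the Redshift password (the cluster endpoint, port, database, user, and S3 destination are placeholders, and psql plus the AWS CLI are assumed to be installed on the resource):

#!/bin/bash
# $1 = SQL script/statement to execute, $2 = Redshift password
# (both are passed in from the pipeline via scriptArgument)
SQL_SCRIPT="$1"
export PGPASSWORD="$2"

# Run the statement with psql in tuples-only mode (-t) and save the
# generated SQL to v2.sql.
psql -h mycluster.xxxxxxxx.us-west-2.redshift.amazonaws.com -p 5439 \
     -U admin -d mydb -t -c "$SQL_SCRIPT" > v2.sql

# Upload the generated SQL to S3 so it can be executed later.
aws s3 cp v2.sql s3://mannem.ne/foldername/v2.sql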

Sample Data-pipelines template:
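And a minimal sketch of a pipeline definition that wires the script in (the S3 paths, instance type, roles, schedule type, and the myRedshiftSql parameter are assumptions; *myRedshiftPass matches the parameter id described below):

{
  "objects": [
    {
      "id": "Default",
      "name": "Default",
      "scheduleType": "ondemand",
      "failureAndRerunMode": "CASCADE",
      "role": "DataPipelineDefaultRole",
      "resourceRole": "DataPipelineDefaultResourceRole",
      "pipelineLogUri": "s3://mannem.ne/logs/"
    },
    {
      "id": "Ec2Instance",
      "name": "Ec2Instance",
      "type": "Ec2Resource",
      "instanceType": "t1.micro",
      "terminateAfter": "1 Hour"
    },
    {
      "id": "RunRedshiftScript",
      "name": "RunRedshiftScript",
      "type": "ShellCommandActivity",
      "runsOn": { "ref": "Ec2Instance" },
      "scriptUri": "s3://mannem.ne/scripts/redshift-psql.sh",
      "scriptArgument": [ "#{myRedshiftSql}", "#{*myRedshiftPass}" ]
    }
  ],
  "parameters": [
    { "id": "myRedshiftSql", "type": "String", "description": "SQL script to execute" },
    { "id": "*myRedshiftPass", "type": "String", "description": "Redshift password (hidden)" }
  ]
}

You would upload the shell script to the scriptUri location and activate the pipeline; the ShellCommandActivity then runs it on the Ec2Resource with the two arguments.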


-------------------------------------------------
Some info on the script and Data-pipeline:

1. The script file takes 2 arguments (Arg 1 is the SQL script that you need to execute, Arg 2 is the Redshift password). These arguments are provided to the Data-pipeline ShellCommandActivity object using the scriptArgument field.

2. The script writes its result to v2.sql (using the psql -t tuples-only option) and uploads it to an S3 bucket, so that you can run the generated script later.

3. The Data-pipeline template uses the *myRedshiftPass parameter id to hide the password in Data-Pipelines (a leading * marks the parameter as hidden).
-------------------------------------------------