How to install Trafodion on an existing and running CDH5.0 cluster, without affecting existing settings?
We are planning to install Trafodion on an already existing cluster. How do we do it?
The document gives the following instructions: "https:/
The document never says how to install on an existing CDH cluster; it only talks about a fresh installation of CDH and Trafodion.
Please provide a document on how to install Trafodion on an existing, running cluster.
Please provide it ASAP.
Regards
Mohan
Question information
- Language: English
- Status: Answered
- For: Trafodion
- Assignee: No assignee
#1
You should be able to start from step 10.
https:/
Could you please tell me what version of the installer you have?
Thanks!
#2
On the Cloudera installation page, there is a link at the bottom of the page, "Installing Trafodion" (step #23).
Please follow the instructions to install Trafodion on an existing, running cluster:
https:/
Please make sure to follow steps under "Preparing Your Cluster Environment" before you start installing Trafodion.
#3
I get the below error when I start from step 10.
trafodion_
installer-
[root@xxx xxx]# ./trafodion_mods --trafodion_build /opt/cloudera/
cat: /opt/cloudera/
cat: /opt/cloudera/
***ERROR: Please check input parameters
#4
You will need to execute the trafodion_setup script (step #7) prior to executing the trafodion_installer script.
Please follow the steps under "Preparing Your Cluster Environment" and start following the instructions from step #1.
#5
If I start from step 7 ...
If you are installing MapR, run the ./trafodion_setup script by providing the following parameters:
Again, it will go and install Cloudera Manager.
My question is: how do I install Trafodion on an existing and running CDH5.0 cluster?
Step #7: what will it do if Cloudera Manager is already up and running?
#6
Does Trafodion support the Cloudera 5.0.1 version?
#7
The trafodion_setup script does preliminary setup (creating the trafodion user, configuring the environment to support the trafodion_installer script, etc.). It does not install Cloudera Manager. Once the steps under "Preparing Your Cluster Environment" are followed and all *.tar.gz files are in place (steps #1 to 5), skip step #6 (as Cloudera is already up and running) and do the following:
- Execute trafodion_setup script (step#7 to 9)
- Execute trafodion_mods script (step#10)
- Execute trafodion_installer script (step#11 to 13)
Can you please let us know which version of HBase you have?
At this moment Trafodion supports HBase 0.94, and you can soon expect it to support 0.96 and 0.98.
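The sequence above can be sketched as a small shell wrapper. The script names and the `--nodes` and `--trafodion_build` flags are taken from this thread, but the node list and the build tarball path below are placeholders, and the real scripts may take additional parameters — treat this as a sketch of the order of operations, not a complete command line.

```shell
# Illustrative sketch of the post-"Preparing Your Cluster Environment"
# sequence (steps 7-13). Node names and the tarball path are placeholders.
NODES="hdpcdn02 hdpcdn03 hdpcdn04"
BUILD_TARBALL="/path/to/trafodion_build.tar.gz"   # placeholder

run_step() {
  # Run an installer script only if it is present in the current
  # directory, so a missing script stops the sequence with a clear message.
  script="$1"; shift
  if [ -x "./$script" ]; then
    "./$script" "$@"
  else
    echo "stop: ./$script not found or not executable"
    return 1
  fi
}

# Steps 7-9: preliminary setup (trafodion user, environment);
# per Amanda's answer, this does not touch an existing Cloudera Manager.
run_step trafodion_setup --nodes "$NODES" &&
# Step 10: apply Trafodion-specific Hadoop configuration mods.
run_step trafodion_mods --trafodion_build "$BUILD_TARBALL" &&
# Steps 11-13: install and start Trafodion and DCS.
run_step trafodion_installer || echo "sequence stopped"
```

Running each script through a guard like `run_step` also reproduces the symptom from #3 cleanly: if you skip step 7, the later scripts fail fast instead of half-running.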
#8
I have already completed steps 1 to 5 successfully.
But step 7 says, "If you are installing MapR, run the ./trafodion_setup script by providing the following parameters:"
Does that apply only to MapR? I understood that step 7 needs to be done only on MapR, not on Cloudera.
#9
You can use trafodion_setup regardless of the Hadoop distribution you are using.
Sorry for the confusion.
--Amanda
#10
We have already installed Cloudera Manager and other components like (HDFS, MR,
#11
trafodion_setup will not reinstall Cloudera.
--Amanda
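If you want to reassure yourself that trafodion_setup leaves Cloudera Manager alone, one option is to record the Cloudera Manager service state before and after the run and compare. This is a hypothetical check, not part of the Trafodion installer; it assumes the standard `cloudera-scm-server` init service name.

```shell
# Hypothetical sanity check: capture Cloudera Manager's service state
# before and after running trafodion_setup, and flag any difference.
cm_state() {
  # Prints "running" if the cloudera-scm-server service reports healthy,
  # otherwise "stopped-or-unknown" (including when CM is not installed).
  if service cloudera-scm-server status >/dev/null 2>&1; then
    echo running
  else
    echo stopped-or-unknown
  fi
}

before=$(cm_state)
# ... run ./trafodion_setup here ...
after=$(cm_state)
if [ "$before" = "$after" ]; then
  echo "Cloudera Manager state unchanged: $before"
else
  echo "WARNING: Cloudera Manager state changed from $before to $after"
fi
```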
#12
Hi, when I ran step 7 I got the below error:
./trafodion_setup --nodes "hdpcdn02 hdpcdn03 hdpcdn04" --home_dir /opt/cloudera/
su: user trafodion does not exist
su: user trafodion does not exist
su: user trafodion does not exist
su: user trafodion does not exist
chmod: cannot access `/opt/cloudera/
***INFO: creating .qpidports file
su: user trafodion does not exist
***INFO: creating .bashrc file
cp: cannot create regular file `/opt/cloudera/
chown: invalid user: `trafodion.
***INFO: creating sqconfig file
cp: cannot create regular file `/opt/cloudera/
chown: invalid user: `trafodion.
***INFO: Setting up userid trafodion on all other nodes in cluster
cp: cannot stat `/opt/cloudera/
chown: cannot access `/opt/cloudera/
pdcp@hdpcmt01: can't stat /opt/cloudera/
hdpcdn04: cp: cannot stat `/opt/cloudera/
pdsh@hdpcmt01: hdpcdn04: ssh exited with exit code 1
hdpcdn02: cp: cannot stat `/opt/cloudera/
pdsh@hdpcmt01: hdpcdn02: ssh exited with exit code 1
hdpcdn03: cp: cannot stat `/opt/cloudera/
pdsh@hdpcmt01: hdpcdn03: ssh exited with exit code 1
***INFO: Creating known_hosts file for all nodes
su: user trafodion does not exist
***ERROR: Unable to ssh to node hdpcdn02
***ERROR: Unable to create Trafodion userid: trafodion
#13
I am running step 7 from a different node, not from one of "hdpcdn02 hdpcdn03 hdpcdn04"; I am running it from the management node, which has ssh access to all of them. Step 7 also uses the script "traf_add_user", so it has limitations. I think we need to run step 7 from the first node of "hdpcdn02 hdpcdn03 hdpcdn04".
#14
I was able to complete step 7 successfully after including the name of the node I was running from.
But step 8 threw the below error:
Certificate/
*******
*******
Updating Authentication Configuration
*******
Creating folders for storing certificates
Copying the log4j and log4cpp configuration files
Copying the log4j and log4cpp configuration files
***INFO: copying /opt/cloudera/
***INFO: Start of DCS install
***INFO: untarring build file /opt/cloudera/
***INFO: modifying /opt/cloudera/
***INFO: modifying /opt/cloudera/
***INFO: creating /opt/cloudera/
***INFO: End of DCS install.
***INFO: copying install to all nodes
***INFO: starting Trafodion instance
Checking orphan processes.
Removing old mpijob* files from /opt/cloudera/
Removing old monitor.port* files from /opt/cloudera/
Executing sqipcrm (output to sqipcrm.out)
Starting the SQ Environment (Executing /opt/cloudera/
Background SQ Startup job (pid: 29634)
# of SQ processes: 18 .......
Error while executing the startup script!!!
Checking if processes are up.
Checking attempt: 1; user specified max: 1. Execution time in seconds: 0.
The SQ environment is not up at all. Check the logs.
Process Configured Actual Down
------- ---------- ------ ----
DTM 4 0 \$tm0 \$tm1 \$tm2 \$tm3
Please check the SQ shell log file : /opt/cloudera/
SQ Startup (from /opt/cloudera/
./opt/cloudera/
Starting the DCS environment now
starting master, logging to /opt/cloudera/
hdpcdn02: starting server, logging to /opt/cloudera/
hdpcdn03: starting server, logging to /opt/cloudera/
You can monitor the SQ shell log file : /opt/cloudera/
Startup time 0 hour(s) 10 minute(s) 49 second(s)
***ERROR: sqstart failed with RC=1. Check /opt/cloudera/
#15
Adding more errors from the file "/opt/cloudera/
[root@hdpcdn02 ~]# cat /opt/cloudera/
Processing cluster.conf on local host hdpcdn02
[SHELL] Shell/shell Version 1.0.1 Release 0.8.3 (Build release [0.8.3-
[SHELL] %
! Start the monitor processes across the cluster
startup
[SHELL] %startup
[SHELL] - Warning using shell.env
[$Z010P6Z] %
exit
[$Z010P6Z] %exit
Able to connect to the SQ monitor.
Continuing with the Startup...
Processing cluster.conf on local host hdpcdn02
[$Z010PAF] Shell/shell Version 1.0.1 Release 0.8.3 (Build release [0.8.3-
[$Z010PAF] %
set CLUSTERNAME=
[$Z010PAF] %set CLUSTERNAME=
[$Z010PAF] Configuration Change Notice for Group: CLUSTER Key: CLUSTERNAME
[$Z010PAF] %
set SQ_MBTYPE=64
[$Z010PAF] %set SQ_MBTYPE=64
[$Z010PAF] Configuration Change Notice for Group: CLUSTER Key: SQ_MBTYPE
[$Z010PAF] %
set MY_NODES= -w hdpcdn02 -w hdpcdn03 -w hdpcdn04 -w hdpcmt01
[$Z010PAF] %set MY_NODES= -w hdpcdn02 -w hdpcdn03 -w hdpcdn04 -w hdpcmt01
[$Z010PAF] Configuration Change Notice for Group: CLUSTER Key: MY_NODES
[$Z010PAF] %
exit
[$Z010PAF] %exit
SQ_START_SEAPILOT: 0
Processing cluster.conf on local host hdpcdn02
[$Z010PAI] Shell/shell Version 1.0.1 Release 0.8.3 (Build release [0.8.3-
[$Z010PAI] %
set SQ_SEAPILOT_
[$Z010PAI] %set SQ_SEAPILOT_
[$Z010PAI] %
[$Z010PAI] Configuration Change Notice for Group: CLUSTER Key: SQ_SEAPILOT_
exit
[$Z010PAI] %exit
Processing cluster.conf on local host hdpcdn02
[$Z010PAL] Shell/shell Version 1.0.1 Release 0.8.3 (Build release [0.8.3-
[$Z010PAL] %
! Start DTM
set DTM_RUN_MODE=2
[$Z010PAL] %set DTM_RUN_MODE=2
[$Z010PAL] Configuration Change Notice for Group: CLUSTER Key: DTM_RUN_MODE
set SQ_AUDITSVC_READY=1
[$Z010PAL] %set SQ_AUDITSVC_READY=1
[$Z010PAL] Configuration Change Notice for Group: CLUSTER Key: SQ_AUDITSVC_READY
set DTM_TLOG_PER_TM=1
[$Z010PAL] %set DTM_TLOG_PER_TM=1
[$Z010PAL] Configuration Change Notice for Group: CLUSTER Key: DTM_TLOG_PER_TM
[$Z010PAL] %
exit
[$Z010PAL] %exit
Processing cluster.conf on local host hdpcdn02
[$Z010PAW] Shell/shell Version 1.0.1 Release 0.8.3 (Build release [0.8.3-
set {process $tm0} TMASE=TLOG0
[$Z010PAW] %set {process $tm0} TMASE=TLOG0
exec {type dtm, nowait, name $tm0, nid 0, out stdout_dtm_0} tm
[$Z010PAW] %exec {type dtm, nowait, name $tm0, nid 0, out stdout_dtm_0} tm
[$Z010PAW] NewProcess failed to spawn, error=Error in spawn call
[$Z010PAW] Process $TM0 terminated normally. Nid=0, Pid=31786
delay 5
[$Z010PAW] %delay 5
exit
[$Z010PAW] %exit
Processing cluster.conf on local host hdpcdn02
[$Z010PGZ] Shell/shell Version 1.0.1 Release 0.8.3 (Build release [0.8.3-
set {process $tm1} TMASE=TLOG1
[$Z010PGZ] %set {process $tm1} TMASE=TLOG1
exec {type dtm, nowait, name $tm1, nid 1, out stdout_dtm_1} tm
[$Z010PGZ] %exec {type dtm, nowait, name $tm1, nid 1, out stdout_dtm_1} tm
[$Z010PGZ] NewProcess failed to spawn, error=Error in spawn call
exit
[$Z010PGZ] %exit
[$Z010PGZ] Process $TM1 terminated normally. Nid=1, Pid=29996
Processing cluster.conf on local host hdpcdn02
[$Z010PHS] Shell/shell Version 1.0.1 Release 0.8.3 (Build release [0.8.3-
set {process $tm2} TMASE=TLOG2
[$Z010PHS] %set {process $tm2} TMASE=TLOG2
exec {type dtm, nowait, name $tm2, nid 2, out stdout_dtm_2} tm
[$Z010PHS] %exec {type dtm, nowait, name $tm2, nid 2, out stdout_dtm_2} tm
[$Z010PHS] NewProcess failed to spawn, error=Error in spawn call
[$Z010PHS] Process $TM2 terminated normally. Nid=2, Pid=28446
exit
[$Z010PHS] %exit
Processing cluster.conf on local host hdpcdn02
[$Z010PI3] Shell/shell Version 1.0.1 Release 0.8.3 (Build release [0.8.3-
set {process $tm3} TMASE=TLOG3
[$Z010PI3] %set {process $tm3} TMASE=TLOG3
exec {type dtm, nowait, name $tm3, nid 3, out stdout_dtm_3} tm
[$Z010PI3] %exec {type dtm, nowait, name $tm3, nid 3, out stdout_dtm_3} tm
[$Z010PI3] NewProcess failed to spawn, error=Error in spawn call
[$Z010PI3] Process $TM3 terminated normally. Nid=3, Pid=26083
delay 5
[$Z010PI3] %delay 5
exit
[$Z010PI3] %exit
Checking if processes are up.
Checking attempt: 60; user specified max: 60. Execution time in seconds: 592.
Process Configured Actual Down
------- ---------- ------ ----
DTM 4 0 \$tm0 \$tm1 \$tm2 \$tm3
The dtm process(es) are Not Ready yet. Stopping further startup (if any).
Error while executing the startup script!!!
Please check the SQ shell log file : /opt/cloudera/
SQ Startup (from /opt/cloudera/
#16
Yes, this script expects to be run on the first node.
--Amanda
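The failure in #12 can be caught early with a quick guard before invoking trafodion_setup. This helper is hypothetical (not part of the installer); the node names are the ones from this thread, and it assumes the first entry of the `--nodes` list is the node the script must run on, as confirmed above.

```shell
# Hypothetical guard: trafodion_setup should run on the first node of the
# --nodes list (see #13 and #16); warn if the current host is not that node.
NODES="hdpcdn02 hdpcdn03 hdpcdn04"

first_node() {
  # Print the first hostname of a space-separated node list.
  set -- $1
  echo "$1"
}

if [ "$(hostname -s)" = "$(first_node "$NODES")" ]; then
  echo "OK: running on first node $(first_node "$NODES")"
else
  echo "WARN: run trafodion_setup from $(first_node "$NODES"), not $(hostname -s)"
fi
```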