The following components are needed to use the adapter for Hadoop/Hive/Impala:
The location of Java must be specified in an environment variable.
If you are using Linux, add a line to your profile that specifies the directory where Java is installed. For example, if you have the JDK installed:
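A minimal sketch of such a profile line, assuming the JDK is installed under /usr/java/jdk1.7.0 (substitute the path for your installation):

```shell
# Add to ~/.profile (or ~/.bash_profile).
# /usr/java/jdk1.7.0 is a hypothetical install location.
export JAVA_HOME=/usr/java/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH
```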
If you are using Windows, right-click Computer and select Properties. Then select Advanced System Settings and click Environment Variables. Add the locations to your PATH variable. For example:
C:\Program Files\Java\jdk7\bin\server;C:\Program Files\Java\jdk7\bin;
The location of the Hadoop and Hive jar files must be specified to the server. If the server runs on the same system as the Hadoop and Hive servers, you can point to their installed locations directly. If the server runs on a different system, copy the files listed below to a location on that system and specify that location.
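If the files must be copied from the Hadoop/Hive system, a transfer along these lines would work; the host name and directories below are placeholders, not values from this guide:

```shell
# Copy the required jar files from the Hadoop/Hive host to this server.
# "hivehost", /opt/hive, /opt/hadoop, and /usr/local/ibi/jars are all
# hypothetical; use the paths from your own installation.
scp hivehost:/opt/hive/lib/*.jar /usr/local/ibi/jars/
scp hivehost:/opt/hadoop/share/hadoop/common/*.jar /usr/local/ibi/jars/
```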
This can be done in the system CLASSPATH or in the DataMigrator or WebFOCUS Reporting Server IBI_CLASSPATH variable as follows:
From the Data Management Console, expand the Workspace folder, right-click Java Services, and select Properties.
The Java Services Configuration page opens.
In the IBI_CLASSPATH box, enter the full location of the Hive and Hadoop files shown below, where hive_home is the directory where Hive is installed and hadoop_home is the directory where Hadoop is installed. Type the paths explicitly; environment variables such as $HIVE_HOME cannot be used. Enter the file names one per line.
If you are installing the adapter on a different system than where Hadoop and Hive are installed, copy the jar files to a location on that system.
Note: For a server running on Windows, use Windows syntax for directory names. For example:
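For instance, an IBI_CLASSPATH entry on Windows might look like the following; the install directories and the Hadoop jar version are assumptions, so match them to your own system:

```
C:\hive\lib\hive-jdbc-0.14.0-standalone.jar
C:\hadoop\share\hadoop\common\hadoop-common-2.6.0.jar
```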
For Hive or Impala:
Use the version numbers that match your installation. In the following example, the version number for Hive is 0.14.0.
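A sketch of the corresponding IBI_CLASSPATH entries, using the hive_home and hadoop_home placeholders defined above; the exact jar names and the Hadoop version shown are assumptions and vary by release:

```
hive_home/lib/hive-jdbc-0.14.0-standalone.jar
hadoop_home/share/hadoop/common/hadoop-common-2.6.0.jar
```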