Channel: Hortonworks » All Topics

Oozie Script fails


Replies: 0

Message [JA009: Call to hdp230-3/39.7.48.3:50030 failed on local exception: java.io.EOFException]
org.apache.oozie.action.ActionExecutorException: JA009: Call to hdp230-3/39.7.48.3:50030 failed on local exception: java.io.EOFException
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:412)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:392)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:762)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:913)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:211)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:59)
at


Display junk Unicode characters via Openquery and Hive ODBC


Replies: 0

Hi,

I have a problem when creating a linked server using the Hortonworks Hive ODBC Driver (Win v1.3.19) and then running an OPENQUERY to query data in Hive. For non-Unicode characters this approach works fine, but if the Hive table contains Unicode characters, SQL Server displays junk characters in their place.

Does anyone have an idea how to resolve it?

Here are the linked server creation and the OPENQUERY…

EXEC master.dbo.sp_addlinkedserver @server = N'[link server name]', @srvproduct=N'Hive', @provider=N'MSDASQL', @datasrc=N'[ODBC Data Sources]', @provstr=N'DSN=[hive server name];DefaultStringColumnLength=8000;DESCRIPTION=;Driver=Hortonworks Hive ODBC Driver;FastSQLPrepare=0;FixUnquotedDefaultSchemaNameInQuery=1;HiveServerType=2;Host=[hive server name];HS2AuthMech=2;Port=10001;RowsFetchedPerBlock=10000;Schema=default;UseNativeQuery=0;UserName=hive;'

SELECT * FROM OpenQuery([link server name], 'Select UnicodeColumn1,UnicodeColumn2 FROM [default].[Hive_Table]')

Thanks!

Cannot create table


Replies: 0

1) I have installed Oracle VirtualBox, and on it I installed the Hortonworks Sandbox 2.0.
2) I logged in as hue (the default user) and have not changed any configuration settings…
3) I have loaded a file and now want to create a table from that file, but
I get the following error when I click on Create Table:

HCatClient error on create table: {"statement":"use default; create table nyse row format delimited fields terminated by ' ';","error":"unable to create table: nyse","exec":{"stdout":"","stderr":"which: no /usr/lib/hadoop/bin/hadoop in ((null))\ndirname: missing operand\nTry `dirname --help' for more information.\n Command was terminated due to timeout(60000ms). See templeton.exec.timeout property","exitcode":143}} (error 500)

Please help me..

I am not even able to create a table with the "create table manually" option.
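
For reference, a minimal HiveQL sketch of the statement the wizard is generating (taken from the error above), alongside a hand-written variant that declares columns; the column names and types below are purely hypothetical placeholders:

-- The statement the wizard generates, as reported in the error (it declares no columns):
--   use default; create table nyse row format delimited fields terminated by ' ';

-- A hand-written version would normally list the columns, e.g. (hypothetical names/types):
use default;
create table nyse (
  exchange_name string,
  stock_symbol  string,
  trade_date    string,
  closing_price float
)
row format delimited fields terminated by ' ';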

Is there an IDE for the Hadoop components on Windows similar to Hue on Linux?


Replies: 0

As of HDP 2.0.6 for Windows, there currently isn't an IDE for the various Hadoop components that require scripting. Hortonworks is aware of the need for a GUI for monitoring as well as development, and it is on the roadmap. Currently there is no set date for when these new features will be released. Please continue to visit Hortonworks.com to get the latest information on releases and features.

Hortonworks 2.1 URL not accessible from host browser


Replies: 1

OK, I have now exhausted all the actions I can take to make the URL accessible from the host computer's browser. I am using VirtualBox to launch the technical preview of 2.1, and the machine appears to initialize normally, because at the end I am given a URL to type into my host computer's browser. When I access that URL (192.168.56.102) from the browser, no application shows up. The problem occurs on both my laptop and my desktop. I can also ping the machine by IP, and it shows the server is live. Here is my network configuration:

Adapter 1 – Host-only adapter with Promiscuous Mode set to Allow All
Adapter 2 – NAT

I also checked that there is a VirtualBox Host-only Ethernet adapter under Options -> Network and that DHCP is enabled (which is why I guess the machine is getting an IP).

Any suggestions on what I should check? I guess there is a proxy or firewall issue; if so, please let me know how to resolve it.

Shahzad

AccessControlException: getBlockLocalPathInfo() authorization Error


Replies: 0

I've run this as "user" and "hbase" but I still can't get it to load data into an HBase table.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /user/user/hbase_text.hfile table2
sudo -u hbase hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /user/hbase/hbase_text.hfile table2

HBase spits out a short-circuit access error:
14/04/18 21:28:36 WARN hdfs.DFSClient: Short circuit access failed
org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Can't continue with getBlockLocalPathInfo() authorization. The user hbase is not allowed to call getBlockLocalPathInfo

The guide requires the HBase user provided by $HBASE_USER to be in the dfs.block.local-path-access.user key/value pair in /etc/hadoop/conf/hdfs-site.xml:

http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.4/bk_using_Ambari_book/content/ambari-chap5-3-17.html
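
For reference, a minimal sketch of what that entry looks like as a property block in hdfs-site.xml (the value is a comma-separated list of whichever users your setup needs):

<property>
  <name>dfs.block.local-path-access.user</name>
  <value>hbase</value>
</property>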

However, neither this, nor adding my own user as a comma-separated list ("hbase,user"), nor running the process as the hbase/hive user helps.
Using HDP 1.3.2 on CentOS 6.5.

Ideas?

Change IP addresses on nodes of HDP Cluster (HDP 1.3.3)


Replies: 0

We will be moving our HDP cluster to a new location soon. This will require a change in the static IP addresses of the nodes in the cluster. We used Ambari for our installation. I couldn't find good information on the migration steps needed in such a scenario. It would be helpful if someone could point me to the steps needed for a smooth migration.

Unable to submit MapReduce job to YARN via Java client


Replies: 0

Hi All,
I'm trying to submit a MapReduce job from a Java client running on Windows 7 to the Hortonworks Sandbox.

Driver code:
public static void main(String[] args) throws Exception
{
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("jackie_leslie");
    ugi.doAs(new PrivilegedExceptionAction<Object>() {
        String[] jobArgs;

        @Override
        public Object run() throws Exception {
            JobWrapper mr = new JobWrapper();
            int exitCode = ToolRunner.run(mr, jobArgs);
            System.exit(exitCode);
            return mr;
        }

        // Stash the command-line args on the anonymous action before running it
        private PrivilegedExceptionAction init(String[] myArgs)
        {
            this.jobArgs = myArgs;
            return this;
        }
    }.init(args)
    );
}

Job setup (the run() method of JobWrapper):

@Override
public int run(String[] args) throws Exception
{
    Configuration config = getConf();
    config.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");
    config.set("mapreduce.framework.name", "yarn");
    config.set("yarn.resourcemanager.address", "localhost:8050"); // 8025 or 8032?
    config.set("hadoop.job.ugi", "jackie_leslie");

    @SuppressWarnings("deprecation")
    Job job = new Job(config);
    job.setJobName("upload");
    job.setJarByClass(MyKVMapper.class);
    job.setMapperClass(MyKVMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    boolean success = job.waitForCompletion(true);
    return success ? 0 : 1;
}

Client stack trace:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hadoop-yarn/staging/jackie_leslie/.staging/job_1397842310782_0004/job.split could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2503)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582) etc.

Also, from the logs:
2014-04-18 13:59:25,603 WARN security.UserGroupInformation (UserGroupInformation.java:getGroupNames(1355)) - No groups available for user jackie_leslie

Why is this job not being submitted to my data node?

Thanks.


Windows 2012 STD R2 Server – Flume Errors


Replies: 1

Hello:
I am getting the following error message, and it repeats in a loop. I would appreciate it if someone who has already fixed this could share the solution. I am using a spoolDir agent source. The syslog file is sitting in the ingest folder, and I also see a data.<Number>.seq file in HDFS.

09 Apr 2014 14:57:51,469 ERROR [pool-9-thread-1] (org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run:173) – Uncaught exception in Runnable
java.lang.IllegalStateException: Serializer has been closed
at org.apache.flume.serialization.LineDeserializer.ensureOpen(LineDeserializer.java:124)
at org.apache.flume.serialization.LineDeserializer.readEvents(LineDeserializer.java:88)
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:221)
at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:160)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)

Software Versions
HDP Version : 2.0
OS : Windows 2012 Standard Server R2

Flume Configuration File:
# Name the components on this agent
agent.sources = WinHadoopC1Source
agent.sinks = WinHadoopC1Sink1
agent.channels = WinHadoopC1Channel1

# Describe/configure the source
agent.sources.WinHadoopC1Source.type=spooldir
agent.sources.WinHadoopC1Source.spoolDir = C:/flume_spooldir
agent.sources.WinHadoopC1Source.fileHeader = true

# Describe the sink
agent.sinks.WinHadoopC1Sink1.type=hdfs
agent.sinks.WinHadoopC1Sink1.hdfs.path = hdfs://WIN-ATKSGSRL5DL/logspooldir
agent.sinks.WinHadoopC1Sink1.hdfs.rollSize=1024000
agent.sinks.WinHadoopC1Sink1.hdfs.fileType = SequenceFile
agent.sinks.WinHadoopC1Sink1.hdfs.filePrefix = data
agent.sinks.WinHadoopC1Sink1.hdfs.fileSuffix = .seq
agent.sinks.WinHadoopC1Sink1.hdfs.idleTimeout=60

# Use a channel which buffers events in memory
agent.channels.WinHadoopC1Channel1.type = memory
agent.channels.WinHadoopC1Channel1.capacity = 100000
agent.channels.WinHadoopC1Channel1.transactionCapacity = 10000

# Bind the source and sink to the channel
agent.sources.WinHadoopC1Source.channels = WinHadoopC1Channel1
agent.sinks.WinHadoopC1Sink1.channel = WinHadoopC1Channel1
Thanks, Satya Raju

Issue in beeswax: NoClassDefFoundError: org/apache/hadoop/mapred/JobConf


Replies: 0

Hello
I have installed HDP based on the Hortonworks-provided PDF bk_installing_manually_book-20140210.pdf. Everything is OK and has been checked with the various tests explained in this PDF.
But I have not succeeded in making Hue work. I still get java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf when the Beeswax server tries to start.
Any idea?
PS: I am using Hadoop 2.2.0.

No FileSystem for scheme: hdfs


Replies: 0

Hi,
I have a test program that uses the FileSystem API to CRUD files on HDFS. This works well with HDP 1.3, but when I use it with HDP 2.0 (Sandbox), I get the exception "java.io.IOException: No FileSystem for scheme: hdfs". Since there is no hadoop-core.jar anymore, I have tried different combinations of jars to make it work. Currently I have the following (and some related dependencies not listed here), but I can't make it work yet:
hadoop-annotations-2.2.0.jar
hadoop-mapreduce-client-core-2.2.0.jar
hadoop-yarn-common-2.2.0.jar
hadoop-auth-2.2.0.jar
hadoop-yarn-api-2.2.0.jar
hadoop-yarn-server-common-2.2.0.jar
hadoop-common-2.2.0.jar
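
For context, a minimal sketch of the kind of FileSystem CRUD test described above, written against the Hadoop 2.2 API; the NameNode URI and path are placeholders (note that the hdfs:// scheme implementation, DistributedFileSystem, ships in the hadoop-hdfs jar):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCrudTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode URI for the Sandbox
        FileSystem fs = FileSystem.get(URI.create("hdfs://sandbox.hortonworks.com:8020"), conf);

        Path path = new Path("/tmp/crud-test.txt");      // placeholder path
        FSDataOutputStream out = fs.create(path, true);  // create (overwrite if present)
        out.writeUTF("hello hdfs");
        out.close();

        System.out.println("exists: " + fs.exists(path)); // read back metadata
        fs.delete(path, false);                           // delete the test file
        fs.close();
    }
}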

Thanks in advance for any help.

Best,
Param

java.io.IOException: No FileSystem for scheme: hdfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)

Bulk read from HBase table


Replies: 0

Hi,

We need to read about 100 million rows from an HBase table.
The mapper takes 2 minutes to read 1 lakh (100,000) rows on a single machine.
We are using TableMapReduceUtil for reading from the HBase table.
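
For context, a rough sketch of how such a read job is typically wired up with TableMapReduceUtil on HBase 0.96/Hadoop 2; the table name, mapper output types, and scan caching value are assumptions, not the actual job:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class BulkReadJob {

    // Emits just the row key; stands in for whatever the real mapper does with each row
    static class RowKeyMapper extends TableMapper<Text, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(row.get()), NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-bulk-read");
        job.setJarByClass(BulkReadJob.class);

        Scan scan = new Scan();
        scan.setCaching(1000);      // rows fetched per RPC; a common knob for scan-heavy MR jobs
        scan.setCacheBlocks(false); // avoid filling the region server block cache from a full scan

        TableMapReduceUtil.initTableMapperJob(
                "my_table", scan, RowKeyMapper.class,   // "my_table" is a placeholder
                Text.class, NullWritable.class, job);
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class); // this sketch discards the output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}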

Could you please help us tune this job?

Thanks,
SambaShiva

Permission Error on HDFS User folder


Replies: 2

I am using the NameNode web page to browse the file system. Clicking the "Browse the filesystem" link brings up a page with a table listing the current directory in HDFS. I cannot browse the user directory; I get a Permission Denied error, specifically:
Permission denied: user=gopher, access=READ_EXECUTE, inode="/user":hadoop:hdfs:drwx------

I have tried to chmod the user directory; however, I get a permission denied error when trying to alter the permissions.
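
For reference, a rough sketch of the commands involved (the path comes from the error above). Note that HDFS only allows the owner of a path or the HDFS superuser to change its mode, so running the same command as a different user produces exactly this kind of permission denied error:

# List the root to confirm the ownership reported in the error (/user is hadoop:hdfs, mode 700)
hadoop fs -ls /
# A mode change like this only succeeds when run as the owner or the HDFS superuser
hadoop fs -chmod 755 /user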

Is there something I missed in the configuration? The Server is running, so this is a post-installation issue. Thanks.

Hadoop Development


Replies: 1

Hi,
Given that Eclipse is not available, do you know where I can find resources to develop MapReduce applications on HDP?
In particular, I am looking for the client libraries for HDP development and ways to set them up and test them.
Regards,
Sreekant

Compiling MapReduce jobs from the command line


Replies: 1

Hello, everyone. I recently installed HDP 2.0 for Windows on my computer, and it passed the smoke-test example. I am trying to compile my own MapReduce program via the command line. I used: javac -classpath c:\hdp\hadoop-2.2.0.2.0.6.0-0009\hadoop-2.2.0.2.0.6.0-0009-core.jar wordcountclass WordCount.java; however, it doesn't work. I found there is actually no hadoop-2.2.0.2.0.6.0-0009-core.jar under my c:\hdp\hadoop-2.2.0.2.0.6.0-0009 folder. I would like to know how to compile a MapReduce program with HDP 2.0 for Windows; I am not sure which jar files I need to put on the classpath. Could you please help me? Thank you very much!
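
For illustration, a rough sketch of one way to compile against the Hadoop 2.x jars shipped with HDP for Windows, by letting the hadoop command report its own classpath; the output folder, class name, and input/output paths are assumptions:

:: Run inside a .cmd script: capture the classpath the hadoop command itself uses
for /f "delims=" %%A in ('hadoop classpath') do set HADOOP_CP=%%A

:: Compile into a wordcountclass output folder and package it
mkdir wordcountclass
javac -classpath "%HADOOP_CP%" -d wordcountclass WordCount.java
jar cvf wordcount.jar -C wordcountclass .

:: Submit to the cluster
hadoop jar wordcount.jar WordCount /input /output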


S3 bucket for HDFS


Replies: 2

Hi,

I need to add an S3 bucket for HDFS.
While launching the cluster, I add the following property (key, value) to hdfs-site.xml through the browser:
fs.namenode.name.dir
s3://KEY:SECRET@MYBUCKET
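
For reference, that key/value pair expressed as a property block in hdfs-site.xml (the access key, secret, and bucket name are placeholders taken from the description above):

<property>
  <name>fs.namenode.name.dir</name>
  <value>s3://KEY:SECRET@MYBUCKET</value>
</property>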

But the installation fails afterwards (at the DataNode step). Please advise.

Thanks

Installation fails with Windows Server 2012


Replies: 3

Hi, I have the following problem installing Hadoop on Windows Server 2012. This is the message: 'Installation failed. Please see installation log for details.'
This is the relevant part of the log:

HADOOP: Giving user/group "Users" full permissions to "c:\hadoop\logs\hadoop"
HADOOP: icacls "c:\hadoop\logs\hadoop" /grant Users:(OI)(CI)F
Se procesaron correctamente 0 archivos; error al procesar 1 archivos [0 files were processed successfully; 1 file failed]
HADOOP-CMD FAILURE: Users: No se efectuó ninguna asignación entre los nombres de cuenta y los identificadores de seguridad. [No mapping between account names and security IDs was done.]
HADOOP FAILURE: Command "Se procesaron correctamente 0 archivos; error al procesar 1 archivos Users: No se efectuó ninguna asignación entre los nombres de cuenta y los identificadores de seguridad." failed with exit code 1332
En C:\HadoopInstallFiles\HadoopPackages\hdp-2.0.6.0-winpkg\resources\hadoop-2.2.0.2.0.6.0-0009.winpkg\resources\Winpkg.Utils.psm1: 109 Carácter: 9
+ throw "Command "$out" failed with exit code $LastExitCode "
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thanks!!

HBase in Sandbox


Replies: 15

Hello,
It seems that HBase does not work "out of the Sandbox".

I have installed the latest Sandbox and successfully completed all tutorials, so basically HDFS and Hive are OK in the sandbox installation.
I am able to work with HBase in the shell. The only minor problem was the following warning:


hbase shell
status
14/01/28 04:47:12 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.96.0.2.0.6.0-76-hadoop2, re6d7a56f72914d01e55c0478d74e5cfd3778f231, Thu Oct 17 18:15:20 PDT 2013

hbase(main):001:0> status
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

This is not a big issue, but it demonstrates some lack of testing…

================================
Later I found the following show-stopper: I am not able to connect to HBase from a Java program:
client.HConnectionManager$HConnectionImplementation: ZooKeeper available but no active master location found

It is also not possible to connect with a browser to the Master server (http://127.0.0.1:60010) or the Region server (http://127.0.0.1:60030).
As far as I can see, there is an issue with the HBase master configuration, and as I am very new to Hadoop I am not able to fix it.
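
For context, a minimal sketch of the kind of Java client connection being attempted, assuming the HBase 0.96 client API and that the Sandbox hostname resolves from the client machine; the hostname, table name, and row key are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class SandboxHBaseClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // The client only needs the ZooKeeper quorum; the active master is discovered through ZK
        conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        HConnection connection = HConnectionManager.createConnection(conf);
        try {
            HTableInterface table = connection.getTable("some_table");     // placeholder table
            Result result = table.get(new Get(Bytes.toBytes("some_row"))); // placeholder row key
            System.out.println("Row is " + (result.isEmpty() ? "absent" : "present"));
            table.close();
        } finally {
            connection.close();
        }
    }
}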

Could you please give an advice or fix the Sandbox?

PS
By the way, in the files 'hbase-site' and 'zoo.cfg' I can see 'sandbox.hortonworks.com'.
Are you sure it will work? Shouldn't it be 'localhost'?

Windows Server 2012 install fail – SRSetRestorePoint API failed


Replies: 0

This seems to be a different error from what I see reported by others.
Is there a way to exclude this as a parameter when running:
msiexec /lv c:\hdplog.txt /i "C:\Users\Administrator\Downloads\hdp-2.0.6.0.winpkg.msi"

I have validated the java_home and path variables many, many times.
MSI (s) (1C:A4) [06:47:39:787]: Machine policy value 'DisableUserInstalls' is 0
MSI (s) (1C:A4) [06:47:39:790]: Note: 1: 2203 2: C:\Windows\Installer\inprogressinstallinfo.ipi 3: -2147287038
MSI (s) (1C:A4) [06:47:39:791]: Machine policy value 'LimitSystemRestoreCheckpointing' is 0
MSI (s) (1C:A4) [06:47:39:791]: Note: 1: 1715 2: Hortonworks Data Platform 2.0.6.0 for Windows
MSI (s) (1C:A4) [06:47:39:791]: Note: 1: 2262 2: Error 3: -2147287038
MSI (s) (1C:A4) [06:47:39:791]: Calling SRSetRestorePoint API. dwRestorePtType: 0, dwEventType: 102, llSequenceNumber: 0, szDescription: "Installed Hortonworks Data Platform 2.0.6.0 for Windows".
MSI (s) (1C:A4) [06:47:39:791]: The call to SRSetRestorePoint API failed. Returned status: 0. GetLastError() returned: 127
MSI (s) (1C:A4) [06:47:39:791]: Note: 1: 1402 2: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer 3: 2
MSI (s) (1C:A4) [06:47:39:792]: File will have security applied from OpCode.
MSI (s) (1C:A4) [06:47:42:564]: SOFTWARE RESTRICTION POLICY: Verifying package --> 'C:\users\administrator\downloads\hdp-2.0.6-GA\hdp-2.0.6.0.winpkg.msi' against software restriction policy
MSI (s) (1C:A4) [06:47:42:564]: SOFTWARE RESTRICTION POLICY: C:\users\administrator\downloads\hdp-2.0.6-GA\hdp-2.0.6.0.winpkg.msi has a digital signature
MSI (s) (1C:A4) [06:47:48:689]: SOFTWARE RESTRICTION POLICY: C:\users\administrator\downloads\hdp-2.0.6-GA\hdp-2.0.6.0.winpkg.msi is permitted to run at the 'unrestricted' authorization level.
MSI (s) (1C:A4) [06:47:48:689]: MSCOREE not loaded loading copy from system32
MSI (s) (1C:A4) [06:47:48:693]: End dialog not enabled

Importing data from Teradata to Hive


Replies: 4

Hello All,

I am importing data from Teradata 14.0 using the Hortonworks Connector for Teradata (hdp-connector-for-teradata-1.1.1.2.0.6.1-101-distro).

Hadoop distro: Apache Hadoop 2.3.0, and also Hadoop 2.1.0.2.0.5.0-67
Hive Version: 0.12 and 0.11
Sqoop version: 1.4.4

I am able to import the Teradata tables to HDFS but CANNOT import them into Hive tables.
I need help regarding the compatibility of the software versions that I am using.

Here are the commands that I am using with the Hortonworks Connector for Teradata (hdp-connector-for-teradata-1.1.1.2.0.6.1-101-distro):

Command #1
rock@rock-Vostro-3560 ~/hadoop/sqoop-1.4.4.bin__hadoop-1.0.0 $ bin/sqoop import -Dteradata.db.input.job.type=hive -Dteradata.db.input.target.table=checkin -Dteradata.db.input.target.table.schema="first_name string, last_name string, email string, passport string, checkin_date string, checkin_time string, time_zone string, boarding_pass_id string" --verbose --connect jdbc:teradata://192.168.199.129/Database=airlinesuser --connection-manager org.apache.sqoop.teradata.TeradataConnManager --username airlinesuser --password airlinesuser --table checkin
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
14/04/08 22:18:22 DEBUG tool.BaseSqoopTool: Enabled debug logging.
14/04/08 22:18:22 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/04/08 22:18:23 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
14/04/08 22:18:23 INFO manager.SqlManager: Using default fetchSize of 1000
14/04/08 22:18:23 INFO tool.CodeGenTool: The connection manager declares that it self manages mapping between records & fields and rows & columns. No class will will be generated.
14/04/08 22:18:23 INFO teradata.TeradataConnManager: Importing from Teradata Table:checkin
14/04/08 22:18:23 INFO teradata.TeradataSqoopImportJob: Configuring import options
14/04/08 22:18:23 INFO teradata.TeradataSqoopImportJob: Setting input file format in TeradataConfiguration to textfile
14/04/08 22:18:23 INFO teradata.TeradataSqoopImportJob: Table name to import checkin
14/04/08 22:18:23 INFO teradata.TeradataSqoopImportJob: Setting job type in TeradataConfiguration to hdfs
14/04/08 22:18:23 INFO teradata.TeradataSqoopImportJob: Setting input file format in TeradataConfiguration to textfile
14/04/08 22:18:23 INFO teradata.TeradataSqoopImportJob: Setting input separator in TeradataConfiguration to \u002c
14/04/08 22:18:23 ERROR tool.ImportTool: Imported Failed: Can not create a Path from an empty string

How do I set these:
export HADOOP_CLASSPATH=$(hcat -classpath)
export LIB_JARS=$(echo ${HADOOP_CLASSPATH} | sed -e 's/::*/,/g')
Can I somehow export these manually?
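
For illustration, a minimal sketch of exporting those variables by hand in the same shell session before invoking Sqoop (this assumes hcat is on the PATH, as the snippet above implies):

# Build the HCatalog classpath and derive a comma-separated jar list for -libjars
export HADOOP_CLASSPATH=$(hcat -classpath)
export LIB_JARS=$(echo ${HADOOP_CLASSPATH} | sed -e 's/::*/,/g')

# Then run the import from the same shell, passing the jars along, e.g.
bin/sqoop import -libjars ${LIB_JARS} ...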

Thanks
-Nirmal
