
S3 for HDFS

Hello fair internet people

I have a fully functioning 5-node Ambari cluster set up in AWS.
I am now trying to follow https://wiki.apache.org/hadoop/AmazonS3 to replace HDFS with S3.

In my Ambari setup, I clicked on HDFS and then the Configs tab.
In the Advanced section I found the property
fs.defaultFS
and changed it from hdfs://ip-xx-xx-xx-xx.compute.internal:8020
to s3://bucket-name/.
Then I added
fs.s3.awsAccessKeyId
and
fs.s3.awsSecretAccessKey
with their values to the hdfs-site.xml section.

I presume this is essentially adding all these relevant values into the hdfs-site.xml config file on the server.
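
For reference, here is a sketch of what I expect those properties to look like in the generated XML. The bucket name and the key values below are placeholders, and the property names are the ones from the wiki page above:

<!-- Default filesystem, switched from hdfs:// to s3:// (placeholder bucket) -->
<property>
  <name>fs.defaultFS</name>
  <value>s3://bucket-name/</value>
</property>
<!-- S3 credentials (placeholder values) -->
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>YOUR_SECRET_ACCESS_KEY</value>
</property>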

So when I restart the services, the NameNode fails to start with the following error:

safemode: FileSystem s3://bucket-name is not an HDFS file system
Usage: java DFSAdmin [-safemode enter | leave | get | wait]
2014-06-30 09:56:25,604 - Retrying after 10 seconds. Reason: Execution of 'su - hdfs -c 'hadoop dfsadmin -safemode get' | grep 'Safe mode is OFF'' returned 1. DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.
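
In case it helps, the check Ambari is running can be reproduced by hand. The command is taken from the log above; I am assuming the non-deprecated hdfs form behaves the same way:

# Run as the hdfs user; succeeds only when fs.defaultFS points at HDFS
su - hdfs -c 'hdfs dfsadmin -safemode get'
# With fs.defaultFS set to s3://bucket-name/ it exits non-zero with:
# safemode: FileSystem s3://bucket-name is not an HDFS file system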

Do I have to configure something else somewhere to get Hadoop to work with S3, or to get Ambari specifically to use it?

Thanks in advance
Brian

