HDFS is designed for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency.
Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files.
HDFS organises its data into files and directories. It provides a command line interface called the FS shell that lets users interact with data in HDFS and manage the Hadoop cluster.
This article provides a quick, handy reference to the most commonly used hadoop fs commands for managing files on a Hadoop cluster. The syntax of the commands is similar to bash and csh.
Transfer a file from the local file system to HDFS
hadoop fs -put <local_src> <hdfs_dest>
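For example, assuming a hypothetical local file sales.csv and a hypothetical HDFS directory /user/data (these names are reused throughout the examples below):
hadoop fs -put sales.csv /user/data/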
Transfer a file from HDFS to the local file system
hadoop fs -get <hdfs_src> <local_dest>
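For example, to pull the hypothetical /user/data/sales.csv back into the current local directory:
hadoop fs -get /user/data/sales.csv .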
Copy a file within HDFS
hadoop fs -cp <hdfs_src> <hdfs_dest>
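For example, to copy the same hypothetical file into a hypothetical archive directory:
hadoop fs -cp /user/data/sales.csv /user/archive/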
List the files in an HDFS directory
hadoop fs -ls <hdfs_path>
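For example:
hadoop fs -ls /user/data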
Display the contents of a file in HDFS
hadoop fs -cat <file_name>
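For example, to print the hypothetical file to the terminal:
hadoop fs -cat /user/data/sales.csv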
Remove a particular file from HDFS
hadoop fs -rm <file_name>
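For example:
hadoop fs -rm /user/data/sales.csv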
Remove a directory and its contents from HDFS
hadoop fs -rmr <directory_name>
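For example, to delete the hypothetical /user/archive directory and everything under it:
hadoop fs -rmr /user/archive
Newer Hadoop releases deprecate -rmr in favour of hadoop fs -rm -r.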
Display the file size
hadoop fs -du <file_name>
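For example:
hadoop fs -du /user/data/sales.csv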
Display the total size of a directory
hadoop fs -dus <directory_name>
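For example:
hadoop fs -dus /user/data
Newer Hadoop releases deprecate -dus in favour of hadoop fs -du -s.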
Create a new directory in HDFS
hadoop fs -mkdir <directory_name>
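For example, to create a hypothetical subdirectory for the year 2024:
hadoop fs -mkdir /user/data/2024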
Recursively list a directory and all of its contents in HDFS
hadoop fs -lsr <directory_name>
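For example:
hadoop fs -lsr /user/data
Newer Hadoop releases deprecate -lsr in favour of hadoop fs -ls -R.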
Display the last kilobyte of a file
hadoop fs -tail <file_name>
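For example:
hadoop fs -tail /user/data/sales.csv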
Check the health of files and blocks in HDFS
hadoop fsck <path>
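For example, to check every file under the HDFS root:
hadoop fsck /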
Transfer a file from the local file system to HDFS (similar to -put)
hadoop fs -copyFromLocal <local_src> <hdfs_dest>
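For example:
hadoop fs -copyFromLocal sales.csv /user/data/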
Transfer a file from HDFS to the local file system (similar to -get)
hadoop fs -copyToLocal <hdfs_src> <local_dest>
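For example:
hadoop fs -copyToLocal /user/data/sales.csv .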