Parallel map processing of LZO files in Hadoop

2019-03-28 13:29 | Source: Web

After LZO is enabled on a Hadoop cluster, some additional configuration is still needed before the cluster can run parallel map tasks over a single LZO file and thereby speed up job execution.

First, create an index for each LZO file. The following command indexes all the LZO files under a directory:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.LzoIndexer /log/source/cd/

Creating the index takes some time: for one of my files, 7.5 GB in size, it took about 2 minutes 30 seconds. There is a second indexer class, com.hadoop.compression.lzo.DistributedLzoIndexer; both options are described at https://github.com/kevinweil/hadoop-lzo (the project README is quoted below). LzoIndexer indexes in-process, while DistributedLzoIndexer runs the indexing as a MapReduce job. In my tests the distributed indexer reduced the indexing time and made no difference to how the resulting MapReduce jobs ran:

$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.DistributedLzoIndexer /log/source/cd/    

Next, tables in Hive must be created with an explicit INPUTFORMAT and OUTPUTFORMAT; otherwise the cluster still cannot run parallel map tasks over the LZO data. Add the following clause to the CREATE TABLE statement (for an existing table, the same formats can be applied with ALTER TABLE ... SET FILEFORMAT):

STORED AS
INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";

With these two steps in place, the improvement in Hive execution speed is substantial: in our test, a moderately complex Hive query over the same 7.5 GB LZO file took 34 seconds with the configuration above, versus 180 seconds without it.

README.md
Hadoop-LZO
Hadoop-LZO is a project to bring splittable LZO compression to Hadoop. LZO is an ideal compression format for Hadoop due to its combination of speed and compression size. However, LZO files are not natively splittable, meaning the parallelism that is the core of Hadoop is gone. This project re-enables that parallelism with LZO compressed files, and also comes with standard utilities (input/output streams, etc) for working with LZO files.

Origins
This project builds off the great work done at http://code.google.com/p/hadoop-gpl-compression. As of issue 41, the differences in this codebase are the following.

- It fixes a few bugs in hadoop-gpl-compression -- notably, it allows the decompressor to read small or uncompressible lzo files, and also fixes the compressor to follow the lzo standard when compressing small or uncompressible chunks. It also fixes a number of inconsistently caught and thrown exception cases that can occur when the lzo writer gets killed mid-stream, plus some other smaller issues (see commit log).
- It adds the ability to work with Hadoop streaming via the com.hadoop.mapred.DeprecatedLzoTextInputFormat class.
- It adds an easier way to index lzo files (com.hadoop.compression.lzo.LzoIndexer).
- It adds an even easier way to index lzo files, in a distributed manner (com.hadoop.compression.lzo.DistributedLzoIndexer).
Hadoop and LZO, Together at Last
LZO is a wonderful compression scheme to use with Hadoop because it's incredibly fast, and (with a bit of work) it's splittable. Gzip is decently fast, but cannot take advantage of Hadoop's natural map splits because it's impossible to start decompressing a gzip stream starting at a random offset in the file. LZO's block format makes it possible to start decompressing at certain specific offsets of the file -- those that start new LZO block boundaries. In addition to providing LZO decompression support, these classes provide an in-process indexer (com.hadoop.compression.lzo.LzoIndexer) and a map-reduce style indexer which will read a set of LZO files and output the offsets of LZO block boundaries that occur near the natural Hadoop block boundaries. This enables a large LZO file to be split into multiple mappers and processed in parallel. Because it is compressed, less data is read off disk, minimizing the number of IOPS required. And LZO decompression is so fast that the CPU stays ahead of the disk read, so there is no performance impact from having to decompress data as it's read off disk.

Building and Configuring
To get started, see http://code.google.com/p/hadoop-gpl-compression/wiki/FAQ. This project is built exactly the same way; please follow the answer to "How do I configure Hadoop to use these classes?" on that page.

You can read more about Hadoop, LZO, and how we're using it at Twitter at http://www.cloudera.com/blog/2009/11/17/hadoop-at-twitter-part-1-splittable-lzo-compression/.

Once the libs are built and installed, you may want to add them to the class paths and library paths. That is, in hadoop-env.sh, set

    export HADOOP_CLASSPATH=/path/to/your/hadoop-lzo-lib.jar
    export JAVA_LIBRARY_PATH=/path/to/hadoop-lzo-native-libs:/path/to/standard-hadoop-native-libs
Note that there seems to be a bug in /path/to/hadoop/bin/hadoop; comment out the line

    JAVA_LIBRARY_PATH=''
because it prevents Hadoop from keeping the alteration you made to JAVA_LIBRARY_PATH above. (Update: see https://issues.apache.org/jira/browse/HADOOP-6453). Make sure you restart your jobtrackers and tasktrackers after uploading and changing configs so that they take effect.

Using Hadoop and LZO
Reading and Writing LZO Data
The project provides LzoInputStream and LzoOutputStream wrapping regular streams, to allow you to easily read and write compressed LZO data.
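
As a concrete illustration, here is a minimal sketch of a compress-and-decompress round trip through the standard Hadoop CompressionCodec API, using hadoop-lzo's LzopCodec (the codec behind the streams mentioned above). The class name LzoRoundTrip and the in-memory buffer setup are illustrative assumptions, and the native LZO library must be installed, since the codec loads it via JNI:

    import java.io.BufferedReader;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStreamReader;
    import java.io.OutputStream;

    import org.apache.hadoop.conf.Configuration;
    import com.hadoop.compression.lzo.LzopCodec;

    public class LzoRoundTrip {
        public static void main(String[] args) throws Exception {
            LzopCodec codec = new LzopCodec();
            // The codec needs a Configuration to locate the native LZO library.
            codec.setConf(new Configuration());

            // Compress a small string into an in-memory lzop-format buffer.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (OutputStream out = codec.createOutputStream(buf)) {
                out.write("hello, lzo\n".getBytes("UTF-8"));
            }

            // Decompress it back through the matching input stream.
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    codec.createInputStream(new ByteArrayInputStream(buf.toByteArray())), "UTF-8"))) {
                System.out.println(in.readLine()); // prints: hello, lzo
            }
        }
    }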

Indexing LZO Files
At this point, you should also be able to use the indexer to index lzo files in Hadoop (recall: this makes them splittable, so that they can be analyzed in parallel in a mapreduce job). Imagine that big_file.lzo is a 1 GB LZO file. You have two options:

index it in-process via:

hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.LzoIndexer big_file.lzo
index it in a map-reduce job via:

hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.DistributedLzoIndexer big_file.lzo
Either way, after 10-20 seconds there will be a file named big_file.lzo.index. The newly-created index file tells the LzoTextInputFormat's getSplits function how to break the LZO file into splits that can be decompressed and processed in parallel. Alternatively, if you specify a directory instead of a filename, both indexers will recursively walk the directory structure looking for .lzo files, indexing any that do not already have corresponding .lzo.index files.
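
The same in-process indexing can also be driven from Java rather than the command line. A minimal sketch, assuming the LzoIndexer(Configuration) constructor and index(Path) method as they appear in the hadoop-lzo source:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import com.hadoop.compression.lzo.LzoIndexer;

    public class IndexLzoFiles {
        public static void main(String[] args) throws Exception {
            // Does the same work as the command-line invocation above: writes a
            // .lzo.index file next to each .lzo file that doesn't already have one.
            LzoIndexer indexer = new LzoIndexer(new Configuration());
            indexer.index(new Path(args[0])); // a single .lzo file or a directory
        }
    }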

Running MR Jobs over Indexed Files
Now run any job, say wordcount, over the new file. In Java-based M/R jobs, just replace any uses of TextInputFormat by LzoTextInputFormat. In streaming jobs, add "-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat" (streaming still uses the old APIs, and needs a class that inherits from org.apache.hadoop.mapred.InputFormat). For Pig jobs, email me or check the pig list -- I have custom LZO loader classes that work but are not (yet) contributed back.
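
For a Java-based job the change really is one line in the driver. Below is a hedged wordcount sketch: everything is the stock new-API wordcount, except that the input format is LzoTextInputFormat; the class and argument names are illustrative.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    import com.hadoop.mapreduce.LzoTextInputFormat;

    public class LzoWordCount {

        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    ctx.write(word, ONE);
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : vals) sum += v.get();
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "lzo wordcount");
            job.setJarByClass(LzoWordCount.class);

            // The one LZO-specific change: LzoTextInputFormat instead of TextInputFormat.
            // With big_file.lzo.index present, getSplits() yields multiple splits per
            // file, so one large LZO file is processed by many mappers in parallel.
            job.setInputFormatClass(LzoTextInputFormat.class);

            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Packaged and run like any other job (e.g. hadoop jar your.jar LzoWordCount /path/to/lzo/input /path/to/output), an indexed input file fans out across many mappers, while an unindexed one falls back to a single split, as noted below.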

Note that if you forget to index an .lzo file, the job will work but will process the entire file in a single split, which will be less efficient.
