1. Start the metastore service
<code>./hive --service metastore &</code>
2. Create the table
Create a single-column table to store each line of foobar.txt as a sentence.
<code>create table tbl_line(line string) row format delimited fields terminated by '\n';</code>
3. Load the data
Load the file into the Hive table. (`echo -e` is needed so that `\n` is interpreted as a newline; note the trailing space after "MapReduce", which matters later.)
<code>echo -e "Hadoop Common\nHadoop Distributed File System\nHadoop YARN\nHadoop MapReduce " > /tmp/foobar.txt</code>
<code>hive> load data local inpath '/tmp/foobar.txt' into table tbl_line;</code>
The loaded data is stored in Hadoop under the /data/hive/warehouse/test.db directory, where /data/hive/warehouse is the hive.metastore.warehouse.dir value configured in hive-site.xml, test is the database name, and tbl_line is the table name.
4. HQL
Following the MapReduce approach, we need to split each sentence into individual words and then aggregate the words.
The split(string, delimiter) function splits a string and returns an array; the explode(array) function expands each element of the array into its own row.
<code>hive> select split("hello world", " ") from tbl_line limit 1;
OK
["hello","world"]
hive> select * from tbl_line;
OK
Hadoop Common
Hadoop Distributed File System
Hadoop YARN
Hadoop MapReduce
# Split each sentence into an array of words
hive> select split(line, " ") from tbl_line;
OK
["Hadoop","Common"]
["Hadoop","Distributed","File","System"]
["Hadoop","YARN"]
["Hadoop","MapReduce",""]
hive> select explode(split(line, " ")) from tbl_line;
OK
Hadoop
Common
Hadoop
Distributed
File
System
Hadoop
YARN
Hadoop
MapReduce
</code>
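For intuition, the split-then-explode step can be mimicked outside Hive. A minimal Python sketch (the sentence list is hard-coded here to match foobar.txt; Hive does this distributed, not in memory):

```python
# Sentences matching /tmp/foobar.txt; note the trailing space on the last
# line, which produces an empty-string element, just as in the Hive output.
lines = [
    "Hadoop Common",
    "Hadoop Distributed File System",
    "Hadoop YARN",
    "Hadoop MapReduce ",
]

# split(line, " "): one word array per row
arrays = [line.split(" ") for line in lines]

# explode(...): one row per array element
words = [word for arr in arrays for word in arr]

print(arrays[3])   # ['Hadoop', 'MapReduce', '']
print(words[:4])   # ['Hadoop', 'Common', 'Hadoop', 'Distributed']
```

This also shows why trimming or filtering empty strings may be worth adding before counting.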
<code># Create a word table
hive> create table tbl_word(word string);
# Split each sentence into words and insert them into the word table
hive> insert into table tbl_word select explode(split(line, " ")) as word from tbl_line;
hive> select * from tbl_word;
OK
Hadoop
Common
Hadoop
Distributed
File
System
Hadoop
YARN
Hadoop
MapReduce
hive> select word, count(*) as count from tbl_word group by word order by count desc;
</code>
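The final group-by query is just a word count. For comparison, the same aggregation in plain Python using collections.Counter (the word list is hard-coded to mirror tbl_word; this is only a sketch of the logic, not how Hive executes it):

```python
from collections import Counter

# Contents of tbl_word after the insert ... select explode(...) step
words = ["Hadoop", "Common", "Hadoop", "Distributed", "File",
         "System", "Hadoop", "YARN", "Hadoop", "MapReduce"]

# group by word, count(*), order by count desc
counts = Counter(words).most_common()
print(counts[0])  # ('Hadoop', 4) -- every other word appears once
```

In Hive, the group by becomes the shuffle/reduce phase of the underlying MapReduce job.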