2017-02-09

Every time I run an insert in Hive SQL, a new file is created. How can I limit the number of files produced by an insert?

I'm afraid that having too many files in HDFS will eventually break it.

hive> insert into table bi_st.st_usr_member_active_day 
    > select * from bi_temp.zjy_ini_st_usr_member_active_day_temp88; 
Query ID = root_20170209100404_5acdd3bf-071d-4178-aeff-b40d16499aac 
Total jobs = 1 
Launching Job 1 out of 1 
Number of reduce tasks determined at compile time: 2 
In order to change the average load for a reducer (in bytes): 
    set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
    set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
    set mapreduce.job.reduces=<number> 
Starting Job = job_1484675879577_4078, Tracking URL = http://hadoopmaster:8088/proxy/application_1484675879577_4078/ 
Kill Command = /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop job -kill job_1484675879577_4078 
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 2 
2017-02-09 10:04:41,247 Stage-1 map = 0%, reduce = 0% 
2017-02-09 10:04:47,425 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.17 sec 
2017-02-09 10:04:53,598 Stage-1 map = 100%, reduce = 50%, Cumulative CPU 3.02 sec 
2017-02-09 10:04:57,727 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.81 sec 
MapReduce Total cumulative CPU time: 4 seconds 810 msec 
Ended Job = job_1484675879577_4078 
Loading data to table bi_st.st_usr_member_active_day 
Table bi_st.st_usr_member_active_day stats: [numFiles=8, numRows=548, totalSize=31267, rawDataSize=0] 
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1 Reduce: 2 Cumulative CPU: 4.81 sec HDFS Read: 56745 HDFS Write: 10220 SUCCESS 
Total MapReduce CPU Time Spent: 4 seconds 810 msec 
OK 
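Each reducer writes its own output file, which is why this job (2 reducers) added more files, and repeated inserts accumulate them (the table stats already show numFiles=8). A minimal sketch of two standard ways to keep the file count down, using the reducer setting the log itself suggests plus Hive's small-file merge options (the size values below are illustrative assumptions to tune for your cluster):

```sql
-- Option 1: force a single reducer so the insert writes one file
-- (from the hint printed in the job log above):
SET mapreduce.job.reduces=1;

-- Option 2: let Hive merge small output files after the job finishes
-- (standard hive.merge.* settings; sizes here are example values):
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
SET hive.merge.smallfiles.avgsize=16000000;  -- trigger a merge if avg file size is below ~16 MB
SET hive.merge.size.per.task=256000000;      -- target size of each merged file, ~256 MB

INSERT INTO TABLE bi_st.st_usr_member_active_day
SELECT * FROM bi_temp.zjy_ini_st_usr_member_active_day_temp88;
```

To compact files that are already in the table, an `INSERT OVERWRITE TABLE ... SELECT * FROM` the same table with these settings active can rewrite the data into fewer files.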

Answer
